WorldWideScience

Sample records for modeling analyses showed

  1. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is presented. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
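
    To make the Boolean viewshed idea concrete, here is a minimal sketch of line-of-sight analysis along a single terrain profile using a running-horizon test; the elevations and observer height are invented, and this illustrates the general technique rather than the article's implementation.

    ```python
    import numpy as np

    def boolean_viewshed(profile, observer_height=1.6):
        """Visibility of each cell of a terrain profile from cell 0."""
        eye = profile[0] + observer_height              # observer eye elevation
        dist = np.arange(1, len(profile))               # distance to each target cell
        angle = np.arctan2(profile[1:] - eye, dist)     # elevation angle to each cell
        horizon = np.maximum.accumulate(angle)          # running local horizon
        visible = angle >= horizon                      # visible iff it reaches the horizon
        # Extended viewsheds keep quantities like (angle - horizon)
        # instead of collapsing them to a Boolean.
        return np.concatenate(([True], visible))

    profile = np.array([100.0, 101.5, 103.0, 102.0, 105.0, 104.0, 110.0])
    print(boolean_viewshed(profile))
    ```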

  2. Advanced Behavioral Analyses Show that the Presence of Food Causes Subtle Changes in C. elegans Movement.

    Science.gov (United States)

    Angstman, Nicholas B; Frank, Hans-Georg; Schmitz, Christoph

    2016-01-01

    As a widely used and studied model organism, Caenorhabditis elegans worms offer the ability to investigate the implications of behavioral change. Although investigation of C. elegans behavioral traits has been demonstrated, analysis is often narrowed down to measurements based on a single point, and thus cannot pick up on subtle behavioral and morphological changes. In the present study, videos were captured of four different C. elegans strains grown in liquid cultures and transferred to NGM-agar plates with an E. coli lawn or with no lawn. Using advanced software, WormLab, the full skeleton and outline of the worms were tracked to determine whether the presence of food affects behavioral traits. In all seven investigated parameters, statistically significant differences were found in worm behavior between worms moving on NGM-agar plates with an E. coli lawn and those on plates with no lawn. Furthermore, multiple test groups showed differences in the interaction between variables, as the parameters that correlated significantly with speed of locomotion varied. In the present study, we demonstrate the validity of a model to analyze C. elegans behavior beyond simple speed of locomotion. The need to account for a nested design while performing statistical analyses in similar studies is also demonstrated. With extended analyses, C. elegans behavioral change can be investigated with greater sensitivity, which could have wide utility in fields such as, but not limited to, toxicology, drug discovery, and RNAi screening.

  3. Advanced behavioral analyses show that the presence of food causes subtle changes in C. elegans movement

    Directory of Open Access Journals (Sweden)

    Nicholas Angstman

    2016-03-01

    Full Text Available As a widely used and studied model organism, C. elegans worms offer the ability to investigate the implications of behavioral change. Although investigation of C. elegans behavioral traits has been demonstrated, analysis is often narrowed down to measurements based on a single point, and thus cannot pick up on subtle behavioral and morphological changes. In the present study, videos were captured of four different C. elegans strains grown in liquid cultures and transferred to NGM-agar plates with an E. coli lawn or with no lawn. Using advanced software, WormLab, the full skeleton and outline of the worms were tracked to determine whether the presence of food affects behavioral traits. In all seven investigated parameters, statistically significant differences were found in worm behavior between worms moving on NGM-agar plates with an E. coli lawn and those on plates with no lawn. Furthermore, multiple test groups showed differences in the interaction between variables, as the parameters that correlated significantly with speed of locomotion varied. In the present study, we demonstrate the validity of a model to analyze C. elegans behavior beyond simple speed of locomotion. The need to account for a nested design while performing statistical analyses in similar studies is also demonstrated. With extended analyses, C. elegans behavioral change can be investigated with greater sensitivity, which could have wide utility in fields such as, but not limited to, toxicology, drug discovery, and RNAi screening.

  4. Chin Shan analyses show advantages of whole pool multi-rack approach

    International Nuclear Information System (INIS)

    Singh, K.P.; Soler, A.I.

    1991-01-01

    Nuclear fuel storage racks are essentially thin-walled, cellular structures of prismatic cross-section. Although the details of design vary from one supplier to another, certain key physical attributes are common to all designs. For example, all racks feature square cells of sufficient opening size and height to enable insertion and withdrawal of the fuel assembly. The array of cells is positioned in a vertical orientation and is supported off the pool slab surface by four or more support legs. The spent fuel pool is filled with many such individual fuel racks. The plenum created by the support legs is essential for proper cooling of the fuel assemblies stored in the rack, which relies on natural convective cooling to extract the heat emitted by the spent fuel. However, this arrangement has the undesirable effect of making the rack kinematically less stable. Regulatory authorities require careful and comprehensive analysis of the response of the racks under the seismic motions postulated for the pool slab. Results from whole pool multi-rack (WPMR) analyses at the Chin Shan and Oyster Creek nuclear plants point to the potential inadequacies of single-rack 3D analyses, and show just how important it is to carry out WPMR simulations, despite their complexity and high cost. (author)

  5. SVM models for analysing the headstreams of mine water inrush

    Energy Technology Data Exchange (ETDEWEB)

    Yan Zhi-gang; Du Pei-jun; Guo Da-zhi [China University of Science and Technology, Xuzhou (China). School of Environmental Science and Spatial Informatics

    2007-08-15

    The support vector machine (SVM) model was introduced to analyse the headstream of water inrush in a coal mine. The SVM model, based on a hydrogeochemical method, was constructed for recognising two kinds of headstreams, and the H-SVMs model was constructed for recognising multiple headstreams. The SVM method was applied to analyse the conditions of two mixed headstreams, and the value of the SVM decision function was investigated as a means of denoting hydrogeochemical abnormality. The experimental results show that the SVM is based on a rigorous mathematical theory and has a simple structure and good overall performance. Moreover, the parameter w in the decision function can describe the weights of the discrimination indices of the headstream of water inrush. The value of the decision function can denote hydrogeochemical abnormality, which is significant in the prevention of water inrush in a coal mine. 9 refs., 1 fig., 7 tabs.
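
    As a hedged illustration of the SVM step, the sketch below trains a two-class support vector classifier on invented hydrogeochemical ion concentrations; with a linear kernel, the fitted weight vector plays the role of the parameter w discussed above. The feature choices and data are assumptions, not the paper's discrimination indices.

    ```python
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    # Hypothetical training samples: rows are water samples, columns are
    # ion concentrations (e.g. Na+, Ca2+, Cl-, SO4 2-) for two aquifers.
    X_a = rng.normal(loc=[1.0, 2.0, 0.5, 1.5], scale=0.2, size=(20, 4))
    X_b = rng.normal(loc=[2.0, 1.0, 1.5, 0.5], scale=0.2, size=(20, 4))
    X = np.vstack([X_a, X_b])
    y = np.array([0] * 20 + [1] * 20)      # 0 = headstream A, 1 = headstream B

    clf = SVC(kernel="linear").fit(X, y)
    print(clf.coef_)                       # weight vector w over the indices
    # The signed value of the decision function can flag hydrogeochemical
    # abnormality in a mixed sample.
    print(clf.decision_function(X[:3]))
    ```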

  6. Experimental and Computational Modal Analyses for Launch Vehicle Models considering Liquid Propellant and Flange Joints

    Directory of Open Access Journals (Sweden)

    Chang-Hoon Sim

    2018-01-01

    Full Text Available In this research, modal tests and analyses are performed for a simplified and scaled first-stage model of a space launch vehicle using liquid propellant. This study aims to establish finite element modeling techniques for computational modal analyses by considering the liquid propellant and flange joints of launch vehicles. The modal tests measure the natural frequencies and mode shapes in the first and second lateral bending modes. As the liquid filling ratio increases, the measured frequencies decrease. In addition, as the number of flange joints increases, the measured natural frequencies increase. Computational modal analyses using the finite element method are conducted. The liquid is modeled by the virtual mass method, and the flange joints are modeled using one-dimensional spring elements along with the node-to-node connection. Comparison of the modal test results and predicted natural frequencies shows good or moderate agreement. The correlation between the modal tests and analyses establishes finite element modeling techniques for modeling the liquid propellant and flange joints of space launch vehicles.
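
    The computational modal analysis described above reduces, at its core, to the generalized eigenproblem K x = w^2 M x. The following minimal sketch solves it for a hypothetical 3-DOF spring-mass chain; the stiffness and mass values are placeholder assumptions standing in for a real finite element model of the vehicle structure.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    k, m = 1.0e6, 10.0                     # spring stiffness (N/m), lumped mass (kg)
    K = k * np.array([[ 2, -1,  0],
                      [-1,  2, -1],
                      [ 0, -1,  1]], dtype=float)   # stiffness matrix
    M = m * np.eye(3)                                # mass matrix

    w2, modes = eigh(K, M)                 # eigenvalues are squared angular frequencies
    freqs_hz = np.sqrt(w2) / (2 * np.pi)   # natural frequencies in Hz
    print(freqs_hz)   # adding mass (liquid) lowers these, matching the trend above
    ```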

  7. Analysing and controlling the tax evasion dynamics via majority-vote model

    Energy Technology Data Exchange (ETDEWEB)

    Lima, F W S, E-mail: fwslima@gmail.co, E-mail: wel@ufpi.edu.b [Departamento de Fisica, Universidade Federal do PiauI, 64049-550, Teresina - PI (Brazil)

    2010-09-01

    Within the context of agent-based Monte-Carlo simulations, we study the well-known majority-vote model (MVM) with noise applied to tax evasion on simple square lattices, Voronoi-Delaunay random lattices, Barabási-Albert networks, and Erdős-Rényi random graphs. In order to analyse and control the fluctuations of tax evasion in the economics model proposed by Zaklan, the MVM is applied in the neighborhood of the critical noise q_c to evolve the Zaklan model. The Zaklan model had recently been studied using the equilibrium Ising model. Here we show that the Zaklan model is robust: it can be studied using either the equilibrium dynamics of the Ising model or the nonequilibrium MVM, on all of the topologies cited above, giving the same behavior regardless of the dynamics or topology used.

  8. Analysing and controlling the tax evasion dynamics via majority-vote model

    International Nuclear Information System (INIS)

    Lima, F W S

    2010-01-01

    Within the context of agent-based Monte-Carlo simulations, we study the well-known majority-vote model (MVM) with noise applied to tax evasion on simple square lattices, Voronoi-Delaunay random lattices, Barabási-Albert networks, and Erdős-Rényi random graphs. In order to analyse and control the fluctuations of tax evasion in the economics model proposed by Zaklan, the MVM is applied in the neighborhood of the critical noise q_c to evolve the Zaklan model. The Zaklan model had recently been studied using the equilibrium Ising model. Here we show that the Zaklan model is robust: it can be studied using either the equilibrium dynamics of the Ising model or the nonequilibrium MVM, on all of the topologies cited above, giving the same behavior regardless of the dynamics or topology used.
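
    A compact sketch of the majority-vote dynamics used in both records above: each spin follows its local majority with probability 1 - q and opposes it with probability q. The lattice size, noise level, and sweep count are illustrative choices, not the paper's simulation parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    L, q, sweeps = 32, 0.05, 200
    spins = rng.choice([-1, 1], size=(L, L))

    for _ in range(sweeps * L * L):
        i, j = rng.integers(L, size=2)
        # Sum over the four nearest neighbours (periodic boundaries).
        s = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
             + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        majority = np.sign(s) if s != 0 else rng.choice([-1, 1])
        # Follow the local majority with probability 1 - q, oppose it with q.
        spins[i, j] = majority if rng.random() > q else -majority

    print(abs(spins.mean()))   # order parameter; drops near the critical noise q_c
    ```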

  9. Time dependent patient no-show predictive modelling development.

    Science.gov (United States)

    Huang, Yu-Li; Hanauer, David A

    2016-05-09

    Purpose - The purpose of this paper is to develop evidence-based predictive no-show models that consider each of a patient's past appointment statuses, a time-dependent component, as an independent predictor to improve predictability. Design/methodology/approach - A ten-year retrospective data set was extracted from a pediatric clinic. It consisted of 7,291 distinct patients who had at least two visits, along with their appointment characteristics, patient demographics, and insurance information. Logistic regression was adopted to develop the no-show models using two-thirds of the data for training and the remaining data for validation. The no-show threshold was then determined by minimizing the misclassification of show/no-show assignments. A total of 26 predictive models were developed based on the number of available past appointments. Simulation was employed to test the effectiveness of each model on the costs of patient wait time, physician idle time, and overtime. Findings - The results demonstrated that the misclassification rate and the area under the curve of the receiver operating characteristic gradually improved as more appointment history was included, until around the 20th predictive model. The overbooking method with no-show predictive models suggested incorporating up to the 16th model and outperformed other overbooking methods by as much as 9.4 per cent in the cost per patient while allowing two additional patients in a clinic day. Research limitations/implications - The challenge now is to actually implement the no-show predictive model systematically to further demonstrate its robustness and simplicity in various scheduling systems. Originality/value - This paper provides examples of how to build no-show predictive models with time-dependent components to improve the overbooking policy. Accurately identifying scheduled patients' show/no-show status allows clinics to proactively schedule patients to reduce the negative impact of patient no-shows.
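
    A hedged sketch of the core modelling step described above: logistic regression on appointment-history features with a train/validation split and a decision threshold. The feature names and synthetic data are assumptions for illustration, not the paper's variables.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    n = 1000
    lead_time = rng.integers(1, 60, n)         # days between booking and visit
    prior_no_shows = rng.integers(0, 5, n)     # count over past appointments
    last_status = rng.integers(0, 2, n)        # most recent visit: 1 = no-show
    X = np.column_stack([lead_time, prior_no_shows, last_status])
    # Synthetic ground truth: no-show risk rises with all three predictors.
    p = 1 / (1 + np.exp(-(-3 + 0.02 * lead_time + 0.5 * prior_no_shows
                          + 0.8 * last_status)))
    y = rng.random(n) < p

    model = LogisticRegression().fit(X[:667], y[:667])   # two-thirds for training
    probs = model.predict_proba(X[667:])[:, 1]           # validate on the rest
    threshold = 0.5    # the paper tunes this to minimise misclassification
    print((probs > threshold).mean())
    ```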

  10. Human Commercial Models' Eye Colour Shows Negative Frequency-Dependent Selection.

    Directory of Open Access Journals (Sweden)

    Isabela Rodrigues Nogueira Forti

    Full Text Available In this study we investigated the eye colour of human commercial models registered in the UK (400 female and 400 male) and Brazil (400 female and 400 male) to test the hypothesis that model eye colour frequency was the result of negative frequency-dependent selection. The eye colours of the models were classified as blue, brown or intermediate. Chi-square analyses of data for countries separated by sex showed that in the United Kingdom brown eyes and intermediate colours were significantly more frequent than expected in comparison to the general United Kingdom population (P<0.001). In Brazil, the most frequent eye colour, brown, was significantly less frequent than expected in comparison to the general Brazilian population. These results support the hypothesis that model eye colour is the result of negative frequency-dependent selection. This could be the result of people using eye colour as a marker of genetic diversity and finding rarer eye colours more attractive because of the potential advantage of the more genetically diverse offspring that could result from such a choice. Eye colour may be important because, in comparison to many other physical traits (e.g., hair colour), it is hard to modify, hide or disguise, and it is highly polymorphic.
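
    The chi-square comparison described above can be reproduced in miniature as follows; the counts and population frequencies are invented for illustration, not the study's data.

    ```python
    from scipy.stats import chisquare

    observed = [120, 210, 70]                # blue, brown, intermediate among models
    population_freq = [0.40, 0.45, 0.15]     # hypothetical general-population shares
    expected = [f * sum(observed) for f in population_freq]
    stat, p = chisquare(observed, f_exp=expected)
    print(stat, p)   # small p: model colour frequencies differ from the population
    ```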

  11. Sensitivity and uncertainty analyses for performance assessment modeling

    International Nuclear Information System (INIS)

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs
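
    As a small illustration of the deterministic approach summarised above, the sketch below propagates input variances through a stand-in model via a first-order Taylor series, using numerical partial derivatives. The model form and variances are invented.

    ```python
    import numpy as np

    def model(x):
        return x[0] * np.exp(-x[1]) + x[2] ** 2     # stand-in for a complex code

    x0 = np.array([2.0, 0.5, 1.0])                  # nominal input values
    var_x = np.array([0.04, 0.01, 0.09])            # input variances (independent inputs)

    eps = 1e-6
    grad = np.array([(model(x0 + eps * e) - model(x0 - eps * e)) / (2 * eps)
                     for e in np.eye(3)])           # central-difference sensitivities
    var_y = np.sum(grad ** 2 * var_x)               # first-order output variance
    print(model(x0), var_y)
    ```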

  12. Modelling and analysing oriented fibrous structures

    International Nuclear Information System (INIS)

    Rantala, M; Lassas, M; Siltanen, S; Sampo, J; Takalo, J; Timonen, J

    2014-01-01

    A mathematical model for fibrous structures using a direction dependent scaling law is presented. The orientation of fibrous nets (e.g. paper) is analysed with a method based on the curvelet transform. The curvelet-based orientation analysis has been tested successfully on real data from paper samples: the major directions of fibre orientation can apparently be recovered. Similar results are achieved in tests on data simulated by the new model, allowing a comparison with ground truth.

  13. A non-equilibrium neutral model for analysing cultural change.

    Science.gov (United States)

    Kandler, Anne; Shennan, Stephen

    2013-08-07

    Neutral evolution is a frequently used model to analyse changes in frequencies of cultural variants over time. Variants are chosen to be copied according to their relative frequency and new variants are introduced by a process of random mutation. Here we present a non-equilibrium neutral model which accounts for temporally varying population sizes and mutation rates and makes it possible to analyse the cultural system under consideration at any point in time. This framework gives an indication whether observed changes in the frequency distributions of a set of cultural variants between two time points are consistent with the random copying hypothesis. We find that the likelihood of the existence of the observed assemblage at the end of the considered time period (expressed by the probability of the observed number of cultural variants present in the population during the whole period under neutral evolution) is a powerful indicator of departures from neutrality. Further, we study the effects of frequency-dependent selection on the evolutionary trajectories and present a case study of change in the decoration of pottery in early Neolithic Central Europe. Based on the framework developed we show that neutral evolution is not an adequate description of the observed changes in frequency. Copyright © 2013 Elsevier Ltd. All rights reserved.
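
    A minimal sketch of the neutral copying process described above, in its non-equilibrium form with a time-varying population size; the population sizes and innovation rate are invented, and this is the generic random-copying scheme rather than the authors' exact framework.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    pop = np.zeros(200, dtype=int)           # everyone starts with variant 0
    next_variant = 1
    pop_sizes = [200, 150, 300] * 20         # time-varying population size
    mu = 0.01                                # innovation (mutation) rate

    for n in pop_sizes:
        pop = rng.choice(pop, size=n)        # random copying from previous generation
        mutants = rng.random(n) < mu
        for i in np.where(mutants)[0]:       # each mutant gets a brand-new variant
            pop[i] = next_variant
            next_variant += 1

    print(len(np.unique(pop)))               # number of variants maintained
    ```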

  14. Externalizing Behaviour for Analysing System Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof

    2013-01-01

    System models have recently been introduced to model organisations and evaluate their vulnerability to threats, especially insider threats. These models are very suitable for the latter, since insiders can be assumed to have more knowledge about the attacked organisation than outside attackers. Therefore, many attacks are considerably easier for insiders to perform than for outsiders. However, current models do not support explicit specification of different behaviours. Instead, behaviour is deeply embedded in the analyses supported by the models, meaning that it is a complex, if not impossible, task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make behaviour a separate component in system models, and explore how to integrate...

  15. An IEEE 802.11 EDCA Model with Support for Analysing Networks with Misbehaving Nodes

    Directory of Open Access Journals (Sweden)

    Szott Szymon

    2010-01-01

    Full Text Available We present a novel model of IEEE 802.11 EDCA with support for analysing networks with misbehaving nodes. In particular, we consider backoff misbehaviour. Firstly, we verify the model by extensive simulation analysis and by comparing it to three other IEEE 802.11 models. The results show that our model behaves satisfactorily and outperforms other widely acknowledged models. Secondly, a comparison with simulation results in several scenarios with misbehaving nodes proves that our model performs correctly for these scenarios. The proposed model can, therefore, be considered as an original contribution to the area of EDCA models and backoff misbehaviour.
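
    As a toy illustration of backoff misbehaviour, the sketch below compares an honest contention window with a selfish, smaller one and estimates the misbehaving node's share of channel wins (ties/collisions ignored). Window sizes and node counts are illustrative, and this is far simpler than the full EDCA model analysed above.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    cw_honest, cw_selfish, rounds = 32, 8, 100_000
    honest = rng.integers(0, cw_honest, size=(rounds, 3))  # three honest nodes
    selfish = rng.integers(0, cw_selfish, size=rounds)     # one misbehaving node

    wins = (selfish[:, None] < honest).all(axis=1)  # smallest backoff wins the round
    print(wins.mean())   # well above the fair 1/4 share
    ```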

  16. Development of ITER 3D neutronics model and nuclear analyses

    International Nuclear Information System (INIS)

    Zeng, Q.; Zheng, S.; Lu, L.; Li, Y.; Ding, A.; Hu, H.; Wu, Y.

    2007-01-01

    ITER nuclear analyses rely on calculations with three-dimensional (3D) Monte Carlo codes, e.g. the widely-used MCNP. However, continuous changes in the design of the components require that the 3D neutronics model used for nuclear analyses be updated. Modeling a complex geometry with MCNP by hand is a very time-consuming task, so developing CAD-based interface codes for automatic conversion from CAD models to MCNP input files is an efficient alternative. Based on the latest CAD model and the available interface codes, two approaches to updating the 3D neutronics model have been discussed by the ITER IT (International Team): the first is to start with the existing MCNP model 'Brand' and update it through a combination of direct modification of the MCNP input file and generation of models for some components directly from the CAD data; the second is to start from the full CAD model, make the necessary simplifications, and generate the MCNP model with one of the interface codes. MCAM, an advanced CAD-based MCNP interface code developed by the FDS Team in China, has been successfully applied to update the ITER 3D neutronics model following both approaches. The Brand model has been updated by generating portions of the geometry from the newest CAD model with MCAM. MCAM has also successfully performed conversion to an MCNP neutronics model from a full ITER CAD model which was simplified and issued by ITER IT to benchmark the above interface codes. Based on the two updated 3D neutronics models, the related nuclear analyses are performed. This paper presents the status of ITER 3D modeling using MCAM and its nuclear analyses, as well as a brief introduction to an advanced version of MCAM. (authors)

  17. VIPRE modeling of VVER-1000 reactor core for DNB analyses

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Y.; Nguyen, Q. [Westinghouse Electric Corporation, Pittsburgh, PA (United States); Cizek, J. [Nuclear Research Institute, Prague, (Czech Republic)

    1995-09-01

    Based on the one-pass modeling approach, the hot channels and the VVER-1000 reactor core can be modeled in 30 channels for DNB analyses using the VIPRE-01/MOD02 (VIPRE) code (VIPRE is owned by Electric Power Research Institute, Palo Alto, California). The VIPRE one-pass model does not compromise any accuracy in the hot channel local fluid conditions. Extensive qualifications include sensitivity studies of radial noding and crossflow parameters and comparisons with the results from THINC and CALOPEA subchannel codes. The qualifications confirm that the VIPRE code with the Westinghouse modeling method provides good computational performance and accuracy for VVER-1000 DNB analyses.

  18. Duchenne muscular dystrophy models show their age

    OpenAIRE

    Chamberlain, Jeffrey S.

    2010-01-01

    The lack of appropriate animal models has hampered efforts to develop therapies for Duchenne muscular dystrophy (DMD). A new mouse model lacking both dystrophin and telomerase (Sacco et al., 2010) closely mimics the pathological progression of human DMD and shows that muscle stem cell activity is a key determinant of disease severity.

  19. Performance of neutron kinetics models for ADS transient analyses

    International Nuclear Information System (INIS)

    Rineiski, A.; Maschek, W.; Rimpault, G.

    2002-01-01

    Within the framework of the SIMMER code development, neutron kinetics models for simulating transients and hypothetical accidents in advanced reactor systems, in particular in Accelerator Driven Systems (ADSs), have been developed at FZK/IKET in cooperation with CE Cadarache. SIMMER is a fluid-dynamics/thermal-hydraulics code, coupled with a structure model and a space-, time- and energy-dependent neutronics module for analyzing transients and accidents. The advanced kinetics models have also been implemented into KIN3D, a module of the VARIANT/TGV code (stand-alone neutron kinetics) for broadening application and for testing and benchmarking. In the paper, a short review of the SIMMER and KIN3D neutron kinetics models is given. Some typical transients related to ADS perturbations are analyzed. The general models of SIMMER and KIN3D are compared with more simple techniques developed in the context of this work to get a better understanding of the specifics of transients in subcritical systems and to estimate the performance of different kinetics options. These comparisons may also help in elaborating new kinetics models and extending existing computation tools for ADS transient analyses. The traditional point-kinetics model may give rather inaccurate transient reaction rate distributions in an ADS even if the material configuration does not change significantly. This inaccuracy is not related to the problem of choosing a 'right' weighting function: the point-kinetics model with any weighting function cannot take into account pronounced flux shape variations related to possible significant changes in the criticality level or to fast beam trips. To improve the accuracy of the point-kinetics option for slow transients, we have introduced a correction factor technique. The related analyses give a better understanding of 'long-timescale' kinetics phenomena in the subcritical domain and help to evaluate the performance of the quasi-static scheme in a particular case. One
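
    For orientation, the simplest of the kinetics options compared above is point kinetics. The sketch below integrates the one-delayed-group point-kinetics equations for a source-driven subcritical core; all parameter values are illustrative assumptions, not SIMMER or KIN3D inputs.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    rho, beta, lam, Lam = -0.03, 0.0065, 0.08, 1e-6   # reactivity, delayed fraction,
                                                       # precursor decay, generation time
    S = 1.0e5                                          # external (accelerator) source

    def rhs(t, y):
        n, C = y
        dn = (rho - beta) / Lam * n + lam * C + S      # neutron density
        dC = beta / Lam * n - lam * C                  # delayed-neutron precursors
        return [dn, dC]

    n0 = -S * Lam / rho                                # subcritical equilibrium level
    C0 = beta * n0 / (lam * Lam)
    sol = solve_ivp(rhs, (0.0, 1.0), [n0, C0], method="BDF")  # stiff system
    print(sol.y[0, -1])   # stays near n0; drops sharply on a beam trip (S = 0)
    ```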

  20. A dialogue game for analysing group model building: framing collaborative modelling and its facilitation

    NARCIS (Netherlands)

    Hoppenbrouwers, S.J.B.A.; Rouwette, E.A.J.A.

    2012-01-01

    This paper concerns a specific approach to analysing and structuring operational situations in collaborative modelling. Collaborative modelling is viewed here as 'the goal-driven creation and shaping of models that are based on the principles of rational description and reasoning'. Our long term

  1. The Evaluation of Bivariate Mixed Models in Meta-analyses of Diagnostic Accuracy Studies with SAS, Stata and R.

    Science.gov (United States)

    Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc

    2018-05-01

    Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also model implementation in the software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity - the two outcomes of interest in meta-analyses of diagnostic accuracy studies - utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three sizes for meta-analyses (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations of less than two percentage points. proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma, rather shows convergence problems. The random effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification together with convergence robustness should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach. Schattauer GmbH.

  2. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    Science.gov (United States)

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
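
    A minimal sketch of the conversion-model idea: fit a linear regression mapping sums over a partial congener set to full 209-congener totals on paired samples, then apply it where only the partial set was measured. The synthetic data simply mimic the ~93% capture reported above.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    sigma209 = rng.lognormal(mean=3.0, sigma=0.5, size=50)   # full-congener sums
    sigma119 = 0.93 * sigma209 * rng.normal(1.0, 0.05, 50)   # partial set captures ~93%

    slope, intercept = np.polyfit(sigma119, sigma209, deg=1) # conversion model
    predicted = slope * sigma119 + intercept
    rpd = 200 * np.abs(predicted - sigma209) / (predicted + sigma209)
    print(slope, intercept, rpd.mean())                      # mean relative percent difference
    ```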

  3. Model-based Recursive Partitioning for Subgroup Analyses

    OpenAIRE

    Seibold, Heidi; Zeileis, Achim; Hothorn, Torsten

    2016-01-01

    The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulated challenges to the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by...

  4. Beta-Poisson model for single-cell RNA-seq data analyses.

    Science.gov (United States)

    Vu, Trung Nghia; Wills, Quin F; Kalari, Krishna R; Niu, Nifang; Wang, Liewei; Rantalainen, Mattias; Pawitan, Yudi

    2016-07-15

    Single-cell RNA-sequencing technology allows detection of gene expression at the single-cell level. One typical feature of the data is a bimodality in the cellular distribution even for highly expressed genes, primarily caused by a proportion of non-expressing cells. The standard and the over-dispersed gamma-Poisson models that are commonly used in bulk-cell RNA-sequencing are not able to capture this property. We introduce a beta-Poisson mixture model that can capture the bimodality of the single-cell gene expression distribution. We further integrate the model into the generalized linear model framework in order to perform differential expression analyses. The whole analytical procedure is called BPSC. The results from several real single-cell RNA-seq datasets indicate that ∼90% of the transcripts are well characterized by the beta-Poisson model; the model-fit from BPSC is better than the fit of the standard gamma-Poisson model in > 80% of the transcripts. Moreover, in differential expression analyses of simulated and real datasets, BPSC performs well against edgeR, a conventional method widely used in bulk-cell RNA-sequencing data, and against scde and MAST, two recent methods specifically designed for single-cell RNA-seq data. An R package BPSC for model fitting and differential expression analyses of single-cell RNA-seq data is available under the GPL-3 license at https://github.com/nghiavtr/BPSC. Contact: yudi.pawitan@ki.se or mattias.rantalainen@ki.se. Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
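
    To show why a beta-Poisson mixture captures the bimodality described above, the sketch below draws per-cell Poisson rates from a scaled beta distribution; the parameter values are illustrative, not BPSC's estimates.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    alpha, beta_, scale, n_cells = 0.3, 1.5, 200.0, 5000
    rates = scale * rng.beta(alpha, beta_, size=n_cells)  # per-cell expression rates
    counts = rng.poisson(rates)                           # observed read counts

    print((counts == 0).mean())        # large zero fraction: the non-expressing mode
    print(counts[counts > 0].mean())   # the expressing mode sits well above zero
    ```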

  5. Graphic-based musculoskeletal model for biomechanical analyses and animation.

    Science.gov (United States)

    Chao, Edmund Y S

    2003-04-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator of human musculoskeletal physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available, and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment demonstrate the utility of this unique database and simulation technology. This integrated system will have an impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.

  6. Development of CFD fire models for deterministic analyses of the cable issues in the nuclear power plant

    International Nuclear Information System (INIS)

    Lin, C.-H.; Ferng, Y.-M.; Pei, B.-S.

    2009-01-01

    Additional fire barriers for electrical cables are required in the nuclear power plants (NPPs) in Taiwan due to the separation requirements of Appendix R to 10 CFR Part 50. Risk-informed fire analysis (RIFA) may provide a viable method to resolve these fire barrier issues. However, it is necessary to perform fire scenario analyses so that RIFA can quantitatively determine the risk related to the fire barrier wrap. CFD fire models are therefore proposed in this paper to support RIFA in resolving these issues. Three typical fire scenarios are selected to assess the present CFD models. Compared with the experimental data and other models' simulations, the present calculated results show reasonable agreement, indicating that the present CFD fire models can provide the quantitative information needed by RIFA analyses to relax the cable wrap requirements for NPPs.

  7. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    Directory of Open Access Journals (Sweden)

    Daniel T. L. Shek

    2011-01-01

    Full Text Available Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  8. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
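
    A minimal sketch of the random-intercept LMM idea outlined above, implemented here with statsmodels rather than SPSS; the six-wave longitudinal data are simulated, and the formula and variable names are assumptions for illustration.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(6)
    n_subj, n_waves = 100, 6
    subj = np.repeat(np.arange(n_subj), n_waves)
    wave = np.tile(np.arange(n_waves), n_subj)
    intercepts = rng.normal(50, 5, n_subj)            # subject-level random intercepts
    score = intercepts[subj] + 1.2 * wave + rng.normal(0, 2, n_subj * n_waves)
    df = pd.DataFrame({"subject": subj, "wave": wave, "score": score})

    # Fixed effect of time, random intercept per subject: this respects the
    # non-independence of repeated observations that a plain GLM would ignore.
    model = smf.mixedlm("score ~ wave", df, groups=df["subject"]).fit()
    print(model.params["wave"])                       # estimated growth per wave
    ```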

  9. Estimating required information size by quantifying diversity in random-effects model meta-analyses

    DEFF Research Database (Denmark)

    Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper

    2009-01-01

    an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. RESULTS: We devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between-trial variability and a sampling error estimate, and it is interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D2 >= I2, for all meta-analyses. CONCLUSION: We conclude that D2 seems a better alternative than I2 to consider model variation in any random-effects model meta-analysis.
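
    In LaTeX form, the definition of diversity given above (the relative variance reduction when moving from the random-effects to the fixed-effect pooled estimate), together with the stated relation to inconsistency:

    ```latex
    \[
      D^2 \;=\; \frac{\widehat{\mathrm{Var}}_{\mathrm{RE}} - \widehat{\mathrm{Var}}_{\mathrm{FE}}}
                     {\widehat{\mathrm{Var}}_{\mathrm{RE}}},
      \qquad D^2 \ge I^2 ,
    \]
    ```

    where Var_RE and Var_FE denote the variances of the pooled estimate under the random-effects and fixed-effect models, respectively.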

  10. Present status of theories and data analyses of mathematical models for carcinogenesis

    International Nuclear Information System (INIS)

    Kai, Michiaki; Kawaguchi, Isao

    2007-01-01

    Reviewed are the basic mathematical models (hazard functions), current trends in model studies, and models for radiation carcinogenesis. Hazard functions of carcinogenesis are described for the multi-stage model and the 2-event model related to cell dynamics. At present, the age distribution of cancer mortality is analyzed, the relationship between mutation and carcinogenesis is discussed, and models for colorectal carcinogenesis are presented. As for radiation carcinogenesis, the Armitage-Doll model and the generalized MVK model (Moolgavkar, Venzon, Knudson, 1971-1990) with 2-stage clonal expansion have been applied to analyses of carcinogenesis in A-bomb survivors, workers in uranium mines (Rn exposure), smoking doctors in the UK, and other cases, whose characteristics are discussed. In the analyses of A-bomb survivors, the models above are applied to solid tumors and leukemia to see the effect, if any, of stage, age at exposure, time progression, etc. For the miners and smokers, the stages of initiation, promotion and progression in carcinogenesis are discussed in the analyses. Other analyses cover workers at a Canadian nuclear power plant and patients who underwent radiation therapy. Model analysis can help to understand the carcinogenic process quantitatively rather than merely describe it. (R.T.)
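
    For reference, the hazard of the classical Armitage-Doll multi-stage model mentioned above takes the well-known approximate form (a textbook statement, not the paper's notation):

    ```latex
    \[
      h(t) \;\approx\; \frac{r_1 r_2 \cdots r_k}{(k-1)!}\; t^{\,k-1},
    \]
    ```

    with k rate-limiting stages of rates r_1, ..., r_k, so the age-specific hazard grows as a power of age t, which is the form fitted to age distributions of cancer mortality.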

  11. Model-Based Recursive Partitioning for Subgroup Analyses.

    Science.gov (United States)

    Seibold, Heidi; Zeileis, Achim; Hothorn, Torsten

    2016-05-01

    The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulated challenges to the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by predictive factors. The method starts with a model for the overall treatment effect as defined for the primary analysis in the study protocol and uses measures for detecting parameter instabilities in this treatment effect. The procedure produces a segmented model with differential treatment parameters corresponding to each patient subgroup. The subgroups are linked to predictive factors by means of a decision tree. The method is applied to the search for subgroups of patients suffering from amyotrophic lateral sclerosis that differ with respect to their Riluzole treatment effect, the only currently approved drug for this disease.

  12. Bayesian uncertainty analyses of probabilistic risk models

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1989-01-01

    Applications of Bayesian principles to uncertainty analyses are discussed in the paper. A short review of the most important uncertainties and their causes is provided. An application of the principle of maximum entropy to the determination of Bayesian prior distributions is described. An approach based on so-called probabilistic structures is presented in order to develop a method for the quantitative evaluation of modelling uncertainties. The method is applied to a small example case. Ideas for application areas for the proposed method are discussed.
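
    One concrete instance of the maximum-entropy construction mentioned above: among all densities on [0, ∞) with a known mean μ, the entropy-maximising prior is the exponential density,

    ```latex
    \[
      p(x) \;=\; \frac{1}{\mu}\, e^{-x/\mu}, \qquad x \ge 0 ,
    \]
    ```

    while fixing both the mean and the variance on the whole real line yields a normal prior instead.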

  13. Genetic analyses using GGE model and a mixed linear model approach, and stability analyses using AMMI bi-plot for late-maturity alpha-amylase activity in bread wheat genotypes.

    Science.gov (United States)

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Fofana, Bourlaye

    2017-06-01

    A low falling number and the discounting of grain when it is downgraded in class are consequences of excessive late-maturity α-amylase activity (LMAA) in bread wheat (Triticum aestivum L.). Grain expressing high LMAA produces poorer quality bread products. To effectively breed for low LMAA, it is necessary to understand what genes control it and how they are expressed, particularly when genotypes are grown in different environments. In this study, an International Collection (IC) of 18 spring wheat genotypes and another set of 15 spring wheat cultivars adapted to South Dakota (SD), USA, were assessed to characterize the genetic component of LMAA over 5 and 13 environments, respectively. The data were analysed using a GGE model with a mixed linear model approach, and stability analysis was presented using an AMMI bi-plot in R software. All estimated variance components and their proportions of the total phenotypic variance were highly significant for both sets of genotypes, which was validated by the AMMI model analysis. Broad-sense heritability for LMAA was higher in the SD-adapted cultivars (53%) than in the IC (49%). Significant genetic effects and stability analyses showed that some genotypes, e.g. 'Lancer', 'Chester' and 'LoSprout' from the IC, and 'Alsen', 'Traverse' and 'Forefront' from the SD cultivars, could be used as parents to develop new cultivars expressing low levels of LMAA. Stability analysis using an AMMI bi-plot revealed that 'Chester', 'Lancer' and 'Advance' were the most stable across environments, while in contrast, 'Kinsman', 'Lerma52' and 'Traverse' exhibited the lowest stability for LMAA across environments.
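
    For reference, broad-sense heritability as quoted above is the genotypic share of phenotypic variance; in its generic form (the paper's exact variance partition across environments and replicates may differ):

    ```latex
    \[
      H^2 \;=\; \frac{\sigma^2_G}{\sigma^2_P}
          \;=\; \frac{\sigma^2_G}{\sigma^2_G + \sigma^2_{GE} + \sigma^2_{\varepsilon}} ,
    \]
    ```

    with genotypic (G), genotype-by-environment (GE) and residual variance components.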

  14. Analyses and simulations in income frame regulation model for the network sector from 2007

    International Nuclear Information System (INIS)

    Askeland, Thomas Haave; Fjellstad, Bjoern

    2007-01-01

    Analyses of the income frame regulation model for the network sector in Norway, introduced on 1 January 2007. The model's treatment of the norm cost is evaluated, especially the effect analyses carried out by a so-called Data Envelopment Analysis model. It is argued that there may exist an age bias in the data set, and that this should and can be corrected for in the effect analyses. It is proposed to correct for this by introducing an age parameter in the data set. Analyses have been made of how the calibration effects in the regulation model affect the sector's total income frame, as well as each network company's income frame. It is argued that the calibration, as presented, is not working according to its intention and should be adjusted in order to provide the sector with the reference rate of return (ml)

  15. Control designs and stability analyses for Helly’s car-following model

    Science.gov (United States)

    Rosas-Jaimes, Oscar A.; Quezada-Téllez, Luis A.; Fernández-Anaya, Guillermo

    Car-following is an approach to understanding traffic behavior restricted to pairs of cars, identifying a "leader" moving in front of a "follower", which is assumed not to overtake the leader. From the first attempts to formulate, through these models, the way individual cars are affected on a road, linear differential equations were suggested by authors like Pipes and Helly. These expressions represent such phenomena quite well, even though they have been superseded by more recent and accurate models. However, in this paper we show that those early formulations have some properties that are not fully reported, presenting the different ways in which they can be expressed and analyzing their stability behavior. Pipes' model can be extended to what is known as Helly's model, which is viewed as a more precise model for emulating this microscopic approach to traffic. Once some convenient forms of expression are established, two control designs are suggested. These regulation schemes are complemented with their respective stability analyses, which reflect some important properties with implications for real driving. Notably, these linear designs are easy to understand and to implement, including important features related to safety and comfort.
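
    A standard statement of the linear Helly law discussed above (this is the conventional textbook form; the paper's notation may differ slightly):

    ```latex
    \[
      \dot v_f(t+T) \;=\; C_1\,\bigl[v_\ell(t) - v_f(t)\bigr]
                     \;+\; C_2\,\bigl[\Delta x(t) - D(t)\bigr],
      \qquad D(t) \;=\; \alpha + \beta\, v_f(t),
    \]
    ```

    where T is the driver reaction delay, v_ℓ and v_f are the leader and follower speeds, Δx is the spacing, and D(t) is the desired spacing.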

  16. Use of flow models to analyse loss of coolant accidents

    International Nuclear Information System (INIS)

    Pinet, Bernard

    1978-01-01

    This article summarises current work on developing the use of flow models to analyse loss-of-coolant accidents in pressurized-water plants. This work is being done jointly, in the context of the LOCA Technical Committee, by the CEA, EDF and FRAMATOME. The construction of the flow model is very closely based on theoretical studies of the two-fluid model. The laws of transfer at the interface and at the wall are tested experimentally. The representativity of the model then has to be checked in experiments involving several elementary physical phenomena [fr]

  17. Small GSK-3 Inhibitor Shows Efficacy in a Motor Neuron Disease Murine Model Modulating Autophagy.

    Directory of Open Access Journals (Sweden)

    Estefanía de Munck

    Full Text Available Amyotrophic lateral sclerosis (ALS) is a progressive motor neuron degenerative disease that has no effective treatment to date. Drug discovery has been hampered by the lack of knowledge of its molecular etiology, together with the limited animal models available for research. Recently, a motor neuron disease animal model was developed using β-N-methylamino-L-alanine (L-BMAA), a neurotoxic amino acid related to the appearance of ALS. In the present work, the neuroprotective role of VP2.51, a small heterocyclic GSK-3 inhibitor, is analysed in this novel murine model together with an analysis of autophagy. Daily administration of VP2.51 for two weeks, starting the first day after L-BMAA treatment, leads to total recovery of neurological symptoms and prevents the activation of autophagic processes in rats. These results show that the L-BMAA murine model can be used to test the efficacy of new drugs. In addition, the results confirm the therapeutic potential of GSK-3 inhibitors, and especially VP2.51, for the disease-modifying future treatment of motor neuron disorders like ALS.

  18. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2011-10-01

    Full Text Available The article presents the fundamental aspects of linear regression as a toolbox for macroeconomic analyses. It describes the estimation of the parameters, the statistical tests used, and homoscedasticity and heteroskedasticity. The use of econometric instruments in macroeconomics is an important factor in guaranteeing the quality of the models, analyses, results and interpretations that can be drawn at this level.
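
    A brief sketch of the toolbox the article describes: ordinary least squares on a macroeconomic-style relationship, with parameter estimates, t-tests, and a Breusch-Pagan check for heteroskedasticity. The variables and data are simulated placeholders.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from statsmodels.stats.diagnostic import het_breuschpagan

    rng = np.random.default_rng(7)
    gdp_growth = rng.normal(2.0, 1.0, 80)                  # hypothetical regressor
    consumption = 1.5 + 0.6 * gdp_growth + rng.normal(0, 0.3, 80)

    X = sm.add_constant(gdp_growth)
    fit = sm.OLS(consumption, X).fit()
    print(fit.params, fit.tvalues)                 # parameter estimates and t-tests
    lm_stat, lm_p, f_stat, f_p = het_breuschpagan(fit.resid, X)
    print(lm_p)                                    # small p signals heteroskedasticity
    ```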

  19. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Ihekwaba, Adoha

    2007-01-01

    A. Ihekwaba, R. Mardare. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems. Case study: NFkB system. In Proc. of International Conference of Computational Methods in Sciences and Engineering (ICCMSE), American Institute of Physics, AIP Proceedings, N 2...

  20. Models and analyses for inertial-confinement fusion-reactor studies

    International Nuclear Information System (INIS)

    Bohachevsky, I.O.

    1981-05-01

    This report describes models and analyses devised at Los Alamos National Laboratory to determine the technical characteristics of different inertial confinement fusion (ICF) reactor elements required for component integration into a functional unit. We emphasize the generic properties of the different elements rather than specific designs. The topics discussed are general ICF reactor design considerations; reactor cavity phenomena, including the restoration of interpulse ambient conditions; first-wall temperature increases and material losses; reactor neutronics and hydrodynamic blanket response to neutron energy deposition; and analyses of loads and stresses in the reactor vessel walls, including remarks about the generation and propagation of very short wavelength stress waves. A discussion of analytic approaches useful in integrations and optimizations of ICF reactor systems concludes the report

  1. Estimating carbon and showing impacts of drought using satellite data in regression-tree models

    Science.gov (United States)

    Boyte, Stephen; Wylie, Bruce K.; Howard, Danny; Dahal, Devendra; Gilmanov, Tagir G.

    2018-01-01

    Integrating spatially explicit biogeophysical and remotely sensed data into regression-tree models enables the spatial extrapolation of training data over large geographic spaces, allowing a better understanding of broad-scale ecosystem processes. The current study presents annual gross primary production (GPP) and annual ecosystem respiration (RE) for 2000–2013 in several short-statured vegetation types using carbon flux data from towers that are located strategically across the conterminous United States (CONUS). We calculate carbon fluxes (annual net ecosystem production [NEP]) for each year in our study period, which includes 2012 when drought and higher-than-normal temperatures influence vegetation productivity in large parts of the study area. We present and analyse carbon flux dynamics in the CONUS to better understand how drought affects GPP, RE, and NEP. Model accuracy metrics show strong correlation coefficients (r ≥ 94%) between training and estimated data for both GPP and RE. Overall, average annual GPP, RE, and NEP are relatively constant throughout the study period except during 2012 when almost 60% less carbon is sequestered than normal. These results allow us to conclude that this modelling method effectively estimates carbon dynamics through time and allows the exploration of impacts of meteorological anomalies and vegetation types on carbon dynamics.
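
    A hedged sketch of the regression-tree extrapolation step: a tree ensemble trained on tower-based targets with remotely sensed predictors, then evaluated with a correlation metric as above. The predictors are synthetic stand-ins, and a random forest is used here in place of the study's specific rule-based regression trees.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(8)
    n = 500
    ndvi = rng.uniform(0.1, 0.8, n)          # growing-season greenness
    precip = rng.uniform(100, 900, n)        # annual precipitation (mm)
    temp = rng.uniform(4, 18, n)             # mean annual temperature (deg C)
    X = np.column_stack([ndvi, precip, temp])
    gpp = 2000 * ndvi + 0.5 * precip - 20 * temp + rng.normal(0, 50, n)  # tower GPP

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, gpp)
    print(np.corrcoef(model.predict(X), gpp)[0, 1])  # training r, cf. r >= 94% above
    ```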

  2. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    Directory of Open Access Journals (Sweden)

    Wilson David P

    2008-02-01

    Full Text Available SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT, as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel, but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling, and their usefulness in this context is demonstrated.
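
    Two of the SaSAT-style tasks listed above, Latin hypercube sampling and rank-correlation sensitivity, can be sketched as follows; the two-parameter epidemic-flavoured output is invented, and plain Spearman correlation stands in for the partial rank correlation that SaSAT computes.

    ```python
    import numpy as np
    from scipy.stats import qmc, spearmanr

    sampler = qmc.LatinHypercube(d=2, seed=9)
    u = sampler.random(n=200)                            # equitable coverage of [0,1]^2
    sample = qmc.scale(u, [0.1, 0.01], [0.5, 0.2])       # map to parameter ranges
    beta, gamma = sample[:, 0], sample[:, 1]             # transmission and recovery rates
    r0 = beta / gamma                                    # simple model output (R0)

    # Rank-based sensitivity: correlation of each parameter with the output.
    rho_beta, _ = spearmanr(beta, r0)
    rho_gamma, _ = spearmanr(gamma, r0)
    print(rho_beta, rho_gamma)       # inputs ranked by influence, as in a tornado plot
    ```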

  3. A 1024 channel analyser of model FH 465

    International Nuclear Information System (INIS)

    Tang Cunxun

    1988-01-01

    The FH 465 is a renewed version of the model FH 451 1024-channel analyser. Besides the simple operation and fine display featured by its predecessor, the core memory has been replaced by semiconductor memory and the level of integration has been improved; the use of widely available low-power 74LS devices has not only greatly decreased the cost, but also allows the analyser to be easily interfaced with Apple-II, Great Wall-0520-CH or IBM-PC/XT microcomputers. The operating principle, main specifications and test results are described.

  4. arXiv Statistical Analyses of Higgs- and Z-Portal Dark Matter Models

    CERN Document Server

    Ellis, John; Marzola, Luca; Raidal, Martti

    2018-06-12

    We perform frequentist and Bayesian statistical analyses of Higgs- and Z-portal models of dark matter particles with spin 0, 1/2 and 1. Our analyses incorporate data from direct detection and indirect detection experiments, as well as LHC searches for monojet and monophoton events, and we also analyze the potential impacts of future direct detection experiments. We find acceptable regions of the parameter spaces for Higgs-portal models with real scalar, neutral vector, Majorana or Dirac fermion dark matter particles, and Z-portal models with Majorana or Dirac fermion dark matter particles. In many of these cases, there are interesting prospects for discovering dark matter particles in Higgs or Z decays, as well as dark matter particles weighing $\gtrsim 100$ GeV. Negative results from planned direct detection experiments would still allow acceptable regions for Higgs- and Z-portal models with Majorana or Dirac fermion dark matter particles.

  5. Vocational Teachers and Professionalism - A Model Based on Empirical Analyses

    DEFF Research Database (Denmark)

    Duch, Henriette Skjærbæk; Andreasen, Karen E

    Vocational Teachers and Professionalism - A Model Based on Empirical Analyses. Several theorists have developed models to illustrate the processes of adult learning and professional development (e.g. Illeris, Argyris, Engeström; Wahlgren & Aarkrog, Kolb and Wenger). Models can sometimes be criticized ... emphasis on the adult employee, the organization, its surroundings as well as other contextual factors. Our concern is adult vocational teachers attending a pedagogical course and teaching at vocational colleges. The aim of the paper is to discuss different models and develop a model concerning teachers at vocational colleges based on empirical data in a specific context, the vocational teacher-training course in Denmark. By offering a basis and concepts for the analysis of practice, such a model is meant to support the development of vocational teachers' professionalism in courses and in organizational contexts...

  6. Devising a New Model of Demand-Based Learning Integrated with Social Networks and Analyses of its Performance

    Directory of Open Access Journals (Sweden)

    Bekim Fetaji

    2018-02-01

    Full Text Available The focus of this research study is to devise a new model for demand-based learning integrated with social networks such as Facebook, Twitter, and others. The study investigates this by reviewing the published literature and carrying out a case study analysis of the new model's analytical perspectives on practical implementation. The study focuses on analyzing demand-based learning and investigating how it can be improved by devising a specific model that incorporates social network use. Statistical analyses of the questionnaire results, addressing the raised questions and hypotheses, showed that there is a need for introducing new models into the teaching process. The originality rests on the introduction of the social login approach to an educational environment; the approach counts as a contribution to developing a demand-based web application, which aims to modernize the educational pattern of communication, introduce the social login approach, and increase knowledge transfer as well as improve learners’ performance and skills. Insights and recommendations are provided, argued for and discussed.

  7. Material model for non-linear finite element analyses of large concrete structures

    NARCIS (Netherlands)

    Engen, Morten; Hendriks, M.A.N.; Øverli, Jan Arve; Åldstedt, Erik; Beushausen, H.

    2016-01-01

    A fully triaxial material model for concrete was implemented in a commercial finite element code. The only required input parameter was the cylinder compressive strength. The material model was suitable for non-linear finite element analyses of large concrete structures. The importance of including

  8. Taxing CO2 and subsidising biomass: Analysed in a macroeconomic and sectoral model

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik

    2000-01-01

    This paper analyses the combination of taxes and subsidies as an instrument to enable a reduction in CO2 emission. The objective of the study is to compare recycling of a CO2 tax revenue as a subsidy for biomass use as opposed to traditional recycling such as reduced income or corporate taxation....... A model of Denmark's energy supply sector is used to analyse the effect of a CO2 tax combined with using the tax revenue for biomass subsidies. The energy supply model is linked to a macroeconomic model such that the macroeconomic consequences of tax policies can be analysed along with the consequences...... for specific sectors such as agriculture. Electricity and heat are produced at heat and power plants utilising fuels which minimise total fuel cost, while the authorities regulate capacity expansion technologies. The effect of fuel taxes and subsidies on fuels is very sensitive to the fuel substitution...

  9. Quantifying and Analysing Neighbourhood Characteristics Supporting Urban Land-Use Modelling

    DEFF Research Database (Denmark)

    Hansen, Henning Sten

    2009-01-01

    Land-use modelling and spatial scenarios have gained increased attention as a means to meet the challenge of reducing uncertainty in spatial planning and decision-making. Several organisations have developed software for land-use modelling. Many of the recent modelling efforts incorporate...... cellular automata (CA) to accomplish spatially explicit land-use change modelling. Spatial interaction between neighbouring land-uses is an important component in urban cellular automata. Nevertheless, this component is usually calibrated through trial-and-error estimation. The aim of the current research project has...... been to quantify and analyse land-use neighbourhood characteristics and impart useful information for cell-based land-use modelling. The results of our research are a major step forward, because we have estimated rules for neighbourhood interaction from actually observed land-use changes on a yearly basis...
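    As a hedged illustration of what "neighbourhood characteristics" means in a cellular-automata land-use model, the Python sketch below counts land-use classes in the Moore neighbourhood of a raster cell. The grid, class labels and radius are hypothetical; this is not the estimation procedure used in the project.

```python
import numpy as np

# Hypothetical categorical land-use grid (0=open, 1=residential, 2=industry).
rng = np.random.default_rng(0)
grid = rng.integers(0, 3, size=(50, 50))

def neighbourhood_counts(grid, i, j, radius=1):
    """Count land-use classes in the Moore neighbourhood of cell (i, j)."""
    sl = grid[max(i - radius, 0):i + radius + 1,
              max(j - radius, 0):j + radius + 1]
    counts = np.bincount(sl.ravel(), minlength=3)
    counts[grid[i, j]] -= 1          # exclude the centre cell itself
    return counts

# e.g. the class composition around cell (10, 10):
print(neighbourhood_counts(grid, 10, 10))
```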

  10. Classifying Multi-Model Wheat Yield Impact Response Surfaces Showing Sensitivity to Temperature and Precipitation Change

    Science.gov (United States)

    Fronzek, Stefan; Pirttioja, Nina; Carter, Timothy R.; Bindi, Marco; Hoffmann, Holger; Palosuo, Taru; Ruiz-Ramos, Margarita; Tao, Fulu; Trnka, Miroslav; Acutis, Marco; hide

    2017-01-01

    Crop growth simulation models can differ greatly in their treatment of key processes and hence in their response to environmental conditions. Here, we used an ensemble of 26 process-based wheat models applied at sites across a European transect to compare their sensitivity to changes in temperature (minus 2 to plus 9 degrees Centigrade) and precipitation (minus 50 to plus 50 percent). Model results were analysed by plotting them as impact response surfaces (IRSs), classifying the IRS patterns of individual model simulations, describing these classes and analysing factors that may explain the major differences in model responses. The model ensemble was used to simulate yields of winter and spring wheat at four sites in Finland, Germany and Spain. Results were plotted as IRSs that show changes in yields relative to the baseline with respect to temperature and precipitation. IRSs of 30-year means and selected extreme years were classified using two approaches describing their pattern. The expert diagnostic approach (EDA) combines two aspects of IRS patterns: location of the maximum yield (nine classes) and strength of the yield response with respect to climate (four classes), resulting in a total of 36 combined classes defined using criteria pre-specified by experts. The statistical diagnostic approach (SDA) groups IRSs by comparing their pattern and magnitude, without attempting to interpret these features. It applies a hierarchical clustering method, grouping response patterns using a distance metric that combines the spatial correlation and Euclidian distance between IRS pairs. The two approaches were used to investigate whether different patterns of yield response could be related to different properties of the crop models, specifically their genealogy, calibration and process description. Although no single model property across a large model ensemble was found to explain the integrated yield response to temperature and precipitation perturbations, the
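    The SDA step described above can be approximated in a few lines: build a pairwise distance that blends pattern dissimilarity (one minus the correlation between two response surfaces) with a normalised Euclidean distance, then cluster hierarchically. The Python sketch below uses synthetic surfaces and an assumed 50/50 weighting; the paper's exact metric may differ.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Synthetic stand-ins for impact response surfaces: one 2-D yield-change
# grid (temperature x precipitation perturbations) per model.
rng = np.random.default_rng(1)
n_models, shape = 26, (12, 11)
irs = rng.normal(size=(n_models, *shape))

def combined_distance(a, b, w=0.5):
    """Blend pattern dissimilarity (1 - correlation) with magnitude
    (normalised Euclidean distance); the 50/50 weighting is assumed."""
    fa, fb = a.ravel(), b.ravel()
    corr = np.corrcoef(fa, fb)[0, 1]
    eucl = np.linalg.norm(fa - fb) / np.sqrt(fa.size)
    return w * (1.0 - corr) + (1.0 - w) * eucl

# Condensed pairwise distance vector -> hierarchical clustering.
d = np.array([combined_distance(irs[i], irs[j])
              for i in range(n_models) for j in range(i + 1, n_models)])
Z = linkage(d, method="average")
print(fcluster(Z, t=4, criterion="maxclust"))   # four response classes
```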

  11. Comparison of linear measurements and analyses taken from plaster models and three-dimensional images.

    Science.gov (United States)

    Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins

    2014-11-01

    Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and the reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). With the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic®, Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif). For the digital images, the measurement tools used were those from the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). The majority of the measurements obtained using the caliper and O3d were identical, and both were significantly different from those obtained using the MS. Intra-examiner agreement was lowest when using the MS. The results demonstrated that the accuracy and reproducibility of the tooth measurements and analyses from the plaster models using the caliper and from the digital models using O3d software were identical.
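    The Dahlberg formula mentioned above estimates method error from paired repeated measurements as the square root of the sum of squared differences divided by twice the number of pairs. A small Python sketch with made-up measurement values (not the study's data):

```python
import numpy as np

# Paired measurements of the same distances by two methods (values are
# illustrative, not the study's data).
caliper = np.array([8.12, 7.95, 10.31, 6.48, 9.77])
digital = np.array([8.09, 8.01, 10.28, 6.55, 9.70])

# Dahlberg's method error: sqrt(sum(d_i^2) / (2 n))
diff = caliper - digital
dahlberg = np.sqrt(np.sum(diff ** 2) / (2 * len(diff)))
print(f"Dahlberg error: {dahlberg:.3f} mm")
```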

  12. Reading Ability Development from Kindergarten to Junior Secondary: Latent Transition Analyses with Growth Mixture Modeling

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2016-10-01

    Full Text Available The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study (Kindergarten Class of 1998-99 data; Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006) under a dynamic systems perspective. To depict children's growth pattern, we extended the measurement part of latent transition analysis to the growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group, with few cross-level changes in their classes. After adding the environmental factors as predictors, analyses showed that children receiving higher teachers' ratings, with higher socioeconomic status, and of above-average poverty status would have a higher probability of transitioning into the higher ability group.
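    As a rough, hedged stand-in for the growth mixture machinery (a two-stage shortcut, not the authors' single-step latent transition model), one can fit per-child growth parameters by least squares and then a Gaussian mixture over them. All data below are simulated:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(7)
n_children, waves = 300, np.array([0.0, 1.0, 2.0, 4.0])  # assessment times (assumed)

# Simulate two latent trajectory classes (slower vs. faster readers).
cls = rng.integers(0, 2, n_children)
intercept = np.where(cls == 0, 20.0, 35.0) + rng.normal(0, 3, n_children)
slope = np.where(cls == 0, 4.0, 9.0) + rng.normal(0, 1, n_children)
scores = intercept[:, None] + slope[:, None] * waves + rng.normal(0, 2, (n_children, 4))

# Stage 1: per-child OLS intercept and slope.
X = np.column_stack([np.ones_like(waves), waves])
coef = np.linalg.lstsq(X, scores.T, rcond=None)[0].T  # shape (n_children, 2)

# Stage 2: mixture over growth parameters to recover trajectory classes.
gmm = GaussianMixture(n_components=2, random_state=0).fit(coef)
print(np.bincount(gmm.predict(coef)))
```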

  13. Multi-state models: metapopulation and life history analyses

    Directory of Open Access Journals (Sweden)

    Arnason, A. N.

    2004-06-01

    Full Text Available Multi–state models are designed to describe populations that move among a fixed set of categorical states. The obvious application is to population interchange among geographic locations such as breeding sites or feeding areas (e.g., Hestbeck et al., 1991; Blums et al., 2003; Cam et al., 2004), but they are increasingly used to address important questions of evolutionary biology and life history strategies (Nichols & Kendall, 1995). In these applications, the states include life history stages such as breeding states. The multi–state models, by permitting estimation of stage–specific survival and transition rates, can help assess trade–offs between life history mechanisms (e.g. Yoccoz et al., 2000). These trade–offs are also important in meta–population analyses where, for example, the pre–and post–breeding rates of transfer among sub–populations can be analysed in terms of target colony distance, density, and other covariates (e.g., Lebreton et al. 2003; Breton et al., in review). Further examples of the use of multi–state models in analysing dispersal and life–history trade–offs can be found in the session on Migration and Dispersal. In this session, we concentrate on applications that did not involve dispersal. These applications fall in two main categories: those that address life history questions using stage categories, and a more technical use of multi–state models to address problems arising from the violation of mark–recapture assumptions leading to the potential for seriously biased predictions or misleading insights from the models. Our plenary paper, by William Kendall (Kendall, 2004), gives an overview of the use of Multi–state Mark–Recapture (MSMR) models to address two such violations. The first is the occurrence of unobservable states that can arise, for example, from temporary emigration or by incomplete sampling coverage of a target population. Such states can also occur for life history reasons, such

  14. Model tests and numerical analyses on horizontal impedance functions of inclined single piles embedded in cohesionless soil

    Science.gov (United States)

    Goit, Chandra Shekhar; Saitoh, Masato

    2013-03-01

    Horizontal impedance functions of inclined single piles are measured experimentally for model soil-pile systems with both the effects of local soil nonlinearity and resonant characteristics. Two practical pile inclinations of 5° and 10°, in addition to a vertical pile, embedded in cohesionless soil and subjected to lateral harmonic pile head loadings over a wide range of frequencies are considered. Results obtained with low-to-high amplitudes of lateral loading on model soil-pile systems encased in a laminar shear box show that the local nonlinearities have a profound impact on the horizontal impedance functions of piles. Horizontal impedance functions of inclined piles are found to be smaller than those of the vertical pile, and the values decrease as the angle of pile inclination increases. Distinct values of horizontal impedance functions are obtained for the 'positive' and 'negative' cycles of harmonic loading, leading to asymmetric force-displacement relationships for the inclined piles. Validation of these experimental results is carried out through three-dimensional nonlinear finite element analyses, and the results from the numerical models are in good agreement with the experimental data. Sensitivity analyses conducted on the numerical models suggest that the consideration of local nonlinearity in the vicinity of the soil-pile interface influences the response of the soil-pile systems.

  15. Regional analyses of labor markets and demography: a model based Norwegian example.

    Science.gov (United States)

    Stambol, L S; Stolen, N M; Avitsland, T

    1998-01-01

    The authors discuss the regional REGARD model, developed by Statistics Norway to analyze the regional implications of macroeconomic development of employment, labor force, and unemployment. "In building the model, empirical analyses of regional producer behavior in manufacturing industries have been performed, and the relation between labor market development and regional migration has been investigated. Apart from providing a short description of the REGARD model, this article demonstrates the functioning of the model, and presents some results of an application." excerpt

  16. Microarray profiling shows distinct differences between primary tumors and commonly used preclinical models in hepatocellular carcinoma

    International Nuclear Information System (INIS)

    Wang, Weining; Iyer, N. Gopalakrishna; Tay, Hsien Ts’ung; Wu, Yonghui; Lim, Tony K. H.; Zheng, Lin; Song, In Chin; Kwoh, Chee Keong; Huynh, Hung; Tan, Patrick O. B.; Chow, Pierce K. H.

    2015-01-01

    Despite advances in therapeutics, outcomes for hepatocellular carcinoma (HCC) remain poor and there is an urgent need for efficacious systemic therapy. Unfortunately, drugs that are successful in preclinical studies often fail in the clinical setting, and we hypothesize that this is due to functional differences between primary tumors and commonly used preclinical models. In this study, we attempt to answer this question by comparing tumor morphology and gene expression profiles between primary tumors, xenografts and HCC cell lines. Hep G2 cell lines and tumor cells from patient tumor explants were injected subcutaneously (ectopically) into the flank and orthotopically into the liver parenchyma of Mus musculus SCID mice. The mice were euthanized after two weeks. RNA was extracted from the tumors, and gene expression profiling was performed using the GeneChip Human Genome U133 Plus 2.0. Principal component analyses (PCA) and construction of dendrograms were conducted using the Partek genomics suite. PCA showed that the commonly used HepG2 cell line model and its xenograft counterparts were vastly different from all fresh primary tumors. Expression profiles of primary tumors were also significantly divergent from their counterpart patient-derived xenograft (PDX) models, regardless of the site of implantation. Xenografts from the same primary tumors were more likely to cluster together regardless of site of implantation, although heat maps showed distinct differences in gene expression profiles between orthotopic and ectopic models. The data presented here challenge the utility of routinely used preclinical models. Models using HepG2 were vastly different from primary tumors and PDXs, suggesting that this is not clinically representative. Surprisingly, site of implantation (orthotopic versus ectopic) resulted in limited impact on gene expression profiles, and in both scenarios xenografts differed significantly from the original primary tumors, challenging the long
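    A hedged sketch of the PCA-plus-dendrogram workflow described above, applied to a synthetic expression matrix (the group offsets and dimensions are invented, not the study's data):

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage, dendrogram

rng = np.random.default_rng(5)
# Synthetic log-expression matrix: 12 samples x 2000 probes, with primary
# tumours and xenograft samples given different mean profiles.
base = rng.normal(size=2000)
primary = base + rng.normal(0, 0.3, size=(6, 2000))
xenograft = base + 1.5 + rng.normal(0, 0.3, size=(6, 2000))
expr = np.vstack([primary, xenograft])

pcs = PCA(n_components=2).fit_transform(expr)
print(pcs[:, 0])             # PC1 separates the two sample groups

Z = linkage(expr, method="average", metric="correlation")
dendrogram(Z, no_plot=True)  # dendrogram structure without plotting
```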

  17. On the use of uncertainty analyses to test hypotheses regarding deterministic model predictions of environmental processes

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Bittner, E.A.; Essington, E.H.

    1995-01-01

    This paper illustrates the use of Monte Carlo parameter uncertainty and sensitivity analyses to test hypotheses regarding predictions of deterministic models of environmental transport, dose, risk and other phenomena. The methodology is illustrated by testing whether 238Pu is transferred more readily than 239+240Pu from the gastrointestinal (GI) tract of cattle to their tissues (muscle, liver and blood). This illustration is based on a study wherein beef-cattle grazed for up to 1064 days on a fenced plutonium (Pu)-contaminated arid site in Area 13 near the Nevada Test Site in the United States. Periodically, cattle were sacrificed and their tissues analyzed for Pu and other radionuclides. Conditional sensitivity analyses of the model predictions were also conducted. These analyses indicated that Pu cattle tissue concentrations had the largest impact of any model parameter on the pdf of predicted Pu fractional transfers. Issues that arise in conducting uncertainty and sensitivity analyses of deterministic models are discussed. (author)
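    The logic of using Monte Carlo uncertainty analysis to test such a hypothesis can be sketched generically: propagate assumed parameter distributions for the two nuclides through the transfer model and check whether the resulting interval for their ratio excludes unity. All distributions below are hypothetical placeholders, not the study's values:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 10_000

# Hypothetical lognormal uncertainty on GI-tract-to-tissue fractional
# transfer for the two plutonium forms (values are illustrative).
f_238 = rng.lognormal(mean=np.log(1e-4), sigma=0.6, size=n)
f_239240 = rng.lognormal(mean=np.log(5e-5), sigma=0.6, size=n)

# Monte Carlo distribution of the ratio of fractional transfers.
ratio = f_238 / f_239240
lo, hi = np.percentile(ratio, [2.5, 97.5])
print(f"ratio median={np.median(ratio):.2f}, 95% interval=({lo:.2f}, {hi:.2f})")
# If the interval excludes 1, the hypothesis of equal transfer is rejected.
```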

  18. Phenotypic and functional analyses show stem cell-derived hepatocyte-like cells better mimic fetal rather than adult hepatocytes.

    Science.gov (United States)

    Baxter, Melissa; Withey, Sarah; Harrison, Sean; Segeritz, Charis-Patricia; Zhang, Fang; Atkinson-Dell, Rebecca; Rowe, Cliff; Gerrard, Dave T; Sison-Young, Rowena; Jenkins, Roz; Henry, Joanne; Berry, Andrew A; Mohamet, Lisa; Best, Marie; Fenwick, Stephen W; Malik, Hassan; Kitteringham, Neil R; Goldring, Chris E; Piper Hanley, Karen; Vallier, Ludovic; Hanley, Neil A

    2015-03-01

    Hepatocyte-like cells (HLCs), differentiated from pluripotent stem cells by the use of soluble factors, can model human liver function and toxicity. However, at present HLC maturity and whether any deficit represents a true fetal state or aberrant differentiation is unclear and compounded by comparison to potentially deteriorated adult hepatocytes. Therefore, we generated HLCs from multiple lineages, using two different protocols, for direct comparison with fresh fetal and adult hepatocytes. Protocols were developed for robust differentiation. Multiple transcript, protein and functional analyses compared HLCs to fresh human fetal and adult hepatocytes. HLCs were comparable to those of other laboratories by multiple parameters. Transcriptional changes during differentiation mimicked human embryogenesis and showed more similarity to pericentral than periportal hepatocytes. Unbiased proteomics demonstrated greater proximity to liver than 30 other human organs or tissues. However, by comparison to fresh material, HLC maturity was proven by transcript, protein and function to be fetal-like and short of the adult phenotype. The expression of 81% phase 1 enzymes in HLCs was significantly upregulated and half were statistically not different from fetal hepatocytes. HLCs secreted albumin and metabolized testosterone (CYP3A) and dextrorphan (CYP2D6) like fetal hepatocytes. In seven bespoke tests, devised by principal components analysis to distinguish fetal from adult hepatocytes, HLCs from two different source laboratories consistently demonstrated fetal characteristics. HLCs from different sources are broadly comparable with unbiased proteomic evidence for faithful differentiation down the liver lineage. This current phenotype mimics human fetal rather than adult hepatocytes. Copyright © 2014 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.

  19. An efficient modeling of fine air-gaps in tokamak in-vessel components for electromagnetic analyses

    International Nuclear Information System (INIS)

    Oh, Dong Keun; Pak, Sunil; Jhang, Hogun

    2012-01-01

    Highlights: ► A simple and efficient modeling technique is introduced to avoid the undesirably massive air mesh usually encountered when modeling fine structures in tokamak in-vessel components. ► The modeling method is based on decoupled nodes at the element boundaries mimicking the air gaps. ► We demonstrate its viability and efficacy by comparing it with brute-force modeling of the air gaps and with an effective-resistivity approximation used in place of detailed modeling. ► Application of the method to the ITER machine was successfully carried out without sacrificing computational resources and speed. - Abstract: A simple and efficient modeling technique is presented for a proper analysis of complicated eddy current flows in conducting structures with fine air gaps. It is based on the idea of replacing a slit with the decoupled boundary of finite elements. The viability and efficacy of the technique is demonstrated in a simple problem. Application of the method to electromagnetic load analyses during plasma disruptions in ITER has been successfully carried out without sacrificing computational resources and speed. This shows the proposed method is applicable to a practical system with complicated geometrical structures.

  20. Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses

    CERN Document Server

    Li, Shu; The ATLAS collaboration

    2017-01-01

    Proceeding for the poster presentation at LHCP2017, Shanghai, China on the topic of "Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses" (ATL-PHYS-SLIDE-2017-265 https://cds.cern.ch/record/2265389) Deadline: 01/09/2017

  1. Model shows future cut in U.S. ozone levels

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    A joint U.S. auto-oil industry research program says modeling shows that changing gasoline composition can reduce ozone levels for Los Angeles in 2010 and for New York City and Dallas-Fort Worth in 2005. The air quality modeling was based on vehicle emissions research data released late last year (OGJ, Dec. 24, 1990, p. 20). The effort is sponsored by the big three auto manufacturers and 14 oil companies. Sponsors say the cars and small trucks account for about one third of ozone generated in the three cities studied but by 2005-10 will account for only 5-9%

  2. Network class superposition analyses.

    Directory of Open Access Journals (Sweden)

    Carl A B Pearson

    Full Text Available Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.
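    A toy Python version of the superposition idea, assuming a hypothetical rule set and primary-function constraint for a two-node Boolean network: each class member contributes its deterministic state-transition matrix, and the normalised sum is the stochastic matrix T. The trace of T then gives the expected number of point attractors across the class.

```python
import numpy as np
from itertools import product

# Toy class: 2-node Boolean networks where each node's update rule is one
# of a small hypothetical rule set; the "primary function" constraint is a
# required transition 11 -> 01 (all of this is illustrative, not the
# paper's Strong Inhibition rule or yeast model).
rules = [lambda a, b: a and b, lambda a, b: a or b,
         lambda a, b: not a, lambda a, b: b]

states = list(product([0, 1], repeat=2))
T = np.zeros((4, 4))
n_members = 0

for r1, r2 in product(rules, repeat=2):
    step = {s: (int(r1(*s)), int(r2(*s))) for s in states}
    if step[(1, 1)] != (0, 1):          # primary-function constraint
        continue
    n_members += 1
    for i, s in enumerate(states):
        T[i, states.index(step[s])] += 1.0

T /= n_members                           # row-stochastic superposition
print(T)
# Expected number of point attractors across the class = trace of T.
print("mean point attractors:", np.trace(T))
```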

  3. YALINA Booster subcritical assembly modeling and analyses

    International Nuclear Information System (INIS)

    Talamo, A.; Gohar, Y.; Aliberti, G.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.; Sadovich, S.

    2010-01-01

    Full text: Accurate simulation models of the YALINA Booster assembly of the Joint Institute for Power and Nuclear Research (JIPNR)-Sosny, Belarus have been developed by Argonne National Laboratory (ANL) of the USA. YALINA-Booster has coupled zones operating with fast and thermal neutron spectra, which requires special attention in the modelling process. Three different uranium enrichments of 90%, 36% or 21% were used in the fast zone and 10% uranium enrichment was used in the thermal zone. Two of the most advanced Monte Carlo computer programs have been utilized for the ANL analyses: MCNP of the Los Alamos National Laboratory and MONK of British Nuclear Fuels Limited and SERCO Assurance. The developed geometrical models for both computer programs modelled all the details of the YALINA Booster facility as described in the technical specifications defined in the International Atomic Energy Agency (IAEA) report without any geometrical approximation or material homogenization. Material impurities and the measured material densities have been used in the models. The obtained results for the neutron multiplication factors calculated in criticality mode (keff) and in source mode (ksrc) with an external neutron source from the two Monte Carlo programs are very similar. Different external neutron sources have been investigated including californium, deuterium-deuterium (D-D), and deuterium-tritium (D-T) neutron sources. The spatial neutron flux profiles and the neutron spectra in the experimental channels were calculated. In addition, the kinetic parameters were defined including the effective delayed neutron fraction, the prompt neutron lifetime, and the neutron generation time. A new calculation methodology has been developed at ANL to simulate the pulsed neutron source experiments. In this methodology, the MCNP code is used to simulate the detector response from a single pulse of the external neutron source and a C code is used to superimpose the pulse until the

  4. Global analyses of historical masonry buildings: Equivalent frame vs. 3D solid models

    Science.gov (United States)

    Clementi, Francesco; Mezzapelle, Pardo Antonio; Cocchi, Gianmichele; Lenci, Stefano

    2017-07-01

    The paper analyses the seismic vulnerability of two different masonry buildings. It provides both an advanced 3D modelling with solid elements and an equivalent frame modelling. The global structural behaviour and the dynamic properties of the compound have been evaluated using the Finite Element Modelling (FEM) technique, where the nonlinear behaviour of masonry has been taken into account by proper constitutive assumptions. A sensitivity analysis is done to evaluate the effect of the choice of the structural models.

  5. PWR plant transient analyses using TRAC-PF1

    International Nuclear Information System (INIS)

    Ireland, J.R.; Boyack, B.E.

    1984-01-01

    This paper describes some of the pressurized water reactor (PWR) transient analyses performed at Los Alamos for the US Nuclear Regulatory Commission using the Transient Reactor Analysis Code (TRAC-PF1). Many of the transient analyses performed directly address current PWR safety issues. Included in this paper are examples of two safety issues addressed by TRAC-PF1: pressurized thermal shock (PTS) and feed-and-bleed cooling for Oconee-1. The calculations performed were plant specific in that details of both the primary and secondary sides were modeled, in addition to models of the plant integrated control systems. The results of these analyses show that for these two transients, the reactor cores remained covered and cooled at all times, posing no real threat either to the reactor system or to the public

  6. Gamma-ray pulsar physics: gap-model populations and light-curve analyses in the Fermi era

    International Nuclear Information System (INIS)

    Pierbattista, M.

    2010-01-01

    This thesis research focusses on the study of the young and energetic isolated ordinary pulsar population detected by the Fermi gamma-ray space telescope. We compared the model expectations of four emission models and the LAT data. We found that all the models fail to reproduce the LAT detections, in particular the large number of high-Ė objects observed. This inconsistency is not model dependent. A discrepancy in the radio-loud/radio-quiet object ratio was also found between the observed and predicted samples. The L_γ ∝ Ė^0.5 relation is robustly confirmed by all the assumed models, with particular agreement in the slot gap (SG) case. On luminosity grounds, the intermediate-altitude emission of the two-pole caustic SG model is favoured. The beaming factor f_Ω shows an Ė dependency that is slightly visible in the SG case. Estimates of the pulsar orientations have been obtained to explain the simultaneous gamma and radio light curves. By analysing the solutions we found a relation between the observed energy cutoff and the width of the emission slot gap. This relation has been theoretically predicted. A possible magnetic obliquity α alignment with time is rejected, for all the models, on timescales of the order of 10^6 years. The light-curve morphology study shows that the outer magnetosphere gap emission models (OGs) are favoured to explain the observed radio-gamma lag. The light-curve moment studies (symmetry and sharpness), on the contrary, favour a two-pole caustic SG emission. All the model predictions suggest a different magnetic field layout, with a hybrid two-pole caustic and intermediate-altitude emission, to explain both the pulsar luminosity and light-curve morphology. The low-magnetosphere emission mechanism of the polar cap model is systematically rejected by all the tests done. (author) [fr]

  7. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses.

    Science.gov (United States)

    Welton, Nicky J; Soares, Marta O; Palmer, Stephen; Ades, Anthony E; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M

    2015-07-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. © The Author(s) 2015.
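    For context, the random-effects mean and the predictive distribution mentioned above can be computed with the standard DerSimonian-Laird estimator; the study effect estimates below are invented for illustration and are not from the sepsis assessment:

```python
import numpy as np

# Hypothetical log-odds-ratio estimates and standard errors from 6 trials.
y = np.array([-0.30, -0.10, -0.45, 0.05, -0.25, -0.60])
se = np.array([0.15, 0.20, 0.25, 0.18, 0.30, 0.22])

w = 1.0 / se**2
mu_fixed = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - mu_fixed) ** 2)
df = len(y) - 1
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - df) / c)            # DerSimonian-Laird heterogeneity

w_re = 1.0 / (se**2 + tau2)
mu_re = np.sum(w_re * y) / np.sum(w_re)  # random-effects mean
se_re = np.sqrt(1.0 / np.sum(w_re))

# Predictive interval for the effect in a *new* setting (normal approx.).
pred_sd = np.sqrt(se_re**2 + tau2)
print(f"RE mean: {mu_re:.3f} +/- {1.96*se_re:.3f}")
print(f"Predictive interval: ({mu_re - 1.96*pred_sd:.3f}, {mu_re + 1.96*pred_sd:.3f})")
```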

  8. Vibration tests and analyses of the reactor building model on a small scale

    International Nuclear Information System (INIS)

    Tsuchiya, Hideo; Tanaka, Mitsuru; Ogihara, Yukio; Moriyama, Ken-ichi; Nakayama, Masaaki

    1985-01-01

    The purpose of this paper is to describe the vibration tests and the simulation analyses of a small-scale reactor building model. The model vibration tests were performed to investigate the vibrational characteristics of the combined super-structure and to verify the computer code based on Dr. H. Tajimi's Thin Layered Element Theory, using a uniaxial shaking table (60 cm x 60 cm). The specimens consist of a ground model, three structural models (prestressed concrete containment vessel, inner concrete structure, and enclosure building), a combined structural model and a combined structure-soil interaction model. These models are made of silicon-rubber, and they have a scale of 1:600. Harmonic step-by-step excitation of 40 gals was performed to investigate the vibrational characteristics of each structural model. The responses of the specimens to harmonic excitation were measured by optical displacement meters and analyzed by a real-time spectrum analyzer. The resonance and phase-lag curves of the specimens relative to the shaking table were obtained. In the tests of the combined structure-soil interaction model, three predominant frequencies were observed in the resonance curves. These values were in good agreement with the analytical transfer function curves from the computer code. From the vibration tests and the simulation analyses, the silicon-rubber model test is useful for the fundamental study of structural problems, and the computer code based on the Thin Layered Element Theory simulates the test results well. (Kobozono, M.)

  9. Modeling hard clinical end-point data in economic analyses.

    Science.gov (United States)

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states. Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data are reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are recommended.

  10. Steady-state and accident analyses of PBMR with the computer code SPECTRA

    International Nuclear Information System (INIS)

    Stempniewicz, Marek M.

    2002-01-01

    The SPECTRA code is an accident analysis code developed at NRG. It is designed for thermal-hydraulic analyses of nuclear or conventional power plants. The code is capable of analysing the whole power plant, including the reactor vessel, primary system, various control and safety systems, containment and reactor building. The aim of the work presented in this paper was to prepare a preliminary thermal-hydraulic model of the PBMR for SPECTRA, and to perform steady-state and accident analyses. In order to assess SPECTRA's capability to model PBMR reactors, a model of the INCOGEN system was prepared first. Steady-state and accident scenarios were analyzed for the INCOGEN configuration. Results were compared to those obtained earlier with INAS and OCTOPUS/PANTHERMIX, and a good agreement was obtained. Accident analyses with the PBMR model gave qualitatively good results. It is concluded that SPECTRA is a suitable tool for analyzing high temperature reactors, such as INCOGEN or, for example, the PBMR (Pebble Bed Modular Reactor). Analyses of the INCOGEN and PBMR systems showed that in all analyzed cases the fuel temperatures remained within the acceptable limits. Consequently, there is no danger of a release of radioactivity to the environment. It may be concluded that these are promising designs for future safe industrial reactors. (author)

  11. Hybrid Logical Analyses of the Ambient Calculus

    DEFF Research Database (Denmark)

    Bolander, Thomas; Hansen, Rene Rydhof

    2010-01-01

    In this paper, hybrid logic is used to formulate three control flow analyses for Mobile Ambients, a process calculus designed for modelling mobility. We show that hybrid logic is very well-suited to express the semantic structure of the ambient calculus and how features of hybrid logic can...

  12. Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach

    International Nuclear Information System (INIS)

    2014-12-01

    In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise: · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population, · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included, · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps, · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details, · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and · an overview of the

  13. Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2014-12-15

    In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise: · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population, · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included, · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps, · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details, · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and · an overview of the

  14. Applications of one-dimensional models in simplified inelastic analyses

    International Nuclear Information System (INIS)

    Kamal, S.A.; Chern, J.M.; Pai, D.H.

    1980-01-01

    This paper presents an approximate inelastic analysis based on geometric simplification with emphasis on its applicability, modeling, and the method of defining the loading conditions. Two problems are investigated: a one-dimensional axisymmetric model of generalized plane strain thick-walled cylinder is applied to the primary sodium inlet nozzle of the Clinch River Breeder Reactor Intermediate Heat Exchanger (CRBRP-IHX), and a finite cylindrical shell is used to simulate the branch shell forging (Y) junction. The results are then compared with the available detailed inelastic analyses under cyclic loading conditions in terms of creep and fatigue damages and inelastic ratchetting strains per the ASME Code Case N-47 requirements. In both problems, the one-dimensional simulation is able to trace the detailed stress-strain response. The quantitative comparison is good for the nozzle, but less satisfactory for the Y junction. Refinements are suggested to further improve the simulation

  15. NUMERICAL MODELLING AS NON-DESTRUCTIVE METHOD FOR THE ANALYSES AND DIAGNOSIS OF STONE STRUCTURES: MODELS AND POSSIBILITIES

    Directory of Open Access Journals (Sweden)

    Nataša Štambuk-Cvitanović

    1999-12-01

    Full Text Available Given the need for the analysis, diagnosis and preservation of existing valuable stone masonry structures and ancient monuments in today's European urban cores, numerical modelling has become an efficient tool for investigating structural behaviour. It should be supported by experimentally obtained input data and treated as part of a general combined approach, particularly alongside non-destructive techniques applied to the structure or a model of it. For structures or details which may require more complex analyses, three numerical models based on the finite element technique are suggested: (1) a standard linear model; (2) a linear model with contact (interface) elements; and (3) a non-linear elasto-plastic and orthotropic model. The applicability of these models depends upon the accuracy of the approach or the type of problem, and is presented on some characteristic examples.

  16. Generic uncertainty model for DETRA for environmental consequence analyses. Application and sample outputs

    International Nuclear Information System (INIS)

    Suolanen, V.; Ilvonen, M.

    1998-10-01

    Computer model DETRA applies a dynamic compartment modelling approach. The compartment structure of each considered application can be tailored individually. This flexible modelling method makes it possible to consider the transfer of radionuclides in various cases: the aquatic environment and related food chains, the terrestrial environment, food chains in general and foodstuffs, body burden analyses of humans, etc. In the former study on this subject, modernization of the user interface of the DETRA code was carried out. This new interface works in the Windows environment and the usability of the code has been improved. The objective of this study has been to further develop and diversify the user interface so that probabilistic uncertainty analyses can also be performed by DETRA. The most common probability distributions are available: uniform, truncated Gaussian and triangular. The corresponding logarithmic distributions are also available. All input data related to a considered case can be varied, although this option is seldom needed. The calculated output values can be selected as monitored values at certain simulation time points defined by the user. The results of a sensitivity run are immediately available after simulation as graphical presentations. These outcomes are distributions generated for varied parameters, density functions of monitored parameters and complementary cumulative density functions (CCDF). An application considered in connection with this work was the estimation of contamination of milk caused by radioactive deposition of Cs (10 kBq(Cs-137)/m²). The multi-sequence calculation model applied consisted of a pasture modelling part and a dormant season modelling part. These two sequences were linked periodically, simulating the realistic practice of caretaking of domestic animals in Finland. The most important parameters were varied in this exercise. The performed diversification of the user interface of the DETRA code seems to provide an easily

  17. Visualizing Three-dimensional Slab Geometries with ShowEarthModel

    Science.gov (United States)

    Chang, B.; Jadamec, M. A.; Fischer, K. M.; Kreylos, O.; Yikilmaz, M. B.

    2017-12-01

    Seismic data that characterize the morphology of modern subducted slabs on Earth suggest that a two-dimensional paradigm is no longer adequate to describe the subduction process. Here we demonstrate the effect of data exploration of three-dimensional (3D) global slab geometries with the open source program ShowEarthModel. ShowEarthModel was designed specifically to support data exploration, by focusing on interactivity and real-time response using the Vrui toolkit. Sixteen movies are presented that explore the 3D complexity of modern subduction zones on Earth. The first movie provides a guided tour through the Earth's major subduction zones, comparing the global slab geometry data sets of Gudmundsson and Sambridge (1998), Syracuse and Abers (2006), and Hayes et al. (2012). Fifteen regional movies explore the individual subduction zones and regions intersecting slabs, using the Hayes et al. (2012) slab geometry models where available and the Engdahl and Villasenor (2002) global earthquake data set. Viewing the subduction zones in this way provides an improved conceptualization of the 3D morphology within a given subduction zone as well as the 3D spatial relations between the intersecting slabs. This approach provides a powerful tool for rendering earth properties and broadening capabilities in both Earth Science research and education by allowing for whole earth visualization. The 3D characterization of global slab geometries is placed in the context of 3D slab-driven mantle flow and observations of shear wave splitting in subduction zones. These visualizations contribute to the paradigm shift from a 2D to 3D subduction framework by facilitating the conceptualization of the modern subduction system on Earth in 3D space.

  18. Food pattern modeling shows that the 2010 Dietary Guidelines for sodium and potassium cannot be met simultaneously

    Science.gov (United States)

    Maillot, Matthieu; Monsivais, Pablo; Drewnowski, Adam

    2013-01-01

    The 2010 US Dietary Guidelines recommended limiting intake of sodium to 1500 mg/d for people older than 50 years, African Americans, and those suffering from chronic disease. The guidelines recommended that all other people consume less than 2300 mg sodium and 4700 mg of potassium per day. The theoretical feasibility of meeting the sodium and potassium guidelines while simultaneously maintaining nutritional adequacy of the diet was tested using food pattern modeling based on linear programming. Dietary data from the National Health and Nutrition Examination Survey 2001-2002 were used to create optimized food patterns for 6 age-sex groups. Linear programming models determined the boundary conditions for the potassium and sodium content of the modeled food patterns that would also be compatible with other nutrient goals. Linear programming models also sought to determine the amounts of sodium and potassium that both would be consistent with the ratio of Na to K of 0.49 and would cause the least deviation from the existing food habits. The 6 sets of food patterns were created before and after an across-the-board 10% reduction in sodium content of all foods in the Food and Nutrition Database for Dietary Studies. Modeling analyses showed that the 2010 Dietary Guidelines for sodium were incompatible with potassium guidelines and with nutritionally adequate diets, even after reducing the sodium content of all US foods by 10%. Feasibility studies should precede or accompany the issuing of dietary guidelines to the public. PMID:23507224
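    The feasibility question can be posed as a small linear program. The Python sketch below, with made-up per-serving nutrient values (not FNDDS data) and only three foods, checks whether sodium, potassium and energy constraints can be met simultaneously:

```python
import numpy as np
from scipy.optimize import linprog

# Toy food set: sodium (mg), potassium (mg) and kcal per serving.
# Values are illustrative, not from FNDDS.
foods = ["bread", "banana", "milk"]
sodium = np.array([400.0, 1.0, 100.0])
potassium = np.array([70.0, 420.0, 370.0])
kcal = np.array([80.0, 105.0, 120.0])

# Feasibility LP: minimise total servings subject to
#   sodium <= 1500 mg, potassium >= 4700 mg, energy >= 2000 kcal.
res = linprog(
    c=np.ones(3),
    A_ub=np.vstack([sodium, -potassium, -kcal]),
    b_ub=np.array([1500.0, -4700.0, -2000.0]),
    bounds=[(0, 20)] * 3,
)
print("feasible:", res.success)
if res.success:
    print(dict(zip(foods, np.round(res.x, 1))))
```

With a realistic food database and the full set of nutrient adequacy constraints, an infeasible LP is exactly the situation the abstract describes: no diet satisfies both the sodium ceiling and the potassium floor at once.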

  19. Passive safety injection experiments and analyses (PAHKO)

    International Nuclear Information System (INIS)

    Tuunanen, J.

    1998-01-01

    PAHKO project involved experiments on the PACTEL facility and computer simulations of selected experiments. The experiments focused on the performance of Passive Safety Injection Systems (PSIS) of Advanced Light Water Reactors (ALWRs) in Small Break Loss-Of-Coolant Accident (SBLOCA) conditions. The PSIS consisted of a Core Make-up Tank (CMT) and two pipelines (Pressure Balancing Line, PBL, and Injection Line, IL). The examined PSIS worked efficiently in SBLOCAs although the flow through the PSIS stopped temporarily if the break was very small and the hot water filled the CMT. The experiments demonstrated the importance of the flow distributor in the CMT to limit rapid condensation. The project included validation of three thermal-hydraulic computer codes (APROS, CATHARE and RELAP5). The analyses showed the codes are capable to simulate the overall behaviour of the transients. The detailed analyses of the results showed some models in the codes still need improvements. Especially, further development of models for thermal stratification, condensation and natural circulation flow with small driving forces would be necessary for accurate simulation of the PSIS phenomena. (orig.)

  20. Analysing, Interpreting, and Testing the Invariance of the Actor-Partner Interdependence Model

    Directory of Open Access Journals (Sweden)

    Gareau, Alexandre

    2016-09-01

    Full Text Available Although in recent years researchers have begun to utilize dyadic data analyses such as the actor-partner interdependence model (APIM), certain limitations to the applicability of these models still exist. Given the complexity of APIMs, most researchers will often use observed scores to estimate the model's parameters, which can significantly limit and underestimate statistical results. The aim of this article is to highlight the importance of conducting a confirmatory factor analysis (CFA) of equivalent constructs between dyad members (i.e., measurement equivalence/invariance; ME/I). Different steps for merging CFA and APIM procedures will be detailed in order to shed light on new and integrative methods.

  1. Analysing spatially extended high-dimensional dynamics by recurrence plots

    Energy Technology Data Exchange (ETDEWEB)

    Marwan, Norbert, E-mail: marwan@pik-potsdam.de [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Kurths, Jürgen [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Humboldt Universität zu Berlin, Institut für Physik (Germany); Nizhny Novgorod State University, Department of Control Theory, Nizhny Novgorod (Russian Federation); Foerster, Saskia [GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing, Telegrafenberg, 14473 Potsdam (Germany)

    2015-05-08

    Recurrence plot based measures of complexity are capable tools for characterizing complex dynamics. In this letter we show the potential of selected recurrence plot measures for the investigation of even high-dimensional dynamics. We apply this method to spatially extended chaos, such as that derived from the Lorenz96 model, and show that the recurrence plot based measures can qualitatively characterize typical dynamical properties such as chaotic or periodic dynamics. Moreover, we demonstrate its power by analysing satellite image time series of vegetation cover with contrasting dynamics as a spatially extended and potentially high-dimensional example from the real world. - Highlights: • We use recurrence plots for analysing spatially extended dynamics. • We investigate the high-dimensional chaos of the Lorenz96 model. • The approach distinguishes different spatio-temporal dynamics. • We use the method for studying vegetation cover time series.
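    A minimal recurrence plot computation for a scalar series, thresholding pairwise distances and reporting the recurrence rate (the example series and the threshold choice are assumptions, not the letter's data):

```python
import numpy as np

# Short example series: logistic map in the chaotic regime.
x = np.empty(500)
x[0] = 0.4
for t in range(499):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

# Recurrence matrix: R_ij = 1 if |x_i - x_j| < eps.
eps = 0.1 * np.std(x)
R = (np.abs(x[:, None] - x[None, :]) < eps).astype(int)

# Recurrence rate: fraction of recurrent pairs, excluding the main diagonal.
n = len(x)
rr = (R.sum() - n) / (n * n - n)
print(f"recurrence rate: {rr:.3f}")
```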

  2. Demographical history and palaeodistribution modelling show range shift towards Amazon Basin for a Neotropical tree species in the LGM.

    Science.gov (United States)

    Vitorino, Luciana Cristina; Lima-Ribeiro, Matheus S; Terribile, Levi Carina; Collevatti, Rosane G

    2016-10-13

    We studied the phylogeography and demographical history of Tabebuia serratifolia (Bignoniaceae) to understand the disjunct geographical distribution of South American seasonally dry tropical forests (SDTFs). We specifically tested whether the multiple and isolated patches of SDTFs are current climatic relicts of a widespread and continuously distributed dry forest during the last glacial maximum (LGM), the so-called South American dry forest refugia hypothesis, using ecological niche modelling (ENM) and statistical phylogeography. We sampled 235 individuals of T. serratifolia in 17 populations in Brazil and analysed the polymorphisms at three intergenic chloroplast regions and ITS nuclear ribosomal DNA. Coalescent analyses showed a demographical expansion in the last c. 130 ka (thousand years before present). Simulations and ENM also showed that the current spatial pattern of genetic diversity is most likely due to a scenario of range expansion and range shift towards the Amazon Basin during the colder and more arid climatic conditions associated with the LGM, matching the expectation of the South American dry forest refugia hypothesis, although contrasting with the Pleistocene Arc hypothesis. Populations in more stable areas or with higher suitability through time showed higher genetic diversity. Postglacial range shift towards the Southeast and Atlantic coast may have led to spatial genome assortment due to leading-edge colonization as the species tracked suitable environments, leading to lower genetic diversity in populations at a greater distance from the distribution centroid at 21 ka. Haplotype sharing or common ancestry among populations from the Caatinga in Northeast Brazil, the Atlantic Forest in the Southeast and the Cerrado biome, together with the ENM, evince the past connection among these biomes.

  3. Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts.

    Energy Technology Data Exchange (ETDEWEB)

    Sevougian, S. David; Freeze, Geoffrey A.; Gardner, William Payton; Hammond, Glenn Edward; Mariner, Paul

    2014-09-01

    directly, rather than through simplified abstractions. It also allows for complex representations of the source term, e.g., the explicit representation of many individual waste packages (i.e., meter-scale detail of an entire waste emplacement drift). This report fulfills the Generic Disposal System Analysis Work Package Level 3 Milestone - Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts (M3FT-14SN0808032).

  4. Process of Integrating Screening and Detailed Risk-based Modeling Analyses to Ensure Consistent and Scientifically Defensible Results

    International Nuclear Information System (INIS)

    Buck, John W.; McDonald, John P.; Taira, Randal Y.

    2002-01-01

    To support cleanup and closure of these tanks, modeling is performed to understand and predict potential impacts to human health and the environment. Pacific Northwest National Laboratory developed a screening tool for the United States Department of Energy, Office of River Protection, that estimates, from a strategic planning perspective, the long-term human health risk posed by potential tank releases to the environment. This tool is being conditioned to more detailed model analyses to ensure consistency between studies and to provide scientific defensibility. Once the conditioning is complete, the system will be used to screen alternative cleanup and closure strategies. The integration of screening and detailed models provides consistent analyses, efficiencies in resources, and positive feedback between the various modeling groups. This approach of conditioning a screening methodology to more detailed analyses provides decision-makers with timely and defensible information and increases confidence in the results on the part of clients, regulators, and stakeholders.

  5. Evaluation of Temperature and Humidity Profiles of Unified Model and ECMWF Analyses Using GRUAN Radiosonde Observations

    Directory of Open Access Journals (Sweden)

    Young-Chan Noh

    2016-07-01

    Temperature and water vapor profiles from the Korea Meteorological Administration (KMA) and the United Kingdom Met Office (UKMO) Unified Model (UM) data assimilation systems and from reanalysis fields from the European Centre for Medium-Range Weather Forecasts (ECMWF) were assessed using collocated radiosonde observations from the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) for January–December 2012. The motivation was to examine the overall performance of data assimilation outputs. The difference statistics of the collocated model outputs versus the radiosonde observations indicated good agreement for temperature amongst the datasets, while less agreement was found for relative humidity. A comparison of the UM outputs from the UKMO and KMA revealed that they are similar to each other. The introduction of the new version of the UM into the KMA in May 2012 resulted in an improved analysis performance, particularly for the moisture field. On the other hand, ECMWF reanalysis data showed slightly reduced performance for relative humidity compared with the UM, with a significant humid bias in the upper troposphere. ECMWF reanalysis temperature fields showed nearly the same performance as the two UM analyses. The root mean square differences (RMSDs) of the relative humidity for the three models were larger for more humid conditions, suggesting that humidity forecasts are less reliable under these conditions.
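
    A minimal numpy sketch of the difference statistics used in such comparisons: level-wise bias and RMSD over collocated model and sonde profiles. The array shapes, noise levels and the warm model bias are synthetic assumptions for illustration.

```python
import numpy as np

def profile_stats(model, obs):
    """Level-wise bias and RMSD between collocated model and sonde profiles.

    model, obs: arrays of shape (n_profiles, n_levels); NaN marks missing levels.
    """
    diff = model - obs
    bias = np.nanmean(diff, axis=0)
    rmsd = np.sqrt(np.nanmean(diff ** 2, axis=0))
    return bias, rmsd

# Synthetic stand-in for collocated temperature profiles (K) on 20 levels.
rng = np.random.default_rng(0)
truth = np.linspace(290.0, 210.0, 20)
obs = truth + rng.normal(0.0, 0.5, (100, 20))        # sonde measurement noise
model = truth + 0.3 + rng.normal(0.0, 0.7, (100, 20))  # model with a warm bias

bias, rmsd = profile_stats(model, obs)
print("worst-level bias %.2f K, RMSD %.2f K" % (bias.max(), rmsd.max()))
```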

  6. Application of a weighted spatial probability model in GIS to analyse landslides in Penang Island, Malaysia

    Directory of Open Access Journals (Sweden)

    Samy Ismail Elmahdy

    2016-01-01

    In the current study, Penang Island, which is one of the several mountainous areas in Malaysia that are often subjected to landslide hazard, was chosen for further investigation. A multi-criteria evaluation combining a weighted spatial probability approach with a GIS model builder was applied to map and analyse landslides in Penang Island. A set of automated algorithms was used to construct new essential geological and morphometric thematic maps from remote sensing data. The maps were ranked using the weighted probability spatial model based on their contribution to the landslide hazard. Results obtained showed that sites at an elevation of 100–300 m, with steep slopes of 10°–37° and slope direction (aspect) in the E and SE directions, were areas of very high and high probability for landslide occurrence; the total areas were 21.393 km2 (11.84%) and 58.690 km2 (32.48%), respectively. The obtained map was verified by comparing variogram models of the mapped and the observed landslide locations and showed a strong correlation with the locations of occurred landslides, indicating that the proposed method can successfully predict the landslide hazard. The method is time and cost effective and can be used as a reference for geological and geotechnical engineers.
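
    A minimal raster sketch of the weighted-overlay step: ranked thematic layers are combined with assumed weights and the result is sliced into probability classes. Layer names, ranks and weights are placeholders, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(7)
shape = (200, 200)  # toy raster grid

# Thematic layers reclassified to hazard ranks 1 (low) .. 5 (high); in the paper
# these come from DEM derivatives (elevation, slope, aspect) and geology.
layers = {name: rng.integers(1, 6, shape) for name in
          ("elevation", "slope", "aspect", "lithology")}

# Illustrative weights expressing each factor's assumed contribution to hazard.
weights = {"elevation": 0.2, "slope": 0.4, "aspect": 0.2, "lithology": 0.2}

susceptibility = sum(w * layers[k] for k, w in weights.items())

# Classify into five probability classes by quantile breaks.
breaks = np.quantile(susceptibility, [0.2, 0.4, 0.6, 0.8])
classes = np.digitize(susceptibility, breaks)  # 0 = very low .. 4 = very high
print("share of very-high class: %.1f%%" % (100 * (classes == 4).mean()))
```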

  7. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    NARCIS (Netherlands)

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input

  8. Comparison of optical-model and Lane-model analyses of sub-Coulomb protons on 92,94Zr

    International Nuclear Information System (INIS)

    Schrils, R.; Flynn, D.S.; Hershberger, R.L.; Gabbard, F.

    1979-01-01

    Accurate proton elastic-scattering cross sections were measured with enriched targets of 92,94Zr from Ep = 2.0 to 6.5 MeV. The elastic-scattering cross sections, together with absorption cross sections, were analyzed with a Lane model which employed the optical potential of Johnson et al. The resulting parameters were compared with those obtained with a single-channel optical model, and negligible differences were found. Significant differences between the 92Zr and 94Zr real diffusenesses resulted from the inclusion of the (p,p) data in the analyses.

  9. Inverse analyses of effective diffusion parameters relevant for a two-phase moisture model of cementitious materials

    DEFF Research Database (Denmark)

    Addassi, Mouadh; Johannesson, Björn; Wadsö, Lars

    2018-01-01

    Here we present an inverse analysis approach to determining the two-phase moisture transport properties relevant to concrete durability modeling. The proposed moisture transport model was based on a continuum approach with two truly separate equations for the liquid and gas phase being connected...... test, and (iv) capillary suction test. Mass change over time, as obtained from the drying test, the two different cup test intervals and the capillary suction test, was used to obtain the effective diffusion parameters using the proposed inverse analysis approach. The moisture properties obtained

  10. Nurses' intention to leave: critically analyse the theory of reasoned action and organizational commitment model.

    Science.gov (United States)

    Liou, Shwu-Ru

    2009-01-01

    To systematically analyse the Organizational Commitment model and the Theory of Reasoned Action and determine concepts that can better explain nurses' intention to leave their job. The Organizational Commitment model and the Theory of Reasoned Action have been proposed and applied to understand intention to leave and turnover behaviour, which are major contributors to the nursing shortage. However, the appropriateness of applying these two models in nursing had not been analysed. Three main criteria of a useful model were used for the analysis: consistency in the use of concepts, testability and predictability. Both theories use concepts consistently. Concepts in the Theory of Reasoned Action are defined broadly, whereas they are operationally defined in the Organizational Commitment model. The predictability of the Theory of Reasoned Action is questionable, whereas the Organizational Commitment model can be applied to predict intention to leave. A model was proposed based on this analysis. Organizational commitment, intention to leave, work experiences, job characteristics and personal characteristics can be concepts for predicting nurses' intention to leave. Nursing managers may consider nurses' personal characteristics and experiences to increase their organizational commitment and enhance their intention to stay. Empirical studies are needed to test and cross-validate the re-synthesized model for nurses' intention to leave their job.

  11. Modelling and Analyses of Embedded Systems Design

    DEFF Research Database (Denmark)

    Brekling, Aske Wiid

    We present the MoVES language: a language with which embedded systems can be specified at a stage in the development process where an application is identified and should be mapped to an execution platform (potentially multi-core). We give a formal model for MoVES that captures and gives semantics to the elements of specifications in the MoVES language. We show that even for seemingly simple systems, the complexity of verifying real-time constraints can be overwhelming - but we give an upper limit to the size of the search-space that needs examining. Furthermore, the formal model exposes......-based verification is a promising approach for assisting developers of embedded systems. We provide examples of system verifications that, in size and complexity, point in the direction of industrially-interesting systems.

  12. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    Science.gov (United States)

    Rallapalli, Varsha H.

    2016-01-01

    Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRenv) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S) and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRenv has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding to speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRenv. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating feasibility of neural SNRenv computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence in individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of the neural SNRenv in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.
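
    The SNRenv idea can be illustrated in the acoustic domain with a short sketch: envelope power of speech-plus-noise and of noise alone is compared within one modulation band. This is only an analogue of the paper's neural computation, which derives SNRenv from auditory-nerve model spike trains via shuffled correlograms; the band limits, filter order, crude downsampling and toy signals are all assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def band_envelope_power(x, fs, f_lo, f_hi, fs_env=200):
    """AC power of the Hilbert envelope within one modulation band."""
    env = np.abs(hilbert(x))
    env = env[:: int(fs // fs_env)]  # crude downsampling of the envelope
    env = env - env.mean()
    b, a = butter(2, [f_lo / (fs_env / 2), f_hi / (fs_env / 2)], btype="band")
    return np.mean(filtfilt(b, a, env) ** 2)

def snr_env(speech_plus_noise, noise, fs, band=(4.0, 16.0)):
    """Band SNRenv: (P_{S+N} - P_N) / P_N, floored to stay non-negative."""
    p_sn = band_envelope_power(speech_plus_noise, fs, *band)
    p_n = band_envelope_power(noise, fs, *band)
    return max(p_sn - p_n, 1e-12) / p_n

# Toy signals: an 8 Hz amplitude-modulated tone stands in for speech.
fs = 16000
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(3)
speech = (1 + 0.8 * np.sin(2 * np.pi * 8 * t)) * np.sin(2 * np.pi * 1000 * t)
noise = rng.normal(0.0, 0.5, t.size)
print("SNRenv in the 4-16 Hz band: %.2f" % snr_env(speech + noise, noise, fs))
```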

  13. Modular 3-D solid finite element model for fatigue analyses of a PWR coolant system

    International Nuclear Information System (INIS)

    Garrido, Oriol Costa; Cizelj, Leon; Simonovski, Igor

    2012-01-01

    Highlights: ► A 3-D model of a reactor coolant system for fatigue usage assessment. ► The performed simulations are heat transfer and stress analyses. ► The main results are the expected ranges of fatigue loadings. - Abstract: The extension of operational licenses of second generation pressurized water reactor (PWR) nuclear power plants depends to a large extent on the analyses of fatigue usage of the reactor coolant pressure boundary. The reliable estimation of the fatigue usage requires detailed thermal and stress analyses of the affected components. Analyses, based upon the in-service transient loads should be compared to the loads analyzed at the design stage. The thermal and stress transients can be efficiently analyzed using the finite element method. This requires that a 3-D solid model of a given system is discretized with finite elements (FE). The FE mesh density is crucial for both the accuracy and the cost of the analysis. The main goal of the paper is to propose a set of computational tools which assist a user in the deployment of a modular spatial FE model of the main components of a typical reactor coolant system, e.g., pipes, pressure vessels and pumps. The modularity ensures that the components can be analyzed individually or in a system. Also, individual components can be meshed with different mesh densities, as required by the specifics of the particular transient studied. For optimal accuracy, all components are meshed with hexahedral elements with quadratic interpolation. The performance of the model is demonstrated with simulations performed with a complete two-loop PWR coolant system (RCS). Heat transfer analysis and stress analysis for a complete loading and unloading cycle of the RCS are performed. The main results include expected ranges of fatigue loading for the pipelines and coolant pump components under the given conditions.

  14. Partial differential equation techniques for analysing animal movement: A comparison of different methods.

    Science.gov (United States)

    Wang, Yi-Shan; Potts, Jonathan R

    2017-03-07

    Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement.
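
    The classical (Patlak-style) derivation replaces the movement kernel by its first two moments, yielding an advection-diffusion equation. The sketch below simulates a 1-D biased walk with a Gaussian kernel, for which that approximation happens to be exact, and checks the PDE's predicted mean and variance; the kernel parameters and step counts are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# 1-D movement kernel: one step per unit time, drawn from N(mu, sigma^2).
mu, sigma = 0.2, 1.0
n_walkers, n_steps = 50_000, 100
paths = rng.normal(mu, sigma, (n_walkers, n_steps)).cumsum(axis=1)

# The moment-closure PDE is u_t = -v u_x + D u_xx with v = mu, D = sigma^2 / 2.
v, D = mu, sigma ** 2 / 2.0
t = n_steps
x = paths[:, -1]

# From a point release the PDE predicts mean v*t and variance 2*D*t.
print("empirical mean %.2f vs PDE %.2f" % (x.mean(), v * t))
print("empirical var  %.1f vs PDE %.1f" % (x.var(), 2 * D * t))
# For this smooth (Gaussian) kernel the approximation is exact; the paper shows
# it can fail badly for non-smooth kernels or more complex movement rules.
```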

  15. A theoretical model for analysing gender bias in medicine.

    Science.gov (United States)

    Risberg, Gunilla; Johansson, Eva E; Hamberg, Katarina

    2009-08-03

    During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change by facts only. We suggest consciousness-raising activities and continuous reflections on gender attitudes among students, teachers, researchers and decision-makers.

  16. Growth Modeling with Non-Ignorable Dropout: Alternative Analyses of the STAR*D Antidepressant Trial

    Science.gov (United States)

    Muthén, Bengt; Asparouhov, Tihomir; Hunter, Aimee; Leuchter, Andrew

    2011-01-01

    This paper uses a general latent variable framework to study a series of models for non-ignorable missingness due to dropout. Non-ignorable missing data modeling acknowledges that missingness may depend on not only covariates and observed outcomes at previous time points as with the standard missing at random (MAR) assumption, but also on latent variables such as values that would have been observed (missing outcomes), developmental trends (growth factors), and qualitatively different types of development (latent trajectory classes). These alternative predictors of missing data can be explored in a general latent variable framework using the Mplus program. A flexible new model uses an extended pattern-mixture approach where missingness is a function of latent dropout classes in combination with growth mixture modeling using latent trajectory classes. A new selection model allows not only an influence of the outcomes on missingness, but allows this influence to vary across latent trajectory classes. Recommendations are given for choosing models. The missing data models are applied to longitudinal data from STAR*D, the largest antidepressant clinical trial in the U.S. to date. Despite the importance of this trial, STAR*D growth model analyses using non-ignorable missing data techniques have not been explored until now. The STAR*D data are shown to feature distinct trajectory classes, including a low class corresponding to substantial improvement in depression, a minority class with a U-shaped curve corresponding to transient improvement, and a high class corresponding to no improvement. The analyses provide a new way to assess drug efficiency in the presence of dropout. PMID:21381817

  17. Comparison of Numerical Analyses with a Static Load Test of a Continuous Flight Auger Pile

    Science.gov (United States)

    Hoľko, Michal; Stacho, Jakub

    2014-12-01

    The article deals with numerical analyses of a Continuous Flight Auger (CFA) pile. The analyses include a comparison of calculated and measured load-settlement curves as well as a comparison of the load distribution over a pile's length. The numerical analyses were executed using two types of software, i.e., Ansys and Plaxis, which are based on FEM calculations. The two types of software differ from each other in the way they create numerical models, model the interface between the pile and soil, and use constitutive material models. The analyses have been prepared in the form of a parametric study, where the method of modelling the interface and the material models of the soil are compared and analysed. Our analyses show that both types of software permit the modelling of pile foundations. The Plaxis software offers advanced material models as well as the modelling of the impact of groundwater or overconsolidation. The load-settlement curve calculated using Plaxis matches the results of a static load test with more than 95% accuracy. In comparison, the load-settlement curve calculated using Ansys provides only an approximate estimate, but the software allows large structural systems to be modelled together with a foundation system.

  18. Uncertainty and Sensitivity Analyses Plan

    International Nuclear Information System (INIS)

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models, where the values of many parameters are unknown. This plan gives thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project.
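
    The plan's two ingredients, Monte Carlo uncertainty propagation and a sensitivity ranking of inputs, can be sketched in a few lines. The toy dose model and the input distributions below are assumptions for illustration only, not HEDR's actual models or parameters.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1944)
n = 10_000

# Hypothetical dose-model inputs with assumed uncertainty distributions.
release = rng.lognormal(mean=0.0, sigma=0.6, size=n)  # source term (arb. units)
dilution = rng.uniform(0.5, 2.0, size=n)              # atmospheric dilution
dose_factor = rng.normal(1.0, 0.1, size=n)            # dose conversion factor

dose = release * dose_factor / dilution               # toy dose model

# Uncertainty: report percentiles of the output distribution.
print("dose 5th/50th/95th pct:", np.percentile(dose, [5, 50, 95]).round(2))

# Sensitivity: rank-correlate each input with the output.
for name, values in [("release", release), ("dilution", dilution),
                     ("dose_factor", dose_factor)]:
    rho = spearmanr(values, dose).correlation
    print(f"{name:12s} Spearman rho = {rho:+.2f}")
```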

  19. An LP-model to analyse economic and ecological sustainability on Dutch dairy farms: model presentation and application for experimental farm "de Marke"

    NARCIS (Netherlands)

    Calker, van K.J.; Berentsen, P.B.M.; Boer, de I.J.M.; Giesen, G.W.J.; Huirne, R.B.M.

    2004-01-01

    Farm level modelling can be used to determine how farm management adjustments and environmental policy affect different sustainability indicators. In this paper indicators were included in a dairy farm LP (linear programming)-model to analyse the effects of environmental policy and management

  20. Analyse Risk-Return Paradox: Evidence from Electricity Sector of Pakistan

    OpenAIRE

    Naqi Shah, Sadia; Qayyum, Abdul

    2016-01-01

    This study analyses the risk-return relationship of the electricity companies of Pakistan using the log return series of these companies. Financial time series data have the property of autoregressive heteroscedasticity, which motivates a move towards the GARCH family of models. As the study aims to analyse the risk-return relationship, the GARCH-M model of Engle et al. (1987), who empirically found a relationship between risk and return, is used. Results show that risk return in case of Pakistan electricity...
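
    A compact sketch of estimating a GARCH(1,1)-in-mean model by maximum likelihood, in the spirit of the Engle et al. (1987) ARCH-M specification; the returns are synthetic, and the Gaussian likelihood, starting values and parameter bounds are simplifying assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def garch_m_nll(theta, r):
    """Negative Gaussian log-likelihood of a GARCH(1,1)-in-mean model:
    r_t = mu + lam*sqrt(h_t) + e_t,  h_t = omega + alpha*e_{t-1}^2 + beta*h_{t-1}.
    """
    mu, lam, omega, alpha, beta = theta
    h = np.var(r)      # initialise conditional variance at the sample variance
    e_prev = 0.0
    nll = 0.0
    for rt in r:
        h = omega + alpha * e_prev ** 2 + beta * h
        e_prev = rt - (mu + lam * np.sqrt(h))  # lam is the risk-premium term
        nll += 0.5 * (np.log(2 * np.pi) + np.log(h) + e_prev ** 2 / h)
    return nll

# Synthetic daily log returns standing in for an electricity stock series.
rng = np.random.default_rng(5)
r = rng.standard_t(df=6, size=1500) * 0.01

res = minimize(garch_m_nll, x0=[0.0, 0.1, 1e-5, 0.05, 0.90], args=(r,),
               bounds=[(-0.01, 0.01), (-5, 5), (1e-8, 1e-3),
                       (0.0, 0.3), (0.5, 0.999)], method="L-BFGS-B")
mu, lam, omega, alpha, beta = res.x
print(f"risk-premium coefficient lambda = {lam:+.3f}")  # >0: risk is priced
```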

  1. Kinetic analyses and mathematical modeling of primary photochemical and photoelectrochemical processes in plant photosystems

    NARCIS (Netherlands)

    Vredenberg, W.J.

    2011-01-01

    In this paper the model and simulation of primary photochemical and photo-electrochemical reactions in dark-adapted intact plant leaves is presented. A descriptive algorithm has been derived from analyses of variable chlorophyll a fluorescence and P700 oxidation kinetics upon excitation with

  2. Dairy shows different associations with abdominal and BMI-defined overweight: Cross-sectional analyses exploring a variety of dairy products.

    Science.gov (United States)

    Brouwer-Brolsma, E M; Sluik, D; Singh-Povel, C M; Feskens, E J M

    2018-05-01

    Previous studies have suggested weight-regulatory properties for several dairy nutrients, but population-based studies on dairy and body weight are inconclusive. We explored cross-sectional associations between dairy consumption and indicators of overweight. We included 114,682 Dutch adults, aged ≥18 years. Dairy consumption was quantified by a food frequency questionnaire. Abdominal overweight was defined as waist circumference (WC) ≥88 cm (women) or ≥102 cm (men) (n = 37,391), overweight as BMI ≥25-30 kg/m2 (n = 44,772) and obesity as BMI ≥30 kg/m2 (n = 15,339). Associations were quantified by logistic (abdominal overweight, no/yes), multinomial logistic (BMI-defined overweight and obesity) and linear regression analyses (continuous measures of WC and BMI), and they were adjusted for relevant covariates. Total dairy showed a positive association with abdominal overweight (OR Q5 vs. Q1 (ref): 1.09; 95% CI: 1.04-1.14) and with BMI-defined overweight (OR Q5 1.13; 95% CI: 1.08-1.18) and obesity (OR Q5 1.09; 95% CI: 1.02-1.16). Skimmed, semi-skimmed and non-fermented dairy also showed positive associations with overweight categories. Full-fat dairy showed an inverse association with overweight and obesity (OR Q5 for obesity: 0.78; 95% CI: 0.73-0.83). Moreover, inverse associations were observed for yoghurt and custard and positive associations for milk, buttermilk, flavoured yoghurt drinks, cheese and cheese snacks. Fermented dairy, curd cheese and Dutch cheese did not show a consistent association with overweight categories. Total, skimmed, semi-skimmed and non-fermented dairy; milk; buttermilk; flavoured yoghurt drinks; total cheese and cheese snacks showed a positive association with overweight categories, whereas full-fat dairy, custard and yoghurt showed an inverse association with overweight categories.
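
    A sketch of the kind of logistic model behind the abdominal-overweight odds ratios, fitted on synthetic data with assumed coefficients; the actual study adjusts for many more covariates and reports ORs per quintile of intake for each dairy group.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2018)
n = 5_000

# Synthetic stand-ins: dairy intake quintile (0-4) and two covariates.
dairy_q = rng.integers(0, 5, n)
age = rng.uniform(18, 80, n)
sex = rng.integers(0, 2, n)  # 0 = male, 1 = female

# Assume a weak true positive association, as in the total-dairy finding.
logit = -1.0 + 0.02 * dairy_q + 0.01 * (age - 45) - 0.2 * sex
abdominal = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = sm.add_constant(np.column_stack([dairy_q, age, sex]))
fit = sm.Logit(abdominal, X).fit(disp=False)
or_q = np.exp(fit.params[1])           # odds ratio per dairy quintile
ci = np.exp(fit.conf_int()[1])         # its 95% confidence interval
print(f"OR per dairy quintile: {or_q:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")
```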

  3. Biosphere Modeling and Analyses in Support of Total System Performance Assessment

    International Nuclear Information System (INIS)

    Tappen, J. J.; Wasiolek, M. A.; Wu, D. W.; Schmitt, J. F.; Smith, A. J.

    2002-01-01

    The Nuclear Waste Policy Act of 1982 established the obligations of and the relationship between the U.S. Environmental Protection Agency (EPA), the U.S. Nuclear Regulatory Commission (NRC), and the U.S. Department of Energy (DOE) for the management and disposal of high-level radioactive wastes. In 1985, the EPA promulgated regulations that included a definition of performance assessment that did not consider potential dose to a member of the general public. This definition would influence the scope of activities conducted by DOE in support of the total system performance assessment program until 1995. The release of a National Academy of Sciences (NAS) report on the technical basis for a Yucca Mountain-specific standard provided the impetus for the DOE to initiate activities that would consider the attributes of the biosphere, i.e. that portion of the earth where living things, including man, exist and interact with the environment around them. The evolution of NRC and EPA Yucca Mountain-specific regulations, originally proposed in 1999, was critical to the development and integration of biosphere modeling and analyses into the total system performance assessment program. These proposed regulations initially differed in the conceptual representation of the receptor of interest to be considered in assessing performance. The publication in 2001 of final regulations, in which the NRC adopted the standard, will permit the continued improvement and refinement of biosphere modeling and analyses activities in support of assessment activities.

  5. Dynamics of energy systems: Methods of analysing technology change

    Energy Technology Data Exchange (ETDEWEB)

    Neij, Lena

    1999-05-01

    Technology change will have a central role in achieving a sustainable energy system. This calls for methods of analysing the dynamics of energy systems in view of technology change and policy instruments for effecting and accelerating technology change. In this thesis, such methods have been developed, applied, and assessed. Two types of methods have been considered: methods of analysing and projecting the dynamics of future technology change, and methods of evaluating policy instruments effecting technology change, i.e. market transformation programmes. Two methods are focused on analysing the dynamics of future technology change: vintage models and experience curves. Vintage models, which allow for complex analysis of annual streams of energy and technological investments, are applied to the analysis of the time dynamics of electricity demand for lighting and air-distribution in Sweden. The results of the analyses show that the Swedish electricity demand for these purposes could decrease over time, relative to a reference scenario, if policy instruments are used. Experience curves are used to provide insight into the prospects of diffusion of wind turbines and photovoltaic (PV) modules due to cost reduction. The results show potential for considerable cost reduction for wind-generated electricity, which, in turn, could lead to major diffusion of wind turbines. The results also show that major diffusion of PV modules, and a reduction of PV-generated electricity down to the level of conventional base-load electricity, will depend on large investments in bringing the costs down (through RD&D, market incentives and investments in niche markets) or the introduction of new generations of PV modules (e.g. high-efficiency mass-produced thin-film cells). Moreover, a model has been developed for the evaluation of market transformation programmes, i.e. policy instruments that effect technology change and the introduction and commercialisation of energy
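
    The experience-curve logic is compact enough to state directly: unit cost falls by a fixed fraction with each doubling of cumulative production. The starting cost, progress ratio and capacity figures below are illustrative assumptions, not the thesis's estimates.

```python
import numpy as np

def experience_cost(c0, cum0, cum, progress_ratio):
    """Unit cost after cumulative production grows from cum0 to cum.

    progress_ratio: cost multiplier per doubling (0.85 = 15% cheaper per doubling).
    """
    return c0 * (cum / cum0) ** np.log2(progress_ratio)

# Illustrative numbers only: wind-generated electricity at 0.06 EUR/kWh today,
# an 85% progress ratio, and up to three doublings of cumulative capacity.
for doublings in range(4):
    cost = experience_cost(0.06, 1.0, 2.0 ** doublings, 0.85)
    print(f"after {doublings} doubling(s): {cost:.4f} EUR/kWh")
```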

  6. Towards an Industrial Application of Statistical Uncertainty Analysis Methods to Multi-physical Modelling and Safety Analyses

    International Nuclear Information System (INIS)

    Zhang, Jinzhao; Segurado, Jacobo; Schneidesch, Christophe

    2013-01-01

    Since the 1980s, Tractebel Engineering (TE) has been developing and applying a multi-physical modelling and safety analyses capability, based on a code package consisting of the best estimate 3D neutronic (PANTHER), system thermal hydraulic (RELAP5), core sub-channel thermal hydraulic (COBRA-3C), and fuel thermal mechanic (FRAPCON/FRAPTRAN) codes. A series of methodologies has been developed to perform and to license the reactor safety analysis and core reload design, based on the deterministic bounding approach. Following the recent trends in research and development as well as in industrial applications, TE has been working since 2010 towards the application of statistical sensitivity and uncertainty analysis methods to multi-physical modelling and licensing safety analyses. In this paper, the TE multi-physical modelling and safety analyses capability is first described, followed by the proposed TE best estimate plus statistical uncertainty analysis method (BESUAM). The chosen statistical sensitivity and uncertainty analysis methods (non-parametric order statistic method or bootstrap) and tool (DAKOTA) are then presented, followed by some preliminary results of their application to FRAPCON/FRAPTRAN simulation of the OECD RIA fuel rod codes benchmark and RELAP5/MOD3.3 simulation of THTF tests. (authors)

  7. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 3

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Corum, J.M.; Bryson, J.W.

    1975-06-01

    The third in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: the experimental data provide design information directly applicable to nozzles in cylindrical vessels; and the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 3 had a 10 in. OD and the nozzle had a 1.29 in. OD, giving a d0/D0 ratio of 0.129. The OD/thickness ratios for the cylinder and the nozzle were 50 and 7.68 respectively. Thirteen separate loading cases were analyzed. In each, one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for all the loadings were obtained using 158 three-gage strain rosettes located on the inner and outer surfaces. The loading cases were also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  8. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 4

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Bryson, J.W.

    1975-06-01

    The last in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models in the series are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: (1) the experimental data provide design information directly applicable to nozzles in cylindrical vessels, and (2) the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 4 had an outside diameter of 10 in., and the nozzle had an outside diameter of 1.29 in., giving a d0/D0 ratio of 0.129. The OD/thickness ratios were 50 and 20.2 for the cylinder and nozzle respectively. Thirteen separate loading cases were analyzed. For each loading condition one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for each of the 13 loadings were obtained using 157 three-gage strain rosettes located on the inner and outer surfaces. Each of the 13 loading cases was also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  9. A theoretical model for analysing gender bias in medicine

    Directory of Open Access Journals (Sweden)

    Johansson Eva E

    2009-08-01

    During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change by facts only. We suggest consciousness-raising activities and continuous reflections on gender attitudes among students, teachers, researchers and decision-makers.

  10. Comparison of plasma input and reference tissue models for analysing [(11)C]flumazenil studies

    NARCIS (Netherlands)

    Klumpers, Ursula M. H.; Veltman, Dick J.; Boellaard, Ronald; Comans, Emile F.; Zuketto, Cassandra; Yaqub, Maqsood; Mourik, Jurgen E. M.; Lubberink, Mark; Hoogendijk, Witte J. G.; Lammertsma, Adriaan A.

    2008-01-01

    A single-tissue compartment model with plasma input is the established method for analysing [(11)C]flumazenil ([(11)C]FMZ) studies. However, arterial cannulation and measurement of metabolites are time-consuming. Therefore, a reference tissue approach is appealing, but this approach has not been

  11. One size does not fit all: On how Markov model order dictates performance of genomic sequence analyses

    Science.gov (United States)

    Narlikar, Leelavati; Mehta, Nidhi; Galande, Sanjeev; Arjunwadkar, Mihir

    2013-01-01

    The structural simplicity and ability to capture serial correlations make Markov models a popular modeling choice in several genomic analyses, such as identification of motifs, genes and regulatory elements. A critical, yet relatively unexplored, issue is the determination of the order of the Markov model. Most biological applications use a predetermined order for all data sets indiscriminately. Here, we show the vast variation in the performance of such applications with the order. To identify the ‘optimal’ order, we investigated two model selection criteria: Akaike information criterion and Bayesian information criterion (BIC). The BIC optimal order delivers the best performance for mammalian phylogeny reconstruction and motif discovery. Importantly, this order is different from orders typically used by many tools, suggesting that a simple additional step determining this order can significantly improve results. Further, we describe a novel classification approach based on BIC optimal Markov models to predict functionality of tissue-specific promoters. Our classifier discriminates between promoters active across 12 different tissues with remarkable accuracy, yielding 3 times the precision expected by chance. Application to the metagenomics problem of identifying the taxon from a short DNA fragment yields accuracies at least as high as the more complex mainstream methodologies, while retaining conceptual and computational simplicity. PMID:23267010
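
    A minimal sketch of BIC-based order selection for a Markov chain over the DNA alphabet, assuming maximum-likelihood transition probabilities estimated from a single sequence. The toy random sequence is an assumption; on such a sequence the BIC minimum is expected at order 0, whereas real genomic data would favour a higher order.

```python
import numpy as np
from collections import Counter

def markov_bic(seq, k, alphabet="ACGT"):
    """BIC of a k-th order Markov model fitted to one sequence by ML."""
    ctx_counts, trans_counts = Counter(), Counter()
    for i in range(k, len(seq)):
        ctx = seq[i - k:i]
        ctx_counts[ctx] += 1
        trans_counts[ctx, seq[i]] += 1
    loglik = sum(n * np.log(n / ctx_counts[ctx])
                 for (ctx, _), n in trans_counts.items())
    n_free = len(alphabet) ** k * (len(alphabet) - 1)  # free transition probs
    n_obs = len(seq) - k
    return -2.0 * loglik + n_free * np.log(n_obs)

# Scan candidate orders and pick the BIC minimum.
rng = np.random.default_rng(11)
seq = "".join(rng.choice(list("ACGT"), size=20_000))
best = min(range(6), key=lambda k: markov_bic(seq, k))
print("BIC-optimal order:", best)
```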

  12. Challenges of Analysing Gene-Environment Interactions in Mouse Models of Schizophrenia

    Directory of Open Access Journals (Sweden)

    Peter L. Oliver

    2011-01-01

    The modelling of neuropsychiatric disease using the mouse has provided a wealth of information regarding the relationship between specific genetic lesions and behavioural endophenotypes. However, it is becoming increasingly apparent that synergy between genetic and nongenetic factors is a key feature of these disorders that must also be taken into account. With the inherent limitations of retrospective human studies, experiments in mice have begun to tackle this complex association, combining well-established behavioural paradigms and quantitative neuropathology with a range of environmental insults. The conclusions from this work have been varied, due in part to a lack of standardised methodology, although most have illustrated that phenotypes related to disorders such as schizophrenia are consistently modified. Far fewer studies, however, have attempted to generate a “two-hit” model, whereby the consequences of a pathogenic mutation are analysed in combination with environmental manipulation such as prenatal stress. This significant, yet relatively new, approach is beginning to produce valuable new models of neuropsychiatric disease. Focussing on prenatal and perinatal stress models of schizophrenia, this review discusses the current progress in this field, and highlights important issues regarding the interpretation and comparative analysis of such complex behavioural data.

  13. Development of steady-state model for MSPT and detailed analyses of receiver

    Science.gov (United States)

    Yuasa, Minoru; Sonoda, Masanori; Hino, Koichi

    2016-05-01

    The molten salt parabolic trough (MSPT) system uses molten salt as the heat transfer fluid (HTF) instead of synthetic oil. The demonstration plant of the MSPT was constructed by Chiyoda Corporation and Archimede Solar Energy in Italy in 2013. Chiyoda Corporation developed a steady-state model for predicting the theoretical behavior of the demonstration plant. The model was designed to calculate the concentrated solar power and heat loss using ray tracing of incident solar light and finite element modeling of the thermal energy transferred into the medium. This report describes the verification of the model using test data from the demonstration plant, detailed analyses of the relation between flow rate and temperature difference on the metal tube of the receiver, and the effect of defocus angle on the concentrated power rate, for solar collector assembly (SCA) development. The model is accurate to within 2.0% systematic error and 4.2% random error. The relationships between flow rate and temperature difference on the metal tube and the effect of defocus angle on the concentrated power rate are shown.

  14. Using Weather Data and Climate Model Output in Economic Analyses of Climate Change

    Energy Technology Data Exchange (ETDEWEB)

    Auffhammer, M.; Hsiang, S. M.; Schlenker, W.; Sobel, A.

    2013-06-28

    Economists are increasingly using weather data and climate model output in analyses of the economic impacts of climate change. This article introduces a set of weather data sets and climate models that are frequently used, discusses the most common mistakes economists make in using these products, and identifies ways to avoid these pitfalls. We first provide an introduction to weather data, including a summary of the types of datasets available, and then discuss five common pitfalls that empirical researchers should be aware of when using historical weather data as explanatory variables in econometric applications. We then provide a brief overview of climate models and discuss two common and significant errors often made by economists when climate model output is used to simulate the future impacts of climate change on an economic outcome of interest.

  15. An assessment of the wind re-analyses in the modelling of an extreme sea state in the Black Sea

    Science.gov (United States)

    Akpinar, Adem; Ponce de León, S.

    2016-03-01

    This study aims at an assessment of wind re-analyses for modelling storms in the Black Sea. A wind-wave modelling system (Simulating WAves Nearshore, SWAN) is applied to the Black Sea basin and calibrated with buoy data for three recent re-analysis wind sources, namely the European Centre for Medium-Range Weather Forecasts Reanalysis-Interim (ERA-Interim), the Climate Forecast System Reanalysis (CFSR), and the Modern Era Retrospective Analysis for Research and Applications (MERRA), during an extreme wave condition that occurred in the north-eastern part of the Black Sea. The SWAN model simulations are carried out with default and tuned settings for the deep water source terms, especially whitecapping. The performance of the best model configurations based on calibration against buoy data is discussed using data from the JASON2, TOPEX-Poseidon, ENVISAT and GFO satellites. The SWAN model calibration shows that the best configuration is obtained with the Janssen and Komen formulations with a whitecapping coefficient (Cds) equal to 1.8e-5 for wave generation by wind and whitecapping dissipation, using ERA-Interim. In addition, from the collocated SWAN results against the satellite records, the best configuration is determined to be SWAN using the CFSR winds. Numerical results thus show that the accuracy of a wave forecast will depend on the quality of the wind field and the ability of the SWAN model to simulate the waves under extreme wind conditions in fetch-limited wave conditions.

  16. Development and application of model RAIA uranium on-line analyser

    International Nuclear Information System (INIS)

    Dong Yanwu; Song Yufen; Zhu Yaokun; Cong Peiyuan; Cui Songru

    1999-01-01

    The working principle, structure, adjustment and application of the model RAIA on-line analyser are reported. The performance of this instrument is reliable. For an identical sample, the signal fluctuation in continuous monitoring over four months is less than ±1%. An appropriate length of sample cell is chosen according to the required measurement range. The precision of the measurement process is better than 1% at 100 g/L U. The detection limit is 50 mg/L. The uranium concentration in the process stream can be displayed automatically and printed at any time. The analyser outputs a 4-20 mA current signal proportional to the uranium concentration, a significant step towards continuous process control and computer management.

  17. How Genes Modulate Patterns of Aging-Related Changes on the Way to 100: Biodemographic Models and Methods in Genetic Analyses of Longitudinal Data

    Science.gov (United States)

    Yashin, Anatoliy I.; Arbeev, Konstantin G.; Wu, Deqing; Arbeeva, Liubov; Kulminski, Alexander; Kulminskaya, Irina; Akushevich, Igor; Ukraintseva, Svetlana V.

    2016-01-01

    Background and Objective: To clarify mechanisms of genetic regulation of human aging and longevity traits, a number of genome-wide association studies (GWAS) of these traits have been performed. However, the results of these analyses did not meet the expectations of the researchers. Most detected genetic associations have not reached a genome-wide level of statistical significance and suffered from a lack of replication in studies of independent populations. The reasons for slow progress in this research area include the low efficiency of the statistical methods used in data analyses, the genetic heterogeneity of aging- and longevity-related traits, the possibility of pleiotropic (e.g., age-dependent) effects of genetic variants on such traits, underestimation of the effects of (i) mortality selection in genetically heterogeneous cohorts and (ii) external factors and differences in the genetic backgrounds of individuals in the populations under study, and the weakness of a conceptual biological framework that does not fully account for the above-mentioned factors. One more limitation of the conducted studies is that they did not fully realize the potential of longitudinal data, which allow for evaluating how genetic influences on life span are mediated by physiological variables and other biomarkers during the life course. The objective of this paper is to address these issues. Data and Methods: We performed GWAS of human life span using different subsets of data from the original Framingham Heart Study cohort corresponding to different quality control (QC) procedures, and used one subset of selected genetic variants for further analyses. We used a simulation study to show that this approach to combining data improves the quality of GWAS. We used FHS longitudinal data to compare average age trajectories of physiological variables in carriers and non-carriers of selected genetic variants. We used a stochastic process model of human mortality and aging to investigate genetic influence on hidden biomarkers of aging

  18. Thermoelastic analyses of spent fuel repositories in bedded and dome salt. Technical memorandum report RSI-0054

    International Nuclear Information System (INIS)

    Callahan, G.D.; Ratigan, J.L.

    1978-01-01

    Global thermoelastic analyses of bedded and dome salt models showed a slight preference for the bedded salt model through the range of thermal loading conditions. Spent fuel thermal loadings should be less than 75 kW/acre of the repository pending more accurate material modeling. One should first limit the study to one or two spent fuel thermal loading analyses (i.e. 75 kW/acre and/or 50 kW/acre) up to a maximum time of approximately 2000 years. Parametric thermoelastic-type analyses could then be readily obtained to determine the influence of the thermomechanical properties. Recommendations for further study include parametric analyses, plasticity analyses, consideration of the material interfaces as joints, and possibly consideration of a global joint pattern (i.e. jointed at the same orientation everywhere) for the non-salt materials. Subsequently, the viscoelastic analyses could be performed.

  19. Tests and analyses of 1/4-scale upgraded nine-bay reinforced concrete basement models

    International Nuclear Information System (INIS)

    Woodson, S.C.

    1983-01-01

    Two nine-bay prototype structures, a flat plate and a two-way slab with beams, were designed in accordance with the 1977 ACI code. A 1/4-scale model of each prototype was constructed, upgraded with timber posts, and statically tested. The development of the timber-post placement scheme was based upon yield-line analyses, punching shear evaluation, and moment-thrust interaction diagrams of the concrete slab sections. The flat plate model and the slab-with-beams model withstood approximate overpressures of 80 and 40 psi, respectively, indicating that the required hardness may be achieved through simple upgrading techniques.

  20. A finite-element model to analyse the mechanical behavior of a PWR fuel rod

    International Nuclear Information System (INIS)

    Galeao, A.C.N.R.; Tanajura, C.A.S.

    1988-01-01

    A model to analyse the mechanical behavior of a PWR fuel rod is presented. We draw attention to the phenomenon of pellet-pellet and pellet-cladding contact by taking advantage of an elastic model which includes the effects of thermal gradients, cladding internal and external pressures, swelling and initial relocation. The problem of contact gives rise to a variational formulation which employs Lagrangian multipliers. An iterative scheme is constructed and the finite element method is applied to obtain the numerical solution. Some results and comments are presented to examine the performance of the model. (author)

  1. Elastodynamic fracture analyses of large crack-arrest experiments

    International Nuclear Information System (INIS)

    Bass, B.R.; Pugh, C.E.; Walker, J.K.

    1985-01-01

    Results obtained to date show that the essence of the run-arrest events, including dynamic behavior, is being modeled. Refined meshes and optimum solution algorithms are important parameters in elastodynamic analysis programs for giving sufficient resolution to the geometric and time-dependent aspects of fracture analyses. Further refinements in the quantitative representation of material parameters and the inclusion of rate dependence through viscoplastic modeling are expected to give an even more accurate basis for assessing the fracture behavior of reactor pressure vessels under PTS and other off-normal loading conditions.

  2. Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events

    Science.gov (United States)

    Dinitz, Laura B.; Taketa, Richard A.

    2013-01-01

    This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.
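
    A back-of-the-envelope sketch of how Hazus loss estimates can feed an LUPM-style evaluation: expected losses avoided by a mitigation policy are set against its cost. All numbers are hypothetical, no discounting or uncertainty is applied, and the real LUPM treats both far more carefully.

```python
# Hypothetical inputs for one mitigation policy under one earthquake scenario;
# in practice the two loss figures come from Hazus runs with and without the
# mitigation action applied.
p_event = 0.02            # assumed annual probability of the scenario event
loss_unmitigated = 120e6  # loss estimate without mitigation (USD)
loss_mitigated = 70e6     # loss estimate with retrofits applied (USD)
mitigation_cost = 15e6    # up-front cost of the policy (USD)
horizon_years = 30        # planning horizon

expected_avoided = p_event * horizon_years * (loss_unmitigated - loss_mitigated)
roi = (expected_avoided - mitigation_cost) / mitigation_cost
print(f"expected losses avoided: ${expected_avoided / 1e6:.0f}M, ROI: {roi:.1f}")
```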

  3. Extending and Applying Spartan to Perform Temporal Sensitivity Analyses for Predicting Changes in Influential Biological Pathways in Computational Models.

    Science.gov (United States)

    Alden, Kieran; Timmis, Jon; Andrews, Paul S; Veiga-Fernandes, Henrique; Coles, Mark

    2017-01-01

    Through integrating real time imaging, computational modelling, and statistical analysis approaches, previous work has suggested that the induction of and response to cell adhesion factors is the key initiating pathway in early lymphoid tissue development, in contrast to the previously accepted view that the process is triggered by chemokine mediated cell recruitment. These model derived hypotheses were developed using spartan, an open-source sensitivity analysis toolkit designed to establish and understand the relationship between a computational model and the biological system that model captures. Here, we extend the functionality available in spartan to permit the production of statistical analyses that contrast the behavior exhibited by a computational model at various simulated time-points, enabling a temporal analysis that could suggest whether the influence of biological mechanisms changes over time. We exemplify this extended functionality by using the computational model of lymphoid tissue development as a time-lapse tool. By generating results at twelve-hour intervals, we show how the extensions to spartan have been used to suggest that lymphoid tissue development could be biphasic, and predict the time-point when a switch in the influence of biological mechanisms might occur.
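
    spartan's robustness analyses are built on the Vargha-Delaney A-test, an effect-magnitude measure; a sketch of applying it at successive simulated time points is shown below. The response values, time points and perturbation are synthetic assumptions chosen to mimic a mechanism whose influence switches on over time.

```python
import numpy as np

def vargha_delaney_a(x, y):
    """A-test effect magnitude: P(X > Y) + 0.5 * P(X == Y)."""
    x, y = np.asarray(x), np.asarray(y)
    greater = (x[:, None] > y[None, :]).mean()
    equal = (x[:, None] == y[None, :]).mean()
    return greater + 0.5 * equal

rng = np.random.default_rng(12)
timepoints = [12, 24, 36, 48]  # simulated hours

# Synthetic replicate simulation responses at each time point, under a baseline
# and a perturbed adhesion parameter; the effect is made to grow with time.
for i, t in enumerate(timepoints):
    baseline = rng.normal(10.0, 2.0, 300)
    perturbed = rng.normal(10.0 + 0.8 * i, 2.0, 300)
    a = vargha_delaney_a(perturbed, baseline)
    print(f"t = {t:2d} h: A = {a:.2f}")  # ~0.5: no effect; >0.71: large effect
```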

  4. Model of the synthesis of trisporic acid in Mucorales showing bistability.

    Science.gov (United States)

    Werner, S; Schroeter, A; Schimek, C; Vlaic, S; Wöstemeyer, J; Schuster, S

    2012-12-01

    An important substance in the signalling between individuals of Mucor-like fungi is trisporic acid (TA). This compound, together with some of its precursors, serves as a pheromone in mating between (+)- and (-)-mating types. Moreover, intermediates of the TA pathway are exchanged between the two mating partners. Based on differential equations, mathematical models of the synthesis pathways of TA in the two mating types of an idealised Mucor fungus are presented here. These models include the positive feedback of TA on its own synthesis. The authors compare three sub-models in view of bistability, robustness and the reversibility of transitions. The modelling study showed that, in a system where intermediates are exchanged, a reversible transition between the two stable steady states occurs, whereas an exchange of the end product leads to an irreversible transition. The reversible transition is physiologically favoured, because the high-production state of TA must come to an end eventually. Moreover, the exchange of intermediates and TA is compared with the three-way handshake widely used by computers linked in a network.
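
    The paper's actual equations are not reproduced in this record, but the core mechanism, positive feedback of TA on its own synthesis, can be illustrated with a minimal one-variable sketch: basal production plus a Hill-type autocatalytic term. All rate constants below are illustrative assumptions, not values from the study.

    ```python
    from scipy.integrate import solve_ivp

    def ta_model(t, x, k0=0.05, k1=1.0, K=0.5, n=4, d=0.9):
        """dx/dt for TA concentration x: basal synthesis + Hill-type
        positive feedback of TA on its own synthesis - linear degradation."""
        return k0 + k1 * x**n / (K**n + x**n) - d * x

    # Two initial conditions on either side of the unstable threshold end up
    # in different stable steady states -- the signature of bistability.
    for x0 in (0.2, 0.8):
        sol = solve_ivp(ta_model, (0.0, 50.0), [x0], rtol=1e-8)
        print(f"x0={x0}: steady state ~ {sol.y[0, -1]:.3f}")
    ```

    With these constants the low state sits near 0.06 and the high state near 1.1; which exchanged species can carry the system reversibly between the two states is then the question the sub-models address.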

  5. Indian Point 2 steam generator tube rupture analyses

    International Nuclear Information System (INIS)

    Dayan, A.

    1985-01-01

    Analyses were conducted with RETRAN-02 to study the consequences of steam generator tube rupture (SGTR) events. The Indian Point, Unit 2, power plant (IP2, PWR) was modeled as two asymmetric loops, consisting of 27 volumes and 37 junctions. The break section was modeled once, conservatively, as a 150% flow area opening at the wall of the steam generator cold leg plenum, and once as a 200% double-ended tube break. Results revealed a 60% overprediction of break-flow rates by the traditional conservative model. Two SGTR transients were studied, one with a low-pressure reactor trip and one with an earlier reactor trip via over-temperature ΔT. The former is more typical of a plant with a low reactor average temperature such as IP2. Transient analyses for a single tube break event over 500 seconds indicated continued primary subcooling and no need for steam line pressure relief. In addition, SGTR transients with reactor trip while the pressurizer still contains water were found to favorably reduce depressurization rates. Comparison of the conservative results with independent LOFTRAN predictions showed good agreement.

  6. A new approach to analyse longitudinal epidemiological data with an excess of zeros.

    Science.gov (United States)

    Spriensma, Alette S; Hajos, Tibor R S; de Boer, Michiel R; Heymans, Martijn W; Twisk, Jos W R

    2013-02-20

    Within longitudinal epidemiological research, 'count' outcome variables with an excess of zeros frequently occur. Although these outcomes are frequently analysed with a linear mixed model or a Poisson mixed model, a two-part mixed model is better suited to analysing outcome variables with an excess of zeros. The objective of this paper was therefore to introduce the relatively 'new' method of two-part joint regression modelling in longitudinal data analysis for outcome variables with an excess of zeros, and to compare the performance of this method to current approaches. Within an observational longitudinal dataset, we compared three techniques: two 'standard' approaches (a linear mixed model and a Poisson mixed model) and a two-part joint mixed model (a binomial/Poisson mixed-distribution model), including random intercepts and random slopes. Model fit indicators and differences between predicted and observed values were used for comparisons. The analyses were performed with STATA using the GLLAMM procedure. Regarding the random-intercept models, the two-part joint mixed model (binomial/Poisson) performed best. Adding random slopes for time to the models changed the sign of the regression coefficient for both the Poisson mixed model and the two-part joint mixed model (binomial/Poisson) and resulted in a much better fit. This paper showed that a two-part joint mixed model is a more appropriate method to analyse longitudinal data with an excess of zeros than a linear mixed model or a Poisson mixed model, although in a model with random slopes for time a Poisson mixed model also performed remarkably well.
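
    The paper fits binomial/Poisson mixed models with random intercepts and slopes via GLLAMM in STATA; those random effects are beyond a short sketch, but the two-part idea itself can be shown with a cross-sectional hurdle-style fit in Python's statsmodels. One stated simplification: a plain Poisson on the positive counts stands in for a truncated Poisson.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 1000
    x = rng.normal(size=n)
    # Simulated counts with an excess of zeros: a logistic "any event" part
    # gates a Poisson count part.
    p_any = 1.0 / (1.0 + np.exp(-(-1.0 + 0.8 * x)))
    y = rng.binomial(1, p_any) * rng.poisson(np.exp(0.3 + 0.5 * x))

    X = sm.add_constant(x)
    # Part 1: logistic regression for zero vs. non-zero outcomes.
    part1 = sm.GLM((y > 0).astype(float), X, family=sm.families.Binomial()).fit()
    # Part 2: Poisson regression on the positive counts only
    # (approximation: a truncated Poisson would be exact here).
    pos = y > 0
    part2 = sm.GLM(y[pos], X[pos], family=sm.families.Poisson()).fit()

    # Combined prediction: E[y|x] ~= P(y > 0 | x) * E[y | y > 0, x]
    mu = part1.predict(X) * part2.predict(X)
    print(part1.params.round(2), part2.params.round(2), mu[:3].round(2))
    ```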

  7. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    Directory of Open Access Journals (Sweden)

    Sung-Chien Lin

    2014-07-01

    In this study, we used the approach of topic modeling to uncover the possible structure of research topics in the field of Informetrics, to explore the distribution of the topics over the years, and to compare the core journals. In order to infer the structure of the topics in the field, the data of the papers published in the Journal of Informetrics and Scientometrics during 2007 to 2013 were retrieved from the Web of Science database as input for the topic modeling approach. The results of this study show that when the number of topics was set to 10, the topic model had the smallest perplexity. Although the data scope and analysis methods differ from previous studies, the topics generated in this study are consistent with the results produced by expert analyses. Empirical case studies and measurements of bibliometric indicators were considered important in every year of the analytic period, and the field was increasingly stable. Both core journals broadly paid attention to all of the topics in the field of Informetrics. The Journal of Informetrics put particular emphasis on the construction and applications of bibliometric indicators, while Scientometrics focused on the evaluation and the factors of productivity of countries, institutions, domains, and journals.
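
    The record does not name the implementation used; selecting the number of topics by the smallest held-out perplexity, as described, looks like the sketch below with scikit-learn's LDA. The toy corpus is a stand-in assumption for the Web of Science records.

    ```python
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Toy stand-in corpus; the study used Journal of Informetrics and
    # Scientometrics papers from Web of Science (2007-2013).
    docs = ["citation analysis of journals", "h-index and research evaluation",
            "topic models for bibliometrics", "impact factor of core journals",
            "productivity of countries and institutions",
            "indicators of scientific output"]
    X = CountVectorizer().fit_transform(docs)

    for k in (2, 5, 10):
        lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
        # Lower perplexity suggests a better topic number; the study found the
        # minimum at 10 topics on its (much larger) corpus.
        print(k, lda.perplexity(X))
    ```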

  8. Uncertainty and sensitivity analyses for age-dependent unavailability model integrating test and maintenance

    International Nuclear Information System (INIS)

    Kančev, Duško; Čepin, Marko

    2012-01-01

    Highlights: ► Application of an analytical unavailability model integrating T and M, ageing, and test strategy. ► Ageing data uncertainty propagation to the system level assessed via Monte Carlo simulation. ► Uncertainty impact grows with the extension of the surveillance test interval. ► Calculated system unavailability dependence on two different sensitivity study ageing databases. ► System unavailability sensitivity insights regarding specific groups of BEs as test intervals extend. - Abstract: The interest in operational lifetime extension of existing nuclear power plants is growing. Consequently, plant life management programs, considering safety component ageing, are being developed and employed. Ageing represents a gradual degradation of the physical properties and functional performance of different components, consequently implying their reduced availability. Analyses being made in the direction of nuclear power plant lifetime extension are based upon component ageing management programs. On the other hand, the large uncertainties of the ageing parameters, as well as the uncertainties associated with most reliability data collections, are widely acknowledged. This paper addresses the uncertainty and sensitivity analyses conducted utilizing a previously developed age-dependent unavailability model, integrating the effects of test and maintenance activities, for a selected stand-by safety system in a nuclear power plant. The most important problem is the lack of data concerning the effects of ageing, as well as the relatively high uncertainty associated with these data, which would correspond to more detailed modelling of ageing. A standard Monte Carlo simulation was coded for the purpose of this paper and utilized to assess the propagation of component ageing parameter uncertainty to the system level. The obtained results from the uncertainty analysis indicate the extent to which the uncertainty of the selected
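
    The record truncates before the quantitative findings, but the Monte Carlo propagation it describes can be sketched. A minimal sketch under stated assumptions: a periodically tested standby component with a linearly ageing failure rate λ(t) = λ0 + αt, whose mean unavailability over a test interval T is λ0·T/2 + α·T²/6; λ0 and α carry lognormal uncertainty, and all numbers are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    N = 100_000

    # Hypothetical ageing data: lognormal uncertainty on the base failure
    # rate lambda0 [1/h] and the linear ageing rate alpha [1/h^2].
    lam0 = rng.lognormal(mean=np.log(1e-5), sigma=0.5, size=N)
    alpha = rng.lognormal(mean=np.log(1e-9), sigma=1.0, size=N)

    for T in (730.0, 1460.0, 2920.0):  # surveillance test intervals [h]
        # Mean unavailability over [0, T] for lambda(t) = lambda0 + alpha*t:
        #   U = lambda0*T/2 + alpha*T**2/6
        U = lam0 * T / 2 + alpha * T**2 / 6
        q5, q50, q95 = np.percentile(U, [5, 50, 95])
        print(f"T={T:6.0f} h  median={q50:.2e}  90% band=({q5:.2e}, {q95:.2e})")
    ```

    Consistent with the highlights, the 90% uncertainty band widens as the surveillance test interval T is extended.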

  9. Mental model mapping as a new tool to analyse the use of information in decision-making in integrated water management

    Science.gov (United States)

    Kolkman, M. J.; Kok, M.; van der Veen, A.

    The solution of complex, unstructured problems is faced with policy controversy and dispute, unused and misused knowledge, project delay and failure, and a decline of public trust in governmental decisions. Mental model mapping (also called concept mapping) is a technique to analyse these difficulties on a fundamental cognitive level; it can reveal experiences, perceptions, assumptions, knowledge and subjective beliefs of stakeholders, experts and other actors, and can stimulate communication and learning. This article presents the theoretical framework from which mental model mapping emerges as a promising technique to analyse this type of problem. The framework consists of the problem solving or policy design cycle, the knowledge production or modelling cycle, and the (computer) model as the interface between the cycles. The literature attributes difficulties in the decision-making process to communication gaps between decision makers, stakeholders and scientists, and to the construction of knowledge within different paradigm groups, which leads to different interpretations of the problem situation. Analysis of the decision-making process literature indicates that the choices made in all steps of the problem solving cycle are based on an individual decision maker’s frame of perception. This frame, in turn, depends on the mental model residing in the mind of the individual. Thus we identify three levels of awareness on which the decision process can be analysed. This research focuses on the third level. Mental models can be elicited using mapping techniques; analysing an individual’s mental model can thus shed light on decision-making problems. The steps of the knowledge production cycle are, in the same manner, ultimately driven by the mental models of the scientist in a specific discipline. Remnants of this mental model can be found in the resulting computer model. The characteristics of unstructured problems (complexity

  10. Graphical models for genetic analyses

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt; Sheehan, Nuala A.

    2003-01-01

    This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas of graphical models and genetics. The potential of graphical models is explored and illustrated through a number of example applications where the genetic element is substantial or dominating.

  11. Analyses and optimization of Lee propagation model for LoRa 868 MHz network deployments in urban areas

    Directory of Open Access Journals (Sweden)

    Dobrilović Dalibor

    2017-01-01

    In the recent period, fast ICT expansion and the rapid appearance of new technologies have raised the importance of fast and accurate planning and deployment of emerging communication technologies, especially wireless ones. This paper analyses the possible usage of the Lee propagation model for the planning, design and management of networks based on LoRa 868 MHz technology. LoRa is a wireless technology which can be deployed in various Internet of Things and Smart City scenarios in urban areas. The analyses are based on a comparison of field measurements with model calculations. Besides the analysis of the Lee propagation model's usability, possible optimization of the model is discussed as well. The research results can be used for accurate design, planning and preparation of high-performance wireless resource management of various Internet of Things and Smart City applications in urban areas based on LoRa or similar wireless technologies. The equipment used for the measurements is based on open-source hardware.
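
    For readers unfamiliar with the model being optimized: Lee's area-to-area model predicts the median received power from a reference level at 1 km, a path-loss slope in dB per decade, a frequency correction, and a product of correction factors. The defaults below are illustrative suburban values, not the parameters fitted in the paper.

    ```python
    import numpy as np

    def lee_received_power(d_km, p0_dbm=-61.7, gamma=38.4, f_mhz=868.0,
                           f0_mhz=900.0, n=2.0, f_corr_db=0.0):
        """Median received power [dBm] after Lee's area-to-area model.

        p0_dbm    : reference median power at d0 = 1 km (assumed suburban value)
        gamma     : path-loss slope [dB/decade]
        n         : frequency-scaling exponent
        f_corr_db : product of correction factors (antenna heights, power, gain)
        All defaults are illustrative assumptions, not fitted LoRa values.
        """
        d = np.asarray(d_km, dtype=float)
        return (p0_dbm - gamma * np.log10(d / 1.0)
                - 10 * n * np.log10(f_mhz / f0_mhz) + f_corr_db)

    print(lee_received_power([0.1, 0.5, 1.0, 2.0]))
    ```

    Optimization against field measurements then amounts to tuning gamma, p0_dbm and the correction factors to minimize the prediction error.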

  12. Structural identifiability analyses of candidate models for in vitro Pitavastatin hepatic uptake.

    Science.gov (United States)

    Grandjean, Thomas R B; Chappell, Michael J; Yates, James W T; Evans, Neil D

    2014-05-01

    In this paper a review of the application of four different techniques (a version of the similarity transformation approach for autonomous uncontrolled systems, a non-differential input/output observable normal form approach, the characteristic set differential algebra approach and a recent algebraic input/output relationship approach) to determine the structural identifiability of certain in vitro nonlinear pharmacokinetic models is provided. The Organic Anion Transporting Polypeptide (OATP) substrate Pitavastatin is used as a probe on freshly isolated animal and human hepatocytes. Candidate nonlinear compartmental pharmacokinetic models have been derived to characterise the uptake process of Pitavastatin. As a prerequisite to parameter estimation, structural identifiability analyses are performed to establish that all unknown parameters can be identified from the experimental observations available. Copyright © 2013. Published by Elsevier Ireland Ltd.
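
    The four reviewed techniques target nonlinear models and are too involved for a short example, but the underlying question (can all unknown parameters be recovered from perfect input/output data?) can be illustrated on a linear two-compartment toy model via its transfer function. This Laplace-domain check is a classical alternative, not one of the paper's four methods.

    ```python
    import sympy as sp

    k10, k12, k21, V, s = sp.symbols('k10 k12 k21 V s', positive=True)

    # Linear two-compartment toy model, observing compartment 1 only:
    #   x1' = -(k10 + k12)*x1 + k21*x2 + u,   x2' = k12*x1 - k21*x2,   y = x1/V
    A = sp.Matrix([[-(k10 + k12), k21], [k12, -k21]])
    B = sp.Matrix([1, 0])
    C = sp.Matrix([[1 / V, 0]])

    # Transfer function y(s)/u(s); its coefficients are all an experimenter
    # can recover from perfect input/output data.
    G = sp.cancel((C * (s * sp.eye(2) - A).inv() * B)[0])
    num, den = sp.fraction(G)
    print(sp.Poly(num, s).all_coeffs())
    print(sp.Poly(den, s).all_coeffs())
    # Normalising by the leading denominator coefficient yields the observable
    # combinations 1/V, k21/V, k10 + k12 + k21 and k10*k21, from which V, k21,
    # k10 and k12 are each uniquely recoverable: global structural identifiability.
    ```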

  13. Risk Aversion in Game Shows

    DEFF Research Database (Denmark)

    Andersen, Steffen; Harrison, Glenn W.; Lau, Morten I.

    2008-01-01

    We review the use of behavior from television game shows to infer risk attitudes. These shows provide evidence when contestants are making decisions over very large stakes, and in a replicated, structured way. Inferences are generally confounded by the subjective assessment of skill in some games, and the dynamic nature of the task in most games. We consider the game shows Card Sharks, Jeopardy!, Lingo, and finally Deal Or No Deal. We provide a detailed case study of the analyses of Deal Or No Deal, since it is suitable for inference about risk attitudes and has attracted considerable attention.

  14. Modeled hydrologic metrics show links between hydrology and the functional composition of stream assemblages.

    Science.gov (United States)

    Patrick, Christopher J; Yuan, Lester L

    2017-07-01

    Flow alteration is widespread in streams, but current understanding of the effects of differences in flow characteristics on stream biological communities is incomplete. We tested hypotheses about the effect of variation in hydrology on stream communities by using generalized additive models to relate watershed information to the values of different flow metrics at gauged sites. Flow models accounted for 54-80% of the spatial variation in flow metric values among gauged sites. We then used these models to predict flow metrics in 842 ungauged stream sites in the mid-Atlantic United States that were sampled for fish, macroinvertebrates, and environmental covariates. Fish and macroinvertebrate assemblages were characterized in terms of a suite of metrics that quantified aspects of community composition, diversity, and functional traits that were expected to be associated with differences in flow characteristics. We related modeled flow metrics to biological metrics in a series of stressor-response models. Our analyses identified both drying and base flow instability as explaining 30-50% of the observed variability in fish and invertebrate community composition. Variations in community composition were related to variations in the prevalence of dispersal traits in invertebrates and trophic guilds in fish. The results demonstrate that we can use statistical models to predict hydrologic conditions at bioassessment sites, which, in turn, we can use to estimate relationships between flow conditions and biological characteristics. This analysis provides an approach to quantify the effects of spatial variation in flow metrics using readily available biomonitoring data. © 2017 by the Ecological Society of America.

  15. Modeling and stress analyses of a normal foot-ankle and a prosthetic foot-ankle complex.

    Science.gov (United States)

    Ozen, Mustafa; Sayman, Onur; Havitcioglu, Hasan

    2013-01-01

    Total ankle replacement (TAR) is a relatively new concept and is becoming more popular for the treatment of ankle arthritis and fractures. Because of the high costs and difficulties of experimental studies, the development of TAR prostheses is progressing very slowly. For this reason, medical imaging techniques such as CT and MR have become more and more useful. The finite element method (FEM) is a widely used technique to estimate the mechanical behavior of materials and structures in engineering applications. FEM has also been increasingly applied to biomechanical analyses of human bones, tissues and organs, thanks to developments in both computing capabilities and medical imaging techniques. 3-D finite element models of the human foot and ankle reconstructed from MR and CT images have been investigated by some authors. In this study, the geometries (used in modeling) of a normal and a prosthetic foot and ankle were obtained from a 3D reconstruction of CT images. The segmentation software MIMICS was used to generate the 3D images of the bony structures, soft tissues and prosthesis components of the normal and prosthetic ankle-foot complex. Except for the fused spaces between the adjacent surfaces of the phalanges, the metatarsal, cuneiform, cuboid, navicular, talus and calcaneus bones, the soft tissues and the prosthesis components were developed independently to form the foot and ankle complex. The SOLIDWORKS program was used to form the boundary surfaces of all model components, and the solid models were then obtained from these boundary surfaces. The finite element analysis software ABAQUS was used to perform the numerical stress analyses of these models for the balanced standing position. Plantar pressure and von Mises stress distributions of the normal and prosthetic ankles were compared with each other. There was a peak pressure increase at the 4th metatarsal, first metatarsal and talus bones and a decrease at the intermediate cuneiform and calcaneus bones, in

  16. Water flow experiments and analyses on the cross-flow type mercury target model with the flow guide plates

    CERN Document Server

    Haga, K; Kaminaga, M; Hino, R

    2001-01-01

    A mercury target is used in the spallation neutron source driven by a high-intensity proton accelerator. In this study, the effectiveness of the cross-flow type mercury target structure was evaluated experimentally and analytically. Prior to the experiment, the mercury flow field and the temperature distribution in the target container were analyzed assuming a proton beam energy and power of 1.5 GeV and 5 MW, respectively, and the feasibility of the cross-flow type target was evaluated. Then the average water flow velocity field in the target mock-up model, which was fabricated from Plexiglass for a water experiment, was measured at room temperature using the PIV technique. Water flow analyses were conducted and the analytical results were compared with the experimental results. The experimental results showed that the cross-flow could be realized in most of the proton beam path area and the analytical result of the water flow velocity field showed good correspondence to the experimental results in the case w...

  17. Analysing bifurcations encountered in numerical modelling of current transfer to cathodes of dc glow and arc discharges

    International Nuclear Information System (INIS)

    Almeida, P G C; Benilov, M S; Cunha, M D; Faria, M J

    2009-01-01

    Bifurcations and/or their consequences are frequently encountered in the numerical modelling of current transfer to cathodes of gas discharges, even in apparently simple situations, and a failure to recognize and properly analyse a bifurcation may create difficulties in the modelling and hinder the understanding of numerical results and the underlying physics. This work is concerned with the analysis of bifurcations that have been encountered in the modelling of steady-state current transfer to cathodes of glow and arc discharges. All basic types of steady-state bifurcations (fold, transcritical, pitchfork) have been identified and analysed. The analysis provides explanations for many results obtained in numerical modelling. In particular, it is shown that dramatic changes in patterns of current transfer to cathodes of both glow and arc discharges, described by numerical modelling, occur through perturbed transcritical bifurcations of first- and second-order contact. The analysis elucidates the reason why the mode of glow discharge associated with the falling section of the current-voltage characteristic in the solution of von Engel and Steenbeck seems not to appear in 2D numerical modelling, with the subnormal and normal modes appearing instead. A similar effect has been identified in the numerical modelling of arc cathodes and explained.
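
    As a purely illustrative aside (not the discharge equations themselves), the perturbed-transcritical mechanism can be seen in the scalar normal form x' = eps + mu*x - x**2, where an arbitrarily small imperfection eps splits the crossing equilibrium branches into disconnected ones:

    ```python
    import numpy as np

    # Equilibria of the perturbed transcritical normal form x' = eps + mu*x - x**2.
    # At eps = 0 the branches x = 0 and x = mu intersect (transcritical
    # bifurcation); any small eps != 0 breaks the crossing into two separate
    # branches, so a continuation in mu jumps between solution families.
    for eps in (0.0, 1e-3):
        for mu in np.linspace(-0.2, 0.2, 5):
            roots = np.sort(np.roots([-1.0, mu, eps]).real)
            print(f"eps={eps:7.0e}  mu={mu:+.2f}  equilibria={np.round(roots, 3)}")
    ```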

  18. Plasticity effect of rider-horse interaction on genetic evaluations for Show Jumping discipline in sport horses.

    Science.gov (United States)

    Bartolomé, E; Menéndez-Buxadera, A; Molina, A; Valera, M

    2018-04-01

    To obtain a sport horse that excels at the highest levels of competition, breeders must take into account certain genetic and environmental factors that could influence the sport horse's performance, such as the rider-horse interaction (RHI). The main aim of this study was to describe this interaction in a genetic model by modelling it in relation to the horse's age. A total of 31,129 sport results from Spanish Sport Horses were used, from 1,101 evaluated animals grouped into three age levels and ridden by 606 different riders. Only riders who had ridden more than one horse (and vice versa) were considered for the analyses. Five linear models with different random effects were analysed according to the covariates, the homogeneity/heterogeneity of the RHI and the relevant residual random effects. The best-fitting model was then selected for the genetic evaluation of the animal. In general, models including the RHI effect (M2, M4 and M5) fitted better than the other models, and the best fit was obtained for M4 (with heterogeneous residual variance). The genetic variance increased constantly with age, whereas heritability showed a distinct response over three age intervals. This study revealed the varied evolution of the RHI with age, showing the different "plastic abilities" of this relationship. © 2018 Blackwell Verlag GmbH.

  19. Reproduction of the Yucca Mountain Project TSPA-LA Uncertainty and Sensitivity Analyses and Preliminary Upgrade of Models

    Energy Technology Data Exchange (ETDEWEB)

    Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis; Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis

    2016-09-01

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped onto the cluster processors. Other supporting software was tested and installed to support TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 11.1. All the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling case output generated in FY15 based on GoldSim Version 9.60.300, documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  20. Pathway models for analysing and managing the introduction of alien plant pests - an overview and categorization

    NARCIS (Netherlands)

    Douma, J.C.; Pautasso, M.; Venette, R.C.; Robinet, C.; Hemerik, L.; Mourits, M.C.M.; Schans, J.; Werf, van der W.

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative estimates of introduction risks and the effectiveness of management options.

  1. The Influence of Study-Level Inference Models and Study Set Size on Coordinate-Based fMRI Meta-Analyses

    Directory of Open Access Journals (Sweden)

    Han Bossier

    2018-01-01

    Given the increasing number of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima, possibly with the associated effect sizes, to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome of a coordinate-based meta-analysis. More particularly, we consider the influence of the chosen group-level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE), which only uses peak locations, versus fixed effects and random effects meta-analysis, which take into account both peak location and height], and the number of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme to a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combining these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover, the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. However, it requires more studies compared to other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results.
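
    As context for the "random effects meta-analysis" option, the sketch below implements the standard DerSimonian-Laird estimator for pooling per-study effect sizes at a single location. The effects and variances are simulated stand-ins, not data from the study.

    ```python
    import numpy as np

    def dersimonian_laird(effects, variances):
        """Random-effects meta-analysis (DerSimonian-Laird tau^2 estimator)."""
        y, v = np.asarray(effects), np.asarray(variances)
        w = 1 / v
        fixed = np.sum(w * y) / np.sum(w)                 # fixed-effect pool
        q = np.sum(w * (y - fixed) ** 2)                  # Cochran's Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(y) - 1)) / c)           # between-study variance
        w_star = 1 / (v + tau2)
        mu = np.sum(w_star * y) / np.sum(w_star)
        return mu, tau2, np.sqrt(1 / np.sum(w_star))      # pooled effect, tau^2, SE

    # Hypothetical peak effect sizes from k = 10 studies at one location,
    # with true between-study variance tau^2 = 0.05.
    rng = np.random.default_rng(5)
    v = rng.uniform(0.02, 0.08, 10)
    y = rng.normal(0.4, np.sqrt(v + 0.05))
    print(dersimonian_laird(y, v))
    ```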

  2. Fractal and multifractal analyses of bipartite networks

    Science.gov (United States)

    Liu, Jin-Long; Wang, Jian; Yu, Zu-Guo; Xie, Xian-Hua

    2017-03-01

    Bipartite networks have attracted considerable interest in various fields. The fractality and multifractality of unipartite (classical) networks have been studied in recent years, but no prior work has studied these properties of bipartite networks. In this paper, we try to unfold the self-similarity structure of bipartite networks by performing fractal and multifractal analyses for a variety of real-world bipartite network data sets and models. First, we find fractality in some bipartite networks, including the CiteULike, Netflix, MovieLens (ml-20m) and Delicious data sets and the (u, v)-flower model. Meanwhile, we observe shifted power-law or exponential behavior in several other networks. We then focus on the multifractal properties of bipartite networks. Our results indicate that multifractality exists in those bipartite networks possessing fractality. To capture the inherent attributes of bipartite networks with two different types of nodes, we give different weights to the nodes of the different classes, and show the existence of multifractality in these node-weighted bipartite networks. In addition, for the data sets with ratings, we modify two existing algorithms for the fractal and multifractal analyses of edge-weighted unipartite networks to study the self-similarity of the corresponding edge-weighted bipartite networks. The results show that our modified algorithms are feasible and can effectively uncover the self-similarity structure of these edge-weighted bipartite networks and their corresponding node-weighted versions.
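
    The record does not spell out how fractality is measured; the usual procedure, assumed here, is greedy box covering in the spirit of Song et al., with the box dimension d_B read off from N_B(l_B) ~ l_B**(-d_B). The toy graph below stands in for the real data sets, whose sizes make the all-pairs distance step the practical bottleneck.

    ```python
    import itertools
    import networkx as nx
    import numpy as np

    def n_boxes(G, l_b):
        """Greedy box covering: nodes at distance >= l_b must sit in different
        boxes, so colour the 'conflict graph' and count the colours used."""
        dist = dict(nx.all_pairs_shortest_path_length(G))
        H = nx.Graph()
        H.add_nodes_from(G)
        for u, v in itertools.combinations(G, 2):
            if dist[u].get(v, np.inf) >= l_b:
                H.add_edge(u, v)
        return len(set(nx.coloring.greedy_color(H).values()))

    # Toy bipartite test graph; the real analyses used CiteULike, Netflix, etc.
    G = nx.complete_bipartite_graph(4, 6)
    sizes = [1, 2, 3]
    counts = [n_boxes(G, l) for l in sizes]
    # Box dimension d_B from the log-log slope of N_B(l_B) vs. l_B.
    d_b = -np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    print(counts, round(d_b, 2))
    ```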

  3. BWR Mark III containment analyses using a GOTHIC 8.0 3D model

    International Nuclear Information System (INIS)

    Jimenez, Gonzalo; Serrano, César; Lopez-Alonso, Emma; Molina, M del Carmen; Calvo, Daniel; García, Javier; Queral, César; Zuriaga, J. Vicente; González, Montserrat

    2015-01-01

    Highlights: • The development of a 3D GOTHIC code model of a BWR Mark-III containment is described. • Suppression pool modelling based on the POOLEX STB-20 and STB-16 experimental tests. • LOCA and SBO transients simulated to verify the behaviour of the 3D GOTHIC model. • Comparison between the 3D GOTHIC model and the MAAP4.07 model is conducted. • Accurate reproduction of pre-severe-accident conditions with the 3D GOTHIC model. - Abstract: The purpose of this study is to establish a detailed three-dimensional model of the Cofrentes NPP BWR/6 Mark III containment building using the containment code GOTHIC 8.0. This paper presents the model construction, the phenomenology tests conducted and the transients selected for the model evaluation. In order to study the proper settings for the model in the suppression pool, two experiments conducted with the experimental installation POOLEX have been simulated, allowing proper behaviour of the model to be obtained under different suppression pool phenomenologies. In the transient analyses, a Loss of Coolant Accident (LOCA) and a Station Blackout (SBO) transient have been performed. The main results of the simulations of those transients were qualitatively compared with the results obtained from simulations with the MAAP 4.07 Cofrentes NPP model, used by the plant for simulating severe accidents. From this comparison, a verification of the model in terms of pressurization, asymmetric discharges and high-pressure release was obtained. The completed model proved able to adequately simulate the thermal-hydraulic phenomena which occur in the containment during accident sequences.

  4. A chip-level modeling approach for rail span collapse and survivability analyses

    International Nuclear Information System (INIS)

    Marvis, D.G.; Alexander, D.R.; Dinger, G.L.

    1989-01-01

    A general semiautomated analysis technique has been developed for analyzing rail span collapse and survivability of VLSI microcircuits in high ionizing dose-rate radiation environments. Hierarchical macrocell modeling permits analyses at the chip level, and interactive graphical postprocessing provides rapid visualization of voltage, current and power distributions over an entire VLSIC. The technique is demonstrated for a 16k CMOS/SOI SRAM and a CMOS/SOS 8-bit multiplier. The authors also present an efficient method to treat memory arrays, as well as a three-dimensional integration technique to compute sapphire photoconduction from the design layout.

  5. Demographic origins of skewed operational and adult sex ratios: perturbation analyses of two-sex models.

    Science.gov (United States)

    Veran, Sophie; Beissinger, Steven R

    2009-02-01

    Skewed sex ratios - operational (OSR) and adult (ASR) - arise from sexual differences in reproductive behaviours and adult survival rates due to the cost of reproduction. However, skewed sex ratios at birth, sex-biased dispersal and immigration, and sexual differences in juvenile mortality may also contribute. We present a framework to decompose the roles of demographic traits in sex ratios using perturbation analyses of two-sex matrix population models. Metrics of sensitivity are derived from analyses of sensitivity, elasticity, life-table response experiments and life-stage simulation analyses, and applied to the stable stage distribution instead of lambda. We use these approaches to examine the causes of male-biased sex ratios in two populations of green-rumped parrotlets (Forpus passerinus) in Venezuela. Female local juvenile survival contributed the most to the unbalanced OSR and ASR, due to a female-biased dispersal rate, suggesting that sexual differences in philopatry can influence sex ratios more strongly than the cost of reproduction.
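
    The distinctive methodological move is perturbing the stable stage distribution rather than the growth rate lambda. A minimal numerical sketch of that idea, with an entirely hypothetical two-sex projection matrix (the real analysis used parrotlet vital rates and further structure such as dispersal):

    ```python
    import numpy as np

    def stable_stage(M):
        """Eigenvector of the dominant eigenvalue, normalised to sum to 1."""
        vals, vecs = np.linalg.eig(M)
        w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
        return w / w.sum()

    # Hypothetical two-sex matrix, stages = (juv F, ad F, juv M, ad M).
    M = np.array([[0.0, 0.9, 0.0, 0.0],   # daughters per adult female
                  [0.3, 0.7, 0.0, 0.0],   # female juvenile / adult survival
                  [0.0, 0.9, 0.0, 0.0],   # sons per adult female
                  [0.0, 0.0, 0.5, 0.8]])  # male juvenile / adult survival

    w = stable_stage(M)
    asr = w[3] / (w[1] + w[3])            # proportion of males among adults
    print("adult sex ratio:", round(asr, 3))

    # Sensitivity of the ASR to each vital rate, by numerical perturbation.
    eps = 1e-6
    S = np.zeros_like(M)
    for i, j in zip(*np.nonzero(M)):
        P = M.copy()
        P[i, j] += eps
        w2 = stable_stage(P)
        S[i, j] = (w2[3] / (w2[1] + w2[3]) - asr) / eps
    print(np.round(S, 3))
    ```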

  6. A STRONGLY COUPLED REACTOR CORE ISOLATION COOLING SYSTEM MODEL FOR EXTENDED STATION BLACK-OUT ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua [Idaho National Laboratory; Zhang, Hongbin [Idaho National Laboratory; Zou, Ling [Idaho National Laboratory; Martineau, Richard Charles [Idaho National Laboratory

    2015-03-01

    The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is the battery that maintains the logic circuits controlling the opening and/or closure of valves in the RCIC system, in order to control the RPV water level by shutting down the RCIC pump to avoid overfilling the RPV and flooding the steam line to the RCIC turbine. Almost all existing station black-out (SBO) accident analyses assume that loss of DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, where the turbine would then be disabled. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models of RCIC system components are needed to understand extended SBOs for BWRs. As part of the effort to develop the next-generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike traditional SBO simulations, where mass flow rates are typically given in the input file through time-dependent functions, the real mass flow rates through the turbine and pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operation curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components for the primary system of a BWR, as well as the safety

  7. Impact of sophisticated fog spray models on accident analyses

    International Nuclear Information System (INIS)

    Roblyer, S.P.; Owzarski, P.C.

    1978-01-01

    The N-Reactor confinement system release dose to the public in a postulated accident is reduced by washing the confinement atmosphere with fog sprays. This allows a low-pressure release of the confinement atmosphere, containing fission products, through filters and out an elevated stack. The current accident analysis required revision of the CORRAL code and of other codes such as CONTEMPT to properly model the N Reactor confinement as a system of multiple fog-sprayed compartments. In revising these codes, more sophisticated models for the fog sprays and iodine plateout were incorporated, removing some of the conservatism in the steam condensing rate, fission product washout and iodine plateout used in previous studies. The CORRAL code, which was used to describe the transport and deposition of airborne fission products in LWR containment systems for the Rasmussen Study, was revised to describe fog spray removal of molecular iodine (I2) and particulates in multiple compartments, for sprays having individual characteristics of on-off times, flow rates, fall heights, and drop sizes in changing containment atmospheres. During postulated accidents, the code determined the fission product removal rates internally rather than from input decontamination factors. A discussion is given of how the calculated plateout and washout rates vary with time throughout the analysis. The results of the accident analyses indicated that more credit could be given to fission product washout and plateout. An important finding was that the release of fission products to the atmosphere and the adsorption of fission products on the filters were significantly lower than previous studies had indicated.

  8. Radiobiological analyses based on cell cluster models

    International Nuclear Information System (INIS)

    Lin Hui; Jing Jia; Meng Damin; Xu Yuanying; Xu Liangfeng

    2010-01-01

    The influence of cell cluster dimension on EUD and TCP for targeted radionuclide therapy was studied using radiobiological methods. The radiobiological features of tumours with an activity-lack in the core were evaluated and analyzed by associating EUD, TCP and SF. The results show that EUD increases with tumour dimension under a homogeneous activity distribution. If the extra-cellular activity is taken into consideration, the EUD increases by 47%. With an activity-lack in the tumour centre and the requirement of TCP = 0.90, the α cross-fire influence of 211At could compensate an activity-lack of at most (48 μm)³ for the Nucleus source, but (72 μm)³ for the Cytoplasm, Cell Surface, Cell and Voxel sources. In the clinic, the physician could prefer the suggested dose of the Cell Surface source to guard local tumour control against under-dosing. Generally, TCP can clearly exhibit the difference in effect between under-dose and due-dose, but not between due-dose and over-dose, which makes TCP more suitable for choosing a therapy plan. EUD can clearly exhibit the differences between models and activity distributions, which makes it more suitable for research work. When using EUD to study the influence of inhomogeneous activity distributions, the configuration and volume of the compared models should be kept consistent. (authors)
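
    The record leaves EUD and TCP undefined; under the common survival-based definitions, assumed here in place of the paper's more detailed cell-cluster dosimetry, a cold region caused by an activity-lack depresses EUD and TCP far more than it depresses the mean dose. The α value and clonogen number below are illustrative.

    ```python
    import numpy as np

    def eud(dose, alpha=0.35):
        """Equivalent Uniform Dose (survival-based form): the uniform dose
        giving the same mean cell survival as the distribution `dose` [Gy]."""
        dose = np.asarray(dose, dtype=float)
        return -np.log(np.mean(np.exp(-alpha * dose))) / alpha

    def tcp(dose, n_clonogens=1e7, alpha=0.35):
        """Poisson TCP: probability that no clonogen survives."""
        sf = np.mean(np.exp(-alpha * np.asarray(dose, dtype=float)))
        return np.exp(-n_clonogens * sf)

    # Uniform irradiation vs. a cold spot (activity-lack) in 10% of the cells:
    uniform = np.full(1000, 60.0)
    cold = uniform.copy()
    cold[:100] = 40.0
    print(eud(uniform), eud(cold))   # EUD drops far more than the ~3% mean dose
    print(tcp(uniform), tcp(cold))   # TCP collapses from ~0.99 to ~0.43
    ```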

  9. A simple beam model to analyse the durability of adhesively bonded tile floorings in presence of shrinkage

    Directory of Open Access Journals (Sweden)

    S. de Miranda

    2014-07-01

    A simple beam model for the evaluation of tile debonding due to substrate shrinkage is presented. The tile-adhesive-substrate package is modeled as an Euler-Bernoulli beam resting on a two-layer elastic foundation. An effective discrete model for inter-tile grouting is introduced with the aim of modelling workmanship defects due to partially filled groutings. The model is validated using the results of a 2D FE model. Different defect configurations and adhesive typologies are analysed, focusing on the prediction of normal stresses in the adhesive layer under the assumption of Mode I failure of the adhesive.

  10. Quantitative Model for Economic Analyses of Information Security Investment in an Enterprise Information System

    Directory of Open Access Journals (Sweden)

    Bojanc Rok

    2012-11-01

    The paper presents a mathematical model for the optimal security-technology investment evaluation and decision-making processes based on the quantitative analysis of security risks and digital asset assessments in an enterprise. The model makes use of the quantitative analysis of different security measures that counteract individual risks, by identifying the information system processes in an enterprise and the potential threats. The model comprises the target security levels for all identified business processes and the probability of a security accident, together with the possible loss the enterprise may suffer. The selection of security technology is based on the efficiency of the selected security measures. Economic metrics are applied for the efficiency assessment and comparative analysis of different protection technologies. Unlike existing models for evaluation of security investments, the proposed model allows direct comparison and quantitative assessment of different security measures. The model supports in-depth analyses and computations providing quantitative assessments of different investment options, which translate into recommendations facilitating the selection of the best solution and the decision-making thereof. The model was tested using empirical examples with data from a real business environment.
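
    The record names "economic metrics" without listing them; a standard pair in this literature, assumed here rather than taken from the paper, is the Annualised Loss Expectancy (ALE) and the Return on Security Investment (ROSI), which already permit the direct quantitative comparison of measures that the model aims at:

    ```python
    def ale(sle, aro):
        """Annualised Loss Expectancy = Single Loss Expectancy x
        Annual Rate of Occurrence."""
        return sle * aro

    def rosi(ale_before, ale_after, annual_cost):
        """Return on Security Investment: risk reduction net of cost,
        per unit of cost."""
        return (ale_before - ale_after - annual_cost) / annual_cost

    # Hypothetical figures: a breach costing 200 kEUR expected every 2 years,
    # and a control cutting the occurrence rate by 80% for 25 kEUR/year.
    before = ale(200_000, 0.5)
    after = ale(200_000, 0.1)
    print(rosi(before, after, 25_000))  # 2.2 -> the investment pays off
    ```

    A ROSI above zero means the expected loss avoided exceeds the cost of the control, which is the comparison the model formalises across candidate measures.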

  11. Preliminary Analyses Showed Short-Term Mental Health Improvements after a Single-Day Manager Training.

    Science.gov (United States)

    Boysen, Elena; Schiller, Birgitta; Mörtl, Kathrin; Gündel, Harald; Hölzer, Michael

    2018-01-10

    Psychosocial working conditions attract more and more attention when it comes to mental health in the workplace. To support managers in dealing with their own as well as their employees' psychological risk factors, we conducted a specific manager training. Within this investigation, we wanted to learn about the training's effects and acceptance. A single-day manager training was provided in a large industrial company in Germany. The participants were asked to fill out questionnaires regarding their own physical and mental health condition as well as their working situation. Questionnaires were distributed at baseline and at 3-month and 12-month follow-up. The investigation is still ongoing; the current article focuses on short-term preliminary effects. Analyses only included participants who had already completed baseline and the three-month follow-up. Preliminary results from the three-month follow-up survey (n = 33, n_male = 30, M_age = 47.5) indicated positive changes in the managers' mental health condition as measured by the Patient Health Questionnaire for depression (PHQ-9: M_t1 = 3.82, M_t2 = 3.15). Training managers about common mental disorders and risk factors at the workplace within a single-day workshop seems to promote positive effects on their own mental health. In particular, working with the managers on their own early stress symptoms might have been an important element.

  12. Analysing Protocol Stacks for Services

    DEFF Research Database (Denmark)

    Gao, Han; Nielson, Flemming; Nielson, Hanne Riis

    2011-01-01

    We show an approach, CaPiTo, to model service-oriented applications using process algebras such that, on the one hand, we can achieve a certain level of abstraction without being overwhelmed by the underlying implementation details and, on the other hand, we respect the concrete industrial standards used for implementing the service-oriented applications. By doing so, we will be able not only to reason about applications at different levels of abstraction, but also to build a bridge between the views of researchers on formal methods and developers in industry. We apply our approach to the financial case study taken from Chapter 0-3. Finally, we develop a static analysis to analyse the security properties as they emerge at the level of concrete industrial protocols.

  13. Rockslide and Impulse Wave Modelling in the Vajont Reservoir by DEM-CFD Analyses

    Science.gov (United States)

    Zhao, T.; Utili, S.; Crosta, G. B.

    2016-06-01

    This paper investigates the generation of hydrodynamic water waves due to rockslides plunging into a water reservoir. Quasi-3D DEM analyses in plane strain by a coupled DEM-CFD code are adopted to simulate the rockslide from its onset to the impact with the still water and the subsequent generation of the wave. The employed numerical tools and the upscaling of hydraulic properties allow predicting a physical response in broad agreement with the observations, notwithstanding the assumptions and characteristics of the adopted methods. The results obtained by the coupled DEM-CFD approach are compared to those published in the literature and to those presented by Crosta et al. (Landslide spreading, impulse waves and modelling of the Vajont rockslide. Rock Mechanics, 2014) in a companion paper, obtained through an ALE-FEM method. The analyses performed along two cross sections are representative of the limit conditions of the eastern and western slope sectors. The maximum average rockslide velocity and the water wave velocity reach ca. 22 and 20 m/s, respectively. The maximum computed run-up amounts to ca. 120 and 170 m for the eastern and western lobe cross sections, respectively. These values are reasonably similar to those recorded during the event (ca. 130 and 190 m, respectively). The overall study therefore lays out a possible DEM-CFD framework for modelling the generation of the hydrodynamic wave due to the impact of a rapidly moving rockslide or rock-debris avalanche.

  14. Modelling and Analysing Access Control Policies in XACML 3.0

    DEFF Research Database (Denmark)

    Ramli, Carroline Dewi Puspa Kencana

    (cf. GM03, Mos05, Ris13), and manual analysis of the overall effect and consequences of a large XACML policy set is a very daunting and time-consuming task. In this thesis we address the problem of understanding the semantics of the access control policy language XACML, in particular XACML version 3.0. The main focus of this thesis is modelling and analysing access control policies in XACML 3.0. There are two main contributions in this thesis. First, we study and formalise XACML 3.0, in particular the Policy Decision Point (PDP). The concrete syntax of XACML is based on the XML format, while its standard semantics is described normatively using natural language. The use of English text in standardisation leads to the risk of misinterpretation and ambiguity. In order to avoid this drawback, we define an abstract syntax of XACML 3.0 and a formal XACML semantics. Second, we propose a logic-based XACML analysis...

  15. Risk Factor Analyses for the Return of Spontaneous Circulation in the Asphyxiation Cardiac Arrest Porcine Model

    Directory of Open Access Journals (Sweden)

    Cai-Jun Wu

    2015-01-01

    Background: Animal models of asphyxiation cardiac arrest (ACA) are frequently used in basic research to mirror the clinical course of cardiac arrest (CA). The rates of return of spontaneous circulation (ROSC) in ACA animal models are lower than those from studies that have utilized ventricular fibrillation (VF) animal models. The purpose of this study was to characterize the factors associated with ROSC in the ACA porcine model. Methods: Forty-eight healthy miniature pigs underwent endotracheal tube clamping to induce CA. Once induced, CA was maintained untreated for a period of 8 min. Two minutes following the initiation of cardiopulmonary resuscitation (CPR), defibrillation was attempted until ROSC was achieved or the animal died. To assess the factors associated with ROSC in this CA model, logistic regression analyses were performed on gender, the time of preparation, the amplitude spectrum area (AMSA) at the beginning of CPR and the pH at the beginning of CPR. A receiver-operating characteristic (ROC) curve was used to evaluate the predictive value of AMSA for ROSC. Results: ROSC was achieved in only 52.1% of the animals in this ACA porcine model. The multivariate logistic regression analyses revealed that ROSC significantly depended on the time of preparation, AMSA at the beginning of CPR and pH at the beginning of CPR. The area under the ROC curve for AMSA at the beginning of CPR in predicting ROSC was 0.878 (95% confidence interval: 0.773-0.983), and the optimum cut-off value was 15.62 (specificity 95.7% and sensitivity 80.0%). Conclusions: The time of preparation, AMSA and the pH at the beginning of CPR were associated with ROSC in this ACA porcine model. AMSA also predicted the likelihood of ROSC in this ACA animal model.
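
    The AUC-and-cutoff workflow from the Results can be reproduced on simulated data in a few lines; the AMSA distributions below are invented stand-ins, and the cut-off is picked by the Youden index, since the record does not state which criterion the authors used.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, roc_curve

    rng = np.random.default_rng(2)
    # Hypothetical AMSA values at the start of CPR: higher in animals that
    # later achieved ROSC, as in the study.
    amsa_rosc = rng.normal(22, 6, 25)
    amsa_no = rng.normal(12, 5, 23)
    y = np.r_[np.ones(25), np.zeros(23)]
    x = np.r_[amsa_rosc, amsa_no]

    print("AUC:", roc_auc_score(y, x))
    fpr, tpr, thr = roc_curve(y, x)
    # Optimum cut-off by the Youden index J = sensitivity + specificity - 1.
    best = np.argmax(tpr - fpr)
    print("cut-off:", thr[best], "sens:", tpr[best], "spec:", 1 - fpr[best])
    ```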

  16. Pathway models for analysing and managing the introduction of alien plant pests—an overview and categorization

    Science.gov (United States)

    J.C. Douma; M. Pautasso; R.C. Venette; C. Robinet; L. Hemerik; M.C.M. Mourits; J. Schans; W. van der Werf

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative estimates of introduction risks and the effectiveness of management options.

  17. The application of model with lumped parameters for transient condition analyses of NPP

    International Nuclear Information System (INIS)

    Stankovic, B.; Stevanovic, V.

    1985-01-01

    The transient behaviour of NPP Krsko during an accident in which the pressurizer spray valve sticks open has been simulated by a lumped-parameter model of the PWR coolant system components, developed at the Faculty of Mechanical Engineering, University of Belgrade. Elementary volumes, characterised by process and state parameters, and junctions, characterised by geometrical and flow parameters, form the basic structure of the physical model. The process parameters obtained by the RESI model show qualitative agreement with the measured values, to the degree to which the actions of the reactor safety engineered system and the emergency core cooling system are adequately modelled, in spite of the elementary physical model structure, the modelling of thermal processes in the reactor core only, and the equilibrium conditions assumed for the pressurizer and steam generator. The pressurizer pressure and liquid level predicted by the non-equilibrium pressurizer model SOP show good agreement until the HIPS (high pressure pumps) is activated. (author)

  18. Showing that the race model inequality is not violated

    DEFF Research Database (Denmark)

    Gondan, Matthias; Riehl, Verena; Blurton, Steven Paul

    2012-01-01

    Among the models proposed to explain redundancy gains, the most important are race models and coactivation models. Redundancy gains consistent with the race model have an upper limit, however, which is given by the well-known race model inequality (Miller, 1982). A number of statistical tests have been proposed for testing the race model inequality in single participants and in groups of participants. All of these tests use the race model as the null hypothesis, and rejection of the null hypothesis is considered evidence in favor of coactivation. We introduce a statistical test in which the race model prediction is the alternative hypothesis. This test controls...
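
    Miller's inequality bounds the redundant-condition RT distribution by the sum of the single-condition distributions, F_AV(t) <= F_A(t) + F_V(t); checking it on data reduces to comparing empirical CDFs, as in this sketch with simulated reaction times (all distributions invented):

    ```python
    import numpy as np

    def ecdf(sample, t):
        """Empirical CDF of a sample of RTs, evaluated at times t."""
        sample = np.sort(np.asarray(sample))
        return np.searchsorted(sample, t, side="right") / sample.size

    rng = np.random.default_rng(3)
    rt_a = rng.normal(420, 50, 200)    # unimodal condition A [ms]
    rt_v = rng.normal(440, 55, 200)    # unimodal condition V [ms]
    rt_av = rng.normal(370, 45, 200)   # redundant condition AV [ms]

    t = np.percentile(np.r_[rt_a, rt_v, rt_av], np.arange(5, 100, 5))
    violation = ecdf(rt_av, t) - (ecdf(rt_a, t) + ecdf(rt_v, t))
    # Race model inequality: F_AV(t) <= F_A(t) + F_V(t) for all t, so
    # positive values indicate a violation (evidence for coactivation).
    print(np.round(violation, 3), bool(violation.max() > 0))
    ```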

  19. LOD score exclusion analyses for candidate genes using random population samples.

    Science.gov (United States)

    Deng, H W; Li, J; Recker, R R

    2001-05-01

    While extensive analyses have been conducted to test for, no formal analyses have been conducted to test against, the importance of candidate genes with random population samples. We develop a LOD score approach for exclusion analyses of candidate genes with random population samples. Under this approach, specific genetic effects and inheritance models at candidate genes can be analysed and, if a LOD score is ≤ -2.0, the locus can be excluded from having an effect larger than that specified. Computer simulations show that, with sample sizes often employed in association studies, this approach has high power to exclude a gene from having moderate genetic effects. In contrast to regular association analyses, population admixture will not affect the robustness of our analyses; in fact, it renders our analyses more conservative and thus any significant exclusion result is robust. Our exclusion analysis complements association analysis for candidate genes in random population samples and is parallel to the exclusion mapping analyses that may be conducted in linkage analyses with pedigrees or relative pairs. The usefulness of the approach is demonstrated by an application to test the importance of vitamin D receptor and estrogen receptor genes underlying the differential risk to osteoporotic fractures.
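
    The exclusion logic can be sketched numerically: fix a hypothesized effect size at the candidate locus, compute log10 of the likelihood ratio against the no-effect model in a random population sample, and exclude any effect whose LOD is at or below -2. The Gaussian phenotype model and all numbers below are illustrative assumptions, not the paper's formulation.

    ```python
    import numpy as np
    from scipy import stats

    def lod_for_effect(pheno, geno, effect):
        """log10 likelihood ratio of a specified additive genotype effect
        versus no effect, in a random population sample."""
        resid_null = pheno - pheno.mean()
        adj = pheno - effect * geno
        resid_alt = adj - adj.mean()
        s = pheno.std()  # common residual scale, kept fixed for simplicity
        ll_alt = stats.norm.logpdf(resid_alt, scale=s).sum()
        ll_null = stats.norm.logpdf(resid_null, scale=s).sum()
        return (ll_alt - ll_null) / np.log(10)

    rng = np.random.default_rng(4)
    geno = rng.binomial(2, 0.3, 500)    # candidate-gene genotypes (0/1/2)
    pheno = rng.normal(0, 1, 500)       # phenotype with no true effect
    for effect in (0.1, 0.3, 0.5):
        lod = lod_for_effect(pheno, geno, effect)
        print(effect, round(lod, 1), "excluded" if lod <= -2.0 else "not excluded")
    ```

    With no true effect in the sample, larger hypothesized effects drive the LOD well below -2 and are excluded, while small effects remain compatible with the data.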

  20. Thermal and hydraulic analyses of the System 81 cold traps

    Energy Technology Data Exchange (ETDEWEB)

    Kim, K.

    1977-06-15

    Thermal and hydraulic analyses of the System 81 Type I and II cold traps were completed, except for the thermal transient analysis. The results are evaluated, discussed, and reported. Analytical models were developed to determine the physical dimensions of the cold traps and to predict their performance. The FFTF cold trap crystallizer performance was simulated using the thermal model. This simulation shows that the analytical model developed predicts reasonably conservative temperatures. Pressure drop and sodium residence time calculations indicate that the present design will meet the requirements specified in the E-Specification. Steady-state temperature data for the critical regions were generated to assess the magnitude of the thermal stress.

  1. The BACHD Rat Model of Huntington Disease Shows Signs of Fronto-Striatal Dysfunction in Two Operant Conditioning Tests of Short-Term Memory.

    Directory of Open Access Journals (Sweden)

    Erik Karl Håkan Clemensson

    The BACHD rat is a recently developed transgenic animal model of Huntington disease, a progressive neurodegenerative disorder characterized by extensive loss of striatal neurons. Cognitive impairments are common among patients, and characterization of similar deficits in animal models of the disease is therefore of interest. The present study assessed the BACHD rats' performance in the delayed alternation and the delayed non-matching to position test, two Skinner box-based tests of short-term memory function. The transgenic rats showed impaired performance in both tests, indicating general problems with handling basic aspects of the tests, while short-term memory appeared to be intact. Similar phenotypes have been found in rats with fronto-striatal lesions, suggesting that Huntington disease-related neuropathology might be present in the BACHD rats. Further analyses indicated that the performance deficit in the delayed alternation test might be due to impaired inhibitory control, which has also been implicated in Huntington disease patients. The study ultimately suggests that the BACHD rats might suffer from neuropathology and cognitive impairments reminiscent of those of Huntington disease patients.

  2. The BACHD Rat Model of Huntington Disease Shows Signs of Fronto-Striatal Dysfunction in Two Operant Conditioning Tests of Short-Term Memory.

    Science.gov (United States)

    Clemensson, Erik Karl Håkan; Clemensson, Laura Emily; Riess, Olaf; Nguyen, Huu Phuc

    2017-01-01

    The BACHD rat is a recently developed transgenic animal model of Huntington disease, a progressive neurodegenerative disorder characterized by extensive loss of striatal neurons. Cognitive impairments are common among patients, and characterization of similar deficits in animal models of the disease is therefore of interest. The present study assessed the BACHD rats' performance in the delayed alternation and the delayed non-matching to position test, two Skinner box-based tests of short-term memory function. The transgenic rats showed impaired performance in both tests, indicating general problems with handling basic aspects of the tests, while short-term memory appeared to be intact. Similar phenotypes have been found in rats with fronto-striatal lesions, suggesting that Huntington disease-related neuropathology might be present in the BACHD rats. Further analyses indicated that the performance deficit in the delayed alternation test might be due to impaired inhibitory control, which has also been implicated in Huntington disease patients. The study ultimately suggests that the BACHD rats might suffer from neuropathology and cognitive impairments reminiscent of those of Huntington disease patients.

  3. Integrated tokamak modelling with the fast-ion Fokker–Planck solver adapted for transient analyses

    International Nuclear Information System (INIS)

    Toma, M; Hamamatsu, K; Hayashi, N; Honda, M; Ide, S

    2015-01-01

    Integrated tokamak modelling that enables the simulation of an entire discharge period is indispensable for designing advanced tokamak plasmas. For this purpose, we extend the integrated code TOPICS to make it more suitable for transient analyses in the fast-ion part. The fast-ion Fokker–Planck solver is integrated into TOPICS at the same level as the bulk transport solver so that the time evolutions of the fast ions and the bulk plasma are consistent with each other as well as with the equilibrium magnetic field. The fast-ion solver simultaneously handles neutral beam-injected ions and alpha particles. Parallelisation of the fast-ion solver, in addition to its computational lightness owing to a dimensional reduction in phase space, enables transient analyses over long periods on the order of tens of seconds. The fast-ion Fokker–Planck calculation is compared with an orbit-following Monte Carlo calculation and confirmed to be in good agreement. The integrated code is applied to ramp-up simulations for JT-60SA and ITER to confirm its capability and effectiveness in transient analyses. In the integrated simulations, the coupled evolution of the fast ions, plasma profiles, and equilibrium magnetic fields is presented. In addition, the electric acceleration effect on fast ions is shown and discussed. (paper)

  4. A Formal Model to Analyse the Firewall Configuration Errors

    Directory of Open Access Journals (Sweden)

    T. T. Myo

    2015-01-01

    Full Text Available The firewall is widely known as a brandmauer (security-edge gateway). To provide the demanded security, the firewall has to be appropriately adjusted, i.e. configured. Unfortunately, even skilled administrators may make mistakes when configuring, resulting in a decreased level of network security and in the infiltration of undesirable packets into the network. The network can be exposed to various threats and attacks. One of the mechanisms used to ensure network security is the firewall. The firewall is a network component which, using a security policy, controls packets passing through the borders of a secured network. The security policy represents a set of rules. Packet filters work in stateless mode, without inspection of state: they examine packets as independent objects. Rules take the following form: (condition, action). The firewall analyses the entering traffic based on the IP addresses of the sender and recipient, the port numbers of the sender and recipient, and the protocol used. When a packet meets a rule's conditions, the action specified in the rule is carried out; it can be: allow, deny. The aim of this article is to develop tools to analyse a firewall configuration with inspection of states. The input data are a file with the set of rules. It is required to present the analysis of a security policy in an informative graphic form as well as to reveal inconsistencies in the rules. The article presents a security policy visualization algorithm and a program which shows how the firewall rules act on all possible packets. To represent the result in an intelligible form, the concept of the equivalence region is introduced. Our task is for the program to display the results of rule actions on packets in a convenient graphic form as well as to reveal contradictions between the rules. One of the problems is the large number of dimensions. As noted above, the following parameters are specified in a rule: source IP address, destination IP
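
    As an illustration (not from the article), the stateless first-match semantics described above, an ordered list of (condition, action) rules evaluated against a packet's address/port/protocol fields, can be sketched in a few lines; the rule set and field layout are invented:

        from dataclasses import dataclass
        from ipaddress import ip_address, ip_network
        from typing import Optional

        @dataclass
        class Rule:
            src: str               # source network, e.g. "10.0.0.0/8"
            dst: str               # destination network
            dport: Optional[int]   # destination port (None = any)
            proto: Optional[str]   # "tcp"/"udp" (None = any)
            action: str            # "allow" or "deny"

        def first_match(rules, src, dst, dport, proto):
            """Return the action of the first rule whose condition the packet meets."""
            for r in rules:
                if (ip_address(src) in ip_network(r.src)
                        and ip_address(dst) in ip_network(r.dst)
                        and r.dport in (None, dport)
                        and r.proto in (None, proto)):
                    return r.action
            return "deny"  # implicit default policy

        rules = [Rule("10.0.0.0/8", "0.0.0.0/0", 80, "tcp", "allow"),
                 Rule("0.0.0.0/0", "0.0.0.0/0", None, None, "deny")]
        print(first_match(rules, "10.1.2.3", "93.184.216.34", 80, "tcp"))  # allow

    A rule that can never fire because earlier rules already cover all of its packets is exactly the kind of contradiction the equivalence regions are meant to expose.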

  5. Prediction of Seismic Slope Displacements by Dynamic Stick-Slip Analyses

    International Nuclear Information System (INIS)

    Ausilio, Ernesto; Costanzo, Antonio; Silvestri, Francesco; Tropeano, Giuseppe

    2008-01-01

    A good working balance between simplicity and reliability in assessing seismic slope stability is represented by displacement-based methods, in which the effects of deformability and ductility can be either decoupled or coupled in the dynamic analyses. In this paper, a 1D lumped mass ''stick-slip'' model is developed, accounting for soil heterogeneity and non-linear behaviour, with a base sliding mechanism at a potential rupture surface. The results of the preliminary calibration show a good agreement with frequency-domain site response analysis in no-slip conditions. The comparison with rigid sliding block analyses and with the decoupled approach proves that the stick-slip procedure can become increasingly unconservative for soft soils and deep sliding depths.
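
    For orientation, the rigid sliding-block analyses used as a comparison baseline follow Newmark's scheme: the block slides whenever ground acceleration exceeds the yield acceleration, and the relative velocity is integrated until it returns to zero. A minimal one-directional sketch with a synthetic input motion and an assumed yield acceleration:

        import numpy as np

        def newmark_displacement(acc, dt, a_yield):
            """Rigid-block sliding displacement for ground acceleration acc (m/s^2)."""
            rel_vel, disp = 0.0, 0.0
            for a in acc:
                if rel_vel > 0.0 or a > a_yield:
                    rel_acc = a - a_yield                       # excess acceleration drives sliding
                    rel_vel = max(rel_vel + rel_acc * dt, 0.0)  # sliding stops when v returns to 0
                    disp += rel_vel * dt
            return disp

        # Synthetic 2 Hz pulse train, 0.3 g peak; yield acceleration 0.1 g (assumed).
        dt = 0.005
        t = np.arange(0.0, 10.0, dt)
        acc = 0.3 * 9.81 * np.sin(2 * np.pi * 2.0 * t)
        print(f"permanent displacement = {newmark_displacement(acc, dt, 0.1 * 9.81):.3f} m")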

  6. Uncertainty analyses of the calibrated parameter values of a water quality model

    Science.gov (United States)

    Rode, M.; Suhr, U.; Lindenschmidt, K.-E.

    2003-04-01

    For river basin management, water quality models are increasingly used for the analysis and evaluation of different management measures. However, substantial uncertainties exist in parameter values depending on the available calibration data. In this paper an uncertainty analysis for a water quality model is presented, which considers the impact of available model calibration data and the variance of input variables. The investigation was conducted based on four extensive flow-time-related longitudinal surveys in the River Elbe in the years 1996 to 1999, with varying discharges and seasonal conditions. For the model calculations the deterministic model QSIM of the BfG (Germany) was used. QSIM is a one-dimensional water quality model and uses standard algorithms for hydrodynamics and phytoplankton dynamics in running waters, e.g. Michaelis–Menten/Monod kinetics, which are used in a wide range of models. The multi-objective calibration of the model was carried out with the nonlinear parameter estimator PEST. The results show that for individual flow-time-related measuring surveys very good agreement between model calculations and measured values can be obtained. If these parameters are applied to deviating boundary conditions, substantial errors in the model calculations can occur. These uncertainties can be decreased with an enlarged calibration database: more reliable model parameters can be identified, which supply reasonable results for broader boundary conditions. Extending the application of the parameter set to a wider range of water quality conditions leads to a slight reduction of the model precision for the specific water quality situation. Moreover, the investigations show that highly variable water quality variables like algal biomass always allow a smaller forecast accuracy than variables with lower coefficients of variation, such as nitrate.
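
    The Monod/Michaelis–Menten kinetics mentioned above make the nutrient limitation concrete; a minimal sketch with invented parameter values (not QSIM's):

        def monod_growth(mu_max, substrate, k_s):
            """Monod/Michaelis-Menten specific growth rate: mu = mu_max * S / (K_s + S)."""
            return mu_max * substrate / (k_s + substrate)

        # Illustrative values only: mu_max = 2.0 1/d, half-saturation K_s = 0.05 mg/L.
        for s in (0.01, 0.05, 0.5):   # nitrate-like substrate concentrations, mg/L
            print(f"S = {s:5.2f} mg/L -> mu = {monod_growth(2.0, s, 0.05):.2f} 1/d")

    At S = K_s the rate is half of mu_max, which is why half-saturation constants are the natural calibration targets for PEST-style parameter estimation.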

  7. Modeling and analysing storage systems in agricultural biomass supply chain for cellulosic ethanol production

    International Nuclear Information System (INIS)

    Ebadian, Mahmood; Sowlati, Taraneh; Sokhansanj, Shahab; Townley-Smith, Lawrence; Stumborg, Mark

    2013-01-01

    Highlights: ► Studied the agricultural biomass supply chain for cellulosic ethanol production. ► Evaluated the impact of storage systems on different supply chain actors. ► Developed a combined simulation/optimization model to evaluate storage systems. ► Compared two satellite storage systems with roadside storage in terms of costs and emitted CO₂. ► SS would lead to a more cost-efficient supply chain compared to roadside storage. -- Abstract: In this paper, a combined simulation/optimization model is developed to better understand and evaluate the impact of the storage systems on the costs incurred by each actor in the agricultural biomass supply chain, including farmers, hauling contractors and the cellulosic ethanol plant. The optimization model prescribes the optimum number and location of farms and storages. It also determines the supply radius, the number of farms required to secure the annual supply of biomass, and the assignment of farms to storage locations. Given the specific design of the supply chain determined by the optimization model, the simulation model determines the number of required machines for each operation, their daily working schedule and utilization rates, along with the capacities of storages. To evaluate the impact of the storage systems on the delivered costs, three storage systems are modeled and compared: a roadside storage (RS) system and two satellite storage (SS) systems, namely SS with fixed hauling distance (SF) and SS with variable hauling distance (SV). In all storage systems, it is assumed that the loading equipment is dedicated to storage locations. The results obtained from a real case study provide detailed cost figures for each storage system, since the developed model analyses the supply chain on an hourly basis and considers time-dependence and stochasticity of the supply chain. Comparison of the storage systems shows SV would outperform SF and RS by reducing the total delivered cost by 8% and 6%, respectively.

  8. Lixisenatide, a drug developed to treat type 2 diabetes, shows neuroprotective effects in a mouse model of Alzheimer's disease.

    Science.gov (United States)

    McClean, Paula L; Hölscher, Christian

    2014-11-01

    Type 2 diabetes is a risk factor for developing Alzheimer's disease (AD). In the brains of AD patients, insulin signalling is desensitised. The incretin hormone glucagon-like peptide-1 (GLP-1) facilitates insulin signalling, and analogues such as liraglutide are on the market as treatments for type 2 diabetes. We have previously shown that liraglutide has neuroprotective effects in the APPswe/PS1ΔE9 mouse model of AD. Here, we test the GLP-1 receptor agonist lixisenatide in the same mouse model and compare the effects to liraglutide. After ten weeks of daily i.p. injections of APP/PS1 mice with liraglutide (2.5 or 25 nmol/kg), lixisenatide (1 or 10 nmol/kg) or saline, starting at an age when amyloid plaques had already formed, performance in an object recognition task was improved in APP/PS1 mice by both drugs at all doses tested. In analyses of synaptic plasticity in the hippocampus, LTP was strongly increased in APP/PS1 mice by either drug. Lixisenatide (1 nmol/kg) was most effective. The reduction of synapse numbers seen in APP/PS1 mice was prevented by the drugs. The amyloid plaque load and dense-core Congo red positive plaque load in the cortex were reduced by both drugs at all doses. The chronic inflammation response (microglial activation) was also reduced by all treatments. The results demonstrate that the GLP-1 receptor agonists liraglutide and lixisenatide, which are on the market as treatments for type 2 diabetes, show promise as potential drug treatments of AD. Lixisenatide was equally effective at a lower dose compared to liraglutide in some of the parameters measured. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Normalisation genes for expression analyses in the brown alga model Ectocarpus siliculosus

    Directory of Open Access Journals (Sweden)

    Rousvoal Sylvie

    2008-08-01

    Full Text Available Abstract Background Brown algae are multi-cellular plant organisms occupying most of the world's coasts and are essential actors in the constitution of ecological niches at the shoreline. Ectocarpus siliculosus is an emerging model for brown algal research. Its genome has been sequenced, and several tools are being developed to perform analyses at different levels of cell organization, including transcriptomic expression analyses. Several topics, including physiological responses to osmotic stress and to exposure to contaminants and solvents, are being studied in order to better understand the adaptive capacity of brown algae to pollution and environmental changes. A series of genes that can be used to normalise expression analyses is required for these studies. Results We monitored the expression of 13 genes under 21 different culture conditions. These included genes encoding proteins and factors involved in protein translation (ribosomal protein 26S, EF1alpha, IF2A, IF4E), protein degradation (ubiquitin, ubiquitin-conjugating enzyme) or folding (cyclophilin), proteins involved in both the structure of the cytoskeleton (tubulin alpha, actin, actin-related proteins) and its trafficking function (dynein), as well as a protein implicated in carbon metabolism (glucose 6-phosphate dehydrogenase). The stability of their expression level was assessed using the Ct range, and by applying both the geNorm and the Normfinder principles of calculation. Conclusion Comparisons of the data obtained with the three methods of calculation indicated that EF1alpha (EF1a) was the best reference gene for normalisation. The normalisation factor should be calculated with at least two genes, alpha tubulin, ubiquitin-conjugating enzyme or actin-related proteins being good partners of EF1a. Our results exclude actin as a good normalisation gene and, in this, are in agreement with previous studies in other organisms.
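
    As an aside (not from the paper), the geNorm stability measure used above is easy to state: for each candidate gene, M is the mean standard deviation of its pairwise log2 expression ratios with every other candidate across samples, and a lower M means a more stable reference. A sketch with random data standing in for real expression quantities:

        import numpy as np

        def genorm_m(expr):
            """geNorm M per gene; expr is a (genes x samples) array of expression quantities."""
            log_expr = np.log2(expr)
            n_genes = expr.shape[0]
            m_values = []
            for j in range(n_genes):
                # std of pairwise log-ratios against every other candidate gene
                sds = [np.std(log_expr[j] - log_expr[k], ddof=1)
                       for k in range(n_genes) if k != j]
                m_values.append(np.mean(sds))
            return np.array(m_values)

        rng = np.random.default_rng(0)
        expr = rng.lognormal(mean=5.0, sigma=0.3, size=(5, 21))  # 5 genes, 21 conditions
        print(genorm_m(expr).round(3))  # lowest value = most stable reference gene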

  10. Uncertainty Analyses and Strategy

    International Nuclear Information System (INIS)

    Kevin Coppersmith

    2001-01-01

    The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called ''conservative'' assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the ''reasonable assurance'' approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository

  11. Systematic review of model-based analyses reporting the cost-effectiveness and cost-utility of cardiovascular disease management programs.

    Science.gov (United States)

    Maru, Shoko; Byrnes, Joshua; Whitty, Jennifer A; Carrington, Melinda J; Stewart, Simon; Scuffham, Paul A

    2015-02-01

    The reported cost-effectiveness of cardiovascular disease management programs (CVD-MPs) is highly variable, potentially leading to different funding decisions. This systematic review evaluates published modeled analyses to compare study methods and quality. Articles were included if an incremental cost-effectiveness ratio (ICER) or incremental cost-utility ratio (ICUR) was reported, the intervention was a multi-component program designed to manage or prevent a cardiovascular disease condition, and it addressed all domains specified in the American Heart Association Taxonomy for Disease Management. Nine articles (reporting 10 clinical outcomes) were included. Eight cost-utility and two cost-effectiveness analyses targeted hypertension (n=4), coronary heart disease (n=2), coronary heart disease plus stroke (n=1), heart failure (n=2) and hyperlipidemia (n=1). Study perspectives included the healthcare system (n=5), societal and fund holders (n=1), a third-party payer (n=3), or were not explicitly stated (n=1). All analyses were modeled based on interventions of one to two years' duration. Time horizons ranged from two years (n=1) and 10 years (n=1) to lifetime (n=8). Model structures included Markov models (n=8), 'decision analytic models' (n=1), or were not explicitly stated (n=1). Considerable variation was observed in clinical and economic assumptions and reporting practices. Of all ICERs/ICURs reported, including those of subgroups (n=16), four were above a US$50,000 acceptability threshold, six were below and six were dominant. The majority of CVD-MPs were reported to have favorable economic outcomes, but 25% were at unacceptably high cost for the outcomes. Use of standardized reporting tools should increase transparency and inform what drives the cost-effectiveness of CVD-MPs. © The European Society of Cardiology 2014.
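
    The ICERs/ICURs being compared are simple incremental ratios; a minimal sketch with invented cost and QALY figures, judged against the US$50,000 threshold used in the review:

        def icer(cost_new, cost_old, effect_new, effect_old):
            """Incremental cost-effectiveness ratio: extra cost per extra unit of effect.

            Returns None when the new program dominates (cheaper and more effective).
            """
            d_cost = cost_new - cost_old
            d_effect = effect_new - effect_old
            if d_cost <= 0 and d_effect > 0:
                return None  # dominant
            return d_cost / d_effect

        # Hypothetical program: +$2,400 per patient, +0.08 QALYs vs. usual care.
        ratio = icer(14_400, 12_000, 4.58, 4.50)
        print(f"ICUR = ${ratio:,.0f} per QALY; acceptable at $50,000/QALY: {ratio < 50_000}")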

  12. Modeling Patient No-Show History and Predicting Future Outpatient Appointment Behavior in the Veterans Health Administration.

    Science.gov (United States)

    Goffman, Rachel M; Harris, Shannon L; May, Jerrold H; Milicevic, Aleksandra S; Monte, Robert J; Myaskovsky, Larissa; Rodriguez, Keri L; Tjader, Youxu C; Vargas, Dominic L

    2017-05-01

    Missed appointments reduce the efficiency of the health care system and negatively impact access to care for all patients. Identifying patients at risk for missing an appointment could help health care systems and providers better target interventions to reduce patient no-shows. Our aim was to develop and test a predictive model that identifies patients that have a high probability of missing their outpatient appointments. Demographic information, appointment characteristics, and attendance history were drawn from the existing data sets from four Veterans Affairs health care facilities within six separate service areas. Past attendance behavior was modeled using an empirical Markov model based on up to 10 previous appointments. Using logistic regression, we developed 24 unique predictive models. We implemented the models and tested an intervention strategy using live reminder calls placed 24, 48, and 72 hours ahead of time. The pilot study targeted 1,754 high-risk patients, whose probability of missing an appointment was predicted to be at least 0.2. Our results indicate that three variables were consistently related to a patient's no-show probability in all 24 models: past attendance behavior, the age of the appointment, and having multiple appointments scheduled on that day. After the intervention was implemented, the no-show rate in the pilot group was reduced from the expected value of 35% to 12.16% (p value < 0.0001). The predictive model accurately identified patients who were more likely to miss their appointments. Applying the model in practice enables clinics to apply more intensive intervention measures to high-risk patients. Reprint & Copyright © 2017 Association of Military Surgeons of the U.S.
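
    As an illustration (not the authors' code), the two core ingredients described above, an empirical Markov estimate of attendance behaviour from up to 10 past appointments fed into a logistic regression together with appointment age and same-day multiplicity, can be sketched with simulated data standing in for VA records:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        def markov_no_show_prob(history):
            """Empirical P(next no-show | current state) from a 0/1 history (1 = no-show)."""
            if len(history) < 2:
                return 0.2  # assumed prior for thin histories
            prev, curr = np.array(history[:-1]), np.array(history[1:])
            mask = prev == history[-1]       # transitions out of the patient's current state
            return float(curr[mask].mean()) if mask.any() else float(np.mean(history))

        rng = np.random.default_rng(1)
        histories = [rng.binomial(1, p, size=10).tolist()
                     for p in rng.uniform(0.05, 0.5, size=500)]
        lead_days = rng.integers(1, 60, size=500)     # "age of the appointment"
        multi_appt = rng.binomial(1, 0.3, size=500)   # multiple appointments that day
        X = np.column_stack([[markov_no_show_prob(h) for h in histories],
                             lead_days, multi_appt])
        y = rng.binomial(1, X[:, 0])                  # simulated attendance outcomes
        model = LogisticRegression().fit(X, y)
        high_risk = model.predict_proba(X)[:, 1] >= 0.2   # pilot threshold from the study
        print(f"{high_risk.sum()} of 500 patients flagged for reminder-call intervention")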

  13. Performance and Vibration Analyses of Lift-Offset Helicopters

    Directory of Open Access Journals (Sweden)

    Jeong-In Go

    2017-01-01

    Full Text Available A validation study on the performance and vibration analyses of the XH-59A compound helicopter is conducted to establish techniques for the comprehensive analysis of lift-offset compound helicopters. This study considers the XH-59A lift-offset compound helicopter using a rigid coaxial rotor system as a verification model. CAMRAD II (Comprehensive Analytical Method of Rotorcraft Aerodynamics and Dynamics II), a comprehensive analysis code, is used as a tool for the performance, vibration, and loads analyses. A general free wake model, which is more sophisticated than the other available wake models, is used to obtain good results for the comprehensive analysis. Performance analyses of the XH-59A helicopter with and without auxiliary propulsion are conducted in various flight conditions. In addition, vibration analyses of the XH-59A compound helicopter configuration are conducted in the forward flight condition. The present comprehensive analysis results are in good agreement with the flight test and previous analyses. Therefore, techniques for the comprehensive analysis of lift-offset compound helicopters are appropriately established. Furthermore, the rotor lifts are calculated for the XH-59A lift-offset compound helicopter in the forward flight condition to investigate the airloads characteristics of the ABC™ (Advancing Blade Concept) rotor.

  14. SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.

    2014-10-28

    PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analyses employ a model in which all pieces of the column degrade at the same time. Information that may be useful in determining the distribution of Tc-99 in the various SDUs throughout time and in determining flow balances for the SDUs was extracted from the analyses.

  15. Multicollinearity in prognostic factor analyses using the EORTC QLQ-C30: identification and impact on model selection.

    Science.gov (United States)

    Van Steen, Kristel; Curran, Desmond; Kramer, Jocelyn; Molenberghs, Geert; Van Vreckem, Ann; Bottomley, Andrew; Sylvester, Richard

    2002-12-30

    Clinical and quality of life (QL) variables from an EORTC clinical trial of first line chemotherapy in advanced breast cancer were used in a prognostic factor analysis of survival and response to chemotherapy. For response, different final multivariate models were obtained from forward and backward selection methods, suggesting a disconcerting instability. Quality of life was measured using the EORTC QLQ-C30 questionnaire completed by patients. Subscales on the questionnaire are known to be highly correlated, and therefore it was hypothesized that multicollinearity contributed to model instability. A correlation matrix indicated that global QL was highly correlated with 7 out of 11 variables. In a first attempt to explore multicollinearity, we used global QL as dependent variable in a regression model with other QL subscales as predictors. Afterwards, standard diagnostic tests for multicollinearity were performed. An exploratory principal components analysis and factor analysis of the QL subscales identified at most three important components and indicated that inclusion of global QL made minimal difference to the loadings on each component, suggesting that it is redundant in the model. In a second approach, we advocate a bootstrap technique to assess the stability of the models. Based on these analyses and since global QL exacerbates problems of multicollinearity, we therefore recommend that global QL be excluded from prognostic factor analyses using the QLQ-C30. The prognostic factor analysis was rerun without global QL in the model, and selected the same significant prognostic factors as before. Copyright 2002 John Wiley & Sons, Ltd.
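
    Standard collinearity diagnostics of the kind applied above can be reproduced with variance inflation factors: regress each subscale on the remaining ones and compute VIF = 1/(1 - R²), with values above roughly 10 flagging problematic collinearity. A sketch with simulated correlated subscales (not the EORTC data):

        import numpy as np
        from sklearn.linear_model import LinearRegression

        def vif(X):
            """Variance inflation factor for each column of predictor matrix X."""
            out = []
            for j in range(X.shape[1]):
                others = np.delete(X, j, axis=1)
                r2 = LinearRegression().fit(others, X[:, j]).score(others, X[:, j])
                out.append(1.0 / (1.0 - r2))
            return np.array(out)

        rng = np.random.default_rng(0)
        base = rng.normal(size=(200, 1))                    # shared component (e.g. global QL)
        subscales = base + 0.4 * rng.normal(size=(200, 6))  # six correlated subscales
        print(vif(subscales).round(1))   # large values indicate redundant predictors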

  16. Analyses of non-fatal accidents in an opencast mine by logistic regression model - a case study.

    Science.gov (United States)

    Onder, Seyhan; Mutlu, Mert

    2017-09-01

    Accidents cause major damage to both workers and enterprises in the mining industry. To reduce the number of occupational accidents, these incidents should be properly registered and carefully analysed. This study examines occupational accident records from the opencast coal mine of the Aegean Lignite Enterprise (ELI) of Turkish Coal Enterprises (TKI) in Soma between 2006 and 2011. A total of 231 occupational accidents were analysed for this study. The accident records were categorized into seven groups: area, reason, occupation, part of body, age, shift hour and lost days. The SPSS package program was used in this study for logistic regression analyses, which predicted the probability of accidents resulting in greater or fewer than 3 lost workdays for non-fatal injuries. The social facilities area of the surface installations, workshops, and opencast mining areas are the areas with the highest probability of accidents with greater than 3 lost workdays for non-fatal injuries, while the reasons with the highest probability for these types of accidents are transporting and manual handling. Additionally, the model was tested on such reported accidents that occurred in 2012 at the ELI in Soma and correctly estimated the probability of exposure to accidents with lost workdays in 70% of cases.
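
    A sketch of this kind of binary logistic fit (outcome: more vs. fewer than 3 lost workdays), written with statsmodels so the exponentiated coefficients read directly as odds ratios; the predictors and data below are invented, not the ELI records:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 231  # same size as the analysed record set
        df = pd.DataFrame({
            "area": rng.choice(["opencast", "workshop", "social_facilities"], n),
            "reason": rng.choice(["manual_handling", "transporting", "machinery"], n),
            "age": rng.integers(20, 60, n),
        })
        # Simulate an outcome loosely tied to the predictors.
        logit = 0.8 * (df["reason"] == "transporting") + 0.02 * (df["age"] - 40)
        df["gt3_lost_days"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

        fit = smf.logit("gt3_lost_days ~ C(area) + C(reason) + age", data=df).fit(disp=0)
        print(np.exp(fit.params).round(2))   # odds ratios for >3 lost workdays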

  17. Exploratory multinomial logit model-based driver injury severity analyses for teenage and adult drivers in intersection-related crashes.

    Science.gov (United States)

    Wu, Qiong; Zhang, Guohui; Ci, Yusheng; Wu, Lina; Tarefder, Rafiqul A; Alcántara, Adélamar Dely

    2016-05-18

    Teenage drivers are more likely to be involved in severely incapacitating and fatal crashes compared to adult drivers. Moreover, because two thirds of urban vehicle miles traveled are on signal-controlled roadways, significant research efforts are needed to investigate intersection-related teenage driver injury severities and their contributing factors in terms of driver behavior, vehicle-infrastructure interactions, environmental characteristics, roadway geometric features, and traffic compositions. Therefore, this study aims to explore the characteristic differences between teenage and adult drivers in intersection-related crashes, identify the significant contributing attributes, and analyze their impacts on driver injury severities. Using crash data collected in New Mexico from 2010 to 2011, 2 multinomial logit regression models were developed to analyze injury severities for teenage and adult drivers, respectively. Elasticity analyses and transferability tests were conducted to better understand the quantitative impacts of these factors and the teenage driver injury severity model's generality. The results showed that although many of the same contributing factors were found to be significant in both the teenage and adult driver models, certain distinct attributes must be distinguished to specifically develop effective safety solutions for the 2 driver groups. The research findings are helpful to better understand teenage crash uniqueness and develop cost-effective solutions to reduce intersection-related teenage injury severities and facilitate driver injury mitigation research.
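
    A multinomial logit severity model of the type described can be sketched with statsmodels' MNLogit; the three severity levels, predictors, and data below are all invented for illustration:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(2)
        n = 1000
        X = pd.DataFrame({
            "seatbelt": rng.binomial(1, 0.85, n),
            "speeding": rng.binomial(1, 0.2, n),
            "dark": rng.binomial(1, 0.3, n),
        })
        # Simulated outcome: 0 = no injury, 1 = injury, 2 = incapacitating/fatal.
        score = -1.0 + 1.2 * X["speeding"] - 0.8 * X["seatbelt"] + 0.5 * X["dark"]
        probs = np.column_stack([np.ones(n), np.exp(score), np.exp(score - 1.0)])
        probs /= probs.sum(axis=1, keepdims=True)
        y = np.array([rng.choice(3, p=p) for p in probs])

        fit = sm.MNLogit(y, sm.add_constant(X)).fit(disp=0)
        print(fit.params.round(2))   # one coefficient column per non-base severity level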

  18. Wave modelling for the North Indian Ocean using MSMR analysed winds

    Digital Repository Service at National Institute of Oceanography (India)

    Vethamony, P.; Sudheesh, K.; Rupali, S.P.; Babu, M.T.; Jayakumar, S.; Saran, A; Basu, S.K.; Kumar, R.; Sarkar, A

    prediction when NCMRWF winds blended with MSMR winds are utilised in the wave model. A comparison between buoy and TOPEX wave heights of May 2000 at 4 buoy locations provides a good match, showing the merit of using altimeter data, wherever it is difficult...

  19. Analysing Feature Model Changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large-scale, highly variable system is a challenging task. For such a system, evolution operations often require consistent updates to both the implementation and the feature model. In this context, the evolution of the feature model closely follows the evolution of the system.

  20. Analysing PKCS#11 Key Management APIs with Unbounded Fresh Data

    Science.gov (United States)

    Fröschle, Sibylle; Steel, Graham

    We extend Delaune, Kremer and Steel’s framework for analysis of PKCS#11-based APIs from bounded to unbounded fresh data. We achieve this by: formally defining the notion of an attribute policy; showing that a well-designed API should have a certain class of policy we call complete; showing that APIs with complete policies may be safely abstracted to APIs where the attributes are fixed; and proving that these static APIs can be analysed in a small bounded model such that security properties will hold for the unbounded case. We automate analysis in our framework using the SAT-based security protocol model checker SATMC. We show that a symmetric key management subset of the Eracom PKCS#11 API, used in their ProtectServer product, preserves the secrecy of sensitive keys for unbounded numbers of fresh keys and handles, i.e. pointers to keys. We also show that this API is not robust: if an encryption key is lost to the intruder, SATMC finds an attack whereby all the keys may be compromised.

  1. Modeling Acequia Irrigation Systems Using System Dynamics: Model Development, Evaluation, and Sensitivity Analyses to Investigate Effects of Socio-Economic and Biophysical Feedbacks

    Directory of Open Access Journals (Sweden)

    Benjamin L. Turner

    2016-10-01

    Full Text Available Agriculture-based irrigation communities of northern New Mexico have survived for centuries despite the arid environment in which they reside. These irrigation communities are threatened by regional population growth, urbanization, a changing demographic profile, economic development, climate change, and other factors. Within this context, we investigated the extent to which community resource management practices centering on shared resources (e.g., water for agriculture in the floodplains and grazing resources in the uplands) and mutualism (i.e., the shared responsibility of local residents for maintaining traditional irrigation policies and upholding cultural and spiritual observances embedded within the community structure) influence acequia function. We used a system dynamics modeling approach as an interdisciplinary platform to integrate these systems, specifically the relationship between community structure and resource management. In this paper we describe the background and context of acequia communities in northern New Mexico and the challenges they face. We formulate a dynamic hypothesis capturing the endogenous feedbacks driving acequia community vitality. Development of the model centered on major stock-and-flow components, including linkages for hydrology, ecology, community, and economics. Calibration metrics were used for model evaluation, including statistical correlation of observed and predicted values and Theil inequality statistics. Results indicated that the model reproduced trends exhibited by the observed system. Sensitivity analyses of socio-cultural processes identified absentee decisions, the cumulative income effect on time in agriculture, land use preference due to time allocation, the community demographic effect, the effect of employment on participation, and the farm size effect as key determinants of system behavior and response. Sensitivity analyses of biophysical parameters revealed that several key parameters (e.g., acres per
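
    To make the stock-and-flow idea concrete (this is not the authors' model), a system dynamics core is just coupled rate equations integrated through time; a deliberately tiny Euler sketch with two stocks and made-up feedback parameters:

        import numpy as np

        def simulate(years=50, dt=0.25):
            """Euler-integrate a toy two-stock acequia model (all parameters invented)."""
            steps = int(years / dt)
            acreage, participation = 100.0, 0.8   # initial stocks
            history = []
            for _ in range(steps):
                # Flows: participation sustains acreage; acreage sustains participation.
                d_acreage = 0.02 * acreage * (participation - 0.5)
                d_participation = 0.05 * (acreage / 100.0 - 1.0) - 0.01 * participation
                acreage += d_acreage * dt
                participation = float(np.clip(participation + d_participation * dt, 0, 1))
                history.append((acreage, participation))
            return history

        acreage, participation = simulate()[-1]
        print(f"after 50 y: acreage = {acreage:.1f} ac, participation = {participation:.2f}")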

  2. An extensible analysable system model

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, Rene Rydhof

    2008-01-01

    , this does not hold for real physical systems. Approaches such as threat modelling try to target the formalisation of the real-world domain, but still are far from the rigid techniques available in security research. Many currently available approaches to assurance of critical infrastructure security...

  3. BWR core melt progression phenomena: Experimental analyses

    International Nuclear Information System (INIS)

    Ott, L.J.

    1992-01-01

    In the BWR Core Melt Progression Phenomena Program, experimental results concerning severe fuel damage and core melt progression in BWR core geometry are used to evaluate existing models of the governing phenomena. These include control blade eutectic liquefaction and the subsequent relocation and attack on the channel box structure; oxidation heating and hydrogen generation; Zircaloy melting and relocation; and the continuing oxidation of zirconium with metallic blockage formation. Integral data have been obtained from the BWR DF-4 experiment in the ACRR and from BWR tests in the German CORA ex-reactor fuel-damage test facility. Additional integral data will be obtained from the new CORA BWR tests, the full-length FLHT-6 BWR test in the NRU test reactor, and the new program of ex-reactor experiments at Sandia National Laboratories (SNL) on metallic melt relocation and blockage formation. An essential part of this activity is interpretation and use of the results of the BWR tests. The Oak Ridge National Laboratory (ORNL) has developed experiment-specific models for analysis of the BWR experiments; to date, these models have permitted far more precise analyses of the conditions in these experiments than were previously available. These analyses have provided a basis for more accurate interpretation of the phenomena that the experiments are intended to investigate. The results of posttest analyses of BWR experiments are discussed and significant findings from these analyses are explained. The ORNL control blade/canister models, with materials interaction, relocation and blockage models, are currently being implemented in SCDAP/RELAP5 as an optional structural component.

  4. Assessment of Tools and Data for System-Level Dynamic Analyses

    International Nuclear Information System (INIS)

    Piet, Steven J.; Soelberg, Nick R.

    2011-01-01

    The only fuel cycle for which dynamic analyses and assessments are not needed is the null fuel cycle - no nuclear power. For every other concept, dynamic analyses are needed and can influence the relative desirability of options. Dynamic analyses show how a fuel cycle might work during transitions from today's partial fuel cycle to something more complete, the impact of technology deployments, the location of choke points, the key time lags, when benefits can manifest, and how well parts of fuel cycles work together. This report summarizes the readiness of existing Fuel Cycle Technology (FCT) tools and data for conducting dynamic analyses on the range of options. VISION is the primary dynamic analysis tool. Not only does it model mass flows, as do other dynamic system analysis models, but it allows users to explore various potential constraints. The only fuel cycles for which constraints are not important are those in concept advocates' PowerPoint presentations; in contrast, comparative analyses of fuel cycles must address what constraints exist and how they could impact performance. The most immediate tool need is extending VISION to the thorium/U233 fuel cycle. Depending on further clarification of waste management strategies in general and for specific fuel cycle candidates, waste management sub-models in VISION may need enhancement, e.g., more on 'co-flows' of non-fuel materials, constraints in waste streams, or automatic classification of waste streams on the basis of user-specified rules. VISION originally had an economic sub-model. The economic calculations were deemed unnecessary in later versions, so it was retired. Eventually, the program will need to restore and improve the economics sub-model of VISION to at least the cash flow stage and possibly to incorporate cost constraints and feedbacks. There are multiple sources of data that dynamic analyses can draw on. In this report, 'data' means experimental data, data from more detailed theoretical or empirical

  5. Assessment of Tools and Data for System-Level Dynamic Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Steven J. Piet; Nick R. Soelberg

    2011-06-01

    The only fuel cycle for which dynamic analyses and assessments are not needed is the null fuel cycle - no nuclear power. For every other concept, dynamic analyses are needed and can influence the relative desirability of options. Dynamic analyses show how a fuel cycle might work during transitions from today's partial fuel cycle to something more complete, the impact of technology deployments, the location of choke points, the key time lags, when benefits can manifest, and how well parts of fuel cycles work together. This report summarizes the readiness of existing Fuel Cycle Technology (FCT) tools and data for conducting dynamic analyses on the range of options. VISION is the primary dynamic analysis tool. Not only does it model mass flows, as do other dynamic system analysis models, but it allows users to explore various potential constraints. The only fuel cycles for which constraints are not important are those in concept advocates' PowerPoint presentations; in contrast, comparative analyses of fuel cycles must address what constraints exist and how they could impact performance. The most immediate tool need is extending VISION to the thorium/U233 fuel cycle. Depending on further clarification of waste management strategies in general and for specific fuel cycle candidates, waste management sub-models in VISION may need enhancement, e.g., more on 'co-flows' of non-fuel materials, constraints in waste streams, or automatic classification of waste streams on the basis of user-specified rules. VISION originally had an economic sub-model. The economic calculations were deemed unnecessary in later versions, so it was retired. Eventually, the program will need to restore and improve the economics sub-model of VISION to at least the cash flow stage and possibly to incorporate cost constraints and feedbacks. There are multiple sources of data that dynamic analyses can draw on. In this report, 'data' means experimental data, data from more detailed

  6. Bayesian salamanders: analysing the demography of an underground population of the European plethodontid Speleomantes strinatii with state-space modelling

    Directory of Open Access Journals (Sweden)

    Salvidio Sebastiano

    2010-02-01

    Full Text Available Abstract Background It has been suggested that plethodontid salamanders are excellent candidates for indicating ecosystem health. However, detailed, long-term data sets of their populations are rare, limiting our understanding of the demographic processes underlying their population fluctuations. Here we present a demographic analysis based on a 1996–2008 data set on an underground population of Speleomantes strinatii (Aellen) in NW Italy. We utilised a Bayesian state-space approach allowing us to parameterise a stage-structured Lefkovitch model. We used all the available population data from annual temporary removal experiments to provide us with the baseline data on the numbers of juveniles, subadults and adult males and females present at any given time. Results Sampling the posterior chains of the converged state-space model gives us the likelihood distributions of the state-specific demographic rates and the associated uncertainty of these estimates. Analysing the resulting parameterised Lefkovitch matrices shows that the population growth is very close to 1, and that at population equilibrium we expect half of the individuals present to be adults of reproductive age, which is what we also observe in the data. Elasticity analysis shows that adult survival is the key determinant of population growth. Conclusion This analysis demonstrates how an understanding of population demography can be gained from structured population data even in a case where following marked individuals over their whole lifespan is not practical.
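
    The growth rate and elasticities referred to above come from standard matrix-model algebra: lambda is the dominant eigenvalue of the Lefkovitch matrix, sensitivities are s_ij = v_i w_j / <v, w> (with w and v the right and left dominant eigenvectors), and elasticities are e_ij = (a_ij / lambda) s_ij. A sketch with an invented three-stage matrix, chosen so that lambda is close to 1:

        import numpy as np

        # Hypothetical Lefkovitch matrix (columns = from-stage): juvenile, subadult, adult.
        A = np.array([[0.00, 0.00, 0.80],   # adult fecundity
                      [0.35, 0.45, 0.00],   # juvenile->subadult growth, subadult survival
                      [0.00, 0.30, 0.85]])  # subadult->adult growth, adult survival

        vals, W = np.linalg.eig(A)
        k = np.argmax(vals.real)
        lam = vals.real[k]
        w = np.abs(W[:, k].real)                        # stable stage distribution

        vals_t, V = np.linalg.eig(A.T)                  # left eigenvectors via the transpose
        v = np.abs(V[:, np.argmax(vals_t.real)].real)   # reproductive values

        sens = np.outer(v, w) / (v @ w)                 # sensitivities d(lambda)/d(a_ij)
        elas = (A / lam) * sens                         # elasticities; entries sum to 1
        print(f"lambda = {lam:.3f}")
        print("elasticities:\n", elas.round(3))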

  7. Parameterization and sensitivity analyses of a radiative transfer model for remote sensing plant canopies

    Science.gov (United States)

    Hall, Carlton Raden

    A major objective of remote sensing is determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge on the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(λ) [m⁻¹], diffuse backscatter b(λ) [m⁻¹], beam attenuation α(λ) [m⁻¹], and beam-to-diffuse conversion c(λ) [m⁻¹] coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi), sampled at Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high-sensitivity narrow-bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory, where optical, physical and chemical measurements of leaf thickness, leaf area, leaf moisture and pigment content were made. A new term, the leaf volume correction index LVCI, was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle-adjusted leaf

  8. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    Directory of Open Access Journals (Sweden)

    Jordi Marcé-Nogué

    2017-10-01

    Full Text Available Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches.
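
    The core computation of the intervals' method is easy to sketch (this is not the authors' code): bin element stress values into fixed intervals, weight each element by its area, and express every interval as a percentage of total area, producing one mesh-independent multivariate row per model:

        import numpy as np

        def interval_variables(stress, area, breaks):
            """Percent of total area in each stress interval, for one FE model."""
            idx = np.digitize(stress, breaks)     # interval index per element
            total = area.sum()
            return np.array([area[idx == i].sum() / total * 100
                             for i in range(1, len(breaks))])

        rng = np.random.default_rng(3)
        breaks = np.linspace(0.0, 10.0, 11)       # 10 equal von Mises stress bins (MPa)
        rows = []
        for _ in range(4):                        # four hypothetical mandible models
            stress = rng.gamma(shape=2.0, scale=1.5, size=5000).clip(0, 9.99)
            area = rng.uniform(0.5, 1.5, size=5000)   # element areas (mm^2)
            rows.append(interval_variables(stress, area, breaks))
        X = np.vstack(rows)   # rows sum to ~100; ready for PCA / multivariate analysis
        print(X.round(1))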

  9. The intervals method: a new approach to analyse finite element outputs using multivariate statistics

    Science.gov (United States)

    De Esteban-Trivigno, Soledad; Püschel, Thomas A.; Fortuny, Josep

    2017-01-01

    Background In this paper, we propose a new method, named the intervals’ method, to analyse data from finite element models in a comparative multivariate framework. As a case study, several armadillo mandibles are analysed, showing that the proposed method is useful to distinguish and characterise biomechanical differences related to diet/ecomorphology. Methods The intervals’ method consists of generating a set of variables, each one defined by an interval of stress values. Each variable is expressed as a percentage of the area of the mandible occupied by those stress values. Afterwards these newly generated variables can be analysed using multivariate methods. Results Applying this novel method to the biological case study of whether armadillo mandibles differ according to dietary groups, we show that the intervals’ method is a powerful tool to characterize biomechanical performance and how this relates to different diets. This allows us to positively discriminate between specialist and generalist species. Discussion We show that the proposed approach is a useful methodology not affected by the characteristics of the finite element mesh. Additionally, the positive discriminating results obtained when analysing a difficult case study suggest that the proposed method could be a very useful tool for comparative studies in finite element analysis using multivariate statistical approaches. PMID:29043107

  10. Diagnostic Comparison of Meteorological Analyses during the 2002 Antarctic Winter

    Science.gov (United States)

    Manney, Gloria L.; Allen, Douglas R.; Kruger, Kirstin; Naujokat, Barbara; Santee, Michelle L.; Sabutis, Joseph L.; Pawson, Steven; Swinbank, Richard; Randall, Cora E.; Simmons, Adrian J.

    2005-01-01

    Several meteorological datasets, including U.K. Met Office (MetO), European Centre for Medium-Range Weather Forecasts (ECMWF), National Centers for Environmental Prediction (NCEP), and NASA's Goddard Earth Observation System (GEOS-4) analyses, are being used in studies of the 2002 Southern Hemisphere (SH) stratospheric winter and Antarctic major warming. Diagnostics are compared to assess how these studies may be affected by the meteorological data used. While the overall structure and evolution of temperatures, winds, and wave diagnostics in the different analyses provide a consistent picture of the large-scale dynamics of the SH 2002 winter, several significant differences may affect detailed studies. The NCEP-NCAR reanalysis (REAN) and NCEP-Department of Energy (DOE) reanalysis-2 (REAN-2) datasets are not recommended for detailed studies, especially those related to polar processing, because of lower-stratospheric temperature biases that result in underestimates of polar processing potential, and because their winds and wave diagnostics show increasing differences from other analyses between ~30 and 10 hPa (their top level). Southern Hemisphere polar stratospheric temperatures in the ECMWF 40-Yr Re-analysis (ERA-40) show unrealistic vertical structure, so this long-term reanalysis is also unsuited for quantitative studies. The NCEP/Climate Prediction Center (CPC) objective analyses give an inferior representation of the upper-stratospheric vortex. Polar vortex transport barriers are similar in all analyses, but there is large variation in the amount, patterns, and timing of mixing, even among the operational assimilated datasets (ECMWF, MetO, and GEOS-4). The higher-resolution GEOS-4 and ECMWF assimilations provide significantly better representation of filamentation and small-scale structure than the other analyses, even when fields gridded at reduced resolution are studied. The choice of which analysis to use is most critical for detailed transport

  11. Solving scheduling problems by untimed model checking. The clinical chemical analyser case study

    NARCIS (Netherlands)

    Margaria, T.; Wijs, Anton J.; Massink, M.; van de Pol, Jan Cornelis; Bortnik, Elena M.

    2009-01-01

    In this article, we show how scheduling problems can be modelled in untimed process algebra, by using special tick actions. A minimal-cost trace leading to a particular action is one that minimises the number of tick steps. As a result, we can use any (timed or untimed) model checking tool to find

  12. A methodology for eliciting, representing, and analysing stakeholder knowledge for decision making on complex socio-ecological systems: from cognitive maps to agent-based models.

    Science.gov (United States)

    Elsawah, Sondoss; Guillaume, Joseph H A; Filatova, Tatiana; Rook, Josefine; Jakeman, Anthony J

    2015-03-15

    This paper aims to contribute to developing better ways for incorporating essential human elements in decision making processes for modelling of complex socio-ecological systems. It presents a step-wise methodology for integrating perceptions of stakeholders (qualitative) into formal simulation models (quantitative) with the ultimate goal of improving understanding and communication about decision making in complex socio-ecological systems. The methodology integrates cognitive mapping and agent based modelling. It cascades through a sequence of qualitative/soft and numerical methods comprising: (1) Interviews to elicit mental models; (2) Cognitive maps to represent and analyse individual and group mental models; (3) Time-sequence diagrams to chronologically structure the decision making process; (4) All-encompassing conceptual model of decision making, and (5) computational (in this case agent-based) Model. We apply the proposed methodology (labelled ICTAM) in a case study of viticulture irrigation in South Australia. Finally, we use strengths-weakness-opportunities-threats (SWOT) analysis to reflect on the methodology. Results show that the methodology leverages the use of cognitive mapping to capture the richness of decision making and mental models, and provides a combination of divergent and convergent analysis methods leading to the construction of an Agent Based Model. Copyright © 2014 Elsevier Ltd. All rights reserved.
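
    Step (5), the computational agent-based model, can be illustrated with a deliberately minimal toy (nothing here is from the ICTAM case study): irrigator agents adjust their planted area each season in response to a water-availability signal:

        import random

        class Irrigator:
            """Toy agent: expands planted area in wet seasons, contracts in dry ones."""
            def __init__(self):
                self.area = 10.0  # hectares

            def step(self, water_index):
                target = self.area * (1.2 if water_index > 0.6 else 0.8)
                self.area += 0.5 * (target - self.area)   # partial adjustment per season

        random.seed(0)
        agents = [Irrigator() for _ in range(50)]
        for year in range(20):
            water_index = random.random()                 # stand-in for seasonal allocation
            for a in agents:
                a.step(water_index)
        mean_area = sum(a.area for a in agents) / len(agents)
        print(f"mean planted area after 20 y: {mean_area:.1f} ha")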

  13. Comparison with Russian analyses of meteor impact

    Energy Technology Data Exchange (ETDEWEB)

    Canavan, G.H.

    1997-06-01

    The inversion model for meteor impacts is used to discuss Russian analyses and compare principal results. For common input parameters, the models produce consistent estimates of impactor parameters. Directions for future research are discussed and prioritized.

  14. Comparative Analyses of Zebrafish Anxiety-Like Behavior Using Conflict-Based Novelty Tests.

    Science.gov (United States)

    Kysil, Elana V; Meshalkina, Darya A; Frick, Erin E; Echevarria, David J; Rosemberg, Denis B; Maximino, Caio; Lima, Monica Gomes; Abreu, Murilo S; Giacomini, Ana C; Barcellos, Leonardo J G; Song, Cai; Kalueff, Allan V

    2017-06-01

    Modeling of stress and anxiety in adult zebrafish (Danio rerio) is increasingly utilized in neuroscience research and central nervous system (CNS) drug discovery. Representing the most commonly used zebrafish anxiety models, the novel tank test (NTT) focuses on zebrafish diving in response to potentially threatening stimuli, whereas the light-dark test (LDT) is based on fish scototaxis (innate preference for dark vs. bright areas). Here, we systematically evaluate the utility of these two tests, combining meta-analyses of published literature with comparative in vivo behavioral and whole-body endocrine (cortisol) testing. Overall, the NTT and LDT behaviors demonstrate a generally good cross-test correlation in vivo, whereas meta-analyses of published literature show that both tests have similar sensitivity to zebrafish anxiety-like states. Finally, NTT evokes higher levels of cortisol, likely representing a more stressful procedure than LDT. Collectively, our study reappraises NTT and LDT for studying anxiety-like states in zebrafish, and emphasizes their developing utility for neurobehavioral research. These findings can help optimize drug screening procedures by choosing more appropriate models for testing anxiolytic or anxiogenic drugs.

  15. The Effectiveness of the Show Not Tell and Mind Map Models in Teaching Exposition Text Writing Based on the Interest of Grade X Vocational School (SMK) Students

    Directory of Open Access Journals (Sweden)

    Wiwit Lili Sokhipah

    2015-03-01

    Full Text Available The aims of this study were (1) to determine the effectiveness of the show not tell model for teaching exposition text writing skills based on the interest of grade X vocational school (SMK) students, (2) to determine the effectiveness of the mind map model for teaching exposition text writing skills based on the interest of grade X SMK students, and (3) to determine the effectiveness of the interaction of the show not tell and mind map models in teaching exposition text writing skills based on the interest of grade X SMK students. The study used a quasi-experimental design (pretest-posttest control group design). The design involved two experimental groups: application of the show not tell model in teaching exposition text writing to students with high interest, and application of the mind map model in teaching exposition text writing to students with low interest. The results were: (1) the show not tell model is effective for teaching exposition text writing to students with high interest, (2) the mind map model is effective for teaching exposition text writing to students with low interest, and (3) the show not tell model is more effective for teaching exposition text writing to students with high interest, while the mind map model is effective for teaching exposition text writing to students with low interest.

  16. Showing/Sharing: Analysing Visual Communication from a Praxeological Perspective

    Directory of Open Access Journals (Sweden)

    Maria Schreiber

    2017-12-01

    Full Text Available This contribution proposes a methodological framework for empirical research into visual practices on social media. The framework identifies practices, pictures and platforms as relevant dimensions of analysis. It is mainly developed within, and is compatible with, qualitative, interpretive approaches that focus on visual communication as part of everyday personal communicative practices. Two screenshots from Instagram and Facebook are introduced as empirical examples to investigate collaborative practices of meaning-making relating to pictures on social media. While social media seems to augment reflexive, processual practices of negotiating identities, visual media, in particular, amps up aesthetic, ambivalent and embodied dimensions within these practices.

  17. Tokyo Motor Show 2003

    Energy Technology Data Exchange (ETDEWEB)

    Joly, E.

    2004-01-01

    The following text presents the different technologies exhibited at the 37th Tokyo Motor Show. The report highlights the major development trends of the Japanese automobile industry. Hybrid electric vehicles and vehicles equipped with fuel cells were given prominence by the Japanese manufacturers, which devote considerable budgets to research on less-polluting vehicles. The exhibited models, although all different from one manufacturer to another, consistently use a hybrid fuel cell/battery system. The manufacturers also emphasized intelligent navigation and safety systems, as well as design and comfort. (O.M.)

  18. A review on design of experiments and surrogate models in aircraft real-time and many-query aerodynamic analyses

    Science.gov (United States)

    Yondo, Raul; Andrés, Esther; Valero, Eusebio

    2018-01-01

    Full-scale aerodynamic wind tunnel testing, numerical simulation of high-dimensional (full-order) aerodynamic models, and flight testing are some of the fundamental but complex steps in the various design phases of recent civil transport aircraft. Current aircraft aerodynamic designs have increased in complexity (multidisciplinary, multi-objective or multi-fidelity) and need to address the challenges posed by the nonlinearity of the objective functions and constraints, uncertainty quantification in aerodynamic problems, and restrained computational budgets. With the aim of reducing the computational burden and generating low-cost but accurate models that mimic the full-order models at different values of the design variables, recent progress has seen the introduction, in real-time and many-query analyses, of surrogate-based approaches as rapid and cheaper-to-simulate models. In this paper, a comprehensive and state-of-the-art survey of common surrogate modeling techniques and surrogate-based optimization methods is given, with an emphasis on model selection and validation, dimensionality reduction, sensitivity analyses, constraint handling, and infill and stopping criteria. Benefits, drawbacks and comparative discussions in applying those methods are described. Furthermore, the paper familiarizes the readers with surrogate models that have been successfully applied to the general field of fluid dynamics, but not yet in the aerospace industry. Additionally, the review revisits the most popular sampling strategies used in conducting physical and simulation-based experiments in aircraft aerodynamic design. Attractive or smart designs infrequently used in the field and discussions on advanced sampling methodologies are presented, to give a glance at the various efficient possibilities for a priori sampling of the parameter space. Closing remarks focus on future perspectives, challenges and shortcomings associated with the use of surrogate models by aircraft industrial
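
    A sketch of the surrogate workflow the review surveys: sample the design space (here with a Latin hypercube), evaluate the expensive model at those points, fit a cheap interpolant, then query it in a many-query loop. The "expensive" function below is an invented stand-in for a CFD run, and the RBF surrogate is one choice among many the review covers:

        import numpy as np
        from scipy.stats import qmc
        from scipy.interpolate import RBFInterpolator

        def expensive_model(x):
            """Stand-in for a costly CFD evaluation: a smooth nonlinear response."""
            return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1]) + 0.5 * x[:, 0] ** 2

        # 1) Design of experiments: 40 Latin hypercube samples in [0, 1]^2.
        X_train = qmc.LatinHypercube(d=2, seed=0).random(40)
        y_train = expensive_model(X_train)

        # 2) Fit the surrogate (thin-plate-spline radial basis functions).
        surrogate = RBFInterpolator(X_train, y_train, kernel="thin_plate_spline")

        # 3) Many-query phase: thousands of cheap evaluations for optimization/UQ.
        X_query = qmc.LatinHypercube(d=2, seed=1).random(2000)
        err = np.abs(surrogate(X_query) - expensive_model(X_query))
        print(f"mean |error| over 2000 queries: {err.mean():.4f}")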

  19. Analysing the Effects of Flood-Resilience Technologies in Urban Areas Using a Synthetic Model Approach

    Directory of Open Access Journals (Sweden)

    Reinhard Schinke

    2016-11-01

    Full Text Available Flood protection systems with their spatial effects play an important role in managing and reducing flood risks. The planning and decision process as well as the technical implementation are well organized and frequently exercised. However, building-related flood-resilience technologies (FReT) are often neglected due to the absence of suitable approaches for analysing and integrating such measures in large-scale flood damage mitigation concepts. Against this backdrop, a synthetic model approach was extended by a few complementary methodological steps in order to calculate flood damage to buildings considering the effects of building-related FReT and to analyse the area-related reduction of flood risks with geo-information systems (GIS) at high spatial resolution. It includes a civil-engineering-based investigation of characteristic building types and their construction, including a selection and combination of appropriate FReT, as a basis for the derivation of synthetic depth-damage functions. Depending on the actual exposure and the implementation level of FReT, the functions can be used and allocated in spatial damage and risk analyses. The application of the extended approach is shown in a case study in Valencia (Spain). In this way, the overall research findings improve the integration of FReT in flood risk management. They also provide useful information for advising individuals at risk, supporting the selection and implementation of FReT.
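
    A depth-damage function of the kind derived above maps water depth to a damage ratio and is then scaled by building value. The sketch below applies two such curves, with and without FReT; all curve values and prices are invented for illustration, not taken from the study.

```python
# Illustrative use of synthetic depth-damage functions: damage vs. water depth
# for a building with and without flood-resilience technologies (FReT).
import numpy as np

depths = np.array([0.0, 0.5, 1.0, 2.0, 3.0])             # depth above floor [m]
damage_plain = np.array([0.0, 0.15, 0.30, 0.55, 0.75])   # damage ratio, no FReT
damage_fret = np.array([0.0, 0.05, 0.18, 0.45, 0.70])    # with sealing/barriers

def flood_damage(depth_m, building_value, curve):
    """Interpolate a depth-damage curve and scale by building value."""
    ratio = np.interp(depth_m, depths, curve)
    return ratio * building_value

value = 250_000  # replacement value of the building [EUR], hypothetical
for d in (0.8, 1.5):
    saved = flood_damage(d, value, damage_plain) - flood_damage(d, value, damage_fret)
    print(f"depth {d:.1f} m: FReT avoids {saved:,.0f} EUR of damage")
```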

  20. Multi-Scale Carbon Isotopic Analyses Show Allende Nanodiamonds are Mostly Solar with Some PreSolar

    Science.gov (United States)

    Lewis, J. B.; Isheim, D.; Floss, C.; Gyngard, F.; Seidman, D. N.

    2017-07-01

    NanoSIMS and atom-probe experiments on different-sized aggregates of meteoritic nanodiamonds show mostly normal C isotopes, with a fraction of 13C-enriched material. The best interpretation is a combination of solar system and supernova formation.

  1. Advanced toroidal facility vacuum vessel stress analyses

    International Nuclear Information System (INIS)

    Hammonds, C.J.; Mayhall, J.A.

    1987-01-01

    The complex geometry of the Advanced Toroidal Facility (ATF) vacuum vessel required special analysis techniques for investigating the structural behavior of the design. The response of a large-scale finite element model was found for transportation and operational loading. Several computer codes and systems, including the National Magnetic Fusion Energy Computer Center Cray machines, were employed in accomplishing these analyses. The work combined complex methods that taxed the limits of both the codes and the computer systems involved. Using MSC/NASTRAN cyclic-symmetry solutions permitted using only 1/12 of the vessel geometry to mathematically analyze the entire vessel. This allowed the greater detail and accuracy demanded by the complex geometry of the vessel. Critical buckling-pressure analyses were performed with the same model. The development, results, and problems encountered in performing these analyses are described. 5 refs., 3 figs

  2. A History of Rotorcraft Comprehensive Analyses

    Science.gov (United States)

    Johnson, Wayne

    2013-01-01

    A history of the development of rotorcraft comprehensive analyses is presented. Comprehensive analyses are digital computer programs that calculate the aeromechanical behavior of the rotor and aircraft, bringing together the most advanced models of the geometry, structure, dynamics, and aerodynamics available in rotary wing technology. The development of the major codes of the last five decades from industry, government, and universities is described. A number of common themes observed in this history are discussed.

  3. Sensitivity in risk analyses with uncertain numbers.

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, W. Troy; Ferson, Scott

    2006-06-01

    Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a "pinching" strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
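
    As a rough illustration of the "pinching" strategy, the sketch below compares output uncertainty before and after hypothetically replacing an epistemically uncertain input by a point value. It uses plain Monte Carlo over an interval, not the report's Dempster-Shafer or probability-bounds machinery, and the dike model is a toy.

```python
# "Pinching" sketch: how much does output uncertainty shrink if an epistemic
# interval is hypothetically collapsed to a point value?
import numpy as np

rng = np.random.default_rng(1)

def dike_margin(strength, load):
    """Toy reliability margin: failure when margin < 0 (hypothetical model)."""
    return strength - load

def output_width(strength_lo, strength_hi, n=100_000):
    # Epistemic uncertainty: strength only known to lie in an interval.
    strength = rng.uniform(strength_lo, strength_hi, n)
    # Aleatory uncertainty: load varies randomly between floods.
    load = rng.gumbel(loc=5.0, scale=1.0, size=n)
    m = dike_margin(strength, load)
    return np.percentile(m, 97.5) - np.percentile(m, 2.5)

base = output_width(6.0, 10.0)    # full epistemic interval
pinched = output_width(8.0, 8.0)  # strength pinched to a point
print(f"uncertainty reduction from pinching: {100 * (1 - pinched / base):.1f}%")
```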

  4. Microsegregation in multicomponent alloy analysed by quantitative phase-field model

    International Nuclear Information System (INIS)

    Ohno, M; Takaki, T; Shibuta, Y

    2015-01-01

    Microsegregation behaviour in a ternary alloy system has been analysed by means of quantitative phase-field (Q-PF) simulations, with particular attention directed at the influence of the tie-line shift stemming from the different liquid diffusivities of the solute elements. The Q-PF model developed for non-isothermal solidification in multicomponent alloys with non-zero solid diffusivities was applied to the analysis of microsegregation in a ternary alloy consisting of fast- and slow-diffusing solute elements. The accuracy of the Q-PF simulation was first verified by performing a convergence test of the segregation ratio with respect to the interface thickness. From one-dimensional analysis, it was found that the microsegregation of the slow-diffusing element is reduced due to the tie-line shift. In two-dimensional simulations, a refinement of the microstructure, viz. a decrease of the secondary arm spacing, occurs at low cooling rates due to the formation of a diffusion layer of the slow-diffusing element. This yields reductions in the degree of microsegregation for both the fast- and slow-diffusing elements. Importantly, over a wide range of cooling rates, the degree of microsegregation of the slow-diffusing element is always lower than that of the fast-diffusing element, which is entirely ascribable to the influence of the tie-line shift. (paper)

  5. Comparison of metformin and insulin versus insulin alone for type 2 diabetes: systematic review of randomised clinical trials with meta-analyses and trial sequential analyses.

    Science.gov (United States)

    Hemmingsen, Bianca; Christensen, Louise Lundby; Wetterslev, Jørn; Vaag, Allan; Gluud, Christian; Lund, Søren S; Almdal, Thomas

    2012-04-19

    To compare the benefits and harms of metformin and insulin versus insulin alone as reported in randomised clinical trials of patients with type 2 diabetes. Systematic review of randomised clinical trials with meta-analyses and trial sequential analyses. The Cochrane Library, Medline, Embase, Science Citation Index Expanded, Latin American Caribbean Health Sciences Literature, and Cumulative Index to Nursing and Allied Health Literature until March 2011. We also searched abstracts presented at the American Diabetes Association and European Association for the Study of Diabetes Congresses, contacted relevant trial authors and pharmaceutical companies, hand searched reference lists of included trials, and searched the US Food and Drug Administration website. Two authors independently screened titles and abstracts for randomised clinical trials comparing metformin and insulin versus insulin alone (with or without placebo) in patients with type 2 diabetes, older than 18 years, and with an intervention period of at least 12 weeks. We included trials irrespective of language, publication status, predefined outcomes, antidiabetic interventions used before randomisation, and reported outcomes. We included 26 randomised trials with 2286 participants, of which 23 trials with 2117 participants could provide data. All trials had high risk of bias. Data were sparse for outcomes relevant to patients. Metformin and insulin versus insulin alone did not significantly affect all cause mortality (relative risk 1.30, 95% confidence interval 0.57 to 2.99) or cardiovascular mortality (1.70, 0.35 to 8.30). Trial sequential analyses showed that more trials were needed before reliable conclusions could be drawn regarding these outcomes. In a fixed effect model, but not in a random effects model, severe hypoglycaemia was significantly more frequent with metformin and insulin than with insulin alone (2.83, 1.17 to 6.86). In a random effects model, metformin and insulin resulted in reduced HbA1c.
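
    The contrast between the fixed-effect and random-effects findings above comes down to how trials are pooled. The sketch below shows inverse-variance pooling of log relative risks with and without a DerSimonian-Laird between-trial variance; the three trials are invented for illustration, not taken from the review.

```python
# Fixed-effect vs. DerSimonian-Laird random-effects pooling of log relative
# risks, the kind of contrast behind the hypoglycaemia finding above.
import numpy as np

log_rr = np.log(np.array([2.5, 3.1, 1.2]))   # per-trial relative risks (invented)
var = np.array([0.20, 0.35, 0.50])           # variances of the log RRs (invented)

def pooled_rr(log_effects, variances, tau2=0.0):
    w = 1.0 / (variances + tau2)
    est = np.sum(w * log_effects) / np.sum(w)
    se = np.sqrt(1.0 / np.sum(w))
    return np.exp([est, est - 1.96 * se, est + 1.96 * se])  # RR and 95% CI

# DerSimonian-Laird estimate of the between-trial variance tau^2.
w_fixed = 1.0 / var
mu_fixed = np.sum(w_fixed * log_rr) / np.sum(w_fixed)
q = np.sum(w_fixed * (log_rr - mu_fixed) ** 2)
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)

print("fixed effect RR (95% CI): ", pooled_rr(log_rr, var))
print("random effects RR (95% CI):", pooled_rr(log_rr, var, tau2))
```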

  6. Chapter No.4. Safety analyses

    International Nuclear Information System (INIS)

    2002-01-01

    In 2001 the activity in the field of safety analyses focused on verification of the safety analysis reports for NPP V-2 Bohunice and NPP Mochovce concerning the new profiled fuel, and on the probabilistic safety assessment study for NPP Mochovce. Calculational safety analyses were performed and expert reviews for internal UJD needs were elaborated. An important part of the work was also the solution of scientific and technical tasks assigned within bilateral co-operation projects between UJD and its international partner organisations, as well as within international projects ordered and financed by the European Commission. All these activities served as independent support for UJD in its deterministic and probabilistic safety assessment of nuclear installations. Special attention was paid to a review of the level-1 probabilistic safety assessment study for NPP Mochovce. The study elaborated the probabilistic safety analysis of the NPP at full-power operation and quantified the contribution of technical and operational improvements to risk reduction. The core damage frequency of the reactor was calculated, and the dominant initiating events and accident sequences with the major contributions to risk were determined. The target of the review was to determine the acceptability of the sources of input information, assumptions, models, data, analyses and obtained results, so that the probabilistic model could give a realistic picture of the NPP. The review of the study was performed by UJD in co-operation with the IAEA (IPSART mission) as well as with other external organisations that were not involved in the elaboration of the reviewed document and the probabilistic model of the NPP. The review was made in accordance with IAEA guidelines and methodical documents of UJD and the US NRC. In the field of calculational safety analyses the UJD activity focused on the analysis of an operational event and analyses of the selected accident scenarios

  7. Developing a system dynamics model to analyse environmental problem in construction site

    Science.gov (United States)

    Haron, Fatin Fasehah; Hawari, Nurul Nazihah

    2017-11-01

    This study aims to develop a system dynamics model of a construction site to analyse the impact of environmental problems. Construction sites may cause damage to the environment and interfere with the daily lives of residents. A proper environmental management system must be used to reduce pollution, enhance biodiversity, conserve water, respect people and their local environment, measure performance, and set targets for the environment and sustainability. This study investigates the damaging impacts that normally occur during the construction stage. Environmental problems can cause costly mistakes in project implementation, either because of the environmental damage that is likely to arise during project implementation, or because of modifications that may be required subsequently in order to make the action environmentally acceptable. Thus, findings from this study help to significantly reduce the damaging impact on the environment and improve the performance of the environmental management system at the construction site.
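
    At its core, a system dynamics model of this kind is a set of stocks updated by inflows and outflows. The sketch below integrates a single hypothetical pollution stock with Euler steps; a real site model would have many coupled stocks and rates calibrated to site data.

```python
# Minimal stock-and-flow sketch in the spirit of a system dynamics model:
# a pollution stock fed by construction activity and drained by mitigation.
# All rates are hypothetical placeholders.
import numpy as np

dt, t_end = 0.1, 24.0            # time step and horizon [months]
steps = int(t_end / dt)
pollution = np.zeros(steps + 1)  # stock: accumulated environmental load

activity = 10.0                  # emission inflow per month
mitigation_rate = 0.15           # fraction of the stock abated per month

for k in range(steps):
    inflow = activity
    outflow = mitigation_rate * pollution[k]
    pollution[k + 1] = pollution[k] + dt * (inflow - outflow)

print(f"pollution stock after {t_end:.0f} months: {pollution[-1]:.1f}")
print(f"steady state (inflow/outflow balance): {activity / mitigation_rate:.1f}")
```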

  8. Micromechanical Failure Analyses for Finite Element Polymer Modeling

    Energy Technology Data Exchange (ETDEWEB)

    CHAMBERS,ROBERT S.; REEDY JR.,EARL DAVID; LO,CHI S.; ADOLF,DOUGLAS B.; GUESS,TOMMY R.

    2000-11-01

    Polymer stresses around sharp corners and in constrained geometries of encapsulated components can generate cracks leading to system failures. Often, analysts use maximum stresses as a qualitative indicator for evaluating the strength of encapsulated component designs. Although this approach has been useful for making relative comparisons when screening prospective design changes, it has not been tied quantitatively to failure. Accurate failure models are needed for analyses to predict whether encapsulated components meet life cycle requirements. With Sandia's recently developed nonlinear viscoelastic polymer models, it has been possible to examine more accurately the local stress-strain distributions in zones of likely failure initiation, looking for physically based failure mechanisms and continuum metrics that correlate with the cohesive failure event. This study has identified significant differences between rubbery and glassy failure mechanisms that suggest reasonable alternatives for cohesive failure criteria and metrics. Rubbery failure seems best characterized by the mechanism of finite extensibility and appears to correlate with maximum strain predictions. Glassy failure, however, seems driven by cavitation and correlates with the maximum hydrostatic tension. Using these metrics, two three-point bending geometries were tested and analyzed under variable loading rates, different temperatures and comparable mesh resolution (i.e., accuracy) to make quantitative failure predictions. The resulting predictions and observations agreed well, suggesting the need for additional research. In a separate, additional study, the asymptotically singular stress state found at the tip of a rigid, square inclusion embedded within a thin, linear elastic disk was determined for uniform cooling. The singular stress field is characterized by a single stress intensity factor K_a, and the applicable K_a calibration relationship has been determined for both fully bonded and

  9. What is needed to eliminate new pediatric HIV infections: The contribution of model-based analyses

    Science.gov (United States)

    Doherty, Katie; Ciaranello, Andrea

    2013-01-01

    Purpose of Review Computer simulation models can identify key clinical, operational, and economic interventions that will be needed to achieve the elimination of new pediatric HIV infections. In this review, we summarize recent findings from model-based analyses of strategies for prevention of mother-to-child HIV transmission (MTCT). Recent Findings In order to achieve elimination of MTCT (eMTCT), model-based studies suggest that scale-up of services will be needed in several domains: uptake of services and retention in care (the PMTCT “cascade”), interventions to prevent HIV infections in women and reduce unintended pregnancies (the “four-pronged approach”), efforts to support medication adherence through long periods of pregnancy and breastfeeding, and strategies to make breastfeeding safer and/or shorter. Models also project the economic resources that will be needed to achieve these goals, and the most efficient ways to allocate limited resources for eMTCT. Results suggest that currently recommended PMTCT regimens (WHO Option A, Option B, and Option B+) will be cost-effective in most settings. Summary Model-based results can guide future implementation science, by highlighting areas in which additional data are needed to make informed decisions and by outlining critical interventions that will be necessary in order to eliminate new pediatric HIV infections. PMID:23743788

  10. Development and preliminary analyses of material balance evaluation model in nuclear fuel cycle

    International Nuclear Information System (INIS)

    Matsumura, Tetsuo

    1994-01-01

    A material balance evaluation model for the nuclear fuel cycle has been developed using the ORIGEN-2 code as its basic engine. The model has the following features: it can treat more than 1000 nuclides, including minor actinides and fission products, and it offers flexible modeling and graph output on an engineering workstation. A preliminary calculation was made of the effect of high LWR fuel burnup (reload-fuel average burnup of 60 GWd/t) on the nuclear fuel cycle. The preliminary calculation shows that high LWR fuel burnup has a considerable effect on the Japanese Pu balance problem. (author)
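
    The bookkeeping behind such a balance starts from a simple relation: annual heavy-metal throughput equals thermal energy produced divided by discharge burnup, so higher burnup means fewer tonnes discharged per year. A back-of-envelope sketch with illustrative reactor parameters and a placeholder plutonium content (not ORIGEN-2 output):

```python
# Back-of-envelope fuel-cycle mass balance: annual heavy-metal throughput of
# an LWR as a function of discharge burnup, M = E_thermal / B.
power_mwe = 1100.0          # electric output [MWe], illustrative
efficiency = 0.33           # thermal efficiency
capacity_factor = 0.85

e_thermal_gwd = power_mwe / efficiency / 1000.0 * 365.0 * capacity_factor

for burnup_gwd_t in (45.0, 60.0):
    reload_t = e_thermal_gwd / burnup_gwd_t   # tonnes of heavy metal per year
    pu_kg = reload_t * 10.0                   # ~10 kg Pu/tHM: rough placeholder
    print(f"burnup {burnup_gwd_t:.0f} GWd/t: {reload_t:.1f} tHM/yr, "
          f"~{pu_kg:.0f} kg Pu discharged/yr")
```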

  11. Intercomparison and analyses of the climatology of the West African monsoon in the West African monsoon modeling and evaluation project (WAMME) first model intercomparison experiment

    Energy Technology Data Exchange (ETDEWEB)

    Xue, Yongkang; Sales, Fernando De [University of California, Los Angeles, CA (United States); Lau, W.K.M.; Schubert, Siegfried D.; Wu, Man-Li C. [NASA, Goddard Space Flight Center, Greenbelt, MD (United States); Boone, Aaron [Centre National de Recherches Meteorologiques, Meteo-France Toulouse, Toulouse (France); Feng, Jinming [University of California, Los Angeles, CA (United States); Chinese Academy of Sciences, Institute of Atmospheric Physics, Beijing (China); Dirmeyer, Paul; Guo, Zhichang [Center for Ocean-Land-Atmosphere Interactions, Calverton, MD (United States); Kim, Kyu-Myong [University of Maryland Baltimore County, Baltimore, MD (United States); Kitoh, Akio [Meteorological Research Institute, Tsukuba (Japan); Kumar, Vadlamani [National Center for Environmental Prediction, Camp Springs, MD (United States); Wyle Information Systems, Gaithersburg, MD (United States); Poccard-Leclercq, Isabelle [Universite de Bourgogne, Centre de Recherches de Climatologie UMR5210 CNRS, Dijon (France); Mahowald, Natalie [Cornell University, Ithaca, NY (United States); Moufouma-Okia, Wilfran; Rowell, David P. [Met Office Hadley Centre, Exeter (United Kingdom); Pegion, Phillip [NASA, Goddard Space Flight Center, Greenbelt, MD (United States); National Center for Environmental Prediction, Camp Springs, MD (United States); Schemm, Jae; Thiaw, Wassila M. [National Center for Environmental Prediction, Camp Springs, MD (United States); Sealy, Andrea [The Caribbean Institute for Meteorology and Hydrology, St. James (Barbados); Vintzileos, Augustin [National Center for Environmental Prediction, Camp Springs, MD (United States); Science Applications International Corporation, Camp Springs, MD (United States); Williams, Steven F. [National Center for Atmospheric Research, Boulder, CO (United States)

    2010-07-15

    This paper briefly presents the West African monsoon (WAM) modeling and evaluation project (WAMME) and evaluates WAMME general circulation models' (GCM) performances in simulating variability of WAM precipitation, surface temperature, and major circulation features at seasonal and intraseasonal scales in the first WAMME experiment. The analyses indicate that models with specified sea surface temperature generally have reasonable simulations of the pattern of spatial distribution of WAM seasonal mean precipitation and surface temperature as well as the averaged zonal wind in latitude-height cross-section and low level circulation. But there are large differences among models in simulating spatial correlation, intensity, and variance of precipitation compared with observations. Furthermore, the majority of models fail to produce proper intensities of the African Easterly Jet (AEJ) and the tropical easterly jet. AMMA Land Surface Model Intercomparison Project (ALMIP) data are used to analyze the association between simulated surface processes and the WAM and to investigate the WAM mechanism. It has been identified that the spatial distributions of surface sensible heat flux, surface temperature, and moisture convergence are closely associated with the simulated spatial distribution of precipitation; while surface latent heat flux is closely associated with the AEJ and contributes to divergence in AEJ simulation. Common empirical orthogonal functions (CEOF) analysis is applied to characterize the WAM precipitation evolution and has identified a major WAM precipitation mode and two temperature modes (Sahara mode and Sahel mode). Results indicate that the WAMME models produce reasonable temporal evolutions of major CEOF modes but have deficiencies/uncertainties in producing variances explained by major modes. Furthermore, the CEOF analysis shows that WAM precipitation evolution is closely related to the enhanced Sahara mode and the weakened Sahel mode, supporting

  12. Analysing the origin of long-range interactions in proteins using lattice models

    Directory of Open Access Journals (Sweden)

    Unger Ron

    2009-01-01

    Full Text Available Abstract Background Long-range communication is very common in proteins but the physical basis of this phenomenon remains unclear. In order to gain insight into this problem, we decided to explore whether long-range interactions exist in lattice models of proteins. Lattice models of proteins have proven to capture some of the basic properties of real proteins and, thus, can be used for elucidating general principles of protein stability and folding. Results Using a computational version of double-mutant cycle analysis, we show that long-range interactions emerge in lattice models even though they are not an input feature of them. The coupling energy of both short- and long-range pairwise interactions is found to become more positive (destabilizing) in a linear fashion with increasing 'contact-frequency', an entropic term that corresponds to the fraction of states in the conformational ensemble of the sequence in which the pair of residues is in contact. A mathematical derivation of the linear dependence of the coupling energy on 'contact-frequency' is provided. Conclusion Our work shows how 'contact-frequency' should be taken into account in attempts to stabilize proteins by introducing (or stabilizing) contacts in the native state and/or through 'negative design' of non-native contacts.
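
    The double-mutant cycle analysis used above reduces to simple free-energy bookkeeping over four variants: wild type, two single mutants, and the double mutant. A minimal sketch with invented folding free energies:

```python
# Double-mutant cycle bookkeeping: the coupling energy between two residues
# from four folding free energies. The numbers are invented for illustration.
def coupling_energy(dg_wt, dg_mut1, dg_mut2, dg_double):
    """ddG_int = (dG_double - dG_wt) - (dG_mut1 - dG_wt) - (dG_mut2 - dG_wt)."""
    return (dg_double - dg_wt) - (dg_mut1 - dg_wt) - (dg_mut2 - dg_wt)

ddg_int = coupling_energy(dg_wt=-10.0, dg_mut1=-8.5, dg_mut2=-9.0, dg_double=-7.0)
print(f"coupling energy: {ddg_int:+.2f} kcal/mol")  # > 0: destabilizing coupling
```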

  13. Computational modeling and statistical analyses on individual contact rate and exposure to disease in complex and confined transportation hubs

    Science.gov (United States)

    Wang, W. L.; Tsui, K. L.; Lo, S. M.; Liu, S. B.

    2018-01-01

    Crowded transportation hubs such as metro stations are thought of as ideal places for the development and spread of epidemics. However, owing to their complex spatial layout and confined environment with a large number of highly mobile individuals, it is difficult to quantify human contacts in such environments, and disease spreading dynamics there were less explored in previous studies. Due to the heterogeneity and dynamic nature of human interactions, a growing number of studies have proved the importance of contact distance and length of contact in transmission probabilities. In this study, we show how detailed information on contact and exposure patterns can be obtained by statistical analyses of microscopic crowd simulation data. Specifically, a pedestrian simulation model, CityFlow, was employed to reproduce individuals' movements in a metro station based on site survey data, and values and distributions of individual contact rate and exposure in different simulation cases were obtained and analyzed. Interestingly, a Weibull distribution fitted the histogram of individual-based exposure very well in each case. Moreover, we found that both individual contact rate and exposure had a linear relationship with the average crowd density of the environment. The results obtained in this paper can provide a reference for epidemic studies in complex and confined transportation hubs and refine existing disease spreading models.
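
    The statistical post-processing described above (a Weibull fit to per-individual exposure and a linear fit of contact rate against density) can be reproduced on any simulation output. A sketch with synthetic stand-in data in place of the CityFlow results:

```python
# Fit a Weibull distribution to individual exposure values and regress contact
# rate on crowd density, mirroring the statistical steps described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
exposure = rng.weibull(1.8, 5000) * 12.0      # synthetic per-person exposure [s]

shape, loc, scale = stats.weibull_min.fit(exposure, floc=0)
print(f"Weibull fit: shape={shape:.2f}, scale={scale:.2f}")

density = np.linspace(0.5, 4.0, 8)            # crowd density [persons/m^2]
contact_rate = 2.0 * density + rng.normal(0, 0.2, 8)  # synthetic linear trend
res = stats.linregress(density, contact_rate)
print(f"contact rate ~ density: slope={res.slope:.2f}, r^2={res.rvalue**2:.3f}")
```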

  14. Coastal and river flood risk analyses for guiding economically optimal flood adaptation policies: a country-scale study for Mexico

    Science.gov (United States)

    Haer, Toon; Botzen, W. J. Wouter; van Roomen, Vincent; Connor, Harry; Zavala-Hidalgo, Jorge; Eilander, Dirk M.; Ward, Philip J.

    2018-06-01

    Many countries around the world face increasing impacts from flooding due to socio-economic development in flood-prone areas, which may be enhanced in intensity and frequency as a result of climate change. With increasing flood risk, it is becoming more important to be able to assess the costs and benefits of adaptation strategies. To guide the design of such strategies, policy makers need tools to prioritize where adaptation is needed and how much adaptation funds are required. In this country-scale study, we show how flood risk analyses can be used in cost-benefit analyses to prioritize investments in flood adaptation strategies in Mexico under future climate scenarios. Moreover, given the often limited availability of detailed local data for such analyses, we show how state-of-the-art global data and flood risk assessment models can be applied for a detailed assessment of optimal flood-protection strategies. Our results show that especially states along the Gulf of Mexico have considerable economic benefits from investments in adaptation that limit risks from both river and coastal floods, and that increased flood-protection standards are economically beneficial for many Mexican states. We discuss the sensitivity of our results to modelling uncertainties, the transferability of our modelling approach and policy implications. This article is part of the theme issue 'Advances in risk assessment for climate change adaptation policy'.
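
    The cost-benefit core of such a study compares the present value of avoided expected annual damage (EAD) with the investment needed for a higher protection standard. A minimal sketch with invented damage and cost figures, not the study's Mexican data:

```python
# Cost-benefit sketch for raising a flood-protection standard: present value
# of avoided expected annual damage vs. the investment. All numbers invented.
import numpy as np

return_periods = np.array([10, 50, 100, 500, 1000])   # flood return periods [yr]
damages = np.array([5, 40, 90, 300, 500]) * 1e6       # damage per event [EUR]

def ead(protection_rp):
    """Expected annual damage from floods exceeding the protection standard."""
    p = 1.0 / return_periods
    mask = return_periods > protection_rp
    if mask.sum() < 2:
        return 0.0
    # integrate damage over exceedance probability (p decreases along the array)
    return np.trapz(damages[mask], -p[mask])

rate, horizon = 0.05, 50
annuity = (1 - (1 + rate) ** -horizon) / rate
benefit = (ead(10) - ead(100)) * annuity   # raise standard from 1/10 to 1/100
investment = 150e6
print(f"NPV of upgrade: {(benefit - investment) / 1e6:,.0f} M EUR (illustrative)")
```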

  15. Mechanical analyses on the digital behaviour of the Tokay gecko (Gekko gecko) based on a multi-level directional adhesion model

    OpenAIRE

    Wu, Xuan; Wang, Xiaojie; Mei, Tao; Sun, Shaoming

    2015-01-01

    This paper proposes a multi-level hierarchical model for the Tokay gecko (Gekko gecko) adhesive system and analyses the digital behaviour of G. gecko at the macro/meso-level scale. The model describes the structures of G. gecko's adhesive system from the nano-level spatulae to the sub-millimetre-level lamella. The G. gecko's seta is modelled as an inextensible fibril based on Euler's elastica theorem. Considering the side contact of the spatular pads of the seta on the flat and rigid subst...

  16. Continuous spatial modelling to analyse planning and economic consequences of offshore wind energy

    International Nuclear Information System (INIS)

    Moeller, Bernd

    2011-01-01

    Offshore wind resources appear abundant, but technological, economic and planning issues significantly reduce the theoretical potential. While massive investments are anticipated and planners and developers are scouting for viable locations and considering risk and impact, few studies simultaneously address potentials and costs together with the consequences of proposed planning in an analytical and continuous manner for larger areas at once. The consequences may be investments short of efficiency and equity, and failed planning routines. A spatial resource economic model for the Danish offshore waters is presented, used to analyse area constraints, technological risks, priorities for development and opportunity costs of maintaining competing area uses. The SCREAM-offshore wind model (Spatially Continuous Resource Economic Analysis Model) uses raster-based geographical information systems (GIS) and considers numerous geographical factors, technology and cost data as well as planning information. Novel elements are a weighted visibility analysis and geographically recorded shipping movements as variable constraints. A number of scenarios are described, which include restrictions on the use of offshore areas, as well as alternative uses such as conservation and tourism. The results comprise maps, tables and cost-supply curves for further resource economic assessment and policy analysis. A discussion of parameter variations exposes uncertainties of technology development, environmental protection and competing area uses, and illustrates how such models might assist in improving public planning, while providing decision bases for the political process. The method can be adapted to different research questions, and is largely applicable in other parts of the world. - Research Highlights: → A model for the spatially continuous evaluation of offshore wind resources. → Assessment of spatial constraints, costs and resources for each location. → Planning tool for
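
    One typical output of such a model is a cost-supply curve: candidate locations sorted by unit cost, with capacity accumulated along the curve. A sketch with synthetic per-site data standing in for the model's raster cells:

```python
# Building a cost-supply curve from per-location results: sort candidate sites
# by unit cost and accumulate their capacity. Site data here are synthetic.
import numpy as np

rng = np.random.default_rng(42)
capacity_mw = rng.uniform(50, 400, 200)    # per-site capacity [MW]
cost_eur_mwh = 40 + 30 * rng.random(200)   # per-site unit cost [EUR/MWh]

order = np.argsort(cost_eur_mwh)
cum_capacity = np.cumsum(capacity_mw[order]) / 1000   # cumulative supply [GW]
cost_sorted = cost_eur_mwh[order]

# Capacity available below a given cost threshold:
for threshold in (50, 60):
    gw = cum_capacity[cost_sorted <= threshold][-1]
    print(f"capacity available below {threshold} EUR/MWh: {gw:.1f} GW")
```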

  17. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component relative to the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  18. How can results from macro economic analyses of the energy consumption of households be used in macro models? A discussion of theoretical and empirical literature about aggregation

    International Nuclear Information System (INIS)

    Halvorsen, Bente; Larsen, Bodil M.; Nesbakken, Runa

    2001-01-01

    The literature on energy demand shows that there are systematic differences in income and price elasticities between analyses based on macro data and micro data. Even if one estimates models with the same explanatory variables, the results may differ with respect to estimated price and income sensitivity. These differences may be caused by problems involved in transferring micro properties to macro properties, or because the estimated macro relationships have failed to adequately take into account the fact that households behave differently in their energy demand. Political goals are often directed towards the entire household sector. Partial equilibrium models do not capture important equilibrium effects and feedback through the energy markets and the economy in general. Thus, it is of great political and scientific interest to perform macroeconomic model analyses of different policy measures that affect energy consumption. The results of behavioural analyses, in which one investigates the heterogeneity of energy demand, must be based on information about individual households. When demand is studied based on micro data, it is difficult to aggregate its properties to a total demand function for the entire household sector if different household groups behave differently. Such heterogeneity of behaviour may for instance arise when households in different regions have different heating equipment because of regional differences in the price of electricity. The subject of aggregation arises immediately when one wants to draw conclusions about the household sector based on information about individual households, whether the discussion concerns the whole population or a selection of households. Thus, aggregation is a topic of interest in a wide range of problems.

  19. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    Science.gov (United States)

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  20. Development of model for analysing respective collections of intended hematopoietic stem cells and harvests of unintended mature cells in apheresis for autologous hematopoietic stem cell collection.

    Science.gov (United States)

    Hequet, O; Le, Q H; Rodriguez, J; Dubost, P; Revesz, D; Clerc, A; Rigal, D; Salles, G; Coiffier, B

    2014-04-01

    Hematopoietic stem cells (HSCs) required to perform peripheral hematopoietic autologous stem cell transplantation (APBSCT) can be collected by processing several blood volumes (BVs) in leukapheresis sessions. However, this may cause granulocytes to be harvested in the graft and decrease the patient's platelet blood level. Both consequences may induce disturbances in the patient. One current aim of apheresis teams is to improve HSC collection by increasing HSC yield while preventing increases in granulocyte and platelet harvests. Before improving HSC collection it seemed important to know more about the way these types of cells are harvested. The purpose of our study was to develop a simple model for analysing respective collections of intended CD34+ cells among HSC (designated here as HSC) and harvests of unintended platelets or granulocytes among mature cells (designated here as mature cells), considering the number of BVs processed and factors likely to influence cell collection or harvest. For this, we processed 1, 2 and 3 BVs in 59 leukapheresis sessions and analysed the corresponding collections and harvests with a referent device (COBE Spectra). We first analysed the amounts of HSC collected and mature cells harvested, and second the evolution of the respective shares of HSC and mature cells collected or harvested throughout the BV processes. HSC collections and mature cell harvests increased significantly with the number of BVs processed, and analyses of the factors likely to influence collections and harvests showed that only pre-leukapheresis blood levels (CD34+ cells and platelets) influenced both intended cell collections and unintended mature cell harvests. Our model is meant to assist apheresis teams in analysing shares of HSC collected and mature cells harvested with new devices or with new types of HSC mobilization. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Correlation of Klebsiella pneumoniae comparative genetic analyses with virulence profiles in a murine respiratory disease model.

    Directory of Open Access Journals (Sweden)

    Ramy A Fodah

    Full Text Available Klebsiella pneumoniae is a bacterial pathogen of worldwide importance and a significant contributor to multiple disease presentations associated with both nosocomial and community acquired disease. ATCC 43816 is a well-studied K. pneumoniae strain which is capable of causing an acute respiratory disease in surrogate animal models. In this study, we performed sequencing of the ATCC 43816 genome to support future efforts characterizing genetic elements required for disease. Furthermore, we performed comparative genetic analyses against the previously sequenced genomes of NTUH-K2044 and MGH 78578 to gain an understanding of the conservation of known virulence determinants amongst the three strains. We found that ATCC 43816 and NTUH-K2044 both possess the known virulence determinant for yersiniabactin, as well as a Type 4 secretion system (T4SS), CRISPR system, and an acetoin catabolism locus, all absent from MGH 78578. While both NTUH-K2044 and MGH 78578 are clinical isolates, little is known about the disease potential of these strains in cell culture and animal models. Thus, we also performed functional analyses in the murine macrophage cell lines RAW264.7 and J774A.1 and found that MGH 78578 (K52 serotype) was internalized at higher levels than ATCC 43816 (K2) and NTUH-K2044 (K1), consistent with previous characterization of the antiphagocytic properties of K1 and K2 serotype capsules. We also examined the three K. pneumoniae strains in a novel BALB/c respiratory disease model and found that ATCC 43816 and NTUH-K2044 are highly virulent (LD50<100 CFU) while MGH 78578 is relatively avirulent.

  2. Approximate analyses of inelastic effects in pipework

    International Nuclear Information System (INIS)

    Jobson, D.A.

    1983-01-01

    This presentation shows figures concerning analyses of inelastic effects in pipework, as follows: comparison of experimental and calculated results of simplified analyses for free end rotation and for circumferential strain; interrupted stress relaxation; regenerated relaxation caused by reversed yield; buckling of straight pipe under combined bending and torsion; and results of fatigue tests of pipe bends

  3. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or

  4. Forced vibration tests and simulation analyses of a nuclear reactor building. Part 2: simulation analyses

    International Nuclear Information System (INIS)

    Kuno, M.; Nakagawa, S.; Momma, T.; Naito, Y.; Niwa, M.; Motohashi, S.

    1995-01-01

    Forced vibration tests of a BWR-type reactor building, Hamaoka Unit 4, were performed. Valuable data on the dynamic characteristics of the soil-structure interaction system were obtained through the tests. Simulation analyses of the fundamental dynamic characteristics of the soil-structure system were conducted using a basic lumped-mass soil-structure model (lattice model), and strong correlation with the measured data was obtained. Furthermore, detailed simulation models were employed to investigate the effects of simultaneously induced vertical response and the response of the adjacent turbine building on the lateral response of the reactor building. (author). 4 refs., 11 figs

  5. Models for regionalizing economic data and their applications within the scope of forensic disaster analyses

    Science.gov (United States)

    Schmidt, Hanns-Maximilian; Wiens, Marcus; Schultmann, Frank

    2015-04-01

    The impact of natural hazards on the economic system can be observed in many different regions all over the world. Once the local economic structure is hit by an event, direct costs instantly occur. However, the disturbance on a local level (e.g. parts of a city or industries along a river bank) might also cause monetary damages in other, indirectly affected sectors. If the impact of an event is strong, these damages are likely to cascade and spread even on an international scale (e.g. the eruption of Eyjafjallajökull and its impact on the automotive sector in Europe). In order to determine these special impacts, one has to gain insight into the directly hit economic structure before being able to calculate these side effects. In particular, regarding the development of a model used for near real-time forensic disaster analyses, any simulation needs to be based on data that is rapidly available or easy to compute. Therefore, we investigated commonly used or recently discussed methodologies for regionalizing economic data. Surprisingly, even for German federal states there is no official input-output data available that can be used, although such data might provide detailed figures concerning economic interrelations between different industry sectors. In the case of highly developed countries, such as Germany, we focus on models for regionalizing the nationwide input-output table, which is usually available at the national statistical offices. However, when it comes to developing countries (e.g. in South-East Asia) the data quality and availability is usually much poorer. In this case, other sources need to be found for the proper assessment of regional economic performance. We developed an indicator-based model that can fill this gap because of its flexibility regarding the level of aggregation and the composability of different input parameters. Our poster presentation provides a literature review and a summary of potential models that seem to be useful for this specific task

  6. Automatic incrementalization of Prolog based static analyses

    DEFF Research Database (Denmark)

    Eichberg, Michael; Kahl, Matthias; Saha, Diptikalyan

    2007-01-01

    Modern development environments integrate various static analyses into the build process. Analyses that analyze the whole project whenever the project changes are impractical in this context. We present an approach to automatic incrementalization of analyses that are specified as tabled logic programs and evaluated using incremental tabled evaluation, a technique for efficiently updating memo tables in response to changes in facts and rules. The approach has been implemented and integrated into the Eclipse IDE. Our measurements show that this technique is effective for automatically incrementalizing static analyses.

  7. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2017-05-01

    Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process, these models allow for profound conclusions about the test takers. However, before such a model can be used its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters, which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study. In this study we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol. Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests. The test closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
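
    The Hausman comparison itself is compact: the difference between the two estimates is weighted by the difference of their covariance matrices and referred to a chi-square distribution. A sketch with invented estimates and covariances, not fitted diffusion-model output:

```python
# Hausman misspecification statistic: H = d' (V2 - V1)^{-1} d, chi-square
# distributed under a correctly specified model. All numbers are invented.
import numpy as np
from scipy import stats

b1 = np.array([0.52, 1.10, -0.35])   # efficient (model-based) estimates
b2 = np.array([0.58, 1.02, -0.28])   # estimates robust to misspecification
v1 = np.diag([0.004, 0.006, 0.005])  # covariance of the efficient estimator
v2 = np.diag([0.009, 0.012, 0.010])  # covariance of the robust estimator

d = b1 - b2
h = d @ np.linalg.inv(v2 - v1) @ d   # note: V2 - V1 must be positive definite
p = stats.chi2.sf(h, df=len(d))
print(f"H = {h:.2f}, p = {p:.3f}")   # small p suggests model misfit
```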

  8. Analysing and Comparing Encodability Criteria

    Directory of Open Access Journals (Sweden)

    Kirstin Peters

    2015-08-01

    Full Text Available Encodings, or proofs of their absence, are the main way to compare process calculi. To analyse the quality of encodings and to rule out trivial or meaningless encodings, they are augmented with quality criteria. There exist many different criteria, and different variants of criteria, for reasoning in different settings. This leads to incomparable results. Moreover, it is not always clear whether the criteria used to obtain a result in a particular setting do indeed fit that setting. We show how to formally reason about and compare encodability criteria by mapping them onto requirements on a relation between source and target terms that is induced by the encoding function. In particular, we analyse the common criteria of full abstraction, operational correspondence, divergence reflection, success sensitiveness, and respect of barbs; e.g. we analyse the exact nature of the simulation relation (coupled simulation versus bisimulation) that is induced by different variants of operational correspondence. This way we reduce the problem of analysing or comparing encodability criteria to the better understood problem of comparing relations on processes.

  9. Assessment applicability of selected models of multiple discriminant analyses to forecast financial situation of Polish wood sector enterprises

    Directory of Open Access Journals (Sweden)

    Adamowicz Krzysztof

    2017-03-01

    Full Text Available In the last three decades forecasting the bankruptcy of enterprises has been an important and difficult problem, which has served as an impulse for many research projects (Ribeiro et al. 2012). At present many methods of bankruptcy prediction are available. In view of the specific character of economic activity in individual sectors, specialised methods adapted to a given branch of industry are being used increasingly often. For this reason an important scientific problem is the indication of an appropriate model or group of models for preparing forecasts for a given branch of industry. Thus research has been conducted to select the model of Multiple Discriminant Analysis (MDA) best adapted to forecasting changes in the wood industry. This study analyses 10 prediction models popular in Poland. The effectiveness of the model proposed by Jagiełło, developed for all industrial enterprises, may be regarded as accidental. That model is not adapted to predicting financial changes in wood sector companies in Poland.
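
    The MDA family evaluated here boils down to a linear discriminant function over financial ratios. A sketch of fitting and applying such a model with synthetic ratios (not the study's Polish wood-sector data):

```python
# Fitting a multiple discriminant analysis (MDA) classifier to financial
# ratios, the model family compared in the study. The ratio data are synthetic.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
# Columns: liquidity ratio, profitability, debt ratio (hypothetical features).
healthy = rng.normal([1.8, 0.08, 0.45], 0.2, size=(80, 3))
distressed = rng.normal([1.1, -0.02, 0.75], 0.2, size=(40, 3))
X = np.vstack([healthy, distressed])
y = np.array([0] * 80 + [1] * 40)   # 1 = at risk of bankruptcy

mda = LinearDiscriminantAnalysis().fit(X, y)
firm = np.array([[1.3, 0.01, 0.70]])   # ratios of a firm to be classified
print("discriminant score:", mda.decision_function(firm))
print("bankruptcy risk probability:", mda.predict_proba(firm)[0, 1])
```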

  10. Automated Techniques for the Qualitative Analysis of Ecological Models: Continuous Models

    Directory of Open Access Journals (Sweden)

    Lynn van Coller

    1997-06-01

    Full Text Available The mathematics required for a detailed analysis of the behavior of a model can be formidable. In this paper, I demonstrate how various computer packages can aid qualitative analyses by implementing techniques from dynamical systems theory. Because computer software is used to obtain the results, the techniques can be used by nonmathematicians as well as mathematicians. In-depth analyses of complicated models that were previously very difficult to study can now be done. Because the paper is intended as an introduction to applying the techniques to ecological models, I have included an appendix describing some of the ideas and terminology. A second appendix shows how the techniques can be applied to a fairly simple predator-prey model and establishes the reliability of the computer software. The main body of the paper discusses a ratio-dependent model. The new techniques highlight some limitations of isocline analyses in this three-dimensional setting and show that the model is structurally unstable. Another appendix describes a larger model of a sheep-pasture-hyrax-lynx system. Dynamical systems techniques are compared with a traditional sensitivity analysis and are found to give more information. As a result, an incomplete relationship in the model is highlighted. I also discuss the resilience of these models to both parameter and population perturbations.

  11. Evidence of a major gene from Bayesian segregation analyses of liability to osteochondral diseases in pigs.

    Science.gov (United States)

    Kadarmideen, Haja N; Janss, Luc L G

    2005-11-01

    Bayesian segregation analyses were used to investigate the mode of inheritance of osteochondral lesions (osteochondrosis, OC) in pigs. Data consisted of 1163 animals with OC and their pedigrees included 2891 animals. Mixed-inheritance threshold models (MITM) and several variants of MITM, in conjunction with Markov chain Monte Carlo methods, were developed for the analysis of these (categorical) data. Results showed major genes with significant and substantially higher variances (range 1.384-37.81), compared to the polygenic variance (σ²u). Consequently, heritabilities for a mixed inheritance (range 0.65-0.90) were much higher than the heritabilities from the polygenes. Disease allele frequencies ranged from 0.38 to 0.88. Additional analyses estimating the transmission probabilities of the major gene showed clear evidence for Mendelian segregation of a major gene affecting osteochondrosis. The variant MITM with an informative prior on σ²u showed significant improvement in marginal distributions and accuracy of parameters. MITM with a "reduced polygenic model" for parameterization of polygenic effects avoided the convergence problems and poor mixing encountered with an "individual polygenic model." In all cases, "shrinkage estimators" for fixed effects avoided unidentifiability for these parameters. The mixed-inheritance linear model (MILM) was also applied to all OC lesions and compared with the MITM. This is the first study to report evidence of major genes for osteochondral lesions in pigs; these results may also form a basis for underpinning the genetic inheritance of this disease in other animals as well as in humans.

  12. LOD score exclusion analyses for candidate QTLs using random population samples.

    Science.gov (United States)

    Deng, Hong-Wen

    2003-11-01

    While extensive analyses have been conducted to test for, no formal analyses have been conducted to test against, the importance of candidate genes as putative QTLs using random population samples. Previously, we developed an LOD score exclusion mapping approach for candidate genes for complex diseases. Here, we extend this LOD score approach to exclusion analyses of candidate genes for quantitative traits. Under this approach, specific genetic effects (as reflected by heritability) and inheritance models at candidate QTLs can be analysed, and if an LOD score is ≤ -2.0, the locus can be excluded from having a heritability larger than that specified. Simulations show that this approach has high power to exclude a candidate gene from having moderate genetic effects if it is not a QTL and is robust to population admixture. Our exclusion analysis complements association analysis for candidate genes as putative QTLs in random population samples. The approach is applied to test the importance of the vitamin D receptor (VDR) gene as a potential QTL underlying the variation of bone mass, an important determinant of osteoporosis.
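
    The decision rule is simple once model and null log-likelihoods are available: the LOD score is the base-10 log of the likelihood ratio, and a specified heritability is excluded when it falls at or below -2. A sketch with hypothetical fitted values:

```python
# LOD-score exclusion bookkeeping: LOD = log10 L(model) / L(null); a specified
# heritability is excluded when LOD <= -2. Log-likelihoods below are invented.
import numpy as np

def lod_score(loglik_model, loglik_null):
    """Convert natural-log likelihoods to a base-10 LOD score."""
    return (loglik_model - loglik_null) / np.log(10)

loglik_null = -1235.2   # fit without the candidate QTL (hypothetical)
for h2, ll_model in [(0.05, -1234.9), (0.10, -1237.1), (0.20, -1242.8)]:
    lod = lod_score(ll_model, loglik_null)
    verdict = "excluded" if lod <= -2.0 else "not excluded"
    print(f"h2={h2:.2f}: LOD={lod:+.2f} -> {verdict}")
```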

  13. Mental model mapping as a new tool to analyse the use of information in decision-making in integrated water management

    NARCIS (Netherlands)

    Kolkman, Rien; Kok, Matthijs; van der Veen, A.

    2005-01-01

    The solution of complex, unstructured problems is faced with policy controversy and dispute, unused and misused knowledge, project delay and failure, and decline of public trust in governmental decisions. Mental model mapping (also called concept mapping) is a technique to analyse these difficulties

  14. Balmorel: A model for analyses of the electricity and CHP markets in the Baltic Sea Region. Appendices

    International Nuclear Information System (INIS)

    Ravn, H.F.; Munksgaard, J.; Ramskov, J.; Grohnheit, P.E.; Larsen, H.V.

    2001-03-01

    This report describes the motivations behind the development of the Balmorel model as well as the model itself. The purpose of the Balmorel project is to develop a model for analyses of the power and CHP sectors in the Baltic Sea Region. The model is directed towards the analysis of relevant policy questions to the extent that they contain substantial international aspects. The model is developed in response to the trend towards internationalisation in the electricity sector. This trend is seen in increased international trade of electricity, in investment strategies among producers and otherwise. Also environmental considerations and policies are to an increasing extent gaining an international perspective in relation to the greenhouse gasses. Further, the ongoing process of deregulation of the energy sector highlights this and contributes to the need for overview and analysis. A guiding principle behind the construction of the model has been that it may serve as a means of communication in relation to the policy issues that already are or that may become important for the region. Therefore, emphasis has been put on documentation, transparency and flexibility of the model. This is achieved in part by formulating the model in a high level modelling language, and by making the model, including data, available at the internet. Potential users of the Balmorel model include research institutions, consulting companies, energy authorities, transmission system operators and energy companies. (au)

  15. ENERGY AND ENTROPY ANALYSES OF AN EXPERIMENTAL TURBOJET ENGINE FOR TARGET DRONE APPLICATION

    Directory of Open Access Journals (Sweden)

    Onder Turan

    2016-12-01

    Full Text Available This study presents energy and entropy analyses of an experimental turbojet engine built in the Anadolu University Faculty of Aeronautics and Astronautics Test-Cell Laboratory. The laws of motion and a Brayton thermodynamic cycle model are used for this purpose. The processes (that is, compression, combustion, and expansion) are represented in P-v, T-s and h-s diagrams. Furthermore, the second law of thermodynamics is applied to the cycle model to perform the entropy analysis. A distribution of the wasted and thrust power, the overall (energy-based) first-law efficiency, and the specific fuel consumption and specific thrust of the engine were calculated during the analyses as well. The results of the study also show the change of entropy in engine components due to irreversibilities and inefficiencies. In conclusion, this study is expected to be useful for future design and research work on similar aircraft turbojets, auxiliary power units and target drone power systems.
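
    For the ideal Brayton cycle underlying such an analysis, the first-law efficiency follows directly from the pressure ratio, and specific thrust and specific fuel consumption follow from measured flows. A sketch with hypothetical engine figures, not the test-cell measurements:

```python
# Ideal Brayton-cycle bookkeeping for a small turbojet: first-law efficiency
# from the pressure ratio, plus specific thrust and TSFC from assumed data.
gamma = 1.4                      # ratio of specific heats for air
pressure_ratio = 4.0             # typical for a small experimental turbojet

eta_thermal = 1.0 - pressure_ratio ** (-(gamma - 1.0) / gamma)
print(f"ideal first-law (thermal) efficiency: {eta_thermal:.1%}")

thrust_n = 280.0                 # measured thrust [N], hypothetical
mdot_air = 0.55                  # air mass flow [kg/s], hypothetical
mdot_fuel = 0.009                # fuel mass flow [kg/s], hypothetical
print(f"specific thrust: {thrust_n / mdot_air:.0f} N s/kg")
print(f"TSFC: {mdot_fuel / thrust_n * 1e6:.1f} mg/(N s)")
```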

  16. Review of accident analyses performed at Mochovce NPP

    International Nuclear Information System (INIS)

    Siko, D.

    2000-01-01

    In this paper the review of accident analyses performed for NPP Mochovce is presented. The scope of these safety measures was defined and developed in the 'NPP Mochovce Nuclear Safety Improvements Report' issued in July 1995. The main objectives of these safety measures were the following: (a) to establish the criteria for selection and classification of accidental events, as well as to define the list of initiating events to be analysed; accident classification into the individual groups must be performed in accordance with RG 1.70 and the IAEA recommendations 'Guidelines for Accident Analysis of WWER NPP' (IAEA-EBP-WWER-01), to select boundary cases to be calculated from the scope of initiating events; (b) to elaborate the accident analysis methodology, which also includes acceptance criteria for the evaluation of results, initial and boundary conditions, assumptions related to the application of the single failure criterion, requirements on analysis quality, the computer codes used, as well as NPP models and input data for the accident analysis; (c) to perform the accident analysis for the Pre-operational Safety Report (POSAR); (d) to provide a synthetic report addressing the validity range of code models and correlations, the assessment against relevant test results, the evidence of user qualification, the modelling and nodalization scheme for the plant and the justification of the computer codes used. The analysis results showed that all acceptance criteria were met with satisfactory margins and that the design of NPP Mochovce is sound. (author)

  17. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    Science.gov (United States)

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

    Certain contaminants may travel faster through soils when they are sorbed to subsurface colloidal particles. Indeed, subsurface colloids may act as carriers of some contaminants, accelerating their translocation through the soil into the water table. This phenomenon is known as colloid-facilitated contaminant transport. It plays a significant role in contaminant transport in soils and has been recognized as a source of groundwater contamination. From a mechanistic point of view, the attachment/detachment of the colloidal particles from the soil matrix or from the air-water interface and the straining process may modify the hydraulic properties of the porous media. Šimůnek et al. (2006) developed a model that can simulate colloid-facilitated contaminant transport in variably saturated porous media. The model is based on the solution of a modified advection-dispersion equation that accounts for several processes, namely: straining, exclusion and attachment/detachment kinetics of colloids through the soil matrix. The solutions of these governing partial differential equations are obtained using a standard Galerkin-type, linear finite element scheme, implemented in the HYDRUS-2D/3D software (Šimůnek et al., 2012). Modeling colloid transport through the soil and the interaction of colloids with the soil matrix and other contaminants is complex and requires the characterization of many model parameters. In practice, it is very difficult to assess actual transport parameter values, so they are often calibrated. However, before calibration, one needs to know which parameters have the greatest impact on output variables. This kind of information can be obtained through a sensitivity analysis of the model. The main objective of this work is to perform local and global sensitivity analyses of the colloid-facilitated contaminant transport module of HYDRUS. Sensitivity analysis was performed in two steps: (i) we applied a screening method based on Morris' elementary
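
    As an illustration of the screening step named above, the following minimal Python sketch implements Morris-style elementary effects on a toy function standing in for a HYDRUS run. The toy model, number of trajectories and step size are all assumptions; the study's actual implementation is not shown in the record.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_model(x):
    # Stand-in for one HYDRUS run: any scalar output of the k parameters.
    return x[0] ** 2 + 0.5 * x[1] + 0.1 * x[0] * x[2]

def morris_screening(model, k, r=50, delta=0.25):
    """Crude Morris elementary-effects screening on the unit hypercube."""
    effects = [[] for _ in range(k)]
    for _ in range(r):
        x = rng.uniform(0.0, 1.0 - delta, size=k)  # random base point
        y0 = model(x)
        for i in rng.permutation(k):               # perturb one factor at a time
            x_new = x.copy()
            x_new[i] += delta
            y1 = model(x_new)
            effects[i].append((y1 - y0) / delta)   # elementary effect of factor i
            x, y0 = x_new, y1                      # continue along the trajectory
    mu_star = [float(np.mean(np.abs(e))) for e in effects]  # overall importance
    sigma = [float(np.std(e)) for e in effects]    # nonlinearity / interactions
    return mu_star, sigma

mu_star, sigma = morris_screening(toy_model, k=3)
print(mu_star, sigma)
```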

  18. Congruence between distribution modelling and phylogeographical analyses reveals Quaternary survival of a toadflax species (Linaria elegans) in oceanic climate areas of a mountain ring range.

    Science.gov (United States)

    Fernández-Mazuecos, Mario; Vargas, Pablo

    2013-06-01

    · The role of Quaternary climatic shifts in shaping the distribution of Linaria elegans, an Iberian annual plant, was investigated using species distribution modelling and molecular phylogeographical analyses. Three hypotheses are proposed to explain the Quaternary history of its mountain ring range. · The distribution of L. elegans was modelled using the maximum entropy method and projected to the last interglacial and to the last glacial maximum (LGM) using two different paleoclimatic models: the Community Climate System Model (CCSM) and the Model for Interdisciplinary Research on Climate (MIROC). Two nuclear and three plastid DNA regions were sequenced for 24 populations (119 individuals sampled). Bayesian phylogenetic, phylogeographical, dating and coalescent-based population genetic analyses were conducted. · Molecular analyses indicated the existence of northern and southern glacial refugia and supported two routes of post-glacial recolonization. These results were consistent with the LGM distribution as inferred under the CCSM paleoclimatic model (but not under the MIROC model). Isolation between two major refugia was dated back to the Riss or Mindel glaciations, > 100 kyr before present (bp). · The Atlantic distribution of inferred refugia suggests that the oceanic (buffered)-continental (harsh) gradient may have played a key and previously unrecognized role in determining Quaternary distribution shifts of Mediterranean plants. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.

  19. Thermal Safety Analyses for the Production of Plutonium-238 at the High Flux Isotope Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Hurt, Christopher J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Freels, James D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hobbs, Randy W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jain, Prashant K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Maldonado, G. Ivan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-08-01

    that were necessary due to the limiting fabrication and irradiation data available. The calculated parameters in the final production target model are the most accurate and comprehensive, while still conservative. Over 210 permutations in irradiation time and position were evaluated, and are supported by the most recent inputs and highest fidelity methodology. The results of these analyses show that the models presented in this report provide a robust and reliable basis for previous, current and future experiment safety analyses. In addition, they reveal an evolving knowledge of the steady-state behavior of the NpO2/Al pellets under irradiation for a variety of target encapsulations and potential conditions.

  20. Spatial occupancy models applied to atlas data show Southern Ground Hornbills strongly depend on protected areas.

    Science.gov (United States)

    Broms, Kristin M; Johnson, Devin S; Altwegg, Res; Conquest, Loveday L

    2014-03-01

    Determining the range of a species and exploring species-habitat associations are central questions in ecology and can be answered by analyzing presence-absence data. Often, both the sampling of sites and the desired area of inference involve neighboring sites; thus, positive spatial autocorrelation between these sites is expected. Using survey data for the Southern Ground Hornbill (Bucorvus leadbeateri) from the Southern African Bird Atlas Project, we compared advantages and disadvantages of three increasingly complex models for species occupancy: an occupancy model that accounted for nondetection but assumed all sites were independent, and two spatial occupancy models that accounted for both nondetection and spatial autocorrelation. We modeled the spatial autocorrelation with an intrinsic conditional autoregressive (ICAR) model and with a restricted spatial regression (RSR) model. Both spatial models can readily be applied to any other gridded, presence-absence data set using a newly introduced R package. The RSR model provided the best inference and was able to capture small-scale variation that the other models did not. It showed that ground hornbills are strongly dependent on protected areas in the north of their South African range, but less so further south. The ICAR models did not capture any spatial autocorrelation in the data, and they took an order of magnitude longer than the RSR models to run. Thus, the RSR occupancy model appears to be an attractive choice for modeling occurrences at large spatial domains, while accounting for imperfect detection and spatial autocorrelation.
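
    As a language-agnostic illustration of the two spatial constructions compared above (the record's newly introduced R package is not named, so it is not used here), the sketch below builds an ICAR precision matrix and a reduced-rank RSR basis, i.e. the leading eigenvectors of the Moran operator restricted to the orthogonal complement of the fixed effects, for a toy grid. The grid size and intercept-only design are assumptions.

```python
import numpy as np

def icar_precision(W):
    """ICAR precision Q = D - W for a 0/1 neighbourhood matrix W."""
    return np.diag(W.sum(axis=1)) - W

def rsr_basis(W, X, n_keep):
    """RSR random-effect basis: leading eigenvectors of the Moran operator
    P_perp W P_perp, which are orthogonal to the fixed-effect design X."""
    n = W.shape[0]
    P_perp = np.eye(n) - X @ np.linalg.solve(X.T @ X, X.T)
    moran = P_perp @ W @ P_perp
    vals, vecs = np.linalg.eigh(moran)
    order = np.argsort(vals)[::-1]         # most positively autocorrelated
    return vecs[:, order[:n_keep]]         # spatial patterns first

# Toy 3x3 grid with rook neighbours
n_side = 3
n = n_side * n_side
W = np.zeros((n, n))
for i in range(n):
    r, c = divmod(i, n_side)
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        rr, cc = r + dr, c + dc
        if 0 <= rr < n_side and 0 <= cc < n_side:
            W[i, rr * n_side + cc] = 1.0

X = np.ones((n, 1))                        # intercept-only fixed effects
K = rsr_basis(W, X, n_keep=3)              # reduced-rank spatial basis
print(icar_precision(W).shape, K.shape)
```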

  1. A new approach to analyse longitudinal epidemiological data with an excess of zeros

    NARCIS (Netherlands)

    Spriensma, Alette S.; Hajos, Tibor R. S.; de Boer, Michiel R.; Heymans, Martijn W.; Twisk, Jos W. R.

    2013-01-01

    Background: Within longitudinal epidemiological research, 'count' outcome variables with an excess of zeros frequently occur. Although these outcomes are frequently analysed with a linear mixed model, or a Poisson mixed model, a two-part mixed model would be better in analysing outcome variables

  2. Hydrological Assessment of Model Performance and Scenario Analyses of Land Use Change and Climate Change in lowlands of Veneto Region (Italy)

    Science.gov (United States)

    Pijl, Anton; Brauer, Claudia; Sofia, Giulia; Teuling, Ryan; Tarolli, Paolo

    2017-04-01

    Growing water-related challenges in lowland areas of the world call for good assessment of our past and present actions, in order to guide our future decisions. The novel Wageningen Lowland Runoff Simulator (WALRUS; Brauer et al., 2014) was developed to simulate hydrological processes and has shown promising performance in recent studies in the Netherlands. Here the model was applied to a coastal basin of 2800 ha in the Veneto Region (northern Italy) to test model performance and evaluate scenario analyses of land use change and climate change. Located partially below sea level, the reclaimed area is facing persistent land transformation and climate change trends, which alter not only the processes in the catchment but also the demands on it (Tarolli and Sofia, 2016). Firstly, results of the calibration (NSE = 0.77; one-year simulation, daily resolution) and validation (NSE = 0.53; idem) showed that the model is able to reproduce the dominant hydrological processes of this lowland area (e.g. discharge and groundwater fluxes). Land use scenarios between 1951 and 2060 were constructed using demographic models, supported by orthographic interpretation techniques. Climate scenarios were constructed from historical records and future projections by the COSMO-CLM regional climate model (Rockel et al., 2008) under the RCP4.5 pathway. WALRUS simulations showed that the land use changes result in a wetter catchment with more discharge, and the climatic changes cause more extremes, with longer droughts and stronger rain events. These changes combined show drier summers (-33% rainfall, +27% soil moisture deficit) and wetter (+13% rainfall) and more intense (+30% rain intensity) autumns and winters in the future. The simulated discharge regime - particularly peak flow - follows these polarising trends, in good agreement with similar studies in the geographical zone (e.g. Vezzoli et al., 2015). This will increase the pressure on the fully artificial drainage and agricultural systems
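
    The NSE values quoted above are the standard Nash-Sutcliffe efficiency. A minimal sketch of the computation, on made-up discharge series rather than the Veneto data, is:

```python
import numpy as np

def nse(q_obs, q_sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the mean."""
    q_obs, q_sim = np.asarray(q_obs), np.asarray(q_sim)
    return 1.0 - np.sum((q_obs - q_sim) ** 2) / np.sum((q_obs - q_obs.mean()) ** 2)

# Illustrative daily discharge series (not the study's data)
q_obs = np.array([1.0, 1.4, 3.2, 2.1, 1.3, 1.1])
q_sim = np.array([1.1, 1.3, 2.8, 2.4, 1.4, 1.0])
print(f"NSE = {nse(q_obs, q_sim):.2f}")
```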

  3. Models for transient analyses in advanced test reactors

    International Nuclear Information System (INIS)

    Gabrielli, Fabrizio

    2011-01-01

    Several strategies are being developed worldwide to respond to the world's increasing demand for electricity. Modern nuclear facilities are under construction or in the planning phase. In parallel, advanced nuclear reactor concepts are being developed to achieve sustainability, minimize waste, and make efficient use of uranium resources. To optimize the performance of components (fuels and structures) of these systems, significant efforts are under way to design new Material Test Reactor facilities in Europe which employ water as a coolant. Safety provisions and the analyses of severe accidents are key points in the determination of sound designs. In this frame, the SIMMER multiphysics code system is a very attractive tool, as it can simulate transients and phenomena within and beyond the design basis in a tightly coupled way. This thesis is primarily focused upon the extension of the SIMMER multigroup cross-section processing scheme (based on the Bondarenko method) for a proper heterogeneity treatment in the analyses of water-cooled thermal neutron systems. Since the SIMMER code was originally developed for liquid metal-cooled fast reactor analyses, the effect of heterogeneity had been neglected. As a result, the application of the code to water-cooled systems leads to a significant overestimation of the reactivity feedbacks and in turn to non-conservative results. To treat the heterogeneity, the multigroup cross-sections should be computed by properly taking account of the resonance self-shielding effects and the fine group-wise intra-cell flux distribution in space. In this thesis, significant improvements of the SIMMER cross-section processing scheme are described. A new formulation of the background cross-section, based on the Bell and Wigner correlations, is introduced and pre-calculated reduction factors (Effective Mean Chord Lengths) are used to take proper account of the resonance self-shielding effects of non-fuel isotopes. Moreover, pre-calculated parameters are applied

  4. Analyses of cavitation instabilities in ductile metals

    DEFF Research Database (Denmark)

    Tvergaard, Viggo

    2007-01-01

    Cavitation instabilities have been predicted for a single void in a ductile metal stressed under high triaxiality conditions. In experiments for a ceramic reinforced by metal particles a single dominant void has been observed on the fracture surface of some of the metal particles bridging a crack…, and also tests for a thin ductile metal layer bonding two ceramic blocks have indicated rapid void growth. Analyses for these material configurations are discussed here. When the void radius is very small, a nonlocal plasticity model is needed to account for observed size-effects, and recent analyses…, while the surrounding voids are represented by a porous ductile material model in terms of a field quantity that specifies the variation of the void volume fraction in the surrounding metal…

  5. An application of the 'Bayesian cohort model' to nuclear power plant cost analyses

    International Nuclear Information System (INIS)

    Ono, Kenji; Nakamura, Takashi

    2002-01-01

    We have developed a new method for identifying the effects of calendar year, plant age and commercial operation starting year on the costs and performances of nuclear power plants, and have also developed an analysis system running on personal computers. The method extends the Bayesian cohort model for time series social survey data proposed by one of the authors. The proposed method was shown to be able to separate the above three effects more properly than traditional methods such as taking simple means over the time domain. The analyses of US nuclear plant cost and performance data using the proposed method suggest that many of the US plants spent relatively long time and much capital cost on modification at ages of about 10 to 20 years, but that, after those ages, they performed fairly well with lower and stabilized O and M and additional capital costs. (author)

  6. Fracture Mechanics Analyses of Reinforced Carbon-Carbon Wing-Leading-Edge Panels

    Science.gov (United States)

    Raju, Ivatury S.; Phillips, Dawn R.; Knight, Norman F., Jr.; Song, Kyongchan

    2010-01-01

    Fracture mechanics analyses of subsurface defects within the joggle regions of the Space Shuttle wing-leading-edge RCC panels are performed. A 2D plane strain idealized joggle finite element model is developed to study the fracture behavior of the panels for three distinct loading conditions - lift-off and ascent, on-orbit, and entry. For lift-off and ascent, an estimated bounding aerodynamic pressure load is used for the analyses, while for on-orbit and entry, thermo-mechanical analyses are performed using the extreme cold and hot temperatures experienced by the panels. In addition, a best estimate for the material stress-free temperature is used in the thermo-mechanical analyses. In the finite element models, the substrate and coating are modeled separately as two distinct materials. Subsurface defects are introduced at the coating-substrate interface and within the substrate. The objective of the fracture mechanics analyses is to evaluate the defect driving forces, which are characterized by the strain energy release rates, and determine if defects can become unstable for each of the loading conditions.

  7. A new approach to analyse longitudinal epidemiological data with an excess of zeros

    NARCIS (Netherlands)

    Spriensma, A.S.; Hajós, T.R.S.; de Boer, M.R.; Heijmans, M.W.; Twisk, J.W.R.

    2013-01-01

    Background: Within longitudinal epidemiological research, count outcome variables with an excess of zeros frequently occur. Although these outcomes are frequently analysed with a linear mixed model, or a Poisson mixed model, a two-part mixed model would be better in analysing outcome variables with

  8. Seismic risk analyses in the German Risk Study, phase B

    International Nuclear Information System (INIS)

    Hosser, D.; Liemersdorf, H.

    1991-01-01

    The paper discusses some aspects of the seismic risk part of the German Risk Study for Nuclear Power Plants, Phase B. First simplified analyses in Phase A of the study allowed only a rough classification of structures and systems of the PWR reference plant according to their seismic risk contribution. These studies were extended in Phase B using improved models for the dynamic analyses of buildings, structures and components as well as for the probabilistic analyses of seismic loading, failure probabilities and event trees. The methodology of deriving probabilistic seismic load descriptions is explained and compared with the methods in Phase A of the study and in other studies. Some details of the linear and nonlinear dynamic analyses of structures are reported in order to demonstrate the influence of different assumptions for material behaviour and failure criteria. The probabilistic structural and event tree analyses are discussed with respect to distribution assumptions, acceptable simplifications and model uncertainties. Some results for the PWR reference plant are given. (orig.)

  9. Measuring performance at trade shows

    DEFF Research Database (Denmark)

    Hansen, Kåre

    2004-01-01

    Trade shows are an increasingly important marketing activity for many companies, but current measures of trade show performance do not adequately capture dimensions important to exhibitors. Based on the marketing literature's outcome- and behavior-based control system taxonomy, a model is built that captures an outcome-based sales dimension and four behavior-based dimensions (i.e. information-gathering, relationship building, image building, and motivation activities). A 16-item instrument is developed for assessing exhibitors' perceptions of their trade show performance. The paper presents evidence…

  10. Landscape evolution models using the stream power incision model show unrealistic behavior when m ∕ n equals 0.5

    Directory of Open Access Journals (Sweden)

    J. S. Kwang

    2017-12-01

    Full Text Available Landscape evolution models often utilize the stream power incision model to simulate river incision: E = K A^m S^n, where E is the vertical incision rate, K is the erodibility constant, A is the upstream drainage area, S is the channel gradient, and m and n are exponents. This simple but useful law has been employed with an imposed rock uplift rate to gain insight into steady-state landscapes. The most common choice of exponents satisfies m/n = 0.5. Yet all models have limitations. Here, we show that when hillslope diffusion (which operates only on small scales) is neglected, the choice m/n = 0.5 yields a curiously unrealistic result: the predicted landscape is invariant to horizontal stretching. That is, the steady-state landscape for a 10 km² horizontal domain can be stretched so that it is identical to the corresponding landscape for a 1000 km² domain.
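
    The stretching-invariance claim can be checked along a single steady-state profile. Assuming drainage area grows as A(x) = k_a x^2 (so that stretching x -> s x scales area by s^2, as for a 2D planform), the steady-state slope is S = (U/K)^(1/n) A^(-m/n), and for m/n = 0.5 the integrated relief is unchanged by stretching. The sketch below verifies this numerically; all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

def steady_relief(x0, L, U=1e-4, K=1e-5, k_a=1.0, m=0.5, n=1.0, npts=10_000):
    """Relief of a steady-state profile: integrate S = (U/K)^(1/n) A^(-m/n)
    with A(x) = k_a * x**2 standing in for 2D drainage-area growth."""
    x = np.linspace(x0, L, npts)
    S = (U / K) ** (1.0 / n) * (k_a * x ** 2) ** (-m / n)
    return np.trapz(S, x)

r1 = steady_relief(1e2, 1e4)   # a 10 km profile
r2 = steady_relief(1e3, 1e5)   # the same profile stretched 10x
print(r1, r2)                  # identical relief when m/n = 0.5
```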

  11. Show them the money? The role of pay, managerial need support, and justice in a self-determination theory model of intrinsic work motivation.

    Science.gov (United States)

    Olafsen, Anja H; Halvari, Hallgeir; Forest, Jacques; Deci, Edward L

    2015-08-01

    The link between money and motivation has been a debated topic for decades, especially in work organizations. However, field studies investigating the amount of pay in relation to employee motivation are lacking, and there have been calls for empirical studies addressing compensation systems and motivation in the work domain. The purpose of this study was to examine outcomes associated with the amount of pay, and perceived distributive and procedural justice regarding pay, in relation to those for perceived managerial need support. Participants were 166 bank employees who also reported on their basic psychological need satisfaction and intrinsic work motivation. SEM analyses tested a self-determination theory (SDT) model, with satisfaction of the competence and autonomy needs as an intervening variable. The primary findings were that amount of pay and employees' perceived distributive justice regarding their pay were unrelated to employees' need satisfaction and intrinsic work motivation, but procedural justice regarding pay did affect these variables. However, managerial need support was the most important factor for promoting need satisfaction and intrinsic work motivation directly, indirectly, and as a moderator in the model. Hence, the results of the present organizational field study support earlier laboratory experiments within the SDT framework showing that monetary rewards did not enhance intrinsic motivation. This seems to have profound implications for organizations concerned about motivating their employees. © 2015 Scandinavian Psychological Associations and John Wiley & Sons Ltd.

  12. Comparison of Numerical Analyses with a Static Load Test of a Continuous Flight Auger Pile

    Directory of Open Access Journals (Sweden)

    Hoľko Michal

    2014-12-01

    Full Text Available The article deals with numerical analyses of a Continuous Flight Auger (CFA) pile. The analyses include a comparison of calculated and measured load-settlement curves as well as a comparison of the load distribution over the pile's length. The numerical analyses were executed using two types of software, i.e., Ansys and Plaxis, both based on FEM calculations. The two programs differ in the way they create numerical models, in how they model the interface between the pile and the soil, and in the constitutive material models they use. The analyses have been prepared in the form of a parametric study, in which the method of modelling the interface and the material models of the soil are compared and analysed.

  13. The speed of memory errors shows the influence of misleading information: Testing the diffusion model and discrete-state models.

    Science.gov (United States)

    Starns, Jeffrey J; Dubé, Chad; Frelinger, Matthew E

    2018-05-01

    In this report, we evaluate single-item and forced-choice recognition memory for the same items and use the resulting accuracy and reaction time data to test the predictions of discrete-state and continuous models. For the single-item trials, participants saw a word and indicated whether or not it was studied on a previous list. The forced-choice trials had one studied and one non-studied word that both appeared in the earlier single-item trials and both received the same response. Thus, forced-choice trials always had one word with a previous correct response and one with a previous error. Participants were asked to select the studied word regardless of whether they previously called both words "studied" or "not studied." The diffusion model predicts that forced-choice accuracy should be lower when the word with a previous error had a fast versus a slow single-item RT, because fast errors are associated with more compelling misleading memory retrieval. The two-high-threshold (2HT) model does not share this prediction because all errors are guesses, so error RT is not related to memory strength. A low-threshold version of the discrete state approach predicts an effect similar to the diffusion model, because errors are a mixture of responses based on misleading retrieval and guesses, and the guesses should tend to be slower. Results showed that faster single-trial errors were associated with lower forced-choice accuracy, as predicted by the diffusion and low-threshold models. Copyright © 2018 Elsevier Inc. All rights reserved.

  14. We're Playing "Jeremy Kyle"! Television Talk Shows in the Playground

    Science.gov (United States)

    Marsh, Jackie; Bishop, Julia

    2014-01-01

    This paper focuses on an episode of play in a primary school playground in England, which featured a group of children re-enacting elements of the television talk show "The Jeremy Kyle Show". The episode is analysed in the light of work that has identified the key elements of the talk show genre and the children's play is examined in…

  15. Plot showing ATLAS limits on Standard Model Higgs production in the mass range 100-600 GeV

    CERN Multimedia

    ATLAS Collaboration

    2011-01-01

    The combined upper limit on the Standard Model Higgs boson production cross section divided by the Standard Model expectation as a function of mH is indicated by the solid line. This is a 95% CL limit using the CLs method in the entire mass range. The dotted line shows the median expected limit in the absence of a signal and the green and yellow bands reflect the corresponding 68% and 95% expected

  16. Plot showing ATLAS limits on Standard Model Higgs production in the mass range 110-150 GeV

    CERN Multimedia

    ATLAS Collaboration

    2011-01-01

    The combined upper limit on the Standard Model Higgs boson production cross section divided by the Standard Model expectation as a function of mH is indicated by the solid line. This is a 95% CL limit using the CLs method in the low mass range. The dotted line shows the median expected limit in the absence of a signal and the green and yellow bands reflect the corresponding 68% and 95% expected

  17. Meta-analyses of the 5-HTTLPR polymorphisms and post-traumatic stress disorder.

    Directory of Open Access Journals (Sweden)

    Fernando Navarro-Mateu

    Full Text Available OBJECTIVE: To conduct a meta-analysis of all published genetic association studies of 5-HTTLPR polymorphisms performed in PTSD cases. METHODS DATA SOURCES: Potential studies were identified through PubMed/MEDLINE, EMBASE, Web of Science databases (Web of Knowledge, WoK), PsychINFO, PsychArticles and HuGeNet (Human Genome Epidemiology Network) up until December 2011. STUDY SELECTION: Published observational studies reporting genotype or allele frequencies of this genetic factor in PTSD cases and in non-PTSD controls were all considered eligible for inclusion in this systematic review. DATA EXTRACTION: Two reviewers selected studies for possible inclusion and extracted data independently following a standardized protocol. STATISTICAL ANALYSIS: A biallelic and a triallelic meta-analysis, including the total S and S' frequencies, the dominant (S+/LL and S'+/L'L') and the recessive model (SS/L+ and S'S'/L'+), was performed with a random-effect model to calculate the pooled OR and its corresponding 95% CI. Forest plots and Cochran's Q-Statistic and I(2) index were calculated to check for heterogeneity. Subgroup analyses and meta-regression were carried out to analyze potential moderators. Publication bias and quality of reporting were also analyzed. RESULTS: 13 studies met our inclusion criteria, providing a total sample of 1874 patients with PTSD and 7785 controls in the biallelic meta-analyses and 627 and 3524, respectively, in the triallelic. None of the meta-analyses showed evidence of an association between 5-HTTLPR and PTSD, but several characteristics (exposure to the same principal stressor for PTSD cases and controls, adjustment for potential confounding variables, blind assessment, study design, type of PTSD, ethnic distribution and Total Quality Score) influenced the results in subgroup analyses and meta-regression. There was no evidence of potential publication bias. CONCLUSIONS: Current evidence does not support a direct effect of 5-HTTLPR

  18. Meta-analyses of the 5-HTTLPR polymorphisms and post-traumatic stress disorder.

    Science.gov (United States)

    Navarro-Mateu, Fernando; Escámez, Teresa; Koenen, Karestan C; Alonso, Jordi; Sánchez-Meca, Julio

    2013-01-01

    To conduct a meta-analysis of all published genetic association studies of 5-HTTLPR polymorphisms performed in PTSD cases. Potential studies were identified through PubMed/MEDLINE, EMBASE, Web of Science databases (Web of Knowledge, WoK), PsychINFO, PsychArticles and HuGeNet (Human Genome Epidemiology Network) up until December 2011. Published observational studies reporting genotype or allele frequencies of this genetic factor in PTSD cases and in non-PTSD controls were all considered eligible for inclusion in this systematic review. Two reviewers selected studies for possible inclusion and extracted data independently following a standardized protocol. A biallelic and a triallelic meta-analysis, including the total S and S' frequencies, the dominant (S+/LL and S'+/L'L') and the recessive model (SS/L+ and S'S'/L'+), was performed with a random-effect model to calculate the pooled OR and its corresponding 95% CI. Forest plots and Cochran's Q-Statistic and I(2) index were calculated to check for heterogeneity. Subgroup analyses and meta-regression were carried out to analyze potential moderators. Publication bias and quality of reporting were also analyzed. 13 studies met our inclusion criteria, providing a total sample of 1874 patients with PTSD and 7785 controls in the biallelic meta-analyses and 627 and 3524, respectively, in the triallelic. None of the meta-analyses showed evidence of an association between 5-HTTLPR and PTSD but several characteristics (exposure to the same principal stressor for PTSD cases and controls, adjustment for potential confounding variables, blind assessment, study design, type of PTSD, ethnic distribution and Total Quality Score) influenced the results in subgroup analyses and meta-regression. There was no evidence of potential publication bias. Current evidence does not support a direct effect of 5-HTTLPR polymorphisms on PTSD. Further analyses of gene-environment interactions, epigenetic modulation and new studies with large samples
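
    The random-effect pooled OR, Cochran's Q and the I(2) index referred to in both records follow the standard DerSimonian-Laird recipe. A minimal sketch, on made-up per-study log-odds-ratios rather than the reviewed data, is:

```python
import numpy as np

def dersimonian_laird(log_or, var):
    """Random-effects pooled OR with the DerSimonian-Laird tau^2 estimate."""
    w = 1.0 / var                                  # fixed-effect weights
    theta_fe = np.sum(w * log_or) / np.sum(w)
    Q = np.sum(w * (log_or - theta_fe) ** 2)       # Cochran's Q
    df = len(log_or) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - df) / c)                  # between-study variance
    w_re = 1.0 / (var + tau2)                      # random-effects weights
    theta = np.sum(w_re * log_or) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = max(0.0, (Q - df) / Q) * 100 if Q > 0 else 0.0   # I^2 (%)
    return np.exp(theta), np.exp(theta - 1.96 * se), np.exp(theta + 1.96 * se), i2

# Illustrative per-study log-ORs and variances (not the reviewed studies)
log_or = np.array([0.10, -0.05, 0.20, 0.02])
var = np.array([0.04, 0.06, 0.09, 0.05])
print(dersimonian_laird(log_or, var))
```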

  19. GOTHIC MODEL OF BWR SECONDARY CONTAINMENT DRAWDOWN ANALYSES

    International Nuclear Information System (INIS)

    Hansen, P.N.

    2004-01-01

    This article introduces a GOTHIC version 7.1 model of the Secondary Containment Reactor Building post-LOCA drawdown analysis for a BWR. GOTHIC is an EPRI-sponsored thermal hydraulic code. This analysis is required by the Utility to demonstrate an ability to restore and maintain the negative pressure condition of the Secondary Containment Reactor Building. The technical and regulatory issues associated with this modeling are presented. The analysis includes the effect of wind, elevation and thermal impacts on pressure conditions. The model includes a multiple-volume representation which includes the spent fuel pool. In addition, heat sources and sinks are modeled as one-dimensional heat conductors. The leakage into the building is modeled to include both laminar and turbulent behavior, as established by actual plant test data. The GOTHIC code provides components to model the heat exchangers used to provide fuel pool cooling as well as area cooling via air coolers. The results of the evaluation are used to determine the time during which the Reactor Building is at a pressure that exceeds external conditions. This time period is established with the GOTHIC model based on the worst-case pressure conditions on the building. For this time period the Utility must assume the primary containment leakage goes directly to the environment. Once the building pressure is restored below outside conditions, the release to the environment can be credited as a filtered release

  20. DCH analyses using the CONTAIN code

    International Nuclear Information System (INIS)

    Hong, Sung Wan; Kim, Hee Dong

    1996-08-01

    This report describes CONTAIN analyses performed during participation in the project 'DCH issue resolution for ice condenser plants', which is sponsored by NRC at SNL. Even though the calculations were performed for the ice condenser plant, the CONTAIN code has been used for analyses of many phenomena in the PWR containment and the DCH module can be commonly applied to any plant type. The present ice condenser issue resolution effort is intended to provide guidance as to what might be needed to resolve DCH for ice condenser plants. It includes both a screening analysis and a scoping study if the screening analysis cannot provide a complete resolution. The following are the results concerning DCH loads, in descending order: 1. Availability of ignition sources prior to vessel breach; 2. Availability and effectiveness of ice in the ice condenser; 3. Loads modeling uncertainties related to co-ejected RPV water; 4. Other loads modeling uncertainties. 10 tabs., 3 figs., 14 refs. (Author)

  1. DCH analyses using the CONTAIN code

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Sung Wan; Kim, Hee Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-08-01

    This report describes CONTAIN analyses performed during participation in the project 'DCH issue resolution for ice condenser plants', which is sponsored by NRC at SNL. Even though the calculations were performed for the ice condenser plant, the CONTAIN code has been used for analyses of many phenomena in the PWR containment and the DCH module can be commonly applied to any plant type. The present ice condenser issue resolution effort is intended to provide guidance as to what might be needed to resolve DCH for ice condenser plants. It includes both a screening analysis and a scoping study if the screening analysis cannot provide a complete resolution. The following are the results concerning DCH loads, in descending order: 1. Availability of ignition sources prior to vessel breach; 2. Availability and effectiveness of ice in the ice condenser; 3. Loads modeling uncertainties related to co-ejected RPV water; 4. Other loads modeling uncertainties. 10 tabs., 3 figs., 14 refs. (Author).

  2. Data Assimilation with the Extended Cmam: Nudging to Re-Analyses of the Lower Atmosphere

    Science.gov (United States)

    Fomichev, V. I.; Beagley, S. R.; Shepherd, M. G.; Semeniuk, K.; Mclandress, C. W.; Scinocca, J.; McConnell, J. C.

    2012-12-01

    The extended CMAM is currently being run in a forecast mode, allowing the use of the model to simulate specific events. The current analysis period covers 1990-2010. The model is forced using ERA-Interim re-analyses via a nudging technique for the troposphere/stratosphere in combination with the GCM evolution in the lower atmosphere. Thus a transient forced model state is created in the lower atmosphere. The upper atmosphere is allowed to evolve in response to the observed conditions occurring in the lower atmosphere and in response to other transient forcings such as SSTs, solar flux, and CO2 and CFC boundary changes. This methodology allows specific events and observations to be more successfully compared with the model. The model results compared to TOMS and ACE observations show good agreement.

  3. Analyses and testing of model prestressed concrete reactor vessels with built-in planes of weakness

    International Nuclear Information System (INIS)

    Dawson, P.; Paton, A.A.; Fleischer, C.C.

    1990-01-01

    This paper describes the design, construction, analyses and testing of two small scale, single cavity prestressed concrete reactor vessel models, one without planes of weakness and one with planes of weakness immediately behind the cavity liner. This work was carried out to extend a previous study which had suggested the likely feasibility of constructing regions of prestressed concrete reactor vessels and biological shields, which become activated, using easily removable blocks, separated by a suitable membrane. The paper describes the results obtained and concludes that the planes of weakness concept could offer a means of facilitating the dismantling of activated regions of prestressed concrete reactor vessels, biological shields and similar types of structure. (author)

  4. Re-evaluation of a novel approach for quantitative myocardial oedema detection by analysing tissue inhomogeneity in acute myocarditis using T2-mapping

    Energy Technology Data Exchange (ETDEWEB)

    Baessler, Bettina; Treutlein, Melanie; Maintz, David; Bunck, Alexander C. [University Hospital of Cologne, Department of Radiology, Cologne (Germany); Schaarschmidt, Frank [Leibniz Universitaet Hannover, Institute of Biostatistics, Faculty of Natural Sciences, Hannover (Germany); Stehning, Christian [Philips Research, Hamburg (Germany); Schnackenburg, Bernhard [Philips, Healthcare Germany, Hamburg (Germany); Michels, Guido [University Hospital of Cologne, Department III of Internal Medicine, Heart Centre, Cologne (Germany)

    2017-12-15

    To re-evaluate a recently suggested approach of quantifying myocardial oedema and increased tissue inhomogeneity in myocarditis by T2-mapping. Cardiac magnetic resonance data of 99 patients with myocarditis were retrospectively analysed. Thirty healthy volunteers served as controls. T2-mapping data were acquired at 1.5 T using a gradient-spin-echo T2-mapping sequence. T2-maps were segmented according to the 16-segment AHA model. Segmental T2 values, segmental pixel standard deviation (SD) and the derived parameters maxT2, maxSD and madSD were analysed and compared to the established Lake Louise criteria (LLC). A re-estimation of logistic regression models revealed that all models containing an SD parameter were superior to any model containing global myocardial T2. Using a combined cut-off of 1.8 ms for madSD + 68 ms for maxT2 resulted in a diagnostic sensitivity of 75% and specificity of 80% and showed a similar diagnostic performance compared to LLC in receiver operating characteristic curve analyses. Combining madSD, maxT2 and late gadolinium enhancement (LGE) in a model resulted in a superior diagnostic performance compared to LLC (sensitivity 93%, specificity 83%). The results show that the novel T2-mapping-derived parameters exhibit an additional diagnostic value over LGE, with the inherent potential to overcome the current limitations of T2-mapping. (orig.)

  5. Re-evaluation of a novel approach for quantitative myocardial oedema detection by analysing tissue inhomogeneity in acute myocarditis using T2-mapping

    International Nuclear Information System (INIS)

    Baessler, Bettina; Treutlein, Melanie; Maintz, David; Bunck, Alexander C.; Schaarschmidt, Frank; Stehning, Christian; Schnackenburg, Bernhard; Michels, Guido

    2017-01-01

    To re-evaluate a recently suggested approach of quantifying myocardial oedema and increased tissue inhomogeneity in myocarditis by T2-mapping. Cardiac magnetic resonance data of 99 patients with myocarditis were retrospectively analysed. Thirty healthy volunteers served as controls. T2-mapping data were acquired at 1.5 T using a gradient-spin-echo T2-mapping sequence. T2-maps were segmented according to the 16-segment AHA model. Segmental T2 values, segmental pixel standard deviation (SD) and the derived parameters maxT2, maxSD and madSD were analysed and compared to the established Lake Louise criteria (LLC). A re-estimation of logistic regression models revealed that all models containing an SD parameter were superior to any model containing global myocardial T2. Using a combined cut-off of 1.8 ms for madSD + 68 ms for maxT2 resulted in a diagnostic sensitivity of 75% and specificity of 80% and showed a similar diagnostic performance compared to LLC in receiver operating characteristic curve analyses. Combining madSD, maxT2 and late gadolinium enhancement (LGE) in a model resulted in a superior diagnostic performance compared to LLC (sensitivity 93%, specificity 83%). The results show that the novel T2-mapping-derived parameters exhibit an additional diagnostic value over LGE, with the inherent potential to overcome the current limitations of T2-mapping. (orig.)

  6. Model-based performance and energy analyses of reverse osmosis to reuse wastewater in a PVC production site.

    Science.gov (United States)

    Hu, Kang; Fiedler, Thorsten; Blanco, Laura; Geissen, Sven-Uwe; Zander, Simon; Prieto, David; Blanco, Angeles; Negro, Carlos; Swinnen, Nathalie

    2017-11-10

    A pilot-scale reverse osmosis (RO) unit downstream of a membrane bioreactor (MBR) was developed for desalination to reuse wastewater at a PVC production site. The solution-diffusion-film model (SDFM), based on the solution-diffusion model (SDM) and film theory, was proposed to describe rejections of electrolyte mixtures in the MBR effluent, which consists of dominant ions (Na+ and Cl-) and several trace ions (Ca2+, Mg2+, K+ and SO42-). A universal global optimisation method was used to estimate the ion permeability coefficients (B) and mass transfer coefficients (K) in the SDFM. The membrane performance was then evaluated based on the estimated parameters, which demonstrated that the theoretical simulations were in line with the experimental results for the dominant ions. Moreover, an energy analysis model accounting for the limitation imposed by the thermodynamic restriction was proposed to analyse the specific energy consumption of the pilot-scale RO system in various scenarios.
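
    A minimal sketch of the SDFM rejection expression may help here: combining the solution-diffusion solute flux Js = B(cm - cp) with film-theory concentration polarization yields an observed rejection Ro = Jv / (Jv + B exp(Jv/K)). The values of B, K and Jv below are assumed for illustration, not the fitted pilot-plant parameters.

```python
import numpy as np

def observed_rejection(Jv, B, K):
    """Observed rejection Ro for water flux Jv, from the solution-diffusion-film
    model: Ro = Jv / (Jv + B * exp(Jv / K))."""
    return Jv / (Jv + B * np.exp(Jv / K))

Jv = np.linspace(1e-6, 2e-5, 5)   # water fluxes (m/s), illustrative
B, K = 2e-7, 3e-5                 # illustrative permeability and mass-transfer
print(observed_rejection(Jv, B, K))   # rejection rises with flux, falls with
                                      # polarization (the exp(Jv/K) term)
```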

  7. Developmental trajectories of paediatric headache - sex-specific analyses and predictors.

    Science.gov (United States)

    Isensee, Corinna; Fernandez Castelao, Carolin; Kröner-Herwig, Birgit

    2016-01-01

    Headache is the most common pain disorder in children and adolescents and is associated with diverse dysfunctions and psychological symptoms. Several studies have evidenced sex-specific differences in headache frequency. Until now, no study has examined sex-specific patterns of change in paediatric headache across time or included pain-related somatic and (socio-)psychological predictors. Latent Class Growth Analysis (LCGA) was used in order to identify different trajectory classes of headache across four annual time points in a population-based sample (n = 3227; mean age 11.34 years; 51.2% girls). In multinomial logistic regression analyses, the influence of several predictors on class membership was examined. For girls, a four-class model was identified as the best-fitting model. While the majority of girls reported no (30.5%) or moderate headache frequencies (32.5%) across time, one class with a high level of headache days (20.8%) and a class with an increasing headache frequency across time (16.2%) were identified. For boys, a two-class model with a 'no headache class' (48.6%) and a 'moderate headache class' (51.4%) showed the best model fit. Regarding the logistic regression analyses, migraine and parental headache proved to be stable predictors across sexes. Depression/anxiety was a significant predictor for all pain classes in girls. Life events, dysfunctional stress coping and school burden were also able to differentiate at least between some classes in both sexes. The identified trajectories reflect sex-specific differences in paediatric headache, as seen in the number and type of classes extracted. The documented risk factors can deliver ideas for preventive actions and considerations for treatment programmes.

  8. Multi-year objective analyses of warm season ground-level ozone and PM2.5 over North America using real-time observations and Canadian operational air quality models

    Science.gov (United States)

    Robichaud, A.; Ménard, R.

    2014-02-01

    Multi-year objective analyses (OA) at high spatiotemporal resolution for the warm season period (1 May to 31 October) for ground-level ozone and for fine particulate matter (diameter less than 2.5 microns; PM2.5) are presented. The OA used in this study combines model outputs from the Canadian air quality forecast suite with US and Canadian observations from various air quality surface monitoring networks. The analyses are based on an optimal interpolation (OI) with capabilities for adaptive error statistics for ozone and PM2.5 and an explicit bias correction scheme for the PM2.5 analyses. The estimation of error statistics has been computed using a modified version of the Hollingsworth-Lönnberg (H-L) method. The error statistics are "tuned" using a χ² (chi-square) diagnostic, a semi-empirical procedure that provides significantly better verification than without tuning. Successful cross-validation experiments were performed with an OA setup using 90% of the data observations to build the objective analyses, with the remainder left out as an independent set of data for verification purposes. Furthermore, comparisons with other external sources of information (global models and PM2.5 satellite surface-derived or ground-based measurements) show reasonable agreement. The multi-year analyses obtained provide relatively high precision, with an absolute yearly averaged systematic error of less than 0.6 ppbv (parts per billion by volume) and 0.7 μg m⁻³ (micrograms per cubic meter) for ozone and PM2.5, respectively, and a random error generally less than 9 ppbv for ozone and under 12 μg m⁻³ for PM2.5. This paper focuses on two applications: (1) presenting long-term averages of OA and analysis increments as a form of summer climatology; and (2) analyzing long-term (decadal) trends and inter-annual fluctuations using OA outputs. The results show that high percentiles of ozone and PM2.5 were both following a general decreasing trend in North America, with the eastern
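
    The analysis step behind such objective analyses is the standard optimal-interpolation update x_a = x_b + B H^T (H B H^T + R)^(-1) (y - H x_b). The sketch below applies it to a toy 1D ozone field; the grid, covariances and observations are all assumptions, not the operational configuration.

```python
import numpy as np

# Toy optimal-interpolation (OI) update on a 1D "ozone" field.
n, p = 50, 5
x_b = np.full(n, 30.0)                         # background field (ppbv)
grid = np.linspace(0.0, 1.0, n)

L = 0.1                                        # background error corr. length
dist = np.abs(grid[:, None] - grid[None, :])
B = 4.0 * np.exp(-dist / L)                    # background error covariance
R = 1.0 * np.eye(p)                            # observation error covariance

obs_idx = np.linspace(5, n - 6, p).astype(int) # observation locations
H = np.zeros((p, n))
H[np.arange(p), obs_idx] = 1.0                 # point-sampling operator
y = x_b[obs_idx] + np.array([3.0, -2.0, 1.5, 0.5, -1.0])  # synthetic obs

K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # gain matrix
x_a = x_b + K @ (y - H @ x_b)                  # analysis field
print(x_a[obs_idx])                            # drawn toward the observations
```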

  9. Advanced Behavioral Analyses Show that the Presence of Food Causes Subtle Changes in C. elegans Movement

    OpenAIRE

    Angstman, Nicholas B.; Frank, Hans-Georg; Schmitz, Christoph

    2016-01-01

    As a widely used and studied model organism, Caenorhabditis elegans worms offer the ability to investigate implications of behavioral change. Although, investigation of C. elegans behavioral traits has been shown, analysis is often narrowed down to measurements based off a single point, and thus cannot pick up on subtle behavioral and morphological changes. In the present study videos were captured of four different C. elegans strains grown in liquid cultures and transferred to NGM-agar plate...

  10. On the Structure of Personality Disorder Traits: Conjoint Analyses of the CAT-PD, PID-5, and NEO-PI-3 Trait Models

    Science.gov (United States)

    Wright, Aidan G.C.; Simms, Leonard J.

    2014-01-01

    The current study examines the relations among contemporary models of pathological and normal-range personality traits. Specifically, we report on (a) conjoint exploratory factor analyses of the Computerized Adaptive Test of Personality Disorder static form (CAT-PD-SF) with the Personality Inventory for the DSM-5 (PID-5; Krueger et al., 2012) and the NEO Personality Inventory-3 First Half (NEO-PI-3FH; McCrae & Costa, 2007), and (b) unfolding hierarchical analyses of the three measures in a large general psychiatric outpatient sample (N = 628; 64% female). A five-factor solution provided conceptually coherent alignment among the CAT-PD-SF, PID-5, and NEO-PI-3FH scales. Hierarchical solutions suggested that higher-order factors bear strong resemblance to dimensions that emerge from structural models of psychopathology (e.g., Internalizing and Externalizing spectra). These results demonstrate that the CAT-PD-SF adheres to the consensual structure of broad trait domains at the five-factor level. Additionally, patterns of scale loadings further inform questions of structure and bipolarity of facet- and domain-level constructs. Finally, hierarchical analyses strengthen the argument for using broad dimensions that span normative and pathological functioning to scaffold a quantitatively derived phenotypic structure of psychopathology to orient future research on explanatory, etiological, and maintenance mechanisms. PMID:24588061

  11. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    Science.gov (United States)

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include commonly used models in undergraduate statistics courses such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most

  12. Using FOSM-Based Data Worth Analyses to Design Geophysical Surveys to Reduce Uncertainty in a Regional Groundwater Model Update

    Science.gov (United States)

    Smith, B. D.; White, J.; Kress, W. H.; Clark, B. R.; Barlow, J.

    2016-12-01

    Hydrogeophysical surveys have become an integral part of understanding the hydrogeological frameworks used in groundwater models. Regional models cover a large area where water well data are, at best, scattered and irregular. Since budgets are finite, priorities must be assigned to select optimal areas for geophysical surveys. For airborne electromagnetic (AEM) geophysical surveys, optimization of mapping depth and line spacing needs to take into account the objectives of the groundwater models. The approach discussed here uses a first-order, second-moment (FOSM) uncertainty analysis, which assumes an approximately linear relation between model parameters and observations. This assumption allows FOSM analyses to be applied to estimate the value of increased parameter knowledge in reducing forecast uncertainty. FOSM is used to facilitate optimization of yet-to-be-completed geophysical surveying to reduce model forecast uncertainty. The main objective of the geophysical surveying is assumed to be estimating values and spatial variation in hydrologic parameters (i.e. hydraulic conductivity) as well as mapping lower-permeability layers that influence the spatial distribution of recharge flux. The proposed data worth analysis was applied to the Mississippi Embayment Regional Aquifer Study (MERAS), which is being updated. The objective of MERAS is to assess the ground-water availability (status and trends) of the Mississippi embayment aquifer system. The study area covers portions of eight states including Alabama, Arkansas, Illinois, Kentucky, Louisiana, Mississippi, Missouri, and Tennessee. The active model grid covers approximately 70,000 square miles, and incorporates some 6,000 miles of major rivers and over 100,000 water wells. In the FOSM analysis, a dense network of pilot points was used to capture uncertainty in hydraulic conductivity and recharge. To simulate the effect of AEM flight lines, the prior uncertainty for hydraulic conductivity and recharge pilots along potential flight lines was
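
    Under the FOSM linear assumption, the worth of a candidate survey can be scored as the drop in forecast variance when its observations are added: the parameter covariance shrinks to C' = C - C J^T (J C J^T + R)^(-1) J C, and the forecast variance is s^T C s for a forecast sensitivity vector s. The toy matrices below are illustrative, not the MERAS model.

```python
import numpy as np

def fosm_posterior_cov(C, J, R):
    """Conditioned parameter covariance after adding observations with
    Jacobian J and noise covariance R (Schur-complement update)."""
    S = J @ C @ J.T + R
    return C - C @ J.T @ np.linalg.solve(S, J @ C)

n_par = 4
C = np.eye(n_par)                       # prior parameter covariance (toy)
s = np.array([1.0, 0.5, 0.2, 0.1])      # forecast sensitivities (toy)
var_prior = s @ C @ s

# Candidate AEM "flight line": informs parameters 0 and 1 only
J = np.array([[1.0, 0.8, 0.0, 0.0]])
R = np.array([[0.1]])
var_post = s @ fosm_posterior_cov(C, J, R) @ s

print(f"forecast variance {var_prior:.3f} -> {var_post:.3f} "
      f"({100 * (1 - var_post / var_prior):.0f}% reduction = data worth)")
```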

  13. Biofuel market and carbon modeling to analyse French biofuel policy

    International Nuclear Information System (INIS)

    Bernard, F.; Prieur, A.

    2007-01-01

    In order to comply with European Union objectives, France has set up an ambitious biofuel plan. This plan is evaluated on the basis of two criteria: tax exemption on fossil fuels and greenhouse gases (GHG) emission savings. An economic marginal analysis and a life cycle assessment (LCA) are provided using a coupling procedure between a partial agro-industrial equilibrium model and an oil refining optimization model. Thus, we determine the minimum tax exemption needed to place on the market a targeted quantity of biofuel by deducting the biofuel long-run marginal revenue of refiners from the agro-industrial marginal cost of biofuel production. With a clear view of the refiner's economic choices, total pollutant emissions along the biofuel production chains are quantified and used to feed an LCA. The French biofuel plan is evaluated for 2008, 2010 and 2012 using prospective scenarios. Results suggest that biofuel competitiveness depends on crude oil prices and demand for petroleum products and consequently these parameters should be taken into account by authorities to modulate biofuel tax exemption. LCA results show that biofuel production and use, from 'seed to wheel', would facilitate the French Government's compliance with its 'Plan Climat' objectives by reducing up to 5% GHG emissions in the French road transport sector by 2010

  14. Data Assimilation Tools for CO2 Reservoir Model Development – A Review of Key Data Types, Analyses, and Selected Software

    Energy Technology Data Exchange (ETDEWEB)

    Rockhold, Mark L.; Sullivan, E. C.; Murray, Christopher J.; Last, George V.; Black, Gary D.

    2009-09-30

    Pacific Northwest National Laboratory (PNNL) has embarked on an initiative to develop world-class capabilities for performing experimental and computational analyses associated with geologic sequestration of carbon dioxide. The ultimate goal of this initiative is to provide science-based solutions for helping to mitigate the adverse effects of greenhouse gas emissions. This Laboratory-Directed Research and Development (LDRD) initiative currently has two primary focus areas—advanced experimental methods and computational analysis. The experimental methods focus area involves the development of new experimental capabilities, supported in part by the U.S. Department of Energy’s (DOE) Environmental Molecular Science Laboratory (EMSL) housed at PNNL, for quantifying mineral reaction kinetics with CO2 under high temperature and pressure (supercritical) conditions. The computational analysis focus area involves numerical simulation of coupled, multi-scale processes associated with CO2 sequestration in geologic media, and the development of software to facilitate building and parameterizing conceptual and numerical models of subsurface reservoirs that represent geologic repositories for injected CO2. This report describes work in support of the computational analysis focus area. The computational analysis focus area currently consists of several collaborative research projects. These are all geared towards the development and application of conceptual and numerical models for geologic sequestration of CO2. The software being developed for this focus area is referred to as the Geologic Sequestration Software Suite or GS3. A wiki-based software framework is being developed to support GS3. This report summarizes work performed in FY09 on one of the LDRD projects in the computational analysis focus area. The title of this project is Data Assimilation Tools for CO2 Reservoir Model Development. Some key objectives of this project in FY09 were to assess the current state

  15. Assessing models of speciation under different biogeographic scenarios; An empirical study using multi-locus and RNA-seq analyses

    Science.gov (United States)

    Edwards, Taylor; Tollis, Marc; Hsieh, PingHsun; Gutenkunst, Ryan N.; Liu, Zhen; Kusumi, Kenro; Culver, Melanie; Murphy, Robert W.

    2016-01-01

    Evolutionary biology often seeks to decipher the drivers of speciation, and much debate persists over the relative importance of isolation and gene flow in the formation of new species. Genetic studies of closely related species can assess if gene flow was present during speciation, because signatures of past introgression often persist in the genome. We test hypotheses on which mechanisms of speciation drove diversity among three distinct lineages of desert tortoise in the genus Gopherus. These lineages offer a powerful system to study speciation, because different biogeographic patterns (physical vs. ecological segregation) are observed at opposing ends of their distributions. We use 82 samples collected from 38 sites, representing the entire species' distribution and generate sequence data for mtDNA and four nuclear loci. A multilocus phylogenetic analysis in *BEAST estimates the species tree. RNA-seq data yield 20,126 synonymous variants from 7665 contigs from two individuals of each of the three lineages. Analyses of these data using the demographic inference package ∂a∂i serve to test the null hypothesis of no gene flow during divergence. The best-fit demographic model for the three taxa is concordant with the *BEAST species tree, and the ∂a∂i analysis does not indicate gene flow among any of the three lineages during their divergence. These analyses suggest that divergence among the lineages occurred in the absence of gene flow and in this scenario the genetic signature of ecological isolation (parapatric model) cannot be differentiated from geographic isolation (allopatric model).

  16. Using species abundance distribution models and diversity indices for biogeographical analyses

    Science.gov (United States)

    Fattorini, Simone; Rigal, François; Cardoso, Pedro; Borges, Paulo A. V.

    2016-01-01

    We examine whether Species Abundance Distribution models (SADs) and diversity indices can describe how species colonization status influences species community assembly on oceanic islands. Our hypothesis is that, because of the lack of source-sink dynamics at the archipelago scale, Single Island Endemics (SIEs), i.e. endemic species restricted to only one island, should be represented by few rare species and consequently have abundance patterns that differ from those of more widespread species. To test our hypothesis, we used arthropod data from the Azorean archipelago (North Atlantic). We divided the species into three colonization categories: SIEs, archipelagic endemics (AZEs, present in at least two islands) and native non-endemics (NATs). For each category, we modelled rank-abundance plots using both the geometric series and the Gambin model, a measure of distributional amplitude. We also calculated Shannon entropy and Buzas and Gibson's evenness. We show that the slopes of the regression lines modelling SADs were significantly steeper for SIEs, which indicates a relative predominance of a few highly abundant species and a lack of rare species, which also depresses diversity indices. This may be a consequence of two factors: (i) some forest specialist SIEs may be at an advantage over other, less adapted species; (ii) the entire populations of SIEs are by definition concentrated on a single island, without the possibility of inter-island source-sink dynamics; hence all populations must have a minimum number of individuals to survive natural, often unpredictable, fluctuations. These findings are supported by higher values of the α parameter of the Gambin model for SIEs. In contrast, AZEs and NATs had lower regression slopes, lower α but higher diversity indices, resulting from their widespread distribution over several islands. We conclude that these differences in the SAD models and diversity indices demonstrate that the study of these metrics is useful for
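
    The metrics compared here are straightforward to compute from a rank-abundance vector. The sketch below, on invented abundances rather than the Azorean arthropod data, computes the geometric-series slope, Shannon entropy, and Buzas and Gibson's evenness (e^H / S).

    ```python
    import numpy as np

    # Illustrative abundance vector for one colonization category.
    abund = np.array([120, 60, 30, 15, 8, 4, 2, 1], dtype=float)
    abund = np.sort(abund)[::-1]
    rank = np.arange(1, abund.size + 1)

    # A geometric series is linear in log(abundance) vs rank; a steeper
    # (more negative) slope means stronger dominance by a few species.
    slope = np.polyfit(rank, np.log(abund), 1)[0]

    p = abund / abund.sum()
    H = -np.sum(p * np.log(p))          # Shannon entropy
    E = np.exp(H) / abund.size          # Buzas-Gibson evenness

    print(f"slope={slope:.3f}  H={H:.3f}  evenness={E:.3f}")
    ```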

  17. Evaluation of Uncertainties in hydrogeological modeling and groundwater flow analyses. Model calibration

    International Nuclear Information System (INIS)

    Ijiri, Yuji; Ono, Makoto; Sugihara, Yutaka; Shimo, Michito; Yamamoto, Hajime; Fumimura, Kenichi

    2003-03-01

    This study evaluates uncertainty in hydrogeological modelling and groundwater flow analysis. Three-dimensional groundwater flow at the Shobasama site in Tono was analysed using two continuum models and one discontinuum model. The study domain covered an area four kilometres east-west by six kilometres north-south. In addition, to evaluate how the uncertainties in the hydrogeological structure model and in the groundwater flow simulation results decreased as the investigations progressed, the models built with the various hydrogeological structure modelling and groundwater flow analysis techniques were updated and calibrated on the basis of newly acquired information and knowledge. The findings are as follows. When parameters and structures were reset during the model updates under the same conditions as the previous year, no large differences arose between the modelling methods. Model calibration was performed by matching numerical simulations to observations of the pressure response caused by the opening and closing of a packer in the MIU-2 borehole. Each analysis technique reduced the residual sum of squares between observations and simulation results by adjusting hydrogeological parameters; however, the models adjusted different parameters, such as hydraulic conductivity, effective porosity, specific storage, and anisotropy. When calibrating models, it is sometimes impossible to explain the phenomena by adjusting parameters alone; in such cases, further investigation may be required to clarify the details of the hydrogeological structure. Comparing the research from its beginning to this year yields the following conclusions about the investigation. (1) Transient hydraulic data are an effective means of reducing the uncertainty of the hydrogeological structure. (2) Effective porosity for calculating pore water velocity of

  18. A miniaturised, nested-cylindrical electrostatic analyser geometry for dual electron and ion, multi-energy measurements

    Energy Technology Data Exchange (ETDEWEB)

    Bedington, Robert, E-mail: r.bedington@nus.edu.sg; Kataria, Dhiren; Smith, Alan

    2015-09-01

    The CATS (Cylindrical And Tiny Spectrometer) electrostatic optics geometry features multiple nested cylindrical analysers to measure multiple electron energies and multiple ion energies simultaneously, in a configuration targeted at miniaturisation and MEMS fabrication. In the prototype model, two configurations of cylindrical analyser were used, featuring terminating side-plates that caused particle trajectories to either converge (C type) or diverge (D type) in the axial direction. Simulations show how these different electrode configurations affect the particle focussing and instrument parameters: C-type provides greater throughputs but D-type provides higher resolving powers. The simulations were additionally used to investigate unexpected plate spacing variations in the as-built model, revealing that the k-factors are most sensitive to the width of the inter-electrode spacing at its narrowest point. - Highlights: • A new nested cylindrical miniaturised electrostatic analyser geometry is described. • “Converging” (C) and “diverging” (D) type channel properties are investigated. • C channels are shown to have greater throughputs and D greater resolving powers. • Plate factors are shown to be sensitive to the minimum in inter-electrode spacing.

  19. Comparison between Duncan and Chang’s EB Model and the Generalized Plasticity Model in the Analysis of a High Earth-Rockfill Dam

    Directory of Open Access Journals (Sweden)

    Weixin Dong

    2013-01-01

    Full Text Available Nonlinear elastic models and elastoplastic models are two main kinds of constitutive models of soil, widely used in the numerical analysis of soil structures. In this study, Duncan and Chang's EB model and the generalized plasticity model proposed by Pastor, Zienkiewicz, and Chan (PZ-III) were discussed and applied to describe the stress-strain relationship of rockfill materials. The two models were validated using the results of triaxial shear tests under different confining pressures. The comparisons between the model fittings and the test data showed that the modified generalized plasticity model is capable of simulating the mechanical behaviour of rockfill materials. The modified generalized plasticity model was implemented into a finite element code to carry out static analyses of a high earth-rockfill dam in China. Nonlinear elastic analyses were also performed with Duncan and Chang's EB model in the same program framework. The comparisons of the FEM results and in situ monitoring data showed that the modified PZ-III model gives a better description of the deformation of the earth-rockfill dam than Duncan and Chang's EB model.
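
    For context, the nonlinear elastic side of the comparison is compact enough to state directly: in the Duncan-Chang formulation the tangent modulus is E_t = E_i (1 - R_f SL)^2, with initial modulus E_i = K p_a (sigma_3 / p_a)^n and the stress level SL taken relative to the Mohr-Coulomb failure deviator. A sketch with illustrative parameter values (not the paper's fitted constants):

    ```python
    import numpy as np

    # Duncan-Chang hyperbolic tangent modulus; parameters are assumed,
    # roughly in the range of a rockfill-like material.
    pa = 101.325e3                        # atmospheric pressure [Pa]
    K, n_exp, Rf = 1000.0, 0.45, 0.8      # modulus number, exponent, Rf
    c, phi = 0.0, np.radians(42.0)        # cohesion [Pa], friction angle

    def tangent_modulus(sigma3, q):
        """E_t at confining stress sigma3 and deviator stress q = s1 - s3."""
        Ei = K * pa * (sigma3 / pa) ** n_exp                  # initial modulus
        q_f = (2 * c * np.cos(phi) + 2 * sigma3 * np.sin(phi)) / (1 - np.sin(phi))
        SL = q / q_f                                          # stress level
        return Ei * (1 - Rf * SL) ** 2

    print(f"E_t = {tangent_modulus(300e3, 400e3) / 1e6:.0f} MPa")
    ```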

  20. A model for analysing factors which may influence quality management procedures in higher education

    Directory of Open Access Journals (Sweden)

    Cătălin MAICAN

    2015-12-01

    Full Text Available In all universities, the Office for Quality Assurance defines the procedure for assessing the performance of the teaching staff, with a view to establishing students' perception of the teachers' activity in terms of the quality of the teaching process, the relationship with the students and the assistance provided for learning. The present paper aims at creating a combined evaluation model based on data mining and statistical methods: starting from the findings revealed by teachers' evaluations of students, and using cluster analysis and discriminant analysis, we identified the subjects which produced significant differences between students' grades; these subjects were subsequently evaluated by the students themselves. The results of these analyses allowed the formulation of measures for enhancing the quality of the evaluation process.
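
    A minimal sketch of the cluster-then-discriminate idea, on synthetic grade data and with scikit-learn standing in for the data mining toolkit (the paper does not name its software):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # Synthetic grade matrix: rows are students, columns are subjects.
    rng = np.random.default_rng(6)
    grades = np.clip(rng.normal(7.5, 1.5, (200, 8)), 1, 10)
    grades[:100, 2] -= 2.0   # subject 2 differs for half of the cohort

    # Step 1: cluster students by their grade profiles.
    clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(grades)

    # Step 2: discriminant analysis; large absolute coefficients flag the
    # subjects that drive the separation and would be sent for evaluation.
    lda = LinearDiscriminantAnalysis().fit(grades, clusters)
    print(np.round(lda.coef_, 2))
    ```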

  1. First-born siblings show better second language skills than later born siblings

    Science.gov (United States)

    Keller, Karin; Troesch, Larissa M.; Grob, Alexander

    2015-01-01

    We examined the extent to which three sibling structure variables (number of siblings, birth order, and presence of an older sibling at school age) are linked to the second language skills of bilingual children. The research questions were tested using an ethnically heterogeneous sample of 1209 bilingual children with German as a second language. Controlling for children’s age, sex, nationality, number of children’s books at home, family language and parental German language skills, hierarchical regression analyses showed an inverse relationship between the number of siblings and second language skills: the more siblings a child had, the lower was his/her second language proficiency. This relationship was mediated by attendance in early education institutions. Moreover, first-born siblings showed better second language skills than later born siblings. The current study revealed that the resource dilution model, i.e., the decrease in resources for every additional sibling, holds for second language acquisition. Moreover, the results indicate that bilingual children from families with several children benefit from access to early education institutions. PMID:26089806
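
    The mediation finding can be illustrated with the classical two-regression comparison: the siblings coefficient shrinks once the mediator (early education attendance) enters the model. A sketch on simulated data, with all effect sizes invented:

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # Simulated data: siblings -> attendance (mediator) -> language skills.
    rng = np.random.default_rng(7)
    n = 1209
    siblings = rng.poisson(1.5, n)
    attend = (rng.random(n) < 1 / (1 + 0.4 * siblings)).astype(int)
    skills = 50 + 5 * attend - 0.5 * siblings + rng.normal(0, 5, n)
    df = pd.DataFrame({"siblings": siblings, "attend": attend, "skills": skills})

    total = smf.ols("skills ~ siblings", df).fit()            # total effect
    direct = smf.ols("skills ~ siblings + attend", df).fit()  # with mediator
    print("total effect: ", round(total.params["siblings"], 3))
    print("direct effect:", round(direct.params["siblings"], 3))
    # Attenuation of the siblings coefficient once attendance is added is
    # the classical signature of (partial) mediation.
    ```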

  2. First-born siblings show better second language skills than later born siblings

    Directory of Open Access Journals (Sweden)

    Karin eKeller

    2015-06-01

    Full Text Available We examined the extent to which three sibling structure variables (number of siblings, birth order and presence of an older sibling at school age) are linked to the second language skills of bilingual children. The research questions were tested using an ethnically heterogeneous sample of 1209 bilingual children with German as a second language. Controlling for children’s age, sex, nationality, number of children’s books at home, family language and parental German language skills, hierarchical regression analyses showed an inverse relationship between the number of siblings and second language skills: the more siblings a child had, the lower was his/her second language proficiency. This relationship was mediated by attendance in early education institutions. Moreover, first-born siblings showed better second language skills than later born siblings. The current study revealed that the resource dilution model, i.e., the decrease in resources for every additional sibling, holds for second language acquisition. Moreover, the results indicate that bilingual children from families with several children benefit from access to early education institutions.

  3. Genetic analyses of partial egg production in Japanese quail using multi-trait random regression models.

    Science.gov (United States)

    Karami, K; Zerehdaran, S; Barzanooni, B; Lotfi, E

    2017-12-01

    1. The aim of the present study was to estimate genetic parameters for average egg weight (EW) and egg number (EN) at different ages in Japanese quail using multi-trait random regression (MTRR) models. 2. A total of 8534 records from 900 quail, hatched between 2014 and 2015, were used in the study. Average weekly egg weights and egg numbers were measured from the second until the sixth week of egg production. 3. Nine random regression models were compared to identify the best order of the Legendre polynomials (LP). The optimal model was identified by the Bayesian Information Criterion. A model with second-order LP for fixed effects, second-order LP for additive genetic effects and third-order LP for permanent environmental effects (MTRR23) was found to be the best. 4. According to the MTRR23 model, direct heritability for EW increased from 0.26 in the second week to 0.53 in the sixth week of egg production, whereas the ratio of permanent environmental to phenotypic variance decreased from 0.48 to 0.1. Direct heritability for EN was low, whereas the ratio of permanent environmental to phenotypic variance decreased from 0.57 to 0.15 during the production period. 5. For each trait, estimated genetic correlations among weeks of egg production were high (from 0.85 to 0.98). Genetic correlations between EW and EN were low and negative for the first two weeks, but low and positive for the rest of the egg production period. 6. In conclusion, random regression models can be used effectively for analysing egg production traits in Japanese quail. Response to selection for increased egg weight would be higher at older ages because of the higher heritability, and such a breeding program would have no negative genetic impact on egg production.
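
    A random regression model evaluates each animal's curve on a Legendre polynomial basis over the standardized age range. The sketch below builds the order-2 basis for weeks 2-6, the order chosen for the fixed and additive genetic parts of MTRR23; the mixed-model machinery itself is omitted.

    ```python
    import numpy as np

    # Legendre covariates for weeks 2-6 of egg production.
    weeks = np.arange(2, 7, dtype=float)
    x = 2 * (weeks - weeks.min()) / (weeks.max() - weeks.min()) - 1  # to [-1, 1]

    # Columns: Legendre polynomials of order 0..2.
    Phi = np.polynomial.legendre.legvander(x, 2)
    print(np.round(Phi, 3))
    # In the mixed model, each animal's genetic curve is Phi @ u_i, where
    # u_i is that animal's vector of random regression coefficients.
    ```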

  4. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results: (1) confirmed, in a general way, the procedures for application to pulsed burning, (2) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur, and (3) indicated that steam can terminate continuous burning. Future actions recommended include: (1) modification of the code to perform continuous-burn analyses, which is demonstrated, (2) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (3) changes to the models for estimating burn parameters

  5. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results (a) confirmed, in a general way, the procedures for application to pulsed burning, (b) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur and (c) indicated that steam can terminate continuous burning. Future actions recommended include (a) modification of the code to perform continuous-burn analyses, which is demonstrated, (b) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (c) changes to the models for estimating burn parameters

  6. Development of Super-Ensemble techniques for ocean analyses: the Mediterranean Sea case

    Science.gov (United States)

    Pistoia, Jenny; Pinardi, Nadia; Oddo, Paolo; Collins, Matthew; Korres, Gerasimos; Drillet, Yann

    2017-04-01

    Short-term ocean analyses for Sea Surface Temperature (SST) in the Mediterranean Sea can be improved by a statistical post-processing technique called super-ensemble. This technique consists of a multi-linear regression algorithm applied to a Multi-Physics Multi-Model Super-Ensemble (MMSE) dataset, a collection of different operational forecasting analyses together with ad-hoc simulations produced by modifying selected numerical model parameterizations. A new linear regression algorithm based on Empirical Orthogonal Function filtering techniques is capable of preventing overfitting problems, although the best performance is achieved when we add correlation to the super-ensemble structure using a simple spatial filter applied after the linear regression. Our outcomes show that super-ensemble performance depends on the selection of an unbiased operator and the length of the learning period, but the quality of the generating MMSE dataset has the largest impact on the MMSE analysis Root Mean Square Error (RMSE) evaluated with respect to observed satellite SST. Lower RMSE analysis estimates result from the following choices: a 15-day training period, an overconfident MMSE dataset (a subset with the higher-quality ensemble members), and the least squares algorithm being filtered a posteriori.
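
    A minimal sketch of the super-ensemble step, with synthetic members and observations: the ensemble anomalies are EOF-filtered (truncated SVD) and the observations are regressed on the retained principal components. The member count, truncation order and noise levels are assumptions.

    ```python
    import numpy as np

    # X: ensemble of SST analyses, shape (n_points, n_members); y: observations.
    rng = np.random.default_rng(0)
    truth = rng.normal(20.0, 2.0, 500)
    X = truth[:, None] + rng.normal(0.0, 0.5, (500, 5))   # 5 synthetic members
    y = truth + rng.normal(0.0, 0.3, 500)                  # synthetic observations

    # EOF (SVD) filtering of the member anomalies to damp overfitting.
    Xm = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xm, full_matrices=False)
    k = 3                                   # retain the leading EOFs only
    Xf = U[:, :k] * s[:k]                   # filtered predictors (PC scores)

    # Multi-linear regression of observations on the filtered ensemble.
    A = np.column_stack([np.ones(len(y)), Xf])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    print("training RMSE:", np.sqrt(np.mean(resid**2)))
    ```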

  7. Possible future HERA analyses

    International Nuclear Information System (INIS)

    Geiser, Achim

    2015-12-01

    A variety of possible future analyses of HERA data in the context of the HERA data preservation programme is collected, motivated, and commented upon. The focus is placed on possible future analyses of the existing ep collider data and their physics scope. Comparisons to the original scope of the HERA programme are made, and cross references to topics also covered by other participants of the workshop are given. This includes topics on QCD, proton structure, diffraction, jets, hadronic final states, heavy flavours, electroweak physics, and the application of related theory and phenomenology topics like NNLO QCD calculations, low-x related models, nonperturbative QCD aspects, and electroweak radiative corrections. Synergies with other collider programmes are also addressed. In summary, the range of physics topics which can still be uniquely covered using the existing data is very broad and of considerable physics interest, often matching the interest of results from colliders currently in operation. Due to well-established data and MC sets, calibrations, and analysis procedures, the manpower and expertise needed for a particular analysis is often very much smaller than that needed for an ongoing experiment. Since centrally funded manpower to carry out such analyses is no longer available, this contribution targets not only experienced self-funded experimentalists, but also theorists and master-level students who might wish to carry out such an analysis.

  8. Morphological and functional analyses of skeletal muscles from an immunodeficient animal model of limb-girdle muscular dystrophy type 2E.

    Science.gov (United States)

    Giovannelli, Gaia; Giacomazzi, Giorgia; Grosemans, Hanne; Sampaolesi, Maurilio

    2018-02-24

    Limb-girdle muscular dystrophy type 2E (LGMD2E) is caused by mutations in the β-sarcoglycan gene, which is expressed in skeletal, cardiac, and smooth muscles. β-Sarcoglycan-deficient (Sgcb-null) mice develop severe muscular dystrophy and cardiomyopathy with focal areas of necrosis. In this study we performed morphological (histological and cellular characterization) and functional (isometric tetanic force and fatigue) analyses in dystrophic mice. Comparison studies were carried out in 1-month-old (clinical onset of the disease) and 7-month-old control mice (C57Bl/6J, Rag2/γc-null) and immunocompetent and immunodeficient dystrophic mice (Sgcb-null and Sgcb/Rag2/γc-null, respectively). We found that the lack of an immunological system resulted in an increase in calcification in striated muscles without impairing extensor digitorum longus muscle performance. Sgcb/Rag2/γc-null muscles showed a significant reduction of alkaline phosphatase-positive mesoangioblasts. The immunological system counteracts skeletal muscle degeneration in the murine model of LGMD2E. Muscle Nerve, 2018. © 2018 The Authors. Muscle & Nerve Published by Wiley Periodicals, Inc.

  9. Signal Transduction Pathways of TNAP: Molecular Network Analyses.

    Science.gov (United States)

    Négyessy, László; Györffy, Balázs; Hanics, János; Bányai, Mihály; Fonta, Caroline; Bazsó, Fülöp

    2015-01-01

    Despite the growing body of evidence pointing to the involvement of tissue non-specific alkaline phosphatase (TNAP) in brain function and in diseases like epilepsy and Alzheimer's disease, our understanding of the role of TNAP in the regulation of neurotransmission is severely limited. The aim of our study was to integrate the fragmented knowledge into a comprehensive view of the neuronal functions of TNAP using objective tools. As a model we used the signal transduction molecular network of a pyramidal neuron, complemented with TNAP-related data, and performed the analysis using graph theoretic tools. The analyses show that TNAP sits at the crossroad of numerous pathways and is therefore one of the key players of the neuronal signal transduction network. Through many of its connections, most notably with molecules of the purinergic system, TNAP serves as a controller by funnelling signal flow towards a subset of molecules. TNAP also appears as a source of signal to be spread via interactions with molecules involved, among others, in neurodegeneration. Cluster analyses identified TNAP as part of the second messenger signalling cascade. However, TNAP also forms connections with other functional groups involved in neuronal signal transduction. The results indicate the distinct ways TNAP is involved in multiple neuronal functions and diseases.
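
    The graph-theoretic reading of "funnelling" corresponds to high betweenness centrality. A toy sketch with networkx, whose nodes and edges are invented and stand in for the study's pyramidal-neuron network:

    ```python
    import networkx as nx

    # Toy signalling network; purely illustrative nodes and edges.
    G = nx.DiGraph()
    G.add_edges_from([
        ("ATP", "TNAP"), ("TNAP", "adenosine"), ("adenosine", "A1R"),
        ("TNAP", "PLP"), ("PLP", "GABA_synthesis"), ("A1R", "cAMP"),
    ])

    # Betweenness identifies molecules that funnel signal flow.
    print(nx.betweenness_centrality(G))
    # Component/cluster detection groups nodes into functional modules.
    print(list(nx.weakly_connected_components(G)))
    ```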

  10. The variants of an LOD of a 3D building model and their influence on spatial analyses

    Science.gov (United States)

    Biljecki, Filip; Ledoux, Hugo; Stoter, Jantien; Vosselman, George

    2016-06-01

    The level of detail (LOD) of a 3D city model indicates the model's grade and usability. However, there exist multiple valid variants of each LOD. As a consequence, the LOD concept is inconclusive as an instruction for the acquisition of 3D city models. For instance, the top surface of an LOD1 block model may be modelled at the eaves of a building or at its ridge height. Such variants, which we term geometric references, are often overlooked and are usually not documented in the metadata. Furthermore, the influence of a particular geometric reference on the performance of a spatial analysis is not known. In response to this research gap, we investigate a variety of LOD1 and LOD2 geometric references that are commonly employed, and perform numerical experiments to investigate their relative difference when used as input for different spatial analyses. We consider three use cases (estimation of the area of the building envelope, building volume, and shadows cast by buildings), and compute the deviations in a Monte Carlo simulation. The experiments, carried out with procedurally generated models, indicate that two 3D models representing the same building at the same LOD, but modelled according to different geometric references, may yield substantially different results when used in a spatial analysis. The outcome of our experiments also suggests that the geometric reference may have a bigger influence than the LOD, since an LOD1 with a specific geometric reference may yield a more accurate result than when using LOD2 models.
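
    The effect of the geometric reference is easy to reproduce in miniature. Below is a small Monte Carlo sketch, with building dimensions drawn from assumed uniform ranges rather than the paper's procedurally generated models, comparing LOD1 block volumes referenced at eaves versus ridge height against the true gable-roof volume.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000
    w = rng.uniform(6, 12, n)        # footprint width   [m]
    l = rng.uniform(8, 20, n)        # footprint length  [m]
    h_eaves = rng.uniform(3, 7, n)   # eaves height      [m]
    h_roof = rng.uniform(1, 3, n)    # eaves-to-ridge    [m]

    v_true = w * l * h_eaves + 0.5 * w * l * h_roof    # prism + gable wedge
    v_lod1_eaves = w * l * h_eaves                     # block at eaves height
    v_lod1_ridge = w * l * (h_eaves + h_roof)          # block at ridge height

    for name, v in [("eaves", v_lod1_eaves), ("ridge", v_lod1_ridge)]:
        err = (v - v_true) / v_true
        print(f"LOD1 @ {name}: mean volume error {100 * err.mean():+.1f}%")
    ```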

  11. Analyses of GIMMS NDVI Time Series in Kogi State, Nigeria

    Science.gov (United States)

    Palka, Jessica; Wessollek, Christine; Karrasch, Pierre

    2017-10-01

    The value of remote sensing data is particularly evident where areal monitoring is needed to provide information on the development of the earth's surface. Temporally high-resolution time series allow short-term changes to be detected. Different vegetation types can be found in Kogi State in Nigeria. Because the majority of the population in this region lives in rural, crop-farming communities, the existing vegetation is slowly being altered: the expansion of agricultural land causes loss of natural vegetation, especially in the regions close to the rivers, which are suitable for crop production. Against this background, two questions covering different aspects of vegetation development in Kogi State can be addressed: the determination and evaluation of the general development of the vegetation in the study area (trend estimation), and analyses of the short-term behaviour of vegetation conditions, which can provide information about seasonal effects in vegetation development. For this purpose, the GIMMS NDVI data set provided by NOAA supplies the normalized difference vegetation index (NDVI) at a geometric resolution of approx. 8 km; its temporal resolution of 15 days permits the analyses described. For the presented analysis, data for the period 1981-2012 (31 years) were used. The implemented workflow mainly applies methods of time series analysis. The results show that, in addition to the classical seasonal development, artefacts of different vegetation periods (several NDVI maxima) can be found in the data. The trend component of the time series shows a consistently positive development over the entire study area for the full 31-year investigation period. However, the results also show that this development has not been continuous, and a simple linear model of the NDVI increase is possible only to a limited extent. For this reason, the trend modelling was extended by procedures for detecting structural breaks in
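
    The trend-plus-season structure described here can be captured with a simple harmonic regression before any break detection. The sketch below fits a linear trend and annual harmonics to a synthetic 15-day series (not the GIMMS data) by least squares.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(0, 31, 15 / 365.25)              # time in years, 15-day step
    ndvi = (0.45 + 0.002 * t                       # slow positive trend
            + 0.12 * np.sin(2 * np.pi * t)         # seasonal cycle
            + rng.normal(0, 0.03, t.size))         # noise

    # Design matrix: intercept, trend, first two annual harmonics.
    A = np.column_stack([
        np.ones_like(t), t,
        np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
        np.sin(4 * np.pi * t), np.cos(4 * np.pi * t),
    ])
    coef, *_ = np.linalg.lstsq(A, ndvi, rcond=None)
    print(f"estimated NDVI trend: {coef[1]:+.4f} per year")
    ```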

  12. Domain analyses of Usher syndrome causing Clarin-1 and GPR98 protein models.

    Science.gov (United States)

    Khan, Sehrish Haider; Javed, Muhammad Rizwan; Qasim, Muhammad; Shahzadi, Samar; Jalil, Asma; Rehman, Shahid Ur

    2014-01-01

    Usher syndrome is an autosomal recessive disorder that causes hearing loss, retinitis pigmentosa (RP) and vestibular dysfunction. It is a clinically and genetically heterogeneous disorder, clinically divided into three types: type I, type II and type III. To date, about twelve loci and ten identified genes are associated with Usher syndrome. A mutation in any of these genes, e.g. CDH23, CLRN1, GPR98, MYO7A, PCDH15, USH1C, USH1G, USH2A or DFNB31, can result in Usher syndrome or non-syndromic deafness. These genes provide instructions for making proteins that play important roles in normal hearing, balance and vision. Studies have shown that the protein structures of only seven of these genes have been determined experimentally; the structures of three genes, Clarin-1, GPR98 and Usherin, are still unavailable. In the absence of an experimentally determined structure, homology modeling and threading often provide a useful 3D model of a protein. Therefore, in the current study the Clarin-1 and GPR98 proteins were analyzed for signal peptides, domains and motifs. Clarin-1 was found to lack a signal peptide; it consists of a prokar lipoprotein domain, is classified within the claudin 2 superfamily, and contains twelve motifs. GPR98 has a 29 amino acid long signal peptide and is classified within GPCR family 2 of the Concanavalin A-like lectin/glucanase superfamily; it consists of GPS and G-protein receptor F2 domains and twenty-nine motifs. Their 3D structures were predicted using the I-TASSER server. The model of Clarin-1 showed only α-helices and no β-sheets, while the model of GPR98 showed both α-helices and β-sheets. The predicted structures were then evaluated and validated by MolProbity and Ramachandran plots. The evaluation showed 78.9% of Clarin-1 residues and 78.9% of GPR98 residues within favored regions. The findings of the present study have resulted in the

  13. Characteristics of electrostatic solitary waves observed in the plasma sheet boundary: Statistical analyses

    Directory of Open Access Journals (Sweden)

    H. Kojima

    1999-01-01

    Full Text Available We present the characteristics of the Electrostatic Solitary Waves (ESW) observed by the Geotail spacecraft in the plasma sheet boundary layer, based on statistical analyses. We also discuss the results with reference to a model of ESW generation by electron beams proposed on the basis of computer simulations. In this generation model, the nonlinear evolution of Langmuir waves excited by electron bump-on-tail instabilities leads to the formation of isolated electrostatic potential structures corresponding to “electron holes” in phase space. The statistical analyses of the Geotail data, conducted under the assumption that the polarity of the ESW potentials is positive, show that most ESW propagate in the same direction as the electron beams observed simultaneously by the plasma instrument. Further, we find that the ESW potential energy is much smaller than the background electron thermal energy and that the ESW potential widths are typically shorter than 60 local electron Debye lengths when we assume that the ESW potentials travel at the same velocity as the electron beams. These results are very consistent with the ESW generation model in which the nonlinear evolution of the electron bump-on-tail instability leads to the formation of electron holes in phase space.
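
    To put the reported scale in physical units, the electron Debye length follows from lambda_D = sqrt(epsilon_0 k_B T_e / (n_e e^2)). The sketch below evaluates it for assumed plasma-sheet-boundary-layer values; the density and temperature are not taken from the paper.

    ```python
    import numpy as np
    from scipy.constants import epsilon_0, k, e

    # Illustrative plasma parameters (assumed, not from the study).
    n_e = 0.1e6          # electron density [m^-3] (0.1 cm^-3)
    T_e = 1.0e7          # electron temperature [K] (~1 keV)

    lambda_d = np.sqrt(epsilon_0 * k * T_e / (n_e * e**2))
    print(f"Debye length: {lambda_d:.0f} m")
    print(f"60 Debye lengths: {60 * lambda_d / 1e3:.0f} km")
    ```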

  14. Sorption analyses in materials science: selected oxides

    International Nuclear Information System (INIS)

    Fuller, E.L. Jr.; Condon, J.B.; Eager, M.H.; Jones, L.L.

    1981-01-01

    Physical adsorption studies have been shown to be extremely valuable in studying the chemistry and structure of dispersed materials. Many processes rely on access to the large amount of surface made available by the high degree of dispersion. Conversely, there are many applications where consolidation of the dispersed solids is required. Several systems (silica gel, alumina catalysts, mineralogic alumino-silicates, and yttrium oxide plasters) have been studied to show the type and amount of chemical and structural information that can be obtained. Some review of current theories is given, and additional concepts are developed based on statistical and thermodynamic arguments. The results are applied to sorption data to show that detailed sorption analyses are extremely useful and can provide valuable information that is difficult to obtain by any other means. Considerable emphasis has been placed on data analyses and interpretation of a nonclassical nature to show the potential of such studies, which is often neither recognized nor utilized.

  15. Monte Carlo modeling and analyses of YALINA-booster subcritical assembly part 1: analytical models and main neutronics parameters

    International Nuclear Information System (INIS)

    Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division

    2008-01-01

    This study was carried out to model and analyze the YALINA-Booster facility, of the Joint Institute for Power and Nuclear Research of Belarus, with the long term objective of advancing the utilization of accelerator driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions. In the latter two cases a deuteron beam is used to generate the neutrons. This study is a part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity without any geometrical homogenization using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, which is an extension of the MCNP code with burnup capability because of its additional feature for analyzing source driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1

  16. Monte Carlo modeling and analyses of YALINA-booster subcritical assembly part 1: analytical models and main neutronics parameters.

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division

    2008-09-11

    This study was carried out to model and analyze the YALINA-Booster facility, of the Joint Institute for Power and Nuclear Research of Belarus, with the long term objective of advancing the utilization of accelerator driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions. In the latter two cases a deuteron beam is used to generate the neutrons. This study is a part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity without any geometrical homogenization using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, which is an extension of the MCNP code with burnup capability because of its additional feature for analyzing source driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1.

  17. Rubber particle proteins, HbREF and HbSRPP, show different interactions with model membranes.

    Science.gov (United States)

    Berthelot, Karine; Lecomte, Sophie; Estevez, Yannick; Zhendre, Vanessa; Henry, Sarah; Thévenot, Julie; Dufourc, Erick J; Alves, Isabel D; Peruch, Frédéric

    2014-01-01

    The biomembrane surrounding rubber particles from Hevea latex is well known for its content of numerous allergenic proteins. HbREF (Hev b 1) and HbSRPP (Hev b 3) are major components bound to rubber particles, and they have been shown to be involved in rubber synthesis or quality (mass regulation), but their exact function is still to be determined. In this study we highlighted the different modes of interaction of both recombinant proteins with various membrane models (lipid monolayers, liposomes or supported bilayers, and multilamellar vesicles) that mimic the latex particle membrane. We combined various biophysical methods (polarization-modulation infrared reflection-absorption spectroscopy (PM-IRRAS)/ellipsometry, attenuated total reflectance Fourier-transform infrared (ATR-FTIR) spectroscopy, solid-state nuclear magnetic resonance (NMR), plasmon waveguide resonance (PWR), and fluorescence spectroscopy) to elucidate their interactions. Small rubber particle protein (SRPP) shows less affinity than rubber elongation factor (REF) for the membranes but displays a kind of “covering” effect on the lipid headgroups without disturbing membrane integrity. Its structure is conserved in the presence of lipids. In contrast, REF demonstrates higher membrane affinity, with changes in its aggregation properties; the amyloid nature of REF, which we previously reported, is not favored in the presence of lipids. REF binds and inserts into membranes. The membrane integrity is highly perturbed, and we suspect that REF is even able to remove lipids from the membrane, leading to the formation of mixed micelles. These two homologous proteins thus show affinity for all membrane models tested but differ markedly in their interaction features. This could imply differential roles on the surface of rubber particles. © 2013.

  18. Modeling and analyses for an extended car-following model accounting for drivers' situation awareness from cyber physical perspective

    Science.gov (United States)

    Chen, Dong; Sun, Dihua; Zhao, Min; Zhou, Tong; Cheng, Senlin

    2018-07-01

    In fact, the driving process is a typical cyber-physical process that tightly couples the cyber factor of traffic information with the physical components of the vehicles. Meanwhile, drivers exhibit situation awareness during driving, which reflects not only the current traffic states but also extrapolation of their changing trend. In this paper, an extended car-following model is proposed to account for drivers' situation awareness. The stability criterion of the proposed model is derived via linear stability analysis. The results show that the stable region of the proposed model is enlarged on the phase diagram compared with previous models. By employing the reductive perturbation method, the modified Korteweg-de Vries (mKdV) equation is obtained. The kink-antikink soliton of the mKdV equation theoretically reveals the evolution of traffic jams. Numerical simulations are conducted to verify the analytical results. Two typical traffic scenarios are investigated. The simulation results demonstrate that drivers' situation awareness plays a key role in traffic flow oscillations and the congestion transition.
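
    For reference, in the classical optimal-velocity car-following model of Bando et al., uniform flow is linearly stable when the driver sensitivity a exceeds 2 V'(h*), twice the slope of the optimal-velocity function at the equilibrium headway; the extended model in the paper modifies this criterion. The sketch below evaluates only the classical condition, with assumed parameter values.

    ```python
    import numpy as np

    # Classical optimal-velocity (OV) function; illustrative parameters.
    v_max, h_c = 2.0, 4.0
    V = lambda h: (v_max / 2) * (np.tanh(h - h_c) + np.tanh(h_c))

    def stable(a, h_star, dh=1e-6):
        """Linear stability of uniform flow in the OV model: a > 2 V'(h*)."""
        v_prime = (V(h_star + dh) - V(h_star - dh)) / (2 * dh)
        return a > 2 * v_prime

    for h_star in [3.0, 4.0, 5.0]:
        print(f"headway {h_star}: stable = {stable(a=1.5, h_star=h_star)}")
    ```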

  19. HLA region excluded by linkage analyses of early onset periodontitis

    Energy Technology Data Exchange (ETDEWEB)

    Sun, C.; Wang, S.; Lopez, N.

    1994-09-01

    Previous studies suggested that HLA genes may influence susceptibility to early-onset periodontitis (EOP). Segregation analyses indicate that EOP may be due to a single major gene. We conducted linkage analyses to assess possible HLA effects on EOP. Fifty families with two or more close relatives affected by EOP were ascertained in Virginia and Chile. A microsatellite polymorphism within the HLA region (at the tumor necrosis factor beta locus) was typed using PCR. Linkage analyses used a dominant model most strongly supported by previous studies. Assuming locus homogeneity, our results exclude a susceptibility gene within 10 cM on either side of our marker locus. This encompasses all of the HLA region. Analyses assuming alternative models gave qualitatively similar results. Allowing for locus heterogeneity, our data still provide no support for HLA-region involvement. However, our data do not statistically exclude (LOD <-2.0) hypotheses of disease-locus heterogeneity, including models where up to half of our families could contain an EOP disease gene located in the HLA region. This is due to the limited power of even our relatively large collection of families and the inherent difficulties of mapping genes for disorders that have complex and heterogeneous etiologies. Additional statistical analyses, recruitment of families, and typing of flanking DNA markers are planned to address these issues more conclusively with respect to the HLA region and other candidate locations in the human genome. Additional results for markers covering most of the human genome will also be presented.
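
    For readers unfamiliar with exclusion mapping, a two-point LOD score for phase-known meioses illustrates the logic: LOD(theta) = log10[theta^R (1-theta)^(N-R) / 0.5^N], and LOD < -2 at a given theta excludes linkage there. A sketch with purely illustrative counts:

    ```python
    import numpy as np

    # R recombinants out of N informative meioses (invented counts).
    def lod(theta, r, n):
        return (r * np.log10(theta) + (n - r) * np.log10(1 - theta)
                - n * np.log10(0.5))

    r, n = 9, 20
    thetas = np.linspace(0.01, 0.5, 50)
    scores = lod(thetas, r, n)
    print("max LOD:", scores.max().round(2),
          "at theta =", thetas[scores.argmax()].round(2))
    # Exclusion mapping: LOD < -2 at a given theta excludes linkage there.
    print("LOD at theta=0.01:", lod(0.01, r, n).round(2))
    ```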

  20. Modelling and analysing interoperability in service compositions using COSMO

    NARCIS (Netherlands)

    Quartel, Dick; van Sinderen, Marten J.

    2008-01-01

    A service composition process typically involves multiple service models. These models may represent the composite and composed services from distinct perspectives, e.g. to model the role of some system that is involved in a service, and at distinct abstraction levels, e.g. to model the goal,

  1. Fundamental issues in finite element analyses of localization of deformation

    NARCIS (Netherlands)

    Borst, de R.; Sluys, L.J.; Mühlhaus, H.-B.; Pamin, J.

    1993-01-01

    Classical continuum models, i.e. continuum models that do not incorporate an internal length scale, suffer from excessive mesh dependence when strain-softening models are used in numerical analyses and cannot reproduce the size effect commonly observed in quasi-brittle failure. In this contribution

  2. QTL analyses on genotype-specific component traits in a crop simulation model for Capsicum annuum L.

    NARCIS (Netherlands)

    Wubs, A.M.; Heuvelink, E.; Dieleman, J.A.; Magan, J.J.; Palloix, A.; Eeuwijk, van F.A.

    2012-01-01

    Abstract: QTL for a complex trait like yield tend to be unstable across environments and show QTL by environment interaction. Direct improvement of complex traits by selecting on QTL is therefore difficult. For improvement of complex traits, crop growth models can be useful, as such models can

  3. A systematic review and meta-analyses show that carbapenem use and medical devices are the leading risk factors for carbapenem-resistant Pseudomonas aeruginosa

    NARCIS (Netherlands)

    A.F. Voor (Anne); J.A. Severin (Juliëtte); E.M.E.H. Lesaffre (Emmanuel); M.C. Vos (Margreet)

    2014-01-01

    A systematic review and meta-analyses were performed to identify the risk factors associated with carbapenem-resistant Pseudomonas aeruginosa and to identify sources and reservoirs for the pathogen. A systematic search of PubMed and Embase databases from 1 January 1987 until 27 January

  4. CFD analyses of coolant channel flowfields

    Science.gov (United States)

    Yagley, Jennifer A.; Feng, Jinzhang; Merkle, Charles L.

    1993-01-01

    The flowfield characteristics in rocket engine coolant channels are analyzed by means of a numerical model. The channels are characterized by large length-to-diameter ratios, high Reynolds numbers, and asymmetrical heating. At representative flow conditions, the channel length is approximately twice the hydraulic entrance length, so that fully developed conditions would be reached for a constant-property fluid. For the supercritical hydrogen that is used as the coolant, the strong property variations create significant secondary flows in the cross-plane which have a major influence on the flow and the resulting heat transfer. Comparison of constant- and variable-property solutions shows substantial differences. In addition, the property variations prevent fully developed flow. The density variation accelerates the fluid in the channels, increasing the pressure drop without an accompanying increase in heat flux. Analyses of the inlet configuration suggest that side entry from a manifold can affect the development of the velocity profile because of vortices generated as the flow enters the channel. Current work is focused on studying the effects of channel bifurcation on the flow field and the heat transfer characteristics.

  5. Workload analysis of assembling process

    Science.gov (United States)

    Ghenghea, L. D.

    2015-11-01

    Workload is the most important indicator for managers responsible for industrial technological processes, whether these are automated, mechanized or simply manual; in each case, machines or workers will be the focus of workload measurements. The paper presents a workload analysis of a largely manual roller-bearing assembly process carried out in a large company with integrated bearing manufacturing. In this analysis the delay (work) sampling technique was used to identify and divide all the bearing assemblers' activities and to determine how much of the 480-minute working day workers devote to each activity. The study shows some ways to increase process productivity without supplementary investment and also indicates that process automation could be the solution for achieving maximum productivity.
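
    Work (delay) sampling estimates the proportion of time spent on an activity from random observation instants, and the number of observations needed for a given precision follows from the binomial standard error. A small sketch with assumed figures:

    ```python
    import math

    # Estimate the share of the 480-minute day spent on one activity.
    p_hat = 0.30        # observed proportion of snapshots on the activity
    n_obs = 400         # number of random observation instants (assumed)

    se = math.sqrt(p_hat * (1 - p_hat) / n_obs)
    print(f"time share: {p_hat:.0%} +/- {1.96 * se:.1%} (95% CI)")
    print(f"estimated minutes per 480-min day: {480 * p_hat:.0f}")

    # Observations required for +/-3% absolute precision at 95% confidence.
    e = 0.03
    n_req = (1.96**2) * p_hat * (1 - p_hat) / e**2
    print(f"observations required: {math.ceil(n_req)}")
    ```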

  6. Analysing Terrorism from a Systems Thinking Perspective

    Directory of Open Access Journals (Sweden)

    Lukas Schoenenberger

    2014-02-01

    Full Text Available Given the complexity of terrorism, solutions based on single factors are destined to fail. Systems thinking offers various tools for helping researchers and policy makers comprehend terrorism in its entirety. We have developed a semi-quantitative systems thinking approach for characterising relationships between variables critical to terrorism and their impact on the system as a whole. For a better understanding of the mechanisms underlying terrorism, we present a 16-variable model characterising the critical components of terrorism and perform a series of highly focused analyses. We show how to determine which variables are best suited for government intervention, describing in detail their effects on the key variable—the political influence of a terrorist network. We also offer insights into how to elicit variables that destabilise and ultimately break down these networks. Because we clarify our novel approach with fictional data, the primary importance of this paper lies in the new framework for reasoning that it provides.

  7. Competing risk models in reliability systems, a Weibull distribution model with Bayesian analysis approach

    International Nuclear Information System (INIS)

    Iskandar, Ismed; Gondokaryono, Yudi Satria

    2016-01-01

    In reliability theory, the most important problem is to determine the reliability of a complex system from the reliability of its components. The weakness of most reliability theories is that systems are described and explained as simply functioning or failed, whereas in many real situations failures may stem from many causes, depending on the age and environment of the system and its components. Another problem in reliability theory is estimating the parameters of the assumed failure models. The estimation may be based on data collected over censored or uncensored life tests. In many reliability problems the failure data are simply quantitatively inadequate, especially in engineering design and maintenance systems. Bayesian analyses are more beneficial than classical ones in such cases. Bayesian estimation allows us to combine past knowledge or experience, in the form of a prior distribution, with life test data to make inferences about the parameter of interest. In this paper, we investigate the application of Bayesian estimation to competing-risk systems. The cases are limited to models with independent causes of failure, using the Weibull distribution as our model. A simulation is conducted for this distribution with the objectives of verifying the models and the estimators and investigating the performance of the estimators for varying sample sizes. The simulation data are analyzed using Bayesian and maximum likelihood analyses. The simulation results show that a change in the true value of one parameter relative to another changes the standard deviation in the opposite direction. Given perfect information on the prior distribution, the Bayesian estimates are better than those of maximum likelihood. The sensitivity analyses show some sensitivity to shifts of the prior locations. They also show the robustness of the Bayesian analysis within the range
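
    As a concrete illustration of the competing-risk setup with independent causes, the sketch below simulates two Weibull failure modes, observes only the first failure and its cause, and recovers one cause's parameters by maximum likelihood. All parameter values are invented; a Bayesian treatment would add a prior and sample the posterior instead of maximizing.

    ```python
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import weibull_min

    rng = np.random.default_rng(3)
    n = 2000
    t1 = weibull_min.rvs(1.5, scale=10.0, size=n, random_state=rng)  # cause 1
    t2 = weibull_min.rvs(2.5, scale=14.0, size=n, random_state=rng)  # cause 2
    t = np.minimum(t1, t2)
    cause1 = t1 <= t2            # indicator: failure due to cause 1

    def neg_ll(params):
        """Cause-1 likelihood: density where cause 1 fired, survival elsewhere."""
        shape, scale = params
        if shape <= 0 or scale <= 0:
            return np.inf
        logf = weibull_min.logpdf(t[cause1], shape, scale=scale)
        logS = weibull_min.logsf(t[~cause1], shape, scale=scale)
        return -(logf.sum() + logS.sum())

    fit = minimize(neg_ll, x0=[1.0, 5.0], method="Nelder-Mead")
    print("cause-1 MLE (shape, scale):", np.round(fit.x, 2))
    ```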

  8. New insights into survival trend analyses in cancer population-based studies: the SUDCAN methodology.

    Science.gov (United States)

    Uhry, Zoé; Bossard, Nadine; Remontet, Laurent; Iwaz, Jean; Roche, Laurent

    2017-01-01

    The main objective of the SUDCAN study was to compare, for 15 cancer sites, the trends in net survival and excess mortality rates from cancer 5 years after diagnosis between six European Latin countries (Belgium, France, Italy, Portugal, Spain and Switzerland). The data were extracted from the EUROCARE-5 database. The study period ranged from 6 years (Portugal, 2000-2005) to 18 years (Switzerland, 1989-2007). Trend analyses were carried out separately for each country and cancer site; the number of cases ranged from 1500 to 104 000. We developed an original flexible excess rate modelling strategy that accounts for the continuous effects of age, year of diagnosis, time since diagnosis and their interactions. Nineteen models were constructed; they differed in the modelling of the effect of the year of diagnosis in terms of linearity, proportionality and interaction with age. The final model was chosen according to the Akaike Information Criterion. The fit was assessed graphically by comparing model estimates with nonparametric (Pohar-Perme) net survival estimates. In 61 of the 90 analyses carried out, the effect of the year of diagnosis on the excess mortality rate depended on age, and in 64 it was nonproportional; it was nonlinear in 27 of the 75 analyses where this effect was considered. The model fit was overall satisfactory. We successfully analysed 15 cancer sites in six countries. The refined methodology proved necessary for detailed trend analyses. It is hoped that three-dimensional parametric modelling will be used more widely in net survival trend studies, as it has major advantages over stratified analyses.

  9. Analysing harmonic motions with an iPhone’s magnetometer

    Science.gov (United States)

    Yavuz, Ahmet; Kağan Temiz, Burak

    2016-05-01

    In this paper, we propose an experiment for analysing harmonic motion using an iPhone’s (or iPad’s) magnetometer. This experiment consists of the detection of magnetic field variations obtained from an iPhone’s magnetometer sensor. A graph of harmonic motion is directly displayed on the iPhone’s screen using the Sensor Kinetics application. Data from this application was analysed with Eureqa software to establish the equation of the harmonic motion. Analyses show that the use of an iPhone’s magnetometer to analyse harmonic motion is a practical and effective method for small oscillations and frequencies less than 15-20 Hz.
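
    The frequency extraction step can be reproduced with a simple sinusoid fit. The sketch below uses synthetic magnetometer samples in place of data exported from the Sensor Kinetics application.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Synthetic magnetometer signal standing in for exported sensor data.
    rng = np.random.default_rng(4)
    t = np.linspace(0, 5, 500)                        # seconds
    b = 40 + 12 * np.sin(2 * np.pi * 1.2 * t + 0.4)   # microtesla
    b += rng.normal(0, 0.5, t.size)                   # sensor noise

    def model(t, b0, amp, freq, phase):
        return b0 + amp * np.sin(2 * np.pi * freq * t + phase)

    popt, _ = curve_fit(model, t, b, p0=[b.mean(), b.std(), 1.0, 0.0])
    print(f"fitted oscillation frequency: {popt[2]:.2f} Hz")
    ```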

  10. Neutronics Conversion Analyses of the Laue-Langevin Institute (ILL) High Flux Reactor (RHF)

    Energy Technology Data Exchange (ETDEWEB)

    Bergeron, A. [Argonne National Lab. (ANL), Argonne, IL (United States); Dionne, B. [Argonne National Lab. (ANL), Argonne, IL (United States); Calzavara, Y. [Inst. Laue-Langevin (ILL), Grenoble (France)

    2014-09-30

    The following report describes the neutronics results obtained with the MCNP model of the RHF U7Mo LEU reference design that was established in 2010 during the feasibility analysis. This work constitutes a complete and detailed neutronics analysis of that LEU design using models that have been significantly improved since 2010 and the release of the feasibility report. Where possible, the credibility of the neutronics model is tested by comparing the HEU model results with experimental data or with the results of other codes. The results obtained with the LEU model are systematically compared to the HEU model. The changes applied to the neutronics model lead to better agreement with experimental data or improved calculation efficiency, but do not challenge the conclusions of the feasibility analysis. If the U7Mo fuel is commercially available and not cost prohibitive, if a back-end solution is established, and if it is possible to manufacture the proposed element, the neutronics analyses show that the performance of the reactor would not be compromised by the conversion to LEU fuel.

  11. Histidine decarboxylase knockout mice, a genetic model of Tourette syndrome, show repetitive grooming after induced fear.

    Science.gov (United States)

    Xu, Meiyu; Li, Lina; Ohtsu, Hiroshi; Pittenger, Christopher

    2015-05-19

    Tics, such as are seen in Tourette syndrome (TS), are common and can cause profound morbidity, but they are poorly understood. Tics are potentiated by psychostimulants, stress, and sleep deprivation. Mutations in the gene histidine decarboxylase (Hdc) have been implicated as a rare genetic cause of TS, and Hdc knockout mice have been validated as a genetic model that recapitulates phenomenological and pathophysiological aspects of the disorder. Tic-like stereotypies in this model have not been observed at baseline but emerge after acute challenge with the psychostimulant d-amphetamine. We tested the ability of an acute stressor to stimulate stereotypies in this model, using tone fear conditioning. Hdc knockout mice acquired conditioned fear normally, as manifested by freezing during the presentation of a tone 48h after it had been paired with a shock. During the 30min following tone presentation, knockout mice showed increased grooming. Heterozygotes exhibited normal freezing and intermediate grooming. These data validate a new paradigm for the examination of tic-like stereotypies in animals without pharmacological challenge and enhance the face validity of the Hdc knockout mouse as a pathophysiologically grounded model of tic disorders. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  12. A Simple, Reliable Precision Time Analyser

    Energy Technology Data Exchange (ETDEWEB)

    Joshi, B. V.; Nargundkar, V. R.; Subbarao, K.; Kamath, M. S.; Eligar, S. K. [Atomic Energy Establishment Trombay, Bombay (India)

    1966-06-15

    A 30-channel time analyser is described. The time analyser was designed and built for pulsed neutron research but can be applied to other uses. Most of the logic is performed by means of ferrite memory core and transistor switching circuits. This leads to great versatility, low power consumption, extreme reliability and low cost. The analyser described provides channel widths from 10 µs to 10 ms; arbitrarily wider channels are easily obtainable. It can handle counting rates up to 2000 counts/min in each channel with less than 1% dead time loss. There is a provision for an initial delay equal to 100 channel widths. An input pulse derandomizer unit using tunnel diodes ensures exactly equal channel widths. A brief description of the principles involved in core switching circuitry is given. The core-transistor transfer loop is compared with the usual core-diode loops and is shown to be more versatile and better adapted to the making of a time analyser. The circuits derived from the basic loop are described. These include the scale of ten, the frequency dividers and the delay generator. The current drivers developed for driving the cores are described. The crystal-controlled clock which controls the width of the time channels and synchronizes the operation of the various circuits is described. The detector pulse derandomizer unit using tunnel diodes is described. The scheme of the time analyser is then described, showing how the various circuits can be integrated to form a versatile time analyser. (author)

  13. Competition and stability analyses among emissions, energy, and economy: Application for Mexico

    International Nuclear Information System (INIS)

    Pao, Hsiao-Tien; Fu, Hsin-Chia

    2015-01-01

    In view of the limited natural resources on Earth, the linkage among environment, energy, and economy (3Es) becomes an important perspective for sustainable development. This paper proposes the Lotka-Volterra model for SUstainable Development (LV-SUD) to analyse the interspecific interactions, equilibria and their stabilities among emissions, different types of energy consumption (renewable, nuclear, and fossil fuel), and real GDP, the main factors of the 3Es issues. Modelling these interactions provides a useful multivariate framework for predicting outcomes. Interaction between the 3Es, namely competition, symbiosis, or predation, plays an important role in policy development to achieve a balanced use of energy resources and to strengthen the green economy. Applying LV-SUD to Mexico, an emerging-market country, the analysis shows that there is mutualism between fossil fuel consumption and GDP; prey-predator relationships in which fossil fuel and GDP enhance the growth of emissions, but emissions inhibit the growth of the others; and commensalism in which GDP benefits from nuclear power and renewable power benefits from fossil fuel. It is suggested that national energy policies should remain committed to decoupling non-clean energy from GDP, and to actively developing clean energy, thereby properly reducing fossil fuel consumption and emissions without harming economic growth. - Highlights: • LV-SUD is used to analyse the competition between environment-energy-economy (3Es). • The competitions between renewable, nuclear, and fossil energy are analysed. • Competition between 3Es plays an important role in policy development. • LV-SUD provides a useful multivariate framework for prediction outcomes. • An application for emerging markets countries such as Mexico is presented
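
    For orientation, a minimal two-variable Lotka-Volterra competition system of the kind LV-SUD builds on can be integrated as below; the coefficients are illustrative and not fitted to Mexican emissions-energy-GDP data.

    ```python
    import numpy as np
    from scipy.integrate import solve_ivp

    r = np.array([0.30, 0.25])           # intrinsic growth rates
    K = np.array([1.0, 1.0])             # carrying capacities
    a = np.array([[1.0, 0.6],            # a[i][j]: effect of variable j on i;
                  [-0.4, 1.0]])          # a negative entry is predation-like

    def lv(t, x):
        # dx_i/dt = r_i x_i (1 - sum_j a_ij x_j / K_i)
        return r * x * (1 - (a @ x) / K)

    sol = solve_ivp(lv, (0, 100), [0.1, 0.1])
    print("equilibrium estimate:", np.round(sol.y[:, -1], 3))
    ```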

  14. Applications of MIDAS regression in analysing trends in water quality

    Science.gov (United States)

    Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.

    2014-04-01

    We discuss novel statistical methods for analysing trends in water quality. Such analysis uses complex data sets of different classes of variables, including water quality, hydrological and meteorological. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequency in the data collection: typically, water quality variables are sampled fortnightly, whereas rainfall data are sampled daily. The advantage of using MIDAS regression lies in the flexible and parsimonious modelling of the influence of rainfall and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed sampling nature of the data.
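
    A minimal sketch of the MIDAS idea: collapse the daily covariate lags into one parsimonious regressor through a normalised exponential Almon weight function before an ordinary regression. The data and the theta parameters below are synthetic stand-ins, and in practice the weight parameters would be estimated jointly by nonlinear least squares:

        import numpy as np

        def exp_almon_weights(n_lags, theta1, theta2):
            # Normalised exponential Almon lag weights, a common MIDAS choice.
            k = np.arange(1, n_lags + 1)
            w = np.exp(theta1 * k + theta2 * k**2)
            return w / w.sum()

        rng = np.random.default_rng(0)
        n_obs, n_lags = 100, 14                  # fortnights x daily rainfall lags
        rain = rng.gamma(2.0, 3.0, size=(n_obs, n_lags))

        w = exp_almon_weights(n_lags, theta1=0.1, theta2=-0.05)
        x = rain @ w                             # one parsimonious regressor
        y = 2.0 + 0.8 * x + rng.normal(0.0, 0.5, n_obs)

        slope, intercept = np.polyfit(x, y, 1)   # OLS on the aggregated regressor
        print(intercept, slope)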

  15. Structural analyses of ITER toroidal field coils under fault conditions

    International Nuclear Information System (INIS)

    Jong, C.T.J.

    1992-04-01

    ITER (International Thermonuclear Experimental Reactor) is intended to be an experimental thermonuclear tokamak reactor testing the basic physics performance and technologies essential to future fusion reactors. The magnet system of ITER consists essentially of four sub-systems, i.e. toroidal field coils (TFCs), poloidal field coils (PFCs), power supplies, and cryogenic supplies. These sub-systems do not contain significant radioactivity inventories, but the large energy inventory is a potential accident initiator. The aim of the structural analyses is to prevent accidents from propagating into the vacuum vessel, tritium system and cooling system, which all contain significant amounts of radioactivity. As part of the design process, three conditions are defined for the PF and TF coils for which the mechanical behaviour has to be analysed in some detail, namely: normal operating conditions, upset conditions and fault conditions. This paper describes the work carried out by ECN to create a detailed finite element model of 16 TFCs, as well as the results of some fault condition analyses made with the model. Under fault conditions, either electrical or mechanical, the magnetic loading of the TFCs becomes abnormal, and mechanical failure of parts of the overall structure might then occur (e.g. failure of a coil, the gravitational supports, or the intercoil structure). The analyses performed consist of linear elastic stress analyses and electro-magneto-structural analyses (coupled field analyses). 8 refs.; 5 figs.; 5 tabs

  16. Using plant growth modeling to analyse C source-sink relations under drought: inter and intra specific comparison

    Directory of Open Access Journals (Sweden)

    Benoit ePallas

    2013-11-01

    Full Text Available The ability to assimilate C and allocate NSC (non-structural carbohydrates) to the most appropriate organs is crucial to maximize plant ecological or agronomic performance. Such C source and sink activities are differentially affected by environmental constraints. Under drought, plant growth is generally more sink than source limited, as organ expansion or appearance rate is affected earlier and more strongly than C assimilation. This favors plant survival and recovery but not always agronomic performance, as NSC are stored rather than used for growth due to a modified metabolism in source and sink leaves. Such interactions between plant C and water balance are complex, and plant modeling can help analyze their impact on plant phenotype. This paper addresses the impact of trade-offs between C sink and source activities on plant production under drought, combining experimental and modeling approaches. Two contrasting monocotyledonous species (rice, oil palm) were studied. Experimentally, the sink limitation of plant growth under moderate drought was confirmed, as were the modifications in NSC metabolism in source and sink organs. Under severe stress, when the C source became limiting, plant NSC concentration decreased. Two plant models dedicated to oil palm and rice morphogenesis were used to perform a sensitivity analysis and further explore how to optimize C sink and source drought sensitivity to maximize plant growth. Modeling results highlighted that optimal drought sensitivity depends both on drought type and species and that modeling is a great opportunity to analyse such complex processes. Further modeling needs, and more generally the challenge of using models to support complex trait breeding, are discussed.

  17. 68Ga/177Lu-labeled DOTA-TATE shows similar imaging and biodistribution in neuroendocrine tumor model.

    Science.gov (United States)

    Liu, Fei; Zhu, Hua; Yu, Jiangyuan; Han, Xuedi; Xie, Qinghua; Liu, Teli; Xia, Chuanqin; Li, Nan; Yang, Zhi

    2017-06-01

    Somatostatin receptors are overexpressed in neuroendocrine tumors, and their endogenous ligand is somatostatin. DOTA-TATE is an analogue of somatostatin which shows high binding affinity to somatostatin receptors. We aim to evaluate the 68Ga/177Lu-labeled DOTA-TATE kit in a neuroendocrine tumor model for molecular imaging and to attempt human positron emission tomography/computed tomography imaging of 68Ga-DOTA-TATE in neuroendocrine tumor patients. DOTA-TATE kits were formulated and radiolabeled with 68Ga/177Lu to obtain 68Ga/177Lu-DOTA-TATE (M-DOTA-TATE). In vitro and in vivo stability studies of 177Lu-DOTA-TATE were performed. Nude mice bearing human tumors were injected with 68Ga-DOTA-TATE or 177Lu-DOTA-TATE for micro-positron emission tomography and micro-single-photon emission computed tomography/computed tomography imaging, respectively, and clinical positron emission tomography/computed tomography images of 68Ga-DOTA-TATE were obtained at 1 h post-intravenous injection from patients with neuroendocrine tumors. Micro-positron emission tomography and micro-single-photon emission computed tomography/computed tomography imaging of 68Ga-DOTA-TATE and 177Lu-DOTA-TATE both showed clear tumor uptake which could be blocked by excess DOTA-TATE. In addition, 68Ga-DOTA-TATE positron emission tomography/computed tomography imaging in neuroendocrine tumor patients could show primary and metastatic lesions. 68Ga-DOTA-TATE and 177Lu-DOTA-TATE could accumulate in tumors in animal models, paving the way for better clinical peptide receptor radionuclide therapy for neuroendocrine tumor patients in the Asian population.

  18. Analyses of containment structures with corrosion damage

    International Nuclear Information System (INIS)

    Cherry, J.L.

    1997-01-01

    Corrosion damage that has been found in a number of nuclear power plant containment structures can degrade the pressure capacity of the vessel. This has prompted concerns regarding the capacity of corroded containments to withstand accident loadings. To address these concerns, finite element analyses have been performed for a typical PWR Ice Condenser containment structure. Using ABAQUS, the pressure capacity was calculated for a typical vessel with no corrosion damage. Multiple analyses were then performed with the location of the corrosion and the amount of corrosion varied in each analysis. Using a strain-based failure criterion, a "lower bound", "best estimate", and "upper bound" failure level was predicted for each case. These limits were established by: determining the amount of variability that exists in material properties of typical containments, estimating the amount of uncertainty associated with the level of modeling detail and modeling assumptions, and estimating the effect of corrosion on the material properties.

  19. Application of geostatistical methods to long-term safety analyses for radioactive waste repositories

    International Nuclear Information System (INIS)

    Roehlig, K.J.

    2001-01-01

    Long-term safety analyses are an important part of the design and optimisation process as well as of the licensing procedure for final repositories for radioactive waste in deep geological formations. For selected scenarios describing possible evolutions of the repository system in the post-closure phase, quantitative consequence analyses are performed. Due to the complexity of the phenomena of concern and the large timeframes under consideration, several types of uncertainties have to be taken into account. The modelling work for the far-field (geosphere) surrounding or overlaying the repository is based on model calculations concerning the groundwater movement and the resulting migration of radionuclides which possibly will be released from the repository. In contrast to engineered systems, the geosphere shows a strong spatial variability of facies, materials and material properties. The paper presented here describes the first steps towards a quantitative approach for an uncertainty assessment taking into account this variability. Due to the availability of a large amount of data and information of several types, the Gorleben site (Germany) has been used for a case study in order to demonstrate the method. (orig.)

  20. Overview of fuel behaviour and core degradation, based on modelling analyses

    International Nuclear Information System (INIS)

    Massara, Simone

    2013-01-01

    Since the very first hours after the accident at Fukushima-Daiichi, numerical simulations by means of severe accident codes have been carried out, aiming at highlighting the key physical phenomena needed for a correct understanding of the sequence of events and - on a long enough timeline - at improving models and methods in order to reduce the discrepancy between calculated and measured data. A final long-term objective is to support the future decommissioning phase. The presentation summarises some of the available elements on the role of the fuel/cladding-water interaction, which became available only through modelling because of the absence of measured data directly related to the cladding-steam interaction. It also aims at drawing some conclusions on the status of the modelling capabilities of current tools, particularly for the purpose of the foreseen application to ATF fuels: - analyses with MELCOR, MAAP, THALES2 and RELAP5 are presented; - input data are taken from BWR Mark-I Fukushima-Daiichi Units 1, 2 and 3, completed with operational data published by TEPCO. Where data or hypotheses are missing or incomplete, they are adjusted to reduce the calculation/measurement discrepancy. The behaviour of the accident is well understood on a qualitative level (major trends in RPV pressure and water level, dry-wet and PCV pressure are well represented), allowing a certain level of confidence in the results of the analysis of the zirconium-steam reaction - which is accessible only through numerical simulations. These show an extremely fast sequence of events (here for Unit 1): - the top of the fuel is uncovered at 3 hours (after the tsunami); - the steam line breaks at 6.5 hours. The vessel dries out at 10 hours, with a heat-up rate at first driven by the decay heat only (∼7 K/min) and afterwards by the chemical heat from Zr oxidation (over 30 K/min), associated with massive hydrogen production. It appears that the level of uncertainty increases with

  1. A response-modeling alternative to surrogate models for support in computational analyses

    International Nuclear Information System (INIS)

    Rutherford, Brian

    2006-01-01

    Often, the objectives in a computational analysis involve characterization of system performance based on some function of the computed response. In general, this characterization includes (at least) an estimate or prediction for some performance measure and an estimate of the associated uncertainty. Surrogate models can be used to approximate the response in regions where simulations were not performed. For most surrogate modeling approaches, however, (1) estimates are based on smoothing of available data and (2) uncertainty in the response is specified in a point-wise (in the input space) fashion. These aspects of surrogate model construction might limit their capabilities. One alternative is to construct a probability measure, G(r), for the computer response, r, based on available data. This 'response-modeling' approach permits probability estimation for an arbitrary event, E(r), based on the computer response. In this general setting, event probabilities can be computed as prob(E) = ∫_r I(E(r)) dG(r), where I is the indicator function. Furthermore, one can use G(r) to calculate an induced distribution on a performance measure, pm. For prediction problems where the performance measure is a scalar, its distribution F_pm is determined by F_pm(z) = ∫_r I(pm(r) ≤ z) dG(r). We introduce response models for scalar computer output and then generalize the approach to more complicated responses that utilize multiple response models.
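
    Once G(r) is represented by a sample of computed responses, both integrals reduce to Monte Carlo averages of indicator functions. A sketch under that assumption, with a synthetic response sample and arbitrary choices of E and pm:

        import numpy as np

        rng = np.random.default_rng(1)
        r = rng.lognormal(mean=0.0, sigma=0.5, size=10_000)  # sample drawn from G(r)

        event = lambda r: r > 2.0       # an arbitrary event E(r)
        pm = lambda r: np.sqrt(r)       # an arbitrary scalar performance measure

        prob_E = np.mean(event(r))                 # prob(E) = integral of I(E(r)) dG(r)
        F_pm = lambda z: np.mean(pm(r) <= z)       # F_pm(z) = integral of I(pm(r) <= z) dG(r)
        print(prob_E, F_pm(1.2))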

  2. Comparison of immittance spectroscopy analyses of ultra-pure and “pure” water in the lower frequency regime

    International Nuclear Information System (INIS)

    Macdonald, J. Ross

    2014-01-01

    Two different analyses of impedance data obtained from ultra-pure water allowed to equilibrate with the atmosphere have recently appeared. Both data sets thus show much smaller low-frequency resistances than does ultra-pure water. Different fitting models were used in these analyses and led to appreciably different parameter estimates from their data fits. The two “pure” water experimental data sets are here analyzed with a Poisson-Nernst-Planck model that incorporates the possibility of dissociation of a neutral species to positive and negative charges of arbitrary mobilities, anomalous diffusion in the interface region, and reaction of mobile ions at the electrodes. Complex-nonlinear-least-squares fitting of these data sets with either charges of a single sign mobile or with those of both signs mobile showed that the one-mobile choice was far superior to the two-mobile one. These results were compared both with newly calculated theoretical ultra-pure water immittance results and with the results obtained in the earlier two papers, where different Poisson-Nernst-Planck-related fitting models were employed. Both involved the restrictive assumptions of full dissociation and two-mobile behavior with equal mobilities of the positive and negative charges. The dominant mobile charge species present in the equilibrated “pure” water data sets (protons in the case of ultra-pure water) involved mobile impurity ions, possibly oxygen ones. The Poisson-Nernst-Planck model used here is simpler than the other models, and it led to better fits of the data sets and to more physically significant parameter estimates than did the earlier fits.

  3. Serotonin markers show altered transcription levels in an experimental pig model of mitral regurgitation

    DEFF Research Database (Denmark)

    Cremer, Signe Emilie; Zois, Nora Elisabeth; Moesgaard, S. G.

    2015-01-01

    Owing to decreased expression of the serotonin (5-HT) re-uptake transporter (SERT) in MMVD-affected valves, increased valvular 5-HT synthesis and decreased clearance have been suggested. It remains unknown how haemodynamic changes associated with mitral regurgitation (MR) affect 5-HT markers in the mitral valve, myocardium and circulation. Twenty-eight pigs underwent surgically induced MR or sham-operation, resulting in three MR groups: control (CON, n = 12), mild MR (mMR, n = 10) and severe MR (sMR, n = 6). The gene expression levels of 5-HT1BR, 5-HT2AR, 5-HT2BR, SERT and TPH-1 were analysed using quantitative PCR (qPCR) in the mitral valve (MV), anterior papillary muscle (AP) and left ventricle (LV). MV 5-HT2BR was also analysed with immunohistochemistry (IHC) in relation to histological lesions and valvular myofibroblasts. All 5-HTR mRNAs were up-regulated in MV compared to AP and LV (P ...). SERT and TPH-1 were up-regulated in AP and LV compared...

  4. A Framework for Analysing Driver Interactions with Semi-Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    Siraj Shaikh

    2012-12-01

    Full Text Available Semi-autonomous vehicles are increasingly serving critical functions in various settings from mining to logistics to defence. A key characteristic of such systems is the presence of the human (driver) in the control loop. To ensure safety, the driver needs to be aware of the autonomous aspects of the vehicle, and the automated features of the vehicle must be built to enable safer control. In this paper we propose a framework that combines empirical models describing human behaviour with environment and system models. We then analyse, via model checking, the interaction between the models for desired safety properties. The aim is to analyse the design for safe vehicle-driver interaction. We demonstrate the applicability of our approach using a case study involving semi-autonomous vehicles where driver fatigue is a factor critical to a safe journey.

  5. Recriticality analyses for CAPRA cores

    International Nuclear Information System (INIS)

    Maschek, W.; Thiem, D.

    1995-01-01

    The first scoping calculations performed show that the energetics levels from recriticalities in CAPRA cores are in the same range as in conventional cores. However, considerable uncertainties exist and further analyses are necessary. Additional investigations are performed for the separation scenarios of fuel/steel/inert and matrix material, as a large influence of these processes on possible ramp rates and kinetics parameters was detected in the calculations. (orig./HP)

  6. Recriticality analyses for CAPRA cores

    Energy Technology Data Exchange (ETDEWEB)

    Maschek, W.; Thiem, D.

    1995-08-01

    The first scoping calculations performed show that the energetics levels from recriticalities in CAPRA cores are in the same range as in conventional cores. However, considerable uncertainties exist and further analyses are necessary. Additional investigations are performed for the separation scenarios of fuel/steel/inert and matrix material, as a large influence of these processes on possible ramp rates and kinetics parameters was detected in the calculations. (orig./HP)

  7. Nuclear power plants: Results of recent safety analyses

    International Nuclear Information System (INIS)

    Steinmetz, E.

    1987-01-01

    The contributions deal with the problems posed by low radiation doses, with the information currently available from analyses of the Chernobyl reactor accident, and with risk assessments in connection with nuclear power plant accidents. Other points of interest include latest results on fission product release from reactor core or reactor building, advanced atmospheric dispersion models for incident and accident analyses, reliability studies on safety systems, and assessment of fire hazard in nuclear installations. The various contributions are found as separate entries in the database. (DG) [de

  8. A comparison of linear tyre models for analysing shimmy

    NARCIS (Netherlands)

    Besselink, I.J.M.; Maas, J.W.L.H.; Nijmeijer, H.

    2011-01-01

    A comparison is made between three linear, dynamic tyre models using low speed step responses and yaw oscillation tests. The match with the measurements improves with increasing complexity of the tyre model. Application of the different tyre models to a two degree of freedom trailing arm suspension

  9. Neoliberalism in education: Five images of critical analyses

    Directory of Open Access Journals (Sweden)

    Branislav Pupala

    2011-03-01

    Full Text Available The survey study brings information about the way that educational research copes with neoliberalism as a generalized form of social government in current western culture. It shows that neoliberalism is considered a universal scope for other changes in the basic segments of education, and that theoretical and critical analyses of this phenomenon represent an important part of production in the area of educational research. It emphasizes the contribution of the formation and development of the so-called governmentality studies to the comprehension of the mechanisms and consequences of neoliberal government of society, and shows how the methodology of these studies helps to identify neoliberal strategies used in the regulation of social subjects by education. Five selected segments of critical analysis are elaborated (from the concept of lifelong learning, through preschool and university education, to the education of teachers and the PISA project) that clearly show the ideological and theoretical cohesiveness of analysing education through the lens of neoliberal governmentality.

  10. Results of radiotherapy in craniopharyngiomas analysed by the linear quadratic model

    Energy Technology Data Exchange (ETDEWEB)

    Guerkaynak, M. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Oezyar, E. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Zorlu, F. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Akyol, F.H. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Lale Atahan, I. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey)

    1994-12-31

    In 23 craniopharyngioma patients treated by limited surgery and external radiotherapy, the results concerning local control were analysed by the linear quadratic formula. A biologically effective dose (BED) of 55 Gy, calculated with a time factor and an α/β value of 10 Gy, seemed to be adequate for local control. (orig.).
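
    A minimal sketch of the underlying arithmetic, using the standard linear-quadratic BED with a repopulation (time) factor, BED = n·d·(1 + d/(α/β)) - K·(T - Tk); the fraction scheme and the repopulation constants K and Tk below are illustrative assumptions, not values from the paper:

        def bed(n, d, alpha_beta=10.0, t_days=None, k_gy_day=0.6, t_kick=28.0):
            # Linear-quadratic BED with an optional repopulation term K*(T - Tk).
            # k_gy_day (K) and t_kick (Tk) are assumed illustrative values.
            core = n * d * (1.0 + d / alpha_beta)
            if t_days is not None:
                core -= k_gy_day * max(0.0, t_days - t_kick)
            return core

        # E.g. 30 fractions of 1.8 Gy over 40 days with alpha/beta = 10 Gy:
        print(bed(30, 1.8, t_days=40))  # ~56.5 Gy, near the quoted 55 Gy BED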

  11. Detection of nonstationary transition to synchronized states of a neural network using recurrence analyses

    Science.gov (United States)

    Budzinski, R. C.; Boaretto, B. R. R.; Prado, T. L.; Lopes, S. R.

    2017-07-01

    We study the stability of asymptotic states displayed by a complex neural network. We focus on the loss of stability of a stationary state of networks using recurrence quantifiers as tools to diagnose local and global stabilities as well as the multistability of a coupled neural network. Numerical simulations of a neural network composed of 1024 neurons in a small-world connection scheme are performed using the model of Braun et al. [Int. J. Bifurcation Chaos 08, 881 (1998), 10.1142/S0218127498000681], a modification of the Hodgkin-Huxley model [J. Phys. 117, 500 (1952)]. To validate the analyses, the results are compared with those produced by Kuramoto's order parameter [Chemical Oscillations, Waves, and Turbulence (Springer-Verlag, Berlin Heidelberg, 1984)]. We show that recurrence tools making use of just integrated signals provided by the networks, such as local field potential (LFP) signals or mean field values, bring new results on the understanding of neural behavior occurring before the synchronization states. In particular we show the occurrence of different stationary and nonstationary asymptotic states.

  12. Sampling strategies for improving tree accuracy and phylogenetic analyses: a case study in ciliate protists, with notes on the genus Paramecium.

    Science.gov (United States)

    Yi, Zhenzhen; Strüder-Kypke, Michaela; Hu, Xiaozhong; Lin, Xiaofeng; Song, Weibo

    2014-02-01

    In order to assess how dataset-selection for multi-gene analyses affects the accuracy of inferred phylogenetic trees in ciliates, we chose five genes and the genus Paramecium, one of the most widely used model protist genera, and compared tree topologies of the single- and multi-gene analyses. Our empirical study shows that: (1) Using multiple genes improves phylogenetic accuracy, even when their one-gene topologies are in conflict with each other. (2) The impact of missing data on phylogenetic accuracy is ambiguous: resolution power and topological similarity, but not number of represented taxa, are the most important criteria of a dataset for inclusion in concatenated analyses. (3) As an example, we tested the three classification models of the genus Paramecium with a multi-gene based approach, and only the monophyly of the subgenus Paramecium is supported. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Global post-Kyoto scenario analyses at PSI

    Energy Technology Data Exchange (ETDEWEB)

    Kypreos, S [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1999-08-01

    Scenario analyses are described here using the Global MARKAL-Macro Trade (GMMT) model to study the economic implications of the Kyoto Protocol to the UN Convention on Climate Change. Some conclusions are derived in terms of efficient implementations of the post-Kyoto extensions of the Protocol. (author) 2 figs., 5 refs.

  14. Global post-Kyoto scenario analyses at PSI

    International Nuclear Information System (INIS)

    Kypreos, S.

    1999-01-01

    Scenario analyses are described here using the Global MARKAL-Macro Trade (GMMT) model to study the economic implications of the Kyoto Protocol to the UN Convention on Climate Change. Some conclusions are derived in terms of efficient implementations of the post-Kyoto extensions of the Protocol. (author) 2 figs., 5 refs

  15. LOCO - a linearised model for analysing the onset of coolant oscillations and frequency response of boiling channels

    International Nuclear Information System (INIS)

    Romberg, T.M.

    1982-12-01

    Industrial plants such as heat exchangers and nuclear and conventional boilers are prone to coolant flow oscillations which may go undetected. In this report, a hydrodynamic model is formulated in which the one-dimensional, non-linear, partial differential equations for the conservation of mass, energy and momentum are perturbed with respect to time, linearised, and Laplace-transformed into the s-domain for frequency response analysis. A computer program has been developed to integrate numerically the resulting non-linear ordinary differential equations by finite difference methods. A sample problem demonstrates how the computer code is used to analyse the frequency response and flow stability characteristics of a heated channel
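
    For a single perturbation channel, the linearise-and-Laplace-transform step amounts to evaluating a transfer function along s = jω. A toy sketch with a made-up, lightly damped second-order transfer function standing in for the linearised channel dynamics (a resonance peak in the Bode magnitude is the frequency-domain signature of an oscillation-prone channel):

        import numpy as np
        from scipy import signal

        # Illustrative stand-in: G(s) = w0^2 / (s^2 + 2*zeta*w0*s + w0^2),
        # with a small damping ratio zeta; parameters are not from LOCO.
        w0, zeta = 2.0, 0.05
        G = signal.TransferFunction([w0**2], [1.0, 2.0 * zeta * w0, w0**2])

        w, mag_db, phase_deg = signal.bode(G, w=np.logspace(-1, 1, 200))
        print(f"peak gain {mag_db.max():.1f} dB near w = {w[np.argmax(mag_db)]:.2f} rad/s")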

  16. DMPD: Structural and functional analyses of bacterial lipopolysaccharides. [Dynamic Macrophage Pathway CSML Database]

    Lifescience Database Archive (English)

    Full Text Available Structural and functional analyses of bacterial lipopolysaccharides. PubmedID: 12106784. Authors: Carof...

  17. Analysing Models as a Knowledge Technology in Transport Planning

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik

    2011-01-01

    Models belong to a wider family of knowledge technologies applied in the transport area. Models sometimes share with other such technologies the fate of not being used as intended, or not at all. The result may be ill-conceived plans as well as wasted resources. Frequently, the blame ... critical analytic literature on knowledge utilization and policy influence. A simple scheme based in this literature is drawn up to provide a framework for discussing the interface between urban transport planning and model use. A successful example of model use in Stockholm, Sweden is used as a heuristic device to illuminate how such an analytic scheme may allow patterns of insight about the use, influence and role of models in planning to emerge. The main contribution of the paper is to demonstrate that concepts and terminologies from the knowledge use literature can provide interpretations of significance...

  18. Fracture Analyses of Cracked Delta Eye Plates in Ship Towing

    Science.gov (United States)

    Huang, Xiangbing; Huang, Xingling; Sun, Jizheng

    2018-01-01

    Based on fracture mechanics, a safety analysis approach is proposed for cracked delta eye plates in ship towing. The static analysis model is presented for the delta eye plate in service, and the fracture criterion is introduced on the basis of the stress intensity factor, which is estimated with the domain integral method. Subsequently, three-dimensional finite element analyses are carried out to obtain the effective stress intensity factors, and a case is studied to demonstrate the reasonableness of the approach. The results show that classical strength theory is not applicable for evaluating the cracked plate while fracture mechanics can solve the problem very well, and that the load level which a delta eye plate can carry decreases evidently when it is damaged.
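
    A minimal sketch of the acceptance check such an analysis feeds, with the textbook through-crack estimate K_I = Y·σ·√(πa) standing in for the finite-element domain-integral value; the stress, crack length, geometry factor and toughness below are placeholders:

        import math

        def mode_i_sif(stress_mpa, crack_len_m, geometry_factor=1.0):
            # Textbook estimate K_I = Y * sigma * sqrt(pi * a), in MPa*sqrt(m).
            # The paper computes K with the domain integral method on a 3-D
            # FE model; this closed form is only an illustrative stand-in.
            return geometry_factor * stress_mpa * math.sqrt(math.pi * crack_len_m)

        K_IC = 60.0  # assumed fracture toughness, MPa*sqrt(m)
        K_I = mode_i_sif(stress_mpa=120.0, crack_len_m=0.02, geometry_factor=1.12)
        print(f"K_I = {K_I:.1f} MPa*sqrt(m), acceptable: {K_I < K_IC}")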

  19. Resolution analyses for selecting an appropriate airborne electromagnetic (AEM) system

    DEFF Research Database (Denmark)

    Christensen, N.B.; Lawrie, Ken

    2012-01-01

    Selecting an appropriate airborne electromagnetic (AEM) system ... is necessary and has to be approached in a pragmatic way involving a range of different aspects. In this paper, we concentrate on the resolution analysis perspective and demonstrate that the inversion analysis must be preferred over the derivative analysis because it takes parameter coupling into account and, furthermore, that the derivative analysis generally overestimates the resolution capability. ... resolution for a series of models relevant to the survey area [is assessed] by comparing the sum over the data of squares of noise-normalised derivatives. We compare this analysis method with a resolution analysis based on the posterior covariance matrix of an inversion formulation. Both of the above analyses depend ... Finally we show that impulse response data are to be preferred over step response data for near-surface resolution.

  20. Twenty years of meta-analyses in orthopaedic surgery: has quality kept up with quantity?

    Science.gov (United States)

    Dijkman, Bernadette G; Abouali, Jihad A K; Kooistra, Bauke W; Conter, Henry J; Poolman, Rudolf W; Kulkarni, Abhaya V; Tornetta, Paul; Bhandari, Mohit

    2010-01-01

    As the number of studies in the literature increases, orthopaedic surgeons depend heavily on meta-analyses as their primary source of scientific evidence. The objectives of this review were to assess the scientific quality and number of published meta-analyses on orthopaedics-related topics over time. We conducted, in duplicate and independently, a systematic review of published meta-analyses in orthopaedics in the years 2005 and 2008 and compared them with a previous systematic review of meta-analyses from 1969 to 1999. A search of electronic databases (MEDLINE, EMBASE, and the Cochrane Database of Systematic Reviews) was performed to identify meta-analyses published in 2005 and 2008. We searched bibliographies and contacted content experts to identify additional relevant studies. Two investigators independently assessed the quality of the studies, using the Oxman and Guyatt index, and abstracted relevant data. We included forty-five and forty-four meta-analyses from 2005 and 2008, respectively. While the number of meta-analyses increased fivefold from 1999 to 2008, the mean quality score did not change significantly over time (p = 0.067). In the later years, a significantly lower proportion of meta-analyses had methodological flaws (56% in 2005 and 68% in 2008) compared with meta-analyses published prior to 2000 (88%) (p = 0.006). In 2005 and 2008, respectively, 18% and 30% of the meta-analyses had major to extensive flaws in their methodology. Studies from 2008 with positive conclusions used and described appropriate criteria for validity assessment less often than did those with negative results. The use of random-effects and fixed-effects models as pooling methods became more popular toward 2008. Although the methodological quality of orthopaedic meta-analyses has increased in the past twenty years, a substantial proportion continues to show major to extensive flaws. As the number of published meta-analyses is increasing, a routine checklist for

  1. A simulation model for analysing brain structure deformations

    Energy Technology Data Exchange (ETDEWEB)

    Bona, Sergio Di [Institute for Information Science and Technologies, Italian National Research Council (ISTI-CNR), Via G Moruzzi, 1-56124 Pisa (Italy); Lutzemberger, Ludovico [Department of Neuroscience, Institute of Neurosurgery, University of Pisa, Via Roma, 67-56100 Pisa (Italy); Salvetti, Ovidio [Institute for Information Science and Technologies, Italian National Research Council (ISTI-CNR), Via G Moruzzi, 1-56124 Pisa (Italy)

    2003-12-21

    Recent developments of medical software applications, from the simulation to the planning of surgical operations, have revealed the need for modelling human tissues and organs not only from a geometric point of view but also from a physical one, i.e. soft tissues, rigid body, viscoelasticity, etc. This has given rise to the term 'deformable objects', which refers to objects with a morphology and a physical and mechanical behaviour of their own that reflect their natural properties. In this paper, we propose a model, based upon physical laws, suitable for the realistic manipulation of geometric reconstructions of volumetric data taken from MR and CT scans. In particular, a physically based model of the brain is presented that is able to simulate the evolution of intra-cranial pathological phenomena of different natures, such as haemorrhages, neoplasms and haematomas, and to describe the consequences caused by their volume expansions and the influences they have on the anatomical and neuro-functional structures of the brain.

  2. Event-based model diagnosis of rainfall-runoff model structures

    International Nuclear Information System (INIS)

    Stanzel, P.

    2012-01-01

    The objective of this research is a comparative evaluation of different rainfall-runoff model structures. Comparative model diagnostics facilitate the assessment of strengths and weaknesses of each model. The application of multiple models allows an analysis of simulation uncertainties arising from the selection of model structure, as compared with effects of uncertain parameters and precipitation input. Four different model structures, including conceptual and physically based approaches, are compared. In addition to runoff simulations, results for soil moisture and the runoff components of overland flow, interflow and base flow are analysed. Catchment runoff is simulated satisfactorily by all four model structures and shows only minor differences. Systematic deviations from runoff observations provide insight into model structural deficiencies. While physically based model structures capture some single runoff events better, they do not generally outperform conceptual model structures. Contributions to uncertainty in runoff simulations stemming from the choice of model structure show similar dimensions to those arising from parameter selection and the representation of precipitation input. Variations in precipitation mainly affect the general level and peaks of runoff, while different model structures lead to different simulated runoff dynamics. Large differences between the four analysed models are detected for simulations of soil moisture and, even more pronounced, runoff components. Soil moisture changes are more dynamical in the physically based model structures, which is in better agreement with observations. Streamflow contributions of overland flow are considerably lower in these models than in the more conceptual approaches. Observations of runoff components are rarely made and are not available in this study, but are shown to have high potential for an effective selection of appropriate model structures (author) [de
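
    The conceptual structures compared in such studies are typically built from storage elements like the single linear reservoir below (outflow proportional to storage); this sketch is a generic building block with synthetic forcing, not one of the four model structures analysed:

        import numpy as np

        def linear_reservoir(precip, k=0.2, s0=0.0):
            # Single linear reservoir: storage S, outflow Q = k * S per step.
            # Real conceptual models chain several such stores for overland
            # flow, interflow and base flow.
            s, q = s0, np.zeros_like(precip)
            for t, p in enumerate(precip):
                s += p           # rainfall input
                q[t] = k * s     # outflow proportional to storage
                s -= q[t]
            return q

        rain = np.array([0, 5, 12, 3, 0, 0, 8, 0, 0, 0], dtype=float)  # mm/day
        print(linear_reservoir(rain).round(2))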

  3. Preliminary analyses of AP600 using RELAP5

    International Nuclear Information System (INIS)

    Modro, S.M.; Beelman, R.J.; Fisher, J.E.

    1991-01-01

    This paper presents results of preliminary analyses of the proposed Westinghouse Electric Corporation AP600 design. AP600 is a two loop, 600 MW (e) pressurized water reactor (PWR) arranged in a two hot leg, four cold leg nuclear steam supply system (NSSS) configuration. In contrast to the present generation of PWRs it is equipped with passive emergency core coolant (ECC) systems. Also, the containment and the safety systems of the AP600 interact with the reactor coolant system and each other in a more integral fashion than present day PWRs. The containment in this design is the ultimate heat sink for removal of decay heat to the environment. Idaho National Engineering Laboratory (INEL) has studied applicability of the RELAP5 code to AP600 safety analysis and has developed a model of the AP600 for the Nuclear Regulatory Commission. The model incorporates integral modeling of the containment, NSSS and passive safety systems. Best available preliminary design data were used. Nodalization sensitivity studies were conducted to gain experience in modeling of systems and conditions which are beyond the applicability of previously established RELAP5 modeling guidelines or experience. Exploratory analyses were then undertaken to investigate AP600 system response during postulated accident conditions. Four small break LOCA calculations and two large break LOCA calculations were conducted

  4. Density dependent forces and large basis structure models in the analyses of 12C(p,p') reactions at 135 MeV

    International Nuclear Information System (INIS)

    Bauhoff, W.; Collins, S.F.; Henderson, R.S.

    1983-01-01

    Differential cross-sections have been measured for the elastic and inelastic scattering of 135 MeV protons from 12 C. The data from the transitions to 9 select states up to 18.3 MeV in excitation have been analysed using a distorted wave approximation with various microscopic model nuclear structure transition densities and free and density dependent two nucleon t-matrices. Clear signatures of the density dependence of the t-matrix are defined and the utility of select transitions to test different attributes of that t-matrix when good nuclear structure models are used is established

  5. Isotropy analyses of the Planck convergence map

    Science.gov (United States)

    Marques, G. A.; Novaes, C. P.; Bernui, A.; Ferreira, I. S.

    2018-01-01

    The presence of matter in the path of relic photons causes distortions in the angular pattern of the cosmic microwave background (CMB) temperature fluctuations, modifying their properties in a slight but measurable way. Recently, the Planck Collaboration released the estimated convergence map, an integrated measure of the large-scale matter distribution that produced the weak gravitational lensing (WL) phenomenon observed in Planck CMB data. We perform exhaustive analyses of this convergence map, calculating the variance in small and large regions of the sky, but excluding the area masked due to Galactic contamination, and compare them with the features expected in the set of simulated convergence maps also released by the Planck Collaboration. Our goal is to search for sky directions or regions where the WL imprints anomalous signatures in the variance estimator, revealed through a χ2 analysis at a statistically significant level. In the local analysis of the Planck convergence map, we identified eight patches of the sky in disagreement, at more than 2σ, with what is observed in the average of the simulations. In contrast, in the large-regions analysis we found no statistically significant discrepancies, but, interestingly, the regions with the highest χ2 values surround the ecliptic poles. Thus, our results show a good agreement with the features expected by the Λ cold dark matter concordance model, as given by the simulations. Yet, the outlier regions found here could suggest that the data still contain residual contamination, like noise, due to over- or underestimation of systematic effects in the simulation data set.
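
    A sketch of the patch-variance consistency test described: the variance of a data patch is compared against the distribution of the same statistic over simulated maps, through a χ²-like statistic and an exceedance probability. Pixel values here are synthetic stand-ins for convergence-map patches:

        import numpy as np

        rng = np.random.default_rng(2)
        n_sims, n_pix = 1000, 500
        sims = rng.normal(0.0, 1.0, size=(n_sims, n_pix))  # simulated patches
        data = rng.normal(0.0, 1.15, size=n_pix)           # slightly anomalous patch

        var_sims = sims.var(axis=1)
        var_data = data.var()
        chi2 = (var_data - var_sims.mean()) ** 2 / var_sims.var()
        p_exc = np.mean(var_sims >= var_data)              # one-sided exceedance
        print(f"chi2 = {chi2:.1f}, exceedance probability = {p_exc:.3f}")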

  6. Do men and women show love differently in marriage?

    Science.gov (United States)

    Schoenfeld, Elizabeth A; Bredow, Carrie A; Huston, Ted L

    2012-11-01

    In Western societies, women are considered more adept than men at expressing love in romantic relationships. Although scholars have argued that this view of love gives short shrift to men's ways of showing love (e.g., Cancian, 1986; Noller, 1996), the widely embraced premise that men and women "love differently" has rarely been examined empirically. Using data collected at four time points over 13 years of marriage, the authors examined whether love is associated with different behaviors for husbands and wives. Multilevel analyses revealed that, counter to theoretical expectations, both genders were equally likely to show love through affection. But whereas wives expressed love by enacting fewer negative or antagonistic behaviors, husbands showed love by initiating sex, sharing leisure activities, and doing household work together with their wives. Overall, the findings indicate that men and women show their love in more nuanced ways than cultural stereotypes suggest.

  7. Energy and exergy analyses of the diffusion absorption refrigeration system

    International Nuclear Information System (INIS)

    Yıldız, Abdullah; Ersöz, Mustafa Ali

    2013-01-01

    This paper describes the thermodynamic analyses of a DAR (diffusion absorption refrigeration) cycle. The experimental apparatus is set up as an ammonia–water DAR cycle with helium as the auxiliary inert gas. A thermodynamic model including mass, energy and exergy balance equations is presented for each component of the DAR cycle, and this model is then validated by comparison with experimental data. In the thermodynamic analyses, energy and exergy losses for each component of the system are quantified and illustrated. The system's energy and exergy losses and efficiencies are investigated. The highest energy and exergy losses occur in the solution heat exchanger. The highest energy losses in the experimental and theoretical analyses are found to be 25.7090 W and 25.4788 W respectively, whereas the corresponding exergy losses are calculated as 13.7933 W and 13.9976 W. Although the energy efficiencies obtained from both the model and the experimental studies are calculated as 0.1858, the corresponding exergy efficiencies are found to be 0.0260 and 0.0356. - Highlights: • The diffusion absorption refrigerator system is designed, manufactured and tested. • The energy and exergy analyses of the system are presented theoretically and experimentally. • The energy and exergy losses are investigated for each component of the system. • The highest energy and exergy losses occur in the solution heat exchanger. • The energy and the exergy performances are also calculated
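
    The per-component loss figures come from exergy bookkeeping of the form "exergy destroyed = exergy in - exergy out", with heat streams weighted by the Carnot factor (1 - T0/T). A sketch with illustrative temperatures and heat rates, not the measured values of the tested DAR unit:

        def heat_exergy(q_watt, t_kelvin, t0_kelvin=298.15):
            # Exergy rate carried by a heat stream: Ex = Q * (1 - T0/T).
            return q_watt * (1.0 - t0_kelvin / t_kelvin)

        # Illustrative balance over one component (values are assumptions):
        ex_in = heat_exergy(q_watt=150.0, t_kelvin=450.0)   # heat supplied
        ex_out = heat_exergy(q_watt=120.0, t_kelvin=330.0)  # heat leaving
        print(f"exergy destroyed: {ex_in - ex_out:.1f} W")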

  8. Optimization of a Centrifugal Boiler Circulating Pump's Casing Based on CFD and FEM Analyses

    Directory of Open Access Journals (Sweden)

    Zhigang Zuo

    2014-04-01

    Full Text Available It is important to evaluate the economic efficiency of boiler circulating pumps in the manufacturing process from the manufacturers' point of view. The possibility of optimizing the pump casing with respect to structural pressure integrity and hydraulic performance was discussed. CFD analyses of pump models with different pump casing sizes were first carried out for the hydraulic performance evaluation. The effects of the working temperature and the sealing ring on the hydraulic efficiency were discussed. A model with a casing diameter of 0.875D40 was selected for further analyses. FEM analyses were then carried out on different combinations of casing sizes, casing wall thicknesses, and materials, to evaluate the safety related to pressure integrity, with respect to both static and fatigue strength analyses. Two models with forged and cast materials were selected as final results.

  9. Security for Show? The Militarisation of Public Space in Light of the 2016 Rio Olympic Games

    Directory of Open Access Journals (Sweden)

    Veronica F. Azzi

    Full Text Available Abstract This article aims to analyse the increasing militarisation of public space in the Brazilian city of Rio de Janeiro, particularly on the eve of the 2016 Olympics. To this end, I briefly discuss how the concept of militarisation has been historically approached in the International Relations literature, namely within the security field. In the first section, I address the nature of the domestic security challenges Brazil faces as a developing country. In the second section, I show that the public security challenge of organised crime in Rio was securitised and confronted by increasing militarisation over the years as a result of a specific model of neo-liberal social control carried out by the country. I then analyse Brazil’s Olympics security scheme carried out in order to portray Rio as a safe city to the world. In the last section, I highlight the contradictions between accounts on the collapse in domestic security vis-à-vis official government statements to the international media to assure that ‘nothing would go wrong’ during the mega sports event. The idea is to show how the militarisation of public security, rather than mere governmental efforts to signal stability to the international community during the Olympics, is a trend likely to outlast the event that implies not only, but mainly, the perpetuation of insecurity.

  10. Model tests and elasto-plastic finite element analysis on multicavity type PCRV

    International Nuclear Information System (INIS)

    Nojiri, Y.; Yamazaki, M.; Kotani, K.; Matsuzaki, Y.

    1978-01-01

    Multicavity type PCRV models were tested to investigate elastic stress distributions, cracking and failure mode of the models, and to determine the adequacy and relative accuracy of finite element structural analyses. The behavior of the models under pressure was investigated, and it was found that the predictions of the analyses showed a good agreement with the test results

  11. Thermodynamic analysis and modeling of thermo compressor; Analyse et modelisation thermodynamique du mouvement du piston d'un thermocompresseur

    Energy Technology Data Exchange (ETDEWEB)

    Arques, Ph. [Ecole Centrale de Lyon, 69 - Ecully (France)

    1998-07-01

    A thermo-compressor is a compressor that directly transforms the heat released by a source into pressure energy, without intermediate mechanical work. It is a conversion of the Stirling engine into a driven machine, in which the piston that provides the work has been eliminated. In this article, we present the analytical and numerical analyses of the modelling of heat and mass transfers in the different volumes of the thermo-compressor. This engine comprises a free displacer piston that separates the cold and hot gas. (author)

  12. Setting up processes and standardization of the equipment in order to optimize analyses of the wavelength dispersion X-ray fluorescence (WDXRF) system

    International Nuclear Information System (INIS)

    Phan Trong Phuc; Luu Anh Tuyen; La Ly Nguyen; Nguyen Thi Ngoc Hue; Pham Thi Hue; Do Duy Khiem

    2015-01-01

    For the purpose of operating the wavelength dispersion X-ray fluorescence (WDXRF) equipment (model S8 TIGER, from the Enhancing Equipment Project (TCTTB) 2011-2012) and optimizing its analyses, we set up sampling and analytical processes for different kinds of samples; we constructed a multi-elemental calibration curve for clay samples; and we analysed the elemental concentrations of 5 clay samples by the XRF method and compared the results with those given by the NAA method. Equipment sensitivity was tested by analysing the elemental concentrations of 2 kaolin standard samples. The results show that the S8 TIGER equipment is in good condition and is able to analyze powder clay samples accurately. (author)

  13. Homeopathy: meta-analyses of pooled clinical data.

    Science.gov (United States)

    Hahn, Robert G

    2013-01-01

    In the first decade of the evidence-based era, which began in the mid-1990s, meta-analyses were used to scrutinize homeopathy for evidence of beneficial effects in medical conditions. In this review, meta-analyses including pooled data from placebo-controlled clinical trials of homeopathy and the aftermath in the form of debate articles were analyzed. In 1997 Klaus Linde and co-workers identified 89 clinical trials that showed an overall odds ratio of 2.45 in favor of homeopathy over placebo. There was a trend toward smaller benefit from studies of the highest quality, but the 10 trials with the highest Jadad score still showed homeopathy had a statistically significant effect. These results challenged academics to perform alternative analyses that, to demonstrate the lack of effect, relied on extensive exclusion of studies, often to the degree that conclusions were based on only 5-10% of the material, or on virtual data. The ultimate argument against homeopathy is the 'funnel plot' published by Aijing Shang's research group in 2005. However, the funnel plot is flawed when applied to a mixture of diseases, because studies with expected strong treatment effects are, for ethical reasons, powered lower than studies with expected weak or unclear treatment effects. To conclude that homeopathy lacks clinical effect, more than 90% of the available clinical trials had to be disregarded. Alternatively, flawed statistical methods had to be applied. Future meta-analyses should focus on the use of homeopathy in specific diseases or groups of diseases instead of pooling data from all clinical trials. © 2013 S. Karger GmbH, Freiburg.
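
    The pooled figures quoted above rest on inverse-variance weighting of log odds ratios; a minimal fixed-effect sketch with synthetic trial results (not Linde et al.'s data):

        import numpy as np

        def pooled_or(or_values, se_log_or):
            # Fixed-effect inverse-variance pooling on the log-OR scale.
            log_or = np.log(or_values)
            w = 1.0 / np.asarray(se_log_or) ** 2
            pooled_log = np.sum(w * log_or) / np.sum(w)
            pooled_se = np.sqrt(1.0 / np.sum(w))
            ci = np.exp(pooled_log + np.array([-1.96, 1.96]) * pooled_se)
            return np.exp(pooled_log), ci

        est, ci = pooled_or([2.1, 3.0, 1.8, 2.6], [0.30, 0.45, 0.25, 0.40])
        print(f"pooled OR = {est:.2f}, 95% CI = [{ci[0]:.2f}, {ci[1]:.2f}]")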

  14. A three-stage hybrid model for regionalization, trends and sensitivity analyses of temperature anomalies in China from 1966 to 2015

    Science.gov (United States)

    Wu, Feifei; Yang, XiaoHua; Shen, Zhenyao

    2018-06-01

    Temperature anomalies have received increasing attention due to their potentially severe impacts on ecosystems, economy and human health. To facilitate objective regionalization and examine regional temperature anomalies, a three-stage hybrid model with stages of regionalization, trends and sensitivity analyses was developed. Annual mean and extreme temperatures were analyzed using the daily data collected from 537 stations in China from 1966 to 2015, including the annual mean, minimum and maximum temperatures (Tm, TNm and TXm) as well as the extreme minimum and maximum temperatures (TNe and TXe). The results showed the following: (1) subregions with coherent temperature changes were identified using the rotated empirical orthogonal function analysis and K-means clustering algorithm. The numbers of subregions were 6, 7, 8, 9 and 8 for Tm, TNm, TXm, TNe and TXe, respectively. (2) Significant increases in temperature were observed in most regions of China from 1966 to 2015, although warming slowed down over the last decade. This warming primarily featured a remarkable increase in its minimum temperature. For Tm and TNm, 95% of the stations showed a significant upward trend at the 99% confidence level. TNe increased the fastest, at a rate of 0.56 °C/decade, whereas 21% of the stations in TXe showed a downward trend. (3) The mean temperatures (Tm, TNm and TXm) in the high-latitude regions increased more quickly than those in the low-latitude regions. The maximum temperature increased significantly at high elevations, whereas the minimum temperature increased greatly at middle-low elevations. The most pronounced warming occurred in eastern China in TNe and northwestern China in TXe, with mean elevations of 51 m and 2098 m, respectively. A cooling trend in TXe was observed at the northwestern end of China. The warming rate in TNe varied the most among the subregions (0.63 °C/decade).
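
    A sketch of the regionalization stage: project the station-by-year anomaly matrix onto its leading components and cluster the scores with K-means. Plain PCA stands in here for the rotated EOF analysis used in the paper, and the anomaly data are synthetic:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.decomposition import PCA

        rng = np.random.default_rng(3)
        anomalies = rng.normal(size=(537, 50))   # 537 stations x 50 years (synthetic)

        scores = PCA(n_components=6).fit_transform(anomalies)
        labels = KMeans(n_clusters=6, n_init=10, random_state=0).fit_predict(scores)
        print(np.bincount(labels))               # station count per subregion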

  15. Comparative sequence and structural analyses of G-protein-coupled receptor crystal structures and implications for molecular models.

    Directory of Open Access Journals (Sweden)

    Catherine L Worth

    Full Text Available BACKGROUND: Up until recently the only available experimental (high resolution) structure of a G-protein-coupled receptor (GPCR) was that of bovine rhodopsin. In the past few years the determination of GPCR structures has accelerated with three new receptors, as well as squid rhodopsin, being successfully crystallized. All share a common molecular architecture of seven transmembrane helices and can therefore serve as templates for building molecular models of homologous GPCRs. However, despite the common general architecture of these structures, key differences do exist between them. The choice of which experimental GPCR structure(s) to use for building a comparative model of a particular GPCR is unclear and, without detailed structural and sequence analyses, could be arbitrary. The aim of this study is therefore to perform a systematic and detailed analysis of sequence-structure relationships of known GPCR structures. METHODOLOGY: We analyzed in detail conserved and unique sequence motifs and structural features in experimentally-determined GPCR structures. Deeper insight into specific and important structural features of GPCRs as well as valuable information for template selection has been gained. Using key features, a workflow has been formulated for identifying the most appropriate template(s) for building homology models of GPCRs of unknown structure. This workflow was applied to a set of 14 human family A GPCRs, suggesting for each the most appropriate template(s) for building a comparative molecular model. CONCLUSIONS: The available crystal structures represent only a subset of all possible structural variation in family A GPCRs. Some GPCRs have structural features that are distributed over different crystal structures or which are not present in the templates, suggesting that homology models should be built using multiple templates. This study provides a systematic analysis of GPCR crystal structures and a consistent method for identifying

  16. Comparative sequence and structural analyses of G-protein-coupled receptor crystal structures and implications for molecular models.

    Science.gov (United States)

    Worth, Catherine L; Kleinau, Gunnar; Krause, Gerd

    2009-09-16

    Up until recently the only available experimental (high resolution) structure of a G-protein-coupled receptor (GPCR) was that of bovine rhodopsin. In the past few years the determination of GPCR structures has accelerated with three new receptors, as well as squid rhodopsin, being successfully crystallized. All share a common molecular architecture of seven transmembrane helices and can therefore serve as templates for building molecular models of homologous GPCRs. However, despite the common general architecture of these structures key differences do exist between them. The choice of which experimental GPCR structure(s) to use for building a comparative model of a particular GPCR is unclear and without detailed structural and sequence analyses, could be arbitrary. The aim of this study is therefore to perform a systematic and detailed analysis of sequence-structure relationships of known GPCR structures. We analyzed in detail conserved and unique sequence motifs and structural features in experimentally-determined GPCR structures. Deeper insight into specific and important structural features of GPCRs as well as valuable information for template selection has been gained. Using key features a workflow has been formulated for identifying the most appropriate template(s) for building homology models of GPCRs of unknown structure. This workflow was applied to a set of 14 human family A GPCRs suggesting for each the most appropriate template(s) for building a comparative molecular model. The available crystal structures represent only a subset of all possible structural variation in family A GPCRs. Some GPCRs have structural features that are distributed over different crystal structures or which are not present in the templates suggesting that homology models should be built using multiple templates. This study provides a systematic analysis of GPCR crystal structures and a consistent method for identifying suitable templates for GPCR homology modelling that will

  17. Model evaluation methodology applicable to environmental assessment models

    International Nuclear Information System (INIS)

    Shaeffer, D.L.

    1979-08-01

    A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes
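
    Of the tasks listed, the sampling step is the easiest to make concrete: Latin hypercube sampling stratifies each input dimension once, which suits the multiplicative chain of independent lognormal inputs singled out above. A sketch, not the report's implementation; the lognormal shape parameter is an assumption:

        import numpy as np
        from scipy.stats import lognorm

        def latin_hypercube(n_samples, n_dims, rng):
            # One stratified uniform draw per cell, shuffled independently
            # in each dimension.
            strata = np.tile(np.arange(n_samples), (n_dims, 1))
            u = (rng.permuted(strata, axis=1).T
                 + rng.uniform(size=(n_samples, n_dims))) / n_samples
            return u

        rng = np.random.default_rng(4)
        u = latin_hypercube(1000, 3, rng)
        x = lognorm(s=0.4).ppf(u)      # lognormal marginals (sigma assumed)
        chain = x.prod(axis=1)         # three-factor multiplicative chain
        print(chain.mean(), np.percentile(chain, 95))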

  18. Plasma-safety assessment model and safety analyses of ITER

    International Nuclear Information System (INIS)

    Honda, T.; Okazaki, T.; Bartels, H.-H.; Uckan, N.A.; Sugihara, M.; Seki, Y.

    2001-01-01

    A plasma-safety assessment model has been developed on the basis of the plasma physics database of the International Thermonuclear Experimental Reactor (ITER) to analyze events involving plasma behavior. The model was implemented in a safety analysis code (SAFALY), which consists of a 0-D dynamic plasma model and a 1-D thermal behavior model of the in-vessel components. Unusual plasma events of ITER, e.g., overfueling, were calculated using the code; plasma burning was found to be either self-bounded by operation limits or passively shut down by impurity ingress from overheated divertor targets. A sudden transition of the divertor plasma might lead to failure of the divertor target because of a sharp increase in heat flux. However, the effects of such an aggravated failure can be safely handled by the confinement boundaries. (author)

  19. Thermogravimetric and x-ray diffraction analyses of Luna-24 regolith samples

    International Nuclear Information System (INIS)

    Deshpande, V.V.; Dharwadkar, S.R.; Jakkal, V.S.

    1979-01-01

    Two samples of Luna-24 were analysed by X-ray diffraction and thermogravimetric (TG) techniques. Sample 24123.12 showed a weight loss of nearly 0.85 percent between 230 and 440 °C, followed by a 1.16 percent weight gain from 500 to 800 °C. Sample 23190.13 showed only a weight gain of about 1.5 percent from 500 to 900 °C. X-ray diffraction analyses show the presence of olivine, plagioclase, pigeonite, enstatite, and native iron in both virgin samples. In the heated samples, however, only the native iron was oxidized to iron oxide; the other constituents remained unaltered. (auth.)

  20. Brain areas associated with numbers and calculations in children: Meta-analyses of fMRI studies

    Directory of Open Access Journals (Sweden)

    Marie Arsalidou

    2018-04-01

    Full Text Available Children use numbers every day and typically receive formal mathematical training from an early age, as it is a main subject in school curricula. Despite an increase in neuroimaging studies of children, a comprehensive neuropsychological model of mathematical functions in children is lacking. Using quantitative meta-analyses of functional magnetic resonance imaging (fMRI) studies, we identify concordant brain areas across articles that adhere to a set of selection criteria (e.g., whole-brain analysis, coordinate reports) and report brain activity to tasks that involve processing symbolic and non-symbolic numbers with and without formal mathematical operations, which we called, respectively, number tasks and calculation tasks. We present data on children 14 years and younger who solved these tasks. Results show activity in parietal (e.g., inferior parietal lobule and precuneus) and frontal (e.g., superior and medial frontal gyri) cortices, core areas related to mental arithmetic, as well as brain regions such as the insula and claustrum, which are not typically discussed as part of mathematical problem solving models. We propose a topographical atlas of mathematical processes in children, discuss findings within a developmental constructivist theoretical model, and suggest practical methodological considerations for future studies. Keywords: Mathematical cognition, Meta-analyses, fMRI, Children, Development, Insula

  1. Zhang functions and various models

    CERN Document Server

    Zhang, Yunong

    2015-01-01

    This book focuses on solving different types of time-varying problems. It presents various Zhang dynamics (ZD) models by defining various Zhang functions (ZFs) in real and complex domains. It then provides theoretical analyses of such ZD models and illustrates their results. It also uses simulations to substantiate their efficacy and show the feasibility of the presented ZD approach (i.e., different ZFs leading to different ZD models), which is further applied to the repetitive motion planning (RMP) of redundant robots, showing its application potential.

  2. Comparative analyses reveal potential uses of Brachypodium distachyon as a model for cold stress responses in temperate grasses

    Directory of Open Access Journals (Sweden)

    Li Chuan

    2012-05-01

    Full Text Available Background: Little is known about the potential of Brachypodium distachyon as a model for low temperature stress responses in Pooideae. The ice recrystallization inhibition protein (IRIP) genes, fructosyltransferase (FST) genes, and many C-repeat binding factor (CBF) genes are Pooideae specific and important in low temperature responses. Here we used comparative analyses to study conservation and evolution of these gene families in B. distachyon to better understand its potential as a model species for agriculturally important temperate grasses. Results: Brachypodium distachyon contains cold responsive IRIP genes which have evolved through Brachypodium-specific gene family expansions. A large cold responsive CBF3 subfamily was identified in B. distachyon, while CBF4 homologs are absent from the genome. No B. distachyon FST gene homologs encode typical core Pooideae FST motifs, and low temperature induced fructan accumulation was dramatically different in B. distachyon compared to core Pooideae species. Conclusions: We conclude that B. distachyon can serve as an interesting model for specific molecular mechanisms involved in low temperature responses in core Pooideae species. However, the evolutionary history of key genes involved in low temperature responses has been different in Brachypodium and core Pooideae species. These differences limit the use of B. distachyon as a model for holistic studies relevant for agricultural core Pooideae species.

  3. Analysing User Lifetime in Voluntary Online Collaboration

    DEFF Research Database (Denmark)

    McHugh, Ronan; Larsen, Birger

    2010-01-01

    This paper analyses persuasion in online collaboration projects. It introduces a set of heuristics that can be applied to such projects and combines these with a quantitative analysis of user activity over time. Two example sites are studied, Open Street Map and The Pirate Bay. Results show that ...

  4. From global economic modelling to household level analyses of food security and sustainability: how big is the gap and can we bridge it?

    NARCIS (Netherlands)

    Wijk, van M.T.

    2014-01-01

    Policy and decision makers have to make difficult choices to improve the food security of local people against the background of drastic global and local changes. Ex-ante impact assessment using integrated models can help them with these decisions. This review analyses the state of affairs of the

  5. Lagrangian Coherent Structure Analysis of Terminal Winds: Three-Dimensionality, Intramodel Variations, and Flight Analyses

    Directory of Open Access Journals (Sweden)

    Brent Knutson

    2015-01-01

    Full Text Available We present a study of three-dimensional Lagrangian coherent structures (LCS) near the Hong Kong International Airport and relate them to previous developments of two-dimensional (2D) LCS analyses. The LCS are contrasted among three independent models and against 2D coherent Doppler light detection and ranging (LIDAR) data. Addition of the velocity information perpendicular to the LIDAR scanning cone helps solidify flow structures inferred from previous studies; contrast among models reveals the intramodel variability; and comparison with flight data evaluates the performance among models in terms of Lagrangian analyses. We find that, while the three models and the LIDAR do recover similar features of the windshear experienced by a landing aircraft (along the landing trajectory), their Lagrangian signatures over the entire domain are quite different: a portion of each numerical model captures certain features resembling those LCS extracted from independent 2D LIDAR analyses based on observations.

  6. Models of alien species richness show moderate predictive accuracy and poor transferability

    Directory of Open Access Journals (Sweden)

    César Capinha

    2018-06-01

    Full Text Available Robust predictions of alien species richness are useful to assess global biodiversity change. Nevertheless, the capacity to predict spatial patterns of alien species richness remains largely unassessed. Using 22 data sets of alien species richness from diverse taxonomic groups and covering various parts of the world, we evaluated whether different statistical models were able to provide useful predictions of absolute and relative alien species richness, as a function of explanatory variables representing geographical, environmental and socio-economic factors. Five state-of-the-art count data modelling techniques were used and compared: Poisson and negative binomial generalised linear models (GLMs), multivariate adaptive regression splines (MARS), random forests (RF) and boosted regression trees (BRT). We found that predictions of absolute alien species richness had a low to moderate accuracy in the region where the models were developed and a consistently poor accuracy in new regions. Predictions of relative richness performed better in both geographical settings, but still were not good. Flexible tree-ensemble techniques (RF and BRT) were shown to be significantly better at modelling alien species richness than parametric linear models (such as GLMs), despite the latter being more commonly applied for this purpose. Importantly, the poor spatial transferability of models also warrants caution in assuming the generality of the relationships they identify, e.g. by applying projections under future scenario conditions. Ultimately, our results strongly suggest that predictability of spatial variation in alien species richness is limited. The somewhat more robust ability to rank regions according to the number of aliens they have (i.e. relative richness) suggests that models of alien species richness may be useful for prioritising and comparing regions, but not for predicting exact species numbers.
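
    As a minimal sketch of this kind of model comparison, the code below fits three of the techniques named above to synthetic count data with scikit-learn; the data and settings are invented for illustration, and the negative binomial GLM is omitted because it has no direct scikit-learn equivalent.

        import numpy as np
        from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
        from sklearn.linear_model import PoissonRegressor
        from sklearn.metrics import mean_absolute_error
        from sklearn.model_selection import train_test_split

        # Synthetic stand-in for alien species richness: counts driven by a
        # nonlinear mix of environmental and socio-economic predictors.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(2000, 4))
        lam = np.exp(0.5 + 0.8 * X[:, 0] - 0.5 * X[:, 1] ** 2 + 0.3 * X[:, 2] * X[:, 3])
        y = rng.poisson(lam)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)
        models = {
            "Poisson GLM  ": PoissonRegressor(max_iter=300),
            "Random forest": RandomForestRegressor(random_state=1),
            "Boosted trees": GradientBoostingRegressor(random_state=1),
        }
        for name, m in models.items():
            m.fit(X_tr, y_tr)
            print(name, "MAE =", round(mean_absolute_error(y_te, m.predict(X_te)), 2))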

  7. Plectasin shows intracellular activity against Staphylococcus aureus in human THP-1 monocytes and in a mouse peritonitis model

    DEFF Research Database (Denmark)

    Brinch, Karoline Sidelmann; Sandberg, Anne; Baudoux, Pierre

    2009-01-01

    was maintained (maximal relative efficacy [E(max)], 1.0- to 1.3-log reduction in CFU) even though efficacy was inferior to that of extracellular killing (E(max), >4.5-log CFU reduction). Animal studies included a novel use of the mouse peritonitis model, exploiting extra- and intracellular differentiation assays...... concentration. These findings stress the importance of performing studies of extra- and intracellular activity since these features cannot be predicted from traditional MIC and killing kinetic studies. Application of both the THP-1 and the mouse peritonitis models showed that the in vitro results were similar...

  8. Weighing Evidence "Steampunk" Style via the Meta-Analyser.

    Science.gov (United States)

    Bowden, Jack; Jackson, Chris

    2016-10-01

    The funnel plot is a graphical visualization of summary data estimates from a meta-analysis, and is a useful tool for detecting departures from the standard modeling assumptions. Although perhaps not widely appreciated, a simple extension of the funnel plot can facilitate an intuitive interpretation of the mathematics underlying a meta-analysis at a more fundamental level, by equating it to determining the center of mass of a physical system. We used this analogy to explain the concepts of weighing evidence and of biased evidence to a young audience at the Cambridge Science Festival, without recourse to precise definitions or statistical formulas and with a little help from Sherlock Holmes! Following on from the science fair, we have developed an interactive web application (named the Meta-Analyser) to bring these ideas to a wider audience. We envisage that our application will be a useful tool for researchers when interpreting their data: first, to facilitate a simple understanding of fixed and random effects modeling approaches; second, to assess the importance of outliers; and third, to show the impact of adjusting for small study bias. This final aim is realized by introducing a novel graphical interpretation of the well-known method of Egger regression.
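
    The centre-of-mass analogy corresponds directly to the inverse-variance weighted mean. A minimal sketch with invented study estimates, including the common DerSimonian-Laird random-effects variant (not necessarily the estimator the Meta-Analyser itself uses):

        import numpy as np

        # Hypothetical study estimates y_i with within-study variances v_i.
        y = np.array([0.30, 0.15, 0.50, 0.10, 0.45])
        v = np.array([0.01, 0.04, 0.02, 0.03, 0.05])

        # Fixed effect: centre of mass with weights w_i = 1 / v_i.
        w = 1 / v
        fixed = np.sum(w * y) / np.sum(w)

        # Random effects (DerSimonian-Laird): inflate variances by tau^2.
        Q = np.sum(w * (y - fixed) ** 2)
        tau2 = max(0.0, (Q - (len(y) - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
        w_re = 1 / (v + tau2)
        random_eff = np.sum(w_re * y) / np.sum(w_re)

        print("fixed effect: %.3f +/- %.3f" % (fixed, np.sqrt(1 / w.sum())))
        print("random effects: %.3f (tau^2 = %.3f)" % (random_eff, tau2))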

  9. Using niche-modelling and species-specific cost analyses to determine a multispecies corridor in a fragmented landscape

    Science.gov (United States)

    Zurano, Juan Pablo; Selleski, Nicole; Schneider, Rosio G.

    2017-01-01

    types independent of the degree of legal protection. These data, used with multifocal GIS analyses, balance the varying degrees of overlap and the unique properties among them, allowing comprehensive conservation strategies to be developed relatively rapidly. Our comprehensive approach serves as a model for other regions faced with habitat loss and lack of data. The five carnivores focused on in our study have wide ranges, so the results from this study can be extended to and combined with those of surrounding countries, with analyses at the species or community level. PMID:28841692

  10. Understanding the Flow Physics of Shock Boundary-Layer Interactions Using CFD and Numerical Analyses

    Science.gov (United States)

    Friedlander, David J.

    2013-01-01

    Computational fluid dynamic (CFD) analyses of the University of Michigan (UM) Shock/Boundary-Layer Interaction (SBLI) experiments were performed as an extension of the CFD SBLI Workshop held at the 48th AIAA Aerospace Sciences Meeting in 2010. In particular, the UM Mach 2.75 Glass Tunnel with a semi-spanning 7.75° wedge was analyzed in an attempt to explore key physics pertinent to SBLIs, including thermodynamic and viscous boundary conditions as well as turbulence modeling. Most of the analyses were 3D CFD simulations using the OVERFLOW flow solver, with additional quasi-1D simulations performed with an in-house MATLAB code interfacing with the NIST REFPROP code to explore perfect versus non-ideal air. A fundamental exploration of the effects of particle image velocimetry (PIV) on post-processing of data is also shown. Results from the CFD simulations showed improved agreement with experimental data, with key contributions including adding a laminar zone upstream of the wedge and the necessity of mimicking PIV particle lag in comparisons. Results from the quasi-1D simulation showed that there was little difference between perfect and non-ideal air for the configuration presented.

  11. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    Science.gov (United States)

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
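
    As a small illustration of what such procedures compute, the sketch below approximates the power of the two-sided test of a Pearson correlation via the Fisher z transform, a textbook large-sample approximation; G*Power's own routines may use exact distributions.

        from math import atanh, sqrt
        from scipy.stats import norm

        def power_pearson_r(r_alt, n, alpha=0.05):
            """Approximate power of the two-sided test of H0: rho = 0, using
            the Fisher z transform: z_hat ~ Normal(atanh(rho), 1 / (n - 3))."""
            se = 1 / sqrt(n - 3)
            crit = norm.ppf(1 - alpha / 2) * se
            mu = atanh(r_alt)
            return norm.sf(crit, loc=mu, scale=se) + norm.cdf(-crit, loc=mu, scale=se)

        for n in (20, 50, 100):
            print(n, round(power_pearson_r(0.3, n), 3))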

  12. Multi-level Bayesian analyses for single- and multi-vehicle freeway crashes.

    Science.gov (United States)

    Yu, Rongjie; Abdel-Aty, Mohamed

    2013-09-01

    This study presents multi-level analyses for single- and multi-vehicle crashes on a mountainous freeway. Data from a 15-mile mountainous freeway section on I-70 were investigated. Both aggregate and disaggregate models for the two crash conditions were developed. Five years of crash data were used in the aggregate investigation, while the disaggregate models utilized one year of crash data along with real-time traffic and weather data. For the aggregate analyses, safety performance functions were developed for the purpose of revealing the contributing factors for each crash type. Two methodologies, a Bayesian bivariate Poisson-lognormal model and a Bayesian hierarchical Poisson model with correlated random effects, were estimated to simultaneously analyze the two crash conditions with consideration of possible correlations. Except for the factors related to geometric characteristics, two exposure parameters (annual average daily traffic and segment length) were included. Two different sets of significant explanatory and exposure variables were identified for the single-vehicle (SV) and multi-vehicle (MV) crashes. It was found that the Bayesian bivariate Poisson-lognormal model is superior to the Bayesian hierarchical Poisson model, the former having a substantially lower DIC and more significant variables. In addition to the aggregate analyses, microscopic real-time crash risk evaluation models were developed for the two crash conditions. Multi-level Bayesian logistic regression models were estimated with random parameters accounting for seasonal variations, crash-unit-level diversity and segment-level random effects capturing unobserved heterogeneity caused by the geometric characteristics. The model results indicate that the effects of the selected variables on crash occurrence vary across seasons and crash units, and that geometric characteristic variables contribute to the segment variations: the more unobserved heterogeneity is accounted for, the better.
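
    A minimal sketch of the model class, not the paper's exact specification: a hierarchical Poisson crash-frequency model with synthetic data and independent (rather than correlated) segment-level random effects, written for PyMC.

        import numpy as np
        import pymc as pm

        # Synthetic crash counts for 30 segments over 5 seasons each.
        rng = np.random.default_rng(3)
        n_seg = 30
        seg = np.repeat(np.arange(n_seg), 5)
        log_aadt = rng.normal(9.0, 0.3, size=seg.size)    # exposure covariate
        true_re = rng.normal(0.0, 0.4, size=n_seg)        # segment heterogeneity
        y = rng.poisson(np.exp(-8.5 + 0.9 * log_aadt + true_re[seg]))

        with pm.Model():
            beta0 = pm.Normal("beta0", 0.0, 5.0)
            beta1 = pm.Normal("beta1", 0.0, 1.0)
            sigma = pm.Exponential("sigma", 1.0)
            u = pm.Normal("u", 0.0, sigma, shape=n_seg)   # random effect per segment
            mu = pm.math.exp(beta0 + beta1 * log_aadt + u[seg])
            pm.Poisson("crashes", mu=mu, observed=y)
            idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

        print("posterior mean of sigma:", float(idata.posterior["sigma"].mean()))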

  13. Modeling patterns in count data using loglinear and related models

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1995-12-01

    This report explains the use of loglinear and logit models for analyzing Poisson and binomial counts in the presence of explanatory variables. The explanatory variables may be unordered categorical variables or numerical variables, or both. The report shows how to construct models to fit data, and how to test whether a model is too simple or too complex. The appropriateness of the methods with small data sets is discussed. Several example analyses, using the SAS computer package, illustrate the methods.
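
    A minimal loglinear example in the same spirit, using statsmodels rather than SAS; the data are synthetic, and comparing the deviance to the residual degrees of freedom is one standard check that a model is not too simple.

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm
        import statsmodels.formula.api as smf

        # Synthetic failure counts by component type and operating hours.
        rng = np.random.default_rng(7)
        df = pd.DataFrame({
            "ctype": rng.choice(["pump", "valve"], size=200),
            "hours": rng.uniform(1e3, 1e4, size=200),
        })
        rate = np.where(df["ctype"] == "pump", 2e-4, 5e-5)
        df["count"] = rng.poisson(rate * df["hours"])

        # Poisson loglinear model; log(hours) enters as an exposure offset.
        fit = smf.glm("count ~ ctype", data=df, family=sm.families.Poisson(),
                      offset=np.log(df["hours"])).fit()
        print(fit.params)
        print("deviance:", round(fit.deviance, 1), "on", fit.df_resid, "df")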

  14. Vortexlet models of flapping flexible wings show tuning for force production and control

    International Nuclear Information System (INIS)

    Mountcastle, A M; Daniel, T L

    2010-01-01

    Insect wings are compliant structures that experience deformations during flight. Such deformations have recently been shown to substantially affect induced flows, with appreciable consequences to flight forces. However, there are open questions related to the aerodynamic mechanisms underlying the performance benefits of wing deformation, as well as the extent to which such deformations are determined by the boundary conditions governing wing actuation together with mechanical properties of the wing itself. Here we explore aerodynamic performance parameters of compliant wings under periodic oscillations, subject to changes in phase between wing elevation and pitch, and magnitude and spatial pattern of wing flexural stiffness. We use a combination of computational structural mechanics models and a 2D computational fluid dynamics approach to ask how aerodynamic force production and control potential are affected by pitch/elevation phase and variations in wing flexural stiffness. Our results show that lift and thrust forces are highly sensitive to flexural stiffness distributions, with performance optima that lie in different phase regions. These results suggest a control strategy for both flying animals and engineering applications of micro-air vehicles.

  15. RELAP5 analyses of overcooling transients in a pressurized water reactor

    International Nuclear Information System (INIS)

    Bolander, M.A.; Fletcher, C.D.; Ogden, D.M.; Stitt, B.D.; Waterman, M.E.

    1983-01-01

    In support of the Pressurized Thermal Shock Integration Study sponsored by the United States Nuclear Regulatory Commission, the Idaho National Engineering Laboratory has performed analyses of overcooling transients using the RELAP5/MOD1.5 computer code. These analyses were performed for Oconee Plants 1 and 3, which are pressurized water reactors of Babcock and Wilcox lowered-loop design. Results of the RELAP5 analyses are presented, including a comparison with plant data. The capabilities and limitations of the RELAP5/MOD1.5 computer code in analyzing integral plant transients are examined. These analyses require detailed thermal-hydraulic and control system computer models

  16. A shock absorber model for structure-borne noise analyses

    Science.gov (United States)

    Benaziz, Marouane; Nacivet, Samuel; Thouverez, Fabrice

    2015-08-01

    Shock absorbers are often responsible for undesirable structure-borne noise in cars. The early numerical prediction of this noise in the automobile development process can save time and money and yet remains a challenge for industry. In this paper, a new approach to predicting shock absorber structure-borne noise is proposed; it consists in modelling the shock absorber and including the main nonlinear phenomena responsible for discontinuities in the response. The model set forth herein features: compressible fluid behaviour, nonlinear flow rate-pressure relations, valve mechanical equations and rubber mounts. The piston, base valve and complete shock absorber model are compared with experimental results. Sensitivity of the shock absorber response is evaluated and the most important parameters are classified. The response envelope is also computed. This shock absorber model is able to accurately reproduce local nonlinear phenomena and improves our state of knowledge on potential noise sources within the shock absorber.

  17. Do phase-shift analyses and nucleon-nucleon potential models yield the wrong 3Pj phase shifts at low energies?

    International Nuclear Information System (INIS)

    Tornow, W.; Witala, H.; Kievsky, A.

    1998-01-01

    The 4PJ waves in nucleon-deuteron scattering were analyzed using proton-deuteron and neutron-deuteron data at EN = 3 MeV. New sets of nucleon-nucleon 3Pj phase shifts were obtained that may lead to a better understanding of the long-standing Ay(θ) puzzle in nucleon-deuteron elastic scattering. However, these sets of 3Pj phase shifts are quite different from the ones determined from both global phase-shift analyses of nucleon-nucleon data and nucleon-nucleon potential models. copyright 1998 The American Physical Society

  18. Pathophysiological analyses of periventricular nodular heterotopia using gyrencephalic mammals.

    Science.gov (United States)

    Matsumoto, Naoyuki; Hoshiba, Yoshio; Morita, Kazuya; Uda, Natsu; Hirota, Miwako; Minamikawa, Maki; Ebisu, Haruka; Shinmyo, Yohei; Kawasaki, Hiroshi

    2017-03-15

    Although periventricular nodular heterotopia (PNH) is often found in the cerebral cortex of people with thanatophoric dysplasia (TD), the pathophysiology of PNH in TD is largely unknown. This is mainly because of difficulties in obtaining brain samples of TD patients and a lack of appropriate animal models for analyzing the pathophysiology of PNH in TD. Here we investigate the pathophysiological mechanisms of PNH in the cerebral cortex of TD by utilizing a ferret TD model which we recently developed. To make TD ferrets, we electroporated fibroblast growth factor 8 (FGF8) into the cerebral cortex of ferrets. Our immunohistochemical analyses showed that PNH nodules in the cerebral cortex of TD ferrets were mostly composed of cortical neurons, including upper layer neurons and GABAergic neurons. We also found disorganizations of radial glial fibers and of the ventricular lining in the TD ferret cortex, indicating that PNH may result from defects in radial migration of cortical neurons along radial glial fibers during development. Our findings provide novel mechanistic insights into the pathogenesis of PNH in TD. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  19. Modelling, singular perturbation and bifurcation analyses of bitrophic food chains.

    Science.gov (United States)

    Kooi, B W; Poggiale, J C

    2018-04-20

    Two predator-prey model formulations are studied: the classical Rosenzweig-MacArthur (RM) model and the Mass Balance (MB) chemostat model. When the growth and loss rates of the predator are much smaller than those of the prey, these models are slow-fast systems, leading mathematically to a singular perturbation problem. In contrast to the RM model, the resource for the prey is modelled explicitly in the MB model, but this comes with additional parameters. These parameter values are chosen such that the two models become easy to compare. Both models exhibit a transcritical bifurcation, a threshold above which invasion of the predator into the prey-only system occurs, and a Hopf bifurcation, where the interior equilibrium becomes unstable, leading to a stable limit cycle. The fast-slow limit cycles are called relaxation oscillations, which for increasing differences in time scales approach the well-known degenerate trajectories: concatenations of slow and fast parts of the trajectory. In the fast-slow version of the RM model a canard explosion of the stable limit cycles occurs in the oscillatory region of the parameter space. To our knowledge this type of dynamics has not previously been observed for the RM model, nor even for more complex ecosystem models. When a bifurcation parameter crosses the Hopf bifurcation point, the amplitude of the emerging stable limit cycles increases. However, depending on the perturbation parameter, the shape of this limit cycle changes abruptly from one consisting of two concatenated slow and fast episodes with small amplitude, to one with large amplitude similar to the relaxation oscillation, the well-known degenerate phase trajectory consisting of four episodes (a concatenation of two slow and two fast). The canard explosion point is accurately predicted by using an extended asymptotic expansion technique in the perturbation and bifurcation parameters simultaneously.
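
    A minimal sketch of the fast-slow RM system, with hypothetical dimensionless parameters chosen so the interior equilibrium is unstable; for small ε the attracting limit cycle develops the relaxation-oscillation character described above.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Dimensionless Rosenzweig-MacArthur model with time-scale separation:
        #   x' = x(1 - x) - x y / (a + x)        (fast prey)
        #   y' = eps * y * (x / (a + x) - d)     (slow predator)
        a, d, eps = 0.3, 0.45, 0.01

        def rhs(t, z):
            x, y = z
            return [x * (1 - x) - x * y / (a + x),
                    eps * y * (x / (a + x) - d)]

        t = np.linspace(0.0, 3000.0, 6000)
        sol = solve_ivp(rhs, (0.0, 3000.0), [0.8, 0.3], t_eval=t,
                        rtol=1e-8, atol=1e-10)
        x = sol.y[0]
        print("late-cycle prey range: %.3f to %.3f" % (x[3000:].min(), x[3000:].max()))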

  20. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    Science.gov (United States)

    Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.

  1. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    Directory of Open Access Journals (Sweden)

    Arika Ligmann-Zielinska

    Full Text Available Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.

  2. Epidemiology of HPV 16 and cervical cancer in Finland and the potential impact of vaccination: mathematical modelling analyses.

    Directory of Open Access Journals (Sweden)

    Ruanne V Barnabas

    2006-05-01

    Full Text Available BACKGROUND: Candidate human papillomavirus (HPV) vaccines have demonstrated almost 90%-100% efficacy in preventing persistent, type-specific HPV infection over 18 mo in clinical trials. If these vaccines go on to demonstrate prevention of precancerous lesions in phase III clinical trials, they will be licensed for public use in the near future. How these vaccines will be used in countries with national cervical cancer screening programmes is an important question. METHODS AND FINDINGS: We developed a transmission model of HPV 16 infection and progression to cervical cancer and calibrated it to Finnish HPV 16 seroprevalence over time. The model was used to estimate the transmission probability of the virus, to look at the effect of changes in patterns of sexual behaviour and smoking on age-specific trends in cancer incidence, and to explore the impact of HPV 16 vaccination. We estimated a high per-partnership transmission probability of HPV 16, of 0.6. The modelling analyses showed that changes in sexual behaviour and smoking accounted, in part, for the increase seen in cervical cancer incidence in 35- to 39-y-old women from 1990 to 1999. At both low (10%, in opportunistic immunisation) and high (90%, in a national immunisation programme) coverage of the adolescent population, vaccinating women and men had little benefit over vaccinating women alone. We estimate that vaccinating 90% of young women before sexual debut has the potential to decrease HPV type-specific (e.g., type 16) cervical cancer incidence by 91%. If older women are more likely to have persistent infections and progress to cancer, then vaccination with a duration of protection of less than 15 y could result in an older susceptible cohort and no decrease in cancer incidence. While vaccination has the potential to significantly reduce type-specific cancer incidence, its combination with screening further improves cancer prevention. CONCLUSIONS: HPV vaccination has the potential to significantly reduce type-specific cervical cancer incidence, particularly when combined with screening.
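
    The paper's transmission model is not reproduced here, but a deliberately schematic SIS-type sketch shows how vaccination coverage enters this class of models; all rates are hypothetical and per year.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Schematic single-type dynamics with vaccination at cohort entry.
        beta, gamma, mu = 1.2, 0.8, 1 / 35  # transmission, clearance, turnover

        def rhs(t, z, coverage, efficacy):
            S, I = z
            entry = mu * (1 - coverage * efficacy)  # unprotected entrants
            return [entry - beta * S * I - mu * S + gamma * I,
                    beta * S * I - (gamma + mu) * I]

        for cov in (0.0, 0.1, 0.9):
            sol = solve_ivp(rhs, (0.0, 300.0), [0.95, 0.05], args=(cov, 0.95))
            print("coverage %.0f%%: prevalence ~ %.3f" % (100 * cov, sol.y[1, -1]))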

  3. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; van Deursen, A.; Pinzger, M.

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require consistent updates to both the implementation and the feature model. In this context, the evolution of the feature model closely follows the evolution of the system.

  4. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require consistent updates to both the implementation and the feature model. In this context, the evolution of the feature model closely follows the evolution of the system.

  5. Modeling theoretical uncertainties in phenomenological analyses for particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)

    2017-04-15

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)
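
    For the nuisance ("bias") treatment, the p value is in effect maximised over the allowed bias range, so a theory uncertainty Δ simply subtracts from the observed discrepancy. A one-parameter sketch of that scan (values hypothetical; the paper's adaptive variant additionally varies the range Δ with the significance considered):

        from math import erf, sqrt

        def p_value_with_bias(x_obs, mu_test, sigma_stat, delta_max):
            """Two-sided p value for mu_test, maximised over a theory bias
            delta in [-delta_max, +delta_max] (nuisance approach)."""
            # The most favourable bias shifts the prediction toward the data.
            z = max(0.0, abs(x_obs - mu_test) - delta_max) / sigma_stat
            return 1 - erf(z / sqrt(2))

        # Hypothetical measurement 1.00 +/- 0.10 (stat), theory bias up to 0.15.
        for mu in (1.0, 1.2, 1.4):
            print(mu, round(p_value_with_bias(1.00, mu, 0.10, 0.15), 4))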

  6. Exergetic and thermoeconomic analyses of power plants

    International Nuclear Information System (INIS)

    Kwak, H.-Y.; Kim, D.-J.; Jeon, J.-S.

    2003-01-01

    Exergetic and thermoeconomic analyses were performed for a 500-MW combined cycle plant. In these analyses, mass and energy conservation laws were applied to each component of the system. Quantitative balances of the exergy and exergetic cost for each component, and for the whole system, were carefully considered. The exergoeconomic model, which represented the productive structure of the system considered, was used to visualize the cost formation process and the productive interaction between components. The computer program developed in this study can determine the production costs of power plants, such as gas- and steam-turbine plants and gas-turbine cogeneration plants. The program can also be used to study plant characteristics, namely, thermodynamic performance and sensitivity to changes in process and/or component design variables.

  7. Modelling and Verification of Web Services Business Activity Protocol

    DEFF Research Database (Denmark)

    Ravn, Anders Peter; Srba, Jiri; Vighio, Saleem

    2011-01-01

    WS-Business Activity specification defines two coordination protocols in order to ensure a consistent agreement on the outcome of long-running distributed applications. We use the model checker Uppaal to analyse the Business Agreement with Coordination Completion protocol type. Our analyses show ...

  8. Finite element analyses of a linear-accelerator electron gun

    Science.gov (United States)

    Iqbal, M.; Wasy, A.; Islam, G. U.; Zhou, Z.

    2014-02-01

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in the computer-aided three-dimensional interactive application (CATIA) for finite element analyses through the ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun has been operating continuously since commissioning, without any thermally induced failures, in the BEPCII linear accelerator.

  9. Finite element analyses of a linear-accelerator electron gun

    Energy Technology Data Exchange (ETDEWEB)

    Iqbal, M., E-mail: muniqbal.chep@pu.edu.pk, E-mail: muniqbal@ihep.ac.cn [Centre for High Energy Physics, University of the Punjab, Lahore 45590 (Pakistan); Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Wasy, A. [Department of Mechanical Engineering, Changwon National University, Changwon 641773 (Korea, Republic of); Islam, G. U. [Centre for High Energy Physics, University of the Punjab, Lahore 45590 (Pakistan); Zhou, Z. [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China)

    2014-02-15

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in the computer-aided three-dimensional interactive application (CATIA) for finite element analyses through the ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun has been operating continuously since commissioning, without any thermally induced failures, in the BEPCII linear accelerator.

  10. Finite element analyses of a linear-accelerator electron gun

    International Nuclear Information System (INIS)

    Iqbal, M.; Wasy, A.; Islam, G. U.; Zhou, Z.

    2014-01-01

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in the computer-aided three-dimensional interactive application (CATIA) for finite element analyses through the ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun has been operating continuously since commissioning, without any thermally induced failures, in the BEPCII linear accelerator.

  11. Climate Modelling Shows Increased Risk to Eucalyptus sideroxylon on the Eastern Coast of Australia Compared to Eucalyptus albens

    Directory of Open Access Journals (Sweden)

    Farzin Shabani

    2017-11-01

    Full Text Available Aim: To identify the extent and direction of range shift of Eucalyptus sideroxylon and E. albens in Australia by 2050 through an ensemble forecast of four species distribution models (SDMs), each generated using four global climate models (GCMs) under two representative concentration pathways (RCPs). Location: Australia. Methods: We used four SDMs, (i) a generalized linear model, (ii) MaxEnt, (iii) random forest, and (iv) boosted regression trees, to construct SDMs for the species E. sideroxylon and E. albens under four GCMs, (a) MRI-CGCM3, (b) MIROC5, (c) HadGEM2-AO and (d) CCSM4, under the two RCPs of 4.5 and 6.0. Here, the true skill statistic (TSS) index was used to assess the accuracy of each SDM. Results: Results showed that E. albens and E. sideroxylon will lose large areas of their current suitable range by 2050, and E. sideroxylon is projected to gain range in eastern and southeastern Australia. Some areas were also projected to remain suitable for each species between now and 2050. Our modelling showed that E. sideroxylon will lose suitable habitat on the western side and will not gain any on the eastern side because this region is one of the most heavily populated areas in the country, and the populated areas are moving westward. The predicted decrease in E. sideroxylon’s distribution suggests that land managers should monitor its population closely and evaluate whether it meets criteria for a protected legal status. Main conclusions: Both Eucalyptus sideroxylon and E. albens will be negatively affected by climate change, and it is projected that E. sideroxylon will be at greater risk of losing habitat than E. albens.

  12. Comparison study of inelastic analyses for high temperature structure subjected to cyclic creep loading

    International Nuclear Information System (INIS)

    Kim, J. B.; Lee, H. Y.; Lee, J. H.

    2002-01-01

    It is necessary to develop a reliable numerical analysis method to simulate the plasticity and creep behavior of LMR high temperature structures. Since general purpose finite element analysis codes such as ABAQUS and ANSYS provide various models for plastic hardening and a creep equation of Norton's power-law type, it is possible to perform separate viscoplasticity analyses. In this study, a high temperature structural analysis program (NONSTA-VP), implementing Chaboche's unified viscoplasticity equation in ABAQUS, has been developed, and the viscoplastic response of a 316 SS plate with a circular hole subjected to cyclic creep loading has been analyzed. The results of the separate viscoplasticity analyses and the unified viscoplasticity analysis using NONSTA-VP have been compared; the results from NONSTA-VP show pronounced stress relaxation and creep behavior during hold time compared to those from the separate viscoplasticity analyses. Also, the unified approach is anticipated to reduce the conservatism arising from using an elastic approach for creep-fatigue damage analysis, since the stress range and the strain range from the unified viscoplasticity analysis are greatly reduced compared to those from the separate viscoplasticity analyses and the elastic analysis.
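
    As a small illustration of the non-unified ingredients involved, the sketch below integrates Norton power-law creep through a strain-hold period, during which elastic stress relaxes as creep strain accumulates; all constants are hypothetical, not values from NONSTA-VP.

        from scipy.integrate import solve_ivp

        # Stress relaxation during a strain hold: total strain is fixed, so
        # elastic strain converts to creep strain and sigma' = -E * A * sigma**n
        E = 150e3            # Young's modulus at temperature [MPa] (hypothetical)
        A, n = 1.0e-18, 5.0  # Norton power-law constants (hypothetical)

        def rhs(t, y):
            sigma = y[0]
            return [-E * A * sigma ** n]

        sol = solve_ivp(rhs, (0.0, 3600.0), [120.0], max_step=10.0)
        print("stress after a 1 h hold: %.1f MPa" % sol.y[0, -1])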

  13. BN-600 hybrid core benchmark analyses

    International Nuclear Information System (INIS)

    Kim, Y.I.; Stanculescu, A.; Finck, P.; Hill, R.N.; Grimm, K.N.

    2003-01-01

    Benchmark analyses for the hybrid BN-600 reactor, which contains three uranium enrichment zones and one plutonium zone in the core, have been performed within the frame of an IAEA sponsored Coordinated Research Project. The results for several relevant reactivity parameters obtained by the participants with their own state-of-the-art basic data and codes were compared in terms of calculational uncertainty, and their effects on the ULOF transient behavior of the hybrid BN-600 core were evaluated. The comparison of the diffusion and transport results obtained for the homogeneous representation generally shows good agreement for most parameters between the RZ and HEX-Z models. The burnup effect and the heterogeneity effect on most reactivity parameters also show good agreement for the HEX-Z diffusion and transport theory results. A large difference noticed for the sodium and steel density coefficients is mainly due to differences in the spatial coefficient predictions for non-fuelled regions. The burnup reactivity loss was evaluated to be 0.025 (4.3 $) within a ∼ 5.0% standard deviation. The heterogeneity effect on most reactivity coefficients was estimated to be small. The heterogeneity treatment reduced the control rod worth by 2.3%. The heterogeneity effect on the k-eff and control rod worth appeared to differ strongly depending on the heterogeneity treatment method. A substantial spread noticed for several reactivity coefficients did not have a significant impact on the transient behavior prediction. This result is attributable to compensating effects between several reactivity effects and the specific design of the partially MOX fuelled hybrid core. (author)

  14. MONTE CARLO ANALYSES OF THE YALINA THERMAL FACILITY WITH SERPENT STEREOLITHOGRAPHY GEOMETRY MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, Y.

    2015-01-01

    This paper analyzes the YALINA Thermal subcritical assembly of Belarus using two different Monte Carlo transport programs, SERPENT and MCNP. The MCNP model is based on combinatorial geometry and a universe hierarchy, while the SERPENT model is based on stereolithography geometry. The latter consists of unstructured triangulated surfaces defined by their normals and vertices. This geometry format is used by 3D printers; here it was created using the CUBIT software, MATLAB scripts, and C code. All the Monte Carlo simulations have been performed using the ENDF/B-VII.0 nuclear data library. Both MCNP and SERPENT share the same geometry specifications, which describe the facility details without using any material homogenization. Three different configurations have been studied, using 216, 245, or 280 fuel rods, respectively. The numerical simulations show that the agreement between SERPENT and MCNP results is within a few tens of pcm.

  15. Quantitative Prediction of Coalbed Gas Content Based on Seismic Multiple-Attribute Analyses

    Directory of Open Access Journals (Sweden)

    Renfang Pan

    2015-09-01

    Full Text Available Accurate prediction of the planar distribution of gas is crucial to the selection and development of new CBM exploration areas. Based on seismic attributes, well logging and testing data, we found that seismic absorption attenuation, after eliminating the effects of burial depth, shows an evident (positive) correlation with CBM gas content; structure curvature has a negative correlation with gas content; and density has a negative correlation with gas content. It is feasible to use the hydrocarbon index (P*G) and pseudo-Poisson ratio attributes for detection of gas enrichment zones. Based on seismic multiple-attribute analyses, a multiple linear regression equation was established between the seismic attributes and gas content at the drilling wells. Application of this equation to the seismic attributes at locations other than the drilling wells yielded a quantitative prediction of planar gas distribution. Prediction calculations were performed for two different models, one using pre-stack inversion and the other disregarding pre-stack inversion. A comparison of the results indicates that both models predicted a similar trend for gas content distribution, except that the model using pre-stack inversion yielded a prediction result with considerably higher precision than the other model.
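
    The calibration step described above is an ordinary least-squares fit at the wells, then applied to attribute grids away from them. A minimal sketch with synthetic attributes and hypothetical coefficients:

        import numpy as np

        # Synthetic well calibration: gas content vs. three seismic attributes
        # (absorption attenuation, curvature, density) at 12 hypothetical wells.
        rng = np.random.default_rng(5)
        atten = rng.uniform(0.2, 0.8, 12)
        curv = rng.uniform(-1.0, 1.0, 12)
        dens = rng.uniform(2.3, 2.7, 12)
        gas = 8 + 6 * atten - 2 * curv - 3 * (dens - 2.5) + rng.normal(0, 0.3, 12)

        # Least-squares fit of gas = b0 + b1*atten + b2*curv + b3*dens.
        X = np.column_stack([np.ones(12), atten, curv, dens])
        coef, *_ = np.linalg.lstsq(X, gas, rcond=None)
        print("coefficients:", np.round(coef, 2))

        # Apply the calibrated equation to attributes away from the wells.
        grid = np.column_stack([np.ones(4), [0.5] * 4,
                                [0.1, -0.2, 0.4, 0.0], [2.4] * 4])
        print("predicted gas content:", np.round(grid @ coef, 2))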

  16. State of the art in establishing computed models of adsorption processes to serve as a basis of radionuclide migration assessment for safety analyses

    International Nuclear Information System (INIS)

    Koss, V.

    1991-01-01

    An important point in the safety analysis of an underground repository is the adsorption of radionuclides in the overlying cover. Adsorption may be judged according to experimental results or to model calculations. Because of the reliability required in safety analyses, it is necessary to support experimental results with theoretical calculations. At present, there is no single thermodynamic model of adsorption that is generally agreed on. Therefore, this work reviews existing equilibrium models of adsorption. Limitations of the Kd concept and of the adsorption isotherms of Freundlich and Langmuir are discussed. The surface ionisation and complexation electric double layer (EDL) model is explained in full, as is the criticism of this model. The application of simple surface complexation models to adsorption experiments in natural systems is stressed, as is experimental and modelling work on systems from Gorleben. Hints are given on how to deal with modelling of adsorption related to Gorleben systems in the future. (orig.) [de
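
    For scale, the Kd concept enters transport assessment through the retardation factor R = 1 + (rho_b / theta) * Kd, which is exactly where its linear, reversible-equilibrium assumption matters. A minimal sketch with hypothetical sorption values (not Gorleben data):

        # Retardation factor implied by a constant-Kd model.
        rho_b = 1.6   # bulk density [kg/L] (hypothetical aquifer sand)
        theta = 0.35  # porosity [-]

        for nuclide, kd in {"I-129": 0.0, "Sr-90": 15.0, "Cs-137": 280.0}.items():
            R = 1 + (rho_b / theta) * kd  # kd in L/kg
            print("%s: R = %8.1f (migration slowed %.0f-fold)" % (nuclide, R, R))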

  17. Analysing hydro-mechanical behaviour of reinforced slopes through centrifuge modelling

    Science.gov (United States)

    Veenhof, Rick; Wu, Wei

    2017-04-01

    Every year, slope instability causes casualties and damage to property and the environment. The behaviour of slopes during and after such events is complex and depends on meteorological conditions, slope geometry, hydro-mechanical soil properties, boundary conditions and the initial state of the soils. This study describes the effects of adding reinforcement, consisting of randomly distributed polyolefin monofilament fibres or Ryegrass (Lolium) roots, on the behaviour of medium-fine sand in loose and medium dense conditions. Direct shear tests were performed on sand specimens with different void ratios, water contents and fibre or root densities. To simulate the stress state of full-scale field situations, centrifuge model tests were conducted on sand specimens with different slope angles, thicknesses of the reinforced layer, fibre densities, void ratios and water contents. An increase in peak shear strength is observed in all reinforced cases. Centrifuge tests show that for reinforced slopes the period until failure is extended. The location of shear band formation and patch displacement behaviour indicate that the design of slope reinforcement has a significant effect on the failure behaviour. Future research will focus on the effect of plant water uptake on soil cohesion.

  18. Physics Analyses in the Design of the HFIR Cold Neutron Source

    International Nuclear Information System (INIS)

    Bucholz, J.A.

    1999-01-01

    Physics analyses have been performed to characterize the performance of the cold neutron source to be installed in the High Flux Isotope Reactor at the Oak Ridge National Laboratory in the near future. This paper provides a description of the physics models developed, and the resulting analyses that have been performed to support the design of the cold source. These analyses have provided important parametric performance information, such as cold neutron brightness down the beam tube and the various component heat loads, that have been used to develop the reference cold source concept

  19. Ab Initio Modeling Of Friction Reducing Agents Shows Quantum Mechanical Interactions Can Have Macroscopic Manifestation.

    Science.gov (United States)

    Hernández Velázquez, J D; Barroso-Flores, J; Gama Goicochea, A

    2016-11-23

    Two of the most commonly encountered friction-reducing agents used in plastic sheet production are the amides known as erucamide and behenamide, which, despite being almost identical chemically, lead to markedly different values of the friction coefficient. To understand the origin of this contrasting behavior, in this work we model brushes made of these two types of linear-chain molecules using quantum mechanical numerical simulations within density functional theory at the B97D/6-31G(d,p) level. Four chains of erucamide and behenamide were linked to a 2 × 10 zigzag graphene sheet and optimized both in vacuum and in continuous solvent using the SMD implicit solvation model. We find that erucamide chains tend to remain closer together through π-π stacking interactions arising from the double bonds located at C13-C14, a feature behenamide lacks, and thus a more spread-out configuration is obtained with the latter. It is argued that this arrangement of the erucamide chains is responsible for the lower friction coefficient of erucamide brushes, compared with behenamide brushes, which is a macroscopic consequence of cooperative quantum mechanical interactions. While only quantum-level interactions are modeled here, we show that behenamide chains are more spread out in the brush than erucamide chains as a consequence of those interactions. The spread-out configuration allows more solvent particles to penetrate the brush, leading in turn to more friction, in agreement with macroscopic measurements and mesoscale simulations of the friction coefficient reported in the literature.

  20. Meta-analytic structural equation modelling

    CERN Document Server

    Jak, Suzanne

    2015-01-01

    This book explains how to employ MASEM, the combination of meta-analysis (MA) and structural equation modelling (SEM). It shows how, by using MASEM, a single model can be tested to explain the relationships between a set of variables in several studies. The book gives an introduction to MASEM, with a focus on the state-of-the-art approach: the two-stage approach of Cheung and of Cheung & Chan. Both the fixed and the random approach to MASEM are illustrated with two applications to real data. All steps that have to be taken to perform the analyses are discussed extensively. All data and syntax files are available online, so that readers can replicate all analyses. By using SEM for meta-analysis, this book shows how to benefit from all available information from all available studies, even if few or none of the studies report on all relationships that feature in the full model of interest.

  1. Transformation (normalization) of slope gradient and surface curvatures, automated for statistical analyses from DEMs

    Science.gov (United States)

    Csillik, O.; Evans, I. S.; Drăguţ, L.

    2015-03-01

    Automated procedures are developed to alleviate long tails in frequency distributions of morphometric variables. They minimize the skewness of slope gradient frequency distributions, and modify the kurtosis of profile and plan curvature distributions toward that of the Gaussian (normal) model. Box-Cox (for slope) and arctangent (for curvature) transformations are tested on nine digital elevation models (DEMs) of varying origin and resolution, and different landscapes, and shown to be effective. Resulting histograms are illustrated and show considerable improvements over those for previously recommended slope transformations (sine, square root of sine, and logarithm of tangent). Unlike previous approaches, the proposed method evaluates the frequency distribution of slope gradient values in a given area and applies the most appropriate transform if required. Sensitivity of the arctangent transformation is tested, showing that Gaussian-kurtosis transformations are acceptable also in terms of histogram shape. Cube root transformations of curvatures produced bimodal histograms. The transforms are applicable to morphometric variables and many others with skewed or long-tailed distributions. By avoiding long tails and outliers, they permit parametric statistics such as correlation, regression and principal component analyses to be applied, with greater confidence that requirements for linearity, additivity and even scatter of residuals (constancy of error variance) are likely to be met. It is suggested that such transformations should be routinely applied in all parametric analyses of long-tailed variables. Our Box-Cox and curvature automated transformations are based on a Python script, implemented as an easy-to-use script tool in ArcGIS.
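
    A minimal sketch of both transformations on synthetic long-tailed data with SciPy; the authors' ArcGIS/Python tool is not reproduced, and the kurtosis-matching search below is just one simple way to tune the arctangent scaling.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(11)
        slope = rng.lognormal(mean=1.0, sigma=0.8, size=10_000)  # long tail, > 0
        curvature = rng.standard_t(df=3, size=10_000)            # heavy tails

        # Box-Cox picks lambda to make the slope distribution most normal.
        slope_t, lam = stats.boxcox(slope)
        print("lambda = %.3f, skew %.2f -> %.2f"
              % (lam, stats.skew(slope), stats.skew(slope_t)))

        # Arctangent transform for curvature: tune the scale s so the excess
        # kurtosis of arctan(curvature / s) approaches the Gaussian value 0.
        best_s = min(np.geomspace(0.01, 100, 200),
                     key=lambda s: abs(stats.kurtosis(np.arctan(curvature / s))))
        print("scale = %.3f, kurtosis %.2f -> %.2f"
              % (best_s, stats.kurtosis(curvature),
                 stats.kurtosis(np.arctan(curvature / best_s))))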

  2. Cloning, characterisation, and comparative quantitative expression analyses of receptor for advanced glycation end products (RAGE) transcript forms.

    Science.gov (United States)

    Sterenczak, Katharina A; Willenbrock, Saskia; Barann, Matthias; Klemke, Markus; Soller, Jan T; Eberle, Nina; Nolte, Ingo; Bullerdiek, Jörn; Murua Escobar, Hugo

    2009-04-01

    RAGE is a member of the immunoglobulin superfamily of cell surface molecules playing key roles in pathophysiological processes, e.g. immune/inflammatory disorders, Alzheimer's disease, diabetic arteriosclerosis and tumourigenesis. In humans, 19 naturally occurring RAGE splicing variants resulting in either N-terminally or C-terminally truncated proteins have been identified and are lately discussed as mechanisms for receptor regulation. Accordingly, deregulation of sRAGE levels has been associated with several diseases, e.g. Alzheimer's disease, Type 1 diabetes, and rheumatoid arthritis. Administration of recombinant sRAGE to animal models of cancer blocked tumour growth successfully. In spite of its obvious relationship to cancer and metastasis, data focusing on sRAGE deregulation in tumours are rare. In this study we screened a set of tumours, healthy tissues and various cancer cell lines for RAGE splicing variants and analysed their structure. Additionally, we analysed the ratio of the most abundant transcript variants using quantitative real-time PCR. In total we characterised 24 previously undescribed canine and 4 human RAGE splicing variants, analysed their structure, classified their characteristics, and derived their respective protein forms. Interestingly, the healthy and the neoplastic tissue samples mostly showed RAGE transcripts coding for the complete receptor, together with transcripts carrying insertions of intron 1.
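
    The record does not give its quantification details; a common way to compare the abundance of two transcript forms by quantitative real-time PCR is the 2^-ΔΔCt method, sketched below with invented Ct values purely for illustration.

        def relative_expression(ct_target, ct_reference, ct_target_cal, ct_reference_cal):
            """Standard 2^-ddCt relative quantification: normalize the target Ct to a
            reference gene, then to a calibrator sample."""
            d_ct_sample = ct_target - ct_reference
            d_ct_calibrator = ct_target_cal - ct_reference_cal
            dd_ct = d_ct_sample - d_ct_calibrator
            return 2.0 ** (-dd_ct)

        # Hypothetical: intron-1-carrying variant vs. full-length transcript in one tissue.
        fold = relative_expression(ct_target=27.4, ct_reference=18.9,
                                   ct_target_cal=29.1, ct_reference_cal=19.0)
        print(f"fold change vs. calibrator: {fold:.2f}")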

  3. Evaluation properties of the French version of the OUT-PATSAT35 satisfaction with care questionnaire according to classical and item response theory analyses.

    Science.gov (United States)

    Panouillères, M; Anota, A; Nguyen, T V; Brédart, A; Bosset, J F; Monnier, A; Mercier, M; Hardouin, J B

    2014-09-01

    The present study investigates the properties of the French version of the OUT-PATSAT35 questionnaire, which evaluates outpatients' satisfaction with care in oncology, using classical test theory (CTT) and item response theory (IRT). This cross-sectional multicenter study includes 692 patients who completed the questionnaire at the end of their ambulatory treatment. CTT analyses tested the main psychometric properties (convergent and divergent validity, and internal consistency). IRT analyses were conducted separately for each OUT-PATSAT35 domain (the doctors, the nurses or radiation therapists, and the services/organization) with models from the Rasch family. We examined the fit of the data to the model expectations and tested whether the model assumptions of unidimensionality, monotonicity and local independence were respected. A total of 605 (87.4%) respondents were analyzed, with a mean age of 64 years (range 29-88). Internal consistency for all scales separately and for the three main domains was good (Cronbach's α 0.74-0.98). IRT analyses were performed with the partial credit model. No disordered thresholds of polytomous items were found. Each domain showed high reliability but fitted poorly to the Rasch models. Three items in particular, the item about "promptness" in the doctors' domain and the items about "accessibility" and "environment" in the services/organization domain, showed the poorest fit. A correct fit to the Rasch model can be obtained by dropping these items. Most of the local dependence concerned items about "information provided" in each domain. A major deviation from unidimensionality was found in the nurses' domain. CTT showed good psychometric properties for the OUT-PATSAT35. However, the Rasch analysis revealed some misfitting and redundant items. Taking the above problems into consideration, it could be worthwhile to refine the questionnaire in a future study.
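
    For reference, the CTT reliability statistic reported above, Cronbach's α, is computed from an items-by-respondents matrix as follows; the data here are simulated, not the OUT-PATSAT35 responses.

        import numpy as np

        def cronbach_alpha(items):
            """items: 2-D array, rows = respondents, columns = items."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_var_sum = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_var_sum / total_var)

        rng = np.random.default_rng(1)
        latent = rng.normal(size=(605, 1))                         # one satisfaction factor
        responses = latent + rng.normal(scale=0.8, size=(605, 5))  # five noisy items
        print(f"alpha = {cronbach_alpha(responses):.2f}")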

  4. RETRAN nonequilibrium two-phase flow model for operational transient analyses

    International Nuclear Information System (INIS)

    Paulsen, M.P.; Hughes, E.D.

    1982-01-01

    The field balance equations, flow-field models, and equation of state for a nonequilibrium two-phase flow model for RETRAN are given. The differential field balance model equations are: (1) conservation of mixture mass; (2) conservation of vapor mass; (3) balance of mixture momentum; (4) a dynamic-slip model for the velocity difference; and (5) conservation of mixture energy. The equation of state is formulated such that the liquid phase may be subcooled, saturated, or superheated. The vapor phase is constrained to be at the saturation state. The dynamic-slip model includes wall-to-phase and interphase momentum exchanges. A mechanistic vapor generation model is used to describe vapor production under bulk subcooling conditions. The speed of sound for the mixture under nonequilibrium conditions is obtained from the equation of state formulation. The steady-state and transient solution methods are described
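
    In schematic one-dimensional form, the first two balance laws listed above can be written as follows; this is a sketch in standard two-phase notation (ρ_m mixture density, u_m and u_g mixture and vapor velocities, α void fraction, Γ_g vapor generation rate), not RETRAN's exact discretized equations:

        \frac{\partial \rho_m}{\partial t} + \frac{\partial (\rho_m u_m)}{\partial x} = 0
        \qquad \text{(mixture mass)}

        \frac{\partial (\alpha \rho_g)}{\partial t} + \frac{\partial (\alpha \rho_g u_g)}{\partial x} = \Gamma_g
        \qquad \text{(vapor mass)}

    The constraint that the vapor phase remains at saturation closes ρ_g through the equation of state, while Γ_g is supplied by the mechanistic vapor generation model mentioned above.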

  5. Mitochondrial dysfunction, oxidative stress and apoptosis revealed by proteomic and transcriptomic analyses of the striata in two mouse models of Parkinson’s disease

    Energy Technology Data Exchange (ETDEWEB)

    Chin, Mark H.; Qian, Weijun; Wang, Haixing; Petyuk, Vladislav A.; Bloom, Joshua S.; Sforza, Daniel M.; Lacan, Goran; Liu, Dahai; Khan, Arshad H.; Cantor, Rita M.; Bigelow, Diana J.; Melega, William P.; Camp, David G.; Smith, Richard D.; Smith, Desmond J.

    2008-02-10

    The molecular mechanisms underlying the changes in the nigrostriatal pathway in Parkinson disease (PD) are not completely understood. Here we use mass spectrometry and microarrays to study the proteomic and transcriptomic changes in the striatum of two mouse models of PD, induced by the distinct neurotoxins 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) and methamphetamine (METH). Proteomic analyses resulted in the identification and relative quantification of 912 proteins with two or more unique peptides and 85 proteins with significant abundance changes following neurotoxin treatment. Similarly, microarray analyses revealed 181 genes with significant changes in mRNA following neurotoxin treatment. The combined protein and gene list provides a clearer picture of the potential mechanisms underlying neurodegeneration observed in PD. Functional analysis of this combined list revealed a number of significant categories, including mitochondrial dysfunction, oxidative stress response and apoptosis. Additionally, codon usage and miRNAs may play an important role in translational control in the striatum. These results constitute one of the largest datasets integrating protein and transcript changes for these neurotoxin models with many similar endpoint phenotypes but distinct mechanisms.

  6. Proposed Testing to Assess the Accuracy of Glass-To-Metal Seal Stress Analyses.

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, Robert S.; Emery, John M; Tandon, Rajan; Antoun, Bonnie R.; Stavig, Mark E.; Newton, Clay S.; Gibson, Cory S; Bencoe, Denise N.

    2014-09-01

    The material characterization tests conducted on 304L VAR stainless steel and Schott 8061 glass have provided higher-fidelity data for calibration of material models used in Glass-to-Metal (GTM) seal analyses. Specifically, a Thermo-Multi-Linear Elastic Plastic (thermo-MLEP) material model has been defined for SS304L and the Simplified Potential Energy Clock nonlinear viscoelastic model has been calibrated for the S8061 glass. To assess the accuracy of finite element stress analyses of GTM seals, a suite of tests is proposed to provide data for comparison to model predictions.

  7. Fast and accurate methods for phylogenomic analyses

    Directory of Open Access Journals (Sweden)

    Warnow Tandy

    2011-10-01

    Background: Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results: We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions: Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.

  8. Simulation-based Investigations of Electrostatic Beam Energy Analysers

    CERN Document Server

    Pahl, Hannes

    2015-01-01

    An energy analyser is needed to measure the beam energy profile behind the REX-EBIS at ISOLDE. The device should be able to operate with an accuracy of 1 V at voltages up to 30 kV. In order to find a working concept for an electrostatic energy analyser different designs were evaluated with simulations. A spherical device and its design issues are presented. The potential deformation effects of grids at high voltages and their influence on the energy resolution were investigated. First tests were made with a grid-free ring electrode device and show promising results.

  9. Long-term uranium supply-demand analyses

    International Nuclear Information System (INIS)

    1986-12-01

    It is the intention of this study to investigate the long-term uranium supply-demand situation using a number of supply- and demand-related assumptions. For supply, these assumptions as used in the Resources and Production Projection (RAPP) model include country economic development status and consequent lead times for exploration and development, uranium development status, country infrastructure, and uranium resources including the Reasonably Assured (RAR), Estimated Additional Categories I and II (EAR-I and II) and Speculative Resource categories. The demand assumptions were based on the "pure" reactor strategies developed by the NEA Working Party on Nuclear Fuel Cycle Requirements for the 1986 OECD(NEA)/IAEA report "Nuclear Energy and its Fuel Cycle: Prospects to 2025". In addition, for this study a mixed-strategy case was computed using the averages of the plutonium (Pu) burning LWR high and the improved LWR low cases. It is understandable that such a long-term analysis cannot present hard facts, but it can show which variables may in fact influence the long-term supply-demand situation. It is hoped that results of this study will provide valuable information for planners in the uranium supply and demand fields. Periodical re-analyses with updated data bases will be needed from time to time.

  10. Special analyses reveal coke-deposit structure

    International Nuclear Information System (INIS)

    Albright, L.F.

    1988-01-01

    A scanning electron microscope (SEM) and an energy dispersive X-ray analyzer (EDAX) have been used to obtain information that clarifies the three mechanisms of coke formation in ethylene furnaces, and to analyze the metal condition at the furnace exit. The results can be used to examine furnace operations and develop improved ethylene plant practices. In this first of four articles on the analyses of coke and metal samples, the coking mechanisms and coke deposits in a section of tube from an actual ethylene furnace (Furnace A) from a plant on the Texas Gulf Coast are discussed. The second article in the series will analyze the condition of the tube metal in the same furnace. To show how coke deposition and metal condition depend on the operating parameters of an ethylene furnace, the third article in the series will show the coke deposition in a Texas Gulf Coast furnace tube (Furnace B) that operated at a shorter residence time. The fourth article discusses the metal condition in that furnace. Some recommendations, based on the analyses and findings, are offered in the fourth article that could help extend the life of ethylene furnace tubes and also improve overall ethylene plant operations.

  11. Influence of the Human Skin Tumor Type in Photodynamic Therapy Analysed by a Predictive Model

    Directory of Open Access Journals (Sweden)

    I. Salas-García

    2012-01-01

    Photodynamic Therapy (PDT) modeling allows the prediction of the treatment results depending on the lesion properties, the photosensitizer distribution, or the optical source characteristics. We employ a predictive PDT model and apply it to different skin tumors. It takes into account optical radiation distribution, a nonhomogeneous topical photosensitizer spatial-temporal distribution, and the time-dependent photochemical interaction. The predicted singlet oxygen molecular concentrations with varying optical irradiance are compared and could be directly related with the necrosis area. The results show a strong dependence on the particular lesion. This suggests the need to design optimal PDT treatment protocols adapted to the specific patient and lesion.

  12. A journey into a Mediterranean diet and type 2 diabetes: a systematic review with meta-analyses

    Science.gov (United States)

    Esposito, Katherine; Maiorino, Maria Ida; Bellastella, Giuseppe; Chiodini, Paolo; Panagiotakos, Demosthenes; Giugliano, Dario

    2015-01-01

    Objectives: To summarise the evidence about the efficacy of a Mediterranean diet on the management of type 2 diabetes and prediabetic states. Design: A systematic review of all meta-analyses and randomised controlled trials (RCTs) that compared the Mediterranean diet with a control diet on the treatment of type 2 diabetes and prediabetic states was conducted. Electronic searches were carried out up to January 2015. Trials were included for meta-analyses if they had a control group treated with another diet, if they were of sufficient duration (at least 6 months), and if they had at least 30 participants in each arm. A random-effects model was used to pool data. Participants: Adults with or at risk for type 2 diabetes. Interventions: Dietary patterns that described themselves as using a ‘Mediterranean’ dietary pattern. Outcome measures: The outcomes were glycaemic control, cardiovascular risk factors and remission from the metabolic syndrome. Results: From 2824 studies, 8 meta-analyses and 5 RCTs were eligible. A ‘de novo’ meta-analysis of 3 long-term (>6 months) RCTs of the Mediterranean diet and glycaemic control of diabetes favoured the Mediterranean diet as compared with lower-fat diets. Another ‘de novo’ meta-analysis of two long-term RCTs showed a 49% increased probability of remission from the metabolic syndrome. 5 meta-analyses showed a favourable effect of the Mediterranean diet, as compared with other diets, on body weight, total cholesterol and high-density lipoprotein cholesterol. 2 meta-analyses demonstrated that higher adherence to the Mediterranean diet reduced the risk of future diabetes by 19–23%. Conclusions: The Mediterranean diet was associated with better glycaemic control and cardiovascular risk factors than control diets, including a lower-fat diet, suggesting that it is suitable for the overall management of type 2 diabetes. PMID:26260349

  13. Proceedings of the 2nd CSNI Specialist Meeting on Simulators and Plant Analysers

    International Nuclear Information System (INIS)

    Tiihonen, O.

    1999-01-01

    The safe utilisation of nuclear power plants requires the availability of different computerised tools for analysing the plant behaviour and training the plant personnel. These can be grouped into three categories: accident analysis codes, plant analysers and training simulators. The safety analysis of nuclear power plants has traditionally been limited to the worst accident cases expected for the specific plant design. Many accident analysis codes have been developed for different plant types. The scope of the analyses has continuously expanded. The plant analysers are now emerging tools intended for extensive analysis of the plant behaviour using a best estimate model for the whole plant including the reactor and full thermodynamic process, both combined with automation and electrical systems. The comprehensive model is also supported by good visualisation tools. Training simulators with real time plant model are tools for training the plant operators to run the plant. Modern training simulators have also features supporting visualisation of the important phenomena occurring in the plant during transients. The 2nd CSNI Specialist Meeting on Simulators and Plant Analysers in Espoo attracted some 90 participants from 17 countries. A total of 49 invited papers were presented in the meeting in addition to 7 simulator system demonstrations. Ample time was reserved for the presentations and informal discussions during the four meeting days. (orig.)

  14. Analysing the Transformation of Higher Education Governance in Bulgaria and Lithuania

    NARCIS (Netherlands)

    Dobbins, Michael; Leisyte, Liudvika

    2014-01-01

    Drawing on sociological neo-institutional theory and models of higher education governance, we examine current developments in Bulgaria and Lithuania and explore to what extent those developments were shaped by the Bologna reform. We analyse to what extent the state has moved away from a model of

  15. Model-independent plot of dynamic PET data facilitates data interpretation and model selection.

    Science.gov (United States)

    Munk, Ole Lajord

    2012-02-21

    When testing new PET radiotracers or new applications of existing tracers, the blood-tissue exchange and the metabolism need to be examined. However, conventional plots of measured time-activity curves from dynamic PET do not reveal the inherent kinetic information. A novel model-independent volume-influx plot (vi-plot) was developed and validated. The new vi-plot shows the time course of the instantaneous distribution volume and the instantaneous influx rate. The vi-plot visualises physiological information that facilitates model selection and it reveals when a quasi-steady state is reached, which is a prerequisite for the use of the graphical analyses by Logan and Gjedde-Patlak. Both axes of the vi-plot have direct physiological interpretation, and the plot shows kinetic parameters in close agreement with estimates obtained by non-linear kinetic modelling. The vi-plot is equally useful for analyses of PET data based on a plasma input function or a reference region input function. The vi-plot is a model-independent and informative plot for data exploration that facilitates the selection of an appropriate method for data analysis.

  16. Comparative modeling analyses of Cs-137 fate in the rivers impacted by Chernobyl and Fukushima accidents

    Energy Technology Data Exchange (ETDEWEB)

    Zheleznyak, M.; Kivva, S. [Institute of Environmental Radioactivity, Fukushima University (Japan)

    2014-07-01

    The consequences of the two largest nuclear accidents of recent decades - at Chernobyl Nuclear Power Plant (ChNPP) (1986) and at Fukushima Daiichi NPP (FDNPP) (2011) - clearly demonstrated that radioactive contamination of water bodies in the vicinity of an NPP and on the waterways leading from it, e.g., the river-reservoir system after the Chernobyl accident and rivers and coastal marine waters after the Fukushima accident, was in both cases one of the main sources of public concern about the accident consequences. The greater weight of water contamination in public perception of the accidents, compared with the actual fraction of doses received via aquatic pathways relative to other dose components, is a specific feature of public perception of environmental contamination. This psychological phenomenon, confirmed after both accidents, provides supplementary arguments that reliable simulation and prediction of radionuclide dynamics in water and sediments is an important part of post-accident radioecological research. The purpose of the research is to use the experience of the modeling activities conducted over more than 25 years within the Chernobyl-affected Pripyat River and Dnieper River watershed, together with data from new monitoring studies in Japan of the Abukuma River (the largest in the region, with a watershed area of 5400 km²), Kuchibuto River, Uta River, Niita River, Natsui River and Same River, and studies of the specifics of 'water-sediment' ¹³⁷Cs exchange in this area, to refine the 1-D model RIVTOX and the 2-D model COASTOX and increase the predictive power of the modeling technologies. The results of the modeling studies are applied for more accurate prediction of water/sediment radionuclide contamination of rivers and reservoirs in Fukushima Prefecture and for comparative analyses of the efficiency of post-accident measures to diminish the contamination of water bodies.

  17. Applications of Historical Analyses in Combat Modelling

    Science.gov (United States)

    2011-12-01

    causes of those results [2]. Models can be classified into three descriptive types [8], according to the degree of abstraction required: iconic, … [The remainder of this excerpt consists of garbled fragments of appendix equations A3, A5 and A6, which could not be reconstructed.]

  18. Vegetable parenting practices scale: Item response modeling analyses

    Science.gov (United States)

    Our objective was to evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We al...

  19. Coadministration of doxorubicin and etoposide loaded in camel milk phospholipids liposomes showed increased antitumor activity in a murine model

    Directory of Open Access Journals (Sweden)

    Maswadeh HM

    2015-04-01

    Small unilamellar vesicles were prepared from a camel milk phospholipids (CML) mixture or from 1,2-dipalmitoyl-sn-glycero-3-phosphatidylcholine (DPPC), and the anticancer drugs doxorubicin (Dox) or etoposide (ETP) were loaded. The liposomal formulations were used against fibrosarcoma in a murine model. Results showed a very high percentage of Dox encapsulation (~98%) in liposomes (Lip) prepared from CML (CML-Lip) or DPPC (DPPC-Lip), whereas the percentage of encapsulation of ETP was on the lower side: 22% for CML-Lip and 18% for DPPC-Lip. Differential scanning calorimetry curves show that Dox enhances lamellar formation in CML-Lip, whereas ETP enhances nonlamellar formation. Differential scanning calorimetry curves also showed that the presence of Dox and ETP together in DPPC-Lip produced an interdigitation effect. The in vivo anticancer activity of liposomal formulations of Dox or ETP, or a combination of both, was assessed against benzopyrene (BAP)-induced fibrosarcoma in a murine model. Tumor-bearing mice treated with a combination of Dox and ETP loaded into CML-Lip showed increased survival and reduced tumor growth compared to other groups, including the combination of Dox and ETP in DPPC-Lip. Fibrosarcoma-bearing mice treated with a combination of free (Dox + ETP) showed much higher tumor growth compared to those groups treated with CML-Lip-(Dox + ETP) or DPPC-Lip-(Dox + ETP). An immunohistochemical study was also performed to show the expression of the tumor suppressor PTEN, and it was found that tumor tissues from the group of mice treated with the combination of free (Dox + ETP) showed greater loss of cytoplasmic PTEN than tumor tissues obtained from the

  20. SDMtoolbox 2.0: the next generation Python-based GIS toolkit for landscape genetic, biogeographic and species distribution model analyses.

    Science.gov (United States)

    Brown, Jason L; Bennett, Joseph R; French, Connor M

    2017-01-01

    SDMtoolbox 2.0 is a software package for spatial studies of ecology, evolution, and genetics. The release of SDMtoolbox 2.0 allows researchers to use the most current ArcGIS and MaxEnt software, and reduces the amount of time that would be spent developing common solutions. The central aim of this software is to automate complicated and repetitive spatial analyses in an intuitive graphical user interface. One core tenet facilitates careful parameterization of species distribution models (SDMs) to maximize each model's discriminatory ability and minimize overfitting. This includes careful processing of occurrence data, environmental data, and model parameterization. The program directly interfaces with MaxEnt, one of the most powerful and widely used species distribution modeling programs, although SDMtoolbox 2.0 is not limited to species distribution modeling or restricted to modeling in MaxEnt. Many of the SDM pre- and post-processing tools have 'universal' analogs for use with any modeling software. The current version contains a total of 79 scripts that harness the power of ArcGIS for macroecology, landscape genetics, and evolutionary studies. For example, these tools allow for biodiversity quantification (such as species richness or corrected weighted endemism), generation of least-cost paths and corridors among shared haplotypes, assessment of the significance of spatial randomizations, and enforcement of dispersal limitations of SDMs projected into future climates, to name only a few of the functions contained in SDMtoolbox 2.0. Lastly, dozens of generalized tools exist for batch processing and conversion of GIS data types or formats, which are broadly useful to any ArcMap user.

  1. Structural and functional analyses of the putrescine binding protein PotF from Xanthomonas citri

    International Nuclear Information System (INIS)

    Santana, L.D.F.; Balan, A.

    2012-01-01

    Full text: The focus of our group is to determine the role of ABC transporters in the physiology and growth of Xanthomonas citri, a phytopathogenic bacterium that infects citrus plants, causing significant losses for the economy. One of the ABC transporters identified in the X. citri genome, and shown to be active during infection in Citrus sinensis plants, is the putrescine transporter. This transporter consists of two inner membrane proteins, PotG and PotH, that form a pore; a cytoplasmic protein that provides energy for the transport; and the periplasmic binding protein PotF, which is responsible for the affinity and specificity of the system. Its function is associated with microbial carcinogenesis, biofilm formation, escape from phagolysosomes, bacteriocin production, toxin activity and protection from oxidative and acid stress. In this work, we show for the first time the expression, purification, and functional and structural analyses of the X. citri PotF protein. PotF was expressed in Escherichia coli Arctic cells as a 40 kDa soluble protein after induction with IPTG for 24 hours at 13 °C. Using immobilized metal affinity chromatography for purification, the protein was eluted in fractions with 10-500 mM imidazole. To test the folding and the ability to bind putrescine, spectroscopic analyses were performed using circular dichroism and intrinsic fluorescence. The data showed that PotF undergoes conformational changes in the presence of ligands and at different pH, suggesting a possible interaction with the tested ligand. Moreover, based on bioinformatics studies and molecular modeling analyses, we showed that X. citri PotF is highly conserved compared to orthologs present in other bacteria, including the residues that form the ligand-binding site. The production of PotF in a soluble and stable form will allow us to start crystallization trials in an attempt to solve its structure. (author)

  2. Structural and functional analyses of the putrescine binding protein PotF from Xanthomonas citri

    Energy Technology Data Exchange (ETDEWEB)

    Santana, L.D.F.; Balan, A. [Laboratorio Nacional de Biociencias - LNBIO, Campinas, SP (Brazil)

    2012-07-01

    Full text: The focus of our group is to determine the role of ABC transporters in the physiology and growth of Xanthomonas citri, a phytopathogenic bacterium that infects citrus plants, causing significant losses for the economy. One of the ABC transporters identified in the X. citri genome, and shown to be active during infection in Citrus sinensis plants, is the putrescine transporter. This transporter consists of two inner membrane proteins, PotG and PotH, that form a pore; a cytoplasmic protein that provides energy for the transport; and the periplasmic binding protein PotF, which is responsible for the affinity and specificity of the system. Its function is associated with microbial carcinogenesis, biofilm formation, escape from phagolysosomes, bacteriocin production, toxin activity and protection from oxidative and acid stress. In this work, we show for the first time the expression, purification, and functional and structural analyses of the X. citri PotF protein. PotF was expressed in Escherichia coli Arctic cells as a 40 kDa soluble protein after induction with IPTG for 24 hours at 13 °C. Using immobilized metal affinity chromatography for purification, the protein was eluted in fractions with 10-500 mM imidazole. To test the folding and the ability to bind putrescine, spectroscopic analyses were performed using circular dichroism and intrinsic fluorescence. The data showed that PotF undergoes conformational changes in the presence of ligands and at different pH, suggesting a possible interaction with the tested ligand. Moreover, based on bioinformatics studies and molecular modeling analyses, we showed that X. citri PotF is highly conserved compared to orthologs present in other bacteria, including the residues that form the ligand-binding site. The production of PotF in a soluble and stable form will allow us to start crystallization trials in an attempt to solve its structure. (author)

  3. The ASSET intercomparison of stratosphere and lower mesosphere humidity analyses

    Directory of Open Access Journals (Sweden)

    H. E. Thornton

    2009-02-01

    This paper presents results from the first detailed intercomparison of stratosphere-lower mesosphere water vapour analyses; it builds on earlier results from the EU-funded Framework V "Assimilation of ENVISAT Data" (ASSET) project. Stratospheric water vapour plays an important role in many key atmospheric processes and therefore an improved understanding of its daily variability is desirable. With the availability of high resolution, good quality Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) water vapour profiles, the ability of four different atmospheric models to assimilate these data is tested. MIPAS data have been assimilated over September 2003 into the models of the European Centre for Medium Range Weather Forecasts (ECMWF), the Belgian Institute for Space and Aeronomy (BIRA-IASB), the French Service d'Aéronomie (SA-IPSL) and the UK Met Office. The resultant middle atmosphere humidity analyses are compared against independent satellite data from the Halogen Occultation Experiment (HALOE), the Polar Ozone and Aerosol Measurement (POAM III) and the Stratospheric Aerosol and Gas Experiment (SAGE II). The MIPAS water vapour profiles are generally well assimilated in the ECMWF, BIRA-IASB and SA systems, producing stratosphere-mesosphere water vapour fields where the main features compare favourably with the independent observations. However, the models are less capable of assimilating the MIPAS data where water vapour values are locally extreme or in regions of strong humidity gradients, such as the southern hemisphere lower stratosphere polar vortex. Differences in the analyses can be attributed to the choice of humidity control variable, how the background error covariance matrix is generated, the model resolution and its complexity, the degree of quality control of the observations and the use of observations near the model boundaries. Due to the poor performance of the Met Office analyses the results are not included in

  4. The ASSET intercomparison of stratosphere and lower mesosphere humidity analyses

    Science.gov (United States)

    Thornton, H. E.; Jackson, D. R.; Bekki, S.; Bormann, N.; Errera, Q.; Geer, A. J.; Lahoz, W. A.; Rharmili, S.

    2009-02-01

    This paper presents results from the first detailed intercomparison of stratosphere-lower mesosphere water vapour analyses; it builds on earlier results from the EU funded framework V "Assimilation of ENVISAT Data" (ASSET) project. Stratospheric water vapour plays an important role in many key atmospheric processes and therefore an improved understanding of its daily variability is desirable. With the availability of high resolution, good quality Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) water vapour profiles, the ability of four different atmospheric models to assimilate these data is tested. MIPAS data have been assimilated over September 2003 into the models of the European Centre for Medium Range Weather Forecasts (ECMWF), the Belgian Institute for Space and Aeronomy (BIRA-IASB), the French Service d'Aéronomie (SA-IPSL) and the UK Met Office. The resultant middle atmosphere humidity analyses are compared against independent satellite data from the Halogen Occultation Experiment (HALOE), the Polar Ozone and Aerosol Measurement (POAM III) and the Stratospheric Aerosol and Gas Experiment (SAGE II). The MIPAS water vapour profiles are generally well assimilated in the ECMWF, BIRA-IASB and SA systems, producing stratosphere-mesosphere water vapour fields where the main features compare favourably with the independent observations. However, the models are less capable of assimilating the MIPAS data where water vapour values are locally extreme or in regions of strong humidity gradients, such as the southern hemisphere lower stratosphere polar vortex. Differences in the analyses can be attributed to the choice of humidity control variable, how the background error covariance matrix is generated, the model resolution and its complexity, the degree of quality control of the observations and the use of observations near the model boundaries. Due to the poor performance of the Met Office analyses the results are not included in the intercomparison

  5. Bioconductor workflow for microbiome data analysis: from raw reads to community analyses [version 1; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Ben J. Callahan

    2016-06-01

    High-throughput sequencing of PCR-amplified taxonomic markers (like the 16S rRNA gene) has enabled a new level of analysis of complex bacterial communities known as microbiomes. Many tools exist to quantify and compare abundance levels or microbial composition of communities in different conditions. The sequencing reads have to be denoised and assigned to the closest taxa from a reference database. Common approaches use a notion of 97% similarity and normalize the data by subsampling to equalize library sizes. In this paper, we show that statistical models allow more accurate abundance estimates. By providing a complete workflow in R, we enable the user to do sophisticated downstream statistical analyses, including both parametric and nonparametric methods. We provide examples of using the R packages dada2, phyloseq, DESeq2, ggplot2 and vegan to filter, visualize and test microbiome data. We also provide examples of supervised analyses using random forests, partial least squares and linear models, as well as nonparametric testing using community networks and the ggnetwork package.

  6. CaRMeN: a tool for analysing and deriving kinetics in the real world.

    Science.gov (United States)

    Gossler, H; Maier, L; Angeli, S; Tischer, S; Deutschmann, O

    2018-04-25

    This paper presents the concepts of an open software tool (CaRMeN) that can be used to rapidly analyse and derive models, in particular chemical kinetics. The software automates the workflow of comparing model vs. experiment, which must currently be done manually and is thus a time-consuming and error-prone task. The capabilities of the software are illustrated through a case study. Experimental data for the conversion of methane over rhodium catalysts in a wide range of conditions and experimental setups are numerically simulated using five different mechanisms from the literature. The applicability of the mechanisms as well as differences between flow and diffusion models are evaluated. The results show that no single mechanism reliably predicts the chemical conversions of all of the experiments. Although the software was initially developed for chemical kinetics applications, it can also be extended to run any simulation code, and can therefore be applied in other scenarios.

  7. Linkage and related analyses of Barrett's esophagus and its associated adenocarcinomas.

    Science.gov (United States)

    Sun, Xiangqing; Elston, Robert; Falk, Gary W; Grady, William M; Faulx, Ashley; Mittal, Sumeet K; Canto, Marcia I; Shaheen, Nicholas J; Wang, Jean S; Iyer, Prasad G; Abrams, Julian A; Willis, Joseph E; Guda, Kishore; Markowitz, Sanford; Barnholtz-Sloan, Jill S; Chandar, Apoorva; Brock, Wendy; Chak, Amitabh

    2016-07-01

    Familial aggregation and segregation analysis studies have provided evidence of a genetic basis for esophageal adenocarcinoma (EAC) and its premalignant precursor, Barrett's esophagus (BE). We aim to demonstrate the utility of linkage analysis to identify the genomic regions that might contain the genetic variants that predispose individuals to this complex trait (BE and EAC). We genotyped 144 individuals in 42 multiplex pedigrees chosen from 1000 singly ascertained BE/EAC pedigrees, and performed both model-based and model-free linkage analyses, using S.A.G.E. and other software. Segregation models were fitted, from the data on both the 42 pedigrees and the 1000 pedigrees, to determine parameters for performing model-based linkage analysis. Model-based and model-free linkage analyses were conducted in two sets of pedigrees: the 42 pedigrees and a subset of 18 pedigrees with female affected members that are expected to be more genetically homogeneous. Genome-wide associations were also tested in these families. Linkage analyses on the 42 pedigrees identified several regions consistently suggestive of linkage by different linkage analysis methods on chromosomes 2q31, 12q23, and 4p14. A linkage on 15q26 is the only consistent linkage region identified in the 18 female-affected pedigrees, in which the linkage signal is higher than in the 42 pedigrees. Other tentative linkage signals are also reported. Our linkage study of BE/EAC pedigrees identified linkage regions on chromosomes 2, 4, 12, and 15, with some reported associations located within our linkage peaks. Our linkage results can help prioritize association tests to delineate the genetic determinants underlying susceptibility to BE and EAC.

  8. Skeletal Muscle Differentiation on a Chip Shows Human Donor Mesoangioblasts' Efficiency in Restoring Dystrophin in a Duchenne Muscular Dystrophy Model.

    Science.gov (United States)

    Serena, Elena; Zatti, Susi; Zoso, Alice; Lo Verso, Francesca; Tedesco, F Saverio; Cossu, Giulio; Elvassore, Nicola

    2016-12-01

    Restoration of the protein dystrophin on the muscle membrane is the goal of many research lines aimed at curing Duchenne muscular dystrophy (DMD). Results of ongoing preclinical and clinical trials suggest that partial restoration of dystrophin might be sufficient to significantly reduce muscle damage. Different myogenic progenitors are candidates for cell therapy of muscular dystrophies, but only satellite cells and pericytes have already entered clinical experimentation. This study aimed to provide in vitro quantitative evidence of the ability of mesoangioblasts to restore dystrophin, in terms of protein accumulation and distribution, within myotubes derived from DMD patients, using a microengineered model. We designed an ad hoc experimental strategy to miniaturize on a chip the standard process of muscle regeneration independent of variables such as inflammation and fibrosis. It is based on the coculture, at different ratios, of human dystrophin-positive myogenic progenitors and dystrophin-negative myoblasts in a substrate with muscle-like physiological stiffness and cell micropatterns. Results showed that both healthy myoblasts and mesoangioblasts restored dystrophin expression in DMD myotubes. However, mesoangioblasts showed unexpected efficiency compared with myoblasts in dystrophin production, in terms of the amount of protein produced (40% vs. 15%) and the length of the dystrophin membrane domain (210-240 µm vs. 40-70 µm). These results show that our microscaled in vitro model of human DMD skeletal muscle validated previous in vivo preclinical work and may be used to predict efficacy of new methods aimed at enhancing dystrophin accumulation and distribution before they are tested in vivo, reducing time, costs, and variability of clinical experimentation. This study aimed to provide in vitro quantitative evidence of the ability of human mesoangioblasts to restore dystrophin, in terms of protein accumulation and distribution, within myotubes derived from

  9. Analysing improvements to on-street public transport systems: a mesoscopic model approach

    DEFF Research Database (Denmark)

    Ingvardson, Jesper Bláfoss; Kornerup Jensen, Jonas; Nielsen, Otto Anker

    2017-01-01

    and other advanced public transport systems (APTS), the attractiveness of such systems depends heavily on their implementation. In the early planning stage it is advantageous to deploy simple and transparent models to evaluate possible ways of implementation. For this purpose, the present study develops … headway time regularity and running time variability, i.e. taking into account waiting time and in-vehicle time. The approach was applied in a case study assessing the effects of implementing segregated infrastructure and APTS elements, individually and in combination. The results showed that the reliability of on-street public transport operations mainly depends on APTS elements, and especially holding strategies, whereas pure infrastructure improvements induced travel time reductions. The results further suggested that synergy effects can be obtained by planning on-street public transport coherently …

  10. Process models and model-data fusion in dendroecology

    Directory of Open Access Journals (Sweden)

    Joel eGuiot

    2014-08-01

    Dendrochronology (i.e. the study of annually dated tree-ring time series) has proved to be a powerful technique for understanding tree growth. This paper intends to show the value of using ecophysiological modeling not only to understand and predict tree growth (dendroecology) but also to reconstruct past climates (dendroclimatology). Process models have been used for several decades in dendroclimatology, but it is only recently that methods of model-data fusion have led to significant progress in modeling tree growth as a function of climate and in reconstructing past climates. These model-data fusion (MDF) methods, mainly based on the Bayesian paradigm, have been shown to be powerful for both model calibration and model inversion. After a rapid survey of tree-growth modeling, we illustrate MDF with examples based on tree-ring series of Aleppo pines from southern France and oaks from central France. These examples show that if plants experienced CO2 fertilization, this would have a significant effect on tree growth, which in turn would bias the climate reconstructions. This bias could extend to other environmental non-climatic factors directly or indirectly affecting annual ring formation that are not taken into account in classical empirical models, which supports the use of more complex process-based models. Finally, we conclude by showing the value of the data assimilation methods applied in climatology for producing climate re-analyses.

  11. [Methods, challenges and opportunities for big data analyses of microbiome].

    Science.gov (United States)

    Sheng, Hua-Fang; Zhou, Hong-Wei

    2015-07-01

    The microbiome is a novel research field related to a variety of chronic inflammatory diseases. Technically, there are two major approaches to the analysis of the microbiome: metataxonomics, by sequencing 16S rRNA variable tags, and metagenomics, by shotgun sequencing of the total microbial (mainly bacterial) genomic mixture. The 16S rRNA sequencing analysis pipeline includes sequence quality control, diversity analyses, taxonomy and statistics; metagenome analysis further includes gene annotation and functional analyses. With the development of sequencing techniques, the cost of sequencing will decrease, and big data analyses will become the central task. Data standardization, accumulation, modeling and disease prediction are crucial for the future exploitation of these data. Meanwhile, the information contained in these data, and functional verification with culture-dependent and culture-independent experiments, remain the focus of future research. Studies of the human microbiome will bring a better understanding of the relations between the human body and the microbiome, especially in the context of disease diagnosis and therapy, which promises rich research opportunities.
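
    As a small illustration of the diversity-analysis step in such pipelines, Shannon diversity can be computed per sample from an OTU count table; the counts below are invented, and production pipelines wrap this step in quality control and taxonomy assignment.

        import numpy as np

        def shannon(counts):
            """Shannon diversity H' = -sum(p * ln p) for one sample's OTU counts."""
            p = np.asarray(counts, dtype=float)
            p = p[p > 0] / p.sum()
            return float(-(p * np.log(p)).sum())

        otu_table = {                       # invented counts per OTU
            "gut_sample": [500, 300, 150, 40, 10],
            "skin_sample": [900, 50, 30, 15, 5],
        }
        for name, counts in otu_table.items():
            print(f"{name}: H' = {shannon(counts):.2f}")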

  12. Post-facta Analyses of Fukushima Accident and Lessons Learned

    Energy Technology Data Exchange (ETDEWEB)

    Tanabe, Fumiya [Sociotechnical Systems Safety Research Institute, Ichige (Japan)

    2014-08-15

    Independent analyses have been performed of the core melt behavior of the Unit 1, Unit 2 and Unit 3 reactors of Fukushima Daiichi Nuclear Power Station on 11-15 March 2011. The analyses are based on a phenomenological methodology combining investigation of measured data with a simple physical model calculation. The time variations of core water level, core material temperature and hydrogen generation rate are estimated. The analyses have revealed characteristics of the accident process in each reactor. In the case of the Unit 2 reactor, the calculated result suggests little hydrogen generation because no steam was generated in the core for the zirconium-steam reaction during the fuel damage process. This could be the reason why there was no hydrogen explosion in the Unit 2 reactor building. Analyses have also been performed on the core material behavior in another chaotic period, 19-31 March 2011, and resulted in a re-melt hypothesis: core material in each reactor should have melted again due to a shortage of cooling water. The hypothesis is consistent with many observed features of the dispersion of radioactive materials into the environment.

  13. Issues and approaches in risk-based aging analyses of passive components

    International Nuclear Information System (INIS)

    Uryasev, S.P.; Samanta, P.K.; Vesely, W.E.

    1994-01-01

    In previous NRC-sponsored work, a general methodology was developed to quantify the risk contributions from aging components at nuclear plants. The methodology allowed Probabilistic Risk Analyses (PRAs) to be modified to incorporate age-dependent component failure rates, and also aging maintenance models, to evaluate and prioritize the aging contributions from active components using the linear aging failure rate model and empirical component aging rates. In the present paper, this methodology is extended to passive components (for example, the pipes, heat exchangers, and the vessel). The analyses of passive components raise issues different from those of active components. Here, we specifically focus on three aspects that need to be addressed in risk-based aging prioritization of passive components
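
    The linear aging failure rate model mentioned above takes the form λ(w) = λ0 + a·w, where w is the component age; averaging over a replacement interval T gives λ0 + aT/2. A toy calculation (all rates invented for illustration) shows how the choice of interval drives the aging contribution:

        # Linear aging failure rate model: lambda(w) = lambda0 + a * w.
        # With replacement every T years, the time-averaged rate is lambda0 + a * T / 2.
        lambda0 = 1.0e-3  # base failure rate per year (illustrative)
        a = 2.0e-4        # aging rate per year^2 (illustrative)

        for T in (5.0, 10.0, 20.0):  # replacement intervals in years
            avg_rate = lambda0 + a * T / 2.0
            print(f"T = {T:4.0f} y: average rate = {avg_rate:.2e} /y "
                  f"({avg_rate / lambda0:.1f}x the unaged rate)")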

  14. Model-Driven Engineering of Machine Executable Code

    Science.gov (United States)

    Eichberg, Michael; Monperrus, Martin; Kloppenburg, Sven; Mezini, Mira

    Implementing static analyses of machine-level executable code is labor-intensive and complex. We show how to leverage model-driven engineering to facilitate the design and implementation of programs doing static analyses. Further, we report on important lessons learned on the benefits and drawbacks while using the following technologies: using the Scala programming language as the target of code generation, using XML Schema to express a metamodel, and using XSLT to implement (a) transformations and (b) a lint-like tool. Finally, we report on the use of Prolog for writing model transformations.

  15. Uncertainty Analyses for Back Projection Methods

    Science.gov (United States)

    Zeng, H.; Wei, S.; Wu, W.

    2017-12-01

    So far few comprehensive error analyses for back projection methods have been conducted, although it is evident that high frequency seismic waves can be easily affected by earthquake depth, focal mechanisms and the Earth's 3D structures. Here we perform 1D and 3D synthetic tests for two back projection methods, MUltiple SIgnal Classification (MUSIC) (Meng et al., 2011) and Compressive Sensing (CS) (Yao et al., 2011). We generate synthetics for both point sources and finite rupture sources with different depths, focal mechanisms, as well as 1D and 3D structures in the source region. The 3D synthetics are generated through a hybrid scheme of Direct Solution Method and Spectral Element Method. Then we back project the synthetic data using MUSIC and CS. The synthetic tests show that the depth phases can be back projected as artificial sources both in space and time. For instance, for a source depth of 10 km, back projection gives a strong signal 8 km away from the true source. Such bias increases with depth, e.g., the error of horizontal location could be larger than 20 km for a depth of 40 km. If the array is located around the nodal direction of direct P-waves, the teleseismic P-waves are dominated by the depth phases. Therefore, back projections are actually imaging the reflection points of depth phases more than the rupture front. Besides depth phases, the strong and long-lasting coda waves due to 3D effects near the trench can lead to additional complexities, tested here. The strength contrast of different frequency contents in the rupture models also produces some variations in the back projection results. In the synthetic tests, MUSIC and CS derive consistent results. While MUSIC is more computationally efficient, CS works better for sparse arrays. In summary, our analyses indicate that the impact of the various factors mentioned above should be taken into consideration when interpreting back projection images, before we can use them to infer earthquake rupture physics.
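
    The stacking operation at the heart of back projection, which the depth-phase artifacts described above contaminate, can be illustrated with a minimal delay-and-sum sketch on synthetic data; the geometry and wave speed are invented, the medium is homogeneous, and the MUSIC and CS refinements are not shown.

        import numpy as np

        c, fs = 3.0, 50.0                       # wave speed (km/s), sampling rate (Hz)
        stations = np.linspace(0.0, 200.0, 21)  # station positions along a line (km)
        src = 80.0                              # true source position (km)

        t = np.arange(0.0, 120.0, 1.0 / fs)
        traces = np.zeros((stations.size, t.size))
        for i, x in enumerate(stations):
            traces[i, int(abs(x - src) / c * fs)] = 1.0  # impulse arrival, no noise

        # For each candidate source point, shift every trace by its predicted travel
        # time and stack; the stack peaks near the true source.
        grid = np.linspace(0.0, 200.0, 201)
        stack = np.zeros_like(grid)
        for j, g in enumerate(grid):
            shifts = (np.abs(stations - g) / c * fs).astype(int)
            stack[j] = sum(traces[i, s] for i, s in enumerate(shifts))

        print(f"stack peak at {grid[np.argmax(stack)]:.1f} km (true source at {src} km)")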

  16. Analysing the Effects of a Pigs Production Quota within a Dynamic CGE Framework

    DEFF Research Database (Denmark)

    Adams, Philip D; Hansen, Lill Thanning; Jacobsen, Lars Bo

    2001-01-01

    In this paper we address the issue of timing and announcement within a dynamic applied general equilibrium model of the Danish economy. Specifically we analyse the introduction of a quota on the production of pigs. Two scenarios are analysed, namely the introduction of a once-off quota without any...

  17. Modelling the models

    CERN Multimedia

    Anaïs Schaeffer

    2012-01-01

    By analysing the production of mesons in the forward region of LHC proton-proton collisions, the LHCf collaboration has provided key information needed to calibrate extremely high-energy cosmic ray models.   Average transverse momentum (pT) as a function of rapidity loss ∆y. Black dots represent LHCf data and the red diamonds represent SPS experiment UA7 results. The predictions of hadronic interaction models are shown by open boxes (sibyll 2.1), open circles (qgsjet II-03) and open triangles (epos 1.99). Among these models, epos 1.99 shows the best overall agreement with the LHCf data. LHCf is dedicated to the measurement of neutral particles emitted at extremely small angles in the very forward region of LHC collisions. Two imaging calorimeters – Arm1 and Arm2 – take data 140 m either side of the ATLAS interaction point. “The physics goal of this type of analysis is to provide data for calibrating the hadron interaction models – the well-known …

  18. Extending Data Worth Analyses to Select Multiple Observations Targeting Multiple Forecasts.

    Science.gov (United States)

    Vilhelmsen, Troels N; Ferré, Ty P A

    2017-09-15

    Hydrological models are often set up to provide specific forecasts of interest. Owing to the inherent uncertainty in data used to derive model structure and used to constrain parameter variations, the model forecasts will be uncertain. Additional data collection is often performed to minimize this forecast uncertainty. Given our common financial restrictions, it is critical that we identify data with maximal information content with respect to forecasts of interest. In practice, this often devolves to qualitative decisions based on expert opinion. However, there is no assurance that this will lead to optimal design, especially for complex hydrogeological problems. Specifically, these complexities include considerations of multiple forecasts, shared information among potential observations, information content of existing data, and the assumptions and simplifications underlying model construction. In the present study, we extend previous data worth analyses to include simultaneous selection of multiple new measurements and consideration of multiple forecasts of interest. We show how the suggested approach can be used to optimize data collection. This can be used in a manner that suggests specific measurement sets or that produces probability maps indicating areas likely to be informative for specific forecasts. Moreover, we provide examples documenting that sequential measurement selection approaches often lead to suboptimal designs and that estimates of data covariance should be included when selecting future measurement sets.
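
    The reported suboptimality of sequential selection is easy to reproduce in a toy linear-Gaussian setting, where the posterior forecast variance after collecting an observation set has a closed form. Everything below (sensitivities, covariances, noise level) is invented for illustration and is not the authors' code.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(2)
        n_par, n_obs = 4, 6
        J = rng.normal(size=(n_obs, n_par))  # sensitivities of candidate observations
        y = rng.normal(size=n_par)           # sensitivity of the forecast of interest
        C = np.eye(n_par)                    # prior parameter covariance
        r = 0.1                              # observation error variance

        def forecast_var(subset):
            """Posterior forecast variance after collecting the observations in subset."""
            if not subset:
                return y @ C @ y
            Js = J[list(subset)]
            S = Js @ C @ Js.T + r * np.eye(len(subset))
            C_post = C - C @ Js.T @ np.linalg.solve(S, Js @ C)
            return y @ C_post @ y

        # Greedy (sequential): best single observation, then its best partner.
        best1 = min(range(n_obs), key=lambda i: forecast_var((i,)))
        best2 = min((i for i in range(n_obs) if i != best1),
                    key=lambda i: forecast_var((best1, i)))
        greedy = forecast_var((best1, best2))

        # Simultaneous: search all pairs directly (never worse than greedy).
        optimal = min(forecast_var(p) for p in combinations(range(n_obs), 2))
        print(f"greedy pair: {greedy:.4f}, best pair: {optimal:.4f}")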

  19. ATOP - The Advanced Taiwan Ocean Prediction System Based on the mpiPOM. Part 1: Model Descriptions, Analyses and Results

    Directory of Open Access Journals (Sweden)

    Leo Oey

    2013-01-01

    A data-assimilated Taiwan Ocean Prediction (ATOP) system is being developed at the National Central University, Taiwan. The model simulates sea-surface height, three-dimensional currents, temperature and salinity, and turbulent mixing. The model has options for tracer and particle-tracking algorithms, as well as for wave-induced Stokes drift and wave-enhanced mixing and bottom drag. Two different forecast domains have been tested: a large-grid domain that encompasses the entire North Pacific Ocean at 0.1° × 0.1° horizontal resolution and 41 vertical sigma levels, and a smaller western North Pacific domain which at present has the same horizontal resolution. In both domains, 25-year spin-up runs from 1988-2011 were first conducted, forced by six-hourly Cross-Calibrated Multi-Platform (CCMP) winds and NCEP-reanalysis Global Forecast System (GFS) winds. The results are then used as initial conditions to conduct ocean analyses from January 2012 through February 2012, when updated hindcasts and real-time forecasts begin using the GFS winds. This paper describes the ATOP system and compares the forecast results against satellite altimetry data for assessing model skills. The model results are also shown to compare well with observations of (i) the Kuroshio intrusion in the northern South China Sea, and (ii) the subtropical countercurrent. A review of, and comparison with, other models in the literature for (i) are also given.

  20. Optical region elemental abundance analyses of B and A stars

    International Nuclear Information System (INIS)

    Adelman, S.J.

    1984-01-01

    Abundance analyses using optical region data and fully line-blanketed model atmospheres have been performed for six moderately sharp-lined middle to late B-type stars. The derived abundances have values similar to those of the Sun. (author)

  1. Kinetic stability analyses in a bumpy cylinder

    International Nuclear Information System (INIS)

    Dominguez, R.R.; Berk, H.L.

    1981-01-01

    Recent interest in the ELMO Bumpy Torus (EBT) has prompted a number of stability analyses of both the hot electron rings and the toroidal plasma. Typically these works employ the local approximation, neglecting radial eigenmode structure and ballooning effects to perform the stability analysis. In the present work we develop a fully kinetic formalism for performing nonlocal stability analyses in a bumpy cylinder. We show that the Vlasov-Maxwell integral equations (with one ignorable coordinate) are self-adjoint and hence amenable to analysis using numerical techniques developed for self-adjoint systems of equations. The representation we obtain for the kernel of the Vlasov-Maxwell equations is a differential operator of arbitrarily high order. This form leads to a manifestly self-adjoint system of differential equations for long-wavelength modes.

  2. Integrating Risk Analyses and Tools at the DOE Hanford Site

    International Nuclear Information System (INIS)

    LOBER, R.W.

    2002-01-01

    Risk assessment and environmental impact analysis at the U.S. Department of Energy (DOE) Hanford Site in Washington State has made significant progress in refining the strategy for using risk analysis to support closing of several hundred waste sites plus 149 single-shell tanks at the Hanford Site. A Single-Shell Tank System Closure Work Plan outlines the current basis for closing the single-shell tank systems. An analogous site approach has been developed to address closure of aggregated groups of similar waste sites. Because of the complexity, decision time frames, proximity of non-tank farm waste sites to tank farms, scale, and regulatory considerations, various projects are providing integrated assessments to support risk analyses and decision-making. Projects and the tools that are being developed and applied at Hanford to support retrieval and cleanup decisions include: (1) Life Cycle Model (LCM) and Risk Receptor Model (RRM)--A site-level set of tools to support strategic analyses through scoping level risk management to assess different alternatives and options for tank closure. (2) Systems Assessment Capability for Integrated Groundwater/Vadose Zone (SAC) and the Site-Wide Groundwater Model (SWGM)--A site-wide groundwater modeling system coupled with a risk-based uncertainty analysis of inventory, vadose zone, groundwater, and river interactions for evaluating cumulative impacts from individual and aggregate waste sites. (3) Retrieval Performance Evaluation (RPE)--A site-specific, risk-based methodology developed to evaluate performance of waste retrieval, leak detection and closure on a tank-specific basis as a function of past tank leaks, potential leakage during retrieval operations, and remaining residual waste inventories following completion of retrieval operations. (4) Field Investigation Report (FIR)--A corrective action program to investigate the nature and extent of past tank leaks through characterization activities and assess future impacts to

  3. USE OF BOUNDING ANALYSES TO ESTIMATE THE PERFORMANCE OF A SEISMICALLY ISOLATED STRUCTURE

    Directory of Open Access Journals (Sweden)

    Gökhan ÖZDEMİR

    2017-03-01

    Full Text Available The current design approach for seismically isolated structures is to perform bounding analyses. These analyses provide an envelope for the response of the seismically isolated structure rather than focusing on the actual performance. In this study, the success of bounding analyses in estimating the performance of a seismically isolated structure, in which the isolation is provided by means of lead rubber bearings (LRBs), is evaluated in a comparative manner. For this purpose, nonlinear response history analyses were performed under the effect of bidirectional ground motion excitations. In bounding analyses, non-deteriorating hysteretic representations were used to model the hysteretic behavior of LRBs. On the other hand, to estimate the actual performance of both the superstructure and isolator units, deteriorating hysteretic idealizations were employed. The deterioration in strength of LRBs was defined as a function of temperature rise in the lead core. The analyzed structure is an existing seismically isolated hospital building, analytically modeled in accordance with its reported design properties for both isolation units and superstructure. Results obtained from analyses where LRBs are idealized by both deteriorating and non-deteriorating hysteretic representations are used in the comparisons. The response quantities used in the comparisons are maximum isolator displacement, maximum isolator force, maximum absolute floor acceleration, and maximum relative story displacements. In an average sense, bounding analysis is found to provide conservative estimates for the selected response quantities and to fulfill its intended purpose. However, it is revealed that there may be individual cases where bounding analyses fail to provide a safe envelope.
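
    The contrast between deteriorating and non-deteriorating idealizations can be sketched with a simple bilinear hysteresis whose characteristic strength decays as energy is dissipated. This is only an illustration: the study defines deterioration as a function of lead-core temperature rise, whereas the toy model below uses cumulative dissipated energy as a crude proxy, and all parameter values are invented.

    ```python
    # Toy bilinear LRB hysteresis (hypothetical parameters, not the study's
    # thermo-mechanical model). Force = k2*u + z, with the hysteretic part z
    # capped at the current characteristic strength Q, which decays with
    # dissipated energy as a stand-in for lead-core heating.
    import numpy as np

    k1, k2 = 20.0, 2.0    # elastic and post-yield stiffness, kN/mm (assumed)
    Q0 = 100.0            # initial characteristic strength, kN (assumed)
    beta = 2e-4           # strength decay per unit dissipated energy (assumed)

    def lrb_force(u, deteriorate=True):
        z, energy, f = 0.0, 0.0, np.zeros_like(u)
        for i in range(1, len(u)):
            du = u[i] - u[i - 1]
            Q = Q0 * np.exp(-beta * energy) if deteriorate else Q0
            z_trial = z + (k1 - k2) * du
            z_new = float(np.clip(z_trial, -Q, Q))
            if z_new != z_trial:          # yielding: the lead core dissipates
                energy += abs(z_new * du)
            z = z_new
            f[i] = k2 * u[i] + z
        return f

    # Ten sinusoidal cycles: the deteriorating model sheds strength cycle by
    # cycle, while the non-deteriorating (bounding) model keeps its envelope.
    t = np.linspace(0.0, 10 * 2 * np.pi, 4000)
    u = 150.0 * np.sin(t)
    print("peak bounding force:    ", lrb_force(u, deteriorate=False).max())
    print("peak deteriorated force:", lrb_force(u, deteriorate=True).max())
    ```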

  4. ALBEDO PATTERN RECOGNITION AND TIME-SERIES ANALYSES IN MALAYSIA

    Directory of Open Access Journals (Sweden)

    S. A. Salleh

    2012-07-01

    Full Text Available Pattern recognition and time-series analyses enable one to evaluate and generate predictions of specific phenomena. Albedo pattern and time-series analyses are particularly useful for climate condition monitoring. This study was conducted to seek out changes in Malaysia's albedo pattern. The recognized patterns and changes will be useful for a variety of environmental and climate monitoring researches such as carbon budgeting and aerosol mapping. Ten years (2000–2009) of MODIS satellite images were used for the analyses and interpretation. These images were processed using ERDAS Imagine remote sensing software, ArcGIS 9.3, the 6S code for atmospheric calibration and several MODIS tools (MRT, HDF2GIS, Albedo tools). Several methods for time-series analyses were explored; this paper demonstrates trend and seasonal time-series analyses using the converted HDF-format MODIS MCD43A3 albedo land product. The results revealed significant changes in albedo percentages over the past 10 years and in the pattern with regard to Malaysia's nebulosity index (NI) and aerosol optical depth (AOD). A noticeable trend can be identified with regard to the maximum and minimum values of the albedo. The rises and falls of the line graph show a similar trend with regard to the daily observations; the difference lies in the value or percentage of the rises and falls of albedo. Thus, it can be concluded that the temporal behaviour of land surface albedo in Malaysia is uniform with regard to the local monsoons. However, although the average albedo shows a linear trend with the nebulosity index, the pattern changes of albedo with respect to the nebulosity index indicate that there are external factors that affect the albedo values, as the plotted sky conditions and diffusion do not have a uniform trend over the years, especially when the trend at 5-year intervals is examined; 2000 shows high

  5. Modelling of the $t\\bar{t}H$ and $t\\bar{t}V$ $(V=W,Z)$ processes for $\\sqrt{s}=13$ TeV ATLAS analyses

    CERN Document Server

    The ATLAS collaboration

    2016-01-01

    Production of top quark pairs in association with heavy Standard Model bosons is important both as a signal and a background in several ATLAS analyses. Strong constraints on such processes cannot at present be obtained from data, and therefore their modelling by Monte Carlo simulation as well as the associated uncertainties are important. This note documents the Monte Carlo samples currently being used in ATLAS for the $t\\bar{t}H$ and $t\\bar{t}V$ ($V=W,Z$ vector bosons) processes for $\\sqrt{s}=13$ TeV proton-proton collisions.

  6. Coupled Mooring Analyses for the WEC-Sim Wave Energy Converter Design Tool: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Sirnivas, Senu; Yu, Yi-Hsiang; Hall, Matthew; Bosma, Bret

    2016-07-01

    A wave-energy-converter-specific time-domain modeling method (WEC-Sim) was coupled with a lumped-mass-based mooring model (MoorDyn) to improve its mooring dynamics modeling capability. This paper presents a verification and validation study on the coupled numerical method. First, a coupled model was built to simulate a 1/25 model scale floating power system connected to a traditional three-point catenary mooring with an angle of 120° between the lines. The body response and the tension force on the mooring lines at the fairlead in decay tests and under regular and irregular waves were examined. To validate and verify the coupled numerical method, the simulation results were compared to the measurements from a wave tank test and a commercial code (OrcaFlex). Second, a coupled model was built to simulate a two-body point absorber system with a chain-connected catenary system. The influence of the mooring connection on the point absorber was investigated. Overall, the study showed that the coupling of WEC-Sim and the MoorDyn model works reasonably well for simulating a floating system with practical mooring designs and predicting the corresponding dynamic loads on the mooring lines. Further analyses on improving coupling efficiency and the feasibility of applying the numerical method to simulate WEC systems with more complex mooring configuration are still needed.

  7. Modeling seasonal leptospirosis transmission and its association with rainfall and temperature in Thailand using time-series and ARIMAX analyses.

    Science.gov (United States)

    Chadsuthi, Sudarat; Modchang, Charin; Lenbury, Yongwimon; Iamsirithaworn, Sopon; Triampo, Wannapong

    2012-07-01

    To study the number of leptospirosis cases in relation to the seasonal pattern, and its association with climate factors. Time series analysis was used to study the time variations in the number of leptospirosis cases. The Autoregressive Integrated Moving Average (ARIMA) model was used for data curve fitting and prediction of subsequent leptospirosis cases. We found that the amount of rainfall was correlated with leptospirosis cases in both regions of interest, namely the northern and northeastern regions of Thailand, while temperature played a role in the northeastern region only. The use of the multivariate ARIMA (ARIMAX) model showed that factoring in rainfall (with an 8-month lag) yields the best model for the northern region, while the model factoring in rainfall (with a 10-month lag) and temperature (with an 8-month lag) was the best for the northeastern region. The models are able to show the trend in leptospirosis cases and closely fit the recorded data in both regions. The models can also be used to predict the next seasonal peak quite accurately. Copyright © 2012 Hainan Medical College. Published by Elsevier B.V. All rights reserved.
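
    For readers who want to reproduce the general shape of such an analysis, the sketch below fits an ARIMAX-style model with a lagged rainfall regressor using statsmodels. The monthly series are synthetic stand-ins, not the Thai surveillance data; the 8-month lag simply mirrors the northern-region model described above.

    ```python
    # ARIMAX sketch on synthetic data: monthly case counts regressed on
    # rainfall lagged by 8 months, plus ARMA(1,1) errors.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(1)
    n = 120  # ten years of monthly data (invented)
    rain = 100 + 50 * np.sin(2 * np.pi * np.arange(n) / 12) + rng.normal(0, 10, n)
    cases = 20 + 0.15 * np.roll(rain, 8) + rng.normal(0, 3, n)  # built-in 8-month lag

    df = pd.DataFrame({"cases": cases,
                       "rain_lag8": pd.Series(rain).shift(8)}).dropna()

    model = SARIMAX(df["cases"], exog=df[["rain_lag8"]], order=(1, 0, 1))
    fit = model.fit(disp=False)
    print(fit.summary().tables[1])        # coefficient on lagged rainfall

    # In practice future rainfall would be supplied here; the last observed
    # year is reused below purely as a stand-in.
    future_exog = df[["rain_lag8"]].tail(12)
    forecast = fit.forecast(steps=12, exog=future_exog)
    print(forecast.head())
    ```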

  8. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.; Mai, Paul Martin; Thingbaijam, Kiran Kumar; Razafindrakoto, H. N. T.; Genton, Marc G.

    2014-01-01

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.
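
    The loss-differential idea at the heart of the SPCT can be demonstrated in a few lines. The sketch below uses synthetic slip fields and a squared-error loss; note that the actual SPCT accounts for the spatial covariance of the loss-differential field, whereas the plain one-sample t-test here ignores it and is for illustration only.

    ```python
    # Simplified loss-differential comparison of two "slip models" against a
    # reference field (synthetic data; not the authors' implementation).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    ref = rng.normal(size=(64, 64))                 # reference slip field
    model_a = ref + rng.normal(0, 0.3, ref.shape)   # inversion A: small errors
    model_b = ref + rng.normal(0, 0.6, ref.shape)   # inversion B: larger errors

    loss = lambda m: (m - ref) ** 2                 # squared-error loss
    D = (loss(model_a) - loss(model_b)).ravel()     # loss-differential field

    t, p = stats.ttest_1samp(D, popmean=0.0)
    print(f"mean loss differential {D.mean():+.3f}, p = {p:.2e}")
    # mean(D) < 0 with small p: model A fits the reference significantly better.
    ```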

  10. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast to accurately predict contrail formation over the contiguous United States (CONUS) is created by using meteorological data based on hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and from the Rapid Update Cycle (RUC) as well as GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
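
    As a schematic of the kind of logistic model described above, the snippet below fits contrail occurrence against two synthetic predictors (humidity and temperature stand-ins). The variables, coefficients, and data are invented; they are not the RUC/ARPS fields used in the study.

    ```python
    # Toy logistic contrail-occurrence model (synthetic data for illustration).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(3)
    n = 5000
    rhi = rng.uniform(40, 140, n)      # relative humidity w.r.t. ice, % (assumed)
    temp = rng.uniform(-70, -30, n)    # upper-tropospheric temperature, C

    # Invented "true" relationship used to generate labels.
    logit = 0.08 * (rhi - 100) - 0.10 * (temp + 50)
    y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))  # persistent contrail yes/no

    X = np.column_stack([rhi, temp])
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = LogisticRegression().fit(X_tr, y_tr)
    print("hold-out accuracy:", clf.score(X_te, y_te))  # cf. the ~75% reported
    ```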

  11. Race, Gender, and Reseacher Positionality Analysed Through Memory Work

    DEFF Research Database (Denmark)

    Andreassen, Rikke; Myong, Lene

    2017-01-01

    Drawing upon feminist standpoint theory and memory work, the authors analyse racial privilege by investigating their own racialized and gendered subjectifications as academic researchers. By looking at their own experiences within academia, they show how authority and agency are contingent upon...

  12. Comparative biochemical analyses of venous blood and peritoneal fluid from horses with colic using a portable analyser and an in-house analyser.

    Science.gov (United States)

    Saulez, M N; Cebra, C K; Dailey, M

    2005-08-20

    Fifty-six horses with colic were examined over a period of three months. The concentrations of glucose, lactate, sodium, potassium and chloride, and the pH of samples of blood and peritoneal fluid, were determined with a portable clinical analyser and with an in-house analyser and the results were compared. Compared with the in-house analyser, the portable analyser gave higher pH values for blood and peritoneal fluid with greater variability in the alkaline range, and lower pH values in the acidic range, lower concentrations of glucose in the range below 8.3 mmol/l, and lower concentrations of lactate in venous blood in the range below 5 mmol/l and in peritoneal fluid in the range below 2 mmol/l, with less variability. On average, the portable analyser underestimated the concentrations of lactate and glucose in peritoneal fluid in comparison with the in-house analyser. Its measurements of the concentrations of sodium and chloride in peritoneal fluid had a higher bias and were more variable than the measurements in venous blood, and its measurements of potassium in venous blood and peritoneal fluid had a smaller bias and less variability than the measurements made with the in-house analyser.
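
    Method-comparison results like these (bias plus variability between a portable and an in-house analyser) are commonly summarized with a Bland-Altman analysis. The paper does not state that this exact procedure was used, and the paired lactate readings below are invented, so the snippet only illustrates how such a bias and its limits of agreement are computed.

    ```python
    # Bland-Altman summary of paired measurements (invented lactate values).
    import numpy as np

    in_house = np.array([1.1, 2.4, 3.0, 4.2, 1.8, 2.9, 3.6, 5.0])  # mmol/l
    portable = np.array([0.9, 2.1, 2.8, 4.0, 1.5, 2.7, 3.3, 4.9])  # mmol/l

    diff = portable - in_house
    bias = diff.mean()                    # negative: portable underestimates
    loa = 1.96 * diff.std(ddof=1)         # 95% limits of agreement
    print(f"bias {bias:+.2f} mmol/l, limits of agreement "
          f"{bias - loa:.2f} to {bias + loa:.2f} mmol/l")
    ```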

  13. Network-derived inhomogeneity in monthly rainfall analyses over western Tasmania

    International Nuclear Information System (INIS)

    Fawcett, Robert; Trewin, Blair; Barnes-Keoghan, Ian

    2010-01-01

    Monthly rainfall in the wetter western half of Tasmania was relatively poorly observed in the early to middle parts of the 20th century, and this causes a marked inhomogeneity in the operational gridded monthly rainfall analyses generated by the Australian Bureau of Meteorology up until the end of 2009. These monthly rainfall analyses were generated for the period 1900 to 2009 in two forms: a national analysis at 0.25° latitude-longitude resolution, and a southeastern Australia regional analysis at 0.1° resolution. For any given month, they used all the monthly data from the standard Bureau rainfall gauge network available in the Australian Data Archive for Meteorology. Since this network has changed markedly since Federation (1901), there is obvious scope for network-derived inhomogeneities in the analyses. In this study, we show that the topography-resolving techniques of the new Australian Water Availability Project analyses, adopted as the official operational analyses from the start of 2010, substantially diminish those inhomogeneities, while using largely the same observation network. One result is an improved characterisation of recent rainfall declines across Tasmania. The new analyses are available at two resolutions, 0.25° and 0.05°.

  14. Seismic analyses of structures. 1st draft

    International Nuclear Information System (INIS)

    David, M.

    1995-01-01

    The dynamic analysis presented in this paper refers to the seismic analysis of the main building of Paks NPP. The aim of the analysis was to determine the floor response spectra as response to seismic input. This analysis was performed with a 3-dimensional calculation model, and the floor response spectra were determined for a number of levels from the floor response time histories; no other adjustments were applied. The following results of the seismic analysis are presented: 3-dimensional finite element model; basic assumptions of the dynamic analyses; table of frequencies and included factors; modal masses for all modes; floor response spectra in all the selected nodes, with figures of indicated nodes and important nodes of free vibration.

  15. Seismic analyses of structures. 1st draft

    Energy Technology Data Exchange (ETDEWEB)

    David, M [David Consulting, Engineering and Design Office (Czech Republic)

    1995-07-01

    The dynamic analysis presented in this paper refers to the seismic analysis of the main building of Paks NPP. The aim of the analysis was to determine the floor response spectra as response to seismic input. This analysis was performed with a 3-dimensional calculation model, and the floor response spectra were determined for a number of levels from the floor response time histories; no other adjustments were applied. The following results of the seismic analysis are presented: 3-dimensional finite element model; basic assumptions of the dynamic analyses; table of frequencies and included factors; modal masses for all modes; floor response spectra in all the selected nodes, with figures of indicated nodes and important nodes of free vibration.

  16. Phenolic Acids from Wheat Show Different Absorption Profiles in Plasma: A Model Experiment with Catheterized Pigs

    DEFF Research Database (Denmark)

    Nørskov, Natalja; Hedemann, Mette Skou; Theil, Peter Kappel

    2013-01-01

    The concentration and absorption of the nine phenolic acids of wheat were measured in a model experiment with catheterized pigs fed whole grain wheat and wheat aleurone diets. Six pigs in a repeated crossover design were fitted with catheters in the portal vein and mesenteric artery to study...... the absorption of phenolic acids. The difference between the artery and the vein for all phenolic acids was small, indicating that the release of phenolic acids in the large intestine was not sufficient to create a porto-arterial concentration difference. Although the porto-arterial difference was small...... consumed. Benzoic acid derivatives showed low concentration in the plasma (phenolic acids, likely because it is an intermediate in the phenolic acid metabolism...

  17. A Cyber-Attack Detection Model Based on Multivariate Analyses

    Science.gov (United States)

    Sakai, Yuto; Rinsaka, Koichiro; Dohi, Tadashi

    In the present paper, we propose a novel cyber-attack detection model that applies two multivariate-analysis methods to the audit data observed on a host machine. The statistical techniques used here are the well-known Hayashi's quantification method IV and the cluster analysis method. We quantify the observed qualitative audit event sequences via the quantification method IV, and collect similar audit event sequences into the same groups based on the cluster analysis. It is shown in simulation experiments that our model can improve cyber-attack detection accuracy in some realistic cases where both normal and attack activities are intermingled.
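
    The pipeline shape (quantify qualitative event sequences, then group them by cluster analysis) can be mimicked in a few lines. A simple bag-of-events embedding stands in for Hayashi's quantification method IV here, and the audit sequences are invented, so this is a structural sketch rather than the authors' method.

    ```python
    # Structural sketch: embed audit event sequences, then cluster them.
    # (Bag-of-events counts replace quantification method IV; data invented.)
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.cluster import KMeans

    audit_logs = [
        "login read read logout",
        "login read write logout",
        "login su read write write",    # suspicious privilege use
        "login su write write write",   # suspicious privilege use
    ]
    X = CountVectorizer().fit_transform(audit_logs)  # quantify event sequences
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    print(labels)  # normal vs. attack-like sessions fall into two groups
    ```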

  18. Show and Tell: Video Modeling and Instruction Without Feedback Improves Performance but Is Not Sufficient for Retention of a Complex Voice Motor Skill.

    Science.gov (United States)

    Look, Clarisse; McCabe, Patricia; Heard, Robert; Madill, Catherine J

    2018-02-02

    Modeling and instruction are frequent components of both traditional and technology-assisted voice therapy. This study investigated the value of video modeling and instruction in the early acquisition and short-term retention of a complex voice task without external feedback. Thirty participants were randomized to two conditions and trained to produce a vocal siren over 40 trials. One group received a model and verbal instructions, the other group received a model only. Sirens were analyzed for phonation time, vocal intensity, cepstral peak prominence, peak-to-peak time, and root-mean-square error at five time points. The model and instruction group showed significant improvement on more outcome measures than the model-only group. There was an interaction effect for vocal intensity, which showed that instructions facilitated greater improvement when they were first introduced. However, neither group reproduced the model's siren performance across all parameters or retained the skill 1 day later. Providing verbal instruction with a model appears more beneficial than providing a model only in the prepractice phase of acquiring a complex voice skill. Improved performance was observed; however, the higher level of performance was not retained after 40 trials in both conditions. Other prepractice variables may need to be considered. Findings have implications for traditional and technology-assisted voice therapy. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  19. Economical analyses of construction of a biomass boiler house

    International Nuclear Information System (INIS)

    Normak, A.

    2002-01-01

    To reduce energy costs we can fire our boiler with cheaper fuel, and one of the cheapest fuels is wood biomass. How to use cheaper wood biomass in heat generation, so as to decrease energy costs and increase the biomass share in the energy balance, is a very topical issue. Before deciding to build a biomass boiler house it is advisable to analyse the economic situation and work out the most profitable, efficient, reliable and ecological boiler plant design for the particular conditions. The best way to perform the analyses is to use the economic model presented. It saves time and gives an objective evaluation of the project. (author)

  20. One dimensional analysis model for condensation heat transfer in feed water heater

    International Nuclear Information System (INIS)

    Murase, Michio; Takamori, Kazuhide; Aihara, Tsuyoshi

    1998-01-01

    In order to simplify condensation heat transfer calculations for feed water heaters, one-dimensional (1D) analyses were compared with three-dimensional (3D) analyses. The results showed that average condensation heat transfer coefficients from 1D analyses with 1/2 rows of heat transfer tubes agreed with those from 3D analyses within 7%. Using the 1D analysis model, the effects of the pitch of the heat transfer tubes were evaluated. The results showed that the pitch did not have much effect on heat transfer rates and that the size of the heat transfer tube bundle could be decreased by using a small pitch. (author)
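
    The abstract does not give the model equations, but a classical building block for 1D condensation calculations of this kind is Nusselt film condensation on a horizontal tube combined with a row-correction factor for tubes stacked in a vertical column. The sketch below uses rough saturated-steam/water properties and an assumed tube diameter; it is a textbook illustration, not the authors' model.

    ```python
    # Nusselt film condensation on a horizontal tube, with the classical
    # row-averaged coefficient h_N = h_1 * N**(-1/4) for N tubes in a vertical
    # column (Kern's alternative uses N**(-1/6)). Properties are rough values
    # for saturated steam/water near 100 C; the geometry is assumed.
    g = 9.81                      # m/s^2
    rho_l, rho_v = 958.0, 0.60    # liquid / vapour density, kg/m^3
    h_fg = 2.257e6                # latent heat, J/kg
    k_l = 0.68                    # liquid conductivity, W/m-K
    mu_l = 2.8e-4                 # liquid viscosity, Pa-s
    D = 0.019                     # tube outer diameter, m (assumed)
    dT = 10.0                     # T_sat - T_wall, K (assumed)

    h1 = 0.725 * ((rho_l * (rho_l - rho_v) * g * h_fg * k_l**3)
                  / (mu_l * dT * D)) ** 0.25
    for N in (1, 2, 4, 8):
        print(f"N = {N}: average h = {h1 * N ** -0.25:,.0f} W/m2-K")
    ```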

  1. SCALE Graphical Developments for Improved Criticality Safety Analyses

    International Nuclear Information System (INIS)

    Barnett, D.L.; Bowman, S.M.; Horwedel, J.E.; Petrie, L.M.

    1999-01-01

    New computer graphic developments at Oak Ridge National Laboratory (ORNL) are being used to provide visualization of criticality safety models and calculational results as well as tools for criticality safety analysis input preparation. The purpose of this paper is to present the status of current development efforts to continue to enhance the SCALE (Standardized Computer Analyses for Licensing Evaluations) computer software system. Applications for criticality safety analysis in the areas of three-dimensional (3-D) model visualization, input preparation and execution via a graphical user interface (GUI), and two-dimensional (2-D) plotting of results are discussed.

  2. Elemental abundance and analyses with coadded DAO spectrograms

    International Nuclear Information System (INIS)

    Adelman, S.J.

    1987-01-01

    One can improve the quality of elemental abundance analyses by using higher signal-to-noise data than has been the practice at high resolution. The procedures developed at the Dominion Astrophysical Observatory to coadd high-dispersion coude spectrograms are used with a minimum of ten 6.5 Å mm⁻¹ IIa-O spectrograms of each of three field horizontal-branch (FHB) A stars to increase the signal-to-noise ratio of the photographic data over a considerable wavelength region. Fine analyses of the sharp-lined prototype FHB stars HD 109995 and 161817 show an internal consistency which justifies this effort. Their photospheric elemental abundances are similar to those of Population II globular cluster giants. (author)

  3. Modelling the geographic distribution of wind power and the impact on transmission needs

    DEFF Research Database (Denmark)

    Østergaard, Poul Alberg

    2003-01-01

    Through energy systems modelling, transmission systems modelling and geographical modelling, the article examines the sensitivity of the response of the transmission system to the geographic distribution of wind power and, in particular, the sensitivity of the results to the accuracy...... of the distribution modelled. The results show that the accuracy of the geographic modelling, while important for the analysis of specific single transmission lines, is not important for the analysis of the general response of the transmission system. The analyses thus corroborate previous analyses that demonstrated...

  4. Unconscious analyses of visual scenes based on feature conjunctions.

    Science.gov (United States)

    Tachibana, Ryosuke; Noguchi, Yasuki

    2015-06-01

    To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. (c) 2015 APA, all rights reserved.

  5. An analyser for power plant operations

    International Nuclear Information System (INIS)

    Rogers, A.E.; Wulff, W.

    1990-01-01

    Safe and reliable operation of power plants is essential. Power plant operators need a forecast of what the plant will do when its current state is disturbed. The in-line plant analyser provides precisely this information at relatively low cost. The plant analyser scheme uses a mathematical model of the dynamic behaviour of the plant to establish a numerical simulation. Over a period of time, the simulation is calibrated with measurements from the particular plant in which it is used. The analyser then provides a reference against which to evaluate the plant's current behaviour. It can be used to alert the operator to any atypical excursions or combinations of readings that indicate malfunction or off-normal conditions that, as the Three Mile Island event suggests, are not easily recognised by operators. In a look-ahead mode, it can forecast the behaviour resulting from an intended change in settings or operating conditions. Then, when such changes are made, the plant's behaviour can be tracked against the forecast in order to assure that the plant is behaving as expected. It can be used to investigate malfunctions that have occurred and test possible adjustments in operating procedures. Finally, it can be used to consider how far from the limits of performance the elements of the plant are operating. Then by adjusting settings, the required power can be generated with as little stress as possible on the equipment. (6 figures) (Author)

  6. Theorising and Analysing Academic Labour

    Directory of Open Access Journals (Sweden)

    Thomas Allmer

    2018-01-01

    Full Text Available The aim of this article is to contextualise universities historically within capitalism and to analyse academic labour and the deployment of digital media theoretically and critically. It argues that the post-war expansion of the university can be considered as medium and outcome of informational capitalism and as a dialectical development of social achievement and advanced commodification. The article strives to identify the class position of academic workers, introduces the distinction between academic work and labour, discusses the connection between academic, information and cultural work, and suggests a broad definition of university labour. It presents a theoretical model of working conditions that helps to systematically analyse the academic labour process and to provide an overview of working conditions at universities. The paper furthermore argues for the need to consider the development of education technologies as a dialectics of continuity and discontinuity, discusses the changing nature of the forces and relations of production, and the impact on the working conditions of academics in the digital university. Based on Erik Olin Wright’s inclusive approach of social transformation, the article concludes with the need to bring together anarchist, social democratic and revolutionary strategies for establishing a socialist university in a commons-based information society.

  7. First study of correlation between oleic acid content and SAD gene polymorphism in olive oil samples through statistical and bayesian modeling analyses.

    Science.gov (United States)

    Ben Ayed, Rayda; Ennouri, Karim; Ercişli, Sezai; Ben Hlima, Hajer; Hanana, Mohsen; Smaoui, Slim; Rebai, Ahmed; Moreau, Fabienne

    2018-04-10

    Virgin olive oil is appreciated for its particular aroma and taste and is recognized worldwide for its nutritional value and health benefits. Olive oil contains a vast range of healthy compounds such as monounsaturated free fatty acids, especially oleic acid. The SAD.1 polymorphism, localized in the stearoyl-acyl carrier protein desaturase gene (SAD), was genotyped and shown to be associated with the oleic acid composition of olive oil samples. However, the effect of polymorphisms in fatty acid-related genes on the distribution of monounsaturated and saturated fatty acids in Tunisian olive oil varieties is not understood. Seventeen Tunisian olive-tree varieties were selected for fatty acid content analysis by gas chromatography. The association of SAD.1 genotypes with fatty acid composition was studied by statistical and Bayesian modeling analyses. Fatty acid content analysis interestingly showed that some Tunisian virgin olive oil varieties could be classified as functional foods and nutraceuticals owing to their particular richness in oleic acid. In fact, the TT-SAD.1 genotype was found to be associated with a higher proportion of mono-unsaturated fatty acids (MUFA), mainly oleic acid (C18:1) (r = -0.79, p SAD.1 association with the oleic acid composition of olive oil was identified among the studied varieties. This correlation fluctuated between the studied varieties, which might elucidate the variability in lipidic composition among them, reflecting genetic diversity through differences in gene expression and biochemical pathways. The SAD locus would represent an excellent marker for identifying interesting virgin olive oil lipidic compositions.

  8. Can trial sequential monitoring boundaries reduce spurious inferences from meta-analyses?

    DEFF Research Database (Denmark)

    Thorlund, Kristian; Devereaux, P J; Wetterslev, Jørn

    2008-01-01

    BACKGROUND: Results from apparently conclusive meta-analyses may be false. A limited number of events from a few small trials and the associated random error may be under-recognized sources of spurious findings. The information size (IS, i.e. number of participants) required for a reliable......-analyses after each included trial and evaluated their results using a conventional statistical criterion (alpha = 0.05) and two-sided Lan-DeMets monitoring boundaries. We examined the proportion of false positive results and important inaccuracies in estimates of treatment effects that resulted from the two...... approaches. RESULTS: Using the random-effects model and final data, 12 of the meta-analyses yielded P > alpha = 0.05, and 21 yielded P < alpha = 0.05. The monitoring boundaries eliminated all false positives. Important inaccuracies in estimates were observed in 6 out of 21 meta-analyses using the conventional...
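
    To make the monitoring-boundary idea concrete, the snippet below evaluates an O'Brien-Fleming-type alpha-spending function of the kind used with Lan-DeMets boundaries. Exact boundaries require recursive numerical integration over the joint distribution of the interim z-statistics; the z ≈ z_{alpha/2}/sqrt(t) line used here is the common approximation, shown for illustration only.

    ```python
    # O'Brien-Fleming-type alpha-spending function evaluated at several
    # information fractions t, with the standard approximate boundaries.
    from scipy.stats import norm

    alpha = 0.05
    z_half = norm.ppf(1 - alpha / 2)               # 1.96 for alpha = 0.05

    for t in (0.2, 0.4, 0.6, 0.8, 1.0):            # fraction of required IS
        spent = 2 - 2 * norm.cdf(z_half / t**0.5)  # cumulative alpha spent
        z_bound = z_half / t**0.5                  # approximate boundary
        print(f"t = {t:.1f}: alpha spent {spent:.4f}, boundary z = {z_bound:.2f}")
    # Early looks demand very extreme z-values, which is what suppresses the
    # false positives described above.
    ```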

  9. Target gene analyses of 39 amelogenesis imperfecta kindreds

    Science.gov (United States)

    Chan, Hui-Chen; Estrella, Ninna M. R. P.; Milkovich, Rachel N.; Kim, Jung-Wook; Simmer, James P.; Hu, Jan C-C.

    2012-01-01

    Previously, mutational analyses identified six disease-causing mutations in 24 amelogenesis imperfecta (AI) kindreds. We have since expanded the number of AI kindreds to 39, and performed mutation analyses covering the coding exons and adjoining intron sequences for the six proven AI candidate genes [amelogenin (AMELX), enamelin (ENAM), family with sequence similarity 83, member H (FAM83H), WD repeat containing domain 72 (WDR72), enamelysin (MMP20), and kallikrein-related peptidase 4 (KLK4)] and for ameloblastin (AMBN) (a suspected candidate gene). All four of the X-linked AI families (100%) had disease-causing mutations in AMELX, suggesting that AMELX is the only gene involved in the aetiology of X-linked AI. Eighteen families showed an autosomal-dominant pattern of inheritance. Disease-causing mutations were identified in 12 (67%): eight in FAM83H, and four in ENAM. No FAM83H coding-region or splice-junction mutations were identified in three probands with autosomal-dominant hypocalcification AI (ADHCAI), suggesting that a second gene may contribute to the aetiology of ADHCAI. Six families showed an autosomal-recessive pattern of inheritance, and disease-causing mutations were identified in three (50%): two in MMP20, and one in WDR72. No disease-causing mutations were found in 11 families with only one affected member. We conclude that mutation analyses of the current candidate genes for AI have about a 50% chance of identifying the disease-causing mutation in a given kindred. PMID:22243262

  10. Show-Bix &

    DEFF Research Database (Denmark)

    2014-01-01

    The anti-reenactment 'Show-Bix &' consists of 5 slide projectors, a dial phone, quintophonic sound, and interactive elements. A responsive interface will enable the slide projectors to show copies of original slides from the Show-Bix piece ”March på Stedet”, 265 images in total. The copies are...

  11. Computational Fluid Dynamics (CFD) Analyses in Support of Space Shuttle Main Engine (SSME) Heat Exchanger (HX) Vane Cracking Investigation

    Science.gov (United States)

    Garcia, Roberto; Benjamin, Theodore G.; Cornelison, J.; Fredmonski, A. J.

    1993-01-01

    Integration issues involved with installing the alternate turbopump (ATP) High Pressure Oxygen Turbopump (HPOTP) into the SSME have raised questions regarding the flow in the HPOTP turnaround duct (TAD). Steady-state Navier-Stokes CFD analyses have been performed by NASA and Pratt & Whitney (P&W) to address these questions. The analyses have consisted of two-dimensional axisymmetric calculations done at Marshall Space Flight Center and three-dimensional calculations performed at P&W. These analyses have identified flowfield differences between the baseline ATP and the Rocketdyne configurations. The results show that the baseline ATP configuration represents a more severe environment to the inner HX guide vane. This vane has limited life when tested in conjunction with the ATP but infinite life when tested with the current SSME HPOTP. The CFD results have helped interpret test results and have been used to assess proposed redesigns. This paper includes details of the axisymmetric model, its results, and its contribution towards resolving the problem.

  12. Assessing Diabetes Self-Management with the Diabetes Self-Management Questionnaire (DSMQ) Can Help Analyse Behavioural Problems Related to Reduced Glycaemic Control.

    Directory of Open Access Journals (Sweden)

    Andreas Schmitt

    Full Text Available To appraise the Diabetes Self-Management Questionnaire's (DSMQ's) measurement of diabetes self-management as a statistical predictor of glycaemic control relative to the widely used SDSCA. 248 patients with type 1 diabetes and 182 patients with type 2 diabetes were cross-sectionally assessed using the two self-report measures of diabetes self-management, the DSMQ and the SDSCA; the scales were used as competing predictors of HbA1c. We developed a structural equation model of self-management as measured by the DSMQ and analysed the amount of variation explained in HbA1c; an analogous model was developed for the SDSCA. The structural equation models of self-management and glycaemic control showed very good fit to the data. The DSMQ's measurement of self-management showed associations with HbA1c of -0.53 for type 1 and -0.46 for type 2 diabetes (both P < 0.001), explaining 21% and 28% of variation in glycaemic control, respectively. The SDSCA's measurement showed associations with HbA1c of -0.14 (P = 0.030) for type 1 and -0.31 (P = 0.003) for type 2 diabetes, explaining 2% and 10% of glycaemic variation. Predictive power for glycaemic control was significantly higher for the DSMQ (P < 0.001). This study supports the DSMQ as the preferred tool when analysing self-reported behavioural problems related to reduced glycaemic control. The scale may be useful for clinical assessments of patients with suboptimal diabetes outcomes or research on factors affecting associations between self-management behaviours and glycaemic control.

  13. Thermoeconomic and environmental analyses of a low water consumption combined steam power plant and refrigeration chillers – Part 1: Energy and economic modelling and analysis

    International Nuclear Information System (INIS)

    Ifaei, Pouya; Rashidi, Jouan; Yoo, ChangKyoo

    2016-01-01

    Highlights: • Proposing two energy systems by combining refrigeration chillers and power plants. • Model-based comparison of the systems through energy and economic standpoints. • Reducing total annual costs of the base system up to 4.7% by process integration. • Decreasing the water loss in wet cooling towers by up to 18% in the proposed system. • Suggesting a water-fuel economic management strategy based on parametric analysis. - Abstract: Two novel configurations are proposed to replace the conventional Rankine cycle based steam power plants (SPP) with natural draft wet cooling towers (NDWCT) as cooling units. Closed feedwater heater unit of the base SPP-NDWCT system is eliminated in order to combine a vapor compression refrigeration (VCR) and an absorption heat pump (ABHP) with the base SPP-NDWCT system. Both VCR-SPP-NDWCT and ABHP-SPP-NDWCT systems are integrated to decrease the NDWCT load which could result in water losses decrease. In part one of the presented two-part paper, model-based energy and economic analyses are performed to compare systems performance and applicability. The temperature difference at pinch point and temperature difference between the hot and cold sides of the heat exchangers which were used for systems integration in VCR-SPP-NDWCT, and the absorber pressure and temperature in ABHP-SPP-NDWCT system are studied using parametric analysis procedure. A water-fuel management strategy is also introduced for the ABHP-SPP-NDWCT system according to the influence of the absorber pressure changes on system water and fuel consumption. In part 2, environmental and thermoeconomic analyses are performed to complete a comprehensive study on designing steam power plants. The results of part 1 showed that water losses and total annual costs decreased by 1–18% and 0–4.7% for the ABHP-SPP-NDWCT system but increased by 11% and 60% for the VCR-SPP-NDWCT system, respectively.

  14. Analyses and simulation of the spatial pattern of land utilization in Guangzhou City

    Science.gov (United States)

    Zhang, Xin-chang; Zhang, Wen-jiang; Ma, Kun

    2006-10-01

    Based on Landsat TM remote sensing images from 1990 and 2000, we analyse the temporal and spatial pattern characteristics of land use in Guangzhou city in the 1990s. We also simulate scenarios of the land-use pattern in 2010 by integrating a Markov process into a cellular automata model. The results show that the area of construction land increased rapidly during the last ten years of the 20th century, while arable land, woodland and unused land areas decreased and orchard and water areas changed little. In the first ten years of the 21st century, the land use pattern keeps the trend of the 1990s: construction land continues to increase rapidly, arable land and unused land areas continue to decrease rapidly, and woodland, orchard and water areas remain steady. Research shows that the extent of the urban area has increased exponentially in Guangzhou city, and no evidence shows that the rate of arable land loss will slow down in the near future. It is therefore necessary to enhance the control functions of land use planning and take active measures to protect arable land.
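
    The Markov half of a Markov-cellular-automata projection reduces to multiplying class shares by a transition matrix; the cellular automata then place the projected changes spatially. The transition probabilities and the 2000 land-use shares below are invented for illustration and are not the values estimated from the Landsat imagery.

    ```python
    # Markov-chain projection of land-use shares (all numbers hypothetical).
    import numpy as np

    classes = ["arable", "woodland", "orchard", "water", "construction", "unused"]
    # P[i, j] = probability that class i converts to class j over one decade
    P = np.array([
        [0.70, 0.02, 0.03, 0.01, 0.22, 0.02],
        [0.02, 0.88, 0.02, 0.01, 0.06, 0.01],
        [0.03, 0.02, 0.90, 0.01, 0.03, 0.01],
        [0.01, 0.01, 0.01, 0.95, 0.01, 0.01],
        [0.00, 0.00, 0.00, 0.00, 1.00, 0.00],  # construction rarely reverts
        [0.05, 0.05, 0.02, 0.01, 0.50, 0.37],
    ])
    share_2000 = np.array([0.30, 0.25, 0.10, 0.08, 0.17, 0.10])  # assumed shares

    share_2010 = share_2000 @ P          # projected shares one decade later
    for c, s0, s1 in zip(classes, share_2000, share_2010):
        print(f"{c:12s} {s0:.3f} -> {s1:.3f}")
    ```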

  15. Inactivated ORF virus shows antifibrotic activity and inhibits human hepatitis B virus (HBV) and hepatitis C virus (HCV) replication in preclinical models.

    Science.gov (United States)

    Paulsen, Daniela; Urban, Andreas; Knorr, Andreas; Hirth-Dietrich, Claudia; Siegling, Angela; Volk, Hans-Dieter; Mercer, Andrew A; Limmer, Andreas; Schumak, Beatrix; Knolle, Percy; Ruebsamen-Schaeff, Helga; Weber, Olaf

    2013-01-01

    Inactivated orf virus (iORFV), strain D1701, is a potent immune modulator in various animal species. We recently demonstrated that iORFV induces strong antiviral activity in animal models of acute and chronic viral infections. In addition, we found D1701-mediated antifibrotic effects in different rat models of liver fibrosis. In the present study, we compare iORFV derived from two different strains of ORFV, D1701 and NZ2, respectively, with respect to their antifibrotic potential as well as their potential to induce an antiviral response controlling infections with the hepatotropic pathogens hepatitis C virus (HCV) and hepatitis B virus (HBV). Both strains of ORFV showed anti-viral activity against HCV in vitro and against HBV in a transgenic mouse model without signs of necro-inflammation in vivo. Our experiments suggest that the absence of liver damage is potentially mediated by iORFV-induced downregulation of antigen cross-presentation in liver sinus endothelial cells. Furthermore, both strains showed significant anti-fibrotic activity in rat models of liver fibrosis. iORFV strain NZ2 appeared more potent compared to strain D1701 with respect to both its antiviral and antifibrotic activity on the basis of dosages estimated by titration of active virus. These results show a potential therapeutic approach against two important human liver pathogens HBV and HCV that independently addresses concomitant liver fibrosis. Further studies are required to characterize the details of the mechanisms involved in this novel therapeutic principle.

  16. Inactivated ORF virus shows antifibrotic activity and inhibits human hepatitis B virus (HBV) and hepatitis C virus (HCV) replication in preclinical models.

    Directory of Open Access Journals (Sweden)

    Daniela Paulsen

    Full Text Available Inactivated orf virus (iORFV), strain D1701, is a potent immune modulator in various animal species. We recently demonstrated that iORFV induces strong antiviral activity in animal models of acute and chronic viral infections. In addition, we found D1701-mediated antifibrotic effects in different rat models of liver fibrosis. In the present study, we compare iORFV derived from two different strains of ORFV, D1701 and NZ2, respectively, with respect to their antifibrotic potential as well as their potential to induce an antiviral response controlling infections with the hepatotropic pathogens hepatitis C virus (HCV) and hepatitis B virus (HBV). Both strains of ORFV showed anti-viral activity against HCV in vitro and against HBV in a transgenic mouse model without signs of necro-inflammation in vivo. Our experiments suggest that the absence of liver damage is potentially mediated by iORFV-induced downregulation of antigen cross-presentation in liver sinus endothelial cells. Furthermore, both strains showed significant anti-fibrotic activity in rat models of liver fibrosis. iORFV strain NZ2 appeared more potent compared to strain D1701 with respect to both its antiviral and antifibrotic activity on the basis of dosages estimated by titration of active virus. These results show a potential therapeutic approach against two important human liver pathogens HBV and HCV that independently addresses concomitant liver fibrosis. Further studies are required to characterize the details of the mechanisms involved in this novel therapeutic principle.

  17. Cost-effectiveness and budget impact analyses of a colorectal cancer screening programme in a high adenoma prevalence scenario using MISCAN-Colon microsimulation model.

    Science.gov (United States)

    Arrospide, Arantzazu; Idigoras, Isabel; Mar, Javier; de Koning, Harry; van der Meulen, Miriam; Soto-Gordoa, Myriam; Martinez-Llorente, Jose Miguel; Portillo, Isabel; Arana-Arri, Eunate; Ibarrondo, Oliver; Lansdorp-Vogelaar, Iris

    2018-04-25

    The Basque Colorectal Cancer Screening Programme began in 2009 and its implementation has been complete since 2013. Faecal immunological testing was used for screening in individuals between 50 and 69 years old. Colorectal cancer in the Basque Country is characterized by unusual epidemiological features, given that colorectal cancer incidence is similar to that of other European countries while adenoma prevalence is higher. The objective of our study was to evaluate the programme economically via cost-effectiveness and budget impact analyses with microsimulation models. We applied the Microsimulation Screening Analysis (MISCAN)-Colon model to predict trends in colorectal cancer incidence and mortality and to quantify the short- and long-term effects and costs of the Basque Colorectal Cancer Screening Programme. The model was calibrated to the Basque demographics in 2008 and to age-specific colorectal cancer incidence data in the Basque Cancer Registry from 2005 to 2008, before the screening began. The model was also calibrated to the high adenoma prevalence observed for the Basque population in a previously published study. The multi-cohort approach used in the model included all the cohorts in the programme during 30 years of implementation, with lifetime follow-up. Unit costs were obtained from the Basque Health Service, and both cost-effectiveness analysis and budget impact analysis were carried out. The goodness-of-fit of the model adaptation to observed programme data served as evidence of validation. In the cost-effectiveness analysis, the savings from treatment were larger than the added costs due to screening. Thus, the Basque programme was dominant compared to no screening, as life expectancy increased by 29.3 days per person. The savings in the budget analysis appeared 10 years after the complete implementation of the programme. The average annual budget was €73.4 million from year 2023 onwards. This economic evaluation showed a screening intervention with a major health gain
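
    The decision rule behind the "dominant" conclusion can be shown with a back-of-envelope incremental analysis. All numbers below are hypothetical placeholders, not MISCAN-Colon outputs; they merely illustrate that a negative incremental cost with a positive incremental effect means screening dominates no screening.

    ```python
    # Incremental cost-effectiveness sketch (all values hypothetical).
    cost_no_screen = {"screening": 0.0, "treatment": 900.0}   # EUR per person
    cost_screen = {"screening": 120.0, "treatment": 640.0}    # EUR per person
    ly_no_screen, ly_screen = 18.000, 18.080                  # life-years

    d_cost = sum(cost_screen.values()) - sum(cost_no_screen.values())
    d_ly = ly_screen - ly_no_screen
    print(f"incremental cost {d_cost:+.0f} EUR, "
          f"incremental effect {d_ly:+.3f} life-years")

    if d_cost <= 0 and d_ly > 0:
        print("screening dominates no screening")   # cheaper AND more effective
    else:
        print(f"ICER = {d_cost / d_ly:,.0f} EUR per life-year gained")
    ```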

  18. Analyses of hydrodynamic effects of large sodium-water reactions

    International Nuclear Information System (INIS)

    Sakano, K.; Shindo, Y.; Koishikawa, A.; Maekawa, I.

    1977-01-01

    Large-leak sodium-water reactions that would occur in a steam generator of an LMFBR cause abrupt changes in the pressure and velocity of the fluid in the secondary sodium system and relief system. This paper describes SOWACS-III together with its model and method. Results of analyses are also given, including a comparison with experimental results for the initial pressure spike. SOWACS-III treats the system consisting of the steam generator, vessel, valve, pump and pipe, and uses the following models and methods. (1) Components are assumed to be one-dimensional. (2) Pressure wave propagation near a reaction zone, where hydrogen is generated, is analyzed in spherical co-ordinates (sphere-cylinder model). (3) A moving boundary is formed by contact of sodium with another fluid such as hydrogen or nitrogen. The boundary travels without mixing of sodium and the other fluid through the boundary (boundary tracking model). The boundary can also be treated as not moving from its original place (fixed boundary model). (4) Pressure wave propagation is analyzed by the explicit method of characteristics in a one-dimensional Eulerian co-ordinate. (5) Flow-induced force is analyzed by momentum balance. (6) The lateral motion of the relief piping caused by the force is analyzed by the NASTRAN code. Analyses were carried out for the large sodium-water reaction experiments in the SWAT-3 rig of PNC using the sphere-cylinder model. The calculated pressure spike in the reaction vessel was compared with the measured one for a few milliseconds after water injection. The calculated and measured values were 6.4 ata and 6.7 ata for peak pressure and 0.6 ms and 2.8 ms for rise time, respectively.

  19. VESPA: Very large-scale Evolutionary and Selective Pressure Analyses

    Directory of Open Access Journals (Sweden)

    Andrew E. Webb

    2017-06-01

    Full Text Available Background Large-scale molecular evolutionary analyses of protein coding sequences require a number of preparatory inter-related steps, from finding gene families, to generating alignments and phylogenetic trees, and assessing selective pressure variation. Each phase of these analyses can represent significant challenges, particularly when working with entire proteomes (all protein coding sequences in a genome) from a large number of species. Methods We present VESPA, software capable of automating a selective pressure analysis using codeML in addition to the preparatory analyses and summary statistics. VESPA is written in Python and Perl and is designed to run within a UNIX environment. Results We have benchmarked VESPA and our results show that the method is consistent, performs well on both large scale and smaller scale datasets, and produces results in line with previously published datasets. Discussion Large-scale gene family identification, sequence alignment, and phylogeny reconstruction are all important aspects of large-scale molecular evolutionary analyses. VESPA provides flexible software for simplifying these processes along with downstream selective pressure variation analyses. The software automatically interprets results from codeML and produces simplified summary files to assist the user in better understanding the results. VESPA may be found at the following website: http://www.mol-evol.org/VESPA.

  20. Business models for telehealth in the US: analyses and insights

    Directory of Open Access Journals (Sweden)

    Pereira F

    2017-02-01

    Full Text Available Francis Pereira Data Sciences and Operations, Marshall School of Business, University of Southern California, Los Angeles, CA, USA Abstract: A growing shortage of medical doctors and nurses, coupled with increasing life expectancy, is generating greater cost pressures on health care, in the US and globally. In this respect, telehealth can help alleviate these pressures, as well as extend medical services to underserved or unserved areas. However, its relatively slow adoption in the US, as well as in other markets, suggests the presence of barriers and challenges. The use of a business model framework helps identify the value proposition of telehealth as well as these challenges, which include identifying the right revenue model, organizational structure, and, perhaps more importantly, the stakeholders in the telehealth ecosystem. Successful and cost-effective deployment of telehealth requires a redefinition of the ecosystem and a comprehensive review of all benefits and beneficiaries of such a system; hence a reassessment of all the stakeholders that could benefit from such a system, beyond the traditional patient–health provider–insurer model, and thus of "who should pay" for such a system, together with the driving efforts of a "keystone" player in developing this initiative, would help. Keywords: telehealth, business model framework, stakeholders, ecosystem, VISOR business model

  1. Mechanical analyses on the digital behaviour of the Tokay gecko (Gekko gecko) based on a multi-level directional adhesion model.

    Science.gov (United States)

    Wu, Xuan; Wang, Xiaojie; Mei, Tao; Sun, Shaoming

    2015-07-08

    This paper proposes a multi-level hierarchical model for the Tokay gecko (Gekko gecko) adhesive system and analyses the digital behaviour of G. gecko at the macro/meso scale. The model describes the structures of G. gecko's adhesive system from the nano-level spatulae to the sub-millimetre-level lamella. The G. gecko seta is modelled as an inextensible fibril based on Euler's elastica theory. Considering the side contact of the spatular pads of the seta on a flat, rigid substrate, the directional adhesion behaviour of the seta has been investigated. The lamella-induced attachment and detachment have been modelled to simulate the active digital hyperextension (DH) and digital gripping (DG) phenomena. The results suggest that only a tiny angular displacement, within 0.25°, of the lamellar proximal end is needed to induce a fast transition from attachment to detachment or vice versa. Active DH helps release the torque to induce setal non-sliding detachment, while DG helps apply torque to make the setal adhesion stable. The lamella plays a key role in saving energy during detachment, adapting the gecko to its habitat, and provides another adhesive function which differs from the friction-dependent setal adhesion system controlled by the dynamics of G. gecko's body.
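
    The setal mechanics rest on the planar elastica. For orientation, a minimal generic statement is given below, assuming a terminal force of magnitude F and tangent angle theta measured from its line of action; the paper's multi-level model adds side contact of the spatular pads and lamellar coupling, which are not reproduced here.

        % Generic planar elastica for an inextensible fibril of bending
        % stiffness EI under a terminal force F; theta(s) is the tangent
        % angle measured from the line of action of F, s the arc length.
        \begin{align}
          EI\,\frac{\mathrm{d}^{2}\theta}{\mathrm{d}s^{2}} + F\sin\theta(s) &= 0, \\
          \frac{\mathrm{d}x}{\mathrm{d}s} = \cos\theta(s), \qquad
          \frac{\mathrm{d}y}{\mathrm{d}s} &= \sin\theta(s),
          \qquad 0 \le s \le L.
        \end{align}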

  2. SDMtoolbox 2.0: the next generation Python-based GIS toolkit for landscape genetic, biogeographic and species distribution model analyses

    Directory of Open Access Journals (Sweden)

    Jason L. Brown

    2017-12-01

    Full Text Available SDMtoolbox 2.0 is a software package for spatial studies of ecology, evolution, and genetics. The release of SDMtoolbox 2.0 allows researchers to use the most current ArcGIS and MaxEnt software, and reduces the amount of time that would be spent developing common solutions. The central aim of this software is to automate complicated and repetitive spatial analyses in an intuitive graphical user interface. One core tenet is to facilitate careful parameterization of species distribution models (SDMs) to maximize each model’s discriminatory ability and minimize overfitting. This includes careful processing of occurrence data, environmental data, and model parameterization. The program directly interfaces with MaxEnt, one of the most powerful and widely used species distribution modeling programs, although SDMtoolbox 2.0 is not limited to species distribution modeling or restricted to modeling in MaxEnt. Many of the SDM pre- and post-processing tools have ‘universal’ analogs for use with any modeling software. The current version contains a total of 79 scripts that harness the power of ArcGIS for macroecology, landscape genetics, and evolutionary studies. For example, these tools allow for biodiversity quantification (such as species richness or corrected weighted endemism), generation of least-cost paths and corridors among shared haplotypes, assessment of the significance of spatial randomizations, and enforcement of dispersal limitations of SDMs projected into future climates—to name only a few functions contained in SDMtoolbox 2.0. Lastly, dozens of generalized tools exist for batch processing and conversion of GIS data types or formats, which are broadly useful to any ArcMap user.

  3. Thermal conductivity degradation analyses of LWR MOX fuel by the quasi-two phase material model

    International Nuclear Information System (INIS)

    Kosaka, Yuji; Kurematsu, Shigeru; Kitagawa, Takaaki; Suzuki, Akihiro; Terai, Takayuki

    2012-01-01

    The temperature measurements of mixed oxide (MOX) and UO2 fuels during irradiation suggested that the thermal conductivity degradation rate of MOX fuel with burnup should be slower than that of UO2 fuel. In order to explain the difference in degradation rates, a quasi-two-phase material model is proposed to assess the thermal conductivity degradation of MIMAS MOX fuel, which takes into account the Pu-agglomerate distribution in the MOX fuel matrix as fabricated. As a result, the quasi-two-phase model calculation shows a gradual increase of the difference with burnup and predicts more than 10% higher thermal conductivity values around 75 GWd/t. While these results do not fully agree with the thermal conductivity degradation models implemented by some industrial fuel manufacturers, they are consistent with the results from the irradiation tests and indicate that the inhomogeneity of Pu content in the MOX fuel can be one of the major reasons for the moderation of the thermal conductivity degradation of MOX fuel. (author)
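
    The sketch below illustrates the general idea of such a quasi-two-phase estimate: a UO2-like matrix holding Pu-rich agglomerates, combined with the classical Maxwell-Eucken relation for a dispersed spherical phase. All coefficients are generic textbook-style values, not those of the paper's model.

        import numpy as np

        def k_phase(T, burnup, A0=0.04, B=2.2e-4, dA=1.5e-3):
            # Phonon conductivity k = 1/(A + B*T), W/m/K, with the lattice-
            # defect term A growing linearly with local burnup (GWd/t).
            return 1.0 / (A0 + dA * burnup + B * T)

        def maxwell_eucken(k_m, k_d, v_d):
            # Effective conductivity of a continuous matrix (k_m) holding a
            # volume fraction v_d of dispersed spherical inclusions (k_d).
            num = 2 * k_m + k_d - 2 * v_d * (k_m - k_d)
            den = 2 * k_m + k_d + v_d * (k_m - k_d)
            return k_m * num / den

        T, bu_pellet, v_agg = 1000.0, 75.0, 0.15  # K, GWd/t, assumed fraction
        # Fission power concentrates in the Pu-rich agglomerates, so the
        # matrix accumulates less burnup than the pellet average.
        bu_agg = 2.5 * bu_pellet
        bu_matrix = (bu_pellet - v_agg * bu_agg) / (1 - v_agg)

        k_homogeneous = k_phase(T, bu_pellet)
        k_two_phase = maxwell_eucken(k_phase(T, bu_matrix),
                                     k_phase(T, bu_agg), v_agg)
        print(f"homogeneous {k_homogeneous:.2f} vs "
              f"quasi-two-phase {k_two_phase:.2f} W/m/K")

    With these placeholder numbers the two-phase estimate comes out slightly above the homogeneous one, matching the direction of the effect described in the abstract.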

  4. Performance Analyses in an Assistive Technology Service Delivery Process

    DEFF Research Database (Denmark)

    Petersen, Anne Karin

    Performance Analyses in an Assistive Technology Service Delivery Process. Keywords: process model, occupational performance, assistive technologies. The poster is about teaching students, using models and theory in education and practice. It is related to the occupational therapy process and professional... of top-down, client-centred and activity-based interventions, ERGO/Munksgaard; Fisher, A. & Griswold, L. A., 2014. Performance Skills. In: B. Schell (ed.) 2014, Willard & Spackman's Occupational Therapy, 12th ed., pp. 249-264; Cook, A. M. & Polgar, J. M. (2015) Assistive Technologies...

  5. Children with dyslexia show cortical hyperactivation in response to increasing literacy processing demands

    Directory of Open Access Journals (Sweden)

    Frøydis Morken

    2014-12-01

    Full Text Available This fMRI study aimed to examine how differences in literacy processing demands may affect cortical activation patterns in 11- to 12-year-old children with dyslexia as compared to children with typical reading skills. Eleven children with and 18 without dyslexia were assessed using a reading paradigm based on different stages of literacy development. In the analyses, six regions showed an interaction effect between group and condition in a factorial ANOVA. These regions were selected as regions of interest for further analyses. Overall, the dyslexia group showed cortical hyperactivation compared to the typical group. The difference between the groups tended to increase with increasing processing demands. Differences in cortical activation were not reflected in in-scanner reading performance. The six regions further grouped into three patterns, which are discussed in terms of processing demands, compensatory mechanisms, orthography and contextual facilitation. We conclude that the observed hyperactivation is chiefly a result of compensatory activity, modulated by other factors.

  6. Using US EPA’s Chemical Safety for Sustainability’s Comptox Chemistry Dashboard and Tools for Bioactivity, Chemical and Toxicokinetic Modeling Analyses (Course at 2017 ISES Annual Meeting)

    Science.gov (United States)

    Title: Using US EPA’s Chemical Safety for Sustainability’s Comptox Chemistry Dashboard and Tools for Bioactivity, Chemical and Toxicokinetic Modeling Analyses • Class format: half-day (4 hours) • Course leader(s): Barbara A. Wetmore and Antony J. Williams,...

  7. Integrative analyses of leprosy susceptibility genes indicate a common autoimmune profile.

    Science.gov (United States)

    Zhang, Deng-Feng; Wang, Dong; Li, Yu-Ye; Yao, Yong-Gang

    2016-04-01

    Leprosy is an ancient chronic infection of the skin and peripheral nerves caused by Mycobacterium leprae. The development of leprosy depends on the genetic background and immune status of the host. However, there has been no systematic view focusing on the biological pathways, interaction networks and overall expression pattern of leprosy-related immune and genetic factors. To identify the hub genes at the center of the leprosy genetic network and to provide insight into the immune and genetic factors contributing to leprosy, we retrieved all reported leprosy-related genes and performed integrative analyses covering gene expression profiling, pathway analysis, protein-protein interaction networks, and evolutionary analyses. A list of 123 differentially expressed leprosy-related genes, which were enriched in activation and regulation of immune response, was obtained in our analyses. Cross-disorder analysis showed that the list of leprosy susceptibility genes was largely shared by typical autoimmune diseases such as lupus erythematosus and arthritis, suggesting that similar pathways might be affected in leprosy and autoimmune diseases. Protein-protein interaction (PPI) and positive selection analyses revealed a co-evolution network of leprosy risk genes. Our analyses showed that leprosy-associated genes constitute a co-evolution network and might have undergone positive selection driven by M. leprae. We suggest that leprosy may be a kind of autoimmune disease, and that its development is a matter of defective or over-activated host immunity. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
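
    The cross-disorder sharing described here is typically quantified with a gene-list overlap test. A hedged sketch follows: the autoimmune list size and overlap count are hypothetical placeholders; only the 123-gene leprosy list size comes from the abstract.

        from scipy.stats import hypergeom

        N_background = 20000    # approx. protein-coding genes in the genome
        n_leprosy = 123         # leprosy-related genes (from the abstract)
        n_autoimmune = 400      # autoimmune-disease gene list (hypothetical)
        shared = 45             # genes on both lists (hypothetical)

        # One-sided hypergeometric test: P(X >= shared) when n_leprosy genes
        # are drawn from a background containing n_autoimmune "successes".
        p = hypergeom.sf(shared - 1, N_background, n_autoimmune, n_leprosy)
        print(f"overlap enrichment p = {p:.3e}")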

  8. Fracture Mechanics Analyses of the Slip-Side Joggle Regions of Wing-Leading Edge Panels

    Science.gov (United States)

    Raju, Ivatury S.; Knight, Norman F., Jr.; Song, Kyongchan; Phillips, Dawn R.

    2010-01-01

    The Space Shuttle Orbiter wing comprises 22 leading-edge panels on each side of the wing. These panels are part of the thermal protection system that protects the Orbiter wings from the extreme heating that takes place on reentry into the Earth's atmosphere. On some panels that experience extreme heating, liberation of the silicon carbide (SiC) coating was observed in the slip-side regions of the panels. Global structural and local fracture mechanics analyses were performed on these panels as part of the root-cause investigation of this coating liberation anomaly. The wing-leading-edge reinforced carbon-carbon (RCC) panels, Panel 9, T-seal 10, and Panel 10, are shown in Figure 1, and the progression of the stress analysis models is presented in Figure 2. The global structural analyses showed minimal interaction between adjacent panels and the T-seal that bridges the gap between the panels. A bounding uniform temperature is applied to a representative panel and the resulting stress distribution is examined. For this loading condition, the interlaminar normal stresses show negligible variation in the chord direction and increased values in the vicinity of the slip-side joggle shoulder. As such, a representative spanwise slice of the panel can be taken and the cross section can be analyzed using plane strain analysis.

  9. Bifactor Models Show a Superior Model Fit: Examination of the Factorial Validity of Parent-Reported and Self-Reported Symptoms of Attention-Deficit/Hyperactivity Disorders in Children and Adolescents.

    Science.gov (United States)

    Rodenacker, Klaas; Hautmann, Christopher; Görtz-Dorten, Anja; Döpfner, Manfred

    2016-01-01

    Various studies have demonstrated that bifactor models yield better solutions than models with correlated factors. However, the kind of bifactor model that is most appropriate is yet to be examined. The current study is the first to test bifactor models across the full age range (11-18 years) of adolescents using self-reports, and the first to test bifactor models with German subjects and German questionnaires. The study sample included children and adolescents aged between 6 and 18 years recruited from a German clinical sample (n = 1,081) and a German community sample (n = 642). To examine the factorial validity, we compared unidimensional, correlated-factors, higher-order and bifactor models and further tested a modified incomplete bifactor model for measurement invariance. Bifactor models displayed superior model fit statistics compared to correlated-factors models or second-order models. However, a more parsimonious incomplete bifactor model with only 2 specific factors (inattention and impulsivity) showed a good model fit and a better factor structure than the other bifactor models. Scalar measurement invariance was given in most group comparisons. An incomplete bifactor model would suggest that the specific inattention and impulsivity factors represent entities separable from the general attention-deficit/hyperactivity disorder construct and might, therefore, give way to a new approach to subtyping of children over and above attention-deficit/hyperactivity disorder. © 2016 S. Karger AG, Basel.

  10. Re-evaluation of a novel approach for quantitative myocardial oedema detection by analysing tissue inhomogeneity in acute myocarditis using T2-mapping.

    Science.gov (United States)

    Baeßler, Bettina; Schaarschmidt, Frank; Treutlein, Melanie; Stehning, Christian; Schnackenburg, Bernhard; Michels, Guido; Maintz, David; Bunck, Alexander C

    2017-12-01

    To re-evaluate a recently suggested approach of quantifying myocardial oedema and increased tissue inhomogeneity in myocarditis by T2-mapping. Cardiac magnetic resonance data of 99 patients with myocarditis were retrospectively analysed. Thirty healthy volunteers served as controls. T2-mapping data were acquired at 1.5 T using a gradient-spin-echo T2-mapping sequence. T2-maps were segmented according to the 16-segment AHA model. Segmental T2-values, segmental pixel-standard deviation (SD) and the derived parameters maxT2, maxSD and madSD were analysed and compared to the established Lake Louise criteria (LLC). A re-estimation of logistic regression models revealed that all models containing an SD-parameter were superior to any model containing global myocardial T2. Using a combined cut-off of 1.8 ms for madSD + 68 ms for maxT2 resulted in a diagnostic sensitivity of 75% and specificity of 80% and showed a similar diagnostic performance compared to LLC in receiver-operating-characteristic curve analyses. Combining madSD, maxT2 and late gadolinium enhancement (LGE) in a model resulted in a superior diagnostic performance compared to LLC (sensitivity 93%, specificity 83%). The results show that the novel T2-mapping-derived parameters exhibit an additional diagnostic value over LGE with the inherent potential to overcome the current limitations of T2-mapping. • A novel quantitative approach to myocardial oedema imaging in myocarditis was re-evaluated. • The T2-mapping-derived parameters maxT2 and madSD were compared to traditional Lake-Louise criteria. • Using maxT2 and madSD with dedicated cut-offs performs similarly to Lake-Louise criteria. • Adding maxT2 and madSD to LGE results in further increased diagnostic performance. • This novel approach has the potential to overcome the limitations of T2-mapping.
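
    A hedged sketch of the combined cut-off rule follows. The exact definition of madSD (taken here as the median absolute deviation of the 16 segmental pixel-SDs) and the AND-combination of the two cut-offs are assumptions made for illustration, as is the example patient data.

        import numpy as np

        MAXT2_CUTOFF, MADSD_CUTOFF = 68.0, 1.8   # ms (from the abstract)

        def rate_study(seg_t2_means, seg_pixel_sds):
            # Segmental statistics over the 16 AHA segments.
            t2 = np.asarray(seg_t2_means, float)
            sd = np.asarray(seg_pixel_sds, float)
            max_t2 = t2.max()
            mad_sd = np.median(np.abs(sd - np.median(sd)))
            positive = (max_t2 > MAXT2_CUTOFF) and (mad_sd > MADSD_CUTOFF)
            return max_t2, mad_sd, positive

        # Hypothetical patient: one oedematous segment, diffusely raised SDs.
        t2 = [55, 57, 56, 58, 54, 56, 55, 57, 56, 71, 58, 55, 56, 57, 55, 56]
        sd = [3.1, 4.8, 2.2, 6.9, 3.1, 5.0, 7.3, 3.1,
              2.0, 9.5, 8.2, 3.1, 6.0, 5.7, 2.2, 7.0]
        max_t2, mad_sd, positive = rate_study(t2, sd)
        print(f"maxT2 {max_t2:.0f} ms, madSD {mad_sd:.1f} ms -> "
              f"{'positive' if positive else 'negative'}")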

  11. Grey literature in meta-analyses of randomized trials of health care interventions.

    Science.gov (United States)

    Hopewell, S; McDonald, S; Clarke, M; Egger, M

    2007-04-18

    The inclusion of grey literature (i.e. literature that has not been formally published) in systematic reviews may help to overcome some of the problems of publication bias, which can arise due to the selective availability of data. To systematically review research studies that have investigated the impact of grey literature in meta-analyses of randomized trials of health care interventions. We searched the Cochrane Methodology Register (The Cochrane Library Issue 3, 2005), MEDLINE (1966 to 20 May 2005), the Science Citation Index (June 2005) and contacted researchers who may have carried out relevant studies. A study was considered eligible for this review if it compared the effect of the inclusion and exclusion of grey literature on the results of a cohort of meta-analyses of randomized trials. Data were extracted from each report independently by two reviewers. The main outcome measure was an estimate of the impact of trials from the grey literature on the pooled effect estimates of the meta-analyses. Information was also collected on the area of health care, the number of meta-analyses, the number of trials, the number of trial participants, the year of publication of the trials, the language and country of publication of the trials, the number and type of grey and published literature, and methodological quality. Five studies met the inclusion criteria. All five studies found that published trials showed an overall greater treatment effect than grey trials. This difference was statistically significant in one of the five studies. Data could be combined for three of the five studies. This showed that, on average, published trials showed a 9% greater treatment effect than grey trials (ratio of odds ratios for grey versus published trials 1.09; 95% CI 1.03-1.16). Overall there were more published trials included in the meta-analyses than grey trials (median 224 (IQR 108-365) versus 45 (IQR 40-102)). Published trials had more participants on average. The most
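
    The quoted summary statistic, a ratio of odds ratios (ROR) with a 95% CI, is formed by comparing two pooled log odds ratios. A hedged sketch, with hypothetical subgroup estimates chosen only to echo the reported magnitude:

        import numpy as np
        from scipy.stats import norm

        def ratio_of_ors(or_grey, se_log_grey, or_pub, se_log_pub):
            # Compare two pooled ORs on the log scale; normal-theory 95% CI.
            log_ror = np.log(or_grey) - np.log(or_pub)
            se = np.hypot(se_log_grey, se_log_pub)
            z = norm.ppf(0.975)
            return (np.exp(log_ror),
                    np.exp(log_ror - z * se), np.exp(log_ror + z * se))

        # Hypothetical inputs: grey trials show a weaker (closer-to-1) OR.
        est, lo, hi = ratio_of_ors(or_grey=0.85, se_log_grey=0.03,
                                   or_pub=0.78, se_log_pub=0.02)
        print(f"ROR {est:.2f} (95% CI {lo:.2f}-{hi:.2f})")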

  12. ORNL Pre-test Analyses of A Large-scale Experiment in STYLE

    International Nuclear Information System (INIS)

    Williams, Paul T.; Yin, Shengjun; Klasky, Hilda B.; Bass, Bennett Richard

    2011-01-01

    Oak Ridge National Laboratory (ORNL) is conducting a series of numerical analyses to simulate a large scale mock-up experiment planned within the European Network for Structural Integrity for Lifetime Management non-RPV Components (STYLE). STYLE is a European cooperative effort to assess the structural integrity of (non-reactor pressure vessel) reactor coolant pressure boundary components relevant to ageing and life-time management and to integrate the knowledge created in the project into mainstream nuclear industry assessment codes. ORNL contributes work-in-kind support to STYLE Work Package 2 (Numerical Analysis/Advanced Tools) and Work Package 3 (Engineering Assessment Methods/LBB Analyses). This paper summarizes the current status of ORNL analyses of the STYLE Mock-Up3 large-scale experiment to simulate and evaluate crack growth in a cladded ferritic pipe. The analyses are being performed in two parts. In the first part, advanced fracture mechanics models are being developed and performed to evaluate several experiment designs taking into account the capabilities of the test facility while satisfying the test objectives. Then these advanced fracture mechanics models will be utilized to simulate the crack growth in the large scale mock-up test. For the second part, the recently developed ORNL SIAM-PFM open-source, cross-platform, probabilistic computational tool will be used to generate an alternative assessment for comparison with the advanced fracture mechanics model results. The SIAM-PFM probabilistic analysis of the Mock-Up3 experiment will utilize fracture modules that are installed into a general probabilistic framework. The probabilistic results of the Mock-Up3 experiment obtained from SIAM-PFM will be compared to those results generated using the deterministic 3D nonlinear finite-element modeling approach. The objective of the probabilistic analysis is to provide uncertainty bounds that will assist in assessing the more detailed 3D finite

  13. Using observed load distributions with a simple model to analyse the epidemiology of sea lice (Lepeophtheirus salmonis) on sea trout (Salmo trutta).

    Science.gov (United States)

    Murray, Alexander G

    2002-06-01

    Sea lice are ectoparasites of salmonids that have been associated with the recent decline in sea trout numbers in north-west Europe. Observed patterns of louse load distribution between sea trout in the seas surrounding the UK, Ireland and Norway, together with a simple model, have been used to analyse the epidemiology of lice. Loads are aggregated and deviate strongly from the Poisson distribution, although less so than is observed with many other parasites. The louse numbers on fish from offshore sites are slightly less variable than for fish from coastal sites with comparable mean loads. Analysis of louse development stages and sexes shows that selection between hosts by sea lice plays a limited role. If host selection is absent, then associated poor condition would be caused by, not the cause of, high louse burdens; however, the absence of such selection is not proved. Scenarios with infection that is patchy in space and time best generate the aggregated load patterns observed; these patches accord with observed swarms of copepodids. Prevalence patterns may indicate the movement of trout between environments. Control of copepodids in infection 'hot spots', either directly or through control of louse egg production in their catchment, may reduce louse loads on wild sea trout and, in particular, extreme and damaging loads.
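
    The patchy-infection mechanism is easy to demonstrate numerically: if each fish encounters a random number of copepodid patches, loads become aggregated (variance exceeds the mean) even with no host selection at all. The simulation below uses a Poisson-Poisson (Neyman Type A) mixture with illustrative parameters, not values fitted to the sea trout data.

        import numpy as np

        rng = np.random.default_rng(1)
        n_fish = 10_000

        # Homogeneous infection: every fish samples the same pressure.
        homogeneous = rng.poisson(lam=8.0, size=n_fish)

        # Patchy infection: a Poisson number of patch encounters per fish,
        # each patch delivering a Poisson number of attaching lice.
        patches = rng.poisson(lam=2.0, size=n_fish)
        patchy = np.array([rng.poisson(lam=4.0, size=k).sum() for k in patches])

        for name, loads in (("homogeneous", homogeneous), ("patchy", patchy)):
            vmr = loads.var() / loads.mean()   # variance/mean; 1 for Poisson
            print(f"{name}: mean {loads.mean():.1f}, variance/mean {vmr:.1f}")

    Both scenarios have the same mean load, but only the patchy one produces the overdispersion (variance-to-mean ratio well above 1) that characterises aggregated distributions.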

  14. Clustering structures of large proteins using multifractal analyses based on a 6-letter model and hydrophobicity scale of amino acids

    International Nuclear Information System (INIS)

    Yang Jianyi; Yu Zuguo; Anh, Vo

    2009-01-01

    The Schneider and Wrede hydrophobicity scale of amino acids and the 6-letter model of protein sequences are proposed to study the relationship between the primary structure and the secondary structural classification of proteins. Two kinds of multifractal analyses are performed on the measures obtained from these two representations of large proteins. Nine parameters from the multifractal analyses are considered to construct the parameter spaces. Each protein is represented by one point in these spaces. A procedure is proposed to separate large proteins in the α, β, α + β and α/β structural classes in these parameter spaces. Fisher's linear discriminant algorithm is used to assess the clustering accuracy on the 49 selected large proteins. Numerical results indicate that the discriminant accuracies are satisfactory. In particular, they reach 100.00% and 84.21% in separating the α proteins from the {β, α + β, α/β} proteins in one parameter space; 92.86% and 86.96% in separating the β proteins from the {α + β, α/β} proteins in another parameter space; and 91.67% and 83.33% in separating the α/β proteins from the α + β proteins in the last parameter space.
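
    The core computation behind such parameters is a box-counting estimate of the generalized dimensions D_q of a measure built from the sequence. A minimal sketch follows; the toy hydrophobicity scale is a hypothetical stand-in, not the Schneider-Wrede scale, and the estimator omits the q = 1 (entropy) special case.

        import numpy as np

        AA = "ACDEFGHIKLMNPQRSTVWY"
        TOY_SCALE = {aa: 0.05 + 0.05 * i for i, aa in enumerate(AA)}

        def generalized_dimensions(seq, qs=(-2, 0, 2, 4),
                                   box_sizes=(2, 4, 8, 16, 32)):
            w = np.array([TOY_SCALE[a] for a in seq])
            mu = w / w.sum()                # normalized measure on the chain
            dims = {}
            for q in qs:
                xs, ys = [], []
                for eps in box_sizes:
                    nbox = len(mu) // eps   # boxes of eps residues each
                    p = mu[:nbox * eps].reshape(nbox, eps).sum(axis=1)
                    xs.append(np.log(eps / len(mu)))       # relative box size
                    ys.append(np.log((p ** q).sum()) / (q - 1))
                dims[q] = np.polyfit(xs, ys, 1)[0]         # slope -> D_q
            return dims

        rng = np.random.default_rng(0)
        seq = "".join(rng.choice(list(AA), size=512))
        print(generalized_dimensions(seq))  # near 1 for a quasi-uniform measure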

  15. Toward a Model-Based Approach to Flight System Fault Protection

    Science.gov (United States)

    Day, John; Murray, Alex; Meakin, Peter

    2012-01-01

    Fault Protection (FP) is a distinct and separate systems engineering sub-discipline that is concerned with the off-nominal behavior of a system. Flight system fault protection is an important part of the overall flight system systems engineering effort, with its own products and processes. As with other aspects of systems engineering, the FP domain is highly amenable to expression and management in models. However, while there are standards and guidelines for performing FP-related analyses, there are no standards or guidelines for formally relating the FP analyses to each other or to the system hardware and software design. As a result, the material generated for these analyses effectively creates separate models that are only loosely related to the system being designed. Developing approaches that enable modeling of FP concerns in the same model as the system hardware and software design allows formal relationships to be established, with great potential for improving the efficiency, correctness, and verification of the implementation of flight system FP. This paper begins with an overview of the FP domain, and then continues with a presentation of a SysML/UML model of the FP domain and the particular analyses that it contains, by way of showing a potential model-based approach to flight system fault protection, and an exposition of the use of the FP models in FSW engineering. The analyses are small examples, inspired by current real-project examples of FP analyses.

  16. Comparative Analysis of Upper Ocean Heat Content Variability from Ensemble Operational Ocean Analyses

    Science.gov (United States)

    Xue, Yan; Balmaseda, Magdalena A.; Boyer, Tim; Ferry, Nicolas; Good, Simon; Ishikawa, Ichiro; Rienecker, Michele; Rosati, Tony; Yin, Yonghong; Kumar, Arun

    2012-01-01

    Upper ocean heat content (HC) is one of the key indicators of climate variability on many time-scales extending from seasonal to interannual to long-term climate trends. For example, HC in the tropical Pacific provides information on thermocline anomalies that is critical for the long-lead forecast skill of ENSO. Since HC variability is also associated with SST variability, a better understanding and monitoring of HC variability can help us understand and forecast SST variability associated with ENSO and other modes such as the Indian Ocean Dipole (IOD), Pacific Decadal Oscillation (PDO), Tropical Atlantic Variability (TAV) and Atlantic Multidecadal Oscillation (AMO). An accurate ocean initialization of HC anomalies in coupled climate models could also contribute to skill in decadal climate prediction. Errors, and/or uncertainties, in the estimation of HC variability can be affected by many factors including uncertainties in surface forcings, ocean model biases, and deficiencies in data assimilation schemes. Changes in observing systems can also leave an imprint on the estimated variability. The availability of multiple operational ocean analyses (ORAs) that are routinely produced by operational and research centers around the world provides an opportunity to assess uncertainties in HC analyses, to help identify gaps in observing systems as they impact the quality of ORAs and therefore climate model forecasts. A comparison of ORAs also gives an opportunity to identify deficiencies in data assimilation schemes, and can be used as a basis for development of real-time multi-model ensemble HC monitoring products. The OceanObs09 Conference called for an intercomparison of ORAs and use of ORAs for global ocean monitoring. As a follow-up, we intercompared HC variations from ten ORAs -- two objective analyses based on in-situ data only and eight model analyses based on ocean data assimilation systems. The mean, annual cycle, interannual variability and long-term trend of HC have

  17. A journey into a Mediterranean diet and type 2 diabetes: a systematic review with meta-analyses.

    Science.gov (United States)

    Esposito, Katherine; Maiorino, Maria Ida; Bellastella, Giuseppe; Chiodini, Paolo; Panagiotakos, Demosthenes; Giugliano, Dario

    2015-08-10

    To summarise the evidence about the efficacy of a Mediterranean diet on the management of type 2 diabetes and prediabetic states. A systematic review of all meta-analyses and randomised controlled trials (RCTs) that compared the Mediterranean diet with a control diet in the treatment of type 2 diabetes and prediabetic states was conducted. Electronic searches were carried out up to January 2015. Trials were included in the meta-analyses if they had a control group treated with another diet, if they were of sufficient duration (at least 6 months), and if they had at least 30 participants in each arm. A random-effects model was used to pool data. Adults with or at risk for type 2 diabetes. Dietary patterns that described themselves as using a 'Mediterranean' dietary pattern. The outcomes were glycaemic control, cardiovascular risk factors and remission from the metabolic syndrome. From 2824 studies, 8 meta-analyses and 5 RCTs were eligible. A 'de novo' meta-analysis of 3 long-term (>6 months) RCTs of the Mediterranean diet and glycaemic control of diabetes favoured the Mediterranean diet as compared with lower-fat diets. Another 'de novo' meta-analysis of two long-term RCTs showed a 49% increased probability of remission from the metabolic syndrome. Five meta-analyses showed a favourable effect of the Mediterranean diet, as compared with other diets, on body weight, total cholesterol and high-density lipoprotein cholesterol. Two meta-analyses demonstrated that higher adherence to the Mediterranean diet reduced the risk of future diabetes by 19-23%. The Mediterranean diet was associated with better glycaemic control and cardiovascular risk factors than control diets, including a lower-fat diet, suggesting that it is suitable for the overall management of type 2 diabetes. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.
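
    The random-effects pooling mentioned above is most commonly the DerSimonian-Laird estimator. A minimal sketch follows; the per-trial effects (e.g. HbA1c differences, Mediterranean vs. control diet) and standard errors are hypothetical inputs, not the review's data.

        import numpy as np

        def dersimonian_laird(effects, ses):
            y, se = np.asarray(effects, float), np.asarray(ses, float)
            w = 1.0 / se**2                       # fixed-effect weights
            y_fe = (w * y).sum() / w.sum()
            Q = (w * (y - y_fe)**2).sum()         # Cochran's Q heterogeneity
            c = w.sum() - (w**2).sum() / w.sum()
            tau2 = max(0.0, (Q - (len(y) - 1)) / c)  # between-study variance
            w_re = 1.0 / (se**2 + tau2)           # random-effects weights
            y_re = (w_re * y).sum() / w_re.sum()
            se_re = np.sqrt(1.0 / w_re.sum())
            return y_re, (y_re - 1.96 * se_re, y_re + 1.96 * se_re), tau2

        pooled, ci, tau2 = dersimonian_laird(
            effects=[-0.30, -0.47, -0.21], ses=[0.10, 0.12, 0.08])
        print(f"pooled {pooled:.2f} (95% CI {ci[0]:.2f} to {ci[1]:.2f}), "
              f"tau2 {tau2:.3f}")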

  18. Analyses of a steel containment vessel with an outer contact structure under severe internal overpressurization conditions

    International Nuclear Information System (INIS)

    Porter, V.L.

    1994-01-01

    Many Mark-I and Mark-II BWR plants are designed with a steel vessel as the primary containment. Typically, the steel containment vessel (SCV) is enclosed within a reinforced concrete shield building, with only a small gap (74-90 mm) separating the two structures. This paper describes finite element analyses performed to evaluate the effects of contact and friction between a steel containment vessel and an outer contact structure when the containment vessel is subjected to large internal pressures. These computations were motivated by a joint program on containment integrity involving the Nuclear Power Engineering Corporation (NUPEC) of Japan, the US Nuclear Regulatory Commission (NRC), and Sandia National Laboratories for testing model containments. Under severe accident loading conditions, the steel containment vessel in a typical Mark-I or Mark-II plant may deform under internal pressurization such that it contacts the inner surface of a shield building wall. (Thermal expansion from increasing accident temperatures would also close the gap between the SCV and the shield building, but temperature effects are not considered in these analyses.) The amount and location of contact and the pressure at which it occurs all affect how the combined structure behaves. A preliminary finite element model has been developed to analyze a model of a typical steel containment vessel coming into contact with an outer structure. Both the steel containment vessel and the outer contact structure were modelled with axisymmetric shell finite elements. Of particular interest is the influence that the contact structure has on deformation and potential failure modes of the containment vessel. Furthermore, the coefficient of friction between the two structures was varied to study its effects on the behavior of the containment vessel and on the uplift loads transmitted to the contact structure. These analyses show that the material properties of an outer contact structure and the amount

  19. Architecture Level Safety Analyses for Safety-Critical Systems

    Directory of Open Access Journals (Sweden)

    K. S. Kushal

    2017-01-01

    Full Text Available The dependency of complex embedded Safety-Critical Systems in the avionics and aerospace domains on their underlying software and hardware components has gradually increased over time. Such systems are developed on top of a complex integrated architecture that is modular in nature. Engineering practices backed by system safety standards are necessary to manage failure, fault, and unsafe operational conditions. System safety analyses involve the analysis of the system's complex software architecture, a major factor in fatal failures of Safety-Critical Systems, and provide high reliability and dependability factors during development. In this paper, we propose an architecture fault modeling and safety analysis approach that aids in identifying and eliminating design flaws. The formal foundations of the SAE Architecture Analysis & Design Language (AADL) augmented with the Error Model Annex (EMV) are discussed. The fault propagation, failure behaviour, and composite behaviour of the design flaws/failures are considered for architecture safety analysis. The proposed approach is validated by implementing the Speed Control Unit of a Power-Boat Autopilot (PBA) system. The Error Model Annex (EMV) is guided by the pattern of consideration and inclusion of probable failure scenarios and propagation of fault conditions in the Speed Control Unit of the Power-Boat Autopilot (PBA). This helps in validating the system architecture through detection of error events in the model and their impact in the operational environment. It also provides insight into the certification impact that these exceptional conditions pose at various criticality levels and design assurance levels, and its implications for verifying and validating the designs.

  20. Factors for analysing and improving performance of R&D in Malaysian universities

    NARCIS (Netherlands)

    Ramli, Mohammad Shakir; de Boer, S.J.; de Bruijn, E.J.

    2004-01-01

    This paper presents a model for analysing and improving the performance of R&D in Malaysian universities. There are various general models for R&D analysis, but none is specific to improving the performance of R&D in Malaysian universities. This research attempts to fill a gap in the body of knowledge