WorldWideScience

Sample records for modeling hlm analyses

  1. Hierarchical Linear Modeling (HLM): An Introduction to Key Concepts within Cross-Sectional and Growth Modeling Frameworks. Technical Report #1308

    Science.gov (United States)

    Anderson, Daniel

    2012-01-01

    This manuscript provides an overview of hierarchical linear modeling (HLM), as part of a series of papers covering topics relevant to consumers of educational research. HLM is tremendously flexible, allowing researchers to specify relations across multiple "levels" of the educational system (e.g., students, classrooms, schools, etc.).…
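
    The "levels" idea in this record can be illustrated with a minimal simulation: students nested in schools, with the two variance components a random-intercept HLM would estimate recovered here by the classical one-way ANOVA (method-of-moments) estimator. All numbers are invented for illustration.

```python
import random
import statistics

random.seed(1)

# Simulate a two-level structure: students (level 1) nested in schools (level 2).
# True between-school SD = 2.0, within-school SD = 5.0 (illustrative values).
n_schools, n_students = 50, 30
data = []
for s in range(n_schools):
    school_effect = random.gauss(0, 2.0)
    data.append([random.gauss(70 + school_effect, 5.0) for _ in range(n_students)])

# One-way random-effects ANOVA (method-of-moments) estimates of the variance
# components -- the quantities a random-intercept HLM recovers by likelihood.
grand = statistics.mean(y for school in data for y in school)
means = [statistics.mean(school) for school in data]
msb = n_students * sum((m - grand) ** 2 for m in means) / (n_schools - 1)
msw = statistics.mean(statistics.variance(school) for school in data)
tau2 = max(0.0, (msb - msw) / n_students)   # between-school variance
print(f"within-school variance  ~ {msw:.1f} (true 25)")
print(f"between-school variance ~ {tau2:.1f} (true 4)")
```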

  2. Hierarchical linear modeling (HLM) of longitudinal brain structural and cognitive changes in alcohol-dependent individuals during sobriety

    DEFF Research Database (Denmark)

    Yeh, P.H.; Gazdzinski, S.; Durazzo, T.C.;

    2007-01-01

    Background: Hierarchical linear modeling (HLM) can reveal complex relationships between longitudinal outcome measures and their covariates under proper consideration of potentially unequal error variances. We demonstrate the application of HLM to the study of magnetic resonance imaging (MRI)… time points. Using HLM, we modeled volumetric and cognitive outcome measures as a function of cigarette and alcohol use variables. Results: Different hierarchical linear models with unique model structures are presented and discussed. The results show that smaller brain volumes at baseline predict… Different and unique hierarchical linear models allow assessments of the complex relationships among outcome measures of longitudinal data sets. These HLM applications suggest that chronic cigarette smoking modulates the temporal dynamics of brain structural and cognitive changes in alcoholics during prolonged…

  3. Meta-analysis methods for synthesizing treatment effects in multisite studies: hierarchical linear modeling (HLM perspective

    Directory of Open Access Journals (Sweden)

    Sema A. Kalaian

    2003-06-01

    The objectives of the present mixed-effects meta-analytic application are to provide practical guidelines to: (a) calculate treatment effect sizes from multiple sites; (b) calculate the overall mean of the site effect sizes and their variances; (c) model the heterogeneity in these site treatment effects as a function of site and program characteristics plus unexplained random error using Hierarchical Linear Modeling (HLM); (d) improve the ability of multisite evaluators and policy makers to reach sound conclusions about the effectiveness of educational and social interventions based on multisite evaluations; and (e) illustrate the proposed methodology by applying these methods to real multi-site research data.
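
    Steps (a)–(c) of the guidelines summarised above can be sketched with the DerSimonian–Laird random-effects estimator, a standard moment-based counterpart of the HLM formulation; the site effect sizes and sampling variances below are invented.

```python
import math

# Hypothetical site-level treatment effects (standardized mean differences)
# and their sampling variances from a multisite study -- illustrative numbers.
effects   = [0.55, 0.05, 0.45, 0.25, -0.10, 0.38]
variances = [0.02, 0.03, 0.04, 0.02, 0.05, 0.03]

# Step 1: fixed-effect (inverse-variance) pooling and Cochran's Q.
w = [1 / v for v in variances]
mean_fe = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
q = sum(wi * (e - mean_fe) ** 2 for wi, e in zip(w, effects))

# Step 2: DerSimonian-Laird estimate of the between-site variance tau^2,
# the moment analogue of the level-2 variance in an HLM meta-analysis.
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(effects) - 1)) / c)

# Step 3: random-effects pooled mean and its standard error.
w_re = [1 / (v + tau2) for v in variances]
mean_re = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
print(f"tau^2 = {tau2:.4f}, pooled effect = {mean_re:.3f} +/- {1.96 * se_re:.3f}")
```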

  4. Hierarchical linear modeling (HLM) of longitudinal brain structural and cognitive changes in alcohol-dependent individuals during sobriety.

    Science.gov (United States)

    Yeh, Ping-Hong; Gazdzinski, Stefan; Durazzo, Timothy C; Sjöstrand, Karl; Meyerhoff, Dieter J

    2007-12-01

    Hierarchical linear modeling (HLM) can reveal complex relationships between longitudinal outcome measures and their covariates under proper consideration of potentially unequal error variances. We demonstrate the application of HLM to the study of magnetic resonance imaging (MRI)-derived brain volume changes and cognitive changes in abstinent alcohol-dependent individuals as a function of smoking status, smoking severity, and drinking quantities. Twenty non-smoking recovering alcoholics (nsALC) and 30 age-matched smoking recovering alcoholics (sALC) underwent quantitative MRI and cognitive assessments at 1 week, 1 month, and 7 months of sobriety. Eight non-smoking light drinking controls were studied at baseline and 7 months later. Brain and ventricle volumes at each time point were quantified using MRI masks, while the boundary shift integral method measured volume changes between time points. Using HLM, we modeled volumetric and cognitive outcome measures as a function of cigarette and alcohol use variables. Different hierarchical linear models with unique model structures are presented and discussed. The results show that smaller brain volumes at baseline predict faster brain volume gains, which were also related to greater smoking and drinking severities. Over 7 months of abstinence from alcohol, sALC compared to nsALC showed less improvement in visuospatial learning and memory despite larger brain volume gains and ventricular shrinkage. Different and unique hierarchical linear models allow assessments of the complex relationships among outcome measures of longitudinal data sets. These HLM applications suggest that chronic cigarette smoking modulates the temporal dynamics of brain structural and cognitive changes in alcoholics during prolonged sobriety.

  5. HLM behind the Curtain: Unveiling Decisions behind the Use and Interpretation of HLM in Higher Education Research

    Science.gov (United States)

    Niehaus, Elizabeth; Campbell, Corbin M.; Inkelas, Karen Kurotsuchi

    2014-01-01

    Hierarchical linear modeling (HLM) has become increasingly popular in the higher education literature, but there is significant variability in the current approaches to the conducting and reporting of HLM. The field currently lacks a general consensus around important issues such as the number of levels of analysis that are important to include…

  6. Examining the Effectiveness of Peer-Mediated and Video-Modeling Social Skills Interventions for Children with Autism Spectrum Disorders: A Meta-Analysis in Single-Case Research Using HLM

    Science.gov (United States)

    Wang, Shin-Yi; Cui, Ying; Parrila, Rauno

    2011-01-01

    Social interaction is a fundamental problem for children with autism spectrum disorders (ASD). Various types of social skills interventions have been developed and used by clinicians to promote the social interaction in children with ASD. This meta-analysis used hierarchical linear modeling (HLM) to examine the effectiveness of peer-mediated and…

  7. HLM in Cluster-Randomised Trials--Measuring Efficacy across Diverse Populations of Learners

    Science.gov (United States)

    Hegedus, Stephen; Tapper, John; Dalton, Sara; Sloane, Finbarr

    2013-01-01

    We describe the application of Hierarchical Linear Modelling (HLM) in a cluster-randomised study to examine learning algebraic concepts and procedures in an innovative, technology-rich environment in the US. HLM is applied to measure the impact of such treatment on learning and on contextual variables. We provide a detailed description of such…

  8. The Empirical Study on the Academic Achievement of Children of Migrant Workers and Its Influencing Factors: Based on a Hierarchical Linear Model (HLM) Analysis [进城务工人员随迁子女的学业成就及其影响因素——基于多层次线性模型(HLM)的分析]

    Institute of Scientific and Technical Information of China (English)

    尚伟伟

    2015-01-01

    This study surveyed the academic achievement of 3714 children of migrant workers in three cities in Henan province (Zhengzhou, Luoyang, and Xuchang), and then used hierarchical linear modeling (HLM) to explore how individual-, family-, and school-level factors affect these children's academic achievement. The results show that students' academic achievement differs significantly between migrant-worker schools and public schools, with 6.13% of the achievement variance lying between schools; within public schools, academic achievement differs markedly between the children of migrant workers and local students; peer relations, parents' educational expectations, and students' own expectations have a significantly positive influence on academic achievement, while the frequency of school transfers and the time spent travelling to school have a significantly negative influence; teachers' wage levels have a significantly positive influence on students' academic achievement, whereas the number of teacher training sessions has a negative influence.

  9. Using HLM to Explore the Effects of Perceptions of Learning Environments and Assessments on Students' Test Performance

    Science.gov (United States)

    Chu, Man-Wai; Babenko, Oksana; Cui, Ying; Leighton, Jacqueline P.

    2014-01-01

    The study examines the role that perceptions or impressions of learning environments and assessments play in students' performance on a large-scale standardized test. Hierarchical linear modeling (HLM) was used to test aspects of the Learning Errors and Formative Feedback model to determine how much variation in students' performance was explained…

  11. A preliminary approach to the extension of the Transuranus code to the fuel rod performance analysis of HLM-cooled nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Luzzi, L.; Botazzoli, P.; Devita, M.; Di Marcello, V.; Pastore, G. [Department of Energy, Politecnico di Milano, Enrico Fermi Center for Nuclear Studies - CeSNEF, via Ponzio 34/3, 20133 Milano (Italy)

    2010-07-01

    This paper briefly presents a preliminary modelling approach, aimed at the extension of the TRANSURANUS code to the fuel rod performance analysis of Heavy Liquid Metal (HLM) cooled nuclear reactors, with specific reference to the employment of the T91 steel as cladding material and of the liquid Lead-Bismuth Eutectic (LBE) as coolant. On the basis of literature indications, correlations for heat transfer to LBE, corrosion behaviour and thermo-mechanical properties of T91 are proposed, and some open issues are discussed in prospect of more reliable fuel rod performance analysis of HLM-cooled nuclear reactors. (authors)
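
    For orientation, the kind of coolant-property correlation such an extension relies on looks like the polynomial fits below; the coefficients are representative approximations of handbook-style LBE correlations, not the ones adopted for TRANSURANUS, and should be verified against the OECD/NEA HLM handbook before any design use.

```python
# Representative temperature correlations for molten lead-bismuth eutectic (LBE),
# in the polynomial form tabulated in handbook references. Coefficients below are
# illustrative approximations only -- check them against the handbook before use.
def lbe_density(T):            # kg/m^3, T in kelvin
    return 11096.0 - 1.3236 * T

def lbe_conductivity(T):       # W/(m K), T in kelvin
    return 3.61 + 1.517e-2 * T - 1.741e-6 * T ** 2

T = 673.15                     # ~400 degC, a typical LBE coolant temperature
print(f"rho(LBE) ~ {lbe_density(T):.0f} kg/m^3")
print(f"k(LBE)   ~ {lbe_conductivity(T):.2f} W/(m K)")
```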

  12. An extensible analysable system model

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, Rene Rydhof

    2008-01-01

    … this does not hold for real physical systems. Approaches such as threat modelling try to target the formalisation of the real-world domain, but still are far from the rigid techniques available in security research. Many currently available approaches to assurance of critical infrastructure security … allows for easy development of analyses for the abstracted systems. We briefly present one application of our approach, namely the analysis of systems for potential insider threats.

  13. An introduction to hierarchical linear modeling

    Directory of Open Access Journals (Sweden)

    Heather Woltman

    2012-02-01

    This tutorial aims to introduce Hierarchical Linear Modeling (HLM). A simple explanation of HLM is provided that describes when to use this statistical technique and identifies key factors to consider before conducting this analysis. The first section of the tutorial defines HLM, clarifies its purpose, and states its advantages. The second section explains the mathematical theory, equations, and conditions underlying HLM. HLM hypothesis testing is performed in the third section. Finally, the fourth section provides a practical example of running HLM, with which readers can follow along. Throughout this tutorial, emphasis is placed on providing a straightforward overview of the basic principles of HLM.
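
    One of the "key factors to consider before conducting this analysis" is the intraclass correlation, computed here from assumed variance components of a null (intercept-only) model:

```python
# Intraclass correlation coefficient (ICC): the share of total outcome variance
# lying between level-2 units, a standard first check before fitting an HLM.
# The variance components below are illustrative values, e.g. from a null model.
tau2   = 4.0    # between-group (level-2) variance
sigma2 = 21.0   # within-group (level-1) variance
icc = tau2 / (tau2 + sigma2)
print(f"ICC = {icc:.2f}")   # 16% of the variance lies between groups
```

    A rule of thumb in the HLM literature is that even small ICCs can make single-level analyses misleading, because observations within a group are not independent.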

  14. HLM fuel pin bundle experiments in the CIRCE pool facility

    Energy Technology Data Exchange (ETDEWEB)

    Martelli, Daniele, E-mail: daniele.martelli@ing.unipi.it [University of Pisa, Department of Civil and Industrial Engineering, Pisa (Italy); Forgione, Nicola [University of Pisa, Department of Civil and Industrial Engineering, Pisa (Italy); Di Piazza, Ivan; Tarantino, Mariano [Italian National Agency for New Technologies, Energy and Sustainable Economic Development, C.R. ENEA Brasimone (Italy)

    2015-10-15

    Highlights: • The experimental results represent the first set of values for LBE pool facility. • Heat transfer is investigated for a 37-pin electrical bundle cooled by LBE. • Experimental data are presented together with a detailed error analysis. • Nu is computed as a function of the Pe and compared with correlations. • Experimental Nu is about 25% lower than Nu derived from correlations. - Abstract: Since Lead-cooled Fast Reactors (LFR) have been conceptualized in the frame of GEN IV International Forum (GIF), great interest has focused on the development and testing of new technologies related to HLM nuclear reactors. In this frame the Integral Circulation Experiment (ICE) test section has been installed into the CIRCE pool facility and suitable experiments have been carried out aiming to fully investigate the heat transfer phenomena in grid spaced fuel pin bundles, providing experimental data in support of European fast reactor development. In particular, the fuel pin bundle simulator (FPS), cooled by lead-bismuth eutectic (LBE), has been conceived with a thermal power of about 1 MW and a uniform linear power up to 25 kW/m, values relevant for a LFR. It consists of 37 fuel pins (electrically simulated) placed on a hexagonal lattice with a pitch to diameter ratio of 1.8. The FPS was extensively instrumented with several thermocouples. In particular, two sections of the FPS were instrumented in order to evaluate the heat transfer coefficient along the bundle as well as the cladding temperature in different ranks of sub-channels. The Nusselt number in the central sub-channel was therefore calculated as a function of the Peclet number, and the obtained results were compared to Nusselt numbers obtained from convective heat transfer correlations available in the literature on Heavy Liquid Metals (HLM). Results reported in the present work represent the first set of experimental data concerning fuel pin bundle behaviour in a heavy liquid metal pool, both in forced and …
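
    The comparison described above — experimental Nusselt numbers about 25% below correlation predictions — can be sketched with a liquid-metal rod-bundle correlation of the Mikityuk type; treat the coefficients, and the sample Pe and P/D values, as assumptions to be verified against the original references.

```python
import math

def nu_bundle(pe, p_over_d):
    """Nusselt number from a liquid-metal rod-bundle correlation of the
    Mikityuk type, as commonly quoted. Coefficients are an assumption here
    and should be checked against the original reference before use."""
    geom = 0.047 * (1.0 - math.exp(-3.8 * (p_over_d - 1.0)))
    return geom * (pe ** 0.77 + 250.0)

pe, p_over_d = 1000.0, 1.8      # values representative of the CIRCE FPS tests
nu_corr = nu_bundle(pe, p_over_d)
nu_exp = 0.75 * nu_corr         # the record reports measurements ~25% below correlations
print(f"Nu(correlation) ~ {nu_corr:.1f}, Nu(measured, ~25% lower) ~ {nu_exp:.1f}")
```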

  15. Heat transfer on HLM cooled wire-spaced fuel pin bundle simulator in the NACIE-UP facility

    Energy Technology Data Exchange (ETDEWEB)

    Di Piazza, Ivan, E-mail: ivan.dipiazza@enea.it [Italian National Agency for New Technologies, Energy and Sustainable Economic Development, C.R. ENEA Brasimone, Camugnano (Italy); Angelucci, Morena; Marinari, Ranieri [University of Pisa, Dipartimento di Ingegneria Civile e Industriale, Pisa (Italy); Tarantino, Mariano [Italian National Agency for New Technologies, Energy and Sustainable Economic Development, C.R. ENEA Brasimone, Camugnano (Italy); Forgione, Nicola [University of Pisa, Dipartimento di Ingegneria Civile e Industriale, Pisa (Italy)

    2016-04-15

    Highlights: • Experiments with a wire-wrapped 19-pin fuel bundle cooled by LBE. • Wall and bulk temperature measurements at three axial positions. • Heat transfer and error analysis in the range of low mass flow rates and Péclet number. • Comparison of local and section-averaged Nusselt number with correlations. - Abstract: The NACIE-UP experimental facility at the ENEA Brasimone Research Centre (Italy) made it possible to evaluate the heat transfer coefficient of a wire-spaced fuel bundle cooled by lead-bismuth eutectic (LBE). Lead and lead-bismuth eutectic are very attractive as coolants for GEN-IV fast reactors due to their good thermo-physical properties and their capability to fulfil the GEN-IV goals. Nevertheless, few experimental data on heat transfer with heavy liquid metals (HLM) are available in the literature, and only a few address the specific topic of wire-spaced fuel bundles cooled by HLM. Additional analysis of the thermo-fluid dynamic behaviour of the HLM inside the subchannels of a rod bundle is necessary to support the design and safety assessment of GEN-IV/ADS reactors. In this context, a wire-spaced 19-pin fuel bundle was installed inside the NACIE-UP facility. The pin bundle is equipped with 67 thermocouples to monitor temperatures and analyse the heat transfer behaviour in different sub-channels and axial positions. The experimental campaign was part of the SEARCH FP7 EU project to support the development of the MYRRHA irradiation facility (SCK-CEN). Natural and mixed circulation flow regimes were investigated, with subchannel Reynolds numbers in the range Re = 1000–10,000 and heat fluxes in the range q″ = 50–500 kW/m². Local Nusselt numbers were calculated for five sub-channels in different ranks at three axial positions. A section-averaged Nusselt number was also defined and calculated. The local Nusselt data showed good consistency with some of the correlations existing in the literature for heat transfer in liquid metals…
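
    The data reduction behind a "local Nusselt number" is simple: the measured wall heat flux and wall-to-bulk temperature difference give the heat transfer coefficient, which is then nondimensionalised with the hydraulic diameter and coolant conductivity. The numbers below are purely illustrative, not NACIE-UP data.

```python
# Local heat transfer coefficient and Nusselt number from bundle measurements:
# h = q'' / (T_wall - T_bulk), Nu = h * D_h / k. All values are illustrative.
q_flux = 200e3     # W/m^2, within the reported 50-500 kW/m^2 range
t_wall = 350.0     # degC, cladding surface temperature (assumed)
t_bulk = 330.0     # degC, subchannel bulk LBE temperature (assumed)
d_h    = 3.7e-3    # m, subchannel hydraulic diameter (assumed)
k_lbe  = 13.0      # W/(m K), LBE thermal conductivity (assumed)

h  = q_flux / (t_wall - t_bulk)   # W/(m^2 K)
nu = h * d_h / k_lbe
print(f"h = {h:.0f} W/(m^2 K), Nu = {nu:.2f}")
```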

  16. Graphical models for genetic analyses

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt; Sheehan, Nuala A.

    2003-01-01

    This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas...... of graphical models and genetics. The potential of graphical models is explored and illustrated through a number of example applications where the genetic element is substantial or dominating....
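
    A toy version of the "local computation" the paper refers to: on a one-child pedigree, the child's genotype distribution is a marginalisation over transmitted alleles — the elementary operation that graphical-model algorithms organise for large, looped pedigrees.

```python
from itertools import product

# The child's genotype distribution is a sum over the alleles each parent
# transmits (each with probability 1/2) -- the same marginalisation that
# pedigree peeling / graphical-model algorithms perform at scale.
def child_genotype_dist(parent1, parent2):
    dist = {}
    for a, b in product(parent1, parent2):   # one allele from each parent
        g = "".join(sorted(a + b))           # unordered genotype label, e.g. "Aa"
        dist[g] = dist.get(g, 0.0) + 0.25
    return dist

# Two heterozygous parents give the familiar 1:2:1 Mendelian ratio.
print(child_genotype_dist("Aa", "Aa"))   # {'AA': 0.25, 'Aa': 0.5, 'aa': 0.25}
```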

  18. An introduction to hierarchical linear modeling

    National Research Council Canada - National Science Library

    Woltman, Heather; Feldstain, Andrea; MacKay, J. Christine; Rocchi, Meredith

    2012-01-01

    This tutorial aims to introduce Hierarchical Linear Modeling (HLM). A simple explanation of HLM is provided that describes when to use this statistical technique and identifies key factors to consider before conducting this analysis...

  19. Challenges and Opportunities in Analysing Students Modelling

    Science.gov (United States)

    Blanco-Anaya, Paloma; Justi, Rosária; Díaz de Bustamante, Joaquín

    2017-01-01

    Modelling-based teaching activities have been designed and analysed from distinct theoretical perspectives. In this paper, we use one of them--the model of modelling diagram (MMD)--as an analytical tool in a regular classroom context. This paper examines the challenges that arise when the MMD is used as an analytical tool to characterise the…

  20. The Aachen MiniHLM--a miniaturized heart-lung machine for neonates with an integrated rotary blood pump.

    Science.gov (United States)

    Arens, Jutta; Schnoering, Heike; Pfennig, Michael; Mager, Ilona; Vázquez-Jiménez, Jaime F; Schmitz-Rode, Thomas; Steinseifer, Ulrich

    2010-09-01

    The operation of congenital heart defects in neonates often requires the use of heart-lung machines (HLMs) to provide perfusion and oxygenation. This is frequently followed by serious complications caused, inter alia, by hemodilution and extrinsic blood-contact surfaces. Thus, one goal in developing an HLM for neonates is the reduction of priming volume and contact surface. The currently available systems offer reasonable priming volumes for oxygenators, reservoirs, etc. However, the necessary tubing system contains the highest volumes within the whole system. This is due to the use of roller pumps; hence, the complete HLM is placed 1 to 2 m away from the operating table, with connective tubing between the components. Therefore, we pursued a novel approach for a miniaturized HLM (MiniHLM) by integrating all major system components in one single device. In particular, the MiniHLM is an HLM with the rotary blood pump centrically integrated into the oxygenator and a heat exchanger integrated into the cardiotomy reservoir, which is directly connected to the pump inlet. Thus, tubing is only necessary between the patient and the MiniHLM. A total priming volume of 102 mL (including arterial filter and a/v line) could be achieved. To validate the overall concept and the specific design we conducted several in vitro and in vivo test series. All tests confirm the novel concept of the MiniHLM. Its low priming volume and blood contact surface may significantly reduce known complications related to cardiopulmonary bypass in neonates (e.g., inflammatory reaction and capillary leak syndrome). © 2010, Copyright the Authors. Artificial Organs © 2010, International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.

  1. Hierarchical Linear Modeling with Maximum Likelihood, Restricted Maximum Likelihood, and Fully Bayesian Estimation

    Science.gov (United States)

    Boedeker, Peter

    2017-01-01

    Hierarchical linear modeling (HLM) is a useful tool when analyzing data collected from groups. There are many decisions to be made when constructing and estimating a model in HLM including which estimation technique to use. Three of the estimation techniques available when analyzing data with HLM are maximum likelihood, restricted maximum…
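
    The ML/REML distinction mentioned above is visible already in the simplest possible case: estimating a variance after fitting a single fixed effect (the mean). ML divides the sum of squares by n, REML by n − p (here p = 1), correcting for the degrees of freedom absorbed by the fixed part — the same reason REML is often preferred for variance components in HLM. Data are invented.

```python
import statistics

# ML vs REML variance estimates for a mean-only model.
y = [4.1, 5.3, 6.0, 4.8, 5.6, 5.2, 4.4, 5.9]
n = len(y)
m = statistics.mean(y)
ss = sum((v - m) ** 2 for v in y)
var_ml   = ss / n          # maximum likelihood: biased downward
var_reml = ss / (n - 1)    # restricted maximum likelihood: df-corrected
print(f"ML: {var_ml:.3f}  REML: {var_reml:.3f}")
```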

  2. Externalizing Behaviour for Analysing System Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof

    2013-01-01

    System models have recently been introduced to model organisations and evaluate their vulnerability to threats and especially insider threats. Especially for the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outside attackers. Therefore, many attacks are considerably easier to be performed for insiders than for outsiders. However, current models do not support explicit specification of different behaviours. Instead, behaviour is deeply embedded in the analyses supported by the models, meaning that it is a complex, if not impossible task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make the behaviour a separate component in system models, and explore how to integrate …

  3. Scale of association: hierarchical linear models and the measurement of ecological systems

    Science.gov (United States)

    Sean M. McMahon; Jeffrey M. Diez

    2007-01-01

    A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...

  4. Modelling and Analysing Socio-Technical Systems

    DEFF Research Database (Denmark)

    Aslanyan, Zaruhi; Ivanova, Marieta Georgieva; Nielson, Flemming

    2015-01-01

    … with social engineering. Due to this combination of attack steps on technical and social levels, risk assessment in socio-technical systems is complex. Therefore, established risk assessment methods often abstract away the internal structure of an organisation and ignore human factors when modelling and assessing attacks. In our work we model all relevant levels of socio-technical systems, and propose evaluation techniques for analysing the security properties of the model. Our approach simplifies the identification of possible attacks and provides qualified assessment and ranking of attacks based on the expected impact. We demonstrate our approach on a home-payment system. The system is specifically designed to help elderly or disabled people, who may have difficulties leaving their home, to pay for some services, e.g., care-taking or rent. The payment is performed using the remote control of a television …

  5. Externalizing Behaviour for Analysing System Models

    NARCIS (Netherlands)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof; Kammüller, Florian

    System models have recently been introduced to model organisations and evaluate their vulnerability to threats and especially insider threats. Especially for the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outside …

  6. Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of First Solar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
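
    The propagation scheme described in this abstract — sample each model's residual distribution and push the draws through the chain — can be sketched with a toy three-stage model; all models, gains, and residual spreads below are notional, not Sandia's.

```python
import random

random.seed(7)

# Monte Carlo propagation of model-chain uncertainty: each stage's error is
# drawn from an (assumed normal) residual distribution and pushed through the
# next model. The three toy models and their residual SDs are invented.
def poa_irradiance(ghi):        # stage 1: plane-of-array irradiance model
    return 1.1 * ghi

def dc_power(poa):              # stage 2: DC power model
    return 0.18 * poa

def ac_power(dc):               # stage 3: inverter model
    return 0.96 * dc

ghi = 800.0                     # measured global horizontal irradiance, W/m^2
samples = []
for _ in range(20000):
    poa = poa_irradiance(ghi) * (1 + random.gauss(0, 0.02))    # 2% residual SD
    dc  = dc_power(poa)       * (1 + random.gauss(0, 0.015))
    ac  = ac_power(dc)        * (1 + random.gauss(0, 0.005))
    samples.append(ac)

mean = sum(samples) / len(samples)
sd = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
print(f"AC power ~ {mean:.1f} W, relative sd ~ {100 * sd / mean:.1f}%")
```

    As in the abstract, the stage with the largest residual spread (here the irradiance translation) dominates the output uncertainty.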

  7. Bayesian Uncertainty Analyses Via Deterministic Model

    Science.gov (United States)

    Krzysztofowicz, R.

    2001-05-01

    Rational decision-making requires that the total uncertainty about a variate of interest (a predictand) be quantified in terms of a probability distribution, conditional on all available information and knowledge. Suppose the state-of-knowledge is embodied in a deterministic model, which is imperfect and outputs only an estimate of the predictand. Fundamentals are presented of three Bayesian approaches to producing a probability distribution of the predictand via any deterministic model. The Bayesian Processor of Output (BPO) quantifies the total uncertainty in terms of a posterior distribution, conditional on model output. The Bayesian Processor of Ensemble (BPE) quantifies the total uncertainty in terms of a posterior distribution, conditional on an ensemble of model output. The Bayesian Forecasting System (BFS) decomposes the total uncertainty into input uncertainty and model uncertainty, which are characterized independently and then integrated into a predictive distribution.
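
    A minimal conjugate-normal analogue of the "processor of output" idea: the predictand gets a normal prior, the deterministic model output is treated as a noisy linear function of it, and conditioning yields the posterior in closed form. All parameters are invented; this is only a toy analogue, not the BPO/BPE/BFS machinery itself.

```python
# Normal-linear toy processor: prior W ~ N(g, s2) for the predictand, and
# model output X = a*W + b + noise with noise ~ N(0, v2). Conditioning on the
# observed output x gives the posterior of W in closed form.
g, s2 = 10.0, 4.0         # prior mean and variance of the predictand
a, b, v2 = 1.0, 0.5, 1.0  # likelihood parameters (assumed)
x = 12.0                  # the deterministic model's output

post_prec = 1 / s2 + a ** 2 / v2
post_var = 1 / post_prec
post_mean = post_var * (g / s2 + a * (x - b) / v2)
print(f"posterior: mean {post_mean:.2f}, variance {post_var:.2f}")
```

    The posterior variance (0.8) is smaller than both the prior variance and the model-noise variance — the model output reduces, but does not eliminate, the total uncertainty.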

  8. Analysing Social Epidemics by Delayed Stochastic Models

    Directory of Open Access Journals (Sweden)

    Francisco-José Santonja

    2012-01-01

    We investigate the dynamics of a delayed stochastic mathematical model to understand the evolution of alcohol consumption in Spain. A sufficient condition for stability in probability of the equilibrium point of the dynamic model with aftereffect and stochastic perturbations is obtained via Kolmanovskii and Shaikhet's general method of Lyapunov functionals construction. We conclude that alcohol consumption in Spain will remain constant (with stability) in time, with around 36.47% nonconsumers, 62.94% nonrisk consumers, and 0.59% risk consumers. This approach allows us to emphasize the possibilities of dynamical models for studying human behaviour.
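
    A hedged sketch of the ingredients named above (delay, stochastic perturbation, compartment proportions) via an Euler–Maruyama simulation; the rates, delay, and noise level are invented and are not the fitted Spanish-consumption parameters, so the simulated equilibrium will not match the 36.47/62.94/0.59% split.

```python
import random

random.seed(3)

# Euler-Maruyama simulation of a toy delayed, stochastically perturbed
# compartment model: nonconsumers N, nonrisk consumers C, risk consumers R.
# All rates, the delay, and the noise level are invented for illustration.
dt, delay_steps = 0.01, 50          # delay tau = 0.5 time units
steps = 20000
hist = [(0.5, 0.45, 0.05)] * (delay_steps + 1)
for _ in range(steps):
    n, c, r = hist[-1]
    nd, cd, rd = hist[-1 - delay_steps]        # delayed state
    noise = random.gauss(0, 0.01) * dt ** 0.5  # diffusion term, sd*sqrt(dt)
    dn = (0.02 * cd - 0.05 * n * c) * dt + noise
    dr = (0.01 * c - 0.08 * rd) * dt
    n, r = n + dn, r + dr
    c = 1.0 - n - r                            # proportions sum to one
    hist.append((n, c, r))
print("long-run state (N, C, R):", tuple(round(v, 3) for v in hist[-1]))
```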

  9. Modelling, analyses and design of switching converters

    Science.gov (United States)

    Cuk, S. M.; Middlebrook, R. D.

    1978-01-01

    A state-space averaging method for modelling switching dc-to-dc converters for both continuous and discontinuous conduction mode is developed. In each case the starting point is the unified state-space representation, and the end result is a complete linear circuit model, for each conduction mode, which correctly represents all essential features, namely, the input, output, and transfer properties (static dc as well as dynamic ac small-signal). While the method is generally applicable to any switching converter, it is extensively illustrated for the three common power stages (buck, boost, and buck-boost). The results for these converters are then easily tabulated owing to the fixed equivalent circuit topology of their canonical circuit model. The insights that emerge from the general state-space modelling approach lead to the design of new converter topologies through the study of generic properties of the cascade connection of basic buck and boost converters.
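
    The averaging step itself is small enough to show directly: for an ideal buck converter in continuous conduction, weight the two switched-state models by the duty ratio and solve the averaged system for its dc operating point, recovering the textbook conversion ratio M = D. Component values are arbitrary.

```python
# State-space averaging for an ideal buck converter in continuous conduction:
# A = d*A1 + (1-d)*A2, B = d*B1 + (1-d)*B2, then solve 0 = A x + B u for the
# dc state. Component values are arbitrary illustrative choices.
L_h, C_f, R = 100e-6, 100e-6, 5.0    # inductance, capacitance, load
Vg, d = 12.0, 0.4                    # input voltage, duty ratio

# State x = (iL, vC). Switch on: inductor sees Vg - vC; off: -vC (ideal diode).
A1 = [[0.0, -1.0 / L_h], [1.0 / C_f, -1.0 / (R * C_f)]]
A2 = A1                              # same A matrix for the ideal buck; only B changes
B1, B2 = [1.0 / L_h, 0.0], [0.0, 0.0]

A = [[d * A1[i][j] + (1 - d) * A2[i][j] for j in range(2)] for i in range(2)]
B = [d * B1[i] + (1 - d) * B2[i] for i in range(2)]

# Solve 0 = A x + B*Vg by Cramer's rule (2x2 system).
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
rhs = [-B[0] * Vg, -B[1] * Vg]
iL = (rhs[0] * A[1][1] - A[0][1] * rhs[1]) / det
vC = (A[0][0] * rhs[1] - rhs[0] * A[1][0]) / det
print(f"dc output: vC = {vC:.2f} V (= d*Vg = {d * Vg:.2f}), iL = {iL:.2f} A")
```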

  10. Properties of unirradiated HTGR core support and permanent side reflector graphites: PGX, HLM, 2020, and H-440N

    Energy Technology Data Exchange (ETDEWEB)

    Engle, G.B.

    1977-05-01

    Candidate materials for HTGR core supports and permanent side reflectors--graphite grades 2020 (Stackpole Carbon Company), H-440N (Great Lakes Carbon Corporation), PGX (Union Carbide Corporation), and HLM (Great Lakes Carbon Corporation)--are described and property data are presented. Properties measured are bulk density; tensile properties including ultimate strength, modulus of elasticity, and strain at fracture; flexural strength; compressive properties including ultimate strength, modulus of elasticity, and strain at fracture; and chemical impurity content.

  11. Longitudinal hierarchical linear modeling analyses of California Psychological Inventory data from age 33 to 75: an examination of stability and change in adult personality.

    Science.gov (United States)

    Jones, Constance J; Livson, Norman; Peskin, Harvey

    2003-06-01

    Twenty aspects of personality assessed via the California Psychological Inventory (CPI; Gough & Bradley, 1996) from age 33 to 75 were examined in a sample of 279 individuals. Oakland Growth Study and Berkeley Guidance Study members completed the CPI a maximum of 4 times. We used longitudinal hierarchical linear modeling (HLM) to ask the following: Which personality characteristics change and which do not? Five CPI scales showed uniform lack of change, 2 showed heterogeneous change giving an averaged lack of change, 4 showed linear increases with age, 2 showed linear decreases with age, 4 showed gender or sample differences in linear change, 1 showed a quadratic peak, and 2 showed a quadratic nadir. The utility of HLM becomes apparent in portraying the complexity of personality change and stability.
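
    The "quadratic peak" pattern reported for some CPI scales can be illustrated by fitting y = b0 + b1·t + b2·t² to invented age/score pairs and locating the vertex at −b1/(2·b2); a full longitudinal HLM adds person-level random effects on top of this fixed trajectory.

```python
# Quadratic growth-curve fit by ordinary least squares (normal equations),
# locating the age at which the fitted trajectory peaks. Ages and scores are
# hypothetical, loosely patterned on a scale rising to midlife then declining.
ages   = [33, 42, 52, 61, 75]
scores = [49.6, 52.6, 54.0, 53.5, 49.6]

# Build the 3x3 normal equations X^T X b = X^T y for the design [1, t, t^2].
X = [[1.0, float(t), float(t * t)] for t in ages]
xtx = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
xty = [sum(X[k][i] * scores[k] for k in range(len(X))) for i in range(3)]

# Gauss-Jordan elimination with partial pivoting.
M = [row[:] + [rhs] for row, rhs in zip(xtx, xty)]
for col in range(3):
    piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
    M[col], M[piv] = M[piv], M[col]
    for r in range(3):
        if r != col:
            f = M[r][col] / M[col][col]
            M[r] = [a - f * bb for a, bb in zip(M[r], M[col])]
b = [M[i][3] / M[i][i] for i in range(3)]

peak_age = -b[1] / (2 * b[2])   # vertex of the fitted parabola
print(f"b2 = {b[2]:.4f} (concave), fitted peak at age ~ {peak_age:.1f}")
```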

  12. Modelling and Analyses of Embedded Systems Design

    DEFF Research Database (Denmark)

    Brekling, Aske Wiid

    We present the MoVES languages: a language with which embedded systems can be specified at a stage in the development process where an application is identified and should be mapped to an execution platform (potentially multi- core). We give a formal model for MoVES that captures and gives......-based verification is a promising approach for assisting developers of embedded systems. We provide examples of system verifications that, in size and complexity, point in the direction of industrially-interesting systems....

  13. VIPRE modeling of VVER-1000 reactor core for DNB analyses

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Y.; Nguyen, Q. [Westinghouse Electric Corporation, Pittsburgh, PA (United States)]; Cizek, J. [Nuclear Research Institute, Prague (Czech Republic)]

    1995-09-01

    Based on the one-pass modeling approach, the hot channels and the VVER-1000 reactor core can be modeled in 30 channels for DNB analyses using the VIPRE-01/MOD02 (VIPRE) code (VIPRE is owned by Electric Power Research Institute, Palo Alto, California). The VIPRE one-pass model does not compromise any accuracy in the hot channel local fluid conditions. Extensive qualifications include sensitivity studies of radial noding and crossflow parameters and comparisons with the results from THINC and CALOPEA subchannel codes. The qualifications confirm that the VIPRE code with the Westinghouse modeling method provides good computational performance and accuracy for VVER-1000 DNB analyses.

  14. Modelling longevity bonds: Analysing the Swiss Re Kortis bond

    OpenAIRE

    2015-01-01

    A key contribution to the development of the traded market for longevity risk was the issuance of the Kortis bond, the world's first longevity trend bond, by Swiss Re in 2010. We analyse the design of the Kortis bond, develop suitable mortality models to analyse its payoff and discuss the key risk factors for the bond. We also investigate how the design of the Kortis bond can be adapted and extended to further develop the market for longevity risk.

  15. When to Use Hierarchical Linear Modeling

    Directory of Open Access Journals (Sweden)

    Veronika Huta

    2014-04-01

    Full Text Available Previous publications on hierarchical linear modeling (HLM) have provided guidance on how to perform the analysis, yet there is relatively little information on two questions that arise even before analysis: Does HLM apply to one’s data and research question? And if it does apply, how does one choose between HLM and other methods sometimes used in these circumstances, including multiple regression, repeated-measures or mixed ANOVA, and structural equation modeling or path analysis? The purpose of this tutorial is to briefly introduce HLM and then to review some of the considerations that are helpful in answering these questions, including the nature of the data, the model to be tested, and the information desired on the output. Some examples of how the same analysis could be performed in HLM, repeated-measures or mixed ANOVA, and structural equation modeling or path analysis are also provided.

  16. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure.

  17. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Ihekwaba, Adoha

    2007-01-01

    A. Ihekwaba, R. Mardare. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems. Case study: NFkB system. In Proc. of International Conference of Computational Methods in Sciences and Engineering (ICCMSE), American Institute of Physics, AIP Proceedings, N 2...

  19. The method of characteristics applied to analyse 2DH models

    NARCIS (Netherlands)

    Sloff, C.J.

    1992-01-01

    To gain insight into the physical behaviour of 2D hydraulic models (mathematically formulated as a system of partial differential equations), the method of characteristics is used to analyse the propagation of physically meaningful disturbances. These disturbances propagate as wave fronts along bicharacteristics.

  20. Analysing the temporal dynamics of model performance for hydrological models

    Directory of Open Access Journals (Sweden)

    D. E. Reusser

    2008-11-01

    Full Text Available The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure. Dealing with a set of performance measures evaluated at a high temporal resolution implies analyzing and interpreting a high dimensional data set. This paper presents a method for such a hydrological model performance assessment with a high temporal resolution and illustrates its application for two very different rainfall-runoff modeling case studies. The first is the Wilde Weisseritz case study, a headwater catchment in the eastern Ore Mountains, simulated with the conceptual model WaSiM-ETH. The second is the Malalcahuello case study, a headwater catchment in the Chilean Andes, simulated with the physics-based model Catflow. The proposed time-resolved performance assessment starts with the computation of a large set of classically used performance measures for a moving window. The key of the developed approach is a data-reduction method based on self-organizing maps (SOMs) and cluster analysis to classify the high-dimensional performance matrix. Synthetic peak errors are used to interpret the resulting error classes. The final outcome of the proposed method is a time series of the occurrence of dominant error types. For the two case studies analyzed here, 6 such error types have been identified. They show clear temporal patterns which can lead to the identification of model structural errors.
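
The window-then-classify idea can be sketched compactly. In the toy example below, moving-window bias and RMSE stand in for the paper's large set of performance measures, and a hand-written rule replaces the SOM-plus-cluster-analysis step; the series and thresholds are invented for illustration.

```python
# Time-resolved model performance: compute error measures per moving window,
# then assign each window to an error type. The paper reduces a large set of
# measures with a self-organizing map + clustering; here a simple rule on
# bias and RMSE stands in for that data-reduction step.
import math

def window_measures(obs, sim, width, step):
    out = []
    for start in range(0, len(obs) - width + 1, step):
        o = obs[start:start + width]
        s = sim[start:start + width]
        bias = sum(si - oi for si, oi in zip(s, o)) / width
        rmse = math.sqrt(sum((si - oi) ** 2 for si, oi in zip(s, o)) / width)
        out.append((start, bias, rmse))
    return out

def classify(bias, rmse, tol=0.2):
    if rmse <= tol:
        return "ok"
    if abs(bias) > tol:
        return "volume error"     # systematic over-/under-estimation
    return "dynamics error"       # large scatter with little net bias

obs = [1.0] * 20
sim = obs[:10] + [v + 0.5 for v in obs[10:]]  # offset in the second half
labels = [classify(b, r) for _, b, r in window_measures(obs, sim, 5, 5)]
print(labels)  # ['ok', 'ok', 'volume error', 'volume error']
```

The output is exactly the kind of time series of dominant error types the paper describes, just with two measures and a fixed rule instead of a learned classification.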

  1. Analysing the temporal dynamics of model performance for hydrological models

    Directory of Open Access Journals (Sweden)

    E. Zehe

    2009-07-01

    Full Text Available The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure. Dealing with a set of performance measures evaluated at a high temporal resolution implies analyzing and interpreting a high dimensional data set. This paper presents a method for such a hydrological model performance assessment with a high temporal resolution and illustrates its application for two very different rainfall-runoff modeling case studies. The first is the Wilde Weisseritz case study, a headwater catchment in the eastern Ore Mountains, simulated with the conceptual model WaSiM-ETH. The second is the Malalcahuello case study, a headwater catchment in the Chilean Andes, simulated with the physics-based model Catflow. The proposed time-resolved performance assessment starts with the computation of a large set of classically used performance measures for a moving window. The key of the developed approach is a data-reduction method based on self-organizing maps (SOMs) and cluster analysis to classify the high-dimensional performance matrix. Synthetic peak errors are used to interpret the resulting error classes. The final outcome of the proposed method is a time series of the occurrence of dominant error types. For the two case studies analyzed here, 6 such error types have been identified. They show clear temporal patterns, which can lead to the identification of model structural errors.

  2. A Hierarchical Linear Model with Factor Analysis Structure at Level 2

    Science.gov (United States)

    Miyazaki, Yasuo; Frank, Kenneth A.

    2006-01-01

    In this article the authors develop a model that employs a factor analysis structure at Level 2 of a two-level hierarchical linear model (HLM). The model (HLM2F) imposes a structure on a deficient rank Level 2 covariance matrix τ, and facilitates estimation of a relatively large τ matrix. Maximum likelihood estimators are derived via the…

  3. Social Network Analyses and Nutritional Behavior: An Integrated Modeling Approach

    Directory of Open Access Journals (Sweden)

    Alistair McNair Senior

    2016-01-01

    Full Text Available Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent advances in nutrition research, combining state-space models of nutritional geometry with agent-based models of systems biology, show how nutrient targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a tangible and practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit agent-based models that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interaction in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
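
A minimal version of the pipeline described, simulating pairwise contests, building a directed interaction network, and deriving a per-agent network metric, might look like the sketch below. The contest rule and the win-proportion metric are invented placeholders for the paper's nutritionally explicit agent-based model and its network statistics.

```python
# From pairwise interactions to a dominance-network metric.
# Each contest adds a directed edge winner -> loser; an agent's win
# proportion over its contests is a crude dominance score (real analyses
# would use richer metrics such as David's score or Elo ratings).
import itertools
import random

random.seed(1)
strength = {"a": 3.0, "b": 2.0, "c": 1.0}   # hidden 'condition' of each agent
wins = {k: 0 for k in strength}
contests = {k: 0 for k in strength}

for i, j in itertools.combinations(strength, 2):
    for _ in range(50):                      # repeated contests per dyad
        p_i = strength[i] / (strength[i] + strength[j])
        winner = i if random.random() < p_i else j
        wins[winner] += 1
        contests[i] += 1
        contests[j] += 1

score = {k: wins[k] / contests[k] for k in strength}
ranking = sorted(score, key=score.get, reverse=True)
print(ranking)  # expected to recover the hidden strength order
```

In the paper's setting the hidden "strength" would itself emerge from nutrient state, which is exactly the coupling between foraging and social structure the authors model.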

  4. Graphic-based musculoskeletal model for biomechanical analyses and animation.

    Science.gov (United States)

    Chao, Edmund Y S

    2003-04-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.

  5. Augmenting Visual Analysis in Single-Case Research with Hierarchical Linear Modeling

    Science.gov (United States)

    Davis, Dawn H.; Gagne, Phill; Fredrick, Laura D.; Alberto, Paul A.; Waugh, Rebecca E.; Haardorfer, Regine

    2013-01-01

    The purpose of this article is to demonstrate how hierarchical linear modeling (HLM) can be used to enhance visual analysis of single-case research (SCR) designs. First, the authors demonstrated the use of growth modeling via HLM to augment visual analysis of a sophisticated single-case study. Data were used from a delayed multiple baseline…

  6. Comparing modelling techniques for analysing urban pluvial flooding.

    Science.gov (United States)

    van Dijk, E; van der Meulen, J; Kluck, J; Straatman, J H M

    2014-01-01

    Short peak rainfall intensities cause sewer systems to overflow leading to flooding of streets and houses. Due to climate change and densification of urban areas, this is expected to occur more often in the future. Hence, next to their minor (i.e. sewer) system, municipalities have to analyse their major (i.e. surface) system in order to anticipate urban flooding during extreme rainfall. Urban flood modelling techniques are powerful tools in both public and internal communications and transparently support design processes. To provide more insight into the (im)possibilities of different urban flood modelling techniques, simulation results have been compared for an extreme rainfall event. The results show that, although modelling software is tending to evolve towards coupled one-dimensional (1D)-two-dimensional (2D) simulation models, surface flow models, using an accurate digital elevation model, prove to be an easy and fast alternative to identify vulnerable locations in hilly and flat areas. In areas at the transition between hilly and flat, however, coupled 1D-2D simulation models give better results since catchments of major and minor systems can differ strongly in these areas. During the decision making process, surface flow models can provide a first insight that can be complemented with complex simulation models for critical locations.

  7. Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Du, Qiang [Pennsylvania State Univ., State College, PA (United States)]

    2014-11-12

    The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next
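
The nonlocal force model that distinguishes peridynamics from classical elasticity can be shown in one dimension. The bond-based sketch below sums pairwise forces proportional to bond stretch over a finite horizon; the micromodulus value and discretization are arbitrary assumptions, not taken from the project.

```python
# 1D bond-based peridynamics sketch: the force on node i is a sum over all
# neighbors within the horizon delta, each bond contributing a force
# proportional to its stretch s = (|eta + xi| - |xi|) / |xi|.
def pd_forces(x_ref, x_cur, horizon, c, dx):
    n = len(x_ref)
    forces = [0.0] * n
    for i in range(n):
        for j in range(n):
            xi = x_ref[j] - x_ref[i]          # reference bond vector
            if j == i or abs(xi) > horizon:
                continue                      # outside the nonlocal horizon
            cur = x_cur[j] - x_cur[i]         # deformed bond vector
            stretch = (abs(cur) - abs(xi)) / abs(xi)
            direction = 1.0 if cur > 0 else -1.0
            forces[i] += c * stretch * direction * dx
    return forces

dx = 0.1
ref = [i * dx for i in range(11)]

# Undeformed body: every bond stretch is zero, so all forces vanish.
print(pd_forces(ref, ref, horizon=3 * dx, c=1.0, dx=dx))

# Uniform 1% stretch: interior nodes stay balanced, end nodes feel a net pull.
stretched = [xi * 1.01 for xi in ref]
f = pd_forces(ref, stretched, horizon=3 * dx, c=1.0, dx=dx)
```

Shrinking the horizon toward the grid spacing recovers local (nearest-neighbor) interactions, which is the intuition behind peridynamics converging to classical elasticity for smooth deformations.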

  8. Modeling hard clinical end-point data in economic analyses.

    Science.gov (United States)

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data are reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are more appropriate to accurately reflect the trial data.
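
One recurring technical detail behind the models reviewed is the conversion between a reported event rate and a per-cycle transition probability. The generic sketch below is not taken from any of the reviewed models; the numbers are illustrative.

```python
# Constant event rate r (events per person-year) -> per-cycle transition
# probability via p = 1 - exp(-r * t), the standard assumption of
# exponentially distributed event times; then a tiny Markov cohort trace.
import math

def rate_to_prob(rate_per_year, cycle_years=1.0):
    return 1.0 - math.exp(-rate_per_year * cycle_years)

def cohort_trace(n0, p_event, cycles):
    """Event-free counts per cycle under a constant per-cycle probability."""
    trace = [float(n0)]
    for _ in range(cycles):
        trace.append(trace[-1] * (1.0 - p_event))
    return trace

p = rate_to_prob(0.02)            # 2 events per 100 person-years
print(round(p, 6))                # 0.019801
print([round(x, 1) for x in cohort_trace(1000, p, 3)])
```

Note that for small rates p ≈ r, which is why constant-rate simplifications are defensible when event risk is low, as the review concludes.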

  9. Analysing regenerative potential in zebrafish models of congenital muscular dystrophy.

    Science.gov (United States)

    Wood, A J; Currie, P D

    2014-11-01

    The congenital muscular dystrophies (CMDs) are a clinically and genetically heterogeneous group of muscle disorders. Clinically hypotonia is present from birth, with progressive muscle weakness and wasting through development. For the most part, CMDs can mechanistically be attributed to failure of basement membrane protein laminin-α2 sufficiently binding with correctly glycosylated α-dystroglycan. The majority of CMDs therefore arise as the result of either a deficiency of laminin-α2 (MDC1A) or hypoglycosylation of α-dystroglycan (dystroglycanopathy). Here we consider whether by filling a regenerative medicine niche, the zebrafish model can address the present challenge of delivering novel therapeutic solutions for CMD. In the first instance the readiness and appropriateness of the zebrafish as a model organism for pioneering regenerative medicine therapies in CMD is analysed, in particular for MDC1A and the dystroglycanopathies. Despite the recent rapid progress made in gene editing technology, these approaches have yet to yield any novel zebrafish models of CMD. Currently the most genetically relevant zebrafish models to the field of CMD, have all been created by N-ethyl-N-nitrosourea (ENU) mutagenesis. Once genetically relevant models have been established the zebrafish has several important facets for investigating the mechanistic cause of CMD, including rapid ex vivo development, optical transparency up to the larval stages of development and relative ease in creating transgenic reporter lines. Together, these tools are well suited for use in live-imaging studies such as in vivo modelling of muscle fibre detachment. Secondly, the zebrafish's contribution to progress in effective treatment of CMD was analysed. Two approaches were identified in which zebrafish could potentially contribute to effective therapies. The first hinges on the augmentation of functional redundancy within the system, such as upregulating alternative laminin chains in the candyfloss

  10. [Approach to depressogenic genes from genetic analyses of animal models].

    Science.gov (United States)

    Yoshikawa, Takeo

    2004-01-01

    Human depression or mood disorder is defined as a complex disease, making positional cloning of susceptibility genes a formidable task. We have undertaken genetic analyses of three different animal models for depression, comparing our results with advanced database resources. We first performed quantitative trait loci (QTL) analysis on two mouse models of "despair", namely, the forced swim test (FST) and tail suspension test (TST), and detected multiple chromosomal loci that control immobility time in these tests. Since one QTL detected on mouse chromosome 11 harbors the GABA A receptor subunit genes, we tested these genes for association in human mood disorder patients. We obtained significant associations of the alpha 1 and alpha 6 subunit genes with the disease, particularly in females. This result was striking, because we had previously detected an epistatic interaction between mouse chromosomes 11 and X that regulates immobility time in these animals. Next, we performed genome-wide expression analyses using a rat model of depression, learned helplessness (LH). We found that in the frontal cortex of LH rats, a disease implicated region, the LIM kinase 1 gene (Limk 1) showed greatest alteration, in this case down-regulation. By combining data from the QTL analysis of FST/TST and DNA microarray analysis of mouse frontal cortex, we identified adenylyl cyclase-associated CAP protein 1 (Cap 1) as another candidate gene for depression susceptibility. Both Limk 1 and Cap 1 are key players in the modulation of actin G-F conversion. In summary, our current study using animal models suggests disturbances of GABAergic neurotransmission and actin turnover as potential pathophysiologies for mood disorder.

  11. Magnetic fabric analyses in analogue models of clays

    Science.gov (United States)

    García-Lasanta, Cristina; Román-Berdiel, Teresa; Izquierdo-Llavall, Esther; Casas-Sainz, Antonio

    2017-04-01

    Anisotropy of magnetic susceptibility (AMS) studies in sedimentary rocks subjected to deformation indicate that magnetic fabric orientation can be conditioned by multiple factors: sedimentary conditions, magnetic mineralogy, successive tectonic events, etc. All of these factors complicate the interpretation of AMS as a marker of deformation conditions. Analogue modeling makes it possible to isolate the variables that act in a geological process and to determine which factors influence the process, and to what extent. This study presents magnetic fabric analyses applied to several analogue models developed with common commercial red clays. This material resembles natural clay materials that, despite their greater degree of impurities and heterogeneity, have been shown to record a robust magnetic signal carried by a mixture of para- and ferromagnetic minerals. The magnetic behavior of the modeled clay has been characterized by temperature-dependent magnetic susceptibility curves (from 40 to 700°C). The measurements were performed combining a KLY-3S Kappabridge susceptometer with a CS3 furnace (AGICO Inc., Czech Republic). The obtained results indicate the presence of a significant hematite content as the ferromagnetic phase, as well as a remarkable paramagnetic fraction, probably constituted by phyllosilicates. This mineralogy is common in natural materials such as Permo-Triassic red facies, and magnetic fabric analyses in these natural examples have given consistent results in different tectonic contexts. In this study, sedimentary conditions and magnetic mineralogy are kept constant and the influence of the tectonic regime on the magnetic fabrics is analyzed. Our main objective is to reproduce several tectonic contexts (strike-slip and compression) in a sedimentary environment where material is not yet compacted, in order to determine how tectonic conditions influence the magnetic fabric registered in each case. By dispersing the clays in water and after allowing their

  12. Multi-state models: metapopulation and life history analyses

    Directory of Open Access Journals (Sweden)

    Arnason, A. N.

    2004-06-01

    Full Text Available Multi-state models are designed to describe populations that move among a fixed set of categorical states. The obvious application is to population interchange among geographic locations such as breeding sites or feeding areas (e.g., Hestbeck et al., 1991; Blums et al., 2003; Cam et al., 2004), but they are increasingly used to address important questions of evolutionary biology and life history strategies (Nichols & Kendall, 1995). In these applications, the states include life history stages such as breeding states. The multi-state models, by permitting estimation of stage-specific survival and transition rates, can help assess trade-offs between life history mechanisms (e.g. Yoccoz et al., 2000). These trade-offs are also important in meta-population analyses where, for example, the pre- and post-breeding rates of transfer among sub-populations can be analysed in terms of target colony distance, density, and other covariates (e.g., Lebreton et al. 2003; Breton et al., in review). Further examples of the use of multi-state models in analysing dispersal and life-history trade-offs can be found in the session on Migration and Dispersal. In this session, we concentrate on applications that did not involve dispersal. These applications fall in two main categories: those that address life history questions using stage categories, and a more technical use of multi-state models to address problems arising from the violation of mark-recapture assumptions leading to the potential for seriously biased predictions or misleading insights from the models. Our plenary paper, by William Kendall (Kendall, 2004), gives an overview of the use of Multi-state Mark-Recapture (MSMR) models to address two such violations. The first is the occurrence of unobservable states that can arise, for example, from temporary emigration or by incomplete sampling coverage of a target population. Such states can also occur for life history reasons, such
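
The basic estimation idea in multi-state models, stage-specific transition rates, reduces in the fully observed case to row-normalizing a matrix of observed state-to-state transitions. The sketch below deliberately ignores detection probability, which is the hard part MSMR models actually solve; the states and counts are invented.

```python
# Stage-structured transitions: estimate P(state j at t+1 | state i at t)
# from fully observed transition counts. Real MSMR models additionally
# estimate detection probabilities; this sketch assumes perfect detection.
from collections import Counter

def transition_matrix(histories):
    """histories: list of state sequences, e.g. ['pre-breeder', 'breeder', ...]"""
    counts = Counter()   # (from_state, to_state) -> number of transitions
    totals = Counter()   # from_state -> total transitions out of that state
    for h in histories:
        for a, b in zip(h, h[1:]):
            counts[(a, b)] += 1
            totals[a] += 1
    return {pair: n / totals[pair[0]] for pair, n in counts.items()}

histories = (
    [["pre-breeder", "breeder", "breeder"]] * 6
    + [["pre-breeder", "pre-breeder", "breeder"]] * 3
    + [["breeder", "breeder", "breeder"]] * 1
)
P = transition_matrix(histories)
print(round(P[("pre-breeder", "breeder")], 3))  # 9 of 12 transitions: 0.75
print(P[("breeder", "breeder")])                # 1.0
```

Trade-off questions of the kind discussed above then become comparisons between entries of P (and the associated stage-specific survival rates) across conditions or covariates.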

  13. Structural pairwise comparisons of HLM stability of phenyl derivatives: Introduction of the Pfizer metabolism index (PMI) and metabolism-lipophilicity efficiency (MLE).

    Science.gov (United States)

    Lewis, Mark L; Cucurull-Sanchez, Lourdes

    2009-02-01

    Data mining by pairwise comparison of over 150,000 human liver microsome (HLM) intrinsic clearance values stored within the internal Pfizer database has been performed by an automated tool. Systematic probability tables of specific structural changes on the intrinsic clearance of phenyl derivatives have been generated. From these data two new parameters, the Pfizer Metabolism Index (PMI) and Metabolism-Lipophilicity Efficiency (MLE) are introduced for each fragment. The findings are applied to a Topliss style analysis that focuses on metabolic stability.

  14. Dipole model test with one superconducting coil; results analysed

    CERN Document Server

    Durante, M; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D

    2013-01-01

    This report is the deliverable report 7.3.1 “Dipole model test with one superconducting coil; results analysed”. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The 4 report parts show that, although the magnet construction will only be completed by end 2014, all elements are present for a successful completion. Due to the importance of the project for the future of the participants and given the significant investments made by the participants, there is a full commitment to finish the project.

  15. Dipole model test with one superconducting coil: results analysed

    CERN Document Server

    Bajas, H; Benda, V; Berriaud, C; Bajko, M; Bottura, L; Caspi, S; Charrondiere, M; Clément, S; Datskov, V; Devaux, M; Durante, M; Fazilleau, P; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D

    2013-01-01

    This report is the deliverable report 7.3.1 “Dipole model test with one superconducting coil; results analysed”. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The 4 report parts show that, although the magnet construction will only be completed by end 2014, all elements are present for a successful completion. Due to the importance of the project for the future of the participants and given the significant investments made by the participants, there is a full commitment to finish the project.

  16. Incorporating flood event analyses and catchment structures into model development

    Science.gov (United States)

    Oppel, Henning; Schumann, Andreas

    2016-04-01

    The space-time variability in catchment response results from several hydrological processes which differ in their relevance in an event-specific way. An approach to characterise this variance consists in comparisons between flood events in a catchment and between flood responses of several sub-basins in such an event. In analytical frameworks the impact of space and time variability of rainfall on runoff generation due to rainfall excess can be characterised. Moreover the effect of hillslope and channel network routing on runoff timing can be specified. Hence, a modelling approach is needed to specify the runoff generation and formation. Knowing the space-time variability of rainfall and the (spatial averaged) response of a catchment it seems worthwhile to develop new models based on event and catchment analyses. The consideration of spatial order and the distribution of catchment characteristics in their spatial variability and interaction with the space-time variability of rainfall provides additional knowledge about hydrological processes at the basin scale. For this purpose a new procedure to characterise the spatial heterogeneity of catchments characteristics in their succession along the flow distance (differentiated between river network and hillslopes) was developed. It was applied to study of flood responses at a set of nested catchments in a river basin in eastern Germany. In this study the highest observed rainfall-runoff events were analysed, beginning at the catchment outlet and moving upstream. With regard to the spatial heterogeneities of catchment characteristics, sub-basins were separated by new algorithms to attribute runoff-generation, hillslope and river network processes. With this procedure the cumulative runoff response at the outlet can be decomposed and individual runoff features can be assigned to individual aspects of the catchment. Through comparative analysis between the sub-catchments and the assigned effects on runoff dynamics new

  17. A theoretical model for analysing gender bias in medicine

    Directory of Open Access Journals (Sweden)

    Johansson Eva E

    2009-08-01

    Full Text Available Abstract During the last decades, research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model, gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change by facts only. We suggest consciousness-raising activities and continuous reflections on gender attitudes among students, teachers, researchers and decision-makers.

  18. An Illumination Modeling System for Human Factors Analyses

    Science.gov (United States)

    Huynh, Thong; Maida, James C.; Bond, Robert L. (Technical Monitor)

    2002-01-01

    Seeing is critical to human performance. Lighting is critical for seeing. Therefore, lighting is critical to human performance. This is common sense, and here on earth, it is easily taken for granted. However, on orbit, because the sun will rise or set every 45 minutes on average, humans working in space must cope with extremely dynamic lighting conditions. Contrast conditions of harsh shadowing and glare are also severe. The prediction of lighting conditions for critical operations is essential. Crew training can factor lighting into the lesson plans when necessary. Mission planners can determine whether low-light video cameras are required or whether additional luminaires need to be flown. The optimization of the quantity and quality of light is needed because of the effects on crew safety, on electrical power and on equipment maintainability. To address all of these issues, an illumination modeling system has been developed by the Graphics Research and Analyses Facility (GRAF) and Lighting Environment Test Facility (LETF) in the Space Human Factors Laboratory at NASA Johnson Space Center. The system uses physically based ray tracing software (Radiance) developed at Lawrence Berkeley Laboratories, a human factors oriented geometric modeling system (PLAID) and an extensive database of humans and environments. Material reflectivity properties of major surfaces and critical surfaces are measured using a gonio-reflectometer. Luminaires (lights) are measured for beam spread distribution, color and intensity. Video camera performances are measured for color and light sensitivity. 3D geometric models of humans and the environment are combined with the material and light models to form a system capable of predicting lighting conditions and visibility conditions in space.

  19. Comparison of two potato simulation models under climate change. I. Model calibration and sensitivity analyses

    NARCIS (Netherlands)

    Wolf, J.

    2002-01-01

    To analyse the effects of climate change on potato growth and production, both a simple growth model, POTATOS, and a comprehensive model, NPOTATO, were applied. Both models were calibrated and tested against results from experiments and variety trials in The Netherlands. The sensitivity of model

  20. Application of Rapid Visco Analyser (RVA) viscograms and chemometrics for maize hardness characterisation.

    Science.gov (United States)

    Guelpa, Anina; Bevilacqua, Marta; Marini, Federico; O'Kennedy, Kim; Geladi, Paul; Manley, Marena

    2015-04-15

    It has been established in this study that the Rapid Visco Analyser (RVA) can describe maize hardness, irrespective of the RVA profile, when used in association with appropriate multivariate data analysis techniques. Therefore, the RVA can complement or replace current and/or conventional methods as a hardness descriptor. Hardness modelling based on RVA viscograms was carried out using seven conventional hardness methods (hectoliter mass (HLM), hundred kernel mass (HKM), particle size index (PSI), percentage vitreous endosperm (%VE), protein content, percentage chop (%chop) and near infrared (NIR) spectroscopy) as references and three different RVA profiles (hard, soft and standard) as predictors. An approach using locally weighted partial least squares (LW-PLS) was followed to build the regression models. The resulting prediction errors (root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP)) for the quantification of hardness values were always lower than, or of the same order as, the laboratory error of the reference method.
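    The abstract above reports prediction errors as RMSECV and RMSEP, both of which are root mean square errors between predicted and reference hardness values. As a minimal illustration of how such an error term is computed (the hardness values below are hypothetical, not data from the study):

```python
import math

def rmse(predicted, reference):
    """Root mean square error between predicted and reference values."""
    n = len(predicted)
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(predicted, reference)) / n)

# Hypothetical hardness values: reference method vs. RVA-based prediction
reference = [62.0, 58.5, 71.2, 66.8, 60.1]
predicted = [61.4, 59.3, 70.0, 67.5, 61.0]

print(round(rmse(predicted, reference), 3))  # 0.865
```

For RMSECV the predictions come from cross-validation folds, and for RMSEP from an independent prediction set, but the error formula itself is the same.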

  1. Micromechanical Failure Analyses for Finite Element Polymer Modeling

    Energy Technology Data Exchange (ETDEWEB)

    CHAMBERS,ROBERT S.; REEDY JR.,EARL DAVID; LO,CHI S.; ADOLF,DOUGLAS B.; GUESS,TOMMY R.

    2000-11-01

    Polymer stresses around sharp corners and in constrained geometries of encapsulated components can generate cracks leading to system failures. Often, analysts use maximum stresses as a qualitative indicator for evaluating the strength of encapsulated component designs. Although this approach has been useful for making relative comparisons screening prospective design changes, it has not been tied quantitatively to failure. Accurate failure models are needed for analyses to predict whether encapsulated components meet life cycle requirements. With Sandia's recently developed nonlinear viscoelastic polymer models, it has been possible to examine more accurately the local stress-strain distributions in zones of likely failure initiation looking for physically based failure mechanisms and continuum metrics that correlate with the cohesive failure event. This study has identified significant differences between rubbery and glassy failure mechanisms that suggest reasonable alternatives for cohesive failure criteria and metrics. Rubbery failure seems best characterized by the mechanisms of finite extensibility and appears to correlate with maximum strain predictions. Glassy failure, however, seems driven by cavitation and correlates with the maximum hydrostatic tension. Using these metrics, two three-point bending geometries were tested and analyzed under variable loading rates, different temperatures and comparable mesh resolution (i.e., accuracy) to make quantitative failure predictions. The resulting predictions and observations agreed well suggesting the need for additional research. In a separate, additional study, the asymptotically singular stress state found at the tip of a rigid, square inclusion embedded within a thin, linear elastic disk was determined for uniform cooling. The singular stress field is characterized by a single stress intensity factor K{sub a} and the applicable K{sub a} calibration relationship has been determined for both fully bonded and
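    The glassy-failure metric named above, maximum hydrostatic tension, is one third of the trace of the stress tensor, with positive values indicating tension. A minimal sketch using a hypothetical stress state (not values from the study):

```python
# Hydrostatic stress is one third of the trace of the stress tensor;
# positive values indicate hydrostatic tension (the glassy-failure metric above).
def hydrostatic_stress(sigma):
    """sigma: 3x3 stress tensor as nested lists (MPa)."""
    return (sigma[0][0] + sigma[1][1] + sigma[2][2]) / 3.0

# Hypothetical stress state near an encapsulated sharp corner (MPa)
sigma = [[120.0, 15.0, 0.0],
         [15.0,  90.0, 5.0],
         [0.0,   5.0,  60.0]]

print(hydrostatic_stress(sigma))  # 90.0 MPa of hydrostatic tension
```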

  2. When to Use Hierarchical Linear Modeling

    National Research Council Canada - National Science Library

    Veronika Huta

    2014-01-01

    Previous publications on hierarchical linear modeling (HLM) have provided guidance on how to perform the analysis, yet there is relatively little information on two questions that arise even before analysis...

  3. Analyses on Four Models and Cases of Enterprise Informatization

    Institute of Scientific and Technical Information of China (English)

    Shi Chunsheng(石春生); Han Xinjuan; Yang Cuilan; Zhao Dongbai

    2003-01-01

    The basic conditions of the enterprise informatization in Heilongjiang province are analyzed and 4 models are designed to drive the industrial and commercial information enterprise. The 4 models are the Resource Integration Informatization Model, the Flow Management Informatization Model, the Intranet E-commerce Informatization Model and the Network Enterprise Informatization Model. The conditions for using these 4 models, and problems requiring attention, are also analyzed.

  4. Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Gunzburger, Max [Florida State Univ., Tallahassee, FL (United States)

    2015-02-17

    We have treated the modeling, analysis, numerical analysis, and algorithmic development for nonlocal models of diffusion and mechanics. Variational formulations were developed and finite element methods were developed based on those formulations for both steady state and time dependent problems. Obstacle problems and optimization problems for the nonlocal models were also treated and connections made with fractional derivative models.

  5. Unmix 6.0 Model for environmental data analyses

    Science.gov (United States)

    Unmix Model is a mathematical receptor model developed by EPA scientists that provides scientific support for the development and review of the air and water quality standards, exposure research, and environmental forensics.

  6. Analysing Models as a Knowledge Technology in Transport Planning

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik

    2011-01-01

    Models belong to a wider family of knowledge technologies, applied in the transport area. Models sometimes share with other such technologies the fate of not being used as intended, or not at all. The result may be ill-conceived plans as well as wasted resources. Frequently, the blame...... critical analytic literature on knowledge utilization and policy influence. A simple scheme based in this literature is drawn up to provide a framework for discussing the interface between urban transport planning and model use. A successful example of model use in Stockholm, Sweden is used as a heuristic...

  7. Analyses of Tsunami Events using Simple Propagation Models

    Science.gov (United States)

    Chilvery, Ashwith Kumar; Tan, Arjun; Aggarwal, Mohan

    2012-03-01

    Tsunamis exhibit the characteristics of "canal waves" or "gravity waves", which belong to the class of "long ocean waves on shallow water." The memorable tsunami events, including the 2004 Indian Ocean tsunami and the 2011 Pacific Ocean tsunami off the coast of Japan, are analyzed by constructing simple tsunami propagation models, including the following: (1) a one-dimensional propagation model; (2) a two-dimensional propagation model on a flat surface; (3) a two-dimensional propagation model on a spherical surface; and (4) a finite line-source model on a two-dimensional surface. It is shown that Model 1 explains the basic features of the tsunami, including the propagation speed, depth of the ocean, dispersion-less propagation and bending of tsunamis around obstacles. Models 2 and 3 explain the observed amplitude variations for long-distance tsunami propagation across the Pacific Ocean, including the effect of the equatorial ocean current on the arrival times. Model 3 further explains the enhancement effect on the amplitude due to the curvature of the Earth past the equatorial distance. Finally, Model 4 explains the devastating effect of the superposition of tsunamis from two subduction events, which struck the Phuket region during the 2004 Indian Ocean tsunami.
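    The propagation speed that Model 1 explains follows the standard long-wave (shallow-water) relation c = sqrt(g * d), where d is the ocean depth. A minimal sketch, assuming a representative open-ocean depth of 4000 m:

```python
import math

def tsunami_speed(depth_m, g=9.81):
    """Shallow-water (long-wave) phase speed: c = sqrt(g * d)."""
    return math.sqrt(g * depth_m)

# A representative deep-ocean depth of ~4000 m gives a jet-airliner-like speed
c = tsunami_speed(4000.0)              # m/s
print(round(c, 1), round(c * 3.6))     # ~198.1 m/s, ~713 km/h
```

The same relation explains why tsunamis slow down and steepen dramatically as they approach shallow coastal water.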

  8. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require consistently updating both the implementation and the feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The pur

  9. Hyperelastic Modelling and Finite Element Analysing of Rubber Bushing

    Directory of Open Access Journals (Sweden)

    Merve Yavuz ERKEK

    2015-03-01

    Full Text Available The objective of this paper is to obtain stiffness curves of rubber bushings, which are used in the automotive industry, with a hyperelastic finite element model. Hyperelastic material models were obtained from different material tests. Stress and strain values and static stiffness curves were determined. It is shown that the static stiffness curves are nonlinear. The level of stiffness affects the vehicle dynamics behaviour.

  10. Modelling theoretical uncertainties in phenomenological analyses for particle physics

    CERN Document Server

    Charles, Jérôme; Niess, Valentin; Silva, Luiz Vale

    2016-01-01

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding $p$-values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive $p$-value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavour p...

  11. Modeling theoretical uncertainties in phenomenological analyses for particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)

    2017-04-15

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)

  12. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlations lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.
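    The SPCT described above compares slip models with a reference model by means of a chosen loss function. The sketch below uses a simple pointwise squared-error loss on hypothetical 2-D slip fields to rank two candidate models; the full SPCT additionally tests the statistical significance of the loss differential as a spatial field, which this illustration omits:

```python
def loss_field(model, reference):
    """Pointwise squared-error loss between a 2-D slip model and a reference."""
    return [[(m - r) ** 2 for m, r in zip(mrow, rrow)]
            for mrow, rrow in zip(model, reference)]

def mean_loss(model, reference):
    """Average the loss field over all grid cells."""
    field = loss_field(model, reference)
    cells = [v for row in field for v in row]
    return sum(cells) / len(cells)

# Hypothetical 2x3 slip distributions (metres)
reference = [[1.0, 2.0, 1.5], [0.5, 1.0, 0.8]]
model_a   = [[1.1, 1.9, 1.6], [0.4, 1.1, 0.9]]
model_b   = [[1.5, 2.6, 1.0], [0.9, 0.4, 1.3]]

# A lower mean loss means the model is closer to the reference
print(mean_loss(model_a, reference) < mean_loss(model_b, reference))  # True
```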

  13. Assessment of a geological model by surface wave analyses

    Science.gov (United States)

    Martorana, R.; Capizzi, P.; Avellone, G.; D'Alessandro, A.; Siragusa, R.; Luzio, D.

    2017-02-01

    A set of horizontal to vertical spectral ratio (HVSR) and multichannel analysis of surface waves (MASW) measurements, carried out in the Altavilla Milicia (Sicily) area, is analyzed to test a geological model of the area. Statistical techniques have been used in different stages of the data analysis to optimize the reliability of the information extracted from geophysical measurements. In particular, cluster analysis algorithms have been implemented to select the time windows of the microseismic signal to be used for calculating the spectral ratio H/V and to identify sets of spectral ratio peaks likely caused by the same underground structures. Using results of reflection seismic lines, typical values of P-wave and S-wave velocity were estimated for each geological formation present in the area. These were used to narrow down the search space of parameters for the HVSR interpretation. MASW profiles were carried out close to each HVSR measuring point, providing the parameters of the shallower layers for the HVSR models. MASW inversion has been constrained by extrapolating thicknesses from a known stratigraphic sequence. Preliminary 1D seismic models were obtained by adding deeper layers to models that resulted from MASW inversion. These justify HVSR peaks caused by layers deeper than the MASW investigation depth. Furthermore, much deeper layers were included in the HVSR model, as suggested by the geological setting and stratigraphic sequence. This choice was made considering that these latter layers do not generate other HVSR peaks and do not significantly affect the misfit. The starting models have been used to limit the starting search space for a more accurate interpretation, made considering the noise as a superposition of Rayleigh and Love waves. The results allowed us to recognize four main seismic layers and to associate them with the main stratigraphic successions. The lateral correlation of seismic velocity models, joined with tectonic evidences
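    The H/V spectral ratio used above divides a combined horizontal amplitude spectrum by the vertical one; under one common convention the two horizontal components are merged by their quadratic mean. A minimal sketch on hypothetical amplitude spectra (real processing also involves window selection and smoothing, as the abstract notes):

```python
import math

def hvsr(north_spec, east_spec, vert_spec):
    """H/V ratio per frequency: quadratic-mean horizontal over vertical amplitude."""
    return [math.sqrt((n * n + e * e) / 2.0) / v
            for n, e, v in zip(north_spec, east_spec, vert_spec)]

# Hypothetical amplitude spectra at four frequency bins
north = [1.0, 2.0, 4.0, 1.5]
east  = [1.0, 2.0, 4.0, 1.5]
vert  = [1.0, 1.0, 1.0, 1.5]

curve = hvsr(north, east, vert)
peak_index = curve.index(max(curve))
print(peak_index, round(max(curve), 1))  # peak at bin 2 with H/V = 4.0
```

The frequency of the H/V peak is what gets related to resonating subsurface layers in interpretations like the one above.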

  14. Compound dislocation models (CDMs) for volcano deformation analyses

    Science.gov (United States)

    Nikkhoo, Mehdi; Walter, Thomas R.; Lundgren, Paul R.; Prats-Iraola, Pau

    2017-02-01

    Volcanic crises are often preceded and accompanied by volcano deformation caused by magmatic and hydrothermal processes. Fast and efficient model identification and parameter estimation techniques for various sources of deformation are crucial for process understanding, volcano hazard assessment and early warning purposes. As a simple model that can be a basis for rapid inversion techniques, we present a compound dislocation model (CDM) that is composed of three mutually orthogonal rectangular dislocations (RDs). We present new RD solutions, which are free of artefact singularities and that also possess full rotational degrees of freedom. The CDM can represent both planar intrusions in the near field and volumetric sources of inflation and deflation in the far field. Therefore, this source model can be applied to shallow dikes and sills, as well as to deep planar and equidimensional sources of any geometry, including oblate, prolate and other triaxial ellipsoidal shapes. In either case the sources may possess any arbitrary orientation in space. After systematically evaluating the CDM, we apply it to the co-eruptive displacements of the 2015 Calbuco eruption observed by the Sentinel-1A satellite in both ascending and descending orbits. The results show that the deformation source is a deflating vertical lens-shaped source at an approximate depth of 8 km centred beneath Calbuco volcano. The parameters of the optimal source model clearly show that it is significantly different from an isotropic point source or a single dislocation model. The Calbuco example reflects the convenience of using the CDM for a rapid interpretation of deformation data.

  15. A Formal Model to Analyse the Firewall Configuration Errors

    Directory of Open Access Journals (Sweden)

    T. T. Myo

    2015-01-01

    Full Text Available The firewall is widely known as a brandmauer (security-edge gateway). To provide the demanded security, the firewall has to be appropriately adjusted, i.e. configured. Unfortunately, even skilled administrators may make mistakes when configuring, which result in a decreased level of network security and in undesirable packets infiltrating the network. The network can be exposed to various threats and attacks. One of the mechanisms used to ensure network security is the firewall. The firewall is a network component which, using a security policy, controls packets passing through the borders of a secured network. The security policy represents the set of rules. Packet filters work in the mode without inspection of state: they investigate packets as independent objects. Rules take the following form: (condition, action). The firewall analyses the entering traffic based on the IP addresses of the sender and recipient, the port numbers of the sender and recipient, and the protocol used. When a packet meets a rule's conditions, the action specified in the rule is carried out. It can be: allow, deny. The aim of this article is to develop tools to analyse a firewall configuration with inspection of states. The input data are a file with the set of rules. It is required to present the analysis of a security policy in an informative graphic form as well as to reveal discrepancies present in the rules. The article presents a security policy visualization algorithm and a program which shows how the firewall rules act on all possible packets. To represent the result in an intelligible form, a concept of the equivalence region is introduced. Our task is for the program to display the results of rules acting on packets in a convenient graphic form as well as to reveal contradictions between the rules. One of the problems is the large number of dimensions. As noted above, the following parameters are specified in the rule: Source IP address, destination IP
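    The (condition, action) rule form described above is typically evaluated with first-match semantics: the first rule whose condition matches the packet decides the action. A minimal sketch (the rule set, field names and default policy are hypothetical illustrations, not taken from the article):

```python
# First-match packet filtering: each rule is (condition, action); the first
# rule whose condition matches the packet decides allow/deny.
def evaluate(rules, packet, default="deny"):
    for condition, action in rules:
        if all(packet.get(k) == v for k, v in condition.items()):
            return action
    return default  # no rule matched: fall back to the default policy

rules = [
    ({"dst_port": 22, "protocol": "tcp"}, "allow"),   # SSH
    ({"dst_port": 23},                    "deny"),    # telnet, any protocol
]

print(evaluate(rules, {"dst_port": 22, "protocol": "tcp", "src_ip": "10.0.0.5"}))  # allow
print(evaluate(rules, {"dst_port": 80, "protocol": "tcp"}))                        # deny
```

Contradictions of the kind the article analyses arise when a later rule can never fire because an earlier, broader rule always matches first.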

  16. Analysing the Organizational Culture of Universities: Two Models

    Science.gov (United States)

    Folch, Marina Tomas; Ion, Georgeta

    2009-01-01

    This article presents the findings of two research projects, examining organizational culture by means of two different models of analysis--one at university level and one at department level--which were carried out over the last four years at Catalonian public universities (Spain). Theoretical and methodological approaches for the two…

  17. Enhancing Technology-Mediated Communication: Tools, Analyses, and Predictive Models

    Science.gov (United States)

    2007-09-01

    the home (see, for example, Nagel, Hudson, & Abowd, 2004), in social settings (see Kern, Antifakos, Schiele ...on Computer Supported Cooperative Work (CSCW 2006), pp. 525-528, ACM Press. Kern, N., Antifakos, S., Schiele, B., & Schwaninger, A. (2004). A model

  18. Gene Discovery and Functional Analyses in the Model Plant Arabidopsis

    Institute of Scientific and Technical Information of China (English)

    Cai-Ping Feng; John Mundy

    2006-01-01

    The present mini-review describes newer methods and strategies, including transposon and T-DNA insertions,TILLING, Deleteagene, and RNA interference, to functionally analyze genes of interest in the model plant Arabidopsis. The relative advantages and disadvantages of the systems are also discussed.

  19. Gene Discovery and Functional Analyses in the Model Plant Arabidopsis

    DEFF Research Database (Denmark)

    Feng, Cai-ping; Mundy, J.

    2006-01-01

    The present mini-review describes newer methods and strategies, including transposon and T-DNA insertions, TILLING, Deleteagene, and RNA interference, to functionally analyze genes of interest in the model plant Arabidopsis. The relative advantages and disadvantages of the systems are also...

  20. A new model for analysing thermal stress in granular composite

    Institute of Scientific and Technical Information of China (English)

    郑茂盛; 金志浩; 浩宏奇

    1995-01-01

    A double-embedding model is advanced, in which a reinforcement grain and a hollow matrix ball are embedded into the effective medium of the particulate-reinforced composite. With this model, the distributions of thermal stress in the different phases of the composite during cooling are studied. Various expressions for predicting elastic and elastoplastic thermal stresses are derived. It is found that, as temperature decreases, the reinforcement suffers compressive hydrostatic stress while the hydrostatic stress in the matrix zone is tensile; when temperature decreases further, a yield area forms in the matrix; when the volume fraction of reinforcement is enlarged, the compressive stress on the grain and the tensile hydrostatic stress in the matrix zone decrease, the initial temperature difference for yielding at the reinforcement/matrix interface rises, while that for overall matrix yielding decreases.

  1. Analysing an Analytical Solution Model for Simultaneous Mobility

    Directory of Open Access Journals (Sweden)

    Md. Ibrahim Chowdhury

    2013-12-01

    Full Text Available Current mobility models for simultaneous mobility are convoluted in designing simultaneous movement, where mobile nodes (MNs) travel randomly from two adjacent cells at the same time, and are complex in measuring the occurrences of simultaneous handover. The simultaneous mobility problem occurs when two MNs start handover at approximately the same time. As simultaneous mobility differs from other mobility patterns and generally occurs less often in real time, we argue that a simplified simultaneous mobility model can be considered by taking only symmetric positions of MNs with random steps. In addition, we simulated the model using mSCTP and compared the simulation results in different scenarios with customized cell ranges. The analytical results show that with bigger cell sizes, simultaneous handover with random steps occurs less often, while for sequential mobility (where the initial positions of MNs are predetermined) with random steps, simultaneous handover is more frequent.

  2. A simulation model for analysing brain structure deformations

    Energy Technology Data Exchange (ETDEWEB)

    Bona, Sergio Di [Institute for Information Science and Technologies, Italian National Research Council (ISTI-CNR), Via G Moruzzi, 1-56124 Pisa (Italy); Lutzemberger, Ludovico [Department of Neuroscience, Institute of Neurosurgery, University of Pisa, Via Roma, 67-56100 Pisa (Italy); Salvetti, Ovidio [Institute for Information Science and Technologies, Italian National Research Council (ISTI-CNR), Via G Moruzzi, 1-56124 Pisa (Italy)

    2003-12-21

    Recent developments of medical software applications, from the simulation to the planning of surgical operations, have revealed the need for modelling human tissues and organs not only from a geometric point of view but also from a physical one, i.e. soft tissues, rigid body, viscoelasticity, etc. This has given rise to the term 'deformable objects', which refers to objects with a morphology and a physical and mechanical behaviour of their own, reflecting their natural properties. In this paper, we propose a model, based upon physical laws, suitable for the realistic manipulation of geometric reconstructions of volumetric data taken from MR and CT scans. In particular, a physically based model of the brain is presented that is able to simulate the evolution of pathological intra-cranial phenomena of different natures, such as haemorrhages, neoplasms and haematomas, and to describe the consequences caused by their volume expansions and the influences they have on the anatomical and neuro-functional structures of the brain.

  3. Analyses of Cometary Silicate Crystals: DDA Spectral Modeling of Forsterite

    Science.gov (United States)

    Wooden, Diane

    2012-01-01

    Comets are the Solar System's deep freezers of gases, ices, and particulates that were present in the outer protoplanetary disk. Where comet nuclei accreted was so cold that CO ice (approximately 50K) and other supervolatile ices like ethane (C2H6) were preserved. However, comets also accreted high temperature minerals: silicate crystals that either condensed (greater than or equal to 1400 K) or that were annealed from amorphous (glassy) silicates (greater than 850-1000 K). Because of their rarity in the interstellar medium, cometary crystalline silicates are thought to be grains that formed in the inner disk and were then radially transported out to the cold and ice-rich regimes near Neptune. The questions that comets can potentially address are: How fast, how far, and over what duration were crystals that formed in the inner disk transported out to the comet-forming region(s)? In comets, the mass fractions of silicates that are crystalline, f_cryst, translate to benchmarks for protoplanetary disk radial transport models. The infamous comet Hale-Bopp has crystalline fractions of over 55%. The values for cometary crystalline mass fractions, however, are derived assuming that the mineralogy assessed for the submicron to micron-sized portion of the size distribution represents the compositional makeup of all larger grains in the coma. Models for fitting cometary SEDs make this assumption because models can only fit the observed features with submicron to micron-sized discrete crystals. On the other hand, larger (0.1-100 micrometer radii) porous grains composed of amorphous silicates and amorphous carbon can be easily computed with mixed medium theory, wherein vacuum mixed into a spherical particle mimics a porous aggregate. If crystalline silicates are mixed in, the models completely fail to match the observations. Moreover, models for a size distribution of discrete crystalline forsterite grains commonly employ the CDE computational method for ellipsoidal platelets (c:a:b=8

  4. Temporal variations analyses and predictive modeling of microbiological seawater quality.

    Science.gov (United States)

    Lušić, Darija Vukić; Kranjčević, Lado; Maćešić, Senka; Lušić, Dražen; Jozić, Slaven; Linšak, Željko; Bilajac, Lovorka; Grbčić, Luka; Bilajac, Neiro

    2017-08-01

    Bathing water quality is a major public health issue, especially for tourism-oriented regions. Currently used methods within the EU require at least 2.2 days to obtain analytical results, so the information forwarded to the public is outdated by the time it is released. The obtained results and beach assessment are influenced by the temporal and spatial characteristics of sample collection and numerous environmental parameters, as well as by differences in official water standards. This paper examines the temporal variation of microbiological parameters during the day, as well as the influence of the sampling hour, on decision processes in the management of the beach. Apart from the fecal indicators stipulated by the EU Bathing Water Directive (E. coli and enterococci), additional fecal (C. perfringens) and non-fecal (S. aureus and P. aeruginosa) parameters were analyzed. Moreover, the effects of applying different evaluation criteria (national, EU and U.S. EPA) to beach ranking were studied, and the most common reasons for exceeding water-quality standards were investigated. In order to upgrade routine monitoring, a predictive statistical model was developed. The highest concentrations of fecal indicators were recorded early in the morning (6 AM) due to the lack of solar radiation during the night period. When compared to enterococci, E. coli criteria appear to be more stringent for the detection of fecal pollution. In comparison to EU and U.S. EPA criteria, Croatian national evaluation criteria provide stricter public health standards. Solar radiation and precipitation were the predominant environmental parameters affecting beach water quality, and these parameters were included in the predictive model setup. Predictive models revealed great potential for the monitoring of recreational water bodies, and with further development can become a useful tool for the improvement of public health protection. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Analysing the Competency of Mathematical Modelling in Physics

    CERN Document Server

    Redish, Edward F

    2016-01-01

    A primary goal of physics is to create mathematical models that allow both predictions and explanations of physical phenomena. We weave maths extensively into our physics instruction beginning in high school, and the level and complexity of the maths we draw on grows as our students progress through a physics curriculum. Despite much research on the learning of both physics and math, the problem of how to successfully teach most of our students to use maths in physics effectively remains unsolved. A fundamental issue is that in physics, we don't just use maths, we think about the physical world with it. As a result, we make meaning with mathematical symbology in a different way than mathematicians do. In this talk we analyze how developing the competency of mathematical modeling is more than just "learning to do math" but requires learning to blend physical meaning into mathematical representations and use that physical meaning in solving problems. Examples are drawn from across the curriculum.

  6. Fluctuating selection models and McDonald-Kreitman type analyses.

    Directory of Open Access Journals (Sweden)

    Toni I Gossmann

    Full Text Available It is likely that the strength of selection acting upon a mutation varies through time due to changes in the environment. However, most population genetic theory assumes that the strength of selection remains constant. Here we investigate the consequences of fluctuating selection pressures on the quantification of adaptive evolution using McDonald-Kreitman (MK) style approaches. In agreement with previous work, we show that fluctuating selection can generate evidence of adaptive evolution even when the expected strength of selection on a mutation is zero. However, we also find that the mutations which contribute to both polymorphism and divergence tend, on average, to be positively selected during their lifetime under fluctuating selection models. This is because mutations that fluctuate, by chance, to positively selected values tend to reach higher frequencies in the population than those that fluctuate towards negative values. Hence the evidence of adaptive evolution detected under a fluctuating selection model by MK type approaches is genuine, since fixed mutations tend to be advantageous on average during their lifetime. Nevertheless, we show that MK methods tend to underestimate the rate of adaptive evolution when selection fluctuates.
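In MK-style analyses, the degree of adaptive evolution is commonly summarized as alpha, the estimated proportion of nonsynonymous substitutions fixed by positive selection. As a minimal illustration of the quantity these approaches estimate (the counts below are hypothetical, and this simple point estimator is one of several in use):

```python
def mk_alpha(Dn, Ds, Pn, Ps):
    """Estimate alpha, the proportion of nonsynonymous substitutions
    driven by positive selection, from a 2x2 McDonald-Kreitman table:
        alpha = 1 - (Ds * Pn) / (Dn * Ps)
    Dn, Ds: nonsynonymous / synonymous divergence counts
    Pn, Ps: nonsynonymous / synonymous polymorphism counts
    """
    if Dn == 0 or Ps == 0:
        raise ValueError("Dn and Ps must be nonzero")
    return 1.0 - (Ds * Pn) / (Dn * Ps)

# Hypothetical counts: alpha = 1 - (80*10)/(40*60) = 2/3
print(mk_alpha(Dn=40, Ds=80, Pn=10, Ps=60))
```

Under fluctuating selection, as the abstract notes, estimators of this form tend to understate the true rate of adaptive evolution.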

  7. A workflow model to analyse pediatric emergency overcrowding.

    Science.gov (United States)

    Zgaya, Hayfa; Ajmi, Ines; Gammoudi, Lotfi; Hammadi, Slim; Martinot, Alain; Beuscart, Régis; Renard, Jean-Marie

    2014-01-01

    The greatest source of delay in patient flow is the waiting time from the health care request, and especially from the bed request, to exit from the Pediatric Emergency Department (PED) for hospital admission. It represents 70% of the time that these patients spend in the PED waiting rooms. Our objective in this study is to identify tension indicators and bottlenecks that contribute to overcrowding. Patient flow through the PED was mapped over a continuous 2-year period from January 2011 to December 2012. Our method is to use the collected real data, based on actual visits made to the PED of the Regional University Hospital Center (CHRU) of Lille (France), in order to construct an accurate and complete representation of the PED processes. The result of this representation is a workflow model of the patient journey in the PED that represents the reality of the PED of CHRU of Lille as faithfully as possible. This model allowed us to identify sources of delay in patient flow and aspects of the PED activity that could be improved. It must be detailed enough to produce an analysis that identifies the dysfunctions of the PED and also proposes and estimates tension-prevention indicators. Our study is integrated into the French National Research Agency project titled "Hospital: optimization, simulation and avoidance of strain" (ANR HOST).

  8. Geographically Isolated Wetlands and Catchment Hydrology: A Modified Model Analyses

    Science.gov (United States)

    Evenson, G.; Golden, H. E.; Lane, C.; D'Amico, E.

    2014-12-01

    Geographically isolated wetlands (GIWs), typically defined as depressional wetlands surrounded by uplands, support an array of hydrological and ecological processes. However, key research questions concerning the hydrological connectivity of GIWs and their impacts on downgradient surface waters remain unanswered. This is particularly important for regulation and management of these systems. For example, in the past decade United States Supreme Court decisions suggest that GIWs can be afforded protection if significant connectivity exists between these waters and traditional navigable waters. Here we developed a simulation procedure to quantify the effects of various spatial distributions of GIWs across the landscape on the downgradient hydrograph using a refined version of the Soil and Water Assessment Tool (SWAT), a catchment-scale hydrological simulation model. We modified the SWAT FORTRAN source code and employed an alternative hydrologic response unit (HRU) definition to facilitate an improved representation of GIW hydrologic processes and connectivity relationships to other surface waters, and to quantify their downgradient hydrological effects. We applied the modified SWAT model to an ~ 202 km2 catchment in the Coastal Plain of North Carolina, USA, exhibiting a substantial population of mapped GIWs. Results from our series of GIW distribution scenarios suggest that: (1) Our representation of GIWs within SWAT conforms to field-based characterizations of regional GIWs in most respects; (2) GIWs exhibit substantial seasonally-dependent effects upon downgradient base flow; (3) GIWs mitigate peak flows, particularly following high rainfall events; and (4) The presence of GIWs on the landscape impacts the catchment water balance (e.g., by increasing groundwater outflows). Our outcomes support the hypothesis that GIWs have an important catchment-scale effect on downgradient streamflow.

  9. The Impact of Neighborhood Characteristics on Housing Prices-An Application of Hierarchical Linear Modeling

    OpenAIRE

    Lee Chun Chang; Hui-Yu Lin

    2012-01-01

    Housing data are of a nested nature as houses are nested in a village, a town, or a county. This study thus applies HLM (hierarchical linear modelling) in an empirical study by adding neighborhood characteristic variables into the model for consideration. Using the housing data of 31 neighborhoods in the Taipei area as analysis samples and three HLM sub-models, this study discusses the impact of neighborhood characteristics on house prices. The empirical results indicate that the impact of va...

  10. Using System Dynamic Model and Neural Network Model to Analyse Water Scarcity in Sudan

    Science.gov (United States)

    Li, Y.; Tang, C.; Xu, L.; Ye, S.

    2017-07-01

    Many parts of the world are facing the problem of water scarcity. Analysing water scarcity quantitatively is an important step towards solving the problem. Water scarcity in a region is gauged by the WSI (water scarcity index), which incorporates water supply and water demand. To obtain the WSI, a neural network model and an SDM (system dynamic model) are developed to depict how environmental and social factors affect water supply and demand. The uneven distribution of water resources and water demand across a region leads to an uneven distribution of WSI within that region. To predict future WSI, a logistic model, grey prediction, and statistical methods are applied to forecast the underlying variables. Sudan suffers from a severe water scarcity problem, with a WSI of 1 in 2014 and unevenly distributed water resources. According to the results of the modified model, after the intervention Sudan's water situation will improve.
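A WSI of 1, as reported for Sudan in 2014, means demand matches renewable supply. A minimal sketch, assuming the common formulation of the index as the ratio of water demand to renewable supply (the paper's exact definition, driven by its SDM and neural network components, may differ), with hypothetical figures:

```python
def water_scarcity_index(demand, supply):
    """WSI as the ratio of water demand to renewable water supply.
    WSI >= 1 indicates demand meets or exceeds supply (severe scarcity)."""
    if supply <= 0:
        raise ValueError("supply must be positive")
    return demand / supply

# Hypothetical national figures (km^3/year): demand equal to supply gives WSI = 1
print(water_scarcity_index(demand=37.0, supply=37.0))  # 1.0
```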

  11. Pan-European modelling of riverine nutrient concentrations - spatial patterns, source detection, trend analyses, scenario modelling

    Science.gov (United States)

    Bartosova, Alena; Arheimer, Berit; Capell, Rene; Donnelly, Chantal; Strömqvist, Johan

    2016-04-01

    Nutrient transport models are important tools for large-scale assessments of macro-nutrient fluxes (nitrogen, phosphorus) and can thus serve as support tools for environmental assessment and management. Results from model applications over large areas, i.e. from major river basins to continental scales, can fill a gap where monitoring data are not available. Here, we present results from the pan-European rainfall-runoff and nutrient transfer model E-HYPE, which is based on open data sources. We investigate the ability of the E-HYPE model to replicate the spatial and temporal variations found in observed time-series of riverine N and P concentrations, and illustrate the model's usefulness for nutrient source detection, trend analyses, and scenario modelling. The results show spatial patterns in N concentration in rivers across Europe which can be used to further our understanding of nutrient issues across the European continent. E-HYPE results show hot spots with the highest concentrations of total nitrogen in Western Europe along the North Sea coast. Source apportionment was performed to rank sources of nutrient inflow from land to sea along the European coast. An integrated dynamic model such as E-HYPE also allows us to investigate the impacts of climate change and programmes of measures, which was illustrated in a couple of scenarios for the Baltic Sea. Comparing model results with observations shows large uncertainty in many of the data sets and in the assumptions used in the model set-up, e.g. point-source release estimates. However, evaluation of model performance at a number of measurement sites in Europe shows that mean N concentration levels are generally well simulated. P levels are less well predicted, which is expected as the variability of P concentrations in both time and space is higher. Comparing model performance with model set-ups using local data for the Weaver River (UK) did not result in systematically better model performance, which highlights the complexity of model

  12. Taxing CO2 and subsidising biomass: Analysed in a macroeconomic and sectoral model

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik

    2000-01-01

    This paper analyses the combination of taxes and subsidies as an instrument to enable a reduction in CO2 emissions. The objective of the study is to compare recycling of a CO2 tax revenue as a subsidy for biomass use as opposed to traditional recycling such as reduced income or corporate taxation. ... A model of Denmark's energy supply sector is used to analyse the effect of a CO2 tax combined with using the tax revenue for biomass subsidies. The energy supply model is linked to a macroeconomic model such that the macroeconomic consequences of tax policies can be analysed along with the consequences

  13. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analysis of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
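The nesting of repeated waves within individuals that motivates LMM can be sketched outside SPSS as well. Below is a minimal, illustrative numpy simulation of a random-intercept growth model, estimated with the simple two-stage approach (per-subject OLS, then averaging) that full LMM/HLM estimation refines; all data and parameter values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_wave = 200, 6

# Level 1: y_ij = (b0 + u0_i) + b1 * t_j + e_ij  (random-intercept growth model)
b0, b1 = 2.0, 0.5
u0 = rng.normal(0.0, 1.0, size=n_subj)           # level-2 random intercepts
t = np.arange(n_wave, dtype=float)               # six waves, coded 0..5
e = rng.normal(0.0, 0.5, size=(n_subj, n_wave))  # level-1 residuals
y = b0 + u0[:, None] + b1 * t[None, :] + e

# Stage 1: per-subject OLS of y on time
X = np.column_stack([np.ones(n_wave), t])
coefs = np.linalg.lstsq(X, y.T, rcond=None)[0].T  # shape (n_subj, 2)

# Stage 2: average the subject-level estimates (precursor to full HLM)
print(coefs.mean(axis=0).round(2))  # close to [2.0, 0.5]
```

Full LMM/HLM improves on this two-stage average by weighting subjects by the precision of their level-1 estimates and by modeling the intercept variance explicitly.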

  14. Pathway models for analysing and managing the introduction of alien plant pests - an overview and categorization

    NARCIS (Netherlands)

    Douma, J.C.; Pautasso, M.; Venette, R.C.; Robinet, C.; Hemerik, L.; Mourits, M.C.M.; Schans, J.; Werf, van der W.

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative

  15. Establishing Causality Using Longitudinal Hierarchical Linear Modeling: An Illustration Predicting Achievement From Self-Control.

    Science.gov (United States)

    Duckworth, Angela Lee; Tsukayama, Eli; May, Henry

    2010-10-01

    The predictive validity of personality for important life outcomes is well established, but conventional longitudinal analyses cannot rule out the possibility that unmeasured third-variable confounds fully account for the observed relationships. Longitudinal hierarchical linear models (HLM) with time-varying covariates allow each subject to serve as his or her own control, thus eliminating between-individual confounds. HLM also allows the directionality of the causal relationship to be tested by reversing time-lagged predictor and outcome variables. We illustrate these techniques through a series of models that demonstrate that within-individual changes in self-control over time predict subsequent changes in GPA but not vice-versa. The evidence supporting a causal role for self-control was not moderated by IQ, gender, ethnicity, or income. Further analyses rule out one time-varying confound: self-esteem. The analytic approach taken in this study provides the strongest evidence to date for the causal role of self-control in determining achievement.

  16. An improved lake model for climate simulations: Model structure, evaluation, and sensitivity analyses in CESM1

    Directory of Open Access Journals (Sweden)

    Zachary Subin

    2012-02-01

    Full Text Available Lakes can influence regional climate, yet most general circulation models have, at best, simple and largely untested representations of lakes. We developed the Lake, Ice, Snow, and Sediment Simulator (LISSS) for inclusion in the land-surface component (CLM4) of an earth system model (CESM1). The existing CLM4 lake model performed poorly at all sites tested; for temperate lakes, summer surface water temperature predictions were 10–25°C lower than observations. CLM4-LISSS modifies the existing model by including (1) a treatment of snow; (2) freezing, melting, and ice physics; (3) a sediment thermal submodel; (4) spatially variable prescribed lake depth; (5) improved parameterizations of lake surface properties; (6) increased mixing under ice and in deep lakes; and (7) correction of previous errors. We evaluated the lake model predictions of water temperature and surface fluxes at three small temperate and boreal lakes where extensive observational data were available. We also evaluated the predicted water temperature and/or ice and snow thicknesses for ten other lakes where less comprehensive forcing observations were available. CLM4-LISSS performed very well compared to observations for shallow- to medium-depth small lakes. For large, deep lakes, the under-prediction of mixing was improved by increasing the lake eddy diffusivity by a factor of 10, consistent with previously published analyses. Surface temperature and surface flux predictions were improved when the aerodynamic roughness lengths were calculated as a function of friction velocity, rather than using a constant value of 1 mm or greater. We evaluated the sensitivity of surface energy fluxes to modeled lake processes and parameters. Large changes in monthly-averaged surface fluxes (up to 30 W m−2) were found when excluding snow insulation or phase-change physics and when varying the opacity, depth, albedo of melting lake ice, and mixing strength across ranges commonly found in real lakes. Typical

  17. A modified Lee-Carter model for analysing short-base-period data.

    Science.gov (United States)

    Zhao, Bojuan Barbara

    2012-03-01

    This paper introduces a new modified Lee-Carter model for analysing short-base-period mortality data, for which the original Lee-Carter model produces severely fluctuating predicted age-specific mortality. Approximating the unknown parameters in the modified model by linearized cubic splines and other additive functions, the model can be simplified into a logistic regression when fitted to binomial data. The expected death rate estimated from the modified model is smooth, not only over ages but also over years. The analysis of mortality data in China (2000-08) demonstrates the advantages of the new model over existing models.
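The original Lee-Carter decomposition that the modified model builds on, log m(x,t) = a_x + b_x k_t, is commonly fitted by a singular value decomposition of the centered log-mortality matrix. A self-contained sketch on synthetic rates (this is the classic SVD fit, not the paper's modified spline/logistic-regression approach):

```python
import numpy as np

def fit_lee_carter(log_m):
    """Fit log m(x,t) = a_x + b_x * k_t by SVD (classic Lee-Carter),
    with the usual identifiability constraints sum(b_x) = 1, sum(k_t) = 0."""
    a = log_m.mean(axis=1)                       # age pattern a_x
    U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b = U[:, 0] / U[:, 0].sum()                  # age sensitivities b_x
    k = s[0] * Vt[0] * U[:, 0].sum()             # period index k_t
    return a, b, k

# Synthetic rates that follow the model exactly (5 age groups, 10 years)
ages, years = 5, 10
a_true = np.linspace(-6, -2, ages)
b_true = np.full(ages, 1.0 / ages)
k_true = np.linspace(4, -4, years)               # mean zero by symmetry
log_m = a_true[:, None] + np.outer(b_true, k_true)

a, b, k = fit_lee_carter(log_m)
print(np.allclose(np.outer(b, k), np.outer(b_true, k_true)))  # True
```

With a short base period, the k_t series estimated this way is noisy, which is exactly the instability the modified model addresses by smoothing the parameters with splines.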

  18. Comparison of linear measurements and analyses taken from plaster models and three-dimensional images.

    Science.gov (United States)

    Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins

    2014-11-01

    Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). With the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic(®), Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif). For the digital images, the measurement tools used were those from the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). The measurements taken from the plaster models using the caliper and from the digital models using O3d software were identical.

  19. Three-dimensional lake water quality modeling: sensitivity and uncertainty analyses.

    Science.gov (United States)

    Missaghi, Shahram; Hondzo, Miki; Melching, Charles

    2013-11-01

    Two sensitivity and uncertainty analysis methods are applied to a three-dimensional coupled hydrodynamic-ecological model (ELCOM-CAEDYM) of a morphologically complex lake. The primary goals of the analyses are to increase confidence in the model predictions, identify influential model parameters, quantify the uncertainty of model predictions, and explore the spatial and temporal variabilities of model predictions. The influence of model parameters on four model-predicted variables (model output) and the contributions of each of the model-predicted variables to the total variation in model output are presented. Predicted water temperature, dissolved oxygen, total phosphorus, and algal biomass contributed 3, 13, 26, and 58% of total model output variance, respectively. The fraction of variance resulting from model parameter uncertainty was calculated by two methods and used for evaluation and ranking of the most influential model parameters. Nine out of the top 10 parameters identified by each method agreed, but their ranks were different. Spatial and temporal changes of model uncertainty were investigated and visualized. Model uncertainty appeared to be concentrated around specific water depths and dates that corresponded to significant storm events. The results suggest that spatial and temporal variations in the predicted water quality variables are sensitive to the hydrodynamics of physical perturbations such as those caused by stream inflows generated by storm events. The sensitivity and uncertainty analyses identified the mineralization of dissolved organic carbon, the sediment phosphorus release rate, the algal metabolic loss rate, the internal phosphorus concentration, and the phosphorus uptake rate as the most influential model parameters.
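A common way to quantify how much each parameter contributes to output variance is a first-order (Sobol-style) sensitivity index, S_i = Var(E[Y|X_i]) / Var(Y). The sketch below estimates it by brute-force binning on a toy two-input model; the paper's actual methods, model, and parameters differ, and the model function here is purely illustrative:

```python
import numpy as np

def first_order_Si(xs, y, bins=50):
    """Brute-force first-order sensitivity index Var(E[Y|Xi]) / Var(Y)
    for each input column of xs, estimated by quantile-binning on Xi."""
    out = []
    for i in range(xs.shape[1]):
        edges = np.quantile(xs[:, i], np.linspace(0, 1, bins + 1))
        idx = np.clip(np.digitize(xs[:, i], edges[1:-1]), 0, bins - 1)
        cond_means = np.array([y[idx == b].mean() for b in range(bins)])
        out.append(cond_means.var() / y.var())
    return np.array(out)

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(200_000, 2))
Y = 4 * X[:, 0] + X[:, 1]       # toy additive model in which X1 dominates

# Analytic values are 16/17 ~ 0.94 and 1/17 ~ 0.06
print(first_order_Si(X, Y).round(2))
```

For the additive toy model the indices sum to one; interactions between parameters would leave a gap between the sum of first-order indices and unity.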

  20. Processes models, environmental analyses, and cognitive architectures: quo vadis quantum probability theory?

    Science.gov (United States)

    Marewski, Julian N; Hoffrage, Ulrich

    2013-06-01

    A lot of research in cognition and decision making suffers from a lack of formalism. The quantum probability program could help to improve this situation, but we wonder whether it would provide even more added value if its presumed focus on outcome models were complemented by process models that are, ideally, informed by ecological analyses and integrated into cognitive architectures.

  1. Stability of fMRI striatal response to alcohol cues: a hierarchical linear modeling approach.

    Science.gov (United States)

    Schacht, Joseph P; Anton, Raymond F; Randall, Patrick K; Li, Xingbao; Henderson, Scott; Myrick, Hugh

    2011-05-01

    In functional magnetic resonance imaging (fMRI) studies of alcohol-dependent individuals, alcohol cues elicit activation of the ventral and dorsal aspects of the striatum (VS and DS), which are believed to underlie aspects of reward learning critical to the initiation and maintenance of alcohol dependence. Cue-elicited striatal activation may represent a biological substrate through which treatment efficacy may be measured. However, to be useful for this purpose, VS or DS activation must first demonstrate stability across time. Using hierarchical linear modeling (HLM), this study tested the stability of cue-elicited activation in anatomically and functionally defined regions of interest in bilateral VS and DS. Nine non-treatment-seeking alcohol-dependent participants twice completed an alcohol cue reactivity task during two fMRI scans separated by 14 days. HLM analyses demonstrated that, across all participants, alcohol cues elicited significant activation in each of the regions of interest. At the group level, these activations attenuated slightly between scans, but session-wise differences were not significant. Within-participants stability was best in the anatomically defined right VS and DS and in a functionally defined region that encompassed right caudate and putamen (intraclass correlation coefficients of .75, .81, and .76, respectively). Thus, within this small sample, alcohol cue-elicited fMRI activation had good reliability in the right striatum, though a larger sample is necessary to ensure generalizability and further evaluate stability. This study also demonstrates the utility of HLM analytic techniques for serial fMRI studies, in which separating within-participants variance (individual changes in activation) from between-participants factors (time or treatment) is critical. Copyright © 2011 Elsevier Inc. All rights reserved.
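The intraclass correlation coefficients reported here express the share of total variance attributable to stable between-participant differences. A minimal sketch of the one-way random-effects ICC(1,1) on hypothetical scan-rescan data (the study's HLM-based estimates, which separate within- from between-participant variance more flexibly, are more elaborate):

```python
import numpy as np

def icc_1_1(x):
    """One-way random-effects ICC(1,1) for an (n_subjects, k_sessions)
    matrix: (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    n, k = x.shape
    grand = x.mean()
    subj_means = x.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)          # between subjects
    msw = ((x - subj_means[:, None]) ** 2).sum() / (n * (k - 1))   # within subjects
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical cue-elicited activation betas: 9 participants x 2 scan sessions
rng = np.random.default_rng(1)
subj = rng.normal(0.5, 0.3, size=9)                 # stable subject effects
scans = subj[:, None] + rng.normal(0, 0.1, size=(9, 2))  # session noise
print(round(icc_1_1(scans), 2))
```

Values around .75-.81, as reported for the right striatum, indicate that most of the variance reflects stable individual differences rather than session-to-session noise.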

  2. Analyses and simulations in income frame regulation model for the network sector from 2007; Analyser og simuleringer i inntektsrammereguleringsmodellen for nettbransjen fra 2007

    Energy Technology Data Exchange (ETDEWEB)

    Askeland, Thomas Haave; Fjellstad, Bjoern

    2007-07-01

    Analyses of the income frame regulation model for the network sector in Norway, introduced on 1 January 2007. The model's treatment of the norm cost is evaluated, especially the effect analyses carried out by a so-called Data Envelopment Analysis (DEA) model. It is argued that there may exist an age bias in the data set, and that this should and can be corrected for in the effect analyses. It is proposed that the adjustment be made by introducing an age parameter in the data set. Analyses have been made of how the calibration effects in the regulation model affect the industry's total income frame, as well as each network company's income frame. It is argued that the calibration, the way it is presented, is not working according to its intention, and should be adjusted in order to provide the sector with the reference rate of return.

  3. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2011-10-01

    Full Text Available The article presents the fundamental aspects of linear regression as a toolbox that can be used in macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, homoscedasticity and heteroscedasticity. The use of econometric instruments in macroeconomics is an important factor in guaranteeing the quality of the models, analyses, results and the interpretations that can be drawn at this level.
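The estimation step described above can be made concrete with a small worked example; the figures below are hypothetical macro data, and the closed-form OLS estimates are the standard ones:

```python
import numpy as np

def simple_ols(x, y):
    """OLS estimates for the simple linear regression y = b0 + b1*x + e,
    returning the intercept, slope, and R^2."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    b1 = np.cov(x, y, bias=True)[0, 1] / x.var()   # slope: Cov(x,y)/Var(x)
    b0 = y.mean() - b1 * x.mean()                  # intercept through the means
    resid = y - (b0 + b1 * x)
    r2 = 1.0 - resid.var() / y.var()               # coefficient of determination
    return b0, b1, r2

# Hypothetical macro series: GDP growth (x) vs. consumption growth (y), in %
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.0])
b0, b1, r2 = simple_ols(x, y)
print(round(b1, 2))  # 0.97
```

Testing for heteroscedasticity, as the article discusses, would then examine whether the residuals' variance is stable across the range of x.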

  4. Sensitivity analyses of spatial population viability analysis models for species at risk and habitat conservation planning.

    Science.gov (United States)

    Naujokaitis-Lewis, Ilona R; Curtis, Janelle M R; Arcese, Peter; Rosenfeld, Jordan

    2009-02-01

    Population viability analysis (PVA) is an effective framework for modeling species- and habitat-recovery efforts, but uncertainty in parameter estimates and model structure can lead to unreliable predictions. Integrating complex and often uncertain information into spatial PVA models requires that comprehensive sensitivity analyses be applied to explore the influence of spatial and nonspatial parameters on model predictions. We reviewed 87 analyses of spatial demographic PVA models of plants and animals to identify common approaches to sensitivity analysis in recent publications. In contrast to best practices recommended in the broader modeling community, sensitivity analyses of spatial PVAs were typically ad hoc, inconsistent, and difficult to compare. Most studies applied local approaches to sensitivity analyses, but few varied multiple parameters simultaneously. A lack of standards for sensitivity analysis and reporting in spatial PVAs has the potential to compromise the ability to learn collectively from PVA results, accurately interpret results in cases where model relationships include nonlinearities and interactions, prioritize monitoring and management actions, and ensure conservation-planning decisions are robust to uncertainties in spatial and nonspatial parameters. Our review underscores the need to develop tools for global sensitivity analysis and apply these to spatial PVA.

  5. Design evaluation and optimisation in crossover pharmacokinetic studies analysed by nonlinear mixed effects models

    OpenAIRE

    Nguyen, Thu Thuy; Bazzoli, Caroline; Mentré, France

    2012-01-01

    International audience; Bioequivalence or interaction trials are commonly studied in crossover design and can be analysed by nonlinear mixed effects models as an alternative to noncompartmental approach. We propose an extension of the population Fisher information matrix in nonlinear mixed effects models to design crossover pharmacokinetic trials, using a linearisation of the model around the random effect expectation, including within-subject variability and discrete covariates fixed or chan...

  6. Analysing outsourcing policies in an asset management context: a six-stage model

    OpenAIRE

    Schoenmaker, R.; Verlaan, J.G.

    2013-01-01

    Asset managers of civil infrastructure are increasingly outsourcing their maintenance. Whereas maintenance is a cyclic process, decisions to outsource are often project-based, confusing the discussion on the degree of outsourcing. This paper presents a six-stage model that facilitates the top-down discussion for analysing the degree of outsourcing maintenance. The model is based on the cyclic nature of maintenance. The six-stage model can: (1) give clear statements about the pre

  7. Geographical variation of sporadic Legionnaires' disease analysed in a grid model

    DEFF Research Database (Denmark)

    Rudbeck, M.; Jepsen, Martin Rudbeck; Sonne, I.B.;

    2010-01-01

    The aim was to analyse variation in the incidence of sporadic Legionnaires' disease in a geographical information system in three time periods (1990-2005) by the application of a grid model, and to assess the model's validity by analysing variation according to grid position. Coordinates ... clusters. Four cells had excess incidence in all three time periods. The analysis in 25 different grid positions indicated a low risk of overlooking cells with excess incidence in a random grid. The coefficient of variation ranged from 0.08 to 0.11, independent of the threshold. By application of a random ...

  8. X-ray CT analyses, models and numerical simulations: a comparison with petrophysical analyses in an experimental CO2 study

    Science.gov (United States)

    Henkel, Steven; Pudlo, Dieter; Enzmann, Frieder; Reitenbach, Viktor; Albrecht, Daniel; Ganzer, Leonhard; Gaupp, Reinhard

    2016-06-01

    An essential part of the collaborative research project H2STORE (hydrogen to store), which is funded by the German government, was a comparison of various analytical methods for characterizing reservoir sandstones from different stratigraphic units. In this context Permian, Triassic and Tertiary reservoir sandstones were analysed. Rock core materials, provided by RWE Gasspeicher GmbH (Dortmund, Germany), GDF Suez E&P Deutschland GmbH (Lingen, Germany), E.ON Gas Storage GmbH (Essen, Germany) and RAG Rohöl-Aufsuchungs Aktiengesellschaft (Vienna, Austria), were processed by different laboratory techniques: thin sections were prepared, rock fragments were crushed, and cubes of 1 cm edge length and plugs 3 to 5 cm in length with a diameter of about 2.5 cm were sawn from macroscopically homogeneous cores. To this prepared sample material, polarized light microscopy and scanning electron microscopy, coupled with image analyses, specific surface area measurements (after Brunauer, Emmett and Teller, 1938; BET), He-porosity and N2-permeability measurements and high-resolution microcomputer tomography (μ-CT), which were used for numerical simulations, were applied. All these methods were applied to most of the same sample material, before and, for selected Permian sandstones, also after static CO2 experiments under reservoir conditions. A major concern in comparing the results of these methods is an appraisal of the reliability of the given porosity, permeability and mineral-specific reactive (inner) surface area data. The CO2 experiments modified the petrophysical as well as the mineralogical/geochemical rock properties. These changes are detectable by all the applied analytical methods. Nevertheless, a major outcome of the high-resolution μ-CT analyses and subsequent numerical data simulations was that quite similar data sets and data interpretations were obtained with the different petrophysical standard methods. Moreover, the μ-CT analyses are not only time saving, but also non

  9. RooStatsCms: a tool for analyses modelling, combination and statistical studies

    Science.gov (United States)

    Piparo, D.; Schott, G.; Quast, G.

    2009-12-01

    The RooStatsCms (RSC) software framework allows analysis modelling and combination as well as statistical studies, together with access to sophisticated graphics routines for the visualisation of results. The goal of the project is to complement the existing analyses by means of their combination and accurate statistical studies.

  10. RooStatsCms: a tool for analyses modelling, combination and statistical studies

    CERN Document Server

    Piparo, D.; Schott, G.; Quast, G.

    2008-01-01

    The RooStatsCms (RSC) software framework allows analysis modelling and combination as well as statistical studies, together with access to sophisticated graphics routines for the visualisation of results. The goal of the project is to complement the existing analyses by means of their combination and accurate statistical studies.

  11. Combined Task and Physical Demands Analyses towards a Comprehensive Human Work Model

    Science.gov (United States)

    2014-09-01

    velocities, and accelerations over time for each postural sequence. Neck strain measures derived from biomechanical analyses of these postural...and whole missions. The result is a comprehensive model of tasks and associated physical demands from which one can estimate the cumulative neck...Griffon Helicopter aircrew (Pilots and Flight Engineers) reported neck pain, particularly when wearing Night Vision Goggles (NVGs) (Forde et al., 2011).

  12. Dutch AG-MEMOD model; A tool to analyse the agri-food sector

    NARCIS (Netherlands)

    Leeuwen, van M.G.A.; Tabeau, A.A.

    2005-01-01

    Agricultural policies in the European Union (EU) have a history of continuous reform. AG-MEMOD, acronym for Agricultural sector in the Member states and EU: econometric modelling for projections and analysis of EU policies on agriculture, forestry and the environment, provides a system for analysing

  13. Supply Chain Modeling for Fluorspar and Hydrofluoric Acid and Implications for Further Analyses

    Science.gov (United States)

    2015-04-01

    analysis. SUBJECT TERMS: supply chain, model, fluorspar, hydrofluoric acid, shortfall, substitution, Defense Logistics Agency, National Defense...unlimited. IDA Document D-5379 (Log: H 15-000099), Institute for Defense Analyses, 4850 Mark Center Drive, Alexandria, Virginia 22311-1882. D. Sean Barnett, Jerome Bracken: Supply Chain Modeling for Fluorspar and Hydrofluoric Acid and

  14. Wavelet-based spatial comparison technique for analysing and evaluating two-dimensional geophysical model fields

    Directory of Open Access Journals (Sweden)

    S. Saux Picart

    2011-11-01

    Full Text Available Complex numerical models of the Earth's environment, based around 3-D or 4-D time and space domains, are routinely used for applications including climate predictions, weather forecasts, fishery management and environmental impact assessments. Quantitatively assessing the ability of these models to accurately reproduce geographical patterns at a range of spatial and temporal scales has always been a difficult problem to address. However, this is crucial if we are to rely on these models for decision making. Satellite data are potentially the only observational dataset able to cover the large spatial domains analysed by many types of geophysical models. Consequently, optical-wavelength satellite data are beginning to be used to evaluate model hindcast fields of terrestrial and marine environments. However, these satellite data invariably contain regions of occluded or missing data due to clouds, further complicating any comparison with the model. A methodology has recently been developed to evaluate precipitation forecasts using radar observations. It allows model skill to be evaluated at a range of spatial scales and rain intensities. Here we extend the original method to allow its generic application to a range of continuous and discontinuous geophysical data fields, and therefore allow its use with optical satellite data. This is achieved through two major improvements to the original method: (i) all thresholds are determined from the statistical distribution of the input data, so no a priori knowledge about the model fields being analysed is required, and (ii) occluded data can be analysed without impacting the metric results. The method can be used to assess a model's ability to simulate geographical patterns over a range of spatial scales. We illustrate how the method provides a compact and concise way of visualising the degree of agreement between spatial features in two datasets. The application of the new method, its

  15. A Model for Integrating Fixed-, Random-, and Mixed-Effects Meta-Analyses into Structural Equation Modeling

    Science.gov (United States)

    Cheung, Mike W.-L.

    2008-01-01

    Meta-analysis and structural equation modeling (SEM) are two important statistical methods in the behavioral, social, and medical sciences. They are generally treated as two unrelated topics in the literature. The present article proposes a model to integrate fixed-, random-, and mixed-effects meta-analyses into the SEM framework. By applying an…
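The fixed- and random-effects pooling that such an integrative SEM framework builds on can be sketched numerically. Below is a minimal, illustrative Python implementation (the effect sizes and variances are hypothetical, not taken from the article), using the classic inverse-variance fixed-effect estimator and the DerSimonian-Laird estimate of between-study variance:

```python
import numpy as np

def meta_analysis(effects, variances):
    """Pool effect sizes: inverse-variance fixed effect and
    DerSimonian-Laird (DL) random effects."""
    effects = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                                   # inverse-variance weights
    fixed = np.sum(w * effects) / np.sum(w)
    # Heterogeneity: Cochran's Q and the DL estimate of tau^2
    q = np.sum(w * (effects - fixed) ** 2)
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)
    w_star = 1.0 / (v + tau2)                     # random-effects weights
    rand = np.sum(w_star * effects) / np.sum(w_star)
    return fixed, rand, tau2

# Three hypothetical studies: standardized mean differences and their variances
fixed_est, rand_est, tau2 = meta_analysis([0.30, 0.50, 0.10], [0.01, 0.02, 0.015])
print(fixed_est, rand_est, tau2)
```

When tau^2 is estimated as zero the two pooled estimates coincide, which is one way the fixed-effect model arises as a special case within the unified framework.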

  16. Stellar abundance analyses in the light of 3D hydrodynamical model atmospheres

    CERN Document Server

    Asplund, M

    2003-01-01

    I describe recent progress in terms of 3D hydrodynamical model atmospheres and 3D line formation and their applications to stellar abundance analyses of late-type stars. Such 3D studies remove the free parameters inherent in classical 1D investigations (mixing length parameters, macro- and microturbulence) yet are highly successful in reproducing a large arsenal of observational constraints such as detailed line shapes and asymmetries. Their potential for abundance analyses is illustrated by discussing the derived oxygen abundances in the Sun and in metal-poor stars, where they seem to resolve long-standing problems as well as significantly alter the inferred conclusions.

  17. WOMBAT: a tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML).

    Science.gov (United States)

    Meyer, Karin

    2007-11-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large-scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html.

  18. Application of an approximate vectorial diffraction model to analysing diffractive micro-optical elements

    Institute of Scientific and Technical Information of China (English)

    Niu Chun-Hui; Li Zhi-Yuan; Ye Jia-Sheng; Gu Ben-Yuan

    2005-01-01

    Scalar diffraction theory, although simple and efficient, is too rough for analysing diffractive micro-optical elements. Rigorous vectorial diffraction theory requires extensive numerical efforts, and is not a convenient design tool. In this paper we employ a simple approximate vectorial diffraction model, which combines the principle of the scalar diffraction theory with an approximate local field model, to analyse the diffraction of optical waves by some typical two-dimensional diffractive micro-optical elements. The TE and TM polarization modes are both considered. We have found that the approximate vectorial diffraction model agrees much better with rigorous electromagnetic simulation results than the scalar diffraction theory does for these micro-optical elements.

  19. On the unnecessary ubiquity of hierarchical linear modeling.

    Science.gov (United States)

    McNeish, Daniel; Stapleton, Laura M; Silverman, Rebecca D

    2017-03-01

    In psychology and the behavioral sciences generally, the hierarchical linear model (HLM) and its extensions for discrete outcomes are popular methods for modeling clustered data. HLM and its discrete outcome extensions, however, are certainly not the only methods available to model clustered data. Although other methods exist and are widely implemented in other disciplines, it seems that psychologists have yet to consider these methods in substantive studies. This article compares and contrasts HLM with alternative methods including generalized estimating equations and cluster-robust standard errors. These alternative methods do not model random effects and thus make a smaller number of assumptions and are interpreted identically to single-level methods, with the benefit that estimates are adjusted to reflect clustering of observations. Situations where these alternative methods may be advantageous are discussed, including research questions where random effects are and are not required, when random effects can change the interpretation of regression coefficients, challenges of modeling with random effects with discrete outcomes, and examples of published psychology articles that use HLM that may have benefitted from using alternative methods. Illustrative examples are provided and discussed to demonstrate the advantages of the alternative methods and also when HLM would be the preferred method. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
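The cluster-robust ("sandwich") standard errors discussed here as an alternative to HLM can be illustrated with a short simulation. This is an illustrative Python/NumPy sketch (simulated data, a CR1-style small-sample correction), showing how conventional OLS standard errors understate uncertainty when observations are clustered:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate clustered data: 30 clusters x 20 observations, random intercepts
n_clusters, n_per = 30, 20
cluster = np.repeat(np.arange(n_clusters), n_per)
x = rng.normal(size=n_clusters * n_per)
u = rng.normal(scale=1.0, size=n_clusters)        # cluster effect (ignored by OLS)
y = 1.0 + 0.5 * x + u[cluster] + rng.normal(size=x.size)

X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta
XtX_inv = np.linalg.inv(X.T @ X)

# Conventional OLS variance (assumes independent errors)
s2 = resid @ resid / (X.shape[0] - X.shape[1])
se_ols = np.sqrt(np.diag(s2 * XtX_inv))

# Cluster-robust "sandwich" variance: sum outer products of cluster score vectors
meat = np.zeros((2, 2))
for g in range(n_clusters):
    idx = cluster == g
    sg = X[idx].T @ resid[idx]
    meat += np.outer(sg, sg)
adj = n_clusters / (n_clusters - 1)               # small-sample (CR1-style) factor
se_cr = np.sqrt(np.diag(adj * XtX_inv @ meat @ XtX_inv))
print(beta, se_ols, se_cr)
```

With a non-trivial intraclass correlation, the cluster-robust standard error of the intercept is several times the conventional one, while the point estimates are identical, which is exactly the trade-off the article describes.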

  20. Analysing and combining atmospheric general circulation model simulations forced by prescribed SST. Tropical response

    Energy Technology Data Exchange (ETDEWEB)

    Moron, V. [Université de Provence, UFR des sciences géographiques et de l'aménagement, Aix-en-Provence (France); Navarra, A. [Istituto Nazionale di Geofisica e Vulcanologia, Bologna (Italy); Ward, M. N. [University of Oklahoma, Cooperative Institute for Mesoscale Meteorological Studies, Norman OK (United States); Foland, C. K. [Hadley Centre for Climate Prediction and Research, Meteorological Office, Bracknell (United Kingdom); Friederichs, P. [Meteorologisches Institut der Universität Bonn, Bonn (Germany); Maynard, K.; Polcher, J. [Université Pierre et Marie Curie, Paris (France). Centre National de la Recherche Scientifique, Laboratoire de Météorologie Dynamique, Paris

    2001-08-01

    The ECHAM 3.2 (T21), ECHAM (T30) and LMD (version 6, grid-point resolution with 96 longitudes x 72 latitudes) atmospheric general circulation models were integrated through the period 1961 to 1993, forced with the same observed Sea Surface Temperatures (SSTs) as compiled at the Hadley Centre. Three runs were made for each model starting from different initial conditions. The large-scale tropical inter-annual variability is analysed to give a picture of the skill of each model and of combinations of the three models. To analyse the similarity of model response averaged over the same key regions, several widely used indices are calculated: the Southern Oscillation Index (SOI), large-scale wind shear indices of the boreal summer monsoon in Asia and West Africa, and rainfall indices for NE Brazil, the Sahel and India. Even for the indices where internal noise is large, some years are consistent amongst all the runs, suggesting inter-annual variability in the strength of SST forcing. Averaging the ensemble mean of the three models (the super-ensemble mean) yields improved skill. When each run is weighted according to its skill, taking three runs from different models instead of three runs of the same model improves the mean skill. There is also some indication that one run of a given model could be better than another, suggesting that persistent anomalies could change its sensitivity to SST. The index approach lacks the flexibility to assess whether a model's response to SST has been geographically displaced. The analysis therefore also considers the first mode in the global tropics, found through singular value decomposition analysis, which is clearly related to El Niño/Southern Oscillation (ENSO) in all seasons. The Observed-Model and Model-Model analyses lead to almost the same patterns, suggesting that the dominant pattern of model response is also the most skilful mode. Seasonal modulation of both skill and spatial patterns (both model and observed) clearly exists with highest skill

  1. Analysing, Interpreting, and Testing the Invariance of the Actor-Partner Interdependence Model

    Directory of Open Access Journals (Sweden)

    Gareau, Alexandre

    2016-09-01

    Full Text Available Although in recent years researchers have begun to utilize dyadic data analyses such as the actor-partner interdependence model (APIM), certain limitations to the applicability of these models still exist. Given the complexity of APIMs, most researchers will often use observed scores to estimate the model's parameters, which can significantly limit and underestimate statistical results. The aim of this article is to highlight the importance of conducting a confirmatory factor analysis (CFA) of equivalent constructs between dyad members (i.e., measurement equivalence/invariance; ME/I). Different steps for merging CFA and APIM procedures will be detailed in order to shed light on new and integrative methods.

  2. Distinguishing Mediational Models and Analyses in Clinical Psychology: Atemporal Associations Do Not Imply Causation.

    Science.gov (United States)

    Winer, E Samuel; Cervone, Daniel; Bryant, Jessica; McKinney, Cliff; Liu, Richard T; Nadorff, Michael R

    2016-09-01

    A popular way to attempt to discern causality in clinical psychology is through mediation analysis. However, mediation analysis is sometimes applied to research questions in clinical psychology when inferring causality is impossible. This practice may soon increase with new, readily available, and easy-to-use statistical advances. Thus, we here provide a heuristic to remind clinical psychological scientists of the assumptions of mediation analyses. We describe recent statistical advances and unpack assumptions of causality in mediation, underscoring the importance of time in understanding mediational hypotheses and analyses in clinical psychology. Example analyses demonstrate that statistical mediation can occur despite theoretical mediation being improbable. We propose a delineation of mediational effects derived from cross-sectional designs into the terms temporal and atemporal associations to emphasize time in conceptualizing process models in clinical psychology. The general implications for mediational hypotheses and the temporal frameworks from within which they may be drawn are discussed. © 2016 Wiley Periodicals, Inc.
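The article's central point, that statistical mediation can appear where causal mediation is improbable, is easy to reproduce in simulation. In the illustrative sketch below (a hypothetical data-generating process, not the article's data), X, M and Y share a common cause and there is no causal path X → M → Y, yet the cross-sectional a*b indirect effect estimate is clearly non-zero:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# No true mediation: a latent confounder C drives X, M, and Y independently.
c = rng.normal(size=n)
x = c + rng.normal(size=n)
m = c + rng.normal(size=n)
y = c + rng.normal(size=n)

def slope(pred, out, ctrl=None):
    """OLS slope of `out` on `pred`, optionally partialling out `ctrl`."""
    cols = [np.ones(n), pred] if ctrl is None else [np.ones(n), pred, ctrl]
    X = np.column_stack(cols)
    return np.linalg.lstsq(X, out, rcond=None)[0][1]

a = slope(x, m)              # "path" X -> M
b = slope(m, y, ctrl=x)      # "path" M -> Y, controlling for X
indirect = a * b             # statistical mediation despite no causal path
print(indirect)
```

Here the population values are a = 0.5 and b = 1/3, so the atemporal "indirect effect" converges to about 0.17 even though mediation is impossible by construction.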

  3. FluxExplorer: A general platform for modeling and analyses of metabolic networks based on stoichiometry

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Stoichiometry-based analyses of metabolic networks have aroused significant interest among systems biology researchers in recent years. It is necessary to develop a more convenient modeling platform on which users can reconstruct their network models using completely graphical operations, and explore them with powerful analyzing modules to get a better understanding of the properties of metabolic systems. Herein, an in silico platform, FluxExplorer, for metabolic modeling and analyses based on stoichiometry has been developed as a publicly available tool for systems biology research. This platform integrates various analytic approaches, including flux balance analysis, minimization of metabolic adjustment, extreme pathways analysis, shadow prices analysis, and singular value decomposition, providing a thorough characterization of the metabolic system. Using a graphic modeling process, metabolic networks can be reconstructed and modified intuitively and conveniently. The inconsistencies of a model with respect to the FBA principles can be proved automatically. In addition, this platform supports the systems biology markup language (SBML). FluxExplorer has been applied to rebuild a metabolic network in mammalian mitochondria, producing meaningful results. Generally, it is a powerful and very convenient tool for metabolic network modeling and analysis.

  4. Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts.

    Energy Technology Data Exchange (ETDEWEB)

    Sevougian, S. David; Freeze, Geoffrey A.; Gardner, William Payton; Hammond, Glenn Edward; Mariner, Paul

    2014-09-01

    directly, rather than through simplified abstractions. It also allows for complex representations of the source term, e.g., the explicit representation of many individual waste packages (i.e., meter-scale detail of an entire waste emplacement drift). This report fulfills the Generic Disposal System Analysis Work Package Level 3 Milestone - Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts (M3FT-14SN0808032).

  5. Calibration of back-analysed model parameters for landslides using classification statistics

    Science.gov (United States)

    Cepeda, Jose; Henderson, Laura

    2016-04-01

    Back-analyses are useful for characterizing the geomorphological and mechanical processes and parameters involved in the initiation and propagation of landslides. These processes and parameters can in turn be used for improving forecasts of scenarios and hazard assessments in areas or sites which have similar settings to the back-analysed cases. The selection of the modeled landslide that produces the best agreement with the actual observations requires running a number of simulations by varying the type of model and the sets of input parameters. The comparison of the simulated and observed parameters is normally performed by visual comparison of geomorphological or dynamic variables (e.g., geometry of scarp and final deposit, maximum velocities and depths). Over the past six years, a method developed by NGI has been used by some researchers for a more objective selection of back-analysed input model parameters. That method includes an adaptation of the equations for calculation of classifiers, and a comparative evaluation of classifiers of the selected parameter sets in the Receiver Operating Characteristic (ROC) space. This contribution presents an updating of the methodology. The proposed procedure allows comparisons between two or more "clouds" of classifiers. Each cloud represents the performance of a model over a range of input parameters (e.g., samples of probability distributions). Considering the fact that each cloud does not necessarily produce a full ROC curve, two new normalised ROC-space parameters are introduced for characterizing the performance of each cloud. The first parameter is representative of the cloud position relative to the point of perfect classification. The second parameter characterizes the position of the cloud relative to the theoretically perfect ROC curve and the no-discrimination line. The methodology is illustrated with back-analyses of slope stability and landslide runout of selected case studies. 
This research activity has been

  6. Volvo Logistics Corporation Returnable Packaging System : a model for analysing cost savings when switching packaging system

    OpenAIRE

    2008-01-01

    This thesis is a study for analysing costs affected by packaging in a producing industry. The purpose is to develop a model that will calculate and present possible cost savings for the customer by using Volvo Logistics Corporation's (VLC's) returnable packaging instead of other packaging solutions. The thesis is based on qualitative data gained from both theoretical and empirical studies. The methodology for gaining information has been to study theoretical sources such as course literature a...

  7. Longitudinal data and hierarchical modeling. A tutorial for sport sciences researchers

    Directory of Open Access Journals (Sweden)

    António Prista

    2005-12-01

    Full Text Available This paper was prepared to be a tutorial on ways of approaching longitudinal data. The main aim is to help researchers use Hierarchical or Multilevel Modelling (HMM) to extract all the information their data contain. In the first part, we present the fundamental ideas of HMM applied to longitudinal data. The following part shows a complex example illustrating all steps in HMM as well as the analyses of all statistics given by the HLM 6.0 software package. RESUMO (translated from Portuguese): This text is intended as a didactic aid on ways of looking at longitudinal data. Its fundamental purpose is to help researchers use Hierarchical or Multilevel Modelling (MHMN) to extract all the richness from their data. In the first part we present the fundamental ideas of MHMN applied to longitudinal data. We then turn to a complex example to present all the steps of MHMN, substantively interpreting the main statistics produced by the HLM 6.0 software.

  8. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    Science.gov (United States)

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.
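The random-intercept structure that such LMM procedures estimate can be illustrated without any specialized package. The sketch below (simulated longitudinal data; a method-of-moments ANOVA estimator rather than the REML estimation SPSS uses) recovers the between-subject variance component and the intraclass correlation, the quantity that motivates multilevel modeling in the first place:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated longitudinal data: 100 subjects x 5 occasions,
# between-subject (random-intercept) variance 4, residual variance 1
n_subj, n_occ = 100, 5
subj_effect = rng.normal(scale=2.0, size=n_subj)
y = subj_effect[:, None] + rng.normal(scale=1.0, size=(n_subj, n_occ))

# One-way ANOVA (method-of-moments) variance-component estimates
grand = y.mean()
subj_means = y.mean(axis=1)
ms_between = n_occ * np.sum((subj_means - grand) ** 2) / (n_subj - 1)
ms_within = np.sum((y - subj_means[:, None]) ** 2) / (n_subj * (n_occ - 1))
var_between = (ms_between - ms_within) / n_occ
icc = var_between / (var_between + ms_within)   # true value 0.8 by construction
print(icc)
```

A non-trivial intraclass correlation like this is exactly the situation in which single-level analyses of longitudinal data give misleading standard errors, and a mixed model (whether fit in SPSS, HLM, or elsewhere) is warranted.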

  9. Computational model for supporting SHM systems design: Damage identification via numerical analyses

    Science.gov (United States)

    Sartorato, Murilo; de Medeiros, Ricardo; Vandepitte, Dirk; Tita, Volnei

    2017-02-01

    This work presents a computational model to simulate thin structures monitored by piezoelectric sensors in order to support the design of SHM systems that use vibration-based methods. A new shell finite element model was proposed and implemented via a User ELement subroutine (UEL) in the commercial package ABAQUS™. This model was based on a modified First Order Shear Theory (FOST) for piezoelectric composite laminates. After that, damaged cantilever beams with two piezoelectric sensors in different positions were investigated by using experimental analyses and the proposed computational model. A maximum difference in the magnitude of the FRFs between numerical and experimental analyses of 7.45% was found near the resonance regions. For damage identification, different levels of damage severity were evaluated by seven damage metrics, including one proposed by the present authors. Numerical and experimental damage metric values were compared, showing a good correlation in terms of tendency. Finally, based on comparisons of numerical and experimental results, the potential and limitations of the proposed computational model for supporting SHM system design are discussed.

  10. Model error analyses of photochemistry mechanisms using the BEATBOX/BOXMOX data assimilation toy model

    Science.gov (United States)

    Knote, C. J.; Eckl, M.; Barré, J.; Emmons, L. K.

    2016-12-01

    Simplified descriptions of photochemistry in the atmosphere ('photochemical mechanisms'), necessary to reduce the computational burden of a model simulation, contribute significantly to the overall uncertainty of an air quality model. Understanding how the photochemical mechanism contributes to observed model errors through examination of results of the complete model system is next to impossible due to cancellation and amplification effects amongst the tightly interconnected model components. Here we present BEATBOX, a novel method to evaluate photochemical mechanisms using the underlying chemistry box model BOXMOX. With BOXMOX we can rapidly initialize various mechanisms (e.g. MOZART, RACM, CBMZ, MCM) with homogenized observations (e.g. from field campaigns) and conduct idealized 'chemistry in a jar' simulations under controlled conditions. BEATBOX is a data assimilation toy model built upon BOXMOX which allows simulating the effects of assimilating observations (e.g., CO, NO2, O3) into these simulations. In this presentation we show how we use the Master Chemical Mechanism (MCM, U Leeds) as a benchmark for more simplified mechanisms such as MOZART, use BEATBOX to homogenize the chemical environment, and diagnose errors within the more simplified mechanisms. We present BEATBOX as a new, freely available tool that allows researchers to rapidly evaluate their chemistry mechanism against a range of others under varying chemical conditions.

  11. Modeling and performance analyses of evaporators in frozen-food supermarket display cabinets at low temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Getu, H.M.; Bansal, P.K. [Department of Mechanical Engineering, The University of Auckland, Private Bag 92019, Auckland (New Zealand)

    2007-11-15

    This paper presents modeling and experimental analyses of evaporators in 'in situ' frozen-food display cabinets at low temperatures in the supermarket industry. Extensive experiments were conducted to measure store and display cabinet relative humidities and temperatures, as well as pressures, temperatures and mass flow rates of the refrigerant. The mathematical model adopts various empirical correlations for heat transfer coefficients and frost properties in a fin-tube heat exchanger in order to investigate the influence of indoor conditions on the performance of the display cabinets. The model is validated with the experimental data of the 'in situ' cabinets and is intended as a guide for design engineers evaluating the performance of supermarket display cabinet heat exchangers under various store conditions. (author)

  12. Using Weather Data and Climate Model Output in Economic Analyses of Climate Change

    Energy Technology Data Exchange (ETDEWEB)

    Auffhammer, M.; Hsiang, S. M.; Schlenker, W.; Sobel, A.

    2013-06-28

    Economists are increasingly using weather data and climate model output in analyses of the economic impacts of climate change. This article introduces a set of weather data sets and climate models that are frequently used, discusses the most common mistakes economists make in using these products, and identifies ways to avoid these pitfalls. We first provide an introduction to weather data, including a summary of the types of datasets available, and then discuss five common pitfalls that empirical researchers should be aware of when using historical weather data as explanatory variables in econometric applications. We then provide a brief overview of climate models and discuss two common and significant errors often made by economists when climate model output is used to simulate the future impacts of climate change on an economic outcome of interest.

  13. Risk Factor Analyses for the Return of Spontaneous Circulation in the Asphyxiation Cardiac Arrest Porcine Model

    Directory of Open Access Journals (Sweden)

    Cai-Jun Wu

    2015-01-01

    Full Text Available Background: Animal models of asphyxiation cardiac arrest (ACA) are frequently used in basic research to mirror the clinical course of cardiac arrest (CA). The rates of return of spontaneous circulation (ROSC) in ACA animal models are lower than those from studies that have utilized ventricular fibrillation (VF) animal models. The purpose of this study was to characterize the factors associated with ROSC in the ACA porcine model. Methods: Forty-eight healthy miniature pigs underwent endotracheal tube clamping to induce CA. Once induced, CA was maintained untreated for a period of 8 min. Two minutes following the initiation of cardiopulmonary resuscitation (CPR), defibrillation was attempted until ROSC was achieved or the animal died. To assess the factors associated with ROSC in this CA model, logistic regression analyses were performed on gender, the time of preparation, the amplitude spectrum area (AMSA) at the beginning of CPR and the pH at the beginning of CPR. A receiver operating characteristic (ROC) curve was used to evaluate the predictive value of AMSA for ROSC. Results: ROSC was achieved in only 52.1% of animals in this ACA porcine model. The multivariate logistic regression analyses revealed that ROSC significantly depended on the time of preparation, AMSA at the beginning of CPR and pH at the beginning of CPR. The area under the ROC curve for AMSA at the beginning of CPR in predicting ROSC was 0.878 (95% confidence interval: 0.773-0.983), and the optimum cut-off value was 15.62 (specificity 95.7% and sensitivity 80.0%). Conclusions: The time of preparation, AMSA and the pH at the beginning of CPR were associated with ROSC in this ACA porcine model. AMSA also predicted the likelihood of ROSC in this ACA animal model.
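The ROC analysis described here, an AUC plus an optimal cut-off, can be sketched in a few lines of Python. The scores and labels below are hypothetical stand-ins for AMSA values and ROSC outcomes, not the study's data; the cut-off is chosen by maximizing Youden's J statistic (sensitivity + specificity - 1):

```python
import numpy as np

def roc_auc_and_cutoff(scores, labels):
    """Rank-based ROC AUC and the cutoff maximizing Youden's J."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=bool)
    pos, neg = scores[labels], scores[~labels]
    # AUC = probability a random positive outscores a random negative (ties = 1/2)
    auc = (np.mean(pos[:, None] > neg[None, :])
           + 0.5 * np.mean(pos[:, None] == neg[None, :]))
    best_j, best_cut = -1.0, None
    for cut in np.unique(scores):
        sens = np.mean(pos >= cut)      # true positive rate at this cutoff
        spec = np.mean(neg < cut)       # true negative rate at this cutoff
        j = sens + spec - 1.0
        if j > best_j:
            best_j, best_cut = j, cut
    return auc, best_cut

# Hypothetical AMSA-like scores for resuscitated (1) vs. non-resuscitated (0) animals
scores = [20.1, 18.4, 16.0, 15.7, 14.9, 12.2, 11.5, 9.8]
labels = [1, 1, 1, 1, 0, 1, 0, 0]
auc, cutoff = roc_auc_and_cutoff(scores, labels)
print(auc, cutoff)
```

For real analyses the same quantities would normally come from a library routine, but the hand computation makes explicit what the study's AUC of 0.878 and cut-off of 15.62 summarize.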

  14. Prediction Uncertainty Analyses for the Combined Physically-Based and Data-Driven Models

    Science.gov (United States)

    Demissie, Y. K.; Valocchi, A. J.; Minsker, B. S.; Bailey, B. A.

    2007-12-01

    The unavoidable simplification associated with physically-based mathematical models can result in biased parameter estimates and correlated model calibration errors, which in turn affect the accuracy of model predictions and the corresponding uncertainty analyses. In this work, a physically-based groundwater model (MODFLOW) together with error-correcting artificial neural networks (ANN) are used in a complementary fashion to obtain an improved prediction (i.e. prediction with reduced bias and error correlation). The associated prediction uncertainty of the coupled MODFLOW-ANN model is then assessed using three alternative methods. The first method estimates the combined model confidence and prediction intervals using first-order least-squares regression approximation theory. The second method uses Monte Carlo and bootstrap techniques for MODFLOW and ANN, respectively, to construct the combined model confidence and prediction intervals. The third method relies on a Bayesian approach that uses analytical or Monte Carlo methods to derive the intervals. The performance of these approaches is compared with Generalized Likelihood Uncertainty Estimation (GLUE) and Calibration-Constrained Monte Carlo (CCMC) intervals of the MODFLOW predictions alone. The results are demonstrated for a hypothetical case study developed based on a phytoremediation site at the Argonne National Laboratory. This case study comprises structural, parameter, and measurement uncertainties. The preliminary results indicate that the proposed three approaches yield comparable confidence and prediction intervals, thus making the computationally efficient first-order least-squares regression approach attractive for estimating the coupled model uncertainty. These results will be compared with GLUE and CCMC results.
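The bootstrap construction of prediction intervals mentioned as the second method can be sketched for a simple surrogate model (the MODFLOW-ANN coupling itself is far beyond a short example). This illustrative residual-bootstrap sketch, using a toy linear model in place of the coupled simulator, resamples residuals to propagate both parameter and noise uncertainty into a 95% prediction interval at a new point:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy calibrated model: fit y = b0 + b1*x to noisy data
x = np.linspace(0, 10, 50)
y = 2.0 + 0.8 * x + rng.normal(scale=1.0, size=x.size)
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

# Residual bootstrap: refit on resampled residuals, then add a fresh noise draw
x0 = np.array([1.0, 5.0])          # design row for a new point at x = 5
preds = []
for _ in range(2000):
    y_b = X @ beta + rng.choice(resid, size=resid.size, replace=True)
    beta_b = np.linalg.lstsq(X, y_b, rcond=None)[0]
    preds.append(x0 @ beta_b + rng.choice(resid))   # parameter + noise uncertainty
lo, hi = np.percentile(preds, [2.5, 97.5])
print(lo, hi)
```

The same resampling logic carries over to an error-correcting ANN: bootstrap the training residuals, refit, and collect the corrected predictions to form the interval.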

  15. The importance of accurate muscle modelling for biomechanical analyses: a case study with a lizard skull

    Science.gov (United States)

    Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.

    2013-01-01

    Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944

  16. Analysing adverse events by time-to-event models: the CLEOPATRA study.

    Science.gov (United States)

    Proctor, Tanja; Schumacher, Martin

    2016-07-01

    When analysing primary and secondary endpoints in a clinical trial with patients suffering from a chronic disease, statistical models for time-to-event data are commonly used and accepted. This is in contrast to the analysis of data on adverse events where often only a table with observed frequencies and corresponding test statistics is reported. An example is the recently published CLEOPATRA study where a three-drug regimen is compared with a two-drug regimen in patients with HER2-positive first-line metastatic breast cancer. Here, as described earlier, primary and secondary endpoints (progression-free and overall survival) are analysed using time-to-event models, whereas adverse events are summarized in a simple frequency table, although the duration of study treatment differs substantially. In this paper, we demonstrate the application of time-to-event models to first serious adverse events using the data of the CLEOPATRA study. This will cover the broad range between a simple incidence rate approach over survival and competing risks models (with death as a competing event) to multi-state models. We illustrate all approaches by means of graphical displays highlighting the temporal dynamics and compare the obtained results. For the CLEOPATRA study, the resulting hazard ratios are all in the same order of magnitude. But the use of time-to-event models provides valuable and additional information that would potentially be overlooked by only presenting incidence proportions. These models adequately address the temporal dynamics of serious adverse events as well as death of patients. Copyright © 2016 John Wiley & Sons, Ltd.
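
The paper's core point — that simple frequency tables mislead when treatment durations differ — can be illustrated with a toy incidence-rate calculation. The numbers below are invented, not CLEOPATRA data:

```python
def incidence_proportion(events, n_patients):
    """Naive frequency-table summary: share of patients with at least one event."""
    return events / n_patients

def incidence_rate(events, person_years):
    """Events per unit of follow-up time; accounts for unequal exposure."""
    return events / person_years

# Hypothetical arms: identical event counts, but the experimental arm
# was on treatment roughly twice as long.
control = {"events": 30, "patients": 100, "person_years": 80.0}
treated = {"events": 30, "patients": 100, "person_years": 160.0}

prop_ratio = (incidence_proportion(treated["events"], treated["patients"])
              / incidence_proportion(control["events"], control["patients"]))
rate_ratio = (incidence_rate(treated["events"], treated["person_years"])
              / incidence_rate(control["events"], control["person_years"]))
```

The proportion ratio suggests no difference, while the rate ratio — which respects the differing exposure time — suggests a halved hazard.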

  17. Models and analyses for inertial-confinement fusion-reactor studies

    Energy Technology Data Exchange (ETDEWEB)

    Bohachevsky, I.O.

    1981-05-01

    This report describes models and analyses devised at Los Alamos National Laboratory to determine the technical characteristics of different inertial confinement fusion (ICF) reactor elements required for component integration into a functional unit. We emphasize the generic properties of the different elements rather than specific designs. The topics discussed are general ICF reactor design considerations; reactor cavity phenomena, including the restoration of interpulse ambient conditions; first-wall temperature increases and material losses; reactor neutronics and hydrodynamic blanket response to neutron energy deposition; and analyses of loads and stresses in the reactor vessel walls, including remarks about the generation and propagation of very short wavelength stress waves. A discussion of analytic approaches useful in integrations and optimizations of ICF reactor systems concludes the report.

  18. Dynamics and spatial structure of ENSO from re-analyses versus CMIP5 models

    Science.gov (United States)

    Serykh, Ilya; Sonechkin, Dmitry

    2016-04-01

    Building on the mathematical idea of the so-called strange nonchaotic attractor (SNA) in quasi-periodically forced dynamical systems, the currently available re-analysis data are considered. It is found that the El Niño - Southern Oscillation (ENSO) is driven not only by the seasonal heating, but also by three more external periodicities (incommensurate with the annual period) associated with the ~18.6-year lunar-solar nutation of the Earth's rotation axis, the ~11-year sunspot activity cycle and the ~14-month Chandler wobble in the Earth's pole motion. Because these periods are incommensurate, the four forcings act on the system at mutually unsynchronized moments in time. As a result, the ENSO time series look very complex (strange, in mathematical terms) but nonchaotic. The power spectra of ENSO indices reveal numerous peaks located at periods that are multiples of the above periodicities as well as at their sub- and super-harmonics. In spite of this complexity, a mutual order seems to be inherent to the ENSO time series and their spectra. This order reveals itself in a scaling of the power-spectrum peaks and of the respective rhythms in the ENSO dynamics, which resemble the power spectrum and dynamics of an SNA. This implies that, in principle, ENSO predictability is not limited; in practice, it opens a possibility to forecast ENSO several years ahead. Global spatial structures of anomalies during El Niño and power spectra of ENSO indices from re-analyses are compared with the respective output quantities of the CMIP5 climate models (the Historical experiment). It is found that the models reproduce global spatial structures of the near-surface temperature and sea-level pressure anomalies during El Niño very similar to these fields in the re-analyses considered. But the power spectra of the ENSO indices from the CMIP5 models show no peaks at the same periods as the re-analysis power spectra. We suppose that it is possible to improve modeled
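
The kind of spectral-peak detection described here can be sketched with a naive discrete Fourier transform on synthetic data. This is a generic illustration, not the authors' analysis; the two-frequency series is invented:

```python
import math

def power_spectrum(x):
    """Naive O(n^2) DFT power spectrum; adequate for short index series."""
    n = len(x)
    spec = []
    for k in range(n // 2):
        re = sum(x[t] * math.cos(2.0 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2.0 * math.pi * k * t / n) for t in range(n))
        spec.append((re * re + im * im) / n)
    return spec

# Synthetic "index": a strong cycle plus a weaker one at another frequency,
# mimicking a dominant forcing and a secondary periodicity.
n = 240
series = [math.sin(2.0 * math.pi * 20.0 * t / n)
          + 0.5 * math.sin(2.0 * math.pi * 13.0 * t / n) for t in range(n)]
spec = power_spectrum(series)
peak = max(range(1, len(spec)), key=lambda k: spec[k])
```

Both injected frequencies appear as spectrum peaks, with power proportional to squared amplitude.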

  19. Design evaluation and optimisation in crossover pharmacokinetic studies analysed by nonlinear mixed effects models.

    Science.gov (United States)

    Nguyen, Thu Thuy; Bazzoli, Caroline; Mentré, France

    2012-05-20

    Bioequivalence or interaction trials are commonly run in a crossover design and can be analysed by nonlinear mixed effects models as an alternative to the noncompartmental approach. We propose an extension of the population Fisher information matrix in nonlinear mixed effects models to design crossover pharmacokinetic trials, using a linearisation of the model around the random effect expectation, including within-subject variability and discrete covariates fixed or changing between periods. We use the expected standard errors of the treatment effect to compute the power of the Wald test of comparison or equivalence and the number of subjects needed for a given power. We perform various simulations mimicking crossover two-period trials to show the relevance of these developments. We then apply these developments to design a crossover pharmacokinetic study of amoxicillin in piglets and implement them in the new version 3.2 of the R function PFIM.
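
The power computation described here follows from the normal approximation of the Wald statistic: if an effect estimate has standard error se, power is approximately P(|Z| > z_{1-α/2}) with Z ~ N(β/se, 1). The sketch below is a hypothetical illustration of that arithmetic, not the PFIM implementation:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_ppf(p):
    """Inverse normal CDF by bisection (adequate for this sketch)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def wald_power(beta, se, alpha=0.05):
    """Approximate power of the two-sided Wald test of H0: beta = 0."""
    z = norm_ppf(1.0 - alpha / 2.0)
    return (1.0 - norm_cdf(z - beta / se)) + norm_cdf(-z - beta / se)

def n_for_power(beta, se_unit, target=0.9):
    """Smallest n reaching the target power if SE scales as se_unit/sqrt(n)."""
    n = 1
    while wald_power(beta, se_unit / math.sqrt(n)) < target:
        n += 1
    return n
```

At beta = 0 the "power" reduces to the type-I error rate, a quick sanity check on the formula.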

  20. An age-dependent model to analyse the evolutionary stability of bacterial quorum sensing.

    Science.gov (United States)

    Mund, A; Kuttler, C; Pérez-Velázquez, J; Hense, B A

    2016-09-21

    Bacterial communication is enabled through the collective release and sensing of signalling molecules in a process called quorum sensing. Cooperative processes can easily be destabilized by the appearance of cheaters, who contribute little or nothing at all to the production of common goods. This especially applies for planktonic cultures. In this study, we analyse the dynamics of bacterial quorum sensing and its evolutionary stability under two levels of cooperation, namely signal and enzyme production. The model accounts for mutation rates and switches between planktonic and biofilm state of growth. We present a mathematical approach to model these dynamics using age-dependent colony models. We explore the conditions under which cooperation is stable and find that spatial structuring can lead to long-term scenarios such as coexistence or bistability, depending on the non-linear combination of different parameters like death rates and production costs. Copyright © 2016 Elsevier Ltd. All rights reserved.
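
A minimal sketch of the signalling half of such a model: signal concentration s obeys ds/dt = a·N − d·s, so only a sufficiently dense population (a quorum) pushes s past a response threshold. All parameter names and values below are invented for illustration; the paper's age-dependent colony model is far richer:

```python
def simulate_signal(n_cells, production, decay, dt=0.01, t_end=50.0):
    """Forward-Euler integration of ds/dt = production * n_cells - decay * s."""
    s, t = 0.0, 0.0
    while t < t_end:
        s += dt * (production * n_cells - decay * s)
        t += dt
    return s

# High density: signal approaches the steady state production * n_cells / decay = 20.
signal_high = simulate_signal(n_cells=1000, production=0.002, decay=0.1)
# Low density: the same dynamics saturate far below any plausible threshold.
signal_low = simulate_signal(n_cells=10, production=0.002, decay=0.1)
```

Cheaters that skip signal (or enzyme) production alter the effective per-cell production term, which is where the evolutionary-stability analysis enters.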

  1. Assessing Cognitive Processes with Diffusion Model Analyses: A Tutorial based on fast-dm-30

    Directory of Open Access Journals (Sweden)

    Andreas eVoss

    2015-03-01

    Diffusion models can be used to infer cognitive processes involved in fast binary decision tasks. The model assumes that information is accumulated continuously until one of two thresholds is hit. In the analysis, response time distributions from numerous trials of the decision task are used to estimate a set of parameters mapping distinct cognitive processes. In recent years, diffusion model analyses have become increasingly popular in different fields of psychology. This increased popularity is based on the recent development of several software solutions for parameter estimation. Although these programs make the application of the model relatively easy, there is a shortage of knowledge about the different steps of a state-of-the-art diffusion model study. In this paper, we give a concise tutorial on diffusion modelling, and we present fast-dm-30, a thoroughly revised and extended version of the fast-dm software (Voss & Voss, 2007) for diffusion model data analysis. The most important improvement in the new fast-dm version is the possibility to choose between different optimization criteria (i.e., Maximum Likelihood, Chi-Square, and Kolmogorov-Smirnov), which differ in their applicability to different data sets.
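
The accumulation process the model assumes can be simulated directly as an Euler-Maruyama random walk between two thresholds. This is an illustrative sketch, not fast-dm code; the parameter values are assumptions:

```python
import random

def simulate_trial(drift, threshold=1.0, start=0.5, noise=1.0, dt=0.001,
                   rng=random):
    """Walk between 0 and `threshold`; returns (choice, response_time)."""
    x, t = start, 0.0
    while 0.0 < x < threshold:
        # Deterministic drift plus Gaussian diffusion noise scaled by sqrt(dt).
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x >= threshold else 0), t

rng = random.Random(42)
trials = [simulate_trial(drift=2.0, rng=rng) for _ in range(200)]
upper_rate = sum(choice for choice, _ in trials) / len(trials)
```

With a positive drift, most trials terminate at the upper threshold; fitting a diffusion model inverts this mapping, recovering drift, threshold, and starting point from observed choices and response-time distributions.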

  2. Models of population-based analyses for data collected from large extended families.

    Science.gov (United States)

    Wang, Wenyu; Lee, Elisa T; Howard, Barbara V; Fabsitz, Richard R; Devereux, Richard B; MacCluer, Jean W; Laston, Sandra; Comuzzie, Anthony G; Shara, Nawar M; Welty, Thomas K

    2010-12-01

    Large studies of extended families usually collect valuable phenotypic data that may have scientific value for purposes other than testing genetic hypotheses if the families were not selected in a biased manner. These purposes include assessing population-based associations of diseases with risk factors/covariates and estimating population characteristics such as disease prevalence and incidence. Relatedness among participants, however, violates the traditional assumption of independent observations in these classic analyses. The commonly used adjustment method for relatedness in population-based analyses is to use marginal models, in which clusters (families) are assumed to be independent (unrelated) with a simple and identical covariance (family) structure, such as the independent, exchangeable and unstructured covariance structures. However, using these simple covariance structures may not be optimally appropriate for outcomes collected from large extended families, and may under- or over-estimate the variances of estimators and thus lead to uncertainty in inferences. Moreover, the assumption that families are unrelated, with an identical family structure, in a marginal model may not be satisfied in family studies with large extended families. The aim of this paper is to propose models that incorporate marginal-model approaches with a covariance structure for assessing population-based associations of diseases with their risk factors/covariates and for estimating population characteristics for epidemiological studies, while adjusting for the complicated relatedness among outcomes (continuous/categorical, normally/non-normally distributed) collected from large extended families. We also discuss theoretical issues of the proposed models and show that the proposed models and covariance structure are appropriate for and capable of achieving the aim.

  3. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    Science.gov (United States)

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
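
The conversion model described here is, at its core, a simple linear regression between paired ΣPCB sums. A closed-form sketch with invented paired data (not the study's measurements):

```python
def fit_line(x, y):
    """Closed-form simple linear regression: y ≈ slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Hypothetical paired samples: 119-congener sums vs full 209-congener sums.
sum119 = [10.0, 25.0, 40.0, 55.0, 80.0]
sum209 = [11.1, 26.8, 43.2, 59.0, 86.1]
slope, intercept = fit_line(sum119, sum209)
predicted = [slope * v + intercept for v in sum119]
```

A slope slightly above 1 would indicate, as in the paper's example, that the smaller congener set captures most but not all of the total PCB concentration.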

  4. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    Directory of Open Access Journals (Sweden)

    Wilson David P

    2008-02-01

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated.

  5. Sampling and sensitivity analyses tools (SaSAT) for computational modelling.

    Science.gov (United States)

    Hoare, Alexander; Regan, David G; Wilson, David P

    2008-02-27

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab, a numerical mathematical software package, and utilises algorithms contained in the Matlab Statistics Toolbox. However, Matlab is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated.
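
One of the sampling methodologies such toolboxes typically offer, Latin hypercube sampling, stratifies each parameter's range so that every stratum is sampled exactly once. The sketch below is a generic implementation on the unit hypercube, not SaSAT's Matlab code:

```python
import random

def latin_hypercube(n_samples, n_params, seed=0):
    """One sample per stratum per parameter, on the unit hypercube."""
    rng = random.Random(seed)
    design = []
    for _ in range(n_params):
        # Jitter within each of n_samples equal-width strata, then shuffle
        # so strata are paired randomly across parameters.
        column = [(i + rng.random()) / n_samples for i in range(n_samples)]
        rng.shuffle(column)
        design.append(column)
    # Transpose: one row per sample point.
    return [list(point) for point in zip(*design)]

samples = latin_hypercube(n_samples=10, n_params=3)
```

Each column of the design covers all ten strata, which is what gives Latin hypercube sampling its efficiency over plain random sampling.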

  6. Analysing animal social network dynamics: the potential of stochastic actor-oriented models.

    Science.gov (United States)

    Fisher, David N; Ilany, Amiyaal; Silk, Matthew J; Tregenza, Tom

    2017-03-01

    Animals are embedded in dynamically changing networks of relationships with conspecifics. These dynamic networks are fundamental aspects of their environment, creating selection on behaviours and other traits. However, most social network-based approaches in ecology are constrained to considering networks as static, despite several calls for such analyses to become more dynamic. There are a number of statistical analyses developed in the social sciences that are increasingly being applied to animal networks, of which stochastic actor-oriented models (SAOMs) are a principal example. SAOMs are a class of individual-based models designed to model transitions in networks between discrete time points, as influenced by network structure and covariates. It is not clear, however, how useful such techniques are to ecologists, and whether they are suited to animal social networks. We review the recent applications of SAOMs to animal networks, outlining findings and assessing the strengths and weaknesses of SAOMs when applied to animal rather than human networks. We go on to highlight the types of ecological and evolutionary processes that SAOMs can be used to study. SAOMs can include effects and covariates for individuals, dyads and populations, which can be constant or variable. This allows for the examination of a wide range of questions of interest to ecologists. However, high-resolution data are required, meaning SAOMs will not be useable in all study systems. It remains unclear how robust SAOMs are to missing data and uncertainty around social relationships. Ultimately, we encourage the careful application of SAOMs in appropriate systems, with dynamic network analyses likely to prove highly informative. Researchers can then extend the basic method to tackle a range of existing questions in ecology and explore novel lines of questioning. © 2016 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.

  7. Power analyses for negative binomial models with application to multiple sclerosis clinical trials.

    Science.gov (United States)

    Rettiganti, Mallik; Nagaraja, H N

    2012-01-01

    We use negative binomial (NB) models for the magnetic resonance imaging (MRI)-based brain lesion count data from parallel group (PG) and baseline versus treatment (BVT) trials for relapsing remitting multiple sclerosis (RRMS) patients, and describe the associated likelihood ratio (LR), score, and Wald tests. We perform power analyses and sample size estimation using the simulated percentiles of the exact distribution of the test statistics for the PG and BVT trials. When compared to the corresponding nonparametric test, the LR test results in 30-45% reduction in sample sizes for the PG trials and 25-60% reduction for the BVT trials.
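
Simulation-based power estimation for NB counts can be sketched by drawing gamma-mixed Poisson counts per arm and applying a large-sample two-sample z-test. This is a generic illustration, not the authors' LR/score/Wald machinery, and all trial parameters are invented:

```python
import math
import random
import statistics

def poisson(lam, rng):
    """Knuth's Poisson sampler (fine for the small means used here)."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def neg_binomial(mean, shape, rng):
    """Negative binomial draw as a gamma-mixed Poisson."""
    return poisson(rng.gammavariate(shape, mean / shape), rng)

def simulated_power(mean_control, rate_ratio, shape, n_per_arm,
                    n_sim=400, seed=7):
    """Monte Carlo power of a two-sample z-test on simulated lesion counts."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(n_sim):
        a = [neg_binomial(mean_control, shape, rng) for _ in range(n_per_arm)]
        b = [neg_binomial(mean_control * rate_ratio, shape, rng)
             for _ in range(n_per_arm)]
        se = math.sqrt(statistics.variance(a) / n_per_arm
                       + statistics.variance(b) / n_per_arm)
        z = (statistics.mean(a) - statistics.mean(b)) / se
        if abs(z) > 1.96:
            rejections += 1
    return rejections / n_sim

power = simulated_power(mean_control=5.0, rate_ratio=0.5, shape=1.0, n_per_arm=50)
size = simulated_power(mean_control=5.0, rate_ratio=1.0, shape=1.0, n_per_arm=50)
```

Repeating this over a grid of sample sizes yields the kind of power curves the paper derives (more efficiently) from exact distributions of the LR, score, and Wald statistics.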

  8. Analysing and modelling battery drain of 3G terminals due to port scan attacks

    OpenAIRE

    Pascual Trigos, Mar

    2010-01-01

    This thesis identifies a threat to 3G mobile phones: the eventual draining of a terminal's battery by undesired data traffic. The objectives of the thesis are to analyse the battery drain of 3G mobile phones caused by uplink and downlink traffic and to model that drain. First, we describe how a mobile phone can be made to increase its power consumption, and therefore shorten its battery lifetime. Concretely, we focus on data traffic. This traffic ca...

  9. Analysing the Effects of Flood-Resilience Technologies in Urban Areas Using a Synthetic Model Approach

    Directory of Open Access Journals (Sweden)

    Reinhard Schinke

    2016-11-01

    Flood protection systems with their spatial effects play an important role in managing and reducing flood risks. The planning and decision process as well as the technical implementation are well organized and often exercised. However, building-related flood-resilience technologies (FReT) are often neglected due to the absence of suitable approaches to analyse and to integrate such measures in large-scale flood damage mitigation concepts. Against this backdrop, a synthetic model approach was extended by a few complementary methodical steps in order to calculate flood damage to buildings considering the effects of building-related FReT and to analyse the area-related reduction of flood risks by geo-information systems (GIS) with high spatial resolution. It includes a civil-engineering-based investigation of characteristic building properties and constructions, including the selection and combination of appropriate FReT, as a basis for deriving synthetic depth-damage functions. Depending on the actual exposure and the implementation level of FReT, the functions can be used and allocated in spatial damage and risk analyses. The application of the extended approach is shown in a case study in Valencia (Spain). In this way, the overall research findings improve the integration of FReT in flood risk management. They also provide useful information for advising individuals at risk, supporting the selection and implementation of FReT.

  10. Modeling of high homologous temperature deformation behavior for stress and life-time analyses

    Energy Technology Data Exchange (ETDEWEB)

    Krempl, E. [Rensselaer Polytechnic Institute, Troy, NY (United States)

    1997-12-31

    Stress and lifetime analyses need realistic and accurate constitutive models for the inelastic deformation behavior of engineering alloys at low and high temperatures. Conventional creep and plasticity models have fundamental difficulties in reproducing high homologous temperature behavior. To improve the modeling capabilities, "unified" state variable theories were conceived. They treat all inelastic deformation as rate-dependent and do not have separate repositories for creep and plasticity. The viscoplasticity theory based on overstress (VBO), one of the unified theories, is introduced and its properties are delineated. At high homologous temperatures, where secondary and tertiary creep are observed, modeling is primarily accomplished by a static recovery term and a softening isotropic stress. At low temperatures, creep is merely a manifestation of rate dependence. The primary creep modeled at low homologous temperature is due to the rate dependence of the flow law. The model is unaltered in the transition from low to high temperature, except that the softening of the isotropic stress and the influence of the static recovery term increase with temperature.

  11. Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.

    Science.gov (United States)

    Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A

    2013-02-01

    The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on
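
The paper's threshold idea can be reproduced on a toy two-action problem: allocate a budget between two actions with normally distributed effects and maximize P(outcome ≥ threshold). The grid search below is an illustrative stand-in for the paper's analytical solution, with invented parameters:

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def success_probability(x1, budget, threshold, mu, sigma):
    """P(outcome >= threshold) when outcome = e1*x1 + e2*x2, ei ~ Normal."""
    x2 = budget - x1
    mean = mu[0] * x1 + mu[1] * x2
    sd = math.sqrt((sigma[0] * x1) ** 2 + (sigma[1] * x2) ** 2)
    return 1.0 - norm_cdf((threshold - mean) / sd)

def best_allocation(budget, threshold, mu, sigma, steps=200):
    """Grid search over the share given to action 1 (interior points only)."""
    grid = [budget * i / steps for i in range(1, steps)]
    return max(grid,
               key=lambda x1: success_probability(x1, budget, threshold, mu, sigma))

# Two identical actions: only the aspiration level changes.
mu, sigma = (1.0, 1.0), (0.5, 0.5)
low_aspiration = best_allocation(budget=10.0, threshold=5.0, mu=mu, sigma=sigma)
high_aspiration = best_allocation(budget=10.0, threshold=12.0, mu=mu, sigma=sigma)
```

The sketch reproduces the qualitative finding: a modest threshold favours diversifying (splitting the budget, which minimises variance), while a threshold above the expected outcome favours gambling everything on one action (maximising variance).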

  12. An HLM study of the effects of transformational leadership on nurses' professional quality of life and organizational commitment

    Institute of Scientific and Technical Information of China (English)

    郑杏; 杨敏; 高伟

    2013-01-01

    Objective: To explore, using hierarchical linear modeling (HLM), the cross-level impact of head nurses' transformational leadership on nurses' professional quality of life and organizational commitment. Methods: A convenience cluster sample of 446 clinical nurses from 64 departments was surveyed with a demographic questionnaire, the professional quality of life scale, the transformational leadership questionnaire, and the organizational commitment scale. Results: Transformational leadership positively predicted nurses' compassion satisfaction, secondary traumatic stress, and organizational commitment, and negatively predicted burnout. It also significantly moderated (weakened) the relationship between compassion satisfaction and organizational commitment, and moderated (strengthened) the relationship between secondary traumatic stress and organizational commitment. Conclusions: Transformational leadership offers nursing managers an effective way to lead their teams and improve nurses' professional quality of life, and thereby to raise nurses' organizational commitment.
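
A standard first step in any such HLM analysis is quantifying how much outcome variance lies between clusters (here, departments) via the intraclass correlation. The sketch below uses the one-way ANOVA estimator on invented scores, not the study's data:

```python
import statistics

def icc_oneway(groups):
    """ICC(1) from one-way ANOVA mean squares (balanced groups assumed)."""
    k = len(groups)
    n = len(groups[0])
    grand = statistics.mean(v for g in groups for v in g)
    ms_between = n * sum((statistics.mean(g) - grand) ** 2 for g in groups) / (k - 1)
    ms_within = (sum((v - statistics.mean(g)) ** 2 for g in groups for v in g)
                 / (k * (n - 1)))
    return (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)

# Hypothetical commitment scores for nurses in three departments:
# strongly clustered vs. essentially unclustered.
clustered = [[1, 2, 1, 2], [5, 6, 5, 6], [9, 10, 9, 10]]
mixed = [[1, 9, 5, 3], [2, 8, 4, 6], [7, 1, 5, 5]]
icc_high = icc_oneway(clustered)
icc_low = icc_oneway(mixed)
```

A substantial ICC indicates that nurses within a department resemble each other, which is precisely what justifies a multilevel model over ordinary regression.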

  13. Reading Ability Development from Kindergarten to Junior Secondary: Latent Transition Analyses with Growth Mixture Modeling

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2016-10-01

    The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study, Kindergarten Class of 1998-99 data (Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006), from a dynamic systems perspective. To depict children's growth patterns, we extended the measurement part of latent transition analysis to the growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group, with few cross-level changes in their classes. After adding the environmental factors as predictors, analyses showed that children receiving higher teachers' ratings, with higher socioeconomic status, and of above-average poverty status would have a higher probability of transitioning into the higher ability group.
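
The growth-mixture idea rests on fitting a finite mixture to ability scores so that latent groups emerge from the data. A minimal univariate two-component EM sketch on invented, well-separated scores (real growth mixture models are multivariate and fitted with specialised software):

```python
import math

def norm_pdf(x, mu, sd):
    return math.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def em_two_gaussians(data, iters=100):
    """EM for a two-component univariate Gaussian mixture."""
    mu = [min(data), max(data)]
    sd = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point.
        r = []
        for x in data:
            p0 = w[0] * norm_pdf(x, mu[0], sd[0])
            p1 = w[1] * norm_pdf(x, mu[1], sd[1])
            r.append(p1 / (p0 + p1))
        # M-step: responsibility-weighted updates (sd floored for stability).
        n1 = sum(r)
        n0 = len(data) - n1
        mu[0] = sum((1 - ri) * x for ri, x in zip(r, data)) / n0
        mu[1] = sum(ri * x for ri, x in zip(r, data)) / n1
        sd[0] = max(0.1, math.sqrt(
            sum((1 - ri) * (x - mu[0]) ** 2 for ri, x in zip(r, data)) / n0))
        sd[1] = max(0.1, math.sqrt(
            sum(ri * (x - mu[1]) ** 2 for ri, x in zip(r, data)) / n1))
        w = [n0 / len(data), n1 / len(data)]
    return mu, sd, w

# Invented "ability" scores from two latent groups.
data = [1.0, 1.2, 0.8, 1.1, 0.9, 1.3, 0.7,
        5.0, 5.2, 4.8, 5.1, 4.9, 5.3, 4.7]
mu, sd, w = em_two_gaussians(data)
```

The fitted responsibilities assign each child a probability of class membership; tracking how those memberships change across waves is the latent-transition part of the paper's model.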

  14. Structural identifiability analyses of candidate models for in vitro Pitavastatin hepatic uptake.

    Science.gov (United States)

    Grandjean, Thomas R B; Chappell, Michael J; Yates, James W T; Evans, Neil D

    2014-05-01

    In this paper a review of the application of four different techniques (a version of the similarity transformation approach for autonomous uncontrolled systems, a non-differential input/output observable normal form approach, the characteristic set differential algebra and a recent algebraic input/output relationship approach) to determine the structural identifiability of certain in vitro nonlinear pharmacokinetic models is provided. The Organic Anion Transporting Polypeptide (OATP) substrate, Pitavastatin, is used as a probe on freshly isolated animal and human hepatocytes. Candidate pharmacokinetic non-linear compartmental models have been derived to characterise the uptake process of Pitavastatin. As a prerequisite to parameter estimation, structural identifiability analyses are performed to establish that all unknown parameters can be identified from the experimental observations available.

  15. A conceptual model for analysing informal learning in online social networks for health professionals.

    Science.gov (United States)

    Li, Xin; Gray, Kathleen; Chang, Shanton; Elliott, Kristine; Barnett, Stephen

    2014-01-01

    Online social networking (OSN) provides a new way for health professionals to communicate, collaborate and share ideas with each other for informal learning on a massive scale. It has important implications for ongoing efforts to support Continuing Professional Development (CPD) in the health professions. However, the challenge of analysing the data generated in OSNs makes it difficult to understand whether and how they are useful for CPD. This paper presents a conceptual model for using mixed methods to study data from OSNs to examine the efficacy of OSN in supporting informal learning of health professionals. It is expected that using this model with the dataset generated in OSNs for informal learning will produce new and important insights into how well this innovation in CPD is serving professionals and the healthcare system.

  16. Rockslide and Impulse Wave Modelling in the Vajont Reservoir by DEM-CFD Analyses

    Science.gov (United States)

    Zhao, T.; Utili, S.; Crosta, G. B.

    2016-06-01

    This paper investigates the generation of hydrodynamic water waves due to rockslides plunging into a water reservoir. Quasi-3D DEM analyses in plane strain by a coupled DEM-CFD code are adopted to simulate the rockslide from its onset to the impact with the still water and the subsequent generation of the wave. The employed numerical tools and upscaling of hydraulic properties allow predicting a physical response in broad agreement with the observations, notwithstanding the assumptions and characteristics of the adopted methods. The results obtained by the DEM-CFD coupled approach are compared to those published in the literature and those presented by Crosta et al. (Landslide spreading, impulse waves and modelling of the Vajont rockslide. Rock mechanics, 2014) in a companion paper obtained through an ALE-FEM method. Analyses performed along two cross sections are representative of the limit conditions of the eastern and western slope sectors. The maximum average rockslide velocity and the water-wave velocity reach ca. 22 and 20 m/s, respectively. The maximum computed run-up amounts to ca. 120 and 170 m for the eastern and western lobe cross sections, respectively. These values are reasonably similar to those recorded during the event (i.e. ca. 130 and 190 m, respectively). Therefore, the overall study lays out a possible DEM-CFD framework for the modelling of the generation of the hydrodynamic wave due to the impact of a rapidly moving rockslide or rock-debris avalanche.

  17. Economic modeling of electricity production from hot dry rock geothermal reservoirs: methodology and analyses. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Cummings, R.G.; Morris, G.E.

    1979-09-01

    An analytical methodology is developed for assessing alternative modes of generating electricity from hot dry rock (HDR) geothermal energy sources. The methodology is used in sensitivity analyses to explore relative system economics. The methodology used a computerized, intertemporal optimization model to determine the profit-maximizing design and management of a unified HDR electric power plant with a given set of geologic, engineering, and financial conditions. By iterating this model on price, a levelized busbar cost of electricity is established. By varying the conditions of development, the sensitivity of both optimal management and busbar cost to these conditions are explored. A plausible set of reference case parameters is established at the outset of the sensitivity analyses. This reference case links a multiple-fracture reservoir system to an organic, binary-fluid conversion cycle. A levelized busbar cost of 43.2 mills/kWh ($1978) was determined for the reference case, which had an assumed geothermal gradient of 40 °C/km, a design well-flow rate of 75 kg/s, an effective heat transfer area per pair of wells of 1.7 × 10⁶ m², and a plant design temperature of 160 °C. Variations in the presumed geothermal gradient, size of the reservoir, drilling costs, real rates of return, and other system parameters yield minimum busbar costs between -40% and +76% of the reference case busbar cost.
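
    The levelized busbar cost described above is, in essence, the electricity price at which discounted lifetime costs equal discounted lifetime generation. A minimal sketch of that levelizing identity (an illustration only, not the report's intertemporal optimization model; the cost and output streams are invented):

```python
def levelized_busbar_cost(costs, energy_kwh, discount_rate):
    """Levelized cost of electricity: discounted lifetime costs divided by
    discounted lifetime generation (mills/kWh if costs are in mills).

    costs      -- yearly cost stream, year 0 first
    energy_kwh -- yearly electricity output, same length as costs
    """
    npv_cost = sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs))
    npv_energy = sum(e / (1 + discount_rate) ** t for t, e in enumerate(energy_kwh))
    return npv_cost / npv_energy
```

    With constant yearly costs and output, the levelized cost reduces to their simple ratio, which is a quick sanity check on any implementation.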

  18. Establishing a Numerical Modeling Framework for Hydrologic Engineering Analyses of Extreme Storm Events

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby

    2017-08-01

    In this study a numerical modeling framework for simulating extreme storm events was established using the Weather Research and Forecasting (WRF) model. Such a framework is necessary for the derivation of engineering parameters such as probable maximum precipitation that are the cornerstone of large water management infrastructure design. Here this framework was built based on a heavy storm that occurred in Nashville (USA) in 2010, and verified using two other extreme storms. To achieve the optimal setup, several combinations of model resolutions, initial/boundary conditions (IC/BC), cloud microphysics and cumulus parameterization schemes were evaluated using multiple metrics of precipitation characteristics. The evaluation suggests that WRF is most sensitive to the choice of IC/BC. Simulation generally benefits from finer resolutions down to 5 km. At the 15-km level, NCEP2 IC/BC produces better results, while NAM IC/BC performs best at the 5-km level. The recommended model configuration from this study is: NAM or NCEP2 IC/BC (depending on data availability), 15-km or 15-km/5-km nested grids, Morrison microphysics and the Kain-Fritsch cumulus scheme. Validation of the optimal framework suggests that these options are good starting choices for modeling extreme events similar to the test cases. This optimal framework is proposed in response to emerging engineering demands of extreme storm event forecasting and analyses for design, operations and risk assessment of large water infrastructures.
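
    A WRF `namelist.input` fragment matching the recommended configuration might look as follows. The scheme indices are standard WRF option codes (`mp_physics = 10` for Morrison two-moment microphysics, `cu_physics = 1` for Kain-Fritsch), but the time step and domain layout are illustrative assumptions, not values reported in the study:

```fortran
&domains
 time_step         = 60,
 max_dom           = 2,
 dx                = 15000, 5000,   ! 15-km parent grid, 5-km nest
 parent_grid_ratio = 1, 3,
/

&physics
 mp_physics        = 10, 10,        ! Morrison 2-moment microphysics
 cu_physics        = 1, 1,          ! Kain-Fritsch cumulus scheme
/
```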

  19. Estimating required information size by quantifying diversity in random-effects model meta-analyses

    DEFF Research Database (Denmark)

    Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper;

    2009-01-01

    an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. RESULTS: We devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between… … and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D2 ≥ I2, for all meta-analyses. CONCLUSION: We conclude that D2 seems a better alternative than I2 to consider model variation in any random…
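
    The D2 measure defined above, the relative variance reduction when the pooled estimate is switched from a random-effects to a fixed-effect model, can be sketched as follows. This is an illustration only, not the authors' implementation; in particular, the DerSimonian-Laird estimator of the between-trial variance tau² is my assumption:

```python
def pooled_variance(variances, tau2=0.0):
    # Variance of the inverse-variance-weighted pooled estimate;
    # tau2 > 0 gives the random-effects weights.
    weights = [1.0 / (v + tau2) for v in variances]
    return 1.0 / sum(weights)

def diversity_d2(effects, variances):
    # DerSimonian-Laird estimate of the between-trial variance tau^2.
    w = [1.0 / v for v in variances]
    k = len(effects)
    mu_fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    q = sum(wi * (e - mu_fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    v_fixed = pooled_variance(variances)          # fixed-effect model
    v_random = pooled_variance(variances, tau2)   # random-effects model
    d2 = (v_random - v_fixed) / v_random          # relative variance reduction
    i2 = max(0.0, (q - (k - 1)) / q) if q > 0 else 0.0
    return d2, i2, tau2
```

    The paper's inequality D2 ≥ I2 can be checked numerically: with equal trial variances the two coincide, and with unequal variances D2 exceeds I2.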

  20. Development of steady-state model for MSPT and detailed analyses of receiver

    Science.gov (United States)

    Yuasa, Minoru; Sonoda, Masanori; Hino, Koichi

    2016-05-01

    The molten salt parabolic trough system (MSPT) uses molten salt as the heat transfer fluid (HTF) instead of synthetic oil. The MSPT demonstration plant was constructed by Chiyoda Corporation and Archimede Solar Energy in Italy in 2013. Chiyoda Corporation developed a steady-state model for predicting the theoretical behavior of the demonstration plant. The model was designed to calculate the concentrated solar power and heat loss using ray tracing of incident solar light and finite element modeling of the thermal energy transferred into the medium. This report describes the verification of the model using test data from the demonstration plant, detailed analyses of the relation between flow rate and temperature difference on the metal tube of the receiver, and the effect of defocus angle on concentrated power rate, for solar collector assembly (SCA) development. The model is accurate to an extent of 2.0% as systematic error and 4.2% as random error. The relationships between flow rate and temperature difference on the metal tube and the effect of defocus angle on concentrated power rate are shown.

  1. A STRONGLY COUPLED REACTOR CORE ISOLATION COOLING SYSTEM MODEL FOR EXTENDED STATION BLACK-OUT ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua [Idaho National Laboratory; Zhang, Hongbin [Idaho National Laboratory; Zou, Ling [Idaho National Laboratory; Martineau, Richard Charles [Idaho National Laboratory

    2015-03-01

    The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is from the battery to maintain the logic circuits to control the opening and/or closure of valves in the RCIC systems in order to control the RPV water level by shutting down the RCIC pump to avoid overfilling the RPV and flooding the steam line to the RCIC turbine. It is generally assumed in almost all existing station black-out (SBO) accident analyses that loss of DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, where it is assumed that the turbine would then be disabled. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models for RCIC system components are needed to understand the extended SBO for BWRs. As part of the effort to develop the next-generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike traditional SBO simulations, where mass flow rates are typically given in the input file through time-dependent functions, the real mass flow rates through the turbine and the pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operation curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components for the primary system of a BWR, as well as the safety

  2. Evaluation of hydrological models for scenario analyses: signal-to-noise-ratio between scenario effects and model uncertainty

    Directory of Open Access Journals (Sweden)

    H. Bormann

    2005-01-01

    Full Text Available Many model applications suffer from the fact that, although it is well known that model application implies different sources of uncertainty, there is no objective criterion to decide whether a model is suitable for a particular application or not. This paper introduces a comparative index between the uncertainty of a model and the change effects of scenario calculations, which enables the modeller to decide objectively whether a model is suitable for scenario analysis studies. The index is called the "signal-to-noise-ratio", and it is applied to an exemplary scenario study performed within the GLOWA-IMPETUS project in Benin. The conceptual UHP model was applied to the upper Ouémé basin. Although model calibration and validation were successful, uncertainties in model parameters and input data could be identified. Applying the "signal-to-noise-ratio" to regional-scale subcatchments of the upper Ouémé, comparing water availability indicators between uncertainty studies and scenario analyses, the UHP model turned out to be suitable for predicting long-term water balances under the present poor data availability and changing environmental conditions in subhumid West Africa.
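
    The "signal-to-noise-ratio" idea, comparing the size of a scenario's change effect (signal) against the spread the model's own uncertainty produces for the same indicator (noise), can be sketched as follows. The function and variable names are my own, not the paper's:

```python
def signal_to_noise_ratio(reference, scenario, uncertainty_band):
    """Ratio between a scenario's change effect (signal) and the model's
    uncertainty range (noise) for one water-balance indicator.

    reference        -- indicator value simulated for present conditions
    scenario         -- indicator value simulated under the scenario
    uncertainty_band -- (low, high) indicator values spanned by parameter
                        and input-data uncertainty of the calibrated model
    """
    signal = abs(scenario - reference)
    noise = uncertainty_band[1] - uncertainty_band[0]
    return signal / noise  # > 1: scenario effect exceeds model uncertainty
```

    A ratio above one indicates the scenario effect is distinguishable from model uncertainty, which is the suitability criterion the index is meant to provide.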

  3. A model intercomparison analysing the link between column ozone and geopotential height anomalies in January

    Directory of Open Access Journals (Sweden)

    P. Braesicke

    2008-05-01

    Full Text Available A statistical framework to evaluate the performance of chemistry-climate models with respect to the interaction between meteorology and column ozone during northern hemisphere mid-winter, in particular January, is used. Different statistical diagnostics from four chemistry-climate models (E39C, ME4C, UMUCAM, ULAQ) are compared with the ERA-40 re-analysis. First, we analyse vertical coherence in geopotential height anomalies as described by linear correlations between two different pressure levels (30 and 200 hPa) of the atmosphere. In addition, linear correlations between column ozone and geopotential height anomalies at 200 hPa are discussed to motivate a simple picture of the meteorological impacts on column ozone on interannual timescales. Secondly, we discuss characteristic spatial structures in geopotential height and column ozone anomalies as given by their first two empirical orthogonal functions. Finally, we describe the covariance patterns between reconstructed anomalies of geopotential height and column ozone. In general we find good agreement between the models with higher horizontal resolution (E39C, ME4C, UMUCAM) and ERA-40. The Pacific-North American (PNA) pattern emerges as a useful qualitative benchmark for the model performance. Models with higher horizontal resolution and a high upper boundary (ME4C and UMUCAM) show good agreement with the PNA tripole derived from ERA-40 data, including the column ozone modulation over the Pacific sector. The model with the lowest horizontal resolution (ULAQ) does not show a classic PNA pattern, and the model with the lowest upper boundary (E39C) does not capture the PNA-related column ozone variations over the Pacific sector. Those discrepancies have to be taken into account when providing confidence intervals for climate change integrations.

  4. PASMet: a web-based platform for prediction, modelling and analyses of metabolic systems.

    Science.gov (United States)

    Sriyudthsak, Kansuporn; Mejia, Ramon Francisco; Arita, Masanori; Hirai, Masami Yokota

    2016-07-01

    PASMet (Prediction, Analysis and Simulation of Metabolic networks) is a web-based platform for proposing and verifying mathematical models to understand the dynamics of metabolism. The advantages of PASMet include user-friendliness and accessibility, which enable biologists and biochemists to easily perform mathematical modelling. PASMet offers a series of user-functions to handle the time-series data of metabolite concentrations. The functions are organised into four steps: (i) Prediction of a probable metabolic pathway and its regulation; (ii) Construction of mathematical models; (iii) Simulation of metabolic behaviours; and (iv) Analysis of metabolic system characteristics. Each function contains various statistical and mathematical methods that can be used independently. Users who may not have enough knowledge of computing or programming can easily and quickly analyse their local data without software downloads, updates or installations. Users only need to upload their files in comma-separated values (CSV) format or enter their model equations directly into the website. Once the time-series data or mathematical equations are uploaded, PASMet automatically performs computation on server-side. Then, users can interactively view their results and directly download them to their local computers. PASMet is freely available with no login requirement at http://pasmet.riken.jp/ from major web browsers on Windows, Mac and Linux operating systems.

  5. Correlation of Klebsiella pneumoniae comparative genetic analyses with virulence profiles in a murine respiratory disease model.

    Directory of Open Access Journals (Sweden)

    Ramy A Fodah

    Full Text Available Klebsiella pneumoniae is a bacterial pathogen of worldwide importance and a significant contributor to multiple disease presentations associated with both nosocomial and community acquired disease. ATCC 43816 is a well-studied K. pneumoniae strain which is capable of causing an acute respiratory disease in surrogate animal models. In this study, we performed sequencing of the ATCC 43816 genome to support future efforts characterizing genetic elements required for disease. Furthermore, we performed comparative genetic analyses to the previously sequenced genomes from NTUH-K2044 and MGH 78578 to gain an understanding of the conservation of known virulence determinants amongst the three strains. We found that ATCC 43816 and NTUH-K2044 both possess the known virulence determinant for yersiniabactin, as well as a Type 4 secretion system (T4SS), CRISPR system, and an acetoin catabolism locus, all absent from MGH 78578. While both NTUH-K2044 and MGH 78578 are clinical isolates, little is known about the disease potential of these strains in cell culture and animal models. Thus, we also performed functional analyses in the murine macrophage cell lines RAW264.7 and J774A.1 and found that MGH 78578 (K52 serotype) was internalized at higher levels than ATCC 43816 (K2) and NTUH-K2044 (K1), consistent with previous characterization of the antiphagocytic properties of K1 and K2 serotype capsules. We also examined the three K. pneumoniae strains in a novel BALB/c respiratory disease model and found that ATCC 43816 and NTUH-K2044 are highly virulent (LD50 < 100 CFU), while MGH 78578 is relatively avirulent.

  6. Kinetic analyses and mathematical modeling of primary photochemical and photoelectrochemical processes in plant photosystems.

    Science.gov (United States)

    Vredenberg, Wim

    2011-02-01

    In this paper the model and simulation of primary photochemical and photo-electrochemical reactions in dark-adapted intact plant leaves is presented. A descriptive algorithm has been derived from analyses of variable chlorophyll a fluorescence and P700 oxidation kinetics upon excitation with multi-turnover pulses (MTFs) of variable intensity and duration. These analyses have led to definition and formulation of rate equations that describe the sequence of primary linear electron transfer (LET) steps in photosystem II (PSII) and of cyclic electron transport (CET) in PSI. The model considers heterogeneity in PSII reaction centers (RCs) associated with the S-states of the OEC and incorporates in a dark-adapted state the presence of a 15-35% fraction of Q(B)-nonreducing RCs that probably is identical with the S₀ fraction. The fluorescence induction algorithm (FIA) in the 10 μs-1 s excitation time range considers a photochemical O-J-D, a photo-electrochemical J-I and an I-P phase reflecting the response of the variable fluorescence to the electric trans-thylakoid potential generated by the proton pump fuelled by CET in PSI. The photochemical phase incorporates the kinetics associated with the double reduction of the acceptor pair of pheophytin (Phe) and plastoquinone Q(A) [PheQ(A)] in Q(B)-nonreducing RCs and the associated doubling of the variable fluorescence, in agreement with the three-state trapping model (TSTM) of PSII. The decline in fluorescence emission during the so-called SMT in the 1-100 s excitation time range, known as the Kautsky curve, is shown to be associated with a substantial decrease of CET-powered proton efflux from the stroma into the chloroplast lumen through the ATP synthase of the photosynthetic machinery.

  7. 3D Recording for 2D Delivering - the Employment of 3D Models for Studies and Analyses -

    Science.gov (United States)

    Rizzi, A.; Baratti, G.; Jiménez, B.; Girardi, S.; Remondino, F.

    2011-09-01

    In recent years, thanks to advances in surveying sensors and techniques, many heritage sites could be accurately replicated in digital form with very detailed and impressive results. The actual limits are mainly related to hardware capabilities, computation time and the low performance of personal computers. Often, the produced models are not viewable on a normal computer, and the only solution for easily visualizing them is offline, using rendered videos. This kind of 3D representation is useful for digital conservation, divulgation purposes or virtual tourism, where people can visit places otherwise closed for preservation or security reasons. But many more potentialities and possible applications are available using a 3D model. The problem is the ability to handle 3D data, as without adequate knowledge this information is reduced to standard 2D data. This article presents some surveying and 3D modeling experiences within the APSAT project ("Ambiente e Paesaggi dei Siti d'Altura Trentini", i.e. Environment and Landscapes of Upland Sites in Trentino). APSAT is a multidisciplinary project funded by the Autonomous Province of Trento (Italy) with the aim of documenting, surveying, studying, analysing and preserving mountainous and hill-top heritage sites located in the region. The project focuses on theoretical, methodological and technological aspects of the archaeological investigation of mountain landscape, considered as the product of sequences of settlements, parcelling-outs, communication networks, resources, and symbolic places. The mountain environment preserves better than others the traces of hunting and gathering, breeding, agricultural, metallurgical, symbolic activities characterised by different lengths and environmental impacts, from Prehistory to the Modern Period. Therefore the correct surveying and documentation of these heritage sites and materials is very important. Within the project, the 3DOM unit of FBK is delivering all the surveying and 3D material to

  8. Normalisation genes for expression analyses in the brown alga model Ectocarpus siliculosus

    Directory of Open Access Journals (Sweden)

    Rousvoal Sylvie

    2008-08-01

    Full Text Available Abstract Background Brown algae are plant multi-cellular organisms occupying most of the world coasts and are essential actors in the constitution of ecological niches at the shoreline. Ectocarpus siliculosus is an emerging model for brown algal research. Its genome has been sequenced, and several tools are being developed to perform analyses at different levels of cell organization, including transcriptomic expression analyses. Several topics, including physiological responses to osmotic stress and to exposure to contaminants and solvents are being studied in order to better understand the adaptive capacity of brown algae to pollution and environmental changes. A series of genes that can be used to normalise expression analyses is required for these studies. Results We monitored the expression of 13 genes under 21 different culture conditions. These included genes encoding proteins and factors involved in protein translation (ribosomal protein 26S, EF1alpha, IF2A, IF4E) and protein degradation (ubiquitin, ubiquitin-conjugating enzyme) or folding (cyclophilin), and proteins involved in both the structure of the cytoskeleton (tubulin alpha, actin, actin-related proteins) and its trafficking function (dynein), as well as a protein implicated in carbon metabolism (glucose 6-phosphate dehydrogenase). The stability of their expression level was assessed using the Ct range, and by applying both the geNorm and the Normfinder principles of calculation. Conclusion Comparisons of the data obtained with the three methods of calculation indicated that EF1alpha (EF1a) was the best reference gene for normalisation. The normalisation factor should be calculated with at least two genes, alpha tubulin, ubiquitin-conjugating enzyme or actin-related proteins being good partners of EF1a. Our results exclude actin as a good normalisation gene, and, in this, are in agreement with previous studies in other organisms.
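
    The geNorm principle mentioned above ranks candidate reference genes by a stability measure M: the average standard deviation of a gene's pairwise log2 expression ratios with every other candidate, where lower M means more stable expression. A minimal reimplementation of that principle (an illustration only, not the geNorm software; the expression values below are invented):

```python
import math
import statistics

def genorm_m(expression):
    """geNorm-style stability measure M for each candidate reference gene.

    expression -- dict mapping gene name -> list of expression values,
                  one per culture condition. Lower M = more stable gene.
    """
    genes = list(expression)
    m = {}
    for j in genes:
        variations = []
        for k in genes:
            if k == j:
                continue
            # Log-ratio of the two genes across all conditions; a stable
            # pair keeps this ratio nearly constant (low stdev).
            ratios = [math.log2(a / b)
                      for a, b in zip(expression[j], expression[k])]
            variations.append(statistics.stdev(ratios))
        m[j] = statistics.mean(variations)
    return m
```

    Running this on one stable and one erratic gene reproduces the expected ranking: the gene whose ratios to the others barely vary gets the lowest M.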

  9. DESCRIPTION OF MODELING ANALYSES IN SUPPORT OF THE 200-ZP-1 REMEDIAL DESIGN/REMEDIAL ACTION

    Energy Technology Data Exchange (ETDEWEB)

    VONGARGEN BH

    2009-11-03

    The Feasibility Study for the 200-ZP-1 Groundwater Operable Unit (DOE/RL-2007-28) and the Proposed Plan for Remediation of the 200-ZP-1 Groundwater Operable Unit (DOE/RL-2007-33) describe the use of groundwater pump-and-treat technology for the 200-ZP-1 Groundwater Operable Unit (OU) as part of an expanded groundwater remedy. During fiscal year 2008 (FY08), a groundwater flow and contaminant transport (flow and transport) model was developed to support remedy design decisions at the 200-ZP-1 OU. This model was developed because the size and influence of the proposed 200-ZP-1 groundwater pump-and-treat remedy will have a larger areal extent than the current interim remedy, and modeling is required to provide estimates of influent concentrations and contaminant mass removal rates to support the design of the aboveground treatment train. The 200 West Area Pre-Conceptual Design for Final Extraction/Injection Well Network: Modeling Analyses (DOE/RL-2008-56) documents the development of the first version of the MODFLOW/MT3DMS model of the Hanford Site's Central Plateau, as well as the initial application of that model to simulate a potential well field for the 200-ZP-1 remedy (considering only the contaminants carbon tetrachloride and technetium-99). This document focuses on the use of the flow and transport model to identify suitable extraction and injection well locations as part of the 200 West Area 200-ZP-1 Pump-and-Treat Remedial Design/Remedial Action Work Plan (DOE/RL-2008-78). Currently, the model has been developed to the extent necessary to provide approximate results and to lay a foundation for the design basis concentrations that are required in support of the remedial design/remediation action (RD/RA) work plan. The discussion in this document includes the following: (1) Assignment of flow and transport parameters for the model; (2) Definition of initial conditions for the transport model for each simulated contaminant of concern (COC) (i.e., carbon

  10. Assessing the hydrodynamic boundary conditions for risk analyses in coastal areas: a stochastic storm surge model

    Directory of Open Access Journals (Sweden)

    T. Wahl

    2011-11-01

    Full Text Available This paper describes a methodology to stochastically simulate a large number of storm surge scenarios (here: 10 million). The applied model is computationally very cheap and will help to improve the overall results from integrated risk analyses in coastal areas. Initially, the observed storm surge events from the tide gauges of Cuxhaven (located in the Elbe estuary) and Hörnum (located in the southeast of Sylt Island) are parameterised by taking into account 25 parameters (19 sea level parameters and 6 time parameters). Throughout the paper, the total water levels are considered. The astronomical tides are semidiurnal in the investigation area, with a tidal range >2 m. The second step of the stochastic simulation consists of fitting parametric distribution functions to the data sets resulting from the parameterisation. The distribution functions are then used to run Monte-Carlo-Simulations. Based on the simulation results, a large number of storm surge scenarios are reconstructed. Parameter interdependencies are considered and different filter functions are applied to avoid inconsistencies. Storm surge scenarios, which are of interest for risk analyses, can easily be extracted from the results.
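
    The parameterise-fit-simulate workflow described above can be sketched for a single sea level parameter, the surge peak. This is a deliberate simplification: the paper fits distributions to 25 interdependent parameters and applies filter functions, whereas the sketch below uses one method-of-moments Gumbel fit, and the observed peak values are invented:

```python
import math
import random
import statistics

def fit_gumbel(peaks):
    # Method-of-moments fit of a Gumbel distribution to observed surge peaks.
    mean = statistics.mean(peaks)
    std = statistics.stdev(peaks)
    beta = std * math.sqrt(6) / math.pi
    mu = mean - 0.5772 * beta  # 0.5772 ~ Euler-Mascheroni constant
    return mu, beta

def simulate_peaks(mu, beta, n, seed=42):
    # Monte Carlo sampling via the inverse CDF of the Gumbel distribution.
    rng = random.Random(seed)
    return [mu - beta * math.log(-math.log(rng.random())) for _ in range(n)]

observed = [1.8, 2.1, 2.4, 1.9, 2.6, 3.0, 2.2, 2.8]  # illustrative peaks (m)
mu, beta = fit_gumbel(observed)
scenarios = simulate_peaks(mu, beta, 10_000)
```

    Because each scenario is just an inverse-CDF draw, generating even millions of scenarios stays cheap, which is the property the paper exploits.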

  11. Models for regionalizing economic data and their applications within the scope of forensic disaster analyses

    Science.gov (United States)

    Schmidt, Hanns-Maximilian; Wiens, Marcus; Schultmann, Frank

    2015-04-01

    The impact of natural hazards on the economic system can be observed in many different regions all over the world. Once the local economic structure is hit by an event, direct costs instantly occur. However, the disturbance on a local level (e.g. parts of a city or industries along a river bank) might also cause monetary damages in other, indirectly affected sectors. If the impact of an event is strong, these damages are likely to cascade and spread even on an international scale (e.g. the eruption of Eyjafjallajökull and its impact on the automotive sector in Europe). In order to determine these special impacts, one has to gain insight into the directly hit economic structure before being able to calculate these side effects. Especially regarding the development of a model used for near real-time forensic disaster analyses, any simulation needs to be based on data that is rapidly available or easy to compute. Therefore, we investigated commonly used or recently discussed methodologies for regionalizing economic data. Surprisingly, even for German federal states there is no official input-output data available that can be used, although it might provide detailed figures concerning economic interrelations between different industry sectors. In the case of highly developed countries, such as Germany, we focus on models for regionalizing the nationwide input-output table, which is usually available from the national statistical offices. However, when it comes to developing countries (e.g. South-East Asia), the data quality and availability are usually much poorer. In this case, other sources need to be found for the proper assessment of regional economic performance. We developed an indicator-based model that can fill this gap because of its flexibility regarding the level of aggregation and the composability of different input parameters. Our poster presentation brings up a literature review and a summary of potential models that seem to be useful for this specific task

  12. Modifications in the AA5083 Johnson-Cook Material Model for Use in Friction Stir Welding Computational Analyses

    Science.gov (United States)

    2011-12-30

    Report title: Modifications in the AA5083 Johnson-Cook Material Model for Use in Friction Stir Welding Computational Analyses. Authors: M. Grujicic, B. Pandurangan, C.-F. Yen, B. A. Cheeseman (Clemson University). Keywords: AA5083, friction stir welding, Johnson-Cook material model. Abstract: The Johnson-Cook strength material model is frequently used in finite-element

  13. A model for analysing factors which may influence quality management procedures in higher education

    Directory of Open Access Journals (Sweden)

    Cătălin MAICAN

    2015-12-01

    Full Text Available In all universities, the Office for Quality Assurance defines the procedure for assessing the performance of the teaching staff, with a view to establishing students' perception as regards the teachers' activity from the point of view of the quality of the teaching process, of the relationship with the students and of the assistance provided for learning. The present paper aims at creating a combined evaluation model based on Data Mining statistical methods: starting from the findings revealed by the evaluations that teachers performed of students, using cluster analysis and discriminant analysis, we identified the subjects which produced significant differences between students' grades, subjects which were subsequently subjected to an evaluation by students. The results of these analyses allowed the formulation of certain measures for enhancing the quality of the evaluation process.

  14. Evaluation of Temperature and Humidity Profiles of Unified Model and ECMWF Analyses Using GRUAN Radiosonde Observations

    Directory of Open Access Journals (Sweden)

    Young-Chan Noh

    2016-07-01

    Full Text Available Temperature and water vapor profiles from the Korea Meteorological Administration (KMA) and the United Kingdom Met Office (UKMO) Unified Model (UM) data assimilation systems and from reanalysis fields from the European Centre for Medium-Range Weather Forecasts (ECMWF) were assessed using collocated radiosonde observations from the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) for January–December 2012. The motivation was to examine the overall performance of data assimilation outputs. The difference statistics of the collocated model outputs versus the radiosonde observations indicated a good agreement for the temperature amongst the datasets, while less agreement was found for the relative humidity. A comparison of the UM outputs from the UKMO and KMA revealed that they are similar to each other. The introduction of the new version of UM into the KMA in May 2012 resulted in an improved analysis performance, particularly for the moisture field. On the other hand, ECMWF reanalysis data showed slightly reduced performance for relative humidity compared with the UM, with a significant humid bias in the upper troposphere. ECMWF reanalysis temperature fields showed nearly the same performance as the two UM analyses. The root mean square differences (RMSDs) of the relative humidity for the three models were larger for more humid conditions, suggesting that humidity forecasts are less reliable under these conditions.
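
    The RMSD statistic used above to compare collocated model profiles against radiosonde observations is the standard root mean square difference; a generic sketch (sample values invented):

```python
import math

def rmsd(model, observed):
    # Root mean square difference between collocated model values and
    # radiosonde observations at matching levels.
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, observed))
                     / len(model))
```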

  15. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    Directory of Open Access Journals (Sweden)

    Sung-Chien Lin

    2014-07-01

    Full Text Available In this study, we used the approach of topic modeling to uncover the possible structure of research topics in the field of Informetrics, to explore the distribution of the topics over years, and to compare the core journals. In order to infer the structure of the topics in the field, the data of the papers published in the Journal of Informetrics and Scientometrics during 2007 to 2013 were retrieved from the database of the Web of Science as input to the approach of topic modeling. The results of this study show that when the number of topics was set to 10, the topic model has the smallest perplexity. Although the data scope and analysis methods differ from those of previous studies, the topics generated in this study are consistent with the results produced by analyses of experts. Empirical case studies and measurements of bibliometric indicators were considered important in every year during the whole analytic period, and the field was increasingly stable. Both core journals broadly paid attention to all of the topics in the field of Informetrics. The Journal of Informetrics put particular emphasis on the construction and applications of bibliometric indicators, and Scientometrics focused on the evaluation and the factors of productivity of countries, institutions, domains, and journals.
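
    Choosing the number of topics by smallest held-out perplexity, as done above, can be sketched as follows. This is a generic sketch: the function names are my own, and the log-likelihoods stand in for values a fitted LDA implementation would report for a held-out corpus:

```python
import math

def perplexity(log_likelihoods, token_count):
    # Perplexity of a held-out corpus given per-document log-likelihoods
    # under a fitted topic model: exp(-sum(log L) / N_tokens).
    return math.exp(-sum(log_likelihoods) / token_count)

def best_topic_count(results):
    # results: dict mapping number of topics -> (log_likelihoods, token_count).
    # The candidate with the smallest held-out perplexity wins.
    scores = {k: perplexity(ll, n) for k, (ll, n) in results.items()}
    return min(scores, key=scores.get)
```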

  16. Testing a dual-systems model of adolescent brain development using resting-state connectivity analyses.

    Science.gov (United States)

    van Duijvenvoorde, A C K; Achterberg, M; Braams, B R; Peters, S; Crone, E A

    2016-01-01

    The current study aimed to test a dual-systems model of adolescent brain development by studying changes in intrinsic functional connectivity within and across networks typically associated with cognitive-control and affective-motivational processes. To this end, resting-state and task-related fMRI data were collected of 269 participants (ages 8-25). Resting-state analyses focused on seeds derived from task-related neural activation in the same participants: the dorsal lateral prefrontal cortex (dlPFC) from a cognitive rule-learning paradigm and the nucleus accumbens (NAcc) from a reward paradigm. Whole-brain seed-based resting-state analyses showed an age-related increase in dlPFC connectivity with the caudate and thalamus, and an age-related decrease in connectivity with the (pre)motor cortex. NAcc connectivity showed a strengthening of connectivity with the dorsal anterior cingulate cortex (ACC) and subcortical structures such as the hippocampus, and a specific age-related decrease in connectivity with the ventral medial PFC (vmPFC). Behavioral measures from both functional paradigms correlated with resting-state connectivity strength with their respective seed. That is, age-related change in learning performance was mediated by connectivity between the dlPFC and thalamus, and age-related change in winning pleasure was mediated by connectivity between the NAcc and vmPFC. These patterns indicate (i) strengthening of connectivity between regions that support control and learning, (ii) more independent functioning of regions that support motor and control networks, and (iii) more independent functioning of regions that support motivation and valuation networks with age. These results are interpreted vis-à-vis a dual-systems model of adolescent brain development.

  17. Comparative modeling analyses of Cs-137 fate in the rivers impacted by Chernobyl and Fukushima accidents

    Energy Technology Data Exchange (ETDEWEB)

    Zheleznyak, M.; Kivva, S. [Institute of Environmental Radioactivity, Fukushima University (Japan)

    2014-07-01

    The consequences of the two largest nuclear accidents of the last decades, at the Chernobyl Nuclear Power Plant (ChNPP) (1986) and at the Fukushima Daiichi NPP (FDNPP) (2011), clearly demonstrated that radioactive contamination of water bodies in the vicinity of an NPP and on the waterways from it, e.g., the river-reservoir system after the Chernobyl accident and rivers and coastal marine waters after the Fukushima accident, has in both cases been one of the main sources of public concern about the accident consequences. The weight given to water contamination in public perception of the accidents is higher than the real fraction of doses received via aquatic pathways in comparison with other dose components. This psychological phenomenon, confirmed after both accidents, provides supplementary arguments that reliable simulation and prediction of radionuclide dynamics in water and sediments is an important part of post-accidental radioecological research. The purpose of the research is to use the experience of the modeling activities conducted over more than 25 years within the Chernobyl-affected Pripyat River and Dnieper River watersheds, together with data from new monitoring studies in Japan of the Abukuma River (the largest in the region, with a watershed area of 5400 km{sup 2}), Kuchibuto River, Uta River, Niita River, Natsui River, and Same River, and studies on the specifics of 'water-sediment' {sup 137}Cs exchange in this area, to refine the 1-D model RIVTOX and the 2-D model COASTOX and increase the predictive power of the modeling technologies. The results of the modeling studies are applied for more accurate prediction of water/sediment radionuclide contamination of rivers and reservoirs in the Fukushima Prefecture and for comparative analyses of the efficiency of the post-accidental measures to diminish the contamination of the water bodies.

  18. Development of microbial-enzyme-mediated decomposition model parameters through steady-state and dynamic analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Gangsheng [ORNL; Post, Wilfred M [ORNL; Mayes, Melanie [ORNL

    2013-01-01

    We developed a Microbial-ENzyme-mediated Decomposition (MEND) model, based on Michaelis-Menten kinetics, that describes the dynamics of physically defined pools of soil organic carbon (SOC). These include particulate, mineral-associated, and dissolved organic carbon (POC, MOC, and DOC, respectively), microbial biomass, and associated exoenzymes. The ranges and/or distributions of parameters were determined by both analytical steady-state and dynamic analyses with SOC data from the literature. We used an improved multi-objective parameter sensitivity analysis (MOPSA) to identify the most important parameters for the full model: maintenance of microbial biomass, turnover and synthesis of enzymes, and carbon use efficiency (CUE). The model predicted that a temperature increase of 2 °C (baseline temperature = 12 °C) caused the pools of POC-Cellulose, MOC, and total SOC to increase with dynamic CUE and decrease with constant CUE, as indicated by the 50% confidence intervals. Regardless of dynamic or constant CUE, the pool sizes of POC, MOC, and total SOC varied from -8% to 8% under +2 °C. The scenario analysis using a single parameter set indicates that higher temperature with dynamic CUE might result in greater net increases in both POC-Cellulose and MOC pools. Different dynamics of various SOC pools reflected the catalytic functions of specific enzymes targeting specific substrates and the interactions between microbes, enzymes, and SOC. With the feasible parameter values estimated in this study, models incorporating fundamental principles of microbial-enzyme dynamics can lead to simulation results qualitatively different from traditional models with fast/slow/passive pools.
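The core mechanism above — enzyme-mediated Michaelis-Menten uptake split by a carbon use efficiency — can be sketched with a minimal one-substrate integration; the pool names and parameter values below are illustrative, not those of MEND.

```python
# Minimal Michaelis-Menten decomposition with a CUE split: a fraction of
# microbial uptake builds biomass, the rest is respired as CO2.

def step(S, B, CO2, Vmax=1.0, Km=50.0, cue=0.4, death=0.01, dt=0.01):
    """One Euler step over substrate S, biomass B, and cumulative CO2."""
    uptake = Vmax * B * S / (Km + S)   # Michaelis-Menten kinetics
    dS = -uptake + death * B           # dead biomass returns to substrate
    dB = cue * uptake - death * B      # fraction CUE builds biomass
    dCO2 = (1.0 - cue) * uptake        # the remainder is respired
    return S + dS * dt, B + dB * dt, CO2 + dCO2 * dt

S, B, CO2 = 100.0, 2.0, 0.0
total0 = S + B + CO2
for _ in range(20000):                 # 200 time units
    S, B, CO2 = step(S, B, CO2)
total = S + B + CO2                    # carbon is conserved: dS+dB+dCO2 = 0
```

Because the three rates sum to zero by construction, total carbon is conserved at every step, which is a useful sanity check when extending such models with more pools.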

  19. Genomic analyses with biofilter 2.0: knowledge driven filtering, annotation, and model development.

    Science.gov (United States)

    Pendergrass, Sarah A; Frase, Alex; Wallace, John; Wolfe, Daniel; Katiyar, Neerja; Moore, Carrie; Ritchie, Marylyn D

    2013-12-30

    The ever-growing wealth of biological information available through multiple comprehensive database repositories can be leveraged for advanced analysis of data. We have now extensively revised and updated the multi-purpose software tool Biofilter that allows researchers to annotate and/or filter data as well as generate gene-gene interaction models based on existing biological knowledge. Biofilter now has the Library of Knowledge Integration (LOKI) for accessing and integrating existing comprehensive database information, including more flexibility in how ambiguous gene identifiers are handled. We have also updated the way importance scores for interaction models are generated. In addition, Biofilter 2.0 now works with a range of types and formats of data, including single nucleotide polymorphism (SNP) identifiers, rare variant identifiers, base pair positions, gene symbols, genetic regions, and copy number variant (CNV) location information. Biofilter provides a convenient single interface for accessing multiple publicly available human genetic data sources that have been compiled in the supporting database of LOKI. Information within LOKI includes genomic locations of SNPs and genes, as well as known relationships among genes and proteins such as interaction pairs, pathways and ontological categories. Via Biofilter 2.0 researchers can:
    • Annotate genomic location or region based data, such as results from association studies or CNV analyses, with relevant biological knowledge for deeper interpretation
    • Filter genomic location or region based data on biological criteria, such as filtering a series of SNPs to retain only SNPs present in specific genes within specific pathways of interest
    • Generate predictive models for gene-gene, SNP-SNP, or CNV-CNV interactions based on biological information, with priority for models to be tested based on biological relevance, thus narrowing the search space and reducing multiple hypothesis-testing. Biofilter is a software
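The knowledge-driven filtering idea — retain only SNPs that fall in genes belonging to a pathway of interest — reduces to set lookups once the SNP-gene and pathway-gene relationships are available. A toy sketch follows; the SNP, gene, and pathway names are made up, whereas Biofilter itself draws these relationships from its LOKI database.

```python
# Keep only SNPs located in genes that belong to a chosen pathway.
snp_to_gene = {
    "rs0001": "GENE_A",
    "rs0002": "GENE_B",
    "rs0003": "GENE_C",
    "rs0004": "GENE_A",
}
pathway_genes = {
    "pathway_X": {"GENE_A", "GENE_C"},
    "pathway_Y": {"GENE_B"},
}

def filter_snps(snps, pathway):
    wanted = pathway_genes[pathway]
    return sorted(s for s in snps if snp_to_gene.get(s) in wanted)

retained = filter_snps(snp_to_gene, "pathway_X")  # rs0001, rs0003, rs0004
```

Filtering this way shrinks the hypothesis space before model generation, which is exactly the multiple-testing benefit the abstract describes.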

  20. Hierarchical Data Structures, Institutional Research, and Multilevel Modeling

    Science.gov (United States)

    O'Connell, Ann A.; Reed, Sandra J.

    2012-01-01

    Multilevel modeling (MLM), also referred to as hierarchical linear modeling (HLM) or mixed models, provides a powerful analytical framework through which to study colleges and universities and their impact on students. Due to the natural hierarchical structure of data obtained from students or faculty in colleges and universities, MLM offers many…
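Why nesting matters can be shown numerically: with students grouped in schools, part of the outcome variance sits at the school level, summarized by the intraclass correlation (ICC). The sketch below simulates such data and recovers the ICC with one-way ANOVA method-of-moments estimators; all values are illustrative, and a real analysis would fit a mixed model rather than hand-computed variance components.

```python
# Simulate students nested in schools and estimate the ICC.
import numpy as np

rng = np.random.default_rng(0)
n_schools, n_students = 100, 30
tau, sigma = 1.0, 2.0                          # school-level and student-level SDs

school_eff = rng.normal(0, tau, n_schools)
y = school_eff[:, None] + rng.normal(0, sigma, (n_schools, n_students))

# One-way ANOVA (method-of-moments) variance components:
msb = n_students * y.mean(axis=1).var(ddof=1)  # between-school mean square
msw = y.var(axis=1, ddof=1).mean()             # within-school mean square
sigma2_hat = msw                               # student-level variance
tau2_hat = (msb - msw) / n_students            # school-level variance

icc = tau2_hat / (tau2_hat + sigma2_hat)       # true ICC = 1 / (1 + 4) = 0.2
```

A nonzero ICC is precisely the condition under which single-level regression understates standard errors and an MLM/HLM framework is warranted.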

  1. Controls on Yardang Morphology: Insights from Field Measurements, Lidar Topographic Analyses, and Numerical Modeling

    Science.gov (United States)

    Pelletier, J. D.; Kapp, P. A.

    2014-12-01

    Yardangs are streamlined bedforms sculpted by the wind and wind-blown sand. They can form as relatively resistant exposed rocks erode more slowly than surrounding exposed rocks, thus causing the more resistant rocks to stand higher in the landscape and deflect the wind and wind-blown sand into adjacent troughs in a positive feedback. How this feedback gives rise to streamlined forms that locally have a consistent size is not well understood theoretically. In this study we combine field measurements in the yardangs of Ocotillo Wells SVRA with analyses of airborne and terrestrial lidar datasets and numerical modeling to quantify and understand the controls on yardang morphology. The classic model for yardang morphology is that they evolve to an ideal 4:1 length-to-width aspect ratio that minimizes aerodynamic drag. We show using computational fluid dynamics (CFD) modeling that this model is incorrect: the 4:1 aspect ratio is the value corresponding to minimum drag for free bodies, i.e. obstacles around which air flows on all sides. Yardangs, in contrast, are embedded in Earth's surface. For such rough streamlined half-bodies, the aspect ratio corresponding to minimum drag is larger than 20:1. As an alternative to the minimum-drag model, we propose that the aspect ratio of yardangs not significantly influenced by structural controls is controlled by the angle of dispersion of the aerodynamic jet created as deflected wind and wind-blown sand exits the troughs between incipient yardang noses. Aerodynamic jets have a universal dispersion angle of 11.8 degrees, thus predicting a yardang aspect ratio of ~5:1. We developed a landscape evolution model that combines the physics of boundary layer flow with aeolian saltation and bedrock erosion to form yardangs with a range of sizes and aspect ratios similar to those observed in nature. 
Yardangs with aspect ratios both larger and smaller than 5:1 occur in the model since the strike and dip of the resistant rock unit also exerts

  2. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    Science.gov (United States)

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

    Certain contaminants may travel faster through soils when they are sorbed to subsurface colloidal particles. Indeed, subsurface colloids may act as carriers of some contaminants accelerating their translocation through the soil into the water table. This phenomenon is known as colloid-facilitated contaminant transport. It plays a significant role in contaminant transport in soils and has been recognized as a source of groundwater contamination. From a mechanistic point of view, the attachment/detachment of the colloidal particles from the soil matrix or from the air-water interface and the straining process may modify the hydraulic properties of the porous media. Šimůnek et al. (2006) developed a model that can simulate the colloid-facilitated contaminant transport in variably saturated porous media. The model is based on the solution of a modified advection-dispersion equation that accounts for several processes, namely: straining, exclusion and attachment/detachment kinetics of colloids through the soil matrix. The solutions of these governing, partial differential equations are obtained using a standard Galerkin-type, linear finite element scheme, implemented in the HYDRUS-2D/3D software (Šimůnek et al., 2012). Modeling colloid transport through the soil and the interaction of colloids with the soil matrix and other contaminants is complex and requires the characterization of many model parameters. In practice, it is very difficult to assess actual transport parameter values, so they are often calibrated. However, before calibration, one needs to know which parameters have the greatest impact on output variables. This kind of information can be obtained through a sensitivity analysis of the model. The main objective of this work is to perform local and global sensitivity analyses of the colloid-facilitated contaminant transport module of HYDRUS. Sensitivity analysis was performed in two steps: (i) we applied a screening method based on Morris' elementary
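The Morris elementary-effects screening mentioned above perturbs one parameter at a time from random base points and averages the absolute normalized responses. A stripped-down sketch follows (the full Morris method samples trajectories on a grid; the test function here is illustrative, not the HYDRUS module).

```python
# One-at-a-time elementary-effects screening: mu* ranks parameter influence.
import numpy as np

def model(x):
    # Toy response: parameter 0 dominates, parameter 2 is inert.
    return 10.0 * x[0] + 1.0 * x[1] + 0.0 * x[2]

def morris_mu_star(f, n_params, n_base=50, delta=0.1, seed=0):
    rng = np.random.default_rng(seed)
    effects = np.zeros((n_base, n_params))
    for t in range(n_base):
        x = rng.uniform(0, 1 - delta, n_params)   # random base point in [0, 1]
        for i in range(n_params):
            x2 = x.copy()
            x2[i] += delta                        # one-at-a-time step
            effects[t, i] = abs(f(x2) - f(x)) / delta
    return effects.mean(axis=0)                   # mu*: mean |elementary effect|

mu_star = morris_mu_star(model, 3)                # approximately [10, 1, 0]
```

Parameters with small mu* can be fixed at nominal values, leaving calibration effort for the influential ones — the purpose of the screening step in the abstract.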

  3. A Hidden Markov model web application for analysing bacterial genomotyping DNA microarray experiments.

    Science.gov (United States)

    Newton, Richard; Hinds, Jason; Wernisch, Lorenz

    2006-01-01

    Whole genome DNA microarray genomotyping experiments compare the gene content of different species or strains of bacteria. A statistical approach to analysing the results of these experiments was developed, based on a Hidden Markov model (HMM), which takes adjacency of genes along the genome into account when calling genes present or absent. The model was implemented in the statistical language R and applied to three datasets. The method is numerically stable with good convergence properties. Error rates are reduced compared with approaches that ignore spatial information. Moreover, the HMM circumvents a problem encountered in a conventional analysis: determining the cut-off value to use to classify a gene as absent. An Apache Struts web interface for the R script was created for the benefit of users unfamiliar with R. The application may be found at http://hmmgd.cryst.bbk.ac.uk/hmmgd. The source code illustrating how to run R scripts from an Apache Struts-based web application is available from the corresponding author on request. The application is also available for local installation if required.
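The adjacency idea above — letting neighbouring genes influence a present/absent call instead of thresholding each gene independently — is what Viterbi decoding of a two-state HMM provides. The sketch below uses Gaussian emissions over log-ratio-like values; the emission means, transition probabilities, and observations are illustrative (the paper estimates such parameters from the data, in R).

```python
# Two-state HMM (0 = absent, 1 = present) decoded with Viterbi in log space.
import numpy as np

def viterbi(obs, means, sd, log_trans, log_start):
    n, k = len(obs), len(means)
    # Gaussian log-likelihood of each observation under each state.
    ll = -0.5 * ((obs[:, None] - means) / sd) ** 2 - np.log(sd * np.sqrt(2 * np.pi))
    dp = np.zeros((n, k))
    back = np.zeros((n, k), dtype=int)
    dp[0] = log_start + ll[0]
    for t in range(1, n):
        scores = dp[t - 1][:, None] + log_trans   # [from_state, to_state]
        back[t] = scores.argmax(axis=0)
        dp[t] = scores.max(axis=0) + ll[t]
    path = np.zeros(n, dtype=int)
    path[-1] = dp[-1].argmax()
    for t in range(n - 2, -1, -1):
        path[t] = back[t + 1, path[t + 1]]
    return path

# Index 2 is ambiguous (-0.8): a per-gene threshold might call it present,
# but its absent neighbours pull the HMM call to absent.
obs = np.array([-2.1, -1.8, -0.8, -2.0, -2.2, 0.1, 0.2, 0.4, -1.9])
means = np.array([-2.0, 0.0])                     # absent vs present signal
log_trans = np.log(np.array([[0.9, 0.1], [0.1, 0.9]]))
states = viterbi(obs, means, sd=0.5, log_trans=log_trans,
                 log_start=np.log([0.5, 0.5]))
```

This is also how the HMM sidesteps the cut-off problem the abstract mentions: no fixed threshold is needed, because the transition penalties decide when the evidence justifies a state change.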

  4. Global isoprene emissions estimated using MEGAN, ECMWF analyses and a detailed canopy environment model

    Directory of Open Access Journals (Sweden)

    J.-F. Müller

    2008-03-01

    Full Text Available The global emissions of isoprene are calculated at 0.5° resolution for each year between 1995 and 2006, based on version 2 of the MEGAN model (Model of Emissions of Gases and Aerosols from Nature; Guenther et al., 2006) and a detailed multi-layer canopy environment model for the calculation of leaf temperature and visible radiation fluxes. The calculation is driven by meteorological fields – air temperature, cloud cover, downward solar irradiance, windspeed, volumetric soil moisture in 4 soil layers – provided by analyses of the European Centre for Medium-Range Weather Forecasts (ECMWF). The estimated annual global isoprene emission ranges between 374 Tg (in 1996) and 449 Tg (in 1998 and 2005), for an average of ca. 410 Tg/year over the whole period, i.e. about 30% less than the standard MEGAN estimate (Guenther et al., 2006). This difference is due, to a large extent, to the impact of the soil moisture stress factor, which is found here to decrease the global emissions by more than 20%. In qualitative agreement with past studies, high annual emissions are found to be generally associated with El Niño events. The emission inventory is evaluated against flux measurement campaigns at Harvard Forest (Massachusetts) and Tapajós in Amazonia, showing that the model can capture quite well the short-term variability of emissions, but that it fails to reproduce the observed seasonal variation at the tropical rainforest site, with largely overestimated wet season fluxes. The comparison of the HCHO vertical columns calculated by a chemistry and transport model (CTM) with HCHO distributions retrieved from space provides useful insights on tropical isoprene emissions. For example, the relatively low emissions calculated over Western Amazonia (compared to the corresponding estimates in the inventory of Guenther et al., 1995) are validated by the excellent agreement found between the CTM and HCHO data over this region. The parameterized impact of the soil moisture
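MEGAN-type inventories compute emissions as a base emission factor modulated by dimensionless activity factors (temperature, light, leaf age, soil moisture). As one concrete piece, the commonly cited Guenther et al. (1993) temperature activity factor for isoprene can be sketched as below; MEGAN v2 itself uses a more elaborate parameterization, so treat the constants as illustrative.

```python
# Guenther et al. (1993)-style temperature activity factor for isoprene.
import math

def gamma_T(T, Ts=303.0, Tm=314.0, CT1=95000.0, CT2=230000.0, R=8.314):
    """Dimensionless temperature activity factor; T in kelvin.

    Rises with temperature up to an optimum near Tm, then falls off.
    """
    num = math.exp(CT1 * (T - Ts) / (R * Ts * T))
    den = 1.0 + math.exp(CT2 * (T - Tm) / (R * Ts * T))
    return num / den

# emission = emission_factor * gamma_T(T) * (light, leaf-age, soil-moisture
# factors, ...), each factor scaling a landscape-scale base emission rate.
```

The soil-moisture stress factor discussed in the abstract enters the same way, as one more multiplicative gamma term, which is why it can reduce the global total by over 20% without changing the base emission factors.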

  5. Stream Tracer Integrity: Comparative Analyses of Rhodamine-WT and Sodium Chloride through Transient Storage Modeling

    Science.gov (United States)

    Smull, E. M.; Wlostowski, A. N.; Gooseff, M. N.; Bowden, W. B.; Wollheim, W. M.

    2013-12-01

    Solute transport in natural channels describes the transport of water and dissolved matter through a river reach of interest. Conservative tracers allow us to label a parcel of stream water, such that we can track its movement downstream through space and time. A transient storage model (TSM) can be fit to the breakthrough curve (BTC) following a stream tracer experiment, as a way to quantify advection, dispersion, and transient storage processes. Arctic streams and rivers, in particular, are continuously underlain by permafrost, which provides for a simplified surface water-groundwater exchange. Sodium chloride (NaCl) and Rhodamine-WT (RWT) are widely used tracers, and differences between the two in conservative behavior and detection limits have been noted in small-scale field and laboratory studies. This study seeks to further this understanding by applying the OTIS model to NaCl and RWT BTC data from a field study on the Kuparuk River, Alaska, at varying flow rates. There are two main questions to be answered: 1) Do differences in NaCl and RWT manifest in OTIS parameter values? 2) Are the OTIS model results reliable for NaCl, RWT, or both? Fieldwork was performed in the summer of 2012 on the Kuparuk River, and modeling was performed using a modified OTIS framework, which provided for parameter optimization and further global sensitivity analyses. The results of this study will contribute to the greater body of literature surrounding Arctic stream hydrology, and it will assist in methodology for future tracer field studies. Additionally, the modeling work will provide an analysis for OTIS parameter identifiability, and assess stream tracer integrity (i.e. how well the BTC data represents the system) and its relation to TSM performance (i.e. how well the TSM can find a unique fit to the BTC data). 
The quantitative tools used can be applied to other solute transport studies, to better understand potential deviations in model outcome due to stream tracer choice and
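A transient storage model of the OTIS type couples in-channel advection-dispersion to a first-order exchange with a storage zone. The explicit finite-difference sketch below reproduces that structure on a slug injection; parameter values are illustrative, not calibrated Kuparuk River values.

```python
# OTIS-style transient storage: advection-dispersion plus storage exchange.
#   dC/dt  = -u dC/dx + D d2C/dx2 + alpha (Cs - C)
#   dCs/dt = alpha (A/As) (C - Cs)
import numpy as np

nx, dx, dt = 200, 1.0, 0.02
u, D, alpha = 0.5, 0.05, 0.05          # velocity, dispersion, exchange rate
A_over_As = 2.0                        # channel-to-storage area ratio

C = np.zeros(nx); C[5:10] = 1.0        # slug injection in the channel
Cs = np.zeros(nx)                      # storage-zone concentration

for _ in range(5000):                  # 100 time units
    adv = -u * (C - np.roll(C, 1)) / dx                    # upwind advection
    disp = D * (np.roll(C, 1) - 2 * C + np.roll(C, -1)) / dx**2
    exch = alpha * (Cs - C)                                # uses old Cs
    Cs = Cs + dt * alpha * A_over_As * (C - Cs)
    C = C + dt * (adv + disp + exch)
    C[0] = C[-1] = 0.0                                     # open boundaries

mass = (C + Cs / A_over_As).sum() * dx  # conserved up to boundary losses
```

Fitting alpha and A/As to an observed breakthrough curve is where the tracer choice matters: a tracer with detection-limit or sorption artifacts distorts the long tail that identifies the storage parameters.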

  6. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    Directory of Open Access Journals (Sweden)

    Varsha H. Rallapalli

    2016-10-01

    Full Text Available Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRENV) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S) and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRENV has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding to speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRENV. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating feasibility of neural SNRENV computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence in individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of the neural SNRENV in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.
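The SNRENV idea can be illustrated numerically: compare the modulation-band envelope power of speech-plus-noise against that of noise alone. In the toy sketch below, the "speech" is a 4-Hz-modulated tone, the envelope extractor is a crude moving average of the squared waveform, and a single modulation band is used — drastic simplifications of the sEPSM's filter banks, for illustration only.

```python
# Toy envelope-power SNR (SNRenv-like) comparison across two noise levels.
import numpy as np

fs = 8000
t = np.arange(fs * 2) / fs                        # 2 s of signal
speech = (1 + np.sin(2 * np.pi * 4 * t)) * np.sin(2 * np.pi * 500 * t)

rng = np.random.default_rng(1)
noise = rng.normal(0, 0.3, t.size)

def env_power_at(x, fmod, win=200):
    kernel = np.ones(win) / win
    env = np.convolve(x**2, kernel, mode="same")  # crude envelope estimate
    env = env - env.mean()                        # keep AC envelope power only
    spec = np.abs(np.fft.rfft(env)) ** 2
    freqs = np.fft.rfftfreq(env.size, 1 / fs)
    band = (freqs > fmod - 1) & (freqs < fmod + 1)
    return spec[band].sum()

def snr_env(sn, n, fmod=4.0):
    p_sn, p_n = env_power_at(sn, fmod), env_power_at(n, fmod)
    return 10 * np.log10(max(p_sn - p_n, 1e-12) / p_n)

good = snr_env(speech + noise, noise)             # mild noise
bad = snr_env(speech + 10 * noise, 10 * noise)    # heavy noise
```

Raising the noise level raises the envelope noise floor (term (b) in the abstract) much faster than it adds speech-driven envelope power, so the envelope-domain SNR drops — the quantity the paper estimates from auditory-nerve spike trains instead of acoustic waveforms.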

  7. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    NARCIS (Netherlands)

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input subs

  8. Molecular analyses of neurogenic defects in a human pluripotent stem cell model of fragile X syndrome.

    Science.gov (United States)

    Boland, Michael J; Nazor, Kristopher L; Tran, Ha T; Szücs, Attila; Lynch, Candace L; Paredes, Ryder; Tassone, Flora; Sanna, Pietro Paolo; Hagerman, Randi J; Loring, Jeanne F

    2017-01-29

    New research suggests that common pathways are altered in many neurodevelopmental disorders including autism spectrum disorder; however, little is known about early molecular events that contribute to the pathology of these diseases. The study of monogenic, neurodevelopmental disorders with a high incidence of autistic behaviours, such as fragile X syndrome, has the potential to identify genes and pathways that are dysregulated in autism spectrum disorder as well as fragile X syndrome. In vitro generation of human disease-relevant cell types provides the ability to investigate aspects of disease that are impossible to study in patients or animal models. Differentiation of human pluripotent stem cells recapitulates development of the neocortex, an area affected in both fragile X syndrome and autism spectrum disorder. We have generated induced human pluripotent stem cells from several individuals clinically diagnosed with fragile X syndrome and autism spectrum disorder. When differentiated to dorsal forebrain cell fates, our fragile X syndrome human pluripotent stem cell lines exhibited reproducible aberrant neurogenic phenotypes. Using global gene expression and DNA methylation profiling, we have analysed the early stages of neurogenesis in fragile X syndrome human pluripotent stem cells. We discovered aberrant DNA methylation patterns at specific genomic regions in fragile X syndrome cells, and identified dysregulated gene- and network-level correlates of fragile X syndrome that are associated with developmental signalling, cell migration, and neuronal maturation. Integration of our gene expression and epigenetic analysis identified altered epigenetic-mediated transcriptional regulation of a distinct set of genes in fragile X syndrome. These fragile X syndrome-aberrant networks are significantly enriched for genes associated with autism spectrum disorder, giving support to the idea that underlying similarities exist among these neurodevelopmental diseases.

  9. A simple beam model to analyse the durability of adhesively bonded tile floorings in presence of shrinkage

    Directory of Open Access Journals (Sweden)

    S. de Miranda

    2014-07-01

    Full Text Available A simple beam model for the evaluation of tile debonding due to substrate shrinkage is presented. The tile-adhesive-substrate package is modeled as an Euler-Bernoulli beam lying on a two-layer elastic foundation. An effective discrete model for inter-tile grouting is introduced with the aim of modelling workmanship defects due to partially filled groutings. The model is validated using the results of a 2D FE model. Different defect configurations and adhesive typologies are analysed, focusing the attention on the prediction of normal stresses in the adhesive layer under the assumption of Mode I failure of the adhesive.
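The beam-on-elastic-foundation calculation underlying such a model can be sketched with finite differences: an Euler-Bernoulli beam on a Winkler (single-layer) foundation under uniform load, with simply supported ends. The two-layer foundation and the discrete grouting model of the paper are not reproduced, and all values are illustrative.

```python
# Solve EI w'''' + k w = q on a simply supported beam by finite differences.
import numpy as np

L, n = 1.0, 201                 # beam length [m], grid points
EI, k, q = 1.0e3, 1.0e6, 1.0e3  # bending stiffness, foundation modulus, load
h = L / (n - 1)

# Fourth-derivative stencil [1, -4, 6, -4, 1] / h^4 on interior nodes.
m = n - 2
A = np.zeros((m, m))
for i in range(m):
    for j, c in zip(range(i - 2, i + 3), (1, -4, 6, -4, 1)):
        if 0 <= j < m:
            A[i, j] = c
A /= h**4
# Simply supported ends: w = 0 and w'' = 0, i.e. ghost node w[-1] = -w[1].
A[0, 0] -= 1 / h**4
A[-1, -1] -= 1 / h**4

w = np.linalg.solve(EI * A + k * np.eye(m), np.full(m, q))
w_full = np.concatenate(([0.0], w, [0.0]))   # deflection along the beam
```

With the foundation term dominant, the interior deflection approaches q/k, and the adhesive stresses of interest would follow from the foundation reaction k·w along the bond line.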

  10. Monte Carlo modeling and analyses of YALINA-booster subcritical assembly part 1: analytical models and main neutronics parameters.

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division

    2008-09-11

    This study was carried out to model and analyze the YALINA-Booster facility, of the Joint Institute for Power and Nuclear Research of Belarus, with the long term objective of advancing the utilization of accelerator driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions. In the latter two cases a deuteron beam is used to generate the neutrons. This study is a part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity without any geometrical homogenization using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, which is an extension of the MCNP code with burnup capability because of its additional feature for analyzing source driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1.

  11. Insights into the evolution of tectonically-active glaciated mountain ranges from digital elevation model analyses

    Science.gov (United States)

    Brocklehurst, S. H.; Whipple, K. X.

    2003-12-01

    Glaciers have played an important role in the development of most active mountain ranges around the world during the Quaternary, but the interaction between glacial erosion (as modulated by climate change) and tectonic processes is poorly understood. The so-called glacial buzzsaw hypothesis (Brozovic et al., 1997) proposes that glaciers can incise as rapidly as the most rapid rock uplift rates, such that glaciated landscapes experiencing different rock uplift rates but the same snowline elevation will look essentially the same, with mean elevations close to the snowline. Digital elevation model-based analyses of the glaciated landscapes of the Nanga Parbat region, Pakistan, and the Southern Alps, New Zealand, lend some support to this hypothesis, but also reveal considerably more variety to the landscapes of glaciated, tectonically-active mountain ranges. Larger glaciers in the Nanga Parbat region maintain a low downvalley gradient and valley floor elevations close to the snowline, even in the face of extremely rapid rock uplift. However, smaller glaciers steepen in response to rapid uplift, similar to the response of rivers. A strong correlation between the height of hillslopes rising from the cirque floors and rock uplift rates implies that erosion processes on hillslopes cannot initially keep up with more rapid glacial incision rates. It is these staggering hillslopes that permit mountain peaks to rise above 8000m. The glacial buzzsaw hypothesis does not describe the evolution of the Southern Alps as well, because here mean elevations rise in areas of more rapid rock uplift. The buzzsaw hypothesis may work well in the Nanga Parbat region because the zone of rapid rock uplift is structurally confined to a narrow region. Alternatively, the Southern Alps may not have been rising sufficiently rapidly or sufficiently long for the glacial buzzsaw to be imposed outside the most rapidly uplifting region, around Mount Cook. The challenge now is to understand in detail

  12. Soil carbon response to land-use change: evaluation of a global vegetation model using observational meta-analyses

    Science.gov (United States)

    Nyawira, Sylvia S.; Nabel, Julia E. M. S.; Don, Axel; Brovkin, Victor; Pongratz, Julia

    2016-10-01

    Global model estimates of soil carbon changes from past land-use changes remain uncertain. We develop an approach for evaluating dynamic global vegetation models (DGVMs) against existing observational meta-analyses of soil carbon changes following land-use change. Using the DGVM JSBACH, we perform idealized simulations where the entire globe is covered by one vegetation type, which then undergoes a land-use change to another vegetation type. We select the grid cells that represent the climatic conditions of the meta-analyses and compare the mean simulated soil carbon changes to the meta-analyses. Our simulated results show model agreement with the observational data on the direction of changes in soil carbon for some land-use changes, although the model simulated a generally smaller magnitude of changes. The conversion of crop to forest resulted in soil carbon gain of 10 % compared to a gain of 42 % in the data, whereas the forest-to-crop change resulted in a simulated loss of -15 % compared to -40 %. The model and the observational data disagreed for the conversion of crop to grasslands. The model estimated a small soil carbon loss (-4 %), while observational data indicate a 38 % gain in soil carbon for the same land-use change. These model deviations from the observations are substantially reduced by explicitly accounting for crop harvesting and ignoring burning in grasslands in the model. We conclude that our idealized simulation approach provides an appropriate framework for evaluating DGVMs against meta-analyses and that this evaluation helps to identify the causes of deviation of simulated soil carbon changes from the meta-analyses.

  13. A very simple dynamic soil acidification model for scenario analyses and target load calculations

    NARCIS (Netherlands)

    Posch, M.; Reinds, G.J.

    2009-01-01

    A very simple dynamic soil acidification model, VSD, is described, which has been developed as the simplest extension of steady-state models for critical load calculations and with an eye on regional applications. The model requires only a minimum set of inputs (compared to more detailed models) and

  14. A Conceptual Model for Analysing Management Development in the UK Hospitality Industry

    Science.gov (United States)

    Watson, Sandra

    2007-01-01

This paper presents a conceptual, contingent model of management development (MD). It explains the nature of the UK hospitality industry and its potential influence on MD practices, prior to exploring dimensions and relationships in the model. The embryonic model is presented as a means of enhancing our understanding of the complexities of the…

  15. Secondary Evaluations of MTA 36-Month Outcomes: Propensity Score and Growth Mixture Model Analyses

    Science.gov (United States)

Swanson, James M.; Hinshaw, Stephen P.; Arnold, L. Eugene; Gibbons, Robert D.; Marcus, Sue; Hur, Kwan; Jensen, Peter S.; Vitiello, Benedetto; Abikoff, Howard B.; Greenhill, Laurence L.; Hechtman, Lily; Pelham, William E.; Wells, Karen C.; Conners, C. Keith; March, John S.; Elliott, Glen R.; Epstein, Jeffery N.; Hoagwood, Kimberly; Hoza, Betsy; Molina, Brooke S. G.; Newcorn, Jeffrey H.; Severe, Joanne B.; Wigal, Timothy

    2007-01-01

    Objective: To evaluate two hypotheses: that self-selection bias contributed to lack of medication advantage at the 36-month assessment of the Multimodal Treatment Study of Children With ADHD (MTA) and that overall improvement over time obscured treatment effects in subgroups with different outcome trajectories. Method: Propensity score analyses,…

  17. The Aachen miniaturized heart-lung machine--first results in a small animal model.

    Science.gov (United States)

    Schnoering, Heike; Arens, Jutta; Sachweh, Joerg S; Veerman, Melanie; Tolba, Rene; Schmitz-Rode, Thomas; Steinseifer, Ulrich; Vazquez-Jimenez, Jaime F

    2009-11-01

Congenital heart surgery most often incorporates extracorporeal circulation. Due to foreign surface contact and the administration of foreign blood in many children, inflammatory response and hemolysis are important matters of debate. This is particularly an issue in premature and low birth-weight newborns. Taking these considerations into account, the Aachen miniaturized heart-lung machine (MiniHLM) with a total static priming volume of 102 mL (including tubing) was developed and tested in a small animal model. Fourteen female Chinchilla Bastard rabbits were operated on using two different kinds of circuits. In eight animals, a conventional HLM with Dideco Kids oxygenator and Stöckert roller pump (Sorin group, Milan, Italy) was used, and the Aachen MiniHLM was employed in six animals. Outcome parameters were hemolysis and blood gas analysis including lactate. The rabbits were anesthetized, and a standard median sternotomy was performed. The ascending aorta and the right atrium were cannulated. After initiating cardiopulmonary bypass, the aorta was cross-clamped, and cardiac arrest was induced by blood cardioplegia. Blood samples for hemolysis and blood gas analysis were drawn before, during, and after cardiopulmonary bypass. After 1 h aortic clamp time, all animals were weaned from cardiopulmonary bypass. Blood gas analysis revealed adequate oxygenation and perfusion during cardiopulmonary bypass, irrespective of the employed perfusion system. Use of the Aachen MiniHLM resulted in a significantly smaller decrease in fibrinogen during cardiopulmonary bypass. A trend toward a smaller increase in free hemoglobin during bypass in the MiniHLM group was also observed. This newly developed Aachen MiniHLM with low priming volume, reduced hemolysis, and excellent gas transfer (O₂ and CO₂) may reduce circuit-induced complications during heart surgery in neonates.

  18. Applying TSOI Hybrid Learning Model to Enhance Blended Learning Experience in Science Education

    Science.gov (United States)

    Tsoi, Mun Fie

    2009-01-01

    Purpose: Research on the nature of blended learning and its features has led to a variety of approaches to the practice of blended learning. The purpose of this paper is to provide an alternative practice model, the TSOI hybrid learning model (HLM) to enhance the blended learning experiences in science education. Design/methodology/approach: The…

  19. Missing Data Treatments at the Second Level of Hierarchical Linear Models

    Science.gov (United States)

    St. Clair, Suzanne W.

    2011-01-01

The current study evaluated the performance of traditional versus modern missing data treatments (MDTs) in the estimation of fixed effects and variance components for data missing at the second level of a hierarchical linear model (HLM) across 24 different study conditions. Variables manipulated in the analysis included: (a) number of Level-2 variables with missing…
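For readers unfamiliar with the two-level models these HLM records refer to, the following is a minimal, self-contained Python sketch using simulated, hypothetical data. It generates balanced student-within-school scores and recovers the between-group and within-group variance components with the classical one-way ANOVA (method-of-moments) estimator, the simplest precursor to full HLM estimation; real analyses would use ML/REML software instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical balanced two-level data: J schools, n students per school.
J, n = 200, 20
sigma2_b, sigma2_w = 1.0, 4.0            # true between/within variances
school_effect = rng.normal(0, np.sqrt(sigma2_b), size=(J, 1))
scores = 50 + school_effect + rng.normal(0, np.sqrt(sigma2_w), size=(J, n))

# One-way ANOVA (method-of-moments) estimates of the variance components.
group_means = scores.mean(axis=1)
msw = ((scores - group_means[:, None]) ** 2).sum() / (J * (n - 1))   # within mean square
msb = n * ((group_means - group_means.mean()) ** 2).sum() / (J - 1)  # between mean square
sigma2_w_hat = msw
sigma2_b_hat = (msb - msw) / n
icc = sigma2_b_hat / (sigma2_b_hat + sigma2_w_hat)  # intraclass correlation
```

The intraclass correlation (here about 0.2 by construction) is the usual first diagnostic for whether a multilevel model is needed at all.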

  20. Using an operating cost model to analyse the selection of aircraft type on short-haul routes

    CSIR Research Space (South Africa)

    Ssamula, B

    2006-08-01

…and the effect of passenger volume analysed. The model was applied to a specific route within Africa, and thereafter varying passenger numbers, to choose the least costly aircraft. The results showed that smaller capacity aircraft, even though limited by maximum...

  1. Pathway models for analysing and managing the introduction of alien plant pests—an overview and categorization

    Science.gov (United States)

    J.C. Douma; M. Pautasso; R.C. Venette; C. Robinet; L. Hemerik; M.C.M. Mourits; J. Schans; W. van der Werf

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative estimates of introduction risks and effectiveness of management options....

  2. Developing computational model-based diagnostics to analyse clinical chemistry data

    NARCIS (Netherlands)

    Schalkwijk, D.B. van; Bochove, K. van; Ommen, B. van; Freidig, A.P.; Someren, E.P. van; Greef, J. van der; Graaf, A.A. de

    2010-01-01

This article provides methodological and technical considerations to researchers starting to develop computational model-based diagnostics using clinical chemistry data. These models are of increasing importance, since novel metabolomics and proteomics measuring technologies are able to produce large

  3. Bio-economic farm modelling to analyse agricultural land productivity in Rwanda

    NARCIS (Netherlands)

    Bidogeza, J.C.

    2011-01-01

Keywords: Rwanda; farm household typology; sustainable technology adoption; multivariate analysis; land degradation; food security; bioeconomic model; crop simulation models; organic fertiliser; inorganic fertiliser; policy incentives. In Rwanda, land degradation contributes to the low and

  4. Comparative study analysing women's childbirth satisfaction and obstetric outcomes across two different models of maternity care

    OpenAIRE

    Conesa Ferrer, Ma Belén; Canteras Jordana, Manuel; Ballesteros Meseguer, Carmen; Carrillo García, César; Martínez Roche, M Emilia

    2016-01-01

    Objectives To describe the differences in obstetrical results and women's childbirth satisfaction across 2 different models of maternity care (biomedical model and humanised birth). Setting 2 university hospitals in south-eastern Spain from April to October 2013. Design A correlational descriptive study. Participants A convenience sample of 406 women participated in the study, 204 of the biomedical model and 202 of the humanised model. Results The differences in obstetrical results were (biom...

  5. Quantifying and Analysing Neighbourhood Characteristics Supporting Urban Land-Use Modelling

    DEFF Research Database (Denmark)

    Hansen, Henning Sten

    2009-01-01

    Land-use modelling and spatial scenarios have gained increased attention as a means to meet the challenge of reducing uncertainty in the spatial planning and decision-making. Several organisations have developed software for land-use modelling. Many of the recent modelling efforts incorporate cel...

  6. Driver Model of a Powered Wheelchair Operation as a Tool of Theoretical Analyses

    Science.gov (United States)

    Ito, Takuma; Inoue, Takenobu; Shino, Motoki; Kamata, Minoru

This paper describes the construction of a driver model of powered wheelchair operation for understanding the characteristics of the driver. Most existing research on driver models targets the operation of automobiles and motorcycles, not low-speed vehicles such as powered wheelchairs. Therefore, we started by verifying the possibility of modeling the turning operation at a corner of a corridor. First, we conducted an experiment with a daily powered wheelchair user driving his own vehicle. High reproducibility of driving and the driving characteristics needed for the construction of a driver model were both confirmed from the results of the experiment. Next, experiments with driving simulators were conducted to collect quantitative driving data. The parameters of the proposed driver model were identified from the experimental results. From simulations with the proposed driver model and identified parameters, the characteristics of the proposed driver model were analyzed.

  7. Fixed- and random-effects meta-analytic structural equation modeling: examples and analyses in R.

    Science.gov (United States)

    Cheung, Mike W-L

    2014-03-01

    Meta-analytic structural equation modeling (MASEM) combines the ideas of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Cheung and Chan (Psychological Methods 10:40-64, 2005b, Structural Equation Modeling 16:28-53, 2009) proposed a two-stage structural equation modeling (TSSEM) approach to conducting MASEM that was based on a fixed-effects model by assuming that all studies have the same population correlation or covariance matrices. The main objective of this article is to extend the TSSEM approach to a random-effects model by the inclusion of study-specific random effects. Another objective is to demonstrate the procedures with two examples using the metaSEM package implemented in the R statistical environment. Issues related to and future directions for MASEM are discussed.

8. WOMBAT: A tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML)

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced-rank estimation, is accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large-scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html

9. Kinetic models for analysing myocardial [¹¹C]palmitate data

    Energy Technology Data Exchange (ETDEWEB)

    Jong, Hugo W.A.M. de [University Medical Centre Utrecht, Department of Radiology and Nuclear Medicine, Utrecht (Netherlands); VU University Medical Centre, Department of Nuclear Medicine and PET Research, Amsterdam (Netherlands); Rijzewijk, Luuk J.; Diamant, Michaela [VU University Medical Centre, Diabetes Centre, Amsterdam (Netherlands); Lubberink, Mark; Lammertsma, Adriaan A. [VU University Medical Centre, Department of Nuclear Medicine and PET Research, Amsterdam (Netherlands); Meer, Rutger W. van der; Lamb, Hildo J. [Leiden University Medical Centre, Department of Radiology, Leiden (Netherlands); Smit, Jan W.A. [Leiden University Medical Centre, Department of Endocrinology, Leiden (Netherlands)

    2009-06-15

[¹¹C]Palmitate PET can be used to study myocardial fatty acid metabolism in vivo. Several models have been applied to describe and quantify its kinetics, but to date no systematic analysis has been performed to define the most suitable model. In this study a total of 21 plasma input models comprising one to three compartments and up to six free rate constants were compared using statistical analysis of clinical data and simulations. To this end, 14 healthy volunteers were scanned using [¹¹C]palmitate, whilst myocardial blood flow was measured using H₂¹⁵O. Models including an oxidative pathway, representing production of ¹¹CO₂, provided significantly better fits to the data than other models. Model robustness was increased by fixing efflux of ¹¹CO₂ to the oxidation rate. Simulations showed that a three-tissue compartment model describing oxidation and esterification was feasible when no more than three free rate constants were included. Although further studies in patients are required to substantiate this choice, based on the accuracy of data description, the number of free parameters and generality, the three-tissue model with three free rate constants was the model of choice for describing [¹¹C]palmitate kinetics in terms of oxidation and fatty acid accumulation in the cell. (orig.)

  10. A novel substance flow analysis model for analysing multi-year phosphorus flow at the regional scale.

    Science.gov (United States)

    Chowdhury, Rubel Biswas; Moore, Graham A; Weatherley, Anthony J; Arora, Meenakshi

    2016-12-01

Achieving sustainable management of phosphorus (P) is crucial for both global food security and global environmental protection. In order to formulate informed policy measures to overcome existing barriers to achieving sustainable P management, there is a need for a sound understanding of the nature and magnitude of P flow through various systems at different geographical and temporal scales. So far, there is a limited understanding of the nature and magnitude of P flow over multiple years at the regional scale. In this study, we have developed a novel substance flow analysis (SFA) model in the MATLAB/Simulink® software platform that can be effectively utilized to analyse the nature and magnitude of multi-year P flow at the regional scale. The model is inclusive of all P flows and storage relating to all key systems, subsystems, processes or components, and the associated interactions of P flow required to represent a typical P flow system at the regional scale. In an annual time step, this model can analyse P flow and storage over as many years as required at a time, and therefore can indicate the trends and changes in P flow and storage over many years, which is not offered by the existing regional-scale SFA models of P. The model is flexible enough to allow any modification or the inclusion of any degree of complexity, and therefore can be utilized for analysing P flow in any region around the world. The application of the model in the case of the Gippsland region, Australia has revealed that the model generates essential information about the nature and magnitude of P flow at the regional scale which can be utilized for making improved management decisions towards attaining P sustainability. A systematic reliability check on the findings of the model application also indicates that the model produces reliable results.
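As a rough, hypothetical illustration of the annual-time-step substance flow accounting this record describes (not the authors' actual MATLAB/Simulink model), the core bookkeeping is a stock that carries phosphorus over from year to year under a mass balance; all flow values below are invented:

```python
def simulate_p_stock(stock0, years, inflow, outflow):
    """Annual-time-step substance flow: stock[t+1] = stock[t] + inflow[t] - outflow[t]."""
    stocks = [stock0]
    for t in range(years):
        stocks.append(stocks[-1] + inflow[t] - outflow[t])
    return stocks

# Hypothetical regional flows in kt P per year over 5 years.
inflow = [12.0, 11.5, 11.0, 10.5, 10.0]   # e.g. fertiliser + feed imports
outflow = [9.0, 9.2, 9.4, 9.6, 9.8]       # e.g. harvest exports + losses
stocks = simulate_p_stock(100.0, 5, inflow, outflow)
```

The multi-year trend in `stocks` is exactly the kind of output a regional SFA model exposes that single-year snapshots cannot.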

  11. Comparative study analysing women's childbirth satisfaction and obstetric outcomes across two different models of maternity care.

    Science.gov (United States)

    Conesa Ferrer, Ma Belén; Canteras Jordana, Manuel; Ballesteros Meseguer, Carmen; Carrillo García, César; Martínez Roche, M Emilia

    2016-08-26

Objectives: To describe the differences in obstetrical results and women's childbirth satisfaction across 2 different models of maternity care (biomedical model and humanised birth). Setting: 2 university hospitals in south-eastern Spain from April to October 2013. Design: A correlational descriptive study. Participants: A convenience sample of 406 women participated in the study, 204 of the biomedical model and 202 of the humanised model. Results: The differences in obstetrical results were (biomedical model/humanised model): onset of labour (spontaneous 66/137, augmentation 70/1, p=0.0005), pain relief (epidural 172/132, no pain relief 9/40, p=0.0005), mode of delivery (normal vaginal 140/165, instrumental 48/23, p=0.004), length of labour (0-4 hours 69/93, >4 hours 133/108, p=0.011), condition of perineum (intact perineum or tear 94/178, episiotomy 100/24, p=0.0005). The total questionnaire score (100) gave a mean (M) of 78.33 and SD of 8.46 in the biomedical model of care and an M of 82.01 and SD of 7.97 in the humanised model of care (p=0.0005). In the analysis of the results per item, statistical differences were found in 8 of the 9 subscales. The highest scores were reached in the humanised model of maternity care. Conclusions: The humanised model of maternity care offers better obstetrical outcomes and women's satisfaction scores during the labour, birth and immediate postnatal period than does the biomedical model.

  12. Comparative study analysing women's childbirth satisfaction and obstetric outcomes across two different models of maternity care

    Science.gov (United States)

    Conesa Ferrer, Ma Belén; Canteras Jordana, Manuel; Ballesteros Meseguer, Carmen; Carrillo García, César; Martínez Roche, M Emilia

    2016-01-01

    Objectives To describe the differences in obstetrical results and women's childbirth satisfaction across 2 different models of maternity care (biomedical model and humanised birth). Setting 2 university hospitals in south-eastern Spain from April to October 2013. Design A correlational descriptive study. Participants A convenience sample of 406 women participated in the study, 204 of the biomedical model and 202 of the humanised model. Results The differences in obstetrical results were (biomedical model/humanised model): onset of labour (spontaneous 66/137, augmentation 70/1, p=0.0005), pain relief (epidural 172/132, no pain relief 9/40, p=0.0005), mode of delivery (normal vaginal 140/165, instrumental 48/23, p=0.004), length of labour (0–4 hours 69/93, >4 hours 133/108, p=0.011), condition of perineum (intact perineum or tear 94/178, episiotomy 100/24, p=0.0005). The total questionnaire score (100) gave a mean (M) of 78.33 and SD of 8.46 in the biomedical model of care and an M of 82.01 and SD of 7.97 in the humanised model of care (p=0.0005). In the analysis of the results per items, statistical differences were found in 8 of the 9 subscales. The highest scores were reached in the humanised model of maternity care. Conclusions The humanised model of maternity care offers better obstetrical outcomes and women's satisfaction scores during the labour, birth and immediate postnatal period than does the biomedical model. PMID:27566632

  13. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    Science.gov (United States)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

Inverse modeling seeks model parameters given a set of observations. However, because practical problems often involve large numbers of measurements and numerous model parameters, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved by these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). Compared with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
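The basic Levenberg-Marquardt iteration that this record builds on (before any Krylov-subspace acceleration or subspace recycling) can be sketched in a few lines of Python; the toy exponential-fit problem below is hypothetical and stands in for the paper's much larger transmissivity inversion:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, lam=1e-3, iters=50):
    """Minimal LM loop: solve (J^T J + lam*I) dp = -J^T r, adapting the damping lam."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = residual(p)
        J = jacobian(p)
        dp = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), -J.T @ r)
        if np.sum(residual(p + dp) ** 2) < np.sum(r ** 2):
            p, lam = p + dp, lam * 0.5   # accept step, trust the local model more
        else:
            lam *= 10.0                  # reject step, increase damping
    return p

# Hypothetical toy problem: fit y = a * exp(-b * t) to noiseless data.
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.5 * t)
residual = lambda p: p[0] * np.exp(-p[1] * t) - y
jacobian = lambda p: np.column_stack([np.exp(-p[1] * t),
                                      -p[0] * t * np.exp(-p[1] * t)])
p_hat = levenberg_marquardt(residual, jacobian, p0=[1.0, 1.0])
```

The `np.linalg.solve` call is the expensive step the paper attacks: for highly parameterized models its cost grows rapidly, which motivates projecting the system onto a low-dimensional Krylov subspace instead.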

  14. Comparative Analyses of MIRT Models and Software (BMIRT and flexMIRT)

    Science.gov (United States)

    Yavuz, Guler; Hambleton, Ronald K.

    2017-01-01

    Application of MIRT modeling procedures is dependent on the quality of parameter estimates provided by the estimation software and techniques used. This study investigated model parameter recovery of two popular MIRT packages, BMIRT and flexMIRT, under some common measurement conditions. These packages were specifically selected to investigate the…

  15. The Cannon 2: A data-driven model of stellar spectra for detailed chemical abundance analyses

    CERN Document Server

    Casey, Andrew R; Ness, Melissa; Rix, Hans-Walter; Ho, Anna Q Y; Gilmore, Gerry

    2016-01-01

    We have shown that data-driven models are effective for inferring physical attributes of stars (labels; Teff, logg, [M/H]) from spectra, even when the signal-to-noise ratio is low. Here we explore whether this is possible when the dimensionality of the label space is large (Teff, logg, and 15 abundances: C, N, O, Na, Mg, Al, Si, S, K, Ca, Ti, V, Mn, Fe, Ni) and the model is non-linear in its response to abundance and parameter changes. We adopt ideas from compressed sensing to limit overall model complexity while retaining model freedom. The model is trained with a set of 12,681 red-giant stars with high signal-to-noise spectroscopic observations and stellar parameters and abundances taken from the APOGEE Survey. We find that we can successfully train and use a model with 17 stellar labels. Validation shows that the model does a good job of inferring all 17 labels (typical abundance precision is 0.04 dex), even when we degrade the signal-to-noise by discarding ~50% of the observing time. The model dependencie...

  16. Analysing empowerment-oriented email consultation for parents : Development of the Guiding the Empowerment Process model

    NARCIS (Netherlands)

    Nieuwboer, C.C.; Fukkink, R.G.; Hermanns, J.M.A.

    2017-01-01

    Online consultation is increasingly offered by parenting practitioners, but it is not clear if it is feasible to provide empowerment-oriented support in a single session email consultation. Based on the empowerment theory, we developed the Guiding the Empowerment Process model (GEP model) to evaluat

  17. Transport of nutrients from land to sea: Global modeling approaches and uncertainty analyses

    NARCIS (Netherlands)

    Beusen, A.H.W.

    2014-01-01

    This thesis presents four examples of global models developed as part of the Integrated Model to Assess the Global Environment (IMAGE). They describe different components of global biogeochemical cycles of the nutrients nitrogen (N), phosphorus (P) and silicon (Si), with a focus on approaches to

  18. Modelling and analysing track cycling Omnium performances using statistical and machine learning techniques.

    Science.gov (United States)

    Ofoghi, Bahadorreza; Zeleznikow, John; Dwyer, Dan; Macmahon, Clare

    2013-01-01

    This article describes the utilisation of an unsupervised machine learning technique and statistical approaches (e.g., the Kolmogorov-Smirnov test) that assist cycling experts in the crucial decision-making processes for athlete selection, training, and strategic planning in the track cycling Omnium. The Omnium is a multi-event competition that will be included in the summer Olympic Games for the first time in 2012. Presently, selectors and cycling coaches make decisions based on experience and intuition. They rarely have access to objective data. We analysed both the old five-event (first raced internationally in 2007) and new six-event (first raced internationally in 2011) Omniums and found that the addition of the elimination race component to the Omnium has, contrary to expectations, not favoured track endurance riders. We analysed the Omnium data and also determined the inter-relationships between different individual events as well as between those events and the final standings of riders. In further analysis, we found that there is no maximum ranking (poorest performance) in each individual event that riders can afford whilst still winning a medal. We also found the required times for riders to finish the timed components that are necessary for medal winning. The results of this study consider the scoring system of the Omnium and inform decision-making toward successful participation in future major Omnium competitions.
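The two-sample Kolmogorov-Smirnov statistic mentioned in this record is simply the largest gap between the empirical distribution functions of the two samples. A minimal numpy sketch, with made-up lap-time data (not the study's Omnium results):

```python
import numpy as np

def ks_two_sample(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: max |ECDF_x - ECDF_y|."""
    x, y = np.sort(x), np.sort(y)
    pts = np.concatenate([x, y])
    F_x = np.searchsorted(x, pts, side="right") / len(x)
    F_y = np.searchsorted(y, pts, side="right") / len(y)
    return np.max(np.abs(F_x - F_y))

# Hypothetical lap-time samples (seconds) for two groups of riders.
sprinters = np.array([13.1, 13.4, 13.2, 13.8, 13.5])
endurance = np.array([13.9, 14.2, 14.0, 14.5, 14.1])
d = ks_two_sample(sprinters, endurance)  # 1.0: the samples do not overlap
```

In practice one would use `scipy.stats.ks_2samp`, which also returns a p-value; the statistic itself is exactly this ECDF gap.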

  19. Mathematical modeling of materially nonlinear problems in structural analyses, Part II: Application in contemporary software

    Directory of Open Access Journals (Sweden)

    Bonić Zoran

    2010-01-01

The paper presents the application of nonlinear material models in the software package Ansys. The development of the model theory is presented in the companion paper on mathematical modeling of materially nonlinear problems in structural analysis (Part I: theoretical foundations); here, the incremental-iterative procedure used by this package to solve materially nonlinear problems is described, and an example of modeling a spread footing using the bilinear kinematic and Drucker-Prager models is given. A comparative analysis of the results obtained by this modeling and the experimental research of the author was made. The load level corresponding to the onset of plastic deformation was noted, the development of deformations with increasing load was followed, and the distribution of dilatation in the footing was observed. Comparison of calculated and measured values of reinforcement dilatation shows very good agreement.

  20. Crowd-structure interaction in footbridges: Modelling, application to a real case-study and sensitivity analyses

    Science.gov (United States)

    Bruno, Luca; Venuti, Fiammetta

    2009-06-01

    A mathematical and computational model used to simulate crowd-structure interaction in lively footbridges is presented in this work. The model is based on the mathematical and numerical decomposition of the coupled multiphysical nonlinear system into two interacting subsystems. The model was conceived to simulate the synchronous lateral excitation phenomenon caused by pedestrians walking on footbridges. The model was first applied to simulate a crowd event on an actual footbridge, the T-bridge in Japan. Three sensitivity analyses were then performed on the same benchmark to evaluate the properties of the model. The simulation results show good agreement with the experimental data found in literature and the model could be considered a useful tool for designers and engineers in the different phases of footbridge design.

  1. Stochastic Spatio-Temporal Models for Analysing NDVI Distribution of GIMMS NDVI3g Images

    Directory of Open Access Journals (Sweden)

    Ana F. Militino

    2017-01-01

The normalized difference vegetation index (NDVI) is an important indicator for evaluating vegetation change, monitoring land surface fluxes or predicting crop models. Due to the great availability of images provided by different satellites in recent years, much attention has been devoted to testing trend changes with time series of individual NDVI pixels. However, the spatial dependence inherent in these data is usually lost unless global scales are analyzed. In this paper, we propose incorporating both the spatial and the temporal dependence among pixels using a stochastic spatio-temporal model for estimating the NDVI distribution thoroughly. The stochastic model is a state-space model that uses meteorological data of the Climatic Research Unit (CRU TS3.10) as auxiliary information. The model is estimated with the Expectation-Maximization (EM) algorithm. The result is a set of smoothed images providing an overall analysis of the NDVI distribution across space and time, where fluctuations generated by atmospheric disturbances, fire events, land-use/cover changes or engineering problems from image capture are treated as random fluctuations. The illustration is carried out with the third generation of NDVI images, termed NDVI3g, of the Global Inventory Modeling and Mapping Studies (GIMMS) in continental Spain. These data are taken in bimonthly periods from January 2011 to December 2013, but the model can be applied to many other variables, countries or regions with different resolutions.

  2. Neural Network-Based Model for Landslide Susceptibility and Soil Longitudinal Profile Analyses

    DEFF Research Database (Denmark)

    Farrokhzad, F.; Barari, Amin; Choobbasti, A. J.

    2011-01-01

    The purpose of this study was to create an empirical model for assessing the landslide risk potential at Savadkouh Azad University, which is located in the rural surroundings of Savadkouh, about 5 km from the city of Pol-Sefid in northern Iran. The soil longitudinal profile of the city of Babol......, located 25 km from the Caspian Sea, also was predicted with an artificial neural network (ANN). A multilayer perceptron neural network model was applied to the landslide area and was used to analyze specific elements in the study area that contributed to previous landsliding events. The ANN models were...... studies in landslide susceptibility zonation....

  3. Model-Based Fault Diagnosis: Performing Root Cause and Impact Analyses in Real Time

    Science.gov (United States)

    Figueroa, Jorge F.; Walker, Mark G.; Kapadia, Ravi; Morris, Jonathan

    2012-01-01

Generic, object-oriented fault models, built according to causal-directed graph theory, have been integrated into an overall software architecture dedicated to monitoring and predicting the health of mission-critical systems. Processing over the generic fault models is triggered by event detection logic that is defined according to the specific functional requirements of the system and its components. Once triggered, the fault models provide an automated way both to perform upstream root cause analysis (RCA) and to predict downstream effects (impact analysis). The methodology has been applied to integrated system health management (ISHM) implementations at NASA SSC's Rocket Engine Test Stands (RETS).

  4. A Multiple Risk Factors Model of the Development of Aggression among Early Adolescents from Urban Disadvantaged Neighborhoods

    Science.gov (United States)

    Kim, Sangwon; Orpinas, Pamela; Kamphaus, Randy; Kelder, Steven H.

    2011-01-01

    This study empirically derived a multiple risk factors model of the development of aggression among middle school students in urban, low-income neighborhoods, using Hierarchical Linear Modeling (HLM). Results indicated that aggression increased from sixth to eighth grade. Additionally, the influences of four risk domains (individual, family,…
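A rough intuition for the growth-modeling part of HLM can be given with a two-stage approximation: fit a level-1 trajectory per student, then summarise the slopes at level 2. The data below are simulated under assumed values (a mean increase of +2 points per grade); full HLM would additionally pool (shrink) the per-student estimates rather than fitting them independently:

```python
import numpy as np

rng = np.random.default_rng(1)
grades = np.array([6.0, 7.0, 8.0])            # measurement occasions (grades 6-8)

# Level 1: each student has an intercept and a growth slope in aggression score.
true_slopes = rng.normal(2.0, 0.5, size=100)  # hypothetical between-student variation
scores = (20 + true_slopes[:, None] * (grades - 6)   # time centred at grade 6
          + rng.normal(0, 1.0, size=(100, 3)))       # level-1 measurement noise

# Two-stage approximation of an HLM growth model: an OLS line per student,
# then a level-2 summary of the slopes (no shrinkage, unlike full HLM).
slopes = np.array([np.polyfit(grades - 6, s, 1)[0] for s in scores])
mean_growth = slopes.mean()
```

In full HLM, risk domains such as the individual and family variables named above would enter as level-2 predictors of each student's intercept and slope.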

  5. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of this thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section transmission electron microscopy is used to acquire images from two experimental groups of rats: 1) rats subjected to a behavioral model of stress and 2) rats subjected to sham stress as the control group. The synaptic vesicle distribution and interactions are modeled by employing a point process approach. The model is able to correctly separate the two experimental groups. Two different approaches to estimate the thickness of each section of specimen being imaged are introduced. The first approach uses Darboux frame and Cartan matrix to measure the isophote curvature and the second approach is based...

  6. Genomic analyses with biofilter 2.0: knowledge driven filtering, annotation, and model development

    National Research Council Canada - National Science Library

    Pendergrass, Sarah A; Frase, Alex; Wallace, John; Wolfe, Daniel; Katiyar, Neerja; Moore, Carrie; Ritchie, Marylyn D

    2013-01-01

    .... We have now extensively revised and updated the multi-purpose software tool Biofilter that allows researchers to annotate and/or filter data as well as generate gene-gene interaction models based...

  7. Wave modelling for the North Indian Ocean using MSMR analysed winds

    Digital Repository Service at National Institute of Oceanography (India)

    Vethamony, P.; Sudheesh, K.; Rupali, S.P.; Babu, M.T.; Jayakumar, S.; Saran, A.K.; Basu, S.K.; Kumar, R.; Sarkar, A.

    NCMRWF (National Centre for Medium Range Weather Forecast) winds assimilated with MSMR (Multi-channel Scanning Microwave Radiometer) winds are used as input to MIKE21 Offshore Spectral Wave model (OSW) which takes into account wind induced wave...

  8. The strut-and-tie models in reinforced concrete structures analysed by a numerical technique

    Directory of Open Access Journals (Sweden)

    V. S. Almeida

    Full Text Available The strut-and-tie models are appropriate to design and to detail certain types of structural elements in reinforced concrete and in regions of stress concentrations, called "D" regions. This is a good model representation of the structural behavior and mechanism. The numerical techniques presented herein are used to identify stress regions which represent the strut-and-tie elements and to quantify their respective efforts. Elastic linear plane problems are analyzed using strut-and-tie models by coupling the classical evolutionary structural optimization (ESO) and a new variant called SESO (Smoothing ESO) in a finite element formulation. The SESO method is based on the procedure of gradual reduction of the stiffness contribution of inefficient elements at lower stress until they no longer have any influence. Optimal topologies of strut-and-tie models are presented in several instances, showing good agreement with other pioneering works and allowing the design of reinforcement for structural elements.

  9. Het Job-demands resources model: Een motivationele analyse vanuit de Zelf-Determinatie Theorie

    OpenAIRE

    2013-01-01

    This article details the doctoral dissertation of Anja Van Broeck (2010), which examines employee motivation from two recent perspectives: the job demands-resources model (JD-R model) and the self-determination theory (SDT). The article primarily highlights how the studies of this dissertation add to the JD-R model by relying on SDT. First, a distinction is made between two types of job demands: job hindrances and job challenges. Second, motivation is shown to represent the underlying mechanism ...

  10. Optimization of extraction procedures for ecotoxicity analyses: Use of TNT contaminated soil as a model

    Energy Technology Data Exchange (ETDEWEB)

    Sunahara, G.I.; Renoux, A.Y.; Dodard, S.; Paquet, L.; Hawari, J. [BRI, Montreal, Quebec (Canada); Ampleman, G.; Lavigne, J.; Thiboutot, S. [DREV, Courcelette, Quebec (Canada)

    1995-12-31

    The environmental impact of energetic substances (TNT, RDX, GAP, NC) in soil is being examined using ecotoxicity bioassays. An extraction method was characterized to optimize bioassay assessment of TNT toxicity in different soil types. Using the Microtox{trademark} (Photobacterium phosphoreum) assay and non-extracted samples, TNT was most acutely toxic (IC{sub 50} = 1--9 ppm), followed by RDX and GAP; NC did not show obvious toxicity (probably due to solubility limitations). TNT (in 0.25% DMSO) yielded an IC{sub 50} of 0.98 {+-} 0.10 (SD) ppm. The 96h-EC{sub 50} (Selenastrum capricornutum growth inhibition) of TNT (1.1 ppm) was higher than that of GAP and RDX; NC was not apparently toxic (probably due to solubility limitations). Soil samples (sand or a silt-sand mix) were spiked with either 2,000 or 20,000 mg TNT/kg soil, and were adjusted to 20% moisture. Samples were later mixed with acetonitrile, sonicated, and then treated with CaCl{sub 2} before filtration, HPLC and ecotoxicity analyses. Results indicated that: the recovery of TNT from soil (97.51% {+-} 2.78%) was independent of the type of soil or moisture content; CaCl{sub 2} interfered with TNT toxicity; and acetonitrile extracts could not be used directly for algal testing. When TNT extracts were diluted to fixed concentrations, similar TNT-induced ecotoxicities were generally observed, suggesting that, apart from the expected effects of TNT concentrations in the soil, the soil texture and moisture effects were minimal. The extraction procedure permits HPLC analyses as well as ecotoxicity testing and minimizes secondary soil matrix effects. Further studies will be conducted to examine the toxic effects of other energetic substances present in soil using this approach.

  11. Analysing stratified medicine business models and value systems: innovation-regulation interactions.

    Science.gov (United States)

    Mittra, James; Tait, Joyce

    2012-09-15

    Stratified medicine offers both opportunities and challenges to the conventional business models that drive pharmaceutical R&D. Given the increasingly unsustainable blockbuster model of drug development, due in part to maturing product pipelines, alongside increasing demands from regulators, healthcare providers and patients for higher standards of safety, efficacy and cost-effectiveness of new therapies, stratified medicine promises a range of benefits to pharmaceutical and diagnostic firms as well as healthcare providers and patients. However, the transition from 'blockbusters' to what might now be termed 'niche-busters' will require the adoption of new, innovative business models, the identification of different and perhaps novel types of value along the R&D pathway, and a smarter approach to regulation to facilitate innovation in this area. In this paper we apply the Innogen Centre's interdisciplinary ALSIS methodology, which we have developed for the analysis of life science innovation systems in contexts where the value creation process is lengthy, expensive and highly uncertain, to this emerging field of stratified medicine. In doing so, we consider the complex collaboration, timing, coordination and regulatory interactions that shape business models, value chains and value systems relevant to stratified medicine. More specifically, we explore in some depth two convergence models for co-development of a therapy and diagnostic before market authorisation, highlighting the regulatory requirements and policy initiatives within the broader value system environment that have a key role in determining the probable success and sustainability of these models.

  12. Analysing the Costs of Integrated Care: A Case on Model Selection for Chronic Care Purposes

    Directory of Open Access Journals (Sweden)

    Marc Carreras

    2016-08-01

    Full Text Available Background: The objective of this study is to investigate whether the algorithm proposed by Manning and Mullahy, a consolidated health economics procedure, can also be used to estimate individual costs for different groups of healthcare services in the context of integrated care. Methods: A cross-sectional study focused on the population of the Baix Empordà (Catalonia, Spain) for the year 2012 (N = 92,498 individuals). A set of individual cost models as a function of sex, age and morbidity burden were adjusted and individual healthcare costs were calculated using a retrospective full-costing system. The individual morbidity burden was inferred using the Clinical Risk Groups (CRG) patient classification system. Results: Depending on the characteristics of the data, and according to the algorithm criteria, the choice of model was a linear model on the log of costs or a generalized linear model with a log link. We checked for goodness of fit, accuracy, linear structure and heteroscedasticity for the models obtained. Conclusion: The proposed algorithm identified a set of suitable cost models for the distinct groups of services integrated care entails. The individual morbidity burden was found to be indispensable when allocating appropriate resources to targeted individuals.
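One step of a Manning/Mullahy-style model check can be sketched as follows: fit OLS on log costs, then run a Park-type test regressing log squared residuals on the fitted values to probe heteroscedasticity. The covariates and the cost-generating process below are invented for illustration, not the Baix Empordà data:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000
age = rng.uniform(20, 90, n)
morbidity = rng.integers(1, 6, n).astype(float)   # hypothetical morbidity burden 1-5

# Hypothetical individual healthcare costs, log-normal around a linear predictor.
mu = 4.0 + 0.01 * age + 0.30 * morbidity
cost = np.exp(mu + rng.normal(0, 0.5, n))

# Step 1 of a Manning/Mullahy-style check: OLS on log(cost).
X = np.column_stack([np.ones(n), age, morbidity])
beta, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
fitted = X @ beta
resid = np.log(cost) - fitted

# Park-type test: regress log squared residuals on fitted values; a slope
# near zero suggests homoscedastic log-scale errors (log-OLS acceptable).
park_slope = np.polyfit(fitted, np.log(resid ** 2 + 1e-12), 1)[0]
```

A Park-test slope far from zero would point toward a GLM with a log link rather than log-OLS with a simple retransformation, which is the kind of branching the cited algorithm formalises.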

  13. An LP-model to analyse economic and ecological sustainability on Dutch dairy farms: model presentation and application for experimental farm "de Marke"

    NARCIS (Netherlands)

    Calker, van K.J.; Berentsen, P.B.M.; Boer, de I.J.M.; Giesen, G.W.J.; Huirne, R.B.M.

    2004-01-01

    Farm level modelling can be used to determine how farm management adjustments and environmental policy affect different sustainability indicators. In this paper indicators were included in a dairy farm LP (linear programming)-model to analyse the effects of environmental policy and management

  14. Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses

    CERN Document Server

    Li, Shu; The ATLAS collaboration

    2017-01-01

    Proceeding for the poster presentation at LHCP2017, Shanghai, China on the topic of "Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses" (ATL-PHYS-SLIDE-2017-265 https://cds.cern.ch/record/2265389) Deadline: 01/09/2017

  15. AMME: an Automatic Mental Model Evaluation to analyse user behaviour traced in a finite, discrete state space.

    Science.gov (United States)

    Rauterberg, M

    1993-11-01

    To support the human factors engineer in designing a good user interface, a method has been developed to analyse the empirical data of the interactive user behaviour traced in a finite discrete state space. The sequences of actions produced by the user contain valuable information about the mental model of this user, the individual problem solution strategies for a given task and the hierarchical structure of the task-subtasks relationships. The presented method, AMME, can analyse the action sequences and automatically generate (1) a net description of the task-dependent model of the user, (2) a complete state transition matrix, and (3) various quantitative measures of the user's task solving process. The behavioural complexity of task-solving processes carried out by novices has been found to be significantly larger than the complexity of task-solving processes carried out by experts.
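The "complete state transition matrix" AMME derives from logged action sequences can be illustrated generically; the action names in the trace below are hypothetical:

```python
from collections import defaultdict

def transition_matrix(sequence):
    """Count state-to-state transitions in a logged action sequence and
    normalise each row to transition probabilities."""
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(sequence, sequence[1:]):
        counts[a][b] += 1
    probs = {}
    for a, row in counts.items():
        total = sum(row.values())
        probs[a] = {b: n / total for b, n in row.items()}
    return probs

# Hypothetical trace of user actions in a dialogue system.
trace = ["menu", "edit", "save", "menu", "edit", "edit", "save", "menu"]
P = transition_matrix(trace)
```

Measures of behavioural complexity such as those AMME reports can then be computed over this matrix, e.g. the number of distinct transitions or the entropy of each row.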

  16. Model-driven meta-analyses for informing health care: a diabetes meta-analysis as an exemplar.

    Science.gov (United States)

    Brown, Sharon A; Becker, Betsy Jane; García, Alexandra A; Brown, Adama; Ramírez, Gilbert

    2015-04-01

    A relatively novel type of meta-analysis, a model-driven meta-analysis, involves the quantitative synthesis of descriptive, correlational data and is useful for identifying key predictors of health outcomes and informing clinical guidelines. Few such meta-analyses have been conducted and thus, large bodies of research remain unsynthesized and uninterpreted for application in health care. We describe the unique challenges of conducting a model-driven meta-analysis, focusing primarily on issues related to locating a sample of published and unpublished primary studies, extracting and verifying descriptive and correlational data, and conducting analyses. A current meta-analysis of the research on predictors of key health outcomes in diabetes is used to illustrate our main points.

  17. Multicollinearity in prognostic factor analyses using the EORTC QLQ-C30: identification and impact on model selection.

    Science.gov (United States)

    Van Steen, Kristel; Curran, Desmond; Kramer, Jocelyn; Molenberghs, Geert; Van Vreckem, Ann; Bottomley, Andrew; Sylvester, Richard

    2002-12-30

    Clinical and quality of life (QL) variables from an EORTC clinical trial of first-line chemotherapy in advanced breast cancer were used in a prognostic factor analysis of survival and response to chemotherapy. For response, different final multivariate models were obtained from forward and backward selection methods, suggesting a disconcerting instability. Quality of life was measured using the EORTC QLQ-C30 questionnaire completed by patients. Subscales on the questionnaire are known to be highly correlated, and therefore it was hypothesized that multicollinearity contributed to model instability. A correlation matrix indicated that global QL was highly correlated with 7 out of 11 variables. In a first attempt to explore multicollinearity, we used global QL as the dependent variable in a regression model with the other QL subscales as predictors. Afterwards, standard diagnostic tests for multicollinearity were performed. An exploratory principal components analysis and factor analysis of the QL subscales identified at most three important components and indicated that inclusion of global QL made minimal difference to the loadings on each component, suggesting that it is redundant in the model. In a second approach, we advocate a bootstrap technique to assess the stability of the models. Based on these analyses, and since global QL exacerbates problems of multicollinearity, we recommend that global QL be excluded from prognostic factor analyses using the QLQ-C30. The prognostic factor analysis was rerun without global QL in the model, and selected the same significant prognostic factors as before.
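A standard diagnostic for the multicollinearity described above is the variance inflation factor (VIF). The sketch below computes VIFs with plain least squares on simulated subscale scores in which a "global QL" variable is built to be redundant, mirroring the situation reported for the QLQ-C30; thresholds such as VIF > 5 or 10 are conventional rules of thumb, not from the paper:

```python
import numpy as np

def vif(X):
    """Variance inflation factor for each column of X (columns = predictors):
    VIF_j = 1 / (1 - R^2) from regressing column j on all other columns."""
    out = []
    n, p = X.shape
    for j in range(p):
        y = X[:, j]
        Z = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ beta
        r2 = 1 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return np.array(out)

rng = np.random.default_rng(3)
fatigue = rng.normal(size=500)
pain = rng.normal(size=500)
# "global QL" built to be highly correlated with the other subscales,
# mimicking the redundancy reported for the QLQ-C30 global scale.
global_ql = 0.7 * fatigue + 0.7 * pain + rng.normal(0, 0.3, size=500)
X = np.column_stack([fatigue, pain, global_ql])
factors = vif(X)
```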

  18. Analysing improvements to on-street public transport systems: a mesoscopic model approach

    DEFF Research Database (Denmark)

    Ingvardson, Jesper Bláfoss; Kornerup Jensen, Jonas; Nielsen, Otto Anker

    2017-01-01

    Light rail transit and bus rapid transit have been shown to be efficient and cost-effective in improving public transport systems in cities around the world. As these systems comprise various elements, which can be tailored to any given setting, e.g. pre-board fare-collection, holding strategies...... and other advanced public transport systems (APTS), the attractiveness of such systems depends heavily on their implementation. In the early planning stage it is advantageous to deploy simple and transparent models to evaluate possible ways of implementation. For this purpose, the present study develops...... a mesoscopic model which makes it possible to evaluate public transport operations in detail, including dwell times, intelligent traffic signal timings and holding strategies while modelling impacts from other traffic using statistical distributional data, thereby ensuring simplicity in use and fast...

  19. Latent Variable Modelling and Item Response Theory Analyses in Marketing Research

    Directory of Open Access Journals (Sweden)

    Brzezińska Justyna

    2016-12-01

    Full Text Available Item Response Theory (IRT) is a modern statistical method using latent variables designed to model the interaction between a subject’s ability and the item level stimuli (difficulty, guessing). Item responses are treated as the outcome (dependent) variables, and the examinee’s ability and the items’ characteristics are the latent predictor (independent) variables. IRT models the relationship between a respondent’s trait (ability, attitude) and the pattern of item responses. Thus, the estimation of individual latent traits can differ even for two individuals with the same total scores. IRT scores can yield additional benefits and this will be discussed in detail. In this paper theory and application with R software, with the use of packages designed for modelling IRT, will be presented.
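The claim that two respondents with equal total scores can receive different trait estimates holds once items differ in discrimination, as in the two-parameter logistic (2PL) model. The abstract works with R packages; the sketch below uses Python instead, with made-up item parameters and a crude grid-search MLE:

```python
import math

def p_2pl(theta, a, b):
    """Two-parameter logistic IRT: P(correct) with discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def mle_theta(responses, a, b):
    """Crude grid-search MLE of the latent trait for one respondent."""
    best, best_ll = 0.0, -float("inf")
    for i in range(801):
        theta = -4 + i * 0.01
        ll = 0.0
        for x, aj, bj in zip(responses, a, b):
            p = p_2pl(theta, aj, bj)
            ll += math.log(p if x == 1 else 1 - p)
        if ll > best_ll:
            best, best_ll = theta, ll
    return best

a = [0.5, 2.0]   # weakly vs strongly discriminating item (assumed values)
b = [0.0, 0.0]
theta_weak_only = mle_theta([1, 0], a, b)    # solved only the weak item
theta_strong_only = mle_theta([0, 1], a, b)  # solved only the strong item
```

Both response patterns have the same total score (1 of 2), yet the respondent who solved the strongly discriminating item receives a higher ability estimate, which is exactly the property the abstract highlights.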

  20. Modeling human papillomavirus and cervical cancer in the United States for analyses of screening and vaccination

    Directory of Open Access Journals (Sweden)

    Ortendahl Jesse

    2007-10-01

    Full Text Available Abstract Background To provide quantitative insight into current U.S. policy choices for cervical cancer prevention, we developed a model of human papillomavirus (HPV) and cervical cancer, explicitly incorporating uncertainty about the natural history of disease. Methods We developed a stochastic microsimulation of cervical cancer that distinguishes different HPV types by their incidence, clearance, persistence, and progression. Input parameter sets were sampled randomly from uniform distributions, and simulations undertaken with each set. Through systematic reviews and formal data synthesis, we established multiple epidemiologic targets for model calibration, including age-specific prevalence of HPV by type, age-specific prevalence of cervical intraepithelial neoplasia (CIN), HPV type distribution within CIN and cancer, and age-specific cancer incidence. For each set of sampled input parameters, likelihood-based goodness-of-fit (GOF) scores were computed based on comparisons between model-predicted outcomes and calibration targets. Using 50 randomly resampled, good-fitting parameter sets, we assessed the external consistency and face validity of the model, comparing predicted screening outcomes to independent data. To illustrate the advantage of this approach in reflecting parameter uncertainty, we used the 50 sets to project the distribution of health outcomes in U.S. women under different cervical cancer prevention strategies. Results Approximately 200 good-fitting parameter sets were identified from 1,000,000 simulated sets. Modeled screening outcomes were externally consistent with results from multiple independent data sources. Based on 50 good-fitting parameter sets, the expected reductions in lifetime risk of cancer with annual or biennial screening were 76% (range across 50 sets: 69–82%) and 69% (60–77%), respectively. The reduction from vaccination alone was 75%, although it ranged from 60% to 88%, reflecting considerable parameter
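The calibration strategy described (sample inputs from uniform priors, score each simulation against epidemiologic targets, keep the good-fitting sets) can be sketched on a toy one-target model; the "natural history" function, the target value and the error scale below are all invented for illustration:

```python
import random

random.seed(4)

# Toy "natural history" model: equilibrium prevalence implied by
# an incidence rate and a clearance rate (purely illustrative).
def model_prevalence(incidence, clearance):
    return incidence / (incidence + clearance)

target = 0.25        # hypothetical calibration target (observed prevalence)
sigma = 0.02         # assumed measurement error around the target

def gof(pred):
    """Gaussian log-likelihood-style goodness-of-fit score (higher is better)."""
    return -((pred - target) / sigma) ** 2

# Sample parameter sets from uniform priors and keep the best-fitting ones.
draws = []
for _ in range(10000):
    inc = random.uniform(0.0, 0.5)
    clr = random.uniform(0.0, 1.0)
    draws.append((gof(model_prevalence(inc, clr)), inc, clr))

good = sorted(draws, reverse=True)[:50]   # 50 good-fitting sets, as in the paper
preds = [model_prevalence(i, c) for _, i, c in good]
```

Projecting outcomes with every retained set, rather than a single best fit, is what lets the paper report ranges such as 69–82% alongside point estimates.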

  1. Analysing the distribution of synaptic vesicles using a spatial point process model

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta

    2014-01-01

    Stress can affect the brain functionality in many ways. As the synaptic vesicles have a major role in nervous signal transportation in synapses, their distribution in relationship to the active zone is very important in studying the neuron responses. We study the effect of stress on brain functio...... in the two groups. The spatial distributions are modelled using spatial point process models with an inhomogeneous conditional intensity and repulsive pairwise interactions. Our results verify the hypothesis that the two groups have different spatial distributions....

  2. Analysing Amazonian forest productivity using a new individual and trait-based model (TFS v.1)

    Science.gov (United States)

    Fyllas, N. M.; Gloor, E.; Mercado, L. M.; Sitch, S.; Quesada, C. A.; Domingues, T. F.; Galbraith, D. R.; Torre-Lezama, A.; Vilanova, E.; Ramírez-Angulo, H.; Higuchi, N.; Neill, D. A.; Silveira, M.; Ferreira, L.; Aymard C., G. A.; Malhi, Y.; Phillips, O. L.; Lloyd, J.

    2014-07-01

    Repeated long-term censuses have revealed large-scale spatial patterns in Amazon basin forest structure and dynamism, with some forests in the west of the basin having up to twice as high a rate of aboveground biomass production and tree recruitment as forests in the east. Possible causes for this variation could be the climatic and edaphic gradients across the basin and/or the spatial distribution of tree species composition. To help understand the causes of this variation, a new individual-based model of tropical forest growth, designed to take full advantage of the forest census data available from the Amazonian Forest Inventory Network (RAINFOR), has been developed. The model allows for within-stand variations in tree size distribution and key functional traits and between-stand differences in climate and soil physical and chemical properties. It runs at the stand level with four functional traits - leaf dry mass per area (Ma), leaf nitrogen (NL) and phosphorus (PL) content and wood density (DW) - varying from tree to tree in a way that replicates the observed continua found within each stand. We first applied the model to validate canopy-level water fluxes at three eddy covariance flux measurement sites. For all three sites the canopy-level water fluxes were adequately simulated. We then applied the model at seven plots, where intensive measurements of carbon allocation are available. Tree-by-tree multi-annual growth rates generally agreed well with observations for small trees, but with deviations identified for larger trees. At the stand level, simulations at 40 plots were used to explore the influence of climate and soil nutrient availability on the gross (ΠG) and net (ΠN) primary production rates as well as the carbon use efficiency (CU). Simulated ΠG, ΠN and CU were not associated with temperature. On the other hand, all three measures of stand level productivity were positively related to both mean annual precipitation and soil nutrient status.

  3. Meta-Analysis in Higher Education: An Illustrative Example Using Hierarchical Linear Modeling

    Science.gov (United States)

    Denson, Nida; Seltzer, Michael H.

    2011-01-01

    The purpose of this article is to provide higher education researchers with an illustrative example of meta-analysis utilizing hierarchical linear modeling (HLM). This article demonstrates the step-by-step process of meta-analysis using a recently-published study examining the effects of curricular and co-curricular diversity activities on racial…
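The random-effects pooling that HLM generalizes in this setting can be sketched with the classic DerSimonian-Laird estimator; the effect sizes and sampling variances below are made up for illustration:

```python
# Random-effects meta-analysis (DerSimonian-Laird), a special case of the
# HLM formulation with no level-2 predictors; the numbers are hypothetical.
effects = [0.30, 0.45, 0.10, 0.60, 0.25]    # study effect sizes d_i
variances = [0.02, 0.03, 0.01, 0.05, 0.02]  # their sampling variances v_i

w = [1 / v for v in variances]              # fixed-effect weights
d_fe = sum(wi * di for wi, di in zip(w, effects)) / sum(w)

# Between-study variance tau^2 via the DL moment estimator.
Q = sum(wi * (di - d_fe) ** 2 for wi, di in zip(w, effects))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (len(effects) - 1)) / c)

# Random-effects weights and pooled estimate.
w_re = [1 / (v + tau2) for v in variances]
d_re = sum(wi * di for wi, di in zip(w_re, effects)) / sum(w_re)
```

HLM extends this by letting study-level covariates (moderators, e.g. type of diversity activity) enter the level-2 equation instead of pooling everything to a single mean.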

  5. Integrated modeling/analyses of thermal-shock effects in SNS targets

    Energy Technology Data Exchange (ETDEWEB)

    Taleyarkhan, R.P.; Haines, J. [Oak Ridge National Lab., TN (United States)

    1996-06-01

    In a spallation neutron source (SNS), extremely rapid energy pulses are introduced in target materials such as mercury, lead, tungsten, uranium, etc. Shock phenomena in such systems may possibly lead to structural material damage beyond the design basis. As expected, the progression of shock waves and interaction with surrounding materials for liquid targets can be quite different from that in solid targets. The purpose of this paper is to describe ORNL's modeling framework for 'integrated' assessment of thermal-shock issues in liquid and solid target designs. This modeling framework is being developed based upon expertise gained from past reactor safety studies, especially those related to the Advanced Neutron Source (ANS) Project. Unlike previous separate-effects modeling approaches employed (for evaluating target behavior when subjected to thermal shocks), the present approach treats the overall problem in a coupled manner using state-of-the-art equations of state for materials of interest (viz., mercury, tungsten and uranium). That is, the modeling framework simultaneously accounts for localized (and distributed) compression pressure pulse generation due to transient heat deposition, the transport of this shock wave outwards, interaction with surrounding boundaries, feedback to mercury from structures, and multi-dimensional reflection patterns and stress-induced (possible) breakup or fracture.

  6. Testing Mediation Using Multiple Regression and Structural Equation Modeling Analyses in Secondary Data

    Science.gov (United States)

    Li, Spencer D.

    2011-01-01

    Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…

  7. Using Latent Trait Measurement Models to Analyse Attitudinal Data: A Synthesis of Viewpoints.

    Science.gov (United States)

    Andrich, David

    A Rasch model for ordered response categories is derived and it is shown that it retains the key features of both the Thurstone and Likert approaches to studying attitude. Key features of the latter approaches are reviewed. Characteristics in common with the Thurstone approach are: statements are scaled with respect to their affective values;…

  8. An anisotropic numerical model for thermal hydraulic analyses: application to liquid metal flow in fuel assemblies

    Science.gov (United States)

    Vitillo, F.; Vitale Di Maio, D.; Galati, C.; Caruso, G.

    2015-11-01

    A CFD analysis has been carried out to study the thermal-hydraulic behavior of liquid metal coolant in a fuel assembly of triangular lattice. In order to obtain fast and accurate results, the isotropic two-equation RANS approach is often used in nuclear engineering applications. A different approach is provided by Non-Linear Eddy Viscosity Models (NLEVM), which try to take into account anisotropic effects by a nonlinear formulation of the Reynolds stress tensor. This approach is very promising, as it results in a very good numerical behavior and in a potentially better fluid flow description than classical isotropic models. An Anisotropic Shear Stress Transport (ASST) model, implemented into commercial software, has been applied in previous studies, showing very trustful results for a large variety of flows and applications. In the paper, the ASST model has been used to perform an analysis of the fluid flow inside the fuel assembly of the ALFRED lead-cooled fast reactor. Then, a comparison between the results of wall-resolved conjugate heat transfer computations and the results of a decoupled analysis using a suitable thermal wall-function previously implemented into the solver has been performed and presented.

  9. Cyclodextrin--piroxicam inclusion complexes: analyses by mass spectrometry and molecular modelling

    Science.gov (United States)

    Gallagher, Richard T.; Ball, Christopher P.; Gatehouse, Deborah R.; Gates, Paul J.; Lobell, Mario; Derrick, Peter J.

    1997-11-01

    Mass spectrometry has been used to investigate the natures of non-covalent complexes formed between the anti-inflammatory drug piroxicam and α-, β- and γ-cyclodextrins. Energies of these complexes have been calculated by means of molecular modelling. There is a correlation between peak intensities in the mass spectra and the calculated energies.

  10. Survival data analyses in ecotoxicology: critical effect concentrations, methods and models. What should we use?

    Science.gov (United States)

    Forfait-Dubuc, Carole; Charles, Sandrine; Billoir, Elise; Delignette-Muller, Marie Laure

    2012-05-01

    In ecotoxicology, critical effect concentrations are the most common indicators to quantitatively assess risks for species exposed to contaminants. Three types of critical effect concentrations are classically used: lowest/no observed effect concentration (LOEC/NOEC), LC(x) (x% lethal concentration) and NEC (no effect concentration). In this article, for each of these three types of critical effect concentration, we compared methods or models used for their estimation and proposed one as the most appropriate. We then compared these critical effect concentrations to each other. For that, we used nine survival data sets corresponding to the exposure of D. magna to nine different contaminants, for which the time-course of the response was monitored. Our results showed that: (i) LOEC/NOEC values at day 21 were method-dependent, and the Cochran-Armitage test with a step-down procedure appeared to be the most protective for the environment; (ii) all concentration-response models we compared gave close values of LC50 at day 21, but the Weibull model had the lowest global mean deviance; (iii) a simple threshold NEC model, dependent on both concentration and time, more completely described the whole data set (i.e. all time points) and enabled a precise estimation of the NEC. We then compared the three critical effect concentrations and argued that the use of the NEC might be a good option for environmental risk assessment.
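The LC(x) family of critical effect concentrations comes from a fitted concentration-response curve. The sketch below uses a two-parameter log-logistic form with assumed parameter values (the study itself preferred a Weibull model on deviance grounds); once the curve is fitted, any LCx follows from a closed-form inversion:

```python
def loglogistic_survival(conc, lc50, slope):
    """Two-parameter log-logistic concentration-response curve:
    fraction surviving at concentration conc."""
    if conc == 0:
        return 1.0
    return 1.0 / (1.0 + (conc / lc50) ** slope)

def lcx(x_percent, lc50, slope):
    """Invert the curve: concentration killing x% of the organisms.
    From mortality p = (c/lc50)^b / (1 + (c/lc50)^b), solve for c."""
    p = x_percent / 100.0
    return lc50 * (p / (1 - p)) ** (1 / slope)

lc50, b = 2.0, 3.0                                 # assumed parameters (mg/L, unitless)
surv_at_lc50 = loglogistic_survival(2.0, lc50, b)  # survival at the LC50
lc10 = lcx(10, lc50, b)                            # concentration with 10% mortality
```

The NEC approach differs in kind: instead of reading a percentile off a smooth curve, it estimates a threshold concentration below which no effect occurs, using all time points at once.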

  11. Transformation of Baumgarten's aesthetics into a tool for analysing works and for modelling

    DEFF Research Database (Denmark)

    Thomsen, Bente Dahl

    2006-01-01

      Abstract: Is this the best form, or does it need further work? The aesthetic object does not possess the perfect qualities; but how do I proceed with the form? These are questions that all modellers ask themselves at some point, and with which they can grapple for days - even weeks - before the...

  12. Modelling and analysing 3D buildings with a primal/dual data structure

    NARCIS (Netherlands)

    Boguslawski, P.; Gold, C.; Ledoux, H.

    2011-01-01

    While CityGML permits us to represent 3D city models, its use for applications where spatial analysis and/or real-time modifications are required is limited since at this moment the possibility to store topological relationships between the elements is rather limited and often not exploited. We pres

  14. A multi-scale modelling approach for analysing landscape service dynamics

    NARCIS (Netherlands)

    Willemen, L.; Veldkamp, A.; Verburg, P.H.; Hein, L.G.; Leemans, R.

    2012-01-01

    Shifting societal needs drive and shape landscapes and the provision of their services. This paper presents a modelling approach to visualize the regional spatial and temporal dynamics in landscape service supply as a function of changing landscapes and societal demand. This changing demand can resu

  15. GSEVM v.2: MCMC software to analyse genetically structured environmental variance models

    DEFF Research Database (Denmark)

    Ibáñez-Escriche, N; Garcia, M; Sorensen, D

    2010-01-01

    This note provides a description of software that allows the user to fit Bayesian genetically structured variance models using Markov chain Monte Carlo (MCMC). The gsevm v.2 program was written in Fortran 90. The DOS and Unix executable programs, the user's guide, and some example files are freely availab...

  16. Analysing outsourcing policies in an asset management context: a six-stage model

    NARCIS (Netherlands)

    Schoenmaker, R.; Verlaan, J.G.

    2013-01-01

    Asset managers of civil infrastructure are increasingly outsourcing their maintenance. Whereas maintenance is a cyclic process, decisions to outsource are often project-based, confusing the discussion on the degree of outsourcing. This paper presents a six-stage model that facilitates

  18. Analysing green supply chain management practices in Brazil's electrical/electronics industry using interpretive structural modelling

    DEFF Research Database (Denmark)

    Govindan, Kannan; Kannan, Devika; Mathiyazhagan, K.

    2013-01-01

    that exists between GSCM practices with regard to their adoption within the Brazilian electrical/electronics industry with the help of interpretive structural modelling (ISM). From the results, we infer that cooperation with customers for eco-design practice is driving other practices, and this practice acts...

  19. Analysing the Severity and Frequency of Traffic Crashes in Riyadh City Using Statistical Models

    Directory of Open Access Journals (Sweden)

    Saleh Altwaijri

    2012-12-01

    Traffic crashes in Riyadh city cause losses in the form of deaths, injuries and property damages, in addition to the pain and social tragedy affecting families of the victims. In 2005, a total of 47,341 injury traffic crashes occurred in Riyadh city (19% of the total KSA crashes), and 9% of those crashes were severe. Road safety in Riyadh city may have been adversely affected by: high car ownership, migration of people to Riyadh city, about 6 million daily trips, a high rate of income, low-cost petrol, drivers of different nationalities, young drivers and tremendous growth in population, which creates a high level of mobility and transport activities in the city. The primary objective of this paper is therefore to explore factors affecting the severity and frequency of road crashes in Riyadh city using appropriate statistical models, aiming to establish effective safety policies ready to be implemented to reduce the severity and frequency of road crashes in Riyadh city. Crash data for Riyadh city were collected from the Higher Commission for the Development of Riyadh (HCDR) for a period of five years from 1425H to 1429H (roughly corresponding to 2004-2008). Crash data were classified into three categories: fatal, serious-injury and slight-injury. Two nominal response models were developed and applied to injury-related crash data: a standard multinomial logit model (MNL) and a mixed logit model. Due to a severe underreporting problem for slight-injury crashes, binary and mixed binary logistic regression models were also estimated for two categories of severity: fatal and serious crashes. For frequency, two count models, such as Negative Binomial (NB) models, were employed, and the unit of analysis was 168 HAIs (wards) in Riyadh city. Ward-level crash data are disaggregated by severity of the crash (such as fatal and serious injury crashes).
The results from both multinomial and binary response models are found to be fairly consistent but
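
    The Negative Binomial (NB) count model used for ward-level crash frequency handles overdispersion (variance exceeding the mean). A minimal sketch of the NB2 log-probability, with mean mu and overdispersion alpha so that Var = mu + alpha*mu**2; this is the common parameterisation, not necessarily the authors' exact specification:

```python
import math

def nb_logpmf(y, mu, alpha):
    """Log-probability of observing count y under an NB2 model with
    mean mu and overdispersion alpha (variance mu + alpha * mu**2)."""
    r = 1.0 / alpha  # gamma "size" parameter
    return (math.lgamma(y + r) - math.lgamma(r) - math.lgamma(y + 1)
            + r * math.log(r / (r + mu))
            + y * math.log(mu / (r + mu)))
```

    Summing exp(nb_logpmf(y, mu, alpha)) over y recovers 1, and a ward-level regression would set mu = exp(x'beta).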

  20. Using species abundance distribution models and diversity indices for biogeographical analyses

    Science.gov (United States)

    Fattorini, Simone; Rigal, François; Cardoso, Pedro; Borges, Paulo A. V.

    2016-01-01

    We examine whether Species Abundance Distribution models (SADs) and diversity indices can describe how species colonization status influences species community assembly on oceanic islands. Our hypothesis is that, because of the lack of source-sink dynamics at the archipelago scale, Single Island Endemics (SIEs), i.e. endemic species restricted to only one island, should be represented by few rare species and consequently have abundance patterns that differ from those of more widespread species. To test our hypothesis, we used arthropod data from the Azorean archipelago (North Atlantic). We divided the species into three colonization categories: SIEs, archipelagic endemics (AZEs, present in at least two islands) and native non-endemics (NATs). For each category, we modelled rank-abundance plots using both the geometric series and the Gambin model, a measure of distributional amplitude. We also calculated Shannon entropy and Buzas and Gibson's evenness. We show that the slopes of the regression lines modelling SADs were significantly higher for SIEs, which indicates a relative predominance of a few highly abundant species and a lack of rare species, which also depresses diversity indices. This may be a consequence of two factors: (i) some forest specialist SIEs may be at advantage over other, less adapted species; (ii) the entire populations of SIEs are by definition concentrated on a single island, without possibility for inter-island source-sink dynamics; hence all populations must have a minimum number of individuals to survive natural, often unpredictable, fluctuations. These findings are supported by higher values of the α parameter of the Gambin model for SIEs. In contrast, AZEs and NATs had lower regression slopes, lower α but higher diversity indices, resulting from their widespread distribution over several islands. We conclude that these differences in the SAD models and diversity indices demonstrate that the study of these metrics is useful for
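
    The rank-abundance and diversity metrics named above are straightforward to compute. A sketch of a geometric-series SAD plus Shannon entropy and Buzas and Gibson's evenness (the Gambin model needs numerical fitting and is left out); names are illustrative:

```python
import math

def geometric_series_sad(total_abundance, k, n_species):
    """Expected abundances under a geometric series: the i-th ranked
    species takes a fraction k of the resources left by ranks 1..i-1;
    the result is normalised to sum to total_abundance."""
    norm = 1.0 - (1.0 - k) ** n_species
    return [total_abundance * k * (1.0 - k) ** (i - 1) / norm
            for i in range(1, n_species + 1)]

def shannon_entropy(abundances):
    """Shannon entropy H of a community (natural log)."""
    total = sum(abundances)
    return -sum((a / total) * math.log(a / total) for a in abundances if a > 0)

def buzas_gibson_evenness(abundances):
    """Buzas and Gibson's evenness E = exp(H) / S; equals 1 when all
    species are equally abundant."""
    return math.exp(shannon_entropy(abundances)) / len(abundances)
```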

  1. Evaluation of a dentoalveolar model for testing mouthguards: stress and strain analyses.

    Science.gov (United States)

    Verissimo, Crisnicaw; Costa, Paulo Victor Moura; Santos-Filho, Paulo César Freitas; Fernandes-Neto, Alfredo Júlio; Tantbirojn, Daranee; Versluis, Antheunis; Soares, Carlos José

    2016-02-01

    Custom-fitted mouthguards are devices used to decrease the likelihood of dental trauma. The aim of this study was to develop an experimental bovine dentoalveolar model with periodontal ligament to evaluate mouthguard shock absorption, and impact strain and stress behavior. A pendulum impact device was developed to perform the impact tests with two different impact materials (steel ball and baseball). Five bovine jaws were selected with standard age and dimensions. Six-mm mouthguards were made for the impact tests. The jaws were fixed in a pendulum device and impacts were performed from 90, 60, and 45° angles, with and without mouthguard. The strain and shock absorption of the mouthguards were calculated and data were analyzed with three-way ANOVA and Tukey's test (α = 0.05). Strain gauges were attached at the palatal surface of the impacted tooth. Two-dimensional finite element models were created based on the cross-section of the bovine dentoalveolar model used in the experiment. A nonlinear dynamic impact analysis was performed to evaluate the strain and stress distributions. Without mouthguards, the increase in impact angulation significantly increased strains and stresses. Mouthguards reduced strain and stress values. Impact velocity, impact object (steel ball or baseball), and mouthguard presence affected the impact stresses and strains in a bovine dentoalveolar model. Experimental strain measurements and finite element models predicted similar behavior; therefore, both methodologies are suitable for evaluating the biomechanical performance of mouthguards. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. Novel basophil- or eosinophil-depleted mouse models for functional analyses of allergic inflammation.

    Science.gov (United States)

    Matsuoka, Kunie; Shitara, Hiroshi; Taya, Choji; Kohno, Kenji; Kikkawa, Yoshiaki; Yonekawa, Hiromichi

    2013-01-01

    Basophils and eosinophils play important roles in various host defense mechanisms but also act as harmful effectors in allergic disorders. We generated novel basophil- and eosinophil-depletion mouse models by introducing the human diphtheria toxin (DT) receptor gene under the control of the mouse CD203c and the eosinophil peroxidase promoter, respectively, to study the critical roles of these cells in the immunological response. These mice exhibited selective depletion of the target cells upon DT administration. In the basophil-depletion model, DT administration attenuated a drop in body temperature in IgG-mediated systemic anaphylaxis in a dose-dependent manner and almost completely abolished the development of ear swelling in IgE-mediated chronic allergic inflammation (IgE-CAI), a typical skin swelling reaction with massive eosinophil infiltration. In contrast, in the eosinophil-depletion model, DT administration ameliorated the ear swelling in IgE-CAI whether DT was administered before, simultaneously, or after, antigen challenge, with significantly lower numbers of eosinophils infiltrating into the swelling site. These results confirm that basophils and eosinophils act as the initiator and the effector, respectively, in IgE-CAI. In addition, antibody array analysis suggested that eotaxin-2 is a principal chemokine that attracts proinflammatory cells, leading to chronic allergic inflammation. Thus, the two mouse models established in this study are potentially useful and powerful tools for studying the in vivo roles of basophils and eosinophils. The combination of basophil- and eosinophil-depletion mouse models provides a new approach to understanding the complicated mechanism of allergic inflammation in conditions such as atopic dermatitis and asthma.

  4. Static simulation and analyses of mower's ROPS behavior in a finite element model.

    Science.gov (United States)

    Wang, X; Ayers, P; Womac, A R

    2009-10-01

    The goal of this research was to numerically predict the maximum lateral force acting on a mower rollover protective structure (ROPS) and the energy absorbed by the ROPS during a lateral continuous roll. A finite element (FE) model of the ROPS was developed using elastic and plastic theories including nonlinear relationships between stresses and strains in the plastic deformation range. Model validation was performed using field measurements of ROPS behavior in a lateral continuous roll on a purpose-designed test slope. Field tests determined the maximum deformation of the ROPS of a 900 kg John Deere F925 mower with a 183 cm (72 in.) mowing deck during an actual lateral roll on a pad and on soil. In the FE model, lateral force was gradually added to the ROPS until the field-measured maximum deformation was achieved. The results from the FE analysis indicated that the top corners of the ROPS enter slightly into the plastic deformation region. Maximum lateral forces acting on the ROPS during the simulated impact with the pad and soil were 19650 N and 22850 N, respectively. The FE model predicted that the energy absorbed by the ROPS (643 J) in the lateral roll test on the pad was less than the static test requirements (1575 J) of Organisation for Economic Co-operation and Development (OECD) Code 6. In addition, the energy absorbed by the ROPS (1813 J) in the test on the soil met the static test requirements (1575 J). Both the FE model and the field test results indicated that the deformed ROPS of the F925 mower with deck did not intrude into the occupant clearance zone during the lateral continuous or non-continuous roll.

  5. MONTE CARLO ANALYSES OF THE YALINA THERMAL FACILITY WITH SERPENT STEREOLITHOGRAPHY GEOMETRY MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, Y.

    2015-01-01

    This paper analyzes the YALINA Thermal subcritical assembly of Belarus using two different Monte Carlo transport programs, SERPENT and MCNP. The MCNP model is based on combinatorial geometry and universes hierarchy, while the SERPENT model is based on Stereolithography geometry. The latter consists of unstructured triangulated surfaces defined by the normal and vertices. This geometry format is used by 3D printers and it has been created by: the CUBIT software, MATLAB scripts, and C coding. All the Monte Carlo simulations have been performed using the ENDF/B-VII.0 nuclear data library. Both MCNP and SERPENT share the same geometry specifications, which describe the facility details without using any material homogenization. Three different configurations have been studied, using 216, 245, or 280 fuel rods, respectively. The numerical simulations show that the agreement between SERPENT and MCNP results is within a few tens of pcm.
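
    Agreement between the two codes is quoted in pcm (per cent mille, 1 pcm = 10⁻⁵). Assuming the usual reactivity-difference metric between two multiplication factors (the abstract does not spell out its exact definition), a sketch:

```python
def reactivity_difference_pcm(k1, k2):
    """Reactivity difference rho2 - rho1 in pcm, with rho = (k - 1) / k,
    between two keff estimates (e.g. from MCNP and SERPENT)."""
    return 1.0e5 * (k2 - k1) / (k1 * k2)
```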

  6. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of this thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section...... on differences of statistical measures in section and the same measures in between sections. Three-dimensional (3D) datasets are reconstructed by using image registration techniques and estimated thicknesses. We distinguish the effect of stress by estimating the synaptic vesicle densities and modeling......, which leads to more accurate results. Finally, we present a thorough statistical investigation of the shape, orientation and interactions of the synaptic vesicles during active time of the synapse. Focused ion beam-scanning electron microscopy images of a male mammalian brain are used for this study...

  7. Effects of Traditional and Nontraditional Forms of Parental Involvement on School-Level Achievement Outcome: An HLM Study Using SASS 2007-2008

    Science.gov (United States)

    Shen, Jianping; Washington, Alandra L.; Bierlein Palmer, Louann; Xia, Jiangang

    2014-01-01

    The authors examined parental involvement's (PI) impact on school performance. The hierarchical linear modeling method was applied to national Schools and Staffing Survey 2007-2008 data. They found that PI variables explained significant variance for the outcomes of (a) meeting adequate yearly progress (AYP) and (b) being free from sanctions. The…

  8. A note on the Fourier series model for analysing line transect data.

    Science.gov (United States)

    Buckland, S T

    1982-06-01

    The Fourier series model offers a powerful procedure for the estimation of animal population density from line transect data. The estimate is reliable over a wide range of detection functions. In contrast, analytic confidence intervals yield, at best, 90% confidence for nominal 95% intervals. Three solutions, one using Monte Carlo techniques, another making direct use of replicate lines and the third based on the jackknife method, are discussed and compared.
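
    The estimator Buckland discusses is the Fourier series detection-function model of Burnham and Anderson, where animal density is D = n * f(0) / (2L). A minimal sketch under the standard cosine expansion on [0, w] (truncation distance w, m terms; variable names are ours):

```python
import math

def fourier_f0(distances, w, m):
    """Fourier series estimate of the perpendicular-distance density at
    zero, f(0), from n sightings truncated at distance w, using m cosine
    terms: f(0) = 1/w + sum_k a_k, with
    a_k = (2 / (n * w)) * sum_i cos(k * pi * x_i / w)."""
    n = len(distances)
    f0 = 1.0 / w
    for k in range(1, m + 1):
        a_k = (2.0 / (n * w)) * sum(math.cos(k * math.pi * x / w)
                                    for x in distances)
        f0 += a_k
    return f0

def density_estimate(distances, w, m, total_line_length):
    """Line transect density estimate D = n * f(0) / (2 * L)."""
    return len(distances) * fourier_f0(distances, w, m) / (2.0 * total_line_length)
```

    With perfectly uniform detections f(0) reduces to 1/w; the confidence-interval issue the note addresses concerns the variance of this estimator, not its point value.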

  9. Analysing Amazonian forest productivity using a new individual and trait-based model (TFS v.1

    Directory of Open Access Journals (Sweden)

    N. M. Fyllas

    2014-02-01

    Full Text Available Repeated long-term censuses have revealed large-scale spatial patterns in Amazon Basin forest structure and dynamism, with some forests in the west of the Basin having up to a twice as high rate of aboveground biomass production and tree recruitment as forests in the east. Possible causes for this variation could be the climatic and edaphic gradients across the Basin and/or the spatial distribution of tree species composition. To help understand causes of this variation a new individual-based model of tropical forest growth designed to take full advantage of the forest census data available from the Amazonian Forest Inventory Network (RAINFOR has been developed. The model incorporates variations in tree size distribution, functional traits and soil physical properties and runs at the stand level with four functional traits, leaf dry mass per area (Ma, leaf nitrogen (NL and phosphorus (PL content and wood density (DW used to represent a continuum of plant strategies found in tropical forests. We first applied the model to validate canopy-level water fluxes at three Amazon eddy flux sites. For all three sites the canopy-level water fluxes were adequately simulated. We then applied the model at seven plots, where intensive measurements of carbon allocation are available. Tree-by-tree multi-annual growth rates generally agreed well with observations for small trees, but with deviations identified for large trees. At the stand-level, simulations at 40 plots were used to explore the influence of climate and soil fertility on the gross (ΠG and net (ΠN primary production rates as well as the carbon use efficiency (CU. Simulated ΠG, ΠN and CU were not associated with temperature. However all three measures of stand level productivity were positively related to annual precipitation and soil fertility.

  10. Sensitivity to model geometry in finite element analyses of reconstructed skeletal structures: experience with a juvenile pelvis.

    Science.gov (United States)

    Watson, Peter J; Fagan, Michael J; Dobson, Catherine A

    2015-01-01

    Biomechanical analysis of juvenile pelvic growth can be used in the evaluation of medical devices and investigation of hip joint disorders. This requires access to scan data of healthy juveniles, which are not always freely available. This article analyses the application of a geometric morphometric technique, which facilitates the reconstruction of the articulated juvenile pelvis from cadaveric remains, in biomechanical modelling. The sensitivity of variation in reconstructed morphologies upon predicted stress/strain distributions is of particular interest. A series of finite element analyses of a 9-year-old hemi-pelvis were performed to examine differences in predicted strain distributions between a reconstructed model and the originally fully articulated specimen. Only minor differences in the minimum principal strain distributions were observed between two varying hemi-pelvic morphologies and that of the original articulation. A Wilcoxon rank-sum test determined there was no statistical significance between the nodal strains recorded at 60 locations throughout the hemi-pelvic structures. This example suggests that finite element models created by this geometric morphometric reconstruction technique can be used with confidence, and as observed with this hemi-pelvis model, even a visual morphological difference does not significantly affect the predicted results. The validated use of this geometric morphometric reconstruction technique in biomechanical modelling reduces the dependency on clinical scan data.

  11. Systematic Selection of Key Logistic Regression Variables for Risk Prediction Analyses: A Five-Factor Maximum Model.

    Science.gov (United States)

    Hewett, Timothy E; Webster, Kate E; Hurd, Wendy J

    2017-08-16

    The evolution of clinical practice and medical technology has yielded an increasing number of clinical measures and tests to assess a patient's progression and return to sport readiness after injury. The plethora of available tests may be burdensome to clinicians in the absence of evidence that demonstrates the utility of a given measurement. Thus, there is a critical need to identify a discrete number of metrics to capture during clinical assessment to effectively and concisely guide patient care. The data sources included Pubmed and PMC Pubmed Central articles on the topic. Therefore, we present a systematic approach to injury risk analyses and how this concept may be used in algorithms for risk analyses for primary anterior cruciate ligament (ACL) injury in healthy athletes and patients after ACL reconstruction. In this article, we present the five-factor maximum model, which states that in any predictive model, a maximum of 5 variables will contribute in a meaningful manner to any risk factor analysis. We demonstrate how this model already exists for prevention of primary ACL injury, how this model may guide development of the second ACL injury risk analysis, and how the five-factor maximum model may be applied across the injury spectrum for development of the injury risk analysis.
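
    The five-factor maximum model caps any risk algorithm at five predictors. A hypothetical sketch of such a capped logistic risk score (the coefficients and the hard cap check are ours, for illustration only):

```python
import math

def logistic_risk(features, coefficients, intercept):
    """Predicted injury probability from a logistic model restricted to
    at most five predictors, per the five-factor maximum model."""
    if len(features) > 5:
        raise ValueError("five-factor maximum model: at most 5 predictors")
    z = intercept + sum(b * x for b, x in zip(coefficients, features))
    return 1.0 / (1.0 + math.exp(-z))
```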

  12. Hydrogeologic analyses in support of the conceptual model for the LANL Area G LLRW performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Vold, E.L.; Birdsell, K.; Rogers, D.; Springer, E.; Krier, D.; Turin, H.J.

    1996-04-01

    The Los Alamos National Laboratory low level radioactive waste disposal facility at Area G is currently completing a draft of the site Performance Assessment. Results from previous field studies have estimated a range in recharge rate up to 1 cm/yr. Recent estimates of unsaturated hydraulic conductivity for each stratigraphic layer under a unit gradient assumption show a wide range in recharge rate of 10{sup {minus}4} to 1 cm/yr depending upon location. Numerical computations show that a single net infiltration rate at the mesa surface does not match the moisture profile in each stratigraphic layer simultaneously, suggesting local source or sink terms possibly due to surface connected porous regions. The best fit to field data at deeper stratigraphic layers occurs for a net infiltration of about 0.1 cm/yr. A recent detailed analysis evaluated liquid phase vertical moisture flux, based on moisture profiles in several boreholes and van Genuchten fits to the hydraulic properties for each of the stratigraphic units. Results show a near surface infiltration region averages 8m deep, below which is a dry, low moisture content, and low flux region, where liquid phase recharge averages to zero. Analysis shows this low flux region is dominated by vapor movement. Field data from tritium diffusion studies, from pressure fluctuation attenuation studies, and from comparisons of in-situ and core sample permeabilities indicate that the vapor diffusion is enhanced above that expected in the matrix and is presumably due to enhanced flow through the fractures. Below this dry region within the mesa is a moisture spike which analyses show corresponds to a moisture source. The likely physical explanation is seasonal transient infiltration through surface-connected fractures. This anomalous region is being investigated in current field studies, because it is critical in understanding the moisture flux which continues to deeper regions through the unsaturated zone.

  13. A new compact solid-state neutral particle analyser at ASDEX Upgrade: Setup and physics modeling

    Science.gov (United States)

    Schneider, P. A.; Blank, H.; Geiger, B.; Mank, K.; Martinov, S.; Ryter, F.; Weiland, M.; Weller, A.

    2015-07-01

    At ASDEX Upgrade (AUG), a new compact solid-state detector has been installed to measure the energy spectrum of fast neutrals based on the principle described by Shinohara et al. [Rev. Sci. Instrum. 75, 3640 (2004)]. The diagnostic relies on the usual charge exchange of supra-thermal fast-ions with neutrals in the plasma. Therefore, the measured energy spectra directly correspond to those of confined fast-ions with a pitch angle defined by the line of sight of the detector. Experiments in AUG showed the good signal to noise characteristics of the detector. It is energy calibrated and can measure energies of 40-200 keV with count rates of up to 140 kcps. The detector has an active view on one of the heating beams. The heating beam increases the neutral density locally; thereby, information about the central fast-ion velocity distribution is obtained. The measured fluxes are modeled with a newly developed module for the 3D Monte Carlo code F90FIDASIM [Geiger et al., Plasma Phys. Controlled Fusion 53, 65010 (2011)]. The modeling allows to distinguish between the active (beam) and passive contributions to the signal. Thereby, the birth profile of the measured fast neutrals can be reconstructed. This model reproduces the measured energy spectra with good accuracy when the passive contribution is taken into account.

  15. High-temperature series analyses of the classical Heisenberg and XY model

    CERN Document Server

    Adler, J; Janke, W

    1993-01-01

    Although there is now a good measure of agreement between Monte Carlo and high-temperature series expansion estimates for Ising (n=1) models, published results for the critical temperature from series expansions up to 12th order for the three-dimensional classical Heisenberg (n=3) and XY (n=2) model do not agree very well with recent high-precision Monte Carlo estimates. In order to clarify this discrepancy we have analyzed extended high-temperature series expansions of the susceptibility, the second correlation moment, and the second field derivative of the susceptibility, which have been derived a few years ago by Lüscher and Weisz for general O(n) vector spin models on D-dimensional hypercubic lattices up to 14th order in K ≡ J/k_B T. By analyzing these series expansions in three dimensions with two different methods that allow for confluent correction terms, we obtain good agreement with the standard field theory exponent estimates and with the critical temperature estimates...
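
    The simplest series-analysis tool behind such estimates is the ratio method (the paper uses refined variants that allow for confluent corrections). If chi(K) = sum_n a_n K^n diverges as (1 - K/K_c)^(-gamma), then r_n = a_n / a_{n-1} tends to 1/K_c with a 1/n correction set by gamma. A sketch with illustrative names:

```python
def coefficient_ratios(coeffs):
    """Successive ratios r_n = a_n / a_{n-1} of series coefficients;
    they approach 1/K_c as n grows."""
    return [coeffs[n] / coeffs[n - 1] for n in range(1, len(coeffs))]

def exponent_estimate(coeffs, kc, n):
    """Solve r_n = (1/kc) * (1 + (gamma - 1)/n) for gamma at order n,
    given an estimate of the critical coupling kc."""
    r_n = coeffs[n] / coeffs[n - 1]
    return n * (kc * r_n - 1.0) + 1.0
```

    For the exactly solvable test series chi = (1 - 2K)^(-2), i.e. a_n = (n+1) * 2**n, the ratios tend to 2 (so K_c = 1/2) and the exponent estimate is exact at every order.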

  16. Metabolic model for the filamentous ‘Candidatus Microthrix parvicella' based on genomic and metagenomic analyses

    Science.gov (United States)

    Jon McIlroy, Simon; Kristiansen, Rikke; Albertsen, Mads; Michael Karst, Søren; Rossetti, Simona; Lund Nielsen, Jeppe; Tandoi, Valter; James Seviour, Robert; Nielsen, Per Halkjær

    2013-01-01

    ‘Candidatus Microthrix parvicella' is a lipid-accumulating, filamentous bacterium so far found only in activated sludge wastewater treatment plants, where it is a common causative agent of sludge separation problems. Despite attracting considerable interest, its detailed physiology is still unclear. In this study, the genome of the RN1 strain was sequenced and annotated, which facilitated the construction of a theoretical metabolic model based on available in situ and axenic experimental data. This model proposes that under anaerobic conditions, this organism accumulates preferentially long-chain fatty acids as triacylglycerols. Utilisation of trehalose and/or polyphosphate stores or partial oxidation of long-chain fatty acids may supply the energy required for anaerobic lipid uptake and storage. Comparing the genome sequence of this isolate with metagenomes from two full-scale wastewater treatment plants with enhanced biological phosphorus removal reveals high similarity, with few metabolic differences between the axenic and the dominant community ‘Ca. M. parvicella' strains. Hence, the metabolic model presented in this paper could be considered generally applicable to strains in full-scale treatment systems. The genomic information obtained here will provide the basis for future research into in situ gene expression and regulation. Such information will give substantial insight into the ecophysiology of this unusual and biotechnologically important filamentous bacterium. PMID:23446830

  17. Consequence modeling for nuclear weapons probabilistic cost/benefit analyses of safety retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Harvey, T.F.; Peters, L.; Serduke, F.J.D.; Hall, C.; Stephens, D.R.

    1998-01-01

    The consequence models used in former studies of costs and benefits of enhanced safety retrofits are considered for (1) fuel fires; (2) non-nuclear detonations; and, (3) unintended nuclear detonations. Estimates of consequences were made using a representative accident location, i.e., an assumed mixed suburban-rural site. We have explicitly quantified land-use impacts and human-health effects (e.g., prompt fatalities, prompt injuries, latent cancer fatalities, low levels of radiation exposure, and clean-up areas). Uncertainty in the wind direction is quantified and used in a Monte Carlo calculation to estimate a range of results for a fuel fire with uncertain respirable amounts of released Pu. We define a nuclear source term and discuss damage levels of concern. Ranges of damages are estimated by quantifying health impacts and property damages. We discuss our dispersal and prompt effects models in some detail. The models used to loft the Pu and fission products and their particle sizes are emphasized.

  18. A new compact solid-state neutral particle analyser at ASDEX Upgrade: Setup and physics modeling.

    Science.gov (United States)

    Schneider, P A; Blank, H; Geiger, B; Mank, K; Martinov, S; Ryter, F; Weiland, M; Weller, A

    2015-07-01

    At ASDEX Upgrade (AUG), a new compact solid-state detector has been installed to measure the energy spectrum of fast neutrals based on the principle described by Shinohara et al. [Rev. Sci. Instrum. 75, 3640 (2004)]. The diagnostic relies on the usual charge exchange of supra-thermal fast ions with neutrals in the plasma. Therefore, the measured energy spectra directly correspond to those of confined fast ions with a pitch angle defined by the line of sight of the detector. Experiments in AUG demonstrated the good signal-to-noise characteristics of the detector. It is energy calibrated and can measure energies of 40-200 keV with count rates of up to 140 kcps. The detector has an active view on one of the heating beams. The heating beam increases the neutral density locally; thereby, information about the central fast-ion velocity distribution is obtained. The measured fluxes are modeled with a newly developed module for the 3D Monte Carlo code F90FIDASIM [Geiger et al., Plasma Phys. Controlled Fusion 53, 65010 (2011)]. The modeling makes it possible to distinguish between the active (beam) and passive contributions to the signal. Thereby, the birth profile of the measured fast neutrals can be reconstructed. This model reproduces the measured energy spectra with good accuracy when the passive contribution is taken into account.

  19. Analysing the origin of long-range interactions in proteins using lattice models

    Directory of Open Access Journals (Sweden)

    Unger Ron

    2009-01-01

    Full Text Available Abstract Background Long-range communication is very common in proteins, but the physical basis of this phenomenon remains unclear. In order to gain insight into this problem, we decided to explore whether long-range interactions exist in lattice models of proteins. Lattice models of proteins have proven to capture some of the basic properties of real proteins and, thus, can be used for elucidating general principles of protein stability and folding. Results Using a computational version of double-mutant cycle analysis, we show that long-range interactions emerge in lattice models even though they are not an input feature of them. The coupling energy of both short- and long-range pairwise interactions is found to become more positive (destabilizing) in a linear fashion with increasing 'contact-frequency', an entropic term that corresponds to the fraction of states in the conformational ensemble of the sequence in which the pair of residues is in contact. A mathematical derivation of the linear dependence of the coupling energy on 'contact-frequency' is provided. Conclusion Our work shows how 'contact-frequency' should be taken into account in attempts to stabilize proteins by introducing (or stabilizing) contacts in the native state and/or through 'negative design' of non-native contacts.
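    The 'contact-frequency' term can be made concrete with a small sketch. The three-conformation 4-mer ensemble below is invented (a real study would enumerate the full conformational ensemble of a sequence on the lattice); the sketch only shows how the fraction of ensemble states in which a residue pair is in contact is computed.

```python
def contacts(conf):
    """Non-bonded lattice contacts of one conformation: residue pairs at unit
    (Manhattan) distance that are not sequence neighbours."""
    pairs = set()
    for i in range(len(conf)):
        for j in range(i + 2, len(conf)):
            (xi, yi), (xj, yj) = conf[i], conf[j]
            if abs(xi - xj) + abs(yi - yj) == 1:
                pairs.add((i, j))
    return pairs

def contact_frequency(ensemble, pair):
    """Fraction of conformations in the ensemble in which `pair` is in contact."""
    return sum(pair in contacts(c) for c in ensemble) / len(ensemble)

# Toy ensemble of three 4-mer conformations on a square lattice (hypothetical)
ensemble = [
    [(0, 0), (1, 0), (1, 1), (0, 1)],  # closed square: residues 0 and 3 touch
    [(0, 0), (1, 0), (2, 0), (3, 0)],  # straight chain: no non-bonded contacts
    [(0, 0), (1, 0), (1, 1), (0, 1)],
]
print(contact_frequency(ensemble, (0, 3)))
```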

  20. Analyses of the redistribution of work following cardiac resynchronisation therapy in a patient specific model.

    Directory of Open Access Journals (Sweden)

    Steven Alexander Niederer

    Full Text Available Regulation of regional work is essential for efficient cardiac function. In patients with heart failure and electrical dysfunction such as left bundle branch block, regional work is often depressed in the septum. Following cardiac resynchronisation therapy (CRT), this heterogeneous distribution of work can be rebalanced by altering the pattern of electrical activation. To investigate the changes in regional work in these patients and the mechanisms underpinning the improved function following CRT, we have developed a personalised computational model. Simulations of electromechanical cardiac function in the model estimate the regional stress, strain and work pre- and post-CRT. These simulations predict that the increase in observed work performed by the septum following CRT is not due to an increase in the volume of myocardial tissue recruited during contraction; rather, the volume of recruited myocardium remains the same and the average peak work rate per unit volume increases. This increase in the peak average work rate is attributed to slower and more effective contraction in the septum, as opposed to a change in active tension. Model results predict that this improved septal work rate following CRT is a result of resistance to septal contraction provided by the LV free wall. This resistance results in septal shortening over a longer period which, in turn, allows the septum to contract while generating higher levels of active tension to produce a higher work rate.

  1. Marginal estimation for multi-stage models: waiting time distributions and competing risks analyses.

    Science.gov (United States)

    Satten, Glen A; Datta, Somnath

    2002-01-15

    We provide non-parametric estimates of the marginal cumulative distribution of stage occupation times (waiting times) and non-parametric estimates of marginal cumulative incidence function (proportion of persons who leave stage j for stage j' within time t of entering stage j) using right-censored data from a multi-stage model. We allow for stage and path dependent censoring where the censoring hazard for an individual may depend on his or her natural covariate history such as the collection of stages visited before the current stage and their occupation times. Additional external time dependent covariates that may induce dependent censoring can also be incorporated into our estimates, if available. Our approach requires modelling the censoring hazard so that an estimate of the integrated censoring hazard can be used in constructing the estimates of the waiting times distributions. For this purpose, we propose the use of an additive hazard model which results in very flexible (robust) estimates. Examples based on data from burn patients and simulated data with tracking are also provided to demonstrate the performance of our estimators.
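    For intuition, the marginal cumulative incidence function can be sketched in its simplest special case: the standard Aalen-Johansen-type estimator under independent censoring. The authors' estimator additionally reweights by a modelled censoring hazard (an additive hazard model for stage- and path-dependent censoring), which is not reproduced here; the data below are invented.

```python
def cumulative_incidence(times, events, cause, t):
    """Cumulative incidence of one cause by time t from right-censored data.
    events: 0 = censored, positive integers label competing causes.
    Standard estimator assuming independent censoring (a simplification of
    the paper's IPCW approach)."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv, cif, i = 1.0, 0.0, 0          # all-cause survival just before t_i
    while i < len(data) and data[i][0] <= t:
        t_i = data[i][0]
        m = d_all = d_cause = 0         # ties: removals, events, cause events
        while i < len(data) and data[i][0] == t_i:
            m += 1
            if data[i][1] != 0:
                d_all += 1
                if data[i][1] == cause:
                    d_cause += 1
            i += 1
        cif += surv * d_cause / at_risk
        surv *= 1.0 - d_all / at_risk
        at_risk -= m
    return cif

# Toy data: event times with cause labels (1, 2) and censoring (0)
times = [1, 2, 2, 3, 4, 5]
events = [1, 2, 0, 1, 1, 0]
print(cumulative_incidence(times, events, cause=1, t=4))
```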

  2. Promoting Social Inclusion through Sport for Refugee-Background Youth in Australia: Analysing Different Participation Models

    Directory of Open Access Journals (Sweden)

    Karen Block

    2017-06-01

    Full Text Available Sports participation can confer a range of physical and psychosocial benefits and, for refugee and migrant youth, may even act as a critical mediator for achieving positive settlement and engaging meaningfully in Australian society. This group has low participation rates however, with identified barriers including costs; discrimination and a lack of cultural sensitivity in sporting environments; lack of knowledge of mainstream sports services on the part of refugee-background settlers; inadequate access to transport; culturally determined gender norms; and family attitudes. Organisations in various sectors have devised programs and strategies for addressing these participation barriers. In many cases however, these responses appear to be ad hoc and under-theorised. This article reports findings from a qualitative exploratory study conducted in a range of settings to examine the benefits, challenges and shortcomings associated with different participation models. Interview participants were drawn from non-government organisations, local governments, schools, and sports clubs. Three distinct models of participation were identified, including short term programs for refugee-background children; ongoing programs for refugee-background children and youth; and integration into mainstream clubs. These models are discussed in terms of their relative challenges and benefits and their capacity to promote sustainable engagement and social inclusion for this population group.

  3. A biophysically-based finite state machine model for analysing gastric experimental entrainment and pacing recordings

    Science.gov (United States)

    Sathar, Shameer; Trew, Mark L.; Du, Peng; O'Grady, Greg; Cheng, Leo K.

    2014-01-01

    Gastrointestinal motility is coordinated by slow waves (SWs) generated by the interstitial cells of Cajal (ICC). Experimental studies have shown that SWs spontaneously activate at different intrinsic frequencies in isolated tissue, whereas in intact tissues they are entrained to a single frequency. Gastric pacing has been used in an attempt to improve motility in disorders such as gastroparesis by modulating entrainment, but the optimal methods of pacing are currently unknown. Computational models can aid in the interpretation of complex in-vivo recordings and help to determine optimal pacing strategies. However, previous computational models of SW entrainment are limited to the intrinsic pacing frequency as the primary determinant of the conduction velocity, and are not able to accurately represent the effects of external stimuli and electrical anisotropies. In this paper, we present a novel computationally efficient method for modelling SW propagation through the ICC network while accounting for conductivity parameters and fiber orientations. The method successfully reproduced experimental recordings of entrainment following gastric transection and the effects of gastric pacing on SW activity. It provides a reliable new tool for investigating gastric electrophysiology in normal and diseased states, and for guiding and focusing future experimental studies. PMID:24276722
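    The entrainment phenomenon itself can be illustrated with a toy excitable-chain sketch (all numbers invented; this is not the authors' biophysically-based finite state machine): each cell's phase advances at its intrinsic rate, and a firing cell triggers a sufficiently recovered downstream neighbour, so the whole chain locks to the fastest pacemaker, as in intact tissue.

```python
def entrained_frequencies(intrinsic_freqs, steps=60_000, dt=0.001):
    """Toy phase model of slow-wave entrainment in a chain of coupled cells.
    A cell fires when its phase reaches 1; firing entrains a downstream
    neighbour whose phase is past a refractory threshold (0.5 here)."""
    n = len(intrinsic_freqs)
    phase = [0.0] * n
    fires = [0] * n
    for _ in range(steps):
        stack = []
        for i in range(n):
            phase[i] += intrinsic_freqs[i] * dt
            if phase[i] >= 1.0:
                stack.append(i)            # natural (pacemaker) firing
        while stack:
            i = stack.pop()
            if phase[i] == 0.0:            # already fired this step
                continue
            phase[i] = 0.0
            fires[i] += 1
            if i + 1 < n and phase[i + 1] > 0.5:
                stack.append(i + 1)        # excitable neighbour is entrained
    return [f / (steps * dt) for f in fires]

# Intrinsic frequency gradient along the chain (arbitrary units, hypothetical)
freqs = entrained_frequencies([3.0, 2.6, 2.2, 1.8])
print([round(f, 2) for f in freqs])
```

    Every cell ends up firing at (essentially) the fastest intrinsic frequency; isolating a cell, as in transection, would let it revert to its own rate.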

  4. Study on dynamic response of embedded long span corrugated steel culverts using scaled model shaking table tests and numerical analyses

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A series of scaled-model shaking table tests and simulation analyses using the dynamic finite element method were performed to clarify the dynamic behaviors and the seismic stability of embedded corrugated steel culverts under strong earthquakes like the 1995 Hyogoken-nanbu earthquake. The dynamic strains of the embedded culvert models and the seismic soil pressure acting on the models due to sinusoidal and random strong motions were investigated. This study verified that the corrugated culvert model was subjected to dynamic horizontal forces (lateral seismic soil pressure) from the surrounding ground, which caused large bending strains in the structure, and that the structures do not exceed the allowable plastic deformation and do not collapse completely during a strong earthquake like the Hyogoken-nanbu earthquake. The results obtained are useful for the design and construction of embedded long-span corrugated steel culverts in seismic regions.

  5. Model-independent analyses of non-Gaussianity in Planck CMB maps using Minkowski functionals

    Science.gov (United States)

    Buchert, Thomas; France, Martin J.; Steiner, Frank

    2017-05-01

    Despite the wealth of Planck results, there are difficulties in disentangling the primordial non-Gaussianity of the Cosmic Microwave Background (CMB) from the secondary and the foreground non-Gaussianity (NG). For each of these forms of NG the lack of complete data introduces model-dependences. Aiming at detecting the NGs of the CMB temperature anisotropy δT, while paying particular attention to a model-independent quantification of NGs, our analysis is based upon statistical and morphological univariate descriptors, respectively: the probability density function P(δT), related to v0, the first Minkowski Functional (MF), and the two other MFs, v1 and v2. From their analytical Gaussian predictions we build the discrepancy functions Δ_k (k = P, 0, 1, 2) which are applied to an ensemble of 10^5 CMB realization maps of the ΛCDM model and to the Planck CMB maps. In our analysis we use general Hermite expansions of the Δ_k up to the 12th order, where the coefficients are explicitly given in terms of cumulants. Assuming hierarchical ordering of the cumulants, we obtain the perturbative expansions generalizing the second-order expansions of Matsubara to arbitrary order in the standard deviation σ0 for P(δT) and v0, where the perturbative expansion coefficients are explicitly given in terms of complete Bell polynomials. The comparison of the Hermite expansions and the perturbative expansions is performed for the ΛCDM map sample and the Planck data. We confirm the weak level of non-Gaussianity (1-2)σ of the foreground-corrected masked Planck 2015 maps.
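    The first Minkowski functional and its Gaussian prediction are easy to illustrate: for a Gaussian field, v0(ν), the fraction of the map above threshold νσ0, equals ½ erfc(ν/√2). The white-noise "map" below is a stand-in for a CMB realization; the discrepancy function Δ is the difference between the measured and predicted curves.

```python
import math
import random

def v0(field, nu, sigma):
    """First Minkowski functional: fraction of pixels exceeding nu * sigma."""
    return sum(x > nu * sigma for x in field) / len(field)

random.seed(1)
sigma0 = 1.0
# Stand-in for a Gaussian CMB realization map (flattened to a pixel list)
field = [random.gauss(0.0, sigma0) for _ in range(200_000)]

for nu in (-1.0, 0.0, 1.0):
    gaussian_prediction = 0.5 * math.erfc(nu / math.sqrt(2.0))
    measured = v0(field, nu, sigma0)
    # Δ_0(nu): discrepancy between measurement and the analytic Gaussian curve
    print(nu, round(measured, 3), round(gaussian_prediction, 3))
```

    For a genuinely Gaussian map the discrepancy fluctuates around zero; a systematic deviation across the ensemble would signal non-Gaussianity.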

  6. Computational and Statistical Analyses of Insertional Polymorphic Endogenous Retroviruses in a Non-Model Organism

    Directory of Open Access Journals (Sweden)

    Le Bao

    2014-11-01

    Full Text Available Endogenous retroviruses (ERVs are a class of transposable elements found in all vertebrate genomes that contribute substantially to genomic functional and structural diversity. A host species acquires an ERV when an exogenous retrovirus infects a germ cell of an individual and becomes part of the genome inherited by viable progeny. ERVs that colonized ancestral lineages are fixed in contemporary species. However, in some extant species, ERV colonization is ongoing, which results in variation in ERV frequency in the population. To study the consequences of ERV colonization of a host genome, methods are needed to assign each ERV to a location in a species’ genome and determine which individuals have acquired each ERV by descent. Because well annotated reference genomes are not widely available for all species, de novo clustering approaches provide an alternative to reference mapping that are insensitive to differences between query and reference and that are amenable to mobile element studies in both model and non-model organisms. However, there is substantial uncertainty in both identifying ERV genomic position and assigning each unique ERV integration site to individuals in a population. We present an analysis suitable for detecting ERV integration sites in species without the need for a reference genome. Our approach is based on improved de novo clustering methods and statistical models that take the uncertainty of assignment into account and yield a probability matrix of shared ERV integration sites among individuals. We demonstrate that polymorphic integrations of a recently identified endogenous retrovirus in deer reflect contemporary relationships among individuals and populations.

  7. Analysing and modelling the impact of habitat fragmentation on species diversity: a macroecological perspective

    Directory of Open Access Journals (Sweden)

    Thomas Matthews

    2015-07-01

    Full Text Available My research aimed to examine a variety of macroecological and biogeographical patterns using a large number of purely habitat island datasets (i.e. isolated patches of natural habitat set within a matrix of human land uses), sourced from both the literature and my own sampling. These patterns can be grouped under four broad headings: 1) species–area relationships (SARs), 2) nestedness, 3) species abundance distributions (SADs) and 4) species incidence functions (incidence as a function of area). Overall, I found that there were few hard macroecological generalities that hold in all cases across habitat island systems. This is because most habitat island systems are highly disturbed environments, with a variety of confounding variables and 'undesirable' species (e.g. species associated with human land uses) acting to modulate the patterns of interest. Nonetheless, some clear patterns did emerge. For example, the power model was by far the best general SAR model for habitat islands. The slope of the island species–area relationship (ISAR) was related to the matrix type surrounding archipelagos, such that habitat island ISARs were shallower than true island ISARs. Significant compositional and functional nestedness was rare in habitat island datasets, although island area was seemingly responsible for what nestedness was observed. Species abundance distribution models were found to provide useful information for conservation in fragmented landscapes, but the presence of undesirable species substantially affected the shape of the SAD. In conclusion, I found that the application of theory derived from the study of true islands to habitat island systems is inappropriate, as it fails to incorporate factors that are unique to habitat islands.
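    The power-model SAR, S = cA^z, is conventionally fitted as a straight line on log-log axes. A minimal sketch with invented patch data (areas and species counts are hypothetical, chosen so richness doubles per decade of area):

```python
import math

def fit_power_sar(areas, richness):
    """Least-squares fit of log S = log c + z log A (the power-model SAR).
    Returns (c, z)."""
    xs = [math.log(a) for a in areas]
    ys = [math.log(s) for s in richness]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    z = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    c = math.exp(my - z * mx)
    return c, z

# Hypothetical habitat-island data: patch area (ha) and species richness
areas = [1, 10, 100, 1000]
richness = [5, 10, 20, 40]
c, z = fit_power_sar(areas, richness)
print(round(c, 2), round(z, 3))
```

    The fitted slope z is the quantity compared between habitat island and true island ISARs above.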

  8. Using Rasch Modeling to Re-Evaluate Rapid Malaria Diagnosis Test Analyses

    Directory of Open Access Journals (Sweden)

    Dawit G. Ayele

    2014-06-01

    Full Text Available The objective of this study was to demonstrate the use of the Rasch model by assessing the appropriateness of the demographic, socio-economic and geographic factors in providing a total score for malaria rapid diagnostic tests (RDTs) in accordance with the model's expectations. The baseline malaria indicator survey was conducted in the Amhara, Oromiya and Southern Nation Nationalities and People (SNNP) regions of Ethiopia by The Carter Center in 2007. The results show high reliability and little disordering of thresholds, with no evidence of differential item functioning.
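    The Rasch model underlying such an analysis can be sketched in a few lines: the probability of a positive response depends only on the difference between person ability and item difficulty, so a raw total score is a sufficient statistic when the data fit the model. The item difficulties below are hypothetical.

```python
import math

def rasch_p(theta, b):
    """Rasch model: probability of a positive response for a person with
    ability theta (logits) on an item with difficulty b (logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_total(theta, difficulties):
    """Model-expected total score across items, the quantity a raw total
    approximates when the data fit the Rasch model."""
    return sum(rasch_p(theta, b) for b in difficulties)

# Hypothetical item difficulties (logits)
difficulties = [-1.0, 0.0, 1.5]
print(round(rasch_p(0.0, 0.0), 2))   # 0.5 when ability equals difficulty
print(round(expected_total(0.0, difficulties), 3))
```

    The expected total is monotone in ability, which is what justifies summing item responses into one score.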

  9. Analysing the distribution of synaptic vesicles using a spatial point process model

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta

    2014-01-01

    Stress can affect the brain functionality in many ways. As the synaptic vesicles have a major role in nervous signal transportation in synapses, their distribution in relationship to the active zone is very important in studying the neuron responses. We study the effect of stress on brain...... functionality by statistically modelling the distribution of the synaptic vesicles in two groups of rats: a control group subjected to sham stress and a stressed group subjected to a single acute foot-shock (FS)-stress episode. We hypothesize that the synaptic vesicles have different spatial distributions...

  10. The influence of jet-grout constitutive modelling in excavation analyses

    OpenAIRE

    Ciantia, M.; Arroyo Alvarez de Toledo, Marcos; Castellanza, R; Gens Solé, Antonio

    2012-01-01

    A bonded elasto-plastic soil model is employed to characterize cement-treated clay in the finite element analysis of an excavation on soft clay supported with a soil-cement slab at the bottom. The soft clay is calibrated to represent the behaviour of Bangkok soft clay. A parametric study is run for a series of materials characterised by increasing cement content in the clay-cement mixture. The different mixtures are indirectly specified by means of their unconfined compressive strength. A ...

  11. Analyses of Methods and Algorithms for Modelling and Optimization of Biotechnological Processes

    Directory of Open Access Journals (Sweden)

    Stoyan Stoyanov

    2009-08-01

    Full Text Available A review of the problems in modeling, optimization and control of biotechnological processes and systems is given in this paper. An analysis of existing and some new practical optimization methods for global-optimum search, based on various advanced strategies (heuristic, stochastic, genetic and combined), is presented. Methods based on sensitivity theory, as well as stochastic and mixed strategies for optimization with partial knowledge about the kinetic, technical and economic parameters of the optimization problem, are discussed. Several approaches to multi-criteria optimization tasks are analyzed. Problems concerning the optimal control of biotechnological systems are also discussed.
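    Of the strategy classes surveyed, a genetic algorithm is the easiest to sketch. Everything below is illustrative: the operators, rates, bounds and the one-dimensional objective (a stand-in for, say, a yield curve over one process parameter) are all invented, not taken from the review.

```python
import random

def genetic_search(fitness, bounds, pop_size=40, generations=60, seed=0):
    """Minimal real-coded genetic algorithm: tournament selection, blend
    crossover, Gaussian mutation. A sketch of one stochastic/genetic strategy,
    not a production optimizer."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        new = []
        for _ in range(pop_size):
            a = max(rng.sample(pop, 3), key=fitness)      # tournament parent 1
            b = max(rng.sample(pop, 3), key=fitness)      # tournament parent 2
            child = a + rng.uniform(-0.5, 1.5) * (b - a)  # blend crossover
            child += rng.gauss(0.0, 0.05 * (hi - lo))     # Gaussian mutation
            new.append(min(max(child, lo), hi))           # clip to bounds
        pop = new
    return max(pop, key=fitness)

# Hypothetical objective with a single optimum at x = 2
f = lambda x: -(x - 2.0) ** 2 + 1.0
best = genetic_search(f, bounds=(-5.0, 5.0))
print(round(best, 2))
```

    Real biotechnological objectives are multimodal and multivariate, which is exactly why the review compares such stochastic strategies with sensitivity-based and combined ones.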

  12. Daniel K. Inouye Solar Telescope: computational fluid dynamic analyses and evaluation of the air knife model

    Science.gov (United States)

    McQuillen, Isaac; Phelps, LeEllen; Warner, Mark; Hubbard, Robert

    2016-08-01

    Implementation of an air curtain at the thermal boundary between conditioned and ambient spaces allows for observation over wavelength ranges not practical when using optical glass as a window. The air knife model of the Daniel K. Inouye Solar Telescope (DKIST) project, a 4-meter solar observatory that will be built on Haleakalā, Hawai'i, deploys such an air curtain while also supplying ventilation through the ceiling of the coudé laboratory. The findings of computational fluid dynamics (CFD) analysis and subsequent changes to the air knife model are presented. Major design constraints include adherence to the Interface Control Document (ICD), separation of ambient and conditioned air, unidirectional outflow into the coudé laboratory, integration of a deployable glass window, and maintenance and accessibility requirements. Optimized design of the air knife successfully holds full 12 Pa backpressure under temperature gradients of up to 20°C while maintaining unidirectional outflow. This is a significant improvement upon the 0.25 Pa pressure differential that the initial configuration, tested by Linden and Phelps, indicated the curtain could hold. CFD post-processing, developed by Vogiatzis, is validated against interferometry results of the initial air knife seeing evaluation performed by Hubbard and Schoening. This is done by developing a CFD simulation of the initial experiment and using Vogiatzis' method to calculate error introduced along the optical path. Seeing errors for both temperature differentials tested in the initial experiment match well with seeing results obtained from the CFD analysis and thus validate the post-processing model. Application of this model to the realizable air knife assembly yields seeing errors that are well within the error budget under which the air knife interface falls, even with a temperature differential of 20°C between laboratory and ambient spaces. With ambient temperature set to 0°C and conditioned temperature set to 20

  13. Subchannel and Computational Fluid Dynamic Analyses of a Model Pin Bundle

    Energy Technology Data Exchange (ETDEWEB)

    Gairola, A.; Arif, M.; Suh, K. Y. [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    The current study showed that the simplistic approach of the subchannel analysis code MATRA was not good at capturing the physical behavior of the coolant inside the rod bundle. With the incorporation of a more detailed geometry of the grid spacer in the CFX code it was possible to approach the experimental values. However, it is vital to incorporate more advanced turbulence mixing models to more realistically simulate the behavior of the liquid metal coolant inside the model pin bundle, in parallel with the incorporation of the bottom and top grid structures. In the framework of the 11th international meeting of the International Association for Hydraulic Research and Engineering (IAHR) working group on advanced reactor thermal hydraulics, a standard problem was conducted. The essence of the problem was to check the hydraulics and heat transfer in a novel pin bundle with different pitch-to-rod-diameter ratios and heat fluxes, cooled by liquid metal. The standard problem stems from the field of nuclear safety research, with the idea of validating and checking the performance of computer codes against experimental results. Comprehensive checks between the two will help improve the dependability and exactness of the codes used for accident simulations.

  14. Integration of 3d Models and Diagnostic Analyses Through a Conservation-Oriented Information System

    Science.gov (United States)

    Mandelli, A.; Achille, C.; Tommasi, C.; Fassi, F.

    2017-08-01

    In recent years, mature technologies for producing high-quality virtual 3D replicas of Cultural Heritage (CH) artefacts have emerged thanks to the progress of Information Technology (IT) tools. These methods are an efficient way to present digital models that can be used for several purposes: heritage management, support to conservation, virtual restoration, reconstruction and colouring, art cataloguing and visual communication. The work presented is an emblematic case study oriented to preventive conservation through monitoring activities, using different acquisition methods and instruments. It was developed within a project funded by the Lombardy Region, Italy, called "Smart Culture", which aimed to realise a platform giving users easy access to CH artefacts, using a very famous statue as an example. The final product is a 3D reality-based model that contains a great deal of information and can be consulted through a common web browser. In the end, it was possible to define general strategies oriented to the maintenance and valorisation of CH artefacts, which, in this specific case, must consider the integration of different techniques and competencies to obtain a complete, accurate and continuative monitoring of the statue.

  15. Preliminary Thermal Hydraulic Analyses of the Conceptual Core Models with Tubular Type Fuel Assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Chae, Hee Taek; Park, Jong Hark; Park, Cheol

    2006-11-15

    A new research reactor (AHR, Advanced HANARO Reactor) based on the HANARO has been under conceptual development for the future needs of research reactors. A tubular type fuel was considered as one of the fuel options for the AHR. A tubular type fuel assembly has several curved fuel plates arranged with a constant small gap to build up cooling channels, which is very similar to an annular pipe with many layers. This report presents a preliminary analysis of the thermal hydraulic characteristics and safety margins for three conceptual core models using tubular fuel assemblies. Four design criteria, namely the fuel temperature, ONB (Onset of Nucleate Boiling) margin, minimum DNBR (Departure from Nucleate Boiling Ratio) and OFIR (Onset of Flow Instability Ratio), were investigated over various core flow velocities in normal operating conditions. The primary coolant flow rate based on a conceptual core model was suggested as design information for the process design of the primary cooling system. A computational fluid dynamics analysis was also carried out to evaluate the coolant velocity distributions between tubular channels and the pressure drop characteristics of the tubular fuel assembly.

  16. A new non-randomized model for analysing sensitive questions with binary outcomes.

    Science.gov (United States)

    Tian, Guo-Liang; Yu, Jun-Wu; Tang, Man-Lai; Geng, Zhi

    2007-10-15

    We propose a new non-randomized model for assessing the association of two sensitive questions with binary outcomes. Under the new model, respondents only need to answer a non-sensitive question instead of the original two sensitive questions. As a result, it can protect a respondent's privacy, avoid the usage of any randomizing device, and be applied to both face-to-face interviews and mail questionnaires. We derive the constrained maximum likelihood estimates of the cell probabilities and the odds ratio for the two binary variables associated with the sensitive questions via the EM algorithm. The corresponding standard error estimates are then obtained by a bootstrap approach. A likelihood ratio test and a chi-squared test are developed for testing association between the two binary variables. We discuss the loss of information due to the introduction of the non-sensitive question, and the design of the co-operative parameters. Simulations are performed to evaluate the empirical type I error rates and powers of the two tests. In addition, a simulation is conducted to study the relationship between the probability of obtaining valid estimates and the sample size for any given cell probability vector. A real data set from an AIDS study is used to illustrate the proposed methodologies.

  17. Coupled biophysical global ocean model and molecular genetic analyses identify multiple introductions of cryptogenic species.

    Science.gov (United States)

    Dawson, Michael N; Sen Gupta, Alex; England, Matthew H

    2005-08-23

    The anthropogenic introduction of exotic species is one of the greatest modern threats to marine biodiversity. Yet exotic species introductions remain difficult to predict and are easily misunderstood because knowledge of natural dispersal patterns, species diversity, and biogeography is often insufficient to distinguish between a broadly dispersed natural population and an exotic one. Here we compare a global molecular phylogeny of a representative marine meroplanktonic taxon, the moon-jellyfish Aurelia, with natural dispersion patterns predicted by a global biophysical ocean model. Despite assumed high dispersal ability, the phylogeny reveals many cryptic species and predominantly regional structure with one notable exception: the globally distributed Aurelia sp.1, which, molecular data suggest, may occasionally traverse the Pacific unaided. This possibility is refuted by the ocean model, which shows much more limited dispersion and patterns of distribution broadly consistent with modern biogeographic zones, thus identifying multiple introductions worldwide of this cryptogenic species. This approach also supports existing evidence that (i) the occurrence in Hawaii of Aurelia sp. 4 and other native Indo-West Pacific species with similar life histories is most likely due to anthropogenic translocation, and (ii) there may be a route for rare natural colonization of northeast North America by the European marine snail Littorina littorea, whose status as endemic or exotic is unclear.

  18. Cotton chromosome substitution lines crossed with cultivars: genetic model evaluation and seed trait analyses.

    Science.gov (United States)

    Wu, Jixiang; McCarty, Jack C; Jenkins, Johnie N

    2010-05-01

    Seed from upland cotton, Gossypium hirsutum L., provides a desirable and important nutrition profile. In this study, several seed traits (protein content, oil content, seed hull fiber content, seed index, seed volume, embryo percentage) for F3 hybrids of 13 cotton chromosome substitution lines crossed with five elite cultivars over four environments were evaluated. Oil and protein were expressed both as percentage of total seed weight and as an index which is the grams of product/100 seeds. An additive and dominance (AD) genetic model with cytoplasmic effects was designed, assessed by simulations, and employed to analyze these seed traits. Simulated results showed that this model was sufficient for analyzing the data structure with F3 and parents in multiple environments without replications. Significant cytoplasmic effects were detected for seed oil content, oil index, seed index, seed volume, and seed embryo percentage. Additive effects were significant for protein content, fiber content, protein index, oil index, fiber index, seed index, seed volume, and embryo percentage. Dominance effects were significant for oil content, oil index, seed index, and seed volume. Cytoplasmic and additive effects for parents and dominance effects in homozygous and heterozygous forms were predicted. Favorable genetic effects were predicted in this study and the results provided evidence that these seed traits can be genetically improved. In addition, chromosome associations with AD effects were detected and discussed in this study.

  19. Analysing and combining atmospheric general circulation model simulations forced by prescribed SST: northern extratropical response

    Directory of Open Access Journals (Sweden)

    K. Maynard

    2001-06-01

    Full Text Available The ECHAM 3.2 (T21), ECHAM 4 (T30) and LMD (version 6, grid-point resolution with 96 longitudes × 72 latitudes) atmospheric general circulation models were integrated through the period 1961 to 1993 forced with the same observed Sea Surface Temperatures (SSTs) as compiled at the Hadley Centre. Three runs were made for each model starting from different initial conditions. The mid-latitude circulation pattern which maximises the covariance between the simulation and the observations, i.e. the most skilful mode, and the one which maximises the covariance amongst the runs, i.e. the most reproducible mode, is calculated as the leading mode of a Singular Value Decomposition (SVD) analysis of observed and simulated Sea Level Pressure (SLP) and geopotential height at 500 hPa (Z500) seasonal anomalies. A common response amongst the different models, having different resolution and parametrization, should be considered as a more robust atmospheric response to SST than the same response obtained with only one model. A robust skilful mode is found mainly in December-February (DJF), and in June-August (JJA). In DJF, this mode is close to the SST-forced pattern found by Straus and Shukla (2000) over the North Pacific and North America with a wavy out-of-phase between the NE Pacific and the SE US on the one hand and the NE North America on the other. This pattern evolves in a NAO-like pattern over the North Atlantic and Europe (SLP) and in a more N-S tripole on the Atlantic and European sector with an out-of-phase between the middle Europe on the one hand and the northern and southern parts on the other (Z500). There are almost no spatial shifts between either field around North America (just a slight eastward shift of the highest absolute heterogeneous correlations for SLP relative to the Z500 ones).
The time evolution of the SST-forced mode is moderatly to strongly related to the ENSO/LNSO events but the spread amongst the ensemble of runs is not systematically related
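
The most-skilful-mode calculation described in this abstract can be sketched in a few lines: the leading SVD mode of the cross-covariance between an observed and a simulated anomaly data set, i.e. the pattern pair whose expansion coefficients have maximal covariance. This is an illustrative reconstruction on synthetic data; the power-iteration solver and the toy fields are assumptions, not the authors' code:

```python
import math

def svd_leading_mode(X, Y):
    """Leading SVD mode of the cross-covariance between two anomaly
    data sets X (n_time x p) and Y (n_time x q): the pattern pair that
    maximises the covariance of the expansion coefficients."""
    n, p, q = len(X), len(X[0]), len(Y[0])
    # cross-covariance matrix C[i][j] = cov(X[:, i], Y[:, j])
    C = [[sum(X[t][i] * Y[t][j] for t in range(n)) / (n - 1)
          for j in range(q)] for i in range(p)]
    # power iteration on C^T C yields the leading right singular vector v
    v = [1.0] * q
    for _ in range(200):
        Cv = [sum(C[i][j] * v[j] for j in range(q)) for i in range(p)]
        w = [sum(C[i][j] * Cv[i] for i in range(p)) for j in range(q)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    Cv = [sum(C[i][j] * v[j] for j in range(q)) for i in range(p)]
    sigma = math.sqrt(sum(x * x for x in Cv))   # leading singular value
    u = [x / sigma for x in Cv]                 # leading left pattern
    # expansion coefficients (time series) of the mode in each field
    a = [sum(X[t][i] * u[i] for i in range(p)) for t in range(n)]
    b = [sum(Y[t][j] * v[j] for j in range(q)) for t in range(n)]
    return u, v, a, b
```

On real model output, each SLP or Z500 seasonal anomaly grid would be flattened to one row per season; the heterogeneous correlations quoted in the abstract are then correlations between one field and the other field's expansion coefficients.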

  20. Analysing and combining atmospheric general circulation model simulations forced by prescribed SST. Northern extra tropical response

    Energy Technology Data Exchange (ETDEWEB)

    Moron, V. [Universite' de Provence, UFR des sciences geographiques et de l' amenagement, Aix-en-Provence (France); Navarra, A. [Istituto Nazionale di Geofisica e Vulcanologia, Bologna (Italy); Ward, M. N. [University of Oklahoma, Cooperative Institute for Mesoscale Meteorological Studies, Norman OK (United States); Foland, C. K. [Hadley Center for Climate Prediction and Research, Meteorological Office, Bracknell (United Kingdom); Friederichs, P. [Meteorologisches Institute des Universitaet Bonn, Bonn (Germany); Maynard, K.; Polcher, J. [Paris Universite' Pierre et Marie Curie, Paris (France). Centre Nationale de la Recherche Scientifique, Laboratoire de Meteorologie Dynamique, Paris

    2001-08-01

    The ECHAM 3.2 (T21), ECHAM 4 (T30) and LMD (version 6, grid-point resolution with 96 longitudes x 72 latitudes) atmospheric general circulation models were integrated through the period 1961 to 1993 forced with the same observed Sea Surface Temperatures (SSTs) as compiled at the Hadley Centre. Three runs were made for each model starting from different initial conditions. The mid-latitude circulation pattern which maximises the covariance between the simulation and the observations, i.e. the most skilful mode, and the one which maximises the covariance amongst the runs, i.e. the most reproducible mode, is calculated as the leading mode of a Singular Value Decomposition (SVD) analysis of observed and simulated Sea Level Pressure (SLP) and geopotential height at 500 hPa (Z500) seasonal anomalies. A common response amongst the different models, having different resolution and parametrization, should be considered as a more robust atmospheric response to SST than the same response obtained with only one model. A robust skilful mode is found mainly in December-February (DJF), and in June-August (JJA). In DJF, this mode is close to the SST-forced pattern found by Straus and Shukla (2000) over the North Pacific and North America with a wavy out-of-phase between the NE Pacific and the SE US on the one hand and the NE North America on the other. This pattern evolves in a NAO-like pattern over the North Atlantic and Europe (SLP) and in a more N-S tripole on the Atlantic and European sector with an out-of-phase between the middle Europe on the one hand and the northern and southern parts on the other (Z500). There are almost no spatial shifts between either field around North America (just a slight eastward shift of the highest absolute heterogeneous correlations for SLP relative to the Z500 ones). The time evolution of the SST-forced mode is moderately to strongly related to the ENSO/LNSO events but the spread amongst the ensemble of runs is not systematically related at all to

  1. Possibilities for a sustainable development: analyses on the ''World Model'' [Muligheter for en baerekraftig utvikling; analyser paa ''World Model'']

    Energy Technology Data Exchange (ETDEWEB)

    Bjerkholt, O.; Johnsen, T.; Thonstad, K.

    1993-01-01

    This report is the final report of a project carried out by the Central Bureau of Statistics of Norway. It presents analyses of the relations between economic development, energy consumption and emissions of pollutants to air in a global perspective. The analyses are based on the ''World Model'', developed at the Institute for Economic Analysis at New York University. They show that it will be very difficult to achieve a global stabilization of CO2 emissions at the 1990 level. In the reference scenario of the United Nations report ''Our Common Future'', CO2 emissions increase by 73% from 1990 to 2020. Even in the scenario with the most drastic measures, emissions in 2020 will be about 43% above the 1990 level, according to the present report. Stabilizing global emissions at the 1990 level would require strong measures beyond those assumed in the model calculations, or a considerable breakthrough in energy technology. 17 refs., 5 figs., 21 tabs.

  2. A model using marginal efficiency of investment to analyse carbon and nitrogen interactions in forested ecosystems

    Science.gov (United States)

    Thomas, R. Q.; Williams, M.

    2014-12-01

    Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System modelling community. Here we explore the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants using a new, simple model of ecosystem C-N cycling and interactions (ACONITE). ACONITE builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C:N, N fixation, and plant C use efficiency) based on the optimization of the marginal change in net C or N uptake associated with a change in allocation of C or N to plant tissues. We simulated and evaluated steady-state and transient ecosystem stocks and fluxes in three different forest ecosystem types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C:N differed among the three ecosystem types, consistent with observed leaf traits. Gross primary productivity (GPP) and net primary productivity (NPP) estimates compared well to observed fluxes at the simulation sites. A sensitivity analysis revealed that parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C:N. Also, a widely used linear leaf N-respiration relationship did not yield a realistic leaf C:N, while a more recently reported non-linear relationship simulated leaf C:N that compared better to the global trait database than the linear relationship. Overall, our ability to constrain leaf area index and allow spatially and temporally variable leaf C:N can help address challenges simulating these properties in ecosystem and Earth System models. Furthermore, the simple approach with emergent properties based on coupled C-N dynamics has
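
The optimisation principle in this abstract, allocating resources to the tissue with the greatest marginal return, can be sketched as a greedy finite-difference step. The gain function below is a hypothetical stand-in for ACONITE's C-N economics, not the model's actual equations:

```python
import math

def marginal_allocation(gain, pools, delta=1.0):
    """One allocation step: add `delta` units of C to whichever tissue
    pool yields the largest marginal increase in net C uptake, estimated
    by a finite difference of the (user-supplied) gain function."""
    base = gain(pools)
    best, best_marginal = None, -float("inf")
    for tissue in pools:
        trial = dict(pools)
        trial[tissue] += delta
        marginal = (gain(trial) - base) / delta
        if marginal > best_marginal:
            best, best_marginal = tissue, marginal
    pools[best] += delta
    return best, best_marginal
```

Iterating this step until all marginal returns are equal (or non-positive) gives the optimum that ACONITE's emergent properties, such as leaf area index and leaf C:N, are derived from.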

  3. Application of satellite precipitation data to analyse and model arbovirus activity in the tropics

    Directory of Open Access Journals (Sweden)

    Corner Robert J

    2011-01-01

    Full Text Available Abstract Background Murray Valley encephalitis virus (MVEV) is a mosquito-borne Flavivirus (Flaviviridae: Flavivirus) which is closely related to Japanese encephalitis virus, West Nile virus and St. Louis encephalitis virus. MVEV is enzootic in northern Australia and Papua New Guinea and epizootic in other parts of Australia. Activity of MVEV in Western Australia (WA) is monitored by detection of seroconversions in flocks of sentinel chickens at selected sample sites throughout WA. Rainfall is a major environmental factor influencing MVEV activity. Utilising data on rainfall and seroconversions, statistical relationships between MVEV occurrence and rainfall can be determined. These relationships can be used to predict MVEV activity which, in turn, provides the general public with important information about disease transmission risk. Since ground measurements of rainfall are sparse and irregularly distributed, especially in northern WA where rainfall is spatially and temporally highly variable, remote sensing (RS) data represent an attractive alternative to ground measurements. However, a number of competing alternatives are available and careful evaluation is essential to determine the most appropriate product for a given problem. Results The Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 product was chosen from a range of RS rainfall products to develop rainfall-based predictor variables and build logistic regression models for the prediction of MVEV activity in the Kimberley and Pilbara regions of WA. Two models employing monthly time-lagged rainfall variables showed the strongest discriminatory ability, 0.74 and 0.80, as measured by the Receiver Operating Characteristic area under the curve (ROC AUC). Conclusions TMPA data provide a state-of-the-art data source for the development of rainfall-based predictive models for Flavivirus activity in tropical WA.
Compared to
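
The model-building step in this abstract can be sketched minimally: a logistic regression on lagged-rainfall predictors, scored by ROC AUC via the rank identity. The gradient-descent fit and the synthetic data are illustrative assumptions, not the authors' GLM software:

```python
import math

def fit_logistic(X, y, lr=0.1, steps=2000):
    """Plain stochastic-gradient logistic regression (intercept + weights)
    for a binary outcome, e.g. seroconversion vs lagged rainfall."""
    w = [0.0] * (len(X[0]) + 1)            # w[0] is the intercept
    for _ in range(steps):
        for xi, yi in zip(X, y):
            z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
            p = 1 / (1 + math.exp(-z))
            err = yi - p
            w[0] += lr * err
            for j, xj in enumerate(xi):
                w[j + 1] += lr * err * xj
    return w

def predict(w, xi):
    z = w[0] + sum(wj * xj for wj, xj in zip(w[1:], xi))
    return 1 / (1 + math.exp(-z))

def roc_auc(scores, labels):
    """ROC AUC via the rank-sum identity: the probability that a random
    positive case scores higher than a random negative (ties count 1/2)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.5 means no discrimination; the 0.74 and 0.80 reported in the abstract indicate moderate to good discriminatory ability.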

  4. IMPROVEMENTS IN HANFORD TRANSURANIC (TRU) PROGRAM UTILIZING SYSTEMS MODELING AND ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    UYTIOCO EM

    2007-11-12

    Hanford's Transuranic (TRU) Program is responsible for certifying contact-handled (CH) TRU waste and shipping the certified waste to the Waste Isolation Pilot Plant (WIPP). Hanford's CH TRU waste includes material that is in retrievable storage as well as above ground storage, and newly generated waste. Certifying a typical container entails retrieving and then characterizing it (Real-Time Radiography, Non-Destructive Assay, and Head Space Gas Sampling), validating records (data review and reconciliation), and designating the container for a payload. The certified payload is then shipped to WIPP. Systems modeling and analysis techniques were applied to Hanford's TRU Program to help streamline the certification process and increase shipping rates.

  5. Analysing green supply chain management practices in Brazil's electrical/electronics industry using interpretive structural modelling

    DEFF Research Database (Denmark)

    Govindan, Kannan; Kannan, Devika; Mathiyazhagan, K.

    2013-01-01

    Industries need to adopt environmental management concepts in traditional supply chain management. Green supply chain management (GSCM) is an established concept to ensure environment-friendly activities in industry. This paper identifies the relationships of driving and dependence...... that exist between GSCM practices with regard to their adoption within the Brazilian electrical/electronic industry, with the help of interpretive structural modelling (ISM). From the results, we infer that cooperation with customers for eco-design is the practice driving other practices, and this practice plays...... a vital role among the other practices. Commitment to GSCM from senior managers and cooperation with customers for cleaner production occupy the highest level. © 2013 Taylor & Francis....

  6. Statistical Analyses and Modeling of the Implementation of Agile Manufacturing Tactics in Industrial Firms

    Directory of Open Access Journals (Sweden)

    Mohammad D. AL-Tahat

    2012-01-01

    Full Text Available This paper provides a review and introduction on agile manufacturing. Tactics of agile manufacturing are mapped into different production areas (eight latent constructs: manufacturing equipment and technology, process technology and know-how, quality and productivity improvement, production planning and control, shop floor management, product design and development, supplier relationship management, and customer relationship management). The implementation level of agile manufacturing tactics is investigated in each area. A structural equation model is proposed. Hypotheses are formulated. Feedback from 456 firms is collected using a five-point Likert-scale questionnaire. Statistical analysis is carried out using IBM SPSS and AMOS. Multicollinearity, content validity, consistency, construct validity, ANOVA analysis, and relationships between agile components are tested. The results of this study prove that the agile manufacturing tactics have a positive effect on the overall agility level. This conclusion can be used by manufacturing firms to manage challenges when trying to be agile.
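
Before fitting a structural equation model to Likert-scale data like this, the internal consistency of each construct's items is typically screened. A minimal Cronbach's alpha is one such consistency check (an illustrative sketch, not the authors' SPSS/AMOS workflow):

```python
def cronbach_alpha(items):
    """items: one list of scores per questionnaire item, same respondents
    in the same order. Returns Cronbach's alpha, the internal-consistency
    statistic: (k / (k-1)) * (1 - sum(item variances) / variance(totals))."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var(it) for it in items)
    totals = [sum(items[i][r] for i in range(k)) for r in range(n)]
    return (k / (k - 1)) * (1 - item_vars / var(totals))
```

Values above roughly 0.7 are conventionally taken as acceptable consistency before proceeding to construct-validity and SEM analyses.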

  7. Reporting Results from Structural Equation Modeling Analyses in Archives of Scientific Psychology.

    Science.gov (United States)

    Hoyle, Rick H; Isherwood, Jennifer C

    2013-02-01

    Psychological research typically involves the analysis of data (e.g., questionnaire responses, records of behavior) using statistical methods. The description of how those methods are used and the results they produce is a key component of scholarly publications. Despite their importance, these descriptions are not always complete and clear. In order to ensure the completeness and clarity of these descriptions, the Archives of Scientific Psychology requires that authors of manuscripts to be considered for publication adhere to a set of publication standards. Although the current standards cover most of the statistical methods commonly used in psychological research, they do not cover them all. In this manuscript, we propose adjustments to the current standards and the addition of additional standards for a statistical method not adequately covered in the current standards-structural equation modeling (SEM). Adherence to the standards we propose would ensure that scholarly publications that report results of data analyzed using SEM are complete and clear.

  8. Personality change over 40 years of adulthood: hierarchical linear modeling analyses of two longitudinal samples.

    Science.gov (United States)

    Helson, Ravenna; Jones, Constance; Kwan, Virginia S Y

    2002-09-01

    Normative personality change over 40 years was shown in 2 longitudinal cohorts with hierarchical linear modeling of California Psychological Inventory data obtained at multiple times between ages 21-75. Although themes of change and the paucity of differences attributable to gender and cohort largely supported findings of multiethnic cross-sectional samples, the authors also found much quadratic change and much individual variability. The form of quadratic change supported predictions about the influence of period of life and social climate as factors in change over the adult years: Scores on Dominance and Independence peaked in the middle age of both cohorts, and scores on Responsibility were lowest during peak years of the culture of individualism. The idea that personality change is most pronounced before age 30 and then reaches a plateau received no support.
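
The quadratic change reported here implies a peak age of -b1/(2·b2) for each scale. A sketch of the Level-1 quadratic growth-curve fit of such an HLM analysis, applied here to a single person's scores by ordinary least squares (illustrative data, not the CPI data):

```python
def quad_fit(ages, scores):
    """OLS fit of score = b0 + b1*age + b2*age^2, the Level-1 quadratic
    growth curve of an HLM analysis, for one person. Returns (b0, b1, b2)."""
    # normal equations X'X b = X'y for design columns [1, age, age^2]
    S = [sum(a ** k for a in ages) for k in range(5)]
    XtX = [[S[0], S[1], S[2]], [S[1], S[2], S[3]], [S[2], S[3], S[4]]]
    Xty = [sum(y * a ** k for a, y in zip(ages, scores)) for k in range(3)]

    def det3(m):  # determinant of a 3x3 matrix, for Cramer's rule
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
              - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
              + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(XtX)
    coeffs = []
    for col in range(3):
        m = [row[:] for row in XtX]
        for r in range(3):
            m[r][col] = Xty[r]
        coeffs.append(det3(m) / d)
    return tuple(coeffs)

def peak_age(b1, b2):
    """Age at which the quadratic curve peaks (requires b2 < 0)."""
    return -b1 / (2 * b2)
```

In the full hierarchical model, these per-person coefficients become Level-1 outcomes whose means and variances are themselves modelled at Level 2, which is how cohort and gender effects are tested.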

  9. Exploring prospective secondary mathematics teachers' interpretation of student thinking through analysing students' work in modelling

    Science.gov (United States)

    Didis, Makbule Gozde; Erbas, Ayhan Kursat; Cetinkaya, Bulent; Cakiroglu, Erdinc; Alacaci, Cengiz

    2016-09-01

    Researchers point out the importance of teachers' knowledge of student thinking and the role of examining student work in various contexts to develop a knowledge base regarding students' ways of thinking. This study investigated prospective secondary mathematics teachers' interpretations of students' thinking as manifested in students' work that embodied solutions of mathematical modelling tasks. The data were collected from 25 prospective mathematics teachers enrolled in an undergraduate course through four 2-week-long cycles. Analysis of data revealed that the prospective teachers interpreted students' thinking in four ways: describing, questioning, explaining, and comparing. Moreover, whereas some of the prospective teachers showed a tendency to increase their attention to the meaning of students' ways of thinking more while they engaged in students' work in depth over time and experience, some of them continued to focus on only judging the accuracy of students' thinking. The implications of the findings for understanding and developing prospective teachers' ways of interpreting students' thinking are discussed.

  10. The usefulness of optical analyses for detecting vulnerable plaques using rabbit models

    Science.gov (United States)

    Nakai, Kanji; Ishihara, Miya; Kawauchi, Satoko; Shiomi, Masashi; Kikuchi, Makoto; Kaji, Tatsumi

    2011-03-01

    Purpose: Carotid artery stenting (CAS) has become a widely used option for treatment of carotid stenosis. Although technical improvements have led to a decrease in complications related to CAS, distal embolism continues to be a problem. The purpose of this research was to investigate the usefulness of optical methods (Time-Resolved Laser-Induced Fluorescence Spectroscopy [TR-LIFS] and reflection spectroscopy [RS]) as diagnostic tools for assessment of vulnerable atherosclerotic lesions, using rabbit models of vulnerable plaque. Materials & Methods: Male Japanese white rabbits were divided into a high-cholesterol diet group and a normal diet group. In addition, we used a Watanabe heritable hyperlipidemic (WHHL) rabbit, having confirmed the reliability of this animal model for the study. Experiment 1: TR-LIFS. Fluorescence was induced using the third harmonic wave of a Q-switched Nd:YAG laser. The TR-LIFS was performed using a photonic multi-channel analyzer with ICCD (wavelength range, 200-860 nm). Experiment 2: RS. Reflection spectra in the wavelength range of 900 to 1700 nm were acquired using a spectrometer. Results: In the TR-LIFS, the peak wavelength was shifted to longer values by plaque formation. The TR-LIFS method revealed a difference in peak levels between a normal aorta and a lipid-rich aorta. The RS method showed increased absorption from 1450 to 1500 nm for lipid-rich plaques. We observed absorption around 1200 nm due to lipid only in the WHHL group. Conclusion: These optical analysis methods might be useful for diagnosis of vulnerable plaques. Keywords: Carotid artery stenting, vulnerable plaque, Time-Resolved Laser-Induced Fluorescence

  11. Comparison of statistical inferences from the DerSimonian-Laird and alternative random-effects model meta-analyses - an empirical assessment of 920 Cochrane primary outcome meta-analyses

    DEFF Research Database (Denmark)

    Thorlund, Kristian; Wetterslev, Jørn; Awad, Tahany;

    2011-01-01

    In random-effects model meta-analysis, the conventional DerSimonian-Laird (DL) estimator typically underestimates the between-trial variance. Alternative variance estimators have been proposed to address this bias. This study aims to empirically compare statistical inferences from random......-effects model meta-analyses on the basis of the DL estimator and four alternative estimators, as well as distributional assumptions (normal distribution and t-distribution) about the pooled intervention effect. We evaluated the discrepancies of p-values, 95% confidence intervals (CIs) in statistically...... significant meta-analyses, and the degree (percentage) of statistical heterogeneity (e.g. I(2)) across 920 Cochrane primary outcome meta-analyses. In total, 414 of the 920 meta-analyses were statistically significant with the DL meta-analysis, and 506 were not. Compared with the DL estimator, the four...
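
The DerSimonian-Laird estimator under comparison is a short method-of-moments computation; a minimal sketch (not the Cochrane software, and using a normal 95% CI where some of the compared alternatives use a t-distribution):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling with the DerSimonian-Laird (DL) estimator:
    method-of-moments between-trial variance tau^2 from Cochran's Q,
    then inverse-variance weights 1 / (v_i + tau^2)."""
    w = [1 / v for v in variances]
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw
    Q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))
    df = len(effects) - 1
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (Q - df) / c)          # truncated at zero
    w_re = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, se, tau2, (pooled - 1.96 * se, pooled + 1.96 * se)
```

The underestimation of tau^2 discussed in the abstract makes `se` too small, which is why alternative estimators and t-based intervals can turn a "significant" DL result non-significant.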

  12. In silico analyses of dystrophin Dp40 cellular distribution, nuclear export signals and structure modeling

    Directory of Open Access Journals (Sweden)

    Alejandro Martínez-Herrera

    2015-09-01

    Full Text Available Dystrophin Dp40 is the shortest protein encoded by the DMD (Duchenne muscular dystrophy) gene. This protein is unique since it lacks the C-terminal end of dystrophins. In this data article, we describe the subcellular localization, nuclear export signals and the three-dimensional structure modeling of putative Dp40 proteins using bioinformatics tools. The Dp40 wild-type protein was predicted to be cytoplasmic, while Dp40n4 was predicted to be nuclear. Changes L93P and L170P are involved in the nuclear localization of the Dp40n4 protein. A close analysis of the Dp40 protein showed that amino acids 93LEQEHNNLV101 and 168LLLHDSIQI176 could function as NES sequences, and these NES scores are lost in Dp40n4. In addition, the changes L93/170P modify the tertiary structure of the putative Dp40 mutants. The analysis showed that changes of residues 93 and 170 from leucine to proline allow the nuclear localization of Dp40 proteins. The data described here are related to the research article entitled “EF-hand domains are involved in the differential cellular distribution of dystrophin Dp40” (J. Aragón et al., Neurosci. Lett. 600 (2015) 115–120) [1].

  13. Analysing hydro-mechanical behaviour of reinforced slopes through centrifuge modelling

    Science.gov (United States)

    Veenhof, Rick; Wu, Wei

    2017-04-01

    Every year, slope instability is causing casualties and damage to properties and the environment. The behaviour of slopes during and after these kind of events is complex and depends on meteorological conditions, slope geometry, hydro-mechanical soil properties, boundary conditions and the initial state of the soils. This study describes the effects of adding reinforcement, consisting of randomly distributed polyolefin monofilament fibres or Ryegrass (Lolium), on the behaviour of medium-fine sand in loose and medium dense conditions. Direct shear tests were performed on sand specimens with different void ratios, water content and fibre or root density, respectively. To simulate the stress state of real scale field situations, centrifuge model tests were conducted on sand specimens with different slope angles, thickness of the reinforced layer, fibre density, void ratio and water content. An increase in peak shear strength is observed in all reinforced cases. Centrifuge tests show that for slopes that are reinforced the period until failure is extended. The location of shear band formation and patch displacement behaviour indicate that the design of slope reinforcement has a significant effect on the failure behaviour. Future research will focus on the effect of plant water uptake on soil cohesion.

  14. Biomechanical analyses of prosthetic mesh repair in a hiatal hernia model.

    Science.gov (United States)

    Alizai, Patrick Hamid; Schmid, Sofie; Otto, Jens; Klink, Christian Daniel; Roeth, Anjali; Nolting, Jochen; Neumann, Ulf Peter; Klinge, Uwe

    2014-10-01

    Recurrence rate of hiatal hernia can be reduced with prosthetic mesh repair; however, type and shape of the mesh are still a matter of controversy. The purpose of this study was to investigate the biomechanical properties of four conventional meshes: pure polypropylene mesh (PP-P), polypropylene/poliglecaprone mesh (PP-U), polyvinylidenefluoride/polypropylene mesh (PVDF-I), and pure polyvinylidenefluoride mesh (PVDF-S). Meshes were tested either in warp direction (parallel to production direction) or perpendicular to the warp direction. A Zwick testing machine was used to measure elasticity and effective porosity of the textile probes. Stretching of the meshes in warp direction required forces that were up to 85-fold higher than the same elongation in perpendicular direction. Stretch stress led to loss of effective porosity in most meshes, except for PVDF-S. Biomechanical impact of the mesh was additionally evaluated in a hiatal hernia model. The different meshes were used either as rectangular patches or as circular meshes. Circular meshes led to a significant reinforcement of the hiatus, largely unaffected by the orientation of the warp fibers. In contrast, rectangular meshes provided a significant reinforcement only when warp fibers ran perpendicular to the crura. Anisotropic elasticity of prosthetic meshes should therefore be considered in hiatal closure with rectangular patches.

  15. Alpins and thibos vectorial astigmatism analyses: proposal of a linear regression model between methods

    Directory of Open Access Journals (Sweden)

    Giuliano de Oliveira Freitas

    2013-10-01

    Full Text Available PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assigned to two phacoemulsification groups: one to receive an AcrySof® Toric intraocular lens (IOL) in both eyes, and another to have an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected, and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the ratio of post- and preoperative Thibos APV (APVratio) and the Alpins percentage of success (%Success) was found (Spearman's ρ = -0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) × 100. CONCLUSION: The linear regression we found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.
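
The abstract's regression can be made concrete: the Thibos power-vector components of a cylinder give an astigmatic power vector of magnitude |cyl|/2, and the reported fit is %Success = (-APVratio + 1.00) × 100. A sketch, where the component formulas follow the standard Thibos definitions and the data are illustrative:

```python
import math

def astig_power_vector(cyl, axis_deg):
    """Thibos power-vector components (J0, J45) of a cylinder `cyl`
    at `axis_deg`, and the astigmatic power vector magnitude APV."""
    J0 = -cyl / 2 * math.cos(2 * math.radians(axis_deg))
    J45 = -cyl / 2 * math.sin(2 * math.radians(axis_deg))
    return J0, J45, math.hypot(J0, J45)

def percent_success(apv_pre, apv_post):
    """Regression reported in the abstract: %Success = (1 - APVratio) x 100,
    i.e. full correction (APVratio = 0) scores 100%."""
    return (1 - apv_post / apv_pre) * 100
```

For example, reducing 2.00 D of astigmatism to 0.50 D postoperatively gives an APVratio of 0.25 and hence a predicted 75% success.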

  16. Fungal-Induced Deterioration of Mural Paintings: In Situ and Mock-Model Microscopy Analyses.

    Science.gov (United States)

    Unković, Nikola; Grbić, Milica Ljaljević; Stupar, Miloš; Savković, Željko; Jelikić, Aleksa; Stanojević, Dragan; Vukojević, Jelena

    2016-04-01

    Fungal deterioration of frescoes was studied in situ on a selected Serbian church, and on a laboratory model, utilizing standard and newly implemented microscopy techniques. Scanning electron microscopy (SEM) with energy-dispersive X-ray confirmed the limestone components of the plaster. Pigments used were identified as carbon black, green earth, iron oxide, ocher, and an ocher/cinnabar mixture. In situ microscopy, applied via a portable microscope ShuttlePix P-400R, proved very useful for detection of invisible micro-impairments and hidden, symptomless, microbial growth. SEM and optical microscopy established that observed deterioration symptoms, predominantly discoloration and pulverization of painted layers, were due to bacterial filaments and fungal hyphal penetration, and formation of a wide range of fungal structures (i.e., melanized hyphae, chlamydospores, microcolonial clusters, Cladosporium-like conidia, and Chaetomium perithecia and ascospores). The year-round monitoring of spontaneous and induced fungal colonization of a "mock painting" in controlled laboratory conditions confirmed the decisive role of humidity level (70.18±6.91% RH) in efficient colonization of painted surfaces, and demonstrated increased bioreceptivity of painted surfaces to fungal colonization when plant-based adhesives (ilinocopie, murdent), compared with organic adhesives of animal origin (bone glue, egg white), are used for pigment sizing.

  17. Analysing movements in investor’s risk aversion using the Heston volatility model

    Directory of Open Access Journals (Sweden)

    Alexie ALUPOAIEI

    2013-03-01

    Full Text Available In this paper we identify and analyse, where present, an "epidemiological" relationship between the forecasts of professional investors and short-term developments in the EUR/RON exchange rate. Although we do not use a typical epidemiological model of the kind employed in biological research, we investigate the hypothesis that, after the Lehman Brothers crash and the onset of the current financial crisis, the forecasts of professional investors have significant explanatory power for subsequent short-run movements of EUR/RON. How does this mechanism work? First, professional forecasters take account of current macroeconomic, financial and political conditions and elaborate forecasts. Second, based on those forecasts, they take positions in the Romanian exchange market for hedging and/or speculative purposes, positions that incorporate differing degrees of uncertainty. In parallel, part of their anticipations are disseminated to the public via media channels. When important movements are observed in the macroeconomic, financial or political sphere, the positions of professional investors in the FX derivatives market are activated. The current study represents a first step in that direction of analysis for the Romanian case. For the objectives formulated above, different measures of EUR/RON volatility are estimated and compared with implied volatilities. In a second stage, co-integration and dynamic-correlation tools are used to investigate the relationship between implied volatility and daily returns of the EUR/RON exchange rate.
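
One of the volatility measures such a study compares against implied volatility is the annualised realised volatility of daily log returns; a minimal sketch (the 252-trading-day annualisation is a conventional assumption, not stated in the abstract):

```python
import math

def realized_vol(prices, trading_days=252):
    """Annualised realised volatility from a daily price series: the
    sample standard deviation of log returns, scaled by sqrt(252)."""
    rets = [math.log(p1 / p0) for p0, p1 in zip(prices, prices[1:])]
    m = sum(rets) / len(rets)
    var = sum((r - m) ** 2 for r in rets) / (len(rets) - 1)
    return math.sqrt(var) * math.sqrt(trading_days)
```

The gap between this backward-looking measure and the implied volatility quoted in FX options is one proxy for the risk-aversion movements the paper analyses.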

  18. Model-based analyses of bioequivalence crossover trials using the stochastic approximation expectation maximisation algorithm.

    Science.gov (United States)

    Dubois, Anne; Lavielle, Marc; Gsteiger, Sandro; Pigeolet, Etienne; Mentré, France

    2011-09-20

    In this work, we develop a bioequivalence analysis using nonlinear mixed effects models (NLMEM) that mimics the standard noncompartmental analysis (NCA). We estimate NLMEM parameters, including between-subject and within-subject variability and treatment, period and sequence effects. We explain how to perform a Wald test on a secondary parameter, and we propose an extension of the likelihood ratio test for bioequivalence. We compare these NLMEM-based bioequivalence tests with standard NCA-based tests. We evaluate by simulation the NCA and NLMEM estimates and the type I error of the bioequivalence tests. For NLMEM, we use the stochastic approximation expectation maximisation (SAEM) algorithm implemented in Monolix. We simulate crossover trials under H0 using different numbers of subjects and of samples per subject. We simulate with different settings for between-subject and within-subject variability and for the residual error variance. The simulation study illustrates the accuracy of NLMEM-based geometric means estimated with the SAEM algorithm, whereas the NCA estimates are biased for sparse designs. NCA-based bioequivalence tests show good type I error except for high variability. For a rich design, type I errors of NLMEM-based bioequivalence tests (Wald test and likelihood ratio test) do not differ from the nominal level of 5%. Type I errors are inflated for sparse designs. We apply the bioequivalence Wald test based on NCA and NLMEM estimates to a three-way crossover trial, showing that Omnitrope® (Sandoz GmbH, Kundl, Austria) powder and solution are bioequivalent to Genotropin® (Pfizer Pharma GmbH, Karlsruhe, Germany). NLMEM-based bioequivalence tests are an alternative to standard NCA-based tests. However, caution is needed for small sample sizes and highly variable drugs.
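
The criterion behind both the NCA- and NLMEM-based tests is the standard one: bioequivalence is concluded when the 90% confidence interval of the geometric mean ratio lies within 0.80-1.25. A normal-approximation sketch (with small samples a t-quantile would replace the fixed z of 1.6449):

```python
import math

def be_90ci(gmr, se_log, z=1.6449):
    """90% CI for a geometric mean ratio `gmr`, given the standard error
    of the log-scale treatment difference, checked against the standard
    0.80-1.25 bioequivalence window (normal approximation)."""
    lo = gmr * math.exp(-z * se_log)
    hi = gmr * math.exp(z * se_log)
    return lo, hi, (0.80 <= lo and hi <= 1.25)
```

This shows why high variability inflates type I error concerns: a larger `se_log` widens the interval, and methods that underestimate it (as NCA can for sparse designs) conclude bioequivalence too readily.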

  19. Tropical cyclones in a T159 resolution global climate model: comparison with observations and re-analyses

    Science.gov (United States)

    Bengtsson, L.; Hodges, K. I.; Esch, M.

    2007-08-01

    Tropical cyclones have been investigated in a T159 version of the MPI ECHAM5 climate model using a novel technique to diagnose the evolution of the three-dimensional vorticity structure of tropical cyclones, including their full life cycle from weak initial vortices to their possible extra-tropical transition. Results have been compared with re-analyses [the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-yr Re-analysis (ERA40) and Japanese 25 yr re-analysis (JRA25)] and observed tropical storms during the period 1978-1999 for the Northern Hemisphere. There is no indication of any trend in the number or intensity of tropical storms during this period in ECHAM5 or in re-analyses but there are distinct inter-annual variations. The storms simulated by ECHAM5 are realistic both in space and time, but the model and even more so the re-analyses, underestimate the intensities of the most intense storms (in terms of their maximum wind speeds). There is an indication of a response to El Niño-Southern Oscillation (ENSO) with a smaller number of Atlantic storms during El Niño in agreement with previous studies. The global divergence circulation responds to El Niño by setting up a large-scale convergence flow, with the centre over the central Pacific with enhanced subsidence over the tropical Atlantic. At the same time there is an increase in the vertical wind shear in the region of the tropical Atlantic where tropical storms normally develop. There is a good correspondence between the model and ERA40 except that the divergence circulation is somewhat stronger in the model. The model underestimates storms in the Atlantic but tends to overestimate them in the Western Pacific and in the North Indian Ocean. It is suggested that the overestimation of storms in the Pacific by the model is related to an overly strong response to the tropical Pacific sea surface temperature (SST) anomalies. The overestimation in the North Indian Ocean is likely to be due to an over

  20. Pathophysiologic and transcriptomic analyses of viscerotropic yellow fever in a rhesus macaque model.

    Science.gov (United States)

    Engelmann, Flora; Josset, Laurence; Girke, Thomas; Park, Byung; Barron, Alex; Dewane, Jesse; Hammarlund, Erika; Lewis, Anne; Axthelm, Michael K; Slifka, Mark K; Messaoudi, Ilhem

    2014-01-01

    Infection with yellow fever virus (YFV), an explosively replicating flavivirus, results in viral hemorrhagic disease characterized by cardiovascular shock and multi-organ failure. Unvaccinated populations experience 20 to 50% fatality. Few studies have examined the pathophysiological changes that occur in humans during YFV infection due to the sporadic nature and remote locations of outbreaks. Rhesus macaques are highly susceptible to YFV infection, providing a robust animal model to investigate host-pathogen interactions. In this study, we characterized disease progression as well as alterations in immune system homeostasis, cytokine production and gene expression in rhesus macaques infected with the virulent YFV strain DakH1279 (YFV-DakH1279). Following infection, YFV-DakH1279 replicated to high titers resulting in viscerotropic disease with ∼72% mortality. Data presented in this manuscript demonstrate for the first time that lethal YFV infection results in profound lymphopenia that precedes the hallmark changes in liver enzymes and that although tissue damage was noted in liver, kidneys, and lymphoid tissues, viral antigen was only detected in the liver. These observations suggest that additional tissue damage could be due to indirect effects of viral replication. Indeed, circulating levels of several cytokines peaked shortly before euthanasia. Our study also includes the first description of YFV-DakH1279-induced changes in gene expression within peripheral blood mononuclear cells 3 days post-infection prior to any clinical signs. These data show that infection with wild type YFV-DakH1279 or live-attenuated vaccine strain YFV-17D, resulted in 765 and 46 differentially expressed genes (DEGs), respectively. DEGs detected after YFV-17D infection were mostly associated with innate immunity, whereas YFV-DakH1279 infection resulted in dysregulation of genes associated with the development of immune response, ion metabolism, and apoptosis. 
Therefore, WT-YFV infection

  1. [Selection of a statistical model for the evaluation of the reliability of the results of toxicological analyses. II. Selection of our statistical model for the evaluation].

    Science.gov (United States)

    Antczak, K; Wilczyńska, U

    1980-01-01

    Part II presents a statistical model devised by the authors for evaluating the results of toxicological analyses. The model comprises: 1. Establishment of a reference value based on our own measurements taken by two independent analytical methods. 2. Selection of laboratories based on the deviation of the obtained values from the reference ones. 3. Evaluation of subsequent quality controls and of the particular laboratories by analysis of variance, Student's t-test, and the test of differences.
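    The laboratory-evaluation step above can be sketched with a one-sample Student's t-test comparing a laboratory's measurements against the reference value. The data, reference value, and acceptance threshold below are invented for illustration; they are not from the paper.

```python
import math

def t_statistic(measurements, reference):
    """One-sample t statistic comparing a lab's results to the reference value."""
    n = len(measurements)
    mean = sum(measurements) / n
    var = sum((x - mean) ** 2 for x in measurements) / (n - 1)  # sample variance
    return (mean - reference) / math.sqrt(var / n)

# Hypothetical results (e.g. ug/dL) from two laboratories, evaluated
# against an assumed reference value of 40.
reference = 40.0
lab_a = [39.5, 40.2, 40.8, 39.9, 40.1]   # close to the reference
lab_b = [43.1, 44.0, 42.8, 43.6, 43.9]   # systematic positive bias

t_a = t_statistic(lab_a, reference)
t_b = t_statistic(lab_b, reference)
# |t| beyond the two-sided critical value (t_0.975, df=4 ~ 2.776) flags the lab.
print(f"lab A: t = {t_a:.2f}, lab B: t = {t_b:.2f}")
```

    A full evaluation along the paper's lines would add an analysis of variance across laboratories; the t-test shown here is only the single-laboratory screening step.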

  2. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2014-01-01

    Highly recommended by JASA, Technometrics, and other journals, the first edition of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Second Edition continues to lead readers step by step through the process of fitting LMMs. This second edition covers additional topics on the application of LMMs that are valuable for data analysts in all fields. It also updates the case studies using the latest versions of the software procedures and provides up-to-date information on the options and features of the software procedures available for fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM. New to the second edition: a new chapter on models with crossed random effects that uses a case study to illustrate software procedures capable of fitting these models; power analysis methods for longitudinal and clustered study designs, including software options for power analyses and suggest...
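    A minimal illustration of the random-intercept LMM that such software fits: the sketch below simulates balanced grouped data and recovers the between-group and within-group variance components with the classical one-way ANOVA (method-of-moments) estimators, using only NumPy. All parameter values are made up; a real analysis would use one of the packages the book covers (SAS, SPSS, Stata, R, or HLM).

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a balanced random-intercept model: y_ij = mu + u_j + e_ij,
# with m groups of n observations, u_j ~ N(0, tau2), e_ij ~ N(0, sigma2).
m, n = 200, 20
mu, tau2, sigma2 = 10.0, 4.0, 1.0          # illustrative true values
u = rng.normal(0.0, np.sqrt(tau2), size=m)
y = mu + u[:, None] + rng.normal(0.0, np.sqrt(sigma2), size=(m, n))

# Method-of-moments (one-way ANOVA) estimators for a balanced design:
group_means = y.mean(axis=1)
msw = ((y - group_means[:, None]) ** 2).sum() / (m * (n - 1))   # within-group MS
msb = n * ((group_means - y.mean()) ** 2).sum() / (m - 1)       # between-group MS
sigma2_hat = msw                 # E[MSW] = sigma2
tau2_hat = (msb - msw) / n       # E[MSB] = sigma2 + n * tau2

print(f"sigma2_hat = {sigma2_hat:.2f}, tau2_hat = {tau2_hat:.2f}")
```

    Maximum likelihood or REML, as fitted by the software the book describes, generalizes this idea to unbalanced designs, random slopes, and crossed random effects.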

  3. An assessment of the wind re-analyses in the modelling of an extreme sea state in the Black Sea

    Science.gov (United States)

    Akpinar, Adem; Ponce de León, S.

    2016-03-01

    This study aims at an assessment of wind re-analyses for modelling storms in the Black Sea. A wind-wave modelling system (Simulating WAves Nearshore, SWAN) is applied to the Black Sea basin and calibrated with buoy data for three recent re-analysis wind sources, namely the European Centre for Medium-Range Weather Forecasts Reanalysis-Interim (ERA-Interim), the Climate Forecast System Reanalysis (CFSR), and the Modern Era Retrospective Analysis for Research and Applications (MERRA), during an extreme wave condition that occurred in the north-eastern part of the Black Sea. The SWAN model simulations are carried out with default and tuned settings for the deep-water source terms, especially whitecapping. The performance of the best model configurations based on calibration with buoy data is discussed using data from the JASON2, TOPEX-Poseidon, ENVISAT, and GFO satellites. The SWAN model calibration shows that the best configuration is obtained with the Janssen and Komen formulations for wave generation by wind and whitecapping dissipation, with the whitecapping coefficient (Cds) equal to 1.8e-5, using ERA-Interim. In addition, from the collocated SWAN results against the satellite records, the best configuration is determined to be SWAN using the CFSR winds. The numerical results thus show that the accuracy of a wave forecast depends on both the quality of the wind field and the ability of the SWAN model to simulate waves under extreme, fetch-limited wind conditions.

  4. Modeling and stress analyses of a normal foot-ankle and a prosthetic foot-ankle complex.

    Science.gov (United States)

    Ozen, Mustafa; Sayman, Onur; Havitcioglu, Hasan

    2013-01-01

    Total ankle replacement (TAR) is a relatively new concept and is becoming more popular for the treatment of ankle arthritis and fractures. Because of the high costs and difficulties of experimental studies, the development of TAR prostheses is progressing very slowly. For this reason, medical imaging techniques such as CT and MRI have become more and more useful. The finite element method (FEM) is a widely used technique for estimating the mechanical behavior of materials and structures in engineering applications. FEM has also been increasingly applied to biomechanical analyses of human bones, tissues, and organs, thanks to developments in both computing capabilities and medical imaging techniques. 3-D finite element models of the human foot and ankle reconstructed from MR and CT images have been investigated by several authors. In this study, the geometries used in modeling a normal and a prosthetic foot and ankle were obtained from 3D reconstructions of CT images. The segmentation software MIMICS was used to generate 3D images of the bony structures, soft tissues, and prosthesis components of the normal and prosthetic ankle-foot complexes. Except for the spaces between the adjacent surfaces of the phalanges, which were fused, the metatarsals, cuneiforms, cuboid, navicular, talus, and calcaneus bones, the soft tissues, and the prosthesis components were developed independently to form the foot-ankle complex. The SOLIDWORKS program was used to form the boundary surfaces of all model components, and the solid models were then obtained from these boundary surfaces. The finite element analysis software ABAQUS was used to perform the numerical stress analyses of these models for the balanced standing position. Plantar pressure and von Mises stress distributions of the normal and prosthetic ankles were compared with each other. 
There was a peak pressure increase at the 4th metatarsal, first metatarsal and talus bones and a decrease at the intermediate cuneiform and calcaneus bones, in

  5. Experiments and sensitivity analyses for heat transfer in a meter-scale regularly fractured granite model with water flow

    Institute of Scientific and Technical Information of China (English)

    Wei LU; Yan-yong XIANG

    2012-01-01

    Experiments of saturated water flow and heat transfer were conducted for a meter-scale model of regularly fractured granite. The fractured rock model (height 1502.5 mm, width 904 mm, and thickness 300 mm), embedded with two vertical and two horizontal fractures of pre-set apertures, was constructed from 18 pieces of intact granite. The granite was taken from a site currently being investigated for a high-level nuclear waste repository in China. The experiments involved different heat source temperatures and vertical water fluxes in the embedded fractures, either open or filled with sand. A finite difference scheme and computer code for the calculation of water flow and heat transfer in regularly fractured rocks was developed, verified against both the experimental data and calculations from the TOUGH2 code, and employed for parametric sensitivity analyses. The experiments revealed that, among other things, the temperature distribution was influenced by water flow in the fractures, especially the water flow in the vertical fracture adjacent to the heat source, and that the heat conduction between neighboring rock blocks in the model with sand-filled fractures was enhanced by the sand, with a larger range of influence of the heat source and a longer time for approaching the asymptotic steady state than in the model with open fractures. The temperatures from the experiments were in general slightly lower than those from the numerical calculations, probably because a certain amount of outward heat transfer at the model perimeter was unavoidable in the experiments. The parametric sensitivity analyses indicated that the temperature distribution was highly sensitive to water flow in the fractures, and that the water temperature in the vertical fracture adjacent to the heat source was rather insensitive to water flow in the other fractures.
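    The paper's finite difference scheme for coupled flow and heat transfer is not reproduced here, but the conduction part of such a scheme can be sketched in one dimension with an explicit update. The geometry, diffusivity, and boundary temperatures below are illustrative only.

```python
import numpy as np

# Illustrative 1-D explicit finite-difference heat conduction sketch; the
# paper's scheme also couples water flow in the fractures, omitted here.
L, nx = 1.0, 51                 # assumed rod length (m) and number of grid points
alpha = 1e-6                    # thermal diffusivity, roughly granite-like (m^2/s)
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha        # below the explicit stability limit dx^2/(2*alpha)

T = np.full(nx, 20.0)           # ambient initial temperature (deg C)
T[0] = 90.0                     # fixed heat-source boundary
for _ in range(20000):          # time-march toward steady state
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2.0 * T[1:-1] + T[:-2])

# With both ends held fixed, the steady profile is linear from 90 to 20 deg C,
# so the midpoint temperature approaches (90 + 20) / 2 = 55 deg C.
print(round(T[nx // 2], 2))
```

    The in-loop update is the standard forward-time, centered-space stencil; the time step is chosen to satisfy the explicit stability condition dt <= dx^2 / (2 alpha).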

  6. Spatial Modeling Techniques for Characterizing Geomaterials: Deterministic vs. Stochastic Modeling for Single-Variable and Multivariate Analyses

    Institute of Scientific and Technical Information of China (English)

    Katsuaki Koike

    2011-01-01

    Sample data in the Earth and environmental sciences are limited in quantity and sampling location, and therefore sophisticated spatial modeling techniques are indispensable for accurate imaging of the complicated structures and properties of geomaterials. This paper presents several effective methods that are grouped into two categories depending on the nature of the regionalized data used. Type I data originate from plural populations, and type II data satisfy the prerequisite of stationarity and have distinct spatial correlations. For type I data, three methods are shown to be effective and demonstrated to produce plausible results: (1) a spline-based method, (2) a combination of a spline-based method with a stochastic simulation, and (3) a neural network method. Geostatistics proves to be a powerful tool for type II data. Three new geostatistical approaches are presented with case studies: an application to directional data such as fractures, multi-scale modeling that incorporates a scaling law, and space-time joint analysis for multivariate data. Methods for improving the contribution of such spatial modeling to the Earth and environmental sciences are also discussed, and important future problems to be solved are summarized.

  7. Factors Associated With Rehabilitation Outcomes After Traumatic Brain Injury: Comparing Functional Outcomes Between TBIMS Centers Using Hierarchical Linear Modeling.

    Science.gov (United States)

    Dahdah, Marie N; Hofmann, Melissa; Pretz, Christopher; An, Viktoriya; Barnes, Sunni A; Bennett, Monica; Dreer, Laura E; Bergquist, Thomas; Shafi, Shahid

    Objective: To examine differences in patient outcomes across Traumatic Brain Injury Model Systems (TBIMS) rehabilitation centers and the factors that influence these differences, using hierarchical linear modeling (HLM). Setting: Sixteen TBIMS centers. Participants: A total of 2056 individuals 16 years or older with moderate to severe traumatic brain injury (TBI) who received inpatient rehabilitation. Design: Multicenter observational cohort study using HLM to analyze prospectively collected data. Main measures: Functional Independence Measure and Disability Rating Scale total scores at discharge and 1 year post-TBI. Results: Duration of posttraumatic amnesia (PTA) demonstrated a significant inverse relationship with functional outcomes. However, the magnitude of this relationship (the change in functional status for each additional day in PTA) varied among centers. Functional status at discharge from rehabilitation and at 1 year post-TBI could be predicted using the slope and intercept of each TBIMS center for the duration of PTA, by comparing it against the average slope and intercept. Conclusions: HLM demonstrated a center effect due to variability in the relationship between PTA and the functional outcomes of patients. This variability is not accounted for in traditional linear regression modeling. Future studies examining variations in patient outcomes between centers should utilize HLM to measure the impact of additional factors that influence patient rehabilitation functional outcomes.
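    The center-specific slopes and intercepts at the heart of this HLM analysis can be illustrated with simulated data (not TBIMS data): each center gets its own OLS line relating PTA duration to a functional score, and the spread of the fitted slopes is the between-center variability that HLM models explicitly. A real HLM fit would partially pool these estimates toward the average line, which this no-pooling sketch omits.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated illustration only: each center has its own intercept and slope
# relating PTA duration (days) to a functional score at discharge.
n_centers, n_patients = 8, 60
true_slopes = rng.normal(-0.8, 0.2, n_centers)      # score lost per extra PTA day
true_intercepts = rng.normal(100.0, 5.0, n_centers)

fits = []
for j in range(n_centers):
    pta = rng.uniform(5, 60, n_patients)
    score = true_intercepts[j] + true_slopes[j] * pta + rng.normal(0, 5, n_patients)
    slope, intercept = np.polyfit(pta, score, 1)    # per-center OLS line
    fits.append((slope, intercept))

slopes = np.array([s for s, _ in fits])
# Every center shows the inverse PTA-outcome relation, but its magnitude varies:
print(f"slope range: {slopes.min():.2f} to {slopes.max():.2f}")
```

    In the study's terms, a center whose fitted slope is steeper than the average loses more predicted functional status per additional day of PTA than a typical center.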

  8. Genetic analyses using GGE model and a mixed linear model approach, and stability analyses using AMMI bi-plot for late-maturity alpha-amylase activity in bread wheat genotypes.

    Science.gov (United States)

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Fofana, Bourlaye

    2017-06-01

    Low falling number and discounting grain when it is downgraded in class are the consequences of excessive late-maturity α-amylase activity (LMAA) in bread wheat (Triticum aestivum L.). Grain expressing high LMAA produces poorer quality bread products. To effectively breed for low LMAA, it is necessary to understand what genes control it and how they are expressed, particularly when genotypes are grown in different environments. In this study, an International Collection (IC) of 18 spring wheat genotypes and another set of 15 spring wheat cultivars adapted to South Dakota (SD), USA were assessed to characterize the genetic component of LMAA over 5 and 13 environments, respectively. The data were analysed using a GGE model with a mixed linear model approach and stability analysis was presented using an AMMI bi-plot on R software. All estimated variance components and their proportions to the total phenotypic variance were highly significant for both sets of genotypes, which were validated by the AMMI model analysis. Broad-sense heritability for LMAA was higher in SD adapted cultivars (53%) compared to that in IC (49%). Significant genetic effects and stability analyses showed some genotypes, e.g. 'Lancer', 'Chester' and 'LoSprout' from IC, and 'Alsen', 'Traverse' and 'Forefront' from SD cultivars could be used as parents to develop new cultivars expressing low levels of LMAA. Stability analysis using an AMMI bi-plot revealed that 'Chester', 'Lancer' and 'Advance' were the most stable across environments, while in contrast, 'Kinsman', 'Lerma52' and 'Traverse' exhibited the lowest stability for LMAA across environments.

  9. Data Assimilation Tools for CO2 Reservoir Model Development – A Review of Key Data Types, Analyses, and Selected Software

    Energy Technology Data Exchange (ETDEWEB)

    Rockhold, Mark L.; Sullivan, E. C.; Murray, Christopher J.; Last, George V.; Black, Gary D.

    2009-09-30

    Pacific Northwest National Laboratory (PNNL) has embarked on an initiative to develop world-class capabilities for performing experimental and computational analyses associated with geologic sequestration of carbon dioxide. The ultimate goal of this initiative is to provide science-based solutions for helping to mitigate the adverse effects of greenhouse gas emissions. This Laboratory-Directed Research and Development (LDRD) initiative currently has two primary focus areas—advanced experimental methods and computational analysis. The experimental methods focus area involves the development of new experimental capabilities, supported in part by the U.S. Department of Energy’s (DOE) Environmental Molecular Science Laboratory (EMSL) housed at PNNL, for quantifying mineral reaction kinetics with CO2 under high temperature and pressure (supercritical) conditions. The computational analysis focus area involves numerical simulation of coupled, multi-scale processes associated with CO2 sequestration in geologic media, and the development of software to facilitate building and parameterizing conceptual and numerical models of subsurface reservoirs that represent geologic repositories for injected CO2. This report describes work in support of the computational analysis focus area. The computational analysis focus area currently consists of several collaborative research projects. These are all geared towards the development and application of conceptual and numerical models for geologic sequestration of CO2. The software being developed for this focus area is referred to as the Geologic Sequestration Software Suite or GS3. A wiki-based software framework is being developed to support GS3. This report summarizes work performed in FY09 on one of the LDRD projects in the computational analysis focus area. The title of this project is Data Assimilation Tools for CO2 Reservoir Model Development. Some key objectives of this project in FY09 were to assess the current state

  10. Kvalitative analyser

    DEFF Research Database (Denmark)

    Boolsen, Merete Watt

    The book explains the fundamental steps of the research process and applies them to selected qualitative analyses: content analysis, Grounded Theory, argumentation analysis, and discourse analysis.

  11. PartitionFinder 2: New Methods for Selecting Partitioned Models of Evolution for Molecular and Morphological Phylogenetic Analyses.

    Science.gov (United States)

    Lanfear, Robert; Frandsen, Paul B; Wright, April M; Senfeld, Tereza; Calcott, Brett

    2017-03-01

    PartitionFinder 2 is a program for automatically selecting best-fit partitioning schemes and models of evolution for phylogenetic analyses. PartitionFinder 2 is substantially faster and more efficient than version 1, and incorporates many new methods and features. These include the ability to analyze morphological datasets, new methods to analyze genome-scale datasets, new output formats to facilitate interoperability with downstream software, and many new models of molecular evolution. PartitionFinder 2 is freely available under an open source license and works on Windows, OSX, and Linux operating systems. It can be downloaded from www.robertlanfear.com/partitionfinder. The source code is available at https://github.com/brettc/partitionfinder.

  12. ANALYSES ON NONLINEAR COUPLING OF MAGNETO-THERMO-ELASTICITY OF FERROMAGNETIC THIN SHELL-Ⅱ: FINITE ELEMENT MODELING AND APPLICATION

    Institute of Scientific and Technical Information of China (English)

    Xingzhe Wang; Xiaojing Zheng

    2009-01-01

    Based on the generalized variational principle of magneto-thermo-elasticity of a ferromagnetic thin shell established in Part I (Analyses on nonlinear coupling of magneto-thermo-elasticity of ferromagnetic thin shell-Ⅰ), the present paper develops a finite element model for the mechanical-magneto-thermal multi-field coupling of a ferromagnetic thin shell. The numerical model consists of finite element equations for the three sub-systems of magnetic, thermal, and deformation fields, together with iterative methods for the nonlinearities of the geometrical large deflection and the multi-field coupling of the ferromagnetic shell. As examples, numerical simulations of the magneto-elastic behavior of a ferromagnetic cylindrical shell in an applied magnetic field, and of the magneto-thermo-elastic behavior of the shell in applied magnetic and thermal fields, are carried out. The results are in good agreement with experimental ones.

  13. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes

    Directory of Open Access Journals (Sweden)

    Ilona Naujokaitis-Lewis

    2016-07-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. 
Our results underscore the importance of considering habitat

  14. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes.

    Science.gov (United States)

    Naujokaitis-Lewis, Ilona; Curtis, Janelle M R

    2016-01-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. 
Our results underscore the importance of considering habitat attributes along
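    A variance-based global sensitivity analysis of the kind GRIP 2.0 automates can be sketched with a Saltelli-style estimator of first-order Sobol indices. The toy model below is a linear stand-in with known analytic indices, not the coupled SDM-population dynamics model.

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    # Toy stand-in for an "extinction risk" model: linear in three inputs,
    # so the analytic first-order Sobol indices are S_i = a_i^2 / sum(a_j^2)
    # for independent U(0, 1) inputs. Coefficients are invented.
    a = np.array([4.0, 2.0, 1.0])
    return x @ a

n, d = 100_000, 3
A = rng.random((n, d))            # two independent sample matrices
B = rng.random((n, d))
yA, yB = model(A), model(B)
var_y = np.var(np.concatenate([yA, yB]))

S = []
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]           # resample only input i
    # Saltelli (2010) first-order estimator: S_i = E[yB * (y_ABi - yA)] / Var(y)
    S.append(np.mean(yB * (model(ABi) - yA)) / var_y)

print([round(s, 2) for s in S])   # analytic values: 16/21, 4/21, 1/21
```

    Interaction effects, which the record highlights for habitat amount and survival rates, would show up as total-order indices exceeding these first-order ones.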

  15. ATOP - The Advanced Taiwan Ocean Prediction System Based on the mpiPOM. Part 1: Model Descriptions, Analyses and Results

    Directory of Open Access Journals (Sweden)

    Leo Oey

    2013-01-01

    A data-assimilated Taiwan Ocean Prediction (ATOP) system is being developed at the National Central University, Taiwan. The model simulates sea-surface height, three-dimensional currents, temperature and salinity, and turbulent mixing. The model has options for tracer and particle-tracking algorithms, as well as for wave-induced Stokes drift and wave-enhanced mixing and bottom drag. Two different forecast domains have been tested: a large-grid domain that encompasses the entire North Pacific Ocean at 0.1° × 0.1° horizontal resolution and 41 vertical sigma levels, and a smaller western North Pacific domain which at present has the same horizontal resolution. In both domains, 25-year spin-up runs from 1988 to 2011 were first conducted, forced by six-hourly Cross-Calibrated Multi-Platform (CCMP) and NCEP reanalysis Global Forecast System (GFS) winds. The results are then used as initial conditions to conduct ocean analyses from January 2012 through February 2012, when updated hindcasts and real-time forecasts begin using the GFS winds. This paper describes the ATOP system and compares the forecast results against satellite altimetry data for assessing model skill. The model results are also shown to compare well with observations of (i) the Kuroshio intrusion in the northern South China Sea and (ii) the Subtropical Countercurrent. A review of, and comparison with, other models in the literature regarding (i) are also given.

  16. Using FOSM-Based Data Worth Analyses to Design Geophysical Surveys to Reduce Uncertainty in a Regional Groundwater Model Update

    Science.gov (United States)

    Smith, B. D.; White, J.; Kress, W. H.; Clark, B. R.; Barlow, J.

    2016-12-01

    Hydrogeophysical surveys have become an integral part of understanding the hydrogeological frameworks used in groundwater models. Regional models cover a large area where water well data are, at best, scattered and irregular. Since budgets are finite, priorities must be assigned to select optimal areas for geophysical surveys. For airborne electromagnetic (AEM) geophysical surveys, optimization of mapping depth and line spacing needs to take into account the objectives of the groundwater models. The approach discussed here uses a first-order, second-moment (FOSM) uncertainty analysis, which assumes an approximately linear relation between model parameters and observations. This assumption allows FOSM analyses to be applied to estimate the value of increased parameter knowledge for reducing forecast uncertainty. FOSM is used to facilitate optimization of yet-to-be-completed geophysical surveying to reduce model forecast uncertainty. The main objective of geophysical surveying is assumed to be estimating the values and spatial variation of hydrologic parameters (i.e., hydraulic conductivity) as well as mapping lower-permeability layers that influence the spatial distribution of recharge flux. The proposed data worth analysis was applied to the Mississippi Embayment Regional Aquifer Study (MERAS), which is being updated. The objective of MERAS is to assess the ground-water availability (status and trends) of the Mississippi embayment aquifer system. The study area covers portions of eight states including Alabama, Arkansas, Illinois, Kentucky, Louisiana, Mississippi, Missouri, and Tennessee. The active model grid covers approximately 70,000 square miles, and incorporates some 6,000 miles of major rivers and over 100,000 water wells. In the FOSM analysis, a dense network of pilot points was used to capture uncertainty in hydraulic conductivity and recharge. 
To simulate the effect of AEM flight lines, the prior uncertainty for hydraulic conductivity and recharge pilots along potential flight lines was
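    The FOSM calculation itself is compact: with a vector of forecast sensitivities j and a prior parameter covariance C, the linearized forecast variance is j C jᵀ, and the data worth of a survey is the variance reduction obtained when the survey narrows a parameter's prior. All numbers below are hypothetical, not from the MERAS model.

```python
import numpy as np

# FOSM sketch: under a linearized model, forecast variance is  j @ C @ j,
# where j holds d(forecast)/d(parameter_i) and C is the prior parameter
# covariance. Sensitivities and variances here are invented for illustration.
j = np.array([2.0, 0.5, 1.0])          # forecast sensitivities to 3 parameters
C_prior = np.diag([1.0, 4.0, 0.25])    # assumed prior variances (independent priors)

var_prior = j @ C_prior @ j

# Hypothetical "data worth" of an AEM survey: suppose it cuts the prior
# variance of parameter 0 (say, hydraulic conductivity) from 1.0 to 0.1.
C_post = C_prior.copy()
C_post[0, 0] = 0.1
var_post = j @ C_post @ j

print(f"forecast variance: {var_prior:.2f} -> {var_post:.2f}")
```

    Ranking candidate surveys by this variance reduction is what lets the analysis prioritize flight lines before any data are collected.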

  17. Using plant growth modeling to analyse C source-sink relations under drought: inter and intra specific comparison

    Directory of Open Access Journals (Sweden)

    Benoit ePallas

    2013-11-01

    The ability to assimilate C and allocate NSC (non-structural carbohydrates) to the most appropriate organs is crucial to maximize plant ecological or agronomic performance. Such C source and sink activities are differentially affected by environmental constraints. Under drought, plant growth is generally more sink than source limited, as organ expansion or appearance rate is affected earlier and more strongly than C assimilation. This favors plant survival and recovery but not always agronomic performance, as NSC are stored rather than used for growth due to a modified metabolism in source and sink leaves. Such interactions between plant C and water balance are complex, and plant modeling can help analyse their impact on plant phenotype. This paper addresses the impact of trade-offs between C sink and source activities on plant production under drought, combining experimental and modeling approaches. Two contrasted monocotyledonous species (rice, oil palm) were studied. Experimentally, the sink limitation of plant growth under moderate drought was confirmed, as were the modifications in NSC metabolism in source and sink organs. Under severe stress, when the C source became limiting, plant NSC concentration decreased. Two plant models dedicated to oil palm and rice morphogenesis were used to perform a sensitivity analysis and further explore how to optimize C sink and source drought sensitivity to maximize plant growth. The modeling results highlighted that optimal drought sensitivity depends both on drought type and on species, and that modeling is a great opportunity to analyse such complex processes. Further modeling needs, and more generally the challenge of using models to support complex trait breeding, are discussed.

  18. A multinomial logit model-Bayesian network hybrid approach for driver injury severity analyses in rear-end crashes.

    Science.gov (United States)

    Chen, Cong; Zhang, Guohui; Tarefder, Rafiqul; Ma, Jianming; Wei, Heng; Guan, Hongzhi

    2015-07-01

    Rear-end crashes are one of the most common types of traffic crashes in the U.S. A good understanding of their characteristics and contributing factors is of practical importance. Previously, both multinomial logit models and Bayesian network methods have been used in crash modeling and analysis, although each of them has its own application restrictions and limitations. In this study, a hybrid approach is developed that combines multinomial logit models and Bayesian network methods for comprehensively analyzing driver injury severities in rear-end crashes, based on state-wide crash data collected in New Mexico from 2010 to 2011. A multinomial logit model is developed to investigate and identify significant contributing factors for rear-end crash driver injury severities classified into three categories: no injury, injury, and fatality. The identified significant factors are then utilized to establish a Bayesian network that explicitly formulates statistical associations between injury severity outcomes and explanatory attributes, including driver behavior, demographic features, vehicle factors, and geometric and environmental characteristics. The test results demonstrate that the proposed hybrid approach performs reasonably well. The Bayesian network inference analyses indicate that factors such as truck involvement, inferior lighting conditions, windy weather conditions, and the number of vehicles involved could significantly increase driver injury severities in rear-end crashes. The developed methodology and estimation results provide insights for developing effective countermeasures to reduce rear-end crash injury severities and improve traffic system safety performance.
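    The multinomial logit component can be sketched as softmax regression fitted by gradient descent. The synthetic two-feature data below merely stand in for the three severity classes; they are not the New Mexico crash data, and the features are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

# Minimal multinomial logit (softmax regression) sketch on synthetic data
# standing in for three severity classes: no injury / injury / fatality.
n, d, k = 600, 2, 3
X = rng.normal(size=(n, d))
X[:, 0] += np.repeat([0.0, 2.5, 5.0], n // k)    # shift classes apart on feature 0
y = np.repeat([0, 1, 2], n // k)

Xb = np.hstack([X, np.ones((n, 1))])             # add an intercept column
Y = np.eye(k)[y]                                 # one-hot targets
W = np.zeros((d + 1, k))

for _ in range(2000):                            # full-batch gradient descent
    logits = Xb @ W
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)            # softmax class probabilities
    W -= 0.1 * Xb.T @ (P - Y) / n                # gradient of the mean NLL

accuracy = ((Xb @ W).argmax(axis=1) == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

    In the study's hybrid approach, coefficients significant in such a model are what feed the subsequent Bayesian network stage.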

  19. Analyses of simulations of three-dimensional lattice proteins in comparison with a simplified statistical mechanical model of protein folding.

    Science.gov (United States)

    Abe, H; Wako, H

    2006-07-01

    Folding and unfolding simulations of three-dimensional lattice proteins were analyzed using a simplified statistical mechanical model in which their amino acid sequences and native conformations were incorporated explicitly. Using this statistical mechanical model, under the assumption that only interactions between amino acid residues within a local structure in a native state are considered, the partition function of the system can be calculated for a given native conformation without any adjustable parameter. The simulations were carried out for two different native conformations, for each of which two foldable amino acid sequences were considered. The native and non-native contacts between amino acid residues occurring in the simulations were examined in detail and compared with the results derived from the theoretical model. The equilibrium thermodynamic quantities (free energy, enthalpy, entropy, and the probability of each amino acid residue being in the native state) at various temperatures obtained from the simulations and the theoretical model were also examined in order to characterize the folding processes that depend on the native conformations and the amino acid sequences. Finally, the free energy landscapes were discussed based on these analyses.

  20. Comparative analyses reveal potential uses of Brachypodium distachyon as a model for cold stress responses in temperate grasses

    Directory of Open Access Journals (Sweden)

    Li Chuan

    2012-05-01

    Full Text Available Abstract Background Little is known about the potential of Brachypodium distachyon as a model for low temperature stress responses in Pooideae. The ice recrystallization inhibition protein (IRIP) genes, fructosyltransferase (FST) genes, and many C-repeat binding factor (CBF) genes are Pooideae-specific and important in low temperature responses. Here we used comparative analyses to study conservation and evolution of these gene families in B. distachyon to better understand its potential as a model species for agriculturally important temperate grasses. Results Brachypodium distachyon contains cold-responsive IRIP genes which have evolved through Brachypodium-specific gene family expansions. A large cold-responsive CBF3 subfamily was identified in B. distachyon, while CBF4 homologs are absent from the genome. No B. distachyon FST gene homologs encode typical core Pooideae FST motifs, and low temperature induced fructan accumulation was dramatically different in B. distachyon compared to core Pooideae species. Conclusions We conclude that B. distachyon can serve as an interesting model for specific molecular mechanisms involved in low temperature responses in core Pooideae species. However, the evolutionary history of key genes involved in low temperature responses has been different in Brachypodium and core Pooideae species. These differences limit the use of B. distachyon as a model for holistic studies relevant for agricultural core Pooideae species.

  1. 3D RECORDING FOR 2D DELIVERING – THE EMPLOYMENT OF 3D MODELS FOR STUDIES AND ANALYSES

    Directory of Open Access Journals (Sweden)

    A. Rizzi

    2012-09-01

    Full Text Available In recent years, thanks to advances in surveying sensors and techniques, many heritage sites have been accurately replicated in digital form with very detailed and impressive results. The actual limits are mainly related to hardware capabilities, computation time, and the low performance of personal computers. Often, the produced models are not viewable on a normal computer, and the only solution for visualizing them easily is offline, using rendered videos. This kind of 3D representation is useful for digital conservation, divulgation purposes, or virtual tourism, where people can visit places otherwise closed for preservation or security reasons. But many more potentialities and possible applications become available using a 3D model. The problem is the ability to handle 3D data, as without adequate knowledge this information is reduced to standard 2D data. This article presents some surveying and 3D modeling experiences within the APSAT project ("Ambiente e Paesaggi dei Siti d’Altura Trentini", i.e. Environment and Landscapes of Upland Sites in Trentino). APSAT is a multidisciplinary project funded by the Autonomous Province of Trento (Italy) with the aim of documenting, surveying, studying, analysing and preserving mountainous and hill-top heritage sites located in the region. The project focuses on theoretical, methodological and technological aspects of the archaeological investigation of mountain landscape, considered as the product of sequences of settlements, parcelling-outs, communication networks, resources, and symbolic places. The mountain environment preserves better than others the traces of hunting and gathering, breeding, agricultural, metallurgical and symbolic activities characterised by different durations and environmental impacts, from Prehistory to the Modern Period. Therefore the correct surveying and documentation of these heritage sites and material is very important. 
Within the project, the 3DOM unit of FBK is delivering all the surveying

  2. Assessing models of speciation under different biogeographic scenarios; An empirical study using multi-locus and RNA-seq analyses

    Science.gov (United States)

    Edwards, Taylor; Tollis, Marc; Hsieh, PingHsun; Gutenkunst, Ryan N.; Liu, Zhen; Kusumi, Kenro; Culver, Melanie; Murphy, Robert W.

    2016-01-01

    Evolutionary biology often seeks to decipher the drivers of speciation, and much debate persists over the relative importance of isolation and gene flow in the formation of new species. Genetic studies of closely related species can assess if gene flow was present during speciation, because signatures of past introgression often persist in the genome. We test hypotheses on which mechanisms of speciation drove diversity among three distinct lineages of desert tortoise in the genus Gopherus. These lineages offer a powerful system to study speciation, because different biogeographic patterns (physical vs. ecological segregation) are observed at opposing ends of their distributions. We use 82 samples collected from 38 sites, representing the entire species' distribution and generate sequence data for mtDNA and four nuclear loci. A multilocus phylogenetic analysis in *BEAST estimates the species tree. RNA-seq data yield 20,126 synonymous variants from 7665 contigs from two individuals of each of the three lineages. Analyses of these data using the demographic inference package ∂a∂i serve to test the null hypothesis of no gene flow during divergence. The best-fit demographic model for the three taxa is concordant with the *BEAST species tree, and the ∂a∂i analysis does not indicate gene flow among any of the three lineages during their divergence. These analyses suggest that divergence among the lineages occurred in the absence of gene flow and in this scenario the genetic signature of ecological isolation (parapatric model) cannot be differentiated from geographic isolation (allopatric model).

  3. Virus-induced gene silencing as a tool for functional analyses in the emerging model plant Aquilegia (columbine, Ranunculaceae

    Directory of Open Access Journals (Sweden)

    Kramer Elena M

    2007-04-01

    Full Text Available Abstract Background The lower eudicot genus Aquilegia, commonly known as columbine, is currently the subject of extensive genetic and genomic research aimed at developing this taxon as a new model for the study of ecology and evolution. The ability to perform functional genetic analyses is a critical component of this development process and ultimately has the potential to provide insight into the genetic basis for the evolution of a wide array of traits that differentiate flowering plants. Aquilegia is of particular interest due to both its recent evolutionary history, which involves a rapid adaptive radiation, and its intermediate phylogenetic position between core eudicot (e.g., Arabidopsis) and grass (e.g., Oryza) model species. Results Here we demonstrate the effective use of a reverse genetic technique, virus-induced gene silencing (VIGS), to study gene function in this emerging model plant. Using Agrobacterium-mediated transfer of tobacco rattle virus (TRV) based vectors, we induce silencing of PHYTOENE DESATURASE (AqPDS) in Aquilegia vulgaris seedlings, and of ANTHOCYANIDIN SYNTHASE (AqANS) and the B-class floral organ identity gene PISTILLATA in A. vulgaris flowers. For all of these genes, silencing phenotypes are associated with consistent reduction in endogenous transcript levels. In addition, we show that silencing of AqANS has no effect on overall floral morphology and is therefore a suitable marker for the identification of silenced flowers in dual-locus silencing experiments. Conclusion Our results show that TRV-VIGS in Aquilegia vulgaris allows data to be rapidly obtained and can be reproduced with effective survival and silencing rates. Furthermore, this method can successfully be used to evaluate the function of early-acting developmental genes. 
In the future, data derived from VIGS analyses will be combined with large-scale sequencing and microarray experiments already underway in order to address both recent and ancient evolutionary

  4. Systems genetics of obesity in an F2 pig model by genome-wide association, genetic network and pathway analyses

    Directory of Open Access Journals (Sweden)

    Lisette J. A. Kogelman

    2014-07-01

    Full Text Available Obesity is a complex condition with world-wide exponentially rising prevalence rates, linked with severe diseases like Type 2 Diabetes. Economic and welfare consequences have led to a raised interest in a better understanding of the biological and genetic background. To date, whole genome investigations focusing on single genetic variants have achieved limited success, and the importance of including genetic interactions is becoming evident. Here, the aim was to perform an integrative genomic analysis in an F2 pig resource population that was constructed with an aim to maximize genetic variation of obesity-related phenotypes and genotyped using the 60K SNP chip. Firstly, Genome Wide Association (GWA) analysis was performed on the Obesity Index to locate candidate genomic regions that were further validated using combined Linkage Disequilibrium Linkage Analysis and investigated by evaluation of haplotype blocks. We built Weighted Interaction SNP Hub (WISH) and differentially wired (DW) networks using genotypic correlations amongst obesity-associated SNPs resulting from GWA analysis. GWA results and SNP modules detected by WISH and DW analyses were further investigated by functional enrichment analyses. The functional annotation of SNPs revealed several genes associated with obesity, e.g. NPC2 and OR4D10. Moreover, gene enrichment analyses identified several significantly associated pathways, over and above the GWA study results, that may influence obesity and obesity related diseases, e.g. metabolic processes. WISH networks based on genotypic correlations allowed further identification of various gene ontology terms and pathways related to obesity and related traits, which were not identified by the GWA study. 
In conclusion, this is the first study to develop a (genetic) obesity index and employ systems genetics in a porcine model to provide important insights into the complex genetic architecture associated with obesity and many biological pathways

  5. Systems genetics of obesity in an F2 pig model by genome-wide association, genetic network, and pathway analyses.

    Science.gov (United States)

    Kogelman, Lisette J A; Pant, Sameer D; Fredholm, Merete; Kadarmideen, Haja N

    2014-01-01

    Obesity is a complex condition with world-wide exponentially rising prevalence rates, linked with severe diseases like Type 2 Diabetes. Economic and welfare consequences have led to a raised interest in a better understanding of the biological and genetic background. To date, whole genome investigations focusing on single genetic variants have achieved limited success, and the importance of including genetic interactions is becoming evident. Here, the aim was to perform an integrative genomic analysis in an F2 pig resource population that was constructed with an aim to maximize genetic variation of obesity-related phenotypes and genotyped using the 60K SNP chip. Firstly, Genome Wide Association (GWA) analysis was performed on the Obesity Index to locate candidate genomic regions that were further validated using combined Linkage Disequilibrium Linkage Analysis and investigated by evaluation of haplotype blocks. We built Weighted Interaction SNP Hub (WISH) and differentially wired (DW) networks using genotypic correlations amongst obesity-associated SNPs resulting from GWA analysis. GWA results and SNP modules detected by WISH and DW analyses were further investigated by functional enrichment analyses. The functional annotation of SNPs revealed several genes associated with obesity, e.g., NPC2 and OR4D10. Moreover, gene enrichment analyses identified several significantly associated pathways, over and above the GWA study results, that may influence obesity and obesity related diseases, e.g., metabolic processes. WISH networks based on genotypic correlations allowed further identification of various gene ontology terms and pathways related to obesity and related traits, which were not identified by the GWA study. 
In conclusion, this is the first study to develop a (genetic) obesity index and employ systems genetics in a porcine model to provide important insights into the complex genetic architecture associated with obesity and many biological pathways that underlie

  6. Analyses of the Classical Model for Porous Materials

    Institute of Scientific and Technical Information of China (English)

    刘培生; 夏凤金; 罗军

    2009-01-01

    New developments continue to be made in the preparation, application, and property study of porous materials. Among theories of the structure and properties of porous materials, the classical Gibson-Ashby model has long been endorsed internationally in the porous materials field, and to date it remains the theoretical foundation widely applied by numerous investigators in related research. This paper offers some supplementary analysis of the shortcomings of this model, and finds that some of them can even undermine the completeness the model originally appeared to show. After summarizing these problems, another model is introduced which can overcome or compensate for the shortcomings of the Gibson-Ashby model.
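For context, the Gibson-Ashby model predicts foam stiffness from relative density through a power law, E/Es = C (ρ/ρs)^n. A minimal sketch, using the commonly quoted open-cell bending-dominated values n = 2 and C ≈ 1 (illustrative constants, not taken from this paper):

```python
def gibson_ashby_modulus(rel_density, C=1.0, n=2.0):
    """Relative Young's modulus of a foam: E/Es = C * (rho/rho_s)**n.

    rel_density -- foam density divided by solid density (0 < rel_density < 1)
    C, n        -- empirical prefactor and exponent (open-cell values shown)
    """
    return C * rel_density ** n

# A foam at 10% relative density is predicted to retain about 1% of the
# solid material's stiffness under these constants.
print(gibson_ashby_modulus(0.1))
```

The quadratic exponent reflects cell-wall bending as the dominant deformation mechanism; alternative models of the kind the paper advocates typically adjust this structure-property mapping.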

  7. CREB3 subfamily transcription factors are not created equal: Recent insights from global analyses and animal models

    Directory of Open Access Journals (Sweden)

    Chan Chi-Ping

    2011-02-01

    Full Text Available Abstract The CREB3 subfamily of membrane-bound bZIP transcription factors has five members in mammals, known as CREB3 and CREB3L1-L4. One current model suggests that CREB3 subfamily transcription factors are similar to ATF6 in regulated intramembrane proteolysis and transcriptional activation. In particular, they were all thought to be proteolytically activated in response to endoplasmic reticulum (ER) stress to stimulate genes that are involved in the unfolded protein response (UPR). Although the physiological inducers of their proteolytic activation remain to be identified, recent findings from microarray analyses, RNAi screens and gene knockouts not only demonstrated their critical roles in regulating development, metabolism, secretion, survival and tumorigenesis, but also revealed cell type-specific patterns in the activation of their target genes. Members of the CREB3 subfamily show differential activity despite their structural similarity. The spectrum of their biological function expands beyond ER stress and the UPR. Further analyses are required to elucidate the mechanism of their proteolytic activation and the molecular basis of their target recognition.

  8. Hierarchical linear modeling analyses of the NEO-PI-R scales in the Baltimore Longitudinal Study of Aging.

    Science.gov (United States)

    Terracciano, Antonio; McCrae, Robert R; Brant, Larry J; Costa, Paul T

    2005-09-01

    The authors examined age trends in the 5 factors and 30 facets assessed by the Revised NEO Personality Inventory in Baltimore Longitudinal Study of Aging data (N=1,944; 5,027 assessments) collected between 1989 and 2004. Consistent with cross-sectional results, hierarchical linear modeling analyses showed gradual personality changes in adulthood: a decline in Neuroticism up to age 80, stability and then decline in Extraversion, decline in Openness, increase in Agreeableness, and increase in Conscientiousness up to age 70. Some facets showed different curves from the factor they define. Birth cohort effects were modest, and there were no consistent Gender x Age interactions. Significant nonnormative changes were found for all 5 factors; they were not explained by attrition but might be due to genetic factors, disease, or life experience. Copyright (c) 2005 APA, all rights reserved.
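The growth curves described in this record are typically specified as a two-level hierarchical linear model, with assessment occasions nested within persons. A generic sketch in standard HLM notation (the quadratic age term and the symbols are illustrative, not the authors' exact specification):

```latex
\begin{aligned}
\text{Level 1 (occasions within persons):}\quad
  y_{ti} &= \pi_{0i} + \pi_{1i}\,\mathrm{age}_{ti} + \pi_{2i}\,\mathrm{age}_{ti}^{2} + e_{ti},
  \qquad e_{ti} \sim \mathcal{N}(0, \sigma^{2}) \\
\text{Level 2 (persons):}\quad
  \pi_{0i} &= \beta_{00} + r_{0i}, \qquad
  \pi_{1i} = \beta_{10} + r_{1i}, \qquad
  \pi_{2i} = \beta_{20} + r_{2i}
\end{aligned}
```

Here the person-specific intercept, slope, and curvature ($\pi_{0i}$, $\pi_{1i}$, $\pi_{2i}$) vary around the population trajectory $\beta_{00} + \beta_{10}\,\mathrm{age} + \beta_{20}\,\mathrm{age}^{2}$ through the level-2 residuals $r_{ki}$; the quadratic term is what allows curves such as a decline in Neuroticism up to age 80 followed by a change in direction, and cohort or gender effects enter as level-2 predictors of the $\pi$'s.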

  9. Hierarchical Linear Modeling Analyses of NEO-PI-R Scales In the Baltimore Longitudinal Study of Aging

    Science.gov (United States)

    Terracciano, Antonio; McCrae, Robert R.; Brant, Larry J.; Costa, Paul T.

    2009-01-01

    We examined age trends in the five factors and 30 facets assessed by the Revised NEO Personality Inventory in Baltimore Longitudinal Study of Aging data (N = 1,944; 5,027 assessments) collected between 1989 and 2004. Consistent with cross-sectional results, Hierarchical Linear Modeling analyses showed gradual personality changes in adulthood: a decline up to age 80 in Neuroticism, stability and then decline in Extraversion, decline in Openness, increase in Agreeableness, and increase up to age 70 in Conscientiousness. Some facets showed different curves from the factor they define. Birth cohort effects were modest, and there were no consistent Gender × Age interactions. Significant non-normative changes were found for all five factors; they were not explained by attrition but might be due to genetic factors, disease, or life experience. PMID:16248708

  10. COUPLING EFFECTS FOR CELL-TRUSS SPAR PLATFORM: COMPARISON OF FREQUENCY- AND TIME-DOMAIN ANALYSES WITH MODEL TESTS

    Institute of Scientific and Technical Information of China (English)

    ZHANG Fan; YANG Jian-min; LI Run-pei; CHEN Gang

    2008-01-01

    For floating structures in deepwater, the coupling effects of the mooring lines and risers on the motion responses of the structures become increasingly significant. Viscous damping, inertial mass, current loading, restoring forces, etc. from these slender structures should be carefully handled to accurately predict the motion responses and line tensions. For spar platforms, coupling the mooring system and risers with the vessel motion typically results in a reduction in extreme motion responses. This article presents numerical simulations and model tests on a new cell-truss spar platform carried out in the State Key Laboratory of Ocean Engineering at Shanghai Jiaotong University. Results from three calculation methods, including frequency-domain analysis and time-domain semi-coupled and fully-coupled analyses, were compared with the experimental data to assess the applicability of the different approaches. Proposals for improving the numerical calculations and the experimental technique are presented as well.

  11. Using niche-modelling and species-specific cost analyses to determine a multispecies corridor in a fragmented landscape

    Science.gov (United States)

    Zurano, Juan Pablo; Selleski, Nicole; Schneider, Rosio G.

    2017-01-01

    types independent of the degree of legal protection. These data used with multifocal GIS analyses balance the varying degree of overlap and unique properties among them allowing for comprehensive conservation strategies to be developed relatively rapidly. Our comprehensive approach serves as a model to other regions faced with habitat loss and lack of data. The five carnivores focused on in our study have wide ranges, so the results from this study can be expanded and combined with surrounding countries, with analyses at the species or community level. PMID:28841692

  12. From Global Climate Model Projections to Local Impacts Assessments: Analyses in Support of Planning for Climate Change

    Science.gov (United States)

    Snover, A. K.; Littell, J. S.; Mantua, N. J.; Salathe, E. P.; Hamlet, A. F.; McGuire Elsner, M.; Tohver, I.; Lee, S.

    2010-12-01

    Assessing and planning for the impacts of climate change require regionally-specific information. Information is required not only about projected changes in climate but also the resultant changes in natural and human systems at the temporal and spatial scales of management and decision making. Therefore, climate impacts assessment typically results in a series of analyses, in which relatively coarse-resolution global climate model projections of changes in regional climate are downscaled to provide appropriate input to local impacts models. This talk will describe recent examples in which coarse-resolution (~150 to 300km) GCM output was “translated” into information requested by decision makers at relatively small (watershed) and large (multi-state) scales using regional climate modeling, statistical downscaling, hydrologic modeling, and sector-specific impacts modeling. Projected changes in local air temperature, precipitation, streamflow, and stream temperature were developed to support Seattle City Light’s assessment of climate change impacts on hydroelectric operations, future electricity load, and resident fish populations. A state-wide assessment of climate impacts on eight sectors (agriculture, coasts, energy, forests, human health, hydrology and water resources, salmon, and urban stormwater infrastructure) was developed for Washington State to aid adaptation planning. Hydro-climate change scenarios for approximately 300 streamflow locations in the Columbia River basin and selected coastal drainages west of the Cascades were developed in partnership with major water management agencies in the Pacific Northwest to allow planners to consider how hydrologic changes may affect management objectives. Treatment of uncertainty in these assessments included: using “bracketing” scenarios to describe a range of impacts, using ensemble averages to characterize the central estimate of future conditions (given an emissions scenario), and explicitly assessing

  13. Civil engineering: EDF needs for concrete modelling; Genie civile: analyse des besoins EDF en modelisation du comportement des betons

    Energy Technology Data Exchange (ETDEWEB)

    Didry, O.; Gerard, B.; Bui, D. [Electricite de France (EDF), Direction des Etudes et Recherches, 92 - Clamart (France)

    1997-12-31

    Concrete structures which are encountered at EDF, like all civil engineering structures, age. In order to adapt the maintenance conditions of these structures, particularly to extend their service life, and also to prepare the construction of future structures, tools for predicting the behaviour of these structures in their environment should be available. For EDF the technical risks are high, and consequently very appropriate R and D actions are required. In this context the Direction des Etudes et Recherches (DER) has developed a methodology for analysing concrete structure behaviour modelling. This approach has several aims: - making a distinction between the problems which refer to existing models and those which require R and D; - displaying disciplinary links between different problems encountered on EDF structures (non-linear mechanics, chemical - hydraulic - mechanical coupling, etc.); - listing the existing tools and positioning the DER 'Aster' finite element code among them. This document is a state-of-the-art review of scientific knowledge, intended to shed light on the fields where structure operators have strong requirements that the present tools do not allow to be satisfactorily met. The analysis has been done on 12 scientific subjects: 1) Hydration of concrete at early ages: exothermicity, hardening, autogenous shrinkage; 2) Drying and drying shrinkage; 3) Alkali-silica reaction and bulky stage formation; 4) Long term deterioration by leaching; 5) Ionic diffusion and associated attacks: the chlorides case; 6) Permeability / tightness of concrete; 7) Concretes - nonlinear behaviour and cracking (I): contribution of the plasticity models; 8) Concretes - nonlinear behaviour and cracking (II): contribution of the damage models; 9) Concretes - nonlinear behaviour and cracking (III): the contribution of the probabilistic analysis model; 10) Delayed behaviour of

  14. Spatially quantitative models for vulnerability analyses and resilience measures in flood risk management: Case study Rafina, Greece

    Science.gov (United States)

    Karagiorgos, Konstantinos; Chiari, Michael; Hübl, Johannes; Maris, Fotis; Thaler, Thomas; Fuchs, Sven

    2013-04-01

    We will address spatially quantitative models for vulnerability analyses in flood risk management in the catchment of Rafina, 25 km east of Athens, Greece, and potential measures to reduce damage costs. The evaluation of flood damage losses is relatively advanced. Nevertheless, major problems arise since no market prices are available for the evaluation process. Moreover, there is a particular gap in quantifying the damages and the necessary expenditures for the implementation of mitigation measures with respect to flash floods. The key issue is to develop prototypes for assessing flood losses and the impact of mitigation measures on flood resilience by adjusting a vulnerability model, and to further develop the method in a Mediterranean region influenced by both mountain and coastal characteristics of land development. The objective of this study is to create a spatial and temporal analysis of the vulnerability factors based on a method combining spatially explicit loss data, data on the value of exposed elements at risk, and data on flood intensities. In this contribution, a methodology for the development of a flood damage assessment as a function of the process intensity and the degree of loss is presented. It is shown that (1) such relationships for defined object categories depend on site-specific and process-specific characteristics, but there is a correlation between process types that have similar characteristics; (2) existing semi-quantitative approaches of vulnerability assessment for elements at risk can be improved based on the proposed quantitative method; and (3) the concept of risk can be enhanced with respect to a standardised and comprehensive implementation by applying the vulnerability functions to be developed within the proposed research. Therefore, loss data were collected from the responsible administrative bodies and analysed on an object level. 
The used model is based on a basin scale approach as well as data on elements at risk exposed

  15. Implications of a Cognitive Science Model Integrating Literacy in Science on Achievement in Science and Reading: Direct Effects in Grades 3-5 with Transfer to Grades 6-7

    Science.gov (United States)

    Romance, Nancy; Vitale, Michael

    2017-01-01

    Reported are the results of a multiyear study in which reading comprehension and writing were integrated within an in-depth science instructional model (Science IDEAS) in daily 1.5 to 2 h lessons on a schoolwide basis in grades 3-4-5. Multilevel (HLM7) achievement findings showed the experimental intervention resulted in significant and…

  16. Statistical correlations and risk analyses techniques for a diving dual phase bubble model and data bank using massively parallel supercomputers.

    Science.gov (United States)

    Wienke, B R; O'Leary, T R

    2008-05-01

    Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), its dynamical principles, and its correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, and helitrox no-decompression time limits, repetitive dive tables, and selected mixed gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI and Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed gas risks, the USS Perry deep rebreather (RB) exploration dive, a world record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed gas diving, both in recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods and list reports from field testing for (real) mixed gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance reduction technique and an additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first-time correlation effort linking a dynamical bubble model with deep stop data. Supercomputing resources are requisite to connect model and data in application.
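The likelihood-based risk estimation described above can be sketched generically: each dive profile contributes a Bernoulli outcome (DCS hit or not) whose probability comes from a parametric risk function, and the parameters are chosen to maximize the log likelihood over the data bank. The exponential risk form, the toy data, and the grid search standing in for Levenberg-Marquardt below are all illustrative assumptions, not the LANL model:

```python
import math

def profile_risk(beta, hazard):
    """Illustrative one-parameter risk function: r = 1 - exp(-beta * hazard)."""
    return 1.0 - math.exp(-beta * hazard)

def log_likelihood(beta, profiles):
    """Bernoulli log likelihood; profiles is a list of (hazard, hit) pairs,
    with hit = 1 if DCS occurred on that profile and 0 otherwise."""
    ll = 0.0
    for hazard, hit in profiles:
        r = profile_risk(beta, hazard)
        ll += math.log(r) if hit else math.log(1.0 - r)
    return ll

# Toy data bank: mostly clean dives, with hits only at high hazard exposures.
profiles = [(0.5, 0), (0.8, 0), (1.0, 0), (2.5, 1), (3.0, 1), (1.2, 0)]

# Crude grid search over beta in place of a Levenberg-Marquardt routine.
best = max((b / 100.0 for b in range(1, 200)),
           key=lambda b: log_likelihood(b, profiles))
print(best)
```

With real data the hazard term would itself come from integrating the bubble model over the dive profile, and the optimizer would handle several parameters at once.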

  17. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    Science.gov (United States)

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  18. Time Headway Modelling of Motorcycle-Dominated Traffic to Analyse Traffic Safety Performance and Road Link Capacity of Single Carriageways

    Directory of Open Access Journals (Sweden)

    D. M. Priyantha Wedagama

    2017-04-01

    Full Text Available This study aims to develop time headway distribution models to analyse traffic safety performance and road link capacities for motorcycle-dominated traffic in Denpasar, Bali. The three road links selected as the case study are Jl. Hayam Wuruk, Jl. Hang Tuah, and Jl. Padma. Data analysis showed that between 55% and 80% of motorists in Denpasar during morning and evening peak hours paid little attention to keeping a safe distance from the vehicles in front. The study found that Lognormal distribution models best fit the time headway data during morning peak hours, while either the Weibull (3P) or the Pearson III distribution fits best during evening peak hours. Road link capacities for mixed traffic predominantly composed of motorcycles are apparently affected by the behaviour of motorists in keeping a safe distance from the vehicles in front. Theoretical road link capacities for Jl. Hayam Wuruk, Jl. Hang Tuah and Jl. Padma are 3,186 vehicles/hour, 3,077 vehicles/hour and 1,935 vehicles/hour respectively.
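Fitting a lognormal headway model of the kind used here reduces to Gaussian maximum likelihood on the log-headways: the MLE of (mu, sigma) is simply the mean and standard deviation of log(headway). A minimal sketch on synthetic data (the parameter values are illustrative, not the Denpasar estimates):

```python
import math
import random
import statistics

def fit_lognormal(headways):
    """MLE for a lognormal distribution: mu and sigma are the mean and
    (population) standard deviation of the log-transformed headways."""
    logs = [math.log(h) for h in headways]
    return statistics.fmean(logs), statistics.pstdev(logs)

random.seed(42)
# Synthetic headways in seconds with true log-scale parameters mu=0.5, sigma=0.6;
# short, tightly bunched headways like these are typical of motorcycle streams.
sample = [random.lognormvariate(0.5, 0.6) for _ in range(5000)]
mu, sigma = fit_lognormal(sample)
print(round(mu, 2), round(sigma, 2))
```

Once fitted, the distribution's quantiles give the share of headways below a chosen safe-distance threshold, and its mean headway gives a theoretical capacity as 3600 divided by the mean headway in seconds.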

  19. Bayesian salamanders: analysing the demography of an underground population of the European plethodontid Speleomantes strinatii with state-space modelling

    Directory of Open Access Journals (Sweden)

    Salvidio Sebastiano

    2010-02-01

    Full Text Available Abstract Background It has been suggested that Plethodontid salamanders are excellent candidates for indicating ecosystem health. However, detailed, long-term data sets of their populations are rare, limiting our understanding of the demographic processes underlying their population fluctuations. Here we present a demographic analysis based on a 1996-2008 data set on an underground population of Speleomantes strinatii (Aellen) in NW Italy. We utilised a Bayesian state-space approach allowing us to parameterise a stage-structured Lefkovitch model. We used all the available population data from annual temporary removal experiments to provide us with the baseline data on the numbers of juveniles, subadults and adult males and females present at any given time. Results Sampling the posterior chains of the converged state-space model gives us the likelihood distributions of the state-specific demographic rates and the associated uncertainty of these estimates. Analysing the resulting parameterised Lefkovitch matrices shows that the population growth is very close to 1, and that at population equilibrium we expect half of the individuals present to be adults of reproductive age, which is what we also observe in the data. Elasticity analysis shows that adult survival is the key determinant for population growth. Conclusion This analysis demonstrates how an understanding of population demography can be gained from structured population data even in a case where following marked individuals over their whole lifespan is not practical.
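
The Lefkovitch-matrix calculations mentioned (asymptotic growth rate and elasticity analysis) follow a standard recipe. The three-stage matrix below uses invented rates, not the posterior estimates from the salamander study:

```python
import numpy as np

# Hypothetical 3-stage Lefkovitch matrix (juvenile, subadult, adult);
# entries are illustrative survival/transition/fecundity rates.
A = np.array([
    [0.0, 0.0, 0.80],   # adult fecundity -> new juveniles
    [0.5, 0.3, 0.00],   # juvenile->subadult transition, subadult retention
    [0.0, 0.4, 0.85],   # subadult->adult transition, adult survival
])

# Asymptotic growth rate = dominant eigenvalue; its right eigenvector
# (normalised) is the stable stage distribution
eigvals, eigvecs = np.linalg.eig(A)
k = eigvals.real.argmax()
lam = eigvals[k].real
w = eigvecs[:, k].real
w = w / w.sum()

# Reproductive values: dominant left eigenvector (right eigenvector of A^T)
eigvals_t, eigvecs_t = np.linalg.eig(A.T)
v = eigvecs_t[:, eigvals_t.real.argmax()].real

# Elasticities e_ij = a_ij * v_i * w_j / (lambda * <v, w>); they sum to 1,
# so each entry is the proportional contribution of rate a_ij to growth
E = A * np.outer(v, w) / (lam * (v @ w))
```

The entry of `E` with the largest value identifies the demographic rate to which population growth is most sensitive, which in the study turned out to be adult survival.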

  20. Comparison of statistical inferences from the DerSimonian-Laird and alternative random-effects model meta-analyses - an empirical assessment of 920 Cochrane primary outcome meta-analyses.

    Science.gov (United States)

    Thorlund, Kristian; Wetterslev, Jørn; Awad, Tahany; Thabane, Lehana; Gluud, Christian

    2011-12-01

    In random-effects model meta-analysis, the conventional DerSimonian-Laird (DL) estimator typically underestimates the between-trial variance. Alternative variance estimators have been proposed to address this bias. This study aims to empirically compare statistical inferences from random-effects model meta-analyses on the basis of the DL estimator and four alternative estimators, as well as distributional assumptions (normal distribution and t-distribution) about the pooled intervention effect. We evaluated the discrepancies of p-values, 95% confidence intervals (CIs) in statistically significant meta-analyses, and the degree (percentage) of statistical heterogeneity (e.g. I(2)) across 920 Cochrane primary outcome meta-analyses. In total, 414 of the 920 meta-analyses were statistically significant with the DL meta-analysis, and 506 were not. Compared with the DL estimator, the four alternative estimators yielded p-values and CIs that could be interpreted as discordant in up to 11.6% or 6% of the included meta-analyses, depending on whether a normal distribution or a t-distribution of the intervention effect estimates was assumed. Large discrepancies were observed for the measures of degree of heterogeneity when comparing DL with each of the four alternative estimators. Estimating the degree (percentage) of heterogeneity on the basis of less biased between-trial variance estimators seems preferable to current practice. Disclosing inferential sensitivity of p-values and CIs may also be necessary when borderline significant results have substantial impact on the conclusion. Copyright © 2012 John Wiley & Sons, Ltd.
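
The DerSimonian-Laird estimator at the centre of this comparison has a closed form. A minimal sketch, with hypothetical trial-level effects and variances standing in for real Cochrane data:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooling with the DerSimonian-Laird between-trial
    variance estimator (tau^2 truncated at zero)."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                              # fixed-effect (inverse-variance) weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)       # Cochran's Q
    df = y.size - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)            # DL between-trial variance estimate
    w_star = 1.0 / (v + tau2)                # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    i2 = 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0  # heterogeneity (%)
    return pooled, se, tau2, i2

# Hypothetical trial effect estimates (e.g., log odds ratios) and variances
pooled, se, tau2, i2 = dersimonian_laird([0.2, 0.5, 0.8, 0.1],
                                         [0.04, 0.05, 0.04, 0.06])
```

The alternative estimators compared in the study replace the `tau2` line with different between-trial variance formulas; everything downstream (weights, pooled effect, CI) is unchanged, which is why biased `tau2` propagates directly into inference.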

  1. Using ecological niche models and niche analyses to understand speciation patterns: the case of sister neotropical orchid bees.

    Directory of Open Access Journals (Sweden)

    Daniel P Silva

    Full Text Available The role of past connections between the two major South American forested biomes on current species distribution has been recognized a long time ago. Climatic oscillations that further separated these biomes have promoted parapatric speciation, in which many species had their continuous distribution split, giving rise to different but related species (i.e., different potential distributions and realized niche features). The distribution of many sister species of orchid bees follow this pattern. Here, using ecological niche models and niche analyses, we (1) tested the role of ecological niche differentiation on the divergence between sister orchid-bees (genera Eulaema and Eufriesea) from the Amazon and Atlantic forests, and (2) highlighted interesting areas for new surveys. Amazonian species occupied different realized niches than their Atlantic sister species. Conversely, species of sympatric but distantly related Eulaema bees occupied similar realized niches. Amazonian species had a wide potential distribution in South America, whereas Atlantic Forest species were more limited to the eastern coast of the continent. Additionally, we identified several areas in need of future surveys. Our results show that the realized niche of Atlantic-Amazonian sister species of orchid bees, which have been previously treated as allopatric populations of three species, had limited niche overlap and similarity. These findings agree with their current taxonomy, which treats each of those populations as distinct valid species.
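
Niche overlap between two species' suitability surfaces, as reported here, is commonly quantified with Schoener's D (1 = identical niches, 0 = no overlap). A sketch on hypothetical suitability grids, not the actual ecological niche model outputs:

```python
import numpy as np

def schoeners_d(suit_a, suit_b):
    """Schoener's D niche-overlap statistic between two suitability grids."""
    p_a = suit_a / suit_a.sum()   # normalise each grid to a probability surface
    p_b = suit_b / suit_b.sum()
    return 1.0 - 0.5 * np.abs(p_a - p_b).sum()

rng = np.random.default_rng(1)
amazon = rng.random((50, 50))            # hypothetical suitability grid
atlantic = np.roll(amazon, 25, axis=1)   # shifted surface -> partial overlap

d_self = schoeners_d(amazon, amazon)     # identical niches -> D = 1
d_pair = schoeners_d(amazon, atlantic)   # partial overlap -> 0 < D < 1
```

In practice the grids come from projected niche models for each sister species, and the observed D is compared against null distributions from niche identity/similarity tests.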

  2. Using ecological niche models and niche analyses to understand speciation patterns: the case of sister neotropical orchid bees.

    Science.gov (United States)

    Silva, Daniel P; Vilela, Bruno; De Marco, Paulo; Nemésio, André

    2014-01-01

    The role of past connections between the two major South American forested biomes on current species distribution has been recognized a long time ago. Climatic oscillations that further separated these biomes have promoted parapatric speciation, in which many species had their continuous distribution split, giving rise to different but related species (i.e., different potential distributions and realized niche features). The distribution of many sister species of orchid bees follow this pattern. Here, using ecological niche models and niche analyses, we (1) tested the role of ecological niche differentiation on the divergence between sister orchid-bees (genera Eulaema and Eufriesea) from the Amazon and Atlantic forests, and (2) highlighted interesting areas for new surveys. Amazonian species occupied different realized niches than their Atlantic sister species. Conversely, species of sympatric but distantly related Eulaema bees occupied similar realized niches. Amazonian species had a wide potential distribution in South America, whereas Atlantic Forest species were more limited to the eastern coast of the continent. Additionally, we identified several areas in need of future surveys. Our results show that the realized niche of Atlantic-Amazonian sister species of orchid bees, which have been previously treated as allopatric populations of three species, had limited niche overlap and similarity. These findings agree with their current taxonomy, which treats each of those populations as distinct valid species.

  3. Water flow experiments and analyses on the cross-flow type mercury target model with the flow guide plates

    CERN Document Server

    Haga, K; Kaminaga, M; Hino, R

    2001-01-01

    A mercury target is used in the spallation neutron source driven by a high-intensity proton accelerator. In this study, the effectiveness of the cross-flow type mercury target structure was evaluated experimentally and analytically. Prior to the experiment, the mercury flow field and the temperature distribution in the target container were analyzed assuming a proton beam energy and power of 1.5 GeV and 5 MW, respectively, and the feasibility of the cross-flow type target was evaluated. Then the average water flow velocity field in the target mock-up model, which was fabricated from Plexiglass for a water experiment, was measured at room temperature using the PIV technique. Water flow analyses were conducted and the analytical results were compared with the experimental results. The experimental results showed that the cross-flow could be realized in most of the proton beam path area and the analytical result of the water flow velocity field showed good correspondence to the experimental results in the case w...

  4. Modeling Acequia Irrigation Systems Using System Dynamics: Model Development, Evaluation, and Sensitivity Analyses to Investigate Effects of Socio-Economic and Biophysical Feedbacks

    Directory of Open Access Journals (Sweden)

    Benjamin L. Turner

    2016-10-01

    Full Text Available Agriculture-based irrigation communities of northern New Mexico have survived for centuries despite the arid environment in which they reside. These irrigation communities are threatened by regional population growth, urbanization, a changing demographic profile, economic development, climate change, and other factors. Within this context, we investigated the extent to which community resource management practices centering on shared resources (e.g., water for agriculture in the floodplains and grazing resources in the uplands) and mutualism (i.e., shared responsibility of local residents for maintaining traditional irrigation policies and upholding cultural and spiritual observances) embedded within the community structure influence acequia function. We used a system dynamics modeling approach as an interdisciplinary platform to integrate these systems, specifically the relationship between community structure and resource management. In this paper we describe the background and context of acequia communities in northern New Mexico and the challenges they face. We formulate a Dynamic Hypothesis capturing the endogenous feedbacks driving acequia community vitality. Development of the model centered on major stock-and-flow components, including linkages for hydrology, ecology, community, and economics. Calibration metrics were used for model evaluation, including statistical correlation of observed and predicted values and Theil inequality statistics. Results indicated that the model reproduced trends exhibited by the observed system. Sensitivity analyses of socio-cultural processes identified absentee decisions, cumulative income effect on time in agriculture, land use preference due to time allocation, community demographic effect, effect of employment on participation, and farm size effect as key determinants of system behavior and response.
Sensitivity analyses of biophysical parameters revealed that several key parameters (e.g., acres per
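
The stock-and-flow structure underlying such a model can be illustrated with a deliberately minimal system dynamics sketch: one water stock and one community-participation stock coupled by a feedback loop, integrated with Euler's method. All parameter values below are invented for illustration and bear no relation to the calibrated acequia model:

```python
dt = 0.25                 # time step (years)
steps = int(40 / dt)      # 40-year horizon

water = 100.0             # water stock (arbitrary units)
participation = 0.5       # fraction of residents active in acequia upkeep

history = []
for _ in range(steps):
    inflow = 20.0                                    # supply per year
    outflow = 18.0 + 10.0 * (1.0 - participation)    # losses grow as upkeep declines
    # Feedback: participation rises when water is plentiful, relaxes toward 0.5
    d_part = 0.02 * (water / 100.0 - 1.0) - 0.01 * (participation - 0.5)
    water = max(0.0, water + (inflow - outflow) * dt)
    participation = min(1.0, max(0.0, participation + d_part * dt))
    history.append(water)
```

Even this toy version shows the reinforcing loop the paper describes: declining participation raises losses, which depresses the water stock, which further depresses participation. The full model adds hydrology, ecology, and economics sectors and is evaluated against observed trends.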

  5. Epidemiology of HPV 16 and cervical cancer in Finland and the potential impact of vaccination: mathematical modelling analyses.

    Directory of Open Access Journals (Sweden)

    Ruanne V Barnabas

    2006-05-01

    Full Text Available BACKGROUND: Candidate human papillomavirus (HPV) vaccines have demonstrated almost 90%-100% efficacy in preventing persistent, type-specific HPV infection over 18 mo in clinical trials. If these vaccines go on to demonstrate prevention of precancerous lesions in phase III clinical trials, they will be licensed for public use in the near future. How these vaccines will be used in countries with national cervical cancer screening programmes is an important question. METHODS AND FINDINGS: We developed a transmission model of HPV 16 infection and progression to cervical cancer and calibrated it to Finnish HPV 16 seroprevalence over time. The model was used to estimate the transmission probability of the virus, to look at the effect of changes in patterns of sexual behaviour and smoking on age-specific trends in cancer incidence, and to explore the impact of HPV 16 vaccination. We estimated a high per-partnership transmission probability of HPV 16, of 0.6. The modelling analyses showed that changes in sexual behaviour and smoking accounted, in part, for the increase seen in cervical cancer incidence in 35- to 39-y-old women from 1990 to 1999. At both low (10%, opportunistic immunisation) and high (90%, a national immunisation programme) coverage of the adolescent population, vaccinating women and men had little benefit over vaccinating women alone. We estimate that vaccinating 90% of young women before sexual debut has the potential to decrease HPV type-specific (e.g., type 16) cervical cancer incidence by 91%. If older women are more likely to have persistent infections and progress to cancer, then vaccination with a duration of protection of less than 15 y could result in an older susceptible cohort and no decrease in cancer incidence. While vaccination has the potential to significantly reduce type-specific cancer incidence, its combination with screening further improves cancer prevention. CONCLUSIONS: HPV vaccination has the potential to
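
The qualitative effect of coverage on an endemic infection can be sketched with a much simpler SIS approximation than the paper's calibrated transmission model. Only the 0.6 per-partnership transmission probability comes from the abstract; the partner-change rate and infection duration below are invented, and the model ignores age structure, progression, and sex:

```python
def basic_r0(transmission_prob=0.6, partners_per_year=2.0, duration_years=1.0):
    """Crude R0: per-partnership transmission probability x partner change
    rate x mean duration of infection. Only the 0.6 is from the study."""
    return transmission_prob * partners_per_year * duration_years

def equilibrium_prevalence(coverage, r0):
    """SIS endemic equilibrium, with vaccination before sexual debut
    scaling the effective reproduction number by (1 - coverage)."""
    r_eff = r0 * (1.0 - coverage)
    return 1.0 - 1.0 / r_eff if r_eff > 1.0 else 0.0

baseline = equilibrium_prevalence(0.0, basic_r0())   # no vaccination
covered = equilibrium_prevalence(0.9, basic_r0())    # 90% coverage
```

With these toy numbers, 90% coverage pushes the effective reproduction number below 1 and the endemic equilibrium to zero, illustrating why high pre-debut coverage dominates the projected incidence reductions; the published 91% figure comes from the full calibrated model, not this approximation.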

  6. Multiplicative models of analysis : a description and the use in analysing accident ratios as a function of hourly traffic volume and road-surface skidding resistance.

    NARCIS (Netherlands)

    Oppe, S.

    1977-01-01

    Accident ratios are analysed with regard to the variables road surface skidding resistance and hourly traffic volume. It is concluded that the multiplicative model describes the data better than the additive model, and moreover that there is no interaction between skidding resistance and traffic volume.
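
The additive-versus-multiplicative comparison can be sketched by fitting both forms to the same data: the additive model by ordinary least squares on the raw scale, the multiplicative model by a log-linear regression. The data below are synthetic, generated from a multiplicative ground truth with invented exponents:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
volume = rng.uniform(100, 2000, n)    # hourly traffic volume (hypothetical)
skid = rng.uniform(0.3, 0.9, n)       # skidding-resistance coefficient (hypothetical)

# Multiplicative ground truth with lognormal noise
ratio = 0.5 * volume**0.4 * skid**-1.2 * rng.lognormal(0.0, 0.1, n)

# Additive fit: ratio ~ a + b1*volume + b2*skid
X_add = np.column_stack([np.ones(n), volume, skid])
coef_add, *_ = np.linalg.lstsq(X_add, ratio, rcond=None)
rss_add = np.sum((ratio - X_add @ coef_add) ** 2)

# Multiplicative fit via log-linear regression:
# log(ratio) ~ log(a) + b1*log(volume) + b2*log(skid)
X_mul = np.column_stack([np.ones(n), np.log(volume), np.log(skid)])
coef_mul, *_ = np.linalg.lstsq(X_mul, np.log(ratio), rcond=None)
pred_mul = np.exp(X_mul @ coef_mul)
rss_mul = np.sum((ratio - pred_mul) ** 2)
```

On data generated multiplicatively, the log-linear fit recovers the exponents and leaves a much smaller residual sum of squares than the additive fit, mirroring the paper's conclusion in favour of the multiplicative form.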

  7. A second-generation device for automated training and quantitative behavior analyses of molecularly-tractable model organisms.

    Directory of Open Access Journals (Sweden)

    Douglas Blackiston

    Full Text Available A deep understanding of cognitive processes requires functional, quantitative analyses of the steps leading from genetics and the development of nervous system structure to behavior. Molecularly-tractable model systems such as Xenopus laevis and planaria offer an unprecedented opportunity to dissect the mechanisms determining the complex structure of the brain and CNS. A standardized platform that facilitated quantitative analysis of behavior would make a significant impact on evolutionary ethology, neuropharmacology, and cognitive science. While some animal tracking systems exist, the available systems do not allow automated training (feedback to individual subjects in real time), which is necessary for operant conditioning assays. The lack of standardization in the field, and the numerous technical challenges that face the development of a versatile system with the necessary capabilities, comprise a significant barrier keeping molecular developmental biology labs from integrating behavior analysis endpoints into their pharmacological and genetic perturbations. Here we report the development of a second-generation system that is a highly flexible, powerful machine vision and environmental control platform. In order to enable multidisciplinary studies aimed at understanding the roles of genes in brain function and behavior, and aid other laboratories that do not have the facilities to undergo complex engineering development, we describe the device and the problems that it overcomes. We also present sample data using frog tadpoles and flatworms to illustrate its use. Having solved significant engineering challenges in its construction, the resulting design is a relatively inexpensive instrument of wide relevance for several fields, and will accelerate interdisciplinary discovery in pharmacology, neurobiology, regenerative medicine, and cognitive science.

  8. Transcriptomics and proteomics analyses of the PACAP38 influenced ischemic brain in permanent middle cerebral artery occlusion model mice

    Directory of Open Access Journals (Sweden)

    Hori Motohide

    2012-11-01

    Full Text Available Abstract Introduction The neuropeptide pituitary adenylate cyclase-activating polypeptide (PACAP) is considered to be a potential therapeutic agent for prevention of cerebral ischemia. Ischemia is among the most common causes of death after heart attack and cancer, causing major negative social and economic consequences. This study was designed to investigate the effect of PACAP38 injected intracerebroventricularly in a mouse model of permanent middle cerebral artery occlusion (PMCAO), along with a corresponding SHAM control that used 0.9% saline injection. Methods Ischemic and non-ischemic brain tissues were sampled at 6 and 24 hours post-treatment. Following behavioral analyses to confirm whether ischemia had occurred, we investigated the genome-wide changes in gene and protein expression using a DNA microarray chip (4x44K, Agilent) and two-dimensional gel electrophoresis (2-DGE) coupled with matrix-assisted laser desorption/ionization-time of flight-mass spectrometry (MALDI-TOF-MS), respectively. Western blotting and immunofluorescent staining were also used to further examine the identified protein factor. Results Our results revealed numerous changes in the transcriptome of the ischemic hemisphere (ipsilateral) treated with PACAP38 compared to the saline-injected SHAM control hemisphere (contralateral). Previously known (such as the interleukin family) and novel (Gabra6, Crtam) genes were identified under PACAP influence. In parallel, 2-DGE analysis revealed a highly expressed protein spot in the ischemic hemisphere that was identified as dihydropyrimidinase-related protein 2 (DPYL2). The DPYL2, also known as Crmp2, is a marker for axonal growth and nerve development. Interestingly, PACAP treatment slightly increased its abundance (by 2-DGE and immunostaining) at 6 h but not at 24 h in the ischemic hemisphere, suggesting PACAP activates a neuronal defense mechanism early on.
Conclusions This study provides a detailed inventory of PACAP-influenced gene expressions

  9. Aeroelastic Analyses of the SemiSpan SuperSonic Transport (S4T) Wind Tunnel Model at Mach 0.95

    Science.gov (United States)

    Hur, Jiyoung

    2014-01-01

    Detailed aeroelastic analyses of the SemiSpan SuperSonic Transport (S4T) wind tunnel model at Mach 0.95 with a 1.75-deg fixed angle of attack are presented. First, a numerical procedure using the Computational Fluids Laboratory 3-Dimensional (CFL3D) Version 6.4 flow solver is investigated. The mesh update method for structured multi-block grids was successfully applied to the Navier-Stokes simulations. Second, the steady aerodynamic analyses with a rigid structure of the S4T wind tunnel model are reviewed in transonic flow. Third, static analyses were performed for both the Euler and Navier-Stokes equations. Both the Euler and Navier-Stokes equations predicted a significant increase of lift forces, compared to the results from the rigid structure of the S4T wind-tunnel model, over various dynamic pressures. Finally, dynamic aeroelastic analyses were performed to investigate the flutter condition of the S4T wind tunnel model at the transonic Mach number. The flutter condition was observed at a dynamic pressure of approximately 75.0 psf for the Navier-Stokes simulations. However, it was observed that the flutter condition occurred at a dynamic pressure of approximately 47.27 psf for the Euler simulations. Also, the computational efficiency of the aeroelastic analyses for the S4T wind tunnel model has been assessed.

  10. Mental model mapping as a new tool to analyse the use of information in decision-making in integrated water management

    Science.gov (United States)

    Kolkman, M. J.; Kok, M.; van der Veen, A.

    The solution of complex, unstructured problems is faced with policy controversy and dispute, unused and misused knowledge, project delay and failure, and decline of public trust in governmental decisions. Mental model mapping (also called concept mapping) is a technique to analyse these difficulties on a fundamental cognitive level, which can reveal experiences, perceptions, assumptions, knowledge and subjective beliefs of stakeholders, experts and other actors, and can stimulate communication and learning. This article presents the theoretical framework from which the use of mental model mapping techniques to analyse this type of problems emerges as a promising technique. The framework consists of the problem solving or policy design cycle, the knowledge production or modelling cycle, and the (computer) model as interface between the cycles. Literature attributes difficulties in the decision-making process to communication gaps between decision makers, stakeholders and scientists, and to the construction of knowledge within different paradigm groups that leads to different interpretation of the problem situation. Analysis of the decision-making process literature indicates that choices, which are made in all steps of the problem solving cycle, are based on an individual decision maker’s frame of perception. This frame, in turn, depends on the mental model residing in the mind of the individual. Thus we identify three levels of awareness on which the decision process can be analysed. This research focuses on the third level. Mental models can be elicited using mapping techniques. In this way, analysing an individual’s mental model can shed light on decision-making problems. The steps of the knowledge production cycle are, in the same manner, ultimately driven by the mental models of the scientist in a specific discipline. Remnants of this mental model can be found in the resulting computer model. The characteristics of unstructured problems (complexity

  11. Generic linking of finite element models for non-linear static and global dynamic analyses for aircraft structures

    NARCIS (Netherlands)

    Wit, de A.J.; Akcay Perdahcioglu, D.; Brink, van den W.M.; Boer, de A.

    2011-01-01

    Depending on the type of analysis, Finite Element(FE) models of different fidelity are necessary. Creating these models manually is a labor intensive task. This paper discusses a generic approach for generating FE models of different fidelity from a single reference FE model. These different fidelit

  12. Generic Linking of Finite Element Models for non-linear static and global dynamic analyses of aircraft structures

    NARCIS (Netherlands)

    Wit, de A.J.; Akcay-Perdahcioglu, D.; Brink, van den W.M.; Boer, de A.

    2012-01-01

    Depending on the type of analysis, Finite Element(FE) models of different fidelity are necessary. Creating these models manually is a labor intensive task. This paper discusses a generic approach for generating FE models of different fidelity from a single reference FE model. These different fidelit

  13. Generic linking of finite element models for non-linear static and global dynamic analyses for aircraft structures

    NARCIS (Netherlands)

    de Wit, A.J.; Akcay-Perdahcioglu, Didem; van den Brink, W.M.; de Boer, Andries; Rolfes, R.; Jansen, E.L.

    2011-01-01

    Depending on the type of analysis, Finite Element(FE) models of different fidelity are necessary. Creating these models manually is a labor intensive task. This paper discusses a generic approach for generating FE models of different fidelity from a single reference FE model. These different

  14. Modelling of the spallation reaction: analysis and testing of nuclear models; Simulation de la spallation: analyse et test des modeles nucleaires

    Energy Technology Data Exchange (ETDEWEB)

    Toccoli, C

    2000-04-03

    The spallation reaction is considered a 2-step process. First, a very quick stage (10{sup -22}, 10{sup -29} s) corresponds to the individual interaction between the incident projectile and nucleons; this interaction is followed by a series of nucleon-nucleon collisions (intranuclear cascade) during which fast particles are emitted and the nucleus is left in a strongly excited state. Second, a slower stage (10{sup -18}, 10{sup -19} s) follows, during which the nucleus is expected to de-excite completely. This de-excitation proceeds by evaporation of light particles (n, p, d, t, {sup 3}He, {sup 4}He) or/and fission or/and fragmentation. The HETC code has been designed to simulate spallation reactions; this simulation is based on the 2-step process and on several models of intranuclear cascades (Bertini model, Cugnon model, Helder Duarte model), while the evaporation model relies on the statistical theory of Weisskopf-Ewing. The purpose of this work is to evaluate the ability of the HETC code to predict experimental results. A methodology for the comparison of relevant experimental data with results of calculation is presented, and a preliminary estimation of the systematic error of the HETC code is proposed. The main problem of cascade models originates in the difficulty of simulating inelastic nucleon-nucleon collisions: the emission of pions is over-estimated and the corresponding differential spectra are badly reproduced. The inaccuracy of cascade models has a great impact on determining the excited state of the nucleus at the end of the first step, and indirectly on the distribution of final residual nuclei. The test of the evaporation model has shown that the emission of high-energy light particles is under-estimated. (A.C.)

  15. Mental model mapping as a new tool to analyse the use of information in decision-making in integrated water management

    NARCIS (Netherlands)

    Kolkman, M.J.; Kok, M.; Veen, van der A.

    2005-01-01

    The solution of complex, unstructured problems is faced with policy controversy and dispute, unused and misused knowledge, project delay and failure, and decline of public trust in governmental decisions. Mental model mapping (also called concept mapping) is a technique to analyse these difficulties

  16. Mental model mapping as a new tool to analyse the use of information in decision-making in integrated water management

    NARCIS (Netherlands)

    Kolkman, Rien; Kok, Matthijs; van der Veen, A.

    2005-01-01

    The solution of complex, unstructured problems is faced with policy controversy and dispute, unused and misused knowledge, project delay and failure, and decline of public trust in governmental decisions. Mental model mapping (also called concept mapping) is a technique to analyse these difficulties

  17. From global economic modelling to household level analyses of food security and sustainability: how big is the gap and can we bridge it?

    NARCIS (Netherlands)

    Wijk, van M.T.

    2014-01-01

    Policy and decision makers have to make difficult choices to improve the food security of local people against the background of drastic global and local changes. Ex-ante impact assessment using integrated models can help them with these decisions. This review analyses the state of affairs of the mu

  18. Parsimony and Model-Based Analyses of Indels in Avian Nuclear Genes Reveal Congruent and Incongruent Phylogenetic Signals

    Directory of Open Access Journals (Sweden)

    Frederick H. Sheldon

    2013-03-01

    Full Text Available Insertion/deletion (indel) mutations, which are represented by gaps in multiple sequence alignments, have been used to examine phylogenetic hypotheses for some time. However, most analyses combine gap data with the nucleotide sequences in which they are embedded, probably because most phylogenetic datasets include few gap characters. Here, we report analyses of 12,030 gap characters from an alignment of avian nuclear genes using maximum parsimony (MP) and a simple maximum likelihood (ML) framework. Both trees were similar, and they exhibited almost all of the strongly supported relationships in the nucleotide tree, although neither gap tree supported many relationships that have proven difficult to recover in previous studies. Moreover, independent lines of evidence typically corroborated the nucleotide topology instead of the gap topology when they disagreed, although the number of conflicting nodes with high bootstrap support was limited. Filtering to remove short indels did not substantially reduce homoplasy or reduce conflict. Combined analyses of nucleotides and gaps resulted in the nucleotide topology, but with increased support, suggesting that gap data may prove most useful when analyzed in combination with nucleotide substitutions.
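
Scoring gap characters under maximum parsimony reduces to counting state changes on a tree with the Fitch algorithm. The tree and the presence/absence characters below are hypothetical, not the avian dataset:

```python
def fitch_steps(tree, tip_states):
    """Minimum change count for one character under Fitch parsimony.
    `tree` is a nested tuple of tip names; `tip_states` maps tip -> state.
    Gap (indel) characters are coded as presence (1) / absence (0)."""
    steps = 0

    def visit(node):
        nonlocal steps
        if isinstance(node, str):                    # leaf: observed state set
            return {tip_states[node]}
        left, right = visit(node[0]), visit(node[1])
        common = left & right
        if common:
            return common                            # intersection: no change here
        steps += 1                                   # disjoint sets: one indel event
        return left | right

    visit(tree)
    return steps

tree = (("sparrow", "finch"), ("hawk", "owl"))       # hypothetical topology
char_shared = {"sparrow": 1, "finch": 1, "hawk": 0, "owl": 0}    # clean signal
char_conflict = {"sparrow": 1, "finch": 0, "hawk": 1, "owl": 0}  # homoplastic

s_shared = fitch_steps(tree, char_shared)      # 1 step: indel maps to one branch
s_conflict = fitch_steps(tree, char_conflict)  # 2 steps: pattern conflicts with tree
```

Summing these per-character step counts across all 12,030 gap characters, over candidate topologies, is what an MP search over gap data optimizes; characters needing extra steps on the best tree are the homoplasy the study measures.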

  19. Structured modelling and nonlinear analysis of PEM fuel cells; Strukturierte Modellierung und nichtlineare Analyse von PEM-Brennstoffzellen

    Energy Technology Data Exchange (ETDEWEB)

    Hanke-Rauschenbach, R.

    2007-10-26

    In the first part of this work a model structuring concept for electrochemical systems is presented. The application of such a concept for the structuring of a process model allows it to combine different fuel cell models to form a whole model family, regardless of their level of detail. Beyond this the concept offers the opportunity to flexibly exchange model entities on different model levels. The second part of the work deals with the nonlinear behaviour of PEM fuel cells. With the help of a simple, spatially lumped and isothermal model, bistable current-voltage characteristics of PEM fuel cells operated with low humidified feed gases are predicted and discussed in detail. The cell is found to exhibit current-voltage curves with pronounced local extrema in a parameter range that is of practical interest when operated at constant feed gas flow rates. (orig.)

  20. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation.

    Science.gov (United States)

    Zajac, Zuzanna; Stith, Bradley; Bowling, Andrea C; Langtimm, Catherine A; Swain, Eric D

    2015-07-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust
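
The variance-based first-order index at the heart of such a GSA can be estimated directly from its definition, S_i = Var(E[Y|X_i]) / Var(Y). The toy habitat-response function below is an invented stand-in for the SAV HSI models, and the double-loop Monte Carlo is a sketch rather than an efficient Saltelli-style Sobol estimator:

```python
import numpy as np

def hsi(x):
    """Hypothetical habitat-suitability response for three U(0,1) inputs
    (stand-ins for depth, salinity, light); not the published SAV models."""
    depth, salinity, light = x[..., 0], x[..., 1], x[..., 2]
    return 4.0 * depth + salinity * light + 0.2 * light

def first_order_index(model, i, dims=3, n_outer=300, n_inner=300, seed=0):
    """Brute-force estimate of S_i = Var(E[Y|X_i]) / Var(Y)."""
    rng = np.random.default_rng(seed)
    cond_means = []
    for _ in range(n_outer):
        xi = rng.random()
        x = rng.random((n_inner, dims))
        x[:, i] = xi                        # freeze input i at a sampled value
        cond_means.append(model(x).mean())  # E[Y | X_i = xi], by inner sampling
    x_all = rng.random((n_outer * n_inner, dims))
    return np.var(cond_means) / np.var(model(x_all))

s_depth = first_order_index(hsi, 0)
s_light = first_order_index(hsi, 2)
```

With this toy response, depth dominates the output variance while light contributes little, the kind of ranking the study computes per site to explain why high- and low-quality locations are sensitive to different parameters.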

  1. Evaluation of habitat suitability index models by global sensitivity and uncertainty analyses: a case study for submerged aquatic vegetation

    Science.gov (United States)

    Zajac, Zuzanna; Stith, Bradley M.; Bowling, Andrea C.; Langtimm, Catherine A.; Swain, Eric D.

    2015-01-01

    Habitat suitability index (HSI) models are commonly used to predict habitat quality and species distributions and are used to develop biological surveys, assess reserve and management priorities, and anticipate possible change under different management or climate change scenarios. Important management decisions may be based on model results, often without a clear understanding of the level of uncertainty associated with model outputs. We present an integrated methodology to assess the propagation of uncertainty from both inputs and structure of the HSI models on model outputs (uncertainty analysis: UA) and relative importance of uncertain model inputs and their interactions on the model output uncertainty (global sensitivity analysis: GSA). We illustrate the GSA/UA framework using simulated hydrology input data from a hydrodynamic model representing sea level changes and HSI models for two species of submerged aquatic vegetation (SAV) in southwest Everglades National Park: Vallisneria americana (tape grass) and Halodule wrightii (shoal grass). We found considerable spatial variation in uncertainty for both species, but distributions of HSI scores still allowed discrimination of sites with good versus poor conditions. Ranking of input parameter sensitivities also varied spatially for both species, with high habitat quality sites showing higher sensitivity to different parameters than low-quality sites. HSI models may be especially useful when species distribution data are unavailable, providing means of exploiting widely available environmental datasets to model past, current, and future habitat conditions. The GSA/UA approach provides a general method for better understanding HSI model dynamics, the spatial and temporal variation in uncertainties, and the parameters that contribute most to model uncertainty. Including an uncertainty and sensitivity analysis in modeling efforts as part of the decision-making framework will result in better-informed, more robust
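The variance-based GSA described above can be illustrated with a minimal pick-freeze (Saltelli-style) estimator of first-order Sobol indices. This is an illustrative sketch, not the authors' code: the two-input `hsi_model` and its weights are invented stand-ins for a real HSI model.

```python
import numpy as np

rng = np.random.default_rng(0)

def hsi_model(x):
    # Hypothetical toy HSI: salinity weighted 3x more heavily than depth.
    salinity, depth = x[:, 0], x[:, 1]
    return 3.0 * salinity + 1.0 * depth

n = 200_000
A = rng.uniform(0.0, 1.0, size=(n, 2))   # base sample
B = rng.uniform(0.0, 1.0, size=(n, 2))   # independent resample
fA, fB = hsi_model(A), hsi_model(B)
var_y = np.var(np.concatenate([fA, fB]))

S = []
for i in range(2):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                  # "freeze" all inputs except x_i
    fABi = hsi_model(ABi)
    # Saltelli (2010) estimator of the first-order index S_i
    S.append(np.mean(fB * (fABi - fA)) / var_y)

print([round(s, 2) for s in S])
```

For this additive toy model the indices are analytic (0.9 and 0.1), which makes the estimator easy to check; a real HSI application would replace `hsi_model` with the habitat model driven by sampled hydrology inputs.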

  2. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast to accurately predict contrail formation over the contiguous United States (CONUS) is created by using meteorological data based on hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and from the Rapid Update Cycle (RUC) as well as GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group of models (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group of models (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies for both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
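A logistic contrail-occurrence model of the kind described can be sketched as follows. The predictors, coefficients, and data below are synthetic and purely illustrative, and the fitting is plain gradient descent rather than the authors' procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
rhi = rng.uniform(60, 140, n)        # relative humidity w.r.t. ice (%)
temp = rng.uniform(-70, -30, n)      # ambient temperature (deg C)
# Synthetic "truth": persistence is likelier in ice-supersaturated, cold
# air (this rule is illustrative, not a physical parameterization).
logit = 0.1 * (rhi - 100) - 0.15 * (temp + 50)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(float)

# Normalized design matrix with intercept, fit by batch gradient descent.
X = np.column_stack([np.ones(n), (rhi - 100) / 40, (temp + 50) / 20])
w = np.zeros(3)
for _ in range(3000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * X.T @ (p - y) / n

acc = np.mean((1 / (1 + np.exp(-X @ w)) > 0.5) == (y > 0.5))
print(f"training accuracy: {acc:.2f}")
```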

  3. Analysis and modelling of the energy requirements of batch processes; Analyse und Modellierung des Energiebedarfes in Batch-Prozessen

    Energy Technology Data Exchange (ETDEWEB)

    Bieler, P.S.

    2002-07-01

    This intermediate report for the Swiss Federal Office of Energy (SFOE) presents the results of a project aiming to model the energy consumption of multi-product, multi-purpose batch production plants. The utilities investigated were electricity, brine and steam. Both top-down and bottom-up approaches are described, whereby top-down was used for the buildings where the batch process apparatus was installed. Modelling showed that for batch-plants at the building level, the product mix can be too variable and the diversity of products and processes too great for simple modelling. Further results obtained by comparing six different production plants that could be modelled are discussed. The several models developed are described and their wider applicability is discussed. Also, the results of comparisons made between modelled and actual values are presented. Recommendations for further work are made.

  4. A growth curve model with fractional polynomials for analysing incomplete time-course data in microarray gene expression studies

    DEFF Research Database (Denmark)

    Tan, Qihua; Thomassen, Mads; Hjelmborg, Jacob V B

    2011-01-01

    Identifying the various gene expression response patterns is a challenging issue in expression microarray time-course experiments. Due to heterogeneity in the regulatory reaction among thousands of genes tested, it is impossible to manually characterize a parametric form for each time-course pattern in a gene by gene manner. We introduce a growth curve model with fractional polynomials to automatically capture the various time-dependent expression patterns and meanwhile efficiently handle missing values due to incomplete observations. For each gene, our procedure compares the performances among fractional polynomial models with power terms from a set of fixed values that offer a wide range of curve shapes and suggests a best fitting model. After a limited simulation study, the model has been applied to our human in vivo irritated epidermis data with missing observations to investigate time-dependent transcriptional responses to a chemical irritant. Our method was able to identify the various nonlinear time-course expression trajectories. The integration of growth curves with fractional polynomials provides a flexible way to model different time-course patterns together with model selection and significant gene identification strategies.

  5. SPECIFICS OF THE APPLICATIONS OF MULTIPLE REGRESSION MODEL IN THE ANALYSES OF THE EFFECTS OF GLOBAL FINANCIAL CRISES

    Directory of Open Access Journals (Sweden)

    Željko V. Račić

    2010-12-01

    Full Text Available This paper aims to present the specifics of the application of the multiple linear regression model. The economic (financial) crisis is analyzed in terms of gross domestic product as a function of the foreign trade balance on one hand and credit cards, i.e. indebtedness of the population on this basis, on the other hand, in the USA (from 1999 to 2008). We used an extended application model which shows how the analyst should run the whole development process of a regression model. This process began with simple statistical features and the application of regression procedures, and ended with residual analysis, intended to study the compatibility of the data and the model settings. This paper also analyzes the values of some standard statistics used in the selection of an appropriate regression model. Testing of the model was carried out using the PASW Statistics 17 program.
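The workflow the abstract outlines (fit a multiple regression, then inspect residuals) can be compressed into a short sketch. The variable names and coefficients below are invented stand-ins, not the paper's data, and ordinary least squares via `lstsq` stands in for the PASW analysis.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 120
trade_balance = rng.normal(0, 1, n)
consumer_credit = rng.normal(0, 1, n)
# Synthetic "GDP" generated from known coefficients plus noise.
gdp = 2.0 + 1.5 * trade_balance - 0.8 * consumer_credit + rng.normal(0, 0.3, n)

X = np.column_stack([np.ones(n), trade_balance, consumer_credit])
beta, *_ = np.linalg.lstsq(X, gdp, rcond=None)   # OLS estimate
residuals = gdp - X @ beta                        # for residual analysis
r2 = 1 - residuals.var() / gdp.var()

print("coefficients:", np.round(beta, 2))
print("R^2:", round(r2, 3))
```

In practice the residuals would then be checked for normality, homoscedasticity, and autocorrelation before the model is accepted, which is the "residual analysis" step the abstract emphasizes.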

  6. Robust and portable capacity computing method for many finite element analyses of a high-fidelity crustal structure model aimed for coseismic slip estimation

    Science.gov (United States)

    Agata, Ryoichiro; Ichimura, Tsuyoshi; Hirahara, Kazuro; Hyodo, Mamoru; Hori, Takane; Hori, Muneo

    2016-09-01

    Computation of many Green's functions (GFs) in finite element (FE) analyses of crustal deformation is an essential technique in inverse analyses of coseismic slip estimations. In particular, analysis based on a high-resolution FE model (high-fidelity model) is expected to contribute to the construction of a community standard FE model and benchmark solution. Here, we propose a naive but robust and portable capacity computing method to compute many GFs using a high-fidelity model, assuming that various types of PC clusters are used. The method is based on the master-worker model, implemented using the Message Passing Interface (MPI), to perform robust and efficient input/output operations. The method was applied to numerical experiments of coseismic slip estimation in the Tohoku region of Japan; comparison of the estimated results with those generated using lower-fidelity models revealed the benefits of using a high-fidelity FE model in coseismic slip distribution estimation. Additionally, the proposed method computes several hundred GFs more robustly and efficiently than methods without the master-worker model and MPI.
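The master-worker pattern the abstract credits for robustness can be sketched without MPI. In the sketch below a thread pool plays the role of the MPI worker ranks and `compute_gf` is a placeholder for one finite element Green's-function solve; the names are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def compute_gf(source_id):
    # Stand-in for an expensive FE analysis of unit slip at one source.
    return source_id, sum(i * i for i in range(1000)) + source_id

n_sources = 50
results = {}
with ThreadPoolExecutor(max_workers=4) as pool:       # the "workers"
    futures = [pool.submit(compute_gf, s) for s in range(n_sources)]
    for fut in as_completed(futures):                 # the "master" collects
        sid, gf = fut.result()
        results[sid] = gf                             # order-independent

print(f"computed {len(results)} Green's functions")
```

The key property, as in the paper's MPI version, is that the master hands out tasks and collects results in whatever order they finish, so a slow or retried task does not stall the rest of the batch.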

  7. Global sensitivity analysis of thermomechanical models in modelling of welding; Analyse de sensibilite globale de modeles thermomecanique de simulation numerique du soudage

    Energy Technology Data Exchange (ETDEWEB)

    Petelet, M

    2008-07-01

    The current approach of most welding modellers is to content themselves with the available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range. Proceeding this way neglects the influence of the uncertainty of the input data on the result given by the computer code. In this case, how can the credibility of the prediction be assessed? This thesis is a step towards implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most sensitive in a numerical welding simulation and in which temperature range. Using this methodology required some developments to sample and explore the input space covering the welding of different steel materials. Finally, the input data were divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation, reducing the input space to only the important variables. The sensitivity analysis has provided answers to what is probably one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)

  8. Global sensitivity analysis of thermo-mechanical models in numerical weld modelling; Analyse de sensibilite globale de modeles thermomecaniques de simulation numerique du soudage

    Energy Technology Data Exchange (ETDEWEB)

    Petelet, M

    2007-10-15

    The current approach of most welding modellers is to content themselves with the available material data and to choose a mechanical model that seems appropriate. Among the inputs, those controlling the material properties are one of the key problems of welding simulation: material data are never characterized over a sufficiently wide temperature range! Proceeding this way neglects the influence of the uncertainty of the input data on the result given by the computer code. In this case, how can the credibility of the prediction be assessed? This thesis is a step towards implementing an innovative approach in welding simulation in order to answer this question, with an illustration on some concrete welding cases. Global sensitivity analysis is chosen to determine which material properties are the most sensitive in a numerical welding simulation and in which temperature range. Using this methodology required some developments to sample and explore the input space covering the welding of different steel materials. Finally, the input data were divided into two groups according to their influence on the output of the model (residual stress or distortion). In this work, the complete methodology of global sensitivity analysis has been successfully applied to welding simulation, reducing the input space to only the important variables. The sensitivity analysis has provided answers to what is probably one of the most frequently asked questions regarding welding simulation: for a given material, which properties must be measured with good accuracy and which ones can simply be extrapolated or taken from a similar material? (author)

  9. Hierarchical Linear Modeling to Explore the Influence of Satisfaction with Public Facilities on Housing Prices

    OpenAIRE

    Chung-Chang Lee

    2009-01-01

    This paper uses hierarchical linear modeling (HLM) to explore the influence of satisfaction with public facilities, at both the individual residential and overall (regional) levels, on housing prices. The empirical results indicate that the average housing prices between local cities and counties exhibit significant variance. At the macro level, the explanatory power of the variable "convenience of life" on the average housing prices of all counties and cities reaches the 5% significance level...

  10. Transport of nutrients from land to sea: Global modeling approaches and uncertainty analyses (Utrecht Studies in Earth Sciences 058)

    NARCIS (Netherlands)

    Beusen, A.H.W.

    2014-01-01

    This thesis presents four examples of global models developed as part of the Integrated Model to Assess the Global Environment (IMAGE). They describe different components of global biogeochemical cycles of the nutrients nitrogen (N), phosphorus (P) and silicon (Si), with a focus on approaches to ana

  11. QTL analyses on genotype-specific component traits in a crop simulation model for capsicum annuum L.

    NARCIS (Netherlands)

    Wubs, A.M.; Heuvelink, E.; Dieleman, J.A.; Magan, J.J.; Palloix, A.; Eeuwijk, van F.A.

    2012-01-01

    Abstract: QTL for a complex trait like yield tend to be unstable across environments and show QTL by environment interaction. Direct improvement of complex traits by selecting on QTL is therefore difficult. For improvement of complex traits, crop growth models can be useful, as such models can disse

  12. Studies and analyses of the space shuttle main engine: High-pressure oxidizer turbopump failure information propagation model

    Science.gov (United States)

    Glover, R. C.; Rudy, S. W.; Tischer, A. E.

    1987-01-01

    The high-pressure oxidizer turbopump (HPOTP) failure information propagation model (FIPM) is presented. The text includes a brief discussion of the FIPM methodology and the various elements which comprise a model. Specific details of the HPOTP FIPM are described. Listings of all the HPOTP data records are included as appendices.

  13. A CFBPN Artificial Neural Network Model for Educational Qualitative Data Analyses: Example of Students' Attitudes Based on Kellerts' Typologies

    Science.gov (United States)

    Yorek, Nurettin; Ugulu, Ilker

    2015-01-01

    In this study, artificial neural networks are suggested as a model that can be "trained" to yield qualitative results out of a huge amount of categorical data. It can be said that this is a new approach applied in educational qualitative data analysis. In this direction, a cascade-forward back-propagation neural network (CFBPN) model was…

  14. APT Blanket Safety Analysis: Preliminary Analyses of Downflow Through a Lateral Row 1 Blanket Model Under Near RHR Conditions

    Energy Technology Data Exchange (ETDEWEB)

    Hamm, L.L.

    1998-10-07

    To address a concern about a potential maldistribution of coolant flow through an APT blanket module under low-flow, near-RHR conditions, a scoping study of downflow mixed convection in parallel channels was conducted. Buoyancy will adversely affect the flow distribution in module bins with downflow and non-uniform power distributions. The study consists of two parts: a simple analytical model of flow in a two-channel network, and a lumped eleven-channel FLOWTRAN-TF model of a front lateral Row-1 blanket module bin. Results from both models indicate that the concern about coolant flow in a vertical module being diverted away from high-power regions by buoyancy is warranted. The FLOWTRAN-TF model predicted upflow (i.e., a flow reversal) through several of the high-power channels under some low-flow conditions. The transition from the regime with downflow in all channels to a regime with upflow in some channels was abrupt.

  15. Analyses of freshwater stress with a coupled ground and surface water model in the Pra Basin, Ghana

    Science.gov (United States)

    Owusu, George; Owusu, Alex B.; Amankwaa, Ebenezer Forkuo; Eshun, Fatima

    2015-04-01

    The optimal management of water resources requires that the collected hydrogeological, meteorological, and spatial data be simulated and analyzed with appropriate models. In this study, a catchment-scale distributed hydrological modeling approach is applied to simulate water stress for the years 2000 and 2050 in the data-scarce Pra Basin, Ghana. The model is divided into three parts: the first computes surface and groundwater availability as well as shallow and deep groundwater residence times using the POLFLOW model; the second extends the POLFLOW model with a water demand (domestic, industrial and agricultural) model; and the third models water stress indices (the ratio of water demand to water availability) for every part of the basin. On water availability, the model estimated the long-term annual Pra River discharge at the outflow point of the basin, Deboase, to be 198 m3/s, against a long-term average measurement of 197 m3/s. Moreover, the relationship between simulated and measured discharge at 9 substations in the basin scored a Nash-Sutcliffe model efficiency coefficient of 0.98, indicating that the model estimates agree with the long-term measured discharge. The estimated total water demand increases significantly, from 959,049,096 m3/year in 2000 to 3,749,559,019 m3/year in 2050 (p < 0.05). The number of districts experiencing water stress increases significantly (p = 0.00044), from 8 in 2000 to 21 out of 35 by the year 2050. This study will, among other things, help stakeholders in water resources management to identify and manage water stress areas in the basin.
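Two of the quantitative ingredients mentioned here are simple enough to sketch directly: the Nash-Sutcliffe efficiency used to validate simulated discharge, and a stress index formed from the demand-to-availability ratio. Apart from the year-2000 demand total quoted in the abstract, the sample numbers below (discharges, availability, stress threshold) are hypothetical.

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """NSE = 1 - SSE / variance-of-observations; 1.0 is a perfect fit."""
    observed, simulated = np.asarray(observed), np.asarray(simulated)
    return 1 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2)

def water_stress_index(demand_m3, availability_m3):
    # Ratios above roughly 0.4 are often labelled "water stressed";
    # the threshold the paper actually used is not stated here.
    return demand_m3 / availability_m3

obs = [197.0, 180.0, 210.0, 190.0]   # hypothetical gauged discharges (m3/s)
sim = [198.0, 182.0, 205.0, 193.0]   # hypothetical simulated discharges
print(round(nash_sutcliffe(obs, sim), 3))
# Demand figure from the abstract (2000); availability is invented.
print(round(water_stress_index(959_049_096, 6_244_000_000), 3))
```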

  16. [Selection of a statistical model for evaluation of the reliability of the results of toxicological analyses. I. Discussion on selected statistical models for evaluation of the systems of control of the results of toxicological analyses].

    Science.gov (United States)

    Antczak, K; Wilczyńska, U

    1980-01-01

    Two statistical models for the evaluation of toxicological study results are presented. Model I, after R. Hoschek and H. J. Schittke (2), involves: 1. elimination of values deviating from the bulk of the results, by Grubbs' method (2); 2. analysis of the differences between the results obtained by the participants and the tentatively assumed value; 3. evaluation of significant differences between the reference value and the average value for a given series of measurements; 4. thorough evaluation of laboratories based on the evaluation coefficient fx. Model II, after Keppler et al., assumes the median as the criterion for evaluating the results. Individual evaluation of laboratories was performed on the basis of: 1. an adjusted t-test; 2. a linear regression test.
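Step 1 of Model I, Grubbs' outlier elimination, reduces to computing the Grubbs statistic and comparing it with a tabulated critical value for the sample size and significance level. A sketch with hypothetical inter-laboratory round-robin data (the concentrations below are invented):

```python
import numpy as np

def grubbs_statistic(values):
    """Return (G, index) for the single most deviant value.

    G = max |x_i - mean| / s, with s the sample standard deviation.
    Deciding whether to reject the value still requires the tabulated
    critical value for n and alpha, which is not reproduced here.
    """
    x = np.asarray(values, dtype=float)
    s = x.std(ddof=1)
    dev = np.abs(x - x.mean())
    idx = int(np.argmax(dev))
    return dev[idx] / s, idx

# Hypothetical lead concentrations reported by eight laboratories:
results = [48.2, 49.1, 47.8, 48.5, 49.0, 48.3, 55.9, 48.7]
G, suspect = grubbs_statistic(results)
print(f"G = {G:.2f}, suspect value = {results[suspect]}")
```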

  17. Long-duration magnetic clouds: a comparison of analyses using torus- and cylinder-shaped flux rope models

    Directory of Open Access Journals (Sweden)

    K. Marubashi

    2007-11-01

    Full Text Available We identified 17 magnetic clouds (MCs) with durations longer than 30 h, surveying the solar wind data obtained by the WIND and ACE spacecraft during the 10 years from 1995 through 2004. The magnetic field structures of these 17 MCs were then analyzed by least-squares fitting to force-free flux rope models. The analysis was made with both the cylinder and torus models when possible, and the results from the two models are compared. The torus model was used in order to approximate the curved portion of the MCs near the flanks of the MC loops. As a result, we classified the 17 MCs into 4 groups: (1) 5 MC events exhibiting magnetic field rotations through angles substantially larger than 180°, which can be interpreted only by the torus model; (2) 3 other MC events that can also be interpreted only by the torus model, though the rotation angles of the magnetic fields are less than 180°; (3) 3 MC events for which similar geometries are obtained from both the torus and cylinder models; and (4) 6 MC events for which the resultant geometries obtained from the two models are substantially different from each other, even though the observed magnetic field variations can be interpreted by either the torus model or the cylinder model. It is concluded that the MC events in the first and second groups correspond to cases where the spacecraft traversed the MCs near the flanks of the MC loops, the difference between the two groups being attributed to the difference in distance between the torus axis and the spacecraft trajectory. The MC events in the third group are interpreted as cases where the spacecraft traversed near the apexes of the MC loops. For the MC events in the fourth group, the real geometry cannot be determined from the model fitting technique alone. Though an attempt was made to determine which model is more plausible for each of the MCs in this group by comparing the characteristics of associated bidirectional electron

  18. Theoretical analyses and numerical experiments of variational assimilation for one-dimensional ocean temperature model with techniques in inverse problems

    Institute of Scientific and Technical Information of China (English)

    HUANG Sixun; HAN Wei; WU Rongsheng

    2004-01-01

    In the present work, the data assimilation problem in meteorology and physical oceanography is re-examined using variational optimal control approaches in combination with regularization techniques from inverse problem theory. Here the estimations of the initial condition, boundary condition and model parameters are performed simultaneously in the framework of variational data assimilation. To overcome the difficulty of ill-posedness, especially for the model parameters distributed in space and time, an additional term is added to the cost functional as a stabilizing functional. Numerical experiments show that even with noisy observations the initial conditions and model parameters are recovered to an acceptable degree of accuracy.
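The stabilizing term described above is, in its simplest linear form, Tikhonov regularization: minimize ||Ax - b||^2 + lam*||x||^2, solved by (A^T A + lam*I) x = A^T b. The toy system below is not drawn from the paper; it only shows how the extra term keeps the estimate stable when the operator is badly conditioned and the observations are noisy.

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.array([[1.0, 1.0], [1.0, 1.000001]])   # nearly singular operator
x_true = np.array([1.0, 2.0])
b = A @ x_true + 1e-4 * rng.normal(size=2)    # noisy "observations"

lam = 1e-3                                    # regularization weight
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ b)
print("regularized estimate:", np.round(x_reg, 2))
```

Without the `lam * np.eye(2)` term the near-singularity of A would amplify the 1e-4 noise into a wildly wrong solution; with it, the estimate stays bounded and consistent with the data (x1 + x2 near 3), which is exactly the role the stabilizing functional plays in the variational setting.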

  19. LUMPED: a Visual Basic code of lumped-parameter models for mean residence time analyses of groundwater systems

    Science.gov (United States)

    Ozyurt, N. N.; Bayari, C. S.

    2003-02-01

    A Microsoft Visual Basic 6.0 (Microsoft Corporation, 1987-1998) code of 15 lumped-parameter models is presented for the analysis of mean residence time in aquifers. Groundwater flow systems obeying plug and exponential flow models, and their combinations in parallel or serial connection, can be simulated by these steady-state models, which may include complications such as bypass flow and dead volume. Each model accepts tritium, krypton-85, chlorofluorocarbons (CFC-11, CFC-12 and CFC-113) and sulfur hexafluoride (SF6) as environmental tracers. Retardation of gas tracers in the unsaturated zone and their degradation in the flow system may also be accounted for. The executable code has been tested to run under Windows 95 or higher operating systems. Comparisons with other similar codes are discussed and the limitations are indicated.
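One of the lumped-parameter building blocks such codes implement is the exponential (well-mixed) model, whose residence-time distribution is g(tau) = exp(-tau/T)/T for mean residence time T; the output tracer concentration is the convolution of the input history with g. The LUMPED code itself is Visual Basic, so the sketch below is an illustrative re-implementation of the idea, not its API.

```python
import numpy as np

def exponential_model_output(c_in, dt, T):
    """Convolve an input tracer history with the exponential-model RTD."""
    t = np.arange(len(c_in)) * dt
    g = np.exp(-t / T) / T            # residence-time distribution
    g /= g.sum() * dt                 # normalize the discrete pdf
    return np.convolve(c_in, g)[: len(c_in)] * dt

dt, T = 0.5, 10.0                     # time step and mean residence time (yr)
c_in = np.ones(400)                   # constant tracer input for 200 yr
c_out = exponential_model_output(c_in, dt, T)
print(round(float(c_out[-1]), 2))     # output approaches the input level
```

Fitting T so that modelled `c_out` matches measured tritium or CFC concentrations is, in essence, how such codes estimate mean residence time.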

  20. ANALYSES ON NONLINEAR COUPLING OF MAGNETO-THERMO-ELASTICITY OF FERROMAGNETIC THIN SHELL-Ⅰ: GENERALIZED VARIATIONAL THEORETICAL MODELING

    Institute of Scientific and Technical Information of China (English)

    Xingzhe Wang; Xiaojing Zheng

    2009-01-01

    Based on the generalized variational principle of magneto-thermo-elasticity of the ferromagnetic elastic medium, a nonlinear coupled theoretical model for a ferromagnetic thin shell is developed. All governing equations and boundary conditions for the ferromagnetic shell are obtained from variational manipulations on the magnetic scalar potential, temperature and elastic displacement related to the total energy functional. The multi-field couplings and geometrical nonlinearity of the ferromagnetic thin shell are taken into account in the modeling. The general model can be reduced to existing models of the magneto-elasticity and thermo-elasticity of a ferromagnetic shell and the magneto-thermo-elasticity of a ferromagnetic plate, which are coincident with those in the literature.

  1. Towards Three Dimensional Analyses for Applying E-Learning Evaluation Model: The Case of E-Learning in Helwan University

    Directory of Open Access Journals (Sweden)

    Ayman El Sayed Khedr

    2012-07-01

    Full Text Available Many studies have discussed "e-learning in higher education" and what it can offer these institutions: improved performance, increased access, convenience and flexibility for learners, and development of the skills and competencies needed in the 21st century, in particular ensuring that learners have the digital literacy skills required in their discipline, profession or career. In this case study we adopt a three-dimensional analysis (student, staff, university) to evaluate the impact of e-learning implementation in Egyptian universities. The case study takes Helwan University as a sample of the research community to explain how e-learning will improve, affect and maximize the efficiency of the educational process within the university, through the three dimensions discussed in the case study. Previous studies have found a positive relation between the adoption of e-learning environments and the education process.

  2. Multi-Scale Computational Analyses of JP-8 Fuel Droplets and Vapors in Human Respiratory Airway Models

    Science.gov (United States)

    2007-10-31

    (Abstract garbled by extraction; recoverable fragments only.) For the tracheobronchial airway models, transient 3-D as well as equivalent steady-state solutions have been obtained for the transport and deposition of JP-8 fuel droplets and vapors. Grant number: FA9550-04-1-0422.

  3. Modeling seasonal leptospirosis transmission and its association with rainfall and temperature in Thailand using time-series and ARIMAX analyses

    Institute of Scientific and Technical Information of China (English)

    Sudarat Chadsuthi; Charin Modchang; Yongwimon Lenbury; Sopon Iamsirithaworn; Wannapong Triampo

    2012-01-01

    ABSTRACT Objective: To study the number of leptospirosis cases in relation to the seasonal pattern and its association with climate factors. Methods: Time series analysis was used to study the time variations in the number of leptospirosis cases. The Autoregressive Integrated Moving Average (ARIMA) model was used in data curve fitting and in predicting future leptospirosis cases. Results: We found that the amount of rainfall was correlated to leptospirosis cases in both regions of interest, namely the northern and northeastern regions of Thailand, while temperature played a role in the northeastern region only. The use of the multivariate ARIMA (ARIMAX) model showed that factoring in rainfall (with an 8-month lag) yields the best model for the northern region, while the model which factors in rainfall (with a 10-month lag) and temperature (with an 8-month lag) was best for the northeastern region. Conclusions: The models are able to show the trend in leptospirosis cases and closely fit the recorded data in both regions. The models can also be used to predict the next seasonal peak quite accurately.
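The lagged-exogenous-regressor idea behind the ARIMAX models can be sketched in reduced form: regress monthly case counts on rainfall at the 8-month lag reported for the northern region. The data below are synthetic, the coefficient is invented, and the AR/MA terms of a full ARIMAX fit are deliberately omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
months = 120
# Synthetic monsoon-like rainfall with an annual cycle plus noise.
rainfall = (100 + 50 * np.sin(2 * np.pi * np.arange(months) / 12)
            + rng.normal(0, 10, months))
lag = 8
# Synthetic cases at month t driven by rainfall at month t - lag.
cases = 20 + 0.3 * rainfall[:-lag] + rng.normal(0, 3, months - lag)

# Align cases with the lagged rainfall series and fit by least squares.
X = np.column_stack([np.ones(months - lag), rainfall[:-lag]])
beta, *_ = np.linalg.lstsq(X, cases, rcond=None)
print("rainfall (lag-8) coefficient:", round(float(beta[1]), 2))
```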

  4. A Growth Curve Model with Fractional Polynomials for Analysing Incomplete Time-Course Data in Microarray Gene Expression Studies

    Directory of Open Access Journals (Sweden)

    Qihua Tan

    2011-01-01

    Full Text Available Identifying the various gene expression response patterns is a challenging issue in expression microarray time-course experiments. Due to heterogeneity in the regulatory reaction among thousands of genes tested, it is impossible to manually characterize a parametric form for each of the time-course pattern in a gene by gene manner. We introduce a growth curve model with fractional polynomials to automatically capture the various time-dependent expression patterns and meanwhile efficiently handle missing values due to incomplete observations. For each gene, our procedure compares the performances among fractional polynomial models with power terms from a set of fixed values that offer a wide range of curve shapes and suggests a best fitting model. After a limited simulation study, the model has been applied to our human in vivo irritated epidermis data with missing observations to investigate time-dependent transcriptional responses to a chemical irritant. Our method was able to identify the various nonlinear time-course expression trajectories. The integration of growth curves with fractional polynomials provides a flexible way to model different time-course patterns together with model selection and significant gene identification strategies that can be applied in microarray-based time-course gene expression experiments with missing observations.
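The model-selection step this abstract describes, comparing fractional polynomial fits over a fixed set of powers and keeping the best, can be sketched as follows. The power set is the usual fractional-polynomial convention (with p = 0 taken as log x), the data are synthetic, and residual sum of squares stands in for whatever selection criterion the authors used.

```python
import numpy as np

powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]

def fp_design(x, p):
    # Convention: power 0 denotes the logarithm.
    return np.log(x) if p == 0 else x ** p

rng = np.random.default_rng(5)
x = np.linspace(1, 8, 40)                               # time points
y = 2 + 1.5 * np.sqrt(x) + rng.normal(0, 0.05, x.size)  # true power = 0.5

best_p, best_rss = None, np.inf
for p in powers:
    X = np.column_stack([np.ones(x.size), fp_design(x, p)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ beta) ** 2))
    if rss < best_rss:                                  # keep best fit
        best_p, best_rss = p, rss

print("selected power:", best_p)
```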

  5. Landscaping analyses of the ROC predictions of discrete-slots and signal-detection models of visual working memory.

    Science.gov (United States)

    Donkin, Chris; Tran, Sophia Chi; Nosofsky, Robert

    2014-10-01

    A fundamental issue concerning visual working memory is whether its capacity limits are better characterized in terms of a limited number of discrete slots (DSs) or a limited amount of a shared continuous resource. Rouder et al. (2008) found that a mixed-attention, fixed-capacity, DS model provided the best explanation of behavior in a change detection task, outperforming alternative continuous signal detection theory (SDT) models. Here, we extend their analysis in two ways: first, with experiments aimed at better distinguishing between the predictions of the DS and SDT models, and second, using a model-based analysis technique called landscaping, in which the functional-form complexity of the models is taken into account. We find that the balance of evidence supports a DS account of behavior in change detection tasks but that the SDT model is best when the visual displays always consist of the same number of items. In our General Discussion section, we outline, but ultimately reject, a number of potential explanations for the observed pattern of results. We finish by describing future research that is needed to pinpoint the basis for this observed pattern of results.
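The fixed-capacity DS idea can be made concrete with the standard Cowan-style change-detection equations: with k slots and N items, the probed item is stored with probability d = min(k/N, 1), and unstored items are guessed. This single-k, uniform-guessing parameterization is a textbook simplification, not necessarily the exact mixed-attention variant fitted in the paper.

```python
def slots_model(k, set_size, guess_rate):
    """Hit and false-alarm rates for a fixed-capacity slots model."""
    d = min(k / set_size, 1.0)          # P(probed item occupies a slot)
    hit = d + (1 - d) * guess_rate      # detect the change, else guess
    false_alarm = (1 - d) * guess_rate  # no change: only guesses err
    return hit, false_alarm

for n in (2, 5, 8):
    h, fa = slots_model(k=3, set_size=n, guess_rate=0.5)
    print(f"N={n}: hit={h:.2f}, FA={fa:.2f}")
```

The signature DS prediction visible here is that performance is perfect whenever set size is within capacity and degrades in a specific linear way beyond it, which is what the ROC landscaping analyses pit against the graded predictions of SDT.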

  6. A systematic review of care delivery models and economic analyses in lymphedema: health policy impact (2004-2011).

    Science.gov (United States)

    Stout, N L; Weiss, R; Feldman, J L; Stewart, B R; Armer, J M; Cormier, J N; Shih, Y-C T

    2013-03-01

    A project of the American Lymphedema Framework Project (ALFP), this review seeks to examine the policy and economic impact of caring for patients with lymphedema, a common side effect of cancer treatment. This review is the first of its kind undertaken to investigate, coordinate, and streamline lymphedema policy initiatives in the United States with potential applicability worldwide. As part of a large-scale literature review aiming to systematically evaluate the level of evidence of contemporary peer-reviewed lymphedema literature (2004 to 2011), publications on care delivery models, health policy, and economic impact were retrieved, summarized, and evaluated by a team of investigators and clinical experts. The review substantiates lymphedema education models and clinical models implemented at the community, health care provider, and individual levels that improve delivery of care. The review exposes the lack of economic analysis related to lymphedema. Despite a dearth of evidence, efforts towards policy initiatives at the federal and state level are underway. These initiatives and the evidence to support them are examined, and recommendations for translating these findings into clinical practice are made. Medical and community-based disease management interventions, taking a public health approach, are effective delivery models for lymphedema care and demonstrate great potential to improve cancer survivorship care. Efforts to create policy at the federal, state, and local level should target implementation of these models. More research is needed to identify costs associated with the treatment of lymphedema and to model the cost outlays and potential cost savings associated with comprehensive management of chronic lymphedema.

  7. Analysis and modelling of the European fuels market; Analyse et modelisation des prix des produits petroliers combustibles en Europe

    Energy Technology Data Exchange (ETDEWEB)

    Simon, V

    1999-04-01

    The research focuses on European fuel market prices, referring to the Rotterdam and Genoa spot markets as well as the German, Italian and French domestic markets. The thesis also tries to explain the impact of the London IPE futures market on spot prices. Mainstream research has demonstrated that co-integration is the best theoretical approach to investigate long-run equilibrium relations. Particular attention is devoted to structural change in the econometric modelling of these equilibria. A thorough analysis of the main European petroleum product markets permits a better model specification for each of these markets. Further, we test whether any evidence of relations between spot and domestic prices can be confirmed. Finally, alternative scenarios are depicted to forecast prices in the petroleum product markets. The objective is to observe the model's reaction to changes in crude oil prices. (author)
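    The long-run equilibrium relations referred to in this record are typically estimated as the first (regression) step of the Engle-Granger co-integration procedure. A minimal sketch on simulated prices — the series, coefficients, and seed below are invented for illustration, not taken from the thesis:

```python
import random

random.seed(42)

# Simulate a common stochastic trend (e.g. crude oil) and a product price tied
# to it by a long-run relation: spot_t = 2.0 + 0.8 * crude_t + noise.
n = 2000
crude, spot = [], []
level = 50.0
for _ in range(n):
    level += random.gauss(0, 1)          # random-walk (integrated) trend
    crude.append(level)
    spot.append(2.0 + 0.8 * level + random.gauss(0, 1))

# Engle-Granger step 1: OLS of spot on crude estimates the long-run relation;
# the residuals should then be tested for stationarity (step 2, omitted here).
mx, my = sum(crude) / n, sum(spot) / n
beta = (sum((x - mx) * (y - my) for x, y in zip(crude, spot))
        / sum((x - mx) ** 2 for x in crude))
alpha = my - beta * mx
resid = [y - alpha - beta * x for x, y in zip(crude, spot)]

print(round(beta, 3))                    # close to the true 0.8
```

    Because the regressor is integrated, the OLS estimate of the long-run coefficient converges quickly (superconsistency), which is why the simple regression recovers 0.8 so well.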

  8. Methods and theory in bone modeling drift: comparing spatial analyses of primary bone distributions in the human humerus.

    Science.gov (United States)

    Maggiano, Corey M; Maggiano, Isabel S; Tiesler, Vera G; Chi-Keb, Julio R; Stout, Sam D

    2016-01-01

    This study compares two novel methods quantifying bone shaft tissue distributions, and relates observations on human humeral growth patterns for applications in anthropological and anatomical research. Microstructural variation in compact bone occurs due to developmental and mechanically adaptive circumstances that are 'recorded' by forming bone and are important for interpretations of growth, health, physical activity, adaptation, and identity in the past and present. Those interpretations hinge on a detailed understanding of the modeling process by which bones achieve their diametric shape, diaphyseal curvature, and general position relative to other elements. Bone modeling is a complex aspect of growth, potentially causing the shaft to drift transversely through formation and resorption on opposing cortices. Unfortunately, the specifics of modeling drift are largely unknown for most skeletal elements. Moreover, bone modeling has seen little quantitative methodological development compared with secondary bone processes, such as intracortical remodeling. The techniques proposed here, starburst point-count and 45° cross-polarization hand-drawn histomorphometry, permit the statistical and populational analysis of human primary tissue distributions and provide similar results despite being suitable for different applications. This analysis of a pooled archaeological and modern skeletal sample confirms the importance of extreme asymmetry in bone modeling as a major determinant of microstructural variation in diaphyses. Specifically, humeral drift is posteromedial in the human humerus, accompanied by a significant rotational trend. In general, results encourage the usage of endocortical primary bone distributions as an indicator and summary of bone modeling drift, enabling quantitative analysis by direction and proportion in other elements and populations.

  9. Qualitative and quantitative analyses of the echolocation strategies of bats on the basis of mathematical modelling and laboratory experiments.

    Directory of Open Access Journals (Sweden)

    Ikkyu Aihara

    Full Text Available Prey pursuit by an echolocating bat was studied theoretically and experimentally. First, a mathematical model was proposed to describe the flight dynamics of a bat and a single prey. In this model, the flight angle of the bat was affected by two angles related to the flight path of the single moving prey, that is, the angle from the bat to the prey and the flight angle of the prey. Numerical simulation showed that the success rate of prey capture was high when the bat mainly used the angle to the prey to minimize the distance to the prey, and also used the flight angle of the prey to minimize the difference in flight directions of itself and the prey. Second, parameters in the model were estimated according to experimental data obtained from video recordings taken while a Japanese horseshoe bat (Rhinolophus ferrumequinum nippon) pursued a moving moth (Goniocraspidum pryeri) in a flight chamber. One of the estimated parameter values, which represents the ratio in the use of the two angles, was consistent with the optimal value of the numerical simulation. This agreement between the numerical simulation and parameter estimation suggests that a bat chooses an effective flight path for successful prey capture by using the two angles. Finally, the mathematical model was extended to include a bat and multiple prey. Parameter estimation of the extended model based on laboratory experiments revealed the existence of the bat's dynamical attention towards multiple prey, that is, simultaneous pursuit of several prey and selective pursuit of respective prey. Thus, our mathematical model contributes not only to quantitative analysis of effective foraging, but also to qualitative evaluation of a bat's dynamical flight strategy during multiple prey pursuit.
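    The steering rule described in this record — the bat's heading driven by a weighted mix of the bearing to the prey and the prey's own flight angle — can be sketched as a toy 2-D pursuit simulation. The weights, speeds, and turning gain below are invented, not the paper's estimated parameters:

```python
import math

def simulate(w_bearing=0.8, w_align=0.2, dt=0.05, steps=400):
    # Bat steering: weighted mix of (bearing to prey - own heading) and
    # (prey flight angle - own heading); all gains here are hypothetical.
    bx, by, bphi, bv = 0.0, 0.0, 0.0, 5.0            # bat position, heading, speed
    px, py, pphi, pv = 10.0, 5.0, math.pi / 2, 2.0   # prey flies straight
    dists = []
    for _ in range(steps):
        bearing = math.atan2(py - by, px - bx)       # angle from bat to prey
        err_home = math.atan2(math.sin(bearing - bphi), math.cos(bearing - bphi))
        err_align = math.atan2(math.sin(pphi - bphi), math.cos(pphi - bphi))
        bphi += (w_bearing * err_home + w_align * err_align) * dt * 10.0
        bx += bv * math.cos(bphi) * dt
        by += bv * math.sin(bphi) * dt
        px += pv * math.cos(pphi) * dt
        py += pv * math.sin(pphi) * dt
        dists.append(math.hypot(px - bx, py - by))
    return dists

d = simulate()
print(round(d[0], 2), round(min(d), 2))
```

    With the homing weight dominant and a speed advantage, the bat-prey distance shrinks toward capture, mirroring the high capture rate the simulation study reports for that weighting.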

  10. Greenhouse gas network design using backward Lagrangian particle dispersion modelling – Part 2: Sensitivity analyses and South African test case

    CSIR Research Space (South Africa)

    Nickless, A

    2014-05-01

    Full Text Available et al., 1999; Rödenbeck et al., 2003; Chevallier et al., 2010). This method relies on precision measurements of atmospheric CO2 to refine the prior estimates of the fluxes. Using this theory, an optimal network of new measurement sites... of the South African network design, these variables are produced by the CSIRO Conformal-Cubic Atmospheric Model (CCAM), a global circulation model. CCAM is a two time-level semi-implicit hydrostatic primitive equation model developed by McGregor (1987) and later...

  11. Comparative analyses of thermodynamic properties assessments, performed by geometric models: Application to the Ni-Bi-Zn system

    Directory of Open Access Journals (Sweden)

    Gandova V.

    2013-01-01

    Full Text Available The thermochemical properties of metals and alloys are essential for chemists to invent and improve metallurgical and materials design processes. However, the properties of multicomponent systems are still scarcely known due to experimental difficulties and the large number of related systems. Thus, the modelling of some thermodynamic properties would be advantageous when experimental data are missing. Considering these facts, geometric models were applied to estimate some thermodynamic properties of the liquid phase of the Ni-Bi-Zn system. The calculations have been performed over a wide temperature range (1000-2000 K). Ternary interaction parameters for the liquid phase, allowing calculation of the molar excess Gibbs energy, have been determined.

  12. A statistical human resources costing and accounting model for analysing the economic effects of an intervention at a workplace.

    Science.gov (United States)

    Landstad, Bodil J; Gelin, Gunnar; Malmquist, Claes; Vinberg, Stig

    2002-09-15

    The study had two primary aims. The first was to combine a human resources costing and accounting (HRCA) approach with a quantitative statistical approach to obtain an integrated model. The second was to apply this integrated model in a quasi-experimental study to investigate whether a preventive intervention affected sickness absence costs at the company level. The intervention studied contained occupational organizational measures, competence development, physical and psychosocial working environment measures, and individual and rehabilitation measures on both an individual and a group basis. The study used a quasi-experimental design with a non-randomized control group. Both groups involved cleaning jobs at predominantly female workplaces. The study plan involved carrying out before and after studies on both groups, and included only those who were at the same workplace during the whole of the study period. In the HRCA model used here, the cost of sickness absence is the net difference between the costs, in the form of the value of lost production and the administrative cost, and the benefits, in the form of lower labour costs. According to the HRCA model, the intervention counteracted a rise in sickness absence costs at the company level, giving an average net effect of 266.5 Euros per person (full-time working) during an 8-month period. Using an analogous statistical analysis on the whole of the material, the intervention counteracted a rise in sickness absence costs at the company level, giving an average net effect of 283.2 Euros. Using the statistical method it was possible to study the regression coefficients in sub-groups and calculate the p-values for these coefficients; in the younger group the intervention gave a calculated net contribution of 605.6 Euros with a p-value of 0.073, while the intervention net contribution in the older group had a very high p-value. Using the statistical model it was
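    The HRCA definition of sickness absence cost used in this study — production loss plus administrative cost, minus the benefit of saved labour costs — reduces to simple arithmetic. All figures below are invented for illustration; they are not the study's data:

```python
# Hypothetical HRCA-style accounting for one absence period (invented figures).
production_loss = 1800.0    # value of lost production, Euro
admin_cost = 250.0          # administrative handling cost, Euro
labour_cost_saving = 900.0  # wages not paid during the absence, Euro

# Net sickness absence cost = costs - benefits, per the HRCA model above.
net_absence_cost = production_loss + admin_cost - labour_cost_saving
print(net_absence_cost)
```

    Comparing this net figure before and after an intervention, per person, is what yields effects such as the 266.5 Euro average reported in the abstract.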

  13. Analyses on gravity variation before and after the Lijiang earthquake based on a finite rectangular dislocation model

    Institute of Scientific and Technical Information of China (English)

    燕乃玲; 李辉; 申重阳

    2003-01-01

    The methods to calculate the gravity variation due to crustal deformation were discussed, based on a model of dislocation on a finite rectangular plane. Taking the Lijiang MS=7.0 earthquake as an example, the principle for determining the fault parameters was described, and the results were given. Of particular interest were the characteristics of the gravity variations for different dislocation types. Comparing the calculated results with the practical measurements, it was found that the model could to some extent account for the observations, but it failed to explain the gravity variations farther away in space.

  14. Plastic bottle oscillator as an on-off-type oscillator: Experiments, modeling, and stability analyses of single and coupled systems

    Science.gov (United States)

    Kohira, Masahiro I.; Kitahata, Hiroyuki; Magome, Nobuyuki; Yoshikawa, Kenichi

    2012-02-01

    An oscillatory system called a plastic bottle oscillator is studied, in which the downflow of water and upflow of air alternate periodically in an upside-down plastic bottle containing water. It is demonstrated that a coupled two-bottle system exhibits in- and antiphase synchronization according to the nature of coupling. A simple ordinary differential equation is deduced to interpret the characteristics of a single oscillator. This model is also extended to coupled oscillators, and the model reproduces the essential features of the experimental observations.
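    The alternating downflow of water and upflow of air described in this record is an on-off (relaxation) oscillation. A generic hysteretic on-off oscillator can be sketched as follows; the state variable, thresholds, and rates are invented for illustration and are not the authors' actual ODE:

```python
def relax_oscillator(steps=20000, dt=0.001, rate_up=2.0, rate_down=3.0):
    x = 0.0        # e.g. a pressure-head difference across the bottle mouth
    state = 1      # 1: water flowing out (x rises), 0: air flowing in (x falls)
    switches = 0
    xs = []
    for _ in range(steps):
        x += (rate_up if state == 1 else -rate_down) * dt
        if state == 1 and x >= 1.0:      # upper threshold: flow reverses
            state, switches = 0, switches + 1
        elif state == 0 and x <= 0.0:    # lower threshold: flow reverses back
            state, switches = 1, switches + 1
        xs.append(x)
    return xs, switches

xs, n_switch = relax_oscillator()
print(n_switch)
```

    The two rates set the durations of the two phases, so the oscillation is generally asymmetric — the same on-off structure that makes coupled bottles lock into in- or antiphase synchronization depending on how they are coupled.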

  15. Effective modelling, analysis and fatigue assessment of welded structures; Effektive Modellbildung, Analyse und Bewertung fuer die rechnerische Lebensdaueranalyse geschweisster Strukturen

    Energy Technology Data Exchange (ETDEWEB)

    Rauch, R.; Schiele, S. [CADFEM GmbH, Stuttgart (Germany); Rother, K.

    2007-07-01

    Analysis of welded structures is a challenge for the analyst. Improvements in software and hardware enable analyses of full assemblies. Especially for welded structures, these possibilities offer significant benefits, leading to more detailed descriptions of the flux of forces and reducing the effort for the engineer. This paper covers methods for the modeling, structural analysis and fatigue assessment of welded structures using finite element analysis. A hierarchical concept is presented that localizes highly stressed regions using a global model and then applies a local notch stress analysis. (orig.)

  16. Determination of S17 from 7Be(d,n)8B reaction CDCC analyses based on three-body model

    CERN Document Server

    Ogata, K; Iseri, Y; Kamimura, M; Ogata, Kazuyuki; Yahiro, Masanobu; Iseri, Yasunori; Kamimura, Masayasu

    2003-01-01

    The astrophysical factor $S_{17}$ for the $^7$Be($p,\gamma$)$^8$B reaction is reliably extracted from the transfer reaction $^7$Be($d,n$)$^8$B at $E=7.5$ MeV with the asymptotic normalization coefficient method. The transfer reaction is accurately analyzed with CDCC based on the three-body model. This analysis is free from the uncertainties in the optical potentials that were crucial in previous DWBA analyses.

  17. Longitudinal Analyses of a Hierarchical Model of Peer Social Competence for Preschool Children: Structural Fidelity and External Correlates

    Science.gov (United States)

    Shin, Nana; Vaughn, Brian E.; Kim, Mina; Krzysik, Lisa; Bost, Kelly K.; McBride, Brent; Santos, Antonio J.; Peceguina, Ines; Coppola, Gabrielle

    2011-01-01

    Achieving consensus on the definition and measurement of social competence (SC) for preschool children has proven difficult in the developmental sciences. We tested a hierarchical model in which SC is assumed to be a second-order latent variable by using longitudinal data (N = 345). We also tested the degree to which peer SC at Time 1 predicted…

  18. Analyses of the global geopotential models and different sources of gravity field elements used to reductions of geodetic observations

    Science.gov (United States)

    Olszak, Tomasz; Jackiewicz, Małgorzata; Margański, Stanisław

    2013-04-01

    For the reduction of geodetic observations onto the geoid and ellipsoid (e.g. astronomical coordinates, deflections of the vertical, astronomical azimuths and linear measurements), knowledge of the gravity field parameters is necessary. In levelling networks it is likewise necessary to collect such information to calculate the normal (or orthometric) correction. The poster provides an assessment of the available gravity data sources for use in the reduction of the mentioned observations. Such sources include direct measurements, anomalies interpolated from existing gravity data sets, and values calculated from geopotential models. The study included field data, data from the Polish National Geological Institute (including anomalies used for interpolation), and data from the Earth Gravitational Model 2008 (EGM2008), both in full form and truncated to degree and order 360. In the case of the normal corrections, the mentioned sources are also analysed in comparison with Faye anomalies measured on selected benchmarks of levelling lines in various Polish regions. The paper shows the precision requirements that gravimetric data must meet according to the applicable technical instructions. It was found that for 90% of the territory of Poland it is possible to replace the measured data with data generated from the geopotential model while maintaining sufficient accuracy. For mountain areas, however, it is necessary to use the elements of the gravity field determined by direct measurements.
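    Model-derived gravity of the kind discussed here is usually referenced to normal gravity on an ellipsoid, given in closed form by Somigliana's formula. A sketch with GRS80 constants — the choice of ellipsoid and the sample latitude are illustrative assumptions, not taken from the poster:

```python
import math

# Somigliana closed-form normal gravity on the GRS80 ellipsoid (m/s^2).
GAMMA_E = 9.7803267715     # normal gravity at the equator
K = 0.001931851353         # Somigliana's constant
E2 = 0.00669438002290      # first eccentricity squared

def normal_gravity(lat_deg):
    s2 = math.sin(math.radians(lat_deg)) ** 2
    return GAMMA_E * (1.0 + K * s2) / math.sqrt(1.0 - E2 * s2)

print(round(normal_gravity(52.0), 7))   # a mid-latitude value, e.g. in Poland
```

    Normal gravity increases monotonically from equator to pole; anomalies such as the Faye anomaly mentioned above are then differences between observed (or model) gravity and this reference value.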

  19. Analysing the accuracy of pavement performance models in the short and long terms: GMDH and ANFIS methods

    NARCIS (Netherlands)

    Ziari, H.; Sobhani, J.; Ayoubinejad, J.; Hartmann, T.

    2016-01-01

    The accuracy of pavement performance prediction is a critical part of pavement management and directly influences maintenance and rehabilitation strategies. Many models with various specifications have been proposed by researchers and used by agencies. This study presents nine variables affecting pa

  20. Mind the gaps: a state-space model for analysing the dynamics of North Sea herring spawning components

    DEFF Research Database (Denmark)

    Payne, Mark

    2010-01-01

    , the sum of the fitted abundance indices across all components proves an excellent proxy for the biomass of the total stock, even though the model utilizes information at the individual-component level. The Orkney–Shetland component appears to have recovered faster from historic depletion events than...
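    State-space models of the kind this record describes handle survey gaps naturally: the prediction step carries the estimate across missing observations, and the update step is simply skipped. A minimal scalar Kalman filter sketch — the variances and the index series are invented, not herring data:

```python
# Scalar Kalman filter for a random-walk abundance index; missing surveys
# (None) are bridged by the prediction step alone -- "minding the gaps".
def kalman_filter(obs, q=0.05, r=0.2, x0=0.0, p0=1.0):
    x, p = x0, p0
    estimates = []
    for z in obs:
        p += q                      # predict: random-walk process noise
        if z is not None:           # update only when a survey value exists
            k = p / (p + r)         # Kalman gain
            x += k * (z - x)
            p *= (1 - k)
        estimates.append(x)
    return estimates

series = [1.0, 1.1, None, None, 1.4, 1.3, None, 1.5]
est = kalman_filter(series)
print([round(e, 3) for e in est])
```

    During a gap the state estimate stays at its last predicted value while its uncertainty grows, so the filter naturally down-weights the first observation after a long gap less than a naive interpolation would.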

  1. Let there be bioluminescence: development of a biophotonic imaging platform for in situ analyses of oral biofilms in animal models.

    Science.gov (United States)

    Merritt, Justin; Senpuku, Hidenobu; Kreth, Jens

    2016-01-01

    In the current study, we describe a novel biophotonic imaging-based reporter system that is particularly useful for the study of virulence in polymicrobial infections and interspecies interactions within animal models. A suite of luciferase enzymes was compared using three early colonizing species of the human oral flora (Streptococcus mutans, Streptococcus gordonii and Streptococcus sanguinis) to determine the utility of the different reporters for multiplexed imaging studies in vivo. Using the multiplex approach, we were able to track individual species within a dual-species oral infection model in mice with both temporal and spatial resolution. We also demonstrate how biophotonic imaging of multiplexed luciferase reporters could be adapted for real-time quantification of bacterial gene expression in situ. By creating an inducible dual-luciferase expressing reporter strain of S. mutans, we were able to exogenously control and measure expression of nlmAB (encoding the bacteriocin mutacin IV) within mice to assess its importance for the persistence ability of S. mutans in the oral cavity. The imaging system described in the current study circumvents many of the inherent limitations of current animal model systems, which should now make it feasible to test hypotheses that were previously impractical to model.

  2. The mental health care model in Brazil: analyses of the funding, governance processes, and mechanisms of assessment

    Science.gov (United States)

    Trapé, Thiago Lavras; Campos, Rosana Onocko

    2017-01-01

    OBJECTIVE This study aims to analyze the current status of the mental health care model of the Brazilian Unified Health System, according to its funding, governance processes, and mechanisms of assessment. METHODS We have carried out a documentary analysis of the ordinances, technical reports, conference reports, normative resolutions, and decrees from 2009 to 2014. RESULTS This is a time of consolidation of the psychosocial model, with expansion of the health care network and inversion of the funding for community services, with a strong emphasis on the area of crack cocaine and other drugs. Mental health is an underfunded area within the chronically underfunded Brazilian Unified Health System. The governance model constrains the progress of essential services, which creates the need for the incorporation of a process of regionalization of the management. The mechanisms of assessment are not incorporated into the health policy in the bureaucratic field. CONCLUSIONS There is a need to expand the global funding of the area of health, specifically mental health, which has been shown to be a successful policy. The current focus of the policy seems archaic in relation to the precepts of the psychosocial model. Mechanisms of assessment need to be expanded.

  3. Confocal microscopy-based three-dimensional cell-specific modeling for large deformation analyses in cellular mechanics.

    Science.gov (United States)

    Slomka, Noa; Gefen, Amit

    2010-06-18

    This study introduces a new confocal microscopy-based three-dimensional cell-specific finite element (FE) modeling methodology for simulating cellular mechanics experiments involving large cell deformations. Three-dimensional FE models of undifferentiated skeletal muscle cells were developed by scanning C2C12 myoblasts using a confocal microscope, and then building FE model geometries from the z-stack images. Strain magnitudes and distributions in two cells were studied when the cells were subjected to compression and stretching, which are used in pressure ulcer and deep tissue injury research to induce large cell deformations. Localized plasma membrane and nuclear surface area (NSA) stretches were observed for both the cell compression and stretching simulation configurations. It was found that in order to induce large tensile strains (>5%) in the plasma membrane and NSA, one needs to apply more than approximately 15% of global cell deformation in cell compression tests, or more than approximately 3% of tensile strains in the elastic plate substrate in cell stretching experiments. Utilization of our modeling can substantially enrich experimental cellular mechanics studies in classic cell loading designs that typically involve large cell deformations, such as static and cyclic stretching, cell compression, micropipette aspiration, shear flow and hydrostatic pressure, by providing magnitudes and distributions of the localized cellular strains specific to each setup and cell type, which could then be associated with the applied stimuli.
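    Large-deformation analyses such as this one typically report finite-strain measures rather than small-strain engineering strain. A sketch of the uniaxial Green-Lagrange strain at the deformation levels quoted in the abstract — the choice of strain measure here is an assumption for illustration; the paper may report others:

```python
def green_lagrange(stretch):
    # E = (lambda^2 - 1) / 2 for a uniaxial stretch ratio lambda
    return (stretch ** 2 - 1.0) / 2.0

# 15% global compression (stretch 0.85) vs. 3% substrate tension (stretch 1.03)
print(green_lagrange(0.85), green_lagrange(1.03))
```

    Note the asymmetry: the finite-strain magnitude for 15% compression is noticeably smaller than 0.15, which is one reason large-deformation FE models are needed to map global cell deformation onto localized membrane strains.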

  4. A mathematical high bar-human body model for analysing and interpreting mechanical-energetic processes on the high bar.

    Science.gov (United States)

    Arampatzis, A; Brüggemann, G P

    1998-12-01

    The aims of this study were: 1. To study the transfer of energy between the high bar and the gymnast. 2. To develop criteria from the utilisation of high bar elasticity and the utilisation of muscle capacity to assess the effectiveness of a movement solution. 3. To study the influence of varying segment movement upon release parameters. For these purposes a model of the human body attached to the high bar (high bar-human body model) was developed. The human body was modelled using a 15-segment body system. The joint-beam element method (superelement) was employed for modelling the high bar. A superelement consists of four rigid segments connected by joints (two Cardan joints and one rotational-translational joint) and springs (seven rotation springs and one tension-compression spring). The high bar was modelled using three superelements. The input data required for the high bar-human body model were collected with video-kinematographic (50 Hz) and dynamometric (500 Hz) techniques. Masses and moments of inertia of the 15 segments were calculated using the data from the Zatsiorsky et al. (1984) model. There are two major phases characteristic of the giant swing prior to dismounts from the high bar. In the first phase the gymnast attempts to supply energy to the high bar-human body system through muscle activity and to store this energy in the high bar. The difference between the energy transferred to the high bar and the reduction in the total energy of the body could be adopted as a criterion for the utilisation of high bar elasticity. The energy previously transferred into the high bar is returned to the body during the second phase. An advantageous increase in total body energy at the end of the exercise could only be obtained through muscle energy supply. An index characterising the utilisation of muscle capacity was developed out of the difference between the increase in total body energy and the energy returned from the high bar.
A delayed and initially slow but

  5. Validating the European Health Literacy Survey Questionnaire in people with type 2 diabetes: Latent trait analyses applying multidimensional Rasch modelling and confirmatory factor analysis.

    Science.gov (United States)

    Finbråten, Hanne Søberg; Pettersen, Kjell Sverre; Wilde-Larsson, Bodil; Nordström, Gun; Trollvik, Anne; Guttersrud, Øystein

    2017-11-01

    To validate the European Health Literacy Survey Questionnaire (HLS-EU-Q47) in people with type 2 diabetes mellitus. The HLS-EU-Q47 latent variable is outlined in a framework with four cognitive domains integrated in three health domains, implying 12 theoretically defined subscales. Valid and reliable health literacy measures are crucial to effectively adapt health communication and education to individuals and groups of patients. Cross-sectional study applying confirmatory latent trait analyses. Using a paper-and-pencil self-administered approach, 388 adults responded in March 2015. The data were analysed using the Rasch methodology and confirmatory factor analysis. Response violation (response dependency) and trait violation (multidimensionality) of local independence were identified. Fitting the "multidimensional random coefficients multinomial logit" model, 1-, 3- and 12-dimensional Rasch models were applied and compared. Poor model fit and differential item functioning were present in some items, and several subscales suffered from poor targeting and low reliability. Despite multidimensional data, we did not observe any unordered response categories. Interpreting the domains as distinct but related latent dimensions, the data fit a 12-dimensional Rasch model and a 12-factor confirmatory factor model best. Therefore, the analyses did not support the estimation of one overall "health literacy score." To support the plausibility of claims based on the HLS-EU score(s), we suggest: removing the health care aspect to reduce the magnitude of multidimensionality; rejecting redundant items to avoid response dependency; adding "harder" items and applying a six-point rating scale to improve subscale targeting and reliability; and revising items to improve model fit and avoid bias owing to person factors. © 2017 John Wiley & Sons Ltd.
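    The study fits polytomous, multidimensional Rasch models; the underlying idea is easiest to see in the basic dichotomous Rasch model, where the response probability depends only on the difference between person ability and item difficulty on a logit scale. A minimal sketch, not the multidimensional model actually used:

```python
import math

def rasch_p(theta, b):
    # Dichotomous Rasch model: P(X=1) = exp(theta - b) / (1 + exp(theta - b))
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Targeting: when item difficulty b matches ability theta, P = 0.5, where the
# item is most informative -- poorly targeted subscales violate this.
print(rasch_p(0.0, 0.0), round(rasch_p(1.0, 0.0), 3))
```

    "Poor targeting" in the abstract means the item difficulties sit far from the respondents' abilities, pushing these probabilities toward 0 or 1 and lowering reliability; "adding harder items" shifts difficulties back toward the ability range.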

  6. Updated model for radionuclide transport in the near-surface till at Forsmark - Implementation of decay chains and sensitivity analyses

    Energy Technology Data Exchange (ETDEWEB)

    Pique, Angels; Pekala, Marek; Molinero, Jorge; Duro, Lara; Trinchero, Paolo; Vries, Luis Manuel de [Amphos 21 Consulting S.L., Barcelona (Spain)

    2013-02-15

    The Forsmark area has been proposed for potential siting of a deep underground (geological) repository for radioactive waste in Sweden. Safety assessment of the repository requires radionuclide transport from the disposal depth to recipients at the surface to be studied quantitatively. The near-surface quaternary deposits at Forsmark are considered a pathway for potential discharge of radioactivity from the underground facility to the biosphere, thus radionuclide transport in this system has been extensively investigated over the last years. The most recent work of Pique and co-workers (reported in SKB report R-10-30) demonstrated that in case of release of radioactivity the near-surface sedimentary system at Forsmark would act as an important geochemical barrier, retarding the transport of reactive radionuclides through a combination of retention processes. In this report the conceptual model of radionuclide transport in the quaternary till at Forsmark has been updated, by considering recent revisions regarding the near-surface lithology. In addition, the impact of important conceptual assumptions made in the model has been evaluated through a series of deterministic and probabilistic (Monte Carlo) sensitivity calculations. The sensitivity study focused on the following effects: 1. Radioactive decay of {sup 135}Cs, {sup 59}Ni, {sup 230}Th and {sup 226}Ra and effects on their transport. 2. Variability in key geochemical parameters, such as the composition of the deep groundwater, availability of sorbing materials in the till, and mineral equilibria. 3. Variability in hydraulic parameters, such as the definition of hydraulic boundaries, and values of hydraulic conductivity, dispersivity and the deep groundwater inflow rate. 
The overarching conclusion from this study is that the current implementation of the model is robust (the model is largely insensitive to variations in the parameters within the studied ranges) and conservative (the Base Case calculations have a

  7. The generic MESSy submodel TENDENCY (v1.0) for process-based analyses in Earth system models

    Science.gov (United States)

    Eichinger, R.; Jöckel, P.

    2014-07-01

    The tendencies of prognostic variables in Earth system models are usually only accessible, e.g. for output, as a sum over all physical, dynamical and chemical processes at the end of one time integration step. Information about the contribution of individual processes to the total tendency is lost, if no special precautions are implemented. The knowledge on individual contributions, however, can be of importance to track down specific mechanisms in the model system. We present the new MESSy (Modular Earth Submodel System) infrastructure submodel TENDENCY and use it exemplarily within the EMAC (ECHAM/MESSy Atmospheric Chemistry) model to trace process-based tendencies of prognostic variables. The main idea is the outsourcing of the tendency accounting for the state variables from the process operators (submodels) to the TENDENCY submodel itself. In this way, a record of the tendencies of all process-prognostic variable pairs can be stored. The selection of these pairs can be specified by the user, tailor-made for the desired application, in order to minimise memory requirements. Moreover, a standard interface allows the access to the individual process tendencies by other submodels, e.g. for on-line diagnostics or for additional parameterisations, which depend on individual process tendencies. An optional closure test assures the correct treatment of tendency accounting in all submodels and thus serves to reduce the model's susceptibility. TENDENCY is independent of the time integration scheme and therefore the concept is applicable to other model systems as well. Test simulations with TENDENCY show an increase of computing time for the EMAC model (in a setup without atmospheric chemistry) of 1.8 ± 1% due to the additional subroutine calls when using TENDENCY. Exemplary results reveal the dissolving mechanisms of the stratospheric tape recorder signal in height over time. The separation of the tendency of the specific humidity into the respective processes (large
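    The bookkeeping idea behind TENDENCY, including its closure test, can be sketched generically; the class, process names, and numbers below are invented for illustration and are not the MESSy API:

```python
class TendencyLedger:
    """Records the tendency each process adds to a prognostic variable."""
    def __init__(self):
        self.records = {}          # process name -> accumulated tendency

    def add(self, process, tendency):
        self.records[process] = self.records.get(process, 0.0) + tendency
        return tendency

ledger = TendencyLedger()
q, dt = 5.0e-3, 600.0              # specific humidity (kg/kg), time step (s)

total = 0.0
for process, tend in [("advection", 1.2e-7), ("convection", -4.0e-8),
                      ("cloud", -2.5e-8)]:
    total += ledger.add(process, tend)   # each operator reports its tendency

q_new = q + total * dt
# closure test: the per-process records must sum to the total applied tendency
closure_ok = abs(sum(ledger.records.values()) - total) < 1e-18
print(closure_ok, q_new)
```

    The point of the closure test is exactly this comparison: if any process modifies the state variable without registering its tendency, the recorded sum no longer matches the total change over the step.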

  8. The generic MESSy submodel TENDENCY (v1.0 for process-based analyses in Earth system models

    Directory of Open Access Journals (Sweden)

    R. Eichinger

    2014-07-01

    Full Text Available The tendencies of prognostic variables in Earth system models are usually only accessible, e.g. for output, as a sum over all physical, dynamical and chemical processes at the end of one time integration step. Information about the contribution of individual processes to the total tendency is lost, if no special precautions are implemented. The knowledge on individual contributions, however, can be of importance to track down specific mechanisms in the model system. We present the new MESSy (Modular Earth Submodel System) infrastructure submodel TENDENCY and use it exemplarily within the EMAC (ECHAM/MESSy Atmospheric Chemistry) model to trace process-based tendencies of prognostic variables. The main idea is the outsourcing of the tendency accounting for the state variables from the process operators (submodels) to the TENDENCY submodel itself. In this way, a record of the tendencies of all process–prognostic variable pairs can be stored. The selection of these pairs can be specified by the user, tailor-made for the desired application, in order to minimise memory requirements. Moreover, a standard interface allows the access to the individual process tendencies by other submodels, e.g. for on-line diagnostics or for additional parameterisations, which depend on individual process tendencies. An optional closure test assures the correct treatment of tendency accounting in all submodels and thus serves to reduce the model's susceptibility. TENDENCY is independent of the time integration scheme and therefore the concept is applicable to other model systems as well. Test simulations with TENDENCY show an increase of computing time for the EMAC model (in a setup without atmospheric chemistry) of 1.8 ± 1% due to the additional subroutine calls when using TENDENCY. Exemplary results reveal the dissolving mechanisms of the stratospheric tape recorder signal in height over time. The separation of the tendency of the specific humidity into the respective

  9. Analyses of β-Bands of 230,232Th and 232,234U by the Projected Shell Model

    Institute of Scientific and Technical Information of China (English)

    CUI Ji-Wei; ZHOU Xian-Rong; CHEN Fang-Qi; SUN Yang; WU Cheng-Li

    2012-01-01

    The ground bands and β-bands of four nuclei 230,232Th and 232,234U in the actinide region are investigated by introducing a collective D0 pair into the projected shell model. We discuss the collectivity of the D0 pair. The calculated energy schemes agree well with experimental data, and so do the E2 transition rates.

  10. Dynamical mechanisms of phase-2 early afterdepolarizations in human ventricular myocytes: insights from bifurcation analyses of two mathematical models.

    Science.gov (United States)

    Kurata, Yasutaka; Tsumoto, Kunichika; Hayashi, Kenshi; Hisatome, Ichiro; Tanida, Mamoru; Kuda, Yuhichi; Shibamoto, Toshishige

    2017-01-01

    Early afterdepolarization (EAD) is known as a cause of ventricular arrhythmias in long QT syndromes. We theoretically investigated how the rapid (IKr) and slow (IKs) components of delayed-rectifier K(+) channel currents, L-type Ca(2+) channel current (ICaL), Na(+)/Ca(2+) exchanger current (INCX), Na(+)-K(+) pump current (INaK), intracellular Ca(2+) (Cai) handling via sarcoplasmic reticulum (SR), and intracellular Na(+) concentration (Nai) contribute to initiation, termination, and modulation of phase-2 EADs, using two human ventricular myocyte models. Bifurcation structures of dynamical behaviors in model cells were explored by calculating equilibrium points, limit cycles (LCs), and bifurcation points as functions of parameters. EADs were reproduced by numerical simulations. The results are summarized as follows: 1) decreasing IKs and/or IKr or increasing ICaL led to EAD generation, to which mid-myocardial cell models were especially susceptible; the parameter regions of EADs overlapped the regions of stable LCs. 2) Two types of EADs (termination mechanisms), IKs activation-dependent and ICaL inactivation-dependent EADs, were detected; IKs was not necessarily required for EAD formation. 3) Inhibiting INCX suppressed EADs via facilitating Ca(2+)-dependent ICaL inactivation. 4) Cai dynamics (SR Ca(2+) handling) and Nai strongly affected bifurcations and EAD generation in model cells via modulating ICaL, INCX, and INaK. Parameter regions of EADs, often overlapping those of stable LCs, shifted depending on Cai and Nai in stationary and dynamic states. 5) Bradycardia-related induction of EADs was mainly due to decreases in Nai at lower pacing rates. This study demonstrates that bifurcation analysis allows us to understand the dynamical mechanisms of EAD formation more profoundly.
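
    The equilibrium-point side of such a bifurcation analysis can be sketched on a generic excitable-cell model (FitzHugh-Nagumo, chosen for brevity; it is not one of the two human ventricular myocyte models analysed in the paper): for each value of a bifurcation parameter, solve for the equilibria and classify their stability from the Jacobian eigenvalues.

```python
import numpy as np

# One-parameter scan of equilibria in the FitzHugh-Nagumo equations:
#   dv/dt = v - v^3/3 - w + I,   dw/dt = eps * (v + a - b*w)
a, b, eps = 0.7, 0.8, 0.08

def equilibria(I):
    # Setting dv/dt = dw/dt = 0 with w = (v + a)/b gives a cubic in v.
    roots = np.roots([-1.0 / 3.0, 0.0, 1.0 - 1.0 / b, I - a / b])
    return [r.real for r in roots if abs(r.imag) < 1e-6]

def is_stable(v_eq):
    # Classify an equilibrium from the eigenvalues of the Jacobian there.
    J = np.array([[1.0 - v_eq**2, -1.0], [eps, -eps * b]])
    return bool(np.all(np.linalg.eigvals(J).real < 0.0))

for I in (0.0, 0.5, 1.0):
    for v_eq in equilibria(I):
        print(f"I={I:.1f}  v*={v_eq:+.3f}  stable={is_stable(v_eq)}")
```

At I=0 the rest state is stable; raising I destabilises the equilibrium (a Hopf bifurcation) and a stable limit cycle appears, the same kind of structure the authors use to delimit EAD-generating parameter regions.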

  11. Comparative analyses of $B\\to K_2^*l^+l^-$ in the standard model and new physics scenarios

    CERN Document Server

    Li, Run-Hui; Wang, Wei

    2010-01-01

    We analyze the $B\\to K_2^*(\\to K\\pi)l^+l^-$ (with $l=e,\\mu,\\tau$) decay in the standard model and two new physics scenarios: the vector-like quark model and the family non-universal $Z'$ model. We derive the differential angular distributions of the quasi-four-body decay, using the recently calculated form factors in the perturbative QCD approach. Branching ratios, polarizations, forward-backward asymmetries and transversity amplitudes are predicted, from which we find a promising prospect of observing this channel in future experiments. We also update the constraints on effective Wilson coefficients and/or free parameters in these two new physics scenarios by making use of the experimental data on $B\\to K^*l^+l^-$ and $b\\to sl^+l^-$. Their impact on $B\\to K_2^*l^+l^-$ is subsequently explored, and in particular the zero-crossing point of the forward-backward asymmetry in these new physics scenarios can sizably deviate from the SM prediction. In addition, we generalize the analysis to a similar mode $B_s\\to f_2...

  12. Mutational Analyses of HAMP Helices Suggest a Dynamic Bundle Model of Input-Output Signaling in Chemoreceptors

    Science.gov (United States)

    Zhou, Qin; Ames, Peter; Parkinson, John S.

    2009-01-01

    To test the gearbox model of HAMP signaling in the E. coli serine receptor, Tsr, we generated a series of amino acid replacements at each residue of the AS1 and AS2 helices. The residues most critical for Tsr function defined hydrophobic packing faces consistent with a 4-helix bundle. Suppression patterns of helix lesions conformed to the predicted packing layers in the bundle. Although the properties and patterns of most AS1 and AS2 lesions were consistent with both proposed gearbox structures, some mutational features specifically indicate the functional importance of an x-da bundle over an alternative a-d bundle. These genetic data suggest that HAMP signaling could simply involve changes in the stability of its x-da bundle. We propose that Tsr HAMP controls output signals by modulating destabilizing phase clashes between the AS2 helices and the adjoining kinase control helices. Our model further proposes that chemoeffectors regulate HAMP bundle stability through a control cable connection between the transmembrane segments and AS1 helices. Attractant stimuli, which cause inward piston displacements in chemoreceptors, should reduce cable tension, thereby stabilizing the HAMP bundle. This study shows how transmembrane signaling and HAMP input-output control could occur without the helix rotations central to the gearbox model. PMID:19656294

  13. The nocturnal low-level jet in the West African Sahel from observations, analyses, and conceptual models

    Science.gov (United States)

    Bessardon, Geoffrey; Brooks, Barbara; Marsham, John; Ross, Andrew

    2017-04-01

    There is a strong diurnal cycle in the West African monsoon (WAM) and the nocturnal low-level jet (NLLJ) is a key component of the nocturnal monsoon flow, transporting heat, moisture, and aerosols. Shear beneath the NLLJ has been linked to cloud formation and daytime mixing of NLLJ momentum to the surface is a key process for dust uplift. This study presents a comparison between observations from the West African Sahel, reanalyses and two conceptual models of the NLLJ inertial oscillation. Past studies have identified inertial oscillations as the main cause of NLLJs at midlatitudes, but this study provides a novel quantitative test of conceptual models for the NLLJ at a monsoonal latitude. A comparison of 18 cases observed during the African Monsoon Multidisciplinary Analysis (AMMA) shows that an inertial oscillation is the main mechanism behind the NLLJ in the summertime Sahel. The inclusion of friction is essential for a realistic jet evolution. A simple conceptual model with friction captures the NLLJ strength, but gives too rapid rotation, likely due to the assumption of a constant equilibrium wind, when there are significant changes in geostrophic wind overnight. Reanalyses give a realistic rotation rate, but too weak a NLLJ, with too strong winds at low levels, due to too much mixing. This leads to substantial biases in reanalysed moisture transport.
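
    The inertial-oscillation mechanism can be sketched in its simplest, frictionless Blackadar-type form (the paper's preferred conceptual model adds friction; the latitude, geostrophic wind, and wind at sunset below are invented values): after sunset, turbulent friction collapses and the ageostrophic wind rotates anticyclonically at the Coriolis frequency.

```python
import numpy as np

# Frictionless inertial oscillation of the nocturnal wind (idealised sketch).
lat = 13.5                                    # deg N, a Sahelian latitude
f = 2.0 * 7.292e-5 * np.sin(np.radians(lat))  # Coriolis parameter (1/s)
ug, vg = 8.0, 0.0                             # geostrophic wind (m/s), assumed
u0, v0 = 4.0, 0.0                             # wind at sunset, slowed by friction

t = np.arange(0.0, 12 * 3600.0, 600.0)        # first 12 h of the night (s)
du, dv = u0 - ug, v0 - vg                     # ageostrophic wind at sunset
u = ug + du * np.cos(f * t) + dv * np.sin(f * t)
v = vg - du * np.sin(f * t) + dv * np.cos(f * t)
speed = np.hypot(u, v)

# The ageostrophic wind rotates with period 2*pi/f; at this low latitude that
# is roughly 51 h, so one night sees only partial rotation, yet the jet still
# becomes supergeostrophic well before the theoretical peak.
print(f"inertial period: {2 * np.pi / f / 3600:.1f} h, "
      f"max 12-h speed: {speed.max():.1f} m/s")
```

The long inertial period at monsoonal latitudes is exactly why the rotation rate, not just the jet strength, is a discriminating test of the conceptual models.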

  14. Model analyses of atmospheric mercury: present air quality and effects of transpacific transport on the United States

    Science.gov (United States)

    Lei, H.; Liang, X.-Z.; Wuebbles, D. J.; Tao, Z.

    2013-11-01

    Atmospheric mercury is a toxic air and water pollutant that is of significant concern because of its effects on human health and ecosystems. A mechanistic representation of the atmospheric mercury cycle is developed for the state-of-the-art global climate-chemistry model, CAM-Chem (Community Atmospheric Model with Chemistry). The model simulates the emission, transport, transformation and deposition of atmospheric mercury (Hg) in three forms: elemental mercury (Hg(0)), reactive mercury (Hg(II)), and particulate mercury (PHg). Emissions of mercury include those from human, land, ocean, biomass burning and volcano related sources. Land emissions are calculated based on surface solar radiation flux and skin temperature. A simplified air-sea mercury exchange scheme is used to calculate emissions from the oceans. The chemistry mechanism includes the oxidation of Hg(0) in the gaseous phase by ozone with temperature dependence, OH, H2O2 and chlorine. Aqueous chemistry includes both oxidation and reduction of Hg(0). Transport and deposition of mercury species are calculated through adapting the original formulations in CAM-Chem. The CAM-Chem model with mercury is driven by present meteorology to simulate the present mercury air quality during the 1999-2001 period. The resulting surface concentrations of total gaseous mercury (TGM) are then compared with the observations from worldwide sites. Simulated wet depositions of mercury over the continental United States are compared to the observations from 26 Mercury Deposition Network stations to test the wet deposition simulations. The evaluations of gaseous concentrations and wet deposition confirm a strong capability for the CAM-Chem mercury mechanism to simulate the atmospheric mercury cycle. The general reproduction of global TGM concentrations and the overestimation over South Africa indicate that model simulations of TGM are seriously affected by emissions. The comparison to wet deposition indicates that wet deposition patterns

  15. A methodology for eliciting, representing, and analysing stakeholder knowledge for decision making on complex socio-ecological systems: from cognitive maps to agent-based models.

    Science.gov (United States)

    Elsawah, Sondoss; Guillaume, Joseph H A; Filatova, Tatiana; Rook, Josefine; Jakeman, Anthony J

    2015-03-15

    This paper aims to contribute to developing better ways of incorporating essential human elements in decision making processes for modelling of complex socio-ecological systems. It presents a step-wise methodology for integrating perceptions of stakeholders (qualitative) into formal simulation models (quantitative) with the ultimate goal of improving understanding and communication about decision making in complex socio-ecological systems. The methodology integrates cognitive mapping and agent-based modelling. It cascades through a sequence of qualitative/soft and numerical methods comprising: (1) Interviews to elicit mental models; (2) Cognitive maps to represent and analyse individual and group mental models; (3) Time-sequence diagrams to chronologically structure the decision making process; (4) An all-encompassing conceptual model of decision making; and (5) A computational (in this case agent-based) model. We apply the proposed methodology (labelled ICTAM) in a case study of viticulture irrigation in South Australia. Finally, we use strengths-weaknesses-opportunities-threats (SWOT) analysis to reflect on the methodology. Results show that the methodology leverages the use of cognitive mapping to capture the richness of decision making and mental models, and provides a combination of divergent and convergent analysis methods leading to the construction of an agent-based model.
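
    The final, computational step of such a methodology can be sketched with a toy agent-based model of irrigators responding to a seasonal water allocation (all rules and numbers below are invented for illustration and are not the ICTAM case-study model):

```python
import random

# Toy agent-based model: irrigator agents choose water use under an allocation.
random.seed(1)

class Irrigator:
    def __init__(self, area_ha, risk_aversion):
        self.area = area_ha              # vineyard area (ha), assumed
        self.risk = risk_aversion        # 0 = bold, 1 = cautious

    def demand(self, allocation_frac):
        # Cautious growers cut water use more sharply as allocations drop.
        need = 5.0 * self.area           # ML for full irrigation (assumed rate)
        return need * allocation_frac ** (1.0 + self.risk)

agents = [Irrigator(random.uniform(5, 20), random.random()) for _ in range(50)]
for alloc in (1.0, 0.6, 0.3):
    total = sum(a.demand(alloc) for a in agents)
    print(f"allocation {alloc:.0%}: total demand {total:.0f} ML")
```

In the full methodology the decision rule inside `demand` would be derived from the interview-based cognitive maps rather than assumed, which is the point of steps (1)-(4).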

  16. Increased congruence does not necessarily indicate increased phylogenetic accuracy--the behavior of the incongruence length difference test in mixed-model analyses.

    Science.gov (United States)

    Dowton, Mark; Austin, Andrew D

    2002-02-01

    Comprehensive phylogenetic analyses utilize data from distinct sources, including nuclear, mitochondrial, and plastid molecular sequences and morphology. Such heterogeneous datasets are likely to require distinct models of analysis, given the different histories of mutational biases operating on these characters. The incongruence length difference (ILD) test is increasingly being used to arbitrate between competing models of phylogenetic analysis in cases where multiple data partitions have been collected. Our work suggests that the ILD test is unlikely to be an effective measure of congruence when two datasets differ markedly in size. We show that models that increase the contribution of one data partition over another are likely to increase congruence, as measured by this test. More alarmingly, for many bipartition comparisons, character congruence increases bimodally - either increasing or decreasing the contribution of one data partition will increase congruence - making it impossible to arrive at a single optimally congruent model of analysis.

  17. Systematic review of model-based analyses reporting the cost-effectiveness and cost-utility of cardiovascular disease management programs.

    Science.gov (United States)

    Maru, Shoko; Byrnes, Joshua; Whitty, Jennifer A; Carrington, Melinda J; Stewart, Simon; Scuffham, Paul A

    2015-02-01

    The reported cost effectiveness of cardiovascular disease management programs (CVD-MPs) is highly variable, potentially leading to different funding decisions. This systematic review evaluates published modeled analyses to compare study methods and quality. Articles were included if an incremental cost-effectiveness ratio (ICER) or cost-utility ratio (ICUR) was reported, the intervention was a multi-component program designed to manage or prevent a cardiovascular disease condition, and it addressed all domains specified in the American Heart Association Taxonomy for Disease Management. Nine articles (reporting 10 clinical outcomes) were included. Eight cost-utility and two cost-effectiveness analyses targeted hypertension (n=4), coronary heart disease (n=2), coronary heart disease plus stroke (n=1), heart failure (n=2) and hyperlipidemia (n=1). Study perspectives included the healthcare system (n=5), societal and fund holders (n=1), a third party payer (n=3), or were not explicitly stated (n=1). All analyses were modeled based on interventions of one to two years' duration. Time horizons ranged from two years (n=1) and 10 years (n=1) to lifetime (n=8). Model structures included Markov models (n=8), 'decision analytic models' (n=1), or were not explicitly stated (n=1). Considerable variation was observed in clinical and economic assumptions and reporting practices. Of all ICERs/ICURs reported, including those of subgroups (n=16), four were above a US$50,000 acceptability threshold, six were below and six were dominant. The majority of CVD-MPs were reported to have favorable economic outcomes, but 25% were at unacceptably high cost for the outcomes. Use of standardized reporting tools should increase transparency and inform what drives the cost-effectiveness of CVD-MPs. © The European Society of Cardiology 2014.
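
    The ICER arithmetic behind the review's threshold classification is simple enough to state in a few lines (the costs and effects below are invented for illustration):

```python
# Incremental cost-effectiveness ratio (ICER) with dominance handling.
def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost per unit of incremental effect (e.g. per QALY)."""
    d_cost = cost_new - cost_old
    d_effect = effect_new - effect_old
    if d_effect > 0 and d_cost <= 0:
        return "dominant"        # more effective and no more costly
    if d_effect <= 0 and d_cost >= 0:
        return "dominated"       # no more effective and at least as costly
    return d_cost / d_effect

threshold = 50_000               # US$ per QALY, as used in the review
value = icer(cost_new=12_000, cost_old=8_000, effect_new=6.3, effect_old=6.1)
print(value, "acceptable" if value <= threshold else "too costly")
```

A program is "dominant" when it saves money and improves outcomes, which is how six of the sixteen reported ratios were classified.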

  18. Modelling and simulation of a dual-clutch transmission vehicle to analyse the effect of pump selection on fuel economy

    Science.gov (United States)

    Ahlawat, R.; Fathy, H. K.; Lee, B.; Stein, J. L.; Jung, D.

    2010-07-01

    Positive displacement pumps are used in automotive transmissions to provide pressurised fluid to various hydraulic components in the transmission and also lubricate the mechanical components. The output flow of these pumps increases with pump/transmission speed, almost linearly, but the transmission flow requirements often saturate at higher speeds, resulting in excess flow capacity that must be wasted by allowing it to drain back to the sump. This represents a parasitic loss in the transmission leading to a loss in fuel economy. To overcome this issue, variable displacement pumps have been used in the transmission, where the output flow can be reduced by controlling the displacement of the pump. The use of these pumps in automatic transmissions has resulted in better fuel economy as compared with some types of fixed displacement pumps. However, the literature does not fully explore the benefits of variable displacement pumps to a specific type of transmission namely, dual-clutch transmission (DCT), which has different pressure and flow requirements from an epicyclic gear train. This paper presents an analysis of the effect of pump selection on fuel economy in a five-speed DCT of a commercial vehicle. Models of the engine, transmission, and vehicle are developed along with the models of two different types of pumps: a fixed displacement gerotor pump and a variable displacement vane pump. The models are then parameterised using experimental data, and the fuel economy of the vehicle is simulated on a standard driving cycle. The results suggest that the fuel economy benefit obtained by the use of the variable displacement pump in DCTs is comparable to the benefit previously shown for these pumps in automatic transmissions.
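
    The parasitic loss that motivates the variable-displacement pump can be shown with a back-of-envelope flow balance (the displacement and demand curve below are invented, not the paper's parameterised DCT model):

```python
import numpy as np

# Fixed- vs variable-displacement pump: excess flow drained to the sump.
speed = np.linspace(600, 6000, 10)              # pump speed (rpm)
disp_fixed = 0.02                               # L/rev, sized so supply covers
supply_fixed = disp_fixed * speed               # demand even at idle (L/min)

# Transmission flow requirement: rises with speed, then saturates.
demand = np.minimum(0.004 * speed + 8.0, 24.0)  # L/min

# Fixed pump: flow above demand drains back to the sump, a parasitic loss.
excess_fixed = np.maximum(supply_fixed - demand, 0.0)

# Ideal variable pump: displacement is trimmed so supply tracks demand.
excess_variable = np.zeros_like(speed)

print(f"excess flow at {speed[-1]:.0f} rpm: fixed {excess_fixed[-1]:.0f} L/min, "
      f"variable {excess_variable[-1]:.0f} L/min")
```

Because supply grows linearly with speed while demand saturates, the wasted flow of the fixed pump grows with engine speed, which is why the fuel-economy gap widens on high-speed drive-cycle segments.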

  19. CLASH-VLT: constraints on f(R) gravity models with galaxy clusters using lensing and kinematic analyses

    Science.gov (United States)

    Pizzuti, L.; Sartoris, B.; Amendola, L.; Borgani, S.; Biviano, A.; Umetsu, K.; Mercurio, A.; Rosati, P.; Balestra, I.; Caminha, G. B.; Girardi, M.; Grillo, C.; Nonino, M.

    2017-07-01

    We perform a maximum likelihood kinematic analysis of the two dynamically relaxed galaxy clusters MACS J1206.2-0847 at z=0.44 and RXC J2248.7-4431 at z=0.35 to determine the total mass profile in modified gravity models, using a modified version of the MAMPOSSt code of Mamon, Biviano and Boué. Our work is based on the kinematic and lensing mass profiles derived using the data from the Cluster Lensing And Supernova survey with Hubble (hereafter CLASH) and the spectroscopic follow-up with the Very Large Telescope (hereafter CLASH-VLT). We assume a spherical Navarro-Frenk-White (NFW hereafter) profile in order to obtain a constraint on the fifth force interaction range λ for models in which the dependence of this parameter on the environment is negligible at the scale considered (i.e. λ=const) and fixing the fifth force strength to the value predicted in f(R) gravity. We then use information from the lensing analysis to put a prior on the other NFW free parameters. In the case of MACSJ 1206 the joint kinematic+lensing analysis leads to an upper limit on the effective interaction range λ distribution. For RXJ 2248 instead a possible tension with the ΛCDM model appears when adding lensing information, with a lower limit λ>=0.14 Mpc at Δχ2=2.71. This is a consequence of the slight difference between the lensing and kinematic data, appearing in GR for this cluster, that could in principle be explained in terms of modifications of gravity. We discuss the impact of systematics and the limits of our analysis, as well as future improvements of the results obtained. This work has interesting implications in view of upcoming and future large imaging and spectroscopic surveys, which will deliver lensing and kinematic mass reconstructions for a large number of galaxy clusters.
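
    The spherical NFW profile assumed in these fits has a closed-form enclosed mass, which is what both the kinematic and lensing reconstructions constrain (the scale radius and characteristic density below are illustrative placeholders, not the CLASH-VLT best-fit values):

```python
import numpy as np

# Enclosed mass of the spherical Navarro-Frenk-White (NFW) density profile.
def nfw_mass(r, r_s, rho_s):
    """M(<r) = 4 pi rho_s r_s^3 [ln(1 + r/r_s) - (r/r_s)/(1 + r/r_s)]."""
    x = np.asarray(r, dtype=float) / r_s
    return 4.0 * np.pi * rho_s * r_s**3 * (np.log1p(x) - x / (1.0 + x))

r_s = 0.5                     # scale radius (Mpc), assumed
rho_s = 1.0e15                # characteristic density (Msun/Mpc^3), assumed
for r in (0.5, 1.0, 2.0):
    print(f"M(<{r:.1f} Mpc) = {nfw_mass(r, r_s, rho_s):.2e} Msun")
```

Comparing the lensing- and kinematics-inferred versions of this profile is precisely where a fifth-force signature would show up, since modified gravity alters the dynamical mass but not the lensing mass.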

  20. Strong "bottom-up" influences on small mammal populations: State-space model analyses from long-term studies.

    Science.gov (United States)

    Flowerdew, John R; Amano, Tatsuya; Sutherland, William J

    2017-03-01

    "Bottom-up" influences, that is, masting, plus population density and climate, commonly influence woodland rodent demography. However, "top-down" influences (predation) also intervene. Here, we assess the impacts of masting, climate, and density on rodent populations placed in the context of what is known about "top-down" influences. To explain between-year variations in bank vole Myodes glareolus and wood mouse Apodemus sylvaticus population demography, we applied a state-space model to 33 years of catch-mark-release live-trapping, winter temperature, and precise mast-collection data. Experimental mast additions aided interpretation. Rodent numbers in European ash Fraxinus excelsior woodland were estimated (May/June, November/December). December-March mean minimum daily temperature represented winter severity. Total marked adult mice/voles (and juveniles in May/June) provided density indices validated against a model-generated population estimate; this allowed estimation of the structure of a time-series model and the demographic impacts of the climatic/biological variables. During two winters of insignificant fruit-fall, 6.79 g/m(2) sterilized ash seed (as fruit) was distributed over an equivalent woodland similarly live-trapped. September-March fruit-fall strongly increased bank vole spring reproductive rate and winter and summer population growth rates; colder winters weakly reduced winter population growth. September-March fruit-fall and warmer winters marginally increased wood mouse spring reproductive rate and September-December fruit-fall weakly elevated summer population growth. Density dependence significantly reduced both species' population growth. Fruit-fall impacts on demography still appeared after a year. Experimental ash fruit addition confirmed its positive influence on bank vole winter population growth with probable moderation by colder temperatures. The models show the strong impact of masting as a "bottom-up" influence on rodent demography
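
    The structure of such a state-space analysis can be sketched as a Gompertz-type process model with covariates plus an observation model (the coefficients, covariates, and noise levels below are invented, not the estimates from the 33-year data set):

```python
import numpy as np

# Gompertz-type state-space simulation with a mast covariate (illustrative).
rng = np.random.default_rng(0)

T = 33
mast = rng.gamma(1.0, 2.0, size=T)             # autumn fruit-fall index
winter_temp = rng.normal(2.0, 1.5, size=T)     # mean winter minimum (deg C)

a, b_dd, b_mast, b_temp = 0.5, -0.3, 0.15, 0.05   # assumed coefficients
proc_sd, obs_sd = 0.2, 0.1

log_n = np.empty(T)
log_n[0] = np.log(20.0)
for t in range(1, T):
    # Process model: density dependence plus "bottom-up" covariates.
    log_n[t] = (log_n[t - 1] + a + b_dd * log_n[t - 1]
                + b_mast * mast[t - 1] + b_temp * winter_temp[t - 1]
                + rng.normal(0.0, proc_sd))

# Observation model: the trapping index measures the true state with error.
obs = log_n + rng.normal(0.0, obs_sd, size=T)
print(f"mean simulated abundance index: {np.exp(obs).mean():.1f}")
```

Separating process noise from observation noise is what lets the fitted model attribute between-year variation to masting and winter temperature rather than to trapping error.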

  1. Model analyses of atmospheric mercury: present air quality and effects of transpacific transport on the United States

    Directory of Open Access Journals (Sweden)

    H. Lei

    2013-04-01

    Full Text Available Atmospheric mercury is a toxic air and water pollutant that is of significant concern because of its effects on human health and ecosystems. A mechanistic representation of the atmospheric mercury cycle is developed for the state-of-the-art global climate-chemistry model, CAM-Chem (Community Atmospheric Model with Chemistry). The model simulates the emission, transport, transformation and deposition of atmospheric mercury (Hg) in three forms: elemental mercury (Hg(0)), reactive mercury (Hg(II)), and particulate mercury (PHg). Emissions of mercury include those from human, land, ocean, biomass burning and volcano related sources. Land emissions are calculated based on surface solar radiation flux and skin temperature. A simplified air–sea mercury exchange scheme is used to calculate emissions from the oceans. The chemistry mechanism includes the oxidation of Hg(0) in the gaseous phase by ozone with temperature dependence, OH, H2O2 and chlorine. Aqueous chemistry includes both oxidation and reduction of Hg(0). Transport and deposition of mercury species are calculated through adapting the original formulations in CAM-Chem. The CAM-Chem model with mercury is driven by present meteorology to simulate the present mercury air quality during the 1999–2001 period. The resulting surface concentrations of total gaseous mercury (TGM) are then compared with the observations from worldwide sites. Simulated wet depositions of mercury over the continental United States are compared to the observations from 26 Mercury Deposition Network stations to test the wet deposition simulations. The evaluations of gaseous concentrations and wet deposition confirm a strong capability for the CAM-Chem mercury mechanism to simulate the atmospheric mercury cycle. The results also indicate that mercury pollution in East Asia and Southern Africa is very significant with TGM concentrations above 3.0 ng m−3. The comparison to wet deposition indicates that wet deposition patterns of

  2. General aspects of media reception? – A proposal for a multidimensional model for the analysis of qualitative reception interviews

    Directory of Open Access Journals (Sweden)

    Kim Schrøder

    2003-09-01

    Full Text Available Are there general aspects of the reception of media products that it can be analytically fruitful to orient oneself towards, and that should always be examined when analysing qualitative reception data – and perhaps also already when planning the empirical fieldwork of a reception study? This article proceeds from the premise that this question can be answered in the affirmative, and presents a proposal for what a multidimensional model for qualitative reception analysis could look like.

  3. Analysing $j/\\Psi$ Production in Various RHIC Interactions with a Version of Sequential Chain Model (SCM)

    CERN Document Server

    Guptaroy, P; Sau, Goutam; Biswas, S K; Bhattacharya, S

    2009-01-01

    We have attempted to develop here tentatively a model for $J/\\Psi$ production in p+p, d+Au, Cu + Cu and Au + Au collisions at RHIC energies on the basic ansatz that the results of nucleus-nucleus collisions could be arrived at from the nucleon-nucleon (p + p)-interactions with induction of some additional specific features of high energy nuclear collisions. Based on the proposed new and somewhat unfamiliar model, we have tried (i) to capture the properties of invariant $p_T$ -spectra for $J/\\Psi$ meson production; (ii) to study the nature of centrality dependence of the $p_T$ -spectra; (iii) to understand the rapidity distributions; (iv) to obtain the characteristics of the average transverse momentum $\\langle p_T \\rangle$ and the values of $\\langle p_T^2 \\rangle$ as well and (v) to trace the nature of the nuclear modification factor. The alternative approach adopted here describes the data-sets on the above-mentioned various observables in a fairly satisfactory manner. And, finally, the nature of $J/\\Psi$-production at Large Hadron Collider(LHC)-energ...

  4. A geometric model of plaque incision and graft for Peyronie's disease with geometric analyses of different techniques.

    Science.gov (United States)

    Miranda, Alexandre F; Sampaio, Francisco J B

    2014-06-01

    A surgical approach with plaque incision and graft (PIG) to correct Peyronie's disease is the best method for complex, large deviations. However, the geometric and mechanical consequences of this intervention are poorly understood. The aim of this study was to analyze the geometric and mechanical consequences of PIG in penile straightening surgery. A tridimensional penile simile model with a curvature of 85° was created to test all of the most common PIG techniques. PIG with double-Y, H-shape, and Egydio techniques were used to rectify the curved penile model. The results that differed from a rectified cylinder shape were highlighted. All of the analyzed techniques created a geometric distortion that could be linked to poor surgical results. We suggest a new technique to resolve these abnormalities. Current techniques designed to correct penile deviation using PIG present geometric and mechanical imperfections with potential consequences for the postoperative success rate. The new technique proposed in this report could be a possible solution to the geometric distortion caused by PIG. © 2014 International Society for Sexual Medicine.

  5. Development of modelling tools for thermo-hydraulic analyses and design of JT-60SA TF coils

    Energy Technology Data Exchange (ETDEWEB)

    Lacroix, Benoit, E-mail: benoit.lacroix@cea.fr [CEA, IRFM, F-13108 Saint-Paul-lez-Durance (France); Portafaix, Christophe [CEA, IRFM, F-13108 Saint-Paul-lez-Durance (France); Barabaschi, Pietro [Fusion For Energy, D-85748 Garching (Germany); Duchateau, Jean-Luc; Hertout, Patrick; Lamaison, Valerie; Nicollet, Sylvie; Reynaud, Pascal [CEA, IRFM, F-13108 Saint-Paul-lez-Durance (France); Villari, Rosaria [Euratom-ENEA Association, IT-00044 Frascati (Italy); Zani, Louis [Fusion For Energy, D-85748 Garching (Germany)

    2011-10-15

    In the framework of the JT-60SA project, the design of the Toroidal Field (TF) coils has required reliably deciding between multiple design options and calculating the temperature margin criterion for the superconductor. For this purpose, a tool was developed in two stages, interfacing the ANSYS code, used to model the thermal diffusion between the casing and the winding pack, with the GANDALF code, which solves the 1D thermo-hydraulics inside each conductor. The first version of this Thermo-hydraulic EXtended TOol (TEXTO) was developed to produce conservative results and made it possible to simulate the fast discharge of the magnet, providing valuable results such as the mass flow expelled from each pancake. In the second stage, the ANSYS code was configured to model the helium transport in the casing and in the winding pack, thus computing more realistic transverse heat fluxes to be injected into the GANDALF code for an accurate calculation of the temperature margin. This second version of TEXTO, which integrates the TACOS (Thermo-hydraulic Ansys COmputation Semi 3D) module, has been used to study the feasibility of positioning the helium inlets in the electrical connections. The temperature margin was then found to be close to, but below, the criterion of 1 K.

  6. Whole genome and global gene expression analyses of the model mushroom Flammulina velutipes reveal a high capacity for lignocellulose degradation.

    Directory of Open Access Journals (Sweden)

    Young-Jin Park

    Full Text Available Flammulina velutipes is a fungus with health and medicinal benefits that has been used for consumption and cultivation in East Asia. F. velutipes is also known to degrade lignocellulose and produce ethanol. The overlapping interests of mushroom production and wood bioconversion make F. velutipes an attractive new model for fungal wood related studies. Here, we present the complete sequence of the F. velutipes genome. This is the first sequenced genome for a commercially produced edible mushroom that also degrades wood. The 35.6-Mb genome contained 12,218 predicted protein-encoding genes and 287 tRNA genes assembled into 11 scaffolds corresponding with the 11 chromosomes of strain KACC42780. The 88.4-kb mitochondrial genome contained 35 genes. Well-developed wood degrading machinery with strong potential for lignin degradation (69 auxiliary activities, formerly FOLymes) and carbohydrate degradation (392 CAZymes), along with 58 alcohol dehydrogenase genes were highly expressed in the mycelium, demonstrating the potential application of this organism to bioethanol production. Thus, the newly uncovered wood degrading capacity and sequential nature of this process in F. velutipes, offer interesting possibilities for more detailed studies on either lignin or (hemi-)cellulose degradation in complex wood substrates. The mutual interest in wood degradation by the mushroom industry and (ligno-)cellulose biomass related industries further increase the significance of F. velutipes as a new model.

  7. Whole genome and global gene expression analyses of the model mushroom Flammulina velutipes reveal a high capacity for lignocellulose degradation.

    Science.gov (United States)

    Park, Young-Jin; Baek, Jeong Hun; Lee, Seonwook; Kim, Changhoon; Rhee, Hwanseok; Kim, Hyungtae; Seo, Jeong-Sun; Park, Hae-Ran; Yoon, Dae-Eun; Nam, Jae-Young; Kim, Hong-Il; Kim, Jong-Guk; Yoon, Hyeokjun; Kang, Hee-Wan; Cho, Jae-Yong; Song, Eun-Sung; Sung, Gi-Ho; Yoo, Young-Bok; Lee, Chang-Soo; Lee, Byoung-Moo; Kong, Won-Sik

    2014-01-01

    Flammulina velutipes is a fungus with health and medicinal benefits that has been used for consumption and cultivation in East Asia. F. velutipes is also known to degrade lignocellulose and produce ethanol. The overlapping interests of mushroom production and wood bioconversion make F. velutipes an attractive new model for fungal wood related studies. Here, we present the complete sequence of the F. velutipes genome. This is the first sequenced genome for a commercially produced edible mushroom that also degrades wood. The 35.6-Mb genome contained 12,218 predicted protein-encoding genes and 287 tRNA genes assembled into 11 scaffolds corresponding with the 11 chromosomes of strain KACC42780. The 88.4-kb mitochondrial genome contained 35 genes. Well-developed wood degrading machinery with strong potential for lignin degradation (69 auxiliary activities, formerly FOLymes) and carbohydrate degradation (392 CAZymes), along with 58 alcohol dehydrogenase genes were highly expressed in the mycelium, demonstrating the potential application of this organism to bioethanol production. Thus, the newly uncovered wood degrading capacity and sequential nature of this process in F. velutipes, offer interesting possibilities for more detailed studies on either lignin or (hemi-) cellulose degradation in complex wood substrates. The mutual interest in wood degradation by the mushroom industry and (ligno-)cellulose biomass related industries further increase the significance of F. velutipes as a new model.

  8. 3D thermo-hydro-mechanical-migratory coupling model and FEM analyses for dual-porosity medium

    Institute of Scientific and Technical Information of China (English)

    2010-01-01

    A 3D coupled thermo-hydro-mechanical-migratory model for saturated-unsaturated dual-porosity media was established, in which the stress and temperature fields are single but the seepage and concentration fields are double, and in which the influences of fracture sets, spacings, angles, continuity ratios, and stiffnesses on the constitutive relationship of the medium can be considered. A corresponding three-dimensional finite element program was also developed, and its reliability, along with that of the model, was verified against an existing computation example. Taking a hypothetical nuclear waste repository as a calculation example, a radioactive nuclide leak was simulated numerically with both the rock mass and the buffer treated as unsaturated media, and the temperatures, negative pore pressures, flow velocities, nuclide concentrations and normal stresses in the rock mass were investigated. The results showed that the temperatures, negative pore pressures and nuclide concentrations in the buffer all exhibit nonlinear changes and distributions; that even though the degree of saturation in the pores is only about 1/9 of that in the fractures, the flow velocity of groundwater in the fractures is about 6 times that in the pores, because the permeability coefficient of the fractures is almost four orders of magnitude higher than that of the pores; and that regions of stress concentration occur near the two sides of the boundary between the buffer and the disposal pit wall.

  9. Exergoeconomic performance optimization of an endoreversible intercooled regenerated Brayton cogeneration plant. Part 1: Thermodynamic model and parameter analyses

    Directory of Open Access Journals (Sweden)

    Lingen Chen, Bo Yang, Fengrui Sun

    2011-03-01

    Full Text Available A thermodynamic model of an endoreversible intercooled regenerative Brayton heat and power cogeneration plant coupled to constant-temperature heat reservoirs is established using finite-time thermodynamics in Part 1 of this paper. The heat resistance losses in the hot-, cold- and consumer-side heat exchangers, the intercooler and the regenerator are taken into account. The finite-time exergoeconomic performance of the cogeneration plant is investigated, and analytical formulae for the dimensionless profit rate and exergetic efficiency are derived. The numerical examples show that, for a fixed total pressure ratio, there exists an optimal intercooling pressure ratio that maximizes the dimensionless profit rate, and that for a variable total pressure ratio there exists an optimal total pressure ratio that yields a maximum profit rate. The effects of intercooling, regeneration and the ratio of hot-side heat reservoir temperature to environment temperature on the dimensionless profit rate and the corresponding exergetic efficiency are analyzed. Finally, it is found that there exists an optimal consumer-side temperature that leads to a double-maximum dimensionless profit rate. The profit rate of the model cycle is optimized by optimal allocation of the heat conductance of the heat exchangers in Part 2 of this paper.

  10. Modeling and experimental analyses reveals signaling plasticity in a bi-modular assembly of CD40 receptor activated kinases.

    Directory of Open Access Journals (Sweden)

    Uddipan Sarma

    Full Text Available Depending on the strength of the signal dose, the CD40 receptor (CD40) controls ERK-1/2 and p38MAPK activation. At low signal dose, ERK-1/2 is maximally phosphorylated but p38MAPK is minimally phosphorylated; as the signal dose increases, ERK-1/2 phosphorylation is reduced whereas p38MAPK phosphorylation is reciprocally enhanced. The mechanism of this reciprocal activation of the two MAPKs has remained unelucidated. Here, our computational model, coupled to experimental perturbations, shows that the observed reciprocity is a system-level behavior of an assembly of kinases arranged in two modules. Experimental perturbations with kinase inhibitors suggest that a minimum of two trans-modular negative feedback loops is required to reproduce the experimentally observed reciprocity. The bi-modular architecture of the signaling pathways endows the system with an inherent plasticity, which is further expressed in the skewing of the CD40-induced production of IL-10 and IL-12, the respective anti-inflammatory and pro-inflammatory cytokines. Targeting the plasticity of CD40 signaling significantly reduces Leishmania major infection in a susceptible mouse strain. Thus, for the first time, using CD40 signaling as a model, we show how a bi-modular assembly of kinases imposes reciprocity on receptor signaling. The findings reveal that signaling plasticity is inherent to a reciprocal system and that this principle can be used for designing therapies.

  11. Two Model-Based Methods for Policy Analyses of Fine Particulate Matter Control in China: Source Apportionment and Source Sensitivity

    Science.gov (United States)

    Li, X.; Zhang, Y.; Zheng, B.; Zhang, Q.; He, K.

    2013-12-01

    Anthropogenic emissions have been controlled in recent years in China to mitigate fine particulate matter (PM2.5) pollution. Recent studies show that sulfur dioxide (SO2)-only control cannot reduce total PM2.5 levels efficiently. Other species such as nitrogen oxides, ammonia, black carbon, and organic carbon may be equally important during particular seasons. Furthermore, each species is emitted from several anthropogenic sectors (e.g., industry, power plants, transportation, residential, and agriculture). Conversely, the contribution of one emission sector to PM2.5 aggregates the contributions of all species emitted by that sector. In this work, two model-based methods are used to identify the emission sectors and areas most influential on PM2.5. The first method is source apportionment (SA), based on the Particulate Source Apportionment Technology (PSAT) available in the Comprehensive Air Quality Model with extensions (CAMx), driven by meteorological predictions of the Weather Research and Forecasting (WRF) model. The second method is source sensitivity (SS), based on an adjoint integration technique (AIT) available in the GEOS-Chem model. The SA method attributes simulated PM2.5 concentrations to each emission group, while the SS method calculates their sensitivity to each emission group, accounting for the non-linear relationship between PM2.5 and its precursors. Despite their differences, the complementary nature of the two methods enables a complete analysis of source-receptor relationships to support emission control policies. Our objectives are to quantify the contributions of each emission group/area to PM2.5 in the receptor areas and to intercompare results from the two methods to gain a comprehensive understanding of the role of emission sources in PM2.5 formation. The results will be compared in terms of the magnitudes and rankings of SS or SA of emitted species and emission groups/areas. GEOS-Chem with AIT is applied over East Asia at a horizontal grid

  12. On the Structure of Personality Disorder Traits: Conjoint Analyses of the CAT-PD, PID-5, and NEO-PI-3 Trait Models

    Science.gov (United States)

    Wright, Aidan G.C.; Simms, Leonard J.

    2014-01-01

    The current study examines the relations among contemporary models of pathological and normal-range personality traits. Specifically, we report on (a) conjoint exploratory factor analyses of the Computerized Adaptive Test of Personality Disorder static form (CAT-PD-SF) with the Personality Inventory for the DSM-5 (PID-5; Krueger et al., 2012) and the NEO Personality Inventory-3 First Half (NEO-PI-3FH; McCrae & Costa, 2007), and (b) unfolding hierarchical analyses of the three measures in a large general psychiatric outpatient sample (N = 628; 64% female). A five-factor solution provided conceptually coherent alignment among the CAT-PD-SF, PID-5, and NEO-PI-3FH scales. Hierarchical solutions suggested that higher-order factors bear strong resemblance to dimensions that emerge from structural models of psychopathology (e.g., Internalizing and Externalizing spectra). These results demonstrate that the CAT-PD-SF adheres to the consensual structure of broad trait domains at the five-factor level. Additionally, patterns of scale loadings further inform questions of structure and bipolarity of facet- and domain-level constructs. Finally, hierarchical analyses strengthen the argument for using broad dimensions that span normative and pathological functioning to scaffold a quantitatively derived phenotypic structure of psychopathology to orient future research on explanatory, etiological, and maintenance mechanisms. PMID:24588061

  13. Improved Analyses and Forecasts of Snowpack, Runoff and Drought through Remote Sensing and Land Surface Modeling in Southeastern Europe

    Science.gov (United States)

    Matthews, D.; Brilly, M.; Gregoric, G.; Polajnar, J.; Kobold, M.; Zagar, M.; Knoblauch, H.; Staudinger, M.; Mecklenburg, S.; Lehning, M.; Schweizer, J.; Balint, G.; Cacic, I.; Houser, P.; Pozzi, W.

    2008-12-01

    European hydrometeorological services and research centers are faced with increasing challenges from extremes of weather and climate that require significant investments in new technology and better utilization of existing human and natural resources to provide improved forecasts. Major advances in remote sensing, observation networks, data assimilation, numerical modeling, and communications continue to improve our ability to disseminate information to decision-makers and stakeholders. This paper identifies gaps in current technologies and key research and decision-maker teams, and recommends means for moving forward through focused applied research and integration of results into decision support tools. This paper reports on the WaterNet - NASA Water Cycle Solutions Network contacts in Europe and summarizes progress in improving water-cycle-related decision-making using NASA research results. Products from the Hydrologic Sciences Branch at NASA Goddard Space Flight Center (the Land Information System's (LIS) Land Surface Models (LSM)), SPoRT, CREW, the European Space Agency (ESA), the Joint Research Centre's (JRC) natural hazards products, the Swiss Federal Institute for Snow and Avalanche Research (SLF), and others are discussed. They will be used in collaboration with the ESA and the European Commission to provide solutions for improved prediction of water supplies and stream flow, droughts and floods, and snow avalanches in the major river basins serviced by EARS, ZAMG, SLF, Vituki Consult, and other European forecast centers. This region of Europe includes the Alps and Carpathian Mountains and is an area of extreme topography, with abrupt 2000 m mountains adjacent to the Adriatic Sea. These extremes result in the highest precipitation in Europe (> 5000 mm, in Montenegro) and low precipitation of 300-400 mm at the mouth of the Danube during droughts.
The current flood and drought forecasting systems have a spatial resolution of 9 km, which is currently being

  14. Analysing recent socioeconomic trends in coronary heart disease mortality in England, 2000-2007: a population modelling study.

    Directory of Open Access Journals (Sweden)

    Madhavi Bajekal

    Full Text Available BACKGROUND: Coronary heart disease (CHD) mortality in England fell by approximately 6% every year between 2000 and 2007. However, rates fell differentially between social groups, with inequalities actually widening. We sought to describe the extent to which this reduction in CHD mortality was attributable to changes in either levels of risk factors or treatment uptake, both across and within socioeconomic groups. METHODS AND FINDINGS: A widely used and replicated epidemiological model was used to synthesise estimates stratified by age, gender, and area deprivation quintiles for the English population aged 25 and older between 2000 and 2007. Mortality rates fell, with approximately 38,000 fewer CHD deaths in 2007. The model explained about 86% (95% uncertainty interval: 65%-107%) of this mortality fall. Decreases in major cardiovascular risk factors contributed approximately 34% (21%-47%) to the overall decline in CHD mortality, ranging from about 44% (31%-61%) in the most deprived to 29% (16%-42%) in the most affluent quintile. The biggest contribution came from a substantial fall in systolic blood pressure in the population not on hypertension medication (29%; 18%-40%), more so in deprived (37%) than in affluent (25%) areas. Other risk factor contributions were relatively modest across all social groups: total cholesterol (6%), smoking (3%), and physical activity (2%). Furthermore, these benefits were partly negated by mortality increases attributable to rises in body mass index and diabetes (-9%; -17% to -3%), particularly in more deprived quintiles. Treatments accounted for approximately 52% (40%-70%) of the mortality decline, equitably distributed across all social groups. Lipid reduction (14%), chronic angina treatment (13%), and secondary prevention (11%) made the largest medical contributions. CONCLUSIONS: The model suggests that approximately half the recent CHD mortality fall in England was attributable to improved treatment uptake. This benefit

  15. A model using marginal efficiency of investment to analyse carbon and nitrogen interactions in terrestrial ecosystems (ACONITE Version 1)

    Science.gov (United States)

    Thomas, R. Q.; Williams, M.

    2014-04-01

    Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System modelling community. However, there is little understanding of the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants. Here we describe a new, simple model of ecosystem C-N cycling and interactions (ACONITE) that builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C : N, N fixation, and plant C use efficiency) using emergent constraints provided by marginal returns on investment for C and/or N allocation. We simulated and evaluated steady-state ecosystem stocks and fluxes in three different forest ecosystem types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C : N differed among the three ecosystem types (temperate deciduous demand for N and the marginal return on C investment to acquire N, was an order of magnitude higher in the tropical forest than in the temperate forest, consistent with observations. A sensitivity analysis revealed that parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C : N. Also, a widely used linear leaf N-respiration relationship did not yield a realistic leaf C : N, while a more recently reported non-linear relationship performed better. A parameter governing how photosynthesis scales with day length had the largest influence on total vegetation C, GPP, and NPP. Multiple parameters associated with photosynthesis, respiration, and N uptake influenced the rate of N fixation. 
Overall, our ability to constrain leaf area index and have spatially and temporally variable leaf C : N helps
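The marginal-return logic that ACONITE builds on can be sketched numerically: keep investing C in leaf area while the marginal carbon gain of an extra increment of leaf exceeds its marginal respiratory cost. The function names, curves, and step size below are illustrative stand-ins, not the model's actual interface.

```python
def optimal_leaf_area(gpp, respiration_cost, lai_step=0.01, lai_max=12.0):
    """Toy marginal-return allocation rule.

    gpp(lai) and respiration_cost(lai) are user-supplied functions of leaf
    area index (LAI). LAI is increased while the marginal C gain of one
    more increment of leaf exceeds its marginal respiratory cost.
    """
    lai = 0.0
    while lai < lai_max:
        gain = gpp(lai + lai_step) - gpp(lai)          # marginal C gain
        cost = respiration_cost(lai + lai_step) - respiration_cost(lai)
        if gain <= cost:                               # returns no longer pay
            break
        lai += lai_step
    return lai
```

With a saturating GPP curve and a linear respiration cost, the rule stops where the two marginal curves cross, which is the emergent-constraint idea the abstract describes.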

  16. The energy logistic model for analyses of transportation- and energy systems; Energilogistikmodell foer systemberaekningar av transport- och energifoersoerjningssystem

    Energy Technology Data Exchange (ETDEWEB)

    Blinge, M.

    1995-05-01

    The Energy Logistic Model has been improved to become a tool for the analysis of all production processes, transportation systems, and systems including several energy users and several fuels. Two cases were studied. The first case deals with terminal equipment for intermodal transport systems, and the second with diesel-fuelled trucks, cranes and machines in the Goeteborg area. In both cases, the improvements to city air quality are analyzed when natural gas is substituted for diesel oil. The comparison between intermodal transport and road haulage shows that the environmental impacts of the terminal operations are limited, and that the potential environmental benefit of intermodal transport increases with transportation distance. The choice of electricity production system is of great importance when calculating the environmental impact of railway traffic in the total analysis of the transportation system. 13 refs, 27 tabs

  17. Genome-wide fitness analyses of the foodborne pathogen Campylobacter jejuni in in vitro and in vivo models

    DEFF Research Database (Denmark)

    de Vries, Stefan P. W.; Gupta, Srishti; Baig, Abiyad

    2017-01-01

    Campylobacter is the most common cause of foodborne bacterial illness worldwide. Faecal contamination of meat, especially chicken, during processing represents a key route of transmission to humans. There is a lack of insight into the mechanisms driving C. jejuni growth and survival within hosts and the environment. Here, we report a detailed analysis of C. jejuni fitness across models reflecting stages in its life cycle. Transposon (Tn) gene-inactivation libraries were generated in three C. jejuni strains and the impact on fitness during chicken colonisation, survival in houseflies and under nutrient-rich and -poor conditions at 4 degrees C and infection of human gut epithelial cells was assessed by Tn-insertion site sequencing (Tn-seq). A total of 331 homologous gene clusters were essential for fitness during in vitro growth in three C. jejuni strains, revealing that a large part of its genome is dedicated

  18. [PK/PD Modeling as a Tool for Predicting Bacterial Resistance to Antibiotics: Alternative Analyses of Experimental Data].

    Science.gov (United States)

    Golikova, M V; Strukova, E N; Portnoy, Y A; Firsov, A A

    2015-01-01

    Postexposure number of mutants (NM) is a conventional endpoint in bacterial resistance studies using in vitro dynamic models that simulate antibiotic pharmacokinetics. To compare NM with a recently introduced integral parameter, AUBCM (the area under the time course of resistant mutants), the enrichment of resistant Staphylococcus aureus was studied in vitro by simulation of mono- (daptomycin, doxycycline) and combined treatments (daptomycin + rifampicin, rifampicin + linezolid). Differences in the time courses of resistant S. aureus could be reflected by AUBCM but not NM. Moreover, unlike AUBCM, NM did not reflect the pronounced differences in the time courses of S. aureus mutants resistant to 2x, 4x, 8x and 16xMIC of doxycycline and rifampicin. The findings suggested that AUBCM was a more appropriate endpoint for the amplification of resistant mutants than NM.
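Operationally, an integral endpoint of this kind is just the area under the observed mutant-count time course. A minimal trapezoidal sketch (the function name and units are illustrative; the paper's exact definition is not reproduced in the abstract):

```python
def aubc(times, mutant_counts):
    """Area under the mutant-count-versus-time curve (trapezoidal rule).

    times: sampling times, e.g. hours; mutant_counts: e.g. log10 CFU/mL.
    Unlike a single endpoint count, the area reflects the whole time course.
    """
    if len(times) != len(mutant_counts) or len(times) < 2:
        raise ValueError("need two or more paired observations")
    area = 0.0
    for i in range(len(times) - 1):
        # trapezoid between consecutive samples
        area += 0.5 * (mutant_counts[i] + mutant_counts[i + 1]) * (times[i + 1] - times[i])
    return area
```

Two treatments that end at the same postexposure count but enrich mutants at different rates yield different areas, which is why the integral endpoint can discriminate where the endpoint count cannot.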

  19. The mental health care model in Brazil: analyses of the funding, governance processes, and mechanisms of assessment.

    Science.gov (United States)

    Trapé, Thiago Lavras; Campos, Rosana Onocko

    2017-03-23

    This study aims to analyze the current status of the mental health care model of the Brazilian Unified Health System, according to its funding, governance processes, and mechanisms of assessment. We have carried out a documentary analysis of the ordinances, technical reports, conference reports, normative resolutions, and decrees from 2009 to 2014. This is a time of consolidation of the psychosocial model, with expansion of the health care network and inversion of the funding for community services, with a strong emphasis on the area of crack cocaine and other drugs. Mental health is an underfunded area within the chronically underfunded Brazilian Unified Health System. The governance model constrains the progress of essential services, which creates the need for the incorporation of a process of regionalization of the management. The mechanisms of assessment are not incorporated into the health policy in the bureaucratic field. There is a need to expand the global funding of the area of health, specifically mental health, which has been shown to be a successful policy. The current focus of the policy seems to be archaic in relation to the precepts of the psychosocial model. Mechanisms of assessment need to be expanded.

  20. Mindfulness training promotes upward spirals of positive affect and cognition: multilevel and autoregressive latent trajectory modeling analyses.

    Science.gov (United States)

    Garland, Eric L; Geschwind, Nicole; Peeters, Frenk; Wichers, Marieke

    2015-01-01

    Recent theory suggests that positive psychological processes integral to health may be energized through the self-reinforcing dynamics of an upward spiral to counter emotion dysregulation. The present study examined positive emotion-cognition interactions among individuals in partial remission from depression who had been randomly assigned to treatment with mindfulness-based cognitive therapy (MBCT; n = 64) or a waitlist control condition (n = 66). We hypothesized that MBCT stimulates upward spirals by increasing positive affect and positive cognition. Experience sampling assessed changes in affect and cognition during 6 days before and after treatment, which were analyzed with a series of multilevel and autoregressive latent trajectory models. Findings suggest that MBCT was associated with significant increases in trait positive affect and momentary positive cognition, which were preserved through autoregressive and cross-lagged effects driven by global emotional tone. Findings suggest that daily positive affect and cognition are maintained by an upward spiral that might be promoted by mindfulness training.
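The upward-spiral dynamic described here (affect and cognition each persisting over time and reinforcing one another through cross-lagged effects) can be illustrated with a toy coupled autoregressive system. All coefficients below are illustrative, not estimates from the study.

```python
def simulate_upward_spiral(steps=50, ar=0.5, cross=0.3, boost=0.2):
    """Toy two-variable autoregressive system illustrating an upward spiral.

    pa = positive affect, pc = positive cognition. Each persists with
    autoregressive weight `ar`, reinforces the other with cross-lagged
    weight `cross`, and `boost` stands in for a treatment-induced lift
    to affect. Returns the (pa, pc) trajectory.
    """
    pa, pc = 0.0, 0.0
    trajectory = []
    for _ in range(steps):
        # simultaneous update from the previous time point
        pa, pc = ar * pa + cross * pc + boost, ar * pc + cross * pa
        trajectory.append((pa, pc))
    return trajectory
```

Because the cross-lagged loop gain (ar + cross = 0.8) is below 1, the spiral converges to a stable elevated equilibrium rather than diverging, mirroring the idea that gains are preserved rather than runaway.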

  1. Studies and analyses of the space shuttle main engine. Failure information propagation model data base and software

    Science.gov (United States)

    Tischer, A. E.

    1987-01-01

    The failure information propagation model (FIPM) data base was developed to store and manipulate the large amount of information anticipated for the various Space Shuttle Main Engine (SSME) FIPMs. The organization and structure of the FIPM data base is described, including a summary of the data fields and key attributes associated with each FIPM data file. The menu-driven software developed to facilitate and control the entry, modification, and listing of data base records is also discussed. The transfer of the FIPM data base and software to the NASA Marshall Space Flight Center is described. Complete listings of all of the data base definition commands and software procedures are included in the appendixes.

  2. Mindfulness training promotes upward spirals of positive affect and cognition: multilevel and autoregressive latent trajectory modeling analyses

    Science.gov (United States)

    Garland, Eric L.; Geschwind, Nicole; Peeters, Frenk; Wichers, Marieke

    2015-01-01

    Recent theory suggests that positive psychological processes integral to health may be energized through the self-reinforcing dynamics of an upward spiral to counter emotion dysregulation. The present study examined positive emotion–cognition interactions among individuals in partial remission from depression who had been randomly assigned to treatment with mindfulness-based cognitive therapy (MBCT; n = 64) or a waitlist control condition (n = 66). We hypothesized that MBCT stimulates upward spirals by increasing positive affect and positive cognition. Experience sampling assessed changes in affect and cognition during 6 days before and after treatment, which were analyzed with a series of multilevel and autoregressive latent trajectory models. Findings suggest that MBCT was associated with significant increases in trait positive affect and momentary positive cognition, which were preserved through autoregressive and cross-lagged effects driven by global emotional tone. Findings suggest that daily positive affect and cognition are maintained by an upward spiral that might be promoted by mindfulness training. PMID:25698988

  3. Substrate recognition and motion mode analyses of PFV integrase in complex with viral DNA via coarse-grained models.

    Directory of Open Access Journals (Sweden)

    Jianping Hu

    Full Text Available HIV-1 integrase (IN) is an important target in the development of drugs against the AIDS virus. Drug design based on the structure of IN has been markedly hampered by the lack of three-dimensional structural information on the HIV-1 IN-viral DNA complex. The prototype foamy virus (PFV) IN has high functional and structural homology with HIV-1 IN. Recently, the X-ray crystal structure of PFV IN in complex with its cognate viral DNA was obtained. In this study, both the Gaussian network model (GNM) and the anisotropic network model (ANM) were applied to comparatively investigate the motion modes of DNA-free and DNA-bound PFV IN. The results show that the motion mode of PFV IN changes only slightly after binding with DNA. The motion of this enzyme favors association with DNA, and the binding ability is determined by its intrinsic structural topology. Molecular docking experiments were performed to obtain the binding modes of a series of diketo acid (DKA) inhibitors with the PFV IN conformations obtained from ANM, confirming the dependability of the PFV IN-DNA structure for screening strand transfer (ST) inhibitors. It is also found that the functional groups keto-enol, bis-diketo, tetrazole and azido play a key role in aiding the recognition of viral DNA, and thus ultimately increase the inhibitory capability of the corresponding DKA inhibitor. Our study provides theoretical information and will aid the design of anti-AIDS drugs based on the structure of IN.
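In a GNM, residue connectivity alone determines the fluctuation profile: a Kirchhoff (connectivity) matrix is built from a distance cutoff on C-alpha coordinates, and per-residue mean-square fluctuations follow from its non-zero modes. A minimal numpy sketch (the 7 angstrom cutoff is a common GNM convention; the overall kT/gamma scale factor is omitted):

```python
import numpy as np

def gnm_fluctuations(coords, cutoff=7.0):
    """Gaussian network model residue fluctuation profile.

    coords: (N, 3) array of C-alpha coordinates; cutoff in angstroms.
    Returns relative mean-square fluctuations per residue.
    """
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    kirchhoff = -(d < cutoff).astype(float)     # -1 for each contact pair
    np.fill_diagonal(kirchhoff, 0.0)            # clear the self-distance term
    np.fill_diagonal(kirchhoff, -kirchhoff.sum(axis=1))  # diagonal = degree
    vals, vecs = np.linalg.eigh(kirchhoff)
    # discard the zero (rigid-body) mode; weight each mode by 1/eigenvalue
    return (vecs[:, 1:] ** 2 / vals[1:]).sum(axis=1)
```

Comparing these profiles for DNA-free versus DNA-bound coordinates is the kind of motion-mode comparison the abstract describes; ANM extends the same idea to directional (3N-dimensional) modes.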

  4. Selection of reliable biomarkers from PCR array analyses using relative distance computational model: methodology and proof-of-concept study.

    Directory of Open Access Journals (Sweden)

    Chunsheng Liu

    Full Text Available It is increasingly evident that monitoring chemical exposure through biomarkers is difficult, as almost all biomarkers proposed so far are not specific to any individual chemical. In this proof-of-concept study, adult male zebrafish (Danio rerio) were exposed to 5 or 25 µg/L 17β-estradiol (E2), 100 µg/L lindane, 5 nM 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) or 15 mg/L arsenic for 96 h, and the expression profiles of 59 genes involved in 7 pathways, plus 2 well characterized biomarker genes, vtg1 (vitellogenin1) and cyp1a1 (cytochrome P450 1A1), were examined. A relative distance (RD) computational model was developed to screen favorable genes and generate appropriate gene sets for the differentiation of the chemicals/concentrations selected. Our results demonstrated that the known biomarker genes were not always good candidates for differentiating a given pair of chemicals/concentrations, and other genes had higher potential in some cases. Furthermore, the differentiation of the 5 chemicals/concentrations examined was attainable using expression data of various gene sets, and the best combination was a set consisting of 50 genes; however, as few as two genes (e.g. vtg1 and hspa5 [heat shock protein 5]) were sufficient to differentiate the five chemical/concentration groups in the present test. These observations suggest that multi-parameter arrays should be more reliable for biomonitoring of chemical exposure than traditional biomarkers, and the RD computational model provides an effective tool for the selection of parameters and generation of parameter sets.

  5. Selection of reliable biomarkers from PCR array analyses using relative distance computational model: methodology and proof-of-concept study.

    Science.gov (United States)

    Liu, Chunsheng; Xu, Hongyan; Lam, Siew Hong; Gong, Zhiyuan

    2013-01-01

    It is increasingly evident that monitoring chemical exposure through biomarkers is difficult, as almost all biomarkers proposed so far are not specific to any individual chemical. In this proof-of-concept study, adult male zebrafish (Danio rerio) were exposed to 5 or 25 µg/L 17β-estradiol (E2), 100 µg/L lindane, 5 nM 2,3,7,8-tetrachlorodibenzo-p-dioxin (TCDD) or 15 mg/L arsenic for 96 h, and the expression profiles of 59 genes involved in 7 pathways, plus 2 well characterized biomarker genes, vtg1 (vitellogenin1) and cyp1a1 (cytochrome P450 1A1), were examined. A relative distance (RD) computational model was developed to screen favorable genes and generate appropriate gene sets for the differentiation of the chemicals/concentrations selected. Our results demonstrated that the known biomarker genes were not always good candidates for differentiating a given pair of chemicals/concentrations, and other genes had higher potential in some cases. Furthermore, the differentiation of the 5 chemicals/concentrations examined was attainable using expression data of various gene sets, and the best combination was a set consisting of 50 genes; however, as few as two genes (e.g. vtg1 and hspa5 [heat shock protein 5]) were sufficient to differentiate the five chemical/concentration groups in the present test. These observations suggest that multi-parameter arrays should be more reliable for biomonitoring of chemical exposure than traditional biomarkers, and the RD computational model provides an effective tool for the selection of parameters and generation of parameter sets.
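The abstract does not reproduce the paper's RD formula, but the core idea (scoring how far apart two expression profiles sit, per candidate gene set) can be sketched with a hypothetical normalized distance. Everything below, including the normalization, is an illustrative stand-in, not the published definition.

```python
import math

def relative_distance(profile_a, profile_b):
    """Illustrative relative distance between two expression profiles.

    Each profile is a vector of per-gene responses (e.g. log2 fold changes)
    for a chosen gene set. Euclidean distance is normalized by the combined
    vector magnitude, so 0 means identical profiles and larger values mean
    better separation. (Hypothetical formula, not the paper's exact RD.)
    """
    diff = sum((a - b) ** 2 for a, b in zip(profile_a, profile_b))
    norm = sum(a * a for a in profile_a) + sum(b * b for b in profile_b)
    return math.sqrt(diff / norm) if norm else 0.0
```

Screening then amounts to computing such a score for every pair of chemical/concentration groups and every candidate gene set, and keeping the sets whose minimum pairwise score is largest.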

  6. Selenotranscriptomic Analyses Identify Signature Selenoproteins in Brain Regions in a Mouse Model of Parkinson’s Disease

    Science.gov (United States)

    Zhu, Hui; Sun, Sheng-Nan; Zheng, Jing; Fan, Hui-Hui; Wu, Hong-Mei; Chen, Song-Fang; Cheng, Wen-Hsing; Zhu, Jian-Hong

    2016-01-01

    Genes of the selenoproteome have been increasingly implicated in various aspects of neurobiology and neurological disorders, but remain largely elusive in Parkinson’s disease (PD). In this study, we investigated the selenotranscriptome (24 selenoproteins in total) in five brain regions (cerebellum, substantia nigra, cortex, pons and hippocampus) by real-time qPCR in a two-phase manner using a mouse model of chronic PD. A wide range of changes in the selenotranscriptome was observed in a manner depending on selenoproteins and brain regions. While Selv mRNA was not detectable and Dio1 and Dio3 mRNA levels were not affected, 1, 11 and 9 selenoproteins displayed patterns of increase only, decrease only, and mixed response, respectively, in these brain regions of PD mice. In particular, the mRNA expression of Gpx1-4 showed only a decreasing trend in the PD mouse brains. In substantia nigra, levels of 17 selenoprotein mRNAs were significantly decreased, whereas no selenoprotein was up-regulated in the PD mice. In contrast, the majority of the selenotranscriptome did not change, and the few selenoprotein mRNAs that responded displayed a mixed pattern of up- and down-regulation in cerebellum, cortex, hippocampus, and/or pons of the PD mice. Gpx4, Sep15, Selm, Sepw1, and Sepp1 mRNAs were most abundant across all five brain regions. Our results show differential responses of selenoproteins in various brain regions of the PD mouse model, providing critical selenotranscriptomic profiling for future functional investigation of individual selenoproteins in PD etiology. PMID:27656880
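The abstract does not state its normalization scheme, but real-time qPCR comparisons of this kind are conventionally reported as 2^-ΔΔCt fold changes (the Livak method), normalizing each target gene to a reference gene and each PD sample to a control. The Ct values below are illustrative.

```python
def fold_change_ddct(ct_target_case, ct_ref_case, ct_target_ctrl, ct_ref_ctrl):
    """Standard 2^-ΔΔCt fold change for real-time qPCR (Livak method).

    Normalizes the target gene's cycle threshold (Ct) to a reference gene
    within each sample, then compares case against control. Assumes
    ~100% amplification efficiency for both assays.
    """
    ddct = (ct_target_case - ct_ref_case) - (ct_target_ctrl - ct_ref_ctrl)
    return 2.0 ** (-ddct)  # >1 means up-regulated in the case sample
```

For example, a target that comes up two cycles closer to its reference in a PD sample than in control corresponds to a four-fold up-regulation.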

  7. Using hierarchical linear models to test differences in Swedish results from OECD’s PISA 2003: Integrated and subject-specific science education

    Directory of Open Access Journals (Sweden)

    Maria Åström

    2012-06-01

    Full Text Available The possible effects of different organisations of the science curriculum in schools participating in PISA 2003 are tested with a two-level hierarchical linear model (HLM). The analysis is based on science results. Swedish schools are free to choose how they organise the science curriculum: they may choose to work subject-specifically (with Biology, Chemistry and Physics), integrated (with Science), or to mix these two. In this study, all three ways of organising science classes in compulsory school are present to some degree. None of the different ways of organising science education displayed statistically significantly better student results in scientific literacy as measured in PISA 2003. The HLM model used variables of gender, country of birth, home language, preschool attendance, and an economic, social and cultural index, as well as the teaching organisation.
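
A two-level HLM of the kind used above (students nested in schools, with a random school intercept) can be sketched as follows. The data, variable names, and effect sizes are invented for illustration; only the model structure mirrors the study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated two-level data: 30 schools, 20 students each.
rng = np.random.default_rng(1)
n_schools, n_students = 30, 20
school = np.repeat(np.arange(n_schools), n_students)
school_effect = rng.normal(0, 5, n_schools)[school]  # level-2 variation
ses = rng.normal(0, 1, n_schools * n_students)       # level-1 covariate
score = 500 + 20 * ses + school_effect + rng.normal(0, 10, n_schools * n_students)
data = pd.DataFrame({"score": score, "ses": ses, "school": school})

# Level 1: score_ij = b0_j + b1 * ses_ij + e_ij
# Level 2: b0_j = g00 + u_j  (random intercept per school)
model = smf.mixedlm("score ~ ses", data, groups=data["school"])
result = model.fit()
print(result.params["ses"])  # fixed-effect slope, close to the simulated 20
```

The random intercept is what separates within-school from between-school variance; adding more level-1 covariates (as the PISA analysis does) only changes the formula string.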

  8. Greenhouse gas network design using backward Lagrangian particle dispersion modelling – Part 2: Sensitivity analyses and South African test case

    Directory of Open Access Journals (Sweden)

    A. Nickless

    2014-05-01

    Full Text Available This is the second part of a two-part paper considering network design based on a Lagrangian stochastic particle dispersion model (LPDM), aimed at reducing the uncertainty of the flux estimates achievable for the region of interest through continuous observation of atmospheric CO2 concentrations at fixed monitoring stations. The LPDM, which can be used to derive the sensitivity matrix used in an inversion, was run for each potential site for the months of July (representative of the Southern Hemisphere winter) and January (summer). The magnitude of the boundary contributions to each potential observation site was tested to determine whether it should be included in the network design, but it was found to be minimal. Using the Bayesian inverse modelling technique, the sensitivity matrix, together with the prior estimates for the covariance matrices of the observations and surface fluxes, was used to calculate the posterior covariance matrix of the estimated fluxes, which in turn was used to calculate the cost function of the optimisation procedure. The optimisation procedure was carried out for South Africa under a standard set of conditions, similar to those applied to the Australian test case in Part 1, for both months and for the two months combined. The conditions were then changed subtly, one at a time, the optimisation routine was re-run under each set of modified conditions, and the result was compared to the original optimal network design. The results showed that changing the height of the surface grid cells, including an uncertainty estimate for the oceans, or increasing the night-time observational uncertainty did not result in any major changes in the positioning of the stations relative to the basic design, but changing the covariance matrix or increasing the spatial resolution did. The genetic algorithm was able to find a slightly better solution than the incremental optimisation procedure, but did not drastically alter the solution compared to the standard case.
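
The posterior-covariance step described above is the standard linear Bayesian inversion update, A = (Hᵀ R⁻¹ H + B⁻¹)⁻¹, where H is the LPDM-derived sensitivity matrix, R the observation-error covariance and B the prior flux covariance; a scalar cost such as trace(A) is then minimised over candidate networks. The matrices below are toy stand-ins, not the paper's data.

```python
import numpy as np

def posterior_covariance(H, R, B):
    """Posterior flux-error covariance of a linear Bayesian inversion.

    H : (n_obs, n_flux) sensitivity (Jacobian) matrix from the transport model
    R : (n_obs, n_obs) observation-error covariance
    B : (n_flux, n_flux) prior flux-error covariance
    """
    Rinv = np.linalg.inv(R)
    Binv = np.linalg.inv(B)
    return np.linalg.inv(H.T @ Rinv @ H + Binv)

rng = np.random.default_rng(2)
H = rng.normal(size=(8, 4))   # 8 hypothetical stations, 4 flux regions
R = 0.5 * np.eye(8)           # toy observation uncertainty
B = 2.0 * np.eye(4)           # toy prior flux uncertainty
A = posterior_covariance(H, R, B)
print(np.trace(A))            # uncertainty metric minimised by the design
```

Because Hᵀ R⁻¹ H is positive semi-definite, adding informative observation rows can only shrink trace(A) relative to the prior, which is why the incremental optimisation can add stations one at a time.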

  9. Multiple Deprivation, Severity and Latent Sub-Groups: Advantages of Factor Mixture Modelling for Analysing Material Deprivation.

    Science.gov (United States)

    Najera Catalan, Hector E

    2017-01-01

    Material deprivation takes different forms and manifestations. Two individuals with the same deprivation score (i.e. number of deprivations) are likely to be unable to afford or access entirely or partially different sets of goods and services; for instance, one individual may fail to purchase clothes and consumer durables while another may lack access to healthcare and be deprived of adequate housing. As such, the number of possible patterns or combinations of multiple deprivation becomes increasingly complex for a higher number of indicators. Given this difficulty, there is interest in poverty research in understanding multiple deprivation, as this analysis might lead to the identification of meaningful population sub-groups that could be the subjects of specific policies. This article applies a factor mixture model (FMM) to a real dataset and discusses its conceptual and empirical advantages and disadvantages with respect to other methods that have been used in poverty research. The exercise suggests that FMM is based on more sensible assumptions (i.e. deprivations covary within each class), provides valuable information with which to understand multiple deprivation, and is useful for understanding the severity of deprivation and the additive properties of deprivation indicators.

  10. An automatic generation of non-uniform mesh for CFD analyses of image-based multiscale human airway models

    Science.gov (United States)

    Miyawaki, Shinjiro; Tawhai, Merryn H.; Hoffman, Eric A.; Lin, Ching-Long

    2014-11-01

    The authors have developed a method to automatically generate non-uniform CFD meshes for image-based human airway models. The sizes of generated tetrahedral elements vary in both radial and longitudinal directions to account for the boundary layer and the multiscale nature of pulmonary airflow. The proposed method takes advantage of our previously developed centerline-based geometry reconstruction method. In order to generate the mesh branch by branch in parallel, we used the open-source programs Gmsh and TetGen for surface and volume meshes, respectively. Both programs can specify element sizes by means of a background mesh. The size of an arbitrary element in the domain is a function of wall distance, element size on the wall, and element size at the center of the airway lumen. The element sizes on the wall are computed based on local flow rate and airway diameter. The total number of elements in the non-uniform mesh (10 M) was about half of that in the uniform mesh, although the computational time for the non-uniform mesh was about twice as long (170 min). The proposed method generates CFD meshes with fine elements near the wall and smooth variation of element size in the longitudinal direction, which are required, e.g., for simulations with high flow rate. NIH Grants R01-HL094315, U01-HL114494, and S10-RR022421. Computer time provided by XSEDE.
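
A size field of the kind described (element size as a function of wall distance, wall size, and lumen-centre size) can be sketched as below. The linear blend is an assumption for illustration; the paper does not state the exact functional form.

```python
def element_size(wall_distance, lumen_radius, size_wall, size_center):
    """Target edge length at a point in the airway lumen, blending
    linearly from the fine near-wall size to the coarser size at the
    centreline (illustrative functional form, not the authors' exact one).

    wall_distance : distance from the point to the airway wall
    lumen_radius  : local airway radius (wall_distance <= lumen_radius)
    """
    t = min(max(wall_distance / lumen_radius, 0.0), 1.0)
    return size_wall + t * (size_center - size_wall)

# Near the wall the mesh is fine; toward the centre it coarsens smoothly.
print(element_size(0.0, 1.0, 0.05, 0.4))  # fine size at the wall
print(element_size(1.0, 1.0, 0.05, 0.4))  # coarse size at the centreline
```

Evaluating such a function on a background mesh is exactly how Gmsh and TetGen are told where to refine.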

  11. mRNA and microRNA transcriptomics analyses in a murine model of dystrophin loss and therapeutic restoration

    Directory of Open Access Journals (Sweden)

    Thomas C. Roberts

    2016-03-01

    Full Text Available Duchenne muscular dystrophy (DMD) is a pediatric, X-linked, progressive muscle-wasting disorder caused by loss-of-function mutations affecting the gene encoding the dystrophin protein. While the primary genetic insult in DMD is well described, many details of the molecular and cellular pathologies that follow dystrophin loss are incompletely understood. To investigate gene expression in dystrophic muscle we have applied mRNA and microRNA (miRNA) microarray technology to the mdx mouse model of DMD. This study was designed to generate a complete description of gene expression changes associated with dystrophic pathology and the response to an experimental therapy which restores dystrophin protein function. These datasets have enabled (1) the determination of gene expression changes associated with dystrophic pathology, (2) identification of differentially expressed genes that are restored towards wild-type levels after therapeutic dystrophin rescue, (3) investigation of the correlation between mRNA and protein expression (determined by parallel mass spectrometry proteomics analysis), and (4) prediction of pathology-associated miRNA-target interactions. Here we describe in detail how the data were generated, including the basic analysis as contained in the manuscript published in Human Molecular Genetics with PMID 26385637. The data have been deposited in the Gene Expression Omnibus (GEO) with the accession number GSE64420.

  12. A Hamiltonian approach to model and analyse networks of nonlinear oscillators: Applications to gyroscopes and energy harvesters

    Indian Academy of Sciences (India)

    Pietro-Luciano Buono; Bernard Chan; Antonio Palacios; Visarath In

    2015-11-01

    Over the past twelve years, ideas and methods from nonlinear dynamical systems theory, in particular group-theoretical methods in bifurcation theory, have been used to study, design, and fabricate novel engineering technologies. For instance, the existence and stability of heteroclinic cycles in coupled bistable systems has been exploited to develop and deploy highly sensitive, low-power magnetic and electric field sensors. Also, patterns of behaviour in networks of oscillators with certain symmetry groups have been extensively studied, and the results have been applied to conceptualize a multifrequency up/down converter, a channelizer to lock into incoming signals, and a microwave signal generator at the nanoscale. In this manuscript, a review of the most recent work on modelling and analysis of two seemingly different systems, an array of gyroscopes and an array of energy harvesters, is presented. Empirical values of operational parameters suggest that damping and external forcing occur at a lower scale compared to other parameters, so that the individual units can be treated as Hamiltonian systems. Casting the governing equations in Hamiltonian form leads to a common approach to study both arrays. More importantly, the approach yields analytical expressions for the onset of bifurcations to synchronized oscillations. The expressions are valid for arrays of any size, and the ensuing synchronized oscillations are critical to enhance performance.

  13. Self efficacy for fruit, vegetable and water intakes: Expanded and abbreviated scales from item response modeling analyses

    Directory of Open Access Journals (Sweden)

    Cullen Karen W

    2010-03-01

    Full Text Available Abstract Objective To improve an existing measure of fruit and vegetable intake self-efficacy by including items that varied in level of difficulty, and to test a corresponding measure of water intake self-efficacy. Design Cross-sectional assessment. Items were modified to have easy, moderate and difficult levels of self-efficacy. Classical test theory and item response modeling were applied. Setting One middle school at each of seven participating sites (Houston TX, Irvine CA, Philadelphia PA, Pittsburgh PA, Portland OR, rural NC, and San Antonio TX). Subjects 714 6th grade students. Results Adding items to reflect level (low, medium, high) of self-efficacy for fruit and vegetable intake achieved scale reliability and validity comparable to existing scales, but the distribution of items across the latent variable did not improve. Selecting items from among clusters of items at similar levels of difficulty along the latent variable resulted in an abbreviated scale with psychometric characteristics comparable to the full scale, except for reliability. Conclusions The abbreviated scale can reduce participant burden. Additional research is necessary to generate items that distribute better across the latent variable. Additional items may need to tap confidence in overcoming more diverse barriers to dietary intake.
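
The core idea, items spread along a latent self-efficacy variable by difficulty, can be sketched with a Rasch (1-parameter logistic) model. The difficulties below are hypothetical, not the published item parameters.

```python
import numpy as np

def rasch_prob(theta, difficulty):
    """Rasch (1PL) probability that a respondent at latent level theta
    endorses an item of the given difficulty."""
    return 1.0 / (1.0 + np.exp(-(theta - difficulty)))

# Hypothetical easy / moderate / hard fruit-and-vegetable items.
difficulties = np.array([-1.5, 0.0, 1.5])
for theta in (-2.0, 0.0, 2.0):
    print(theta, np.round(rasch_prob(theta, difficulties), 2))
```

Items whose difficulties cluster at the same point of the latent variable are largely redundant, which is the rationale for keeping one item per cluster in the abbreviated scale.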

  14. EXPERIMENTAL DATA, THERMODYNAMIC MODELING AND SENSITIVITY ANALYSES FOR THE PURIFICATION STEPS OF ETHYL BIODIESEL FROM FODDER RADISH OIL PRODUCTION

    Directory of Open Access Journals (Sweden)

    R. C. Basso

    Full Text Available Abstract The goals of this work were to present original liquid-liquid equilibrium data for the system containing glycerol + ethanol + ethyl biodiesel from fodder radish oil, including the individual distribution of each ethyl ester; to adjust binary parameters of the NRTL model; to compare NRTL and UNIFAC-Dortmund in the LLE representation of the system containing glycerol; and to simulate different mixer/settler flowsheets for biodiesel purification, evaluating the ratio of water to biodiesel used. In the thermodynamic modeling, the deviations between experimental data and calculated values were 0.97% and 3.6%, respectively, using NRTL and UNIFAC-Dortmund. After transesterification with 3 moles of excess ethanol, removal of this component until a content equal to 0.08 before an ideal settling step allows a glycerol content lower than 0.02% in the ester-rich phase. Removal of ethanol, glycerol and water from biodiesel can be performed with countercurrent mixer/settlers, using 0.27% of water in relation to the ester amount in the feed stream.

  15. Analysing star cluster populations with stochastic models: the HST/WFC3 sample of clusters in M83

    CERN Document Server

    Fouesneau, Morgan; Chandar, Rupali; Whitmore, Bradley C

    2012-01-01

    The majority of clusters in the Universe have masses well below 10^5 Msun. Hence their integrated fluxes and colors can be affected by the random presence of a few bright stars introduced by stochastic sampling of the stellar mass function. Specific methods are being developed to extend the analysis of cluster SEDs into the low-mass regime. In this paper, we apply such a method to observations of star clusters, in the nearby spiral galaxy M83. We reassess ages and masses of a sample of 1242 objects for which UBVIHalpha fluxes were obtained with the HST/WFC3 images. Synthetic clusters with known properties are used to characterize the limitations of the method. The ensemble of color predictions of the discrete cluster models are in good agreement with the distribution of observed colors. We emphasize the important role of the Halpha data in the assessment of the fraction of young objects, particularly in breaking the age-extinction degeneracy that hampers an analysis based on UBVI only. We find the mass distri...

  16. Validation of an osteoporotic animal model for dental implant analyses: an in vivo densitometric study in rabbits.

    Science.gov (United States)

    Martin-Monge, Elena; Tresguerres, Isabel F; Blanco, Luis; Khraisat, Ameen; Rodríguez-Torres, Rosa; Tresguerres, Jesús A F

    2011-01-01

    The achievement of primary stability in porous and soft bone, where implants are more likely to fail, is one of the unresolved challenges of implant dentistry. Therefore, the aim of the study was to validate an osteoporotic animal model for analysis of poor-quality bone. Sixteen female New Zealand rabbits, each 6 months old and weighing 4 to 5 kg, were used in this study. The animals were anesthetized, and an in vivo densitometric analysis was performed by dual-energy x-ray absorptiometry (DEXA) to measure bone mineral density (BMD) in the calvaria, cervical spine, and tibia. Ovariectomy was then performed, and animals were fed a low-calcium diet that featured 0.07% calcium, rather than the 0.45% calcium of a standard diet, for 6 weeks. After this period, new densitometric measurements were carried out. Two-way analysis of variance was used for statistical evaluation. A P value of less than .05 was considered to be significant. Together, ovariectomy and a low-calcium diet were able to induce a quick decrease in BMD, as measured at 6 weeks by DEXA. This decrease was statistically significant in the calvaria (P < .001) and the cervical spine (P < .05) but not in the tibia. Based upon this study, ovariectomy and a low-calcium diet are able to induce experimental osteoporosis in rabbits in a short period of time.

  17. Pair dynamics and the intermolecular nuclear Overhauser effect (NOE) in liquids analysed by simulation and model theories: application to an ionic liquid.

    Science.gov (United States)

    Gabl, Sonja; Schröder, Christian; Braun, Daniel; Weingärtner, Hermann; Steinhauser, Othmar

    2014-05-14

    Combining simulation and model theories, this paper analyses the impact of pair dynamics on the intermolecular nuclear Overhauser effect (NOE) in liquids. For the first time, we give a distance-resolved NOE. When applied to the ionic liquid 1-ethyl-3-methyl-imidazolium tetrafluoroborate, the NOE turns out to be of long-range nature. This behaviour translates to the experimentally measured cross- and longitudinal relaxation rates. We were able to calculate the heteronuclear NOE from simulation data, despite the high computational effort. Model theories are computationally less demanding and cover the complete frequency range of the respective spectral density function; however, they are usually based on a very simple pair distribution function and the solution of the diffusion equation. In order to model the simulated data sufficiently well, these simplifications in structure and dynamics have to be generalised considerably.

  18. Thermodynamic modeling to analyse composition of carbonaceous coatings of MnO and other oxides of manganese grown by MOCVD

    Indian Academy of Sciences (India)

    Sukanya Dhar; A Varade; S A Shivashankar

    2011-02-01

    Equilibrium thermodynamic analysis has been applied to the low-pressure MOCVD process using manganese acetylacetonate as the precursor. "CVD phase stability diagrams" have been constructed separately for processes carried out in argon and oxygen ambients, depicting the compositions of the resulting films as functions of the CVD parameters. For the process conducted in argon ambient, the analysis predicts the simultaneous deposition of MnO and elemental carbon in 1:3 molar proportion over a range of temperatures. The analysis also predicts that, if CVD is carried out in oxygen ambient, even a very low flow of oxygen leads to the complete absence of carbon in the deposited film, with greater oxygen flow resulting in the simultaneous deposition of two different manganese oxides under certain conditions. The results of the thermodynamic modeling have been verified quantitatively for low-pressure CVD conducted in argon ambient. Indeed, the large excess of carbon in the deposit is found to constitute a MnO/C nanocomposite, the associated cauliflower-like morphology making it a promising candidate for electrode material in supercapacitors. CVD carried out in oxygen flow, under specific conditions, leads to the deposition of more than one manganese oxide, as expected from thermodynamic analysis (and forming an oxide-oxide nanocomposite). These results together demonstrate that thermodynamic analysis of the MOCVD process can be employed to synthesize thin films in a predictive manner, thus avoiding the inefficient trial-and-error method usually associated with MOCVD process development. The prospect of developing thin films of novel compositions and characteristics in a predictive manner, through the appropriate choice of CVD precursors and process conditions, emerges from the present work.

  19. A high-resolution and harmonized model approach for reconstructing and analysing historic land changes in Europe

    Science.gov (United States)

    Fuchs, R.; Herold, M.; Verburg, P. H.; Clevers, J. G. P. W.

    2013-03-01

    Human-induced land use changes are nowadays the second largest contributor to atmospheric carbon dioxide after fossil fuel combustion. Existing historic land change reconstructions on the European scale do not sufficiently meet the requirements of greenhouse gas (GHG) and climate assessments, due to insufficient spatial and thematic detail and limited consideration of different land change types. This paper investigates if the combination of different data sources, more detailed modelling techniques, and the integration of land conversion types allow us to create accurate, high-resolution historic land change data for Europe suited to the needs of GHG and climate assessments. We validated our reconstruction with historic aerial photographs from 1950 and 1990 for 73 sample sites across Europe and compared it with other land reconstructions like Klein Goldewijk et al. (2010, 2011), Ramankutty and Foley (1999), Pongratz et al. (2008) and Hurtt et al. (2006). The results indicate that almost 700 000 km2 (15.5%) of land cover in Europe has changed over the period 1950-2010, an area similar to France. In Southern Europe the relative amount was almost 3.5% higher than average (19%). Based on the results, the specific types of conversion, hot-spots of change and their relation to political decisions and socio-economic transitions were studied. The analysis indicates that the main drivers of land change over the studied period were urbanization, the reforestation program resulting from the timber shortage after the Second World War, the fall of the Iron Curtain, the Common Agricultural Policy and accompanying afforestation actions of the EU. Compared to existing land cover reconstructions, the new method considers the harmonization of different datasets, achieving a high spatial resolution and regional detail with full coverage of different land categories. These characteristics allow the data to be used to support and improve ongoing GHG inventories and climate research.

  20. Genetic and functional analyses of SHANK2 mutations suggest a multiple hit model of autism spectrum disorders.

    Directory of Open Access Journals (Sweden)

    Claire S Leblond

    2012-02-01

    Full Text Available Autism spectrum disorders (ASD) are a heterogeneous group of neurodevelopmental disorders with a complex inheritance pattern. While many rare variants in synaptic proteins have been identified in patients with ASD, little is known about their effects at the synapse and their interactions with other genetic variations. Here, following the discovery of two de novo SHANK2 deletions by the Autism Genome Project, we identified a novel 421 kb de novo SHANK2 deletion in a patient with autism. We then sequenced SHANK2 in 455 patients with ASD and 431 controls and integrated these results with those reported by Berkel et al. 2010 (n = 396 patients and n = 659 controls). We observed a significant enrichment of variants affecting conserved amino acids in 29 of 851 (3.4%) patients and in 16 of 1,090 (1.5%) controls (P = 0.004, OR = 2.37, 95% CI = 1.23-4.70). In neuronal cell cultures, the variants identified in patients were associated with a reduced synaptic density at dendrites compared to the variants only detected in controls (P = 0.0013). Interestingly, the three patients with de novo SHANK2 deletions also carried inherited CNVs at 15q11-q13 previously associated with neuropsychiatric disorders. In two cases, the nicotinic receptor CHRNA7 was duplicated and in one case the synaptic translation repressor CYFIP1 was deleted. These results strengthen the role of synaptic gene dysfunction in ASD but also highlight the presence of putative modifier genes, which is in keeping with the "multiple hit model" for ASD. A better knowledge of these genetic interactions will be necessary to understand the complex inheritance pattern of ASD.

  2. Genomic expression analyses reveal lysosomal, innate immunity proteins, as disease correlates in murine models of a lysosomal storage disorder.

    Directory of Open Access Journals (Sweden)

    Md Suhail Alam

    Full Text Available Niemann-Pick Type C (NPC) disease is a rare, genetic, lysosomal disorder with progressive neurodegeneration. Poor understanding of the pathophysiology and a lack of blood-based diagnostic markers are major hurdles in the treatment and management of NPC and several additional, neurological lysosomal disorders. To identify disease severity correlates, we undertook whole genome expression profiling of sentinel organs, brain, liver, and spleen of Balb/c Npc1(-/-) mice relative to Npc1(+/-) at an asymptomatic stage, as well as early- and late-symptomatic stages. Unexpectedly, we found prominent up-regulation of innate immunity genes, with age-dependent change in their expression, in all three organs. We shortlisted a set of 12 secretory genes whose expression steadily increased with age in both brain and liver, as potential plasma correlates of neurological and/or liver disease. Ten were innate immune genes, with eight ascribed to lysosomes. Several are known to be elevated in diseased organs of murine models of other lysosomal diseases including Gaucher's disease, Sandhoff disease and MPSIIIB. We validated the top candidate, lysozyme, in the plasma of Npc1(-/-) as well as Balb/c Npc1(nmf164) mice (bearing a point mutation closer to human disease mutants) and show its reduction in response to an emerging therapeutic. We further established elevation of innate immunity in Npc1(-/-) mice through multiple functional assays, including inhibition of bacterial infection, as well as cellular analysis and immunohistochemistry. These data revealed neutrophil elevation in the Npc1(-/-) spleen and liver (where large foci were detected proximal to damaged tissue). Together our results yield a set of lysosomal, secretory innate immunity genes that have potential to be developed as pan or specific plasma markers for neurological diseases associated with lysosomal storage and where diagnosis is a major problem. Further, the accumulation of neutrophils in diseased organs

  3. The AquaDEB project: Physiological flexibility of aquatic animals analysed with a generic dynamic energy budget model (phase II)

    Science.gov (United States)

    Alunno-Bruscia, Marianne; van der Veer, Henk W.; Kooijman, Sebastiaan A. L. M.

    2011-11-01

    This second special issue of the Journal of Sea Research on development and applications of Dynamic Energy Budget (DEB) theory concludes the European research project AquaDEB (2007-2011). In this introductory paper we summarise the progress made during the running time of this five-year project, present context for the papers in this volume and discuss future directions. The main scientific objectives in AquaDEB were (i) to study and compare the sensitivity of aquatic species (mainly molluscs and fish) to environmental variability within the context of DEB theory for metabolic organisation, and (ii) to evaluate the inter-relationships between different biological levels (individual, population, ecosystem) and temporal scales (life cycle, population dynamics, evolution). AquaDEB phase I focussed on quantifying bio-energetic processes of various aquatic species (e.g. molluscs, fish, crustaceans, algae) and phase II on: (i) comparing energetic and physiological strategies among species through the DEB parameter values and identifying the factors responsible for any differences in bioenergetics and physiology; (ii) considering different scenarios of environmental disruption (excess of nutrients, diffuse or massive pollution, exploitation by man, climate change) to forecast effects on growth, reproduction and survival of key species; (iii) scaling up the models for a few species from the individual level up to the level of evolutionary processes. Apart from the three special issues in the Journal of Sea Research — including the DEBIB collaboration (see vol. 65 issue 2) — a theme issue on DEB theory appeared in the Philosophical Transactions of the Royal Society B (vol. 365, 2010); a large number of publications were produced; the third edition of the DEB book appeared (2010); open-source software was substantially expanded (over 1000 functions); a large open-source systematic collection of ecophysiological data and DEB parameters has been set up; and a series of DEB

  4. Phase-Space Density Analyses of the AE-8 Trapped Electron and the AP-8 Trapped Proton Model Environments

    Energy Technology Data Exchange (ETDEWEB)

    T.E. Cayton

    2005-08-12

    The AE-8 trapped electron and the AP-8 trapped proton models are used to examine the L-shell variation of phase-space densities for sets of transverse (or 1st) invariants, μ, and geometrical invariants, K (related to the first two adiabatic invariants). The motivation for this study is twofold: first, to discover the functional dependence of the phase-space density upon the invariants; and, second, to explore the global structure of the radiation belts within this context. Variation due to particle rest mass is considered as well. The overall goal of this work is to provide a framework for analyzing energetic particle data collected by instruments on Global Positioning System (GPS) spacecraft that fly through the most intense region of the radiation belt. For all considered values of μ and K, and for 3.5 R_E < L < 6.5 R_E, the AE-8 electron phase-space density increases with increasing L; this trend--the expected one for a population diffusing inward from an external source--continues to L = 7.5 R_E for both small and large values of K but reverses slightly for intermediate values of K. The AP-8 proton phase-space density exhibits μ-dependent local minima around L = 5 R_E. Both AE-8 and AP-8 exhibit critical or cutoff values for the invariants beyond which the flux, and therefore the phase-space density, vanish. For both electrons and protons, these cutoff values vary systematically with magnetic moment and L-shell and are smaller than those estimated for the atmospheric loss cone. For large magnetic moments, for both electrons and protons, the K-dependence of the phase-space density is exponential, with maxima at the magnetic equator (K = 0) and vanishing beyond a cutoff value, K_c. Such features suggest that momentum-dependent trapping boundaries, perhaps drift-type loss cones, serve as boundary conditions for trapped electrons as well as trapped protons.
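
Converting a model's differential flux j to phase-space density uses the standard relation f = j/p², with p the relativistic momentum at the measured energy; this is the step that lets AE-8/AP-8 flux maps be recast in adiabatic-invariant coordinates. Units are left schematic and the numbers below are toy values, not model output.

```python
MC2_ELECTRON = 0.511  # electron rest energy, MeV

def momentum_squared(kinetic_mev, mc2=MC2_ELECTRON):
    """Relativistic (pc)^2 in MeV^2 from kinetic energy:
    (pc)^2 = (E_k + mc^2)^2 - (mc^2)^2."""
    total = kinetic_mev + mc2
    return total**2 - mc2**2

def phase_space_density(flux, kinetic_mev, mc2=MC2_ELECTRON):
    """f = j / p^2: differential flux at fixed energy converted to
    phase-space density (schematic units)."""
    return flux / momentum_squared(kinetic_mev, mc2)

# The same flux at two energies corresponds to very different f:
print(phase_space_density(1e4, 0.5))  # 0.5 MeV electrons
print(phase_space_density(1e4, 5.0))  # 5 MeV electrons, much smaller f
```

Holding μ and K fixed while varying L then amounts to evaluating f at the L-dependent energy that keeps the invariants constant.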

  5. Non-linear dynamic analyses of 3D masonry structures by means of a homogenized rigid body and spring model (HRBSM)

    Science.gov (United States)

    Bertolesi, Elisa; Milani, Gabriele; Casolo, Siro

    2016-12-01

    A simple homogenized rigid body and spring model (HRBSM) is presented and applied to the non-linear dynamic analysis of 3D masonry structures. The approach, previously developed by the authors for the modeling of in-plane loaded walls, is herein extended to real 3D buildings subjected to in- and out-of-plane deformation modes. The elementary cell is discretized by means of three-noded plane stress elements and non-linear interfaces. At the structural level, the non-linear analyses are performed by replacing the homogenized orthotropic continuum with a rigid element and non-linear spring assemblage (RBSM), by means of which both in-plane and out-of-plane mechanisms are allowed. All the simulations presented here are performed using the commercial software Abaqus. In order to validate the proposed model for the analysis of full-scale structures subjected to seismic actions, two different examples are critically discussed, namely a church façade and an in-scale masonry building, both subjected to dynamic excitation. The results obtained are compared with experimental or numerical results available in the literature.
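
    As a minimal illustration of what one non-linear spring in such an assemblage might look like, the sketch below implements an elastic-perfectly-plastic force-displacement law. The stiffness and yield values are hypothetical; in the paper the spring laws are calibrated from the homogenized response of the elementary cell:

```python
def spring_force(u, k=100.0, f_y=500.0):
    """Elastic-perfectly-plastic law for one interface spring of an
    RBSM assemblage (arbitrary units): linear response k*u up to the
    yield force f_y, then a plastic plateau. The stiffness k and
    yield force f_y are hypothetical, for illustration only."""
    return max(-f_y, min(f_y, k * u))

# elastic below yield, plastic plateau beyond it
f_elastic = spring_force(2.0)    # 200.0
f_plastic = spring_force(10.0)   # capped at 500.0
```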

  6. New insights into the nature of cerebellar-dependent eyeblink conditioning deficits in schizophrenia: A hierarchical linear modeling approach

    Directory of Open Access Journals (Sweden)

    Amanda R Bolbecker

    2016-01-01

    Full Text Available Evidence of cerebellar dysfunction in schizophrenia has mounted over the past several decades, emerging from neuroimaging, neuropathological, and behavioral studies. Consistent with these findings, cerebellar-dependent delay eyeblink conditioning (dEBC) deficits have been identified in schizophrenia. While repeated measures analysis of variance (ANOVA) is traditionally used to analyze dEBC data, hierarchical linear modeling (HLM) more reliably describes change over time by accounting for the dependence in repeated measures data. This analysis approach is well suited to dEBC data analysis because it has less restrictive assumptions and allows unequal variances. The current study examined dEBC measured with electromyography in a single-cue tone paradigm in an age-matched sample of schizophrenia participants and healthy controls (N = 56 per group) using HLM. Subjects participated in 90 trials (10 blocks) of dEBC, during which a 400 ms tone co-terminated with a 50 ms air puff delivered to the left eye. Each block also contained 1 tone-alone trial. The resulting block averages of dEBC data were fitted to a 3-parameter logistic model in HLM, revealing significant differences between schizophrenia and control groups on asymptote and inflection point, but not slope. These findings suggest that while the learning rate is not significantly different compared to controls, associative learning begins to level off later and a lower ultimate level of associative learning is achieved in schizophrenia. Given the large sample size in the present study, HLM may provide a more nuanced and definitive analysis of differences between schizophrenia and controls on dEBC.

  7. New Insights into the Nature of Cerebellar-Dependent Eyeblink Conditioning Deficits in Schizophrenia: A Hierarchical Linear Modeling Approach.

    Science.gov (United States)

    Bolbecker, Amanda R; Petersen, Isaac T; Kent, Jerillyn S; Howell, Josselyn M; O'Donnell, Brian F; Hetrick, William P

    2016-01-01

    Evidence of cerebellar dysfunction in schizophrenia has mounted over the past several decades, emerging from neuroimaging, neuropathological, and behavioral studies. Consistent with these findings, cerebellar-dependent delay eyeblink conditioning (dEBC) deficits have been identified in schizophrenia. While repeated-measures analysis of variance is traditionally used to analyze dEBC data, hierarchical linear modeling (HLM) more reliably describes change over time by accounting for the dependence in repeated-measures data. This analysis approach is well suited to dEBC data analysis because it has less restrictive assumptions and allows unequal variances. The current study examined dEBC measured with electromyography in a single-cue tone paradigm in an age-matched sample of schizophrenia participants and healthy controls (N = 56 per group) using HLM. Subjects participated in 90 trials (10 blocks) of dEBC, during which a 400 ms tone co-terminated with a 50 ms air puff delivered to the left eye. Each block also contained 1 tone-alone trial. The resulting block averages of dEBC data were fitted to a three-parameter logistic model in HLM, revealing significant differences between schizophrenia and control groups on asymptote and inflection point, but not slope. These findings suggest that while the learning rate is not significantly different compared to controls, associative learning begins to level off later and a lower ultimate level of associative learning is achieved in schizophrenia. Given the large sample size in the present study, HLM may provide a more nuanced and definitive analysis of differences between schizophrenia and controls on dEBC.
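
    The three-parameter logistic curve used for the block averages can be written down directly. The sketch below fits it to noiseless synthetic data by a coarse grid search; the data, parameter values, and grid are all hypothetical, and real HLM estimation fits these parameters with per-subject random effects rather than by grid search:

```python
import math
from itertools import product

def logistic3(block, asymptote, inflection, slope):
    """Three-parameter logistic curve for conditioning performance
    across dEBC blocks: asymptote = ultimate level of learning,
    inflection = block at which learning is half-complete,
    slope = learning rate."""
    return asymptote / (1.0 + math.exp(-slope * (block - inflection)))

def fit_logistic3(blocks, values, candidates):
    """Least-squares fit over a candidate grid (illustration only)."""
    best, best_sse = None, float("inf")
    for a, i, s in candidates:
        sse = sum((logistic3(b, a, i, s) - v) ** 2
                  for b, v in zip(blocks, values))
        if sse < best_sse:
            best, best_sse = (a, i, s), sse
    return best

# hypothetical, noiseless block averages for one subject (10 blocks)
blocks = list(range(1, 11))
values = [logistic3(b, 80.0, 4.0, 1.2) for b in blocks]

grid = product([70.0, 80.0, 90.0], [3.0, 4.0, 5.0], [0.8, 1.2, 1.6])
asym, infl, slope = fit_logistic3(blocks, values, grid)
```

    On noiseless data generated from (80.0, 4.0, 1.2), the grid search recovers exactly those parameters; the group differences reported above correspond to shifts in asym and infl but not slope.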

  8. [11C]MADAM Used as a Model for Understanding the Radiometabolism of Diphenyl Sulfide Radioligands for Positron Emission Tomography (PET)

    Science.gov (United States)

    Gourand, Fabienne; Amini, Nahid; Jia, Zhisheng; Stone-Elander, Sharon; Guilloteau, Denis; Barré, Louisa; Halldin, Christer

    2015-01-01

    In quantitative PET measurements, the analysis of radiometabolites in plasma is essential for determining the exact arterial input function. Diphenyl sulfide compounds are promising PET and SPECT radioligands for in vivo quantification of the serotonin transporter (SERT) and it is therefore important to investigate their radiometabolism. We have chosen to explore the radiometabolic profile of [11C]MADAM, one of these radioligands widely used for in vivo PET-SERT studies. The metabolism of [11C]MADAM/MADAM was investigated using rat and human liver microsomes (RLM and HLM) in combination with radio-HPLC or UHPLC/Q-ToF-MS for their identification. The effect of carrier on the radiometabolic rate of the radioligand [11C]MADAM in vitro and in vivo was examined by radio-HPLC. RLM and HLM incubations were carried out at two different carrier concentrations of 1 and 10 μM. Urine samples after perfusion of [11C]MADAM/MADAM in rats were also analysed by radio-HPLC. Analysis by UHPLC/Q-ToF-MS identified the metabolites produced in vitro to be results of N-demethylation, S-oxidation and benzylic hydroxylation. The presence of carrier greatly affected the radiometabolism rate of [11C]MADAM in both RLM/HLM experiments and in vivo rat studies. The good concordance between the results predicted by RLM and HLM experiments and the in vivo data obtained in rat studies indicate that the kinetics of the radiometabolism of the radioligand [11C]MADAM is dose-dependent. This issue needs to be addressed when the diarylsulfide class of compounds are used in PET quantifications of SERT. PMID:26367261

  9. [11C]MADAM Used as a Model for Understanding the Radiometabolism of Diphenyl Sulfide Radioligands for Positron Emission Tomography (PET).

    Science.gov (United States)

    Gourand, Fabienne; Amini, Nahid; Jia, Zhisheng; Stone-Elander, Sharon; Guilloteau, Denis; Barré, Louisa; Halldin, Christer

    2015-01-01

    In quantitative PET measurements, the analysis of radiometabolites in plasma is essential for determining the exact arterial input function. Diphenyl sulfide compounds are promising PET and SPECT radioligands for in vivo quantification of the serotonin transporter (SERT) and it is therefore important to investigate their radiometabolism. We have chosen to explore the radiometabolic profile of [11C]MADAM, one of these radioligands widely used for in vivo PET-SERT studies. The metabolism of [11C]MADAM/MADAM was investigated using rat and human liver microsomes (RLM and HLM) in combination with radio-HPLC or UHPLC/Q-ToF-MS for their identification. The effect of carrier on the radiometabolic rate of the radioligand [11C]MADAM in vitro and in vivo was examined by radio-HPLC. RLM and HLM incubations were carried out at two different carrier concentrations of 1 and 10 μM. Urine samples after perfusion of [11C]MADAM/MADAM in rats were also analysed by radio-HPLC. Analysis by UHPLC/Q-ToF-MS identified the metabolites produced in vitro to be results of N-demethylation, S-oxidation and benzylic hydroxylation. The presence of carrier greatly affected the radiometabolism rate of [11C]MADAM in both RLM/HLM experiments and in vivo rat studies. The good concordance between the results predicted by RLM and HLM experiments and the in vivo data obtained in rat studies indicate that the kinetics of the radiometabolism of the radioligand [11C]MADAM is dose-dependent. This issue needs to be addressed when the diarylsulfide class of compounds are used in PET quantifications of SERT.
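
    One way such dose-dependence can arise is through saturable (Michaelis-Menten) microsomal kinetics, under which the fractional turnover rate Vmax / (Km + S) falls as the carrier concentration S rises. This is a sketch of that interpretation only, with hypothetical Vmax and Km values rather than measured MADAM parameters:

```python
def fractional_rate(s_uM, vmax=1.0, km=2.0):
    """Fractional metabolic rate constant Vmax / (Km + S) for a
    substrate concentration S in uM. vmax and km are hypothetical
    values chosen only to illustrate saturation."""
    return vmax / (km + s_uM)

# turnover is slower at the higher carrier concentration
k_1uM = fractional_rate(1.0)
k_10uM = fractional_rate(10.0)
```

    Under these assumed constants the fractional rate at 10 uM is roughly a quarter of the rate at 1 uM, qualitatively matching the slower radiometabolism observed with added carrier.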

  10. [11C]MADAM Used as a Model for Understanding the Radiometabolism of Diphenyl Sulfide Radioligands for Positron Emission Tomography (PET).

    Directory of Open Access Journals (Sweden)

    Fabienne Gourand

    Full Text Available In quantitative PET measurements, the analysis of radiometabolites in plasma is essential for determining the exact arterial input function. Diphenyl sulfide compounds are promising PET and SPECT radioligands for in vivo quantification of the serotonin transporter (SERT) and it is therefore important to investigate their radiometabolism. We have chosen to explore the radiometabolic profile of [11C]MADAM, one of these radioligands widely used for in vivo PET-SERT studies. The metabolism of [11C]MADAM/MADAM was investigated using rat and human liver microsomes (RLM and HLM) in combination with radio-HPLC or UHPLC/Q-ToF-MS for their identification. The effect of carrier on the radiometabolic rate of the radioligand [11C]MADAM in vitro and in vivo was examined by radio-HPLC. RLM and HLM incubations were carried out at two different carrier concentrations of 1 and 10 μM. Urine samples after perfusion of [11C]MADAM/MADAM in rats were also analysed by radio-HPLC. Analysis by UHPLC/Q-ToF-MS identified the metabolites produced in vitro to be results of N-demethylation, S-oxidation and benzylic hydroxylation. The presence of carrier greatly affected the radiometabolism rate of [11C]MADAM in both RLM/HLM experiments and in vivo rat studies. The good concordance between the results predicted by RLM and HLM experiments and the in vivo data obtained in rat studies indicate that the kinetics of the radiometabolism of the radioligand [11C]MADAM is dose-dependent. This issue needs to be addressed when the diarylsulfide class of compounds are used in PET quantifications of SERT.

  11. Implementing a GLUE-based approach for analysing the uncertainties associated with the modelling of water mean transit times using tritium

    Science.gov (United States)

    Gallart, Francesc; Roig-Planasdemunt, Maria; Stewart, Michael K.; Llorens, Pilar; Morgenstern, Uwe; Stichler, Willibald; Pfister, Laurent; Latron, Jérôme

    2016-04-01

    The use of tritium in catchment water mean transit time (MTT) studies has recently been claimed as necessary, because it can demonstrate the contribution of old water not identified by stable isotopes. Recent analytical developments have substantially improved the precision of tritium activity determinations. This improvement may reinforce the use of tritium in hydrological investigations, taking advantage of the end of the interference caused by past nuclear weapon tests. TEPMGLUE, a Generalised Likelihood Uncertainty Estimation (GLUE) based approach, was developed for analysing the uncertainties associated with the use of lumped parameter models for investigating water MTTs. The approach consists of two steps: first, the analytical precision of tritium determination in both the input and catchment sample water analyses is taken into account; subsequently, the lumped model parameter identification issue is considered. This methodology was implemented using the exponential-piston model in the Vallcebre research catchments, where several water samples were analysed for tritium in 1996, 1997 and 1998 (low analytical precision), and 2013 (high analytical precision). For every site and sample set, the TEPMGLUE approach provided two outcomes: first, a map of the relationships between the ratio of exponential to total flow (model parameter f) and the MTTs and, second, a likelihood-weighted cumulative distribution function for a range of MTT values instead of a single optimal one. This allowed the estimation of the statistical significance of differences observed in MTTs among diverse water sample sets using a resampling test. The results showed that MTTs were only weakly sensitive to the model parameter f. Most of the uncertainty was due to parameter identifiability issues, whose contribution decreased from more than 90% for the older samples to less than 50% for the 2013 samples. The contribution of the analytical errors rose to 47% in the latter samples, despite their
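
    The core GLUE step described above, sampling lumped-model parameters and weighting each sample by an informal likelihood to build a weighted cumulative distribution of MTT, can be sketched with a toy stand-in for the exponential-piston model. The toy_model function, the observed value obs and the precision sigma below are all hypothetical:

```python
import math
import random

random.seed(1)

def toy_model(mtt, f):
    """Hypothetical stand-in for the lumped exponential-piston model:
    maps a mean transit time (years) and exponential-flow fraction f
    to a simulated tritium activity. Not the real model equations."""
    return 2.0 * math.exp(-mtt / 30.0) * (0.5 + 0.5 * f)

obs, sigma = 1.0, 0.1  # hypothetical observation and analytical precision

weighted = []
for _ in range(5000):
    mtt = random.uniform(1.0, 100.0)
    f = random.uniform(0.0, 1.0)
    # informal Gaussian likelihood, as commonly used in GLUE
    w = math.exp(-0.5 * ((toy_model(mtt, f) - obs) / sigma) ** 2)
    weighted.append((mtt, w))

# likelihood-weighted cumulative distribution of MTT
weighted.sort()
total = sum(w for _, w in weighted)
cum, cdf = 0.0, []
for mtt, w in weighted:
    cum += w
    cdf.append((mtt, cum / total))

median_mtt = next(m for m, c in cdf if c >= 0.5)
```

    The resulting curve gives a range of behavioural MTT values with likelihood weights rather than a single optimum, which is what enables the resampling significance tests mentioned above.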

  12. Possible future HERA analyses

    CERN Document Server

    Geiser, Achim

    2015-01-01

    A variety of possible future analyses of HERA data in the context of the HERA data preservation programme is collected, motivated, and commented upon. The focus is placed on possible future analyses of the existing $ep$ collider data and their physics scope. Comparisons to the original scope of the HERA programme are made, and cross references to topics also covered by other participants of the workshop are given. This includes topics on QCD, proton structure, diffraction, jets, hadronic final states, heavy flavours, electroweak physics, and the application of related theory and phenomenology topics like NNLO QCD calculations, low-x related models, nonperturbative QCD aspects, and electroweak radiative corrections. Synergies with other collider programmes are also addressed. In summary, the range of physics topics which can still be uniquely covered using the existing data is very broad and of considerable physics interest, often matching the interest of results from colliders currently in operation. Due to well-e...

  13. Mitochondrial dysfunction, oxidative stress and apoptosis revealed by proteomic and transcriptomic analyses of the striata in two mouse models of Parkinson’s disease

    Energy Technology Data Exchange (ETDEWEB)

    Chin, Mark H.; Qian, Weijun; Wang, Haixing; Petyuk, Vladislav A.; Bloom, Joshua S.; Sforza, Daniel M.; Lacan, Goran; Liu, Dahai; Khan, Arshad H.; Cantor, Rita M.; Bigelow, Diana J.; Melega, William P.; Camp, David G.; Smith, Richard D.; Smith, Desmond J.

    2008-02-10

    The molecular mechanisms underlying the changes in the nigrostriatal pathway in Parkinson disease (PD) are not completely understood. Here we use mass spectrometry and microarrays to study the proteomic and transcriptomic changes in the striatum of two mouse models of PD, induced by the distinct neurotoxins 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) and methamphetamine (METH). Proteomic analyses resulted in the identification and relative quantification of 912 proteins with two or more unique peptides and 85 proteins with significant abundance changes following neurotoxin treatment. Similarly, microarray analyses revealed 181 genes with significant changes in mRNA following neurotoxin treatment. The combined protein and gene list provides a clearer picture of the potential mechanisms underlying neurodegeneration observed in PD. Functional analysis of this combined list revealed a number of significant categories, including mitochondrial dysfunction, oxidative stress response and apoptosis. Additionally, codon usage and miRNAs may play an important role in translational control in the striatum. These results constitute one of the largest datasets integrating protein and transcript changes for these neurotoxin models with many similar endpoint phenotypes but distinct mechanisms.

  14. Analysis of a Heavy Rainfall Event over Beijing During 21-22 July 2012 Based on High Resolution Model Analyses and Forecasts