WorldWideScience

Sample records for modeling analyses examined

  1. Challenges and Opportunities in Analysing Students Modelling

    Science.gov (United States)

    Blanco-Anaya, Paloma; Justi, Rosária; Díaz de Bustamante, Joaquín

    2017-01-01

    Modelling-based teaching activities have been designed and analysed from distinct theoretical perspectives. In this paper, we use one of them--the model of modelling diagram (MMD)--as an analytical tool in a regular classroom context. This paper examines the challenges that arise when the MMD is used as an analytical tool to characterise the…

  2. Longitudinal hierarchical linear modeling analyses of California Psychological Inventory data from age 33 to 75: an examination of stability and change in adult personality.

    Science.gov (United States)

    Jones, Constance J; Livson, Norman; Peskin, Harvey

    2003-06-01

    Twenty aspects of personality assessed via the California Psychological Inventory (CPI; Gough & Bradley, 1996) from age 33 to 75 were examined in a sample of 279 individuals. Oakland Growth Study and Berkeley Guidance Study members completed the CPI a maximum of 4 times. We used longitudinal hierarchical linear modeling (HLM) to ask the following: Which personality characteristics change and which do not? Five CPI scales showed uniform lack of change, 2 showed heterogeneous change giving an averaged lack of change, 4 showed linear increases with age, 2 showed linear decreases with age, 4 showed gender or sample differences in linear change, 1 showed a quadratic peak, and 2 showed a quadratic nadir. The utility of HLM becomes apparent in portraying the complexity of personality change and stability.
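
    A minimal sketch of the kind of two-level growth model used in such longitudinal HLM analyses, written in Python with statsmodels on synthetic data (subject IDs, ages and scores below are invented for illustration, not the CPI data):

      # Two-level growth model: repeated scores nested in subjects, with random
      # intercepts and slopes for age. Data are synthetic placeholders.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(0)
      n_subj, n_waves = 50, 4
      subj = np.repeat(np.arange(n_subj), n_waves)
      age = np.tile(np.array([33.0, 42.0, 61.0, 75.0]), n_subj)   # illustrative assessment ages
      u0 = rng.normal(0, 3, n_subj)[subj]                          # subject-level intercept deviations
      u1 = rng.normal(0, 0.05, n_subj)[subj]                       # subject-level slope deviations
      score = 50 + 0.1 * age + u0 + u1 * age + rng.normal(0, 2, subj.size)
      df = pd.DataFrame({"subject": subj, "age": age, "score": score})

      # Level 1: score_it = b0_i + b1_i * age_it + e_it ; Level 2: b0_i, b1_i vary across subjects.
      model = smf.mixedlm("score ~ age + I(age**2)", df, groups=df["subject"], re_formula="~age")
      result = model.fit()
      print(result.summary())   # linear and quadratic age terms correspond to the change patterns above

    The sign and significance of the age and age-squared fixed effects are what distinguish "linear increase", "quadratic peak" and "no change" patterns of the kind reported in this record.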

  3. An extensible analysable system model

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, Rene Rydhof

    2008-01-01

    … this does not hold for real physical systems. Approaches such as threat modelling try to target the formalisation of the real-world domain, but still are far from the rigid techniques available in security research. Many currently available approaches to assurance of critical infrastructure security… allows for easy development of analyses for the abstracted systems. We briefly present one application of our approach, namely the analysis of systems for potential insider threats…

  4. Graphical models for genetic analyses

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt; Sheehan, Nuala A.

    2003-01-01

    This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas of graphical models and genetics. The potential of graphical models is explored and illustrated through a number of example applications where the genetic element is substantial or dominating.

  5. Graphical models for genetic analyses

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt; Sheehan, Nuala A.

    2003-01-01

    This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas...

  6. Nutrients from dairy foods are difficult to replace in diets of Americans: food pattern modeling and an analyses of the National Health and Nutrition Examination Survey 2003-2006.

    Science.gov (United States)

    Fulgoni, Victor L; Keast, Debra R; Auestad, Nancy; Quann, Erin E

    2011-10-01

    Because dairy products provide shortfall nutrients (eg, calcium, potassium, and vitamin D) and other important nutrients, this study hypothesized that it would be difficult for Americans to meet nutritional requirements for these nutrients in the absence of dairy product consumption or when recommended nondairy calcium sources are consumed. To test this hypothesis, MyPyramid dietary pattern modeling exercises and an analysis of data from the National Health and Nutrition Examination Survey 2003-2006 were conducted in those aged at least 2 years (n = 16 822). Impact of adding or removing 1 serving of dairy, removing all dairy, and replacing dairy with nondairy calcium sources was evaluated. Dietary pattern modeling indicated that at least 3 servings of dairy foods are needed to help individuals meet recommendations for nutrients, such as calcium and magnesium, and 4 servings may be needed to help some groups meet potassium recommendations. A calcium-equivalent serving of dairy requires 1.1 servings of fortified soy beverage, 0.6 serving of fortified orange juice, 1.2 servings of bony fish, or 2.2 servings of leafy greens. The replacement of dairy with calcium-equivalent foods alters the overall nutritional profile of the diet and affects nutrients including protein, potassium, magnesium, phosphorus, riboflavin, and vitamins A, D, and B12. Similar modeling exercises using consumption data from the National Health and Nutrition Examination Survey also demonstrated that nondairy calcium replacement foods are not a nutritionally equivalent substitute for dairy products. In conclusion, although it is possible to meet calcium intake recommendations without consuming dairy foods, calcium replacement foods are not a nutritionally equivalent substitute for dairy foods and consumption of a calcium-equivalent amount of some nondairy foods is unrealistic.

  7. Examination of Triacylglycerol Biosynthetic Pathways via De Novo Transcriptomic and Proteomic Analyses in an Unsequenced Microalga

    Science.gov (United States)

    2011-10-17

    …dependent upon available genomic sequence data, and the lack of these data has hindered the pursuit of such analyses for many oleaginous microalgae. In order to examine the triacylglycerol biosynthetic pathway in the unsequenced oleaginous microalga, Chlorella vulgaris, we have established a strategy with…

  8. Externalizing Behaviour for Analysing System Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof

    2013-01-01

    System models have recently been introduced to model organisations and evaluate their vulnerability to threats and especially insider threats. Especially for the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outside attackers. Therefore, many attacks are considerably easier to be performed for insiders than for outsiders. However, current models do not support explicit specification of different behaviours. Instead, behaviour is deeply embedded in the analyses supported by the models, meaning that it is a complex, if not impossible task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make the behaviour a separate component in system models, and explore how to integrate…

  9. Modelling and Analysing Socio-Technical Systems

    DEFF Research Database (Denmark)

    Aslanyan, Zaruhi; Ivanova, Marieta Georgieva; Nielson, Flemming

    2015-01-01

    …with social engineering. Due to this combination of attack steps on technical and social levels, risk assessment in socio-technical systems is complex. Therefore, established risk assessment methods often abstract away the internal structure of an organisation and ignore human factors when modelling and assessing attacks. In our work we model all relevant levels of socio-technical systems, and propose evaluation techniques for analysing the security properties of the model. Our approach simplifies the identification of possible attacks and provides qualified assessment and ranking of attacks based on the expected impact. We demonstrate our approach on a home-payment system. The system is specifically designed to help elderly or disabled people, who may have difficulties leaving their home, to pay for some services, e.g., care-taking or rent. The payment is performed using the remote control of a television…

  10. The impact of study size on meta-analyses: examination of underpowered studies in Cochrane reviews.

    Directory of Open Access Journals (Sweden)

    Rebecca M Turner

    Most meta-analyses include data from one or more small studies that, individually, do not have power to detect an intervention effect. The relative influence of adequately powered and underpowered studies in published meta-analyses has not previously been explored. We examine the distribution of power available in studies within meta-analyses published in Cochrane reviews, and investigate the impact of underpowered studies on meta-analysis results. For 14,886 meta-analyses of binary outcomes from 1,991 Cochrane reviews, we calculated power per study within each meta-analysis. We defined adequate power as ≥50% power to detect a 30% relative risk reduction. In a subset of 1,107 meta-analyses including 5 or more studies with at least two adequately powered and at least one underpowered, results were compared with and without underpowered studies. In 10,492 (70%) of 14,886 meta-analyses, all included studies were underpowered; only 2,588 (17%) included at least two adequately powered studies. 34% of the meta-analyses themselves were adequately powered. The median of summary relative risks was 0.75 across all meta-analyses (inter-quartile range 0.55 to 0.89). In the subset examined, odds ratios in underpowered studies were 15% lower (95% CI 11% to 18%, P<0.0001) than in adequately powered studies, in meta-analyses of controlled pharmacological trials; and 12% lower (95% CI 7% to 17%, P<0.0001) in meta-analyses of controlled non-pharmacological trials. The standard error of the intervention effect increased by a median of 11% (inter-quartile range -1% to 35%) when underpowered studies were omitted; and between-study heterogeneity tended to decrease. When at least two adequately powered studies are available in meta-analyses reported by Cochrane reviews, underpowered studies often contribute little information, and could be left out if a rapid review of the evidence is required. However, underpowered studies made up the entirety of the evidence in most…
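
    The per-study power calculation described above (power to detect a 30% relative risk reduction) can be approximated on the log relative-risk scale; a minimal sketch in Python, where the arm sizes and control event risk are made-up illustrations rather than values from the review:

      # Approximate power of a single two-arm trial with a binary outcome to detect
      # a 30% relative risk reduction (RR = 0.7), normal approximation on log(RR).
      import numpy as np
      from scipy.stats import norm

      def power_rrr(n_control, n_treat, p_control, rr=0.7, alpha=0.05):
          p_treat = rr * p_control
          # standard error of log(RR) under the alternative hypothesis
          se = np.sqrt((1 - p_control) / (n_control * p_control)
                       + (1 - p_treat) / (n_treat * p_treat))
          z_alpha = norm.ppf(1 - alpha / 2)
          return norm.cdf(abs(np.log(rr)) / se - z_alpha)

      # Hypothetical study: 120 patients per arm, 20% control-group event risk.
      print(f"power = {power_rrr(120, 120, 0.20):.2f}")   # 'adequate' in the paper means >= 0.50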

  11. Social Network Analyses and Nutritional Behavior: An Integrated Modeling Approach

    Directory of Open Access Journals (Sweden)

    Alistair McNair Senior

    2016-01-01

    Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent advances in nutrition research, combining state-space models of nutritional geometry with agent-based models of systems biology, show how nutrient targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a tangible and practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First we show how nutritionally explicit agent-based models that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interaction in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
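
    As an illustration of the network step described above, the sketch below builds a dominance network from simulated pairwise contests and computes a simple centrality metric; the agents, contest rule and "strength" parameter are hypothetical stand-ins, not the authors' nutritionally explicit agent-based model:

      # Build a dominance network from simulated dyadic contests and rank agents
      # with a simple network metric. Everything here is synthetic.
      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(1)
      n_agents = 10
      strength = rng.uniform(0, 1, n_agents)        # hypothetical competitive ability per agent

      G = nx.DiGraph()
      G.add_nodes_from(range(n_agents))
      for i in range(n_agents):
          for j in range(i + 1, n_agents):
              # winner of each contest decided probabilistically by relative strength
              p_i_wins = strength[i] / (strength[i] + strength[j])
              winner, loser = (i, j) if rng.random() < p_i_wins else (j, i)
              G.add_edge(winner, loser)              # edge points from dominant to subordinate

      out_centrality = nx.out_degree_centrality(G)   # crude proxy for dominance rank
      best = max(out_centrality, key=out_centrality.get)
      print("most dominant agent:", best, "centrality:", round(out_centrality[best], 2))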

  12. Externalizing Behaviour for Analysing System Models

    NARCIS (Netherlands)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof; Kammüller, Florian

    Systems models have recently been introduced to model organisations and evaluate their vulnerability to threats and especially insider threats. Especially for the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outside…

  13. Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]; Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]

    2015-08-01

    We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found the uncertainty in the models for POA irradiance and effective irradiance to be the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
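
    The residual-sampling idea in this record can be sketched as a simple Monte Carlo loop; the toy model chain and residual arrays below are placeholders, not the report's PV performance models:

      # Propagate empirical model residuals through a simplified model chain to get a
      # distribution of predicted power. Residual arrays and the "models" are stand-ins.
      import numpy as np

      rng = np.random.default_rng(42)
      n_samples = 10_000

      poa_nominal = 800.0                            # nominal plane-of-array irradiance, W/m^2 (illustrative)
      poa_residuals = rng.normal(0, 20, 500)         # stand-in empirical residuals of the POA model
      temp_residuals = rng.normal(0, 1.5, 500)       # stand-in residuals of the cell-temperature model
      dc_residuals = rng.normal(0, 0.01, 500)        # stand-in residuals of the DC power model (fractional)

      def dc_power(poa, cell_temp):
          # toy performance model: linear in irradiance with a temperature derate
          return 250.0 * (poa / 1000.0) * (1 - 0.004 * (cell_temp - 25.0))

      outputs = []
      for _ in range(n_samples):
          poa = poa_nominal + rng.choice(poa_residuals)      # sample each model's residual distribution
          cell_temp = 45.0 + rng.choice(temp_residuals)
          p_dc = dc_power(poa, cell_temp) * (1 + rng.choice(dc_residuals))
          outputs.append(p_dc)

      outputs = np.array(outputs)
      print(f"mean = {outputs.mean():.1f} kW, 95% interval = "
            f"[{np.percentile(outputs, 2.5):.1f}, {np.percentile(outputs, 97.5):.1f}] kW")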

  14. Bayesian Uncertainty Analyses Via Deterministic Model

    Science.gov (United States)

    Krzysztofowicz, R.

    2001-05-01

    Rational decision-making requires that the total uncertainty about a variate of interest (a predictand) be quantified in terms of a probability distribution, conditional on all available information and knowledge. Suppose the state-of-knowledge is embodied in a deterministic model, which is imperfect and outputs only an estimate of the predictand. Fundamentals are presented of three Bayesian approaches to producing a probability distribution of the predictand via any deterministic model. The Bayesian Processor of Output (BPO) quantifies the total uncertainty in terms of a posterior distribution, conditional on model output. The Bayesian Processor of Ensemble (BPE) quantifies the total uncertainty in terms of a posterior distribution, conditional on an ensemble of model output. The Bayesian Forecasting System (BFS) decomposes the total uncertainty into input uncertainty and model uncertainty, which are characterized independently and then integrated into a predictive distribution.
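
    For orientation, a minimal sketch of the BPO idea in the linear-Gaussian special case, using synthetic data; this is a simplified illustration, not Krzysztofowicz's full formulation:

      # Combine a prior on the predictand W with a likelihood for the model output X given W
      # (estimated from a synthetic joint sample) to get a posterior for W given a new output x0.
      import numpy as np

      rng = np.random.default_rng(7)
      w = rng.normal(10.0, 3.0, 2000)                  # synthetic "true" predictand values
      x = 0.8 * w + 1.0 + rng.normal(0, 2.0, w.size)   # synthetic deterministic-model output + error

      # Likelihood model X | W = a*W + b + eps, eps ~ N(0, s2), fitted by least squares
      a, b = np.polyfit(w, x, 1)
      s2 = np.var(x - (a * w + b))
      m, v = w.mean(), w.var()                         # prior moments of W

      x0 = 12.0                                        # a new model output
      post_var = 1.0 / (1.0 / v + a**2 / s2)           # conjugate normal update
      post_mean = post_var * (m / v + a * (x0 - b) / s2)
      print(f"posterior of W given x0={x0}: mean {post_mean:.2f}, sd {post_var**0.5:.2f}")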

  15. Analysing Social Epidemics by Delayed Stochastic Models

    Directory of Open Access Journals (Sweden)

    Francisco-José Santonja

    2012-01-01

    We investigate the dynamics of a delayed stochastic mathematical model to understand the evolution of the alcohol consumption in Spain. Sufficient condition for stability in probability of the equilibrium point of the dynamic model with aftereffect and stochastic perturbations is obtained via Kolmanovskii and Shaikhet general method of Lyapunov functionals construction. We conclude that alcohol consumption in Spain will be constant (with stability) in time with around 36.47% of nonconsumers, 62.94% of nonrisk consumers, and 0.59% of risk consumers. This approach allows us to emphasize the possibilities of the dynamical models in order to study human behaviour.

  16. Modelling, analyses and design of switching converters

    Science.gov (United States)

    Cuk, S. M.; Middlebrook, R. D.

    1978-01-01

    A state-space averaging method for modelling switching dc-to-dc converters for both continuous and discontinuous conduction mode is developed. In each case the starting point is the unified state-space representation, and the end result is a complete linear circuit model, for each conduction mode, which correctly represents all essential features, namely, the input, output, and transfer properties (static dc as well as dynamic ac small-signal). While the method is generally applicable to any switching converter, it is extensively illustrated for the three common power stages (buck, boost, and buck-boost). The results for these converters are then easily tabulated owing to the fixed equivalent circuit topology of their canonical circuit model. The insights that emerge from the general state-space modelling approach lead to the design of new converter topologies through the study of generic properties of the cascade connection of basic buck and boost converters.
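
    The averaging step can be illustrated on an ideal buck converter in continuous conduction: average the two switched state matrices weighted by the duty ratio and solve for the DC operating point. A short numerical sketch (component values are arbitrary, and the ideal-buck matrices are a textbook simplification rather than material from this record):

      # State-space averaging for an ideal buck converter in CCM, state x = [iL, vC].
      import numpy as np

      L_, C_, R_, Vg, D = 100e-6, 100e-6, 10.0, 24.0, 0.5

      # Switch ON interval (input connected):
      A1 = np.array([[0.0, -1.0 / L_], [1.0 / C_, -1.0 / (R_ * C_)]])
      B1 = np.array([1.0 / L_, 0.0])
      # Switch OFF interval (freewheeling diode):
      A2 = A1.copy()
      B2 = np.array([0.0, 0.0])

      A = D * A1 + (1 - D) * A2            # averaged state matrix
      B = D * B1 + (1 - D) * B2            # averaged input matrix

      x_dc = np.linalg.solve(A, -B * Vg)   # steady state: A x + B Vg = 0
      iL, vC = x_dc
      print(f"averaged DC solution: iL = {iL:.2f} A, vC = {vC:.2f} V (expected vC = D*Vg = {D*Vg} V)")

    The recovered output voltage equals D times the input, the familiar buck conversion ratio, which is the "static dc" property the averaged circuit model is meant to reproduce.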

  17. Modelling and Analyses of Embedded Systems Design

    DEFF Research Database (Denmark)

    Brekling, Aske Wiid

    We present the MoVES language: a language with which embedded systems can be specified at a stage in the development process where an application is identified and should be mapped to an execution platform (potentially multi-core). We give a formal model for MoVES that captures and gives…-based verification is a promising approach for assisting developers of embedded systems. We provide examples of system verifications that, in size and complexity, point in the direction of industrially-interesting systems…

  18. Structured physical examination data: a modeling challenge.

    Science.gov (United States)

    Doupi, P; van Ginneken, A M

    2001-01-01

    The success of systems facilitating collection of structured data by clinicians is largely dependent on the flexibility of the interface. The Open Record for CAre (ORCA) makes use of a generic model to support knowledge-based structured data entry for a variety of medical domains. An endeavor undertaken recently aimed to cover the broader area of Physical Examination by expanding the contents of the knowledge base. The model was found to be adequately expressive for supporting this task. Maintaining the balance between flexibility of the interface and constraints dictated by reliable retrieval, however, proved to be a considerable challenge. In this paper we illustrate through specific examples the effect of this trade-off on the modeling process, together with the rationale for the chosen solutions and suggestions for future research focus.

  19. Examination of species boundaries in the Acropora cervicornis group (Scleractinia, cnidaria) using nuclear DNA sequence analyses.

    Science.gov (United States)

    Oppen, M J; Willis, B L; Vugt, H W; Miller, D J

    2000-09-01

    Although Acropora is the most species-rich genus of the scleractinian (stony) corals, only three species occur in the Caribbean: A. cervicornis, A. palmata and A. prolifera. Based on overall coral morphology, abundance and distribution patterns, it has been suggested that A. prolifera may be a hybrid between A. cervicornis and A. palmata. The species boundaries among these three morphospecies were examined using DNA sequence analyses of the nuclear Pax-C 46/47 intron and the ribosomal DNA Internal Transcribed Spacer (ITS1 and ITS2) and 5.8S regions. Moderate levels of sequence variability were observed in the ITS and 5.8S sequences (up to 5.2% overall sequence difference), but variability within species was as large as between species and all three species carried similar sequences. Since this is unlikely to represent a shared ancestral polymorphism, the data suggest that introgressive hybridization occurs among the three species. For the Pax-C intron, A. cervicornis and A. palmata had very distinct allele frequencies and A. cervicornis carried a unique allele at a frequency of 0.769 (although sequence differences between alleles were small). All A. prolifera colonies examined were heterozygous for the Pax-C intron, whereas heterozygosity was only 0.286 and 0.333 for A. cervicornis and A. palmata, respectively. These data support the hypothesis that A. prolifera is the product of hybridization between two species that have a different allelic composition for the Pax-C intron, i.e. A. cervicornis and A. palmata. We therefore suggest that A. prolifera is a hybrid between A. cervicornis and A. palmata, which backcrosses with the parental species at low frequency.
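
    A small worked check of the reported Pax-C figures: the heterozygosity expected under Hardy-Weinberg equilibrium at the reported allele frequency, compared with the observed values. This is illustrative arithmetic only; the Hardy-Weinberg comparison is ours, not a calculation reported in the abstract.

      # Expected heterozygosity for a two-allele locus under Hardy-Weinberg equilibrium, 2*p*(1-p),
      # at the allele frequency reported for A. cervicornis.
      p = 0.769                               # frequency of the allele unique to A. cervicornis
      expected_het = 2 * p * (1 - p)          # ~0.355 expected under random mating
      observed_het = 0.286                    # heterozygosity reported for A. cervicornis
      print(f"expected {expected_het:.3f} vs observed {observed_het:.3f}")
      # All A. prolifera colonies were heterozygous (1.0), consistent with an F1 hybrid origin.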

  20. Alternative models of DSM-5 PTSD: Examining diagnostic implications

    DEFF Research Database (Denmark)

    Murphy, Siobhan; Hansen, Maj; Elklit, Ask

    2017-01-01

    The factor structure of DSM-5 posttraumatic stress disorder (PTSD) has been extensively debated with evidence supporting the recently proposed seven-factor Hybrid model. However, despite myriad studies examining PTSD symptom structure few have assessed the diagnostic implications of these proposed… estimated within a confirmatory factor analytic framework using the PTSD Checklist for DSM-5 (PCL-5). Data were analysed from a Malaysian adolescent community sample (n=481) of which 61.7% were female, with a mean age of 17.03 years. The results indicated that all models provided satisfactory model fit with statistical superiority for the Externalizing Behaviours and seven-factor Hybrid models. The PTSD prevalence estimates varied substantially ranging from 21.8% for the DSM-5 model to 10.0% for the Hybrid model. Estimates of risk associated with PTSD were inconsistent across the alternative models…

  1. Alternative models of DSM-5 PTSD: Examining diagnostic implications.

    Science.gov (United States)

    Murphy, Siobhan; Hansen, Maj; Elklit, Ask; Yong Chen, Yoke; Raudzah Ghazali, Siti; Shevlin, Mark

    2017-09-09

    The factor structure of DSM-5 posttraumatic stress disorder (PTSD) has been extensively debated with evidence supporting the recently proposed seven-factor Hybrid model. However, despite myriad studies examining PTSD symptom structure few have assessed the diagnostic implications of these proposed models. This study aimed to generate PTSD prevalence estimates derived from the 7 alternative factor models and assess whether pre-established risk factors associated with PTSD (e.g., transportation accidents and sexual victimisation) produce consistent risk estimates. Seven alternative models were estimated within a confirmatory factor analytic framework using the PTSD Checklist for DSM-5 (PCL-5). Data were analysed from a Malaysian adolescent community sample (n = 481) of which 61.7% were female, with a mean age of 17.03 years. The results indicated that all models provided satisfactory model fit with statistical superiority for the Externalising Behaviours and seven-factor Hybrid models. The PTSD prevalence estimates varied substantially ranging from 21.8% for the DSM-5 model to 10.0% for the Hybrid model. Estimates of risk associated with PTSD were inconsistent across the alternative models, with substantial variation emerging for sexual victimisation. These findings have important implications for research and practice and highlight that more research attention is needed to examine the diagnostic implications emerging from the alternative models of PTSD. Copyright © 2017. Published by Elsevier B.V.
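
    For context, prevalence estimates of this kind are typically obtained by applying a cluster-based scoring rule to PCL-5 item responses. The sketch below uses the commonly cited DSM-5 provisional-diagnosis rule for the PCL-5; treat the exact thresholds and item groupings as an assumption here, since the record does not spell out its scoring, and the response matrix is random, purely to show the mechanics:

      # Provisional DSM-5 PTSD "diagnosis" from PCL-5 item scores (0-4): an item rated >= 2
      # counts as endorsed; clusters B (items 1-5) and C (6-7) need >= 1 endorsed item,
      # clusters D (8-14) and E (15-20) need >= 2.
      import numpy as np

      rng = np.random.default_rng(3)
      responses = rng.integers(0, 5, size=(481, 20))   # 481 respondents x 20 PCL-5 items, synthetic

      endorsed = responses >= 2
      clusters = {"B": slice(0, 5), "C": slice(5, 7), "D": slice(7, 14), "E": slice(14, 20)}
      required = {"B": 1, "C": 1, "D": 2, "E": 2}

      meets = np.ones(responses.shape[0], dtype=bool)
      for name, idx in clusters.items():
          meets &= endorsed[:, idx].sum(axis=1) >= required[name]

      print(f"provisional DSM-5 PTSD prevalence in this random data: {meets.mean():.1%}")

    The alternative factor models discussed in the record change the cluster groupings (and hence the counting rule), which is exactly why their prevalence estimates diverge.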

  2. Identifying Useful Auxiliary Variables for Incomplete Data Analyses: A Note on a Group Difference Examination Approach

    Science.gov (United States)

    Raykov, Tenko; Marcoulides, George A.

    2014-01-01

    This research note contributes to the discussion of methods that can be used to identify useful auxiliary variables for analyses of incomplete data sets. A latent variable approach is discussed, which is helpful in finding auxiliary variables with the property that if included in subsequent maximum likelihood analyses they may enhance considerably…

  3. VIPRE modeling of VVER-1000 reactor core for DNB analyses

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Y.; Nguyen, Q. [Westinghouse Electric Corporation, Pittsburgh, PA (United States)]; Cizek, J. [Nuclear Research Institute, Prague (Czech Republic)]

    1995-09-01

    Based on the one-pass modeling approach, the hot channels and the VVER-1000 reactor core can be modeled in 30 channels for DNB analyses using the VIPRE-01/MOD02 (VIPRE) code (VIPRE is owned by Electric Power Research Institute, Palo Alto, California). The VIPRE one-pass model does not compromise any accuracy in the hot channel local fluid conditions. Extensive qualifications include sensitivity studies of radial noding and crossflow parameters and comparisons with the results from THINC and CALOPEA subchannel codes. The qualifications confirm that the VIPRE code with the Westinghouse modeling method provides good computational performance and accuracy for VVER-1000 DNB analyses.

  4. Modelling longevity bonds: Analysing the Swiss Re Kortis bond

    OpenAIRE

    2015-01-01

    A key contribution to the development of the traded market for longevity risk was the issuance of the Kortis bond, the world's first longevity trend bond, by Swiss Re in 2010. We analyse the design of the Kortis bond, develop suitable mortality models to analyse its payoff and discuss the key risk factors for the bond. We also investigate how the design of the Kortis bond can be adapted and extended to further develop the market for longevity risk.

  5. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure…

  6. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Ihekwaba, Adoha

    2007-01-01

    A. Ihekwaba, R. Mardare. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems. Case study: NFkB system. In Proc. of International Conference of Computational Methods in Sciences and Engineering (ICCMSE), American Institute of Physics, AIP Proceedings, N 2...

  8. The method of characteristics applied to analyse 2DH models

    NARCIS (Netherlands)

    Sloff, C.J.

    1992-01-01

    To gain insight into the physical behaviour of 2D hydraulic models (mathematically formulated as a system of partial differential equations), the method of characteristics is used to analyse the propagation of physically meaningful disturbances. These disturbances propagate as wave fronts along bicharacteristics…

  9. Analysing the temporal dynamics of model performance for hydrological models

    Directory of Open Access Journals (Sweden)

    D. E. Reusser

    2008-11-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure. Dealing with a set of performance measures evaluated at a high temporal resolution implies analyzing and interpreting a high dimensional data set. This paper presents a method for such a hydrological model performance assessment with a high temporal resolution and illustrates its application for two very different rainfall-runoff modeling case studies. The first is the Wilde Weisseritz case study, a headwater catchment in the eastern Ore Mountains, simulated with the conceptual model WaSiM-ETH. The second is the Malalcahuello case study, a headwater catchment in the Chilean Andes, simulated with the physics-based model Catflow. The proposed time-resolved performance assessment starts with the computation of a large set of classically used performance measures for a moving window. The key of the developed approach is a data-reduction method based on self-organizing maps (SOMs) and cluster analysis to classify the high-dimensional performance matrix. Synthetic peak errors are used to interpret the resulting error classes. The final outcome of the proposed method is a time series of the occurrence of dominant error types. For the two case studies analyzed here, 6 such error types have been identified. They show clear temporal patterns which can lead to the identification of model structural errors.

  10. Analysing the temporal dynamics of model performance for hydrological models

    Directory of Open Access Journals (Sweden)

    E. Zehe

    2009-07-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure. Dealing with a set of performance measures evaluated at a high temporal resolution implies analyzing and interpreting a high dimensional data set. This paper presents a method for such a hydrological model performance assessment with a high temporal resolution and illustrates its application for two very different rainfall-runoff modeling case studies. The first is the Wilde Weisseritz case study, a headwater catchment in the eastern Ore Mountains, simulated with the conceptual model WaSiM-ETH. The second is the Malalcahuello case study, a headwater catchment in the Chilean Andes, simulated with the physics-based model Catflow. The proposed time-resolved performance assessment starts with the computation of a large set of classically used performance measures for a moving window. The key of the developed approach is a data-reduction method based on self-organizing maps (SOMs) and cluster analysis to classify the high-dimensional performance matrix. Synthetic peak errors are used to interpret the resulting error classes. The final outcome of the proposed method is a time series of the occurrence of dominant error types. For the two case studies analyzed here, 6 such error types have been identified. They show clear temporal patterns, which can lead to the identification of model structural errors.
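
    A simplified sketch of the time-resolved performance assessment described in the two records above, using k-means as a plain stand-in for the SOM-plus-cluster-analysis step and synthetic runoff series in place of the case-study data:

      # Compute several error measures in a moving window along synthetic observed/simulated
      # series, then group the windows into error classes (KMeans replaces the SOM + clustering).
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(5)
      t = np.arange(1000)
      obs = 5 + 3 * np.sin(t / 30.0) + rng.normal(0, 0.3, t.size)   # synthetic "observed" runoff
      sim = obs + rng.normal(0, 0.3, t.size) + 0.5 * (t > 600)      # simulated, with a late bias

      window = 50
      rows = []
      for start in range(0, t.size - window, window):
          o, s = obs[start:start + window], sim[start:start + window]
          err = s - o
          rows.append([np.sqrt(np.mean(err**2)),      # RMSE
                       err.mean(),                    # bias
                       np.corrcoef(o, s)[0, 1]])      # correlation
      perf = np.array(rows)

      labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(perf)
      print("error class per window:", labels)        # a time series of dominant error types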

  11. Graphic-based musculoskeletal model for biomechanical analyses and animation.

    Science.gov (United States)

    Chao, Edmund Y S

    2003-04-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.

  12. Micromechanical Failure Analyses for Finite Element Polymer Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, Robert S.; Reedy Jr., Earl David; Lo, Chi S.; Adolf, Douglas B.; Guess, Tommy R.

    2000-11-01

    Polymer stresses around sharp corners and in constrained geometries of encapsulated components can generate cracks leading to system failures. Often, analysts use maximum stresses as a qualitative indicator for evaluating the strength of encapsulated component designs. Although this approach has been useful for making relative comparisons when screening prospective design changes, it has not been tied quantitatively to failure. Accurate failure models are needed for analyses to predict whether encapsulated components meet life cycle requirements. With Sandia's recently developed nonlinear viscoelastic polymer models, it has been possible to examine more accurately the local stress-strain distributions in zones of likely failure initiation looking for physically based failure mechanisms and continuum metrics that correlate with the cohesive failure event. This study has identified significant differences between rubbery and glassy failure mechanisms that suggest reasonable alternatives for cohesive failure criteria and metrics. Rubbery failure seems best characterized by the mechanisms of finite extensibility and appears to correlate with maximum strain predictions. Glassy failure, however, seems driven by cavitation and correlates with the maximum hydrostatic tension. Using these metrics, two three-point bending geometries were tested and analyzed under variable loading rates, different temperatures and comparable mesh resolution (i.e., accuracy) to make quantitative failure predictions. The resulting predictions and observations agreed well suggesting the need for additional research. In a separate, additional study, the asymptotically singular stress state found at the tip of a rigid, square inclusion embedded within a thin, linear elastic disk was determined for uniform cooling. The singular stress field is characterized by a single stress intensity factor K_a and the applicable K_a calibration relationship has been determined for both fully bonded and…
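
    A minimal illustration of the glassy-failure metric mentioned above: the hydrostatic (mean normal) stress computed from a stress tensor, shown alongside the principal stresses. The tensor values are arbitrary and carry no connection to the Sandia analyses.

      # Hydrostatic stress and principal stresses from a symmetric stress tensor (values in MPa).
      import numpy as np

      sigma = np.array([[40.0, 10.0,  0.0],
                        [10.0, 25.0,  5.0],
                        [ 0.0,  5.0, 30.0]])

      hydrostatic = np.trace(sigma) / 3.0       # positive value indicates hydrostatic tension
      principal = np.linalg.eigvalsh(sigma)     # principal stresses (ascending order)
      print(f"hydrostatic stress = {hydrostatic:.1f} MPa")
      print("principal stresses =", np.round(principal, 1), "MPa")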

  13. Examination of Alternative Models of Job Satisfaction

    Science.gov (United States)

    Aldag, Ramon J.; Brief, Arthur P.

    1978-01-01

    Researchers have generally assumed overall job satisfaction to be an additive function of weighted job satisfaction facet scores. This paper considers the linear compensatory model as well as two nonlinear alternatives.

  14. Comparing modelling techniques for analysing urban pluvial flooding.

    Science.gov (United States)

    van Dijk, E; van der Meulen, J; Kluck, J; Straatman, J H M

    2014-01-01

    Short peak rainfall intensities cause sewer systems to overflow leading to flooding of streets and houses. Due to climate change and densification of urban areas, this is expected to occur more often in the future. Hence, next to their minor (i.e. sewer) system, municipalities have to analyse their major (i.e. surface) system in order to anticipate urban flooding during extreme rainfall. Urban flood modelling techniques are powerful tools in both public and internal communications and transparently support design processes. To provide more insight into the (im)possibilities of different urban flood modelling techniques, simulation results have been compared for an extreme rainfall event. The results show that, although modelling software is tending to evolve towards coupled one-dimensional (1D)-two-dimensional (2D) simulation models, surface flow models, using an accurate digital elevation model, prove to be an easy and fast alternative to identify vulnerable locations in hilly and flat areas. In areas at the transition between hilly and flat, however, coupled 1D-2D simulation models give better results since catchments of major and minor systems can differ strongly in these areas. During the decision making process, surface flow models can provide a first insight that can be complemented with complex simulation models for critical locations.

  15. Analysing the Organizational Culture of Universities: Two Models

    Science.gov (United States)

    Folch, Marina Tomas; Ion, Georgeta

    2009-01-01

    This article presents the findings of two research projects, examining organizational culture by means of two different models of analysis--one at university level and one at department level--which were carried out over the last four years at Catalonian public universities (Spain). Theoretical and methodological approaches for the two…

  16. Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Du, Qiang [Pennsylvania State Univ., State College, PA (United States)

    2014-11-12

    The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next
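
    For orientation, the nonlocal force model that distinguishes peridynamics from classical elasticity is usually written, in its bond-based form, as

      \rho(x)\,\ddot{u}(x,t) = \int_{H_x} f\big(u(x',t) - u(x,t),\, x' - x\big)\, dV_{x'} + b(x,t),

    where H_x is a neighbourhood (the "horizon") of the point x, f is the pairwise bond force and b is the body force. The notation follows common usage in the peridynamics literature rather than the project report itself; for smooth deformations this nonlocal operator can be approximated by the classical elasticity operator, which is the limiting behaviour the record refers to.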

  17. Modeling hard clinical end-point data in economic analyses.

    Science.gov (United States)

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states (rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data is reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are more appropriate to accurately reflect the trial data.
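
    One concrete detail behind the "constant rates" option mentioned above is the standard conversion from an event rate to a per-cycle transition probability in a cohort-style model; a small worked example, with a rate and cycle length that are illustrative rather than taken from any of the reviewed models:

      # Convert an annual event rate into a per-cycle transition probability.
      import math

      annual_rate = 0.008            # e.g. 0.8 events per 100 patient-years (illustrative)
      cycle_years = 0.25             # 3-month model cycle

      p_cycle = 1 - math.exp(-annual_rate * cycle_years)
      print(f"per-cycle probability = {p_cycle:.5f}")   # ~0.002 for a 3-month cycle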

  18. The language of worry: examining linguistic elements of worry models.

    Science.gov (United States)

    Geronimi, Elena M C; Woodruff-Borden, Janet

    2015-01-01

    Despite strong evidence that worry is a verbal process, studies examining linguistic features in individuals with generalised anxiety disorder (GAD) are lacking. The aim of the present study is to investigate language use in individuals with GAD and controls based on GAD and worry theoretical models. More specifically, the degree to which linguistic elements of the avoidance and intolerance of uncertainty worry models can predict diagnostic status was analysed. Participants were 19 women diagnosed with GAD and 22 control women and their children. After participating in a diagnostic semi-structured interview, dyads engaged in a free-play interaction where mothers' language sample was collected. Overall, the findings provided evidence for distinctive linguistic features of individuals with GAD. That is, after controlling for the effect of demographic variables, present tense, future tense, prepositions and number of questions correctly classified those with GAD and controls such that a considerable amount of the variance in diagnostic status was explained uniquely by language use. Linguistic confirmation of worry models is discussed.
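
    The classification step described above can be sketched as a logistic regression on linguistic feature rates; the feature matrix below is synthetic and only stands in for the measures named in the abstract (present tense, future tense, prepositions, questions), so the fitted coefficients mean nothing beyond the mechanics:

      # Predict diagnostic status (GAD vs control) from linguistic feature rates.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(11)
      n = 41                                         # 19 GAD + 22 controls, as in the study
      y = np.array([1] * 19 + [0] * 22)              # 1 = GAD, 0 = control
      # synthetic rates per 100 words: present tense, future tense, prepositions, questions
      X = rng.normal(loc=[12, 2, 13, 3], scale=2.0, size=(n, 4))
      X[y == 1, 1] += 1.0                            # inject a small group difference for illustration

      clf = LogisticRegression(max_iter=1000).fit(X, y)
      print("classification accuracy (in-sample):", round(clf.score(X, y), 2))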

  19. Analysing regenerative potential in zebrafish models of congenital muscular dystrophy.

    Science.gov (United States)

    Wood, A J; Currie, P D

    2014-11-01

    The congenital muscular dystrophies (CMDs) are a clinically and genetically heterogeneous group of muscle disorders. Clinically hypotonia is present from birth, with progressive muscle weakness and wasting through development. For the most part, CMDs can mechanistically be attributed to failure of basement membrane protein laminin-α2 sufficiently binding with correctly glycosylated α-dystroglycan. The majority of CMDs therefore arise as the result of either a deficiency of laminin-α2 (MDC1A) or hypoglycosylation of α-dystroglycan (dystroglycanopathy). Here we consider whether by filling a regenerative medicine niche, the zebrafish model can address the present challenge of delivering novel therapeutic solutions for CMD. In the first instance the readiness and appropriateness of the zebrafish as a model organism for pioneering regenerative medicine therapies in CMD is analysed, in particular for MDC1A and the dystroglycanopathies. Despite the recent rapid progress made in gene editing technology, these approaches have yet to yield any novel zebrafish models of CMD. Currently the most genetically relevant zebrafish models to the field of CMD, have all been created by N-ethyl-N-nitrosourea (ENU) mutagenesis. Once genetically relevant models have been established the zebrafish has several important facets for investigating the mechanistic cause of CMD, including rapid ex vivo development, optical transparency up to the larval stages of development and relative ease in creating transgenic reporter lines. Together, these tools are well suited for use in live-imaging studies such as in vivo modelling of muscle fibre detachment. Secondly, the zebrafish's contribution to progress in effective treatment of CMD was analysed. Two approaches were identified in which zebrafish could potentially contribute to effective therapies. The first hinges on the augmentation of functional redundancy within the system, such as upregulating alternative laminin chains in the candyfloss

  20. [Approach to depressogenic genes from genetic analyses of animal models].

    Science.gov (United States)

    Yoshikawa, Takeo

    2004-01-01

    Human depression or mood disorder is defined as a complex disease, making positional cloning of susceptibility genes a formidable task. We have undertaken genetic analyses of three different animal models for depression, comparing our results with advanced database resources. We first performed quantitative trait loci (QTL) analysis on two mouse models of "despair", namely, the forced swim test (FST) and tail suspension test (TST), and detected multiple chromosomal loci that control immobility time in these tests. Since one QTL detected on mouse chromosome 11 harbors the GABA A receptor subunit genes, we tested these genes for association in human mood disorder patients. We obtained significant associations of the alpha 1 and alpha 6 subunit genes with the disease, particularly in females. This result was striking, because we had previously detected an epistatic interaction between mouse chromosomes 11 and X that regulates immobility time in these animals. Next, we performed genome-wide expression analyses using a rat model of depression, learned helplessness (LH). We found that in the frontal cortex of LH rats, a disease implicated region, the LIM kinase 1 gene (Limk 1) showed greatest alteration, in this case down-regulation. By combining data from the QTL analysis of FST/TST and DNA microarray analysis of mouse frontal cortex, we identified adenylyl cyclase-associated CAP protein 1 (Cap 1) as another candidate gene for depression susceptibility. Both Limk 1 and Cap 1 are key players in the modulation of actin G-F conversion. In summary, our current study using animal models suggests disturbances of GABAergic neurotransmission and actin turnover as potential pathophysiologies for mood disorder.

  1. Magnetic fabric analyses in analogue models of clays

    Science.gov (United States)

    García-Lasanta, Cristina; Román-Berdiel, Teresa; Izquierdo-Llavall, Esther; Casas-Sainz, Antonio

    2017-04-01

    Anisotropy of magnetic susceptibility (AMS) studies in sedimentary rocks subjected to deformation indicate that magnetic fabric orientation can be conditioned by multiple factors: sedimentary conditions, magnetic mineralogy, successive tectonic events, etc. All of these complicate the interpretation of the AMS as a marker of the deformation conditions. Analogue modeling makes it possible to isolate the variables that act in a geological process and to determine which factors influence the process and to what extent. This study presents magnetic fabric analyses applied to several analogue models developed with common commercial red clays. This material resembles natural clay materials that, despite their greater degree of impurities and heterogeneity, have been proved to record a robust magnetic signal carried by a mixture of para- and ferromagnetic minerals. The magnetic behavior of the modeled clay has been characterized by temperature-dependent magnetic susceptibility curves (from 40 to 700°C). The measurements were performed combining a KLY-3S Kappabridge susceptometer with a CS3 furnace (AGICO Inc., Czech Republic). The obtained results indicate the presence of an important content of hematite as the ferromagnetic phase, as well as a remarkable paramagnetic fraction, probably constituted by phyllosilicates. This mineralogy is common in natural materials such as Permo-Triassic red facies, and magnetic fabric analyses in these natural examples have given consistent results in different tectonic contexts. In this study, sedimentary conditions and magnetic mineralogy are kept constant and the influence of the tectonic regime on the magnetic fabrics is analyzed. Our main objective is to reproduce several tectonic contexts (strike-slip and compression) in a sedimentary environment where the material is not yet compacted, in order to determine how tectonic conditions influence the magnetic fabric registered in each case. By dispersing the clays in water and after allowing their…

  2. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlations lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  3. Pan-European modelling of riverine nutrient concentrations - spatial patterns, source detection, trend analyses, scenario modelling

    Science.gov (United States)

    Bartosova, Alena; Arheimer, Berit; Capell, Rene; Donnelly, Chantal; Strömqvist, Johan

    2016-04-01

    Nutrient transport models are important tools for large scale assessments of macro-nutrient fluxes (nitrogen, phosphorus) and thus can serve as support tool for environmental assessment and management. Results from model applications over large areas, i.e. from major river basin to continental scales can fill a gap where monitoring data is not available. Here, we present results from the pan-European rainfall-runoff and nutrient transfer model E-HYPE, which is based on open data sources. We investigate the ability of the E-HYPE model to replicate the spatial and temporal variations found in observed time-series of riverine N and P concentrations, and illustrate the model usefulness for nutrient source detection, trend analyses, and scenario modelling. The results show spatial patterns in N concentration in rivers across Europe which can be used to further our understanding of nutrient issues across the European continent. E-HYPE results show hot spots with highest concentrations of total nitrogen in Western Europe along the North Sea coast. Source apportionment was performed to rank sources of nutrient inflow from land to sea along the European coast. An integrated dynamic model as E-HYPE also allows us to investigate impacts of climate change and measure programs, which was illustrated in a couple of scenarios for the Baltic Sea. Comparing model results with observations shows large uncertainty in many of the data sets and the assumptions used in the model set-up, e.g. point source release estimates. However, evaluation of model performance at a number of measurement sites in Europe shows that mean N concentration levels are generally well simulated. P levels are less well predicted which is expected as the variability of P concentrations in both time and space is higher. Comparing model performance with model set-ups using local data for the Weaver River (UK) did not result in systematically better model performance which highlights the complexity of model

  4. Multi-state models: metapopulation and life history analyses

    Directory of Open Access Journals (Sweden)

    Arnason, A. N.

    2004-06-01

    Multi-state models are designed to describe populations that move among a fixed set of categorical states. The obvious application is to population interchange among geographic locations such as breeding sites or feeding areas (e.g., Hestbeck et al., 1991; Blums et al., 2003; Cam et al., 2004), but they are increasingly used to address important questions of evolutionary biology and life history strategies (Nichols & Kendall, 1995). In these applications, the states include life history stages such as breeding states. The multi-state models, by permitting estimation of stage-specific survival and transition rates, can help assess trade-offs between life history mechanisms (e.g. Yoccoz et al., 2000). These trade-offs are also important in meta-population analyses where, for example, the pre- and post-breeding rates of transfer among sub-populations can be analysed in terms of target colony distance, density, and other covariates (e.g., Lebreton et al. 2003; Breton et al., in review). Further examples of the use of multi-state models in analysing dispersal and life-history trade-offs can be found in the session on Migration and Dispersal. In this session, we concentrate on applications that did not involve dispersal. These applications fall in two main categories: those that address life history questions using stage categories, and a more technical use of multi-state models to address problems arising from the violation of mark-recapture assumptions leading to the potential for seriously biased predictions or misleading insights from the models. Our plenary paper, by William Kendall (Kendall, 2004), gives an overview of the use of Multi-state Mark-Recapture (MSMR) models to address two such violations. The first is the occurrence of unobservable states that can arise, for example, from temporary emigration or by incomplete sampling coverage of a target population. Such states can also occur for life history reasons, such…

  5. Dipole model test with one superconducting coil; results analysed

    CERN Document Server

    Durante, M; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D

    2013-01-01

    This report is the deliverable report 7.3.1 “Dipole model test with one superconducting coil; results analysed”. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The four report parts show that, although the magnet construction will only be completed by the end of 2014, all elements are present for a successful completion. Due to the importance of the project for the future of the participants, and given the significant investments made by the participants, there is a full commitment to finish the project.

  6. Dipole model test with one superconducting coil: results analysed

    CERN Document Server

    Bajas, H; Benda, V; Berriaud, C; Bajko, M; Bottura, L; Caspi, S; Charrondiere, M; Clément, S; Datskov, V; Devaux, M; Durante, M; Fazilleau, P; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D

    2013-01-01

    This report is the deliverable report 7.3.1 “Dipole model test with one superconducting coil; results analysed”. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The four report parts show that, although the magnet construction will only be completed by the end of 2014, all elements are present for a successful completion. Due to the importance of the project for the future of the participants, and given the significant investments made by the participants, there is a full commitment to finish the project.

  7. Incorporating flood event analyses and catchment structures into model development

    Science.gov (United States)

    Oppel, Henning; Schumann, Andreas

    2016-04-01

    The space-time variability in catchment response results from several hydrological processes which differ in their relevance in an event-specific way. An approach to characterise this variance consists in comparisons between flood events in a catchment and between the flood responses of several sub-basins in such an event. In analytical frameworks the impact of the space and time variability of rainfall on runoff generation due to rainfall excess can be characterised. Moreover, the effect of hillslope and channel network routing on runoff timing can be specified. Hence, a modelling approach is needed to specify runoff generation and formation. Knowing the space-time variability of rainfall and the (spatially averaged) response of a catchment, it seems worthwhile to develop new models based on event and catchment analyses. The consideration of spatial order and the distribution of catchment characteristics, in their spatial variability and interaction with the space-time variability of rainfall, provides additional knowledge about hydrological processes at the basin scale. For this purpose a new procedure to characterise the spatial heterogeneity of catchment characteristics in their succession along the flow distance (differentiated between river network and hillslopes) was developed. It was applied to a study of flood responses at a set of nested catchments in a river basin in eastern Germany. In this study the highest observed rainfall-runoff events were analysed, beginning at the catchment outlet and moving upstream. With regard to the spatial heterogeneities of catchment characteristics, sub-basins were separated by new algorithms to attribute runoff-generation, hillslope and river network processes. With this procedure the cumulative runoff response at the outlet can be decomposed and individual runoff features can be assigned to individual aspects of the catchment. Through comparative analysis between the sub-catchments and the assigned effects on runoff dynamics new

  8. A theoretical model for analysing gender bias in medicine

    Directory of Open Access Journals (Sweden)

    Johansson Eva E

    2009-08-01

    Full Text Available Abstract During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change by facts only. We suggest consciousness-raising activities and continuous reflections on gender attitudes among students, teachers, researchers and decision-makers.

  9. An Illumination Modeling System for Human Factors Analyses

    Science.gov (United States)

    Huynh, Thong; Maida, James C.; Bond, Robert L. (Technical Monitor)

    2002-01-01

    Seeing is critical to human performance. Lighting is critical for seeing. Therefore, lighting is critical to human performance. This is common sense, and here on earth, it is easily taken for granted. However, on orbit, because the sun will rise or set every 45 minutes on average, humans working in space must cope with extremely dynamic lighting conditions. Contrast conditions of harsh shadowing and glare are also severe. The prediction of lighting conditions for critical operations is essential. Crew training can factor lighting into the lesson plans when necessary. Mission planners can determine whether low-light video cameras are required or whether additional luminaires need to be flown. The optimization of the quantity and quality of light is needed because of the effects on crew safety, on electrical power and on equipment maintainability. To address all of these issues, an illumination modeling system has been developed by the Graphics Research and Analyses Facility (GRAF) and Lighting Environment Test Facility (LETF) in the Space Human Factors Laboratory at NASA Johnson Space Center. The system uses physically based ray tracing software (Radiance) developed at Lawrence Berkeley Laboratories, a human factors oriented geometric modeling system (PLAID) and an extensive database of humans and environments. Material reflectivity properties of major surfaces and critical surfaces are measured using a gonio-reflectometer. Luminaires (lights) are measured for beam spread distribution, color and intensity. Video camera performances are measured for color and light sensitivity. 3D geometric models of humans and the environment are combined with the material and light models to form a system capable of predicting lighting conditions and visibility conditions in space.

  10. Examining an important urban transportation management tool: subarea modeling

    Directory of Open Access Journals (Sweden)

    Xueming CHEN

    2009-12-01

    Full Text Available At present, customized subarea models have been widely used in local transportation planning throughout the United States. The biggest strengths of a subarea model lie in its more detailed and accurate modeling outputs, which better meet local planning requirements. In addition, a subarea model can substantially reduce database size and model running time. In spite of these advantages, subarea models remain quite weak in maintaining consistency with a regional model and in modeling transit projects, smart growth measures, air quality conformity, and other areas. Both opportunities and threats exist for subarea modeling. In addition to examining subarea models, this paper introduces the decision-making process in choosing a proper subarea modeling approach (windowing versus focusing) and software package. This study concludes that subarea modeling will become more popular in the future. More GIS applications, travel surveys, transit modeling, microsimulation software utilization, and other modeling improvements are expected to be incorporated into the subarea modeling process.

  11. Comparison of two potato simulation models under climate change. I. Model calibration and sensitivity analyses

    NARCIS (Netherlands)

    Wolf, J.

    2002-01-01

    To analyse the effects of climate change on potato growth and production, both a simple growth model, POTATOS, and a comprehensive model, NPOTATO, were applied. Both models were calibrated and tested against results from experiments and variety trials in The Netherlands. The sensitivity of model

  12. Examining a Model of Life Satisfaction among Unemployed Adults

    Science.gov (United States)

    Duffy, Ryan D.; Bott, Elizabeth M.; Allan, Blake A.; Torrey, Carrie L.

    2013-01-01

    The present study examined a model of life satisfaction among a diverse sample of 184 adults who had been unemployed for an average of 10.60 months. Using the Lent (2004) model of life satisfaction as a framework, a model was tested with 5 hypothesized predictor variables: optimism, job search self-efficacy, job search support, job search…

  13. APPLYING LOGISTIC REGRESSION MODEL TO THE EXAMINATION RESULTS DATA

    Directory of Open Access Journals (Sweden)

    Goutam Saha

    2011-01-01

    Full Text Available The binary logistic regression model is used to analyze the school examination results (scores) of 1002 students. The analysis is performed on the basis of the independent variables, viz. gender, medium of instruction, type of schools, category of schools, board of examinations and location of schools, where scores or marks are assumed to be dependent variables. The odds ratio analysis compares the scores obtained in two examinations, viz. matriculation and higher secondary.
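    As a rough illustration of the kind of model described in this record, a binary logistic regression with categorical predictors can be fitted as sketched below; the data, variable names and coding are synthetic assumptions for the example, not the study's data set.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Synthetic stand-in for the examination data set described above;
        # variable names and coding are illustrative assumptions.
        rng = np.random.default_rng(0)
        n = 1002
        df = pd.DataFrame({
            "gender": rng.choice(["female", "male"], n),
            "medium": rng.choice(["english", "vernacular"], n),
            "school_type": rng.choice(["government", "private"], n),
            "location": rng.choice(["urban", "rural"], n),
        })
        # Outcome: 1 if the student scored above a pass mark, 0 otherwise.
        logit_pass = (0.4 * (df["medium"] == "english")
                      + 0.6 * (df["school_type"] == "private")
                      + 0.3 * (df["location"] == "urban") - 0.5)
        df["passed"] = rng.binomial(1, 1 / (1 + np.exp(-logit_pass)))

        # Binary logistic regression with categorical predictors; exponentiated
        # coefficients are the odds ratios discussed in the abstract.
        model = smf.logit("passed ~ C(gender) + C(medium) + C(school_type) + C(location)",
                          data=df).fit(disp=False)
        print(np.exp(model.params))   # odds ratios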

  14. Temporal variations analyses and predictive modeling of microbiological seawater quality.

    Science.gov (United States)

    Lušić, Darija Vukić; Kranjčević, Lado; Maćešić, Senka; Lušić, Dražen; Jozić, Slaven; Linšak, Željko; Bilajac, Lovorka; Grbčić, Luka; Bilajac, Neiro

    2017-08-01

    Bathing water quality is a major public health issue, especially for tourism-oriented regions. Currently used methods within the EU require at least a 2.2-day period for obtaining the analytical results, so that the information forwarded to the public is already outdated. The obtained results and beach assessment are influenced by the temporal and spatial characteristics of sample collection and numerous environmental parameters, as well as by differences in official water standards. This paper examines the temporal variation of microbiological parameters during the day, as well as the influence of the sampling hour, on decision processes in the management of the beach. Apart from the fecal indicators stipulated by the EU Bathing Water Directive (E. coli and enterococci), additional fecal (C. perfringens) and non-fecal (S. aureus and P. aeruginosa) parameters were analyzed. Moreover, the effects of applying different evaluation criteria (national, EU and U.S. EPA) to beach ranking were studied, and the most common reasons for exceeding water-quality standards were investigated. In order to upgrade routine monitoring, a predictive statistical model was developed. The highest concentrations of fecal indicators were recorded early in the morning (6 AM) due to the lack of solar radiation during the night period. When compared to enterococci, the E. coli criteria appear to be more stringent for the detection of fecal pollution. In comparison to EU and U.S. EPA criteria, Croatian national evaluation criteria provide stricter public health standards. Solar radiation and precipitation were the predominant environmental parameters affecting beach water quality, and these parameters were included in the predictive model setup. Predictive models revealed great potential for the monitoring of recreational water bodies, and with further development can become a useful tool for the improvement of public health protection. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. An Examination of Need-Satisfaction Models of Job Attitudes

    Science.gov (United States)

    Salancik, Gerald R.; Pfeffer, Jeffrey

    1977-01-01

    An examination of need-satisfaction models indicates that they are frequently formulated so as to be almost impossible to refute, and the research testing them has been beset with consistency and priming artifacts. (Author/IRT)

  16. Reproductive mode evolution in lizards revisited: updated analyses examining geographic, climatic and phylogenetic effects support the cold-climate hypothesis.

    Science.gov (United States)

    Watson, C M; Makowsky, R; Bagley, J C

    2014-12-01

    Viviparity, the bearing of live young, has evolved well over 100 times among squamate reptiles. This reproductive strategy is hypothesized to allow maternal control of the foetus' thermal environment and thereby to increase the fitness of the parents and offspring. Two hypotheses have been posited to explain this phenomenon: (i) the cold-climate hypothesis (CCH), which advocates low temperatures as the primary selective force; and (ii) the maternal manipulation hypothesis (MMH), which advocates temperature variability as the primary selective force. Here, we investigate whether climatic and geographic variables associated with the CCH vs. the MMH best explain the current geographical distributions of viviparity in lizards while incorporating recent advances in comparative methods, squamate phylogenetics and geospatial analysis. To do this, we compared nonphylogenetic and phylogenetic models predicting viviparity based on point-of-capture data from 20,994 museum specimens representing 215 lizard species in conjunction with spatially explicit bioclimatic and geographic (elevation and latitude) data layers. The database we analysed emphasized Nearctic lizards from three species-rich genera (Phrynosoma, Plestiodon and Sceloporus); however, we additionally analysed a less substantial, but worldwide sample of species to verify the universality of our Nearctic results. We found that maximum temperature of the warmest month (and, less commonly, elevation and maximum temperature of the driest quarter) was frequently the best predictor of viviparity and showed an association consistent with the CCH. Our results strongly favour the CCH over the MMH in explaining lizard reproductive mode evolution.

  17. Examining the Nelson-Siegel Class of Term Structure Models

    NARCIS (Netherlands)

    M.D. de Pooter (Michiel)

    2007-01-01

    textabstractIn this paper I examine various extensions of the Nelson and Siegel (1987) model with the purpose of fitting and forecasting the term structure of interest rates. As expected, I find that using more flexible models leads to a better in-sample fit of the term structure. However, I show th

  18. Examining Elementary Social Studies Marginalization: A Multilevel Model

    Science.gov (United States)

    Fitchett, Paul G.; Heafner, Tina L.; Lambert, Richard G.

    2014-01-01

    Utilizing data from the National Center for Education Statistics Schools and Staffing Survey (SASS), a multilevel model (Hierarchical Linear Model) was developed to examine the association of teacher/classroom and state level indicators on reported elementary social studies instructional time. Findings indicated that state testing policy was a…

  19. Analyses on Four Models and Cases of Enterprise Informatization

    Institute of Scientific and Technical Information of China (English)

    Shi Chunsheng(石春生); Han Xinjuan; Yang Cuilan; Zhao Dongbai

    2003-01-01

    The basic conditions of enterprise informatization in Heilongjiang province are analyzed and four models are designed to drive the informatization of industrial and commercial enterprises. The four models are the Resource Integration Informatization Model, the Flow Management Informatization Model, the Intranet E-commerce Informatization Model and the Network Enterprise Informatization Model. The conditions for using these four models, and the problems needing attention, are also analyzed.

  20. A comprehensive examination of the model underlying acceptance and commitment therapy for chronic pain.

    Science.gov (United States)

    Vowles, Kevin E; Sowden, Gail; Ashworth, Julie

    2014-05-01

    The therapeutic model underlying Acceptance and Commitment Therapy (ACT) is reasonably well-established as it applies to chronic pain. Several studies have examined measures of single ACT processes, or subsets of processes, and have almost uniformly indicated reliable relations with patient functioning. To date, however, no study has performed a comprehensive examination of the entire ACT model, including all of its component processes, as it relates to functioning. The present study performed this examination in 274 individuals with chronic pain presenting for an assessment appointment. Participants completed a battery of self-report questionnaires, assessing multiple aspects of the ACT model, as well as pain intensity, disability, and emotional distress. Initial exploratory factor analyses examined measures of the ACT model and measures of patient functioning separately with each analysis identifying three factors. Next, the fit of a model including ACT processes on the one hand and patient functioning on the other was examined using Structural Equation Modeling. Overall model fit was acceptable and indicated moderate correlations among the ACT processes themselves, as well as significant relations with pain intensity, emotional distress, and disability. These analyses build on the existing literature by providing, to our knowledge, the most comprehensive evaluation of the ACT theoretical model in chronic pain to date. Copyright © 2014. Published by Elsevier Ltd.

  1. An Examination of Family Communication within the Core and Balance Model of Family Leisure Functioning

    Science.gov (United States)

    Smith, Kevin M.; Freeman, Patti A.; Zabriskie, Ramon B.

    2009-01-01

    The purpose of this study was to examine family communication within the core and balance model of family leisure functioning. The study was conducted from a youth perspective of family leisure and family functioning. The sample consisted of youth (N= 95) aged 11 - 17 from 25 different states in the United States. Path analyses indicated that…

  2. Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Gunzburger, Max [Florida State Univ., Tallahassee, FL (United States)

    2015-02-17

    We have treated the modeling, analysis, numerical analysis, and algorithmic development for nonlocal models of diffusion and mechanics. Variational formulations were developed and finite element methods were developed based on those formulations for both steady state and time dependent problems. Obstacle problems and optimization problems for the nonlocal models were also treated and connections made with fractional derivative models.

  3. Examination of Hurricane Sandy's (2012) structure and intensity evolution from full-field and anomaly-field analyses

    Directory of Open Access Journals (Sweden)

    Wei-Hong Qian

    2016-08-01

    Full Text Available An anomaly-based field analysis approach and a set of simple beta-advection models (BAMs) have been used to examine the structure evolution and unusual left turn of Hurricane Sandy (2012) before it made landfall and caused severe damage along the eastern US coast. Results show that the anomaly-based analysis approach can clearly reveal Sandy's structure evolution, including its interaction with other synoptic-scale systems as well as the intensification and extratropical transition (ET) processes. During its lifetime, Sandy experienced two consecutive periods of intensification caused by the merging of anomalous vortices on 27 and 29 October. The unusual left turn and the ET process prior to landfall are respectively influenced by an anomalous anticyclone to the northeast and an anomalous cold vortex in the 300–850 hPa layer to the northwest, which is confirmed by the experiments using the generalised BAM.

  4. A New Model for Training in Periodontal Examinations Using Manikins.

    Science.gov (United States)

    Heym, Richard; Krause, Sebastian; Hennessen, Till; Pitchika, Vinay; Ern, Christina; Hickel, Reinhard

    2016-12-01

    The aim of this study was to develop and test models for training dental students in periodontal examinations using manikins that had distinct anatomical designs but were indistinguishable in external appearance. After four models were tested for inter- and intra-examiner reliability by two experienced dentists, 26 additional models were produced. The models were tested by 35 dental students at a dental school in Germany in 2014. The testing involved completing a periodontal examination that included probing depths, gingival recessions, and furcation involvements. The primary purpose of the study was to determine whether the models could be used as a tool for periodontal examination training by the students. Levels of agreement (students and dentists) and Kappa statistics (dentists) were calculated using absolute (±0 mm) and tolerable difference (±1 mm). Over the span of two weeks, the dentists' reliability with preset values for probing depths, gingival recessions, and furcation involvements ranged from 0.29 to 0.38, 0.52 to 0.61, and 0.54 to 0.57, respectively, under absolute difference and from 0.86 to 0.90, 0.96 to 0.99, and 0.62 to 0.73, respectively, under tolerable difference. The students' proportions of agreement for probing depths and gingival recessions under absolute vs. tolerable difference were 34.8% vs. 79.9% and 71.9% vs. 94.4%, respectively. The students frequently scored values higher than the preset values, overestimated furcation involvements, and failed to differentiate the levels of furcations. The models used did not pose any systematic or technical difficulties in the pilot study. Students were unable to measure furcation involvements with acceptable agreement. Thus, these models could be used for student periodontal examination training.

  5. Unmix 6.0 Model for environmental data analyses

    Science.gov (United States)

    Unmix Model is a mathematical receptor model developed by EPA scientists that provides scientific support for the development and review of the air and water quality standards, exposure research, and environmental forensics.

  6. Examination of Self-Determination within the Sport Education Model

    Science.gov (United States)

    Perlman, Dana J.

    2011-01-01

    The purpose of this study was to examine the influence of the Sport Education Model (SEM) on students' self-determined motivation and underlying psychological need(s) in physical education. A total of 182 Year-9 students were engaged in 20 lesson units of volleyball, using either the SEM or a traditional approach. Data was collected using a…

  7. Examining Response to Intervention (RTI) Models in Secondary Education

    Science.gov (United States)

    Epler, Pam, Ed.

    2015-01-01

    Response to Intervention (RTI) is an intervention model designed to assist all students regardless of their academic ability. It seeks to assist students who are struggling in academics by providing them with targeted assistance in the form of tutoring, pull-out services, and differentiated classroom instruction. "Examining Response to…

  8. Analysing Models as a Knowledge Technology in Transport Planning

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik

    2011-01-01

    Models belong to a wider family of knowledge technologies, applied in the transport area. Models sometimes share with other such technologies the fate of not being used as intended, or not at all. The result may be ill-conceived plans as well as wasted resources. Frequently, the blame ... critical analytic literature on knowledge utilization and policy influence. A simple scheme based in this literature is drawn up to provide a framework for discussing the interface between urban transport planning and model use. A successful example of model use in Stockholm, Sweden is used as a heuristic

  9. Analyses of Tsunami Events using Simple Propagation Models

    Science.gov (United States)

    Chilvery, Ashwith Kumar; Tan, Arjun; Aggarwal, Mohan

    2012-03-01

    Tsunamis exhibit the characteristics of "canal waves" or "gravity waves", which belong to the class of long ocean waves on shallow water. The memorable tsunami events, including the 2004 Indian Ocean tsunami and the 2011 Pacific Ocean tsunami off the coast of Japan, are analyzed by constructing simple tsunami propagation models including the following: (1) One-dimensional propagation model; (2) Two-dimensional propagation model on a flat surface; (3) Two-dimensional propagation model on a spherical surface; and (4) A finite line-source model on a two-dimensional surface. It is shown that Model 1 explains the basic features of the tsunami including the propagation speed, depth of the ocean, dispersion-less propagation and bending of tsunamis around obstacles. Models 2 and 3 explain the observed amplitude variations for long-distance tsunami propagation across the Pacific Ocean, including the effect of the equatorial ocean current on the arrival times. Model 3 further explains the enhancement effect on the amplitude due to the curvature of the Earth past the equatorial distance. Finally, Model 4 explains the devastating effect of the superposition of tsunamis from two subduction events, which struck the Phuket region during the 2004 Indian Ocean tsunami.
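    The basic features attributed to Model 1 follow from the long-wave (shallow-water) relation c = sqrt(g*h), in which the propagation speed depends only on ocean depth and not on wavelength. A minimal sketch, using round illustrative depths and distances rather than figures from the paper:

        import math

        # Long-wave (shallow-water) phase speed, c = sqrt(g * h); depths and
        # distances below are round illustrative values, not from the paper.
        g = 9.81  # m s^-2

        for label, depth_m, distance_km in [
            ("open Indian Ocean", 4000.0, 1600.0),   # roughly Sumatra to Sri Lanka
            ("continental shelf", 100.0, 100.0),
        ]:
            c = math.sqrt(g * depth_m)               # m/s, independent of wavelength
            hours = (distance_km * 1000.0) / c / 3600.0
            print(f"{label}: c = {c:6.1f} m/s ({c * 3.6:5.0f} km/h), "
                  f"{distance_km:.0f} km in {hours:.1f} h")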

  10. An Examination of Extended a-Rescaling Model

    Institute of Scientific and Technical Information of China (English)

    YAN Zhan-Yuan; DUAN Chun-Gui; HE Zhen-Min

    2001-01-01

    The extended x-rescaling model can explain the quark's nuclear effect very well. Whether it can also explain the gluon's nuclear effect should be investigated further. Associated J/ψ and γ production with large PT is a very clean channel to probe the gluon distribution in the proton or nucleus. In this paper, using the extended x-rescaling model, the PT distribution of the nuclear effect factors of the p + Fe → J/Ψ + γ + X process is calculated and discussed. By comparing our theoretical results with future experimental data, the extended x-rescaling model can be examined.

  11. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require to update consistently both their implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The pur

  12. Hyperelastic Modelling and Finite Element Analysing of Rubber Bushing

    Directory of Open Access Journals (Sweden)

    Merve Yavuz ERKEK

    2015-03-01

    Full Text Available The objective of this paper is to obtain the stiffness curves of rubber bushings, which are used in the automotive industry, with a hyperelastic finite element model. Hyperelastic material models were obtained from different material tests. Stress and strain values and static stiffness curves were determined. It is shown that the static stiffness curves are nonlinear. The level of stiffness affects the vehicle dynamics behaviour.

  13. Modelling theoretical uncertainties in phenomenological analyses for particle physics

    CERN Document Server

    Charles, Jérôme; Niess, Valentin; Silva, Luiz Vale

    2016-01-01

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding $p$-values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive $p$-value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavour p...

  14. Modeling theoretical uncertainties in phenomenological analyses for particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)

    2017-04-15

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)
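    A minimal sketch of the nuisance (bias) treatment described here: the p-value is taken as the most conservative value over an assumed bias range, so the result depends directly on the range chosen. The adaptive adjustment of that range proposed by the authors is not reproduced, and all numbers are illustrative.

        import numpy as np
        from scipy.stats import norm

        # A measurement x with statistical uncertainty sigma is compared with a
        # tested value mu that may be shifted by a theory bias delta confined to
        # [-Delta, Delta]; the reported p-value is the supremum over that range.
        x, sigma = 1.8, 0.5        # illustrative measurement and uncertainty
        mu, Delta = 0.0, 0.3       # tested value and assumed bias range

        def p_value(delta):
            z = abs(x - mu - delta) / sigma
            return 2.0 * (1.0 - norm.cdf(z))    # two-sided Gaussian p-value

        deltas = np.linspace(-Delta, Delta, 601)
        print("naive p (no bias):        ", p_value(0.0))
        print("nuisance p (sup over bias):", max(p_value(d) for d in deltas))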

  15. Assessment of a geological model by surface wave analyses

    Science.gov (United States)

    Martorana, R.; Capizzi, P.; Avellone, G.; D'Alessandro, A.; Siragusa, R.; Luzio, D.

    2017-02-01

    A set of horizontal to vertical spectral ratio (HVSR) and multichannel analysis of surface waves (MASW) measurements, carried out in the Altavilla Milicia (Sicily) area, is analyzed to test a geological model of the area. Statistical techniques have been used in different stages of the data analysis to optimize the reliability of the information extracted from geophysical measurements. In particular, cluster analysis algorithms have been implemented to select the time windows of the microseismic signal to be used for calculating the spectral ratio H/V and to identify sets of spectral ratio peaks likely caused by the same underground structures. Using results of reflection seismic lines, typical values of P-wave and S-wave velocity were estimated for each geological formation present in the area. These were used to narrow down the search space of parameters for the HVSR interpretation. MASW profiles, carried out close to each HVSR measuring point, provided the parameters of the shallower layers for the HVSR models. MASW inversion has been constrained by extrapolating thicknesses from a known stratigraphic sequence. Preliminary 1D seismic models were obtained by adding deeper layers to the models that resulted from MASW inversion. These justify the peaks of the HVSR curves due to layers deeper than the MASW investigation depth. Furthermore, much deeper layers were included in the HVSR model, as suggested by the geological setting and stratigraphic sequence. This choice was made considering that these latter layers do not generate other HVSR peaks and do not significantly affect the misfit. The starting models have been used to limit the starting search space for a more accurate interpretation, made considering the noise as a superposition of Rayleigh and Love waves. Results allowed us to recognize four main seismic layers and to associate them with the main stratigraphic successions. The lateral correlation of the seismic velocity models, joined with tectonic evidence
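    For orientation, the H/V spectral ratio at the core of this workflow can be sketched as the ratio of horizontal to vertical noise power spectra. The example below uses synthetic three-component noise and a simple Welch estimate, and omits the window selection by cluster analysis and the joint Rayleigh/Love modelling used in the study.

        import numpy as np
        from scipy.signal import welch

        fs = 100.0                        # sampling rate, Hz (illustrative)
        t = np.arange(0, 600.0, 1 / fs)   # 10 minutes of synthetic ambient noise
        rng = np.random.default_rng(0)

        # Fake site response: add a narrowband component near 2 Hz to the
        # horizontals only, so the H/V curve peaks there.
        resonance = np.sin(2 * np.pi * 2.0 * t + rng.uniform(0, 2 * np.pi))
        z_comp = rng.normal(size=t.size)
        n_comp = rng.normal(size=t.size) + 2.0 * resonance
        e_comp = rng.normal(size=t.size) + 2.0 * resonance

        f, pz = welch(z_comp, fs=fs, nperseg=4096)
        _, pn = welch(n_comp, fs=fs, nperseg=4096)
        _, pe = welch(e_comp, fs=fs, nperseg=4096)

        # Geometric mean of the horizontal spectra over the vertical spectrum.
        hv = np.sqrt(np.sqrt(pn * pe) / pz)
        band = (f > 0.5) & (f < 20.0)
        print("H/V peak at %.2f Hz" % f[band][np.argmax(hv[band])])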

  16. Compound dislocation models (CDMs) for volcano deformation analyses

    Science.gov (United States)

    Nikkhoo, Mehdi; Walter, Thomas R.; Lundgren, Paul R.; Prats-Iraola, Pau

    2017-02-01

    Volcanic crises are often preceded and accompanied by volcano deformation caused by magmatic and hydrothermal processes. Fast and efficient model identification and parameter estimation techniques for various sources of deformation are crucial for process understanding, volcano hazard assessment and early warning purposes. As a simple model that can be a basis for rapid inversion techniques, we present a compound dislocation model (CDM) that is composed of three mutually orthogonal rectangular dislocations (RDs). We present new RD solutions, which are free of artefact singularities and that also possess full rotational degrees of freedom. The CDM can represent both planar intrusions in the near field and volumetric sources of inflation and deflation in the far field. Therefore, this source model can be applied to shallow dikes and sills, as well as to deep planar and equidimensional sources of any geometry, including oblate, prolate and other triaxial ellipsoidal shapes. In either case the sources may possess any arbitrary orientation in space. After systematically evaluating the CDM, we apply it to the co-eruptive displacements of the 2015 Calbuco eruption observed by the Sentinel-1A satellite in both ascending and descending orbits. The results show that the deformation source is a deflating vertical lens-shaped source at an approximate depth of 8 km centred beneath Calbuco volcano. The parameters of the optimal source model clearly show that it is significantly different from an isotropic point source or a single dislocation model. The Calbuco example reflects the convenience of using the CDM for a rapid interpretation of deformation data.

  17. A Formal Model to Analyse the Firewall Configuration Errors

    Directory of Open Access Journals (Sweden)

    T. T. Myo

    2015-01-01

    Full Text Available The firewall is widely known as a brandmauer (security-edge gateway). To provide the required security, the firewall has to be appropriately adjusted, i.e. configured. Unfortunately, when configuring, even skilled administrators may make mistakes, which result in a lower level of network security and in undesirable packets infiltrating the network. The network can be exposed to various threats and attacks. One of the mechanisms used to ensure network security is the firewall. The firewall is a network component which, using a security policy, controls packets passing through the borders of a secured network. The security policy represents a set of rules. Packet filters work in stateless mode: they inspect packets as independent objects. Rules take the following form: (condition, action). The firewall analyses the incoming traffic based on the IP addresses of the sender and recipient, the port numbers of the sender and recipient, and the protocol used. When a packet meets the conditions of a rule, the action specified in the rule is carried out. It can be: allow, deny. The aim of this article is to develop tools to analyse a firewall configuration with state inspection. The input data are a file with the set of rules. It is required to present the analysis of a security policy in an informative graphical form as well as to reveal inconsistencies in the rules. The article presents a security policy visualization algorithm and a program which shows how the firewall rules act on all possible packets. To represent the result in an intelligible form, the concept of an equivalence region is introduced. Our task is for the program to display the results of the rules' action on packets in a convenient graphical form as well as to reveal contradictions between the rules. One of the problems is the large number of dimensions. As was noted above, the following parameters are specified in a rule: source IP address, destination IP
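    A toy sketch of the stateless (condition, action) rule matching the abstract describes, with first-match semantics and a default deny; the rule set, field names and packets are invented for illustration and are unrelated to the authors' tool.

        import ipaddress

        # Invented example rules: condition fields plus an action, evaluated
        # top to bottom; the last rule acts as a default deny.
        RULES = [
            {"src": "10.0.0.0/8", "dst": "any", "dport": 22,   "proto": "tcp", "action": "allow"},
            {"src": "any",        "dst": "any", "dport": 80,   "proto": "tcp", "action": "allow"},
            {"src": "any",        "dst": "any", "dport": None, "proto": "any", "action": "deny"},
        ]

        def matches(rule, pkt):
            # A rule condition holds only if every specified field matches.
            if rule["src"] != "any" and ipaddress.ip_address(pkt["src"]) not in ipaddress.ip_network(rule["src"]):
                return False
            if rule["dst"] != "any" and ipaddress.ip_address(pkt["dst"]) not in ipaddress.ip_network(rule["dst"]):
                return False
            if rule["dport"] is not None and rule["dport"] != pkt["dport"]:
                return False
            if rule["proto"] != "any" and rule["proto"] != pkt["proto"]:
                return False
            return True

        def decide(pkt):
            # First-match semantics: apply the action of the first matching rule.
            for rule in RULES:
                if matches(rule, pkt):
                    return rule["action"]
            return "deny"

        print(decide({"src": "10.1.2.3", "dst": "192.0.2.7", "dport": 22, "proto": "tcp"}))      # allow
        print(decide({"src": "198.51.100.4", "dst": "192.0.2.7", "dport": 25, "proto": "tcp"}))  # deny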

  18. Examination and analysis of the experimental conditions for the MTT colorimetric reaction

    Institute of Scientific and Technical Information of China (English)

    赵承彦; 靖志安; 牛青霞

    2000-01-01

    Objective: To improve the experimental conditions of the colorimetric assay based on the reduction of the tetrazolium salt MTT by living cells, to enhance its sensitivity and reliability, and to lower the background. Methods: The colorimetric assay with MTT, together with cytotoxicity and cell proliferation experiments. Results: Analysis and examination of the experimental conditions of the colorimetric assay with MTT found that the majority of the mitochondrial enzyme succinate dehydrogenase is distributed over the surface of the cell membrane; the environment for the reaction of the enzyme with MTT is pH 6.7; after the cells have been incubated in the common medium containing serum and phenol red, the supernatant is discarded and the cells alone are left to react with MTT, with subsequent use of pure propanol to rapidly dissolve the formazan. The pure organic solvent can precipitate trace proteins, which are removed by centrifugation, and a slightly acidic organic solvent (pH 5.8) can also eliminate the influence of phenol red on the assay. Conclusion: When the experimental conditions are improved in this way, the curve of the cell-enzyme reaction with MTT and the curve of the MTT cleavage reaction against time in the saturated state show a linear relationship. The background is greatly lowered; the sensitivity and reliability are heightened; the minimum detectable numbers of K-562 cells and human T-cells are, respectively, 1,000 and 10,000 cells in our hands. Objective: To examine and analyse the factors responsible for the high background, low sensitivity and poor stability of the MTT assay, to improve the experimental conditions so as to raise the sensitivity and stability, and to lower the background. Methods: The MTT colorimetric method, together with cytotoxicity and cell proliferation experiments, was used to analyse and compare the performance of the MTT colour reaction under different conditions. Results: The ability of 13 solvents to dissolve formazan was compared; the distribution of succinate dehydrogenase on cells, the influence of pH on the MTT reaction system, and the effects of phenol red and bovine serum on the results were examined and corresponding countermeasures proposed; MTT reaction-time curves and cell viability curves were measured; the improved MTT assay was analysed

  19. Enhancing Technology-Mediated Communication: Tools, Analyses, and Predictive Models

    Science.gov (United States)

    2007-09-01

    …in the home (see, for example, Nagel, Hudson, & Abowd, 2004), in social settings (see Kern, Antifakos, Schiele, & Schwaninger, 2004)…

  20. Gene Discovery and Functional Analyses in the Model Plant Arabidopsis

    Institute of Scientific and Technical Information of China (English)

    Cai-Ping Feng; John Mundy

    2006-01-01

    The present mini-review describes newer methods and strategies, including transposon and T-DNA insertions,TILLING, Deleteagene, and RNA interference, to functionally analyze genes of interest in the model plant Arabidopsis. The relative advantages and disadvantages of the systems are also discussed.

  1. Gene Discovery and Functional Analyses in the Model Plant Arabidopsis

    DEFF Research Database (Denmark)

    Feng, Cai-ping; Mundy, J.

    2006-01-01

    The present mini-review describes newer methods and strategies, including transposon and T-DNA insertions, TILLING, Deleteagene, and RNA interference, to functionally analyze genes of interest in the model plant Arabidopsis. The relative advantages and disadvantages of the systems are also...

  2. Examining subgrid models of supermassive black holes in cosmological simulation

    CERN Document Server

    Sutter, P M

    2010-01-01

    While supermassive black holes (SMBHs) play an important role in galaxy and cluster evolution, at present they can only be included in large-scale cosmological simulation via subgrid techniques. However, these subgrid models have not been studied in a systematic fashion. Using a newly-developed fast, parallel spherical overdensity halo finder built into the simulation code FLASH, we perform a suite of dark matter-only cosmological simulations to study the effects of subgrid model choice on relations between SMBH mass and dark matter halo mass and velocity dispersion. We examine three aspects of SMBH subgrid models: the choice of initial black hole seed mass, the test for merging two black holes, and the frequency of applying the subgrid model. We also examine the role that merging can play in determining the relations, ignoring the complicating effects of SMBH-driven accretion and feedback. We find that the choice of subgrid model can dramatically affect the black hole merger rate, the cosmic SMBH mass densit...

  3. Transactional Models Between Personality and Alcohol Involvement: A Further Examination

    OpenAIRE

    2012-01-01

    Although correlated changes between personality and alcohol involvement have been shown, the functional relation between these constructs is also of theoretical and clinical interest. Using bivariate latent difference score models, we examined transactional relations (i.e., personality predicting changes in alcohol involvement, which in turn predicts changes in personality) across two distinct but overlapping developmental time frames (i.e., across college and during young adulthood) using tw...

  4. A new model for analysing thermal stress in granular composite

    Institute of Scientific and Technical Information of China (English)

    郑茂盛; 金志浩; 浩宏奇

    1995-01-01

    A double-embedding model, in which a reinforcement grain and a hollow matrix ball are embedded in the effective medium of the particulate-reinforced composite, is advanced. With this model the distributions of thermal stress in the different phases of the composite during cooling are studied. Various expressions for predicting elastic and elastoplastic thermal stresses are derived. It is found that the reinforcement is subjected to compressive hydrostatic stress and the matrix zone to tensile hydrostatic stress when the temperature decreases; when the temperature decreases further, a yield area forms in the matrix; when the volume fraction of reinforcement is enlarged, the compressive stress on the grain and the tensile hydrostatic stress in the matrix zone decrease, the initial temperature difference for yielding at the reinforcement-matrix interface rises, while that for overall yielding of the matrix decreases.

  5. Analysing an Analytical Solution Model for Simultaneous Mobility

    Directory of Open Access Journals (Sweden)

    Md. Ibrahim Chowdhury

    2013-12-01

    Full Text Available Current mobility models for simultaneous mobility have their convolution in designing simultaneous movement, where mobile nodes (MNs) travel randomly from two adjacent cells at the same time, and also have their complexity in the measurement of the occurrences of simultaneous handover. The simultaneous mobility problem occurs when two MNs start handover at approximately the same time. As simultaneous mobility differs from other mobility patterns and generally occurs less often in real time, we argue that a simplified simultaneous mobility model can be considered by taking only symmetric positions of MNs with random steps. In addition, we simulated the model using mSCTP and compared the simulation results in different scenarios with customized cell ranges. The analytical results show that the bigger the cell sizes, the less frequent simultaneous handover with random steps becomes, while for sequential mobility (where the initial positions of MNs are predetermined, with random steps), simultaneous handover is more frequent.

  6. A simulation model for analysing brain structure deformations

    Energy Technology Data Exchange (ETDEWEB)

    Bona, Sergio Di [Institute for Information Science and Technologies, Italian National Research Council (ISTI-CNR), Via G Moruzzi, 1-56124 Pisa (Italy); Lutzemberger, Ludovico [Department of Neuroscience, Institute of Neurosurgery, University of Pisa, Via Roma, 67-56100 Pisa (Italy); Salvetti, Ovidio [Institute for Information Science and Technologies, Italian National Research Council (ISTI-CNR), Via G Moruzzi, 1-56124 Pisa (Italy)

    2003-12-21

    Recent developments of medical software applications, from the simulation to the planning of surgical operations, have revealed the need for modelling human tissues and organs not only from a geometric point of view but also from a physical one, i.e. soft tissues, rigid body, viscoelasticity, etc. This has given rise to the term 'deformable objects', which refers to objects with a morphology and a physical and mechanical behaviour of their own that reflect their natural properties. In this paper, we propose a model, based upon physical laws, suitable for the realistic manipulation of geometric reconstructions of volumetric data taken from MR and CT scans. In particular, a physically based model of the brain is presented that is able to simulate the evolution of pathological intracranial phenomena of different natures, such as haemorrhages, neoplasms and haematomas, and to describe the consequences caused by their volume expansion and the influence they have on the anatomical and neuro-functional structures of the brain.

  7. Analyses of Cometary Silicate Crystals: DDA Spectral Modeling of Forsterite

    Science.gov (United States)

    Wooden, Diane

    2012-01-01

    Comets are the Solar System's deep freezers of gases, ices, and particulates that were present in the outer protoplanetary disk. Where comet nuclei accreted was so cold that CO ice (approximately 50 K) and other supervolatile ices like ethane (C2H6) were preserved. However, comets also accreted high-temperature minerals: silicate crystals that either condensed (greater than or equal to 1400 K) or that were annealed from amorphous (glassy) silicates (greater than 850-1000 K). Given their rarity in the interstellar medium, cometary crystalline silicates are thought to be grains that formed in the inner disk and were then radially transported out to the cold and ice-rich regimes near Neptune. The questions that comets can potentially address are: How fast, how far, and over what duration were crystals that formed in the inner disk transported out to the comet-forming region(s)? In comets, the mass fractions of silicates that are crystalline, f_cryst, translate to benchmarks for protoplanetary disk radial transport models. The infamous comet Hale-Bopp has crystalline fractions of over 55%. The values for cometary crystalline mass fractions, however, are derived assuming that the mineralogy assessed for the submicron to micron-sized portion of the size distribution represents the compositional makeup of all larger grains in the coma. Models for fitting cometary SEDs make this assumption because models can only fit the observed features with submicron to micron-sized discrete crystals. On the other hand, larger (0.1-100 micrometer radii) porous grains composed of amorphous silicates and amorphous carbon can be easily computed with mixed medium theory wherein vacuum mixed into a spherical particle mimics a porous aggregate. If crystalline silicates are mixed in, the models completely fail to match the observations. Moreover, models for a size distribution of discrete crystalline forsterite grains commonly employ the CDE computational method for ellipsoidal platelets (c:a:b=8

  8. Reading Ability Development from Kindergarten to Junior Secondary: Latent Transition Analyses with Growth Mixture Modeling

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2016-10-01

    Full Text Available The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study (Kindergarten Class of 1998-99) data (Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006) under a dynamic systems perspective. To depict children's growth patterns, we extended the measurement part of latent transition analysis to the growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group, with few cross-level changes in their classes. After adding the environmental factors as predictors, analyses showed that children receiving higher teacher ratings, with higher socioeconomic status, and of above-average poverty status would have a higher probability of transitioning into the higher ability group.
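    As a crude stand-in for the growth mixture part of such a model, per-child growth parameters can be estimated by least squares and then clustered with a Gaussian mixture; real growth mixture and latent transition models are fitted jointly with dedicated software, and the data below are simulated rather than the ECLS-K scores.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(0)
        n_children = 300
        waves = np.array([0.0, 1.0, 2.0, 4.0])        # years since kindergarten (assumed)

        # Simulate two latent trajectory classes with different intercepts/slopes.
        truth = rng.choice([0, 1], n_children, p=[0.6, 0.4])
        intercepts = np.where(truth == 0, 20.0, 35.0) + rng.normal(0, 3, n_children)
        slopes = np.where(truth == 0, 8.0, 12.0) + rng.normal(0, 1, n_children)
        scores = intercepts[:, None] + slopes[:, None] * waves \
                 + rng.normal(0, 2, (n_children, waves.size))

        # Stage 1: ordinary least squares growth parameters for each child.
        X = np.column_stack([np.ones_like(waves), waves])
        coef, *_ = np.linalg.lstsq(X, scores.T, rcond=None)   # shape (2, n_children)

        # Stage 2: mixture model on (intercept, slope) to recover trajectory classes.
        gmm = GaussianMixture(n_components=2, random_state=0).fit(coef.T)
        labels = gmm.predict(coef.T)
        print("recovered class sizes:", np.bincount(labels))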

  9. A conceptual model for analysing informal learning in online social networks for health professionals.

    Science.gov (United States)

    Li, Xin; Gray, Kathleen; Chang, Shanton; Elliott, Kristine; Barnett, Stephen

    2014-01-01

    Online social networking (OSN) provides a new way for health professionals to communicate, collaborate and share ideas with each other for informal learning on a massive scale. It has important implications for ongoing efforts to support Continuing Professional Development (CPD) in the health professions. However, the challenge of analysing the data generated in OSNs makes it difficult to understand whether and how they are useful for CPD. This paper presents a conceptual model for using mixed methods to study data from OSNs to examine the efficacy of OSN in supporting informal learning of health professionals. It is expected that using this model with the dataset generated in OSNs for informal learning will produce new and important insights into how well this innovation in CPD is serving professionals and the healthcare system.

  10. Analysing the Competency of Mathematical Modelling in Physics

    CERN Document Server

    Redish, Edward F

    2016-01-01

    A primary goal of physics is to create mathematical models that allow both predictions and explanations of physical phenomena. We weave maths extensively into our physics instruction beginning in high school, and the level and complexity of the maths we draw on grows as our students progress through a physics curriculum. Despite much research on the learning of both physics and math, the problem of how to successfully teach most of our students to use maths in physics effectively remains unsolved. A fundamental issue is that in physics, we don't just use maths, we think about the physical world with it. As a result, we make meaning with mathematical symbology in a different way than mathematicians do. In this talk we analyze how developing the competency of mathematical modeling is more than just "learning to do math" but requires learning to blend physical meaning into mathematical representations and use that physical meaning in solving problems. Examples are drawn from across the curriculum.

  11. Fluctuating selection models and McDonald-Kreitman type analyses.

    Directory of Open Access Journals (Sweden)

    Toni I Gossmann

    Full Text Available It is likely that the strength of selection acting upon a mutation varies through time due to changes in the environment. However, most population genetic theory assumes that the strength of selection remains constant. Here we investigate the consequences of fluctuating selection pressures on the quantification of adaptive evolution using McDonald-Kreitman (MK) style approaches. In agreement with previous work, we show that fluctuating selection can generate evidence of adaptive evolution even when the expected strength of selection on a mutation is zero. However, we also find that the mutations which contribute to both polymorphism and divergence tend, on average, to be positively selected during their lifetime under fluctuating selection models. This is because mutations that fluctuate, by chance, to positively selected values tend to reach higher frequencies in the population than those that fluctuate towards negative values. Hence the evidence of positive adaptive evolution detected under a fluctuating selection model by MK-type approaches is genuine, since fixed mutations tend to be advantageous on average during their lifetime. Nevertheless, we show that these methods tend to underestimate the rate of adaptive evolution when selection fluctuates.
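    The quantities underlying MK-style analyses reduce to a 2x2 table of nonsynonymous/synonymous divergence and polymorphism counts; a minimal sketch with illustrative counts (not taken from the paper):

        from scipy.stats import fisher_exact

        # Illustrative counts: nonsynonymous/synonymous divergence (Dn, Ds)
        # and polymorphism (Pn, Ps).
        Dn, Ds, Pn, Ps = 40, 80, 20, 100

        # Neutrality index and the proportion of adaptive substitutions (alpha)
        # as used in McDonald-Kreitman style analyses.
        NI = (Pn / Ps) / (Dn / Ds)
        alpha = 1.0 - NI
        odds_ratio, p_value = fisher_exact([[Dn, Ds], [Pn, Ps]])

        print(f"NI = {NI:.2f}, alpha = {alpha:.2f}, Fisher's exact p = {p_value:.3g}")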

  12. A workflow model to analyse pediatric emergency overcrowding.

    Science.gov (United States)

    Zgaya, Hayfa; Ajmi, Ines; Gammoudi, Lotfi; Hammadi, Slim; Martinot, Alain; Beuscart, Régis; Renard, Jean-Marie

    2014-01-01

    The greatest source of delay in patient flow is the waiting time from the health care request, and especially from the bed request, to exit from the Pediatric Emergency Department (PED) for hospital admission. It represents 70% of the time that these patients spend in the PED waiting rooms. Our objective in this study is to identify tension indicators and bottlenecks that contribute to overcrowding. Patient flow mapping through the PED was carried out over a continuous two-year period from January 2011 to December 2012. Our method is to use the collected real data, based on actual visits made to the PED of the Regional University Hospital Center (CHRU) of Lille (France), in order to construct an accurate and complete representation of the PED processes. The result of this representation is a workflow model of the patient journey in the PED that represents as faithfully as possible the reality of the PED of the CHRU of Lille. This model allowed us to identify sources of delay in patient flow and aspects of the PED activity that could be improved. It must be detailed enough to produce an analysis that identifies the dysfunctions of the PED and to propose and estimate indicators for preventing strain. Our survey is integrated into the French National Research Agency project titled "Hospital: optimization, simulation and avoidance of strain" (ANR HOST).

  13. Geographically Isolated Wetlands and Catchment Hydrology: A Modified Model Analyses

    Science.gov (United States)

    Evenson, G.; Golden, H. E.; Lane, C.; D'Amico, E.

    2014-12-01

    Geographically isolated wetlands (GIWs), typically defined as depressional wetlands surrounded by uplands, support an array of hydrological and ecological processes. However, key research questions concerning the hydrological connectivity of GIWs and their impacts on downgradient surface waters remain unanswered. This is particularly important for regulation and management of these systems. For example, in the past decade United States Supreme Court decisions suggest that GIWs can be afforded protection if significant connectivity exists between these waters and traditional navigable waters. Here we developed a simulation procedure to quantify the effects of various spatial distributions of GIWs across the landscape on the downgradient hydrograph using a refined version of the Soil and Water Assessment Tool (SWAT), a catchment-scale hydrological simulation model. We modified the SWAT FORTRAN source code and employed an alternative hydrologic response unit (HRU) definition to facilitate an improved representation of GIW hydrologic processes and connectivity relationships to other surface waters, and to quantify their downgradient hydrological effects. We applied the modified SWAT model to an ~ 202 km2 catchment in the Coastal Plain of North Carolina, USA, exhibiting a substantial population of mapped GIWs. Results from our series of GIW distribution scenarios suggest that: (1) Our representation of GIWs within SWAT conforms to field-based characterizations of regional GIWs in most respects; (2) GIWs exhibit substantial seasonally-dependent effects upon downgradient base flow; (3) GIWs mitigate peak flows, particularly following high rainfall events; and (4) The presence of GIWs on the landscape impacts the catchment water balance (e.g., by increasing groundwater outflows). Our outcomes support the hypothesis that GIWs have an important catchment-scale effect on downgradient streamflow.

  14. Examining of the Collision Breakup Model between Geostationary Orbit Objects

    Science.gov (United States)

    Hata, Hidehiro; Hanada, Toshiya; Akahoshi, Yasuhiro; Yasaka, Tetsuo; Harada, Shoji

    This paper examines the applicability of the hypervelocity collision model included in the NASA standard breakup model (2000 revision) to the low-velocity collisions possible in space, especially in the geosynchronous regime. The analytic method used in the standard breakup model is applied to experimental data accumulated through low-velocity impact experiments performed at Kyushu Institute of Technology at velocities of about 300 m/s and 800 m/s. The projectiles and target specimens were aluminum solid balls and aluminum honeycomb sandwich panels with face sheets of carbon fiber reinforced plastic, respectively. We found that a lower boundary exists on the fragment area-to-mass distribution at the smaller characteristic length range. This paper describes the theoretical derivation of this lower boundary, proposes a corresponding modification to the fragment area-to-mass distribution, and concludes that the hypervelocity collision model in the standard breakup model can be applied to low-velocity collisions with some modifications.

  15. Analysing animal social network dynamics: the potential of stochastic actor-oriented models.

    Science.gov (United States)

    Fisher, David N; Ilany, Amiyaal; Silk, Matthew J; Tregenza, Tom

    2017-03-01

    Animals are embedded in dynamically changing networks of relationships with conspecifics. These dynamic networks are fundamental aspects of their environment, creating selection on behaviours and other traits. However, most social network-based approaches in ecology are constrained to considering networks as static, despite several calls for such analyses to become more dynamic. There are a number of statistical analyses developed in the social sciences that are increasingly being applied to animal networks, of which stochastic actor-oriented models (SAOMs) are a principal example. SAOMs are a class of individual-based models designed to model transitions in networks between discrete time points, as influenced by network structure and covariates. It is not clear, however, how useful such techniques are to ecologists, and whether they are suited to animal social networks. We review the recent applications of SAOMs to animal networks, outlining findings and assessing the strengths and weaknesses of SAOMs when applied to animal rather than human networks. We go on to highlight the types of ecological and evolutionary processes that SAOMs can be used to study. SAOMs can include effects and covariates for individuals, dyads and populations, which can be constant or variable. This allows for the examination of a wide range of questions of interest to ecologists. However, high-resolution data are required, meaning SAOMs will not be useable in all study systems. It remains unclear how robust SAOMs are to missing data and uncertainty around social relationships. Ultimately, we encourage the careful application of SAOMs in appropriate systems, with dynamic network analyses likely to prove highly informative. Researchers can then extend the basic method to tackle a range of existing questions in ecology and explore novel lines of questioning. © 2016 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.
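
    To make the SAOM mechanism concrete, the toy Python sketch below simulates the kind of actor-driven micro-steps such models assume: at each opportunity an actor toggles (or keeps) one outgoing tie with probabilities proportional to the exponentiated value of an objective function built from network statistics. The effect weights and network size are invented; applied work would estimate such effects from observed panels (commonly with the RSiena package) rather than fix them.

        # Toy stochastic actor-oriented model (SAOM) micro-step simulation.
        # Effect weights (beta_density, beta_recip) are hypothetical; real analyses
        # estimate them from observed network panels rather than fixing them here.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 10
        net = (rng.random((n, n)) < 0.1).astype(int)
        np.fill_diagonal(net, 0)

        beta_density, beta_recip = -1.5, 1.0   # hypothetical effect weights

        def objective(net, i):
            """Actor i's evaluation function: outdegree and reciprocity effects."""
            out_i = net[i].sum()
            recip_i = int(np.sum(net[i] * net[:, i]))
            return beta_density * out_i + beta_recip * recip_i

        def micro_step(net, i):
            """Actor i considers toggling each possible tie (or doing nothing)."""
            options, scores = [None], [objective(net, i)]
            for j in range(n):
                if j == i:
                    continue
                trial = net.copy()
                trial[i, j] = 1 - trial[i, j]
                options.append(j)
                scores.append(objective(trial, i))
            scores = np.array(scores)
            p = np.exp(scores - scores.max())      # multinomial-logit choice probabilities
            p /= p.sum()
            choice = rng.choice(len(options), p=p)
            if options[choice] is not None:
                net[i, options[choice]] = 1 - net[i, options[choice]]

        for _ in range(200):                       # chain of micro-steps between panels
            micro_step(net, rng.integers(n))
        print("final density:", net.sum() / (n * (n - 1)))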

  16. Using System Dynamic Model and Neural Network Model to Analyse Water Scarcity in Sudan

    Science.gov (United States)

    Li, Y.; Tang, C.; Xu, L.; Ye, S.

    2017-07-01

    Many parts of the world face the problem of water scarcity, and analysing it quantitatively is an important step towards solving it. Water scarcity in a region is gauged by the water scarcity index (WSI), which incorporates water supply and water demand. To obtain the WSI, a Neural Network Model and a System Dynamic Model (SDM) are developed to depict how environmental and social factors affect water supply and demand. The uneven distribution of water resources and water demand across a region leads to an uneven distribution of WSI within that region. To predict future WSI, a logistic model, Grey Prediction, and statistical methods are applied to project the underlying variables. Sudan suffers from a severe water scarcity problem, with a WSI of 1 in 2014 and unevenly distributed water resources. According to the modified model, after the intervention Sudan's water situation will improve.
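
    The paper's calibrated SDM and neural network are not reproduced here; as a hedged sketch of the index itself, one common formulation treats WSI as the ratio of demand to exploitable supply, with demand projected by a logistic curve. The numbers below are illustrative only, chosen so that the 2014 value reproduces the reported WSI of 1.

        # Hedged sketch: WSI as demand/supply, with demand projected by a logistic curve.
        # All numbers are illustrative, not the paper's calibrated values.
        import math

        def wsi(demand_km3, supply_km3):
            return demand_km3 / supply_km3

        def logistic_demand(year, K=60.0, r=0.04, year0=2014, d0=30.0):
            """Logistic growth of demand toward carrying capacity K (km^3/yr)."""
            A = (K - d0) / d0
            return K / (1.0 + A * math.exp(-r * (year - year0)))

        supply = 30.0                 # illustrative exploitable supply, km^3/yr
        for year in (2014, 2025, 2040):
            d = logistic_demand(year)
            print(year, f"demand={d:.1f}", f"WSI={wsi(d, supply):.2f}")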

  17. Can peripheral blood smear examination be totally replaced by automated hematology analyser - with special reference to anemia?

    Directory of Open Access Journals (Sweden)

    Shivangi Singhal

    2016-10-01

    Conclusions: The study concluded that even today peripheral blood smear (PBS) examination is very important and cannot be totally replaced by the automated analyser; the two methods are complementary to each other. [Int J Res Med Sci 2016; 4(10): 4563-4566]

  18. Mean and Covariance Structures Analyses: An Examination of the Rosenberg Self-Esteem Scale among Adolescents and Adults.

    Science.gov (United States)

    Whiteside-Mansell, Leanne; Corwyn, Robert Flynn

    2003-01-01

    Examined the cross-age comparability of the widely used Rosenberg Self-Esteem Scale (RSES) in 414 adolescents and 900 adults in families receiving Aid to Families with Dependent Children. Found similarities of means in the RSES across groups. (SLD)

  20. Examination of 1D Solar Cell Model Limitations Using 3D SPICE Modeling: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    McMahon, W. E.; Olson, J. M.; Geisz, J. F.; Friedman, D. J.

    2012-06-01

    To examine the limitations of one-dimensional (1D) solar cell modeling, 3D SPICE-based modeling is used to examine in detail the validity of the 1D assumptions as a function of sheet resistance for a model cell. The internal voltages and current densities produced by this modeling give additional insight into the differences between the 1D and 3D models.
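
    As a hedged illustration of the 1D baseline being examined, the sketch below solves the lumped single-diode equation I = IL - I0(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh for an I-V curve; the 3D SPICE approach of the preprint instead resolves a network of such subcells coupled by sheet resistances. All parameter values are hypothetical.

        # Lumped single-diode (1D) solar cell model; parameter values are hypothetical.
        import numpy as np
        from scipy.optimize import brentq

        IL, I0, n, Rs, Rsh, Vt = 0.035, 1e-12, 1.0, 0.5, 1e4, 0.02585  # A, A, -, ohm, ohm, V

        def current(V):
            """Solve the implicit single-diode equation for the terminal current at voltage V."""
            f = lambda I: IL - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1) - (V + I * Rs) / Rsh - I
            return brentq(f, -2 * IL, 2 * IL)

        volts = np.linspace(0.0, 0.62, 32)          # stay below the ~0.63 V open-circuit voltage
        iv = [(V, current(V)) for V in volts]
        powers = [V * I for V, I in iv]
        print(f"Isc ~ {current(0.0) * 1e3:.1f} mA, max power ~ {max(powers) * 1e3:.2f} mW")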

  1. Field Study of Dairy Cows with Reduced Appetite in Early Lactation: Clinical Examinations, Blood and Rumen Fluid Analyses

    Directory of Open Access Journals (Sweden)

    Steen A

    2001-06-01

    The study included 125 cows with reduced appetite and with clinical signs interpreted by the owner as indicating bovine ketosis 6 to 75 days postpartum. Almost all of the cows were given concentrates 2 to 3 times daily. With a practitioner's view to treatment and prophylaxis, the cows were divided into 5 diagnostic groups on the basis of thorough clinical examination, milk ketotest, decreased protozoal activity and concentrations, increased methylene blue reduction time, and increased liver parameters: ketosis (n = 32), indigestion (n = 26), combined ketosis and indigestion (n = 29), liver disease combined with ketosis, indigestion, or both (n = 15), and no specific diagnosis (n = 17). Three cows with traumatic reticuloperitonitis and 3 with abomasal displacement were not grouped. Nonparametric methods were used when groups were compared. Aspartate aminotransferase, glutamate dehydrogenase, gamma-glutamyl transferase and total bilirubin were elevated in the group with liver disease. Free fatty acids were significantly elevated in cows with ketosis compared with cows with indigestion. Activity and concentrations of large and small protozoa were reduced, and methylene blue reduction time was increased, in cows with indigestion. The rumen fluid pH was the same for groups of cows with and without indigestion. Prolonged reduced appetite before examination could have led to misclassification. Without careful interpretation of the milk ketotest, many cases with additional diagnoses would have been reported as primary ketosis. Thorough clinical examination together with feasible rumen fluid examination and economically reasonable blood biochemistry did not uncover the reason(s) for reduced appetite in 14% of the cows. More powerful diagnostic methods are needed.

  2. Forensic Image Analyses of Skin and Underlying Muscles as a Tool for Postmortem Interval Delimitation: Histopathologic Examination.

    Science.gov (United States)

    El-Nahass, El-Shaymaa; Moselhy, Walaa A; Hassan, Nour El-Houda Y

    2017-06-01

    One of the biggest challenges for forensic pathologists is estimating the postmortem interval (PMI); the aim of this study was therefore to use routine histopathologic examination and quantitative analysis to obtain an accurate estimate of PMI. The study used 24 adult male albino rats divided into 8 groups based on the sacrifice schedule (0, 8, 16, 24, 32, 40, 48, and 72 hours PMI). Skin specimens were collected and subjected to routine histopathologic processing, and hematoxylin-eosin-stained sections from the skin, its appendages and the underlying muscles were examined. Morphometric analysis of epidermal nuclear chromatin intensities and area percentages, reticular dermis integrated density, and sebaceous gland nuclei areas and chromatin condensation was performed. Progressive histopathologic changes could be detected in the epidermis, dermis, hypodermis, underlying muscles including nerve endings, and red blood cells in relation to hours PMI. Significant differences were found in epidermal nuclear chromatin intensities between the different PMI time points, and quantitative analysis of dermal collagen area percentages revealed a highly significant difference between 0 hours PMI and 24 to 72 hours PMI. As PMI increased, sebaceous gland nuclei and nuclear chromatin condensation showed a dramatic decrease, with significant differences in sebaceous gland nuclei areas between 0 hours and the later PMI time points.

  3. Use of the LIBS method in oil paintings examination based on examples of analyses conducted at the Wilanow Palace Museum

    Science.gov (United States)

    Modzelewska, Elżbieta; Pawlak, Agnieszka; Selerowicz, Anna; Skrzeczanowski, Wojciech; Marczak, Jan

    2013-05-01

    This paper describes the preliminary results of a study of the paint layers in 17th-century paintings belonging to the collection of the Wilanow Palace Museum. The works chosen for examination are of great importance to the Museum, as they might have been painted by court artists of King John III Sobieski. The aim of the study was therefore to determine the technological structure of the paintings, to determine the scope of conservation interventions and, above all, to gather comparative material that would serve to conduct further multidisciplinary attributive research. The presentation relates to studies in which laser-induced breakdown spectroscopy (LIBS) and optical microscopy were used as diagnostic tools. LIBS is based on the evaporation of a small amount of the material under investigation, and the generation of plasma which emits continuum and line radiation. The analysis of line radiation allows us to identify the elements appearing in the sample being investigated. The microscope pictures were taken using a Bresser Digital Hand Micro 1.3 Mpx and the Hirox 8700 microscopes. The results obtained have confirmed the utility of the LIBS method in the study of artworks. They have also proven that it can be used as a method to complement microchemical analysis, as well as a method to identify and examine artworks from which samples cannot be taken, since it is only micro-destructive and the analysis can be conducted directly on the object, without the need to take samples.

  4. Examining the response of larch needle carbohydrates to climate using compound-specific δ13C and concentration analyses

    Science.gov (United States)

    Rinne, Katja T.; Saurer, Matthias; Kirdyanov, Alexander V.; Bryukhanova, Marina V.; Prokushkin, Anatoly S.; Churakova Sidorova, Olga V.; Siegwolf, Rolf T. W.

    2016-04-01

    Little is known about the dynamics of concentrations and carbon isotope ratios of individual carbohydrates in leaves in response to climatic and physiological factors. Improved knowledge of the isotopic ratio in sugars will enhance our understanding of the tree ring isotope ratio and will help to decipher environmental conditions in retrospect more reliably. Carbohydrate samples from larch (Larix gmelinii) needles of two sites in the continuous permafrost zone of Siberia with differing growth conditions were analysed with the Compound-Specific Isotope Analysis (CSIA). We compared concentrations and carbon isotope values (δ13C) of sucrose, fructose, glucose and pinitol combined with phenological data. The results for the variability of the needle carbohydrates show high dynamics with distinct seasonal characteristics between and within the studied years with a clear link to the climatic conditions, particularly vapour pressure deficit. Compound-specific differences in δ13C values as a response to climate were detected. The δ13C of pinitol, which contributes up to 50% of total soluble carbohydrates, was almost invariant during the whole growing season. Our study provides the first in-depth characterization of compound-specific needle carbohydrate isotope variability, identifies involved mechanisms and shows the potential of such results for linking tree physiological responses to different climatic conditions.

  5. Examining Asymmetrical Relationships of Organizational Learning Antecedents: A Theoretical Model

    Directory of Open Access Journals (Sweden)

    Ery Tri Djatmika

    2016-02-01

    The global era is characterized by demand for highly competitive advantage in the market. Responding to the challenge of rapid environmental change, organizational learning is becoming a strategic way to empower people within the organization to create novelty as a valuable source of positioning. For research purposes, determining the influential antecedents that affect organizational learning is vital to understanding research-based solutions and their practical implications. Accordingly, it is critical to identify the variables to be examined through asymmetrical relationships. Possible antecedent variables come from organizational and personal points of view, and a moderating variable may also be included. A proposed theoretical model of the asymmetrical effects of organizational learning and its antecedents is discussed in this article.

  6. Correlation of Klebsiella pneumoniae comparative genetic analyses with virulence profiles in a murine respiratory disease model.

    Directory of Open Access Journals (Sweden)

    Ramy A Fodah

    Klebsiella pneumoniae is a bacterial pathogen of worldwide importance and a significant contributor to multiple disease presentations associated with both nosocomial and community acquired disease. ATCC 43816 is a well-studied K. pneumoniae strain which is capable of causing an acute respiratory disease in surrogate animal models. In this study, we performed sequencing of the ATCC 43816 genome to support future efforts characterizing genetic elements required for disease. Furthermore, we performed comparative genetic analyses with the previously sequenced genomes of NTUH-K2044 and MGH 78578 to gain an understanding of the conservation of known virulence determinants amongst the three strains. We found that ATCC 43816 and NTUH-K2044 both possess the known virulence determinant for yersiniabactin, as well as a Type 4 secretion system (T4SS), a CRISPR system, and an acetonin catabolism locus, all absent from MGH 78578. While both NTUH-K2044 and MGH 78578 are clinical isolates, little is known about the disease potential of these strains in cell culture and animal models. Thus, we also performed functional analyses in the murine macrophage cell lines RAW264.7 and J774A.1 and found that MGH 78578 (K52 serotype) was internalized at higher levels than ATCC 43816 (K2) and NTUH-K2044 (K1), consistent with previous characterization of the antiphagocytic properties of K1 and K2 serotype capsules. We also examined the three K. pneumoniae strains in a novel BALB/c respiratory disease model and found that ATCC 43816 and NTUH-K2044 are highly virulent (LD50 < 100 CFU), while MGH 78578 is relatively avirulent.

  7. Taxing CO2 and subsidising biomass: Analysed in a macroeconomic and sectoral model

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik

    2000-01-01

    This paper analyses the combination of taxes and subsidies as an instrument to enable a reduction in CO2 emission. The objective of the study is to compare recycling of a CO2 tax revenue as a subsidy for biomass use as opposed to traditional recycling such as reduced income or corporate taxation.... A model of Denmark's energy supply sector is used to analyse the effect of a CO2 tax combined with using the tax revenue for biomass subsidies. The energy supply model is linked to a macroeconomic model such that the macroeconomic consequences of tax policies can be analysed along with the consequences

  8. Vitamin D and diabetes in Koreans: analyses based on the Fourth Korea National Health and Nutrition Examination Survey (KNHANES), 2008–2009

    Science.gov (United States)

    Rhee, S Y; Hwang, Y-C; Chung, H Y; Woo, J-T

    2012-01-01

    Aims A causal relationship between vitamin D deficiency and the incidence of diabetes mellitus has been suggested, but little research has been conducted on the Korean population. Methods We analysed the glucose tolerance status and serum 25-hydroxyvitamin D concentrations in 12 263 subjects > 19 years old who were registered for the Korea National Health and Nutrition Examination Survey, 2008–2009. Results Various demographic variables such as gender, age, season, resident area, physical activity, smoking, alcohol, marital status, education and occupation were associated with serum 25-hydroxyvitamin D concentrations. After adjusting for these variables as confounders, 25-hydroxyvitamin D concentrations in subjects with diabetes were significantly lower than those in subjects with normal glucose tolerance and those with impaired fasting glucose (P = 0.005). Compared with the ≥ 75 nmol/l subgroup of serum 25-hydroxyvitamin D concentration, the odds ratios and 95% confidence intervals for diabetes mellitus were 1.206 (95% CI 0.948–1.534) in the 50- to 74-nmol/l subgroup, 1.339 (1.051–1.707) in the 25- to 49-nmol/l subgroup and 1.759 (1.267–2.443) in the < 25-nmol/l subgroup. Compared with the serum ≥ 75-nmol/l 25-hydroxyvitamin D subgroup, serum insulin and homeostasis model assessment 2%B, a marker of insulin secretory capacity, were significantly higher, and homeostasis model assessment 2%S, a marker of insulin sensitivity, was significantly lower in the < 25- and 25- to 49-nmol/l serum 25-hydroxyvitamin D subgroups than those in the other subgroups (P < 0.001). Conclusions The findings suggest that vitamin D deficiency, possibly involving altered insulin sensitivity, is associated with an increased risk for diabetes mellitus in the Korean population. PMID:22247968

  9. Model error analyses of photochemistry mechanisms using the BEATBOX/BOXMOX data assimilation toy model

    Science.gov (United States)

    Knote, C. J.; Eckl, M.; Barré, J.; Emmons, L. K.

    2016-12-01

    Simplified descriptions of photochemistry in the atmosphere ('photochemical mechanisms'), necessary to reduce the computational burden of a model simulation, contribute significantly to the overall uncertainty of an air quality model. Understanding how the photochemical mechanism contributes to observed model errors through examination of results of the complete model system is next to impossible due to cancellation and amplification effects amongst the tightly interconnected model components. Here we present BEATBOX, a novel method to evaluate photochemical mechanisms using the underlying chemistry box model BOXMOX. With BOXMOX we can rapidly initialize various mechanisms (e.g. MOZART, RACM, CBMZ, MCM) with homogenized observations (e.g. from field campaigns) and conduct idealized 'chemistry in a jar' simulations under controlled conditions. BEATBOX is a data assimilation toy model built upon BOXMOX which makes it possible to simulate the effects of assimilating observations (e.g., CO, NO2, O3) into these simulations. In this presentation we show how we use the Master Chemical Mechanism (MCM, U Leeds) as a benchmark for more simplified mechanisms like MOZART, use BEATBOX to homogenize the chemical environment, and diagnose errors within the more simplified mechanisms. We present BEATBOX as a new, freely available tool that allows researchers to rapidly evaluate their chemistry mechanism against a range of others under varying chemical conditions.

  10. Evaluation of Temperature and Humidity Profiles of Unified Model and ECMWF Analyses Using GRUAN Radiosonde Observations

    Directory of Open Access Journals (Sweden)

    Young-Chan Noh

    2016-07-01

    Temperature and water vapor profiles from the Korea Meteorological Administration (KMA) and the United Kingdom Met Office (UKMO) Unified Model (UM) data assimilation systems, and from reanalysis fields of the European Centre for Medium-Range Weather Forecasts (ECMWF), were assessed using collocated radiosonde observations from the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) for January–December 2012. The motivation was to examine the overall performance of data assimilation outputs. The difference statistics of the collocated model outputs versus the radiosonde observations indicated good agreement for temperature amongst the datasets, while less agreement was found for relative humidity. A comparison of the UM outputs from the UKMO and KMA revealed that they are similar to each other. The introduction of the new version of the UM at the KMA in May 2012 resulted in improved analysis performance, particularly for the moisture field. On the other hand, the ECMWF reanalysis data showed slightly reduced performance for relative humidity compared with the UM, with a significant humid bias in the upper troposphere. The ECMWF reanalysis temperature fields showed nearly the same performance as the two UM analyses. The root mean square differences (RMSDs) of the relative humidity for the three models were larger for more humid conditions, suggesting that humidity forecasts are less reliable under these conditions.

  11. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
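
    The procedures described in the paper are SPSS MIXED syntax; as a hedged Python analogue of the same growth-curve idea (random intercept and slope for repeated waves within subjects), the sketch below uses statsmodels' MixedLM on simulated data. Column names and values are hypothetical, not the Project P.A.T.H.S. data.

        # Hedged Python analogue of the longitudinal LMM analyses described above.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        n_subj, n_wave = 100, 6
        subj = np.repeat(np.arange(n_subj), n_wave)
        wave = np.tile(np.arange(n_wave), n_subj)
        u0 = rng.normal(0, 1.0, n_subj)[subj]          # random intercepts
        u1 = rng.normal(0, 0.2, n_subj)[subj]          # random slopes
        score = 10 + 0.5 * wave + u0 + u1 * wave + rng.normal(0, 1.0, n_subj * n_wave)
        data = pd.DataFrame({"id": subj, "wave": wave, "score": score})

        # Growth-curve model: fixed linear effect of wave, random intercept + slope by subject.
        model = smf.mixedlm("score ~ wave", data, groups=data["id"], re_formula="~wave")
        result = model.fit(reml=True)
        print(result.summary())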

  12. Pathway models for analysing and managing the introduction of alien plant pests - an overview and categorization

    NARCIS (Netherlands)

    Douma, J.C.; Pautasso, M.; Venette, R.C.; Robinet, C.; Hemerik, L.; Mourits, M.C.M.; Schans, J.; Werf, van der W.

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative

  13. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    Science.gov (United States)

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  14. An improved lake model for climate simulations: Model structure, evaluation, and sensitivity analyses in CESM1

    Directory of Open Access Journals (Sweden)

    Zachary Subin

    2012-02-01

    Lakes can influence regional climate, yet most general circulation models have, at best, simple and largely untested representations of lakes. We developed the Lake, Ice, Snow, and Sediment Simulator (LISSS) for inclusion in the land-surface component (CLM4) of an earth system model (CESM1). The existing CLM4 lake model performed poorly at all sites tested; for temperate lakes, summer surface water temperature predictions were 10–25°C lower than observations. CLM4-LISSS modifies the existing model by including (1) a treatment of snow; (2) freezing, melting, and ice physics; (3) a sediment thermal submodel; (4) spatially variable prescribed lake depth; (5) improved parameterizations of lake surface properties; (6) increased mixing under ice and in deep lakes; and (7) correction of previous errors. We evaluated the lake model predictions of water temperature and surface fluxes at three small temperate and boreal lakes where extensive observational data were available. We also evaluated the predicted water temperature and/or ice and snow thicknesses for ten other lakes where less comprehensive forcing observations were available. CLM4-LISSS performed very well compared to observations for shallow- to medium-depth small lakes. For large, deep lakes, the under-prediction of mixing was improved by increasing the lake eddy diffusivity by a factor of 10, consistent with previously published analyses. Surface temperature and surface flux predictions were improved when the aerodynamic roughness lengths were calculated as a function of friction velocity, rather than using a constant value of 1 mm or greater. We evaluated the sensitivity of surface energy fluxes to modeled lake processes and parameters. Large changes in monthly-averaged surface fluxes (up to 30 W m-2) were found when excluding snow insulation or phase change physics and when varying the opacity, depth, albedo of melting lake ice, and mixing strength across ranges commonly found in real lakes.

  15. A modified Lee-Carter model for analysing short-base-period data.

    Science.gov (United States)

    Zhao, Bojuan Barbara

    2012-03-01

    This paper introduces a new modified Lee-Carter model for analysing short-base-period mortality data, for which the original Lee-Carter model produces severely fluctuating predicted age-specific mortality. Approximating the unknown parameters in the modified model by linearized cubic splines and other additive functions, the model can be simplified into a logistic regression when fitted to binomial data. The expected death rate estimated from the modified model is smooth, not only over ages but also over years. The analysis of mortality data in China (2000-08) demonstrates the advantages of the new model over existing models.
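
    For orientation, the sketch below fits the original Lee-Carter decomposition log m(x,t) = a_x + b_x k_t by SVD on synthetic rates; the paper's modification replaces b_x and k_t with linearized cubic splines estimated via a logistic regression on binomial data, which is not reproduced here.

        # Classical Lee-Carter fit via SVD on synthetic log mortality rates.
        import numpy as np

        rng = np.random.default_rng(2)
        ages, years = 20, 9                              # deliberately short base period
        true_a = np.linspace(-8.0, -2.0, ages)
        true_b = np.full(ages, 1.0 / ages)
        true_k = np.linspace(3.0, -3.0, years)
        log_m = true_a[:, None] + np.outer(true_b, true_k) + rng.normal(0, 0.05, (ages, years))

        a = log_m.mean(axis=1)                           # a_x: average log rate per age
        U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
        b = U[:, 0] / U[:, 0].sum()                      # normalise so that sum(b_x) = 1
        k = s[0] * Vt[0] * U[:, 0].sum()                 # rescale k_t so b_x * k_t is unchanged
        print("sum b =", b.sum().round(3), " sum k =", k.sum().round(3))
        fitted = a[:, None] + np.outer(b, k)
        print("RMS residual of log rates:", np.sqrt(((log_m - fitted) ** 2).mean()).round(4))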

  16. Examining Learning Through Modeling in K-6 Science Education

    Science.gov (United States)

    Louca, Loucas T.; Zacharia, Zacharias C.

    2015-04-01

    Despite the abundance of research in Modeling-based Learning (MbL) in science education, to date there is only limited research on MbL practices among K-6 novice modelers. More specifically, there is no information on how young/novice modelers' modeling enactments look so that researchers and educators have an idea of what should be expected from these novice/young modelers while engaged in MbL. Our purpose in this study was to investigate the ways in which K-6 novice modelers can engage in MbL in science, in rich modeling contexts, which feature various modeling media and tools. Using data from a variety of contexts, modeling means and tools and different student ages, we seek to develop, from the ground up, detailed descriptions of the modeling practices that K-6 students follow when involved in MbL. While using the modeling phases (e.g., construction of a model, evaluation of a model), along with their associated practices, as described in the literature for older learners and expert modelers as our basis, we followed ground research approaches to develop the descriptions of student-centered MbL. Our findings revealed that novice modelers enact certain MbL phases in a different manner than those described in the literature for older learners and/or expert modelers. We found that not only do the content and context of the various modeling phases differ, but also the sequence of these modeling phases and their associated practices, are different from those already described in the literature. Finally, we discuss how rich descriptions of MbL discourse can ultimately inform teachers and researchers about ways in which learning in science through MbL can be supported.

  17. Comparison of linear measurements and analyses taken from plaster models and three-dimensional images.

    Science.gov (United States)

    Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins

    2014-11-01

    Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). On the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic®, Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif); for the digital images, the measurement tools of the O3d software (Widialabs, Brazil) were used. The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test. The results obtained from the plaster models using the caliper and from the digital models using the O3d software were identical.

  18. Three-dimensional lake water quality modeling: sensitivity and uncertainty analyses.

    Science.gov (United States)

    Missaghi, Shahram; Hondzo, Miki; Melching, Charles

    2013-11-01

    Two sensitivity and uncertainty analysis methods are applied to a three-dimensional coupled hydrodynamic-ecological model (ELCOM-CAEDYM) of a morphologically complex lake. The primary goals of the analyses are to increase confidence in the model predictions, identify influential model parameters, quantify the uncertainty of model prediction, and explore the spatial and temporal variabilities of model predictions. The influence of model parameters on four model-predicted variables (model output) and the contributions of each of the model-predicted variables to the total variations in model output are presented. The contributions of predicted water temperature, dissolved oxygen, total phosphorus, and algal biomass contributed 3, 13, 26, and 58% of total model output variance, respectively. The fraction of variance resulting from model parameter uncertainty was calculated by two methods and used for evaluation and ranking of the most influential model parameters. Nine out of the top 10 parameters identified by each method agreed, but their ranks were different. Spatial and temporal changes of model uncertainty were investigated and visualized. Model uncertainty appeared to be concentrated around specific water depths and dates that corresponded to significant storm events. The results suggest that spatial and temporal variations in the predicted water quality variables are sensitive to the hydrodynamics of physical perturbations such as those caused by stream inflows generated by storm events. The sensitivity and uncertainty analyses identified the mineralization of dissolved organic carbon, sediment phosphorus release rate, algal metabolic loss rate, internal phosphorus concentration, and phosphorus uptake rate as the most influential model parameters.

  19. Processes models, environmental analyses, and cognitive architectures: quo vadis quantum probability theory?

    Science.gov (United States)

    Marewski, Julian N; Hoffrage, Ulrich

    2013-06-01

    A lot of research in cognition and decision making suffers from a lack of formalism. The quantum probability program could help to improve this situation, but we wonder whether it would provide even more added value if its presumed focus on outcome models were complemented by process models that are, ideally, informed by ecological analyses and integrated into cognitive architectures.

  20. Internet banking acceptance model: Cross-market examination

    OpenAIRE

    Alsajjan, B; Dennis, C.

    2009-01-01

    This article proposes a revised technology acceptance model to measure consumers’ acceptance of Internet banking, the Internet Banking Acceptance Model (IBAM). Data was collected from 618 university students in the United Kingdom and Saudi Arabia. The results suggest the importance of attitude, such that attitude and behavioral intentions emerge as a single factor, denoted as “attitudinal intentions” (AI). Structural equation modeling confirms the fit of the model, in which per...

  1. Majoring in Information Systems: An Examination of Role Model Influence

    Science.gov (United States)

    Akbulut, Asli Y.

    2016-01-01

    The importance of role models on individuals' academic and career development and success has been widely acknowledged in the literature. The purpose of this study was to understand the influence of role models on students' decisions to major in information systems (IS). Utilizing a model derived from the social cognitive career theory, we…

  2. Analyses and simulations in income frame regulation model for the network sector from 2007; Analyser og simuleringer i inntektsrammereguleringsmodellen for nettbransjen fra 2007

    Energy Technology Data Exchange (ETDEWEB)

    Askeland, Thomas Haave; Fjellstad, Bjoern

    2007-07-01

    Analyses of the income frame regulation model for the network sector in Norway, introduced on 1 January 2007. The model's treatment of the norm cost is evaluated, especially the effect analyses carried out by a so-called Data Envelopment Analysis (DEA) model. It is argued that an age bias may exist in the data set, and that this should and can be corrected in the effect analyses; the proposed correction is to introduce an age parameter into the data set. Analyses have also been made of how the calibration effects in the regulation model affect the industry's total income frame, as well as each network company's income frame. It is argued that the calibration, as presented, does not work according to its intention and should be adjusted in order to provide the sector with the reference rate of return. (ml)
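
    The DEA benchmarking that the report scrutinises can be illustrated with a minimal input-oriented, constant-returns (CCR) envelopment linear programme; the units, inputs and outputs below are invented, and the proposed age parameter is not included.

        # Minimal input-oriented CRS (CCR) DEA efficiency scores via linear programming.
        # Data are invented; the Norwegian regulation model uses a far richer cost-norm
        # specification (and the text argues for adding an age parameter to it).
        import numpy as np
        from scipy.optimize import linprog

        X = np.array([[20.,  5.], [30., 10.], [40.,  8.], [25., 12.]])   # inputs  (units x m)
        Y = np.array([[100.], [140.], [160.], [110.]])                   # outputs (units x s)
        n, m = X.shape
        s = Y.shape[1]

        def efficiency(j0):
            # Variables: [theta, lambda_1..lambda_n]; minimise theta.
            c = np.r_[1.0, np.zeros(n)]
            # Inputs:  sum_j lambda_j * x_ij - theta * x_i,j0 <= 0
            A_in = np.c_[-X[j0].reshape(m, 1), X.T]
            # Outputs: -sum_j lambda_j * y_rj <= -y_r,j0
            A_out = np.c_[np.zeros((s, 1)), -Y.T]
            A_ub = np.vstack([A_in, A_out])
            b_ub = np.r_[np.zeros(m), -Y[j0]]
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] + [(0, None)] * n)
            return res.fun

        for j in range(n):
            print(f"unit {j}: efficiency = {efficiency(j):.3f}")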

  3. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2011-10-01

    The article presents the fundamental aspects of linear regression as a toolbox that can be used in macroeconomic analyses. It describes the estimation of the parameters, the statistical tests used, and homoscedasticity and heteroskedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and the interpretations that can be drawn at this level.
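
    A minimal sketch of the toolbox the article describes, assuming simulated stand-in series rather than actual macroeconomic data: ordinary least squares estimation with the usual t-tests, plus a Breusch-Pagan check for heteroskedasticity.

        # Simple linear regression with parameter tests and a heteroskedasticity check.
        import numpy as np
        import statsmodels.api as sm
        from statsmodels.stats.diagnostic import het_breuschpagan

        rng = np.random.default_rng(3)
        gdp_growth = rng.normal(3.0, 1.5, 40)                       # hypothetical regressor
        consumption = 1.2 + 0.8 * gdp_growth + rng.normal(0, 0.5, 40)

        X = sm.add_constant(gdp_growth)
        ols = sm.OLS(consumption, X).fit()
        print(ols.summary())                                         # estimates and t-tests

        lm_stat, lm_pvalue, _, _ = het_breuschpagan(ols.resid, X)
        print(f"Breusch-Pagan LM p-value: {lm_pvalue:.3f}")          # homoskedasticity check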

  4. Sensitivity analyses of spatial population viability analysis models for species at risk and habitat conservation planning.

    Science.gov (United States)

    Naujokaitis-Lewis, Ilona R; Curtis, Janelle M R; Arcese, Peter; Rosenfeld, Jordan

    2009-02-01

    Population viability analysis (PVA) is an effective framework for modeling species- and habitat-recovery efforts, but uncertainty in parameter estimates and model structure can lead to unreliable predictions. Integrating complex and often uncertain information into spatial PVA models requires that comprehensive sensitivity analyses be applied to explore the influence of spatial and nonspatial parameters on model predictions. We reviewed 87 analyses of spatial demographic PVA models of plants and animals to identify common approaches to sensitivity analysis in recent publications. In contrast to best practices recommended in the broader modeling community, sensitivity analyses of spatial PVAs were typically ad hoc, inconsistent, and difficult to compare. Most studies applied local approaches to sensitivity analyses, but few varied multiple parameters simultaneously. A lack of standards for sensitivity analysis and reporting in spatial PVAs has the potential to compromise the ability to learn collectively from PVA results, accurately interpret results in cases where model relationships include nonlinearities and interactions, prioritize monitoring and management actions, and ensure conservation-planning decisions are robust to uncertainties in spatial and nonspatial parameters. Our review underscores the need to develop tools for global sensitivity analysis and apply these to spatial PVA.
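
    As a hedged sketch of the kind of global (variance-based) sensitivity analysis the review calls for, the following varies several parameters simultaneously and computes Sobol indices for a toy extinction-risk surrogate; it assumes the third-party SALib package, and the parameter names, bounds and response function are invented rather than taken from any particular PVA.

        # Variance-based global sensitivity analysis (Sobol indices) for a toy PVA-like response.
        import numpy as np
        from SALib.sample import saltelli
        from SALib.analyze import sobol

        problem = {
            "num_vars": 3,
            "names": ["survival", "fecundity", "dispersal"],
            "bounds": [[0.5, 0.95], [0.5, 3.0], [0.0, 0.3]],
        }

        def extinction_risk(p):
            surv, fec, disp = p
            growth = surv * (1 + fec) * (1 + 0.5 * disp)
            return 1.0 / (1.0 + np.exp(5.0 * (growth - 1.5)))   # toy risk surrogate

        X = saltelli.sample(problem, 1024)
        Y = np.apply_along_axis(extinction_risk, 1, X)
        Si = sobol.analyze(problem, Y)
        for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
            print(f"{name}: first-order = {s1:.2f}, total-order = {st:.2f}")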

  5. Modern statistical models for forensic fingerprint examinations: a critical review.

    Science.gov (United States)

    Abraham, Joshua; Champod, Christophe; Lennard, Chris; Roux, Claude

    2013-10-10

    Over the last decade, the development of statistical models in support of forensic fingerprint identification has been the subject of increasing research attention, spurred on recently by commentators who claim that the scientific basis for fingerprint identification has not been adequately demonstrated. Such models are increasingly seen as useful tools in support of the fingerprint identification process within or in addition to the ACE-V framework. This paper provides a critical review of recent statistical models from both a practical and theoretical perspective. This includes analysis of models of two different methodologies: Probability of Random Correspondence (PRC) models that focus on calculating probabilities of the occurrence of fingerprint configurations for a given population, and Likelihood Ratio (LR) models which use analysis of corresponding features of fingerprints to derive a likelihood value representing the evidential weighting for a potential source.
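
    The LR family of models can be illustrated with a minimal score-based sketch: LR = f(score | same source) / f(score | different source), with the two densities estimated by kernel density estimation. The score distributions below are invented and stand in for the feature-based models the review actually discusses.

        # Score-based likelihood ratio sketch with KDE-estimated densities.
        import numpy as np
        from scipy.stats import gaussian_kde

        rng = np.random.default_rng(4)
        same_source_scores = rng.normal(0.80, 0.08, 500)    # mated comparisons (invented)
        diff_source_scores = rng.normal(0.35, 0.10, 5000)   # non-mated comparisons (invented)

        f_same = gaussian_kde(same_source_scores)
        f_diff = gaussian_kde(diff_source_scores)

        def likelihood_ratio(score):
            return f_same(score)[0] / f_diff(score)[0]

        for s in (0.4, 0.6, 0.8):
            print(f"score {s:.1f}: LR = {likelihood_ratio(s):.3g}")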

  6. Examining the Fidelity of Climate model via Shadowing Time

    Science.gov (United States)

    Du, H.; Smith, L. A.

    2015-12-01

    Fully fledged climate models provide the best available simulations for reflecting the future, yet we have scant insight into their fidelity, in particular as to the duration into the future at which the real world should be expected to evolve in a manner today's models cannot foresee. We know now that our best available models are not adequate for many sought-after purposes. To throw some light on the maximum fidelity expected from a given generation of models, and thereby aid both policy making and model development, we can test the weaknesses of a model as a dynamical system to get an informed idea of its potential applicability at various lead times. Shadowing times reflect the duration over which a GCM reflects the observations; extracting the shortcomings of the model which limit shadowing times allows informed speculation regarding the fidelity of the model in the future. More specifically, the relevant phenomena limiting model fidelity can be learned by identifying the reasons models cannot shadow; the time scales on which feedbacks on the system (which are not active in the model) are likely to result in model irrelevance can be discerned. The methodology is developed in the "low dimensional laboratory" of relatively simple dynamical systems, for example Lorenz 95 systems. Results are presented for Lorenz 95 systems, high-dimensional fluid dynamical simulations of a rotating annulus, and GCMs. There are severe limits on the light shadowing experiments can shine on GCM predictions. Nevertheless, they appear to be one of the brightest lights we can shine to illuminate the likely fidelity of GCM extrapolations into the future.
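
    As a hedged illustration in the same "low dimensional laboratory", the sketch below integrates a Lorenz 95/96 system as "truth" and an imperfect model with perturbed forcing, and records how long the model stays within a tolerance of the truth trajectory. This crude divergence time is only a stand-in for the authors' shadowing analysis; all values are illustrative.

        # Toy divergence-time experiment in a Lorenz-96 system.
        import numpy as np

        def lorenz96(x, forcing):
            return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

        def rk4_step(x, dt, forcing):
            k1 = lorenz96(x, forcing)
            k2 = lorenz96(x + 0.5 * dt * k1, forcing)
            k3 = lorenz96(x + 0.5 * dt * k2, forcing)
            k4 = lorenz96(x + dt * k3, forcing)
            return x + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

        rng = np.random.default_rng(5)
        n, dt, tol = 40, 0.01, 1.0
        x_truth = 8.0 + rng.normal(0, 0.5, n)
        for _ in range(1000):                              # spin up onto the attractor
            x_truth = rk4_step(x_truth, dt, forcing=8.0)

        x_model = x_truth.copy()
        t = 0.0
        while np.max(np.abs(x_model - x_truth)) < tol and t < 50.0:
            x_truth = rk4_step(x_truth, dt, forcing=8.0)   # "truth" system
            x_model = rk4_step(x_model, dt, forcing=8.5)   # imperfect model (wrong forcing)
            t += dt
        print(f"model departed from truth (tolerance {tol}) after t ~ {t:.2f} model time units")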

  7. Examining Pedestrian Injury Severity Using Alternative Disaggregate Models

    DEFF Research Database (Denmark)

    Abay, Kibrom Araya

    2013-01-01

    This paper investigates the injury severity of pedestrians considering detailed road user characteristics and alternative model specifications, and assesses the sensitivity of empirical inferences to the choice of these models. The empirical analysis reveals that detailed road user characteristics such as the crime history of drivers and the momentary activities of road users at the time of the accident provide an interesting insight in the injury severity analysis. Likewise, the alternative analytical specification of the models reveals that some of the conventionally employed fixed-parameters injury severity models could underestimate the effect of some important behavioral attributes of the accidents. For instance, the standard ordered logit model underestimated the marginal effects of some

  8. Integrate models of ultrasonics examination for NDT expertise

    Energy Technology Data Exchange (ETDEWEB)

    Calmon, P.; Lhemery, A.; Lecoeur-Taibi, I.; Raillon, R.

    1996-12-31

    For several years, the French Atomic Energy Commission (CEA) has developed a system called CIVA for multiple-technique NDE data acquisition and processing. Modeling tools for ultrasonic non-destructive testing have been developed and implemented within this system, allowing direct comparison between measured and predicted results. These models are not only devoted to laboratory use but also must be usable by ultrasonic operators without special training in simulation techniques. Therefore, emphasis has been on finding the best compromise between quantitative predictions that are as accurate as possible and ease, simplicity and speed, which are crucial requirements in the industrial context. This approach has led us to develop approximate models for the different phenomena involved in ultrasonic inspections: radiation, transmission through interfaces, propagation, scattering by defects and boundaries, reception, etc. Two main models have been implemented, covering the most commonly encountered NDT configurations. First, these two models are briefly described; then, two examples of their applications are shown. Based on the same underlying theories, specific modeling tools are proposed to industrial partners to answer special requirements. To illustrate this, an example is given of software used as a tool to help experts' interpretation during on-site French PWR vessel inspections. Other models can be implemented in CIVA when some assumptions made in the previous models, Champ-Sons and Mephisto, are not fulfilled, e.g., when less-conventional testing configurations are concerned. We briefly present as an example a modeling study of echoes arising from cladded steel surfaces carried out in the laboratory. (authors). 13 refs.

  9. Design evaluation and optimisation in crossover pharmacokinetic studies analysed by nonlinear mixed effects models

    OpenAIRE

    Nguyen, Thu Thuy; Bazzoli, Caroline; Mentré, France

    2012-01-01

    Bioequivalence or interaction trials are commonly studied in a crossover design and can be analysed by nonlinear mixed effects models as an alternative to the noncompartmental approach. We propose an extension of the population Fisher information matrix in nonlinear mixed effects models to design crossover pharmacokinetic trials, using a linearisation of the model around the random effect expectation, including within-subject variability and discrete covariates fixed or chan...

  10. Analysing outsourcing policies in an asset management context: a six-stage model

    OpenAIRE

    Schoenmaker, R.; Verlaan, J.G.

    2013-01-01

    Asset managers of civil infrastructure are increasingly outsourcing their maintenance. Whereas maintenance is a cyclic process, decisions to outsource are often project-based, which confuses the discussion on the degree of outsourcing. This paper presents a six-stage model that facilitates the top-down discussion for analysing the degree of outsourcing maintenance. The model is based on the cyclic nature of maintenance. The six-stage model can: (1) give clear statements about the pre...

  11. Examination of the New Tech Model as a Holistic Democracy

    Science.gov (United States)

    Bradley-Levine, Jill; Mosier, Gina

    2017-01-01

    Using the Degrees of Democracy Framework (Woods & Woods, 2012), we examined eight New Tech (NT) high schools to determine the extent to which they demonstrated characteristics of holistic democracy. We collected qualitative data, including observations and interviews during the fourth year of implementation. Findings indicated that the eight…

  12. An Examination of Concussion Injury Rates in Various Models of Football Helmets in NCAA Football Athletes

    Institute of Scientific and Technical Information of China (English)

    Ryan Moran; Tracey Covassin

    2015-01-01

    While newer, advanced helmet models have been designed with the intention of decreasing concussions, very little research exists on injury rates in various football helmets at the collegiate level. The aim of this study was to examine concussion injury rates in various models of football helmets in collegiate football athletes and to compare injury rates of newer, advanced football helmets to older, traditional helmets. A total of 209 concussions and 563,701 athlete-exposures (AEs) among 2,107 collegiate football athletes wearing seven helmet models were included in the analyses. Concussion injury rates revealed that the Riddell Revolution had the highest rate, at 0.41 concussions per 1,000 AEs, while the Schutt ION 4D helmet had the lowest rate, at 0.25 concussions per 1,000 AEs. The newer helmet models did not significantly differ from one another (P = 0.74); however, all models significantly differed from the older, traditional helmet model (P < 0.001). The findings of this study suggest that concussion rates do not differ between newer and more advanced helmet models. More importantly, there are currently no helmets available to prevent concussions from occurring in football athletes.
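
    The rate arithmetic behind the figures above is concussions per 1,000 athlete-exposures. Only the overall totals (209 concussions, 563,701 AEs) are given in the abstract, so the per-helmet split in the sketch below is hypothetical, constructed merely to reproduce the reported 0.41 and 0.25 rates; the rate-ratio comparison uses a simple normal approximation, not the study's analysis.

        # Concussions per 1,000 athlete-exposures (AEs) and a crude two-rate comparison.
        import math

        def rate_per_1000(cases, exposures):
            return 1000.0 * cases / exposures

        print("overall:", round(rate_per_1000(209, 563_701), 2), "per 1,000 AEs")

        # Hypothetical split for two helmet models (chosen to give 0.41 vs 0.25 per 1,000 AEs).
        c1, ae1 = 62, 151_000
        c2, ae2 = 25, 100_000
        log_rr = math.log((c1 / ae1) / (c2 / ae2))
        se = math.sqrt(1 / c1 + 1 / c2)                  # SE of the log rate ratio
        z = log_rr / se
        print(f"rate ratio = {math.exp(log_rr):.2f}, z = {z:.2f}")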

  13. Examining Tatooine: Atmospheric Models of Neptune-Like Circumbinary Planets

    CERN Document Server

    May, E M

    2016-01-01

    Circumbinary planets experience a time varying irradiation pattern as they orbit their two host stars. In this work, we present the first detailed study of the atmospheric effects of this irradiation pattern on known and hypothetical gaseous circumbinary planets. Using both a one-dimensional Energy Balance Model and a three-dimensional General Circulation Model, we look at the temperature differences between circumbinary planets and their equivalent single-star cases in order to determine the nature of the atmospheres of these planets. We find that for circumbinary planets on stable orbits around their host stars, temperature differences are on average no more than 1.0% in the most extreme cases. Based on detailed modeling with the General Circulation Model, we find that these temperature differences are not large enough to excite circulation differences between the two cases. We conclude that gaseous circumbinary planets can be treated as their equivalent single-star case in future atmospheric modeling effor...
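
    The EBM and GCM runs themselves are not sketched here; the snippet below only illustrates the time-varying irradiation pattern that drives them, summing the flux from two stars on a circular mutual orbit and converting it to an instantaneous equilibrium temperature. Stellar, orbital and albedo values are illustrative, not those of any particular Kepler circumbinary system.

        # Time-varying irradiation of a circumbinary planet (illustrative parameters only).
        import numpy as np

        SIGMA, AU, LSUN = 5.670e-8, 1.496e11, 3.828e26

        L1, L2 = 0.8 * LSUN, 0.3 * LSUN       # stellar luminosities
        a_bin, a_p = 0.2 * AU, 1.0 * AU       # binary separation and planet orbital radius
        m1_frac = 0.6                         # star 1's share of the binary mass
        P_bin, P_p, albedo = 20.0, 300.0, 0.3 # periods in days, Bond albedo

        def equilibrium_temperature(t_days):
            th_b = 2 * np.pi * t_days / P_bin
            th_p = 2 * np.pi * t_days / P_p
            # Star positions about the barycentre; planet on a circular circumbinary orbit.
            s1 = (1 - m1_frac) * a_bin * np.array([np.cos(th_b), np.sin(th_b)])
            s2 = -m1_frac * a_bin * np.array([np.cos(th_b), np.sin(th_b)])
            p = a_p * np.array([np.cos(th_p), np.sin(th_p)])
            flux = L1 / (4 * np.pi * np.sum((p - s1) ** 2)) + L2 / (4 * np.pi * np.sum((p - s2) ** 2))
            return (flux * (1 - albedo) / (4 * SIGMA)) ** 0.25

        temps = np.array([equilibrium_temperature(t) for t in np.arange(0.0, 300.0, 1.0)])
        print(f"Teq range: {temps.min():.1f}-{temps.max():.1f} K "
              f"({100 * (temps.max() - temps.min()) / temps.mean():.2f}% peak-to-peak variation)")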

  14. Examining Teacher Outcomes of the School-Wide Positive Behavior Support Model in Norway

    Directory of Open Access Journals (Sweden)

    Mari-Anne Sørlie

    2016-05-01

    Research on teacher outcomes of the School-Wide Positive Behavior Support (SWPBS) model has been scarce. The present study adds to the knowledge base by examining the effects of the Norwegian version of SWPBS (N-PALS) on school staffs' behavior management practices and on their individual and collective efficacy. Questionnaire data were collected from staff and students (Grades 4-7) at four measurement points across four successive school years in 28 intervention schools and 20 comparison schools. Using longitudinal multilevel analyses, indications of positive 3-year main effects of the N-PALS model were observed for staff-reported collective efficacy, self-efficacy, and positive behavior support practices. The intervention effects, as measured by Cohen's d, ranged from .14 to .91. The effects on student perceptions of teachers' behavior management strategies were, however, not consistent with the positive staff ratings. Results are discussed in relation to prior research, future research, and study limitations.

  15. Geographical variation of sporadic Legionnaires' disease analysed in a grid model

    DEFF Research Database (Denmark)

    Rudbeck, M.; Jepsen, Martin Rudbeck; Sonne, I.B.;

    2010-01-01

    The aim was to analyse variation in the incidence of sporadic Legionnaires' disease in a geographical information system in three time periods (1990-2005) by the application of a grid model, and to assess the model's validity by analysing variation according to grid position. Four cells had excess incidence in all three time periods. The analysis in 25 different grid positions indicated a low risk of overlooking cells with excess incidence in a random grid. The coefficient of variation ranged from 0.08 to 0.11, independent of the threshold.

  16. Examining Pedestrian Injury Severity Using Alternative Disaggregate Models

    DEFF Research Database (Denmark)

    Abay, Kibrom Araya

    2013-01-01

    This paper investigates the injury severity of pedestrians considering detailed road user characteristics and alternative model specifications, using high-quality Danish road accident data. Such a detailed and alternative modeling approach helps to assess the sensitivity of empirical inferences to the choice of these models. The empirical analysis reveals that detailed road user characteristics such as the crime history of drivers and the momentary activities of road users at the time of the accident provide an interesting insight in the injury severity analysis. Likewise, the alternative analytical specification of the models reveals that some of the conventionally employed fixed-parameters injury severity models could underestimate the effect of some important behavioral attributes of the accidents.
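
    A hedged sketch of the conventional ordered logit baseline discussed above, using statsmodels' OrderedModel on simulated data; the covariates (a speed limit and a driver crime-history flag) and the severity categories are invented, and the paper's alternative specifications are not reproduced.

        # Ordered logit for a three-level injury severity outcome on simulated data.
        import numpy as np
        import pandas as pd
        from statsmodels.miscmodels.ordinal_model import OrderedModel

        rng = np.random.default_rng(6)
        n = 2000
        speed_limit = rng.choice([50, 80, 110], size=n)
        driver_crime_history = rng.integers(0, 2, size=n)      # detailed road-user characteristic
        latent = 0.03 * speed_limit + 0.6 * driver_crime_history + rng.logistic(size=n)
        severity = pd.Series(pd.cut(latent, bins=[-np.inf, 2.5, 4.0, np.inf],
                                    labels=["slight", "serious", "fatal"], ordered=True))

        exog = pd.DataFrame({"speed_limit": speed_limit,
                             "driver_crime_history": driver_crime_history})
        model = OrderedModel(severity, exog, distr="logit")
        result = model.fit(method="bfgs", disp=False)
        print(result.summary())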

  17. Predictors of breast self - examination among female teachers in Ethiopia using health belief model.

    Science.gov (United States)

    Birhane, Negussie; Mamo, Abebe; Girma, Eshetu; Asfaw, Shifera

    2015-01-01

    Breast cancer is by far the most frequent cancer in women and the second leading cause of cancer death in women worldwide; approximately one out of eight women develops breast cancer. The majority of breast cancers are detected by women themselves, stressing the importance of breast self-examination. The main objective of this study was to assess predictors of breast self-examination among female teachers in Kafa Zone, in the southwest of Ethiopia. A cross-sectional study was conducted among 315 randomly selected female teachers. A self-administered structured questionnaire covering socio-demographic characteristics, knowledge about breast cancer and teachers' perceptions of breast self-examination, based on the Champion's revised Health Belief Model subscales, was used as the data collection instrument. Multivariable logistic regression analyses were used to identify independent predictors of breast self-examination performance. Three hundred and fifteen female teachers participated in this study; their mean age was 33 (SD ± 7) years. Only 52 (16.5%) participants had ever heard about breast self-examination, and of those, 38 (73.07%) had ever performed it. After controlling for possible confounding factors, knowledge of breast self-examination, perceived susceptibility, perceived severity and net perceived benefit were found to be the major predictors of breast self-examination. This study revealed that breast self-examination performance among female teachers was very low. Therefore, behavior change communication and interventions that emphasize the domains that increase the perceived threat of breast cancer, as well as the benefits of breast self-examination, delivered in an integrated manner, may be the most effective strategies for health offices and educational offices to consider.

  18. Examination of a Theoretical Model of Streaming Potential Coupling Coefficient

    Directory of Open Access Journals (Sweden)

    D. T. Luong

    2014-01-01

    Seismoelectric effects and streaming potentials play an important role in geophysical applications. The key parameter for these phenomena is the streaming potential coupling coefficient, which depends, for example, on the zeta potential of the pore interfaces of the rock. A comparison of an existing theoretical model with experimental data sets from available published data for streaming potentials has been performed. However, the existing experimental data sets are based on samples with dissimilar fluid conductivity, pore-fluid pH, temperature, and sample composition, and all of these dissimilarities may cause the observed deviations. To assess the models critically, we have carried out streaming potential measurements as a function of electrolyte concentration and temperature for a set of well-defined consolidated samples. The results show that the existing theoretical model is not in good agreement with the experimental observations when the electrolyte concentration is varied, especially at low electrolyte concentration. However, if we use a modified model in which the zeta potential is considered constant over the electrolyte concentration, the model fits the experimental data well over the whole range of concentration. For the temperature dependence, the comparison shows that the theoretical model is not fully adequate to describe the experimental data, although it correctly describes the increasing trend of the coupling coefficient as a function of temperature.
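
    For orientation, the simplest theoretical expression for the coupling coefficient is the Helmholtz-Smoluchowski form C = eps_r * eps_0 * zeta / (eta * sigma_f), which neglects surface conductivity; the model examined in the paper is more elaborate. The sketch below evaluates this form for a few indicative zeta potentials and fluid conductivities (values illustrative only).

        # Helmholtz-Smoluchowski streaming potential coupling coefficient (indicative values).
        EPS0 = 8.854e-12          # vacuum permittivity, F/m

        def coupling_coefficient(zeta_V, eps_r=80.0, eta=1.0e-3, sigma_f=0.01):
            """Coupling coefficient in V/Pa for pore-fluid conductivity sigma_f (S/m)."""
            return eps_r * EPS0 * zeta_V / (eta * sigma_f)

        # (approx. concentration in mol/L, zeta potential in V, fluid conductivity in S/m)
        for conc, zeta, sigma in [(1e-3, -0.060, 0.015), (1e-2, -0.040, 0.12), (1e-1, -0.020, 1.1)]:
            c = coupling_coefficient(zeta, sigma_f=sigma)
            print(f"~{conc:g} mol/L: C = {c * 1e6:.2f} microvolt/Pa")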

  19. X-ray CT analyses, models and numerical simulations: a comparison with petrophysical analyses in an experimental CO2 study

    Science.gov (United States)

    Henkel, Steven; Pudlo, Dieter; Enzmann, Frieder; Reitenbach, Viktor; Albrecht, Daniel; Ganzer, Leonhard; Gaupp, Reinhard

    2016-06-01

    An essential part of the collaborative research project H2STORE (hydrogen to store), funded by the German government, was a comparison of various analytical methods for characterizing reservoir sandstones from different stratigraphic units. In this context Permian, Triassic and Tertiary reservoir sandstones were analysed. Rock core materials, provided by RWE Gasspeicher GmbH (Dortmund, Germany), GDF Suez E&P Deutschland GmbH (Lingen, Germany), E.ON Gas Storage GmbH (Essen, Germany) and RAG Rohöl-Aufsuchungs Aktiengesellschaft (Vienna, Austria), were processed by different laboratory techniques: thin sections were prepared, rock fragments were crushed, and cubes of 1 cm edge length and plugs 3 to 5 cm in length with a diameter of about 2.5 cm were sawn from macroscopically homogeneous cores. On this prepared sample material, polarized light microscopy and scanning electron microscopy coupled with image analyses, specific surface area measurements (after Brunauer, Emmett and Teller, 1938; BET), He-porosity and N2-permeability measurements, and high-resolution microcomputer tomography (μ-CT), which was used for numerical simulations, were applied. All these methods were applied to largely the same sample material before and, for selected Permian sandstones, also after static CO2 experiments under reservoir conditions. A major concern in comparing the results of these methods is an appraisal of the reliability of the resulting porosity, permeability and mineral-specific reactive (inner) surface area data. The CO2 experiments modified the petrophysical as well as the mineralogical/geochemical rock properties, and these changes are detectable by all the applied analytical methods. Nevertheless, a major outcome of the high-resolution μ-CT analyses and the subsequent numerical data simulations was that quite similar data sets and data interpretations were obtained by the different petrophysical standard methods. Moreover, the μ-CT analyses are not only time-saving but also non-destructive.

  20. An Examination of Operational Availability in Life Cycle Cost Models

    Science.gov (United States)

    1983-09-01

    Systems. Kenneth E. Marks, H. Garrison Massey, and Brent D. Bradley. Rand No. R-2287-AF. Santa Monica CA: The Rand Corporation, October 1978. AD...AFB OH, September 1982. AD A123045. Bryan, Noreen S.; Jacqueline J. Rosen; and Nancey T. Marland. "A New Life Cycle Cost Model: Flexible, Interactive

  1. An Examination of a Model of Anti-Pollution Behavior.

    Science.gov (United States)

    Iwata, Osamu

    1981-01-01

    Reports results of a study in which Japanese female undergraduates (N=118) responded to an environmental concern scale based upon a model of anti-pollution behavior focusing on: approach to information, confidence in science and technology, appreciation of natural beauty, causes, consequences, and purchasing and coping behaviors. (DC)

  2. RooStatsCms: a tool for analyses modelling, combination and statistical studies

    Science.gov (United States)

    Piparo, D.; Schott, G.; Quast, G.

    2009-12-01

    The RooStatsCms (RSC) software framework allows analysis modelling and combination, and statistical studies, together with access to sophisticated graphics routines for the visualisation of results. The goal of the project is to complement existing analyses by means of their combination and accurate statistical studies.
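
    The statistical core of such an analysis combination can be illustrated with a generic counting-experiment sketch in Python; this is not the RooStatsCms API, and the channel counts and expectations below are invented.

      import numpy as np
      from scipy.stats import poisson

      # Two counting channels share one signal-strength parameter mu; combining the
      # analyses means multiplying their Poisson likelihoods (adding log-likelihoods).
      channels = [  # (observed count, expected background, expected signal at mu = 1)
          (12, 10.0, 4.0),
          (7,   5.0, 3.0),
      ]

      def nll(mu):
          """Combined negative log-likelihood over both channels."""
          return -sum(poisson.logpmf(n, b + mu * s) for n, b, s in channels)

      mu_grid = np.linspace(0.0, 3.0, 301)
      nll_vals = np.array([nll(m) for m in mu_grid])
      mu_hat = mu_grid[nll_vals.argmin()]

      # Approximate 68% interval from the Delta(negative log-likelihood) = 0.5 crossing
      inside = mu_grid[nll_vals - nll_vals.min() <= 0.5]
      print(f"best-fit mu = {mu_hat:.2f}, ~68% interval = [{inside.min():.2f}, {inside.max():.2f}]")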

  3. RooStatsCms: a tool for analyses modelling, combination and statistical studies

    CERN Document Server

    Piparo, D; Quast, Prof G

    2008-01-01

    The RooStatsCms (RSC) software framework allows analysis modelling and combination, and statistical studies, together with access to sophisticated graphics routines for the visualisation of results. The goal of the project is to complement existing analyses by means of their combination and accurate statistical studies.

  4. Combined Task and Physical Demands Analyses towards a Comprehensive Human Work Model

    Science.gov (United States)

    2014-09-01

    velocities, and accelerations over time for each postural sequence. Neck strain measures derived from biomechanical analyses of these postural...and whole missions. The result is a comprehensive model of tasks and associated physical demands from which one can estimate the accumulative neck ...Griffon Helicopter aircrew (Pilots and Flight Engineers) reported neck pain particularly when wearing Night Vision Goggles (NVGs) (Forde et al., 2011

  5. Dutch AG-MEMOD model; A tool to analyse the agri-food sector

    NARCIS (Netherlands)

    Leeuwen, van M.G.A.; Tabeau, A.A.

    2005-01-01

    Agricultural policies in the European Union (EU) have a history of continuous reform. AG-MEMOD, acronym for Agricultural sector in the Member states and EU: econometric modelling for projections and analysis of EU policies on agriculture, forestry and the environment, provides a system for analysing

  6. Supply Chain Modeling for Fluorspar and Hydrofluoric Acid and Implications for Further Analyses

    Science.gov (United States)

    2015-04-01

    Subject terms: supply chain, model, fluorspar, hydrofluoric acid, shortfall, substitution, Defense Logistics Agency, National Defense... unlimited. IDA Document D-5379, Log: H 15-000099. Institute for Defense Analyses, 4850 Mark Center Drive, Alexandria, Virginia 22311-1882. D. Sean Barnett and Jerome Bracken, Supply Chain Modeling for Fluorspar and Hydrofluoric Acid and...

  7. Wavelet-based spatial comparison technique for analysing and evaluating two-dimensional geophysical model fields

    Directory of Open Access Journals (Sweden)

    S. Saux Picart

    2011-11-01

    Full Text Available Complex numerical models of the Earth's environment, based around 3-D or 4-D time and space domains, are routinely used for applications including climate predictions, weather forecasts, fishery management and environmental impact assessments. Quantitatively assessing the ability of these models to accurately reproduce geographical patterns at a range of spatial and temporal scales has always been a difficult problem to address. However, this is crucial if we are to rely on these models for decision making. Satellite data are potentially the only observational dataset able to cover the large spatial domains analysed by many types of geophysical models. Consequently, optical wavelength satellite data are beginning to be used to evaluate model hindcast fields of terrestrial and marine environments. However, these satellite data invariably contain regions of occluded or missing data due to clouds, further complicating or impacting on any comparisons with the model. A methodology has recently been developed to evaluate precipitation forecasts using radar observations. It allows model skill to be evaluated at a range of spatial scales and rain intensities. Here we extend the original method to allow its generic application to a range of continuous and discontinuous geophysical data fields, thereby allowing its use with optical satellite data. This is achieved through two major improvements to the original method: (i) all thresholds are determined based on the statistical distribution of the input data, so no a priori knowledge about the model fields being analysed is required, and (ii) occluded data can be analysed without impacting on the metric results. The method can be used to assess a model's ability to simulate geographical patterns over a range of spatial scales. We illustrate how the method provides a compact and concise way of visualising the degree of agreement between spatial features in two datasets. The application of the new method, its

  8. The Rasch Model: Its Use by the National Board of Medical Examiners.

    Science.gov (United States)

    Kelley, Paul R.; Schumacher, Charles F.

    1984-01-01

    The National Board of Medical Examiners uses the Rasch model to calibrate test items, maintain item banks, equate scores, and monitor the consistency of examiner item response patterns. The model is also being used in the study of patient management problems examinations, standard-setting, and computer-based examinations. (Author/BS)
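
    As a minimal illustration of the model itself, the Python sketch below defines the Rasch item characteristic curve and performs a toy calibration in which examinee abilities are treated as known; this is a didactic simplification, not the National Board's operational procedure.

      import numpy as np
      from scipy.optimize import brentq

      rng = np.random.default_rng(1)

      def rasch_p(theta, b):
          """Rasch item characteristic curve: P(correct | ability theta, difficulty b)."""
          return 1.0 / (1.0 + np.exp(-(theta - b)))

      theta = rng.normal(0.0, 1.0, size=2000)    # examinee abilities (treated as known here)
      true_b = np.array([-1.0, 0.0, 0.5, 1.5])   # true item difficulties

      for b in true_b:
          x = rng.random(theta.size) < rasch_p(theta, b)   # simulated 0/1 responses
          # The MLE of b (given theta) solves: sum_j P(theta_j, b) = number correct
          b_hat = brentq(lambda bb: rasch_p(theta, bb).sum() - x.sum(), -6.0, 6.0)
          print(f"true difficulty {b:+.2f}   estimated {b_hat:+.2f}")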

  9. A Model for Integrating Fixed-, Random-, and Mixed-Effects Meta-Analyses into Structural Equation Modeling

    Science.gov (United States)

    Cheung, Mike W.-L.

    2008-01-01

    Meta-analysis and structural equation modeling (SEM) are two important statistical methods in the behavioral, social, and medical sciences. They are generally treated as two unrelated topics in the literature. The present article proposes a model to integrate fixed-, random-, and mixed-effects meta-analyses into the SEM framework. By applying an…
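
    The fixed- and random-effects pooling that this SEM-based framework subsumes can be sketched in a few lines, here with the DerSimonian-Laird estimator for the between-study variance; the effect sizes and sampling variances are invented for illustration.

      import numpy as np

      y = np.array([0.30, 0.12, 0.45, 0.26, 0.05])   # study effect sizes (invented)
      v = np.array([0.02, 0.05, 0.03, 0.01, 0.04])   # their sampling variances (invented)

      w = 1.0 / v
      beta_fixed = np.sum(w * y) / np.sum(w)          # fixed-effect (inverse-variance) estimate

      Q = np.sum(w * (y - beta_fixed) ** 2)           # heterogeneity statistic
      c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
      tau2 = max(0.0, (Q - (len(y) - 1)) / c)         # DerSimonian-Laird between-study variance

      w_star = 1.0 / (v + tau2)
      beta_random = np.sum(w_star * y) / np.sum(w_star)

      print(f"fixed effect  = {beta_fixed:.3f} (SE {np.sqrt(1 / np.sum(w)):.3f})")
      print(f"tau^2         = {tau2:.4f}")
      print(f"random effect = {beta_random:.3f} (SE {np.sqrt(1 / np.sum(w_star)):.3f})")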

  10. Examining the Relationship between Physical Models and Students' Science Practices

    Science.gov (United States)

    Miller, Alison Riley

    Scientists engage with practices like model development and use, data analysis and interpretation, explanation construction, and argumentation in order to expand the frontiers of science, so it can be inferred that students' engagement with science practices may help them deepen their own science understanding. As one of three dimensions on which the Next Generation Science Standards is built, science practices are recognized as an important component of science instruction. However, the contexts in which these practices happen are under-researched. Furthermore, research on science practices among students tends to focus on one or two practices in isolation when, in reality, students and scientists tend to engage with multiple overlapping practices. This study focused on identifying and characterizing multiple science practices as eighth and ninth-grade Earth Science students participated in a small group collaborative problem solving activity both with and without the use of a physical model. This study found a range of sophistication in the observed science practices as well as a relationship between the frequency of those practices and the accuracy of the groups' outcomes. Based on this relationship, groups were assigned to one of three categories. Further analysis revealed that model use varied among the three categories of groups. Comparisons across these three group categories suggest that there may be a bootstrapping relationship between students' engagement with science practices and the development of their content understanding. This metaphor of bootstrapping is used to represent how students may develop deeper science content understanding through engagement with science practices and concurrently develop greater facility with science practices as they learn science content. Implications are presented for curriculum designers, teachers and teacher educators. These include recommendations for curriculum design that encourage structured opportunities for

  11. Stellar abundance analyses in the light of 3D hydrodynamical model atmospheres

    CERN Document Server

    Asplund, M

    2003-01-01

    I describe recent progress in terms of 3D hydrodynamical model atmospheres and 3D line formation and their applications to stellar abundance analyses of late-type stars. Such 3D studies remove the free parameters inherent in classical 1D investigations (mixing length parameters, macro- and microturbulence) yet are highly successful in reproducing a large arsenal of observational constraints such as detailed line shapes and asymmetries. Their potential for abundance analyses is illustrated by discussing the derived oxygen abundances in the Sun and in metal-poor stars, where they seem to resolve long-standing problems as well as significantly alter the inferred conclusions.

  12. WOMBAT: a tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML).

    Science.gov (United States)

    Meyer, Karin

    2007-11-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from (http://agbu.une.edu.au/~kmeyer/wombat.html).

  13. Application of an approximate vectorial diffraction model to analysing diffractive micro-optical elements

    Institute of Scientific and Technical Information of China (English)

    Niu Chun-Hui; Li Zhi-Yuan; Ye Jia-Sheng; Gu Ben-Yuan

    2005-01-01

    Scalar diffraction theory, although simple and efficient, is too rough for analysing diffractive micro-optical elements. Rigorous vectorial diffraction theory requires extensive numerical efforts, and is not a convenient design tool. In this paper we employ a simple approximate vectorial diffraction model which combines the principle of the scalar diffraction theory with an approximate local field model to analyse the diffraction of optical waves by some typical two-dimensional diffractive micro-optical elements. The TE and TM polarization modes are both considered. We have found that the approximate vectorial diffraction model can agree much better with the rigorous electromagnetic simulation results than the scalar diffraction theory for these micro-optical elements.
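
    For reference, the scalar baseline that the record describes as too rough can be sketched with the angular-spectrum method of scalar diffraction; the wavelength, grid and aperture below are arbitrary, and the sketch does not implement the paper's approximate vectorial model.

      import numpy as np

      wavelength = 0.633e-6              # m (arbitrary)
      k = 2.0 * np.pi / wavelength
      N, dx = 512, 0.5e-6                # grid points and sampling (m)
      z = 50e-6                          # propagation distance (m)

      x = (np.arange(N) - N // 2) * dx
      X, Y = np.meshgrid(x, x)
      u0 = ((np.abs(X) < 10e-6) & (np.abs(Y) < 10e-6)).astype(complex)   # square aperture

      fx = np.fft.fftfreq(N, dx)
      FX, FY = np.meshgrid(fx, fx)
      kz_sq = k**2 - (2 * np.pi * FX) ** 2 - (2 * np.pi * FY) ** 2

      H = np.zeros(kz_sq.shape, dtype=complex)         # angular-spectrum transfer function
      prop = kz_sq > 0
      H[prop] = np.exp(1j * np.sqrt(kz_sq[prop]) * z)  # evanescent components dropped

      u_z = np.fft.ifft2(np.fft.fft2(u0) * H)
      print("peak intensity after propagation:", np.abs(u_z).max() ** 2)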

  14. Analysing and combining atmospheric general circulation model simulations forced by prescribed SST. Tropical response

    Energy Technology Data Exchange (ETDEWEB)

    Moron, V. [Université de Provence, UFR des sciences géographiques et de l'aménagement, Aix-en-Provence (France); Navarra, A. [Istituto Nazionale di Geofisica e Vulcanologia, Bologna (Italy); Ward, M. N. [University of Oklahoma, Cooperative Institute for Mesoscale Meteorological Studies, Norman OK (United States); Foland, C. K. [Hadley Center for Climate Prediction and Research, Meteorological Office, Bracknell (United Kingdom); Friederichs, P. [Meteorologisches Institut der Universität Bonn, Bonn (Germany); Maynard, K.; Polcher, J. [Université Pierre et Marie Curie, Paris (France). Centre National de la Recherche Scientifique, Laboratoire de Météorologie Dynamique, Paris

    2001-08-01

    The ECHAM 3.2 (T21), ECHAM (T30) and LMD (version 6, grid-point resolution with 96 longitudes x 72 latitudes) atmospheric general circulation models were integrated through the period 1961 to 1993, forced with the same observed Sea Surface Temperatures (SSTs) as compiled at the Hadley Centre. Three runs were made for each model, starting from different initial conditions. The large-scale tropical inter-annual variability is analysed to give a picture of the skill of each model and of combinations of the three models. To analyse the similarity of model response averaged over the same key regions, several widely used indices are calculated: the Southern Oscillation Index (SOI), large-scale wind shear indices of the boreal summer monsoon in Asia and West Africa, and rainfall indices for NE Brazil, the Sahel and India. Even for the indices where internal noise is large, some years are consistent amongst all the runs, suggesting inter-annual variability in the strength of the SST forcing. Averaging the ensemble means of the three models (the super-ensemble mean) yields improved skill. When each run is weighted according to its skill, taking three runs from different models instead of three runs of the same model improves the mean skill. There is also some indication that one run of a given model can be better than another, suggesting that persistent anomalies could change its sensitivity to SST. The index approach lacks the flexibility to assess whether a model's response to SST has been geographically displaced. The analysis therefore also focuses on the first mode in the global tropics, found through singular value decomposition analysis, which is clearly related to the El Niño/Southern Oscillation (ENSO) in all seasons. The Observed-Model and Model-Model analyses lead to almost the same patterns, suggesting that the dominant pattern of model response is also the most skilful mode. Seasonal modulation of both skill and spatial patterns (both model and observed) clearly exists with highest skill
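
    The skill-weighting idea can be made concrete with a toy Python example in which synthetic runs are combined with weights proportional to their (non-negative) correlation with a synthetic observed index; the data are random stand-ins, not output from the models above.

      import numpy as np

      rng = np.random.default_rng(0)
      years = 33
      obs = rng.normal(size=years)                                            # "observed" index
      runs = obs + rng.normal(scale=[[0.5], [1.0], [2.0]], size=(3, years))   # three "model runs"

      skill = np.array([np.corrcoef(obs, r)[0, 1] for r in runs]).clip(min=0.0)
      weights = skill / skill.sum()

      weighted_mean = weights @ runs                 # skill-weighted multi-model mean
      plain_mean = runs.mean(axis=0)

      print("per-run skill:          ", np.round(skill, 2))
      print("skill of plain mean:    ", round(np.corrcoef(obs, plain_mean)[0, 1], 2))
      print("skill of weighted mean: ", round(np.corrcoef(obs, weighted_mean)[0, 1], 2))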

  15. Analysing, Interpreting, and Testing the Invariance of the Actor-Partner Interdependence Model

    Directory of Open Access Journals (Sweden)

    Gareau, Alexandre

    2016-09-01

    Full Text Available Although in recent years researchers have begun to utilize dyadic data analyses such as the actor-partner interdependence model (APIM), certain limitations to the applicability of these models still exist. Given the complexity of APIMs, most researchers will often use observed scores to estimate the model's parameters, which can significantly limit and underestimate statistical results. The aim of this article is to highlight the importance of conducting a confirmatory factor analysis (CFA) of equivalent constructs between dyad members (i.e., measurement equivalence/invariance; ME/I). Different steps for merging CFA and APIM procedures will be detailed in order to shed light on new and integrative methods.

  16. Distinguishing Mediational Models and Analyses in Clinical Psychology: Atemporal Associations Do Not Imply Causation.

    Science.gov (United States)

    Winer, E Samuel; Cervone, Daniel; Bryant, Jessica; McKinney, Cliff; Liu, Richard T; Nadorff, Michael R

    2016-09-01

    A popular way to attempt to discern causality in clinical psychology is through mediation analysis. However, mediation analysis is sometimes applied to research questions in clinical psychology when inferring causality is impossible. This practice may soon increase with new, readily available, and easy-to-use statistical advances. Thus, we here provide a heuristic to remind clinical psychological scientists of the assumptions of mediation analyses. We describe recent statistical advances and unpack assumptions of causality in mediation, underscoring the importance of time in understanding mediational hypotheses and analyses in clinical psychology. Example analyses demonstrate that statistical mediation can occur despite theoretical mediation being improbable. We propose a delineation of mediational effects derived from cross-sectional designs into the terms temporal and atemporal associations to emphasize time in conceptualizing process models in clinical psychology. The general implications for mediational hypotheses and the temporal frameworks from within which they may be drawn are discussed. © 2016 Wiley Periodicals, Inc.
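
    For readers unfamiliar with the mechanics, the following Python sketch shows the standard product-of-coefficients estimate of an indirect (mediated) effect with a percentile bootstrap interval, computed on simulated cross-sectional data; it illustrates exactly the kind of atemporal association the authors caution against interpreting causally.

      import numpy as np

      rng = np.random.default_rng(42)
      n = 300
      X = rng.normal(size=n)                    # predictor
      M = 0.5 * X + rng.normal(size=n)          # putative mediator
      Y = 0.4 * M + 0.1 * X + rng.normal(size=n)

      def indirect(x, m, y):
          a = np.polyfit(x, m, 1)[0]            # X -> M path
          design = np.column_stack([m, x, np.ones_like(x)])
          b = np.linalg.lstsq(design, y, rcond=None)[0][0]   # M -> Y path, adjusting for X
          return a * b

      boot = []
      for _ in range(2000):
          idx = rng.integers(0, n, n)
          boot.append(indirect(X[idx], M[idx], Y[idx]))
      lo, hi = np.percentile(boot, [2.5, 97.5])
      print(f"indirect effect a*b = {indirect(X, M, Y):.3f}, 95% bootstrap CI [{lo:.3f}, {hi:.3f}]")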

  17. Sensitivity to model geometry in finite element analyses of reconstructed skeletal structures: experience with a juvenile pelvis.

    Science.gov (United States)

    Watson, Peter J; Fagan, Michael J; Dobson, Catherine A

    2015-01-01

    Biomechanical analysis of juvenile pelvic growth can be used in the evaluation of medical devices and investigation of hip joint disorders. This requires access to scan data of healthy juveniles, which are not always freely available. This article analyses the application of a geometric morphometric technique, which facilitates the reconstruction of the articulated juvenile pelvis from cadaveric remains, in biomechanical modelling. The sensitivity of variation in reconstructed morphologies upon predicted stress/strain distributions is of particular interest. A series of finite element analyses of a 9-year-old hemi-pelvis were performed to examine differences in predicted strain distributions between a reconstructed model and the originally fully articulated specimen. Only minor differences in the minimum principal strain distributions were observed between two varying hemi-pelvic morphologies and that of the original articulation. A Wilcoxon rank-sum test determined there was no statistical significance between the nodal strains recorded at 60 locations throughout the hemi-pelvic structures. This example suggests that finite element models created by this geometric morphometric reconstruction technique can be used with confidence, and as observed with this hemi-pelvis model, even a visual morphological difference does not significantly affect the predicted results. The validated use of this geometric morphometric reconstruction technique in biomechanical modelling reduces the dependency on clinical scan data.

  18. A preliminary model of work during initial examination and treatment planning appointments.

    Science.gov (United States)

    Irwin, J Y; Torres-Urquidy, M H; Schleyer, T; Monaco, V

    2009-01-10

    Objective: This study's objective was to formally describe the work process for charting and treatment planning in general dental practice to inform the design of a new clinical computing environment. Methods: Using a process called contextual inquiry, researchers observed 23 comprehensive examination and treatment planning sessions during 14 visits to 12 general US dental offices. For each visit, field notes were analysed and reformulated as formalised models. Subsequently, each model type was consolidated across all offices and visits. Interruptions to the workflow, called breakdowns, were identified. Results: Clinical work during dental examination and treatment planning appointments is a highly collaborative activity involving dentists, hygienists and assistants. Personnel with multiple overlapping roles complete complex multi-step tasks supported by a large and varied collection of equipment, artifacts and technology. Most of the breakdowns were related to technology which interrupted the workflow, caused rework and increased the number of steps in work processes. Conclusion: Current dental software could be significantly improved with regard to its support for communication and collaboration, workflow, information design and presentation, information content, and data entry.

  19. A bifactor model of disgust proneness: examination of the Disgust Emotion Scale.

    Science.gov (United States)

    Olatunji, Bunmi O; Ebesutani, Chad; Reise, Steven P

    2015-04-01

    The current research evaluated a bifactor model for the Disgust Emotion Scale (DES) in three samples: N = 1,318 nonclinical participants, N = 152 clinic-referred patients, and N = 352 nonclinical participants. The primary goals were to (a) use bifactor modeling to examine the latent structure of the DES and in turn (b) evaluate whether the DES should be scored as a unidimensional scale or whether subscales should also be interpreted. Results suggested that a bifactor model fit the DES data well and that all DES items were strongly influenced by a general disgust proneness dimension and by five content dimensions. Moreover, model-based reliability analyses suggested that scoring a general disgust dimension is justified despite the confirmed multidimensional structure. However, subscales were found to be unreliable after controlling for the general disgust factor with the potential exception of the Mutilation/Death and Animals subscale. Subsequent analysis also showed that only the general disgust factor robustly predicted an obsessive-compulsive disorder symptom latent factor-a clinical condition closely related to disgust proneness; latent variables representing DES domains displayed weak relations with an obsessive-compulsive disorder factor above and beyond the general disgust factor. Implications for better understanding the structure of DES responses and its use in clinical research are discussed.

  20. FluxExplorer: A general platform for modeling and analyses of metabolic networks based on stoichiometry

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Stoichiometry-based analyses of metabolic networks have aroused significant interest among systems biology researchers in recent years. It is necessary to develop a more convenient modeling platform on which users can reconstruct their network models using completely graphical operations, and explore them with powerful analysis modules to get a better understanding of the properties of metabolic systems. Herein, an in silico platform, FluxExplorer, for metabolic modeling and analyses based on stoichiometry has been developed as a publicly available tool for systems biology research. This platform integrates various analytic approaches, including flux balance analysis, minimization of metabolic adjustment, extreme pathways analysis, shadow prices analysis, and singular value decomposition, providing a thorough characterization of the metabolic system. Using a graphic modeling process, metabolic networks can be reconstructed and modified intuitively and conveniently. Inconsistencies of a model with respect to the FBA principles can be detected automatically. In addition, this platform supports the systems biology markup language (SBML). FluxExplorer has been applied to rebuild a metabolic network in mammalian mitochondria, producing meaningful results. Generally, it is a powerful and very convenient tool for metabolic network modeling and analysis.
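
    The flux balance analysis step at the heart of such a platform reduces to a linear program; the generic Python sketch below (not FluxExplorer code) solves FBA for a toy two-metabolite network with scipy's linprog.

      import numpy as np
      from scipy.optimize import linprog

      #                uptake   A -> B   B export (objective)
      S = np.array([[   1,       -1,       0],    # metabolite A
                    [   0,        1,      -1]])   # metabolite B

      c = [0, 0, -1]                              # linprog minimises, so maximise v3 via -v3
      bounds = [(0, 10), (0, 1000), (0, 1000)]    # uptake capped at 10 flux units

      res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
      print("optimal fluxes:", res.x)             # expected [10, 10, 10]
      print("objective (export flux):", -res.fun)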

  1. Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts.

    Energy Technology Data Exchange (ETDEWEB)

    Sevougian, S. David; Freeze, Geoffrey A.; Gardner, William Payton; Hammond, Glenn Edward; Mariner, Paul

    2014-09-01

    directly, rather than through simplified abstractions. It also allows for complex representations of the source term, e.g., the explicit representation of many individual waste packages (i.e., meter-scale detail of an entire waste emplacement drift). This report fulfills the Generic Disposal System Analysis Work Package Level 3 Milestone - Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts (M3FT-14SN0808032).

  2. Calibration of back-analysed model parameters for landslides using classification statistics

    Science.gov (United States)

    Cepeda, Jose; Henderson, Laura

    2016-04-01

    Back-analyses are useful for characterizing the geomorphological and mechanical processes and parameters involved in the initiation and propagation of landslides. These processes and parameters can in turn be used for improving forecasts of scenarios and hazard assessments in areas or sites which have similar settings to the back-analysed cases. The selection of the modeled landslide that produces the best agreement with the actual observations requires running a number of simulations by varying the type of model and the sets of input parameters. The comparison of the simulated and observed parameters is normally performed by visual comparison of geomorphological or dynamic variables (e.g., geometry of scarp and final deposit, maximum velocities and depths). Over the past six years, a method developed by NGI has been used by some researchers for a more objective selection of back-analysed input model parameters. That method includes an adaptation of the equations for calculation of classifiers, and a comparative evaluation of classifiers of the selected parameter sets in the Receiver Operating Characteristic (ROC) space. This contribution presents an updating of the methodology. The proposed procedure allows comparisons between two or more "clouds" of classifiers. Each cloud represents the performance of a model over a range of input parameters (e.g., samples of probability distributions). Considering the fact that each cloud does not necessarily produce a full ROC curve, two new normalised ROC-space parameters are introduced for characterizing the performance of each cloud. The first parameter is representative of the cloud position relative to the point of perfect classification. The second parameter characterizes the position of the cloud relative to the theoretically perfect ROC curve and the no-discrimination line. The methodology is illustrated with back-analyses of slope stability and landslide runout of selected case studies. This research activity has been
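
    Two simple ROC-space summaries of a cloud of classifiers are sketched below. They are generic illustrations in the spirit of the method, namely the distance to the perfect-classification corner and the height above the no-discrimination line, rather than the authors' exact normalised parameters, and the (FPR, TPR) points are invented.

      import numpy as np

      cloud = np.array([[0.10, 0.70],
                        [0.20, 0.85],
                        [0.35, 0.90],
                        [0.05, 0.55]])          # invented (FPR, TPR) pairs, one per parameter set

      fpr, tpr = cloud[:, 0], cloud[:, 1]
      dist_to_perfect = np.sqrt(fpr**2 + (1.0 - tpr)**2)   # distance to the (0, 1) corner
      youden = tpr - fpr                                   # height above the no-discrimination line

      print("distance to perfect classification:", np.round(dist_to_perfect, 3))
      print("Youden-type index (TPR - FPR):     ", np.round(youden, 3))
      print("best parameter set by distance: index", int(dist_to_perfect.argmin()))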

  3. Volvo Logistics Corporation Returnable Packaging System : a model for analysing cost savings when switching packaging system

    OpenAIRE

    2008-01-01

    This thesis is a study for analysing costs affected by packaging in a producing industry. The purpose is to develop a model that will calculate and present possible cost savings for the customer by using Volvo Logistics Corporations, VLC’s, returnable packaging instead of other packaging solutions. The thesis is based on qualitative data gained from both theoretical and empirical studies. The methodology for gaining information has been to study theoretical sources such as course literature a...

  4. Examination of outcome after mild traumatic brain injury: the contribution of injury beliefs and Leventhal's common sense model.

    Science.gov (United States)

    Snell, Deborah L; Hay-Smith, E Jean C; Surgenor, Lois J; Siegert, Richard J

    2013-01-01

    Associations between components of Leventhal's common sense model of health behaviour (injury beliefs, coping, distress) and outcome after mild traumatic brain injury (MTBI) were examined. Participants (n = 147) were recruited within three months following MTBI and assessed six months later, completing study questionnaires at both visits (Illness Perceptions Questionnaire Revised, Brief COPE, Hospital Anxiety and Depression Scale). Outcome measures included the Rivermead Post-Concussion Symptoms Questionnaire and Rivermead Head Injury Follow-Up Questionnaire. Univariate and multivariate (logistic regression) analyses examined associations between injury beliefs, coping and distress at baseline, and later outcome. Participants endorsing stronger injury identity beliefs at baseline had poorer outcomes in the multivariate model. Consistent with Leventhal's model, participant beliefs about their injury and recovery had significant associations with outcome over time. Coping also appeared to have important associations with outcome, but more research is required to examine these. Current reassurance-based interventions may be improved by targeting variables such as injury beliefs, coping and adjustment soon after injury.

  5. Computational model for supporting SHM systems design: Damage identification via numerical analyses

    Science.gov (United States)

    Sartorato, Murilo; de Medeiros, Ricardo; Vandepitte, Dirk; Tita, Volnei

    2017-02-01

    This work presents a computational model to simulate thin structures monitored by piezoelectric sensors in order to support the design of SHM systems, which use vibration-based methods. Thus, a new shell finite element model was proposed and implemented via a User ELement subroutine (UEL) into the commercial package ABAQUS™. This model was based on a modified First Order Shear Theory (FOST) for piezoelectric composite laminates. After that, damaged cantilever beams with two piezoelectric sensors in different positions were investigated by using experimental analyses and the proposed computational model. A maximum difference in the magnitude of the FRFs between numerical and experimental analyses of 7.45% was found near the resonance regions. For damage identification, different levels of damage severity were evaluated by seven damage metrics, including one proposed by the present authors. Numerical and experimental damage metric values were compared, showing a good correlation in terms of tendency. Finally, based on comparisons of numerical and experimental results, the potential and limitations of the proposed computational model for supporting SHM systems design are discussed.

  6. Modeling and performance analyses of evaporators in frozen-food supermarket display cabinets at low temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Getu, H.M.; Bansal, P.K. [Department of Mechanical Engineering, The University of Auckland, Private Bag 92019, Auckland (New Zealand)

    2007-11-15

    This paper presents modeling and experimental analyses of evaporators in 'in situ' frozen-food display cabinets at low temperatures in the supermarket industry. Extensive experiments were conducted to measure store and display cabinet relative humidities and temperatures, and pressures, temperatures and mass flow rates of the refrigerant. The mathematical model adopts various empirical correlations of heat transfer coefficients and frost properties in a fin-tube heat exchanger in order to investigate the influence of indoor conditions on the performance of the display cabinets. The model is validated with the experimental data of 'in situ' cabinets. The model would be a useful guide for design engineers evaluating the performance of supermarket display cabinet heat exchangers under various store conditions. (author)

  7. Using Weather Data and Climate Model Output in Economic Analyses of Climate Change

    Energy Technology Data Exchange (ETDEWEB)

    Auffhammer, M.; Hsiang, S. M.; Schlenker, W.; Sobel, A.

    2013-06-28

    Economists are increasingly using weather data and climate model output in analyses of the economic impacts of climate change. This article introduces a set of weather data sets and climate models that are frequently used, discusses the most common mistakes economists make in using these products, and identifies ways to avoid these pitfalls. We first provide an introduction to weather data, including a summary of the types of datasets available, and then discuss five common pitfalls that empirical researchers should be aware of when using historical weather data as explanatory variables in econometric applications. We then provide a brief overview of climate models and discuss two common and significant errors often made by economists when climate model output is used to simulate the future impacts of climate change on an economic outcome of interest.

  8. Risk Factor Analyses for the Return of Spontaneous Circulation in the Asphyxiation Cardiac Arrest Porcine Model

    Directory of Open Access Journals (Sweden)

    Cai-Jun Wu

    2015-01-01

    Full Text Available Background: Animal models of asphyxiation cardiac arrest (ACA) are frequently used in basic research to mirror the clinical course of cardiac arrest (CA). The rates of return of spontaneous circulation (ROSC) in ACA animal models are lower than those from studies that have utilized ventricular fibrillation (VF) animal models. The purpose of this study was to characterize the factors associated with ROSC in the ACA porcine model. Methods: Forty-eight healthy miniature pigs underwent endotracheal tube clamping to induce CA. Once induced, CA was maintained untreated for a period of 8 min. Two minutes following the initiation of cardiopulmonary resuscitation (CPR), defibrillation was attempted until ROSC was achieved or the animal died. To assess the factors associated with ROSC in this CA model, logistic regression analyses were performed to analyze gender, the time of preparation, the amplitude spectrum area (AMSA) from the beginning of CPR and the pH at the beginning of CPR. A receiver-operating characteristic (ROC) curve was used to evaluate the predictive value of AMSA for ROSC. Results: ROSC was achieved in only 52.1% of animals in this ACA porcine model. The multivariate logistic regression analyses revealed that ROSC significantly depended on the time of preparation, AMSA at the beginning of CPR and pH at the beginning of CPR. The area under the ROC curve for AMSA at the beginning of CPR in predicting ROSC was 0.878 (95% confidence interval: 0.773-0.983), and the optimum cut-off value was 15.62 (specificity 95.7% and sensitivity 80.0%). Conclusions: The time of preparation, AMSA and the pH at the beginning of CPR were associated with ROSC in this ACA porcine model. AMSA also predicted the likelihood of ROSC in this ACA animal model.
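
    The ROC step can be reproduced generically as follows; the AMSA values and outcomes are simulated rather than taken from the study, and the optimum cut-off is chosen with Youden's index.

      import numpy as np
      from sklearn.metrics import roc_auc_score, roc_curve

      rng = np.random.default_rng(3)
      rosc = np.r_[np.ones(25), np.zeros(23)]                       # 1 = ROSC achieved (simulated)
      amsa = np.r_[rng.normal(20, 5, 25), rng.normal(12, 4, 23)]    # AMSA values (simulated)

      auc = roc_auc_score(rosc, amsa)
      fpr, tpr, thresholds = roc_curve(rosc, amsa)
      best = np.argmax(tpr - fpr)                                   # Youden's J statistic
      print(f"AUC = {auc:.3f}; optimum cut-off = {thresholds[best]:.2f} "
            f"(sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f})")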

  9. The Leicester AATSR Global Analyser (LAGA) - Giving Young Students the Opportunity to Examine Space Observations of Global Climate-Related Processes

    Science.gov (United States)

    Llewellyn-Jones, David; Good, Simon; Corlett, Gary

    A pc-based analysis package has been developed for the dual purposes of, firstly, providing a 'quick-look' capability to research workers inspecting long time-series of global satellite datasets of Sea-Surface Temperature (SST); and, secondly, providing an introduction for students, either undergraduates or advanced high-school students, to the characteristics of commonly used analysis techniques for large geophysical data-sets from satellites. Students can also gain insight into the behaviour of some basic climate-related large-scale or global processes. The package gives students immediate access to up to 16 years of continuous global SST data, mainly from the Advanced Along-Track Scanning Radiometer, currently flying on ESA's Envisat satellite. The data are presented in the form of monthly averages, spatially averaged to half-degree or one-sixth-degree longitude-latitude grids. There are simple button-operated facilities for defining and calculating box-averages; producing time-series of such averages; defining and displaying transects and their evolution over time; and examining anomalous behaviour by displaying the difference between observed values and values derived from climatological means. By using these facilities a student rapidly gains familiarity with such processes as annual variability and the El Niño effect, as well as major current systems such as the Gulf Stream and other climatically important phenomena. In fact, the student is given immediate insight into the basic methods of examining geophysical data in a research context, without needing to acquire special analysis skills or go through the lengthy data retrieval and preparation procedures that are more generally required, as precursors to serious investigation, in the research laboratory. This software package, called the Leicester AATSR Global Analyser (LAGA), is written in a well-known and widely used analysis language and the package can be run by using software

  10. Examining Factors Affecting Science Achievement of Hong Kong in PISA 2006 Using Hierarchical Linear Modeling

    Science.gov (United States)

    Lam, Terence Yuk Ping; Lau, Kwok Chi

    2014-10-01

    This study uses hierarchical linear modeling to examine the influence of a range of factors on the science performances of Hong Kong students in PISA 2006. Hong Kong has been consistently ranked highly in international science assessments, such as Programme for International Student Assessment and Trends in International Mathematics and Science Study; therefore, an exploration of the factors that affect science performances of Hong Kong students can give a lens to examine how science education can be improved in Hong Kong and other countries. The analyses reveal that student backgrounds as male, at higher grade levels, and born in mainland (when in the same grade) are associated with better science performance. Among the attitudinal factors, enjoyment of science and self-efficacy in science play important roles in scientific achievements. Most of the parental factors, on the other hand, are not having significant impacts on achievement after student attitudes are taken into account, with only parents' value of science having a small effect. School student intake is found to be a strong predictor of school average achievement, as well as a major mediator of the effects of school enrollment size and school socio-economic status. The findings differ from recently reported results, which suggested that school enrollment size was associated with achievement. This study also points out the problems of the use of science instruction time as a school-level variable to explain science achievement in Hong Kong.
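
    A minimal two-level analogue of such an HLM analysis, fitted on simulated students-within-schools data rather than on PISA 2006, can be written with the MixedLM routine in the statsmodels Python package (the variable names below are invented):

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(7)
      n_schools, n_students = 40, 30
      school = np.repeat(np.arange(n_schools), n_students)
      school_effect = rng.normal(0, 20, n_schools)[school]     # level-2 (school) variation
      enjoyment = rng.normal(0, 1, school.size)                # level-1 (student) predictor
      science = 500 + 25 * enjoyment + school_effect + rng.normal(0, 60, school.size)

      df = pd.DataFrame({"science": science, "enjoyment": enjoyment, "school": school})
      result = smf.mixedlm("science ~ enjoyment", df, groups=df["school"]).fit()
      print(result.summary())     # fixed effect of enjoyment plus school-level variance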

  11. Analyses of simulations of three-dimensional lattice proteins in comparison with a simplified statistical mechanical model of protein folding.

    Science.gov (United States)

    Abe, H; Wako, H

    2006-07-01

    Folding and unfolding simulations of three-dimensional lattice proteins were analyzed using a simplified statistical mechanical model in which their amino acid sequences and native conformations were incorporated explicitly. Using this statistical mechanical model, under the assumption that only interactions between amino acid residues within a local structure in a native state are considered, the partition function of the system can be calculated for a given native conformation without any adjustable parameter. The simulations were carried out for two different native conformations, for each of which two foldable amino acid sequences were considered. The native and non-native contacts between amino acid residues occurring in the simulations were examined in detail and compared with the results derived from the theoretical model. The equilibrium thermodynamic quantities (free energy, enthalpy, entropy, and the probability of each amino acid residue being in the native state) at various temperatures obtained from the simulations and the theoretical model were also examined in order to characterize the folding processes that depend on the native conformations and the amino acid sequences. Finally, the free energy landscapes were discussed based on these analyses.

  12. Prediction Uncertainty Analyses for the Combined Physically-Based and Data-Driven Models

    Science.gov (United States)

    Demissie, Y. K.; Valocchi, A. J.; Minsker, B. S.; Bailey, B. A.

    2007-12-01

    The unavoidable simplification associated with physically-based mathematical models can result in biased parameter estimates and correlated model calibration errors, which in turn affect the accuracy of model predictions and the corresponding uncertainty analyses. In this work, a physically-based groundwater model (MODFLOW) together with error-correcting artificial neural networks (ANN) are used in a complementary fashion to obtain an improved prediction (i.e. prediction with reduced bias and error correlation). The associated prediction uncertainty of the coupled MODFLOW-ANN model is then assessed using three alternative methods. The first method estimates the combined model confidence and prediction intervals using first-order least-squares regression approximation theory. The second method uses Monte Carlo and bootstrap techniques for MODFLOW and ANN, respectively, to construct the combined model confidence and prediction intervals. The third method relies on a Bayesian approach that uses analytical or Monte Carlo methods to derive the intervals. The performance of these approaches is compared with Generalized Likelihood Uncertainty Estimation (GLUE) and Calibration-Constrained Monte Carlo (CCMC) intervals of the MODFLOW predictions alone. The results are demonstrated for a hypothetical case study developed based on a phytoremediation site at the Argonne National Laboratory. This case study comprises structural, parameter, and measurement uncertainties. The preliminary results indicate that the proposed three approaches yield comparable confidence and prediction intervals, thus making the computationally efficient first-order least-squares regression approach attractive for estimating the coupled model uncertainty. These results will be compared with GLUE and CCMC results.

  13. The importance of accurate muscle modelling for biomechanical analyses: a case study with a lizard skull

    Science.gov (United States)

    Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.

    2013-01-01

    Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944

  14. Analysing adverse events by time-to-event models: the CLEOPATRA study.

    Science.gov (United States)

    Proctor, Tanja; Schumacher, Martin

    2016-07-01

    When analysing primary and secondary endpoints in a clinical trial with patients suffering from a chronic disease, statistical models for time-to-event data are commonly used and accepted. This is in contrast to the analysis of data on adverse events where often only a table with observed frequencies and corresponding test statistics is reported. An example is the recently published CLEOPATRA study where a three-drug regimen is compared with a two-drug regimen in patients with HER2-positive first-line metastatic breast cancer. Here, as described earlier, primary and secondary endpoints (progression-free and overall survival) are analysed using time-to-event models, whereas adverse events are summarized in a simple frequency table, although the duration of study treatment differs substantially. In this paper, we demonstrate the application of time-to-event models to first serious adverse events using the data of the CLEOPATRA study. This will cover the broad range between a simple incidence rate approach over survival and competing risks models (with death as a competing event) to multi-state models. We illustrate all approaches by means of graphical displays highlighting the temporal dynamics and compare the obtained results. For the CLEOPATRA study, the resulting hazard ratios are all in the same order of magnitude. But the use of time-to-event models provides valuable and additional information that would potentially be overlooked by only presenting incidence proportions. These models adequately address the temporal dynamics of serious adverse events as well as death of patients. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Models and analyses for inertial-confinement fusion-reactor studies

    Energy Technology Data Exchange (ETDEWEB)

    Bohachevsky, I.O.

    1981-05-01

    This report describes models and analyses devised at Los Alamos National Laboratory to determine the technical characteristics of different inertial confinement fusion (ICF) reactor elements required for component integration into a functional unit. We emphasize the generic properties of the different elements rather than specific designs. The topics discussed are general ICF reactor design considerations; reactor cavity phenomena, including the restoration of interpulse ambient conditions; first-wall temperature increases and material losses; reactor neutronics and hydrodynamic blanket response to neutron energy deposition; and analyses of loads and stresses in the reactor vessel walls, including remarks about the generation and propagation of very short wavelength stress waves. A discussion of analytic approaches useful in integrations and optimizations of ICF reactor systems concludes the report.

  16. Dynamics and spatial structure of ENSO from re-analyses versus CMIP5 models

    Science.gov (United States)

    Serykh, Ilya; Sonechkin, Dmitry

    2016-04-01

    Based on the mathematical idea of the so-called strange nonchaotic attractor (SNA) in quasi-periodically forced dynamical systems, the currently available re-analysis data are considered. It is found that the El Niño - Southern Oscillation (ENSO) is driven not only by the seasonal heating, but also by three further external periodicities (incommensurate with the annual period) associated with the ~18.6-year lunar-solar nutation of the Earth's rotation axis, the ~11-year sunspot activity cycle and the ~14-month Chandler wobble in the Earth's pole motion. Because of the incommensurability of their periods, all four forcings act on the system at mutually unsynchronised moments. As a result, the ENSO time series look very complex (strange in mathematical terms) but nonchaotic. The power spectra of ENSO indices reveal numerous peaks located at periods that are multiples of the above periodicities, as well as at their sub- and super-harmonics. In spite of this complexity, a mutual order seems to be inherent in the ENSO time series and their spectra. This order reveals itself in a scaling of the power spectrum peaks and of the respective rhythms in the ENSO dynamics that resembles the power spectrum and dynamics of an SNA. It means that, in principle, there is no limit to forecasting ENSO. In practice, it opens the possibility of forecasting ENSO several years ahead. Global spatial structures of anomalies during El Niño and power spectra of ENSO indices from re-analyses are compared with the respective output quantities of the CMIP5 climate models (the Historical experiment). It is found that the models reproduce global spatial structures of the near-surface temperature and sea-level pressure anomalies during El Niño that are very similar to those fields in the re-analyses considered. But the power spectra of the ENSO indices from the CMIP5 models show no peaks at the same periods as the re-analysis power spectra. We suppose that it is possible to improve modeled

  17. Design evaluation and optimisation in crossover pharmacokinetic studies analysed by nonlinear mixed effects models.

    Science.gov (United States)

    Nguyen, Thu Thuy; Bazzoli, Caroline; Mentré, France

    2012-05-20

    Bioequivalence or interaction trials are commonly studied in a crossover design and can be analysed by nonlinear mixed effects models as an alternative to the noncompartmental approach. We propose an extension of the population Fisher information matrix in nonlinear mixed effects models to design crossover pharmacokinetic trials, using a linearisation of the model around the random effect expectation, including within-subject variability and discrete covariates fixed or changing between periods. We use the expected standard errors of the treatment effect to compute the power for the Wald test of comparison or equivalence and the number of subjects needed for a given power. We perform various simulations mimicking two-period crossover trials to show the relevance of these developments. We then apply these developments to design a crossover pharmacokinetic study of amoxicillin in piglets and implement them in the new version 3.2 of the R function PFIM.
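
    The power and sample-size step can be sketched as follows, assuming (as is standard) that the Wald statistic is approximately normal and that the standard error predicted by the Fisher information matrix shrinks as 1/sqrt(N); all numbers are illustrative, not those of the amoxicillin study.

      import numpy as np
      from scipy.stats import norm

      beta = np.log(1.25)       # assumed treatment effect on a log scale
      se_ref = 0.09             # predicted SE of beta for the reference design (assumed)
      n_ref = 24                # subjects in the reference design (assumed)
      alpha, target = 0.05, 0.90

      z_a = norm.ppf(1 - alpha / 2)
      power_ref = norm.cdf(abs(beta) / se_ref - z_a)        # power of the Wald comparison test

      # SE shrinks like 1/sqrt(N), so invert for the N that reaches the target power
      n_needed = n_ref * ((z_a + norm.ppf(target)) * se_ref / abs(beta)) ** 2
      print(f"power with N={n_ref}: {power_ref:.2f}; N for {target:.0%} power: {int(np.ceil(n_needed))}")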

  18. An age-dependent model to analyse the evolutionary stability of bacterial quorum sensing.

    Science.gov (United States)

    Mund, A; Kuttler, C; Pérez-Velázquez, J; Hense, B A

    2016-09-21

    Bacterial communication is enabled through the collective release and sensing of signalling molecules in a process called quorum sensing. Cooperative processes can easily be destabilized by the appearance of cheaters, who contribute little or nothing at all to the production of common goods. This especially applies for planktonic cultures. In this study, we analyse the dynamics of bacterial quorum sensing and its evolutionary stability under two levels of cooperation, namely signal and enzyme production. The model accounts for mutation rates and switches between planktonic and biofilm state of growth. We present a mathematical approach to model these dynamics using age-dependent colony models. We explore the conditions under which cooperation is stable and find that spatial structuring can lead to long-term scenarios such as coexistence or bistability, depending on the non-linear combination of different parameters like death rates and production costs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Assessing Cognitive Processes with Diffusion Model Analyses: A Tutorial based on fast-dm-30

    Directory of Open Access Journals (Sweden)

    Andreas eVoss

    2015-03-01

    Full Text Available Diffusion models can be used to infer cognitive processes involved in fast binary decision tasks. The model assumes that information is accumulated continuously until one of two thresholds is hit. In the analysis, response time distributions from numerous trials of the decision task are used to estimate a set of parameters mapping distinct cognitive processes. In recent years, diffusion model analyses have become more and more popular in different fields of psychology. This increased popularity is based on the recent development of several software solutions for the parameter estimation. Although these programs make the application of the model relatively easy, there is a shortage of knowledge about different steps of a state-of-the-art diffusion model study. In this paper, we give a concise tutorial on diffusion modelling, and we present fast-dm-30, a thoroughly revised and extended version of the fast-dm software (Voss & Voss, 2007) for diffusion model data analysis. The most important improvement of the fast-dm version is the possibility to choose between different optimization criteria (i.e., Maximum Likelihood, Chi-Square, and Kolmogorov-Smirnov), which differ in applicability for different data sets.
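
    The data-generating process assumed by the diffusion model can be sketched with a simple Euler random walk between two absorbing boundaries; this is a didactic simulation with arbitrary parameter values, not the fast-dm estimation routine.

      import numpy as np

      rng = np.random.default_rng(0)
      a, z, v, t0 = 1.0, 0.5, 0.8, 0.3    # threshold separation, start point, drift, non-decision time
      dt, s = 0.001, 1.0                  # time step and diffusion constant

      def simulate_trial():
          x, t = z, 0.0
          while 0.0 < x < a:
              x += v * dt + s * np.sqrt(dt) * rng.normal()
              t += dt
          return t + t0, x >= a           # (response time, upper-boundary response?)

      trials = [simulate_trial() for _ in range(2000)]
      upper_rts = np.array([rt for rt, upper in trials if upper])
      print(f"P(upper response) = {np.mean([u for _, u in trials]):.2f}, "
            f"mean upper-boundary RT = {upper_rts.mean():.3f} s")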

  20. Models of population-based analyses for data collected from large extended families.

    Science.gov (United States)

    Wang, Wenyu; Lee, Elisa T; Howard, Barbara V; Fabsitz, Richard R; Devereux, Richard B; MacCluer, Jean W; Laston, Sandra; Comuzzie, Anthony G; Shara, Nawar M; Welty, Thomas K

    2010-12-01

    Large studies of extended families usually collect valuable phenotypic data that may have scientific value for purposes other than testing genetic hypotheses if the families were not selected in a biased manner. These purposes include assessing population-based associations of diseases with risk factors/covariates and estimating population characteristics such as disease prevalence and incidence. Relatedness among participants however, violates the traditional assumption of independent observations in these classic analyses. The commonly used adjustment method for relatedness in population-based analyses is to use marginal models, in which clusters (families) are assumed to be independent (unrelated) with a simple and identical covariance (family) structure such as those called independent, exchangeable and unstructured covariance structures. However, using these simple covariance structures may not be optimally appropriate for outcomes collected from large extended families, and may under- or over-estimate the variances of estimators and thus lead to uncertainty in inferences. Moreover, the assumption that families are unrelated with an identical family structure in a marginal model may not be satisfied for family studies with large extended families. The aim of this paper is to propose models incorporating marginal models approaches with a covariance structure for assessing population-based associations of diseases with their risk factors/covariates and estimating population characteristics for epidemiological studies while adjusting for the complicated relatedness among outcomes (continuous/categorical, normally/non-normally distributed) collected from large extended families. We also discuss theoretical issues of the proposed models and show that the proposed models and covariance structure are appropriate for and capable of achieving the aim.

  1. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    Science.gov (United States)

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
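
    The conversion idea amounts to a simple regression between paired measurements; the sketch below uses invented paired concentrations and an ordinary least-squares fit (the published model may differ in form, e.g. in how the data are transformed).

      import numpy as np

      sum119 = np.array([12.1, 45.3, 8.7, 102.4, 61.0, 23.8])    # ng/g, low-resolution congener set (invented)
      sum209 = np.array([13.0, 48.9, 9.5, 110.2, 65.7, 25.4])    # ng/g, full 209-congener set (invented)

      slope, intercept = np.polyfit(sum119, sum209, 1)           # conversion model sum209 ~ sum119
      predicted = slope * sum119 + intercept
      rpd = 100 * np.abs(predicted - sum209) / ((predicted + sum209) / 2)

      print(f"conversion: sum209 = {slope:.3f} * sum119 + {intercept:.2f}")
      print(f"mean relative percent difference: {rpd.mean():.1f}%")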

  2. Sampling and sensitivity analyses tools (SaSAT for computational modelling

    Directory of Open Access Journals (Sweden)

    Wilson David P

    2008-02-01

    Full Text Available SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated.

  3. Sampling and sensitivity analyses tools (SaSAT) for computational modelling.

    Science.gov (United States)

    Hoare, Alexander; Regan, David G; Wilson, David P

    2008-02-27

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab, a numerical mathematical software package, and utilises algorithms contained in the Matlab Statistics Toolbox. However, Matlab is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated.
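
    To make two of the listed tasks concrete, the sketch below re-implements Latin hypercube sampling and partial rank correlation coefficients (PRCC) in Python for a toy epidemic-style output (R0 = beta/gamma). SaSAT itself is a Matlab toolbox, so this is an independent illustration of the techniques rather than its actual code.

```python
# Hedged re-implementation of two SaSAT-style tasks: Latin hypercube sampling of
# parameter space and PRCC-based sensitivity ranking, on a toy model output.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def latin_hypercube(n_samples, bounds):
    """Stratified (Latin hypercube) sample; bounds is a list of (low, high) pairs."""
    d = len(bounds)
    u = (rng.random((n_samples, d)) + np.arange(n_samples)[:, None]) / n_samples
    for j in range(d):
        u[:, j] = rng.permutation(u[:, j])      # shuffle strata independently per parameter
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

def prcc(X, y):
    """Partial rank correlation of each column of X with y."""
    R = np.column_stack([stats.rankdata(c) for c in X.T])
    ry = stats.rankdata(y)
    out = []
    for j in range(R.shape[1]):
        others = np.delete(R, j, axis=1)
        A = np.column_stack([np.ones(len(y)), others])
        res_x = R[:, j] - A @ np.linalg.lstsq(A, R[:, j], rcond=None)[0]
        res_y = ry - A @ np.linalg.lstsq(A, ry, rcond=None)[0]
        out.append(np.corrcoef(res_x, res_y)[0, 1])
    return np.array(out)

# Toy epidemic-style output: R0 = beta / gamma, sampled over plausible ranges.
X = latin_hypercube(500, [(0.1, 1.0), (0.05, 0.5)])   # beta, gamma
y = X[:, 0] / X[:, 1]
print(prcc(X, y))   # beta strongly positive, gamma strongly negative
```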

  4. Power analyses for negative binomial models with application to multiple sclerosis clinical trials.

    Science.gov (United States)

    Rettiganti, Mallik; Nagaraja, H N

    2012-01-01

    We use negative binomial (NB) models for the magnetic resonance imaging (MRI)-based brain lesion count data from parallel group (PG) and baseline versus treatment (BVT) trials for relapsing remitting multiple sclerosis (RRMS) patients, and describe the associated likelihood ratio (LR), score, and Wald tests. We perform power analyses and sample size estimation using the simulated percentiles of the exact distribution of the test statistics for the PG and BVT trials. When compared to the corresponding nonparametric test, the LR test results in 30-45% reduction in sample sizes for the PG trials and 25-60% reduction for the BVT trials.
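
    The sketch below shows a simulation-based version of such a power calculation for a two-arm (parallel group) comparison of lesion counts under a negative binomial model, using a Wald test on the treatment effect. The rate ratio, dispersion and sample size are illustrative assumptions; the paper itself uses simulated percentiles of the exact distributions of the LR, score and Wald statistics.

```python
# Hedged sketch: simulation-based power for a PG-style trial under a negative
# binomial count model, testing the treatment coefficient with a Wald test.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)

def nb_counts(mu, alpha, size):
    """Gamma-Poisson mixture: mean mu, variance mu + alpha*mu**2."""
    lam = rng.gamma(shape=1.0 / alpha, scale=mu * alpha, size=size)
    return rng.poisson(lam)

def power(n_per_arm, mu_control=10.0, rate_ratio=0.5, alpha_disp=1.0,
          n_sim=200, level=0.05):
    hits = 0
    for _ in range(n_sim):
        y = np.concatenate([nb_counts(mu_control, alpha_disp, n_per_arm),
                            nb_counts(mu_control * rate_ratio, alpha_disp, n_per_arm)])
        trt = np.repeat([0, 1], n_per_arm)
        X = sm.add_constant(trt)
        res = sm.NegativeBinomial(y, X).fit(method="bfgs", maxiter=200, disp=0)
        hits += res.pvalues[1] < level           # Wald test on the treatment term
    return hits / n_sim

print(power(n_per_arm=30))   # estimated power for 30 patients per arm
```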

  5. Analysing and modelling battery drain of 3G terminals due to port scan attacks

    OpenAIRE

    Pascual Trigos, Mar

    2010-01-01

    This thesis identifies a threat to 3G mobile phones, namely the eventual draining of the terminal's battery due to undesired data traffic. The objectives of the thesis are to analyse the battery drain of 3G mobile phones caused by uplink and downlink traffic and to model that drain. First, we describe how a mobile phone can be made to increase its consumption, and therefore to shorten its battery lifetime. In particular, we focus on data traffic. This traffic ca...

  6. Analysing the Effects of Flood-Resilience Technologies in Urban Areas Using a Synthetic Model Approach

    Directory of Open Access Journals (Sweden)

    Reinhard Schinke

    2016-11-01

    Flood protection systems with their spatial effects play an important role in managing and reducing flood risks. The planning and decision process as well as the technical implementation are well organized and often exercised. However, building-related flood-resilience technologies (FReT) are often neglected due to the absence of suitable approaches to analyse and integrate such measures in large-scale flood damage mitigation concepts. Against this backdrop, a synthetic model approach was extended by a few complementary methodical steps in order to calculate flood damage to buildings considering the effects of building-related FReT and to analyse the area-related reduction of flood risks using geo-information systems (GIS) with high spatial resolution. It includes a civil-engineering-based investigation of characteristic building properties and constructions, including a selection and combination of appropriate FReT, as a basis for the derivation of synthetic depth-damage functions. Depending on the real exposure and the implementation level of FReT, the functions can be used and allocated in spatial damage and risk analyses. The application of the extended approach is shown for a case study in Valencia (Spain). In this way, the overall research findings improve the integration of FReT in flood risk management. They also provide useful information for advising individuals at risk, supporting the selection and implementation of FReT.

  7. Modeling of high homologous temperature deformation behavior for stress and life-time analyses

    Energy Technology Data Exchange (ETDEWEB)

    Krempl, E. [Rensselaer Polytechnic Institute, Troy, NY (United States)]

    1997-12-31

    Stress and lifetime analyses need realistic and accurate constitutive models for the inelastic deformation behavior of engineering alloys at low and high temperatures. Conventional creep and plasticity models have fundamental difficulties in reproducing high homologous temperature behavior. To improve the modeling capabilities, "unified" state variable theories were conceived. They consider all inelastic deformation to be rate dependent and do not have separate repositories for creep and plasticity. The viscoplasticity theory based on overstress (VBO), one of the unified theories, is introduced and its properties are delineated. At high homologous temperatures, where secondary and tertiary creep are observed, modeling is primarily accomplished by a static recovery term and a softening isotropic stress. At low temperatures creep is merely a manifestation of rate dependence. The primary creep modeled at low homologous temperature is due to the rate dependence of the flow law. The model is unaltered in the transition from low to high temperature, except that the softening of the isotropic stress and the influence of the static recovery term increase with increasing temperature.

  8. Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.

    Science.gov (United States)

    Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A

    2013-02-01

    The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on

  9. Analysis of Breast Self-Examination Practice Using Fishbein's Model of Behavioral Intentions.

    Science.gov (United States)

    Horne, Deborah A.; And Others

    1986-01-01

    This study examined the extent to which Fishbein's Model of Behavioral Intentions was able to predict breast self-examination behavior. A questionnaire was administered to 350 women who were classified according to frequency of breast self-examination. Results are set forth. Implications for health educators are considered. (MT)

  10. Structural identifiability analyses of candidate models for in vitro Pitavastatin hepatic uptake.

    Science.gov (United States)

    Grandjean, Thomas R B; Chappell, Michael J; Yates, James W T; Evans, Neil D

    2014-05-01

    In this paper a review of the application of four different techniques (a version of the similarity transformation approach for autonomous uncontrolled systems, a non-differential input/output observable normal form approach, the characteristic set differential algebra and a recent algebraic input/output relationship approach) to determine the structural identifiability of certain in vitro nonlinear pharmacokinetic models is provided. The Organic Anion Transporting Polypeptide (OATP) substrate, Pitavastatin, is used as a probe on freshly isolated animal and human hepatocytes. Candidate pharmacokinetic non-linear compartmental models have been derived to characterise the uptake process of Pitavastatin. As a prerequisite to parameter estimation, structural identifiability analyses are performed to establish that all unknown parameters can be identified from the experimental observations available.
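
    As a toy illustration of one identifiability idea (the Taylor-series/output-derivative approach) rather than of the four techniques or the Pitavastatin models analysed in the paper, the sketch below checks symbolically that both parameters of a one-compartment uptake model can be recovered from the observed output and its first derivative at time zero.

```python
# Hedged toy example: structural identifiability of a one-compartment model
# dx/dt = -k*x, y = x/V, x(0) = D (known), via output derivatives at t = 0.
import sympy as sp

t = sp.symbols('t')
k, V, D = sp.symbols('k V D', positive=True)      # k = uptake rate, V = volume, D = known dose
x = D * sp.exp(-k * t)                             # solution of dx/dt = -k*x with x(0) = D
y = x / V                                          # observed concentration

c0 = y.subs(t, 0)                                  # y(0)  = D/V
c1 = sp.diff(y, t).subs(t, 0)                      # y'(0) = -k*D/V

sol = sp.solve([sp.Eq(sp.Symbol('c0'), c0), sp.Eq(sp.Symbol('c1'), c1)], (k, V), dict=True)
print(sol)   # a unique solution -> k and V are structurally (globally) identifiable
```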

  11. Rockslide and Impulse Wave Modelling in the Vajont Reservoir by DEM-CFD Analyses

    Science.gov (United States)

    Zhao, T.; Utili, S.; Crosta, G. B.

    2016-06-01

    This paper investigates the generation of hydrodynamic water waves due to rockslides plunging into a water reservoir. Quasi-3D DEM analyses in plane strain by a coupled DEM-CFD code are adopted to simulate the rockslide from its onset to the impact with the still water and the subsequent generation of the wave. The employed numerical tools and upscaling of hydraulic properties allow predicting a physical response in broad agreement with the observations notwithstanding the assumptions and characteristics of the adopted methods. The results obtained by the DEM-CFD coupled approach are compared to those published in the literature and those presented by Crosta et al. (Landslide spreading, impulse waves and modelling of the Vajont rockslide. Rock mechanics, 2014) in a companion paper obtained through an ALE-FEM method. Analyses performed along two cross sections are representative of the limit conditions of the eastern and western slope sectors. The max rockslide average velocity and the water wave velocity reach ca. 22 and 20 m/s, respectively. The maximum computed run up amounts to ca. 120 and 170 m for the eastern and western lobe cross sections, respectively. These values are reasonably similar to those recorded during the event (i.e. ca. 130 and 190 m, respectively). Therefore, the overall study lays out a possible DEM-CFD framework for the modelling of the generation of the hydrodynamic wave due to the impact of a rapid moving rockslide or rock-debris avalanche.

  12. Economic modeling of electricity production from hot dry rock geothermal reservoirs: methodology and analyses. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Cummings, R.G.; Morris, G.E.

    1979-09-01

    An analytical methodology is developed for assessing alternative modes of generating electricity from hot dry rock (HDR) geothermal energy sources. The methodology is used in sensitivity analyses to explore relative system economics. The methodology used a computerized, intertemporal optimization model to determine the profit-maximizing design and management of a unified HDR electric power plant with a given set of geologic, engineering, and financial conditions. By iterating this model on price, a levelized busbar cost of electricity is established. By varying the conditions of development, the sensitivity of both optimal management and busbar cost to these conditions is explored. A plausible set of reference case parameters is established at the outset of the sensitivity analyses. This reference case links a multiple-fracture reservoir system to an organic, binary-fluid conversion cycle. A levelized busbar cost of 43.2 mills/kWh ($1978) was determined for the reference case, which had an assumed geothermal gradient of 40°C/km, a design well-flow rate of 75 kg/s, an effective heat transfer area per pair of wells of 1.7 x 10⁶ m², and a plant design temperature of 160°C. Variations in the presumed geothermal gradient, size of the reservoir, drilling costs, real rates of return, and other system parameters yield minimum busbar costs between -40% and +76% of the reference case busbar cost.

  13. Establishing a Numerical Modeling Framework for Hydrologic Engineering Analyses of Extreme Storm Events

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby

    2017-08-01

    In this study a numerical modeling framework for simulating extreme storm events was established using the Weather Research and Forecasting (WRF) model. Such a framework is necessary for the derivation of engineering parameters such as probable maximum precipitation that are the cornerstone of large water management infrastructure design. Here this framework was built based on a heavy storm that occurred in Nashville (USA) in 2010, and verified using two other extreme storms. To achieve the optimal setup, several combinations of model resolutions, initial/boundary conditions (IC/BC), cloud microphysics and cumulus parameterization schemes were evaluated using multiple metrics of precipitation characteristics. The evaluation suggests that WRF is most sensitive to IC/BC option. Simulation generally benefits from finer resolutions up to 5 km. At the 15km level, NCEP2 IC/BC produces better results, while NAM IC/BC performs best at the 5km level. Recommended model configuration from this study is: NAM or NCEP2 IC/BC (depending on data availability), 15km or 15km-5km nested grids, Morrison microphysics and Kain-Fritsch cumulus schemes. Validation of the optimal framework suggests that these options are good starting choices for modeling extreme events similar to the test cases. This optimal framework is proposed in response to emerging engineering demands of extreme storm events forecasting and analyses for design, operations and risk assessment of large water infrastructures.

  14. Estimating required information size by quantifying diversity in random-effects model meta-analyses

    DEFF Research Database (Denmark)

    Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper;

    2009-01-01

    an intervention effect suggested by trials with low-risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. RESULTS: We devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between... and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D2 ≥ I2, for all meta-analyses. CONCLUSION: We conclude that D2 seems a better alternative than I2 to consider model variation in any random...
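
    The sketch below computes D2 as defined here (the relative reduction in the variance of the pooled estimate when moving from a random-effects to a fixed-effect model) alongside I2 for a made-up set of trial results. The DerSimonian-Laird estimate of the between-trial variance and the use of 1/(1 - D2) as the information-size adjustment factor reflect one reading of the approach and should be checked against the original paper.

```python
# Hedged sketch of the diversity (D2) calculation for a random-effects meta-analysis.
import numpy as np

def diversity_d2(effects, variances):
    w = 1.0 / variances                                    # fixed-effect weights
    theta_f = np.sum(w * effects) / np.sum(w)
    Q = np.sum(w * (effects - theta_f) ** 2)
    df = len(effects) - 1
    # DerSimonian-Laird estimate of the between-trial variance tau^2
    tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))
    v_fixed = 1.0 / np.sum(w)                              # variance of the fixed-effect pooled estimate
    v_random = 1.0 / np.sum(1.0 / (variances + tau2))      # variance under the random-effects model
    d2 = (v_random - v_fixed) / v_random                   # relative variance reduction
    i2 = max(0.0, (Q - df) / Q)
    return d2, i2, tau2

effects = np.array([0.10, 0.35, -0.05, 0.40, 0.20])        # illustrative trial estimates
variances = np.array([0.02, 0.05, 0.03, 0.04, 0.02])
d2, i2, tau2 = diversity_d2(effects, variances)
print(f"D2={d2:.2f}  I2={i2:.2f}  adjustment factor = {1/(1-d2):.2f}")
```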

  15. Development of steady-state model for MSPT and detailed analyses of receiver

    Science.gov (United States)

    Yuasa, Minoru; Sonoda, Masanori; Hino, Koichi

    2016-05-01

    Molten salt parabolic trough system (MSPT) uses molten salt as heat transfer fluid (HTF) instead of synthetic oil. The demonstration plant of MSPT was constructed by Chiyoda Corporation and Archimede Solar Energy in Italy in 2013. Chiyoda Corporation developed a steady-state model for predicting the theoretical behavior of the demonstration plant. The model was designed to calculate the concentrated solar power and heat loss using ray tracing of incident solar light and finite element modeling of thermal energy transferred into the medium. This report describes the verification of the model using test data on the demonstration plant, detailed analyses on the relation between flow rate and temperature difference on the metal tube of receiver and the effect of defocus angle on concentrated power rate, for solar collector assembly (SCA) development. The model is accurate to an extent of 2.0% as systematic error and 4.2% as random error. The relationships between flow rate and temperature difference on metal tube and the effect of defocus angle on concentrated power rate are shown.

  16. The development of a competency-based group health teaching performance examination model for BSN graduates.

    Science.gov (United States)

    Tai, Chun-Yi; Chung, Ue-Lin

    2008-12-01

    Under the current nursing education system in Taiwan, a fair and objective evaluation of group health teaching competency has been lacking for a long time. Therefore, the purpose of this study was to establish a competency-based group health teaching performance examination model for baccalaureate graduates. Action research was the main research methodology used in this study. The research consisted of two phases. In the first phase, a development committee was established. Based on routine discussions, literature reviews and realistic cases, a draft examination model with quasi-clinical situation model content and procedure was developed. Examination Facility Preparations, Simulated Scenarios and Client Recruitments, Examination Result Evaluation (evaluated by teachers) and Learning Guidelines were also prepared. This draft was reviewed twice for expert opinion, a pilot test was done and both the draft and pilot testing were reviewed again before the draft was finalized. The second phase involved refining the examination model by actually practicing the completed draft examination model in a simulated group-teaching setting in order to examine the model's reliability and validity. Fifteen people were involved in this experiment: three nursing personnel each having at least two years' clinical and teaching experience; three nursing students who did not have actual clinical experience and had not taken the course of teaching principles; three senior teachers; and six virtual patients. The responses from the nursing personnel, nursing students, teachers, and virtual patients who participated in the testing were gathered and integrated to refine the model. The model has content, expert and discriminative validity. The reliability of the model was proven by the high consistency in administration and scoring of the model by clinical examiners. This examination model is not only applicable for the proof of students' credit point exemption, but also as an alternative

  17. A STRONGLY COUPLED REACTOR CORE ISOLATION COOLING SYSTEM MODEL FOR EXTENDED STATION BLACK-OUT ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua [Idaho National Laboratory]; Zhang, Hongbin [Idaho National Laboratory]; Zou, Ling [Idaho National Laboratory]; Martineau, Richard Charles [Idaho National Laboratory]

    2015-03-01

    The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is from the battery to maintain the logic circuits to control the opening and/or closure of valves in the RCIC systems in order to control the RPV water level by shutting down the RCIC pump to avoid overfilling the RPV and flooding the steam line to the RCIC turbine. It is generally considered in almost all the existing station black-out accidents (SBO) analyses that loss of the DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, where it is assumed that the turbine would then be disabled. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models for RCIC system components are needed to understand the extended SBO for BWRs. As part of the effort to develop the next generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike the traditional SBO simulations where mass flow rates are typically given in the input file through time dependent functions, the real mass flow rates through the turbine and the pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operation curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components for the primary system of a BWR, as well as the safety

  18. Evaluation of hydrological models for scenario analyses: signal-to-noise-ratio between scenario effects and model uncertainty

    Directory of Open Access Journals (Sweden)

    H. Bormann

    2005-01-01

    Many model applications suffer from the fact that, although it is well known that model application involves different sources of uncertainty, there is no objective criterion for deciding whether a model is suitable for a particular application. This paper introduces a comparative index between the uncertainty of a model and the change effects of scenario calculations, which enables the modeller to decide objectively whether a model is suitable for scenario analysis studies. The index is called "signal-to-noise-ratio", and it is applied to an exemplary scenario study performed within the GLOWA-IMPETUS project in Benin. The conceptual UHP model was applied to the upper Ouémé basin. Although model calibration and validation were successful, uncertainties in model parameters and input data could be identified. Applying the "signal-to-noise-ratio" to regional-scale subcatchments of the upper Ouémé, comparing water-availability indicators between uncertainty studies and scenario analyses, the UHP model turned out to be suitable for predicting long-term water balances under the present poor data availability and changing environmental conditions in subhumid West Africa.
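
    One minimal reading of the index, with illustrative numbers: the scenario-induced change in a water-availability indicator (the signal) is divided by the spread of that indicator across plausible model or parameter variants (the noise). The exact definition and the indicators used in the paper may differ.

```python
# Hedged sketch of a "signal-to-noise-ratio" between scenario effect and model uncertainty.
import numpy as np

def signal_to_noise(baseline_runs, scenario_runs):
    """Both inputs: an indicator (e.g. annual discharge) from an ensemble of model variants."""
    signal = abs(np.mean(scenario_runs) - np.mean(baseline_runs))   # scenario effect
    noise = np.std(baseline_runs, ddof=1)                           # model/parameter uncertainty
    return signal / noise

baseline = np.array([410., 395., 420., 405., 400.])   # illustrative baseline ensemble (mm/yr)
scenario = np.array([355., 340., 370., 350., 345.])   # illustrative scenario ensemble (mm/yr)
snr = signal_to_noise(baseline, scenario)
print(f"SNR = {snr:.1f} -> scenario effect {'exceeds' if snr > 1 else 'is within'} model uncertainty")
```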

  19. Hierarchical linear modeling analyses of the NEO-PI-R scales in the Baltimore Longitudinal Study of Aging.

    Science.gov (United States)

    Terracciano, Antonio; McCrae, Robert R; Brant, Larry J; Costa, Paul T

    2005-09-01

    The authors examined age trends in the 5 factors and 30 facets assessed by the Revised NEO Personality Inventory in Baltimore Longitudinal Study of Aging data (N=1,944; 5,027 assessments) collected between 1989 and 2004. Consistent with cross-sectional results, hierarchical linear modeling analyses showed gradual personality changes in adulthood: a decline in Neuroticism up to age 80, stability and then decline in Extraversion, decline in Openness, increase in Agreeableness, and increase in Conscientiousness up to age 70. Some facets showed different curves from the factor they define. Birth cohort effects were modest, and there were no consistent Gender x Age interactions. Significant nonnormative changes were found for all 5 factors; they were not explained by attrition but might be due to genetic factors, disease, or life experience. Copyright (c) 2005 APA, all rights reserved.

  20. Hierarchical Linear Modeling Analyses of NEO-PI-R Scales In the Baltimore Longitudinal Study of Aging

    Science.gov (United States)

    Terracciano, Antonio; McCrae, Robert R.; Brant, Larry J.; Costa, Paul T.

    2009-01-01

    We examined age trends in the five factors and 30 facets assessed by the Revised NEO Personality Inventory in Baltimore Longitudinal Study of Aging data (N = 1,944; 5,027 assessments) collected between 1989 and 2004. Consistent with cross-sectional results, Hierarchical Linear Modeling analyses showed gradual personality changes in adulthood: a decline up to age 80 in Neuroticism, stability and then decline in Extraversion, decline in Openness, increase in Agreeableness, and increase up to age 70 in Conscientiousness. Some facets showed different curves from the factor they define. Birth cohort effects were modest, and there were no consistent Gender × Age interactions. Significant non-normative changes were found for all five factors; they were not explained by attrition but might be due to genetic factors, disease, or life experience. PMID:16248708
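
    For readers unfamiliar with the method, the sketch below fits a comparable growth-curve (hierarchical linear) model to simulated repeated personality scores, with fixed linear and quadratic age terms and a random intercept per person. The data, scaling and model terms are simplified assumptions; the published analyses include additional cohort, gender and random-effect terms.

```python
# Hedged sketch: a mixed (hierarchical linear) model of repeated T-scores nested
# within persons, with linear and quadratic age effects. Data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n, visits = 300, 4
pid = np.repeat(np.arange(n), visits)
age = rng.uniform(30, 90, n * visits)
person = np.repeat(rng.normal(0, 5, n), visits)      # person-specific intercepts
agec = (age - 60) / 10                               # centred, decade-scaled age
neuro = 50 - 1.5 * agec + 0.4 * agec**2 + person + rng.normal(0, 3, n * visits)
df = pd.DataFrame({"pid": pid, "agec": agec, "neuro": neuro})

m = smf.mixedlm("neuro ~ agec + I(agec**2)", df, groups=df["pid"]).fit()
print(m.summary())
```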

  1. A model intercomparison analysing the link between column ozone and geopotential height anomalies in January

    Directory of Open Access Journals (Sweden)

    P. Braesicke

    2008-05-01

    A statistical framework to evaluate the performance of chemistry-climate models with respect to the interaction between meteorology and column ozone during northern hemisphere mid-winter, in particular January, is used. Different statistical diagnostics from four chemistry-climate models (E39C, ME4C, UMUCAM, ULAQ) are compared with the ERA-40 re-analysis. First, we analyse vertical coherence in geopotential height anomalies as described by linear correlations between two different pressure levels (30 and 200 hPa) of the atmosphere. In addition, linear correlations between column ozone and geopotential height anomalies at 200 hPa are discussed to motivate a simple picture of the meteorological impacts on column ozone on interannual timescales. Secondly, we discuss characteristic spatial structures in geopotential height and column ozone anomalies as given by their first two empirical orthogonal functions. Finally, we describe the covariance patterns between reconstructed anomalies of geopotential height and column ozone. In general we find good agreement between the models with higher horizontal resolution (E39C, ME4C, UMUCAM) and ERA-40. The Pacific-North American (PNA) pattern emerges as a useful qualitative benchmark for the model performance. Models with higher horizontal resolution and high upper boundary (ME4C and UMUCAM) show good agreement with the PNA tripole derived from ERA-40 data, including the column ozone modulation over the Pacific sector. The model with lowest horizontal resolution does not show a classic PNA pattern (ULAQ), and the model with the lowest upper boundary (E39C) does not capture the PNA related column ozone variations over the Pacific sector. Those discrepancies have to be taken into account when providing confidence intervals for climate change integrations.
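
    The empirical orthogonal functions mentioned above can be obtained from a singular value decomposition of the (time x space) anomaly matrix. The sketch below does this for a random stand-in field, so the patterns themselves are meaningless, but the mechanics (removing the climatology, taking the SVD, reading off explained variance and principal-component time series) are the standard ones.

```python
# Hedged sketch of an EOF analysis via SVD on a synthetic stand-in for
# January 200-hPa geopotential height anomalies.
import numpy as np

rng = np.random.default_rng(5)
n_years, n_lat, n_lon = 40, 20, 40
field = rng.normal(size=(n_years, n_lat * n_lon))       # one anomaly map per year (flattened)

anom = field - field.mean(axis=0)                        # remove the climatological mean
U, s, Vt = np.linalg.svd(anom, full_matrices=False)
explained = s**2 / np.sum(s**2)

eof1 = Vt[0].reshape(n_lat, n_lon)                       # leading spatial pattern
pc1 = anom @ Vt[0]                                       # its principal-component time series
print("variance explained by the first two EOFs:", explained[:2])
```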

  2. Multivariate genetic analyses of the 2D:4D ratio: examining the effects of hand and measurement technique in data from 757 twin families.

    Science.gov (United States)

    Medland, Sarah E; Loehlin, John C

    2008-06-01

    The ratio of the lengths of the second to fourth digits of the hand (2D:4D) is a sexually dimorphic trait that has been proposed as a measure of prenatal testosterone exposure and a putative correlate of a variety of later behavioral and physiological outcomes including personality, fitness and sexual orientation. We present analyses of 2D:4D ratios collected from twins (1413 individuals) and their nontwin siblings (328 individuals) from 757 families. In this sample 2D:4D was measured from photocopies using digital calipers, and for a subset of participants, computer-aided measurement. Multivariate modeling of the left- and right-hand measurements revealed significant genetic and environmental covariation between hands. The two methods yielded very similar results, and the majority of variance was explained by factors shared by both measurement methods. Neither common environmental nor dominant genetic effects were found, and the covariation between siblings could be accounted for by additive genetic effects accounting for 80% and 71% of the variance for the left and right hands, respectively. There was no evidence of sex differences in the total variance, nor in the magnitude or source of genetic and environmental influences, suggesting that X-linked effects (such as the previously identified association with the Androgen receptor) are likely to be small. However, there were also nonshared environmental effects specific to each hand, which, in addition to measurement error, may in part explain why some studies in the literature find effects for the 2D:4D ratio of one hand but not the other.
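
    As a back-of-envelope companion to the full multivariate model, the sketch below applies the classical twin logic (Falconer's approximation, assuming no shared-environment or dominance effects) to hypothetical MZ and DZ twin-pair correlations. The numbers are illustrative; the paper's estimates come from maximum-likelihood model fitting, not from this shortcut.

```python
# Hedged back-of-envelope ACE decomposition from twin correlations (Falconer's formulas).
r_mz, r_dz = 0.78, 0.40          # hypothetical 2D:4D correlations for MZ and DZ pairs
a2 = 2 * (r_mz - r_dz)           # additive genetic variance share
c2 = max(0.0, 2 * r_dz - r_mz)   # shared-environment share
e2 = 1 - r_mz                    # nonshared environment + measurement error
print(f"A={a2:.2f}  C={c2:.2f}  E={e2:.2f}")
```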

  3. PASMet: a web-based platform for prediction, modelling and analyses of metabolic systems.

    Science.gov (United States)

    Sriyudthsak, Kansuporn; Mejia, Ramon Francisco; Arita, Masanori; Hirai, Masami Yokota

    2016-07-01

    PASMet (Prediction, Analysis and Simulation of Metabolic networks) is a web-based platform for proposing and verifying mathematical models to understand the dynamics of metabolism. The advantages of PASMet include user-friendliness and accessibility, which enable biologists and biochemists to easily perform mathematical modelling. PASMet offers a series of user-functions to handle the time-series data of metabolite concentrations. The functions are organised into four steps: (i) Prediction of a probable metabolic pathway and its regulation; (ii) Construction of mathematical models; (iii) Simulation of metabolic behaviours; and (iv) Analysis of metabolic system characteristics. Each function contains various statistical and mathematical methods that can be used independently. Users who may not have enough knowledge of computing or programming can easily and quickly analyse their local data without software downloads, updates or installations. Users only need to upload their files in comma-separated values (CSV) format or enter their model equations directly into the website. Once the time-series data or mathematical equations are uploaded, PASMet automatically performs computation on server-side. Then, users can interactively view their results and directly download them to their local computers. PASMet is freely available with no login requirement at http://pasmet.riken.jp/ from major web browsers on Windows, Mac and Linux operating systems.

  4. Kinetic analyses and mathematical modeling of primary photochemical and photoelectrochemical processes in plant photosystems.

    Science.gov (United States)

    Vredenberg, Wim

    2011-02-01

    In this paper the model and simulation of primary photochemical and photo-electrochemical reactions in dark-adapted intact plant leaves is presented. A descriptive algorithm has been derived from analyses of variable chlorophyll a fluorescence and P700 oxidation kinetics upon excitation with multi-turnover pulses (MTFs) of variable intensity and duration. These analyses have led to definition and formulation of rate equations that describe the sequence of primary linear electron transfer (LET) steps in photosystem II (PSII) and of cyclic electron transport (CET) in PSI. The model considers heterogeneity in PSII reaction centers (RCs) associated with the S-states of the OEC and incorporates in a dark-adapted state the presence of a 15-35% fraction of Q(B)-nonreducing RCs that probably is identical with the S₀ fraction. The fluorescence induction algorithm (FIA) in the 10 μs-1s excitation time range considers a photochemical O-J-D, a photo-electrochemical J-I and an I-P phase reflecting the response of the variable fluorescence to the electric trans-thylakoid potential generated by the proton pump fuelled by CET in PSI. The photochemical phase incorporates the kinetics associated with the double reduction of the acceptor pair of pheophytin (Phe) and plastoquinone Q(A) [PheQ(A)] in Q(B) nonreducing RCs and the associated doubling of the variable fluorescence, in agreement with the three-state trapping model (TSTM) of PS II. The decline in fluorescence emission during the so called SMT in the 1-100s excitation time range, known as the Kautsky curve, is shown to be associated with a substantial decrease of CET-powered proton efflux from the stroma into the chloroplast lumen through the ATPsynthase of the photosynthetic machinery.

  5. 3D Recording for 2D Delivering - The Employment of 3D Models for Studies and Analyses

    Science.gov (United States)

    Rizzi, A.; Baratti, G.; Jiménez, B.; Girardi, S.; Remondino, F.

    2011-09-01

    In recent years, thanks to advances in surveying sensors and techniques, many heritage sites have been accurately replicated in digital form with very detailed and impressive results. The actual limits are mainly related to hardware capabilities, computation time and the low performance of personal computers. Often, the produced models cannot be viewed on a normal computer, and the only solution for easily visualizing them is offline, using rendered videos. This kind of 3D representation is useful for digital conservation, dissemination purposes or virtual tourism, where people can visit places otherwise closed for preservation or security reasons. But many more potentialities and possible applications are available using a 3D model. The problem is the ability to handle 3D data, as without adequate knowledge this information is reduced to standard 2D data. This article presents some surveying and 3D modeling experiences within the APSAT project ("Ambiente e Paesaggi dei Siti d'Altura Trentini", i.e. Environment and Landscapes of Upland Sites in Trentino). APSAT is a multidisciplinary project funded by the Autonomous Province of Trento (Italy) with the aim of documenting, surveying, studying, analysing and preserving mountainous and hill-top heritage sites located in the region. The project focuses on theoretical, methodological and technological aspects of the archaeological investigation of mountain landscape, considered as the product of sequences of settlements, parcelling-outs, communication networks, resources, and symbolic places. The mountain environment preserves better than others the traces of hunting and gathering, breeding, agricultural, metallurgical and symbolic activities characterised by different durations and environmental impacts, from Prehistory to the Modern Period. Therefore the correct surveying and documentation of these heritage sites and materials is very important. Within the project, the 3DOM unit of FBK is delivering all the surveying and 3D material to

  6. Normalisation genes for expression analyses in the brown alga model Ectocarpus siliculosus

    Directory of Open Access Journals (Sweden)

    Rousvoal Sylvie

    2008-08-01

    Background Brown algae are plant multi-cellular organisms occupying most of the world coasts and are essential actors in the constitution of ecological niches at the shoreline. Ectocarpus siliculosus is an emerging model for brown algal research. Its genome has been sequenced, and several tools are being developed to perform analyses at different levels of cell organization, including transcriptomic expression analyses. Several topics, including physiological responses to osmotic stress and to exposure to contaminants and solvents are being studied in order to better understand the adaptive capacity of brown algae to pollution and environmental changes. A series of genes that can be used to normalise expression analyses is required for these studies. Results We monitored the expression of 13 genes under 21 different culture conditions. These included genes encoding proteins and factors involved in protein translation (ribosomal protein 26S, EF1alpha, IF2A, IF4E) and protein degradation (ubiquitin, ubiquitin conjugating enzyme) or folding (cyclophilin), and proteins involved in both the structure of the cytoskeleton (tubulin alpha, actin, actin-related proteins) and its trafficking function (dynein), as well as a protein implicated in carbon metabolism (glucose 6-phosphate dehydrogenase). The stability of their expression level was assessed using the Ct range, and by applying both the geNorm and the Normfinder principles of calculation. Conclusion Comparisons of the data obtained with the three methods of calculation indicated that EF1alpha (EF1a) was the best reference gene for normalisation. The normalisation factor should be calculated with at least two genes, alpha tubulin, ubiquitin-conjugating enzyme or actin-related proteins being good partners of EF1a. Our results exclude actin as a good normalisation gene, and, in this, are in agreement with previous studies in other organisms.
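
    The sketch below illustrates the geNorm-style stability measure underlying such comparisons: for each candidate reference gene, M is the average standard deviation of its pairwise log2 expression ratios with the other candidates across samples, and a lower M means more stable expression. The expression values and gene labels are synthetic placeholders, not the E. siliculosus data.

```python
# Hedged sketch of a geNorm-style gene-stability (M) calculation on synthetic data.
import numpy as np

def genorm_m(expr):
    """expr: (n_samples, n_genes) array of relative expression quantities."""
    log_expr = np.log2(expr)
    n_genes = expr.shape[1]
    M = np.zeros(n_genes)
    for j in range(n_genes):
        sds = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
               for k in range(n_genes) if k != j]
        M[j] = np.mean(sds)          # mean pairwise variation with all other candidates
    return M

rng = np.random.default_rng(6)
genes = ["EF1a", "tubulin_a", "ubiquitin_CE", "actin"]            # placeholder labels
expr = rng.lognormal(mean=0.0, sigma=[0.1, 0.15, 0.2, 0.5], size=(21, 4))
print(dict(zip(genes, np.round(genorm_m(expr), 3))))              # lower M = more stable
```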

  7. DESCRIPTION OF MODELING ANALYSES IN SUPPORT OF THE 200-ZP-1 REMEDIAL DESIGN/REMEDIAL ACTION

    Energy Technology Data Exchange (ETDEWEB)

    VONGARGEN BH

    2009-11-03

    The Feasibility Study for the 200-ZP-1 Groundwater Operable Unit (DOE/RL-2007-28) and the Proposed Plan for Remediation of the 200-ZP-1 Groundwater Operable Unit (DOE/RL-2007-33) describe the use of groundwater pump-and-treat technology for the 200-ZP-1 Groundwater Operable Unit (OU) as part of an expanded groundwater remedy. During fiscal year 2008 (FY08), a groundwater flow and contaminant transport (flow and transport) model was developed to support remedy design decisions at the 200-ZP-1 OU. This model was developed because the size and influence of the proposed 200-ZP-1 groundwater pump-and-treat remedy will have a larger areal extent than the current interim remedy, and modeling is required to provide estimates of influent concentrations and contaminant mass removal rates to support the design of the aboveground treatment train. The 200 West Area Pre-Conceptual Design for Final Extraction/Injection Well Network: Modeling Analyses (DOE/RL-2008-56) documents the development of the first version of the MODFLOW/MT3DMS model of the Hanford Site's Central Plateau, as well as the initial application of that model to simulate a potential well field for the 200-ZP-1 remedy (considering only the contaminants carbon tetrachloride and technetium-99). This document focuses on the use of the flow and transport model to identify suitable extraction and injection well locations as part of the 200 West Area 200-ZP-1 Pump-and-Treat Remedial Design/Remedial Action Work Plan (DOE/RL-2008-78). Currently, the model has been developed to the extent necessary to provide approximate results and to lay a foundation for the design basis concentrations that are required in support of the remedial design/remedial action (RD/RA) work plan. The discussion in this document includes the following: (1) Assignment of flow and transport parameters for the model; (2) Definition of initial conditions for the transport model for each simulated contaminant of concern (COC) (i.e., carbon

  8. Assessing the hydrodynamic boundary conditions for risk analyses in coastal areas: a stochastic storm surge model

    Directory of Open Access Journals (Sweden)

    T. Wahl

    2011-11-01

    This paper describes a methodology to stochastically simulate a large number of storm surge scenarios (here: 10 million). The applied model is very cheap in computation time and will contribute to improving the overall results from integrated risk analyses in coastal areas. Initially, the observed storm surge events from the tide gauges of Cuxhaven (located in the Elbe estuary) and Hörnum (located in the southeast of Sylt Island) are parameterised by taking into account 25 parameters (19 sea level parameters and 6 time parameters). Throughout the paper, the total water levels are considered. The astronomical tides are semidiurnal in the investigation area with a tidal range >2 m. The second step of the stochastic simulation consists in fitting parametric distribution functions to the data sets resulting from the parameterisation. The distribution functions are then used to run Monte-Carlo-Simulations. Based on the simulation results, a large number of storm surge scenarios are reconstructed. Parameter interdependencies are considered and different filter functions are applied to avoid inconsistencies. Storm surge scenarios, which are of interest for risk analyses, can easily be extracted from the results.
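
    A heavily stripped-down version of this workflow is sketched below: observed events are reduced to a single parameter (the peak total water level), a parametric distribution is fitted to it, and a large Monte Carlo sample of synthetic peaks is drawn. The GEV distribution, the synthetic "observations" and the exceedance threshold are assumptions for illustration; the actual model uses 25 parameters plus interdependencies and filter functions.

```python
# Hedged sketch: fit a distribution to peak storm-surge levels and Monte-Carlo
# sample a large number of synthetic events.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
observed_peaks = rng.gumbel(loc=750, scale=60, size=120)     # stand-in for gauge data (cm)

shape, loc, scale = stats.genextreme.fit(observed_peaks)     # fit a GEV to the peak levels
simulated_peaks = stats.genextreme.rvs(shape, loc, scale,
                                       size=1_000_000, random_state=42)

# e.g. empirical exceedance probability of a 900 cm total water level
print(np.mean(simulated_peaks > 900))
```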

  9. Models for regionalizing economic data and their applications within the scope of forensic disaster analyses

    Science.gov (United States)

    Schmidt, Hanns-Maximilian; Wiens, Marcus, Dr. rer. pol.; Schultmann, Frank, Prof. Dr. rer. pol.

    2015-04-01

    The impact of natural hazards on the economic system can be observed in many different regions all over the world. Once the local economic structure is hit by an event direct costs instantly occur. However, the disturbance on a local level (e.g. parts of city or industries along a river bank) might also cause monetary damages in other, indirectly affected sectors. If the impact of an event is strong, these damages are likely to cascade and spread even on an international scale (e.g. the eruption of Eyjafjallajökull and its impact on the automotive sector in Europe). In order to determine these special impacts, one has to gain insights into the directly hit economic structure before being able to calculate these side effects. Especially, regarding the development of a model used for near real-time forensic disaster analyses any simulation needs to be based on data that is rapidly available or easily to be computed. Therefore, we investigated commonly used or recently discussed methodologies for regionalizing economic data. Surprisingly, even for German federal states there is no official input-output data available that can be used, although it might provide detailed figures concerning economic interrelations between different industry sectors. In the case of highly developed countries, such as Germany, we focus on models for regionalizing nationwide input-output table which is usually available at the national statistical offices. However, when it comes to developing countries (e.g. South-East Asia) the data quality and availability is usually much poorer. In this case, other sources need to be found for the proper assessment of regional economic performance. We developed an indicator-based model that can fill this gap because of its flexibility regarding the level of aggregation and the composability of different input parameters. Our poster presentation brings up a literature review and a summary on potential models that seem to be useful for this specific task

  10. Modifications in the AA5083 Johnson-Cook Material Model for Use in Friction Stir Welding Computational Analyses

    Science.gov (United States)

    2011-12-30

    Report by M. Grujicic, B. Pandurangan, C.-F. Yen and B. A. Cheeseman (Clemson University); keywords: AA5083, friction stir welding, Johnson-Cook material model. Abstract: The Johnson-Cook strength material model is frequently used in finite-element
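
    For reference, the Johnson-Cook flow-stress relation referred to above has the generic form sigma = (A + B*eps^n)(1 + C*ln(epsdot/epsdot0))(1 - T*^m), with T* the homologous temperature. The sketch below evaluates it with placeholder constants of roughly the right magnitude for an aluminium alloy; these are not the calibrated or modified AA5083 parameters from the report.

```python
# Hedged sketch of the generic Johnson-Cook flow-stress relation with placeholder constants.
import numpy as np

def johnson_cook_stress(eps_p, epsdot, T, *, A, B, n, C, m,
                        epsdot0=1.0, T_ref=293.0, T_melt=893.0):
    """Flow stress (same units as A, B) for plastic strain, strain rate and temperature."""
    T_star = np.clip((T - T_ref) / (T_melt - T_ref), 0.0, 1.0)   # homologous temperature
    return (A + B * eps_p**n) * (1.0 + C * np.log(epsdot / epsdot0)) * (1.0 - T_star**m)

# Placeholder constants (MPa) of plausible order of magnitude for an aluminium alloy.
print(johnson_cook_stress(0.1, 1e2, 450.0, A=170.0, B=400.0, n=0.4, C=0.01, m=1.0))
```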

  11. A model for analysing factors which may influence quality management procedures in higher education

    Directory of Open Access Journals (Sweden)

    Cătălin MAICAN

    2015-12-01

    In all universities, the Office for Quality Assurance defines the procedure for assessing the performance of the teaching staff, with a view to establishing students’ perception as regards the teachers’ activity from the point of view of the quality of the teaching process, of the relationship with the students and of the assistance provided for learning. The present paper aims at creating a combined model for evaluation, based on Data Mining statistical methods: starting from the findings revealed by the evaluations teachers performed of students, using cluster analysis and discriminant analysis, we identified the subjects which produced significant differences between students’ grades, subjects which were subsequently subjected to an evaluation by students. The results of these analyses allowed the formulation of certain measures for enhancing the quality of the evaluation process.

  12. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    Directory of Open Access Journals (Sweden)

    Sung-Chien Lin

    2014-07-01

    In this study, we used the approach of topic modeling to uncover the possible structure of research topics in the field of Informetrics, to explore the distribution of the topics over years, and to compare the core journals. In order to infer the structure of the topics in the field, the data of the papers published in the Journal of Informetrics and Scientometrics during 2007 to 2013 were retrieved from the database of the Web of Science as input for the topic modeling approach. The results of this study show that when the number of topics was set to 10, the topic model had the smallest perplexity. Although the data scope and analysis methods are different from previous studies, the topics generated in this study are consistent with the results produced by experts' analyses. Empirical case studies and measurements of bibliometric indicators were considered important in every year of the analytic period, and the field was increasing in stability. Both core journals broadly paid attention to all of the topics in the field of Informetrics. The Journal of Informetrics put particular emphasis on the construction and applications of bibliometric indicators, and Scientometrics focused on the evaluation and the factors of productivity of countries, institutions, domains, and journals.
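
    As a minimal illustration of the topic-modelling step (choosing the number of topics by perplexity), the sketch below fits scikit-learn LDA models for several topic counts on a toy corpus. The documents are placeholders rather than the retrieved abstracts, and the study's own implementation and perplexity evaluation may differ.

```python
# Hedged sketch: fit LDA models for several topic counts and compare perplexities.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "citation analysis of journal impact indicators",
    "h-index and productivity of countries and institutions",
    "co-authorship networks and collaboration patterns in science",
] * 20   # repeated so the toy corpus is large enough to fit

X = CountVectorizer(stop_words="english").fit_transform(docs)
for k in (2, 5, 10):
    lda = LatentDirichletAllocation(n_components=k, random_state=0).fit(X)
    print(k, round(lda.perplexity(X), 1))   # in practice, evaluate on a held-out split
```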

  13. Optimization of extraction procedures for ecotoxicity analyses: Use of TNT contaminated soil as a model

    Energy Technology Data Exchange (ETDEWEB)

    Sunahara, G.I.; Renoux, A.Y.; Dodard, S.; Paquet, L.; Hawari, J. [BRI, Montreal, Quebec (Canada)]; Ampleman, G.; Lavigne, J.; Thiboutot, S. [DREV, Courcelette, Quebec (Canada)]

    1995-12-31

    The environmental impact of energetic substances (TNT, RDX, GAP, NC) in soil is being examined using ecotoxicity bioassays. An extraction method was characterized to optimize bioassay assessment of TNT toxicity in different soil types. Using the Microtox™ (Photobacterium phosphoreum) assay and non-extracted samples, TNT was most acutely toxic (IC50 = 1-9 ppm) followed by RDX and GAP; NC did not show obvious toxicity (probably due to solubility limitations). TNT (in 0.25% DMSO) yielded an IC50 of 0.98 ± 0.10 (SD) ppm. The 96h-EC50 (Selenastrum capricornutum growth inhibition) of TNT (1.1 ppm) was higher than GAP and RDX; NC was not apparently toxic (probably due to solubility limitations). Soil samples (sand or a silt-sand mix) were spiked with either 2,000 or 20,000 mg TNT/kg soil, and were adjusted to 20% moisture. Samples were later mixed with acetonitrile, sonicated, and then treated with CaCl2 before filtration, HPLC and ecotoxicity analyses. Results indicated that: the recovery of TNT from soil (97.51% ± 2.78) was independent of the type of soil or moisture content; CaCl2 interfered with TNT toxicity and acetonitrile extracts could not be used directly for algal testing. When TNT extracts were diluted to fixed concentrations, similar TNT-induced ecotoxicities were generally observed and suggested that, apart from the expected effects of TNT concentrations in the soil, the soil texture and the moisture effects were minimal. The extraction procedure permits HPLC analyses as well as ecotoxicity testing and minimizes secondary soil matrix effects. Studies will be conducted to study the toxic effects of other energetic substances present in soil using this approach.

  14. Testing a dual-systems model of adolescent brain development using resting-state connectivity analyses.

    Science.gov (United States)

    van Duijvenvoorde, A C K; Achterberg, M; Braams, B R; Peters, S; Crone, E A

    2016-01-01

    The current study aimed to test a dual-systems model of adolescent brain development by studying changes in intrinsic functional connectivity within and across networks typically associated with cognitive-control and affective-motivational processes. To this end, resting-state and task-related fMRI data were collected of 269 participants (ages 8-25). Resting-state analyses focused on seeds derived from task-related neural activation in the same participants: the dorsal lateral prefrontal cortex (dlPFC) from a cognitive rule-learning paradigm and the nucleus accumbens (NAcc) from a reward-paradigm. Whole-brain seed-based resting-state analyses showed an age-related increase in dlPFC connectivity with the caudate and thalamus, and an age-related decrease in connectivity with the (pre)motor cortex. nAcc connectivity showed a strengthening of connectivity with the dorsal anterior cingulate cortex (ACC) and subcortical structures such as the hippocampus, and a specific age-related decrease in connectivity with the ventral medial PFC (vmPFC). Behavioral measures from both functional paradigms correlated with resting-state connectivity strength with their respective seed. That is, age-related change in learning performance was mediated by connectivity between the dlPFC and thalamus, and age-related change in winning pleasure was mediated by connectivity between the nAcc and vmPFC. These patterns indicate (i) strengthening of connectivity between regions that support control and learning, (ii) more independent functioning of regions that support motor and control networks, and (iii) more independent functioning of regions that support motivation and valuation networks with age. These results are interpreted vis-à-vis a dual-systems model of adolescent brain development.

  15. Comparative modeling analyses of Cs-137 fate in the rivers impacted by Chernobyl and Fukushima accidents

    Energy Technology Data Exchange (ETDEWEB)

    Zheleznyak, M.; Kivva, S. [Institute of Environmental Radioactivity, Fukushima University (Japan)]

    2014-07-01

    The consequences of the two largest nuclear accidents of the last decades - at Chernobyl Nuclear Power Plant (ChNPP) (1986) and at Fukushima Daiichi NPP (FDNPP) (2011) - clearly demonstrated that radioactive contamination of water bodies in the vicinity of the NPPs and on the waterways from them, e.g., river-reservoir water after the Chernobyl accident and rivers and coastal marine waters after the Fukushima accident, has in both cases been one of the main sources of public concern about the accident consequences. The higher weight given to water contamination in public perception of the accident consequences, in comparison with the real fraction of doses received via aquatic pathways relative to other dose components, is a peculiarity of public perception of environmental contamination. This psychological phenomenon, confirmed after both accidents, provides supplementary arguments that reliable simulation and prediction of radionuclide dynamics in water and sediments is an important part of post-accident radioecological research. The purpose of the research is to use the experience of the modeling activities conducted over more than 25 years within the Chernobyl-affected Pripyat River and Dnieper River watershed, together with data from new monitoring studies in Japan of the Abukuma River (the largest in the region, with a watershed area of 5400 km²), Kuchibuto River, Uta River, Niita River, Natsui River and Same River, as well as studies on the specifics of 'water-sediment' 137Cs exchanges in this area, to refine the 1-D model RIVTOX and the 2-D model COASTOX and increase the predictive power of the modeling technologies. The results of the modeling studies are applied for more accurate prediction of water/sediment radionuclide contamination of rivers and reservoirs in Fukushima Prefecture and for comparative analyses of the efficiency of post-accident measures to diminish the contamination of the water bodies.

  16. Development of microbial-enzyme-mediated decomposition model parameters through steady-state and dynamic analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Gangsheng [ORNL]; Post, Wilfred M [ORNL]; Mayes, Melanie [ORNL]

    2013-01-01

    We developed a Microbial-ENzyme-mediated Decomposition (MEND) model, based on Michaelis-Menten kinetics, that describes the dynamics of physically defined pools of soil organic matter (SOC). These include particulate, mineral-associated, dissolved organic matter (POC, MOC, and DOC, respectively), microbial biomass, and associated exoenzymes. The ranges and/or distributions of parameters were determined by both analytical steady-state and dynamic analyses with SOC data from the literature. We used an improved multi-objective parameter sensitivity analysis (MOPSA) to identify the most important parameters for the full model: maintenance of microbial biomass, turnover and synthesis of enzymes, and carbon use efficiency (CUE). The model predicted that an increase of 2 °C (baseline temperature = 12 °C) caused the pools of POC-Cellulose, MOC, and total SOC to increase with dynamic CUE and decrease with constant CUE, as indicated by the 50% confidence intervals. Regardless of dynamic or constant CUE, the pool sizes of POC, MOC, and total SOC varied from -8% to 8% under +2 °C. The scenario analysis using a single parameter set indicates that higher temperature with dynamic CUE might result in greater net increases in both POC-Cellulose and MOC pools. Different dynamics of various SOC pools reflected the catalytic functions of specific enzymes targeting specific substrates and the interactions between microbes, enzymes, and SOC. With the feasible parameter values estimated in this study, models incorporating fundamental principles of microbial-enzyme dynamics can lead to simulation results qualitatively different from traditional models with fast/slow/passive pools.
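
    A heavily reduced sketch of the kind of enzyme-mediated Michaelis-Menten dynamics the MEND model formalises is given below, with one substrate pool, a microbial biomass pool and an enzyme pool. The pool structure, parameter values and units are illustrative assumptions, not the calibrated MEND model.

```python
# Hedged sketch: a toy enzyme-mediated decomposition system with Michaelis-Menten uptake.
import numpy as np
from scipy.integrate import solve_ivp

def decomposition(t, y, Vmax=1.0, Km=250.0, CUE=0.4, k_enz=0.05, k_death=0.02):
    P, B, E = y                                # substrate C, microbial biomass C, enzyme C
    decomp = Vmax * E * P / (Km + P)           # Michaelis-Menten, enzyme-mediated decomposition
    dP = -decomp + k_death * B                 # dead microbes return to the substrate pool
    dB = CUE * decomp - k_death * B - k_enz * B    # growth minus death and enzyme production
    dE = k_enz * B - 0.1 * E                   # enzyme production and turnover
    return [dP, dB, dE]

sol = solve_ivp(decomposition, (0.0, 1000.0), [1000.0, 50.0, 1.0], dense_output=True)
print(sol.y[:, -1])   # pool sizes (arbitrary C units) at the end of the simulation
```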

  17. Genomic analyses with biofilter 2.0: knowledge driven filtering, annotation, and model development.

    Science.gov (United States)

    Pendergrass, Sarah A; Frase, Alex; Wallace, John; Wolfe, Daniel; Katiyar, Neerja; Moore, Carrie; Ritchie, Marylyn D

    2013-12-30

    The ever-growing wealth of biological information available through multiple comprehensive database repositories can be leveraged for advanced analysis of data. We have now extensively revised and updated the multi-purpose software tool Biofilter that allows researchers to annotate and/or filter data as well as generate gene-gene interaction models based on existing biological knowledge. Biofilter now has the Library of Knowledge Integration (LOKI), for accessing and integrating existing comprehensive database information, including more flexibility for how ambiguity of gene identifiers is handled. We have also updated the way importance scores for interaction models are generated. In addition, Biofilter 2.0 now works with a range of types and formats of data, including single nucleotide polymorphism (SNP) identifiers, rare variant identifiers, base pair positions, gene symbols, genetic regions, and copy number variant (CNV) location information. Biofilter provides a convenient single interface for accessing multiple publicly available human genetic data sources that have been compiled in the supporting database of LOKI. Information within LOKI includes genomic locations of SNPs and genes, as well as known relationships among genes and proteins such as interaction pairs, pathways and ontological categories. Via Biofilter 2.0 researchers can: • Annotate genomic location or region based data, such as results from association studies, or CNV analyses, with relevant biological knowledge for deeper interpretation; • Filter genomic location or region based data on biological criteria, such as filtering a series of SNPs to retain only SNPs present in specific genes within specific pathways of interest; • Generate Predictive Models for gene-gene, SNP-SNP, or CNV-CNV interactions based on biological information, with priority for models to be tested based on biological relevance, thus narrowing the search space and reducing multiple hypothesis-testing. Biofilter is a software

  18. Controls on Yardang Morphology: Insights from Field Measurements, Lidar Topographic Analyses, and Numerical Modeling

    Science.gov (United States)

    Pelletier, J. D.; Kapp, P. A.

    2014-12-01

    Yardangs are streamlined bedforms sculpted by the wind and wind-blown sand. They can form as relatively resistant exposed rocks erode more slowly than surrounding exposed rocks, thus causing the more resistant rocks to stand higher in the landscape and deflect the wind and wind-blown sand into adjacent troughs in a positive feedback. How this feedback gives rise to streamlined forms that locally have a consistent size is not well understood theoretically. In this study we combine field measurements in the yardangs of Ocotillo Wells SVRA with analyses of airborne and terrestrial lidar datasets and numerical modeling to quantify and understand the controls on yardang morphology. The classic model for yardang morphology is that they evolve to an ideal 4:1 length-to-width aspect ratio that minimizes aerodynamic drag. We show using computational fluid dynamics (CFD) modeling that this model is incorrect: the 4:1 aspect ratio is the value corresponding to minimum drag for free bodies, i.e. obstacles around which air flows on all sides. Yardangs, in contrast, are embedded in Earth's surface. For such rough streamlined half-bodies, the aspect ratio corresponding to minimum drag is larger than 20:1. As an alternative to the minimum-drag model, we propose that the aspect ratio of yardangs not significantly influenced by structural controls is controlled by the angle of dispersion of the aerodynamic jet created as deflected wind and wind-blown sand exits the troughs between incipient yardang noses. Aerodynamic jets have a universal dispersion angle of 11.8 degrees, thus predicting a yardang aspect ratio of ~5:1. We developed a landscape evolution model that combines the physics of boundary layer flow with aeolian saltation and bedrock erosion to form yardangs with a range of sizes and aspect ratios similar to those observed in nature. Yardangs with aspect ratios both larger and smaller than 5:1 occur in the model since the strike and dip of the resistant rock unit also exerts
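
    A back-of-envelope check of the ~5:1 figure, under the assumption (ours, for illustration, not taken from the study) that the yardang width W is set by lateral spreading of the inter-yardang jet over the yardang length L at the universal dispersion angle:

    ```latex
    % If the jet exiting a trough spreads laterally at the universal dispersion
    % half-angle \theta \approx 11.8^\circ, and that spreading over the length L
    % sets the width W, then
    \[
      \frac{L}{W} \;\approx\; \frac{1}{\tan\theta}
      \;=\; \frac{1}{\tan 11.8^\circ} \;\approx\; 4.8,
    \]
    % i.e. close to the ~5:1 aspect ratio quoted above.
    ```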

  19. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    Science.gov (United States)

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

    Certain contaminants may travel faster through soils when they are sorbed to subsurface colloidal particles. Indeed, subsurface colloids may act as carriers of some contaminants, accelerating their translocation through the soil into the water table. This phenomenon is known as colloid-facilitated contaminant transport. It plays a significant role in contaminant transport in soils and has been recognized as a source of groundwater contamination. From a mechanistic point of view, the attachment/detachment of the colloidal particles from the soil matrix or from the air-water interface and the straining process may modify the hydraulic properties of the porous media. Šimůnek et al. (2006) developed a model that can simulate the colloid-facilitated contaminant transport in variably saturated porous media. The model is based on the solution of a modified advection-dispersion equation that accounts for several processes, namely: straining, exclusion and attachment/detachment kinetics of colloids through the soil matrix. The solutions of these governing partial differential equations are obtained using a standard Galerkin-type, linear finite element scheme, implemented in the HYDRUS-2D/3D software (Šimůnek et al., 2012). Modeling colloid transport through the soil and the interaction of colloids with the soil matrix and other contaminants is complex and requires the characterization of many model parameters. In practice, it is very difficult to assess actual transport parameter values, so they are often calibrated. However, before calibration, one needs to know which parameters have the greatest impact on output variables. This kind of information can be obtained through a sensitivity analysis of the model. The main objective of this work is to perform local and global sensitivity analyses of the colloid-facilitated contaminant transport module of HYDRUS. Sensitivity analysis was performed in two steps: (i) we applied a screening method based on Morris' elementary
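
    As an illustration of the screening step described above, a Morris elementary-effects analysis can be set up as follows, assuming the SALib package and using a placeholder function in place of a HYDRUS-2D/3D run; the parameter names and bounds are assumptions for the sketch:

    ```python
    # Illustrative Morris elementary-effects screening with SALib; the model
    # function is a stand-in for a HYDRUS colloid-transport simulation.
    import numpy as np
    from SALib.sample import morris as morris_sample
    from SALib.analyze import morris as morris_analyze

    problem = {
        "num_vars": 3,
        "names": ["k_attach", "k_detach", "straining_coeff"],   # hypothetical names
        "bounds": [[1e-4, 1e-1], [1e-5, 1e-2], [0.0, 2.0]],
    }

    def placeholder_model(x):
        # Stand-in returning, e.g., colloid mass flux at the profile outlet.
        k_a, k_d, s = x
        return k_a / (k_d + 1e-9) * np.exp(-s)

    X = morris_sample.sample(problem, N=50, num_levels=4)
    Y = np.array([placeholder_model(row) for row in X])
    res = morris_analyze.analyze(problem, X, Y, num_levels=4, print_to_console=False)
    print(dict(zip(problem["names"], res["mu_star"])))  # mean absolute elementary effects
    ```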

  20. A Hidden Markov model web application for analysing bacterial genomotyping DNA microarray experiments.

    Science.gov (United States)

    Newton, Richard; Hinds, Jason; Wernisch, Lorenz

    2006-01-01

    Whole genome DNA microarray genomotyping experiments compare the gene content of different species or strains of bacteria. A statistical approach to analysing the results of these experiments was developed, based on a Hidden Markov model (HMM), which takes adjacency of genes along the genome into account when calling genes present or absent. The model was implemented in the statistical language R and applied to three datasets. The method is numerically stable with good convergence properties. Error rates are reduced compared with approaches that ignore spatial information. Moreover, the HMM circumvents a problem encountered in a conventional analysis: determining the cut-off value to use to classify a gene as absent. An Apache Struts web interface for the R script was created for the benefit of users unfamiliar with R. The application may be found at http://hmmgd.cryst.bbk.ac.uk/hmmgd. The source code illustrating how to run R scripts from an Apache Struts-based web application is available from the corresponding author on request. The application is also available for local installation if required.
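
    The core idea, sketched here in Python rather than the authors' R implementation, is a two-state HMM over genes ordered along the genome with Gaussian emissions on the log-ratios, decoded by Viterbi so that present/absent calls borrow strength from neighbouring genes; all numbers are illustrative:

    ```python
    # Two-state (present/absent) HMM with Gaussian emissions, decoded by Viterbi.
    import numpy as np
    from scipy.stats import norm

    def viterbi_calls(log_ratios, mu=(0.0, -2.0), sd=(0.5, 0.8), p_stay=0.95):
        states = ["present", "absent"]
        logA = np.log(np.array([[p_stay, 1 - p_stay],
                                [1 - p_stay, p_stay]]))              # transition matrix
        logE = np.stack([norm.logpdf(log_ratios, m, s)
                         for m, s in zip(mu, sd)], axis=1)           # emission log-likelihoods
        n, k = logE.shape
        delta = np.full((n, k), -np.inf)
        back = np.zeros((n, k), dtype=int)
        delta[0] = np.log(0.5) + logE[0]
        for t in range(1, n):
            scores = delta[t - 1][:, None] + logA
            back[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0) + logE[t]
        path = [int(delta[-1].argmax())]
        for t in range(n - 1, 0, -1):                                # backtrack
            path.append(int(back[t][path[-1]]))
        return [states[s] for s in reversed(path)]

    print(viterbi_calls(np.array([0.1, -0.2, -2.4, -1.9, 0.05])))
    ```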

  1. Predicting Examination Performance Using an Expanded Integrated Hierarchical Model of Test Emotions and Achievement Goals

    Science.gov (United States)

    Putwain, Dave; Deveney, Carolyn

    2009-01-01

    The aim of this study was to examine an expanded integrative hierarchical model of test emotions and achievement goal orientations in predicting the examination performance of undergraduate students. Achievement goals were theorised as mediating the relationship between test emotions and performance. 120 undergraduate students completed…

  2. Global isoprene emissions estimated using MEGAN, ECMWF analyses and a detailed canopy environment model

    Directory of Open Access Journals (Sweden)

    J.-F. Müller

    2008-03-01

    Full Text Available The global emissions of isoprene are calculated at 0.5° resolution for each year between 1995 and 2006, based on the MEGAN (Model of Emissions of Gases and Aerosols from Nature) version 2 model (Guenther et al., 2006) and a detailed multi-layer canopy environment model for the calculation of leaf temperature and visible radiation fluxes. The calculation is driven by meteorological fields – air temperature, cloud cover, downward solar irradiance, windspeed, volumetric soil moisture in 4 soil layers – provided by analyses of the European Centre for Medium-Range Weather Forecasts (ECMWF). The estimated annual global isoprene emission ranges between 374 Tg (in 1996) and 449 Tg (in 1998 and 2005), for an average of ca. 410 Tg/year over the whole period, i.e. about 30% less than the standard MEGAN estimate (Guenther et al., 2006). This difference is due, to a large extent, to the impact of the soil moisture stress factor, which is found here to decrease the global emissions by more than 20%. In qualitative agreement with past studies, high annual emissions are found to be generally associated with El Niño events. The emission inventory is evaluated against flux measurement campaigns at Harvard Forest (Massachusetts) and Tapajós in Amazonia, showing that the model can capture quite well the short-term variability of emissions, but that it fails to reproduce the observed seasonal variation at the tropical rainforest site, with largely overestimated wet season fluxes. The comparison of the HCHO vertical columns calculated by a chemistry and transport model (CTM) with HCHO distributions retrieved from space provides useful insights on tropical isoprene emissions. For example, the relatively low emissions calculated over Western Amazonia (compared to the corresponding estimates in the inventory of Guenther et al., 1995) are validated by the excellent agreement found between the CTM and HCHO data over this region. The parameterized impact of the soil moisture
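
    The soil moisture attenuation mentioned above follows, in MEGAN-type parameterisations, a simple piecewise activity factor; the form below is quoted from the general MEGAN literature rather than from this specific study, and the width Δθ1 is an empirical constant whose value should be taken from the MEGAN version used:

    ```latex
    \[
    \gamma_{SM} =
    \begin{cases}
    1, & \theta > \theta_1,\\[2pt]
    (\theta - \theta_w)/\Delta\theta_1, & \theta_w < \theta \le \theta_1,\\[2pt]
    0, & \theta \le \theta_w,
    \end{cases}
    \qquad \theta_1 = \theta_w + \Delta\theta_1,
    \]
    % where \theta is the volumetric soil moisture and \theta_w the wilting point.
    ```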

  3. Stream Tracer Integrity: Comparative Analyses of Rhodamine-WT and Sodium Chloride through Transient Storage Modeling

    Science.gov (United States)

    Smull, E. M.; Wlostowski, A. N.; Gooseff, M. N.; Bowden, W. B.; Wollheim, W. M.

    2013-12-01

    Solute transport in natural channels describes the transport of water and dissolved matter through a river reach of interest. Conservative tracers allow us to label a parcel of stream water, such that we can track its movement downstream through space and time. A transient storage model (TSM) can be fit to the breakthrough curve (BTC) following a stream tracer experiment, as a way to quantify advection, dispersion, and transient storage processes. Arctic streams and rivers, in particular, are continuously underlain by permafrost, which provides for a simplified surface water-groundwater exchange. Sodium chloride (NaCl) and Rhodamine-WT (RWT) are widely used tracers, and differences between the two in conservative behavior and detection limits have been noted in small-scale field and laboratory studies. This study seeks to further this understanding by applying the OTIS model to NaCl and RWT BTC data from a field study on the Kuparuk River, Alaska, at varying flow rates. There are two main questions to be answered: 1) Do differences in NaCl and RWT manifest in OTIS parameter values? 2) Are the OTIS model results reliable for NaCl, RWT, or both? Fieldwork was performed in the summer of 2012 on the Kuparuk River, and modeling was performed using a modified OTIS framework, which provided for parameter optimization and further global sensitivity analyses. The results of this study will contribute to the greater body of literature surrounding Arctic stream hydrology, and it will assist in methodology for future tracer field studies. Additionally, the modeling work will provide an analysis for OTIS parameter identifiability, and assess stream tracer integrity (i.e. how well the BTC data represents the system) and its relation to TSM performance (i.e. how well the TSM can find a unique fit to the BTC data). The quantitative tools used can be applied to other solute transport studies, to better understand potential deviations in model outcome due to stream tracer choice and
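
    For readers unfamiliar with the TSM, the core OTIS equations fit to the breakthrough curves have roughly the following form (stated from the general OTIS literature, with lateral inflow omitted for brevity):

    ```latex
    \[
      \frac{\partial C}{\partial t} = -\frac{Q}{A}\frac{\partial C}{\partial x}
      + \frac{1}{A}\frac{\partial}{\partial x}\!\left(A D \frac{\partial C}{\partial x}\right)
      + \alpha\,(C_S - C),
      \qquad
      \frac{\partial C_S}{\partial t} = \alpha \frac{A}{A_S}\,(C - C_S),
    \]
    % where C and C_S are the main-channel and storage-zone concentrations, Q the
    % discharge, A and A_S the channel and storage cross-sectional areas, D the
    % dispersion coefficient, and \alpha the storage exchange coefficient.
    ```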

  4. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    Directory of Open Access Journals (Sweden)

    Varsha H. Rallapalli

    2016-10-01

    Full Text Available Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRENV) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S) and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRENV has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding to speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRENV. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating feasibility of neural SNRENV computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence in individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of the neural SNRENV in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.
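
    The per-band metric at the heart of the sEPSM is the envelope power of the speech-plus-noise mixture, corrected for and normalised by the envelope power of the noise alone; the expression below is quoted from the general sEPSM literature rather than from this specific study:

    ```latex
    \[
      \mathrm{SNR}_{\mathrm{env}}
      \;=\; \frac{P_{\mathrm{env},\,S+N} - P_{\mathrm{env},\,N}}{P_{\mathrm{env},\,N}},
    \]
    % computed in each modulation band and then combined across bands.
    ```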

  5. Using species abundance distribution models and diversity indices for biogeographical analyses

    Science.gov (United States)

    Fattorini, Simone; Rigal, François; Cardoso, Pedro; Borges, Paulo A. V.

    2016-01-01

    We examine whether Species Abundance Distribution models (SADs) and diversity indices can describe how species colonization status influences species community assembly on oceanic islands. Our hypothesis is that, because of the lack of source-sink dynamics at the archipelago scale, Single Island Endemics (SIEs), i.e. endemic species restricted to only one island, should be represented by few rare species and consequently have abundance patterns that differ from those of more widespread species. To test our hypothesis, we used arthropod data from the Azorean archipelago (North Atlantic). We divided the species into three colonization categories: SIEs, archipelagic endemics (AZEs, present in at least two islands) and native non-endemics (NATs). For each category, we modelled rank-abundance plots using both the geometric series and the Gambin model, a measure of distributional amplitude. We also calculated Shannon entropy and Buzas and Gibson's evenness. We show that the slopes of the regression lines modelling SADs were significantly higher for SIEs, which indicates a relative predominance of a few highly abundant species and a lack of rare species, which also depresses diversity indices. This may be a consequence of two factors: (i) some forest specialist SIEs may be at an advantage over other, less adapted species; (ii) the entire populations of SIEs are by definition concentrated on a single island, without possibility for inter-island source-sink dynamics; hence all populations must have a minimum number of individuals to survive natural, often unpredictable, fluctuations. These findings are supported by higher values of the α parameter of the Gambin model for SIEs. In contrast, AZEs and NATs had lower regression slopes, lower α but higher diversity indices, resulting from their widespread distribution over several islands. We conclude that these differences in the SAD models and diversity indices demonstrate that the study of these metrics is useful for
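
    The slope comparison described above can be illustrated with a minimal sketch: under a geometric series, log-abundance declines approximately linearly with rank, so the fitted slope summarises dominance (the abundance vectors below are invented):

    ```python
    # Rank-abundance (geometric-series) slope from a log-linear fit; data invented.
    import numpy as np

    def rank_abundance_slope(abundances):
        a = np.sort(np.asarray(abundances, dtype=float))[::-1]   # descending abundances
        ranks = np.arange(1, a.size + 1)
        slope, intercept = np.polyfit(ranks, np.log10(a), 1)
        return slope

    sie  = [120, 40, 15, 5, 2]          # few species, steep decline (hypothetical SIE-like)
    nats = [80, 60, 45, 30, 22, 15, 9]  # more even, shallower decline (hypothetical NAT-like)
    print(rank_abundance_slope(sie), rank_abundance_slope(nats))
    ```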

  6. Promoting Social Inclusion through Sport for Refugee-Background Youth in Australia: Analysing Different Participation Models

    Directory of Open Access Journals (Sweden)

    Karen Block

    2017-06-01

    Full Text Available Sports participation can confer a range of physical and psychosocial benefits and, for refugee and migrant youth, may even act as a critical mediator for achieving positive settlement and engaging meaningfully in Australian society. This group has low participation rates however, with identified barriers including costs; discrimination and a lack of cultural sensitivity in sporting environments; lack of knowledge of mainstream sports services on the part of refugee-background settlers; inadequate access to transport; culturally determined gender norms; and family attitudes. Organisations in various sectors have devised programs and strategies for addressing these participation barriers. In many cases however, these responses appear to be ad hoc and under-theorised. This article reports findings from a qualitative exploratory study conducted in a range of settings to examine the benefits, challenges and shortcomings associated with different participation models. Interview participants were drawn from non-government organisations, local governments, schools, and sports clubs. Three distinct models of participation were identified, including short term programs for refugee-background children; ongoing programs for refugee-background children and youth; and integration into mainstream clubs. These models are discussed in terms of their relative challenges and benefits and their capacity to promote sustainable engagement and social inclusion for this population group.

  7. Exploring prospective secondary mathematics teachers' interpretation of student thinking through analysing students' work in modelling

    Science.gov (United States)

    Didis, Makbule Gozde; Erbas, Ayhan Kursat; Cetinkaya, Bulent; Cakiroglu, Erdinc; Alacaci, Cengiz

    2016-09-01

    Researchers point out the importance of teachers' knowledge of student thinking and the role of examining student work in various contexts to develop a knowledge base regarding students' ways of thinking. This study investigated prospective secondary mathematics teachers' interpretations of students' thinking as manifested in students' work that embodied solutions of mathematical modelling tasks. The data were collected from 25 prospective mathematics teachers enrolled in an undergraduate course through four 2-week-long cycles. Analysis of data revealed that the prospective teachers interpreted students' thinking in four ways: describing, questioning, explaining, and comparing. Moreover, whereas some of the prospective teachers showed a tendency to increase their attention to the meaning of students' ways of thinking more while they engaged in students' work in depth over time and experience, some of them continued to focus on only judging the accuracy of students' thinking. The implications of the findings for understanding and developing prospective teachers' ways of interpreting students' thinking are discussed.

  8. Examining the Longitudinal Biliterate Trajectory of Emerging Bilingual Learners in a Paired Literacy Instructional Model

    Science.gov (United States)

    Sparrow, Wendy; Butvilofsky, Sandra; Escamilla, Kathy; Hopewell, Susan; Tolento, Teresa

    2014-01-01

    This longitudinal study examines the biliteracy results of Spanish-English emerging bilingual students who participated in a K-5 paired literacy model in a large school district in Oregon. Spanish and English reading and writing data show longitudinal gains in students' biliterate development, demonstrating the potential of the model in developing…

  9. An Examination of Pre-Service Mathematics Teachers' Approaches to Construct and Solve Mathematical Modelling Problems

    Science.gov (United States)

    Bukova-Guzel, Esra

    2011-01-01

    This study examines the approaches displayed by pre-service mathematics teachers in their experiences of constructing mathematical modelling problems and the extent to which they perform the modelling process when solving the problems they construct. This case study was carried out with 35 pre-service teachers taking the Mathematical Modelling…

  10. Examining the Bifactor IRT Model for Vertical Scaling in K-12 Assessment

    Science.gov (United States)

    Koepfler, James R.

    2012-01-01

    Over the past decade, educational policy trends have shifted to a focus on examining students' growth from kindergarten through twelfth grade (K-12). One way states can track students' growth is with a vertical scale. Presently, every state that uses a vertical scale bases the scale on a unidimensional IRT model. These models make a…

  11. Simultaneous and Delayed Video Modeling: An Examination of System Effectiveness and Student Preferences

    Science.gov (United States)

    Taber-Doughty, Teresa; Patton, Scott E.; Brennan, Stephanie

    2009-01-01

    The effectiveness of simultaneous and delayed video modeling when used by three middle-school students with moderate intellectual disabilities was examined. Alternating between modeling systems, students were taught to use the public library computer to locate specific book call numbers and use the Dewey Decimal Classification System to locate…

  12. Modeling an integrative physical examination program for the Departments of Defense and Veterans Affairs.

    Science.gov (United States)

    Goodrich, Scott G

    2006-10-01

    Current policies governing the Departments of Defense and Veterans Affairs physical examination programs are out of step with current evidence-based medical practice. Replacing periodic and other routine physical examination types with annual preventive health assessments would afford our service members additional health benefit at reduced cost. Additionally, the Departments of Defense and Veterans Affairs repeat the physical examination process at separation and have been unable to reconcile their respective disability evaluation systems to reduce duplication and waste. A clear, coherent, and coordinated strategy to improve the relevance and utility of our physical examination programs is long overdue. This article discusses existing physical examination programs and proposes a model for a new integrative physical examination program based on need, science, and common sense.

  13. Application of model bread baking in the examination of arabinoxylan-protein complexes in rye bread.

    Science.gov (United States)

    Buksa, Krzysztof

    2016-09-01

    The changes in the molecular mass of arabinoxylan (AX) and protein caused by the bread baking process were examined using a model rye bread. Instead of normal flour, the dough contained starch, water-extractable AX and protein, which were isolated from rye wholemeal. From the crumb of selected model breads, starch was removed, releasing AX-protein complexes, which were further examined by size exclusion chromatography. On the basis of the research, it was concluded that the optimum model mix is composed of 3-6% AX and 3-6% rye protein isolate with 94-88% rye starch, giving properties most similar to those of low-extraction rye flour. The model rye bread made it possible to examine the interactions between AX and proteins. Bread baked with a share of AX, rye protein and starch, from which the complexes of the highest molar mass were isolated, was characterized by the strongest bread crumb structure.

  14. Analysing and modelling the impact of habitat fragmentation on species diversity: a macroecological perspective

    Directory of Open Access Journals (Sweden)

    Thomas Matthews

    2015-07-01

    Full Text Available My research aimed to examine a variety of macroecological and biogeographical patterns using a large number of purely habitat island datasets (i.e. isolated patches of natural habitat set within a matrix of human land uses) sourced from both the literature and my own sampling. These patterns can be grouped under four broad headings: 1) species–area relationships (SAR), 2) nestedness, 3) species abundance distributions (SADs) and 4) species incidence functions (function of area). Overall, I found that there were few hard macroecological generalities that hold in all cases across habitat island systems. This is because most habitat island systems are highly disturbed environments, with a variety of confounding variables and 'undesirable' species (e.g. species associated with human land uses) acting to modulate the patterns of interest. Nonetheless, some clear patterns did emerge. For example, the power model was by far the best general SAR model for habitat islands. The slope of the island species–area relationship (ISAR) was related to the matrix type surrounding archipelagos, such that habitat island ISARs were shallower than true island ISARs. Significant compositional and functional nestedness was rare in habitat island datasets, although island area was seemingly responsible for what nestedness was observed. Species abundance distribution models were found to provide useful information for conservation in fragmented landscapes, but the presence of undesirable species substantially affected the shape of the SAD. In conclusion, I found that the application of theory derived from the study of true islands, to habitat island systems, is inappropriate as it fails to incorporate factors that are unique to habitat islands.
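
    For illustration, fitting the power model S = cA^z reduces to a linear regression in log-log space; the areas and richness values below are invented:

    ```python
    # Power-model species-area relationship fit: log10(S) = log10(c) + z*log10(A).
    import numpy as np

    area     = np.array([1.0, 5.0, 12.0, 40.0, 150.0])   # hypothetical patch areas
    richness = np.array([8, 14, 19, 28, 41])              # hypothetical species counts

    z, log_c = np.polyfit(np.log10(area), np.log10(richness), 1)
    print(f"z = {z:.2f}, c = {10**log_c:.1f}")
    ```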

  15. Examination of Solubility Models for the Determination of Transition Metals within Liquid Alkali Metals

    Directory of Open Access Journals (Sweden)

    Jeremy Isler

    2016-06-01

    Full Text Available The experimental solubility of transition metals in liquid alkali metals was compared to the modeled solubility calculated using various solubility equations. These equations used the enthalpy calculations of the semi-empirical Miedema model and various entropy calculations. The accuracy of the predicted solubility relative to the experimental data depends more on which liquid alkali metal is examined than on the transition metal solute. For liquid lithium the solubility calculated by the model was generally larger than the experimental values, while for liquid cesium the modeled solubility was significantly smaller than the experimental values. For liquid sodium, potassium, and rubidium the experimental solubilities were within the range calculated by this study. Few data approached the predicted temperature dependence of solubility; instead, most data exhibited a less pronounced temperature dependence.

  16. Socioeconomic status, food security, and dental caries in US children: mediation analyses of data from the National Health and Nutrition Examination Survey, 2007-2008.

    Science.gov (United States)

    Chi, Donald L; Masterson, Erin E; Carle, Adam C; Mancl, Lloyd A; Coldwell, Susan E

    2014-05-01

    We examined associations of household socioeconomic status (SES) and food security with children's oral health outcomes. We analyzed 2007 and 2008 US National Health and Nutrition Examination Survey data for children aged 5 to 17 years (n = 2206) to examine the relationship between food security and untreated dental caries and to assess whether food security mediates the SES-caries relationship. About 20.1% of children had untreated caries. Most households had full food security (62%); 13% had marginal, 17% had low, and 8% had very low food security. Higher SES was associated with significantly lower caries prevalence (prevalence ratio [PR] = 0.77; 95% confidence interval = 0.63, 0.94; P = .01). Children from households with low or very low food security had significantly higher caries prevalence (PR = 2.00 and PR = 1.70, respectively) than did children living in fully food-secure households. Caries prevalence did not differ among children from fully and marginally food-secure households (P = .17). Food insecurity did not appear to mediate the SES-caries relationship. Interventions and policies to ensure food security may help address the US pediatric caries epidemic.
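
    Prevalence ratios for a binary outcome such as untreated caries are commonly estimated with a modified Poisson regression with robust standard errors; the sketch below illustrates that general approach on synthetic data and, unlike the study itself, ignores the NHANES complex survey design and weights:

    ```python
    # Modified Poisson regression with robust (sandwich) errors to obtain prevalence
    # ratios; the data frame is synthetic, so the numbers mean nothing substantive.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(1)
    n = 500
    df = pd.DataFrame({
        "ses": rng.normal(0, 1, n),                                       # household SES score
        "food_sec": rng.choice(["full", "marginal", "low", "very_low"], n),
    })
    p = 0.2 * np.exp(-0.2 * df["ses"] + 0.5 * (df["food_sec"] != "full"))
    df["caries"] = rng.binomial(1, np.clip(p, 0, 1))                      # 0/1 untreated caries

    fit = smf.glm("caries ~ ses + C(food_sec)", data=df,
                  family=sm.families.Poisson()).fit(cov_type="HC0")
    print(np.exp(fit.params))  # exponentiated coefficients = prevalence ratios
    ```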

  17. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    NARCIS (Netherlands)

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input subs
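
    For contrast with the NLIO approach, the traditional linear IO impact calculation that it generalises is the Leontief system x = (I - A)^(-1) f; a minimal sketch with an invented two-sector coefficient matrix:

    ```python
    # Traditional (linear) input-output impact calculation; the NLIO model relaxes
    # the fixed-coefficient assumption by allowing price-induced input substitution.
    import numpy as np

    A = np.array([[0.20, 0.15],    # hypothetical technical coefficients
                  [0.10, 0.30]])
    f = np.array([10.0, 0.0])      # e.g. an extra tourist-spending shock on sector 1

    x = np.linalg.solve(np.eye(2) - A, f)
    print(x)  # total (direct + indirect) output required, by sector
    ```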

  18. Molecular analyses of neurogenic defects in a human pluripotent stem cell model of fragile X syndrome.

    Science.gov (United States)

    Boland, Michael J; Nazor, Kristopher L; Tran, Ha T; Szücs, Attila; Lynch, Candace L; Paredes, Ryder; Tassone, Flora; Sanna, Pietro Paolo; Hagerman, Randi J; Loring, Jeanne F

    2017-01-29

    New research suggests that common pathways are altered in many neurodevelopmental disorders including autism spectrum disorder; however, little is known about early molecular events that contribute to the pathology of these diseases. The study of monogenic, neurodevelopmental disorders with a high incidence of autistic behaviours, such as fragile X syndrome, has the potential to identify genes and pathways that are dysregulated in autism spectrum disorder as well as fragile X syndrome. In vitro generation of human disease-relevant cell types provides the ability to investigate aspects of disease that are impossible to study in patients or animal models. Differentiation of human pluripotent stem cells recapitulates development of the neocortex, an area affected in both fragile X syndrome and autism spectrum disorder. We have generated induced human pluripotent stem cells from several individuals clinically diagnosed with fragile X syndrome and autism spectrum disorder. When differentiated to dorsal forebrain cell fates, our fragile X syndrome human pluripotent stem cell lines exhibited reproducible aberrant neurogenic phenotypes. Using global gene expression and DNA methylation profiling, we have analysed the early stages of neurogenesis in fragile X syndrome human pluripotent stem cells. We discovered aberrant DNA methylation patterns at specific genomic regions in fragile X syndrome cells, and identified dysregulated gene- and network-level correlates of fragile X syndrome that are associated with developmental signalling, cell migration, and neuronal maturation. Integration of our gene expression and epigenetic analysis identified altered epigenetic-mediated transcriptional regulation of a distinct set of genes in fragile X syndrome. These fragile X syndrome-aberrant networks are significantly enriched for genes associated with autism spectrum disorder, giving support to the idea that underlying similarities exist among these neurodevelopmental diseases.

  19. EXAMINING THE MOVEMENTS OF MOBILE NODES IN THE REAL WORLD TO PRODUCE ACCURATE MOBILITY MODELS

    Directory of Open Access Journals (Sweden)

    TANWEER ALAM

    2010-09-01

    Full Text Available All communication occurs through a wireless medium in an ad hoc network. Ad hoc networks are dynamically created and maintained by the individual nodes comprising the network. The Random Waypoint Mobility Model is a model that includes pause times between changes in destination and speed. To produce a real-world environment within which an ad hoc network can be formed among a set of nodes, there is a need for the development of realistic, generic and comprehensive mobility models. In this paper, we examine the movements of entities in the real world and present the production of a mobility model for an ad hoc network.
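
    A minimal random-waypoint generator matching the description above (area size, speed range, and pause range are illustrative assumptions):

    ```python
    # Random Waypoint Mobility Model: pick a destination and speed, move there, pause, repeat.
    import random

    def random_waypoint(steps, area=(1000.0, 1000.0), speed=(1.0, 20.0),
                        pause=(0.0, 10.0), dt=1.0):
        x, y = random.uniform(0, area[0]), random.uniform(0, area[1])
        trace, wait, target, v = [], 0.0, None, 0.0
        for _ in range(steps):
            if wait > 0:                                   # pausing at a waypoint
                wait -= dt
            elif target is None:                           # choose next destination and speed
                target = (random.uniform(0, area[0]), random.uniform(0, area[1]))
                v = random.uniform(*speed)
            else:                                          # move toward the destination
                dx, dy = target[0] - x, target[1] - y
                dist = (dx * dx + dy * dy) ** 0.5
                if dist <= v * dt:                         # arrived: start the pause
                    x, y = target
                    target, wait = None, random.uniform(*pause)
                else:
                    x, y = x + dx / dist * v * dt, y + dy / dist * v * dt
            trace.append((x, y))
        return trace

    print(random_waypoint(5)[:3])
    ```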

  20. A simple beam model to analyse the durability of adhesively bonded tile floorings in presence of shrinkage

    Directory of Open Access Journals (Sweden)

    S. de Miranda

    2014-07-01

    Full Text Available A simple beam model for the evaluation of tile debonding due to substrate shrinkage is presented. The tile-adhesive-substrate package is modeled as an Euler-Bernoulli beam lying on a two-layer elastic foundation. An effective discrete model for inter-tile grouting is introduced with the aim of modelling workmanship defects due to partially filled groutings. The model is validated using the results of a 2D FE model. Different defect configurations and adhesive typologies are analysed, focusing the attention on the prediction of normal stresses in the adhesive layer under the assumption of Mode I failure of the adhesive.
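
    For orientation, the one-parameter (Winkler) special case of the governing equation is sketched below; the two-layer foundation used in the paper adds a second stiffness, so this is a simplification, not the authors' exact formulation:

    ```latex
    \[
      EI\,\frac{d^{4}w}{dx^{4}} + k\,w(x) = q(x),
    \]
    % with w the deflection of the tile package, EI its bending stiffness, k the
    % foundation modulus of the adhesive/substrate, and q(x) the load induced by
    % substrate shrinkage.
    ```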

  1. An examination of the tripartite model of anxiety and depression and its application to youth.

    Science.gov (United States)

    Laurent, J; Ettelson, R

    2001-09-01

    The ability to differentiate anxiety and depression has been a topic of discussion in the adult and youth literatures for several decades. The tripartite model of anxiety and depression proposed by L. A. Clark and D. Watson (1991) has helped focus the discussion. In the tripartite model, anxiety is characterized by elevated levels of physiological hyperarousal (PH), depression is characterized by low levels of positive affect (PA), and negative affect (NA) or generalized emotional distress is common to both. The advent of the model led to the development of measures of tripartite constructs and subsequent validity studies. This work on the tripartite model was largely devoted to adult samples. However, those interested in anxiety and depression among youth are now incorporating the tripartite model in their work. This paper examines the current influence of the tripartite model in the youth literature, especially with regard to measuring anxiety and depression.

  2. The Trauma Outcome Process Assessment Model: A Structural Equation Model Examination of Adjustment

    Science.gov (United States)

    Borja, Susan E.; Callahan, Jennifer L.

    2009-01-01

    This investigation sought to operationalize a comprehensive theoretical model, the Trauma Outcome Process Assessment, and test it empirically with structural equation modeling. The Trauma Outcome Process Assessment reflects a robust body of research and incorporates known ecological factors (e.g., family dynamics, social support) to explain…

  3. CIVACUVE analysis software for MIS machine examination of pressurized water reactor vessels; Civacuve logiciel d'analyse des controles MIS des cuves de reacteurs nucleaires

    Energy Technology Data Exchange (ETDEWEB)

    Dubois, Ph.; Gagnor, A. [Intercontrole, 94 - Rungis (France)]

    2001-07-01

    The CIVACUVE software is used by INTERCONTROLE for the analysis of detection UT examinations performed by the In-Service Inspection Machine (MIS) on the vessels of nuclear power plants. This software is based on an adaptation of a SEGMENTATION algorithm (CEA CEREM), which is applied prior to any analysis. It is equipped with tools adapted to industrial use. It allows the user to: - perform image analysis thanks to advanced graphic tools (Zooms, True Bscan, 'contour' selection...), - back up all data in a database (complete and transparent backup of all information used and obtained during the different analysis operations), - connect PCs to the database (export of Reports and even of segmented points), - issue Examination Reports, Operating Condition Sheets, Sizing curves... - and, last, perform a graphic and numerical comparison between different inspections of the same vessel. Used in Belgium and France on different kinds of reactor vessels, CIVACUVE has shown that the principle of SEGMENTATION can be adapted to detection exams. The use of CIVACUVE generates an important time gain as well as better analysis quality. The broad opening of data toward PCs allows real flexibility with regard to clients' requirements and concerns.

  4. High protists diversity in the plankton of sulfurous lakes and lagoons examined by 18S rRNA gene sequence analyses.

    Science.gov (United States)

    Triadó-Margarit, Xavier; Casamayor, Emilio O

    2015-12-01

    Diversity of small protists was studied in sulfidic and anoxic (euxinic) stratified karstic lakes and coastal lagoons by 18S rRNA gene analyses. We hypothesized a major sulfide effect, reducing protist diversity and richness, with only a few specialized populations adapted to deal with low-redox conditions and high-sulfide concentrations. However, genetic fingerprinting suggested ecological diversity in the anoxic and sulfurous compartments similar to that in the upper, oxygen-rich waters, with specific populations inhabiting the euxinic waters. Many of them agreed with genera previously identified by microscopic observations, but new and unexpected groups were also detected. Most of the sequences matched a rich assemblage of Ciliophora (i.e., Coleps, Prorodon, Plagiopyla, Strombidium, Metopus, Vorticella and Caenomorpha, among others) and algae (mainly Cryptomonadales). Unidentified Cercozoa, Fungi, Stramenopiles and Discoba were recurrently found. The lack of GenBank counterparts was higher in deep hypolimnetic waters and appeared differentially allocated in the different taxa, being higher within Discoba and lower in Cryptophyceae. A larger number of populations than expected were specifically detected in the deep sulfurous waters, with unknown ecological interactions and metabolic capabilities.

  5. Examining the response of needle carbohydrates from Siberian larch trees to climate using compound-specific δ13C and concentration analyses.

    Science.gov (United States)

    Rinne, K T; Saurer, M; Kirdyanov, A V; Bryukhanova, M V; Prokushkin, A S; Churakova Sidorova, O V; Siegwolf, R T W

    2015-11-01

    Little is known about the dynamics of concentrations and carbon isotope ratios of individual carbohydrates in leaves in response to climatic and physiological factors. Improved knowledge of the isotopic ratio in sugars will enhance our understanding of the tree ring isotope ratio and will help to decipher environmental conditions in retrospect more reliably. Carbohydrate samples from larch (Larix gmelinii) needles of two sites in the continuous permafrost zone of Siberia with differing growth conditions were analysed by compound-specific isotope analysis (CSIA). We compared concentrations and carbon isotope values (δ13C) of sucrose, fructose, glucose and pinitol combined with phenological data. The needle carbohydrates show highly dynamic variability, with distinct seasonal characteristics between and within the studied years and a clear link to the climatic conditions, particularly vapour pressure deficit. Compound-specific differences in δ13C values as a response to climate were detected. The δ13C of pinitol, which contributes up to 50% of total soluble carbohydrates, was almost invariant during the whole growing season. Our study provides the first in-depth characterization of compound-specific needle carbohydrate isotope variability, identifies involved mechanisms and shows the potential of such results for linking tree physiological responses to different climatic conditions.
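
    For reference, the delta notation used for the carbon isotope values reported above is defined relative to the VPDB standard as:

    ```latex
    \[
      \delta^{13}\mathrm{C}
      = \left(\frac{R_{\mathrm{sample}}}{R_{\mathrm{standard}}} - 1\right) \times 1000\ \text{\textperthousand},
      \qquad R = {}^{13}\mathrm{C}/{}^{12}\mathrm{C}.
    \]
    ```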

  6. A model inter-comparison study to examine limiting factors in modelling Australian tropical savannas

    Directory of Open Access Journals (Sweden)

    R. Whitley

    2015-12-01

    Full Text Available The savanna ecosystem is one of the most dominant and complex terrestrial biomes, deriving from a distinct vegetative surface comprised of co-dominant tree and grass populations. While these two vegetation types co-exist functionally, demographically they are not static, but are dynamically changing in response to environmental forces such as annual fire events and rainfall variability. Modelling savanna environments with the current generation of terrestrial biosphere models (TBMs) has presented many problems, particularly describing fire frequency and intensity, phenology, leaf biochemistry of C3 and C4 photosynthesis vegetation, and root water uptake. In order to better understand why TBMs perform so poorly in savannas, we conducted a model inter-comparison of 6 TBMs and assessed their performance at simulating latent energy (LE) and gross primary productivity (GPP) for five savanna sites along a rainfall gradient in northern Australia. Performance in predicting LE and GPP was measured using an empirical benchmarking system, which ranks models by their ability to utilise meteorological driving information to predict the fluxes. On average, the TBMs performed as well as a multi-linear regression of the fluxes against solar radiation, temperature and vapour pressure deficit, but were outperformed by a more complicated nonlinear response model that also included the leaf area index (LAI). This identified that the TBMs are not fully utilising their input information effectively in determining savanna LE and GPP, and highlights that savanna dynamics cannot be calibrated into models and that there are problems in underlying model processes. We identified key weaknesses in a model's ability to simulate savanna fluxes and their seasonal variation, related to the representation of vegetation by the models and root water uptake. We underline these weaknesses in terms of three critical areas for development. First, prescribed tree-rooting depths must be

  7. A model inter-comparison study to examine limiting factors in modelling Australian tropical savannas

    Science.gov (United States)

    Whitley, Rhys; Beringer, Jason; Hutley, Lindsay B.; Abramowitz, Gab; De Kauwe, Martin G.; Duursma, Remko; Evans, Bradley; Haverd, Vanessa; Li, Longhui; Ryu, Youngryel; Smith, Benjamin; Wang, Ying-Ping; Williams, Mathew; Yu, Qiang

    2016-06-01

    The savanna ecosystem is one of the most dominant and complex terrestrial biomes, deriving from a distinct vegetative surface comprised of co-dominant tree and grass populations. While these two vegetation types co-exist functionally, demographically they are not static but are dynamically changing in response to environmental forces such as annual fire events and rainfall variability. Modelling savanna environments with the current generation of terrestrial biosphere models (TBMs) has presented many problems, particularly describing fire frequency and intensity, phenology, leaf biochemistry of C3 and C4 photosynthesis vegetation, and root-water uptake. In order to better understand why TBMs perform so poorly in savannas, we conducted a model inter-comparison of six TBMs and assessed their performance at simulating latent energy (LE) and gross primary productivity (GPP) for five savanna sites along a rainfall gradient in northern Australia. Performance in predicting LE and GPP was measured using an empirical benchmarking system, which ranks models by their ability to utilise meteorological driving information to predict the fluxes. On average, the TBMs performed as well as a multi-linear regression of the fluxes against solar radiation, temperature and vapour pressure deficit but were outperformed by a more complicated nonlinear response model that also included the leaf area index (LAI). This identified that the TBMs are not fully utilising their input information effectively in determining savanna LE and GPP and highlights that savanna dynamics cannot be calibrated into models and that there are problems in underlying model processes. We identified key weaknesses in a model's ability to simulate savanna fluxes and their seasonal variation, related to the representation of vegetation by the models and root-water uptake. We underline these weaknesses in terms of three critical areas for development. First, prescribed tree-rooting depths must be deep enough
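
    The empirical benchmark referred to above is, in essence, an ordinary multi-linear regression of each flux on solar radiation, air temperature and vapour pressure deficit; the sketch below illustrates that kind of baseline on synthetic data (it is not the authors' benchmarking code):

    ```python
    # Multi-linear regression benchmark of a flux (e.g. LE) on SW radiation,
    # air temperature, and VPD; returns the R^2 a TBM would need to beat.
    import numpy as np

    def benchmark_r2(flux, sw, tair, vpd):
        X = np.column_stack([np.ones_like(sw), sw, tair, vpd])
        beta, *_ = np.linalg.lstsq(X, flux, rcond=None)
        pred = X @ beta
        ss_res = np.sum((flux - pred) ** 2)
        ss_tot = np.sum((flux - flux.mean()) ** 2)
        return 1.0 - ss_res / ss_tot

    rng = np.random.default_rng(0)
    sw   = rng.uniform(0, 1000, 200)      # synthetic shortwave radiation
    tair = rng.uniform(15, 40, 200)       # synthetic air temperature
    vpd  = rng.uniform(0.5, 4.0, 200)     # synthetic vapour pressure deficit
    flux = 0.1 * sw + 2.0 * tair - 5.0 * vpd + rng.normal(0, 10, 200)   # synthetic LE
    print(round(benchmark_r2(flux, sw, tair, vpd), 3))
    ```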

  8. Monte Carlo modeling and analyses of YALINA-booster subcritical assembly part 1: analytical models and main neutronics parameters.

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division

    2008-09-11

    This study was carried out to model and analyze the YALINA-Booster facility, of the Joint Institute for Power and Nuclear Research of Belarus, with the long term objective of advancing the utilization of accelerator driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions. In the latter two cases a deuteron beam is used to generate the neutrons. This study is a part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity without any geometrical homogenization using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, which is an extension of the MCNP code with burnup capability because of its additional feature for analyzing source driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1.

  9. Stardust Interstellar Preliminary Examination IV: Scanning transmission X-ray microscopy analyses of impact features in the Stardust Interstellar Dust Collector

    Science.gov (United States)

    Butterworth, Anna L.; Westphal, Andrew J.; Tyliszczak, Tolek; Gainsforth, Zack; Stodolna, Julien; Frank, David R.; Allen, Carlton; Anderson, David; Ansari, Asna; Bajt, SašA.; Bastien, Ron K.; Bassim, Nabil; Bechtel, Hans A.; Borg, Janet; Brenker, Frank E.; Bridges, John; Brownlee, Donald E.; Burchell, Mark; Burghammer, Manfred; Changela, Hitesh; Cloetens, Peter; Davis, Andrew M.; Doll, Ryan; Floss, Christine; Flynn, George; Grün, Eberhard; Heck, Philipp R.; Hillier, Jon K.; Hoppe, Peter; Hudson, Bruce; Huth, Joachim; Hvide, Brit; Kearsley, Anton; King, Ashley J.; Lai, Barry; Leitner, Jan; Lemelle, Laurence; Leroux, Hugues; Leonard, Ariel; Lettieri, Robert; Marchant, William; Nittler, Larry R.; Ogliore, Ryan; Ong, Wei Ja; Postberg, Frank; Price, Mark C.; Sandford, Scott A.; Tresseras, Juan-Angel Sans; Schmitz, Sylvia; Schoonjans, Tom; Silversmit, Geert; Simionovici, Alexandre S.; Solé, Vicente A.; Srama, Ralf; Stadermann, Frank J.; Stephan, Thomas; Sterken, Veerle J.; Stroud, Rhonda M.; Sutton, Steven; Trieloff, Mario; Tsou, Peter; Tsuchiyama, Akira; Vekemans, Bart; Vincze, Laszlo; von Korff, Joshua; Wordsworth, Naomi; Zevin, Daniel; Zolensky, Michael E.

    2014-09-01

    We report the quantitative characterization by synchrotron soft X-ray spectroscopy of 31 potential impact features in the aerogel capture medium of the Stardust Interstellar Dust Collector. Samples were analyzed in aerogel by acquiring high spatial resolution maps and high energy-resolution spectra of major rock-forming elements Mg, Al, Si, Fe, and others. We developed diagnostic screening tests to reject spacecraft secondary ejecta and terrestrial contaminants from further consideration as interstellar dust candidates. The results support an extraterrestrial origin for three interstellar candidates: I1043,1,30 (Orion) is a 3 pg particle with Mg-spinel, forsterite, and an iron-bearing phase. I1047,1,34 (Hylabrook) is a 4 pg particle comprising an olivine core surrounded by low-density, amorphous Mg-silicate and amorphous Fe, Cr, and Mn phases. I1003,1,40 (Sorok) has the track morphology of a high-speed impact, but contains no detectable residue that is convincingly distinguishable from the background aerogel. Twenty-two samples with an anthropogenic origin were rejected, including four secondary ejecta from impacts on the Stardust spacecraft aft solar panels, nine ejecta from secondary impacts on the Stardust Sample Return Capsule, and nine contaminants lacking evidence of an impact. Other samples in the collection included I1029,1,6, which contained surviving solar system impactor material. Four samples remained ambiguous: I1006,2,18, I1044,2,32, and I1092,2,38 were too dense for analysis, and we did not detect an intact projectile in I1044,3,33. We detected no radiation effects from the synchrotron soft X-ray analyses; however, we recorded the effects of synchrotron hard X-ray radiation on I1043,1,30 and I1047,1,34.

  10. Examination of Modeling Languages to Allow Quantitative Analysis for Model-Based Systems Engineering

    Science.gov (United States)

    2014-06-01

    ...model of the system (Friedenthal, Moore and Steiner 2008, 17). The premise is that maintaining a logical and consistent model can be accomplished... the Standard for the Exchange of Product data (STEP) subgroup of ISO, and defines a standard data format for certain types of SE information (Johnson 2006).

  11. Identifying an appropriate measurement modeling approach for the Mini-Mental State Examination.

    Science.gov (United States)

    Rubright, Jonathan D; Nandakumar, Ratna; Karlawish, Jason

    2016-02-01

    The Mini-Mental State Examination (MMSE) is a 30-item, dichotomously scored test of general cognition. A number of benefits could be gained by modeling the MMSE in an item response theory (IRT) framework, as opposed to the currently used classical additive approach. However, the test, which is built from groups of items related to separate cognitive subdomains, may violate a key assumption of IRT: local item independence. This study aimed to identify the most appropriate measurement model for the MMSE: a unidimensional IRT model, a testlet response theory model, or a bifactor model. Local dependence analysis using nationally representative data showed a meaningful violation of the local item independence assumption, indicating multidimensionality. In addition, the testlet and bifactor models displayed superior fit indices over a unidimensional IRT model. Statistical comparisons showed that the bifactor model fit MMSE respondent data significantly better than the other models considered. These results suggest that application of a traditional unidimensional IRT model is inappropriate in this context. Instead, a bifactor model is suggested for future modeling of MMSE data as it more accurately represents the multidimensional nature of the scale.
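
    Schematically, the bifactor model favoured here lets every MMSE item load on a general cognition factor plus exactly one subdomain-specific factor; a generic two-parameter logistic version (an illustration, not the authors' exact specification) is:

    ```latex
    \[
      P\!\left(X_{ij}=1 \mid \theta_i^{G}, \theta_i^{s(j)}\right)
      = \frac{1}{1 + \exp\!\left[-\left(a_j^{G}\theta_i^{G} + a_j^{s(j)}\theta_i^{s(j)} + c_j\right)\right]},
    \]
    % which relaxes local independence within cognitive subdomains while retaining
    % a single general dimension for scoring.
    ```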

  12. Insights into the evolution of tectonically-active glaciated mountain ranges from digital elevation model analyses

    Science.gov (United States)

    Brocklehurst, S. H.; Whipple, K. X.

    2003-12-01

    Glaciers have played an important role in the development of most active mountain ranges around the world during the Quaternary, but the interaction between glacial erosion (as modulated by climate change) and tectonic processes is poorly understood. The so-called glacial buzzsaw hypothesis (Brozovic et al., 1997) proposes that glaciers can incise as rapidly as the most rapid rock uplift rates, such that glaciated landscapes experiencing different rock uplift rates but the same snowline elevation will look essentially the same, with mean elevations close to the snowline. Digital elevation model-based analyses of the glaciated landscapes of the Nanga Parbat region, Pakistan, and the Southern Alps, New Zealand, lend some support to this hypothesis, but also reveal considerably more variety to the landscapes of glaciated, tectonically-active mountain ranges. Larger glaciers in the Nanga Parbat region maintain a low downvalley gradient and valley floor elevations close to the snowline, even in the face of extremely rapid rock uplift. However, smaller glaciers steepen in response to rapid uplift, similar to the response of rivers. A strong correlation between the height of hillslopes rising from the cirque floors and rock uplift rates implies that erosion processes on hillslopes cannot initially keep up with more rapid glacial incision rates. It is these staggering hillslopes that permit mountain peaks to rise above 8000m. The glacial buzzsaw hypothesis does not describe the evolution of the Southern Alps as well, because here mean elevations rise in areas of more rapid rock uplift. The buzzsaw hypothesis may work well in the Nanga Parbat region because the zone of rapid rock uplift is structurally confined to a narrow region. Alternatively, the Southern Alps may not have been rising sufficiently rapidly or sufficiently long for the glacial buzzsaw to be imposed outside the most rapidly uplifting region, around Mount Cook. The challenge now is to understand in detail

  13. Developing an Innovative Customer Relationship Management Model for Better Health Examination Service

    Directory of Open Access Journals (Sweden)

    Lyu JrJung

    2014-11-01

    Full Text Available People place emphasis on their own health and wish to know more about their conditions. Chronic diseases now account for up to 50 percent of the top 10 causes of death. As a result, the health-care industry has emerged and kept thriving. This work adopts an innovative customer-oriented business model, since most clients are proactive and spontaneous in taking the “distinguished” health examination programs. We adopt the soft system dynamics methodology (SSDM) to develop and to evaluate the steps of introducing a customer relationship management model into a case health examination organization. Quantitative results are also presented for a case physical examination center to assess the improved efficiency. The case study shows that the procedures developed here could provide a better service.

  14. Developing a Customer Relationship Management Model for Better Health Examination Service

    Directory of Open Access Journals (Sweden)

    Lyu Jr-Jung

    2014-11-01

    Full Text Available People place emphasis on their own health and wish to know more about their conditions. Chronic diseases now account for up to 50 percent of the top 10 causes of death. As a result, the health-care industry has emerged and kept thriving. This work adopts a customer-oriented business model, since most clients are proactive and spontaneous in taking the “distinguished” health examination programs. We adopt the soft system dynamics methodology (SSDM) to develop and to evaluate the steps of introducing a customer relationship management model into a case health examination organization. Quantitative results are also presented for a case physical examination center to assess the improved efficiency. The case study shows that the procedures developed here could provide a better service.

  15. Using the Kaleidoscope Career Model to Examine Generational Differences in Work Attitudes

    Science.gov (United States)

    Sullivan, Sherry E.; Forret, Monica L.; Carraher, Shawn M.; Mainiero, Lisa A.

    2009-01-01

    Purpose: The purpose of this paper is to examine, utilising the Kaleidoscope Career Model, whether members of the Baby Boom generation and Generation X differ in their needs for authenticity, balance, and challenge. Design/methodology/approach: Survey data were obtained from 982 professionals located across the USA. Correlations, t-tests, and…

  16. Sport Education and Extracurricular Sport Participation: An Examination Using the Trans-Contextual Model of Motivation

    Science.gov (United States)

    Wallhead, Tristan L.; Hagger, Martin; Smith, Derek T.

    2010-01-01

    In this study, we used the trans-contextual model of motivation (TCM) to examine the effect of Sport Education (SE) on students' participation in a voluntary lunch recess sport club. A total of 192 participants (ages 9-14 years) completed measures of the TCM constructs before and after a 12-week SE intervention period. Participants had the…

  17. Examining a model of dispositional mindfulness, body comparison, and body satisfaction

    NARCIS (Netherlands)

    Dijkstra, Pieternel; Barelds, Dick P. H.

    2011-01-01

    The present study examined the links between dispositional mindfulness, body comparison, and body satisfaction. It was expected that mindfulness would be associated with less body comparison and more body satisfaction. Two models were tested: one exploring body comparison as a mediator between mindf

  18. Does Model Matter? Examining Change across Time for Youth in Group Homes

    Science.gov (United States)

    Farmer, Elizabeth M. Z.; Seifert, Heather; Wagner, H. Ryan; Burns, Barbara J.; Murray, Maureen

    2017-01-01

    Group homes are a frequently used but controversial treatment setting for youth with mental health problems. Within the relatively sparse literature on group homes, there is some evidence that some models of treatment may be associated with more positive outcomes for youth. This article explores this possibility by examining differences across…

  19. Examining a model of dispositional mindfulness, body comparison, and body satisfaction

    NARCIS (Netherlands)

    Dijkstra, Pieternel; Barelds, Dick P. H.

    2011-01-01

    The present study examined the links between dispositional mindfulness, body comparison, and body satisfaction. It was expected that mindfulness would be associated with less body comparison and more body satisfaction. Two models were tested: one exploring body comparison as a mediator between mindfulness…

  20. Educational productivity in higher education : An examination of part of the Walberg Educational Productivity Model

    NARCIS (Netherlands)

    Bruinsma, M.; Jansen, E. P. W. A.

    Several factors in the H. J. Walberg Educational Productivity Model, which assumes that 9 factors affect academic achievement, were examined with a limited sample of 1st-year students in the University of Groningen. Information concerning 8 of these factors - grades, motivation, age, prior achievement…

  1. College Students Coping with Interpersonal Stress: Examining a Control-Based Model of Coping

    Science.gov (United States)

    Coiro, Mary Jo; Bettis, Alexandra H.; Compas, Bruce E.

    2017-01-01

    Objective: The ways that college students cope with stress, particularly interpersonal stress, may be a critical factor in determining which students are at risk for impairing mental health disorders. Using a control-based model of coping, the present study examined associations between interpersonal stress, coping strategies, and symptoms.…

  2. Structure of Anxiety and Depression in Urban Youth: An Examination of the Tripartite Model

    Science.gov (United States)

    Lambert, Sharon F.; McCreary, Beth T.; Joiner, Thomas E.; Schmidt, Norman B.; Ialongo, Nicolas S.

    2004-01-01

    In this study, the authors examined the validity of the tripartite model of anxiety and depression (L. A. Clark & D. Watson, 1991) in a community epidemiological sample of 467 urban African American youth. Participants completed the Baltimore How I Feel (N. S. Ialongo, S. G. Kellam, & J. Poduska, 1999), a measure of anxiety and depressive…

  3. Examining Factors Affecting Science Achievement of Hong Kong in PISA 2006 Using Hierarchical Linear Modeling

    Science.gov (United States)

    Lam, Terence Yuk Ping; Lau, Kwok Chi

    2014-01-01

    This study uses hierarchical linear modeling to examine the influence of a range of factors on the science performances of Hong Kong students in PISA 2006. Hong Kong has been consistently ranked highly in international science assessments, such as Programme for International Student Assessment and Trends in International Mathematics and Science…

  4. Examining a model of dispositional mindfulness, body comparison, and body satisfaction

    NARCIS (Netherlands)

    Dijkstra, Pieternel; Barelds, Dick P. H.

    The present study examined the links between dispositional mindfulness, body comparison, and body satisfaction. It was expected that mindfulness would be associated with less body comparison and more body satisfaction. Two models were tested: one exploring body comparison as a mediator between mindfulness…

  5. Sport Education and Extracurricular Sport Participation: An Examination Using the Trans-Contextual Model of Motivation

    Science.gov (United States)

    Wallhead, Tristan L.; Hagger, Martin; Smith, Derek T.

    2010-01-01

    In this study, we used the trans-contextual model of motivation (TCM) to examine the effect of Sport Education (SE) on students' participation in a voluntary lunch recess sport club. A total of 192 participants (ages 9-14 years) completed measures of the TCM constructs before and after a 12-week SE intervention period. Participants had the…

  6. Educational productivity in higher education : An examination of part of the Walberg Educational Productivity Model

    NARCIS (Netherlands)

    Bruinsma, M.; Jansen, E. P. W. A.

    2007-01-01

    Several factors in the H. J. Walberg Educational Productivity Model, which assumes that 9 factors affect academic achievement, were examined with a limited sample of 1st-year students in the University of Groningen. Information concerning 8 of these factors - grades, motivation, age, prior achievement…

  7. Using the Kaleidoscope Career Model to Examine Generational Differences in Work Attitudes

    Science.gov (United States)

    Sullivan, Sherry E.; Forret, Monica L.; Carraher, Shawn M.; Mainiero, Lisa A.

    2009-01-01

    Purpose: The purpose of this paper is to examine, utilising the Kaleidoscope Career Model, whether members of the Baby Boom generation and Generation X differ in their needs for authenticity, balance, and challenge. Design/methodology/approach: Survey data were obtained from 982 professionals located across the USA. Correlations, t-tests, and…

  8. Soil carbon response to land-use change: evaluation of a global vegetation model using observational meta-analyses

    Science.gov (United States)

    Nyawira, Sylvia S.; Nabel, Julia E. M. S.; Don, Axel; Brovkin, Victor; Pongratz, Julia

    2016-10-01

    Global model estimates of soil carbon changes from past land-use changes remain uncertain. We develop an approach for evaluating dynamic global vegetation models (DGVMs) against existing observational meta-analyses of soil carbon changes following land-use change. Using the DGVM JSBACH, we perform idealized simulations where the entire globe is covered by one vegetation type, which then undergoes a land-use change to another vegetation type. We select the grid cells that represent the climatic conditions of the meta-analyses and compare the mean simulated soil carbon changes to the meta-analyses. Our simulated results show model agreement with the observational data on the direction of changes in soil carbon for some land-use changes, although the model simulated a generally smaller magnitude of changes. The conversion of crop to forest resulted in soil carbon gain of 10 % compared to a gain of 42 % in the data, whereas the forest-to-crop change resulted in a simulated loss of -15 % compared to -40 %. The model and the observational data disagreed for the conversion of crop to grasslands. The model estimated a small soil carbon loss (-4 %), while observational data indicate a 38 % gain in soil carbon for the same land-use change. These model deviations from the observations are substantially reduced by explicitly accounting for crop harvesting and ignoring burning in grasslands in the model. We conclude that our idealized simulation approach provides an appropriate framework for evaluating DGVMs against meta-analyses and that this evaluation helps to identify the causes of deviation of simulated soil carbon changes from the meta-analyses.

  9. Examining Human Behavior in Video Games: The Development of a Computational Model to Measure Aggression.

    Science.gov (United States)

    Lamb, Richard; Annetta, Leonard; Hoston, Douglas; Shapiro, Marina; Matthews, Benjamin

    2017-04-11

    Video games with violent content have raised considerable concern in popular media and within academia. Recently, there has been considerable attention regarding the claimed relationship between aggression and video game play. The authors of this study propose the use of a new class of tools developed via computational models to allow examination of the question: is there a relationship between violent video games and aggression? The purpose of this study is to computationally model and compare the General Aggression Model with the Diathesis Model of Aggression as they relate to the play of violent content in video games. A secondary purpose is to provide a method of measuring and examining individual aggression arising from video game play. A total of N=1065 participants were examined in this study. The study occurs in three phases. Phase 1 is the development and quantification of the profile combination of traits via latent class profile analysis. Phase 2 is the training of the artificial neural network. Phase 3 is the comparison of each model as a computational model with and without the presence of video game violence. Results suggest that a combination of environmental factors and genetic predispositions triggers aggression related to video games.

  10. A very simple dynamic soil acidification model for scenario analyses and target load calculations

    NARCIS (Netherlands)

    Posch, M.; Reinds, G.J.

    2009-01-01

    A very simple dynamic soil acidification model, VSD, is described, which has been developed as the simplest extension of steady-state models for critical load calculations and with an eye on regional applications. The model requires only a minimum set of inputs (compared to more detailed models) and…

  11. A Conceptual Model for Analysing Management Development in the UK Hospitality Industry

    Science.gov (United States)

    Watson, Sandra

    2007-01-01

    This paper presents a conceptual, contingent model of management development. It explains the nature of the UK hospitality industry and its potential influence on MD practices, prior to exploring dimensions and relationships in the model. The embryonic model is presented as a model that can enhance our understanding of the complexities of the…

  12. An examination of the cross-cultural validity of the Identity Capital Model: American and Japanese students compared.

    Science.gov (United States)

    Côté, James E; Mizokami, Shinichi; Roberts, Sharon E; Nakama, Reiko

    2016-01-01

    The Identity Capital Model proposes that forms of personal agency are associated with identity development as part of the transition to adulthood. This model was examined in two cultural contexts, taking into account age and gender, among college and university students aged 18 to 24 (N = 995). Confirmatory Factor Analyses verified cultural, age, and gender invariance of the two key operationalizations of the model. A Structural Equation Model path analysis confirmed that the model applies in both cultures with minor variations: types of personal agency are associated with the formation of adult- and societal-identities as part of the resolution of the identity stage. It was concluded that forms of personal agency providing the most effective ways of dealing with "individualization" (e.g., internal locus of control) are more important in the transition to adulthood among American students, whereas types of personal agency most effective in dealing with "individualistic collectivism" (e.g., ego strength) are more important among Japanese students.

  13. Secondary Evaluations of MTA 36-Month Outcomes: Propensity Score and Growth Mixture Model Analyses

    Science.gov (United States)

    Swanson, James M.; Hinshaw, Stephen P.; Arnold, L. Eugene; Gibbons, Robert D.; Marcus, Sue; Hur, Kwan; Jensen, Peter S.; Vitiello, Benedetto; Abikoff, Howard B.; Greenhill, Laurence L.; Hechtman, Lily; Pelham, William E.; Wells, Karen C.; Conners, C. Keith; March, John S.; Elliott, Glen R.; Epstein, Jeffery N.; Hoagwood, Kimberly; Hoza, Betsy; Molina, Brooke S. G.; Newcorn, Jeffrey H.; Severe, Joanne B.; Wigal, Timothy

    2007-01-01

    Objective: To evaluate two hypotheses: that self-selection bias contributed to lack of medication advantage at the 36-month assessment of the Multimodal Treatment Study of Children With ADHD (MTA) and that overall improvement over time obscured treatment effects in subgroups with different outcome trajectories. Method: Propensity score analyses,…

  14. Secondary Evaluations of MTA 36-Month Outcomes: Propensity Score and Growth Mixture Model Analyses

    Science.gov (United States)

    Swanson, James M.; Hinshaw, Stephen P.; Arnold, L. Eugene; Gibbons, Robert D.; Marcus, Sue; Hur, Kwan; Jensen, Peter S.; Vitiello, Benedetto; Abikoff, Howard B.; Greenhill, Laurence L.; Hechtman, Lily; Pelham, William E.; Wells, Karen C.; Conners, C. Keith; March, John S.; Elliott, Glen R.; Epstein, Jeffery N.; Hoagwood, Kimberly; Hoza, Betsy; Molina, Brooke S. G.; Newcorn, Jeffrey H.; Severe, Joanne B.; Wigal, Timothy

    2007-01-01

    Objective: To evaluate two hypotheses: that self-selection bias contributed to lack of medication advantage at the 36-month assessment of the Multimodal Treatment Study of Children With ADHD (MTA) and that overall improvement over time obscured treatment effects in subgroups with different outcome trajectories. Method: Propensity score analyses,…

  15. Discrete Event Simulation Models for CT Examination Queuing in West China Hospital.

    Science.gov (United States)

    Luo, Li; Liu, Hangjiang; Liao, Huchang; Tang, Shijun; Shi, Yingkang; Guo, Huili

    2016-01-01

    In CT examination, the emergency patients (EPs) have the highest priority in the queuing system and thus the general patients (GPs) have to wait for a long time. This leads to low satisfaction among patients as a whole. The aim of this study is to improve patients' satisfaction by designing new queuing strategies for CT examination. We divide the EPs into urgent type and emergency type and then design two queuing strategies: one is that the urgent patients (UPs) wedge into the GPs' queue at a fixed interval (fixed priority model) and the other is that the patients have dynamic priorities for queuing (dynamic priority model). Based on the data from the Radiology Information Database (RID) of West China Hospital (WCH), we develop discrete event simulation models for CT examination according to the designed strategies. We compare the performance of the different strategies on the basis of the simulation results. The dynamic priority strategy decreases the waiting time of GPs by 13 minutes and increases the degree of satisfaction by 40.6%. We thus design a more reasonable CT examination queuing strategy that decreases patients' waiting time and increases their satisfaction.
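    As a rough illustration of the kind of discrete event simulation described above (not the authors' RID-calibrated models), the following Python sketch compares a first-come, first-served queue with a strict-priority queue for a single CT scanner; the arrival rate, scan time and urgent share are hypothetical.

```python
import heapq
import random

def simulate(priority_queue, n=2000, mean_interarrival=9.0, scan_time=8.0, urgent_share=0.15):
    """Toy single-scanner CT clinic; returns the mean wait of general patients (GPs)."""
    random.seed(42)  # same arrival stream for both strategies
    # Pre-generate arrival times and patient classes (0 = urgent, 1 = general).
    t, patients = 0.0, []
    for _ in range(n):
        t += random.expovariate(1.0 / mean_interarrival)
        patients.append((t, 0 if random.random() < urgent_share else 1))

    waiting, gp_waits = [], []
    free_at, i, served = 0.0, 0, 0
    while served < n:
        # Everyone who has arrived by the time the scanner frees up joins the queue.
        while i < n and patients[i][0] <= free_at:
            arrive, cls = patients[i]
            key = (cls, arrive) if priority_queue else (arrive, cls)
            heapq.heappush(waiting, (key, arrive, cls))
            i += 1
        if not waiting:                      # scanner idle: jump ahead to the next arrival
            free_at = patients[i][0]
            continue
        _, arrive, cls = heapq.heappop(waiting)
        if cls == 1:
            gp_waits.append(free_at - arrive)
        free_at += scan_time
        served += 1
    return sum(gp_waits) / len(gp_waits)

print("FCFS mean GP wait (min):    ", round(simulate(False), 1))
print("Priority mean GP wait (min):", round(simulate(True), 1))
```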

  16. Examining rainfall and cholera dynamics in Haiti using statistical and dynamic modeling approaches.

    Science.gov (United States)

    Eisenberg, Marisa C; Kujbida, Gregory; Tuite, Ashleigh R; Fisman, David N; Tien, Joseph H

    2013-12-01

    Haiti has been in the midst of a cholera epidemic since October 2010. Rainfall is thought to be associated with cholera here, but this relationship has only begun to be quantitatively examined. In this paper, we quantitatively examine the link between rainfall and cholera in Haiti for several different settings (including urban, rural, and displaced person camps) and spatial scales, using a combination of statistical and dynamic models. Statistical analysis of the lagged relationship between rainfall and cholera incidence was conducted using case crossover analysis and distributed lag nonlinear models. Dynamic models consisted of compartmental differential equation models including direct (fast) and indirect (delayed) disease transmission, where indirect transmission was forced by empirical rainfall data. Data sources include cholera case and hospitalization time series from the Haitian Ministry of Public Health, the United Nations Water, Sanitation and Health Cluster, International Organization for Migration, and Hôpital Albert Schweitzer. Rainfall data was obtained from rain gauges from the U.S. Geological Survey and Haiti Regeneration Initiative, and remote sensing rainfall data from the National Aeronautics and Space Administration Tropical Rainfall Measuring Mission. A strong relationship between rainfall and cholera was found for all spatial scales and locations examined. Increased rainfall was significantly correlated with increased cholera incidence 4-7 days later. Forcing the dynamic models with rainfall data resulted in good fits to the cholera case data, and rainfall-based predictions from the dynamic models closely matched observed cholera cases. These models provide a tool for planning and managing the epidemic as it continues.
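    A minimal sketch of a rainfall-forced compartmental model in the spirit of the dynamic models described above; the SIWR-style structure, parameter values and rainfall function below are illustrative assumptions, not the fitted Haiti models.

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative parameter values only (not fitted to the Haiti data).
BETA_I, BETA_W, GAMMA, XI, DELTA = 0.25, 0.40, 1 / 5.0, 0.01, 1 / 30.0

def rainfall(t):
    """Hypothetical rainfall forcing (mm/day): a smooth seasonal pulse."""
    return 5.0 * (1.0 + np.sin(2.0 * np.pi * t / 180.0)) ** 2

def siwr(y, t):
    """SIWR-type model: direct (fast) and rainfall-forced indirect (waterborne) transmission."""
    s, i, w, r = y
    force = BETA_I * i + BETA_W * w             # combined force of infection
    ds = -force * s
    di = force * s - GAMMA * i
    dw = XI * rainfall(t) * i - DELTA * w       # rainfall washes pathogen into water sources
    dr = GAMMA * i
    return [ds, di, dw, dr]

t = np.linspace(0, 365, 366)
y0 = [0.999, 0.001, 0.0, 0.0]                   # population fractions plus a water-pathogen index
sol = odeint(siwr, y0, t)
print("Peak infectious fraction %.3f on day %d" % (sol[:, 1].max(), sol[:, 1].argmax()))
```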

  17. Pathophysiologic and transcriptomic analyses of viscerotropic yellow fever in a rhesus macaque model.

    Science.gov (United States)

    Engelmann, Flora; Josset, Laurence; Girke, Thomas; Park, Byung; Barron, Alex; Dewane, Jesse; Hammarlund, Erika; Lewis, Anne; Axthelm, Michael K; Slifka, Mark K; Messaoudi, Ilhem

    2014-01-01

    Infection with yellow fever virus (YFV), an explosively replicating flavivirus, results in viral hemorrhagic disease characterized by cardiovascular shock and multi-organ failure. Unvaccinated populations experience 20 to 50% fatality. Few studies have examined the pathophysiological changes that occur in humans during YFV infection due to the sporadic nature and remote locations of outbreaks. Rhesus macaques are highly susceptible to YFV infection, providing a robust animal model to investigate host-pathogen interactions. In this study, we characterized disease progression as well as alterations in immune system homeostasis, cytokine production and gene expression in rhesus macaques infected with the virulent YFV strain DakH1279 (YFV-DakH1279). Following infection, YFV-DakH1279 replicated to high titers resulting in viscerotropic disease with ∼72% mortality. Data presented in this manuscript demonstrate for the first time that lethal YFV infection results in profound lymphopenia that precedes the hallmark changes in liver enzymes and that although tissue damage was noted in liver, kidneys, and lymphoid tissues, viral antigen was only detected in the liver. These observations suggest that additional tissue damage could be due to indirect effects of viral replication. Indeed, circulating levels of several cytokines peaked shortly before euthanasia. Our study also includes the first description of YFV-DakH1279-induced changes in gene expression within peripheral blood mononuclear cells 3 days post-infection prior to any clinical signs. These data show that infection with wild type YFV-DakH1279 or live-attenuated vaccine strain YFV-17D, resulted in 765 and 46 differentially expressed genes (DEGs), respectively. DEGs detected after YFV-17D infection were mostly associated with innate immunity, whereas YFV-DakH1279 infection resulted in dysregulation of genes associated with the development of immune response, ion metabolism, and apoptosis. Therefore, WT-YFV infection

  18. Using an operating cost model to analyse the selection of aircraft type on short-haul routes

    CSIR Research Space (South Africa)

    Ssamula, B

    2006-08-01

    Full Text Available … and the effect of passenger volume analysed. The model was applied to a specific route within Africa, and thereafter to varying passenger numbers, to choose the least costly aircraft. The results showed that smaller capacity aircraft, even though limited by maximum...

  19. Pathway models for analysing and managing the introduction of alien plant pests—an overview and categorization

    Science.gov (United States)

    J.C. Douma; M. Pautasso; R.C. Venette; C. Robinet; L. Hemerik; M.C.M. Mourits; J. Schans; W. van der Werf

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative estimates of introduction risks and effectiveness of management options....

  20. Developing computational model-based diagnostics to analyse clinical chemistry data

    NARCIS (Netherlands)

    Schalkwijk, D.B. van; Bochove, K. van; Ommen, B. van; Freidig, A.P.; Someren, E.P. van; Greef, J. van der; Graaf, A.A. de

    2010-01-01

    This article provides methodological and technical considerations to researchers starting to develop computational model-based diagnostics using clinical chemistry data. These models are of increasing importance, since novel metabolomics and proteomics measuring technologies are able to produce large…

  1. Bio-economic farm modelling to analyse agricultural land productivity in Rwanda

    NARCIS (Netherlands)

    Bidogeza, J.C.

    2011-01-01

    Keywords: Rwanda; farm household typology; sustainable technology adoption; multivariate analysis; land degradation; food security; bioeconomic model; crop simulation models; organic fertiliser; inorganic fertiliser; policy incentives.
    In Rwanda, land degradation contributes to the low and…

  2. Comparative study analysing women's childbirth satisfaction and obstetric outcomes across two different models of maternity care

    OpenAIRE

    Conesa Ferrer, Ma Belén; Canteras Jordana, Manuel; Ballesteros Meseguer, Carmen; Carrillo García, César; Martínez Roche, M Emilia

    2016-01-01

    Objectives To describe the differences in obstetrical results and women's childbirth satisfaction across 2 different models of maternity care (biomedical model and humanised birth). Setting 2 university hospitals in south-eastern Spain from April to October 2013. Design A correlational descriptive study. Participants A convenience sample of 406 women participated in the study, 204 of the biomedical model and 202 of the humanised model. Results The differences in obstetrical results were (biom...

  3. Quantifying and Analysing Neighbourhood Characteristics Supporting Urban Land-Use Modelling

    DEFF Research Database (Denmark)

    Hansen, Henning Sten

    2009-01-01

    Land-use modelling and spatial scenarios have gained increased attention as a means to meet the challenge of reducing uncertainty in spatial planning and decision-making. Several organisations have developed software for land-use modelling. Many of the recent modelling efforts incorporate cellular…

  4. Examining the Support Peer Supporters Provide Using Structural Equation Modeling: Nondirective and Directive Support in Diabetes Management.

    Science.gov (United States)

    Kowitt, Sarah D; Ayala, Guadalupe X; Cherrington, Andrea L; Horton, Lucy A; Safford, Monika M; Soto, Sandra; Tang, Tricia S; Fisher, Edwin B

    2017-04-17

    Little research has examined the characteristics of peer support. Pertinent to such examination may be characteristics such as the distinction between nondirective support (accepting recipients' feelings and cooperative with their plans) and directive (prescribing "correct" choices and feelings). In a peer support program for individuals with diabetes, this study examined (a) whether the distinction between nondirective and directive support was reflected in participants' ratings of support provided by peer supporters and (b) how nondirective and directive support were related to depressive symptoms, diabetes distress, and Hemoglobin A1c (HbA1c). Three hundred fourteen participants with type 2 diabetes provided data on depressive symptoms, diabetes distress, and HbA1c before and after a diabetes management intervention delivered by peer supporters. At post-intervention, participants reported how the support provided by peer supporters was nondirective or directive. Confirmatory factor analysis (CFA), correlation analyses, and structural equation modeling examined the relationships among reports of nondirective and directive support, depressive symptoms, diabetes distress, and measured HbA1c. CFA confirmed the factor structure distinguishing between nondirective and directive support in participants' reports of support delivered by peer supporters. Controlling for demographic factors, baseline clinical values, and site, structural equation models indicated that at post-intervention, participants' reports of nondirective support were significantly associated with lower, while reports of directive support were significantly associated with greater depressive symptoms, altogether (with control variables) accounting for 51% of the variance in depressive symptoms. Peer supporters' nondirective support was associated with lower, but directive support was associated with greater depressive symptoms.

  5. Evaluation of a skin self examination attitude scale using an item response theory model approach.

    Science.gov (United States)

    Djaja, Ngadiman; Youl, Pip; Aitken, Joanne; Janda, Monika

    2014-12-24

    The Skin Self-Examination Attitude Scale (SSEAS) is a brief measure that allows for the assessment of attitudes in relation to skin self-examination. This study evaluated the psychometric properties of the SSEAS using Item Response Theory (IRT) methods in a large sample of men ≥ 50 years in Queensland, Australia. A sample of 831 men (420 intervention and 411 control) completed a telephone assessment at the 13-month follow-up of a randomized-controlled trial of a video-based intervention to improve skin self-examination (SSE) behaviour. Descriptive statistics (mean, standard deviation, item-total correlations, and Cronbach's alpha) were compiled and difficulty parameters were computed with Winsteps using the polytomous Rasch Rating Scale Model (RRSM). An item-person (Wright) map of the SSEAS was examined for content coverage and item targeting. The SSEAS has good psychometric properties, including good internal consistency (Cronbach's alpha = 0.80) and fit with the model, and no evidence of differential item functioning (DIF) due to experimental trial grouping was detected. The present study confirms the SSEAS as a brief, useful and reliable tool for assessing attitudes towards skin self-examination in a population of men 50 years or older in Queensland, Australia. The 8-item scale shows unidimensionality, allowing levels of SSE attitude, and the item difficulties, to be ranked on a single continuous scale. In terms of clinical practice, it is very important to assess skin cancer self-examination attitude to identify people who may need a more extensive intervention to allow early detection of skin cancer.
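    For readers unfamiliar with the polytomous Rasch Rating Scale Model mentioned above, the sketch below computes category probabilities for a single item; the person, item and threshold values are invented and unrelated to the SSEAS calibration performed in Winsteps.

```python
import numpy as np

def rsm_probs(theta, delta, taus):
    """Category probabilities of the Rasch Rating Scale Model.

    theta: person attitude, delta: item difficulty,
    taus: category thresholds tau_1..tau_m (shared across items).
    Returns probabilities for response categories 0..m.
    """
    steps = theta - delta - np.asarray(taus)                   # theta - (delta + tau_j)
    numerators = np.exp(np.concatenate(([0.0], np.cumsum(steps))))
    return numerators / numerators.sum()

# A 4-category item (three thresholds); all values are illustrative only.
for theta in (-1.0, 0.0, 1.5):
    p = rsm_probs(theta, delta=0.2, taus=[-0.8, 0.1, 0.9])
    expected = float(np.dot(p, np.arange(4)))
    print(theta, np.round(p, 3), "expected score:", round(expected, 2))
```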

  6. Driver Model of a Powered Wheelchair Operation as a Tool of Theoretical Analyses

    Science.gov (United States)

    Ito, Takuma; Inoue, Takenobu; Shino, Motoki; Kamata, Minoru

    This paper describes the construction of a driver model of powered wheelchair operation to help understand the characteristics of the driver. Existing research on driver models mainly targets the operation of automobiles and motorcycles, not low-speed vehicles such as powered wheelchairs. We therefore started by verifying that the turning operation at a corridor corner can be modeled. First, we conducted an experiment with a daily powered wheelchair user driving his own vehicle. The experiment confirmed both high reproducibility of driving and the driving characteristics needed to construct a driver model. Next, driving-simulator experiments were conducted to collect quantitative driving data. The parameters of the proposed driver model were identified from the experimental results, and the characteristics of the model were analyzed through simulations with the identified parameters.

  7. Fixed- and random-effects meta-analytic structural equation modeling: examples and analyses in R.

    Science.gov (United States)

    Cheung, Mike W-L

    2014-03-01

    Meta-analytic structural equation modeling (MASEM) combines the ideas of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Cheung and Chan (Psychological Methods 10:40-64, 2005b, Structural Equation Modeling 16:28-53, 2009) proposed a two-stage structural equation modeling (TSSEM) approach to conducting MASEM that was based on a fixed-effects model by assuming that all studies have the same population correlation or covariance matrices. The main objective of this article is to extend the TSSEM approach to a random-effects model by the inclusion of study-specific random effects. Another objective is to demonstrate the procedures with two examples using the metaSEM package implemented in the R statistical environment. Issues related to and future directions for MASEM are discussed.
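    As a toy illustration of the stage-1 idea behind MASEM (combining study correlation matrices before fitting a structural model), the sketch below uses a simple sample-size-weighted average; the actual TSSEM procedure in the metaSEM package uses multigroup/GLS estimation, so this is only a conceptual stand-in with hypothetical study matrices.

```python
import numpy as np

def pool_correlations(R_list, n_list):
    """Crude fixed-effects pooling: sample-size-weighted mean of correlation matrices."""
    R = np.array(R_list, dtype=float)
    w = np.array(n_list, dtype=float)
    pooled = np.tensordot(w, R, axes=1) / w.sum()   # sum_i w_i * R_i / sum_i w_i
    np.fill_diagonal(pooled, 1.0)
    return pooled

# Three hypothetical studies reporting 3x3 correlation matrices.
R1 = [[1.00, 0.30, 0.20], [0.30, 1.00, 0.45], [0.20, 0.45, 1.00]]
R2 = [[1.00, 0.25, 0.15], [0.25, 1.00, 0.50], [0.15, 0.50, 1.00]]
R3 = [[1.00, 0.40, 0.28], [0.40, 1.00, 0.38], [0.28, 0.38, 1.00]]
print(np.round(pool_correlations([R1, R2, R3], [120, 340, 85]), 3))
```

    Stage 2 would then fit the structural equation model of interest to the pooled matrix (with appropriate precision weighting in the real procedure).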

  8. A Research Agenda to Examine the Efficacy and Relevance of the Transtheoretical Model for Physical Activity Behavior.

    Science.gov (United States)

    Nigg, Claudio R; Geller, Karly S; Motl, Rob W; Horwath, Caroline C; Wertin, Kristin K; Dishman, Rodney K

    2011-01-01

    Regular physical activity (PA) decreases the risk of several chronic diseases including some cancers, type II diabetes, obesity, and cardiovascular disease; however, the majority of US adults are not meeting the recommended levels to experience these benefits. To address this public health concern, the underlying mechanisms for behavior change need to be understood, translated and disseminated into appropriately tailored interventions. The Transtheoretical Model (TTM) provides a framework for both the conceptualization and measurement of behavior change, as well as facilitating promotion strategies that are individualized and easily adapted. The purpose of this manuscript is to present the constructs of the TTM as they relate to PA behavior change. We begin with a brief synopsis of recent examinations of the TTM constructs and their application. Subsequent to its introduction, we specifically present the TTM within the PA context and discuss its application and usefulness to researchers and practitioners. Criticisms of the TTM are also noted and presented as opportunities for future research to enhance the valid application of the TTM. We offer general study design recommendations to appropriately test the hypothesized relationships within the model. With further examinations using appropriate study design and statistical analyses, we believe the TTM has the potential to advance the public health impact of future PA promotion interventions.

  9. White Matter Abnormalities and Animal Models Examining a Putative Role of Altered White Matter in Schizophrenia

    Directory of Open Access Journals (Sweden)

    Haiyun Xu

    2011-01-01

    Full Text Available Schizophrenia is a severe mental disorder affecting about 1% of the population worldwide. Although the dopamine (DA) hypothesis still holds a dominant position in schizophrenia research, new advances have been emerging in recent years, which suggest the implication of white matter abnormalities in schizophrenia. In this paper, we will briefly review some recent human studies showing white matter abnormalities in schizophrenic brains and altered oligodendrocyte (OL)- and myelin-related genes in patients with schizophrenia, and will consider abnormal behaviors reported in patients with white matter diseases. Following these, we will selectively introduce some animal models examining a putative role of white matter abnormalities in schizophrenia. The emphasis will be put on the cuprizone (CPZ) model. CPZ-fed mice show demyelination and OL loss, display schizophrenia-related behaviors, and have higher DA levels in the prefrontal cortex. These features suggest that the CPZ model is a novel animal model of schizophrenia.

  10. Examination of a sociocultural model of excessive exercise among male and female adolescents.

    Science.gov (United States)

    White, James; Halliwell, Emma

    2010-06-01

    There is substantial evidence that sociocultural pressures and body image disturbances can lead to disordered eating, yet few studies have examined their impact on excessive exercise. The study adapted a sociocultural model for disordered eating to predict excessive exercise using data from boys and girls in early adolescence (N=421). Perceived sociocultural pressures to lose weight and build muscle, body image disturbance and appearance investment were associated with a compulsive need to exercise. Adolescents' investment in appearance and body image disturbance fully mediated the relationship between sociocultural pressures and a compulsive need for exercise. There was no support for the mediational model in predicting adolescents' frequency or duration of exercise. Results support the sociocultural model as an explanatory model for excessive exercise, but suggest appearance investment and body image disturbance are important mediators of sociocultural pressures. 2010 Elsevier Ltd. All rights reserved.
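    The mediation logic tested in such sociocultural models can be illustrated with a simple product-of-coefficients calculation; the simulated variables and effect sizes below are hypothetical and are not the study's data or its actual analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400

# Simulate data consistent with full mediation:
# sociocultural pressure -> body image disturbance -> compulsive exercise.
pressure = rng.normal(size=n)
disturbance = 0.6 * pressure + rng.normal(size=n)                      # a-path
exercise = 0.5 * disturbance + 0.0 * pressure + rng.normal(size=n)     # b-path, no direct effect

def ols_slopes(y, *xs):
    """OLS slopes of y on the given predictors (intercept excluded from the return value)."""
    X = np.column_stack([np.ones(len(y))] + list(xs))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols_slopes(disturbance, pressure)[0]
b, c_prime = ols_slopes(exercise, disturbance, pressure)
print("a = %.2f, b = %.2f, direct c' = %.2f, indirect a*b = %.2f" % (a, b, c_prime, a * b))
```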

  11. Cross-cultural examination of the tripartite model with children: data from the Barretstown studies.

    Science.gov (United States)

    Kiernan, G; Laurent, J; Joiner, T E; Catanzaro, S J; MacLachlan, M

    2001-10-01

    The Positive and Negative Affect Scale for Children (PANAS-C) and the Physiological Hyperarousal Scale for Children (PH-C) were administered to a group of 240 children from European countries to determine their utility in examining the tripartite model of anxiety and depression in a cross-cultural sample. Most of the children (n = 196) had been diagnosed with a medical illness; the remainder were siblings of these youngsters (n = 44). Only slight variations were noted in items between this sample and samples from the United States. Despite these minor differences, 3 distinct scales measuring the positive affect, negative affect, and physiological hyperarousal constructs of the tripartite model were identified. These findings illustrate that the PH-PANAS-C provides a useful measure of the tripartite model in a cross-cultural sample of youth. The findings also demonstrate that the tripartite model is generalizable to a cross-cultural milieu.

  12. Transcriptomics and proteomics analyses of the PACAP38 influenced ischemic brain in permanent middle cerebral artery occlusion model mice

    Directory of Open Access Journals (Sweden)

    Hori Motohide

    2012-11-01

    Full Text Available Abstract Introduction The neuropeptide pituitary adenylate cyclase-activating polypeptide (PACAP) is considered to be a potential therapeutic agent for prevention of cerebral ischemia. Ischemia is among the most common causes of death after heart attack and cancer, causing major negative social and economic consequences. This study was designed to investigate the effect of intracerebroventricular PACAP38 injection in a mouse model of permanent middle cerebral artery occlusion (PMCAO), along with a corresponding SHAM control that used 0.9% saline injection. Methods Ischemic and non-ischemic brain tissues were sampled at 6 and 24 hours post-treatment. Following behavioral analyses to confirm whether the ischemia had occurred, we investigated the genome-wide changes in gene and protein expression using a DNA microarray chip (4x44K, Agilent) and two-dimensional gel electrophoresis (2-DGE) coupled with matrix-assisted laser desorption/ionization-time of flight-mass spectrometry (MALDI-TOF-MS), respectively. Western blotting and immunofluorescent staining were also used to further examine the identified protein factor. Results Our results revealed numerous changes in the transcriptome of the ischemic hemisphere (ipsilateral) treated with PACAP38 compared to the saline-injected SHAM control hemisphere (contralateral). Previously known (such as the interleukin family) and novel (Gabra6, Crtam) genes were identified under PACAP influence. In parallel, 2-DGE analysis revealed a highly expressed protein spot in the ischemic hemisphere that was identified as dihydropyrimidinase-related protein 2 (DPYL2). DPYL2, also known as Crmp2, is a marker of axonal growth and nerve development. Interestingly, PACAP treatment slightly increased its abundance (by 2-DGE and immunostaining) at 6 h but not at 24 h in the ischemic hemisphere, suggesting that PACAP activates a neuronal defense mechanism early on. Conclusions This study provides a detailed inventory of PACAP-influenced gene expressions…

  13. Examining the dimensional structure models of secondary traumatic stress based on DSM-5 symptoms.

    Science.gov (United States)

    Mordeno, Imelu G; Go, Geraldine P; Yangson-Serondo, April

    2017-02-01

    The latent factor structure of Secondary Traumatic Stress (STS) has been examined using the Diagnostic and Statistical Manual-IV (DSM-IV) Posttraumatic Stress Disorder (PTSD) nomenclature. With the advent of the Diagnostic and Statistical Manual-5 (DSM-5), there is a pressing need to reexamine STS using DSM-5 symptoms in light of the most updated PTSD models in the literature. The study investigated and determined the best-fitting PTSD models using DSM-5 PTSD criterion symptoms. Confirmatory factor analysis (CFA) was conducted to examine model fit using the Secondary Traumatic Stress Scale in 241 registered and practicing Filipino nurses (166 females and 75 males) who worked in the Philippines and gave direct nursing services to patients. Based on multiple fit indices, the results showed that the 7-factor hybrid model, comprising intrusion, avoidance, negative affect, anhedonia, externalizing behavior, anxious arousal, and dysphoric arousal factors, has an excellent fit to STS. This model asserts that: (1) the hyperarousal criterion needs to be divided into anxious and dysphoric arousal factors; (2) symptoms characterizing negative and positive affect need to be separated into two factors; and (3) a new factor would categorize externalized, self-initiated impulse- and control-deficit behaviors. Comparison of nested and non-nested models showed the hybrid model to have superior fit over the other models. The specificity of the symptom structure of STS based on DSM-5 PTSD criteria suggests that more specific interventions addressing the more elaborate symptom groupings would alleviate the condition of nurses exposed to STS on a daily basis. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. WOMBAT——A tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML)

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large-scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program (available for LINUX and WINDOWS environments), a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html
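    To illustrate the REML criterion that packages such as WOMBAT maximise (on a vastly larger and sparser scale), here is a brute-force sketch for a model with a single random effect; the simulated group structure and starting values are arbitrary, and nothing here reflects WOMBAT's actual algorithms.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Toy data: 200 records in 20 random groups, a single overall mean as the fixed effect.
n, q = 200, 20
groups = rng.integers(0, q, size=n)
Z = np.zeros((n, q)); Z[np.arange(n), groups] = 1.0
X = np.ones((n, 1))
y = 5.0 + Z @ rng.normal(0.0, np.sqrt(0.3), q) + rng.normal(0.0, 1.0, n)

def neg_reml(log_params):
    """Negative REML log-likelihood of (sigma_u^2, sigma_e^2), parameterised on the log scale."""
    su2, se2 = np.exp(log_params)
    V = su2 * Z @ Z.T + se2 * np.eye(n)
    Vinv = np.linalg.inv(V)
    XtVinvX = X.T @ Vinv @ X
    P = Vinv - Vinv @ X @ np.linalg.inv(XtVinvX) @ X.T @ Vinv
    _, logdet_V = np.linalg.slogdet(V)
    _, logdet_X = np.linalg.slogdet(XtVinvX)
    return 0.5 * (logdet_V + logdet_X + float(y @ P @ y))

fit = minimize(neg_reml, x0=np.log([0.5, 0.5]), method="Nelder-Mead")
print("REML estimates: sigma_u^2 = %.3f, sigma_e^2 = %.3f" % tuple(np.exp(fit.x)))
```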

  15. Kinetic models for analysing myocardial [¹¹C]palmitate data

    Energy Technology Data Exchange (ETDEWEB)

    Jong, Hugo W.A.M. de [University Medical Centre Utrecht, Department of Radiology and Nuclear Medicine, Utrecht (Netherlands); VU University Medical Centre, Department of Nuclear Medicine and PET Research, Amsterdam (Netherlands); Rijzewijk, Luuk J.; Diamant, Michaela [VU University Medical Centre, Diabetes Centre, Amsterdam (Netherlands); Lubberink, Mark; Lammertsma, Adriaan A. [VU University Medical Centre, Department of Nuclear Medicine and PET Research, Amsterdam (Netherlands); Meer, Rutger W. van der; Lamb, Hildo J. [Leiden University Medical Centre, Department of Radiology, Leiden (Netherlands); Smit, Jan W.A. [Leiden University Medical Centre, Department of Endocrinology, Leiden (Netherlands)

    2009-06-15

    [¹¹C]Palmitate PET can be used to study myocardial fatty acid metabolism in vivo. Several models have been applied to describe and quantify its kinetics, but to date no systematic analysis has been performed to define the most suitable model. In this study a total of 21 plasma input models comprising one to three compartments and up to six free rate constants were compared using statistical analysis of clinical data and simulations. To this end, 14 healthy volunteers were scanned using [¹¹C]palmitate, whilst myocardial blood flow was measured using H₂¹⁵O. Models including an oxidative pathway, representing production of ¹¹CO₂, provided significantly better fits to the data than other models. Model robustness was increased by fixing efflux of ¹¹CO₂ to the oxidation rate. Simulations showed that a three-tissue compartment model describing oxidation and esterification was feasible when no more than three free rate constants were included. Although further studies in patients are required to substantiate this choice, based on the accuracy of data description, the number of free parameters and generality, the three-tissue model with three free rate constants was the model of choice for describing [¹¹C]palmitate kinetics in terms of oxidation and fatty acid accumulation in the cell. (orig.)
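    A generic plasma-input compartment model of the type compared in this record can be written as a small ODE system; the sketch below uses an irreversible two-tissue configuration with invented rate constants and input function, not the three-tissue palmitate model or any clinical data from the study.

```python
import numpy as np
from scipy.integrate import odeint

# Illustrative rate constants (1/min) for a generic irreversible two-tissue model.
K1, k2, k3 = 0.6, 0.4, 0.1

def plasma_input(t):
    """Hypothetical plasma input function: fast bolus followed by washout."""
    return 10.0 * t * np.exp(-t / 0.8)

def two_tissue(c, t):
    c1, c2 = c                                    # exchangeable and trapped tissue pools
    dc1 = K1 * plasma_input(t) - (k2 + k3) * c1   # uptake from and efflux back to plasma
    dc2 = k3 * c1                                 # irreversible trapping (oxidation-like pathway)
    return [dc1, dc2]

t = np.linspace(0.0, 30.0, 301)                   # a 30-minute scan, in minutes
c = odeint(two_tissue, [0.0, 0.0], t)
tissue_curve = c.sum(axis=1)                      # total tissue signal the scanner would see
print("Tissue activity at 5, 15, 30 min:", np.round(tissue_curve[[50, 150, 300]], 2))
```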

  16. Collapsing Factors in Multitrait-Multimethod Models: Examining Consequences of a Mismatch Between Measurement Design and Model

    Directory of Open Access Journals (Sweden)

    Christian Geiser

    2015-08-01

    Full Text Available Models of confirmatory factor analysis (CFA) are frequently applied to examine the convergent validity of scores obtained from multiple raters or methods in so-called multitrait-multimethod (MTMM) investigations. Many applications of CFA-MTMM and similarly structured models result in solutions in which at least one method (or specific) factor shows non-significant loading or variance estimates. Eid et al. (2008) distinguished between MTMM measurement designs with interchangeable (randomly selected) versus structurally different (fixed) methods and showed that each type of measurement design implies specific CFA-MTMM measurement models. In the current study, we hypothesized that some of the problems that are commonly seen in applications of CFA-MTMM models may be due to a mismatch between the underlying measurement design and fitted models. Using simulations, we found that models with M method factors (where M is the total number of methods) and unconstrained loadings led to a higher proportion of solutions in which at least one method factor became empirically unstable when these models were fit to data generated from structurally different methods. The simulations also revealed that commonly used model goodness-of-fit criteria frequently failed to identify incorrectly specified CFA-MTMM models. We discuss implications of these findings for other complex CFA models in which similar issues occur, including nested (bifactor) and latent state-trait models.
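    The kind of simulation study described in this record can be illustrated by generating toy multitrait-multimethod data from trait and method factors and inspecting the resulting correlations; the loadings, sample size and factor structure below are arbitrary choices, not the authors' simulation design.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_mtmm(n=500, n_traits=3, n_methods=3, lam_t=0.7, lam_m=0.4, sd_e=0.5):
    """Generate toy MTMM data with one indicator per trait-method combination."""
    traits = rng.normal(size=(n, n_traits))
    methods = rng.normal(size=(n, n_methods))
    cols = []
    for m in range(n_methods):
        for tr in range(n_traits):
            # observed score = trait part + method part + unique error
            cols.append(lam_t * traits[:, tr] + lam_m * methods[:, m] + sd_e * rng.normal(size=n))
    return np.column_stack(cols)   # columns ordered method by method (T1M1, T2M1, T3M1, T1M2, ...)

data = simulate_mtmm()
R = np.corrcoef(data, rowvar=False)
print("same trait, different methods:", round(R[0, 3], 2))   # monotrait-heteromethod
print("different traits, same method:", round(R[0, 1], 2))   # heterotrait-monomethod
```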

  17. Collapsing factors in multitrait-multimethod models: examining consequences of a mismatch between measurement design and model

    Science.gov (United States)

    Geiser, Christian; Bishop, Jacob; Lockhart, Ginger

    2015-01-01

    Models of confirmatory factor analysis (CFA) are frequently applied to examine the convergent validity of scores obtained from multiple raters or methods in so-called multitrait-multimethod (MTMM) investigations. Many applications of CFA-MTMM and similarly structured models result in solutions in which at least one method (or specific) factor shows non-significant loading or variance estimates. Eid et al. (2008) distinguished between MTMM measurement designs with interchangeable (randomly selected) vs. structurally different (fixed) methods and showed that each type of measurement design implies specific CFA-MTMM measurement models. In the current study, we hypothesized that some of the problems that are commonly seen in applications of CFA-MTMM models may be due to a mismatch between the underlying measurement design and fitted models. Using simulations, we found that models with M method factors (where M is the total number of methods) and unconstrained loadings led to a higher proportion of solutions in which at least one method factor became empirically unstable when these models were fit to data generated from structurally different methods. The simulations also revealed that commonly used model goodness-of-fit criteria frequently failed to identify incorrectly specified CFA-MTMM models. We discuss implications of these findings for other complex CFA models in which similar issues occur, including nested (bifactor) and latent state-trait models. PMID:26283977

  18. A novel substance flow analysis model for analysing multi-year phosphorus flow at the regional scale.

    Science.gov (United States)

    Chowdhury, Rubel Biswas; Moore, Graham A; Weatherley, Anthony J; Arora, Meenakshi

    2016-12-01

    Achieving sustainable management of phosphorus (P) is crucial for both global food security and global environmental protection. In order to formulate informed policy measures to overcome existing barriers to achieving sustainable P management, there is a need for a sound understanding of the nature and magnitude of P flow through various systems at different geographical and temporal scales. So far, there is a limited understanding of the nature and magnitude of P flow over multiple years at the regional scale. In this study, we have developed a novel substance flow analysis (SFA) model in the MATLAB/Simulink® software platform that can be effectively utilized to analyse the nature and magnitude of multi-year P flow at the regional scale. The model is inclusive of all P flows and storage relating to all key systems, subsystems, processes or components, and the associated interactions of P flow required to represent a typical P flow system at the regional scale. With an annual time step, the model can analyse P flow and storage over as many years as required, and can therefore indicate the trends and changes in P flow and storage over many years, which is not offered by existing regional-scale SFA models of P. The model is flexible enough to allow any modification or the inclusion of any degree of complexity, and can therefore be utilized for analysing P flow in any region around the world. The application of the model to the case of the Gippsland region, Australia, has revealed that the model generates essential information about the nature and magnitude of P flow at the regional scale which can be utilized for making improved management decisions towards attaining P sustainability. A systematic reliability check on the findings of the model application also indicates that the model produces reliable results.
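    Conceptually, an annual-time-step substance flow analysis is a repeated mass balance over stocks and flows; the sketch below shows that bookkeeping for a handful of invented phosphorus compartments and flow values, and bears no relation to the MATLAB/Simulink® model or the Gippsland data.

```python
# Toy multi-year phosphorus (P) bookkeeping; all compartments and values (kt P/yr) are hypothetical.
stocks = {"agricultural_soil": 500.0, "water_bodies": 20.0, "waste": 0.0}

def annual_step(stocks):
    """Advance all stocks by one year using fixed inflows and simple loss fractions."""
    fertiliser_in = 12.0                              # external P input to soils
    runoff = 0.004 * stocks["agricultural_soil"]      # soil P lost to water bodies
    harvest = 8.0                                     # P removed in produce
    waste_flow = 0.5 * harvest                        # share of harvested P ending up as waste
    stocks["agricultural_soil"] += fertiliser_in - runoff - harvest
    stocks["water_bodies"] += runoff
    stocks["waste"] += waste_flow
    return stocks

for _ in range(11):                                   # eleven annual time steps
    stocks = annual_step(stocks)
print({k: round(v, 1) for k, v in stocks.items()})
```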

  19. Comparative study analysing women's childbirth satisfaction and obstetric outcomes across two different models of maternity care.

    Science.gov (United States)

    Conesa Ferrer, Ma Belén; Canteras Jordana, Manuel; Ballesteros Meseguer, Carmen; Carrillo García, César; Martínez Roche, M Emilia

    2016-08-26

    To describe the differences in obstetrical results and women's childbirth satisfaction across 2 different models of maternity care (biomedical model and humanised birth). 2 university hospitals in south-eastern Spain from April to October 2013. A correlational descriptive study. A convenience sample of 406 women participated in the study, 204 of the biomedical model and 202 of the humanised model. The differences in obstetrical results were (biomedical model/humanised model): onset of labour (spontaneous 66/137, augmentation 70/1, p=0.0005), pain relief (epidural 172/132, no pain relief 9/40, p=0.0005), mode of delivery (normal vaginal 140/165, instrumental 48/23, p=0.004), length of labour (0-4 hours 69/93, >4 hours 133/108, p=0.011), condition of perineum (intact perineum or tear 94/178, episiotomy 100/24, p=0.0005). The total questionnaire score (100) gave a mean (M) of 78.33 and SD of 8.46 in the biomedical model of care and an M of 82.01 and SD of 7.97 in the humanised model of care (p=0.0005). In the analysis of the results per items, statistical differences were found in 8 of the 9 subscales. The highest scores were reached in the humanised model of maternity care. The humanised model of maternity care offers better obstetrical outcomes and women's satisfaction scores during the labour, birth and immediate postnatal period than does the biomedical model. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  20. Comparative study analysing women's childbirth satisfaction and obstetric outcomes across two different models of maternity care

    Science.gov (United States)

    Conesa Ferrer, Ma Belén; Canteras Jordana, Manuel; Ballesteros Meseguer, Carmen; Carrillo García, César; Martínez Roche, M Emilia

    2016-01-01

    Objectives To describe the differences in obstetrical results and women's childbirth satisfaction across 2 different models of maternity care (biomedical model and humanised birth). Setting 2 university hospitals in south-eastern Spain from April to October 2013. Design A correlational descriptive study. Participants A convenience sample of 406 women participated in the study, 204 of the biomedical model and 202 of the humanised model. Results The differences in obstetrical results were (biomedical model/humanised model): onset of labour (spontaneous 66/137, augmentation 70/1, p=0.0005), pain relief (epidural 172/132, no pain relief 9/40, p=0.0005), mode of delivery (normal vaginal 140/165, instrumental 48/23, p=0.004), length of labour (0–4 hours 69/93, >4 hours 133/108, p=0.011), condition of perineum (intact perineum or tear 94/178, episiotomy 100/24, p=0.0005). The total questionnaire score (100) gave a mean (M) of 78.33 and SD of 8.46 in the biomedical model of care and an M of 82.01 and SD of 7.97 in the humanised model of care (p=0.0005). In the analysis of the results per items, statistical differences were found in 8 of the 9 subscales. The highest scores were reached in the humanised model of maternity care. Conclusions The humanised model of maternity care offers better obstetrical outcomes and women's satisfaction scores during the labour, birth and immediate postnatal period than does the biomedical model. PMID:27566632

  1. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    Science.gov (United States)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, where the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations, which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10¹ to ~10² in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
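    For reference, a plain dense Levenberg-Marquardt iteration (without the Krylov-subspace projection and recycling that give the method above its speed-up) can be sketched as follows; the damping schedule and the toy exponential-fit problem are illustrative only.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, lam=1e-2, n_iter=50):
    """Basic dense Levenberg-Marquardt for small nonlinear least-squares problems."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r, J = residual(p), jacobian(p)
        A = J.T @ J + lam * np.eye(p.size)            # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.7              # accept the step, relax damping
        else:
            lam *= 2.0                                # reject the step, increase damping
    return p

# Toy inverse problem: recover (a, b) in y = a * exp(-b * x) from noisy observations.
rng = np.random.default_rng(3)
x = np.linspace(0.0, 4.0, 40)
y_obs = 2.5 * np.exp(-1.3 * x) + 0.01 * rng.normal(size=x.size)
res = lambda p: p[0] * np.exp(-p[1] * x) - y_obs
jac = lambda p: np.column_stack([np.exp(-p[1] * x), -p[0] * x * np.exp(-p[1] * x)])
print("estimated (a, b):", np.round(levenberg_marquardt(res, jac, [1.0, 1.0]), 3))
```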

  2. Exploring Prospective Secondary Mathematics Teachers' Interpretation of Student Thinking through Analysing Students' Work in Modelling

    Science.gov (United States)

    Didis, Makbule Gozde; Erbas, Ayhan Kursat; Cetinkaya, Bulent; Cakiroglu, Erdinc; Alacaci, Cengiz

    2016-01-01

    Researchers point out the importance of teachers' knowledge of student thinking and the role of examining student work in various contexts to develop a knowledge base regarding students' ways of thinking. This study investigated prospective secondary mathematics teachers' interpretations of students' thinking as manifested in students' work that…

  3. Comparative Analyses of MIRT Models and Software (BMIRT and flexMIRT)

    Science.gov (United States)

    Yavuz, Guler; Hambleton, Ronald K.

    2017-01-01

    Application of MIRT modeling procedures is dependent on the quality of parameter estimates provided by the estimation software and techniques used. This study investigated model parameter recovery of two popular MIRT packages, BMIRT and flexMIRT, under some common measurement conditions. These packages were specifically selected to investigate the…

  4. The Cannon 2: A data-driven model of stellar spectra for detailed chemical abundance analyses

    CERN Document Server

    Casey, Andrew R; Ness, Melissa; Rix, Hans-Walter; Ho, Anna Q Y; Gilmore, Gerry

    2016-01-01

    We have shown that data-driven models are effective for inferring physical attributes of stars (labels; Teff, logg, [M/H]) from spectra, even when the signal-to-noise ratio is low. Here we explore whether this is possible when the dimensionality of the label space is large (Teff, logg, and 15 abundances: C, N, O, Na, Mg, Al, Si, S, K, Ca, Ti, V, Mn, Fe, Ni) and the model is non-linear in its response to abundance and parameter changes. We adopt ideas from compressed sensing to limit overall model complexity while retaining model freedom. The model is trained with a set of 12,681 red-giant stars with high signal-to-noise spectroscopic observations and stellar parameters and abundances taken from the APOGEE Survey. We find that we can successfully train and use a model with 17 stellar labels. Validation shows that the model does a good job of inferring all 17 labels (typical abundance precision is 0.04 dex), even when we degrade the signal-to-noise by discarding ~50% of the observing time. The model dependencie...

  5. Analysing empowerment-oriented email consultation for parents : Development of the Guiding the Empowerment Process model

    NARCIS (Netherlands)

    Nieuwboer, C.C.; Fukkink, R.G.; Hermanns, J.M.A.

    2017-01-01

    Online consultation is increasingly offered by parenting practitioners, but it is not clear if it is feasible to provide empowerment-oriented support in a single-session email consultation. Based on empowerment theory, we developed the Guiding the Empowerment Process model (GEP model) to evaluate…

  6. Transport of nutrients from land to sea: Global modeling approaches and uncertainty analyses

    NARCIS (Netherlands)

    Beusen, A.H.W.

    2014-01-01

    This thesis presents four examples of global models developed as part of the Integrated Model to Assess the Global Environment (IMAGE). They describe different components of global biogeochemical cycles of the nutrients nitrogen (N), phosphorus (P) and silicon (Si), with a focus on approaches to…

  7. Modelling and analysing track cycling Omnium performances using statistical and machine learning techniques.

    Science.gov (United States)

    Ofoghi, Bahadorreza; Zeleznikow, John; Dwyer, Dan; Macmahon, Clare

    2013-01-01

    This article describes the utilisation of an unsupervised machine learning technique and statistical approaches (e.g., the Kolmogorov-Smirnov test) that assist cycling experts in the crucial decision-making processes for athlete selection, training, and strategic planning in the track cycling Omnium. The Omnium is a multi-event competition that will be included in the summer Olympic Games for the first time in 2012. Presently, selectors and cycling coaches make decisions based on experience and intuition. They rarely have access to objective data. We analysed both the old five-event (first raced internationally in 2007) and new six-event (first raced internationally in 2011) Omniums and found that the addition of the elimination race component to the Omnium has, contrary to expectations, not favoured track endurance riders. We analysed the Omnium data and also determined the inter-relationships between different individual events as well as between those events and the final standings of riders. In further analysis, we found that there is no maximum ranking (poorest performance) in each individual event that riders can afford whilst still winning a medal. We also found the required times for riders to finish the timed components that are necessary for medal winning. The results of this study consider the scoring system of the Omnium and inform decision-making toward successful participation in future major Omnium competitions.
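    The two-sample Kolmogorov-Smirnov test mentioned above can be applied to rider scores as follows; the data here are synthetic stand-ins, not the Omnium results analysed in the study.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Hypothetical final Omnium points for two groups of riders (endurance- vs sprint-oriented).
endurance_riders = rng.normal(loc=22.0, scale=5.0, size=40)
sprint_riders = rng.normal(loc=19.0, scale=5.0, size=40)

# Do the two score distributions differ?
stat, p_value = ks_2samp(endurance_riders, sprint_riders)
print("KS statistic = %.3f, p = %.3f" % (stat, p_value))
```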

  8. Mathematical modeling of materially nonlinear problems in structural analyses, Part II: Application in contemporary software

    Directory of Open Access Journals (Sweden)

    Bonić Zoran

    2010-01-01

    Full Text Available The paper presents the application of nonlinear material models in the software package Ansys. The development of the model theory is presented in the companion paper on the mathematical modeling of materially nonlinear problems in structural analysis (Part I - theoretical foundations). Here, the incremental-iterative procedure used by this package to solve materially nonlinear problems is described, and an example of modeling a spread footing using the bilinear kinematic and Drucker-Prager models is given. A comparative analysis of the results obtained from this modeling and from the author's experimental research was made. The load level corresponding to the onset of plastic deformation was noted, and the development of deformations with increasing load, as well as the distribution of dilatation in the footing, was observed. Comparison of calculated and measured values of reinforcement dilatation shows very good agreement.

  9. Crowd-structure interaction in footbridges: Modelling, application to a real case-study and sensitivity analyses

    Science.gov (United States)

    Bruno, Luca; Venuti, Fiammetta

    2009-06-01

    A mathematical and computational model used to simulate crowd-structure interaction in lively footbridges is presented in this work. The model is based on the mathematical and numerical decomposition of the coupled multiphysical nonlinear system into two interacting subsystems. The model was conceived to simulate the synchronous lateral excitation phenomenon caused by pedestrians walking on footbridges. The model was first applied to simulate a crowd event on an actual footbridge, the T-bridge in Japan. Three sensitivity analyses were then performed on the same benchmark to evaluate the properties of the model. The simulation results show good agreement with the experimental data found in literature and the model could be considered a useful tool for designers and engineers in the different phases of footbridge design.

  10. Selection of asset investment models by hospitals: examination of influencing factors, using Switzerland as an example.

    Science.gov (United States)

    Eicher, Bernhard

    2016-10-01

    Hospitals are responsible for a remarkable part of the annual increase in healthcare expenditure. This article examines one of the major cost drivers, the expenditure for investment in hospital assets. The study, conducted in Switzerland, identifies factors that influence hospitals' investment decisions. A suggestion on how to categorize asset investment models is presented based on the life cycle of an asset, and its influencing factors defined based on transaction cost economics. The influence of five factors (human asset specificity, physical asset specificity, uncertainty, bargaining power, and privacy of ownership) on the selection of an asset investment model is examined using a two-step fuzzy-set Qualitative Comparative Analysis. The research shows that outsourcing-oriented asset investment models are particularly favored in the presence of two combinations of influencing factors: First, if technological uncertainty is high and both human asset specificity and bargaining power of a hospital are low. Second, if assets are very specific, technological uncertainty is high and there is a private hospital with low bargaining power, outsourcing-oriented asset investment models are favored too. Using Qualitative Comparative Analysis, it can be demonstrated that investment decisions of hospitals do not depend on isolated influencing factors but on a combination of factors. Copyright © 2016 John Wiley & Sons, Ltd.

  11. Testing a Dual Process Model of Gender-Based Violence: A Laboratory Examination.

    Science.gov (United States)

    Berke, Danielle S; Zeichner, Amos

    2016-01-01

    The dire impact of gender-based violence on society compels development of models comprehensive enough to capture the diversity of its forms. Research has established hostile sexism (HS) as a robust predictor of gender-based violence. However, to date, research has yet to link men's benevolent sexism (BS) to physical aggression toward women, despite correlations between BS and HS and between BS and victim blaming. One model, the opposing process model of benevolent sexism (Sibley & Perry, 2010), suggests that, for men, BS acts indirectly through HS to predict acceptance of hierarchy-enhancing social policy as an expression of a preference for in-group dominance (i.e., social dominance orientation [SDO]). The extent to which this model applies to gender-based violence remains untested. Therefore, in this study, 168 undergraduate men at a U.S. university participated in a competitive reaction time task, during which they had the option to shock an ostensible female opponent as a measure of gender-based violence. Results of multiple-mediation path analyses indicated dual pathways potentiating gender-based violence and highlighted SDO as a particularly potent mechanism of this violence. Findings are discussed in terms of group dynamics and norm-based violence prevention.

  12. Examination of Scale-Awareness of Convective Transport for Parameterization Development in Mesoscale and Climate Models

    Science.gov (United States)

    Liu, Y.; Fan, J.; Zhang, G. J.; Xu, K.

    2013-12-01

    Cumulus convection plays a key role in atmospheric circulation. The results of global climate models, which have been widely used in climate research, are highly sensitive to the cumulus parameterizations used for modeling cumulus clouds. Existing parameterization schemes have relied upon a number of assumptions whose validity is questionable at high spatial resolutions. In this study, we aim to develop a scale-aware cumulus parameterization, based on the conventional Zhang-McFarlane scheme, that is suitable for a broad range of uses ranging from mesoscale to climate models. We analyse cloud-resolving model (CRM) simulations, including two cases from the Midlatitude Continental Convective Clouds Experiment (MC3E), to understand the scale dependence of convective cloud properties, following the unified parameterization framework of Arakawa and Wu (2013) but with a more complete set of considerations, such as including downdrafts and different convective stages in the eddy flux approximations. Our preliminary results show that downdrafts can make a significant contribution to eddy flux transport at the developed stage of convection. The eddy flux transported by updrafts and downdrafts relative to the environmental background increases with grid spacing, but does not change with the convective fraction. There are large differences between the explicit calculation of the eddy flux and the approximations used in cumulus parameterization at grid spacings of less than 64 km. Much of this difference is due to the sub-grid inhomogeneity of updrafts and downdrafts.
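
    An illustrative (hypothetical) coarse-graining calculation of a sub-grid eddy flux, <w'q'> = <wq> - <w><q>, evaluated over coarse boxes of increasing size; the fields, sizes and spacings are invented for the sketch.

      import numpy as np

      rng = np.random.default_rng(2)
      n_col = 256                          # high-resolution columns (say, 1 km apart)
      w = rng.normal(0.0, 1.0, n_col)      # vertical velocity at one model level
      q = 8.0 + 0.5 * w + rng.normal(0.0, 0.2, n_col)    # a transported scalar

      def subgrid_eddy_flux(w, q, block):
          """Mean <w'q'> for coarse boxes made of `block` consecutive columns."""
          n = len(w) // block * block
          wb = w[:n].reshape(-1, block)
          qb = q[:n].reshape(-1, block)
          return (wb * qb).mean(axis=1) - wb.mean(axis=1) * qb.mean(axis=1)

      for block in (4, 16, 64):            # i.e. 4, 16 and 64 km boxes in this toy grid
          print(block, "km boxes:", subgrid_eddy_flux(w, q, block).mean().round(3))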

  13. Stochastic Spatio-Temporal Models for Analysing NDVI Distribution of GIMMS NDVI3g Images

    Directory of Open Access Journals (Sweden)

    Ana F. Militino

    2017-01-01

    Full Text Available The normalized difference vegetation index (NDVI) is an important indicator for evaluating vegetation change, monitoring land surface fluxes or predicting crop models. Due to the great availability of images provided by different satellites in recent years, much attention has been devoted to testing trend changes with time series of individual NDVI pixels. However, the spatial dependence inherent in these data is usually lost unless global scales are analyzed. In this paper, we propose incorporating both the spatial and the temporal dependence among pixels, using a stochastic spatio-temporal model to estimate the NDVI distribution thoroughly. The stochastic model is a state-space model that uses meteorological data of the Climatic Research Unit (CRU) TS3.10 as auxiliary information. The model is estimated with the Expectation-Maximization (EM) algorithm. The result is a set of smoothed images providing an overall analysis of the NDVI distribution across space and time, where fluctuations generated by atmospheric disturbances, fire events, land-use/cover changes or engineering problems from image capture are treated as random fluctuations. The illustration is carried out with the third generation of NDVI images, termed NDVI3g, of the Global Inventory Modeling and Mapping Studies (GIMMS) in continental Spain. These data are taken at bimonthly intervals from January 2011 to December 2013, but the model can be applied to many other variables, countries or regions with different resolutions.
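
    The record describes an EM-estimated spatio-temporal state-space model; purely as a much-simplified, hypothetical illustration of the state-space idea for a single pixel, the sketch below fits a local-level model with a meteorological covariate by maximum likelihood in statsmodels (data and coefficients invented).

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(3)
      n = 72                                     # bimonthly observations for one pixel
      temperature = 15 + 10 * np.sin(np.linspace(0, 12 * np.pi, n))   # toy covariate
      level = np.cumsum(rng.normal(0, 0.01, n))                       # slowly drifting level
      ndvi = 0.4 + level + 0.005 * temperature + rng.normal(0, 0.02, n)

      model = sm.tsa.UnobservedComponents(ndvi, level="local level",
                                          exog=temperature.reshape(-1, 1))
      res = model.fit(disp=False)                # maximum likelihood via the Kalman filter
      smoothed_level = res.smoothed_state[0]     # smoothed latent level across time
      print(res.params)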

  14. Examining the link between patient satisfaction and adherence to HIV care: a structural equation model.

    Science.gov (United States)

    Dang, Bich N; Westbrook, Robert A; Black, William C; Rodriguez-Barradas, Maria C; Giordano, Thomas P

    2013-01-01

    Analogous to the business model of customer satisfaction and retention, patient satisfaction could serve as an innovative, patient-centered focus for increasing retention in HIV care and adherence to HAART, and ultimately HIV suppression. The objective was to test, through structural equation modeling (SEM), a model of HIV suppression in which patient satisfaction influences HIV suppression indirectly through retention in HIV care and adherence to HAART. We conducted a cross-sectional study of adults receiving HIV care at two clinics in Texas. Patient satisfaction was based on two validated items, one adapted from the Consumer Assessment of Healthcare Providers and Systems survey ("Would you recommend this clinic to other patients with HIV?") and one adapted from the Delighted-Terrible Scale ("Overall, how do you feel about the care you got at this clinic in the last 12 months?"). A validated, single-item question measured adherence to HAART over the past 4 weeks. Retention in HIV care was based on visit constancy in the year prior to the survey. HIV suppression was defined using the plasma HIV RNA measurement closest to the survey. We used SEM to test the hypothesized relationships. The analyses included 489 patients (94% of eligible patients). The patient satisfaction score had a mean of 8.5 (median 9.2) on a 0- to 10-point scale. A total of 46% reported "excellent" adherence, 76% had adequate retention, and 70% had HIV suppression. In SEM analyses, patient satisfaction with care influences retention in HIV care and adherence to HAART, which in turn serve as key determinants of HIV suppression (all paths statistically significant). Patient satisfaction may also have direct effects on retention in HIV care and adherence to HAART. Interventions to improve the care experience, without necessarily targeting objective clinical performance measures, could serve as an innovative method for optimizing HIV outcomes.

  15. Revisiting the Leadership Scale for Sport: Examining Factor Structure Through Exploratory Structural Equation Modeling.

    Science.gov (United States)

    Chiu, Weisheng; Rodriguez, Fernando M; Won, Doyeon

    2016-10-01

    This study examines the factor structure of the shortened version of the Leadership Scale for Sport, through a survey of 201 collegiate swimmers at National Collegiate Athletic Association Division II and III institutions, using both exploratory structural equation modeling and confirmatory factor analysis. Both exploratory structural equation modeling and confirmatory factor analysis showed that a five-factor solution fit the data adequately. The sizes of factor loadings on target factors substantially differed between the confirmatory factor analysis and exploratory structural equation modeling solutions. In addition, the inter-correlations between factors of the Leadership Scale for Sport and the correlations with athletes' satisfaction were found to be inflated in the confirmatory factor analysis solution. Overall, the findings provide evidence of the factorial validity of the shortened Leadership Scale for Sport.

  16. Neural Network-Based Model for Landslide Susceptibility and Soil Longitudinal Profile Analyses

    DEFF Research Database (Denmark)

    Farrokhzad, F.; Barari, Amin; Choobbasti, A. J.

    2011-01-01

    The purpose of this study was to create an empirical model for assessing the landslide risk potential at Savadkouh Azad University, which is located in the rural surroundings of Savadkouh, about 5 km from the city of Pol-Sefid in northern Iran. The soil longitudinal profile of the city of Babol......, located 25 km from the Caspian Sea, also was predicted with an artificial neural network (ANN). A multilayer perceptron neural network model was applied to the landslide area and was used to analyze specific elements in the study area that contributed to previous landsliding events. The ANN models were...... studies in landslide susceptibility zonation....
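
    A hypothetical sketch of a multilayer perceptron for landslide susceptibility with invented conditioning factors; this is not the study's data, feature set or network architecture.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline

      rng = np.random.default_rng(4)
      n = 1000
      X = np.column_stack([
          rng.uniform(0, 45, n),       # slope angle (deg)
          rng.uniform(0, 360, n),      # aspect (deg)
          rng.uniform(0, 2000, n),     # elevation (m)
          rng.uniform(0, 500, n),      # distance to drainage (m)
      ])
      # toy rule: steep slopes close to drainage are more landslide-prone
      y = ((X[:, 0] > 25) & (X[:, 3] < 200)).astype(int)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
      clf = make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000,
                                        random_state=0))
      clf.fit(X_tr, y_tr)
      print("held-out accuracy:", round(clf.score(X_te, y_te), 3))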

  17. Model-Based Fault Diagnosis: Performing Root Cause and Impact Analyses in Real Time

    Science.gov (United States)

    Figueroa, Jorge F.; Walker, Mark G.; Kapadia, Ravi; Morris, Jonathan

    2012-01-01

    Generic, object-oriented fault models, built according to causal-directed graph theory, have been integrated into an overall software architecture dedicated to monitoring and predicting the health of mission-critical systems. Processing over the generic fault models is triggered by event detection logic that is defined according to the specific functional requirements of the system and its components. Once triggered, the fault models provide an automated way both to perform upstream root cause analysis (RCA) and to predict downstream effects (impact analysis). The methodology has been applied to integrated system health management (ISHM) implementations at NASA SSC's Rocket Engine Test Stands (RETS).
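
    A small sketch of root cause and impact analysis over a causal directed graph, using networkx and an invented fault model; the component names are placeholders, not the NASA implementation.

      import networkx as nx

      G = nx.DiGraph()
      G.add_edges_from([
          ("valve_stuck", "low_fuel_flow"),
          ("pump_degraded", "low_fuel_flow"),
          ("low_fuel_flow", "chamber_pressure_low"),
          ("sensor_bias", "chamber_pressure_low"),
          ("chamber_pressure_low", "thrust_low"),
      ])

      observed_anomaly = "chamber_pressure_low"
      root_cause_candidates = nx.ancestors(G, observed_anomaly)    # upstream RCA
      downstream_impacts = nx.descendants(G, observed_anomaly)     # impact analysis
      print("possible root causes:", sorted(root_cause_candidates))
      print("predicted impacts:", sorted(downstream_impacts))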

  18. Analysing the forward premium anomaly using a Logistic Smooth Transition Regression model.

    OpenAIRE

    Sofiane Amri

    2008-01-01

    Several researchers have suggested that exchange rates may be characterized by nonlinear behaviour. This paper examines these nonlinearities and asymmetries and estimates a Logistic Smooth Transition Regression (LSTR) version of the Fama regression, with the risk-adjusted forward premium as the transition variable. Results confirm the existence of nonlinear dynamics in the relationship between the spot exchange rate differential and the forward premium for all the currencies in the sample and for all maturities (three and...
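
    A hypothetical sketch of estimating an LSTR-type specification by nonlinear least squares on simulated data; the variable names and parameter values are invented, not the paper's estimates.

      import numpy as np
      from scipy.optimize import curve_fit

      def lstr(fp, b0, b1, c0, c1, gamma, c):
          """y = (b0 + b1*fp) + (c0 + c1*fp) * G(fp), with a logistic transition G."""
          G = 1.0 / (1.0 + np.exp(-gamma * (fp - c)))
          return b0 + b1 * fp + (c0 + c1 * fp) * G

      rng = np.random.default_rng(5)
      forward_premium = rng.normal(0, 1, 400)                 # transition variable
      true = lstr(forward_premium, 0.0, -0.8, 0.1, 1.6, 4.0, 0.2)
      spot_change = true + rng.normal(0, 0.3, 400)

      p0 = [0, 0, 0, 1, 1, 0]                                 # starting values
      params, _ = curve_fit(lstr, forward_premium, spot_change, p0=p0, maxfev=20000)
      print("estimated (b0, b1, c0, c1, gamma, c):", np.round(params, 2))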

  19. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of this thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section transmission electron microscopy is used to acquire images from two experimental groups of rats: 1) rats subjected to a behavioral model of stress and 2) rats subjected to sham stress as the control group. The synaptic vesicle distribution and interactions are modeled by employing a point process approach. The model is able to correctly separate the two experimental groups. Two different approaches to estimate the thickness of each section of specimen being imaged are introduced. The first approach uses the Darboux frame and Cartan matrix to measure the isophote curvature and the second approach is based...

  20. Genomic analyses with biofilter 2.0: knowledge driven filtering, annotation, and model development

    National Research Council Canada - National Science Library

    Pendergrass, Sarah A; Frase, Alex; Wallace, John; Wolfe, Daniel; Katiyar, Neerja; Moore, Carrie; Ritchie, Marylyn D

    2013-01-01

    .... We have now extensively revised and updated the multi-purpose software tool Biofilter that allows researchers to annotate and/or filter data as well as generate gene-gene interaction models based...

  1. Wave modelling for the North Indian Ocean using MSMR analysed winds

    Digital Repository Service at National Institute of Oceanography (India)

    Vethamony, P.; Sudheesh, K.; Rupali, S.P.; Babu, M.T.; Jayakumar, S.; Saran, A.K.; Basu, S.K.; Kumar, R.; Sarkar, A.

    NCMRWF (National Centre for Medium Range Weather Forecast) winds assimilated with MSMR (Multi-channel Scanning Microwave Radiometer) winds are used as input to MIKE21 Offshore Spectral Wave model (OSW) which takes into account wind induced wave...

  2. The strut-and-tie models in reinforced concrete structures analysed by a numerical technique

    Directory of Open Access Journals (Sweden)

    V. S. Almeida

    Full Text Available Strut-and-tie models are appropriate for designing and detailing certain types of reinforced concrete structural elements and regions of stress concentration, called "D" regions, and give a good representation of the structural behavior and load-carrying mechanism. The numerical techniques presented herein are used to identify the stress regions that represent the strut-and-tie elements and to quantify their respective internal forces. Linear elastic plane problems are analyzed using strut-and-tie models by coupling the classical evolutionary structural optimization (ESO) and a new variant called SESO (Smoothing ESO) within a finite element formulation. The SESO method is based on a gradual reduction of the stiffness contribution of inefficient, low-stress elements until they no longer have any influence. Optimal topologies of strut-and-tie models are presented for several examples, showing good agreement with other pioneering works and allowing the design of reinforcement for structural elements.

  3. Framing patient consent for student involvement in pelvic examination: a dual model of autonomy.

    Science.gov (United States)

    Carson-Stevens, Andrew; Davies, Myfanwy M; Jones, Rhiain; Chik, Aiman D Pawan; Robbé, Iain J; Fiander, Alison N

    2013-11-01

    Patient consent has been formulated in terms of radical individualism rather than shared benefits. Medical education relies on the provision of patient consent to provide medical students with the training and experience to become competent doctors. Pelvic examination represents an extreme case in which patients may legitimately seek to avoid contact with inexperienced medical students particularly where these are male. However, using this extreme case, this paper will examine practices of framing and obtaining consent as perceived by medical students. This paper reports findings of an exploratory qualitative study of medical students and junior doctors. Participants described a number of barriers to obtaining informed consent. These related to misunderstandings concerning student roles and experiences and insufficient information on the nature of the examination. Participants reported perceptions of the negative framing of decisions on consent by nursing staff where the student was male. Potentially coercive practices of framing of the decision by senior doctors were also reported. Participants outlined strategies they adopted to circumvent patients' reasons for refusal. Practices of framing the information used by students, nurses and senior doctors to enable patients to decide about consent are discussed in the context of good ethical practice. In the absence of a clear ethical model, coercion appears likely. We argue for an expanded model of autonomy in which the potential tension between respecting patients' autonomy and ensuring the societal benefit of well-trained doctors is recognised. Practical recommendations are made concerning information provision and clear delineations of student and patient roles and expectations.

  4. The Job Demands-Resources model: A motivational analysis from the perspective of Self-Determination Theory

    OpenAIRE

    2013-01-01

    This article discusses the doctoral dissertation of Anja Van Broeck (2010), which examines employee motivation from two different recent perspectives: the job demands-resources model (JD-R model) and self-determination theory (SDT). The article primarily highlights how the studies in this dissertation add to the JD-R model by relying on SDT. First, a distinction is made between two types of job demands: job hindrances and job challenges. Second, motivation is shown to represent the underlying mechanism ...

  5. Examining the Relationships among Antecedents of Guests’ Behavioural Intentions in Ghana’s Hospitality Industry: A Structural Equation Modelling Approach

    Directory of Open Access Journals (Sweden)

    Simon Gyasi Nimako

    2013-04-01

    Full Text Available This study empirically examines the critical antecedents of behavioural intentions and the structural interrelationships that exist among these antecedents in the hotel industry in Ghana. The study was a cross-sectional survey of 700 respondents using a personally administered structured questionnaire. A total of 359 usable questionnaires were obtained, representing a 51.3% response rate, and analysed using a Structural Equation Modelling approach. The findings indicate that the proposed model has high goodness-of-fit indices and explains 89.5% and 91% of the variance in the two behavioural intention variables, loyalty and Positive Word of Mouth Communication (PWOMC), respectively. The study also found that loyalty could be influenced through PWOMC, customer satisfaction, perceived service quality, perceived value and perceived quality of ambient factors, whereas PWOMC was influenced by satisfaction, perceived value and perceived quality of ambient factors. Theoretically, the study addresses the dearth of conceptual models for understanding the critical determinants of behavioural intentions in the hotel sector in a developing-country context. It also provides important implications for marketing management in the hotel industry. Limitations of the study are noted and recommendations for future research are suggested. This study contributes to the body of knowledge in the area of consumer loyalty in the hospitality industry.

  6. Confirmatory factor analysis for the Eating Disorder Examination Questionnaire: Evidence supporting a three-factor model.

    Science.gov (United States)

    Barnes, Jennifer; Prescott, Tim; Muncer, Steven

    2012-12-01

    The purpose of this investigation was to compare the goodness-of-fit of a one factor model with the four factor model proposed by Fairburn (2008) and the three factor model proposed by Peterson and colleagues (2007) for the Eating Disorder Examination Questionnaire (EDE-Q 6.0) (Fairburn and Beglin, 1994). Using a cross-sectional design, the EDE-Q was completed by 569 adults recruited from universities and eating disorder charities in the UK. Confirmatory factor analysis (CFA) was carried out for both the student and non-student groups. CFA indicated that Peterson et al.'s (2007) three factor model was the best fit for both groups within the current data sample. Acceptable levels of internal reliability were observed and there was clear evidence for a hierarchical factor of eating disorder. The results of this study provide support for the three factor model of the EDE-Q suggested by Peterson and colleagues (2007) in that this model was appropriate for both the student and non-student sample populations. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Analysing stratified medicine business models and value systems: innovation-regulation interactions.

    Science.gov (United States)

    Mittra, James; Tait, Joyce

    2012-09-15

    Stratified medicine offers both opportunities and challenges to the conventional business models that drive pharmaceutical R&D. Given the increasingly unsustainable blockbuster model of drug development, due in part to maturing product pipelines, alongside increasing demands from regulators, healthcare providers and patients for higher standards of safety, efficacy and cost-effectiveness of new therapies, stratified medicine promises a range of benefits to pharmaceutical and diagnostic firms as well as healthcare providers and patients. However, the transition from 'blockbusters' to what might now be termed 'niche-busters' will require the adoption of new, innovative business models, the identification of different and perhaps novel types of value along the R&D pathway, and a smarter approach to regulation to facilitate innovation in this area. In this paper we apply the Innogen Centre's interdisciplinary ALSIS methodology, which we have developed for the analysis of life science innovation systems in contexts where the value creation process is lengthy, expensive and highly uncertain, to this emerging field of stratified medicine. In doing so, we consider the complex collaboration, timing, coordination and regulatory interactions that shape business models, value chains and value systems relevant to stratified medicine. More specifically, we explore in some depth two convergence models for co-development of a therapy and diagnostic before market authorisation, highlighting the regulatory requirements and policy initiatives within the broader value system environment that have a key role in determining the probable success and sustainability of these models.

  8. Analysing the Costs of Integrated Care: A Case on Model Selection for Chronic Care Purposes

    Directory of Open Access Journals (Sweden)

    Marc Carreras

    2016-08-01

    Full Text Available Background: The objective of this study is to investigate whether the algorithm proposed by Manning and Mullahy, a consolidated health economics procedure, can also be used to estimate individual costs for different groups of healthcare services in the context of integrated care. Methods: A cross-sectional study focused on the population of the Baix Empordà (Catalonia, Spain) for the year 2012 (N = 92,498 individuals). A set of individual cost models as a function of sex, age and morbidity burden was fitted, and individual healthcare costs were calculated using a retrospective full-costing system. The individual morbidity burden was inferred using the Clinical Risk Groups (CRG) patient classification system. Results: Depending on the characteristics of the data, and according to the algorithm criteria, the choice of model was a linear model on the log of costs or a generalized linear model with a log link. We checked for goodness of fit, accuracy, linear structure and heteroscedasticity for the models obtained. Conclusion: The proposed algorithm identified a set of suitable cost models for the distinct groups of services integrated care entails. The individual morbidity burden was found to be indispensable when allocating appropriate resources to targeted individuals.
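
    A sketch of the two candidate specifications the algorithm chooses between, on invented data: OLS on log costs versus a gamma GLM with a log link. This is illustrative only, not the study's code, and the covariates are placeholders.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(6)
      n = 5000
      df = pd.DataFrame({
          "age": rng.integers(18, 95, n),
          "female": rng.integers(0, 2, n),
          "morbidity": rng.integers(1, 6, n),      # toy morbidity-burden score
      })
      mu = np.exp(3.0 + 0.02 * df.age + 0.1 * df.female + 0.5 * df.morbidity)
      df["cost"] = rng.gamma(shape=2.0, scale=mu / 2.0)

      # (a) linear model on the log of costs
      ols_log = smf.ols("np.log(cost) ~ age + female + morbidity", data=df).fit()
      # (b) gamma GLM with a log link (capital-L Log in recent statsmodels)
      glm_gamma = smf.glm("cost ~ age + female + morbidity", data=df,
                          family=sm.families.Gamma(link=sm.families.links.Log())).fit()
      print(ols_log.params.round(3))
      print(glm_gamma.params.round(3))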

  9. Evaluating mepindolol in a test model of examination anxiety in students.

    Science.gov (United States)

    Krope, P; Kohrs, A; Ott, H; Wagner, W; Fichte, K

    1982-03-01

    The effect of a single dose of a beta-blocker (5 or 10 mg mepindolol) during a written examination was investigated in two double-blind studies (N = 49 and 55 students, respectively). The question was whether, in comparison to placebo, the beta-blocker would diminish examination anxiety and improve the performance of highly complex tasks, while leaving the performance of less complex tasks unchanged. A reduction in examination anxiety after beta-blocker intake could not be demonstrated with a multi-level test model (which included the parameters self-rated anxiety, motor behaviour, task performance and physiology), although pulse rates were lowered significantly. An improvement in performance was not observed; by the same token, performance was not impaired by the beta-blocker. A hypothesis according to which a beta-blocker has an anxiolytic effect and improves performance, depending on the level of habitual examination anxiety, was tested post hoc but could not be confirmed. Ten of the subjects treated with 10 mg mepindolol complained of side effects, including dizziness, fatigue and headache.

  10. An LP-model to analyse economic and ecological sustainability on Dutch dairy farms: model presentation and application for experimental farm "de Marke"

    NARCIS (Netherlands)

    Calker, van K.J.; Berentsen, P.B.M.; Boer, de I.J.M.; Giesen, G.W.J.; Huirne, R.B.M.

    2004-01-01

    Farm level modelling can be used to determine how farm management adjustments and environmental policy affect different sustainability indicators. In this paper indicators were included in a dairy farm LP (linear programming)-model to analyse the effects of environmental policy and management
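
    A toy illustration of the farm-level LP idea on invented numbers; the activities, coefficients and constraints are placeholders, not the "de Marke" model, and serve only to show how a sustainability indicator can enter as a constraint.

      from scipy.optimize import linprog

      # decision variables: x = [hectares of grass, hectares of maize]
      gross_margin = [-1500.0, -1800.0]        # negated because linprog minimises
      A_ub = [
          [1.0, 1.0],                          # land constraint: total hectares <= 50
          [120.0, 90.0],                       # nitrogen surplus per ha, total <= 5000 kg
      ]
      b_ub = [50.0, 5000.0]
      res = linprog(gross_margin, A_ub=A_ub, b_ub=b_ub,
                    bounds=[(0, None), (0, None)], method="highs")
      print("optimal plan (ha grass, ha maize):", res.x.round(1),
            "gross margin:", round(-res.fun, 0))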

  11. Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses

    CERN Document Server

    Li, Shu; The ATLAS collaboration

    2017-01-01

    Proceeding for the poster presentation at LHCP2017, Shanghai, China on the topic of "Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses" (ATL-PHYS-SLIDE-2017-265 https://cds.cern.ch/record/2265389) Deadline: 01/09/2017

  12. AMME: an Automatic Mental Model Evaluation to analyse user behaviour traced in a finite, discrete state space.

    Science.gov (United States)

    Rauterberg, M

    1993-11-01

    To support the human factors engineer in designing a good user interface, a method has been developed to analyse the empirical data of the interactive user behaviour traced in a finite discrete state space. The sequences of actions produced by the user contain valuable information about the mental model of this user, the individual problem solution strategies for a given task and the hierarchical structure of the task-subtasks relationships. The presented method, AMME, can analyse the action sequences and automatically generate (1) a net description of the task dependent model of the user, (2) a complete state transition matrix, and (3) various quantitative measures of the user's task solving process. The behavioural complexity of task-solving processes carried out by novices has been found to be significantly larger than the complexity of task-solving processes carried out by experts.
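
    A minimal sketch of the kind of processing such a tool performs: deriving a state transition matrix and a crude behavioural-complexity indicator from a logged action sequence (the sequence below is invented).

      import numpy as np

      actions = ["open", "edit", "save", "edit", "save", "close",
                 "open", "edit", "edit", "save", "close"]
      states = sorted(set(actions))
      index = {s: i for i, s in enumerate(states)}

      counts = np.zeros((len(states), len(states)), dtype=int)
      for a, b in zip(actions[:-1], actions[1:]):    # count observed transitions
          counts[index[a], index[b]] += 1
      print("states:", states)
      print(counts)

      # number of distinct transitions used, one crude indicator of behavioural complexity
      print("distinct transitions:", int((counts > 0).sum()))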

  13. Model-driven meta-analyses for informing health care: a diabetes meta-analysis as an exemplar.

    Science.gov (United States)

    Brown, Sharon A; Becker, Betsy Jane; García, Alexandra A; Brown, Adama; Ramírez, Gilbert

    2015-04-01

    A relatively novel type of meta-analysis, a model-driven meta-analysis, involves the quantitative synthesis of descriptive, correlational data and is useful for identifying key predictors of health outcomes and informing clinical guidelines. Few such meta-analyses have been conducted and thus, large bodies of research remain unsynthesized and uninterpreted for application in health care. We describe the unique challenges of conducting a model-driven meta-analysis, focusing primarily on issues related to locating a sample of published and unpublished primary studies, extracting and verifying descriptive and correlational data, and conducting analyses. A current meta-analysis of the research on predictors of key health outcomes in diabetes is used to illustrate our main points.

  14. On the Structure of Personality Disorder Traits: Conjoint Analyses of the CAT-PD, PID-5, and NEO-PI-3 Trait Models

    Science.gov (United States)

    Wright, Aidan G.C.; Simms, Leonard J.

    2014-01-01

    The current study examines the relations among contemporary models of pathological and normal range personality traits. Specifically, we report on (a) conjoint exploratory factor analyses of the Computerized Adaptive Test of Personality Disorder static form (CAT-PD-SF) with the Personality Inventory for the DSM-5 (PID-5; Krueger et al., 2012) and NEO Personality Inventory-3 First Half (NEO-PI-3FH; McCrae & Costa, 2007), and (b) unfolding hierarchical analyses of the three measures in a large general psychiatric outpatient sample (N = 628; 64% Female). A five-factor solution provided conceptually coherent alignment among the CAT-PD-SF, PID-5, and NEO-PI-3FH scales. Hierarchical solutions suggested that higher-order factors bear strong resemblance to dimensions that emerge from structural models of psychopathology (e.g., Internalizing and Externalizing spectra). These results demonstrate that the CAT-PD-SF adheres to the consensual structure of broad trait domains at the five-factor level. Additionally, patterns of scale loadings further inform questions of structure and bipolarity of facet and domain level constructs. Finally, hierarchical analyses strengthen the argument for using broad dimensions that span normative and pathological functioning to scaffold a quantitatively derived phenotypic structure of psychopathology to orient future research on explanatory, etiological, and maintenance mechanisms. PMID:24588061
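
    A hypothetical sketch of a conjoint exploratory factor analysis: facet scores from several inventories pooled into one matrix and factored together. The data are simulated and a five-factor solution is assumed, as in the record; the varimax rotation requires a reasonably recent scikit-learn.

      import numpy as np
      from sklearn.decomposition import FactorAnalysis
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(7)
      n = 600
      latent = rng.normal(size=(n, 5))                    # five latent trait domains
      loadings = rng.normal(scale=0.6, size=(5, 40))      # 40 pooled facet scales
      scores = latent @ loadings + rng.normal(scale=0.5, size=(n, 40))

      fa = FactorAnalysis(n_components=5, rotation="varimax", random_state=0)
      fa.fit(StandardScaler().fit_transform(scores))
      print("loading matrix shape (scales x factors):", fa.components_.T.shape)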

  15. Multicollinearity in prognostic factor analyses using the EORTC QLQ-C30: identification and impact on model selection.

    Science.gov (United States)

    Van Steen, Kristel; Curran, Desmond; Kramer, Jocelyn; Molenberghs, Geert; Van Vreckem, Ann; Bottomley, Andrew; Sylvester, Richard

    2002-12-30

    Clinical and quality of life (QL) variables from an EORTC clinical trial of first line chemotherapy in advanced breast cancer were used in a prognostic factor analysis of survival and response to chemotherapy. For response, different final multivariate models were obtained from forward and backward selection methods, suggesting a disconcerting instability. Quality of life was measured using the EORTC QLQ-C30 questionnaire completed by patients. Subscales on the questionnaire are known to be highly correlated, and therefore it was hypothesized that multicollinearity contributed to model instability. A correlation matrix indicated that global QL was highly correlated with 7 out of 11 variables. In a first attempt to explore multicollinearity, we used global QL as dependent variable in a regression model with other QL subscales as predictors. Afterwards, standard diagnostic tests for multicollinearity were performed. An exploratory principal components analysis and factor analysis of the QL subscales identified at most three important components and indicated that inclusion of global QL made minimal difference to the loadings on each component, suggesting that it is redundant in the model. In a second approach, we advocate a bootstrap technique to assess the stability of the models. Based on these analyses and since global QL exacerbates problems of multicollinearity, we therefore recommend that global QL be excluded from prognostic factor analyses using the QLQ-C30. The prognostic factor analysis was rerun without global QL in the model, and selected the same significant prognostic factors as before.
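
    A sketch of one standard multicollinearity diagnostic mentioned above, variance inflation factors, computed on simulated, deliberately correlated predictors rather than the trial data.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm
      from statsmodels.stats.outliers_influence import variance_inflation_factor

      rng = np.random.default_rng(8)
      n = 300
      physical = rng.normal(size=n)
      emotional = 0.8 * physical + rng.normal(scale=0.4, size=n)
      global_ql = 0.6 * physical + 0.5 * emotional + rng.normal(scale=0.3, size=n)

      X = sm.add_constant(pd.DataFrame({"physical": physical,
                                        "emotional": emotional,
                                        "global_ql": global_ql}))
      for i in range(1, X.shape[1]):           # skip the constant column
          print(X.columns[i], round(variance_inflation_factor(X.values, i), 2))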

  16. Analysing improvements to on-street public transport systems: a mesoscopic model approach

    DEFF Research Database (Denmark)

    Ingvardson, Jesper Bláfoss; Kornerup Jensen, Jonas; Nielsen, Otto Anker

    2017-01-01

    Light rail transit and bus rapid transit have shown to be efficient and cost-effective in improving public transport systems in cities around the world. As these systems comprise various elements, which can be tailored to any given setting, e.g. pre-board fare-collection, holding strategies...... and other advanced public transport systems (APTS), the attractiveness of such systems depends heavily on their implementation. In the early planning stage it is advantageous to deploy simple and transparent models to evaluate possible ways of implementation. For this purpose, the present study develops...... a mesoscopic model which makes it possible to evaluate public transport operations in details, including dwell times, intelligent traffic signal timings and holding strategies while modelling impacts from other traffic using statistical distributional data thereby ensuring simplicity in use and fast...

  17. Latent Variable Modelling and Item Response Theory Analyses in Marketing Research

    Directory of Open Access Journals (Sweden)

    Brzezińska Justyna

    2016-12-01

    Full Text Available Item Response Theory (IRT) is a modern statistical method using latent variables, designed to model the interaction between a subject's ability and the item-level stimuli (difficulty, guessing). Item responses are treated as the outcome (dependent) variables, and the examinee's ability and the items' characteristics are the latent predictor (independent) variables. IRT models the relationship between a respondent's trait (ability, attitude) and the pattern of item responses. Thus, the estimation of individual latent traits can differ even for two individuals with the same total scores. IRT scores can yield additional benefits, and this will be discussed in detail. In this paper, the theory and an application using R software, with packages designed for IRT modelling, are presented.
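
    The paper presents IRT with R packages; purely to illustrate the underlying idea in this document's sketch language, the hypothetical Python snippet below fits a single two-parameter logistic item by maximum likelihood, treating abilities as known to keep the example short (a deliberate simplification of real IRT estimation).

      import numpy as np
      from scipy.optimize import minimize
      from scipy.special import expit

      rng = np.random.default_rng(9)
      theta = rng.normal(size=1000)                 # person abilities (assumed known here)
      true_a, true_b = 1.2, 0.3                     # item discrimination and difficulty
      responses = rng.binomial(1, expit(true_a * (theta - true_b)))

      def neg_log_lik(params):
          a, b = params
          p = expit(a * (theta - b))                # 2PL item response function
          return -np.sum(responses * np.log(p) + (1 - responses) * np.log(1 - p))

      fit = minimize(neg_log_lik, x0=[1.0, 0.0], method="Nelder-Mead")
      print("estimated (a, b):", np.round(fit.x, 2))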

  18. Psychological and physical dimensions explaining life satisfaction among the elderly: a structural model examination.

    Science.gov (United States)

    Meléndez, Juan Carlos; Tomás, José Manuel; Oliver, Amparo; Navarro, Esperanza

    2009-01-01

    The aim of the present paper is to analyze the effects of psychological well-being, physical functioning and socio-demographic factors on life satisfaction. Both a bivariate and a multivariate level of analyses have been used. Finally, a structural model explaining life satisfaction has been developed and validated. With respect to bivariate relations, there was evidence of significant positive relations between psychological well-being dimensions and life satisfaction and between physical conditions and life satisfaction as well. Also, as age increased there was a slow decrease in life satisfaction. Educational level was positively related to life satisfaction. A structural model gave valuable information about the pattern of multivariate relationships among the variables. A first result of the model was the large effect of physical and psychological well-being on life satisfaction, albeit it was psychological well-being the major predictor of life satisfaction. A second result was that the effects of socio-demographic variables on life satisfaction were low and they operated through the effects that maintain either on psychological well-being (or its individual indicators) or on physical conditions. The role gender or age played was indirect rather than direct.

  19. Modeling human papillomavirus and cervical cancer in the United States for analyses of screening and vaccination

    Directory of Open Access Journals (Sweden)

    Ortendahl Jesse

    2007-10-01

    Full Text Available Abstract Background To provide quantitative insight into current U.S. policy choices for cervical cancer prevention, we developed a model of human papillomavirus (HPV) and cervical cancer, explicitly incorporating uncertainty about the natural history of disease. Methods We developed a stochastic microsimulation of cervical cancer that distinguishes different HPV types by their incidence, clearance, persistence, and progression. Input parameter sets were sampled randomly from uniform distributions, and simulations undertaken with each set. Through systematic reviews and formal data synthesis, we established multiple epidemiologic targets for model calibration, including age-specific prevalence of HPV by type, age-specific prevalence of cervical intraepithelial neoplasia (CIN), HPV type distribution within CIN and cancer, and age-specific cancer incidence. For each set of sampled input parameters, likelihood-based goodness-of-fit (GOF) scores were computed based on comparisons between model-predicted outcomes and calibration targets. Using 50 randomly resampled, good-fitting parameter sets, we assessed the external consistency and face validity of the model, comparing predicted screening outcomes to independent data. To illustrate the advantage of this approach in reflecting parameter uncertainty, we used the 50 sets to project the distribution of health outcomes in U.S. women under different cervical cancer prevention strategies. Results Approximately 200 good-fitting parameter sets were identified from 1,000,000 simulated sets. Modeled screening outcomes were externally consistent with results from multiple independent data sources. Based on 50 good-fitting parameter sets, the expected reductions in lifetime risk of cancer with annual or biennial screening were 76% (range across 50 sets: 69–82%) and 69% (60–77%), respectively. The reduction from vaccination alone was 75%, although it ranged from 60% to 88%, reflecting considerable parameter

  20. Analysing the distribution of synaptic vesicles using a spatial point process model

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta

    2014-01-01

    Stress can affect the brain functionality in many ways. As the synaptic vesicles have a major role in nervous signal transportation in synapses, their distribution in relationship to the active zone is very important in studying the neuron responses. We study the effect of stress on brain functio...... in the two groups. The spatial distributions are modelled using spatial point process models with an inhomogeneous conditional intensity and repulsive pairwise interactions. Our results verify the hypothesis that the two groups have different spatial distributions....

  1. Analysing Amazonian forest productivity using a new individual and trait-based model (TFS v.1)

    Science.gov (United States)

    Fyllas, N. M.; Gloor, E.; Mercado, L. M.; Sitch, S.; Quesada, C. A.; Domingues, T. F.; Galbraith, D. R.; Torre-Lezama, A.; Vilanova, E.; Ramírez-Angulo, H.; Higuchi, N.; Neill, D. A.; Silveira, M.; Ferreira, L.; Aymard C., G. A.; Malhi, Y.; Phillips, O. L.; Lloyd, J.

    2014-07-01

    Repeated long-term censuses have revealed large-scale spatial patterns in Amazon basin forest structure and dynamism, with some forests in the west of the basin having up to a twice as high rate of aboveground biomass production and tree recruitment as forests in the east. Possible causes for this variation could be the climatic and edaphic gradients across the basin and/or the spatial distribution of tree species composition. To help understand causes of this variation a new individual-based model of tropical forest growth, designed to take full advantage of the forest census data available from the Amazonian Forest Inventory Network (RAINFOR), has been developed. The model allows for within-stand variations in tree size distribution and key functional traits and between-stand differences in climate and soil physical and chemical properties. It runs at the stand level with four functional traits - leaf dry mass per area (Ma), leaf nitrogen (NL) and phosphorus (PL) content and wood density (DW) varying from tree to tree - in a way that replicates the observed continua found within each stand. We first applied the model to validate canopy-level water fluxes at three eddy covariance flux measurement sites. For all three sites the canopy-level water fluxes were adequately simulated. We then applied the model at seven plots, where intensive measurements of carbon allocation are available. Tree-by-tree multi-annual growth rates generally agreed well with observations for small trees, but with deviations identified for larger trees. At the stand level, simulations at 40 plots were used to explore the influence of climate and soil nutrient availability on the gross (ΠG) and net (ΠN) primary production rates as well as the carbon use efficiency (CU). Simulated ΠG, ΠN and CU were not associated with temperature. On the other hand, all three measures of stand level productivity were positively related to both mean annual precipitation and soil nutrient status

  2. Using the Single Prolonged Stress Model to Examine the Pathophysiology of PTSD

    Directory of Open Access Journals (Sweden)

    Rimenez R. Souza

    2017-09-01

    Full Text Available The endurance of memories of emotionally arousing events serves the adaptive role of minimizing future exposure to danger and reinforcing rewarding behaviors. However, following a traumatic event, a subset of individuals suffers from persistent pathological symptoms such as those seen in posttraumatic stress disorder (PTSD. Despite the availability of pharmacological treatments and evidence-based cognitive behavioral therapy, a considerable number of PTSD patients do not respond to the treatment, or show partial remission and relapse of the symptoms. In controlled laboratory studies, PTSD patients show deficient ability to extinguish conditioned fear. Failure to extinguish learned fear could be responsible for the persistence of PTSD symptoms such as elevated anxiety, arousal, and avoidance. It may also explain the high non-response and dropout rates seen during treatment. Animal models are useful for understanding the pathophysiology of the disorder and the development of new treatments. This review examines studies in a rodent model of PTSD with the goal of identifying behavioral and physiological factors that predispose individuals to PTSD symptoms. Single prolonged stress (SPS is a frequently used rat model of PTSD that involves exposure to several successive stressors. SPS rats show PTSD-like symptoms, including impaired extinction of conditioned fear. Since its development by the Liberzon lab in 1997, the SPS model has been referred to by more than 200 published papers. Here we consider the findings of these studies and unresolved questions that may be investigated using the model.

  3. Integrated modeling/analyses of thermal-shock effects in SNS targets

    Energy Technology Data Exchange (ETDEWEB)

    Taleyarkhan, R.P.; Haines, J. [Oak Ridge National Lab., TN (United States)

    1996-06-01

    In a spallation neutron source (SNS), extremely rapid energy pulses are introduced in target materials such as mercury, lead, tungsten, uranium, etc. Shock phenomena in such systems may possibly lead to structural material damage beyond the design basis. As expected, the progression of shock waves and interaction with surrounding materials for liquid targets can be quite different from that in solid targets. The purpose of this paper is to describe ORNL's modeling framework for 'integrated' assessment of thermal-shock issues in liquid and solid target designs. This modeling framework is being developed based upon expertise developed from past reactor safety studies, especially those related to the Advanced Neutron Source (ANS) Project. Unlike previous separate-effects modeling approaches employed (for evaluating target behavior when subjected to thermal shocks), the present approach treats the overall problem in a coupled manner using state-of-the-art equations of state for materials of interest (viz., mercury, tungsten and uranium). That is, the modeling framework simultaneously accounts for localized (and distributed) compression pressure pulse generation due to transient heat deposition, the transport of this shock wave outwards, interaction with surrounding boundaries, feedback to mercury from structures, multi-dimensional reflection patterns, and stress-induced (possible) breakup or fracture.

  4. Testing Mediation Using Multiple Regression and Structural Equation Modeling Analyses in Secondary Data

    Science.gov (United States)

    Li, Spencer D.

    2011-01-01

    Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…
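
    As a minimal illustration of one multiple-regression approach the article reviews (a Baron-Kenny-style decomposition with a Sobel test), here is a hypothetical sketch on simulated data; the variable names are invented.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(10)
      n = 500
      x = rng.normal(size=n)                        # predictor
      m = 0.5 * x + rng.normal(size=n)              # mediator
      y = 0.4 * m + 0.1 * x + rng.normal(size=n)    # outcome

      fit_a = sm.OLS(m, sm.add_constant(x)).fit()                          # a path (x -> m)
      fit_b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit()    # c' and b paths
      a, sa = fit_a.params[1], fit_a.bse[1]
      b, sb = fit_b.params[2], fit_b.bse[2]

      indirect = a * b                                                     # mediated effect
      sobel_se = np.sqrt(b**2 * sa**2 + a**2 * sb**2)
      print("indirect effect:", round(indirect, 3),
            "Sobel z:", round(indirect / sobel_se, 2))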

  5. Using Latent Trait Measurement Models to Analyse Attitudinal Data: A Synthesis of Viewpoints.

    Science.gov (United States)

    Andrich, David

    A Rasch model for ordered response categories is derived and it is shown that it retains the key features of both the Thurstone and Likert approaches to studying attitude. Key features of the latter approaches are reviewed. Characteristics in common with the Thurstone approach are: statements are scaled with respect to their affective values;…

  6. An anisotropic numerical model for thermal hydraulic analyses: application to liquid metal flow in fuel assemblies

    Science.gov (United States)

    Vitillo, F.; Vitale Di Maio, D.; Galati, C.; Caruso, G.

    2015-11-01

    A CFD analysis has been carried out to study the thermal-hydraulic behavior of liquid metal coolant in a fuel assembly of triangular lattice. In order to obtain fast and accurate results, the isotropic two-equation RANS approach is often used in nuclear engineering applications. A different approach is provided by Non-Linear Eddy Viscosity Models (NLEVM), which try to take into account anisotropic effects by a nonlinear formulation of the Reynolds stress tensor. This approach is very promising, as it results in a very good numerical behavior and in a potentially better fluid flow description than classical isotropic models. An Anisotropic Shear Stress Transport (ASST) model, implemented into a commercial software, has been applied in previous studies, showing very trustful results for a large variety of flows and applications. In the paper, the ASST model has been used to perform an analysis of the fluid flow inside the fuel assembly of the ALFRED lead cooled fast reactor. Then, a comparison between the results of wall-resolved conjugated heat transfer computations and the results of a decoupled analysis using a suitable thermal wall-function previously implemented into the solver has been performed and presented.

  7. Cyclodextrin--piroxicam inclusion complexes: analyses by mass spectrometry and molecular modelling

    Science.gov (United States)

    Gallagher, Richard T.; Ball, Christopher P.; Gatehouse, Deborah R.; Gates, Paul J.; Lobell, Mario; Derrick, Peter J.

    1997-11-01

    Mass spectrometry has been used to investigate the nature of the non-covalent complexes formed between the anti-inflammatory drug piroxicam and α-, β- and γ-cyclodextrins. Energies of these complexes have been calculated by means of molecular modelling. There is a correlation between peak intensities in the mass spectra and the calculated energies.

  8. Survival data analyses in ecotoxicology: critical effect concentrations, methods and models. What should we use?

    Science.gov (United States)

    Forfait-Dubuc, Carole; Charles, Sandrine; Billoir, Elise; Delignette-Muller, Marie Laure

    2012-05-01

    In ecotoxicology, critical effect concentrations are the most common indicators to quantitatively assess risks for species exposed to contaminants. Three types of critical effect concentrations are classically used: lowest/no observed effect concentration (LOEC/NOEC), LC(x) (x% lethal concentration) and NEC (no effect concentration). In this article, for each of these three types of critical effect concentration, we compared methods or models used for their estimation and proposed one as the most appropriate. We then compared these critical effect concentrations to each other. For that, we used nine survival data sets corresponding to D. magna exposure to nine different contaminants, for which the time-course of the response was monitored. Our results showed that: (i) LOEC/NOEC values at day 21 were method-dependent, and that the Cochran-Armitage test with a step-down procedure appeared to be the most protective for the environment; (ii) all tested concentration-response models we compared gave close values of LC50 at day 21; nevertheless, the Weibull model had the lowest global mean deviance; (iii) a simple threshold NEC model, both concentration and time dependent, more completely described the whole data set (i.e. all time points) and enabled a precise estimation of the NEC. We then compared the three critical effect concentrations and argued that the use of the NEC might be a good option for environmental risk assessment.
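
    As an illustration of how an LC50 can be read off a fitted concentration-response curve, here is a hypothetical sketch using a two-parameter log-logistic form on invented survival data (not the study's Weibull fit or its data sets).

      import numpy as np
      from scipy.optimize import curve_fit

      def log_logistic(conc, lc50, slope):
          """Expected survival fraction at concentration `conc`."""
          return 1.0 / (1.0 + (conc / lc50) ** slope)

      conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0])          # invented concentrations
      survival = np.array([0.98, 0.95, 0.80, 0.45, 0.15, 0.03])  # invented observations

      params, cov = curve_fit(log_logistic, conc, survival, p0=[2.0, 1.0])
      lc50, slope = params
      print(f"LC50 = {lc50:.2f}, slope = {slope:.2f}")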

  9. Transformation of Baumgarten's aesthetics into a tool for analysing works and for modelling

    DEFF Research Database (Denmark)

    Thomsen, Bente Dahl

    2006-01-01

      Abstract: Is this the best form, or does it need further work? The aesthetic object does not possess the perfect qualities; but how do I proceed with the form? These are questions that all modellers ask themselves at some point, and with which they can grapple for days - even weeks - before the...

  10. Modelling and analysing 3D buildings with a primal/dual data structure

    NARCIS (Netherlands)

    Boguslawski, P.; Gold, C.; Ledoux, H.

    2011-01-01

    While CityGML permits us to represent 3D city models, its use for applications where spatial analysis and/or real-time modifications are required is limited since at this moment the possibility to store topological relationships between the elements is rather limited and often not exploited. We pres

  12. A multi-scale modelling approach for analysing landscape service dynamics

    NARCIS (Netherlands)

    Willemen, L.; Veldkamp, A.; Verburg, P.H.; Hein, L.G.; Leemans, R.

    2012-01-01

    Shifting societal needs drive and shape landscapes and the provision of their services. This paper presents a modelling approach to visualize the regional spatial and temporal dynamics in landscape service supply as a function of changing landscapes and societal demand. This changing demand can resu

  13. GSEVM v.2: MCMC software to analyse genetically structured environmental variance models

    DEFF Research Database (Denmark)

    Ibáñez-Escriche, N; Garcia, M; Sorensen, D

    2010-01-01

    This note provides a description of software that allows to fit Bayesian genetically structured variance models using Markov chain Monte Carlo (MCMC). The gsevm v.2 program was written in Fortran 90. The DOS and Unix executable programs, the user's guide, and some example files are freely availab...

  14. Analysing outsourcing policies in an asset management context: a six-stage model

    NARCIS (Netherlands)

    Schoenmaker, R.; Verlaan, J.G.

    2013-01-01

    Asset managers of civil infrastructure are increasingly outsourcing their maintenance. Whereas maintenance is a cyclic process, decisions to outsource are often project-based, confusing the discussion on the degree of outsourcing. This paper presents a six-stage model that facilitates

  16. Analysing green supply chain management practices in Brazil's electrical/electronics industry using interpretive structural modelling

    DEFF Research Database (Denmark)

    Govindan, Kannan; Kannan, Devika; Mathiyazhagan, K.

    2013-01-01

    that exists between GSCM practices with regard to their adoption within Brazilian electrical/electronic industry with the help of interpretive structural modelling (ISM). From the results, we infer that cooperation with customers for eco-design practice is driving other practices, and this practice acts...

  17. Sharing, caring, and surveilling: an actor-partner interdependence model examination of Facebook relational maintenance strategies.

    Science.gov (United States)

    McEwan, Bree

    2013-12-01

    Abstract Relational maintenance is connected to high quality friendships. Friendship maintenance behaviors may occur online via social networking sites. This study utilized an Actor-Partner Interdependence Model to examine how Facebook maintenance and surveillance affect friendship quality. Bryant and Marmo's (2012) Facebook maintenance scale was evaluated, revealing two factors: sharing and caring. Facebook surveillance was also measured. For friendship satisfaction and liking, significant positive actor and partner effects emerged for caring; significant negative actor, partner, and interaction effects emerged for sharing; and significant positive actor effects emerged for surveillance. For friendship closeness, significant positive actor effects emerged for caring and surveillance.

  18. Examining faking on personality inventories using unfolding item response theory models.

    Science.gov (United States)

    Scherbaum, Charles A; Sabet, Jennifer; Kern, Michael J; Agnello, Paul

    2013-01-01

    A concern about personality inventories in diagnostic and decision-making contexts is that individuals will fake. Although there is extensive research on faking, little research has focused on how perceptions of personality items change when individuals are faking or responding honestly. This research demonstrates how the delta parameter from the generalized graded unfolding item response theory model can be used to examine how individuals' perceptions about personality items might change when responding honestly or when faking. The results indicate that perceptions changed from honest to faking conditions for several neuroticism items. The direction of the change varied, indicating that faking can operate to increase or decrease scores within a personality factor.

  19. Analysing the Severity and Frequency of Traffic Crashes in Riyadh City Using Statistical Models

    Directory of Open Access Journals (Sweden)

    Saleh Altwaijri

    2012-12-01

    Full Text Available Traffic crashes in Riyadh city cause losses in the form of deaths, injuries and property damages, in addition to the pain and social tragedy affecting families of the victims. In 2005, there were a total of 47,341 injury traffic crashes in Riyadh city (19% of the total KSA crashes), and 9% of those crashes were severe. Road safety in Riyadh city may have been adversely affected by: high car ownership, migration of people to Riyadh city, daily trips reaching about 6 million, high income levels, low-cost petrol, drivers from different nationalities, young drivers, and tremendous growth in population, which creates a high level of mobility and transport activities in the city. The primary objective of this paper is therefore to explore factors affecting the severity and frequency of road crashes in Riyadh city using appropriate statistical models, aiming to establish effective safety policies ready to be implemented to reduce the severity and frequency of road crashes in Riyadh city. Crash data for Riyadh city were collected from the Higher Commission for the Development of Riyadh (HCDR) for a period of five years from 1425H to 1429H (roughly corresponding to 2004-2008). Crash data were classified into three categories: fatal, serious-injury and slight-injury. Two nominal response models have been developed: a standard multinomial logit model (MNL) and a mixed logit model, fitted to injury-related crash data. Due to a severe underreporting problem for slight-injury crashes, binary and mixed binary logistic regression models were also estimated for two categories of severity: fatal and serious crashes. For frequency, two count models such as Negative Binomial (NB) models were employed, and the unit of analysis was 168 HAIs (wards) in Riyadh city. Ward-level crash data are disaggregated by severity of the crash (such as fatal and serious injury crashes). The results from both multinomial and binary response models are found to be fairly consistent but
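
    The record above describes multinomial logit severity models and Negative Binomial count models. As a hedged illustration only (synthetic data, invented variable names, and none of the paper's mixed-logit extensions), the sketch below shows how such severity and frequency models can be fitted with statsmodels:

    ```python
    # Minimal sketch (not the paper's actual data): a multinomial logit model for
    # crash severity and a negative binomial model for ward-level crash counts,
    # using synthetic data and statsmodels. Variable names are illustrative only.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 2000

    # --- Severity model (MNL): 0 = slight, 1 = serious, 2 = fatal ---
    X = pd.DataFrame({
        "young_driver": rng.integers(0, 2, n),
        "night": rng.integers(0, 2, n),
        "speed_limit": rng.choice([60, 80, 120], n),
    })
    lin = 0.5 * X["young_driver"] + 0.4 * X["night"] + 0.01 * X["speed_limit"]
    probs = np.column_stack([np.ones(n), np.exp(lin - 1.0), np.exp(lin - 2.5)])
    probs /= probs.sum(axis=1, keepdims=True)
    severity = np.array([rng.choice(3, p=p) for p in probs])

    mnl = sm.MNLogit(severity, sm.add_constant(X)).fit(disp=False)
    print(mnl.summary())

    # --- Frequency model (NB): crash counts per ward (168 wards, made up) ---
    wards = pd.DataFrame({
        "road_km": rng.uniform(5, 50, 168),
        "pop_density": rng.uniform(1, 20, 168),
    })
    mu = np.exp(0.03 * wards["road_km"] + 0.05 * wards["pop_density"] + 1.0)
    counts = rng.negative_binomial(n=5, p=5 / (5 + mu))

    nb = sm.NegativeBinomial(counts, sm.add_constant(wards)).fit(disp=False)
    print(nb.summary())
    ```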

  20. Evaluation of a dentoalveolar model for testing mouthguards: stress and strain analyses.

    Science.gov (United States)

    Verissimo, Crisnicaw; Costa, Paulo Victor Moura; Santos-Filho, Paulo César Freitas; Fernandes-Neto, Alfredo Júlio; Tantbirojn, Daranee; Versluis, Antheunis; Soares, Carlos José

    2016-02-01

    Custom-fitted mouthguards are devices used to decrease the likelihood of dental trauma. The aim of this study was to develop an experimental bovine dentoalveolar model with periodontal ligament to evaluate mouthguard shock absorption, and impact strain and stress behavior. A pendulum impact device was developed to perform the impact tests with two different impact materials (steel ball and baseball). Five bovine jaws were selected with standard age and dimensions. Six-mm mouthguards were made for the impact tests. The jaws were fixed in a pendulum device and impacts were performed from 90, 60, and 45° angles, with and without mouthguard. Strain gauges were attached at the palatal surface of the impacted tooth. The strain and shock absorption of the mouthguards was calculated and data were analyzed with 3-way anova and Tukey's test (α = 0.05). Two-dimensional finite element models were created based on the cross-section of the bovine dentoalveolar model used in the experiment. A nonlinear dynamic impact analysis was performed to evaluate the strain and stress distributions. Without mouthguards, the increase in impact angulation significantly increased strains and stresses. Mouthguards reduced strain and stress values. Impact velocity, impact object (steel ball or baseball), and mouthguard presence affected the impact stresses and strains in a bovine dentoalveolar model. Experimental strain measurements and finite element models predicted similar behavior; therefore, both methodologies are suitable for evaluating the biomechanical performance of mouthguards. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Novel basophil- or eosinophil-depleted mouse models for functional analyses of allergic inflammation.

    Science.gov (United States)

    Matsuoka, Kunie; Shitara, Hiroshi; Taya, Choji; Kohno, Kenji; Kikkawa, Yoshiaki; Yonekawa, Hiromichi

    2013-01-01

    Basophils and eosinophils play important roles in various host defense mechanisms but also act as harmful effectors in allergic disorders. We generated novel basophil- and eosinophil-depletion mouse models by introducing the human diphtheria toxin (DT) receptor gene under the control of the mouse CD203c and the eosinophil peroxidase promoter, respectively, to study the critical roles of these cells in the immunological response. These mice exhibited selective depletion of the target cells upon DT administration. In the basophil-depletion model, DT administration attenuated a drop in body temperature in IgG-mediated systemic anaphylaxis in a dose-dependent manner and almost completely abolished the development of ear swelling in IgE-mediated chronic allergic inflammation (IgE-CAI), a typical skin swelling reaction with massive eosinophil infiltration. In contrast, in the eosinophil-depletion model, DT administration ameliorated the ear swelling in IgE-CAI whether DT was administered before, simultaneously, or after, antigen challenge, with significantly lower numbers of eosinophils infiltrating into the swelling site. These results confirm that basophils and eosinophils act as the initiator and the effector, respectively, in IgE-CAI. In addition, antibody array analysis suggested that eotaxin-2 is a principal chemokine that attracts proinflammatory cells, leading to chronic allergic inflammation. Thus, the two mouse models established in this study are potentially useful and powerful tools for studying the in vivo roles of basophils and eosinophils. The combination of basophil- and eosinophil-depletion mouse models provides a new approach to understanding the complicated mechanism of allergic inflammation in conditions such as atopic dermatitis and asthma.

  2. Novel basophil- or eosinophil-depleted mouse models for functional analyses of allergic inflammation.

    Directory of Open Access Journals (Sweden)

    Kunie Matsuoka

    Full Text Available Basophils and eosinophils play important roles in various host defense mechanisms but also act as harmful effectors in allergic disorders. We generated novel basophil- and eosinophil-depletion mouse models by introducing the human diphtheria toxin (DT) receptor gene under the control of the mouse CD203c and the eosinophil peroxidase promoter, respectively, to study the critical roles of these cells in the immunological response. These mice exhibited selective depletion of the target cells upon DT administration. In the basophil-depletion model, DT administration attenuated a drop in body temperature in IgG-mediated systemic anaphylaxis in a dose-dependent manner and almost completely abolished the development of ear swelling in IgE-mediated chronic allergic inflammation (IgE-CAI), a typical skin swelling reaction with massive eosinophil infiltration. In contrast, in the eosinophil-depletion model, DT administration ameliorated the ear swelling in IgE-CAI whether DT was administered before, simultaneously, or after, antigen challenge, with significantly lower numbers of eosinophils infiltrating into the swelling site. These results confirm that basophils and eosinophils act as the initiator and the effector, respectively, in IgE-CAI. In addition, antibody array analysis suggested that eotaxin-2 is a principal chemokine that attracts proinflammatory cells, leading to chronic allergic inflammation. Thus, the two mouse models established in this study are potentially useful and powerful tools for studying the in vivo roles of basophils and eosinophils. The combination of basophil- and eosinophil-depletion mouse models provides a new approach to understanding the complicated mechanism of allergic inflammation in conditions such as atopic dermatitis and asthma.

  3. Static simulation and analyses of mower's ROPS behavior in a finite element model.

    Science.gov (United States)

    Wang, X; Ayers, P; Womac, A R

    2009-10-01

    The goal of this research was to numerically predict the maximum lateral force acting on a mower rollover protective structure (ROPS) and the energy absorbed by the ROPS during a lateral continuous roll. A finite element (FE) model of the ROPS was developed using elastic and plastic theories including nonlinear relationships between stresses and strains in the plastic deformation range. Model validation was performed using field measurements of ROPS behavior in a lateral continuous roll on a purpose-designed test slope. Field tests determined the maximum deformation of the ROPS of a 900 kg John Deere F925 mower with a 183 cm (72 in.) mowing deck during an actual lateral roll on a pad and on soil. In the FE model, lateral force was gradually added to the ROPS until the field-measured maximum deformation was achieved. The results from the FE analysis indicated that the top corners of the ROPS enter slightly into the plastic deformation region. Maximum lateral forces acting on the ROPS during the simulated impact with the pad and soil were 19650 N and 22850 N, respectively. The FE model predicted that the energy absorbed by the ROPS (643 J) in the lateral roll test on the pad was less than the static test requirements (1575 J) of Organisation for Economic Co-operation and Development (OECD) Code 6. In addition, the energy absorbed by the ROPS (1813 J) in the test on the soil met the static test requirements (1575 J). Both the FE model and the field test results indicated that the deformed ROPS of the F925 mower with deck did not intrude into the occupant clearance zone during the lateral continuous or non-continuous roll.

  4. Interpersonal Proximity and Impression Formation: A Partial Examination of Hall's Proxemic Model.

    Science.gov (United States)

    Tesch, Frederick E

    1979-02-01

    Interpersonal proximity was examined as a cue in impression formation by varying factorially four interpersonal distances (2', 3 1/4', 5 1/2', 9 1/2'), sex of S (48 male and 48 female American college students), and sex of C (three male and three female students). Interpersonal proximity in the interview situation did not directly affect Ss' impressions of the Cs as measured by the Gough and Heilbrun Adjective Check List and Schutz's FIRO-B test. Although the four distances operationalized two of the interpersonal distance zones in Hall's normative model of human spatial behavior, Ss did not report the expected differences in the experiences of these two zones. The implications of the present findings for the limited role of interpersonal proximity as a cue in impression formation and for Hall's model are discussed.

  5. MONTE CARLO ANALYSES OF THE YALINA THERMAL FACILITY WITH SERPENT STEREOLITHOGRAPHY GEOMETRY MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, Y.

    2015-01-01

    This paper analyzes the YALINA Thermal subcritical assembly of Belarus using two different Monte Carlo transport programs, SERPENT and MCNP. The MCNP model is based on combinatorial geometry and a universe hierarchy, while the SERPENT model is based on Stereolithography geometry. The latter consists of unstructured triangulated surfaces defined by their normals and vertices. This geometry format is used by 3D printers, and here it was created using the CUBIT software, MATLAB scripts, and C code. All the Monte Carlo simulations have been performed using the ENDF/B-VII.0 nuclear data library. Both MCNP and SERPENT share the same geometry specifications, which describe the facility details without using any material homogenization. Three different configurations have been studied with different numbers of fuel rods. The three fuel configurations use 216, 245, or 280 fuel rods, respectively. The numerical simulations show that the agreement between SERPENT and MCNP results is within a few tens of pcm.
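
    Since the record notes that Stereolithography geometry is built from triangulated surfaces defined by normals and vertices, the following minimal sketch (file name and facet data invented) writes one ASCII STL facet to show that structure; it is not the CUBIT/MATLAB/C workflow used in the paper:

    ```python
    # Illustrative only: writes a minimal ASCII STL file (one triangular facet) to
    # show the "normal + three vertices" structure of Stereolithography geometry.
    # The facet data and file name are made up for the example.
    import numpy as np

    def write_ascii_stl(path, name, facets):
        """facets: iterable of (normal, (v1, v2, v3)) tuples of 3-vectors."""
        with open(path, "w") as f:
            f.write(f"solid {name}\n")
            for normal, verts in facets:
                f.write(f"  facet normal {normal[0]:e} {normal[1]:e} {normal[2]:e}\n")
                f.write("    outer loop\n")
                for v in verts:
                    f.write(f"      vertex {v[0]:e} {v[1]:e} {v[2]:e}\n")
                f.write("    endloop\n  endfacet\n")
            f.write(f"endsolid {name}\n")

    # One facet: a unit triangle in the z = 0 plane with outward normal +z.
    tri = (np.array([0.0, 0.0, 1.0]),
           (np.array([0.0, 0.0, 0.0]),
            np.array([1.0, 0.0, 0.0]),
            np.array([0.0, 1.0, 0.0])))
    write_ascii_stl("example_geometry.stl", "example", [tri])
    ```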

  6. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of this thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section...... on differences of statistical measures in section and the same measures in between sections. Three-dimensional (3D) datasets are reconstructed by using image registration techniques and estimated thicknesses. We distinguish the effect of stress by estimating the synaptic vesicle densities and modeling......, which leads to more accurate results. Finally, we present a thorough statistical investigation of the shape, orientation and interactions of the synaptic vesicles during active time of the synapse. Focused ion beam-scanning electron microscopy images of a male mammalian brain are used for this study...

  7. A note on the Fourier series model for analysing line transect data.

    Science.gov (United States)

    Buckland, S T

    1982-06-01

    The Fourier series model offers a powerful procedure for the estimation of animal population density from line transect data. The estimate is reliable over a wide range of detection functions. In contrast, analytic confidence intervals yield, at best, 90% confidence for nominal 95% intervals. Three solutions, one using Monte Carlo techniques, another making direct use of replicate lines and the third based on the jackknife method, are discussed and compared.
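
    For readers unfamiliar with the estimator, a minimal sketch of the Fourier series line transect density estimate is given below, using the standard textbook formulas; the simulated distances, truncation distance w, line length L and number of terms m are illustrative assumptions, not values from the note:

    ```python
    # A minimal sketch of the Fourier series line-transect estimator of density:
    # a_k = (2/(n*w)) * sum_i cos(k*pi*x_i/w), f(0) = 1/w + sum_k a_k,
    # D_hat = n*f(0)/(2L). All data and settings below are made up.
    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated perpendicular detection distances (half-normal detection function).
    x = np.abs(rng.normal(0.0, 10.0, size=120))
    w = 30.0          # truncation distance
    x = x[x <= w]
    n = x.size
    L = 50.0          # total transect length (same distance units)
    m = 4             # number of Fourier terms (in practice chosen by a stopping rule)

    # Fourier cosine coefficients and the detection density at zero distance.
    a = np.array([2.0 / (n * w) * np.cos(k * np.pi * x / w).sum()
                  for k in range(1, m + 1)])
    f0 = 1.0 / w + a.sum()

    # Estimated animal density per unit area.
    D_hat = n * f0 / (2.0 * L)
    print(f"f(0) = {f0:.4f}, D_hat = {D_hat:.4f} animals per unit area")
    ```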

  8. Analysing Amazonian forest productivity using a new individual and trait-based model (TFS v.1)

    Directory of Open Access Journals (Sweden)

    N. M. Fyllas

    2014-02-01

    Full Text Available Repeated long-term censuses have revealed large-scale spatial patterns in Amazon Basin forest structure and dynamism, with some forests in the west of the Basin having up to twice as high a rate of aboveground biomass production and tree recruitment as forests in the east. Possible causes for this variation could be the climatic and edaphic gradients across the Basin and/or the spatial distribution of tree species composition. To help understand causes of this variation, a new individual-based model of tropical forest growth designed to take full advantage of the forest census data available from the Amazonian Forest Inventory Network (RAINFOR) has been developed. The model incorporates variations in tree size distribution, functional traits and soil physical properties and runs at the stand level with four functional traits, leaf dry mass per area (Ma), leaf nitrogen (NL) and phosphorus (PL) content, and wood density (DW), used to represent a continuum of plant strategies found in tropical forests. We first applied the model to validate canopy-level water fluxes at three Amazon eddy flux sites. For all three sites the canopy-level water fluxes were adequately simulated. We then applied the model at seven plots, where intensive measurements of carbon allocation are available. Tree-by-tree multi-annual growth rates generally agreed well with observations for small trees, but with deviations identified for large trees. At the stand level, simulations at 40 plots were used to explore the influence of climate and soil fertility on the gross (ΠG) and net (ΠN) primary production rates as well as the carbon use efficiency (CU). Simulated ΠG, ΠN and CU were not associated with temperature. However, all three measures of stand-level productivity were positively related to annual precipitation and soil fertility.

  9. Systematic Selection of Key Logistic Regression Variables for Risk Prediction Analyses: A Five-Factor Maximum Model.

    Science.gov (United States)

    Hewett, Timothy E; Webster, Kate E; Hurd, Wendy J

    2017-08-16

    The evolution of clinical practice and medical technology has yielded an increasing number of clinical measures and tests to assess a patient's progression and return to sport readiness after injury. The plethora of available tests may be burdensome to clinicians in the absence of evidence that demonstrates the utility of a given measurement. Thus, there is a critical need to identify a discrete number of metrics to capture during clinical assessment to effectively and concisely guide patient care. The data sources included Pubmed and PMC Pubmed Central articles on the topic. Therefore, we present a systematic approach to injury risk analyses and how this concept may be used in algorithms for risk analyses for primary anterior cruciate ligament (ACL) injury in healthy athletes and patients after ACL reconstruction. In this article, we present the five-factor maximum model, which states that in any predictive model, a maximum of 5 variables will contribute in a meaningful manner to any risk factor analysis. We demonstrate how this model already exists for prevention of primary ACL injury, how this model may guide development of the second ACL injury risk analysis, and how the five-factor maximum model may be applied across the injury spectrum for development of the injury risk analysis.
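
    As a hedged sketch of the five-factor cap (not the authors' procedure), the snippet below selects at most five logistic-regression predictors by forward selection on synthetic data with invented variable names:

    ```python
    # Sketch of the "no more than five variables" idea: forward selection of
    # logistic-regression predictors, stopping at five. Data and variable names
    # are synthetic; this is not the authors' algorithm, just an illustration.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n, candidates = 400, [f"x{i}" for i in range(12)]
    X = pd.DataFrame(rng.normal(size=(n, len(candidates))), columns=candidates)
    risk = 1.2 * X["x0"] - 0.8 * X["x3"] + 0.6 * X["x7"]
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-risk)))

    selected = []
    while len(selected) < 5:
        best = None
        for c in candidates:
            if c in selected:
                continue
            model = sm.Logit(y, sm.add_constant(X[selected + [c]])).fit(disp=False)
            if best is None or model.aic < best[0]:
                best = (model.aic, c)
        selected.append(best[1])
    print("Selected (max five) predictors:", selected)
    ```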

  10. Hydrogeologic analyses in support of the conceptual model for the LANL Area G LLRW performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Vold, E.L.; Birdsell, K.; Rogers, D.; Springer, E.; Krier, D.; Turin, H.J.

    1996-04-01

    The Los Alamos National Laboratory low level radioactive waste disposal facility at Area G is currently completing a draft of the site Performance Assessment. Results from previous field studies have estimated a range in recharge rate up to 1 cm/yr. Recent estimates of unsaturated hydraulic conductivity for each stratigraphic layer under a unit gradient assumption show a wide range in recharge rate of 10⁻⁴ to 1 cm/yr depending upon location. Numerical computations show that a single net infiltration rate at the mesa surface does not match the moisture profile in each stratigraphic layer simultaneously, suggesting local source or sink terms possibly due to surface connected porous regions. The best fit to field data at deeper stratigraphic layers occurs for a net infiltration of about 0.1 cm/yr. A recent detailed analysis evaluated liquid phase vertical moisture flux, based on moisture profiles in several boreholes and van Genuchten fits to the hydraulic properties for each of the stratigraphic units. Results show a near surface infiltration region averages 8 m deep, below which is a dry, low moisture content, and low flux region, where liquid phase recharge averages to zero. Analysis shows this low flux region is dominated by vapor movement. Field data from tritium diffusion studies, from pressure fluctuation attenuation studies, and from comparisons of in-situ and core sample permeabilities indicate that the vapor diffusion is enhanced above that expected in the matrix and is presumably due to enhanced flow through the fractures. Below this dry region within the mesa is a moisture spike which analyses show corresponds to a moisture source. The likely physical explanation is seasonal transient infiltration through surface-connected fractures. This anomalous region is being investigated in current field studies, because it is critical in understanding the moisture flux which continues to deeper regions through the unsaturated zone.

  11. Best-fit model of exploratory and confirmatory factor analysis of the 2010 Medical Council of Canada Qualifying Examination Part I clinical decision-making cases.

    Science.gov (United States)

    Champlain, André F De

    2015-01-01

    This study aims to assess the fit of a number of exploratory and confirmatory factor analysis models to the 2010 Medical Council of Canada Qualifying Examination Part I (MCCQE1) clinical decision-making (CDM) cases. The outcomes of this study have important implications for a range of domains, including scoring and test development. The examinees included all first-time Canadian medical graduates and international medical graduates who took the MCCQE1 in spring or fall 2010. The fit of one- to five-factor exploratory models was assessed for the item response matrix of the 2010 CDM cases. Five confirmatory factor analytic models were also examined with the same CDM response matrix. The structural equation modeling software program Mplus was used for all analyses. Out of the five exploratory factor analytic models that were evaluated, a three-factor model provided the best fit. Factor 1 loaded on three medicine cases, two obstetrics and gynecology cases, and two orthopedic surgery cases. Factor 2 corresponded to pediatrics, and the third factor loaded on psychiatry cases. Among the five confirmatory factor analysis models examined in this study, three- and four-factor lifespan period models and the five-factor discipline models provided the best fit. The results suggest that knowledge of broad disciplinary domains best account for performance on CDM cases. In test development, particular effort should be placed on developing CDM cases according to broad discipline and patient age domains; CDM testlets should be assembled largely using the criteria of discipline and age.

  12. A new compact solid-state neutral particle analyser at ASDEX Upgrade: Setup and physics modeling

    Science.gov (United States)

    Schneider, P. A.; Blank, H.; Geiger, B.; Mank, K.; Martinov, S.; Ryter, F.; Weiland, M.; Weller, A.

    2015-07-01

    At ASDEX Upgrade (AUG), a new compact solid-state detector has been installed to measure the energy spectrum of fast neutrals based on the principle described by Shinohara et al. [Rev. Sci. Instrum. 75, 3640 (2004)]. The diagnostic relies on the usual charge exchange of supra-thermal fast-ions with neutrals in the plasma. Therefore, the measured energy spectra directly correspond to those of confined fast-ions with a pitch angle defined by the line of sight of the detector. Experiments in AUG showed the good signal-to-noise characteristics of the detector. It is energy calibrated and can measure energies of 40-200 keV with count rates of up to 140 kcps. The detector has an active view on one of the heating beams. The heating beam increases the neutral density locally; thereby, information about the central fast-ion velocity distribution is obtained. The measured fluxes are modeled with a newly developed module for the 3D Monte Carlo code F90FIDASIM [Geiger et al., Plasma Phys. Controlled Fusion 53, 65010 (2011)]. The modeling makes it possible to distinguish between the active (beam) and passive contributions to the signal. Thereby, the birth profile of the measured fast neutrals can be reconstructed. This model reproduces the measured energy spectra with good accuracy when the passive contribution is taken into account.

  13. A new compact solid-state neutral particle analyser at ASDEX Upgrade: Setup and physics modeling

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, P. A.; Blank, H.; Geiger, B.; Mank, K.; Martinov, S.; Ryter, F.; Weiland, M.; Weller, A. [Max-Planck-Institut für Plasmaphysik, Garching (Germany)

    2015-07-15

    At ASDEX Upgrade (AUG), a new compact solid-state detector has been installed to measure the energy spectrum of fast neutrals based on the principle described by Shinohara et al. [Rev. Sci. Instrum. 75, 3640 (2004)]. The diagnostic relies on the usual charge exchange of supra-thermal fast-ions with neutrals in the plasma. Therefore, the measured energy spectra directly correspond to those of confined fast-ions with a pitch angle defined by the line of sight of the detector. Experiments in AUG showed the good signal-to-noise characteristics of the detector. It is energy calibrated and can measure energies of 40-200 keV with count rates of up to 140 kcps. The detector has an active view on one of the heating beams. The heating beam increases the neutral density locally; thereby, information about the central fast-ion velocity distribution is obtained. The measured fluxes are modeled with a newly developed module for the 3D Monte Carlo code F90FIDASIM [Geiger et al., Plasma Phys. Controlled Fusion 53, 65010 (2011)]. The modeling makes it possible to distinguish between the active (beam) and passive contributions to the signal. Thereby, the birth profile of the measured fast neutrals can be reconstructed. This model reproduces the measured energy spectra with good accuracy when the passive contribution is taken into account.

  14. High-temperature series analyses of the classical Heisenberg and XY model

    CERN Document Server

    Adler, J; Janke, W

    1993-01-01

    Although there is now a good measure of agreement between Monte Carlo and high-temperature series expansion estimates for Ising ($n=1$) models, published results for the critical temperature from series expansions up to 12th order for the three-dimensional classical Heisenberg ($n=3$) and XY ($n=2$) model do not agree very well with recent high-precision Monte Carlo estimates. In order to clarify this discrepancy we have analyzed extended high-temperature series expansions of the susceptibility, the second correlation moment, and the second field derivative of the susceptibility, which have been derived a few years ago by Lüscher and Weisz for general $O(n)$ vector spin models on $D$-dimensional hypercubic lattices up to 14th order in $K \equiv J/k_B T$. By analyzing these series expansions in three dimensions with two different methods that allow for confluent correction terms, we obtain good agreement with the standard field theory exponent estimates and with the critical temperature estimates...
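
    The confluent-correction analyses of the paper are not reproduced here, but the basic ratio method that series analyses build on can be illustrated with a toy series whose critical point and exponent are known exactly (all values below are made up for the example):

    ```python
    # Toy illustration of the ratio method used in high-temperature series
    # analyses (not the paper's confluent-correction methods): for a series with
    # coefficients of chi(K) ~ (1 - K/Kc)^(-gamma), successive-coefficient ratios
    # extrapolate linearly in 1/n to 1/Kc, with slope (gamma - 1)/Kc.
    import numpy as np

    gamma_true, Kc_true = 1.39, 0.25   # made-up "exact" values for the toy series
    N = 14                             # series known to 14th order, as in the paper

    # Coefficients of chi(K) = (1 - K/Kc)^(-gamma), generated by recurrence.
    a = np.empty(N + 1)
    a[0] = 1.0
    for n in range(1, N + 1):
        a[n] = a[n - 1] * (n + gamma_true - 1.0) / n / Kc_true

    # Ratio method: r_n = a_n / a_{n-1} ~ (1/Kc) * (1 + (gamma - 1)/n).
    n_vals = np.arange(2, N + 1)
    r = a[n_vals] / a[n_vals - 1]
    slope, intercept = np.polyfit(1.0 / n_vals, r, 1)

    Kc_est = 1.0 / intercept
    gamma_est = 1.0 + slope / intercept
    print(f"Kc ~ {Kc_est:.4f} (true {Kc_true}), gamma ~ {gamma_est:.3f} (true {gamma_true})")
    ```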

  15. Metabolic model for the filamentous ‘Candidatus Microthrix parvicella' based on genomic and metagenomic analyses

    Science.gov (United States)

    Jon McIlroy, Simon; Kristiansen, Rikke; Albertsen, Mads; Michael Karst, Søren; Rossetti, Simona; Lund Nielsen, Jeppe; Tandoi, Valter; James Seviour, Robert; Nielsen, Per Halkjær

    2013-01-01

    ‘Candidatus Microthrix parvicella' is a lipid-accumulating, filamentous bacterium so far found only in activated sludge wastewater treatment plants, where it is a common causative agent of sludge separation problems. Despite attracting considerable interest, its detailed physiology is still unclear. In this study, the genome of the RN1 strain was sequenced and annotated, which facilitated the construction of a theoretical metabolic model based on available in situ and axenic experimental data. This model proposes that under anaerobic conditions, this organism accumulates preferentially long-chain fatty acids as triacylglycerols. Utilisation of trehalose and/or polyphosphate stores or partial oxidation of long-chain fatty acids may supply the energy required for anaerobic lipid uptake and storage. Comparing the genome sequence of this isolate with metagenomes from two full-scale wastewater treatment plants with enhanced biological phosphorus removal reveals high similarity, with few metabolic differences between the axenic and the dominant community ‘Ca. M. parvicella' strains. Hence, the metabolic model presented in this paper could be considered generally applicable to strains in full-scale treatment systems. The genomic information obtained here will provide the basis for future research into in situ gene expression and regulation. Such information will give substantial insight into the ecophysiology of this unusual and biotechnologically important filamentous bacterium. PMID:23446830

  16. Consequence modeling for nuclear weapons probabilistic cost/benefit analyses of safety retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Harvey, T.F.; Peters, L.; Serduke, F.J.D.; Hall, C.; Stephens, D.R.

    1998-01-01

    The consequence models used in former studies of costs and benefits of enhanced safety retrofits are considered for (1) fuel fires; (2) non-nuclear detonations; and, (3) unintended nuclear detonations. Estimates of consequences were made using a representative accident location, i.e., an assumed mixed suburban-rural site. We have explicitly quantified land-use impacts and human-health effects (e.g., prompt fatalities, prompt injuries, latent cancer fatalities, low levels of radiation exposure, and clean-up areas). Uncertainty in the wind direction is quantified and used in a Monte Carlo calculation to estimate a range of results for a fuel fire with uncertain respirable amounts of released Pu. We define a nuclear source term and discuss damage levels of concern. Ranges of damages are estimated by quantifying health impacts and property damages. We discuss our dispersal and prompt effects models in some detail. The models used to loft the Pu and fission products and their particle sizes are emphasized.

  18. Analysing the origin of long-range interactions in proteins using lattice models

    Directory of Open Access Journals (Sweden)

    Unger Ron

    2009-01-01

    Full Text Available Background Long-range communication is very common in proteins but the physical basis of this phenomenon remains unclear. In order to gain insight into this problem, we decided to explore whether long-range interactions exist in lattice models of proteins. Lattice models of proteins have proven to capture some of the basic properties of real proteins and, thus, can be used for elucidating general principles of protein stability and folding. Results Using a computational version of double-mutant cycle analysis, we show that long-range interactions emerge in lattice models even though they are not an input feature of them. The coupling energy of both short- and long-range pairwise interactions is found to become more positive (destabilizing) in a linear fashion with increasing 'contact-frequency', an entropic term that corresponds to the fraction of states in the conformational ensemble of the sequence in which the pair of residues is in contact. A mathematical derivation of the linear dependence of the coupling energy on 'contact-frequency' is provided. Conclusion Our work shows how 'contact-frequency' should be taken into account in attempts to stabilize proteins by introducing (or stabilizing) contacts in the native state and/or through 'negative design' of non-native contacts.

  19. Analyses of the redistribution of work following cardiac resynchronisation therapy in a patient specific model.

    Directory of Open Access Journals (Sweden)

    Steven Alexander Niederer

    Full Text Available Regulation of regional work is essential for efficient cardiac function. In patients with heart failure and electrical dysfunction such as left bundle branch block, regional work is often depressed in the septum. Following cardiac resynchronisation therapy (CRT) this heterogeneous distribution of work can be rebalanced by altering the pattern of electrical activation. To investigate the changes in regional work in these patients and the mechanisms underpinning the improved function following CRT, we have developed a personalised computational model. Simulations of electromechanical cardiac function in the model estimate the regional stress, strain and work pre- and post-CRT. These simulations predict that the increase in observed work performed by the septum following CRT is not due to an increase in the volume of myocardial tissue recruited during contraction but rather that the volume of recruited myocardium remains the same and the average peak work rate per unit volume increases. These increases in the peak average rate of work are attributed to slower and more effective contraction in the septum, as opposed to a change in active tension. Model results predict that this improved septal work rate following CRT is a result of resistance to septal contraction provided by the LV free wall. This resistance results in septal shortening over a longer period which, in turn, allows the septum to contract while generating higher levels of active tension to produce a higher work rate.

  20. Marginal estimation for multi-stage models: waiting time distributions and competing risks analyses.

    Science.gov (United States)

    Satten, Glen A; Datta, Somnath

    2002-01-15

    We provide non-parametric estimates of the marginal cumulative distribution of stage occupation times (waiting times) and non-parametric estimates of marginal cumulative incidence function (proportion of persons who leave stage j for stage j' within time t of entering stage j) using right-censored data from a multi-stage model. We allow for stage and path dependent censoring where the censoring hazard for an individual may depend on his or her natural covariate history such as the collection of stages visited before the current stage and their occupation times. Additional external time dependent covariates that may induce dependent censoring can also be incorporated into our estimates, if available. Our approach requires modelling the censoring hazard so that an estimate of the integrated censoring hazard can be used in constructing the estimates of the waiting times distributions. For this purpose, we propose the use of an additive hazard model which results in very flexible (robust) estimates. Examples based on data from burn patients and simulated data with tracking are also provided to demonstrate the performance of our estimators.

  1. A biophysically-based finite state machine model for analysing gastric experimental entrainment and pacing recordings

    Science.gov (United States)

    Sathar, Shameer; Trew, Mark L.; Du, Peng; O’Grady, Greg; Cheng, Leo K.

    2014-01-01

    Gastrointestinal motility is coordinated by slow waves (SWs) generated by the interstitial cells of Cajal (ICC). Experimental studies have shown that SWs spontaneously activate at different intrinsic frequencies in isolated tissue, whereas in intact tissues they are entrained to a single frequency. Gastric pacing has been used in an attempt to improve motility in disorders such as gastroparesis by modulating entrainment, but the optimal methods of pacing are currently unknown. Computational models can aid in the interpretation of complex in vivo recordings and help to determine optimal pacing strategies. However, previous computational models of SW entrainment are limited to the intrinsic pacing frequency as the primary determinant of the conduction velocity, and are not able to accurately represent the effects of external stimuli and electrical anisotropies. In this paper, we present a novel computationally efficient method for modelling SW propagation through the ICC network while accounting for conductivity parameters and fiber orientations. The method successfully reproduced experimental recordings of entrainment following gastric transection and the effects of gastric pacing on SW activity. It provides a reliable new tool for investigating gastric electrophysiology in normal and diseased states, and to guide and focus future experimental studies. PMID:24276722

  2. Study on dynamic response of embedded long span corrugated steel culverts using scaled model shaking table tests and numerical analyses

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A series of scaled-model shaking table tests and their simulation analyses using the dynamic finite element method were performed to clarify the dynamic behaviors and the seismic stability of embedded corrugated steel culverts under strong earthquakes like the 1995 Hyogoken-nanbu earthquake. The dynamic strains of the embedded culvert models and the seismic soil pressure acting on the models due to sinusoidal and random strong motions were investigated. This study verified that the corrugated culvert model was subjected to dynamic horizontal forces (lateral seismic soil pressure) from the surrounding ground, which caused large bending strains in the structure, and that the structures do not exceed the allowable plastic deformation and do not collapse during a strong earthquake like the Hyogoken-nanbu earthquake. The results obtained are useful for the design and construction of embedded long-span corrugated steel culverts in seismic regions.

  3. Examining CAM use disclosure using the Behavioral Model of Health Services Use.

    Science.gov (United States)

    Faith, Jennifer; Thorburn, Sheryl; Tippens, Kimberly M

    2013-10-01

    To improve understanding of factors that may influence disclosure of complementary and alternative medicine (CAM) use in the U.S. Cross-sectional survey. Data are from the 2001 Health Care Quality Survey (HCQS), a nationally representative study of adults aged 18 and older living in the continental United States. Using the Behavioral Model of Health Services Use, we conducted multivariate logistic regressions to identify factors associated with disclosing CAM use among the sub-sample of recent CAM users (n=1995). Disclosure of CAM use. Most CAM users (71.0%) disclosed their use of CAM to their doctors. Contextual, individual, and health behavior factors were associated with CAM use disclosure. Of particular interest, disclosure was significantly more likely among those who perceived high quality relationships with their providers (AOR=1.59, CI: 1.01, 2.49) and among those who had a regular source of medical care (AOR=1.54, CI: 1.03, 2.29). The odds of disclosure were also higher among those who used practitioner-provided CAM, with (AOR=2.02, CI: 1.34, 3.06) or without (AOR=1.52, CI: 1.05, 2.20) concurrent herbal medicine use, compared to those who used herbal medicines only. The Behavioral Model of Health Services Use is a useful framework for examining factors that may influence disclosure of CAM use. Further research should examine these relationships using more comprehensive measures. Copyright © 2013 Elsevier Ltd. All rights reserved.

  4. Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality.

    Science.gov (United States)

    Woodley, Hayden J R; Bourdage, Joshua S; Ogunfowora, Babatunde; Nguyen, Brenda

    2015-01-01

    The construct of equity sensitivity describes an individual's preference about his/her desired input to outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called "Benevolents." Individuals low on equity sensitivity are more outcome oriented, and are described as "Entitleds." Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity, and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity.

  5. Distributed Lag Models: Examining Associations Between the Built Environment and Health.

    Science.gov (United States)

    Baek, Jonggyu; Sánchez, Brisa N; Berrocal, Veronica J; Sanchez-Vaznaugh, Emma V

    2016-01-01

    Built environment factors constrain individual level behaviors and choices, and thus are receiving increasing attention to assess their influence on health. Traditional regression methods have been widely used to examine associations between built environment measures and health outcomes, where a fixed, prespecified spatial scale (e.g., 1 mile buffer) is used to construct environment measures. However, the spatial scale for these associations remains largely unknown and misspecifying it introduces bias. We propose the use of distributed lag models (DLMs) to describe the association between built environment features and health as a function of distance from the locations of interest and circumvent a-priori selection of a spatial scale. Based on simulation studies, we demonstrate that traditional regression models produce associations biased away from the null when there is spatial correlation among the built environment features. Inference based on DLMs is robust under a range of scenarios of the built environment. We use this innovative application of DLMs to examine the association between the availability of convenience stores near California public schools, which may affect children's dietary choices both through direct access to junk food and exposure to advertisement, and children's body mass index z scores.
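
    A minimal sketch of the distributed lag idea for a built-environment exposure is given below; the ring counts, outcome, basis choice and coefficients are synthetic assumptions, not the California school data:

    ```python
    # Sketch of a distributed lag model for a built-environment exposure: counts of
    # stores in concentric distance rings around each school enter the regression
    # through a smooth (here polynomial) constraint on the distance-lag curve.
    # All data, ring widths, and coefficients are synthetic illustrations.
    import numpy as np

    rng = np.random.default_rng(3)
    n_schools, n_rings = 500, 10          # rings of 0.1-mile width out to 1 mile
    ring_mid = (np.arange(n_rings) + 0.5) / n_rings

    # Synthetic ring counts and a true effect that decays with distance.
    counts = rng.poisson(2.0, size=(n_schools, n_rings))
    true_lag = 0.08 * np.exp(-3.0 * ring_mid)
    y = counts @ true_lag + rng.normal(0.0, 0.5, n_schools)   # e.g. BMI z-score

    # Constrain lag coefficients to a cubic polynomial in distance: theta = B @ beta.
    B = np.vander(ring_mid, N=4, increasing=True)   # basis over ring midpoints
    Z = counts @ B                                  # reduced design matrix
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    lag_hat = B @ beta                              # estimated distance-lag curve

    for d, est, tru in zip(ring_mid, lag_hat, true_lag):
        print(f"distance {d:.2f} mi: estimated {est:+.3f}, true {tru:+.3f}")
    ```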

  6. Model-independent analyses of non-Gaussianity in Planck CMB maps using Minkowski functionals

    Science.gov (United States)

    Buchert, Thomas; France, Martin J.; Steiner, Frank

    2017-05-01

    Despite the wealth of Planck results, there are difficulties in disentangling the primordial non-Gaussianity of the Cosmic Microwave Background (CMB) from the secondary and the foreground non-Gaussianity (NG). For each of these forms of NG the lack of complete data introduces model-dependences. Aiming at detecting the NGs of the CMB temperature anisotropy δT, while paying particular attention to a model-independent quantification of NGs, our analysis is based upon statistical and morphological univariate descriptors, respectively: the probability density function P(δT), related to v0, the first Minkowski Functional (MF), and the two other MFs, v1 and v2. From their analytical Gaussian predictions we build the discrepancy functions Δ_k (k = P, 0, 1, 2) which are applied to an ensemble of 105 CMB realization maps of the ΛCDM model and to the Planck CMB maps. In our analysis we use general Hermite expansions of the Δ_k up to the 12th order, where the coefficients are explicitly given in terms of cumulants. Assuming hierarchical ordering of the cumulants, we obtain the perturbative expansions generalizing the second order expansions of Matsubara to arbitrary order in the standard deviation σ_0 for P(δT) and v0, where the perturbative expansion coefficients are explicitly given in terms of complete Bell polynomials. The comparison of the Hermite expansions and the perturbative expansions is performed for the ΛCDM map sample and the Planck data. We confirm the weak level of non-Gaussianity (1-2)σ of the foreground corrected masked Planck 2015 maps.
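
    As a small, hedged illustration of the first Minkowski functional only (a white-noise stand-in map rather than a CMB realization, and a simpler discrepancy definition than the paper's Δ_k), one can compare the measured area fraction above a threshold with its exact Gaussian prediction:

    ```python
    # Minimal sketch of the first Minkowski functional v0 (area fraction above a
    # threshold nu, in units of the map's standard deviation) and its discrepancy
    # from the exact Gaussian prediction v0_G(nu) = 0.5*erfc(nu/sqrt(2)). The
    # "map" here is plain white noise, not a CMB realization.
    import numpy as np
    from scipy.special import erfc

    rng = np.random.default_rng(4)
    dT = rng.normal(size=(512, 512))          # stand-in for a temperature map
    sigma0 = dT.std()

    nus = np.linspace(-3.0, 3.0, 13)
    v0_map = np.array([(dT >= nu * sigma0).mean() for nu in nus])
    v0_gauss = 0.5 * erfc(nus / np.sqrt(2.0))
    delta0 = v0_map / v0_gauss - 1.0          # one simple discrepancy definition

    for nu, d in zip(nus, delta0):
        print(f"nu = {nu:+.2f}: relative discrepancy {d:+.4f}")
    ```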

  7. Computational and Statistical Analyses of Insertional Polymorphic Endogenous Retroviruses in a Non-Model Organism

    Directory of Open Access Journals (Sweden)

    Le Bao

    2014-11-01

    Full Text Available Endogenous retroviruses (ERVs) are a class of transposable elements found in all vertebrate genomes that contribute substantially to genomic functional and structural diversity. A host species acquires an ERV when an exogenous retrovirus infects a germ cell of an individual and becomes part of the genome inherited by viable progeny. ERVs that colonized ancestral lineages are fixed in contemporary species. However, in some extant species, ERV colonization is ongoing, which results in variation in ERV frequency in the population. To study the consequences of ERV colonization of a host genome, methods are needed to assign each ERV to a location in a species’ genome and determine which individuals have acquired each ERV by descent. Because well annotated reference genomes are not widely available for all species, de novo clustering approaches provide an alternative to reference mapping that are insensitive to differences between query and reference and that are amenable to mobile element studies in both model and non-model organisms. However, there is substantial uncertainty in both identifying ERV genomic position and assigning each unique ERV integration site to individuals in a population. We present an analysis suitable for detecting ERV integration sites in species without the need for a reference genome. Our approach is based on improved de novo clustering methods and statistical models that take the uncertainty of assignment into account and yield a probability matrix of shared ERV integration sites among individuals. We demonstrate that polymorphic integrations of a recently identified endogenous retrovirus in deer reflect contemporary relationships among individuals and populations.

  8. Using Rasch Modeling to Re-Evaluate Rapid Malaria Diagnosis Test Analyses

    Directory of Open Access Journals (Sweden)

    Dawit G. Ayele

    2014-06-01

    Full Text Available The objective of this study was to demonstrate the use of the Rasch model by assessing the appropriateness of the demographic, socio-economic and geographic factors in providing a total score in malaria RDT in accordance with the model's expectations. The baseline malaria indicator survey was conducted in Amhara, Oromiya and Southern Nation Nationalities and People (SNNP) regions of Ethiopia by The Carter Center in 2007. The results show high reliability and little disordering of thresholds, with no evidence of differential item functioning.
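
    A minimal sketch of the dichotomous Rasch model itself (synthetic responses fitted by joint maximum likelihood, not the survey analysis) is given below:

    ```python
    # Sketch of a dichotomous Rasch model, P(X_ni = 1) = exp(theta_n - b_i) /
    # (1 + exp(theta_n - b_i)), fitted by joint maximum likelihood on synthetic
    # responses. This only illustrates the model, not the malaria RDT analysis.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.special import expit

    rng = np.random.default_rng(5)
    n_persons, n_items = 300, 8
    theta_true = rng.normal(0.0, 1.0, n_persons)
    b_true = np.linspace(-1.5, 1.5, n_items)
    X = rng.binomial(1, expit(theta_true[:, None] - b_true[None, :]))

    def neg_loglik(params):
        theta, b = params[:n_persons], params[n_persons:]
        b = b - b.mean()                      # identification: centre item difficulties
        eta = theta[:, None] - b[None, :]
        return -(X * eta - np.logaddexp(0.0, eta)).sum()

    res = minimize(neg_loglik, np.zeros(n_persons + n_items), method="L-BFGS-B")
    b_hat = res.x[n_persons:] - res.x[n_persons:].mean()
    print("estimated item difficulties:", np.round(b_hat, 2))
    print("true item difficulties:     ", np.round(b_true - b_true.mean(), 2))
    ```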

  9. Analysing the distribution of synaptic vesicles using a spatial point process model

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta

    2014-01-01

    Stress can affect the brain functionality in many ways. As the synaptic vesicles have a major role in nervous signal transportation in synapses, their distribution in relationship to the active zone is very important in studying the neuron responses. We study the effect of stress on brain...... functionality by statistically modelling the distribution of the synaptic vesicles in two groups of rats: a control group subjected to sham stress and a stressed group subjected to a single acute foot-shock (FS)-stress episode. We hypothesize that the synaptic vesicles have different spatial distributions...
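
    One simple, hedged way to model such distance data, shown below with simulated distances rather than the study's microscopy data, is to treat vesicle positions as a one-dimensional inhomogeneous Poisson process in distance from the active zone and fit a log-linear intensity by binned Poisson regression:

    ```python
    # Sketch: model vesicle positions as a 1D inhomogeneous Poisson process in
    # distance d from the active zone, lambda(d) = exp(b0 + b1*d), fitted as a
    # Poisson regression on binned counts (offset = log bin width). Distances
    # are simulated; this is not the study's spatial point process model.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(6)

    # Simulate distances (nm) with intensity decaying away from the active zone.
    d_max, b0_true, b1_true = 400.0, np.log(2.0), -0.008
    grid = np.linspace(0.0, d_max, 4001)
    lam = np.exp(b0_true + b1_true * grid)
    n_pts = rng.poisson(lam.sum() * (grid[1] - grid[0]))      # expected total count
    dist = rng.choice(grid, size=n_pts, p=lam / lam.sum())    # approximate sampling

    # Bin the distances and fit the log-linear intensity.
    edges = np.linspace(0.0, d_max, 21)
    counts, _ = np.histogram(dist, bins=edges)
    mid = 0.5 * (edges[:-1] + edges[1:])
    fit = sm.GLM(counts, sm.add_constant(mid), family=sm.families.Poisson(),
                 offset=np.log(np.diff(edges))).fit()
    print("estimated (b0, b1):", np.round(fit.params, 4),
          " true:", (round(float(b0_true), 4), b1_true))
    ```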

  10. The influence of jet-grout constitutive modelling in excavation analyses

    OpenAIRE

    Ciantia, M.; Arroyo Alvarez de Toledo, Marcos; Castellanza, R; Gens Solé, Antonio

    2012-01-01

    A bonded elasto-plastic soil model is employed to characterize cement-treated clay in the finite element analysis of an excavation on soft clay supported with a soil-cement slab at the bottom. The soft clay is calibrated to represent the behaviour of Bangkok soft clay. A parametric study is run for a series of materials characterised by increasing cement content in the clay-cement mixture. The different mixtures are indirectly specified by means of their unconfined compressive strength. A ...

  11. Analyses of Methods and Algorithms for Modelling and Optimization of Biotechnological Processes

    Directory of Open Access Journals (Sweden)

    Stoyan Stoyanov

    2009-08-01

    Full Text Available A review of the problems in modeling, optimization and control of biotechnological processes and systems is given in this paper. An analysis of existing and some new practical optimization methods for searching for a global optimum, based on various advanced strategies - heuristic, stochastic, genetic and combined - is presented in the paper. Methods based on sensitivity theory, stochastic and mixed strategies for optimization with partial knowledge about kinetic, technical and economic parameters in optimization problems are discussed. Several approaches to multi-criteria optimization tasks are analyzed. The problems concerning optimal control of biotechnological systems are also discussed.

  12. Daniel K. Inouye Solar Telescope: computational fluid dynamic analyses and evaluation of the air knife model

    Science.gov (United States)

    McQuillen, Isaac; Phelps, LeEllen; Warner, Mark; Hubbard, Robert

    2016-08-01

    Implementation of an air curtain at the thermal boundary between conditioned and ambient spaces allows for observation over wavelength ranges not practical when using optical glass as a window. The air knife model of the Daniel K. Inouye Solar Telescope (DKIST) project, a 4-meter solar observatory that will be built on Haleakalā, Hawai'i, deploys such an air curtain while also supplying ventilation through the ceiling of the coudé laboratory. The findings of computational fluid dynamics (CFD) analysis and subsequent changes to the air knife model are presented. Major design constraints include adherence to the Interface Control Document (ICD), separation of ambient and conditioned air, unidirectional outflow into the coudé laboratory, integration of a deployable glass window, and maintenance and accessibility requirements. The optimized design of the air knife successfully holds the full 12 Pa backpressure under temperature gradients of up to 20°C while maintaining unidirectional outflow. This is a significant improvement upon the 0.25 Pa pressure differential that the initial configuration, tested by Linden and Phelps, indicated the curtain could hold. CFD post-processing, developed by Vogiatzis, is validated against interferometry results of the initial air knife seeing evaluation, performed by Hubbard and Schoening. This is done by developing a CFD simulation of the initial experiment and using Vogiatzis' method to calculate error introduced along the optical path. Seeing errors, for both temperature differentials tested in the initial experiment, match well with seeing results obtained from the CFD analysis and thus validate the post-processing model. Application of this model to the realizable air knife assembly yields seeing errors that are well within the error budget under which the air knife interface falls, even with a temperature differential of 20°C between laboratory and ambient spaces. With ambient temperature set to 0°C and conditioned temperature set to 20

  13. Subchannel and Computational Fluid Dynamic Analyses of a Model Pin Bundle

    Energy Technology Data Exchange (ETDEWEB)

    Gairola, A.; Arif, M.; Suh, K. Y. [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    The current study showed that the simplistic approach of the subchannel analysis code MATRA was not good at capturing the physical behavior of the coolant inside the rod bundle. With the incorporation of a more detailed geometry of the grid spacer in the CFX code, it was possible to approach the experimental values. However, it is vital to incorporate more advanced turbulence mixing models, in parallel with the bottom and top grid structures, to simulate more realistically the behavior of the liquid metal coolant inside the model pin bundle. In the framework of the 11th international meeting of the International Association for Hydraulic Research and Engineering (IAHR) working group on advanced reactor thermal hydraulics, a standard problem was conducted. The essence of the problem was to examine the hydraulics and heat transfer in a novel pin bundle, cooled by liquid metal, with different pitch-to-rod-diameter ratios and heat fluxes. The standard problem stems from the field of nuclear safety research, with the idea of validating and checking the performance of computer codes against experimental results. Comprehensive comparisons between the two will help improve the reliability and accuracy of the codes used for accident simulations.

  14. Integration of 3d Models and Diagnostic Analyses Through a Conservation-Oriented Information System

    Science.gov (United States)

    Mandelli, A.; Achille, C.; Tommasi, C.; Fassi, F.

    2017-08-01

    In recent years, mature technologies for producing high-quality virtual 3D replicas of Cultural Heritage (CH) artefacts have grown thanks to progress in Information Technology (IT) tools. These methods are an efficient way to present digital models that can be used for several purposes: heritage management, support for conservation, virtual restoration, reconstruction and colouring, art cataloguing and visual communication. The work presented is an emblematic case study oriented to preventive conservation through monitoring activities, using different acquisition methods and instruments. It was developed within a project funded by the Lombardy Region, Italy, called "Smart Culture", which aimed to realise a platform giving users easy access to CH artefacts, using a very famous statue as an example. The final product is a 3D reality-based model that embeds a large amount of information and can be consulted through a common web browser. In the end, it was possible to define general strategies for the maintenance and valorisation of CH artefacts, which, in this specific case, must integrate different techniques and competencies to obtain complete, accurate and continuous monitoring of the statue.

  15. Preliminary Thermal Hydraulic Analyses of the Conceptual Core Models with Tubular Type Fuel Assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Chae, Hee Taek; Park, Jong Hark; Park, Cheol

    2006-11-15

    A new research reactor (AHR, Advanced HANARO Reactor) based on the HANARO has been conceptually developed for the future needs of research reactors. A tubular type fuel was considered as one of the fuel options of the AHR. A tubular type fuel assembly has several curved fuel plates arranged with a constant small gap to build up cooling channels, which is very similar to an annular pipe with many layers. This report presents the preliminary analysis of thermal hydraulic characteristics and safety margins for three conceptual core models using tubular fuel assemblies. Four design criteria, namely the fuel temperature, ONB (Onset of Nucleate Boiling) margin, minimum DNBR (Departure from Nucleate Boiling Ratio) and OFIR (Onset of Flow Instability Ratio), were investigated for various core flow velocities under normal operating conditions. The primary coolant flow rate based on a conceptual core model was also suggested as design information for the process design of the primary cooling system. A computational fluid dynamics analysis was carried out to evaluate the coolant velocity distributions between tubular channels and the pressure drop characteristics of the tubular fuel assembly.

  16. A new non-randomized model for analysing sensitive questions with binary outcomes.

    Science.gov (United States)

    Tian, Guo-Liang; Yu, Jun-Wu; Tang, Man-Lai; Geng, Zhi

    2007-10-15

    We propose a new non-randomized model for assessing the association of two sensitive questions with binary outcomes. Under the new model, respondents only need to answer a non-sensitive question instead of the original two sensitive questions. As a result, it can protect a respondent's privacy, avoid the use of any randomizing device, and be applied to both face-to-face interviews and mail questionnaires. We derive the constrained maximum likelihood estimates of the cell probabilities and the odds ratio for the two binary variables associated with the sensitive questions via the EM algorithm. The corresponding standard error estimates are then obtained by a bootstrap approach. A likelihood ratio test and a chi-squared test are developed for testing association between the two binary variables. We discuss the loss of information due to the introduction of the non-sensitive question, and the design of the co-operative parameters. Simulations are performed to evaluate the empirical type I error rates and powers of the two tests. In addition, a simulation is conducted to study the relationship between the probability of obtaining valid estimates and the sample size for any given cell probability vector. A real data set from an AIDS study is used to illustrate the proposed methodologies.
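
    As a purely illustrative sketch (not the authors' non-randomized design or its constrained EM step), the snippet below shows how a bootstrap standard error and percentile interval for an odds ratio estimated from 2x2 cell probabilities can be obtained; the cell counts are hypothetical.

      # Hedged, purely illustrative sketch: bootstrap standard error and percentile
      # interval for an odds ratio estimated from 2x2 cell counts.  This does NOT
      # reproduce the paper's non-randomized design or its constrained EM step;
      # the cell counts below are hypothetical.
      import numpy as np

      rng = np.random.default_rng(0)
      counts = np.array([40, 60, 30, 70])      # hypothetical cells n11, n10, n01, n00
      n = counts.sum()
      p = counts / n                           # cell probability estimates
      odds_ratio = (p[0] * p[3]) / (p[1] * p[2])

      boot = []
      for _ in range(2000):
          resample = rng.multinomial(n, p)     # parametric bootstrap from the fitted cells
          q = resample / n
          if q[1] * q[2] > 0:                  # skip degenerate resamples
              boot.append((q[0] * q[3]) / (q[1] * q[2]))
      boot = np.array(boot)

      print(f"odds ratio = {odds_ratio:.2f}, bootstrap SE = {boot.std(ddof=1):.2f}")
      print("95% percentile interval:", np.percentile(boot, [2.5, 97.5]))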

  17. Coupled biophysical global ocean model and molecular genetic analyses identify multiple introductions of cryptogenic species.

    Science.gov (United States)

    Dawson, Michael N; Sen Gupta, Alex; England, Matthew H

    2005-08-23

    The anthropogenic introduction of exotic species is one of the greatest modern threats to marine biodiversity. Yet exotic species introductions remain difficult to predict and are easily misunderstood because knowledge of natural dispersal patterns, species diversity, and biogeography is often insufficient to distinguish between a broadly dispersed natural population and an exotic one. Here we compare a global molecular phylogeny of a representative marine meroplanktonic taxon, the moon-jellyfish Aurelia, with natural dispersion patterns predicted by a global biophysical ocean model. Despite assumed high dispersal ability, the phylogeny reveals many cryptic species and predominantly regional structure with one notable exception: the globally distributed Aurelia sp.1, which, molecular data suggest, may occasionally traverse the Pacific unaided. This possibility is refuted by the ocean model, which shows much more limited dispersion and patterns of distribution broadly consistent with modern biogeographic zones, thus identifying multiple introductions worldwide of this cryptogenic species. This approach also supports existing evidence that (i) the occurrence in Hawaii of Aurelia sp. 4 and other native Indo-West Pacific species with similar life histories is most likely due to anthropogenic translocation, and (ii) there may be a route for rare natural colonization of northeast North America by the European marine snail Littorina littorea, whose status as endemic or exotic is unclear.

  18. Cotton chromosome substitution lines crossed with cultivars: genetic model evaluation and seed trait analyses.

    Science.gov (United States)

    Wu, Jixiang; McCarty, Jack C; Jenkins, Johnie N

    2010-05-01

    Seed from upland cotton, Gossypium hirsutum L., provides a desirable and important nutrition profile. In this study, several seed traits (protein content, oil content, seed hull fiber content, seed index, seed volume, embryo percentage) for F(3) hybrids of 13 cotton chromosome substitution lines crossed with five elite cultivars over four environments were evaluated. Oil and protein were expressed both as percentage of total seed weight and as an index which is the grams of product/100 seeds. An additive and dominance (AD) genetic model with cytoplasmic effects was designed, assessed by simulations, and employed to analyze these seed traits. Simulated results showed that this model was sufficient for analyzing the data structure with F(3) and parents in multiple environments without replications. Significant cytoplasmic effects were detected for seed oil content, oil index, seed index, seed volume, and seed embryo percentage. Additive effects were significant for protein content, fiber content, protein index, oil index, fiber index, seed index, seed volume, and embryo percentage. Dominance effects were significant for oil content, oil index, seed index, and seed volume. Cytoplasmic and additive effects for parents and dominance effects in homozygous and heterozygous forms were predicted. Favorable genetic effects were predicted in this study and the results provided evidence that these seed traits can be genetically improved. In addition, chromosome associations with AD effects were detected and discussed in this study.

  19. Parsimony and Model-Based Analyses of Indels in Avian Nuclear Genes Reveal Congruent and Incongruent Phylogenetic Signals

    Directory of Open Access Journals (Sweden)

    Frederick H. Sheldon

    2013-03-01

    Full Text Available Insertion/deletion (indel mutations, which are represented by gaps in multiple sequence alignments, have been used to examine phylogenetic hypotheses for some time. However, most analyses combine gap data with the nucleotide sequences in which they are embedded, probably because most phylogenetic datasets include few gap characters. Here, we report analyses of 12,030 gap characters from an alignment of avian nuclear genes using maximum parsimony (MP and a simple maximum likelihood (ML framework. Both trees were similar, and they exhibited almost all of the strongly supported relationships in the nucleotide tree, although neither gap tree supported many relationships that have proven difficult to recover in previous studies. Moreover, independent lines of evidence typically corroborated the nucleotide topology instead of the gap topology when they disagreed, although the number of conflicting nodes with high bootstrap support was limited. Filtering to remove short indels did not substantially reduce homoplasy or reduce conflict. Combined analyses of nucleotides and gaps resulted in the nucleotide topology, but with increased support, suggesting that gap data may prove most useful when analyzed in combination with nucleotide substitutions.

  20. Analysing and combining atmospheric general circulation model simulations forced by prescribed SST: northern extratropical response

    Directory of Open Access Journals (Sweden)

    K. Maynard

    2001-06-01

    Full Text Available The ECHAM 3.2 (T21), ECHAM 4 (T30) and LMD (version 6, grid-point resolution with 96 longitudes × 72 latitudes) atmospheric general circulation models were integrated through the period 1961 to 1993, forced with the same observed Sea Surface Temperatures (SSTs) as compiled at the Hadley Centre. Three runs were made for each model starting from different initial conditions. The mid-latitude circulation pattern which maximises the covariance between the simulation and the observations, i.e. the most skilful mode, and the one which maximises the covariance amongst the runs, i.e. the most reproducible mode, is calculated as the leading mode of a Singular Value Decomposition (SVD) analysis of observed and simulated Sea Level Pressure (SLP) and geopotential height at 500 hPa (Z500) seasonal anomalies. A common response amongst the different models, which have different resolutions and parametrizations, should be considered as a more robust atmospheric response to SST than the same response obtained with only one model. A robust skilful mode is found mainly in December-February (DJF), and in June-August (JJA). In DJF, this mode is close to the SST-forced pattern found by Straus and Shukla (2000) over the North Pacific and North America, with a wavy out-of-phase relationship between the NE Pacific and the SE US on the one hand and NE North America on the other. This pattern evolves into a NAO-like pattern over the North Atlantic and Europe (SLP) and into a more N-S tripole over the Atlantic and European sector, with an out-of-phase relationship between middle Europe on the one hand and the northern and southern parts on the other (Z500). There are almost no spatial shifts between either field around North America (just a slight eastward shift of the highest absolute heterogeneous correlations for SLP relative to the Z500 ones). The time evolution of the SST-forced mode is moderately to strongly related to the ENSO/LNSO events but the spread amongst the ensemble of runs is not systematically related

  1. Analysing and combining atmospheric general circulation model simulations forced by prescribed SST. Northern extra tropical response

    Energy Technology Data Exchange (ETDEWEB)

    Moron, V. [Universite' de Provence, UFR des sciences geographiques et de l' amenagement, Aix-en-Provence (France); Navarra, A. [Istituto Nazionale di Geofisica e Vulcanologia, Bologna (Italy); Ward, M. N. [University of Oklahoma, Cooperative Institute for Mesoscale Meteorological Studies, Norman OK (United States); Foland, C. K. [Hadley Center for Climate Prediction and Research, Meteorological Office, Bracknell (United Kingdom); Friederichs, P. [Meteorologisches Institute des Universitaet Bonn, Bonn (Germany); Maynard, K.; Polcher, J. [Paris Universite' Pierre et Marie Curie, Paris (France). Centre Nationale de la Recherche Scientifique, Laboratoire de Meteorologie Dynamique, Paris

    2001-08-01

    The ECHAM 3.2 (T21), ECHAM 4 (T30) and LMD (version 6, grid-point resolution with 96 longitudes x 72 latitudes) atmospheric general circulation models were integrated through the period 1961 to 1993 forced with the same observed Sea Surface Temperatures (SSTs) as compiled at the Hadley Centre. Three runs were made for each model starting from different initial conditions. The mid-latitude circulation pattern which maximises the covariance between the simulation and the observations, i.e. the most skilful mode, and the one which maximises the covariance amongst the runs, i.e. the most reproducible mode, is calculated as the leading mode of a Singular Value Decomposition (SVD) analysis of observed and simulated Sea Level Pressure (SLP) and geopotential height at 500 hPa (Z500) seasonal anomalies. A common response amongst the different models, which have different resolutions and parametrizations, should be considered as a more robust atmospheric response to SST than the same response obtained with only one model. A robust skilful mode is found mainly in December-February (DJF), and in June-August (JJA). In DJF, this mode is close to the SST-forced pattern found by Straus and Shukla (2000) over the North Pacific and North America with a wavy out-of-phase between the NE Pacific and the SE US on the one hand and the NE North America on the other. This pattern evolves in a NAO-like pattern over the North Atlantic and Europe (SLP) and in a more N-S tripole on the Atlantic and European sector with an out-of-phase between the middle Europe on the one hand and the northern and southern parts on the other (Z500). There are almost no spatial shifts between either field around North America (just a slight eastward shift of the highest absolute heterogeneous correlations for SLP relative to the Z500 ones). The time evolution of the SST-forced mode is moderately to strongly related to the ENSO/LNSO events but the spread amongst the ensemble of runs is not systematically related at all to
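
    The "most skilful mode" described in the two records above is the leading mode of an SVD (maximum covariance) analysis between observed and simulated anomaly fields. The sketch below illustrates that computation under simplifying assumptions: random arrays stand in for the SLP or Z500 anomaly matrices (time x grid points), and area weighting and ensemble averaging are omitted.

      # Hedged sketch: leading mode of an SVD (maximum covariance) analysis between
      # observed and simulated seasonal anomaly fields.  Random arrays stand in for
      # the SLP or Z500 anomaly matrices (time x grid points).
      import numpy as np

      rng = np.random.default_rng(1)
      nt, nx = 33, 500                          # 33 seasons, 500 grid points (hypothetical)
      obs = rng.standard_normal((nt, nx))
      sim = rng.standard_normal((nt, nx))       # e.g. ensemble-mean model anomalies

      obs -= obs.mean(axis=0)                   # remove the time mean -> anomalies
      sim -= sim.mean(axis=0)

      C = obs.T @ sim / (nt - 1)                # cross-covariance matrix (nx x nx)
      U, s, Vt = np.linalg.svd(C, full_matrices=False)

      obs_pattern, sim_pattern = U[:, 0], Vt[0, :]   # leading spatial patterns
      obs_pc = obs @ obs_pattern                     # expansion coefficients (time series)
      sim_pc = sim @ sim_pattern

      scf = s[0] ** 2 / np.sum(s ** 2)               # squared covariance fraction
      skill = np.corrcoef(obs_pc, sim_pc)[0, 1]      # correlation of the leading mode
      print(f"leading mode: SCF = {scf:.2f}, r(obs PC, sim PC) = {skill:.2f}")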

  2. Possibilities for a sustainable development. Muligheter for en baerekraftig utvikling; Analyser paa ''World Model''

    Energy Technology Data Exchange (ETDEWEB)

    Bjerkholt, O.; Johnsen, T.; Thonstad, K.

    1993-01-01

    This report is the final report of a project that the Central Bureau of Statistics of Norway has carried out. The report presents analyses of the relations between economic development, energy consumption and emission of pollutants to air in a global perspective. The analyses are based on the ''World Model'', which has been developed at the Institute for Economic Analysis at New York University. The analyses show that it will be very difficult to obtain a global stabilization of CO2 emissions at the 1990 level. In the reference scenario of the United Nations report ''Our Common Future'', the increase of CO2 emissions from 1990 to 2020 was 73%. Even in the scenario with the most drastic measures, the emissions in 2020 will be about 43% above the 1990 level, according to the present report. A stabilization of the global emissions at the 1990 level will require strong measures beyond those assumed in the model calculations, or a considerable breakthrough in energy technology. 17 refs., 5 figs., 21 tabs.

  3. Examining Impact of Global warming on the summer monsoon system using regional Climate Model (PRECIS)

    Science.gov (United States)

    Patwardhan, S. K.; Kundeti, K.; Krishna Kumar, K.

    2011-12-01

    Every year, the southwest monsoon arrives over the Indian region with remarkable regularity. It hits the southern state of Kerala first by the end of May or early June. More than 70% of the annual precipitation is received during the four monsoon months, June to September. This monsoon rainfall is vital for agriculture as well as for the yearly needs of the Indian population. The performance of the monsoon depends on the timely onset over the southern tip of India and its progress along the entire country. This northward progression of the monsoon to cover the entire Indian landmass is often associated with the formation of synoptic-scale systems in the Bay of Bengal region and their movement along the monsoon trough region. The analysis of the observed cyclonic disturbances shows that their frequency has reduced in recent decades. It is, therefore, necessary to assess the effect of global warming on the monsoon climate of India. A state-of-the-art regional climate modelling system, known as PRECIS (Providing REgional Climates for Impacts Studies), developed by the Hadley Centre for Climate Prediction and Research, U.K., is applied over the South Asian domain to investigate the impact of global warming on the cyclonic disturbances. The PRECIS simulations at 50 km x 50 km horizontal resolution are made for two time slices, the present (1961-1990) and the future (2071-2100), for two socio-economic scenarios, A2 and B2. The model skills are evaluated using observed precipitation and surface air temperature. The model has shown reasonably good skill in simulating seasonal monsoon rainfall, whereas a cold bias is seen in surface air temperature, especially in post-monsoon months. The typical monsoon features like the monsoon trough and the precipitation maxima over the west coast and northeast India are well simulated by the model. The model simulations under the scenarios of increasing greenhouse gas concentrations and sulphate aerosols are analysed to study the likely changes in the quasi

  4. A model using marginal efficiency of investment to analyse carbon and nitrogen interactions in forested ecosystems

    Science.gov (United States)

    Thomas, R. Q.; Williams, M.

    2014-12-01

    Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System modelling community. Here we explore the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants using a new, simple model of ecosystem C-N cycling and interactions (ACONITE). ACONITE builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C:N, N fixation, and plant C use efficiency) based on the optimization of the marginal change in net C or N uptake associated with a change in allocation of C or N to plant tissues. We simulated and evaluated steady-state and transient ecosystem stocks and fluxes in three different forest ecosystems types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C:N differed among the three ecosystem types (temperate deciduous traits. Gross primary productivity (GPP) and net primary productivity (NPP) estimates compared well to observed fluxes at the simulation sites. A sensitivity analysis revealed that parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C:N. Also, a widely used linear leaf N-respiration relationship did not yield a realistic leaf C:N, while a more recently reported non-linear relationship simulated leaf C:N that compared better to the global trait database than the linear relationship. Overall, our ability to constrain leaf area index and allow spatially and temporally variable leaf C:N can help address challenges simulating these properties in ecosystem and Earth System models. Furthermore, the simple approach with emergent properties based on coupled C-N dynamics has

  5. Application of satellite precipitation data to analyse and model arbovirus activity in the tropics

    Directory of Open Access Journals (Sweden)

    Corner Robert J

    2011-01-01

    Full Text Available Abstract Background Murray Valley encephalitis virus (MVEV) is a mosquito-borne Flavivirus (Flaviviridae: Flavivirus) which is closely related to Japanese encephalitis virus, West Nile virus and St. Louis encephalitis virus. MVEV is enzootic in northern Australia and Papua New Guinea and epizootic in other parts of Australia. Activity of MVEV in Western Australia (WA) is monitored by detection of seroconversions in flocks of sentinel chickens at selected sample sites throughout WA. Rainfall is a major environmental factor influencing MVEV activity. Utilising data on rainfall and seroconversions, statistical relationships between MVEV occurrence and rainfall can be determined. These relationships can be used to predict MVEV activity which, in turn, provides the general public with important information about disease transmission risk. Since ground measurements of rainfall are sparse and irregularly distributed, especially in north WA where rainfall is spatially and temporally highly variable, alternative data sources such as remote sensing (RS) data represent an attractive alternative to ground measurements. However, a number of competing alternatives are available and careful evaluation is essential to determine the most appropriate product for a given problem. Results The Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 product was chosen from a range of RS rainfall products to develop rainfall-based predictor variables and build logistic regression models for the prediction of MVEV activity in the Kimberley and Pilbara regions of WA. Two models employing monthly time-lagged rainfall variables showed the strongest discriminatory ability of 0.74 and 0.80 as measured by the Receiver Operating Characteristic area under the curve (ROC AUC). Conclusions TMPA data provide a state-of-the-art data source for the development of rainfall-based predictive models for Flavivirus activity in tropical WA. Compared to
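
    A minimal sketch of the modelling step described above (logistic regression on monthly time-lagged rainfall predictors, evaluated with the ROC AUC) is given below. The data frame, column names and coefficients are hypothetical; in the study, TMPA 3B42 rainfall would be aggregated per sentinel site and month.

      # Hedged sketch: logistic regression of seroconversion occurrence on monthly
      # time-lagged rainfall, evaluated with the ROC AUC.  The data frame, column
      # names and coefficients are hypothetical.
      import numpy as np
      import pandas as pd
      from sklearn.linear_model import LogisticRegression
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(2)
      n = 300
      df = pd.DataFrame({
          "rain_lag1": rng.gamma(2.0, 40.0, n),   # rainfall one month before (mm)
          "rain_lag2": rng.gamma(2.0, 40.0, n),   # rainfall two months before (mm)
      })
      logit = -3.0 + 0.015 * df["rain_lag1"] + 0.008 * df["rain_lag2"]
      df["seroconversion"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

      X, y = df[["rain_lag1", "rain_lag2"]], df["seroconversion"]
      model = LogisticRegression().fit(X, y)
      auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
      print(f"in-sample ROC AUC = {auc:.2f}")      # the study reports 0.74 and 0.80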

  6. IMPROVEMENTS IN HANFORD TRANSURANIC (TRU) PROGRAM UTILIZING SYSTEMS MODELING AND ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    UYTIOCO EM

    2007-11-12

    Hanford's Transuranic (TRU) Program is responsible for certifying contact-handled (CH) TRU waste and shipping the certified waste to the Waste Isolation Pilot Plant (WIPP). Hanford's CH TRU waste includes material that is in retrievable storage as well as above ground storage, and newly generated waste. Certifying a typical container entails retrieving and then characterizing it (Real-Time Radiography, Non-Destructive Assay, and Head Space Gas Sampling), validating records (data review and reconciliation), and designating the container for a payload. The certified payload is then shipped to WIPP. Systems modeling and analysis techniques were applied to Hanford's TRU Program to help streamline the certification process and increase shipping rates.

  7. Analysing green supply chain management practices in Brazil's electrical/electronics industry using interpretive structural modelling

    DEFF Research Database (Denmark)

    Govindan, Kannan; Kannan, Devika; Mathiyazhagan, K.

    2013-01-01

    Industries need to adopt environmental management concepts in traditional supply chain management. Green supply chain management (GSCM) is an established concept to ensure environment-friendly activities in industry. This paper identifies the relationship of driving and dependence that exists between GSCM practices with regard to their adoption within the Brazilian electrical/electronics industry with the help of interpretive structural modelling (ISM). From the results, we infer that cooperation with customers for eco-design is driving other practices, and this practice plays a vital role among the other practices. Commitment to GSCM from senior managers and cooperation with customers for cleaner production occupy the highest level. © 2013 Taylor & Francis.

  8. Statistical Analyses and Modeling of the Implementation of Agile Manufacturing Tactics in Industrial Firms

    Directory of Open Access Journals (Sweden)

    Mohammad D. AL-Tahat

    2012-01-01

    Full Text Available This paper provides a review and introduction on agile manufacturing. Tactics of agile manufacturing are mapped into different production areas (eight latent constructs: manufacturing equipment and technology, processes technology and know-how, quality and productivity improvement, production planning and control, shop floor management, product design and development, supplier relationship management, and customer relationship management). The implementation level of agile manufacturing tactics is investigated in each area. A structural equation model is proposed. Hypotheses are formulated. Feedback from 456 firms is collected using a five-point Likert-scale questionnaire. Statistical analysis is carried out using IBM SPSS and AMOS. Multicollinearity, content validity, consistency, construct validity, ANOVA analysis, and relationships between agile components are tested. The results of this study prove that the agile manufacturing tactics have a positive effect on the overall agility level. This conclusion can be used by manufacturing firms to manage challenges when trying to be agile.

  9. Reporting Results from Structural Equation Modeling Analyses in Archives of Scientific Psychology.

    Science.gov (United States)

    Hoyle, Rick H; Isherwood, Jennifer C

    2013-02-01

    Psychological research typically involves the analysis of data (e.g., questionnaire responses, records of behavior) using statistical methods. The description of how those methods are used and the results they produce is a key component of scholarly publications. Despite their importance, these descriptions are not always complete and clear. In order to ensure the completeness and clarity of these descriptions, the Archives of Scientific Psychology requires that authors of manuscripts to be considered for publication adhere to a set of publication standards. Although the current standards cover most of the statistical methods commonly used in psychological research, they do not cover them all. In this manuscript, we propose adjustments to the current standards and new standards for a statistical method not adequately covered in the current standards: structural equation modeling (SEM). Adherence to the standards we propose would ensure that scholarly publications that report results of data analyzed using SEM are complete and clear.

  10. Personality change over 40 years of adulthood: hierarchical linear modeling analyses of two longitudinal samples.

    Science.gov (United States)

    Helson, Ravenna; Jones, Constance; Kwan, Virginia S Y

    2002-09-01

    Normative personality change over 40 years was shown in 2 longitudinal cohorts with hierarchical linear modeling of California Psychological Inventory data obtained at multiple times between ages 21-75. Although themes of change and the paucity of differences attributable to gender and cohort largely supported findings of multiethnic cross-sectional samples, the authors also found much quadratic change and much individual variability. The form of quadratic change supported predictions about the influence of period of life and social climate as factors in change over the adult years: Scores on Dominance and Independence peaked in the middle age of both cohorts, and scores on Responsibility were lowest during peak years of the culture of individualism. The idea that personality change is most pronounced before age 30 and then reaches a plateau received no support.
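
    A hedged sketch of this kind of growth-curve analysis is shown below: a hierarchical linear model with subject-level random intercepts and fixed linear and quadratic age terms, fitted with statsmodels. The simulated data frame and the single CPI-like scale are stand-ins, not the study's data.

      # Hedged sketch: a growth-curve (hierarchical linear) model with subject-level
      # random intercepts and fixed linear + quadratic age terms, fitted with
      # statsmodels.  The data frame and the single CPI-like scale are simulated
      # stand-ins, not the study's data.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(3)
      ages = np.array([33, 43, 52, 61, 75])
      rows = []
      for subject in range(100):
          level = 50 + rng.normal(0, 5)                    # person-specific level
          for a in ages:
              c = (a - 54) / 10.0                          # centred, scaled age
              rows.append({"id": subject, "age_c": c,
                           "dominance": level + 2.0 * c - 1.5 * c ** 2 + rng.normal(0, 3)})
      df = pd.DataFrame(rows)

      model = smf.mixedlm("dominance ~ age_c + I(age_c**2)", df, groups=df["id"])
      result = model.fit()
      print(result.summary())   # a negative quadratic term indicates a mid-life peak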

  11. The usefulness of optical analyses for detecting vulnerable plaques using rabbit models

    Science.gov (United States)

    Nakai, Kanji; Ishihara, Miya; Kawauchi, Satoko; Shiomi, Masashi; Kikuchi, Makoto; Kaji, Tatsumi

    2011-03-01

    Purpose: Carotid artery stenting (CAS) has become a widely used option for treatment of carotid stenosis. Although technical improvements have led to a decrease in complications related to CAS, distal embolism continues to be a problem. The purpose of this research was to investigate the usefulness of optical methods (Time-Resolved Laser-Induced Fluorescence Spectroscopy [TR-LIFS] and reflection spectroscopy [RS]) as diagnostic tools for assessment of vulnerable atherosclerotic lesions, using rabbit models of vulnerable plaque. Materials & Methods: Male Japanese white rabbits were divided into a high-cholesterol diet group and a normal diet group. In addition, we used a Watanabe heritable hyperlipidemic (WHHL) rabbit, having confirmed the reliability of this animal model for the study. Experiment 1: TR-LIFS. Fluorescence was induced using the third harmonic wave of a Q-switched Nd:YAG laser. The TR-LIFS was performed using a photonic multi-channel analyzer with an ICCD (wavelength range, 200 - 860 nm). Experiment 2: RS. Reflection spectra in the wavelength range of 900 to 1700 nm were acquired using a spectrometer. Results: In the TR-LIFS, the peak wavelength became longer with plaque formation. The TR-LIFS method revealed a difference in peak levels between a normal aorta and a lipid-rich aorta. The RS method showed increased absorption from 1450 to 1500 nm for lipid-rich plaques. We observed absorption around 1200 nm due to lipid only in the WHHL group. Conclusion: These methods using optical analysis might be useful for diagnosis of vulnerable plaques. Keywords: Carotid artery stenting, vulnerable plaque, Time-Resolved Laser-Induced Fluorescence

  12. Theoretical analyses and numerical experiments of variational assimilation for one-dimensional ocean temperature model with techniques in inverse problems

    Institute of Scientific and Technical Information of China (English)

    HUANG; Sixun; HAN; Wei; WU; Rongsheng

    2004-01-01

    In the present work, the data assimilation problem in meteorology and physical oceanography is re-examined using variational optimal control approaches in combination with regularization techniques from inverse problems. Here the estimation of the initial condition, boundary condition and model parameters is performed simultaneously in the framework of variational data assimilation. To overcome the difficulty of ill-posedness, especially for the model parameters distributed in space and time, an additional term is added to the cost functional as a stabilizing functional. Numerical experiments show that even with noisy observations the initial conditions and model parameters are recovered to an acceptable degree of accuracy.
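
    The sketch below illustrates the general idea on a toy problem: a cost functional combining an observation misfit with a Tikhonov-type stabilizing term, minimized jointly over an initial condition and a model parameter. The one-dimensional relaxation model, weights and noise levels are invented and are not the paper's formulation.

      # Hedged sketch: joint variational estimation of an initial condition and a
      # model parameter for a toy 1-D temperature model, with a Tikhonov-type
      # stabilizing term added to the cost functional.  Model, weights and noise
      # levels are invented.
      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(4)
      nt, dt = 50, 0.1
      t_obs = np.arange(0, nt, 5)

      def forward(T0, k, forcing=20.0):
          # Toy relaxation model: dT/dt = -k (T - forcing)
          T = np.empty(nt)
          T[0] = T0
          for i in range(1, nt):
              T[i] = T[i - 1] - dt * k * (T[i - 1] - forcing)
          return T

      obs = forward(10.0, 0.3)[t_obs] + rng.normal(0, 0.2, t_obs.size)  # noisy observations
      T0_b, k_b, alpha = 12.0, 0.2, 1.0            # background values and regularization weight

      def cost(x):
          T0, k = x
          misfit = forward(T0, k)[t_obs] - obs
          reg = (T0 - T0_b) ** 2 + (k - k_b) ** 2  # stabilizing functional
          return np.sum(misfit ** 2) + alpha * reg

      x_opt = minimize(cost, x0=[T0_b, k_b], method="L-BFGS-B").x
      print("recovered initial condition and parameter:", x_opt)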

  13. Comparison of statistical inferences from the DerSimonian-Laird and alternative random-effects model meta-analyses - an empirical assessment of 920 Cochrane primary outcome meta-analyses

    DEFF Research Database (Denmark)

    Thorlund, Kristian; Wetterslev, Jørn; Awad, Tahany;

    2011-01-01

    In random-effects model meta-analysis, the conventional DerSimonian-Laird (DL) estimator typically underestimates the between-trial variance. Alternative variance estimators have been proposed to address this bias. This study aims to empirically compare statistical inferences from random-effects model meta-analyses on the basis of the DL estimator and four alternative estimators, as well as distributional assumptions (normal distribution and t-distribution) about the pooled intervention effect. We evaluated the discrepancies of p-values, 95% confidence intervals (CIs) in statistically significant meta-analyses, and the degree (percentage) of statistical heterogeneity (e.g. I(2)) across 920 Cochrane primary outcome meta-analyses. In total, 414 of the 920 meta-analyses were statistically significant with the DL meta-analysis, and 506 were not. Compared with the DL estimator, the four
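
    For reference, the conventional DerSimonian-Laird computation that the comparison above starts from can be written in a few lines; the sketch below uses invented trial-level effects and variances and reports the DL between-trial variance, the pooled random-effects estimate and I^2.

      # Hedged sketch: the conventional DerSimonian-Laird between-trial variance and
      # the resulting random-effects pooled estimate, for invented trial-level
      # effects (e.g. log odds ratios) and within-trial variances.
      import numpy as np
      from scipy import stats

      y = np.array([-0.60, -0.10, -0.55, 0.25, -0.20])   # hypothetical trial effects
      v = np.array([0.04, 0.06, 0.09, 0.05, 0.07])       # within-trial variances

      w = 1.0 / v                                        # fixed-effect weights
      y_fixed = np.sum(w * y) / np.sum(w)
      Q = np.sum(w * (y - y_fixed) ** 2)                 # Cochran's Q
      k = len(y)
      C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
      tau2_dl = max(0.0, (Q - (k - 1)) / C)              # DL between-trial variance

      w_re = 1.0 / (v + tau2_dl)                         # random-effects weights
      mu = np.sum(w_re * y) / np.sum(w_re)
      se = np.sqrt(1.0 / np.sum(w_re))
      ci = mu + np.array([-1.0, 1.0]) * stats.norm.ppf(0.975) * se
      i2 = max(0.0, (Q - (k - 1)) / Q) * 100             # heterogeneity (I^2, in %)

      print(f"tau^2(DL) = {tau2_dl:.3f}, I^2 = {i2:.0f}%")
      print(f"pooled effect = {mu:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")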

  14. Examining the Efficiency of Models Using Tangent Coordinates or Principal Component Scores in Allometry Studies.

    Science.gov (United States)

    Sigirli, Deniz; Ercan, Ilker

    2015-09-01

    Most studies in the medical and biological sciences involve the examination of geometrical properties of an organ or organism. Growth and allometry studies are important for investigating the effects of diseases and environmental factors on the structure of the organ or organism. Thus, statistical shape analysis has recently become more important in the medical and biological sciences. Shape is all the geometrical information that remains when location, scale and rotational effects are removed from an object. Allometry, which is a relationship between size and shape, plays an important role in the development of statistical shape analysis. The aim of the present study was to compare two different models for allometry which include tangent coordinates or principal component scores of tangent coordinates as dependent variables in multivariate regression analysis. The results of the simulation study showed that the model constructed by taking tangent coordinates as dependent variables is more appropriate than the model constructed by taking principal component scores of tangent coordinates as dependent variables, for all sample sizes.

  15. Modeling using clinical examination indicators predicts interstitial lung disease among patients with rheumatoid arthritis

    Science.gov (United States)

    Wang, Yao; Song, Wuqi; Wu, Jing; Li, Zhangming; Mu, Fengyun; Li, Yang; Huang, He; Zhu, Wenliang

    2017-01-01

    Interstitial lung disease (ILD) is a severe extra-articular manifestation of rheumatoid arthritis (RA), a well-defined chronic systemic autoimmune disease. A proportion of patients with RA-associated ILD (RA-ILD) develop pulmonary fibrosis (PF), resulting in poor prognosis and increased lifetime risk. We investigated whether routine clinical examination indicators (CEIs) could be used to identify RA patients with high PF risk. A total of 533 patients with established RA were recruited in this study for model building, and 32 CEIs were measured for each of them. To identify PF risk, a new artificial neural network (ANN) was built, in which inputs were generated by calculating the Euclidean distances of CEIs between patients. Receiver operating characteristic curve analysis indicated that the ANN performed well in predicting PF risk (Youden index = 0.436) by incorporating only four CEIs: age, eosinophil count, platelet count, and white blood cell count. A set of 218 RA patients with healthy lungs or suffering from ILD and a set of 87 RA patients suffering from PF were used for independent validation. Results showed that the model successfully identified ILD and PF with a true positive rate of 84.9% and 82.8%, respectively. The present study suggests that model integration of multiple routine CEIs contributes to identification of potential PF risk among patients with RA.
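
    A hedged sketch of the overall idea (standardized CEIs turned into patient-to-patient Euclidean distances, fed to a small neural network, with the Youden index read off the ROC curve) is given below. The network architecture, the reference-patient scheme and the simulated data are assumptions, not the published model.

      # Hedged sketch: standardized CEIs turned into patient-to-patient Euclidean
      # distances, fed to a small neural network, with the Youden index read off the
      # ROC curve.  Architecture, reference-patient scheme and data are assumptions.
      import numpy as np
      from scipy.spatial.distance import cdist
      from sklearn.preprocessing import StandardScaler
      from sklearn.neural_network import MLPClassifier
      from sklearn.metrics import roc_curve

      rng = np.random.default_rng(5)
      n, n_ref = 400, 20
      ceis = rng.normal(size=(n, 4))                 # age, eosinophils, platelets, WBC (scaled)
      risk = 1 / (1 + np.exp(-(ceis @ np.array([0.8, 0.6, -0.5, 0.7]))))
      y = rng.binomial(1, risk)                      # 1 = high fibrosis risk (simulated label)

      X = StandardScaler().fit_transform(ceis)
      refs = X[:n_ref]                               # hypothetical reference patients
      D = cdist(X, refs)                             # Euclidean distances as ANN inputs

      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(D, y)
      fpr, tpr, _ = roc_curve(y, clf.predict_proba(D)[:, 1])
      print("Youden index J =", round(np.max(tpr - fpr), 3))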

  16. Modelling of food intake in Brazil and Germany: Examining the effects of self-construals.

    Science.gov (United States)

    Hirata, Elizabeth; Kühnen, Ulrich; Hermans, Roel C J; Lippke, Sonia

    2015-12-01

    The current research focused on the influence of informational eating norms on people's food intake, and examined whether this influence was moderated by participants' self-construal levels. In two experiments, a two (intake norm manipulation: low vs. high) by two (self-construal manipulation: interdependent versus independent) between-participant factorial design was used. The studies were conducted in Brazil (Experiment 1) and in Germany (Experiment 2) as participants' self-construal levels differ between these countries. In Experiment 1, results indicated that participants exposed to a high-intake norm ate more than participants exposed to a low-intake norm. However, self-construal was not found to moderate the influence of food intake norms on participants' intake. In Experiment 2, replicating the results of Experiment 1, exposure to a high-intake norm increased participants' food intake, but self-construals again did not moderate modelling effects on food intake. Although differences in individuals' self-construal were found between both countries, they did not affect the magnitude of modelling effects on eating. Our studies provide evidence for cross-cultural similarity in the extent to which Brazilian and German female young adults are vulnerable to modelling effects on food intake, independent of their self-construal.

  17. In silico analyses of dystrophin Dp40 cellular distribution, nuclear export signals and structure modeling

    Directory of Open Access Journals (Sweden)

    Alejandro Martínez-Herrera

    2015-09-01

    Full Text Available Dystrophin Dp40 is the shortest protein encoded by the DMD (Duchenne muscular dystrophy) gene. This protein is unique since it lacks the C-terminal end of dystrophins. In this data article, we describe the subcellular localization, nuclear export signals and the three-dimensional structure modeling of putative Dp40 proteins using bioinformatics tools. The Dp40 wild type protein was predicted to be a cytoplasmic protein while Dp40n4 was predicted to be nuclear. The changes L93P and L170P are involved in the nuclear localization of the Dp40n4 protein. A close analysis of the Dp40 protein showed that amino acids 93LEQEHNNLV101 and 168LLLHDSIQI176 could function as NES sequences, and these scores are lost in Dp40n4. In addition, the changes L93/170P modify the tertiary structure of the putative Dp40 mutants. The analysis showed that changes of residues 93 and 170 from leucine to proline allow the nuclear localization of Dp40 proteins. The data described here are related to the research article entitled “EF-hand domains are involved in the differential cellular distribution of dystrophin Dp40” (J. Aragón et al., Neurosci. Lett. 600 (2015) 115–120) [1].

  18. Analysing hydro-mechanical behaviour of reinforced slopes through centrifuge modelling

    Science.gov (United States)

    Veenhof, Rick; Wu, Wei

    2017-04-01

    Every year, slope instability is causing casualties and damage to properties and the environment. The behaviour of slopes during and after these kind of events is complex and depends on meteorological conditions, slope geometry, hydro-mechanical soil properties, boundary conditions and the initial state of the soils. This study describes the effects of adding reinforcement, consisting of randomly distributed polyolefin monofilament fibres or Ryegrass (Lolium), on the behaviour of medium-fine sand in loose and medium dense conditions. Direct shear tests were performed on sand specimens with different void ratios, water content and fibre or root density, respectively. To simulate the stress state of real scale field situations, centrifuge model tests were conducted on sand specimens with different slope angles, thickness of the reinforced layer, fibre density, void ratio and water content. An increase in peak shear strength is observed in all reinforced cases. Centrifuge tests show that for slopes that are reinforced the period until failure is extended. The location of shear band formation and patch displacement behaviour indicate that the design of slope reinforcement has a significant effect on the failure behaviour. Future research will focus on the effect of plant water uptake on soil cohesion.

  19. Biomechanical analyses of prosthetic mesh repair in a hiatal hernia model.

    Science.gov (United States)

    Alizai, Patrick Hamid; Schmid, Sofie; Otto, Jens; Klink, Christian Daniel; Roeth, Anjali; Nolting, Jochen; Neumann, Ulf Peter; Klinge, Uwe

    2014-10-01

    Recurrence rate of hiatal hernia can be reduced with prosthetic mesh repair; however, type and shape of the mesh are still a matter of controversy. The purpose of this study was to investigate the biomechanical properties of four conventional meshes: pure polypropylene mesh (PP-P), polypropylene/poliglecaprone mesh (PP-U), polyvinylidenefluoride/polypropylene mesh (PVDF-I), and pure polyvinylidenefluoride mesh (PVDF-S). Meshes were tested either in warp direction (parallel to production direction) or perpendicular to the warp direction. A Zwick testing machine was used to measure elasticity and effective porosity of the textile probes. Stretching of the meshes in warp direction required forces that were up to 85-fold higher than the same elongation in perpendicular direction. Stretch stress led to loss of effective porosity in most meshes, except for PVDF-S. Biomechanical impact of the mesh was additionally evaluated in a hiatal hernia model. The different meshes were used either as rectangular patches or as circular meshes. Circular meshes led to a significant reinforcement of the hiatus, largely unaffected by the orientation of the warp fibers. In contrast, rectangular meshes provided a significant reinforcement only when warp fibers ran perpendicular to the crura. Anisotropic elasticity of prosthetic meshes should therefore be considered in hiatal closure with rectangular patches.

  20. Alpins and thibos vectorial astigmatism analyses: proposal of a linear regression model between methods

    Directory of Open Access Journals (Sweden)

    Giuliano de Oliveira Freitas

    2013-10-01

    Full Text Available PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assigned to two phacoemulsification groups: one assigned to receive an AcrySof® Toric intraocular lens (IOL) in both eyes and another assigned to have an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between the Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the ratio of post- and preoperative Thibos APV (APVratio) and the Alpins percentage of success (%Success) was found (Spearman's ρ = -0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) × 100. CONCLUSION: The linear regression we found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.
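
    The reported regression can be reproduced arithmetically once the astigmatic power-vector magnitudes are in hand. The sketch below assumes the astigmatic (J0, J45) component of Thibos' power vectors is the APV in question and uses invented pre- and postoperative refractions; for example, a ratio of 0.33 yields a %Success of about 67%.

      # Hedged sketch: astigmatic power-vector magnitude from a refraction (Thibos
      # J0/J45 components), the post/pre APV ratio, and the percentage of success
      # reconstructed from the regression reported above.  The refractions are invented.
      import numpy as np

      def astig_vector_magnitude(cyl, axis_deg):
          # |(J0, J45)| = |cyl| / 2 for a cylinder 'cyl' at 'axis_deg'
          theta = np.deg2rad(axis_deg)
          j0 = -(cyl / 2.0) * np.cos(2 * theta)
          j45 = -(cyl / 2.0) * np.sin(2 * theta)
          return np.hypot(j0, j45)

      apv_pre = astig_vector_magnitude(cyl=-1.50, axis_deg=90)    # preoperative astigmatism
      apv_post = astig_vector_magnitude(cyl=-0.50, axis_deg=85)   # postoperative astigmatism

      apv_ratio = apv_post / apv_pre
      success = (-apv_ratio + 1.00) * 100                         # %Success from the regression
      print(f"APVratio = {apv_ratio:.2f}, %Success = {success:.0f}%")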

  1. Fungal-Induced Deterioration of Mural Paintings: In Situ and Mock-Model Microscopy Analyses.

    Science.gov (United States)

    Unković, Nikola; Grbić, Milica Ljaljević; Stupar, Miloš; Savković, Željko; Jelikić, Aleksa; Stanojević, Dragan; Vukojević, Jelena

    2016-04-01

    Fungal deterioration of frescoes was studied in situ on a selected Serbian church, and on a laboratory model, utilizing standard and newly implemented microscopy techniques. Scanning electron microscopy (SEM) with energy-dispersive X-ray confirmed the limestone components of the plaster. Pigments used were identified as carbon black, green earth, iron oxide, ocher, and an ocher/cinnabar mixture. In situ microscopy, applied via a portable microscope ShuttlePix P-400R, proved very useful for detection of invisible micro-impairments and hidden, symptomless, microbial growth. SEM and optical microscopy established that observed deterioration symptoms, predominantly discoloration and pulverization of painted layers, were due to bacterial filaments and fungal hyphal penetration, and formation of a wide range of fungal structures (i.e., melanized hyphae, chlamydospores, microcolonial clusters, Cladosporium-like conidia, and Chaetomium perithecia and ascospores). The all-year-round monitoring of spontaneous and induced fungal colonization of a "mock painting" in controlled laboratory conditions confirmed the decisive role of humidity level (70.18±6.91% RH) in efficient colonization of painted surfaces, as well as demonstrated increased bioreceptivity of painted surfaces to fungal colonization when plant-based adhesives (ilinocopie, murdent), compared with organic adhesives of animal origin (bone glue, egg white), are used for pigment sizing.

  2. Analysing movements in investor’s risk aversion using the Heston volatility model

    Directory of Open Access Journals (Sweden)

    Alexie ALUPOAIEI

    2013-03-01

    Full Text Available In this paper we intend to identify and analyse, if it is the case, an "epidemiological" relationship between the forecasts of professional investors and short-term developments in the EUR/RON exchange rate. Even though we do not use a typical epidemiological model of the kind employed in biological research, we investigated the hypothesis according to which, after the Lehman Brothers crash and implicitly the onset of the current financial crisis, the forecasts of professional investors have significant explanatory power for the subsequent short-run movements of EUR/RON. How does this mechanism work? First, the professional forecasters account for the current macroeconomic, financial and political conditions and then elaborate forecasts. Second, based on those forecasts they take positions in the Romanian exchange market for hedging and/or speculation purposes. Their positions, however, incorporate different degrees of uncertainty. In parallel, part of their anticipations are disseminated to the public via media channels. When important movements are observed in the macroeconomic, financial or political fields, the positions of professional investors in the FX derivatives market are activated. The current study represents a first step in that direction of analysis for the Romanian case. For the above objectives, different measures of EUR/RON rate volatility have been estimated in this paper and compared with implied volatilities. In a second stage we used co-integration and dynamic-correlation tools in order to investigate the relationship between implied volatility and daily returns of the EUR/RON exchange rate.
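
    For readers unfamiliar with the model named in the title, the sketch below simulates a Heston stochastic-volatility path with a simple Euler scheme and reports the annualised realised volatility; the parameter values are illustrative and are not estimates from EUR/RON data.

      # Hedged sketch: Euler simulation of the Heston stochastic volatility model
      # named in the title; parameter values are illustrative, not estimates from
      # EUR/RON data.
      import numpy as np

      rng = np.random.default_rng(6)
      S0, v0 = 4.5, 0.02                  # spot level and initial variance (illustrative)
      mu, kappa, theta, xi, rho = 0.01, 2.0, 0.03, 0.3, -0.5
      n, dt = 252, 1.0 / 252

      S, v = np.empty(n + 1), np.empty(n + 1)
      S[0], v[0] = S0, v0
      for i in range(n):
          z1 = rng.standard_normal()
          z2 = rho * z1 + np.sqrt(1 - rho ** 2) * rng.standard_normal()   # correlated shocks
          v[i + 1] = abs(v[i] + kappa * (theta - v[i]) * dt
                         + xi * np.sqrt(v[i] * dt) * z2)                  # reflect to keep v >= 0
          S[i + 1] = S[i] * np.exp((mu - 0.5 * v[i]) * dt + np.sqrt(v[i] * dt) * z1)

      realised_vol = np.std(np.diff(np.log(S))) * np.sqrt(252)
      print(f"annualised realised volatility of the simulated path: {realised_vol:.2%}")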

  3. Model-based analyses of bioequivalence crossover trials using the stochastic approximation expectation maximisation algorithm.

    Science.gov (United States)

    Dubois, Anne; Lavielle, Marc; Gsteiger, Sandro; Pigeolet, Etienne; Mentré, France

    2011-09-20

    In this work, we develop a bioequivalence analysis using nonlinear mixed effects models (NLMEM) that mimics the standard noncompartmental analysis (NCA). We estimate NLMEM parameters, including between-subject and within-subject variability and treatment, period and sequence effects. We explain how to perform a Wald test on a secondary parameter, and we propose an extension of the likelihood ratio test for bioequivalence. We compare these NLMEM-based bioequivalence tests with standard NCA-based tests. We evaluate by simulation the NCA and NLMEM estimates and the type I error of the bioequivalence tests. For NLMEM, we use the stochastic approximation expectation maximisation (SAEM) algorithm implemented in monolix. We simulate crossover trials under H(0) using different numbers of subjects and of samples per subject. We simulate with different settings for between-subject and within-subject variability and for the residual error variance. The simulation study illustrates the accuracy of NLMEM-based geometric means estimated with the SAEM algorithm, whereas the NCA estimates are biased for sparse designs. NCA-based bioequivalence tests show good type I error except for high variability. For a rich design, type I errors of NLMEM-based bioequivalence tests (Wald test and likelihood ratio test) do not differ from the nominal level of 5%. Type I errors are inflated for sparse designs. We apply the bioequivalence Wald test based on NCA and NLMEM estimates to a three-way crossover trial, showing that Omnitrope® (Sandoz GmbH, Kundl, Austria) powder and solution are bioequivalent to Genotropin® (Pfizer Pharma GmbH, Karlsruhe, Germany). NLMEM-based bioequivalence tests are an alternative to standard NCA-based tests. However, caution is needed for small sample sizes and highly variable drugs.
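
    Independently of whether the log-scale treatment effect comes from NCA or from an NLMEM fit, the average-bioequivalence decision itself is a simple interval check. The sketch below applies the usual 90% confidence interval against the 0.80-1.25 limits to an invented estimate and standard error; it is not the paper's Wald or likelihood ratio test implementation.

      # Hedged sketch: the usual average-bioequivalence decision applied to a
      # log-scale treatment effect (test vs. reference); the estimate, standard
      # error and degrees of freedom are invented.
      import numpy as np
      from scipy import stats

      beta_hat, se, dof = 0.04, 0.06, 40      # estimated log(GMR), its SE, residual df

      gmr = np.exp(beta_hat)
      t_crit = stats.t.ppf(0.95, dof)         # 90% CI corresponds to two one-sided 5% tests
      ci = np.exp(beta_hat + np.array([-1.0, 1.0]) * t_crit * se)

      bioequivalent = (ci[0] >= 0.80) and (ci[1] <= 1.25)
      print(f"GMR = {gmr:.3f}, 90% CI = ({ci[0]:.3f}, {ci[1]:.3f}), bioequivalent: {bioequivalent}")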

  4. Tropical cyclones in a T159 resolution global climate model: comparison with observations and re-analyses

    Science.gov (United States)

    Bengtsson, L.; Hodges, K. I.; Esch, M.

    2007-08-01

    Tropical cyclones have been investigated in a T159 version of the MPI ECHAM5 climate model using a novel technique to diagnose the evolution of the three-dimensional vorticity structure of tropical cyclones, including their full life cycle from weak initial vortices to their possible extra-tropical transition. Results have been compared with re-analyses [the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-yr Re-analysis (ERA40) and the Japanese 25-yr re-analysis (JRA25)] and observed tropical storms during the period 1978-1999 for the Northern Hemisphere. There is no indication of any trend in the number or intensity of tropical storms during this period in ECHAM5 or in the re-analyses, but there are distinct inter-annual variations. The storms simulated by ECHAM5 are realistic both in space and time, but the model, and even more so the re-analyses, underestimate the intensities of the most intense storms (in terms of their maximum wind speeds). There is an indication of a response to El Niño-Southern Oscillation (ENSO), with a smaller number of Atlantic storms during El Niño, in agreement with previous studies. The global divergence circulation responds to El Niño by setting up a large-scale convergence flow, with the centre over the central Pacific and enhanced subsidence over the tropical Atlantic. At the same time there is an increase in the vertical wind shear in the region of the tropical Atlantic where tropical storms normally develop. There is a good correspondence between the model and ERA40 except that the divergence circulation is somewhat stronger in the model. The model underestimates storms in the Atlantic but tends to overestimate them in the Western Pacific and in the North Indian Ocean. It is suggested that the overestimation of storms in the Pacific by the model is related to an overly strong response to the tropical Pacific sea surface temperature (SST) anomalies. The overestimation in the North Indian Ocean is likely to be due to an over

  5. Using the Stereotype Content Model to examine group depictions in Fascism: An Archival Approach.

    Science.gov (United States)

    Durante, Federica; Volpato, Chiara; Fiske, Susan T

    2010-04-01

    The Stereotype Content Model (SCM) suggests potentially universal intergroup depictions. If universal, they should apply across history in archival data. Bridging this gap, we examined descriptions of social groups during Italy's Fascist era. In Study 1, articles published in a Fascist magazine, La Difesa della Razza, were content analyzed, and the results were submitted to correspondence analysis. Admiration prejudice depicted ingroups; envious and contemptuous prejudices depicted specific outgroups, generally in line with SCM predictions. No paternalistic prejudice appeared; historical reasons might explain this finding. Results also fit the recently developed BIAS Map of behavioral consequences. In Study 2, ninety-six undergraduates rated the content-analysis traits on warmth and competence, without knowing their origin. They corroborated the SCM's interpretations of the archival data.

  6. A critical examination of the maximum velocity of shortening used in simulation models of human movement.

    Science.gov (United States)

    Domire, Zachary J; Challis, John H

    2010-12-01

    The maximum velocity of shortening of a muscle is an important parameter in musculoskeletal models. The most commonly used values are derived from animal studies; however, these values are well above those that have been reported for human muscle. The purpose of this study was to examine the sensitivity of simulations of maximum vertical jumping performance to the parameters describing the force-velocity properties of muscle. Simulations performed with parameters derived from animal studies were similar to measured jump heights from previous experimental studies, while simulations performed with parameters derived from human muscle yielded jump heights much lower than those previously measured. If current measurements of maximum shortening velocity in human muscle are correct, a compensating error must exist. Of the possible compensating errors that could produce this discrepancy, it was concluded that reduced muscle fibre excursion is the most likely candidate.

  7. The active learning hypothesis of the job-demand-control model: an experimental examination.

    Science.gov (United States)

    Häusser, Jan Alexander; Schulz-Hardt, Stefan; Mojzisch, Andreas

    2014-01-01

    The active learning hypothesis of the job-demand-control model [Karasek, R. A. 1979. "Job Demands, Job Decision Latitude, and Mental Strain: Implications for Job Redesign." Administrative Science Quarterly 24: 285-307] proposes positive effects of high job demands and high job control on performance. We conducted a 2 (demands: high vs. low) × 2 (control: high vs. low) experimental office workplace simulation to examine this hypothesis. Since performance during a work simulation is confounded by the boundaries of the demands and control manipulations (e.g. time limits), we used a post-test, in which participants continued working at their task, but without any manipulation of demands and control. This post-test allowed for examining active learning (transfer) effects in an unconfounded fashion. Our results revealed that high demands had a positive effect on quantitative performance, without affecting task accuracy. In contrast, high control resulted in a speed-accuracy tradeoff, that is, participants in the high control conditions worked more slowly but with greater accuracy than participants in the low control conditions.

  8. Examining school-based bullying interventions using multilevel discrete time hazard modeling.

    Science.gov (United States)

    Ayers, Stephanie L; Wagaman, M Alex; Geiger, Jennifer Mullins; Bermudez-Parsai, Monica; Hedberg, E C

    2012-10-01

    Although schools have been trying to address bullying by utilizing different approaches that stop or reduce the incidence of bullying, little remains known about what specific intervention strategies are most successful in reducing bullying in the school setting. Using the social-ecological framework, this paper examines school-based disciplinary interventions often used to deliver consequences to deter the reoccurrence of bullying and aggressive behaviors among school-aged children. Data for this study are drawn from the School-Wide Information System (SWIS) with the final analytic sample consisting of 1,221 students in grades K - 12 who received an office disciplinary referral for bullying during the first semester. Using Kaplan-Meier Failure Functions and Multi-level discrete time hazard models, determinants of the probability of a student receiving a second referral over time were examined. Of the seven interventions tested, only Parent-Teacher Conference (AOR = 0.65, p connection between the students' mesosystems as well as utilizing disciplinary strategies that take into consideration student's microsystem roles.

  9. Examining Accumulated Emotional Traits in Suicide Blogs With an Emotion Topic Model.

    Science.gov (United States)

    Ren, Fuji; Kang, Xin; Quan, Changqin

    2016-09-01

    Suicide has been a major cause of death throughout the world. Recent studies have proved a reliable connection between emotional traits and suicide. However, detection and prevention of suicide are mostly carried out in clinical centers, which limits effective treatment to a restricted group of people. To assist in detecting suicide risks among the public, we propose a novel method of exploring the accumulated emotional information in people's daily writings (i.e., Blogs), and examining the emotional traits that are predictive of suicidal behaviors. A complex emotion topic model is employed to detect the underlying emotions and emotion-related topics in the Blog streams, based on eight basic emotion categories and five levels of emotion intensities. Since suicide is caused through an accumulative process, we propose three accumulative emotional traits, i.e., accumulation, covariance, and transition of the consecutive Blog emotions, and employ a generalized linear regression algorithm to examine the relationship between emotional traits and suicide risk. Our experiment results suggest that the emotion transition trait turns out to be more discriminative of the suicide risk, and that the combination of the three traits in linear regression would generate even more discriminative predictions. A classification of the suicide and nonsuicide Blog articles in our additional experiment verifies this result. Finally, we conduct a case study of the most commonly mentioned emotion-related topics in the suicidal Blogs, to further understand the association between emotions and thoughts for these authors.
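
    The exact definitions of the three accumulative traits are not given here, so the sketch below uses plausible stand-ins (per-category sums, the covariance matrix of post-level emotion scores, and mean absolute change between consecutive posts) to show how such features could be assembled for the regression step.

      # Hedged sketch: plausible stand-ins for the three accumulative emotional
      # traits, computed from a sequence of per-post emotion vectors (8 basic
      # emotions); the paper's exact definitions are not reproduced.
      import numpy as np

      rng = np.random.default_rng(7)
      posts = rng.random((30, 8))                     # 30 blog posts x 8 emotion scores

      accumulation = posts.sum(axis=0)                # total emotion per category
      covariance = np.cov(posts, rowvar=False)        # co-occurrence structure of emotions
      transition = np.abs(np.diff(posts, axis=0)).mean(axis=0)   # mean change between posts

      features = np.concatenate([accumulation, covariance[np.triu_indices(8)], transition])
      print("feature vector length for the regression step:", features.size)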

  10. Examining School-Based Bullying Interventions Using Multilevel Discrete Time Hazard Modeling

    Science.gov (United States)

    Wagaman, M. Alex; Geiger, Jennifer Mullins; Bermudez-Parsai, Monica; Hedberg, E. C.

    2014-01-01

    Although schools have been trying to address bullying by utilizing different approaches that stop or reduce the incidence of bullying, little remains known about what specific intervention strategies are most successful in reducing bullying in the school setting. Using the social-ecological framework, this paper examines school-based disciplinary interventions often used to deliver consequences to deter the reoccurrence of bullying and aggressive behaviors among school-aged children. Data for this study are drawn from the School-Wide Information System (SWIS) with the final analytic sample consisting of 1,221 students in grades K – 12 who received an office disciplinary referral for bullying during the first semester. Using Kaplan-Meier Failure Functions and Multi-level discrete time hazard models, determinants of the probability of a student receiving a second referral over time were examined. Of the seven interventions tested, only Parent-Teacher Conference (AOR=0.65, pbullying and aggressive behaviors. By using a social-ecological framework, schools can develop strategies that deter the reoccurrence of bullying by identifying key factors that enhance a sense of connection between the students’ mesosystems as well as utilizing disciplinary strategies that take into consideration student’s microsystem roles. PMID:22878779
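
    A single-level sketch of the modelling strategy used in this and the preceding record (a discrete-time hazard model fitted as a logistic regression on a person-period data set) is shown below; the multilevel (school-clustered) structure, variable names and simulated data are simplifications and assumptions.

      # Hedged sketch: a single-level discrete-time hazard model fitted as a logistic
      # regression on a person-period data set (one row per student per week until a
      # second referral or censoring).  The multilevel structure, variable names and
      # data are simplifications and assumptions.
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(8)
      rows = []
      for student in range(500):
          ptc = rng.integers(0, 2)                    # 1 = received a parent-teacher conference
          for week in range(1, 19):                   # weeks of the first semester
              logit = -3.0 + 0.05 * week - 0.4 * ptc
              event = rng.binomial(1, 1 / (1 + np.exp(-logit)))
              rows.append({"student": student, "week": week, "ptc": ptc, "event": event})
              if event:                               # stop once the second referral occurs
                  break
      pp = pd.DataFrame(rows)

      fit = smf.logit("event ~ week + ptc", data=pp).fit(disp=False)
      print(np.exp(fit.params))                       # adjusted odds ratios; exp(-0.4) ~ 0.67 by construction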

  11. [Selection of a statistical model for the evaluation of the reliability of the results of toxicological analyses. II. Selection of our statistical model for the evaluation].

    Science.gov (United States)

    Antczak, K; Wilczyńska, U

    1980-01-01

    Part II presents a statistical model devised by the authors for evaluating the results of toxicological analyses. The model includes: 1. Establishment of a reference value, based on our own measurements taken by two independent analytical methods. 2. Selection of laboratories, based on the deviation of the obtained values from the reference values. 3. Evaluation of subsequent quality controls and of the individual laboratories, using analysis of variance, Student's t-test, and a test of differences.
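
    The screening logic of steps 2 and 3 can be illustrated with a short calculation that compares each laboratory's mean to the reference value and applies a one-sample t-test; the acceptance threshold and the replicate values below are illustrative only and are not taken from the paper.

```python
import numpy as np
from scipy import stats

reference_value = 50.0      # established from two independent analytical methods
max_rel_deviation = 0.10    # illustrative acceptance threshold (10%)

# Replicate results reported by three hypothetical laboratories
labs = {
    "lab_A": np.array([49.2, 50.8, 50.1, 49.5]),
    "lab_B": np.array([55.3, 56.1, 54.8, 55.9]),
    "lab_C": np.array([50.4, 48.9, 51.2, 49.8]),
}

for name, results in labs.items():
    rel_dev = abs(results.mean() - reference_value) / reference_value
    t_stat, p_value = stats.ttest_1samp(results, reference_value)
    accepted = rel_dev <= max_rel_deviation and p_value > 0.05
    print(f"{name}: mean={results.mean():.2f}, rel. deviation={rel_dev:.2%}, "
          f"p={p_value:.3f}, accepted={accepted}")
```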

  12. A systematic review of care delivery models and economic analyses in lymphedema: health policy impact (2004-2011).

    Science.gov (United States)

    Stout, N L; Weiss, R; Feldman, J L; Stewart, B R; Armer, J M; Cormier, J N; Shih, Y-C T

    2013-03-01

    A project of the American Lymphedema Framework Project (ALFP), this review seeks to examine the policy and economic impact of caring for patients with lymphedema, a common side effect of cancer treatment. This review is the first of its kind undertaken to investigate, coordinate, and streamline lymphedema policy initiatives in the United States with potential applicability worldwide. As part of a large-scale literature review aiming to systematically evaluate the level of evidence of contemporary peer-reviewed lymphedema literature (2004 to 2011), publications on care delivery models, health policy, and economic impact were retrieved, summarized, and evaluated by a team of investigators and clinical experts. The review substantiates lymphedema education models and clinical models implemented at the community, health care provider, and individual level that improve delivery of care. The review exposes the lack of economic analysis related to lymphedema. Despite a dearth of evidence, efforts towards policy initiatives at the federal and state level are underway. These initiatives and the evidence to support them are examined, and recommendations for translating these findings into clinical practice are made. Medical and community-based disease management interventions, taking a public health approach, are effective delivery models for lymphedema care and demonstrate great potential to improve cancer survivorship care. Efforts to create policy at the federal, state, and local level should target implementation of these models. More research is needed to identify costs associated with the treatment of lymphedema and to model the cost outlays and potential cost savings associated with comprehensive management of chronic lymphedema.

  13. An assessment of the wind re-analyses in the modelling of an extreme sea state in the Black Sea

    Science.gov (United States)

    Akpinar, Adem; Ponce de León, S.

    2016-03-01

    This study assesses wind re-analyses for modelling storms in the Black Sea. A wind-wave modelling system (Simulating WAves Nearshore, SWAN) is applied to the Black Sea basin and calibrated with buoy data for three recent re-analysis wind sources, namely the European Centre for Medium-Range Weather Forecasts Reanalysis-Interim (ERA-Interim), the Climate Forecast System Reanalysis (CFSR), and the Modern Era Retrospective Analysis for Research and Applications (MERRA), during an extreme wave condition that occurred in the north-eastern part of the Black Sea. The SWAN model simulations are carried out with default and tuned settings for the deep-water source terms, especially whitecapping. Performances of the best model configurations based on calibration with buoy data are discussed using data from the JASON2, TOPEX-Poseidon, ENVISAT and GFO satellites. The SWAN model calibration shows that the best configuration is obtained with the Janssen and Komen formulations for wave generation by wind and whitecapping dissipation, with a whitecapping coefficient (Cds) equal to 1.8e-5, using ERA-Interim. In addition, from the collocated SWAN results against the satellite records, the best configuration is determined to be the SWAN using the CFSR winds. The numerical results thus show that the accuracy of a wave forecast depends on the quality of the wind field and on the ability of the SWAN model to simulate waves under extreme, fetch-limited wind conditions.
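
    Calibration and validation of this kind are usually summarized with simple error statistics (bias, root-mean-square error, scatter index, correlation) between modeled and observed significant wave height. The snippet below computes those standard metrics on placeholder arrays; it does not reproduce the SWAN configuration or the Cds tuning itself.

```python
import numpy as np

def wave_skill(hs_model, hs_obs):
    """Standard skill metrics for significant wave height comparisons."""
    hs_model, hs_obs = np.asarray(hs_model), np.asarray(hs_obs)
    bias = np.mean(hs_model - hs_obs)
    rmse = np.sqrt(np.mean((hs_model - hs_obs) ** 2))
    scatter_index = rmse / np.mean(hs_obs)
    corr = np.corrcoef(hs_model, hs_obs)[0, 1]
    return {"bias": bias, "rmse": rmse, "scatter_index": scatter_index, "r": corr}

# Placeholder values standing in for collocated SWAN output and buoy/altimeter Hs (m)
hs_obs = np.array([1.2, 2.5, 4.1, 6.3, 5.0, 3.2])
hs_swan = np.array([1.0, 2.7, 3.8, 6.8, 4.6, 3.4])
print(wave_skill(hs_swan, hs_obs))
```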

  14. Modeling and stress analyses of a normal foot-ankle and a prosthetic foot-ankle complex.

    Science.gov (United States)

    Ozen, Mustafa; Sayman, Onur; Havitcioglu, Hasan

    2013-01-01

    Total ankle replacement (TAR) is a relatively new concept and is becoming more popular for the treatment of ankle arthritis and fractures. Because of the high cost and difficulty of experimental studies, the development of TAR prostheses is progressing very slowly. For this reason, medical imaging techniques such as CT and MRI have become more and more useful. The finite element method (FEM) is a widely used technique for estimating the mechanical behavior of materials and structures in engineering applications. FEM has also been increasingly applied to biomechanical analyses of human bones, tissues and organs, thanks to developments in both computing capabilities and medical imaging techniques. 3-D finite element models of the human foot and ankle reconstructed from MR and CT images have been investigated by several authors. In this study, the geometries used in modeling a normal and a prosthetic foot and ankle were obtained from a 3D reconstruction of CT images. The segmentation software MIMICS was used to generate the 3D images of the bony structures, soft tissues and prosthesis components of the normal and prosthetic ankle-foot complex. Except for the fused spaces between the adjacent surfaces of the phalanges, the metatarsals, cuneiforms, cuboid, navicular, talus and calcaneus bones, soft tissues and prosthesis components were developed independently to form the foot and ankle complex. The SOLIDWORKS program was used to form the boundary surfaces of all model components, and the solid models were then obtained from these boundary surfaces. The finite element analysis software ABAQUS was used to perform the numerical stress analyses of these models for the balanced standing position. Plantar pressure and von Mises stress distributions of the normal and prosthetic ankles were compared with each other. There was a peak pressure increase at the 4th metatarsal, first metatarsal and talus bones and a decrease at the intermediate cuneiform and calcaneus bones, in

  15. Experiments and sensitivity analyses for heat transfer in a meter-scale regularly fractured granite model with water flow

    Institute of Scientific and Technical Information of China (English)

    Wei LU; Yan-yong XIANG

    2012-01-01

    Experiments on saturated water flow and heat transfer were conducted for a meter-scale model of regularly fractured granite. The fractured rock model (height 1502.5 mm, width 904 mm, and thickness 300 mm), embedded with two vertical and two horizontal fractures of pre-set apertures, was constructed using 18 pieces of intact granite. The granite was taken from a site currently being investigated for a high-level nuclear waste repository in China. The experiments involved different heat source temperatures and vertical water fluxes, with the embedded fractures either open or filled with sand. A finite difference scheme and computer code for calculating water flow and heat transfer in regularly fractured rocks were developed, verified against both the experimental data and calculations from the TOUGH2 code, and employed for parametric sensitivity analyses. The experiments revealed that, among other things, the temperature distribution was influenced by water flow in the fractures, especially flow in the vertical fracture adjacent to the heat source, and that heat conduction between neighboring rock blocks in the model with sand-filled fractures was enhanced by the sand, with a larger range of influence of the heat source and a longer time for approaching asymptotic steady state than in the model with open fractures. The temperatures from the experiments were in general slightly lower than those from the numerical calculations, probably because a certain amount of outward heat transfer at the model perimeter was unavoidable in the experiments. The parametric sensitivity analyses indicated that the temperature distribution was highly sensitive to water flow in the fractures, and that the water temperature in the vertical fracture adjacent to the heat source was rather insensitive to water flow in the other fractures.
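
    The core of such a finite difference code is the discretized heat conduction update; advective heat transport by fracture flow adds further terms. As a minimal illustration, the sketch below advances a one-dimensional explicit conduction scheme with an assumed thermal diffusivity for granite; it is not the authors' coupled flow-and-heat-transfer code.

```python
import numpy as np

# 1-D transient heat conduction: dT/dt = alpha * d2T/dx2 (explicit scheme)
alpha = 1.2e-6             # thermal diffusivity of granite, m^2/s (illustrative value)
L, nx = 1.5, 31            # domain length (m) and number of nodes
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha   # time step satisfying the explicit stability limit

T = np.full(nx, 20.0)      # initial temperature, deg C
T[0] = 80.0                # heat source held at the left boundary

for _ in range(20000):
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
    T[-1] = T[-2]          # insulated right boundary (zero gradient)

print(T.round(1))
```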

  16. Spatial Modeling Techniques for Characterizing Geomaterials: Deterministic vs. Stochastic Modeling for Single-Variable and Multivariate Analyses

    Institute of Scientific and Technical Information of China (English)

    Katsuaki Koike

    2011-01-01

    Sample data in the Earth and environmental sciences are limited in quantity and sampling location; therefore, sophisticated spatial modeling techniques are indispensable for accurate imaging of the complicated structures and properties of geomaterials. This paper presents several effective methods, grouped into two categories depending on the nature of the regionalized data used. Type I data originate from plural populations, while type II data satisfy the prerequisite of stationarity and have distinct spatial correlations. For type I data, three methods are shown to be effective and demonstrated to produce plausible results: (1) a spline-based method, (2) a combination of a spline-based method with stochastic simulation, and (3) a neural network method. Geostatistics proves to be a powerful tool for type II data. Three new geostatistical approaches are presented with case studies: an application to directional data such as fractures, multi-scale modeling that incorporates a scaling law, and space-time joint analysis for multivariate data. Methods for improving the contribution of such spatial modeling to the Earth and environmental sciences are also discussed, and important future problems to be solved are summarized.
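
    As a concrete illustration of the geostatistical side used for type II data, the snippet below estimates an empirical semivariogram from scattered samples and fits an exponential model to it, the usual precursor to kriging. The synthetic data and the choice of an exponential model are assumptions for illustration and are not drawn from the paper's case studies.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(42)
xy = rng.uniform(0, 100, size=(300, 2))                        # sample locations
z = np.sin(xy[:, 0] / 15) + 0.3 * rng.standard_normal(300)     # regionalized variable

# Empirical semivariogram: gamma(h) = 0.5 * mean[(z_i - z_j)^2] for pairs at lag h
d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
sq = 0.5 * (z[:, None] - z[None, :]) ** 2
bins = np.linspace(1, 50, 11)
lags = 0.5 * (bins[:-1] + bins[1:])
gamma = np.array([sq[(d >= lo) & (d < hi)].mean() for lo, hi in zip(bins[:-1], bins[1:])])

def exp_model(h, nugget, psill, vrange):   # exponential semivariogram model
    return nugget + psill * (1 - np.exp(-h / vrange))

params, _ = curve_fit(exp_model, lags, gamma, p0=[0.1, 0.5, 10])
print("nugget, partial sill, range:", np.round(params, 3))
```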

  17. Longitudinal Examination of Resilience after Traumatic Brain Injury: A Traumatic Brain Injury Model Systems Study.

    Science.gov (United States)

    Marwitz, Jennifer H; Sima, Adam P; Kreutzer, Jeffrey S; Dreer, Laura E; Bergquist, Thomas F; Zafonte, Ross; Johnson-Greene, Douglas; Felix, Elizabeth R

    2017-07-19

    To evaluate the trajectory of resilience during the first year following moderate-severe TBI, factors associated with resilience at 3, 6, and 12 months post-injury, and changing relationships over time between resilience and other factors. Longitudinal analysis of an observational cohort. Five inpatient rehabilitation centers. Patients with TBI (N = 195) enrolled in the resilience module of the TBI Model Systems study with data collected at 3-, 6-, and 12-month follow-up. Not applicable. Connor-Davidson Resilience Scale. Initially, resilience levels appeared to be stable during the first year post-injury. Individual growth curve models were used to examine resilience over time in relation to demographic, psychosocial, and injury characteristics. After adjusting for these characteristics, resilience actually declined over time. Higher levels of resilience were related to non-minority status, absence of pre-injury substance abuse, lower anxiety and disability levels, and greater life satisfaction. Resilience is a construct that is relevant to understanding brain injury outcomes and has potential value in planning clinical interventions. Copyright © 2017. Published by Elsevier Inc.

  18. Aerosol penetration of leak pathways : an examination of the available data and models.

    Energy Technology Data Exchange (ETDEWEB)

    Powers, Dana Auburn

    2009-04-01

    Data and models of aerosol particle deposition in leak pathways are described. Pathways considered include capillaries, orifices, slots and cracks in concrete. The Morewitz-Vaughan criterion for aerosol plugging of leak pathways is shown to be applicable only to a limited range of particle settling velocities and Stokes numbers. More useful are sampling efficiency criteria defined by Davies and by Liu and Agarwal. Deposition of particles can be limited by bounce from surfaces defining leak pathways and by resuspension of particles deposited on these surfaces. A model of the probability of particle bounce is described. Resuspension of deposited particles can be triggered by changes in flow conditions, particle impact on deposits and by shock or vibration of the surfaces. This examination was performed as part of the review of the AP1000 Standard Combined License Technical Report, APP-GW-GLN-12, Revision 0, 'Offsite and Control Room Dose Changes' (TR-112) in support of the USNRC AP1000 Standard Combined License Pre-Application Review.

  19. Examining the justification of the superposition model of FePc: A DMC study

    CERN Document Server

    Ichibha, Tom; Hongo, Kenta; Maezono, Ryo

    2016-01-01

    We have applied CASSCF-DMC to evaluate the relative stabilities of the possible electronic configurations of an isolated FePc under $D_{4h}$ symmetry. It predicts an $A_{2g}$ ground state, supporting preceding DFT studies [J. Chem. Phys. 114, 9780 (2001); Appl. Phys. 95, 165 (2009); Phys. Rev. B 85, 235129 (2012)] with a confidence that overcomes the ambiguity about exchange-correlation (XC) functionals. By comparing DMC with several XC functionals, we clarified the importance of short-range exchange in describing the relative stability. We examined why the predicted $A_{2g}$ state is excluded from the possible ground states in the recent ligand-field-based model [J. Chem. Phys. 138, 244308 (2013)]. Simplified assumptions made in the superposition model [Rep. Prog. Phys. 52, 699 (1989)] are identified as giving an unreasonably small energy gain for $A_{2g}$ compared with reality. The state is found to have possible reasons for the stabilization, reducing the occupations from an unstable anti-bonding orbital, preventing double occupancies i...

  20. Examining a comprehensive model of disaster-related posttraumatic stress disorder in systematically studied survivors of 10 disasters.

    Science.gov (United States)

    North, Carol S; Oliver, Julianne; Pandya, Anand

    2012-10-01

    Using a comprehensive disaster model, we examined predictors of posttraumatic stress disorder (PTSD) in combined data from 10 different disasters. The combined sample included data from 811 directly exposed survivors of 10 disasters between 1987 and 1995. We used consistent methods across all 10 disaster samples, including full diagnostic assessment. In multivariate analyses, predictors of PTSD were female gender, younger age, Hispanic ethnicity, less education, ever-married status, predisaster psychopathology, disaster injury, and witnessing injury or death; exposure through death or injury to friends or family members and witnessing the disaster aftermath did not confer additional PTSD risk. Intentionally caused disasters associated with PTSD in bivariate analysis did not independently predict PTSD in multivariate analysis. Avoidance and numbing symptoms represented a PTSD marker. Despite confirming some previous research findings, we found no associations between PTSD and disaster typology. Prospective research is needed to determine whether early avoidance and numbing symptoms identify individuals likely to develop PTSD later. Our findings may help identify at-risk populations for treatment research.

  1. Genetic analyses using GGE model and a mixed linear model approach, and stability analyses using AMMI bi-plot for late-maturity alpha-amylase activity in bread wheat genotypes.

    Science.gov (United States)

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Fofana, Bourlaye

    2017-06-01

    Low falling number and discounting of grain when it is downgraded in class are consequences of excessive late-maturity α-amylase activity (LMAA) in bread wheat (Triticum aestivum L.). Grain expressing high LMAA produces poorer-quality bread products. To breed effectively for low LMAA, it is necessary to understand which genes control it and how they are expressed, particularly when genotypes are grown in different environments. In this study, an International Collection (IC) of 18 spring wheat genotypes and another set of 15 spring wheat cultivars adapted to South Dakota (SD), USA, were assessed to characterize the genetic component of LMAA over 5 and 13 environments, respectively. The data were analysed using a GGE model with a mixed linear model approach, and stability analysis was presented using an AMMI bi-plot in R software. All estimated variance components and their proportions of the total phenotypic variance were highly significant for both sets of genotypes, which was validated by the AMMI model analysis. Broad-sense heritability for LMAA was higher in the SD-adapted cultivars (53%) than in the IC (49%). Significant genetic effects and stability analyses showed that some genotypes, e.g. 'Lancer', 'Chester' and 'LoSprout' from the IC, and 'Alsen', 'Traverse' and 'Forefront' from the SD cultivars, could be used as parents to develop new cultivars expressing low levels of LMAA. Stability analysis using an AMMI bi-plot revealed that 'Chester', 'Lancer' and 'Advance' were the most stable across environments, while 'Kinsman', 'Lerma52' and 'Traverse' exhibited the lowest stability for LMAA across environments.
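
    The AMMI analysis behind such a bi-plot amounts to fitting additive genotype and environment main effects and applying a singular value decomposition to the residual genotype-by-environment table. The sketch below performs that decomposition on a small synthetic table of LMAA means; the genotypes, environments, and values are placeholders rather than the study's data.

```python
import numpy as np

rng = np.random.default_rng(7)
# Synthetic genotype-by-environment table of LMAA means (rows: genotypes, cols: environments)
Y = rng.normal(loc=5.0, scale=1.0, size=(6, 5))

grand = Y.mean()
gen_eff = Y.mean(axis=1, keepdims=True) - grand   # genotype main effects
env_eff = Y.mean(axis=0, keepdims=True) - grand   # environment main effects
resid = Y - grand - gen_eff - env_eff             # G x E interaction residuals

# AMMI: SVD of the double-centred interaction matrix
U, S, Vt = np.linalg.svd(resid, full_matrices=False)
ipca1_gen = U[:, 0] * np.sqrt(S[0])   # genotype scores on IPCA1 (bi-plot x-axis)
ipca1_env = Vt[0] * np.sqrt(S[0])     # environment scores on IPCA1

explained = S**2 / np.sum(S**2)
print("Variance explained by IPCA axes:", np.round(explained, 3))
print("Genotype IPCA1 scores:", np.round(ipca1_gen, 2))
```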

  2. Data Assimilation Tools for CO2 Reservoir Model Development – A Review of Key Data Types, Analyses, and Selected Software

    Energy Technology Data Exchange (ETDEWEB)

    Rockhold, Mark L.; Sullivan, E. C.; Murray, Christopher J.; Last, George V.; Black, Gary D.

    2009-09-30

    Pacific Northwest National Laboratory (PNNL) has embarked on an initiative to develop world-class capabilities for performing experimental and computational analyses associated with geologic sequestration of carbon dioxide. The ultimate goal of this initiative is to provide science-based solutions for helping to mitigate the adverse effects of greenhouse gas emissions. This Laboratory-Directed Research and Development (LDRD) initiative currently has two primary focus areas—advanced experimental methods and computational analysis. The experimental methods focus area involves the development of new experimental capabilities, supported in part by the U.S. Department of Energy’s (DOE) Environmental Molecular Science Laboratory (EMSL) housed at PNNL, for quantifying mineral reaction kinetics with CO2 under high temperature and pressure (supercritical) conditions. The computational analysis focus area involves numerical simulation of coupled, multi-scale processes associated with CO2 sequestration in geologic media, and the development of software to facilitate building and parameterizing conceptual and numerical models of subsurface reservoirs that represent geologic repositories for injected CO2. This report describes work in support of the computational analysis focus area. The computational analysis focus area currently consists of several collaborative research projects. These are all geared towards the development and application of conceptual and numerical models for geologic sequestration of CO2. The software being developed for this focus area is referred to as the Geologic Sequestration Software Suite or GS3. A wiki-based software framework is being developed to support GS3. This report summarizes work performed in FY09 on one of the LDRD projects in the computational analysis focus area. The title of this project is Data Assimilation Tools for CO2 Reservoir Model Development. Some key objectives of this project in FY09 were to assess the current state

  3. Kvalitative analyser

    DEFF Research Database (Denmark)

    Boolsen, Merete Watt

    The book explains the fundamental steps of the research process and applies them to selected qualitative analyses: content analysis, Grounded Theory, argumentation analysis, and discourse analysis.

  4. PartitionFinder 2: New Methods for Selecting Partitioned Models of Evolution for Molecular and Morphological Phylogenetic Analyses.

    Science.gov (United States)

    Lanfear, Robert; Frandsen, Paul B; Wright, April M; Senfeld, Tereza; Calcott, Brett

    2017-03-01

    PartitionFinder 2 is a program for automatically selecting best-fit partitioning schemes and models of evolution for phylogenetic analyses. PartitionFinder 2 is substantially faster and more efficient than version 1, and incorporates many new methods and features. These include the ability to analyze morphological datasets, new methods to analyze genome-scale datasets, new output formats to facilitate interoperability with downstream software, and many new models of molecular evolution. PartitionFinder 2 is freely available under an open source license and works on Windows, OSX, and Linux operating systems. It can be downloaded from www.robertlanfear.com/partitionfinder. The source code is available at https://github.com/brettc/partitionfinder. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  5. ANALYSES ON NONLINEAR COUPLING OF MAGNETO-THERMO-ELASTICITY OF FERROMAGNETIC THIN SHELL-Ⅱ: FINITE ELEMENT MODELING AND APPLICATION

    Institute of Scientific and Technical Information of China (English)

    Xingzhe Wang; Xiaojing Zheng

    2009-01-01

    Based on the generalized variational principle of magneto-thermo-elasticity of a ferromagnetic thin shell established in Part I (Analyses on nonlinear coupling of magneto-thermo-elasticity of ferromagnetic thin shell-Ⅰ), this paper develops a finite element model for the mechanical-magneto-thermal multi-field coupling of a ferromagnetic thin shell. The numerical model consists of finite element equations for the three sub-systems of magnetic, thermal and deformation fields, as well as iterative methods for the nonlinearities of the geometrical large deflection and of the multi-field coupling of the ferromagnetic shell. As examples, numerical simulations of the magneto-elastic behavior of a ferromagnetic cylindrical shell in an applied magnetic field, and of the magneto-thermo-elastic behavior of the shell in applied magnetic and thermal fields, are carried out. The results are in good agreement with experimental ones.

  6. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes

    Directory of Open Access Journals (Sweden)

    Ilona Naujokaitis-Lewis

    2016-07-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture the processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics, and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat

  7. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes.

    Science.gov (United States)

    Naujokaitis-Lewis, Ilona; Curtis, Janelle M R

    2016-01-01

    Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture the processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics, and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust. Our results underscore the importance of considering habitat attributes along
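
    A stripped-down version of the kind of global sensitivity analysis that GRIP 2.0 automates can be written as Latin hypercube sampling over the parameter space followed by a rank-correlation screen of each parameter against the model output. The toy extinction-risk function, parameter names, and ranges below are placeholders; they do not reflect GRIP 2.0's actual interface or the whitebark pine model.

```python
import numpy as np
from scipy.stats import qmc, spearmanr

param_names = ["habitat_amount", "adult_survival", "blister_rust_effect"]
lower = np.array([0.2, 0.80, 0.0])
upper = np.array([1.0, 0.99, 0.5])

# Latin hypercube sample of the parameter space
sample = qmc.LatinHypercube(d=3, seed=3).random(n=500)
params = qmc.scale(sample, lower, upper)

def extinction_risk(p):
    """Toy stand-in for a coupled SDM-population model run."""
    habitat, survival, rust = p
    return 1 / (1 + np.exp(10 * (habitat * survival - rust - 0.4)))

risk = np.apply_along_axis(extinction_risk, 1, params)

# Rank (Spearman) correlation as a simple global sensitivity index
for i, name in enumerate(param_names):
    rho, _ = spearmanr(params[:, i], risk)
    print(f"{name}: rho = {rho:+.2f}")
```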

  8. ATOP - The Advanced Taiwan Ocean Prediction System Based on the mpiPOM. Part 1: Model Descriptions, Analyses and Results

    Directory of Open Access Journals (Sweden)

    Leo Oey

    2013-01-01

    A data-assimilated Taiwan Ocean Prediction (ATOP) system is being developed at the National Central University, Taiwan. The model simulates sea-surface height, three-dimensional currents, temperature and salinity, and turbulent mixing. The model has options for tracer and particle-tracking algorithms, as well as for wave-induced Stokes drift and wave-enhanced mixing and bottom drag. Two different forecast domains have been tested: a large-grid domain that encompasses the entire North Pacific Ocean at 0.1° × 0.1° horizontal resolution and 41 vertical sigma levels, and a smaller western North Pacific domain which at present also has the same horizontal resolution. In both domains, 25-year spin-up runs from 1988 - 2011 were first conducted, forced by six-hourly Cross-Calibrated Multi-Platform (CCMP) and NCEP reanalysis Global Forecast System (GFS) winds. The results are then used as initial conditions to conduct ocean analyses from January 2012 through February 2012, when updated hindcasts and real-time forecasts begin using the GFS winds. This paper describes the ATOP system and compares the forecast results against satellite altimetry data to assess model skill. The model results are also shown to compare well with observations of (i) the Kuroshio intrusion in the northern South China Sea, and (ii) the subtropical countercurrent. A review of, and comparison with, other models in the literature regarding (i) are also given.

  9. Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality

    Directory of Open Access Journals (Sweden)

    Hayden J. R. Woodley

    2016-01-01

    The construct of equity sensitivity describes an individual's preference about his/her desired input-to-outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called Benevolents. Individuals low on equity sensitivity are more outcome oriented, and are described as Entitleds. Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and the Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity.

  10. PALM KERNEL OIL SOLUBILITY EXAMINATION AND ITS MODELING IN EXTRACTION PROCESS USING SUPERCRITICAL CARBON DIOXIDE

    Directory of Open Access Journals (Sweden)

    Wahyu Bahari Setianto

    2013-11-01

    The application of supercritical carbon dioxide (SC-CO2) to vegetable oil extraction has become an attractive technique due to its high solubility, short extraction time and simple purification. The method is considered an earth-friendly technology because no chemicals are used. The solubility of a solute in SC-CO2 is essential data for applying SC-CO2 extraction. In this work, the equilibrium solubility of palm kernel oil (PKO) in SC-CO2 has been examined using extraction curve analysis. The examinations were performed over temperature and pressure ranges of 323.15 K to 353.15 K and 20.7 to 34.5 MPa, respectively. The experimental solubilities ranged from 0.0160 to 0.0503 g oil/g CO2, depending on the extraction conditions. The experimental solubility data were well correlated with a solvent-density-based model, with an absolute percent deviation of 0.96.
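
    The abstract does not specify which solvent-density-based model was used; a common choice for oil/SC-CO2 systems is Chrastil's equation, ln S = k ln(rho) + a/T + b. The fit below on made-up solubility points shows how such a correlation is typically obtained; the coefficients have no connection to the paper's data.

```python
import numpy as np

# Hypothetical measurements: CO2 density (kg/m^3), temperature (K), solubility (g oil / g CO2)
rho = np.array([700.0, 780.0, 840.0, 650.0, 730.0, 800.0])
T   = np.array([323.15, 323.15, 323.15, 353.15, 353.15, 353.15])
S   = np.array([0.018, 0.029, 0.041, 0.016, 0.027, 0.046])

# Chrastil: ln S = k*ln(rho) + a/T + b  ->  linear least squares in (ln rho, 1/T, 1)
A = np.column_stack([np.log(rho), 1.0 / T, np.ones_like(rho)])
(k, a, b), *_ = np.linalg.lstsq(A, np.log(S), rcond=None)
print(f"k = {k:.2f}, a = {a:.1f}, b = {b:.2f}")

S_pred = np.exp(A @ np.array([k, a, b]))
aapd = np.mean(np.abs(S_pred - S) / S) * 100   # average absolute percent deviation
print(f"AAPD = {aapd:.2f}%")
```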

  11. Examination of a climate stabilization pathway via zero-emissions using Earth system models

    Science.gov (United States)

    Nohara, Daisuke; Tsutsui, J.; Watanabe, S.; Tachiiri, K.; Hajima, T.; Okajima, H.; Matsuno, T.

    2015-09-01

    Long-term climate experiments up to the year 2300 have been conducted using two full-scale complex Earth system models (ESMs), CESM1(BGC) and MIROC-ESM, for a CO2 emissions reduction pathway, termed Z650, where annual CO2 emissions peak at 11 PgC in 2020, decline by 50% every 30 years, and reach zero in 2160. The results have been examined by focusing on the approximate linear relationship between the temperature increase and cumulative CO2 emissions. Although the temperature increase is nearly proportional to the cumulative CO2 emissions in both models, this relationship does not necessarily provide a robust basis for the restriction of CO2 emissions because it is substantially modulated by non-CO2 forcing. CO2-induced warming, estimated from the atmospheric CO2 concentrations in the models, indicates an approximate compensation of nonlinear changes between fast-mode responses to concentration changes at less than 10 years and slow-mode response at more than 100 years due to the thermal inertia of the ocean. In this estimate, CESM1(BGC) closely approximates a linear trend of 1.7 °C per 1000 PgC, whereas MIROC-ESM shows a deviation toward higher temperatures after the emissions peak, from 1.8 °C to 2.4 °C per 1000 PgC over the range of 400-850 PgC cumulative emissions corresponding to years 2000-2050. The evolution of temperature under zero emissions, 2160-2300, shows a slight decrease of about 0.1 °C per century in CESM1(BGC), but remains almost constant in MIROC-ESM. The fast-mode response toward the equilibrium state decreases with a decrease in the airborne fraction owing to continued CO2 uptake (carbon cycle inertia), whereas the slow-mode response results in more warming owing to continued heat uptake (thermal inertia). Several specific differences are noted between the two models regarding the degree of this compensation and in some key regional aspects associated with sustained warming and long-term climate risks. Overall, elevated temperatures continue
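
    The quoted proportionality between warming and cumulative emissions lends itself to a quick back-of-the-envelope check. The few lines below simply apply the 1.7 and 2.4 °C per 1000 PgC slopes mentioned in the abstract to the 400-850 PgC cumulative-emissions range; this is arithmetic on the reported numbers, not a re-analysis of the ESM output.

```python
# Warming implied by the linear trend: delta_T = slope * cumulative_emissions / 1000
cumulative_pgc = [400, 850]   # cumulative CO2 emissions (PgC) quoted for 2000-2050
for slope in (1.7, 2.4):      # deg C per 1000 PgC, the two values quoted in the abstract
    low, high = (slope * c / 1000 for c in cumulative_pgc)
    print(f"{slope} C per 1000 PgC -> {low:.2f} to {high:.2f} C of CO2-induced warming")
```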

  12. Using FOSM-Based Data Worth Analyses to Design Geophysical Surveys to Reduce Uncertainty in a Regional Groundwater Model Update

    Science.gov (United States)

    Smith, B. D.; White, J.; Kress, W. H.; Clark, B. R.; Barlow, J.

    2016-12-01

    Hydrogeophysical surveys have become an integral part of understanding the hydrogeological frameworks used in groundwater models. Regional models cover a large area where water well data are, at best, scattered and irregular. Since budgets are finite, priorities must be assigned to select optimal areas for geophysical surveys. For airborne electromagnetic (AEM) geophysical surveys, optimization of mapping depth and line spacing needs to take into account the objectives of the groundwater models. The approach discussed here uses a first-order, second-moment (FOSM) uncertainty analysis, which assumes an approximately linear relation between model parameters and observations. This assumption allows FOSM analyses to be applied to estimate the value of increased parameter knowledge for reducing forecast uncertainty. FOSM is used to facilitate optimization of yet-to-be-completed geophysical surveying to reduce model forecast uncertainty. The main objective of the geophysical surveying is assumed to be estimating values and spatial variation in hydrologic parameters (i.e., hydraulic conductivity) as well as mapping lower-permeability layers that influence the spatial distribution of recharge flux. The proposed data worth analysis was applied to the Mississippi Embayment Regional Aquifer Study (MERAS), which is being updated. The objective of MERAS is to assess the ground-water availability (status and trends) of the Mississippi embayment aquifer system. The study area covers portions of eight states including Alabama, Arkansas, Illinois, Kentucky, Louisiana, Mississippi, Missouri, and Tennessee. The active model grid covers approximately 70,000 square miles, and incorporates some 6,000 miles of major rivers and over 100,000 water wells. In the FOSM analysis, a dense network of pilot points was used to capture uncertainty in hydraulic conductivity and recharge. To simulate the effect of AEM flight lines, the prior uncertainty for hydraulic conductivity and recharge pilot points along potential flight lines was
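
    Under FOSM assumptions, forecast uncertainty and the worth of new data reduce to a few linear-algebra operations on the Jacobian of observations with respect to parameters. The numpy sketch below shows the generic Schur-complement form of that calculation with random placeholder matrices and dimensions; it is not the MERAS implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_par, n_obs = 50, 30

J = rng.standard_normal((n_obs, n_par))   # Jacobian: d(observations)/d(parameters)
C_p = np.eye(n_par) * 0.5                 # prior parameter covariance
C_o = np.eye(n_obs) * 0.1                 # observation noise covariance
y = rng.standard_normal(n_par)            # forecast sensitivity to parameters

def forecast_variance(J, C_p, C_o, y):
    """FOSM posterior forecast variance after assimilating the observations in J."""
    S = J @ C_p @ J.T + C_o
    C_post = C_p - C_p @ J.T @ np.linalg.solve(S, J @ C_p)
    return y @ C_post @ y

prior_var = y @ C_p @ y
post_var = forecast_variance(J, C_p, C_o, y)

# Data worth of a hypothetical AEM flight line = extra variance reduction from 10 new observations
J_new = np.vstack([J, rng.standard_normal((10, n_par))])
C_o_new = np.eye(n_obs + 10) * 0.1
post_var_new = forecast_variance(J_new, C_p, C_o_new, y)

print(f"prior {prior_var:.2f} -> posterior {post_var:.2f} -> with AEM data {post_var_new:.2f}")
```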

  13. Using plant growth modeling to analyse C source-sink relations under drought: inter and intra specific comparison

    Directory of Open Access Journals (Sweden)

    Benoit ePallas

    2013-11-01

    The ability to assimilate C and allocate NSC (non-structural carbohydrates) to the most appropriate organs is crucial to maximize plant ecological or agronomic performance. Such C source and sink activities are differentially affected by environmental constraints. Under drought, plant growth is generally more sink- than source-limited, as organ expansion or appearance rate is affected earlier and more strongly than C assimilation. This favors plant survival and recovery, but not always agronomic performance, as NSC are stored rather than used for growth due to a modified metabolism in source and sink leaves. Such interactions between plant C and water balance are complex, and plant modeling can help analyze their impact on the plant phenotype. This paper addresses the impact of trade-offs between C sink and source activities on plant production under drought, combining experimental and modeling approaches. Two contrasting monocotyledonous species (rice, oil palm) were studied. Experimentally, the sink limitation of plant growth under moderate drought was confirmed, as well as the modifications in NSC metabolism in source and sink organs. Under severe stress, when the C source became limiting, plant NSC concentration decreased. Two plant models dedicated to oil palm and rice morphogenesis were used to perform a sensitivity analysis and further explore how to optimize the drought sensitivity of C sinks and sources to maximize plant growth. Modeling results highlighted that optimal drought sensitivity depends on both drought type and species, and that modeling is a great opportunity to analyse such complex processes. Further modeling needs, and more generally the challenge of using models to support complex trait breeding, are discussed.

  14. Examining Nock and Prinstein's four-function model with offenders who self-injure.

    Science.gov (United States)

    Power, Jenelle; Smith, Hayden P; Beaudette, Janelle N

    2016-07-01

    Nonsuicidal self-injury (NSSI) is deliberate bodily harm or disfigurement without suicidal intent and for purposes not socially sanctioned (e.g., cutting, burning, head banging). Nock and Prinstein (2004) proposed a four-function model (FFM) of NSSI, in which the functions of NSSI are categorized by two dichotomous factors: (a) positive (i.e., involves the addition of a favorable stimulus) or negative (i.e., involves the removal of an aversive stimulus); and (b) automatic (i.e., intrapersonal) or social (i.e., interpersonal). This study examined the validity of this model with incarcerated populations. In-depth semistructured interviews with 201 incarcerated offenders were analyzed and categorized based on the FFM. Participants' descriptions of the functions of NSSI were most commonly categorized as automatic negative reinforcement (25.0%; e.g., coping with negative emotions), followed by automatic positive reinforcement (31.3%; e.g., self-punishment), social positive reinforcement (31.3%; e.g., to communicate with others), and social negative reinforcement (12.5%; e.g., to avoid hurting someone else). While the uniqueness of the correctional environment affects some of the specific functions evident in offenders, the FFM can be used to adequately organize the functions of NSSI in offenders, providing a useful tool for explaining this complex behavior. Clinically, NSSI in offenders can be viewed as having the same underlying motivations, although automatic positive reinforcement is more prevalent in offenders and social positive reinforcement is more prevalent in nonoffenders. Given that the motivations underlying nonsuicidal self-injury are similar for offender and nonoffender populations, similar treatment approaches may be effective with both populations.

  15. A multinomial logit model-Bayesian network hybrid approach for driver injury severity analyses in rear-end crashes.

    Science.gov (United States)

    Chen, Cong; Zhang, Guohui; Tarefder, Rafiqul; Ma, Jianming; Wei, Heng; Guan, Hongzhi

    2015-07-01

    Rear-end crashes are one of the most common types of traffic crashes in the U.S. A good understanding of their characteristics and contributing factors is of practical importance. Previously, both multinomial logit models and Bayesian network methods have been used in crash modeling and analysis, although each has its own application restrictions and limitations. In this study, a hybrid approach is developed that combines multinomial logit models and Bayesian network methods to comprehensively analyze driver injury severities in rear-end crashes, based on state-wide crash data collected in New Mexico from 2010 to 2011. A multinomial logit model is developed to investigate and identify significant contributing factors for rear-end crash driver injury severities, classified into three categories: no injury, injury, and fatality. The identified significant factors are then utilized to establish a Bayesian network that explicitly formulates statistical associations between injury severity outcomes and explanatory attributes, including driver behavior, demographic features, vehicle factors, geometric and environmental characteristics, etc. The test results demonstrate that the proposed hybrid approach performs reasonably well. The Bayesian network inference analyses indicate that factors including truck involvement, inferior lighting conditions, windy weather conditions, and the number of vehicles involved could significantly increase driver injury severities in rear-end crashes. The developed methodology and estimation results provide insights for developing effective countermeasures to reduce rear-end crash injury severities and improve traffic system safety performance.
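
    The first stage of the hybrid approach, a multinomial logit model over the three injury-severity categories, can be fit with standard libraries. The sketch below uses scikit-learn's logistic regression (which fits a multinomial model for multiclass targets with the default solver) on simulated crash records with invented feature names; the New Mexico data, the full covariate set, and the Bayesian network stage are not reproduced.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
n = 2000

# Simulated rear-end crash records (all feature names hypothetical)
X = np.column_stack([
    rng.integers(0, 2, n),   # truck involved
    rng.integers(0, 2, n),   # inferior lighting
    rng.integers(0, 2, n),   # windy weather
    rng.integers(2, 6, n),   # number of vehicles involved
])
# Generate severity outcomes (0 = no injury, 1 = injury, 2 = fatality) from assumed coefficients
logits = X @ np.array([[0.0, 0.4, 1.0],
                       [0.0, 0.3, 0.6],
                       [0.0, 0.2, 0.5],
                       [0.0, 0.1, 0.3]])
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
y = np.array([rng.choice(3, p=p) for p in probs])

mnl = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficients (one row per severity class):")
print(np.round(mnl.coef_, 2))
```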

  16. Comparative analyses reveal potential uses of Brachypodium distachyon as a model for cold stress responses in temperate grasses

    Directory of Open Access Journals (Sweden)

    Li Chuan

    2012-05-01

    Background: Little is known about the potential of Brachypodium distachyon as a model for low temperature stress responses in Pooideae. The ice recrystallization inhibition protein (IRIP) genes, fructosyltransferase (FST) genes, and many C-repeat binding factor (CBF) genes are Pooideae specific and important in low temperature responses. Here we used comparative analyses to study conservation and evolution of these gene families in B. distachyon to better understand its potential as a model species for agriculturally important temperate grasses. Results: Brachypodium distachyon contains cold responsive IRIP genes which have evolved through Brachypodium specific gene family expansions. A large cold responsive CBF3 subfamily was identified in B. distachyon, while CBF4 homologs are absent from the genome. No B. distachyon FST gene homologs encode typical core Pooideae FST motifs, and low temperature induced fructan accumulation was dramatically different in B. distachyon compared to core Pooideae species. Conclusions: We conclude that B. distachyon can serve as an interesting model for specific molecular mechanisms involved in low temperature responses in core Pooideae species. However, the evolutionary history of key genes involved in low temperature responses has been different in Brachypodium and core Pooideae species. These differences limit the use of B. distachyon as a model for holistic studies relevant for agricultural core Pooideae species.

  17. 3D RECORDING FOR 2D DELIVERING – THE EMPLOYMENT OF 3D MODELS FOR STUDIES AND ANALYSES

    Directory of Open Access Journals (Sweden)

    A. Rizzi

    2012-09-01

    In recent years, thanks to advances in surveying sensors and techniques, many heritage sites have been accurately replicated in digital form with very detailed and impressive results. The actual limits are mainly related to hardware capabilities, computation time and the low performance of personal computers. Often the produced models are not viewable on a normal computer, and the only solution for visualizing them easily is offline, using rendered videos. This kind of 3D representation is useful for digital conservation, dissemination purposes or virtual tourism, where people can visit places otherwise closed for preservation or security reasons. But many more potentialities and possible applications are available using a 3D model. The problem is the ability to handle 3D data, as without adequate knowledge this information is reduced to standard 2D data. This article presents some surveying and 3D modeling experiences within the APSAT project ("Ambiente e Paesaggi dei Siti d'Altura Trentini", i.e. Environment and Landscapes of Upland Sites in Trentino). APSAT is a multidisciplinary project funded by the Autonomous Province of Trento (Italy) with the aim of documenting, surveying, studying, analysing and preserving mountainous and hill-top heritage sites located in the region. The project focuses on theoretical, methodological and technological aspects of the archaeological investigation of mountain landscape, considered as the product of sequences of settlements, parcelling-outs, communication networks, resources, and symbolic places. The mountain environment preserves better than others the traces of hunting and gathering, breeding, agricultural, metallurgical and symbolic activities characterised by different durations and environmental impacts, from Prehistory to the Modern Period. Therefore the correct surveying and documentation of these heritage sites and materials is very important. Within the project, the 3DOM unit of FBK is delivering all the surveying

  18. Assessing models of speciation under different biogeographic scenarios; An empirical study using multi-locus and RNA-seq analyses

    Science.gov (United States)

    Edwards, Taylor; Tollis, Marc; Hsieh, PingHsun; Gutenkunst, Ryan N.; Liu, Zhen; Kusumi, Kenro; Culver, Melanie; Murphy, Robert W.

    2016-01-01

    Evolutionary biology often seeks to decipher the drivers of speciation, and much debate persists over the relative importance of isolation and gene flow in the formation of new species. Genetic studies of closely related species can assess if gene flow was present during speciation, because signatures of past introgression often persist in the genome. We test hypotheses on which mechanisms of speciation drove diversity among three distinct lineages of desert tortoise in the genus Gopherus. These lineages offer a powerful system to study speciation, because different biogeographic patterns (physical vs. ecological segregation) are observed at opposing ends of their distributions. We use 82 samples collected from 38 sites, representing the entire species' distribution and generate sequence data for mtDNA and four nuclear loci. A multilocus phylogenetic analysis in *BEAST estimates the species tree. RNA-seq data yield 20,126 synonymous variants from 7665 contigs from two individuals of each of the three lineages. Analyses of these data using the demographic inference package ∂a∂i serve to test the null hypothesis of no gene flow during divergence. The best-fit demographic model for the three taxa is concordant with the *BEAST species tree, and the ∂a∂i analysis does not indicate gene flow among any of the three lineages during their divergence. These analyses suggest that divergence among the lineages occurred in the absence of gene flow and in this scenario the genetic signature of ecological isolation (parapatric model) cannot be differentiated from geographic isolation (allopatric model).

  19. Virus-induced gene silencing as a tool for functional analyses in the emerging model plant Aquilegia (columbine, Ranunculaceae

    Directory of Open Access Journals (Sweden)

    Kramer Elena M

    2007-04-01

    Background: The lower eudicot genus Aquilegia, commonly known as columbine, is currently the subject of extensive genetic and genomic research aimed at developing this taxon as a new model for the study of ecology and evolution. The ability to perform functional genetic analyses is a critical component of this development process and ultimately has the potential to provide insight into the genetic basis for the evolution of a wide array of traits that differentiate flowering plants. Aquilegia is of particular interest due to both its recent evolutionary history, which involves a rapid adaptive radiation, and its intermediate phylogenetic position between core eudicot (e.g., Arabidopsis) and grass (e.g., Oryza) model species. Results: Here we demonstrate the effective use of a reverse genetic technique, virus-induced gene silencing (VIGS), to study gene function in this emerging model plant. Using Agrobacterium-mediated transfer of tobacco rattle virus (TRV)-based vectors, we induce silencing of PHYTOENE DESATURASE (AqPDS) in Aquilegia vulgaris seedlings, and ANTHOCYANIDIN SYNTHASE (AqANS) and the B-class floral organ identity gene PISTILLATA in A. vulgaris flowers. For all of these genes, silencing phenotypes are associated with consistent reduction in endogenous transcript levels. In addition, we show that silencing of AqANS has no effect on overall floral morphology and is therefore a suitable marker for the identification of silenced flowers in dual-locus silencing experiments. Conclusion: Our results show that TRV-VIGS in Aquilegia vulgaris allows data to be rapidly obtained and can be reproduced with effective survival and silencing rates. Furthermore, this method can successfully be used to evaluate the function of early-acting developmental genes. In the future, data derived from VIGS analyses will be combined with large-scale sequencing and microarray experiments already underway in order to address both recent and ancient evolutionary

  20. A model-based examination of multivariate physical modes in the Gulf of Alaska

    Science.gov (United States)

    Hermann, A. J.; Ladd, C.; Cheng, W.; Curchitser, E. N.; Hedstrom, K.

    2016-10-01

    We use multivariate output from a hydrodynamic model of the Gulf of Alaska (GOA) to explore the covariance among its physical state and air/sea fluxes. We attempt to summarize this coupled variability using a limited set of patterns, and examine their correlation to three large-scale climate indices relevant to the Northeast Pacific. This analysis is focused on perturbations from monthly climatology of the following attributes of the GOA: sea surface temperature, sea surface height, mixed layer depth, sea surface salinity, latent heat flux, sensible heat flux, shortwave irradiance, net long wave irradiance, currents at 40 m depth, and wind stress. We identified two multivariate modes, both substantially correlated with the Pacific Decadal Oscillation (PDO) and Multivariate El Nino (MEI) indices on interannual timescales, which together account for ~30% of the total normalized variance of the perturbation time series. These two modes indicate the following covarying events during periods of positive PDO/MEI: (1) anomalously warm, wet and windy conditions (typically in winter), with elevated coastal SSH, followed 2-5 months later by (2) reduced cloud cover, with emerging shelf-break eddies. Similar modes are found when the analysis is performed separately on the eastern and western GOA; in general, modal amplitudes appear stronger in the western GOA.
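
    Multivariate modes of this kind are commonly extracted with a combined EOF (principal component) analysis: standardize the monthly anomalies of each field, stack them into one matrix, and take the leading singular vectors. The sketch below does exactly that on random placeholder arrays; the real GOA model output, variable list, and grid are of course much larger.

```python
import numpy as np

rng = np.random.default_rng(2)
n_months, n_points = 240, 500      # placeholder time and space dimensions

# Monthly anomaly fields for three hypothetical variables (e.g., SST, SSH, wind stress)
fields = [rng.standard_normal((n_months, n_points)) for _ in range(3)]

# Standardize each variable so that no single field dominates, then stack along space
standardized = [(f - f.mean(axis=0)) / f.std(axis=0) for f in fields]
combined = np.hstack(standardized)

# Combined EOFs via SVD: rows of Vt are spatial patterns, U*S are mode time series
U, S, Vt = np.linalg.svd(combined - combined.mean(axis=0), full_matrices=False)
explained = S**2 / np.sum(S**2)
pc_time_series = U[:, :2] * S[:2]   # amplitudes of the two leading multivariate modes

print("Fraction of variance explained by modes 1-2:", np.round(explained[:2], 3))
```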

  1. Longitudinal examination of the exercise and self-esteem model in middle-aged women.

    Science.gov (United States)

    Elavsky, Steriani

    2010-12-01

    This 2-year prospective study examined the exercise and self-esteem model in middle-aged women (N = 143) previously enrolled in a randomized controlled exercise trial. Across the 2-year period, increases in physical activity (PA) and self-efficacy and reductions in body mass index (BMI) were associated with improved subdomain self-perceptions relative to physical condition, and reductions in BMI were associated with improved subdomain self-perceptions relative to physical condition and body attractiveness. The effects of PA, self-efficacy, and BMI on changes in physical self-worth and global self-esteem were mediated by changes in self-perceptions relative to physical condition and body attractiveness. The results of this longitudinal analysis support the hierarchical and multidimensional structure of self-esteem and indicate that middle-aged women can enhance how they perceive their condition and body attractiveness by continued participation in physical activity, increasing their self-efficacy, and maintaining healthy BMI levels.

  2. Examining individual and school characteristics associated with child obesity using a multilevel growth model.

    Science.gov (United States)

    Miyazaki, Yasuo; Stack, Maria

    2015-03-01

    The childhood obesity epidemic continues to be a serious concern in the U.S., disproportionately affecting low-socioeconomic and minority groups. Because many interventions are based in schools, both individual and school factors contributing to obesity were examined in this study. Employing data from the Early Childhood Longitudinal Study, Kindergarten Class of 1998-1999 (ECLS-K), a three-level hierarchical linear model was used to estimate children's body mass index (BMI) growth trajectories within their school contexts. Results indicated an inverse relationship between BMI and socioeconomic status (SES), except for black males. Additionally, results showed that low school SES and rural locality of the school were school-level risk factors for obesity. Lastly, a major portion of the between-schools variance was explained by aggregated student characteristics, indicating that students were more likely to attend schools with peers of similar BMI who had similar SES and race/ethnicity, supporting a school-level compositional effect associated with obesity.
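
    A growth model of this type can be approximated with a linear mixed model in which BMI trajectories vary by student and school-level covariates enter as fixed effects. The two-level specification below (students as groups, random slopes for age) is a simplified stand-in for the paper's three-level ECLS-K model, with simulated data and invented column names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(11)
n_students, n_waves = 200, 4

# Simulated longitudinal records: four BMI measurements per student (all columns hypothetical)
intercepts = rng.normal(0, 1.0, n_students)   # student-level random intercepts
slopes = rng.normal(0, 0.1, n_students)       # student-level random age slopes
df = pd.DataFrame({
    "student": np.repeat(np.arange(n_students), n_waves),
    "age": np.tile(np.arange(n_waves), n_students) + 5.0,
    "ses": np.repeat(rng.standard_normal(n_students), n_waves),
    "school_ses": np.repeat(rng.standard_normal(n_students), n_waves),
})
df["bmi"] = (16 + 0.6 * (df["age"] - 5) - 0.3 * df["ses"] - 0.5 * df["school_ses"]
             + np.repeat(intercepts, n_waves)
             + np.repeat(slopes, n_waves) * (df["age"] - 5)
             + rng.normal(0, 0.5, len(df)))

# Two-level growth model: random intercept and age slope per student,
# individual SES and school-mean SES as fixed effects.
fit = smf.mixedlm("bmi ~ age * ses + school_ses", df,
                  groups="student", re_formula="~age").fit()
print(fit.params.round(3))
```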

  3. Systems genetics of obesity in an F2 pig model by genome-wide association, genetic network and pathway analyses

    Directory of Open Access Journals (Sweden)

    Lisette J. A. Kogelman

    2014-07-01

    Obesity is a complex condition with world-wide exponentially rising prevalence rates, linked with severe diseases like Type 2 Diabetes. Economic and welfare consequences have led to a raised interest in a better understanding of the biological and genetic background. To date, whole genome investigations focusing on single genetic variants have achieved limited success, and the importance of including genetic interactions is becoming evident. Here, the aim was to perform an integrative genomic analysis in an F2 pig resource population that was constructed with an aim to maximize genetic variation of obesity-related phenotypes and genotyped using the 60K SNP chip. Firstly, Genome Wide Association (GWA) analysis was performed on the Obesity Index to locate candidate genomic regions that were further validated using combined Linkage Disequilibrium Linkage Analysis and investigated by evaluation of haplotype blocks. We built Weighted Interaction SNP Hub (WISH) and differentially wired (DW) networks using genotypic correlations amongst obesity-associated SNPs resulting from GWA analysis. GWA results and SNP modules detected by WISH and DW analyses were further investigated by functional enrichment analyses. The functional annotation of SNPs revealed several genes associated with obesity, e.g. NPC2 and OR4D10. Moreover, gene enrichment analyses identified several significantly associated pathways, over and above the GWA study results, that may influence obesity and obesity related diseases, e.g. metabolic processes. WISH networks based on genotypic correlations allowed further identification of various gene ontology terms and pathways related to obesity and related traits, which were not identified by the GWA study. In conclusion, this is the first study to develop a (genetic) obesity index and employ systems genetics in a porcine model to provide important insights into the complex genetic architecture associated with obesity and many biological pathways

  4. Systems genetics of obesity in an F2 pig model by genome-wide association, genetic network, and pathway analyses.

    Science.gov (United States)

    Kogelman, Lisette J A; Pant, Sameer D; Fredholm, Merete; Kadarmideen, Haja N

    2014-01-01

    Obesity is a complex condition with world-wide exponentially rising prevalence rates, linked with severe diseases like Type 2 Diabetes. Economic and welfare consequences have led to a raised interest in a better understanding of the biological and genetic background. To date, whole genome investigations focusing on single genetic variants have achieved limited success, and the importance of including genetic interactions is becoming evident. Here, the aim was to perform an integrative genomic analysis in an F2 pig resource population that was constructed with an aim to maximize genetic variation of obesity-related phenotypes and genotyped using the 60K SNP chip. Firstly, Genome Wide Association (GWA) analysis was performed on the Obesity Index to locate candidate genomic regions that were further validated using combined Linkage Disequilibrium Linkage Analysis and investigated by evaluation of haplotype blocks. We built Weighted Interaction SNP Hub (WISH) and differentially wired (DW) networks using genotypic correlations amongst obesity-associated SNPs resulting from GWA analysis. GWA results and SNP modules detected by WISH and DW analyses were further investigated by functional enrichment analyses. The functional annotation of SNPs revealed several genes associated with obesity, e.g., NPC2 and OR4D10. Moreover, gene enrichment analyses identified several significantly associated pathways, over and above the GWA study results, that may influence obesity and obesity related diseases, e.g., metabolic processes. WISH networks based on genotypic correlations allowed further identification of various gene ontology terms and pathways related to obesity and related traits, which were not identified by the GWA study. In conclusion, this is the first study to develop a (genetic) obesity index and employ systems genetics in a porcine model to provide important insights into the complex genetic architecture associated with obesity and many biological pathways that underlie
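
    The network step described in both records can be illustrated with a much-simplified sketch: build a SNP-SNP network from genotypic correlations and extract connected modules. This is not the published WISH/DW implementation; the toy block-structured genotypes, the correlation threshold, and the use of connected components in place of weighted module detection are all assumptions for illustration.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

# Toy genotype matrix: 500 animals x 60 SNPs built from 6 correlated blocks,
# standing in for obesity-associated SNPs selected from the GWA step.
base = rng.integers(0, 3, size=(500, 6)).astype(float)
genotypes = np.repeat(base, 10, axis=1) + rng.normal(scale=0.5, size=(500, 60))

# Genotypic correlation matrix between SNPs.
corr = np.corrcoef(genotypes, rowvar=False)

# Keep only strong pairwise correlations as network edges (threshold assumed).
threshold = 0.25
adj = (np.abs(corr) >= threshold).astype(int)
np.fill_diagonal(adj, 0)

# Build the SNP network and report connected components as crude "modules".
G = nx.from_numpy_array(adj)
modules = [sorted(c) for c in nx.connected_components(G) if len(c) > 1]
print(f"{G.number_of_edges()} edges, {len(modules)} multi-SNP modules")
```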

  5. Analyses of the Classical Model for Porous Materials

    Institute of Scientific and Technical Information of China (English)

    刘培生; 夏凤金; 罗军

    2009-01-01

    New developments are continually being made in the preparation, application and property study of porous materials. Among theories of the structure and properties of porous materials, the famous classical Gibson-Ashby model has been widely endorsed in the field worldwide and remains the theoretical foundation applied by numerous investigators in related research. In the present paper, some supplementary considerations and analyses of the shortcomings of this model are presented, and it is found that some of these shortcomings can even undermine the completeness the model originally appeared to have. Based on a summary of these problems, another model is introduced which can overcome the shortcomings of the Gibson-Ashby model.
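
    For readers unfamiliar with the model under discussion, the core Gibson-Ashby scaling relations for open-cell foams are reproduced below from the standard foam-mechanics literature (they are not quoted in the abstract itself). Here the starred quantities are the foam modulus, plastic collapse strength and density, the subscript s denotes the solid cell-wall material, and C1 of about 1 and C2 of about 0.3 are empirical constants.

```latex
\frac{E^{*}}{E_{s}} \approx C_{1} \left( \frac{\rho^{*}}{\rho_{s}} \right)^{2},
\qquad
\frac{\sigma^{*}_{pl}}{\sigma_{ys}} \approx C_{2} \left( \frac{\rho^{*}}{\rho_{s}} \right)^{3/2}
```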

  6. CREB3 subfamily transcription factors are not created equal: Recent insights from global analyses and animal models

    Directory of Open Access Journals (Sweden)

    Chan Chi-Ping

    2011-02-01

    Full Text Available The CREB3 subfamily of membrane-bound bZIP transcription factors has five members in mammals known as CREB3 and CREB3L1-L4. One current model suggests that CREB3 subfamily transcription factors are similar to ATF6 in regulated intramembrane proteolysis and transcriptional activation. Particularly, they were all thought to be proteolytically activated in response to endoplasmic reticulum (ER) stress to stimulate genes that are involved in the unfolded protein response (UPR). Although the physiological inducers of their proteolytic activation remain to be identified, recent findings from microarray analyses, RNAi screens and gene knockouts not only demonstrated their critical roles in regulating development, metabolism, secretion, survival and tumorigenesis, but also revealed cell type-specific patterns in the activation of their target genes. Members of the CREB3 subfamily show differential activity despite their structural similarity. The spectrum of their biological function expands beyond ER stress and UPR. Further analyses are required to elucidate the mechanism of their proteolytic activation and the molecular basis of their target recognition.

  7. Towards a Better Experience: Examining Student Needs in the Online Classroom through Maslow's Hierarchy of Needs Model

    National Research Council Canada - National Science Library

    Karen L. Milheim

    2012-01-01

    .... Using Maslow's hierarchy of needs model as a conceptual framework, the paper examines how student needs can be addressed at various levels in online courses, from basic needs to the ultimate goal of self-actualization...

  8. COUPLING EFFECTS FOR CELL-TRUSS SPAR PLATFORM: COMPARISON OF FREQUENCY- AND TIME-DOMAIN ANALYSES WITH MODEL TESTS

    Institute of Scientific and Technical Information of China (English)

    ZHANG Fan; YANG Jian-min; LI Run-pei; CHEN Gang

    2008-01-01

    For floating structures in deep water, the coupling effects of the mooring lines and risers on the motion responses of the structures become increasingly significant. Viscous damping, inertial mass, current loading, restoring forces, etc. from these slender structures should be carefully handled to accurately predict the motion responses and line tensions. For spar platforms, coupling the mooring system and risers with the vessel motion typically results in a reduction in extreme motion responses. This article presents numerical simulations and model tests on a new cell-truss spar platform carried out in the State Key Laboratory of Ocean Engineering at Shanghai Jiaotong University. Results from three calculation methods, namely frequency-domain analysis and time-domain semi-coupled and fully-coupled analyses, were compared with the experimental data to assess the applicability of the different approaches. Proposals for improving the numerical calculations and the experimental technique are also put forward.

  9. Using niche-modelling and species-specific cost analyses to determine a multispecies corridor in a fragmented landscape

    Science.gov (United States)

    Zurano, Juan Pablo; Selleski, Nicole; Schneider, Rosio G.

    2017-01-01

    types independent of the degree of legal protection. These data, used with multifocal GIS analyses, balance the varying degrees of overlap and the unique properties among them, allowing comprehensive conservation strategies to be developed relatively rapidly. Our comprehensive approach serves as a model for other regions faced with habitat loss and a lack of data. The five carnivores focused on in our study have wide ranges, so the results from this study can be expanded and combined with those of surrounding countries, with analyses at the species or community level. PMID:28841692

  10. From Global Climate Model Projections to Local Impacts Assessments: Analyses in Support of Planning for Climate Change

    Science.gov (United States)

    Snover, A. K.; Littell, J. S.; Mantua, N. J.; Salathe, E. P.; Hamlet, A. F.; McGuire Elsner, M.; Tohver, I.; Lee, S.

    2010-12-01

    Assessing and planning for the impacts of climate change require regionally-specific information. Information is required not only about projected changes in climate but also the resultant changes in natural and human systems at the temporal and spatial scales of management and decision making. Therefore, climate impacts assessment typically results in a series of analyses, in which relatively coarse-resolution global climate model projections of changes in regional climate are downscaled to provide appropriate input to local impacts models. This talk will describe recent examples in which coarse-resolution (~150 to 300km) GCM output was “translated” into information requested by decision makers at relatively small (watershed) and large (multi-state) scales using regional climate modeling, statistical downscaling, hydrologic modeling, and sector-specific impacts modeling. Projected changes in local air temperature, precipitation, streamflow, and stream temperature were developed to support Seattle City Light’s assessment of climate change impacts on hydroelectric operations, future electricity load, and resident fish populations. A state-wide assessment of climate impacts on eight sectors (agriculture, coasts, energy, forests, human health, hydrology and water resources, salmon, and urban stormwater infrastructure) was developed for Washington State to aid adaptation planning. Hydro-climate change scenarios for approximately 300 streamflow locations in the Columbia River basin and selected coastal drainages west of the Cascades were developed in partnership with major water management agencies in the Pacific Northwest to allow planners to consider how hydrologic changes may affect management objectives. Treatment of uncertainty in these assessments included: using “bracketing” scenarios to describe a range of impacts, using ensemble averages to characterize the central estimate of future conditions (given an emissions scenario), and explicitly assessing

  11. Civil engineering: EDF needs for concrete modelling; Genie civile: analyse des besoins EDF en modelisation du comportement des betons

    Energy Technology Data Exchange (ETDEWEB)

    Didry, O.; Gerard, B.; Bui, D. [Electricite de France (EDF), Direction des Etudes et Recherches, 92 - Clamart (France)

    1997-12-31

    Concrete structures encountered at EDF, like all civil engineering structures, age. In order to adapt the maintenance conditions of these structures, particularly to extend their service life, and also to prepare the construction of future structures, tools for predicting the behaviour of these structures in their environment should be available. For EDF the technical risks are high, and consequently appropriately targeted R and D actions are required. In this context the Direction des Etudes et Recherches (DER) has developed a methodology for analysing the modelling of concrete structure behaviour. This approach has several aims: - making a distinction between the problems that can be addressed with existing models and those which require R and D; - displaying disciplinary links between different problems encountered on EDF structures (non-linear mechanics, chemical-hydraulic-mechanical coupling, etc.); - listing the existing tools and positioning the DER 'Aster' finite element code among them. This document is a state-of-the-art review of scientific knowledge, intended to shed light on the fields in which, on the one hand, structure operators have strong requirements and, on the other, the present tools do not allow these requirements to be satisfactorily met. The analysis covers 12 scientific subjects: 1) Hydration of concrete at early ages: exothermicity, hardening, autogenous shrinkage; 2) Drying and drying shrinkage; 3) Alkali-silica reaction and formation of expansive phases; 4) Long-term deterioration by leaching; 5) Ionic diffusion and associated attacks: the case of chlorides; 6) Permeability / tightness of concrete; 7) Concretes - nonlinear behaviour and cracking (I): contribution of the plasticity models; 8) Concretes - nonlinear behaviour and cracking (II): contribution of the damage models; 9) Concretes - nonlinear behaviour and cracking (III): contribution of the probabilistic analysis models; 10) Delayed behaviour of

  12. World Health Organization Quality-of-Life Scale (WHOQOL-BREF): Analyses of Their Item Response Theory Properties Based on the Graded Responses Model

    Directory of Open Access Journals (Sweden)

    Shahrum Vahedi

    2010-11-01

    Full Text Available "nObjective: This study has used Item Response Theory (IRT to examine the psychometric properties of Health-Related Quality-of-Life. "nMethod: This investigation is a descriptive- analytic study. Subjects were 370 undergraduate students of nursing and midwifery who were selected from Tabriz University of Medical Sciences. All participants were asked to complete the Farsi version of WHOQOL-BREF. Samejima's graded response model was used for the analyses. "nResults: The results revealed that the discrimination parameters for all items in the four scales were low to moderate. The threshold parameters showed adequate representation of the relevant traits from low to the mean trait level. With the exception of 15, 18, 24 and 26 items, all other items showed low item information function values, and thus relatively high reliability from low trait levels to moderate levels. "nConclusions: The results of this study indicate that although there was general support for the psychometric properties of the WHOQOL-BREF from an IRT perspective, this measure can be further improved. IRT analyses provided useful measurement information and demonstrated to be a better methodological approach for enhancing our knowledge of the functionality of WHOQOL-BREF.

  13. World Health Organization Quality-of-Life Scale (WHOQOL-BREF): Analyses of Their Item Response Theory Properties Based on the Graded Responses Model

    Science.gov (United States)

    2010-01-01

    Objective This study has used Item Response Theory (IRT) to examine the psychometric properties of Health-Related Quality-of-Life. Method This investigation is a descriptive- analytic study. Subjects were 370 undergraduate students of nursing and midwifery who were selected from Tabriz University of Medical Sciences. All participants were asked to complete the Farsi version of WHOQOL-BREF. Samejima's graded response model was used for the analyses. Results The results revealed that the discrimination parameters for all items in the four scales were low to moderate. The threshold parameters showed adequate representation of the relevant traits from low to the mean trait level. With the exception of 15, 18, 24 and 26 items, all other items showed low item information function values, and thus relatively high reliability from low trait levels to moderate levels. Conclusions The results of this study indicate that although there was general support for the psychometric properties of the WHOQOL-BREF from an IRT perspective, this measure can be further improved. IRT analyses provided useful measurement information and demonstrated to be a better methodological approach for enhancing our knowledge of the functionality of WHOQOL-BREF. PMID:22952508

  14. The News Model of Asset Price Determination - An Empirical Examination of the Danish Football Club Bröndby IF

    DEFF Research Database (Denmark)

    Stadtmann, Georg; Moritzen; Jörgensen

    2012-01-01

    According to the news model of asset price determination, only the unexpected component of new information should drive the stock price. We use the Danish publicly listed football club Brøndby IF to analyse how the match outcome impacts the stock price. To disentangle gross news from net news, betting odds information is used to control for the expected match outcome.

  16. Evaluation of breast self-examination program using Health Belief Model in female students

    Directory of Open Access Journals (Sweden)

    Mitra Moodi

    2011-01-01

    Full Text Available Background: Breast cancer has been considered a major health problem in females because of its high incidence in recent years. Due to the role of breast self-examination (BSE) in early diagnosis and in reducing the morbidity and mortality of breast cancer, promoting students' knowledge, capabilities and attitudes is required in this regard. This study was conducted to evaluate BSE education in female university students using the Health Belief Model. Methods: In this semi-experimental study, 243 female students were selected using multi-stage randomized sampling in 2008. The data were collected with a validated and reliable questionnaire (43 questions) before the intervention and one week after the intervention. The intervention program consisted of one educational session lasting 120 minutes, delivered by lecturing and showing a film based on HBM constructs. The obtained data were analyzed with SPSS (version 11.5) using paired t-tests and ANOVA at a significance level of α = 0.05. Results: 243 female students aged 20.6 ± 2.8 years were studied. Implementing the educational program resulted in increased knowledge and HBM (perceived susceptibility, severity, benefit and barrier) scores in the students (p ≤ 0.01). Significant increases were also observed in knowledge and perceived benefit after the educational program (p ≤ 0.05). The ANOVA test showed a significant difference in perceived benefit scores among students of different universities (p = 0.05). Conclusions: Given the positive effects of education on increasing university students' knowledge of and attitudes toward BSE, the efficacy of the HBM in BSE education for female students was confirmed.

  17. Understanding Knowledge Sharing Behavior: An Examination of the Extended Model of Theory of Planned Behavior

    Directory of Open Access Journals (Sweden)

    Sabrina O. Sihombing

    2011-03-01

    Full Text Available Knowledge is recognized as a valuable asset for many organizations. Thus, knowledge sharing is one of the important activities in many organizations, including universities. Knowledge sharing is defined as the activity of transferring or disseminating organizationally relevant information, ideas, suggestions, and expertise with one another. This research applied Christian values as a moderating variable in the framework of the theory of planned behavior. The aims of this research were to assess the applicability of the theory of planned behavior for predicting knowledge sharing and to examine the effect of Christian values on the relationship between attitude and intention to share knowledge. A self-administered questionnaire was used to collect the data for this study. The data were then analyzed using structural equation modeling.
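
    The moderation question in this record (does a third variable change the attitude-intention link?) is commonly probed with an interaction term. The sketch below is a hedged, simplified stand-in that uses ordinary regression rather than the structural equation model the authors used; the column names and placeholder file name are assumptions.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data with standardized scale scores per respondent:
# attitude, subjective_norm, pbc (perceived behavioural control),
# christian_values, intention.
df = pd.read_csv("knowledge_sharing_survey.csv")  # placeholder file name

# Theory-of-planned-behaviour predictors plus an attitude x values interaction;
# a significant interaction coefficient indicates moderation.
model = smf.ols(
    "intention ~ attitude * christian_values + subjective_norm + pbc",
    data=df,
).fit()
print(model.summary())
```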

  18. Examining the interaction of apo E and neurotoxicity on a murine model of ALS-PDC.

    Science.gov (United States)

    Wilson, J M B; Petrik, M S; Moghadasian, M H; Shaw, C A

    2005-02-01

    Epidemiological studies have shown a positive relationship between cycad flour consumption and the development of the neurodegenerative disorder amyotrophic lateral sclerosis-parkinsonism-dementia complex (ALS-PDC). Apolipoprotein E (apo E) allele variations have been associated with genetic susceptibility in neurodegenerative diseases, including ALS-PDC. We have studied cycad toxicity in a mouse model of ALS-PDC with a particular interest in its impact on the central nervous system (CNS) in both apo E knock-out (KO) mice and their wild-type (WT) counterparts. Behavioral motor tests, motor neuron counts, and immunohistochemical staining in brain and spinal cord, as well as routine histological examinations of internal organs, were performed to evaluate cycad toxicity. Plasma cholesterol levels were also measured before and during the study. Cycad treatment was associated with higher levels of plasma cholesterol only in apo E KO mice; increased levels of plasma cholesterol did not result in increased atherogenesis. Cycad-fed wild-type mice developed progressive behavioral deficits including ALS-PDC-like pathological outcomes, while cycad-fed apo E KO mice were not significantly affected. Cycad-fed wild-type mice had shorter gait length measurements along with higher active caspase-3 levels in the striatum, substantia nigra, primary motor cortex, and spinal cord as compared with corresponding controls. These changes were associated with decreased labeling for glutamate transporter 1B and tyrosine hydroxylase activity levels. No evidence of cycad toxicity was observed in the internal organs of either wild-type or apo E KO mice. Our data demonstrate that apo E KO mice are less susceptible to cycad toxicity, suggesting a role for apo E as a possible genetic susceptibility factor for some forms of toxin-induced neurodegeneration.

  19. Evaluation of breast self-examination program using Health Belief Model in female students.

    Science.gov (United States)

    Moodi, Mitra; Mood, Mahdi Baladi; Sharifirad, Gholam Reza; Shahnazi, Hossein; Sharifzadeh, Gholamreza

    2011-03-01

    Breast cancer has been considered a major health problem in females because of its high incidence in recent years. Due to the role of breast self-examination (BSE) in early diagnosis and in reducing the morbidity and mortality of breast cancer, promoting students' knowledge, capabilities and attitudes is required in this regard. This study was conducted to evaluate BSE education in female university students using the Health Belief Model. In this semi-experimental study, 243 female students were selected using multi-stage randomized sampling in 2008. The data were collected with a validated and reliable questionnaire (43 questions) before the intervention and one week after the intervention. The intervention program consisted of one educational session lasting 120 minutes, delivered by lecturing and showing a film based on HBM constructs. The obtained data were analyzed with SPSS (version 11.5) using paired t-tests and ANOVA at a significance level of α = 0.05. 243 female students aged 20.6 ± 2.8 years were studied. Implementing the educational program resulted in increased knowledge and HBM (perceived susceptibility, severity, benefit and barrier) scores in the students (p ≤ 0.01). Significant increases were also observed in knowledge and perceived benefit after the educational program (p ≤ 0.05). The ANOVA test showed a significant difference in perceived benefit scores among students of different universities (p = 0.05). Given the positive effects of education on increasing university students' knowledge of and attitudes toward BSE, the efficacy of the HBM in BSE education for female students was confirmed.

  20. Best-fit model of exploratory and confirmatory factor analysis of the 2010 Medical Council of Canada Qualifying Examination Part I clinical decision-making cases

    Directory of Open Access Journals (Sweden)

    André F. De Champlain

    2015-04-01

    Full Text Available Purpose: This study aims to assess the fit of a number of exploratory and confirmatory factor analysis models to the 2010 Medical Council of Canada Qualifying Examination Part I (MCCQE1) clinical decision-making (CDM) cases. The outcomes of this study have important implications for a range of domains, including scoring and test development. Methods: The examinees included all first-time Canadian medical graduates and international medical graduates who took the MCCQE1 in spring or fall 2010. The fit of one- to five-factor exploratory models was assessed for the item response matrix of the 2010 CDM cases. Five confirmatory factor analytic models were also examined with the same CDM response matrix. The structural equation modeling software program Mplus was used for all analyses. Results: Out of the five exploratory factor analytic models that were evaluated, a three-factor model provided the best fit. Factor 1 loaded on three medicine cases, two obstetrics and gynecology cases, and two orthopedic surgery cases. Factor 2 corresponded to pediatrics, and the third factor loaded on psychiatry cases. Among the five confirmatory factor analysis models examined in this study, three- and four-factor lifespan period models and the five-factor discipline models provided the best fit. Conclusion: The results suggest that knowledge of broad disciplinary domains best accounts for performance on CDM cases. In test development, particular effort should be placed on developing CDM cases according to broad discipline and patient age domains; CDM testlets should be assembled largely using the criteria of discipline and age.
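
    As a rough illustration of the exploratory step reported above (the authors used Mplus; the sketch substitutes scikit-learn's linear factor analysis and a synthetic score matrix, so it ignores the categorical nature of CDM scores and is purely illustrative):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(42)

# Hypothetical examinee-by-case score matrix (e.g., 2000 examinees, 30 CDM cases).
scores = rng.normal(size=(2000, 30))

# Compare one- to five-factor exploratory solutions by average log-likelihood;
# higher is better (a penalized criterion such as BIC could also be used).
for k in range(1, 6):
    fa = FactorAnalysis(n_components=k, random_state=0).fit(scores)
    print(f"{k} factor(s): mean log-likelihood = {fa.score(scores):.3f}")
```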

  1. Spatially quantitative models for vulnerability analyses and resilience measures in flood risk management: Case study Rafina, Greece

    Science.gov (United States)

    Karagiorgos, Konstantinos; Chiari, Michael; Hübl, Johannes; Maris, Fotis; Thaler, Thomas; Fuchs, Sven

    2013-04-01

    We will address spatially quantitative models for vulnerability analyses in flood risk management in the catchment of Rafina, 25 km east of Athens, Greece; and potential measures to reduce damage costs. The evaluation of flood damage losses is relatively advanced. Nevertheless, major problems arise since there are no market prices for the evaluation process available. Moreover, there is a particular gap in quantifying the damages and the expenditures necessary for the implementation of mitigation measures with respect to flash floods. The key issue is to develop prototypes for assessing flood losses and the impact of mitigation measures on flood resilience by adjusting a vulnerability model, and to further develop the method in a Mediterranean region influenced by both mountain and coastal characteristics of land development. The objective of this study is to create a spatial and temporal analysis of the vulnerability factors based on a method combining spatially explicit loss data, data on the value of exposed elements at risk, and data on flood intensities. In this contribution, a methodology for the development of a flood damage assessment as a function of the process intensity and the degree of loss is presented. It is shown that (1) such relationships for defined object categories are dependent on site-specific and process-specific characteristics, but there is a correlation between process types that have similar characteristics; (2) existing semi-quantitative approaches to vulnerability assessment for elements at risk can be improved based on the proposed quantitative method; and (3) the concept of risk can be enhanced with respect to a standardised and comprehensive implementation by applying the vulnerability functions to be developed within the proposed research. Therefore, loss data were collected from responsible administrative bodies and analysed on an object level. The model used is based on a basin scale approach as well as data on elements at risk exposed
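
    The core quantitative object in this record is a vulnerability function relating flood intensity to degree of loss. A hedged sketch of fitting such a curve is shown below; the bounded exponential functional form, the toy data, and the use of scipy's curve_fit are illustrative assumptions, not the authors' method.

```python
import numpy as np
from scipy.optimize import curve_fit

# Toy observations: flood intensity (e.g., water depth in m) vs. degree of loss
# on a 0-1 scale for one object category (values assumed for illustration).
intensity = np.array([0.2, 0.4, 0.6, 0.9, 1.2, 1.6, 2.0, 2.5])
loss = np.array([0.02, 0.05, 0.12, 0.22, 0.35, 0.52, 0.63, 0.75])

def vulnerability(x, k):
    """Bounded curve: degree of loss approaches 1 as intensity grows."""
    return 1.0 - np.exp(-k * x)

params, cov = curve_fit(vulnerability, intensity, loss, p0=[0.5])
k_hat = params[0]
print(f"fitted k = {k_hat:.3f}, predicted loss at 1.0 m = {vulnerability(1.0, k_hat):.2f}")
```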

  2. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component over the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.
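
    The separation described above (an explicit grid-scale forcing plus a subgrid-scale component) can be illustrated with a minimal coarse-graining sketch on a toy 2-D field; the block size and the synthetic field are assumptions, and real forcing derivation involves full dynamical budgets rather than simple block averaging.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy fine-resolution field on a 128 x 128 grid (e.g., 2 km spacing).
fine = rng.normal(size=(128, 128))

def coarse_grain(field, block):
    """Block-average a 2-D field onto a coarser grid of block x block cells."""
    ny, nx = field.shape
    assert ny % block == 0 and nx % block == 0
    return field.reshape(ny // block, block, nx // block, block).mean(axis=(1, 3))

# "Grid-scale" component at 64x coarser resolution, broadcast back to the fine
# grid; the remainder is the subgrid-scale part.
block = 64
coarse = coarse_grain(fine, block)
grid_scale = np.kron(coarse, np.ones((block, block)))
subgrid = fine - grid_scale

print("variance: total", fine.var().round(3),
      "grid-scale", grid_scale.var().round(3),
      "subgrid", subgrid.var().round(3))
```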

  3. Development of Web-Based Examination System Using Open Source Programming Model

    Science.gov (United States)

    Abass, Olalere A.; Olajide, Samuel A.; Samuel, Babafemi O.

    2017-01-01

    The traditional method of assessment (examination) is often characterized by leakage of examination questions and by human error during the marking of scripts and the recording of scores. Technological advancement in the field of computer science has created a need for computer usage in nearly all areas of human life and endeavor, including the education sector…

  4. Statistical correlations and risk analyses techniques for a diving dual phase bubble model and data bank using massively parallel supercomputers.

    Science.gov (United States)

    Wienke, B R; O'Leary, T R

    2008-05-01

    Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), dynamical principles, and correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, helitrox no-decompression time limits, repetitive dive tables, and selected mixed gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed gas risks, USS Perry deep rebreather (RB) exploration dive, world record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed gas diving, both in recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods, and list reports from field testing for (real) mixed gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance reduction technique and additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first-time correlation effort linking a dynamical bubble model with deep stop data. Supercomputing resources are requisite to connect model and data in application.
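
    The fitting step mentioned in this record (a modified Levenberg-Marquardt routine with an L2 error norm) can be illustrated generically. The sketch below fits a toy exponential risk function to hypothetical dive-outcome data with scipy's Levenberg-Marquardt driver; it is not the LANL RGBM likelihood analysis, and all data and parameters are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(7)

# Hypothetical data: an exposure index per dive profile and an observed
# incidence rate (values are synthetic, for illustration only).
exposure = np.linspace(0.5, 5.0, 20)
observed = 1.0 - np.exp(-0.08 * exposure) + rng.normal(scale=0.005, size=exposure.size)

def residuals(params, x, y):
    """L2 residuals of a one-parameter exponential risk model r(x) = 1 - exp(-k x)."""
    (k,) = params
    return (1.0 - np.exp(-k * x)) - y

fit = least_squares(residuals, x0=[0.05], args=(exposure, observed), method="lm")
print(f"fitted k = {fit.x[0]:.4f}, cost = {fit.cost:.6f}")
```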

  5. FAMULATUR PLUS - A successful model for improving students' physical examination skills?

    Science.gov (United States)

    Jerg, Achim; Öchsner, Wolfgang; Traue, Harald; Jerg-Bretzke, Lucia

    2017-01-01

    Introduction/Project description: Several studies have revealed insufficient physical examination skills among medical students, both with regard to the completeness of the physical examination and the accuracy of the techniques used. FAMULATUR PLUS was developed in response to these findings. As part of this practice-oriented instructional intervention, physical examination skills should be taught through examination seminars and problem-oriented learning approaches. In order to ensure practical relevance, all courses are integrated into a 30-day clinical traineeship in the surgery or internal medicine department of a hospital (FAMULATUR PLUS). Research question: Does participation in the FAMULATUR PLUS project lead to a more optimistic self-assessment of examination skills and/or improved performance of the physical examination? Methodology: A total of 49 medical students participated in the study. The inclusion criteria were as follows: enrollment in the clinical studies element of their degree program at the University of Ulm and completion of the university course in internal medicine examinations. Based on their personal preferences, students were assigned to either the intervention (surgery/internal medicine; n=24) or the control group (internal medicine; n=25). All students completed a self-assessment of their physical examination skills in the form of a questionnaire. However, practical examination skills were only assessed in the students in the intervention group. These students were asked to carry out a general physical examination of the simulation patient, which was recorded and evaluated in a standardized manner. In both instances, data collection was carried out prior to and after the intervention. Results: The scores arising from the student self-assessment in the intervention (IG) and control groups (CG) improved significantly in the pre-post comparison, with average scores improving from 3.83 (±0.72; IG) and 3.54 (±0.37; CG) to 1.92 (±0

  7. Phenotyping chronic pelvic pain based on latent class modeling of physical examination.

    Science.gov (United States)

    Fenton, B W; Grey, S F; Reichenbach, M; McCarroll, M; Von Gruenigen, V

    2013-01-01

    Introduction. Defining clinical phenotypes based on physical examination is required for clarifying heterogeneous disorders such as chronic pelvic pain (CPP). The objective of this study was to determine the number of classes within 4 examinable regions and then establish threshold and optimal exam criteria for the classes discovered. Methods. A total of 476 patients meeting the criteria for CPP were examined using pain pressure threshold (PPT) algometry and standardized numeric scale (NRS) pain ratings at 30 distinct sites over 4 pelvic regions. Exploratory factor analysis, latent profile analysis, and ROC curves were then used to identify classes, optimal examination points, and threshold scores. Results. Latent profile analysis produced two classes for each region: high and low pain groups. The optimal examination sites (and high pain minimum thresholds) were for the abdominal wall region: the pair at the midabdomen (PPT threshold depression of > 2); vulvar vestibule region: 10:00 position (NRS > 2); pelvic floor region: puborectalis (combined NRS > 6); vaginal apex region: uterosacral ligaments (combined NRS > 8). Conclusion. Physical examination scores of patients with CPP are best categorized into two classes: high pain and low pain. Standardization of the physical examination in CPP provides both researchers and general gynecologists with a validated technique.
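
    Latent profile analysis, as used in this record, assigns observations to a small number of classes from continuous indicators. The sketch below is a simplified stand-in using a Gaussian mixture model on synthetic examination scores, with the class count chosen by BIC; it is not the authors' analysis, and the data layout and all numbers are assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)

# Synthetic "exam" matrix: 476 patients x 4 regional pain summary scores,
# drawn from a low-pain and a high-pain group for illustration.
low = rng.normal(loc=1.0, scale=0.8, size=(300, 4))
high = rng.normal(loc=6.0, scale=1.5, size=(176, 4))
scores = np.clip(np.vstack([low, high]), 0, 10)

# Fit 1- to 4-class models and pick the class count with the lowest BIC.
models = {k: GaussianMixture(n_components=k, covariance_type="full",
                             random_state=0).fit(scores)
          for k in range(1, 5)}
best_k = min(models, key=lambda k: models[k].bic(scores))
labels = models[best_k].predict(scores)
print("best number of classes:", best_k)
print("class sizes:", np.bincount(labels))
```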

  8. Phenotyping Chronic Pelvic Pain Based on Latent Class Modeling of Physical Examination

    Directory of Open Access Journals (Sweden)

    B. W. Fenton

    2013-01-01

    Full Text Available Introduction. Defining clinical phenotypes based on physical examination is required for clarifying heterogeneous disorders such as chronic pelvic pain (CPP). The objective of this study was to determine the number of classes within 4 examinable regions and then establish threshold and optimal exam criteria for the classes discovered. Methods. A total of 476 patients meeting the criteria for CPP were examined using pain pressure threshold (PPT) algometry and standardized numeric scale (NRS) pain ratings at 30 distinct sites over 4 pelvic regions. Exploratory factor analysis, latent profile analysis, and ROC curves were then used to identify classes, optimal examination points, and threshold scores. Results. Latent profile analysis produced two classes for each region: high and low pain groups. The optimal examination sites (and high pain minimum thresholds) were for the abdominal wall region: the pair at the midabdomen (PPT threshold depression of > 2); vulvar vestibule region: 10:00 position (NRS > 2); pelvic floor region: puborectalis (combined NRS > 6); vaginal apex region: uterosacral ligaments (combined NRS > 8). Conclusion. Physical examination scores of patients with CPP are best categorized into two classes: high pain and low pain. Standardization of the physical examination in CPP provides both researchers and general gynecologists with a validated technique.

  9. Time Headway Modelling of Motorcycle-Dominated Traffic to Analyse Traffic Safety Performance and Road Link Capacity of Single Carriageways

    Directory of Open Access Journals (Sweden)

    D. M. Priyantha Wedagama

    2017-04-01

    Full Text Available This study aims to develop time headway distribution models to analyse traffic safety performance and road link capacities for motorcycle-dominated traffic in Denpasar, Bali. The three road links selected as the case study are Jl. Hayam Wuruk, Jl. Hang Tuah, and Jl. Padma. Data analysis showed that between 55% and 80% of motorists in Denpasar during morning and evening peak hours paid little attention to keeping a safe distance from the vehicles in front. The study found that Lognormal distribution models best fit the time headway data during morning peak hours, while either the Weibull (3P) or the Pearson III distribution fits best during evening peak hours. Road link capacities for mixed traffic that is predominantly motorcycles are apparently affected by the behaviour of motorists in keeping a safe distance from the vehicles in front. Theoretical road link capacities for Jl. Hayam Wuruk, Jl. Hang Tuah and Jl. Padma are 3,186 vehicles/hour, 3,077 vehicles/hour and 1,935 vehicles/hour respectively.
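
    Fitting and comparing candidate headway distributions of the kind named in this record can be sketched with scipy; the synthetic headway sample and the use of a one-sample Kolmogorov-Smirnov statistic for comparison are illustrative assumptions, not the authors' procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)

# Synthetic time headways in seconds (stand-in for field observations).
headways = rng.lognormal(mean=0.4, sigma=0.6, size=400)

candidates = {
    "lognormal": stats.lognorm,
    "weibull_3p": stats.weibull_min,
    "pearson3": stats.pearson3,
}

# Fit each candidate by maximum likelihood and compare one-sample KS statistics
# (lower means a closer fit to the empirical distribution).
for name, dist in candidates.items():
    params = dist.fit(headways)
    ks = stats.kstest(headways, dist.cdf, args=params)
    print(f"{name:12s} KS statistic = {ks.statistic:.3f}")
```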

  10. Bayesian salamanders: analysing the demography of an underground population of the European plethodontid Speleomantes strinatii with state-space modelling

    Directory of Open Access Journals (Sweden)

    Salvidio Sebastiano

    2010-02-01

    Full Text Available Background: It has been suggested that Plethodontid salamanders are excellent candidates for indicating ecosystem health. However, detailed, long-term data sets of their populations are rare, limiting our understanding of the demographic processes underlying their population fluctuations. Here we present a demographic analysis based on a 1996-2008 data set on an underground population of Speleomantes strinatii (Aellen) in NW Italy. We utilised a Bayesian state-space approach allowing us to parameterise a stage-structured Lefkovitch model. We used all the available population data from annual temporary removal experiments to provide us with the baseline data on the numbers of juveniles, subadults and adult males and females present at any given time. Results: Sampling the posterior chains of the converged state-space model gives us the likelihood distributions of the state-specific demographic rates and the associated uncertainty of these estimates. Analysing the resulting parameterised Lefkovitch matrices shows that the population growth is very close to 1, and that at population equilibrium we expect half of the individuals present to be adults of reproductive age, which is what we also observe in the data. Elasticity analysis shows that adult survival is the key determinant for population growth. Conclusion: This analysis demonstrates how an understanding of population demography can be gained from structured population data even in a case where following marked individuals over their whole lifespan is not practical.
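
    The matrix computations behind the record's conclusions (asymptotic growth rate and elasticities of a stage-structured Lefkovitch model) can be sketched as follows; the three-stage matrix entries are made-up illustrative values, not the posterior estimates from the paper.

```python
import numpy as np

# Illustrative 3-stage Lefkovitch matrix (juveniles, subadults, adults);
# entries are toy values chosen only to demonstrate the calculation.
A = np.array([
    [0.0, 0.0, 0.9],   # adult fecundity into the juvenile stage
    [0.4, 0.3, 0.0],   # juvenile-to-subadult transition; subadult stasis
    [0.0, 0.5, 0.8],   # subadult maturation; adult survival
])

# Asymptotic growth rate is the dominant eigenvalue of A; its right eigenvector
# gives the stable stage distribution.
eigvals, right = np.linalg.eig(A)
i = np.argmax(eigvals.real)
lam = eigvals[i].real
w = np.abs(right[:, i].real)

# The dominant left eigenvector (eigenvector of A.T) gives reproductive values.
eigvals_t, left = np.linalg.eig(A.T)
j = np.argmax(eigvals_t.real)
v = np.abs(left[:, j].real)

# Sensitivities s_ij = v_i w_j / <v, w>; elasticities e_ij = (a_ij / lambda) s_ij.
sens = np.outer(v, w) / (v @ w)
elas = (A / lam) * sens

print("lambda =", round(lam, 3))
print("stable stage distribution:", np.round(w / w.sum(), 3))
print("elasticities:\n", np.round(elas, 3))
```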

  11. Dynamic examination of the femur in a rat model of osteoporosis after injection of CPC containing ABK and PLLA

    Energy Technology Data Exchange (ETDEWEB)

    Tanaka, A.; Kusaka, T.; Sasaki, S.; Takano, I.; Tahara, Y.; Ishii, Y. [Kyorin Univ. School of Medicine, Tokyo (Japan). Dept. of Orthopaedic Surgery

    2001-07-01

    We developed a calcium phosphate cement (CPC) containing antibiotics (ABK) and poly-L-lactic acid (PLLA), and examined its effects on bone strength by injecting the cement into the medullary space of the femur in model rats with osteoporosis. Good bone strength, together with bone formation, was obtained over 6 months after injecting the bone paste into the medullary space of the femur in the model rats. (orig.)

  12. Testicular Self-Examination: A Test of the Health Belief Model and the Theory of Planned Behaviour

    Science.gov (United States)

    McClenahan, Carol; Shevlin, Mark; Adamson, Gary; Bennett, Cara; O'Neill, Brenda

    2007-01-01

    The aim of this study was to test the utility and efficiency of the theory of planned behaviour (TPB) and the health belief model (HBM) in predicting testicular self-examination (TSE) behaviour. A questionnaire was administered to an opportunistic sample of 195 undergraduates aged 18-39 years. Structural equation modelling indicated that, on the…

  13. Integrating Social Activity Theory and Critical Discourse Analysis: A Multilayered Methodological Model for Examining Knowledge Mediation in Mentoring

    Science.gov (United States)

    Becher, Ayelet; Orland-Barak, Lily

    2016-01-01

    This study suggests an integrative qualitative methodological framework for capturing complexity in mentoring activity. Specifically, the model examines how historical developments of a discipline direct mentors' mediation of professional knowledge through the language that they use. The model integrates social activity theory and a framework of…

  15. Examining the impacts of increased corn production on groundwater quality using a coupled modeling system

    Data.gov (United States)

    U.S. Environmental Protection Agency — This dataset was used to create graphics associated with manuscript: Garcia et al., Examining the impacts of increased corn production on groundwater quality using a...

  16. Attitude Scales an Congeneric Tests: A Re-Examination of an Attitude-Behavior Model

    Science.gov (United States)

    Alwin, Duane F.

    1976-01-01

    A structural equation model for attitude-behavior relationships is presented which conceptualizes attitude scales as congeneric measurements. The model represents a re-parameterization of an earlier one. (Author/RC)

  17. Century‐scale variability in global annual runoff examined using a water balance model

    National Research Council Canada - National Science Library

    McCabe, Gregory J; Wolock, David M

    2011-01-01

    A monthly water balance model (WB model) is used with CRUTS2.1 monthly temperature and precipitation data to generate time series of monthly runoff for all land areas of the globe for the period 1905 through 2002...
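
    The monthly water balance model named in this record is, at its core, a temperature- and precipitation-driven bucket model. The sketch below is a heavily simplified illustration under stated assumptions (a crude temperature-based PET proxy, a single soil-moisture store, and made-up parameter values); it is not the McCabe-Wolock WB model.

```python
import numpy as np

def monthly_water_balance(precip, temp, soil_capacity=150.0, pet_coeff=5.0):
    """Toy bucket model returning monthly runoff in the units of precip (e.g. mm).

    precip, temp : arrays of monthly precipitation (mm) and temperature (deg C)
    soil_capacity: maximum soil-moisture storage (mm), assumed value
    pet_coeff    : mm of potential evapotranspiration per degree C above 0, assumed
    """
    storage = soil_capacity / 2.0
    runoff = np.zeros_like(precip, dtype=float)
    for m in range(len(precip)):
        pet = max(temp[m], 0.0) * pet_coeff          # crude PET proxy
        aet = min(pet, precip[m] + storage)          # actual ET limited by supply
        storage = storage + precip[m] - aet
        if storage > soil_capacity:                  # excess storage becomes runoff
            runoff[m] = storage - soil_capacity
            storage = soil_capacity
    return runoff

# Example: twelve months of synthetic forcing.
precip = np.array([120, 100, 90, 70, 50, 30, 20, 25, 40, 80, 110, 130], dtype=float)
temp = np.array([2, 3, 7, 12, 17, 21, 24, 23, 19, 13, 7, 3], dtype=float)
print(np.round(monthly_water_balance(precip, temp), 1))
```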

  18. Formulation, Implementation and Examination of Vertical Coordinate Choices in the Global Navy Coastal Ocean Model (NCOM)

    Science.gov (United States)

    2006-01-01

    Applications have been tested in the North Atlantic, East Asian Seas, Intra-Americas Seas, Gulf of Mexico, and US west coast regions. Analyses of these

  19. ANALYTICAL EXAMINATION OF METHODS FOR MODELING BUSINESS PROCESSES IN E-COMMERCE

    Directory of Open Access Journals (Sweden)

    R. A. Shkil

    2008-03-01

    Full Text Available In this article the theoretical aspects of modeling business processes are considered, the characteristics of business modeling methods are given, their merits and demerits are determined, and a comparative analysis of methods for modeling business processes in e-commerce is performed.

  20. Examining the Effects of Video Modeling and Prompts to Teach Activities of Daily Living Skills.

    Science.gov (United States)

    Aldi, Catarina; Crigler, Alexandra; Kates-McElrath, Kelly; Long, Brian; Smith, Hillary; Rehak, Kim; Wilkinson, Lisa

    2016-12-01

    Video modeling has been shown to be effective in teaching a number of skills to learners diagnosed with autism spectrum disorders (ASD). In this study, we taught two young men diagnosed with ASD three different activities of daily living skills (ADLS) using point-of-view video modeling. Results indicated that both participants met criterion for all ADLS. Participants did not maintain mastery criterion at a 1-month follow-up, but did score above baseline at maintenance with and without video modeling. • Point-of-view video models may be an effective intervention to teach daily living skills. • Video modeling with handheld portable devices (Apple iPod or iPad) can be just as effective as video modeling with stationary viewing devices (television or computer). • The use of handheld portable devices (Apple iPod and iPad) makes video modeling accessible and possible in a wide variety of environments.