WorldWideScience

Sample records for modeling analyses examined

  1. Patterns of medicinal plant use: an examination of the Ecuadorian Shuar medicinal flora using contingency table and binomial analyses.

    Science.gov (United States)

    Bennett, Bradley C; Husby, Chad E

    2008-03-28

Botanical pharmacopoeias are non-random subsets of floras, with some taxonomic groups over- or under-represented. Moerman [Moerman, D.E., 1979. Symbols and selectivity: a statistical analysis of Native American medical ethnobotany, Journal of Ethnopharmacology 1, 111-119] introduced linear regression/residual analysis to examine these patterns. However, regression, the commonly-employed analysis, suffers from several statistical flaws. We use contingency table and binomial analyses to examine patterns of Shuar medicinal plant use (from Amazonian Ecuador). We first analyzed the Shuar data using Moerman's approach, modified to better meet the requirements of linear regression analysis. Second, we assessed the exact randomization contingency table test for goodness of fit. Third, we developed a binomial model to test for non-random selection of plants in individual families. Modified regression models (which accommodated assumptions of linear regression) reduced R² from 0.59 to 0.38, but did not eliminate all problems associated with regression analyses. Contingency table analyses revealed that the entire flora departs from the null model of equal proportions of medicinal plants in all families. In the binomial analysis, only 10 angiosperm families (of 115) differed significantly from the null model. These 10 families are largely responsible for patterns seen at higher taxonomic levels. Contingency table and binomial analyses offer an easy and statistically valid alternative to the regression approach.
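The family-level binomial test described in this record can be sketched as follows (the counts below are illustrative, not the Shuar data): under the null model, the number of medicinal species in a family of n species follows a Binomial(n, p) distribution, with p the flora-wide proportion of medicinal species.

```python
from scipy.stats import binomtest

# Flora-wide proportion of medicinal species (hypothetical counts).
total_species, total_medicinal = 2000, 400
p_null = total_medicinal / total_species

# Hypothetical per-family counts: (species in family, medicinal species).
families = {"Asteraceae": (120, 40), "Poaceae": (90, 6), "Rubiaceae": (60, 13)}

for name, (n, k) in families.items():
    # Exact two-sided binomial test against the flora-wide proportion.
    res = binomtest(k, n, p_null)
    print(f"{name}: observed {k}/{n}, expected {p_null * n:.1f}, p = {res.pvalue:.4f}")
```

In this sketch the over- and under-represented families reach significance while the family close to the flora-wide proportion does not, mirroring the record's finding that only a minority of families depart from the null model.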

  2. Examination of triacylglycerol biosynthetic pathways via de novo transcriptomic and proteomic analyses in an unsequenced microalga.

    Directory of Open Access Journals (Sweden)

    Michael T Guarnieri

Biofuels derived from algal lipids represent an opportunity to dramatically impact the global energy demand for transportation fuels. Systems biology analyses of oleaginous algae could greatly accelerate the commercialization of algal-derived biofuels by elucidating the key components involved in lipid productivity and leading to the initiation of hypothesis-driven strain-improvement strategies. However, higher-level systems biology analyses, such as transcriptomics and proteomics, are highly dependent upon available genomic sequence data, and the lack of these data has hindered the pursuit of such analyses for many oleaginous microalgae. In order to examine the triacylglycerol biosynthetic pathway in the unsequenced oleaginous microalga, Chlorella vulgaris, we have established a strategy with which to bypass the necessity for genomic sequence information by using the transcriptome as a guide. Our results indicate an upregulation of both fatty acid and triacylglycerol biosynthetic machinery under oil-accumulating conditions, and demonstrate the utility of a de novo assembled transcriptome as a search model for proteomic analysis of an unsequenced microalga.

  3. Alternative models of DSM-5 PTSD: Examining diagnostic implications.

    Science.gov (United States)

    Murphy, Siobhan; Hansen, Maj; Elklit, Ask; Yong Chen, Yoke; Raudzah Ghazali, Siti; Shevlin, Mark

    2018-04-01

The factor structure of DSM-5 posttraumatic stress disorder (PTSD) has been extensively debated, with evidence supporting the recently proposed seven-factor Hybrid model. However, despite myriad studies examining PTSD symptom structure, few have assessed the diagnostic implications of these proposed models. This study aimed to generate PTSD prevalence estimates derived from the seven alternative factor models and assess whether pre-established risk factors associated with PTSD (e.g., transportation accidents and sexual victimisation) produce consistent risk estimates. Seven alternative models were estimated within a confirmatory factor analytic framework using the PTSD Checklist for DSM-5 (PCL-5). Data were analysed from a Malaysian adolescent community sample (n = 481) of which 61.7% were female, with a mean age of 17.03 years. The results indicated that all models provided satisfactory model fit, with statistical superiority for the Externalising Behaviours and seven-factor Hybrid models. The PTSD prevalence estimates varied substantially, ranging from 21.8% for the DSM-5 model to 10.0% for the Hybrid model. Estimates of risk associated with PTSD were inconsistent across the alternative models, with substantial variation emerging for sexual victimisation. These findings have important implications for research and practice and highlight that more research attention is needed to examine the diagnostic implications emerging from the alternative models of PTSD.

  4. Examination of constitutive model for evaluating long-term mechanical behavior of buffer. 3

    International Nuclear Information System (INIS)

    Takaji, Kazuhiko; Shigeno, Yoshimasa; Shimogouchi, Takafumi; Shiratake, Toshikazu; Tamura, Hirokuni

    2004-02-01

In the R and D of the high-level radioactive waste repository, it is essential that the Engineered Barrier System (EBS) remains mechanically stable over a long period of time so that each ability required of the EBS is maintained. After closure of the repository, various external forces will act on the buffer in a complex manner over a long period of time. Clarifying the mechanical deformation behavior of the buffer under these external forces is therefore important for carrying out an accurate safety assessment of the EBS. In this report, several sets of parameters are chosen for the two previously selected constitutive models, the Sekiguchi-Ohta model and the Adachi-Oka model, and element tests and mock-up tests are simulated using these parameters. Through the simulations, the applicability of the constitutive models and parameters is examined. Moreover, simulation analyses of the EBS using these parameters were carried out, and its mechanical behavior over a long period of time was evaluated. The analyses estimated the amount of settlement of the overpack, the stress state of the buffer material, the reaction force on the base rock, etc., and indicated that the EBS is mechanically stable over a long period of time. Next, in order to corroborate the analysis results, a literature survey was conducted on the geological age and the mechanical history of a smectite layer. An outline plan was drawn up for the natural analogue verification method, and a preliminary examination was performed on the applicability of 'freezing sampling'. (author)

  5. Comparison of linear measurements and analyses taken from plaster models and three-dimensional images.

    Science.gov (United States)

    Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins

    2014-11-01

Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). With the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic®, Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif). For the digital images, the measurement tools used were those from the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). The majority of the measurements obtained using the caliper and O3d were identical, and both were significantly different from those obtained using the MS. Intra-examiner agreement was lowest when using the MS. The results demonstrated that the accuracy and reproducibility of the tooth measurements and analyses from the plaster models using the caliper and from the digital models using O3d software were equivalent.

  6. A finite-element model to analyse the mechanical behavior of a PWR fuel rod

    International Nuclear Information System (INIS)

    Galeao, A.C.N.R.; Tanajura, C.A.S.

    1988-01-01

A model to analyse the mechanical behavior of a PWR fuel rod is presented. We draw attention to the phenomenon of pellet-pellet and pellet-cladding contact, taking advantage of an elastic model which includes the effects of thermal gradients, cladding internal and external pressures, swelling and initial relocation. The contact problem gives rise to a variational formulation which employs Lagrangian multipliers. An iterative scheme is constructed and the finite element method is applied to obtain the numerical solution. Some results and comments are presented to examine the performance of the model. (author)

  7. Sensitivity and uncertainty analyses for performance assessment modeling

    International Nuclear Information System (INIS)

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs
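The two approaches can be contrasted in a few lines (a toy model with hypothetical input values, not a repository performance-assessment code): the deterministic route propagates first-order Taylor terms, Var[f] ≈ Σᵢ (∂f/∂xᵢ)² Var[xᵢ] for independent inputs, while the statistical route simulates.

```python
import numpy as np

def travel_time(k, h, i):
    # Toy performance measure: advective travel time through a barrier
    # of thickness h with conductivity k under hydraulic gradient i.
    return h / (k * i)

# Input means and standard deviations (hypothetical, independent inputs).
mu = np.array([1e-7, 10.0, 0.01])   # k [m/s], h [m], i [-]
sd = np.array([2e-8, 0.5, 0.002])

# Deterministic approach: numerical partial derivatives at the mean
# (central differences), then first-order Taylor propagation.
steps = np.diag(mu * 1e-6)
partials = np.array([
    (travel_time(*(mu + e)) - travel_time(*(mu - e))) / (2 * e[j])
    for j, e in enumerate(steps)
])
var_taylor = np.sum(partials**2 * sd**2)

# Statistical approach: Monte Carlo simulation with the same inputs.
rng = np.random.default_rng(0)
x = rng.normal(mu, sd, size=(200_000, 3))
var_mc = travel_time(x[:, 0], x[:, 1], x[:, 2]).var()

print(f"Taylor variance: {var_taylor:.3e}, Monte Carlo variance: {var_mc:.3e}")
```

For this mildly nonlinear model the two estimates agree to within a few tens of percent; the gap widens as input uncertainties grow, which is one of the trade-offs the record weighs.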

  8. Modelling and analysing oriented fibrous structures

    International Nuclear Information System (INIS)

    Rantala, M; Lassas, M; Siltanen, S; Sampo, J; Takalo, J; Timonen, J

    2014-01-01

A mathematical model for fibrous structures using a direction dependent scaling law is presented. The orientation of fibrous nets (e.g. paper) is analysed with a method based on the curvelet transform. The curvelet-based orientation analysis has been tested successfully on real data from paper samples: the major directions of fibre orientation can apparently be recovered. Similar results are achieved in tests on data simulated by the new model, allowing a comparison with ground truth.

  9. Externalizing Behaviour for Analysing System Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof

    2013-01-01

System models have recently been introduced to model organisations and evaluate their vulnerability to threats, especially insider threats. For the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outside attackers. Therefore, many attacks are considerably easier to perform for insiders than for outsiders. However, current models do not support explicit specification of different behaviours. Instead, behaviour is deeply embedded in the analyses supported by the models, meaning that it is a complex, if not impossible task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make the behaviour a separate component in system models, and explore how to integrate...
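The proposed separation can be illustrated with a minimal sketch (hypothetical classes, not the authors' formalism): instead of hard-wiring how actors act into the analysis, behaviour becomes a swappable component passed to it.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Location:
    """A location in the organisation model, guarded by a credential check."""
    name: str
    required_credential: str

# Behaviour is externalized: a function deciding whether an actor with the
# given credentials gets past a location, independent of the analysis itself.
Behaviour = Callable[[List[str], Location], bool]

def rule_following(creds: List[str], loc: Location) -> bool:
    # Outsider-style behaviour: entry only with the proper credential.
    return loc.required_credential in creds

def social_engineering(creds: List[str], loc: Location) -> bool:
    # Insider-style behaviour: may talk their way past any check.
    return True

def reachable(creds: List[str], route: List[Location], behave: Behaviour) -> bool:
    # The reachability analysis is behaviour-agnostic.
    return all(behave(creds, loc) for loc in route)

route = [Location("lobby", "badge"), Location("server room", "key")]
print(reachable(["badge"], route, rule_following))      # blocked at the server room
print(reachable(["badge"], route, social_engineering))  # insider reaches the target
```

Swapping the behaviour changes the analysis outcome without touching the model or the reachability code, which is the separation the record argues for.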

  10. Development of ITER 3D neutronics model and nuclear analyses

    International Nuclear Information System (INIS)

    Zeng, Q.; Zheng, S.; Lu, L.; Li, Y.; Ding, A.; Hu, H.; Wu, Y.

    2007-01-01

ITER nuclear analyses rely on calculations with three-dimensional (3D) Monte Carlo codes, e.g. the widely-used MCNP. However, continuous changes in the design of the components require that the 3D neutronics model for nuclear analyses be updated, and modeling a complex geometry with MCNP by hand is a very time-consuming task. An efficient alternative is to develop CAD-based interface codes for automatic conversion from CAD models to MCNP input files. Based on the latest CAD model and the available interface codes, two approaches to updating the 3D neutronics model have been discussed by the ITER IT (International Team): the first is to start with the existing MCNP model 'Brand' and update it through a combination of direct modification of the MCNP input file and generation of models for some components directly from the CAD data; the second is to start from the full CAD model, make the necessary simplifications, and generate the MCNP model with one of the interface codes. MCAM, an advanced CAD-based MCNP interface code developed by the FDS Team in China, has been successfully applied to update the ITER 3D neutronics model by adopting the above two approaches. The Brand model has been updated by generating portions of the geometry from the newest CAD model with MCAM. MCAM has also successfully converted to an MCNP neutronics model a full ITER CAD model which was simplified and issued by the ITER IT to benchmark the above interface codes. Based on the two updated 3D neutronics models, the related nuclear analyses were performed. This paper presents the status of ITER 3D modeling using MCAM and its nuclear analyses, as well as a brief introduction to an advanced version of MCAM. (authors)

  11. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, as usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and the global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses on the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.

  12. Grundlegende quantitative Analysen medizinischer Prüfungen [Basic quantitative analyses of medical examinations]

    Directory of Open Access Journals (Sweden)

    Möltner, Andreas

    2006-08-01

The evaluation steps are described which are necessary for an elementary test-theoretic analysis of an exam and sufficient as a basis for item revisions, improvements in the composition of tests, and feedback to teaching coordinators and curriculum developers. These steps include the evaluation of the results, the analysis of item difficulty and discrimination and, where appropriate, the corresponding evaluation of single answers. To complete the procedure, the internal consistency is determined, which provides an estimate of the reliability and significance of the examination.
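The evaluation steps listed above can be sketched on a small score matrix (simulated exam data, not a real examination):

```python
import numpy as np

# Simulate 200 examinees x 10 dichotomous items (1 = correct, 0 = wrong).
rng = np.random.default_rng(1)
ability = rng.normal(size=200)
item_location = np.linspace(-1.5, 1.5, 10)   # items get harder left to right
scores = (ability[:, None] + rng.normal(size=(200, 10)) > item_location).astype(float)

# Item difficulty: proportion of correct answers per item.
difficulty = scores.mean(axis=0)

# Item discrimination: corrected item-total correlation
# (item score vs. total score excluding that item).
totals = scores.sum(axis=1)
discrimination = np.array([
    np.corrcoef(scores[:, j], totals - scores[:, j])[0, 1]
    for j in range(scores.shape[1])
])

# Internal consistency: Cronbach's alpha as a reliability estimate.
k = scores.shape[1]
alpha = k / (k - 1) * (1 - scores.var(axis=0, ddof=1).sum() / totals.var(ddof=1))

print("difficulty:", difficulty.round(2))
print("discrimination:", discrimination.round(2))
print("alpha:", round(alpha, 2))
```

Items become harder along the simulated location parameter, every item discriminates positively, and alpha summarizes the reliability of the whole exam, exactly the quantities the record describes.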

  13. VIPRE modeling of VVER-1000 reactor core for DNB analyses

    Energy Technology Data Exchange (ETDEWEB)

Sung, Y.; Nguyen, Q. [Westinghouse Electric Corporation, Pittsburgh, PA (United States)]; Cizek, J. [Nuclear Research Institute, Prague (Czech Republic)]

    1995-09-01

    Based on the one-pass modeling approach, the hot channels and the VVER-1000 reactor core can be modeled in 30 channels for DNB analyses using the VIPRE-01/MOD02 (VIPRE) code (VIPRE is owned by Electric Power Research Institute, Palo Alto, California). The VIPRE one-pass model does not compromise any accuracy in the hot channel local fluid conditions. Extensive qualifications include sensitivity studies of radial noding and crossflow parameters and comparisons with the results from THINC and CALOPEA subchannel codes. The qualifications confirm that the VIPRE code with the Westinghouse modeling method provides good computational performance and accuracy for VVER-1000 DNB analyses.

  14. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    Science.gov (United States)

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  15. Performance of neutron kinetics models for ADS transient analyses

    International Nuclear Information System (INIS)

    Rineiski, A.; Maschek, W.; Rimpault, G.

    2002-01-01

    Within the framework of the SIMMER code development, neutron kinetics models for simulating transients and hypothetical accidents in advanced reactor systems, in particular in Accelerator Driven Systems (ADSs), have been developed at FZK/IKET in cooperation with CE Cadarache. SIMMER is a fluid-dynamics/thermal-hydraulics code, coupled with a structure model and a space-, time- and energy-dependent neutronics module for analyzing transients and accidents. The advanced kinetics models have also been implemented into KIN3D, a module of the VARIANT/TGV code (stand-alone neutron kinetics) for broadening application and for testing and benchmarking. In the paper, a short review of the SIMMER and KIN3D neutron kinetics models is given. Some typical transients related to ADS perturbations are analyzed. The general models of SIMMER and KIN3D are compared with more simple techniques developed in the context of this work to get a better understanding of the specifics of transients in subcritical systems and to estimate the performance of different kinetics options. These comparisons may also help in elaborating new kinetics models and extending existing computation tools for ADS transient analyses. The traditional point-kinetics model may give rather inaccurate transient reaction rate distributions in an ADS even if the material configuration does not change significantly. This inaccuracy is not related to the problem of choosing a 'right' weighting function: the point-kinetics model with any weighting function cannot take into account pronounced flux shape variations related to possible significant changes in the criticality level or to fast beam trips. To improve the accuracy of the point-kinetics option for slow transients, we have introduced a correction factor technique. The related analyses give a better understanding of 'long-timescale' kinetics phenomena in the subcritical domain and help to evaluate the performance of the quasi-static scheme in a particular case. 

  16. A critical examination of the validity of simplified models for radiant heat transfer analysis.

    Science.gov (United States)

    Toor, J. S.; Viskanta, R.

    1972-01-01

The directional effects of the simplified models are examined by comparing experimental data with predictions based on simple and more detailed models for the radiation characteristics of surfaces. Analytical results indicate that the constant property diffuse and specular models do not yield the upper and lower bounds on local radiant heat flux. In general, the constant property specular analysis yields higher values of irradiation than the constant property diffuse analysis. A diffuse surface in the enclosure appears to destroy the effect of specularity of the other surfaces. Semigray and gray analyses predict the irradiation reasonably well provided that the directional properties and the specularity of the surfaces are taken into account. The uniform and nonuniform radiosity diffuse models are in satisfactory agreement with each other.
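The diffuse-gray baseline against which such directional models are compared can be written as a small radiosity system (a generic textbook formulation with hypothetical geometry, not the authors' code): J = εσT⁴ + (1−ε) F J, solved as a linear system.

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

# Three-surface enclosure of equal areas (hypothetical): view factors F,
# emissivities eps, and temperatures T for diffuse-gray surfaces.
F = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])   # each row sums to 1 (enclosure rule)
eps = np.array([0.9, 0.5, 0.3])
T = np.array([1000.0, 600.0, 300.0])

# Radiosity balance J = eps*sigma*T^4 + (1 - eps) * (F @ J),
# rearranged to (I - diag(1 - eps) F) J = eps*sigma*T^4.
A = np.eye(3) - (1 - eps)[:, None] * F
J = np.linalg.solve(A, eps * SIGMA * T**4)

# Irradiation G and net heat flux q leaving each surface.
G = F @ J
q = J - G
print("radiosity J:", J.round(1))
print("net flux  q:", q.round(1))
```

With equal areas the net fluxes sum to zero (overall energy balance), the hot surface is a net emitter and the cold one a net absorber; the directional and specular effects discussed in the record are exactly what this constant-property diffuse model leaves out.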

  17. A dialogue game for analysing group model building: framing collaborative modelling and its facilitation

    NARCIS (Netherlands)

    Hoppenbrouwers, S.J.B.A.; Rouwette, E.A.J.A.

    2012-01-01

    This paper concerns a specific approach to analysing and structuring operational situations in collaborative modelling. Collaborative modelling is viewed here as 'the goal-driven creation and shaping of models that are based on the principles of rational description and reasoning'. Our long term

  18. Demographic origins of skewed operational and adult sex ratios: perturbation analyses of two-sex models.

    Science.gov (United States)

    Veran, Sophie; Beissinger, Steven R

    2009-02-01

Skewed sex ratios - operational (OSR) and adult (ASR) - arise from sexual differences in reproductive behaviours and adult survival rates due to the cost of reproduction. However, skewed sex ratio at birth, sex-biased dispersal and immigration, and sexual differences in juvenile mortality may also contribute. We present a framework to decompose the roles of demographic traits on sex ratios using perturbation analyses of two-sex matrix population models. Metrics of sensitivity are derived from analyses of sensitivity, elasticity, life-table response experiments and life-stage simulation analyses, and applied to the stable stage distribution instead of lambda. We use these approaches to examine causes of male-biased sex ratios in two populations of green-rumped parrotlets (Forpus passerinus) in Venezuela. Female local juvenile survival contributed the most to the unbalanced OSR and ASR due to a female-biased dispersal rate, suggesting that sexual differences in philopatry can influence sex ratios more strongly than the cost of reproduction.
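The perturbation logic, applied to the stable stage distribution rather than lambda, can be sketched numerically (a generic illustrative two-sex matrix, not the parrotlet parameterisation):

```python
import numpy as np

def stable_distribution(A):
    # Dominant right eigenvector of the projection matrix, scaled to sum to 1.
    vals, vecs = np.linalg.eig(A)
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return w / w.sum()

def build_matrix(sjf=0.3, saf=0.7, sjm=0.4, sam=0.7, fec=2.0):
    # Stages: juvenile F, adult F, juvenile M, adult M (hypothetical rates);
    # both sexes are produced by adult females (primary sex ratio 1:1).
    return np.array([
        [0.0, 0.5 * fec, 0.0, 0.0],   # daughters per adult female
        [sjf, saf,       0.0, 0.0],
        [0.0, 0.5 * fec, 0.0, 0.0],   # sons per adult female
        [0.0, 0.0,       sjm, sam],
    ])

def adult_sex_ratio(A):
    w = stable_distribution(A)
    return w[3] / (w[1] + w[3])   # adult males / all adults

base = adult_sex_ratio(build_matrix())

# Finite-difference sensitivity of the ASR to female juvenile survival.
h = 1e-5
sens = (adult_sex_ratio(build_matrix(sjf=0.3 + h)) - base) / h
print(f"ASR = {base:.3f}, dASR/d(sjf) = {sens:.3f}")
```

In this toy matrix the ASR is male-biased, and its sensitivity to female juvenile survival is negative: raising female juvenile survival reduces the male bias, the same direction of effect the parrotlet analysis attributes to female-biased dispersal.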

  19. Model-based Recursive Partitioning for Subgroup Analyses

    OpenAIRE

    Seibold, Heidi; Zeileis, Achim; Hothorn, Torsten

    2016-01-01

    The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulated challenges to the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by...

  20. Beta-Poisson model for single-cell RNA-seq data analyses.

    Science.gov (United States)

    Vu, Trung Nghia; Wills, Quin F; Kalari, Krishna R; Niu, Nifang; Wang, Liewei; Rantalainen, Mattias; Pawitan, Yudi

    2016-07-15

Single-cell RNA-sequencing technology allows detection of gene expression at the single-cell level. One typical feature of the data is a bimodality in the cellular distribution even for highly expressed genes, primarily caused by a proportion of non-expressing cells. The standard and the over-dispersed gamma-Poisson models that are commonly used in bulk-cell RNA-sequencing are not able to capture this property. We introduce a beta-Poisson mixture model that can capture the bimodality of the single-cell gene expression distribution. We further integrate the model into the generalized linear model framework in order to perform differential expression analyses. The whole analytical procedure is called BPSC. The results from several real single-cell RNA-seq datasets indicate that ∼90% of the transcripts are well characterized by the beta-Poisson model; the model-fit from BPSC is better than the fit of the standard gamma-Poisson model in > 80% of the transcripts. Moreover, in differential expression analyses of simulated and real datasets, BPSC performs well against edgeR, a conventional method widely used in bulk-cell RNA-sequencing data, and against scde and MAST, two recent methods specifically designed for single-cell RNA-seq data. An R package BPSC for model fitting and differential expression analyses of single-cell RNA-seq data is available under the GPL-3 license at https://github.com/nghiavtr/BPSC. Supplementary data are available at Bioinformatics online.
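The core of the beta-Poisson idea can be sketched with a short simulation (a generic sketch with illustrative parameters, not the BPSC implementation): per-cell expression rates are drawn from a scaled beta distribution, which reproduces the bimodal, zero-inflated counts that a single-rate Poisson model cannot.

```python
import numpy as np

rng = np.random.default_rng(42)
n_cells = 5000

# Beta-Poisson: rate = scale * Beta(alpha, beta), counts ~ Poisson(rate).
# alpha < 1 concentrates mass near zero, mimicking non-expressing cells.
alpha, beta, scale = 0.4, 2.0, 50.0
rates = scale * rng.beta(alpha, beta, size=n_cells)
bp_counts = rng.poisson(rates)

# Contrast: a plain Poisson with the same overall mean has almost no zeros.
poisson_counts = rng.poisson(bp_counts.mean(), size=n_cells)

print(f"zero fraction: beta-Poisson {np.mean(bp_counts == 0):.2f}, "
      f"plain Poisson {np.mean(poisson_counts == 0):.2f}")
```

The beta-Poisson draw leaves a substantial fraction of cells at zero while keeping a second mode of higher counts, which is the bimodality the record describes.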

  1. Graphic-based musculoskeletal model for biomechanical analyses and animation.

    Science.gov (United States)

    Chao, Edmund Y S

    2003-04-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.

  2. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    Directory of Open Access Journals (Sweden)

    Daniel T. L. Shek

    2011-01-01

Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  3. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
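The independence violation that motivates LMM over GLM can be demonstrated with a pure-NumPy simulation (illustrative random-intercept data, not the SPSS procedure or the Project P.A.T.H.S. data): repeated measures from the same subject share a random intercept, so the intraclass correlation is far from zero.

```python
import numpy as np

rng = np.random.default_rng(7)
n_subjects, n_waves = 300, 6

# Random-intercept model: y_ij = b0 + b1 * wave_j + u_i + e_ij,
# with subject effect u_i ~ N(0, 1) and residual e_ij ~ N(0, 1).
u = rng.normal(0.0, 1.0, size=n_subjects)
wave = np.arange(n_waves)
y = 2.0 + 0.5 * wave + u[:, None] + rng.normal(0.0, 1.0, (n_subjects, n_waves))

# Remove the shared wave means (a crude stand-in for the fixed effects).
resid = y - y.mean(axis=0)

# Method-of-moments variance components.
var_within = resid.var(axis=1, ddof=1).mean()                    # ~ residual variance
subject_means = resid.mean(axis=1)
var_between = subject_means.var(ddof=1) - var_within / n_waves   # ~ intercept variance

icc = var_between / (var_between + var_within)
print(f"residual var ~ {var_within:.2f}, intercept var ~ {var_between:.2f}, ICC ~ {icc:.2f}")
```

An ICC near 0.5 means half of the residual variation is shared within subjects; treating the six waves as independent observations, as a GLM would, is exactly the criticism the record raises.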

  4. Examining hydrogen transitions.

    Energy Technology Data Exchange (ETDEWEB)

    Plotkin, S. E.; Energy Systems

    2007-03-01

    This report describes the results of an effort to identify key analytic issues associated with modeling a transition to hydrogen as a fuel for light duty vehicles, and using insights gained from this effort to suggest ways to improve ongoing modeling efforts. The study reported on here examined multiple hydrogen scenarios reported in the literature, identified modeling issues associated with those scenario analyses, and examined three DOE-sponsored hydrogen transition models in the context of those modeling issues. The three hydrogen transition models are HyTrans (contractor: Oak Ridge National Laboratory), MARKAL/DOE* (Brookhaven National Laboratory), and NEMS-H2 (OnLocation, Inc). The goals of these models are (1) to help DOE improve its R&D effort by identifying key technology and other roadblocks to a transition and testing its technical program goals to determine whether they are likely to lead to the market success of hydrogen technologies, (2) to evaluate alternative policies to promote a transition, and (3) to estimate the costs and benefits of alternative pathways to hydrogen development.

  5. Reading Ability Development from Kindergarten to Junior Secondary: Latent Transition Analyses with Growth Mixture Modeling

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2016-10-01

    Full Text Available The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study (Kindergarten Class of 1998-99 data; Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006) from a dynamic systems perspective. To depict children's growth pattern, we extended the measurement part of latent transition analysis to the growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group, with few cross-level changes in their classes. After adding the environmental factors as predictors, analyses showed that children receiving higher teachers' ratings, with higher socioeconomic status, and of above-average poverty status had a higher probability of transitioning into the higher ability group.

  6. Present status of theories and data analyses of mathematical models for carcinogenesis

    International Nuclear Information System (INIS)

    Kai, Michiaki; Kawaguchi, Isao

    2007-01-01

    Reviewed are the basic mathematical models (hazard functions), the present trend of model studies, and models for radiation carcinogenesis. Hazard functions of carcinogenesis are described for the multi-stage model and the 2-event model related to cell dynamics. At present, the age distribution of cancer mortality is analyzed, the relationship between mutation and carcinogenesis is discussed, and models for colorectal carcinogenesis are presented. As for radiation carcinogenesis, the Armitage-Doll model and the generalized MVK model (Moolgavkar, Venzon, Knudson, 1971-1990) of 2-stage clonal expansion have been applied to analyses of carcinogenesis in A-bomb survivors, uranium mine workers (Rn exposure), smoking doctors in the UK, and other cases, whose characteristics are discussed. In the analyses of A-bomb survivors, the models above are applied to solid tumors and leukemia to examine the effects, if any, of stage, age at exposure, time progression, etc. For miners and smokers, the initiation, promotion and progression stages of carcinogenesis are discussed on the basis of the analyses. Other cases include analyses of workers at a Canadian nuclear power plant and of patients who underwent radiation therapy. Model analysis can help one understand the carcinogenic process quantitatively rather than merely describe it. (R.T.)

  7. SVM models for analysing the headstreams of mine water inrush

    Energy Technology Data Exchange (ETDEWEB)

    Yan Zhi-gang; Du Pei-jun; Guo Da-zhi [China University of Science and Technology, Xuzhou (China). School of Environmental Science and Spatial Informatics

    2007-08-15

    The support vector machine (SVM) model was introduced to analyse the headstream of water inrush in a coal mine. An SVM model based on a hydrogeochemical method was constructed for recognising two kinds of headstreams, and an H-SVMs model was constructed for recognising multiple headstreams. The SVM method was applied to analyse the conditions of two mixed headstreams, and the value of the SVM decision function was investigated as a means of denoting hydrogeochemical abnormality. The experimental results show that the SVM is based on a strict mathematical theory and has a simple structure and good overall performance. Moreover, the parameter W in the decision function can describe the weights of the discrimination indices of the headstream of water inrush. The value of the decision function can denote hydrogeochemical abnormality, which is significant for the prevention of water inrush in a coal mine. 9 refs., 1 fig., 7 tabs.
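    The recognition step can be sketched as follows. This is a hedged illustration only: the two "headstream" clusters and their hydrogeochemical indices are invented, and scikit-learn's linear SVM stands in for the paper's implementation. The fitted coefficient vector plays the role of the weight parameter W, and the decision-function value is the candidate abnormality measure:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# hypothetical hydrogeochemical indices (e.g. two ion concentrations)
# for samples from two water headstreams
limestone = rng.normal([2.0, 1.0], 0.3, size=(40, 2))
sandstone = rng.normal([4.0, 3.0], 0.3, size=(40, 2))
X = np.vstack([limestone, sandstone])
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="linear").fit(X, y)
w = clf.coef_[0]                  # weights of the discrimination indices (W)
scores = clf.decision_function(X)  # signed distance from the boundary: a
                                   # candidate hydrogeochemical abnormality value
acc = clf.score(X, y)
```

    A sample whose score sits near zero lies between the two headstream signatures, which is the kind of mixture condition the abstract describes.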

  8. Model-Based Recursive Partitioning for Subgroup Analyses.

    Science.gov (United States)

    Seibold, Heidi; Zeileis, Achim; Hothorn, Torsten

    2016-05-01

    The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulates challenges for the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by predictive factors. The method starts with a model for the overall treatment effect as defined for the primary analysis in the study protocol and uses measures for detecting parameter instabilities in this treatment effect. The procedure produces a segmented model with differential treatment parameters corresponding to each patient subgroup. The subgroups are linked to predictive factors by means of a decision tree. The method is applied to the search for subgroups of patients suffering from amyotrophic lateral sclerosis that differ with respect to their Riluzole treatment effect, the only currently approved drug for this disease.
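    The core idea, stripped of the actual model-based recursive partitioning machinery (implemented, e.g., in the R partykit package), can be sketched in a few lines: fit the overall treatment-effect model, check whether the effect parameter is unstable across a candidate predictive factor, and if so refit per subgroup. The trial data, the binary biomarker, the effect sizes and the crude instability check below are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
treat = rng.integers(0, 2, n)          # 0 = placebo, 1 = active treatment
biomarker = rng.integers(0, 2, n)      # hypothetical binary predictive factor
# simulated outcome: the treatment only helps when biomarker == 1
y = 1.0 + treat * (2.0 * biomarker) + rng.normal(0, 0.5, n)

def treatment_effect(mask):
    # difference-in-means model for the treatment effect within a subgroup
    return y[mask & (treat == 1)].mean() - y[mask & (treat == 0)].mean()

overall = treatment_effect(np.ones(n, dtype=bool))   # primary-analysis model
low = treatment_effect(biomarker == 0)               # segmented models after
high = treatment_effect(biomarker == 1)              # splitting on the factor
# crude parameter-instability check: is the subgroup difference far beyond
# the rough sampling noise of the effect estimates?
split_worthwhile = abs(high - low) > 4 * 0.5 / np.sqrt(n / 4)
```

    In the real procedure the instability measure is a formal parameter-instability test and the splitting variable is chosen among many candidates, producing the decision tree described in the abstract.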

  9. Bayesian uncertainty analyses of probabilistic risk models

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1989-01-01

    Applications of Bayesian principles to uncertainty analyses are discussed in the paper. A short review of the most important uncertainties and their causes is provided. An application of the principle of maximum entropy to the determination of Bayesian prior distributions is described. An approach based on so-called probabilistic structures is presented in order to develop a method for the quantitative evaluation of modelling uncertainties. The method is applied to a small example case. Ideas for application areas for the proposed method are discussed.

  10. Analyses and simulations in income frame regulation model for the network sector from 2007

    International Nuclear Information System (INIS)

    Askeland, Thomas Haave; Fjellstad, Bjoern

    2007-01-01

    Analyses of the income frame regulation model for the network sector in Norway, introduced on 1 January 2007. The model's treatment of the norm cost is evaluated, especially the effect analyses carried out by a so-called Data Envelopment Analysis model. It is argued that there may be an age bias in the data set, and that this can and should be corrected for in the effect analyses; the proposed correction is to introduce an age parameter into the data set. Analyses have been made of how the calibration effects in the regulation model affect the industry's total income frame as well as each network company's income frame. It is argued that the calibration, as presented, does not work as intended and should be adjusted in order to provide the sector with the reference rate of return (ml)

  11. Use of flow models to analyse loss of coolant accidents

    International Nuclear Information System (INIS)

    Pinet, Bernard

    1978-01-01

    This article summarises current work on developing the use of flow models to analyse loss-of-coolant accidents in pressurized-water plants. This work is being done jointly, in the context of the LOCA Technical Committee, by the CEA, EDF and FRAMATOME. The construction of the flow model is closely based on theoretical studies of the two-fluid model. The laws of transfer at the interface and at the wall are tested experimentally. The representativeness of the model then has to be checked in experiments involving several elementary physical phenomena. [fr]

  12. Analyses of non-fatal accidents in an opencast mine by logistic regression model - a case study.

    Science.gov (United States)

    Onder, Seyhan; Mutlu, Mert

    2017-09-01

    Accidents cause major damage to both workers and enterprises in the mining industry. To reduce the number of occupational accidents, these incidents should be properly registered and carefully analysed. This study examines the Aegean Lignite Enterprise (ELI) of Turkish Coal Enterprises (TKI) in Soma between 2006 and 2011, using opencast coal mine occupational accident records for statistical analyses. A total of 231 occupational accidents were analysed for this study. The accident records were categorized into seven groups: area, reason, occupation, part of body, age, shift hour and lost days. The SPSS package program was used for logistic regression analyses, which predicted the probability of accidents resulting in greater or fewer than 3 lost workdays for non-fatal injuries. The social facilities area of the surface installations, workshops and opencast mining areas are the areas with the highest probability of accidents with greater than 3 lost workdays for non-fatal injuries, while the reasons with the highest probability for these types of accidents are transporting and manual handling. Additionally, the model was tested on accidents reported in 2012 for the ELI in Soma and correctly estimated the probability of accidents with lost workdays in 70% of cases.
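    A minimal sketch of such a model is given below. The accident records are synthetic, the two predictors and their coefficients are invented for illustration, and plain gradient ascent on the log-likelihood stands in for the SPSS routine; the exponentiated coefficients are the odds ratios that this kind of analysis reports:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 500
# hypothetical coded accident features: was the accident due to manual
# handling, and did it occur in an opencast mining area?
manual_handling = rng.integers(0, 2, n)
opencast_area = rng.integers(0, 2, n)
logit = -1.5 + 1.2 * manual_handling + 1.0 * opencast_area
# outcome: 1 if the injury caused more than 3 lost workdays
severe = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

X = np.column_stack([np.ones(n), manual_handling, opencast_area])
beta = np.zeros(3)
for _ in range(5000):                 # gradient ascent on the log-likelihood
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (severe - p) / n

odds_ratios = np.exp(beta[1:])        # odds ratios for the two predictors
```

    An odds ratio above 1 marks a factor (such as manual handling here) that raises the probability of a severe, greater-than-3-lost-workday accident.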

  13. Examining a renormalizable supersymmetric SO(10) model

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Zhi-Yong; Zhang, Da-Xin [Peking University, School of Physics and State Key Laboratory of Nuclear Physics and Technology, Beijing (China)

    2017-10-15

    We examine a renormalizable SUSY SO(10) model without fine-tuning. We show how to construct the MSSM doublets and how to predict proton decay. We find that with the minimal set of Yukawa couplings the model is consistent with the experiments, while including 120_H to fit the data leads to inconsistencies. (orig.)

  14. Examining the intersection of sex and stress in modelling neuropsychiatric disorders.

    Science.gov (United States)

    Goel, N; Bale, T L

    2009-03-01

    Sex-biased neuropsychiatric disorders, including major depressive disorder and schizophrenia, are the major cause of disability in the developed world. Elevated stress sensitivity has been proposed as a key underlying factor in disease onset. Sex differences in stress sensitivity are associated with corticotrophin-releasing factor (CRF) and serotonin neurotransmission, which are important central regulators of mood and coping responses. To elucidate the underlying neurobiology of stress-related disease predisposition, it is critical to develop appropriate animal models of stress pathway dysregulation. Furthermore, the inclusion of sex difference comparisons in stress responsive behaviours, physiology and central stress pathway maturation in these models is essential. Recent studies by our laboratory and others have begun to investigate the intersection of stress and sex where the development of mouse models of stress pathway dysregulation via prenatal stress experience or early-life manipulations has provided insight into points of developmental vulnerability. In addition, examination of the maturation of these pathways, including the functional importance of the organisational and activational effects of gonadal hormones on stress responsivity, is essential for determination of when sex differences in stress sensitivity may begin. In such studies, we have detected distinct sex differences in stress coping strategies where activational effects of testosterone produced females that displayed male-like strategies in tests of passive coping, but were similar to females in tests of active coping. In a second model of elevated stress sensitivity, male mice experiencing prenatal stress early in gestation showed feminised physiological and behavioural stress responses, and were highly sensitive to a low dose of selective serotonin reuptake inhibitors. Analyses of expression and epigenetic patterns revealed changes in CRF and glucocorticoid receptor genes in these mice.

  15. Examining the intersection of sex and stress in modeling neuropsychiatric disorders

    Science.gov (United States)

    Goel, Nirupa; Bale, Tracy L.

    2009-01-01

    Sex-biased neuropsychiatric disorders, including major depressive disorder and schizophrenia, are the major cause of disability in the developed world. Elevated stress sensitivity has been proposed as a key underlying factor in disease onset. Sex differences in stress sensitivity are associated with CRF and serotonin neurotransmission, important central regulators of mood and coping responses. To elucidate the underlying neurobiology of stress-related disease predisposition, it is critical to develop appropriate animal models of stress pathway dysregulation. Further, the inclusion of sex difference comparisons in stress responsive behaviors, physiology, and central stress pathway maturation in these models is essential. Recent studies by our lab and others have begun to investigate the intersection of stress and sex where the development of mouse models of stress pathway dysregulation via prenatal stress experience or early life manipulations has provided insight into points of developmental vulnerability. In addition, examination of the maturation of these pathways including the functional importance of the organizational and activational effects of gonadal hormones on stress responsivity is essential for determination of when sex differences in stress sensitivity may begin. In such studies, we have detected distinct sex differences in stress coping strategies where activational effects of testosterone produced females that displayed male-like strategies in tests of passive coping, but were similar to females in tests of active coping. In a second model of elevated stress sensitivity, male mice experiencing prenatal stress early in gestation showed feminized physiological and behavioral stress responses, and were highly sensitive to a low dose of SSRI. Analyses of expression and epigenetic patterns revealed changes in CRF and glucocorticoid receptor genes in these mice. Mechanistically, stress early in pregnancy produced a significant sex-dependent effect on

  16. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2011-10-01

    Full Text Available The article presents the fundamental aspects of linear regression as a toolbox for macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, and homoscedasticity and heteroscedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and the interpretations that can be drawn at this level.
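    The parameter estimation and significance tests the article describes can be sketched in a few lines of numpy. The macroeconomic series below are synthetic, and the consumption-income relation and noise level are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
# hypothetical macro series: consumption regressed on disposable income
income = np.linspace(100, 200, 40)
consumption = 20 + 0.7 * income + rng.normal(0, 3, 40)

X = np.column_stack([np.ones_like(income), income])
beta, *_ = np.linalg.lstsq(X, consumption, rcond=None)  # OLS estimates
resid = consumption - X @ beta
s2 = resid @ resid / (len(income) - 2)        # unbiased error variance
se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
t_stats = beta / se                           # tests of H0: coefficient = 0
```

    These standard errors assume homoscedastic errors; under heteroscedasticity, which the article also discusses, robust standard errors would be needed instead.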

  17. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Ihekwaba, Adoha

    2007-01-01

    A. Ihekwaba, R. Mardare. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems. Case study: NFkB system. In Proc. of International Conference of Computational Methods in Sciences and Engineering (ICCMSE), American Institute of Physics, AIP Proceedings, N 2...

  18. Models and analyses for inertial-confinement fusion-reactor studies

    International Nuclear Information System (INIS)

    Bohachevsky, I.O.

    1981-05-01

    This report describes models and analyses devised at Los Alamos National Laboratory to determine the technical characteristics of different inertial confinement fusion (ICF) reactor elements required for component integration into a functional unit. We emphasize the generic properties of the different elements rather than specific designs. The topics discussed are general ICF reactor design considerations; reactor cavity phenomena, including the restoration of interpulse ambient conditions; first-wall temperature increases and material losses; reactor neutronics and hydrodynamic blanket response to neutron energy deposition; and analyses of loads and stresses in the reactor vessel walls, including remarks about the generation and propagation of very short wavelength stress waves. A discussion of analytic approaches useful in integrations and optimizations of ICF reactor systems concludes the report

  19. Experimental and Computational Modal Analyses for Launch Vehicle Models considering Liquid Propellant and Flange Joints

    Directory of Open Access Journals (Sweden)

    Chang-Hoon Sim

    2018-01-01

    Full Text Available In this research, modal tests and analyses are performed for a simplified and scaled first-stage model of a space launch vehicle using liquid propellant. This study aims to establish finite element modeling techniques for computational modal analyses by considering the liquid propellant and flange joints of launch vehicles. The modal tests measure the natural frequencies and mode shapes in the first and second lateral bending modes. As the liquid filling ratio increases, the measured frequencies decrease. In addition, as the number of flange joints increases, the measured natural frequencies increase. Computational modal analyses using the finite element method are conducted. The liquid is modeled by the virtual mass method, and the flange joints are modeled using one-dimensional spring elements along with the node-to-node connection. Comparison of the modal test results and predicted natural frequencies shows good or moderate agreement. The correlation between the modal tests and analyses establishes finite element modeling techniques for modeling the liquid propellant and flange joints of space launch vehicles.

  20. Performance on the adult rheumatology in-training examination and relationship to outcomes on the rheumatology certification examination.

    Science.gov (United States)

    Lohr, Kristine M; Clauser, Amanda; Hess, Brian J; Gelber, Allan C; Valeriano-Marcet, Joanne; Lipner, Rebecca S; Haist, Steven A; Hawley, Janine L; Zirkle, Sarah; Bolster, Marcy B

    2015-11-01

    The American College of Rheumatology (ACR) Adult Rheumatology In-Training Examination (ITE) is a feedback tool designed to identify strengths and weaknesses in the content knowledge of individual fellows-in-training and the training program curricula. We determined whether scores on the ACR ITE, as well as scores on other major standardized medical examinations and competency-based ratings, could be used to predict performance on the American Board of Internal Medicine (ABIM) Rheumatology Certification Examination. Between 2008 and 2012, 629 second-year fellows took the ACR ITE. Bivariate correlation analyses of assessment scores and multiple linear regression analyses were used to determine whether ABIM Rheumatology Certification Examination scores could be predicted on the basis of ACR ITE scores, United States Medical Licensing Examination scores, ABIM Internal Medicine Certification Examination scores, fellowship directors' ratings of overall clinical competency, and demographic variables. Logistic regression was used to evaluate whether these assessments were predictive of a passing outcome on the Rheumatology Certification Examination. In the initial linear model, the strongest predictors of the Rheumatology Certification Examination score were the second-year fellows' ACR ITE scores (β = 0.438) and ABIM Internal Medicine Certification Examination scores (β = 0.273). Using a stepwise model, the strongest predictors of higher scores on the Rheumatology Certification Examination were second-year fellows' ACR ITE scores (β = 0.449) and ABIM Internal Medicine Certification Examination scores (β = 0.276). Based on the findings of logistic regression analysis, ACR ITE performance was predictive of a pass/fail outcome on the Rheumatology Certification Examination (odds ratio 1.016 [95% confidence interval 1.011-1.021]). The predictive value of the ACR ITE score with regard to predicting performance on the Rheumatology Certification Examination

  1. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    Directory of Open Access Journals (Sweden)

    Wilson David P

    2008-02-01

    Full Text Available Abstract SaSAT (Sampling and Sensitivity Analysis Tools is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated.
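    Two of the tasks SaSAT automates, Latin hypercube sampling of parameter space and rank (Spearman) correlation for sensitivity, can be sketched without the toolbox itself. The two-parameter epidemic quantity R0 = beta/gamma and the parameter ranges below are illustrative assumptions, not SaSAT defaults:

```python
import numpy as np

rng = np.random.default_rng(11)

def latin_hypercube(n_samples, n_params, rng):
    # one stratified uniform draw per stratum, independently permuted
    # per parameter, so each margin covers its range evenly
    u = (rng.random((n_samples, n_params))
         + np.arange(n_samples)[:, None]) / n_samples
    for j in range(n_params):
        u[:, j] = rng.permutation(u[:, j])
    return u

def rank_corr(a, b):
    # Spearman rank correlation via Pearson correlation of the ranks
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

samples = latin_hypercube(200, 2, rng)
beta_param = 0.1 + 0.4 * samples[:, 0]    # hypothetical transmission rate
gamma_param = 0.05 + 0.1 * samples[:, 1]  # hypothetical recovery rate
r0 = beta_param / gamma_param             # simple epidemic model output

sens_beta = rank_corr(beta_param, r0)     # positive: beta drives R0 up
sens_gamma = rank_corr(gamma_param, r0)   # negative: gamma drives R0 down
```

    Ranking parameters by the magnitude of such coefficients is the factor-prioritisation step; SaSAT additionally offers regression-based measures and graphical output such as tornado plots.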

  2. Analysing the Transformation of Higher Education Governance in Bulgaria and Lithuania

    NARCIS (Netherlands)

    Dobbins, Michael; Leisyte, Liudvika

    2014-01-01

    Drawing on sociological neo-institutional theory and models of higher education governance, we examine current developments in Bulgaria and Lithuania and explore to what extent those developments were shaped by the Bologna reform. We analyse to what extent the state has moved away from a model of

  3. A 1024 channel analyser of model FH 465

    International Nuclear Information System (INIS)

    Tang Cunxun

    1988-01-01

    The FH 465 is an updated version of the model FH 451 1024-channel analyser. In addition to the simple operation and fine display featured by its predecessor, the core memory has been replaced by semiconductor memory, the level of integration has been improved, and the wide use of low-power 74LS devices has greatly decreased the cost; the analyser can also be easily interfaced with Apple-II, Great Wall-0520-CH or IBM-PC/XT microcomputers. The operating principle, main specifications and test results are described.

  4. arXiv Statistical Analyses of Higgs- and Z-Portal Dark Matter Models

    CERN Document Server

    Ellis, John; Marzola, Luca; Raidal, Martti

    2018-06-12

    We perform frequentist and Bayesian statistical analyses of Higgs- and Z-portal models of dark matter particles with spin 0, 1/2 and 1. Our analyses incorporate data from direct detection and indirect detection experiments, as well as LHC searches for monojet and monophoton events, and we also analyze the potential impacts of future direct detection experiments. We find acceptable regions of the parameter spaces for Higgs-portal models with real scalar, neutral vector, Majorana or Dirac fermion dark matter particles, and Z-portal models with Majorana or Dirac fermion dark matter particles. In many of these cases, there are interesting prospects for discovering dark matter particles in Higgs or Z decays, as well as dark matter particles weighing $\\gtrsim 100$ GeV. Negative results from planned direct detection experiments would still allow acceptable regions for Higgs- and Z-portal models with Majorana or Dirac fermion dark matter particles.

  5. Vocational Teachers and Professionalism - A Model Based on Empirical Analyses

    DEFF Research Database (Denmark)

    Duch, Henriette Skjærbæk; Andreasen, Karen E

    Vocational Teachers and Professionalism - A Model Based on Empirical Analyses Several theorists has developed models to illustrate the processes of adult learning and professional development (e.g. Illeris, Argyris, Engeström; Wahlgren & Aarkorg, Kolb and Wenger). Models can sometimes be criticized...... emphasis on the adult employee, the organization, its surroundings as well as other contextual factors. Our concern is adult vocational teachers attending a pedagogical course and teaching at vocational colleges. The aim of the paper is to discuss different models and develop a model concerning teachers...... at vocational colleges based on empirical data in a specific context, vocational teacher-training course in Denmark. By offering a basis and concepts for analysis of practice such model is meant to support the development of vocational teachers’ professionalism at courses and in organizational contexts...

  6. Material model for non-linear finite element analyses of large concrete structures

    NARCIS (Netherlands)

    Engen, Morten; Hendriks, M.A.N.; Øverli, Jan Arve; Åldstedt, Erik; Beushausen, H.

    2016-01-01

    A fully triaxial material model for concrete was implemented in a commercial finite element code. The only required input parameter was the cylinder compressive strength. The material model was suitable for non-linear finite element analyses of large concrete structures. The importance of including

  7. Taxing CO2 and subsidising biomass: Analysed in a macroeconomic and sectoral model

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik

    2000-01-01

    This paper analyses the combination of taxes and subsidies as an instrument to enable a reduction in CO2 emission. The objective of the study is to compare recycling of a CO2 tax revenue as a subsidy for biomass use as opposed to traditional recycling such as reduced income or corporate taxation.... A model of Denmark's energy supply sector is used to analyse the effect of a CO2 tax combined with using the tax revenue for biomass subsidies. The energy supply model is linked to a macroeconomic model such that the macroeconomic consequences of tax policies can be analysed along with the consequences... for specific sectors such as agriculture. Electricity and heat are produced at heat and power plants utilising fuels which minimise total fuel cost, while the authorities regulate capacity expansion technologies. The effect of fuel taxes and subsidies on fuels is very sensitive to the fuel substitution...

  8. An Examination of Extended x-Rescaling Model

    Institute of Scientific and Technical Information of China (English)

    YAN Zhan-Yuan; DUAN Chun-Gui; HE Zhen-Min

    2001-01-01

    The extended x-rescaling model can explain the quark's nuclear effect very well. Whether it can also explain the gluon's nuclear effect should be investigated further. Associated J/ψ and γ production with large PT is a very clean channel to probe the gluon distribution in the proton or nucleus. In this paper, using the extended x-rescaling model, the PT distribution of the nuclear effect factors of the p + Fe → J/ψ + γ + X process is calculated and discussed. By comparing our theoretical results with future experimental data, the extended x-rescaling model can be examined.

  9. Quantifying and Analysing Neighbourhood Characteristics Supporting Urban Land-Use Modelling

    DEFF Research Database (Denmark)

    Hansen, Henning Sten

    2009-01-01

    Land-use modelling and spatial scenarios have gained increased attention as a means to meet the challenge of reducing uncertainty in spatial planning and decision-making. Several organisations have developed software for land-use modelling. Many of the recent modelling efforts incorporate cellular automata (CA) to accomplish spatially explicit land-use change modelling. Spatial interaction between neighbouring land-uses is an important component in urban cellular automata. Nevertheless, this component is usually calibrated through trial-and-error estimation. The aim of the current research project has been to quantify and analyse land-use neighbourhood characteristics and impart useful information for cell-based land-use modelling. The results of our research are a major step forward, because we have estimated rules for neighbourhood interaction from actually observed land-use changes on a yearly basis...
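    The neighbourhood component being quantified can be sketched concretely. The toy land-use grid and class coding below are invented for illustration; a real calibration would tabulate such Moore-neighbourhood counts over observed yearly land-use changes:

```python
import numpy as np

# hypothetical land-use grid: 0 = open land, 1 = residential, 2 = industrial
grid = np.array([
    [0, 0, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 2, 1],
    [0, 0, 2, 2],
])

def neighbourhood_counts(grid, row, col, n_classes=3, radius=1):
    """Count land-use classes in the Moore neighbourhood of a cell."""
    r0, r1 = max(row - radius, 0), min(row + radius + 1, grid.shape[0])
    c0, c1 = max(col - radius, 0), min(col + radius + 1, grid.shape[1])
    window = grid[r0:r1, c0:c1].ravel().tolist()
    window.remove(grid[row, col])      # exclude the centre cell itself
    return np.bincount(window, minlength=n_classes)

counts = neighbourhood_counts(grid, 1, 1)  # neighbours of the cell at (1, 1)
```

    Tabulating these counts for cells that changed land-use versus cells that did not yields the empirical neighbourhood-interaction rules the abstract refers to, replacing trial-and-error calibration.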

  10. A non-equilibrium neutral model for analysing cultural change.

    Science.gov (United States)

    Kandler, Anne; Shennan, Stephen

    2013-08-07

    Neutral evolution is a frequently used model to analyse changes in frequencies of cultural variants over time. Variants are chosen to be copied according to their relative frequency and new variants are introduced by a process of random mutation. Here we present a non-equilibrium neutral model which accounts for temporally varying population sizes and mutation rates and makes it possible to analyse the cultural system under consideration at any point in time. This framework gives an indication whether observed changes in the frequency distributions of a set of cultural variants between two time points are consistent with the random copying hypothesis. We find that the likelihood of the existence of the observed assemblage at the end of the considered time period (expressed by the probability of the observed number of cultural variants present in the population during the whole period under neutral evolution) is a powerful indicator of departures from neutrality. Further, we study the effects of frequency-dependent selection on the evolutionary trajectories and present a case study of change in the decoration of pottery in early Neolithic Central Europe. Based on the framework developed we show that neutral evolution is not an adequate description of the observed changes in frequency. Copyright © 2013 Elsevier Ltd. All rights reserved.
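    The random-copying process itself is straightforward to simulate. In the sketch below the population size, mutation rate and run length are illustrative choices, not values from the pottery case study; variants are copied in proportion to their current frequency and innovations enter as brand-new variants:

```python
import numpy as np

rng = np.random.default_rng(5)
pop_size, mu, generations = 200, 0.01, 100
variants = np.zeros(pop_size, dtype=int)   # everyone starts with variant 0
next_label = 1
richness = []                              # number of distinct variants over time

for _ in range(generations):
    # copying: each individual adopts a variant drawn by relative frequency
    variants = rng.choice(variants, size=pop_size, replace=True)
    # innovation: mutants introduce brand-new variants
    mutants = rng.random(pop_size) < mu
    n_new = int(mutants.sum())
    variants[mutants] = np.arange(next_label, next_label + n_new)
    next_label += n_new
    richness.append(len(np.unique(variants)))
```

    Comparing the observed number of variants maintained over a period against the distribution of `richness` from such runs is the kind of neutrality check the paper builds on, here extended to time-varying population sizes and mutation rates.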

  11. Multi-state models: metapopulation and life history analyses

    Directory of Open Access Journals (Sweden)

    Arnason, A. N.

    2004-06-01

    Full Text Available Multi-state models are designed to describe populations that move among a fixed set of categorical states. The obvious application is to population interchange among geographic locations such as breeding sites or feeding areas (e.g., Hestbeck et al., 1991; Blums et al., 2003; Cam et al., 2004) but they are increasingly used to address important questions of evolutionary biology and life history strategies (Nichols & Kendall, 1995). In these applications, the states include life history stages such as breeding states. The multi-state models, by permitting estimation of stage-specific survival and transition rates, can help assess trade-offs between life history mechanisms (e.g. Yoccoz et al., 2000). These trade-offs are also important in meta-population analyses where, for example, the pre- and post-breeding rates of transfer among sub-populations can be analysed in terms of target colony distance, density, and other covariates (e.g., Lebreton et al., 2003; Breton et al., in review). Further examples of the use of multi-state models in analysing dispersal and life-history trade-offs can be found in the session on Migration and Dispersal. In this session, we concentrate on applications that did not involve dispersal. These applications fall in two main categories: those that address life history questions using stage categories, and a more technical use of multi-state models to address problems arising from the violation of mark-recapture assumptions leading to the potential for seriously biased predictions or misleading insights from the models. Our plenary paper, by William Kendall (Kendall, 2004), gives an overview of the use of Multi-state Mark-Recapture (MSMR) models to address two such violations. The first is the occurrence of unobservable states that can arise, for example, from temporary emigration or by incomplete sampling coverage of a target population. Such states can also occur for life history reasons, such

  12. Evaluation of Uncertainties in hydrogeological modeling and groundwater flow analyses. Model calibration

    International Nuclear Information System (INIS)

    Ijiri, Yuji; Ono, Makoto; Sugihara, Yutaka; Shimo, Michito; Yamamoto, Hajime; Fumimura, Kenichi

    2003-03-01

    groundwater has not resulted in reduction of uncertainty only in the calibration of the model using the transient hydraulic data. Therefore, techniques for estimating effective porosity, and the acquisition of direct data through investigations such as tracer tests, will be important in the future. (3) To grasp the groundwater flow characteristics efficiently, it is effective to update the hydrogeological structure model gradually, using several modeling techniques as the investigation progresses, and to feed the results back into the investigation. (author)

  13. Modelling and Analyses of Embedded Systems Design

    DEFF Research Database (Denmark)

    Brekling, Aske Wiid

    We present the MoVES languages: a language with which embedded systems can be specified at a stage in the development process where an application is identified and should be mapped to an execution platform (potentially multi-core). We give a formal model for MoVES that captures and gives semantics to the elements of specifications in the MoVES language. We show that even for seemingly simple systems, the complexity of verifying real-time constraints can be overwhelming, but we give an upper limit to the size of the search-space that needs examining. Furthermore, the formal model exposes [...]-based verification as a promising approach for assisting developers of embedded systems. We provide examples of system verifications that, in size and complexity, point in the direction of industrially-interesting systems.

  14. Regional analyses of labor markets and demography: a model based Norwegian example.

    Science.gov (United States)

    Stambol, L S; Stolen, N M; Avitsland, T

    1998-01-01

    The authors discuss the regional REGARD model, developed by Statistics Norway to analyze the regional implications of macroeconomic development of employment, labor force, and unemployment. "In building the model, empirical analyses of regional producer behavior in manufacturing industries have been performed, and the relation between labor market development and regional migration has been investigated. Apart from providing a short description of the REGARD model, this article demonstrates the functioning of the model, and presents some results of an application." excerpt

  15. Examination of an eHealth literacy scale and a health literacy scale in a population with moderate to high cardiovascular risk: Rasch analyses.

    Directory of Open Access Journals (Sweden)

    Sarah S Richtering

    Full Text Available Electronic health (eHealth) strategies are evolving, making it important to have valid scales to assess eHealth and health literacy. Item response theory methods, such as the Rasch measurement model, are increasingly used for the psychometric evaluation of scales. This paper aims to examine the internal construct validity of an eHealth and health literacy scale using Rasch analysis in a population with moderate to high cardiovascular disease risk. The first 397 participants of the CONNECT study completed the electronic health Literacy Scale (eHEALS) and the Health Literacy Questionnaire (HLQ). Overall Rasch model fit as well as five key psychometric properties were analysed: unidimensionality, response thresholds, targeting, differential item functioning and internal consistency. The eHEALS had good overall model fit (χ2 = 54.8, p = 0.06), ordered response thresholds, reasonable targeting and good internal consistency (person separation index (PSI) 0.90). It did, however, appear to measure two constructs of eHealth literacy. The HLQ subscales (except subscale 5) did not fit the Rasch model (χ2: 18.18-60.60, p: 0.00-0.58) and had suboptimal targeting for most subscales. Subscales 6 to 9 displayed disordered thresholds, indicating participants had difficulty distinguishing between response options. All subscales did, nonetheless, demonstrate moderate to good internal consistency (PSI: 0.62-0.82). Rasch analyses demonstrated that the eHEALS has good measures of internal construct validity, although it appears to capture different aspects of eHealth literacy (e.g. using eHealth and understanding eHealth). Whilst further studies are required to confirm this finding, it may be necessary for these constructs of the eHEALS to be scored separately. The nine HLQ subscales were shown to measure a single construct of health literacy. However, participants' scores may not represent their actual level of ability, as distinction between response categories was unclear for
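
The Rasch measurement model used in these analyses reduces, in its dichotomous form, to a one-parameter logistic item response function: the probability of endorsing an item depends only on the difference between person ability and item difficulty. A minimal sketch (the ability and difficulty values are illustrative):

```python
import math

def rasch_p(theta, b):
    """Rasch (1PL) probability that a person with ability theta
    answers/endorses an item of difficulty b (both on the logit scale)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals item difficulty, the probability is exactly 0.5.
p_match = rasch_p(0.0, 0.0)

# Higher ability gives a strictly higher probability on the same item,
# which is the monotonicity that Rasch fit statistics test against.
probs = [rasch_p(t, 0.0) for t in (-2, -1, 0, 1, 2)]
```

Disordered response thresholds, as reported for HLQ subscales 6 to 9, mean that the polytomous generalization of this curve crosses category boundaries in the wrong order.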

  16. Analysing and controlling the tax evasion dynamics via majority-vote model

    Energy Technology Data Exchange (ETDEWEB)

    Lima, F W S, E-mail: fwslima@gmail.co, E-mail: wel@ufpi.edu.b [Departamento de Fisica, Universidade Federal do PiauI, 64049-550, Teresina - PI (Brazil)

    2010-09-01

    Within the context of agent-based Monte-Carlo simulations, we study the well-known majority-vote model (MVM) with noise applied to tax evasion on simple square lattices, Voronoi-Delaunay random lattices, Barabási-Albert networks, and Erdős-Rényi random graphs. In order to analyse and control the fluctuations of tax evasion in the economics model proposed by Zaklan, the MVM is applied in the neighborhood of the critical noise q_c to evolve the Zaklan model. The Zaklan model had previously been studied using the equilibrium Ising model. Here we show that the Zaklan model is robust: it can be studied using the equilibrium dynamics of the Ising model as well as the nonequilibrium MVM, on all the topologies cited above, giving the same behavior regardless of the dynamics or topology used.
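
A minimal sketch of the majority-vote dynamics described above, on a small square lattice with periodic boundaries. The lattice size, sweep count and noise value are illustrative choices, not the paper's; in the Zaklan application the magnetization plays the role of the compliant/evading population balance:

```python
import random

random.seed(1)

def mvm_sweep(spins, L, q):
    """One Monte Carlo sweep of the majority-vote model with noise q
    on an L x L square lattice with periodic boundaries."""
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        s = (spins[(i - 1) % L][j] + spins[(i + 1) % L][j]
             + spins[i][(j - 1) % L] + spins[i][(j + 1) % L])
        if s != 0:
            majority = 1 if s > 0 else -1
            # with probability q the site adopts the MINORITY sign (noise)
            spins[i][j] = -majority if random.random() < q else majority
        else:
            spins[i][j] = random.choice((-1, 1))   # tie: pick at random

L, q = 16, 0.05            # q below the square-lattice critical noise
spins = [[1] * L for _ in range(L)]
for _ in range(50):
    mvm_sweep(spins, L, q)
m = abs(sum(map(sum, spins))) / (L * L)   # magnetization (order parameter)
```

Below the critical noise the system stays ordered (large `m`); tuning `q` through q_c is what lets the MVM control the size of the tax-evasion fluctuations.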

  18. On the use of uncertainty analyses to test hypotheses regarding deterministic model predictions of environmental processes

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Bittner, E.A.; Essington, E.H.

    1995-01-01

    This paper illustrates the use of Monte Carlo parameter uncertainty and sensitivity analyses to test hypotheses regarding predictions of deterministic models of environmental transport, dose, risk and other phenomena. The methodology is illustrated by testing whether 238Pu is transferred more readily than 239+240Pu from the gastrointestinal (GI) tract of cattle to their tissues (muscle, liver and blood). This illustration is based on a study wherein beef cattle grazed for up to 1064 days on a fenced plutonium (Pu)-contaminated arid site in Area 13 near the Nevada Test Site in the United States. Periodically, cattle were sacrificed and their tissues analyzed for Pu and other radionuclides. Conditional sensitivity analyses of the model predictions were also conducted. These analyses indicated that Pu cattle tissue concentrations had the largest impact of any model parameter on the pdf of predicted Pu fractional transfers. Issues that arise in conducting uncertainty and sensitivity analyses of deterministic models are discussed. (author)
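
The Monte Carlo parameter-uncertainty idea can be sketched with a toy deterministic transfer model: sample the uncertain inputs, propagate each sample through the model, and compare the resulting predictive distributions for the two nuclides. Everything below (the model form, the distributions and every parameter value) is invented for illustration and is not the study's actual model:

```python
import random
import statistics

random.seed(42)

def transfer_fraction(uptake, retention):
    """Toy deterministic model: fraction of ingested Pu reaching tissue."""
    return uptake * retention

def sample(mu_uptake, mu_retention):
    # Lognormal parameter uncertainty (illustrative geometric spread).
    return transfer_fraction(random.lognormvariate(mu_uptake, 0.3),
                             random.lognormvariate(mu_retention, 0.3))

# Two hypothetical nuclides with different median uptake parameters.
a = [sample(-9.0, -1.0) for _ in range(5000)]   # stands in for "238Pu"
b = [sample(-9.5, -1.0) for _ in range(5000)]   # stands in for "239+240Pu"

# If the central masses of the two predictive distributions barely
# overlap, the hypothesis of a real transfer difference is supported.
ratio = statistics.median(a) / statistics.median(b)
```

The hypothesis test in the paper amounts to asking whether such a ratio of predicted fractional transfers differs credibly from 1 once all parameter uncertainty is propagated.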

  19. Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses

    CERN Document Server

    Li, Shu; The ATLAS collaboration

    2017-01-01

    Proceeding for the poster presentation at LHCP2017, Shanghai, China on the topic of "Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses" (ATL-PHYS-SLIDE-2017-265 https://cds.cern.ch/record/2265389) Deadline: 01/09/2017

  20. RELAP5 analyses of overcooling transients in a pressurized water reactor

    International Nuclear Information System (INIS)

    Bolander, M.A.; Fletcher, C.D.; Ogden, D.M.; Stitt, B.D.; Waterman, M.E.

    1983-01-01

    In support of the Pressurized Thermal Shock Integration Study sponsored by the United States Nuclear Regulatory Commission, the Idaho National Engineering Laboratory has performed analyses of overcooling transients using the RELAP5/MOD1.5 computer code. These analyses were performed for Oconee Plants 1 and 3, which are pressurized water reactors of Babcock and Wilcox lowered-loop design. Results of the RELAP5 analyses are presented, including a comparison with plant data. The capabilities and limitations of the RELAP5/MOD1.5 computer code in analyzing integral plant transients are examined. These analyses require detailed thermal-hydraulic and control system computer models.

  1. Correlation of Klebsiella pneumoniae comparative genetic analyses with virulence profiles in a murine respiratory disease model.

    Directory of Open Access Journals (Sweden)

    Ramy A Fodah

    Full Text Available Klebsiella pneumoniae is a bacterial pathogen of worldwide importance and a significant contributor to multiple disease presentations associated with both nosocomial and community-acquired disease. ATCC 43816 is a well-studied K. pneumoniae strain which is capable of causing an acute respiratory disease in surrogate animal models. In this study, we performed sequencing of the ATCC 43816 genome to support future efforts characterizing genetic elements required for disease. Furthermore, we performed comparative genetic analyses to the previously sequenced genomes from NTUH-K2044 and MGH 78578 to gain an understanding of the conservation of known virulence determinants amongst the three strains. We found that ATCC 43816 and NTUH-K2044 both possess the known virulence determinant for yersiniabactin, as well as a Type 4 secretion system (T4SS), a CRISPR system, and an acetoin catabolism locus, all absent from MGH 78578. While both NTUH-K2044 and MGH 78578 are clinical isolates, little is known about the disease potential of these strains in cell culture and animal models. Thus, we also performed functional analyses in the murine macrophage cell lines RAW264.7 and J774A.1 and found that MGH 78578 (K52 serotype) was internalized at higher levels than ATCC 43816 (K2) and NTUH-K2044 (K1), consistent with previous characterization of the antiphagocytic properties of K1 and K2 serotype capsules. We also examined the three K. pneumoniae strains in a novel BALB/c respiratory disease model and found that ATCC 43816 and NTUH-K2044 are highly virulent (LD50 < 100 CFU) while MGH 78578 is relatively avirulent.

  2. Integrated freight network model : a GIS-based platform for transportation analyses.

    Science.gov (United States)

    2015-01-01

    The models currently used to examine the behavior of transportation systems are usually mode-specific. That is, they focus on a single mode (i.e., railways, highways, or waterways). The lack of integration limits the usefulness of models to analyze the...

  3. On the Structure of Personality Disorder Traits: Conjoint Analyses of the CAT-PD, PID-5, and NEO-PI-3 Trait Models

    Science.gov (United States)

    Wright, Aidan G.C.; Simms, Leonard J.

    2014-01-01

    The current study examines the relations among contemporary models of pathological and normal range personality traits. Specifically, we report on (a) conjoint exploratory factor analyses of the Computerized Adaptive Test of Personality Disorder static form (CAT-PD-SF) with the Personality Inventory for the DSM-5 (PID-5; Krueger et al., 2012) and NEO Personality Inventory-3 First Half (NEO-PI-3FH; McCrae & Costa, 2007), and (b) unfolding hierarchical analyses of the three measures in a large general psychiatric outpatient sample (N = 628; 64% Female). A five-factor solution provided conceptually coherent alignment among the CAT-PD-SF, PID-5, and NEO-PI-3FH scales. Hierarchical solutions suggested that higher-order factors bear strong resemblance to dimensions that emerge from structural models of psychopathology (e.g., Internalizing and Externalizing spectra). These results demonstrate that the CAT-PD-SF adheres to the consensual structure of broad trait domains at the five-factor level. Additionally, patterns of scale loadings further inform questions of structure and bipolarity of facet and domain level constructs. Finally, hierarchical analyses strengthen the argument for using broad dimensions that span normative and pathological functioning to scaffold a quantitatively derived phenotypic structure of psychopathology to orient future research on explanatory, etiological, and maintenance mechanisms. PMID:24588061

  4. YALINA Booster subcritical assembly modeling and analyses

    International Nuclear Information System (INIS)

    Talamo, A.; Gohar, Y.; Aliberti, G.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.; Sadovich, S.

    2010-01-01

    Full text: Accurate simulation models of the YALINA Booster assembly of the Joint Institute for Power and Nuclear Research (JIPNR)-Sosny, Belarus have been developed by Argonne National Laboratory (ANL) of the USA. YALINA-Booster has coupled zones operating with fast and thermal neutron spectra, which requires special attention in the modelling process. Three different uranium enrichments of 90%, 36% or 21% were used in the fast zone and 10% uranium enrichment was used in the thermal zone. Two of the most advanced Monte Carlo computer programs have been utilized for the ANL analyses: MCNP of the Los Alamos National Laboratory and MONK of the British Nuclear Fuel Limited and SERCO Assurance. The developed geometrical models for both computer programs modelled all the details of the YALINA Booster facility as described in the technical specifications defined in the International Atomic Energy Agency (IAEA) report without any geometrical approximation or material homogenization. Material impurities and the measured material densities have been used in the models. The obtained results for the neutron multiplication factors calculated in criticality mode (keff) and in source mode (ksrc) with an external neutron source from the two Monte Carlo programs are very similar. Different external neutron sources have been investigated including californium, deuterium-deuterium (D-D), and deuterium-tritium (D-T) neutron sources. The spatial neutron flux profiles and the neutron spectra in the experimental channels were calculated. In addition, the kinetic parameters were defined including the effective delayed neutron fraction, the prompt neutron lifetime, and the neutron generation time. A new calculation methodology has been developed at ANL to simulate the pulsed neutron source experiments. In this methodology, the MCNP code is used to simulate the detector response from a single pulse of the external neutron source and a C code is used to superimpose the pulse until the
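
The multiplication factor keff mentioned above is the dominant eigenvalue of the fission operator, which a toy fission-matrix power iteration can illustrate. The 2x2 matrix below is invented for illustration; production codes such as MCNP and MONK estimate keff with continuous-energy Monte Carlo, not in this way:

```python
import numpy as np

# Toy "fission matrix": H[i, j] is the expected number of next-generation
# fission neutrons born in region i per fission neutron born in region j.
# The entries are illustrative, not YALINA data.
H = np.array([[0.55, 0.20],
              [0.25, 0.60]])

phi = np.ones(2)                     # initial guess for the fission source
for _ in range(200):
    psi = H @ phi
    k_eff = psi.sum() / phi.sum()    # generation-to-generation neutron ratio
    phi = psi / psi.sum()            # renormalize the source distribution

# k_eff converges to the dominant eigenvalue of H; k_eff < 1 means the
# toy system is subcritical, as a booster assembly is by design.
```

The source-mode quantity ksrc differs in that the neutron balance is taken with the external source present rather than from the self-sustaining eigenmode.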

  5. Global analyses of historical masonry buildings: Equivalent frame vs. 3D solid models

    Science.gov (United States)

    Clementi, Francesco; Mezzapelle, Pardo Antonio; Cocchi, Gianmichele; Lenci, Stefano

    2017-07-01

    The paper analyses the seismic vulnerability of two different masonry buildings. It provides both an advanced 3D modelling with solid elements and an equivalent frame modelling. The global structural behaviour and the dynamic properties of the compound have been evaluated using the Finite Element Modelling (FEM) technique, where the nonlinear behaviour of masonry has been taken into account by proper constitutive assumptions. A sensitivity analysis is done to evaluate the effect of the choice of the structural models.

  6. An examination of the cross-cultural validity of the Identity Capital Model: American and Japanese students compared.

    Science.gov (United States)

    Côté, James E; Mizokami, Shinichi; Roberts, Sharon E; Nakama, Reiko

    2016-01-01

    The Identity Capital Model proposes that forms of personal agency are associated with identity development as part of the transition to adulthood. This model was examined in two cultural contexts, taking into account age and gender, among college and university students aged 18 to 24 (N = 995). Confirmatory Factor Analyses verified cultural, age, and gender invariance of the two key operationalizations of the model. A Structural Equation Model path analysis confirmed that the model applies in both cultures with minor variations: types of personal agency are associated with the formation of adult- and societal-identities as part of the resolution of the identity stage. It was concluded that forms of personal agency providing the most effective ways of dealing with "individualization" (e.g., internal locus of control) are more important in the transition to adulthood among American students, whereas types of personal agency most effective in dealing with "individualistic collectivism" (e.g., ego strength) are more important among Japanese students. Copyright © 2015 The Foundation for Professionals in Services for Adolescents. Published by Elsevier Ltd. All rights reserved.

  7. Vibration tests and analyses of the reactor building model on a small scale

    International Nuclear Information System (INIS)

    Tsuchiya, Hideo; Tanaka, Mitsuru; Ogihara, Yukio; Moriyama, Ken-ichi; Nakayama, Masaaki

    1985-01-01

    The purpose of this paper is to describe the vibration tests and the simulation analyses of the reactor building model on a small scale. The model vibration tests were performed to investigate the vibrational characteristics of the combined super-structure and to verify the computer code based on Dr. H. Tajimi's Thin Layered Element Theory, using the uniaxial shaking table (60 cm x 60 cm). The specimens consist of a ground model, three structural models (prestressed concrete containment vessel, inner concrete structure, and enclosure building), a combined structural model and a combined structure-soil interaction model. These models are made of silicon-rubber, and they have a scale of 1:600. Harmonic step by step excitation of 40 gals was performed to investigate the vibrational characteristics of each structural model. The responses of the specimens to harmonic excitation were measured by optical displacement meters, and analyzed by a real time spectrum analyzer. The resonance and phase lag curves of the specimens relative to the shaking table were obtained. As for the tests of the combined structure-soil interaction model, three predominant frequencies were observed in the resonance curves. These values were in good agreement with the analytical transfer function curves from the computer code. From the vibration tests and the simulation analyses, the silicon-rubber model test is useful for the fundamental study of structural problems. The computer code based on the Thin Layered Element Theory can simulate the test results well. (Kobozono, M.)
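
The resonance curves measured on the shaking table can be illustrated with the steady-state amplification of a single-degree-of-freedom damped oscillator; the natural frequency and damping ratio below are illustrative values, not measurements from the paper:

```python
import math

def amplification(f, fn, zeta):
    """Steady-state displacement amplification of a damped SDOF
    oscillator excited at frequency f (Hz), with natural frequency fn
    and damping ratio zeta."""
    r = f / fn
    return 1.0 / math.sqrt((1.0 - r * r) ** 2 + (2.0 * zeta * r) ** 2)

fn, zeta = 12.0, 0.05          # illustrative natural frequency and damping
freqs = [fn * k / 50 for k in range(1, 101)]       # sweep 0 -> 2*fn
curve = [amplification(f, fn, zeta) for f in freqs]

# The resonance peak sits essentially at the natural frequency, with
# peak height ~ 1/(2*zeta) for light damping.
peak_f = freqs[curve.index(max(curve))]
```

A measured resonance curve with several predominant peaks, as reported for the structure-soil model, is the multi-degree-of-freedom generalization of this single-peak response.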

  8. Modeling hard clinical end-point data in economic analyses.

    Science.gov (United States)

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states. Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data is reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are
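
The "constant event rate" simplification recommended above is typically implemented by converting a rate into a per-cycle transition probability inside a Markov cohort model. A minimal sketch with invented rates and an invented three-state structure:

```python
import math

def rate_to_prob(rate, t=1.0):
    """Convert a constant event rate to a transition probability over a
    cycle of length t -- the usual simplification for rare events."""
    return 1.0 - math.exp(-rate * t)

# Toy 3-state cohort model: event-free -> post-event -> dead.
# Rates are illustrative (event rate well under the 1%/year threshold
# at which the review found time-dependent risks start to be preferred).
p_event = rate_to_prob(0.008)
p_death = rate_to_prob(0.02)

cohort = {"event_free": 1.0, "post_event": 0.0, "dead": 0.0}
for _ in range(10):                       # ten annual Markov cycles
    ef, pe, d = cohort["event_free"], cohort["post_event"], cohort["dead"]
    cohort["event_free"] = ef * (1 - p_event) * (1 - p_death)
    cohort["post_event"] = pe * (1 - p_death) + ef * p_event * (1 - p_death)
    cohort["dead"] = d + (ef + pe) * p_death

total = sum(cohort.values())              # cohort mass is conserved
```

Time-dependent risks or patient-level risk equations replace the two constant probabilities above with functions of cycle number or individual characteristics, at the cost of a more complex model structure.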

  9. Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach

    International Nuclear Information System (INIS)

    2014-12-01

    In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise: · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population, · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included, · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps, · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details, · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and · an overview of the

  11. Applications of one-dimensional models in simplified inelastic analyses

    International Nuclear Information System (INIS)

    Kamal, S.A.; Chern, J.M.; Pai, D.H.

    1980-01-01

    This paper presents an approximate inelastic analysis based on geometric simplification with emphasis on its applicability, modeling, and the method of defining the loading conditions. Two problems are investigated: a one-dimensional axisymmetric model of generalized plane strain thick-walled cylinder is applied to the primary sodium inlet nozzle of the Clinch River Breeder Reactor Intermediate Heat Exchanger (CRBRP-IHX), and a finite cylindrical shell is used to simulate the branch shell forging (Y) junction. The results are then compared with the available detailed inelastic analyses under cyclic loading conditions in terms of creep and fatigue damages and inelastic ratchetting strains per the ASME Code Case N-47 requirements. In both problems, the one-dimensional simulation is able to trace the detailed stress-strain response. The quantitative comparison is good for the nozzle, but less satisfactory for the Y junction. Refinements are suggested to further improve the simulation

  12. Examination of a dual-process model predicting riding with drinking drivers.

    Science.gov (United States)

    Hultgren, Brittney A; Scaglione, Nichole M; Cleveland, Michael J; Turrisi, Rob

    2015-06-01

    Nearly 1 in 5 of the fatalities in alcohol-related crashes are passengers. Few studies have utilized theory to examine modifiable psychosocial predictors of individuals' tendencies to be a passenger in a vehicle operated by a driver who has consumed alcohol. This study used a prospective design to test a dual-process model featuring reasoned and reactive psychological influences and psychosocial constructs as predictors of riding with drinking drivers (RWDD) in a sample of individuals aged 18 to 21. College students (N = 508) completed web-based questionnaires assessing RWDD, psychosocial constructs (attitudes, expectancies, and norms), and reasoned and reactive influences (intentions and willingness) at baseline (the middle of the spring semester) and again 1 and 6 months later. Regression was used to analyze reasoned and reactive influences as proximal predictors of RWDD at the 6-month follow-up. Subsequent analyses examined the relationship between the psychosocial constructs as distal predictors of RWDD and the mediation effects of reasoned and reactive influences. Both reasoned and reactive influences predicted RWDD, while only the reactive influence had a significant unique effect. Reactive influences significantly mediated the effects of peer norms, attitudes, and drinking influences on RWDD. Nearly all effects were constant across gender except parental norms (significant for females). Findings highlight that the important precursors of RWDD were reactive influences, attitudes, and peer and parent norms. These findings suggest several intervention methods, specifically normative feedback interventions, parent-based interventions, and brief motivational interviewing, may be particularly beneficial in reducing RWDD. Copyright © 2015 by the Research Society on Alcoholism.

  13. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    Science.gov (United States)

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
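
The conversion model described above is, at its core, an ordinary least-squares fit between paired ΣPCB sums from the two analytical methods. A sketch on synthetic data (the 93% capture fraction echoes the abstract; every sample value, the noise level and the sample size are invented):

```python
import random

random.seed(0)

# Synthetic paired measurements: a 119-congener sum that captures ~93%
# of the full 209-congener sum, plus additive measurement noise.
sum119 = [random.uniform(5.0, 500.0) for _ in range(40)]
sum209 = [s / 0.93 + random.gauss(0.0, 2.0) for s in sum119]

# Ordinary least-squares slope and intercept: the conversion model.
n = len(sum119)
mx = sum(sum119) / n
my = sum(sum209) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(sum119, sum209))
         / sum((x - mx) ** 2 for x in sum119))
intercept = my - slope * mx

# Convert a new 119-congener result to an estimated full 209-congener total.
estimate = slope * 100.0 + intercept
```

With a calibration like this, monitoring results produced by the older 119-congener method can be expressed on the 209-congener scale, which is what enables long-term trend comparison across method changes.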

  14. Examining the Support Peer Supporters Provide Using Structural Equation Modeling: Nondirective and Directive Support in Diabetes Management.

    Science.gov (United States)

    Kowitt, Sarah D; Ayala, Guadalupe X; Cherrington, Andrea L; Horton, Lucy A; Safford, Monika M; Soto, Sandra; Tang, Tricia S; Fisher, Edwin B

    2017-12-01

    Little research has examined the characteristics of peer support. Pertinent to such examination may be characteristics such as the distinction between nondirective support (accepting recipients' feelings and cooperative with their plans) and directive support (prescribing "correct" choices and feelings). In a peer support program for individuals with diabetes, this study examined (a) whether the distinction between nondirective and directive support was reflected in participants' ratings of support provided by peer supporters and (b) how nondirective and directive support were related to depressive symptoms, diabetes distress, and Hemoglobin A1c (HbA1c). Three hundred fourteen participants with type 2 diabetes provided data on depressive symptoms, diabetes distress, and HbA1c before and after a diabetes management intervention delivered by peer supporters. At post-intervention, participants reported how the support provided by peer supporters was nondirective or directive. Confirmatory factor analysis (CFA), correlation analyses, and structural equation modeling examined the relationships among reports of nondirective and directive support, depressive symptoms, diabetes distress, and measured HbA1c. CFA confirmed the factor structure distinguishing between nondirective and directive support in participants' reports of support delivered by peer supporters. Controlling for demographic factors, baseline clinical values, and site, structural equation models indicated that at post-intervention, participants' reports of nondirective support were significantly associated with lower, while reports of directive support were significantly associated with greater depressive symptoms, altogether (with control variables) accounting for 51% of the variance in depressive symptoms. Peer supporters' nondirective support was associated with lower, but directive support was associated with greater depressive symptoms.

  15. NUMERICAL MODELLING AS NON-DESTRUCTIVE METHOD FOR THE ANALYSES AND DIAGNOSIS OF STONE STRUCTURES: MODELS AND POSSIBILITIES

    Directory of Open Access Journals (Sweden)

    Nataša Štambuk-Cvitanović

    1999-12-01

    Full Text Available Assuming the necessity of analysis, diagnosis and preservation of existing valuable stone masonry structures and ancient monuments in today's European urban cores, numerical modelling becomes an efficient tool for investigating structural behaviour. It should be supported by experimentally obtained input data and taken as part of a general combined approach, particularly alongside non-destructive techniques applied to the structure/model. For structures or details which may require more complex analyses, three numerical models based upon the finite element technique are suggested: (1) a standard linear model; (2) a linear model with contact (interface) elements; and (3) a non-linear elasto-plastic and orthotropic model. The applicability of these models depends upon the accuracy of the approach or the type of problem, and will be presented on some characteristic examples.

  16. SN transport analyses of critical reactor experiments for the SNTP program

    International Nuclear Information System (INIS)

    Mays, C.W.

    1993-01-01

    The capability of S_N methodology to accurately predict the neutronics behavior of a compact, light-water-moderated reactor is examined. This includes examining the effects of cross-section modeling and the choice of spatial and angular representation. The isothermal temperature coefficient in the range of 293 K to 355 K is analyzed, as well as the radial fission density profile across the central fuel element. Measured data from a series of critical experiments are used for these analyses.

  17. Analyses of Lattice Traffic Flow Model on a Gradient Highway

    International Nuclear Information System (INIS)

    Gupta Arvind Kumar; Redhu Poonam; Sharma Sapna

    2014-01-01

    The optimal current difference lattice hydrodynamic model is extended to investigate traffic flow dynamics on a unidirectional single-lane gradient highway. The effect of slope on an uphill/downhill highway is examined through linear stability analysis, and it is shown that the slope significantly affects the stability region in the phase diagram. Using nonlinear stability analysis, the Burgers, Korteweg-de Vries (KdV) and modified Korteweg-de Vries (mKdV) equations are derived in the stable, metastable and unstable regions, respectively. The effect of the reaction coefficient is examined, and it is concluded that it plays an important role in suppressing traffic jams on a gradient highway. The theoretical findings have been verified through numerical simulation, which confirms that the slope on a gradient highway significantly influences the traffic dynamics and that traffic jams can be suppressed efficiently by considering the optimal current difference effect in the new lattice model.

  18. Fiscal sustainability in Malaysia: a re-examination

    OpenAIRE

    Hui, Hon Chung

    2013-01-01

    In this paper, I deploy a broad array of econometric tests to thoroughly examine fiscal sustainability in Malaysia. Results of the multicointegration test suggest the absence of cointegration between the cumulated cointegration errors, real government expenditure and real government revenue. Meanwhile, standard cointegration analyses indicate that the fiscal process fulfills only the weak-form sustainability. Most importantly, results from a fiscal sustainability model which incorporates reve...

  19. Generic uncertainty model for DETRA for environmental consequence analyses. Application and sample outputs

    International Nuclear Information System (INIS)

    Suolanen, V.; Ilvonen, M.

    1998-10-01

    Computer model DETRA applies a dynamic compartment modelling approach. The compartment structure of each considered application can be tailored individually. This flexible modelling method makes it possible to consider the transfer of radionuclides in various cases: the aquatic environment and related food chains, the terrestrial environment, food chains in general and foodstuffs, body burden analyses of humans, etc. In a former study on this subject, modernization of the user interface of the DETRA code was carried out. This new interface works in the Windows environment and the usability of the code has been improved. The objective of this study has been to further develop and diversify the user interface so that probabilistic uncertainty analyses can also be performed with DETRA. The most common probability distributions are available: uniform, truncated Gaussian and triangular. The corresponding logarithmic distributions are also available. All input data related to a considered case can be varied, although this option is seldom needed. The calculated output values can be selected as monitored values at certain simulation time points defined by the user. The results of a sensitivity run are immediately available after simulation as graphical presentations. These outcomes are distributions generated for varied parameters, density functions of monitored parameters and complementary cumulative density functions (CCDF). An application considered in connection with this work was the estimation of contamination of milk caused by radioactive deposition of Cs (10 kBq(Cs-137)/m²). The multi-sequence calculation model applied consisted of a pasture modelling part and a dormant season modelling part. These two sequences were linked periodically, simulating the realistic practice of caretaking of domestic animals in Finland. The most important parameters were varied in this exercise. The performed diversification of the user interface of the DETRA code seems to provide an easily
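    The sample-distributions-then-summarize workflow described above can be sketched generically in Python. This is a minimal illustration, not the DETRA implementation: the parameter names (transfer coefficient, feed fraction, interception) and all numerical values are hypothetical, chosen only to show the three distribution families and the CCDF output.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 10_000

# Sample the three distribution families the interface offers
transfer_coeff = rng.uniform(0.5e-3, 2.0e-3, N)      # uniform
feed_fraction = rng.triangular(0.6, 0.8, 1.0, N)     # triangular

# Truncated Gaussian via rejection sampling
def truncated_normal(mean, sd, lo, hi, size, rng):
    out = np.empty(0)
    while out.size < size:
        draw = rng.normal(mean, sd, size)
        out = np.concatenate([out, draw[(draw >= lo) & (draw <= hi)]])
    return out[:size]

interception = truncated_normal(1.0, 0.3, 0.4, 1.6, N, rng)

# Monitored output: a toy milk-contamination estimate proportional to
# deposition times the varied parameters (purely illustrative model)
deposition = 10_000.0  # Bq/m^2, as in the Cs-137 example
milk = deposition * transfer_coeff * feed_fraction * interception

# Complementary cumulative distribution function (CCDF): P(X > x)
x = np.sort(milk)
ccdf = 1.0 - np.arange(1, N + 1) / N
print(f"median {np.median(milk):.2f}, 95th pct {np.quantile(milk, 0.95):.2f}")
```

The CCDF pairs `(x[i], ccdf[i])` are what would be plotted as the sensitivity-run summary.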

  20. Passive safety injection experiments and analyses (PAHKO)

    International Nuclear Information System (INIS)

    Tuunanen, J.

    1998-01-01

    The PAHKO project involved experiments on the PACTEL facility and computer simulations of selected experiments. The experiments focused on the performance of Passive Safety Injection Systems (PSIS) of Advanced Light Water Reactors (ALWRs) in Small Break Loss-Of-Coolant Accident (SBLOCA) conditions. The PSIS consisted of a Core Make-up Tank (CMT) and two pipelines (Pressure Balancing Line, PBL, and Injection Line, IL). The examined PSIS worked efficiently in SBLOCAs, although the flow through the PSIS stopped temporarily if the break was very small and hot water filled the CMT. The experiments demonstrated the importance of the flow distributor in the CMT in limiting rapid condensation. The project included validation of three thermal-hydraulic computer codes (APROS, CATHARE and RELAP5). The analyses showed that the codes are capable of simulating the overall behaviour of the transients. Detailed analyses of the results showed that some models in the codes still need improvement. In particular, further development of models for thermal stratification, condensation and natural circulation flow with small driving forces would be necessary for accurate simulation of the PSIS phenomena. (orig.)

  1. Modeling a Theory-Based Approach to Examine the Influence of Neurocognitive Impairment on HIV Risk Reduction Behaviors Among Drug Users in Treatment.

    Science.gov (United States)

    Huedo-Medina, Tania B; Shrestha, Roman; Copenhaver, Michael

    2016-08-01

    Although it is well established that people who use drugs (PWUDs) are characterized by significant neurocognitive impairment (NCI), there has been no examination of how NCI may impede one's ability to accrue the expected HIV prevention benefits stemming from an otherwise efficacious intervention. This paper incorporated the theoretical Information-Motivation-Behavioral Skills (IMB) model of health behavior change to examine whether NCI significantly moderates the mediation defined in the original model and thereby influences HIV prevention outcomes. The analysis included 304 HIV-negative opioid-dependent individuals enrolled in community-based methadone maintenance treatment who reported drug- and/or sex-related HIV risk behaviors in the past 6 months. Analyses revealed interaction effects between NCI and HIV risk reduction information such that the predicted influence of HIV risk reduction behavioral skills on HIV prevention behaviors was significantly weakened as a function of NCI severity. The results support the utility of extending the IMB model to examine the influence of neurocognitive impairment on HIV risk reduction outcomes and to inform future interventions targeting high-risk PWUDs.

  2. An IEEE 802.11 EDCA Model with Support for Analysing Networks with Misbehaving Nodes

    Directory of Open Access Journals (Sweden)

    Szott Szymon

    2010-01-01

    Full Text Available We present a novel model of IEEE 802.11 EDCA with support for analysing networks with misbehaving nodes. In particular, we consider backoff misbehaviour. Firstly, we verify the model by extensive simulation analysis and by comparing it to three other IEEE 802.11 models. The results show that our model behaves satisfactorily and outperforms other widely acknowledged models. Secondly, a comparison with simulation results in several scenarios with misbehaving nodes proves that our model performs correctly for these scenarios. The proposed model can, therefore, be considered as an original contribution to the area of EDCA models and backoff misbehaviour.

  3. Micromechanical Failure Analyses for Finite Element Polymer Modeling

    Energy Technology Data Exchange (ETDEWEB)

    CHAMBERS,ROBERT S.; REEDY JR.,EARL DAVID; LO,CHI S.; ADOLF,DOUGLAS B.; GUESS,TOMMY R.

    2000-11-01

    Polymer stresses around sharp corners and in constrained geometries of encapsulated components can generate cracks leading to system failures. Often, analysts use maximum stresses as a qualitative indicator for evaluating the strength of encapsulated component designs. Although this approach has been useful for making relative comparisons when screening prospective design changes, it has not been tied quantitatively to failure. Accurate failure models are needed for analyses to predict whether encapsulated components meet life cycle requirements. With Sandia's recently developed nonlinear viscoelastic polymer models, it has been possible to examine more accurately the local stress-strain distributions in zones of likely failure initiation, looking for physically based failure mechanisms and continuum metrics that correlate with the cohesive failure event. This study has identified significant differences between rubbery and glassy failure mechanisms that suggest reasonable alternatives for cohesive failure criteria and metrics. Rubbery failure seems best characterized by the mechanism of finite extensibility and appears to correlate with maximum strain predictions. Glassy failure, however, seems driven by cavitation and correlates with the maximum hydrostatic tension. Using these metrics, two three-point bending geometries were tested and analyzed under variable loading rates, different temperatures and comparable mesh resolution (i.e., accuracy) to make quantitative failure predictions. The resulting predictions and observations agreed well, suggesting the need for additional research. In a separate, additional study, the asymptotically singular stress state found at the tip of a rigid, square inclusion embedded within a thin, linear elastic disk was determined for uniform cooling. The singular stress field is characterized by a single stress intensity factor K_a and the applicable K_a calibration relationship has been determined for both fully bonded and
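    The two continuum failure metrics identified above, maximum principal strain for the rubbery regime and maximum hydrostatic tension for the glassy regime, reduce to simple tensor operations at each element. A minimal sketch (the tensor values are illustrative, not from the study):

```python
import numpy as np

def hydrostatic_stress(sigma):
    """Hydrostatic (mean) stress of a 3x3 stress tensor: tr(sigma)/3.
    Positive values indicate hydrostatic tension (the glassy metric)."""
    return np.trace(sigma) / 3.0

def max_principal_strain(eps):
    """Largest principal strain of a symmetric 3x3 strain tensor
    (the rubbery, finite-extensibility metric)."""
    return np.linalg.eigvalsh(eps)[-1]  # eigvalsh returns ascending order

# Illustrative element states (MPa for stress, dimensionless strain)
sigma = np.array([[120.0, 10.0, 0.0],
                  [ 10.0, 90.0, 0.0],
                  [  0.0,  0.0, 60.0]])
eps = np.array([[0.02,  0.005, 0.0],
                [0.005, 0.01,  0.0],
                [0.0,   0.0,  -0.005]])

print(hydrostatic_stress(sigma))   # tr/3 = (120+90+60)/3 = 90.0
print(max_principal_strain(eps))
```

In a finite element post-processing loop, each metric would be evaluated per element and its maximum over the zone of likely failure initiation compared against a critical value.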

  4. Analysing, Interpreting, and Testing the Invariance of the Actor-Partner Interdependence Model

    Directory of Open Access Journals (Sweden)

    Gareau, Alexandre

    2016-09-01

    Full Text Available Although in recent years researchers have begun to utilize dyadic data analyses such as the actor-partner interdependence model (APIM), certain limitations to the applicability of these models still exist. Given the complexity of APIMs, most researchers will often use observed scores to estimate the model's parameters, which can significantly limit and underestimate statistical results. The aim of this article is to highlight the importance of conducting a confirmatory factor analysis (CFA) of equivalent constructs between dyad members (i.e., measurement equivalence/invariance; ME/I). Different steps for merging CFA and APIM procedures will be detailed in order to shed light on new and integrative methods.

  5. Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts.

    Energy Technology Data Exchange (ETDEWEB)

    Sevougian, S. David; Freeze, Geoffrey A.; Gardner, William Payton; Hammond, Glenn Edward; Mariner, Paul

    2014-09-01

    directly, rather than through simplified abstractions. It also allows for complex representations of the source term, e.g., the explicit representation of many individual waste packages (i.e., meter-scale detail of an entire waste emplacement drift). This report fulfills the Generic Disposal System Analysis Work Package Level 3 Milestone - Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts (M3FT-14SN0808032).

  6. Process of Integrating Screening and Detailed Risk-based Modeling Analyses to Ensure Consistent and Scientifically Defensible Results

    International Nuclear Information System (INIS)

    Buck, John W.; McDonald, John P.; Taira, Randal Y.

    2002-01-01

    To support cleanup and closure of these tanks, modeling is performed to understand and predict potential impacts to human health and the environment. Pacific Northwest National Laboratory developed a screening tool for the United States Department of Energy, Office of River Protection that estimates the long-term human health risk, from a strategic planning perspective, posed by potential tank releases to the environment. This tool is being conditioned to more detailed model analyses to ensure consistency between studies and to provide scientific defensibility. Once the conditioning is complete, the system will be used to screen alternative cleanup and closure strategies. The integration of screening and detailed models provides consistent analyses, efficiencies in resources, and positive feedback between the various modeling groups. This approach of conditioning a screening methodology to more detailed analyses provides decision-makers with timely and defensible information and increases confidence in the results on the part of clients, regulators, and stakeholders

  7. Developing a Customer Relationship Management Model for Better Health Examination Service

    Directory of Open Access Journals (Sweden)

    Lyu Jr-Jung

    2014-11-01

    Full Text Available People emphasize their own health and wish to know more about their conditions. Chronic diseases now account for up to 50 percent of the top 10 causes of death. As a result, the health-care industry has emerged and kept thriving. This work adopts a customer-oriented business model, since most clients are proactive and spontaneous in taking the "distinguished" health examination programs. We adopt the soft system dynamics methodology (SSDM) to develop and evaluate the steps of introducing a customer relationship management model into a case health examination organization. Quantitative results are also presented for the case physical examination center to assess the improved efficiency. The case study shows that the procedures developed here can provide a better service.

  8. A Web-based Tool Combining Different Type Analyses

    DEFF Research Database (Denmark)

    Henriksen, Kim Steen; Gallagher, John Patrick

    2006-01-01

    of both, and they can be goal-dependent or goal-independent. We describe a prototype tool that can be accessed from a web browser, allowing various type analyses to be run. The first goal of the tool is to allow the analysis results to be examined conveniently by clicking on points in the original program...... the minimal "domain model" of the program with respect to the corresponding pre-interpretation, which can give more precise information than the original descriptive type....

  9. Estimators for longitudinal latent exposure models: examining measurement model assumptions.

    Science.gov (United States)

    Sánchez, Brisa N; Kim, Sehee; Sammel, Mary D

    2017-06-15

    Latent variable (LV) models are increasingly being used in environmental epidemiology as a way to summarize multiple environmental exposures and thus minimize statistical concerns that arise in multiple regression. LV models may be especially useful when multivariate exposures are collected repeatedly over time. LV models can accommodate a variety of assumptions but, at the same time, present the user with many choices for model specification particularly in the case of exposure data collected repeatedly over time. For instance, the user could assume conditional independence of observed exposure biomarkers given the latent exposure and, in the case of longitudinal latent exposure variables, time invariance of the measurement model. Choosing which assumptions to relax is not always straightforward. We were motivated by a study of prenatal lead exposure and mental development, where assumptions of the measurement model for the time-changing longitudinal exposure have appreciable impact on (maximum-likelihood) inferences about the health effects of lead exposure. Although we were not particularly interested in characterizing the change of the LV itself, imposing a longitudinal LV structure on the repeated multivariate exposure measures could result in high efficiency gains for the exposure-disease association. We examine the biases of maximum likelihood estimators when assumptions about the measurement model for the longitudinal latent exposure variable are violated. We adapt existing instrumental variable estimators to the case of longitudinal exposures and propose them as an alternative to estimate the health effects of a time-changing latent predictor. We show that instrumental variable estimators remain unbiased for a wide range of data generating models and have advantages in terms of mean squared error. Copyright © 2017 John Wiley & Sons, Ltd.
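    The key property claimed for instrumental variable estimators, unbiasedness when the naive estimator is contaminated by shared error, can be demonstrated with a minimal simulation. This is a generic sketch, not the authors' longitudinal estimator: the data-generating model, the single instrument, and all coefficients are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
beta_true = 2.0  # true effect of the latent exposure on the outcome

u = rng.normal(size=n)                 # shared error / confounding component
z = rng.normal(size=n)                 # instrument (assumed valid and strong)
x = z + u + 0.5 * rng.normal(size=n)   # error-prone observed exposure
y = beta_true * x + 3.0 * u + rng.normal(size=n)  # outcome contaminated by u

# Naive OLS slope is biased because x and the outcome error share u
beta_ols = np.cov(x, y)[0, 1] / np.var(x)

# IV (Wald) estimator cov(z, y) / cov(z, x) is consistent
beta_iv = np.cov(z, y)[0, 1] / np.cov(z, x)[0, 1]

print(f"OLS: {beta_ols:.2f}, IV: {beta_iv:.2f}, truth: {beta_true}")
```

In the longitudinal measurement-model setting, repeated biomarkers at other time points can play the role of the instrument `z`; that adaptation is the substance of the paper.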

  10. RELAP5 thermal-hydraulic analyses of overcooling sequences in a pressurized water reactor

    International Nuclear Information System (INIS)

    Bolander, M.A.; Fletcher, C.D.; Davis, C.B.; Kullberg, C.M.; Stitt, B.D.; Waterman, M.E.; Burtt, J.D.

    1984-01-01

    In support of the Pressurized Thermal Shock Integration Study, sponsored by the United States Nuclear Regulatory Commission, the Idaho National Engineering Laboratory has performed analyses of overcooling transients using the RELAP5/MOD1.6 and MOD2.0 computer codes. These analyses were performed for the H.B. Robinson Unit 2 pressurized water reactor, which is a Westinghouse 3-loop design plant. Results of the RELAP5 analyses are presented. The capabilities of the RELAP5 computer code as a tool for analyzing integral plant transients requiring a detailed plant model, including complex trip logic and major control systems, are examined

  11. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    NARCIS (Netherlands)

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input
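    The traditional IO baseline that an NLIO model generalizes is the linear Leontief system, where total output x satisfies x = (I − A)⁻¹ f for technical-coefficient matrix A and final demand f. A toy two-sector sketch (all coefficient and demand values are hypothetical):

```python
import numpy as np

# Technical coefficients A[i, j]: input from sector i per unit output of
# sector j (illustrative two-sector economy: tourism and "other")
A = np.array([[0.10, 0.20],
              [0.30, 0.25]])

f = np.array([100.0, 50.0])  # final-demand shock, e.g. extra visitor spending

# Leontief solution: total output required to satisfy final demand
L = np.linalg.inv(np.eye(2) - A)  # Leontief inverse (I - A)^-1
x = L @ f

# Output multipliers: column sums of the Leontief inverse
multipliers = L.sum(axis=0)
print(x, multipliers)
```

An NLIO model replaces the fixed coefficients in `A` with price-responsive input demands, which is precisely the extension the abstract refers to.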

  12. Comparison of optical-model and Lane-model analyses of sub-Coulomb protons on /sup 92,94/Zr

    International Nuclear Information System (INIS)

    Schrils, R.; Flynn, D.S.; Hershberger, R.L.; Gabbard, F.

    1979-01-01

    Accurate proton elastic-scattering cross sections were measured with enriched targets of 92,94Zr from E_p = 2.0 to 6.5 MeV. The elastic-scattering cross sections, together with absorption cross sections, were analyzed with a Lane model which employed the optical potential of Johnson et al. The resulting parameters were compared with those obtained with a single-channel optical model and negligible differences were found. Significant differences between the 92Zr and 94Zr real diffusenesses resulted from the inclusion of the (p,p) data in the analyses.

  13. Inverse analyses of effective diffusion parameters relevant for a two-phase moisture model of cementitious materials

    DEFF Research Database (Denmark)

    Addassi, Mouadh; Johannesson, Björn; Wadsö, Lars

    2018-01-01

    Here we present an inverse analysis approach to determining the two-phase moisture transport properties relevant to concrete durability modeling. The proposed moisture transport model was based on a continuum approach with two truly separate equations for the liquid and gas phase being connected...... test, and, (iv) capillary suction test. Mass change over time, as obtained from the drying test, the two different cup test intervals and the capillary suction test, was used to obtain the effective diffusion parameters using the proposed inverse analysis approach. The moisture properties obtained

  14. Evaluation of Temperature and Humidity Profiles of Unified Model and ECMWF Analyses Using GRUAN Radiosonde Observations

    Directory of Open Access Journals (Sweden)

    Young-Chan Noh

    2016-07-01

    Full Text Available Temperature and water vapor profiles from the Korea Meteorological Administration (KMA and the United Kingdom Met Office (UKMO Unified Model (UM data assimilation systems and from reanalysis fields from the European Centre for Medium-Range Weather Forecasts (ECMWF were assessed using collocated radiosonde observations from the Global Climate Observing System (GCOS Reference Upper-Air Network (GRUAN for January–December 2012. The motivation was to examine the overall performance of data assimilation outputs. The difference statistics of the collocated model outputs versus the radiosonde observations indicated a good agreement for the temperature, amongst datasets, while less agreement was found for the relative humidity. A comparison of the UM outputs from the UKMO and KMA revealed that they are similar to each other. The introduction of the new version of UM into the KMA in May 2012 resulted in an improved analysis performance, particularly for the moisture field. On the other hand, ECMWF reanalysis data showed slightly reduced performance for relative humidity compared with the UM, with a significant humid bias in the upper troposphere. ECMWF reanalysis temperature fields showed nearly the same performance as the two UM analyses. The root mean square differences (RMSDs of the relative humidity for the three models were larger for more humid conditions, suggesting that humidity forecasts are less reliable under these conditions.
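    The RMSD comparison described above reduces to a simple computation over collocated model-observation pairs, optionally stratified by observed humidity. A minimal sketch with hypothetical values (not GRUAN data):

```python
import numpy as np

def rmsd(model, obs):
    """Root-mean-square difference between collocated model and sonde values."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((model - obs) ** 2)))

# Illustrative collocated relative-humidity pairs (%)
obs   = [82.0, 75.0, 60.0, 41.0, 30.0]
model = [85.0, 71.0, 63.0, 45.0, 28.0]

overall = rmsd(model, obs)

# Stratify by observed humidity, mimicking the humid-conditions comparison
humid_pairs = [(m, o) for m, o in zip(model, obs) if o >= 60.0]
humid = rmsd([m for m, _ in humid_pairs], [o for _, o in humid_pairs])
print(f"overall RMSD {overall:.2f}%, humid-only RMSD {humid:.2f}%")
```

The study's finding that RMSDs grow under more humid conditions corresponds to `humid` exceeding the RMSD of the complementary dry subset.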

  15. Nurses' intention to leave: critically analyse the theory of reasoned action and organizational commitment model.

    Science.gov (United States)

    Liou, Shwu-Ru

    2009-01-01

    To systematically analyse the Organizational Commitment model and Theory of Reasoned Action and determine concepts that can better explain nurses' intention to leave their job. The Organizational Commitment model and Theory of Reasoned Action have been proposed and applied to understand intention to leave and turnover behaviour, which are major contributors to nursing shortage. However, the appropriateness of applying these two models in nursing was not analysed. Three main criteria of a useful model were used for the analysis: consistency in the use of concepts, testability and predictability. Both theories use concepts consistently. Concepts in the Theory of Reasoned Action are defined broadly whereas they are operationally defined in the Organizational Commitment model. Predictability of the Theory of Reasoned Action is questionable whereas the Organizational Commitment model can be applied to predict intention to leave. A model was proposed based on this analysis. Organizational commitment, intention to leave, work experiences, job characteristics and personal characteristics can be concepts for predicting nurses' intention to leave. Nursing managers may consider nurses' personal characteristics and experiences to increase their organizational commitment and enhance their intention to stay. Empirical studies are needed to test and cross-validate the re-synthesized model for nurses' intention to leave their job.

  16. Complementary modelling approaches for analysing several effects of privatization on electricity investment

    Energy Technology Data Exchange (ETDEWEB)

    Bunn, D.W.; Vlahos, K. [London Business School (United Kingdom); Larsen, E.R. [Bologna Univ. (Italy)

    1997-11-01

    This chapter examines two modelling approaches, optimisation and system dynamics, for describing the effects of the privatisation of the UK electricity supply industry. Modelling the transfer-of-ownership effects is discussed and the implications of the rate of return, tax and debt are considered. The modelling of competitive effects is addressed, and the effects of market structure, risk and uncertainty, and strategic competition are explored in detail. (UK)

  17. Can trial sequential monitoring boundaries reduce spurious inferences from meta-analyses?

    DEFF Research Database (Denmark)

    Thorlund, Kristian; Devereaux, P J; Wetterslev, Jørn

    2008-01-01

    BACKGROUND: Results from apparently conclusive meta-analyses may be false. A limited number of events from a few small trials and the associated random error may be under-recognized sources of spurious findings. The information size (IS, i.e. number of participants) required for a reliable......-analyses after each included trial and evaluated their results using a conventional statistical criterion (alpha = 0.05) and two-sided Lan-DeMets monitoring boundaries. We examined the proportion of false positive results and important inaccuracies in estimates of treatment effects that resulted from the two...... approaches. RESULTS: Using the random-effects model and final data, 12 of the meta-analyses yielded P > alpha = 0.05, and 21 yielded P < alpha = 0.05. The monitoring boundaries eliminated all false positives. Important inaccuracies in estimates were observed in 6 out of 21 meta-analyses using the conventional...
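    Lan-DeMets monitoring boundaries work by "spending" the overall alpha gradually as the accrued information approaches the required information size. A common O'Brien-Fleming-type spending function is α*(t) = 2(1 − Φ(z₁₋α/₂ / √t)) for information fraction t. The sketch below is a generic illustration of that function, not the exact boundaries computed in the study:

```python
from math import sqrt
from statistics import NormalDist

ALPHA = 0.05  # overall two-sided significance level

def obrien_fleming_spend(t, alpha=ALPHA):
    """Cumulative two-sided alpha spent at information fraction t (0 < t <= 1),
    Lan-DeMets approximation to O'Brien-Fleming boundaries."""
    z = NormalDist().inv_cdf(1 - alpha / 2)
    return 2 * (1 - NormalDist().cdf(z / sqrt(t)))

# Alpha spent after 25%, 50%, 75%, 100% of the required information size
for t in (0.25, 0.5, 0.75, 1.0):
    print(f"t={t:.2f}: alpha spent = {obrien_fleming_spend(t):.5f}")
```

Because almost no alpha is spent at small t, an interim meta-analysis based on few events must cross a far stricter threshold than 0.05, which is how the boundaries eliminate early false positives.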

  18. Modular 3-D solid finite element model for fatigue analyses of a PWR coolant system

    International Nuclear Information System (INIS)

    Garrido, Oriol Costa; Cizelj, Leon; Simonovski, Igor

    2012-01-01

    Highlights: ► A 3-D model of a reactor coolant system for fatigue usage assessment. ► The performed simulations are heat transfer and stress analyses. ► The main results are the expected ranges of fatigue loadings. - Abstract: The extension of operational licenses of second-generation pressurized water reactor (PWR) nuclear power plants depends to a large extent on the analyses of fatigue usage of the reactor coolant pressure boundary. The reliable estimation of fatigue usage requires detailed thermal and stress analyses of the affected components. Analyses based upon the in-service transient loads should be compared to the loads analyzed at the design stage. The thermal and stress transients can be efficiently analyzed using the finite element method. This requires that a 3-D solid model of a given system be discretized with finite elements (FE). The FE mesh density is crucial for both the accuracy and the cost of the analysis. The main goal of the paper is to propose a set of computational tools which assist a user in the deployment of a modular spatial FE model of the main components of a typical reactor coolant system, e.g., pipes, pressure vessels and pumps. The modularity ensures that the components can be analyzed individually or as a system. Also, individual components can be meshed with different mesh densities, as required by the specifics of the particular transient studied. For optimal accuracy, all components are meshed with hexahedral elements with quadratic interpolation. The performance of the model is demonstrated with simulations performed with a complete two-loop PWR coolant system (RCS). Heat transfer and stress analyses for a complete loading and unloading cycle of the RCS are performed. The main results include expected ranges of fatigue loading for the pipe lines and coolant pump components under the given conditions.

  19. A theoretical model for analysing gender bias in medicine.

    Science.gov (United States)

    Risberg, Gunilla; Johansson, Eva E; Hamberg, Katarina

    2009-08-03

    During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model, gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change by facts only. We suggest consciousness-raising activities and continuous reflections on gender attitudes among students, teachers, researchers and decision-makers.

  20. Growth Modeling with Non-Ignorable Dropout: Alternative Analyses of the STAR*D Antidepressant Trial

    Science.gov (United States)

    Muthén, Bengt; Asparouhov, Tihomir; Hunter, Aimee; Leuchter, Andrew

    2011-01-01

    This paper uses a general latent variable framework to study a series of models for non-ignorable missingness due to dropout. Non-ignorable missing data modeling acknowledges that missingness may depend on not only covariates and observed outcomes at previous time points as with the standard missing at random (MAR) assumption, but also on latent variables such as values that would have been observed (missing outcomes), developmental trends (growth factors), and qualitatively different types of development (latent trajectory classes). These alternative predictors of missing data can be explored in a general latent variable framework using the Mplus program. A flexible new model uses an extended pattern-mixture approach where missingness is a function of latent dropout classes in combination with growth mixture modeling using latent trajectory classes. A new selection model allows not only an influence of the outcomes on missingness, but allows this influence to vary across latent trajectory classes. Recommendations are given for choosing models. The missing data models are applied to longitudinal data from STAR*D, the largest antidepressant clinical trial in the U.S. to date. Despite the importance of this trial, STAR*D growth model analyses using non-ignorable missing data techniques have not been explored until now. The STAR*D data are shown to feature distinct trajectory classes, including a low class corresponding to substantial improvement in depression, a minority class with a U-shaped curve corresponding to transient improvement, and a high class corresponding to no improvement. The analyses provide a new way to assess drug efficacy in the presence of dropout. PMID:21381817

  1. Genetic analyses using GGE model and a mixed linear model approach, and stability analyses using AMMI bi-plot for late-maturity alpha-amylase activity in bread wheat genotypes.

    Science.gov (United States)

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Fofana, Bourlaye

    2017-06-01

    Low falling number and discounting of grain when it is downgraded in class are consequences of excessive late-maturity α-amylase activity (LMAA) in bread wheat (Triticum aestivum L.). Grain expressing high LMAA produces poorer quality bread products. To effectively breed for low LMAA, it is necessary to understand what genes control it and how they are expressed, particularly when genotypes are grown in different environments. In this study, an International Collection (IC) of 18 spring wheat genotypes and another set of 15 spring wheat cultivars adapted to South Dakota (SD), USA were assessed to characterize the genetic component of LMAA over 5 and 13 environments, respectively. The data were analysed using a GGE model with a mixed linear model approach, and stability analysis was presented using an AMMI bi-plot in R. All estimated variance components and their proportions to the total phenotypic variance were highly significant for both sets of genotypes, which was validated by the AMMI model analysis. Broad-sense heritability for LMAA was higher in SD-adapted cultivars (53%) than in the IC (49%). Significant genetic effects and stability analyses showed that some genotypes, e.g. 'Lancer', 'Chester' and 'LoSprout' from the IC, and 'Alsen', 'Traverse' and 'Forefront' from the SD cultivars, could be used as parents to develop new cultivars expressing low levels of LMAA. Stability analysis using an AMMI bi-plot revealed that 'Chester', 'Lancer' and 'Advance' were the most stable across environments, while in contrast, 'Kinsman', 'Lerma52' and 'Traverse' exhibited the lowest stability for LMAA across environments.
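
    The AMMI stability analysis described above (additive main effects plus a singular value decomposition of the genotype-by-environment interaction) can be sketched in a few lines. The matrix below is hypothetical, not the study's LMAA data, and numpy is assumed:

```python
import numpy as np

# Hypothetical genotype x environment trait matrix (3 genotypes, 4 environments).
Y = np.array([
    [5.0, 6.0, 4.0, 5.0],
    [7.0, 8.0, 6.0, 7.0],
    [4.0, 9.0, 3.0, 8.0],   # unstable genotype: strong G x E interaction
])

grand = Y.mean()
g_eff = Y.mean(axis=1) - grand          # genotype main effects
e_eff = Y.mean(axis=0) - grand          # environment main effects

# Interaction residuals after removing the additive (main) effects.
resid = Y - grand - g_eff[:, None] - e_eff[None, :]

# AMMI: singular value decomposition of the interaction matrix.
U, s, Vt = np.linalg.svd(resid, full_matrices=False)

# IPCA1 genotype scores; stability ~ distance from the bi-plot origin.
ipca1 = U[:, 0] * np.sqrt(s[0])
stability = np.abs(ipca1)
print(stability)  # the third genotype scores as least stable
```

In a real analysis the IPCA scores for genotypes and environments would be plotted together, exactly as in the AMMI bi-plots the abstract refers to.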

  2. Examination of species boundaries in the Acropora cervicornis group (Scleractinia, cnidaria) using nuclear DNA sequence analyses.

    Science.gov (United States)

    Oppen, M J; Willis, B L; Vugt, H W; Miller, D J

    2000-09-01

    Although Acropora is the most species-rich genus of the scleractinian (stony) corals, only three species occur in the Caribbean: A. cervicornis, A. palmata and A. prolifera. Based on overall coral morphology, abundance and distribution patterns, it has been suggested that A. prolifera may be a hybrid between A. cervicornis and A. palmata. The species boundaries among these three morphospecies were examined using DNA sequence analyses of the nuclear Pax-C 46/47 intron and the ribosomal DNA Internal Transcribed Spacer (ITS1 and ITS2) and 5.8S regions. Moderate levels of sequence variability were observed in the ITS and 5.8S sequences (up to 5.2% overall sequence difference), but variability within species was as large as between species and all three species carried similar sequences. Since this is unlikely to represent a shared ancestral polymorphism, the data suggest that introgressive hybridization occurs among the three species. For the Pax-C intron, A. cervicornis and A. palmata had very distinct allele frequencies and A. cervicornis carried a unique allele at a frequency of 0.769 (although sequence differences between alleles were small). All A. prolifera colonies examined were heterozygous for the Pax-C intron, whereas heterozygosity was only 0.286 and 0.333 for A. cervicornis and A. palmata, respectively. These data support the hypothesis that A. prolifera is the product of hybridization between two species that have a different allelic composition for the Pax-C intron, i.e. A. cervicornis and A. palmata. We therefore suggest that A. prolifera is a hybrid between A. cervicornis and A. palmata, which backcrosses with the parental species at low frequency.

  3. Examining a model of life satisfaction among unemployed adults.

    Science.gov (United States)

    Duffy, Ryan D; Bott, Elizabeth M; Allan, Blake A; Torrey, Carrie L

    2013-01-01

    The present study examined a model of life satisfaction among a diverse sample of 184 adults who had been unemployed for an average of 10.60 months. Using the Lent (2004) model of life satisfaction as a framework, a model was tested with 5 hypothesized predictor variables: optimism, job search self-efficacy, job search support, job search behaviors, and work volition. After adding a path in the model from optimism to work volition, the hypothesized model was found to be a good fit for the data and a better fit than a more parsimonious, alternative model. In the hypothesized model, optimism, work volition, job search self-efficacy, and job search support were each found to significantly relate to life satisfaction, accounting for 35% of the variance. Additionally, using 50,000 bootstrapped samples, optimism was found to have a significant indirect effect on life satisfaction as mediated by job search self-efficacy, job search support, and work volition. Implications for research and practice are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.
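
    The bootstrapped indirect effect reported above (an a*b mediated path) can be illustrated on simulated data. All variable names and coefficients below are invented for illustration, and 1,000 resamples stand in for the paper's 50,000:

```python
import random
import statistics

random.seed(7)

# Hypothetical data: optimism (X) -> job search self-efficacy (M) -> life satisfaction (Y).
n = 200
X = [random.gauss(0, 1) for _ in range(n)]
M = [0.5 * x + random.gauss(0, 1) for x in X]                        # a-path ~ 0.5
Y = [0.4 * m + 0.2 * x + random.gauss(0, 1) for x, m in zip(X, M)]   # b-path ~ 0.4

def indirect(xs, ms, ys):
    """a*b estimate: a from M~X, b from Y~M+X (closed-form two-predictor OLS)."""
    mx, mm, my = statistics.fmean(xs), statistics.fmean(ms), statistics.fmean(ys)
    sxx = sum((x - mx) ** 2 for x in xs)
    smm = sum((m - mm) ** 2 for m in ms)
    sxm = sum((x - mx) * (m - mm) for x, m in zip(xs, ms))
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    smy = sum((m - mm) * (y - my) for m, y in zip(ms, ys))
    a = sxm / sxx
    b = (smy * sxx - sxm * sxy) / (smm * sxx - sxm ** 2)
    return a * b

boot = []
idx = list(range(n))
for _ in range(1000):
    s = [random.choice(idx) for _ in idx]
    boot.append(indirect([X[i] for i in s], [M[i] for i in s], [Y[i] for i in s]))
boot.sort()
ci = (boot[24], boot[974])   # percentile 95% CI
print(round(indirect(X, M, Y), 3), [round(c, 3) for c in ci])
```

A percentile interval excluding zero is the usual bootstrap criterion for a significant indirect effect.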

  4. Diabetes, vestibular dysfunction, and falls: analyses from the National Health and Nutrition Examination Survey.

    Science.gov (United States)

    Agrawal, Yuri; Carey, John P; Della Santina, Charles C; Schubert, Michael C; Minor, Lloyd B

    2010-12-01

    Patients with diabetes are at increased risk both for falls and for vestibular dysfunction, a known risk factor for falls. Our aims were 1) to further characterize the vestibular dysfunction present in patients with diabetes and 2) to evaluate for an independent effect of vestibular dysfunction on fall risk among patients with diabetes. National cross-sectional survey. Ambulatory examination centers. Adults from the United States aged 40 years and older who participated in the 2001-2004 National Health and Nutrition Examination Survey (n = 5,86). Diagnosis of diabetes, peripheral neuropathy, and retinopathy. Vestibular function measured by the modified Romberg Test of Standing Balance on Firm and Compliant Support Surfaces and history of falling in the previous 12 months. We observed a higher prevalence of vestibular dysfunction in patients with diabetes with longer duration of disease, greater serum hemoglobin A1c levels and other diabetes-related complications, suggestive of a dose-response relationship between diabetes mellitus severity and vestibular dysfunction. We also noted that vestibular dysfunction independently increased the odds of falling more than 2-fold among patients with diabetes (odds ratio, 2.3; 95% confidence interval, 1.1-5.1), even after adjusting for peripheral neuropathy and retinopathy. Moreover, we found that including vestibular dysfunction, peripheral neuropathy, and retinopathy in multivariate models eliminated the significant association between diabetes and fall risk. Vestibular dysfunction may represent a newly recognized diabetes-related complication, which acts as a mediator of the effect of diabetes mellitus on fall risk.
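
    As an illustration of the odds-ratio-with-confidence-interval reporting above, here is a minimal unadjusted computation from a hypothetical 2x2 table (invented counts, not the NHANES data; the published estimate came from multivariate logistic models):

```python
import math

# Hypothetical 2x2 table: rows = vestibular dysfunction yes/no,
# columns = fell in the past 12 months yes/no.
a, b = 40, 160   # dysfunction: 40 fallers, 160 non-fallers
c, d = 20, 180   # no dysfunction: 20 fallers, 180 non-fallers

odds_ratio = (a * d) / (b * c)

# Wald 95% CI computed on the log-odds scale.
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)
print(round(odds_ratio, 2), (round(lo, 2), round(hi, 2)))  # → 2.25 (1.26, 4.01)
```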

  5. An LP-model to analyse economic and ecological sustainability on Dutch dairy farms: model presentation and application for experimental farm "de Marke"

    NARCIS (Netherlands)

    Calker, van K.J.; Berentsen, P.B.M.; Boer, de I.J.M.; Giesen, G.W.J.; Huirne, R.B.M.

    2004-01-01

    Farm level modelling can be used to determine how farm management adjustments and environmental policy affect different sustainability indicators. In this paper indicators were included in a dairy farm LP (linear programming)-model to analyse the effects of environmental policy and management

  6. Teacher stress and health; examination of a model.

    Science.gov (United States)

    DeFrank, R S; Stroup, C A

    1989-01-01

    Stress in teaching derives from a variety of sources, and evidence exists linking such stress to physical and mental health concerns. Detailed examination of the linkages among personal factors, job stress, job satisfaction and symptomatology have not been done in this occupation, however, and the present study examines a model interrelating these variables. A survey of 245 predominantly female elementary school teachers in southeast Texas suggested that demographic factors and teaching background do not influence stress, satisfaction or health concerns. However, while job stress was the strongest predictor of job satisfaction, this stress had no direct relationship with health problems, an unexpected finding. Write-in responses by teachers indicated additional sources of stress, many of which were environmental or policy-based in nature. The implications of these findings for future research and stress management interventions for teachers are discussed.

  7. [Computer optical topography: a study of the repeatability of the results of human body model examination].

    Science.gov (United States)

    Sarnadskiĭ, V N

    2007-01-01

    The problem of repeatability of the results of examination of a plastic human body model is considered. The model was examined in 7 positions using an optical topograph for kyphosis diagnosis. The examination was performed under television camera monitoring. It was shown that variation of the model position in the camera view affected the repeatability of the results of topographic examination, especially if the model-to-camera distance was changed. A study of the repeatability of the results of optical topographic examination can help to increase the reliability of the topographic method, which is widely used for medical screening of children and adolescents.

  8. Kinetic analyses and mathematical modeling of primary photochemical and photoelectrochemical processes in plant photosystems

    NARCIS (Netherlands)

    Vredenberg, W.J.

    2011-01-01

    In this paper the model and simulation of primary photochemical and photo-electrochemical reactions in dark-adapted intact plant leaves is presented. A descriptive algorithm has been derived from analyses of variable chlorophyll a fluorescence and P700 oxidation kinetics upon excitation with

  9. Biosphere Modeling and Analyses in Support of Total System Performance Assessment

    International Nuclear Information System (INIS)

    Tappen, J. J.; Wasiolek, M. A.; Wu, D. W.; Schmitt, J. F.; Smith, A. J.

    2002-01-01

    The Nuclear Waste Policy Act of 1982 established the obligations of and the relationship between the U.S. Environmental Protection Agency (EPA), the U.S. Nuclear Regulatory Commission (NRC), and the U.S. Department of Energy (DOE) for the management and disposal of high-level radioactive wastes. In 1985, the EPA promulgated regulations that included a definition of performance assessment that did not consider potential dose to a member of the general public. This definition would influence the scope of activities conducted by DOE in support of the total system performance assessment program until 1995. The release of a National Academy of Sciences (NAS) report on the technical basis for a Yucca Mountain-specific standard provided the impetus for the DOE to initiate activities that would consider the attributes of the biosphere, i.e., that portion of the earth where living things, including man, exist and interact with the environment around them. The evolution of NRC and EPA Yucca Mountain-specific regulations, originally proposed in 1999, was critical to the development and integration of biosphere modeling and analyses into the total system performance assessment program. These proposed regulations initially differed in the conceptual representation of the receptor of interest to be considered in assessing performance. The publication in 2001 of final regulations, in which the NRC adopted the standard, will permit the continued improvement and refinement of biosphere modeling and analysis activities in support of assessment activities.

  10. Biosphere Modeling and Analyses in Support of Total System Performance Assessment

    International Nuclear Information System (INIS)

    Jeff Tappen; M.A. Wasiolek; D.W. Wu; J.F. Schmitt

    2001-01-01

    The Nuclear Waste Policy Act of 1982 established the obligations of and the relationship between the U.S. Environmental Protection Agency (EPA), the U.S. Nuclear Regulatory Commission (NRC), and the U.S. Department of Energy (DOE) for the management and disposal of high-level radioactive wastes. In 1985, the EPA promulgated regulations that included a definition of performance assessment that did not consider potential dose to a member of the general public. This definition would influence the scope of activities conducted by DOE in support of the total system performance assessment program until 1995. The release of a National Academy of Sciences (NAS) report on the technical basis for a Yucca Mountain-specific standard provided the impetus for the DOE to initiate activities that would consider the attributes of the biosphere, i.e., that portion of the earth where living things, including man, exist and interact with the environment around them. The evolution of NRC and EPA Yucca Mountain-specific regulations, originally proposed in 1999, was critical to the development and integration of biosphere modeling and analyses into the total system performance assessment program. These proposed regulations initially differed in the conceptual representation of the receptor of interest to be considered in assessing performance. The publication in 2001 of final regulations, in which the NRC adopted the standard, will permit the continued improvement and refinement of biosphere modeling and analysis activities in support of assessment activities.

  11. Towards an Industrial Application of Statistical Uncertainty Analysis Methods to Multi-physical Modelling and Safety Analyses

    International Nuclear Information System (INIS)

    Zhang, Jinzhao; Segurado, Jacobo; Schneidesch, Christophe

    2013-01-01

    Since the 1980s, Tractebel Engineering (TE) has been developing and applying a multi-physical modelling and safety analysis capability, based on a code package consisting of the best-estimate 3D neutronics (PANTHER), system thermal-hydraulics (RELAP5), core sub-channel thermal-hydraulics (COBRA-3C), and fuel thermal-mechanics (FRAPCON/FRAPTRAN) codes. A series of methodologies have been developed to perform and to license reactor safety analyses and core reload designs, based on the deterministic bounding approach. Following recent trends in research and development as well as in industrial applications, TE has been working since 2010 towards the application of statistical sensitivity and uncertainty analysis methods to multi-physical modelling and licensing safety analyses. In this paper, the TE multi-physical modelling and safety analysis capability is first described, followed by the proposed TE best estimate plus statistical uncertainty analysis method (BESUAM). The chosen statistical sensitivity and uncertainty analysis methods (non-parametric order statistic method or bootstrap) and tool (DAKOTA) are then presented, followed by some preliminary results of their application to FRAPCON/FRAPTRAN simulation of the OECD RIA fuel rod codes benchmark and RELAP5/MOD3.3 simulation of THTF tests. (authors)
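
    The non-parametric order statistic method mentioned above is commonly implemented via Wilks' formula, which gives the number of code runs needed so that the largest observed output bounds a given population quantile at a given confidence. A minimal sketch, assuming the first-order, one-sided case:

```python
def wilks_runs(coverage=0.95, confidence=0.95):
    """Smallest n with 1 - coverage**n >= confidence (first-order, one-sided)."""
    n = 1
    while 1 - coverage ** n < confidence:
        n += 1
    return n

print(wilks_runs())  # → 59, the classic 95%/95% first-order sample size
```

Higher-order variants (using the second- or third-largest run) require more runs but give less conservative bounds; the principle is the same.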

  12. A theoretical model for analysing gender bias in medicine

    Directory of Open Access Journals (Sweden)

    Johansson Eva E

    2009-08-01

    Full Text Available Abstract During the last decades, research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model, gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider, in biology and disease as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, if dichotomous stereotypes about women and men are taken as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change through facts alone. We suggest consciousness-raising activities and continuous reflection on gender attitudes among students, teachers, researchers and decision-makers.

  13. Examining the cost efficiency of Chinese hydroelectric companies using a finite mixture model

    International Nuclear Information System (INIS)

    Barros, Carlos Pestana; Chen, Zhongfei; Managi, Shunsuke; Antunes, Olinda Sequeira

    2013-01-01

    This paper evaluates the operational activities of Chinese hydroelectric power companies over the period 2000–2010 using a finite mixture model that controls for unobserved heterogeneity. In so doing, a stochastic frontier latent class model, which allows for the existence of different technologies, is adopted to estimate cost frontiers. This procedure not only enables us to identify different groups among the hydro-power companies analysed, but also permits the analysis of their cost efficiency. The main result is that three groups are identified in the sample, each equipped with different technologies, suggesting that distinct business strategies need to be adapted to the characteristics of China's hydro-power companies. Some managerial implications are developed. - Highlights: ► This paper evaluates the operational activities of Chinese hydroelectric power companies. ► This study uses data from 2000 to 2010 and a finite mixture model. ► The model procedure identifies different groups among the hydro-power companies analysed. ► Three groups are identified in the sample, each equipped with completely different “technologies”. ► This suggests that distinct business strategies need to be adapted to the characteristics of the hydro-power companies.

  14. Comparison of plasma input and reference tissue models for analysing [(11)C]flumazenil studies

    NARCIS (Netherlands)

    Klumpers, Ursula M. H.; Veltman, Dick J.; Boellaard, Ronald; Comans, Emile F.; Zuketto, Cassandra; Yaqub, Maqsood; Mourik, Jurgen E. M.; Lubberink, Mark; Hoogendijk, Witte J. G.; Lammertsma, Adriaan A.

    2008-01-01

    A single-tissue compartment model with plasma input is the established method for analysing [(11)C]flumazenil ([(11)C]FMZ) studies. However, arterial cannulation and measurement of metabolites are time-consuming. Therefore, a reference tissue approach is appealing, but this approach has not been

  15. Challenges of Analysing Gene-Environment Interactions in Mouse Models of Schizophrenia

    Directory of Open Access Journals (Sweden)

    Peter L. Oliver

    2011-01-01

    Full Text Available The modelling of neuropsychiatric disease using the mouse has provided a wealth of information regarding the relationship between specific genetic lesions and behavioural endophenotypes. However, it is becoming increasingly apparent that synergy between genetic and nongenetic factors is a key feature of these disorders that must also be taken into account. With the inherent limitations of retrospective human studies, experiments in mice have begun to tackle this complex association, combining well-established behavioural paradigms and quantitative neuropathology with a range of environmental insults. The conclusions from this work have been varied, due in part to a lack of standardised methodology, although most have illustrated that phenotypes related to disorders such as schizophrenia are consistently modified. Far fewer studies, however, have attempted to generate a “two-hit” model, whereby the consequences of a pathogenic mutation are analysed in combination with environmental manipulation such as prenatal stress. This significant, yet relatively new, approach is beginning to produce valuable new models of neuropsychiatric disease. Focussing on prenatal and perinatal stress models of schizophrenia, this review discusses the current progress in this field, and highlights important issues regarding the interpretation and comparative analysis of such complex behavioural data.

  16. Developing an Innovative Customer Relationship Management Model for Better Health Examination Service

    Directory of Open Access Journals (Sweden)

    Lyu JrJung

    2014-11-01

    Full Text Available People increasingly emphasize their own health and wish to know more about their conditions. Chronic diseases now account for up to 50 percent of the top 10 causes of death. As a result, the health-care industry has emerged and kept thriving. This work adopts an innovative customer-oriented business model, since most clients are proactive and spontaneous in taking the “distinguished” health examination programs. We adopt the soft system dynamics methodology (SSDM to develop and to evaluate the steps of introducing a customer relationship management model into a case health examination organization. Quantitative results are also presented for a case physical examination center to assess the improved efficiency. The case study shows that the procedures developed here could provide a better service.

  17. Development of steady-state model for MSPT and detailed analyses of receiver

    Science.gov (United States)

    Yuasa, Minoru; Sonoda, Masanori; Hino, Koichi

    2016-05-01

    The molten salt parabolic trough system (MSPT) uses molten salt as heat transfer fluid (HTF) instead of synthetic oil. The demonstration plant of the MSPT was constructed by Chiyoda Corporation and Archimede Solar Energy in Italy in 2013. Chiyoda Corporation developed a steady-state model for predicting the theoretical behavior of the demonstration plant. The model was designed to calculate the concentrated solar power and heat loss using ray tracing of incident solar light and finite element modeling of the thermal energy transferred into the medium. This report describes the verification of the model using test data from the demonstration plant, detailed analyses of the relation between flow rate and temperature difference on the metal tube of the receiver, and the effect of defocus angle on the concentrated power rate, for solar collector assembly (SCA) development. The model is accurate to within 2.0% (systematic error) and 4.2% (random error). The relationships between flow rate and temperature difference on the metal tube, and the effect of defocus angle on the concentrated power rate, are shown.

  18. Using Weather Data and Climate Model Output in Economic Analyses of Climate Change

    Energy Technology Data Exchange (ETDEWEB)

    Auffhammer, M.; Hsiang, S. M.; Schlenker, W.; Sobel, A.

    2013-06-28

    Economists are increasingly using weather data and climate model output in analyses of the economic impacts of climate change. This article introduces a set of weather data sets and climate models that are frequently used, discusses the most common mistakes economists make in using these products, and identifies ways to avoid these pitfalls. We first provide an introduction to weather data, including a summary of the types of datasets available, and then discuss five common pitfalls that empirical researchers should be aware of when using historical weather data as explanatory variables in econometric applications. We then provide a brief overview of climate models and discuss two common and significant errors often made by economists when climate model output is used to simulate the future impacts of climate change on an economic outcome of interest.

  19. Model evaluation methodology applicable to environmental assessment models

    International Nuclear Information System (INIS)

    Shaeffer, D.L.

    1979-08-01

    A model evaluation methodology is presented to provide a systematic framework within which the adequacy of environmental assessment models might be examined. The necessity for such a tool is motivated by the widespread use of models for predicting the environmental consequences of various human activities and by the reliance on these model predictions for deciding whether a particular activity requires the deployment of costly control measures. Consequently, the uncertainty associated with prediction must be established for the use of such models. The methodology presented here consists of six major tasks: model examination, algorithm examination, data evaluation, sensitivity analyses, validation studies, and code comparison. This methodology is presented in the form of a flowchart to show the logical interrelatedness of the various tasks. Emphasis has been placed on identifying those parameters which are most important in determining the predictive outputs of a model. Importance has been attached to the process of collecting quality data. A method has been developed for analyzing multiplicative chain models when the input parameters are statistically independent and lognormally distributed. Latin hypercube sampling has been offered as a promising candidate for doing sensitivity analyses. Several different ways of viewing the validity of a model have been presented. Criteria are presented for selecting models for environmental assessment purposes.
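
    Latin hypercube sampling of lognormally distributed inputs, as proposed above for sensitivity analyses of multiplicative chain models, can be sketched as follows (all parameter values are hypothetical):

```python
import math
import random
from statistics import NormalDist

random.seed(1)

def lhs_lognormal(n, mu=0.0, sigma=1.0):
    """Latin hypercube sample of a lognormal parameter: one draw per
    equal-probability stratum, returned in shuffled order."""
    strata = list(range(n))
    random.shuffle(strata)
    samples = []
    for k in strata:
        u = (k + random.random()) / n   # uniform point within stratum k of the CDF
        samples.append(math.exp(mu + sigma * NormalDist().inv_cdf(u)))
    return samples

# For a multiplicative chain y = x1 * x2 with independent lognormal inputs,
# log y is simply the sum of the log inputs, so y is again lognormal.
x1 = lhs_lognormal(100, mu=0.0, sigma=0.5)
x2 = lhs_lognormal(100, mu=1.0, sigma=0.3)
y = [a * b for a, b in zip(x1, x2)]
print(min(y) > 0)
```

Because every CDF stratum is hit exactly once, the sample covers the input distribution far more evenly than simple random sampling of the same size.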

  20. Development and application of model RAIA uranium on-line analyser

    International Nuclear Information System (INIS)

    Dong Yanwu; Song Yufen; Zhu Yaokun; Cong Peiyuan; Cui Songru

    1999-01-01

    The working principle, structure, adjustment and application of the model RAIA on-line analyser are reported. The performance of this instrument is reliable. For an identical sample, the signal fluctuation in continuous monitoring over four months is less than ±1%. According to the required measurement range, an appropriate sample cell length is chosen. The precision of the measurement process is better than 1% at 100 g/L U. The detection limit is 50 mg/L. The uranium concentration in the process stream can be displayed automatically and printed at any time. The analyser outputs a 4–20 mA current signal proportional to the uranium concentration, a significant step towards continuous process control and computer-based management.

  1. Tests and analyses of 1/4-scale upgraded nine-bay reinforced concrete basement models

    International Nuclear Information System (INIS)

    Woodson, S.C.

    1983-01-01

    Two nine-bay prototype structures, a flat plate and a two-way slab with beams, were designed in accordance with the 1977 ACI code. A 1/4-scale model of each prototype was constructed, upgraded with timber posts, and statically tested. The timber post placement scheme was developed on the basis of yield-line analyses, punching shear evaluation, and moment-thrust interaction diagrams of the concrete slab sections. The flat plate model and the slab-with-beams model withstood approximate overpressures of 80 and 40 psi, respectively, indicating that the required hardness may be achieved through simple upgrading techniques.

  2. Development of CFD fire models for deterministic analyses of the cable issues in the nuclear power plant

    International Nuclear Information System (INIS)

    Lin, C.-H.; Ferng, Y.-M.; Pei, B.-S.

    2009-01-01

    Additional fire barriers for electrical cables are required in the nuclear power plants (NPPs) in Taiwan due to the separation requirements of Appendix R to 10 CFR Part 50. The risk-informed fire analysis (RIFA) may provide a viable method to resolve these fire barrier issues. However, it is necessary to perform fire scenario analyses so that RIFA can quantitatively determine the risk related to the fire barrier wrap. CFD fire models are therefore proposed in this paper to help RIFA resolve these issues. Three typical fire scenarios are selected to assess the present CFD models. Compared with the experimental data and other models' simulations, the present calculated results show reasonable agreement, indicating that the present CFD fire models can provide the quantitative information needed for RIFA analyses to relax the cable wrap requirements for NPPs.

  3. Reliability of eddy current examination of steam generator tubes

    International Nuclear Information System (INIS)

    Birks, A.S.; Ferris, R.H.; Doctor, P.G.; Clark, R.A.; Spanner, G.E.

    1985-04-01

    A unique study of nondestructive examination reliability is underway at the Pacific Northwest Laboratory under US Nuclear Regulatory Commission sponsorship. Project participants include the Electric Power Research Institute and consortiums from France, Italy, and Japan. This study group has conducted a series of NDE examinations of tubes from a retired-from-service steam generator, using commercially available multifrequency eddy current equipment and ASME procedures. The examination results have been analyzed to identify factors contributing to variations in NDE inspection findings. The reliability of these examinations will then be validated by destructive analyses of the steam generator tubes. The program is expected to contribute to development of a model for steam generator inservice inspection sampling plans and inspection periods, as well as to improved regulatory guidelines for tube plugging

  4. United States Medical Licensing Examination and American Board of Pediatrics Certification Examination Results: Does the Residency Program Contribute to Trainee Achievement.

    Science.gov (United States)

    Welch, Thomas R; Olson, Brad G; Nelsen, Elizabeth; Beck Dallaghan, Gary L; Kennedy, Gloria A; Botash, Ann

    2017-09-01

    To determine whether training site or prior examinee performance on the US Medical Licensing Examination (USMLE) step 1 and step 2 might predict pass rates on the American Board of Pediatrics (ABP) certifying examination. Data from graduates of pediatric residency programs completing the ABP certifying examination between 2009 and 2013 were obtained. For each, results of the initial ABP certifying examination were obtained, as well as results on the National Board of Medical Examiners (NBME) step 1 and step 2 examinations. Hierarchical linear modeling was used to nest first-time ABP results within training programs to isolate the program contribution to ABP results while controlling for USMLE step 1 and step 2 scores. Stepwise linear regression was then used to determine which of these examinations was a better predictor of ABP results. A total of 1110 graduates of 15 programs had complete testing results and were subject to analysis. Mean ABP scores for these programs ranged from 186.13 to 214.32. The hierarchical linear model suggested that the interaction of step 1 and 2 scores predicted ABP performance (F[1,1007.70] = 6.44, P = .011). In a multilevel model by training program, both USMLE step examinations predicted first-time ABP results (b = .002, t = 2.54, P = .011). Linear regression analyses indicated that step 2 results were a better predictor of ABP performance than step 1 or a combination of the two USMLE scores. Performance on the USMLE examinations, especially step 2, predicts performance on the ABP certifying examination. The contribution of training site to ABP performance was statistically significant, though it contributed modestly to the effect compared with prior USMLE scores. Copyright © 2017 Elsevier Inc. All rights reserved.
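
    The first step of the stepwise selection described above, choosing between step 1 and step 2 scores by single-predictor fit, can be sketched on simulated data. All scores and coefficients below are invented; this is not the study's data or its full multilevel model:

```python
import random
import statistics

random.seed(3)

# Hypothetical trainees: the certifying-exam score is constructed to depend
# more strongly on Step 2 than on Step 1.
n = 300
step1 = [random.gauss(225, 15) for _ in range(n)]
step2 = [0.6 * s1 + random.gauss(95, 12) for s1 in step1]   # correlated with Step 1
board = [0.15 * s1 + 0.55 * s2 + random.gauss(40, 10) for s1, s2 in zip(step1, step2)]

def r_squared(x, y):
    """R^2 of a single-predictor least-squares fit of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

# Forward stepwise selection, step one: keep the stronger single predictor.
print(r_squared(step1, board), r_squared(step2, board))  # Step 2 wins here
```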

  5. Cross-sectional associations of active transport, employment status and objectively measured physical activity: analyses from the National Health and Nutrition Examination Survey.

    Science.gov (United States)

    Yang, Lin; Hu, Liang; Hipp, J Aaron; Imm, Kellie R; Schutte, Rudolph; Stubbs, Brendon; Colditz, Graham A; Smith, Lee

    2018-05-05

    To investigate associations between active transport, employment status and objectively measured moderate-to-vigorous physical activity (MVPA) in a representative sample of US adults. Cross-sectional analyses of data from the National Health and Nutrition Examination Survey. A total of 5180 adults (50.2 years old, 49.0% men) were classified by levels of active transportation and employment status. The outcome measure was weekly time spent in MVPA as recorded by the Actigraph accelerometer. Associations between active transport, employment status and objectively measured MVPA were examined using multivariable linear regression models adjusted for age, body mass index, race and ethnicity, education level, marital status, smoking status, working hour duration (among the employed only) and self-reported leisure time physical activity. Patterns of active transport were similar between the employed (n=2897) and unemployed (n=2283), such that 76.0% of the employed and 77.5% of the unemployed engaged in no active transport. For employed adults, those engaging in high levels of active transport (≥90 min/week) had a higher amount of MVPA than those who did not engage in active transport. This translated to 40.8 (95% CI 15.7 to 65.9) additional minutes of MVPA per week in men and 57.9 (95% CI 32.1 to 83.7) additional minutes of MVPA per week in women. Among unemployed adults, higher levels of active transport were associated with more MVPA among men (44.8 min/week MVPA, 95% CI 9.2 to 80.5) only. Findings from the present study support interventions to promote active transport to increase population-level physical activity. Additional strategies are likely required to promote physical activity among unemployed women. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  6. The Evaluation of Bivariate Mixed Models in Meta-analyses of Diagnostic Accuracy Studies with SAS, Stata and R.

    Science.gov (United States)

    Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc

    2018-05-01

    Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also model implementation in the software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity - the two outcomes of interest in meta-analyses of diagnostic accuracy studies - utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three sizes for meta-analyses (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations of less than two percentage points. proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma, rather shows convergence problems. The random effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification together with convergence robustness should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach. Schattauer GmbH.
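
    As a simplified illustration of how such accuracy evaluations are set up, the sketch below simulates repeated diagnostic meta-analyses for one scenario and measures the bias of the pooled sensitivity in percentage points. It deliberately uses a univariate DerSimonian-Laird pooling of logit-sensitivity rather than the full bivariate mixed model the paper evaluates, and all scenario parameters (study counts, sizes, heterogeneity) are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

def dl_pool(theta, var):
    """DerSimonian-Laird random-effects pooled estimate (univariate)."""
    w = 1 / var
    q = np.sum(w * (theta - np.sum(w * theta) / w.sum()) ** 2)
    tau2 = max(0.0, (q - (len(theta) - 1)) / (w.sum() - np.sum(w**2) / w.sum()))
    w_star = 1 / (var + tau2)
    return np.sum(w_star * theta) / w_star.sum()

def simulate(n_studies=30, sens=0.90, tau=0.3, reps=500):
    """Average bias (percentage points) of pooled sensitivity over many meta-analyses."""
    est = []
    for _ in range(reps):
        mu = np.log(sens / (1 - sens)) + rng.normal(0, tau, n_studies)  # study logits
        n_dis = rng.integers(50, 200, n_studies)        # diseased patients per study
        tp = rng.binomial(n_dis, 1 / (1 + np.exp(-mu)))
        tp = np.clip(tp, 1, n_dis - 1)                  # continuity guard for the logit
        theta = np.log(tp / (n_dis - tp))
        var = 1 / tp + 1 / (n_dis - tp)
        est.append(1 / (1 + np.exp(-dl_pool(theta, var))))
    return 100 * (np.mean(est) - sens)

print(f"bias: {simulate():+.2f} percentage points")
```

    A full replication would fit the bivariate model jointly to sensitivity and specificity; this univariate version only shows the bias-measurement loop.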

  7. Comparative biochemical analyses of venous blood and peritoneal fluid from horses with colic using a portable analyser and an in-house analyser.

    Science.gov (United States)

    Saulez, M N; Cebra, C K; Dailey, M

    2005-08-20

    Fifty-six horses with colic were examined over a period of three months. The concentrations of glucose, lactate, sodium, potassium and chloride, and the pH of samples of blood and peritoneal fluid, were determined with a portable clinical analyser and with an in-house analyser and the results were compared. Compared with the in-house analyser, the portable analyser gave higher pH values for blood and peritoneal fluid with greater variability in the alkaline range, and lower pH values in the acidic range, lower concentrations of glucose in the range below 8.3 mmol/l, and lower concentrations of lactate in venous blood in the range below 5 mmol/l and in peritoneal fluid in the range below 2 mmol/l, with less variability. On average, the portable analyser underestimated the concentrations of lactate and glucose in peritoneal fluid in comparison with the in-house analyser. Its measurements of the concentrations of sodium and chloride in peritoneal fluid had a higher bias and were more variable than the measurements in venous blood, and its measurements of potassium in venous blood and peritoneal fluid had a smaller bias and less variability than the measurements made with the in-house analyser.
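
    Method-comparison studies of this kind typically quantify bias and variability with a Bland-Altman-style analysis. The sketch below does so on simulated paired lactate readings (the offset, scatter and seed are invented, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical paired lactate readings (mmol/l) from 56 samples: the portable
# device is assumed to read low by about 0.4 mmol/l with modest random scatter.
inhouse  = rng.uniform(0.5, 8.0, 56)
portable = inhouse - 0.4 + rng.normal(0, 0.25, 56)

diff = portable - inhouse
bias = diff.mean()
sd = diff.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement
print(f"bias {bias:+.2f} mmol/l, limits of agreement [{loa_low:.2f}, {loa_high:.2f}]")
```

    A negative bias here corresponds to the abstract's finding that the portable analyser underestimated lactate relative to the in-house analyser.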

  8. Developmental trajectories of paediatric headache - sex-specific analyses and predictors.

    Science.gov (United States)

    Isensee, Corinna; Fernandez Castelao, Carolin; Kröner-Herwig, Birgit

    2016-01-01

    Headache is the most common pain disorder in children and adolescents and is associated with diverse dysfunctions and psychological symptoms. Several studies evidenced sex-specific differences in headache frequency. Until now no study exists that examined sex-specific patterns of change in paediatric headache across time and included pain-related somatic and (socio-)psychological predictors. Latent Class Growth Analysis (LCGA) was used in order to identify different trajectory classes of headache across four annual time points in a population-based sample (n = 3 227; mean age 11.34 years; 51.2 % girls). In multinomial logistic regression analyses the influence of several predictors on the class membership was examined. For girls, a four-class model was identified as the best fitting model. While the majority of girls reported no (30.5 %) or moderate headache frequencies (32.5 %) across time, one class with a high level of headache days (20.8 %) and a class with an increasing headache frequency across time (16.2 %) were identified. For boys a two class model with a 'no headache class' (48.6 %) and 'moderate headache class' (51.4 %) showed the best model fit. Regarding logistic regression analyses, migraine and parental headache proved to be stable predictors across sexes. Depression/anxiety was a significant predictor for all pain classes in girls. Life events, dysfunctional stress coping and school burden were also able to differentiate at least between some classes in both sexes. The identified trajectories reflect sex-specific differences in paediatric headache, as seen in the number and type of classes extracted. The documented risk factors can deliver ideas for preventive actions and considerations for treatment programmes.
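
    A crude stand-in for latent class trajectory analysis can be sketched with a two-cluster k-means on simulated four-wave counts. Real LCGA instead fits class-specific growth curves by maximum likelihood and compares model fit across class counts; all numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated 4-wave headache-day counts for two hypothetical latent classes:
# a low "no headache" class and a higher "moderate headache" class.
n_per = 300
y = np.vstack([rng.poisson(0.5, (n_per, 4)),
               rng.poisson(4.0, (n_per, 4))]).astype(float)

# Two-cluster k-means on the raw trajectories, with a deterministic start.
centers = np.array([y.mean(0) - 1.0, y.mean(0) + 1.0])
for _ in range(20):
    dist2 = ((y[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    labels = dist2.argmin(axis=1)
    centers = np.array([y[labels == k].mean(axis=0) for k in range(2)])

print("per-wave class means:", np.round(centers, 2))
```

    The recovered cluster means approximate the two generating trajectories; predictors such as migraine or parental headache would then enter a multinomial regression on the class labels.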

  9. A Comparative Test of Work-Family Conflict Models and Critical Examination of Work-Family Linkages

    Science.gov (United States)

    Michel, Jesse S.; Mitchelson, Jacqueline K.; Kotrba, Lindsey M.; LeBreton, James M.; Baltes, Boris B.

    2009-01-01

    This paper is a comprehensive meta-analysis of over 20 years of work-family conflict research. A series of path analyses were conducted to compare and contrast existing work-family conflict models, as well as a new model we developed which integrates and synthesizes current work-family theory and research. This new model accounted for 40% of the…

  10. Control designs and stability analyses for Helly’s car-following model

    Science.gov (United States)

    Rosas-Jaimes, Oscar A.; Quezada-Téllez, Luis A.; Fernández-Anaya, Guillermo

    Car-following is an approach to understanding traffic behaviour that is restricted to pairs of cars, identifying a “leader” moving in front of a “follower”, which is assumed not to overtake the leader. From the first attempts to describe how individual cars are affected on a road through these models, linear differential equations were suggested by authors such as Pipes and Helly. These expressions represent such phenomena quite well, even though they have since been superseded by more recent and accurate models. However, in this paper we show that those early formulations have some properties that are not fully reported, presenting the different ways in which they can be expressed and analysing their stability behaviour. Pipes’ model can be extended to what is known as Helly’s model, which is viewed as a more precise model of this microscopic approach to traffic. Once some convenient forms of expression are established, two control designs are suggested herein. These regulation schemes are complemented with their respective stability analyses, which reflect important properties with implications for real driving. Notably, these linear designs are easy to understand and to implement, including the features related to safety and comfort.
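
    Helly's model itself is simple enough to simulate directly. The sketch below uses one commonly quoted stable gain choice (C1 = 0.5, C2 = 0.125) and an invented desired-gap law, and shows the follower settling to the desired spacing after the leader brakes; it is an illustration, not the control designs of the paper.

```python
# Helly's linear car-following model (illustrative constants):
#   a_foll = C1*(v_lead - v_foll) + C2*(gap - D),  desired gap D = alpha + beta*v_foll
C1, C2 = 0.5, 0.125          # stimulus gains (1/s and 1/s^2)
alpha, beta = 5.0, 1.0       # hypothetical gap law: 5 m standstill + 1 s headway
dt, t_end = 0.1, 300.0

v_lead, v_foll = 20.0, 20.0  # speeds (m/s)
x_lead, x_foll = 30.0, 0.0   # positions (m)

t = 0.0
while t < t_end:
    if t < 5.0:                              # leader brakes from 20 to 10 m/s
        v_lead = max(v_lead - 2.0 * dt, 10.0)
    gap = x_lead - x_foll
    accel = C1 * (v_lead - v_foll) + C2 * (gap - (alpha + beta * v_foll))
    x_lead += v_lead * dt
    x_foll += v_foll * dt
    v_foll += accel * dt                     # forward-Euler integration
    t += dt

print(f"final speed {v_foll:.2f} m/s, gap {x_lead - x_foll:.2f} m "
      f"(desired {alpha + beta * v_foll:.2f} m)")
```

    With these gains the linearised error dynamics have eigenvalues with negative real part, so the follower converges smoothly to the new equilibrium without collision.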

  11. Estimating required information size by quantifying diversity in random-effects model meta-analyses

    DEFF Research Database (Denmark)

    Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper

    2009-01-01

    an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. RESULTS: We devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between… and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D2 ≥ I2, for all meta-analyses. CONCLUSION: We conclude that D2 seems a better alternative than I2 to consider model variation in any random…
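
    Under these definitions, D2 and I2 can be computed side by side from per-trial effect estimates and variances: D2 = (vR − vF)/vR, where vF and vR are the variances of the pooled estimate under the fixed-effect and random-effects models. The trial log-odds ratios and variances below are invented for illustration.

```python
import numpy as np

# Hypothetical trial log-odds ratios and their within-trial variances
theta = np.array([-0.8, 0.2, -0.5, 0.4, -0.9, 0.1, -0.6, 0.3])
var   = np.array([0.01, 0.05, 0.02, 0.08, 0.01, 0.06, 0.03, 0.04])

w = 1 / var
mu_f = np.sum(w * theta) / w.sum()
Q = np.sum(w * (theta - mu_f) ** 2)                     # Cochran's Q
df = len(theta) - 1
tau2 = max(0.0, (Q - df) / (w.sum() - np.sum(w**2) / w.sum()))  # DerSimonian-Laird

vF = 1 / w.sum()                    # variance of the fixed-effect pooled estimate
vR = 1 / np.sum(1 / (var + tau2))   # variance under the random-effects model

I2 = max(0.0, (Q - df) / Q)
D2 = (vR - vF) / vR                 # relative variance reduction, random -> fixed
print(f"I2 = {I2:.3f}, D2 = {D2:.3f}")
```

    With equal trial variances D2 and I2 coincide; with unequal variances D2 exceeds I2, consistent with the paper's inequality D2 ≥ I2.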

  12. An Examination of Information Technology Valuation Models for the Air Force

    National Research Council Canada - National Science Library

    Peachey, Todd

    1998-01-01

    .... This thesis is designed to examine models that are currently being used in the public and private sector of the economy to evaluate Information Technology investments to learn which ones might serve...

  13. Uncertainty and sensitivity analyses for age-dependent unavailability model integrating test and maintenance

    International Nuclear Information System (INIS)

    Kančev, Duško; Čepin, Marko

    2012-01-01

    Highlights: ► Application of analytical unavailability model integrating T and M, ageing, and test strategy. ► Ageing data uncertainty propagation on system level assessed via Monte Carlo simulation. ► Uncertainty impact is growing with the extension of the surveillance test interval. ► Calculated system unavailability dependence on two different sensitivity study ageing databases. ► System unavailability sensitivity insights regarding specific groups of BEs as test intervals extend. - Abstract: The interest in operational lifetime extension of the existing nuclear power plants is growing. Consequently, plant life management programs, considering safety component ageing, are being developed and employed. Ageing represents a gradual degradation of the physical properties and functional performance of different components, consequently implying their reduced availability. Analyses made in the direction of nuclear power plant lifetime extension are based upon component ageing management programs. On the other hand, the large uncertainties of the ageing parameters, as well as the uncertainties associated with most reliability data collections, are widely acknowledged. This paper addresses the uncertainty and sensitivity analyses conducted utilizing a previously developed age-dependent unavailability model, integrating effects of test and maintenance activities, for a selected stand-by safety system in a nuclear power plant. The most important problem is the lack of data concerning the effects of ageing, as well as the relatively high uncertainty associated with these data, which would correspond to more detailed modelling of ageing. A standard Monte Carlo simulation was coded for the purpose of this paper and utilized in the process of assessment of the component ageing parameters' uncertainty propagation on system level. The obtained results from the uncertainty analysis indicate the extent to which the uncertainty of the selected
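
    The kind of Monte Carlo propagation described can be sketched as follows, with an invented linear-ageing unavailability model for a single stand-by component and a lognormal distribution for the uncertain ageing rate (all parameter values are hypothetical). It reproduces the highlighted effect that the impact of the ageing-parameter uncertainty grows as the surveillance test interval is extended.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical stand-by component: per-demand failure probability rho, base
# standby failure rate lam0 (1/h), and an uncertain linear ageing rate a (1/h^2).
rho, lam0 = 1e-3, 1e-5
a_samples = rng.lognormal(mean=np.log(1e-9), sigma=1.0, size=20000)

def mean_unavailability(a, T):
    """Time-averaged unavailability over a surveillance test interval T (hours),
    from q(t) ~ rho + lam0*t + a*t^2/2 integrated over [0, T] and divided by T."""
    return rho + lam0 * T / 2 + a * T**2 / 6

for T in (720.0, 2190.0, 4380.0):   # roughly 1, 3 and 6 month test intervals
    u = mean_unavailability(a_samples, T)
    print(f"T={T:6.0f} h  mean U={u.mean():.2e}  95th pct={np.percentile(u, 95):.2e}")
```

    Because only the ageing term carries uncertainty and it scales with T^2, the relative spread of the unavailability distribution widens as the test interval grows.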

  14. Human event observations in the individual plant examinations

    Energy Technology Data Exchange (ETDEWEB)

    Forester, J. [Sandia National Labs., Albuquerque, NM (United States)

    1995-04-01

    A major objective of the Individual Plant Examination Insights Program is to identify the important determinants of core damage frequency for the different reactor and containment types and plant designs as indicated in the IPEs. The human reliability analysis is a critical component of the probabilistic risk assessments which were done for the IPEs. The determination and selection of human actions for incorporation into the event and fault tree models and the quantification of their failure probabilities can have an important impact on the resulting estimates of CDF and risk. Two important goals of the NRC's IPE Insights Program are (1) to determine the extent to which human actions and their corresponding failure probabilities influenced the results of the IPEs and (2) to identify which factors played significant roles in determining the differences and similarities in the results of the HRA analyses across the different plants. To obtain the relevant information, the NRC's IPE database, which contains information on plant design, CDF, and containment performance obtained from the IPEs, was used in conjunction with a systematic examination of the HRA analyses and results from the IPEs. Regarding the extent to which the results of the HRA analyses were significant contributors to the plants' CDFs, examinations of several different measures indicated that while individual human actions could have important influences on CDF for particular initiators, the HRA results did not appear to be the most significant driver of plant risk. Another finding was that while there were relatively wide variations in the calculated human error probabilities for similar events across plants, there was no evidence for any systematic variation as a function of the HRA methods used in the analyses. Much of the variability in HEP values can be explained by differences in plant characteristics and sequence-specific factors. Details of these results and other findings are discussed.

  15. Human event observations in the individual plant examinations

    International Nuclear Information System (INIS)

    Forester, J.

    1995-01-01

    A major objective of the Individual Plant Examination Insights Program is to identify the important determinants of core damage frequency for the different reactor and containment types and plant designs as indicated in the IPEs. The human reliability analysis is a critical component of the probabilistic risk assessments which were done for the IPEs. The determination and selection of human actions for incorporation into the event and fault tree models and the quantification of their failure probabilities can have an important impact on the resulting estimates of CDF and risk. Two important goals of the NRC's IPE Insights Program are (1) to determine the extent to which human actions and their corresponding failure probabilities influenced the results of the IPEs and (2) to identify which factors played significant roles in determining the differences and similarities in the results of the HRA analyses across the different plants. To obtain the relevant information, the NRC's IPE database, which contains information on plant design, CDF, and containment performance obtained from the IPEs, was used in conjunction with a systematic examination of the HRA analyses and results from the IPEs. Regarding the extent to which the results of the HRA analyses were significant contributors to the plants' CDFs, examinations of several different measures indicated that while individual human actions could have important influences on CDF for particular initiators, the HRA results did not appear to be the most significant driver of plant risk. Another finding was that while there were relatively wide variations in the calculated human error probabilities for similar events across plants, there was no evidence for any systematic variation as a function of the HRA methods used in the analyses. Much of the variability in HEP values can be explained by differences in plant characteristics and sequence-specific factors. Details of these results and other findings are discussed.

  16. Human event observations in the individual plant examinations

    International Nuclear Information System (INIS)

    Forester, J.

    1995-01-01

    A major objective of the Nuclear Regulatory Commission's (NRC) Individual Plant Examination (IPE) Insights Program is to identify the important determinants of core damage frequency (CDF) for the different reactor and containment types and plant designs as indicated in the IPEs. The human reliability analysis (HRA) is a critical component of the probabilistic risk assessments (PRAs) which were done for the IPEs. The determination and selection of human actions for incorporation into the event and fault tree models and the quantification of their failure probabilities can have an important impact on the resulting estimates of CDF and risk. Therefore, two important goals of the NRC's IPE Insights Program are (1) to determine the extent to which human actions and their corresponding failure probabilities influenced the results of the IPEs and (2) to identify which factors played significant roles in determining the differences and similarities in the results of the HRA analyses across the different plants. To obtain the relevant information, the NRC's IPE database, which contains information on plant design, CDF, and containment performance obtained from the IPEs, was used in conjunction with a systematic examination of the HRA analyses and results from the IPEs. Regarding the extent to which the results of the HRA analyses were significant contributors to the plants' CDFs, examinations of several different measures indicated that while individual human actions could have important influences on CDF for particular initiators, the HRA results did not appear to be the most significant driver of plant risk (CDF). Another finding was that while there were relatively wide variations in the calculated human error probabilities (HEPs) for similar events across plants, there was no evidence for any systematic variation as a function of the HRA methods used in the analyses.

  17. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.; Mai, Paul Martin; Thingbaijam, Kiran Kumar; Razafindrakoto, H. N. T.; Genton, Marc G.

    2014-01-01

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  18. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  19. Validation of Diagnostic Imaging Based on Repeat Examinations. An Image Interpretation Model

    International Nuclear Information System (INIS)

    Isberg, B.; Jorulf, H.; Thorstensen, Oe.

    2004-01-01

    Purpose: To develop an interpretation model, based on repeatedly acquired images, aimed at improving assessments of technical efficacy and diagnostic accuracy in the detection of small lesions. Material and Methods: A theoretical model is proposed. The studied population consists of subjects that develop focal lesions which increase in size in organs of interest during the study period. The imaging modality produces images that can be re-interpreted with high precision, e.g. conventional radiography, computed tomography, and magnetic resonance imaging. At least four repeat examinations are carried out. Results: The interpretation is performed in four or five steps: 1. Independent readers interpret the examinations chronologically without access to previous or subsequent films. 2. Lesions found on images at the last examination are included in the analysis, with interpretation in consensus. 3. By concurrent back-reading in consensus, the lesions are identified on previous images until they are so small that even in retrospect they are undetectable. The earliest examination at which included lesions appear is recorded, and the lesions are verified by their growth (imaging reference standard). Lesion size and other characteristics may be recorded. 4. Records made at step 1 are corrected to those of steps 2 and 3. False positives are recorded. 5. (Optional) Lesion type is confirmed by another diagnostic test. Conclusion: Applied on subjects with progressive disease, the proposed image interpretation model may improve assessments of technical efficacy and diagnostic accuracy in the detection of small focal lesions. The model may provide an accurate imaging reference standard as well as repeated detection rates and false-positive rates for tested imaging modalities. However, potential review bias necessitates a strict protocol

  20. Mental model mapping as a new tool to analyse the use of information in decision-making in integrated water management

    Science.gov (United States)

    Kolkman, M. J.; Kok, M.; van der Veen, A.

    The solution of complex, unstructured problems is faced with policy controversy and dispute, unused and misused knowledge, project delay and failure, and decline of public trust in governmental decisions. Mental model mapping (also called concept mapping) is a technique to analyse these difficulties on a fundamental cognitive level, which can reveal experiences, perceptions, assumptions, knowledge and subjective beliefs of stakeholders, experts and other actors, and can stimulate communication and learning. This article presents the theoretical framework from which the use of mental model mapping techniques to analyse this type of problems emerges as a promising technique. The framework consists of the problem solving or policy design cycle, the knowledge production or modelling cycle, and the (computer) model as interface between the cycles. Literature attributes difficulties in the decision-making process to communication gaps between decision makers, stakeholders and scientists, and to the construction of knowledge within different paradigm groups that leads to different interpretation of the problem situation. Analysis of the decision-making process literature indicates that choices, which are made in all steps of the problem solving cycle, are based on an individual decision maker’s frame of perception. This frame, in turn, depends on the mental model residing in the mind of the individual. Thus we identify three levels of awareness on which the decision process can be analysed. This research focuses on the third level. Mental models can be elicited using mapping techniques. In this way, analysing an individual’s mental model can shed light on decision-making problems. The steps of the knowledge production cycle are, in the same manner, ultimately driven by the mental models of the scientist in a specific discipline. Remnants of this mental model can be found in the resulting computer model. The characteristics of unstructured problems (complexity

  1. Graphical models for genetic analyses

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt; Sheehan, Nuala A.

    2003-01-01

    This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas of graphical models and genetics. The potential of graphical models is explored and illustrated through a number of example applications where the genetic element is substantial or dominating.

  2. Development and Examination of a Family Triadic Measure to Examine Quality of Life Family Congruence in Nursing Home Residents and Two Family Members.

    Science.gov (United States)

    Aalgaard Kelly, Gina

    2015-01-01

    Objective: The overall purpose of this study was to propose and test a conceptual model and apply family analyses methods to understand quality of life family congruence in the nursing home setting. Method: Secondary data for this study were from a larger study, titled Measurement, Indicators and Improvement of the Quality of Life (QOL) in Nursing Homes. Research literature, family systems theory and human ecological assumptions informed the conceptual model for empirically testing quality of life family congruence. Results: The study results supported a model examining nursing home residents and two family members on quality of life family congruence. Specifically, family intergenerational dynamic factors, resident personal and social-psychological factors, and nursing home family input factors were examined to identify differences in quality of life family congruence among triad families. Discussion: Formal family involvement and resident cognitive functioning were found to be the two most influential factors for quality of life family congruence (QOLFC).

  3. Analyses and optimization of Lee propagation model for LoRa 868 MHz network deployments in urban areas

    Directory of Open Access Journals (Sweden)

    Dobrilović Dalibor

    2017-01-01

    In the recent period, fast ICT expansion and the rapid appearance of new technologies have raised the importance of fast and accurate planning and deployment of emerging communication technologies, especially wireless ones. This paper analyses the possible usage of the Lee propagation model for planning, design and management of networks based on LoRa 868 MHz technology. LoRa is a wireless technology which can be deployed in various Internet of Things and Smart City scenarios in urban areas. The analyses are based on a comparison of field measurements with model calculations. Besides the analyses of the Lee propagation model's usability, possible optimization of the model is discussed as well. The research results can be used for accurate design, planning and preparation of high-performance wireless resource management of various Internet of Things and Smart City applications in urban areas based on LoRa or similar wireless technologies. The equipment used for measurements is based on open-source hardware.
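
    The calibration idea can be sketched with the Lee model reduced to its distance-dependent core, L(d) = L0 + γ·log10(d/d0), fitted to simulated drive-test measurements by least squares. All constants (reference loss, slope, shadowing spread) are invented; the full Lee model also carries frequency and environment correction factors omitted here.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical drive-test data: distances (km) and measured path loss (dB) for
# a LoRa 868 MHz gateway; 2 dB log-normal shadowing on top of a "true"
# Lee-type loss of 110 dB at 1 km with a 38 dB/decade slope.
d = rng.uniform(0.1, 5.0, 80)
measured = 110.0 + 38.0 * np.log10(d) + rng.normal(0, 2.0, 80)

# Calibrate L0 and gamma against the measurements by ordinary least squares.
X = np.column_stack([np.ones_like(d), np.log10(d)])
(L0, gamma), *_ = np.linalg.lstsq(X, measured, rcond=None)

rmse = np.sqrt(np.mean((X @ np.array([L0, gamma]) - measured) ** 2))
print(f"fitted L0={L0:.1f} dB, slope={gamma:.1f} dB/decade, RMSE={rmse:.2f} dB")
```

    The residual RMSE against field measurements is the usual figure of merit when comparing candidate propagation models for network planning.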

  4. Structural identifiability analyses of candidate models for in vitro Pitavastatin hepatic uptake.

    Science.gov (United States)

    Grandjean, Thomas R B; Chappell, Michael J; Yates, James W T; Evans, Neil D

    2014-05-01

    In this paper a review of the application of four different techniques (a version of the similarity transformation approach for autonomous uncontrolled systems, a non-differential input/output observable normal form approach, the characteristic set differential algebra and a recent algebraic input/output relationship approach) to determine the structural identifiability of certain in vitro nonlinear pharmacokinetic models is provided. The Organic Anion Transporting Polypeptide (OATP) substrate, Pitavastatin, is used as a probe on freshly isolated animal and human hepatocytes. Candidate pharmacokinetic non-linear compartmental models have been derived to characterise the uptake process of Pitavastatin. As a prerequisite to parameter estimation, structural identifiability analyses are performed to establish that all unknown parameters can be identified from the experimental observations available. Copyright © 2013. Published by Elsevier Ireland Ltd.

  5. Examining the Factor Structure of the Positive and Negative Affect Schedule (PANAS) in a Multiethnic Sample of Adolescents

    Science.gov (United States)

    Villodas, Feion; Villodas, Miguel T.; Roesch, Scott

    2011-01-01

    The psychometric properties of the Positive and Negative Affect Schedule were examined in a multiethnic sample of adolescents. Results from confirmatory factor analyses indicated that the original two-factor model did not adequately fit the data. Exploratory factor analyses revealed that four items were not pure markers of the factors. (Contains 1…

  6. Modeling and stress analyses of a normal foot-ankle and a prosthetic foot-ankle complex.

    Science.gov (United States)

    Ozen, Mustafa; Sayman, Onur; Havitcioglu, Hasan

    2013-01-01

    Total ankle replacement (TAR) is a relatively new concept and is becoming more popular for the treatment of ankle arthritis and fractures. Because of the high costs and difficulties of experimental studies, the development of TAR prostheses is progressing very slowly. For this reason, medical imaging techniques such as CT and MR have become more and more useful. The finite element method (FEM) is a widely used technique to estimate the mechanical behaviors of materials and structures in engineering applications. FEM has also been increasingly applied to biomechanical analyses of human bones, tissues and organs, thanks to the development of both computing capabilities and medical imaging techniques. 3-D finite element models of the human foot and ankle reconstructed from MR and CT images have been investigated by some authors. In this study, the geometric data (used in modeling) of a normal and a prosthetic foot and ankle were obtained from a 3D reconstruction of CT images. The segmentation software MIMICS was used to generate the 3D images of the bony structures, soft tissues and prosthesis components of the normal and prosthetic ankle-foot complex. Apart from the phalanges, whose adjacent surfaces were fused at the spaces between them, the metatarsals, cuneiforms, cuboid, navicular, talus and calcaneus bones, soft tissues and prosthesis components were developed independently to form the foot-ankle complex. The SOLIDWORKS program was used to form the boundary surfaces of all model components, and the solid models were then obtained from these boundary surfaces. The finite element analysis software ABAQUS was used to perform the numerical stress analyses of these models for the balanced standing position. Plantar pressure and von Mises stress distributions of the normal and prosthetic ankles were compared with each other. 
There was a peak pressure increase at the fourth metatarsal, first metatarsal and talus bones and a decrease at the intermediate cuneiform and calcaneus bones, in

  7. Resitting a high-stakes postgraduate medical examination on multiple occasions: nonlinear multilevel modelling of performance in the MRCP(UK) examinations

    Directory of Open Access Journals (Sweden)

    McManus IC

    2012-06-01

    Full Text Available Abstract Background Failure rates in postgraduate examinations are often high and many candidates therefore retake examinations on several or even many occasions. Little, however, is known about how candidates perform across those multiple attempts. A key theoretical question to be resolved is whether candidates pass at a resit because they have got better, having acquired more knowledge or skills, or whether they have got lucky, chance helping them to get over the pass mark. In the UK, the issue of resits has become of particular interest since the General Medical Council issued a consultation and is considering limiting the number of attempts candidates may make at examinations. Methods Since 1999 the examination for Membership of the Royal Colleges of Physicians of the United Kingdom (MRCP(UK)) has imposed no limit on the number of attempts candidates can make at its Part 1, Part 2 or PACES (Clinical) examination. The present study examined the performance of candidates on the examinations from 2002/2003 to 2010, during which time the examination structure has been stable. Data were available for 70,856 attempts at Part 1 by 39,335 candidates, 37,654 attempts at Part 2 by 23,637 candidates and 40,303 attempts at PACES by 21,270 candidates, with the maximum number of attempts being 26, 21 and 14, respectively. The results were analyzed using multilevel modelling, fitting negative exponential growth curves to individual candidate performance. Results The number of candidates taking the assessment falls exponentially at each attempt. Performance improves across attempts, with evidence in the Part 1 examination that candidates are still improving up to the tenth attempt, with a similar improvement up to the fourth attempt in Part 2 and the sixth attempt at PACES. Random effects modelling shows that candidates begin at a starting level, with performance increasing by a smaller amount at each attempt, with evidence of a maximum, asymptotic level for

  8. Resitting a high-stakes postgraduate medical examination on multiple occasions: nonlinear multilevel modelling of performance in the MRCP(UK) examinations.

    Science.gov (United States)

    McManus, I C; Ludka, Katarzyna

    2012-06-14

    Failure rates in postgraduate examinations are often high and many candidates therefore retake examinations on several or even many occasions. Little, however, is known about how candidates perform across those multiple attempts. A key theoretical question to be resolved is whether candidates pass at a resit because they have got better, having acquired more knowledge or skills, or whether they have got lucky, chance helping them to get over the pass mark. In the UK, the issue of resits has become of particular interest since the General Medical Council issued a consultation and is considering limiting the number of attempts candidates may make at examinations. Since 1999 the examination for Membership of the Royal Colleges of Physicians of the United Kingdom (MRCP(UK)) has imposed no limit on the number of attempts candidates can make at its Part 1, Part 2 or PACES (Clinical) examination. The present study examined the performance of candidates on the examinations from 2002/2003 to 2010, during which time the examination structure has been stable. Data were available for 70,856 attempts at Part 1 by 39,335 candidates, 37,654 attempts at Part 2 by 23,637 candidates and 40,303 attempts at PACES by 21,270 candidates, with the maximum number of attempts being 26, 21 and 14, respectively. The results were analyzed using multilevel modelling, fitting negative exponential growth curves to individual candidate performance. The number of candidates taking the assessment falls exponentially at each attempt. Performance improves across attempts, with evidence in the Part 1 examination that candidates are still improving up to the tenth attempt, with a similar improvement up to the fourth attempt in Part 2 and the sixth attempt at PACES. 
Random effects modelling shows that candidates begin at a starting level, with performance increasing by a smaller amount at each attempt, with evidence of a maximum, asymptotic level for candidates, and candidates showing variation in starting
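The negative exponential growth curve described in the abstract above can be sketched as follows, fitting synthetic candidate scores rather than the MRCP(UK) data (scipy's `curve_fit` stands in for the full multilevel estimation):

```python
import numpy as np
from scipy.optimize import curve_fit

def growth(attempt, start, asymptote, rate):
    """Negative exponential growth: performance rises from `start` toward
    an `asymptote`, improving by a smaller amount at each attempt."""
    return asymptote - (asymptote - start) * np.exp(-rate * (attempt - 1))

# Synthetic candidate: begins below the pass mark and improves across
# resits (invented values -- not the MRCP(UK) data).
rng = np.random.default_rng(0)
attempts = np.arange(1, 11)
scores = growth(attempts, start=-1.5, asymptote=0.5, rate=0.4)
scores = scores + rng.normal(0.0, 0.05, attempts.size)

params, _ = curve_fit(growth, attempts, scores, p0=(-1.0, 0.0, 0.5))
start_hat, asymptote_hat, rate_hat = params
```

The fitted `asymptote_hat` corresponds to the "maximum, asymptotic level" the abstract refers to; whether it lies above or below the pass mark is what distinguishes "getting better" from "getting lucky".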

  9. Analysing bifurcations encountered in numerical modelling of current transfer to cathodes of dc glow and arc discharges

    International Nuclear Information System (INIS)

    Almeida, P G C; Benilov, M S; Cunha, M D; Faria, M J

    2009-01-01

    Bifurcations and/or their consequences are frequently encountered in numerical modelling of current transfer to cathodes of gas discharges, even in apparently simple situations, and a failure to recognize and properly analyse a bifurcation may create difficulties in the modelling and hinder the understanding of numerical results and the underlying physics. This work is concerned with analysis of bifurcations that have been encountered in the modelling of steady-state current transfer to cathodes of glow and arc discharges. All basic types of steady-state bifurcations (fold, transcritical, pitchfork) have been identified and analysed. The analysis provides explanations to many results obtained in numerical modelling. In particular, it is shown that dramatic changes in patterns of current transfer to cathodes of both glow and arc discharges, described by numerical modelling, occur through perturbed transcritical bifurcations of first- and second-order contact. The analysis elucidates the reason why the mode of glow discharge associated with the falling section of the current-voltage characteristic in the solution of von Engel and Steenbeck seems not to appear in 2D numerical modelling and the subnormal and normal modes appear instead. A similar effect has been identified in numerical modelling of arc cathodes and explained.
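The three basic steady-state bifurcations named in the abstract have standard one-dimensional normal forms. A small sketch enumerating their equilibria as the control parameter mu crosses zero (illustrative of the bifurcation types only, not of the discharge models themselves):

```python
import math

def fold_equilibria(mu):
    """Fold (saddle-node) normal form x' = mu - x**2:
    no equilibria for mu < 0, two for mu > 0."""
    return [] if mu < 0 else [-math.sqrt(mu), math.sqrt(mu)]

def transcritical_equilibria(mu):
    """Transcritical normal form x' = mu*x - x**2: the branches x = 0
    and x = mu cross and exchange stability at mu = 0."""
    return sorted({0.0, mu})

def pitchfork_equilibria(mu):
    """Supercritical pitchfork normal form x' = mu*x - x**3:
    one equilibrium for mu <= 0, three for mu > 0."""
    if mu <= 0:
        return [0.0]
    r = math.sqrt(mu)
    return [-r, 0.0, r]
```

The perturbed transcritical case discussed in the abstract arises when a small constant term is added to the transcritical normal form, splitting the crossing branches into disconnected ones.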

  10. Examining a model of dispositional mindfulness, body comparison, and body satisfaction

    NARCIS (Netherlands)

    Dijkstra, Pieternel; Barelds, Dick P. H.

    The present study examined the links between dispositional mindfulness, body comparison, and body satisfaction. It was expected that mindfulness would be associated with less body comparison and more body satisfaction. Two models were tested: one exploring body comparison as a mediator between

  11. Reproduction of the Yucca Mountain Project TSPA-LA Uncertainty and Sensitivity Analyses and Preliminary Upgrade of Models

    Energy Technology Data Exchange (ETDEWEB)

    Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis; Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis

    2016-09-01

    Sandia National Laboratories (SNL) continued its evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped on the cluster processors. Other supporting software was tested and installed to support the TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest version, 11.1. All the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling case output generated in FY15 based on GoldSim Version 9.60.300, documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  12. Pathway models for analysing and managing the introduction of alien plant pests - an overview and categorization

    NARCIS (Netherlands)

    Douma, J.C.; Pautasso, M.; Venette, R.C.; Robinet, C.; Hemerik, L.; Mourits, M.C.M.; Schans, J.; Werf, van der W.

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative

  13. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    Long-term energy market models can be used to examine investments in production technologies; however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection...... can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate...... the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach we examined how uncertainty in demand and variable costs affects the optimal choice...
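A toy sketch of the iterative interaction the abstract describes, with an invented linear demand curve and a crude volatility-proportional risk premium standing in for the paper's risk-adjustment module (all numbers are illustrative):

```python
def equilibrium_price(capacity, demand=100.0, slope=0.5):
    """Toy inverse-demand curve: price falls as installed capacity rises."""
    return max(demand - slope * capacity, 0.0)

def risk_adjusted_cost(base_cost, price_volatility, risk_premium=2.0):
    """Crude risk measure: investors require a premium proportional to
    assumed price volatility (a stand-in for a full risk module)."""
    return base_cost + risk_premium * price_volatility

def iterate_investment(base_cost=40.0, price_volatility=5.0,
                       step=1.0, tol=1e-6):
    """Iterate between the equilibrium model and the risk-adjustment
    module: capacity is added while the equilibrium price still exceeds
    the risk-adjusted cost of entry."""
    capacity = 0.0
    hurdle = risk_adjusted_cost(base_cost, price_volatility)
    while equilibrium_price(capacity) > hurdle + tol:
        capacity += step
    return capacity, equilibrium_price(capacity)
```

Raising the assumed volatility raises the hurdle rate and reduces the equilibrium capacity, which is the qualitative effect the paper's framework is built to quantify.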

  14. BWR Mark III containment analyses using a GOTHIC 8.0 3D model

    International Nuclear Information System (INIS)

    Jimenez, Gonzalo; Serrano, César; Lopez-Alonso, Emma; Molina, M del Carmen; Calvo, Daniel; García, Javier; Queral, César; Zuriaga, J. Vicente; González, Montserrat

    2015-01-01

    Highlights: • The development of a 3D GOTHIC code model of BWR Mark-III containment is described. • Suppression pool modelling based on the POOLEX STB-20 and STB-16 experimental tests. • LOCA and SBO transients simulated to verify the behaviour of the 3D GOTHIC model. • Comparison between the 3D GOTHIC model and the MAAP 4.07 model is conducted. • Accurate reproduction of pre-severe-accident conditions with the 3D GOTHIC model. - Abstract: The purpose of this study is to establish a detailed three-dimensional model of the Cofrentes NPP BWR/6 Mark III containment building using the containment code GOTHIC 8.0. This paper presents the model construction, the phenomenology tests conducted and the transients selected for the model evaluation. In order to study the proper settings for the model in the suppression pool, two experiments conducted with the experimental installation POOLEX have been simulated, allowing a proper behaviour of the model to be obtained under different suppression pool phenomenology. In the transient analyses, a Loss of Coolant Accident (LOCA) and a Station Blackout (SBO) transient have been simulated. The main results of the simulations of those transients were qualitatively compared with the results obtained from simulations with the MAAP 4.07 Cofrentes NPP model, used by the plant for simulating severe accidents. From this comparison, a verification of the model in terms of pressurization, asymmetric discharges and high pressure release was obtained. The model has proved able to adequately simulate the thermal-hydraulic phenomena which occur in the containment during accident sequences.

  15. Three Modeling Applications to Promote Automatic Item Generation for Examinations in Dentistry.

    Science.gov (United States)

    Lai, Hollis; Gierl, Mark J; Byrne, B Ellen; Spielman, Andrew I; Waldschmidt, David M

    2016-03-01

    Test items created for dentistry examinations are often individually written by content experts. This approach to item development is expensive because it requires the time and effort of many content experts but yields relatively few items. The aim of this study was to describe and illustrate how items can be generated using a systematic approach. Automatic item generation (AIG) is an alternative method that allows a small number of content experts to produce large numbers of items by integrating their domain expertise with computer technology. This article describes and illustrates how three approaches to modeling item content (item cloning, cognitive modeling, and image-anchored modeling) can be used to generate large numbers of multiple-choice test items for examinations in dentistry. Test items can be generated by combining the expertise of two content specialists with technology supported by AIG. A total of 5,467 new items were created during this study. From substitution of item content, to modeling appropriate responses based upon a cognitive model of correct responses, to generating items linked to specific graphical findings, AIG has the potential for meeting increasing demands for test items. Further, the methods described in this study can be generalized and applied to many other item types. Future research applications for AIG in dental education are discussed.
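A minimal sketch of the item-cloning idea: one item model (stem) plus lists of substitutable content elements yields many distinct items. The stem and the content lists below are invented for illustration, not taken from the dental examinations studied:

```python
import itertools

# One item model with two substitutable elements; every combination of
# content values produces a distinct generated item.
STEM = ("A patient reports {symptom} in the {tooth}. "
        "What is the most likely diagnosis?")
symptoms = ["sharp pain on biting", "lingering cold sensitivity", "swelling"]
teeth = ["upper first molar", "lower second premolar"]

items = [STEM.format(symptom=s, tooth=t)
         for s, t in itertools.product(symptoms, teeth)]
```

Even this tiny model yields 3 × 2 = 6 items; the study's 5,467 items come from the same multiplicative logic applied to richer item models.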

  16. Do students use contextual protective behaviors to reduce alcohol-related sexual risk? Examination of a dual-process decision-making model.

    Science.gov (United States)

    Scaglione, Nichole M; Hultgren, Brittney A; Reavy, Racheal; Mallett, Kimberly A; Turrisi, Rob; Cleveland, Michael J; Sell, Nichole M

    2015-09-01

    Recent studies suggest drinking protective behaviors (DPBs) and contextual protective behaviors (CPBs) can uniquely reduce alcohol-related sexual risk in college students. Few studies have examined CPBs independently, and even fewer have utilized theory to examine modifiable psychosocial predictors of students' decisions to use CPBs. The current study used a prospective design to examine (a) rational and reactive pathways and psychosocial constructs predictive of CPB use and (b) how gender might moderate these influences in a sample of college students. Students (n = 508) completed Web-based baseline (mid-Spring semester) and 1- and 6-month follow-up assessments of CPB use; psychosocial constructs (expectancies, normative beliefs, attitudes, and self-concept); and rational and reactive pathways (intentions and willingness). Regression was used to examine rational and reactive influences as proximal predictors of CPB use at the 6-month follow-up. Subsequent path analyses examined the effects of psychosocial constructs, as distal predictors of CPB use, mediated through the rational and reactive pathways. Both rational (intentions to use CPB) and reactive (willingness to use CPB) influences were significantly associated with increased CPB use. The examined distal predictors were found to affect CPB use differentially through the rational and reactive pathways. Gender did not significantly moderate any relationships within the model. Findings suggest potential entry points for increasing CPB use that include both rational and reactive pathways. Overall, this study demonstrates mechanisms through which the use of CPBs can be increased in programs designed to reduce alcohol-related sexual consequences and victimization. (c) 2015 APA, all rights reserved.

  17. A chip-level modeling approach for rail span collapse and survivability analyses

    International Nuclear Information System (INIS)

    Marvis, D.G.; Alexander, D.R.; Dinger, G.L.

    1989-01-01

    A general semiautomated analysis technique has been developed for analyzing rail span collapse and survivability of VLSI microcircuits in high ionizing dose rate radiation environments. Hierarchical macrocell modeling permits analyses at the chip level, and interactive graphical postprocessing provides a rapid visualization of voltage, current and power distributions over an entire VLSIC. The technique is demonstrated for a 16k CMOS/SOI SRAM and a CMOS/SOS 8-bit multiplier. The authors also present an efficient method to treat memory arrays as well as a three-dimensional integration technique to compute sapphire photoconduction from the design layout

  18. Relationship between candidate communication ability and oral certification examination scores.

    Science.gov (United States)

    Lunz, Mary E; Bashook, Philip G

    2008-12-01

    Structured case-based oral examinations are widely used in medical certifying examinations in the USA. These orals assess the candidate's decision-making skills using real or realistic patient cases. Frequently mentioned but not empirically evaluated is the potential bias introduced by the candidate's communication ability. This study aimed to assess the relationship between candidate communication ability and medical certification oral examination scores. Non-doctor communication observers rated a random sample of 90 candidates on communication ability during a medical oral certification examination. The multi-facet Rasch model was used to analyse the communication survey and the oral examination data. The multi-facet model accounts for observer and examiner severity bias. ANOVA was used to measure differences in communication ability between passing and failing candidates and candidates grouped by level of communication ability. Pearson's correlations were used to compare candidate communication ability and oral certification examination performance. Candidate separation reliability values for the communication survey and the oral examination were 0.85 and 0.97, respectively, suggesting accurate candidate measurement. The correlation between communication scores and oral examination scores was 0.10. No significant difference was found between passing and failing candidates for measured communication ability. When candidates were grouped by high, moderate and low communication ability, there was no significant difference in their oral certification examination performance. Candidates' communication ability has little relationship to candidate performance on high-stakes, case-based oral examinations. Examiners for this certifying examination focused on assessing candidate decision-making ability and were not influenced by candidate communication ability.

  19. The structure of common emotion regulation strategies: A meta-analytic examination.

    Science.gov (United States)

    Naragon-Gainey, Kristin; McMahon, Tierney P; Chacko, Thomas P

    2017-04-01

    Emotion regulation has been examined extensively with regard to important outcomes, including psychological and physical health. However, the literature includes many different emotion regulation strategies but little examination of how they relate to one another, making it difficult to interpret and synthesize findings. The goal of this meta-analysis was to examine the underlying structure of common emotion regulation strategies (i.e., acceptance, behavioral avoidance, distraction, experiential avoidance, expressive suppression, mindfulness, problem solving, reappraisal, rumination, worry), and to evaluate this structure in light of theoretical models of emotion regulation. We also examined how distress tolerance, an important emotion regulation ability, relates to strategy use. We conducted meta-analyses estimating the correlations between emotion regulation strategies (based on 331 samples and 670 effect sizes), as well as between distress tolerance and strategies. The resulting meta-analytic correlation matrix was submitted to confirmatory and exploratory factor analyses. None of the confirmatory models, based on prior theory, was an acceptable fit to the data. Exploratory factor analysis suggested that 3 underlying factors best characterized these data. Two factors, labeled Disengagement and Aversive Cognitive Perseveration, emerged as strongly correlated but distinct factors, with the latter consisting of putatively maladaptive strategies. The third factor, Adaptive Engagement, was a less unified factor and weakly related to the other 2 factors. Distress tolerance was most closely associated with low levels of repetitive negative thought and experiential avoidance, and high levels of acceptance and mindfulness. We discuss the theoretical implications of these findings and applications to emotion regulation assessment. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
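The final analytic step above, factoring a pooled correlation matrix, can be sketched as follows. The 4 × 4 matrix is a small invented example, not the meta-analytic matrix pooled from the 331 samples:

```python
import numpy as np

# Hypothetical pooled correlations among four strategies (rumination,
# worry, reappraisal, mindfulness) -- invented values for illustration.
R = np.array([
    [ 1.0,  0.6, -0.1, -0.3],
    [ 0.6,  1.0, -0.1, -0.3],
    [-0.1, -0.1,  1.0,  0.4],
    [-0.3, -0.3,  0.4,  1.0],
])

# Principal-component extraction: loadings are eigenvectors scaled by
# the square roots of their eigenvalues; factors retained by the
# Kaiser criterion (eigenvalue > 1).
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
n_factors = int(np.sum(eigvals > 1.0))
loadings = eigvecs[:, :n_factors] * np.sqrt(eigvals[:n_factors])
```

In this toy matrix the perseverative strategies (rumination, worry) and the engagement strategies (reappraisal, mindfulness) load on separate factors, mirroring the kind of structure the meta-analysis recovered.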

  20. Impact of sophisticated fog spray models on accident analyses

    International Nuclear Information System (INIS)

    Roblyer, S.P.; Owzarski, P.C.

    1978-01-01

    The N-Reactor confinement system release dose to the public in a postulated accident is reduced by washing the confinement atmosphere with fog sprays. This allows a low pressure release of confinement atmosphere containing fission products through filters and out an elevated stack. The current accident analysis required revision of the CORRAL code and other codes such as CONTEMPT to properly model the N Reactor confinement into a system of multiple fog-sprayed compartments. In revising these codes, more sophisticated models for the fog sprays and iodine plateout were incorporated to remove some of the conservatism in the steam condensing rate, fission product washout and iodine plateout used in previous studies. The CORRAL code, which was used to describe the transport and deposition of airborne fission products in LWR containment systems for the Rasmussen Study, was revised to describe fog spray removal of molecular iodine (I2) and particulates in multiple compartments for sprays having individual characteristics of on-off times, flow rates, fall heights, and drop sizes in changing containment atmospheres. During postulated accidents, the code determined the fission product removal rates internally rather than from input decontamination factors. A discussion is given of how the calculated plateout and washout rates vary with time throughout the analysis. The results of the accident analyses indicated that more credit could be given to fission product washout and plateout. An important finding was that the release of fission products to the atmosphere and adsorption of fission products on the filters were significantly lower than previous studies had indicated

  1. Examining Dense Data Usage near the Regions with Severe Storms in All-Sky Microwave Radiance Data Assimilation and Impacts on GEOS Hurricane Analyses

    Science.gov (United States)

    Kim, Min-Jeong; Jin, Jianjun; McCarty, Will; El Akkraoui, Amal; Todling, Ricardo; Gelaro, Ron

    2018-01-01

    Many numerical weather prediction (NWP) centers assimilate radiances affected by clouds and precipitation from microwave sensors, with the expectation that these data can provide critical constraints on meteorological parameters in dynamically sensitive regions and thereby significantly improve forecast accuracy for precipitation. The Global Modeling and Assimilation Office (GMAO) at NASA Goddard Space Flight Center assimilates all-sky microwave radiance data from various microwave sensors, such as all-sky GPM Microwave Imager (GMI) radiance, in the Goddard Earth Observing System (GEOS) atmospheric data assimilation system (ADAS), which includes the GEOS atmospheric model, the Gridpoint Statistical Interpolation (GSI) atmospheric analysis system, and the Goddard Aerosol Assimilation System (GAAS). So far, most NWP centers apply the same large data-thinning distances that are used for clear-sky radiance data, to avoid correlated observation errors, to all-sky microwave radiance data as well. For example, NASA GMAO applies 145 km thinning distances to most satellite radiance data, including the microwave radiance data for which the all-sky approach is implemented. Even with this coarse data usage in the all-sky assimilation approach, noticeable positive impacts of all-sky microwave data on hurricane track forecasts were identified in the GEOS-5 system. The motivation for this study is the dynamic thinning distance method developed in our all-sky framework, which uses denser data in cloudy and precipitating regions because the spatial correlations of observation errors there are relatively small. To investigate the benefits of all-sky microwave radiance on hurricane forecasts, several hurricane cases from 2016-2017 are examined. 
The dynamic thinning distance method is utilized in our all-sky approach to understand the sources and mechanisms that explain the benefits of all-sky microwave radiance data from various microwave sensors such as the Advanced Microwave Sounder Unit
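A sketch of what a dynamic thinning distance could look like, interpolating between the 145 km clear-sky spacing cited above and a shorter cloudy-sky spacing; the 45 km figure and the linear interpolation are illustrative assumptions, not GMAO's actual settings:

```python
def thinning_distance_km(cloud_fraction, clear_km=145.0, cloudy_km=45.0):
    """Dynamic thinning: keep denser observations where cloud and
    precipitation make observation errors less spatially correlated.
    145 km is the clear-sky spacing cited in the abstract; 45 km is an
    assumed, illustrative cloudy-sky spacing; the linear ramp between
    them is likewise an assumption."""
    return cloudy_km + (clear_km - cloudy_km) * (1.0 - cloud_fraction)
```

Under this scheme a fully overcast scene would retain roughly (145/45)^2, about ten, times as many observations per unit area as a clear scene.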

  2. Work-Related Trauma, Alienation, and Posttraumatic and Depressive Symptoms in Medical Examiner Employees.

    Science.gov (United States)

    Brondolo, Elizabeth; Eftekharzadeh, Pegah; Clifton, Christine; Schwartz, Joseph E; Delahanty, Douglas

    2017-10-05

    First-responder employees, including firefighters, police, and medical examiners, are at risk for the development of depression and posttraumatic stress disorder (PTSD) as a result of exposure to workplace trauma. However, pathways linking workplace trauma exposure to mental health symptoms are not well understood. In the context of social-cognitive models of depression/PTSD, we examined the role of negative cognitions as mediators of the cross-sectional and longitudinal relationship of workplace trauma exposure to symptoms of depression/PTSD in medical examiner (ME) employees. 259 ME personnel were recruited from 8 sites nationwide and completed an online questionnaire assessing potential trauma exposure (i.e., exposure to disturbing cases and contact with distressed families of the deceased), negative cognitions, and symptoms of depression and PTSD, and 151 completed similar assessments 3 months later. Longitudinal analyses indicated that increases in negative cognitions, and, in particular, thoughts about alienation predicted increases in depressive symptoms from Time 1 to Time 2. In cross-sectional analyses, but not longitudinal analyses, negative cognitions mediated the relationship of case exposure to symptoms of both depression and PTSD. Negative cognition also mediated the relationship of contact with distressed families to depressive symptoms. The strongest effects were for negative cognitions about being alienated from others. The results of this study support social-cognitive models of the development of posttraumatic distress in the workplace and have implications for the development of interventions to prevent and treat mental health symptoms in first responders. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
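The mediation logic tested in the study (trauma exposure acting on symptoms through negative cognitions about alienation) can be sketched on synthetic data: the indirect effect is the product of the two regression slopes. All coefficient values below are invented, not the study's estimates:

```python
import numpy as np

# Synthetic data with a built-in indirect path: exposure -> alienation
# (a = 0.5), alienation -> symptoms (b = 0.6), plus a small direct path.
rng = np.random.default_rng(1)
n = 500
exposure = rng.normal(size=n)
alienation = 0.5 * exposure + rng.normal(size=n)
symptoms = 0.6 * alienation + 0.1 * exposure + rng.normal(size=n)

def slopes(y, *predictors):
    """OLS slopes of y on the given predictors (intercept dropped)."""
    X = np.column_stack((np.ones(len(y)),) + predictors)
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = slopes(alienation, exposure)[0]               # exposure -> mediator
b, direct = slopes(symptoms, alienation, exposure)
indirect = a * b                                  # mediated effect
```

The estimated `indirect` recovers the product of the generating coefficients (0.5 × 0.6 = 0.3), which is the quantity the study's mediation analyses test for significance.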

  3. A simple beam model to analyse the durability of adhesively bonded tile floorings in presence of shrinkage

    Directory of Open Access Journals (Sweden)

    S. de Miranda

    2014-07-01

    Full Text Available A simple beam model for the evaluation of tile debonding due to substrate shrinkage is presented. The tile-adhesive-substrate package is modeled as an Euler-Bernoulli beam lying on a two-layer elastic foundation. An effective discrete model for inter-tile grouting is introduced with the aim of modelling workmanship defects due to partially filled groutings. The model is validated using the results of a 2D FE model. Different defect configurations and adhesive typologies are analysed, focusing attention on the prediction of normal stresses in the adhesive layer under the assumption of Mode I failure of the adhesive.
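A rough numerical sketch of the underlying beam-on-elastic-foundation idea, using a single-layer Winkler foundation with clamped ends rather than the paper's two-layer foundation and grouting defects; all parameter values are illustrative:

```python
import numpy as np

def beam_on_elastic_foundation(EI=1.0, k=100.0, q=1.0, length=10.0, n=201):
    """Finite-difference solution of EI*w'''' + k*w = q: an Euler-Bernoulli
    beam on a one-parameter (Winkler) foundation with clamped ends. This is
    a simplified stand-in for the paper's two-layer foundation model."""
    h = length / (n - 1)
    A = np.zeros((n, n))
    b = np.zeros(n)
    c = EI / h**4
    for i in range(2, n - 2):                      # interior nodes
        A[i, i - 2:i + 3] = c * np.array([1.0, -4.0, 6.0, -4.0, 1.0])
        A[i, i] += k                               # foundation reaction
        b[i] = q                                   # distributed load
    A[0, 0] = 1.0                                  # w(0) = 0
    A[1, 0], A[1, 1] = -1.0, 1.0                   # w'(0) = 0
    A[n - 1, n - 1] = 1.0                          # w(L) = 0
    A[n - 2, n - 2], A[n - 2, n - 1] = -1.0, 1.0   # w'(L) = 0
    return np.linalg.solve(A, b)

w = beam_on_elastic_foundation()
# Far from the clamped ends the foundation carries the load, so w -> q/k.
```

The boundary layers near the ends, where deflection transitions from zero to q/k, are where normal stresses in the adhesive layer peak, which is why the paper focuses on stresses near edges and defective groutings.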

  4. Quantitative Model for Economic Analyses of Information Security Investment in an Enterprise Information System

    Directory of Open Access Journals (Sweden)

    Bojanc Rok

    2012-11-01

    Full Text Available The paper presents a mathematical model for the optimal security-technology investment evaluation and decision-making processes based on the quantitative analysis of security risks and digital asset assessments in an enterprise. The model makes use of the quantitative analysis of different security measures that counteract individual risks by identifying the information system processes in an enterprise and the potential threats. The model comprises the target security levels for all identified business processes and the probability of a security accident together with the possible loss the enterprise may suffer. The selection of security technology is based on the efficiency of selected security measures. Economic metrics are applied for the efficiency assessment and comparative analysis of different protection technologies. Unlike the existing models for evaluation of the security investment, the proposed model allows direct comparison and quantitative assessment of different security measures. The model allows deep analyses and computations providing quantitative assessments of different options for investments, which translate into recommendations facilitating the selection of the best solution and the decision-making thereof. The model was tested using empirical examples with data from real business environment.
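A sketch of the kind of economic metrics such security-investment models rest on, using the standard ALE/ROSI definitions with invented numbers (the paper's own metric set and parameter values are not reproduced here):

```python
def annual_loss_expectancy(asset_value, exposure_factor, annual_rate):
    """ALE = SLE * ARO, where the single loss expectancy SLE is the
    asset value times the exposure factor per incident."""
    return asset_value * exposure_factor * annual_rate

def rosi(ale_before, ale_after, annual_safeguard_cost):
    """Return on security investment: risk reduction net of the
    safeguard's annual cost, relative to that cost."""
    benefit = ale_before - ale_after
    return (benefit - annual_safeguard_cost) / annual_safeguard_cost

# Invented example: a 200k asset, 25% exposure per incident, and a
# 5k/yr safeguard that cuts the incident rate from 0.40/yr to 0.08/yr.
ale_before = annual_loss_expectancy(200_000, 0.25, 0.40)
ale_after = annual_loss_expectancy(200_000, 0.25, 0.08)
```

Comparing `rosi` across candidate safeguards is the direct quantitative comparison of security measures that the abstract says the model enables.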

  5. Application of model bread baking in the examination of arabinoxylan-protein complexes in rye bread.

    Science.gov (United States)

    Buksa, Krzysztof

    2016-09-05

    The changes in molecular mass of arabinoxylan (AX) and protein caused by the bread baking process were examined using a model rye bread. Instead of normal flour, the dough contained starch, water-extractable AX and protein, which were isolated from rye wholemeal. From the crumb of selected model breads, starch was removed, releasing AX-protein complexes, which were further examined by size exclusion chromatography. On the basis of this research, it was concluded that the optimum model mix, i.e. the one with properties most similar to low-extraction rye flour, is composed of 3-6% AX and 3-6% rye protein isolate with 94-88% rye starch. Application of the model rye bread made it possible to examine the interactions between AX and proteins. Bread baked with the share of AX, rye protein and starch from which the complexes of the highest molar mass were isolated was characterized by the strongest bread crumb structure. Copyright © 2016 Elsevier Ltd. All rights reserved.

  6. Modelling Terminal Examination System For Senior High Schools In Ghana

    Directory of Open Access Journals (Sweden)

    Seidu Azizu

    2017-10-01

    Full Text Available Modelling of a terminal examination management system using linked software for Senior High Schools in Ghana is reported. Microsoft Excel and Access were integrated as the back-end and front-end, respectively. The two software packages were linked for record updates as well as for security during data entry of student records. The link was collapsed after the data-entry deadline to convert the Access table to a local one and enhance data security. Based on the proposed system, multiple parameters such as invigilators, marks, grades, attendance and absenteeism were assessed and identified for the various subjects in the entire examination process. The system applied structured query language (SQL) to search for a specific named parameter for analysis, through which the total number of written papers, the number of students and performance could also be accessed.
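
    As an illustration of the SQL-based search step described above, the sketch below builds a small in-memory results table and queries a per-subject summary; the table layout and column names are invented, not those of the reported system.

    ```python
    # Illustrative sketch only: an exam results table queried with SQL for
    # named parameters. Schema and data are hypothetical.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE results (student TEXT, subject TEXT, score REAL)")
    con.executemany("INSERT INTO results VALUES (?, ?, ?)", [
        ("A01", "Mathematics", 78.0),
        ("A02", "Mathematics", 64.5),
        ("A01", "English", 71.0),
    ])

    # Per-subject summary: number of students and mean performance.
    for subject, n, mean in con.execute(
            "SELECT subject, COUNT(*), AVG(score) FROM results GROUP BY subject"):
        print(f"{subject}: {n} written papers, mean {mean:.2f}")
    ```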

  7. Association between Adult Height and Risk of Colorectal, Lung, and Prostate Cancer: Results from Meta-analyses of Prospective Studies and Mendelian Randomization Analyses

    NARCIS (Netherlands)

    Khankari, Nikhil K.; Shu, Xiao Ou; Wen, Wanqing; Kraft, Peter; Lindström, Sara; Peters, Ulrike; Schildkraut, Joellen; Schumacher, Fredrick; Bofetta, Paolo; Risch, Angela; Bickeböller, Heike; Amos, Christopher I.; Easton, Douglas; Eeles, Rosalind A.; Gruber, Stephen B.; Haiman, Christopher A.; Hunter, David J.; Chanock, Stephen J.; Pierce, Brandon L.; Zheng, Wei; Blalock, Kendra; Campbell, Peter T.; Casey, Graham; Conti, David V.; Edlund, Christopher K.; Figueiredo, Jane; James Gauderman, W.; Gong, Jian; Green, Roger C.; Harju, John F.; Harrison, Tabitha A.; Jacobs, Eric J.; Jenkins, Mark A.; Jiao, Shuo; Li, Li; Lin, Yi; Manion, Frank J.; Moreno, Victor; Mukherjee, Bhramar; Raskin, Leon; Schumacher, Fredrick R.; Seminara, Daniela; Severi, Gianluca; Stenzel, Stephanie L.; Thomas, Duncan C.; Hopper, John L.; Southey, Melissa C.; Makalic, Enes; Schmidt, Daniel F.; Fletcher, Olivia; Peto, Julian; Gibson, Lorna; dos Santos Silva, Isabel; Ahsan, Habib; Whittemore, Alice; Waisfisz, Quinten; Meijers-Heijboer, Hanne; Adank, Muriel; van der Luijt, Rob B.; Uitterlinden, Andre G.; Hofman, Albert; Meindl, Alfons; Schmutzler, Rita K.; Müller-Myhsok, Bertram; Lichtner, Peter; Nevanlinna, Heli; Muranen, Taru A.; Aittomäki, Kristiina; Blomqvist, Carl; Chang-Claude, Jenny; Hein, Rebecca; Dahmen, Norbert; Beckman, Lars; Crisponi, Laura; Hall, Per; Czene, Kamila; Irwanto, Astrid; Liu, Jianjun; Easton, Douglas F.; Turnbull, Clare; Rahman, Nazneen; Eeles, Rosalind; Kote-Jarai, Zsofia; Muir, Kenneth; Giles, Graham; Neal, David; Donovan, Jenny L.; Hamdy, Freddie C.; Wiklund, Fredrik; Gronberg, Henrik; Haiman, Christopher; Schumacher, Fred; Travis, Ruth; Riboli, Elio; Hunter, David; Gapstur, Susan; Berndt, Sonja; Chanock, Stephen; Han, Younghun; Su, Li; Wei, Yongyue; Hung, Rayjean J.; Brhane, Yonathan; McLaughlin, John; Brennan, Paul; McKay, James D.; Rosenberger, Albert; Houlston, Richard S.; Caporaso, Neil; Teresa Landi, Maria; Heinrich, Joachim; Wu, Xifeng; Ye, Yuanqing; Christiani, 
David C.

    2016-01-01

    Background: Observational studies examining associations between adult height and risk of colorectal, prostate, and lung cancers have generated mixed results. We conducted meta-analyses using data from prospective cohort studies and further carried out Mendelian randomization analyses, using

  8. Evaluating mepindolol in a test model of examination anxiety in students.

    Science.gov (United States)

    Krope, P; Kohrs, A; Ott, H; Wagner, W; Fichte, K

    1982-03-01

    The effect of a single dose of beta-blocker (5 or 10 mg mepindolol) during a written examination was investigated in two double-blind studies (N = 49 and 55 students, respectively). The question was whether the beta-blocker would, in comparison to placebo, diminish examination anxiety and improve the performance of highly complex tasks, while leaving the performance of less complex tasks unchanged. A reduction in examination anxiety after beta-blocker intake could not be demonstrated with a multi-level test model (which included the parameters self-rated anxiety, motor behaviour, task performance and physiology), although pulse rates were lowered significantly. An improvement in performance could not be observed, while - by the same token - performance was not impaired by the beta-blocker. A hypothesis according to which a beta-blocker has an anxiolytic effect and improves performance, dependent on the level of habitual examination anxiety, was tested post hoc, but could not be confirmed. Ten of the subjects treated with 10 mg mepindolol complained of various side effects, including dizziness, fatigue and headache.

  9. Rockslide and Impulse Wave Modelling in the Vajont Reservoir by DEM-CFD Analyses

    Science.gov (United States)

    Zhao, T.; Utili, S.; Crosta, G. B.

    2016-06-01

    This paper investigates the generation of hydrodynamic water waves due to rockslides plunging into a water reservoir. Quasi-3D DEM analyses in plane strain by a coupled DEM-CFD code are adopted to simulate the rockslide from its onset to the impact with the still water and the subsequent generation of the wave. The employed numerical tools and upscaling of hydraulic properties allow prediction of a physical response in broad agreement with the observations, notwithstanding the assumptions and characteristics of the adopted methods. The results obtained by the DEM-CFD coupled approach are compared to those published in the literature and those presented by Crosta et al. (Landslide spreading, impulse waves and modelling of the Vajont rockslide. Rock mechanics, 2014) in a companion paper obtained through an ALE-FEM method. Analyses performed along two cross sections are representative of the limit conditions of the eastern and western slope sectors. The maximum average rockslide velocity and the water wave velocity reach ca. 22 and 20 m/s, respectively. The maximum computed run up amounts to ca. 120 and 170 m for the eastern and western lobe cross sections, respectively. These values are reasonably similar to those recorded during the event (i.e. ca. 130 and 190 m, respectively). Therefore, the overall study lays out a possible DEM-CFD framework for the modelling of the generation of the hydrodynamic wave due to the impact of a rapid moving rockslide or rock-debris avalanche.

  10. Dependence regulation in newlywed couples: A prospective examination.

    Science.gov (United States)

    Derrick, Jaye L; Leonard, Kenneth E; Homish, Gregory G

    2012-12-01

    According to the Risk Regulation Model (Murray, S. L., Holmes, J. G., & Collins, N. L. (2006). Optimizing assurance: The risk regulation system in relationships. Psychological Bulletin, 132, 641-666), people need to trust in their partner's regard before they risk interdependence. The current study prospectively examines the association between perceived regard and levels of dependence in newlywed couples over nine years of marriage. Analyses demonstrate that changes in perceived regard predict levels of dependence, changes in dependence do not predict perceived regard, and alternative explanations cannot account for these effects. Further, changes in perceived regard prospectively predict divorce, and levels of dependence mediate this association. Results are discussed in terms of the dependence regulation component of the Risk Regulation Model.

  11. Examining the stress-burnout relationship: the mediating role of negative thoughts.

    Science.gov (United States)

    Chang, Ko-Hsin; Lu, Frank J H; Chyi, Theresa; Hsu, Ya-Wen; Chan, Shi-Wei; Wang, Erica T W

    2017-01-01

    Using Smith's (1986) cognitive-affective model of athletic burnout as a guiding framework, the purpose of this study was to examine the relationships among athletes' stress in life, negative thoughts, and the mediating role of negative thoughts on the stress-burnout relationship. A total of 300 college student-athletes (males = 174; females = 126, Mage = 20.43 y, SD = 1.68) completed the College Student Athlete's Life Stress Scale (CSALSS; Lu et al., 2012), the Automatic Thoughts Questionnaire (ATQ; Hollon & Kendall, 1980), and the Athlete Burnout Questionnaire (ABQ; Raedeke & Smith, 2001). Correlational analyses found that two types of life stress and four types of negative thoughts correlated with burnout. Additionally, hierarchical regression analyses found that four types of negative thoughts partially mediated the stress-burnout relationship. We concluded that an athlete's negative thoughts play a pivotal role in predicting athletes' stress-burnout relationship. Future studies may examine how irrational cognition influences athletes' motivation and psychological well-being.
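
    The mediation logic tested by such hierarchical regressions can be sketched with simulated data: regress burnout on stress alone (total effect c), then add the mediator (direct effect c'); partial mediation shows up as c' noticeably smaller than c but still above zero. Variable names and data below are illustrative, not the CSALSS/ATQ/ABQ scores.

    ```python
    # Minimal mediation sketch (Baron & Kenny style) on simulated data.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 300
    stress = rng.normal(size=n)
    thoughts = 0.6 * stress + rng.normal(scale=0.8, size=n)            # mediator
    burnout = 0.3 * stress + 0.5 * thoughts + rng.normal(scale=0.7, size=n)

    def coeffs(y, predictors):
        """OLS coefficients (intercept first) via least squares."""
        A = np.column_stack([np.ones(len(y))] + list(predictors))
        return np.linalg.lstsq(A, y, rcond=None)[0]

    c = coeffs(burnout, [stress])[1]                  # total effect of stress
    c_prime = coeffs(burnout, [stress, thoughts])[1]  # direct effect, mediator held
    print(f"total effect c = {c:.2f}, direct effect c' = {c_prime:.2f}")
    ```

    In the simulated setup the drop from c to c' reflects the indirect path through the mediator, mirroring the partial mediation reported in the abstract.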

  13. Modelling and Analysing Access Control Policies in XACML 3.0

    DEFF Research Database (Denmark)

    Ramli, Carroline Dewi Puspa Kencana

    (c.f. GM03,Mos05,Ris13) and manual analysis of the overall effect and consequences of a large XACML policy set is a very daunting and time-consuming task. In this thesis we address the problem of understanding the semantics of the access control policy language XACML, in particular XACML version 3.0. The main focus of this thesis is modelling and analysing access control policies in XACML 3.0. There are two main contributions in this thesis. First, we study and formalise XACML 3.0, in particular the Policy Decision Point (PDP). The concrete syntax of XACML is based on the XML format, while its standard semantics is described normatively using natural language. The use of English text in standardisation leads to the risk of misinterpretation and ambiguity. In order to avoid this drawback, we define an abstract syntax of XACML 3.0 and a formal XACML semantics. Second, we propose a logic-based XACML analysis...

  14. Risk Factor Analyses for the Return of Spontaneous Circulation in the Asphyxiation Cardiac Arrest Porcine Model

    Directory of Open Access Journals (Sweden)

    Cai-Jun Wu

    2015-01-01

    Full Text Available Background: Animal models of asphyxiation cardiac arrest (ACA are frequently used in basic research to mirror the clinical course of cardiac arrest (CA. The rates of the return of spontaneous circulation (ROSC in ACA animal models are lower than those from studies that have utilized ventricular fibrillation (VF animal models. The purpose of this study was to characterize the factors associated with the ROSC in the ACA porcine model. Methods: Forty-eight healthy miniature pigs underwent endotracheal tube clamping to induce CA. Once induced, CA was maintained untreated for a period of 8 min. Two minutes following the initiation of cardiopulmonary resuscitation (CPR, defibrillation was attempted until ROSC was achieved or the animal died. To assess the factors associated with ROSC in this CA model, logistic regression analyses were performed to analyze gender, the time of preparation, the amplitude spectrum area (AMSA from the beginning of CPR and the pH at the beginning of CPR. A receiver-operating characteristic (ROC curve was used to evaluate the predictive value of AMSA for ROSC. Results: ROSC was achieved in only 52.1% of animals in this ACA porcine model. The multivariate logistic regression analyses revealed that ROSC significantly depended on the time of preparation, AMSA at the beginning of CPR and pH at the beginning of CPR. The area under the ROC curve for AMSA at the beginning of CPR in predicting ROSC was 0.878 (95% confidence interval: 0.773∼0.983), and the optimum cut-off value was 15.62 (specificity 95.7% and sensitivity 80.0%. Conclusions: The time of preparation, AMSA and the pH at the beginning of CPR were associated with ROSC in this ACA porcine model. AMSA also predicted the likelihood of ROSC in this ACA animal model.
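
    The ROC-based cut-off selection described above can be sketched as follows: rank a continuous predictor (AMSA at the start of CPR) against the binary ROSC outcome and pick the cutoff maximizing Youden's J = sensitivity + specificity - 1. The data below are invented for illustration, not the study's measurements.

    ```python
    # Hedged sketch of ROC cutoff selection with Youden's J, pure Python.
    # Scores and outcomes are made-up numbers.

    def roc_points(scores, labels):
        """Yield (cutoff, sensitivity, specificity) for each candidate cutoff."""
        pos = [s for s, y in zip(scores, labels) if y == 1]
        neg = [s for s, y in zip(scores, labels) if y == 0]
        for cut in sorted(set(scores)):
            sens = sum(s >= cut for s in pos) / len(pos)
            spec = sum(s < cut for s in neg) / len(neg)
            yield cut, sens, spec

    amsa = [18.1, 16.3, 20.4, 12.0, 15.9, 9.7, 14.2, 21.5, 11.3, 17.0]
    rosc = [1,    1,    1,    0,    1,    0,   0,    1,    0,    1]

    best = max(roc_points(amsa, rosc), key=lambda t: t[1] + t[2] - 1)
    print(f"optimal cutoff {best[0]}: "
          f"sensitivity {best[1]:.2f}, specificity {best[2]:.2f}")
    ```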

  15. Pathway models for analysing and managing the introduction of alien plant pests—an overview and categorization

    Science.gov (United States)

    J.C. Douma; M. Pautasso; R.C. Venette; C. Robinet; L. Hemerik; M.C.M. Mourits; J. Schans; W. van der Werf

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative estimates of introduction risks and effectiveness of management options....

  16. Predicting Eating Disorder Group Membership: An Examination and Extension of the Sociocultural Model

    Science.gov (United States)

    Engler, Patricia A.; Crowther, Janis H.; Dalton, Ginnie; Sanftner, Jennifer L.

    2006-01-01

    The purpose of this research was to examine and extend portions of the sociocultural model of bulimia nervosa (Stice, E. (1994). Review of the evidence for a sociocultural model of bulimia nervosa and an exploration of the mechanisms of action. "Clinical Psychology Review," 14, 633-661; Stice, E., & Agras, W. S. (1998). Predicting onset and…

  17. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component over the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid-scale size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should be considered in the interpretation of SCM simulations.

  18. Devising a New Model of Demand-Based Learning Integrated with Social Networks and Analyses of its Performance

    Directory of Open Access Journals (Sweden)

    Bekim Fetaji

    2018-02-01

    Full Text Available The focus of this research study is to devise a new model for demand-based learning that is integrated with social networks such as Facebook, Twitter and others. The study investigates this by reviewing the published literature and carrying out a case study analysis of the practical implementation of the new model. The study focuses on analyzing demand-based learning and investigating how it can be improved by devising a specific model that incorporates social network use. Statistical analyses of the questionnaire results, addressing the research questions and hypotheses, showed that there is a need for introducing new models into the teaching process. The originality lies in the introduction of the social-login approach to an educational environment; the approach is counted as a contribution towards developing a demand-based web application, which aims to modernize the educational pattern of communication, introduce the social-login approach, and increase knowledge transfer as well as improve learners' performance and skills. Insights and recommendations are provided, argued for and discussed.

  19. From IPE [individual plant examinations] to IPEEE [individual plant examination of external events]

    International Nuclear Information System (INIS)

    Newton, I.M.

    1994-01-01

    In addition to doing individual plant examinations (IPEs) which assess risk to nuclear plants from internal factors, all US plants are now also required to analyse external events and submit an IPEEE (Individual Plant Examination of External Events). Specifically, the IPEEEs require an assessment of plant-specific risks from the following types of initiating events: seismic events; fire; wind; tornadoes; flooding; accidents involving transportation or nearby facilities, such as oil refineries. (author)

  20. "PERLE bedside-examination-course for candidates in state examination" - Developing a training program for the third part of medical state examination (oral examination with practical skills).

    Science.gov (United States)

    Karthaus, Anne; Schmidt, Anita

    2016-01-01

    In preparation for the state examination, many students have open questions and a need for advice. Tutors of the Skills Lab PERLE-"Praxis ERfahren und Lernen" (experiencing and learning practical skills) have developed a new course concept to provide support and practical assistance for the examinees. The course aims to familiarize the students with the exam situation in order to gain more confidence. This lets the students confront the specific exam situation in a protected environment. Furthermore, soft skills are utilized and trained. Concept of the course: The course was inspired by the OSCE model (Objective Structured Clinical Examination), an example of case-based learning and assessment. Acquired knowledge can be revised and extended through the case studies. Experienced tutors provide assistance in discipline-specific competencies and help with organizational issues such as dress code and behaviour. Evaluation of the course: An evaluation was conducted by the attending participants after every course. Based on this assessment, the course is continually being developed. In March, April and October 2015, six courses with a total of 84 participants took place. Overall, 76 completed questionnaires (91%) were analysed. Strengths of the course are a good tutor-participant ratio of 1:4 (one tutor provides guidance for four participants), the interactivity of the course, and the high flexibility in responding to the group's needs. Weaknesses are the tight schedule and the evaluation before and after the course, which has not yet been performed. In terms of "best practice", this article shows an example of how to offer low-cost and low-threshold preparation for the state examination.

  1. Integrated tokamak modelling with the fast-ion Fokker–Planck solver adapted for transient analyses

    International Nuclear Information System (INIS)

    Toma, M; Hamamatsu, K; Hayashi, N; Honda, M; Ide, S

    2015-01-01

    Integrated tokamak modelling that enables the simulation of an entire discharge period is indispensable for designing advanced tokamak plasmas. For this purpose, we extend the integrated code TOPICS to make it more suitable for transient analyses in the fast-ion part. The fast-ion Fokker–Planck solver is integrated into TOPICS at the same level as the bulk transport solver so that the time evolutions of the fast ion and the bulk plasma are consistent with each other as well as with the equilibrium magnetic field. The fast-ion solver simultaneously handles neutral beam-injected ions and alpha particles. Parallelisation of the fast-ion solver in addition to its computational lightness owing to a dimensional reduction in the phase space enables transient analyses for long periods on the order of tens of seconds. The fast-ion Fokker–Planck calculation is compared with an orbit-following Monte Carlo calculation and confirmed to be in good agreement. The integrated code is applied to ramp-up simulations for JT-60SA and ITER to confirm its capability and effectiveness in transient analyses. In the integrated simulations, the coupled evolution of the fast ions, plasma profiles, and equilibrium magnetic fields are presented. In addition, the electric acceleration effect on fast ions is shown and discussed. (paper)

  2. An integrative process model of leadership: examining loci, mechanisms, and event cycles.

    Science.gov (United States)

    Eberly, Marion B; Johnson, Michael D; Hernandez, Morela; Avolio, Bruce J

    2013-09-01

    Utilizing the locus (source) and mechanism (transmission) of leadership framework (Hernandez, Eberly, Avolio, & Johnson, 2011), we propose and examine the application of an integrative process model of leadership to help determine the psychological interactive processes that constitute leadership. In particular, we identify the various dynamics involved in generating leadership processes by modeling how the loci and mechanisms interact through a series of leadership event cycles. We discuss the major implications of this model for advancing an integrative understanding of what constitutes leadership and its current and future impact on the field of psychological theory, research, and practice. © 2013 APA, all rights reserved.

  3. Visual persuasion with physically attractive models in ads: An examination of how the ad model influences product evaluations

    OpenAIRE

    Söderlund, Magnus; Lange, Fredrik

    2006-01-01

    This paper examines the prevalent advertising practice of visually juxtaposing an anonymous, physically attractive ad model and a product in terms of its effects on the attitude toward the product. In this appeal, in which there are no explicit verbal claims about how the two objects are connected, we argue that the physically attractive model sets in motion a process in which emotions and the attitude toward the ad model serve as mediating variables, and that this process ultimately results ...

  4. Model tests and numerical analyses on horizontal impedance functions of inclined single piles embedded in cohesionless soil

    Science.gov (United States)

    Goit, Chandra Shekhar; Saitoh, Masato

    2013-03-01

    Horizontal impedance functions of inclined single piles are measured experimentally for model soil-pile systems with both the effects of local soil nonlinearity and resonant characteristics. Two practical pile inclinations of 5° and 10° in addition to a vertical pile embedded in cohesionless soil and subjected to lateral harmonic pile head loadings for a wide range of frequencies are considered. Results obtained with low-to-high amplitude of lateral loadings on model soil-pile systems encased in a laminar shear box show that the local nonlinearities have a profound impact on the horizontal impedance functions of piles. Horizontal impedance functions of inclined piles are found to be smaller than those of the vertical pile, and the values decrease as the angle of pile inclination increases. Distinct values of horizontal impedance functions are obtained for the 'positive' and 'negative' cycles of harmonic loadings, leading to asymmetric force-displacement relationships for the inclined piles. Validation of these experimental results is carried out through three-dimensional nonlinear finite element analyses, and the results from the numerical models are in good agreement with the experimental data. Sensitivity analyses conducted on the numerical models suggest that the consideration of local nonlinearity in the vicinity of the soil-pile interface influences the response of the soil-pile systems.

  5. Normalisation genes for expression analyses in the brown alga model Ectocarpus siliculosus

    Directory of Open Access Journals (Sweden)

    Rousvoal Sylvie

    2008-08-01

    Full Text Available Abstract Background Brown algae are plant multi-cellular organisms occupying most of the world's coasts and are essential actors in the constitution of ecological niches at the shoreline. Ectocarpus siliculosus is an emerging model for brown algal research. Its genome has been sequenced, and several tools are being developed to perform analyses at different levels of cell organization, including transcriptomic expression analyses. Several topics, including physiological responses to osmotic stress and to exposure to contaminants and solvents, are being studied in order to better understand the adaptive capacity of brown algae to pollution and environmental changes. A series of genes that can be used to normalise expression analyses is required for these studies. Results We monitored the expression of 13 genes under 21 different culture conditions. These included genes encoding proteins and factors involved in protein translation (ribosomal protein 26S, EF1alpha, IF2A, IF4E), protein degradation (ubiquitin, ubiquitin conjugating enzyme) or folding (cyclophilin), proteins involved in both the structure of the cytoskeleton (tubulin alpha, actin, actin-related proteins) and its trafficking function (dynein), as well as a protein implicated in carbon metabolism (glucose 6-phosphate dehydrogenase). The stability of their expression level was assessed using the Ct range, and by applying both the geNorm and the Normfinder principles of calculation. Conclusion Comparisons of the data obtained with the three methods of calculation indicated that EF1alpha (EF1a) was the best reference gene for normalisation. The normalisation factor should be calculated with at least two genes, alpha tubulin, ubiquitin-conjugating enzyme or actin-related proteins being good partners of EF1a. Our results exclude actin as a good normalisation gene, and, in this, are in agreement with previous studies in other organisms.
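
    The geNorm principle applied above can be sketched briefly: a candidate reference gene is stable when its log-ratio to every other candidate varies little across conditions, summarized by an M value (the mean standard deviation of pairwise log2 ratios; lower is more stable). Expression values below are invented for illustration and do not reproduce the study's measurements.

    ```python
    # Hedged sketch of the geNorm M-value calculation on made-up data.
    import math
    from statistics import stdev

    expression = {            # gene -> relative expression in 4 conditions
        "EF1a":    [1.00, 1.05, 0.98, 1.02],
        "tubulin": [1.00, 0.80, 1.25, 0.90],
        "actin":   [1.00, 2.10, 0.40, 1.70],
    }

    def gene_stability(gene):
        """M value: mean stdev of log2 expression ratios to all other genes."""
        others = [g for g in expression if g != gene]
        sds = []
        for other in others:
            ratios = [math.log2(a / b)
                      for a, b in zip(expression[gene], expression[other])]
            sds.append(stdev(ratios))
        return sum(sds) / len(sds)

    ranked = sorted(expression, key=gene_stability)
    print("most to least stable:", ranked)
    ```

    In this toy data set the tightly clustered EF1a ranks most stable and the wildly varying actin least stable, mirroring the ranking direction reported in the abstract.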

  6. Complementary Exploratory and Confirmatory Factor Analyses of the French WISC-V: Analyses Based on the Standardization Sample.

    Science.gov (United States)

    Lecerf, Thierry; Canivez, Gary L

    2017-12-28

    Interpretation of the French Wechsler Intelligence Scale for Children-Fifth Edition (French WISC-V; Wechsler, 2016a) is based on a 5-factor model including Verbal Comprehension (VC), Visual Spatial (VS), Fluid Reasoning (FR), Working Memory (WM), and Processing Speed (PS). Evidence for the French WISC-V factorial structure was established exclusively through confirmatory factor analyses (CFAs). However, as recommended by Carroll (1995), Reise (2012), and Brown (2015), factorial structure should derive from both exploratory factor analysis (EFA) and CFA. The first goal of this study was to examine the factorial structure of the French WISC-V using EFA. The 15 French WISC-V primary and secondary subtest scaled scores intercorrelation matrix was used and factor extraction criteria suggested from 1 to 4 factors. To disentangle the contribution of first- and second-order factors, the Schmid and Leiman (1957) orthogonalization transformation (SLT) was applied. Overall, no EFA evidence for 5 factors was found. Results indicated that the g factor accounted for about 67% of the common variance and that the contributions of the first-order factors were weak (3.6 to 11.9%). CFA was used to test numerous alternative models. Results indicated that bifactor models produced better fit to these data than higher-order models. Consistent with previous studies, findings suggested dominance of the general intelligence factor and that users should thus emphasize the Full Scale IQ (FSIQ) when interpreting the French WISC-V. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
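
    The variance bookkeeping behind such Schmid-Leiman results can be sketched directly: given orthogonalized loadings, the share of common variance carried by the general factor is the ratio of its squared loadings to all squared loadings. The loading matrix below is invented and does not reproduce the French WISC-V figures.

    ```python
    # Sketch: apportioning common variance between a general factor and
    # group factors from a Schmid-Leiman-style loading matrix (invented).
    import numpy as np

    # rows = subtests; column 0 = general factor, columns 1-2 = group factors
    loadings = np.array([
        [0.70, 0.30, 0.00],
        [0.65, 0.35, 0.00],
        [0.60, 0.00, 0.25],
        [0.72, 0.00, 0.30],
    ])

    common = (loadings ** 2).sum()                 # total common variance
    share = (loadings ** 2).sum(axis=0) / common   # per-factor share
    print(f"general factor: {share[0]:.0%} of common variance")
    ```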

  7. Uncertainty Analyses and Strategy

    International Nuclear Information System (INIS)

    Kevin Coppersmith

    2001-01-01

    The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called "conservative" assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the "reasonable assurance" approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository

  8. Two-phase modeling of deflagration-to-detonation transition in granular materials: A critical examination of modeling issues

    International Nuclear Information System (INIS)

    Bdzil, J.B.; Menikoff, R.; Son, S.F.; Kapila, A.K.; Stewart, D.S.

    1999-01-01

    The two-phase mixture model developed by Baer and Nunziato (BN) to study the deflagration-to-detonation transition (DDT) in granular explosives is critically reviewed. The continuum-mixture theory foundation of the model is examined, with particular attention paid to the manner in which its constitutive functions are formulated. Connections between the mechanical and energetic phenomena occurring at the scales of the grains, and their manifestations on the continuum averaged scale, are explored. The nature and extent of approximations inherent in formulating the constitutive terms, and their domain of applicability, are clarified. Deficiencies and inconsistencies in the derivation are cited, and improvements suggested. It is emphasized that the entropy inequality constrains but does not uniquely determine the phase interaction terms. The resulting flexibility is exploited to suggest improved forms for the phase interactions. These improved forms better treat the energy associated with the dynamic compaction of the bed and the single-phase limits of the model. Companion papers of this study [Kapila et al., Phys. Fluids 9, 3885 (1997); Kapila et al., in preparation; Son et al., in preparation] examine simpler, reduced models, in which the fine scales of velocity and pressure disequilibrium between the phases allow the corresponding relaxation zones to be treated as discontinuities that need not be resolved in a numerical computation. copyright 1999 American Institute of Physics

  9. Geomechanical analyses to investigate wellbore/mine interactions in the Potash Enclave of Southeastern New Mexico.

    Energy Technology Data Exchange (ETDEWEB)

    Ehgartner, Brian L.; Bean, James E. (Sandia Staffing Alliance, LLC, Albuquerque, NM); Arguello, Jose Guadalupe, Jr.; Stone, Charles Michael

    2010-04-01

    Geomechanical analyses have been performed to investigate potential mine interactions with wellbores that could occur in the Potash Enclave of Southeastern New Mexico. Two basic models were used in the study: (1) a global model that simulates the mechanics associated with mining and subsidence, and (2) a wellbore model that examines the resulting interaction impacts on the wellbore casing. The first model is a 2D approximation of a potash mine using a plane strain idealization for mine depths of 304.8 m (1000 ft) and 609.6 m (2000 ft). A 3D wellbore model then considers the impact of bedding plane slippage across single- and double-cased wells cemented through the Salado formation. The wellbore model establishes allowable slippage to prevent casing yield.

  10. Systematic review of model-based analyses reporting the cost-effectiveness and cost-utility of cardiovascular disease management programs.

    Science.gov (United States)

    Maru, Shoko; Byrnes, Joshua; Whitty, Jennifer A; Carrington, Melinda J; Stewart, Simon; Scuffham, Paul A

    2015-02-01

    The reported cost effectiveness of cardiovascular disease management programs (CVD-MPs) is highly variable, potentially leading to different funding decisions. This systematic review evaluates published modeled analyses to compare study methods and quality. Articles were included if an incremental cost-effectiveness ratio (ICER) or incremental cost-utility ratio (ICUR) was reported, the intervention was a multi-component program designed to manage or prevent a cardiovascular disease condition, and the study addressed all domains specified in the American Heart Association Taxonomy for Disease Management. Nine articles (reporting 10 clinical outcomes) were included. Eight cost-utility and two cost-effectiveness analyses targeted hypertension (n=4), coronary heart disease (n=2), coronary heart disease plus stroke (n=1), heart failure (n=2) and hyperlipidemia (n=1). Study perspectives included the healthcare system (n=5), societal and fund holders (n=1), or a third-party payer (n=3), or were not explicitly stated (n=1). All analyses were modeled based on interventions of one to two years' duration. Time horizons were two years (n=1), 10 years (n=1), or lifetime (n=8). Model structures included Markov models (n=8) and 'decision analytic models' (n=1), or were not explicitly stated (n=1). Considerable variation was observed in clinical and economic assumptions and reporting practices. Of all ICERs/ICURs reported, including those of subgroups (n=16), four were above a US$50,000 acceptability threshold, six were below and six were dominant. The majority of CVD-MPs were reported to have favorable economic outcomes, but 25% came at unacceptably high cost for the outcomes achieved. Use of standardized reporting tools should increase transparency and inform what drives the cost-effectiveness of CVD-MPs. © The European Society of Cardiology 2014.
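    The acceptability comparison described above reduces to simple arithmetic. A minimal sketch, with all costs and effects invented for illustration (the review itself reports no such raw inputs):

```python
# Hypothetical ICER calculation compared against a willingness-to-pay
# threshold; every number here is invented for illustration.

def icer(cost_new, cost_old, effect_new, effect_old):
    """Incremental cost per unit of incremental effect (e.g., per QALY)."""
    return (cost_new - cost_old) / (effect_new - effect_old)

ratio = icer(cost_new=12_000, cost_old=7_000, effect_new=4.1, effect_old=3.9)
print(round(ratio))     # incremental cost per QALY gained
print(ratio <= 50_000)  # below the US$50,000 acceptability threshold?
```

    An intervention whose ICER falls below the threshold (or that is cheaper and more effective, i.e., dominant) would be classed as favorable under the review's criterion.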

  11. Examining human behavior in video games: The development of a computational model to measure aggression.

    Science.gov (United States)

    Lamb, Richard; Annetta, Leonard; Hoston, Douglas; Shapiro, Marina; Matthews, Benjamin

    2018-06-01

    Video games with violent content have raised considerable concern in popular media and within academia. Recently, considerable attention has been paid to the claimed relationship between aggression and video game play. The authors of this study propose the use of a new class of tools, developed via computational models, to examine the question of whether there is a relationship between violent video games and aggression. The purpose of this study is to computationally model and compare the General Aggression Model with the Diathesis Model of Aggression as they relate to the play of violent content in video games. A secondary purpose is to provide a method of measuring and examining individual aggression arising from video game play. The total number of participants examined for this study is N = 1065. This study occurs in three phases. Phase 1 is the development and quantification of the profile combination of traits via latent class profile analysis. Phase 2 is the training of the artificial neural network. Phase 3 is the comparison of each model as a computational model with and without the presence of video game violence. Results suggest that a combination of environmental factors and genetic predispositions triggers aggression related to video games.

  12. Performance and Vibration Analyses of Lift-Offset Helicopters

    Directory of Open Access Journals (Sweden)

    Jeong-In Go

    2017-01-01

    Full Text Available A validation study on the performance and vibration analyses of the XH-59A compound helicopter is conducted to establish techniques for the comprehensive analysis of lift-offset compound helicopters. This study considers the XH-59A lift-offset compound helicopter using a rigid coaxial rotor system as a verification model. CAMRAD II (Comprehensive Analytical Method of Rotorcraft Aerodynamics and Dynamics II), a comprehensive analysis code, is used as a tool for the performance, vibration, and loads analyses. A general free wake model, more sophisticated than simpler wake models, is used to obtain good results for the comprehensive analysis. Performance analyses of the XH-59A helicopter with and without auxiliary propulsion are conducted in various flight conditions. In addition, vibration analyses of the XH-59A compound helicopter configuration are conducted in the forward flight condition. The present comprehensive analysis results are in good agreement with the flight test and previous analyses. Therefore, techniques for the comprehensive analysis of lift-offset compound helicopters are appropriately established. Furthermore, the rotor lifts are calculated for the XH-59A lift-offset compound helicopter in the forward flight condition to investigate the airloads characteristics of the ABC™ (Advancing Blade Concept) rotor.

  13. Structural validity of the Wechsler Intelligence Scale for Children-Fifth Edition: Confirmatory factor analyses with the 16 primary and secondary subtests.

    Science.gov (United States)

    Canivez, Gary L; Watkins, Marley W; Dombrowski, Stefan C

    2017-04-01

    The factor structure of the Wechsler Intelligence Scale for Children-Fifth Edition (WISC-V; Wechsler, 2014a) standardization sample (N = 2,200) was examined using confirmatory factor analyses (CFA) with maximum likelihood estimation for all reported models from the WISC-V Technical and Interpretation Manual (Wechsler, 2014b). Additionally, alternative bifactor models were examined and variance estimates and model-based reliability estimates (ω coefficients) were provided. Results from analyses of the 16 primary and secondary WISC-V subtests found that all higher-order CFA models with 5 group factors (VC, VS, FR, WM, and PS) produced model specification errors where the Fluid Reasoning factor produced negative variance and were thus judged inadequate. Of the 16 models tested, the bifactor model containing 4 group factors (VC, PR, WM, and PS) produced the best fit. Results from analyses of the 10 primary WISC-V subtests also found the bifactor model with 4 group factors (VC, PR, WM, and PS) produced the best fit. Variance estimates from both 16 and 10 subtest based bifactor models found dominance of general intelligence (g) in accounting for subtest variance (except for PS subtests) and large ω-hierarchical coefficients supporting general intelligence interpretation. The small portions of variance uniquely captured by the 4 group factors and low ω-hierarchical subscale coefficients likely render the group factors of questionable interpretive value independent of g (except perhaps for PS). Present CFA results confirm the EFA results reported by Canivez, Watkins, and Dombrowski (2015); Dombrowski, Canivez, Watkins, and Beaujean (2015); and Canivez, Dombrowski, and Watkins (2015). (PsycINFO Database Record (c) 2017 APA, all rights reserved).
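    The ω-hierarchical coefficient used above to gauge general-factor dominance can be computed directly from standardized bifactor loadings. A hedged sketch with invented loadings and a single group factor (not the actual WISC-V estimates):

```python
# Omega-hierarchical for a toy bifactor model: the proportion of total
# composite variance attributable to the general factor alone.
# All loadings are hypothetical, not WISC-V values.

g = [0.7, 0.6, 0.8, 0.5]   # general-factor loadings (invented)
s = [0.3, 0.4, 0.2, 0.3]   # group-factor loadings (invented)
err = [1 - gi**2 - si**2 for gi, si in zip(g, s)]  # standardized uniquenesses

total_var = sum(g)**2 + sum(s)**2 + sum(err)
omega_h = sum(g)**2 / total_var
print(round(omega_h, 2))   # share of composite variance due to g
```

    A large ω-hierarchical relative to the group factors' contributions is what supports interpreting the composite primarily as general intelligence.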

  14. SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.

    2014-10-28

    PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analyses employ a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses which may be useful in determining the distribution of Tc-99 in the various SDUs over time and in determining flow balances for the SDUs.

  15. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses.

    Science.gov (United States)

    Welton, Nicky J; Soares, Marta O; Palmer, Stephen; Ades, Anthony E; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M

    2015-07-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. © The Author(s) 2015.
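    The random-effects summaries discussed above can be sketched with the DerSimonian-Laird estimator. All study effects and variances below are invented for illustration; real CEA inputs would come from the (network) meta-analysis the paper describes:

```python
# DerSimonian-Laird random-effects pooling of study effect sizes
# (e.g., log odds ratios). All inputs are invented for illustration.
import math

y = [-0.5, -0.2, -0.8, -0.3]   # study effect estimates (hypothetical)
v = [0.04, 0.09, 0.06, 0.05]   # within-study variances (hypothetical)

w = [1 / vi for vi in v]                              # fixed-effect weights
yf = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)    # fixed-effect mean
Q = sum(wi * (yi - yf) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
df = len(y) - 1
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)     # between-study (heterogeneity) variance

wr = [1 / (vi + tau2) for vi in v]                    # random-effects weights
mu = sum(wi * yi for wi, yi in zip(wr, y)) / sum(wr)  # random-effects mean
se = math.sqrt(1 / sum(wr))
print(round(mu, 3), round(se, 3))
```

    The paper's point is that the random-effects mean is only one candidate summary; the predictive distribution (variance se² + tau²) or a covariate-adjusted estimate may better match the decision setting.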

  16. Multicollinearity in prognostic factor analyses using the EORTC QLQ-C30: identification and impact on model selection.

    Science.gov (United States)

    Van Steen, Kristel; Curran, Desmond; Kramer, Jocelyn; Molenberghs, Geert; Van Vreckem, Ann; Bottomley, Andrew; Sylvester, Richard

    2002-12-30

    Clinical and quality of life (QL) variables from an EORTC clinical trial of first line chemotherapy in advanced breast cancer were used in a prognostic factor analysis of survival and response to chemotherapy. For response, different final multivariate models were obtained from forward and backward selection methods, suggesting a disconcerting instability. Quality of life was measured using the EORTC QLQ-C30 questionnaire completed by patients. Subscales on the questionnaire are known to be highly correlated, and therefore it was hypothesized that multicollinearity contributed to model instability. A correlation matrix indicated that global QL was highly correlated with 7 out of 11 variables. In a first attempt to explore multicollinearity, we used global QL as dependent variable in a regression model with other QL subscales as predictors. Afterwards, standard diagnostic tests for multicollinearity were performed. An exploratory principal components analysis and factor analysis of the QL subscales identified at most three important components and indicated that inclusion of global QL made minimal difference to the loadings on each component, suggesting that it is redundant in the model. In a second approach, we advocate a bootstrap technique to assess the stability of the models. Based on these analyses and since global QL exacerbates problems of multicollinearity, we therefore recommend that global QL be excluded from prognostic factor analyses using the QLQ-C30. The prognostic factor analysis was rerun without global QL in the model, and selected the same significant prognostic factors as before. Copyright 2002 John Wiley & Sons, Ltd.
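    One standard diagnostic alluded to above is the variance inflation factor (VIF). With a single other predictor it reduces to 1/(1 - r²); a minimal sketch with invented subscale scores (not the EORTC trial data):

```python
# Multicollinearity check via the variance inflation factor for two
# correlated predictors: VIF = 1 / (1 - r^2). All scores are invented.

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

global_ql = [60, 70, 80, 50, 90, 65]  # hypothetical QL subscale scores
fatigue   = [55, 68, 78, 52, 88, 60]  # nearly collinear with global QL

r = pearson_r(global_ql, fatigue)
vif = 1 / (1 - r ** 2)
print(vif > 10)  # rule-of-thumb flag for serious multicollinearity
```

    High VIFs of this kind are what destabilize stepwise selection, motivating the authors' recommendation to drop the redundant global QL subscale.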

  17. Testing a Dual Process Model of Gender-Based Violence: A Laboratory Examination.

    Science.gov (United States)

    Berke, Danielle S; Zeichner, Amos

    2016-01-01

    The dire impact of gender-based violence on society compels development of models comprehensive enough to capture the diversity of its forms. Research has established hostile sexism (HS) as a robust predictor of gender-based violence. However, to date, research has yet to link men's benevolent sexism (BS) to physical aggression toward women, despite correlations between BS and HS and between BS and victim blaming. One model, the opposing process model of benevolent sexism (Sibley & Perry, 2010), suggests that, for men, BS acts indirectly through HS to predict acceptance of hierarchy-enhancing social policy as an expression of a preference for in-group dominance (i.e., social dominance orientation [SDO]). The extent to which this model applies to gender-based violence remains untested. Therefore, in this study, 168 undergraduate men in a U.S. university participated in a competitive reaction time task, during which they had the option to shock an ostensible female opponent as a measure of gender-based violence. Results of multiple-mediation path analyses indicated dual pathways potentiating gender-based violence and highlight SDO as a particularly potent mechanism of this violence. Findings are discussed in terms of group dynamics and norm-based violence prevention.

  18. Predicting Examination Performance Using an Expanded Integrated Hierarchical Model of Test Emotions and Achievement Goals

    Science.gov (United States)

    Putwain, Dave; Deveney, Carolyn

    2009-01-01

    The aim of this study was to examine an expanded integrative hierarchical model of test emotions and achievement goal orientations in predicting the examination performance of undergraduate students. Achievement goals were theorised as mediating the relationship between test emotions and performance. 120 undergraduate students completed…

  19. Collapsing Factors in Multitrait-Multimethod Models: Examining Consequences of a Mismatch Between Measurement Design and Model

    Directory of Open Access Journals (Sweden)

    Christian eGeiser

    2015-08-01

    Full Text Available Models of confirmatory factor analysis (CFA are frequently applied to examine the convergent validity of scores obtained from multiple raters or methods in so-called multitrait-multimethod (MTMM investigations. Many applications of CFA-MTMM and similarly structured models result in solutions in which at least one method (or specific) factor shows non-significant loading or variance estimates. Eid et al. (2008) distinguished between MTMM measurement designs with interchangeable (randomly selected) versus structurally different (fixed) methods and showed that each type of measurement design implies specific CFA-MTMM measurement models. In the current study, we hypothesized that some of the problems that are commonly seen in applications of CFA-MTMM models may be due to a mismatch between the underlying measurement design and fitted models. Using simulations, we found that models with M method factors (where M is the total number of methods) and unconstrained loadings led to a higher proportion of solutions in which at least one method factor became empirically unstable when these models were fit to data generated from structurally different methods. The simulations also revealed that commonly used model goodness-of-fit criteria frequently failed to identify incorrectly specified CFA-MTMM models. We discuss implications of these findings for other complex CFA models in which similar issues occur, including nested (bifactor) and latent state-trait models.

  20. Examining the stress-burnout relationship: the mediating role of negative thoughts

    Directory of Open Access Journals (Sweden)

    Ko-Hsin Chang

    2017-12-01

    Full Text Available Background Using Smith's (1986) cognitive-affective model of athletic burnout as a guiding framework, the purpose of this study was to examine the relationships among athletes' life stress, negative thoughts, and burnout, and the mediating role of negative thoughts in the stress-burnout relationship. Methods A total of 300 college student-athletes (males = 174; females = 126; Mage = 20.43 y, SD = 1.68) completed the College Student Athlete's Life Stress Scale (CSALSS; Lu et al., 2012), the Automatic Thoughts Questionnaire (ATQ; Hollon & Kendall, 1980), and the Athlete Burnout Questionnaire (ABQ; Raedeke & Smith, 2001). Results Correlational analyses found that two types of life stress and four types of negative thoughts correlated with burnout. Additionally, hierarchical regression analyses found that four types of negative thoughts partially mediated the stress-burnout relationship. Discussion We conclude that an athlete's negative thoughts play a pivotal role in the stress-burnout relationship. Future studies may examine how irrational cognition influences athletes' motivation and psychological well-being.
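    The partial mediation reported above is typically checked by testing the indirect effect. A hedged sketch using a Sobel test with invented path coefficients and standard errors (not values from this study):

```python
# Sobel test for an indirect (mediated) effect of the kind used for a
# stress -> negative thoughts -> burnout pathway. All inputs are invented.
import math

a, se_a = 0.40, 0.08  # stress -> negative thoughts (hypothetical)
b, se_b = 0.35, 0.07  # negative thoughts -> burnout, controlling for stress

indirect = a * b
se_ind = math.sqrt(b ** 2 * se_a ** 2 + a ** 2 * se_b ** 2)
z = indirect / se_ind
print(round(z, 2), abs(z) > 1.96)  # significant at the .05 level?
```

    In practice bootstrap confidence intervals are often preferred over the Sobel approximation, since the product a·b is not normally distributed in small samples.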

  1. Modelling and simulation of complex sociotechnical systems: envisioning and analysing work environments

    Science.gov (United States)

    Hettinger, Lawrence J.; Kirlik, Alex; Goh, Yang Miang; Buckle, Peter

    2015-01-01

    Accurate comprehension and analysis of complex sociotechnical systems is a daunting task. Empirically examining, or simply envisioning the structure and behaviour of such systems challenges traditional analytic and experimental approaches as well as our everyday cognitive capabilities. Computer-based models and simulations afford potentially useful means of accomplishing sociotechnical system design and analysis objectives. From a design perspective, they can provide a basis for a common mental model among stakeholders, thereby facilitating accurate comprehension of factors impacting system performance and potential effects of system modifications. From a research perspective, models and simulations afford the means to study aspects of sociotechnical system design and operation, including the potential impact of modifications to structural and dynamic system properties, in ways not feasible with traditional experimental approaches. This paper describes issues involved in the design and use of such models and simulations and describes a proposed path forward to their development and implementation. Practitioner Summary: The size and complexity of real-world sociotechnical systems can present significant barriers to their design, comprehension and empirical analysis. This article describes the potential advantages of computer-based models and simulations for understanding factors that impact sociotechnical system design and operation, particularly with respect to process and occupational safety. PMID:25761227

  2. Analysing Feature Model Changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large-scale, highly variable system is a challenging task. For such a system, evolution operations often require consistent updates to both the implementation and the feature model. In this context, the evolution of the feature model closely follows the evolution of the system.

  3. Examining the identity of Yukawa with gauge couplings in supersymmetric QCD at LHC

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, A. [Zuerich Univ. (Switzerland). Inst. fuer Theoretische Physik; Skands, P. [Fermi National Accelerator Lab., Batavia, IL (United States); Spira, M. [Paul Scherrer Inst. (PSI), Villigen (Switzerland); Zerwas, P.M. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2007-03-15

    The identity of the quark-squark-gluino Yukawa coupling with the corresponding quark-quark-gluon QCD coupling in supersymmetric theories can be examined experimentally at the Large Hadron Collider (LHC). Extending earlier investigations of like-sign di-lepton final states, we include jets in the analysis of the minimal supersymmetric standard model, adding squark-gluino and gluino-pair production to squark-pair production. Moreover we expand the method towards model-independent analyses which cover more general scenarios. In all cases, squark decays to light charginos and neutralinos persist to play a dominant role. (orig.)

  4. Examining the identity of Yukawa with gauge couplings in supersymmetric QCD at LHC

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, Ayres; /Zurich U.; Skands, Peter Z.; /Fermilab; Spira, M.; /PSI, Villigen; Zerwas, P.M.; /DESY

    2007-03-01

    The identity of the quark-squark-gluino Yukawa coupling with the corresponding quark-quark-gluon QCD coupling in supersymmetric theories can be examined experimentally at the Large Hadron Collider (LHC). Extending earlier investigations of like-sign di-lepton final states, we include jets in the analysis of the minimal supersymmetric standard model, adding squark-gluino and gluino-pair production to squark-pair production. Moreover we expand the method towards model-independent analyses which cover more general scenarios. In all cases, squark decays to light charginos and neutralinos persist to play a dominant role.

  5. Examining the identity of Yukawa with gauge couplings in supersymmetric QCD at LHC

    International Nuclear Information System (INIS)

    Freitas, A.; Spira, M.; Zerwas, P.M.

    2007-03-01

    The identity of the quark-squark-gluino Yukawa coupling with the corresponding quark-quark-gluon QCD coupling in supersymmetric theories can be examined experimentally at the Large Hadron Collider (LHC). Extending earlier investigations of like-sign di-lepton final states, we include jets in the analysis of the minimal supersymmetric standard model, adding squark-gluino and gluino-pair production to squark-pair production. Moreover we expand the method towards model-independent analyses which cover more general scenarios. In all cases, squark decays to light charginos and neutralinos persist to play a dominant role. (orig.)

  6. An examination of stress and burnout in certified athletic trainers at division I-a universities.

    Science.gov (United States)

    Hendrix, A E; Acevedo, E O; Hebert, E

    2000-04-01

    A growing body of knowledge indicates that too much stress can negatively influence psychological and physical health. A model proposed by Smith to explore personal and situational variables, stress appraisal, and burnout has led to significant understanding of burnout of individuals working in service professions. We examined the relationship of hardiness, social support, and work-related issues relevant to athletic trainers to perceived stress and the relationship of perceived stress to burnout. Correlational analyses were performed to examine the relationships predicted by Smith's model. In addition, we conducted stepwise multiple regression analyses to assess the relative contributions of the personal and situational variables to perceived stress and to examine the relative impact of perceived stress on 3 burnout factors (emotional exhaustion, personal accomplishment, and depersonalization). One hundred eighteen certified athletic trainers working in National Collegiate Athletic Association Division I-A intercollegiate settings that maintain a football program. We assessed personal and situational variables using the Hardiness Test, the Social Support Questionnaire, and the Athletic Training Issues Survey, adapted for this study. The Perceived Stress Scale was used to assess stress appraisal, and the Maslach Burnout Inventory was used to assess 3 dimensions of burnout. Our results were in support of Smith's theoretical model of stress and burnout. Athletic trainers who scored lower on hardiness and social support and higher on athletic training issues tended to have higher levels of perceived stress. Furthermore, higher perceived stress scores were related to higher emotional exhaustion and depersonalization and lower levels of personal accomplishment. Our findings examining burnout in Division I athletic trainers were similar to those of other studies investigating coaches and coach-teachers and in support of Smith's theoretical model of stress and burnout.

  7. Comparative Analyses of Physics Candidates Scores in West African and National Examinations Councils

    Science.gov (United States)

    Utibe, Uduak James; Agah, John Joseph

    2015-01-01

    The study is a comparative analysis of physics candidates' scores in the West African and National Examinations Councils. It also investigates the influence of gender. Results of 480 candidates were randomly selected from three randomly selected Senior Science Colleges using the WASSCE and NECOSSCE computer printouts sent to the schools, transformed using…

  8. Multi-state Markov models for disease progression in the presence of informative examination times: an application to hepatitis C.

    Science.gov (United States)

    Sweeting, M J; Farewell, V T; De Angelis, D

    2010-05-20

    In many chronic diseases it is important to understand the rate at which patients progress from infection through a series of defined disease states to a clinical outcome, e.g. cirrhosis in hepatitis C virus (HCV)-infected individuals or AIDS in HIV-infected individuals. Typically data are obtained from longitudinal studies, which often are observational in nature, and where disease state is observed only at selected examinations throughout follow-up. Transition times between disease states are therefore interval censored. Multi-state Markov models are commonly used to analyze such data, but rely on the assumption that the examination times are non-informative, and hence the examination process is ignorable in a likelihood-based analysis. In this paper we develop a Markov model that relaxes this assumption through the premise that the examination process is ignorable only after conditioning on a more regularly observed auxiliary variable. This situation arises in a study of HCV disease progression, where liver biopsies (the examinations) are sparse, irregular, and potentially informative with respect to the transition times. We use additional information on liver function tests (LFTs), commonly collected throughout follow-up, to inform current disease state and to assume an ignorable examination process. The model developed has a similar structure to a hidden Markov model and accommodates both the series of LFT measurements and the partially latent series of disease states. We show through simulation how this model compares with the commonly used ignorable Markov model, and a Markov model that assumes the examination process is non-ignorable. Copyright 2010 John Wiley & Sons, Ltd.
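    The progression structure described above can be illustrated with a simple discrete-time analogue: in a progressive chain, k-step state occupancies follow from repeated application of the transition matrix. All transition probabilities below are invented, not HCV estimates:

```python
# k-step state occupancy in a toy progressive three-state Markov chain
# (mild -> moderate -> cirrhosis). Transition probabilities are invented.

P = [                    # one-year transition probabilities (hypothetical)
    [0.90, 0.10, 0.00],  # from mild
    [0.00, 0.85, 0.15],  # from moderate
    [0.00, 0.00, 1.00],  # cirrhosis is absorbing
]

def step(dist, P):
    """Propagate a state distribution one time step: dist' = dist @ P."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

dist = [1.0, 0.0, 0.0]   # everyone starts in the mild state
for _ in range(10):      # ten years of follow-up
    dist = step(dist, P)
print(round(dist[2], 3)) # probability of cirrhosis within 10 years -> 0.348
```

    The modeling challenge in the paper is that the states are only observed at biopsies, and biopsy timing may itself depend on disease severity, which is what the auxiliary LFT series is used to address.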

  9. Modeling Acequia Irrigation Systems Using System Dynamics: Model Development, Evaluation, and Sensitivity Analyses to Investigate Effects of Socio-Economic and Biophysical Feedbacks

    Directory of Open Access Journals (Sweden)

    Benjamin L. Turner

    2016-10-01

    Full Text Available Agriculture-based irrigation communities of northern New Mexico have survived for centuries despite the arid environment in which they reside. These irrigation communities are threatened by regional population growth, urbanization, a changing demographic profile, economic development, climate change, and other factors. Within this context, we investigated the extent to which community resource management practices centering on shared resources (e.g., water for agriculture in the floodplains and grazing resources in the uplands) and mutualism (i.e., shared responsibility of local residents for maintaining traditional irrigation policies and upholding cultural and spiritual observances embedded within the community structure) influence acequia function. We used a system dynamics modeling approach as an interdisciplinary platform to integrate these systems, specifically the relationship between community structure and resource management. In this paper we describe the background and context of acequia communities in northern New Mexico and the challenges they face. We formulate a Dynamic Hypothesis capturing the endogenous feedbacks driving acequia community vitality. Development of the model centered on major stock-and-flow components, including linkages for hydrology, ecology, community, and economics. Calibration metrics were used for model evaluation, including statistical correlation of observed and predicted values and Theil inequality statistics. Results indicated that the model reproduced trends exhibited by the observed system. Sensitivity analyses of socio-cultural processes identified absentee decisions, cumulative income effect on time in agriculture, land use preference due to time allocation, community demographic effect, effect of employment on participation, and farm size effect as key determinants of system behavior and response.
Sensitivity analyses of biophysical parameters revealed that several key parameters (e.g., acres per

  10. An efficient modeling of fine air-gaps in tokamak in-vessel components for electromagnetic analyses

    International Nuclear Information System (INIS)

    Oh, Dong Keun; Pak, Sunil; Jhang, Hogun

    2012-01-01

    Highlights: ► A simple and efficient modeling technique is introduced to avoid the undesirably massive air mesh usually encountered when modeling fine structures in tokamak in-vessel components. ► The method is based on decoupled nodes at the element boundaries that mimic the air gaps. ► We demonstrated its viability and efficacy by comparing it with brute-force modeling of the air gaps and with an effective-resistivity approximation used in place of detailed modeling. ► Application of the method to the ITER machine was successfully carried out without sacrificing computational resources and speed. - Abstract: A simple and efficient modeling technique is presented for a proper analysis of complicated eddy current flows in conducting structures with fine air gaps. It is based on the idea of replacing a slit with the decoupled boundary of finite elements. The viability and efficacy of the technique are demonstrated in a simple problem. Application of the method to electromagnetic load analyses during plasma disruptions in ITER has been successfully carried out without sacrificing computational resources and speed. This shows the proposed method is applicable to a practical system with complicated geometrical structures.
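
The node-decoupling idea can be illustrated on a toy mesh. In the sketch below (all geometry and numbering are made up for illustration), the nodes along a slit are duplicated for the elements on one side, so the two sides coincide geometrically but no longer share degrees of freedom, which is what electrically "opens" the air gap without meshing it.

```python
import numpy as np

# Toy mesh: four quad elements in a 3x3 node grid; nodes 3, 4, 5 form the
# horizontal interface where the fine air gap (slit) is located.
elems = np.array([
    [0, 1, 4, 3],   # lower-left quad
    [1, 2, 5, 4],   # lower-right quad
    [3, 4, 7, 6],   # upper-left quad
    [4, 5, 8, 7],   # upper-right quad
])

def decouple_slit(elems, slit_nodes, upper_elems):
    """Duplicate the slit nodes for the elements on one side of the slit,
    so the two sides share coordinates but not degrees of freedom."""
    out = elems.copy()
    next_id = out.max() + 1
    twin = {}
    for n in slit_nodes:          # create a coincident twin for each slit node
        twin[n] = next_id
        next_id += 1
    for e in upper_elems:         # rewire only the elements above the slit
        for k in range(out.shape[1]):
            if out[e, k] in twin:
                out[e, k] = twin[out[e, k]]
    return out

decoupled = decouple_slit(elems, slit_nodes=[3, 4, 5], upper_elems=[2, 3])
```

After decoupling, the mesh has three extra (coincident) nodes and current can no longer flow across the interface in an eddy-current solve.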

  11. An extensible analysable system model

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, Rene Rydhof

    2008-01-01

    , this does not hold for real physical systems. Approaches such as threat modelling try to target the formalisation of the real-world domain, but still are far from the rigid techniques available in security research. Many currently available approaches to assurance of critical infrastructure security...

  12. BWR core melt progression phenomena: Experimental analyses

    International Nuclear Information System (INIS)

    Ott, L.J.

    1992-01-01

    In the BWR Core Melt Progression Phenomena Program, experimental results concerning severe fuel damage and core melt progression in BWR core geometry are used to evaluate existing models of the governing phenomena. These include control blade eutectic liquefaction and the subsequent relocation and attack on the channel box structure; oxidation heating and hydrogen generation; Zircaloy melting and relocation; and the continuing oxidation of zirconium with metallic blockage formation. Integral data have been obtained from the BWR DF-4 experiment in the ACRR and from BWR tests in the German CORA ex-reactor fuel-damage test facility. Additional integral data will be obtained from new CORA BWR tests, the full-length FLHT-6 BWR test in the NRU test reactor, and the new program of ex-reactor experiments at Sandia National Laboratories (SNL) on metallic melt relocation and blockage formation. An essential part of this activity is interpretation and use of the results of the BWR tests. The Oak Ridge National Laboratory (ORNL) has developed experiment-specific models for analysis of the BWR experiments; to date, these models have permitted far more precise analyses of the conditions in these experiments than were previously available. These analyses have provided a basis for more accurate interpretation of the phenomena that the experiments are intended to investigate. The results of posttest analyses of BWR experiments are discussed and significant findings from these analyses are explained. The ORNL control blade/canister models, with materials interaction, relocation, and blockage models, are currently being implemented in SCDAP/RELAP5 as an optional structural component.

  13. Determinants of a GP visit and cervical cancer screening examination in Great Britain.

    Directory of Open Access Journals (Sweden)

    Alexander Michael Labeit

    Full Text Available In the UK, women are requested to attend a cervical cancer test every 3 years as part of the NHS Cervical Screening Programme. This analysis compares the determinants of a cervical cancer screening examination with the determinants of a GP visit in the same year and investigates whether cervical cancer screening participation is more likely for women who visit their GP. A recursive probit model was used to analyse the determinants of GP visits and cervical cancer screening examinations; GP visits were considered to be endogenous in the cervical cancer screening examination. The analysed sample consisted of 52,551 observations from 8,386 women of the British Household Panel Survey. The analysis showed that a higher education level and a worsening self-perceived health status increased the probability of a GP visit, whereas smoking decreased it. GP visits enhanced the uptake of a cervical cancer screening examination in the same period. The only variables which had the same positive effect on both dependent variables were higher education and living with a partner. The probability of a cervical cancer screening examination also increased with previous cervical cancer screening examinations and with being in the recommended age groups. All other variables had different results for the uptake of a GP visit or a cervical cancer screening examination. Most of the determinants of visiting a GP and of a cervical cancer screening examination differ from each other, and a GP visit enhances the uptake of a smear test.
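
The probit building block of the model above can be sketched with a hand-rolled maximum-likelihood fit. This is only the single-equation screening probit on synthetic data (all coefficients and variables are invented for illustration); the study's recursive bivariate probit additionally estimates the GP-visit equation jointly with correlated errors, which is not shown here.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)

# Synthetic illustration: screening uptake depends on a GP visit and education
n = 2000
educ = rng.normal(size=n)
gp_visit = (0.8 * educ + rng.normal(size=n) > 0).astype(float)
y = (0.5 + 0.7 * gp_visit + 0.3 * educ + rng.normal(size=n) > 0).astype(float)

X = np.column_stack([np.ones(n), gp_visit, educ])

def neg_loglik(beta):
    """Probit negative log-likelihood: P(y=1) = Phi(X @ beta)."""
    p = norm.cdf(X @ beta).clip(1e-10, 1 - 1e-10)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

res = minimize(neg_loglik, np.zeros(3), method="BFGS")
intercept, b_gp, b_educ = res.x
```

With independent errors, the single-equation estimate recovers the positive GP-visit effect; endogeneity of the GP visit is exactly why the paper moves to the recursive specification.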

  14. Examination of concept of next generation computer. Progress report 1999

    Energy Technology Data Exchange (ETDEWEB)

    Higuchi, Kenji; Hasegawa, Yukihiro; Hirayama, Toshio

    2000-12-01

    The Center for Promotion of Computational Science and Engineering has conducted R and D work on parallel processing technology and started the examination of the next generation computer in 1999. This report describes the behavior analyses of quantum calculation codes. It also describes the considerations behind the analyses and the examination results for methods to reduce cache misses. Furthermore, it describes a performance simulator that is being developed to quantitatively examine the concept of the next generation computer. (author)

  15. Parameterization and sensitivity analyses of a radiative transfer model for remote sensing plant canopies

    Science.gov (United States)

    Hall, Carlton Raden

    A major objective of remote sensing is determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge on the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(lambda) (m-1), diffuse backscatter b(lambda) (m-1), beam attenuation alpha(lambda) (m-1), and beam-to-diffuse conversion c(lambda) (m-1) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi), sampled on Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high-sensitivity narrow-bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory, where optical, physical, and chemical measurements of leaf thickness, leaf area, leaf moisture, and pigment content were made. A new term, the leaf volume correction index LVCI, was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle adjusted leaf
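
The role of the beam attenuation coefficient alpha(lambda) in a model of this kind can be illustrated with the Beer-Lambert decay of the direct beam through the canopy. The irradiance and coefficient values below are assumptions for illustration, not the study's measurements; the full two-flow model additionally couples upward and downward diffuse streams via a, b, and c.

```python
import numpy as np

# Hypothetical values at a single wavelength
E0 = 1000.0        # W m^-2, incident beam irradiance (assumed)
alpha = 0.9        # m^-1, beam attenuation coefficient (assumed)
depth = np.linspace(0.0, 3.0, 7)   # canopy depth h (m), measured from the top

# Beer-Lambert decay of the direct beam: E(h) = E0 * exp(-alpha * h)
E_beam = E0 * np.exp(-alpha * depth)
```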

  16. A typology of interpartner conflict and maternal parenting practices in high-risk families: examining spillover and compensatory models and implications for child adjustment.

    Science.gov (United States)

    Sturge-Apple, Melissa L; Davies, Patrick T; Cicchetti, Dante; Fittoria, Michael G

    2014-11-01

    The present study incorporates a person-based approach to identify spillover and compartmentalization patterns of interpartner conflict and maternal parenting practices in an ethnically diverse sample of 192 2-year-old children and their mothers who had experienced higher levels of socioeconomic risk. In addition, we tested whether sociocontextual variables were differentially predictive of these profiles and examined how interpartner-parenting profiles were associated with children's physiological and psychological adjustment over time. As expected, latent class analyses extracted three primary profiles of functioning: adequate functioning, spillover, and compartmentalizing families. Furthermore, interpartner-parenting profiles were differentially associated with both sociocontextual predictors and children's adjustment trajectories. The findings highlight the developmental utility of incorporating person-based approaches to models of interpartner conflict and maternal parenting practices.

  17. Numerical examinations of simplified spondylodesis models concerning energy absorption in magnetic resonance imaging

    Directory of Open Access Journals (Sweden)

    Hadert Nicole

    2016-09-01

    Full Text Available Metallic implants in magnetic resonance imaging (MRI) are a potential safety risk since the energy absorption may increase the temperature of the surrounding tissue. The temperature rise is highly dependent on implant size. Numerical examinations can be used to calculate the energy absorption, in terms of the specific absorption rate (SAR), induced by MRI in orthopaedic implants. This research presents the impact of titanium osteosynthesis spine implants, called spondylodesis, deduced from numerical examinations of energy absorption in simplified spondylodesis models placed in 1.5 T and 3.0 T MRI body coils. The implants are modelled along with a spine model consisting of vertebrae and disci intervertebrales, thus extending previous investigations [1], [2]. Increased SAR values are observed at the ends of long implants, while at the center SAR is significantly lower. Sufficiently short implants show increased SAR along the complete length of the implant. A careful data analysis reveals that the particular anatomy, i.e. vertebrae and disci intervertebrales, has a significant effect on SAR. On top of the SAR profile due to the implant length, considerable SAR variations at small scale are observed; e.g., SAR values at vertebrae are higher than at disc positions.
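
The local SAR quantity evaluated in such simulations follows the standard definition SAR = sigma * |E|^2 / rho, where sigma is the tissue conductivity and rho its density. The numbers below are illustrative assumptions, not values from the paper.

```python
# Point SAR from the induced electric field: SAR = sigma * |E|^2 / rho
sigma = 0.5        # S/m, tissue conductivity (assumed, muscle-like)
rho = 1050.0       # kg/m^3, tissue density (assumed)
E_rms = 30.0       # V/m, RMS induced electric field near the implant end (assumed)

sar = sigma * E_rms ** 2 / rho   # W/kg
```

Because SAR scales with |E|^2, the field concentration at the ends of long implants translates directly into the elevated local SAR reported above.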

  18. The reasoned/reactive model: A new approach to examining eating decisions among female college dieters and nondieters.

    Science.gov (United States)

    Ruhl, Holly; Holub, Shayla C; Dolan, Elaine A

    2016-12-01

    Female college students are prone to unhealthy eating patterns that can impact long-term health. This study examined female students' healthy and unhealthy eating behaviors with three decision-making models. Specifically, the theory of reasoned action, prototype/willingness model, and new reasoned/reactive model were compared to determine how reasoned (logical) and reactive (impulsive) factors relate to dietary decisions. Females (N=583, M age = 20.89 years) completed measures on reasoned cognitions about foods (attitudes, subjective norms, nutrition knowledge, intentions to eat foods), reactive cognitions about foods (prototypes, affect, willingness to eat foods), dieting, and food consumption. Structural equation modeling (SEM) revealed the new reasoned/reactive model to be the preeminent model for examining eating behaviors. This model showed that attitudes were related to intentions and willingness to eat healthy and unhealthy foods. Affect was related to willingness to eat healthy and unhealthy foods, whereas nutrition knowledge was related to intentions and willingness to eat healthy foods only. Intentions and willingness were related to healthy and unhealthy food consumption. Dieting status played a moderating role in the model and revealed mean-level differences between dieters and nondieters. This study highlights the importance of specific factors in relation to female students' eating decisions and unveils a comprehensive model for examining health behaviors. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Multipole analyses and photo-decay couplings at intermediate energies

    International Nuclear Information System (INIS)

    Workman, R.L.; Arndt, R.A.; Zhujun Li

    1992-01-01

    The authors describe the results of several multipole analyses of pion-photoproduction data to 2 GeV in the lab photon energy. Comparisons are made with previous analyses. The photo-decay couplings for the delta are examined in detail. Problems in the representation of photoproduction data are discussed, with an emphasis on the recent LEGS data. 16 refs., 4 tabs

  20. Prediction Models for Licensure Examination Performance using Data Mining Classifiers for Online Test and Decision Support System

    Directory of Open Access Journals (Sweden)

    Ivy M. Tarun

    2017-05-01

    Full Text Available This study focused on two main points: the generation of licensure examination performance prediction models, and the development of a Decision Support System. In this study, data mining classifiers were used to generate the models using WEKA (Waikato Environment for Knowledge Analysis). These models were integrated into the Decision Support System as default models to support decision making as far as appropriate interventions during review sessions are concerned. The system developed mainly involves the repeated generation of MR models for performance prediction and also provides a Mock Board Exam for the reviewees to take. From the models generated, it is established that the General Weighted Average of the reviewees in their General Education subjects, the result of the Mock Board Exam, and the instance when the reviewee is conducting a self-review are good predictors of licensure examination performance. Further, it is concluded that the General Weighted Average of the reviewees in their Major or Content courses is the best predictor of licensure examination performance. Based on the evaluation results, the system satisfied its implied functions and is efficient, usable, reliable, and portable. Hence, it can already be used, not as a substitute for face-to-face review sessions, but to enhance the reviewees' licensure examination review and to allow initial identification of those who are likely to have difficulty in passing the licensure examination, thereby providing sufficient time and opportunities for appropriate interventions.
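
The prediction task above can be sketched with a minimal classifier. The study used WEKA's data mining classifiers; the sketch below substitutes a simple nearest-centroid rule, and all feature values, labels, and thresholds are invented for illustration.

```python
import numpy as np

# Toy training data: [General Weighted Average, Mock Board Exam score]
# with pass/at-risk labels; all numbers are made up for illustration.
X = np.array([[88, 75], [91, 80], [85, 72],
              [78, 55], [74, 50], [70, 48]], dtype=float)
y = np.array([1, 1, 1, 0, 0, 0])   # 1 = likely to pass, 0 = at risk

def nearest_centroid_predict(X, y, query):
    """Classify a reviewee by distance to the per-class mean feature vector."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(centroids - np.asarray(query, dtype=float), axis=1)
    return classes[np.argmin(d)]

pred = nearest_centroid_predict(X, y, [89, 78])
```

A decision support system would flag the "at risk" predictions early so interventions can be scheduled before the actual licensure examination.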

  1. An examination of the Sport Drug Control Model with elite Australian athletes.

    Science.gov (United States)

    Gucciardi, Daniel F; Jalleh, Geoffrey; Donovan, Robert J

    2011-11-01

    This study presents an opportunistic examination of the theoretical tenets outlined in the Sport Drug Control Model(1) using questionnaire items from a survey of 643 elite Australian athletes. Items in the questionnaire that related to the concepts in the model were identified and structural equation modelling was employed to test the hypothesised model. Morality (cheating), benefit appraisal (performance), and threat appraisal (enforcement) evidenced the strongest relationships with attitude to doping, which in turn was positively associated with doping susceptibility. Self-esteem, perceptions of legitimacy and reference group opinions showed small non-significant associations with attitude to doping. The hypothesised model accounted for 30% and 11% of the variance in attitudes to doping and doping susceptibility, respectively. These present findings provide support for the model even though the questionnaire items were not constructed to specifically measure concepts contained in it. Thus, the model appears useful for understanding influences on doping. Nevertheless, there is a need to further explore individual and social factors that may influence athletes' use of performance enhancing drugs. Copyright © 2011 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.

  2. Examining the dimensional structure models of secondary traumatic stress based on DSM-5 symptoms.

    Science.gov (United States)

    Mordeno, Imelu G; Go, Geraldine P; Yangson-Serondo, April

    2017-02-01

    Latent factor structure of Secondary Traumatic Stress (STS) has been examined using the Diagnostic and Statistical Manual-IV (DSM-IV) Posttraumatic Stress Disorder (PTSD) nomenclature. With the advent of the Diagnostic and Statistical Manual-5 (DSM-5), there is an impending need to reexamine STS using DSM-5 symptoms in light of the most updated PTSD models in the literature. The study investigated and determined the best-fitting PTSD models using DSM-5 PTSD criteria symptoms. Confirmatory factor analysis (CFA) was conducted to examine model fit using the Secondary Traumatic Stress Scale in 241 registered and practicing Filipino nurses (166 females and 75 males) who worked in the Philippines and gave direct nursing services to patients. Based on multiple fit indices, the results showed that the 7-factor hybrid model, comprising intrusion, avoidance, negative affect, anhedonia, externalizing behavior, anxious arousal, and dysphoric arousal factors, has excellent fit to STS. This model asserts that: (1) the hyperarousal criterion needs to be divided into anxious and dysphoric arousal factors; (2) symptoms characterizing negative and positive affect need to be separated into two distinct factors; and (3) a new factor would categorize externalized, self-initiated impulse- and control-deficit behaviors. Comparison of nested and non-nested models showed the hybrid model to have superior fit over the other models. The specificity of the symptom structure of STS based on DSM-5 PTSD criteria suggests the need for more specific interventions addressing the more elaborate symptom groupings, which would alleviate the condition of nurses exposed to STS on a daily basis. Copyright © 2016 Elsevier B.V. All rights reserved.
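
Two of the "multiple fit indices" such CFA comparisons rely on, RMSEA and CFI, can be computed directly from the chi-square statistics. The formulas are standard; the chi-square, df, and sample-size values below are hypothetical, not the paper's results.

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation (lower is better, < .06 good)."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2, df, chi2_base, df_base):
    """Comparative fit index against the baseline (independence) model."""
    d_model = max(chi2 - df, 0.0)
    d_base = max(chi2_base - df_base, d_model, 0.0)
    return 1.0 - d_model / d_base if d_base > 0 else 1.0

# Hypothetical values for a hybrid-model CFA with N = 241 (illustration only)
fit_rmsea = rmsea(chi2=210.0, df=168, n=241)
fit_cfi = cfi(chi2=210.0, df=168, chi2_base=2400.0, df_base=210)
```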

  3. Analyses of hydraulic performance of velocity caps

    DEFF Research Database (Denmark)

    Christensen, Erik Damgaard; Degn Eskesen, Mark Chr.; Buhrkall, Jeppe

    2014-01-01

    The hydraulic performance of a velocity cap has been investigated. Velocity caps are often used in connection with offshore intakes. CFD (computational fluid dynamics) simulations examined the flow through the cap openings and further down into the intake pipes. This was combined with dimensional analyses...

  4. Basic quantitative analyses of medical examinations

    OpenAIRE

    Möltner, A; Schellberg, D; Jünger, J

    2006-01-01

    The evaluation steps are described that are necessary for an elementary test-theoretic analysis of an exam and sufficient as a basis for item revisions, improvements of test composition, and feedback to teaching coordinators and curriculum developers. These steps include the evaluation of the results, the analysis of item difficulty and discrimination, and, where appropriate, the corresponding evaluation of single answers. To complete the procedure, the internal consistency ...
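
The item difficulty and discrimination steps mentioned above are classical test theory quantities: difficulty is the proportion of correct answers per item, and discrimination is the corrected item-total correlation. A minimal sketch on invented score data:

```python
import numpy as np

# Score matrix: rows = examinees, columns = items (1 = correct). Toy data.
scores = np.array([
    [1, 1, 1, 0],
    [1, 1, 0, 0],
    [1, 0, 1, 1],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
    [1, 0, 0, 0],
])

# Item difficulty: proportion of examinees answering each item correctly
difficulty = scores.mean(axis=0)

def discrimination(scores):
    """Corrected item-total (point-biserial) correlation per item:
    each item is correlated with the total score excluding that item."""
    n_items = scores.shape[1]
    disc = np.empty(n_items)
    for j in range(n_items):
        rest = scores.sum(axis=1) - scores[:, j]
        disc[j] = np.corrcoef(scores[:, j], rest)[0, 1]
    return disc

disc = discrimination(scores)
```

Items with very high or very low difficulty, or near-zero discrimination, are the usual candidates for the item revisions the abstract refers to.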

  5. Comparison with Russian analyses of meteor impact

    Energy Technology Data Exchange (ETDEWEB)

    Canavan, G.H.

    1997-06-01

    The inversion model for meteor impacts is used to discuss Russian analyses and compare principal results. For common input parameters, the models produce consistent estimates of impactor parameters. Directions for future research are discussed and prioritized.

  6. Examination of models of knee in primary cosmic ray spectrum using gamma-hadron families

    International Nuclear Information System (INIS)

    Sveshnikova, L.G.; Managadze, A.K.; Roganova, T.M.; Mukhamedshin, R.A.

    2005-01-01

    Four models for describing the primary cosmic radiation (PCR) spectrum are proposed. The examination of the PCR spectrum models is carried out from the viewpoint of their consistency with the data on gamma-hadron families for threshold energies of 100 and 500 TeV. The maximum possible contribution of the superfamilies originating from primary nuclei, rather than from protons, is calculated.

  7. A review on design of experiments and surrogate models in aircraft real-time and many-query aerodynamic analyses

    Science.gov (United States)

    Yondo, Raul; Andrés, Esther; Valero, Eusebio

    2018-01-01

    Full scale aerodynamic wind tunnel testing, numerical simulation of high-dimensional (full-order) aerodynamic models or flight testing are some of the fundamental but complex steps in the various design phases of recent civil transport aircraft. Current aircraft aerodynamic designs have increased in complexity (multidisciplinary, multi-objective or multi-fidelity) and need to address the challenges posed by the nonlinearity of the objective functions and constraints, uncertainty quantification in aerodynamic problems, and restrained computational budgets. With the aim of reducing the computational burden and generating low-cost but accurate models that mimic those full-order models at different values of the design variables, recent progress has seen the introduction, in real-time and many-query analyses, of surrogate-based approaches as rapid and cheaper-to-simulate models. In this paper, a comprehensive state-of-the-art survey on common surrogate modeling techniques and surrogate-based optimization methods is given, with an emphasis on model selection and validation, dimensionality reduction, sensitivity analyses, constraint handling, and infill and stopping criteria. Benefits, drawbacks and comparative discussions in applying those methods are described. Furthermore, the paper familiarizes the readers with surrogate models that have been successfully applied to the general field of fluid dynamics, but not yet in the aerospace industry. Additionally, the review revisits the most popular sampling strategies used in conducting physical and simulation-based experiments in aircraft aerodynamic design. Attractive or smart designs infrequently used in the field, along with discussions on advanced sampling methodologies, are presented to give a glance at the various efficient possibilities for a priori sampling of the parameter space. 
Closing remarks focus on future perspectives, challenges and shortcomings associated with the use of surrogate models by the aircraft industry.
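
The surrogate workflow surveyed above, sample the design space, fit a cheap model, query it many times, can be sketched with Latin hypercube sampling and a Gaussian radial-basis-function interpolant. The "expensive" response function and all parameter values below are toy assumptions, not an aerodynamic code.

```python
import numpy as np

rng = np.random.default_rng(42)

def latin_hypercube(n, dim):
    """One random point per stratum in each dimension, strata shuffled per column."""
    u = (np.arange(n)[:, None] + rng.random((n, dim))) / n
    for j in range(dim):
        u[:, j] = u[rng.permutation(n), j]
    return u

def fit_rbf(X, y, eps):
    """Gaussian RBF interpolant: solve Phi @ w = y at the sample points."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return np.linalg.solve(np.exp(-(eps * d) ** 2), y)

def predict_rbf(X, w, Xq, eps):
    d = np.linalg.norm(Xq[:, None, :] - X[None, :, :], axis=-1)
    return np.exp(-(eps * d) ** 2) @ w

# Toy stand-in for an expensive aerodynamic response (assumed, 2 design vars)
f = lambda x: np.sin(6 * x[:, 0]) + 0.5 * x[:, 1] ** 2

X = latin_hypercube(25, 2)                 # 25 "expensive" evaluations
w = fit_rbf(X, f(X), eps=3.0)              # cheap surrogate
Xq = rng.random((200, 2))                  # many-query evaluation, nearly free
err = np.max(np.abs(predict_rbf(X, w, Xq, eps=3.0) - f(Xq)))
```

Once fitted, the surrogate answers the 200 queries at negligible cost, which is the point of real-time and many-query analyses.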

  8. Using species abundance distribution models and diversity indices for biogeographical analyses

    Science.gov (United States)

    Fattorini, Simone; Rigal, François; Cardoso, Pedro; Borges, Paulo A. V.

    2016-01-01

    We examine whether Species Abundance Distribution models (SADs) and diversity indices can describe how species colonization status influences species community assembly on oceanic islands. Our hypothesis is that, because of the lack of source-sink dynamics at the archipelago scale, Single Island Endemics (SIEs), i.e. endemic species restricted to only one island, should be represented by few rare species and consequently have abundance patterns that differ from those of more widespread species. To test our hypothesis, we used arthropod data from the Azorean archipelago (North Atlantic). We divided the species into three colonization categories: SIEs, archipelagic endemics (AZEs, present in at least two islands) and native non-endemics (NATs). For each category, we modelled rank-abundance plots using both the geometric series and the Gambin model, a measure of distributional amplitude. We also calculated Shannon entropy and Buzas and Gibson's evenness. We show that the slopes of the regression lines modelling SADs were significantly higher for SIEs, which indicates a relative predominance of a few highly abundant species and a lack of rare species, which also depresses diversity indices. This may be a consequence of two factors: (i) some forest specialist SIEs may be at advantage over other, less adapted species; (ii) the entire populations of SIEs are by definition concentrated on a single island, without possibility for inter-island source-sink dynamics; hence all populations must have a minimum number of individuals to survive natural, often unpredictable, fluctuations. These findings are supported by higher values of the α parameter of the Gambin model for SIEs. In contrast, AZEs and NATs had lower regression slopes, lower α but higher diversity indices, resulting from their widespread distribution over several islands. We conclude that these differences in the SAD models and diversity indices demonstrate that the study of these metrics is useful for
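
The geometric-series slope comparison described above can be sketched as a linear fit of log abundance against rank (for a geometric series, that slope equals log(1 - k)). The two communities below are invented, not the Azorean arthropod counts.

```python
import numpy as np

def rank_abundance_slope(abundances):
    """Slope of log(abundance) vs. rank; a steeper (more negative) slope
    indicates dominance by a few abundant species and a short rare tail."""
    a = np.sort(np.asarray(abundances, dtype=float))[::-1]
    ranks = np.arange(1, len(a) + 1)
    slope, _ = np.polyfit(ranks, np.log(a), 1)
    return slope

# Hypothetical communities (illustration only):
sie_like = [120, 40, 12, 4, 2]                       # few dominant species
nat_like = [60, 45, 33, 25, 18, 14, 10, 8, 6, 5]     # more even, longer tail

slope_sie = rank_abundance_slope(sie_like)
slope_nat = rank_abundance_slope(nat_like)
```

The SIE-like community yields the steeper negative slope, the pattern the study reports for single-island endemics.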

  9. Measuring and Examining General Self-Efficacy among Community College Students: A Structural Equation Modeling Approach

    Science.gov (United States)

    Chen, Yu; Starobin, Soko S.

    2018-01-01

    This study examined a psychosocial mechanism of how general self-efficacy interacts with other key factors and influences degree aspiration for students enrolled in an urban diverse community college. Using general self-efficacy scales, the authors hypothesized the General Self-efficacy model for Community College students (the GSE-CC model). A…

  10. Sobol method application in dimensional sensitivity analyses of different AFM cantilevers for biological particles

    Science.gov (United States)

    Korayem, M. H.; Taheri, M.; Ghahnaviyeh, S. D.

    2015-08-01

    Due to the more delicate nature of biological micro/nanoparticles, it is necessary to compute the critical manipulation force. The modeling and simulation of reactions and nanomanipulator dynamics in a precise manipulation process require an exact modeling of cantilever stiffness, especially the stiffness of dagger cantilevers, because the previous model is not useful for this investigation. The stiffness values for V-shaped cantilevers can be obtained through several methods; one of them is the PBA method. In another approach, the cantilever is divided into two sections, a triangular head section and two slanted rectangular beams. Then, deformations along different directions are computed and used to obtain the stiffness values in different directions. The stiffness formulations of the dagger cantilever are needed for these sensitivity analyses, so the formulations have been derived first and the sensitivity analyses carried out afterwards. In examining the stiffness of the dagger-shaped cantilever, the micro-beam has been divided into triangular and rectangular sections, and by computing the displacements along different directions and using the existing relations, the stiffness values for the dagger cantilever have been obtained. In this paper, after investigating the stiffness of common types of cantilevers, Sobol sensitivity analyses of the effects of various geometric parameters on the stiffness of these types of cantilevers have been carried out. Also, the effects of different cantilevers on the dynamic behavior of nanoparticles have been studied, and the dagger-shaped cantilever has been deemed more suitable for the manipulation of biological particles.
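
First-order Sobol indices of the kind used above can be estimated by Monte Carlo with the Saltelli pick-freeze scheme. The response function below is a toy linear stand-in (not the cantilever stiffness formulas), chosen so the exact indices are known and the estimate can be checked.

```python
import numpy as np

rng = np.random.default_rng(1)

def first_order_sobol(f, dim, n=20000):
    """Saltelli-style Monte Carlo estimate of first-order Sobol indices
    for inputs uniform on [0, 1]^dim."""
    A = rng.random((n, dim))
    B = rng.random((n, dim))
    fA, fB = f(A), f(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(dim)
    for i in range(dim):
        ABi = A.copy()
        ABi[:, i] = B[:, i]          # freeze input i at B's values
        S[i] = np.mean(fB * (f(ABi) - fA)) / var   # Saltelli (2010) estimator
    return S

# Toy 'stiffness' response: input 0 dominates, input 2 matters least
f = lambda x: 4.0 * x[:, 0] + 2.0 * x[:, 1] + 0.5 * x[:, 2]
S = first_order_sobol(f, 3)
```

For this additive function the exact indices are a_i^2 / sum(a_j^2), i.e., about 0.79, 0.20, and 0.01, so the ranking of geometric parameters falls straight out of the estimate.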

  11. Analysing the Effects of Flood-Resilience Technologies in Urban Areas Using a Synthetic Model Approach

    Directory of Open Access Journals (Sweden)

    Reinhard Schinke

    2016-11-01

    Full Text Available Flood protection systems with their spatial effects play an important role in managing and reducing flood risks. The planning and decision process as well as the technical implementation are well organized and often exercised. However, building-related flood-resilience technologies (FReT) are often neglected due to the absence of suitable approaches for analysing and integrating such measures in large-scale flood damage mitigation concepts. Against this backdrop, a synthetic model approach was extended by a few complementary methodological steps in order to calculate flood damage to buildings considering the effects of building-related FReT and to analyse the area-related reduction of flood risks by geo-information systems (GIS) with high spatial resolution. It includes a civil-engineering-based investigation of characteristic building properties and construction, including a selection and combination of appropriate FReT, as a basis for the derivation of synthetic depth-damage functions. Depending on the actual exposure and the implementation level of FReT, the functions can be used and allocated in spatial damage and risk analyses. The application of the extended approach is shown in a case study in Valencia (Spain). In this way, the overall research findings improve the integration of FReT in flood risk management. They also provide useful information for advising individuals at risk, supporting the selection and implementation of FReT.
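
A synthetic depth-damage function of the kind derived above maps water depth to a damage ratio and can be evaluated by interpolation. Both curves below are invented illustrations (one without and one with FReT installed), not the study's Valencia data.

```python
import numpy as np

# Synthetic depth-damage curves for one building type: water depth (m) above
# the ground floor vs. damage ratio in [0, 1]. Values are illustrative only;
# the second curve assumes flood-resilience technologies (FReT) installed.
depths       = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0])
damage_plain = np.array([0.00, 0.12, 0.25, 0.38, 0.50, 0.70])
damage_fret  = np.array([0.00, 0.04, 0.12, 0.24, 0.38, 0.60])

def expected_damage(depth, curve):
    """Linearly interpolate the synthetic depth-damage function."""
    return np.interp(depth, depths, curve)

flood_depth = 1.2   # m, hypothetical event depth at one building
loss_without = expected_damage(flood_depth, damage_plain)
loss_with = expected_damage(flood_depth, damage_fret)
reduction = loss_without - loss_with
```

Applied building by building in a GIS, the difference between the two curves is exactly the area-related risk reduction the paper quantifies.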

  12. Energy system analyses of the marginal energy technology in life cycle assessments

    DEFF Research Database (Denmark)

    Mathiesen, B.V.; Münster, Marie; Fruergaard, Thilde

    2007-01-01

    In life cycle assessments, consequential LCA is used as the “state-of-the-art” methodology, which focuses on the consequences of decisions made in terms of system boundaries, allocation and selection of data, simple and dynamic marginal technology, etc. (Ekvall & Weidema 2004). In many LCA studies... marginal technology? How is the marginal technology identified and used today? What is the consequence of not using energy system analysis for identifying the marginal energy technologies? The use of the methodology is examined from three angles. First, the marginal electricity technology is identified in historical and potential future energy systems. Subsequently, key LCA studies of products and different waste flows are analysed in relation to the recommendations in consequential LCA. Finally, a case of increased waste used for incineration is examined using an energy system analysis model.

  13. College Students Coping with Interpersonal Stress: Examining a Control-Based Model of Coping

    Science.gov (United States)

    Coiro, Mary Jo; Bettis, Alexandra H.; Compas, Bruce E.

    2017-01-01

    Objective: The ways that college students cope with stress, particularly interpersonal stress, may be a critical factor in determining which students are at risk for impairing mental health disorders. Using a control-based model of coping, the present study examined associations between interpersonal stress, coping strategies, and symptoms.

  14. Intention-to-treat analyses and missing data approaches in pharmacotherapy trials for alcohol use disorders.

    Science.gov (United States)

    Del Re, A C; Maisel, Natalya C; Blodgett, Janet C; Finney, John W

    2013-11-12

    Intention to treat (ITT) is an analytic strategy for reducing potential bias in treatment effects arising from missing data in randomised controlled trials (RCTs). Currently, no universally accepted definition of ITT exists, although many researchers consider it to require either no attrition or a strategy to handle missing data. Using the reports of a large pool of RCTs, we examined discrepancies between the types of analyses that alcohol pharmacotherapy researchers stated they used versus those they actually used. We also examined the linkage between analytic strategy (ie, ITT or not) and how missing data on outcomes were handled (if at all), and whether data analytic and missing data strategies have changed over time. Descriptive statistics were generated for reported and actual data analytic strategy and for missing data strategy. In addition, generalised linear models determined changes over time in the use of ITT analyses and missing data strategies. The sample comprised 165 RCTs of pharmacotherapy for alcohol use disorders. Of the 165 studies, 74 reported using an ITT strategy. However, less than 40% of the studies actually conducted ITT according to the rigorous definition above. Whereas no change in the use of ITT analyses over time was found, censored (last follow-up completed) and imputed missing data strategies have increased over time, while analyses of data only for the sample actually followed have decreased. Discrepancies in reporting versus actually conducting ITT analyses were found in this body of RCTs. Lack of clarity regarding the missing data strategy used was common. Consensus on a definition of ITT is important for an adequate understanding of research findings. Clearer reporting standards for analyses and the handling of missing data in pharmacotherapy trials and other intervention studies are needed.
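
One of the imputation strategies counted in the review, last observation carried forward (LOCF), can be sketched in a few lines. The outcome series below is hypothetical trial data, and LOCF is shown only as an illustration of the mechanics, not as an endorsed missing-data method.

```python
import numpy as np

def locf(series):
    """Last observation carried forward: replace each NaN with the most
    recent observed value (assumes the baseline value is observed)."""
    out = np.asarray(series, dtype=float).copy()
    for i in range(1, len(out)):
        if np.isnan(out[i]):
            out[i] = out[i - 1]
    return out

# Hypothetical drinks-per-week outcomes for one participant who drops out
# after the third assessment (NaN = missing follow-up)
observed = [21.0, 15.0, 12.0, np.nan, np.nan]
imputed = locf(observed)
```

Retaining the imputed participant in the analysis, rather than dropping them, is what lets a trial report results on all randomised participants, the core of the rigorous ITT definition discussed above.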

  15. Advanced toroidal facility vacuum vessel stress analyses

    International Nuclear Information System (INIS)

    Hammonds, C.J.; Mayhall, J.A.

    1987-01-01

The complex geometry of the Advanced Toroidal Facility (ATF) vacuum vessel required special analysis techniques in investigating the structural behavior of the design. The response of a large-scale finite element model was found for transportation and operational loading. Several computer codes and systems, including the National Magnetic Fusion Energy Computer Center Cray machines, were implemented in accomplishing these analyses. The work combined complex methods that taxed the limits of both the codes and the computer systems involved. Using MSC/NASTRAN cyclic-symmetry solutions made it possible to mathematically analyze the entire vessel from only 1/12 of its geometry. This allowed the greater detail and accuracy demanded by the complex geometry of the vessel. Critical buckling-pressure analyses were performed with the same model. The development, results, and problems encountered in performing these analyses are described. 5 refs., 3 figs.

  16. A History of Rotorcraft Comprehensive Analyses

    Science.gov (United States)

    Johnson, Wayne

    2013-01-01

    A history of the development of rotorcraft comprehensive analyses is presented. Comprehensive analyses are digital computer programs that calculate the aeromechanical behavior of the rotor and aircraft, bringing together the most advanced models of the geometry, structure, dynamics, and aerodynamics available in rotary wing technology. The development of the major codes of the last five decades from industry, government, and universities is described. A number of common themes observed in this history are discussed.

  17. TSOAK-M1: an examination of its model and methods

    International Nuclear Information System (INIS)

    Edgell, D.H.

    1983-05-01

Fusion facilities will contain a sizable inventory of tritium fuel that will be vulnerable to release. Once released, molecular tritium begins converting into tritiated water, which is 10000 times more hazardous and tends to adsorb onto surfaces. The rates of conversion and adsorption/desorption must be accurately known to estimate cleanup times and radiation hazards realistically. Argonne National Laboratory developed a computer code, TSOAK-M1, to determine the conversion/adsorption/desorption parameters and to model cleanups. The Canadian Fusion Fuels Technology Project examined the program for reliability and potential applications. TSOAK-M1 assumes a pseudo-second-order radiolytic conversion where a first-order surface reaction seems more appropriate. This difference in order should be investigated to accurately determine the reaction law. TSOAK-M1 determines the model parameters from experimental data using an optimization routine. However, the data used are judged insufficient. More data are needed where the conversion of molecular tritium to tritiated water has a significant effect due to adsorption/desorption. SOAKER, an improved version of the TSOAK-M1 model, which combines first- and second-order reactions, has been implemented in Wang BASIC. Once the reaction law and the parameters have been accurately determined, the program could be a useful tool in the study and design of decontamination systems.
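The distinction between first- and second-order conversion laws raised above matters because the two predict very different late-time cleanup behaviour. A minimal sketch with hypothetical rate constants (not TSOAK-M1's fitted parameters):

```python
# Closed-form decay of airborne tritium concentration under a first-order
# surface reaction versus a second-order radiolytic conversion. Rate
# constants and the initial concentration are hypothetical.
import math

C0 = 1.0   # initial HT concentration (arbitrary units)
k1 = 0.1   # first-order rate constant (1/h)
k2 = 0.1   # second-order rate constant (1/(conc*h))

def first_order(t):
    # dC/dt = -k1*C  ->  C(t) = C0 * exp(-k1*t)
    return C0 * math.exp(-k1 * t)

def second_order(t):
    # dC/dt = -k2*C^2  ->  C(t) = C0 / (1 + k2*C0*t)
    return C0 / (1.0 + k2 * C0 * t)

# Second-order decay slows as the concentration drops, so late-time cleanup
# estimates diverge sharply between the two laws.
for t in (0, 10, 50):
    print(t, round(first_order(t), 4), round(second_order(t), 4))
```

At t = 50 h the first-order law predicts a concentration more than an order of magnitude lower than the second-order law, which is why fitting the wrong order can badly mis-estimate cleanup times.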

  18. Sensitivity in risk analyses with uncertain numbers.

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, W. Troy; Ferson, Scott

    2006-06-01

Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a "pinching" strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
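The "pinching" strategy described above can be illustrated with a crude Monte Carlo stand-in: fix one uncertain input at a point value and compare the spread of the outputs before and after. The toy model and input ranges below are invented:

```python
# A rough Monte Carlo analogue of the "pinching" sensitivity strategy:
# freeze one uncertain input at a fixed value and measure how much the
# spread of the output shrinks. Model and distributions are hypothetical.
import random
import statistics

random.seed(0)
N = 20000

def model(a, b):
    return a * b + a  # toy response function

# Baseline: both inputs uncertain.
base = [model(random.uniform(1, 3), random.uniform(0, 2)) for _ in range(N)]

# Pinch input a to its midpoint; b stays uncertain.
pinched_a = [model(2.0, random.uniform(0, 2)) for _ in range(N)]

reduction = 1 - statistics.pstdev(pinched_a) / statistics.pstdev(base)
print(f"uncertainty reduction from pinching a: {reduction:.1%}")
```

Ranking inputs by the output-uncertainty reduction achieved when each is pinched gives a sensitivity ordering; the report's p-box version separates epistemic from aleatory contributions, which this point-value sketch does not.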

  19. Grey literature in meta-analyses.

    Science.gov (United States)

    Conn, Vicki S; Valentine, Jeffrey C; Cooper, Harris M; Rantz, Marilyn J

    2003-01-01

    In meta-analysis, researchers combine the results of individual studies to arrive at cumulative conclusions. Meta-analysts sometimes include "grey literature" in their evidential base, which includes unpublished studies and studies published outside widely available journals. Because grey literature is a source of data that might not employ peer review, critics have questioned the validity of its data and the results of meta-analyses that include it. To examine evidence regarding whether grey literature should be included in meta-analyses and strategies to manage grey literature in quantitative synthesis. This article reviews evidence on whether the results of studies published in peer-reviewed journals are representative of results from broader samplings of research on a topic as a rationale for inclusion of grey literature. Strategies to enhance access to grey literature are addressed. The most consistent and robust difference between published and grey literature is that published research is more likely to contain results that are statistically significant. Effect size estimates of published research are about one-third larger than those of unpublished studies. Unfunded and small sample studies are less likely to be published. Yet, importantly, methodological rigor does not differ between published and grey literature. Meta-analyses that exclude grey literature likely (a) over-represent studies with statistically significant findings, (b) inflate effect size estimates, and (c) provide less precise effect size estimates than meta-analyses including grey literature. Meta-analyses should include grey literature to fully reflect the existing evidential base and should assess the impact of methodological variations through moderator analysis.
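The effect-size inflation described above can be made concrete with a fixed-effect, inverse-variance pooling run with and without grey studies. The effect sizes and variances below are fabricated purely to show the arithmetic:

```python
# Fixed-effect, inverse-variance meta-analysis run twice: once on published
# studies only and once including grey literature. Effect sizes and variances
# are fabricated solely to illustrate the pooling arithmetic.
def pooled_effect(studies):
    """Inverse-variance weighted mean effect size."""
    weights = [1 / var for _, var in studies]
    num = sum(w * es for (es, _), w in zip(studies, weights))
    return num / sum(weights)

published = [(0.45, 0.02), (0.50, 0.03), (0.40, 0.025)]  # (effect, variance)
grey      = [(0.20, 0.05), (0.15, 0.06)]                  # smaller effects

print(round(pooled_effect(published), 3))         # published-only estimate
print(round(pooled_effect(published + grey), 3))  # estimate with grey studies
```

With the smaller-effect grey studies included, the pooled estimate drops, mirroring the roughly one-third inflation the abstract reports for published-only syntheses.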

  20. Microsegregation in multicomponent alloy analysed by quantitative phase-field model

    International Nuclear Information System (INIS)

    Ohno, M; Takaki, T; Shibuta, Y

    2015-01-01

Microsegregation behaviour in a ternary alloy system has been analysed by means of quantitative phase-field (Q-PF) simulations, with particular attention directed at the influence of tie-line shift stemming from different liquid diffusivities of the solute elements. The Q-PF model developed for non-isothermal solidification in multicomponent alloys with non-zero solid diffusivities was applied to the analysis of microsegregation in a ternary alloy consisting of fast and slow diffusing solute elements. The accuracy of the Q-PF simulation was first verified by performing a convergence test of the segregation ratio with respect to the interface thickness. From one-dimensional analysis, it was found that the microsegregation of the slow diffusing element is reduced due to the tie-line shift. In two-dimensional simulations, a refinement of the microstructure, viz., a decrease of secondary arm spacing, occurs at low cooling rates due to the formation of a diffusion layer of the slow diffusing element. This yields reductions in the degree of microsegregation for both the fast and slow diffusing elements. Importantly, in a wide range of cooling rates, the degree of microsegregation of the slow diffusing element is always lower than that of the fast diffusing element, which is entirely ascribable to the influence of tie-line shift. (paper)
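For orientation, the classical Scheil-Gulliver relation offers a diffusion-free baseline against which phase-field microsegregation results of this kind are often compared. The partition coefficients below are illustrative, not those of the alloy studied:

```python
# Scheil-Gulliver baseline for microsegregation, assuming complete liquid
# mixing and no solid diffusion: C_s = k * C0 * (1 - fs)^(k - 1).
# Partition coefficients are illustrative, not the paper's alloy.
def scheil_solid_conc(c0, k, fs):
    """Solute concentration in the solid forming at solid fraction fs
    (diverges as fs -> 1, the classical last-liquid enrichment)."""
    return k * c0 * (1 - fs) ** (k - 1)

c0 = 1.0                      # nominal composition (wt%)
for k in (0.2, 0.5):          # strongly vs weakly partitioning solute
    profile = [scheil_solid_conc(c0, k, fs) for fs in (0.0, 0.5, 0.9)]
    print(k, [round(c, 3) for c in profile])
```

The smaller the partition coefficient, the steeper the enrichment toward the last solid to form; the Q-PF simulations refine this picture with solid diffusion and the tie-line shift the abstract emphasizes.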

  1. Chapter No.4. Safety analyses

    International Nuclear Information System (INIS)

    2002-01-01

    In 2001 the activity in the field of safety analyses was focused on verification of the safety analyses reports for NPP V-2 Bohunice and NPP Mochovce concerning the new profiled fuel and probabilistic safety assessment study for NPP Mochovce. The calculation safety analyses were performed and expert reviews for the internal UJD needs were elaborated. An important part of work was performed also in solving of scientific and technical tasks appointed within bilateral projects of co-operation between UJD and its international partnership organisations as well as within international projects ordered and financed by the European Commission. All these activities served as an independent support for UJD in its deterministic and probabilistic safety assessment of nuclear installations. A special attention was paid to a review of probabilistic safety assessment study of level 1 for NPP Mochovce. The probabilistic safety analysis of NPP related to the full power operation was elaborated in the study and a contribution of the technical and operational improvements to the risk decreasing was quantified. A core damage frequency of the reactor was calculated and the dominant initiating events and accident sequences with the major contribution to the risk were determined. The target of the review was to determine the acceptance of the sources of input information, assumptions, models, data, analyses and obtained results, so that the probabilistic model could give a real picture of the NPP. The review of the study was performed in co-operation of UJD with the IAEA (IPSART mission) as well as with other external organisations, which were not involved in the elaboration of the reviewed document and probabilistic model of NPP. The review was made in accordance with the IAEA guidelines and methodical documents of UJD and US NRC. 
In the field of calculation safety analyses, the UJD activity was focused on the analysis of an operational event, analyses of the selected accident scenarios

  2. Segmental Musculoskeletal Examinations using Dual-Energy X-Ray Absorptiometry (DXA): Positioning and Analysis Considerations

    Directory of Open Access Journals (Sweden)

    Nicolas H. Hart, Sophia Nimphius, Tania Spiteri, Jodie L. Cochrane, Robert U. Newton

    2015-09-01

Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in-vivo. Presently, whole body scan models separate the body into axial and appendicular regions; however, there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature, which influence measurement precision and analysis outcomes, highlighting the need for a standardised procedure. This paper provides standardised and reproducible: (1) positioning and analysis procedures using DXA and (2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5 %) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater and inter-rater (between-day and between-researcher) reliability, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. Positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures presented here.
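The two reliability statistics reported above, the coefficient of variation and the intraclass correlation coefficient, can be computed as follows. The repeated lean-mass values are fabricated, and the ICC form shown is a standard two-way consistency ICC(3,1), which may differ from the exact variant the authors used:

```python
# CV across repeated analyses and a two-way consistency ICC(3,1) from ANOVA
# mean squares. The measurements below are fabricated lean-mass values (kg).
import statistics

# rows = subjects, columns = three repeated analyses of the same scan
data = [
    [55.2, 55.5, 55.3],
    [60.1, 60.4, 60.2],
    [48.9, 49.1, 49.0],
    [52.0, 52.3, 52.1],
]

def mean_cv_percent(rows):
    """Average within-subject CV (sd / mean * 100) across subjects."""
    return statistics.mean(
        statistics.stdev(r) / statistics.mean(r) * 100 for r in rows
    )

def icc_3_1(rows):
    """Two-way mixed, single-measure consistency ICC from ANOVA mean squares."""
    n, k = len(rows), len(rows[0])
    grand = statistics.mean(x for r in rows for x in r)
    row_means = [statistics.mean(r) for r in rows]
    col_means = [statistics.mean(r[j] for r in rows) for j in range(k)]
    ss_total = sum((x - grand) ** 2 for r in rows for x in r)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

print(f"CV = {mean_cv_percent(data):.2f}%, ICC = {icc_3_1(data):.3f}")
```

Tight repeated measurements give a sub-1% CV and an ICC near 1, the same pattern (CV ≤ 2.0%, ICC ≥ 0.988) the study reports for its standardised procedures.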

  3. Developing a system dynamics model to analyse environmental problem in construction site

    Science.gov (United States)

    Haron, Fatin Fasehah; Hawari, Nurul Nazihah

    2017-11-01

This study aims to develop a system dynamics model of a construction site to analyse the impact of environmental problems. Construction sites may cause damage to the environment and interfere with the daily lives of residents. A proper environmental management system must be used to reduce pollution, enhance bio-diversity, conserve water, respect people and their local environment, measure performance, and set targets for the environment and sustainability. This study investigates the damaging impacts that normally occur during the construction stage. Environmental problems cause costly mistakes in project implementation, either because of the environmental damage that is likely to arise during implementation or because of modifications that may be required subsequently to make the action environmentally acceptable. Thus, findings from this study have helped in significantly reducing the damaging impact on the environment and in improving the performance of the environmental management system at the construction site.

  4. The Leicester AATSR Global Analyser (LAGA) - Giving Young Students the Opportunity to Examine Space Observations of Global Climate-Related Processes

    Science.gov (United States)

    Llewellyn-Jones, David; Good, Simon; Corlett, Gary

A pc-based analysis package has been developed for the dual purposes of, firstly, providing 'quick-look' capability to research workers inspecting long time-series of global satellite datasets of Sea-surface Temperature (SST); and, secondly, introducing students, either undergraduates or advanced high-school students, to the characteristics of commonly used analysis techniques for large geophysical datasets from satellites. Students can also gain insight into the behaviour of some basic climate-related large-scale or global processes. The package gives students immediate access to up to 16 years of continuous global SST data, mainly from the Advanced Along-Track Scanning Radiometer (AATSR), currently flying on ESA's Envisat satellite. The data are presented in the form of monthly averages, spatially averaged onto half-degree or one-sixth-degree longitude-latitude grids. There are simple button-operated facilities for defining and calculating box-averages; producing time-series of such averages; defining and displaying transects and their evolution over time; and examining anomalous behaviour by displaying the difference between observed values and values derived from climatological means. By using these facilities a student rapidly gains familiarity with such processes as annual variability and the El Niño effect, as well as major current systems such as the Gulf Stream and other climatically important phenomena. In fact, the student is given immediate insight into the basic methods of examining geophysical data in a research context, without needing to acquire special analysis skills or go through the lengthy data retrieval and preparation procedures that are more generally required, as precursors to serious investigation, in the research laboratory. This software package, called the Leicester AATSR Global Analyser (LAGA), is written in a well-known and widely used analysis language and the package can be run by using software
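The anomaly facility described above amounts to subtracting a calendar-month climatology from each observed field. A minimal stand-alone sketch on a synthetic grid (LAGA itself operates on real AATSR monthly half-degree grids):

```python
# Monthly SST anomaly = observed field minus its calendar-month climatology.
# The gridded series below is synthetic: a seasonal cycle plus noise on a
# tiny 2x2 grid over three years.
import random

random.seed(1)
n_years, rows, cols = 3, 2, 2
sst = [
    [[20 + 5 * ((m % 12) < 6) + random.gauss(0, 0.3) for _ in range(cols)]
     for _ in range(rows)]
    for m in range(12 * n_years)
]

def climatology(series, month):
    """Mean field for a calendar month across all years."""
    fields = [series[t] for t in range(len(series)) if t % 12 == month]
    return [[sum(f[i][j] for f in fields) / len(fields)
             for j in range(cols)] for i in range(rows)]

def anomaly(series, t):
    """Observed field minus its calendar-month climatology."""
    clim = climatology(series, t % 12)
    return [[series[t][i][j] - clim[i][j] for j in range(cols)]
            for i in range(rows)]

a = anomaly(sst, 0)
print(a)  # small values: just noise around the monthly mean
```

Because the seasonal cycle is removed by construction, what remains in the anomaly field is exactly the climate-related departure (e.g. an El Niño warming) that the student is meant to spot.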

  5. Examination of the Safety of Pediatric Vaccine Schedules in a Non-Human Primate Model: Assessments of Neurodevelopment, Learning, and Social Behavior

    Science.gov (United States)

    Curtis, Britni; Liberato, Noelle; Rulien, Megan; Morrisroe, Kelly; Kenney, Caroline; Yutuc, Vernon; Ferrier, Clayton; Marti, C. Nathan; Mandell, Dorothy; Burbacher, Thomas M.; Sackett, Gene P.

    2015-01-01

    Background In the 1990s, the mercury-based preservative thimerosal was used in most pediatric vaccines. Although there are currently only two thimerosal-containing vaccines (TCVs) recommended for pediatric use, parental perceptions that vaccines pose safety concerns are affecting vaccination rates, particularly in light of the much expanded and more complex schedule in place today. Objectives The objective of this study was to examine the safety of pediatric vaccine schedules in a non-human primate model. Methods We administered vaccines to six groups of infant male rhesus macaques (n = 12–16/group) using a standardized thimerosal dose where appropriate. Study groups included the recommended 1990s Pediatric vaccine schedule, an accelerated 1990s Primate schedule with or without the measles–mumps–rubella (MMR) vaccine, the MMR vaccine only, and the expanded 2008 schedule. We administered saline injections to age-matched control animals (n = 16). Infant development was assessed from birth to 12 months of age by examining the acquisition of neonatal reflexes, the development of object concept permanence (OCP), computerized tests of discrimination learning, and infant social behavior. Data were analyzed using analysis of variance, multilevel modeling, and survival analyses, where appropriate. Results We observed no group differences in the acquisition of OCP. During discrimination learning, animals receiving TCVs had improved performance on reversal testing, although some of these same animals showed poorer performance in subsequent learning-set testing. Analysis of social and nonsocial behaviors identified few instances of negative behaviors across the entire infancy period. Although some group differences in specific behaviors were reported at 2 months of age, by 12 months all infants, irrespective of vaccination status, had developed the typical repertoire of macaque behaviors. Conclusions This comprehensive 5-year case–control study, which closely examined

  6. Examination of a Group Counseling Model of Career Decision Making with College Students

    Science.gov (United States)

    Rowell, P. Clay; Mobley, A. Keith; Kemer, Gulsah; Giordano, Amanda

    2014-01-01

    The authors examined the effectiveness of a group career counseling model (Pyle, K. R., 2007) on college students' career decision-making abilities. They used a Solomon 4-group design and found that students who participated in the career counseling groups had significantly greater increases in career decision-making abilities than those who…

  7. What is needed to eliminate new pediatric HIV infections: The contribution of model-based analyses

    Science.gov (United States)

    Doherty, Katie; Ciaranello, Andrea

    2013-01-01

    Purpose of Review Computer simulation models can identify key clinical, operational, and economic interventions that will be needed to achieve the elimination of new pediatric HIV infections. In this review, we summarize recent findings from model-based analyses of strategies for prevention of mother-to-child HIV transmission (MTCT). Recent Findings In order to achieve elimination of MTCT (eMTCT), model-based studies suggest that scale-up of services will be needed in several domains: uptake of services and retention in care (the PMTCT “cascade”), interventions to prevent HIV infections in women and reduce unintended pregnancies (the “four-pronged approach”), efforts to support medication adherence through long periods of pregnancy and breastfeeding, and strategies to make breastfeeding safer and/or shorter. Models also project the economic resources that will be needed to achieve these goals in the most efficient ways to allocate limited resources for eMTCT. Results suggest that currently recommended PMTCT regimens (WHO Option A, Option B, and Option B+) will be cost-effective in most settings. Summary Model-based results can guide future implementation science, by highlighting areas in which additional data are needed to make informed decisions and by outlining critical interventions that will be necessary in order to eliminate new pediatric HIV infections. PMID:23743788

  8. VOC composition of current motor vehicle fuels and vapors, and collinearity analyses for receptor modeling.

    Science.gov (United States)

    Chin, Jo-Yu; Batterman, Stuart A

    2012-03-01

The formulation of motor vehicle fuels can alter the magnitude and composition of evaporative and exhaust emissions occurring throughout the fuel cycle. Information regarding the volatile organic compound (VOC) composition of motor fuels other than gasoline is scarce, especially for bioethanol and biodiesel blends. This study examines the liquid and vapor (headspace) composition of four contemporary and commercially available fuels: gasoline, E85 (85% ethanol blend), ultra-low sulfur diesel (ULSD), and B20 (20% soy-biodiesel and 80% ULSD). The composition of gasoline and E85 in both neat fuel and headspace vapor was dominated by aromatics and n-heptane. Despite its low gasoline content, E85 vapor contained higher concentrations of several VOCs than those in gasoline vapor, likely due to adjustments in its formulation. Temperature changes produced greater changes in the partial pressures of 17 VOCs in E85 than in gasoline, and large shifts in the VOC composition. B20 and ULSD were dominated by C9 to C16 n-alkanes and low levels of aromatics, and the two fuels had similar headspace vapor composition and concentrations. While the headspace composition predicted using vapor-liquid equilibrium theory was closely correlated to measurements, E85 vapor concentrations were underpredicted. Based on variance decomposition analyses, the VOC profiles of gasoline and diesel fuels and their vapors were distinct, but B20 and ULSD fuels and vapors were highly collinear. These results can be used to estimate fuel-related emissions and exposures, particularly in receptor models that apportion emission sources, and the collinearity analysis suggests that gasoline- and diesel-related emissions can be distinguished. Copyright © 2011 Elsevier Ltd. All rights reserved.
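The vapor-liquid equilibrium prediction mentioned in the abstract is, in its simplest ideal form, Raoult's law. The mole fractions and pure-component vapor pressures below are rough illustrative values, not the study's measurements:

```python
# Ideal Raoult's-law headspace prediction: the partial pressure of each
# component is its liquid mole fraction times its pure vapor pressure.
# Mole fractions and vapor pressures are rough illustrative values only.
def headspace_partial_pressures(liquid_mole_frac, pure_vapor_pressure_kpa):
    """Ideal Raoult's law: p_i = x_i * P_i_sat."""
    return {c: x * pure_vapor_pressure_kpa[c]
            for c, x in liquid_mole_frac.items()}

fuel = {"toluene": 0.10, "n-heptane": 0.05, "ethanol": 0.85}   # E85-like mix
psat = {"toluene": 3.8, "n-heptane": 6.1, "ethanol": 7.9}      # kPa near 25 C

pp = headspace_partial_pressures(fuel, psat)
total = sum(pp.values())
vapor_frac = {c: p / total for c, p in pp.items()}
print(vapor_frac)
```

Real ethanol-gasoline blends are strongly non-ideal (activity coefficients well above 1 for the hydrocarbons), which is one plausible reason the ideal prediction underestimates E85 vapor concentrations, as the study observed.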

  9. Examining Asymmetrical Relationships of Organizational Learning Antecedents: A Theoretical Model

    Directory of Open Access Journals (Sweden)

    Ery Tri Djatmika

    2016-02-01

The global era is characterized by market demand for a high degree of competitive advantage. Responding to the challenge of rapid environmental change, organizational learning has become a strategic way to empower people within the organization to create novelty as a valuable source of positioning. For research purposes, determining the influential antecedents that affect organizational learning is vital to understanding research-based solutions and their practical implications. Accordingly, it is critical to identify the variables to be examined for asymmetrical relationships. Possible antecedent variables come from organizational and personal points of view; it is also possible to include a moderating variable. A proposed theoretical model of the asymmetrical effects of organizational learning and its antecedents is discussed in this article.

  10. Mechanical analyses on the digital behaviour of the Tokay gecko (Gekko gecko) based on a multi-level directional adhesion model

    OpenAIRE

    Wu, Xuan; Wang, Xiaojie; Mei, Tao; Sun, Shaoming

    2015-01-01

This paper proposes a multi-level hierarchical model of the Tokay gecko (Gekko gecko) adhesive system and analyses the digital behaviour of G. gecko at the macro/meso scale. The model describes the structures of G. gecko's adhesive system from the nano-level spatulae to the sub-millimetre-level lamella. The G. gecko seta is modelled as an inextensible fibril based on Euler's elastica theorem. Considering the side contact of the spatular pads of the seta on the flat and rigid substrate...

  11. Continuous spatial modelling to analyse planning and economic consequences of offshore wind energy

    International Nuclear Information System (INIS)

    Moeller, Bernd

    2011-01-01

Offshore wind resources appear abundant, but technological, economic and planning issues significantly reduce the theoretical potential. While massive investments are anticipated and planners and developers are scouting for viable locations and considering risk and impact, few studies simultaneously address potentials and costs together with the consequences of proposed planning in an analytical and continuous manner and for larger areas at once. The consequences may be investments that fall short of efficiency and equity, and failed planning routines. A spatial resource economic model for the Danish offshore waters is presented, used to analyse area constraints, technological risks, priorities for development and opportunity costs of maintaining competing area uses. The SCREAM-offshore wind model (Spatially Continuous Resource Economic Analysis Model) uses raster-based geographical information systems (GIS) and considers numerous geographical factors, technology and cost data as well as planning information. Novel elements are weighted visibility analysis and geographically recorded shipping movements as variable constraints. A number of scenarios are described, which include restrictions on the use of offshore areas as well as alternative uses such as conservation and tourism. The results comprise maps, tables and cost-supply curves for further resource economic assessment and policy analysis. A discussion of parameter variations exposes uncertainties of technology development, environmental protection and competing area uses, and illustrates how such models might assist in improving public planning while providing decision support for the political process. The method can be adapted to different research questions and is largely applicable in other parts of the world. - Research Highlights: → A model for the spatially continuous evaluation of offshore wind resources. → Assessment of spatial constraints, costs and resources for each location. → Planning tool for
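The cost-supply curves mentioned in the results can be constructed by ranking candidate sites by unit cost and accumulating capacity; the site data below are invented:

```python
# Build a cost-supply curve: sort candidate sites by levelized cost and
# accumulate capacity, cheapest first. All site data are invented.
sites = [
    {"name": "A", "capacity_mw": 400, "cost_eur_mwh": 62},
    {"name": "B", "capacity_mw": 250, "cost_eur_mwh": 48},
    {"name": "C", "capacity_mw": 600, "cost_eur_mwh": 71},
    {"name": "D", "capacity_mw": 300, "cost_eur_mwh": 55},
]

def cost_supply_curve(site_list):
    """Return (cumulative capacity, cost) points, cheapest sites first."""
    curve, cumulative = [], 0
    for s in sorted(site_list, key=lambda s: s["cost_eur_mwh"]):
        cumulative += s["capacity_mw"]
        curve.append((cumulative, s["cost_eur_mwh"]))
    return curve

curve = cost_supply_curve(sites)
for capacity, cost in curve:
    print(f"{capacity:5d} MW available at <= {cost} EUR/MWh")
```

In the model described above each raster cell plays the role of a "site", with its cost assembled from depth, distance, visibility and other spatial constraints before the same ranking step.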

  12. Conclusions from working group 2 - the analyses of the WIPP-2 experiments

    International Nuclear Information System (INIS)

    Jackson, C.P.

    1995-01-01

    The INTRAVAL WIPP-2 test case is based on data from site investigations carried out at the Waste Isolation Pilot Plant (WIPP) in New Mexico, USA. The site has been chosen as a potential location for a radioactive waste repository. Extensive investigations have been carried out, focused mainly on groundwater flow and transport in the Culebra Dolomite, the main pathway for transport of radionuclides off the site by groundwater in the case of an accidental borehole intrusion into the repository. Five teams studied the test case. Two teams addressed issues involved in the treatment of heterogeneity. Stochastic models and a Monte Carlo approach were used. One team quantified the increased uncertainty resulting from fewer data and explored the issues involved in validation of stochastic models. A second team developed a new method for conditioning stochastic models on head data. Two other teams examined issues relating to the choice of conceptual models. Two-dimensional vertical cross-section models were used to explore the importance of vertical flow. The fifth team advocate the use of a variety of models to highlight the most important processes and parameters. Conclusions from each team experiment are analysed. (J.S.). 4 refs., 11 figs

  13. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    Science.gov (United States)

    Rallapalli, Varsha H.

    2016-01-01

    Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRENV) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S) and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRENV has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding to speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRENV. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating feasibility of neural SNRENV computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence in individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of the neural SNRENV in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.
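The SNRENV metric discussed above compares envelope power for speech-plus-noise against noise alone within each modulation band. A toy single-band version, with envelopes synthesized directly rather than derived from a modulation filter bank or auditory-nerve spike trains:

```python
# Toy rendering of the SNR_ENV idea: envelope power of speech-plus-noise
# versus noise alone in one modulation band (here 4 Hz). Envelopes are
# synthesized directly for illustration.
import math
import random

random.seed(2)
fs = 100          # envelope sampling rate (Hz)
T = 2.0           # duration (s)
n = int(fs * T)
fm = 4            # modulation frequency of interest (Hz)

noise_env = [1.0 + random.gauss(0, 0.05) for _ in range(n)]   # flat-ish noise
speech_plus_noise_env = [
    v + 0.5 * math.sin(2 * math.pi * fm * t / fs)             # 4 Hz modulation
    for t, v in enumerate(noise_env)
]

def band_power(env, f):
    """Normalized envelope-fluctuation power at modulation frequency f
    (single DFT bin, DC removed, normalized by the squared mean)."""
    mean = sum(env) / len(env)
    re = sum((x - mean) * math.cos(2 * math.pi * f * t / fs)
             for t, x in enumerate(env))
    im = sum((x - mean) * math.sin(2 * math.pi * f * t / fs)
             for t, x in enumerate(env))
    return (re ** 2 + im ** 2) / len(env) ** 2 / mean ** 2

p_sn = band_power(speech_plus_noise_env, fm)
p_n = band_power(noise_env, fm)
snr_env = (p_sn - p_n) / p_n
print(f"SNRenv in the {fm} Hz band: {snr_env:.1f}")
```

The neural version described in the abstract replaces these synthetic envelopes with shuffled correlograms of model spike trains, but the band-wise power comparison is the same in spirit.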

  14. Propensity-score matching in economic analyses: comparison with regression models, instrumental variables, residual inclusion, differences-in-differences, and decomposition methods.

    Science.gov (United States)

    Crown, William H

    2014-02-01

    This paper examines the use of propensity score matching in economic analyses of observational data. Several excellent papers have previously reviewed practical aspects of propensity score estimation and other aspects of the propensity score literature. The purpose of this paper is to compare the conceptual foundation of propensity score models with alternative estimators of treatment effects. References are provided to empirical comparisons among methods that have appeared in the literature. These comparisons are available for a subset of the methods considered in this paper. However, in some cases, no pairwise comparisons of particular methods are yet available, and there are no examples of comparisons across all of the methods surveyed here. Irrespective of the availability of empirical comparisons, the goal of this paper is to provide some intuition about the relative merits of alternative estimators in health economic evaluations where nonlinearity, sample size, availability of pre/post data, heterogeneity, and missing variables can have important implications for choice of methodology. Also considered is the potential combination of propensity score matching with alternative methods such as differences-in-differences and decomposition methods that have not yet appeared in the empirical literature.
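The propensity-score matching estimator discussed in the paper can be sketched end-to-end: fit a logistic model for treatment assignment, then pair each treated unit with its nearest-score control. Everything below (data, confounder, effect size) is synthetic:

```python
# Propensity-score matching sketch: logistic propensity model fit by plain
# gradient descent, then 1:1 nearest-neighbor matching (with replacement)
# to estimate the effect of treatment on the treated (ATT). Synthetic data.
import math
import random

random.seed(3)

# One confounder x drives both treatment probability and outcome.
units = []
for _ in range(300):
    x = random.uniform(0, 1)
    treated = random.random() < 1 / (1 + math.exp(-(2 * x - 1)))
    y = 3 * x + (1.0 if treated else 0.0) + random.gauss(0, 0.1)  # true effect = 1
    units.append((x, int(treated), y))

def fit_propensity(data, lr=0.5, steps=2000):
    """Logistic regression P(treated | x) by gradient ascent on the log-likelihood."""
    b0 = b1 = 0.0
    for _ in range(steps):
        g0 = g1 = 0.0
        for x, t, _ in data:
            p = 1 / (1 + math.exp(-(b0 + b1 * x)))
            g0 += t - p
            g1 += (t - p) * x
        b0 += lr * g0 / len(data)
        b1 += lr * g1 / len(data)
    return lambda x: 1 / (1 + math.exp(-(b0 + b1 * x)))

score = fit_propensity(units)
treated = [(score(x), y) for x, t, y in units if t]
controls = [(score(x), y) for x, t, y in units if not t]

# Each treated unit is paired with the control whose score is closest.
att = sum(y - min(controls, key=lambda c: abs(c[0] - s))[1]
          for s, y in treated) / len(treated)
print(f"matched estimate of treatment effect: {att:.2f}")
```

A naive treated-minus-control mean would be biased upward here because the confounder raises both treatment probability and the outcome; matching on the propensity score recovers an estimate near the true effect, which is the core idea the paper compares against regression and instrumental-variable alternatives.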

  15. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or

  16. Forced vibration tests and simulation analyses of a nuclear reactor building. Part 2: simulation analyses

    International Nuclear Information System (INIS)

    Kuno, M.; Nakagawa, S.; Momma, T.; Naito, Y.; Niwa, M.; Motohashi, S.

    1995-01-01

Forced vibration tests of a BWR-type reactor building, Hamaoka Unit 4, were performed. Valuable data on the dynamic characteristics of the soil-structure interaction system were obtained through the tests. Simulation analyses of the fundamental dynamic characteristics of the soil-structure system were conducted using a basic lumped-mass soil-structure model (lattice model), and strong correlation with the measured data was obtained. Furthermore, detailed simulation models were employed to investigate the effects of simultaneously induced vertical response and of the response of the adjacent turbine building on the lateral response of the reactor building. (author). 4 refs., 11 figs

  17. Models for regionalizing economic data and their applications within the scope of forensic disaster analyses

    Science.gov (United States)

    Schmidt, Hanns-Maximilian; Wiens, Marcus, Dr. rer. pol.; Schultmann, Frank, Prof. Dr. rer. pol.

    2015-04-01

    The impact of natural hazards on the economic system can be observed in many different regions all over the world. Once the local economic structure is hit by an event, direct costs instantly occur. However, the disturbance on a local level (e.g. parts of a city or industries along a river bank) might also cause monetary damages in other, indirectly affected sectors. If the impact of an event is strong, these damages are likely to cascade and spread even on an international scale (e.g. the eruption of Eyjafjallajökull and its impact on the automotive sector in Europe). In order to determine these indirect impacts, one has to gain insight into the directly hit economic structure before being able to calculate the side effects. In particular, any model intended for near real-time forensic disaster analyses needs to be based on data that are rapidly available or easily computed. Therefore, we investigated commonly used and recently discussed methodologies for regionalizing economic data. Surprisingly, even for German federal states there is no official input-output data available, although such data would provide detailed figures on the economic interrelations between industry sectors. For highly developed countries such as Germany, we therefore focus on models for regionalizing the nationwide input-output table, which is usually available from the national statistical office. When it comes to developing countries (e.g. in South-East Asia), however, data quality and availability are usually much poorer, and other sources need to be found for a proper assessment of regional economic performance. We developed an indicator-based model that can fill this gap because of its flexibility regarding the level of aggregation and the composability of different input parameters. Our poster presentation provides a literature review and a summary of potential models that appear useful for this specific task.

  18. PWR plant transient analyses using TRAC-PF1

    International Nuclear Information System (INIS)

    Ireland, J.R.; Boyack, B.E.

    1984-01-01

    This paper describes some of the pressurized water reactor (PWR) transient analyses performed at Los Alamos for the US Nuclear Regulatory Commission using the Transient Reactor Analysis Code (TRAC-PF1). Many of the transient analyses performed directly address current PWR safety issues. Included in this paper are examples of two safety issues addressed by TRAC-PF1: pressurized thermal shock (PTS) and feed-and-bleed cooling for Oconee-1. The calculations performed were plant specific in that details of both the primary and secondary sides were modeled, in addition to models of the plant integrated control systems. The results of these analyses show that for these two transients the reactor cores remained covered and cooled at all times, posing no real threat to the reactor system or to the public.

  19. Interrogating Paradigmatic and Narrative Analyses against a Backdrop of Teacher Professionalism

    Science.gov (United States)

    MacMath, Sheryl

    2009-01-01

    In response to Denzin and Lincoln's claim that using a variety of interpretive analyses enables a better understanding of the world, I examine two different analyses of the same data set. Using a survey of preservice elementary education students (n = 208) asked to describe their perceptions of teacher professionalism, I contrast the application…

  20. Interpreting Mini-Mental State Examination Performance in Highly Proficient Bilingual Spanish-English and Asian Indian-English Speakers: Demographic Adjustments, Item Analyses, and Supplemental Measures.

    Science.gov (United States)

    Milman, Lisa H; Faroqi-Shah, Yasmeen; Corcoran, Chris D; Damele, Deanna M

    2018-04-17

    Performance on the Mini-Mental State Examination (MMSE), among the most widely used global screens of adult cognitive status, is affected by demographic variables including age, education, and ethnicity. This study extends prior research by examining the specific effects of bilingualism on MMSE performance. Sixty independent community-dwelling monolingual and bilingual adults were recruited from eastern and western regions of the United States in this cross-sectional group study. Independent sample t tests were used to compare 2 bilingual groups (Spanish-English and Asian Indian-English) with matched monolingual speakers on the MMSE, demographically adjusted MMSE scores, MMSE item scores, and a nonverbal cognitive measure. Regression analyses were also performed to determine whether language proficiency predicted MMSE performance in both groups of bilingual speakers. Group differences were evident on the MMSE, on demographically adjusted MMSE scores, and on a small subset of individual MMSE items. Scores on a standardized screen of language proficiency predicted a significant proportion of the variance in the MMSE scores of both bilingual groups. Bilingual speakers demonstrated distinct performance profiles on the MMSE. Results suggest that supplementing the MMSE with a language screen, administering a nonverbal measure, and/or evaluating item-based patterns of performance may assist with test interpretation for this population.

  1. Item response analysis on an examination in anesthesiology for medical students in Taiwan: A comparison of one- and two-parameter logistic models

    Directory of Open Access Journals (Sweden)

    Yu-Feng Huang

    2013-06-01

    Conclusion: Item response models are useful for medical test analyses and provide valuable information about model comparisons and identification of differential items other than test reliability, item difficulty, and examinee's ability.
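
The difference between the two models compared in this record can be made concrete: the one-parameter (Rasch) model fixes every item's discrimination at a common value, while the two-parameter logistic (2PL) model lets it vary per item. A minimal sketch of the item response function (illustrative parameter values only, not the examination data from the study):

```python
import numpy as np

def irt_prob(theta, b, a=1.0):
    """Probability of a correct response under a logistic IRT model.

    a = 1 for every item gives the one-parameter (Rasch) model;
    a free per item gives the two-parameter logistic (2PL) model."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

theta = np.linspace(-3, 3, 7)       # examinee abilities
b = 0.5                             # item difficulty
p_1pl = irt_prob(theta, b)          # discrimination fixed at 1
p_2pl = irt_prob(theta, b, a=2.0)   # steeper curve: a more discriminating item
```

Both curves cross P = 0.5 exactly at theta = b; model comparison then asks whether the extra discrimination parameters improve fit enough to justify them.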

  2. Assessment applicability of selected models of multiple discriminant analyses to forecast financial situation of Polish wood sector enterprises

    Directory of Open Access Journals (Sweden)

    Adamowicz Krzysztof

    2017-03-01

    In the last three decades, forecasting the bankruptcy of enterprises has been an important and difficult problem and the impulse for many research projects (Ribeiro et al. 2012). At present many methods of bankruptcy prediction are available. In view of the specific character of economic activity in individual sectors, specialised methods adapted to a given branch of industry are used increasingly often. An important scientific problem is therefore the identification of an appropriate model, or group of models, for preparing forecasts for a given branch of industry. Research has thus been conducted to select the model of Multiple Discriminant Analysis (MDA) best adapted to forecasting changes in the wood industry. This study analyses 10 prediction models popular in Poland. The effectiveness of the model proposed by Jagiełło, developed for all industrial enterprises, may be labelled accidental; that model is not adapted to predicting financial changes in wood sector companies in Poland.
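
The common core of the MDA models compared in such studies is Fisher's linear discriminant: a weighted combination of financial ratios whose sign classifies a firm as solvent or distressed. A sketch on invented data (the two ratios, group means, and spread are illustrative assumptions, not the Polish wood-sector figures):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy financial ratios (e.g. liquidity, ROA) for solvent and distressed firms.
n = 200
solvent = rng.normal([2.0, 0.3], 0.5, size=(n, 2))
distressed = rng.normal([1.0, -0.2], 0.5, size=(n, 2))

def fisher_lda(a, b):
    """Fisher's linear discriminant: the direction maximising between-class
    separation relative to within-class scatter."""
    Sw = np.cov(a, rowvar=False) + np.cov(b, rowvar=False)   # within-class scatter
    w = np.linalg.solve(Sw, a.mean(0) - b.mean(0))
    c = w @ (a.mean(0) + b.mean(0)) / 2                      # midpoint threshold
    return w, c

w, c = fisher_lda(solvent, distressed)
score = lambda firm: w @ firm - c        # score > 0 -> classified as solvent
accuracy = np.concatenate([
    [score(f) > 0 for f in solvent],
    [score(f) <= 0 for f in distressed]]).mean()
```

Published MDA models such as Altman-style scores are essentially a fixed (w, c) estimated on one population, which is why their effectiveness can collapse when transferred to a different sector.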

  3. A model for asymmetric ballooning and analyses of ballooning behaviour of single rods with probabilistic methods

    International Nuclear Information System (INIS)

    Keusenhoff, J.G.; Schubert, J.D.; Chakraborty, A.K.

    1985-01-01

    Plastic deformation behaviour of Zircaloy cladding has been extensively examined in the past and is best described by a model for asymmetric deformation. A slight displacement between the pellet and cladding will always exist, and this leads to the formation of azimuthal temperature differences. The ballooning process is strongly temperature dependent, so the built-up temperature differences produce differing deformation behaviour around the circumference of the cladding. The calculated ballooning of the cladding is mainly influenced by its temperature, the applied burst criterion, and the parameters used in the deformation model. All these influencing parameters possess uncertainties. In order to quantify these uncertainties and to estimate distribution functions of important parameters such as temperature and deformation, the response surface method was applied. For a hot rod the calculated standard deviation of cladding temperature amounts to 50 K. This high value indicates the large influence of the external cooling conditions on the deformation and burst behaviour of the cladding. In an additional statistical examination, the parameters of the deformation and burst models were included and their influence on the deformation of the rod was studied. (author)

  4. Examining the Pathologic Adaptation Model of Community Violence Exposure in Male Adolescents of Color

    Science.gov (United States)

    Gaylord-Harden, Noni K.; So, Suzanna; Bai, Grace J.; Henry, David B.; Tolan, Patrick H.

    2017-01-01

    The current study examined a model of desensitization to community violence exposure—the pathologic adaptation model—in male adolescents of color. The current study included 285 African American (61%) and Latino (39%) male adolescents (W1 M age = 12.41) from the Chicago Youth Development Study to examine the longitudinal associations between community violence exposure, depressive symptoms, and violent behavior. Consistent with the pathologic adaptation model, results indicated a linear, positive association between community violence exposure in middle adolescence and violent behavior in late adolescence, as well as a curvilinear association between community violence exposure in middle adolescence and depressive symptoms in late adolescence, suggesting emotional desensitization. Further, these effects were specific to cognitive-affective symptoms of depression and not somatic symptoms. Emotional desensitization outcomes, as assessed by depressive symptoms, can occur in male adolescents of color exposed to community violence and these effects extend from middle adolescence to late adolescence. PMID:27653968

  5. Using a latent variable model with non-constant factor loadings to examine PM2.5 constituents related to secondary inorganic aerosols.

    Science.gov (United States)

    Zhang, Zhenzhen; O'Neill, Marie S; Sánchez, Brisa N

    2016-04-01

    Factor analysis is a commonly used method of modelling correlated multivariate exposure data. Typically, the measurement model is assumed to have constant factor loadings. However, in our preliminary analyses of the Environmental Protection Agency's (EPA's) PM2.5 speciation data, we observed that the factor loadings for four constituents change considerably in stratified analyses. Since invariance of factor loadings is a prerequisite for valid comparison of the underlying latent variables, we propose a factor model with non-constant factor loadings that change over time and space, modelled using P-splines penalized by the generalized cross-validation (GCV) criterion. The model is implemented with the Expectation-Maximization (EM) algorithm, and the multiple spline smoothing parameters are selected by minimizing the GCV criterion with Newton's method during each iteration of the EM algorithm. The algorithm is applied to a one-factor model that includes four constituents. Through bootstrap confidence bands, we find that the factor loading for total nitrate changes across seasons and geographic regions.
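
The loading non-invariance that motivates the model can be reproduced with a toy check (everything below is simulated; the four "constituents", the stratum labels, and the loading values are invented and are not the EPA data): fit a one-factor proxy in each stratum and compare the estimated loadings.

```python
import numpy as np

rng = np.random.default_rng(1)

def leading_loadings(X):
    """First principal axis of the correlation matrix: a quick proxy for
    one-factor loadings in an exploratory check."""
    R = np.corrcoef(X, rowvar=False)
    vals, vecs = np.linalg.eigh(R)       # eigenvalues in ascending order
    v = vecs[:, -1]                      # leading eigenvector
    return v if v.sum() >= 0 else -v     # fix the sign for comparability

# One latent factor driving 4 constituents; the loading of the 4th
# constituent changes between the "winter" and "summer" strata.
def simulate(n, load4):
    z = rng.normal(size=n)
    loads = np.array([0.8, 0.7, 0.6, load4])
    return z[:, None] * loads + 0.4 * rng.normal(size=(n, 4))

winter = simulate(5000, 0.9)
summer = simulate(5000, 0.2)

lw, ls = leading_loadings(winter), leading_loadings(summer)
# The 4th loading drops in summer, mirroring the non-invariance the paper targets.
```

If the stratified loadings were equal, comparing latent factor scores across strata would be valid; when they differ like this, a model that lets loadings vary smoothly over time and space is the safer choice.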

  6. Improvement of a three-dimensional atmospheric dynamic model and examination of its performance over complex terrain

    International Nuclear Information System (INIS)

    Nagai, Haruyasu; Yamazawa, Hiromi

    1994-11-01

    A three-dimensional atmospheric dynamic model (PHYSIC) was improved and its performance was examined using the meteorological data observed at a coastal area with a complex terrain. To introduce synoptic meteorological conditions into the model, the initial and boundary conditions were improved. By this improvement, the model can predict the temporal change of wind field for more than 24 hours. Moreover, the model successfully simulates the land and sea breeze observed at Shimokita area in the summer of 1992. (author)

  7. Fluid-structure interaction and structural analyses using a comprehensive mitral valve model with 3D chordal structure.

    Science.gov (United States)

    Toma, Milan; Einstein, Daniel R; Bloodworth, Charles H; Cochran, Richard P; Yoganathan, Ajit P; Kunzelman, Karyn S

    2017-04-01

    Over the years, three-dimensional models of the mitral valve have generally been organized around a simplified anatomy. Leaflets have been typically modeled as membranes, tethered to discrete chordae typically modeled as one-dimensional, non-linear cables. Yet, recent, high-resolution medical images have revealed that there is no clear boundary between the chordae and the leaflets. In fact, the mitral valve has been revealed to be more of a webbed structure whose architecture is continuous with the chordae and their extensions into the leaflets. Such detailed images can serve as the basis of anatomically accurate, subject-specific models, wherein the entire valve is modeled with solid elements that more faithfully represent the chordae, the leaflets, and the transition between the two. These models have the potential to enhance our understanding of mitral valve mechanics and to re-examine the role of the mitral valve chordae, which heretofore have been considered to be 'invisible' to the fluid and to be of secondary importance to the leaflets. However, these new models also require a rethinking of modeling assumptions. In this study, we examine the conventional practice of loading the leaflets only and not the chordae in order to study the structural response of the mitral valve apparatus. Specifically, we demonstrate that fully resolved 3D models of the mitral valve require a fluid-structure interaction analysis to correctly load the valve even in the case of quasi-static mechanics. While a fluid-structure interaction model is still more computationally expensive than a structural-only model, we also show that advances in GPU computing have made such models tractable. Copyright © 2016 John Wiley & Sons, Ltd.

  8. Analysing pseudoephedrine/methamphetamine policy options in Australia using multi-criteria decision modelling.

    Science.gov (United States)

    Manning, Matthew; Wong, Gabriel T W; Ransley, Janet; Smith, Christine

    2016-06-01

    In this paper we capture and synthesize the unique knowledge of experts so that choices regarding policy measures to address methamphetamine consumption and dependency in Australia can be strengthened. We examine perceptions of (1) the influence of underlying factors that impact on the methamphetamine problem; (2) the importance of various models of intervention that have the potential to affect the success of policies; and (3) the efficacy of alternative pseudoephedrine policy options. We adopt a multi-criteria decision model to unpack the factors that affect decisions made by experts and to examine potential variations in weights/preferences among groups. Seventy experts from five groups in Australia participated in the survey: academia (18.6%), government and policy (27.1%), health (18.6%), pharmaceutical (17.1%) and police (18.6%). Social characteristics are considered the most important underlying factor, prevention the most effective strategy, and Project STOP the most preferred policy option with respect to reducing methamphetamine consumption and dependency in Australia. One-way repeated-measures ANOVAs indicate a statistically significant difference with regard to the influence of underlying factors (F(2.3, 144.5) = 11.256) on methamphetamine consumption and dependency. Most experts support the use of preventative mechanisms to inhibit drug initiation and delay drug uptake. Compared to other policies, Project STOP (which aims to disrupt the initial diversion of pseudoephedrine) appears to be the preferable preventative mechanism for controlling the production and subsequent sale and use of methamphetamine. This regulatory civil-law lever engages third parties in controlling drug-related crime. The literature supports third-party partnerships, as they engage experts who have knowledge and expertise with respect to prevention and harm minimization. Copyright © 2016 Elsevier B.V. All rights reserved.
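
The aggregation step of such a multi-criteria decision model can be illustrated with a weighted-sum sketch. Only the "Project STOP" option name echoes the abstract; the other options, the three criteria, and all scores and weights below are invented for illustration:

```python
import numpy as np

# Hypothetical scores (rows: policy options, cols: criteria) and expert weights.
options = ["Project STOP", "Prescription-only", "Status quo"]
scores = np.array([[0.8, 0.6, 0.7],
                   [0.6, 0.9, 0.4],
                   [0.3, 0.2, 0.9]])   # e.g. prevention, enforcement, cost
weights = np.array([0.5, 0.3, 0.2])    # elicited from experts; must sum to 1

overall = scores @ weights             # weighted-sum aggregation
best = options[int(overall.argmax())]
```

In a full analysis the weights themselves come from structured elicitation (e.g. pairwise comparisons across expert groups) rather than being fixed by hand, which is where the between-group variation the paper tests enters.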

  9. Comparing Multidimensional and Continuum Models of Vocabulary Acquisition: An Empirical Examination of the Vocabulary Knowledge Scale

    Science.gov (United States)

    Stewart, Jeffrey; Batty, Aaron Olaf; Bovee, Nicholas

    2012-01-01

    Second language vocabulary acquisition has been modeled both as multidimensional in nature and as a continuum wherein the learner's knowledge of a word develops along a cline from recognition through production. In order to empirically examine and compare these models, the authors assess the degree to which the Vocabulary Knowledge Scale (VKS;…

  10. The application of fluid structure interaction techniques within finite element analyses of water-filled transport flasks

    International Nuclear Information System (INIS)

    Smith, C.; Stojko, S.

    2004-01-01

    Historically, Finite Element (FE) analyses of water-filled transport flasks and their payloads have been carried out assuming a dry environment, mainly due to a lack of robust Fluid Structure Interaction (FSI) modelling techniques. It has also been accepted within the RAM transport industry that the presence of water would improve the impact withstand capability of dropped payloads within containers. In recent years the FE community has seen significant progress and improvement in FSI techniques. These methods have been utilised to investigate the effects of a wet environment on payload behaviour for the regulatory drop test within a recent transport licence renewal application. Fluid flow and pressure vary significantly during a wet impact, and the effects on the contents become complex when water is incorporated into the flask analyses. Modelling a fluid environment within the entire flask is considered impractical; hence a good understanding of the FSI techniques and assumptions regarding fluid boundaries is required in order to create a representative FSI model. Therefore, a Verification and Validation (V and V) exercise was undertaken to underpin the FSI techniques eventually utilised. A number of problems of varying complexity have been identified to test the FSI capabilities of the explicit code LS-DYNA, which is used in the extant dry container impact analyses. The RADIOSS explicit code has been used for comparison, to provide further confidence in the LS-DYNA predictions. Various methods of modelling fluid are tested, and the relative advantages and limitations of each method and of the FSI coupling approaches are discussed. Results from the V and V problems examined provided sufficient confidence that FSI effects within containers can be accurately modelled.

  11. Mental model mapping as a new tool to analyse the use of information in decision-making in integrated water management

    NARCIS (Netherlands)

    Kolkman, Rien; Kok, Matthijs; van der Veen, A.

    2005-01-01

    The solution of complex, unstructured problems is faced with policy controversy and dispute, unused and misused knowledge, project delay and failure, and decline of public trust in governmental decisions. Mental model mapping (also called concept mapping) is a technique to analyse these difficulties.

  12. Balmorel: A model for analyses of the electricity and CHP markets in the Baltic Sea Region. Appendices

    International Nuclear Information System (INIS)

    Ravn, H.F.; Munksgaard, J.; Ramskov, J.; Grohnheit, P.E.; Larsen, H.V.

    2001-03-01

    This report describes the motivations behind the development of the Balmorel model as well as the model itself. The purpose of the Balmorel project is to develop a model for analyses of the power and CHP sectors in the Baltic Sea Region. The model is directed towards the analysis of relevant policy questions to the extent that they contain substantial international aspects. The model is developed in response to the trend towards internationalisation in the electricity sector. This trend is seen in increased international trade of electricity, in investment strategies among producers and otherwise. Also environmental considerations and policies are to an increasing extent gaining an international perspective in relation to the greenhouse gasses. Further, the ongoing process of deregulation of the energy sector highlights this and contributes to the need for overview and analysis. A guiding principle behind the construction of the model has been that it may serve as a means of communication in relation to the policy issues that already are or that may become important for the region. Therefore, emphasis has been put on documentation, transparency and flexibility of the model. This is achieved in part by formulating the model in a high level modelling language, and by making the model, including data, available at the internet. Potential users of the Balmorel model include research institutions, consulting companies, energy authorities, transmission system operators and energy companies. (au)

  13. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    Science.gov (United States)

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

    Certain contaminants may travel faster through soils when they are sorbed to subsurface colloidal particles. Indeed, subsurface colloids may act as carriers for some contaminants, accelerating their translocation through the soil into the water table. This phenomenon, known as colloid-facilitated contaminant transport, plays a significant role in contaminant transport in soils and has been recognized as a source of groundwater contamination. From a mechanistic point of view, the attachment/detachment of colloidal particles from the soil matrix or from the air-water interface, and the straining process, may modify the hydraulic properties of the porous media. Šimůnek et al. (2006) developed a model that can simulate colloid-facilitated contaminant transport in variably saturated porous media. The model is based on the solution of a modified advection-dispersion equation that accounts for several processes, namely straining, exclusion and attachment/detachment kinetics of colloids through the soil matrix. The solutions of these governing partial differential equations are obtained using a standard Galerkin-type linear finite element scheme, implemented in the HYDRUS-2D/3D software (Šimůnek et al., 2012). Modeling colloid transport through the soil and the interaction of colloids with the soil matrix and other contaminants is complex and requires the characterization of many model parameters. In practice, it is very difficult to assess actual transport parameter values, so they are often calibrated. However, before calibration one needs to know which parameters have the greatest impact on output variables, and this information can be obtained through a sensitivity analysis of the model. The main objective of this work is to perform local and global sensitivity analyses of the colloid-facilitated contaminant transport module of HYDRUS. Sensitivity analysis was performed in two steps: (i) we applied a screening method based on Morris' elementary effects.
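
The screening step mentioned at the end of the abstract can be sketched generically: a hand-rolled one-at-a-time version of Morris' elementary effects, run here on an invented three-parameter toy function rather than the HYDRUS module itself:

```python
import numpy as np

rng = np.random.default_rng(2)

def model(x):
    """Toy stand-in for a transport model: the output depends strongly on x0,
    weakly on x1, and not at all on x2."""
    return 10 * x[0] + 0.1 * x[1] ** 2 + 0.0 * x[2]

def morris_mu_star(f, k, r=50, delta=0.1):
    """One-at-a-time elementary-effects screening (Morris method).
    Returns mu*, the mean absolute elementary effect per parameter."""
    ee = np.zeros((r, k))
    for trial in range(r):
        x = rng.uniform(0, 1 - delta, size=k)   # random base point in [0,1]^k
        y0 = f(x)
        for i in range(k):
            xp = x.copy()
            xp[i] += delta                      # perturb one input at a time
            ee[trial, i] = abs(f(xp) - y0) / delta
    return ee.mean(axis=0)

mu_star = morris_mu_star(model, k=3)
ranking = np.argsort(mu_star)[::-1]   # most influential parameter first
```

Parameters with negligible mu* can be fixed at nominal values, so the expensive calibration and global sensitivity analysis only needs to cover the influential ones.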

  14. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 3

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Corum, J.M.; Bryson, J.W.

    1975-06-01

    The third in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: the experimental data provide design information directly applicable to nozzles in cylindrical vessels; and the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 3 had a 10 in. OD and the nozzle had a 1.29 in. OD, giving a d0/D0 ratio of 0.129. The OD/thickness ratios for the cylinder and the nozzle were 50 and 7.68 respectively. Thirteen separate loading cases were analyzed. In each, one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for all the loadings were obtained using 158 three-gage strain rosettes located on the inner and outer surfaces. The loading cases were also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  15. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 4

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Bryson, J.W.

    1975-06-01

    The last in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models in the series are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: (1) the experimental data provide design information directly applicable to nozzles in cylindrical vessels, and (2) the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 4 had an outside diameter of 10 in., and the nozzle had an outside diameter of 1.29 in., giving a d0/D0 ratio of 0.129. The OD/thickness ratios were 50 and 20.2 for the cylinder and nozzle respectively. Thirteen separate loading cases were analyzed. For each loading condition one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for each of the 13 loadings were obtained using 157 three-gage strain rosettes located on the inner and outer surfaces. Each of the 13 loading cases was also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  16. Examination of a muscular activity estimation model using a Bayesian network for the influence of an ankle foot orthosis.

    Science.gov (United States)

    Inoue, Jun; Kawamura, Kazuya; Fujie, Masakatsu G

    2012-01-01

    In the present paper, we examine the appropriateness of a new model of foot activity in gait. We developed an estimation model for foot-ankle muscular activity, to be used in the design of an ankle-foot orthosis, by means of a statistical method. We chose three muscles for measuring muscular activity and built a Bayesian network model to confirm the appropriateness of the estimation model. We experimentally examined the normal gait of a non-disabled subject, measuring the muscular activity of the lower foot muscles using electromyography, the joint angles, and the pressure on each part of the sole. From these data, we obtained the causal relationships at every 10% level for these factors and built models for the stance phase, control term, and propulsive term. Our model has three advantages. First, it can express influences that change during gait, because we use 10%-level nodes for each factor. Second, it can express influences of factors that differ between low and high muscular-activity levels. Third, we created divided models that are able to reflect the actual features of gait. In evaluating the new model, we confirmed that it is able to estimate all muscular activity levels with an accuracy of over 90%.

  17. Testicular Self-Examination: A Test of the Health Belief Model and the Theory of Planned Behaviour

    Science.gov (United States)

    McClenahan, Carol; Shevlin, Mark; Adamson, Gary; Bennett, Cara; O'Neill, Brenda

    2007-01-01

    The aim of this study was to test the utility and efficiency of the theory of planned behaviour (TPB) and the health belief model (HBM) in predicting testicular self-examination (TSE) behaviour. A questionnaire was administered to an opportunistic sample of 195 undergraduates aged 18-39 years. Structural equation modelling indicated that, on the…

  18. Gamma-ray pulsar physics: gap-model populations and light-curve analyses in the Fermi era

    International Nuclear Information System (INIS)

    Pierbattista, M.

    2010-01-01

    This thesis research focusses on the study of the young and energetic isolated ordinary pulsar population detected by the Fermi gamma-ray space telescope. We compared the expectations of four emission models with the LAT data. We found that all the models fail to reproduce the LAT detections, in particular the large number of high-Ė objects observed; this inconsistency is not model dependent. A discrepancy in the ratio of radio-loud to radio-quiet objects was also found between the observed and predicted samples. The L_γ ∝ Ė^0.5 relation is robustly confirmed by all the assumed models, with particular agreement in the slot gap (SG) case. On luminosity grounds, the intermediate-altitude emission of the two-pole caustic SG model is favoured. The beaming factor f_Ω shows an Ė dependency that is slightly visible in the SG case. Estimates of the pulsar orientations have been obtained to explain the simultaneous gamma-ray and radio light curves. By analysing the solutions we found a relation between the observed energy cutoff and the width of the emission slot gap; this relation had been theoretically predicted. A possible alignment of the magnetic obliquity α with time is rejected, for all the models, on timescales of the order of 10^6 years. The light-curve morphology study shows that outer-magnetosphere gap (OG) emission is favoured to explain the observed radio-gamma lag. The light-curve moment studies (symmetry and sharpness), on the contrary, favour two-pole caustic SG emission. All the model predictions suggest a different magnetic field layout, with hybrid two-pole caustic and intermediate-altitude emission, to explain both the pulsar luminosity and the light-curve morphology. The low-magnetosphere emission mechanism of the polar cap model is systematically rejected by all the tests done. (author) [fr]

  19. Congruence between distribution modelling and phylogeographical analyses reveals Quaternary survival of a toadflax species (Linaria elegans) in oceanic climate areas of a mountain ring range.

    Science.gov (United States)

    Fernández-Mazuecos, Mario; Vargas, Pablo

    2013-06-01

    · The role of Quaternary climatic shifts in shaping the distribution of Linaria elegans, an Iberian annual plant, was investigated using species distribution modelling and molecular phylogeographical analyses. Three hypotheses are proposed to explain the Quaternary history of its mountain ring range. · The distribution of L. elegans was modelled using the maximum entropy method and projected to the last interglacial and to the last glacial maximum (LGM) using two different paleoclimatic models: the Community Climate System Model (CCSM) and the Model for Interdisciplinary Research on Climate (MIROC). Two nuclear and three plastid DNA regions were sequenced for 24 populations (119 individuals sampled). Bayesian phylogenetic, phylogeographical, dating and coalescent-based population genetic analyses were conducted. · Molecular analyses indicated the existence of northern and southern glacial refugia and supported two routes of post-glacial recolonization. These results were consistent with the LGM distribution as inferred under the CCSM paleoclimatic model (but not under the MIROC model). Isolation between two major refugia was dated back to the Riss or Mindel glaciations, > 100 kyr before present (bp). · The Atlantic distribution of inferred refugia suggests that the oceanic (buffered)-continental (harsh) gradient may have played a key and previously unrecognized role in determining Quaternary distribution shifts of Mediterranean plants. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.

  20. A new approach to analyse longitudinal epidemiological data with an excess of zeros

    NARCIS (Netherlands)

    Spriensma, Alette S.; Hajos, Tibor R. S.; de Boer, Michiel R.; Heymans, Martijn W.; Twisk, Jos W. R.

    2013-01-01

    Background: Within longitudinal epidemiological research, 'count' outcome variables with an excess of zeros frequently occur. Although these outcomes are frequently analysed with a linear mixed model, or a Poisson mixed model, a two-part mixed model would be better in analysing outcome variables with an excess of zeros.
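The two-part idea can be sketched numerically: model the zero/non-zero split and the positive counts as separate processes. A minimal illustrative simulation only (the authors' model additionally includes random effects for the longitudinal structure; all values here are invented):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
pi, mu = 0.4, 3.0                        # assumed structural-zero probability and Poisson mean

# Zero-inflated counts: a structural-zero process mixed with a Poisson count process
structural_zero = rng.random(n) < pi
y = np.where(structural_zero, 0, rng.poisson(mu, n))

# A single-part mean is pulled down by the excess zeros
naive_mu = y.mean()                      # ≈ (1 - pi) * mu

# Two-part view: a binary part for P(y > 0) and a count part for the positives
p_pos = (y > 0).mean()
mean_pos = y[y > 0].mean()
```

Here `naive_mu` conflates the two processes, while `p_pos` and `mean_pos` describe them separately, which is the core of the two-part approach.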

  1. Models for transient analyses in advanced test reactors

    International Nuclear Information System (INIS)

    Gabrielli, Fabrizio

    2011-01-01

    Several strategies are being developed worldwide to respond to the world's increasing demand for electricity. Modern nuclear facilities are under construction or in the planning phase, and in parallel advanced nuclear reactor concepts are being developed to achieve sustainability, minimize waste and make efficient use of uranium resources. To optimize the performance of components (fuels and structures) of these systems, significant efforts are under way to design new Material Test Reactor facilities in Europe which employ water as a coolant. Safety provisions and the analyses of severe accidents are key points in the determination of sound designs. In this frame, the SIMMER multiphysics code system is a very attractive tool, as it can simulate transients and phenomena within and beyond the design basis in a tightly coupled way. This thesis is primarily focused upon the extension of the SIMMER multigroup cross-section processing scheme (based on the Bondarenko method) for a proper heterogeneity treatment in the analyses of water-cooled thermal neutron systems. Since the SIMMER code was originally developed for liquid-metal-cooled fast reactor analyses, the effect of heterogeneity had been neglected; as a result, applying the code to water-cooled systems leads to a significant overestimation of the reactivity feedbacks and in turn to non-conservative results. To treat the heterogeneity, the multigroup cross-sections should be computed by properly taking account of the resonance self-shielding effects and the fine group-wise intra-cell flux distribution in space. In this thesis, significant improvements of the SIMMER cross-section processing scheme are described. A new formulation of the background cross-section, based on the Bell and Wigner correlations, is introduced, and pre-calculated reduction factors (Effective Mean Chord Lengths) are used to take proper account of the resonance self-shielding effects of non-fuel isotopes. Moreover, pre-calculated parameters are applied

  2. Analyses of cavitation instabilities in ductile metals

    DEFF Research Database (Denmark)

    Tvergaard, Viggo

    2007-01-01

    Cavitation instabilities have been predicted for a single void in a ductile metal stressed under high-triaxiality conditions. In experiments for a ceramic reinforced by metal particles, a single dominant void has been observed on the fracture surface of some of the metal particles bridging a crack, and tests for a thin ductile metal layer bonding two ceramic blocks have also indicated rapid void growth. Analyses for these material configurations are discussed here. When the void radius is very small, a nonlocal plasticity model is needed to account for observed size effects; in recent analyses, the surrounding voids are represented by a porous ductile material model in terms of a field quantity that specifies the variation of the void volume fraction in the surrounding metal.

  3. The applied model of imagery use: Examination of moderation and mediation effects.

    Science.gov (United States)

    Koehn, S; Stavrou, N A M; Young, J A; Morris, T

    2016-08-01

    The applied model of mental imagery use proposed an interaction effect between imagery type and imagery ability. This study had two aims: (a) the examination of imagery ability as a moderating variable between imagery type and dispositional flow, and (b) the testing of alternative mediation models. The sample consisted of 367 athletes from Scotland and Australia, who completed the Sport Imagery Questionnaire, Sport Imagery Ability Questionnaire, and Dispositional Flow Scale-2. Hierarchical regression analysis showed direct effects of imagery use and imagery ability on flow, but no significant interaction. Mediation analysis revealed a significant indirect path, indicating a partially mediated relationship (P = 0.002) between imagery use, imagery ability, and flow. Partial mediation was confirmed when the effect of cognitive imagery use and cognitive imagery ability was tested, and a full mediation model was found between motivational imagery use, motivational imagery ability, and flow. The results are discussed in conjunction with potential future research directions on advancing theory and applications. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  4. An application of the 'Bayesian cohort model' to nuclear power plant cost analyses

    International Nuclear Information System (INIS)

    Ono, Kenji; Nakamura, Takashi

    2002-01-01

    We have developed a new method for identifying the effects of calendar year, plant age and commercial operation starting year on the costs and performances of nuclear power plants, and have also developed an analysis system running on personal computers. The method extends the Bayesian cohort model for time series social survey data proposed by one of the authors. The proposed method was shown to be able to separate the above three effects more properly than traditional methods such as taking simple means by time domain. The analyses of US nuclear plant cost and performance data using the proposed method suggest that many of the US plants spent a relatively long time and much capital cost on modifications at ages of about 10 to 20 years, but that after those ages they performed fairly well, with lower and stabilized O&M and additional capital costs. (author)

  5. A causal examination of the effects of confounding factors on multimetric indices

    Science.gov (United States)

    Schoolmaster, Donald R.; Grace, James B.; Schweiger, E. William; Mitchell, Brian R.; Guntenspergen, Glenn R.

    2013-01-01

    The development of multimetric indices (MMIs) as a means of providing integrative measures of ecosystem condition is becoming widespread. An increasingly recognized problem for the interpretability of MMIs is controlling for the potentially confounding influences of environmental covariates. Most common approaches to handling covariates are based on simple notions of statistical control, leaving the causal implications of covariates and their adjustment unstated. In this paper, we use graphical models to examine some of the potential impacts of environmental covariates on the observed signals between human disturbance and potential response metrics. Using simulations based on various causal networks, we show how environmental covariates can both obscure and exaggerate the effects of human disturbance on individual metrics. We then examine, from a causal interpretation standpoint, the common practice of adjusting ecological metrics for environmental influences using only the set of sites deemed to be in reference condition. We present and examine the performance of an alternative approach to metric adjustment that uses the whole set of sites and models both environmental and human disturbance effects simultaneously. The findings from our analyses indicate that failing to model and adjust metrics can result in a systematic bias towards those metrics in which environmental covariates function to artificially strengthen the metric–disturbance relationship, resulting in MMIs that do not accurately measure impacts of human disturbance. We also find that a “whole-set modeling approach” requires fewer assumptions and is more efficient with the given information than the more commonly applied “reference-set” approach.
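The whole-set adjustment idea can be illustrated with a toy simulation: when a metric responds to both human disturbance and an environmental covariate that is correlated with disturbance, regressing on both jointly across all sites recovers the disturbance signal, whereas ignoring the covariate biases it. The coefficients and correlation structure below are invented for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
disturbance = rng.random(n)
# Environmental covariate correlated with disturbance (a confounded setting)
covariate = 0.8 * disturbance + rng.normal(scale=0.2, size=n)
metric = 2.0 * disturbance - 1.5 * covariate + rng.normal(scale=0.5, size=n)

# Naive fit ignoring the covariate conflates the two influences
Xn = np.column_stack([np.ones(n), disturbance])
b_naive, *_ = np.linalg.lstsq(Xn, metric, rcond=None)

# "Whole-set" fit: model disturbance and environment simultaneously on all sites
Xw = np.column_stack([np.ones(n), disturbance, covariate])
b_whole, *_ = np.linalg.lstsq(Xw, metric, rcond=None)
```

The joint fit recovers the true disturbance effect (2.0) and covariate effect (-1.5); the naive fit absorbs part of the covariate effect into the disturbance coefficient (here roughly 2.0 - 1.5 x 0.8 = 0.8).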

  6. Fracture Mechanics Analyses of Reinforced Carbon-Carbon Wing-Leading-Edge Panels

    Science.gov (United States)

    Raju, Ivatury S.; Phillips, Dawn R.; Knight, Norman F., Jr.; Song, Kyongchan

    2010-01-01

    Fracture mechanics analyses of subsurface defects within the joggle regions of the Space Shuttle wing-leading-edge RCC panels are performed. A 2D plane strain idealized joggle finite element model is developed to study the fracture behavior of the panels for three distinct loading conditions - lift-off and ascent, on-orbit, and entry. For lift-off and ascent, an estimated bounding aerodynamic pressure load is used for the analyses, while for on-orbit and entry, thermo-mechanical analyses are performed using the extreme cold and hot temperatures experienced by the panels. In addition, a best estimate for the material stress-free temperature is used in the thermo-mechanical analyses. In the finite element models, the substrate and coating are modeled separately as two distinct materials. Subsurface defects are introduced at the coating-substrate interface and within the substrate. The objective of the fracture mechanics analyses is to evaluate the defect driving forces, which are characterized by the strain energy release rates, and determine if defects can become unstable for each of the loading conditions.

  7. A new approach to analyse longitudinal epidemiological data with an excess of zeros

    NARCIS (Netherlands)

    Spriensma, A.S.; Hajós, T.R.S.; de Boer, M.R.; Heijmans, M.W.; Twisk, J.W.R.

    2013-01-01

    Background: Within longitudinal epidemiological research, count outcome variables with an excess of zeros frequently occur. Although these outcomes are frequently analysed with a linear mixed model, or a Poisson mixed model, a two-part mixed model would be better in analysing outcome variables with an excess of zeros.

  8. Seismic risk analyses in the German Risk Study, phase B

    International Nuclear Information System (INIS)

    Hosser, D.; Liemersdorf, H.

    1991-01-01

    The paper discusses some aspects of the seismic risk part of the German Risk Study for Nuclear Power Plants, Phase B. First simplified analyses in Phase A of the study allowed only a rough classification of structures and systems of the PWR reference plant according to their seismic risk contribution. These studies were extended in Phase B using improved models for the dynamic analyses of buildings, structures and components as well as for the probabilistic analyses of seismic loading, failure probabilities and event trees. The methodology of deriving probabilistic seismic load descriptions is explained and compared with the methods in Phase A of the study and in other studies. Some details of the linear and nonlinear dynamic analyses of structures are reported in order to demonstrate the influence of different assumptions for material behaviour and failure criteria. The probabilistic structural and event tree analyses are discussed with respect to distribution assumptions, acceptable simplifications and model uncertainties. Some results for the PWR reference plant are given. (orig.)

  9. Examining School-Based Bullying Interventions Using Multilevel Discrete Time Hazard Modeling

    Science.gov (United States)

    Wagaman, M. Alex; Geiger, Jennifer Mullins; Bermudez-Parsai, Monica; Hedberg, E. C.

    2014-01-01

    Although schools have been trying to address bullying by utilizing different approaches that stop or reduce the incidence of bullying, little remains known about what specific intervention strategies are most successful in reducing bullying in the school setting. Using the social-ecological framework, this paper examines school-based disciplinary interventions often used to deliver consequences to deter the reoccurrence of bullying and aggressive behaviors among school-aged children. Data for this study are drawn from the School-Wide Information System (SWIS), with the final analytic sample consisting of 1,221 students in grades K-12 who received an office disciplinary referral for bullying during the first semester. Using Kaplan-Meier failure functions and multilevel discrete-time hazard models, determinants of the probability of a student receiving a second referral over time were examined. Of the seven interventions tested, only parent-teacher conference (AOR = 0.65, p < 0.05) significantly reduced the likelihood of a second referral for bullying and aggressive behaviors. By using a social-ecological framework, schools can develop strategies that deter the reoccurrence of bullying by identifying key factors that enhance a sense of connection between the students’ mesosystems as well as utilizing disciplinary strategies that take into consideration students’ microsystem roles. PMID:22878779
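A discrete-time hazard can be estimated directly from person-period data as the number of events divided by the number still at risk in each period. A toy sketch with a constant true hazard (the study's multilevel logistic formulation adds covariates and school-level clustering; the numbers below are invented):

```python
import numpy as np

rng = np.random.default_rng(2)
n, T = 5000, 6
h_true = 0.2  # assumed constant per-period probability of a second referral

event_period = rng.geometric(h_true, size=n)   # period of the event: 1, 2, ...
censored = event_period > T                    # still event-free at end of follow-up

hazard = []
for t in range(1, T + 1):
    at_risk = (event_period >= t).sum()        # risk set: no event before period t
    events = (event_period == t).sum()
    hazard.append(events / at_risk)
```

Each `hazard[t-1]` estimates the conditional probability of a second referral in period t given survival to t, which is exactly the quantity a discrete-time hazard model expresses as a function of covariates.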

  10. Examining the Link Between Public Transit Use and Active Commuting

    Directory of Open Access Journals (Sweden)

    Melissa Bopp

    2015-04-01

    Background: An established relationship exists between public transportation (PT) use and physical activity. However, there is limited literature that examines the link between PT use and active commuting (AC) behavior. This study examines this link to determine if PT users commute more by active modes. Methods: A volunteer, convenience sample of adults (n = 748) completed an online survey about AC/PT patterns and demographic, psychosocial, community and environmental factors. t-tests compared differences between PT riders and non-PT riders. Binary logistic regression analyses examined the effect of multiple factors on AC, and a full logistic regression model was conducted to examine AC. Results: Non-PT riders (n = 596) reported less AC than PT riders. There were several significant relationships with AC for demographic, interpersonal, worksite, community and environmental factors when considering PT use. The full multivariate logistic model included age, number of children and perceived distance to work as negative predictors, and PT use, perceiving bad weather and a lack of on-street bike lanes as barriers to AC, perceived behavioral control, and spouse AC as positive predictors. Conclusions: This study revealed the complex relationship between AC and PT use. Further research should investigate how AC and public transit use are related.

  11. Segmental Musculoskeletal Examinations using Dual-Energy X-Ray Absorptiometry (DXA): Positioning and Analysis Considerations.

    Science.gov (United States)

    Hart, Nicolas H; Nimphius, Sophia; Spiteri, Tania; Cochrane, Jodie L; Newton, Robert U

    2015-09-01

    Musculoskeletal examinations provide informative and valuable quantitative insight into muscle and bone health. DXA is one mainstream tool used to accurately and reliably determine body composition components and bone mass characteristics in vivo. Presently, whole-body scan models separate the body into axial and appendicular regions; however, there is a need for localised appendicular segmentation models to further examine regions of interest within the upper and lower extremities. Similarly, inconsistencies pertaining to patient positioning exist in the literature which influence measurement precision and analysis outcomes, highlighting the need for a standardised procedure. This paper provides standardised and reproducible: 1) positioning and analysis procedures using DXA and 2) reliable segmental examinations through descriptive appendicular boundaries. Whole-body scans were performed on forty-six (n = 46) football athletes (age: 22.9 ± 4.3 yrs; height: 1.85 ± 0.07 m; weight: 87.4 ± 10.3 kg; body fat: 11.4 ± 4.5 %) using DXA. All segments across all scans were analysed three times by the main investigator on three separate days, and by three independent investigators a week following the original analysis. To examine intra-rater (between-day) and inter-rater (between-researcher) reliability, coefficients of variation (CV) and intraclass correlation coefficients (ICC) were determined. The positioning and segmental analysis procedures presented in this study produced very high, nearly perfect intra-tester (CV ≤ 2.0%; ICC ≥ 0.988) and inter-tester (CV ≤ 2.4%; ICC ≥ 0.980) reliability, demonstrating excellent reproducibility within and between practitioners. Standardised examinations of axial and appendicular segments are necessary. Future studies aiming to quantify and report segmental analyses of the upper- and lower-body musculoskeletal properties using whole-body DXA scans are encouraged to use the patient positioning and image analysis procedures outlined in this paper.
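The reliability statistics reported here (CV and ICC) can be reproduced in a few lines. A hedged sketch with simulated repeat analyses, using a one-way random-effects ICC(1,1); the abstract does not specify which ICC form the authors used, and the error magnitudes below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n_subj, k = 46, 3
true = rng.normal(50, 10, size=(n_subj, 1))         # subject-level segment mass (arbitrary units)
meas = true + rng.normal(0, 0.5, size=(n_subj, k))  # small re-analysis error (assumed)

# Mean within-subject coefficient of variation, in percent
cv = (meas.std(axis=1, ddof=1) / meas.mean(axis=1)).mean() * 100

# One-way ANOVA mean squares, then ICC(1,1) = (MSB - MSW) / (MSB + (k-1) * MSW)
grand = meas.mean()
msb = k * ((meas.mean(axis=1) - grand) ** 2).sum() / (n_subj - 1)
msw = ((meas - meas.mean(axis=1, keepdims=True)) ** 2).sum() / (n_subj * (k - 1))
icc = (msb - msw) / (msb + (k - 1) * msw)
```

With a re-analysis error much smaller than the between-subject spread, the CV stays near 1% and the ICC near 1, mirroring the thresholds (CV ≤ 2.0%, ICC ≥ 0.988) quoted in the abstract.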

  12. Image analysis for remote examination of fuel pins

    International Nuclear Information System (INIS)

    Cook, J.H.; Nayak, U.P.

    1982-01-01

    An image analysis system operating in the Wing 9 Hot Cell Facility at Los Alamos National Laboratory provides quantitative microstructural analyses of irradiated fuels and materials. With this system, fewer photomicrographs are required during postirradiation microstructural examination and data are available for analysis much faster. The system has been used successfully to examine Westinghouse Advanced Reactors Division experimental fuel pins

  13. Examination of the interaction of different lighting conditions and chronic mild stress in animal model.

    Science.gov (United States)

    Muller, A; Gal, N; Betlehem, J; Fuller, N; Acs, P; Kovacs, G L; Fusz, K; Jozsa, R; Olah, A

    2015-09-01

    We examined the effects of different shift work schedules and chronic mild stress (CMS) on mood using an animal model. The most common international shift work schedules in nursing were applied to three groups of Wistar rats, with a control group kept on a normal light-dark cycle. One subgroup from each group was subjected to CMS. Levels of anxiety and emotional life were evaluated in a light-dark box. Differences between the groups according to independent and dependent variables were examined with one- and two-way analysis of variance, with a significance level defined at p < 0.05.

  14. Comparison of Numerical Analyses with a Static Load Test of a Continuous Flight Auger Pile

    Directory of Open Access Journals (Sweden)

    Hoľko Michal

    2014-12-01

    The article deals with numerical analyses of a Continuous Flight Auger (CFA) pile. The analyses include a comparison of calculated and measured load-settlement curves as well as a comparison of the load distribution over a pile's length. The numerical analyses were executed using two types of software, i.e., Ansys and Plaxis, which are based on FEM calculations. Both types of software differ from each other in the way they create numerical models, model the interface between the pile and soil, and use constitutive material models. The analyses have been prepared in the form of a parametric study, where the method of modelling the interface and the material models of the soil are compared and analysed.

  15. An examination of the factors affecting consumer’s purchase decision in the Malaysian retail market

    OpenAIRE

    Jalal Rajeh Hanaysha

    2018-01-01

    Purpose – The purpose of this paper is to examine the effects of corporate social responsibility, social media marketing, sales promotion, store environment and perceived value on a purchase decision in the retail sector. Design/methodology/approach – A quantitative research methodology was used and the data were collected from 278 customers of retail stores in Malaysia. The collected data were analysed using SPSS 19 and structural equation modelling on AMOS. Findings – The findings showed th...

  16. Educational productivity in higher education : An examination of part of the Walberg Educational Productivity Model

    NARCIS (Netherlands)

    Bruinsma, M.; Jansen, E. P. W. A.

    Several factors in the H. J. Walberg Educational Productivity Model, which assumes that 9 factors affect academic achievement, were examined with a limited sample of 1st-year students in the University of Groningen. Information concerning 8 of these factors - grades, motivation, age, prior

  17. Computational analyses of synergism in small molecular network motifs.

    Directory of Open Access Journals (Sweden)

    Yili Zhang

    2014-03-01

    Cellular functions and responses to stimuli are controlled by complex regulatory networks that comprise a large diversity of molecular components and their interactions. However, achieving an intuitive understanding of the dynamical properties and responses to stimuli of these networks is hampered by their large scale and complexity. To address this issue, analyses of regulatory networks often focus on reduced models that depict distinct, reoccurring connectivity patterns referred to as motifs. Previous modeling studies have begun to characterize the dynamics of small motifs and to describe ways in which variations in parameters affect their responses to stimuli. The present study investigates how variations in pairs of parameters affect responses in a series of ten common network motifs, identifying concurrent variations that act synergistically (or antagonistically) to alter the responses of the motifs to stimuli. Synergism (or antagonism) was quantified using degrees of nonlinear blending and additive synergism. Simulations identified concurrent variations that maximized synergism, and examined the ways in which it was affected by stimulus protocols and the architecture of a motif. Only a subset of architectures exhibited synergism following paired changes in parameters. The approach was then applied to a model describing interlocked feedback loops governing the synthesis of the CREB1 and CREB2 transcription factors. The effects of motifs on synergism for this biologically realistic model were consistent with those for the abstract models of single motifs. These results have implications for the rational design of combination drug therapies with the potential for synergistic interactions.
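The additive-synergism measure can be illustrated on a toy two-stage motif: compare the response to a joint parameter change against the sum of the single-parameter effects. This is a sketch of the concept only; the paper's motifs, response functions and parameter values differ:

```python
def response(k1, k2):
    # Toy steady-state output of a two-stage motif: each parameter
    # scales one saturating activation stage (hyperbolic, Hill-like)
    s = 1.0                            # fixed stimulus
    x = (k1 * s) / (1 + k1 * s)        # first stage
    return (k2 * x) / (1 + k2 * x)     # second stage

base = response(1.0, 1.0)
da = response(2.0, 1.0) - base         # effect of doubling k1 alone
db = response(1.0, 2.0) - base         # effect of doubling k2 alone
dab = response(2.0, 2.0) - base        # joint effect of doubling both

additive_synergism = dab - (da + db)   # > 0: synergistic; < 0: antagonistic
```

For this particular toy motif the joint change exceeds the sum of the individual changes, i.e. the paired variation is (weakly) synergistic.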

  18. GOTHIC MODEL OF BWR SECONDARY CONTAINMENT DRAWDOWN ANALYSES

    International Nuclear Information System (INIS)

    Hansen, P.N.

    2004-01-01

    This article introduces a GOTHIC version 7.1 model of the secondary containment reactor building post-LOCA drawdown analysis for a BWR. GOTHIC is an EPRI-sponsored thermal hydraulic code. This analysis is required by the Utility to demonstrate an ability to restore and maintain the Secondary Containment Reactor Building negative pressure condition. The technical and regulatory issues associated with this modeling are presented. The analysis includes the effects of wind, elevation and thermal impacts on pressure conditions. The model uses a multiple-volume representation which includes the spent fuel pool. In addition, heat sources and sinks are modeled as one-dimensional heat conductors. The leakage into the building is modeled to include both laminar and turbulent behavior, as established by actual plant test data. The GOTHIC code provides components to model the heat exchangers used for fuel pool cooling as well as area cooling via air coolers. The results of the evaluation are used to determine the period during which the Reactor Building pressure exceeds external conditions. This time period is established with the GOTHIC model based on the worst-case pressure conditions on the building. For this time period the Utility must assume the primary containment leakage goes directly to the environment. Once the building pressure is restored below outside conditions, the release to the environment can be credited as a filtered release.

  19. DCH analyses using the CONTAIN code

    International Nuclear Information System (INIS)

    Hong, Sung Wan; Kim, Hee Dong

    1996-08-01

    This report describes CONTAIN analyses performed during participation in the project 'DCH issue resolution for ice condenser plants', which is sponsored by NRC at SNL. Even though the calculations were performed for the ice condenser plant, the CONTAIN code has been used for analyses of many phenomena in the PWR containment, and the DCH module can be applied to any plant type. The present ice condenser issue resolution effort was intended to provide guidance as to what might be needed to resolve DCH for ice condenser plants. It includes both a screening analysis and a scoping study if the screening analysis cannot provide a complete resolution. The following are the results concerning DCH loads, in descending order: 1. Availability of ignition sources prior to vessel breach; 2. Availability and effectiveness of ice in the ice condenser; 3. Loads modeling uncertainties related to co-ejected RPV water; 4. Other loads modeling uncertainties. 10 tabs., 3 figs., 14 refs. (Author)

  20. DCH analyses using the CONTAIN code

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Sung Wan; Kim, Hee Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-08-01

    This report describes CONTAIN analyses performed during participation in the project 'DCH issue resolution for ice condenser plants', which is sponsored by NRC at SNL. Even though the calculations were performed for the ice condenser plant, the CONTAIN code has been used for analyses of many phenomena in the PWR containment, and the DCH module can be applied to any plant type. The present ice condenser issue resolution effort was intended to provide guidance as to what might be needed to resolve DCH for ice condenser plants. It includes both a screening analysis and a scoping study if the screening analysis cannot provide a complete resolution. The following are the results concerning DCH loads, in descending order: 1. Availability of ignition sources prior to vessel breach; 2. Availability and effectiveness of ice in the ice condenser; 3. Loads modeling uncertainties related to co-ejected RPV water; 4. Other loads modeling uncertainties. 10 tabs., 3 figs., 14 refs. (Author).

  1. Patient dose optimisation in cardiology during fluoroscopy examinations

    International Nuclear Information System (INIS)

    Verdun, F.R.; Valley, J.F.; Wicky, S.; Narbel, M.; Schnyder, P.

    2001-01-01

    Data from 1200 cardiac examinations recorded during the past ten months have been analysed. The DAPs obtained for most of the examinations are comparable to the published data. Moreover, an excellent correlation has been found between high DAP values and the experience of the operator. DAP measurements for 'high dose examinations' are becoming mandatory in several countries, and medical physicists should help the physicians to interpret these measurements in order to improve the safety of ionising radiation use. In our Centre it appeared that for their first examinations physicians should be more closely guided by seniors. (author)

  2. Analyses and testing of model prestressed concrete reactor vessels with built-in planes of weakness

    International Nuclear Information System (INIS)

    Dawson, P.; Paton, A.A.; Fleischer, C.C.

    1990-01-01

    This paper describes the design, construction, analyses and testing of two small scale, single cavity prestressed concrete reactor vessel models, one without planes of weakness and one with planes of weakness immediately behind the cavity liner. This work was carried out to extend a previous study which had suggested the likely feasibility of constructing regions of prestressed concrete reactor vessels and biological shields, which become activated, using easily removable blocks, separated by a suitable membrane. The paper describes the results obtained and concludes that the planes of weakness concept could offer a means of facilitating the dismantling of activated regions of prestressed concrete reactor vessels, biological shields and similar types of structure. (author)

  3. Weather-Driven Variation in Dengue Activity in Australia Examined Using a Process-Based Modeling Approach

    Science.gov (United States)

    Bannister-Tyrrell, Melanie; Williams, Craig; Ritchie, Scott A.; Rau, Gina; Lindesay, Janette; Mercer, Geoff; Harley, David

    2013-01-01

    The impact of weather variation on dengue transmission in Cairns, Australia, was determined by applying a process-based dengue simulation model (DENSiM) that incorporated local meteorologic, entomologic, and demographic data. Analysis showed that inter-annual weather variation is one of the significant determinants of dengue outbreak receptivity. Cross-correlation analyses showed that DENSiM simulated epidemics of similar relative magnitude and timing to those historically recorded in reported dengue cases in Cairns during 1991–2009 (r = 0.372, P < 0.01). The DENSiM model can now be used to study the potential impacts of future climate change on dengue transmission. Understanding the impact of climate variation on the geographic range, seasonality, and magnitude of dengue transmission will enhance development of adaptation strategies to minimize future disease burden in Australia. PMID:23166197
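The reported agreement is a Pearson cross-correlation between simulated and observed case series, and the calculation itself is one line. A toy example with two noisy series sharing a seasonal signal (illustrative values only, not the DENSiM output):

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(200)
seasonal = np.sin(2 * np.pi * t / 52)                        # shared annual cycle (weekly steps)
observed = seasonal + rng.normal(scale=0.3, size=t.size)     # toy reported-case series
simulated = seasonal + rng.normal(scale=0.3, size=t.size)    # toy model output

# Pearson correlation at zero lag; shifting one series first gives the lagged version
r = np.corrcoef(observed, simulated)[0, 1]
```

With a shared signal variance of 0.5 and noise variance of 0.09 per series, r is expected near 0.5 / 0.59 ≈ 0.85; lagged cross-correlations are obtained by applying `np.roll` (or slicing) to one series before `np.corrcoef`.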

  4. Model-based performance and energy analyses of reverse osmosis to reuse wastewater in a PVC production site.

    Science.gov (United States)

    Hu, Kang; Fiedler, Thorsten; Blanco, Laura; Geissen, Sven-Uwe; Zander, Simon; Prieto, David; Blanco, Angeles; Negro, Carlos; Swinnen, Nathalie

    2017-11-10

    A pilot-scale reverse osmosis (RO) stage downstream of a membrane bioreactor (MBR) was developed to desalinate and reuse wastewater in a PVC production site. The solution-diffusion-film model (SDFM), based on the solution-diffusion model (SDM) and film theory, was proposed to describe rejections of electrolyte mixtures in the MBR effluent, which consists of dominant ions (Na⁺ and Cl⁻) and several trace ions (Ca²⁺, Mg²⁺, K⁺ and SO₄²⁻). The universal global optimisation method was used to estimate the ion permeability coefficients (B) and mass transfer coefficients (K) in the SDFM. The membrane performance was then evaluated based on the estimated parameters, which demonstrated that the theoretical simulations were in line with the experimental results for the dominant ions. Moreover, an energy analysis model accounting for the limitation imposed by the thermodynamic restriction was proposed to analyse the specific energy consumption of the pilot-scale RO system in various scenarios.
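Combining the solution-diffusion flux law with film-theory concentration polarization yields a closed-form observed rejection, R_o = J_w / (J_w + B·exp(J_w/K)): rejection rises with water flux, while the exponential polarization term works against it. A sketch with assumed illustrative values of B and K (not the coefficients fitted in the paper):

```python
import math

def observed_rejection(jw, B, K):
    """Observed rejection under the solution-diffusion-film model (SDFM).

    jw : water flux (m/s); B : ion permeability coefficient (m/s);
    K  : mass transfer coefficient (m/s).
    Derived from Js = B*(Cm - Cp) with Cm - Cp = (Cb - Cp)*exp(jw/K)
    and Cp = Js/jw, giving Ro = 1 - Cp/Cb = jw / (jw + B*exp(jw/K)).
    """
    return jw / (jw + B * math.exp(jw / K))

# Illustrative parameter values only (assumed, not from the study)
B, K = 1e-7, 2e-5
low = observed_rejection(5e-6, B, K)    # low-flux operating point
high = observed_rejection(2e-5, B, K)   # higher-flux operating point
```

In this flux range rejection improves with flux; at much higher fluxes the exp(jw/K) polarization term eventually erodes the gain, which is why the fitted K matters for extrapolation.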

  5. Comparison of Numerical Analyses with a Static Load Test of a Continuous Flight Auger Pile

    Science.gov (United States)

    Hoľko, Michal; Stacho, Jakub

    2014-12-01

    The article deals with numerical analyses of a Continuous Flight Auger (CFA) pile. The analyses include a comparison of calculated and measured load-settlement curves as well as a comparison of the load distribution over the pile's length. The numerical analyses were executed using two FEM-based programs, Ansys and Plaxis. The two programs differ in how they create numerical models, how they model the interface between the pile and the soil, and in the constitutive material models they use. The analyses were prepared as a parametric study in which the method of modelling the interface and the material models of the soil are compared and analysed. Our analyses show that both programs permit the modelling of pile foundations. Plaxis offers advanced material models and can model the effects of groundwater and overconsolidation. The load-settlement curve calculated using Plaxis matches the results of the static load test with more than 95 % accuracy. The curve calculated using Ansys yields only an approximate estimate, but Ansys allows large structural systems to be modelled together with their foundation system.

  6. Examining the Relationship between Technological Pedagogical Content Knowledge (TPACK) and Student Achievement Utilizing the Florida Value-Added Model

    Science.gov (United States)

    Farrell, Ivan K.; Hamed, Kastro M.

    2017-01-01

    Utilizing a correlational research design, we sought to examine the relationship between the technological pedagogical content knowledge (TPACK) of in-service teachers and student achievement measured with each individual teacher's Value-Added Model (VAM) score. The TPACK survey results and a teacher's VAM score were also examined, separately,…

  7. Examining the link between patient satisfaction and adherence to HIV care: a structural equation model.

    Directory of Open Access Journals (Sweden)

    Bich N Dang

    Full Text Available INTRODUCTION: Analogous to the business model of customer satisfaction and retention, patient satisfaction could serve as an innovative, patient-centered focus for increasing retention in HIV care and adherence to HAART, and ultimately HIV suppression. OBJECTIVE: To test, through structural equation modeling (SEM), a model of HIV suppression in which patient satisfaction influences HIV suppression indirectly through retention in HIV care and adherence to HAART. METHODS: We conducted a cross-sectional study of adults receiving HIV care at two clinics in Texas. Patient satisfaction was based on two validated items, one adapted from the Consumer Assessment of Healthcare Providers and Systems survey ("Would you recommend this clinic to other patients with HIV?") and one adapted from the Delighted-Terrible Scale ("Overall, how do you feel about the care you got at this clinic in the last 12 months?"). A validated, single-item question measured adherence to HAART over the past 4 weeks. Retention in HIV care was based on visit constancy in the year prior to the survey. HIV suppression was defined as plasma HIV RNA <48 copies/mL at the time of the survey. We used SEM to test hypothesized relationships. RESULTS: The analyses included 489 patients (94% of eligible patients). The patient satisfaction score had a mean of 8.5 (median 9.2) on a 0- to 10-point scale. A total of 46% reported "excellent" adherence, 76% had adequate retention, and 70% had HIV suppression. In SEM analyses, patient satisfaction with care influenced retention in HIV care and adherence to HAART, which in turn served as key determinants of HIV suppression (all p<.0001). CONCLUSIONS: Patient satisfaction may have direct effects on retention in HIV care and adherence to HAART. Interventions to improve the care experience, without necessarily targeting objective clinical performance measures, could serve as an innovative method for optimizing HIV outcomes.

  8. Human event observations in the individual plant examinations

    International Nuclear Information System (INIS)

    Lois, E.; Forester, J.

    1994-01-01

    A main objective of the Nuclear Regulatory Commission's (NRC) Individual Plant Examination (IPE) Insights Program is to document significant safety insights relative to core damage frequency (CDF) for the different reactor and containment types and plant designs as indicated in the IPEs. The Human Reliability Analysis (HRA) is a critical component of the Probabilistic Risk Assessments (PRAs) which were done for the IPEs. The determination and selection of human actions for incorporation into the event and fault tree models, and the quantification of their failure probabilities, can have an important impact on the resulting estimates of CDF and risk. Therefore, two important goals of the NRC's IPE Insights Program are (1) to determine the extent to which human actions and their corresponding failure probabilities influenced the results of the IPEs and (2) to identify which factors played significant roles in determining the differences and similarities in the results of the HRA analyses across the different plants. To obtain the relevant information, the NRC's IPE database, which contains information on plant design, CDF, and containment performance obtained from the IPEs, was used in conjunction with a systematic examination of the HRA results from the IPEs.

  9. Theory of Planned Behavior in the Classroom: An Examination of the Instructor Confirmation-Interaction Model

    Science.gov (United States)

    Burns, Michael E.; Houser, Marian L.; Farris, Kristen LeBlanc

    2018-01-01

    The current study utilizes the theory of planned behavior (Ajzen, 1991, Organizational Behavior and Human Decision Processes, 50, 179-211) to examine an instructor confirmation-interaction model in the instructional communication context to discover a means by which instructors might cultivate positive student attitudes and…

  10. Using the Single Prolonged Stress Model to Examine the Pathophysiology of PTSD

    Directory of Open Access Journals (Sweden)

    Rimenez R. Souza

    2017-09-01

    Full Text Available The endurance of memories of emotionally arousing events serves the adaptive role of minimizing future exposure to danger and reinforcing rewarding behaviors. However, following a traumatic event, a subset of individuals suffers from persistent pathological symptoms such as those seen in posttraumatic stress disorder (PTSD). Despite the availability of pharmacological treatments and evidence-based cognitive behavioral therapy, a considerable number of PTSD patients do not respond to treatment, or show only partial remission and relapse of symptoms. In controlled laboratory studies, PTSD patients show a deficient ability to extinguish conditioned fear. Failure to extinguish learned fear could be responsible for the persistence of PTSD symptoms such as elevated anxiety, arousal, and avoidance. It may also explain the high non-response and dropout rates seen during treatment. Animal models are useful for understanding the pathophysiology of the disorder and the development of new treatments. This review examines studies in a rodent model of PTSD with the goal of identifying behavioral and physiological factors that predispose individuals to PTSD symptoms. Single prolonged stress (SPS) is a frequently used rat model of PTSD that involves exposure to several successive stressors. SPS rats show PTSD-like symptoms, including impaired extinction of conditioned fear. Since its development by the Liberzon lab in 1997, the SPS model has been cited in more than 200 published papers. Here we consider the findings of these studies and unresolved questions that may be investigated using the model.

  11. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    Science.gov (United States)

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include models commonly used in undergraduate statistics courses, such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons: the t-test in the parametric category, and the Wilcoxon rank-sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis-test models, such as contingency tables, Friedman's test, and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model and general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least-squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is ongoing and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for the most recent updates.
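
    SOCR Analyses itself is implemented in Java; as a language-neutral illustration of one of the non-parametric procedures it lists, the Kruskal-Wallis H statistic (without tie correction) can be sketched as:

```python
def kruskal_wallis_h(*groups):
    """Kruskal-Wallis H statistic (no tie correction) for k samples."""
    pooled = sorted((v, gi) for gi, g in enumerate(groups) for v in g)
    n_total = len(pooled)
    rank_sums = [0.0] * len(groups)
    for rank, (_, gi) in enumerate(pooled, start=1):
        rank_sums[gi] += rank
    return 12.0 / (n_total * (n_total + 1)) * sum(
        rs ** 2 / len(g) for rs, g in zip(rank_sums, groups)
    ) - 3 * (n_total + 1)

# Fully separated groups give the maximum H for this layout
h = kruskal_wallis_h([1, 2, 3], [4, 5, 6], [7, 8, 9])
```

    Under the null hypothesis H is approximately chi-squared with k-1 degrees of freedom, which is how a tool like SOCR Analyses would convert it to a p-value.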

  12. Examining the Competition for Forest Resources in Sweden Using Factor Substitution Analysis and Partial Equilibrium Modelling

    Energy Technology Data Exchange (ETDEWEB)

    Olsson, Anna

    2011-07-01

    The overall objective of the thesis is to analyse the procurement competition for forest resources in Sweden. The thesis consists of an introductory part and two self-contained papers. In paper I a translog cost function approach is used to analyse factor substitution in the sawmill industry, the pulp and paper industry and the heating industry in Sweden over the period 1970 to 2008. The estimated parameters are used to calculate the Allen and Morishima elasticities of substitution as well as the price elasticities of input demand. The utilisation of forest resources in the energy sector has been increasing, and this increase is believed to continue. The increase is, to a large extent, caused by economic policies introduced to reduce the emission of greenhouse gases. Such policies could lead to an increase in the procurement competition between the forest industries and the energy sector. The calculated substitution elasticities indicate that it is easier for the heating industry to substitute between by-products and logging residues than it is for the pulp and paper industry to substitute between by-products and roundwood. This suggests that the pulp and paper industry could suffer from an increase in procurement competition. However, overall the substitution elasticities estimated in our study are relatively low. This indicates that substitution possibilities could be rather limited due to rigidities in input prices, which suggests that competition for forest resources might also be relatively limited. In paper II a partial equilibrium model is constructed in order to assess the effects of an increasing utilisation of forest resources in the energy sector. The increasing utilisation of forest fuel is, to a large extent, caused by economic policies introduced to reduce the emission of greenhouse gases. In countries where forests are already highly utilised, such policies will lead to an increase in the procurement competition between the forest sector and the energy sector.
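
    The Allen and Morishima elasticities mentioned in paper I follow standard translog formulas built from the estimated second-order coefficients and the cost shares. A sketch for a two-input case with made-up numbers (not the thesis' estimates):

```python
# gamma: translog second-order coefficients; s: input cost shares.
# Standard Allen-Uzawa and Morishima formulas for a translog cost function.

def allen_elasticity(gamma, s, i, j):
    if i == j:
        return (gamma[i][i] + s[i] * (s[i] - 1)) / s[i] ** 2
    return (gamma[i][j] + s[i] * s[j]) / (s[i] * s[j])

def price_elasticity(gamma, s, i, j):
    # Cross-price elasticity of demand for input i w.r.t. price of input j
    return s[j] * allen_elasticity(gamma, s, i, j)

def morishima_elasticity(gamma, s, i, j):
    # Morishima elasticity of substitution of input i for input j
    return price_elasticity(gamma, s, i, j) - price_elasticity(gamma, s, j, j)

gamma = [[0.05, -0.05], [-0.05, 0.05]]  # illustrative coefficients
s = [0.6, 0.4]                          # illustrative cost shares
aes = allen_elasticity(gamma, s, 0, 1)
```

    A positive cross elasticity marks the inputs as substitutes; values well below one, as reported in the thesis, indicate limited substitution possibilities.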

  13. Using FOSM-Based Data Worth Analyses to Design Geophysical Surveys to Reduce Uncertainty in a Regional Groundwater Model Update

    Science.gov (United States)

    Smith, B. D.; White, J.; Kress, W. H.; Clark, B. R.; Barlow, J.

    2016-12-01

    Hydrogeophysical surveys have become an integral part of understanding the hydrogeological frameworks used in groundwater models. Regional models cover large areas where water-well data are, at best, scattered and irregular. Since budgets are finite, priorities must be assigned to select optimal areas for geophysical surveys. For airborne electromagnetic (AEM) geophysical surveys, optimization of mapping depth and line spacing needs to take into account the objectives of the groundwater models. The approach discussed here uses a first-order, second-moment (FOSM) uncertainty analysis, which assumes an approximately linear relation between model parameters and observations. This assumption allows FOSM analyses to estimate the value of increased parameter knowledge for reducing forecast uncertainty, and thus to optimize yet-to-be-completed geophysical surveying. The main objective of the geophysical surveying is assumed to be estimating the values and spatial variation of hydrologic parameters (i.e., hydraulic conductivity), as well as mapping lower-permeability layers that influence the spatial distribution of recharge flux. The proposed data worth analysis was applied to the Mississippi Embayment Regional Aquifer Study (MERAS), which is being updated. The objective of MERAS is to assess the groundwater availability (status and trends) of the Mississippi embayment aquifer system. The study area covers portions of eight states including Alabama, Arkansas, Illinois, Kentucky, Louisiana, Mississippi, Missouri, and Tennessee. The active model grid covers approximately 70,000 square miles, and incorporates some 6,000 miles of major rivers and over 100,000 water wells. In the FOSM analysis, a dense network of pilot points was used to capture uncertainty in hydraulic conductivity and recharge. To simulate the effect of AEM flight lines, the prior uncertainty for hydraulic conductivity and recharge pilot points along potential flight lines was reduced.
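
    The FOSM data-worth calculation propagates prior parameter uncertainty through a linearized model to a forecast variance, then recomputes that variance after conditioning on a prospective observation. A sketch with tiny illustrative matrices (not the MERAS model):

```python
import numpy as np

C = np.diag([1.0, 4.0])   # prior covariance of 2 parameters (e.g. K, recharge)
y = np.array([0.5, 1.0])  # sensitivity of the forecast to the parameters
J = np.array([[1.0, 0.5]])  # sensitivity of one prospective AEM-style observation
R = np.array([[0.1]])       # observation noise variance

prior_var = y @ C @ y       # linearized (FOSM) prior forecast variance

# Posterior covariance via the Schur-complement (Kalman-style) update
C_post = C - C @ J.T @ np.linalg.inv(J @ C @ J.T + R) @ J @ C
post_var = y @ C_post @ y

data_worth = prior_var - post_var  # forecast-variance reduction
```

    Ranking candidate flight lines by this variance reduction is the essence of using FOSM data worth to prioritize yet-to-be-flown surveys.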

  14. Stochastic modeling of wetland-groundwater systems

    Science.gov (United States)

    Bertassello, Leonardo Enrico; Rao, P. Suresh C.; Park, Jeryang; Jawitz, James W.; Botter, Gianluca

    2018-02-01

    Modeling and data analyses were used in this study to examine the temporal hydrological variability in geographically isolated wetlands (GIWs), as influenced by hydrologic connectivity to shallow groundwater and wetland bathymetry, and subject to stochastic hydro-climatic forcing. We examined the general case of GIWs coupled to shallow groundwater through exfiltration or infiltration across the wetland bottom, as well as the limiting case in which the wetland stage is the local expression of the shallow groundwater. We derived analytical expressions for the steady-state probability density functions (pdfs) of wetland water storage and stage using a few scaled, physically based parameters. In addition, we analyzed the hydrologic crossing-time properties of wetland stage and the dependence of the mean hydroperiod on climatic and wetland morphologic attributes. Our analyses show that it is crucial to account for shallow groundwater connectivity to fully understand the hydrologic dynamics in wetlands. The application of the model to two different case studies in Florida, jointly with a detailed sensitivity analysis, allowed us to identify the main drivers of hydrologic dynamics in GIWs under different climate and morphologic conditions.

  15. Soil analyses by ICP-MS (Review)

    International Nuclear Information System (INIS)

    Yamasaki, Shin-ichi

    2000-01-01

    Soil analyses by inductively coupled plasma mass spectrometry (ICP-MS) are reviewed. The first half of the paper is devoted to the development of techniques applicable to soil analyses, where diverse analytical parameters are carefully evaluated. However, the choice of soil samples is somewhat arbitrary, and only a limited number of samples (mostly reference materials) are examined. In the second half, efforts are mostly concentrated on the introduction of reports, where a large number of samples and/or very precious samples have been analyzed. Although the analytical techniques used in these reports are not necessarily novel, valuable information concerning such topics as background levels of elements in soils, chemical forms of elements in soils and behavior of elements in soil ecosystems and the environment can be obtained. The major topics discussed are total elemental analysis, analysis of radionuclides with long half-lives, speciation, leaching techniques, and isotope ratio measurements. (author)

  16. Data Assimilation Tools for CO2 Reservoir Model Development – A Review of Key Data Types, Analyses, and Selected Software

    Energy Technology Data Exchange (ETDEWEB)

    Rockhold, Mark L.; Sullivan, E. C.; Murray, Christopher J.; Last, George V.; Black, Gary D.

    2009-09-30

    Pacific Northwest National Laboratory (PNNL) has embarked on an initiative to develop world-class capabilities for performing experimental and computational analyses associated with geologic sequestration of carbon dioxide. The ultimate goal of this initiative is to provide science-based solutions for helping to mitigate the adverse effects of greenhouse gas emissions. This Laboratory-Directed Research and Development (LDRD) initiative currently has two primary focus areas—advanced experimental methods and computational analysis. The experimental methods focus area involves the development of new experimental capabilities, supported in part by the U.S. Department of Energy’s (DOE) Environmental Molecular Science Laboratory (EMSL) housed at PNNL, for quantifying mineral reaction kinetics with CO2 under high temperature and pressure (supercritical) conditions. The computational analysis focus area involves numerical simulation of coupled, multi-scale processes associated with CO2 sequestration in geologic media, and the development of software to facilitate building and parameterizing conceptual and numerical models of subsurface reservoirs that represent geologic repositories for injected CO2. This report describes work in support of the computational analysis focus area. The computational analysis focus area currently consists of several collaborative research projects. These are all geared towards the development and application of conceptual and numerical models for geologic sequestration of CO2. The software being developed for this focus area is referred to as the Geologic Sequestration Software Suite or GS3. A wiki-based software framework is being developed to support GS3. This report summarizes work performed in FY09 on one of the LDRD projects in the computational analysis focus area. The title of this project is Data Assimilation Tools for CO2 Reservoir Model Development. Some key objectives of this project in FY09 were to assess the current state

  17. Assessing models of speciation under different biogeographic scenarios; An empirical study using multi-locus and RNA-seq analyses

    Science.gov (United States)

    Edwards, Taylor; Tollis, Marc; Hsieh, PingHsun; Gutenkunst, Ryan N.; Liu, Zhen; Kusumi, Kenro; Culver, Melanie; Murphy, Robert W.

    2016-01-01

    Evolutionary biology often seeks to decipher the drivers of speciation, and much debate persists over the relative importance of isolation and gene flow in the formation of new species. Genetic studies of closely related species can assess if gene flow was present during speciation, because signatures of past introgression often persist in the genome. We test hypotheses on which mechanisms of speciation drove diversity among three distinct lineages of desert tortoise in the genus Gopherus. These lineages offer a powerful system to study speciation, because different biogeographic patterns (physical vs. ecological segregation) are observed at opposing ends of their distributions. We use 82 samples collected from 38 sites, representing the entire species' distribution and generate sequence data for mtDNA and four nuclear loci. A multilocus phylogenetic analysis in *BEAST estimates the species tree. RNA-seq data yield 20,126 synonymous variants from 7665 contigs from two individuals of each of the three lineages. Analyses of these data using the demographic inference package ∂a∂i serve to test the null hypothesis of no gene flow during divergence. The best-fit demographic model for the three taxa is concordant with the *BEAST species tree, and the ∂a∂i analysis does not indicate gene flow among any of the three lineages during their divergence. These analyses suggest that divergence among the lineages occurred in the absence of gene flow and in this scenario the genetic signature of ecological isolation (parapatric model) cannot be differentiated from geographic isolation (allopatric model).

  18. An examination of the misuse of prescription stimulants among college students using the theory of planned behavior.

    Science.gov (United States)

    Gallucci, Andrew; Martin, Ryan; Beaujean, Alex; Usdan, Stuart

    2015-01-01

    The misuse of prescription stimulants (MPS) is an emergent adverse health behavior among undergraduate college students. However, current research on MPS is largely atheoretical. The purpose of this study was to validate a survey to assess MPS-related theory of planned behavior (TPB) constructs (i.e. attitudes, subjective norms, and perceived behavioral control) and determine the relationship between these constructs, MPS-related risk factors (e.g. gender and class status), and current MPS (i.e. past 30 days use) among college students. Participants (N = 978, 67.8% female and 82.9% Caucasian) at a large public university in the southeastern USA completed a survey assessing MPS and MPS-related TPB constructs during fall 2010. To examine the relationship between MPS-related TPB constructs and current MPS, we conducted (1) confirmatory factor analyses to validate that our survey items assessed MPS-related TPB constructs and (2) a series of regression analyses to examine associations between MPS-related TPB constructs, potential MPS-related risk factors, and MPS in this sample. Our factor analyses indicated that the survey items assessed MPS-related TPB constructs and our multivariate logistic regression analysis indicated that perceived behavioral control was significantly associated with current MPS. In addition, analyses found that having a prescription stimulant was a protective factor against MPS when the model included MPS-related TPB variables.
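
    The multivariate logistic regression reported above relates TPB constructs to a binary outcome (current MPS). A hedged sketch of that kind of model on synthetic data, with one predictor standing in for perceived behavioral control (not the study's actual fit):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
pbc = rng.normal(size=n)            # stand-in for perceived behavioral control
true_logit = -1.0 + 1.2 * pbc       # data-generating model
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Logistic regression fit by gradient descent on the mean log-loss
X = np.column_stack([np.ones(n), pbc])
w = np.zeros(2)
for _ in range(3000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 1.0 * X.T @ (p - y) / n    # gradient step

intercept, slope = w                # estimates near the generating values
```

    A significantly positive slope is the analogue of the study's finding that perceived behavioral control was associated with current MPS.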

  19. In Search of Black Swans: Identifying Students at Risk of Failing Licensing Examinations.

    Science.gov (United States)

    Barber, Cassandra; Hammond, Robert; Gula, Lorne; Tithecott, Gary; Chahine, Saad

    2018-03-01

    To determine which admissions variables and curricular outcomes are predictive of being at risk of failing the Medical Council of Canada Qualifying Examination Part 1 (MCCQE1), how quickly student risk of failure can be predicted, and to what extent predictive modeling is possible and accurate in estimating future student risk. Data from five graduating cohorts (2011-2015), Schulich School of Medicine & Dentistry, Western University, were collected and analyzed using hierarchical generalized linear models (HGLMs). Area under the receiver operating characteristic curve (AUC) was used to evaluate the accuracy of predictive models and determine whether they could be used to predict future risk, using the 2016 graduating cohort. Four predictive models were developed to predict student risk of failure at admissions, year 1, year 2, and pre-MCCQE1. The HGLM analyses identified gender, MCAT verbal reasoning score, two preclerkship course mean grades, and the year 4 summative objective structured clinical examination score as significant predictors of student risk. The predictive accuracy of the models varied. The pre-MCCQE1 model was the most accurate at predicting a student's risk of failing (AUC 0.66-0.93), while the admissions model was not predictive (AUC 0.25-0.47). Key variables predictive of students at risk were found. The predictive models developed suggest, while it is not possible to identify student risk at admission, we can begin to identify and monitor students within the first year. Using such models, programs may be able to identify and monitor students at risk quantitatively and develop tailored intervention strategies.
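
    The AUC used to grade these predictive models can be computed directly from risk scores via the rank-sum (Mann-Whitney) identity. A minimal sketch with toy scores, not the study's data:

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum identity.
    scores: predicted risk; labels: 1 = failed exam, 0 = passed."""
    pairs = sorted(zip(scores, labels))
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    # Ranks start at 1; ties are handled crudely (fine for illustration)
    rank_sum = sum(r for r, (_, lab) in enumerate(pairs, start=1) if lab == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

scores = [0.9, 0.8, 0.3, 0.2, 0.7, 0.1]
labels = [1, 1, 0, 0, 1, 0]
a = auc(scores, labels)  # perfectly separated scores
```

    An AUC near 0.5 (like the admissions model's 0.25-0.47 range) means the scores rank at-risk students no better than chance, while values approaching 1 (the pre-MCCQE1 model) indicate strong discrimination.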

  20. Modelling the geographic distribution of wind power and the impact on transmission needs

    DEFF Research Database (Denmark)

    Østergaard, Poul Alberg

    2003-01-01

    Through energy systems modelling, transmission systems modelling and geographical modelling, the article examines the sensitivity of the response of the transmission system to the geographic distributions of wind power and in particular the sensitivity of the results to the accuracy...... of the distributed modelled. The results show that accuracy of the geographic modelling while important for the analysis of specific single transmission lines is not important for the analysis of the general response of the transmission system. The analyses thus corroborate previous analyses that demonstrated...

  1. Examining a conceptual model of parental nurturance, parenting practices and physical activity among 5-6 year olds.

    Science.gov (United States)

    Sebire, Simon J; Jago, Russell; Wood, Lesley; Thompson, Janice L; Zahra, Jezmond; Lawlor, Deborah A

    2016-01-01

    Parenting is an often-studied correlate of children's physical activity; however, there is little research examining the associations between parenting styles, practices and the physical activity of younger children. This study aimed to investigate whether physical activity-based parenting practices mediate the association between parenting styles and 5-6 year-old children's objectively-assessed physical activity. 770 parents self-reported parenting style (nurturance and control) and physical activity-based parenting practices (logistic and modeling support). Their 5-6 year old child wore an accelerometer for five days to measure moderate-to-vigorous physical activity (MVPA). Linear regression was used to examine direct and indirect (mediation) associations. Data were collected in the United Kingdom in 2012/13 and analyzed in 2014. Parent nurturance was positively associated with provision of modeling (adjusted unstandardized coefficient, β = 0.11; 95% CI = 0.02, 0.21) and logistic support (β = 0.14; 0.07, 0.21). Modeling support was associated with greater child MVPA (β = 2.41; 0.23, 4.60) and a small indirect path from parent nurturance to child's MVPA was identified (β = 0.27; 0.04, 0.70). Physical activity-based parenting practices are more strongly associated with 5-6 year old children's MVPA than parenting styles. Further research examining conceptual models of parenting is needed to understand in more depth the possible antecedents to adaptive parenting practices beyond parenting styles. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
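
    The indirect (mediation) path tested here, from nurturance through modeling support to MVPA, is classically estimated as the product of an a-path and a b-path coefficient. A sketch on synthetic data with made-up effect sizes, not the study's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
nurturance = rng.normal(size=n)                    # X (parenting style)
modeling = 0.5 * nurturance + rng.normal(size=n)   # M (parenting practice)
mvpa = 0.4 * modeling + 0.1 * nurturance + rng.normal(size=n)  # Y

# a-path: M ~ X
a = np.polyfit(nurturance, modeling, 1)[0]
# b-path: Y ~ X + M (coefficient on M, controlling for X)
Xmat = np.column_stack([np.ones(n), nurturance, modeling])
b = np.linalg.lstsq(Xmat, mvpa, rcond=None)[0][2]

indirect = a * b  # product-of-coefficients estimate of the indirect effect
```

    Here the generating indirect effect is 0.5 × 0.4 = 0.2, and the estimate should land near it up to sampling error; in practice a bootstrap confidence interval would accompany the point estimate, as the study's CI does.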

  2. Coping with examinations: exploring relationships between students' coping strategies, implicit theories of ability, and perceived control.

    Science.gov (United States)

    Doron, Julie; Stephan, Yannick; Boiché, Julie; Le Scanff, Christine

    2009-09-01

    Relatively little is known about the contribution of students' beliefs regarding the nature of academic ability (i.e. their implicit theories) to the strategies they use to deal with examinations. This study applied Dweck's socio-cognitive model of achievement motivation to better understand how students cope with examinations. It was expected that students' implicit theories of academic ability would be related to their use of particular coping strategies to deal with exam-related stress. Additionally, it was predicted that perceived control over exams acts as a mediator between implicit theories of ability and coping. Four hundred and ten undergraduate students (263 males, 147 females), aged 17 to 26 years (M=19.73, SD=1.46), volunteered for the present study. Students completed measures of coping, implicit theories of academic ability, and perception of control over academic examinations during regular classes in the first term of the university year. Multiple regression analyses revealed that incremental beliefs of ability significantly and positively predicted active coping, planning, venting of emotions, and seeking social support for emotional and instrumental reasons, whereas entity beliefs positively predicted behavioural disengagement and negatively predicted active coping and acceptance. In addition, analyses revealed that entity beliefs of ability were related to coping strategies through students' perception of control over academic examinations. These results confirm that exam-related coping varies as a function of students' beliefs about the nature of academic ability and their perceptions of control when approaching examinations.

  3. Performance on the Cardiovascular In-Training Examination in Relation to the ABIM Cardiovascular Disease Certification Examination.

    Science.gov (United States)

    Indik, Julia H; Duhigg, Lauren M; McDonald, Furman S; Lipner, Rebecca S; Rubright, Jonathan D; Haist, Steven A; Botkin, Naomi F; Kuvin, Jeffrey T

    2017-06-13

    The American College of Cardiology In-Training Exam (ACC-ITE) is incorporated into most U.S. training programs, but its relationship to performance on the American Board of Internal Medicine Cardiovascular Disease (ABIM CVD) Certification Examination is unknown. ACC-ITE scores from third-year fellows from 2011 to 2014 (n = 1,918) were examined. Covariates for regression analyses included sex, age, medical school country, U.S. Medical Licensing Examination Step scores, and ABIM Internal Medicine Certification Examination scores. A secondary analysis examined fellows (n = 511) who took the ACC-ITE in the first and third years. ACC-ITE scores were the strongest predictor of ABIM CVD scores, and improvement in ACC-ITE scores from the first to the third year was also a strong predictor of the ABIM CVD score. Performance on the ACC-ITE is strongly associated with performance on the ABIM CVD Certification Examination. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.

  4. Weight-related actual and ideal self-states, discrepancies, and shame, guilt, and pride: examining associations within the process model of self-conscious emotions.

    Science.gov (United States)

    Castonguay, Andree L; Brunet, Jennifer; Ferguson, Leah; Sabiston, Catherine M

    2012-09-01

    The aim of this study was to examine the associations between women's actual:ideal weight-related self-discrepancies and experiences of weight-related shame, guilt, and authentic pride using self-discrepancy (Higgins, 1987) and self-conscious emotion (Tracy & Robins, 2004) theories as guiding frameworks. Participants (N=398) completed self-report questionnaires. Main analyses involved polynomial regressions, followed by the computation and evaluation of response surface values. Actual and ideal weight self-states were related to shame (R2 = .35), guilt (R2 = .25), and authentic pride (R2 = .08). When the discrepancy between actual and ideal weights increased, shame and guilt also increased, while authentic pride decreased. Findings provide partial support for self-discrepancy theory and the process model of self-conscious emotions. Experiencing weight-related self-discrepancies may be important cognitive appraisals related to shame, guilt, and authentic pride. Further research is needed exploring the relations between self-discrepancies and a range of weight-related self-conscious emotions. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. Coupling Longitudinal Data and Multilevel Modeling to Examine the Antecedents and Consequences of Jealousy Experiences in Romantic Relationships: A Test of the Relational Turbulence Model

    Science.gov (United States)

    Theiss, Jennifer A.; Solomon, Denise Haunani

    2006-01-01

    We used longitudinal data and multilevel modeling to examine how intimacy, relational uncertainty, and failed attempts at interdependence influence emotional, cognitive, and communicative responses to romantic jealousy, and how those experiences shape subsequent relationship characteristics. The relational turbulence model (Solomon & Knobloch,…

  6. A model for analysing factors which may influence quality management procedures in higher education

    Directory of Open Access Journals (Sweden)

    Cătălin MAICAN

    2015-12-01

    Full Text Available In all universities, the Office for Quality Assurance defines the procedure for assessing the performance of the teaching staff, with a view to establishing students’ perception as regards the teachers’ activity from the point of view of the quality of the teaching process, of the relationship with the students and of the assistance provided for learning. The present paper aims at creating a combined model for evaluation, based on Data Mining statistical methods: starting from the grades teachers awarded to students, we used cluster analysis and discriminant analysis to identify the subjects which produced significant differences between students’ grades; these subjects were subsequently evaluated by the students. The results of these analyses allowed the formulation of certain measures for enhancing the quality of the evaluation process.

  7. Examining the Determinants of China’s Inward FDI Using Grey Matrix Relational Analysis Model

    Directory of Open Access Journals (Sweden)

    Hang JIANG

    2017-12-01

    Full Text Available Grey relational analysis (GRA) is an important part of grey system theory, used to ascertain the relational grade between an influential factor and the major behavior factor. Most GRA models are applied to settings in which the behavior factor and influential factors are cross-sectional or time-series data within a given system. However, because panel data contain rich information on both individual and time characteristics, the traditional GRA model cannot be applied to panel data analysis. To overcome this drawback, the grey matrix relational analysis model measures the similarity of panel data along the two dimensions of individual and time, on the basis of the definition of the matrix sequence of a discrete data sequence. This paper examines the determinants of inward foreign direct investment (IFDI) in China using the grey matrix relational analysis model. The study finds that GDP per capita, enrollment of regular institutions of higher education, and internal expenditure on R&D are the key factors of IFDI.
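
    For context, the classical point-wise form of Deng's grey relational grade (the building block that the matrix/panel extension generalises) can be sketched as follows. The IFDI and factor series below are invented for illustration only:

```python
import numpy as np

def grey_relational_grade(reference, factor, rho=0.5):
    """Deng's point-wise grey relational grade between a behaviour (reference)
    series and one influential-factor series; rho is the distinguishing
    coefficient, conventionally 0.5."""
    def rescale(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min())   # range normalisation
    r, f = rescale(reference), rescale(factor)
    delta = np.abs(r - f)                            # point-wise deviations
    coeff = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return float(coeff.mean())

# invented series for illustration
ifdi = [10, 12, 15, 19, 25, 30]   # inward FDI (behaviour factor)
gdp  = [20, 24, 31, 38, 51, 60]   # tracks IFDI closely
edu  = [5, 5, 9, 6, 12, 8]        # noisier influential factor
grades = {"gdp": grey_relational_grade(ifdi, gdp),
          "edu": grey_relational_grade(ifdi, edu)}
```

    A factor whose normalised trajectory stays close to the behaviour series earns a grade near 1, so here the GDP-like series outranks the noisier one, mirroring the ranking logic the paper applies to panel data.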

  8. A new approach to analyse longitudinal epidemiological data with an excess of zeros.

    Science.gov (United States)

    Spriensma, Alette S; Hajos, Tibor R S; de Boer, Michiel R; Heymans, Martijn W; Twisk, Jos W R

    2013-02-20

    Within longitudinal epidemiological research, 'count' outcome variables with an excess of zeros frequently occur. Although these outcomes are frequently analysed with a linear mixed model or a Poisson mixed model, a two-part mixed model is better suited to outcome variables with an excess of zeros. The objective of this paper was therefore to introduce the relatively 'new' method of two-part joint regression modelling in longitudinal data analysis for outcome variables with an excess of zeros, and to compare the performance of this method to current approaches. Within an observational longitudinal dataset, we compared three techniques: two 'standard' approaches (a linear mixed model and a Poisson mixed model) and a two-part joint mixed model (a binomial/Poisson mixed distribution model), including random intercepts and random slopes. Model fit indicators and differences between predicted and observed values were used for comparisons. The analyses were performed with STATA using the GLLAMM procedure. Regarding the random intercept models, the two-part joint mixed model (binomial/Poisson) performed best. Adding random slopes for time changed the sign of the regression coefficient for both the Poisson mixed model and the two-part joint mixed model (binomial/Poisson) and resulted in a much better fit. This paper showed that a two-part joint mixed model is a more appropriate method for analysing longitudinal data with an excess of zeros than a linear mixed model or a Poisson mixed model. However, in a model with random slopes for time, a Poisson mixed model also performed remarkably well.
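
    The core idea of a two-part model, a binomial part for the zeros and a (truncated) count part for the positives, can be illustrated on simulated cross-sectional data. This is only a sketch of the principle, not the joint mixed model fitted with GLLAMM in the paper, and all parameter values are invented:

```python
import math
import random

random.seed(42)

# hypothetical data-generating process with an excess of zeros:
# with probability p_zero the outcome is a structural zero,
# otherwise it is a Poisson(lam) count
p_zero, lam, n = 0.4, 3.0, 5000

def draw_poisson(rate):
    # Knuth's multiplication method for Poisson sampling
    limit, k, prod = math.exp(-rate), 0, 1.0
    while True:
        prod *= random.random()
        if prod < limit:
            return k
        k += 1

y = [0 if random.random() < p_zero else draw_poisson(lam) for _ in range(n)]

# one-part view: a single Poisson mean must absorb the extra zeros
naive_mean = sum(y) / n

# two-part view: a binomial part for zero vs positive,
# and a zero-truncated Poisson part for the positive counts
positives = [v for v in y if v > 0]
p_hat_positive = len(positives) / n           # binomial part
mean_pos = sum(positives) / len(positives)    # mean of the positive part

# recover the Poisson rate of the positive part by solving
# lam / (1 - exp(-lam)) = mean_pos with a fixed-point iteration
lam_hat = mean_pos
for _ in range(100):
    lam_hat = mean_pos * (1.0 - math.exp(-lam_hat))
```

    The naive one-part mean is pulled well below the count-process rate by the structural zeros, while the two-part decomposition recovers both the zero probability and the underlying rate.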

  9. Genetic analyses of partial egg production in Japanese quail using multi-trait random regression models.

    Science.gov (United States)

    Karami, K; Zerehdaran, S; Barzanooni, B; Lotfi, E

    2017-12-01

    1. The aim of the present study was to estimate genetic parameters for average egg weight (EW) and egg number (EN) at different ages in Japanese quail using multi-trait random regression (MTRR) models. 2. A total of 8534 records from 900 quail, hatched between 2014 and 2015, were used in the study. Average weekly egg weights and egg numbers were measured from the second until the sixth week of egg production. 3. Nine random regression models were compared to identify the best order of the Legendre polynomials (LP). The optimal model was identified by the Bayesian Information Criterion. A model with second order of LP for fixed effects, second order of LP for additive genetic effects and third order of LP for permanent environmental effects (MTRR23) was found to be the best. 4. According to the MTRR23 model, direct heritability for EW increased from 0.26 in the second week to 0.53 in the sixth week of egg production, whereas the ratio of permanent environment to phenotypic variance decreased from 0.48 to 0.1. Direct heritability for EN was low, whereas the ratio of permanent environment to phenotypic variance decreased from 0.57 to 0.15 during the production period. 5. For each trait, estimated genetic correlations among weeks of egg production were high (from 0.85 to 0.98). Genetic correlations between EW and EN were low and negative for the first two weeks, but they were low and positive for the rest of the egg production period. 6. In conclusion, random regression models can be used effectively for analysing egg production traits in Japanese quail. Response to selection for increased egg weight would be higher at older ages because of its higher heritability, and such a breeding program would have no negative genetic impact on egg production.
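
    The model-comparison step, choosing the order of the Legendre polynomials by BIC, can be sketched in miniature. This is a plain fixed-effects least-squares version on synthetic weekly data, not the multi-trait random regression animal model of the paper, and all values are invented:

```python
import numpy as np

rng = np.random.default_rng(1)

# weeks 2..6 of lay rescaled to [-1, 1], 200 birds measured per week
weeks = np.linspace(-1.0, 1.0, 5)
t = np.repeat(weeks, 200)
# synthetic mean trajectory built from Legendre terms P0, P1, P2 plus noise
y = 1.0 + 0.4 * t + 0.3 * (1.5 * t ** 2 - 0.5) + rng.normal(0.0, 0.2, t.size)

def fit_bic(order):
    """Least-squares fit of a Legendre polynomial of the given order; returns BIC."""
    X = np.polynomial.legendre.legvander(t, order)  # basis P_0 .. P_order at t
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    n, k = t.size, order + 1
    sigma2 = resid @ resid / n                      # MLE of the error variance
    loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    return k * np.log(n) - 2.0 * loglik

bics = {order: fit_bic(order) for order in range(4)}
best_order = min(bics, key=bics.get)
```

    Because the simulated trajectory is quadratic, BIC rejects the constant and linear fits; the penalty term then guards against needlessly higher orders, which is the same trade-off the paper resolves in favour of its MTRR23 model.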

  10. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results: (1) confirmed, in a general way, the procedures for application to pulsed burning, (2) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur, and (3) indicated that steam can terminate continuous burning. Future actions recommended include: (1) modification of the code to perform continuous-burn analyses, which is demonstrated, (2) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (3) changes to the models for estimating burn parameters

  12. Rescheduling nursing shifts: scoping the challenge and examining the potential of mathematical model based tools.

    Science.gov (United States)

    Clark, Alistair; Moule, Pam; Topping, Annie; Serpell, Martin

    2015-05-01

    To review research in the literature on nursing shift scheduling and rescheduling, and to report key issues identified in a consultation exercise with managers in four English National Health Service trusts, to inform the development of mathematical tools for rescheduling decision-making. Shift rescheduling is largely unrecognised as an everyday, time-consuming management task with different imperatives from scheduling. Poor rescheduling decisions can have quality, cost and morale implications. A systematic critical literature review identified rescheduling issues and existing mathematical modelling tools. A consultation exercise with nursing managers examined the complex challenges associated with rescheduling. Minimal research exists on rescheduling compared with scheduling. Poor rescheduling can result in greater disruption to planned nursing shifts and may impact negatively on the quality and cost of patient care, and on nurse morale and retention. Very little research examines management challenges or mathematical modelling for rescheduling. Shift rescheduling is a complex and frequent management activity that is more challenging than scheduling. Mathematical modelling may have potential as a tool to support managers in minimising rescheduling disruption. The lack of specific methodological support for rescheduling that takes into account its complexity increases the likelihood of harm for patients and stress for nursing staff and managers. © 2013 John Wiley & Sons Ltd.

  13. Examining Sex Differences in Altering Attitudes About Rape: A Test of the Elaboration Likelihood Model.

    Science.gov (United States)

    Heppner, Mary J.; And Others

    1995-01-01

    Intervention sought to improve first-year college students' attitudes about rape. Used the Elaboration Likelihood Model to examine men's and women's attitude change process. Found numerous sex differences in ways men and women experienced and changed during and after intervention. Women's attitude showed more lasting change while men's was more…

  14. Possible future HERA analyses

    International Nuclear Information System (INIS)

    Geiser, Achim

    2015-12-01

    A variety of possible future analyses of HERA data in the context of the HERA data preservation programme is collected, motivated, and commented on. The focus is placed on possible future analyses of the existing ep collider data and their physics scope. Comparisons to the original scope of the HERA programme are made, and cross references to topics also covered by other participants of the workshop are given. This includes topics on QCD, proton structure, diffraction, jets, hadronic final states, heavy flavours, electroweak physics, and the application of related theory and phenomenology topics like NNLO QCD calculations, low-x related models, nonperturbative QCD aspects, and electroweak radiative corrections. Synergies with other collider programmes are also addressed. In summary, the range of physics topics which can still be uniquely covered using the existing data is very broad and of considerable physics interest, often matching the interest of results from colliders currently in operation. Due to well-established data and MC sets, calibrations, and analysis procedures, the manpower and expertise needed for a particular analysis is often very much smaller than that needed for an ongoing experiment. Since centrally funded manpower to carry out such analyses is not available any longer, this contribution not only targets experienced self-funded experimentalists, but also theorists and master-level students who might wish to carry out such an analysis.

  15. Framing patient consent for student involvement in pelvic examination: a dual model of autonomy.

    Science.gov (United States)

    Carson-Stevens, Andrew; Davies, Myfanwy M; Jones, Rhiain; Chik, Aiman D Pawan; Robbé, Iain J; Fiander, Alison N

    2013-11-01

    Patient consent has been formulated in terms of radical individualism rather than shared benefits. Medical education relies on the provision of patient consent to provide medical students with the training and experience to become competent doctors. Pelvic examination represents an extreme case in which patients may legitimately seek to avoid contact with inexperienced medical students particularly where these are male. However, using this extreme case, this paper will examine practices of framing and obtaining consent as perceived by medical students. This paper reports findings of an exploratory qualitative study of medical students and junior doctors. Participants described a number of barriers to obtaining informed consent. These related to misunderstandings concerning student roles and experiences and insufficient information on the nature of the examination. Participants reported perceptions of the negative framing of decisions on consent by nursing staff where the student was male. Potentially coercive practices of framing of the decision by senior doctors were also reported. Participants outlined strategies they adopted to circumvent patients' reasons for refusal. Practices of framing the information used by students, nurses and senior doctors to enable patients to decide about consent are discussed in the context of good ethical practice. In the absence of a clear ethical model, coercion appears likely. We argue for an expanded model of autonomy in which the potential tension between respecting patients' autonomy and ensuring the societal benefit of well-trained doctors is recognised. Practical recommendations are made concerning information provision and clear delineations of student and patient roles and expectations.

  16. One size does not fit all: On how Markov model order dictates performance of genomic sequence analyses

    Science.gov (United States)

    Narlikar, Leelavati; Mehta, Nidhi; Galande, Sanjeev; Arjunwadkar, Mihir

    2013-01-01

    The structural simplicity and ability to capture serial correlations make Markov models a popular modeling choice in several genomic analyses, such as identification of motifs, genes and regulatory elements. A critical, yet relatively unexplored, issue is the determination of the order of the Markov model. Most biological applications use a predetermined order for all data sets indiscriminately. Here, we show the vast variation in the performance of such applications with the order. To identify the ‘optimal’ order, we investigated two model selection criteria: Akaike information criterion and Bayesian information criterion (BIC). The BIC optimal order delivers the best performance for mammalian phylogeny reconstruction and motif discovery. Importantly, this order is different from orders typically used by many tools, suggesting that a simple additional step determining this order can significantly improve results. Further, we describe a novel classification approach based on BIC optimal Markov models to predict functionality of tissue-specific promoters. Our classifier discriminates between promoters active across 12 different tissues with remarkable accuracy, yielding 3 times the precision expected by chance. Application to the metagenomics problem of identifying the taxon from a short DNA fragment yields accuracies at least as high as the more complex mainstream methodologies, while retaining conceptual and computational simplicity. PMID:23267010
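
    The order-selection step can be sketched for a single sequence: fit Markov models of increasing order by maximum likelihood and pick the order minimising BIC. The sequence below is synthetic, and this is an illustration of the criterion, not the authors' pipeline:

```python
import math
import random
from collections import Counter

random.seed(7)

# synthetic DNA from a first-order Markov chain that tends to repeat bases
bases = "ACGT"
weights = {b: [0.7 if c == b else 0.1 for c in bases] for b in bases}
chars = ["A"]
for _ in range(20000):
    chars.append(random.choices(bases, weights=weights[chars[-1]])[0])
seq = "".join(chars)

def markov_bic(seq, order):
    """BIC of a fixed-order Markov model fitted by maximum likelihood."""
    ctx_counts, pair_counts = Counter(), Counter()
    for i in range(order, len(seq)):
        ctx = seq[i - order:i]           # preceding `order` bases
        ctx_counts[ctx] += 1
        pair_counts[(ctx, seq[i])] += 1
    # MLE log-likelihood: sum over observed (context, base) pairs
    loglik = sum(c * math.log(c / ctx_counts[ctx])
                 for (ctx, _), c in pair_counts.items())
    n_params = (4 ** order) * 3          # 3 free probabilities per context
    n = len(seq) - order
    return n_params * math.log(n) - 2.0 * loglik

bics = {order: markov_bic(seq, order) for order in range(4)}
best_order = min(bics, key=bics.get)
```

    On this sequence the order-0 model misses the serial correlation entirely, while orders above 1 buy too little likelihood to pay their 4^k-fold parameter penalty, so BIC lands on the true order.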

  17. Autonomous and controlled motivational regulations for multiple health-related behaviors: between- and within-participants analyses

    Science.gov (United States)

    Hagger, M.S.; Hardcastle, S.J.; Chater, A.; Mallett, C.; Pal, S.; Chatzisarantis, N.L.D.

    2014-01-01

    Self-determination theory has been applied to the prediction of a number of health-related behaviors with self-determined or autonomous forms of motivation generally more effective in predicting health behavior than non-self-determined or controlled forms. Research has been confined to examining the motivational predictors in single health behaviors rather than comparing effects across multiple behaviors. The present study addressed this gap in the literature by testing the relative contribution of autonomous and controlling motivation to the prediction of a large number of health-related behaviors, and examining individual differences in self-determined motivation as a moderator of the effects of autonomous and controlling motivation on health behavior. Participants were undergraduate students (N = 140) who completed measures of autonomous and controlled motivational regulations and behavioral intention for 20 health-related behaviors at an initial occasion with follow-up behavioral measures taken four weeks later. Path analysis was used to test a process model for each behavior in which motivational regulations predicted behavior mediated by intentions. Some minor idiosyncratic findings aside, between-participants analyses revealed significant effects for autonomous motivational regulations on intentions and behavior across the 20 behaviors. Effects for controlled motivation on intentions and behavior were relatively modest by comparison. Intentions mediated the effect of autonomous motivation on behavior. Within-participants analyses were used to segregate the sample into individuals who based their intentions on autonomous motivation (autonomy-oriented) and controlled motivation (control-oriented). Replicating the between-participants path analyses for the process model in the autonomy- and control-oriented samples did not alter the relative effects of the motivational orientations on intention and behavior. Results provide evidence for consistent effects
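
    The mediation structure tested (motivational regulation predicts intention, which predicts behaviour) can be sketched with two least-squares regressions. The data and coefficients below are synthetic, the path labels a, b and c' follow common mediation notation, and this is not the study's path-analysis software:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 140  # the study's sample size; all scores below are synthetic
autonomous = rng.normal(0.0, 1.0, n)                    # autonomous motivation
intention = 0.6 * autonomous + rng.normal(0.0, 0.8, n)  # path a at work
# behaviour driven mainly by intention (path b), weak direct path (c')
behaviour = 0.5 * intention + 0.1 * autonomous + rng.normal(0.0, 0.8, n)

def ols(y, *xs):
    """OLS coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y)), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

a = ols(intention, autonomous)[1]                 # motivation -> intention
_, b, c_prime = ols(behaviour, intention, autonomous)
indirect = a * b      # effect of motivation on behaviour carried via intention
```

    A sizeable product a*b alongside a small residual direct effect c' is the regression signature of intention mediating the motivation-behaviour link, which is the pattern the abstract reports.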

  18. Respiratory system model for quasistatic pulmonary pressure-volume (P-V) curve: inflation-deflation loop analyses.

    Science.gov (United States)

    Amini, R; Narusawa, U

    2008-06-01

    A respiratory system model (RSM) is developed for the deflation process of a quasistatic pressure-volume (P-V) curve, following the model for the inflation process reported earlier. In the RSM of both the inflation and the deflation limb, a respiratory system consists of a large population of basic alveolar elements, each consisting of a piston-spring-cylinder subsystem. A normal distribution of the basic elements is derived from Boltzmann statistical model with the alveolar closing (opening) pressure as the distribution parameter for the deflation (inflation) process. An error minimization by the method of least squares applied to existing P-V loop data from two different data sources confirms that a simultaneous inflation-deflation analysis is required for an accurate determination of RSM parameters. Commonly used terms such as lower inflection point, upper inflection point, and compliance are examined based on the P-V equations, on the distribution function, as well as on the geometric and physical properties of the basic alveolar element.
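
    A population of alveolar elements whose opening pressures follow a normal distribution implies a sigmoidal P-V limb given by the normal CDF, with compliance (dV/dP) peaking at the distribution mean. A sketch with invented parameter values (the function name and all numbers are illustrative, not the paper's fitted values):

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def pv_volume(p, v_min=0.5, v_max=3.0, mu=15.0, sigma=5.0):
    """Quasistatic lung volume (L) at airway pressure p (cmH2O), assuming
    alveolar opening pressures ~ Normal(mu, sigma); all values invented."""
    return v_min + (v_max - v_min) * normal_cdf((p - mu) / sigma)

def compliance(p, h=1e-4):
    """Numerical dV/dP; maximal at p = mu, the curve's inflection point."""
    return (pv_volume(p + h) - pv_volume(p - h)) / (2.0 * h)
```

    On this curve the commonly quoted lower and upper inflection points bracket the region of near-maximal compliance around mu, which is how the RSM reinterprets those terms via the distribution parameter.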

  19. The variants of an LOD of a 3D building model and their influence on spatial analyses

    Science.gov (United States)

    Biljecki, Filip; Ledoux, Hugo; Stoter, Jantien; Vosselman, George

    2016-06-01

    The level of detail (LOD) of a 3D city model indicates the model's grade and usability. However, there exist multiple valid variants of each LOD. As a consequence, the LOD concept is inconclusive as an instruction for the acquisition of 3D city models. For instance, the top surface of an LOD1 block model may be modelled at the eaves of a building or at its ridge height. Such variants, which we term geometric references, are often overlooked and are usually not documented in the metadata. Furthermore, the influence of a particular geometric reference on the performance of a spatial analysis is not known. In response to this research gap, we investigate a variety of LOD1 and LOD2 geometric references that are commonly employed, and perform numerical experiments to investigate their relative difference when used as input for different spatial analyses. We consider three use cases (estimation of the area of the building envelope, building volume, and shadows cast by buildings), and compute the deviations in a Monte Carlo simulation. The experiments, carried out with procedurally generated models, indicate that two 3D models representing the same building at the same LOD, but modelled according to different geometric references, may yield substantially different results when used in a spatial analysis. The outcome of our experiments also suggests that the geometric reference may have a bigger influence than the LOD, since an LOD1 with a specific geometric reference may yield a more accurate result than when using LOD2 models.
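
    The effect of the geometric reference can be reproduced in miniature: a Monte Carlo over procedurally generated gable-roofed buildings, comparing LOD1 block volumes computed at eaves height versus ridge height. All dimensions are invented and the building shape is deliberately simple:

```python
import random

random.seed(3)

# LOD1 block volumes for two geometric references:
# top surface at eaves height vs at ridge height
dev_eaves, dev_ridge = [], []
for _ in range(1000):
    area = random.uniform(50.0, 200.0)    # footprint area, m^2
    eaves = random.uniform(3.0, 9.0)      # wall (eaves) height, m
    ridge = eaves + random.uniform(1.0, 4.0)
    # 'true' volume: walls plus a gable roof holding half the top slab
    true_volume = area * eaves + area * (ridge - eaves) / 2.0
    dev_eaves.append(area * eaves / true_volume - 1.0)    # relative error
    dev_ridge.append(area * ridge / true_volume - 1.0)

mean_dev_eaves = sum(dev_eaves) / len(dev_eaves)
mean_dev_ridge = sum(dev_ridge) / len(dev_ridge)
```

    The eaves reference systematically underestimates volume and the ridge reference overestimates it, so two equally valid "LOD1" models disagree by a margin that can dwarf the LOD1/LOD2 difference, which is the paper's central point.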

  20. An examination of the developmental propensity model of conduct problems.

    Science.gov (United States)

    Rhee, Soo Hyun; Friedman, Naomi P; Corley, Robin P; Hewitt, John K; Hink, Laura K; Johnson, Daniel P; Smith Watts, Ashley K; Young, Susan E; Robinson, JoAnn; Waldman, Irwin D; Zahn-Waxler, Carolyn

    2016-05-01

    The present study tested specific hypotheses advanced by the developmental propensity model of the etiology of conduct problems in the Colorado Longitudinal Twin Study, a prospective, longitudinal, genetically informative sample. High negative emotionality, low behavioral inhibition, low concern and high disregard for others, and low cognitive ability assessed during toddlerhood (age 14 to 36 months) were examined as predictors of conduct problems in later childhood and adolescence (age 4 to 17 years). Each hypothesized antisocial propensity dimension predicted conduct problems, but some predictions may be context specific or due to method covariance. The most robust predictors were observed disregard for others (i.e., responding to others' distress with active, negative responses such as anger and hostility), general cognitive ability, and language ability, which were associated with conduct problems reported by parents, teachers, and adolescents, and change in observed negative emotionality (i.e., frustration tolerance), which was associated with conduct problems reported by teachers and adolescents. Furthermore, associations between the most robust early predictors and later conduct problems were influenced by the shared environment rather than genes. We conclude that shared environmental influences that promote disregard for others and detract from cognitive and language development during toddlerhood also predispose individuals to conduct problems in later childhood and adolescence. The identification of those shared environmental influences common to early antisocial propensity and later conduct problems is an important future direction, and additional developmental behavior genetic studies examining the interaction between children's characteristics and socializing influences on conduct problems are needed. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  1. “PERLE bedside-examination-course for candidates in state examination” – Developing a training program for the third part of medical state examination (oral examination with practical skills

    Directory of Open Access Journals (Sweden)

    Karthaus, Anne

    2016-08-01

    Full Text Available Introduction: In preparation for the state examination, many students have open questions and a need for advice. Tutors of the Skills Lab PERLE („Praxis ERfahren und Lernen“, experiencing and learning practical skills) have developed a new course concept to provide support and practical assistance for the examinees. Objectives: The course aims to familiarize the students with the exam situation so that they gain confidence. It enables the students to confront the specific exam situation in a protected environment. Furthermore, soft skills are practised and trained. Concept of the course: The course was inspired by the OSCE model (Objective Structured Clinical Examination), an example of case-based learning and assessment. Acquired knowledge can be revised and extended through the case studies. Experienced tutors provide assistance in discipline-specific competencies and help with organizational issues such as dress code and behaviour. Evaluation of the course: The attending participants completed an evaluation after every course; based on this feedback, the course is continuously being developed. In March, April and October 2015, six courses with a total of 84 participants took place. Overall, 76 completed questionnaires (91%) were analysed. Discussion: Strengths of the course are a good tutor-participant ratio of 1:4 (one tutor guides four participants), the interactivity of the course, and the high flexibility in responding to the group's needs. Weaknesses are the tight schedule and the lack, so far, of an evaluation before and after the course. Conclusion: In terms of "best practice", this article shows an example of how to offer low-cost and low-threshold preparation for the state examination.

  2. Steady-state and accident analyses of PBMR with the computer code SPECTRA

    International Nuclear Information System (INIS)

    Stempniewicz, Marek M.

    2002-01-01

    The SPECTRA code is an accident analysis code developed at NRG. It is designed for thermal-hydraulic analyses of nuclear or conventional power plants. The code is capable of analysing the whole power plant, including reactor vessel, primary system, various control and safety systems, containment and reactor building. The aim of the work presented in this paper was to prepare a preliminary thermal-hydraulic model of PBMR for SPECTRA, and perform steady state and accident analyses. In order to assess SPECTRA capability to model the PBMR reactors, a model of the INCOGEN system has been prepared first. Steady state and accident scenarios were analyzed for the INCOGEN configuration. Results were compared to the results obtained earlier with INAS and OCTOPUS/PANTHERMIX. A good agreement was obtained. Accident analyses with the PBMR model also gave qualitatively good results. It is concluded that SPECTRA is a suitable tool for analyzing High Temperature Reactors, such as INCOGEN or the PBMR (Pebble Bed Modular Reactor). Analyses of the INCOGEN and PBMR systems showed that in all analyzed cases the fuel temperatures remained within the acceptable limits. Consequently, there is no danger of release of radioactivity to the environment. It may be concluded that these are promising designs for future safe industrial reactors. (author)

  3. A STRONGLY COUPLED REACTOR CORE ISOLATION COOLING SYSTEM MODEL FOR EXTENDED STATION BLACK-OUT ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua [Idaho National Laboratory; Zhang, Hongbin [Idaho National Laboratory; Zou, Ling [Idaho National Laboratory; Martineau, Richard Charles [Idaho National Laboratory

    2015-03-01

    The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is from the battery to maintain the logic circuits to control the opening and/or closure of valves in the RCIC systems in order to control the RPV water level by shutting down the RCIC pump to avoid overfilling the RPV and flooding the steam line to the RCIC turbine. It is generally assumed in almost all existing station black-out (SBO) accident analyses that loss of the DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, where it is assumed that the turbine would then be disabled. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models for RCIC system components are needed to understand the extended SBO for BWRs. As part of the effort to develop the next generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike traditional SBO simulations, where mass flow rates are typically given in the input file through time-dependent functions, the real mass flow rates through the turbine and the pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operation curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components for the primary system of a BWR, as well as the safety

  4. Monte Carlo modeling and analyses of YALINA-booster subcritical assembly part 1: analytical models and main neutronics parameters

    International Nuclear Information System (INIS)

    Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division

    2008-01-01

    This study was carried out to model and analyze the YALINA-Booster facility, of the Joint Institute for Power and Nuclear Research of Belarus, with the long term objective of advancing the utilization of accelerator driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions. In the latter two cases a deuteron beam is used to generate the neutrons. This study is a part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity without any geometrical homogenization using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, which is an extension of the MCNP code with burnup capability because of its additional feature for analyzing source driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1

  5. Monte Carlo modeling and analyses of YALINA-booster subcritical assembly part 1: analytical models and main neutronics parameters.

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division

    2008-09-11

    This study was carried out to model and analyze the YALINA-Booster facility, of the Joint Institute for Power and Nuclear Research of Belarus, with the long term objective of advancing the utilization of accelerator driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions. In the latter two cases a deuteron beam is used to generate the neutrons. This study is a part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity without any geometrical homogenization using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, which is an extension of the MCNP code with burnup capability because of its additional feature for analyzing source driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1.

  6. Network class superposition analyses.

    Directory of Open Access Journals (Sweden)

    Carl A B Pearson

Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.
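
The superposition matrix T described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the two 2-node Boolean networks and their state maps are invented for the example. T is the entry-wise average of the deterministic transition matrices of the class members, and the row-wise Shannon entropy flags states whose successor varies across the class.

```python
import math

# Two hypothetical 2-node Boolean networks assumed to share the same
# primary function; each is a deterministic map on the 4 states (00..11).
net_a = {0: 0, 1: 3, 2: 0, 3: 3}   # state index -> successor state index
net_b = {0: 0, 1: 3, 2: 3, 3: 3}

def transition_matrix(net, n_states=4):
    """Deterministic 0/1 transition matrix for a single network."""
    T = [[0.0] * n_states for _ in range(n_states)]
    for s, t in net.items():
        T[s][t] = 1.0
    return T

def superpose(mats):
    """Entry-wise average of the members' matrices: the ensemble matrix T."""
    n = len(mats[0])
    return [[sum(m[i][j] for m in mats) / len(mats) for j in range(n)]
            for i in range(n)]

def row_entropy(T):
    """Shannon entropy (bits) of each row's successor distribution."""
    return [-sum(p * math.log2(p) for p in row if p > 0) for row in T]

T = superpose([transition_matrix(net_a), transition_matrix(net_b)])
```

Rows with zero entropy have the same successor in every member of the class; high-entropy rows mark states whose dynamics would best discriminate among candidate networks, which is the intuition behind the entropy-based experiment selection.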

  7. Development of an Integrated Global Energy Model

    International Nuclear Information System (INIS)

    Krakowski, R.A.

    1999-01-01

The primary objective of this research was to develop a forefront analysis tool for application to enhance understanding of long-term, global, nuclear-energy and nuclear-material futures. To this end, an existing economics-energy-environmental (E³) model was adopted, modified, and elaborated to examine this problem in a multi-regional (13 regions), long-term (to approximately the year 2100) context. The E³ model so developed was applied to create a Los Alamos presence in this E³ area through 'niche analyses' that provide input to the formulation of policies dealing with and shaping of nuclear-energy and nuclear-materials futures. Results from analyses using the E³ model have been presented at a variety of national and international conferences and workshops. Through use of the E³ model, Los Alamos was afforded the opportunity to participate in a multi-national E³ study team that is examining a range of global, long-term nuclear issues under the auspices of the IAEA during the 1998-99 period. Finally, the E³ model developed under this LDRD project is being used as an important component in the more recent Nuclear Material Management Systems (NMMS) project.

  8. Examination and modeling of void growth kinetics in modern high strength dual phase steels during uniaxial tensile deformation

    Energy Technology Data Exchange (ETDEWEB)

    Saeidi, N., E-mail: navidsae@gmail.com [Department of Materials Engineering, Isfahan University of Technology, Isfahan 84156-83111 (Iran, Islamic Republic of); Ashrafizadeh, F.; Niroumand, B. [Department of Materials Engineering, Isfahan University of Technology, Isfahan 84156-83111 (Iran, Islamic Republic of); Forouzan, M.R.; Mohseni mofidi, S. [Department of Mechanical Engineering, Isfahan University of Technology, Isfahan 84156-83111 (Iran, Islamic Republic of); Barlat, F. [Materials Mechanics Laboratory (MML), Graduate Institute of Ferrous Technology (GIFT), Pohang University of Science and Technology - POSTECH, San 31 Hyoja-dong, Nam-gu, Pohang, Gyeongbuk 790-784 (Korea, Republic of)

    2016-04-01

Ductile fracture mechanisms during uniaxial tensile testing of two different modern high strength dual phase steels, i.e. DP780 and DP980, were studied. Detailed microstructural characterization of the strained and sectioned samples was performed by scanning electron microscopy as well as EBSD examination. The results revealed that interface decohesion, especially at martensite particles located at ferrite grain boundaries, was the most probable mechanism for void nucleation. It was also revealed that the creation of a cellular substructure can reduce the stored strain energy and, thereby, a higher true fracture strain was obtained in DP980 than in DP780 steel. Prediction of void growth behavior based on several previously proposed models gave unreliable results. Therefore, a modified model based on the Rice-Tracey family of models was proposed, which showed a much lower prediction error than the other models. - Highlights: • Damage mechanism in two modern high strength dual phase steels was studied. • Creation of cellular substructures can reduce the stored strain energy within the ferrite grains. • The experimental values were examined by the Agrawal as well as RT family models. • A modified model was proposed for prediction of void growth behavior of DP steels.

  9. HLA region excluded by linkage analyses of early onset periodontitis

    Energy Technology Data Exchange (ETDEWEB)

    Sun, C.; Wang, S.; Lopez, N.

    1994-09-01

Previous studies suggested that HLA genes may influence susceptibility to early-onset periodontitis (EOP). Segregation analyses indicate that EOP may be due to a single major gene. We conducted linkage analyses to assess possible HLA effects on EOP. Fifty families with two or more close relatives affected by EOP were ascertained in Virginia and Chile. A microsatellite polymorphism within the HLA region (at the tumor necrosis factor beta locus) was typed using PCR. Linkage analyses used a dominant model most strongly supported by previous studies. Assuming locus homogeneity, our results exclude a susceptibility gene within 10 cM on either side of our marker locus. This encompasses all of the HLA region. Analyses assuming alternative models gave qualitatively similar results. Allowing for locus heterogeneity, our data still provide no support for HLA-region involvement. However, our data do not statistically exclude (LOD < -2.0) hypotheses of disease-locus heterogeneity, including models where up to half of our families could contain an EOP disease gene located in the HLA region. This is due to the limited power of even our relatively large collection of families and the inherent difficulties of mapping genes for disorders that have complex and heterogeneous etiologies. Additional statistical analyses, recruitment of families, and typing of flanking DNA markers are planned to more conclusively address these issues with respect to the HLA region and other candidate locations in the human genome. Additional results for markers covering most of the human genome will also be presented.
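
The LOD-based exclusion criterion used above can be illustrated with a minimal two-point calculation. This is a hypothetical sketch assuming phase-known meioses and made-up recombinant counts; real linkage analyses handle pedigrees, penetrance models, and heterogeneity, which this toy omits.

```python
import math

def lod_score(recombinants, nonrecombinants, theta):
    """Two-point LOD score for phase-known meioses: log10 likelihood ratio
    of recombination fraction theta versus free recombination (0.5)."""
    log_l_theta = (recombinants * math.log10(theta)
                   + nonrecombinants * math.log10(1.0 - theta))
    log_l_null = (recombinants + nonrecombinants) * math.log10(0.5)
    return log_l_theta - log_l_null

# Hypothetical family data: 2 recombinants in 10 informative meioses.
score = lod_score(2, 8, 0.1)
```

A score below -2 at a given theta is the conventional exclusion threshold referred to in the abstract; for example, observing half recombinants makes tight linkage (theta = 0.01) fall far below -2.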

  10. Modelling and analysing interoperability in service compositions using COSMO

    NARCIS (Netherlands)

    Quartel, Dick; van Sinderen, Marten J.

    2008-01-01

    A service composition process typically involves multiple service models. These models may represent the composite and composed services from distinct perspectives, e.g. to model the role of some system that is involved in a service, and at distinct abstraction levels, e.g. to model the goal,

  11. Demand Modelling in Telecommunications

    Directory of Open Access Journals (Sweden)

    M. Chvalina

    2009-01-01

This article analyses the existing possibilities for using Standard Statistical Methods and Artificial Intelligence Methods for a short-term forecast and simulation of demand in the field of telecommunications. The most widespread methods are based on Time Series Analysis. Nowadays, approaches based on Artificial Intelligence Methods, including Neural Networks, are booming. Separate approaches will be used in the study of Demand Modelling in Telecommunications, and the results of these models will be compared with actual guaranteed values. Then we will examine the quality of the Neural Network models.

  12. The News Model of Asset Price Determination - An Empirical Examination of the Danish Football Club Bröndby IF

    DEFF Research Database (Denmark)

    Stadtmann, Georg; Moritzen; Jörgensen

    2012-01-01

According to the news model of asset price determination, only the unexpected component of information should drive the stock price. We use the Danish publicly listed football club Brøndby IF to analyse how match outcome impacts the stock price. To disentangle gross news from net news, betting...

  13. Examining Gender Differences in Received, Provided and Invisible Social Control: An Application of the Dual-Effects-Model

    OpenAIRE

    Lüscher Janina; Ochsner Sibylle; Knoll Nina; Stadler Gertraud; Hornung Rainer; Scholz Urte

    2014-01-01

The dual-effects model of social control assumes that social control leads to better health practices but also arouses psychological distress. However, findings are inconsistent. The present study advances the current literature by examining social control from a dyadic perspective in the context of smoking. In addition, the study examines whether control, continuous smoking abstinence, and affect are differentially related for men and women. Before and three weeks after a self-set quit attempt w...

  14. Fundamental issues in finite element analyses of localization of deformation

    NARCIS (Netherlands)

    Borst, de R.; Sluys, L.J.; Mühlhaus, H.-B.; Pamin, J.

    1993-01-01

    Classical continuum models, i.e. continuum models that do not incorporate an internal length scale, suffer from excessive mesh dependence when strain-softening models are used in numerical analyses and cannot reproduce the size effect commonly observed in quasi-brittle failure. In this contribution

  15. A Race to the Bottom: MOOCs and Higher Education Business Models

    Science.gov (United States)

    Kalman, Yoram M.

    2014-01-01

    This is a critical examination of the claims that innovations such as massive open online courses (MOOCs) will disrupt the business models of the higher education sector. It describes what business models are, analyses the business model of free MOOCs offered by traditional universities and compares that model to that of paid online courses…

  16. Uncertainty and Sensitivity Analyses Plan

    International Nuclear Information System (INIS)

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project
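
A minimal Monte Carlo uncertainty-propagation sketch in the spirit of the plan (the toy dose model and the parameter ranges are invented here; the HEDRIC codes are far more elaborate): sample the uncertain inputs, run the model for each sample, and summarize the spread of the predicted doses.

```python
import random
import statistics

def dose_model(release, dispersion, dose_factor):
    """Toy stand-in for a dose code: dose = release * dispersion * factor."""
    return release * dispersion * dose_factor

random.seed(1)  # reproducible sampling

# Hypothetical parameter uncertainty ranges (uniform for simplicity).
samples = [dose_model(random.uniform(0.8, 1.2),
                      random.uniform(0.5, 1.5),
                      random.uniform(0.9, 1.1))
           for _ in range(10_000)]

mean_dose = statistics.fmean(samples)   # central estimate
spread = statistics.pstdev(samples)     # uncertainty of the prediction
```

Sensitivity analysis then asks how much of `spread` each input is responsible for, e.g. by repeating the experiment with one parameter fixed at its nominal value.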

  17. Tomographic anthropomorphic models. Pt. 2. Organ doses from computed tomographic examinations in paediatric radiology

    International Nuclear Information System (INIS)

    Zankl, M.; Panzer, W.; Drexler, G.

    1993-11-01

    This report provides a catalogue of organ dose conversion factors resulting from computed tomographic (CT) examinations of children. Two radiation qualities and two exposure geometries were simulated as well as the use of asymmetrical beams. The use of further beam shaping devices was not considered. The organ dose conversion factors are applicable to babies at the age of ca. 2 months and to children between 5 and 7 years but can be used for other ages as well with the appropriate adjustments. For the calculations, the patients were represented by the GSF tomographic anthropomorphic models BABY and CHILD. The radiation transport in the body was simulated using a Monte Carlo method. The doses are presented as conversion factors of mean organ doses per air kerma free in air on the axis of rotation. Mean organ dose conversion factors are given per organ and per scanned body section of 1 cm height. The mean dose to an organ resulting from a particular CT examination can be estimated by summing up the contributions to the organ dose from all relevant sections. To facilitate the selection of the appropriate sections, a table is given which relates the tomographic models' coordinates to certain anatomical landmarks in the human body. (orig.)

  18. Selection of asset investment models by hospitals: examination of influencing factors, using Switzerland as an example.

    Science.gov (United States)

    Eicher, Bernhard

    2016-10-01

Hospitals are responsible for a remarkable part of the annual increase in healthcare expenditure. This article examines one of the major cost drivers, the expenditure for investment in hospital assets. The study, conducted in Switzerland, identifies factors that influence hospitals' investment decisions. A suggestion on how to categorize asset investment models is presented based on the life cycle of an asset, and its influencing factors defined based on transaction cost economics. The influence of five factors (human asset specificity, physical asset specificity, uncertainty, bargaining power, and privacy of ownership) on the selection of an asset investment model is examined using a two-step fuzzy-set Qualitative Comparative Analysis. The research shows that outsourcing-oriented asset investment models are particularly favored in the presence of two combinations of influencing factors: First, if technological uncertainty is high and both human asset specificity and bargaining power of a hospital are low. Second, if assets are very specific, technological uncertainty is high and there is a private hospital with low bargaining power, outsourcing-oriented asset investment models are favored too. Using Qualitative Comparative Analysis, it can be demonstrated that investment decisions of hospitals do not depend on isolated influencing factors but on a combination of factors. Copyright © 2016 John Wiley & Sons, Ltd.
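
The fuzzy-set QCA logic of "combinations of conditions" can be sketched with a consistency-of-sufficiency computation. The membership scores and condition names below are invented for illustration; the study's actual calibration and truth-table analysis are much richer.

```python
def consistency(condition, outcome):
    """Consistency of 'condition is sufficient for outcome' in fsQCA:
    sum of min(x, y) over cases, divided by the sum of x."""
    return (sum(min(x, y) for x, y in zip(condition, outcome))
            / sum(condition))

def combine(*conditions):
    """Fuzzy-set intersection (logical AND): case-wise minimum."""
    return [min(vals) for vals in zip(*conditions)]

# Hypothetical membership scores for four hospitals.
high_uncertainty = [0.9, 0.8, 0.2, 0.7]
low_bargaining   = [0.8, 0.9, 0.3, 0.2]
outsourcing      = [0.9, 0.8, 0.1, 0.4]

cond = combine(high_uncertainty, low_bargaining)
c = consistency(cond, outsourcing)
```

A consistency close to 1 supports reading the combined condition as (near-)sufficient for the outcome, which is how fsQCA identifies configurations rather than isolated factors.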

  19. Catchment variability and parameter estimation in multi-objective regionalisation of a rainfall-runoff model

    NARCIS (Netherlands)

    Deckers, Dave L.E.H.; Booij, Martijn J.; Rientjes, T.H.M.; Krol, Martinus S.

    2010-01-01

    This study attempts to examine if catchment variability favours regionalisation by principles of catchment similarity. Our work combines calibration of a simple conceptual model for multiple objectives and multi-regression analyses to establish a regional model between model sensitive parameters and

  20. New insights into survival trend analyses in cancer population-based studies: the SUDCAN methodology.

    Science.gov (United States)

    Uhry, Zoé; Bossard, Nadine; Remontet, Laurent; Iwaz, Jean; Roche, Laurent

    2017-01-01

The main objective of the SUDCAN study was to compare, for 15 cancer sites, the trends in net survival and excess mortality rates from cancer 5 years after diagnosis between six European Latin countries (Belgium, France, Italy, Portugal, Spain and Switzerland). The data were extracted from the EUROCARE-5 database. The study period ranged from 6 years (Portugal, 2000-2005) to 18 years (Switzerland, 1989-2007). Trend analyses were carried out separately for each country and cancer site; the number of cases ranged from 1500 to 104 000 cases. We developed an original flexible excess rate modelling strategy that accounts for the continuous effects of age, year of diagnosis, time since diagnosis and their interactions. Nineteen models were constructed; they differed in the modelling of the effect of the year of diagnosis in terms of linearity, proportionality and interaction with age. The final model was chosen according to the Akaike Information Criterion. The fit was assessed graphically by comparing model estimates versus nonparametric (Pohar-Perme) net survival estimates. Out of the 90 analyses carried out, the effect of the year of diagnosis on the excess mortality rate depended on age in 61 and was nonproportional in 64; it was nonlinear in 27 out of the 75 analyses where this effect was considered. The model fit was overall satisfactory. We successfully analysed 15 cancer sites in six countries. The refined methodology proved necessary for detailed trend analyses. It is hoped that three-dimensional parametric modelling will be used more widely in net survival trend studies as it has major advantages over stratified analyses.
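
Model selection by the Akaike Information Criterion, as used above, reduces to a simple comparison. The candidate model names, log-likelihoods, and parameter counts below are invented for illustration; only the formula AIC = 2k - 2 ln L is taken as given.

```python
def aic(log_likelihood, n_params):
    """Akaike Information Criterion: 2k - 2 ln L (lower is better)."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical fits: (description, maximised log-likelihood, parameter count).
candidates = [
    ("linear, proportional",       -1520.4, 6),
    ("nonlinear, proportional",    -1511.9, 9),
    ("nonlinear, age interaction", -1509.2, 14),
]

best = min(candidates, key=lambda m: aic(m[1], m[2]))
```

Note how the most complex model is not chosen here: its likelihood gain does not cover its 5 extra parameters, which is exactly the trade-off the AIC penalty encodes.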

  1. An examination of the factors affecting people's participation in future health examinations based on community health exam interventions.

    Science.gov (United States)

    Tu, Shih-Kai; Liao, Hung-En

    2014-01-01

Community-based intervention health examinations were implemented at a health care facility to comply with the government's primary health care promotion policy. The theory of planned behavior model was applied to examine the effect that community-based health examinations had on people's health concepts regarding seeking future health examinations. The research participants were individuals who had received a health examination provided at two branches of a hospital in central Taiwan in 2012. The hospital's two branches held a total of 14 free community-based health examination sessions. The hospital provided health examination equipment and staff to perform health examinations during public holidays. We conducted an exploratory questionnaire survey to collect data and implemented cross-sectional research based on anonymous self-ratings to examine the public's intention to receive future community-based or hospital-based health examinations. A total of 807 valid questionnaires were collected, accounting for 89.4% of the questionnaires distributed. The correlation coefficients of the second-order structural model indicate that attitudes positively predict behavioral intentions (γ = .66, p < .05), whereas one path was nonsignificant (γ = -.71, p > .05). The results of the first-order structural model indicated that the second-order constructs had a high explanatory power for the first-order constructs. People's health concepts regarding health examinations and their desire to continue receiving health examinations must be considered when promoting health examinations in the community. Regarding hospital management and the government's implementation of primary health care, health examination services should address people's medical needs to increase coverage and participation rates and reduce the waste of medical resources.

  2. Coupled Mooring Analyses for the WEC-Sim Wave Energy Converter Design Tool: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Sirnivas, Senu; Yu, Yi-Hsiang; Hall, Matthew; Bosma, Bret

    2016-07-01

A wave-energy-converter-specific time-domain modeling method (WEC-Sim) was coupled with a lumped-mass-based mooring model (MoorDyn) to improve its mooring dynamics modeling capability. This paper presents a verification and validation study on the coupled numerical method. First, a coupled model was built to simulate a 1/25 model scale floating power system connected to a traditional three-point catenary mooring with an angle of 120° between the lines. The body response and the tension force on the mooring lines at the fairlead in decay tests and under regular and irregular waves were examined. To validate and verify the coupled numerical method, the simulation results were compared to the measurements from a wave tank test and a commercial code (OrcaFlex). Second, a coupled model was built to simulate a two-body point absorber system with a chain-connected catenary system. The influence of the mooring connection on the point absorber was investigated. Overall, the study showed that the coupling of WEC-Sim and the MoorDyn model works reasonably well for simulating a floating system with practical mooring designs and predicting the corresponding dynamic loads on the mooring lines. Further analyses on improving coupling efficiency and the feasibility of applying the numerical method to simulate WEC systems with more complex mooring configurations are still needed.

  3. Experimental models of demyelination and remyelination.

    Science.gov (United States)

    Torre-Fuentes, L; Moreno-Jiménez, L; Pytel, V; Matías-Guiu, J A; Gómez-Pinedo, U; Matías-Guiu, J

    2017-08-29

Experimental animal models constitute a useful tool to deepen our knowledge of central nervous system disorders. In the case of multiple sclerosis, however, there is no single specific model able to provide an overview of the disease; multiple models covering its different pathophysiological features are therefore necessary. We reviewed the different in vitro and in vivo experimental models used in multiple sclerosis research. Concerning in vitro models, we analysed cell cultures and slice models. As for in vivo models, we examined such models of autoimmunity and inflammation as experimental allergic encephalitis in different animals and virus-induced demyelinating diseases. Furthermore, we analysed models of demyelination and remyelination, including chemical lesions caused by cuprizone, lysolecithin, and ethidium bromide; zebrafish; and transgenic models. Experimental models provide a deeper understanding of the different pathogenic mechanisms involved in multiple sclerosis. Choosing one model or another depends on the specific aims of the study. Copyright © 2017 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  4. Wind Tunnel Test of a Risk-Reduction Wing/Fuselage Model to Examine Juncture-Flow Phenomena

    Science.gov (United States)

    Kegerise, Michael A.; Neuhart, Dan H.

    2016-01-01

A wing/fuselage wind-tunnel model was tested in the Langley 14- by 22-foot Subsonic Wind Tunnel in preparation for a highly-instrumented Juncture Flow Experiment to be conducted in the same facility. This test, which was sponsored by the NASA Transformational Tools and Technologies Project, is part of a comprehensive set of experimental and computational research activities to develop revolutionary, physics-based aeronautics analysis and design capability. The objectives of this particular test were to examine the surface and off-body flow on a generic wing/body combination to: 1) choose a final wing for a future, highly instrumented model, 2) use the results to facilitate unsteady pressure sensor placement on the model, 3) determine the area to be surveyed with an embedded laser-Doppler velocimetry (LDV) system, 4) investigate the primary juncture corner-flow separation region using particle image velocimetry (PIV) to see if the particle seeding is adequately entrained and to examine the structure in the separated region, and 5) to determine the similarity of observed flow features with those predicted by computational fluid dynamics (CFD). This report documents the results of the above experiment that specifically address the first three goals. Multiple wing configurations were tested at a chord Reynolds number of 2.4 million. Flow patterns on the surface of the wings and in the region of the wing/fuselage juncture were examined using oil-flow visualization and infrared thermography. A limited number of unsteady pressure sensors on the fuselage around the wing leading and trailing edges were used to identify any dynamic effects of the horseshoe vortex on the flow field. The area of separated flow in the wing/fuselage juncture near the wing trailing edge was observed for all wing configurations at various angles of attack. All of the test objectives were met. The staff of the 14- by 22-foot Subsonic Wind Tunnel provided outstanding support and delivered

  5. Historical Analysis of the Inorganic Chemistry Curriculum Using ACS Examinations as Artifacts

    Science.gov (United States)

    Srinivasan, Shalini; Reisner, Barbara A.; Smith, Sheila R.; Stewart, Joanne L.; Johnson, Adam R.; Lin, Shirley; Marek, Keith A.; Nataro, Chip; Murphy, Kristen L.; Raker, Jeffrey R.

    2018-01-01

    ACS Examinations provide a lens through which to examine historical changes in topic coverage via analyses of course-specific examinations. This study is an extension of work completed previously by the ACS Exams Research Staff and collaborators in general chemistry, organic chemistry, and physical chemistry to explore content changes in the…

  6. Applications of MIDAS regression in analysing trends in water quality

    Science.gov (United States)

    Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.

    2014-04-01

    We discuss novel statistical methods in analysing trends in water quality. Such analysis uses complex data sets of different classes of variables, including water quality, hydrological and meteorological. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequency in the data collection. Typically, water quality variables are sampled fortnightly, whereas the rain data is sampled daily. The advantage of using MIDAS regression is in the flexible and parsimonious modelling of the influence of the rain and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed data sampling nature of the data.
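
A minimal sketch of the mixed-frequency ingredient of MIDAS regression: daily observations enter a fortnightly model through a parametric lag polynomial. The exponential Almon weight form is standard in MIDAS work, but the lag length and parameter values below are illustrative assumptions; the resulting aggregate would then serve as a regressor for the fortnightly water-quality series.

```python
import math

def exp_almon_weights(n_lags, theta1, theta2):
    """Normalised exponential Almon lag weights used in MIDAS regressions."""
    raw = [math.exp(theta1 * k + theta2 * k * k) for k in range(n_lags)]
    total = sum(raw)
    return [w / total for w in raw]

def midas_aggregate(daily, weights):
    """Weighted sum of the most recent len(weights) daily observations
    (lag 0 = most recent day)."""
    recent = daily[-len(weights):][::-1]
    return sum(w * x for w, x in zip(weights, recent))

# 14 daily rainfall lags per fortnightly water-quality sample,
# with geometrically declining weights (theta1 < 0, theta2 = 0).
weights = exp_almon_weights(14, -0.1, 0.0)
```

The parsimony claimed in the abstract comes from estimating only the two theta parameters instead of 14 free lag coefficients.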

  7. Structural analyses of ITER toroidal field coils under fault conditions

    International Nuclear Information System (INIS)

    Jong, C.T.J.

    1992-04-01

    ITER (International Thermonuclear Experimental Reactor) is intended to be an experimental thermonuclear tokamak reactor testing the basic physics performance and technologies essential to future fusion reactors. The magnet system of ITER consists essentially of 4 sub-systems, i.e. toroidal field coils (TFCs), poloidal field coils (PFCs), power supplies, and cryogenic supplies. These subsystems do not contain significant radioactivity inventories, but the large energy inventory is a potential accident initiator. The aim of the structural analyses is to prevent accidents from propagating into vacuum vessel, tritium system and cooling system, which all contain significant amounts of radioactivity. As part of design process 3 conditions are defined for PF and TF coils, at which mechanical behaviour has to be analyzed in some detail, viz: normal operating conditions, upset conditions and fault conditions. This paper describes the work carried out by ECN to create a detailed finite element model of 16 TFCs as well as results of some fault condition analyses made with the model. Due to fault conditions, either electrical or mechanical, magnetic loading of TFCs becomes abnormal and further mechanical failure of parts of the overall structure might occur (e.g. failure of coil, gravitational supports, intercoil structure). The analyses performed consist of linear elastic stress analyses and electro-magneto-structural analyses (coupled field analyses). 8 refs.; 5 figs.; 5 tabs

  8. Business Model Disclosures in Corporate Reports

    Directory of Open Access Journals (Sweden)

    Jan Michalak

    2017-01-01

Purpose: In this paper, we investigate the development, the current state, and the potential of business model disclosures to illustrate where, why and how organizations might want to disclose their business models to their stakeholders. The description of the business model may be relevant to stakeholders if it helps them to comprehend the company ‘story’ and increases understanding of other provided data (i.e. financial statements, risk exposure, sustainability of operations). It can also aid stakeholders in the assessment of the sustainability of business models and the whole company. To realize these goals, business model descriptions should fulfil the requirements of users suggested by various guidelines. Design/Methodology/Approach: First, we review and analyse literature on business model disclosure and some of its antecedents, including voluntary disclosure of intellectual capital. We also discuss business model reporting incentives from the viewpoint of shareholders, stakeholders and legitimacy theory. Second, we compare and discuss reporting guidelines on strategic reports, intellectual capital reports, and integrated reports through the lens of their requirements for business model disclosure and the consequences of their use for corporate report users. Third, we present, analyse and compare examples of good corporate practices in business model reporting. Findings: In the examined reporting guidelines, we find similarities, mostly structural but also qualitative, in the attributes required of the presented information: materiality, completeness, connectivity, future orientation and conciseness. We also identify important differences between their frameworks concerning the target audience of the reports, business model definitions and business model disclosure requirements. Discontinuation of intellectual capital reporting conforming to DATI guidelines provides important warnings for the proponents of voluntary disclosure - especially for

  9. Continuous assessment and matriculation examination marks – An empirical examination

    Directory of Open Access Journals (Sweden)

    Servaas van der Berg

    2015-12-01

    This study compares CASS data to the externally assessed matric exam marks for a number of subjects. There are two signalling dimensions to inaccurate assessments: (i) inflated CASS marks can give students a false sense of security and lead to diminished exam effort; (ii) a weak correlation between CASS and the exam marks could mean poor signalling in another dimension: relatively good students may get relatively low CASS marks. Such low correlations indicate poor assessment reliability, as the examination and continuous assessment should both be testing mastery of the same national curriculum. The paper analyses the extent of each of these dimensions of weak signalling in South African schools and draws disturbing conclusions for a large part of the school system.
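The signalling argument above rests on the correlation between continuous assessment (CASS) marks and exam marks. As a hedged illustration (the marks below are made up for the sketch, not the study's data), the Pearson correlation the argument relies on can be computed directly:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient between two mark series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical marks for six students: school-based CASS vs. external exam.
# Note the CASS marks are uniformly higher (inflation, signalling problem i),
# while the ranking is largely preserved (high r, so problem ii is absent here).
cass = [78, 65, 82, 55, 70, 60]
exam = [52, 40, 61, 30, 48, 35]

r = pearson_r(cass, exam)
print(round(r, 3))
```

A low r across a school would indicate the second signalling failure the paper describes: good students receiving relatively poor CASS marks.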

  10. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    Directory of Open Access Journals (Sweden)

    Sung-Chien Lin

    2014-07-01

    Full Text Available In this study, we used topic modeling to uncover the possible structure of research topics in the field of Informetrics, to explore the distribution of the topics over the years, and to compare the core journals. In order to infer the structure of the topics in the field, data on the papers published in the Journal of Informetrics and Scientometrics during 2007 to 2013 were retrieved from the Web of Science database as input to the topic modeling procedure. The results of this study show that when the number of topics was set to 10, the topic model had the smallest perplexity. Although the data scope and analysis methods differ from previous studies, the topics generated in this study are consistent with results produced by expert analyses. Empirical case studies and measurements of bibliometric indicators were considered important in every year of the analytic period, and the field was increasingly stable. Both core journals broadly paid attention to all of the topics in the field of Informetrics: the Journal of Informetrics put particular emphasis on the construction and applications of bibliometric indicators, while Scientometrics focused on the evaluation and the factors of productivity of countries, institutions, domains, and journals.
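The study selects the number of topics by minimizing held-out perplexity. As a hedged sketch (toy log-likelihoods, not the study's corpus or model), the standard computation is perplexity = exp(-log-likelihood / token count), with the candidate topic number giving the smallest value chosen:

```python
import math

def perplexity(total_log_likelihood, n_tokens):
    """Perplexity of a topic model on held-out text: the exponential
    of the negative average per-token log-likelihood."""
    return math.exp(-total_log_likelihood / n_tokens)

# Hypothetical held-out log-likelihoods for models fitted with k topics
# (illustrative numbers only).
candidates = {5: -6500.0, 10: -6200.0, 15: -6300.0, 20: -6400.0}
n_tokens = 1000

scores = {k: perplexity(ll, n_tokens) for k, ll in candidates.items()}
best_k = min(scores, key=scores.get)
print(best_k)  # prints 10: the topic count with the smallest perplexity
```

In practice the log-likelihoods would come from a fitted LDA implementation evaluated on held-out documents; the selection rule itself is what the abstract describes.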

  11. Using plant growth modeling to analyse C source-sink relations under drought: inter- and intra-specific comparison

    Directory of Open Access Journals (Sweden)

    Benoit ePallas

    2013-11-01

    Full Text Available The ability to assimilate C and allocate non-structural carbohydrates (NSC) to the most appropriate organs is crucial to maximize plant ecological or agronomic performance. Such C source and sink activities are differentially affected by environmental constraints. Under drought, plant growth is generally more sink- than source-limited, as organ expansion or appearance rate is affected earlier and more strongly than C assimilation. This favors plant survival and recovery but not always agronomic performance, as NSC are stored rather than used for growth due to a modified metabolism in source and sink leaves. Such interactions between plant C and water balance are complex, and plant modeling can help analyse their impact on plant phenotype. This paper addresses the impact of trade-offs between C sink and source activities on plant production under drought, combining experimental and modeling approaches. Two contrasting monocotyledonous species (rice, oil palm) were studied. Experimentally, the sink limitation of plant growth under moderate drought was confirmed, as well as the modifications in NSC metabolism in source and sink organs. Under severe stress, when the C source became limiting, plant NSC concentration decreased. Two plant models dedicated to oil palm and rice morphogenesis were used to perform a sensitivity analysis and further explore how to optimize C sink and source drought sensitivity to maximize plant growth. Modeling results highlighted that optimal drought sensitivity depends both on drought type and on species, and that modeling is well suited to analysing such complex processes. Further modeling needs, and more generally the challenge of using models to support complex trait breeding, are discussed.

  12. Analyses of containment structures with corrosion damage

    International Nuclear Information System (INIS)

    Cherry, J.L.

    1997-01-01

    Corrosion damage that has been found in a number of nuclear power plant containment structures can degrade the pressure capacity of the vessel. This has prompted concerns regarding the capacity of corroded containments to withstand accident loadings. To address these concerns, finite element analyses have been performed for a typical PWR Ice Condenser containment structure. Using ABAQUS, the pressure capacity was calculated for a typical vessel with no corrosion damage. Multiple analyses were then performed with the location of the corrosion and the amount of corrosion varied in each analysis. Using a strain-based failure criterion, a "lower bound", "best estimate", and "upper bound" failure level was predicted for each case. These limits were established by: determining the amount of variability that exists in material properties of typical containments, estimating the amount of uncertainty associated with the level of modeling detail and modeling assumptions, and estimating the effect of corrosion on the material properties.

  13. The Influence of Study-Level Inference Models and Study Set Size on Coordinate-Based fMRI Meta-Analyses

    Directory of Open Access Journals (Sweden)

    Han Bossier

    2018-01-01

    Full Text Available Given the increasing number of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima, possibly with the associated effect sizes, to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome of a coordinate-based meta-analysis. More specifically, we consider the influence of the chosen group-level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE), which only uses peak locations, versus fixed effects and random effects meta-analysis, which take into account both peak location and height], and the number of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme on a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combining these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random effects meta-analysis. Moreover, the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. However, it requires more studies than other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results.
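The random effects meta-analysis discussed above pools study-level effect sizes while modeling between-study variance. A minimal DerSimonian-Laird sketch of that model class (a generic textbook method with toy numbers, not the authors' pipeline):

```python
import math

def random_effects_meta(effects, variances):
    """DerSimonian-Laird random-effects pooling of study effect sizes.
    Returns (pooled effect, its standard error, between-study variance tau^2)."""
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))
    return pooled, se, tau2

# Hypothetical effect sizes and within-study variances from five studies.
effects = [0.40, 0.55, 0.30, 0.65, 0.50]
variances = [0.04, 0.05, 0.03, 0.06, 0.04]
pooled, se, tau2 = random_effects_meta(effects, variances)
print(round(pooled, 3), round(se, 3), round(tau2, 4))
```

The key difference from a fixed effects analysis is the tau² term: when studies disagree more than their within-study variances explain, the pooled estimate is less certain.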

  14. Extending and Applying Spartan to Perform Temporal Sensitivity Analyses for Predicting Changes in Influential Biological Pathways in Computational Models.

    Science.gov (United States)

    Alden, Kieran; Timmis, Jon; Andrews, Paul S; Veiga-Fernandes, Henrique; Coles, Mark

    2017-01-01

    Through integrating real-time imaging, computational modelling, and statistical analysis approaches, previous work has suggested that the induction of and response to cell adhesion factors is the key initiating pathway in early lymphoid tissue development, in contrast to the previously accepted view that the process is triggered by chemokine-mediated cell recruitment. These model-derived hypotheses were developed using spartan, an open-source sensitivity analysis toolkit designed to establish and understand the relationship between a computational model and the biological system that the model captures. Here, we extend the functionality available in spartan to permit the production of statistical analyses that contrast the behavior exhibited by a computational model at various simulated time-points, enabling a temporal analysis that could suggest whether the influence of biological mechanisms changes over time. We exemplify this extended functionality by using the computational model of lymphoid tissue development as a time-lapse tool. By generating results at twelve-hour intervals, we show how the extensions to spartan have been used to suggest that lymphoid tissue development could be biphasic, and to predict the time-point when a switch in the influence of biological mechanisms might occur.

  15. An analysis of the policy coverage and examination of ...

    African Journals Online (AJOL)

    ... topics in subjects such as Life Sciences, Physical Sciences, Life Orientation, ... The aim of the research reported here was to investigate the coverage and ... In analysing the coverage and examination of environmental-impact topics, ...

  16. Problems of Modelling Toxic Compounds Emitted by a Marine Internal Combustion Engine in Unsteady States

    Directory of Open Access Journals (Sweden)

    Rudnicki Jacek

    2015-01-01

    Full Text Available Contemporary engine tests are performed based on the theory of experiment. The available versions of programmes used for analysing experimental data make frequent use of the multiple regression model, which enables examining effects and interactions between input model parameters and a single output variable. The use of multi-equation models provides more freedom in analysing the measured results, as those models enable simultaneous analysis of effects and interactions between many output variables. They can also be used as a tool in preparing experimental material for other advanced diagnostic tools, such as models making use of neural networks which, when properly prepared, also enable analysing measurement results recorded during dynamic processes.

  17. A response-modeling alternative to surrogate models for support in computational analyses

    International Nuclear Information System (INIS)

    Rutherford, Brian

    2006-01-01

    Often, the objectives in a computational analysis involve characterization of system performance based on some function of the computed response. In general, this characterization includes (at least) an estimate or prediction for some performance measure and an estimate of the associated uncertainty. Surrogate models can be used to approximate the response in regions where simulations were not performed. For most surrogate modeling approaches, however, (1) estimates are based on smoothing of available data and (2) uncertainty in the response is specified in a point-wise (in the input space) fashion. These aspects of the surrogate model construction might limit their capabilities. One alternative is to construct a probability measure, G(r), for the computer response, r, based on available data. This 'response-modeling' approach will permit probability estimation for an arbitrary event, E(r), based on the computer response. In this general setting, event probabilities can be computed as prob(E) = ∫ I(E(r)) dG(r), where I is the indicator function. Furthermore, one can use G(r) to calculate an induced distribution on a performance measure, pm. For prediction problems where the performance measure is a scalar, its distribution F_pm is determined by F_pm(z) = ∫ I(pm(r) ≤ z) dG(r). We introduce response models for scalar computer output and then generalize the approach to more complicated responses that utilize multiple response models.
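Both integrals above are expectations of indicator functions under G and can be approximated by Monte Carlo sampling. A minimal sketch, with an assumed toy response distribution standing in for the fitted measure G(r):

```python
import random

random.seed(1)

# Toy stand-in for G(r): 100,000 samples of a scalar computer response r.
# (A normal distribution is an illustrative assumption, not the paper's G.)
samples = [random.gauss(10.0, 2.0) for _ in range(100_000)]

def event_prob(samples, event):
    """prob(E) = integral of I(E(r)) dG(r), estimated as the
    sample average of the indicator function."""
    return sum(1 for r in samples if event(r)) / len(samples)

def perf_cdf(samples, pm, z):
    """F_pm(z) = integral of I(pm(r) <= z) dG(r), the induced
    distribution of a scalar performance measure pm."""
    return sum(1 for r in samples if pm(r) <= z) / len(samples)

p = event_prob(samples, lambda r: r > 12.0)      # P(response exceeds a threshold)
f = perf_cdf(samples, lambda r: r ** 2, 100.0)   # CDF of pm(r) = r^2 at z = 100
print(round(p, 3), round(f, 3))
```

For the assumed N(10, 2) response, p should land near P(Z > 1) ≈ 0.159 and f near 0.5, illustrating how an arbitrary event probability and an induced performance distribution fall out of the same measure G.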

  18. A re-examination of thermodynamic modelling of U-Ru binary phase diagram

    Energy Technology Data Exchange (ETDEWEB)

    Wang, L.C.; Kaye, M.H., E-mail: matthew.kaye@uoit.ca [University of Ontario Institute of Technology, Oshawa, ON (Canada)

    2015-07-01

    Ruthenium (Ru) is one of the more abundant fission products (FPs) in both fast breeder reactors and thermal reactors. Post-irradiation examinations (PIE) show that both 'the white metallic phase' (Mo-Tc-Ru-Rh-Pd) and 'the other metallic phase' (U(Pd-Rh-Ru)3) are present in spent nuclear fuels. To describe this quaternary system, binary subsystems of uranium (U) with Pd, Rh, and Ru are necessary. Presently, only the U-Ru system has been thermodynamically described, and with some problems. As part of research on the U-Ru-Rh-Pd quaternary system, an improved, consistent thermodynamic model describing the U-Ru binary phase diagram has been obtained. (author)

  19. A Framework for Analysing Driver Interactions with Semi-Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    Siraj Shaikh

    2012-12-01

    Full Text Available Semi-autonomous vehicles increasingly serve critical functions in various settings, from mining to logistics to defence. A key characteristic of such systems is the presence of the human (driver) in the control loop. To ensure safety, the driver needs to be aware both of the autonomous aspects of the vehicle and of the automated features built into the vehicle to enable safer control. In this paper we propose a framework that combines empirical models describing human behaviour with models of the environment and the system. We then analyse, via model checking, the interaction between the models with respect to desired safety properties. The aim is to analyse the design for safe vehicle-driver interaction. We demonstrate the applicability of our approach using a case study involving semi-autonomous vehicles, where driver fatigue is a factor critical to a safe journey.

  20. Analysing the teleconnection systems affecting the climate of the Carpathian Basin

    Science.gov (United States)

    Kristóf, Erzsébet; Bartholy, Judit; Pongrácz, Rita

    2017-04-01

    Nowadays, the increase of the global average near-surface air temperature is unequivocal. Atmospheric low-frequency variabilities have substantial impacts on climate variables such as air temperature and precipitation. Therefore, assessing their effects is essential to improve global and regional climate model simulations for the 21st century. The North Atlantic Oscillation (NAO) is one of the best-known atmospheric teleconnection patterns affecting the Carpathian Basin in Central Europe. Besides NAO, we aim to analyse other interannual-to-decadal teleconnection patterns which might have significant impacts on the Carpathian Basin, namely the East Atlantic/West Russia pattern, the Scandinavian pattern, the Mediterranean Oscillation, and the North-Sea Caspian Pattern. For this purpose, the European Centre for Medium-Range Weather Forecasts (ECMWF) ERA-20C atmospheric reanalysis dataset and multivariate statistical methods are primarily used. The indices of each teleconnection pattern and their correlations with temperature and precipitation will be calculated for the period 1961-1990. On the basis of these data, the long-range (i.e. seasonal and/or annual scale) forecast ability is first evaluated. Then, we aim to calculate the same indices of the relevant teleconnection patterns for the historical and future simulations of Coupled Model Intercomparison Project Phase 5 (CMIP5) models and compare them against each other using statistical methods. Our ultimate goal is to examine all available CMIP5 models and evaluate their abilities to reproduce the selected teleconnection systems. Thus, climate predictions for the 21st century for the Carpathian Basin may be improved using the best-performing models among all CMIP5 model simulations.

  1. Nuclear power plants: Results of recent safety analyses

    International Nuclear Information System (INIS)

    Steinmetz, E.

    1987-01-01

    The contributions deal with the problems posed by low radiation doses, with the information currently available from analyses of the Chernobyl reactor accident, and with risk assessments in connection with nuclear power plant accidents. Other points of interest include latest results on fission product release from reactor core or reactor building, advanced atmospheric dispersion models for incident and accident analyses, reliability studies on safety systems, and assessment of fire hazard in nuclear installations. The various contributions are found as separate entries in the database. (DG) [de

  2. A comparison of linear tyre models for analysing shimmy

    NARCIS (Netherlands)

    Besselink, I.J.M.; Maas, J.W.L.H.; Nijmeijer, H.

    2011-01-01

    A comparison is made between three linear, dynamic tyre models using low speed step responses and yaw oscillation tests. The match with the measurements improves with increasing complexity of the tyre model. Application of the different tyre models to a two degree of freedom trailing arm suspension

  3. Methodologic quality of meta-analyses and systematic reviews on the Mediterranean diet and cardiovascular disease outcomes: a review.

    Science.gov (United States)

    Huedo-Medina, Tania B; Garcia, Marissa; Bihuniak, Jessica D; Kenny, Anne; Kerstetter, Jane

    2016-03-01

    Several systematic reviews/meta-analyses published within the past 10 y have examined the associations of Mediterranean-style diets (MedSDs) with cardiovascular disease (CVD) risk. However, these reviews have not been evaluated for satisfying contemporary methodologic quality standards. This study evaluated the quality of recent systematic reviews/meta-analyses on MedSD and CVD risk outcomes by using an established methodologic quality scale. The relation between review quality and the impact-per-publication value of the journal in which the article had been published was also evaluated. To assess compliance with current standards, we applied a modified version of the Assessment of Multiple Systematic Reviews (AMSTARMedSD) quality scale to systematic reviews/meta-analyses retrieved from electronic databases that had met our selection criteria: 1) used systematic or meta-analytic procedures to review the literature, 2) examined MedSD trials, and 3) had MedSD interventions independently or combined with other interventions. Reviews completely satisfied between 8% and 75% of the AMSTARMedSD items (mean ± SD: 31.2% ± 19.4%), with those published in higher-impact journals having greater quality scores. At a minimum, 60% of the 24 reviews did not disclose full search details or apply appropriate statistical methods to combine study findings. Only 5 of the reviews included participant or study characteristics in their analyses, and none evaluated MedSD diet characteristics. These data suggest that current meta-analyses/systematic reviews evaluating the effect of MedSD on CVD risk do not fully comply with contemporary methodologic quality standards. As a result, there are more research questions to answer to enhance our understanding of how MedSD affects CVD risk or how these effects may be modified by participant or MedSD characteristics. To clarify the associations between MedSD and CVD risk, future meta-analyses and systematic reviews should not only follow methodologic

  4. Hybrid Logical Analyses of the Ambient Calculus

    DEFF Research Database (Denmark)

    Bolander, Thomas; Hansen, Rene Rydhof

    2010-01-01

    In this paper, hybrid logic is used to formulate three control flow analyses for Mobile Ambients, a process calculus designed for modelling mobility. We show that hybrid logic is very well-suited to express the semantic structure of the ambient calculus and how features of hybrid logic can...

  5. Results of radiotherapy in craniopharyngiomas analysed by the linear quadratic model

    Energy Technology Data Exchange (ETDEWEB)

    Guerkaynak, M. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Oezyar, E. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Zorlu, F. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Akyol, F.H. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Lale Atahan, I. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey)

    1994-12-31

    In 23 craniopharyngioma patients treated by limited surgery and external radiotherapy, the results concerning local control were analysed by the linear quadratic formula. A biologically effective dose (BED) of 55 Gy, calculated with a time factor and an α/β value of 10 Gy, seemed to be adequate for local control. (orig.)
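In the linear quadratic model, the biologically effective dose without a time factor is BED = nd(1 + d/(α/β)) for n fractions of d Gy. A small sketch (illustrative fractionation schedule, not the study's; the abstract's 55 Gy figure also incorporates a time correction this sketch omits):

```python
def bed(n_fractions, dose_per_fraction, alpha_beta):
    """Biologically effective dose (linear quadratic model, no time factor):
    BED = n * d * (1 + d / (alpha/beta))."""
    return n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)

# Hypothetical conventional schedule: 30 fractions of 1.8 Gy, alpha/beta = 10 Gy.
result = bed(30, 1.8, 10.0)
print(round(result, 1))  # 63.7 (Gy): 54 Gy physical dose * (1 + 1.8/10)
```

With a repopulation time factor subtracted, as the abstract indicates, the effective BED would be lower than this uncorrected value.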

  6. Best Practice Examples of Circular Business Models

    DEFF Research Database (Denmark)

    Guldmann, Eva

    Best practice examples of circular business models are presented in this report. The purpose is to inform and inspire interested readers, in particular companies that aspire to examine the potentials of the circular economy. Circular business models in two different sectors are examined, namely the textile and clothing sector as well as the durable goods sector. In order to appreciate the notion of circular business models, the basics of the circular economy are outlined along with three frameworks for categorizing the various types of circular business models. The frameworks take their point of departure in resource loops, value bases and business model archetypes respectively, and they are applied for analysing and organizing the business models that are presented throughout the report. The investigations in the report show that circular business models are relevant to businesses because they hold

  7. Sensitivity analyses of seismic behavior of spent fuel dry cask storage systems

    International Nuclear Information System (INIS)

    Luk, V.K.; Spencer, B.W.; Shaukat, S.K.; Lam, I.P.; Dameron, R.A.

    2003-01-01

    Sandia National Laboratories is conducting a research project to develop a comprehensive methodology for evaluating the seismic behavior of spent fuel dry cask storage systems (DCSS) for the Office of Nuclear Regulatory Research of the U.S. Nuclear Regulatory Commission (NRC). A typical Independent Spent Fuel Storage Installation (ISFSI) consists of arrays of free-standing storage casks resting on concrete pads. In the safety review process of these cask systems, their seismically induced horizontal displacements and angular rotations must be quantified to determine whether casks will overturn or neighboring casks will collide during a seismic event. The ABAQUS/Explicit code is used to analyze three-dimensional coupled finite element models consisting of three submodels, which are a cylindrical cask or a rectangular module, a flexible concrete pad, and an underlying soil foundation. The coupled model includes two sets of contact surfaces between the submodels with prescribed coefficients of friction. The seismic event is described by one vertical and two horizontal components of statistically independent seismic acceleration time histories. A deconvolution procedure is used to adjust the amplitudes and frequency contents of these three-component reference surface motions before applying them simultaneously at the soil foundation base. The research project focused on examining the dynamic and nonlinear seismic behavior of the coupled model of free-standing DCSS including soil-structure interaction effects. This paper presents a subset of analysis results for a series of parametric analyses. Input variables in the parametric analyses include: designs of the cask/module, time histories of the seismic accelerations, coefficients of friction at the cask/pad interface, and material properties of the soil foundation. In subsequent research, the analysis results will be compiled and presented in nomograms to highlight the sensitivity of seismic response of DCSS to

  8. Assessment of Tools and Data for System-Level Dynamic Analyses

    International Nuclear Information System (INIS)

    Piet, Steven J.; Soelberg, Nick R.

    2011-01-01

    The only fuel cycle for which dynamic analyses and assessments are not needed is the null fuel cycle - no nuclear power. For every other concept, dynamic analyses are needed and can influence the relative desirability of options. Dynamic analyses show how a fuel cycle might work during transitions from today's partial fuel cycle to something more complete, the impact of technology deployments, the location of choke points, the key time lags, when benefits can manifest, and how well parts of fuel cycles work together. This report summarizes the readiness of existing Fuel Cycle Technology (FCT) tools and data for conducting dynamic analyses on the range of options. VISION is the primary dynamic analysis tool. Not only does it model mass flows, as do other dynamic system analysis models, but it allows users to explore various potential constraints. The only fuel cycles for which constraints are not important are those in concept advocates' PowerPoint presentations; in contrast, comparative analyses of fuel cycles must address what constraints exist and how they could impact performance. The most immediate tool need is extending VISION to the thorium/U233 fuel cycle. Depending on further clarification of waste management strategies in general and for specific fuel cycle candidates, waste management sub-models in VISION may need enhancement, e.g., more on 'co-flows' of non-fuel materials, constraints in waste streams, or automatic classification of waste streams on the basis of user-specified rules. VISION originally had an economic sub-model. The economic calculations were deemed unnecessary in later versions, so it was retired. Eventually, the program will need to restore and improve the economics sub-model of VISION to at least the cash flow stage and possibly to incorporating cost constraints and feedbacks. There are multiple sources of data that dynamic analyses can draw on. In this report, 'data' means experimental data, data from more detailed theoretical or empirical

  9. Assessment of Tools and Data for System-Level Dynamic Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Steven J. Piet; Nick R. Soelberg

    2011-06-01

    The only fuel cycle for which dynamic analyses and assessments are not needed is the null fuel cycle - no nuclear power. For every other concept, dynamic analyses are needed and can influence the relative desirability of options. Dynamic analyses show how a fuel cycle might work during transitions from today's partial fuel cycle to something more complete, the impact of technology deployments, the location of choke points, the key time lags, when benefits can manifest, and how well parts of fuel cycles work together. This report summarizes the readiness of existing Fuel Cycle Technology (FCT) tools and data for conducting dynamic analyses on the range of options. VISION is the primary dynamic analysis tool. Not only does it model mass flows, as do other dynamic system analysis models, but it allows users to explore various potential constraints. The only fuel cycles for which constraints are not important are those in concept advocates' PowerPoint presentations; in contrast, comparative analyses of fuel cycles must address what constraints exist and how they could impact performance. The most immediate tool need is extending VISION to the thorium/U233 fuel cycle. Depending on further clarification of waste management strategies in general and for specific fuel cycle candidates, waste management sub-models in VISION may need enhancement, e.g., more on 'co-flows' of non-fuel materials, constraints in waste streams, or automatic classification of waste streams on the basis of user-specified rules. VISION originally had an economic sub-model. The economic calculations were deemed unnecessary in later versions, so it was retired. Eventually, the program will need to restore and improve the economics sub-model of VISION to at least the cash flow stage and possibly to incorporating cost constraints and feedbacks. There are multiple sources of data that dynamic analyses can draw on. In this report, 'data' means experimental data, data from more detailed

  10. Diffusion-weighted and T2-weighted MR imaging for colorectal liver metastases detection in a rat model at 7 T: a comparative study using histological examination as reference

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Mathilde; Ronot, Maxime; Vilgrain, Valerie; Beers, Bernard E. van [University Paris Diderot, Sorbonne Paris Cite, INSERM UMR 773, University Hospitals Paris Nord Val de Seine, Beaujon, Assistance Publique- Hopitaux de Paris, Laboratory of Physiological and Molecular Imaging of the Abdomen (IPMA) and Department of Radiology, Clichy Cedex (France); Maggiori, Leon; Panis, Yves [University Paris Diderot, Sorbonne Paris Cite, INSERM UMR 773, University Hospitals Paris Nord Val de Seine, Beaujon, Assistance Publique-Hopitaux de Paris, Department of Colorectal Surgery, Clichy (France); Paradis, Valerie [University Paris Diderot, Sorbonne Paris Cite, INSERM UMR 773, University Hospitals Paris Nord Val de Seine, Beaujon, Assistance Publique-Hopitaux de Paris, Department of Pathology, Clichy (France)

    2013-08-15

    To compare diffusion-weighted (DW) and T2-weighted MR imaging in detecting colorectal liver metastases in a rat model, using histological examination as the reference method. Eighteen rats received four liver injections of colon cancer cells. MR examinations at 7 T included FSE T2-weighted imaging and SE DW MR imaging (b = 0, 20 and 150 s/mm²) and were analysed by two independent readers. Histological examination was performed on 0.4-mm slices. McNemar's test was used to compare the sensitivities and the Wilcoxon matched-pairs test to compare the average number of false-positives per rat. One hundred and sixty-six liver metastases were identified on histological examination. The sensitivity in detecting liver metastases was significantly higher on DW MR than on T2-weighted images (99/166 (60 %) for reader 1 and 92/166 (55 %) for reader 2 versus 77/166 (46 %), P ≤ 0.001), without an increase in false-positives per rat (P = 0.773 / P = 0.850). After stratification according to metastasis diameter, DW MR imaging had a significantly higher sensitivity than T2-weighted imaging only for metastases with a diameter (0.6-1.2 mm) similar to the spatial resolution of MR imaging in the current study. This MR study with histological correlations shows the higher sensitivity of DW relative to T2-weighted imaging at 7 T for detecting liver metastases, especially small ones. (orig.)
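McNemar's test, used above to compare paired sensitivities, depends only on the discordant counts: lesions detected by one sequence but not the other. A hedged sketch with hypothetical counts (not the study's actual tallies):

```python
import math

def mcnemar(b, c, corrected=True):
    """McNemar's test on paired detection outcomes.
    b: lesions seen only by method A; c: lesions seen only by method B.
    Returns (chi-square statistic with 1 df, two-sided p-value);
    'corrected' applies the standard continuity correction."""
    num = (abs(b - c) - 1) ** 2 if corrected else (b - c) ** 2
    stat = num / (b + c)
    # Survival function of chi-square with 1 df: P(X > x) = erfc(sqrt(x/2)).
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p

# Hypothetical discordant counts: 30 metastases seen only on DW images,
# 8 seen only on T2-weighted images.
stat, p = mcnemar(b=30, c=8)
print(round(stat, 2), round(p, 4))
```

Because concordant detections cancel out, the test isolates exactly the question of interest: whether one sequence systematically finds lesions the other misses.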

  11. The active learning hypothesis of the job-demand-control model: an experimental examination.

    Science.gov (United States)

    Häusser, Jan Alexander; Schulz-Hardt, Stefan; Mojzisch, Andreas

    2014-01-01

    The active learning hypothesis of the job-demand-control model [Karasek, R. A. 1979. "Job Demands, Job Decision Latitude, and Mental Strain: Implications for Job Redesign." Administrative Science Quarterly 24: 285-307] proposes positive effects of high job demands and high job control on performance. We conducted a 2 (demands: high vs. low) × 2 (control: high vs. low) experimental office workplace simulation to examine this hypothesis. Since performance during a work simulation is confounded by the boundaries of the demands and control manipulations (e.g. time limits), we used a post-test in which participants continued working at their task, but without any manipulation of demands and control. This post-test allowed for examining active learning (transfer) effects in an unconfounded fashion. Our results revealed that high demands had a positive effect on quantitative performance, without affecting task accuracy. In contrast, high control resulted in a speed-accuracy tradeoff: participants in the high control conditions worked more slowly but with greater accuracy than participants in the low control conditions.

  12. Parametric analyses of single-zone thorium-fueled molten salt reactor fuel cycle options

    International Nuclear Information System (INIS)

    Powers, J.J.; Worrall, A.; Gehin, J.C.; Harrison, T.J.; Sunny, E.E.

    2013-01-01

    Analyses of fuel cycle options based on thorium-fueled Molten Salt Reactors (MSRs) have been performed in support of fuel cycle screening and evaluation activities for the United States Department of Energy. The MSR options considered are based on thermal spectrum MSRs with 3 different separations levels: full recycling, limited recycling, and 'once-through' operation without active separations. A single-fluid, single-zone 2250 MWth (1000 MWe) MSR concept consisting of a fuel-bearing molten salt with graphite moderator and reflectors was used as the basis for this study. Radiation transport and isotopic depletion calculations were performed using SCALE 6.1 with ENDF/B-VII nuclear data. New methodology developed at Oak Ridge National Laboratory (ORNL) enables MSR analysis using SCALE, modeling material feed and removal by taking user-specified parameters and performing multiple SCALE/TRITON simulations to determine the resulting equilibrium operating conditions. Parametric analyses examined the sensitivity of the performance of a thorium MSR to variations in the separations efficiency for protactinium and fission products. Results indicate that self-sustained operation is possible with full or limited recycling but once-through operation would require an external neutron source. (authors)

  13. Age determination by teeth examination: a comparison between different morphologic and quantitative analyses.

    Science.gov (United States)

    Amariti, M L; Restori, M; De Ferrari, F; Paganelli, C; Faglia, R; Legnani, G

    1999-06-01

    Age determination by teeth examination is one of the main means of personal identification. Current studies have suggested different techniques for determining the age of a subject through the analysis of microscopic and macroscopic structural modifications of the tooth with ageing. The histological approach is among the most useful of the methodologies utilized for this purpose. It is still unclear which technique is best, as almost all authors recommend the approach they themselves have tested. In the present study, age determination by microscopic techniques has been based on the quantitative analysis of three parameters, all well recognized in the specialized literature: (1) dentinal tubule density/sclerosis; (2) tooth translucency; and (3) cementum thickness. After a description of the three methodologies (with automatic image processing of the dentinal sclerosis using a computer program developed by the authors), the results obtained on cases using the three different approaches are presented, and the merits and failings of each technique are identified with the aim of establishing the one offering the least degree of error in age determination.

  14. Global post-Kyoto scenario analyses at PSI

    Energy Technology Data Exchange (ETDEWEB)

    Kypreos, S [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1999-08-01

    Scenario analyses are described here using the Global MARKAL-Macro Trade (GMMT) model to study the economic implications of the Kyoto Protocol to the UN Convention on Climate Change. Some conclusions are derived in terms of efficient implementations of the post-Kyoto extensions of the Protocol. (author) 2 figs., 5 refs.

  15. Global post-Kyoto scenario analyses at PSI

    International Nuclear Information System (INIS)

    Kypreos, S.

    1999-01-01

    Scenario analyses are described here using the Global MARKAL-Macro Trade (GMMT) model to study the economic implications of the Kyoto Protocol to the UN Convention on Climate Change. Some conclusions are derived in terms of efficient implementations of the post-Kyoto extensions of the Protocol. (author) 2 figs., 5 refs.

  16. LOCO - a linearised model for analysing the onset of coolant oscillations and frequency response of boiling channels

    International Nuclear Information System (INIS)

    Romberg, T.M.

    1982-12-01

    Industrial plants such as heat exchangers and nuclear and conventional boilers are prone to coolant flow oscillations which may go undetected. In this report, a hydrodynamic model is formulated in which the one-dimensional, non-linear partial differential equations for the conservation of mass, energy and momentum are perturbed with respect to time, linearised, and Laplace-transformed into the s-domain for frequency response analysis. A computer program has been developed to numerically integrate the resulting non-linear ordinary differential equations by finite difference methods. A sample problem demonstrates how the computer code is used to analyse the frequency response and flow stability characteristics of a heated channel.
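
    The approach described above, perturb, linearise, and Laplace-transform into the s-domain, reduces frequency response analysis to evaluating a transfer function along s = jω. A minimal sketch, assuming a generic single-pole transfer function G(s) = K/(τs + 1) rather than the actual LOCO channel model:

    ```python
    import math

    # Frequency response of a linearised single-pole model G(s) = K / (tau*s + 1).
    # K and tau below are illustrative values, not LOCO parameters.
    def freq_response(K: float, tau: float, omega: float) -> tuple[float, float]:
        """Return (gain, phase in degrees) of G evaluated at s = j*omega."""
        g = K / (tau * omega * 1j + 1)  # substitute s = j*omega
        return abs(g), math.degrees(math.atan2(g.imag, g.real))

    gain, phase = freq_response(K=2.0, tau=0.5, omega=2.0)  # omega = 1/tau
    print(round(gain, 3), round(phase, 1))  # 1.414 -45.0
    ```

    At the corner frequency ω = 1/τ the gain falls to K/√2 and the phase lags 45°: the kind of frequency-response characteristic the code described in the abstract computes for a heated channel.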

  17. Indian Point 2 steam generator tube rupture analyses

    International Nuclear Information System (INIS)

    Dayan, A.

    1985-01-01

    Analyses were conducted with RETRAN-02 to study the consequences of steam generator tube rupture (SGTR) events. The Indian Point Unit 2 power plant (IP2, a PWR) was modeled as two asymmetric loops consisting of 27 volumes and 37 junctions. The break section was modeled once, conservatively, as a 150% flow area opening at the wall of the steam generator cold leg plenum, and once as a 200% double-ended tube break. Results revealed a 60% overprediction of break flow rates by the traditional conservative model. Two SGTR transients were studied, one with a low-pressure reactor trip and one with an earlier reactor trip via over-temperature ΔT; the former is more typical of a plant with a low reactor average temperature such as IP2. Transient analyses for a single tube break event over 500 seconds indicated continued primary subcooling and no need for steam line pressure relief. In addition, SGTR transients with reactor trip occurring while the pressurizer still contains water were found to favorably reduce depressurization rates. Comparison of the conservative results with independent LOFTRAN predictions showed good agreement.

  18. Analysing Models as a Knowledge Technology in Transport Planning

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik

    2011-01-01

    Models belong to a wider family of knowledge technologies applied in the transport area. Models sometimes share with other such technologies the fate of not being used as intended, or not at all. The result may be ill-conceived plans as well as wasted resources. Frequently, the blame... ...critical analytic literature on knowledge utilization and policy influence. A simple scheme based in this literature is drawn up to provide a framework for discussing the interface between urban transport planning and model use. A successful example of model use in Stockholm, Sweden is used as a heuristic device to illuminate how such an analytic scheme may allow patterns of insight about the use, influence and role of models in planning to emerge. The main contribution of the paper is to demonstrate that concepts and terminologies from knowledge use literature can provide interpretations of significance...

  19. Modeling and analysing storage systems in agricultural biomass supply chain for cellulosic ethanol production

    International Nuclear Information System (INIS)

    Ebadian, Mahmood; Sowlati, Taraneh; Sokhansanj, Shahab; Townley-Smith, Lawrence; Stumborg, Mark

    2013-01-01

    Highlights: ► Studied the agricultural biomass supply chain for cellulosic ethanol production. ► Evaluated the impact of storage systems on different supply chain actors. ► Developed a combined simulation/optimization model to evaluate storage systems. ► Compared two satellite storage systems with roadside storage in terms of costs and emitted CO2. ► SS would lead to a more cost-efficient supply chain compared to roadside storage. -- Abstract: In this paper, a combined simulation/optimization model is developed to better understand and evaluate the impact of the storage systems on the costs incurred by each actor in the agricultural biomass supply chain, including farmers, hauling contractors and the cellulosic ethanol plant. The optimization model prescribes the optimum number and location of farms and storages. It also determines the supply radius, the number of farms required to secure the annual supply of biomass and the assignment of farms to storage locations. Given the specific design of the supply chain determined by the optimization model, the simulation model determines the number of required machines for each operation, their daily working schedule and utilization rates, along with the capacities of storages. To evaluate the impact of the storage systems on the delivered costs, three storage systems are modeled and compared: a roadside storage (RS) system and two satellite storage (SS) systems, namely SS with fixed hauling distance (SF) and SS with variable hauling distance (SV). In all storage systems, it is assumed that the loading equipment is dedicated to storage locations. The results obtained from a real case study provide detailed cost figures for each storage system, since the developed model analyses the supply chain on an hourly basis and considers the time-dependence and stochasticity of the supply chain. Comparison of the storage systems shows SV would outperform SF and RS, reducing the total delivered cost by 8% and 6%, respectively

  20. Examining Factors Predicting Students’ Digital Competence

    Directory of Open Access Journals (Sweden)

    Ove Edvard Hatlevik

    2015-02-01

    The purpose of this study was to examine factors predicting lower secondary school students' digital competence and to explore differences between students in digital competence. Results from a digital competence test and survey in lower secondary school are presented. It is important to investigate what characterizes students' digital competence. A sample of 852 ninth-grade Norwegian students from 38 schools participated in the study. The students answered a 26-item multiple-choice digital competence test and a self-report questionnaire about family background, motivation, and previous grades. Structural equation modeling was used to test a model of the hypothesised relationships between family background, mastery orientation, previous achievements, and digital competence. The results indicate variation in digital competence among the ninth-graders. Further, analyses showed that students' conditions at home, i.e., language integration and cultural capital, together with mastery orientation and academic achievements, predict students' digital competence. This study indicates that there is evidence of digital diversity between lower secondary students. The development of digital competence among students does not appear to happen automatically: students' family background and school performance are the most important factors. Therefore, as this study shows, it is necessary to further investigate how schools can identify students' levels of competence and develop plans and actions to help equalize differences.

  1. Aerosol penetration of leak pathways : an examination of the available data and models.

    Energy Technology Data Exchange (ETDEWEB)

    Powers, Dana Auburn

    2009-04-01

    Data and models of aerosol particle deposition in leak pathways are described. Pathways considered include capillaries, orifices, slots and cracks in concrete. The Morewitz-Vaughan criterion for aerosol plugging of leak pathways is shown to be applicable only to a limited range of particle settling velocities and Stokes numbers. More useful are sampling efficiency criteria defined by Davies and by Liu and Agarwal. Deposition of particles can be limited by bounce from surfaces defining leak pathways and by resuspension of particles deposited on these surfaces. A model of the probability of particle bounce is described. Resuspension of deposited particles can be triggered by changes in flow conditions, particle impact on deposits and by shock or vibration of the surfaces. This examination was performed as part of the review of the AP1000 Standard Combined License Technical Report, APP-GW-GLN-12, Revision 0, 'Offsite and Control Room Dose Changes' (TR-112) in support of the USNRC AP1000 Standard Combined License Pre-Application Review.

  2. A simulation model for analysing brain structure deformations

    Energy Technology Data Exchange (ETDEWEB)

    Bona, Sergio Di [Institute for Information Science and Technologies, Italian National Research Council (ISTI-8211-CNR), Via G Moruzzi, 1-56124 Pisa (Italy); Lutzemberger, Ludovico [Department of Neuroscience, Institute of Neurosurgery, University of Pisa, Via Roma, 67-56100 Pisa (Italy); Salvetti, Ovidio [Institute for Information Science and Technologies, Italian National Research Council (ISTI-8211-CNR), Via G Moruzzi, 1-56124 Pisa (Italy)

    2003-12-21

    Recent developments of medical software applications, from the simulation to the planning of surgical operations, have revealed the need for modelling human tissues and organs not only from a geometric point of view but also from a physical one, i.e. soft tissues, rigid body, viscoelasticity, etc. This has given rise to the term 'deformable objects', which refers to objects with a morphology and a physical and mechanical behaviour of their own that reflect their natural properties. In this paper, we propose a model, based upon physical laws, suitable for the realistic manipulation of geometric reconstructions of volumetric data taken from MR and CT scans. In particular, a physically based model of the brain is presented that is able to simulate the evolution of pathological intra-cranial phenomena of different natures, such as haemorrhages, neoplasms, haematomas, etc, and to describe the consequences caused by their volume expansions and the influences they have on the anatomical and neuro-functional structures of the brain.

  3. ParticipACTION: Awareness of the participACTION campaign among Canadian adults - Examining the knowledge gap hypothesis and a hierarchy-of-effects model

    Directory of Open Access Journals (Sweden)

    Faulkner Guy EJ

    2009-12-01

    Abstract Background ParticipACTION was a pervasive communication campaign that promoted physical activity in the Canadian population for three decades. According to McGuire's hierarchy-of-effects model (HOEM), this campaign should influence physical activity through intermediate mediators such as beliefs and intention. Also, when such media campaigns occur, knowledge gaps often develop within the population about the messages being conveyed. The purposes of this study were to (a) determine the current awareness of ParticipACTION campaigns among Canadians; (b) confirm if awareness of the ParticipACTION initiative varied as a function of levels of education and household income; and (c) examine whether awareness of ParticipACTION was associated with physical activity related beliefs, intentions, and leisure-time physical activity (LTPA) as suggested by the HOEM. Specifically, we tested a model including awareness of ParticipACTION (unprompted, prompted), outcome expectations, self-efficacy, intention, and physical activity status. Methods A population-based survey was conducted on 4,650 Canadians over a period of 6 months from August 2007 to February 2008 (response rate = 49%). The survey consisted of a set of additional questions on the 2007 Physical Activity Monitor (PAM). Our module on the PAM included questions related to awareness and knowledge of ParticipACTION. Weighted logistic models were constructed to test the knowledge gap hypotheses and to examine whether awareness was associated with physical activity related beliefs (i.e., outcome expectations, self-efficacy), intention, and LTPA. All analyses included those respondents who were 20 years of age and older in 2007/2008 (N = 4424). Results Approximately 8% of Canadians were still aware of ParticipACTION unprompted and 82% were aware when prompted. Both education and income were significant correlates of awareness among Canadians. The odds of people being aware of ParticipACTION were

  4. ParticipACTION: awareness of the participACTION campaign among Canadian adults--examining the knowledge gap hypothesis and a hierarchy-of-effects model.

    Science.gov (United States)

    Spence, John C; Brawley, Lawrence R; Craig, Cora Lynn; Plotnikoff, Ronald C; Tremblay, Mark S; Bauman, Adrian; Faulkner, Guy Ej; Chad, Karen; Clark, Marianne I

    2009-12-09

    ParticipACTION was a pervasive communication campaign that promoted physical activity in the Canadian population for three decades. According to McGuire's hierarchy-of-effects model (HOEM), this campaign should influence physical activity through intermediate mediators such as beliefs and intention. Also, when such media campaigns occur, knowledge gaps often develop within the population about the messages being conveyed. The purposes of this study were to (a) determine the current awareness of ParticipACTION campaigns among Canadians; (b) confirm if awareness of the ParticipACTION initiative varied as a function of levels of education and household income; and, (c) to examine whether awareness of ParticipACTION was associated with physical activity related beliefs, intentions, and leisure-time physical activity (LTPA) as suggested by the HOEM. Specifically, we tested a model including awareness of ParticipACTION (unprompted, prompted), outcome expectations, self-efficacy, intention, and physical activity status. A population-based survey was conducted on 4,650 Canadians over a period of 6 months from August, 2007 to February, 2008 (response rate = 49%). The survey consisted of a set of additional questions on the 2007 Physical Activity Monitor (PAM). Our module on the PAM included questions related to awareness and knowledge of ParticipACTION. Weighted logistic models were constructed to test the knowledge gap hypotheses and to examine whether awareness was associated with physical activity related beliefs (i.e., outcome expectations, self-efficacy), intention, and LTPA. All analyses included those respondents who were 20 years of age and older in 2007/2008 (N = 4424). Approximately 8% of Canadians were still aware of ParticipACTION unprompted and 82% were aware when prompted. Both education and income were significant correlates of awareness among Canadians. The odds of people being aware of ParticipACTION were greater if they were more educated and reported

  5. Examining a conceptual model of parental nurturance, parenting practices and physical activity among 5–6 year olds

    Science.gov (United States)

    Sebire, Simon J.; Jago, Russell; Wood, Lesley; Thompson, Janice L.; Zahra, Jezmond; Lawlor, Deborah A.

    2016-01-01

    Rationale Parenting is an often-studied correlate of children's physical activity, however there is little research examining the associations between parenting styles, practices and the physical activity of younger children. Objective This study aimed to investigate whether physical activity-based parenting practices mediate the association between parenting styles and 5–6 year-old children's objectively-assessed physical activity. Methods 770 parents self-reported parenting style (nurturance and control) and physical activity-based parenting practices (logistic and modeling support). Their 5–6 year old child wore an accelerometer for five days to measure moderate-to-vigorous physical activity (MVPA). Linear regression was used to examine direct and indirect (mediation) associations. Data were collected in the United Kingdom in 2012/13 and analyzed in 2014. Results Parent nurturance was positively associated with provision of modeling (adjusted unstandardized coefficient, β = 0.11; 95% CI = 0.02, 0.21) and logistic support (β = 0.14; 0.07, 0.21). Modeling support was associated with greater child MVPA (β = 2.41; 0.23, 4.60) and a small indirect path from parent nurturance to child's MVPA was identified (β = 0.27; 0.04, 0.70). Conclusions Physical activity-based parenting practices are more strongly associated with 5–6 year old children's MVPA than parenting styles. Further research examining conceptual models of parenting is needed to understand in more depth the possible antecedents to adaptive parenting practices beyond parenting styles. PMID:26647364
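
    The indirect path reported above is a product-of-coefficients mediation estimate: the coefficient of nurturance on modeling support (a) multiplied by the coefficient of modeling support on MVPA controlling for nurturance (b). A minimal sketch on simulated data (variable names and coefficients are illustrative, not the study's estimates):

    ```python
    import numpy as np

    # Product-of-coefficients mediation sketch on simulated data.
    rng = np.random.default_rng(0)
    n = 1000
    nurturance = rng.normal(size=n)                   # X: parenting style
    modeling = 0.4 * nurturance + rng.normal(size=n)  # M: modeling support
    mvpa = 2.0 * modeling + rng.normal(size=n)        # Y: child MVPA

    def first_coef(y, X):
        """OLS fit of y ~ X (with intercept); returns the coefficient of X[:, 0]."""
        A = np.column_stack([X, np.ones(len(y))])
        return np.linalg.lstsq(A, y, rcond=None)[0][0]

    a = first_coef(modeling, nurturance[:, None])                  # X -> M
    b = first_coef(mvpa, np.column_stack([modeling, nurturance]))  # M -> Y given X
    indirect = a * b  # the a*b indirect path from X to Y through M
    print(round(indirect, 2))  # close to the true value 0.4 * 2.0 = 0.8
    ```

    In the paper's terms, a ≈ 0.11 and b ≈ 2.41 multiply to roughly the small indirect effect of 0.27 reported above.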

  6. Examining Pedestrian Injury Severity Using Alternative Disaggregate Models

    DEFF Research Database (Denmark)

    Abay, Kibrom Araya

    2013-01-01

    This paper investigates the injury severity of pedestrians considering detailed road user characteristics and alternative model specifications, using high-quality Danish road accident data. Such a detailed and alternative modeling approach helps to assess the sensitivity of empirical inferences to the choice of these models. The empirical analysis reveals that detailed road user characteristics, such as the crime history of drivers and the momentary activities of road users at the time of the accident, provide interesting insights in the injury severity analysis. Likewise, the alternative analytical specification of the models reveals that some of the conventionally employed fixed-parameter injury severity models could underestimate the effect of some important behavioral attributes of the accidents. For instance, the standard ordered logit model underestimated the marginal effects of some...

  7. Slow Steaming in Maritime Transportation: Fundamentals, Trade-offs, and Decision Models

    DEFF Research Database (Denmark)

    Psaraftis, Harilaos N.; Kontovas, Christos A.

    2015-01-01

    burned. The purpose of this chapter is to examine the practice of slow steaming from various angles. In that context, a taxonomy of models is presented, some fundamentals are outlined, the main trade-offs are analysed, and some decision models are presented. Some examples are finally presented so...

  8. AECL hot-cell facilities and post-irradiation examination services

    International Nuclear Information System (INIS)

    Schankula, M.H.; Plaice, E.L.; Woodworth, L.G.

    1998-04-01

    This paper presents an overview of the post-irradiation examination (PIE) services available at AECL's hot-cell facilities (HCF). The HCFs are used primarily to provide PIE support for operating CANDU power reactors in Canada and abroad, and for the examination of experimental fuel bundles and core components irradiated in research reactors at the Chalk River Laboratories (CRL) and off-shore. A variety of examinations and analyses are performed ranging from non-destructive visual and dimensional inspections to detailed optical and scanning electron microscopic examinations. Several hot cells are dedicated to mechanical property testing of structural materials and to determine the fitness-for-service of reactor core components. Facility upgrades and the development of innovative examination techniques continue to improve AECL's PIE capabilities. (author)

  9. Examining the utility of satellite-based wind sheltering estimates for lake hydrodynamic modeling

    Science.gov (United States)

    Van Den Hoek, Jamon; Read, Jordan S.; Winslow, Luke A.; Montesano, Paul; Markfort, Corey D.

    2015-01-01

    Satellite-based measurements of vegetation canopy structure have been in common use for the last decade but have never been used to estimate the canopy's impact on wind sheltering of individual lakes. Wind sheltering is caused by slower winds in the wake of topography and shoreline obstacles (e.g. forest canopy) and influences heat loss and the flux of wind-driven mixing energy into lakes, which control lake temperatures and indirectly structure lake ecosystem processes, including carbon cycling and thermal habitat partitioning. Lakeshore wind sheltering has often been parameterized by lake surface area, but such empirical relationships are based only on forested lakeshores and overlook the contributions of local land cover and terrain to wind sheltering. This study is the first to examine the utility of satellite imagery-derived broad-scale estimates of wind sheltering across a diversity of land covers. Using 30 m spatial resolution ASTER GDEM2 elevation data, the mean sheltering height, hs, being the combination of local topographic rise and canopy height above the lake surface, is calculated within 100 m-wide buffers surrounding 76,000 lakes in the U.S. state of Wisconsin. Uncertainty of GDEM2-derived hs is compared to SRTM-, high-resolution G-LiHT lidar-, and ICESat-derived estimates of hs; the respective influences of land cover type and buffer width on hs are examined; and the effect of including satellite-based hs on the accuracy of a statewide lake hydrodynamic model is discussed. Though GDEM2 hs uncertainty was comparable to or better than other satellite-based measures of hs, its higher spatial resolution and broader spatial coverage allowed more lakes to be included in modeling efforts. GDEM2 was shown to offer superior utility for estimating hs compared to other satellite-derived data, but was limited by its consistent underestimation of hs, inability to detect within-buffer hs variability, and differing accuracy across land cover types. Nonetheless
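
    The sheltering height hs described above is the rise of the surrounding surface (terrain plus canopy) above the lake level, averaged over the shoreline buffer. A toy sketch of that calculation (a four-cell transect with invented elevations, not GDEM2 data):

    ```python
    import numpy as np

    # Mean sheltering height h_s within a shoreline buffer (toy 1-D transect;
    # the real analysis averages 30 m raster cells in a 100 m-wide ring
    # around each lake).
    lake_level = 100.0                                # lake surface elevation (m)
    surface = np.array([112.0, 109.5, 105.0, 102.5])  # DEM + canopy top per buffer cell (m)
    h_s = float(np.mean(surface - lake_level))        # mean rise above the lake surface
    print(h_s)  # 7.25
    ```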

  10. PERFORMANCE MEASURES OF STUDENTS IN EXAMINATIONS: A STOCHASTIC APPROACH

    OpenAIRE

    Goutam Saha

    2013-01-01

    Data on Secondary and Higher Secondary examination (science stream) results from Tripura (North-East India) schools are analysed to measure the performance of students based on tests; performance measures of schools based on final results and continuous assessment processes are also obtained. The result variation in terms of grade points in the Secondary and Higher Secondary examinations is analysed using different sets of performance measures. The transition probabilities from one g...

  11. Bifactor Models Show a Superior Model Fit: Examination of the Factorial Validity of Parent-Reported and Self-Reported Symptoms of Attention-Deficit/Hyperactivity Disorders in Children and Adolescents.

    Science.gov (United States)

    Rodenacker, Klaas; Hautmann, Christopher; Görtz-Dorten, Anja; Döpfner, Manfred

    2016-01-01

    Various studies have demonstrated that bifactor models yield better solutions than models with correlated factors. However, the kind of bifactor model that is most appropriate is yet to be examined. The current study is the first to test bifactor models across the full age range (11-18 years) of adolescents using self-reports, and the first to test bifactor models with German subjects and German questionnaires. The study sample included children and adolescents aged between 6 and 18 years recruited from a German clinical sample (n = 1,081) and a German community sample (n = 642). To examine the factorial validity, we compared unidimensional, correlated-factors, higher-order and bifactor models, and further tested a modified incomplete bifactor model for measurement invariance. Bifactor models displayed superior model fit statistics compared to correlated-factors models or second-order models. However, a more parsimonious incomplete bifactor model with only 2 specific factors (inattention and impulsivity) showed a good model fit and a better factor structure than the other bifactor models. Scalar measurement invariance was established in most group comparisons. An incomplete bifactor model would suggest that the specific inattention and impulsivity factors represent entities separable from the general attention-deficit/hyperactivity disorder construct and might, therefore, give way to a new approach to subtyping of children over and above attention-deficit/hyperactivity disorder. © 2016 S. Karger AG, Basel.

  12. Application of a weighted spatial probability model in GIS to analyse landslides in Penang Island, Malaysia

    Directory of Open Access Journals (Sweden)

    Samy Ismail Elmahdy

    2016-01-01

    In the current study, Penang Island, one of the several mountainous areas in Malaysia that is often subjected to landslide hazard, was chosen for further investigation. A multi-criteria evaluation and spatial probability weighted approach with a model builder was applied to map and analyse landslides in Penang Island. A set of automated algorithms was used to construct new essential geological and morphometric thematic maps from remote sensing data. The maps were ranked using the weighted probability spatial model based on their contribution to the landslide hazard. Results obtained showed that sites at an elevation of 100–300 m, with steep slopes of 10°–37° and slope direction (aspect) in the E and SE directions, were areas of very high and high probability for landslide occurrence; the total areas were 21.393 km2 (11.84%) and 58.690 km2 (32.48%), respectively. The obtained map was verified by comparing variogram models of the mapped and occurred landslide locations and showed a strong correlation with the locations of occurred landslides, indicating that the proposed method can successfully predict landslide hazard. The method is time and cost effective and can be used as a reference by geological and geotechnical engineers.
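
    The ranking-and-weighting step described above is a weighted overlay: each ranked thematic raster is multiplied by its weight and the products are summed cell by cell. A toy sketch (2×2 rasters with invented ranks and weights, not the paper's calibrated values):

    ```python
    import numpy as np

    # Weighted spatial probability overlay on toy 2x2 rasters.
    # Each cell is ranked 1 (low hazard contribution) to 3 (high).
    elevation_rank = np.array([[3, 1], [2, 3]])
    slope_rank     = np.array([[3, 2], [1, 3]])
    aspect_rank    = np.array([[2, 2], [1, 3]])
    weights = {"elevation": 0.5, "slope": 0.3, "aspect": 0.2}  # sum to 1

    hazard = (weights["elevation"] * elevation_rank
              + weights["slope"] * slope_rank
              + weights["aspect"] * aspect_rank)
    print(hazard)  # cell (0, 0): 0.5*3 + 0.3*3 + 0.2*2 = 2.8 -> very high hazard
    ```

    Thresholding the resulting surface (e.g. into very high, high, and lower classes) yields the kind of hazard zonation map the paper reports.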

  13. Systematic Examination of Stardust Bulbous Track Wall Materials

    Science.gov (United States)

    Nakamura-Messenger, K.; Clemett, S. J.; Nguyen, A. N.; Berger, E. L.; Keller, L. P.; Messenger, S.

    2013-01-01

    Analyses of Comet Wild-2 samples returned by NASA's Stardust spacecraft have focused primarily on terminal particles (TPs) or well-preserved fine-grained materials along the track walls [1,2]. However much of the collected material was melted and mixed intimately with the aerogel by the hypervelocity impact [3,4]. We are performing systematic examinations of entire Stardust tracks to establish the mineralogy and origins of all comet Wild 2 components [7,8]. This report focuses on coordinated analyses of indigenous crystalline and amorphous/melt cometary materials along the aerogel track walls, their interaction with aerogel during collection and comparisons with their TPs.

  14. Preliminary analyses of AP600 using RELAP5

    International Nuclear Information System (INIS)

    Modro, S.M.; Beelman, R.J.; Fisher, J.E.

    1991-01-01

    This paper presents results of preliminary analyses of the proposed Westinghouse Electric Corporation AP600 design. AP600 is a two-loop, 600 MW(e) pressurized water reactor (PWR) arranged in a two-hot-leg, four-cold-leg nuclear steam supply system (NSSS) configuration. In contrast to the present generation of PWRs, it is equipped with passive emergency core coolant (ECC) systems. Also, the containment and the safety systems of the AP600 interact with the reactor coolant system and with each other in a more integral fashion than in present-day PWRs. The containment in this design is the ultimate heat sink for removal of decay heat to the environment. Idaho National Engineering Laboratory (INEL) has studied the applicability of the RELAP5 code to AP600 safety analysis and has developed a model of the AP600 for the Nuclear Regulatory Commission. The model incorporates integral modeling of the containment, NSSS and passive safety systems. The best available preliminary design data were used. Nodalization sensitivity studies were conducted to gain experience in modeling systems and conditions beyond the applicability of previously established RELAP5 modeling guidelines and experience. Exploratory analyses were then undertaken to investigate AP600 system response during postulated accident conditions. Four small break LOCA calculations and two large break LOCA calculations were conducted.

  15. Thermoelastic analyses of spent fuel repositories in bedded and dome salt. Technical memorandum report RSI-0054

    International Nuclear Information System (INIS)

    Callahan, G.D.; Ratigan, J.L.

    1978-01-01

    Global thermoelastic analyses of bedded and dome salt models showed a slight preference for the bedded salt model through the range of thermal loading conditions. Spent fuel thermal loadings should be less than 75 kW/acre in the repository pending more accurate material modeling. One should first limit the study to one or two spent fuel thermal loading analyses (i.e. 75 kW/acre and/or 50 kW/acre) up to a maximum time of approximately 2000 years. Parametric thermoelastic analyses could then be readily obtained to determine the influence of the thermomechanical properties. Recommendations for further study include parametric analyses, plasticity analyses, consideration of the material interfaces as joints, and possibly consideration of a global joint pattern (i.e. jointed at the same orientation everywhere) for the non-salt materials. Subsequently, the viscoelastic analyses could be performed.

  16. MCNPX, MONK, and ERANOS analyses of the YALINA Booster subcritical assembly

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, Alberto, E-mail: alby@anl.go [Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Gohar, Y.; Aliberti, G.; Cao, Y.; Smith, D.; Zhong, Z. [Argonne National Laboratory, 9700 S. Cass Avenue, Argonne, IL 60439 (United States); Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.; Serafimovich, I. [Joint Institute for Power and Nuclear Research - Sosny, National Academy of Sciences of Belarus, 99 Acad. Krasin Str., Minsk 220109 (Belarus)

    2011-05-15

    This paper compares the numerical results obtained from various nuclear codes and nuclear data libraries with the YALINA Booster subcritical assembly (Minsk, Belarus) experimental results. This subcritical assembly was constructed to study the physics and the operation of accelerator-driven subcritical systems (ADS) for transmuting the light water reactors (LWR) spent nuclear fuel. The YALINA Booster facility has been accurately modeled, with no material homogenization, by the Monte Carlo codes MCNPX (MCNP/MCB) and MONK. The MONK geometrical model matches that of MCNPX. The assembly has also been analyzed by the deterministic code ERANOS. In addition, the differences between the effective neutron multiplication factor and the source multiplication factors have been examined by alternative calculational methodologies. The analyses include the delayed neutron fraction, prompt neutron lifetime, generation time, neutron flux profiles, and spectra in various experimental channels. The accuracy of the numerical models has been enhanced by accounting for all material impurities and the actual density of the polyethylene material used in the assembly (the latter value was obtained by dividing the total weight of the polyethylene by its volume in the numerical model). There is good agreement between the results from MONK, MCNPX, and ERANOS. The ERANOS results show small differences relative to the other results because of material homogenization and the energy and angle discretizations. The MCNPX results match the experimental measurements of the 3He(n,p) reaction rates obtained with the californium neutron source.
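
    The distinction drawn above between the effective multiplication factor and the source multiplication factor rests on the standard point-model relation for a source-driven subcritical assembly: an external source emitting S neutrons per second sustains a total production of S/(1 - k_s) neutrons per second. A minimal sketch of that bookkeeping (the numbers are illustrative, not YALINA data):

    ```python
    # Point-model sketch of source multiplication in a subcritical assembly.
    # Illustrative only; not YALINA parameters.

    def neutron_population_ratio(k_s: float) -> float:
        """Total neutrons produced per source neutron, M = 1 / (1 - k_s)."""
        if not 0.0 <= k_s < 1.0:
            raise ValueError("source multiplication requires 0 <= k_s < 1")
        return 1.0 / (1.0 - k_s)

    def source_multiplication_factor(fission_neutrons: float,
                                     source_neutrons: float) -> float:
        """k_s as the fraction of all neutrons that were born in fission."""
        return fission_neutrons / (fission_neutrons + source_neutrons)

    # A subcritical core with k_s = 0.95 multiplies the source 20-fold:
    # per source neutron, 19 fission neutrons are produced in total.
    M = neutron_population_ratio(0.95)
    k_s = source_multiplication_factor(fission_neutrons=19.0, source_neutrons=1.0)
    print(M, k_s)  # 20.0 0.95
    ```

    In an actual assembly k_s differs from k_eff because the spatial and energy distribution of source neutrons is not the fundamental mode, which is precisely why the paper compares the two by alternative calculational methodologies.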

  17. MCNPX, MONK, and ERANOS analyses of the YALINA Booster subcritical assembly

    International Nuclear Information System (INIS)

    Talamo, Alberto; Gohar, Y.; Aliberti, G.; Cao, Y.; Smith, D.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.; Serafimovich, I.

    2011-01-01

    This paper compares the numerical results obtained from various nuclear codes and nuclear data libraries with the YALINA Booster subcritical assembly (Minsk, Belarus) experimental results. This subcritical assembly was constructed to study the physics and the operation of accelerator-driven subcritical systems (ADS) for transmuting the light water reactors (LWR) spent nuclear fuel. The YALINA Booster facility has been accurately modeled, with no material homogenization, by the Monte Carlo codes MCNPX (MCNP/MCB) and MONK. The MONK geometrical model matches that of MCNPX. The assembly has also been analyzed by the deterministic code ERANOS. In addition, the differences between the effective neutron multiplication factor and the source multiplication factors have been examined by alternative calculational methodologies. The analyses include the delayed neutron fraction, prompt neutron lifetime, generation time, neutron flux profiles, and spectra in various experimental channels. The accuracy of the numerical models has been enhanced by accounting for all material impurities and the actual density of the polyethylene material used in the assembly (the latter value was obtained by dividing the total weight of the polyethylene by its volume in the numerical model). There is good agreement between the results from MONK, MCNPX, and ERANOS. The ERANOS results show small differences relative to the other results because of material homogenization and the energy and angle discretizations. The MCNPX results match the experimental measurements of the 3He(n,p) reaction rates obtained with the californium neutron source.

  18. Density dependent forces and large basis structure models in the analyses of 12C(p,p') reactions at 135 MeV

    International Nuclear Information System (INIS)

    Bauhoff, W.; Collins, S.F.; Henderson, R.S.

    1983-01-01

    Differential cross-sections have been measured for the elastic and inelastic scattering of 135 MeV protons from 12C. The data from the transitions to 9 select states up to 18.3 MeV in excitation have been analysed using a distorted wave approximation with various microscopic model nuclear structure transition densities and free and density dependent two nucleon t-matrices. Clear signatures of the density dependence of the t-matrix are defined, and the utility of select transitions to test different attributes of that t-matrix when good nuclear structure models are used is established.

  19. The Hanford study: issues in analysing and interpreting data from occupational studies

    International Nuclear Information System (INIS)

    Gilbert, E.S.

    1987-01-01

    Updated analyses of workers at the Hanford Site provided no evidence of a correlation between radiation exposure and mortality from all cancers or mortality from leukemia. Potentially confounding factors were examined and, to the extent possible, taken into account in these analyses. Risk estimates for leukemia and for all cancers except leukemia were calculated and compared with those from other sources. For leukemia, consideration was given to modifying factors such as age at exposure and time from exposure. (author)

  20. Conducting qualitative research in mental health: Thematic and content analyses.

    Science.gov (United States)

    Crowe, Marie; Inder, Maree; Porter, Richard

    2015-07-01

    The objective of this paper is to describe two methods of qualitative analysis - thematic analysis and content analysis - and to examine their use in a mental health context. A description of the processes of thematic analysis and content analysis is provided. These processes are then illustrated by conducting two analyses of the same qualitative data. Transcripts of qualitative interviews are analysed using each method to illustrate these processes. The illustration of the processes highlights the different outcomes from the same set of data. Thematic and content analyses are qualitative methods that serve different research purposes. Thematic analysis provides an interpretation of participants' meanings, while content analysis is a direct representation of participants' responses. These methods provide two ways of understanding meanings and experiences and provide important knowledge in a mental health context. © The Royal Australian and New Zealand College of Psychiatrists 2015.

  1. An examination of mediators of the transfer of cognitive speed of processing training to everyday functional performance.

    Science.gov (United States)

    Edwards, Jerri D; Ruva, Christine L; O'Brien, Jennifer L; Haley, Christine B; Lister, Jennifer J

    2013-06-01

    The purpose of these analyses was to examine mediators of the transfer of cognitive speed of processing training to improved everyday functional performance (J. D. Edwards, V. G. Wadley, D. E. Vance, D. L. Roenker, & K. K. Ball, 2005, The impact of speed of processing training on cognitive and everyday performance. Aging & Mental Health, 9, 262-271). Cognitive speed of processing and visual attention (as measured by the Useful Field of View Test; UFOV) were examined as mediators of training transfer. Secondary data analyses were conducted from the Staying Keen in Later Life (SKILL) study, a randomized cohort study including 126 community-dwelling adults 63 to 87 years of age. In the SKILL study, participants were randomized to an active control group or cognitive speed of processing training (SOPT), a nonverbal, computerized intervention involving perceptual practice of visual tasks. Prior analyses found significant effects of training as measured by the UFOV and Timed Instrumental Activities of Daily Living (TIADL) Tests. Results from the present analyses indicate that speed of processing for a divided attention task significantly mediated the effect of SOPT on everyday performance (e.g., TIADL) in a multiple mediation model accounting for 91% of the variance. These findings suggest that everyday functional improvements found from SOPT are directly attributable to improved UFOV performance, speed of processing for divided attention in particular. Targeting divided attention in cognitive interventions may be important to positively affect everyday functioning among older adults. PsycINFO Database Record (c) 2013 APA, all rights reserved.
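
    The mediation logic tested above (training → processing speed → everyday performance) is commonly quantified with the product-of-coefficients approach: path a (treatment to mediator) times path b (mediator to outcome, controlling for treatment) gives the indirect effect. A minimal sketch on synthetic data, not the SKILL dataset; all coefficients here are invented for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 500
    treat = rng.integers(0, 2, n).astype(float)           # 0 = control, 1 = training
    mediator = 0.8 * treat + rng.normal(size=n)           # path a built in as 0.8
    outcome = 0.5 * mediator + rng.normal(size=n)         # path b built in as 0.5

    def ols(y, cols):
        """Least-squares coefficients with an intercept prepended."""
        X = np.column_stack([np.ones(len(y))] + list(cols))
        return np.linalg.lstsq(X, y, rcond=None)[0]

    a = ols(mediator, [treat])[1]                # treatment -> mediator
    b = ols(outcome, [mediator, treat])[1]       # mediator -> outcome | treatment
    c_prime = ols(outcome, [mediator, treat])[2] # residual direct effect
    indirect = a * b                             # should be near 0.8 * 0.5 = 0.4
    print(round(indirect, 2))
    ```

    Formal inference on the indirect effect usually adds bootstrap confidence intervals, which is what multiple mediation packages automate.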

  2. Integrate models of ultrasonics examination for NDT expertise

    International Nuclear Information System (INIS)

    Calmon, P.; Lhemery, A.; Lecoeur-Taibi, I.; Raillon, R.

    1996-01-01

    For several years, the French Atomic Energy Commission (CEA) has developed a system called CIVA for multiple-technique NDE data acquisition and processing. Modeling tools for ultrasonic non-destructive testing have been developed and implemented within this system, allowing direct comparison between measured and predicted results. These models are not only intended for laboratory use but must also be usable by ultrasonic operators without special training in simulation techniques. Therefore, emphasis has been placed on finding the best compromise between predictions that are as accurate as possible and the ease, simplicity and speed that are crucial requirements in the industrial context. This approach has led us to develop approximate models for the different phenomena involved in ultrasonic inspections: radiation, transmission through interfaces, propagation, scattering by defects and boundaries, reception, etc. Two main models have been implemented, covering the most commonly encountered NDT configurations. These two models are first described briefly. Then, two examples of their applications are shown. Based on the same underlying theories, specific modeling tools are proposed to industrial partners to answer special requirements. To illustrate this, an example is given of software used as a tool to support experts' interpretation during on-site French PWR vessel inspections. Other models can be implemented in CIVA when some assumptions made in the previous models, Champ-Sons and Mephisto, are not fulfilled, e.g., when less-conventional testing configurations are concerned. As an example, we briefly present a laboratory modeling study of echoes arising from cladded steel surfaces. (authors)

  3. Examining a "Household" Model of Residential Long-term Care in Nova Scotia

    Directory of Open Access Journals (Sweden)

    Janice Keefe

    2017-04-01

    Full Text Available In 2006, Nova Scotia began to implement its Continuing Care Strategy, which was grounded in a vision of providing client-centered care for continuing care clients, including residents of nursing homes. Considerable evidence pointed to the benefits of the “household” model of care, which led the province to adopt the smaller self-contained household model as a requirement for owners/operators seeking to build government-funded new and replacement nursing homes. The specific goals of the reform (the adoption of the household model) included increasing the proportion of single rooms, improving the home-likeness of the facility, and, more generally, providing high-quality care services. The reform was influenced by recognition of the need for change, rapid population aging in the province, and strong political will at a time when fiscal resources were available. To achieve the reform, the Nova Scotia Department of Health released two key documents (2007) to guide the design and operation of all new and replacement facilities procured using a request-for-proposal process: the Long Term Care Program Requirements and the Space and Design Requirements. Results from a research study examining resident quality of life suggest that, regardless of physical design or staffing approach, high resident quality of life can be experienced, while at the same time recognizing that the facilities with “self-contained household” design and expanded care staff roles uniquely supported relationships and home-likeness and positively affected resident quality of life.

  4. Multilevel models for multiple-baseline data: modeling across-participant variation in autocorrelation and residual variance.

    Science.gov (United States)

    Baek, Eun Kyeng; Ferron, John M

    2013-03-01

    Multilevel models (MLM) have been used as a method for analyzing multiple-baseline single-case data. However, some concerns can be raised because the models that have been used assume that the Level-1 error covariance matrix is the same for all participants. The purpose of this study was to extend the application of MLM of single-case data in order to accommodate across-participant variation in the Level-1 residual variance and autocorrelation. This more general model was then used in the analysis of single-case data sets to illustrate the method, to estimate the degree to which the autocorrelation and residual variances differed across participants, and to examine whether inferences about treatment effects were sensitive to whether or not the Level-1 error covariance matrix was allowed to vary across participants. The results from the analyses of five published studies showed that when the Level-1 error covariance matrix was allowed to vary across participants, some relatively large differences in autocorrelation estimates and error variance estimates emerged. The changes in modeling the variance structure did not change the conclusions about which fixed effects were statistically significant in most of the studies, but there was one exception. The fit indices did not consistently support selecting either the more complex covariance structure, which allowed the covariance parameters to vary across participants, or the simpler covariance structure. Given the uncertainty in model specification that may arise when modeling single-case data, researchers should consider conducting sensitivity analyses to examine the degree to which their conclusions are sensitive to modeling choices.
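
    The across-participant heterogeneity the study models can be made concrete by simulating multiple-baseline data in which each participant has their own Level-1 residual variance and autocorrelation. A minimal sketch with hypothetical parameters (the intervention points, effect size, variances and autocorrelations below are invented for illustration):

    ```python
    import numpy as np

    def simulate_participant(n_obs, phase_start, effect, sigma, rho, rng):
        """One participant's series: step change at the intervention point plus
        AR(1) Level-1 errors with participant-specific sigma and rho."""
        e = np.empty(n_obs)
        e[0] = rng.normal(scale=sigma / np.sqrt(1.0 - rho ** 2))  # stationary start
        for t in range(1, n_obs):
            e[t] = rho * e[t - 1] + rng.normal(scale=sigma)
        phase = (np.arange(n_obs) >= phase_start).astype(float)
        return 10.0 + effect * phase + e  # baseline level 10, shift at intervention

    rng = np.random.default_rng(42)
    # (intervention point, treatment effect, residual SD, autocorrelation):
    # staggered starts as in a multiple-baseline design, heterogeneous errors.
    participants = [(5, 3.0, 1.0, 0.2), (8, 3.0, 2.0, 0.6), (11, 3.0, 0.5, -0.3)]
    data = [simulate_participant(20, *p, rng) for p in participants]
    print(len(data), data[0].shape)
    ```

    Fitting the more general MLM the paper describes means estimating a separate (sigma, rho) pair per participant rather than pooling them, which is exactly the sensitivity check the authors recommend.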

  5. Using food as a reward: An examination of parental reward practices.

    Science.gov (United States)

    Roberts, Lindsey; Marx, Jenna M; Musher-Eizenman, Dara R

    2018-01-01

    Eating patterns and taste preferences are often established early in life. Many studies have examined how parental feeding practices may affect children's outcomes, including food intake and preference. The current study focused on a common food parenting practice, using food as a reward, and used Latent Profile Analysis (LPA) to examine whether mothers (n = 376) and fathers (n = 117) of children ages 2.8 to 7.5 (M = 4.7; SD = 1.1) grouped into profiles (i.e., subgroups) based on how they use food as a reward. The 4-class model was the best-fitting LPA model, with resulting classes based on both the frequency and type of reward used. Classes were: infrequent reward (33%), tangible reward (21%), food reward (27%), and frequent reward (19%). The current study also explored whether children's eating styles (emotional overeating, food fussiness, food responsiveness, and satiety responsiveness) and parenting style (Authoritative, Authoritarian, and Permissive) varied by reward profile. Analyses of Variance (ANOVA) revealed that the four profiles differed significantly for all outcome variables except satiety responsiveness. It appears that the use of tangible and food-based rewards has important implications in food parenting. More research is needed to better understand how the different rewarding practices affect additional child outcomes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  6. On groundwater flow modelling in safety analyses of spent fuel disposal. A comparative study with emphasis on boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Jussila, P

    1999-11-01

    Modelling groundwater flow is an essential part of the safety assessment of spent fuel disposal because moving groundwater makes a physical connection between a geological repository and the biosphere. Some of the common approaches to modelling groundwater flow in bedrock are equivalent porous continuum (EC), stochastic continuum and various fracture network concepts. The actual flow system is complex and measuring data are limited. Multiple distinct approaches and models, alternative scenarios, as well as calibration and sensitivity analyses are used to give confidence in the results of the calculations. The correctness and orders of magnitude of the results of such complex research can be assessed by comparing them to the results of simplified and robust approaches. The first part of this study is a survey of the objects, contents and methods of the groundwater flow modelling performed in the safety assessments of spent fuel disposal in Finland and Sweden. The most apparent difference of the Swedish studies compared to the Finnish ones is the use of a larger number of different models, which is enabled by the greater resources available in Sweden. The results of the more comprehensive approaches provided by international co-operation are very useful for giving perspective to the results obtained in Finland. In the second part of this study, the influence of boundary conditions on the flow fields of a simple 2D model is examined. The assumptions and simplifications in this approach include, e.g., the following: (1) the EC model is used, in which the 2-dimensional domain is considered a continuum of equivalent properties without fractures present, (2) the calculations are done for stationary fields, without sources or sinks present in the domain and with a constant density of the groundwater, (3) the repository is represented by an isotropic plate, the hydraulic conductivity of which is given fictitious values, (4) the hydraulic conductivity of rock is supposed to have an exponential
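
    The stationary EC idealization in points (1)-(3) amounts to solving div(K grad h) = 0 for the head field h with fixed heads on two boundaries and a low-conductivity plate standing in for the repository. A minimal finite-difference sketch under those assumptions; the grid size, conductivities and boundary heads below are hypothetical, not values from the study:

    ```python
    import numpy as np

    def solve_head(K, h_left, h_right, n_iter=20000):
        """Jacobi iteration for steady 2D Darcy flow, div(K grad h) = 0, with
        fixed heads on the left/right edges and no-flow top/bottom boundaries.
        Neighbour conductivities serve as a simple transmissibility proxy."""
        ny, nx = K.shape
        h = np.linspace(h_left, h_right, nx) * np.ones((ny, 1))  # linear start
        for _ in range(n_iter):
            h[0, :], h[-1, :] = h[1, :], h[-2, :]        # mirror no-flow rows
            kw, ke = K[1:-1, :-2], K[1:-1, 2:]
            kn, ks = K[:-2, 1:-1], K[2:, 1:-1]
            num = (kw * h[1:-1, :-2] + ke * h[1:-1, 2:]
                   + kn * h[:-2, 1:-1] + ks * h[2:, 1:-1])
            h[1:-1, 1:-1] = num / (kw + ke + kn + ks)    # conductivity-weighted mean
            h[:, 0], h[:, -1] = h_left, h_right          # re-pin fixed-head edges
        return h

    K = np.full((21, 41), 1e-8)      # host rock conductivity, m/s (illustrative)
    K[8:13, 15:26] = 1e-11           # low-conductivity "repository plate"
    h = solve_head(K, h_left=10.0, h_right=0.0)
    print(h.shape)
    ```

    Because each interior update is a weighted average of neighbouring heads, the solution obeys the maximum principle: all heads stay between the two boundary values, and the plate merely deflects the flow field around itself.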

  7. Energy and exergy analyses of the diffusion absorption refrigeration system

    International Nuclear Information System (INIS)

    Yıldız, Abdullah; Ersöz, Mustafa Ali

    2013-01-01

    This paper describes the thermodynamic analyses of a DAR (diffusion absorption refrigeration) cycle. The experimental apparatus is set up as an ammonia–water DAR cycle with helium as the auxiliary inert gas. A thermodynamic model including mass, energy and exergy balance equations is presented for each component of the DAR cycle, and this model is then validated by comparison with experimental data. In the thermodynamic analyses, energy and exergy losses for each component of the system are quantified and illustrated. The system's energy and exergy losses and efficiencies are investigated. The highest energy and exergy losses occur in the solution heat exchanger. The highest energy losses in the experimental and theoretical analyses are found to be 25.7090 W and 25.4788 W, respectively, whereas the corresponding exergy losses are calculated as 13.7933 W and 13.9976 W. Although the energy efficiencies obtained from both the model and the experimental study are 0.1858, the corresponding exergy efficiencies are found to be 0.0260 and 0.0356. - Highlights: • The diffusion absorption refrigeration system is designed, manufactured and tested. • The energy and exergy analyses of the system are presented theoretically and experimentally. • The energy and exergy losses are investigated for each component of the system. • The highest energy and exergy losses occur in the solution heat exchanger. • The energy and the exergy performances are also calculated.
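
    The large gap between the energy efficiency (0.1858) and the exergy efficiencies (0.026-0.036) reported above follows from how exergy weights heat by its temperature: heat Q exchanged at temperature T carries exergy Q(1 - T0/T) relative to the dead state T0. A minimal sketch for a heat-driven refrigerator; the temperatures below are illustrative assumptions, not the authors' operating data:

    ```python
    def heat_exergy(q, t, t0):
        """Exergy of heat q transferred at absolute temperature t, dead state t0."""
        return q * (1.0 - t0 / t)

    t0 = 298.15      # dead-state (ambient) temperature, K
    t_gen = 460.0    # generator (heat input) temperature, K  -- illustrative
    t_evap = 268.0   # evaporator temperature, K               -- illustrative
    q_gen = 100.0    # heat supplied to the generator, W
    q_evap = 18.58   # cooling produced, W  (energy efficiency = COP = 0.1858)

    cop = q_evap / q_gen
    ex_in = heat_exergy(q_gen, t_gen, t0)
    # Cooling below ambient gains exergy as t drops below t0, hence the sign flip.
    ex_out = q_evap * (t0 / t_evap - 1.0)
    exergy_eff = ex_out / ex_in
    print(round(cop, 4), round(exergy_eff, 3))
    ```

    Because the driving heat is hot (high exergy per watt) while the cooling duty is only mildly sub-ambient (low exergy per watt), the exergy efficiency is necessarily much smaller than the COP.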

  8. Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events

    Science.gov (United States)

    Dinitz, Laura B.; Taketa, Richard A.

    2013-01-01

    This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.
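
    The two headline quantities the LUPM produces from Hazus loss estimates reduce to simple arithmetic: losses avoided are the difference between unmitigated and mitigated scenario losses, and the rate of return nets out the mitigation cost. A minimal sketch with hypothetical dollar figures (none of these numbers come from the San Francisco Bay demonstration):

    ```python
    def mitigation_metrics(loss_baseline, loss_mitigated, cost):
        """Losses avoided by a mitigation action and its simple rate of return."""
        avoided = loss_baseline - loss_mitigated
        roi = (avoided - cost) / cost
        return avoided, roi

    # Hypothetical scenario losses (e.g., Hazus estimates, $M) and policy cost.
    avoided, roi = mitigation_metrics(loss_baseline=120.0,
                                      loss_mitigated=75.0,
                                      cost=30.0)
    print(avoided, round(roi, 2))  # 45.0 0.5
    ```

    The paper's uncertainty measures would enter by treating the scenario losses as distributions rather than point values and propagating them through this calculation.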

  9. Optimization of a Centrifugal Boiler Circulating Pump's Casing Based on CFD and FEM Analyses

    Directory of Open Access Journals (Sweden)

    Zhigang Zuo

    2014-04-01

    Full Text Available It is important to evaluate the economic efficiency of boiler circulating pumps in the manufacturing process from the manufacturers' point of view. The possibility of optimizing the pump casing with respect to structural pressure integrity and hydraulic performance was discussed. CFD analyses of pump models with different pump casing sizes were first carried out for the hydraulic performance evaluation. The effects of the working temperature and the sealing ring on the hydraulic efficiency were discussed. A model with a casing diameter of 0.875D40 was selected for further analyses. FEM analyses were then carried out on different combinations of casing sizes, casing wall thicknesses, and materials to evaluate their safety related to pressure integrity, with respect to both static and fatigue strength analyses. Two models with forging and cast materials were selected as final results.

  10. Characteristic of inapperceptive pneumonia according to materials of fluorographic examination of population

    International Nuclear Information System (INIS)

    Frejdzon, M.N.; Volynskaya, N.E.

    1984-01-01

    189 (15.5%) cases of pneumonia detected during preventive examinations of 2202 patients in polyclinics over 5 years are analysed. The conclusion is drawn on the importance of preventive fluorographic examination of the population during peak periods of catarrhal diseases (for the diagnosis of weakly expressed and asymptomatic pneumonia). The true number of asymptomatic pneumonias is considerably lower than that registered before X-ray examination. Improvement of the diagnostic method is possible with thorough clinical examination of patients.

  11. A Formal Model to Analyse the Firewall Configuration Errors

    Directory of Open Access Journals (Sweden)

    T. T. Myo

    2015-01-01

    Full Text Available The firewall is widely known as a brandmauer (security-edge gateway). To provide the required security, the firewall has to be appropriately adjusted, i.e. configured. Unfortunately, even skilled administrators may make mistakes when configuring, which results in a lower level of network security and in the infiltration of undesirable packets into the network. The network can be exposed to various threats and attacks. One of the mechanisms used to ensure network security is the firewall. The firewall is a network component which, using a security policy, controls packets passing through the borders of a secured network. The security policy is represented as a set of rules. Packet filters work in stateless mode, without inspection of state: they examine packets as independent objects. Rules take the following form: (condition, action). The firewall analyses the incoming traffic based on the IP addresses of the sender and recipient, the port numbers of the sender and recipient, and the protocol used. When a packet meets a rule's conditions, the action specified in the rule is carried out; it can be allow or deny. The aim of this article is to develop tools to analyse the configuration of a firewall with inspection of states. The input data are a file with the set of rules. The task is to present the analysis of the security policy in an informative graphic form and to reveal inconsistencies in the rules. The article presents a security-policy visualization algorithm and a program which shows how the firewall rules act on all possible packets. To represent the result in an intelligible form, the concept of an equivalence region is introduced. Our goal is a program that displays the results of rule actions on packets in a convenient graphic form and reveals contradictions between the rules. One of the problems is the large number of dimensions. As noted above, the following parameters are specified in a rule: source IP address, destination IP
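
    The (condition, action) semantics described above is first-match evaluation: rules are checked in order and the first one whose condition covers the packet decides its fate. A minimal sketch of that mechanism; the packet fields, rule set and default policy here are hypothetical, not the paper's model:

    ```python
    from dataclasses import dataclass
    from ipaddress import ip_address, ip_network

    @dataclass
    class Rule:
        src: str      # sender CIDR
        dst: str      # recipient CIDR
        dport: range  # allowed destination ports
        proto: str    # 'tcp' or 'udp'
        action: str   # 'allow' or 'deny'

        def matches(self, pkt):
            return (ip_address(pkt["src"]) in ip_network(self.src)
                    and ip_address(pkt["dst"]) in ip_network(self.dst)
                    and pkt["dport"] in self.dport
                    and pkt["proto"] == self.proto)

    def evaluate(rules, pkt, default="deny"):
        """First matching rule wins; fall back to the default policy."""
        for rule in rules:
            if rule.matches(pkt):
                return rule.action
        return default

    rules = [
        Rule("10.0.0.0/8", "192.168.1.10/32", range(80, 81), "tcp", "allow"),
        Rule("0.0.0.0/0", "0.0.0.0/0", range(0, 65536), "tcp", "deny"),
    ]
    verdict = evaluate(rules, {"src": "10.1.2.3", "dst": "192.168.1.10",
                               "dport": 80, "proto": "tcp"})
    print(verdict)  # allow
    ```

    In these terms, the paper's contradictions are packet sets matched by two rules with different actions, and its equivalence regions are maximal sets of packets that every rule treats identically.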

  12. Ideal point error for model assessment in data-driven river flow forecasting

    Directory of Open Access Journals (Sweden)

    C. W. Dawson

    2012-08-01

    Full Text Available When analysing the performance of hydrological models in river forecasting, researchers use a number of diverse statistics. Although some statistics appear to be used more regularly in such analyses than others, there is a distinct lack of consistency in evaluation, making studies undertaken by different authors or performed at different locations difficult to compare in a meaningful manner. Moreover, even within individual reported case studies, substantial contradictions are found to occur between one measure of performance and another. In this paper we examine the ideal point error (IPE) metric – a recently introduced measure of model performance that integrates a number of recognised metrics in a logical way. Having a single, integrated measure of performance is appealing as it should permit more straightforward model inter-comparisons. However, this is reliant on a transferable standardisation of the individual metrics that are combined to form the IPE. This paper examines one potential option for standardisation: the use of naive model benchmarking.
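
    One way to construct an integrated score along the lines described above (a sketch of the idea, not the paper's exact formulation) is to scale each component metric against a naive benchmark such as a persistence forecast, so that 0 is the ideal point and 1 means no better than naive, and then take the root-mean-square distance from the ideal point:

    ```python
    import numpy as np

    def rmse(obs, sim):
        return float(np.sqrt(np.mean((np.asarray(obs) - np.asarray(sim)) ** 2)))

    def mae(obs, sim):
        return float(np.mean(np.abs(np.asarray(obs) - np.asarray(sim))))

    def ideal_point_error(obs, sim, naive):
        """Each metric is divided by its value for the naive benchmark, so 0 is
        a perfect model and 1 merely matches the benchmark; the combined score
        is the RMS distance from the ideal point (0, 0, ...)."""
        scaled = [rmse(obs, sim) / rmse(obs, naive),
                  mae(obs, sim) / mae(obs, naive)]
        return float(np.sqrt(np.mean(np.square(scaled))))

    obs = np.array([3.0, 4.0, 6.0, 5.0, 7.0])          # observed flows (illustrative)
    naive = np.roll(obs, 1)                            # persistence: previous value
    sim = obs + np.array([0.2, -0.1, 0.3, 0.0, -0.2])  # a fairly accurate model
    score = ideal_point_error(obs[1:], sim[1:], naive[1:])
    print(round(score, 3))  # well below 1 => beats the naive benchmark
    ```

    The benchmark choice is exactly the "transferable standardisation" issue the paper raises: a different naive model rescales every component and hence the integrated score.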

  13. Thermodynamic analysis and modeling of thermo compressor; Analyse et modelisation thermodynamique du mouvement du piston d'un thermocompresseur

    Energy Technology Data Exchange (ETDEWEB)

    Arques, Ph. [Ecole Centrale de Lyon, 69 - Ecully (France)

    1998-07-01

    A thermo-compressor is a compressor that transforms the heat released by a source directly into pressure energy, without intermediate mechanical work. It is a conversion of the Stirling engine into a driven machine, in which the piston that provides the work has been suppressed. In this article, we present analytical and numerical analyses modeling the heat and mass transfers in the different volumes of the thermo-compressor. This engine comprises a free displacer piston that separates the cold and hot gas. (author)
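
    The mechanism can be illustrated with the isothermal (Schmidt-type) idealization commonly applied to displacer machines: a fixed gas charge at a single instantaneous pressure is split between isothermal hot and cold spaces, giving p = mR / sum(V_i/T_i), so shuttling gas toward the hot space raises the pressure with no piston work. A minimal sketch with illustrative numbers (working gas, volumes and temperatures are assumptions, not the paper's data):

    ```python
    R = 287.0  # specific gas constant for air, J/(kg K) -- illustrative gas choice

    def pressure(m, volumes_temps):
        """Uniform instantaneous pressure for a fixed gas charge distributed
        over isothermal spaces: p = m R / sum(V_i / T_i)."""
        return m * R / sum(v / t for v, t in volumes_temps)

    m = 1e-3              # kg of gas sealed in the machine
    v_total = 2e-4        # m^3, fixed total gas volume (the displacer only shifts it)
    t_hot, t_cold = 900.0, 300.0

    # Displacer at one end: most of the gas sits in the cold space.
    p_cold_side = pressure(m, [(0.9 * v_total, t_cold), (0.1 * v_total, t_hot)])
    # Displacer at the other end: most of the gas sits in the hot space.
    p_hot_side = pressure(m, [(0.1 * v_total, t_cold), (0.9 * v_total, t_hot)])
    print(p_hot_side > p_cold_side)  # True: heating the charge raises the pressure
    ```

    Cycling the displacer thus produces a pressure swing directly from the temperature difference, which is the compression effect the paper models in detail.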

  14. Implementing partnerships in nonreactor facility safety analyses

    International Nuclear Information System (INIS)

    Courtney, J.C.; Perry, W.H.; Phipps, R.D.

    1996-01-01

    Faculty and students from LSU have been participating in nuclear safety analyses and radiation protection projects at ANL-W at INEL since 1973. A mutually beneficial relationship has evolved that has resulted in generation of safety-related studies acceptable to Argonne and DOE, NRC, and state regulatory groups. Most of the safety projects have involved the Hot Fuel Examination Facility or the Fuel Conditioning Facility; both are hot cells that receive spent fuel from EBR-II. A table shows some of the major projects at ANL-W that involved LSU students and faculty

  15. Community medicine in the medical curriculum: a statistical analysis of a professional examination.

    Science.gov (United States)

    Craddock, M J; Murdoch, R M; Stewart, G T

    1984-01-01

    This paper analyses the examination results of two cohorts of medical students at the University of Glasgow. It discusses the usefulness of Scottish higher grades as predictors of ability to pass examinations in medicine. Further correlations are made between the results from community medicine and other fourth- and fifth-year medical school examinations.

  16. Comparative sequence and structural analyses of G-protein-coupled receptor crystal structures and implications for molecular models.

    Directory of Open Access Journals (Sweden)

    Catherine L Worth

    Full Text Available BACKGROUND: Up until recently the only available experimental (high resolution) structure of a G-protein-coupled receptor (GPCR) was that of bovine rhodopsin. In the past few years the determination of GPCR structures has accelerated with three new receptors, as well as squid rhodopsin, being successfully crystallized. All share a common molecular architecture of seven transmembrane helices and can therefore serve as templates for building molecular models of homologous GPCRs. However, despite the common general architecture of these structures key differences do exist between them. The choice of which experimental GPCR structure(s) to use for building a comparative model of a particular GPCR is unclear and without detailed structural and sequence analyses, could be arbitrary. The aim of this study is therefore to perform a systematic and detailed analysis of sequence-structure relationships of known GPCR structures. METHODOLOGY: We analyzed in detail conserved and unique sequence motifs and structural features in experimentally-determined GPCR structures. Deeper insight into specific and important structural features of GPCRs as well as valuable information for template selection has been gained. Using key features a workflow has been formulated for identifying the most appropriate template(s) for building homology models of GPCRs of unknown structure. This workflow was applied to a set of 14 human family A GPCRs suggesting for each the most appropriate template(s) for building a comparative molecular model. CONCLUSIONS: The available crystal structures represent only a subset of all possible structural variation in family A GPCRs. Some GPCRs have structural features that are distributed over different crystal structures or which are not present in the templates suggesting that homology models should be built using multiple templates. This study provides a systematic analysis of GPCR crystal structures and a consistent method for identifying

  17. Comparative sequence and structural analyses of G-protein-coupled receptor crystal structures and implications for molecular models.

    Science.gov (United States)

    Worth, Catherine L; Kleinau, Gunnar; Krause, Gerd

    2009-09-16

    Up until recently the only available experimental (high resolution) structure of a G-protein-coupled receptor (GPCR) was that of bovine rhodopsin. In the past few years the determination of GPCR structures has accelerated with three new receptors, as well as squid rhodopsin, being successfully crystallized. All share a common molecular architecture of seven transmembrane helices and can therefore serve as templates for building molecular models of homologous GPCRs. However, despite the common general architecture of these structures key differences do exist between them. The choice of which experimental GPCR structure(s) to use for building a comparative model of a particular GPCR is unclear and without detailed structural and sequence analyses, could be arbitrary. The aim of this study is therefore to perform a systematic and detailed analysis of sequence-structure relationships of known GPCR structures. We analyzed in detail conserved and unique sequence motifs and structural features in experimentally-determined GPCR structures. Deeper insight into specific and important structural features of GPCRs as well as valuable information for template selection has been gained. Using key features a workflow has been formulated for identifying the most appropriate template(s) for building homology models of GPCRs of unknown structure. This workflow was applied to a set of 14 human family A GPCRs suggesting for each the most appropriate template(s) for building a comparative molecular model. The available crystal structures represent only a subset of all possible structural variation in family A GPCRs. Some GPCRs have structural features that are distributed over different crystal structures or which are not present in the templates suggesting that homology models should be built using multiple templates. This study provides a systematic analysis of GPCR crystal structures and a consistent method for identifying suitable templates for GPCR homology modelling that will

  18. RELAP5 analyses and support of Oconee-1 PTS studies

    International Nuclear Information System (INIS)

    Charlton, T.R.

    1983-01-01

    The integrity of a reactor vessel during a severe overcooling transient with primary system pressurization is a current safety concern and has been identified as Unresolved Safety Issue (USI) A-49 by the US Nuclear Regulatory Commission (NRC). Resolution of USI A-49, denoted Pressurized Thermal Shock (PTS), is being examined by the US NRC sponsored PTS integration study. In support of this study, the Idaho National Engineering Laboratory (INEL) has performed RELAP5/MOD1.5 thermal-hydraulic analyses of selected overcooling transients. These transient analyses were performed for the Oconee-1 pressurized water reactor (PWR), which has a Babcock and Wilcox-designed nuclear steam supply system.

  19. Plasma-safety assessment model and safety analyses of ITER

    International Nuclear Information System (INIS)

    Honda, T.; Okazaki, T.; Bartels, H.-H.; Uckan, N.A.; Sugihara, M.; Seki, Y.

    2001-01-01

    A plasma-safety assessment model has been provided on the basis of the plasma physics database of the International Thermonuclear Experimental Reactor (ITER) to analyze events including plasma behavior. The model was implemented in a safety analysis code (SAFALY), which consists of a 0-D dynamic plasma model and a 1-D thermal behavior model of the in-vessel components. Unusual plasma events of ITER, e.g., overfueling, were calculated using the code and plasma burning is found to be self-bounded by operation limits or passively shut down due to impurity ingress from overheated divertor targets. Sudden transition of divertor plasma might lead to failure of the divertor target because of a sharp increase of the heat flux. However, the effects of the aggravating failure can be safely handled by the confinement boundaries. (author)

  20. Improved bolt models for use in global analyses of storage and transportation casks subject to extra-regulatory loading

    International Nuclear Information System (INIS)

    Kalan, R.J.; Ammerman, D.J.; Gwinn, K.W.

    2004-01-01

    Transportation and storage casks subjected to extra-regulatory loadings may experience large stresses and strains in key structural components. One of the areas susceptible to these large stresses and strains is the bolted joint retaining any closure lid on an overpack or a canister. Modeling this joint accurately is necessary in evaluating the performance of the cask under extreme loading conditions. However, developing detailed models of a bolt in a large cask finite element model can dramatically increase the computational time, making the analysis prohibitive. Sandia National Laboratories used a series of calibrated, detailed, bolt finite element sub-models to develop a modified-beam bolt-model in order to examine the response of a storage cask and closure to severe accident loadings. The initial sub-models were calibrated for tension and shear loading using test data for large diameter bolts. Next, using the calibrated test model, sub-models of the actual joints were developed to obtain force-displacement curves and failure points for the bolted joint. These functions were used to develop a modified beam element representation of the bolted joint, which could be incorporated into the larger cask finite element model. This paper will address the modeling and assumptions used for the development of the initial calibration models, the joint sub-models and the modified beam model
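The modified-beam idea described above, replacing a meshed bolt with a calibrated force-displacement law plus a failure cutoff, can be sketched as a nonlinear-spring lookup. The curve points and failure displacement below are invented for illustration and are not the Sandia calibration data:

```python
import numpy as np

# Hypothetical calibrated force-displacement data for one closure bolt
# (displacement in mm, force in kN); values are illustrative only.
disp_pts = np.array([0.0, 0.1, 0.5, 1.0, 2.0, 3.0])           # mm
force_pts = np.array([0.0, 40.0, 120.0, 160.0, 175.0, 180.0])  # kN
fail_disp = 3.0  # assumed failure displacement (mm)

def bolt_force(d):
    """Nonlinear-spring surrogate for the bolted joint.

    Interpolates the calibrated curve and returns 0 beyond the
    failure displacement, mimicking bolt rupture in a global model.
    """
    d = np.asarray(d, dtype=float)
    f = np.interp(d, disp_pts, force_pts)
    return np.where(d > fail_disp, 0.0, f)

print(bolt_force([0.05, 0.75, 3.5]))  # mid-curve values interpolate; 3.5 mm has failed
```

In a global cask model, an element with this force-displacement law replaces the detailed bolt sub-model at a fraction of the computational cost.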

  1. Comparative analyses reveal potential uses of Brachypodium distachyon as a model for cold stress responses in temperate grasses

    Directory of Open Access Journals (Sweden)

    Li Chuan

    2012-05-01

    Full Text Available Abstract Background Little is known about the potential of Brachypodium distachyon as a model for low temperature stress responses in Pooideae. The ice recrystallization inhibition protein (IRIP) genes, fructosyltransferase (FST) genes, and many C-repeat binding factor (CBF) genes are Pooideae-specific and important in low temperature responses. Here we used comparative analyses to study conservation and evolution of these gene families in B. distachyon to better understand its potential as a model species for agriculturally important temperate grasses. Results Brachypodium distachyon contains cold responsive IRIP genes which have evolved through Brachypodium specific gene family expansions. A large cold responsive CBF3 subfamily was identified in B. distachyon, while CBF4 homologs are absent from the genome. No B. distachyon FST gene homologs encode typical core Pooideae FST-motifs and low temperature induced fructan accumulation was dramatically different in B. distachyon compared to core Pooideae species. Conclusions We conclude that B. distachyon can serve as an interesting model for specific molecular mechanisms involved in low temperature responses in core Pooideae species. However, the evolutionary history of key genes involved in low temperature responses has been different in Brachypodium and core Pooideae species. These differences limit the use of B. distachyon as a model for holistic studies relevant for agricultural core Pooideae species.

  2. Examining the nomological network of satisfaction with work-life balance.

    Science.gov (United States)

    Grawitch, Matthew J; Maloney, Patrick W; Barber, Larissa K; Mooshegian, Stephanie E

    2013-07-01

    This study expands on past work-life research by examining the nomological network of satisfaction with work-life balance-the overall appraisal or global assessment of how one manages time and energy across work and nonwork domains. Analyses using 456 employees at a midsized organization indicated expected relationships with bidirectional conflict, bidirectional facilitation, and satisfaction with work and nonwork life. Structural equation modeling supported the utility of satisfaction with balance as a unique component of work-life interface perceptions. Results also indicated that satisfaction with balance mediated the relationship between some conflict/facilitation and life satisfaction outcomes, though conflict and facilitation maintained unique predictive validity on domain specific outcomes (i.e., work-to-life conflict and facilitation with work life satisfaction; life-to-work conflict and facilitation with nonwork life satisfaction). PsycINFO Database Record (c) 2013 APA, all rights reserved.

  3. Examining Ecological and Ecosystem Level Impacts of Aquatic Invasive Species in Lake Michigan Using An Ecosystem Productivity Model, LM-Eco

    Science.gov (United States)

    Ecological and ecosystem-level impacts of aquatic invasive species in Lake Michigan were examined using the Lake Michigan Ecosystem Model (LM-Eco). The LM-Eco model includes a detailed description of trophic levels and their interactions within the lower food web of Lake Michiga...

  4. From global economic modelling to household level analyses of food security and sustainability: how big is the gap and can we bridge it?

    NARCIS (Netherlands)

    Wijk, van M.T.

    2014-01-01

    Policy and decision makers have to make difficult choices to improve the food security of local people against the background of drastic global and local changes. Ex-ante impact assessment using integrated models can help them with these decisions. This review analyses the state of affairs of the

  5. Bayesian network modelling on data from fine needle aspiration cytology examination for breast cancer diagnosis

    OpenAIRE

    Ding, Xuemei; Cao, Yi; Zhai, Jia; Maguire, Liam; Li, Yuhua; Yang, Hongqin; Wang, Yuhua; Zeng, Jinshu; Liu, Shuo

    2017-01-01

    The paper employed a Bayesian network (BN) modelling approach to discover causal dependencies among different data features of the Breast Cancer Wisconsin Dataset (BCWD) derived from the openly sourced UCI repository. The K2 learning algorithm and k-fold cross validation were used to construct and optimize the BN structure. Compared to Naïve Bayes (NB), the obtained BN presented better performance for breast cancer diagnosis based on fine needle aspiration cytology (FNAC) examination. It also showed that, amon...
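As a point of comparison for the abstract's Naive Bayes baseline, the sketch below runs a 10-fold cross-validated GaussianNB on scikit-learn's bundled Wisconsin diagnostic breast-cancer data (WDBC), a close relative of the BCWD file on the UCI repository, used here only as a stand-in:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# scikit-learn bundles the Wisconsin *diagnostic* dataset (WDBC), a close
# relative of the BCWD file on the UCI repository; a stand-in here.
X, y = load_breast_cancer(return_X_y=True)

# k-fold cross-validated Naive Bayes baseline, mirroring the k-fold
# protocol the authors used to evaluate their Bayesian network.
scores = cross_val_score(GaussianNB(), X, y, cv=10)
print(f"10-fold GaussianNB accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

A learned BN structure would then be compared against this baseline on the same folds.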

  6. Confirmatory Factor Analysis of WAIS-IV in a Clinical Sample: Examining a Bi-Factor Model

    Directory of Open Access Journals (Sweden)

    Rachel Collinson

    2016-12-01

    Full Text Available There have been a number of studies that have examined the factor structure of the Wechsler Adult Intelligence Scale IV (WAIS-IV) using the standardization sample. In this study, we investigate its factor structure on a clinical neuropsychology sample of mixed aetiology. Correlated factor, higher-order and bi-factor models are all tested. Overall, the results suggest that the WAIS-IV will be suitable for use with this population.

  7. Longitudinal predictive ability of mapping models: examining post-intervention EQ-5D utilities derived from baseline MHAQ data in rheumatoid arthritis patients.

    Science.gov (United States)

    Kontodimopoulos, Nick; Bozios, Panagiotis; Yfantopoulos, John; Niakas, Dimitris

    2013-04-01

    The purpose of this methodological study was to provide insight into the under-addressed issue of the longitudinal predictive ability of mapping models. Post-intervention predicted and reported utilities were compared, and the effect of disease severity on the observed differences was examined. A cohort of 120 rheumatoid arthritis (RA) patients (60.0% female, mean age 59.0) embarking on therapy with biological agents completed the Modified Health Assessment Questionnaire (MHAQ) and the EQ-5D at baseline, and at 3, 6 and 12 months post-intervention. OLS regression produced a mapping equation to estimate post-intervention EQ-5D utilities from baseline MHAQ data. Predicted and reported utilities were compared with a t test, and the prediction error was modeled, using fixed effects, in terms of covariates such as age, gender, time, disease duration, treatment, RF, DAS28 score, predicted and reported EQ-5D. The OLS model (RMSE = 0.207, R(2) = 45.2%) consistently underestimated future utilities, with a mean prediction error of 6.5%. Mean absolute differences between reported and predicted EQ-5D utilities at 3, 6 and 12 months exceeded the typically reported MID of the EQ-5D (0.03). According to the fixed-effects model, time, lower predicted EQ-5D and higher DAS28 scores had a significant impact on prediction errors, which appeared increasingly negative for lower reported EQ-5D scores, i.e., predicted utilities tended to be lower than reported ones in more severe health states. This study builds upon existing research having demonstrated the potential usefulness of mapping disease-specific instruments onto utility measures. The specific issue of longitudinal validity is addressed, as mapping models derived from baseline patients need to be validated on post-therapy samples. 
The underestimation of post-treatment utilities in the present study, at least in more severe patients, warrants further research before it is prudent to conduct cost-utility analyses in the context
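The mapping workflow the abstract describes (fit OLS on baseline data, predict follow-up utilities, compare predicted with reported) can be sketched on synthetic numbers. The coefficients, noise level and cohort split below are invented, not the study's:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic illustration (the study's RA data are not public here):
# baseline MHAQ disability scores and later-reported EQ-5D utilities.
n = 120
mhaq = rng.uniform(0.0, 2.0, n)
eq5d = 0.85 - 0.25 * mhaq + rng.normal(0.0, 0.15, n)

# Fit the OLS mapping EQ-5D ~ a + b*MHAQ on one half ("derivation" cohort)
# and predict the other half (a stand-in "post-intervention" sample).
X = np.column_stack([np.ones(n), mhaq])
beta, *_ = np.linalg.lstsq(X[:60], eq5d[:60], rcond=None)
pred = X[60:] @ beta

rmse = np.sqrt(np.mean((eq5d[60:] - pred) ** 2))
t_stat, p_val = stats.ttest_rel(eq5d[60:], pred)  # reported vs predicted
print(f"out-of-sample RMSE = {rmse:.3f}, paired-t p = {p_val:.3f}")
```

On real longitudinal data the paired test is the interesting step, since in-sample OLS residuals average to zero by construction.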

  8. Mean and Covariance Structures Analyses: An Examination of the Rosenberg Self-Esteem Scale among Adolescents and Adults.

    Science.gov (United States)

    Whiteside-Mansell, Leanne; Corwyn, Robert Flynn

    2003-01-01

    Examined the cross-age comparability of the widely used Rosenberg Self-Esteem Scale (RSES) in 414 adolescents and 900 adults in families receiving Aid to Families with Dependent Children. Found similarities of means in the RSES across groups. (SLD)

  9. Lagrangian Coherent Structure Analysis of Terminal Winds: Three-Dimensionality, Intramodel Variations, and Flight Analyses

    Directory of Open Access Journals (Sweden)

    Brent Knutson

    2015-01-01

    Full Text Available We present a study of three-dimensional Lagrangian coherent structures (LCS) near the Hong Kong International Airport and relate to previous developments of two-dimensional (2D) LCS analyses. The LCS are contrasted among three independent models and against 2D coherent Doppler light detection and ranging (LIDAR) data. Addition of the velocity information perpendicular to the LIDAR scanning cone helps solidify flow structures inferred from previous studies; contrast among models reveals the intramodel variability; and comparison with flight data evaluates the performance among models in terms of Lagrangian analyses. We find that, while the three models and the LIDAR do recover similar features of the windshear experienced by a landing aircraft (along the landing trajectory, their Lagrangian signatures over the entire domain are quite different—a portion of each numerical model captures certain features resembling those LCS extracted from independent 2D LIDAR analyses based on observations.
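A minimal illustration of the Lagrangian machinery behind such analyses is the finite-time Lyapunov exponent (FTLE) field, whose ridges approximate LCS. The sketch below uses the standard double-gyre test flow as a stand-in for the wind fields; it is not any of the paper's models or LIDAR data:

```python
import numpy as np

# Finite-time Lyapunov exponent (FTLE) field for the classic 2D
# double-gyre test flow; LCS appear as ridges of this field.
A, eps, om = 0.1, 0.25, 2 * np.pi / 10

def vel(t, x, y):
    a = eps * np.sin(om * t)
    b = 1 - 2 * a
    f = a * x**2 + b * x
    dfdx = 2 * a * x + b
    u = -np.pi * A * np.sin(np.pi * f) * np.cos(np.pi * y)
    v = np.pi * A * np.cos(np.pi * f) * np.sin(np.pi * y) * dfdx
    return u, v

# Advect a grid of tracers with forward Euler (coarse but adequate here).
nx, ny, T, dt = 80, 40, 5.0, 0.02
x, y = np.meshgrid(np.linspace(0, 2, nx), np.linspace(0, 1, ny))
xt, yt, t = x.copy(), y.copy(), 0.0
while t < T:
    u, v = vel(t, xt, yt)
    xt, yt, t = xt + dt * u, yt + dt * v, t + dt

# Cauchy-Green tensor from flow-map gradients; FTLE = ln(sqrt(lmax)) / T.
dxdx = np.gradient(xt, x[0], axis=1); dxdy = np.gradient(xt, y[:, 0], axis=0)
dydx = np.gradient(yt, x[0], axis=1); dydy = np.gradient(yt, y[:, 0], axis=0)
c11 = dxdx**2 + dydx**2
c12 = dxdx * dxdy + dydx * dydy
c22 = dxdy**2 + dydy**2
lmax = 0.5 * (c11 + c22) + np.sqrt(0.25 * (c11 - c22) ** 2 + c12**2)
ftle = np.log(np.sqrt(np.maximum(lmax, 1e-12))) / T
print("FTLE max:", float(ftle.max()))
```

The 3D analyses in the paper extend the same flow-map-gradient construction to a third velocity component.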

  10. Inscription and interpretation of text: a cultural hermeneutic examination of virtual community

    Directory of Open Access Journals (Sweden)

    Gary Burnett

    2003-01-01

    Full Text Available People engaging in electronic exchanges can create communities--places with socially constituted norms, values, and expectations. We adopt an anthropological perspective, yoked with a methodology based in hermeneutics, to illustrate how language use both reflects and influences culture in a virtual community. Our study analyses contributions to a Usenet newsgroup. Four elements of our conceptual model--coherence, reference, invention, and intention--provide mechanisms to examine a community's texts as it engages in social interaction and knowledge creation. While information exchange and socializing are intertwined, our model allows a robust understanding of the relationship between the two. Texts are not merely vehicles for communication but serve multiple purposes simultaneously. While they transfer information, texts also provide information within a social context, and create an expanding archive of socially-contextualized information well beyond the capabilities of any individual participant. This allows groups to negotiate reputations, socialize, and define the limits of their knowledge.

  11. Field Study of Dairy Cows with Reduced Appetite in Early Lactation: Clinical Examinations, Blood and Rumen Fluid Analyses

    Directory of Open Access Journals (Sweden)

    Steen A

    2001-06-01

    Full Text Available The study included 125 cows with reduced appetite and with clinical signs interpreted by the owner as indicating bovine ketosis 6 to 75 days postpartum. Almost all of the cows were given concentrates 2 to 3 times daily. With a practitioner's view to treatment and prophylaxis the cows were divided into 5 diagnostic groups on the basis of thorough clinical examination, milk ketotest, decreased protozoal activity and concentrations, increased methylene blue reduction time, and increased liver parameters: ketosis (n = 32), indigestion (n = 26), combined ketosis and indigestion (n = 29), liver disease combined with ketosis, indigestion, or both (n = 15), and no specific diagnosis (n = 17). Three cows with traumatic reticuloperitonitis and 3 with abomasal displacement were not grouped. Nonparametric methods were used when groups were compared. Aspartate aminotransferase, glutamate dehydrogenase, gamma-glutamyl transferase and total bilirubin were elevated in the group with liver disease. Free fatty acids were significantly elevated in cows with ketosis, compared with cows with indigestion. Activity and concentrations of large and small protozoa were reduced, and methylene blue reduction time was increased in cows with indigestion. The rumen fluid pH was the same for groups of cows with and without indigestion. Prolonged reduced appetite before examination could have led to misclassification. Without careful interpretation of the milk ketotest, many cases with additional diagnoses would have been reported as primary ketosis. Thorough clinical examination together with feasible rumen fluid examination and economically reasonable blood biochemistry did not uncover the reason(s) for reduced appetite in 14% of the cows. More powerful diagnostic methods are needed.
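The nonparametric group comparisons mentioned above can be sketched with SciPy. The free-fatty-acid values below are synthetic, chosen only to mimic the reported pattern (FFA elevated in ketosis relative to indigestion):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic free fatty acid values (mmol/L) for three diagnostic groups;
# group sizes follow the abstract, the values themselves are invented.
ffa = {
    "ketosis":     rng.normal(1.2, 0.3, 32),
    "indigestion": rng.normal(0.6, 0.2, 26),
    "both":        rng.normal(1.0, 0.3, 29),
}

# Kruskal-Wallis across all groups, then a pairwise Mann-Whitney U test
# for the comparison the abstract highlights.
H, p_all = stats.kruskal(*ffa.values())
U, p_pair = stats.mannwhitneyu(ffa["ketosis"], ffa["indigestion"])
print(f"Kruskal-Wallis p = {p_all:.2g}; ketosis vs indigestion p = {p_pair:.2g}")
```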

  12. Post-Irradiation Non-Destructive Analyses of the AFIP-7 Experiment

    Science.gov (United States)

    Williams, W. J.; Robinson, A. B.; Rabin, B. H.

    2017-12-01

    This article reports the results and interpretation of post-irradiation non-destructive examinations performed on four curved full-size fuel plates that comprise the AFIP-7 experiment. These fuel plates, having a U-10 wt.%Mo monolithic design, were irradiated under moderate operating conditions in the Advanced Test Reactor to assess fuel performance for geometries that are prototypic of research reactor fuel assemblies. Non-destructive examinations include visual examination, neutron radiography, profilometry, and precision gamma scanning. This article evaluates the qualitative and quantitative data taken for each plate, compares corresponding data sets, and presents the results of swelling analyses. These characterization results demonstrate that the fuel meets established irradiation performance requirements for mechanical integrity, geometric stability, and stable and predictable behavior.

  13. Using niche-modelling and species-specific cost analyses to determine a multispecies corridor in a fragmented landscape

    Science.gov (United States)

    Zurano, Juan Pablo; Selleski, Nicole; Schneider, Rosio G.

    2017-01-01

    types independent of the degree of legal protection. These data used with multifocal GIS analyses balance the varying degree of overlap and unique properties among them allowing for comprehensive conservation strategies to be developed relatively rapidly. Our comprehensive approach serves as a model to other regions faced with habitat loss and lack of data. The five carnivores focused on in our study have wide ranges, so the results from this study can be expanded and combined with surrounding countries, with analyses at the species or community level. PMID:28841692

  14. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    Science.gov (United States)

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
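For the correlation case, the kind of power computation G*Power automates can be approximated in a few lines via the Fisher z transform; this is the textbook approximation, not G*Power's exact routine:

```python
from math import atanh, sqrt
from scipy.stats import norm

def corr_power(rho, n, alpha=0.05):
    """Approximate power of the two-sided test of H0: correlation = 0,
    using the Fisher z transform; the (tiny) far-tail term is neglected."""
    z = atanh(rho) * sqrt(n - 3)
    zcrit = norm.ppf(1 - alpha / 2)
    return float(norm.cdf(z - zcrit))

print(f"power at rho = 0.3, n = 100: {corr_power(0.3, 100):.3f}")
```

Inverting this relation for n gives the a-priori sample-size analyses the program offers.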

  15. Examining the Role of Self-Disclosure and Connectedness in the Process of Instructional Dissent: A Test of the Instructional Beliefs Model

    Science.gov (United States)

    Johnson, Zac D.; LaBelle, Sara

    2015-01-01

    The current study examined the relationship between student-to-student communicative behaviors and communication outcomes in the college classroom. The instructional beliefs model was used to examine student self-disclosures, student perceptions of connectedness, and student enactment of instructional dissent. Students (N = 351) completed…

  16. Multi-level Bayesian analyses for single- and multi-vehicle freeway crashes.

    Science.gov (United States)

    Yu, Rongjie; Abdel-Aty, Mohamed

    2013-09-01

    This study presents multi-level analyses for single- and multi-vehicle crashes on a mountainous freeway. Data from a 15-mile mountainous freeway section on I-70 were investigated. Both aggregate and disaggregate models for the two crash conditions were developed. Five years of crash data were used in the aggregate investigation, while the disaggregate models utilized one year of crash data along with real-time traffic and weather data. For the aggregate analyses, safety performance functions were developed for the purpose of revealing the contributing factors for each crash type. Two methodologies, a Bayesian bivariate Poisson-lognormal model and a Bayesian hierarchical Poisson model with correlated random effects, were estimated to simultaneously analyze the two crash conditions with consideration of possible correlations. Except for the factors related to geometric characteristics, two exposure parameters (annual average daily traffic and segment length) were included. Two different sets of significant explanatory and exposure variables were identified for the single-vehicle (SV) and multi-vehicle (MV) crashes. It was found that the Bayesian bivariate Poisson-lognormal model is superior to the Bayesian hierarchical Poisson model, the former with a substantially lower DIC and more significant variables. In addition to the aggregate analyses, microscopic real-time crash risk evaluation models were developed for the two crash conditions. Multi-level Bayesian logistic regression models were estimated with the random parameters accounting for seasonal variations, crash-unit-level diversity and segment-level random effects capturing unobserved heterogeneity caused by the geometric characteristics. 
The model results indicate that the effects of the selected variables on crash occurrence vary across seasons and crash units, and that geometric characteristic variables contribute to the segment variations: the more unobserved heterogeneity is accounted for, the better
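The core of the bivariate Poisson-lognormal formulation, correlated lognormal random effects shared by single- and multi-vehicle counts, can be illustrated by simulation. All rates and covariances below are invented, not estimates from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Sketch of the bivariate Poisson-lognormal idea: segment-level SV and
# MV crash counts driven by correlated lognormal random effects.
n_seg = 5000
mu = np.log([2.0, 4.0])                  # assumed mean SV / MV crash rates
cov = [[0.30, 0.15], [0.15, 0.30]]       # correlated random effects (assumed)
log_rates = rng.multivariate_normal(mu, cov, size=n_seg)
counts = rng.poisson(np.exp(log_rates))  # columns: SV, MV counts

r = np.corrcoef(counts.T)[0, 1]
print(f"induced SV-MV count correlation: {r:.2f}")
```

In the paper this correlation is estimated (with covariates) in a Bayesian framework rather than fixed, which is what the DIC comparison evaluates.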

  17. Examining variation in working memory capacity and retrieval in cued recall.

    Science.gov (United States)

    Unsworth, Nash

    2009-05-01

    Two experiments examined the notion that individual differences in working memory capacity (WMC) are partially due to differences in search set size in cued recall. High and low WMC individuals performed variants of a cued recall task with either unrelated cue words (Experiment 1) or specific cue phrases (Experiment 2). Across both experiments low WMC individuals recalled fewer items, made more errors, and had longer correct recall latencies than high WMC individuals. Cross-experimental analyses suggested that providing participants with more specific cues decreased the size of the search set, leading to better recall overall. However, these effects were equivalent for high and low WMC. It is argued that these results are consistent with a search model framework in which low WMC individuals search through a larger set of items than high WMC individuals.

  18. A shock absorber model for structure-borne noise analyses

    Science.gov (United States)

    Benaziz, Marouane; Nacivet, Samuel; Thouverez, Fabrice

    2015-08-01

    Shock absorbers are often responsible for undesirable structure-borne noise in cars. The early numerical prediction of this noise in the automobile development process can save time and money and yet remains a challenge for industry. In this paper, a new approach to predicting shock absorber structure-borne noise is proposed; it consists in modelling the shock absorber and including the main nonlinear phenomena responsible for discontinuities in the response. The model set forth herein features: compressible fluid behaviour, nonlinear flow rate-pressure relations, valve mechanical equations and rubber mounts. The piston, base valve and complete shock absorber model are compared with experimental results. Sensitivity of the shock absorber response is evaluated and the most important parameters are classified. The response envelope is also computed. This shock absorber model is able to accurately reproduce local nonlinear phenomena and improves our state of knowledge on potential noise sources within the shock absorber.

  19. Do phase-shift analyses and nucleon-nucleon potential models yield the wrong 3Pj phase shifts at low energies?

    International Nuclear Information System (INIS)

    Tornow, W.; Witala, H.; Kievsky, A.

    1998-01-01

    The 4PJ waves in nucleon-deuteron scattering were analyzed using proton-deuteron and neutron-deuteron data at EN = 3 MeV. New sets of nucleon-nucleon 3Pj phase shifts were obtained that may lead to a better understanding of the long-standing Ay(θ) puzzle in nucleon-deuteron elastic scattering. However, these sets of 3Pj phase shifts are quite different from the ones determined from both global phase-shift analyses of nucleon-nucleon data and nucleon-nucleon potential models. copyright 1998 The American Physical Society

  20. DEVELOPMENT OF WEB-BASED EXAMINATION SYSTEM USING OPEN SOURCE PROGRAMMING MODEL

    Directory of Open Access Journals (Sweden)

    Olalere A. ABASS

    2017-04-01

    Full Text Available The traditional method of assessment (examination) is often characterized by examination question leakages, human errors during marking of scripts and recording of scores. The technological advancement in the field of computer science has necessitated the need for computer usage in majorly all areas of human life and endeavors, the education sector not excluded. This work, the Web-based Examination System (WES), was therefore born out of the will to stymie the problems plaguing the conventional (paper-based) examination system by providing a campus-wide service for e-assessment that is devoid of irregularities, generally fair to examinees and able to give instant feedback. The system, developed using a combination of CSS, HTML, PHP, SQL (MySQL) and Dreamweaver, is capable of reducing the examiners' workload of examining, grading and reviewing scripts. Thus, the system enables the release of examination results in record time and without error. WES can serve as an effective solution for mass education evaluation and offers many novel features that cannot be implemented in paper-based systems, such as real-time data collection, management and analysis, and distributed and interactive assessment towards promoting distance education.

  1. Effects of dating errors on nonparametric trend analyses of speleothem time series

    Directory of Open Access Journals (Sweden)

    M. Mudelsee

    2012-10-01

    Full Text Available A fundamental problem in paleoclimatology is to take fully into account the various error sources when examining proxy records with quantitative methods of statistical time series analysis. Records from dated climate archives such as speleothems add extra uncertainty from the age determination to the other sources that consist in measurement and proxy errors. This paper examines three stalagmite time series of oxygen isotopic composition (δ18O) from two caves in western Germany, the series AH-1 from the Atta Cave and the series Bu1 and Bu4 from the Bunker Cave. These records carry regional information about past changes in winter precipitation and temperature. U/Th and radiocarbon dating reveals that they cover the later part of the Holocene, the past 8.6 thousand years (ka). We analyse centennial- to millennial-scale climate trends by means of nonparametric Gasser–Müller kernel regression. Error bands around fitted trend curves are determined by combining (1) block bootstrap resampling to preserve noise properties (shape, autocorrelation) of the δ18O residuals and (2) timescale simulations (models StalAge and iscam). The timescale error influences on centennial- to millennial-scale trend estimation are not excessively large. We find a "mid-Holocene climate double-swing", from warm to cold to warm winter conditions (6.5 ka to 6.0 ka to 5.1 ka), with warm–cold amplitudes of around 0.5‰ δ18O; this finding is documented by all three records with high confidence. We also quantify the Medieval Warm Period (MWP), the Little Ice Age (LIA) and the current warmth. Our analyses cannot unequivocally support the conclusion that current regional winter climate is warmer than that during the MWP.
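The error-band recipe, a kernel-regression trend plus block-bootstrap resampling of the residuals, can be sketched as follows. A Nadaraya-Watson smoother stands in for the Gasser-Müller estimator, the timescale-simulation step (StalAge/iscam) is omitted, and the series is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic delta-18-O series on a regular age grid (ka); real speleothem
# records are unevenly dated, which this sketch ignores.
t = np.linspace(0.0, 8.6, 400)
d18o = 0.5 * np.sin(t / 1.4) + rng.normal(0.0, 0.3, t.size)

def ksmooth(tq, tobs, yobs, h=0.4):
    # Gaussian-kernel regression (Nadaraya-Watson stand-in, bandwidth h in ka).
    w = np.exp(-0.5 * ((tq[:, None] - tobs[None, :]) / h) ** 2)
    return (w * yobs).sum(axis=1) / w.sum(axis=1)

trend = ksmooth(t, t, d18o)
resid = d18o - trend

# Moving-block bootstrap of the residuals preserves their autocorrelation.
L, nboot = 25, 200
starts = np.arange(t.size - L)
boots = np.empty((nboot, t.size))
for b in range(nboot):
    picks = rng.choice(starts, t.size // L + 1)
    r = np.concatenate([resid[s:s + L] for s in picks])[: t.size]
    boots[b] = ksmooth(t, t, trend + r)

band_lo, band_hi = np.percentile(boots, [2.5, 97.5], axis=0)
print(f"mean 95% band width: {(band_hi - band_lo).mean():.3f}")
```

The paper's extra step resamples the age model as well, so each bootstrap replicate would also perturb `t` before re-smoothing.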

  2. Modelling, singular perturbation and bifurcation analyses of bitrophic food chains.

    Science.gov (United States)

    Kooi, B W; Poggiale, J C

    2018-04-20

    Two predator-prey model formulations are studied: the classical Rosenzweig-MacArthur (RM) model and the Mass Balance (MB) chemostat model. When the growth and loss rates of the predator are much smaller than those of the prey, these models are slow-fast systems, leading mathematically to a singular perturbation problem. In contrast to the RM-model, the resource for the prey is modelled explicitly in the MB-model, but this comes with additional parameters. These parameter values are chosen such that the two models become easy to compare. Both models exhibit a transcritical bifurcation, a threshold above which invasion of the predator into the prey-only system occurs, and a Hopf bifurcation, where the interior equilibrium becomes unstable, leading to a stable limit cycle. The fast-slow limit cycles are called relaxation oscillations, which for increasing differences in time scales lead to the well-known degenerate trajectories: concatenations of slow and fast parts of the trajectory. In the fast-slow version of the RM-model a canard explosion of the stable limit cycles occurs in the oscillatory region of the parameter space. To our knowledge this type of dynamics had not previously been reported for the RM-model, let alone for more complex ecosystem models. When a bifurcation parameter crosses the Hopf bifurcation point, the amplitude of the emerging stable limit cycles increases. However, depending on the perturbation parameter, the shape of this limit cycle changes abruptly: from two concatenated slow and fast episodes with small amplitude, to a large-amplitude shape similar to the relaxation oscillation, the well-known degenerate phase trajectory consisting of four episodes (a concatenation of two slow and two fast).
    The canard explosion point is accurately predicted by an extended asymptotic expansion technique in the perturbation and bifurcation parameters simultaneously, where the small
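
    A minimal sketch of the slow-fast Rosenzweig–MacArthur dynamics discussed above (nondimensional form with a Holling type II response; the parameter values are illustrative choices, not the authors'), integrated with a hand-rolled RK4 stepper:

```python
import numpy as np

def rm_rhs(state, b=0.3, m=0.4, eps=0.01):
    """Nondimensional Rosenzweig-MacArthur model; eps << 1 makes the
    predator slow relative to the prey (illustrative values)."""
    x, y = state
    f = x / (b + x)                       # Holling type II response
    return np.array([x * (1 - x) - f * y,
                     eps * y * (f - m)])

def rk4(rhs, state, dt, n_steps):
    """Classical fourth-order Runge-Kutta integration."""
    traj = np.empty((n_steps + 1, 2))
    traj[0] = state
    for i in range(n_steps):
        s = traj[i]
        k1 = rhs(s)
        k2 = rhs(s + 0.5 * dt * k1)
        k3 = rhs(s + 0.5 * dt * k2)
        k4 = rhs(s + dt * k3)
        traj[i + 1] = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return traj

traj = rk4(rm_rhs, np.array([0.9, 0.2]), dt=0.05, n_steps=40000)
```

    With eps = 0.01 the prey jumps quickly between branches of its nullcline while the predator drifts slowly, producing the relaxation-oscillation shape described in the abstract.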

  3. Type 2 diabetes mellitus unawareness, prevalence, trends and risk factors: National Health and Nutrition Examination Survey (NHANES) 1999–2010

    Science.gov (United States)

    Zhang, Nana; Yang, Xin; Zhu, Xiaolin; Zhao, Bin; Huang, Tianyi

    2017-01-01

    Objectives To determine whether the associations with key risk factors in patients with diagnosed and undiagnosed type 2 diabetes mellitus (T2DM) differ, using data from the National Health and Nutrition Examination Survey (NHANES) from 1999 to 2010. Methods The study analysed the prevalence of, and associations with risk factors for, undiagnosed and diagnosed T2DM using a regression model and a multinomial logistic regression model. Data from the NHANES 1999–2010 were used for the analyses. Results The study analysed data from 10 570 individuals. The overall prevalence of diagnosed and undiagnosed T2DM increased significantly from 1999 to 2010. The prevalence of undiagnosed T2DM was significantly higher in non-Hispanic whites; educational level had no effect on T2DM diagnosis rates. Although diagnosed T2DM was associated with favourable changes in diet/carbohydrate intake behaviour, it had no effect on physical activity levels. Conclusion The overall T2DM prevalence increased between 1999 and 2010, particularly for undiagnosed T2DM in patients formerly classified as low risk. PMID:28415936
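
    The multinomial logistic model mentioned in the Methods can be sketched with plain gradient descent on synthetic data. The predictors and three-class outcome below are hypothetical, and NHANES survey weights and design are ignored here.

```python
import numpy as np

def softmax(z):
    """Row-wise softmax with the usual max-shift for stability."""
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_multinomial_logit(X, y, n_classes, lr=0.1, n_iter=2000):
    """Gradient-descent multinomial logit (outcome classes such as:
    no T2DM / undiagnosed / diagnosed), with an intercept column."""
    Xb = np.column_stack([np.ones(len(X)), X])
    W = np.zeros((Xb.shape[1], n_classes))
    Y = np.eye(n_classes)[y]                     # one-hot outcomes
    for _ in range(n_iter):
        P = softmax(Xb @ W)
        W -= lr * Xb.T @ (P - Y) / len(X)
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 2))                    # e.g. standardized BMI, age
logits = np.column_stack([np.zeros(600), X @ [1.5, 0.0], X @ [0.0, 1.5]])
y = np.array([rng.choice(3, p=p) for p in softmax(logits)])
W = fit_multinomial_logit(X, y, 3)
acc = (softmax(np.column_stack([np.ones(600), X]) @ W).argmax(1) == y).mean()
```

    Each column of W holds the intercept and coefficients for one outcome class relative to the implicit common scale.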

  4. A systematic review of the quality and impact of anxiety disorder meta-analyses.

    Science.gov (United States)

    Ipser, Jonathan C; Stein, Dan J

    2009-08-01

    Meta-analyses are seen as representing the pinnacle of a hierarchy of evidence used to inform clinical practice. Therefore, the potential importance of differences in the rigor with which they are conducted and reported warrants consideration. In this review, we use standardized instruments to describe the scientific and reporting quality of meta-analyses of randomized controlled trials of the treatment of anxiety disorders. We also use traditional and novel metrics of article impact to assess the influence of meta-analyses across a range of research fields in the anxiety disorders. Overall, although the meta-analyses that we examined had some flaws, their quality of reporting was generally acceptable. Neither the scientific nor reporting quality of the meta-analyses was predicted by any of the impact metrics. The finding that treatment meta-analyses were cited less frequently than quantitative reviews of studies in current "hot spots" of research (ie, genetics, imaging) points to the multifactorial nature of citation patterns. A list of the meta-analyses included in this review is available on an evidence-based website of anxiety and trauma-related disorders.

  5. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    Science.gov (United States)

    Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
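
    The variance decomposition underlying such sensitivity analyses can be sketched with a pick-freeze Sobol estimator on a toy stand-in for the ABM output. The function and its three inputs are hypothetical, and pseudorandom sampling replaces the quasi-random sampling used in the paper.

```python
import numpy as np

def model(x):
    """Toy stand-in for an ABM summary output: conserved farmland as a
    nonlinear function of three policy/behaviour inputs (invented)."""
    return x[:, 0] + 2.0 * x[:, 1] ** 2 + 0.1 * np.sin(6 * x[:, 2])

def first_order_sobol(model, d, n=10000, seed=0):
    """First-order Sobol indices via the Saltelli pick-freeze
    estimator on uniform [0, 1) input samples."""
    rng = np.random.default_rng(seed)
    A = rng.random((n, d))
    B = rng.random((n, d))
    fA, fB = model(A), model(B)
    var = np.var(np.concatenate([fA, fB]))
    S = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]                  # freeze input i from B
        S[i] = np.mean(fB * (model(ABi) - fA)) / var
    return S

S = first_order_sobol(model, d=3)
# the squared term (input 2) should dominate the output variance
```

    In the paper's workflow, indices like these identify the inputs whose uncertainty drives the output spread, so the remaining inputs can be fixed in a simplified model.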

  6. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    Directory of Open Access Journals (Sweden)

    Arika Ligmann-Zielinska

    Full Text Available Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to (1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and (2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.

  7. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; van Deursen, A.; Pinzger, M.

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require to update consistently both their implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The

  8. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require to update consistently both their implementation and its feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The

  9. Functional Analysis in Public Schools: A Summary of 90 Functional Analyses

    Science.gov (United States)

    Mueller, Michael M.; Nkosi, Ajamu; Hine, Jeffrey F.

    2011-01-01

    Several review and epidemiological studies have been conducted over recent years to inform behavior analysts of functional analysis outcomes. None to date have closely examined demographic and clinical data for functional analyses conducted exclusively in public school settings. The current paper presents a data-based summary of 90 functional…

  10. Modeling theoretical uncertainties in phenomenological analyses for particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)

    2017-04-15

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)
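
    The nuisance ("bias") treatment of a theory uncertainty can be sketched as scanning the bias over its assumed range and keeping the least-excluding p value. This is a deliberately simplified one-parameter Gaussian illustration of the idea, not the paper's full framework; the adaptive-range refinement is omitted.

```python
import math

def p_value(x_obs, mu, sigma):
    """Two-sided Gaussian p value for observing x_obs under mean mu."""
    z = abs(x_obs - mu) / sigma
    return math.erfc(z / math.sqrt(2.0))

def nuisance_p_value(x_obs, mu, sigma, delta):
    """Theory uncertainty modeled as a fixed bias d in [-delta, delta]:
    scan d and keep the largest (least-excluding) p value, a common
    conservative prescription (details vary between analyses)."""
    grid = [-delta + 2.0 * delta * k / 200 for k in range(201)]
    return max(p_value(x_obs, mu + d, sigma) for d in grid)

# a measurement 3 sigma away from the prediction is much less
# "excluded" once a theory bias comparable to sigma is allowed
p_naive = p_value(3.0, 0.0, 1.0)
p_nuis = nuisance_p_value(3.0, 0.0, 1.0, delta=1.0)
```

    The abstract's point about the arbitrariness of the bias range is visible here: the exclusion level depends directly on the chosen delta.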

  11. Application of some turbulence models

    International Nuclear Information System (INIS)

    Ushijima, Sho; Kato, Masanobu; Fujimoto, Ken; Moriya, Shoichi

    1985-01-01

    In order to predict numerically the thermal stratification and thermal striping phenomena in pool-type FBRs, it is necessary to simulate adequately various turbulence properties of flows with good turbulence models. This report presents numerical simulations of two-dimensional isothermal steady flows in a rectangular plenum using three types of turbulence models: the general k-ε model and two Reynolds stress models. The agreement of their results is examined and the properties of the models are compared. The main results are summarized as follows. (1) Concerning the mean velocity distributions, although small differences exist, the results of all three models agree with experimental values. (2) The anisotropy of the normal Reynolds stress (u'², v'²) distributions is quite well simulated by the two Reynolds stress models, but not adequately by the k-ε model; the shear Reynolds stress (-u'v') distributions of the three models show little difference and agree well with experiments. (3) Balances of the various terms of the Reynolds stress equations are examined. Comparing the results obtained by the analyses with those of previous experiments, both distributions show qualitative agreement. (author)

  12. Exergetic and thermoeconomic analyses of power plants

    International Nuclear Information System (INIS)

    Kwak, H.-Y.; Kim, D.-J.; Jeon, J.-S.

    2003-01-01

    Exergetic and thermoeconomic analyses were performed for a 500-MW combined cycle plant. In these analyses, mass and energy conservation laws were applied to each component of the system. Quantitative balances of the exergy and exergetic cost for each component, and for the whole system, were carefully considered. The exergoeconomic model, which represents the productive structure of the system considered, was used to visualize the cost formation process and the productive interaction between components. The computer program developed in this study can determine the production costs of power plants, such as gas- and steam-turbine plants and gas-turbine cogeneration plants. The program can also be used to study plant characteristics, namely, thermodynamic performance and sensitivity to changes in process and/or component design variables.
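
    The component-level balances used in such analyses can be sketched as follows. This is a generic SPECO-style cost balance with invented numbers; the paper's productive-structure model is more elaborate.

```python
def exergy_destruction(E_fuel, E_product):
    """Exergy balance of one component (MW): destruction = fuel - product."""
    return E_fuel - E_product

def unit_product_cost(c_fuel, E_fuel, E_product, Z_dot):
    """Cost balance c_P * E_P = c_F * E_F + Z for one component.
    c in $/GJ, exergy rates E in MW (1 MW = 3.6 GJ/h), Z in $/h."""
    return (c_fuel * E_fuel * 3.6 + Z_dot) / (E_product * 3.6)

# an illustrative combustor-like component
E_D = exergy_destruction(100.0, 80.0)            # 20 MW destroyed
c_P = unit_product_cost(5.0, 100.0, 80.0, 200.0)  # $/GJ of product exergy
```

    Chaining such balances through the productive structure is what lets the cost formation process be traced from fuel to final product.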

  13. Innovative Business Model for Realization of Sustainable Supply Chain at the Outsourcing Examination of Logistics Services

    Directory of Open Access Journals (Sweden)

    Péter Tamás

    2018-01-01

    Full Text Available The issue of sustainability is becoming more and more important because of the increase in the human population and the extraction of non-renewable natural resources. We can take decisive steps towards sustainability in the field of logistics services by improving logistics processes and/or applying new environment-friendly technologies. These steps are very important for companies because they have a significant effect on competitiveness. Nowadays significant changes are taking place in the methods and technologies applied in logistics services as part of the 4th Industrial Revolution. Most companies are not able to keep pace with these changes while carrying out their main activities using their own resources; consequently, logistics services are in many cases outsourced in the interest of maintaining or increasing competitiveness. The currently applied outsourcing examination process contains numerous shortcomings. We have elaborated a new business model to eliminate these shortcomings, namely the basic concept for an outsourcing investigation system integrated into the electronic marketplace. The paper introduces the current process of logistics service outsourcing examination and the elaborated business model concept.

  14. Radiation Exposure Analyses Supporting the Development of Solar Particle Event Shielding Technologies

    Science.gov (United States)

    Walker, Steven A.; Clowdsley, Martha S.; Abston, H. Lee; Simon, Matthew A.; Gallegos, Adam M.

    2013-01-01

    NASA has plans for long duration missions beyond low Earth orbit (LEO). Outside of LEO, large solar particle events (SPEs), which occur sporadically, can deliver a very large dose in a short amount of time. The relatively low proton energies make SPE shielding practical, and the possibility of the occurrence of a large event drives the need for SPE shielding for all deep space missions. The Advanced Exploration Systems (AES) RadWorks Storm Shelter Team was charged with developing minimal mass SPE storm shelter concepts for missions beyond LEO. The concepts developed included "wearable" shields, shelters that could be deployed at the onset of an event, and augmentations to the crew quarters. The radiation transport codes, human body models, and vehicle geometry tools contained in the On-Line Tool for the Assessment of Radiation In Space (OLTARIS) were used to evaluate the protection provided by each concept within a realistic space habitat and provide the concept designers with shield thickness requirements. Several different SPE models were utilized to examine the dependence of the shield requirements on the event spectrum. This paper describes the radiation analysis methods and the results of these analyses for several of the shielding concepts.

  15. Finite element analyses of a linear-accelerator electron gun

    Science.gov (United States)

    Iqbal, M.; Wasy, A.; Islam, G. U.; Zhou, Z.

    2014-02-01

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator, electron gun, were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in computer aided three-dimensional interactive application for finite element analyses through ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun is operating continuously since commissioning without any thermal induced failures for the BEPCII linear accelerator.

  16. Finite element analyses of a linear-accelerator electron gun

    Energy Technology Data Exchange (ETDEWEB)

    Iqbal, M., E-mail: muniqbal.chep@pu.edu.pk, E-mail: muniqbal@ihep.ac.cn [Centre for High Energy Physics, University of the Punjab, Lahore 45590 (Pakistan); Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Wasy, A. [Department of Mechanical Engineering, Changwon National University, Changwon 641773 (Korea, Republic of); Islam, G. U. [Centre for High Energy Physics, University of the Punjab, Lahore 45590 (Pakistan); Zhou, Z. [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China)

    2014-02-15

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator, electron gun, were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in computer aided three-dimensional interactive application for finite element analyses through ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun is operating continuously since commissioning without any thermal induced failures for the BEPCII linear accelerator.

  17. Finite element analyses of a linear-accelerator electron gun

    International Nuclear Information System (INIS)

    Iqbal, M.; Wasy, A.; Islam, G. U.; Zhou, Z.

    2014-01-01

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator, electron gun, were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in computer aided three-dimensional interactive application for finite element analyses through ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun is operating continuously since commissioning without any thermal induced failures for the BEPCII linear accelerator

  18. Analysing radio-frequency coil arrays in high-field magnetic resonance imaging by the combined field integral equation method

    Energy Technology Data Exchange (ETDEWEB)

    Wang Shumin; Duyn, Jeff H [Laboratory of Functional and Molecular Imaging, National Institute of Neurological Disorders and Stroke, National Institutes of Health, 10 Center Drive, 10/B1D728, Bethesda, MD 20892 (United States)

    2006-06-21

    We present the combined field integral equation (CFIE) method for analysing radio-frequency coil arrays in high-field magnetic resonance imaging (MRI). Three-dimensional models of coils and the human body were used to take into account the electromagnetic coupling. In the method of moments formulation, we applied triangular patches and the Rao-Wilton-Glisson basis functions to model arbitrarily shaped geometries. We first examined a rectangular loop coil to verify the CFIE method and also demonstrate its efficiency and accuracy. We then studied several eight-channel receive-only head coil arrays for 7.0 T SENSE functional MRI. Numerical results show that the signal dropout and the average SNR are two major concerns in SENSE coil array design. A good design should be a balance of these two factors.

  19. Radiobiological analyses based on cell cluster models

    International Nuclear Information System (INIS)

    Lin Hui; Jing Jia; Meng Damin; Xu Yuanying; Xu Liangfeng

    2010-01-01

    The influence of cell cluster dimension on EUD and TCP for targeted radionuclide therapy was studied using radiobiological methods. The radiobiological features of a tumor lacking activity in its core were evaluated and analyzed by relating EUD, TCP and SF. The results show that EUD increases with tumor dimension under a homogeneous activity distribution. If the extra-cellular activity is taken into consideration, the EUD increases by 47%. With activity lacking in the tumor center and under the requirement TCP = 0.90, the α cross-fire of ²¹¹At can compensate for an activity-lacking region of at most (48 μm)³ for the Nucleus source, but (72 μm)³ for the Cytoplasm, Cell Surface, Cell and Voxel sources. In the clinic, the physician could prefer the suggested dose of the Cell Surface source to guard against failure of local tumor control through under-dosing. Generally, TCP can well exhibit the difference in effect between under-dose and due-dose, but not between due-dose and over-dose, which makes TCP more suitable for choosing a therapy plan. EUD can well exhibit the differences between models and activity distributions, which makes it more suitable for research work. When using EUD to study the influence of an inhomogeneous activity distribution, one should keep the configuration and volume of the compared models consistent. (authors)
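
    The quantities being compared (SF, EUD, TCP) can be sketched for a tumor with a cold core using a simple alpha-only survival model. The radiosensitivity, cell number and dose values are illustrative, not the microdosimetric source models of the paper.

```python
import numpy as np

alpha = 0.35  # 1/Gy, illustrative radiosensitivity

def surviving_fraction(dose):
    """Linear (alpha-only) cell-survival model, a common simplification
    for high-LET emitters such as alpha sources (illustrative)."""
    return np.exp(-alpha * dose)

def eud(doses):
    """Equivalent uniform dose: the uniform dose giving the same mean
    survival as the actual inhomogeneous dose distribution."""
    return -np.log(surviving_fraction(doses).mean()) / alpha

def tcp(doses):
    """Poisson TCP with one clonogenic cell per dose entry."""
    return np.exp(-surviving_fraction(doses).sum())

uniform = np.full(1000, 20.0)   # Gy, homogeneous reference case
doses = uniform.copy()
doses[:100] = 0.0               # 10% cold core (activity-lack)
```

    Even a 10% unirradiated core collapses the Poisson TCP and drags the EUD far below the prescribed uniform dose, which is the under-dose effect the abstract discusses.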

  20. Analyses of out-of-pile freezing experiments by SIMMER-II

    International Nuclear Information System (INIS)

    Sawada, Tetsuo; Ninokata, Hisashi

    1994-01-01

    This paper describes the interpretation of the TRAN Simulation experiments performed in the SIMBATH facility at KfK. Two typical TRAN Simulation experiments were analyzed using the SIMMER-II code. The original TRAN experiments were performed at SNL in order to examine the freezing behavior of molten UO₂ injected into an annular channel. In the TRAN Simulation experiments of the SIMBATH series, similar freezing phenomena were investigated for molten thermite, i.e., a mixture of Al₂O₃ and iron, instead of UO₂. The analyses of the simulation experiments with the SIMMER-II code aimed at clarifying the applicability of the code and interpreting the freezing process during the experiments. The distribution of molten material deposited in the test section was compared between experimental measurements and SIMMER-II calculations. Through this study, it has been confirmed that SIMMER-II can reproduce the TRAN Simulation experiments well, within acceptable differences. The SIMMER-II calculations also suggested that further model improvements, e.g., freezing on a convex surface, would be effective for a better interpretation of the freezing phenomena. (author)

  1. Partial differential equation techniques for analysing animal movement: A comparison of different methods.

    Science.gov (United States)

    Wang, Yi-Shan; Potts, Jonathan R

    2017-03-07

    Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
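
    The Patlak-style drift-diffusion approximation can be checked against a simulated walk. For a Gaussian step kernel the approximation is exact, which corresponds to the smooth-kernel case where the abstract notes the approximations do well; the parameters below are toy values.

```python
import numpy as np

# Step kernel: each time step tau the animal moves by a draw from a
# (possibly biased) kernel with mean mu and standard deviation sigma.
tau, mu, sigma = 1.0, 0.2, 1.0

def simulate_walk(n_animals=50000, n_steps=100, seed=0):
    """Positions of independent 1-D walkers after n_steps steps."""
    rng = np.random.default_rng(seed)
    steps = rng.normal(mu, sigma, size=(n_animals, n_steps))
    return steps.sum(axis=1)

# Drift-diffusion (advection-diffusion) approximation of the walk:
#   u_t = -v u_x + D u_xx,  with  v = mu/tau,  D = sigma**2 / (2*tau)
t = 100 * tau
v, D = mu / tau, sigma**2 / (2 * tau)
x = simulate_walk()
mean_pde, var_pde = v * t, 2 * D * t   # moments of the Gaussian PDE solution
```

    With a non-smooth kernel (e.g. fixed-length jumps) the simulated moments and the PDE moments would no longer agree this closely, which is the failure mode the paper analyses.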

  2. Exploring Specialized STEM High Schools: Three Dissertation Studies Examining Commonalities and Differences Across Six Case Studies

    Science.gov (United States)

    Tofel-Grehl, Colby

    This dissertation is comprised of three independently conducted analyses of a larger investigation into the practices and features of specialized STEM high schools. While educators and policy makers advocate the development of many new specialized STEM high schools, little is known about the unique features and practices of these schools. The results of these manuscripts add to the literature exploring the promise of specialized STEM schools. Manuscript 1¹ is a qualitative investigation of the common features of STEM schools across multiple school model types. Schools were found to possess common cultural and academic features regardless of model type. Manuscript 2² builds on the findings of manuscript 1. With no meaningful differences found attributable to model type, the researchers used grounded theory to explore the relationships between observed differences among programs as related to the intensity of the STEM experience offered at schools. Schools were found to fall into two categories, high STEM intensity (HSI) and low STEM intensity (LSI), based on five major traits. Manuscript 3³ examines the commonalities and differences in classroom discourse and teachers' questioning techniques in STEM schools. It explicates these discursive practices in order to explore instructional practices across schools. It also examines factors that may influence classroom discourse such as discipline, level of teacher education, and course status as required or elective. Collectively, this research furthers the agenda of better understanding the potential advantages of specialized STEM high schools for preparing a future scientific workforce. ¹Tofel-Grehl, C., Callahan, C., & Gubbins, E. (2012). STEM high school communities: Common and differing features. Manuscript in preparation. ²Tofel-Grehl, C., Callahan, C., & Gubbins, E. (2012). Variations in the intensity of specialized science, technology, engineering, and mathematics (STEM) high schools. Manuscript in preparation

  3. Impact of covariate models on the assessment of the air pollution-mortality association in a single- and multipollutant context.

    Science.gov (United States)

    Sacks, Jason D; Ito, Kazuhiko; Wilson, William E; Neas, Lucas M

    2012-10-01

    With the advent of multicity studies, uniform statistical approaches have been developed to examine air pollution-mortality associations across cities. To assess the sensitivity of the air pollution-mortality association to different model specifications in a single and multipollutant context, the authors applied various regression models developed in previous multicity time-series studies of air pollution and mortality to data from Philadelphia, Pennsylvania (May 1992-September 1995). Single-pollutant analyses used daily cardiovascular mortality, fine particulate matter (particles with an aerodynamic diameter ≤2.5 µm; PM(2.5)), speciated PM(2.5), and gaseous pollutant data, while multipollutant analyses used source factors identified through principal component analysis. In single-pollutant analyses, risk estimates were relatively consistent across models for most PM(2.5) components and gaseous pollutants. However, risk estimates were inconsistent for ozone in all-year and warm-season analyses. Principal component analysis yielded factors with species associated with traffic, crustal material, residual oil, and coal. Risk estimates for these factors exhibited less sensitivity to alternative regression models compared with single-pollutant models. Factors associated with traffic and crustal material showed consistently positive associations in the warm season, while the coal combustion factor showed consistently positive associations in the cold season. Overall, mortality risk estimates examined using a source-oriented approach yielded more stable and precise risk estimates, compared with single-pollutant analyses.
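
    Principal component analysis of speciated data, as used above to extract source factors, can be sketched on synthetic two-source data with an SVD. The species names and mixing profiles are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days = 500

# Two latent "sources" (e.g. traffic, coal) mix into four species
traffic = rng.gamma(2.0, 1.0, n_days)
coal = rng.gamma(2.0, 1.0, n_days)
species = np.column_stack([
    1.0 * traffic + 0.1 * coal,   # elemental-carbon-like tracer
    0.8 * traffic + 0.2 * coal,   # NO2-like tracer
    0.1 * traffic + 1.0 * coal,   # sulfate-like tracer
    0.2 * traffic + 0.9 * coal,   # selenium-like tracer
]) + rng.normal(0, 0.05, (n_days, 4))

# PCA on standardized species concentrations via the SVD
Z = (species - species.mean(0)) / species.std(0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / (s**2).sum()
# two factors should capture nearly all of the variance
```

    The factor scores (columns of U scaled by s) are the kind of day-by-day source time series that would then enter the mortality regression in place of single pollutants.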

  4. Trajectory data analyses for pedestrian space-time activity study.

    Science.gov (United States)

    Qi, Feng; Du, Fei

    2013-02-25

    automatic module. Trajectory segmentation(5) involves the identification of indoor and outdoor parts from pre-processed space-time tracks. Again, both interactive visual segmentation and automatic segmentation are supported. Segmented space-time tracks are then analyzed to derive characteristics of one's activity space, such as activity radius. Density estimation and visualization are used to examine large amounts of trajectory data to model hot spots and interactions. We demonstrate both density surface mapping(6) and density volume rendering(7). We also include a couple of other exploratory data analysis (EDA) and visualization tools, such as Google Earth animation support and connection analysis. The suite of analytical as well as visual methods presented in this paper may be applied to any trajectory data for space-time activity studies.
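
    The density-surface idea can be sketched with a small numpy kernel density estimate over toy GPS fixes; the grid, bandwidth and cluster location are hypothetical.

```python
import numpy as np

def kde2d(points, grid_x, grid_y, h):
    """Gaussian kernel density estimate on a grid: the kind of density
    surface used to map activity hot spots from trajectory points."""
    gx, gy = np.meshgrid(grid_x, grid_y)
    d2 = ((gx.ravel()[:, None] - points[:, 0]) ** 2
          + (gy.ravel()[:, None] - points[:, 1]) ** 2)
    dens = np.exp(-0.5 * d2 / h**2).sum(axis=1)
    dens /= 2 * np.pi * h**2 * len(points)    # normalize the density
    return dens.reshape(gx.shape)

rng = np.random.default_rng(0)
# toy GPS fixes: one activity cluster (a "hot spot") plus scattered fixes
pts = np.vstack([rng.normal([2.0, 2.0], 0.3, (200, 2)),
                 rng.uniform(0.0, 5.0, (100, 2))])
gx = gy = np.linspace(0.0, 5.0, 50)
dens = kde2d(pts, gx, gy, h=0.3)
iy, ix = np.unravel_index(dens.argmax(), dens.shape)
hot = (gx[ix], gy[iy])   # should land near the cluster at (2, 2)
```

    Stacking such surfaces over time slices gives the density volumes mentioned in the abstract.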

  5. Examining the mechanisms of overgeneral autobiographical memory: capture and rumination, and impaired executive control.

    Science.gov (United States)

    Sumner, Jennifer A; Griffith, James W; Mineka, Susan

    2011-02-01

    Overgeneral autobiographical memory (OGM) is an important cognitive phenomenon in depression, but questions remain regarding the underlying mechanisms. The CaR-FA-X model (Williams et al., 2007) proposes three mechanisms that may contribute to OGM, but little work has examined the possible additive and/or interactive effects among them. We examined two mechanisms of CaR-FA-X: capture and rumination, and impaired executive control. We analysed data from undergraduates (N=109) scoring high or low on rumination who were presented with cues of high and low self-relevance on the Autobiographical Memory Test (AMT). Executive control was operationalised as performance on both the Stroop Colour-Word Task and the Controlled Oral Word Association Test (COWAT). Hierarchical generalised linear modelling was used to predict whether participants would generate a specific memory on a trial of the AMT. Higher COWAT scores, lower rumination, and greater cue self-relevance predicted a higher probability of a specific memory. There was also a rumination×cue self-relevance interaction: Higher (vs lower) rumination was associated with a lower probability of a specific memory primarily for low self-relevant cues. We found no evidence of interactions between these mechanisms. Findings are interpreted with respect to current autobiographical memory models. Future directions for OGM mechanism research are discussed. © 2011 Psychology Press, an imprint of the Taylor & Francis Group, an Informa business

  6. A case study of GWE satellite data impact on GLA assimilation analyses of two ocean cyclones

    Science.gov (United States)

    Gallimore, R. G.; Johnson, D. R.

    1986-01-01

    The effects of the Global Weather Experiment (GWE) data obtained on January 18-20, 1979 on Goddard Laboratory for Atmospheres assimilation analyses of simultaneous cyclones in the western Pacific and Atlantic oceans are examined. The ability of satellite data within assimilation models to determine the baroclinic structures of developing extratropical cyclones is evaluated. The impact of the satellite data on the amplitude and phase of the temperature structure within the storm domain, potential energy, and baroclinic growth rate is studied. The GWE data are compared with Data Systems Test results. It is noted that it is necessary to characterize satellite effects on the baroclinic structure of cyclone waves which degrade numerical weather predictions of cyclogenesis.

  7. CSNI Project for Fracture Analyses of Large-Scale International Reference Experiments (FALSIRE II)

    Energy Technology Data Exchange (ETDEWEB)

    Bass, B.R.; Pugh, C.E.; Keeney, J. [Oak Ridge National Lab., TN (United States); Schulz, H.; Sievers, J. [Gesellschaft fuer Anlagen- und Reaktorsicherheit (GRS) mbH, Koeln (Germany)

    1996-11-01

    A summary of Phase II of the Project for FALSIRE is presented. FALSIRE was created by the Fracture Assessment Group (FAG) of the OECD/NEA's Committee on the Safety of Nuclear Installations (CSNI) Principal Working Group No. 3. FALSIRE I in 1988 assessed fracture methods through interpretive analyses of 6 large-scale fracture experiments in reactor pressure vessel (RPV) steels under pressurized-thermal-shock (PTS) loading. In FALSIRE II, experiments examined cleavage fracture in RPV steels for a wide range of materials, crack geometries, and constraint and loading conditions. The cracks were relatively shallow, in the transition temperature region. Included were cracks showing either unstable extension or two stages of extension under transient thermal and mechanical loads. Crack initiation was also investigated in connection with clad surfaces and with biaxial load. Within FALSIRE II, comparative assessments were performed for 7 reference fracture experiments based on 45 analyses received from 22 organizations representing 12 countries. Temperature distributions in thermal-shock-loaded samples were approximated with high accuracy and small scatter bands. Structural response was predicted reasonably well; discrepancies could usually be traced to the assumed material models and approximated material properties. Almost all participants elected to use the finite element method.

  8. CSNI Project for Fracture Analyses of Large-Scale International Reference Experiments (FALSIRE II)

    International Nuclear Information System (INIS)

    Bass, B.R.; Pugh, C.E.; Keeney, J.; Schulz, H.; Sievers, J.

    1996-11-01

    A summary of Phase II of the Project for FALSIRE is presented. FALSIRE was created by the Fracture Assessment Group (FAG) of the OECD/NEA's Committee on the Safety of Nuclear Installations (CSNI) Principal Working Group No. 3. FALSIRE I in 1988 assessed fracture methods through interpretive analyses of 6 large-scale fracture experiments in reactor pressure vessel (RPV) steels under pressurized-thermal-shock (PTS) loading. In FALSIRE II, experiments examined cleavage fracture in RPV steels for a wide range of materials, crack geometries, and constraint and loading conditions. The cracks were relatively shallow, in the transition temperature region. Included were cracks showing either unstable extension or two stages of extension under transient thermal and mechanical loads. Crack initiation was also investigated in connection with clad surfaces and with biaxial load. Within FALSIRE II, comparative assessments were performed for 7 reference fracture experiments based on 45 analyses received from 22 organizations representing 12 countries. Temperature distributions in thermal-shock-loaded samples were approximated with high accuracy and small scatter bands. Structural response was predicted reasonably well; discrepancies could usually be traced to the assumed material models and approximated material properties. Almost all participants elected to use the finite element method.

  9. Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality.

    Science.gov (United States)

    Woodley, Hayden J R; Bourdage, Joshua S; Ogunfowora, Babatunde; Nguyen, Brenda

    2015-01-01

    The construct of equity sensitivity describes an individual's preference about his/her desired input to outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called "Benevolents." Individuals low on equity sensitivity are more outcome oriented, and are described as "Entitleds." Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity, and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity.

  10. Examining Equity Sensitivity: An Investigation Using the Big Five and HEXACO Models of Personality

    Science.gov (United States)

    Woodley, Hayden J. R.; Bourdage, Joshua S.; Ogunfowora, Babatunde; Nguyen, Brenda

    2016-01-01

    The construct of equity sensitivity describes an individual's preference about his/her desired input to outcome ratio. Individuals high on equity sensitivity tend to be more input oriented, and are often called “Benevolents.” Individuals low on equity sensitivity are more outcome oriented, and are described as “Entitleds.” Given that equity sensitivity has often been described as a trait, the purpose of the present study was to examine major personality correlates of equity sensitivity, so as to inform both the nature of equity sensitivity, and the potential processes through which certain broad personality traits may relate to outcomes. We examined the personality correlates of equity sensitivity across three studies (total N = 1170), two personality models (i.e., the Big Five and HEXACO), the two most common measures of equity sensitivity (i.e., the Equity Preference Questionnaire and Equity Sensitivity Inventory), and using both self and peer reports of personality (in Study 3). Although results varied somewhat across samples, the personality variables of Conscientiousness and Honesty-Humility, followed by Agreeableness, were the most robust predictors of equity sensitivity. Individuals higher on these traits were more likely to be Benevolents, whereas those lower on these traits were more likely to be Entitleds. Although some associations between Extraversion, Openness, and Neuroticism and equity sensitivity were observed, these were generally not robust. Overall, it appears that there are several prominent personality variables underlying equity sensitivity, and that the addition of the HEXACO model's dimension of Honesty-Humility substantially contributes to our understanding of equity sensitivity. PMID:26779102

  11. State of the art in establishing computed models of adsorption processes to serve as a basis of radionuclide migration assessment for safety analyses

    International Nuclear Information System (INIS)

    Koss, V.

    1991-01-01

    An important point in the safety analysis of an underground repository is the adsorption of radionuclides in the overlying cover. Adsorption may be judged from experimental results or from model calculations. Because of the reliability required in safety analyses, experimental results need to be supported by theoretical calculations. At present there is no single, generally accepted thermodynamic model of adsorption. This work therefore reviews existing equilibrium models of adsorption. Limitations of the K_d concept and of the Freundlich and Langmuir adsorption isotherms are discussed. The surface ionisation and complexation (electrical double layer) model is explained in full, as is the criticism of this model. The application of simple surface complexation models to adsorption experiments in natural systems is stressed, as is experimental and modelling work on systems from Gorleben. Hints are given on how to approach the modelling of adsorption in Gorleben systems in the future. (orig.) [de
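The three classical equilibrium descriptions named in this record have simple closed forms. A minimal sketch, with all function names and parameter values illustrative (concentrations and sorbed amounts in arbitrary consistent units):

```python
def kd_isotherm(c, kd):
    """Linear K_d model: sorbed amount proportional to solution concentration."""
    return kd * c

def freundlich(c, k, n):
    """Freundlich isotherm: q = K * c**n (n < 1 gives the typical concave curve)."""
    return k * c ** n

def langmuir(c, q_max, b):
    """Langmuir isotherm: q = q_max * b * c / (1 + b * c); saturates at q_max."""
    return q_max * b * c / (1.0 + b * c)
```

The limitations mentioned in the abstract show up directly in these forms: the K_d and Freundlich models grow without bound as concentration rises, while only the Langmuir form saturates at a finite sorption capacity.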

  12. Physics Analyses in the Design of the HFIR Cold Neutron Source

    International Nuclear Information System (INIS)

    Bucholz, J.A.

    1999-01-01

    Physics analyses have been performed to characterize the performance of the cold neutron source to be installed in the High Flux Isotope Reactor at the Oak Ridge National Laboratory in the near future. This paper describes the physics models developed and the analyses performed to support the design of the cold source. These analyses have provided important parametric performance information, such as the cold neutron brightness down the beam tube and the various component heat loads, that has been used to develop the reference cold source concept.

  13. Analysing spatially extended high-dimensional dynamics by recurrence plots

    Energy Technology Data Exchange (ETDEWEB)

    Marwan, Norbert, E-mail: marwan@pik-potsdam.de [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Kurths, Jürgen [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Humboldt Universität zu Berlin, Institut für Physik (Germany); Nizhny Novgorod State University, Department of Control Theory, Nizhny Novgorod (Russian Federation); Foerster, Saskia [GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing, Telegrafenberg, 14473 Potsdam (Germany)

    2015-05-08

    Recurrence plot based measures of complexity are powerful tools for characterizing complex dynamics. In this letter we show the potential of selected recurrence plot measures for the investigation of even high-dimensional dynamics. We apply this method to spatially extended chaos, such as that derived from the Lorenz96 model, and show that the recurrence plot based measures can qualitatively characterize typical dynamical properties such as chaotic or periodic dynamics. Moreover, we demonstrate its power by analysing satellite image time series of vegetation cover with contrasting dynamics as a spatially extended and potentially high-dimensional example from the real world. - Highlights: • We use recurrence plots for analysing spatially extended dynamics. • We investigate the high-dimensional chaos of the Lorenz96 model. • The approach distinguishes different spatio-temporal dynamics. • We use the method for studying vegetation cover time series.
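The recurrence plot underlying such measures is just a thresholded distance matrix. A minimal scalar sketch, where the test signal, threshold ε, and the recurrence rate (the simplest recurrence quantification measure) are chosen purely for illustration:

```python
import math

def recurrence_matrix(series, eps):
    """R[i][j] = 1 when |x_i - x_j| <= eps (scalar series, absolute distance)."""
    n = len(series)
    return [[1 if abs(series[i] - series[j]) <= eps else 0 for j in range(n)]
            for i in range(n)]

def recurrence_rate(R):
    """Fraction of recurrent points: the simplest recurrence-based measure."""
    n = len(R)
    return sum(map(sum, R)) / (n * n)

# a periodic signal produces diagonal lines offset by the period (here 8)
x = [math.sin(2 * math.pi * t / 8) for t in range(16)]
R = recurrence_matrix(x, eps=0.1)
```

For a periodic signal like `x`, the matrix shows long diagonal lines at multiples of the period, whereas chaotic dynamics yield short, broken diagonals; measures built on these line structures are what distinguish the regimes.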

  14. Two Sides of the Same Coin: ERP and Wavelet Analyses of Visual Potentials Evoked and Induced by Task-Relevant Faces.

    Science.gov (United States)

    Van der Lubbe, Rob H J; Szumska, Izabela; Fajkowska, Małgorzata

    2016-01-01

    New analysis techniques for the electroencephalogram (EEG), such as wavelet analysis, open the possibility to address questions that may largely improve our understanding of the EEG and clarify its relation with event-related potentials (ERPs). Three issues were addressed. 1) To what extent can early ERP components be described as transient evoked oscillations in specific frequency bands? 2) Total EEG power (TP) after a stimulus consists of pre-stimulus baseline power (BP), evoked power (EP), and induced power (IP), but what are their respective contributions? 3) The Phase Reset model proposes that BP predicts EP, while the Evoked model holds that BP is unrelated to EP; which model is the most valid one? EEG results on NoGo trials for 123 individuals who took part in an experiment with emotional facial expressions were examined by computing ERPs and by performing wavelet analyses on the raw EEG and on ERPs. After performing several multiple regression analyses, we obtained the following answers. First, the P1, N1, and P2 components can by and large be described as transient oscillations in the α and θ bands. Secondly, it appears possible to estimate the separate contributions of EP, BP, and IP to TP, and importantly, the contribution of IP is mostly larger than that of EP. Finally, no strong support was obtained for either the Phase Reset or the Evoked model. Recent models are discussed that may better explain the relation between the raw EEG and ERPs.
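The decomposition in issue 2 can be illustrated numerically: evoked power is the power of the trial-averaged ERP (the phase-locked part), and induced power is the remainder of the total. A minimal sketch with a synthetic phase-locked-plus-random-phase signal, ignoring the pre-stimulus baseline term; all signal parameters are invented for illustration:

```python
import math
import random

def power(signal):
    """Mean squared amplitude of a signal."""
    return sum(s * s for s in signal) / len(signal)

def decompose(trials):
    """TP = mean per-trial power, EP = power of the trial-averaged ERP,
    IP = TP - EP (the non-phase-locked, 'induced' remainder)."""
    n = len(trials[0])
    erp = [sum(trial[t] for trial in trials) / len(trials) for t in range(n)]
    tp = sum(power(trial) for trial in trials) / len(trials)
    ep = power(erp)
    return tp, ep, tp - ep

random.seed(1)
# each trial: a phase-locked sine plus a sine with random phase (induced part)
trials = []
for _ in range(50):
    phi = random.uniform(0, 2 * math.pi)
    trials.append([math.sin(0.3 * t) + math.sin(0.3 * t + phi)
                   for t in range(200)])
tp, ep, ip = decompose(trials)
```

Averaging cancels the random-phase component, so `ep` captures only the phase-locked part and `ip` is strictly non-negative; with enough trials, both here come out near 0.5.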

  15. Measuring social capital through multivariate analyses for the IQ-SC.

    Science.gov (United States)

    Campos, Ana Cristina Viana; Borges, Carolina Marques; Vargas, Andréa Maria Duarte; Gomes, Viviane Elisangela; Lucas, Simone Dutra; Ferreira e Ferreira, Efigênia

    2015-01-20

    Social capital can be viewed as a societal process that works toward the common good as well as toward the good of the collective, based on trust, reciprocity, and solidarity. Our study aimed to present two multivariate statistical analyses to examine the formation of latent classes of social capital using the IQ-SC and to identify the most important factors in building an indicator of individual social capital. A cross-sectional study was conducted in 2009 among working adolescents supported by a Brazilian NGO. The sample consisted of 363 individuals, and data were collected using the World Bank Questionnaire for measuring social capital. First, the participants were grouped by a segmentation analysis using the Two Step Cluster method, with Euclidean distance and the centroid criterion used to aggregate answers. Discriminant analysis, using specific weights for each item, was then applied to validate the cluster analysis by maximizing the variance among the groups relative to the variance within the clusters. "Community participation" and "trust in one's neighbors" contributed significantly to the development of the model with two distinct discriminant functions (p < 0.001). The majority of cases (95.0%) and non-cases (93.1%) were correctly classified by discriminant analysis. The two multivariate analyses (segmentation analysis and canonical discriminant analysis), used together, can be considered good choices for measuring social capital. Our results indicate that it is possible to form three social capital groups (low, medium and high) using the IQ-SC.
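The centroid criterion at the heart of the segmentation step assigns each respondent to the cluster whose centroid is nearest. A toy nearest-centroid sketch; the centroid values, item scales, and the two items shown are invented for illustration, not taken from the study:

```python
import math

def assign(point, centroids):
    """Index of the nearest centroid to a respondent's score vector (Euclidean)."""
    return min(range(len(centroids)),
               key=lambda k: math.dist(point, centroids[k]))

# hypothetical low/medium/high social-capital centroids on two IQ-SC items
# ("community participation", "trust in one's neighbors"), each scored 0-10
centroids = [(2.0, 2.0), (5.0, 5.0), (8.0, 8.0)]
labels = ["low", "medium", "high"]

group = labels[assign((7.5, 9.0), centroids)]  # -> "high"
```

A subsequent discriminant analysis, as in the study, would then check how well linear combinations of the items reproduce these group assignments.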

  16. RETRAN nonequilibrium two-phase flow model for operational transient analyses

    International Nuclear Information System (INIS)

    Paulsen, M.P.; Hughes, E.D.

    1982-01-01

    The field balance equations, flow-field models, and equation of state for a nonequilibrium two-phase flow model for RETRAN are given. The differential field balance equations are: (1) conservation of mixture mass; (2) conservation of vapor mass; (3) balance of mixture momentum; (4) a dynamic-slip model for the velocity difference; and (5) conservation of mixture energy. The equation of state is formulated such that the liquid phase may be subcooled, saturated, or superheated, while the vapor phase is constrained to be at the saturation state. The dynamic-slip model includes wall-to-phase and interphase momentum exchanges. A mechanistic vapor generation model is used to describe vapor production under bulk subcooling conditions. The speed of sound for the mixture under nonequilibrium conditions is obtained from the equation-of-state formulation. The steady-state and transient solution methods are described.

  17. Mitochondrial dysfunction, oxidative stress and apoptosis revealed by proteomic and transcriptomic analyses of the striata in two mouse models of Parkinson’s disease

    Energy Technology Data Exchange (ETDEWEB)

    Chin, Mark H.; Qian, Weijun; Wang, Haixing; Petyuk, Vladislav A.; Bloom, Joshua S.; Sforza, Daniel M.; Lacan, Goran; Liu, Dahai; Khan, Arshad H.; Cantor, Rita M.; Bigelow, Diana J.; Melega, William P.; Camp, David G.; Smith, Richard D.; Smith, Desmond J.

    2008-02-10

    The molecular mechanisms underlying the changes in the nigrostriatal pathway in Parkinson disease (PD) are not completely understood. Here we use mass spectrometry and microarrays to study the proteomic and transcriptomic changes in the striatum of two mouse models of PD, induced by the distinct neurotoxins 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) and methamphetamine (METH). Proteomic analyses resulted in the identification and relative quantification of 912 proteins with two or more unique peptides and 85 proteins with significant abundance changes following neurotoxin treatment. Similarly, microarray analyses revealed 181 genes with significant changes in mRNA following neurotoxin treatment. The combined protein and gene list provides a clearer picture of the potential mechanisms underlying neurodegeneration observed in PD. Functional analysis of this combined list revealed a number of significant categories, including mitochondrial dysfunction, oxidative stress response and apoptosis. Additionally, codon usage and miRNAs may play an important role in translational control in the striatum. These results constitute one of the largest datasets integrating protein and transcript changes for these neurotoxin models with many similar endpoint phenotypes but distinct mechanisms.

  18. Proposed Testing to Assess the Accuracy of Glass-To-Metal Seal Stress Analyses.

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, Robert S.; Emery, John M; Tandon, Rajan; Antoun, Bonnie R.; Stavig, Mark E.; Newton, Clay S.; Gibson, Cory S; Bencoe, Denise N.

    2014-09-01

    The material characterization tests conducted on 304L VAR stainless steel and Schott 8061 glass have provided higher fidelity data for calibration of material models used in Glass-to-Metal (GTM) seal analyses. Specifically, a Thermo-Multi-Linear Elastic Plastic (thermo-MLEP) material model has been defined for SS304L, and the Simplified Potential Energy Clock nonlinear viscoelastic model has been calibrated for the S8061 glass. To assess the accuracy of finite element stress analyses of GTM seals, a suite of tests is proposed to provide data for comparison to model predictions.

  19. Combining Teacher Assessment Scores with External Examination ...

    African Journals Online (AJOL)

    Combining Teacher Assessment Scores with External Examination Scores for Certification: Comparative Study of Four Statistical Models. ... University entrance examination scores in mathematics were obtained for a subsample of 115 ...

  20. Type 2 diabetes mellitus unawareness, prevalence, trends and risk factors: National Health and Nutrition Examination Survey (NHANES) 1999-2010.

    Science.gov (United States)

    Zhang, Nana; Yang, Xin; Zhu, Xiaolin; Zhao, Bin; Huang, Tianyi; Ji, Qiuhe

    2017-04-01

    Objectives To determine whether the associations with key risk factors in patients with diagnosed and undiagnosed type 2 diabetes mellitus (T2DM) are different using data from the National Health and Nutrition Examination Survey (NHANES) from 1999 to 2010. Methods The study analysed the prevalence and association with risk factors of undiagnosed and diagnosed T2DM using a regression model and a multinomial logistic regression model. Data from the NHANES 1999-2010 were used for the analyses. Results The study analysed data from 10 570 individuals. The overall prevalence of diagnosed and undiagnosed T2DM increased significantly from 1999 to 2010. The prevalence of undiagnosed T2DM was significantly higher in non-Hispanic whites, in individuals 130-159 mg/dl) or very high (≥220 mg/dl) non-high-density lipoprotein cholesterol levels compared with diagnosed T2DM. Body mass index, low economic status or low educational level had no effect on T2DM diagnosis rates. Though diagnosed T2DM was associated with favourable diet/carbohydrate intake behavioural changes, it had no effect on physical activity levels. Conclusion The overall T2DM prevalence increased between 1999 and 2010, particularly for undiagnosed T2DM in patients that were formerly classified as low risk.