WorldWideScience

Sample records for modeling HLM analyses

  1. Analysis of SX farm leak histories - Historical leak model (HLM)

    International Nuclear Information System (INIS)

    Fredenburg, E.A.

    1998-01-01

This report uses readily available historical information to better define the volume, chemical composition, and Cs-137/Sr-90 amounts for leaks that have occurred in the past for tanks SX-108, SX-109, SX-111, and SX-112. In particular, a Historical Leak Model (HLM) is developed as a month-by-month reconciliation of tank levels, fill records, and calculated boil-off rates for these tanks. The HLM analysis is an independent leak estimate that reconstructs the tank thermal histories, thereby deriving each tank's evaporative volume loss and, by difference, its unaccounted losses as well. The HLM analysis was meant to demonstrate the viability of its approach, not necessarily to establish the HLM leak estimates as definitive. Past leak estimates for these tanks have invariably resorted to soil-wetting arguments, but the extent of soil contaminated by each leak has always been highly uncertain. There is also a great deal of uncertainty in the HLM that was not quantified in this report but will be addressed later. These four tanks (among others) were used from 1956 to 1975 for storage of high-level waste from the Redox process at Hanford. During their operation, tank waste temperatures were often as high as 150 C (300 F), but were more typically around 130 C. The primary tank cooling was by evaporation of tank waste, and therefore periodic replacement of lost volume with water was necessary to maintain each tank's inventory. This active reflux of waste resulted in very substantial turnovers in tank inventory as well as significant structural degradation of these tanks. As a result of the loss of structural integrity, each of these tanks leaked during its active period of operation. Unfortunately, the large turnover in tank volume associated with their reflux cooling has made a determination of leak volumes very difficult.
During much of these tanks' operational histories, inventory losses from evaporative cooling could have effectively masked any volume…
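
The month-by-month reconciliation the HLM performs can be sketched as a simple volume balance. The sketch below is illustrative only: the figures, the field layout, and the `unaccounted_loss` helper are invented for this example, and the actual model also reconstructs each tank's thermal history to derive the evaporation term.

```python
# Illustrative monthly volume balance for a reflux-cooled waste tank:
#   unaccounted loss = starting level + water added - evaporation - ending level
# All numbers below are fabricated; the report's HLM derives the evaporation
# term from reconstructed thermal histories rather than assuming it.

months = [
    # (level_start_kL, water_added_kL, evaporated_kL, level_end_kL)
    (3000.0, 150.0, 120.0, 3020.0),
    (3020.0, 140.0, 130.0, 3010.0),
    (3010.0, 160.0, 125.0, 3015.0),
]

def unaccounted_loss(level_start, added, evaporated, level_end):
    """Volume not explained by fills, evaporation, or level change (a leak candidate)."""
    return level_start + added - evaporated - level_end

total_leak = sum(unaccounted_loss(*m) for m in months)
print(round(total_leak, 1))  # cumulative unaccounted volume, kL
```

Each month's residual is small, but over a long operating history the cumulative unaccounted volume is what the HLM interprets as a possible leak.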

  2. HLM in Cluster-Randomised Trials--Measuring Efficacy across Diverse Populations of Learners

    Science.gov (United States)

    Hegedus, Stephen; Tapper, John; Dalton, Sara; Sloane, Finbarr

    2013-01-01

    We describe the application of Hierarchical Linear Modelling (HLM) in a cluster-randomised study to examine learning algebraic concepts and procedures in an innovative, technology-rich environment in the US. HLM is applied to measure the impact of such treatment on learning and on contextual variables. We provide a detailed description of such…

  3. How to do Meta-Analysis using HLM software

    OpenAIRE

    Petscher, Yaacov

    2013-01-01

This is a step-by-step presentation of how to run a meta-analysis using HLM software. Because it is a variance-known model, it is not run through the GUI but in batch mode. These slides show how to prepare the data and run the analysis.
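
The "variance-known" model behind HLM's batch-mode meta-analysis is, in its fixed-effect form, just an inverse-variance-weighted mean: each study supplies an effect size and a known sampling variance. A minimal Python sketch with made-up effect sizes and variances:

```python
# Fixed-effect ("variance-known") meta-analysis: pool effect sizes d_i with
# known sampling variances v_i using inverse-variance weights w_i = 1/v_i.
# The study data below are hypothetical.

studies = [
    # (effect_size, known_variance)
    (0.30, 0.04),
    (0.10, 0.01),
    (0.50, 0.25),
]

weights = [1.0 / v for _, v in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se_pooled = (1.0 / sum(weights)) ** 0.5  # standard error of the pooled effect

print(round(pooled, 4), round(se_pooled, 4))
```

Note how the most precise study (smallest variance) dominates the pooled estimate, which is exactly the behaviour the weighting is designed to produce.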

  4. The Effects Of Gender, Engineering Identification, and Engineering Program Expectancy On Engineering Career Intentions: Applying Hierarchical Linear Modeling (HLM) In Engineering Education Research

    Science.gov (United States)

    Tendhar, Chosang; Paretti, Marie C.; Jones, Brett D.

    2017-01-01

This study had three purposes, and four hypotheses were tested. The three purposes were: (1) to use hierarchical linear modeling (HLM) to investigate whether students' perceptions of their engineering career intentions changed over time; (2) to use HLM to test the effects of gender, engineering identification (the degree to which an individual values a…

  5. Hierarchical linear modeling (HLM) of longitudinal brain structural and cognitive changes in alcohol-dependent individuals during sobriety

    DEFF Research Database (Denmark)

    Yeh, P.H.; Gazdzinski, S.; Durazzo, T.C.

    2007-01-01

…-derived brain volume changes and cognitive changes in abstinent alcohol-dependent individuals as a function of smoking status, smoking severity, and drinking quantities. Methods: Twenty non-smoking recovering alcoholics (nsALC) and 30 age-matched smoking recovering alcoholics (sALC) underwent quantitative MRI… time points. Using HLM, we modeled volumetric and cognitive outcome measures as a function of cigarette and alcohol use variables. Results: Different hierarchical linear models with unique model structures are presented and discussed. The results show that smaller brain volumes at baseline predict faster brain volume gains, which were also related to greater smoking and drinking severities. Over 7 months of abstinence from alcohol, sALC compared to nsALC showed less improvement in visuospatial learning and memory despite larger brain volume gains and ventricular shrinkage. Conclusions: Different…

  6. Heat transfer on HLM cooled wire-spaced fuel pin bundle simulator in the NACIE-UP facility

    Energy Technology Data Exchange (ETDEWEB)

    Di Piazza, Ivan, E-mail: ivan.dipiazza@enea.it [Italian National Agency for New Technologies, Energy and Sustainable Economic Development, C.R. ENEA Brasimone, Camugnano (Italy); Angelucci, Morena; Marinari, Ranieri [University of Pisa, Dipartimento di Ingegneria Civile e Industriale, Pisa (Italy); Tarantino, Mariano [Italian National Agency for New Technologies, Energy and Sustainable Economic Development, C.R. ENEA Brasimone, Camugnano (Italy); Forgione, Nicola [University of Pisa, Dipartimento di Ingegneria Civile e Industriale, Pisa (Italy)

    2016-04-15

Highlights: • Experiments with a wire-wrapped 19-pin fuel bundle cooled by LBE. • Wall and bulk temperature measurements at three axial positions. • Heat transfer and error analysis in the range of low mass flow rates and Péclet numbers. • Comparison of local and section-averaged Nusselt numbers with correlations. - Abstract: The NACIE-UP experimental facility at the ENEA Brasimone Research Centre (Italy) made it possible to evaluate the heat transfer coefficient of a wire-spaced fuel bundle cooled by lead-bismuth eutectic (LBE). Lead and lead-bismuth eutectic are very attractive as coolants for GEN-IV fast reactors due to their good thermo-physical properties and their capability to fulfil the GEN-IV goals. Nevertheless, few experimental data on heat transfer with heavy liquid metals (HLM) are available in the literature, and only a few address the specific topic of wire-spaced fuel bundles cooled by HLM. Additional analysis of the thermo-fluid dynamic behaviour of HLM inside the subchannels of a rod bundle is necessary to support the design and safety assessment of GEN-IV/ADS reactors. In this context, a wire-spaced 19-pin fuel bundle was installed inside the NACIE-UP facility. The pin bundle is equipped with 67 thermocouples to monitor temperatures and analyse the heat transfer behaviour in different sub-channels and axial positions. The experimental campaign was part of the SEARCH FP7 EU project to support the development of the MYRRHA irradiation facility (SCK-CEN). Natural and mixed circulation flow regimes were investigated, with subchannel Reynolds numbers in the range Re = 1000–10,000 and heat fluxes in the range q″ = 50–500 kW/m². Local Nusselt numbers were calculated for five sub-channels in different ranks at three axial positions. A section-averaged Nusselt number was also defined and calculated. The local Nusselt data showed good consistency with some of the correlations existing in the literature for heat transfer in liquid metals.
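
For readers unfamiliar with how a local Nusselt number is extracted from such wall and bulk temperature measurements, the standard definitions are h = q″/(T_wall − T_bulk) and Nu = h·D_h/k. The sketch below uses rough, illustrative numbers for LBE properties and temperatures, not NACIE-UP data:

```python
# Local Nusselt number from a wall heat flux and a wall/bulk temperature pair.
# All values are illustrative placeholders, not measurements from the facility.

q_flux = 200e3              # wall heat flux, W/m^2 (mid-range of 50-500 kW/m^2)
t_wall, t_bulk = 320.0, 300.0  # local wall and bulk temperatures, deg C (assumed)
d_h = 0.004                 # subchannel hydraulic diameter, m (assumed)
k_lbe = 12.0                # LBE thermal conductivity, W/(m K) (approximate)

h = q_flux / (t_wall - t_bulk)   # local heat transfer coefficient, W/(m^2 K)
nu_local = h * d_h / k_lbe       # local Nusselt number

print(round(h, 1), round(nu_local, 3))
```

The low Nusselt values typical of liquid metals reflect their high thermal conductivity: conduction carries a large share of the heat even in turbulent flow.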

  7. Heat transfer on HLM cooled wire-spaced fuel pin bundle simulator in the NACIE-UP facility

    International Nuclear Information System (INIS)

    Di Piazza, Ivan; Angelucci, Morena; Marinari, Ranieri; Tarantino, Mariano; Forgione, Nicola

    2016-01-01

Highlights: • Experiments with a wire-wrapped 19-pin fuel bundle cooled by LBE. • Wall and bulk temperature measurements at three axial positions. • Heat transfer and error analysis in the range of low mass flow rates and Péclet numbers. • Comparison of local and section-averaged Nusselt numbers with correlations. - Abstract: The NACIE-UP experimental facility at the ENEA Brasimone Research Centre (Italy) made it possible to evaluate the heat transfer coefficient of a wire-spaced fuel bundle cooled by lead-bismuth eutectic (LBE). Lead and lead-bismuth eutectic are very attractive as coolants for GEN-IV fast reactors due to their good thermo-physical properties and their capability to fulfil the GEN-IV goals. Nevertheless, few experimental data on heat transfer with heavy liquid metals (HLM) are available in the literature, and only a few address the specific topic of wire-spaced fuel bundles cooled by HLM. Additional analysis of the thermo-fluid dynamic behaviour of HLM inside the subchannels of a rod bundle is necessary to support the design and safety assessment of GEN-IV/ADS reactors. In this context, a wire-spaced 19-pin fuel bundle was installed inside the NACIE-UP facility. The pin bundle is equipped with 67 thermocouples to monitor temperatures and analyse the heat transfer behaviour in different sub-channels and axial positions. The experimental campaign was part of the SEARCH FP7 EU project to support the development of the MYRRHA irradiation facility (SCK-CEN). Natural and mixed circulation flow regimes were investigated, with subchannel Reynolds numbers in the range Re = 1000–10,000 and heat fluxes in the range q″ = 50–500 kW/m². Local Nusselt numbers were calculated for five sub-channels in different ranks at three axial positions. A section-averaged Nusselt number was also defined and calculated. The local Nusselt data showed good consistency with some of the correlations existing in the literature for heat transfer in liquid metals for…

  8. Current and future research on corrosion and thermalhydraulic issues of HLM cooled reactors and on LMR fuels for fast reactor systems

    International Nuclear Information System (INIS)

    Knebel, J.U.; Konings, R.J.M.

    2002-01-01

    Heavy liquid metals (HLM) such as lead (Pb) or lead-bismuth eutectic (Pb-Bi) are currently investigated world-wide as coolant for nuclear power reactors and for accelerator driven systems (ADS). Besides the advantages of HLM as coolant and spallation material, e.g. high boiling point, low reactivity with water and air and a high neutron yield, some technological issues, such as high corrosion effects in contact with steels and thermalhydraulic characteristics, need further experimental investigations and physical model improvements and validations. The paper describes some typical HLM cooled reactor designs, which are currently considered, and outlines the technological challenges related to corrosion, thermalhydraulic and fuel issues. In the first part of the presentation, the status of presently operated or planned test facilities related to corrosion and thermalhydraulic questions will be discussed. First approaches to solve the corrosion problem will be given. The approach to understand and model thermalhydraulic issues such as heat transfer, turbulence, two-phase flow and instrumentation will be outlined. In the second part of the presentation, an overview will be given of the advanced fuel types that are being considered for future liquid metal reactor (LMR) systems. Advantages and disadvantages will be discussed in relation to fabrication technology and fuel cycle considerations. For the latter, special attention will be given to the partitioning and transmutation potential. Metal, oxide and nitride fuel materials will be discussed in different fuel forms and packings. For both parts of the presentation, an overview of existing co-operations and networks will be given and the needs for future research work will be identified. (authors)

  9. The Impact of Salient Role Stress on Trajectories of Health in Late Life among Survivors of a Seven-Year Panel Study: Analyses of Individual Growth Curves

    Science.gov (United States)

    Shaw, Benjamin A.; Krause, Neal

    2002-01-01

    The purpose of this study is twofold: 1) to model changes in health over time among older adults; and 2) to assess the degree to which stress arising in salient social roles accounts for individual variation in these changes. Individual growth curve analyses using Hierarchical Linear Modeling (HLM) software were employed with longitudinal data…

  10. HLM fuel pin bundle experiments in the CIRCE pool facility

    Energy Technology Data Exchange (ETDEWEB)

    Martelli, Daniele, E-mail: daniele.martelli@ing.unipi.it [University of Pisa, Department of Civil and Industrial Engineering, Pisa (Italy); Forgione, Nicola [University of Pisa, Department of Civil and Industrial Engineering, Pisa (Italy); Di Piazza, Ivan; Tarantino, Mariano [Italian National Agency for New Technologies, Energy and Sustainable Economic Development, C.R. ENEA Brasimone (Italy)

    2015-10-15

Highlights: • The experimental results represent the first set of values for an LBE pool facility. • Heat transfer is investigated for a 37-pin electrical bundle cooled by LBE. • Experimental data are presented together with a detailed error analysis. • Nu is computed as a function of Pe and compared with correlations. • The experimental Nu is about 25% lower than the Nu derived from correlations. - Abstract: Since Lead-cooled Fast Reactors (LFR) were conceptualized in the frame of the GEN IV International Forum (GIF), great interest has focused on the development and testing of new technologies related to HLM nuclear reactors. In this frame the Integral Circulation Experiment (ICE) test section has been installed in the CIRCE pool facility and suitable experiments have been carried out to fully investigate the heat transfer phenomena in grid-spaced fuel pin bundles, providing experimental data in support of European fast reactor development. In particular, the fuel pin bundle simulator (FPS), cooled by lead-bismuth eutectic (LBE), has been conceived with a thermal power of about 1 MW and a uniform linear power up to 25 kW/m, values relevant for a LFR. It consists of 37 fuel pins (electrically simulated) placed on a hexagonal lattice with a pitch-to-diameter ratio of 1.8. The FPS was extensively instrumented with thermocouples. In particular, two sections of the FPS were instrumented in order to evaluate the heat transfer coefficient along the bundle as well as the cladding temperature in different ranks of sub-channels. The Nusselt number in the central sub-channel was therefore calculated as a function of the Peclet number, and the obtained results were compared to Nusselt numbers obtained from convective heat transfer correlations available in the literature on Heavy Liquid Metals (HLM). Results reported in the present work represent the first set of experimental data concerning fuel pin bundle behaviour in a heavy liquid metal pool, both in forced and…
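
Liquid-metal bundle correlations of the kind compared against here typically give Nu as a function of the Péclet number and the pitch-to-diameter ratio. The sketch below uses the rod-bundle fit of the form Nu = 0.047·(1 − e^(−3.8(x−1)))·(Pe^0.77 + 250), which is the Mikityuk-style correlation as I recall it; the constants should be verified against the original source before any design use, and the Pe values are arbitrary:

```python
import math

def nu_bundle(pe, p_over_d):
    """Nusselt number for a liquid-metal rod bundle.

    Correlation form as recalled (Mikityuk-style fit); verify constants
    against the original reference before relying on the values.
    """
    return 0.047 * (1.0 - math.exp(-3.8 * (p_over_d - 1.0))) * (pe ** 0.77 + 250.0)

# Evaluate at the CIRCE bundle's pitch-to-diameter ratio of 1.8
# for a few illustrative Peclet numbers:
for pe in (500, 1000, 2000):
    print(pe, round(nu_bundle(pe, 1.8), 1))
```

Comparing such correlation values against measured Nu (reported here as roughly 25% lower) is exactly the kind of check the experimental campaign enables.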

  11. Scale of association: hierarchical linear models and the measurement of ecological systems

    Science.gov (United States)

    Sean M. McMahon; Jeffrey M. Diez

    2007-01-01

    A fundamental challenge to understanding patterns in ecological systems lies in employing methods that can analyse, test and draw inference from measured associations between variables across scales. Hierarchical linear models (HLM) use advanced estimation algorithms to measure regression relationships and variance-covariance parameters in hierarchically structured...

  12. A Hierarchical Linear Model for Estimating Gender-Based Earnings Differentials.

    Science.gov (United States)

    Haberfield, Yitchak; Semyonov, Moshe; Addi, Audrey

    1998-01-01

    Estimates of gender earnings inequality in data from 116,431 Jewish workers were compared using a hierarchical linear model (HLM) and ordinary least squares model. The HLM allows estimation of the extent to which earnings inequality depends on occupational characteristics. (SK)

  13. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2014-01-01

Highly recommended by JASA, Technometrics, and other journals, the first edition of this bestseller showed how to easily perform complex linear mixed model (LMM) analyses via a variety of software programs. Linear Mixed Models: A Practical Guide Using Statistical Software, Second Edition continues to lead readers step by step through the process of fitting LMMs. This second edition covers additional topics on the application of LMMs that are valuable for data analysts in all fields. It also updates the case studies using the latest versions of the software procedures and provides up-to-date information on the options and features of the software procedures available for fitting LMMs in SAS, SPSS, Stata, R/S-plus, and HLM. New to the Second Edition: a new chapter on models with crossed random effects that uses a case study to illustrate software procedures capable of fitting these models; power analysis methods for longitudinal and clustered study designs, including software options for power analyses and suggest…

  14. Analyzing longitudinal data with the linear mixed models procedure in SPSS.

    Science.gov (United States)

    West, Brady T

    2009-09-01

    Many applied researchers analyzing longitudinal data share a common misconception: that specialized statistical software is necessary to fit hierarchical linear models (also known as linear mixed models [LMMs], or multilevel models) to longitudinal data sets. Although several specialized statistical software programs of high quality are available that allow researchers to fit these models to longitudinal data sets (e.g., HLM), rapid advances in general purpose statistical software packages have recently enabled analysts to fit these same models when using preferred packages that also enable other more common analyses. One of these general purpose statistical packages is SPSS, which includes a very flexible and powerful procedure for fitting LMMs to longitudinal data sets with continuous outcomes. This article aims to present readers with a practical discussion of how to analyze longitudinal data using the LMMs procedure in the SPSS statistical software package.

  15. Determining Predictor Importance in Hierarchical Linear Models Using Dominance Analysis

    Science.gov (United States)

    Luo, Wen; Azen, Razia

    2013-01-01

    Dominance analysis (DA) is a method used to evaluate the relative importance of predictors that was originally proposed for linear regression models. This article proposes an extension of DA that allows researchers to determine the relative importance of predictors in hierarchical linear models (HLM). Commonly used measures of model adequacy in…
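
The averaging-over-orderings idea behind dominance analysis can be illustrated in the two-predictor case, where the full-model R² has a closed form in the pairwise correlations. Everything below is a hypothetical worked example: the correlations are invented, and `gd_x1`/`gd_x2` denote the "general dominance" weights:

```python
# Dominance analysis for a two-predictor linear regression.
# Closed-form R^2 for two predictors from pairwise correlations:
#   R^2_{12} = (r1^2 + r2^2 - 2*r1*r2*r12) / (1 - r12^2)
# The general dominance weight of a predictor is its incremental R^2
# averaged over all orders of entry. Correlations below are hypothetical.

r1, r2 = 0.50, 0.30   # corr(y, x1), corr(y, x2)
r12 = 0.20            # corr(x1, x2)

r2_x1 = r1 ** 2                                        # R^2 of x1 alone
r2_x2 = r2 ** 2                                        # R^2 of x2 alone
r2_both = (r1**2 + r2**2 - 2*r1*r2*r12) / (1 - r12**2) # R^2 of both

# Average each predictor's incremental contribution over the two orders:
gd_x1 = (r2_x1 + (r2_both - r2_x2)) / 2
gd_x2 = (r2_x2 + (r2_both - r2_x1)) / 2
print(round(gd_x1, 4), round(gd_x2, 4))
```

A useful property of the general dominance weights is that they sum exactly to the full-model R², giving an additive decomposition of explained variance.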

  16. Investigating the connections between health lean management and clinical risk management.

    Science.gov (United States)

    Crema, Maria; Verbano, Chiara

    2015-01-01

The purpose of this paper is to investigate connections and overlaps between health lean management (HLM) and clinical risk management (CRM), to understand whether and how these two approaches can be combined to pursue efficiency and patient safety improvements simultaneously. A systematic literature review has been carried out. Searching academic databases, papers that focus not only on HLM but also on clinical errors and risk reduction were included. The general characteristics of the selected papers were analysed and a content analysis was conducted. In most of the papers, pursuing the objectives of HLM and CRM and adopting tools and practices of both approaches yielded improvements in quality and, particularly, in safety. A two-way arrow between HLM and CRM emerged, but so far none of the studies has focused on the relationship between HLM and CRM. The results highlight an emerging research stream, with many useful theoretical and practical implications and opportunities for further research.

  17. New insights into the nature of cerebellar-dependent eyeblink conditioning deficits in schizophrenia: A hierarchical linear modeling approach

    Directory of Open Access Journals (Sweden)

    Amanda R Bolbecker

    2016-01-01

Full Text Available Evidence of cerebellar dysfunction in schizophrenia has mounted over the past several decades, emerging from neuroimaging, neuropathological, and behavioral studies. Consistent with these findings, cerebellar-dependent delay eyeblink conditioning (dEBC) deficits have been identified in schizophrenia. While repeated measures analysis of variance (ANOVA) is traditionally used to analyze dEBC data, hierarchical linear modeling (HLM) more reliably describes change over time by accounting for the dependence in repeated measures data. This analysis approach is well suited to dEBC data because it has less restrictive assumptions and allows unequal variances. The current study examined dEBC measured with electromyography in a single-cue tone paradigm in an age-matched sample of schizophrenia participants and healthy controls (N=56 per group) using HLM. Subjects participated in 90 trials (10 blocks) of dEBC, during which a 400 ms tone co-terminated with a 50 ms air puff delivered to the left eye. Each block also contained 1 tone-alone trial. The resulting block averages of dEBC data were fitted to a 3-parameter logistic model in HLM, revealing significant differences between schizophrenia and control groups on asymptote and inflection point, but not slope. These findings suggest that while the learning rate is not significantly different from controls, associative learning begins to level off later and a lower ultimate level of associative learning is achieved in schizophrenia. Given the large sample size in the present study, HLM may provide a more nuanced and definitive analysis of differences between schizophrenia and controls on dEBC.
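
The 3-parameter logistic curve used to model conditioned-response acquisition over trial blocks has the form f(t) = asymptote / (1 + exp(−slope·(t − inflection))). The sketch below evaluates two invented parameterisations, one "control-like" and one "patient-like" with the same slope but a later inflection point and lower asymptote, mirroring the pattern of group differences reported above; all numbers are illustrative:

```python
import math

def logistic3(t, asymptote, inflection, slope):
    """3-parameter logistic growth curve: expected response level at block t."""
    return asymptote / (1.0 + math.exp(-slope * (t - inflection)))

# Invented parameter sets for illustration (percent conditioned responses):
control = dict(asymptote=80.0, inflection=3.0, slope=1.0)
patient = dict(asymptote=60.0, inflection=5.0, slope=1.0)

for block in (1, 5, 10):
    print(block,
          round(logistic3(block, **control), 1),
          round(logistic3(block, **patient), 1))
```

With equal slopes, the curves rise at the same rate, but the second curve reaches half its (lower) ceiling two blocks later: the asymptote and inflection-point differences the HLM analysis detected.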

  18. Measuring Teacher Effectiveness through Hierarchical Linear Models: Exploring Predictors of Student Achievement and Truancy

    Science.gov (United States)

    Subedi, Bidya Raj; Reese, Nancy; Powell, Randy

    2015-01-01

    This study explored significant predictors of student's Grade Point Average (GPA) and truancy (days absent), and also determined teacher effectiveness based on proportion of variance explained at teacher level model. We employed a two-level hierarchical linear model (HLM) with student and teacher data at level-1 and level-2 models, respectively.…
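
A quantity central to such two-level models is the intraclass correlation (ICC): the share of outcome variance that sits at the cluster (here, teacher) level. For balanced data it can be sketched from one-way random-effects ANOVA mean squares. The scores below are fabricated purely to illustrate the computation:

```python
# Intraclass correlation from a balanced one-way random-effects design:
#   sigma2_e = MS_within
#   sigma2_u = (MS_between - MS_within) / n
#   ICC = sigma2_u / (sigma2_u + sigma2_e)
# Fabricated GPA-like scores for students nested within 3 teachers:

groups = [
    [3.0, 3.4, 3.2, 3.4],
    [2.4, 2.8, 2.6, 2.6],
    [3.6, 4.0, 3.8, 3.8],
]

n = len(groups[0])   # students per teacher (balanced design)
k = len(groups)      # number of teachers
grand = sum(sum(g) for g in groups) / (n * k)
means = [sum(g) / n for g in groups]

ms_between = n * sum((m - grand) ** 2 for m in means) / (k - 1)
ms_within = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g) / (k * (n - 1))

sigma2_e = ms_within                      # level-1 (student) variance
sigma2_u = (ms_between - ms_within) / n   # level-2 (teacher) variance
icc = sigma2_u / (sigma2_u + sigma2_e)
print(round(icc, 3))
```

A large ICC, as in this contrived example, is precisely the situation in which a two-level HLM is needed: ignoring the clustering would badly understate standard errors.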

  19. Computational Tools for Probing Interactions in Multiple Linear Regression, Multilevel Modeling, and Latent Curve Analysis

    Science.gov (United States)

    Preacher, Kristopher J.; Curran, Patrick J.; Bauer, Daniel J.

    2006-01-01

    Simple slopes, regions of significance, and confidence bands are commonly used to evaluate interactions in multiple linear regression (MLR) models, and the use of these techniques has recently been extended to multilevel or hierarchical linear modeling (HLM) and latent curve analysis (LCA). However, conducting these tests and plotting the…

  20. Verify Super Double-Heterogeneous Spherical Lattice Model for Equilibrium Fuel Cycle Analysis AND HTR Spherical Super Lattice Model for Equilibrium Fuel Cycle Analysis

    International Nuclear Information System (INIS)

    Gray S. Chang

    2005-01-01

Advanced High Temperature gas-cooled Reactors (HTRs), currently under development, can achieve a simplification of safety through reliance on innovative features and passive systems. One of these innovative features is reliance on ceramic-coated fuel particles to retain the fission products even under extreme accident conditions. Traditionally, the effect of the random fuel-kernel distribution in the fuel pebble/block is addressed through the use of the Dancoff correction factor in the resonance treatment. However, the Dancoff correction factor is a function of burnup and fuel-kernel packing factor, which requires that it be updated during Equilibrium Fuel Cycle (EqFC) analysis. An advanced KbK-sph model and a whole-pebble super lattice model (PSLM) were developed that can address and update the burnup-dependent Dancoff effect during the EqFC analysis. The pebble homogeneous lattice model (HLM) is verified against the burnup characteristics of the double-heterogeneous KbK-sph lattice model. This study summarizes and compares the KbK-sph lattice model and HLM burnup results. Finally, we discuss Monte Carlo coupling with the fuel depletion and buildup code ORIGEN-2 as a fuel burnup analysis tool and its PSLM results for the HTR EqFC burnup analysis

  1. Relative contributions of the major human CYP450 to the metabolism of icotinib and its implication in prediction of drug-drug interaction between icotinib and CYP3A4 inhibitors/inducers using physiologically based pharmacokinetic modeling.

    Science.gov (United States)

    Chen, Jia; Liu, Dongyang; Zheng, Xin; Zhao, Qian; Jiang, Ji; Hu, Pei

    2015-06-01

Icotinib is an anticancer drug, but the relative contributions of the CYP450 enzymes to its metabolism had not been identified. This study was carried out to determine the percentage contribution of each CYP450 to icotinib metabolism and to use the results to develop a physiologically based pharmacokinetic (PBPK) model, which can help predict drug-drug interactions (DDI). Human liver microsomes (HLM) and supersomes, with the relative activity factor (RAF) approach, were employed to determine the relative contributions of the major human P450s to the net hepatic metabolism of icotinib. These values were used to develop a PBPK model in SimCYP. The model was validated against data observed in a Phase I clinical trial in Chinese healthy subjects. Finally, the model was used to simulate the DDI with ketoconazole or rifampin. The final contributions of the CYP450 isoforms determined in HLM showed that CYP3A4 made the major contribution to the metabolism of icotinib. The percentage contributions of the P450s to the net hepatic metabolism of icotinib were determined by HLM inhibition assay and RAF. The AUC ratio under concomitant use of ketoconazole or rifampin was 3.22 or 0.55, respectively. The percentage contribution of each CYP450 to icotinib metabolism was calculated by RAF. The model has been proven to fit the observed data and is used in predicting the icotinib-ketoconazole/rifampin interaction.
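
The arithmetic behind the RAF approach for apportioning hepatic metabolism among CYP isoforms can be sketched as follows: the rate measured with each recombinant enzyme (supersome) is scaled by its relative activity factor, and percentage contributions are taken from the scaled rates. All values below are hypothetical, not the icotinib results:

```python
# Relative activity factor (RAF) scaling, sketched with invented numbers:
#   scaled rate_i = supersome rate_i * RAF_i
#   % contribution_i = scaled rate_i / sum of scaled rates
# RAF_i is the ratio of a probe substrate's activity in HLM to its activity
# in the corresponding supersome, putting all isoforms on a common scale.

supersome_rate = {"CYP3A4": 120.0, "CYP1A2": 15.0, "CYP2C19": 10.0}  # hypothetical rates
raf = {"CYP3A4": 1.5, "CYP1A2": 0.8, "CYP2C19": 0.5}                 # hypothetical RAFs

scaled = {cyp: supersome_rate[cyp] * raf[cyp] for cyp in supersome_rate}
total = sum(scaled.values())
for cyp, v in scaled.items():
    print(cyp, round(100.0 * v / total, 1))  # percent contribution
```

In this contrived example the dominant isoform contributes over 90% of the scaled activity, a pattern qualitatively like the CYP3A4-dominated metabolism reported for icotinib.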

  2. Using hierarchical linear models to test differences in Swedish results from OECD’s PISA 2003: Integrated and subject-specific science education

    Directory of Open Access Journals (Sweden)

    Maria Åström

    2012-06-01

Full Text Available The possible effects of different organisations of the science curriculum in schools participating in PISA 2003 are tested with a hierarchical linear model (HLM) of two levels. The analysis is based on science results. Swedish schools are free to choose how they organise the science curriculum. They may choose to work subject-specifically (with Biology, Chemistry and Physics), integrated (with Science), or to mix these two. In this study, all three ways of organising science classes in compulsory school are present to some degree. None of the different ways of organising science education displayed statistically significantly better student results in scientific literacy as measured in PISA 2003. The HLM model used variables of gender, country of birth, home language, preschool attendance, and an economic, social and cultural index, as well as the teaching organisation.

  3. Using Hierarchical Linear Modelling to Examine Factors Predicting English Language Students' Reading Achievement

    Science.gov (United States)

    Fung, Karen; ElAtia, Samira

    2015-01-01

    Using Hierarchical Linear Modelling (HLM), this study aimed to identify factors such as ESL/ELL/EAL status that would predict students' reading performance in an English language arts exam taken across Canada. Using data from the 2007 administration of the Pan-Canadian Assessment Program (PCAP) along with the accompanying surveys for students and…

  4. Risk and resiliency processes in ethnically diverse families in poverty.

    Science.gov (United States)

    Wadsworth, Martha E; Santiago, Catherine Decarlo

    2008-06-01

    Families living in poverty face numerous stressors that threaten the health and well-being of family members. This study examined the relationships among family-level poverty-related stress (PRS), individual-level coping with PRS, and a wide range of psychological symptoms in an ethnically diverse sample of 98 families (300 family members) living at or below 150% of the federal poverty line. Hierarchical linear model (HLM) analyses revealed that family PRS is robustly related to a wide range of psychological syndromes for family members of both genders, all ages, and all ethnic backgrounds. In addition, primary and secondary control coping were both found to serve as buffers of PRS for many syndromes. For several psychological syndromes, parents showed significantly higher levels of symptoms, but the link between PRS and symptoms was significantly stronger for children than for adults. Ethnicity was not a significant predictor in overall HLM models or follow-up analyses, suggesting that the broad construct of PRS and the theoretical model tested here apply across the 3 major ethnic groups included in this study. The findings suggest that family-based, coping-focused interventions have the potential to promote resiliency and break linkages in the pernicious cycle of family economic stress. (c) 2008 APA, all rights reserved

  5. Exploring the Relations of Inquiry-Based Teaching to Science Achievement and Dispositions in 54 Countries

    Science.gov (United States)

    Cairns, Dean; Areepattamannil, Shaljan

    2017-06-01

    This study, drawing on data from the third cycle of the Program for International Student Assessment (PISA) and employing three-level hierarchical linear modeling (HLM) as an analytic strategy, examined the relations of inquiry-based science teaching to science achievement and dispositions toward science among 170,474 15-year-old students from 4780 schools in 54 countries across the globe. The results of the HLM analyses, after accounting for student-, school-, and country-level demographic characteristics and students' dispositions toward science, revealed that inquiry-based science teaching was significantly negatively related to science achievement. In contrast, inquiry-based science teaching was significantly positively associated with dispositions toward science, such as interest in and enjoyment of science learning, instrumental and future-oriented science motivation, and science self-concept and self-efficacy. Implications of the findings for policy and practice are discussed.

  6. Linear mixed models a practical guide using statistical software

    CERN Document Server

    West, Brady T; Galecki, Andrzej T

    2006-01-01

    Simplifying the often confusing array of software programs for fitting linear mixed models (LMMs), Linear Mixed Models: A Practical Guide Using Statistical Software provides a basic introduction to primary concepts, notation, software implementation, model interpretation, and visualization of clustered and longitudinal data. This easy-to-navigate reference details the use of procedures for fitting LMMs in five popular statistical software packages: SAS, SPSS, Stata, R/S-plus, and HLM. The authors introduce basic theoretical concepts, present a heuristic approach to fitting LMMs based on bo

  7. Assessing exposure to violence using multiple informants: application of hierarchical linear model.

    Science.gov (United States)

    Kuo, M; Mohler, B; Raudenbush, S L; Earls, F J

    2000-11-01

    The present study assesses the effects of demographic risk factors on children's exposure to violence (ETV) and how these effects vary by informants. Data on exposure to violence of 9-, 12-, and 15-year-olds were collected from both child participants (N = 1880) and parents (N = 1776), as part of the assessment of the Project on Human Development in Chicago Neighborhoods (PHDCN). A two-level hierarchical linear model (HLM) with multivariate outcomes was employed to analyze information obtained from these two different groups of informants. The findings indicate that parents generally report less ETV than do their children and that associations of age, gender, and parent education with ETV are stronger in the self-reports than in the parent reports. The findings support a multivariate approach when information obtained from different sources is being integrated. The application of HLM allows an assessment of interactions between risk factors and informants and uses all available data, including data from one informant when data from the other informant is missing.

  8. Comparison of methods of extracting information for meta-analysis of observational studies in nutritional epidemiology

    Directory of Open Access Journals (Sweden)

    Jong-Myon Bae

    2016-01-01

    OBJECTIVES: A common method for conducting a quantitative systematic review (QSR) for observational studies related to nutritional epidemiology is the “highest versus lowest intake” method (HLM), in which only the information concerning the effect size (ES) of the highest category of a food item is collected, with its lowest category as the referent. However, in the interval collapsing method (ICM), a method suggested to enable a maximum utilization of all available information, the ES information is collected by collapsing all categories into a single category. This study aimed to compare the ES and summary effect size (SES) between the HLM and ICM. METHODS: A QSR for evaluating the citrus fruit intake and risk of pancreatic cancer and calculating the SES by using the HLM was selected. The ES and SES were estimated by performing a meta-analysis using the fixed-effect model. The directionality and statistical significance of the ES and SES were used as criteria for determining the concordance between the HLM and ICM outcomes. RESULTS: No significant differences were observed in the directionality of SES extracted by using the HLM or ICM. The application of the ICM, which uses a broader information base, yielded more-consistent ES and SES, and narrower confidence intervals than the HLM. CONCLUSIONS: The ICM is advantageous over the HLM owing to its higher statistical accuracy in extracting information for QSR on nutritional epidemiology. The application of the ICM should hence be recommended for future studies.
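    The fixed-effect meta-analysis used to compute the SES is standard inverse-variance pooling; a minimal sketch with invented study-level log relative risks and standard errors (not the citrus-fruit data) is:

```python
import math

# Invented study-level effect sizes (log relative risks) and standard errors
es = [math.log(0.8), math.log(0.9), math.log(0.7)]
se = [0.20, 0.25, 0.30]

# Fixed-effect model: inverse-variance weights
w = [1.0 / s ** 2 for s in se]
ses = sum(wi * ei for wi, ei in zip(w, es)) / sum(w)  # summary effect size
se_ses = 1.0 / math.sqrt(sum(w))                      # its standard error
ci = (ses - 1.96 * se_ses, ses + 1.96 * se_ses)       # 95% CI, log scale

print(round(math.exp(ses), 3))  # → 0.806 (pooled relative risk)
```

    The pooled standard error is always smaller than that of any single study, which is why the ICM's broader information base yields narrower confidence intervals.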

  9. Sensitivity and uncertainty analyses for performance assessment modeling

    International Nuclear Information System (INIS)

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs
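    The contrast between the deterministic and statistical approaches can be seen on a toy model; the sketch below (an assumed two-input product model with invented means and standard deviations) propagates uncertainty once by first-order Taylor expansion and once by Monte Carlo simulation:

```python
import numpy as np

# Toy performance model with two uncertain inputs
f = lambda x1, x2: x1 * x2
mu = np.array([2.0, 3.0])    # input means (invented values)
sd = np.array([0.1, 0.2])    # input standard deviations (invented)

# Deterministic approach: first-order Taylor propagation,
# var(f) ~ (df/dx1)^2 sd1^2 + (df/dx2)^2 sd2^2, derivatives at the means
grad = np.array([mu[1], mu[0]])   # analytic partials of x1*x2
taylor_sd = np.sqrt(np.sum((grad * sd) ** 2))

# Statistical approach: Monte Carlo simulation
rng = np.random.default_rng(0)
samples = rng.normal(mu, sd, size=(100_000, 2))
mc_sd = f(samples[:, 0], samples[:, 1]).std()

print(round(float(taylor_sd), 3), round(float(mc_sd), 3))
```

    For this nearly linear case the two estimates agree closely; for strongly nonlinear models the first-order approximation degrades, which is one facet of the strengths and weaknesses the report weighs.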

  10. Modelling and analysing oriented fibrous structures

    International Nuclear Information System (INIS)

    Rantala, M; Lassas, M; Siltanen, S; Sampo, J; Takalo, J; Timonen, J

    2014-01-01

    A mathematical model for fibrous structures using a direction dependent scaling law is presented. The orientation of fibrous nets (e.g. paper) is analysed with a method based on the curvelet transform. The curvelet-based orientation analysis has been tested successfully on real data from paper samples: the major directions of fibre orientation can apparently be recovered. Similar results are achieved in tests on data simulated by the new model, allowing a comparison with ground truth.

  11. Externalizing Behaviour for Analysing System Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof

    2013-01-01

    System models have recently been introduced to model organisations and evaluate their vulnerability to threats and especially insider threats. Especially for the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outside attackers. Therefore, many attacks are considerably easier to perform for insiders than for outsiders. However, current models do not support explicit specification of different behaviours. Instead, behaviour is deeply embedded in the analyses supported by the models, meaning that it is a complex, if not impossible, task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make the behaviour a separate component in system models, and explore how to integrate...

  12. Development of ITER 3D neutronics model and nuclear analyses

    International Nuclear Information System (INIS)

    Zeng, Q.; Zheng, S.; Lu, L.; Li, Y.; Ding, A.; Hu, H.; Wu, Y.

    2007-01-01

    ITER nuclear analyses rely on calculations with three-dimensional (3D) Monte Carlo codes such as the widely-used MCNP. However, continuous changes in the design of the components require that the 3D neutronics model used for nuclear analyses be updated, and modeling a complex geometry for MCNP by hand is a very time-consuming task. An efficient alternative is to develop CAD-based interface codes for automatic conversion from CAD models to MCNP input files. Based on the latest CAD model and the available interface codes, two approaches to updating the 3D neutronics model have been discussed by the ITER IT (International Team): the first is to start with the existing MCNP model 'Brand' and update it through a combination of direct modification of the MCNP input file and generation of models for some components directly from the CAD data; the second is to start from the full CAD model, make the necessary simplifications, and generate the MCNP model with one of the interface codes. MCAM, an advanced CAD-based MCNP interface code developed by the FDS Team in China, has been successfully applied to update the ITER 3D neutronics model by both approaches. The Brand model has been updated by generating portions of the geometry from the newest CAD model with MCAM. MCAM has also successfully converted to an MCNP neutronics model a full ITER CAD model which was simplified and issued by ITER IT to benchmark the above interface codes. Based on the two updated 3D neutronics models, the related nuclear analyses are performed. This paper presents the status of ITER 3D modeling using MCAM and its nuclear analyses, as well as a brief introduction of an advanced version of MCAM. (authors)

  13. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on classic Boolean visibility, which is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses on the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
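    Along a single terrain profile, Boolean viewshed computation reduces to comparing each cell's elevation angle with the steepest angle seen so far; a minimal sketch on an invented profile (not the Opava data):

```python
def line_of_sight(heights, observer_height, spacing=1.0):
    """Boolean visibility of each profile cell from an observer at distance 0."""
    visible = []
    max_angle = float("-inf")   # steepest elevation angle encountered so far
    for i, h in enumerate(heights, start=1):
        # tangent of the elevation angle from the observer to cell i
        angle = (h - observer_height) / (i * spacing)
        visible.append(angle > max_angle)   # hidden if below the running horizon
        max_angle = max(max_angle, angle)
    return visible

# Invented surface-model profile: heights of cells 1..5, observer eye at 10 m
print(line_of_sight([5, 8, 12, 6, 14], observer_height=10.0))
# → [True, True, True, False, True]: cell 4 lies behind the horizon set by cell 3
```

    A full 2D Boolean viewshed applies this test along the ray to every cell; the "global horizon viewshed" mentioned above tracks where the running maximum angle is last exceeded.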

  14. VIPRE modeling of VVER-1000 reactor core for DNB analyses

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Y.; Nguyen, Q. [Westinghouse Electric Corporation, Pittsburgh, PA (United States)]; Cizek, J. [Nuclear Research Institute, Prague (Czech Republic)]

    1995-09-01

    Based on the one-pass modeling approach, the hot channels and the VVER-1000 reactor core can be modeled in 30 channels for DNB analyses using the VIPRE-01/MOD02 (VIPRE) code (VIPRE is owned by Electric Power Research Institute, Palo Alto, California). The VIPRE one-pass model does not compromise any accuracy in the hot channel local fluid conditions. Extensive qualifications include sensitivity studies of radial noding and crossflow parameters and comparisons with the results from THINC and CALOPEA subchannel codes. The qualifications confirm that the VIPRE code with the Westinghouse modeling method provides good computational performance and accuracy for VVER-1000 DNB analyses.

  15. Performance of neutron kinetics models for ADS transient analyses

    International Nuclear Information System (INIS)

    Rineiski, A.; Maschek, W.; Rimpault, G.

    2002-01-01

    Within the framework of the SIMMER code development, neutron kinetics models for simulating transients and hypothetical accidents in advanced reactor systems, in particular in Accelerator Driven Systems (ADSs), have been developed at FZK/IKET in cooperation with CE Cadarache. SIMMER is a fluid-dynamics/thermal-hydraulics code, coupled with a structure model and a space-, time- and energy-dependent neutronics module for analyzing transients and accidents. The advanced kinetics models have also been implemented into KIN3D, a module of the VARIANT/TGV code (stand-alone neutron kinetics) for broadening application and for testing and benchmarking. In the paper, a short review of the SIMMER and KIN3D neutron kinetics models is given. Some typical transients related to ADS perturbations are analyzed. The general models of SIMMER and KIN3D are compared with more simple techniques developed in the context of this work to get a better understanding of the specifics of transients in subcritical systems and to estimate the performance of different kinetics options. These comparisons may also help in elaborating new kinetics models and extending existing computation tools for ADS transient analyses. The traditional point-kinetics model may give rather inaccurate transient reaction rate distributions in an ADS even if the material configuration does not change significantly. This inaccuracy is not related to the problem of choosing a 'right' weighting function: the point-kinetics model with any weighting function cannot take into account pronounced flux shape variations related to possible significant changes in the criticality level or to fast beam trips. To improve the accuracy of the point-kinetics option for slow transients, we have introduced a correction factor technique. The related analyses give a better understanding of 'long-timescale' kinetics phenomena in the subcritical domain and help to evaluate the performance of the quasi-static scheme in a particular case. 
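    The source-dominated behaviour of a subcritical system can be seen already in the simplest point-kinetics setting; the sketch below (one delayed-neutron group; all parameter values are illustrative and not taken from SIMMER or KIN3D) integrates the equations and checks the analytic subcritical level n = S·Λ/(−ρ):

```python
# Point kinetics with one delayed-neutron group; all values are illustrative,
# not plant data: rho = reactivity, beta = delayed fraction, lam = precursor
# decay constant, Lam = neutron generation time, S = external source strength
rho, beta, lam, Lam, S = -0.03, 0.007, 0.08, 1.0e-3, 100.0

n, c = 0.0, 0.0   # neutron level and precursor concentration
dt = 0.01
for _ in range(20_000):   # 200 s of explicit Euler integration
    dn = ((rho - beta) / Lam) * n + lam * c + S
    dc = (beta / Lam) * n - lam * c
    n, c = n + dn * dt, c + dc * dt

n_analytic = S * Lam / (-rho)   # steady state of the source-driven subcritical core
print(round(n, 3), round(n_analytic, 3))  # → 3.333 3.333
```

    The steady level scales as 1/(−ρ), so a change in the criticality level or a beam trip (S → 0) moves the whole solution, which is why flux-shape effects beyond point kinetics matter in an ADS.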

  16. Implications of a Cognitive Science Model Integrating Literacy in Science on Achievement in Science and Reading: Direct Effects in Grades 3-5 with Transfer to Grades 6-7

    Science.gov (United States)

    Romance, Nancy; Vitale, Michael

    2017-01-01

    Reported are the results of a multiyear study in which reading comprehension and writing were integrated within an in-depth science instructional model (Science IDEAS) in daily 1.5 to 2 h lessons on a schoolwide basis in grades 3-4-5. Multilevel (HLM7) achievement findings showed the experimental intervention resulted in significant and…

  17. A dialogue game for analysing group model building: framing collaborative modelling and its facilitation

    NARCIS (Netherlands)

    Hoppenbrouwers, S.J.B.A.; Rouwette, E.A.J.A.

    2012-01-01

    This paper concerns a specific approach to analysing and structuring operational situations in collaborative modelling. Collaborative modelling is viewed here as 'the goal-driven creation and shaping of models that are based on the principles of rational description and reasoning'. Our long term

  18. Metabolism of methylstenbolone studied with human liver microsomes and the uPA⁺/⁺-SCID chimeric mouse model.

    Science.gov (United States)

    Geldof, Lore; Lootens, Leen; Polet, Michael; Eichner, Daniel; Campbell, Thane; Nair, Vinod; Botrè, Francesco; Meuleman, Philip; Leroux-Roels, Geert; Deventer, Koen; Eenoo, Peter Van

    2014-07-01

    Anti-doping laboratories need to be aware of evolutions on the steroid market and elucidate steroid metabolism to identify markers of misuse. Owing to ethical considerations, in vivo and in vitro models are preferred to human excretion studies for non-pharmaceutical-grade substances. In this study the chimeric mouse model and human liver microsomes (HLM) were used to elucidate the phase I metabolism of a new steroid product containing, according to the label, methylstenbolone. Analysis revealed the presence of both methylstenbolone and methasterone, a structurally closely related steroid. Via HPLC fraction collection, methylstenbolone was isolated and studied with both models. Using HLM, 10 mono-hydroxylated derivatives (U1-U10) and a still unidentified derivative of methylstenbolone (U13) were detected. In chimeric mouse urine only di-hydroxylated metabolites (U11-U12) were identified. Although closely related, neither methasterone nor its metabolites were detected after administration of isolated methylstenbolone. Administration of the steroid product resulted mainly in the detection of methasterone metabolites, which were similar to those already described in the literature. Methylstenbolone metabolites previously described were not detected. A GC-MS/MS multiple reaction monitoring method was developed to detect methylstenbolone misuse. In one of three samples previously tested positive for methasterone, methylstenbolone and U13 were additionally detected, indicating the applicability of the method. Copyright © 2014 John Wiley & Sons, Ltd.

  19. Model-based Recursive Partitioning for Subgroup Analyses

    OpenAIRE

    Seibold, Heidi; Zeileis, Achim; Hothorn, Torsten

    2016-01-01

    The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulates challenges to the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by...

  20. Beta-Poisson model for single-cell RNA-seq data analyses.

    Science.gov (United States)

    Vu, Trung Nghia; Wills, Quin F; Kalari, Krishna R; Niu, Nifang; Wang, Liewei; Rantalainen, Mattias; Pawitan, Yudi

    2016-07-15

    Single-cell RNA-sequencing technology allows detection of gene expression at the single-cell level. One typical feature of the data is a bimodality in the cellular distribution even for highly expressed genes, primarily caused by a proportion of non-expressing cells. The standard and the over-dispersed gamma-Poisson models that are commonly used in bulk-cell RNA-sequencing are not able to capture this property. We introduce a beta-Poisson mixture model that can capture the bimodality of the single-cell gene expression distribution. We further integrate the model into the generalized linear model framework in order to perform differential expression analyses. The whole analytical procedure is called BPSC. The results from several real single-cell RNA-seq datasets indicate that ∼90% of the transcripts are well characterized by the beta-Poisson model; the model-fit from BPSC is better than the fit of the standard gamma-Poisson model in > 80% of the transcripts. Moreover, in differential expression analyses of simulated and real datasets, BPSC performs well against edgeR, a conventional method widely used in bulk-cell RNA-sequencing data, and against scde and MAST, two recent methods specifically designed for single-cell RNA-seq data. An R package BPSC for model fitting and differential expression analyses of single-cell RNA-seq data is available under GPL-3 license at https://github.com/nghiavtr/BPSC CONTACT: yudi.pawitan@ki.se or mattias.rantalainen@ki.se Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
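    The bimodality that motivates the model is easy to reproduce by simulation; the sketch below draws counts from a beta-Poisson mixture with invented parameters (this illustrates the generative idea behind BPSC, not its fitting code):

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, lam = 0.5, 0.5, 20.0   # invented mixture parameters
n_cells = 10_000

# Beta-Poisson: per-cell rate lam * Beta(alpha, beta), then a Poisson draw
p = rng.beta(alpha, beta, n_cells)  # U-shaped: mass near 0 and near 1
counts = rng.poisson(lam * p)

zero_frac = (counts == 0).mean()    # the non-expressing mode
print(round(float(counts.mean()), 1), round(float(zero_frac), 2))
```

    The U-shaped Beta component produces both a non-expressing mode near zero and an expressing mode around λ, the bimodal cellular distribution the abstract describes.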

  1. Graphic-based musculoskeletal model for biomechanical analyses and animation.

    Science.gov (United States)

    Chao, Edmund Y S

    2003-04-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.

  2. Structural Elucidation of Metabolites of Synthetic Cannabinoid UR-144 by Cunninghamella elegans Using Nuclear Magnetic Resonance (NMR) Spectroscopy.

    Science.gov (United States)

    Watanabe, Shimpei; Kuzhiumparambil, Unnikrishnan; Fu, Shanlin

    2018-03-08

    The number of new psychoactive substances keeps on rising despite the controlling efforts by law enforcement. Although metabolism of the newly emerging drugs is continuously studied to keep up with the new additions, the exact structures of the metabolites are often not identified due to the insufficient sample quantities for techniques such as nuclear magnetic resonance (NMR) spectroscopy. The aim of the study was to characterise several metabolites of the synthetic cannabinoid (1-pentyl-1H-indol-3-yl) (2,2,3,3-tetramethylcyclopropyl) methanone (UR-144) by NMR spectroscopy after the incubation with the fungus Cunninghamella elegans. UR-144 was incubated with C. elegans for 72 h, and the resulting metabolites were chromatographically separated. Six fractions were collected and analysed by NMR spectroscopy. UR-144 was also incubated with human liver microsomes (HLM), and the liquid chromatography-high resolution mass spectrometry analysis was performed on the HLM metabolites with the characterised fungal metabolites as reference standards. Ten metabolites were characterised by NMR analysis including dihydroxy metabolites, carboxy and hydroxy metabolites, a hydroxy and ketone metabolite, and a carboxy and ketone metabolite. Of these metabolites, dihydroxy metabolite, carboxy and hydroxy metabolites, and a hydroxy and ketone metabolite were identified in HLM incubation. The results indicate that the fungus is capable of producing human-relevant metabolites including the exact isomers. The capacity of the fungus C. elegans to allow for NMR structural characterisation by enabling production of large amounts of metabolites makes it an ideal model to complement metabolism studies.

  3. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    Directory of Open Access Journals (Sweden)

    Daniel T. L. Shek

    2011-01-01

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  4. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  5. Present status of theories and data analyses of mathematical models for carcinogenesis

    International Nuclear Information System (INIS)

    Kai, Michiaki; Kawaguchi, Isao

    2007-01-01

    Reviewed are the basic mathematical models (hazard functions), current trends in model studies, and models for radiation carcinogenesis. Hazard functions of carcinogenesis are described for the multi-stage model and the 2-event model related to cell dynamics. At present, the age distribution of cancer mortality is analyzed, the relationship between mutation and carcinogenesis is discussed, and models for colorectal carcinogenesis are presented. As for radiation carcinogenesis, the Armitage-Doll model and the generalized MVK (Moolgavkar, Venzon, Knudson, 1971-1990) model of 2-stage clonal expansion have been applied to analyses of carcinogenesis in A-bomb survivors, workers in uranium mines (Rn exposure), smoking doctors in the UK, and other cases, whose characteristics are discussed. In analyses of A-bomb survivors, the models above are applied to solid tumors and leukemia to see the effect, if any, of stage, age at exposure, time progression, etc. In miners and smokers, the stages of initiation, promotion and progression in carcinogenesis are discussed on the basis of the analyses. Other topics include analyses of workers in a Canadian atomic power plant and of patients who underwent radiation therapy. Model analysis can help to understand the carcinogenic process in a quantitative aspect rather than merely to describe the process. (R.T.)
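    The classical Armitage-Doll multistage hazard is h(t) ∝ t^(k−1), i.e. a straight line of slope k − 1 on log-log axes; a minimal numerical check with a hypothetical stage number and rate constant:

```python
import numpy as np

k = 5       # assumed number of mutational stages (illustrative)
c = 1e-10   # arbitrary rate constant

t = np.linspace(30.0, 80.0, 50)   # ages in years
h = c * t ** (k - 1)              # Armitage-Doll multistage hazard

# On log-log axes the hazard is linear with slope k - 1
slope, intercept = np.polyfit(np.log(t), np.log(h), 1)
print(round(float(slope), 6))  # → 4.0
```

    Fitting this log-log slope to age-specific cancer mortality is the classical way of estimating the number of stages from data such as the cohorts discussed above.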

  6. SVM models for analysing the headstreams of mine water inrush

    Energy Technology Data Exchange (ETDEWEB)

    Yan Zhi-gang; Du Pei-jun; Guo Da-zhi [China University of Science and Technology, Xuzhou (China). School of Environmental Science and Spatial Informatics

    2007-08-15

    The support vector machine (SVM) model was introduced to analyse the headstream of water inrush in a coal mine. The SVM model, based on a hydrogeochemical method, was constructed for recognising two kinds of headstreams, and the H-SVMs model was constructed for recognising multiple headstreams. The SVM method was applied to analyse the conditions of two mixed headstreams, and the value of the SVM decision function was investigated as a means of denoting the hydrogeochemical abnormality. The experimental results show that the SVM is based on a strict mathematical theory, has a simple structure and a good overall performance. Moreover, the parameter W in the decision function can describe the weights of the discrimination indices of the headstream of water inrush. The value of the decision function can denote hydrogeochemical abnormality, which is significant in the prevention of water inrush in a coal mine. 9 refs., 1 fig., 7 tabs.
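    In practice an SVM library would be used, but the construction is small enough to sketch directly; below, a linear soft-margin SVM is trained by subgradient descent on the hinge loss, on invented two-cluster data standing in for the hydrogeochemical indices of two headstreams:

```python
import numpy as np

rng = np.random.default_rng(1)
# Invented two-class data standing in for hydrogeochemical indices of two
# headstreams (illustrative clusters, not real mine-water measurements)
X = np.vstack([rng.normal([-2, -2], 0.5, (50, 2)),
               rng.normal([+2, +2], 0.5, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)

# Linear soft-margin SVM via subgradient descent on
# J = lam/2 ||w||^2 + (1/n) sum max(0, 1 - y_i (w.x_i + b))
w, b, lam, lr = np.zeros(2), 0.0, 0.01, 0.1
for _ in range(200):
    margin = y * (X @ w + b)
    viol = margin < 1   # margin violators contribute to the subgradient
    grad_w = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(X)
    grad_b = -y[viol].sum() / len(X)
    w, b = w - lr * grad_w, b - lr * grad_b

acc = (np.sign(X @ w + b) == y).mean()   # training accuracy
print(acc)
```

    The learned weight vector w plays the role of the parameter W in the abstract: its components weight the discrimination indices in the decision function.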

  7. Model-Based Recursive Partitioning for Subgroup Analyses.

    Science.gov (United States)

    Seibold, Heidi; Zeileis, Achim; Hothorn, Torsten

    2016-05-01

    The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulates challenges to the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by predictive factors. The method starts with a model for the overall treatment effect as defined for the primary analysis in the study protocol and uses measures for detecting parameter instabilities in this treatment effect. The procedure produces a segmented model with differential treatment parameters corresponding to each patient subgroup. The subgroups are linked to predictive factors by means of a decision tree. The method is applied to the search for subgroups of patients suffering from amyotrophic lateral sclerosis that differ with respect to the effect of Riluzole, the only drug currently approved for this disease.
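    The core idea, a global treatment-effect model whose parameters are unstable across a partitioning variable, can be caricatured in a few lines; the sketch below uses simulated trial data with an invented predictive biomarker and a fixed split (real model-based recursive partitioning selects splits via formal parameter-instability tests):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
treat = rng.integers(0, 2, n)      # randomised treatment indicator
biomarker = rng.uniform(0, 1, n)   # candidate predictive factor (invented)
effect = np.where(biomarker > 0.5, 2.0, 0.0)   # true subgroup-specific effect
y = effect * treat + rng.normal(0, 1, n)

def treatment_effect(mask):
    """Treated-minus-control mean outcome within a subgroup."""
    return y[mask & (treat == 1)].mean() - y[mask & (treat == 0)].mean()

overall = treatment_effect(np.ones(n, bool))   # the primary-analysis model
low = treatment_effect(biomarker <= 0.5)       # segmented model, left node
high = treatment_effect(biomarker > 0.5)       # segmented model, right node
print(round(float(overall), 1), round(float(low), 1), round(float(high), 1))
```

    The overall estimate averages away the heterogeneity that the segmented model exposes, which is exactly the failure mode the partitioning procedure is designed to detect.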

  8. Bayesian uncertainty analyses of probabilistic risk models

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1989-01-01

    Applications of Bayesian principles to uncertainty analyses are discussed in the paper. A short review of the most important uncertainties and their causes is provided. An application of the principle of maximum entropy to the determination of Bayesian prior distributions is described. An approach based on so-called probabilistic structures is presented in order to develop a method for the quantitative evaluation of modelling uncertainties. The method is applied to a small example case. Ideas for application areas for the proposed method are discussed.
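    The maximum-entropy construction of a prior can be sketched for a discrete variable whose mean is the only known information: the maxent distribution is exponential in the constrained statistic, and its multiplier can be found by bisection (the support and target mean below are invented for illustration):

```python
import math

x = list(range(11))   # discrete support 0..10 (invented)
target_mean = 3.0     # the only prior information assumed known

def maxent_mean(t):
    """Mean of p_i ∝ exp(t * x_i), the maxent family for a mean constraint."""
    w = [math.exp(t * xi) for xi in x]
    return sum(xi * wi for xi, wi in zip(x, w)) / sum(w)

# The mean is strictly increasing in t, so bisect for the multiplier
lo, hi = -5.0, 5.0
for _ in range(100):
    mid = (lo + hi) / 2
    if maxent_mean(mid) < target_mean:
        lo = mid
    else:
        hi = mid

t = (lo + hi) / 2
p = [math.exp(t * xi) for xi in x]
z = sum(p)
p = [pi / z for pi in p]   # the maximum-entropy prior
print(round(sum(xi * pi for xi, pi in zip(x, p)), 6))  # → 3.0
```

    Among all distributions on this support with mean 3, this one maximises entropy, i.e. it encodes the mean constraint and nothing else.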

  9. Influence of Leaders' Psychological Capital on Their Followers: Multilevel Mediation Effect of Organizational Identification

    Science.gov (United States)

    Chen, Qishan; Wen, Zhonglin; Kong, Yurou; Niu, Jun; Hau, Kit-Tai

    2017-01-01

    We investigated the relationships between leaders' and their followers' psychological capital and organizational identification in a Chinese community. Participants included 423 followers on 34 work teams, each with its respective team leader. Hierarchical linear models (HLM) were used in the analyses to delineate the relationships among participants' demographic background (gender, age, marital status, and educational level), human capital, and tenure. The results revealed that leaders' psychological capital positively influenced their followers' psychological capital through the mediation effect of enhancing followers' organizational identification. The implications of these findings, the study's limitations, and directions for future research are discussed. PMID:29075218

  10. Analyses and simulations in income frame regulation model for the network sector from 2007

    International Nuclear Information System (INIS)

    Askeland, Thomas Haave; Fjellstad, Bjoern

    2007-01-01

    Analyses of the income frame regulation model for the network sector in Norway, introduced on 1 January 2007. The model's treatment of the norm cost is evaluated, especially the effect analyses carried out by a so-called Data Envelopment Analysis model. It is argued that there may be an age-related bias in the data set, and that this should and can be corrected for in the effect analyses; it is proposed to correct for it by introducing an age parameter in the data set. Analyses have been made of how the calibration effects in the regulation model affect the total income frame of the sector, as well as each network company's income frame. It is argued that the calibration, as presented, does not work as intended and should be adjusted in order to provide the sector with the reference rate of return. (ml)

  11. Use of flow models to analyse loss of coolant accidents

    International Nuclear Information System (INIS)

    Pinet, Bernard

    1978-01-01

    This article summarises current work on developing the use of flow models to analyse loss-of-coolant accidents in pressurized-water plants. This work is being done jointly, in the context of the LOCA Technical Committee, by the CEA, EDF and FRAMATOME. The construction of the flow model is closely based on theoretical studies of the two-fluid model. The laws of transfer at the interface and at the wall are tested experimentally. The representativity of the model then has to be checked in experiments involving several elementary physical phenomena [fr]

  12. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2011-10-01

    The article presents the fundamental aspects of linear regression as a toolbox which can be used in macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, and homoscedasticity and heteroscedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and the possible interpretations that can be drawn at this level.
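    As a toy illustration of the parameter estimation the article covers, ordinary least squares for y = a + b·x has the closed form b = cov(x, y)/var(x), a = ȳ − b·x̄; the data below are invented, macro-flavoured numbers:

```python
# Invented macro-style data: x could be GDP growth, y consumption growth
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [3.1, 4.9, 7.2, 8.8, 11.0]

n = len(x)
mx, my = sum(x) / n, sum(y) / n
# OLS estimators: slope = sample cov(x, y) / var(x), intercept from the means
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
    / sum((xi - mx) ** 2 for xi in x)
a = my - b * mx

# Coefficient of determination R^2 = 1 - SS_res / SS_tot
ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
ss_tot = sum((yi - my) ** 2 for yi in y)
r2 = 1 - ss_res / ss_tot
print(round(b, 2), round(a, 2), round(r2, 3))  # → 1.97 1.09 0.998
```

    The t-tests and the homoscedasticity checks the article mentions are then built on the residuals yi − (a + b·xi).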

  13. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Ihekwaba, Adoha

    2007-01-01

    A. Ihekwaba, R. Mardare. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems. Case study: NFkB system. In Proc. of International Conference of Computational Methods in Sciences and Engineering (ICCMSE), American Institute of Physics, AIP Proceedings, N 2...

  14. Models and analyses for inertial-confinement fusion-reactor studies

    International Nuclear Information System (INIS)

    Bohachevsky, I.O.

    1981-05-01

    This report describes models and analyses devised at Los Alamos National Laboratory to determine the technical characteristics of different inertial confinement fusion (ICF) reactor elements required for component integration into a functional unit. We emphasize the generic properties of the different elements rather than specific designs. The topics discussed are general ICF reactor design considerations; reactor cavity phenomena, including the restoration of interpulse ambient conditions; first-wall temperature increases and material losses; reactor neutronics and hydrodynamic blanket response to neutron energy deposition; and analyses of loads and stresses in the reactor vessel walls, including remarks about the generation and propagation of very short wavelength stress waves. A discussion of analytic approaches useful in integrations and optimizations of ICF reactor systems concludes the report

  15. Experimental and Computational Modal Analyses for Launch Vehicle Models considering Liquid Propellant and Flange Joints

    Directory of Open Access Journals (Sweden)

    Chang-Hoon Sim

    2018-01-01

    Full Text Available In this research, modal tests and analyses are performed for a simplified and scaled first-stage model of a space launch vehicle using liquid propellant. This study aims to establish finite element modeling techniques for computational modal analyses by considering the liquid propellant and flange joints of launch vehicles. The modal tests measure the natural frequencies and mode shapes in the first and second lateral bending modes. As the liquid filling ratio increases, the measured frequencies decrease. In addition, as the number of flange joints increases, the measured natural frequencies increase. Computational modal analyses using the finite element method are conducted. The liquid is modeled by the virtual mass method, and the flange joints are modeled using one-dimensional spring elements along with the node-to-node connection. Comparison of the modal test results and predicted natural frequencies shows good or moderate agreement. The correlation between the modal tests and analyses establishes finite element modeling techniques for modeling the liquid propellant and flange joints of space launch vehicles.
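The reported trends (natural frequencies falling as the liquid filling ratio increases and rising with more flange joints) follow the single-degree-of-freedom relation f = (1/2π)·√(k/m); a sketch with hypothetical mass and stiffness values, not the paper's model:

```python
import math

def natural_frequency_hz(k_n_per_m: float, m_kg: float) -> float:
    """First natural frequency of a single-DOF spring-mass idealization."""
    return math.sqrt(k_n_per_m / m_kg) / (2 * math.pi)

# Hypothetical tank: structural mass 100 kg, lateral stiffness 4.0e6 N/m.
f_empty = natural_frequency_hz(4.0e6, 100.0)
# Adding liquid mass (the virtual/added-mass effect) lowers the frequency...
f_filled = natural_frequency_hz(4.0e6, 100.0 + 60.0)
# ...while stiffer joints (higher effective k) raise it.
f_stiffer = natural_frequency_hz(1.2 * 4.0e6, 100.0)
print(f_empty, f_filled, f_stiffer)
```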

  16. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    Directory of Open Access Journals (Sweden)

    Wilson David P

    2008-02-01

    Full Text Available Abstract SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated.
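The "equitable sampling of parameter space" that SaSAT offers can be illustrated with a minimal Latin hypercube sampler; this is a sketch, not the toolbox's actual Matlab® code, and all names are illustrative:

```python
import numpy as np

def latin_hypercube(n_samples: int, n_params: int, seed=None) -> np.ndarray:
    """Minimal Latin hypercube sample on [0, 1)^n_params: each parameter's
    range is split into n_samples equal strata, exactly one point is drawn
    per stratum, and strata are randomly paired across parameters."""
    rng = np.random.default_rng(seed)
    u = rng.random((n_samples, n_params))          # one draw inside each stratum
    strata = np.array([rng.permutation(n_samples) for _ in range(n_params)]).T
    return (strata + u) / n_samples

sample = latin_hypercube(10, 3, seed=0)
```

Each column covers all ten strata exactly once, which is what makes the design "equitable" compared with plain random sampling.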

  17. Numerical model simulation of free surface behavior in spallation target of ADS

    International Nuclear Information System (INIS)

    Chai Xiang; Su Guanyu; Cheng Xu

    2012-01-01

    The spallation target in an accelerator driven sub-critical system (ADS) couples the subcritical reactor core with the accelerator. The design of a windowless target has to ensure the formation of a stable free surface with a desirable shape, to avoid local overheating of the heavy liquid metal (HLM). To investigate the free surface behavior of the spallation target, OpenFOAM, an open-source CFD software platform, was used to simulate the formation and features of the free surface in the windowless target. The VOF method was utilized as the interface-capturing methodology. The numerical results were compared to experimental data and to numerical results obtained with the FLUENT code. The effects of turbulence models were studied and recommendations were made regarding the application of turbulence models. (authors)

  18. A 1024 channel analyser of model FH 465

    International Nuclear Information System (INIS)

    Tang Cunxun

    1988-01-01

    The FH 465 is a renewed version of the model FH 451 1024-channel analyser. Besides the simple operation and fine display featured by its predecessor, the core memory has been replaced by semiconductor memory and the level of integration has been improved; the use of widely available low-power 74LS devices has not only greatly decreased the cost but also allows easy interfacing with Apple-II, Great Wall-0520-CH or IBM-PC/XT microcomputers. The operating principle, main specifications and test results are described.

  19. arXiv Statistical Analyses of Higgs- and Z-Portal Dark Matter Models

    CERN Document Server

    Ellis, John; Marzola, Luca; Raidal, Martti

    2018-06-12

    We perform frequentist and Bayesian statistical analyses of Higgs- and Z-portal models of dark matter particles with spin 0, 1/2 and 1. Our analyses incorporate data from direct detection and indirect detection experiments, as well as LHC searches for monojet and monophoton events, and we also analyze the potential impacts of future direct detection experiments. We find acceptable regions of the parameter spaces for Higgs-portal models with real scalar, neutral vector, Majorana or Dirac fermion dark matter particles, and Z-portal models with Majorana or Dirac fermion dark matter particles. In many of these cases, there are interesting prospects for discovering dark matter particles in Higgs or Z decays, as well as dark matter particles weighing $\\gtrsim 100$ GeV. Negative results from planned direct detection experiments would still allow acceptable regions for Higgs- and Z-portal models with Majorana or Dirac fermion dark matter particles.

  20. Vocational Teachers and Professionalism - A Model Based on Empirical Analyses

    DEFF Research Database (Denmark)

    Duch, Henriette Skjærbæk; Andreasen, Karen E

    Vocational Teachers and Professionalism - A Model Based on Empirical Analyses Several theorists have developed models to illustrate the processes of adult learning and professional development (e.g. Illeris, Argyris, Engeström; Wahlgren & Aarkorg, Kolb and Wenger). Models can sometimes be criticized...... emphasis on the adult employee, the organization, its surroundings as well as other contextual factors. Our concern is adult vocational teachers attending a pedagogical course and teaching at vocational colleges. The aim of the paper is to discuss different models and develop a model concerning teachers...... at vocational colleges based on empirical data in a specific context, a vocational teacher-training course in Denmark. By offering a basis and concepts for analysis of practice, such a model is meant to support the development of vocational teachers’ professionalism at courses and in organizational contexts......

  1. Material model for non-linear finite element analyses of large concrete structures

    NARCIS (Netherlands)

    Engen, Morten; Hendriks, M.A.N.; Øverli, Jan Arve; Åldstedt, Erik; Beushausen, H.

    2016-01-01

    A fully triaxial material model for concrete was implemented in a commercial finite element code. The only required input parameter was the cylinder compressive strength. The material model was suitable for non-linear finite element analyses of large concrete structures. The importance of including

  2. Effect of socioeconomic inequalities on cholecystectomy outcomes: a 10-year population-based analysis.

    Science.gov (United States)

    Lu, Ping; Yang, Nan-Ping; Chang, Nien-Tzu; Lai, K Robert; Lin, Kai-Biao; Chan, Chien-Lung

    2018-02-13

    Although numerous epidemiological studies on cholecystectomy have been conducted worldwide, only a few have considered the effect of socioeconomic inequalities on cholecystectomy outcomes. Specifically, few studies have focused on the low-income population (LIP). A nationwide prospective study based on the Taiwan National Health Insurance dataset was conducted during 2003-2012. ICD-9-CM procedure codes 51.2 and 51.21-51.24 were used as the inclusion criteria for cholecystectomy. Temporal trends were analyzed using joinpoint regression, and the hierarchical linear modeling (HLM) method was used as the analytical strategy to evaluate group-level and individual-level factors. Interactions between age, gender and SES were also tested in the HLM model. Analyses were conducted on 225,558 patients. The incidence rates were 167.81 (95% CI: 159.78-175.83) per 100,000 individuals per year for the LIP and 123.24 (95% CI: 116.37-130.12) per 100,000 individuals per year for the general population (GP). After cholecystectomy, LIP patients showed higher rates of 30-day mortality, in-hospital complications, and readmission for complications, but a lower rate of routine discharge than GP patients. The hospital costs and length of stay for LIP patients were higher than those for GP patients. The multilevel analysis using HLM revealed that adverse socioeconomic status significantly negatively affects the outcomes of patients undergoing cholecystectomy. Additionally, male sex, advanced age, and high Charlson Comorbidity Index (CCI) scores were associated with higher rates of in-hospital complications and 30-day mortality. We also observed that the 30-day mortality rates for patients who underwent cholecystectomy in regional hospitals and district hospitals were significantly higher than those of patients receiving care in a medical center. Patients with a disadvantaged financial status appeared to be more vulnerable to cholecystectomy surgery.

  3. Taxing CO2 and subsidising biomass: Analysed in a macroeconomic and sectoral model

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik

    2000-01-01

    This paper analyses the combination of taxes and subsidies as an instrument to enable a reduction in CO2 emission. The objective of the study is to compare recycling of a CO2 tax revenue as a subsidy for biomass use as opposed to traditional recycling such as reduced income or corporate taxation....... A model of Denmark's energy supply sector is used to analyse the effect of a CO2 tax combined with using the tax revenue for biomass subsidies. The energy supply model is linked to a macroeconomic model such that the macroeconomic consequences of tax policies can be analysed along with the consequences...... for specific sectors such as agriculture. Electricity and heat are produced at heat and power plants utilising fuels which minimise total fuel cost, while the authorities regulate capacity expansion technologies. The effect of fuel taxes and subsidies on fuels is very sensitive to the fuel substitution...

  4. Strategies for Determining Correct Cytochrome P450 Contributions in Hepatic Clearance Predictions: In Vitro-In Vivo Extrapolation as Modelling Approach and Tramadol as Proof-of-Concept Compound.

    Science.gov (United States)

    T'jollyn, Huybrecht; Snoeys, Jan; Van Bocxlaer, Jan; De Bock, Lies; Annaert, Pieter; Van Peer, Achiel; Allegaert, Karel; Mannens, Geert; Vermeulen, An; Boussery, Koen

    2017-06-01

    Although the measurement of cytochrome P450 (CYP) contributions in metabolism assays is straightforward, determination of actual in vivo contributions might be challenging. How representative are in vitro data of in vivo CYP contributions? This article proposes an improved strategy for the determination of in vivo CYP enzyme-specific metabolic contributions, based on in vitro data, using an in vitro-in vivo extrapolation (IVIVE) approach. The approaches are exemplified using tramadol as a model compound, and CYP2D6 and CYP3A4 as the involved enzymes. Metabolism data for tramadol and for the probe substrates midazolam (CYP3A4) and dextromethorphan (CYP2D6) were gathered in human liver microsomes (HLM) and recombinant human enzyme systems (rhCYP). From these probe substrates, an activity-adjustment factor (AAF) was calculated per CYP enzyme, for the determination of correct hepatic clearance contributions. As a reference, tramadol CYP contributions were scaled back from in vivo data (retrograde approach) and were compared with the ones derived in vitro. In this view, the AAF is an enzyme-specific factor, calculated from reference probe activity measurements in vitro and in vivo, that allows appropriate scaling of a test drug's in vitro activity to the 'healthy volunteer' population level. Calculation of an AAF thus accounts for any 'experimental' or 'batch-specific' activity difference between in vitro HLM and in vivo derived activity. In this specific HLM batch, for CYP3A4 and CYP2D6, an AAF of 0.91 and 1.97 was calculated, respectively. This implies that, in this batch, the in vitro CYP3A4 activity is 1.10-fold higher and the CYP2D6 activity 1.97-fold lower, compared to in vivo derived CYP activities. This study shows that, in cases where the HLM pool does not represent the typical mean population CYP activities, AAF correction of in vitro metabolism data optimizes CYP contributions in the prediction of hepatic clearance. Therefore, in vitro parameters for any test compound
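The AAF logic above reduces to a ratio of in vivo to in vitro probe activity, applied per enzyme. In the sketch below, the AAF values are those quoted in the abstract, while the unadjusted clearances are hypothetical:

```python
# Sketch of the activity-adjustment factor (AAF) described in the abstract:
# AAF = (in vivo probe activity) / (in vitro probe activity), applied per
# CYP enzyme to rescale a test drug's in vitro contribution.

# Reproduce the fold-differences quoted in the abstract from its AAFs:
aaf_cyp3a4, aaf_cyp2d6 = 0.91, 1.97
fold_3a4 = 1 / aaf_cyp3a4   # in vitro CYP3A4 activity ~1.10-fold higher
fold_2d6 = aaf_cyp2d6       # in vitro CYP2D6 activity ~1.97-fold lower

# Apply the AAFs to hypothetical (illustrative) unadjusted in vitro
# intrinsic clearances, then recompute the CYP2D6 contribution:
cl_3a4, cl_2d6 = 10.0, 5.0                       # µL/min/mg, invented values
adj_3a4, adj_2d6 = cl_3a4 * aaf_cyp3a4, cl_2d6 * aaf_cyp2d6
contribution_2d6 = adj_2d6 / (adj_3a4 + adj_2d6)
print(round(fold_3a4, 2), round(contribution_2d6, 3))
```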

  5. Quantifying and Analysing Neighbourhood Characteristics Supporting Urban Land-Use Modelling

    DEFF Research Database (Denmark)

    Hansen, Henning Sten

    2009-01-01

    Land-use modelling and spatial scenarios have gained increased attention as a means to meet the challenge of reducing uncertainty in spatial planning and decision-making. Several organisations have developed software for land-use modelling. Many of the recent modelling efforts incorporate...... cellular automata (CA) to accomplish spatially explicit land-use change modelling. Spatial interaction between neighbour land-uses is an important component in urban cellular automata. Nevertheless, this component is usually calibrated through trial-and-error estimation. The aim of the current research project has...... been to quantify and analyse land-use neighbourhood characteristics and impart useful information for cell-based land-use modelling. The results of our research are a major step forward, because we have estimated rules for neighbourhood interaction from actually observed land-use changes on a yearly basis...

  6. Impact of CYP2C8*3 polymorphism on in vitro metabolism of imatinib to N-desmethyl imatinib.

    Science.gov (United States)

    Khan, Muhammad Suleman; Barratt, Daniel T; Somogyi, Andrew A

    2016-01-01

    1. Imatinib is metabolized to N-desmethyl imatinib by CYPs 3A4 and 2C8. The effect of CYP2C8*3 genotype on N-desmethyl imatinib formation was unknown. 2. We examined imatinib N-demethylation in human liver microsomes (HLMs) genotyped for CYP2C8*3, in CYP2C8*3/*3 pooled HLMs and in recombinant CYP2C8 and CYP3A4 enzymes. Effects of CYP-selective inhibitors on N-demethylation were also determined. 3. A single-enzyme Michaelis-Menten model with autoinhibition best fitted CYP2C8*1/*1 HLM (n = 5) and recombinant CYP2C8 kinetic data (median ± SD Ki = 139 ± 61 µM and 149 µM, respectively). Recombinant CYP3A4 showed two-site enzyme kinetics with no autoinhibition. Three of four CYP2C8*1/*3 HLMs showed single-enzyme kinetics with no autoinhibition. Binding affinity was higher in CYP2C8*1/*3 than CYP2C8*1/*1 HLM (median ± SD Km = 6 ± 2 versus 11 ± 2 µM, P=0.04). CYP2C8*3/*3 (pooled HLM) also showed high binding affinity (Km = 4 µM) and single-enzyme weak autoinhibition (Ki = 449 µM) kinetics. CYP2C8 inhibitors reduced HLM N-demethylation by 47-75%, compared to 0-30% for CYP3A4 inhibitors. 4. In conclusion, CYP2C8*3 is a gain-of-function polymorphism for imatinib N-demethylation, which appears to be mainly mediated by CYP2C8 and not CYP3A4 in vitro in HLM.
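The single-enzyme Michaelis-Menten model with (substrate) autoinhibition used above can be written as v = Vmax·S/(Km + S + S²/Ki). A sketch using the CYP2C8*1/*1 median parameters quoted in the abstract (Vmax normalized, so absolute rates are illustrative):

```python
def velocity(s, vmax=1.0, km=11.0, ki=139.0):
    """Michaelis-Menten rate with substrate (auto)inhibition:
    v = Vmax*S / (Km + S + S**2/Ki). Km and Ki are the CYP2C8*1/*1
    medians from the abstract; Vmax is normalized (illustrative)."""
    return vmax * s / (km + s + s**2 / ki)

# With autoinhibition the rate peaks near S* = sqrt(Km*Ki), then declines.
s_star = (11.0 * 139.0) ** 0.5   # ~39 µM
rates = [velocity(s) for s in (10, 40, 100, 400)]
print(s_star, rates)
```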

  7. A non-equilibrium neutral model for analysing cultural change.

    Science.gov (United States)

    Kandler, Anne; Shennan, Stephen

    2013-08-07

    Neutral evolution is a frequently used model to analyse changes in frequencies of cultural variants over time. Variants are chosen to be copied according to their relative frequency and new variants are introduced by a process of random mutation. Here we present a non-equilibrium neutral model which accounts for temporally varying population sizes and mutation rates and makes it possible to analyse the cultural system under consideration at any point in time. This framework gives an indication whether observed changes in the frequency distributions of a set of cultural variants between two time points are consistent with the random copying hypothesis. We find that the likelihood of the existence of the observed assemblage at the end of the considered time period (expressed by the probability of the observed number of cultural variants present in the population during the whole period under neutral evolution) is a powerful indicator of departures from neutrality. Further, we study the effects of frequency-dependent selection on the evolutionary trajectories and present a case study of change in the decoration of pottery in early Neolithic Central Europe. Based on the framework developed we show that neutral evolution is not an adequate description of the observed changes in frequency. Copyright © 2013 Elsevier Ltd. All rights reserved.
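A minimal sketch of the neutral (random copying) model described above, with frequency-proportional copying and innovation at rate mu; population size, mu, and the number of generations are illustrative:

```python
import random

def neutral_step(population, mu, next_label, rng):
    """One generation of the neutral model: each individual copies a variant
    chosen proportionally to its frequency in the parent generation, or
    innovates a new variant with probability mu (Wright-Fisher style sketch)."""
    new_pop = []
    for _ in range(len(population)):
        if rng.random() < mu:
            new_pop.append(next_label)      # random mutation: brand-new variant
            next_label += 1
        else:
            new_pop.append(rng.choice(population))  # frequency-proportional copy
    return new_pop, next_label

rng = random.Random(42)
pop, label = [0] * 50 + [1] * 50, 2         # two variants, equal frequency
for _ in range(200):
    pop, label = neutral_step(pop, 0.01, label, rng)
n_variants = len(set(pop))
```

Repeating such runs gives the null distribution of variant counts against which observed assemblages are compared.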

  8. The Differential Treatment Model: Empirical Evidence from a Personality Typology of Adult Offenders.

    Science.gov (United States)

    Annis, Helen M.; Chan, David

    1983-01-01

    Interaction of offender type by treatment program was examined for 100 adult offenders with alcohol and drug problems assigned to a group therapy program and 50 to routine care. Offenders who were classified high in self-image showed greater improvement in the group therapy program. (Author/HLM)

  9. Comparison of linear measurements and analyses taken from plaster models and three-dimensional images.

    Science.gov (United States)

    Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins

    2014-11-01

    Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and the reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). With the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic®, Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif). For the digital images, the measurement tools used were those from the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). The majority of the measurements obtained using the caliper and O3d were identical, and both were significantly different from those obtained using the MS. Intra-examiner agreement was lowest when using the MS. The results demonstrated that the accuracy and reproducibility of the tooth measurements and analyses from the plaster models using the caliper and from the digital models using the O3d software were identical.
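The Dahlberg formula used for the statistical comparison is the double-determination error d = sqrt(Σdᵢ²/2n) over paired differences; a sketch with hypothetical measurement pairs, not the study's data:

```python
import math

def dahlberg(first, second):
    """Dahlberg's double-determination error between paired measurement
    series: d = sqrt(sum((x1 - x2)**2) / (2n))."""
    assert len(first) == len(second)
    diffs = [a - b for a, b in zip(first, second)]
    return math.sqrt(sum(d * d for d in diffs) / (2 * len(diffs)))

caliper = [8.42, 7.95, 10.10, 6.88]   # hypothetical mesiodistal widths, mm
digital = [8.40, 8.01, 10.05, 6.90]   # same teeth measured on digital models
error_mm = dahlberg(caliper, digital)
print(error_mm)
```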

  10. Multi-state models: metapopulation and life history analyses

    Directory of Open Access Journals (Sweden)

    Arnason, A. N.

    2004-06-01

    Full Text Available Multi–state models are designed to describe populations that move among a fixed set of categorical states. The obvious application is to population interchange among geographic locations such as breeding sites or feeding areas (e.g., Hestbeck et al., 1991; Blums et al., 2003; Cam et al., 2004) but they are increasingly used to address important questions of evolutionary biology and life history strategies (Nichols & Kendall, 1995). In these applications, the states include life history stages such as breeding states. The multi–state models, by permitting estimation of stage–specific survival and transition rates, can help assess trade–offs between life history mechanisms (e.g. Yoccoz et al., 2000). These trade–offs are also important in meta–population analyses where, for example, the pre– and post–breeding rates of transfer among sub–populations can be analysed in terms of target colony distance, density, and other covariates (e.g., Lebreton et al. 2003; Breton et al., in review). Further examples of the use of multi–state models in analysing dispersal and life–history trade–offs can be found in the session on Migration and Dispersal. In this session, we concentrate on applications that did not involve dispersal. These applications fall in two main categories: those that address life history questions using stage categories, and a more technical use of multi–state models to address problems arising from the violation of mark–recapture assumptions leading to the potential for seriously biased predictions or misleading insights from the models. Our plenary paper, by William Kendall (Kendall, 2004), gives an overview of the use of Multi–state Mark–Recapture (MSMR) models to address two such violations. The first is the occurrence of unobservable states that can arise, for example, from temporary emigration or by incomplete sampling coverage of a target population. Such states can also occur for life history reasons, such

  11. Multilevel Analysis of Employee Satisfaction on Commitment to Organizational Culture: Case Study of Chinese State-Owned Enterprises

    Directory of Open Access Journals (Sweden)

    Fangtao Liu

    2017-11-01

    Full Text Available This study analyzes the effects of employee satisfaction and demographic indicators on employee commitment to organizational culture at the enterprise level. With data from a survey of 3029 employees from 27 state-owned enterprises (SOEs, a hierarchical linear model (HLM is used to identify the influencing factors of employee commitment to organizational culture at the enterprise level. An empirical study indicates that apart from the factors of employee satisfaction and demographic background, four contextual variables of enterprises, namely, comprehensive management, energy intensity, cost-income ratio, and capacity-load ratio, also influence commitment to organizational culture levels. Results show that applying HLM can substantially improve the explanatory power of employee satisfaction factors on commitment to organizational culture using nested enterprise contextual variables. Although measurement scales and satisfaction models have been proposed over the years, only a few studies have addressed the particular nature inherent in Chinese SOEs. HLM, which accounts for the nested data structure and determines the effects of employee satisfaction factors on commitment to organizational culture without bias, is developed in this study. Through an insider view based on empirical work, this research can improve the ability of senior managers to understand the culture and dynamics of organizations, to deliver strong leadership, and to enhance corporate internal management.
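The reason HLM improves explanatory power here is that employees are nested within enterprises, so satisfaction scores share enterprise-level variance. A sketch of the intraclass correlation (the share of variance at the enterprise level) on simulated data, not the study's survey:

```python
import random

# Simulate employees nested in firms: each score = grand mean + a
# firm-level effect + individual noise (all parameters are invented).
rng = random.Random(7)
n_firms, n_emp = 27, 100
scores, firm_means = [], []
for _ in range(n_firms):
    firm_effect = rng.gauss(0, 1.0)            # enterprise-level component
    firm_scores = [3.5 + firm_effect + rng.gauss(0, 0.5) for _ in range(n_emp)]
    scores.append(firm_scores)
    firm_means.append(sum(firm_scores) / n_emp)

grand = sum(firm_means) / n_firms
var_between = sum((m - grand) ** 2 for m in firm_means) / (n_firms - 1)
var_within = sum(
    (x - m) ** 2 for fs, m in zip(scores, firm_means) for x in fs
) / (n_firms * (n_emp - 1))
icc = var_between / (var_between + var_within)   # high icc => multilevel model needed
print(icc)
```

A non-trivial ICC is exactly the situation where single-level regression understates uncertainty and HLM is warranted.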

  12. A fast exact simulation method for a class of Markov jump processes.

    Science.gov (United States)

    Li, Yao; Hu, Lili

    2015-11-14

    This paper presents a new stochastic simulation algorithm (SSA), named the Hashing-Leaping method (HLM), for exact simulation of a class of Markov jump processes. The HLM has a conditionally constant computational cost per event, which is independent of the number of exponential clocks in the Markov process. The main idea of the HLM is to repeatedly implement a hash-table-like bucket sort algorithm for all times of occurrence covered by a time step of length τ. This paper serves as an introduction to this new SSA method. We introduce the method, demonstrate its implementation, analyze its properties, and compare its performance with three other commonly used SSA methods in four examples. Our performance tests and CPU operation statistics show certain advantages of the HLM for large-scale problems.
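For context, one standard SSA baseline against which methods like the HLM are compared is the classic direct method, whose per-event cost grows with the number of clocks; a minimal sketch (this is not the Hashing-Leaping method itself, and the rates are illustrative):

```python
import math
import random

def direct_method_step(rates, rng):
    """One event of the classic direct-method SSA: draw an Exp(sum(rates))
    waiting time, then pick the firing clock with probability proportional
    to its rate. Returns (waiting_time, index_of_firing_clock)."""
    total = sum(rates)
    tau = -math.log(1.0 - rng.random()) / total   # Exp(total) waiting time
    r, acc = rng.random() * total, 0.0
    for i, rate in enumerate(rates):              # linear scan: O(#clocks)
        acc += rate
        if r < acc:
            return tau, i
    return tau, len(rates) - 1

rng = random.Random(0)
t, fired = 0.0, []
for _ in range(1000):
    tau, i = direct_method_step([1.0, 2.0, 7.0], rng)
    t += tau
    fired.append(i)
share_fast = fired.count(2) / len(fired)   # ~7/10 of events from the fast clock
```

The linear scan is what bucket-sort-style schemes avoid, which is the motivation for a constant cost per event.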

  13. Regional analyses of labor markets and demography: a model based Norwegian example.

    Science.gov (United States)

    Stambol, L S; Stolen, N M; Avitsland, T

    1998-01-01

    The authors discuss the regional REGARD model, developed by Statistics Norway to analyze the regional implications of macroeconomic development of employment, labor force, and unemployment. "In building the model, empirical analyses of regional producer behavior in manufacturing industries have been performed, and the relation between labor market development and regional migration has been investigated. Apart from providing a short description of the REGARD model, this article demonstrates the functioning of the model, and presents some results of an application." excerpt

  14. Analysing and controlling the tax evasion dynamics via majority-vote model

    Energy Technology Data Exchange (ETDEWEB)

    Lima, F W S, E-mail: fwslima@gmail.co, E-mail: wel@ufpi.edu.b [Departamento de Fisica, Universidade Federal do PiauI, 64049-550, Teresina - PI (Brazil)

    2010-09-01

    Within the context of agent-based Monte-Carlo simulations, we study the well-known majority-vote model (MVM) with noise applied to tax evasion on simple square lattices, Voronoi-Delaunay random lattices, Barabasi-Albert networks, and Erdoes-Renyi random graphs. In order to analyse and control the fluctuations of tax evasion in the economics model proposed by Zaklan, the MVM is applied in the neighborhood of the critical noise q_c to evolve the Zaklan model. The Zaklan model had previously been studied using the equilibrium Ising model. Here we show that the Zaklan model is robust, because it can be studied using the equilibrium dynamics of the Ising model as well as the nonequilibrium MVM, on the various topologies cited above, giving the same behavior regardless of the dynamics or topology used.
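A toy sketch of the majority-vote update rule with noise q on a periodic square lattice (lattice size, q, and sweep count are illustrative; this is not the paper's simulation code):

```python
import random

def mvm_sweep(spins, size, q, rng):
    """One random-sequential sweep of the majority-vote model with noise q:
    a site aligns with its four-neighbour majority with probability 1-q and
    against it with probability q; ties are resolved at random."""
    for _ in range(size * size):
        i, j = rng.randrange(size), rng.randrange(size)
        s = (spins[(i - 1) % size][j] + spins[(i + 1) % size][j]
             + spins[i][(j - 1) % size] + spins[i][(j + 1) % size])
        if s != 0:                       # a clear local majority exists
            majority = 1 if s > 0 else -1
            spins[i][j] = majority if rng.random() > q else -majority
        else:                            # tie: choose at random
            spins[i][j] = rng.choice((-1, 1))
    return spins

rng = random.Random(1)
L = 16
lattice = [[1] * L for _ in range(L)]    # ordered start ("all compliant")
for _ in range(20):
    lattice = mvm_sweep(lattice, L, q=0.05, rng=rng)
m = abs(sum(sum(row) for row in lattice)) / (L * L)   # magnetization
```

Below the critical noise the lattice stays ordered (high m); tuning q near q_c is how the fluctuations are controlled in the Zaklan setting.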

  15. Analysing and controlling the tax evasion dynamics via majority-vote model

    International Nuclear Information System (INIS)

    Lima, F W S

    2010-01-01

    Within the context of agent-based Monte-Carlo simulations, we study the well-known majority-vote model (MVM) with noise applied to tax evasion on simple square lattices, Voronoi-Delaunay random lattices, Barabasi-Albert networks, and Erdoes-Renyi random graphs. In order to analyse and control the fluctuations of tax evasion in the economics model proposed by Zaklan, the MVM is applied in the neighborhood of the critical noise q_c to evolve the Zaklan model. The Zaklan model had previously been studied using the equilibrium Ising model. Here we show that the Zaklan model is robust, because it can be studied using the equilibrium dynamics of the Ising model as well as the nonequilibrium MVM, on the various topologies cited above, giving the same behavior regardless of the dynamics or topology used.

  16. On the use of uncertainty analyses to test hypotheses regarding deterministic model predictions of environmental processes

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Bittner, E.A.; Essington, E.H.

    1995-01-01

    This paper illustrates the use of Monte Carlo parameter uncertainty and sensitivity analyses to test hypotheses regarding predictions of deterministic models of environmental transport, dose, risk and other phenomena. The methodology is illustrated by testing whether 238Pu is transferred more readily than 239+240Pu from the gastrointestinal (GI) tract of cattle to their tissues (muscle, liver and blood). This illustration is based on a study wherein beef cattle grazed for up to 1064 days on a fenced plutonium (Pu)-contaminated arid site in Area 13 near the Nevada Test Site in the United States. Periodically, cattle were sacrificed and their tissues analyzed for Pu and other radionuclides. Conditional sensitivity analyses of the model predictions were also conducted. These analyses indicated that Pu cattle tissue concentrations had the largest impact of any model parameter on the pdf of predicted Pu fractional transfers. Issues that arise in conducting uncertainty and sensitivity analyses of deterministic models are discussed. (author)
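The hypothesis-testing idea can be sketched by propagating parameter uncertainty through a simple deterministic transfer model and comparing the resulting distributions for the two isotope groups; all distributions and parameter values below are invented for illustration, not taken from the study:

```python
import random

rng = random.Random(3)

def transfer_fraction(intake_bq, tissue_bq):
    return tissue_bq / intake_bq          # deterministic model output

def sample_fractions(log_intake_mu, log_tissue_mu, n=5000):
    """Monte Carlo propagation: draw uncertain parameters from (hypothetical)
    lognormal distributions and collect the model's output distribution."""
    out = []
    for _ in range(n):
        intake = rng.lognormvariate(log_intake_mu, 0.3)
        tissue = rng.lognormvariate(log_tissue_mu, 0.3)
        out.append(transfer_fraction(intake, tissue))
    return out

f238 = sample_fractions(0.0, -8.0)        # hypothetical: higher transfer
f239 = sample_fractions(0.0, -9.0)
# Test the hypothesis by distribution overlap: how often a sampled 238Pu
# fraction exceeds a sampled 239+240Pu fraction.
p_greater = sum(a > b for a, b in zip(f238, f239)) / len(f238)
```

A p_greater near 1 supports the "more readily transferred" hypothesis; values near 0.5 indicate the parameter uncertainty swamps the difference.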

  17. Growth Trajectories of Mathematics Achievement: Longitudinal Tracking of Student Academic Progress

    Science.gov (United States)

    Mok, Magdalena M. C.; McInerney, Dennis M.; Zhu, Jinxin; Or, Anthony

    2015-01-01

    Background: A number of methods to investigate growth have been reported in the literature, including hierarchical linear modelling (HLM), latent growth modelling (LGM), and multidimensional scaling applied to longitudinal profile analysis (LPAMS). Aims: This study aimed at modelling the mathematics growth of students over a span of 6 years from…

  18. Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses

    CERN Document Server

    Li, Shu; The ATLAS collaboration

    2017-01-01

    Proceeding for the poster presentation at LHCP2017, Shanghai, China on the topic of "Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses" (ATL-PHYS-SLIDE-2017-265 https://cds.cern.ch/record/2265389) Deadline: 01/09/2017

  19. Pneumoproteins as a lung-specific biomarker of alveolar permeability in conventional on-pump coronary artery bypass graft surgery vs mini-extracorporeal circuit - A pilot study

    NARCIS (Netherlands)

    van Boven, WJP; Gerritsen, WBM; Zanen, P; Grutters, JC; van Dongen, HPA; Bernard, A; Aarts, LPHJ

    Background: Despite improvements of the heart-lung machine (HLM), oxidative stress and subsequent damage to the alveolar capillary membrane still occur after conventional on-pump coronary artery bypass graft (CCABG) surgery. In an attempt to further improve the conventional HLM, a

  20. YALINA Booster subcritical assembly modeling and analyses

    International Nuclear Information System (INIS)

    Talamo, A.; Gohar, Y.; Aliberti, G.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.; Sadovich, S.

    2010-01-01

    Full text: Accurate simulation models of the YALINA Booster assembly of the Joint Institute for Power and Nuclear Research (JIPNR)-Sosny, Belarus have been developed by Argonne National Laboratory (ANL) of the USA. YALINA-Booster has coupled zones operating with fast and thermal neutron spectra, which requires special attention in the modelling process. Three different uranium enrichments of 90%, 36% or 21% were used in the fast zone and 10% uranium enrichment was used in the thermal zone. Two of the most advanced Monte Carlo computer programs have been utilized for the ANL analyses: MCNP of the Los Alamos National Laboratory and MONK of British Nuclear Fuels Limited and SERCO Assurance. The developed geometrical models for both computer programs modelled all the details of the YALINA Booster facility as described in the technical specifications defined in the International Atomic Energy Agency (IAEA) report without any geometrical approximation or material homogenization. Material impurities and the measured material densities have been used in the models. The obtained results for the neutron multiplication factors calculated in criticality mode (keff) and in source mode (ksrc) with an external neutron source from the two Monte Carlo programs are very similar. Different external neutron sources have been investigated including californium, deuterium-deuterium (D-D), and deuterium-tritium (D-T) neutron sources. The spatial neutron flux profiles and the neutron spectra in the experimental channels were calculated. In addition, the kinetic parameters were defined including the effective delayed neutron fraction, the prompt neutron lifetime, and the neutron generation time. A new calculation methodology has been developed at ANL to simulate the pulsed neutron source experiments. In this methodology, the MCNP code is used to simulate the detector response from a single pulse of the external neutron source and a C code is used to superimpose the pulse until the

  1. Global analyses of historical masonry buildings: Equivalent frame vs. 3D solid models

    Science.gov (United States)

    Clementi, Francesco; Mezzapelle, Pardo Antonio; Cocchi, Gianmichele; Lenci, Stefano

    2017-07-01

    The paper analyses the seismic vulnerability of two different masonry buildings, using both advanced 3D modelling with solid elements and equivalent frame modelling. The global structural behaviour and the dynamic properties of the compound have been evaluated using the Finite Element Modelling (FEM) technique, where the nonlinear behaviour of masonry has been taken into account by proper constitutive assumptions. A sensitivity analysis is performed to evaluate the effect of the choice of the structural model.

  2. Vibration tests and analyses of the reactor building model on a small scale

    International Nuclear Information System (INIS)

    Tsuchiya, Hideo; Tanaka, Mitsuru; Ogihara, Yukio; Moriyama, Ken-ichi; Nakayama, Masaaki

    1985-01-01

    The purpose of this paper is to describe the vibration tests and the simulation analyses of a small-scale reactor building model. The model vibration tests were performed to investigate the vibrational characteristics of the combined super-structure and to verify the computer code based on Dr. H. Tajimi's Thin Layered Element Theory, using a uniaxial shaking table (60 cm x 60 cm). The specimens consist of a ground model, three structural models (prestressed concrete containment vessel, inner concrete structure, and enclosure building), a combined structural model, and a combined structure-soil interaction model. These models are made of silicon rubber and have a scale of 1:600. Harmonic step-by-step excitation of 40 gals was applied to investigate the vibrational characteristics of each structural model. The responses of the specimens to harmonic excitation were measured by optical displacement meters and analyzed by a real-time spectrum analyzer. The resonance and phase-lag curves of the specimens relative to the shaking table were obtained. In the tests of the combined structure-soil interaction model, three predominant frequencies were observed in the resonance curves. These values were in good agreement with the analytical transfer-function curves from the computer code. The vibration tests and simulation analyses show that the silicon-rubber model test is useful for fundamental studies of structural problems, and that the computer code based on the Thin Layered Element Theory simulates the test results well. (Kobozono, M.)
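The resonance curves measured in such tests can be illustrated with the textbook magnification factor of a harmonically forced single-degree-of-freedom oscillator; the natural frequency and damping ratio below are arbitrary illustration values, not properties of the silicon-rubber specimens.

```python
import numpy as np

def sdof_magnification(freq, fn, zeta):
    """Steady-state magnification factor of a harmonically forced
    single-DOF oscillator: |H| = 1 / sqrt((1 - r^2)^2 + (2*zeta*r)^2),
    with frequency ratio r = freq / fn."""
    r = freq / fn
    return 1.0 / np.sqrt((1 - r**2) ** 2 + (2 * zeta * r) ** 2)

freqs = np.linspace(1, 100, 500)                 # excitation frequencies, Hz
curve = sdof_magnification(freqs, fn=40.0, zeta=0.05)
peak_freq = freqs[np.argmax(curve)]              # resonance peak near fn
```

Each predominant frequency seen in a multi-mode resonance curve corresponds to one such peak, which is why the measured curves could be checked against the analytical transfer functions.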

  3. Modeling hard clinical end-point data in economic analyses.

    Science.gov (United States)

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states. Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data are reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are recommended.

  4. Identification and characterization of human UDP-glucuronosyltransferases responsible for the in-vitro glucuronidation of arctigenin.

    Science.gov (United States)

    Xin, Hong; Xia, Yang-Liu; Hou, Jie; Wang, Ping; He, Wei; Yang, Ling; Ge, Guang-Bo; Xu, Wei

    2015-12-01

    This study aimed to characterize the glucuronidation pathway of arctigenin (AR) in human liver microsomes (HLM) and human intestine microsomes (HIM). HLM and HIM incubation systems were employed to catalyse the formation of AR glucuronide. The glucuronidation activity of commercially recombinant UGT isoforms towards AR was screened. A combination of chemical inhibition assay and kinetic analysis was used to determine the UGT isoforms involved in the glucuronidation of AR in HLM and HIM. AR could be extensively metabolized to one mono-glucuronide in HLM and HIM. The mono-glucuronide was biosynthesized and characterized as 4'-O-glucuronide. UGT1A1, 1A3, 1A7, 1A8, 1A9, 1A10, 2B4, 2B7 and 2B17 participated in the formation of 4'-O-G, while UGT2B17 demonstrated the highest catalytic activity in this biotransformation. Both kinetic analysis and chemical inhibition assays demonstrated that UGT1A9, UGT2B7 and UGT2B17 played important roles in AR-4'-O-glucuronidation in HLM. Furthermore, HIM demonstrated moderate efficiency for AR-4'-O-glucuronidation, implying that AR may undergo a first-pass metabolism during the absorption process. UGT1A9, UGT2B7 and UGT2B17 were the major isoforms responsible for the 4'-O-glucuronidation of AR in HLM, while UGT2B7 and UGT2B17 were the major contributors to this biotransformation in HIM. © 2015 Royal Pharmaceutical Society.

  5. Identification of an unusual type II thioesterase in the dithiolopyrrolone antibiotics biosynthetic pathway

    Energy Technology Data Exchange (ETDEWEB)

    Zhai, Ying; Bai, Silei; Liu, Jingjing; Yang, Liyuan [National Key Laboratory of Agricultural Microbiology, College of Life Science and Technology, Huazhong Agricultural University, Wuhan 430070 (China); Han, Li; Huang, Xueshi [Institute of Microbial Pharmaceuticals, College of Life and Health Sciences, Northeastern University, Shenyang 110819 (China); He, Jing, E-mail: hejingjj@mail.hzau.edu.cn [National Key Laboratory of Agricultural Microbiology, College of Life Science and Technology, Huazhong Agricultural University, Wuhan 430070 (China)

    2016-04-22

    Dithiolopyrrolone-group antibiotics, characterized by an electronically unique dithiolopyrrolone heterobicyclic core, are known for their antibacterial, antifungal, insecticidal and antitumor activities. Recently the biosynthetic gene clusters for two dithiolopyrrolone compounds, holomycin and thiomarinol, have been identified in different bacterial species. Here, we report a novel dithiolopyrrolone biosynthetic gene cluster (aut) isolated from Streptomyces thioluteus DSM 40027 which produces two pyrrothine derivatives, aureothricin and thiolutin. By comparison with other characterized dithiolopyrrolone clusters, eight genes in the aut cluster were verified to be responsible for the assembly of the dithiolopyrrolone core. The aut cluster was further confirmed by heterologous expression and in-frame gene deletion experiments. Intriguingly, we found that the heterogenetic thioesterase HlmK derived from the holomycin (hlm) gene cluster in Streptomyces clavuligerus significantly improved heterologous biosynthesis of dithiolopyrrolones in Streptomyces albus through coexpression with the aut cluster. In previous studies, HlmK was considered invalid because it has a Ser to Gly point mutation within the canonical Ser-His-Asp catalytic triad of thioesterases. However, gene inactivation and complementation experiments in our study unequivocally demonstrated that HlmK is an active, distinctive type II thioesterase that plays a beneficial role in dithiolopyrrolone biosynthesis. - Highlights: • Cloning of the aureothricin biosynthetic gene cluster from Streptomyces thioluteus DSM 40027. • Identification of the aureothricin gene cluster by heterologous expression and in-frame gene deletion. • The heterogenetic thioesterase HlmK significantly improved dithiolopyrrolones production of the aureothricin gene cluster. • Identification of HlmK as an unusual type II thioesterase.

  6. Identification of an unusual type II thioesterase in the dithiolopyrrolone antibiotics biosynthetic pathway

    International Nuclear Information System (INIS)

    Zhai, Ying; Bai, Silei; Liu, Jingjing; Yang, Liyuan; Han, Li; Huang, Xueshi; He, Jing

    2016-01-01

    Dithiolopyrrolone-group antibiotics, characterized by an electronically unique dithiolopyrrolone heterobicyclic core, are known for their antibacterial, antifungal, insecticidal and antitumor activities. Recently the biosynthetic gene clusters for two dithiolopyrrolone compounds, holomycin and thiomarinol, have been identified in different bacterial species. Here, we report a novel dithiolopyrrolone biosynthetic gene cluster (aut) isolated from Streptomyces thioluteus DSM 40027 which produces two pyrrothine derivatives, aureothricin and thiolutin. By comparison with other characterized dithiolopyrrolone clusters, eight genes in the aut cluster were verified to be responsible for the assembly of the dithiolopyrrolone core. The aut cluster was further confirmed by heterologous expression and in-frame gene deletion experiments. Intriguingly, we found that the heterogenetic thioesterase HlmK derived from the holomycin (hlm) gene cluster in Streptomyces clavuligerus significantly improved heterologous biosynthesis of dithiolopyrrolones in Streptomyces albus through coexpression with the aut cluster. In previous studies, HlmK was considered invalid because it has a Ser to Gly point mutation within the canonical Ser-His-Asp catalytic triad of thioesterases. However, gene inactivation and complementation experiments in our study unequivocally demonstrated that HlmK is an active, distinctive type II thioesterase that plays a beneficial role in dithiolopyrrolone biosynthesis. - Highlights: • Cloning of the aureothricin biosynthetic gene cluster from Streptomyces thioluteus DSM 40027. • Identification of the aureothricin gene cluster by heterologous expression and in-frame gene deletion. • The heterogenetic thioesterase HlmK significantly improved dithiolopyrrolones production of the aureothricin gene cluster. • Identification of HlmK as an unusual type II thioesterase.

  7. Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach

    International Nuclear Information System (INIS)

    2014-12-01

    In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise: · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population, · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included, · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps, · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details, · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and · an overview of the
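As a toy illustration of the kind of near-field-to-geosphere transfer that such dose calculations chain together (not the actual SGT models or codes; the rate constants and half-life below are invented), a two-compartment first-order release with radioactive decay can be integrated with an explicit Euler scheme:

```python
import numpy as np

# Minimal illustration: first-order release of a single radionuclide from
# the near field (compartment 1) to the geosphere (compartment 2), with
# radioactive decay in both compartments. All parameters are hypothetical.
lam = np.log(2) / 30.0      # decay constant, 1/yr (30-yr half-life)
k_rel = 0.05                # release rate from near field, 1/yr
dt, steps = 0.1, 2000       # 200 years, explicit Euler integration
n1, n2 = 1.0, 0.0           # initial inventories (normalized)
release = []
for _ in range(steps):
    flux = k_rel * n1                 # annual release rate into the geosphere
    n1 += dt * (-(k_rel + lam) * n1)  # release + decay deplete the near field
    n2 += dt * (flux - lam * n2)      # geosphere gains the flux, loses decay
    release.append(flux)
peak_release = max(release)  # a dose factor would convert this to mSv/yr
```

In an actual assessment the peak release rate, multiplied by a nuclide-specific biosphere dose conversion factor, is what gets compared against the 0.1 mSv/yr protection criterion.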

  8. Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2014-12-15

    In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise: · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population, · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included, · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps, · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details, · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and · an overview of the

  9. Diethylstilbestrol can effectively accelerate estradiol-17-O-glucuronidation, while potently inhibiting estradiol-3-O-glucuronidation

    International Nuclear Information System (INIS)

    Zhu, Liangliang; Xiao, Ling; Xia, Yangliu; Zhou, Kun; Wang, Huili; Huang, Minyi; Ge, Guangbo; Wu, Yan; Wu, Ganlin; Yang, Ling

    2015-01-01

    This in vitro study investigates the effects of diethylstilbestrol (DES), a widely used toxic synthetic estrogen, on estradiol-3- and 17-O- (E2-3/17-O) glucuronidation, via culturing human liver microsomes (HLMs) or recombinant UDP-glucuronosyltransferases (UGTs) with DES and E2. DES can potently inhibit E2-3-O-glucuronidation in HLM, a probe reaction for UGT1A1. Kinetic assays indicate that the inhibition follows a competitive inhibition mechanism, with the Ki value of 2.1 ± 0.3 μM, which is less than the possible in vivo level. In contrast to the inhibition on E2-3-O-glucuronidation, the acceleration is observed on E2-17-O-glucuronidation in HLM, in which cholestatic E2-17-O-glucuronide is generated. In the presence of DES (0–6.25 μM), K m values for E2-17-O-glucuronidation are located in the range of 7.2–7.4 μM, while V max values range from 0.38 to 1.54 nmol/min/mg. The mechanism behind the activation in HLM is further demonstrated by the fact that DES can efficiently elevate the activity of UGT1A4 in catalyzing E2-17-O-glucuronidation. The presence of DES (2 μM) can elevate V max from 0.016 to 0.81 nmol/min/mg, while lifting K m in a much lesser extent from 4.4 to 11 μM. Activation of E2-17-O-glucuronidation is well described by a two binding site model, with K A , α, and β values of 0.077 ± 0.18 μM, 3.3 ± 1.1 and 104 ± 56, respectively. However, diverse effects of DES towards E2-3/17-O-glucuronidation are not observed in liver microsomes from several common experimental animals. In summary, this study issues new potential toxic mechanisms for DES: potently inhibiting the activity of UGT1A1 and powerfully accelerating the formation of cholestatic E2-17-O-glucuronide by UGT1A4. - Highlights: • E2-3-O-glucuronidation in HLM is inhibited when co-incubated with DES. • E2-17-O-glucuronidation in HLM is stimulated when co-incubated with DES. • Acceleration of E2-17-O-glucuronidationin in HLM by DES is via activating the activity of UGT1A4
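The competitive inhibition reported here (Ki ≈ 2.1 μM for DES against E2-3-O-glucuronidation) follows the standard competitive-inhibition rate law; the sketch below uses that Ki with placeholder Vmax and Km values chosen purely for illustration, not taken from the study.

```python
def competitive_inhibition_rate(S, I, Vmax, Km, Ki):
    """Michaelis-Menten rate in the presence of a competitive inhibitor:
    v = Vmax * S / (Km * (1 + I / Ki) + S)
    S, I in the same concentration units as Km and Ki."""
    return Vmax * S / (Km * (1.0 + I / Ki) + S)

# Ki = 2.1 uM as reported for DES; Vmax and Km are placeholder numbers.
v_no_inhibitor = competitive_inhibition_rate(S=10.0, I=0.0, Vmax=1.0, Km=5.0, Ki=2.1)
v_inhibited = competitive_inhibition_rate(S=10.0, I=2.1, Vmax=1.0, Km=5.0, Ki=2.1)
```

A competitive inhibitor raises the apparent Km by the factor (1 + I/Ki) while leaving Vmax unchanged, which is the kinetic signature the study's assays tested for.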

  10. Diethylstilbestrol can effectively accelerate estradiol-17-O-glucuronidation, while potently inhibiting estradiol-3-O-glucuronidation

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Liangliang; Xiao, Ling [The Centre for Drug and Food Safety Evaluation, School of Life Science, Anqing Normal University, Anqing 246011 (China); Xia, Yangliu [Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian 116023 (China); Zhou, Kun [College of Pharmacy, Liaoning University of Traditional Chinese Medicine, Dalian 116600 (China); Wang, Huili; Huang, Minyi [The Centre for Drug and Food Safety Evaluation, School of Life Science, Anqing Normal University, Anqing 246011 (China); Ge, Guangbo, E-mail: geguangbo@dicp.ac.cn [Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian 116023 (China); Wu, Yan; Wu, Ganlin [The Centre for Drug and Food Safety Evaluation, School of Life Science, Anqing Normal University, Anqing 246011 (China); Yang, Ling, E-mail: yling@dicp.ac.cn [Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian 116023 (China)

    2015-03-01

    This in vitro study investigates the effects of diethylstilbestrol (DES), a widely used toxic synthetic estrogen, on estradiol-3- and 17-O- (E2-3/17-O) glucuronidation, via culturing human liver microsomes (HLMs) or recombinant UDP-glucuronosyltransferases (UGTs) with DES and E2. DES can potently inhibit E2-3-O-glucuronidation in HLM, a probe reaction for UGT1A1. Kinetic assays indicate that the inhibition follows a competitive inhibition mechanism, with the Ki value of 2.1 ± 0.3 μM, which is less than the possible in vivo level. In contrast to the inhibition on E2-3-O-glucuronidation, the acceleration is observed on E2-17-O-glucuronidation in HLM, in which cholestatic E2-17-O-glucuronide is generated. In the presence of DES (0–6.25 μM), K{sub m} values for E2-17-O-glucuronidation are located in the range of 7.2–7.4 μM, while V{sub max} values range from 0.38 to 1.54 nmol/min/mg. The mechanism behind the activation in HLM is further demonstrated by the fact that DES can efficiently elevate the activity of UGT1A4 in catalyzing E2-17-O-glucuronidation. The presence of DES (2 μM) can elevate V{sub max} from 0.016 to 0.81 nmol/min/mg, while lifting K{sub m} in a much lesser extent from 4.4 to 11 μM. Activation of E2-17-O-glucuronidation is well described by a two binding site model, with K{sub A}, α, and β values of 0.077 ± 0.18 μM, 3.3 ± 1.1 and 104 ± 56, respectively. However, diverse effects of DES towards E2-3/17-O-glucuronidation are not observed in liver microsomes from several common experimental animals. In summary, this study issues new potential toxic mechanisms for DES: potently inhibiting the activity of UGT1A1 and powerfully accelerating the formation of cholestatic E2-17-O-glucuronide by UGT1A4. - Highlights: • E2-3-O-glucuronidation in HLM is inhibited when co-incubated with DES. • E2-17-O-glucuronidation in HLM is stimulated when co-incubated with DES. • Acceleration of E2-17-O-glucuronidationin in HLM by DES is via activating the

  11. Applications of one-dimensional models in simplified inelastic analyses

    International Nuclear Information System (INIS)

    Kamal, S.A.; Chern, J.M.; Pai, D.H.

    1980-01-01

    This paper presents an approximate inelastic analysis based on geometric simplification with emphasis on its applicability, modeling, and the method of defining the loading conditions. Two problems are investigated: a one-dimensional axisymmetric model of generalized plane strain thick-walled cylinder is applied to the primary sodium inlet nozzle of the Clinch River Breeder Reactor Intermediate Heat Exchanger (CRBRP-IHX), and a finite cylindrical shell is used to simulate the branch shell forging (Y) junction. The results are then compared with the available detailed inelastic analyses under cyclic loading conditions in terms of creep and fatigue damages and inelastic ratchetting strains per the ASME Code Case N-47 requirements. In both problems, the one-dimensional simulation is able to trace the detailed stress-strain response. The quantitative comparison is good for the nozzle, but less satisfactory for the Y junction. Refinements are suggested to further improve the simulation

  12. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    Science.gov (United States)

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
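A minimal version of such a regression-based conversion between two congener-set totals can be sketched as follows. The data here are synthetic stand-ins for the sediment and tissue measurements; the 0.93 capture fraction mirrors the 93% figure above.

```python
import numpy as np

def fit_conversion(x_partial, y_total):
    """Least-squares line converting a partial-congener SumPCB (x) to the
    full 209-congener SumPCB (y): y_hat = intercept + slope * x."""
    slope, intercept = np.polyfit(x_partial, y_total, 1)
    return slope, intercept

def relative_percent_difference(pred, meas):
    """Mean relative percent difference (%) between predicted and measured."""
    pred, meas = np.asarray(pred, float), np.asarray(meas, float)
    return np.mean(200.0 * np.abs(pred - meas) / (pred + meas))

# Synthetic example: the partial congener set captures ~93% of the total.
rng = np.random.default_rng(0)
x = rng.uniform(10, 100, 50)           # SumPCB from the 119-congener method
y = x / 0.93 + rng.normal(0, 1, 50)    # SumPCB from the 209-congener method
slope, intercept = fit_conversion(x, y)
y_hat = intercept + slope * x
rpd = relative_percent_difference(y_hat, y)
```

The fitted slope recovers the inverse of the capture fraction, which is how the model can also be read as estimating the proportional contribution of a congener set to ΣPCB.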

  13. A Multilevel Association Model for IT Employees’ Life Stress and Job Satisfaction: An Information Technology (IT) Industry Case Study

    Directory of Open Access Journals (Sweden)

    Mehmood Khalid

    2017-01-01

    Full Text Available The aim of this research was to investigate the association between IT employees’ life stress and job satisfaction in information technology (IT) firms. Data on 250 IT employees in 30 working groups were obtained from 10 Chinese IT firms in Beijing and analyzed using hierarchical linear modeling (HLM). Results show a significant association between IT employees’ life stress and their job satisfaction at both the individual and group levels. Furthermore, group-level life stress moderates the association between individual-level life stress and job satisfaction. Finally, limitations and implications of the present study are also discussed.
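The individual-versus-group variance partition that motivates HLM can be illustrated with the one-way intraclass correlation on group-structured data; a large ICC signals that observations within groups are not independent, so a multilevel model is warranted. The group sizes and variance components below are invented, not the study's.

```python
import numpy as np

def icc_oneway(groups):
    """One-way random-effects ICC(1): share of total variance attributable
    to group membership. groups: list of equal-length 1-D arrays."""
    k = len(groups)                        # number of groups
    n = len(groups[0])                     # members per group
    grand = np.mean(np.concatenate(groups))
    means = np.array([g.mean() for g in groups])
    msb = n * np.sum((means - grand) ** 2) / (k - 1)               # between-group MS
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Hypothetical job-satisfaction scores for 30 working groups of 8 employees.
rng = np.random.default_rng(1)
group_effects = rng.normal(0, 1.0, 30)
data = [rng.normal(5 + u, 1.0, 8) for u in group_effects]
icc = icc_oneway(data)   # a substantial ICC motivates fitting an HLM
```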

  14. NUMERICAL MODELLING AS NON-DESTRUCTIVE METHOD FOR THE ANALYSES AND DIAGNOSIS OF STONE STRUCTURES: MODELS AND POSSIBILITIES

    Directory of Open Access Journals (Sweden)

    Nataša Štambuk-Cvitanović

    1999-12-01

    Full Text Available Assuming the necessity of analysis, diagnosis and preservation of existing valuable stone masonry structures and ancient monuments in today's European urban cores, numerical modelling becomes an efficient tool for investigating structural behaviour. It should be supported by experimentally obtained input data and taken as part of a general combined approach, particularly with non-destructive techniques applied to the structure/model within it. For structures or details which may require more complex analyses, three numerical models based upon the finite element technique are suggested: (1) a standard linear model; (2) a linear model with contact (interface) elements; and (3) a non-linear elasto-plastic and orthotropic model. The applicability of these models depends upon the accuracy of the approach or the type of problem, and will be presented on some characteristic samples.

  15. Generic uncertainty model for DETRA for environmental consequence analyses. Application and sample outputs

    International Nuclear Information System (INIS)

    Suolanen, V.; Ilvonen, M.

    1998-10-01

    Computer model DETRA applies a dynamic compartment modelling approach. The compartment structure of each considered application can be tailored individually. This flexible modelling method makes it possible to consider the transfer of radionuclides in various cases: the aquatic environment and related food chains, the terrestrial environment, food chains in general and foodstuffs, body burden analyses of humans, etc. In a former study on this subject, the user interface of the DETRA code was modernized. The new interface works in the Windows environment and the usability of the code has been improved. The objective of this study has been to further develop and diversify the user interface so that probabilistic uncertainty analyses can also be performed by DETRA. The most common probability distributions are available: uniform, truncated Gaussian and triangular. The corresponding logarithmic distributions are also available. All input data related to a considered case can be varied, although this option is seldom needed. The calculated output values can be selected as monitored values at certain simulation time points defined by the user. The results of a sensitivity run are immediately available after simulation as graphical presentations. These outcomes are distributions generated for varied parameters, density functions of monitored parameters and complementary cumulative density functions (CCDF). An application considered in connection with this work was the estimation of contamination of milk caused by radioactive deposition of caesium (10 kBq (Cs-137)/m²). The multi-sequence calculation model applied consisted of a pasture modelling part and a dormant-season modelling part. These two sequences were linked periodically, simulating the realistic practice of caretaking of domestic animals in Finland. The most important parameters were varied in this exercise. 
The diversification of the DETRA user interface performed here seems to provide an easily
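A bare-bones version of such a probabilistic run, sampling a triangular distribution and reading off a complementary cumulative probability, might look like this. The linear transfer factor is an invented stand-in, not the DETRA compartment model, and the threshold is arbitrary.

```python
import numpy as np

def ccdf(samples, x):
    """Empirical complementary cumulative distribution: P(X > x)."""
    return np.mean(np.asarray(samples) > x)

rng = np.random.default_rng(2)
# Hypothetical transfer parameter varied over a triangular distribution
# (left, mode, right), one of the distribution options described above.
param = rng.triangular(0.5, 1.0, 2.0, size=10_000)
deposition = 10.0                 # kBq (Cs-137)/m^2, the deposition considered
milk_conc = deposition * param    # toy linear transfer model
p_exceed = ccdf(milk_conc, 15.0)  # probability the milk value exceeds 15
```

Repeating this for every varied parameter yields exactly the output distributions, density functions, and CCDFs the sensitivity run reports.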

  16. Allosteric activation of midazolam CYP3A5 hydroxylase activity by icotinib - Enhancement by ketoconazole.

    Science.gov (United States)

    Zhuang, XiaoMei; Zhang, TianHong; Yue, SiJia; Wang, Juan; Luo, Huan; Zhang, YunXia; Li, Zheng; Che, JinJing; Yang, HaiYing; Li, Hua; Zhu, MingShe; Lu, Chuang

    2016-12-01

    Icotinib (ICO), a novel small molecule and a tyrosine kinase inhibitor, was developed and approved recently in China for non-small cell lung cancer. During screening for CYP inhibition potential in human liver microsomes (HLM), heterotropic activation toward CYP3A5 was revealed. Activation by icotinib was observed with CYP3A-mediated midazolam hydroxylase activity in HLM (∼40% over the baseline) or recombinant human CYP3A5 (rhCYP3A5) (∼70% over the baseline), but not in the other major CYPs including rhCYP3A4. When co-incubated with selective CYP3A4 inhibitor CYP3cide or monoclonal human CYP3A4 inhibitory antibody in HLM, the activation was extended to ∼60%, suggesting CYP3A5 might be the isozyme involved. Further, the relative activation was enhanced to ∼270% in rhCYP3A5 in the presence of ketoconazole. The activation was substrate and pathway dependent and observed only in the formation of 1'-OH-midazolam, and not 4-OH-midazolam, 6β-OH-testosterone, or oxidized nifedipine. The activation requires the presence of cytochrome b5 and it is only observed in the liver microsomes of dogs, monkeys, and humans, but not in rats and mice. Kinetic analyses of 1'-OH-midazolam formation showed that ICO increased the V max values in HLM and rhCYP3A5 with no significant changes in K m values. By adding CYP3cide with ICO to the incubation, the V max values increased 2-fold over the CYP3cide control. Addition of ketoconazole with ICO alone or ICO plus CYP3cide resulted in an increase in V max values and decrease in K m values compared to their controls. This phenomenon may be attributed to a new mechanism of CYP3A5 heterotropic activation, which warrants further investigation. Copyright © 2016 Elsevier Inc. All rights reserved.
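The kinetic pattern described, Vmax rising while Km stays essentially unchanged, is the classic signature of V-type heterotropic activation, and fitting Michaelis-Menten parameters to velocity data is straightforward. The substrate/velocity data below are synthetic, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    """Michaelis-Menten rate law: v = Vmax * S / (Km + S)."""
    return Vmax * S / (Km + S)

# Synthetic noiseless data generated with true Vmax = 2.0, Km = 4.0.
S = np.array([0.5, 1, 2, 4, 8, 16, 32, 64], dtype=float)
v = michaelis_menten(S, 2.0, 4.0)
(Vmax_fit, Km_fit), _ = curve_fit(michaelis_menten, S, v, p0=(1.0, 1.0))
# An activator of the kind described would raise Vmax_fit across incubations
# with increasing activator concentration while Km_fit stays roughly constant.
```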

  17. An IEEE 802.11 EDCA Model with Support for Analysing Networks with Misbehaving Nodes

    Directory of Open Access Journals (Sweden)

    Szott Szymon

    2010-01-01

    Full Text Available We present a novel model of IEEE 802.11 EDCA with support for analysing networks with misbehaving nodes. In particular, we consider backoff misbehaviour. Firstly, we verify the model by extensive simulation analysis and by comparing it to three other IEEE 802.11 models. The results show that our model behaves satisfactorily and outperforms other widely acknowledged models. Secondly, a comparison with simulation results in several scenarios with misbehaving nodes proves that our model performs correctly for these scenarios. The proposed model can, therefore, be considered as an original contribution to the area of EDCA models and backoff misbehaviour.

  18. Toward a Common Language for Measuring Patient Mobility in the Hospital: Reliability and Construct Validity of Interprofessional Mobility Measures.

    Science.gov (United States)

    Hoyer, Erik H; Young, Daniel L; Klein, Lisa M; Kreif, Julie; Shumock, Kara; Hiser, Stephanie; Friedman, Michael; Lavezza, Annette; Jette, Alan; Chan, Kitty S; Needham, Dale M

    2018-02-01

    The lack of common language among interprofessional inpatient clinical teams is an important barrier to achieving inpatient mobilization. In The Johns Hopkins Hospital, the Activity Measure for Post-Acute Care (AM-PAC) Inpatient Mobility Short Form (IMSF), also called "6-Clicks," and the Johns Hopkins Highest Level of Mobility (JH-HLM) are part of routine clinical practice. The measurement characteristics of these tools when used by both nurses and physical therapists for interprofessional communication or assessment are unknown. The purposes of this study were to evaluate the reliability and minimal detectable change of AM-PAC IMSF and JH-HLM when completed by nurses and physical therapists and to evaluate the construct validity of both measures when used by nurses. A prospective evaluation of a convenience sample was used. The test-retest reliability and the interrater reliability of AM-PAC IMSF and JH-HLM for inpatients in the neuroscience department (n = 118) of an academic medical center were evaluated. Each participant was independently scored twice by a team of 2 nurses and 1 physical therapist; a total of 4 physical therapists and 8 nurses participated in reliability testing. In a separate inpatient study protocol (n = 69), construct validity was evaluated via an assessment of convergent validity with other measures of function (grip strength, Katz Activities of Daily Living Scale, 2-minute walk test, 5-times sit-to-stand test) used by 5 nurses. The test-retest reliability values (intraclass correlation coefficients) for physical therapists and nurses were 0.91 and 0.97, respectively, for AM-PAC IMSF and 0.94 and 0.95, respectively, for JH-HLM. The interrater reliability values (intraclass correlation coefficients) between physical therapists and nurses were 0.96 for AM-PAC IMSF and 0.99 for JH-HLM. 
Construct validity (Spearman correlations) ranged from 0.25 between JH-HLM and right-hand grip strength to 0.80 between AM-PAC IMSF and the Katz Activities of
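The reliability statistics reported above follow standard formulas: a one-way random-effects ICC computed from an n-subjects-by-k-raters table, and a minimal detectable change MDC95 = 1.96 × SEM × √2 with SEM = SD × √(1 − ICC). A minimal stdlib-Python sketch with hypothetical ratings (not the study's data):

```python
import math

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) from an n-subjects x k-raters table."""
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    # Between-subjects and within-subject mean squares
    ms_between = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    ms_within = sum((x - m) ** 2
                    for r, m in zip(ratings, subj_means) for x in r) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

def mdc95(scores, icc):
    """Minimal detectable change: MDC95 = 1.96 * SEM * sqrt(2), SEM = SD * sqrt(1 - ICC)."""
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))
    sem = sd * math.sqrt(1 - icc)
    return 1.96 * sem * math.sqrt(2)

# Hypothetical paired ratings (each row: [rater1, rater2] for one patient)
pairs = [[12, 13], [18, 18], [9, 10], [22, 21], [15, 16], [11, 11]]
icc = icc_oneway(pairs)
print(round(icc, 3))
print(round(mdc95([r[0] for r in pairs], icc), 2))
```

With near-identical ratings the ICC lands close to 1, as in the study; the MDC then tells a clinician how large a score change must be to exceed measurement noise.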

  19. Analysing, Interpreting, and Testing the Invariance of the Actor-Partner Interdependence Model

    Directory of Open Access Journals (Sweden)

    Gareau, Alexandre

    2016-09-01

    Although in recent years researchers have begun to utilize dyadic data analyses such as the actor-partner interdependence model (APIM), certain limitations to the applicability of these models still exist. Given the complexity of APIMs, most researchers will often use observed scores to estimate the model's parameters, which can significantly limit and underestimate statistical results. The aim of this article is to highlight the importance of conducting a confirmatory factor analysis (CFA) of equivalent constructs between dyad members (i.e., measurement equivalence/invariance; ME/I). Different steps for merging CFA and APIM procedures will be detailed in order to shed light on new and integrative methods.
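The CFA and ME/I machinery needs an SEM package, but the observed-score APIM that the article cautions about reduces to a regression of each member's outcome on their own predictor (actor effect) and their partner's predictor (partner effect). A stdlib-Python sketch on hypothetical dyad data (all names and numbers are illustrative, not from the article):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def apim_actor_partner(x_self, x_partner, y):
    """OLS fit of y = b0 + actor*x_self + partner*x_partner via normal equations."""
    X = [[1.0, xs, xp] for xs, xp in zip(x_self, x_partner)]
    XtX = [[sum(row[r] * row[c] for row in X) for c in range(3)] for r in range(3)]
    Xty = [sum(row[r] * yi for row, yi in zip(X, y)) for r in range(3)]
    return solve(XtX, Xty)  # [intercept, actor effect, partner effect]

# Hypothetical dyads: outcome built as 1 + 0.5*own + 0.3*partner's predictor
own     = [1, 2, 3, 4, 5, 6]
partner = [2, 1, 4, 3, 6, 5]
y = [1 + 0.5 * a + 0.3 * p for a, p in zip(own, partner)]
b0, a_eff, p_eff = apim_actor_partner(own, partner, y)
print(round(a_eff, 3), round(p_eff, 3))
```

Because the toy outcome is generated without noise, the fit recovers the actor (0.5) and partner (0.3) effects exactly; with real data, measurement error in observed scores is precisely what biases these estimates, which motivates the CFA-based approach.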

  20. The Development of Coordinated Communication in Infants at Heightened Risk for Autism Spectrum Disorder

    Science.gov (United States)

    Parladé, Meaghan V.; Iverson, Jana M.

    2015-01-01

    This study evaluated the extent to which developmental change in the coordination of social communication in early infancy differentiates children eventually diagnosed with ASD from those not likely to develop the disorder. A prospective longitudinal design was used to compare 9 infants at heightened risk for ASD (HR) later diagnosed with ASD to 13 HR infants with language delay, 28 HR infants with no diagnosis, and 30 low-risk infants. Hierarchical Linear Modeling (HLM) analyses revealed that ASD infants exhibited significantly slower growth in coordinations overall and in gestures coordinated with vocalizations, even relative to HR infants with language delay. Disruption in the development of gesture-vocalization coordinations may produce cascading effects that negatively impact later social and linguistic development. PMID:25689930
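A full HLM fits individual growth curves with random effects; a crude two-stage stand-in, estimating an OLS slope per infant and then averaging slopes within diagnostic groups, conveys the idea of comparing growth rates. Visit ages and coordination counts below are hypothetical, not the study's data:

```python
def ols_slope(ts, ys):
    """Least-squares slope of ys on ts for one infant's sessions."""
    n = len(ts)
    mt = sum(ts) / n
    my = sum(ys) / n
    return sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / \
           sum((t - mt) ** 2 for t in ts)

def mean_growth(group):
    """Stage 2: average the per-infant slopes within a diagnostic group."""
    slopes = [ols_slope(ts, ys) for ts, ys in group]
    return sum(slopes) / len(slopes)

ages = [9, 12, 15, 18]  # months (hypothetical visit schedule)
asd_group = [(ages, [1, 2, 2, 3]), (ages, [0, 1, 1, 2])]   # slower growth
nd_group  = [(ages, [1, 3, 5, 7]), (ages, [2, 4, 7, 9])]   # faster growth
print(round(mean_growth(asd_group), 3), round(mean_growth(nd_group), 3))
```

The HLM version improves on this by weighting infants by the precision of their individual slopes and by allowing unbalanced visit schedules, but the group contrast being tested is the same.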

  1. Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts.

    Energy Technology Data Exchange (ETDEWEB)

    Sevougian, S. David; Freeze, Geoffrey A.; Gardner, William Payton; Hammond, Glenn Edward; Mariner, Paul

    2014-09-01

    directly, rather than through simplified abstractions. It also allows for complex representations of the source term, e.g., the explicit representation of many individual waste packages (i.e., meter-scale detail of an entire waste emplacement drift). This report fulfills the Generic Disposal System Analysis Work Package Level 3 Milestone - Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts (M3FT-14SN0808032).

  2. Process of Integrating Screening and Detailed Risk-based Modeling Analyses to Ensure Consistent and Scientifically Defensible Results

    International Nuclear Information System (INIS)

    Buck, John W.; McDonald, John P.; Taira, Randal Y.

    2002-01-01

    To support cleanup and closure of these tanks, modeling is performed to understand and predict potential impacts to human health and the environment. Pacific Northwest National Laboratory developed a screening tool for the United States Department of Energy, Office of River Protection that estimates the long-term human health risk, from a strategic planning perspective, posed by potential tank releases to the environment. This tool is being conditioned to more detailed model analyses to ensure consistency between studies and to provide scientific defensibility. Once the conditioning is complete, the system will be used to screen alternative cleanup and closure strategies. The integration of screening and detailed models provides consistent analyses, efficiencies in resources, and positive feedback between the various modeling groups. This approach of conditioning a screening methodology to more detailed analyses provides decision-makers with timely and defensible information and increases confidence in the results on the part of clients, regulators, and stakeholders

  3. Public Park Spaces as a Platform to Promote Healthy Living: Introducing a HealthPark Concept.

    Science.gov (United States)

    Arena, Ross; Bond, Samantha; O'Neill, Robert; Laddu, Deepika R; Hills, Andrew P; Lavie, Carl J; McNeil, Amy

    The concept of Healthy Living (HL) as a primary medical intervention continues to gain traction, and rightfully so. Being physically active, consuming a nutritious diet, not smoking and maintaining an appropriate body weight constitute the HL polypill, the foundation of HL medicine (HLM). Daily use of the HL polypill, working toward optimal dosages, portends profound health benefits, substantially reducing the risk of chronic disease [i.e., cardiovascular disease (CVD), pulmonary disease, metabolic syndromes, certain cancers, etc.] and associated adverse health consequences. To be effective and proactive, our healthcare system must rethink where its primary intervention, HLM, is delivered. Waiting for individuals to come to the traditional outpatient setting is an ineffective approach as poor lifestyle habits are typically well established by the time care is initiated. Ideally, HLM should be delivered where individuals live, work and go to school, promoting immersion in a culture of health and wellness. To this end, there is a growing interest in the use of public parks as a platform to promote the adoption of HL behaviors. The current perspectives paper provides a brief literature review on the use of public parks for HL interventions and introduces a new HealthPark model being developed in Chicago. Copyright © 2017 Elsevier Inc. All rights reserved.

  4. School Health Promotion Policies and Adolescent Risk Behaviors in Israel: A Multilevel Analysis

    Science.gov (United States)

    Tesler, Riki; Harel-Fisch, Yossi; Baron-Epel, Orna

    2016-01-01

    Background: Health promotion policies targeting risk-taking behaviors are being implemented across schools in Israel. This study identified the most effective components of these policies influencing cigarette smoking and alcohol consumption among adolescents. Methods: Logistic hierarchical linear model (HLM) analysis of data for 5279 students in…

  5. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    NARCIS (Netherlands)

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input
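For contrast with the NLIO approach, the traditional fixed-coefficient IO impact calculation is Δx = (I − A)⁻¹ Δf, where A holds the technical coefficients and Δf is the new final demand. A sketch for a hypothetical two-sector economy (coefficients are illustrative only):

```python
def leontief_impact(A, df):
    """Total output change (I - A)^-1 * df for a 2-sector economy."""
    a, b = 1 - A[0][0], -A[0][1]
    c, d = -A[1][0], 1 - A[1][1]
    det = a * d - b * c                      # direct 2x2 inverse of (I - A)
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return [inv[0][0] * df[0] + inv[0][1] * df[1],
            inv[1][0] * df[0] + inv[1][1] * df[1]]

# Hypothetical technical coefficients: sector 1 = tourism, sector 2 = supplies
A = [[0.1, 0.2],
     [0.3, 0.1]]
extra_demand = [100.0, 0.0]   # e.g. new visitor spending in sector 1
dx = leontief_impact(A, extra_demand)
print([round(v, 1) for v in dx])   # output exceeds the demand shock (multiplier > 1)
```

The fixed A is exactly what the NLIO model relaxes: when input use responds to prices, the coefficients adjust to the shock, typically damping the multipliers that the traditional model reports.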

  6. Biophysical analysis of a lethal laminin alpha-1 mutation reveals altered self-interaction

    KAUST Repository

    Patel, Trushar R.; Nikodemus, Denise; Besong, Tabot M.D.; Reuten, Raphael; Meier, Markus; Harding, Stephen E.; Winzor, Donald J.; Koch, Manuel; Stetefeld, Jörg

    2015-01-01

    Laminins are key basement membrane molecules that influence several biological activities and are linked to a number of diseases. They are secreted as heterotrimeric proteins consisting of one α, one β, and one γ chain, followed by their assembly into a polymer-like sheet at the basement membrane. Using sedimentation velocity, dynamic light scattering, and surface plasmon resonance experiments, we studied self-association of three laminin (LM) N-terminal fragments α-1 (hLM α-1 N), α-5 (hLM α-5 N) and β-3 (hLM β-3 N) originating from the short arms of the human laminin αβγ heterotrimer. Corresponding studies of the hLM α-1 N C49S mutant, equivalent to the larval lethal C56S mutant in zebrafish, have shown that this mutation causes enhanced self-association behavior, an observation that provides a plausible explanation for the inability of laminin bearing this mutation to fulfill functional roles in vivo, and hence for the deleterious pathological consequences of the mutation on lens function.

  8. Comparison of optical-model and Lane-model analyses of sub-Coulomb protons on 92,94Zr

    International Nuclear Information System (INIS)

    Schrils, R.; Flynn, D.S.; Hershberger, R.L.; Gabbard, F.

    1979-01-01

    Accurate proton elastic-scattering cross sections were measured with enriched targets of 92,94Zr from E_p = 2.0 to 6.5 MeV. The elastic-scattering cross sections, together with absorption cross sections, were analyzed with a Lane model which employed the optical potential of Johnson et al. The resulting parameters were compared with those obtained with a single-channel optical model and negligible differences were found. Significant differences between the 92Zr and 94Zr real diffusenesses resulted from the inclusion of the (p,p) data in the analyses

  9. Inverse analyses of effective diffusion parameters relevant for a two-phase moisture model of cementitious materials

    DEFF Research Database (Denmark)

    Addassi, Mouadh; Johannesson, Björn; Wadsö, Lars

    2018-01-01

    Here we present an inverse analysis approach to determining the two-phase moisture transport properties relevant to concrete durability modeling. The proposed moisture transport model was based on a continuum approach with two truly separate equations for the liquid and gas phase being connected...... test, and (iv) capillary suction test. Mass change over time, as obtained from the drying test, the two different cup test intervals and the capillary suction test, was used to obtain the effective diffusion parameters using the proposed inverse analysis approach. The moisture properties obtained......
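The inverse-analysis idea, choosing transport parameters so simulated mass-change curves match the measured ones, can be illustrated with a deliberately simplified one-parameter drying model and a grid search. The actual two-phase continuum model is far richer; everything below is a hypothetical stand-in:

```python
import math

def mass_curve(t, k, m0=10.0, m_inf=8.0):
    """Simplified single-exponential drying curve (stand-in for the full model)."""
    return m_inf + (m0 - m_inf) * math.exp(-k * t)

def fit_k(times, masses, k_grid):
    """Inverse analysis by grid search: pick the k minimizing squared misfit."""
    def sse(k):
        return sum((mass_curve(t, k) - m) ** 2 for t, m in zip(times, masses))
    return min(k_grid, key=sse)

# Hypothetical drying-test data generated with k = 0.30 (noise-free)
times = [0, 1, 2, 4, 8, 16]
masses = [mass_curve(t, 0.30) for t in times]
k_grid = [i / 100 for i in range(1, 101)]
print(fit_k(times, masses, k_grid))
```

In the paper's setting the forward model is a coupled two-phase simulation and the search covers several effective diffusion parameters at once, but the structure (simulate, compare to the measured mass change, update) is the same.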

  10. Chromosomal locations of members of a family of novel endogenous human retroviral genomes

    International Nuclear Information System (INIS)

    Horn, T.M.; Huebner, K.; Croce, C.; Callahan, R.

    1986-01-01

    Human cellular DNA contains two distinguishable families of retroviral related sequences. One family shares extensive nucleotide sequence homology with infectious mammalian type C retroviral genomes. The other family contains major regions of homology with the pol genes of infectious type A and B and avian type C and D retroviral genomes. Analysis of the human recombinant clone HLM-2 has shown that the pol gene in the latter family is located within an endogenous proviral genome. The authors show that the proviral genome in HLM-2 and the related recombinant clone HLM-25 are located, respectively, on human chromosomes 1 and 5. Other related proviral genomes are located on chromosomes 7, 8, 11, 14, and 17

  11. Nurses' intention to leave: critically analyse the theory of reasoned action and organizational commitment model.

    Science.gov (United States)

    Liou, Shwu-Ru

    2009-01-01

    To systematically analyse the Organizational Commitment model and Theory of Reasoned Action and determine concepts that can better explain nurses' intention to leave their job. The Organizational Commitment model and Theory of Reasoned Action have been proposed and applied to understand intention to leave and turnover behaviour, which are major contributors to nursing shortage. However, the appropriateness of applying these two models in nursing was not analysed. Three main criteria of a useful model were used for the analysis: consistency in the use of concepts, testability and predictability. Both theories use concepts consistently. Concepts in the Theory of Reasoned Action are defined broadly whereas they are operationally defined in the Organizational Commitment model. Predictability of the Theory of Reasoned Action is questionable whereas the Organizational Commitment model can be applied to predict intention to leave. A model was proposed based on this analysis. Organizational commitment, intention to leave, work experiences, job characteristics and personal characteristics can be concepts for predicting nurses' intention to leave. Nursing managers may consider nurses' personal characteristics and experiences to increase their organizational commitment and enhance their intention to stay. Empirical studies are needed to test and cross-validate the re-synthesized model for nurses' intention to leave their job.

  12. Meaningful Effect Sizes, Intraclass Correlations, and Proportions of Variance Explained by Covariates for Planning Two- and Three-Level Cluster Randomized Trials of Social and Behavioral Outcomes.

    Science.gov (United States)

    Dong, Nianbo; Reinke, Wendy M; Herman, Keith C; Bradshaw, Catherine P; Murray, Desiree W

    2016-09-30

    There is a need for greater guidance regarding design parameters and empirical benchmarks for social and behavioral outcomes to inform assumptions in the design and interpretation of cluster randomized trials (CRTs). We calculated empirical reference values for critical research design parameters associated with statistical power for children's social and behavioral outcomes, including effect sizes, intraclass correlations (ICCs), and proportions of variance explained by a covariate at different levels (R²). Children from kindergarten to Grade 5 in the samples from four large CRTs evaluating the effectiveness of two classroom- and two school-level preventive interventions. Teacher ratings of students' social and behavioral outcomes using the Teacher Observation of Classroom Adaptation-Checklist and the Social Competence Scale-Teacher. Two types of effect size benchmarks were calculated: (1) normative expectations for change and (2) policy-relevant demographic performance gaps. The ICCs and R² were calculated using two-level hierarchical linear modeling (HLM), where students are nested within schools, and three-level HLM, where students were nested within classrooms, and classrooms were nested within schools. Comprehensive tables of benchmarks and ICC values are provided to inform prevention researchers in interpreting the effect sizes of interventions and in conducting power analyses for designing CRTs of children's social and behavioral outcomes. The discussion also provides a demonstration of how to use the parameter reference values provided in this article to calculate the sample size for two- and three-level CRT designs. © The Author(s) 2016.
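Two of the design parameters discussed, the ICC and the covariate R², reduce to simple ratios of variance components once the unconditional and conditional HLMs are fitted: ICC = τ00/(τ00 + σ²), and R² at a level is one minus the ratio of conditional to unconditional variance at that level. A sketch with hypothetical variance components:

```python
def icc_two_level(tau00, sigma2):
    """Proportion of outcome variance lying between clusters (schools)."""
    return tau00 / (tau00 + sigma2)

def r_squared(var_unconditional, var_conditional):
    """Variance explained by a covariate at one level of the model."""
    return 1 - var_conditional / var_unconditional

# Hypothetical variance components from an unconditional two-level HLM
tau00, sigma2 = 0.05, 0.45            # between-school, within-school
icc = icc_two_level(tau00, sigma2)
r2_l2 = r_squared(0.05, 0.02)         # after adding a school-level covariate
print(round(icc, 2), round(r2_l2, 2))
```

In power formulas for CRTs, a larger ICC inflates the design effect, while a larger covariate R² shrinks the residual variance, so both enter the required-sample-size calculation directly.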

  13. Factors influencing the occupational injuries of physical therapists in Taiwan: A hierarchical linear model approach.

    Science.gov (United States)

    Tao, Yu-Hui; Wu, Yu-Lung; Huang, Wan-Yun

    2017-01-01

    The evidence literature suggests that physical therapy practitioners are subjected to a high probability of acquiring work-related injuries, but only a few studies have specifically investigated Taiwanese physical therapy practitioners. This study was conducted to determine the relationships among individual and group hospital-level factors that contribute to the medical expenses for the occupational injuries of physical therapy practitioners in Taiwan. Physical therapy practitioners in Taiwan with occupational injuries were selected from the 2013 National Health Insurance Research Databases (NHIRD). The age, gender, job title, hospitals attributes, and outpatient data of physical therapy practitioners who sustained an occupational injury in 2013 were obtained with SAS 9.3. SPSS 20.0 and HLM 7.01 were used to conduct descriptive and hierarchical linear model analyses, respectively. The job title of physical therapy practitioners at the individual level and the hospital type at the group level exert positive effects on per person medical expenses. Hospital hierarchy moderates the individual-level relationships of age and job title with the per person medical expenses. Considering that age, job title, and hospital hierarchy affect medical expenses for the occupational injuries of physical therapy practitioners, we suggest strengthening related safety education and training and elevating the self-awareness of the risk of occupational injuries of physical therapy practitioners to reduce and prevent the occurrence of such injuries.

  14. ENEA infrastructures toward the LFR development

    International Nuclear Information System (INIS)

    Tarantino, M.; Agostini, P.; Del Nevo, A.; Di Piazza, I.; Rozzia, D.

    2013-01-01

    ENEA has one of the most relevant EU R&D infrastructures for HLM technological development, and it is strongly involved in the main research programs worldwide supporting the development of sub-critical (MYRRHA) and critical lead-cooled reactors (ALFRED). In this frame, a large experimental program ranging from HLM thermal-hydraulics to large-scale experiments has been implemented.

  15. A multilevel model of organizational health culture and the effectiveness of health promotion.

    Science.gov (United States)

    Lin, Yea-Wen; Lin, Yueh-Ysen

    2014-01-01

    Organizational health culture is a health-oriented core characteristic of the organization that is shared by all members. It is effective in regulating health-related behavior for employees and could therefore influence the effectiveness of health promotion efforts among organizations and employees. This study applied a multilevel analysis to verify the effects of organizational health culture on the organizational and individual effectiveness of health promotion. At the organizational level, we investigated the effect of organizational health culture on the organizational effectiveness of health promotion. At the individual level, we adopted a cross-level analysis to determine if organizational health culture affects employee effectiveness through the mediating effect of employee health behavior. The study setting consisted of the workplaces of various enterprises. We selected 54 enterprises in Taiwan and surveyed 20 full-time employees from each organization, for a total sample of 1011 employees. We developed the Organizational Health Culture Scale to measure employee perceptions and aggregated the individual data to formulate organization-level data. Organizational effectiveness of health promotion included four dimensions: planning effectiveness, production, outcome, and quality, which were measured by scale or objective indicators. The Health Promotion Lifestyle Scale was adopted for the measurement of health behavior. Employee effectiveness was measured subjectively in three dimensions: self-evaluated performance, altruism, and happiness. Following the calculation of descriptive statistics, hierarchical linear modeling (HLM) was used to test the multilevel hypotheses. Organizational health culture had a significant effect on the planning effectiveness (β = .356) and production (β = .359) of health promotion. In addition, results of cross-level moderating-effect analysis by HLM demonstrated that the effects of organizational health culture on three dimensions of

  16. Modular 3-D solid finite element model for fatigue analyses of a PWR coolant system

    International Nuclear Information System (INIS)

    Garrido, Oriol Costa; Cizelj, Leon; Simonovski, Igor

    2012-01-01

    Highlights: ► A 3-D model of a reactor coolant system for fatigue usage assessment. ► The performed simulations are a heat transfer and stress analyses. ► The main results are the expected ranges of fatigue loadings. - Abstract: The extension of operational licenses of second generation pressurized water reactor (PWR) nuclear power plants depends to a large extent on the analyses of fatigue usage of the reactor coolant pressure boundary. The reliable estimation of the fatigue usage requires detailed thermal and stress analyses of the affected components. Analyses, based upon the in-service transient loads should be compared to the loads analyzed at the design stage. The thermal and stress transients can be efficiently analyzed using the finite element method. This requires that a 3-D solid model of a given system is discretized with finite elements (FE). The FE mesh density is crucial for both the accuracy and the cost of the analysis. The main goal of the paper is to propose a set of computational tools which assist a user in a deployment of modular spatial FE model of main components of a typical reactor coolant system, e.g., pipes, pressure vessels and pumps. The modularity ensures that the components can be analyzed individually or in a system. Also, individual components can be meshed with different mesh densities, as required by the specifics of the particular transient studied. For optimal accuracy, all components are meshed with hexahedral elements with quadratic interpolation. The performance of the model is demonstrated with simulations performed with a complete two-loop PWR coolant system (RCS). Heat transfer analysis and stress analysis for a complete loading and unloading cycle of the RCS are performed. The main results include expected ranges of fatigue loading for the pipe lines and coolant pump components under the given conditions.

  17. Student Engagement and Classroom Variables in Improving Mathematics Achievement

    Science.gov (United States)

    Park, So-Young

    2005-01-01

    The study explored how much student engagement and classroom variables predicted student achievement in mathematics. Since students were nested within a classroom, hierarchical linear modeling (HLM) was employed for the analysis. The results indicated that student engagement had positive effects on student academic growth per month in math after…

  18. Are Principal Background and School Processes Related to Teacher Job Satisfaction? A Multilevel Study Using Schools and Staffing Survey 2003-04

    Science.gov (United States)

    Shen, Jianping; Leslie, Jeffrey M.; Spybrook, Jessaca K.; Ma, Xin

    2012-01-01

    Using nationally representative samples for public school teachers and principals, the authors inquired into whether principal background and school processes are related to teacher job satisfaction. Employing hierarchical linear modeling (HLM), the authors were able to control for background characteristics at both the teacher and school levels.…

  19. A theoretical model for analysing gender bias in medicine.

    Science.gov (United States)

    Risberg, Gunilla; Johansson, Eva E; Hamberg, Katarina

    2009-08-03

    During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change by facts only. We suggest consciousness-raising activities and continuous reflections on gender attitudes among students, teachers, researchers and decision-makers.

  20. Growth Modeling with Non-Ignorable Dropout: Alternative Analyses of the STAR*D Antidepressant Trial

    Science.gov (United States)

    Muthén, Bengt; Asparouhov, Tihomir; Hunter, Aimee; Leuchter, Andrew

    2011-01-01

    This paper uses a general latent variable framework to study a series of models for non-ignorable missingness due to dropout. Non-ignorable missing data modeling acknowledges that missingness may depend on not only covariates and observed outcomes at previous time points as with the standard missing at random (MAR) assumption, but also on latent variables such as values that would have been observed (missing outcomes), developmental trends (growth factors), and qualitatively different types of development (latent trajectory classes). These alternative predictors of missing data can be explored in a general latent variable framework using the Mplus program. A flexible new model uses an extended pattern-mixture approach where missingness is a function of latent dropout classes in combination with growth mixture modeling using latent trajectory classes. A new selection model allows not only an influence of the outcomes on missingness, but allows this influence to vary across latent trajectory classes. Recommendations are given for choosing models. The missing data models are applied to longitudinal data from STAR*D, the largest antidepressant clinical trial in the U.S. to date. Despite the importance of this trial, STAR*D growth model analyses using non-ignorable missing data techniques have not been explored until now. The STAR*D data are shown to feature distinct trajectory classes, including a low class corresponding to substantial improvement in depression, a minority class with a U-shaped curve corresponding to transient improvement, and a high class corresponding to no improvement. The analyses provide a new way to assess drug efficiency in the presence of dropout. PMID:21381817

  1. Genetic analyses using GGE model and a mixed linear model approach, and stability analyses using AMMI bi-plot for late-maturity alpha-amylase activity in bread wheat genotypes.

    Science.gov (United States)

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Fofana, Bourlaye

    2017-06-01

    Low falling number and discounting grain when it is downgraded in class are the consequences of excessive late-maturity α-amylase activity (LMAA) in bread wheat (Triticum aestivum L.). Grain expressing high LMAA produces poorer quality bread products. To effectively breed for low LMAA, it is necessary to understand what genes control it and how they are expressed, particularly when genotypes are grown in different environments. In this study, an International Collection (IC) of 18 spring wheat genotypes and another set of 15 spring wheat cultivars adapted to South Dakota (SD), USA were assessed to characterize the genetic component of LMAA over 5 and 13 environments, respectively. The data were analysed using a GGE model with a mixed linear model approach and stability analysis was presented using an AMMI bi-plot on R software. All estimated variance components and their proportions to the total phenotypic variance were highly significant for both sets of genotypes, which were validated by the AMMI model analysis. Broad-sense heritability for LMAA was higher in SD adapted cultivars (53%) compared to that in IC (49%). Significant genetic effects and stability analyses showed some genotypes, e.g. 'Lancer', 'Chester' and 'LoSprout' from IC, and 'Alsen', 'Traverse' and 'Forefront' from SD cultivars could be used as parents to develop new cultivars expressing low levels of LMAA. Stability analysis using an AMMI bi-plot revealed that 'Chester', 'Lancer' and 'Advance' were the most stable across environments, while in contrast, 'Kinsman', 'Lerma52' and 'Traverse' exhibited the lowest stability for LMAA across environments.
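An AMMI bi-plot requires a singular value decomposition of the genotype-by-environment interaction matrix; a much simpler classical stability measure, the Finlay-Wilkinson regression of each genotype's values on the environment means, can still rank genotypes by environmental sensitivity (slopes far from 1 indicate instability). The LMAA values below are hypothetical, not the study's data:

```python
def fw_slope(genotype_vals, env_means):
    """Finlay-Wilkinson regression slope of one genotype on environment means."""
    n = len(env_means)
    me = sum(env_means) / n
    mg = sum(genotype_vals) / n
    num = sum((e - me) * (g - mg) for e, g in zip(env_means, genotype_vals))
    den = sum((e - me) ** 2 for e in env_means)
    return num / den

# Hypothetical LMAA across 4 environments for 3 of the named genotypes
trials = {
    "Chester": [1.0, 1.2, 1.4, 1.6],   # changes smoothly across environments
    "Kinsman": [0.5, 1.5, 0.8, 2.2],   # swings strongly with the environment
    "Lancer":  [0.9, 1.1, 1.3, 1.5],
}
env_means = [sum(v[i] for v in trials.values()) / len(trials) for i in range(4)]
slopes = {g: round(fw_slope(v, env_means), 2) for g, v in trials.items()}
print(slopes)
```

With these toy numbers 'Kinsman' has the slope furthest from 1, mirroring the abstract's finding that it was among the least stable genotypes, while 'Chester' and 'Lancer' track the environment average more closely.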

  2. An LP-model to analyse economic and ecological sustainability on Dutch dairy farms: model presentation and application for experimental farm "de Marke"

    NARCIS (Netherlands)

    Calker, van K.J.; Berentsen, P.B.M.; Boer, de I.J.M.; Giesen, G.W.J.; Huirne, R.B.M.

    2004-01-01

    Farm level modelling can be used to determine how farm management adjustments and environmental policy affect different sustainability indicators. In this paper indicators were included in a dairy farm LP (linear programming)-model to analyse the effects of environmental policy and management

  3. Poverty and involuntary engagement stress responses: examining the link to anxiety and aggression within low-income families.

    Science.gov (United States)

    Wolff, Brian C; Santiago, Catherine DeCarlo; Wadsworth, Martha E

    2009-05-01

    Families living with the burdens of poverty-related stress are at risk for developing a range of psychopathology. The present study examines the year-long prospective relationships among poverty-related stress, involuntary engagement stress response (IESR) levels, and anxiety symptoms and aggression in an ethnically diverse sample of 98 families (300 individual family members) living at or below 150% of the US federal poverty line. Hierarchical Linear Modeling (HLM) moderator model analyses provided strong evidence that IESR levels moderated the influence of poverty-related stress on anxiety symptoms and provided mixed evidence for the same interaction effect on aggression. Higher IESR levels, a proxy for physiological stress reactivity, worsened the impact of stress on symptoms. Understanding how poverty-related stress and involuntary stress responses affect psychological functioning has implications for efforts to prevent or reduce psychopathology, particularly anxiety, among individuals and families living in poverty.
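The moderation finding can be read off a fitted interaction model: with symptoms = b0 + b1·stress + b2·IESR + b3·stress·IESR, the simple slope of symptoms on stress at a given IESR level is b1 + b3·IESR. A sketch with hypothetical standardized coefficients (not the study's estimates):

```python
def simple_slope(b_stress, b_interact, iesr):
    """Slope of symptoms on stress at a given IESR level (moderation model)."""
    return b_stress + b_interact * iesr

# Hypothetical coefficients from y = b0 + b1*stress + b2*IESR + b3*stress*IESR
b1, b3 = 0.20, 0.15
low, high = -1.0, 1.0   # IESR at -1 SD and +1 SD (standardized)
print(round(simple_slope(b1, b3, low), 2), round(simple_slope(b1, b3, high), 2))
```

A positive b3, as sketched here, is what "higher IESR levels worsened the impact of stress" means: the stress-symptom slope is steeper for highly reactive family members.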

  4. Model evaluation of the phytoextraction potential of heavy metal hyperaccumulators and non-hyperaccumulators

    Energy Technology Data Exchange (ETDEWEB)

    Liang, H.-M. [Agricultural Biotechnology Research Center, Academia Sinica, 128 Section 2, Academia Road, Taipei, Taiwan 11529, Taiwan (China); Lin, T.-H. [Department of Statistics, National Taipei University, Taiwan (China); Chiou, J.-M. [Institute of Statistical Science, Academia Sinica, Taiwan (China); Yeh, K.-C., E-mail: kcyeh@gate.sinica.edu.t [Agricultural Biotechnology Research Center, Academia Sinica, 128 Section 2, Academia Road, Taipei, Taiwan 11529, Taiwan (China)

    2009-06-15

    Evaluation of the remediation ability of zinc/cadmium in hyper- and non-hyperaccumulator plant species through greenhouse studies is limited. To bridge the gap between greenhouse studies and field applications for phytoextraction, we used published data to examine the partitioning of heavy metals between plants and soil (defined as the bioconcentration factor). We compared the remediation ability of the Zn/Cd hyperaccumulators Thlaspi caerulescens and Arabidopsis halleri and the non-hyperaccumulators Nicotiana tabacum and Brassica juncea using a hierarchical linear model (HLM). A recursive algorithm was then used to evaluate how many harvest cycles were required to clean a contaminated site to meet Taiwan Environmental Protection Agency regulations. Despite the high bioconcentration factor of both hyperaccumulators, metal removal was still limited because of the plants' small biomass. Simulation with N. tabacum and the Cadmium model suggests further study and development of plants with high biomass and improved phytoextraction potential for use in environmental cleanup. - A quantitative solution enables the evaluation of Zn/Cd phytoextraction.
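The recursive harvest-cycle evaluation can be sketched as a mass-balance loop: each crop reaches a tissue concentration of BCF × (current soil concentration), the harvest exports that metal, and the soil concentration is updated until it meets the regulatory limit. All numbers below are hypothetical, not from the study:

```python
def harvest_cycles(soil_conc, limit, bcf, biomass_kg, soil_kg, max_cycles=10000):
    """Count harvests needed to bring soil metal below a regulatory limit.

    soil_conc and limit in mg/kg; bcf = plant/soil concentration ratio;
    biomass_kg = dry plant biomass per cycle; soil_kg = mass of treated soil.
    """
    cycles = 0
    while soil_conc > limit and cycles < max_cycles:
        plant_conc = bcf * soil_conc               # mg metal per kg plant
        removed_mg = plant_conc * biomass_kg       # metal exported in the harvest
        soil_conc -= removed_mg / soil_kg          # soil concentration drops
        cycles += 1
    return cycles

# Hypothetical: hyperaccumulator with a high BCF but small per-cycle biomass
print(harvest_cycles(soil_conc=30.0, limit=20.0, bcf=10.0,
                     biomass_kg=500.0, soil_kg=2.0e6))
```

Even with a BCF of 10, the small biomass relative to the soil mass leaves the per-cycle removal fraction tiny, so the loop reports on the order of 160 harvests, echoing the abstract's conclusion that removal is biomass-limited.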

  5. Model evaluation of the phytoextraction potential of heavy metal hyperaccumulators and non-hyperaccumulators.

    Science.gov (United States)

    Liang, Hong-Ming; Lin, Ting-Hsiang; Chiou, Jeng-Min; Yeh, Kuo-Chen

    2009-06-01

    Evaluation of the remediation ability of zinc/cadmium in hyper- and non-hyperaccumulator plant species through greenhouse studies is limited. To bridge the gap between greenhouse studies and field applications for phytoextraction, we used published data to examine the partitioning of heavy metals between plants and soil (defined as the bioconcentration factor). We compared the remediation ability of the Zn/Cd hyperaccumulators Thlaspi caerulescens and Arabidopsis halleri and the non-hyperaccumulators Nicotiana tabacum and Brassica juncea using a hierarchical linear model (HLM). A recursive algorithm was then used to evaluate how many harvest cycles were required to clean a contaminated site to meet Taiwan Environmental Protection Agency regulations. Despite the high bioconcentration factor of both hyperaccumulators, metal removal was still limited because of the plants' small biomass. Simulation with N. tabacum and the Cadmium model suggests further study and development of plants with high biomass and improved phytoextraction potential for use in environmental cleanup.

  6. Model evaluation of the phytoextraction potential of heavy metal hyperaccumulators and non-hyperaccumulators

    International Nuclear Information System (INIS)

    Liang, H.-M.; Lin, T.-H.; Chiou, J.-M.; Yeh, K.-C.

    2009-01-01

    Evaluation of the remediation ability of zinc/cadmium in hyper- and non-hyperaccumulator plant species through greenhouse studies is limited. To bridge the gap between greenhouse studies and field applications for phytoextraction, we used published data to examine the partitioning of heavy metals between plants and soil (defined as the bioconcentration factor). We compared the remediation ability of the Zn/Cd hyperaccumulators Thlaspi caerulescens and Arabidopsis halleri and the non-hyperaccumulators Nicotiana tabacum and Brassica juncea using a hierarchical linear model (HLM). A recursive algorithm was then used to evaluate how many harvest cycles were required to clean a contaminated site to meet Taiwan Environmental Protection Agency regulations. Despite the high bioconcentration factor of both hyperaccumulators, metal removal was still limited because of the plants' small biomass. Simulation with N. tabacum and the Cadmium model suggests further study and development of plants with high biomass and improved phytoextraction potential for use in environmental cleanup. - A quantitative solution enables the evaluation of Zn/Cd phytoextraction.

  7. Kinetic analyses and mathematical modeling of primary photochemical and photoelectrochemical processes in plant photosystems

    NARCIS (Netherlands)

    Vredenberg, W.J.

    2011-01-01

    In this paper the model and simulation of primary photochemical and photo-electrochemical reactions in dark-adapted intact plant leaves is presented. A descriptive algorithm has been derived from analyses of variable chlorophyll a fluorescence and P700 oxidation kinetics upon excitation with

  8. Biosphere Modeling and Analyses in Support of Total System Performance Assessment

    International Nuclear Information System (INIS)

    Tappen, J. J.; Wasiolek, M. A.; Wu, D. W.; Schmitt, J. F.; Smith, A. J.

    2002-01-01

    The Nuclear Waste Policy Act of 1982 established the obligations of and the relationship between the U.S. Environmental Protection Agency (EPA), the U.S. Nuclear Regulatory Commission (NRC), and the U.S. Department of Energy (DOE) for the management and disposal of high-level radioactive wastes. In 1985, the EPA promulgated regulations that included a definition of performance assessment that did not consider potential dose to a member of the general public. This definition would influence the scope of activities conducted by DOE in support of the total system performance assessment program until 1995. The release of a National Academy of Sciences (NAS) report on the technical basis for a Yucca Mountain-specific standard provided the impetus for the DOE to initiate activities that would consider the attributes of the biosphere, i.e. that portion of the earth where living things, including man, exist and interact with the environment around them. The evolution of NRC and EPA Yucca Mountain-specific regulations, originally proposed in 1999, was critical to the development and integration of biosphere modeling and analyses into the total system performance assessment program. These proposed regulations initially differed in the conceptual representation of the receptor of interest to be considered in assessing performance. The publication in 2001 of final regulations, in which the NRC adopted the standard, will permit the continued improvement and refinement of biosphere modeling and analyses activities in support of assessment activities.

  9. Biosphere Modeling and Analyses in Support of Total System Performance Assessment

    International Nuclear Information System (INIS)

    Jeff Tappen; M.A. Wasiolek; D.W. Wu; J.F. Schmitt

    2001-01-01

    The Nuclear Waste Policy Act of 1982 established the obligations of and the relationship between the U.S. Environmental Protection Agency (EPA), the U.S. Nuclear Regulatory Commission (NRC), and the U.S. Department of Energy (DOE) for the management and disposal of high-level radioactive wastes. In 1985, the EPA promulgated regulations that included a definition of performance assessment that did not consider potential dose to a member of the general public. This definition would influence the scope of activities conducted by DOE in support of the total system performance assessment program until 1995. The release of a National Academy of Sciences (NAS) report on the technical basis for a Yucca Mountain-specific standard provided the impetus for the DOE to initiate activities that would consider the attributes of the biosphere, i.e. that portion of the earth where living things, including man, exist and interact with the environment around them. The evolution of NRC and EPA Yucca Mountain-specific regulations, originally proposed in 1999, was critical to the development and integration of biosphere modeling and analyses into the total system performance assessment program. These proposed regulations initially differed in the conceptual representation of the receptor of interest to be considered in assessing performance. The publication in 2001 of final regulations, in which the NRC adopted the standard, will permit the continued improvement and refinement of biosphere modeling and analyses activities in support of assessment activities.

  10. Towards an Industrial Application of Statistical Uncertainty Analysis Methods to Multi-physical Modelling and Safety Analyses

    International Nuclear Information System (INIS)

    Zhang, Jinzhao; Segurado, Jacobo; Schneidesch, Christophe

    2013-01-01

    Since the 1980s, Tractebel Engineering (TE) has been developing and applying a multi-physical modelling and safety analysis capability, based on a code package consisting of the best estimate 3D neutronic (PANTHER), system thermal hydraulic (RELAP5), core sub-channel thermal hydraulic (COBRA-3C), and fuel thermal mechanic (FRAPCON/FRAPTRAN) codes. A series of methodologies has been developed to perform and license reactor safety analysis and core reload design, based on the deterministic bounding approach. Following recent trends in research and development as well as in industrial applications, TE has been working since 2010 towards applying statistical sensitivity and uncertainty analysis methods to multi-physical modelling and licensing safety analyses. In this paper, the TE multi-physical modelling and safety analysis capability is first described, followed by the proposed TE best estimate plus statistical uncertainty analysis method (BESUAM). The chosen statistical sensitivity and uncertainty analysis methods (non-parametric order statistic method or bootstrap) and tool (DAKOTA) are then presented, followed by some preliminary results of their application to FRAPCON/FRAPTRAN simulation of the OECD RIA fuel rod codes benchmark and RELAP5/MOD3.3 simulation of THTF tests. (authors)
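
The non-parametric order statistic sizing rule referred to above is commonly the first-order Wilks formula; the sketch below (an assumption about which variant is meant) finds the smallest number of code runs whose maximum bounds a given quantile with given confidence:

```python
# A sketch of the first-order Wilks (non-parametric order statistic) sizing
# rule commonly used in best-estimate-plus-uncertainty methods like the one
# described above: the smallest number of code runs n such that the maximum
# of n samples bounds the beta-quantile with confidence gamma.

def wilks_sample_size(beta=0.95, gamma=0.95):
    """Smallest n with 1 - beta**n >= gamma (one-sided, first order)."""
    n = 1
    while 1.0 - beta ** n < gamma:
        n += 1
    return n

print(wilks_sample_size())  # the classic 95%/95% answer: 59 runs
```

The 59-run result is why many statistical safety-analysis methodologies perform on the order of 60 to 100 code runs per scenario.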

  11. On the Multilevel Nature of Meta-Analysis: A Tutorial, Comparison of Software Programs, and Discussion of Analytic Choices.

    Science.gov (United States)

    Pastor, Dena A; Lazowski, Rory A

    2018-01-01

    The term "multilevel meta-analysis" is encountered not only in applied research studies, but in multilevel resources comparing traditional meta-analysis to multilevel meta-analysis. In this tutorial, we argue that the term "multilevel meta-analysis" is redundant since every meta-analysis can be formulated as a special kind of multilevel model. To clarify the multilevel nature of meta-analysis, the four standard meta-analytic models are presented using multilevel equations and fit to an example data set using four software programs: two specific to meta-analysis (metafor in R and SPSS macros) and two specific to multilevel modeling (PROC MIXED in SAS and HLM). The same parameter estimates are obtained across programs, underscoring that all meta-analyses are multilevel in nature. Despite the equivalent results, not all software programs are alike, and differences are noted in the output provided and estimators available. This tutorial also recasts distinctions made in the literature between traditional and multilevel meta-analysis as differences between meta-analytic choices, not between meta-analytic models, and provides guidance to inform choices in estimators, significance tests, moderator analyses, and modeling sequence. The extent to which the software programs allow flexibility with respect to these decisions is noted, with metafor emerging as the most favorable program reviewed.
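
The point that a random-effects meta-analysis is a two-level model, y_i = mu + u_i + e_i with u_i varying between studies and e_i within them, can be sketched with the DerSimonian-Laird moment estimator, one of the estimators programs of this kind offer. The study data below are toy values for illustration:

```python
# A minimal sketch of a random-effects meta-analysis as a two-level model:
# y_i = mu + u_i + e_i, with u_i ~ N(0, tau2) between studies and
# e_i ~ N(0, v_i) within studies. DerSimonian-Laird moment estimator.

def dersimonian_laird(effects, variances):
    """Return (pooled_mu, tau2) for a random-effects meta-analysis."""
    k = len(effects)
    w = [1.0 / v for v in variances]                        # fixed-effect weights
    mu_fe = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - mu_fe) ** 2 for wi, y in zip(w, effects))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                      # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]            # random-effects weights
    mu_re = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    return mu_re, tau2

effects = [0.6, 0.1, 0.9, 0.2]        # toy study effect sizes
variances = [0.04, 0.02, 0.09, 0.03]  # their sampling variances
mu, tau2 = dersimonian_laird(effects, variances)
print(round(mu, 3), round(tau2, 3))
```

Setting tau2 = 0 recovers the fixed-effect model, which is exactly the "special kind of multilevel model" equivalence the tutorial exploits.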

  12. Nitrogen injection in stagnant liquid metal. Eulerian-Eulerian and VOF calculations by fluent

    International Nuclear Information System (INIS)

    Pena, A.; Esteban, G.A.

    2004-01-01

    High power spallation sources are devices that can be very useful in different fields, such as medicine, materials science, and Accelerator Driven Systems (ADS). These devices use Heavy Liquid Metals (HLM) as the spallation target, and HLM are also considered as coolants for the large heat loads produced by the process. Fast breeder reactors, advanced nuclear reactors, and future designs of fusion reactors also consider HLM as targets or coolants. Gas injection into liquid metal flows can enhance coolant circulation. The large density difference between the gas and the liquid metal is a big challenge for the multiphase models implemented in Computational Fluid Dynamics (CFD) codes, and the changing shape of the bubbles adds further difficulty to the calculations. An experiment on N2 injection into stagnant Lead-Bismuth eutectic (Pb-Bi), available at Forschungszentrum Rossendorf e.V (FZR) in Germany, was used in one of the work-packages of the ASCHLIM project (EU contract number FIKW-CT-2001-80121). In this paper, calculations made by the UPV/EHU (University of the Basque Country) compare measured data with numerical results obtained using the CFD code FLUENT and two multiphase models: the Eulerian-Eulerian and the Volume of Fluid (VOF). Interpretation of the measured velocities was difficult because some parameters, such as bubble trajectory and bubble shape, were not known, since direct optical methods cannot be used as they are in water experiments. (author)

  13. A theoretical model for analysing gender bias in medicine

    Directory of Open Access Journals (Sweden)

    Johansson Eva E

    2009-08-01

    During the last decades, research has reported unwarranted differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model, gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are taken as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change through facts alone. We suggest consciousness-raising activities and continuous reflection on gender attitudes among students, teachers, researchers and decision-makers.

  14. Comparison of plasma input and reference tissue models for analysing [(11)C]flumazenil studies

    NARCIS (Netherlands)

    Klumpers, Ursula M. H.; Veltman, Dick J.; Boellaard, Ronald; Comans, Emile F.; Zuketto, Cassandra; Yaqub, Maqsood; Mourik, Jurgen E. M.; Lubberink, Mark; Hoogendijk, Witte J. G.; Lammertsma, Adriaan A.

    2008-01-01

    A single-tissue compartment model with plasma input is the established method for analysing [(11)C]flumazenil ([(11)C]FMZ) studies. However, arterial cannulation and measurement of metabolites are time-consuming. Therefore, a reference tissue approach is appealing, but this approach has not been

  15. Challenges of Analysing Gene-Environment Interactions in Mouse Models of Schizophrenia

    Directory of Open Access Journals (Sweden)

    Peter L. Oliver

    2011-01-01

    The modelling of neuropsychiatric disease using the mouse has provided a wealth of information regarding the relationship between specific genetic lesions and behavioural endophenotypes. However, it is becoming increasingly apparent that synergy between genetic and nongenetic factors is a key feature of these disorders that must also be taken into account. With the inherent limitations of retrospective human studies, experiments in mice have begun to tackle this complex association, combining well-established behavioural paradigms and quantitative neuropathology with a range of environmental insults. The conclusions from this work have been varied, due in part to a lack of standardised methodology, although most have illustrated that phenotypes related to disorders such as schizophrenia are consistently modified. Far fewer studies, however, have attempted to generate a “two-hit” model, whereby the consequences of a pathogenic mutation are analysed in combination with environmental manipulation such as prenatal stress. This significant, yet relatively new, approach is beginning to produce valuable new models of neuropsychiatric disease. Focussing on prenatal and perinatal stress models of schizophrenia, this review discusses the current progress in this field, and highlights important issues regarding the interpretation and comparative analysis of such complex behavioural data.

  16. Development of steady-state model for MSPT and detailed analyses of receiver

    Science.gov (United States)

    Yuasa, Minoru; Sonoda, Masanori; Hino, Koichi

    2016-05-01

    Molten salt parabolic trough system (MSPT) uses molten salt as the heat transfer fluid (HTF) instead of synthetic oil. A demonstration plant of the MSPT was constructed by Chiyoda Corporation and Archimede Solar Energy in Italy in 2013. Chiyoda Corporation developed a steady-state model for predicting the theoretical behavior of the demonstration plant. The model calculates the concentrated solar power and heat loss using ray tracing of incident solar light and finite element modeling of the thermal energy transferred into the medium. This report describes the verification of the model using test data from the demonstration plant, detailed analyses of the relation between flow rate and temperature difference on the metal tube of the receiver, and the effect of defocus angle on the concentrated power rate, for solar collector assembly (SCA) development. The model is accurate to within 2.0% systematic error and 4.2% random error. The relationships between flow rate and temperature difference on the metal tube and the effect of defocus angle on the concentrated power rate are shown.

  17. Using Weather Data and Climate Model Output in Economic Analyses of Climate Change

    Energy Technology Data Exchange (ETDEWEB)

    Auffhammer, M.; Hsiang, S. M.; Schlenker, W.; Sobel, A.

    2013-06-28

    Economists are increasingly using weather data and climate model output in analyses of the economic impacts of climate change. This article introduces a set of weather data sets and climate models that are frequently used, discusses the most common mistakes economists make in using these products, and identifies ways to avoid these pitfalls. We first provide an introduction to weather data, including a summary of the types of datasets available, and then discuss five common pitfalls that empirical researchers should be aware of when using historical weather data as explanatory variables in econometric applications. We then provide a brief overview of climate models and discuss two common and significant errors often made by economists when climate model output is used to simulate the future impacts of climate change on an economic outcome of interest.

  18. Development and application of model RAIA uranium on-line analyser

    International Nuclear Information System (INIS)

    Dong Yanwu; Song Yufen; Zhu Yaokun; Cong Peiyuan; Cui Songru

    1999-01-01

    The working principle, structure, adjustment and application of the model RAIA on-line analyser are reported. The performance of this instrument is reliable: for an identical sample, the signal fluctuation over four months of continuous monitoring is less than ±1%. An appropriate sample cell length is chosen according to the required measurement range. The precision of the measurement process is better than 1% at 100 g/L U, and the detection limit is 50 mg/L. The uranium concentration in the process stream can be displayed automatically and printed at any time. The analyser outputs a 4-20 mA current signal proportional to the uranium concentration, a significant step towards continuous process control and computer management.
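
The 4-20 mA output mentioned above follows the usual live-zero current-loop convention; the exact mapping and full-scale value below are assumptions for illustration:

```python
# Hypothetical sketch of a 4-20 mA output scaling like the one described
# above: a current loop maps the measured uranium concentration linearly
# onto the 4-20 mA range. The full-scale value is illustrative, not from
# the paper.

def concentration_to_current(c_u, c_full_scale=100.0):
    """Map uranium concentration (g/L) to a 4-20 mA loop current."""
    c_u = min(max(c_u, 0.0), c_full_scale)   # clamp to the measuring range
    return 4.0 + 16.0 * c_u / c_full_scale   # 0 g/L -> 4 mA, full scale -> 20 mA

print(concentration_to_current(0.0))    # 4.0
print(concentration_to_current(50.0))   # 12.0
print(concentration_to_current(100.0))  # 20.0
```

The live zero (4 mA rather than 0 mA at zero concentration) lets downstream control hardware distinguish a genuine zero reading from a broken loop.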

  19. Analysis of Student and School Level Variables Related to Mathematics Self-Efficacy Level Based on PISA 2012 Results for China-Shanghai, Turkey, and Greece

    Science.gov (United States)

    Usta, H. Gonca

    2016-01-01

    This study aims to analyze the student and school level variables that affect students' self-efficacy levels in mathematics in China-Shanghai, Turkey, and Greece based on PISA 2012 results. In line with this purpose, the hierarchical linear regression model (HLM) was employed. The interschool variability is estimated at approximately 17% in…
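
The roughly 17% interschool variability reported above is, in HLM terms, the intraclass correlation from an unconditional (null) model; below is a minimal sketch with illustrative variance components (not the study's actual estimates):

```python
# A sketch of how an "interschool variability of about 17%" figure is
# typically obtained in a two-level HLM: the intraclass correlation (ICC)
# from an unconditional (null) model. Variance values are illustrative.

def intraclass_correlation(between_school_var, within_school_var):
    """ICC = share of total variance that lies between schools."""
    return between_school_var / (between_school_var + within_school_var)

# tau00 = between-school variance, sigma2 = within-school (student) variance
print(intraclass_correlation(0.17, 0.83))
```

An ICC of this size is the usual justification for fitting a multilevel model at all: ignoring it would understate the standard errors of school-level effects.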

  20. Tests and analyses of 1/4-scale upgraded nine-bay reinforced concrete basement models

    International Nuclear Information System (INIS)

    Woodson, S.C.

    1983-01-01

    Two nine-bay prototype structures, a flat plate and a two-way slab with beams, were designed in accordance with the 1977 ACI code. A 1/4-scale model of each prototype was constructed, upgraded with timber posts, and statically tested. The timber post placement scheme was developed using yield-line analyses, punching shear evaluation, and moment-thrust interaction diagrams of the concrete slab sections. The flat plate model and the slab-with-beams model withstood approximate overpressures of 80 and 40 psi, respectively, indicating that the required hardness may be achieved through simple upgrading techniques.

  1. A model finite-element to analyse the mechanical behavior of a PWR fuel rod

    International Nuclear Information System (INIS)

    Galeao, A.C.N.R.; Tanajura, C.A.S.

    1988-01-01

    A model to analyse the mechanical behavior of a PWR fuel rod is presented. We draw attention to the phenomena of pellet-pellet and pellet-cladding contact, taking advantage of an elastic model which includes the effects of thermal gradients, cladding internal and external pressures, swelling and initial relocation. The contact problem gives rise to a variational formulation which employs Lagrangian multipliers. An iterative scheme is constructed and the finite element method is applied to obtain the numerical solution. Some results and comments are presented to examine the performance of the model. (author) [pt

  2. Development of CFD fire models for deterministic analyses of the cable issues in the nuclear power plant

    International Nuclear Information System (INIS)

    Lin, C.-H.; Ferng, Y.-M.; Pei, B.-S.

    2009-01-01

    Additional fire barriers for electrical cables are required in the nuclear power plants (NPPs) in Taiwan due to the separation requirements of Appendix R to 10 CFR Part 50. Risk-informed fire analysis (RIFA) may provide a viable method to resolve these fire barrier issues. However, it is necessary to perform fire scenario analyses so that RIFA can quantitatively determine the risk related to the fire barrier wrap. CFD fire models are therefore proposed in this paper to support RIFA in resolving these issues. Three typical fire scenarios are selected to assess the present CFD models. Compared with experimental data and other models' simulations, the present calculated results show reasonable agreement, indicating that the present CFD fire models can provide the quantitative information needed by RIFA analyses to relax the cable wrap requirements for NPPs.

  3. The Evaluation of Bivariate Mixed Models in Meta-analyses of Diagnostic Accuracy Studies with SAS, Stata and R.

    Science.gov (United States)

    Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc

    2018-05-01

    Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also model implementation in the software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity - the two outcomes of interest in meta-analyses of diagnostic accuracy studies - utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three meta-analysis sizes (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations of less than two percentage points. In contrast, proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma, shows convergence problems. The random effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification together with convergence robustness should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach. Schattauer GmbH.

  4. Control designs and stability analyses for Helly’s car-following model

    Science.gov (United States)

    Rosas-Jaimes, Oscar A.; Quezada-Téllez, Luis A.; Fernández-Anaya, Guillermo

    Car-following is an approach to understanding traffic behavior restricted to pairs of cars, identifying a “leader” moving in front of a “follower”, which is assumed not to overtake the leader. From the first attempts to formulate, through these models, the way individual cars are affected on a road, linear differential equations were suggested by authors like Pipes or Helly. These expressions represent such phenomena quite well, even though they have been superseded by more recent and accurate models. In this paper, however, we show that those early formulations have some properties that are not fully reported, presenting the different ways in which they can be expressed and analyzing their stability behavior. Pipes’ model can be extended to what is known as Helly’s model, which is viewed as a more precise model for emulating this microscopic approach to traffic. Having established some convenient forms of expression, two control designs are suggested herein. These regulation schemes are complemented with their respective stability analyses, which reflect some important properties with implications for real driving. It is significant that these linear designs are very easy to understand and to implement, including the important features related to safety and comfort.

  5. Mechanical properties of structural materials in HLM

    International Nuclear Information System (INIS)

    Moisa, A. E.; Valeca, S.; Pitigoi, V.

    2016-01-01

    Generation IV nuclear systems are currently in the design stage, which is one reason candidate materials are being tested. The purpose of this paper is to present tensile tests of candidate materials. The tests were performed at a temperature of 500°C in air, on a Walter + Bie mechanical testing machine using its furnace, and in a molten lead environment on an Instron testing machine equipped with an attached lead testing device. The mechanical parameters of tensile strength and yield strength were determined for 316L steel, a candidate material for the LFR-type reactor vessel, and microstructural analysis of the fracture surface was performed by electron microscopy. The paper presents the main components and operating procedure of the testing system, and the results of the tensile tests in molten lead. (authors)

  6. Estimating required information size by quantifying diversity in random-effects model meta-analyses

    DEFF Research Database (Denmark)

    Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper

    2009-01-01

    an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. RESULTS: We devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between-trial and within-trial variability. D2 is illustrated and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D2 >= I2, for all meta-analyses. CONCLUSION: We conclude that D2 seems a better alternative than I2 to consider model variation in any random-effects meta-analysis.
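
The definition of D2 above can be checked numerically: under a DerSimonian-Laird fit, D2 is the relative reduction in the pooled-estimate variance when switching from the random-effects to the fixed-effect model, and D2 >= I2 as the abstract claims. The study data below are illustrative toy values:

```python
# A sketch of the diversity measure D2 under a standard DerSimonian-Laird
# setup: D2 = relative reduction in the variance of the pooled estimate
# when moving from a random-effects to a fixed-effect model. The claim
# D2 >= I2 is checked numerically. Study data are illustrative.

effects = [0.6, 0.1, 0.9, 0.2]        # toy effect sizes
variances = [0.04, 0.02, 0.09, 0.03]  # their within-trial variances

k = len(effects)
w = [1.0 / v for v in variances]
mu_fe = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
q = sum(wi * (y - mu_fe) ** 2 for wi, y in zip(w, effects))  # Cochran's Q
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)                           # DL estimate

var_fixed = 1.0 / sum(w)                                     # Var(mu), fixed-effect
var_random = 1.0 / sum(1.0 / (v + tau2) for v in variances)  # Var(mu), random-effects

d2 = (var_random - var_fixed) / var_random  # diversity
i2 = max(0.0, (q - (k - 1)) / q)            # inconsistency

print(round(d2, 3), round(i2, 3))
assert d2 >= i2  # holds for any such data, as the abstract proves
```

Because var_random >= var_fixed whenever tau2 > 0, D2 directly measures how much the random-effects assumption inflates the required information size.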

  7. Uncertainty and sensitivity analyses for age-dependent unavailability model integrating test and maintenance

    International Nuclear Information System (INIS)

    Kančev, Duško; Čepin, Marko

    2012-01-01

    Highlights: ► Application of an analytical unavailability model integrating T and M, ageing, and test strategy. ► Ageing data uncertainty propagation on system level assessed via Monte Carlo simulation. ► Uncertainty impact is growing with the extension of the surveillance test interval. ► Calculated system unavailability dependence on two different sensitivity study ageing databases. ► System unavailability sensitivity insights regarding specific groups of BEs as test intervals extend. - Abstract: Interest in operational lifetime extension of existing nuclear power plants is growing. Consequently, plant life management programs, considering safety component ageing, are being developed and employed. Ageing represents a gradual degradation of the physical properties and functional performance of different components, consequently implying their reduced availability. Analyses being made in the direction of nuclear power plant lifetime extension are based upon component ageing management programs. On the other hand, the large uncertainties of the ageing parameters as well as the uncertainties associated with most reliability data collections are widely acknowledged. This paper addresses the uncertainty and sensitivity analyses conducted utilizing a previously developed age-dependent unavailability model, integrating effects of test and maintenance activities, for a selected stand-by safety system in a nuclear power plant. The most important problem is the lack of data concerning the effects of ageing as well as the relatively high uncertainty associated with these data, which would correspond to more detailed modelling of ageing. A standard Monte Carlo simulation was coded for the purpose of this paper and utilized to assess the propagation of component ageing parameter uncertainty on the system level. The obtained results from the uncertainty analysis indicate the extent to which the uncertainty of the selected

  8. Fenproporex N-dealkylation to amphetamine--enantioselective in vitro studies in human liver microsomes as well as enantioselective in vivo studies in Wistar and Dark Agouti rats.

    Science.gov (United States)

    Kraemer, Thomas; Pflugmann, Thomas; Bossmann, Michael; Kneller, Nicole M; Peters, Frank T; Paul, Liane D; Springer, Dietmar; Staack, Roland F; Maurer, Hans H

    2004-09-01

    Fenproporex (FP) is known to be N-dealkylated to R(-)-amphetamine (AM) and S(+)-amphetamine. Involvement of the polymorphic cytochrome P450 (CYP) isoform CYP2D6 in the metabolism of such amphetamine precursors is discussed controversially in the literature. In this study, the human hepatic CYPs involved in FP dealkylation were identified using recombinant CYPs and human liver microsomes (HLM). These studies revealed that not only CYP2D6 but also CYP1A2, CYP2B6 and CYP3A4 catalyzed this metabolic reaction for both enantiomers, with a slight preference for the S(+)-enantiomer. Formation of amphetamine was not significantly changed by quinidine and was not different in poor metabolizer HLM compared to pooled HLM. For the in vivo experiments, blood levels of R(-)-amphetamine and S(+)-amphetamine formed after administration of FP were determined in female Dark Agouti rats (fDA), a model of the human CYP2D6 poor metabolizer phenotype (PM), male Dark Agouti rats (mDA), an intermediate model, and male Wistar rats (WI), a model of the human CYP2D6 extensive metabolizer phenotype. Analysis of the plasma samples showed that fDA exhibited significantly higher plasma levels of both amphetamine enantiomers compared to WI. Corresponding plasma levels in mDA were between those in fDA and WI. Furthermore, pretreatment of WI with the CYP2D inhibitor quinine resulted in significantly higher amphetamine plasma levels, which did not significantly differ from those in fDA. The in vivo studies suggested that CYP2D6 is not crucial to the N-dealkylation but to another metabolic step, most probably the ring hydroxylation. Further studies are necessary to elucidate the role of CYP2D6 in FP hydroxylation.

  9. Predicting Change in Parenting Stress across Early Childhood: Child and Maternal Factors

    Science.gov (United States)

    Williford, Amanda P.; Calkins, Susan D.; Keane, Susan P.

    2007-01-01

    This study examined maternal parenting stress in a sample of 430 boys and girls including those at risk for externalizing behavior problems. Children and their mothers were assessed when the children were ages 2, 4, and 5. Hierarchical linear modeling (HLM) was used to examine stability of parenting stress across early childhood and to examine…

  10. Lottery promotions at the point-of-sale in Ontario, Canada.

    Science.gov (United States)

    Planinac, Lynn C; Cohen, Joanna E; Reynolds, Jennifer; Robinson, Daniel J; Lavack, Anne; Korn, David

    2011-06-01

    We documented the extent of point-of-sale (POS) lottery promotions in Ontario, Canada and the relationship between lottery promotions and store and city characteristics. This is the first quantitative study of POS lottery promotions. A total of 366 stores - independent and chain convenience stores, gas stations and grocery stores - were visited across 20 cities in Ontario. Data collectors unobtrusively observed the type of lottery promotions in each store and completed a data collection checklist. A lottery promotion index was created, and hierarchical linear modeling (HLM) was conducted to examine the relationship between the extent of lottery promotions and independent variables such as neighbourhood socioeconomic status and city-level prevalence of lottery ticket purchasing. POS lottery promotions were widespread across Ontario, with the highest level of promotion found in independent convenience stores. In the multivariable HLM analysis, store type was the only independent variable that remained statistically significant. Lottery promotions are extensive at the POS in Ontario. These findings can help initiate discussions about the appropriateness and possible future regulation of this form of advertising.
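The two-level structure described above (stores nested in cities, with city-level covariates) can be sketched as a random-intercept HLM. The sketch below uses simulated data and `statsmodels` MixedLM; the variable names (`promo_index`, `store_type`, `city_ses`) and all numbers are illustrative assumptions, not the study's data or model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_cities, stores_per_city = 20, 18
city = np.repeat(np.arange(n_cities), stores_per_city)
city_effect = rng.normal(0, 0.5, n_cities)[city]   # level-2 (city) variation
store_type = rng.integers(0, 2, city.size)         # 1 = independent convenience store
city_ses = rng.normal(0, 1, n_cities)[city]        # city-level covariate
promo_index = (2.0 + 1.2 * store_type + 0.1 * city_ses
               + city_effect + rng.normal(0, 1, city.size))

df = pd.DataFrame(dict(promo_index=promo_index, store_type=store_type,
                       city_ses=city_ses, city=city))
# Random intercept per city; fixed effects for store type and city SES.
model = smf.mixedlm("promo_index ~ store_type + city_ses", df, groups=df["city"])
result = model.fit()
print(result.params["store_type"])  # should land near the simulated value 1.2
```

A fixed-effects-only regression would understate the standard errors of the city-level covariates here; the random intercept is what makes the city-level inference honest.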

  11. Mental model mapping as a new tool to analyse the use of information in decision-making in integrated water management

    Science.gov (United States)

    Kolkman, M. J.; Kok, M.; van der Veen, A.

    The solution of complex, unstructured problems is hampered by policy controversy and dispute, unused and misused knowledge, project delay and failure, and declining public trust in governmental decisions. Mental model mapping (also called concept mapping) is a technique to analyse these difficulties on a fundamental cognitive level: it can reveal the experiences, perceptions, assumptions, knowledge and subjective beliefs of stakeholders, experts and other actors, and can stimulate communication and learning. This article presents the theoretical framework from which mental model mapping emerges as a promising technique for analysing this type of problem. The framework consists of the problem solving or policy design cycle, the knowledge production or modelling cycle, and the (computer) model as interface between the cycles. The literature attributes difficulties in the decision-making process to communication gaps between decision makers, stakeholders and scientists, and to the construction of knowledge within different paradigm groups, which leads to different interpretations of the problem situation. Analysis of the decision-making process literature indicates that the choices made in all steps of the problem solving cycle are based on an individual decision maker’s frame of perception. This frame, in turn, depends on the mental model residing in the mind of the individual. Thus we identify three levels of awareness on which the decision process can be analysed; this research focuses on the third level. Mental models can be elicited using mapping techniques, so analysing an individual’s mental model can shed light on decision-making problems. The steps of the knowledge production cycle are, in the same manner, ultimately driven by the mental models of the scientist in a specific discipline. Remnants of this mental model can be found in the resulting computer model. The characteristics of unstructured problems (complexity

  12. Comparison of TiO2 photocatalysis, electrochemically assisted Fenton reaction and direct electrochemistry for simulation of phase I metabolism reactions of drugs.

    Science.gov (United States)

    Ruokolainen, Miina; Gul, Turan; Permentier, Hjalmar; Sikanen, Tiina; Kostiainen, Risto; Kotiaho, Tapio

    2016-02-15

    The feasibility of titanium dioxide (TiO2) photocatalysis, electrochemically assisted Fenton reaction (EC-Fenton) and direct electrochemical oxidation (EC) for simulation of phase I metabolism of drugs was studied by comparing the reaction products of buspirone, promazine, testosterone and 7-ethoxycoumarin with phase I metabolites of the same compounds produced in vitro by human liver microsomes (HLM). Reaction products were analysed by UHPLC-MS. TiO2 photocatalysis simulated the in vitro phase I metabolism in HLM more comprehensively than did EC-Fenton or EC. Even though TiO2 photocatalysis, EC-Fenton and EC do not allow comprehensive prediction of phase I metabolism, all three methods produce several important metabolites without the need for demanding purification steps to remove the biological matrix. Importantly, TiO2 photocatalysis produces aliphatic and aromatic hydroxylation products where direct EC fails. Furthermore, TiO2 photocatalysis is an extremely rapid, simple and inexpensive way to generate oxidation products in a clean matrix and the reaction can be simply initiated and quenched by switching the UV lamp on/off.

  13. Graphical models for genetic analyses

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt; Sheehan, Nuala A.

    2003-01-01

    This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas...... of graphical models and genetics. The potential of graphical models is explored and illustrated through a number of example applications where the genetic element is substantial or dominating....

  14. Evaluation of the corrosion, reactivity and chemistry control aspects for the selection of an alternative coolant in the secondary circuit of sodium fast reactors

    International Nuclear Information System (INIS)

    Brissonneau, L.; Simon, N.; Balbaud-Celerier, F.; Courouau, J.L.; Martinelli, L.; Grabon, V.; Capitaine, A.; Conocar, O.; Blat, M.

    2009-01-01

    Full text of publication follows: Sodium Fast Reactors are promising fourth-generation reactors, as they can contribute to reducing uranium resource demand and considerably reduce waste levels thanks to their fast spectrum. However, progress can still be made on investment cost and on safety improvement. To achieve these goals, one innovative solution consists in eliminating the reaction of sodium with water in the steam generators by replacing the sodium in the secondary circuit with another coolant. A work group composed of experts from CEA, Areva NP and EdF was in charge of evaluating several alternative coolants - Heavy Liquid Metals (HLM), nitrate salts and hydroxide mixtures - through a multi-criteria analysis. Three important and strongly correlated criteria for the selection of a coolant are its 'interactions with the structures', its 'chemistry control' and its 'reactivity with fluids'. The assessment, mainly based on the state of the art in the published literature on these points, is detailed in this paper. The mechanisms of corrosion of steels by HLM depend on the oxygen content; for Pb-Bi, they have been modelled for the oxidation and release domains. The corrosion of steels by nitrate salts presents similarities with the oxidation induced by HLM. The highly corrosive hydroxide mixture requires the use of nickel-base alloys, for which oxidation and mass transfer are nevertheless significant. HLM require fine regulation of the oxygen content, through measurement and control systems, both to prevent lead oxide precipitation at high levels and release corrosion at low levels. Nitrate salts decompose into nitrites at sufficiently high temperature, which might induce pressure build-up in the circuit. The hydroxides must be kept under a reducing atmosphere to lower the corrosion rate. Though these coolants are relatively inert to air and water, one of the main drawbacks of HLM and nitrate salts is their reactivity with sodium. Bismuth

  15. Analyses and optimization of Lee propagation model for LoRa 868 MHz network deployments in urban areas

    Directory of Open Access Journals (Sweden)

    Dobrilović Dalibor

    2017-01-01

    In the recent period, fast ICT expansion and the rapid appearance of new technologies have raised the importance of fast and accurate planning and deployment of emerging communication technologies, especially wireless ones. This paper analyses the possible use of the Lee propagation model for the planning, design and management of networks based on LoRa 868 MHz technology. LoRa is a wireless technology that can be deployed in various Internet of Things and Smart City scenarios in urban areas. The analyses are based on a comparison of field measurements with model calculations. Besides analysing the usability of the Lee propagation model, a possible optimization of the model is discussed as well. The research results can be used for accurate design, planning and preparation of high-performance wireless resource management for various Internet of Things and Smart City applications in urban areas based on LoRa or similar wireless technologies. The equipment used for the measurements is based on open-source hardware.
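The comparison and optimization workflow described above can be sketched with the basic Lee area-to-area model, whose textbook form predicts median received power from a reference power at a reference distance plus a log-distance slope. The calibration constants below (suburban-style reference power, slope, 1.6 km reference distance) and the measurement points are illustrative assumptions, not the paper's data; "optimizing" the model amounts to refitting the slope and reference power against field measurements.

```python
import math
import numpy as np

def lee_received_power(d_km, pr0_dbm=-61.7, gamma_db_per_decade=38.4,
                       d0_km=1.6, alpha0_db=0.0):
    """Median received power [dBm] via the basic Lee area-to-area model.

    pr0_dbm and gamma_db_per_decade are environment-specific calibration
    constants (illustrative suburban-style values here, not from the paper);
    alpha0_db lumps the antenna/transmit-power correction factors together.
    """
    return pr0_dbm - gamma_db_per_decade * math.log10(d_km / d0_km) + alpha0_db

# "Optimization" of the model against field data: least-squares fit of the
# slope and reference power on the log-distance axis (hypothetical RSSI data).
d = np.array([0.2, 0.5, 1.0, 2.0, 4.0])          # km
rssi = np.array([-70.0, -85.0, -96.0, -108.0, -119.0])  # dBm
slope, intercept = np.polyfit(np.log10(d / 1.6), rssi, 1)
gamma_fit, pr0_fit = -slope, intercept           # fitted dB/decade and P(d0)
```

The fitted `gamma_fit` and `pr0_fit` can then be plugged back into `lee_received_power` to compare predictions against held-out measurement locations.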

  16. Structural identifiability analyses of candidate models for in vitro Pitavastatin hepatic uptake.

    Science.gov (United States)

    Grandjean, Thomas R B; Chappell, Michael J; Yates, James W T; Evans, Neil D

    2014-05-01

    This paper reviews the application of four different techniques (a version of the similarity transformation approach for autonomous uncontrolled systems, a non-differential input/output observable normal form approach, the characteristic set differential algebra approach and a recent algebraic input/output relationship approach) to determine the structural identifiability of certain in vitro nonlinear pharmacokinetic models. The Organic Anion Transporting Polypeptide (OATP) substrate Pitavastatin is used as a probe on freshly isolated animal and human hepatocytes. Candidate nonlinear compartmental pharmacokinetic models have been derived to characterise the uptake process of Pitavastatin. As a prerequisite to parameter estimation, structural identifiability analyses are performed to establish that all unknown parameters can be identified from the available experimental observations.

  17. Modeling and stress analyses of a normal foot-ankle and a prosthetic foot-ankle complex.

    Science.gov (United States)

    Ozen, Mustafa; Sayman, Onur; Havitcioglu, Hasan

    2013-01-01

    Total ankle replacement (TAR) is a relatively new concept and is becoming more popular for the treatment of ankle arthritis and fractures. Because of the high costs and difficulties of experimental studies, the development of TAR prostheses is progressing very slowly. For this reason, medical imaging techniques such as CT and MRI have become more and more useful. The finite element method (FEM) is a widely used technique to estimate the mechanical behaviour of materials and structures in engineering applications. FEM has also been increasingly applied to biomechanical analyses of human bones, tissues and organs, thanks to developments in both computing capabilities and medical imaging techniques. 3-D finite element models of the human foot and ankle reconstructed from MR and CT images have been investigated by several authors. In this study, the geometries used in modeling a normal and a prosthetic foot and ankle were obtained from a 3D reconstruction of CT images. The segmentation software MIMICS was used to generate the 3D images of the bony structures, soft tissues and prosthesis components of the normal and prosthetic ankle-foot complex. Except for the fused spaces between the adjacent surfaces of the phalanges, the metatarsals, cuneiforms, cuboid, navicular, talus and calcaneus bones, soft tissues and prosthesis components were developed independently to form the foot and ankle complex. The SOLIDWORKS program was used to form the boundary surfaces of all model components, and the solid models were then obtained from these boundary surfaces. The finite element analysis software ABAQUS was used to perform the numerical stress analyses of these models for the balanced standing position. Plantar pressure and von Mises stress distributions of the normal and prosthetic ankles were compared with each other.
There was a peak pressure increase at the 4th metatarsal, first metatarsal and talus bones and a decrease at the intermediate cuneiform and calcaneus bones, in

  18. Analysing bifurcations encountered in numerical modelling of current transfer to cathodes of dc glow and arc discharges

    International Nuclear Information System (INIS)

    Almeida, P G C; Benilov, M S; Cunha, M D; Faria, M J

    2009-01-01

    Bifurcations and/or their consequences are frequently encountered in numerical modelling of current transfer to cathodes of gas discharges, even in apparently simple situations, and a failure to recognize and properly analyse a bifurcation may create difficulties in the modelling and hinder the understanding of the numerical results and the underlying physics. This work is concerned with the analysis of bifurcations that have been encountered in the modelling of steady-state current transfer to cathodes of glow and arc discharges. All basic types of steady-state bifurcations (fold, transcritical, pitchfork) have been identified and analysed. The analysis provides explanations for many results obtained in numerical modelling. In particular, it is shown that dramatic changes in the patterns of current transfer to cathodes of both glow and arc discharges, described by numerical modelling, occur through perturbed transcritical bifurcations of first- and second-order contact. The analysis elucidates why the mode of glow discharge associated with the falling section of the current-voltage characteristic in the solution of von Engel and Steenbeck seems not to appear in 2D numerical modelling, with the subnormal and normal modes appearing instead. A similar effect has been identified and explained in numerical modelling of arc cathodes.
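The perturbed transcritical case mentioned above can be illustrated with the standard normal form x' = r·x - x² + ε: for ε = 0 the branches x = 0 and x = r cross and exchange stability at r = 0, while a small imperfection ε ≠ 0 splits the crossing into two disconnected branches. This is a generic textbook sketch, not the discharge model itself.

```python
import numpy as np

def equilibria_transcritical(r, eps=0.0):
    """Equilibria of the (perturbed) transcritical normal form x' = r*x - x**2 + eps.

    Returns (equilibrium, is_stable) pairs; stability follows from the sign of
    f'(x) = r - 2x. With eps = 0 the branches x = 0 and x = r exchange
    stability at r = 0; eps != 0 perturbs the crossing."""
    roots = np.roots([-1.0, r, eps])                 # -x^2 + r*x + eps = 0
    return [(x, (r - 2.0 * x.real) < 0.0) for x in roots]

# Unperturbed case at r = 1: equilibria x = 0 (unstable) and x = 1 (stable)
pairs = equilibria_transcritical(1.0)
```

Sweeping `r` through zero and tracking which root is stable reproduces the exchange of stability; adding a small `eps` shows how the two modes disconnect, which is the qualitative mechanism invoked for the glow-discharge mode structure.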

  19. Reproduction of the Yucca Mountain Project TSPA-LA Uncertainty and Sensitivity Analyses and Preliminary Upgrade of Models

    Energy Technology Data Exchange (ETDEWEB)

    Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis; Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis

    2016-09-01

    Sandia National Laboratories (SNL) continued evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim, with Versions 9.60.300, 10.5 and 11.1.6, was installed on the cluster head node, and its distributed processing capability was mapped onto the cluster processors. Other supporting software was tested and installed to support TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest Version 11.1. All the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling-case output generated in FY15 based on GoldSim Version 9.60.300, documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  20. Pathway models for analysing and managing the introduction of alien plant pests - an overview and categorization

    NARCIS (Netherlands)

    Douma, J.C.; Pautasso, M.; Venette, R.C.; Robinet, C.; Hemerik, L.; Mourits, M.C.M.; Schans, J.; Werf, van der W.

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative estimates of introduction risks and the effectiveness of management options.

  1. Identification of AKB-48 and 5F-AKB-48 Metabolites in Authentic Human Urine Samples Using Human Liver Microsomes and Time of Flight Mass Spectrometry

    OpenAIRE

    Vikingsson, Svante; Josefsson, Martin; Green, Henrik

    2015-01-01

    The occurrence of structurally related synthetic cannabinoids makes the identification of unique markers of drug intake particularly challenging. The aim of this study was to identify unique and abundant metabolites of AKB-48 and 5F-AKB-48 for toxicological screening in urine. Investigations of authentic urine samples from forensic cases in combination with human liver microsome (HLM) experiments were used for identification of metabolites. HLM incubations of AKB-48 and 5F-AKB-48 along with 3...

  2. BWR Mark III containment analyses using a GOTHIC 8.0 3D model

    International Nuclear Information System (INIS)

    Jimenez, Gonzalo; Serrano, César; Lopez-Alonso, Emma; Molina, M del Carmen; Calvo, Daniel; García, Javier; Queral, César; Zuriaga, J. Vicente; González, Montserrat

    2015-01-01

    Highlights: • The development of a 3D GOTHIC code model of BWR Mark-III containment is described. • Suppression pool modelling is based on the POOLEX STB-20 and STB-16 experimental tests. • LOCA and SBO transients are simulated to verify the behaviour of the 3D GOTHIC model. • A comparison between the 3D GOTHIC model and the MAAP 4.07 model is conducted. • The 3D GOTHIC model accurately reproduces pre-severe-accident conditions. - Abstract: The purpose of this study is to establish a detailed three-dimensional model of the Cofrentes NPP BWR/6 Mark III containment building using the containment code GOTHIC 8.0. This paper presents the model construction, the phenomenology tests conducted and the transients selected for the model evaluation. In order to study the proper settings for the model in the suppression pool, two experiments conducted with the experimental installation POOLEX have been simulated, yielding proper behaviour of the model under different suppression pool phenomenology. In the transient analyses, a Loss of Coolant Accident (LOCA) and a Station Blackout (SBO) transient have been performed. The main results of the simulations of these transients were qualitatively compared with the results obtained from simulations with the MAAP 4.07 Cofrentes NPP model, used by the plant for simulating severe accidents. From this comparison, a verification of the model in terms of pressurization, asymmetric discharges and high-pressure release was obtained. The model has proved to adequately simulate the thermal-hydraulic phenomena which occur in the containment during accident sequences.

  3. A chip-level modeling approach for rail span collapse and survivability analyses

    International Nuclear Information System (INIS)

    Marvis, D.G.; Alexander, D.R.; Dinger, G.L.

    1989-01-01

    A general semiautomated analysis technique has been developed for analyzing rail span collapse and survivability of VLSI microcircuits in high ionizing dose rate radiation environments. Hierarchical macrocell modeling permits analyses at the chip level, and interactive graphical postprocessing provides a rapid visualization of voltage, current and power distributions over an entire VLSIC. The technique is demonstrated for a 16k CMOS/SOI SRAM and a CMOS/SOS 8-bit multiplier. The authors also present an efficient method to treat memory arrays, as well as a three-dimensional integration technique to compute sapphire photoconduction from the design layout.

  4. The Effects of Performance Budgeting and Funding Programs on Graduation Rate in Public Four-Year Colleges and Universities

    Directory of Open Access Journals (Sweden)

    Jung-cheol Shinn

    2004-05-01

    This study was conducted to determine whether states with performance budgeting and funding (PBF) programs had improved the institutional performance of higher education over the five years (1997 through 2001) considered in this study. The First Time in College (FTIC) graduation rate was used as the measure of institutional performance. In this study, the unit of analysis is the institution level and the study population is all public four-or-more-year institutions in the United States. To test PBF program effectiveness, Hierarchical Linear Modeling (HLM) growth analysis was applied. According to the HLM analysis, the growth of graduation rates in states with PBF programs was not greater than in states without PBF programs. The lack of growth in institutional graduation rates, however, does not mean that PBF programs failed to achieve their goals. Policy-makers are advised to sustain PBF programs long enough until such programs bear fruit or are proven ineffective.
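The HLM growth analysis described above can be sketched as a mixed model with repeated yearly observations nested in institutions, where the PBF-by-year interaction carries the policy question (does PBF raise the growth rate of graduation rates?). The data below are simulated under the paper's null finding (zero interaction); all names and numbers are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_inst, n_years = 100, 5
inst = np.repeat(np.arange(n_inst), n_years)
year = np.tile(np.arange(n_years), n_inst)          # 0..4 ~ 1997..2001
pbf = (np.arange(n_inst) < 50).astype(float)[inst]  # half the institutions in PBF states
u0 = rng.normal(0, 5, n_inst)[inst]                 # institution-level intercepts
# Simulated under the reported finding: PBF adds nothing to growth (0.0 * pbf * year)
grad = 45 + u0 + 0.8 * year + 0.0 * pbf * year + rng.normal(0, 2, inst.size)

df = pd.DataFrame(dict(grad=grad, year=year, pbf=pbf, inst=inst))
# Growth model: random intercept per institution; year:pbf is the extra
# annual growth attributable to PBF states.
growth = smf.mixedlm("grad ~ year * pbf", df, groups=df["inst"]).fit()
```

An estimated `year:pbf` coefficient indistinguishable from zero is exactly the "no greater growth in PBF states" conclusion; a full HLM treatment would also allow a random slope on `year` per institution.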

  5. Demographic origins of skewed operational and adult sex ratios: perturbation analyses of two-sex models.

    Science.gov (United States)

    Veran, Sophie; Beissinger, Steven R

    2009-02-01

    Skewed sex ratios - operational (OSR) and adult (ASR) - arise from sexual differences in reproductive behaviours and adult survival rates due to the cost of reproduction. However, a skewed sex ratio at birth, sex-biased dispersal and immigration, and sexual differences in juvenile mortality may also contribute. We present a framework to decompose the contributions of demographic traits to sex ratios using perturbation analyses of two-sex matrix population models. Metrics of sensitivity are derived from analyses of sensitivity, elasticity, life-table response experiments and life-stage simulation analyses, and applied to the stable stage distribution instead of lambda. We use these approaches to examine causes of male-biased sex ratios in two populations of green-rumped parrotlets (Forpus passerinus) in Venezuela. Female local juvenile survival contributed the most to the unbalanced OSR and ASR due to a female-biased dispersal rate, suggesting that sexual differences in philopatry can influence sex ratios more strongly than the cost of reproduction.
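The eigenvector machinery these perturbation metrics build on can be sketched with the classic lambda-sensitivity formulas of matrix population models (the paper redirects the perturbation target to the stable stage distribution, but the ingredients - dominant eigenvalue, right/left eigenvectors - are the same). The 2-stage matrix below is a toy illustration, not a parrotlet estimate.

```python
import numpy as np

# Toy 2-stage (juvenile, adult) projection matrix: fertility 1.5,
# juvenile survival 0.3, adult survival 0.8 (illustrative values only).
A = np.array([[0.0, 1.5],
              [0.3, 0.8]])

eigvals, W = np.linalg.eig(A)
i = int(np.argmax(eigvals.real))
lam = float(eigvals.real[i])                  # asymptotic growth rate
w = np.abs(W[:, i].real)
w = w / w.sum()                               # stable stage distribution

lv, V = np.linalg.eig(A.T)                    # left eigenvectors via A^T
j = int(np.argmax(lv.real))
v = np.abs(V[:, j].real)                      # reproductive values

S = np.outer(v, w) / (v @ w)                  # sensitivities d(lambda)/d(a_ij)
E = S * A / lam                               # elasticities (sum to 1)
```

A two-sex extension would block the matrix by sex and add birth sex ratio and sex-specific survival/dispersal entries, and the same `S`/`E` decomposition then apportions the skew among those traits.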

  6. Impact of sophisticated fog spray models on accident analyses

    International Nuclear Information System (INIS)

    Roblyer, S.P.; Owzarski, P.C.

    1978-01-01

    The N-Reactor confinement system release dose to the public in a postulated accident is reduced by washing the confinement atmosphere with fog sprays. This allows a low-pressure release of confinement atmosphere containing fission products through filters and out an elevated stack. The current accident analysis required revision of the CORRAL code and other codes such as CONTEMPT to properly model the N Reactor confinement as a system of multiple fog-sprayed compartments. In revising these codes, more sophisticated models for the fog sprays and iodine plateout were incorporated to remove some of the conservatism in the steam condensing rates, fission product washout and iodine plateout used in previous studies. The CORRAL code, which was used to describe the transport and deposition of airborne fission products in LWR containment systems for the Rasmussen Study, was revised to describe fog spray removal of molecular iodine (I 2 ) and particulates in multiple compartments, for sprays having individual characteristics of on-off times, flow rates, fall heights and drop sizes in changing containment atmospheres. During postulated accidents, the code determined the fission product removal rates internally rather than from input decontamination factors. A discussion is given of how the calculated plateout and washout rates vary with time throughout the analysis. The results of the accident analyses indicated that more credit could be given to fission product washout and plateout. An important finding was that the release of fission products to the atmosphere and the adsorption of fission products on the filters were significantly lower than previous studies had indicated.

  7. A simple beam model to analyse the durability of adhesively bonded tile floorings in presence of shrinkage

    Directory of Open Access Journals (Sweden)

    S. de Miranda

    2014-07-01

    A simple beam model for the evaluation of tile debonding due to substrate shrinkage is presented. The tile-adhesive-substrate package is modeled as an Euler-Bernoulli beam lying on a two-layer elastic foundation. An effective discrete model for inter-tile grouting is introduced with the aim of modelling workmanship defects due to partially filled groutings. The model is validated using the results of a 2D FE model. Different defect configurations and adhesive typologies are analysed, focusing attention on the prediction of normal stresses in the adhesive layer under the assumption of Mode I failure of the adhesive.

  8. Quantitative Model for Economic Analyses of Information Security Investment in an Enterprise Information System

    Directory of Open Access Journals (Sweden)

    Bojanc Rok

    2012-11-01

    The paper presents a mathematical model for optimal security-technology investment evaluation and decision-making processes based on the quantitative analysis of security risks and digital asset assessments in an enterprise. The model makes use of the quantitative analysis of different security measures that counteract individual risks by identifying the information system processes in an enterprise and the potential threats. The model comprises the target security levels for all identified business processes and the probability of a security accident, together with the possible loss the enterprise may suffer. The selection of security technology is based on the efficiency of the selected security measures. Economic metrics are applied for the efficiency assessment and comparative analysis of different protection technologies. Unlike existing models for evaluation of security investment, the proposed model allows direct comparison and quantitative assessment of different security measures. The model supports in-depth analyses and computations that provide quantitative assessments of different investment options, which translate into recommendations facilitating the selection of the best solution and the decision-making thereof. The model was tested using empirical examples with data from a real business environment.
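The kind of economic comparison the model formalises can be sketched with the standard annualised-loss-expectancy (ALE) and return-on-security-investment (ROSI) metrics. This is a generic sketch using textbook formulas, not the paper's exact formulation; the threat and measure numbers are hypothetical.

```python
def ale(sle, aro):
    """Annualised loss expectancy = single loss expectancy x annual rate of occurrence."""
    return sle * aro

def rosi(ale_before, ale_after, annual_cost):
    """Return on security investment for one candidate measure."""
    return (ale_before - ale_after - annual_cost) / annual_cost

# Compare two hypothetical measures against the same threat
ale0 = ale(sle=200_000, aro=0.30)                 # 60,000 / year unmitigated
measures = {
    "firewall_upgrade": rosi(ale0, ale(200_000, 0.10), 15_000),
    "dlp_suite":        rosi(ale0, ale(200_000, 0.05), 40_000),
}
best = max(measures, key=measures.get)            # highest ROSI wins
```

Here the cheaper measure wins despite leaving more residual risk, which is precisely the kind of trade-off a purely qualitative comparison would miss; the paper's model additionally folds in per-process target security levels.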

  9. Rockslide and Impulse Wave Modelling in the Vajont Reservoir by DEM-CFD Analyses

    Science.gov (United States)

    Zhao, T.; Utili, S.; Crosta, G. B.

    2016-06-01

    This paper investigates the generation of hydrodynamic water waves due to rockslides plunging into a water reservoir. Quasi-3D DEM analyses in plane strain by a coupled DEM-CFD code are adopted to simulate the rockslide from its onset to the impact with the still water and the subsequent generation of the wave. The employed numerical tools and upscaling of hydraulic properties allow predicting a physical response in broad agreement with the observations, notwithstanding the assumptions and characteristics of the adopted methods. The results obtained by the coupled DEM-CFD approach are compared to those published in the literature and to those presented by Crosta et al. (Landslide spreading, impulse waves and modelling of the Vajont rockslide. Rock Mechanics, 2014) in a companion paper, obtained through an ALE-FEM method. Analyses performed along two cross sections are representative of the limit conditions of the eastern and western slope sectors. The maximum average rockslide velocity and the water wave velocity reach ca. 22 and 20 m/s, respectively. The maximum computed run-up amounts to ca. 120 and 170 m for the eastern and western lobe cross sections, respectively. These values are reasonably similar to those recorded during the event (i.e. ca. 130 and 190 m, respectively). The overall study therefore lays out a possible DEM-CFD framework for modelling the generation of the hydrodynamic wave due to the impact of a rapidly moving rockslide or rock-debris avalanche.

  10. Modelling and Analysing Access Control Policies in XACML 3.0

    DEFF Research Database (Denmark)

    Ramli, Carroline Dewi Puspa Kencana

    (c.f. GM03,Mos05,Ris13) and manual analysis of the overall effect and consequences of a large XACML policy set is a very daunting and time-consuming task. In this thesis we address the problem of understanding the semantics of access control policy language XACML, in particular XACML version 3.0....... The main focus of this thesis is modelling and analysing access control policies in XACML 3.0. There are two main contributions in this thesis. First, we study and formalise XACML 3.0, in particular the Policy Decision Point (PDP). The concrete syntax of XACML is based on the XML format, while its standard...... semantics is described normatively using natural language. The use of English text in standardisation leads to the risk of misinterpretation and ambiguity. In order to avoid this drawback, we define an abstract syntax of XACML 3.0 and a formal XACML semantics. Second, we propose a logic-based XACML analysis...
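A concrete flavour of the PDP semantics being formalised is the behaviour of XACML's policy/rule combining algorithms. The sketch below implements a simplified deny-overrides algorithm; it is a hedged illustration of the idea, collapsing the XACML 3.0 spec's extended Indeterminate{D}, {P} and {DP} values into a single Indeterminate.

```python
from enum import Enum

class Decision(Enum):
    """The four basic XACML decision values (extended Indeterminates omitted)."""
    PERMIT = "Permit"
    DENY = "Deny"
    NOT_APPLICABLE = "NotApplicable"
    INDETERMINATE = "Indeterminate"

def deny_overrides(decisions):
    """Simplified deny-overrides combining algorithm: any Deny wins, then
    Indeterminate, then Permit; otherwise the policy set is NotApplicable."""
    ds = list(decisions)
    if Decision.DENY in ds:
        return Decision.DENY
    if Decision.INDETERMINATE in ds:
        return Decision.INDETERMINATE
    if Decision.PERMIT in ds:
        return Decision.PERMIT
    return Decision.NOT_APPLICABLE
```

Formalising exactly this kind of combinator (and its permit-overrides, first-applicable, etc. siblings) is what replaces the normative English of the standard with something machine-checkable.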

  11. Risk Factor Analyses for the Return of Spontaneous Circulation in the Asphyxiation Cardiac Arrest Porcine Model

    Directory of Open Access Journals (Sweden)

    Cai-Jun Wu

    2015-01-01

    Background: Animal models of asphyxiation cardiac arrest (ACA) are frequently used in basic research to mirror the clinical course of cardiac arrest (CA). The rates of return of spontaneous circulation (ROSC) in ACA animal models are lower than those from studies that have utilized ventricular fibrillation (VF) animal models. The purpose of this study was to characterize the factors associated with ROSC in the ACA porcine model. Methods: Forty-eight healthy miniature pigs underwent endotracheal tube clamping to induce CA. Once induced, CA was maintained untreated for a period of 8 min. Two minutes following the initiation of cardiopulmonary resuscitation (CPR), defibrillation was attempted until ROSC was achieved or the animal died. To assess the factors associated with ROSC in this CA model, logistic regression analyses were performed on gender, the time of preparation, the amplitude spectrum area (AMSA) at the beginning of CPR and the pH at the beginning of CPR. A receiver-operating characteristic (ROC) curve was used to evaluate the predictive value of AMSA for ROSC. Results: ROSC was achieved in only 52.1% of animals in this ACA porcine model. The multivariate logistic regression analyses revealed that ROSC significantly depended on the time of preparation, AMSA at the beginning of CPR and pH at the beginning of CPR. The area under the ROC curve for AMSA at the beginning of CPR in predicting ROSC was 0.878 (95% confidence interval: 0.773-0.983), and the optimum cut-off value was 15.62 (specificity 95.7%, sensitivity 80.0%). Conclusions: The time of preparation, AMSA and the pH at the beginning of CPR were associated with ROSC in this ACA porcine model. AMSA also predicted the likelihood of ROSC in this ACA animal model.
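The ROC step of the analysis above - scoring a continuous predictor (AMSA) against a binary outcome (ROSC) and picking a cut-off - can be sketched as follows. The AMSA values are simulated to loosely mimic the reported separation (cut-off near 15.6); they are hypothetical, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

rng = np.random.default_rng(42)
# Hypothetical AMSA values: higher in animals that achieved ROSC
amsa_rosc = rng.normal(18.0, 3.0, 25)        # ROSC achieved
amsa_dead = rng.normal(13.0, 3.0, 23)        # no ROSC
scores = np.concatenate([amsa_rosc, amsa_dead])
y = np.concatenate([np.ones(25), np.zeros(23)])

auc = roc_auc_score(y, scores)               # discrimination of AMSA for ROSC
fpr, tpr, thresholds = roc_curve(y, scores)
cutoff = thresholds[np.argmax(tpr - fpr)]    # Youden's J = sens + spec - 1
```

Maximising Youden's J is one common way to choose the "optimum cut-off" reported in such studies; alternatives (e.g. fixing a minimum specificity) shift the threshold along the same curve.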

  12. Pathway models for analysing and managing the introduction of alien plant pests—an overview and categorization

    Science.gov (United States)

    J.C. Douma; M. Pautasso; R.C. Venette; C. Robinet; L. Hemerik; M.C.M. Mourits; J. Schans; W. van der Werf

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative estimates of introduction risks and effectiveness of management options....

  13. Devising a New Model of Demand-Based Learning Integrated with Social Networks and Analyses of its Performance

    Directory of Open Access Journals (Sweden)

    Bekim Fetaji

    2018-02-01

    The focus of this research study is to devise a new model for demand-based learning integrated with social networks such as Facebook and Twitter. The study investigates this by reviewing the published literature and carrying out a case study analysis of the new model's practical implementation. The study focuses on analyzing demand-based learning and investigating how it can be improved by devising a specific model that incorporates social network use. Statistical analyses of the questionnaire results, addressing the research questions and hypotheses, showed that there is a need for introducing new models into the teaching process. The originality lies in the introduction of the social login approach to an educational environment; the contribution is a demand-based web application that aims to modernize the educational pattern of communication, introduce the social login approach, and increase knowledge transfer as well as improve learners' performance and skills. Insights and recommendations are provided, argued for, and discussed.

  14. Teleoperation and computer control of a backhoe/manipulator system

    International Nuclear Information System (INIS)

    Amazeen, C.A.; Bishop, S.S.

    1987-01-01

    Teleoperation of the U.S. Army's Small Emplacement Excavator (SEE) is now in the prototype stage of development. Initial work is directed towards remotely controlling the SEE backhoe attachment as well as a Belvoir Research, Development, and Engineering Center (BRDEC)-developed heavy-lift manipulator (HLM). The HLM is an alternate end effector for the backhoe. Primitive computer control of the backhoe, with a bucket as an end effector, has been achieved. This paper presents the current and planned system configurations and discusses system applications

  15. Integrated tokamak modelling with the fast-ion Fokker–Planck solver adapted for transient analyses

    International Nuclear Information System (INIS)

    Toma, M; Hamamatsu, K; Hayashi, N; Honda, M; Ide, S

    2015-01-01

    Integrated tokamak modelling that enables the simulation of an entire discharge period is indispensable for designing advanced tokamak plasmas. For this purpose, we extend the integrated code TOPICS to make it more suitable for transient analyses in the fast-ion part. The fast-ion Fokker–Planck solver is integrated into TOPICS at the same level as the bulk transport solver so that the time evolutions of the fast ions and the bulk plasma are consistent with each other as well as with the equilibrium magnetic field. The fast-ion solver simultaneously handles neutral beam-injected ions and alpha particles. Parallelisation of the fast-ion solver, in addition to its computational lightness owing to a dimensional reduction in the phase space, enables transient analyses over long periods on the order of tens of seconds. The fast-ion Fokker–Planck calculation is compared with an orbit-following Monte Carlo calculation and confirmed to be in good agreement. The integrated code is applied to ramp-up simulations for JT-60SA and ITER to confirm its capability and effectiveness in transient analyses. In the integrated simulations, the coupled evolution of the fast ions, plasma profiles, and equilibrium magnetic fields is presented. In addition, the electric acceleration effect on fast ions is shown and discussed. (paper)

  16. Model tests and numerical analyses on horizontal impedance functions of inclined single piles embedded in cohesionless soil

    Science.gov (United States)

    Goit, Chandra Shekhar; Saitoh, Masato

    2013-03-01

    Horizontal impedance functions of inclined single piles are measured experimentally for model soil-pile systems, capturing both the effects of local soil nonlinearity and resonant characteristics. Two practical pile inclinations of 5° and 10°, in addition to a vertical pile, embedded in cohesionless soil and subjected to lateral harmonic pile-head loadings over a wide range of frequencies are considered. Results obtained with low-to-high amplitudes of lateral loading on model soil-pile systems encased in a laminar shear box show that the local nonlinearities have a profound impact on the horizontal impedance functions of piles. Horizontal impedance functions of inclined piles are found to be smaller than those of the vertical pile, and the values decrease as the angle of pile inclination increases. Distinct values of horizontal impedance functions are obtained for the 'positive' and 'negative' cycles of harmonic loading, leading to asymmetric force-displacement relationships for the inclined piles. Validation of these experimental results is carried out through three-dimensional nonlinear finite element analyses, and the results from the numerical models are in good agreement with the experimental data. Sensitivity analyses conducted on the numerical models suggest that the consideration of local nonlinearity in the vicinity of the soil-pile interface influences the response of the soil-pile systems.

  17. Time-dependent inhibition of CYP3A4 by gallic acid in human liver microsomes and recombinant systems.

    Science.gov (United States)

    Pu, Qiang-Hong; Shi, Liang; Yu, Chao

    2015-03-01

    1. Gallic acid is a main polyphenol in various fruits and plants, but its inhibitory characteristics toward CYP3A4 remain unclear. The objective of this work was therefore to investigate the inhibitory characteristics of gallic acid toward CYP3A4, using testosterone as the probe substrate, in human liver microsome (HLM) and recombinant CYP3A4 (rCYP3A4) systems. 2. Gallic acid caused concentration-dependent loss of CYP3A4 activity with IC50 values of 615.2 μM and 669.5 μM in the HLM and rCYP3A4 systems, respectively. IC50-shift experiments showed that pre-incubation with gallic acid in the absence of NADPH led to a 12- or 14-fold reduction of the IC50 in the HLM and rCYP3A4 systems, respectively, supporting time-dependent inhibition. In HLM, the time-dependent inactivation constants KI and kinact were 485.8 μM and 0.05 min⁻¹, respectively. 3. Compared with the presence of NADPH, pre-incubation with gallic acid in the absence of NADPH markedly increased its inhibitory effects in the HLM and rCYP3A4 systems. These results indicate that CYP3A4 inactivation by gallic acid was independent of NADPH and was mainly mediated by its oxidative products. 4. In conclusion, we showed that gallic acid weakly and time-dependently inactivated CYP3A4 via its oxidative products.
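
    The reported inactivation constants translate into an observed inactivation rate via the standard time-dependent-inhibition relation k_obs = kinact·[I]/(KI + [I]). A minimal sketch using the HLM values from the abstract; the pre-incubation time and the single-exponential activity-loss model are assumptions for illustration:

```python
import math

# Reported HLM time-dependent inactivation constants for gallic acid:
KI = 485.8        # uM, inhibitor concentration giving half-maximal inactivation
kinact = 0.05     # 1/min, maximal inactivation rate constant

def k_obs(I):
    """Observed first-order inactivation rate constant at inhibitor conc. I (uM)."""
    return kinact * I / (KI + I)

def remaining_activity(I, t_min):
    """Fraction of CYP3A4 activity left after t_min minutes of pre-incubation,
    assuming simple exponential loss at rate k_obs(I)."""
    return math.exp(-k_obs(I) * t_min)

# At I = KI the rate is half-maximal by definition:
print(round(k_obs(485.8), 3))                 # 0.025
print(round(remaining_activity(485.8, 30), 2))
```

    The hyperbolic dependence of k_obs on [I] is why the weak inhibition reported here only becomes apparent at the high gallic acid concentrations used in the IC50-shift experiments.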

  18. Normalisation genes for expression analyses in the brown alga model Ectocarpus siliculosus

    Directory of Open Access Journals (Sweden)

    Rousvoal Sylvie

    2008-08-01

    Background: Brown algae are multi-cellular organisms occupying most of the world's coasts and are essential actors in the constitution of ecological niches at the shoreline. Ectocarpus siliculosus is an emerging model for brown algal research. Its genome has been sequenced, and several tools are being developed to perform analyses at different levels of cell organization, including transcriptomic expression analyses. Several topics, including physiological responses to osmotic stress and to exposure to contaminants and solvents, are being studied in order to better understand the adaptive capacity of brown algae to pollution and environmental changes. A series of genes that can be used to normalise expression analyses is required for these studies. Results: We monitored the expression of 13 genes under 21 different culture conditions. These included genes encoding proteins and factors involved in protein translation (ribosomal protein 26S, EF1alpha, IF2A, IF4E), protein degradation (ubiquitin, ubiquitin conjugating enzyme) or folding (cyclophilin), proteins involved in both the structure of the cytoskeleton (tubulin alpha, actin, actin-related proteins) and its trafficking function (dynein), as well as a protein implicated in carbon metabolism (glucose 6-phosphate dehydrogenase). The stability of their expression level was assessed using the Ct range, and by applying both the geNorm and the NormFinder principles of calculation. Conclusion: Comparisons of the data obtained with the three methods of calculation indicated that EF1alpha (EF1a) was the best reference gene for normalisation. The normalisation factor should be calculated with at least two genes, alpha tubulin, ubiquitin-conjugating enzyme or actin-related proteins being good partners of EF1a. Our results exclude actin as a good normalisation gene and, in this, are in agreement with previous studies in other organisms.
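
    A minimal sketch of the geNorm stability measure applied in the study: for each candidate gene, M is the mean standard deviation of the log2 expression ratios against every other candidate, and lower M means more stable. The three-gene subset and the expression values are invented for illustration:

```python
import math
from statistics import stdev

# Invented expression levels across five hypothetical culture conditions.
expr = {
    "EF1a":  [10.1, 10.3, 9.9, 10.2, 10.0],
    "actin": [8.0, 9.5, 7.2, 8.8, 6.9],
    "ubq":   [5.1, 5.2, 5.0, 5.3, 5.1],
}

def m_value(gene, data):
    """geNorm M: average pairwise variation of a gene against all others."""
    sds = []
    for other in data:
        if other == gene:
            continue
        ratios = [math.log2(a / b) for a, b in zip(data[gene], data[other])]
        sds.append(stdev(ratios))
    return sum(sds) / len(sds)

ranked = sorted(expr, key=lambda g: m_value(g, expr))
print(ranked)  # most stable candidate first, least stable last
```

    With data like the above, the noisy "actin" series ranks last, mirroring the study's conclusion that actin is a poor normalisation gene.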

  19. Uncertainty Analyses and Strategy

    International Nuclear Information System (INIS)

    Kevin Coppersmith

    2001-01-01

    The DOE identified a variety of uncertainties, arising from different sources, during its assessment of the performance of a potential geologic repository at the Yucca Mountain site. In general, the number and detail of process models developed for the Yucca Mountain site, and the complex coupling among those models, make the direct incorporation of all uncertainties difficult. The DOE has addressed these issues in a number of ways using an approach to uncertainties that is focused on producing a defensible evaluation of the performance of a potential repository. The treatment of uncertainties oriented toward defensible assessments has led to analyses and models with so-called "conservative" assumptions and parameter bounds, where conservative implies lower performance than might be demonstrated with a more realistic representation. The varying maturity of the analyses and models, and uneven level of data availability, result in total system level analyses with a mix of realistic and conservative estimates (for both probabilistic representations and single values). That is, some inputs have realistically represented uncertainties, and others are conservatively estimated or bounded. However, this approach is consistent with the "reasonable assurance" approach to compliance demonstration, which was called for in the U.S. Nuclear Regulatory Commission's (NRC) proposed 10 CFR Part 63 regulation (64 FR 8640 [DIRS 101680]). A risk analysis that includes conservatism in the inputs will result in conservative risk estimates. Therefore, the approach taken for the Total System Performance Assessment for the Site Recommendation (TSPA-SR) provides a reasonable representation of processes and conservatism for purposes of site recommendation. However, mixing unknown degrees of conservatism in models and parameter representations reduces the transparency of the analysis and makes the development of coherent and consistent probability statements about projected repository…

  20. Systematic review of model-based analyses reporting the cost-effectiveness and cost-utility of cardiovascular disease management programs.

    Science.gov (United States)

    Maru, Shoko; Byrnes, Joshua; Whitty, Jennifer A; Carrington, Melinda J; Stewart, Simon; Scuffham, Paul A

    2015-02-01

    The reported cost-effectiveness of cardiovascular disease management programs (CVD-MPs) is highly variable, potentially leading to different funding decisions. This systematic review evaluates published modeled analyses to compare study methods and quality. Articles were included if an incremental cost-effectiveness ratio (ICER) or incremental cost-utility ratio (ICUR) was reported, the intervention was a multi-component program designed to manage or prevent a cardiovascular disease condition, and all domains specified in the American Heart Association Taxonomy for Disease Management were addressed. Nine articles (reporting 10 clinical outcomes) were included. Eight cost-utility and two cost-effectiveness analyses targeted hypertension (n=4), coronary heart disease (n=2), coronary heart disease plus stroke (n=1), heart failure (n=2) and hyperlipidemia (n=1). Study perspectives included the healthcare system (n=5), societal and fund holders (n=1), a third-party payer (n=3), or were not explicitly stated (n=1). All analyses modeled interventions of one to two years' duration. Time horizons ranged from two years (n=1) and 10 years (n=1) to lifetime (n=8). Model structures included Markov models (n=8), 'decision analytic models' (n=1), or were not explicitly stated (n=1). Considerable variation was observed in clinical and economic assumptions and reporting practices. Of all ICERs/ICURs reported, including those of subgroups (n=16), four were above a US$50,000 acceptability threshold, six were below and six were dominant. The majority of CVD-MPs were reported to have favorable economic outcomes, but 25% came at unacceptably high cost for the outcomes achieved. Use of standardized reporting tools should increase transparency and help identify what drives the cost-effectiveness of CVD-MPs. © The European Society of Cardiology 2014.
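
    The ICER arithmetic and the dominance categories used in the review can be sketched as follows. The costs, QALYs, and the reading of the US$50,000 threshold are illustrative assumptions:

```python
# Sketch: computing an ICER and classifying it the way the review does
# (dominant / below threshold / above threshold).
THRESHOLD = 50_000  # US$ per QALY, the acceptability threshold cited above

def icer_verdict(cost_new, qaly_new, cost_old, qaly_old):
    d_cost = cost_new - cost_old
    d_qaly = qaly_new - qaly_old
    if d_cost <= 0 and d_qaly >= 0:
        return "dominant"        # cheaper and at least as effective
    if d_cost >= 0 and d_qaly <= 0:
        return "dominated"       # costlier and no more effective
    icer = d_cost / d_qaly       # incremental cost per QALY gained
    return "acceptable" if icer <= THRESHOLD else "too costly"

# Hypothetical program vs. usual care:
print(icer_verdict(12_000, 6.2, 10_000, 6.1))  # ICER = 20,000/QALY
print(icer_verdict(9_500, 6.3, 10_000, 6.1))   # cheaper and better
```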

  1. Performance and Vibration Analyses of Lift-Offset Helicopters

    Directory of Open Access Journals (Sweden)

    Jeong-In Go

    2017-01-01

    A validation study on the performance and vibration analyses of the XH-59A compound helicopter is conducted to establish techniques for the comprehensive analysis of lift-offset compound helicopters. This study considers the XH-59A lift-offset compound helicopter, which uses a rigid coaxial rotor system, as the verification model. CAMRAD II (Comprehensive Analytical Method of Rotorcraft Aerodynamics and Dynamics II), a comprehensive analysis code, is used as the tool for the performance, vibration, and loads analyses. A general free wake model, a more sophisticated model than the other available wake models, is used to obtain good results from the comprehensive analysis. Performance analyses of the XH-59A helicopter with and without auxiliary propulsion are conducted in various flight conditions. In addition, vibration analyses of the XH-59A compound helicopter configuration are conducted in the forward flight condition. The present comprehensive analysis results are in good agreement with the flight test and previous analyses. Therefore, techniques for the comprehensive analysis of lift-offset compound helicopters are appropriately established. Furthermore, the rotor lifts are calculated for the XH-59A lift-offset compound helicopter in the forward flight condition to investigate the airload characteristics of the ABC™ (Advancing Blade Concept) rotor.

  2. SENSITIVITY ANALYSIS FOR SALTSTONE DISPOSAL UNIT COLUMN DEGRADATION ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G.

    2014-10-28

    PORFLOW-related analyses supporting a sensitivity analysis for Saltstone Disposal Unit (SDU) column degradation were performed. Previous analyses (Flach and Taylor 2014) used a model in which the SDU columns degraded in a piecewise manner from the top and bottom simultaneously. The current analyses employ a model in which all pieces of the column degrade at the same time. Information was extracted from the analyses that may be useful in determining the distribution of Tc-99 in the various SDUs over time and in determining flow balances for the SDUs.

  3. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses.

    Science.gov (United States)

    Welton, Nicky J; Soares, Marta O; Palmer, Stephen; Ades, Anthony E; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M

    2015-07-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. © The Author(s) 2015.
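
    The distinction drawn above between the random-effects mean and the predictive distribution can be sketched with the DerSimonian–Laird estimator. The study effects (log odds ratios) and variances below are invented; the method shown is the standard moment estimator, not necessarily the model used in the paper:

```python
import math

# Invented per-study treatment effects (log odds ratios) and their variances.
effects = [-0.60, -0.05, -0.45, 0.20, -0.30]
variances = [0.04, 0.06, 0.05, 0.08, 0.03]

# Fixed-effect (inverse-variance) pooling, then the DerSimonian-Laird
# moment estimate of the between-study variance tau^2.
w = [1 / v for v in variances]
fixed_mean = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
Q = sum(wi * (yi - fixed_mean) ** 2 for wi, yi in zip(w, effects))
df = len(effects) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / c)

# Random-effects mean and its variance.
w_re = [1 / (v + tau2) for v in variances]
re_mean = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
re_var = 1 / sum(w_re)

# The predictive SD for the effect in a new setting adds tau^2 back in,
# which is why CEA conclusions can differ depending on which summary is used.
pred_sd = math.sqrt(re_var + tau2)
print(round(re_mean, 3), round(math.sqrt(re_var), 3), round(pred_sd, 3))
```

    The gap between the random-effects SD and the predictive SD is exactly the heterogeneity the paper argues should be carried through into the CEA and value-of-information results.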

  4. Multicollinearity in prognostic factor analyses using the EORTC QLQ-C30: identification and impact on model selection.

    Science.gov (United States)

    Van Steen, Kristel; Curran, Desmond; Kramer, Jocelyn; Molenberghs, Geert; Van Vreckem, Ann; Bottomley, Andrew; Sylvester, Richard

    2002-12-30

    Clinical and quality of life (QL) variables from an EORTC clinical trial of first line chemotherapy in advanced breast cancer were used in a prognostic factor analysis of survival and response to chemotherapy. For response, different final multivariate models were obtained from forward and backward selection methods, suggesting a disconcerting instability. Quality of life was measured using the EORTC QLQ-C30 questionnaire completed by patients. Subscales on the questionnaire are known to be highly correlated, and therefore it was hypothesized that multicollinearity contributed to model instability. A correlation matrix indicated that global QL was highly correlated with 7 out of 11 variables. In a first attempt to explore multicollinearity, we used global QL as dependent variable in a regression model with other QL subscales as predictors. Afterwards, standard diagnostic tests for multicollinearity were performed. An exploratory principal components analysis and factor analysis of the QL subscales identified at most three important components and indicated that inclusion of global QL made minimal difference to the loadings on each component, suggesting that it is redundant in the model. In a second approach, we advocate a bootstrap technique to assess the stability of the models. Based on these analyses and since global QL exacerbates problems of multicollinearity, we therefore recommend that global QL be excluded from prognostic factor analyses using the QLQ-C30. The prognostic factor analysis was rerun without global QL in the model, and selected the same significant prognostic factors as before. Copyright 2002 John Wiley & Sons, Ltd.
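
    The multicollinearity problem described above can be sketched with the simplest diagnostic: for two predictors the variance inflation factor reduces to VIF = 1/(1 - r²), where r is their Pearson correlation. The subscale scores below are invented stand-ins for QLQ-C30 data:

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation of two equal-length samples."""
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# Invented global QL and physical-functioning scores for eight patients.
global_ql = [50, 66, 83, 33, 58, 75, 41, 66]
physical  = [55, 70, 80, 40, 60, 78, 45, 62]

r = pearson_r(global_ql, physical)
vif = 1 / (1 - r ** 2)   # how much the coefficient variance is inflated
print(round(r, 2), round(vif, 1))
```

    A VIF well above the conventional cutoff of 5-10 signals that the two subscales carry nearly the same information, which is the situation that made the stepwise model selection unstable and led the authors to drop global QL.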

  5. Analyses of non-fatal accidents in an opencast mine by logistic regression model - a case study.

    Science.gov (United States)

    Onder, Seyhan; Mutlu, Mert

    2017-09-01

    Accidents cause major damage to both workers and enterprises in the mining industry. To reduce the number of occupational accidents, these incidents should be properly registered and carefully analysed. This study examines occupational accident records from the opencast coal mines of the Aegean Lignite Enterprise (ELI) of Turkish Coal Enterprises (TKI) in Soma between 2006 and 2011. A total of 231 occupational accidents were analysed for this study. The accident records were categorized into seven groups: area, reason, occupation, part of body, age, shift hour and lost days. The SPSS package program was used for logistic regression analyses, which predicted the probability of non-fatal accidents resulting in greater or fewer than 3 lost workdays. Social facilities (area of surface installations), workshops, and opencast mining areas are the areas with the highest probability of accidents with greater than 3 lost workdays, while the causes with the highest probability for these types of accidents are transporting and manual handling. Additionally, the model was tested on such reported accidents that occurred in 2012 at the ELI in Soma and correctly estimated the probability of accidents with lost workdays in 70% of cases.

  6. Effect of a Herbal-Leucine mix on the IL-1β-induced cartilage degradation and inflammatory gene expression in human chondrocytes

    Directory of Open Access Journals (Sweden)

    Haqqi Tariq M

    2011-08-01

    Background: Conventional treatments for articular diseases are often effective for symptom relief, but can also cause significant side effects and do not slow the progression of the disease. Several natural substances have been shown to be effective at relieving the symptoms of osteoarthritis (OA), and preliminary evidence suggests that some of these compounds may exert a favorable influence on the course of the disease. The objective of this study was to investigate the anti-inflammatory/chondroprotective potential of a herbal and amino acid mixture containing extracts of Uncaria tomentosa, Boswellia spp. and Lepidium meyenii plus L-leucine on the IL-1β-induced production of nitric oxide (NO), glycosaminoglycan (GAG), matrix metalloproteinases (MMPs), aggrecan (ACAN) and type II collagen (COL2A1) in human OA chondrocytes and OA cartilage explants. Methods: Primary OA chondrocytes or OA cartilage explants were pretreated with the Herbal-Leucine mixture (HLM, 1-10 μg/ml) and then stimulated with IL-1β (5 ng/ml). The effect of HLM on IL-1β-induced gene expression of iNOS, MMP-9, MMP-13, ACAN and COL2A1 was verified by real-time PCR. Estimation of NO and GAG release in the culture supernatant was done using commercially available kits. Results: HLM was found to be an effective anti-inflammatory agent in these in vitro studies, as evidenced by strong inhibition of iNOS, MMP-9 and MMP-13 expression and NO production in IL-1β-stimulated OA chondrocytes. HLM-mediated up-regulation of ACAN and COL2A1 expression in IL-1β-stimulated OA chondrocytes was also noted. Conclusion: Our data suggest that HLM could be a chondroprotective and anti-inflammatory agent in arthritis, switching chondrocyte gene expression from a catabolic direction towards an anabolic and regenerative one; consequently, this approach may be potentially useful as a new adjunct therapeutic/preventive agent for OA or injury recovery.

  7. Inhibition of the human liver microsomal and human cytochrome P450 1A2 and 3A4 metabolism of estradiol by deployment-related and other chemicals.

    Science.gov (United States)

    Usmani, Khawja A; Cho, Taehyeon M; Rose, Randy L; Hodgson, Ernest

    2006-09-01

    Cytochromes P450 (P450s) are major catalysts in the metabolism of xenobiotics and endogenous substrates such as estradiol (E2). It has previously been shown that E2 is predominantly metabolized in humans by CYP1A2 and CYP3A4, with 2-hydroxyestradiol (2-OHE2) the major metabolite. This study examines effects of deployment-related and other chemicals on E2 metabolism by human liver microsomes (HLM) and individual P450 isoforms. Kinetic studies using HLM, CYP3A4, and CYP1A2 showed similar affinities (Km) for E2 with respect to 2-OHE2 production. Vmax and CLint values for HLM are 0.32 nmol/min/mg protein and 7.5 μl/min/mg protein; those for CYP3A4 are 6.9 nmol/min/nmol P450 and 291 μl/min/nmol P450; and those for CYP1A2 are 17.4 nmol/min/nmol P450 and 633 μl/min/nmol P450. The use of phenotyped HLMs showed that individuals with high levels of CYP1A2 and CYP3A4 have the greatest potential to metabolize E2. Preincubation of HLM with a variety of chemicals, including those used in military deployments, resulted in varying levels of inhibition of E2 metabolism. The greatest inhibition was observed with organophosphorus compounds, including chlorpyrifos and fonofos, with up to 80% inhibition of 2-OHE2 production. Carbaryl, a carbamate pesticide, and naphthalene, a jet fuel component, inhibited ca. 40% of E2 metabolism. Preincubation of CYP1A2 with chlorpyrifos, fonofos, carbaryl, or naphthalene resulted in 96, 59, 84, and 87% inhibition of E2 metabolism, respectively. Preincubation of CYP3A4 with chlorpyrifos, fonofos, deltamethrin, or permethrin resulted in 94, 87, 58, and 37% inhibition of E2 metabolism. Chlorpyrifos inhibition of E2 metabolism is shown to be irreversible.
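
    The reported HLM constants are internally consistent with the standard relation CLint = Vmax/Km, which lets the implied Km be back-calculated. The Km value below is a derived estimate for illustration, not a number stated in the abstract:

```python
# Reported HLM kinetics for 2-OHE2 production:
vmax = 0.32    # nmol/min/mg protein
clint = 7.5    # uL/min/mg protein (intrinsic clearance)

# CLint = Vmax / Km  =>  Km = Vmax / CLint.
# Units: nmol/uL == mM, so multiply by 1000 to express Km in uM.
km_uM = vmax / clint * 1000
print(round(km_uM, 1))   # ~42.7 uM

def v(S_uM):
    """Michaelis-Menten rate (nmol/min/mg) at substrate concentration S (uM)."""
    return vmax * S_uM / (km_uM + S_uM)

# At S << Km the rate approaches CLint * S, the regime where intrinsic
# clearance governs metabolism.
print(round(v(0.1) * 1e4, 2))
```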

  8. Analysing Feature Model Changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large-scale, highly variable system is a challenging task. For such a system, evolution operations often require consistent updates to both the implementation and the feature model. In this context, the evolution of the feature model closely follows the evolution of the system.

  9. Reducing organizational politics in performance appraisal: the role of coaching leaders in appraising age-diverse employees

    OpenAIRE

    Dello Russo, S.; Miraglia, M.; Borgogni, L.

    2017-01-01

    WOS:000410769200004 (Web of Science accession number) We examined whether a supervisor's coaching leadership style predicts the perception of organizational politics in performance appraisal (OPPA) reported by their collaborators. Additionally, we drew on social cognition and motivational lifespan development theories to hypothesize age-related differences in perceived OPPA and in its link with the coaching leadership style. Using hierarchical linear modeling (HLM) on a sample of 576 employees and ...
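
    An HLM analysis like the one above typically starts from the null model, whose intraclass correlation (ICC) quantifies how much OPPA variance lies between supervisors versus within their teams. The sketch below uses the one-way ANOVA estimator of the ICC with invented supervisor groups and scores; it illustrates the rationale for multilevel modeling, not the paper's actual analysis:

```python
from statistics import mean

# Invented OPPA scores, grouped by supervisor (balanced groups).
teams = {
    "sup_A": [3.1, 3.4, 2.9, 3.2],
    "sup_B": [4.0, 4.3, 4.1, 3.8],
    "sup_C": [2.2, 2.5, 2.0, 2.4],
}

grand = mean(v for scores in teams.values() for v in scores)
n = len(next(iter(teams.values())))   # group size
k = len(teams)                        # number of groups

ss_between = sum(n * (mean(s) - grand) ** 2 for s in teams.values())
ss_within = sum((v - mean(s)) ** 2 for s in teams.values() for v in s)
ms_between = ss_between / (k - 1)
ms_within = ss_within / (k * (n - 1))

# One-way ANOVA estimator of the intraclass correlation.
icc = (ms_between - ms_within) / (ms_between + (n - 1) * ms_within)
print(round(icc, 2))  # a high ICC means supervisor-level clustering matters
```

    When the ICC is non-trivial, ordinary regression understates standard errors, which is why HLM (with employees nested in supervisors) is the appropriate choice here.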

  10. Modeling Acequia Irrigation Systems Using System Dynamics: Model Development, Evaluation, and Sensitivity Analyses to Investigate Effects of Socio-Economic and Biophysical Feedbacks

    Directory of Open Access Journals (Sweden)

    Benjamin L. Turner

    2016-10-01

    Agriculture-based irrigation communities of northern New Mexico have survived for centuries despite the arid environment in which they reside. These irrigation communities are threatened by regional population growth, urbanization, a changing demographic profile, economic development, climate change, and other factors. Within this context, we investigated the extent to which community resource management practices centering on shared resources (e.g., water for agriculture in the floodplains and grazing resources in the uplands) and mutualism (i.e., the shared responsibility of local residents for maintaining traditional irrigation policies and upholding cultural and spiritual observances embedded within the community structure) influence acequia function. We used a system dynamics modeling approach as an interdisciplinary platform to integrate these systems, specifically the relationship between community structure and resource management. In this paper we describe the background and context of acequia communities in northern New Mexico and the challenges they face. We formulate a Dynamic Hypothesis capturing the endogenous feedbacks driving acequia community vitality. Development of the model centered on major stock-and-flow components, including linkages for hydrology, ecology, community, and economics. Calibration metrics were used for model evaluation, including statistical correlation of observed and predicted values and Theil inequality statistics. Results indicated that the model reproduced trends exhibited by the observed system. Sensitivity analyses of socio-cultural processes identified absentee decisions, the cumulative income effect on time in agriculture, land use preference due to time allocation, the community demographic effect, the effect of employment on participation, and the farm size effect as key determinants of system behavior and response.
    Sensitivity analyses of biophysical parameters revealed that several key parameters (e.g., acres per…
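
    The Theil inequality statistics named as calibration metrics above can be sketched as the standard decomposition of mean-squared error into bias, variance, and covariance proportions. The observed and simulated series below are invented:

```python
from statistics import mean, pstdev

# Invented observed vs. simulated series (e.g., annual irrigated acreage).
obs = [100, 104, 103, 108, 112, 115]
sim = [98, 103, 105, 107, 113, 114]

n = len(obs)
mo, ms = mean(obs), mean(sim)
so, ss = pstdev(obs), pstdev(sim)
r = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (n * so * ss)
mse = sum((s - o) ** 2 for s, o in zip(sim, obs)) / n

U_bias = (ms - mo) ** 2 / mse          # systematic over/under-prediction
U_var  = (ss - so) ** 2 / mse          # mismatch in variability
U_cov  = 2 * (1 - r) * so * ss / mse   # remaining unsystematic error

print(round(U_bias + U_var + U_cov, 6))  # the three proportions sum to 1
```

    A well-calibrated model concentrates its error in the covariance proportion; large bias or variance proportions flag systematic problems worth fixing before sensitivity analysis.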

  11. An efficient modeling of fine air-gaps in tokamak in-vessel components for electromagnetic analyses

    International Nuclear Information System (INIS)

    Oh, Dong Keun; Pak, Sunil; Jhang, Hogun

    2012-01-01

    Highlights: ► A simple and efficient modeling technique is introduced to avoid the undesirably massive air meshes usually encountered when modeling fine structures in tokamak in-vessel components. ► The modeling method is based on decoupled nodes at the boundary elements mimicking the air gaps. ► We demonstrate its viability and efficacy by comparing the method with brute-force modeling of air gaps and with an effective-resistivity approximation in place of detailed modeling. ► Application of the method to the ITER machine was successfully carried out without sacrificing computational resources or speed. - Abstract: A simple and efficient modeling technique is presented for proper analysis of the complicated eddy current flows in conducting structures with fine air gaps. It is based on the idea of replacing a slit with a decoupled boundary of finite elements. The viability and efficacy of the technique are demonstrated on a simple problem. Application of the method to electromagnetic load analyses during plasma disruptions in ITER has been successfully carried out without sacrificing computational resources and speed. This shows that the proposed method is applicable to a practical system with complicated geometrical structures.

  12. An extensible analysable system model

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, Rene Rydhof

    2008-01-01

    … this does not hold for real physical systems. Approaches such as threat modelling try to target the formalisation of the real-world domain, but still are far from the rigid techniques available in security research. Many currently available approaches to assurance of critical infrastructure security…

  13. BWR core melt progression phenomena: Experimental analyses

    International Nuclear Information System (INIS)

    Ott, L.J.

    1992-01-01

    In the BWR Core Melt Progression Phenomena Program, experimental results concerning severe fuel damage and core melt progression in BWR core geometry are used to evaluate existing models of the governing phenomena. These include control blade eutectic liquefaction and the subsequent relocation and attack on the channel box structure; oxidation heating and hydrogen generation; Zircaloy melting and relocation; and the continuing oxidation of zirconium with metallic blockage formation. Integral data have been obtained from the BWR DF-4 experiment in the ACRR and from BWR tests in the German CORA ex-reactor fuel-damage test facility. Additional integral data will be obtained from the new CORA BWR tests, the full-length FLHT-6 BWR test in the NRU test reactor, and the new program of ex-reactor experiments at Sandia National Laboratories (SNL) on metallic melt relocation and blockage formation. An essential part of this activity is interpretation and use of the results of the BWR tests. Oak Ridge National Laboratory (ORNL) has developed experiment-specific models for analysis of the BWR experiments; to date, these models have permitted far more precise analyses of the conditions in these experiments than were previously available. These analyses have provided a basis for more accurate interpretation of the phenomena that the experiments are intended to investigate. The results of post-test analyses of BWR experiments are discussed and significant findings from these analyses are explained. The ORNL control blade/canister models, with materials interaction, relocation and blockage models, are currently being implemented in SCDAP/RELAP5 as an optional structural component.

  14. Parameterization and sensitivity analyses of a radiative transfer model for remote sensing plant canopies

    Science.gov (United States)

    Hall, Carlton Raden

    A major objective of remote sensing is the determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge on the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(lambda, m-1), diffuse backscatter b(lambda, m-1), beam attenuation alpha(lambda, m-1), and beam-to-diffuse conversion c(lambda, m-1) coefficients of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi), sampled at Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage (PFC), leaf area index (LAI), and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high-sensitivity narrow-bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory, where optical, physical, and chemical measurements of leaf thickness, leaf area, leaf moisture and pigment content were made. A new term, the leaf volume correction index (LVCI), was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle-adjusted leaf
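    The two-flow irradiance model referenced above couples diffuse absorption and backscatter coefficients through a pair of flux equations. As a hedged illustration only, the sketch below uses the closely related Kubelka-Munk two-flux solution for a homogeneous layer over a reflective background; the coefficient values are illustrative assumptions, not the study's parameterization.

    ```python
    import math

    def two_flux_reflectance(K, S, h, R_bg):
        """Kubelka-Munk two-flux reflectance of a scattering layer of
        depth h (m) with absorption K and backscatter S (m^-1) above a
        background of reflectance R_bg."""
        a = (S + K) / S                      # similarity parameter
        b = math.sqrt(a * a - 1.0)
        coth = 1.0 / math.tanh(b * S * h)
        return (1.0 - R_bg * (a - b * coth)) / (a - R_bg + b * coth)

    # Illustrative coefficients (assumed, not from the study):
    K, S = 0.5, 1.0                          # m^-1
    # Semi-infinite-canopy limit, for comparison:
    R_inf = 1.0 + K / S - math.sqrt((K / S) ** 2 + 2.0 * K / S)
    print(two_flux_reflectance(K, S, 50.0, 0.1))   # deep canopy -> R_inf
    print(R_inf)
    ```

    For a deep canopy the background term drops out and the reflectance converges to the semi-infinite value, which is one simple sanity check when parameterizing such coefficients.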

  15. Design of the incentive mechanism in electricity auction market based on the signaling game theory

    International Nuclear Information System (INIS)

    Liu, Zhen; Zhang, Xiliang; Lieu, Jenny

    2010-01-01

    At present, designing a proper bidding mechanism to decrease the generators' market power is considered to be one of the key approaches to deepening the reform of the electricity market. Based on signaling game theory, the paper analyzes the main electricity bidding mechanisms in electricity auction markets and considers the degree of information disturbance as an important factor for evaluating bidding mechanisms. Building on these studies, an incentive electricity bidding mechanism defined as the Generator Semi-randomized Matching (GSM) mechanism is proposed. In order to verify the new bidding mechanism, this paper uses the Swarm platform to develop a multi-agent simulation model. In the simulation model, the generators and purchasers use a partly superior learning strategy to adjust their prices and electricity quantities. The paper then examines a simulation experiment of the GSM bidding mechanism and compares it to a simulation of the High-Low Matching (HLM) bidding mechanism. According to the simulation results, several conclusions can be drawn when comparing the proposed GSM bidding mechanism to the equilibrium state of HLM: the clearing price decreases, the total transaction volume increases, the profits of electricity generators decrease, and the overall benefits of purchasers increase. Index Terms - signaling game; semi-randomized matching; high-low match. (author)
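    As generic background to such auction simulations (not the GSM or HLM mechanisms themselves, whose matching rules are specific to the paper), a minimal uniform-price clearing sketch with assumed bids shows how a clearing price emerges from sorted generator offers against a fixed demand.

    ```python
    def clear_market(offers, demand):
        """Uniform-price clearing: accept offers in price order until
        demand is met; the last accepted offer sets the clearing price."""
        accepted, price = 0.0, None
        for p, q in sorted(offers):          # offers are (price, quantity)
            take = min(q, demand - accepted)
            if take <= 0:
                break
            accepted += take
            price = p
            if accepted >= demand:
                break
        return price, accepted

    offers = [(30.0, 100), (25.0, 80), (40.0, 120)]   # assumed generator bids
    price, vol = clear_market(offers, 150)
    print(price, vol)   # 30.0 150.0
    ```

    In an agent-based setting, generators would adjust their (price, quantity) offers between rounds based on such clearing outcomes.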

  16. Reading Ability Development from Kindergarten to Junior Secondary: Latent Transition Analyses with Growth Mixture Modeling

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2016-10-01

    The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study (Kindergarten Class of 1998-99 data; Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006) under a dynamic systems framework. To depict children's growth patterns, we extended the measurement part of latent transition analysis to the growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group, with few cross-level changes in their classes. After adding environmental factors as predictors, analyses showed that children receiving higher teachers' ratings, with higher socioeconomic status, and of above-average poverty status, had a higher probability of transitioning into the higher ability group.
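    The latent transition part of such a model tracks movement between ability classes across waves. A minimal sketch (the class assignments here are synthetic, for illustration only) estimates an empirical transition matrix from class memberships at two consecutive time points.

    ```python
    from collections import Counter

    def transition_matrix(classes_t1, classes_t2, k):
        """Row-normalized empirical transition probabilities between
        k latent classes observed at two consecutive waves."""
        counts = Counter(zip(classes_t1, classes_t2))
        probs = []
        for i in range(k):
            row = [counts[(i, j)] for j in range(k)]
            total = sum(row)
            probs.append([c / total if total else 0.0 for c in row])
        return probs

    # Synthetic class assignments at two waves (assumed data):
    m = transition_matrix([0, 0, 1, 1, 0], [0, 1, 1, 1, 0], k=2)
    print(m)   # most mass sits on the diagonal: children stay in class
    ```

    A full latent transition analysis estimates these probabilities jointly with the measurement model rather than from hard class assignments, but the interpretation of the resulting matrix is the same.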

  17. LFR Development: Italian Program

    International Nuclear Information System (INIS)

    Tarantino, M.

    2011-01-01

    Conclusions: ⇨ ENEA has one of the most relevant EU R&D infrastructures for HLM technological development; ⇨ ENEA is strongly involved in the EU R&D programs supporting the development of sub-critical (ADS) and critical lead-cooled reactors (LFR - Gen. IV); ⇨ A large experimental program, ranging from HLM thermal-hydraulics to large-scale experiments, has been implemented in Italy, partially funded by the National Program; ⇨ Broad competencies are available in Safety Assessment, System Design, and Core Design & Optimization; ⇨ ENEA is able to cooperate with other laboratories in order to promote the growth and diffusion of the technology for nuclear application

  18. Comparison with Russian analyses of meteor impact

    Energy Technology Data Exchange (ETDEWEB)

    Canavan, G.H.

    1997-06-01

    The inversion model for meteor impacts is used to discuss Russian analyses and compare principal results. For common input parameters, the models produce consistent estimates of impactor parameters. Directions for future research are discussed and prioritized.

  19. A review on design of experiments and surrogate models in aircraft real-time and many-query aerodynamic analyses

    Science.gov (United States)

    Yondo, Raul; Andrés, Esther; Valero, Eusebio

    2018-01-01

    Full-scale aerodynamic wind tunnel testing, numerical simulation of high-dimensional (full-order) aerodynamic models, and flight testing are some of the fundamental but complex steps in the various design phases of recent civil transport aircraft. Current aircraft aerodynamic designs have increased in complexity (multidisciplinary, multi-objective or multi-fidelity) and need to address the challenges posed by the nonlinearity of the objective functions and constraints, uncertainty quantification in aerodynamic problems, and restrained computational budgets. With the aim of reducing the computational burden and generating low-cost but accurate models that mimic those full-order models at different values of the design variables, recent progress has seen the introduction, in real-time and many-query analyses, of surrogate-based approaches as rapid and cheaper-to-simulate models. In this paper, a comprehensive and state-of-the-art survey on common surrogate modeling techniques and surrogate-based optimization methods is given, with an emphasis on model selection and validation, dimensionality reduction, sensitivity analyses, constraint handling, and infill and stopping criteria. Benefits, drawbacks and comparative discussions in applying those methods are described. Furthermore, the paper familiarizes the readers with surrogate models that have been successfully applied to the general field of fluid dynamics, but not yet in the aerospace industry. Additionally, the review revisits the most popular sampling strategies used in conducting physical and simulation-based experiments in aircraft aerodynamic design. Attractive or smart designs infrequently used in the field and discussions on advanced sampling methodologies are presented, to give a glance at the various efficient possibilities to sample the parameter space a priori. Closing remarks focus on future perspectives, challenges and shortcomings associated with the use of surrogate models by the aircraft industry.
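    The core idea of a surrogate is to replace an expensive model with a cheap approximation fitted to a few samples. A minimal Gaussian radial-basis-function interpolant (a generic sketch, not any specific method from the review; the "expensive" response here is an assumed sine function) illustrates the fit-then-predict pattern.

    ```python
    import numpy as np

    def rbf_fit(x, y, eps=3.0):
        """Solve for Gaussian RBF interpolation weights on 1-D samples."""
        phi = np.exp(-(eps * (x[:, None] - x[None, :])) ** 2)
        return np.linalg.solve(phi, y)

    def rbf_predict(x_train, w, x_new, eps=3.0):
        """Evaluate the fitted RBF surrogate at new design points."""
        phi = np.exp(-(eps * (x_new[:, None] - x_train[None, :])) ** 2)
        return phi @ w

    x = np.linspace(0.0, 1.0, 7)          # sample sites (design of experiments)
    y = np.sin(2 * np.pi * x)             # assumed "expensive" model responses
    w = rbf_fit(x, y)
    print(rbf_predict(x, w, np.array([0.25])))   # near sin(pi/2) = 1
    ```

    Infill criteria then decide where to add the next expensive sample, trading off predicted value against surrogate uncertainty.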

  20. Analysing the Effects of Flood-Resilience Technologies in Urban Areas Using a Synthetic Model Approach

    Directory of Open Access Journals (Sweden)

    Reinhard Schinke

    2016-11-01

    Flood protection systems, with their spatial effects, play an important role in managing and reducing flood risks. The planning and decision process as well as the technical implementation are well organized and often exercised. However, building-related flood-resilience technologies (FReT) are often neglected due to the absence of suitable approaches to analyse and integrate such measures in large-scale flood damage mitigation concepts. Against this backdrop, a synthetic model approach was extended by a few complementary methodological steps in order to calculate flood damage to buildings considering the effects of building-related FReT, and to analyse the area-related reduction of flood risks using geo-information systems (GIS) with high spatial resolution. It includes a civil-engineering-based investigation of characteristic building properties and construction types, including a selection and combination of appropriate FReT, as a basis for the derivation of synthetic depth-damage functions. Depending on the real exposure and the implementation level of FReT, the functions can be used and allocated in spatial damage and risk analyses. The application of the extended approach is shown in a case study in Valencia (Spain). In this way, the overall research findings improve the integration of FReT in flood risk management. They also provide useful information for advising individuals at risk, supporting the selection and implementation of FReT.
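    A depth-damage function maps inundation depth to the fraction of building value lost; comparing the curve with and without FReT gives the damage reduction for an event. The sketch below uses assumed, illustrative curve values, not the study's derived functions.

    ```python
    import numpy as np

    # Assumed synthetic depth-damage functions (fraction of value lost);
    # depths in metres. Values are illustrative, not from the study.
    depths           = np.array([0.0, 0.5, 1.0, 2.0, 3.0])
    damage_plain     = np.array([0.0, 0.15, 0.30, 0.55, 0.70])
    damage_with_fret = np.array([0.0, 0.03, 0.12, 0.40, 0.62])  # e.g. sealed walls

    def damage(depth, curve):
        """Linear interpolation on the depth-damage curve."""
        return np.interp(depth, depths, curve)

    flood_depth = 0.8   # m, assumed event depth at this building
    reduction = damage(flood_depth, damage_plain) - damage(flood_depth, damage_with_fret)
    print(round(float(reduction), 3))   # 0.156
    ```

    In the GIS workflow, such a reduction would be evaluated per building against the local flood depth raster and aggregated to area-related risk.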

  1. Advanced Toroidal Facility vacuum vessel stress analyses

    International Nuclear Information System (INIS)

    Hammonds, C.J.; Mayhall, J.A.

    1987-01-01

    The complex geometry of the Advanced Toroidal Facility (ATF) vacuum vessel required special analysis techniques in investigating the structural behavior of the design. The response of a large-scale finite element model was found for transportation and operational loading. Several computer codes and systems, including the National Magnetic Fusion Energy Computer Center Cray machines, were used in accomplishing these analyses. The work combined complex methods that taxed the limits of both the codes and the computer systems involved. Using MSC/NASTRAN cyclic-symmetry solutions permitted modeling only 1/12 of the vessel geometry while mathematically analyzing the entire vessel, allowing the greater detail and accuracy demanded by the complex geometry of the vessel. Critical buckling-pressure analyses were performed with the same model. The development, results, and problems encountered in performing these analyses are described. 5 refs., 3 figs.

  2. A History of Rotorcraft Comprehensive Analyses

    Science.gov (United States)

    Johnson, Wayne

    2013-01-01

    A history of the development of rotorcraft comprehensive analyses is presented. Comprehensive analyses are digital computer programs that calculate the aeromechanical behavior of the rotor and aircraft, bringing together the most advanced models of the geometry, structure, dynamics, and aerodynamics available in rotary wing technology. The development of the major codes of the last five decades from industry, government, and universities is described. A number of common themes observed in this history are discussed.

  3. Sensitivity in risk analyses with uncertain numbers.

    Energy Technology Data Exchange (ETDEWEB)

    Tucker, W. Troy; Ferson, Scott

    2006-06-01

    Sensitivity analysis is a study of how changes in the inputs to a model influence the results of the model. Many techniques have recently been proposed for use when the model is probabilistic. This report considers the related problem of sensitivity analysis when the model includes uncertain numbers that can involve both aleatory and epistemic uncertainty and the method of calculation is Dempster-Shafer evidence theory or probability bounds analysis. Some traditional methods for sensitivity analysis generalize directly for use with uncertain numbers, but, in some respects, sensitivity analysis for these analyses differs from traditional deterministic or probabilistic sensitivity analyses. A case study of a dike reliability assessment illustrates several methods of sensitivity analysis, including traditional probabilistic assessment, local derivatives, and a "pinching" strategy that hypothetically reduces the epistemic uncertainty or aleatory uncertainty, or both, in an input variable to estimate the reduction of uncertainty in the outputs. The prospects for applying the methods to black box models are also considered.
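    The "pinching" strategy replaces an uncertain input with a point value and measures how much the output uncertainty shrinks. A minimal interval-arithmetic sketch on an assumed monotone margin model (not the dike assessment itself; all interval values are illustrative) shows the computation.

    ```python
    def interval_output(load, strength):
        """Output interval of margin = strength - load for interval inputs;
        valid endpoint arithmetic because the model is monotone in each input."""
        (l_lo, l_hi), (s_lo, s_hi) = load, strength
        return (s_lo - l_hi, s_hi - l_lo)

    def width(iv):
        return iv[1] - iv[0]

    load, strength = (2.0, 5.0), (6.0, 10.0)        # assumed epistemic intervals
    base = interval_output(load, strength)          # (1.0, 8.0), width 7
    pinched = interval_output((3.5, 3.5), strength) # pinch load to a point value
    sensitivity = 1.0 - width(pinched) / width(base)  # fractional width reduction
    print(round(sensitivity, 3))   # 0.429
    ```

    Inputs whose pinching removes the largest fraction of output width are the ones most worth studying further to reduce epistemic uncertainty.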

  4. Microsegregation in multicomponent alloy analysed by quantitative phase-field model

    International Nuclear Information System (INIS)

    Ohno, M; Takaki, T; Shibuta, Y

    2015-01-01

    Microsegregation behaviour in a ternary alloy system has been analysed by means of quantitative phase-field (Q-PF) simulations, with particular attention directed at the influence of the tie-line shift stemming from the different liquid diffusivities of the solute elements. The Q-PF model developed for non-isothermal solidification in multicomponent alloys with non-zero solid diffusivities was applied to the analysis of microsegregation in a ternary alloy consisting of fast and slow diffusing solute elements. The accuracy of the Q-PF simulation was first verified by performing a convergence test of the segregation ratio with respect to the interface thickness. From one-dimensional analysis, it was found that the microsegregation of the slow diffusing element is reduced due to the tie-line shift. In two-dimensional simulations, a refinement of the microstructure, viz., a decrease of the secondary arm spacing, occurs at low cooling rates due to the formation of a diffusion layer of the slow diffusing element. This yields reductions in the degree of microsegregation for both the fast and slow diffusing elements. Importantly, over a wide range of cooling rates, the degree of microsegregation of the slow diffusing element is always lower than that of the fast diffusing element, which is entirely ascribable to the influence of the tie-line shift. (paper)
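    Phase-field microsegregation results are often benchmarked against the classical Scheil-Gulliver limit, which assumes no diffusion in the solid and complete mixing in the liquid. The sketch below evaluates that textbook relation; the partition coefficient and composition are assumed values, not from the paper.

    ```python
    def scheil_solid_composition(c0, k, fs):
        """Scheil-Gulliver: C_s = k * C0 * (1 - f_s)**(k - 1),
        the solid composition formed at solid fraction f_s."""
        return k * c0 * (1.0 - fs) ** (k - 1.0)

    # Assumed values: k < 1 means the solute is rejected into the liquid.
    c0, k = 1.0, 0.5
    first = scheil_solid_composition(c0, k, 0.0)   # first solid formed: k*C0
    late  = scheil_solid_composition(c0, k, 0.9)   # last solid, strongly enriched
    print(first, round(late, 3))   # 0.5 1.581
    ```

    The ratio of late to early solid composition is one common measure of the segregation that finite solid diffusivity and tie-line shifts then reduce in the full Q-PF treatment.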

  5. Chapter No.4. Safety analyses

    International Nuclear Information System (INIS)

    2002-01-01

    In 2001 the activity in the field of safety analyses was focused on verification of the safety analysis reports for NPP V-2 Bohunice and NPP Mochovce concerning the new profiled fuel, and on the probabilistic safety assessment study for NPP Mochovce. Calculation safety analyses were performed and expert reviews for internal UJD needs were elaborated. An important part of the work was also performed in solving scientific and technical tasks appointed within bilateral projects of co-operation between UJD and its international partner organisations, as well as within international projects ordered and financed by the European Commission. All these activities served as independent support for UJD in its deterministic and probabilistic safety assessment of nuclear installations. Special attention was paid to a review of the level-1 probabilistic safety assessment study for NPP Mochovce. The probabilistic safety analysis of the NPP at full-power operation was elaborated in the study, and the contribution of the technical and operational improvements to risk reduction was quantified. The core damage frequency of the reactor was calculated, and the dominant initiating events and accident sequences with the major contributions to the risk were determined. The target of the review was to determine the acceptability of the sources of input information, assumptions, models, data, analyses and obtained results, so that the probabilistic model could give a realistic picture of the NPP. The review of the study was performed by UJD in co-operation with the IAEA (IPSART mission) as well as with other external organisations that were not involved in the elaboration of the reviewed document and probabilistic model of the NPP. The review was made in accordance with the IAEA guidelines and the methodical documents of UJD and US NRC. In the field of calculation safety analyses, UJD activity was focused on the analysis of an operational event and analyses of the selected accident scenarios

  6. Developing a system dynamics model to analyse environmental problem in construction site

    Science.gov (United States)

    Haron, Fatin Fasehah; Hawari, Nurul Nazihah

    2017-11-01

    This study aims to develop a system dynamics model of a construction site to analyse the impact of environmental problems. Construction sites may cause damage to the environment and interfere with the daily lives of residents. A proper environmental management system must be used to reduce pollution, enhance biodiversity, conserve water, respect people and their local environment, measure performance, and set targets for the environment and sustainability. This study investigates the damaging impacts that normally occur during the construction stage. Environmental problems can cause costly mistakes in project implementation, either because of the environmental damage that is likely to arise during project implementation, or because of modifications that may be required subsequently in order to make the action environmentally acceptable. Thus, findings from this study have helped in significantly reducing the damaging impact on the environment and improving environmental management system performance at the construction site.
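    A system dynamics model is built from stocks and flows integrated over time. A minimal Euler-integration sketch of a single pollution stock (the rates and initial values are illustrative assumptions, not calibrated to any site) shows the basic mechanics such a model elaborates.

    ```python
    def simulate_pollution(steps, dt, emission_rate, cleanup_fraction, stock=0.0):
        """Stock-and-flow dynamics: d(stock)/dt = inflow - outflow,
        integrated with the Euler method."""
        history = [stock]
        for _ in range(steps):
            inflow = emission_rate               # construction emissions (assumed)
            outflow = cleanup_fraction * stock   # mitigation proportional to stock
            stock += dt * (inflow - outflow)
            history.append(stock)
        return history

    h = simulate_pollution(steps=200, dt=0.1, emission_rate=5.0, cleanup_fraction=0.5)
    print(round(h[-1], 2))   # approaches equilibrium 5.0 / 0.5 = 10.0
    ```

    Real construction-site models add feedback loops (e.g. complaints tightening cleanup policy), but every loop reduces to stocks and flows of this form.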

  7. Micromechanical Failure Analyses for Finite Element Polymer Modeling

    Energy Technology Data Exchange (ETDEWEB)

    CHAMBERS,ROBERT S.; REEDY JR.,EARL DAVID; LO,CHI S.; ADOLF,DOUGLAS B.; GUESS,TOMMY R.

    2000-11-01

    Polymer stresses around sharp corners and in constrained geometries of encapsulated components can generate cracks leading to system failures. Often, analysts use maximum stresses as a qualitative indicator for evaluating the strength of encapsulated component designs. Although this approach has been useful for making relative comparisons when screening prospective design changes, it has not been tied quantitatively to failure. Accurate failure models are needed for analyses to predict whether encapsulated components meet life cycle requirements. With Sandia's recently developed nonlinear viscoelastic polymer models, it has been possible to examine more accurately the local stress-strain distributions in zones of likely failure initiation, looking for physically based failure mechanisms and continuum metrics that correlate with the cohesive failure event. This study has identified significant differences between rubbery and glassy failure mechanisms that suggest reasonable alternatives for cohesive failure criteria and metrics. Rubbery failure seems best characterized by the mechanism of finite extensibility and appears to correlate with maximum strain predictions. Glassy failure, however, seems driven by cavitation and correlates with the maximum hydrostatic tension. Using these metrics, two three-point bending geometries were tested and analyzed under variable loading rates, different temperatures and comparable mesh resolution (i.e., accuracy) to make quantitative failure predictions. The resulting predictions and observations agreed well, suggesting the need for additional research. In a separate, additional study, the asymptotically singular stress state found at the tip of a rigid, square inclusion embedded within a thin, linear elastic disk was determined for uniform cooling. The singular stress field is characterized by a single stress intensity factor K{sub a}, and the applicable K{sub a} calibration relationship has been determined for both fully bonded and

  8. What is needed to eliminate new pediatric HIV infections: The contribution of model-based analyses

    Science.gov (United States)

    Doherty, Katie; Ciaranello, Andrea

    2013-01-01

    Purpose of Review Computer simulation models can identify key clinical, operational, and economic interventions that will be needed to achieve the elimination of new pediatric HIV infections. In this review, we summarize recent findings from model-based analyses of strategies for prevention of mother-to-child HIV transmission (MTCT). Recent Findings In order to achieve elimination of MTCT (eMTCT), model-based studies suggest that scale-up of services will be needed in several domains: uptake of services and retention in care (the PMTCT “cascade”), interventions to prevent HIV infections in women and reduce unintended pregnancies (the “four-pronged approach”), efforts to support medication adherence through long periods of pregnancy and breastfeeding, and strategies to make breastfeeding safer and/or shorter. Models also project the economic resources that will be needed to achieve these goals in the most efficient ways to allocate limited resources for eMTCT. Results suggest that currently recommended PMTCT regimens (WHO Option A, Option B, and Option B+) will be cost-effective in most settings. Summary Model-based results can guide future implementation science, by highlighting areas in which additional data are needed to make informed decisions and by outlining critical interventions that will be necessary in order to eliminate new pediatric HIV infections. PMID:23743788
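    The PMTCT "cascade" mentioned above multiplies retention at each step of care, so small per-step losses compound. A minimal sketch (step names and probabilities are illustrative assumptions, not values from the reviewed models) computes the fraction of mother-infant pairs completing the full cascade.

    ```python
    def cascade_completion(steps):
        """Product of per-step retention probabilities in a care cascade."""
        completed = 1.0
        for _, p in steps:
            completed *= p
        return completed

    # Assumed illustrative cascade (probabilities are NOT from the literature):
    steps = [("antenatal visit", 0.90),
             ("HIV test", 0.85),
             ("ART initiation", 0.80),
             ("retention through breastfeeding", 0.75)]
    print(round(cascade_completion(steps), 3))   # 0.459
    ```

    Even with fairly high retention at every step, under half of pairs complete the cascade in this toy example, which is why the models emphasize scale-up across all domains simultaneously.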

  9. Mixed convection and stratification phenomena in a heavy liquid metal pool

    Energy Technology Data Exchange (ETDEWEB)

    Tarantino, Mariano, E-mail: mariano.tarantino@enea.it [Italian National Agency for New Technologies, Energy and Sustainable Economic Development, C.R. ENEA Brasimone (Italy); Martelli, Daniele; Barone, Gianluca [Dipartimento di Ingegneria Civile e Industriale, University of Pisa, Largo Lucio Lazzarino, 1-56100 Pisa Italy (Italy); Di Piazza, Ivan [Italian National Agency for New Technologies, Energy and Sustainable Economic Development, C.R. ENEA Brasimone (Italy); Forgione, Nicola [Dipartimento di Ingegneria Civile e Industriale, University of Pisa, Largo Lucio Lazzarino, 1-56100 Pisa Italy (Italy)

    2015-05-15

    Highlights: • Results related to experiments reproducing a PLOHS + LOF accident in the CIRCE pool facility. • Vertical thermal stratification in a large HLM pool. • Transition from forced to natural circulation in a HLM pool under DHR conditions. • Heat transfer coefficient measurement in a HLM pin bundle. • Nusselt number calculations and comparison with correlations. - Abstract: This work presents an analysis of the first series of experimental tests performed to investigate mixed convection and stratification phenomena in the CIRCE HLM large pool. In particular, the tests concern the transition from nominal flow to the natural circulation regime typical of decay heat removal (DHR). To this purpose the CIRCE pool facility has been updated to host a suitable test section reproducing the thermal-hydraulic behaviour of a HLM pool-type reactor. The test section basically consists of an electrical bundle (FPS) made up of 37 pins arranged in a hexagonal wrapped lattice with a pitch-to-diameter ratio of 1.8. Along the FPS active length, three sections were instrumented to monitor the heat transfer coefficient along the bundle as well as the cladding temperatures at different ranks of sub-channels. This paper reports the experimental data as well as a preliminary analysis and discussion of the results, focusing on the most relevant tests of the campaign, namely Test I (48 h) and Test II (97 h). Temperatures along three sections of the FPS and at the inlet and outlet sections of the main components are reported, and the Nusselt number in the FPS sub-channels was investigated together with the void fraction in the riser. Concerning the investigation of in-pool thermal stratification phenomena, the temperatures in the whole LBE pool were monitored at different elevations and radial locations. The analysis of experimental data obtained from Tests I and II underlines the occurrence of thermal stratification phenomena in the region placed between the outlet sections of
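    Measured bundle Nusselt numbers are typically compared against published liquid-metal correlations. As one illustrative choice (the selection of this particular correlation and the Peclet number used are assumptions for the sketch, not the paper's analysis), the Mikityuk (2009) rod-bundle correlation can be evaluated at the test section's pitch-to-diameter ratio of 1.8.

    ```python
    import math

    def nusselt_mikityuk(pe, p_over_d):
        """Mikityuk (2009) correlation for liquid-metal rod bundles:
        Nu = 0.047 * (1 - exp(-3.8*(P/D - 1))) * (Pe**0.77 + 250)."""
        return 0.047 * (1.0 - math.exp(-3.8 * (p_over_d - 1.0))) * (pe ** 0.77 + 250.0)

    nu = nusselt_mikityuk(pe=500.0, p_over_d=1.8)   # Pe value assumed
    print(round(nu, 1))
    ```

    The heat transfer coefficient then follows from h = Nu * k / D_h with the LBE conductivity and the sub-channel hydraulic diameter.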

  10. Theoretical and experimental studies of heavy liquid metal thermal hydraulics. Proceedings of a technical meeting

    International Nuclear Information System (INIS)

    2006-10-01

    Through the Nuclear Energy Department's Technical Working Group on Fast Reactors (TWG-FR), the IAEA provides a forum for exchange of information on national programmes, collaborative assessments, knowledge preservation, and cooperative research in areas agreed by the Member States with fast reactor and partitioning and transmutation development programmes (e.g. accelerator driven systems (ADS)). Trends in advanced fast reactor and ADS designs and technology development are periodically summarized in status reports, symposia, and seminar proceedings prepared by the IAEA to provide all interested IAEA Member States with balanced and objective information. The use of heavy liquid metals (HLM) is rapidly spreading in different research and industrial fields. Detailed knowledge of the basic thermal-hydraulic phenomena associated with their use is a necessary step for the development of the numerical codes to be used in the engineering design of HLM components. This is particularly true in the case of lead or lead-bismuth eutectic alloy cooled fast reactors, high power particle beam targets, and the cooling of accelerator driven sub-critical cores, where the use of computational fluid dynamics (CFD) design codes is mandatory. Periodic information exchange within the frame of the TWG-FR has led to the conclusion that the experience in HLM thermal fluid dynamics, in both the theoretical/numerical and experimental fields, was limited and somewhat dispersed. This is the case, e.g., when considering turbulent exchange phenomena, free-surface problems, and two-phase flows. Consequently, Member States representatives participating in the 35th Annual Meeting of the TWG-FR (Karlsruhe, Germany, 22-26 April 2002) recommended holding a technical meeting (TM) on Theoretical and Experimental Studies of Heavy Liquid Metal Thermal Hydraulics. Following this recommendation, the IAEA has convened the Technical Meeting on Theoretical and Experimental Studies of

  11. An Investigation into the Prediction of in Vivo Clearance for a Range of Flavin-containing Monooxygenase Substrates.

    Science.gov (United States)

    Jones, Barry C; Srivastava, Abhishek; Colclough, Nicola; Wilson, Joanne; Reddy, Venkatesh Pilla; Amberntsson, Sara; Li, Danxi

    2017-10-01

    Flavin-containing monooxygenases (FMO) are metabolic enzymes mediating the oxygenation of nucleophilic atoms such as nitrogen, sulfur, phosphorus, and selenium. These enzymes share similar properties with the cytochrome P450 system but can be differentiated through heat inactivation and selective substrate inhibition by methimazole. This study investigated 10 compounds with varying degrees of FMO involvement to determine the nature of the correlation between human in vitro and in vivo unbound intrinsic clearance. To confirm and quantify the extent of FMO involvement, six of the compounds were investigated in human liver microsomal (HLM) in vitro assays using heat inactivation and methimazole substrate inhibition. Under these conditions FMO contribution varied from 21% (imipramine) to 96% (itopride). Human hepatocyte and HLM intrinsic clearance (CLint) data were scaled using standard methods to determine the predicted unbound intrinsic clearance (predicted CLint,u) for each compound. This was compared with observed unbound intrinsic clearance (observed CLint,u) values back-calculated from human pharmacokinetic studies. A good correlation was observed between the predicted and observed CLint,u using hepatocytes (R² = 0.69), with 8 of the 10 compounds investigated within or close to a factor of 2. For HLM the in vitro-in vivo correlation was maintained (R² = 0.84) but the accuracy was reduced, with only 3 out of 10 compounds falling within, or close to, twofold. This study demonstrates that human hepatocytes and HLM can be used with standard scaling approaches to predict the human in vivo clearance for FMO substrates. Copyright © 2017 by The American Society for Pharmacology and Experimental Therapeutics.
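    The "standard scaling" step referred to above conventionally multiplies microsomal CLint by microsomal protein per gram of liver (MPPGL) and liver weight, then applies a liver model such as the well-stirred model. The sketch below uses commonly cited physiological values as assumptions; they are not the study's parameters, and the input CLint and fu are invented for illustration.

    ```python
    def scale_clint(clint_ul_min_mg, mppgl=40.0, liver_g=1500.0):
        """Scale microsomal CLint (uL/min/mg protein) to whole-liver
        CLint (mL/min), with assumed MPPGL and liver weight."""
        return clint_ul_min_mg * mppgl * liver_g / 1000.0   # uL -> mL

    def well_stirred_cl(clint_ml_min, fu, q_h=1450.0):
        """Well-stirred liver model: CLh = Qh * fu * CLint / (Qh + fu * CLint),
        with an assumed hepatic blood flow Qh (mL/min)."""
        return q_h * fu * clint_ml_min / (q_h + fu * clint_ml_min)

    clint_liver = scale_clint(10.0)            # assumed in vitro CLint
    clh = well_stirred_cl(clint_liver, fu=0.1) # assumed unbound fraction
    print(round(clint_liver, 1), round(clh, 1))   # 600.0 57.6
    ```

    Comparing such predicted values with clearances back-calculated from pharmacokinetic data is what yields the in vitro-in vivo correlations reported above.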

  12. Mechanical analyses on the digital behaviour of the Tokay gecko (Gekko gecko) based on a multi-level directional adhesion model

    OpenAIRE

    Wu, Xuan; Wang, Xiaojie; Mei, Tao; Sun, Shaoming

    2015-01-01

    This paper proposes a multi-level hierarchical model for the Tokay gecko (Gekko gecko) adhesive system and analyses the digital behaviour of G. gecko at the macro/meso scale. The model describes the structures of G. gecko's adhesive system from the nano-level spatulae to the sub-millimetre-level lamella. The G. gecko's seta is modelled as an inextensible fibril based on Euler's elastica theorem. Considering the side contact of the spatular pads of the seta on the flat and rigid subst...

  13. Continuous spatial modelling to analyse planning and economic consequences of offshore wind energy

    International Nuclear Information System (INIS)

    Moeller, Bernd

    2011-01-01

    Offshore wind resources appear abundant, but technological, economic and planning issues significantly reduce the theoretical potential. While massive investments are anticipated and planners and developers are scouting for viable locations and considering risk and impact, few studies simultaneously address potentials and costs together with the consequences of proposed planning in an analytical and continuous manner, and for larger areas at once. The consequences may be investments that fall short of efficiency and equity, and failed planning routines. A spatial resource economic model for the Danish offshore waters is presented, used to analyse area constraints, technological risks, priorities for development and the opportunity costs of maintaining competing area uses. The SCREAM-offshore wind model (Spatially Continuous Resource Economic Analysis Model) uses raster-based geographical information systems (GIS) and considers numerous geographical factors, technology and cost data as well as planning information. Novel elements are weighted visibility analysis and geographically recorded shipping movements as variable constraints. A number of scenarios are described, which include restrictions on the use of offshore areas as well as alternative uses such as conservation and tourism. The results comprise maps, tables and cost-supply curves for further resource economic assessment and policy analysis. A discussion of parameter variations exposes the uncertainties of technology development, environmental protection and competing area uses, and illustrates how such models might assist in improving public planning while procuring decision bases for the political process. The method can be adapted to different research questions and is largely applicable in other parts of the world. - Research Highlights: → A model for the spatially continuous evaluation of offshore wind resources. → Assessment of spatial constraints, costs and resources for each location. → Planning tool for
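    Raster-based resource models of this kind mask out constrained cells and sort the remainder by unit cost to build a cost-supply curve. A minimal numpy sketch (the grid values, capacities, and exclusion mask are illustrative assumptions, not SCREAM data) shows the mechanics.

    ```python
    import numpy as np

    def cost_supply_curve(cost, capacity, excluded):
        """Sort feasible raster cells by unit cost; return cumulative
        capacity and the matching marginal cost (a cost-supply curve)."""
        ok = ~excluded
        order = np.argsort(cost[ok])
        costs = cost[ok][order]
        supply = np.cumsum(capacity[ok][order])
        return supply, costs

    cost     = np.array([[50., 70.], [60., 90.]])   # EUR/MWh per cell (assumed)
    capacity = np.array([[10., 10.], [10., 10.]])   # MW per cell (assumed)
    excluded = np.array([[False, True], [False, False]])  # e.g. a shipping lane
    supply, costs = cost_supply_curve(cost, capacity, excluded)
    print(supply.tolist(), costs.tolist())   # [10.0, 20.0, 30.0] [50.0, 60.0, 90.0]
    ```

    Scenario analysis then amounts to varying the exclusion mask (conservation, tourism, visibility) and comparing the resulting curves.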

  14. The Impact of Sustainable Development Technology on a Small Economy-The Case of Energy-Saving Technology.

    Science.gov (United States)

    Chen, Xiding; Huang, Qinghua; Huang, Weilun; Li, Xue

    2018-02-08

    We investigated the impact of a sustainable development technology on the macroeconomic variables in a small economy utilizing a case study with a stochastically improving energy saving technology and a stochastically increasing energy price. The results show the technological displacement effects of energy saving technology are stronger, but there are more ambiguous instantaneous returns to physical capital. However, the energy saving technology's displacement effects might not affect the conditions under which the Harberger-Laursen-Metzler (HLM) effect holds. The effects of rising energy prices on bonds are stronger, and there are more ambiguous instantaneous returns, but the conditions under which the HLM effect holds are different.

  17. Faire territoire au quotidien dans les grands ensembles HLM

    Directory of Open Access Journals (Sweden)

    Denis la Mache

    2012-05-01

    Full Text Available This article offers an anthropological reading of the way inhabitants of large housing estates on the urban periphery delimit, administer, and materially and symbolically transform the spaces and places of their daily lives in order to make territory. We focus on the fabrication of these spatial entities, which each individual takes the liberty of using in a singular way each day and surrounds with a specific symbolic field that guarantees identity. These "socio-spatial fabrications" are examined through empirical research conducted among residents of two study sites located on the peripheries of medium-sized towns.

  18. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    Science.gov (United States)

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…
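
    Retrospective power for a fixed-effect meta-analysis can be approximated from the pooled standard error. The sketch below assumes equal sampling variance across studies and uses an illustrative effect size; the numbers are not taken from the reviewed meta-analyses.

    ```python
    from math import sqrt, erf

    def phi(x):
        """Standard normal CDF."""
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def meta_power(delta, per_study_var, k, z_crit=1.96):
        """Approximate power of a fixed-effect meta-analytic z-test.

        delta         : assumed true common effect size
        per_study_var : sampling variance of each study's estimate
                        (taken equal across studies for simplicity)
        k             : number of studies pooled by inverse-variance weighting
        Ignores the negligible lower rejection region of the two-sided test.
        """
        se = sqrt(per_study_var / k)   # SE of the pooled estimate
        return 1.0 - phi(z_crit - delta / se)

    power_5 = meta_power(0.3, 0.04, 5)    # around 0.92 with these inputs
    power_20 = meta_power(0.3, 0.04, 20)  # adding studies pushes power toward 1
    ```

    The same logic underlies retrospective power estimates: plug the observed effect size and study variances back in and ask how detectable that effect was.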

  19. Correlation of Klebsiella pneumoniae comparative genetic analyses with virulence profiles in a murine respiratory disease model.

    Directory of Open Access Journals (Sweden)

    Ramy A Fodah

    Full Text Available Klebsiella pneumoniae is a bacterial pathogen of worldwide importance and a significant contributor to multiple disease presentations associated with both nosocomial and community-acquired disease. ATCC 43816 is a well-studied K. pneumoniae strain which is capable of causing an acute respiratory disease in surrogate animal models. In this study, we performed sequencing of the ATCC 43816 genome to support future efforts characterizing genetic elements required for disease. Furthermore, we performed comparative genetic analyses with the previously sequenced genomes of NTUH-K2044 and MGH 78578 to gain an understanding of the conservation of known virulence determinants amongst the three strains. We found that ATCC 43816 and NTUH-K2044 both possess the known virulence determinant for yersiniabactin, as well as a Type 4 secretion system (T4SS), a CRISPR system, and an allantoin catabolism locus, all absent from MGH 78578. While both NTUH-K2044 and MGH 78578 are clinical isolates, little is known about the disease potential of these strains in cell culture and animal models. Thus, we also performed functional analyses in the murine macrophage cell lines RAW264.7 and J774A.1 and found that MGH 78578 (K52 serotype) was internalized at higher levels than ATCC 43816 (K2) and NTUH-K2044 (K1), consistent with previous characterization of the antiphagocytic properties of K1 and K2 serotype capsules. We also examined the three K. pneumoniae strains in a novel BALB/c respiratory disease model and found that ATCC 43816 and NTUH-K2044 are highly virulent (LD50 < 100 CFU) while MGH 78578 is relatively avirulent.

  20. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    Science.gov (United States)

    Rallapalli, Varsha H.

    2016-01-01

    Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRenv) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S) and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRenv has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding to speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRenv. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating feasibility of neural SNRenv computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence in individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of the neural SNRenv in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.

  1. Promoting mobility and reducing length of stay in hospitalized general medicine patients: A quality-improvement project.

    Science.gov (United States)

    Hoyer, Erik H; Friedman, Michael; Lavezza, Annette; Wagner-Kosmakos, Kathleen; Lewis-Cherry, Robin; Skolnik, Judy L; Byers, Sherrie P; Atanelov, Levan; Colantuoni, Elizabeth; Brotman, Daniel J; Needham, Dale M

    2016-05-01

    To determine whether a multidisciplinary mobility promotion quality-improvement (QI) project would increase patient mobility and reduce hospital length of stay (LOS). Implemented using a structured QI model, the project took place between March 1, 2013 and March 1, 2014 on 2 general medicine units in a large academic medical center. There were 3352 patients admitted during the QI project period. The Johns Hopkins Highest Level of Mobility (JH-HLM) scale, an 8-point ordinal scale ranging from bed rest (score = 1) to ambulating ≥250 feet (score = 8), was used to quantify mobility. Changes in JH-HLM scores were compared for the first 4 months of the project (ramp-up phase) versus 4 months after project completion (post-QI phase) using generalized estimating equations. We compared the relative change in median LOS for the project months versus 12 months prior among the QI units, using multivariable linear regression analysis adjusting for 7 demographic and clinically relevant variables. Comparing the ramp-up versus post-QI phases, the proportion of patients reaching JH-HLM's ambulation status increased from 43% to 70%, and the proportion with improved mobility scores between admission and discharge increased from 32% to 45%. More complex patients with longer expected LOS (>7 days) showed a significantly greater adjusted median reduction in LOS of 1.11 days (95% CI: -1.53 to -0.65). Improved mobility was not associated with an increase in injurious falls compared to 12 months prior on the QI units (P = 0.73). Active prevention of a decline in physical function that commonly occurs during hospitalization may be achieved with a structured QI approach. In an adult medicine population, our QI project was associated with improved mobility, and this may have contributed to a reduction in LOS, particularly for more complex patients with longer expected hospital stay. Journal of Hospital Medicine 2016. © 2016 Society of Hospital Medicine.

  2. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or
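
    A minimal way to expose temporal dynamics of performance, in the spirit of this abstract, is to evaluate a metric such as the Nash-Sutcliffe efficiency in sliding windows rather than once globally. `windowed_nse` below is a generic sketch, not the authors' method.

    ```python
    import numpy as np

    def windowed_nse(obs, sim, window):
        """Nash-Sutcliffe efficiency of `sim` against `obs` in sliding windows.

        Returns one NSE value per window position; dips below the global
        score localize periods where the model performs poorly.
        Assumes `obs` is not constant within any window.
        """
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        scores = []
        for start in range(len(obs) - window + 1):
            o = obs[start:start + window]
            s = sim[start:start + window]
            scores.append(1.0 - np.sum((s - o) ** 2) / np.sum((o - o.mean()) ** 2))
        return np.array(scores)
    ```

    Plotting the resulting series against time reveals error episodes that a single global performance number hides.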

  3. Forced vibration tests and simulation analyses of a nuclear reactor building. Part 2: simulation analyses

    International Nuclear Information System (INIS)

    Kuno, M.; Nakagawa, S.; Momma, T.; Naito, Y.; Niwa, M.; Motohashi, S.

    1995-01-01

    Forced vibration tests of a BWR-type reactor building, Hamaoka Unit 4, were performed. Valuable data on the dynamic characteristics of the soil-structure interaction system were obtained through the tests. Simulation analyses of the fundamental dynamic characteristics of the soil-structure system were conducted, using a basic lumped mass soil-structure model (lattice model), and strong correlation with the measured data was obtained. Furthermore, detailed simulation models were employed to investigate the effects of simultaneously induced vertical response and response of the adjacent turbine building on the lateral response of the reactor building. (author). 4 refs., 11 figs

  4. Models for regionalizing economic data and their applications within the scope of forensic disaster analyses

    Science.gov (United States)

    Schmidt, Hanns-Maximilian; Wiens, Marcus; Schultmann, Frank

    2015-04-01

    The impact of natural hazards on the economic system can be observed in many different regions all over the world. Once the local economic structure is hit by an event, direct costs instantly occur. However, the disturbance on a local level (e.g. parts of a city or industries along a river bank) might also cause monetary damages in other, indirectly affected sectors. If the impact of an event is strong, these damages are likely to cascade and spread even on an international scale (e.g. the eruption of Eyjafjallajökull and its impact on the automotive sector in Europe). In order to determine these special impacts, one has to gain insights into the directly hit economic structure before being able to calculate these side effects. Especially regarding the development of a model used for near real-time forensic disaster analyses, any simulation needs to be based on data that is rapidly available or easily computed. Therefore, we investigated commonly used or recently discussed methodologies for regionalizing economic data. Surprisingly, even for German federal states there is no official input-output data available that can be used, although it might provide detailed figures concerning economic interrelations between different industry sectors. In the case of highly developed countries, such as Germany, we focus on models for regionalizing the nationwide input-output table, which is usually available from the national statistical office. However, when it comes to developing countries (e.g. South-East Asia) the data quality and availability is usually much poorer. In this case, other sources need to be found for the proper assessment of regional economic performance. We developed an indicator-based model that can fill this gap because of its flexibility regarding the level of aggregation and the composability of different input parameters.
    Our poster presentation presents a literature review and a summary of potential models that seem useful for this specific task.

  5. PWR plant transient analyses using TRAC-PF1

    International Nuclear Information System (INIS)

    Ireland, J.R.; Boyack, B.E.

    1984-01-01

    This paper describes some of the pressurized water reactor (PWR) transient analyses performed at Los Alamos for the US Nuclear Regulatory Commission using the Transient Reactor Analysis Code (TRAC-PF1). Many of the transient analyses performed directly address current PWR safety issues. Included in this paper are examples of two safety issues addressed by TRAC-PF1. These examples are pressurized thermal shock (PTS) and feed-and-bleed cooling for Oconee-1. The calculations performed were plant specific in that details of both the primary and secondary sides were modeled in addition to models of the plant integrated control systems. The results of these analyses show that for these two transients, the reactor cores remained covered and cooled at all times, posing no real threat to the reactor system or to the public.

  6. School effects on non-verbal intelligence and nutritional status in rural Zambia.

    Science.gov (United States)

    Hein, Sascha; Tan, Mei; Reich, Jodi; Thuma, Philip E; Grigorenko, Elena L

    2016-02-01

    This study uses hierarchical linear modeling (HLM) to examine the school factors (i.e., related to school organization and teacher and student body) associated with non-verbal intelligence (NI) and nutritional status (i.e., body mass index; BMI) of 4204 3rd to 7th graders in rural areas of Southern Province, Zambia. Results showed that 23.5% and 7.7% of the NI and BMI variance, respectively, were conditioned by differences between schools. The set of 14 school factors accounted for 58.8% and 75.9% of the between-school differences in NI and BMI, respectively. Grade-specific HLM yielded higher between-school variation of NI (41%) and BMI (14.6%) for students in grade 3 compared to grades 4 to 7. School factors showed a differential pattern of associations with NI and BMI across grades. The distance to a health post and teacher's teaching experience were the strongest predictors of NI (particularly in grades 4, 6 and 7); the presence of a preschool was linked to lower BMI in grades 4 to 6. Implications for improving access and quality of education in rural Zambia are discussed.
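
    The between-school share of variance reported here (23.5% for NI) is an intraclass correlation from a null, intercept-only HLM. A minimal sketch using the one-way ANOVA estimator on balanced synthetic data, rather than the study's maximum-likelihood fit:

    ```python
    import numpy as np

    def icc_oneway(groups):
        """Intraclass correlation ICC(1) via one-way random-effects ANOVA.

        groups: 2-D array, one row per school, one column per pupil
        (balanced design assumed). This mirrors the null HLM: the ICC is
        the share of outcome variance lying between schools.
        """
        k, n = groups.shape
        grand = groups.mean()
        msb = n * np.sum((groups.mean(axis=1) - grand) ** 2) / (k - 1)
        msw = np.sum((groups - groups.mean(axis=1, keepdims=True)) ** 2) / (k * (n - 1))
        return (msb - msw) / (msb + (n - 1) * msw)

    # synthetic check: school-effect sd 1, pupil-residual sd 2 -> true ICC = 0.2
    rng = np.random.default_rng(0)
    scores = rng.normal(0, 1, size=(200, 1)) + rng.normal(0, 2, size=(200, 20))
    est = icc_oneway(scores)   # close to 0.2
    ```

    A large ICC, as found for NI here, is what justifies modeling school-level predictors at their own level instead of pooling all pupils.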

  7. Assessment applicability of selected models of multiple discriminant analyses to forecast financial situation of Polish wood sector enterprises

    Directory of Open Access Journals (Sweden)

    Adamowicz Krzysztof

    2017-03-01

    Full Text Available In the last three decades forecasting bankruptcy of enterprises has been an important and difficult problem, used as an impulse for many research projects (Ribeiro et al. 2012). At present many methods of bankruptcy prediction are available. In view of the specific character of economic activity in individual sectors, specialised methods adapted to a given branch of industry are being used increasingly often. For this reason an important scientific problem is related with the indication of an appropriate model or group of models to prepare forecasts for a given branch of industry. Thus research has been conducted to select an appropriate model of Multiple Discriminant Analysis (MDA), best adapted to forecasting changes in the wood industry. This study analyses 10 prediction models popular in Poland. Effectiveness of the model proposed by Jagiełło, developed for all industrial enterprises, may be regarded as accidental. That model is not adapted to predict financial changes in wood sector companies in Poland.
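
    An MDA bankruptcy model ultimately reduces to a linear discriminant function Z = w·x with a cut-off separating healthy from distressed firms. A sketch of Fisher's linear discriminant on hypothetical financial ratios; the numbers are invented for illustration, not drawn from the Polish wood-sector data:

    ```python
    import numpy as np

    # Hypothetical firm-level ratios (e.g. liquidity, profitability):
    # rows are firms, columns are ratios -- illustrative values only.
    solvent = np.array([[2.1, 0.15], [1.8, 0.12], [2.4, 0.18], [2.0, 0.10]])
    distressed = np.array([[0.9, -0.05], [1.1, 0.01], [0.8, -0.02], [1.0, 0.00]])

    def fisher_discriminant(a, b):
        """Fisher's linear discriminant: w = Sw^-1 (mean_a - mean_b)."""
        ma, mb = a.mean(axis=0), b.mean(axis=0)
        # pooled within-class scatter matrix
        sw = np.cov(a.T) * (len(a) - 1) + np.cov(b.T) * (len(b) - 1)
        w = np.linalg.solve(sw, ma - mb)
        # midpoint of the two projected class means as cut-off
        threshold = 0.5 * ((a @ w).mean() + (b @ w).mean())
        return w, threshold

    w, c = fisher_discriminant(solvent, distressed)
    # classify a firm x: Z = x @ w; Z > c -> solvent, else distressed
    ```

    Published MDA models such as Altman-style Z-scores fix `w` and `c` from a calibration sample; the point of the article is that such coefficients do not transfer automatically to another branch of industry.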

  8. Mental model mapping as a new tool to analyse the use of information in decision-making in integrated water management

    NARCIS (Netherlands)

    Kolkman, Rien; Kok, Matthijs; van der Veen, A.

    2005-01-01

    The solution of complex, unstructured problems is faced with policy controversy and dispute, unused and misused knowledge, project delay and failure, and decline of public trust in governmental decisions. Mental model mapping (also called concept mapping) is a technique to analyse these difficulties

  9. Balmorel: A model for analyses of the electricity and CHP markets in the Baltic Sea Region. Appendices

    International Nuclear Information System (INIS)

    Ravn, H.F.; Munksgaard, J.; Ramskov, J.; Grohnheit, P.E.; Larsen, H.V.

    2001-03-01

    This report describes the motivations behind the development of the Balmorel model as well as the model itself. The purpose of the Balmorel project is to develop a model for analyses of the power and CHP sectors in the Baltic Sea Region. The model is directed towards the analysis of relevant policy questions to the extent that they contain substantial international aspects. The model is developed in response to the trend towards internationalisation in the electricity sector. This trend is seen in increased international trade of electricity, in investment strategies among producers and otherwise. Also environmental considerations and policies are to an increasing extent gaining an international perspective in relation to greenhouse gases. Further, the ongoing process of deregulation of the energy sector highlights this and contributes to the need for overview and analysis. A guiding principle behind the construction of the model has been that it may serve as a means of communication in relation to the policy issues that already are or that may become important for the region. Therefore, emphasis has been put on documentation, transparency and flexibility of the model. This is achieved in part by formulating the model in a high level modelling language, and by making the model, including data, available on the internet. Potential users of the Balmorel model include research institutions, consulting companies, energy authorities, transmission system operators and energy companies. (au)

  10. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    Science.gov (United States)

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

    Certain contaminants may travel faster through soils when they are sorbed to subsurface colloidal particles. Indeed, subsurface colloids may act as carriers of some contaminants accelerating their translocation through the soil into the water table. This phenomenon is known as colloid-facilitated contaminant transport. It plays a significant role in contaminant transport in soils and has been recognized as a source of groundwater contamination. From a mechanistic point of view, the attachment/detachment of the colloidal particles from the soil matrix or from the air-water interface and the straining process may modify the hydraulic properties of the porous media. Šimůnek et al. (2006) developed a model that can simulate the colloid-facilitated contaminant transport in variably saturated porous media. The model is based on the solution of a modified advection-dispersion equation that accounts for several processes, namely: straining, exclusion and attachment/detachment kinetics of colloids through the soil matrix. The solutions of these governing partial differential equations are obtained using a standard Galerkin-type, linear finite element scheme, implemented in the HYDRUS-2D/3D software (Šimůnek et al., 2012). Modeling colloid transport through the soil and the interaction of colloids with the soil matrix and other contaminants is complex and requires the characterization of many model parameters. In practice, it is very difficult to assess actual transport parameter values, so they are often calibrated. However, before calibration, one needs to know which parameters have the greatest impact on output variables. This kind of information can be obtained through a sensitivity analysis of the model. The main objective of this work is to perform local and global sensitivity analyses of the colloid-facilitated contaminant transport module of HYDRUS. Sensitivity analysis was performed in two steps: (i) we applied a screening method based on Morris' elementary
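
    The screening step can be illustrated with a self-contained version of Morris' elementary-effects method. The toy function below merely stands in for the HYDRUS colloid-transport module, which is far too expensive to inline here; parameter names and counts are illustrative.

    ```python
    import numpy as np

    def morris_screening(model, n_params, n_traj=50, delta=0.1, seed=1):
        """One-at-a-time Morris screening on the unit hypercube.

        Returns mu_star (mean absolute elementary effect: overall influence)
        and sigma (std of effects: non-linearity / interactions) per parameter.
        """
        rng = np.random.default_rng(seed)
        effects = [[] for _ in range(n_params)]
        for _ in range(n_traj):
            x = rng.uniform(0, 1 - delta, size=n_params)
            for i in rng.permutation(n_params):   # random order each trajectory
                step = np.zeros(n_params)
                step[i] = delta
                effects[i].append((model(x + step) - model(x)) / delta)
                x = x + step                      # walk on, one parameter at a time
        effects = np.array(effects)
        return np.abs(effects).mean(axis=1), effects.std(axis=1)

    # toy stand-in for the transport model: one strong, one weak, one inert input
    toy = lambda p: 4.0 * p[0] + 0.5 * p[1] + 0.0 * p[2]
    mu_star, sigma = morris_screening(toy, 3)
    ```

    Parameters with small `mu_star` can be fixed before the expensive global (e.g. variance-based) analysis, which is exactly the role of the screening step described above.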

  11. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 3

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Corum, J.M.; Bryson, J.W.

    1975-06-01

    The third in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: the experimental data provide design information directly applicable to nozzles in cylindrical vessels; and the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 3 had a 10 in. OD and the nozzle had a 1.29 in. OD, giving a d0/D0 ratio of 0.129. The OD/thickness ratios for the cylinder and the nozzle were 50 and 7.68 respectively. Thirteen separate loading cases were analyzed. In each, one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for all the loadings were obtained using 158 three-gage strain rosettes located on the inner and outer surfaces. The loading cases were also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  12. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 4

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Bryson, J.W.

    1975-06-01

    The last in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models in the series are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: (1) the experimental data provide design information directly applicable to nozzles in cylindrical vessels, and (2) the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 4 had an outside diameter of 10 in., and the nozzle had an outside diameter of 1.29 in., giving a d0/D0 ratio of 0.129. The OD/thickness ratios were 50 and 20.2 for the cylinder and nozzle respectively. Thirteen separate loading cases were analyzed. For each loading condition one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for each of the 13 loadings were obtained using 157 three-gage strain rosettes located on the inner and outer surfaces. Each of the 13 loading cases was also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  13. Gamma-ray pulsar physics: gap-model populations and light-curve analyses in the Fermi era

    International Nuclear Information System (INIS)

    Pierbattista, M.

    2010-01-01

    This thesis research focuses on the study of the young and energetic isolated ordinary pulsar population detected by the Fermi gamma-ray space telescope. We compared the expectations of four emission models with the LAT data. We found that all the models fail to reproduce the LAT detections, in particular the large number of high-Ė objects observed. This inconsistency is not model dependent. A discrepancy between the radio-loud/radio-quiet object ratios was also found between the observed and predicted samples. The Lγ ∝ Ė^0.5 relation is robustly confirmed by all the assumed models, with particular agreement in the slot gap (SG) case. On luminosity grounds, the intermediate-altitude emission of the two-pole caustic SG model is favoured. The beaming factor fΩ shows an Ė dependency that is slightly visible in the SG case. Estimates of the pulsar orientations have been obtained to explain the simultaneous gamma-ray and radio light curves. By analysing the solutions we found a relation between the observed energy cutoff and the width of the emission slot gap. This relation has been theoretically predicted. A possible alignment of the magnetic obliquity α with time is rejected, for all the models, on timescales of the order of 10^6 years. The light-curve morphology study shows that the outer-magnetosphere gap emission (OG) is favoured to explain the observed radio-gamma lag. The light-curve moment studies (symmetry and sharpness), on the contrary, favour a two-pole caustic SG emission. All the model predictions suggest a different magnetic field layout, with a hybrid two-pole caustic and intermediate-altitude emission, to explain both the pulsar luminosity and light-curve morphology. The low-magnetosphere emission mechanism of the polar cap model is systematically rejected by all the tests done. (author) [fr

  14. In vitro characterization of potential CYP- and UGT-derived metabolites of the psychoactive drug 25B-NBOMe using LC-high resolution MS.

    Science.gov (United States)

    Boumrah, Yacine; Humbert, Luc; Phanithavong, Melodie; Khimeche, Kamel; Dahmani, Abdallah; Allorge, Delphine

    2016-02-01

    One of the main challenges posed by the emergence of new psychoactive substances is their identification in human biological samples. Trying to detect the parent drug could lead to false-negative results when the delay between consumption and sampling has been too long. The identification of their metabolites could then improve their detection window in biological matrices. Oxidative metabolism by cytochromes P450 and glucuronidation are two major detoxification pathways in humans. In order to characterize possible CYP- and UGT-dependent metabolites of the 2-(4-bromo-2,5-dimethoxyphenyl)-N-[(2-methoxyphenyl)methyl]ethanamine (25B-NBOMe), a synthetic psychoactive drug, analyses of human liver microsome (HLM) incubates were performed using an ultra-high performance liquid chromatography system coupled with a quadrupole-time-of-flight mass spectrometry detector (UHPLC-Q-TOF/MS). On-line analyses were performed using a Waters OASIS HLB column (30 x 2.1 mm, 20 µm) for the automatic sample loading and a Waters ACQUITY HSS C18 column (150 x 2 mm, 1.8 µm) for the chromatographic separation. Twenty-one metabolites, consisting of 12 CYP-derived and 9 UGT-derived metabolites, were identified. O-Desmethyl metabolites were the most abundant compounds after the phase I process, which appears to be in accordance with data from previously published NBOMe-intoxication case reports. Although other important metabolic transformations, such as sulfation, acetylation, methylation or glutathione conjugation, were not studied and artefactual metabolites might have been produced during the HLM incubation process, the record of all the metabolite MS spectra in our library should enable us to characterize relevant metabolites of 25B-NBOMe and allow us to detect 25B-NBOMe users. Copyright © 2015 John Wiley & Sons, Ltd.
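
    In practice such metabolites are located in high-resolution MS data by searching characteristic mass shifts from the parent ion. A sketch for 25B-NBOMe (C18H22BrNO3) using standard monoisotopic atomic masses; the shift values are general chemistry, not data from this study:

    ```python
    # Monoisotopic atomic masses in u (standard IUPAC values)
    MASS = {"C": 12.0, "H": 1.0078250319, "N": 14.0030740052,
            "O": 15.9949146221, "Br": 78.9183376}

    def mono_mass(formula):
        """Monoisotopic mass from an {element: count} dict."""
        return sum(MASS[el] * n for el, n in formula.items())

    # 25B-NBOMe, C18H22BrNO3
    parent = mono_mass({"C": 18, "H": 22, "Br": 1, "N": 1, "O": 3})
    proton = 1.007276467                       # mass of H+

    mh = parent + proton                       # parent [M+H]+ ~ 380.0856
    o_desmethyl = mh - mono_mass({"C": 1, "H": 2})          # -CH2: -14.0157
    glucuronide = mh + mono_mass({"C": 6, "H": 8, "O": 6})  # +C6H8O6: +176.0321
    ```

    Screening extracted-ion chromatograms at these theoretical m/z values (within a few ppm) is how candidate phase I and phase II metabolites are flagged before MS/MS confirmation.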

  15. Congruence between distribution modelling and phylogeographical analyses reveals Quaternary survival of a toadflax species (Linaria elegans) in oceanic climate areas of a mountain ring range.

    Science.gov (United States)

    Fernández-Mazuecos, Mario; Vargas, Pablo

    2013-06-01

    · The role of Quaternary climatic shifts in shaping the distribution of Linaria elegans, an Iberian annual plant, was investigated using species distribution modelling and molecular phylogeographical analyses. Three hypotheses are proposed to explain the Quaternary history of its mountain ring range. · The distribution of L. elegans was modelled using the maximum entropy method and projected to the last interglacial and to the last glacial maximum (LGM) using two different paleoclimatic models: the Community Climate System Model (CCSM) and the Model for Interdisciplinary Research on Climate (MIROC). Two nuclear and three plastid DNA regions were sequenced for 24 populations (119 individuals sampled). Bayesian phylogenetic, phylogeographical, dating and coalescent-based population genetic analyses were conducted. · Molecular analyses indicated the existence of northern and southern glacial refugia and supported two routes of post-glacial recolonization. These results were consistent with the LGM distribution as inferred under the CCSM paleoclimatic model (but not under the MIROC model). Isolation between two major refugia was dated back to the Riss or Mindel glaciations, > 100 kyr before present (bp). · The Atlantic distribution of inferred refugia suggests that the oceanic (buffered)-continental (harsh) gradient may have played a key and previously unrecognized role in determining Quaternary distribution shifts of Mediterranean plants. © 2013 The Authors. New Phytologist © 2013 New Phytologist Trust.

  16. A new approach to analyse longitudinal epidemiological data with an excess of zeros

    NARCIS (Netherlands)

    Spriensma, Alette S.; Hajos, Tibor R. S.; de Boer, Michiel R.; Heymans, Martijn W.; Twisk, Jos W. R.

    2013-01-01

    Background: Within longitudinal epidemiological research, 'count' outcome variables with an excess of zeros frequently occur. Although these outcomes are frequently analysed with a linear mixed model or a Poisson mixed model, a two-part mixed model would be better suited to analysing outcome variables with an excess of zeros.
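The two-part idea described above can be illustrated with a minimal cross-sectional sketch (the counts below are hypothetical, and the study's actual model additionally includes random effects for the longitudinal structure): one part models whether a count is non-zero, the other models the size of the positive counts, and the two parts recombine exactly to the overall mean.

```python
# Two-part (hurdle) decomposition of a count outcome with excess zeros.
# Part 1: probability of a non-zero count; Part 2: mean of the positive counts.
# E[Y] = P(Y > 0) * E[Y | Y > 0], so the parts recombine to the overall mean.

counts = [0, 0, 0, 0, 0, 0, 2, 1, 0, 3, 0, 5, 1, 0, 2]  # hypothetical outcome

n = len(counts)
positives = [y for y in counts if y > 0]

p_nonzero = len(positives) / n                    # part 1: P(Y > 0)
mean_positive = sum(positives) / len(positives)   # part 2: E[Y | Y > 0]

two_part_mean = p_nonzero * mean_positive
overall_mean = sum(counts) / n

print(f"P(Y>0) = {p_nonzero:.3f}, E[Y|Y>0] = {mean_positive:.3f}")
print(f"recombined mean = {two_part_mean:.3f}, sample mean = {overall_mean:.3f}")
```

The practical gain over a single Poisson model is that excess zeros and the intensity of non-zero counts get separate regression parts, each of which can carry its own covariates.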

  17. Models for transient analyses in advanced test reactors

    International Nuclear Information System (INIS)

    Gabrielli, Fabrizio

    2011-01-01

    Several strategies are being developed worldwide to respond to the world's increasing demand for electricity. Modern nuclear facilities are under construction or in the planning phase. In parallel, advanced nuclear reactor concepts are being developed to achieve sustainability, minimize waste, and conserve uranium resources. To optimize the performance of components (fuels and structures) of these systems, significant efforts are under way to design new Material Test Reactor facilities in Europe which employ water as a coolant. Safety provisions and the analyses of severe accidents are key points in the determination of sound designs. In this frame, the SIMMER multiphysics code system is a very attractive tool, as it can simulate transients and phenomena within and beyond the design basis in a tightly coupled way. This thesis is primarily focused upon the extension of the SIMMER multigroup cross-section processing scheme (based on the Bondarenko method) for a proper heterogeneity treatment in the analyses of water-cooled thermal neutron systems. Since the SIMMER code was originally developed for liquid metal-cooled fast reactor analyses, the effect of heterogeneity had been neglected. As a result, the application of the code to water-cooled systems leads to a significant overestimation of the reactivity feedbacks and in turn to non-conservative results. To treat the heterogeneity, the multigroup cross-sections should be computed by properly taking account of the resonance self-shielding effects and the fine group-wise intra-cell flux distribution in space. In this thesis, significant improvements of the SIMMER cross-section processing scheme are described. A new formulation of the background cross-section, based on the Bell and Wigner correlations, is introduced, and pre-calculated reduction factors (Effective Mean Chord Lengths) are used to take proper account of the resonance self-shielding effects of non-fuel isotopes. Moreover, pre-calculated parameters are applied

  18. Analyses of cavitation instabilities in ductile metals

    DEFF Research Database (Denmark)

    Tvergaard, Viggo

    2007-01-01

    Cavitation instabilities have been predicted for a single void in a ductile metal stressed under high triaxiality conditions. In experiments for a ceramic reinforced by metal particles, a single dominant void has been observed on the fracture surface of some of the metal particles bridging a crack …, and tests for a thin ductile metal layer bonding two ceramic blocks have also indicated rapid void growth. Analyses for these material configurations are discussed here. When the void radius is very small, a nonlocal plasticity model is needed to account for observed size-effects, and recent analyses …, while the surrounding voids are represented by a porous ductile material model in terms of a field quantity that specifies the variation of the void volume fraction in the surrounding metal.

  19. An application of the 'Bayesian cohort model' to nuclear power plant cost analyses

    International Nuclear Information System (INIS)

    Ono, Kenji; Nakamura, Takashi

    2002-01-01

    We have developed a new method for identifying the effects of calendar year, plant age and commercial operation starting year on the costs and performances of nuclear power plants and also developed an analysis system running on personal computers. The method extends the Bayesian cohort model for time series social survey data proposed by one of the authors. The proposed method was shown to be able to separate the above three effects more properly than traditional methods such as taking simple means by time domain. The analyses of US nuclear plant cost and performance data by using the proposed method suggest that many of the US plants spent relatively long time and much capital cost for modification at their age of about 10 to 20 years, but that, after those ages, they performed fairly well with lower and stabilized O and M and additional capital costs. (author)

  20. Fracture Mechanics Analyses of Reinforced Carbon-Carbon Wing-Leading-Edge Panels

    Science.gov (United States)

    Raju, Ivatury S.; Phillips, Dawn R.; Knight, Norman F., Jr.; Song, Kyongchan

    2010-01-01

    Fracture mechanics analyses of subsurface defects within the joggle regions of the Space Shuttle wing-leading-edge RCC panels are performed. A 2D plane strain idealized joggle finite element model is developed to study the fracture behavior of the panels for three distinct loading conditions - lift-off and ascent, on-orbit, and entry. For lift-off and ascent, an estimated bounding aerodynamic pressure load is used for the analyses, while for on-orbit and entry, thermo-mechanical analyses are performed using the extreme cold and hot temperatures experienced by the panels. In addition, a best estimate for the material stress-free temperature is used in the thermo-mechanical analyses. In the finite element models, the substrate and coating are modeled separately as two distinct materials. Subsurface defects are introduced at the coating-substrate interface and within the substrate. The objective of the fracture mechanics analyses is to evaluate the defect driving forces, which are characterized by the strain energy release rates, and determine if defects can become unstable for each of the loading conditions.

  1. A new approach to analyse longitudinal epidemiological data with an excess of zeros

    NARCIS (Netherlands)

    Spriensma, A.S.; Hajós, T.R.S.; de Boer, M.R.; Heijmans, M.W.; Twisk, J.W.R.

    2013-01-01

    Background: Within longitudinal epidemiological research, count outcome variables with an excess of zeros frequently occur. Although these outcomes are frequently analysed with a linear mixed model or a Poisson mixed model, a two-part mixed model would be better suited to analysing outcome variables with an excess of zeros.

  2. Seismic risk analyses in the German Risk Study, phase B

    International Nuclear Information System (INIS)

    Hosser, D.; Liemersdorf, H.

    1991-01-01

    The paper discusses some aspects of the seismic risk part of the German Risk Study for Nuclear Power Plants, Phase B. First simplified analyses in Phase A of the study allowed only a rough classification of structures and systems of the PWR reference plant according to their seismic risk contribution. These studies were extended in Phase B using improved models for the dynamic analyses of buildings, structures and components as well as for the probabilistic analyses of seismic loading, failure probabilities and event trees. The methodology of deriving probabilistic seismic load descriptions is explained and compared with the methods in Phase A of the study and in other studies. Some details of the linear and nonlinear dynamic analyses of structures are reported in order to demonstrate the influence of different assumptions for material behaviour and failure criteria. The probabilistic structural and event tree analyses are discussed with respect to distribution assumptions, acceptable simplifications and model uncertainties. Some results for the PWR reference plant are given. (orig.)
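The probabilistic combination of seismic loading and failure probability that such studies perform can be sketched generically: a discretized hazard curve (annual probabilities of ground-motion levels) is convolved with a lognormal fragility curve. All numbers below are hypothetical illustration values, not results from the German Risk Study.

```python
import math

# Probabilistic seismic analysis sketch: convolve a discretized hazard curve
# with a lognormal fragility curve to obtain an annual failure probability.
# Hazard bins and fragility parameters are hypothetical.

def lognormal_cdf(x, median, beta):
    """P(capacity <= x) for a lognormal fragility curve."""
    return 0.5 * (1.0 + math.erf(math.log(x / median) / (beta * math.sqrt(2.0))))

# (peak ground acceleration in g, annual probability of that bin)
hazard = [(0.10, 1e-2), (0.20, 2e-3), (0.30, 4e-4), (0.40, 8e-5), (0.50, 2e-5)]
median_capacity, beta = 0.45, 0.4   # fragility parameters (hypothetical)

# total annual failure probability: sum over bins of P(pga) * P(fail | pga)
p_fail = sum(p_a * lognormal_cdf(a, median_capacity, beta) for a, p_a in hazard)
print(f"annual failure probability ~ {p_fail:.2e}")
```

In a full study this convolution would be propagated through event trees for each safety system rather than a single component, but the building block is the same.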

  3. Comparison of Numerical Analyses with a Static Load Test of a Continuous Flight Auger Pile

    Directory of Open Access Journals (Sweden)

    Hoľko Michal

    2014-12-01

    The article deals with numerical analyses of a Continuous Flight Auger (CFA) pile. The analyses include a comparison of calculated and measured load-settlement curves as well as a comparison of the load distribution over a pile's length. The numerical analyses were executed using two types of software, i.e., Ansys and Plaxis, which are based on FEM calculations. Both types of software differ from each other in the way they create numerical models, model the interface between the pile and soil, and use constitutive material models. The analyses have been prepared in the form of a parametric study, where the method of modelling the interface and the material models of the soil are compared and analysed.

  4. Teachers' Mastery Goals: Using a Self-Report Survey to Study the Relations between Teaching Practices and Students' Motivation for Science Learning

    Science.gov (United States)

    Vedder-Weiss, Dana; Fortus, David

    2018-02-01

    Employing achievement goal theory (Ames, Journal of Educational Psychology, 84(3), 261-271, 1992), we explored science teachers' instruction and its relation to students' motivation for science learning and school culture. Based on the TARGETS framework (Patrick et al., The Elementary School Journal, 102(1), 35-58, 2001) and using data from 95 teachers, we developed a self-report survey assessing science teachers' usage of practices that emphasize mastery goals. We then used this survey and hierarchical linear modeling (HLM) analyses to study the relations between 35 science teachers' mastery goals in each of the TARGETS dimensions, the decline in their grade 5-8 students' (N = 1,356) classroom and continuing motivation for science learning, and their schools' mastery goal structure. The findings suggest that adolescents' declining motivation for science learning results in part from a decreasing emphasis on mastery goals by schools and science teachers. Practices that relate to the nature of tasks and to student autonomy emerged as most strongly associated with adolescents' motivation and its decline with age.
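The HLM analyses referred to above rest on partitioning outcome variance across levels (students within classrooms). A minimal sketch with simulated, hypothetical data: a random-intercept structure is generated, and the between- and within-class variance components and the intraclass correlation (ICC) are estimated by the one-way ANOVA method of moments rather than the maximum-likelihood fitting an HLM package would use.

```python
import random

# Random-intercept HLM sketch: students (level 1) nested in classes (level 2).
# Method-of-moments (one-way ANOVA) estimates of the within- and between-class
# variance components, and the intraclass correlation (ICC) they imply.
random.seed(1)

n_classes, n_students = 30, 20
sigma_between, sigma_within = 0.6, 1.0   # true standard deviations

data = []
for c in range(n_classes):
    class_effect = random.gauss(0, sigma_between)   # level-2 random intercept
    data.append([random.gauss(class_effect, sigma_within)
                 for _ in range(n_students)])

grand_mean = sum(sum(g) for g in data) / (n_classes * n_students)
class_means = [sum(g) / n_students for g in data]

# Mean squares from the one-way ANOVA decomposition
ms_between = n_students * sum((m - grand_mean) ** 2
                              for m in class_means) / (n_classes - 1)
ms_within = sum((y - m) ** 2
                for g, m in zip(data, class_means)
                for y in g) / (n_classes * (n_students - 1))

var_within = ms_within
var_between = max(0.0, (ms_between - ms_within) / n_students)
icc = var_between / (var_between + var_within)
print(f"within = {var_within:.3f}, between = {var_between:.3f}, ICC = {icc:.3f}")
```

A non-trivial ICC is exactly what motivates HLM over ordinary regression here: student outcomes within the same classroom are correlated, so classroom-level predictors (such as teacher mastery goals) must be modeled at their own level.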

  5. Mass spectrometry-based proteomic analysis of human liver cytochrome(s) P450

    Energy Technology Data Exchange (ETDEWEB)

    Shrivas, Kamlesh; Mindaye, Samuel T.; Getie-Kebtie, Melkamu; Alterman, Michail A., E-mail: Michail.Alterman@fda.hhs.gov

    2013-02-15

    The major objective of personalized medicine is to select optimized drug therapies, and to a large degree such a mission is determined by the expression profiles of cytochrome(s) P450 (CYP). Accordingly, a proteomic case study in personalized medicine is provided by the superfamily of cytochromes P450. Our knowledge about CYP isozyme expression on the protein level is very limited and based exclusively on DNA/mRNA-derived data. Such information is not sufficient because transcription and translation events do not lead to correlated levels of expressed proteins. Here we report expression profiles of CYPs in human liver obtained by a mass spectrometry (MS)-based proteomic approach. We analyzed 32 samples of human liver microsomes (HLM) of different sexes, ages and ethnicities along with samples of recombinant human CYPs. We have experimentally confirmed that each CYP isozyme can be effectively differentiated by its unique isozyme-specific tryptic peptide(s). Trypsin digestion patterns for almost 30 human CYP isozymes were established. Those findings should assist in selecting tryptic peptides suitable for MS-based quantitation. The data obtained demonstrate remarkable differences in CYP expression profiles. CYP2E1, CYP2C8 and CYP4A11 were the only isozymes found in all HLM samples. Female and pediatric HLM samples revealed a much more diverse spectrum of expressed CYP isozymes compared to male HLM samples. We have confirmed expression of a number of “rare” CYPs (CYP2J2, CYP4B1, CYP4V2, CYP4F3, CYP4F11, CYP8B1, CYP19A1, CYP24A1 and CYP27A1) and obtained the first direct experimental data showing expression of such CYPs as CYP2F1, CYP2S1, CYP2W1, CYP4A22, CYP4X1, and CYP26A1 on the protein level. - Highlights: ► First detailed proteomic analysis of CYP isozyme expression in human liver ► Trypsin digestion patterns for almost 30 human CYP isozymes established ► The data obtained demonstrate remarkable differences in CYP expression profiles. ► Female HLM samples revealed more

  6. Mass spectrometry-based proteomic analysis of human liver cytochrome(s) P450

    International Nuclear Information System (INIS)

    Shrivas, Kamlesh; Mindaye, Samuel T.; Getie-Kebtie, Melkamu; Alterman, Michail A.

    2013-01-01

    The major objective of personalized medicine is to select optimized drug therapies, and to a large degree such a mission is determined by the expression profiles of cytochrome(s) P450 (CYP). Accordingly, a proteomic case study in personalized medicine is provided by the superfamily of cytochromes P450. Our knowledge about CYP isozyme expression on the protein level is very limited and based exclusively on DNA/mRNA-derived data. Such information is not sufficient because transcription and translation events do not lead to correlated levels of expressed proteins. Here we report expression profiles of CYPs in human liver obtained by a mass spectrometry (MS)-based proteomic approach. We analyzed 32 samples of human liver microsomes (HLM) of different sexes, ages and ethnicities along with samples of recombinant human CYPs. We have experimentally confirmed that each CYP isozyme can be effectively differentiated by its unique isozyme-specific tryptic peptide(s). Trypsin digestion patterns for almost 30 human CYP isozymes were established. Those findings should assist in selecting tryptic peptides suitable for MS-based quantitation. The data obtained demonstrate remarkable differences in CYP expression profiles. CYP2E1, CYP2C8 and CYP4A11 were the only isozymes found in all HLM samples. Female and pediatric HLM samples revealed a much more diverse spectrum of expressed CYP isozymes compared to male HLM samples. We have confirmed expression of a number of “rare” CYPs (CYP2J2, CYP4B1, CYP4V2, CYP4F3, CYP4F11, CYP8B1, CYP19A1, CYP24A1 and CYP27A1) and obtained the first direct experimental data showing expression of such CYPs as CYP2F1, CYP2S1, CYP2W1, CYP4A22, CYP4X1, and CYP26A1 on the protein level. - Highlights: ► First detailed proteomic analysis of CYP isozyme expression in human liver ► Trypsin digestion patterns for almost 30 human CYP isozymes established ► The data obtained demonstrate remarkable differences in CYP expression profiles. ► Female HLM samples revealed more

  7. GOTHIC MODEL OF BWR SECONDARY CONTAINMENT DRAWDOWN ANALYSES

    International Nuclear Information System (INIS)

    Hansen, P.N.

    2004-01-01

    This article introduces a GOTHIC version 7.1 model of the Secondary Containment Reactor Building post-LOCA drawdown analysis for a BWR. GOTHIC is an EPRI-sponsored thermal-hydraulic code. This analysis is required by the utility to demonstrate an ability to restore and maintain the Secondary Containment Reactor Building negative pressure condition. The technical and regulatory issues associated with this modeling are presented. The analysis includes the effects of wind, elevation and thermal conditions on building pressure. The model includes a multiple-volume representation that includes the spent fuel pool. In addition, heat sources and sinks are modeled as one-dimensional heat conductors. The leakage into the building is modeled to include both laminar and turbulent behavior, as established by actual plant test data. The GOTHIC code provides components to model the heat exchangers used for fuel pool cooling as well as area cooling via air coolers. The results of the evaluation are used to determine the time during which the Reactor Building is at a pressure that exceeds external conditions. This time period is established with the GOTHIC model based on the worst-case pressure conditions on the building. For this time period the utility must assume the primary containment leakage goes directly to the environment. Once the building pressure is restored below outside conditions, the release to the environment can be credited as a filtered release.

  8. DCH analyses using the CONTAIN code

    International Nuclear Information System (INIS)

    Hong, Sung Wan; Kim, Hee Dong

    1996-08-01

    This report describes CONTAIN analyses performed during participation in the project 'DCH issue resolution for ice condenser plants', which is sponsored by the NRC at SNL. Even though the calculations were performed for an ice condenser plant, the CONTAIN code has been used for analyses of many phenomena in PWR containments, and the DCH module can be applied to any plant type. The present ice condenser issue resolution effort was intended to provide guidance as to what might be needed to resolve DCH for ice condenser plants. It includes both a screening analysis and a scoping study if the screening analysis cannot provide a complete resolution. The following are the results concerning DCH loads, in descending order: 1. Availability of ignition sources prior to vessel breach; 2. Availability and effectiveness of ice in the ice condenser; 3. Loads modeling uncertainties related to co-ejected RPV water; 4. Other loads modeling uncertainties. 10 tabs., 3 figs., 14 refs. (Author)

  9. DCH analyses using the CONTAIN code

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Sung Wan; Kim, Hee Dong [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1996-08-01

    This report describes CONTAIN analyses performed during participation in the project 'DCH issue resolution for ice condenser plants', which is sponsored by the NRC at SNL. Even though the calculations were performed for an ice condenser plant, the CONTAIN code has been used for analyses of many phenomena in PWR containments, and the DCH module can be applied to any plant type. The present ice condenser issue resolution effort was intended to provide guidance as to what might be needed to resolve DCH for ice condenser plants. It includes both a screening analysis and a scoping study if the screening analysis cannot provide a complete resolution. The following are the results concerning DCH loads, in descending order: 1. Availability of ignition sources prior to vessel breach; 2. Availability and effectiveness of ice in the ice condenser; 3. Loads modeling uncertainties related to co-ejected RPV water; 4. Other loads modeling uncertainties. 10 tabs., 3 figs., 14 refs. (Author).

  10. Analyses and testing of model prestressed concrete reactor vessels with built-in planes of weakness

    International Nuclear Information System (INIS)

    Dawson, P.; Paton, A.A.; Fleischer, C.C.

    1990-01-01

    This paper describes the design, construction, analyses and testing of two small scale, single cavity prestressed concrete reactor vessel models, one without planes of weakness and one with planes of weakness immediately behind the cavity liner. This work was carried out to extend a previous study which had suggested the likely feasibility of constructing regions of prestressed concrete reactor vessels and biological shields, which become activated, using easily removable blocks, separated by a suitable membrane. The paper describes the results obtained and concludes that the planes of weakness concept could offer a means of facilitating the dismantling of activated regions of prestressed concrete reactor vessels, biological shields and similar types of structure. (author)

  11. Model-based performance and energy analyses of reverse osmosis to reuse wastewater in a PVC production site.

    Science.gov (United States)

    Hu, Kang; Fiedler, Thorsten; Blanco, Laura; Geissen, Sven-Uwe; Zander, Simon; Prieto, David; Blanco, Angeles; Negro, Carlos; Swinnen, Nathalie

    2017-11-10

    A pilot-scale reverse osmosis (RO) unit downstream of a membrane bioreactor (MBR) was developed for desalination to reuse wastewater in a PVC production site. The solution-diffusion-film model (SDFM), based on the solution-diffusion model (SDM) and film theory, was proposed to describe rejections of electrolyte mixtures in the MBR effluent, which consists of dominant ions (Na⁺ and Cl⁻) and several trace ions (Ca²⁺, Mg²⁺, K⁺ and SO₄²⁻). The universal global optimisation method was used to estimate the ion permeability coefficients (B) and mass transfer coefficients (K) in the SDFM. Then, the membrane performance was evaluated based on the estimated parameters, which demonstrated that the theoretical simulations were in line with the experimental results for the dominant ions. Moreover, an energy analysis model taking into consideration the limitation imposed by the thermodynamic restriction was proposed to analyse the specific energy consumption of the pilot-scale RO system in various scenarios.
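The SDFM combines two standard relations: the solution-diffusion part sets the real rejection from the ion permeability B and the volume flux, while film theory links real and observed rejection through the mass transfer coefficient K. A minimal sketch of that calculation follows; the values of Jv, B and K are hypothetical, not the coefficients fitted in the study.

```python
import math

# Solution-diffusion-film model (SDFM) sketch for a single salt.
# Solution-diffusion part:          R_real = Jv / (Jv + B)
# Film theory (concentration polarization) relates observed to real rejection:
#   (1 - R_obs)/R_obs = (1 - R_real)/R_real * exp(Jv / K)

def rejections(Jv, B, K):
    """Return (real, observed) rejection for volume flux Jv [m/s]."""
    r_real = Jv / (Jv + B)
    ratio = (1 - r_real) / r_real * math.exp(Jv / K)
    r_obs = 1.0 / (1.0 + ratio)
    return r_real, r_obs

Jv = 2.0e-5   # volume flux, m/s        (hypothetical)
B = 1.0e-7    # ion permeability, m/s   (hypothetical)
K = 4.0e-5    # mass transfer coeff, m/s (hypothetical)

r_real, r_obs = rejections(Jv, B, K)
print(f"real rejection = {r_real:.4f}, observed rejection = {r_obs:.4f}")
```

As expected from concentration polarization, the observed rejection is always below the real (membrane-intrinsic) rejection, and the gap grows with the flux-to-mass-transfer ratio Jv/K.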

  12. Comparison of Numerical Analyses with a Static Load Test of a Continuous Flight Auger Pile

    Science.gov (United States)

    Hoľko, Michal; Stacho, Jakub

    2014-12-01

    The article deals with numerical analyses of a Continuous Flight Auger (CFA) pile. The analyses include a comparison of calculated and measured load-settlement curves as well as a comparison of the load distribution over a pile's length. The numerical analyses were executed using two types of software, i.e., Ansys and Plaxis, which are based on FEM calculations. Both types of software differ from each other in the way they create numerical models, model the interface between the pile and soil, and use constitutive material models. The analyses have been prepared in the form of a parametric study, where the method of modelling the interface and the material models of the soil are compared and analysed. Our analyses show that both types of software permit the modelling of pile foundations. The Plaxis software uses advanced material models and can also model the impact of groundwater or overconsolidation. The load-settlement curve calculated using Plaxis matches the results of a static load test with a more than 95% degree of accuracy. In comparison, the load-settlement curve calculated using Ansys provides only an approximate estimate, but the software allows the modelling of large structural systems together with the foundation system.

  13. On the Structure of Personality Disorder Traits: Conjoint Analyses of the CAT-PD, PID-5, and NEO-PI-3 Trait Models

    Science.gov (United States)

    Wright, Aidan G.C.; Simms, Leonard J.

    2014-01-01

    The current study examines the relations among contemporary models of pathological and normal-range personality traits. Specifically, we report on (a) conjoint exploratory factor analyses of the Computerized Adaptive Test of Personality Disorder static form (CAT-PD-SF) with the Personality Inventory for the DSM-5 (PID-5; Krueger et al., 2012) and the NEO Personality Inventory-3 First Half (NEO-PI-3FH; McCrae & Costa, 2007), and (b) unfolding hierarchical analyses of the three measures in a large general psychiatric outpatient sample (N = 628; 64% female). A five-factor solution provided conceptually coherent alignment among the CAT-PD-SF, PID-5, and NEO-PI-3FH scales. Hierarchical solutions suggested that higher-order factors bear strong resemblance to dimensions that emerge from structural models of psychopathology (e.g., the Internalizing and Externalizing spectra). These results demonstrate that the CAT-PD-SF adheres to the consensual structure of broad trait domains at the five-factor level. Additionally, patterns of scale loadings further inform questions of structure and bipolarity of facet- and domain-level constructs. Finally, hierarchical analyses strengthen the argument for using broad dimensions that span normative and pathological functioning to scaffold a quantitatively derived phenotypic structure of psychopathology to orient future research on explanatory, etiological, and maintenance mechanisms. PMID:24588061
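Conjoint factor analysis pools scales from several instruments into one correlation matrix and factors it jointly. As a simplified stand-in for the study's rotated five-factor EFA, the sketch below extracts only the first unrotated factor of a small, hypothetical correlation matrix by power iteration (a PCA-style extraction step).

```python
# Conjoint factor extraction sketch: power iteration pulls out the dominant
# eigenvector (first unrotated factor) of a correlation matrix built over
# scales pooled from two hypothetical instruments. A real conjoint EFA would
# retain and rotate several factors; this shows only the extraction step.

R = [  # hypothetical correlation matrix for four pooled trait scales
    [1.00, 0.55, 0.40, 0.30],
    [0.55, 1.00, 0.45, 0.35],
    [0.40, 0.45, 1.00, 0.50],
    [0.30, 0.35, 0.50, 1.00],
]

def power_iteration(mat, iters=200):
    """Dominant eigenvalue/eigenvector by repeated multiplication."""
    v = [1.0] * len(mat)
    eigval = 0.0
    for _ in range(iters):
        w = [sum(row[j] * v[j] for j in range(len(v))) for row in mat]
        eigval = max(abs(x) for x in w)
        v = [x / eigval for x in w]
    return eigval, v

eigval, v = power_iteration(R)
norm = sum(x * x for x in v) ** 0.5
loadings = [eigval ** 0.5 * x / norm for x in v]  # first-factor loadings
print("first eigenvalue:", round(eigval, 3))
print("loadings:", [round(l, 3) for l in loadings])
```

With all scales positively correlated, every loading on the first factor is positive, the analogue of a general factor before rotation distributes variance across the retained dimensions.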

  14. SOCR Analyses - an Instructional Java Web-based Statistical Analysis Toolkit.

    Science.gov (United States)

    Chu, Annie; Cui, Jenny; Dinov, Ivo D

    2009-03-01

    The Statistical Online Computational Resource (SOCR) designs web-based tools for educational use in a variety of undergraduate courses (Dinov 2006). Several studies have demonstrated that these resources significantly improve students' motivation and learning experiences (Dinov et al. 2008). SOCR Analyses is a new component that concentrates on data modeling and analysis using parametric and non-parametric techniques supported with graphical model diagnostics. Currently implemented analyses include models commonly used in undergraduate statistics courses, such as linear models (Simple Linear Regression, Multiple Linear Regression, One-Way and Two-Way ANOVA). In addition, we implemented tests for sample comparisons, such as the t-test in the parametric category, and the Wilcoxon rank sum test, Kruskal-Wallis test, and Friedman's test in the non-parametric category. SOCR Analyses also includes several hypothesis test models, such as contingency tables, Friedman's test and Fisher's exact test. The code itself is open source (http://socr.googlecode.com/), hoping to contribute to the efforts of the statistical computing community. The code includes functionality for each specific analysis model, and it has general utilities that can be applied in various statistical computing tasks. For example, concrete methods with an API (Application Programming Interface) have been implemented for statistical summaries, least-squares solutions of general linear models, rank calculations, etc. HTML interfaces, tutorials, source code, activities, and data are freely available via the web (www.SOCR.ucla.edu). Code examples for developers and demos for educators are provided on the SOCR Wiki website. In this article, the pedagogical utilization of the SOCR Analyses is discussed, as well as the underlying design framework. As the SOCR project is on-going and more functions and tools are being added to it, these resources are constantly improved. The reader is strongly encouraged to check the SOCR site for most
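Of the sample-comparison tests listed, the pooled two-sample t-test is simple enough to write out directly; the two groups below are hypothetical illustration data, not anything from the SOCR distribution.

```python
import math

# Pooled two-sample t-test of the kind SOCR Analyses exposes, written out
# directly from the textbook formulas.

def two_sample_t(a, b):
    """Return (t statistic, degrees of freedom) for a pooled-variance test."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
    return t, na + nb - 2

group_a = [5.1, 4.9, 5.6, 5.8, 5.2, 5.4]   # hypothetical measurements
group_b = [4.3, 4.7, 4.1, 4.6, 4.4, 4.8]

t, df = two_sample_t(group_a, group_b)
print(f"t = {t:.3f} on {df} degrees of freedom")
```

The p-value would then come from the t distribution with `df` degrees of freedom, which is the step a toolkit like SOCR performs behind its interface.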

  15. Using FOSM-Based Data Worth Analyses to Design Geophysical Surveys to Reduce Uncertainty in a Regional Groundwater Model Update

    Science.gov (United States)

    Smith, B. D.; White, J.; Kress, W. H.; Clark, B. R.; Barlow, J.

    2016-12-01

    Hydrogeophysical surveys have become an integral part of understanding the hydrogeological frameworks used in groundwater models. Regional models cover a large area where water well data are, at best, scattered and irregular. Since budgets are finite, priorities must be assigned to select optimal areas for geophysical surveys. For airborne electromagnetic (AEM) geophysical surveys, optimization of mapping depth and line spacing needs to take into account the objectives of the groundwater models. The approach discussed here uses first-order, second-moment (FOSM) uncertainty analyses, which assume an approximately linear relation between model parameters and observations. This assumption allows FOSM analyses to be applied to estimate the value of increased parameter knowledge in reducing forecast uncertainty. FOSM is used to facilitate optimization of yet-to-be-completed geophysical surveying to reduce model forecast uncertainty. The main objective of the geophysical surveying is assumed to be estimating values and spatial variation in hydrologic parameters (i.e., hydraulic conductivity) as well as mapping lower-permeability layers that influence the spatial distribution of recharge flux. The proposed data worth analysis was applied to the Mississippi Embayment Regional Aquifer Study (MERAS), which is being updated. The objective of MERAS is to assess the ground-water availability (status and trends) of the Mississippi embayment aquifer system. The study area covers portions of eight states including Alabama, Arkansas, Illinois, Kentucky, Louisiana, Mississippi, Missouri, and Tennessee. The active model grid covers approximately 70,000 square miles, and incorporates some 6,000 miles of major rivers and over 100,000 water wells. In the FOSM analysis, a dense network of pilot points was used to capture uncertainty in hydraulic conductivity and recharge. To simulate the effect of AEM flight lines, the prior uncertainty for hydraulic conductivity and recharge pilots along potential flight lines was
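The FOSM data-worth calculation behind such an analysis can be sketched at toy scale: with a linearized model, assimilating a candidate observation updates the parameter covariance by a Schur complement, and the worth of the observation is the resulting drop in forecast variance. The sensitivities and variances below are hypothetical two-parameter values, not numbers from the MERAS model.

```python
# FOSM data-worth sketch for two parameters (e.g. log-K and a recharge
# multiplier). With a linearized model y = J p, the posterior parameter
# covariance after assimilating one observation with noise variance c is
#   Sigma' = Sigma - (Sigma J^T)(J Sigma J^T + c)^-1 (J Sigma)
# and the forecast variance is s Sigma s^T for forecast sensitivity s.

Sigma = [[0.50, 0.00],   # prior parameter covariance (hypothetical)
         [0.00, 0.20]]
J = [1.2, 0.4]           # sensitivity of the candidate AEM-informed obs
c = 0.05                 # observation noise variance
s = [0.8, 1.5]           # sensitivity of the forecast to the parameters

def quad(v, M, w):
    """v^T M w for 2-vectors and a 2x2 matrix."""
    return sum(v[i] * M[i][j] * w[j] for i in range(2) for j in range(2))

prior_forecast_var = quad(s, Sigma, s)

denom = quad(J, Sigma, J) + c                    # scalar J Sigma J^T + c
SJ = [sum(Sigma[i][j] * J[j] for j in range(2)) for i in range(2)]  # Sigma J^T
Sigma_post = [[Sigma[i][j] - SJ[i] * SJ[j] / denom for j in range(2)]
              for i in range(2)]

post_forecast_var = quad(s, Sigma_post, s)
worth = prior_forecast_var - post_forecast_var   # variance reduction = worth
print(f"forecast variance: prior {prior_forecast_var:.3f} -> "
      f"posterior {post_forecast_var:.3f} (reduction {worth:.3f})")
```

Ranking candidate flight lines by this variance reduction, observation by observation, is what lets a finite survey budget be spent where it most reduces forecast uncertainty.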

  16. Data Assimilation Tools for CO2 Reservoir Model Development – A Review of Key Data Types, Analyses, and Selected Software

    Energy Technology Data Exchange (ETDEWEB)

    Rockhold, Mark L.; Sullivan, E. C.; Murray, Christopher J.; Last, George V.; Black, Gary D.

    2009-09-30

    Pacific Northwest National Laboratory (PNNL) has embarked on an initiative to develop world-class capabilities for performing experimental and computational analyses associated with geologic sequestration of carbon dioxide. The ultimate goal of this initiative is to provide science-based solutions for helping to mitigate the adverse effects of greenhouse gas emissions. This Laboratory-Directed Research and Development (LDRD) initiative currently has two primary focus areas—advanced experimental methods and computational analysis. The experimental methods focus area involves the development of new experimental capabilities, supported in part by the U.S. Department of Energy’s (DOE) Environmental Molecular Science Laboratory (EMSL) housed at PNNL, for quantifying mineral reaction kinetics with CO2 under high temperature and pressure (supercritical) conditions. The computational analysis focus area involves numerical simulation of coupled, multi-scale processes associated with CO2 sequestration in geologic media, and the development of software to facilitate building and parameterizing conceptual and numerical models of subsurface reservoirs that represent geologic repositories for injected CO2. This report describes work in support of the computational analysis focus area. The computational analysis focus area currently consists of several collaborative research projects. These are all geared towards the development and application of conceptual and numerical models for geologic sequestration of CO2. The software being developed for this focus area is referred to as the Geologic Sequestration Software Suite or GS3. A wiki-based software framework is being developed to support GS3. This report summarizes work performed in FY09 on one of the LDRD projects in the computational analysis focus area. The title of this project is Data Assimilation Tools for CO2 Reservoir Model Development. Some key objectives of this project in FY09 were to assess the current state

  17. Assessing models of speciation under different biogeographic scenarios; An empirical study using multi-locus and RNA-seq analyses

    Science.gov (United States)

    Edwards, Taylor; Tollis, Marc; Hsieh, PingHsun; Gutenkunst, Ryan N.; Liu, Zhen; Kusumi, Kenro; Culver, Melanie; Murphy, Robert W.

    2016-01-01

    Evolutionary biology often seeks to decipher the drivers of speciation, and much debate persists over the relative importance of isolation and gene flow in the formation of new species. Genetic studies of closely related species can assess if gene flow was present during speciation, because signatures of past introgression often persist in the genome. We test hypotheses on which mechanisms of speciation drove diversity among three distinct lineages of desert tortoise in the genus Gopherus. These lineages offer a powerful system to study speciation, because different biogeographic patterns (physical vs. ecological segregation) are observed at opposing ends of their distributions. We use 82 samples collected from 38 sites, representing the entire species' distribution and generate sequence data for mtDNA and four nuclear loci. A multilocus phylogenetic analysis in *BEAST estimates the species tree. RNA-seq data yield 20,126 synonymous variants from 7665 contigs from two individuals of each of the three lineages. Analyses of these data using the demographic inference package ∂a∂i serve to test the null hypothesis of no gene flow during divergence. The best-fit demographic model for the three taxa is concordant with the *BEAST species tree, and the ∂a∂i analysis does not indicate gene flow among any of the three lineages during their divergence. These analyses suggest that divergence among the lineages occurred in the absence of gene flow and in this scenario the genetic signature of ecological isolation (parapatric model) cannot be differentiated from geographic isolation (allopatric model).

  18. Evaluation of Uncertainties in hydrogeological modeling and groundwater flow analyses. Model calibration

    International Nuclear Information System (INIS)

    Ijiri, Yuji; Ono, Makoto; Sugihara, Yutaka; Shimo, Michito; Yamamoto, Hajime; Fumimura, Kenichi

    2003-03-01

    This study evaluates uncertainty in hydrogeological modeling and groundwater flow analysis. Three-dimensional groundwater flow at the Shobasama site in Tono was analyzed using two continuum models and one discontinuum model. The study domain covered an area of four kilometers east-west by six kilometers north-south. To evaluate how the uncertainties in the hydrogeological structure model and in the groundwater flow simulation results decreased as the investigations progressed, the models built with several hydrogeological modeling and groundwater flow analysis techniques were updated and calibrated on the basis of newly acquired information and knowledge. The findings are as follows. When the models were updated with parameters and structures set as in the previous year, no large differences arose between the modeling methods. Model calibration was performed by matching numerical simulations to observations of the pressure response caused by opening and closing a packer in the MIU-2 borehole. Each analysis technique reduced the residual sum of squares between observations and simulation results by adjusting hydrogeological parameters; however, each model adjusted different parameters, such as hydraulic conductivity, effective porosity, specific storage, and anisotropy. When calibrating models, it is sometimes impossible to explain the phenomena by adjusting parameters alone; in such cases further investigation may be required to clarify the hydrogeological structure in more detail. Comparing the research from its beginning through this year yields the following conclusions about the investigation. (1) Transient hydraulic data are an effective means of reducing the uncertainty of the hydrogeological structure. (2) Effective porosity for calculating pore water velocity of

  19. School effects on non-verbal intelligence and nutritional status in rural Zambia

    OpenAIRE

    Hein, Sascha; Tan, Mei; Reich, Jodi; Thuma, Philip E.; Grigorenko, Elena L.

    2015-01-01

    This study uses hierarchical linear modeling (HLM) to examine the school factors (i.e., related to school organization and teacher and student body) associated with non-verbal intelligence (NI) and nutritional status (i.e., body mass index; BMI) of 4204 3rd to 7th graders in rural areas of Southern Province, Zambia. Results showed that 23.5% and 7.7% of the NI and BMI variance, respectively, were conditioned by differences between schools. The set of 14 school factors accounted for 58.8% and ...
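
As a hedged illustration of the variance partitioning that HLM reports (the 23.5% and 7.7% figures above), the sketch below estimates the intraclass correlation ICC(1) from a one-way random-effects ANOVA on synthetic data. The school structure, effect sizes, and variable names are invented for the example; this is not the study's data or code.

```python
import numpy as np

def icc_oneway(values, groups):
    """Intraclass correlation ICC(1) from a one-way random-effects ANOVA.

    Estimates the share of outcome variance lying between groups
    (here, schools) -- the unconditional ICC an HLM analysis reports.
    """
    values = np.asarray(values, dtype=float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    k = len(labels)                      # number of groups
    n = len(values)                      # total observations
    grand_mean = values.mean()

    # Between- and within-group sums of squares.
    ss_between = sum(
        (groups == g).sum() * (values[groups == g].mean() - grand_mean) ** 2
        for g in labels)
    ss_within = sum(
        ((values[groups == g] - values[groups == g].mean()) ** 2).sum()
        for g in labels)

    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    # Average group size adjusted for unbalanced designs.
    n0 = (n - sum((groups == g).sum() ** 2 for g in labels) / n) / (k - 1)

    var_between = max(0.0, (ms_between - ms_within) / n0)
    return var_between / (var_between + ms_within)

# Synthetic example: 40 schools x 30 students, school effects (sd 3)
# on top of student-level noise (sd 5); true ICC = 9/34 ~ 0.26.
rng = np.random.default_rng(0)
school = np.repeat(np.arange(40), 30)
score = rng.normal(0, 3, 40)[school] + rng.normal(0, 5, school.size)
print(round(icc_oneway(score, school), 2))
```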

  20. A model for analysing factors which may influence quality management procedures in higher education

    Directory of Open Access Journals (Sweden)

    Cătălin MAICAN

    2015-12-01

    Full Text Available In all universities, the Office for Quality Assurance defines the procedure for assessing the performance of the teaching staff, with a view to establishing students’ perception of the teachers’ activity in terms of the quality of the teaching process, of the relationship with the students, and of the assistance provided for learning. The present paper aims at creating a combined evaluation model based on statistical data mining methods: starting from the grades teachers awarded to students, and using cluster analysis and discriminant analysis, we identified the subjects which produced significant differences between students’ grades; these subjects were subsequently evaluated by the students. The results of these analyses allowed the formulation of measures for enhancing the quality of the evaluation process.

  1. Effects of exogenous polyamines and inhibitors of polyamine ...

    African Journals Online (AJOL)

    guanylhydrazone) (MGBG) and dicyclohexylamine (DCHA) or three exogenous polyamines (putrescine, spermidine and spermine) were added into a modified HLM-1 maturation medium inoculated with embryogenic tissues. Medium responses were ...

  2. A new approach to analyse longitudinal epidemiological data with an excess of zeros.

    Science.gov (United States)

    Spriensma, Alette S; Hajos, Tibor R S; de Boer, Michiel R; Heymans, Martijn W; Twisk, Jos W R

    2013-02-20

    Within longitudinal epidemiological research, 'count' outcome variables with an excess of zeros frequently occur. Although these outcomes are frequently analysed with a linear mixed model or a Poisson mixed model, a two-part mixed model would be better at analysing outcome variables with an excess of zeros. The objective of this paper was therefore to introduce the relatively 'new' method of two-part joint regression modelling in longitudinal data analysis for outcome variables with an excess of zeros, and to compare the performance of this method with that of current approaches. Within an observational longitudinal dataset, we compared three techniques: two 'standard' approaches (a linear mixed model and a Poisson mixed model) and a two-part joint mixed model (a binomial/Poisson mixed distribution model), including random intercepts and random slopes. Model fit indicators and differences between predicted and observed values were used for comparisons. The analyses were performed with STATA using the GLLAMM procedure. Among the random intercept models, the two-part joint mixed model (binomial/Poisson) performed best. Adding random slopes for time changed the sign of the regression coefficient for both the Poisson mixed model and the two-part joint mixed model (binomial/Poisson) and resulted in a much better fit. This paper showed that a two-part joint mixed model is a more appropriate method for analysing longitudinal data with an excess of zeros than a linear mixed model or a Poisson mixed model, although in a model with random slopes for time a Poisson mixed model also performed remarkably well.
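
A minimal simulation, with invented rates and a simple cross-sectional setup rather than the paper's mixed models, of why a single Poisson model struggles with an excess of zeros while a two-part view does not:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate a zero-inflated count outcome: a binary "any events?" part and,
# conditional on yes, a Poisson count part (all rates are illustrative).
n = 5000
any_events = rng.random(n) < 0.6           # 40% structural zeros
counts = np.where(any_events, rng.poisson(3.0, n), 0)

# A single Poisson model fitted by ML uses the sample mean as its rate
# and therefore predicts far fewer zeros than observed.
lam = counts.mean()
p_zero_poisson = np.exp(-lam)
p_zero_observed = (counts == 0).mean()

# The two-part view models the zero probability and the count part
# separately, closely reproducing the observed zero fraction.
p_any = any_events.mean()
p_zero_twopart = (1 - p_any) + p_any * np.exp(-counts[any_events].mean())

print(round(p_zero_observed, 3), round(p_zero_poisson, 3),
      round(p_zero_twopart, 3))
```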

  3. Genetic analyses of partial egg production in Japanese quail using multi-trait random regression models.

    Science.gov (United States)

    Karami, K; Zerehdaran, S; Barzanooni, B; Lotfi, E

    2017-12-01

    1. The aim of the present study was to estimate genetic parameters for average egg weight (EW) and egg number (EN) at different ages in Japanese quail using multi-trait random regression (MTRR) models. 2. A total of 8534 records from 900 quail, hatched between 2014 and 2015, were used in the study. Average weekly egg weights and egg numbers were measured from the second until the sixth week of egg production. 3. Nine random regression models were compared to identify the best order of the Legendre polynomials (LP). The optimal model was identified by the Bayesian Information Criterion. A model with second-order LP for fixed effects, second-order LP for additive genetic effects and third-order LP for permanent environmental effects (MTRR23) was found to be the best. 4. According to the MTRR23 model, direct heritability for EW increased from 0.26 in the second week to 0.53 in the sixth week of egg production, whereas the ratio of permanent environment to phenotypic variance decreased from 0.48 to 0.1. Direct heritability for EN was low, whereas the ratio of permanent environment to phenotypic variance decreased from 0.57 to 0.15 during the production period. 5. For each trait, estimated genetic correlations among weeks of egg production were high (from 0.85 to 0.98). Genetic correlations between EW and EN were low and negative for the first two weeks, but low and positive for the rest of the egg production period. 6. In conclusion, random regression models can be used effectively for analysing egg production traits in Japanese quail. Response to selection for increased egg weight would be higher at older ages because of its higher heritability, and such a breeding program would have no negative genetic impact on egg production.
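
The Legendre-polynomial covariables that such random regression models regress on can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the week range and polynomial order follow the abstract, but the min-max standardization to [-1, 1] and all names are assumptions.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariables(ages, order):
    """Matrix of Legendre polynomials P_0..P_order evaluated at each age,
    with ages standardized to [-1, 1] as is usual in random regression."""
    a = np.asarray(ages, dtype=float)
    x = 2.0 * (a - a.min()) / (a.max() - a.min()) - 1.0
    # Column j holds P_j(x); a unit coefficient vector selects each P_j.
    return np.column_stack(
        [legendre.legval(x, np.eye(order + 1)[j]) for j in range(order + 1)])

weeks = [2, 3, 4, 5, 6]              # weeks of egg production
Z = legendre_covariables(weeks, 2)   # second-order LP, as in MTRR23
print(np.round(Z, 3))
```

Each row of `Z` is the regression covariable vector for one week; for example, the middle week maps to x = 0, where P0 = 1, P1 = 0, and P2 = -0.5.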

  4. Hydrogen-combustion analyses of large-scale tests

    International Nuclear Information System (INIS)

    Gido, R.G.; Koestel, A.

    1986-01-01

    This report uses results of the large-scale tests with turbulence performed by the Electric Power Research Institute at the Nevada Test Site to evaluate hydrogen burn-analysis procedures based on lumped-parameter codes like COMPARE-H2 and associated burn-parameter models. The test results: (1) confirmed, in a general way, the procedures for application to pulsed burning, (2) increased significantly our understanding of the burn phenomenon by demonstrating that continuous burning can occur, and (3) indicated that steam can terminate continuous burning. Future actions recommended include: (1) modification of the code to perform continuous-burn analyses, which is demonstrated, (2) analyses to determine the type of burning (pulsed or continuous) that will exist in nuclear containments and the stable location if the burning is continuous, and (3) changes to the models for estimating burn parameters

  6. Possible future HERA analyses

    International Nuclear Information System (INIS)

    Geiser, Achim

    2015-12-01

    A variety of possible future analyses of HERA data in the context of the HERA data preservation programme is collected, motivated, and commented. The focus is placed on possible future analyses of the existing ep collider data and their physics scope. Comparisons to the original scope of the HERA programme are made, and cross references to topics also covered by other participants of the workshop are given. This includes topics on QCD, proton structure, diffraction, jets, hadronic final states, heavy flavours, electroweak physics, and the application of related theory and phenomenology topics like NNLO QCD calculations, low-x related models, nonperturbative QCD aspects, and electroweak radiative corrections. Synergies with other collider programmes are also addressed. In summary, the range of physics topics which can still be uniquely covered using the existing data is very broad and of considerable physics interest, often matching the interest of results from colliders currently in operation. Due to well-established data and MC sets, calibrations, and analysis procedures the manpower and expertise needed for a particular analysis is often very much smaller than that needed for an ongoing experiment. Since centrally funded manpower to carry out such analyses is not available any longer, this contribution not only targets experienced self-funded experimentalists, but also theorists and master-level students who might wish to carry out such an analysis.

  7. One size does not fit all: On how Markov model order dictates performance of genomic sequence analyses

    Science.gov (United States)

    Narlikar, Leelavati; Mehta, Nidhi; Galande, Sanjeev; Arjunwadkar, Mihir

    2013-01-01

    The structural simplicity and ability to capture serial correlations make Markov models a popular modeling choice in several genomic analyses, such as identification of motifs, genes and regulatory elements. A critical, yet relatively unexplored, issue is the determination of the order of the Markov model. Most biological applications use a predetermined order for all data sets indiscriminately. Here, we show the vast variation in the performance of such applications with the order. To identify the ‘optimal’ order, we investigated two model selection criteria: Akaike information criterion and Bayesian information criterion (BIC). The BIC optimal order delivers the best performance for mammalian phylogeny reconstruction and motif discovery. Importantly, this order is different from orders typically used by many tools, suggesting that a simple additional step determining this order can significantly improve results. Further, we describe a novel classification approach based on BIC optimal Markov models to predict functionality of tissue-specific promoters. Our classifier discriminates between promoters active across 12 different tissues with remarkable accuracy, yielding 3 times the precision expected by chance. Application to the metagenomics problem of identifying the taxum from a short DNA fragment yields accuracies at least as high as the more complex mainstream methodologies, while retaining conceptual and computational simplicity. PMID:23267010
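
The BIC-based order selection described above can be sketched in a few lines of pure Python (maximum-likelihood fit on the training sequence itself, no smoothing); the toy sequence and the scanned order range are invented for illustration.

```python
import math
from collections import Counter

def bic_markov(seq, order, alphabet="ACGT"):
    """BIC of a fixed-order Markov model fitted to seq by maximum likelihood."""
    ctx = Counter()      # counts of each length-`order` context
    trans = Counter()    # counts of (context, next symbol) pairs
    for i in range(order, len(seq)):
        c, s = seq[i - order:i], seq[i]
        ctx[c] += 1
        trans[(c, s)] += 1
    # Log-likelihood under the MLE transition probabilities n(c,s)/n(c).
    loglik = sum(n * math.log(n / ctx[c]) for (c, s), n in trans.items())
    # Free parameters: per context, |alphabet| - 1 transition probabilities.
    n_params = (len(alphabet) ** order) * (len(alphabet) - 1)
    n_obs = len(seq) - order
    return -2.0 * loglik + n_params * math.log(n_obs)

seq = "ACGT" * 200                      # toy sequence with pure order-1 structure
best = min(range(4), key=lambda k: bic_markov(seq, k))
print(best)                             # → 1
```

On this toy input, order 1 captures the structure perfectly while higher orders only add parameters, so BIC selects 1 rather than any predetermined order.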

  8. Editorial: Merawat Hak Asasi Manusia (Caring for Human Rights)

    Directory of Open Access Journals (Sweden)

    Atip Latipulhayat

    2016-08-01

    Full Text Available We all understand that the history of human rights (HAM), and especially the emergence of human rights instruments, has always been driven by the culmination of various human rights violations. The absolutism of the English monarchy in the past gradually gave birth to one human rights instrument after another, from the Magna Charta (1215), which was highly elitist in character, to the Bill of Rights of 1689, regarded as a victory of the people (Jimly Asshidiqie: 2006, p. 87). Likewise at the international level, the solidarity behind the abolition of slavery in the 18th and 19th centuries gave rise, not always smoothly, to human rights institutions and documents, from the League of Nations, the International Labour Organization (ILO) and its various international conventions, to the establishment of the United Nations (UN) and the International Bill of Rights: the UDHR, ICCPR, and ICESCR (Rhona K.M. Smith, et al.: 2008, pp. 32-36).

  9. The variants of an LOD of a 3D building model and their influence on spatial analyses

    Science.gov (United States)

    Biljecki, Filip; Ledoux, Hugo; Stoter, Jantien; Vosselman, George

    2016-06-01

    The level of detail (LOD) of a 3D city model indicates the model's grade and usability. However, there exist multiple valid variants of each LOD. As a consequence, the LOD concept is inconclusive as an instruction for the acquisition of 3D city models. For instance, the top surface of an LOD1 block model may be modelled at the eaves of a building or at its ridge height. Such variants, which we term geometric references, are often overlooked and are usually not documented in the metadata. Furthermore, the influence of a particular geometric reference on the performance of a spatial analysis is not known. In response to this research gap, we investigate a variety of LOD1 and LOD2 geometric references that are commonly employed, and perform numerical experiments to investigate their relative difference when used as input for different spatial analyses. We consider three use cases (estimation of the area of the building envelope, building volume, and shadows cast by buildings), and compute the deviations in a Monte Carlo simulation. The experiments, carried out with procedurally generated models, indicate that two 3D models representing the same building at the same LOD, but modelled according to different geometric references, may yield substantially different results when used in a spatial analysis. The outcome of our experiments also suggests that the geometric reference may have a bigger influence than the LOD, since an LOD1 with a specific geometric reference may yield a more accurate result than when using LOD2 models.
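
A minimal numeric sketch of why the geometric reference matters: for a hypothetical gable-roofed building, an LOD1 block extruded to the eaves underestimates the true volume while one extruded to the ridge overestimates it. All dimensions are invented for the example.

```python
def lod1_volume(width, length, height):
    """Volume of an LOD1 block model extruded to the given reference height."""
    return width * length * height

def gabled_volume(width, length, eaves, ridge):
    """True volume of a gable-roofed building: a box up to the eaves
    plus a triangular prism from the eaves to the ridge."""
    return width * length * eaves + width * length * (ridge - eaves) / 2.0

# Hypothetical building: 10 m x 8 m footprint, eaves at 6 m, ridge at 9 m.
w, l, he, hr = 10.0, 8.0, 6.0, 9.0
true_v = gabled_volume(w, l, he, hr)     # 600 m^3
at_eaves = lod1_volume(w, l, he)         # 480 m^3: underestimates by 20%
at_ridge = lod1_volume(w, l, hr)         # 720 m^3: overestimates by 20%
print(true_v, at_eaves, at_ridge)
```

Both blocks are valid LOD1 models of the same building, yet a volume-based analysis fed one or the other differs by 40% of the true value.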

  10. Steady-state and accident analyses of PBMR with the computer code SPECTRA

    International Nuclear Information System (INIS)

    Stempniewicz, Marek M.

    2002-01-01

    The SPECTRA code is an accident analysis code developed at NRG. It is designed for thermal-hydraulic analyses of nuclear or conventional power plants. The code is capable of analysing the whole power plant, including the reactor vessel, primary system, various control and safety systems, containment, and reactor building. The aim of the work presented in this paper was to prepare a preliminary thermal-hydraulic model of the PBMR for SPECTRA and to perform steady-state and accident analyses. In order to assess SPECTRA's capability to model PBMR reactors, a model of the INCOGEN system was prepared first. Steady-state and accident scenarios were analyzed for the INCOGEN configuration, and the results were compared to results obtained earlier with INAS and OCTOPUS/PANTHERMIX; good agreement was obtained. Accident analyses with the PBMR model gave qualitatively good results. It is concluded that SPECTRA is a suitable tool for analyzing high-temperature reactors such as INCOGEN or, for example, the PBMR (Pebble Bed Modular Reactor). Analyses of the INCOGEN and PBMR systems showed that in all analyzed cases the fuel temperatures remained within acceptable limits; consequently, there is no danger of release of radioactivity to the environment. It may be concluded that these are promising designs for future safe industrial reactors. (author)

  11. A STRONGLY COUPLED REACTOR CORE ISOLATION COOLING SYSTEM MODEL FOR EXTENDED STATION BLACK-OUT ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua [Idaho National Laboratory; Zhang, Hongbin [Idaho National Laboratory; Zou, Ling [Idaho National Laboratory; Martineau, Richard Charles [Idaho National Laboratory

    2015-03-01

    The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is the battery, which maintains the logic circuits that control the opening and/or closure of valves in the RCIC system; these valves control the RPV water level by shutting down the RCIC pump to avoid overfilling the RPV and flooding the steam line to the RCIC turbine. It is generally assumed in almost all existing station blackout (SBO) accident analyses that loss of DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, where the turbine would then be disabled. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models for RCIC system components are needed to understand extended SBOs in BWRs. As part of the effort to develop the next-generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike traditional SBO simulations, where mass flow rates are typically given in the input file through time-dependent functions, the real mass flow rates through the turbine and pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operation curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components for the primary system of a BWR, as well as the safety

  12. Monte Carlo modeling and analyses of YALINA-booster subcritical assembly part 1: analytical models and main neutronics parameters

    International Nuclear Information System (INIS)

    Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division

    2008-01-01

    This study was carried out to model and analyze the YALINA-Booster facility, of the Joint Institute for Power and Nuclear Research of Belarus, with the long term objective of advancing the utilization of accelerator driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions. In the latter two cases a deuteron beam is used to generate the neutrons. This study is a part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity without any geometrical homogenization using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, which is an extension of the MCNP code with burnup capability because of its additional feature for analyzing source driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1

  14. Network class superposition analyses.

    Directory of Open Access Journals (Sweden)

    Carl A B Pearson

    Full Text Available Networks are often used to understand a whole system by modeling the interactions among its pieces. Examples include biomolecules in a cell interacting to provide some primary function, or species in an environment forming a stable community. However, these interactions are often unknown; instead, the pieces' dynamic states are known, and network structure must be inferred. Because observed function may be explained by many different networks (e.g., ≈ 10^30 for the yeast cell cycle process), considering dynamics beyond this primary function means picking a single network or suitable sample: measuring over all networks exhibiting the primary function is computationally infeasible. We circumvent that obstacle by calculating the network class ensemble. We represent the ensemble by a stochastic matrix T, which is a transition-by-transition superposition of the system dynamics for each member of the class. We present concrete results for T derived from Boolean time series dynamics on networks obeying the Strong Inhibition rule, by applying T to several traditional questions about network dynamics. We show that the distribution of the number of point attractors can be accurately estimated with T. We show how to generate Derrida plots based on T. We show that T-based Shannon entropy outperforms other methods at selecting experiments to further narrow the network structure. We also outline an experimental test of predictions based on T. We motivate all of these results in terms of a popular molecular biology Boolean network model for the yeast cell cycle, but the methods and analyses we introduce are general. We conclude with open questions for T, for example, application to other models, computational considerations when scaling up to larger systems, and other potential analyses.
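
The construction of T can be sketched for a toy class of two hypothetical 2-gene Boolean networks. The update rules are invented and the class is far smaller than the yeast cell cycle ensemble, but the superposition and row-entropy steps follow the description above.

```python
import numpy as np

def transition_matrix(update, n_genes=2):
    """0/1 transition matrix of a deterministic Boolean update rule over
    the 2**n_genes global states (bit i of a state is gene i)."""
    size = 2 ** n_genes
    P = np.zeros((size, size))
    for s in range(size):
        bits = [(s >> i) & 1 for i in range(n_genes)]
        t = sum(b << i for i, b in enumerate(update(bits)))
        P[s, t] = 1.0
    return P

# Two hypothetical class members sharing a structure constraint.
net_a = lambda b: [b[1], b[0]]            # genes activate each other
net_b = lambda b: [b[1], 1 - b[0]]        # gene 0 inhibits gene 1

# T: transition-by-transition superposition (uniform mix) of the class.
T = 0.5 * (transition_matrix(net_a) + transition_matrix(net_b))

# Rows of T remain stochastic; their Shannon entropy measures how much
# the class members disagree about each state's successor.
with np.errstate(divide="ignore", invalid="ignore"):
    row_entropy = -np.nansum(np.where(T > 0, T * np.log2(T), 0.0), axis=1)
print(T.sum(axis=1), row_entropy)
```

Here the two members disagree on every state's successor, so each row of T splits its mass 0.5/0.5 and carries one bit of entropy; a state on which all members agree would have entropy zero.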

  15. HLA region excluded by linkage analyses of early onset periodontitis

    Energy Technology Data Exchange (ETDEWEB)

    Sun, C.; Wang, S.; Lopez, N.

    1994-09-01

    Previous studies suggested that HLA genes may influence susceptibility to early-onset periodontitis (EOP). Segregation analyses indicate that EOP may be due to a single major gene. We conducted linkage analyses to assess possible HLA effects on EOP. Fifty families with two or more close relatives affected by EOP were ascertained in Virginia and Chile. A microsatellite polymorphism within the HLA region (at the tumor necrosis factor beta locus) was typed using PCR. Linkage analyses used a dominant model most strongly supported by previous studies. Assuming locus homogeneity, our results exclude a susceptibility gene within 10 cM on either side of our marker locus. This encompasses all of the HLA region. Analyses assuming alternative models gave qualitatively similar results. Allowing for locus heterogeneity, our data still provide no support for HLA-region involvement. However, our data do not statistically exclude (LOD < -2.0) hypotheses of disease-locus heterogeneity, including models where up to half of our families could contain an EOP disease gene located in the HLA region. This is due to the limited power of even our relatively large collection of families and the inherent difficulties of mapping genes for disorders that have complex and heterogeneous etiologies. Additional statistical analyses, recruitment of families, and typing of flanking DNA markers are planned to more conclusively address these issues with respect to the HLA region and other candidate locations in the human genome. Additional results for markers covering most of the human genome will also be presented.

  16. Modelling and analysing interoperability in service compositions using COSMO

    NARCIS (Netherlands)

    Quartel, Dick; van Sinderen, Marten J.

    2008-01-01

    A service composition process typically involves multiple service models. These models may represent the composite and composed services from distinct perspectives, e.g. to model the role of some system that is involved in a service, and at distinct abstraction levels, e.g. to model the goal,

  17. Fundamental issues in finite element analyses of localization of deformation

    NARCIS (Netherlands)

    Borst, de R.; Sluys, L.J.; Mühlhaus, H.-B.; Pamin, J.

    1993-01-01

    Classical continuum models, i.e. continuum models that do not incorporate an internal length scale, suffer from excessive mesh dependence when strain-softening models are used in numerical analyses and cannot reproduce the size effect commonly observed in quasi-brittle failure. In this contribution

  18. Uncertainty and Sensitivity Analyses Plan

    International Nuclear Information System (INIS)

    Simpson, J.C.; Ramsdell, J.V. Jr.

    1993-04-01

    Hanford Environmental Dose Reconstruction (HEDR) Project staff are developing mathematical models to be used to estimate the radiation dose that individuals may have received as a result of emissions since 1944 from the US Department of Energy's (DOE) Hanford Site near Richland, Washington. An uncertainty and sensitivity analyses plan is essential to understand and interpret the predictions from these mathematical models. This is especially true in the case of the HEDR models where the values of many parameters are unknown. This plan gives a thorough documentation of the uncertainty and hierarchical sensitivity analysis methods recommended for use on all HEDR mathematical models. The documentation includes both technical definitions and examples. In addition, an extensive demonstration of the uncertainty and sensitivity analysis process is provided using actual results from the Hanford Environmental Dose Reconstruction Integrated Codes (HEDRIC). This demonstration shows how the approaches used in the recommended plan can be adapted for all dose predictions in the HEDR Project
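
The kind of Monte Carlo uncertainty and sensitivity analysis the plan describes can be sketched as follows. The three-factor dose model and its distributions are invented stand-ins for illustration, not the HEDR models or parameter values.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical dose model: dose = release * transport * uptake, with the
# three uncertain inputs sampled from illustrative lognormal distributions.
n = 10_000
release = rng.lognormal(mean=0.0, sigma=0.8, size=n)
transport = rng.lognormal(mean=0.0, sigma=0.2, size=n)
uptake = rng.lognormal(mean=0.0, sigma=0.4, size=n)
dose = release * transport * uptake

# Uncertainty analysis: summarize the spread of the output distribution.
p5, p50, p95 = np.percentile(dose, [5, 50, 95])

def spearman(x, y):
    """Spearman rank correlation (double argsort yields ranks; no ties
    occur with continuous samples)."""
    rx, ry = x.argsort().argsort(), y.argsort().argsort()
    return np.corrcoef(rx, ry)[0, 1]

# Sensitivity analysis: rank-correlate each input with the output to
# rank the inputs by their contribution to output uncertainty.
sens = {name: spearman(v, dose)
        for name, v in [("release", release), ("transport", transport),
                        ("uptake", uptake)]}
print((p5, p50, p95), sens)
```

With these assumed spreads, the widest input (release) dominates the output uncertainty, which is the kind of hierarchy a sensitivity analysis is meant to expose.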

  19. Modelling and Analyses of Embedded Systems Design

    DEFF Research Database (Denmark)

    Brekling, Aske Wiid

    We present the MoVES languages: a language with which embedded systems can be specified at a stage in the development process where an application is identified and should be mapped to an execution platform (potentially multi-core). We give a formal model for MoVES that captures and gives semantics to the elements of specifications in the MoVES language. We show that even for seemingly simple systems, the complexity of verifying real-time constraints can be overwhelming, but we give an upper limit to the size of the search space that needs examining. Furthermore, the formal model exposes … Model-based verification is a promising approach for assisting developers of embedded systems. We provide examples of system verifications that, in size and complexity, point in the direction of industrially interesting systems.

  20. The proton pump inhibitor, omeprazole, but not lansoprazole or pantoprazole, is a metabolism-dependent inhibitor of CYP2C19: implications for coadministration with clopidogrel.

    Science.gov (United States)

    Ogilvie, Brian W; Yerino, Phyllis; Kazmi, Faraz; Buckley, David B; Rostami-Hodjegan, Amin; Paris, Brandy L; Toren, Paul; Parkinson, Andrew

    2011-11-01

    As a direct-acting inhibitor of CYP2C19 in vitro, lansoprazole is more potent than omeprazole and other proton pump inhibitors (PPIs), but lansoprazole does not cause clinically significant inhibition of CYP2C19 whereas omeprazole does. To investigate this apparent paradox, we evaluated omeprazole, esomeprazole, R-omeprazole, lansoprazole, and pantoprazole for their ability to function as direct-acting and metabolism-dependent inhibitors (MDIs) of CYP2C19 in pooled human liver microsomes (HLM) as well as in cryopreserved hepatocytes and recombinant CYP2C19. In HLM, all PPIs were found to be direct-acting inhibitors of CYP2C19, with IC(50) values varying from 1.2 μM [lansoprazole; maximum plasma concentration (C(max)) = 2.2 μM] to 93 μM (pantoprazole; C(max) = 6.5 μM). In addition, we identified omeprazole, esomeprazole, R-omeprazole, and omeprazole sulfone as MDIs of CYP2C19 (they caused IC(50) shifts after a 30-min preincubation with NADPH-fortified HLM of 4.2-, 10-, 2.5-, and 3.2-fold, respectively), whereas lansoprazole and pantoprazole showed little or no IC(50) shift and so were not MDIs. These findings identify omeprazole, but not lansoprazole or pantoprazole, as an irreversible (or quasi-irreversible) MDI of CYP2C19. These results have important implications for the mechanism of the clinical interaction reported between omeprazole and clopidogrel, as well as other CYP2C19 substrates.
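The IC(50)-shift criterion used here is simple to state: fit an inhibition curve with and without a preincubation step, and take the ratio of the fitted IC(50) values. A minimal sketch with a Hill model and a crude grid-search fit (the curves and IC(50) values are invented, not the paper's data, and real analyses use proper nonlinear regression):

```python
def percent_inhibition(conc, ic50, hill=1.0):
    # Simple Hill model of enzyme inhibition (assumed functional form).
    return 100.0 * conc**hill / (ic50**hill + conc**hill)

def fit_ic50(concs, inhibitions):
    # Crude multiplicative grid search for the best-fitting IC50;
    # illustration only, not a substitute for nonlinear least squares.
    best, best_err = None, float("inf")
    c = 0.01
    while c < 1000:
        err = sum((percent_inhibition(x, c) - y) ** 2
                  for x, y in zip(concs, inhibitions))
        if err < best_err:
            best, best_err = c, err
        c *= 1.05
    return best

concs = [0.1, 0.3, 1, 3, 10, 30, 100]              # μM, hypothetical
direct = [percent_inhibition(c, 20.0) for c in concs]  # no preincubation
preinc = [percent_inhibition(c, 5.0) for c in concs]   # after 30 min + NADPH

# Fold shift well above ~1.5 is the usual flag for metabolism-dependent inhibition.
shift = fit_ic50(concs, direct) / fit_ic50(concs, preinc)
print(round(shift, 1))
```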

  1. Study on Measurement of Advanced Manufacturing: Case by China

    Directory of Open Access Journals (Sweden)

    She Jinghuai

    2017-01-01

    This article builds a system of indicators for measuring China's Advanced Manufacturing. Applying data from 2004 to 2013, we estimate the level of development and current status of China's Advanced Manufacturing (AM), and evaluate the measurement results by establishing a Hierarchical Linear Model (HLM). We confirm that China's Advanced Manufacturing is on a trend of rapid development. However, owing to differences in initial conditions of Advanced Manufacturing development, there is considerable regional imbalance; in contrast, a region with poor initial conditions tends to have a relatively fast development speed.

  2. New insights into survival trend analyses in cancer population-based studies: the SUDCAN methodology.

    Science.gov (United States)

    Uhry, Zoé; Bossard, Nadine; Remontet, Laurent; Iwaz, Jean; Roche, Laurent

    2017-01-01

    The main objective of the SUDCAN study was to compare, for 15 cancer sites, the trends in net survival and excess mortality rates from cancer 5 years after diagnosis between six European Latin countries (Belgium, France, Italy, Portugal, Spain and Switzerland). The data were extracted from the EUROCARE-5 database. The study period ranged from 6 (Portugal, 2000-2005) to 18 years (Switzerland, 1989-2007). Trend analyses were carried out separately for each country and cancer site; the number of cases ranged from 1500 to 104 000. We developed an original flexible excess-rate modelling strategy that accounts for the continuous effects of age, year of diagnosis, time since diagnosis and their interactions. Nineteen models were constructed; they differed in the modelling of the effect of the year of diagnosis in terms of linearity, proportionality and interaction with age. The final model was chosen according to the Akaike Information Criterion. The fit was assessed graphically by comparing model estimates versus nonparametric (Pohar-Perme) net survival estimates. Out of the 90 analyses carried out, the effect of the year of diagnosis on the excess mortality rate depended on age in 61 and was nonproportional in 64; it was nonlinear in 27 out of the 75 analyses where this effect was considered. The model fit was overall satisfactory. We successfully analysed 15 cancer sites in six countries. The refined methodology proved necessary for detailed trend analyses. It is hoped that three-dimensional parametric modelling will be used more widely in net survival trend studies as it has major advantages over stratified analyses.
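Selecting among nineteen candidate models by the Akaike Information Criterion reduces to computing AIC = 2k - 2 ln L (or, for Gaussian errors, n ln(RSS/n) + 2k up to a constant) and keeping the minimum. A toy sketch with two hand-specified candidates standing in for fitted regressions (synthetic data, not the SUDCAN registry data):

```python
import math

# Synthetic data with mild curvature plus a small deterministic wiggle.
xs = [i / 10 for i in range(1, 31)]
ys = [1.0 + 0.5 * x + 0.3 * x * x + 0.05 * math.sin(7 * x) for x in xs]

def aic(ys, fitted, k):
    # Gaussian AIC up to an additive constant: n*ln(RSS/n) + 2k,
    # where k counts the estimated parameters.
    n = len(ys)
    rss = sum((y - f) ** 2 for y, f in zip(ys, fitted))
    return n * math.log(rss / n) + 2 * k

# Two hand-specified candidates standing in for nested regression fits:
linear = [0.4 + 1.4 * x for x in xs]                   # misses the curvature
quadratic = [1.0 + 0.5 * x + 0.3 * x * x for x in xs]  # matches it

aic_lin = aic(ys, linear, k=3)
aic_quad = aic(ys, quadratic, k=4)
best = "quadratic" if aic_quad < aic_lin else "linear"
print(best)
```

The extra parameter of the richer model is penalized by the 2k term, so it is preferred only when its fit improvement outweighs the penalty, which is exactly the trade-off the SUDCAN strategy exploits across its nineteen candidates.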

  3. The Effect of Adherence to Dietary Tracking on Weight Loss: Using HLM to Model Weight Loss over Time.

    Science.gov (United States)

    Ingels, John Spencer; Misra, Ranjita; Stewart, Jonathan; Lucke-Wold, Brandon; Shawley-Brzoska, Samantha

    2017-01-01

    The role of dietary tracking in weight loss remains unexplored despite being part of multiple diabetes and weight management programs. Hence, participants of the Diabetes Prevention and Management (DPM) program (12 months, 22 sessions) tracked their food intake for the duration of the study. A scatterplot of days tracked versus total weight loss revealed a nonlinear relationship, so the number of possible tracking days was divided to create three groups of participants: rare trackers, inconsistent trackers, and consistent trackers (the latter tracking over 66% of total days). After controlling for initial body mass index, hemoglobin A1c, and gender, only consistent trackers had significant weight loss (-9.99 pounds), following a linear relationship with consistent loss throughout the year. In addition, the weight loss trend for the rare and inconsistent trackers followed a nonlinear path, with the holidays slowing weight loss and the onset of summer increasing weight loss. These results show the importance of frequent dietary tracking for consistent long-term weight loss success.
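The modelling idea, stripped of the HLM machinery, is a per-group trend of weight on time: a steady linear decline for consistent trackers versus a flatter, bumpier path for rare trackers. A dependency-free sketch with synthetic numbers (not the DPM data):

```python
# Per-group linear trends, echoing the idea of modelling weight change over
# time by tracking group. All numbers are synthetic.
def slope(ts, ws):
    # Ordinary least-squares slope of weight on time.
    n = len(ts)
    mt, mw = sum(ts) / n, sum(ws) / n
    num = sum((t - mt) * (w - mw) for t, w in zip(ts, ws))
    return num / sum((t - mt) ** 2 for t in ts)

months = list(range(12))
consistent = [200 - 0.85 * m for m in months]              # steady loss
rare = [200 - 0.1 * m + (1.5 if m in (10, 11) else 0.0)    # holiday bump
        for m in months]

slope_consistent = slope(months, consistent)
slope_rare = slope(months, rare)
print(round(slope_consistent, 3), round(slope_rare, 3))
```

A full hierarchical analysis would additionally model person-level random intercepts and slopes and the covariates named in the abstract; a mixed-model routine (e.g. one in a statistics package) would replace the hand-rolled slope here.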

  4. Applications of MIDAS regression in analysing trends in water quality

    Science.gov (United States)

    Penev, Spiridon; Leonte, Daniela; Lazarov, Zdravetz; Mann, Rob A.

    2014-04-01

    We discuss novel statistical methods for analysing trends in water quality. Such analysis uses complex data sets of different classes of variables, including water quality, hydrological and meteorological. We analyse the effect of rainfall and flow on trends in water quality utilising a flexible model called Mixed Data Sampling (MIDAS). This model arises because of the mixed frequency of the data collection: typically, water quality variables are sampled fortnightly, whereas rain data are sampled daily. The advantage of MIDAS regression is its flexible and parsimonious modelling of the influence of rain and flow on trends in water quality variables. We discuss the model and its implementation on a data set from the Shoalhaven Supply System and Catchments in the state of New South Wales, Australia. Information criteria indicate that MIDAS modelling improves upon simplistic approaches that do not utilise the mixed-frequency nature of the data.
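The parsimony of MIDAS comes from collapsing many high-frequency lags into a low-frequency regressor through a weighting function with only a couple of parameters. A sketch of the standard exponential Almon weighting, with hypothetical shape parameters and rainfall figures:

```python
import math

def almon_weights(n_lags, theta1, theta2):
    # Exponential Almon lag polynomial, a standard MIDAS weighting scheme:
    # w_j proportional to exp(theta1*j + theta2*j^2), normalised to sum to 1.
    raw = [math.exp(theta1 * j + theta2 * j * j) for j in range(n_lags)]
    total = sum(raw)
    return [r / total for r in raw]

# 14 daily rainfall readings mapped onto one fortnightly water-quality obs.
daily_rain = [0, 5, 2, 0, 0, 12, 3, 0, 1, 0, 0, 7, 0, 2]
w = almon_weights(14, 0.1, -0.05)   # hypothetical shape parameters

midas_regressor = sum(wi * r for wi, r in zip(w, daily_rain))
print(round(midas_regressor, 3))
```

In a full MIDAS regression the theta parameters are estimated jointly with the regression coefficients, so two parameters replace fourteen free lag coefficients.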

  5. Structural analyses of ITER toroidal field coils under fault conditions

    International Nuclear Information System (INIS)

    Jong, C.T.J.

    1992-04-01

    ITER (International Thermonuclear Experimental Reactor) is intended to be an experimental thermonuclear tokamak reactor testing the basic physics performance and technologies essential to future fusion reactors. The magnet system of ITER consists essentially of 4 sub-systems, i.e. toroidal field coils (TFCs), poloidal field coils (PFCs), power supplies, and cryogenic supplies. These subsystems do not contain significant radioactivity inventories, but the large energy inventory is a potential accident initiator. The aim of the structural analyses is to prevent accidents from propagating into the vacuum vessel, tritium system and cooling system, which all contain significant amounts of radioactivity. As part of the design process, 3 conditions are defined for the PF and TF coils at which mechanical behaviour has to be analyzed in some detail, viz: normal operating conditions, upset conditions and fault conditions. This paper describes the work carried out by ECN to create a detailed finite element model of 16 TFCs, as well as the results of some fault condition analyses made with the model. Due to fault conditions, either electrical or mechanical, the magnetic loading of TFCs becomes abnormal, and further mechanical failure of parts of the overall structure might occur (e.g. failure of coil, gravitational supports, intercoil structure). The analyses performed consist of linear elastic stress analyses and electro-magneto-structural analyses (coupled field analyses). 8 refs.; 5 figs.; 5 tabs

  6. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    Directory of Open Access Journals (Sweden)

    Sung-Chien Lin

    2014-07-01

    In this study, we used the approach of topic modeling to uncover the possible structure of research topics in the field of Informetrics, to explore the distribution of the topics over years, and to compare the core journals. In order to infer the structure of the topics in the field, the data of the papers published in the Journal of Informetrics and Scientometrics during 2007 to 2013 were retrieved from the database of the Web of Science as input to the topic modeling approach. The results of this study show that when the number of topics was set to 10, the topic model had the smallest perplexity. Although the data scope and analysis methods differ from previous studies, the topics generated in this study are consistent with the results produced by expert analyses. Empirical case studies and measurements of bibliometric indicators were considered important in every year of the analytic period, and the field grew increasingly stable. Both core journals paid broad attention to all of the topics in the field of Informetrics: the Journal of Informetrics put particular emphasis on the construction and application of bibliometric indicators, while Scientometrics focused on the evaluation and the factors of productivity of countries, institutions, domains, and journals.
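Choosing the topic count by perplexity works as follows: fit one model per candidate count, score each on held-out text, and keep the count with the lowest perplexity, i.e. exp of the negative per-word log-likelihood. A minimal sketch (the log-likelihoods below are invented; a real workflow would obtain them from fitted LDA models):

```python
import math

def perplexity(log_likelihood, n_words):
    # perplexity = exp(-(held-out log-likelihood) / word count);
    # lower values mean the topic model predicts the corpus better.
    return math.exp(-log_likelihood / n_words)

# Hypothetical held-out log-likelihoods for 5-, 10- and 15-topic models.
candidates = {5: -61000.0, 10: -58000.0, 15: -59500.0}
n_words = 10000
scores = {k: perplexity(ll, n_words) for k, ll in candidates.items()}
best_k = min(scores, key=scores.get)
print(best_k)
```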

  7. Using plant growth modeling to analyse C source-sink relations under drought: inter- and intra-specific comparison

    Directory of Open Access Journals (Sweden)

    Benoit Pallas

    2013-11-01

    The ability to assimilate C and allocate NSC (non-structural carbohydrates) to the most appropriate organs is crucial to maximize plant ecological or agronomic performance. Such C source and sink activities are differentially affected by environmental constraints. Under drought, plant growth is generally more sink than source limited, as organ expansion or appearance rate is affected earlier and more strongly than C assimilation. This favors plant survival and recovery but not always agronomic performance, as NSC are stored rather than used for growth due to a modified metabolism in source and sink leaves. Such interactions between plant C and water balance are complex, and plant modeling can help analyse their impact on plant phenotype. This paper addresses the impact of trade-offs between C sink and source activities on plant production under drought, combining experimental and modeling approaches. Two contrasted monocotyledonous species (rice, oil palm) were studied. Experimentally, the sink limitation of plant growth under moderate drought was confirmed, as were the modifications in NSC metabolism in source and sink organs. Under severe stress, when the C source became limiting, plant NSC concentration decreased. Two plant models dedicated to oil palm and rice morphogenesis were used to perform a sensitivity analysis and further explore how to optimize C sink and source drought sensitivity to maximize plant growth. The modeling results highlighted that optimal drought sensitivity depends both on drought type and on species, and that modeling is a great opportunity to analyse such complex processes. Further modeling needs, and more generally the challenge of using models to support complex-trait breeding, are discussed.

  8. Analyses of containment structures with corrosion damage

    International Nuclear Information System (INIS)

    Cherry, J.L.

    1997-01-01

    Corrosion damage that has been found in a number of nuclear power plant containment structures can degrade the pressure capacity of the vessel. This has prompted concerns regarding the capacity of corroded containments to withstand accident loadings. To address these concerns, finite element analyses have been performed for a typical PWR Ice Condenser containment structure. Using ABAQUS, the pressure capacity was calculated for a typical vessel with no corrosion damage. Multiple analyses were then performed with the location and the amount of corrosion varied in each analysis. Using a strain-based failure criterion, a "lower bound", "best estimate", and "upper bound" failure level was predicted for each case. These limits were established by: determining the amount of variability that exists in material properties of typical containments, estimating the amount of uncertainty associated with the level of modeling detail and modeling assumptions, and estimating the effect of corrosion on the material properties.

  9. The Influence of Study-Level Inference Models and Study Set Size on Coordinate-Based fMRI Meta-Analyses

    Directory of Open Access Journals (Sweden)

    Han Bossier

    2018-01-01

    Given the increasing amount of neuroimaging studies, there is a growing need to summarize published results. Coordinate-based meta-analyses use the locations of statistically significant local maxima, possibly with the associated effect sizes, to aggregate studies. In this paper, we investigate the influence of key characteristics of a coordinate-based meta-analysis on (1) the balance between false and true positives and (2) the activation reliability of the outcome from a coordinate-based meta-analysis. More particularly, we consider the influence of the chosen group-level model at the study level [fixed effects, ordinary least squares (OLS), or mixed effects models], the type of coordinate-based meta-analysis [Activation Likelihood Estimation (ALE), which only uses peak locations, versus fixed-effects and random-effects meta-analyses that take into account both peak location and height], and the number of studies included in the analysis (from 10 to 35). To do this, we apply a resampling scheme on a large dataset (N = 1,400) to create a test condition and compare this with an independent evaluation condition. The test condition corresponds to subsampling participants into studies and combining these using meta-analyses. The evaluation condition corresponds to a high-powered group analysis. We observe the best performance when using mixed effects models in individual studies combined with a random-effects meta-analysis. Moreover, the performance increases with the number of studies included in the meta-analysis. When peak height is not taken into consideration, we show that the popular ALE procedure is a good alternative in terms of the balance between type I and II errors. However, it requires more studies compared to other procedures in terms of activation reliability. Finally, we discuss the differences, interpretations, and limitations of our results.

  10. Extending and Applying Spartan to Perform Temporal Sensitivity Analyses for Predicting Changes in Influential Biological Pathways in Computational Models.

    Science.gov (United States)

    Alden, Kieran; Timmis, Jon; Andrews, Paul S; Veiga-Fernandes, Henrique; Coles, Mark

    2017-01-01

    Through integrating real-time imaging, computational modelling, and statistical analysis approaches, previous work has suggested that the induction of and response to cell adhesion factors is the key initiating pathway in early lymphoid tissue development, in contrast to the previously accepted view that the process is triggered by chemokine-mediated cell recruitment. These model-derived hypotheses were developed using spartan, an open-source sensitivity analysis toolkit designed to establish and understand the relationship between a computational model and the biological system that the model captures. Here, we extend the functionality available in spartan to permit the production of statistical analyses that contrast the behavior exhibited by a computational model at various simulated time-points, enabling a temporal analysis that could suggest whether the influence of biological mechanisms changes over time. We exemplify this extended functionality by using the computational model of lymphoid tissue development as a time-lapse tool. By generating results at twelve-hour intervals, we show how the extensions to spartan have been used to suggest that lymphoid tissue development could be biphasic, and to predict the time-point when a switch in the influence of biological mechanisms might occur.

  11. A response-modeling alternative to surrogate models for support in computational analyses

    International Nuclear Information System (INIS)

    Rutherford, Brian

    2006-01-01

    Often, the objectives in a computational analysis involve characterization of system performance based on some function of the computed response. In general, this characterization includes (at least) an estimate or prediction for some performance measure and an estimate of the associated uncertainty. Surrogate models can be used to approximate the response in regions where simulations were not performed. For most surrogate modeling approaches, however, (1) estimates are based on smoothing of available data and (2) uncertainty in the response is specified in a point-wise (in the input space) fashion. These aspects of the surrogate model construction might limit their capabilities. One alternative is to construct a probability measure, G(r), for the computer response, r, based on available data. This 'response-modeling' approach will permit probability estimation for an arbitrary event, E(r), based on the computer response. In this general setting, event probabilities can be computed as prob(E) = ∫ I(E(r)) dG(r), where I is the indicator function. Furthermore, one can use G(r) to calculate an induced distribution on a performance measure, pm. For prediction problems where the performance measure is a scalar, its distribution F_pm is determined by F_pm(z) = ∫ I(pm(r) ≤ z) dG(r). We introduce response models for scalar computer output and then generalize the approach to more complicated responses that utilize multiple response models.
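Both integrals above are expectations of indicator functions under G, so they are natural to estimate by Monte Carlo once G can be sampled. A sketch with an assumed G (a normal distribution) and a hypothetical performance measure, purely to illustrate the construction:

```python
import random

random.seed(0)

# Draw responses r from an assumed probability measure G (here a normal
# distribution, purely illustrative of the response-modeling idea).
draws = [random.gauss(10.0, 2.0) for _ in range(100000)]

def pm(r):
    return r * r  # hypothetical scalar performance measure

# prob(E) = integral of I(E(r)) dG(r), estimated by the sample mean
# of the indicator; here E is the event {r > 12}.
prob_E = sum(1 for r in draws if r > 12.0) / len(draws)

def F_pm(z):
    # Induced distribution: F_pm(z) = integral of I(pm(r) <= z) dG(r).
    return sum(1 for r in draws if pm(r) <= z) / len(draws)

print(round(prob_E, 3), round(F_pm(100.0), 3))
```

In practice G would be built from the available simulation data rather than assumed, but the event probability and induced-distribution calculations proceed exactly as sketched.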

  12. A Framework for Analysing Driver Interactions with Semi-Autonomous Vehicles

    Directory of Open Access Journals (Sweden)

    Siraj Shaikh

    2012-12-01

    Semi-autonomous vehicles are increasingly serving critical functions in various settings, from mining to logistics to defence. A key characteristic of such systems is the presence of the human (driver) in the control loop. To ensure safety, the driver needs to be aware of the autonomous aspects of the vehicle, and the automated features of the vehicle must be built to enable safer control. In this paper we propose a framework to combine empirical models describing human behaviour with the environment and system models. We then analyse, via model checking, the interaction between the models for desired safety properties. The aim is to analyse the design for safe vehicle-driver interaction. We demonstrate the applicability of our approach using a case study involving semi-autonomous vehicles where driver fatigue is a factor critical to a safe journey.

  13. Nuclear power plants: Results of recent safety analyses

    International Nuclear Information System (INIS)

    Steinmetz, E.

    1987-01-01

    The contributions deal with the problems posed by low radiation doses, with the information currently available from analyses of the Chernobyl reactor accident, and with risk assessments in connection with nuclear power plant accidents. Other points of interest include latest results on fission product release from reactor core or reactor building, advanced atmospheric dispersion models for incident and accident analyses, reliability studies on safety systems, and assessment of fire hazard in nuclear installations. The various contributions are found as separate entries in the database. (DG) [de

  14. A comparison of linear tyre models for analysing shimmy

    NARCIS (Netherlands)

    Besselink, I.J.M.; Maas, J.W.L.H.; Nijmeijer, H.

    2011-01-01

    A comparison is made between three linear, dynamic tyre models using low speed step responses and yaw oscillation tests. The match with the measurements improves with increasing complexity of the tyre model. Application of the different tyre models to a two degree of freedom trailing arm suspension

  15. Hybrid Logical Analyses of the Ambient Calculus

    DEFF Research Database (Denmark)

    Bolander, Thomas; Hansen, Rene Rydhof

    2010-01-01

    In this paper, hybrid logic is used to formulate three control flow analyses for Mobile Ambients, a process calculus designed for modelling mobility. We show that hybrid logic is very well-suited to express the semantic structure of the ambient calculus and how features of hybrid logic can...

  16. Results of radiotherapy in craniopharyngiomas analysed by the linear quadratic model

    Energy Technology Data Exchange (ETDEWEB)

    Guerkaynak, M. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Oezyar, E. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Zorlu, F. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Akyol, F.H. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Lale Atahan, I. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey)

    1994-12-31

    In 23 craniopharyngioma patients treated by limited surgery and external radiotherapy, the results concerning local control were analysed by the linear quadratic formula. A biologically effective dose (BED) of 55 Gy, calculated with a time factor and an α/β value of 10 Gy, seemed to be adequate for local control. (orig.).
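The BED quoted here follows the standard linear-quadratic relation BED = n·d·(1 + d/(α/β)), optionally reduced by a repopulation time factor k·(T - Tk). A small sketch; the time-factor parameters are placeholders, not values from the paper:

```python
def bed(n_fractions, dose_per_fraction, alpha_beta=10.0,
        k_per_day=0.0, overall_time=0.0, kickoff_time=0.0):
    # Linear-quadratic biologically effective dose with an optional
    # repopulation (time) correction:
    #   BED = n * d * (1 + d / (alpha/beta)) - k * (T - Tk)
    # k and Tk are placeholder repopulation parameters.
    core = n_fractions * dose_per_fraction * (1 + dose_per_fraction / alpha_beta)
    return core - k_per_day * max(0.0, overall_time - kickoff_time)

# e.g. 54 Gy in 30 fractions of 1.8 Gy at alpha/beta = 10 Gy:
bed_example = bed(30, 1.8)
print(round(bed_example, 2))
```

With no time correction the example gives 54 × 1.18 = 63.72 Gy; a nonzero k would subtract a repopulation penalty proportional to treatment time beyond the kick-off day.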

  17. Assessment of Tools and Data for System-Level Dynamic Analyses

    International Nuclear Information System (INIS)

    Piet, Steven J.; Soelberg, Nick R.

    2011-01-01

    The only fuel cycle for which dynamic analyses and assessments are not needed is the null fuel cycle - no nuclear power. For every other concept, dynamic analyses are needed and can influence the relative desirability of options. Dynamic analyses show how a fuel cycle might work during transitions from today's partial fuel cycle to something more complete, the impact of technology deployments, the location of choke points, the key time lags, when benefits can manifest, and how well parts of fuel cycles work together. This report summarizes the readiness of existing Fuel Cycle Technology (FCT) tools and data for conducting dynamic analyses on the range of options. VISION is the primary dynamic analysis tool. Not only does it model mass flows, as do other dynamic system analysis models, but it allows users to explore various potential constraints. The only fuel cycle for which constraints are not important is the one found in concept advocates' PowerPoint presentations; in contrast, comparative analyses of fuel cycles must address what constraints exist and how they could impact performance. The most immediate tool need is extending VISION to the thorium/U233 fuel cycle. Depending on further clarification of waste management strategies in general and for specific fuel cycle candidates, waste management sub-models in VISION may need enhancement, e.g., more on 'co-flows' of non-fuel materials, constraints in waste streams, or automatic classification of waste streams on the basis of user-specified rules. VISION originally had an economic sub-model. The economic calculations were deemed unnecessary in later versions, so it was retired. Eventually, the program will need to restore and improve the economics sub-model of VISION to at least the cash flow stage, and possibly to incorporate cost constraints and feedbacks. There are multiple sources of data that dynamic analyses can draw on. In this report, 'data' means experimental data, data from more detailed theoretical or empirical

  18. Assessment of Tools and Data for System-Level Dynamic Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Steven J. Piet; Nick R. Soelberg

    2011-06-01

    The only fuel cycle for which dynamic analyses and assessments are not needed is the null fuel cycle - no nuclear power. For every other concept, dynamic analyses are needed and can influence the relative desirability of options. Dynamic analyses show how a fuel cycle might work during transitions from today's partial fuel cycle to something more complete, the impact of technology deployments, the location of choke points, the key time lags, when benefits can manifest, and how well parts of fuel cycles work together. This report summarizes the readiness of existing Fuel Cycle Technology (FCT) tools and data for conducting dynamic analyses on the range of options. VISION is the primary dynamic analysis tool. Not only does it model mass flows, as do other dynamic system analysis models, but it allows users to explore various potential constraints. The only fuel cycle for which constraints are not important is the one found in concept advocates' PowerPoint presentations; in contrast, comparative analyses of fuel cycles must address what constraints exist and how they could impact performance. The most immediate tool need is extending VISION to the thorium/U233 fuel cycle. Depending on further clarification of waste management strategies in general and for specific fuel cycle candidates, waste management sub-models in VISION may need enhancement, e.g., more on 'co-flows' of non-fuel materials, constraints in waste streams, or automatic classification of waste streams on the basis of user-specified rules. VISION originally had an economic sub-model. The economic calculations were deemed unnecessary in later versions, so it was retired. Eventually, the program will need to restore and improve the economics sub-model of VISION to at least the cash flow stage, and possibly to incorporate cost constraints and feedbacks. There are multiple sources of data that dynamic analyses can draw on. In this report, 'data' means experimental data, data from more detailed

  19. Global post-Kyoto scenario analyses at PSI

    Energy Technology Data Exchange (ETDEWEB)

    Kypreos, S [Paul Scherrer Inst. (PSI), Villigen (Switzerland)

    1999-08-01

    Scenario analyses are described here using the Global MARKAL-Macro Trade (GMMT) model to study the economic implications of the Kyoto Protocol to the UN Convention on Climate change. Some conclusions are derived in terms of efficient implementations of the post-Kyoto extensions of the Protocol. (author) 2 figs., 5 refs.

  20. Global post-Kyoto scenario analyses at PSI

    International Nuclear Information System (INIS)

    Kypreos, S.

    1999-01-01

    Scenario analyses are described here using the Global MARKAL-Macro Trade (GMMT) model to study the economic implications of the Kyoto Protocol to the UN Convention on Climate change. Some conclusions are derived in terms of efficient implementations of the post-Kyoto extensions of the Protocol. (author) 2 figs., 5 refs

  1. LOCO - a linearised model for analysing the onset of coolant oscillations and frequency response of boiling channels

    International Nuclear Information System (INIS)

    Romberg, T.M.

    1982-12-01

    Industrial plant such as heat exchangers and nuclear and conventional boilers are prone to coolant flow oscillations which may not be detected. In this report, a hydrodynamic model is formulated in which the one-dimensional, non-linear, partial differential equations for the conservation of mass, energy and momentum are perturbed with respect to time, linearised, and Laplace-transformed into the s-domain for frequency response analysis. A computer program has been developed to integrate numerically the resulting non-linear ordinary differential equations by finite difference methods. A sample problem demonstrates how the computer code is used to analyse the frequency response and flow stability characteristics of a heated channel
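The linearise-then-Laplace-transform step described here turns the perturbed conservation equations into transfer functions that can be evaluated along s = jω for frequency response. A far simpler stand-in than LOCO's hydrodynamic model, a first-order lag, illustrates the evaluation:

```python
# Frequency response of a linearised model evaluated along s = j*omega,
# illustrated on a first-order lag G(s) = K / (tau*s + 1). This toy
# transfer function is a stand-in; LOCO's perturbed conservation
# equations yield a far richer system.
def freq_response(K, tau, omega):
    s = 1j * omega                 # Laplace variable on the imaginary axis
    return K / (tau * s + 1)

K, tau = 2.0, 0.5
gain_low = abs(freq_response(K, tau, 0.01))        # near-static gain, ~K
gain_corner = abs(freq_response(K, tau, 1 / tau))  # -3 dB at the corner frequency
print(round(gain_low, 3), round(gain_corner, 3))
```

Stability assessment of a boiling channel would examine the gain and phase of the full (higher-order) transfer function in just this way, looking for frequencies where feedback around the channel can sustain oscillations.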

  2. Indian Point 2 steam generator tube rupture analyses

    International Nuclear Information System (INIS)

    Dayan, A.

    1985-01-01

    Analyses were conducted with RETRAN-02 to study the consequences of steam generator tube rupture (SGTR) events. The Indian Point, Unit 2, power plant (IP2, PWR) was modeled as two asymmetric loops, consisting of 27 volumes and 37 junctions. The break section was modeled once, conservatively, as a 150% flow-area opening at the wall of the steam generator cold-leg plenum, and once as a 200% double-ended tube break. Results revealed a 60% overprediction of break-flow rates by the traditional conservative model. Two SGTR transients were studied, one with a low-pressure reactor trip and one with an earlier reactor trip via over-temperature ΔT. The former is more typical of a plant with a low reactor average temperature, such as IP2. Transient analyses for a single-tube break event over 500 seconds indicated continued primary subcooling and no need for steam-line pressure relief. In addition, SGTR transients with reactor trip while the pressurizer still contains water were found to favorably reduce depressurization rates. Comparison of the conservative results with independent LOFTRAN predictions showed good agreement.

  3. Analysing Models as a Knowledge Technology in Transport Planning

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik

    2011-01-01

    Models belong to a wider family of knowledge technologies applied in the transport area. Models sometimes share with other such technologies the fate of not being used as intended, or not at all. The result may be ill-conceived plans as well as wasted resources, and frequently the blame falls on the models themselves. The paper draws on the critical analytic literature on knowledge utilization and policy influence. A simple scheme based in this literature is drawn up to provide a framework for discussing the interface between urban transport planning and model use. A successful example of model use in Stockholm, Sweden is used as a heuristic device to illuminate how such an analytic scheme may allow patterns of insight about the use, influence and role of models in planning to emerge. The main contribution of the paper is to demonstrate that concepts and terminologies from the knowledge use literature can provide interpretations of significance.

  4. Modeling and analysing storage systems in agricultural biomass supply chain for cellulosic ethanol production

    International Nuclear Information System (INIS)

    Ebadian, Mahmood; Sowlati, Taraneh; Sokhansanj, Shahab; Townley-Smith, Lawrence; Stumborg, Mark

    2013-01-01

    Highlights: ► Studied the agricultural biomass supply chain for cellulosic ethanol production. ► Evaluated the impact of storage systems on different supply chain actors. ► Developed a combined simulation/optimization model to evaluate storage systems. ► Compared two satellite storage systems with roadside storage in terms of costs and emitted CO2. ► SS would lead to a more cost-efficient supply chain compared to roadside storage. -- Abstract: In this paper, a combined simulation/optimization model is developed to better understand and evaluate the impact of storage systems on the costs incurred by each actor in the agricultural biomass supply chain, including farmers, hauling contractors and the cellulosic ethanol plant. The optimization model prescribes the optimum number and location of farms and storages. It also determines the supply radius, the number of farms required to secure the annual supply of biomass, and the assignment of farms to storage locations. Given the specific design of the supply chain determined by the optimization model, the simulation model determines the number of required machines for each operation, their daily working schedule and utilization rates, along with the capacities of storages. To evaluate the impact of the storage systems on the delivered costs, three storage systems are modelled and compared: a roadside storage (RS) system and two satellite storage (SS) systems, namely SS with fixed hauling distance (SF) and SS with variable hauling distance (SV). In all storage systems, it is assumed that the loading equipment is dedicated to storage locations. The results obtained from a real case study provide detailed cost figures for each storage system, since the developed model analyses the supply chain on an hourly basis and considers the time-dependence and stochasticity of the supply chain. Comparison of the storage systems shows SV would outperform SF and RS, reducing the total delivered cost by 8% and 6%, respectively

  5. A simulation model for analysing brain structure deformations

    Energy Technology Data Exchange (ETDEWEB)

    Bona, Sergio Di [Institute for Information Science and Technologies, Italian National Research Council (ISTI–CNR), Via G Moruzzi, 1-56124 Pisa (Italy); Lutzemberger, Ludovico [Department of Neuroscience, Institute of Neurosurgery, University of Pisa, Via Roma, 67-56100 Pisa (Italy); Salvetti, Ovidio [Institute for Information Science and Technologies, Italian National Research Council (ISTI–CNR), Via G Moruzzi, 1-56124 Pisa (Italy)

    2003-12-21

    Recent developments of medical software applications, from the simulation to the planning of surgical operations, have revealed the need for modelling human tissues and organs not only from a geometric point of view but also from a physical one, i.e. soft tissues, rigid body, viscoelasticity, etc. This has given rise to the term 'deformable objects', which refers to objects with a morphology and a physical and mechanical behaviour of their own that reflect their natural properties. In this paper, we propose a model, based upon physical laws, suitable for the realistic manipulation of geometric reconstructions of volumetric data taken from MR and CT scans. In particular, a physically based model of the brain is presented that is able to simulate the evolution of intra-cranial pathological phenomena of different natures, such as haemorrhages, neoplasms, haematomas, etc., and to describe the consequences caused by their volume expansions and the influence they have on the anatomical and neuro-functional structures of the brain.

  6. Evaluation of Temperature and Humidity Profiles of Unified Model and ECMWF Analyses Using GRUAN Radiosonde Observations

    Directory of Open Access Journals (Sweden)

    Young-Chan Noh

    2016-07-01

    Full Text Available Temperature and water vapor profiles from the Korea Meteorological Administration (KMA) and the United Kingdom Met Office (UKMO) Unified Model (UM) data assimilation systems and from reanalysis fields of the European Centre for Medium-Range Weather Forecasts (ECMWF) were assessed using collocated radiosonde observations from the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) for January–December 2012. The motivation was to examine the overall performance of the data assimilation outputs. The difference statistics of the collocated model outputs versus the radiosonde observations indicated good agreement amongst the datasets for temperature, while less agreement was found for relative humidity. A comparison of the UM outputs from the UKMO and the KMA revealed that they are similar to each other. The introduction of the new version of the UM at the KMA in May 2012 resulted in improved analysis performance, particularly for the moisture field. On the other hand, the ECMWF reanalysis data showed slightly reduced performance for relative humidity compared with the UM, with a significant humid bias in the upper troposphere. The ECMWF reanalysis temperature fields showed nearly the same performance as the two UM analyses. The root mean square differences (RMSDs) of the relative humidity for the three models were larger in more humid conditions, suggesting that humidity forecasts are less reliable under these conditions.

  7. Application of a weighted spatial probability model in GIS to analyse landslides in Penang Island, Malaysia

    Directory of Open Access Journals (Sweden)

    Samy Ismail Elmahdy

    2016-01-01

    Full Text Available In the current study, Penang Island, one of several mountainous areas in Malaysia that is often subjected to landslide hazard, was chosen for further investigation. A multi-criteria evaluation, a weighted spatial probability approach and a model builder were applied to map and analyse landslides in Penang Island. A set of automated algorithms was used to construct new essential geological and morphometric thematic maps from remote sensing data. The maps were ranked using the weighted spatial probability model based on their contribution to the landslide hazard. The results obtained showed that sites at an elevation of 100–300 m, with steep slopes of 10°–37° and a slope direction (aspect) in the E and SE directions, were areas of very high and high probability of landslide occurrence; the total areas were 21.393 km2 (11.84%) and 58.690 km2 (32.48%), respectively. The obtained map was verified by comparing variogram models of the mapped and the observed landslide locations, and showed a strong correlation with the locations of observed landslides, indicating that the proposed method can successfully predict landslide hazard. The method is time- and cost-effective and can be used as a reference by geological and geotechnical engineers.
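    The weighted ranking described above can be sketched as a simple weighted overlay: each thematic layer contributes a normalized score per map cell, the hazard index is the weighted sum, and the index is binned into susceptibility classes. The layer names, weights, scores and class breaks below are illustrative assumptions, not values from the study.

    ```python
    # Weighted spatial probability overlay (sketch; weights/scores are hypothetical).

    def hazard_index(cell_scores, weights):
        """Weighted sum of per-layer scores for one map cell.
        cell_scores and weights are dicts keyed by layer name."""
        return sum(weights[k] * cell_scores[k] for k in weights)

    def classify(index, breaks=(0.25, 0.5, 0.75)):
        """Bin a hazard index in [0, 1] into susceptibility classes."""
        labels = ("low", "moderate", "high", "very high")
        for b, label in zip(breaks, labels):
            if index < b:
                return label
        return labels[-1]

    # Assumed layer weights (their relative contribution to landslide hazard)
    weights = {"elevation": 0.3, "slope": 0.45, "aspect": 0.25}
    # Normalized scores for one hypothetical cell (e.g. 200 m, 30° slope, SE aspect)
    cell = {"elevation": 0.8, "slope": 0.9, "aspect": 0.7}

    idx = hazard_index(cell, weights)  # 0.3*0.8 + 0.45*0.9 + 0.25*0.7 = 0.82
    print(classify(idx))               # very high
    ```

    In a GIS model builder this computation would run cell-by-cell over the ranked raster layers; the sketch only shows the per-cell arithmetic.
    
    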

  8. Preliminary analyses of AP600 using RELAP5

    International Nuclear Information System (INIS)

    Modro, S.M.; Beelman, R.J.; Fisher, J.E.

    1991-01-01

    This paper presents results of preliminary analyses of the proposed Westinghouse Electric Corporation AP600 design. AP600 is a two loop, 600 MW(e) pressurized water reactor (PWR) arranged in a two hot leg, four cold leg nuclear steam supply system (NSSS) configuration. In contrast to the present generation of PWRs, it is equipped with passive emergency core coolant (ECC) systems. Also, the containment and the safety systems of the AP600 interact with the reactor coolant system and each other in a more integral fashion than in present-day PWRs. The containment in this design is the ultimate heat sink for removal of decay heat to the environment. Idaho National Engineering Laboratory (INEL) has studied the applicability of the RELAP5 code to AP600 safety analysis and has developed a model of the AP600 for the Nuclear Regulatory Commission. The model incorporates integral modeling of the containment, NSSS and passive safety systems. The best available preliminary design data were used. Nodalization sensitivity studies were conducted to gain experience in modeling of systems and conditions which are beyond the applicability of previously established RELAP5 modeling guidelines or experience. Exploratory analyses were then undertaken to investigate AP600 system response during postulated accident conditions. Four small break LOCA calculations and two large break LOCA calculations were conducted.

  9. Thermoelastic analyses of spent fuel repositories in bedded and dome salt. Technical memorandum report RSI-0054

    International Nuclear Information System (INIS)

    Callahan, G.D.; Ratigan, J.L.

    1978-01-01

    Global thermoelastic analyses of bedded and dome salt models showed a slight preference for the bedded salt model through the range of thermal loading conditions. Spent fuel thermal loadings should be less than 75 kW/acre of the repository pending more accurate material modeling. One should first limit the study to one or two spent fuel thermal loadings (i.e. 75 kW/acre and/or 50 kW/acre), analysed up to a maximum time of approximately 2000 years. Parametric thermoelastic analyses could then be readily obtained to determine the influence of the thermomechanical properties. Recommendations for further study include parametric analyses, plasticity analyses, consideration of the material interfaces as joints, and possibly consideration of a global joint pattern (i.e. jointed at the same orientation everywhere) for the non-salt materials. Subsequently, viscoelastic analyses could be performed.

  10. Density dependent forces and large basis structure models in the analyses of 12C(p,p') reactions at 135 MeV

    International Nuclear Information System (INIS)

    Bauhoff, W.; Collins, S.F.; Henderson, R.S.

    1983-01-01

    Differential cross-sections have been measured for the elastic and inelastic scattering of 135 MeV protons from 12C. The data from the transitions to 9 select states up to 18.3 MeV in excitation have been analysed using a distorted wave approximation with various microscopic model nuclear structure transition densities and free and density-dependent two-nucleon t-matrices. Clear signatures of the density dependence of the t-matrix are defined, and the utility of select transitions to test different attributes of that t-matrix when good nuclear structure models are used is established.

  11. Performance Limiting Flow Processes in High-Stage Loading High-Mach Number Compressors

    National Research Council Canada - National Science Library

    Tan, Choon S

    2008-01-01

    In high-stage loading high-Mach number (HLM) compressors, counter-rotating pairs of discrete vortices are shed at the trailing edge of the upstream blade row at a frequency corresponding to the downstream rotor blade passing frequency...

  12. Energy and exergy analyses of the diffusion absorption refrigeration system

    International Nuclear Information System (INIS)

    Yıldız, Abdullah; Ersöz, Mustafa Ali

    2013-01-01

    This paper describes the thermodynamic analyses of a DAR (diffusion absorption refrigeration) cycle. The experimental apparatus is set up as an ammonia–water DAR cycle with helium as the auxiliary inert gas. A thermodynamic model including mass, energy and exergy balance equations is presented for each component of the DAR cycle, and this model is then validated by comparison with experimental data. In the thermodynamic analyses, the energy and exergy losses for each component of the system are quantified and illustrated, and the system's energy and exergy losses and efficiencies are investigated. The highest energy and exergy losses occur in the solution heat exchanger. The highest energy losses in the experimental and theoretical analyses are found to be 25.7090 W and 25.4788 W respectively, whereas the corresponding exergy losses are calculated as 13.7933 W and 13.9976 W. While the energy efficiencies obtained from both the model and the experimental study are calculated as 0.1858, the corresponding exergy efficiencies are found to be 0.0260 and 0.0356. - Highlights: • The diffusion absorption refrigeration system is designed, manufactured and tested. • The energy and exergy analyses of the system are presented theoretically and experimentally. • The energy and exergy losses are investigated for each component of the system. • The highest energy and exergy losses occur in the solution heat exchanger. • The energy and exergy performances are also calculated.
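    The component-wise balances mentioned above, with each DAR component treated as a steady-state control volume, take the standard textbook form; the symbols and the dead-state notation (h0, s0, T0) below are generic, not taken from the paper:

    ```latex
    % Mass balance for a component (steady state)
    \sum \dot{m}_{\mathrm{in}} = \sum \dot{m}_{\mathrm{out}}

    % Energy balance (heat in, work out convention)
    \sum \dot{m}_{\mathrm{in}}\, h_{\mathrm{in}} + \dot{Q}
      = \sum \dot{m}_{\mathrm{out}}\, h_{\mathrm{out}} + \dot{W}

    % Exergy destruction (loss) and the flow exergy of a stream
    \dot{E}x_{\mathrm{dest}}
      = \sum \dot{E}x_{\mathrm{in}} - \sum \dot{E}x_{\mathrm{out}},
    \qquad
    \dot{E}x = \dot{m}\left[(h - h_0) - T_0\,(s - s_0)\right]
    ```

    Evaluating these balances component by component is what yields the per-component energy and exergy losses reported in the abstract, with the solution heat exchanger emerging as the dominant loss site.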

  13. Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events

    Science.gov (United States)

    Dinitz, Laura B.; Taketa, Richard A.

    2013-01-01

    This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.

  14. Optimization of a Centrifugal Boiler Circulating Pump's Casing Based on CFD and FEM Analyses

    Directory of Open Access Journals (Sweden)

    Zhigang Zuo

    2014-04-01

    Full Text Available It is important to evaluate the economic efficiency of boiler circulating pumps in the manufacturing process from the manufacturers' point of view. The possibility of optimizing the pump casing with respect to structural pressure integrity and hydraulic performance was discussed. CFD analyses of pump models with different pump casing sizes were first carried out for the hydraulic performance evaluation. The effects of the working temperature and the sealing ring on the hydraulic efficiency were discussed, and a model with a casing diameter of 0.875D40 was selected for further analyses. FEM analyses were then carried out on different combinations of casing size, casing wall thickness and material, to evaluate safety related to pressure integrity, with respect to both static and fatigue strength analyses. Two models, with forging and cast materials, were selected as final results.

  15. Passive safety injection experiments and analyses (PAHKO)

    International Nuclear Information System (INIS)

    Tuunanen, J.

    1998-01-01

    The PAHKO project involved experiments on the PACTEL facility and computer simulations of selected experiments. The experiments focused on the performance of Passive Safety Injection Systems (PSIS) of Advanced Light Water Reactors (ALWRs) in Small Break Loss-Of-Coolant Accident (SBLOCA) conditions. The PSIS consisted of a Core Make-up Tank (CMT) and two pipelines (Pressure Balancing Line, PBL, and Injection Line, IL). The examined PSIS worked efficiently in SBLOCAs, although the flow through the PSIS stopped temporarily if the break was very small and hot water filled the CMT. The experiments demonstrated the importance of the flow distributor in the CMT in limiting rapid condensation. The project included validation of three thermal-hydraulic computer codes (APROS, CATHARE and RELAP5). The analyses showed that the codes are capable of simulating the overall behaviour of the transients. The detailed analyses of the results showed that some models in the codes still need improvement. In particular, further development of the models for thermal stratification, condensation and natural circulation flow with small driving forces would be necessary for accurate simulation of the PSIS phenomena. (orig.)

  16. A Formal Model to Analyse the Firewall Configuration Errors

    Directory of Open Access Journals (Sweden)

    T. T. Myo

    2015-01-01

    Full Text Available The firewall is widely known as a brandmauer (security-edge gateway). To provide the demanded security, the firewall has to be appropriately adjusted, i.e. configured. Unfortunately, even skilled administrators may make mistakes when configuring, which results in a decreased level of network security and infiltration of the network by undesirable packets. The network can be exposed to various threats and attacks; one of the mechanisms used to ensure network security is the firewall. The firewall is a network component which, using a security policy, controls packets passing through the borders of a secured network. The security policy represents a set of rules. Packet filters work in the stateless mode, without inspection of state: they investigate packets as independent objects. Rules take the following form: (condition, action). The firewall analyses the incoming traffic based on the IP addresses of the sender and recipient, the port numbers of the sender and recipient, and the protocol used. When a packet meets a rule's conditions, the action specified in the rule is carried out; it can be: allow, deny. The aim of this article is to develop tools to analyse a firewall configuration with inspection of states. The input data are a file with the set of rules. It is required to present the analysis of the security policy in an informative graphic form as well as to reveal inconsistencies present in the rules. The article presents a security policy visualization algorithm and a program which shows how the firewall rules act on all possible packets. To represent the result in an intelligible form, a concept of the equivalence region is introduced. Our task is for the program to display the results of the rules' action on packets in a convenient graphic form as well as to reveal contradictions between the rules. One of the problems is the large number of dimensions. As noted above, the following parameters are specified in a rule: source IP address, destination IP
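    The stateless (condition, action) matching described above can be sketched in a few lines. The rule fields, the first-match-wins semantics and the default-deny policy below are illustrative assumptions about a typical packet filter, not the specific tool of the article:

    ```python
    # Stateless packet filter sketch: each rule is a (condition, action) pair;
    # the first rule whose condition matches the packet decides its fate.
    import ipaddress

    def make_rule(src, dst, dport, proto, action):
        """Build a rule: CIDR source/destination, destination port (None = any), protocol."""
        return {"src": ipaddress.ip_network(src), "dst": ipaddress.ip_network(dst),
                "dport": dport, "proto": proto, "action": action}

    def matches(rule, pkt):
        """Check whether a packet satisfies the rule's condition."""
        return (ipaddress.ip_address(pkt["src"]) in rule["src"]
                and ipaddress.ip_address(pkt["dst"]) in rule["dst"]
                and rule["dport"] in (None, pkt["dport"])
                and rule["proto"] == pkt["proto"])

    def filter_packet(rules, pkt, default="deny"):
        """Return the action of the first matching rule, else the default policy."""
        for rule in rules:
            if matches(rule, pkt):
                return rule["action"]
        return default

    rules = [
        make_rule("0.0.0.0/0", "10.0.0.0/24", 80, "tcp", "allow"),   # web traffic in
        make_rule("0.0.0.0/0", "10.0.0.0/24", None, "tcp", "deny"),  # all other TCP
    ]
    pkt = {"src": "8.8.8.8", "dst": "10.0.0.5", "dport": 80, "proto": "tcp"}
    print(filter_packet(rules, pkt))  # allow
    ```

    Analysing which regions of the (source, destination, port, protocol) space each rule actually governs — the "equivalence regions" of the article — amounts to running such a match over all possible packets and grouping packets that hit the same rule.
    
    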

  17. Thermodynamic analysis and modeling of thermo compressor; Analyse et modelisation thermodynamique du mouvement du piston d'un thermocompresseur

    Energy Technology Data Exchange (ETDEWEB)

    Arques, Ph. [Ecole Centrale de Lyon, 69 - Ecully (France)

    1998-07-01

    A thermo-compressor is a compressor that directly transforms the heat released by a source into pressure energy, without intermediate mechanical work. It is a conversion of the Stirling engine into a driven machine, in which the piston that provides the work has been suppressed. In this article, we present analytical and numerical analyses modelling the heat and mass transfers in the different volumes of the thermo-compressor. This engine comprises a free displacer piston that separates the cold and hot gas. (author)

  18. Comparative sequence and structural analyses of G-protein-coupled receptor crystal structures and implications for molecular models.

    Directory of Open Access Journals (Sweden)

    Catherine L Worth

    Full Text Available BACKGROUND: Up until recently the only available experimental (high resolution) structure of a G-protein-coupled receptor (GPCR) was that of bovine rhodopsin. In the past few years the determination of GPCR structures has accelerated with three new receptors, as well as squid rhodopsin, being successfully crystallized. All share a common molecular architecture of seven transmembrane helices and can therefore serve as templates for building molecular models of homologous GPCRs. However, despite the common general architecture of these structures key differences do exist between them. The choice of which experimental GPCR structure(s) to use for building a comparative model of a particular GPCR is unclear and without detailed structural and sequence analyses, could be arbitrary. The aim of this study is therefore to perform a systematic and detailed analysis of sequence-structure relationships of known GPCR structures. METHODOLOGY: We analyzed in detail conserved and unique sequence motifs and structural features in experimentally-determined GPCR structures. Deeper insight into specific and important structural features of GPCRs as well as valuable information for template selection has been gained. Using key features a workflow has been formulated for identifying the most appropriate template(s) for building homology models of GPCRs of unknown structure. This workflow was applied to a set of 14 human family A GPCRs suggesting for each the most appropriate template(s) for building a comparative molecular model. CONCLUSIONS: The available crystal structures represent only a subset of all possible structural variation in family A GPCRs. Some GPCRs have structural features that are distributed over different crystal structures or which are not present in the templates suggesting that homology models should be built using multiple templates. This study provides a systematic analysis of GPCR crystal structures and a consistent method for identifying

  19. Comparative sequence and structural analyses of G-protein-coupled receptor crystal structures and implications for molecular models.

    Science.gov (United States)

    Worth, Catherine L; Kleinau, Gunnar; Krause, Gerd

    2009-09-16

    Up until recently the only available experimental (high resolution) structure of a G-protein-coupled receptor (GPCR) was that of bovine rhodopsin. In the past few years the determination of GPCR structures has accelerated with three new receptors, as well as squid rhodopsin, being successfully crystallized. All share a common molecular architecture of seven transmembrane helices and can therefore serve as templates for building molecular models of homologous GPCRs. However, despite the common general architecture of these structures key differences do exist between them. The choice of which experimental GPCR structure(s) to use for building a comparative model of a particular GPCR is unclear and without detailed structural and sequence analyses, could be arbitrary. The aim of this study is therefore to perform a systematic and detailed analysis of sequence-structure relationships of known GPCR structures. We analyzed in detail conserved and unique sequence motifs and structural features in experimentally-determined GPCR structures. Deeper insight into specific and important structural features of GPCRs as well as valuable information for template selection has been gained. Using key features a workflow has been formulated for identifying the most appropriate template(s) for building homology models of GPCRs of unknown structure. This workflow was applied to a set of 14 human family A GPCRs suggesting for each the most appropriate template(s) for building a comparative molecular model. The available crystal structures represent only a subset of all possible structural variation in family A GPCRs. Some GPCRs have structural features that are distributed over different crystal structures or which are not present in the templates suggesting that homology models should be built using multiple templates. This study provides a systematic analysis of GPCR crystal structures and a consistent method for identifying suitable templates for GPCR homology modelling that will

  20. Identification of AKB-48 and 5F-AKB-48 Metabolites in Authentic Human Urine Samples Using Human Liver Microsomes and Time of Flight Mass Spectrometry.

    Science.gov (United States)

    Vikingsson, Svante; Josefsson, Martin; Gréen, Henrik

    2015-01-01

    The occurrence of structurally related synthetic cannabinoids makes the identification of unique markers of drug intake particularly challenging. The aim of this study was to identify unique and abundant metabolites of AKB-48 and 5F-AKB-48 for toxicological screening in urine. Investigations of authentic urine samples from forensic cases, in combination with human liver microsome (HLM) experiments, were used for identification of metabolites. HLM incubations of AKB-48 and 5F-AKB-48, along with 35 urine samples from authentic cases, were analyzed with liquid chromatography quadrupole tandem time-of-flight mass spectrometry. Using HLMs, 41 metabolites of AKB-48 and 37 metabolites of 5F-AKB-48 were identified, principally represented by hydroxylation but also by ketone formation and dealkylation. Monohydroxylated metabolites were replaced by di- and trihydroxylated metabolites within 30 min. The metabolites from the HLM incubations accounted for, on average, 84% (range, 67-100) and 91% (range, 71-100) of the combined area in the case samples for AKB-48 and 5F-AKB-48, respectively. While defluorinated metabolites accounted for on average 74% of the combined area after a 5F-AKB-48 intake, only a few identified metabolites were shared between AKB-48 and 5F-AKB-48, illustrating the need for a systematic approach to identify unique metabolites. HLMs in combination with case samples seem suitable for this purpose. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  1. Plasma-safety assessment model and safety analyses of ITER

    International Nuclear Information System (INIS)

    Honda, T.; Okazaki, T.; Bartels, H.-H.; Uckan, N.A.; Sugihara, M.; Seki, Y.

    2001-01-01

    A plasma-safety assessment model has been provided on the basis of the plasma physics database of the International Thermonuclear Experimental Reactor (ITER) to analyze events including plasma behavior. The model was implemented in a safety analysis code (SAFALY), which consists of a 0-D dynamic plasma model and a 1-D thermal behavior model of the in-vessel components. Unusual plasma events of ITER, e.g. overfueling, were calculated using the code, and plasma burning was found to be self-bounded by operation limits or passively shut down due to impurity ingress from overheated divertor targets. A sudden transition of the divertor plasma might lead to failure of the divertor target because of a sharp increase of the heat flux. However, the effects of the aggravating failure can be safely handled by the confinement boundaries. (author)

  2. Science-Technology-Society literacy in college non-majors biology: Comparing problem/case studies based learning and traditional expository methods of instruction

    Science.gov (United States)

    Peters, John S.

    This study used a multiple response model (MRM) on selected items from the Views on Science-Technology-Society (VOSTS) survey to examine science-technology-society (STS) literacy among college non-science majors taught using Problem/Case Studies Based Learning (PBL/CSBL) and traditional expository methods of instruction. An initial pilot investigation of 15 VOSTS items produced a valid and reliable scoring model which can be used to quantitatively assess student literacy on a variety of STS topics deemed important for informed civic engagement in science-related social and environmental issues. The new scoring model allows for the use of parametric inferential statistics to test hypotheses about factors influencing STS literacy. The follow-up cross-institutional study comparing teaching methods employed Hierarchical Linear Modeling (HLM) to model the efficiency and equitability of instructional methods on STS literacy. A cluster analysis was also used to compare pre- and post-course patterns of student views on the set of positions expressed within VOSTS items. HLM analysis revealed significantly higher instructional efficiency in the PBL/CSBL study group for 4 of the 35 STS attitude indices (characterization of media vs. school science; tentativeness of scientific models; cultural influences on scientific research), and more equitable effects of traditional instruction on one attitude index (interdependence of science and technology). Cluster analysis revealed generally stable patterns of pre- to post-course views across study groups, but also revealed possible teaching method effects on the relationship between the views expressed within VOSTS items with respect to (1) interdependency of science and technology; (2) anti-technology; (3) socioscientific decision-making; (4) scientific/technological solutions to environmental problems; (5) usefulness of school vs. media characterizations of science; (6) social constructivist vs. objectivist views of theories; (7

  3. Epilepsy and its Impact on psychosocial outcomes in Canadian children: Data from the National Longitudinal Study of Children and Youth (NLSCY).

    Science.gov (United States)

    Prasad, A N; Corbett, B

    2016-12-01

    The purpose of this study was to use data from a population-based survey to evaluate the association between childhood epilepsy and social outcomes through tests of mathematics skills and sense of general self-esteem (GSS). Using data from Cycles 1 to 8 of the National Longitudinal Survey of Children and Youth (NLSCY), hierarchical linear modeling (HLM) was used to compare baseline math scores and changes in math scores and sense of general self-esteem (GSS) over time in children with and without epilepsy. Scores on the Health Utility Index (HUI) were factored into the analysis. Children with epilepsy did not significantly differ in their scaled math scores from their peers without epilepsy at age 12; however, in the two-level HLM model, the children with epilepsy lagged behind the healthy comparison group in terms of their growth in acquiring knowledge in mathematics. Additionally, when children with epilepsy carry an added health impairment, as measured by an imperfect health utility (HUI) score, the group shows a slower rate of growth in their math scores over time. Self-esteem measures show variable effects in children with epilepsy alone and in those with added health impairments. The interaction with HUI scores shows a significant negative effect on self-esteem when epilepsy is associated with added health impairment. The findings suggest that the population of Canadian children surveyed with epilepsy is vulnerable to poorer academic outcomes in mathematics in later years, and this problem is compounded further by the presence of other additional health impairments. Copyright © 2016 Elsevier B.V. All rights reserved.
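    A two-level growth model of the kind used in studies like this one can be written out explicitly; the notation below (age centred at 12, a level-2 epilepsy indicator) is a generic illustration, not the exact specification fitted to the NLSCY data:

    ```latex
    % Level 1: repeated math scores Y_{ti} within child i over occasions t
    Y_{ti} = \pi_{0i} + \pi_{1i}\,(\mathrm{Age}_{ti} - 12) + e_{ti}

    % Level 2: child-level equations for intercept and growth rate
    \pi_{0i} = \beta_{00} + \beta_{01}\,\mathrm{Epilepsy}_i + r_{0i}
    \pi_{1i} = \beta_{10} + \beta_{11}\,\mathrm{Epilepsy}_i + r_{1i}
    ```

    In this formulation, a null baseline difference with a lag in growth, as the abstract reports, corresponds to a non-significant \(\beta_{01}\) (intercept difference at age 12) together with a negative \(\beta_{11}\) (slower growth rate for the epilepsy group).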

  4. Comparative analyses reveal potential uses of Brachypodium distachyon as a model for cold stress responses in temperate grasses

    Directory of Open Access Journals (Sweden)

    Li Chuan

    2012-05-01

    Full Text Available Abstract Background Little is known about the potential of Brachypodium distachyon as a model for low temperature stress responses in Pooideae. The ice recrystallization inhibition protein (IRIP genes, fructosyltransferase (FST genes, and many C-repeat binding factor (CBF genes are Pooideae specific and important in low temperature responses. Here we used comparative analyses to study conservation and evolution of these gene families in B. distachyon to better understand its potential as a model species for agriculturally important temperate grasses. Results Brachypodium distachyon contains cold responsive IRIP genes which have evolved through Brachypodium specific gene family expansions. A large cold responsive CBF3 subfamily was identified in B. distachyon, while CBF4 homologs are absent from the genome. No B. distachyon FST gene homologs encode typical core Pooideae FST-motifs and low temperature induced fructan accumulation was dramatically different in B. distachyon compared to core Pooideae species. Conclusions We conclude that B. distachyon can serve as an interesting model for specific molecular mechanisms involved in low temperature responses in core Pooideae species. However, the evolutionary history of key genes involved in low temperature responses has been different in Brachypodium and core Pooideae species. These differences limit the use of B. distachyon as a model for holistic studies relevant for agricultural core Pooideae species.

  5. From global economic modelling to household level analyses of food security and sustainability: how big is the gap and can we bridge it?

    NARCIS (Netherlands)

    Wijk, van M.T.

    2014-01-01

    Policy and decision makers have to make difficult choices to improve the food security of local people against the background of drastic global and local changes. Ex-ante impact assessment using integrated models can help them with these decisions. This review analyses the state of affairs of the

  6. Lagrangian Coherent Structure Analysis of Terminal Winds: Three-Dimensionality, Intramodel Variations, and Flight Analyses

    Directory of Open Access Journals (Sweden)

    Brent Knutson

    2015-01-01

    Full Text Available We present a study of three-dimensional Lagrangian coherent structures (LCS) near the Hong Kong International Airport and relate it to previous developments of two-dimensional (2D) LCS analyses. The LCS are contrasted among three independent models and against 2D coherent Doppler light detection and ranging (LIDAR) data. Addition of the velocity information perpendicular to the LIDAR scanning cone helps solidify flow structures inferred from previous studies; contrast among the models reveals the intramodel variability; and comparison with flight data evaluates the performance of the models in terms of Lagrangian analyses. We find that, while the three models and the LIDAR do recover similar features of the windshear experienced by a landing aircraft (along the landing trajectory), their Lagrangian signatures over the entire domain are quite different: a portion of each numerical model captures certain features resembling the LCS extracted from independent 2D LIDAR analyses based on observations.

  7. Racioethnicity, community makeup, and potential employees' reactions to organizational diversity management approaches.

    Science.gov (United States)

    Olsen, Jesse E; Martins, Luis L

    2016-05-01

    We draw on the values literature from social psychology and the acculturation literature from cross-cultural psychology to develop and test a theory of how signals about an organization's diversity management (DM) approach affect perceptions of organizational attractiveness among potential employees. We examine the mediating effects of individuals' merit-based attributions about hiring decisions at the organization, as well as the moderating effects of their racioethnicity and the racioethnic composition of their home communities. We test our theory using a within-subject policy-capturing experimental design that simulates organizational DM approaches, supplemented with census data for the participants' home communities. Results of hierarchical linear modeling (HLM) analyses suggest that the manipulated instrumental value for diversity leads to higher perceptions of organizational attractiveness, in part through heightened expectations of merit-based hiring decisions. Further, the manipulated assimilative and integrative DM approach signals are positively related to organizational attractiveness, and the effect of integrative DM is strongest for racioethnic minorities from communities with especially high proportions of Whites and for Whites from communities with especially low proportions of Whites. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
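Hierarchical linear modeling of this kind, with scenario-level responses (level 1) nested within participants (level 2), can be sketched with statsmodels' MixedLM on synthetic data. The variable names, effect sizes, and sample sizes below are illustrative assumptions, not values from the study:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_scen = 60, 8   # participants and policy scenarios per participant (illustrative)

subj = np.repeat(np.arange(n_subj), n_scen)
u = rng.normal(0.0, 0.5, n_subj)[subj]             # participant-level random intercept
dm_signal = rng.integers(0, 2, n_subj * n_scen)    # manipulated DM signal (0/1)
attract = 3.0 + 0.6 * dm_signal + u + rng.normal(0.0, 1.0, n_subj * n_scen)

df = pd.DataFrame({"subj": subj, "dm_signal": dm_signal, "attract": attract})

# Random-intercept HLM: scenario ratings nested within participants
fit = smf.mixedlm("attract ~ dm_signal", df, groups=df["subj"]).fit()
print(fit.params["dm_signal"])   # should land near the true fixed effect of 0.6
```

The within-subject design is what makes the random intercept essential: ratings from the same participant share unobserved variance that a flat regression would misattribute to the manipulated signal.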

  8. Using niche-modelling and species-specific cost analyses to determine a multispecies corridor in a fragmented landscape

    Science.gov (United States)

    Zurano, Juan Pablo; Selleski, Nicole; Schneider, Rosio G.

    2017-01-01

    types independent of the degree of legal protection. These data used with multifocal GIS analyses balance the varying degree of overlap and unique properties among them allowing for comprehensive conservation strategies to be developed relatively rapidly. Our comprehensive approach serves as a model to other regions faced with habitat loss and lack of data. The five carnivores focused on in our study have wide ranges, so the results from this study can be expanded and combined with surrounding countries, with analyses at the species or community level. PMID:28841692

  9. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses.

    Science.gov (United States)

    Faul, Franz; Erdfelder, Edgar; Buchner, Axel; Lang, Albert-Georg

    2009-11-01

    G*Power is a free power analysis program for a variety of statistical tests. We present extensions and improvements of the version introduced by Faul, Erdfelder, Lang, and Buchner (2007) in the domain of correlation and regression analyses. In the new version, we have added procedures to analyze the power of tests based on (1) single-sample tetrachoric correlations, (2) comparisons of dependent correlations, (3) bivariate linear regression, (4) multiple linear regression based on the random predictor model, (5) logistic regression, and (6) Poisson regression. We describe these new features and provide a brief introduction to their scope and handling.
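For the correlation case, the power computation that programs like G*Power perform can be approximated by hand with the Fisher z-transform. This is a generic sketch of that approximation, not G*Power's exact routine:

```python
from math import atanh, sqrt
from scipy.stats import norm

def correlation_power(r, n, alpha=0.05):
    """Approximate power of a two-sided test of H0: rho = 0 at effect size r,
    using the Fisher z-transform, whose standard error is 1/sqrt(n - 3)."""
    z_effect = atanh(r) * sqrt(n - 3)    # noncentrality under the alternative
    z_crit = norm.ppf(1 - alpha / 2)     # two-sided critical value
    return norm.cdf(z_effect - z_crit) + norm.cdf(-z_effect - z_crit)

# Example: detecting rho = 0.3 with n = 84 yields roughly 80% power
print(round(correlation_power(0.3, 84), 3))
```

Inverting this relation numerically (solving for n at a target power) is the a priori analysis mode the abstract refers to.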

  10. ETHICAL LEADERSHIP AND EMPLOYEE VOICE: EMPLOYEE SELF-EFFICACY AND SELF-IMPACT AS MEDIATORS.

    Science.gov (United States)

    Wang, Duanxu; Gan, Chenjing; Wu, Chaoyan; Wang, Danqi

    2015-06-01

    Previous studies have used social learning theory to explain the influence of ethical leadership. This study continues the previous research by using social learning theory to explain the mediating effect of self-efficacy on the relationship between ethical leadership and employee voice. In addition, this study extends previous studies by introducing expectancy theory to explore whether self-impact also mediates the relationship between ethical leadership and employee voice. Ethical leadership, self-efficacy, self-impact, and employee voice were assessed using paired surveys among 59 supervisors and 295 subordinates employed at nine firms in the People's Republic of China. Using HLM and SEM analyses, the results revealed that ethical leadership was positively related to employee voice and that this relationship was partially mediated by both self-efficacy and self-impact.

  11. Multi-level Bayesian analyses for single- and multi-vehicle freeway crashes.

    Science.gov (United States)

    Yu, Rongjie; Abdel-Aty, Mohamed

    2013-09-01

    This study presents multi-level analyses for single- and multi-vehicle crashes on a mountainous freeway. Data from a 15-mile mountainous freeway section on I-70 were investigated. Both aggregate and disaggregate models for the two crash conditions were developed. Five years of crash data were used in the aggregate investigation, while the disaggregate models utilized one year of crash data along with real-time traffic and weather data. For the aggregate analyses, safety performance functions were developed for the purpose of revealing the contributing factors for each crash type. Two methodologies, a Bayesian bivariate Poisson-lognormal model and a Bayesian hierarchical Poisson model with correlated random effects, were estimated to simultaneously analyze the two crash conditions with consideration of possible correlations. Except for the factors related to geometric characteristics, two exposure parameters (annual average daily traffic and segment length) were included. Two different sets of significant explanatory and exposure variables were identified for the single-vehicle (SV) and multi-vehicle (MV) crashes. It was found that the Bayesian bivariate Poisson-lognormal model is superior to the Bayesian hierarchical Poisson model, the former with a substantially lower DIC and more significant variables. In addition to the aggregate analyses, microscopic real-time crash risk evaluation models were developed for the two crash conditions. Multi-level Bayesian logistic regression models were estimated with the random parameters accounting for seasonal variations, crash-unit-level diversity and segment-level random effects capturing unobserved heterogeneity caused by the geometric characteristics. 
The model results indicate that the effects of the selected variables on crash occurrence vary across seasons and crash units; and that geometric characteristic variables contribute to the segment variations: the more unobserved heterogeneity is accounted for, the better
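The correlated random-effects structure behind a bivariate Poisson-lognormal crash model can be illustrated by simulation: each segment gets a pair of correlated lognormal effects, which induces correlation between single- and multi-vehicle counts. All parameter values below are invented for illustration, not estimates from the I-70 data:

```python
import numpy as np

rng = np.random.default_rng(42)
n_seg = 1000  # freeway segments (illustrative)

# Correlated random effects on the log scale, shared structure across crash types
cov = np.array([[0.25, 0.20],
                [0.20, 0.25]])   # variances 0.25, correlation 0.8
eps = rng.multivariate_normal([0.0, 0.0], cov, size=n_seg)

# Linear predictors with an exposure term (log AADT); coefficients are made up
log_aadt = rng.normal(9.0, 0.5, n_seg)
mu_sv = np.exp(-8.0 + 0.8 * log_aadt + eps[:, 0])   # single-vehicle crash rate
mu_mv = np.exp(-9.0 + 0.9 * log_aadt + eps[:, 1])   # multi-vehicle crash rate

sv = rng.poisson(mu_sv)
mv = rng.poisson(mu_mv)

# Shared exposure and correlated effects induce positive correlation
print(np.corrcoef(sv, mv)[0, 1])
```

This is the feature that makes the bivariate formulation preferable to two independent Poisson models: ignoring the correlation wastes information each crash type carries about the other.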

  12. RELAP5 analyses of overcooling transients in a pressurized water reactor

    International Nuclear Information System (INIS)

    Bolander, M.A.; Fletcher, C.D.; Ogden, D.M.; Stitt, B.D.; Waterman, M.E.

    1983-01-01

    In support of the Pressurized Thermal Shock Integration Study sponsored by the United States Nuclear Regulatory Commission, the Idaho National Engineering Laboratory has performed analyses of overcooling transients using the RELAP5/MOD1.5 computer code. These analyses were performed for Oconee Plants 1 and 3, which are pressurized water reactors of Babcock and Wilcox lowered-loop design. Results of the RELAP5 analyses are presented, including a comparison with plant data. The capabilities and limitations of the RELAP5/MOD1.5 computer code in analyzing integral plant transients are examined. These analyses require detailed thermal-hydraulic and control system computer models

  13. A shock absorber model for structure-borne noise analyses

    Science.gov (United States)

    Benaziz, Marouane; Nacivet, Samuel; Thouverez, Fabrice

    2015-08-01

    Shock absorbers are often responsible for undesirable structure-borne noise in cars. The early numerical prediction of this noise in the automobile development process can save time and money and yet remains a challenge for industry. In this paper, a new approach to predicting shock absorber structure-borne noise is proposed; it consists in modelling the shock absorber and including the main nonlinear phenomena responsible for discontinuities in the response. The model set forth herein features: compressible fluid behaviour, nonlinear flow rate-pressure relations, valve mechanical equations and rubber mounts. The piston, base valve and complete shock absorber model are compared with experimental results. Sensitivity of the shock absorber response is evaluated and the most important parameters are classified. The response envelope is also computed. This shock absorber model is able to accurately reproduce local nonlinear phenomena and improves our state of knowledge on potential noise sources within the shock absorber.

  14. Do phase-shift analyses and nucleon-nucleon potential models yield the wrong 3Pj phase shifts at low energies?

    International Nuclear Information System (INIS)

    Tornow, W.; Witala, H.; Kievsky, A.

    1998-01-01

    The 4PJ waves in nucleon-deuteron scattering were analyzed using proton-deuteron and neutron-deuteron data at EN = 3 MeV. New sets of nucleon-nucleon 3Pj phase shifts were obtained that may lead to a better understanding of the long-standing Ay(θ) puzzle in nucleon-deuteron elastic scattering. However, these sets of 3Pj phase shifts are quite different from the ones determined from both global phase-shift analyses of nucleon-nucleon data and nucleon-nucleon potential models. copyright 1998 The American Physical Society

  15. Modelling, singular perturbation and bifurcation analyses of bitrophic food chains.

    Science.gov (United States)

    Kooi, B W; Poggiale, J C

    2018-04-20

    Two predator-prey model formulations are studied: the classical Rosenzweig-MacArthur (RM) model and the Mass Balance (MB) chemostat model. When the growth and loss rates of the predator are much smaller than those of the prey, these models are slow-fast systems, leading mathematically to a singular perturbation problem. In contrast to the RM-model, the resource for the prey is modelled explicitly in the MB-model, but this comes with additional parameters. These parameter values are chosen such that the two models become easy to compare. Both models exhibit a transcritical bifurcation, a threshold above which the predator can invade the prey-only system, and a Hopf bifurcation, where the interior equilibrium becomes unstable and gives way to a stable limit cycle. The slow-fast limit cycles are called relaxation oscillations, which for increasing differences in time scale approach the well-known degenerate trajectories: concatenations of slow and fast parts of the trajectory. In the slow-fast version of the RM-model a canard explosion of the stable limit cycles occurs in the oscillatory region of the parameter space. To our knowledge this type of dynamics has not previously been reported for the RM-model, nor for more complex ecosystem models. When a bifurcation parameter crosses the Hopf bifurcation point, the amplitude of the emerging stable limit cycles increases. However, depending on the perturbation parameter, the shape of this limit cycle changes abruptly from one consisting of two concatenated slow and fast episodes with small amplitude, to a large-amplitude shape similar to the relaxation oscillation, the well-known degenerate phase trajectory consisting of four episodes (a concatenation of two slow and two fast). 
The canard explosion point is accurately predicted by using an extended asymptotic expansion technique in the perturbation and bifurcation parameter simultaneously where the small
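The slow-fast Rosenzweig-MacArthur dynamics described above can be reproduced in a few lines. The parameter values are illustrative choices that place the system in the oscillatory regime, not those used in the paper:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Rosenzweig-MacArthur model in slow-fast form: fast logistic prey with a
# Holling type II response, and a predator slowed by the factor eps.
def rm(t, state, b=0.3, m=0.5, eps=0.05):
    x, y = state
    func = x / (b + x)                  # Holling type II functional response
    dx = x * (1.0 - x) - func * y       # fast prey dynamics
    dy = eps * y * (func - m)           # slow predator dynamics
    return [dx, dy]

sol = solve_ivp(rm, (0.0, 1500.0), [0.5, 0.2], max_step=1.0)
x, y = sol.y

# The trajectory settles onto a relaxation-type limit cycle: long slow
# stretches in the predator punctuated by fast collapses of the prey.
print(np.isfinite(sol.y).all(), round(x.max() - x.min(), 2))
```

With these values the interior equilibrium sits at x* = mb/(1 - m) = 0.3, left of the hump of the prey isocline at (1 - b)/2 = 0.35, so the equilibrium is unstable and a stable limit cycle exists; shrinking eps sharpens the slow-fast structure.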

  16. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    Science.gov (United States)

    Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
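The core idea, decomposing output variance to rank inputs, can be sketched on a toy model standing in for the ABM. The binning estimator below is a crude stand-in for the quasi-random variance decomposition the authors use; the model and its coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Toy model standing in for the ABM: x1 should dominate the output variance
x1 = rng.uniform(0.0, 1.0, n)
x2 = rng.uniform(0.0, 1.0, n)
y = 4.0 * x1 + x2

def first_order_index(x, y, n_bins=50):
    """Crude first-order sensitivity index S_i = Var(E[y|x_i]) / Var(y),
    estimated by binning x_i and taking conditional means per bin."""
    edges = np.quantile(x, np.linspace(0.0, 1.0, n_bins + 1))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, n_bins - 1)
    cond_means = np.array([y[idx == b].mean() for b in range(n_bins)])
    return cond_means.var() / y.var()

s1, s2 = first_order_index(x1, y), first_order_index(x2, y)
print(round(s1, 2), round(s2, 2))  # analytically 16/17 ≈ 0.94 and 1/17 ≈ 0.06
```

Dropping or fixing the low-index input is exactly the model simplification described above: the reduced model reproduces the output distribution of the full one because the discarded input never contributed meaningful variance.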

  17. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    Directory of Open Access Journals (Sweden)

    Arika Ligmann-Zielinska

    Full Text Available Agent-based models (ABMs have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1 efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2 conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.

  18. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; van Deursen, A.; Pinzger, M.

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require consistently updating both the implementation and the feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The

  19. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require consistently updating both the implementation and the feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The

  20. Modeling theoretical uncertainties in phenomenological analyses for particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)

    2017-04-15

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)

  1. Exergetic and thermoeconomic analyses of power plants

    International Nuclear Information System (INIS)

    Kwak, H.-Y.; Kim, D.-J.; Jeon, J.-S.

    2003-01-01

    Exergetic and thermoeconomic analyses were performed for a 500-MW combined cycle plant. In these analyses, mass and energy conservation laws were applied to each component of the system. Quantitative balances of the exergy and exergetic cost for each component, and for the whole system, were carefully considered. The exergoeconomic model, which represents the productive structure of the system considered, was used to visualize the cost formation process and the productive interaction between components. The computer program developed in this study can determine the production costs of power plants, such as gas- and steam-turbine plants and gas-turbine cogeneration plants. The program can also be used to study plant characteristics, namely, thermodynamic performance and sensitivity to changes in process and/or component design variables
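The component-level cost balance underlying such an exergoeconomic model can be sketched as follows. The stream values and cost rates are invented for illustration, not taken from the 500-MW plant study:

```python
# Thermoeconomic cost balance for a single component: the cost rate of the
# exiting (product) exergy stream equals the cost rate of the entering (fuel)
# stream plus the component's capital and O&M cost rate Z.

def unit_cost_out(e_in_kw, c_in, e_out_kw, z_per_h):
    """Unit exergetic cost of the product stream in $/kWh.

    e_in_kw, e_out_kw: exergy rates of the fuel and product streams (kW)
    c_in: unit cost of the entering exergy ($/kWh)
    z_per_h: capital/O&M cost rate of the component ($/h)
    """
    cost_rate_in = c_in * e_in_kw          # $/h carried in by the fuel stream
    cost_rate_out = cost_rate_in + z_per_h # cost balance over the component
    return cost_rate_out / e_out_kw

# A component destroying 20% of the incoming exergy (1000 kW in, 800 kW out):
c_product = unit_cost_out(1000.0, 0.05, 800.0, 10.0)
print(round(c_product, 4))   # exergy destruction raises the unit product cost
```

Chaining such balances component by component, with each product cost feeding the next component's fuel cost, is how the cost formation process through the plant is made visible.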

  2. Finite element analyses of a linear-accelerator electron gun

    Science.gov (United States)

    Iqbal, M.; Wasy, A.; Islam, G. U.; Zhou, Z.

    2014-02-01

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in a computer aided three-dimensional interactive application for finite element analyses through ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun has been operating continuously since commissioning, without any thermally induced failures, in the BEPCII linear accelerator.

  3. Finite element analyses of a linear-accelerator electron gun

    Energy Technology Data Exchange (ETDEWEB)

    Iqbal, M., E-mail: muniqbal.chep@pu.edu.pk, E-mail: muniqbal@ihep.ac.cn [Centre for High Energy Physics, University of the Punjab, Lahore 45590 (Pakistan); Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China); Wasy, A. [Department of Mechanical Engineering, Changwon National University, Changwon 641773 (Korea, Republic of); Islam, G. U. [Centre for High Energy Physics, University of the Punjab, Lahore 45590 (Pakistan); Zhou, Z. [Institute of High Energy Physics, Chinese Academy of Sciences, Beijing 100049 (China)

    2014-02-15

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in a computer aided three-dimensional interactive application for finite element analyses through ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun has been operating continuously since commissioning, without any thermally induced failures, in the BEPCII linear accelerator.

  4. Finite element analyses of a linear-accelerator electron gun

    International Nuclear Information System (INIS)

    Iqbal, M.; Wasy, A.; Islam, G. U.; Zhou, Z.

    2014-01-01

    Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in a computer aided three-dimensional interactive application for finite element analyses through ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun has been operating continuously since commissioning, without any thermally induced failures, in the BEPCII linear accelerator

  5. Radiobiological analyses based on cell cluster models

    International Nuclear Information System (INIS)

    Lin Hui; Jing Jia; Meng Damin; Xu Yuanying; Xu Liangfeng

    2010-01-01

    The influence of cell cluster dimension on EUD and TCP in targeted radionuclide therapy was studied using radiobiological methods. The radiobiological features of tumors with an activity void in the core were evaluated and analyzed by relating EUD, TCP and SF. The results show that EUD increases with tumor dimension under a homogeneous activity distribution; when extracellular activity is taken into account, EUD increases by 47%. With an activity void at the tumor center and a requirement of TCP = 0.90, the α cross-fire of 211At can compensate for a void of at most (48 μm)3 for the Nucleus source, but (72 μm)3 for the Cytoplasm, Cell Surface, Cell and Voxel sources. In the clinic, a physician might prefer the suggested dose for the Cell Surface source to protect local tumor control against under-dosing. In general, TCP distinguishes well between under-dose and due dose, but not between due dose and over-dose, which makes it the more suitable quantity for therapy plan selection; EUD distinguishes well between different models and activity distributions, which makes it the more suitable quantity for research. When EUD is used to study the influence of an inhomogeneous activity distribution, the configuration and volume of the models being compared should be kept consistent. (authors)
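The Poisson TCP model that links the surviving fraction SF to tumour control, the relationship this abstract builds on, can be sketched as follows. The linear-quadratic radiosensitivity parameters and the clonogen number are illustrative assumptions, not values from the study:

```python
from math import exp

def surviving_fraction(total_dose, alpha=0.3, beta=0.03, d_frac=2.0):
    """Linear-quadratic survival for a course of d_frac-Gy fractions.
    alpha (1/Gy) and beta (1/Gy^2) are illustrative radiosensitivity values."""
    n_frac = total_dose / d_frac
    return exp(-n_frac * (alpha * d_frac + beta * d_frac ** 2))

def tcp(n_clonogens, total_dose, **kw):
    """Poisson tumour control probability: TCP = exp(-N * SF(D))."""
    return exp(-n_clonogens * surviving_fraction(total_dose, **kw))

# Under-dosing drags TCP down steeply, which is why TCP separates
# under-dose from due dose much better than due dose from over-dose.
print(round(tcp(1e7, 60.0), 3), round(tcp(1e7, 45.0), 3))
```

The flat top of the TCP curve above the due dose is also visible here: adding dose beyond the point where N * SF is already tiny barely changes TCP, matching the abstract's observation.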

  6. Trajectories of Life Satisfaction Over the First 10 Years After Traumatic Brain Injury: Race, Gender, and Functional Ability.

    Science.gov (United States)

    Williamson, Meredith L C; Elliott, Timothy R; Bogner, Jennifer; Dreer, Laura E; Arango-Lasprilla, Juan Carlos; Kolakowsky-Hayner, Stephanie A; Pretz, Christopher R; Lequerica, Anthony; Perrin, Paul B

    2016-01-01

    This study investigated the influence of race, gender, functional ability, and an array of preinjury, injury-related, and sociodemographic variables on life satisfaction trajectories over 10 years following moderate to severe traumatic brain injury (TBI). A sample of 3157 individuals with TBI from the TBI Model Systems database was included in this study. Hierarchical linear modeling (HLM) analyses were conducted to examine the trajectories of life satisfaction. The Functional Independence Measure, Glasgow Coma Scale, and the Satisfaction With Life Scale were utilized. Initial models suggested that life satisfaction trajectories increased over the 10-year period and Asian/Pacific Islander participants experienced an increase in life satisfaction over time. In a comprehensive model, time was no longer a significant predictor of increased life satisfaction. Black race, however, was associated with lower life satisfaction, and significant interactions revealed that black participants' life satisfaction trajectory decreased over time while white participants' trajectory increased over the same time period. Life satisfaction trajectories did not significantly differ by gender, and greater motor and cognitive functioning were associated with increasingly positive life satisfaction trajectories over the 10 years. Individuals with more functional impairments are at risk for decreases in life satisfaction over time. Further research is needed to identify the mechanisms and factors that contribute to the lower levels of life satisfaction observed among black individuals post-TBI. This work is needed to determine strategic ways to promote optimal adjustment for these individuals.

  7. Partial differential equation techniques for analysing animal movement: A comparison of different methods.

    Science.gov (United States)

    Wang, Yi-Shan; Potts, Jonathan R

    2017-03-07

    Recent advances in animal tracking have allowed us to uncover the drivers of movement in unprecedented detail. This has enabled modellers to construct ever more realistic models of animal movement, which aid in uncovering detailed patterns of space use in animal populations. Partial differential equations (PDEs) provide a popular tool for mathematically analysing such models. However, their construction often relies on simplifying assumptions which may greatly affect the model outcomes. Here, we analyse the effect of various PDE approximations on the analysis of some simple movement models, including a biased random walk, central-place foraging processes and movement in heterogeneous landscapes. Perhaps the most commonly-used PDE method dates back to a seminal paper of Patlak from 1953. However, our results show that this can be a very poor approximation in even quite simple models. On the other hand, more recent methods, based on transport equation formalisms, can provide more accurate results, as long as the kernel describing the animal's movement is sufficiently smooth. When the movement kernel is not smooth, we show that both the older and newer methods can lead to quantitatively misleading results. Our detailed analysis will aid future researchers in the appropriate choice of PDE approximation for analysing models of animal movement. Copyright © 2017 Elsevier Ltd. All rights reserved.
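The simplest case discussed above, a biased random walk whose PDE limit is an advection-diffusion equation, can be checked by simulation: the empirical mean displacement should match the prediction of the advection term. All numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(7)

# Biased random walk: each step is a draw from a movement kernel with mean
# drift v*tau and spread sigma*sqrt(tau). The Patlak-style PDE limit is an
# advection-diffusion equation whose advection speed is that same drift v.
v, tau, sigma = 0.4, 0.1, 0.5        # drift speed, step time, step spread
n_walkers, n_steps = 20_000, 200

steps = rng.normal(v * tau, sigma * np.sqrt(tau), (n_walkers, n_steps))
positions = steps.sum(axis=1)        # positions after time T = n_steps * tau

T = n_steps * tau
print(round(positions.mean(), 2), round(v * T, 2))  # empirical vs PDE drift
```

For a smooth Gaussian kernel like this one the approximation is excellent; the abstract's warning applies when the kernel is not smooth, where both the drift and the spread predicted by the PDE can be quantitatively off.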

  8. State of the art in establishing computed models of adsorption processes to serve as a basis of radionuclide migration assessment for safety analyses

    International Nuclear Information System (INIS)

    Koss, V.

    1991-01-01

    An important point in the safety analysis of an underground repository is the adsorption of radionuclides in the overlying cover. Adsorption may be judged from experimental results or from model calculations. Because of the reliability required in safety analyses, it is necessary to support experimental results with theoretical calculations. At present, there is no single agreed-upon thermodynamic model of adsorption. This work therefore reviews existing equilibrium models of adsorption. Limitations of the Kd concept and of the adsorption isotherms of Freundlich and Langmuir are discussed. The surface ionisation and complexation electrical double layer (EDL) model is explained in full, as is criticism of this model. The application of simple surface complexation models to adsorption experiments in natural systems is stressed, as is experimental and modelling work on systems from Gorleben. Hints are given on how to handle the modelling of adsorption in Gorleben systems in the future. (orig.) [de
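The three equilibrium descriptions mentioned, the Kd concept and the Freundlich and Langmuir isotherms, differ most visibly at high concentration. A minimal sketch with invented parameter values:

```python
import numpy as np

# q = sorbed amount, c = solution concentration; parameters are illustrative.
c = np.linspace(0.01, 10.0, 200)

q_kd = 2.0 * c                                 # Kd concept: q = Kd * c
q_freundlich = 3.0 * c ** 0.6                  # Freundlich: q = Kf * c^n, 0 < n < 1
q_langmuir = 8.0 * 1.5 * c / (1 + 1.5 * c)     # Langmuir: q = qmax * b*c / (1 + b*c)

# The linear Kd model grows without bound, while Langmuir saturates at qmax;
# this is one of the limitations of the Kd concept noted above.
print(q_kd[-1] > q_langmuir[-1], q_langmuir[-1] < 8.0)
```

Fitting all three to the same low-concentration data can give near-identical curves that then diverge wildly on extrapolation, which is precisely why a mechanistic surface complexation model is preferred for safety analyses.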

  9. Physics Analyses in the Design of the HFIR Cold Neutron Source

    International Nuclear Information System (INIS)

    Bucholz, J.A.

    1999-01-01

    Physics analyses have been performed to characterize the performance of the cold neutron source to be installed in the High Flux Isotope Reactor at the Oak Ridge National Laboratory in the near future. This paper provides a description of the physics models developed, and the resulting analyses that have been performed to support the design of the cold source. These analyses have provided important parametric performance information, such as cold neutron brightness down the beam tube and the various component heat loads, that have been used to develop the reference cold source concept

  10. Analysing spatially extended high-dimensional dynamics by recurrence plots

    Energy Technology Data Exchange (ETDEWEB)

    Marwan, Norbert, E-mail: marwan@pik-potsdam.de [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Kurths, Jürgen [Potsdam Institute for Climate Impact Research, 14412 Potsdam (Germany); Humboldt Universität zu Berlin, Institut für Physik (Germany); Nizhny Novgorod State University, Department of Control Theory, Nizhny Novgorod (Russian Federation); Foerster, Saskia [GFZ German Research Centre for Geosciences, Section 1.4 Remote Sensing, Telegrafenberg, 14473 Potsdam (Germany)

    2015-05-08

    Recurrence plot based measures of complexity are powerful tools for characterizing complex dynamics. In this letter we show the potential of selected recurrence plot measures for the investigation of even high-dimensional dynamics. We apply this method to spatially extended chaos, such as derived from the Lorenz96 model, and show that the recurrence plot based measures can qualitatively characterize typical dynamical properties such as chaotic or periodic dynamics. Moreover, we demonstrate its power by analysing satellite image time series of vegetation cover with contrasting dynamics as a spatially extended and potentially high-dimensional example from the real world. - Highlights: • We use recurrence plots for analysing spatially extended dynamics. • We investigate the high-dimensional chaos of the Lorenz96 model. • The approach distinguishes different spatio-temporal dynamics. • We use the method for studying vegetation cover time series.
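A minimal recurrence plot for a scalar series (no embedding) can be computed directly from pairwise distances; the threshold and the test series below are illustrative:

```python
import numpy as np

def recurrence_matrix(x, threshold):
    """Binary recurrence plot: R[i, j] = 1 if |x_i - x_j| < threshold."""
    dist = np.abs(x[:, None] - x[None, :])
    return (dist < threshold).astype(int)

# Periodic series: recurrences line up in diagonals parallel to the main one
t = np.linspace(0.0, 8 * np.pi, 400)
x = np.sin(t)
R = recurrence_matrix(x, 0.1)

rec_rate = R.mean()   # simplest recurrence quantification measure
print(R.shape, round(rec_rate, 3))
```

Measures such as determinism (the fraction of recurrence points on diagonal lines) build on this matrix, and it is those line-structure measures that distinguish chaotic from periodic dynamics in the work described above.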

  11. RETRAN nonequilibrium two-phase flow model for operational transient analyses

    International Nuclear Information System (INIS)

    Paulsen, M.P.; Hughes, E.D.

    1982-01-01

    The field balance equations, flow-field models, and equation of state for a nonequilibrium two-phase flow model for RETRAN are given. The differential field balance model equations are: (1) conservation of mixture mass; (2) conservation of vapor mass; (3) balance of mixture momentum; (4) a dynamic-slip model for the velocity difference; and (5) conservation of mixture energy. The equation of state is formulated such that the liquid phase may be subcooled, saturated, or superheated. The vapor phase is constrained to be at the saturation state. The dynamic-slip model includes wall-to-phase and interphase momentum exchanges. A mechanistic vapor generation model is used to describe vapor production under bulk subcooling conditions. The speed of sound for the mixture under nonequilibrium conditions is obtained from the equation of state formulation. The steady-state and transient solution methods are described.
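    The first two field balance equations can be sketched in a schematic 1-D form (an illustrative mixture/vapor mass balance with vapor generation rate \(\Gamma_g\); this is a generic two-phase sketch, not RETRAN's exact formulation):

```latex
\frac{\partial \rho_m}{\partial t} + \frac{\partial (\rho_m u_m)}{\partial z} = 0
\qquad \text{(mixture mass)}

\frac{\partial (\alpha \rho_g)}{\partial t}
  + \frac{\partial (\alpha \rho_g u_g)}{\partial z} = \Gamma_g
\qquad \text{(vapor mass)}
```

    Here \(\alpha\) is the vapor volume fraction and the dynamic-slip relation closes the difference \(u_g - u_m\); the mechanistic vapor generation model mentioned above supplies \(\Gamma_g\) under subcooled conditions.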

  12. Mitochondrial dysfunction, oxidative stress and apoptosis revealed by proteomic and transcriptomic analyses of the striata in two mouse models of Parkinson’s disease

    Energy Technology Data Exchange (ETDEWEB)

    Chin, Mark H.; Qian, Weijun; Wang, Haixing; Petyuk, Vladislav A.; Bloom, Joshua S.; Sforza, Daniel M.; Lacan, Goran; Liu, Dahai; Khan, Arshad H.; Cantor, Rita M.; Bigelow, Diana J.; Melega, William P.; Camp, David G.; Smith, Richard D.; Smith, Desmond J.

    2008-02-10

    The molecular mechanisms underlying the changes in the nigrostriatal pathway in Parkinson disease (PD) are not completely understood. Here we use mass spectrometry and microarrays to study the proteomic and transcriptomic changes in the striatum of two mouse models of PD, induced by the distinct neurotoxins 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) and methamphetamine (METH). Proteomic analyses resulted in the identification and relative quantification of 912 proteins with two or more unique peptides and 85 proteins with significant abundance changes following neurotoxin treatment. Similarly, microarray analyses revealed 181 genes with significant changes in mRNA following neurotoxin treatment. The combined protein and gene list provides a clearer picture of the potential mechanisms underlying neurodegeneration observed in PD. Functional analysis of this combined list revealed a number of significant categories, including mitochondrial dysfunction, oxidative stress response and apoptosis. Additionally, codon usage and miRNAs may play an important role in translational control in the striatum. These results constitute one of the largest datasets integrating protein and transcript changes for these neurotoxin models with many similar endpoint phenotypes but distinct mechanisms.

  13. Proposed Testing to Assess the Accuracy of Glass-To-Metal Seal Stress Analyses.

    Energy Technology Data Exchange (ETDEWEB)

    Chambers, Robert S.; Emery, John M; Tandon, Rajan; Antoun, Bonnie R.; Stavig, Mark E.; Newton, Clay S.; Gibson, Cory S; Bencoe, Denise N.

    2014-09-01

    The material characterization tests conducted on 304L VAR stainless steel and Schott 8061 glass have provided higher fidelity data for calibration of material models used in Glass-To-Metal (GTM) seal analyses. Specifically, a Thermo-Multi-Linear Elastic Plastic (thermo-MLEP) material model has been defined for SS304L and the Simplified Potential Energy Clock nonlinear viscoelastic model has been calibrated for the S8061 glass. To assess the accuracy of finite element stress analyses of GTM seals, a suite of tests is proposed to provide data for comparison to model predictions.

  14. Analyse Risk-Return Paradox: Evidence from Electricity Sector of Pakistan

    OpenAIRE

    Naqi Shah, Sadia; Qayyum, Abdul

    2016-01-01

    This study analyses the risk-return relationship of the electricity companies of Pakistan using the log return series of these companies. Because financial time series data exhibit autoregressive heteroscedasticity, the analysis moves to the GARCH family of tests. As the study aims to analyse the risk-return relationship, the GARCH-M model of Engle et al. (1987), who empirically established a relationship between risk and return, is used. Results show that risk and return in the case of Pakistan electricity...
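    The GARCH-M idea referenced here places the conditional volatility directly in the mean equation; a standard GARCH(1,1)-M sketch (textbook form, not the paper's exact specification, which may use \(\sigma_t^2\) or \(\ln \sigma_t\) as the risk term) is:

```latex
r_t = \mu + \lambda \sigma_t + \varepsilon_t,
\qquad \varepsilon_t = \sigma_t z_t,\quad z_t \sim \mathcal{N}(0,1),

\sigma_t^2 = \omega + \alpha \varepsilon_{t-1}^2 + \beta \sigma_{t-1}^2 .
```

    A significantly positive \(\lambda\) is the usual evidence that higher conditional risk is compensated by higher expected return.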

  15. Development of a dissertation quality value-added model for humanities and social sciences programs for private higher education institutions in Thailand

    Directory of Open Access Journals (Sweden)

    Thanyasinee Laosum

    2016-09-01

    The purposes of this study were: (1) to evaluate the quality of dissertations in the humanities and social sciences of private higher education institutions, (2) to analyze factors affecting the quality at the student, advisor, and institute levels, and (3) to develop a quality value-added model of the dissertations. Samples consisted of: (1) 750 student dissertations in the humanities and social sciences and (2) 753 questionnaire responses consisting of 633 students, 108 dissertation advisors, and 12 senior administrators in the participating institutions. A 5-point rating dissertation evaluation scale was developed for use by the researcher and her assistants. Three sets of a dissertation attribution questionnaire used by the students, advisors, and senior administrators were also developed and administered. Descriptive statistics were used with the 5-point rating data. The 3-level HLM package was used to analyze the quality value-added model of the dissertations. The findings of the study were: (1) the overall quality of the 750 dissertations was at the standard level; (2) there were 5 factors at 3 different levels influencing dissertation quality: 1 student factor (favorable characteristics in conducting research), 3 advisor factors (experience in research, up-to-date knowledge in research, and the advisor-student ratio), and 1 institutional factor (close monitoring and management system); and (3) the quality value-added model was able to predict 36 percent of the variance in dissertation quality.
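    A 3-level hierarchical linear model of the kind fitted here nests students (i) within advisors (j) within institutions (k); a generic sketch with hypothetical predictors X, W and Z (the paper's exact specification is not reproduced) is:

```latex
\text{Level 1 (student):}\quad
Y_{ijk} = \pi_{0jk} + \pi_{1jk} X_{ijk} + e_{ijk}

\text{Level 2 (advisor):}\quad
\pi_{0jk} = \beta_{00k} + \beta_{01k} W_{jk} + r_{0jk}

\text{Level 3 (institution):}\quad
\beta_{00k} = \gamma_{000} + \gamma_{001} Z_{k} + u_{00k}
```

    The 36-percent figure reported above corresponds to the proportion of variance in \(Y_{ijk}\) explained once the five significant predictors enter at their respective levels.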

  16. Proceedings of the 2nd CSNI Specialist Meeting on Simulators and Plant Analysers

    International Nuclear Information System (INIS)

    Tiihonen, O.

    1999-01-01

    The safe utilisation of nuclear power plants requires the availability of different computerised tools for analysing plant behaviour and training plant personnel. These can be grouped into three categories: accident analysis codes, plant analysers and training simulators. The safety analysis of nuclear power plants has traditionally been limited to the worst accident cases expected for the specific plant design. Many accident analysis codes have been developed for different plant types, and the scope of the analyses has continuously expanded. Plant analysers are now emerging tools intended for extensive analysis of plant behaviour using a best-estimate model for the whole plant, including the reactor and the full thermodynamic process, combined with the automation and electrical systems. The comprehensive model is also supported by good visualisation tools. Training simulators with a real-time plant model are tools for training the plant operators to run the plant. Modern training simulators also have features supporting visualisation of the important phenomena occurring in the plant during transients. The 2nd CSNI Specialist Meeting on Simulators and Plant Analysers in Espoo attracted some 90 participants from 17 countries. A total of 49 invited papers were presented at the meeting, in addition to 7 simulator system demonstrations. Ample time was reserved for the presentations and informal discussions during the four meeting days. (orig.)

  17. Analysing the Transformation of Higher Education Governance in Bulgaria and Lithuania

    NARCIS (Netherlands)

    Dobbins, Michael; Leisyte, Liudvika

    2014-01-01

    Drawing on sociological neo-institutional theory and models of higher education governance, we examine current developments in Bulgaria and Lithuania and explore to what extent those developments were shaped by the Bologna reform. We analyse to what extent the state has moved away from a model of

  18. An assessment of the wind re-analyses in the modelling of an extreme sea state in the Black Sea

    Science.gov (United States)

    Akpinar, Adem; Ponce de León, S.

    2016-03-01

    This study aims at an assessment of wind re-analyses for modelling storms in the Black Sea. A wind-wave modelling system (Simulating WAves Nearshore, SWAN) is applied to the Black Sea basin and calibrated with buoy data for three recent re-analysis wind sources, namely the European Centre for Medium-Range Weather Forecasts Reanalysis-Interim (ERA-Interim), the Climate Forecast System Reanalysis (CFSR), and the Modern Era Retrospective Analysis for Research and Applications (MERRA), during an extreme wave event that occurred in the north-eastern part of the Black Sea. The SWAN model simulations are carried out with default and tuned settings for the deep-water source terms, especially whitecapping. Performances of the best model configurations based on calibration with buoy data are discussed using data from the JASON2, TOPEX-Poseidon, ENVISAT and GFO satellites. The SWAN model calibration shows that the best configuration is obtained with the Janssen and Komen formulations for wave generation by wind and whitecapping dissipation, with the whitecapping coefficient (Cds) equal to 1.8e-5, using ERA-Interim. In addition, from the collocated SWAN results against the satellite records, the best configuration is determined to be SWAN using the CFSR winds. Numerical results thus show that the accuracy of a wave forecast will depend on the quality of the wind field and the ability of the SWAN model to simulate the waves under extreme wind conditions in fetch-limited wave conditions.

  19. Comparative modeling analyses of Cs-137 fate in the rivers impacted by Chernobyl and Fukushima accidents

    Energy Technology Data Exchange (ETDEWEB)

    Zheleznyak, M.; Kivva, S. [Institute of Environmental Radioactivity, Fukushima University (Japan)

    2014-07-01

    The consequences of the two largest nuclear accidents of recent decades, at the Chernobyl Nuclear Power Plant (ChNPP) (1986) and at the Fukushima Daiichi NPP (FDNPP) (2011), clearly demonstrated that radioactive contamination of water bodies in the vicinity of an NPP and on the waterways from it, e.g., the river-reservoir system after the Chernobyl accident and rivers and coastal marine waters after the Fukushima accident, was in both cases one of the main sources of public concern about the accident consequences. The greater weight of water contamination in public perception of the accidents, compared with the actual fraction of doses received via aquatic pathways relative to other dose components, is a specific feature of public perception of environmental contamination. This psychological phenomenon, confirmed after both accidents, provides supplementary arguments that reliable simulation and prediction of radionuclide dynamics in water and sediments is an important part of post-accident radioecological research. The purpose of the research is to use the experience of the modeling activities conducted over more than 25 years within the Chernobyl-affected Pripyat River and Dnieper River watershed, together with data from the new monitoring studies in Japan of the Abukuma River (the largest in the region, with a watershed area of 5400 km²), Kuchibuto River, Uta River, Niita River, Natsui River, and Same River, as well as studies of the specifics of the 'water-sediment' ¹³⁷Cs exchanges in this area, to refine the 1-D model RIVTOX and the 2-D model COASTOX and increase the predictive power of the modeling technologies. The results of the modeling studies are applied for more accurate prediction of water/sediment radionuclide contamination of rivers and reservoirs in Fukushima Prefecture and for comparative analyses of the efficiency of the post-accident measures to diminish the contamination of the water bodies.

  20. Dynamics of energy systems: Methods of analysing technology change

    Energy Technology Data Exchange (ETDEWEB)

    Neij, Lena

    1999-05-01

    Technology change will have a central role in achieving a sustainable energy system. This calls for methods of analysing the dynamics of energy systems in view of technology change and policy instruments for effecting and accelerating technology change. In this thesis, such methods have been developed, applied, and assessed. Two types of methods have been considered: methods of analysing and projecting the dynamics of future technology change, and methods of evaluating policy instruments effecting technology change, i.e. market transformation programmes. Two methods are focused on analysing the dynamics of future technology change: vintage models and experience curves. Vintage models, which allow for complex analysis of annual streams of energy and technological investments, are applied to the analysis of the time dynamics of electricity demand for lighting and air-distribution in Sweden. The results of the analyses show that the Swedish electricity demand for these purposes could decrease over time, relative to a reference scenario, if policy instruments are used. Experience curves are used to provide insight into the prospects of diffusion of wind turbines and photovoltaic (PV) modules due to cost reduction. The results show potential for considerable cost reduction for wind-generated electricity, which, in turn, could lead to major diffusion of wind turbines. The results also show that major diffusion of PV modules, and a reduction of the cost of PV-generated electricity down to the level of conventional base-load electricity, will depend on large investments in bringing the costs down (through RD&D, market incentives and investments in niche markets) or the introduction of new generations of PV modules (e.g. high-efficiency mass-produced thin-film cells). Moreover, a model has been developed for the evaluation of market transformation programmes, i.e. policy instruments that effect technology change and the introduction and commercialisation of energy
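    The experience-curve logic used for wind and PV cost projections can be sketched as follows (illustrative arithmetic only; the 15% learning rate is an assumed example, not the thesis's estimate):

```python
def experience_curve(c0, learning_rate, doublings):
    """Unit cost after a number of doublings of cumulative production.

    Each doubling of cumulative output cuts cost by `learning_rate`,
    so the progress ratio is (1 - learning_rate); e.g. 0.15 means the
    technology gets 15% cheaper per doubling.
    """
    return c0 * (1.0 - learning_rate) ** doublings

# Illustrative: a unit cost of 100 (arbitrary units), 15% learning rate,
# after three doublings of cumulative installed capacity.
print(experience_curve(100.0, 0.15, 3))
```

    Diffusion prospects then follow from asking how many doublings (and hence how much cumulative investment) are needed before the projected cost crosses the cost of the competing conventional technology.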

  1. Applications of Historical Analyses in Combat Modelling

    Science.gov (United States)

    2011-12-01

    causes of those results [2]. Models can be classified into three descriptive types [8], according to the degree of abstraction required: iconic, ... [garbled equation fragments from Appendix A omitted]

  2. Vegetable parenting practices scale: Item response modeling analyses

    Science.gov (United States)

    Our objective was to evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We al...

  3. PISA 2012 Analysis of School Variables Affecting Problem-Solving Competency: Turkey-Serbia Comparison

    Directory of Open Access Journals (Sweden)

    Emine YAVUZ

    2017-12-01

    According to the OECD's PISA 2012 problem-solving report for Turkey, Turkey and Serbia are at the same mathematical literacy level; however, Serbia's average problem-solving competency is higher than Turkey's. In this study, school variables that affect the problem-solving competency of the two countries were examined and compared. The method of the study was the causal-comparative method, and HLM analyses were performed separately on data from 4494 students in 147 schools in the Turkey sample and 4059 students in 132 schools in the Serbia sample. As a result of the HLM analyses, the "obstacle and family donation" variables were statistically significant for Serbia, and the "abandon, teacher morale and mathematics competition" variables for Turkey. Although different variables were found to influence problem-solving competency in each country, it is quite remarkable that these variables have in common that they are all components of the school climate concept.

  4. SDMtoolbox 2.0: the next generation Python-based GIS toolkit for landscape genetic, biogeographic and species distribution model analyses.

    Science.gov (United States)

    Brown, Jason L; Bennett, Joseph R; French, Connor M

    2017-01-01

    SDMtoolbox 2.0 is a software package for spatial studies of ecology, evolution, and genetics. The release of SDMtoolbox 2.0 allows researchers to use the most current ArcGIS and MaxEnt software, and reduces the amount of time that would be spent developing common solutions. The central aim of this software is to automate complicated and repetitive spatial analyses in an intuitive graphical user interface. One core tenet facilitates careful parameterization of species distribution models (SDMs) to maximize each model's discriminatory ability and minimize overfitting. This includes careful processing of occurrence data, environmental data, and model parameterization. This program directly interfaces with MaxEnt, one of the most powerful and widely used species distribution modeling software programs, although SDMtoolbox 2.0 is not limited to species distribution modeling or restricted to modeling in MaxEnt. Many of the SDM pre- and post-processing tools have 'universal' analogs for use with any modeling software. The current version contains a total of 79 scripts that harness the power of ArcGIS for macroecology, landscape genetics, and evolutionary studies. For example, these tools allow for biodiversity quantification (such as species richness or corrected weighted endemism), generation of least-cost paths and corridors among shared haplotypes, assessment of the significance of spatial randomizations, and enforcement of dispersal limitations of SDMs projected into future climates, to name only a few of the functions contained in SDMtoolbox 2.0. Lastly, dozens of generalized tools exist for batch processing and conversion of GIS data types or formats, which are broadly useful to any ArcMap user.

  5. LOD score exclusion analyses for candidate genes using random population samples.

    Science.gov (United States)

    Deng, H W; Li, J; Recker, R R

    2001-05-01

    While extensive analyses have been conducted to test for the importance of candidate genes with random population samples, no formal analyses have been conducted to test against it. We develop a LOD score approach for exclusion analyses of candidate genes with random population samples. Under this approach, specific genetic effects and inheritance models at candidate genes can be analysed, and if a LOD score is ≤ -2.0, the locus can be excluded from having an effect larger than that specified. Computer simulations show that, with sample sizes often employed in association studies, this approach has high power to exclude a gene from having moderate genetic effects. In contrast to regular association analyses, population admixture will not affect the robustness of our analyses; in fact, it renders our analyses more conservative, and thus any significant exclusion result is robust. Our exclusion analysis complements association analysis for candidate genes in random population samples and parallels the exclusion mapping analyses that may be conducted in linkage analyses with pedigrees or relative pairs. The usefulness of the approach is demonstrated by an application testing the importance of the vitamin D receptor and estrogen receptor genes underlying differential risk of osteoporotic fractures.
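    The exclusion rule described above is simple arithmetic on likelihoods; a hedged sketch (generic calculation, not the authors' software) is:

```python
import numpy as np

def lod_score(loglik_effect, loglik_null):
    """Base-10 log likelihood ratio of a specified genetic effect at the
    candidate locus versus no effect (natural-log likelihoods in, LOD out)."""
    return (loglik_effect - loglik_null) / np.log(10.0)

def is_excluded(lod, threshold=-2.0):
    """Exclusion rule from the abstract: LOD <= -2 excludes the locus
    from having an effect larger than the one specified."""
    return lod <= threshold

# Illustrative: the data are 10,000 times more likely under the null
# than under the specified effect size, giving a strongly negative LOD.
lod = lod_score(np.log(1.0), np.log(1.0e4))
print(lod, is_excluded(lod))
```

    A LOD of -2 corresponds to the data being 100 times more likely under the null, mirroring the conventional +3 threshold used for declaring linkage.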

  6. The ASSET intercomparison of stratosphere and lower mesosphere humidity analyses

    Directory of Open Access Journals (Sweden)

    H. E. Thornton

    2009-02-01

    This paper presents results from the first detailed intercomparison of stratosphere-lower mesosphere water vapour analyses; it builds on earlier results from the EU funded framework V "Assimilation of ENVISAT Data" (ASSET) project. Stratospheric water vapour plays an important role in many key atmospheric processes and therefore an improved understanding of its daily variability is desirable. With the availability of high resolution, good quality Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) water vapour profiles, the ability of four different atmospheric models to assimilate these data is tested. MIPAS data have been assimilated over September 2003 into the models of the European Centre for Medium Range Weather Forecasts (ECMWF), the Belgian Institute for Space and Aeronomy (BIRA-IASB), the French Service d'Aéronomie (SA-IPSL) and the UK Met Office. The resultant middle atmosphere humidity analyses are compared against independent satellite data from the Halogen Occultation Experiment (HALOE), the Polar Ozone and Aerosol Measurement (POAM III) and the Stratospheric Aerosol and Gas Experiment (SAGE II). The MIPAS water vapour profiles are generally well assimilated in the ECMWF, BIRA-IASB and SA systems, producing stratosphere-mesosphere water vapour fields where the main features compare favourably with the independent observations. However, the models are less capable of assimilating the MIPAS data where water vapour values are locally extreme or in regions of strong humidity gradients, such as the southern hemisphere lower stratosphere polar vortex. Differences in the analyses can be attributed to the choice of humidity control variable, how the background error covariance matrix is generated, the model resolution and its complexity, the degree of quality control of the observations and the use of observations near the model boundaries. Due to the poor performance of the Met Office analyses the results are not included in the intercomparison.

  7. The ASSET intercomparison of stratosphere and lower mesosphere humidity analyses

    Science.gov (United States)

    Thornton, H. E.; Jackson, D. R.; Bekki, S.; Bormann, N.; Errera, Q.; Geer, A. J.; Lahoz, W. A.; Rharmili, S.

    2009-02-01

    This paper presents results from the first detailed intercomparison of stratosphere-lower mesosphere water vapour analyses; it builds on earlier results from the EU funded framework V "Assimilation of ENVISAT Data" (ASSET) project. Stratospheric water vapour plays an important role in many key atmospheric processes and therefore an improved understanding of its daily variability is desirable. With the availability of high resolution, good quality Michelson Interferometer for Passive Atmospheric Sounding (MIPAS) water vapour profiles, the ability of four different atmospheric models to assimilate these data is tested. MIPAS data have been assimilated over September 2003 into the models of the European Centre for Medium Range Weather Forecasts (ECMWF), the Belgian Institute for Space and Aeronomy (BIRA-IASB), the French Service d'Aéronomie (SA-IPSL) and the UK Met Office. The resultant middle atmosphere humidity analyses are compared against independent satellite data from the Halogen Occultation Experiment (HALOE), the Polar Ozone and Aerosol Measurement (POAM III) and the Stratospheric Aerosol and Gas Experiment (SAGE II). The MIPAS water vapour profiles are generally well assimilated in the ECMWF, BIRA-IASB and SA systems, producing stratosphere-mesosphere water vapour fields where the main features compare favourably with the independent observations. However, the models are less capable of assimilating the MIPAS data where water vapour values are locally extreme or in regions of strong humidity gradients, such as the southern hemisphere lower stratosphere polar vortex. Differences in the analyses can be attributed to the choice of humidity control variable, how the background error covariance matrix is generated, the model resolution and its complexity, the degree of quality control of the observations and the use of observations near the model boundaries. 
Due to the poor performance of the Met Office analyses the results are not included in the intercomparison
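    The assimilation step common to the systems compared here can be summarized by the standard analysis update (a generic statement of the optimal-interpolation/3D-Var form; the centres differ precisely in the choice of humidity control variable and in how the background error covariance B is generated):

```latex
\mathbf{x}^{a} = \mathbf{x}^{b}
  + \mathbf{B}\mathbf{H}^{\mathsf{T}}
    \left( \mathbf{H}\mathbf{B}\mathbf{H}^{\mathsf{T}} + \mathbf{R} \right)^{-1}
    \left( \mathbf{y} - H[\mathbf{x}^{b}] \right)
```

    Here \(\mathbf{x}^{b}\) is the model background, \(\mathbf{y}\) the MIPAS observations, \(H\) (linearized as \(\mathbf{H}\)) the observation operator, and \(\mathbf{R}\) the observation error covariance; a poorly specified \(\mathbf{B}\) in strong humidity gradients is one way the differences noted above can arise.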

  8. Linkage and related analyses of Barrett's esophagus and its associated adenocarcinomas.

    Science.gov (United States)

    Sun, Xiangqing; Elston, Robert; Falk, Gary W; Grady, William M; Faulx, Ashley; Mittal, Sumeet K; Canto, Marcia I; Shaheen, Nicholas J; Wang, Jean S; Iyer, Prasad G; Abrams, Julian A; Willis, Joseph E; Guda, Kishore; Markowitz, Sanford; Barnholtz-Sloan, Jill S; Chandar, Apoorva; Brock, Wendy; Chak, Amitabh

    2016-07-01

    Familial aggregation and segregation analysis studies have provided evidence of a genetic basis for esophageal adenocarcinoma (EAC) and its premalignant precursor, Barrett's esophagus (BE). We aim to demonstrate the utility of linkage analysis to identify the genomic regions that might contain the genetic variants that predispose individuals to this complex trait (BE and EAC). We genotyped 144 individuals in 42 multiplex pedigrees chosen from 1000 singly ascertained BE/EAC pedigrees, and performed both model-based and model-free linkage analyses, using S.A.G.E. and other software. Segregation models were fitted, from the data on both the 42 pedigrees and the 1000 pedigrees, to determine parameters for performing model-based linkage analysis. Model-based and model-free linkage analyses were conducted in two sets of pedigrees: the 42 pedigrees and a subset of 18 pedigrees with female affected members that are expected to be more genetically homogeneous. Genome-wide associations were also tested in these families. Linkage analyses on the 42 pedigrees identified several regions consistently suggestive of linkage by different linkage analysis methods on chromosomes 2q31, 12q23, and 4p14. A linkage on 15q26 is the only consistent linkage region identified in the 18 female-affected pedigrees, in which the linkage signal is higher than in the 42 pedigrees. Other tentative linkage signals are also reported. Our linkage study of BE/EAC pedigrees identified linkage regions on chromosomes 2, 4, 12, and 15, with some reported associations located within our linkage peaks. Our linkage results can help prioritize association tests to delineate the genetic determinants underlying susceptibility to BE and EAC.

  9. Elastodynamic fracture analyses of large crack-arrest experiments

    International Nuclear Information System (INIS)

    Bass, B.R.; Pugh, C.E.; Walker, J.K.

    1985-01-01

    Results obtained to date show that the essence of the run-arrest events, including dynamic behavior, is being modeled. Refined meshes and optimum solution algorithms are important parameters in elastodynamic analysis programs to give sufficient resolution to the geometric and time-dependent aspects of fracture analyses. Further refinements in quantitative representation of material parameters and the inclusion of rate dependence through viscoplastic modeling is expected to give an even more accurate basis for assessing the fracture behavior of reactor pressure vessels under PTS and other off-normal loading conditions

  10. An extensive cocktail approach for rapid risk assessment of in vitro CYP450 direct reversible inhibition by xenobiotic exposure

    International Nuclear Information System (INIS)

    Spaggiari, Dany; Daali, Youssef; Rudaz, Serge

    2016-01-01

    Acute exposure to environmental factors strongly affects the metabolic activity of cytochrome P450 (P450). As a consequence, the risk of interaction could be increased, modifying the clinical outcomes of a medication. Because toxic agents cannot be administered to humans for ethical reasons, in vitro approaches are essential to evaluate their impact on P450 activities. In this work, an extensive cocktail mixture was developed and validated for in vitro P450 inhibition studies using human liver microsomes (HLM). The cocktail comprised eleven P450-specific probe substrates to simultaneously assess the activities of the following isoforms: 1A2, 2A6, 2B6, 2C8, 2C9, 2C19, 2D6, 2E1, 2J2 and subfamily 3A. The high selectivity and sensitivity of the developed UHPLC-MS/MS method were critical to the success of this methodology, whose main advantages are: (i) the use of eleven probe substrates with minimized interactions, (ii) a low HLM concentration, (iii) fast incubation (5 min) and (iv) the use of metabolic ratios as markers of microsomal P450 activities. This cocktail approach was successfully validated by comparing the obtained IC50 values for model inhibitors with those generated with conventional single-probe methods. Accordingly, reliable inhibition values could be generated 10-fold faster using a 10-fold smaller amount of HLM compared to individual assays. This approach was applied to assess the P450 inhibition potential of widespread insecticides, namely chlorpyrifos, fenitrothion, methylparathion and profenofos. In all cases, P450 2B6 was the most affected, with IC50 values in the nanomolar range. For the first time, mixtures of these four insecticides incubated at low concentrations showed a cumulative inhibitory in vitro effect on P450 2B6. - Highlights: • Ten P450 isoform activities assessed simultaneously with only one incubation. • P450 activity levels measured using the metabolic ratio approach. • IC50 values generated 10-fold faster
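    For illustration, the classic one-site inhibition model behind IC50 comparisons can be sketched as follows (a generic sketch on synthetic, noise-free data; this is not the paper's assay or fitting procedure, and the concentration units are arbitrary):

```python
import numpy as np

def fraction_activity(conc, ic50):
    """One-site reversible inhibition, Hill slope 1: v/v0 = 1/(1 + [I]/IC50)."""
    return 1.0 / (1.0 + np.asarray(conc, dtype=float) / ic50)

def ic50_from_curve(conc, activity):
    """Estimate IC50 as the concentration at 50% residual activity,
    interpolating on log10 concentration (activity must fall with conc)."""
    logc = np.log10(np.asarray(conc, dtype=float))
    # np.interp needs increasing sample points, so reverse both arrays
    return 10.0 ** np.interp(0.5, activity[::-1], logc[::-1])

# Synthetic dose-response curve with a true IC50 of 0.5; the paper
# reports nanomolar IC50 values for P450 2B6 with real inhibitors.
conc = np.array([0.01, 0.1, 0.5, 1.0, 5.0, 10.0])
act = fraction_activity(conc, 0.5)
print(ic50_from_curve(conc, act))
```

    Real assays would fit the model to noisy metabolic-ratio data by nonlinear regression rather than interpolate, but the recovered parameter has the same meaning.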

  11. [Methods, challenges and opportunities for big data analyses of microbiome].

    Science.gov (United States)

    Sheng, Hua-Fang; Zhou, Hong-Wei

    2015-07-01

    Microbiome is a novel research field related to a variety of chronic inflammatory diseases. Technically, there are two major approaches to the analysis of the microbiome: metataxonomics, by sequencing 16S rRNA variable tags, and metagenomics, by shotgun sequencing of the total microbial (mainly bacterial) genome mixture. The 16S rRNA sequencing analysis pipeline includes sequence quality control, diversity analyses, taxonomy and statistics; metagenome analysis further includes gene annotation and functional analyses. With the development of sequencing techniques, the cost of sequencing will decrease, and big data analyses will become the central task. Data standardization, accumulation, modeling and disease prediction are crucial for future exploitation of these data. Meanwhile, the informational properties of these data, and functional verification with culture-dependent and culture-independent experiments, remain the focus of future research. Studies of the human microbiome will bring a better understanding of the relations between the human body and the microbiome, especially in the context of disease diagnosis and therapy, which promise rich research opportunities.

  12. Post-facta Analyses of Fukushima Accident and Lessons Learned

    Energy Technology Data Exchange (ETDEWEB)

    Tanabe, Fumiya [Sociotechnical Systems Safety Research Institute, Ichige (Japan)

    2014-08-15

    Independent analyses have been performed of the core melt behavior of the Unit 1, Unit 2 and Unit 3 reactors of the Fukushima Daiichi Nuclear Power Station on 11-15 March 2011. The analyses are based on a phenomenological methodology combining investigation of measured data with a simple physical model calculation. The time variations of core water level, core material temperature and hydrogen generation rate are estimated. The analyses have revealed characteristics of the accident process in each reactor. In the case of the Unit 2 reactor, the calculated result suggests little hydrogen generation, because no steam was generated in the core to feed the zirconium-steam reaction during the fuel damage process. This could be the reason there was no hydrogen explosion in the Unit 2 reactor building. Analyses have also been performed on the core material behavior in another chaotic period, 19-31 March 2011, resulting in a re-melt hypothesis: the core material in each reactor may have melted again due to a shortage of cooling water. The hypothesis is consistent with many observed features of the dispersion of radioactive materials into the environment.
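    The zirconium-steam reaction invoked in the hydrogen-generation estimate is the well-known exothermic oxidation of the fuel cladding:

```latex
\mathrm{Zr} + 2\,\mathrm{H_2O} \;\longrightarrow\; \mathrm{ZrO_2} + 2\,\mathrm{H_2}
\qquad (\text{strongly exothermic})
```

    With no steam available in the uncovered Unit 2 core, this reaction could not proceed, which is the basis of the low hydrogen-generation estimate above.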

  13. Issues and approaches in risk-based aging analyses of passive components

    International Nuclear Information System (INIS)

    Uryasev, S.P.; Samanta, P.K.; Vesely, W.E.

    1994-01-01

    In previous NRC-sponsored work, a general methodology was developed to quantify the risk contributions from aging components at nuclear plants. The methodology allowed Probabilistic Risk Analyses (PRAs) to be modified to incorporate age-dependent component failure rates, along with aging maintenance models, in order to evaluate and prioritize the aging contributions from active components using the linear aging failure rate model and empirical component aging rates. In the present paper, this methodology is extended to passive components (for example, pipes, heat exchangers, and the vessel). The analysis of passive components raises issues different from those of active components. Here, we specifically focus on three aspects that need to be addressed in risk-based aging prioritization of passive components.
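The linear aging failure rate model referred to above takes the form λ(t) = λ₀ + a·t. A minimal sketch of how such a rate translates into an age-dependent failure probability; the base rate and aging rate below are purely hypothetical illustration values, not figures from the paper:

```python
import math

def linear_aging_failure_rate(t, lam0, a):
    """Linear aging model: lambda(t) = lam0 + a*t (units: per hour)."""
    return lam0 + a * t

def failure_probability(t, lam0, a):
    """P(T <= t) = 1 - exp(-H(t)) with cumulative hazard
    H(t) = integral_0^t lambda(s) ds = lam0*t + 0.5*a*t**2."""
    cumulative_hazard = lam0 * t + 0.5 * a * t ** 2
    return 1.0 - math.exp(-cumulative_hazard)

# Hypothetical values: base rate 1e-6 per hour, aging rate 1e-10 per hour^2
lam0, a = 1e-6, 1e-10
for years in (1, 10, 40):
    t = years * 8760.0
    print(years, failure_probability(t, lam0, a))
```

The quadratic term in the cumulative hazard is what makes old components contribute disproportionately, which is the effect the prioritization scheme exploits.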

  14. Analysing the Effects of a Pigs Production Quota within a Dynamic CGE Framework

    DEFF Research Database (Denmark)

    Adams, Philip D; Hansen, Lill Thanning; Jacobsen, Lars Bo

    2001-01-01

    In this paper we address the issue of timing and announcement within a dynamic applied general equilibrium model of the Danish economy. Specifically we analyse the introduction of a quota on the production of pigs. Two scenarios are analysed, namely the introduction of a once-off quota without any...

  15. ATOP - The Advanced Taiwan Ocean Prediction System Based on the mpiPOM. Part 1: Model Descriptions, Analyses and Results

    Directory of Open Access Journals (Sweden)

    Leo Oey

    2013-01-01

    Full Text Available A data-assimilated Taiwan Ocean Prediction (ATOP) system is being developed at the National Central University, Taiwan. The model simulates sea-surface height, three-dimensional currents, temperature, salinity and turbulent mixing. The model has options for tracer and particle-tracking algorithms, as well as for wave-induced Stokes drift and wave-enhanced mixing and bottom drag. Two different forecast domains have been tested: a large-grid domain that encompasses the entire North Pacific Ocean at 0.1° × 0.1° horizontal resolution and 41 vertical sigma levels, and a smaller western North Pacific domain which at present also has the same horizontal resolution. In both domains, 25-year spin-up runs from 1988 to 2011 were first conducted, forced by six-hourly Cross-Calibrated Multi-Platform (CCMP) and NCEP-reanalysis Global Forecast System (GFS) winds. The results are then used as initial conditions to conduct ocean analyses from January 2012 through February 2012, when updated hindcasts and real-time forecasts begin using the GFS winds. This paper describes the ATOP system and compares the forecast results against satellite altimetry data to assess model skill. The model results are also shown to compare well with observations of (i) the Kuroshio intrusion in the northern South China Sea, and (ii) the subtropical counter current. A review of, and comparison with, other models in the literature on (i) is also given.

  16. Optical region elemental abundance analyses of B and A stars

    International Nuclear Information System (INIS)

    Adelman, S.J.

    1984-01-01

    Abundance analyses using optical region data and fully line-blanketed model atmospheres have been performed for six moderately sharp-lined middle to late B-type stars. The derived abundances have values similar to those of the Sun. (author)

  17. USE OF BOUNDING ANALYSES TO ESTIMATE THE PERFORMANCE OF A SEISMICALLY ISOLATED STRUCTURE

    Directory of Open Access Journals (Sweden)

    Gökhan ÖZDEMİR

    2017-03-01

    Full Text Available The current design approach for seismically isolated structures is to perform bounding analyses. These analyses provide an envelope for the response of the seismically isolated structure rather than focusing on its actual performance. In this study, the success of bounding analyses in estimating the performance of a seismically isolated structure, in which isolation is provided by lead rubber bearings (LRBs), is evaluated in a comparative manner. For this purpose, nonlinear response history analyses were performed under bidirectional ground motion excitations. In the bounding analyses, non-deteriorating hysteretic representations were used to model the hysteretic behavior of the LRBs. On the other hand, to estimate the actual performance of both the superstructure and the isolator units, deteriorating hysteretic idealizations were employed, with the deterioration in strength of the LRBs defined as a function of the temperature rise in the lead core. The analyzed structure is an existing seismically isolated hospital building, modeled analytically in accordance with its reported design properties for both the isolation units and the superstructure. Results obtained from analyses in which the LRBs are idealized by deteriorating and by non-deteriorating hysteretic representations are used in the comparisons. The response quantities compared are the maximum isolator displacement, maximum isolator force, maximum absolute floor acceleration, and maximum relative story displacements. In an average sense, bounding analysis is found to provide conservative estimates of the selected response quantities and fulfills its intended purpose. However, there may be individual cases in which bounding analyses fail to provide a safe envelope.

  18. Modelling of the $t\\bar{t}H$ and $t\\bar{t}V$ $(V=W,Z)$ processes for $\\sqrt{s}=13$ TeV ATLAS analyses

    CERN Document Server

    The ATLAS collaboration

    2016-01-01

    Production of top quark pairs in association with heavy Standard Model bosons is important both as a signal and a background in several ATLAS analyses. Strong constraints on such processes cannot at present be obtained from data, and therefore their modelling by Monte Carlo simulation as well as the associated uncertainties are important. This note documents the Monte Carlo samples currently being used in ATLAS for the $t\\bar{t}H$ and $t\\bar{t}V$ ($V=W,Z$ vector bosons) processes for $\\sqrt{s}=13$ TeV proton-proton collisions.

  19. Emotional insecurity in the family and community and youth delinquency in Northern Ireland: a person-oriented analysis across five waves

    Science.gov (United States)

    Cummings, E. Mark; Taylor, Laura K.; Merrilees, Christine E.; Goeke-Morey, Marcie C.; Shirlow, Peter

    2015-01-01

    Background Over one billion children are exposed worldwide to political violence and armed conflict. Currently, conclusions about bases for adjustment problems are qualified by limited longitudinal research from a process-oriented, social-ecological perspective. In this study, we examined a theoretically-based model for the impact of multiple levels of the social ecology (family, community) on adolescent delinquency. Specifically, this study explored the impact of children's emotional insecurity about both the family and community on youth delinquency in Northern Ireland. Methods In the context of a five-wave longitudinal research design, participants included 999 mother–child dyads in Belfast (482 boys, 517 girls), drawn from socially-deprived, ethnically-homogenous areas that had experienced political violence. Youth ranged in age from 10 to 20 and were 12.18 (SD = 1.82) years old on average at Time 1. Findings The longitudinal analyses were conducted in hierarchical linear modeling (HLM), allowing for the modeling of interindividual differences in intraindividual change. Intraindividual trajectories of emotional insecurity about the family related to children's delinquency. Greater insecurity about the community worsened the impact of family conflict on youth's insecurity about the family, consistent with the notion that youth's insecurity about the community sensitizes them to exposure to family conflict in the home. Conclusions The results suggest that ameliorating children's insecurity about family and community in contexts of political violence is an important goal toward improving adolescents' well-being, including reduced risk for delinquency. PMID:25981614

  20. Emotional insecurity in the family and community and youth delinquency in Northern Ireland: a person-oriented analysis across five waves.

    Science.gov (United States)

    Cummings, E Mark; Taylor, Laura K; Merrilees, Christine E; Goeke-Morey, Marcie C; Shirlow, Peter

    2016-01-01

    Over one billion children are exposed worldwide to political violence and armed conflict. Currently, conclusions about bases for adjustment problems are qualified by limited longitudinal research from a process-oriented, social-ecological perspective. In this study, we examined a theoretically-based model for the impact of multiple levels of the social ecology (family, community) on adolescent delinquency. Specifically, this study explored the impact of children's emotional insecurity about both the family and community on youth delinquency in Northern Ireland. In the context of a five-wave longitudinal research design, participants included 999 mother-child dyads in Belfast (482 boys, 517 girls), drawn from socially-deprived, ethnically-homogenous areas that had experienced political violence. Youth ranged in age from 10 to 20 and were 12.18 (SD = 1.82) years old on average at Time 1. The longitudinal analyses were conducted in hierarchical linear modeling (HLM), allowing for the modeling of interindividual differences in intraindividual change. Intraindividual trajectories of emotional insecurity about the family related to children's delinquency. Greater insecurity about the community worsened the impact of family conflict on youth's insecurity about the family, consistent with the notion that youth's insecurity about the community sensitizes them to exposure to family conflict in the home. The results suggest that ameliorating children's insecurity about family and community in contexts of political violence is an important goal toward improving adolescents' well-being, including reduced risk for delinquency. © 2015 Association for Child and Adolescent Mental Health.
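Hierarchical linear modeling of this kind, with repeated waves nested within individuals, can be sketched as a mixed-effects growth model. The data below are synthetic and the variable names (`youth`, `wave`, `insecurity`) are illustrative stand-ins for the study's measures; statsmodels' `MixedLM` plays the role of the HLM software, with a random intercept and slope per person capturing interindividual differences in intraindividual change:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_youth, n_waves = 100, 5

# Synthetic panel: each youth has a random intercept and a random slope
youth = np.repeat(np.arange(n_youth), n_waves)
wave = np.tile(np.arange(n_waves), n_youth)
u0 = rng.normal(0, 1.0, n_youth)[youth]   # level-2 intercept deviations
u1 = rng.normal(0, 0.3, n_youth)[youth]   # level-2 slope deviations
insecurity = 2.0 + 0.5 * wave + u0 + u1 * wave + rng.normal(0, 0.5, youth.size)
df = pd.DataFrame({"youth": youth, "wave": wave, "insecurity": insecurity})

# Level 1: within-person change across waves;
# level 2: random intercept + slope for each youth
model = smf.mixedlm("insecurity ~ wave", df, groups=df["youth"], re_formula="~wave")
result = model.fit()
print(result.params["wave"])  # fixed-effect growth rate (0.5 in this synthetic data)
```

Individual growth trajectories (the `u0`, `u1` deviations recovered as random effects) are what the study relates to delinquency outcomes.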

  1. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.; Mai, Paul Martin; Thingbaijam, Kiran Kumar; Razafindrakoto, H. N. T.; Genton, Marc G.

    2014-01-01

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  2. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.
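The core idea of the SPCT can be illustrated on synthetic fields: compute a pointwise loss of each competing model against the reference, then test whether the mean of the loss-difference field is zero. This sketch uses made-up 2-D "slip" fields and a plain one-sample t-test; the full SPCT additionally corrects the variance of the mean for spatial correlation in the difference field, which this simplification omits:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic 2-D slip fields on a hypothetical 20 x 40 fault grid
ref = rng.gamma(2.0, 1.0, size=(20, 40))        # reference slip model
model_a = ref + rng.normal(0, 0.3, ref.shape)   # close to the reference
model_b = ref + rng.normal(0, 1.0, ref.shape)   # noisier competitor

def squared_loss(model, reference):
    """Pointwise loss field; the SPCT allows any such loss function."""
    return (model - reference) ** 2

# Difference of the two loss fields; the test's H0 is E[D] = 0
d = (squared_loss(model_a, ref) - squared_loss(model_b, ref)).ravel()

# Naive test, without the spatial-correlation variance correction
t_stat, p_value = stats.ttest_1samp(d, 0.0)
print(t_stat < 0, p_value < 0.05)  # model_a has significantly lower loss
```

A negative mean difference ranks `model_a` closer to the reference, which is how the SPCT orders competing slip models.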

  3. Basic Diagnosis and Prediction of Persistent Contrail Occurrence using High-resolution Numerical Weather Analyses/Forecasts and Logistic Regression. Part II: Evaluation of Sample Models

    Science.gov (United States)

    Duda, David P.; Minnis, Patrick

    2009-01-01

    Previous studies have shown that probabilistic forecasting may be a useful method for predicting persistent contrail formation. A probabilistic forecast of contrail formation over the contiguous United States (CONUS) is created using hourly meteorological analyses from the Advanced Regional Prediction System (ARPS) and from the Rapid Update Cycle (RUC), together with GOES water vapor channel measurements, combined with surface and satellite observations of contrails. Two groups of logistic models were created. The first group (SURFACE models) is based on surface-based contrail observations supplemented with satellite observations of contrail occurrence. The second group (OUTBREAK models) is derived from a selected subgroup of satellite-based observations of widespread persistent contrails. The mean accuracies of both the SURFACE and OUTBREAK models typically exceeded 75 percent when based on the RUC or ARPS analysis data, but decreased when the logistic models were derived from ARPS forecast data.
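A logistic contrail-occurrence model of the kind described can be sketched as follows. The predictors (upper-tropospheric temperature and relative humidity with respect to ice) and all numeric values are invented for illustration, not taken from the ARPS/RUC data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 2000

# Hypothetical predictors on a synthetic sample of upper-tropospheric states
temp = rng.uniform(210, 240, n)   # temperature (K)
rhi = rng.uniform(40, 140, n)     # relative humidity w.r.t. ice (%)

# Synthetic "truth": persistence favoured by cold, ice-supersaturated air
logit = 0.08 * (rhi - 100) - 0.15 * (temp - 225)
p = 1 / (1 + np.exp(-logit))
contrail = rng.random(n) < p

# Fit the logistic model and report in-sample accuracy
X = np.column_stack([temp, rhi])
clf = LogisticRegression().fit(X, contrail)
accuracy = clf.score(X, contrail)
print(round(accuracy, 2))
```

The fitted coefficients recover the signs of the synthetic relationship (more likely when colder and moister), which is the qualitative behaviour the SURFACE/OUTBREAK models encode.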

  4. Comparative biochemical analyses of venous blood and peritoneal fluid from horses with colic using a portable analyser and an in-house analyser.

    Science.gov (United States)

    Saulez, M N; Cebra, C K; Dailey, M

    2005-08-20

    Fifty-six horses with colic were examined over a period of three months. The concentrations of glucose, lactate, sodium, potassium and chloride, and the pH, of samples of blood and peritoneal fluid were determined with a portable clinical analyser and with an in-house analyser, and the results were compared. Compared with the in-house analyser, the portable analyser gave higher pH values for blood and peritoneal fluid, with greater variability, in the alkaline range, and lower pH values in the acidic range. It gave lower concentrations of glucose in the range below 8.3 mmol/l, and lower concentrations of lactate in venous blood in the range below 5 mmol/l and in peritoneal fluid in the range below 2 mmol/l, with less variability. On average, the portable analyser underestimated the concentrations of lactate and glucose in peritoneal fluid in comparison with the in-house analyser. Its measurements of the concentrations of sodium and chloride in peritoneal fluid had a higher bias and were more variable than its measurements in venous blood, and its measurements of potassium in venous blood and peritoneal fluid had a smaller bias and less variability than the measurements made with the in-house analyser.
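Method-comparison results like these, a bias and its variability between a portable and a reference analyser, are conventionally summarized with a Bland-Altman analysis. A minimal sketch with hypothetical paired lactate readings (the numbers are invented, not from the study):

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias (mean difference) and 95% limits of agreement between methods."""
    a = np.asarray(method_a, dtype=float)
    b = np.asarray(method_b, dtype=float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired lactate readings (mmol/l): portable vs in-house analyser
portable = [1.1, 1.8, 2.4, 3.0, 4.1, 4.8]
in_house = [1.4, 2.0, 2.9, 3.3, 4.6, 5.2]
bias, (lo, hi) = bland_altman(portable, in_house)
print(round(bias, 2))  # → -0.37 : portable reads lower, as reported in the study
```

A negative bias with narrow limits of agreement corresponds to the paper's finding of systematic underestimation with little variability.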

  5. Seismic analyses of structures. 1st draft

    International Nuclear Information System (INIS)

    David, M.

    1995-01-01

    The dynamic analysis presented in this paper refers to the seismic analysis of the main building of the Paks NPP. The aim of the analysis was to determine the floor response spectra in response to seismic input. The analysis was performed with a three-dimensional calculation model, and the floor response spectra were determined for a number of levels from the floor response time histories; no other adjustments were applied. The following results of the seismic analysis are presented: the three-dimensional finite element model; the basic assumptions of the dynamic analyses; a table of frequencies and included factors; modal masses for all modes; and floor response spectra at all the selected nodes, with figures of the indicated nodes and important nodes of free vibration

  6. Seismic analyses of structures. 1st draft

    Energy Technology Data Exchange (ETDEWEB)

    David, M [David Consulting, Engineering and Design Office (Czech Republic)

    1995-07-01

    The dynamic analysis presented in this paper refers to the seismic analysis of the main building of the Paks NPP. The aim of the analysis was to determine the floor response spectra in response to seismic input. The analysis was performed with a three-dimensional calculation model, and the floor response spectra were determined for a number of levels from the floor response time histories; no other adjustments were applied. The following results of the seismic analysis are presented: the three-dimensional finite element model; the basic assumptions of the dynamic analyses; a table of frequencies and included factors; modal masses for all modes; and floor response spectra at all the selected nodes, with figures of the indicated nodes and important nodes of free vibration.

  7. A Cyber-Attack Detection Model Based on Multivariate Analyses

    Science.gov (United States)

    Sakai, Yuto; Rinsaka, Koichiro; Dohi, Tadashi

    In the present paper, we propose a novel cyber-attack detection model that applies two multivariate-analysis methods to the audit data observed on a host machine. The statistical techniques used are the well-known Hayashi's quantification method IV and cluster analysis. We quantify the observed qualitative audit event sequences via quantification method IV, and group similar audit event sequences together using cluster analysis. Simulation experiments show that our model can improve cyber-attack detection accuracy in some realistic cases where both normal and attack activities are intermingled.
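The grouping step can be sketched as follows. For simplicity this uses per-session event-count vectors and k-means in place of Hayashi's quantification method IV (which embeds qualitative event sequences into a numeric space); the event vocabulary and rate values are hypothetical:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Hypothetical audit event vocabulary; each session is a count vector over it
events = ["login", "read", "write", "sudo", "netcon"]
normal = rng.poisson(lam=[5, 20, 10, 0.2, 1], size=(40, 5))   # normal sessions
attack = rng.poisson(lam=[1, 2, 1, 4, 15], size=(10, 5))      # attack-like sessions
sessions = np.vstack([normal, attack]).astype(float)

# Group similar sessions; attack-like sessions should share a cluster
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(sessions)
labels = km.labels_
print(labels[:40].mean(), labels[40:].mean())  # the two groups separate
```

Once sessions are grouped, a small cluster far from the bulk of normal activity is the candidate attack signature, which is the intuition behind the detection model.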

  8. Economical analyses of construction of a biomass boiler house

    International Nuclear Information System (INIS)

    Normak, A.

    2002-01-01

    To reduce energy costs we can fire our boiler with cheaper fuel, and one of the cheapest fuels is wood biomass. How to use cheaper wood biomass in heat generation, so as to decrease energy costs and increase the share of biomass in the energy balance, is a topical issue. Before deciding to build a biomass boiler house, it is advisable to analyse the economic situation and work out the most profitable, efficient, reliable and ecological boiler plant design for the particular conditions. The best way to perform the analyses is to use the economic model presented. It saves time and gives an objective evaluation of the project. (author)
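An economic comparison of this kind typically starts from the fuel-cost saving and a simple payback time. A minimal sketch; the heat demand, fuel prices and investment below are assumptions for illustration, not figures from the paper:

```python
def simple_payback(investment, annual_saving):
    """Years to recover the boiler-house investment from fuel-cost savings."""
    return investment / annual_saving

# Hypothetical figures: switching an oil-fired boiler to wood biomass
heat_demand_mwh = 4000             # annual heat output (assumed)
oil_cost, wood_cost = 45.0, 18.0   # EUR per MWh of fuel energy (assumed)
investment = 250_000               # EUR for the biomass boiler house (assumed)

annual_saving = heat_demand_mwh * (oil_cost - wood_cost)
print(simple_payback(investment, annual_saving))  # → ~2.3 years
```

A fuller model of the kind the paper describes would add efficiency differences, operating and maintenance costs, and discounting, but the payback ratio is the usual first screen.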

  9. Resensi Buku: Organization Strategy, Structure, and Process

    Directory of Open Access Journals (Sweden)

    Ayi Ahadiyat

    2009-08-01

    Full Text Available Book Review. Title: Organization Strategy, Structure, and Process. Authors: Raymond E. Miles and Charles C. Snow. Publisher: McGraw-Hill Kogakusha, Ltd (International Student Edition), Tokyo, 274 pp. Year: 1978

  10. Book Review: Membongkar Logika Penafsir Agama

    Directory of Open Access Journals (Sweden)

    Mohammad Muslih

    2009-11-01

    Full Text Available Title: Speaking in God's Name: Islamic Law, Authority and Women. Author: Khaled Abou el-Fadl. Publisher: Oneworld Press, Oxford, 2001. Length: 361 pp. Translated edition: Atas Nama Tuhan, dari Fikih Otoriter ke Fikih Otoritatif. Translator: R. Cecep Lukman Yasin. Publisher: Serambi, Jakarta, 2004

  11. SCALE Graphical Developments for Improved Criticality Safety Analyses

    International Nuclear Information System (INIS)

    Barnett, D.L.; Bowman, S.M.; Horwedel, J.E.; Petrie, L.M.

    1999-01-01

    New computer graphics developments at Oak Ridge National Laboratory (ORNL) are being used to provide visualization of criticality safety models and calculational results, as well as tools for criticality safety analysis input preparation. The purpose of this paper is to present the status of current development efforts to further enhance the SCALE (Standardized Computer Analyses for Licensing Evaluations) computer software system. Applications for criticality safety analysis in the areas of three-dimensional (3-D) model visualization, input preparation and execution via a graphical user interface (GUI), and two-dimensional (2-D) plotting of results are discussed.

  12. Exploratory multinomial logit model-based driver injury severity analyses for teenage and adult drivers in intersection-related crashes.

    Science.gov (United States)

    Wu, Qiong; Zhang, Guohui; Ci, Yusheng; Wu, Lina; Tarefder, Rafiqul A; Alcántara, Adélamar Dely

    2016-05-18

    Teenage drivers are more likely than adult drivers to be involved in severely incapacitating and fatal crashes. Moreover, because two thirds of urban vehicle miles traveled are on signal-controlled roadways, significant research effort is needed to investigate intersection-related teenage driver injury severities and their contributing factors in terms of driver behavior, vehicle-infrastructure interactions, environmental characteristics, roadway geometric features, and traffic composition. This study therefore aims to explore the characteristic differences between teenage and adult drivers in intersection-related crashes, identify the significant contributing attributes, and analyze their impacts on driver injury severities. Using crash data collected in New Mexico from 2010 to 2011, two multinomial logit regression models were developed to analyze injury severities for teenage and adult drivers, respectively. Elasticity analyses and transferability tests were conducted to better understand the quantitative impacts of these factors and the generality of the teenage driver injury severity model. The results showed that although many of the same contributing factors were significant in both the teenage and adult driver models, certain attributes must be distinguished in order to develop effective safety solutions specific to the two driver groups. The findings help in understanding the uniqueness of teenage crashes and in developing cost-effective solutions to reduce intersection-related teenage injury severities and facilitate driver injury mitigation research.
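A multinomial logit severity model of this type can be sketched on synthetic data; the three severity levels, the predictors and the coefficients below are invented for illustration. With three or more classes, scikit-learn's `LogisticRegression` (default lbfgs solver) fits a multinomial model, and a crude finite-difference check stands in for the elasticity analysis:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 3000

# Hypothetical standardized predictors: speed, restraint use, night driving
X = rng.normal(size=(n, 3))

# Synthetic utilities for three severity levels: no injury / injury / severe
beta = np.array([[0.0, 0.0, 0.0],
                 [0.8, -0.5, 0.3],
                 [1.5, -1.0, 0.6]])
util = X @ beta.T + rng.gumbel(size=(n, 3))   # Gumbel noise -> logit choice model
severity = util.argmax(axis=1)

clf = LogisticRegression(max_iter=1000).fit(X, severity)
probs = clf.predict_proba(X)

# Finite-difference stand-in for an elasticity: bump "speed", watch P(severe)
X_bumped = X.copy()
X_bumped[:, 0] += 0.1
delta = clf.predict_proba(X_bumped)[:, 2].mean() - probs[:, 2].mean()
print(probs.shape, delta > 0)
```

In the study's setting, such per-factor sensitivities are computed separately for the teenage and adult models to compare the two driver groups.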

  13. An analyser for power plant operations

    International Nuclear Information System (INIS)

    Rogers, A.E.; Wulff, W.

    1990-01-01

    Safe and reliable operation of power plants is essential. Power plant operators need a forecast of what the plant will do when its current state is disturbed. The in-line plant analyser provides precisely this information at relatively low cost. The plant analyser scheme uses a mathematical model of the dynamic behaviour of the plant to establish a numerical simulation. Over a period of time, the simulation is calibrated with measurements from the particular plant in which it is used. The analyser then provides a reference against which to evaluate the plant's current behaviour. It can be used to alert the operator to any atypical excursions or combinations of readings that indicate malfunction or off-normal conditions that, as the Three Mile Island event suggests, are not easily recognised by operators. In a look-ahead mode, it can forecast the behaviour resulting from an intended change in settings or operating conditions. Then, when such changes are made, the plant's behaviour can be tracked against the forecast in order to assure that the plant is behaving as expected. It can be used to investigate malfunctions that have occurred and test possible adjustments in operating procedures. Finally, it can be used to consider how far from the limits of performance the elements of the plant are operating. Then by adjusting settings, the required power can be generated with as little stress as possible on the equipment. (6 figures) (Author)

  14. Rehme correlation for spacer pressure drop compared to XT-ADS rod bundle simulations and water experiment

    International Nuclear Information System (INIS)

    Batta, A.; Class, A.; Litfin, K.; Wetzel, T.

    2011-01-01

    The Rehme correlation is the most common formula for estimating the pressure drop of spacers in the design phase of new bundle geometries. It is based on considerations of momentum losses and takes into account the obstruction of the flow cross-section, but it ignores the geometric details of the spacer design. Within the framework of accelerator-driven sub-critical reactor systems (ADS), heavy-liquid-metal (HLM) cooled fuel assemblies are considered. At the KArlsruhe Liquid metal LAboratory (KALLA) of the Karlsruhe Institute of Technology, a series of experiments is being performed to quantify both pressure losses and heat transfer in HLM-cooled rod bundles. The present study compares simulation results obtained with the commercial CFD code STAR-CCM+ to the experiments and to the Rehme correlation. It can be shown that the Rehme correlation, the simulations and the experiments all yield similar trends, but quantitative predictions can only be delivered by the CFD, which takes into account the full geometric details of the spacer geometry. (orig.)
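The Rehme correlation estimates the spacer pressure drop from the projected blockage ratio ε alone: Δp = Cv · ε² · ρv²/2, where v is the bundle velocity and the modified drag coefficient Cv (Reynolds-number dependent, of order 6-7 at high Re) absorbs all remaining geometric detail. A sketch with assumed lead-bismuth values; the coolant velocity, blockage ratio and Cv are illustrative assumptions, not values from the KALLA experiments:

```python
def rehme_spacer_dp(rho, v_bundle, a_spacer, a_flow, c_v=6.5):
    """Rehme spacer pressure drop: dp = C_v * eps**2 * rho * v**2 / 2,
    with eps the projected blockage ratio of the spacer."""
    eps = a_spacer / a_flow
    return c_v * eps ** 2 * 0.5 * rho * v_bundle ** 2

# Hypothetical HLM (lead-bismuth) values: rho ~ 10300 kg/m3, v = 2 m/s,
# spacer blocking 30% of the flow cross-section, C_v = 6.5 assumed
dp = rehme_spacer_dp(rho=10300.0, v_bundle=2.0, a_spacer=0.3, a_flow=1.0)
print(round(dp))  # → 12051 Pa
```

Because only ε enters, two spacers with the same blockage but different vane or dimple designs get the same Δp here, which is exactly the limitation the CFD comparison in the study addresses.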

  15. Integrated experiment activity monitoring for wLCG sites based on GWT

    International Nuclear Information System (INIS)

    Feijóo, Alejandro Guinó; Espinal, Xavier

    2011-01-01

    The goal of this work is to develop a High Level Monitoring (HLM) system in which the distributed computing activities of an LHC experiment (ATLAS) are merged. ATLAS distributed computing is organized in clouds, where the Tier-1 (primary) centres provide services to the associated Tier-2 (secondary) centres, so that each group is seen as a cloud by the experiment. Computing-activity and site-stability monitoring services are numerous and delocalized, so it would be very useful for a cloud manager to have a single place in which to aggregate the available monitoring information. The idea presented in this paper is to develop a set of collectors that gather information on site status and on performance in data distribution, data processing and Worldwide LHC Computing Grid (WLCG) tests (Service Availability Monitoring), store it in dedicated databases, process the results and show them in a single HLM page. With this in place, one can investigate further by interacting with the front-end, which is fed by the statistics stored in the databases.

  16. Theorising and Analysing Academic Labour

    Directory of Open Access Journals (Sweden)

    Thomas Allmer

    2018-01-01

    Full Text Available The aim of this article is to contextualise universities historically within capitalism and to analyse academic labour and the deployment of digital media theoretically and critically. It argues that the post-war expansion of the university can be considered as medium and outcome of informational capitalism and as a dialectical development of social achievement and advanced commodification. The article strives to identify the class position of academic workers, introduces the distinction between academic work and labour, discusses the connection between academic, information and cultural work, and suggests a broad definition of university labour. It presents a theoretical model of working conditions that helps to systematically analyse the academic labour process and to provide an overview of working conditions at universities. The paper furthermore argues for the need to consider the development of education technologies as a dialectics of continuity and discontinuity, discusses the changing nature of the forces and relations of production, and the impact on the working conditions of academics in the digital university. Based on Erik Olin Wright’s inclusive approach of social transformation, the article concludes with the need to bring together anarchist, social democratic and revolutionary strategies for establishing a socialist university in a commons-based information society.

  17. Can trial sequential monitoring boundaries reduce spurious inferences from meta-analyses?

    DEFF Research Database (Denmark)

    Thorlund, Kristian; Devereaux, P J; Wetterslev, Jørn

    2008-01-01

    BACKGROUND: Results from apparently conclusive meta-analyses may be false. A limited number of events from a few small trials and the associated random error may be under-recognized sources of spurious findings. The information size (IS, i.e. number of participants) required for a reliable......-analyses after each included trial and evaluated their results using a conventional statistical criterion (alpha = 0.05) and two-sided Lan-DeMets monitoring boundaries. We examined the proportion of false-positive results and important inaccuracies in estimates of treatment effects that resulted from the two...... approaches. RESULTS: Using the random-effects model and final data, 12 of the meta-analyses yielded P > alpha = 0.05, and 21 yielded P < alpha = 0.05. The monitoring boundaries eliminated all false positives. Important inaccuracies in estimates were observed in 6 of 21 meta-analyses using the conventional...
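The Lan-DeMets monitoring boundaries referred to here are built from an alpha-spending function; for the O'Brien-Fleming type, the cumulative alpha spent at information fraction t is α*(t) = 2(1 − Φ(z_{α/2}/√t)). A sketch of the spending function (computing the sequential boundaries themselves additionally requires recursive numerical integration, which this omits):

```python
from scipy.stats import norm

def obrien_fleming_spent(t, alpha=0.05):
    """Lan-DeMets O'Brien-Fleming-type alpha-spending function:
    alpha*(t) = 2*(1 - Phi(z_{alpha/2} / sqrt(t))), 0 < t <= 1,
    where t is the accrued fraction of the required information size."""
    z = norm.ppf(1 - alpha / 2)
    return 2 * (1 - norm.cdf(z / t ** 0.5))

for t in (0.2, 0.5, 1.0):
    print(t, obrien_fleming_spent(t))
```

The function spends almost no alpha early (making early looks very conservative) and releases the full 0.05 only at t = 1, which is why such boundaries suppress the spurious early findings the paper describes.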

  18. Analyses of hydrodynamic effects of large sodium-water reactions

    International Nuclear Information System (INIS)

    Sakano, K.; Shindo, Y.; Koishikawa, A.; Maekawa, I.

    1977-01-01

    Large-leak sodium-water reactions that could occur in a steam generator of an LMFBR cause abrupt changes in the pressure and velocity of the fluid in the secondary sodium system and relief system. This paper describes SOWACS-III together with its model and method. Results of analyses are also given, including a comparison with experimental results for the initial pressure spike. SOWACS-III treats the system consisting of the steam generator, vessel, valve, pump and pipe, and uses the following models and methods. (1) Components are assumed to be one-dimensional. (2) Pressure wave propagation near the reaction zone, where hydrogen is generated, is analyzed in spherical coordinates (sphere-cylinder model). (3) A moving boundary is formed by the contact of sodium with another fluid such as hydrogen or nitrogen. The boundary travels without mixing of sodium and the other fluid across the boundary (boundary-tracking model); the boundary can also be treated as fixed in its original place (fixed-boundary model). (4) Pressure wave propagation is analyzed by the explicit method of characteristics in one-dimensional Eulerian coordinates. (5) The flow-induced force is analyzed from the momentum balance. (6) The lateral motion of the relief piping caused by this force is analyzed with the NASTRAN code. Analyses were carried out for large sodium-water reaction experiments in the SWAT-3 rig of PNC using the sphere-cylinder model. The calculated pressure spike in the reaction vessel was compared with the measured one for a few milliseconds after water injection. The calculated and measured values were 6.4 ata and 6.7 ata for the peak pressure, and 0.6 ms and 2.8 ms for the rise time, respectively.

  19. Business models for telehealth in the US: analyses and insights

    Directory of Open Access Journals (Sweden)

    Pereira F

    2017-02-01

    Full Text Available Francis Pereira, Data Sciences and Operations, Marshall School of Business, University of Southern California, Los Angeles, CA, USA. Abstract: A growing shortage of medical doctors and nurses globally, coupled with increasing life expectancy, is generating greater cost pressures on health care, in the US and worldwide. In this respect, telehealth can help alleviate these pressures, as well as extend medical services to underserved or unserved areas. However, its relatively slow adoption in the US, as well as in other markets, suggests the presence of barriers and challenges. The use of a business model framework helps identify the value proposition of telehealth as well as these challenges, which include identifying the right revenue model, organizational structure, and, perhaps more importantly, the stakeholders in the telehealth ecosystem. Successful and cost-effective deployment of telehealth requires a redefinition of the ecosystem and a comprehensive review of all benefits and beneficiaries of such a system; hence a reassessment of all the stakeholders that could benefit from it, beyond the traditional patient–health provider–insurer model, and thus of “who should pay” for such a system. The driving efforts of a “keystone” player in developing this initiative would also help. Keywords: telehealth, business model framework, stakeholders, ecosystem, VISOR business model

  20. Mechanical analyses on the digital behaviour of the Tokay gecko (Gekko gecko) based on a multi-level directional adhesion model.

    Science.gov (United States)

    Wu, Xuan; Wang, Xiaojie; Mei, Tao; Sun, Shaoming

    2015-07-08

    This paper proposes a multi-level hierarchical model for the Tokay gecko (Gekko gecko) adhesive system and analyses the digital behaviour of G. gecko at the macro/meso-level scale. The model describes the structures of G. gecko's adhesive system from the nano-level spatulae to the sub-millimetre-level lamella. The G. gecko seta is modelled as an inextensible fibril based on Euler's elastica theorem. Considering the side contact of the spatular pads of the seta on a flat, rigid substrate, the directional adhesion behaviour of the seta has been investigated. The lamella-induced attachment and detachment have been modelled to simulate the active digital hyperextension (DH) and digital gripping (DG) phenomena. The results suggest that a tiny angular displacement (within 0.25°) of the lamellar proximal end is enough to induce a fast transition from attachment to detachment or vice versa. The active DH helps release torque to induce setal non-sliding detachment, while the DG helps apply torque to make the setal adhesion stable. The lamella plays a key role in saving energy during detachment to adapt to its habitat and provides another adhesive function, one that differs from the friction-dependent setal adhesion system controlled by the dynamics of G. gecko's body.

  1. SDMtoolbox 2.0: the next generation Python-based GIS toolkit for landscape genetic, biogeographic and species distribution model analyses

    Directory of Open Access Journals (Sweden)

    Jason L. Brown

    2017-12-01

    Full Text Available SDMtoolbox 2.0 is a software package for spatial studies of ecology, evolution, and genetics. The release of SDMtoolbox 2.0 allows researchers to use the most current ArcGIS and MaxEnt software, and reduces the time spent developing common solutions. The central aim of this software is to automate complicated and repetitive spatial analyses in an intuitive graphical user interface. One core tenet is facilitating careful parameterization of species distribution models (SDMs) to maximize each model's discriminatory ability and minimize overfitting. This includes careful processing of occurrence data, environmental data, and model parameterization. This program directly interfaces with MaxEnt, one of the most powerful and widely used species distribution modeling programs, although SDMtoolbox 2.0 is not limited to species distribution modeling or restricted to modeling in MaxEnt. Many of the SDM pre- and post-processing tools have ‘universal’ analogs for use with any modeling software. The current version contains a total of 79 scripts that harness the power of ArcGIS for macroecology, landscape genetics, and evolutionary studies. For example, these tools allow for biodiversity quantification (such as species richness or corrected weighted endemism), generation of least-cost paths and corridors among shared haplotypes, assessment of the significance of spatial randomizations, and enforcement of dispersal limitations of SDMs projected into future climates, to name only a few functions contained in SDMtoolbox 2.0. Lastly, dozens of generalized tools exist for batch processing and conversion of GIS data types or formats, which are broadly useful to any ArcMap user.

  2. Performance Analyses in an Assistive Technology Service Delivery Process

    DEFF Research Database (Denmark)

    Petersen, Anne Karin

    Performance Analyses in an Assistive Technology Service Delivery Process. Keywords: process model, occupational performance, assistive technologies. The poster is about teaching students, using models and theory in education and practice. It is related to the occupational therapy process and professional...... of top-down, client-centred and activity-based interventions, ERGO/Munksgaard; Fisher, A. & Griswold, L. A., 2014. Performance Skills. In: B. Schell, ed. 2014. Occupational Therapy. Willard & Spackman's Occupational Therapy, 12th ed., p. 249-264; Cook, A.M., Polgar, J.M. (2015) Assistive Technologies......

  3. Using US EPA’s Chemical Safety for Sustainability’s Comptox Chemistry Dashboard and Tools for Bioactivity, Chemical and Toxicokinetic Modeling Analyses (Course at 2017 ISES Annual Meeting)

    Science.gov (United States)

    Title: Using US EPA’s Chemical Safety for Sustainability’s Comptox Chemistry Dashboard and Tools for Bioactivity, Chemical and Toxicokinetic Modeling Analyses • Class format: half-day (4 hours) • Course leader(s): Barbara A. Wetmore and Antony J. Williams,...

  4. ORNL Pre-test Analyses of A Large-scale Experiment in STYLE

    International Nuclear Information System (INIS)

    Williams, Paul T.; Yin, Shengjun; Klasky, Hilda B.; Bass, Bennett Richard

    2011-01-01

    Oak Ridge National Laboratory (ORNL) is conducting a series of numerical analyses to simulate a large-scale mock-up experiment planned within the European Network for Structural Integrity for Lifetime Management non-RPV Components (STYLE). STYLE is a European cooperative effort to assess the structural integrity of (non-reactor pressure vessel) reactor coolant pressure boundary components relevant to ageing and life-time management and to integrate the knowledge created in the project into mainstream nuclear industry assessment codes. ORNL contributes work-in-kind support to STYLE Work Package 2 (Numerical Analysis/Advanced Tools) and Work Package 3 (Engineering Assessment Methods/LBB Analyses). This paper summarizes the current status of ORNL analyses of the STYLE Mock-Up3 large-scale experiment to simulate and evaluate crack growth in a cladded ferritic pipe. The analyses are being performed in two parts. In the first part, advanced fracture mechanics models are being developed and run to evaluate several experiment designs, taking into account the capabilities of the test facility while satisfying the test objectives. These advanced fracture mechanics models will then be utilized to simulate crack growth in the large-scale mock-up test. For the second part, the recently developed ORNL SIAM-PFM open-source, cross-platform, probabilistic computational tool will be used to generate an alternative assessment for comparison with the advanced fracture mechanics model results. The SIAM-PFM probabilistic analysis of the Mock-Up3 experiment will utilize fracture modules that are installed into a general probabilistic framework. The probabilistic results of the Mock-Up3 experiment obtained from SIAM-PFM will be compared to those results generated using the deterministic 3D nonlinear finite-element modeling approach. The objective of the probabilistic analysis is to provide uncertainty bounds that will assist in assessing the more detailed 3D finite

  5. Clustering structures of large proteins using multifractal analyses based on a 6-letter model and hydrophobicity scale of amino acids

    International Nuclear Information System (INIS)

    Yang Jianyi; Yu Zuguo; Anh, Vo

    2009-01-01

    The Schneider and Wrede hydrophobicity scale of amino acids and the 6-letter model of protein are proposed to study the relationship between the primary structure and the secondary structural classification of proteins. Two kinds of multifractal analyses are performed on the two measures obtained from these two kinds of data on large proteins. Nine parameters from the multifractal analyses are considered to construct the parameter spaces. Each protein is represented by one point in these spaces. A procedure is proposed to separate large proteins in the α, β, α + β and α/β structural classes in these parameter spaces. Fisher's linear discriminant algorithm is used to assess our clustering accuracy on the 49 selected large proteins. Numerical results indicate that the discriminant accuracies are satisfactory. In particular, they reach 100.00% and 84.21% in separating the α proteins from the {β, α + β, α/β} proteins in a parameter space; 92.86% and 86.96% in separating the β proteins from the {α + β, α/β} proteins in another parameter space; 91.67% and 83.33% in separating the α/β proteins from the α + β proteins in the last parameter space.
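    The clustering step the abstract describes — Fisher's linear discriminant separating protein classes represented as points in a nine-parameter space — can be sketched as below. The 9-dimensional feature vectors here are random placeholders, not real multifractal parameters, and the class separation is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
# Two synthetic classes of 49 "proteins" each in a 9-parameter space
A = rng.normal(0.0, 1.0, size=(49, 9))
B = rng.normal(1.5, 1.0, size=(49, 9))

def fisher_direction(A, B):
    """Fisher discriminant w = Sw^{-1} (mu_A - mu_B) with within-class scatter Sw."""
    Sw = (np.cov(A, rowvar=False) * (len(A) - 1)
          + np.cov(B, rowvar=False) * (len(B) - 1))
    return np.linalg.solve(Sw, A.mean(axis=0) - B.mean(axis=0))

w = fisher_direction(A, B)
# Classify by projecting onto w and thresholding at the midpoint of class means
thresh = 0.5 * (A.mean(axis=0) + B.mean(axis=0)) @ w
acc = (np.sum(A @ w > thresh) + np.sum(B @ w <= thresh)) / (len(A) + len(B))
```

    The resulting training accuracy plays the role of the discriminant accuracies the abstract reports per parameter space.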

  6. Characterization of the hepatic cytochrome P450 enzymes involved in the metabolism of 25I-NBOMe and 25I-NBOH

    DEFF Research Database (Denmark)

    Nielsen, Line Marie; Holm, Niels Bjerre; Leth-Petersen, Sebastian

    2017-01-01

    )ethylamino]methyl]phenol (25I-NBOH) and to characterize the metabolites. The following approaches were used to identify the main enzymes involved in primary metabolism: incubation with a panel of CYP and monoamine oxidase (MAO) enzymes and incubation in pooled human liver microsomes (HLM) with and without specific CYP...

  7. Comparative Analysis of Upper Ocean Heat Content Variability from Ensemble Operational Ocean Analyses

    Science.gov (United States)

    Xue, Yan; Balmaseda, Magdalena A.; Boyer, Tim; Ferry, Nicolas; Good, Simon; Ishikawa, Ichiro; Rienecker, Michele; Rosati, Tony; Yin, Yonghong; Kumar, Arun

    2012-01-01

    Upper ocean heat content (HC) is one of the key indicators of climate variability on many time-scales extending from seasonal to interannual to long-term climate trends. For example, HC in the tropical Pacific provides information on thermocline anomalies that is critical for the long-lead forecast skill of ENSO. Since HC variability is also associated with SST variability, a better understanding and monitoring of HC variability can help us understand and forecast SST variability associated with ENSO and other modes such as the Indian Ocean Dipole (IOD), Pacific Decadal Oscillation (PDO), Tropical Atlantic Variability (TAV) and Atlantic Multidecadal Oscillation (AMO). An accurate ocean initialization of HC anomalies in coupled climate models could also contribute to skill in decadal climate prediction. Errors, and/or uncertainties, in the estimation of HC variability can be affected by many factors including uncertainties in surface forcings, ocean model biases, and deficiencies in data assimilation schemes. Changes in observing systems can also leave an imprint on the estimated variability. The availability of multiple operational ocean analyses (ORA) that are routinely produced by operational and research centers around the world provides an opportunity to assess uncertainties in HC analyses, to help identify gaps in observing systems as they impact the quality of ORAs and therefore climate model forecasts. A comparison of ORAs also gives an opportunity to identify deficiencies in data assimilation schemes, and can be used as a basis for development of real-time multi-model ensemble HC monitoring products. The OceanObs09 Conference called for an intercomparison of ORAs and use of ORAs for global ocean monitoring. As a follow up, we intercompared HC variations from ten ORAs -- two objective analyses based on in-situ data only and eight model analyses based on ocean data assimilation systems. The mean, annual cycle, interannual variability and long-term trend of HC have

  8. Architecture Level Safety Analyses for Safety-Critical Systems

    Directory of Open Access Journals (Sweden)

    K. S. Kushal

    2017-01-01

    Full Text Available The dependency of complex embedded Safety-Critical Systems across the avionics and aerospace domains on their underlying software and hardware components has gradually increased over time. Such systems are developed on a complex integrated architecture that is modular in nature. Engineering practices, assured by system safety standards, are necessary to manage failure, faulty, and unsafe operational conditions. System safety analyses involve analysing the system's complex software architecture, a major factor leading to fatal consequences in the behaviour of Safety-Critical Systems, and provide high reliability and dependability factors during development. In this paper, we propose an architecture fault modeling and safety analyses approach that aids in identifying and eliminating design flaws. The formal foundations of the SAE Architecture Analysis & Design Language (AADL) augmented with the Error Model Annex (EMV) are discussed. The fault propagation, failure behaviour, and the composite behaviour of the design flaws/failures are considered for architecture safety analysis. The proposed approach is validated by implementing the Speed Control Unit of a Power-Boat Autopilot (PBA) system. The Error Model Annex (EMV) is guided by patterns covering probable failure scenarios and the propagation of fault conditions in the Speed Control Unit of the Power-Boat Autopilot (PBA). This helps validate the system architecture by detecting the error event in the model and its impact on the operational environment. It also provides insight into the certification impact that these exceptional conditions pose at various criticality and design assurance levels, and its implications for verifying and validating the designs.

  9. Factors for analysing and improving performance of R&D in Malaysian universities

    NARCIS (Netherlands)

    Ramli, Mohammad Shakir; de Boer, S.J.; de Bruijn, E.J.

    2004-01-01

    This paper presents a model for analysing and improving performance of R&D in Malaysian universities. There are various general models for R&D analysis, but none is specific for improving the performance of R&D in Malaysian universities. This research attempts to fill a gap in the body of knowledge

  10. Analysing organic transistors based on interface approximation

    International Nuclear Information System (INIS)

    Akiyama, Yuto; Mori, Takehiko

    2014-01-01

    Temperature-dependent characteristics of organic transistors are analysed thoroughly using interface approximation. In contrast to amorphous silicon transistors, it is characteristic of organic transistors that the accumulation layer is concentrated on the first monolayer, and it is appropriate to consider interface charge rather than band bending. On the basis of this model, observed characteristics of hexamethylenetetrathiafulvalene (HMTTF) and dibenzotetrathiafulvalene (DBTTF) transistors with various surface treatments are analysed, and the trap distribution is extracted. In turn, starting from a simple exponential distribution, we can reproduce the temperature-dependent transistor characteristics as well as the gate voltage dependence of the activation energy, so we can investigate various aspects of organic transistors self-consistently under the interface approximation. Small deviation from such an ideal transistor operation is discussed assuming the presence of an energetically discrete trap level, which leads to a hump in the transfer characteristics. The contact resistance is estimated by measuring the transfer characteristics up to the linear region

  11. Comparison of Genome-Wide Association Methods in Analyses of Admixed Populations with Complex Familial Relationships

    DEFF Research Database (Denmark)

    Kadri, Naveen; Guldbrandtsen, Bernt; Sørensen, Peter

    2014-01-01

    Population structure is known to cause false-positive detection in association studies. We compared the power, precision, and type-I error rates of various association models in analyses of a simulated dataset with structure at the population (admixture from two populations; P) and family (K......) levels. We also compared type-I error rates among models in analyses of publicly available human and dog datasets. The models corrected for none, one, or both structure levels. Correction for K was performed with linear mixed models incorporating familial relationships estimated from pedigrees or genetic...... corrected for P. In contrast, correction for P alone in linear models was insufficient. The power and precision of linear mixed models with and without correction for P were similar. Furthermore, power, precision, and type-I error rate were comparable in linear mixed models incorporating pedigree...

  12. Aggregated Wind Park Models for Analysing Power System Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Poeller, Markus; Achilles, Sebastian [DIgSILENT GmbH, Gomaringen (Germany)

    2003-11-01

    The increasing amount of wind power generation in European power systems requires stability analyses that consider the interaction between wind farms and transmission systems. Dynamics introduced by dispersed wind generators at the distribution level can usually be neglected. However, large on- and offshore wind farms have a considerable influence on power system dynamics and must definitely be considered when analyzing those dynamics. Compared to conventional power stations, wind power plants consist of a large number of generators of small size, so representing every wind generator individually increases the calculation time of dynamic simulations considerably. Model aggregation techniques should therefore be applied to reduce calculation times. This paper presents aggregated models for wind parks consisting of fixed- or variable-speed wind generators.
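    The basic aggregation idea can be sketched as below: N similar machines are replaced by one equivalent machine whose rating is the sum of the individual ratings and whose per-unit inertia constant is the capacity-weighted mean. This is a deliberately minimal illustration with invented parameter values; real aggregated wind park models also handle converter controls, collector-network impedances, and wind-speed diversity.

```python
def aggregate_park(units):
    """units: list of (rated_MW, inertia_H_s) tuples -> (total_MW, equivalent_H_s)."""
    total_mw = sum(mw for mw, _ in units)
    # Capacity-weighted mean of the per-unit inertia constants
    h_eq = sum(mw * h for mw, h in units) / total_mw
    return total_mw, h_eq

park = [(2.0, 4.0)] * 20 + [(3.0, 5.0)] * 10   # 20 x 2 MW plus 10 x 3 MW turbines
total, h = aggregate_park(park)
# total == 70.0 MW; h == (20*2*4 + 10*3*5) / 70 ~= 4.43 s
```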

  13. Intercomparison and analyses of the climatology of the West African monsoon in the West African monsoon modeling and evaluation project (WAMME) first model intercomparison experiment

    Energy Technology Data Exchange (ETDEWEB)

    Xue, Yongkang; Sales, Fernando De [University of California, Los Angeles, CA (United States); Lau, W.K.M.; Schubert, Siegfried D.; Wu, Man-Li C. [NASA, Goddard Space Flight Center, Greenbelt, MD (United States); Boone, Aaron [Centre National de Recherches Meteorologiques, Meteo-France Toulouse, Toulouse (France); Feng, Jinming [University of California, Los Angeles, CA (United States); Chinese Academy of Sciences, Institute of Atmospheric Physics, Beijing (China); Dirmeyer, Paul; Guo, Zhichang [Center for Ocean-Land-Atmosphere Interactions, Calverton, MD (United States); Kim, Kyu-Myong [University of Maryland Baltimore County, Baltimore, MD (United States); Kitoh, Akio [Meteorological Research Institute, Tsukuba (Japan); Kumar, Vadlamani [National Center for Environmental Prediction, Camp Springs, MD (United States); Wyle Information Systems, Gaithersburg, MD (United States); Poccard-Leclercq, Isabelle [Universite de Bourgogne, Centre de Recherches de Climatologie UMR5210 CNRS, Dijon (France); Mahowald, Natalie [Cornell University, Ithaca, NY (United States); Moufouma-Okia, Wilfran; Rowell, David P. [Met Office Hadley Centre, Exeter (United Kingdom); Pegion, Phillip [NASA, Goddard Space Flight Center, Greenbelt, MD (United States); National Center for Environmental Prediction, Camp Springs, MD (United States); Schemm, Jae; Thiaw, Wassila M. [National Center for Environmental Prediction, Camp Springs, MD (United States); Sealy, Andrea [The Caribbean Institute for Meteorology and Hydrology, St. James (Barbados); Vintzileos, Augustin [National Center for Environmental Prediction, Camp Springs, MD (United States); Science Applications International Corporation, Camp Springs, MD (United States); Williams, Steven F. [National Center for Atmospheric Research, Boulder, CO (United States)

    2010-07-15

    This paper briefly presents the West African monsoon (WAM) modeling and evaluation project (WAMME) and evaluates WAMME general circulation models' (GCM) performances in simulating variability of WAM precipitation, surface temperature, and major circulation features at seasonal and intraseasonal scales in the first WAMME experiment. The analyses indicate that models with specified sea surface temperature generally have reasonable simulations of the pattern of spatial distribution of WAM seasonal mean precipitation and surface temperature as well as the averaged zonal wind in latitude-height cross-section and low level circulation. But there are large differences among models in simulating spatial correlation, intensity, and variance of precipitation compared with observations. Furthermore, the majority of models fail to produce proper intensities of the African Easterly Jet (AEJ) and the tropical easterly jet. AMMA Land Surface Model Intercomparison Project (ALMIP) data are used to analyze the association between simulated surface processes and the WAM and to investigate the WAM mechanism. It has been identified that the spatial distributions of surface sensible heat flux, surface temperature, and moisture convergence are closely associated with the simulated spatial distribution of precipitation; while surface latent heat flux is closely associated with the AEJ and contributes to divergence in AEJ simulation. Common empirical orthogonal functions (CEOF) analysis is applied to characterize the WAM precipitation evolution and has identified a major WAM precipitation mode and two temperature modes (Sahara mode and Sahel mode). Results indicate that the WAMME models produce reasonable temporal evolutions of major CEOF modes but have deficiencies/uncertainties in producing variances explained by major modes. Furthermore, the CEOF analysis shows that WAM precipitation evolution is closely related to the enhanced Sahara mode and the weakened Sahel mode, supporting

  14. How Genes Modulate Patterns of Aging-Related Changes on the Way to 100: Biodemographic Models and Methods in Genetic Analyses of Longitudinal Data

    Science.gov (United States)

    Yashin, Anatoliy I.; Arbeev, Konstantin G.; Wu, Deqing; Arbeeva, Liubov; Kulminski, Alexander; Kulminskaya, Irina; Akushevich, Igor; Ukraintseva, Svetlana V.

    2016-01-01

    Background and Objective: To clarify mechanisms of genetic regulation of human aging and longevity traits, a number of genome-wide association studies (GWAS) of these traits have been performed. However, the results of these analyses did not meet the researchers' expectations. Most detected genetic associations have not reached a genome-wide level of statistical significance and suffered from a lack of replication in studies of independent populations. The reasons for slow progress in this research area include the low efficiency of the statistical methods used in the data analyses, genetic heterogeneity of aging- and longevity-related traits, the possibility of pleiotropic (e.g., age-dependent) effects of genetic variants on such traits, underestimation of the effects of (i) mortality selection in genetically heterogeneous cohorts and (ii) external factors and differences in the genetic backgrounds of individuals in the populations under study, and the weakness of a conceptual biological framework that does not fully account for the above-mentioned factors. One more limitation of the conducted studies is that they did not fully realize the potential of longitudinal data, which allow for evaluating how genetic influences on life span are mediated by physiological variables and other biomarkers during the life course. The objective of this paper is to address these issues. Data and Methods: We performed GWAS of human life span using different subsets of data from the original Framingham Heart Study cohort corresponding to different quality control (QC) procedures and used one subset of selected genetic variants for further analyses. We used a simulation study to show that this approach to combining data improves the quality of GWAS. We used FHS longitudinal data to compare average age trajectories of physiological variables in carriers and non-carriers of selected genetic variants.
We used stochastic process model of human mortality and aging to investigate genetic influence on hidden biomarkers of aging

  15. Far-Field Acoustic Power Level and Performance Analyses of F31/A31 Open Rotor Model at Simulated Scaled Takeoff, Nominal Takeoff, and Approach Conditions: Technical Report I

    Science.gov (United States)

    Sree, Dave

    2015-01-01

    Far-field acoustic power level and performance analyses of open rotor model F31/A31 have been performed to determine its noise characteristics at simulated scaled takeoff, nominal takeoff, and approach flight conditions. The nonproprietary parts of the data obtained from experiments in 9- by 15-Foot Low-Speed Wind Tunnel (9×15 LSWT) tests were provided by NASA Glenn Research Center to perform the analyses. The tone and broadband noise components have been separated from raw test data by using a new data analysis tool. Results in terms of sound pressure levels, acoustic power levels, and their variations with rotor speed, angle of attack, thrust, and input shaft power have been presented and discussed. The effect of an upstream pylon on the noise levels of the model has been addressed. Empirical equations relating model's acoustic power level, thrust, and input shaft power have been developed. The far-field acoustic efficiency of the model is also determined for various simulated flight conditions. It is intended that the results presented in this work will serve as a database for comparison and improvement of other open rotor blade designs and also for validating open rotor noise prediction codes.

  16. Identification of a new reactive metabolite of pyrrolizidine alkaloid retrorsine: (3H-pyrrolizin-7-yl)methanol.

    Science.gov (United States)

    Fashe, Muluneh M; Juvonen, Risto O; Petsalo, Aleksanteri; Rahnasto-Rilla, Minna; Auriola, Seppo; Soininen, Pasi; Vepsäläinen, Jouko; Pasanen, Markku

    2014-11-17

    Pyrrolizidine alkaloids (PAs) such as retrorsine are common food contaminants that are known to be bioactivated by cytochrome P450 enzymes to putative hepatotoxic, genotoxic, and carcinogenic metabolites known as dehydropyrrolizidine alkaloids (DHPs). We compared how both electrochemical (EC) and human liver microsomal (HLM) oxidation of retrorsine could produce short-lived intermediate metabolites; we also characterized a toxicologically important metabolite, (3H-pyrrolizin-7-yl)methanol. The EC cell was coupled online or offline to a liquid chromatograph/mass spectrometer (LC/MS), whereas the HLM oxidation was performed in 100 mM potassium phosphate (pH 7.4) in the presence of NADPH at 37 °C. The EC cell oxidation of retrorsine produced 12 metabolites, including dehydroretrorsine (m/z 350, [M + H(+)]), which was degraded to a new reactive metabolite at m/z 136 ([M + H(+)]). The molecular structure of this small metabolite was determined using high-resolution mass spectrometry and NMR spectroscopy followed by chemical synthesis. In addition, we also identified another minor but reactive metabolite at m/z 136, an isomer of (3H-pyrrolizin-7-yl)methanol. Both (3H-pyrrolizin-7-yl)methanol and its minor isomer were also observed after HLM oxidation of retrorsine and other hepatotoxic PAs such as lasiocarpine and senkirkin. In the presence of reduced glutathione (GSH), each isomer formed identical GSH conjugates at m/z 441 and m/z 730 in the negative ESI-MS. Because (3H-pyrrolizin-7-yl)methanol and its minor isomer subsequently reacted with GSH, it is concluded that (3H-pyrrolizin-7-yl)methanol may be a common toxic metabolite arising from PAs.

  17. Comparison of genome-wide association methods in analyses of admixed populations with complex familial relationships.

    Directory of Open Access Journals (Sweden)

    Naveen K Kadri

    Full Text Available Population structure is known to cause false-positive detection in association studies. We compared the power, precision, and type-I error rates of various association models in analyses of a simulated dataset with structure at the population (admixture from two populations; P and family (K levels. We also compared type-I error rates among models in analyses of publicly available human and dog datasets. The models corrected for none, one, or both structure levels. Correction for K was performed with linear mixed models incorporating familial relationships estimated from pedigrees or genetic markers. Linear models that ignored K were also tested. Correction for P was performed using principal component or structured association analysis. In analyses of simulated and real data, linear mixed models that corrected for K were able to control for type-I error, regardless of whether they also corrected for P. In contrast, correction for P alone in linear models was insufficient. The power and precision of linear mixed models with and without correction for P were similar. Furthermore, power, precision, and type-I error rate were comparable in linear mixed models incorporating pedigree and genomic relationships. In summary, in association studies using samples with both P and K, ancestries estimated using principal components or structured assignment were not sufficient to correct type-I errors. In such cases type-I errors may be controlled by use of linear mixed models with relationships derived from either pedigree or from genetic markers.
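    The family of kinship-corrected tests the abstract compares can be sketched as a generalized-least-squares SNP test under a variance structure built from a marker-derived relationship matrix. This is a hedged illustration only: the variance ratio `h2` is fixed for brevity (real linear mixed model software estimates the variance components, typically by REML), the VanRaden-style scaling is simplified, and all genotype and phenotype data are simulated.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 500
geno = rng.binomial(2, 0.3, size=(n, m)).astype(float)   # simulated SNP dosages

# Genomic relationship matrix from centered genotypes (simplified scaling)
G = geno - geno.mean(axis=0)
K = G @ G.T / m

# Simulated phenotype with a polygenic (K-structured) component plus noise
y = rng.multivariate_normal(np.zeros(n), 0.5 * K + 0.5 * np.eye(n))

def gls_snp_test(y, snp, K, h2=0.5):
    """GLS effect estimate and t-like statistic under V = h2*K + (1-h2)*I."""
    V = h2 * K + (1 - h2) * np.eye(len(y))
    Vi = np.linalg.inv(V)
    X = np.column_stack([np.ones(len(y)), snp])          # intercept + SNP
    XtVi = X.T @ Vi
    beta = np.linalg.solve(XtVi @ X, XtVi @ y)
    resid = y - X @ beta
    sigma2 = (resid @ Vi @ resid) / (len(y) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(XtVi @ X)[1, 1])
    return beta[1], beta[1] / se

beta, t = gls_snp_test(y, geno[:, 0], K)
```

    Correcting for population admixture (P) would add principal components of `G` as extra fixed-effect columns in `X`; the abstract's finding is that this addition alone, without the K-structured variance, is insufficient.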

  18. Team characteristics, peer competition threats and individual performance within a working team: An analysis of realtor agents

    Directory of Open Access Journals (Sweden)

    Chun Chang Lee

    2014-03-01

    Full Text Available This paper uses survey data from a questionnaire administered to brokers at Kaohsiung realtors to explore the effect of the threat of peer competition on an individual’s performance. In the empirical model, the branch-level “average performance of other agents” is used as the proxy variable for peer competition, and hierarchical linear modeling (HLM) is applied for estimation. The empirical results suggest that the average performance of other agents has a significant negative effect on an individual’s performance. In branches that have more agents or a team compensation scheme, the effect of other agents’ average performance on an individual’s performance is significantly stronger than in branches with fewer agents or without a team compensation scheme. These findings are consistent with theoretical expectations.
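    The estimation step described in the abstract — agents (level 1) nested in branches (level 2), with branch-mates' leave-one-out average performance as the peer-competition predictor — can be sketched as a random-intercept HLM. This is a minimal illustration: the data, variable names, and effect sizes below are all invented, and no sign on the peer coefficient is implied.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_branches, n_agents = 30, 8
rows = []
for b in range(n_branches):
    branch_effect = rng.normal(0.0, 1.0)                 # random intercept per branch
    perf = rng.normal(10 + branch_effect, 2.0, n_agents)
    for i in range(n_agents):
        # Leave-one-out branch mean: the "average performance of other agents"
        peer_avg = (perf.sum() - perf[i]) / (n_agents - 1)
        rows.append({"branch": b, "performance": perf[i], "peer_avg": peer_avg})
df = pd.DataFrame(rows)

# Two-level model: performance ~ peer_avg, with intercepts varying by branch
model = smf.mixedlm("performance ~ peer_avg", df, groups=df["branch"])
result = model.fit()
print(result.params["peer_avg"])   # fixed-effect estimate for the peer term
```

    Cross-level moderation (branch size, team compensation scheme) would enter as branch-level covariates interacted with `peer_avg`.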

  19. Analysing Self Interference Cancellation in Full Duplex Radios

    DEFF Research Database (Denmark)

    Mahmood, Nurul Huda; Shafique Ansari, Imran; Berardinelli, Gilberto

    2016-01-01

    Full duplex communication promises a theoretical 100% throughput gain by doubling the number of simultaneous transmissions. Such compelling gains are conditioned on perfect cancellation of the self interference power resulting from simultaneous transmission and reception. Generally, self...... interference power is modelled as a noise-like constant level interference floor. However, experimental validations have shown that the self interference power is in practice a random variable depending on a number of factors such as the surrounding wireless environment and the degree of interference...... cancellation. In this study, we derive an analytical model for the residual self interference power, and demonstrate various applications of the derived model in analysing the performance of a Full Duplex radio. In general, full duplex communication is found to provide only modest throughput gains over half...

  20. Item Response Theory Modeling and Categorical Regression Analyses of the Five-Factor Model Rating Form: A Study on Italian Community-Dwelling Adolescent Participants and Adult Participants.

    Science.gov (United States)

    Fossati, Andrea; Widiger, Thomas A; Borroni, Serena; Maffei, Cesare; Somma, Antonella

    2017-06-01

    To extend the evidence on the reliability and construct validity of the Five-Factor Model Rating Form (FFMRF) in its self-report version, two independent samples of Italian participants, which were composed of 510 adolescent high school students and 457 community-dwelling adults, respectively, were administered the FFMRF in its Italian translation. Adolescent participants were also administered the Italian translation of the Borderline Personality Features Scale for Children-11 (BPFSC-11), whereas adult participants were administered the Italian translation of the Triarchic Psychopathy Measure (TriPM). Cronbach α values were consistent with previous findings; in both samples, average interitem r values indicated acceptable internal consistency for all FFMRF scales. A multidimensional graded item response theory model indicated that the majority of FFMRF items had adequate discrimination parameters; information indices supported the reliability of the FFMRF scales. Both categorical (i.e., item-level) and scale-level regression analyses suggested that the FFMRF scores may predict a nonnegligible amount of variance in the BPFSC-11 total score in adolescent participants, and in the TriPM scale scores in adult participants.

  1. Marginal Utility of Conditional Sensitivity Analyses for Dynamic Models

    Science.gov (United States)

    Background/Question/Methods: Dynamic ecological processes may be influenced by many factors. Simulation models that mimic these processes often have complex implementations with many parameters. Sensitivity analyses are subsequently used to identify critical parameters whose uncertai...

  2. The effects of the ARC organizational intervention on caseworker turnover, climate, and culture in children's service systems.

    Science.gov (United States)

    Glisson, Charles; Dukes, Denzel; Green, Philip

    2006-08-01

    This study examines the effects of the Availability, Responsiveness, and Continuity (ARC) organizational intervention strategy on caseworker turnover, climate, and culture in a child welfare and juvenile justice system. Using a pre-post, randomized blocks, true experimental design, 10 urban and 16 rural case management teams were randomly assigned to either the ARC organizational intervention condition or to a control condition. The culture and climate of each case management team were assessed at baseline and again after the one-year organizational intervention was completed. In addition, caseworker turnover was assessed by identifying caseworkers on the sampled teams who quit their jobs during the year. Hierarchical Linear Models (HLM) analyses indicate that the ARC organizational intervention reduced the probability of caseworker turnover by two-thirds and improved organizational climate by reducing role conflict, role overload, emotional exhaustion, and depersonalization in both urban and rural case management teams. Organizational intervention strategies can be used to reduce staff turnover and improve organizational climates in urban and rural child welfare and juvenile justice systems. This is important because child welfare and juvenile justice systems in the U.S.A. are plagued by high turnover rates, and there is evidence that high staff turnover and poor organizational climates negatively affect service quality and outcomes in these systems.

  3. Effectiveness of Motivational Incentives for Adolescent Marijuana Users in a School-Based Intervention.

    Science.gov (United States)

    Stewart, David G; Felleman, Benjamin I; Arger, Christopher A

    2015-11-01

    This study examined whether adolescents receiving Motivational Interviewing (MI) intervention have different outcomes compared to those receiving Motivational Incentives (Motivational Interviewing combined with Contingency Management; MI+CM). A total of 136 adolescents (from a parent study of 220 adolescents) with problematic substance use were recruited from 8 high schools in Washington State, where they completed either 8 weeks of MI or MI+CM. Frequency of marijuana use was assessed at baseline, at the end of treatment, and at 16-week follow-up. A balanced and matched sample was created using propensity scores, then analyzed using Hierarchical Linear Modeling (HLM). Multilevel regression analyses revealed that adolescents who received MI+CM exhibited a greater reduction in use across time, whereas hypothesized effects of motivation and school attendance were not found. Use of coping strategies at the end of treatment had a significant indirect effect on the relationship between the intervention condition and marijuana use at the end of treatment (F(3, 121) = 10.20, R² = .20, p < .01). These results suggest that the inclusion of contingencies in adolescent marijuana treatment decreases the end-of-treatment frequency of marijuana use and related consequences while increasing the use of coping strategies and the pursuit of additional treatment. Copyright © 2015 Elsevier Inc. All rights reserved.
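The balancing step described above can be sketched in miniature: estimate propensity scores with a logistic regression, then match each treated unit to its nearest control on the score. Everything below is synthetic and hypothetical; the study's actual covariates and matching procedure are not reproduced:

```python
import numpy as np

# hedged sketch: propensity scores via hand-rolled logistic regression,
# then 1:1 nearest-neighbour matching on the score (synthetic data)
rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=(n, 2))                         # baseline covariates
treat = (x @ np.array([0.8, -0.5]) + rng.normal(size=n) > 0).astype(float)

X = np.column_stack([np.ones(n), x])
w = np.zeros(3)
for _ in range(15):                                 # Newton-Raphson for the MLE
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad = X.T @ (treat - p)
    hess = (X * (p * (1 - p))[:, None]).T @ X
    w += np.linalg.solve(hess, grad)
ps = 1.0 / (1.0 + np.exp(-X @ w))                   # estimated propensity scores

treated = np.where(treat == 1)[0]
control = np.where(treat == 0)[0]
matches = [control[np.argmin(np.abs(ps[control] - ps[t]))] for t in treated]
```

The matched pairs would then feed into the multilevel (HLM) outcome model rather than a naive group comparison.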

  4. Methodology for Analysing Controllability and Observability of Bladed Disc Coupled Vibrations

    DEFF Research Database (Denmark)

    Christensen, Rene Hardam; Santos, Ilmar

    2004-01-01

    to place sensors and actuators so that all vibration levels can be monitored and controlled. Due to the special dynamic characteristics of rotating coupled bladed discs, where disc lateral motion is coupled to blade flexible motion, such analyses become quite complicated. The dynamics is described by a time-variant mathematical model, which presents parametric vibration modes and centrifugal stiffening effects resulting in increasing blade natural frequencies. In this framework the objective and contribution of this paper is to present a methodology for analysing the modal controllability...

  5. Analysing Protocol Stacks for Services

    DEFF Research Database (Denmark)

    Gao, Han; Nielson, Flemming; Nielson, Hanne Riis

    2011-01-01

    We show an approach, CaPiTo, to model service-oriented applications using process algebras such that, on the one hand, we can achieve a certain level of abstraction without being overwhelmed by the underlying implementation details and, on the other hand, we respect the concrete industrial standards used for implementing the service-oriented applications. By doing so, we will be able not only to reason about applications at different levels of abstraction, but also to build a bridge between the views of researchers on formal methods and developers in industry. We apply our approach to the financial case study taken from Chapters 0-3. Finally, we develop a static analysis to analyse the security properties as they emerge at the level of concrete industrial protocols.

  6. Complex accident scenarios modelled and analysed by Stochastic Petri Nets

    International Nuclear Information System (INIS)

    Nývlt, Ondřej; Haugen, Stein; Ferkl, Lukáš

    2015-01-01

    This paper is focused on the usage of Petri nets for effective modelling and simulation of complicated accident scenarios, where the order of events can vary and some events may occur anywhere in an event chain. These cases are hardly manageable by traditional methods such as event trees – e.g. one pivotal event must often be inserted several times into one branch of the tree. Our approach is based on Stochastic Petri Nets with Predicates and Assertions and on an idea which comes from the area of Programmable Logic Controllers: an accident scenario is described as a net of interconnected blocks, which represent parts of the scenario. The scenario is first divided into parts, which are then modelled by Petri nets. Every block can easily be interconnected with other blocks by input/output variables to create complex ones. In the presented approach, every event or part of a scenario is modelled only once, independently of the number of its occurrences in the scenario. The final model is much more transparent than the corresponding event tree. The method is shown in two case studies, of which the advanced one contains dynamic behavior. - Highlights: • Event & Fault trees have problems with scenarios where an order of events can vary. • Paper presents a method for modelling and analysis of dynamic accident scenarios. • The presented method is based on Petri nets. • The proposed method solves mentioned problems of traditional approaches. • The method is shown in two case studies: simple and advanced (with dynamic behavior)
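The block idea above, scenario parts as interconnected nets with random firing delays, can be caricatured by a minimal stochastic Petri net simulator under race semantics. The places, transitions, and rates below are hypothetical, not taken from the paper's case studies:

```python
import random

# a minimal stochastic Petri net under race semantics: places hold tokens,
# enabled transitions fire after exponentially distributed delays
# (places, transitions, and rates are hypothetical, not from the paper)
places = {"ok": 1, "leak": 0, "ignition": 0}
transitions = [
    # (name, input places, output places, firing rate)
    ("leak_starts", ["ok"], ["leak"], 0.5),
    ("leak_ignites", ["leak"], ["ignition"], 0.2),
]

def enabled(t):
    return all(places[p] >= 1 for p in t[1])

random.seed(4)
t_now, horizon, fired = 0.0, 1e9, []
while t_now < horizon:
    active = [t for t in transitions if enabled(t)]
    if not active:
        break
    # sample one delay per enabled transition; the fastest one fires
    delays = [(random.expovariate(t[3]), t) for t in active]
    delay, chosen = min(delays, key=lambda d: d[0])
    t_now += delay
    for p in chosen[1]:
        places[p] -= 1
    for p in chosen[2]:
        places[p] += 1
    fired.append(chosen[0])
```

Because a transition is defined once and fires whenever its input places are marked, an event never needs to be duplicated across branches the way it would in an event tree.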

  7. SOCR Analyses: Implementation and Demonstration of a New Graphical Statistics Educational Toolkit

    Directory of Open Access Journals (Sweden)

    Annie Chu

    2009-04-01

    Full Text Available The web-based, Java-written SOCR (Statistical Online Computational Resource) tools have been utilized in many undergraduate and graduate level statistics courses for seven years now (Dinov 2006; Dinov et al. 2008b). It has been proven that these resources can successfully improve students' learning (Dinov et al. 2008b). First published online in 2005, SOCR Analyses is a somewhat new component that concentrates on data modeling for both parametric and non-parametric data analyses with graphical model diagnostics. One of the main purposes of SOCR Analyses is to facilitate statistical learning for high school and undergraduate students. As we have already implemented SOCR Distributions and Experiments, SOCR Analyses and Charts fulfill the rest of a standard statistics curriculum. Currently, there are four core components of SOCR Analyses. Linear models included in SOCR Analyses are simple linear regression, multiple linear regression, and one-way and two-way ANOVA. Tests for sample comparisons include the t-test in the parametric category. Some examples in SOCR Analyses' non-parametric category are the Wilcoxon rank sum test, Kruskal-Wallis test, Friedman's test, Kolmogorov-Smirnov test and Fligner-Killeen test. Hypothesis testing models include the contingency table, Friedman's test and Fisher's exact test. The last component of Analyses is a utility for computing sample sizes for the normal distribution. In this article, we present the design framework, computational implementation and the utilization of SOCR Analyses.
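For readers outside the SOCR toolkit, the non-parametric procedures listed above have direct counterparts in scipy.stats; a quick sketch on synthetic samples:

```python
import numpy as np
from scipy import stats

# synthetic samples; each call returns (statistic, p-value)
rng = np.random.default_rng(2)
a = rng.normal(0.0, 1.0, 30)
b = rng.normal(0.5, 1.0, 30)
c = rng.normal(0.0, 2.0, 30)

w_stat, w_p = stats.ranksums(a, b)              # Wilcoxon rank sum test
h_stat, h_p = stats.kruskal(a, b, c)            # Kruskal-Wallis test
f_stat, f_p = stats.friedmanchisquare(a, b, c)  # Friedman's test
ks_stat, ks_p = stats.ks_2samp(a, b)            # Kolmogorov-Smirnov test
fl_stat, fl_p = stats.fligner(a, b, c)          # Fligner-Killeen test
```

Each test makes the same distributional comparison the SOCR applet exposes graphically, just in scripted form.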

  8. Flash spice as a tool for analysing the impact of radiation

    International Nuclear Information System (INIS)

    Charlot, J.J.; Alali, O.

    1997-01-01

    The Spice simulator must be enhanced with behavioural capabilities for analysing the effects of hostile environments (e.g. radiation exposure) on components, circuits and systems. One solution for achieving this uses the in-house BVHDLA translator to convert from models in analog VHDL Language to models that will be recognized by Spice. This article gives an example combined with self-heating in a MOS transistor. (authors)

  9. Comparative supragenomic analyses among the pathogens Staphylococcus aureus, Streptococcus pneumoniae, and Haemophilus influenzae Using a modification of the finite supragenome model

    Directory of Open Access Journals (Sweden)

    Yu Susan

    2011-04-01

    Full Text Available Abstract Background Staphylococcus aureus is associated with a spectrum of symbiotic relationships with its human host from carriage to sepsis and is frequently associated with nosocomial and community-acquired infections, thus the differential gene content among strains is of interest. Results We sequenced three clinical strains and combined these data with 13 publicly available human isolates and one bovine strain for comparative genomic analyses. All genomes were annotated using RAST, and then their gene similarities and differences were delineated. Gene clustering yielded 3,155 orthologous gene clusters, of which 2,266 were core, 755 were distributed, and 134 were unique. Individual genomes contained between 2,524 and 2,648 genes. Gene-content comparisons among all possible S. aureus strain pairs (n = 136) revealed a mean difference of 296 genes and a maximum difference of 476 genes. We developed a revised version of our finite supragenome model to estimate the size of the S. aureus supragenome (3,221 genes, with 2,245 core genes), and compared it with those of Haemophilus influenzae and Streptococcus pneumoniae. There was excellent agreement between RAST's annotations and our CDS clustering procedure providing for high fidelity metabolomic subsystem analyses to extend our comparative genomic characterization of these strains. Conclusions Using a multi-species comparative supragenomic analysis enabled by an improved version of our finite supragenome model we provide data and an interpretation explaining the relatively larger core genome of S. aureus compared to other opportunistic nasopharyngeal pathogens. In addition, we provide independent validation for the efficiency and effectiveness of our orthologous gene clustering algorithm.
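The core/distributed/unique partition of gene clusters, and the pairwise gene-content differences over the 17 genomes (which yield the 136 strain pairs mentioned above), can be computed directly from a presence/absence matrix. A sketch on a hypothetical random matrix, not the study's real cluster assignments:

```python
import numpy as np

# hypothetical presence/absence matrix: 17 genomes (16 human + 1 bovine,
# as above) by gene clusters; real cluster assignments are not reproduced
rng = np.random.default_rng(3)
n_strains, n_clusters = 17, 3155
presence = rng.random((n_strains, n_clusters)) < 0.8
counts = presence.sum(axis=0)                 # strains carrying each cluster

core = int((counts == n_strains).sum())       # present in every genome
unique = int((counts == 1).sum())             # present in exactly one genome
distributed = int(((counts > 1) & (counts < n_strains)).sum())

# pairwise gene-content differences: size of the symmetric difference
diff = [int(np.logical_xor(presence[i], presence[j]).sum())
        for i in range(n_strains) for j in range(i + 1, n_strains)]
mean_diff, max_diff = np.mean(diff), max(diff)
```

With 17 genomes there are 17·16/2 = 136 pairs, matching the number of strain-pair comparisons reported in the abstract.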

  10. Analyses, algorithms, and computations for models of high-temperature superconductivity. Final report

    International Nuclear Information System (INIS)

    Du, Q.

    1997-01-01

    Under the sponsorship of the Department of Energy, the authors have achieved significant progress in the modeling, analysis, and computation of superconducting phenomena. The work so far has focused on mesoscale models as typified by the celebrated Ginzburg-Landau equations; these models are intermediate between the microscopic models (that can be used to understand the basic structure of superconductors and of the atomic and sub-atomic behavior of these materials) and the macroscale, or homogenized, models (that can be of use for the design of devices). The models they have considered include a time-dependent Ginzburg-Landau model, a variable-thickness thin film model, models for high values of the Ginzburg-Landau parameter, models that account for normal inclusions, fluctuations and Josephson effects, and the anisotropic Ginzburg-Landau and Lawrence-Doniach models for layered superconductors, including those with high critical temperatures. In each case, they have developed or refined the models, derived rigorous mathematical results that enhance the state of understanding of the models and their solutions, and developed, analyzed, and implemented finite element algorithms for the approximate solution of the model equations.

  11. Thermal Safety Analyses for the Production of Plutonium-238 at the High Flux Isotope Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Hurt, Christopher J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Freels, James D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hobbs, Randy W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Jain, Prashant K. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Maldonado, G. Ivan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-08-01

    There has been a considerable effort over the previous few years to demonstrate and optimize the production of plutonium-238 (238Pu) at the High Flux Isotope Reactor (HFIR). This effort has involved resources from multiple divisions and facilities at the Oak Ridge National Laboratory (ORNL) to demonstrate the fabrication, irradiation, and chemical processing of targets containing neptunium-237 (237Np) dioxide (NpO2)/aluminum (Al) cermet pellets. A critical preliminary step to irradiation at the HFIR is to demonstrate the safety of the target under irradiation via documented experiment safety analyses. The steady-state thermal safety analyses of the target are simulated in a finite element model with the COMSOL Multiphysics code that determines, among other crucial parameters, the limiting maximum temperature in the target. Safety analysis efforts for this model discussed in the present report include: (1) initial modeling of single and reduced-length pellet capsules in order to generate an experimental knowledge base that incorporates initial non-linear contact heat transfer and fission gas equations, (2) modeling efforts for prototypical designs of partially loaded and fully loaded targets using limited available knowledge of fabrication and irradiation characteristics, and (3) the most recent and comprehensive modeling effort of a fully coupled thermo-mechanical approach over the entire fully loaded target domain incorporating burn-up dependent irradiation behavior and measured target and pellet properties, hereafter referred to as the production model. These models are used to conservatively determine several important steady-state parameters including target stresses and temperatures, the limiting condition of which is the maximum temperature with respect to the melting point. The single pellet model results provide a basis for the safety of the irradiations, followed by parametric analyses in the initial prototypical designs.
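As a back-of-envelope companion to the finite element model, the limiting maximum temperature of a heat-generating cylindrical pellet has a classic closed form. The numbers below are hypothetical illustrations, not HFIR target data:

```python
import numpy as np

# steady conduction in a long solid cylinder with uniform volumetric
# heating (all numbers hypothetical, not HFIR target data)
q = 5.0e7        # volumetric heat generation, W/m^3
k = 20.0         # thermal conductivity, W/(m*K)
R = 3.0e-3       # pellet radius, m
T_surf = 400.0   # surface temperature, K

# solution of (1/r) d/dr (k r dT/dr) + q = 0 with T(R) = T_surf:
#   T(r) = T_surf + q (R^2 - r^2) / (4 k)
r = np.linspace(0.0, R, 201)
T = T_surf + q * (R**2 - r**2) / (4.0 * k)
T_max = float(T[0])   # centreline temperature, the limiting value
```

Sanity checks of this kind are a common way to bound what a full multiphysics model such as the COMSOL simulation should report for the peak temperature.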

  12. Parents' Reactions to Finding Out That Their Children Have Average or above Average IQ Scores.

    Science.gov (United States)

    Dirks, Jean; And Others

    1983-01-01

    Parents of 41 children who had been given an individually-administered intelligence test were contacted 19 months after testing. Parents of average IQ children were less accurate in their memory of test results. Children with above average IQ experienced extremely low frequencies of sibling rivalry, conceit or pressure. (Author/HLM)

  13. Neutronic analyses and tools development efforts in the European DEMO programme

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, U., E-mail: ulrich.fischer@kit.edu [Association KIT-Euratom, Karlsruhe Institute of Technology (KIT), Karlsruhe (Germany); Bachmann, C. [European Fusion Development Agreement (EFDA), Garching (Germany); Bienkowska, B. [Association IPPLM-Euratom, IPPLM Warsaw/INP Krakow (Poland); Catalan, J.P. [Universidad Nacional de Educación a Distancia (UNED), Madrid (Spain); Drozdowicz, K.; Dworak, D. [Association IPPLM-Euratom, IPPLM Warsaw/INP Krakow (Poland); Leichtle, D. [Association KIT-Euratom, Karlsruhe Institute of Technology (KIT), Karlsruhe (Germany); Fusion for Energy (F4E), Barcelona (Spain); Lengar, I. [MESCS-JSI, Ljubljana (Slovenia); Jaboulay, J.-C. [CEA, DEN, Saclay, DM2S, SERMA, F-91191 Gif-sur-Yvette (France); Lu, L. [Association KIT-Euratom, Karlsruhe Institute of Technology (KIT), Karlsruhe (Germany); Moro, F. [Associazione ENEA-Euratom, ENEA Fusion Division, Frascati (Italy); Mota, F. [Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), Madrid (Spain); Sanz, J. [Universidad Nacional de Educación a Distancia (UNED), Madrid (Spain); Szieberth, M. [Budapest University of Technology and Economics (BME), Budapest (Hungary); Palermo, I. [Centro de Investigaciones Energéticas, Medioambientales y Tecnológicas (CIEMAT), Madrid (Spain); Pampin, R. [Fusion for Energy (F4E), Barcelona (Spain); Porton, M. [Euratom/CCFE Fusion Association, Culham Science Centre for Fusion Energy (CCFE), Culham (United Kingdom); Pereslavtsev, P. [Association KIT-Euratom, Karlsruhe Institute of Technology (KIT), Karlsruhe (Germany); Ogando, F. [Universidad Nacional de Educación a Distancia (UNED), Madrid (Spain); Rovni, I. [Budapest University of Technology and Economics (BME), Budapest (Hungary); and others

    2014-10-15

    Highlights: •Evaluation of neutronic tools for application to DEMO nuclear analyses. •Generation of a DEMO model for nuclear analyses based on MC calculations. •Nuclear analyses of the DEMO reactor equipped with a HCLL-type blanket. -- Abstract: The European Fusion Development Agreement (EFDA) recently launched a programme on Power Plant Physics and Technology (PPPT) with the aim to develop a conceptual design of a fusion demonstration reactor (DEMO) addressing key technology and physics issues. A dedicated part of the PPPT programme is devoted to the neutronics which, among others, has to define and verify requirements and boundary conditions for the DEMO systems. The quality of the provided data depends on the capabilities and the reliability of the computational tools. Accordingly, the PPPT activities in the area of neutronics include both DEMO nuclear analyses and development efforts on neutronic tools including their verification and validation. This paper reports on first neutronics studies performed for DEMO, and on the evaluation and further development of neutronic tools.

  14. Neutronic analyses and tools development efforts in the European DEMO programme

    International Nuclear Information System (INIS)

    Fischer, U.; Bachmann, C.; Bienkowska, B.; Catalan, J.P.; Drozdowicz, K.; Dworak, D.; Leichtle, D.; Lengar, I.; Jaboulay, J.-C.; Lu, L.; Moro, F.; Mota, F.; Sanz, J.; Szieberth, M.; Palermo, I.; Pampin, R.; Porton, M.; Pereslavtsev, P.; Ogando, F.; Rovni, I.

    2014-01-01

    Highlights: •Evaluation of neutronic tools for application to DEMO nuclear analyses. •Generation of a DEMO model for nuclear analyses based on MC calculations. •Nuclear analyses of the DEMO reactor equipped with a HCLL-type blanket. -- Abstract: The European Fusion Development Agreement (EFDA) recently launched a programme on Power Plant Physics and Technology (PPPT) with the aim to develop a conceptual design of a fusion demonstration reactor (DEMO) addressing key technology and physics issues. A dedicated part of the PPPT programme is devoted to the neutronics which, among others, has to define and verify requirements and boundary conditions for the DEMO systems. The quality of the provided data depends on the capabilities and the reliability of the computational tools. Accordingly, the PPPT activities in the area of neutronics include both DEMO nuclear analyses and development efforts on neutronic tools including their verification and validation. This paper reports on first neutronics studies performed for DEMO, and on the evaluation and further development of neutronic tools

  15. Development of a 3-dimensional calculation model of the Danish research reactor DR3 to analyse a proposal to a new core design called ring-core

    Energy Technology Data Exchange (ETDEWEB)

    Nonboel, E

    1985-07-01

    A 3-dimensional calculation model of the Danish research reactor DR3 has been developed. Demands for more effective utilization of the reactor and its facilities have required a more detailed calculation tool than applied so far. A great deal of attention has been devoted to the treatment of the coarse control arms. The model has been tested against measurements with satisfactory results. Furthermore the model has been used to analyse a proposal for a new core design called ring-core, where 4 central fuel elements are replaced by 4 dummy elements to increase the thermal flux in the center of the reactor. (author)

  16. LITERATURE SEARCH FOR METHODS FOR HAZARD ANALYSES OF AIR CARRIER OPERATIONS.

    Energy Technology Data Exchange (ETDEWEB)

    MARTINEZ - GURIDI,G.; SAMANTA,P.

    2002-07-01

    Representatives of the Federal Aviation Administration (FAA) and several air carriers under Title 14 of the Code of Federal Regulations (CFR) Part 121 developed a system-engineering model of the functions of air-carrier operations. Their analyses form the foundation or basic architecture upon which other task areas are based: hazard analyses, performance measures, and risk indicator design. To carry out these other tasks, models may need to be developed using the basic architecture of the Air Carrier Operations System Model (ACOSM). Since ACOSM encompasses various areas of air-carrier operations and can be used to address different task areas with differing but interrelated objectives, the modeling needs are broad. A literature search was conducted to identify and analyze the existing models that may be applicable for pursuing the task areas in ACOSM. The intent of the literature search was not necessarily to identify a specific model that can be directly used, but rather to identify relevant ones that have similarities with the processes and activities defined within ACOSM. Such models may provide useful inputs and insights in structuring ACOSM models. ACOSM simulates processes and activities in air-carrier operation, but, in a general framework, it has similarities with other industries where attention has also been paid to hazard analyses, risk management, and the design of risk indicators. To assure that efforts in other industries are adequately considered, the literature search includes publications from other industries, e.g., chemical, nuclear, and process industries. This report discusses the literature search and the relevant methods identified, and provides a preliminary assessment of their use in developing the models needed for the ACOSM task areas. A detailed assessment of the models has not been made. Defining those applicable for ACOSM will need further analyses of both the models and tools identified. The report is organized in four chapters.

  17. Fractal and multifractal analyses of bipartite networks

    Science.gov (United States)

    Liu, Jin-Long; Wang, Jian; Yu, Zu-Guo; Xie, Xian-Hua

    2017-03-01

    Bipartite networks have attracted considerable interest in various fields. Fractality and multifractality of unipartite (classical) networks have been studied in recent years, but these properties of bipartite networks have not yet been studied. In this paper, we try to unfold the self-similarity structure of bipartite networks by performing fractal and multifractal analyses for a variety of real-world bipartite network data sets and models. First, we find fractality in some bipartite networks, including the CiteULike, Netflix, MovieLens (ml-20m) and Delicious data sets and the (u, v)-flower model. Meanwhile, we observe shifted power-law or exponential behavior in several other networks. We then focus on the multifractal properties of bipartite networks. Our results indicate that multifractality exists in those bipartite networks possessing fractality. To capture the inherent attributes of bipartite networks with two different types of nodes, we assign different weights to the nodes of the different classes, and show the existence of multifractality in these node-weighted bipartite networks. In addition, for the data sets with ratings, we modify two existing algorithms for fractal and multifractal analyses of edge-weighted unipartite networks to study the self-similarity of the corresponding edge-weighted bipartite networks. The results show that our modified algorithms are feasible and can effectively uncover the self-similarity structure of these edge-weighted bipartite networks and their corresponding node-weighted versions.
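Fractality is typically quantified by a box-counting dimension, the slope of log N(s) against log(1/s). A minimal geometric sketch on the middle-thirds Cantor set (the network box-covering algorithms used in the paper are considerably more involved):

```python
import numpy as np

# box-counting dimension in its simplest geometric form: count occupied
# boxes N(s) at shrinking box sizes s and regress log N(s) on log(1/s)
def cantor(depth):
    pts = np.array([0.0, 1.0])
    for _ in range(depth):
        pts = np.concatenate([pts / 3.0, pts / 3.0 + 2.0 / 3.0])
    return pts

pts = cantor(10)                              # 2**11 interval endpoints
sizes = 3.0 ** -np.arange(1, 8)               # box sizes s = 3^-1 .. 3^-7
counts = [len(np.unique(np.floor(pts / s))) for s in sizes]

slope, _ = np.polyfit(np.log(1.0 / sizes), np.log(counts), 1)
# slope should sit near log(2)/log(3), the Cantor set's known dimension
```

For networks the "box" becomes a cluster of nodes within a given chemical distance, but the log-log regression step is the same.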

  18. Non-Statistical Methods of Analysing of Bankruptcy Risk

    Directory of Open Access Journals (Sweden)

    Pisula Tomasz

    2015-06-01

    Full Text Available The article focuses on assessing the effectiveness of a non-statistical approach to bankruptcy modelling in enterprises operating in the logistics sector. In order to describe the issue more comprehensively, the prediction of possible negative results of business operations was carried out for companies functioning in the Polish region of Podkarpacie and in Slovakia. The bankruptcy predictors selected for the assessment of companies operating in the logistics sector included 28 financial indicators characterizing these enterprises in terms of their financial standing and management effectiveness. The purpose of the study was to identify factors (models) describing the bankruptcy risk in enterprises in the context of their forecasting effectiveness over one-year and two-year time horizons. In order to assess their practical applicability, the models were carefully analysed and validated. The usefulness of the models was assessed in terms of their classification properties and their capacity to accurately identify enterprises at risk of bankruptcy and healthy companies, as well as the proper calibration of the models to the data from the training sample sets.

  19. Steady-state natural circulation analysis with computational fluid dynamic codes of a liquid metal-cooled accelerator driven system

    International Nuclear Information System (INIS)

    Abanades, A.; Pena, A.

    2009-01-01

    A new innovative nuclear installation is under research in the nuclear community for its potential application to nuclear waste management and, above all, for its capability to enhance the sustainability of nuclear energy in the future as a component of a new nuclear fuel cycle, in which its efficiency in terms of primary uranium ore utilization and radioactive waste generation will be improved. Such new nuclear installations are called accelerator driven systems (ADS) and are the result of a profitable symbiosis between accelerator technology, high-energy physics and reactor technology. Many ADS concepts are based on the utilization of heavy liquid metal (HLM) coolants due to their neutronic and thermo-physical properties. Moreover, such coolants permit operation in free circulation mode, one of the main aims of passive systems. In this paper, such an operation regime is analysed in a proposed ADS design applying computational fluid dynamics (CFD).

  20. Development of model for analysing respective collections of intended hematopoietic stem cells and harvests of unintended mature cells in apheresis for autologous hematopoietic stem cell collection.

    Science.gov (United States)

    Hequet, O; Le, Q H; Rodriguez, J; Dubost, P; Revesz, D; Clerc, A; Rigal, D; Salles, G; Coiffier, B

    2014-04-01

    Hematopoietic stem cells (HSCs) required to perform peripheral hematopoietic autologous stem cell transplantation (APBSCT) can be collected by processing several blood volumes (BVs) in leukapheresis sessions. However, this may cause granulocyte harvest in the graft and a decrease in the patient's platelet blood level. Both consequences may induce disturbances in the patient. One current aim of apheresis teams is to improve HSC collection by increasing HSC yield while preventing increases in granulocyte and platelet harvests. Before improving HSC collection, it seemed important to know more about the way these types of cells are harvested. The purpose of our study was to develop a simple model for analysing the respective collections of intended CD34+ cells among HSC (designated here as HSC) and harvests of unintended platelets or granulocytes among mature cells (designated here as mature cells), considering the number of BVs processed and factors likely to influence cell collection or harvest. For this, we processed 1, 2 and 3 BVs in 59 leukapheresis sessions and analysed the corresponding collections and harvests with a referent device (COBE Spectra). First, we analysed the amounts of HSC collected and mature cells harvested and, second, the evolution of the respective shares of HSC and mature cells collected or harvested throughout the BV processes. HSC collections and mature cell harvests both increased globally with the number of BVs processed. Analysis of the collections and harvests showed that only pre-leukapheresis blood levels (CD34+ cells and platelets) influenced the corresponding cell collections and harvests; that is, the only factors influencing HSC collections or unintended mature cell harvests were pre-leukapheresis blood cell levels. Our model was meant to assist apheresis teams in analysing shares of HSC collected and mature cells harvested with new devices or with new types of HSC mobilization. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Capacity allocation in wireless communication networks - models and analyses

    NARCIS (Netherlands)

    Litjens, Remco

    2003-01-01

    This monograph has concentrated on capacity allocation in cellular and Wireless Local Area Networks, primarily from a network operator's perspective. In the introductory chapter, a reference model has been proposed for the extensive suite of capacity allocation mechanisms that can be applied at

  2. On model-independent analyses of elastic hadron scattering

    International Nuclear Information System (INIS)

    Avila, R.F.; Campos, S.D.; Menon, M.J.; Montanha, J.

    2007-01-01

    By means of an almost model-independent parametrization for the elastic hadron-hadron amplitude, as a function of the energy and the momentum transfer, we obtain good descriptions of the physical quantities that characterize elastic proton-proton and antiproton-proton scattering (total cross section, r parameter and differential cross section). The parametrization is inferred on empirical grounds and selected according to high energy theorems and limits from axiomatic quantum field theory. Based on the predictive character of the approach we present predictions for the above physical quantities at the Brookhaven RHIC, Fermilab Tevatron and CERN LHC energies. (author)

  3. Healthcare restructuring and hierarchical alignment: why do staff and managers perceive change outcomes differently?

    Science.gov (United States)

    Walston, Stephen L; Chou, Ann F

    2006-09-01

    Healthcare organizations have undergone major change efforts in the past decade. Sustained change is related to continued alignment among organizational participants and may fail with incongruent perceptions of change. This study identifies factors contributing to the alignment in perceptions of organizational change outcomes between executives and all other employees. The sample included 10 hospitals with survey responses from 421 executives and other employees. Using hierarchical linear modeling (HLM), perceptual alignment was modeled at the first level as a function of goal commitment, goal clarity, goal acceptance, goal specificity, staff participation, available skill set, and knowledge and at the second level as organizational size. Descriptive statistics showed employee perception of outcomes differed among personnel levels. HLM results showed that goal specificity, appropriate staff training, reward incentives, effective communication, information sharing, and organization's ability to sustain changes induced perceptual alignment in change outcomes. Staff involvement in designing change efforts increased perceptual misalignment, whereas involvement during implementation and maintenance phases increased alignment. This research uses cross-sectional data from 10 hospitals. Data were gathered from surveys that may have recall bias as hospitals were surveyed at different times after the implementation of their restructuring. Our findings enhance the understanding of processes and mechanisms that enable healthcare organizations to align organizational participants' efforts during change. Results suggest that decision-makers should create incentives to encourage innovative practices, institute effective communication mechanisms, selectively disseminate information, and involve participants in implementing and maintaining changes to achieve intended outcomes. ORIGINALITY/VALUE OF ARTICLE: This article provides unique insight into the importance and causes of
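The two-level structure described above (employee-level predictors at level 1, organizational size at level 2) can be sketched with a simulate-and-recover example. All numbers, variable names and the two-step "intercepts-as-outcomes" shortcut below are illustrative, not the study's actual model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Level 2: ten hypothetical hospitals differing in (standardized) size.
n_hosp, n_emp = 10, 40
size = rng.normal(0.0, 1.0, n_hosp)

# Level-2 intercept model: b0_j = gamma00 + gamma01 * size_j + u_j
gamma00, gamma01 = 2.0, 0.5
b0 = gamma00 + gamma01 * size + rng.normal(0.0, 0.2, n_hosp)
beta1 = 0.8  # level-1 slope for, say, goal clarity (common to all hospitals)

# Level 1: employee-level outcomes within each hospital.
hospitals = []
for j in range(n_hosp):
    clarity = rng.normal(0.0, 1.0, n_emp)
    y = b0[j] + beta1 * clarity + rng.normal(0.0, 0.3, n_emp)
    hospitals.append((clarity, y))

# Step 1: fit the level-1 regression separately within each hospital.
intercepts = []
for clarity, y in hospitals:
    X = np.column_stack([np.ones(n_emp), clarity])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    intercepts.append(coef[0])

# Step 2: regress the estimated intercepts on the level-2 predictor (size).
X2 = np.column_stack([np.ones(n_hosp), size])
g_hat, *_ = np.linalg.lstsq(X2, np.array(intercepts), rcond=None)
print(g_hat)  # recovers roughly (gamma00, gamma01) = (2.0, 0.5)
```

A full HLM fit estimates both levels jointly with variance components; this two-step version is only the conceptual skeleton of the model structure.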

  4. RELAP5 thermal-hydraulic analyses of overcooling sequences in a pressurized water reactor

    International Nuclear Information System (INIS)

    Bolander, M.A.; Fletcher, C.D.; Davis, C.B.; Kullberg, C.M.; Stitt, B.D.; Waterman, M.E.; Burtt, J.D.

    1984-01-01

    In support of the Pressurized Thermal Shock Integration Study, sponsored by the United States Nuclear Regulatory Commission, the Idaho National Engineering Laboratory has performed analyses of overcooling transients using the RELAP5/MOD1.6 and MOD2.0 computer codes. These analyses were performed for the H.B. Robinson Unit 2 pressurized water reactor, which is a Westinghouse 3-loop design plant. Results of the RELAP5 analyses are presented. The capabilities of the RELAP5 computer code as a tool for analyzing integral plant transients requiring a detailed plant model, including complex trip logic and major control systems, are examined

  5. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
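The idea of testing systematic bias (location) and random error (scale) separately can be illustrated with a plain permutation test. The data, effect sizes and test statistics below are invented for illustration and are far simpler than the boosted GAMLSS procedure the abstract proposes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical measurements of the same quantity by two devices: device B has
# a systematic bias (+0.3) and a larger random error (sd 0.2 vs 0.1).
a = rng.normal(5.0, 0.1, 200)
b = rng.normal(5.3, 0.2, 200)

def perm_test(x, y, stat, n_perm=2000):
    """Two-sided permutation p-value for stat(x, y) under label exchange."""
    observed = abs(stat(x, y))
    pooled = np.concatenate([x, y])
    hits = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if abs(stat(perm[:len(x)], perm[len(x):])) >= observed:
            hits += 1
    return hits / n_perm

bias_stat = lambda x, y: np.mean(y) - np.mean(x)                 # location
scale_stat = lambda x, y: np.std(y, ddof=1) - np.std(x, ddof=1)  # scale

p_bias = perm_test(a, b, bias_stat)

# For the scale test, remove each device's location first (Levene-style),
# so a systematic bias does not masquerade as a scale difference.
p_scale = perm_test(a - np.median(a), b - np.median(b), scale_stat)

print(p_bias, p_scale)  # both small for these simulated devices
```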

  6. Scalable Coupling of Multiscale AEH and PARADYN Analyses for Impact Modeling

    National Research Council Canada - National Science Library

    Valisetty, Rama R; Chung, Peter W; Namburu, Raju R

    2005-01-01

    An asymptotic expansion homogenization (AEH)-based microstructural model available for modeling microstructural aspects of modern armor materials is coupled with PARADYN, a parallel explicit Lagrangian finite-element code...

  7. Parental Perceptions toward and Practices of Heritage Language Maintenance: Focusing on the United States and Canada

    Science.gov (United States)

    Liang, Feng

    2018-01-01

    This study reviews 17 studies published since 2000 on the perceptions and practices of immigrant parents residing in the United States or Canada with respect to their children's heritage language maintenance (HLM). The findings suggest that parental perceptions may change due to practical considerations and vary with different degrees of…

  8. How can results from macro economic analyses of the energy consumption of households be used in macro models? A discussion of theoretical and empirical literature about aggregation

    International Nuclear Information System (INIS)

    Halvorsen, Bente; Larsen, Bodil M.; Nesbakken, Runa

    2001-01-01

    The literature on energy demand shows that there are systematic differences in income and price elasticities between analyses based on macro data and those based on micro data. Even if one estimates models with the same explanatory variables, the results may differ with respect to estimated price and income sensitivity. These differences may be caused by problems involved in transferring micro properties to macro properties, or because the estimated macro relationships fail to adequately take into account the fact that households behave differently in their energy demand. Political goals are often directed towards the entire household sector. Partial equilibrium models do not capture important equilibrium effects and feedback through the energy markets and the economy in general. Thus, it is of great political and scientific interest to carry out macroeconomic model analyses of different political measures that affect energy consumption. The results of behavioural analyses, in which the heterogeneity of energy demand is investigated, must be based on information about individual households. When demand is studied based on micro data, it is difficult to aggregate its properties into a total demand function for the entire household sector if different household groups behave differently. Such heterogeneity of behaviour may for instance arise when households in different regions have different heating equipment because of regional differences in the price of electricity. The question of aggregation arises immediately when one wants to draw conclusions about the household sector based on information about individual households, whether the discussion concerns the whole population or a selection of households. Thus, aggregation is a topic of interest in a wide range of problems
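The aggregation problem the abstract discusses can be made concrete with a toy computation: when households have heterogeneous price elasticities, the elasticity of aggregate demand is a quantity-share-weighted average that itself moves with the price, so no single constant macro elasticity reproduces the micro behaviour. The numbers below are purely illustrative.

```python
import numpy as np

# Two hypothetical household groups with constant-elasticity demand Q = A * P**e.
A = np.array([1.0, 1.0])
e = np.array([-0.2, -1.0])  # heterogeneous price elasticities

def agg_elasticity(P):
    """Price elasticity of total demand at price P."""
    Q = A * P ** e
    # d(log sum Q)/d(log P) = quantity-share-weighted mean of group elasticities
    return float(np.sum(e * Q) / np.sum(Q))

print(agg_elasticity(1.0))  # -0.6: equal quantity shares at P = 1
print(agg_elasticity(4.0))  # about -0.4: the inelastic group now dominates
```

A macro model estimated on aggregate data would pick up some average of these price-dependent values, which is one reason macro and micro elasticity estimates diverge.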

  9. A methodology for eliciting, representing, and analysing stakeholder knowledge for decision making on complex socio-ecological systems: from cognitive maps to agent-based models.

    Science.gov (United States)

    Elsawah, Sondoss; Guillaume, Joseph H A; Filatova, Tatiana; Rook, Josefine; Jakeman, Anthony J

    2015-03-15

    This paper aims to contribute to developing better ways for incorporating essential human elements in decision making processes for modelling of complex socio-ecological systems. It presents a step-wise methodology for integrating perceptions of stakeholders (qualitative) into formal simulation models (quantitative) with the ultimate goal of improving understanding and communication about decision making in complex socio-ecological systems. The methodology integrates cognitive mapping and agent based modelling. It cascades through a sequence of qualitative/soft and numerical methods comprising: (1) Interviews to elicit mental models; (2) Cognitive maps to represent and analyse individual and group mental models; (3) Time-sequence diagrams to chronologically structure the decision making process; (4) All-encompassing conceptual model of decision making, and (5) computational (in this case agent-based) Model. We apply the proposed methodology (labelled ICTAM) in a case study of viticulture irrigation in South Australia. Finally, we use strengths-weakness-opportunities-threats (SWOT) analysis to reflect on the methodology. Results show that the methodology leverages the use of cognitive mapping to capture the richness of decision making and mental models, and provides a combination of divergent and convergent analysis methods leading to the construction of an Agent Based Model. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Stress analyses of pump gears produced by powder metallurgy

    Energy Technology Data Exchange (ETDEWEB)

    Cetinel, Hakan [Celal Bayar Univ., Mechanical Engineering Dept. (Turkey); Yilmaz, Burak

    2013-06-01

    In this study, trochoidal-type (gerotor) hydraulic pump gears were produced by the powder metallurgy (P/M) technique. Several gears with different mechanical properties were obtained by changing process variables. The tooth contact stresses were calculated analytically under particular operating conditions of the hydraulic pump. The 3D models were obtained from real gears by using a coordinate measuring machine (CMM, 3D scanning) and SOLIDWORKS software. Stress analyses were conducted on these 3D models using ANSYS WORKBENCH software. It was found that density increases with sintering duration and that the mechanical properties are positively affected by the increase in density. Maximum deformation takes place in the region of the outer gear, where failure generally occurs at the minimum cross-section area.

  11. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2017-05-01

    Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process, the models allow for profound conclusions about the test takers. However, before such a model can be used, its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters, which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study. In this study we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol. Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to several alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests. The test closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
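The core of the Hausman idea, comparing an efficient estimator with a less efficient but consistent one and referring the squared, variance-scaled difference to a chi-square distribution, can be sketched on a toy linear model. The setup below is a generic illustration, not the item-parameter version used in the paper.

```python
import math
import numpy as np

rng = np.random.default_rng(2)

# Simulated data from a simple no-intercept linear model y = b*x + noise.
n = 2000
x = rng.normal(0.0, 1.0, n)
y = 1.5 * x + rng.normal(0.0, 1.0, n)

def ols_slope(xs, ys):
    """Slope estimate and its sampling variance for a no-intercept OLS fit."""
    sxx = float(xs @ xs)
    b = float(xs @ ys) / sxx
    resid = ys - b * xs
    s2 = float(resid @ resid) / (len(xs) - 1)
    return b, s2 / sxx

# Estimator 1: efficient under the model (uses all observations).
b_full, v_full = ols_slope(x, y)
# Estimator 2: consistent but less efficient (uses half the observations).
b_half, v_half = ols_slope(x[: n // 2], y[: n // 2])

# Hausman statistic: if the model holds, both estimates target the same slope
# and H is approximately chi-square distributed with 1 degree of freedom.
H = (b_half - b_full) ** 2 / (v_half - v_full)
p = math.erfc(math.sqrt(H / 2.0))  # chi2(1) survival function
print(H, p)
```

A large H (small p) signals that the two estimators disagree more than sampling error allows, i.e. that the model is misspecified.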

  12. Design factors analyses of second-loop PRHRS

    Directory of Open Access Journals (Sweden)

    ZHANG Hongyan

    2017-05-01

    Full Text Available In order to study the operating characteristics of a second-loop Passive Residual Heat Removal System (PRHRS), the transient thermal analysis code RELAP5 is used to build simulation models of the main coolant system and the second-loop PRHRS. Transient calculations and comparative analyses under station blackout accident and one-side feed water line break accident conditions are conducted for three critical design factors of the second-loop PRHRS: design capacity, emergency makeup tank and isolation valve opening speed. The impacts of these design factors on the operating characteristics of the second-loop PRHRS are summarized based on the calculations and analyses. The analysis results indicate that system safety and cooling rate should be taken into consideration when designing the PRHRS's capacity, that water injection from the emergency makeup tank to the steam generator aids system cooling in the event of an accident, and that system startup performance can be improved by reducing the opening speed of the isolation valve. The results can provide references for the design of the second-loop PRHRS in nuclear power plants.

  13. Longitudinal changes in telomere length and associated genetic parameters in dairy cattle analysed using random regression models.

    Directory of Open Access Journals (Sweden)

    Luise A Seeker

    Full Text Available Telomeres cap the ends of linear chromosomes and shorten with age in many organisms. In humans, short telomeres have been linked to morbidity and mortality. With the accumulation of longitudinal datasets, the focus shifts from investigating telomere length (TL) to exploring TL change within individuals over time. Some studies indicate that the speed of telomere attrition is predictive of future disease. The objectives of the present study were to (1) characterize the change in bovine relative leukocyte TL (RLTL) across the lifetime in Holstein Friesian dairy cattle, (2) estimate genetic parameters of RLTL over time and (3) investigate the association of differences in individual RLTL profiles with productive lifespan. RLTL measurements were analysed using Legendre polynomials in a random regression model to describe TL profiles and genetic variance over age. The analyses were based on 1,328 repeated RLTL measurements of 308 female Holstein Friesian dairy cattle. A quadratic Legendre polynomial was fitted to the fixed effect of age in months and to the random effect of the animal identity. Changes in RLTL, heritability and within-trait genetic correlation along the age trajectory were calculated and illustrated. At a population level, the relationship between RLTL and age was described by a positive quadratic function. Individuals varied significantly regarding the direction and amount of RLTL change over life. The heritability of RLTL ranged from 0.36 to 0.47 (SE = 0.05-0.08) and remained statistically unchanged over time. The genetic correlation of RLTL at birth with measurements later in life decreased with the time interval between samplings from near unity to 0.69, indicating that TL later in life might be regulated by different genes than TL early in life. Even though animals differed in their RLTL profiles significantly, those differences were not correlated with productive lifespan (p = 0.954).
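Fitting a quadratic Legendre polynomial to a fixed effect of age, as in the abstract, can be sketched with NumPy's Legendre utilities. The ages, coefficients and noise level below are simulated for illustration.

```python
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(3)

# Hypothetical ages (months) and measurements following a quadratic trajectory.
age = rng.uniform(0.0, 120.0, 500)
t = 2.0 * (age - age.min()) / (age.max() - age.min()) - 1.0  # map age to [-1, 1]

X = legendre.legvander(t, 2)            # design columns: P0(t), P1(t), P2(t)
true_coef = np.array([1.0, -0.1, 0.05])  # illustrative fixed-effect coefficients
y = X @ true_coef + rng.normal(0.0, 0.05, len(t))

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # close to the simulated coefficients
```

In the actual random regression model, each animal additionally receives its own random Legendre coefficients, giving animal-specific trajectories around this fixed population curve.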

  14. Thermal and hydraulic analyses of the System 81 cold traps

    Energy Technology Data Exchange (ETDEWEB)

    Kim, K.

    1977-06-15

    Thermal and hydraulic analyses of the System 81 Type I and II cold traps were completed except for thermal transients analysis. Results are evaluated, discussed, and reported. Analytical models were developed to determine the physical dimensions of the cold traps and to predict the performance. The FFTF cold trap crystallizer performances were simulated using the thermal model. This simulation shows that the analytical model developed predicts reasonably conservative temperatures. Pressure drop and sodium residence time calculations indicate that the present design will meet the requirements specified in the E-Specification. Steady state temperature data for the critical regions were generated to assess the magnitude of the thermal stress.

  15. The plant design analyser and its applications

    International Nuclear Information System (INIS)

    Whitmarsh-Everiss, M.J.

    1992-01-01

    Consideration is given to the history of computational methods for the non-linear dynamic analysis of plant behaviour. This is traced from analogue to hybrid computers. When these were phased out simulation languages were used in the batch mode and the interactive computational capabilities were lost. These have subsequently been recovered using mainframe computing architecture in the context of small models using the Prototype Plant Design Analyser. Given the development of parallel processing architectures, the restriction on model size can be lifted. This capability and the use of advanced Work Stations and graphics software has enabled an advanced interactive design environment to be developed. This system is generic and can be used, with suitable graphics development, to study the dynamics and control behaviour of any plant or system for minimum cost. Examples of past and possible future uses are identified. (author)

  16. Towards interoperable and reproducible QSAR analyses: Exchange of datasets.

    Science.gov (United States)

    Spjuth, Ola; Willighagen, Egon L; Guha, Rajarshi; Eklund, Martin; Wikberg, Jarl Es

    2010-06-30

    QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises addition of chemical structures as well as selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constrain collaborations and re-use of data. We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML) which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusions regarding descriptors by defining them crisply. 
This makes it easy to join, extend, and combine datasets and hence to work collectively.
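The reproducibility idea, recording each descriptor's ontology identifier, implementing software and version alongside the data, can be sketched as a minimal XML round trip. The element and attribute names below are made up for illustration and do not follow the actual QSAR-ML schema.

```python
import xml.etree.ElementTree as ET

# Build a minimal dataset in the spirit of QSAR-ML: each descriptor carries an
# ontology reference plus the implementing software and its version, so the
# setup is reproducible. (Names here are invented, not the real schema.)
root = ET.Element("dataset")
ET.SubElement(root, "descriptor",
              ontologyRef="http://example.org/descriptors#XLogP",  # hypothetical
              implementation="cdk", version="1.2.3")
compound = ET.SubElement(root, "compound", smiles="CCO")
ET.SubElement(compound, "value", descriptor="XLogP").text = "-0.14"

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text)

# Round trip: a collaborator can recover the exact descriptor setup.
parsed = ET.fromstring(xml_text)
print(parsed.find("descriptor").get("version"))  # 1.2.3
```

Because the descriptor's provenance travels with the dataset, a second lab can re-run the identical setup instead of guessing which software and version produced the values.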

  17. Towards interoperable and reproducible QSAR analyses: Exchange of datasets

    Directory of Open Access Journals (Sweden)

    Spjuth Ola

    2010-06-01

    Full Text Available Abstract Background QSAR is a widely used method to relate chemical structures to responses or properties based on experimental observations. Much effort has been made to evaluate and validate the statistical modeling in QSAR, but these analyses treat the dataset as fixed. An overlooked but highly important issue is the validation of the setup of the dataset, which comprises addition of chemical structures as well as selection of descriptors and software implementations prior to calculations. This process is hampered by the lack of standards and exchange formats in the field, making it virtually impossible to reproduce and validate analyses and drastically constrain collaborations and re-use of data. Results We present a step towards standardizing QSAR analyses by defining interoperable and reproducible QSAR datasets, consisting of an open XML format (QSAR-ML which builds on an open and extensible descriptor ontology. The ontology provides an extensible way of uniquely defining descriptors for use in QSAR experiments, and the exchange format supports multiple versioned implementations of these descriptors. Hence, a dataset described by QSAR-ML makes its setup completely reproducible. We also provide a reference implementation as a set of plugins for Bioclipse which simplifies setup of QSAR datasets, and allows for exporting in QSAR-ML as well as old-fashioned CSV formats. The implementation facilitates addition of new descriptor implementations from locally installed software and remote Web services; the latter is demonstrated with REST and XMPP Web services. Conclusions Standardized QSAR datasets open up new ways to store, query, and exchange data for subsequent analyses. QSAR-ML supports completely reproducible creation of datasets, solving the problems of defining which software components were used and their versions, and the descriptor ontology eliminates confusions regarding descriptors by defining them crisply. 
This makes it easy to join, extend, and combine datasets and hence to work collectively.

  18. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of the thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section transmission electron microscopy is used to acquire images from two experimental groups of rats: 1) rats subjected to a behavioral model of stress and 2) rats subjected to sham stress as the control group. The synaptic vesicle distribution and interactions are modeled by employing a point process approach based on differences of statistical measures within sections and the same measures between sections. Three-dimensional (3D) datasets are reconstructed by using image registration techniques and estimated thicknesses. We distinguish the effect of stress by estimating the synaptic vesicle densities and modeling...

  19. Effect of housing relocation and neighborhood environment on adolescent mental and behavioral health.

    Science.gov (United States)

    Byck, Gayle R; Bolland, John; Dick, Danielle; Swann, Gregory; Henry, David; Mustanski, Brian

    2015-11-01

    This study examined whether relocating from a high-poverty neighborhood to a lower poverty neighborhood as part of a federal housing relocation program (HOPE VI; Housing Opportunities for People Everywhere) had effects on adolescent mental and behavioral health compared to adolescents consistently living in lower poverty neighborhoods. Sociodemographic, risk behavior, and neighborhood data were collected from 592 low-income, primarily African-American adolescents and their primary caregivers. Structured psychiatric interviews were conducted with adolescents. Prerelocation neighborhood, demographic, and risk behavior data were also included. Hierarchical Linear Modeling (HLM) was used to test associations between neighborhood variables and risk outcomes, and to test whether the effect of neighborhood relocation and neighborhood characteristics might explain differences in sexual risk taking, substance use, and mental health outcomes. Adolescents who relocated out of HOPE VI neighborhoods (n = 158) fared worse than control group participants (n = 429) on most self-reported mental health outcomes. The addition of subjective neighborhood measures generally did not substantively change these results. Our findings suggest that moving from a high-poverty neighborhood to a somewhat lower poverty neighborhood is not associated with better mental health and risk behavior outcomes in adolescents. The continued effects of having grown up in a high-poverty neighborhood, the small improvements in their new neighborhoods, the comparatively short length of time they lived in their new neighborhood, and/or the stress of moving appear to worsen most of the mental health outcomes of HOPE VI participants compared to control group participants who consistently lived in the lower poverty neighborhoods. © 2015 Association for Child and Adolescent Mental Health.

  20. Bio-economic farm modelling to analyse agricultural land productivity in Rwanda

    NARCIS (Netherlands)

    Bidogeza, J.C.

    2011-01-01

    Keywords: Rwanda; farm household typology; sustainable technology adoption; multivariate analysis; land degradation; food security; bioeconomic model; crop simulation models; organic fertiliser; inorganic fertiliser; policy incentives

    In Rwanda, land degradation contributes to the

  1. Multicollinearity in spatial genetics: separating the wheat from the chaff using commonality analyses.

    Science.gov (United States)

    Prunier, J G; Colyn, M; Legendre, X; Nimon, K F; Flamand, M C

    2015-01-01

    Direct gradient analyses in spatial genetics provide unique opportunities to describe the inherent complexity of genetic variation in wildlife species and are the object of many methodological developments. However, multicollinearity among explanatory variables is a systemic issue in multivariate regression analyses and is likely to cause serious difficulties in properly interpreting results of direct gradient analyses, with the risk of erroneous conclusions, misdirected research and inefficient or counterproductive conservation measures. Using simulated data sets along with linear and logistic regressions on distance matrices, we illustrate how commonality analysis (CA), a detailed variance-partitioning procedure that was recently introduced in the field of ecology, can be used to deal with nonindependence among spatial predictors. By decomposing model fit indices into unique and common (or shared) variance components, CA allows identifying the location and magnitude of multicollinearity, revealing spurious correlations and thus thoroughly improving the interpretation of multivariate regressions. Despite a few inherent limitations, especially in the case of resistance model optimization, this review highlights the great potential of CA to account for complex multicollinearity patterns in spatial genetics and identifies future applications and lines of research. We strongly urge spatial geneticists to systematically investigate commonalities when performing direct gradient analyses. © 2014 John Wiley & Sons Ltd.
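For the simplest case of two correlated predictors, commonality analysis reduces to decomposing the full model's R² into two unique components and one common (shared) component, computed from the R² values of the three possible submodels. The simulated data and variable names below are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5000

# Two collinear (hypothetical) landscape predictors and a response.
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + 0.3 * rng.normal(size=n)   # strongly correlated with x1
y = 1.0 * x1 + 0.5 * x2 + rng.normal(size=n)

def r_squared(predictors, y):
    """R-squared of an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y))] + list(predictors))
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    tss = float((y - y.mean()) @ (y - y.mean()))
    return 1.0 - float(resid @ resid) / tss

r2_full = r_squared([x1, x2], y)
r2_1 = r_squared([x1], y)
r2_2 = r_squared([x2], y)

unique1 = r2_full - r2_2          # variance only x1 explains
unique2 = r2_full - r2_1          # variance only x2 explains
common = r2_1 + r2_2 - r2_full    # variance the two predictors share
print(unique1, unique2, common)   # the three components sum to r2_full
```

A large common component relative to the unique components is exactly the multicollinearity signature the review warns about: neither predictor's apparent effect can be attributed to it alone.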

  2. Comparison of pre-test analyses with the Sizewell-B 1:10 scale prestressed concrete containment test

    International Nuclear Information System (INIS)

    Dameron, R.A.; Rashid, Y.R.; Parks, M.B.

    1991-01-01

    This paper describes pretest analyses of a one-tenth scale model of the 'Sizewell-B' prestressed concrete containment building. The work was performed by ANATECH Research Corp. under contract with Sandia National Laboratories (SNL). Hydraulic testing of the model was conducted in the United Kingdom by the Central Electricity Generating Board (CEGB). In order to further their understanding of containment behavior, the USNRC, through an agreement with the United Kingdom Atomic Energy Authority (UKAEA), also participated in the test program with SNL serving as their technical agent. The analyses that were conducted included two global axisymmetric models with 'bonded' and 'unbonded' analytical treatment of meridional tendons, a 3D quarter model of the structure, an axisymmetric representation of the equipment hatch region, and local plane stress and r-θ models of a buttress. Results of these analyses are described and compared with the results of the test. A global hoop failure at midheight of the cylinder and a shear/bending type failure at the base of the cylinder wall were both found to have roughly equal probability of occurrence; however, the shear failure mode had higher uncertainty associated with it. Consequently, significant effort was dedicated to improving the modeling capability for concrete shear behavior. This work is also described briefly. (author)

  3. Comparison of pre-test analyses with the Sizewell-B 1:10 scale prestressed concrete containment test

    International Nuclear Information System (INIS)

    Dameron, R.A.; Rashid, Y.R.; Parks, M.B.

    1991-01-01

    This paper describes pretest analyses of a one-tenth scale model of the Sizewell-B prestressed concrete containment building. The work was performed by ANATECH Research Corp. under contract with Sandia National Laboratories (SNL). Hydraulic testing of the model was conducted in the United Kingdom by the Central Electricity Generating Board (CEGB). In order to further their understanding of containment behavior, the USNRC, through an agreement with the United Kingdom Atomic Energy Authority (UKAEA), also participated in the test program with SNL serving as their technical agent. The analyses that were conducted included two global axisymmetric models with 'bonded' and 'unbonded' analytical treatment of meridional tendons, a 3D quarter model of the structure, an axisymmetric representation of the equipment hatch region, and local plane stress and r-θ models of a buttress. Results of these analyses are described and compared with the results of the test. A global hoop failure at midheight of the cylinder and a shear/bending type failure at the base of the cylinder wall were both found to have roughly equal probability of occurrence; however, the shear failure mode had higher uncertainty associated with it. Consequently, significant effort was dedicated to improving the modeling capability for concrete shear behavior. This work is also described briefly. 5 refs., 7 figs

  4. Analysing the Relevance of Experience Partitions to the Prediction of Players’ Self-Reports of Affect

    DEFF Research Database (Denmark)

    Martínez, Héctor Pérez; Yannakakis, Georgios N.

    2011-01-01

    A common practice in modeling affect from physiological signals consists of reducing the signals to a set of statistical features that feed predictors of self-reported emotions. This paper analyses the impact of the various time-windows used for the extraction of physiological features on the accuracy of such predictors...

  5. Application and further development of models for the final repository safety analyses on the clearance of radioactive materials for disposal. Final report; Anwendung und Weiterentwicklung von Modellen fuer Endlagersicherheitsanalysen auf die Freigabe radioaktiver Stoffe zur Deponierung. Abschlussbericht

    Energy Technology Data Exchange (ETDEWEB)

    Artmann, Andreas; Larue, Juergen; Seher, Holger; Weiss, Dietmar

    2014-08-15

    The project on application and further development of models for final repository safety analyses for the clearance of radioactive materials for disposal aimed to study long-term safety, in terms of radiation exposure for different scenarios, using repository-specific simulation programs. In particular, it investigated whether the 10 μSv criterion can be guaranteed under consideration of human intrusion scenarios. The report covers the following issues: selection and identification of models and codes and the definition of boundary conditions; applicability of conventional repository models for long-term safety analyses; modeling results for pollutant release and transport; and calculation of the resulting radiation exposure.

  6. FEM modeling and histological analyses on thermal damage induced in facial skin resurfacing procedure with different CO2 laser pulse duration

    Science.gov (United States)

    Rossi, Francesca; Zingoni, Tiziano; Di Cicco, Emiliano; Manetti, Leonardo; Pini, Roberto; Fortuna, Damiano

    2011-07-01

    Laser light is nowadays routinely used in aesthetic treatments of facial skin, such as laser rejuvenation and scar removal. The induced thermal damage may be varied by setting different laser parameters in order to obtain a particular aesthetic result. In this work, a theoretical study is proposed of the thermal damage induced in deep tissue for different laser pulse durations. The study is based on the Finite Element Method (FEM): a bidimensional model of the facial skin is depicted in axial symmetry, considering the different skin structures and their different optical and thermal parameters; the conversion of laser light into thermal energy is modeled by the bio-heat equation. The light source is a CO2 laser with different pulse durations. The model enabled study of the thermal damage induced in the skin, by calculating the Arrhenius integral. The post-processing results enabled study of the spatial and temporal temperature dynamics induced in the facial skin, of possible cumulative effects of subsequent laser pulses, and of optimizations of the procedure for applications in dermatological surgery. The calculated data were then validated in an experimental measurement session, performed in a sheep animal model. Histological analyses were performed on the treated tissues, evidencing the spatial distribution and the extent of the thermal damage in the collagenous tissue. Modeling and experimental results were in good agreement, and they were used to design a new optimized laser-based skin resurfacing procedure.
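    The Arrhenius damage integral used in the study can be sketched numerically. The rate coefficients below are the commonly quoted Henriques-type values for skin (A ≈ 3.1e98 1/s, Ea ≈ 6.28e5 J/mol), assumed here for illustration rather than taken from this paper:

```python
import math

def arrhenius_damage(temps_kelvin, dt, A=3.1e98, Ea=6.28e5, R=8.314):
    """Accumulate the Arrhenius damage integral Omega = sum A*exp(-Ea/(R*T))*dt.

    A (1/s) and Ea (J/mol) are Henriques-type literature values for skin,
    assumed here, not parameters from the paper. Omega >= 1 is conventionally
    read as irreversible thermal damage.
    """
    return sum(A * math.exp(-Ea / (R * T)) * dt for T in temps_kelvin)

# Toy temperature history: tissue held at 70 C for 1 s, sampled every 10 ms.
history = [343.15] * 100
omega = arrhenius_damage(history, dt=0.01)  # far above 1: irreversible damage
```

    At body temperature (310.15 K) the same exposure yields Omega far below 1, which is why the integral discriminates damaged from undamaged tissue so sharply.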

  7. Prediction of Seismic Slope Displacements by Dynamic Stick-Slip Analyses

    International Nuclear Information System (INIS)

    Ausilio, Ernesto; Costanzo, Antonio; Silvestri, Francesco; Tropeano, Giuseppe

    2008-01-01

    A good working balance between simplicity and reliability in assessing seismic slope stability is represented by displacement-based methods, in which the effects of deformability and ductility can be either decoupled or coupled in the dynamic analyses. In this paper, a 1D lumped-mass ''stick-slip'' model is developed, accounting for soil heterogeneity and non-linear behaviour, with a base sliding mechanism at a potential rupture surface. The results of the preliminary calibration show a good agreement with frequency-domain site response analysis in no-slip conditions. The comparison with rigid sliding block analyses and with the decoupled approach proves that the stick-slip procedure can become increasingly unconservative for soft soils and deep sliding depths.
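    The rigid sliding-block analyses used for comparison can be illustrated with a minimal Newmark-type integration; the pulse history and yield acceleration below are invented for illustration and do not reproduce the paper's coupled stick-slip model:

```python
def newmark_displacement(accel, dt, a_yield):
    """Rigid sliding-block (Newmark) displacement for one-way sliding.

    accel: base acceleration history (m/s^2); a_yield: yield acceleration
    (m/s^2). The block slides while relative velocity is positive, with
    relative acceleration (a(t) - a_yield) during sliding.
    """
    v_rel = 0.0  # relative (sliding) velocity
    d = 0.0      # accumulated downslope displacement
    for a in accel:
        if v_rel > 0.0 or a > a_yield:
            v_rel += (a - a_yield) * dt
            v_rel = max(v_rel, 0.0)  # sliding stops when velocity returns to zero
            d += v_rel * dt
    return d

# One pulse of 4 m/s^2 for 0.5 s against a 1 m/s^2 yield level.
pulse = [4.0] * 50 + [0.0] * 200
disp = newmark_displacement(pulse, dt=0.01, a_yield=1.0)
```

    A pulse that never exceeds the yield acceleration produces zero displacement, which is the defining feature of the rigid-block idealization.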

  8. Model analyses for sustainable energy supply under CO2 restrictions

    International Nuclear Information System (INIS)

    Matsuhashi, Ryuji; Ishitani, Hisashi.

    1995-01-01

    This paper aims at clarifying key points for realizing sustainable energy supply under restrictions on CO2 emissions. For this purpose, the possibility of a solar breeding system is investigated as a key technology for sustainable energy supply. The authors describe their mathematical model simulating global energy supply and demand over the ultra-long term. Depletion of non-renewable resources and constraints on CO2 emissions are taken into consideration in the model. Computed results have shown that the present energy system based on non-renewable resources shifts to a system based on renewable resources in the ultra-long term, given appropriate incentives.

  9. An extensive cocktail approach for rapid risk assessment of in vitro CYP450 direct reversible inhibition by xenobiotic exposure

    Energy Technology Data Exchange (ETDEWEB)

    Spaggiari, Dany, E-mail: dany.spaggiari@unige.ch [School of Pharmaceutical Sciences, University of Geneva, University of Lausanne, Boulevard d' Yvoy 20, 1211 Geneva 4 (Switzerland); Daali, Youssef, E-mail: youssef.daali@hcuge.ch [Clinical Pharmacology and Toxicology Service, Geneva University Hospitals, Rue Gabrielle Perret-Gentil, 1211 Genève 14 (Switzerland); Rudaz, Serge, E-mail: serge.rudaz@unige.ch [School of Pharmaceutical Sciences, University of Geneva, University of Lausanne, Boulevard d' Yvoy 20, 1211 Geneva 4 (Switzerland); Swiss Centre for Applied Human Toxicology, University of Geneva, Boulevard d' Yvoy 20, 1211 Geneva 4 (Switzerland)

    2016-07-01

    Acute exposure to environmental factors strongly affects the metabolic activity of cytochrome P450 (P450). As a consequence, the risk of interaction could be increased, modifying the clinical outcomes of a medication. Because toxic agents cannot be administered to humans for ethical reasons, in vitro approaches are essential to evaluate their impact on P450 activities. In this work, an extensive cocktail mixture was developed and validated for in vitro P450 inhibition studies using human liver microsomes (HLM). The cocktail comprised eleven P450-specific probe substrates to simultaneously assess the activities of the following isoforms: 1A2, 2A6, 2B6, 2C8, 2C9, 2C19, 2D6, 2E1, 2J2 and subfamily 3A. The high selectivity and sensitivity of the developed UHPLC-MS/MS method were critical for the success of this methodology, whose main advantages are: (i) the use of eleven probe substrates with minimized interactions, (ii) a low HLM concentration, (iii) fast incubation (5 min) and (iv) the use of metabolic ratios as microsomal P450 activity markers. This cocktail approach was successfully validated by comparing the obtained IC50 values for model inhibitors with those generated with the conventional single-probe methods. Accordingly, reliable inhibition values could be generated 10-fold faster using a 10-fold smaller amount of HLM compared to individual assays. This approach was applied to assess the P450 inhibition potential of widespread insecticides, namely, chlorpyrifos, fenitrothion, methylparathion and profenofos. In all cases, P450 2B6 was the most affected, with IC50 values in the nanomolar range. For the first time, mixtures of these four insecticides incubated at low concentrations showed a cumulative inhibitory in vitro effect on P450 2B6. - Highlights: • Ten P450 isoform activities assessed simultaneously with only one incubation. • P450 activity levels measured using the metabolic ratio approach. • IC50 values generated 10
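    An IC50 from such an inhibition assay can be estimated, as a first pass, by log-linear interpolation between the two concentrations that bracket 50% residual activity; the data below are hypothetical, not values from the study:

```python
import math

def ic50_from_curve(concs, activities):
    """Estimate IC50 by log-linear interpolation between the two points
    that bracket 50% residual activity. concs in ascending order (uM),
    activities as fractions of the uninhibited control. A quick bracketing
    sketch of the kind often used before full dose-response curve fitting.
    """
    for (c1, a1), (c2, a2) in zip(zip(concs, activities),
                                  zip(concs[1:], activities[1:])):
        if a1 >= 0.5 >= a2:
            frac = (a1 - 0.5) / (a1 - a2)
            return 10 ** (math.log10(c1) + frac * (math.log10(c2) - math.log10(c1)))
    raise ValueError("50% inhibition not bracketed by the data")

# Hypothetical inhibition data for one probe reaction.
concs = [0.01, 0.1, 1.0, 10.0]        # inhibitor concentration, uM
activity = [0.95, 0.80, 0.35, 0.05]   # fraction of control activity
ic50 = ic50_from_curve(concs, activity)  # falls between 0.1 and 1 uM
```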

  10. Groundwater flow analyses in Japan. 1. Case studies in Hokkaido and Northeast Japan

    International Nuclear Information System (INIS)

    Inaba, Hideo; Maekawa, Keisuke; Koide, Kaoru; Yanagizawa, Koichi

    1995-01-01

    An extensive study program has been carried out to estimate the hydrogeological characteristics of the deep underground in Japan. As a part of this program, groundwater flow analyses in Hokkaido and Northeast Japan were conducted. For the analyses of these areas, hydrogeological models representing topography, geology and the distribution of hydraulic conductivity were developed using information available in the open literature. By use of these models, steady-state three-dimensional groundwater flow under a saturated/unsaturated condition was calculated by means of the finite element method. The results are as follows: (1) The distribution of piezometric head corresponds with topography in the study area. (2) The piezometric head distribution is hydrostatic below E.L. -1000 m in the study area. (3) The hydraulic gradient in the study area is less than 0.04 below E.L. -500 m. (4) The difference of boundary conditions at the shore side of these models does not affect the results of the analyses. (author)

  11. Spatial Analyses of Harappan Urban Settlements

    Directory of Open Access Journals (Sweden)

    Hirofumi Teramura

    2006-12-01

    Full Text Available The Harappan Civilization occupies a unique place among the early civilizations of the world, with its well-planned urban settlements, advanced handicrafts and technology, and religious and trade activities. Using a Geographical Information System (GIS), this study presents spatial analyses that locate urban settlements on a digital elevation model (DEM) according to the three phases of early, mature and late. Understanding the relationship between the spatial distribution of Harappan sites and changes in factors such as topographic features, river passages or sea level will lead to an understanding of the dynamism of this civilization. It will also afford a glimpse of the factors behind the formation, development, and decline of the Harappan Civilization.

  12. Two-year outcomes of the Early Risers prevention trial with formerly homeless families residing in supportive housing.

    Science.gov (United States)

    Gewirtz, Abigail H; DeGarmo, David S; Lee, Susanne; Morrell, Nicole; August, Gerald

    2015-04-01

    This article reports 2-year outcomes from a cluster randomized, controlled trial of the Early Risers (ER) program implemented as a selective preventive intervention in supportive housing settings for homeless families. Based on the goals of this comprehensive prevention program, we predicted that intervention participants receiving ER services would show improvement in parenting and child outcomes relative to families in treatment-as-usual sites. The sample included 270 children in 161 families, residing in 15 supportive housing sites; multimethod, multi-informant assessments conducted at baseline and yearly thereafter included parent and teacher report of child adjustment, parent report of parenting self-efficacy, and parent-child observations that yielded scores of effective parenting practices. Data were modeled in HLM7 (4-level model accounting for nesting of children within families and families within housing sites). Two years postbaseline, intent-to-treat (ITT) analyses indicated that parents in the ER group showed significantly improved parenting self-efficacy, and parent report indicated significant reductions in ER group children's depression. No main effects of ITT were shown for observed parenting effectiveness. However, over time, average levels of parenting self-efficacy predicted observed effective parenting practices, and observed effective parenting practices predicted improvements in both teacher- and parent-report of child adjustment. This is the first study to our knowledge to demonstrate prevention effects of a program for homeless families residing in family supportive housing. (c) 2015 APA, all rights reserved.
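    The motivation for a multilevel (HLM-style) model, outcome variance that sits between clusters such as families or housing sites rather than within them, can be illustrated with a one-way intraclass correlation. This method-of-moments sketch on invented data is not HLM7 itself:

```python
from statistics import mean

def icc_oneway(groups):
    """One-way random-effects ICC(1) for balanced groups: the share of
    outcome variance lying between clusters rather than within them.
    ICC = (MSB - MSW) / (MSB + (n - 1) * MSW), method of moments.
    """
    k = len(groups)
    n = len(groups[0])  # assumes equal group sizes
    grand = mean(x for g in groups for x in g)
    msb = n * sum((mean(g) - grand) ** 2 for g in groups) / (k - 1)
    msw = sum((x - mean(g)) ** 2 for g in groups for x in g) / (k * (n - 1))
    return (msb - msw) / (msb + (n - 1) * msw)

# Three invented clusters whose means differ markedly -> ICC near 1,
# i.e. a multilevel model is clearly warranted.
clustered = [[1.0, 1.2, 0.9], [5.0, 5.1, 4.8], [9.0, 9.2, 8.9]]
icc = icc_oneway(clustered)
```

    A high ICC signals that observations within a cluster are far more alike than observations across clusters, which is exactly the nesting structure the 4-level model above accounts for.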

  13. Structural analyses on piping systems of sodium reactors. 2. Eigenvalue analyses of hot-leg pipelines of large scale sodium reactors

    International Nuclear Information System (INIS)

    Furuhashi, Ichiro; Kasahara, Naoto

    2002-01-01

    Two types of finite element models were used to analyse eigenvalues of hot-leg pipelines of a large-scale sodium reactor. One is a beam element model, which is usual for pipe analyses. The other is a shell element model to evaluate particular modes in thin pipes with large diameters. Summary of analysis results: (1) A beam element model is sufficient to obtain the first-order natural frequency and vibration mode. (2) The maximum difference ratio of beam-mode natural frequencies was 14% between a beam element model with no shear deformations and a shell element model. However, the difference becomes very small when shear deformations are considered in the beam elements. (3) In the first-order horizontal mode, the Y-piece acts like a pendulum, and the elbow acts like a hinge. The natural frequency is strongly affected by the bending and shear rigidities of the outer supporting pipe. (4) In the first-order vertical mode, the vertical sections of the outer and inner pipes move in an axial-directional piston mode, the horizontal section of the inner pipe behaves like a cantilever, and the elbow acts like a hinge. The natural frequency is strongly affected by the axial rigidity of the outer supporting pipe. (5) Both effective masses and participation factors were small for particular shell modes. (author)
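    A hand-check of the kind of beam-mode natural frequency a beam-element model returns can be made with the uniform Euler-Bernoulli cantilever formula; the pipe properties below are hypothetical, not the reactor pipeline's:

```python
import math

def cantilever_f1(E, I, m_per_len, L):
    """First-mode natural frequency (Hz) of a uniform Euler-Bernoulli
    cantilever: f1 = (lambda1^2 / (2*pi)) * sqrt(E*I / (m*L^4)),
    with lambda1 = 1.8751 for the first mode.
    """
    lam = 1.875104
    return (lam ** 2 / (2.0 * math.pi)) * math.sqrt(E * I / (m_per_len * L ** 4))

# Hypothetical steel pipe: E = 200 GPa, I = 2e-4 m^4, 150 kg/m, 10 m span.
f1 = cantilever_f1(E=200e9, I=2e-4, m_per_len=150.0, L=10.0)  # a few Hz
```

    The formula also shows the L^-2 scaling of beam-mode frequencies: doubling the span quarters the first natural frequency.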

  14. Sensitivity analyses of factors influencing CMAQ performance for fine particulate nitrate.

    Science.gov (United States)

    Shimadera, Hikari; Hayami, Hiroshi; Chatani, Satoru; Morino, Yu; Mori, Yasuaki; Morikawa, Tazuko; Yamaji, Kazuyo; Ohara, Toshimasa

    2014-04-01

    Improvement of air quality models is required so that they can be utilized to design effective control strategies for fine particulate matter (PM2.5). The Community Multiscale Air Quality modeling system was applied to the Greater Tokyo Area of Japan in winter 2010 and summer 2011. The model results were compared with observed concentrations of PM2.5 sulfate (SO4(2-)), nitrate (NO3(-)) and ammonium, and gaseous nitric acid (HNO3) and ammonia (NH3). The model approximately reproduced PM2.5 SO4(2-) concentration, but clearly overestimated PM2.5 NO3(-) concentration, which was attributed to overestimation of production of ammonium nitrate (NH4NO3). This study conducted sensitivity analyses of factors associated with the model performance for PM2.5 NO3(-) concentration, including temperature and relative humidity, emission of nitrogen oxides, seasonal variation of NH3 emission, HNO3 and NH3 dry deposition velocities, and heterogeneous reaction probability of dinitrogen pentoxide. Change in NH3 emission directly affected NH3 concentration, and substantially affected NH4NO3 concentration. Higher dry deposition velocities of HNO3 and NH3 led to substantial reductions of concentrations of the gaseous species and NH4NO3. Because uncertainties in NH3 emission and dry deposition processes are probably large, these processes may be key factors for improvement of the model performance for PM2.5 NO3(-). The Community Multiscale Air Quality modeling system clearly overestimated the concentration of fine particulate nitrate in the Greater Tokyo Area of Japan, which was attributed to overestimation of production of ammonium nitrate. Sensitivity analyses were conducted for factors associated with the model performance for nitrate. Ammonia emission and dry deposition of nitric acid and ammonia may be key factors for improvement of the model performance.
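    The one-at-a-time perturbation runs described above can be sketched generically; the toy response function and parameter names below are illustrative stand-ins, not CMAQ inputs:

```python
def oat_sensitivity(model, base, frac=0.2):
    """One-at-a-time sensitivity: perturb each parameter by +/- frac and
    report the normalized change in the model response,
    S = (y(+frac) - y(-frac)) / (2 * frac * y0).
    """
    y0 = model(**base)
    out = {}
    for name, val in base.items():
        hi = model(**{**base, name: val * (1 + frac)})
        lo = model(**{**base, name: val * (1 - frac)})
        out[name] = (hi - lo) / (2 * frac * y0)
    return out

# Toy nitrate-production response: proportional to NH3 emission, damped by
# dry deposition velocity (both names are illustrative, not CMAQ variables).
def no3_response(nh3_emis, v_dep):
    return nh3_emis / (1.0 + v_dep)

sens = oat_sensitivity(no3_response, {"nh3_emis": 1.0, "v_dep": 0.5})
```

    For this toy response the emission sensitivity is exactly 1 (linear dependence) while the deposition sensitivity is negative, mirroring the qualitative findings above.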

  15. Integrated freight network model : a GIS-based platform for transportation analyses.

    Science.gov (United States)

    2015-01-01

    The models currently used to examine the behavior of transportation systems are usually mode-specific. That is, they focus on a single mode (i.e. railways, highways, or waterways). The lack of integration limits the usefulness of models to analyze the...

  16. A Conceptual Model for Analysing Management Development in the UK Hospitality Industry

    Science.gov (United States)

    Watson, Sandra

    2007-01-01

    This paper presents a conceptual, contingent model of management development (MD). It explains the nature of the UK hospitality industry and its potential influence on MD practices, prior to exploring dimensions and relationships in the model. The embryonic model is presented as one that can enhance our understanding of the complexities of the…

  17. Organophosphorothionate pesticides inhibit the bioactivation of imipramine by human hepatic cytochrome P450s

    International Nuclear Information System (INIS)

    Di Consiglio, Emma; Meneguz, Annarita; Testai, Emanuela

    2005-01-01

    The drug-toxicant interaction between the antidepressant imipramine (IMI) and three organophosphorothionate pesticides (OPTs), to which humans may be chronically and simultaneously exposed, has been investigated in vitro. Concentrations of IMI (2-400 μM) and OPTs (≤10 μM) representative of actual human exposure have been tested with recombinant human CYPs and human liver microsomes (HLM). The different CYPs involved in IMI demethylation to the pharmacologically active metabolite desipramine (DES) were CYP2C19 > CYP1A2 > CYP3A4. The OPTs significantly inhibited (up to >80%) IMI bioactivation catalyzed by the recombinant CYPs tested, except CYP2D6, and by HLM; the inhibition was dose-dependent and started at low pesticide concentrations (0.25-2.5 μM). The OPTs, having lower Km values, efficiently competed with IMI for the enzyme active site, as in the case of CYP2C19. However, with CYP1A2 and CYP3A4, a time- and NADPH-dependent mechanism-based inactivation also occurred, consistent with irreversible inhibition due to the release of the sulfur atom, which binds to the active CYP during OPT desulfuration. At low IMI and OPT concentrations, lower IC50 values have been obtained with recombinant CYP1A2 (0.7-1.1 μM) or with HLM rich in 1A2-related activity (2-10.8 μM). The Ki values (2-14 μM), independent of substrate concentrations, were quite low and similar for the three pesticides. Exposure to OPTs during IMI therapeutic treatments may lead to decreased DES formation, resulting in high plasma levels of the parent drug, possible impairment of its pharmacological action and possible onset of adverse drug reactions (ADRs)
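    For the competitive component of such inhibition, the Michaelis-Menten rate law and the Cheng-Prusoff relation linking IC50 and Ki (which explains why Ki, unlike IC50, is independent of substrate concentration) can be written directly; the kinetic constants below are invented for illustration:

```python
def competitive_rate(S, I, Vmax, Km, Ki):
    """Michaelis-Menten rate in the presence of a competitive inhibitor:
    v = Vmax*S / (Km*(1 + I/Ki) + S)."""
    return Vmax * S / (Km * (1.0 + I / Ki) + S)

def ki_from_ic50(ic50, S, Km):
    """Cheng-Prusoff relation for competitive inhibition:
    Ki = IC50 / (1 + S/Km)."""
    return ic50 / (1.0 + S / Km)

# Invented kinetic constants for illustration (uM, arbitrary rate units).
Vmax, Km, Ki, S = 1.0, 5.0, 2.0, 10.0
v0 = competitive_rate(S, 0.0, Vmax, Km, Ki)   # uninhibited rate
ic50 = Ki * (1.0 + S / Km)                    # inhibitor conc. that halves v0
v_half = competitive_rate(S, ic50, Vmax, Km, Ki)
```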

  18. Quantifying Shapes: Mathematical Techniques for Analysing Visual Representations of Sound and Music

    Directory of Open Access Journals (Sweden)

    Genevieve L. Noyce

    2013-12-01

    Full Text Available Research on auditory-visual correspondences has a long tradition but innovative experimental paradigms and analytic tools are sparse. In this study, we explore different ways of analysing real-time visual representations of sound and music drawn by both musically-trained and untrained individuals. To that end, participants' drawing responses captured by an electronic graphics tablet were analysed using various regression, clustering, and classification techniques. Results revealed that a Gaussian process (GP regression model with a linear plus squared-exponential covariance function was able to model the data sufficiently, whereas a simpler GP was not a good fit. Spectral clustering analysis was the best of a variety of clustering techniques, though no strong groupings are apparent in these data. This was confirmed by variational Bayes analysis, which only fitted one Gaussian over the dataset. Slight trends in the optimised hyperparameters between musically-trained and untrained individuals allowed for the building of a successful GP classifier that differentiated between these two groups. In conclusion, this set of techniques provides useful mathematical tools for analysing real-time visualisations of sound and can be applied to similar datasets as well.
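    A GP regression posterior mean with the linear plus squared-exponential covariance named above can be sketched in pure Python; the hyperparameters and data here are illustrative, not the study's optimized values:

```python
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting (small systems only)."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def kern(a, b, w_lin=1.0, w_se=1.0, length=0.3):
    """Linear plus squared-exponential covariance, as in the study;
    hyperparameter values are illustrative."""
    return w_lin * a * b + w_se * math.exp(-((a - b) ** 2) / (2 * length ** 2))

def gp_posterior_mean(xs, ys, x_test, noise=1e-4):
    """GP posterior mean: K_* @ (K + noise*I)^-1 @ y."""
    K = [[kern(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    alpha = solve(K, list(ys))
    return [sum(kern(xq, a) * w for a, w in zip(xs, alpha)) for xq in x_test]

# Smooth toy target with a dominant linear component.
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [2.0 * x + 0.1 * math.sin(4.0 * x) for x in xs]
pred = gp_posterior_mean(xs, ys, xs)  # near-interpolation at training points
```

    With small observation noise the posterior mean nearly interpolates the training points, which is the "sufficient fit" behaviour the study reports for this covariance.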

  19. The inhibitory activity of the extracts of popular medicinal herbs on ...

    African Journals Online (AJOL)

    One of the major clinical risks of such concomitant herb-drug use is pharmacokinetic herb-drug interaction (HDI). ... (HLM) to monitor the phenacetin O-deethylation, diclofenac 4'-hydroxylation, S-mephenytoin 4'-hydroxylation and testosterone 6 β-hydroxylation as respective probe reactions for CYP1A2, CYP2C9, CYP2C19 ...

  20. Uncertainty analyses of the calibrated parameter values of a water quality model

    Science.gov (United States)

    Rode, M.; Suhr, U.; Lindenschmidt, K.-E.

    2003-04-01

    For river basin management, water quality models are increasingly used for the analysis and evaluation of different management measures. However, substantial uncertainties exist in parameter values depending on the available calibration data. In this paper an uncertainty analysis for a water quality model is presented, which considers the impact of the available model calibration data and the variance of input variables. The investigation was conducted based on four extensive flow-time-related longitudinal surveys in the River Elbe in the years 1996 to 1999, with varying discharges and seasonal conditions. For the model calculations the deterministic model QSIM of the BfG (Germany) was used. QSIM is a one-dimensional water quality model and uses standard algorithms for hydrodynamics and phytoplankton dynamics in running waters, e.g. Michaelis-Menten/Monod kinetics, which are used in a wide range of models. The multi-objective calibration of the model was carried out with the nonlinear parameter estimator PEST. The results show that for individual flow-time-related measuring surveys very good agreement between model calculation and measured values can be obtained. If these parameters are applied to deviating boundary conditions, substantial errors in model calculation can occur. These uncertainties can be decreased with an enlarged calibration database. More reliable model parameters can be identified, which supply reasonable results for broader boundary conditions. Extending the application of the parameter set to a wider range of water quality conditions leads to a slight reduction of the model precision for the specific water quality situation. Moreover, the investigations show that highly variable water quality variables like algal biomass always allow a smaller forecast accuracy than variables with lower coefficients of variation, such as nitrate.
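    The calibration step, fitting model parameters to one survey and noting that they may not transfer to other boundary conditions, can be miniaturized with a first-order decay model and brute-force least squares; this is a toy stand-in for the PEST/QSIM workflow, with synthetic data:

```python
import math

def calibrate_decay(times, obs, k_grid):
    """Calibrate the decay rate k of a first-order water-quality model
    C(t) = C0 * exp(-k*t) by brute-force least squares over k_grid.
    A toy stand-in for nonlinear parameter estimation with PEST.
    """
    c0 = obs[0]
    def sse(k):
        return sum((c0 * math.exp(-k * t) - c) ** 2 for t, c in zip(times, obs))
    return min(k_grid, key=sse)

# Synthetic survey generated with k = 0.30 1/d plus a little noise.
times = [0.0, 1.0, 2.0, 3.0, 4.0]
obs = [10.0, 7.46, 5.46, 4.12, 3.05]
k_hat = calibrate_decay(times, obs, [i / 100 for i in range(1, 101)])
```

    Recalibrating against a second survey with different conditions would generally return a different k, which is the parameter uncertainty the paper quantifies.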

  1. 3D analyses of cavitation instabilities accounting for plastic anisotropy

    DEFF Research Database (Denmark)

    Legarth, Brian Nyvang; Tvergaard, Viggo

    2010-01-01

    Full three dimensional cell model analyses are carried out for a solid containing a single small void, in order to determine the critical stress levels for the occurrence of cavitation instabilities. The material models applied are elastic‐viscoplastic, with a small rate‐hardening exponent...... that the quasi‐static solution is well approximated. A special procedure is used to strongly reduce the loading rate a little before the instability occurs. It is found that plastic anisotropy has a significant effect on the level of the critical stress for cavitation instabilities....

  2. Reliability Analyses of Groundwater Pollutant Transport

    Energy Technology Data Exchange (ETDEWEB)

    Dimakis, Panagiotis

    1997-12-31

    This thesis develops a probabilistic finite element model for the analysis of groundwater pollution problems. Two computer codes were developed: (1) one using the finite element technique to solve the two-dimensional steady-state equations of groundwater flow and pollution transport, and (2) a first-order reliability method code that can perform a probabilistic analysis of any given analytical or numerical equation. The two codes were connected into one model, PAGAP (Probability Analysis of Groundwater And Pollution). PAGAP can be used to obtain (1) the probability that the concentration at a given point at a given time will exceed a specified value, (2) the probability that the maximum concentration at a given point will exceed a specified value and (3) the probability that the residence time at a given point will exceed a specified period. PAGAP can be used as a tool for assessment purposes and risk analyses, for instance assessing the efficiency of a proposed remediation technique or studying the effects of parameter distributions for a given problem (sensitivity study). The model has been applied to study the largest self-sustained, precipitation-controlled aquifer in Northern Europe, which underlies Oslo's new major airport. 92 refs., 187 figs., 26 tabs.
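    The first-order reliability computation at the heart of such a code reduces, for a linear limit state with independent normal variables, to a closed form; the sketch below is the textbook FORM special case, not PAGAP itself, and the concentration figures are invented:

```python
import math

def form_failure_probability(mu_r, sig_r, mu_s, sig_s):
    """First-order reliability for the linear limit state g = R - S with
    independent normal R (limit/resistance) and S (load/concentration):
    beta = (mu_R - mu_S) / sqrt(sig_R^2 + sig_S^2),  Pf = Phi(-beta).
    """
    beta = (mu_r - mu_s) / math.sqrt(sig_r ** 2 + sig_s ** 2)
    pf = 0.5 * math.erfc(beta / math.sqrt(2.0))  # standard normal Phi(-beta)
    return beta, pf

# Probability that a concentration N(8, 2) exceeds a limit N(10, 1) (mg/L).
beta, pf = form_failure_probability(mu_r=10.0, sig_r=1.0, mu_s=8.0, sig_s=2.0)
```

    For nonlinear limit states FORM instead iterates to the most probable failure point, but the reliability index beta plays the same role.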

  3. A Web-based Tool Combining Different Type Analyses

    DEFF Research Database (Denmark)

    Henriksen, Kim Steen; Gallagher, John Patrick

    2006-01-01

    of both, and they can be goal-dependent or goal-independent. We describe a prototype tool that can be accessed from a web browser, allowing various type analyses to be run. The first goal of the tool is to allow the analysis results to be examined conveniently by clicking on points in the original program...... the minimal "domain model" of the program with respect to the corresponding pre-interpretation, which can give more precise information than the original descriptive type....

  4. Design by theoretical and CFD analyses of a multi-blade screw pump evolving liquid lead for a Generation IV LFR

    Energy Technology Data Exchange (ETDEWEB)

    Ferrini, Marcello [GeNERG - DIME/TEC, University of Genova, via all’Opera Pia 15/a, 16145 Genova (Italy); Borreani, Walter [Ansaldo Nucleare S.p.A., Corso F.M. Perrone 25, 16152 Genova (Italy); INFN, Via Dodecaneso 33, 16146 Genova (Italy); Lomonaco, Guglielmo, E-mail: guglielmo.lomonaco@unige.it [GeNERG - DIME/TEC, University of Genova, via all’Opera Pia 15/a, 16145 Genova (Italy); INFN, Via Dodecaneso 33, 16146 Genova (Italy); Magugliani, Fabrizio [Ansaldo Nucleare S.p.A., Corso F.M. Perrone 25, 16152 Genova (Italy)

    2016-02-15

    Lead-cooled fast reactor (LFR) has both a long history and a penchant for innovation. With early work related to its use for submarine propulsion dating to the 1950s, Russian scientists pioneered the development of reactors cooled by heavy liquid metals (HLM). More recently, there has been substantial interest in both critical and subcritical reactors cooled by lead (Pb) or lead–bismuth eutectic (LBE), not only in Russia, but also in Europe, Asia, and the USA. The growing knowledge of the thermal-fluid-dynamic properties of these fluids and the choice of the LFR as one of the six reactor types selected by the Generation IV International Forum (GIF) for further research and development have fostered the exploration of new geometries and new concepts aimed at optimizing the key components that will be adopted in the Advanced Lead Fast Reactor European Demonstrator (ALFRED), the 300 MWt pool-type reactor aimed at proving the feasibility of the design concept adopted for the European Lead-cooled Fast Reactor (ELFR). In this paper, a theoretical and computational analysis is presented of a multi-blade screw pump evolving liquid Lead as primary pump for the adopted reference conceptual design of ALFRED. The pump is first analyzed at design operating conditions from the theoretical point of view to determine the optimal geometry according to the velocity triangles, and then modeled with a 3D CFD code (ANSYS CFX). The choice of a 3D simulation is dictated by the need to perform a detailed spatial simulation taking into account the peculiar geometry of the pump as well as the boundary layer and turbulence effects of the flow, which are typically three-dimensional. The use of liquid Lead significantly impacts the fluid-dynamic design of the pump because of the key requirement to avoid any erosion effects. These effects have a major impact on the performance, reliability and lifespan of the pump. Albeit some erosion-related issues remain to be fully addressed, the results

  5. Design by theoretical and CFD analyses of a multi-blade screw pump evolving liquid lead for a Generation IV LFR

    International Nuclear Information System (INIS)

    Ferrini, Marcello; Borreani, Walter; Lomonaco, Guglielmo; Magugliani, Fabrizio

    2016-01-01

    Lead-cooled fast reactor (LFR) has both a long history and a penchant for innovation. With early work related to its use for submarine propulsion dating to the 1950s, Russian scientists pioneered the development of reactors cooled by heavy liquid metals (HLM). More recently, there has been substantial interest in both critical and subcritical reactors cooled by lead (Pb) or lead–bismuth eutectic (LBE), not only in Russia, but also in Europe, Asia, and the USA. The growing knowledge of the thermal-fluid-dynamic properties of these fluids and the choice of the LFR as one of the six reactor types selected by the Generation IV International Forum (GIF) for further research and development have fostered the exploration of new geometries and new concepts aimed at optimizing the key components that will be adopted in the Advanced Lead Fast Reactor European Demonstrator (ALFRED), the 300 MWt pool-type reactor aimed at proving the feasibility of the design concept adopted for the European Lead-cooled Fast Reactor (ELFR). In this paper, a theoretical and computational analysis is presented of a multi-blade screw pump evolving liquid Lead as primary pump for the adopted reference conceptual design of ALFRED. The pump is first analyzed at design operating conditions from the theoretical point of view to determine the optimal geometry according to the velocity triangles, and then modeled with a 3D CFD code (ANSYS CFX). The choice of a 3D simulation is dictated by the need to perform a detailed spatial simulation taking into account the peculiar geometry of the pump as well as the boundary layer and turbulence effects of the flow, which are typically three-dimensional. The use of liquid Lead significantly impacts the fluid-dynamic design of the pump because of the key requirement to avoid any erosion effects. These effects have a major impact on the performance, reliability and lifespan of the pump. Albeit some erosion-related issues remain to be fully addressed, the results of

  6. Analyses of transient plant response under emergency situations

    Energy Technology Data Exchange (ETDEWEB)

    Koyama, Kazuya [Advanced Reactor Technology, Co. Ltd., Engineering Department, Tokyo (Japan); Shimakawa, Yoshio; Hishida, Masahiko [Mitsubishi Heavy Industry, Ltd., Reactor Core Engineering and Safety Engineering Department, Tokyo (Japan)

    1999-03-01

    In order to support development of the dynamic reliability analysis program DYANA, analyses were made on the event sequences anticipated under emergency situations using the plant dynamics simulation computer code Super-COPD. The analytical models were developed for Super-COPD such as the guard vessel, the maintenance cooling system, the sodium overflow and makeup system, etc. in order to apply the code to the simulation of the emergency situations. The input data were prepared for the analyses. About 70 sequences were analyzed, which are categorized into the following events: (1) PLOHS (Protected Loss of Heat Sink), (2) LORL (Loss of Reactor Level)-J: failure of sodium makeup by the primary sodium overflow and makeup system, (3) LORL-G : failure of primary coolant pump trip, (4) LORL-I: failure of the argon cover gas isolation, and (5) heat removal only using the ventilation system of the primary cooling system rooms. The results were integrated into an input file for preparing the functions for the neural network simulation. (author)

  7. Analysing stratified medicine business models and value systems: innovation-regulation interactions.

    Science.gov (United States)

    Mittra, James; Tait, Joyce

    2012-09-15

    Stratified medicine offers both opportunities and challenges to the conventional business models that drive pharmaceutical R&D. Given the increasingly unsustainable blockbuster model of drug development, due in part to maturing product pipelines, alongside increasing demands from regulators, healthcare providers and patients for higher standards of safety, efficacy and cost-effectiveness of new therapies, stratified medicine promises a range of benefits to pharmaceutical and diagnostic firms as well as healthcare providers and patients. However, the transition from 'blockbusters' to what might now be termed 'niche-busters' will require the adoption of new, innovative business models, the identification of different and perhaps novel types of value along the R&D pathway, and a smarter approach to regulation to facilitate innovation in this area. In this paper we apply the Innogen Centre's interdisciplinary ALSIS methodology, which we have developed for the analysis of life science innovation systems in contexts where the value creation process is lengthy, expensive and highly uncertain, to this emerging field of stratified medicine. In doing so, we consider the complex collaboration, timing, coordination and regulatory interactions that shape business models, value chains and value systems relevant to stratified medicine. More specifically, we explore in some depth two convergence models for co-development of a therapy and diagnostic before market authorisation, highlighting the regulatory requirements and policy initiatives within the broader value system environment that have a key role in determining the probable success and sustainability of these models. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Economical analyses of build-operate-transfer model in establishing alternative power plants

    Energy Technology Data Exchange (ETDEWEB)

    Yumurtaci, Zehra [Yildiz Technical University, Department of Mechanical Engineering, Y.T.U. Mak. Fak. Mak. Muh. Bolumu, Besiktas, 34349 Istanbul (Turkey)]. E-mail: zyumur@yildiz.edu.tr; Erdem, Hasan Hueseyin [Yildiz Technical University, Department of Mechanical Engineering, Y.T.U. Mak. Fak. Mak. Muh. Bolumu, Besiktas, 34349 Istanbul (Turkey)

    2007-01-15

    The most widely employed method of meeting increasing electricity demand is building new power plants, and the most important issue in building them is securing financing. Various models are employed, especially in developing countries, to overcome this problem; one of these is the build-operate-transfer (BOT) model. In this model, the investor raises all the funds for mandatory expenses and provides financing, builds the plant and, after a certain plant operation period, transfers the plant to the national power organization. The objective is to decrease the burden of power plants on the state budget. The most important issue in the BOT model is the dependence of the unit electricity cost on the transfer period. In this study, a model giving the unit electricity cost as a function of the transfer period of plants established under the BOT scheme is discussed. Unit electricity investment cost and unit electricity cost in relation to transfer period have been determined for several plant types. Furthermore, the change in unit electricity cost with load factor, one of the parameters affecting annual electricity production, has been determined, and the results have been analyzed. The method can be employed to compare the production costs of different plants planned under the BOT model, or to determine the appropriateness of the BOT model itself.
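
    The dependence of unit electricity cost on the transfer period described above can be sketched with a simple levelized-cost calculation. All figures below (investment, O&M cost, capacity, load factor, discount rate) are illustrative assumptions, not values from the paper:

```python
def unit_electricity_cost(investment, om_cost_per_year, capacity_mw,
                          load_factor, transfer_years, discount_rate):
    """Levelized unit cost (currency/kWh) over the BOT transfer period.

    The investor must recover the full investment plus O&M from the
    electricity sold before the transfer, so a shorter transfer period
    or a lower load factor raises the unit cost.
    """
    r = discount_rate
    # Capital recovery factor spreads the investment over the period.
    crf = r * (1 + r) ** transfer_years / ((1 + r) ** transfer_years - 1)
    annual_cost = investment * crf + om_cost_per_year
    annual_kwh = capacity_mw * 1000 * 8760 * load_factor
    return annual_cost / annual_kwh

# Illustrative: a 500 MW plant, 600M investment, 20M/yr O&M, 8% discount.
for years in (10, 15, 20):
    cost = unit_electricity_cost(600e6, 20e6, 500, 0.80, years, 0.08)
    print(f"transfer period {years} yr: {cost:.4f} per kWh")
```

    In this sketch the unit cost falls monotonically as the transfer period lengthens and rises as the load factor drops, which is the trade-off the BOT negotiation revolves around.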

  9. Economical analyses of build-operate-transfer model in establishing alternative power plants

    International Nuclear Information System (INIS)

    Yumurtaci, Zehra; Erdem, Hasan Hueseyin

    2007-01-01

    The most widely employed method of meeting increasing electricity demand is building new power plants, and the most important issue in building them is securing financing. Various models are employed, especially in developing countries, to overcome this problem; one of these is the build-operate-transfer (BOT) model. In this model, the investor raises all the funds for mandatory expenses and provides financing, builds the plant and, after a certain plant operation period, transfers the plant to the national power organization. The objective is to decrease the burden of power plants on the state budget. The most important issue in the BOT model is the dependence of the unit electricity cost on the transfer period. In this study, a model giving the unit electricity cost as a function of the transfer period of plants established under the BOT scheme is discussed. Unit electricity investment cost and unit electricity cost in relation to transfer period have been determined for several plant types. Furthermore, the change in unit electricity cost with load factor, one of the parameters affecting annual electricity production, has been determined, and the results have been analyzed. The method can be employed to compare the production costs of different plants planned under the BOT model, or to determine the appropriateness of the BOT model itself.

  10. Analyses of Current And Wave Forces on Velocity Caps

    DEFF Research Database (Denmark)

    Christensen, Erik Damgaard; Buhrkall, Jeppe; Eskesen, Mark C. D.

    2015-01-01

    Velocity caps are often used in connection with offshore intakes of sea water, for instance for cooling water for power plants or as a source for desalination plants; the design can also be used for river intakes. The velocity cap is placed on top of a vertical pipe. The vertical pipe......) this paper investigates the current and wave forces on the velocity cap and the vertical cylinder. The Morison's force model was used in the analyses of the force time series extracted from the CFD model. Further, the distribution of the inlet velocities around the velocity cap was also analyzed in detail...
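
    The Morison force model named in this record decomposes the in-line wave-and-current force on a slender cylinder into a drag term and an inertia term. A minimal sketch follows, with coefficients and wave parameters that are illustrative assumptions rather than the study's calibrated values:

```python
import numpy as np

def morison_force(u, dudt, diameter, rho=1025.0, cd=1.0, cm=2.0):
    """In-line force per unit length on a vertical cylinder (N/m):
    F = 0.5*rho*Cd*D*u*|u| + rho*Cm*(pi*D^2/4)*du/dt
    """
    drag = 0.5 * rho * cd * diameter * u * np.abs(u)
    inertia = rho * cm * np.pi * diameter ** 2 / 4.0 * dudt
    return drag + inertia

# Illustrative regular-wave velocity time series at the cylinder axis.
t = np.linspace(0.0, 10.0, 1000)
omega = 2 * np.pi / 8.0            # 8 s wave period
u = 1.5 * np.cos(omega * t)        # velocity amplitude 1.5 m/s
dudt = -1.5 * omega * np.sin(omega * t)
force = morison_force(u, dudt, diameter=1.0)
print(f"peak in-line force: {force.max():.1f} N/m")
```

    Fitting Cd and Cm to a force time series extracted from a CFD run, for example by least squares against the recorded u and du/dt, is the usual way such coefficients are obtained.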

  11. Imitation of phase I oxidative metabolism of anabolic steroids by titanium dioxide photocatalysis.

    Science.gov (United States)

    Ruokolainen, Miina; Valkonen, Minna; Sikanen, Tiina; Kotiaho, Tapio; Kostiainen, Risto

    2014-12-18

    The aim of this study was to investigate the feasibility of titanium dioxide (TiO2) photocatalysis for oxidation of anabolic steroids and for imitation of their phase I metabolism. The photocatalytic reaction products of five anabolic steroids were compared to their phase I in vitro metabolites produced by human liver microsomes (HLM). The same main reaction types (hydroxylation, dehydrogenation, and a combination of the two) were observed both in TiO2 photocatalysis and in microsomal incubations, and several isomers of each product type were formed in both systems. Based on the same mass, retention time and similarity of the product ion spectra, many of the products observed in HLM reactions were also formed in TiO2 photocatalytic reactions. However, some products were characteristic of only one of the two systems. In conclusion, TiO2 photocatalysis is a rapid, simple and inexpensive method for imitation of phase I metabolism of anabolic steroids and production of metabolite standards. Copyright © 2014 Elsevier B.V. All rights reserved.
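
    The matching criterion quoted above ("the same mass, retention time and similarity of the product ion spectra") can be sketched as a tolerance comparison between peak lists. The m/z values, retention times and tolerances below are illustrative assumptions, not data from the paper:

```python
def match_products(hlm_peaks, tio2_peaks, mz_tol=0.01, rt_tol=0.2):
    """Pair HLM metabolite peaks with TiO2 photocatalysis product peaks.

    Each peak is an (m/z, retention time in minutes) tuple; two peaks
    match when both differences fall within the given tolerances.
    """
    matches = []
    for mz_h, rt_h in hlm_peaks:
        for mz_t, rt_t in tio2_peaks:
            if abs(mz_h - mz_t) <= mz_tol and abs(rt_h - rt_t) <= rt_tol:
                matches.append(((mz_h, rt_h), (mz_t, rt_t)))
    return matches

# Illustrative peaks: hydroxylation adds 15.9949 Da and dehydrogenation
# removes 2.0157 Da relative to a hypothetical steroid [M+H]+ ion.
hlm = [(305.2111, 6.8), (287.2005, 9.1), (303.1955, 5.4)]
tio2 = [(305.2115, 6.9), (287.2001, 9.0), (321.2060, 4.2)]
print(match_products(hlm, tio2))
```

    A real workflow would additionally score product ion spectrum similarity (e.g. a dot-product score) before declaring two products identical.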

  12. Optical region elemental abundance analyses of B and A stars

    International Nuclear Information System (INIS)

    Adelman, S.J.; Young, J.M.; Baldwin, H.E.

    1984-01-01

    Abundance analyses using optical region data and fully line blanketed model atmospheres have been performed for two sharp-lined hot Am stars o Pegasi and σ Aquarii and for the sharp-lined marginally peculiar A star v Cancri. The derived abundances exhibit definite anomalies compared with those of normal B-type stars and the Sun. (author)

  13. Applications of neural network to numerical analyses

    International Nuclear Information System (INIS)

    Takeda, Tatsuoki; Fukuhara, Makoto; Ma, Xiao-Feng; Liaqat, Ali

    1999-01-01

    Applications of a multi-layer neural network to numerical analyses are described. We are mainly concerned with computed tomography and the solution of differential equations. In both cases, the residuals of the integral or differential equations were employed as the objective functions for training the neural network. This differs from conventional neural network training, where the sum of the squared errors of the output values is adopted as the objective function. For model problems both methods gave satisfactory results, and they are considered promising for certain classes of problems. (author)
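
    The key idea above, minimizing the residual of the equation itself rather than errors against known output values, can be illustrated with a deliberately simple trial function. This toy example is mine (a cubic polynomial instead of the authors' multi-layer network), applied to the ODE y' = -y with y(0) = 1:

```python
import numpy as np

# Trial solution y(t) = 1 + c1*t + c2*t^2 + c3*t^3 satisfies y(0) = 1
# exactly, so only the ODE residual R(t) = y'(t) + y(t) needs minimizing.
t = np.linspace(0.0, 1.0, 50)          # collocation points

# The residual is linear in the coefficients, so minimizing its squared
# sum over the collocation points is a linear least-squares problem.
A = np.column_stack([
    1.0 + t,              # d/dt(t)   + t
    2 * t + t ** 2,       # d/dt(t^2) + t^2
    3 * t ** 2 + t ** 3,  # d/dt(t^3) + t^3
])
b = -np.ones_like(t)      # constant part of the residual moved to the RHS
c = np.linalg.lstsq(A, b, rcond=None)[0]

y = 1 + c[0] * t + c[1] * t ** 2 + c[2] * t ** 3
max_err = np.max(np.abs(y - np.exp(-t)))
print(f"max error vs exact exp(-t): {max_err:.2e}")
```

    With a neural network as the trial function the residual objective is no longer linear in the parameters and must be minimized by gradient descent, but the principle is the same: no solution values are needed for training, only the equation.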

  14. Treatment of visceral leishmaniasis: model-based analyses on the spread of antimony-resistant L. donovani in Bihar, India.

    Directory of Open Access Journals (Sweden)

    Anette Stauch

    BACKGROUND: Pentavalent antimonials have been the mainstay of antileishmanial therapy for decades, but increasing failure rates under antimonial treatment have challenged further use of these drugs in the Indian subcontinent. Experimental evidence has suggested that parasites which are resistant against antimonials survive better than sensitive ones even in the absence of antimonial treatment. METHODS AND FINDINGS: We use simulation studies based on a mathematical L. donovani transmission model to identify parameters which can explain why treatment failure rates under antimonial treatment increased up to 65% in Bihar between 1980 and 1997. Model analyses suggest that resistance to treatment alone cannot explain the observed treatment failure rates. We explore two hypotheses referring to an increased fitness of antimony-resistant parasites: the additional fitness is (i) disease-related, by causing more clinical cases (higher pathogenicity) or more severe disease (higher virulence), or (ii) transmission-related, by increasing the transmissibility from sand flies to humans or vice versa. CONCLUSIONS: Both hypotheses can potentially explain the Bihar observations. However, increased transmissibility appears the more plausible explanation because it can occur in the background of asymptomatically transmitted infection, whereas disease-related factors would most probably be observable. Irrespective of the cause of fitness, parasites with a higher fitness will finally replace sensitive parasites, even if antimonials are replaced by another drug.
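
    The replacement dynamic in the conclusions, fitter parasites eventually displacing sensitive ones whatever drug is used, can be sketched with a minimal two-strain competition model. This is an illustrative simplification (an SIS-type model with made-up rates), not the authors' L. donovani transmission model:

```python
def simulate_replacement(beta_s=2.0, beta_r=2.4, gamma=1.0,
                         years=40, dt=0.01):
    """Two-strain SIS competition: prevalences of infection with the
    sensitive (i_s) and resistant (i_r) strain.  The resistant strain
    has a higher transmission rate (its fitness advantage) but the same
    recovery rate, so it gradually displaces the sensitive strain.
    """
    i_s, i_r = 0.10, 0.001                     # initial prevalences
    for _ in range(int(years / dt)):           # forward Euler steps
        s = max(0.0, 1.0 - i_s - i_r)          # susceptible fraction
        i_s += dt * (beta_s * s * i_s - gamma * i_s)
        i_r += dt * (beta_r * s * i_r - gamma * i_r)
    return i_s, i_r

i_s, i_r = simulate_replacement()
print(f"after 40 years: sensitive {i_s:.3f}, resistant {i_r:.3f}")
```

    Because the resistant strain sustains itself at a lower susceptible fraction than the sensitive one needs, competitive exclusion follows regardless of which drug is deployed, mirroring the paper's conclusion.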

  15. Round-robin pretest analyses of a 1:6-scale reinforced concrete containment model subject to static internal pressurization

    International Nuclear Information System (INIS)

    Clauss, D.B.

    1987-05-01

    Analyses of a 1:6-scale reinforced concrete containment model that will be tested to failure at Sandia National Laboratories in the spring of 1987 were conducted by the following organizations in the United States and Europe: Sandia National Laboratories (USA), Argonne National Laboratory (USA), Electric Power Research Institute (USA), Commissariat a L'Energie Atomique (France), HM Nuclear Installations Inspectorate (UK), Comitato Nazionale per la ricerca e per lo sviluppo dell'Energia Nucleare e delle Energie Alternative (Italy), UK Atomic Energy Authority, Safety and Reliability Directorate (UK), Gesellschaft fuer Reaktorsicherheit (FRG), Brookhaven National Laboratory (USA), and Central Electricity Generating Board (UK). Each organization was supplied with a standard information package, which included construction drawings and actual material properties for most of the materials used in the model. Each organization worked independently using their own analytical methods. This report includes descriptions of the various analytical approaches and pretest predictions submitted by each organization. Significant milestones that occur with increasing pressure, such as damage to the concrete (cracking and crushing) and yielding of the steel components, and the failure pressure (capacity) and failure mechanism are described. Analytical predictions for pressure histories of strain in the liner and rebar and displacements are compared at locations where experimental results will be available after the test. Thus, these predictions can be compared to one another and to experimental results after the test.

  16. Persistent Monitoring of Urban Infrasound Phenomenology. Report 1: Modeling an Urban Environment for Acoustical Analyses using the 3-D Finite-Difference Time-Domain Program PSTOP3D

    Science.gov (United States)

    2015-08-01

    ERDC TR-15-5, August 2015. Remote Assessment of Critical Infrastructure: Persistent Monitoring of Urban Infrasound Phenomenology, Report 1: Modeling an Urban Environment for Acoustical Analyses using the 3-D Finite-Difference Time-Domain Program PSTOP3D. (Only front-matter and figure-list fragments of this record survive, e.g. Figure 5.1, "Main spreadsheet containing problem setup.")

  17. Linking material and energy flow analyses and social theory

    Energy Technology Data Exchange (ETDEWEB)

    Schiller, Frank [The Open University, Faculty of Maths, Computing and Technology, Walton Hall, Milton Keynes, MK7 6AA (United Kingdom)

    2009-04-15

    The paper explores the potential of Habermas' theory of communicative action to alter the social reflexivity of material and energy flow analysis. With his social macro theory, Habermas has provided an alternative, critical justification for social theory that can be distinguished from economic libertarianism and from political liberalism. Implicitly, most flow approaches draw from these theoretical traditions rather than from discourse theory. There are several types of material and energy flow analyses. While these concepts basically share a system-theoretical view, they lack a specific interdisciplinary perspective that ties the fundamental insight of flows to disciplinary scientific development. Instead of simply expanding micro-models to the social macro-dimension, social theory suggests infusing the very notion of flows into the progress of disciplines. With regard to the functional integration of society, material and energy flow analyses can rely on the paradigm of ecological economics and at the same time progress the debate between strong and weak sustainability within the paradigm. However, placing economics at the centre of their functional analyses may still ignore the broader social integration of society, depending on their pre-analytic outline of research and the methods used. (author)

  18. Linking material and energy flow analyses and social theory

    International Nuclear Information System (INIS)

    Schiller, Frank

    2009-01-01

    The paper explores the potential of Habermas' theory of communicative action to alter the social reflexivity of material and energy flow analysis. With his social macro theory, Habermas has provided an alternative, critical justification for social theory that can be distinguished from economic libertarianism and from political liberalism. Implicitly, most flow approaches draw from these theoretical traditions rather than from discourse theory. There are several types of material and energy flow analyses. While these concepts basically share a system-theoretical view, they lack a specific interdisciplinary perspective that ties the fundamental insight of flows to disciplinary scientific development. Instead of simply expanding micro-models to the social macro-dimension, social theory suggests infusing the very notion of flows into the progress of disciplines. With regard to the functional integration of society, material and energy flow analyses can rely on the paradigm of ecological economics and at the same time progress the debate between strong and weak sustainability within the paradigm. However, placing economics at the centre of their functional analyses may still ignore the broader social integration of society, depending on their pre-analytic outline of research and the methods used. (author)

  19. Water flow experiments and analyses on the cross-flow type mercury target model with the flow guide plates

    CERN Document Server

    Haga, K; Kaminaga, M; Hino, R

    2001-01-01

    A mercury target is used in the spallation neutron source driven by a high-intensity proton accelerator. In this study, the effectiveness of the cross-flow type mercury target structure was evaluated experimentally and analytically. Prior to the experiment, the mercury flow field and the temperature distribution in the target container were analyzed assuming a proton beam energy and power of 1.5 GeV and 5 MW, respectively, and the feasibility of the cross-flow type target was evaluated. Then the average water flow velocity field in the target mock-up model, which was fabricated from Plexiglass for a water experiment, was measured at room temperature using the PIV technique. Water flow analyses were conducted and the analytical results were compared with the experimental results. The experimental results showed that the cross-flow could be realized in most of the proton beam path area and the analytical result of the water flow velocity field showed good correspondence to the experimental results in the case w...

  20. Scenario sensitivity analyses performed on the PRESTO-EPA LLW risk assessment models

    International Nuclear Information System (INIS)

    Bandrowski, M.S.

    1988-01-01

    The US Environmental Protection Agency (EPA) is currently developing standards for the land disposal of low-level radioactive waste. As part of the standard development, EPA has performed risk assessments using the PRESTO-EPA codes. A program of sensitivity analysis was conducted on the PRESTO-EPA codes, consisting of single parameter sensitivity analysis and scenario sensitivity analysis. The results of the single parameter sensitivity analysis were discussed at the 1987 DOE LLW Management Conference. Specific scenario sensitivity analyses have been completed and evaluated. Scenario assumptions that were analyzed include: site location, disposal method, form of waste, waste volume, analysis time horizon, critical radionuclides, use of buffer zones, and global health effects.
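
    The single-parameter (one-at-a-time) sensitivity analysis mentioned above can be sketched generically. The toy dose model and parameter spans below are illustrative stand-ins, not the PRESTO-EPA codes or their actual parameters:

```python
import math

def one_at_a_time_sensitivity(model, baseline, spans):
    """Vary each parameter over its span with the others held at
    baseline, and report the output swing relative to the baseline."""
    base_out = model(**baseline)
    results = {}
    for name, (low, high) in spans.items():
        outs = [model(**dict(baseline, **{name: v})) for v in (low, high)]
        results[name] = (max(outs) - min(outs)) / base_out
    return results

# Illustrative stand-in for a disposal-site dose model: the dose decays
# over the analysis time horizon and falls off with buffer-zone width.
def toy_dose(source_ci, buffer_m, horizon_yr, half_life_yr):
    decay = math.exp(-math.log(2) * horizon_yr / half_life_yr)
    return source_ci * decay / (1.0 + buffer_m / 100.0)

baseline = dict(source_ci=1000.0, buffer_m=100.0,
                horizon_yr=1000.0, half_life_yr=5730.0)
spans = {"buffer_m": (0.0, 500.0), "horizon_yr": (100.0, 10000.0)}
print(one_at_a_time_sensitivity(toy_dose, baseline, spans))
```

    Scenario sensitivity analysis differs in that whole coherent sets of assumptions (site location, disposal method, waste form, and so on) are swapped at once rather than one parameter at a time.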