WorldWideScience

Sample records for model post-hoc analyses

  1. Post Hoc Analyses of Anxiety Measures in Adult Patients With Generalized Anxiety Disorder Treated With Vilazodone.

    Science.gov (United States)

    Khan, Arif; Durgam, Suresh; Tang, Xiongwen; Ruth, Adam; Mathews, Maju; Gommoll, Carl P

    2016-01-01

    To investigate vilazodone, currently approved for major depressive disorder in adults, for generalized anxiety disorder (GAD). Three randomized, double-blind, placebo-controlled studies showing positive results for vilazodone (20-40 mg/d) in adult patients with GAD (DSM-IV-TR) were pooled for analyses; data were collected from June 2012 to March 2014. Post hoc outcomes in the pooled intent-to-treat population (n = 1,462) included mean change from baseline to week 8 in Hamilton Anxiety Rating Scale (HARS) total score, psychic and somatic anxiety subscale scores, and individual item scores; HARS response (≥ 50% total score improvement) and remission (total score ≤ 7) at week 8; and category shifts, defined as a HARS item score ≥ 2 at baseline (moderate to very severe symptoms) and a score of 0 at week 8 (no symptoms). The least squares mean difference for vilazodone versus placebo was statistically significant for change from baseline to week 8 in HARS total score (-1.83) and in the psychic and somatic anxiety subscale scores (-1.21 and -0.63, respectively). Vilazodone was effective versus placebo in adult GAD patients, with significant differences between treatment groups found on both psychic and somatic HARS items. ClinicalTrials.gov identifiers: NCT01629966, NCT01766401, NCT01844115.
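
    The outcome definitions in this record (response, remission, category shift) are mechanical enough to express directly in code. A minimal sketch with an illustrative function name and toy scores, not trial data:

```python
def hars_outcomes(baseline_total, week8_total, baseline_item, week8_item):
    """Classify the post hoc HARS outcomes defined in the pooled analyses:
    response       = >=50% improvement in total score from baseline,
    remission      = week-8 total score <= 7,
    category shift = item score >=2 at baseline and 0 at week 8."""
    improvement = (baseline_total - week8_total) / baseline_total
    return {
        "response": improvement >= 0.5,
        "remission": week8_total <= 7,
        "category_shift": baseline_item >= 2 and week8_item == 0,
    }

# Example: a patient improving from HARS 26 to 6, with one item going 3 -> 0
print(hars_outcomes(26, 6, 3, 0))
# {'response': True, 'remission': True, 'category_shift': True}
```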

  2. Post-Hoc Pattern-Oriented Testing and Tuning of an Existing Large Model: Lessons from the Field Vole

    DEFF Research Database (Denmark)

    Topping, Christopher John; Dalkvist, Trine; Grimm, Volker

    2012-01-01

    between landscape and individual behavior. Results of fitting to the range of patterns chosen were generally very good, but the procedure required to achieve this was long and complicated. To obtain good correspondence between model and the real world it was often necessary to model the real world...... environment closely. We therefore conclude that post-hoc POM is a useful and viable way to test a highly complex simulation model, but also warn against the dangers of over-fitting to real world patterns that lack details in their explanatory driving factors. To overcome some of these obstacles we suggest...

  3. Onset of efficacy and tolerability following the initiation dosing of long-acting paliperidone palmitate: post-hoc analyses of a randomized, double-blind clinical trial

    Directory of Open Access Journals (Sweden)

    Fu Dong-Jing

    2011-05-01

    Background: Paliperidone palmitate is a long-acting injectable atypical antipsychotic for the acute and maintenance treatment of adults with schizophrenia. The recommended initiation dosing regimen is 234 mg on Day 1 and 156 mg on Day 8 via intramuscular (deltoid) injection, followed by 39 to 234 mg once monthly thereafter (deltoid or gluteal). These post-hoc analyses addressed two commonly encountered clinical issues regarding the initiation dosing: the time to onset of efficacy and the associated tolerability. Methods: In a 13-week double-blind trial, 652 subjects with schizophrenia were randomized to paliperidone palmitate 39, 156, or 234 mg (corresponding to 25, 100, or 150 mg equivalents of paliperidone, respectively) or placebo (NCT#00590577). Subjects randomized to paliperidone palmitate received 234 mg on Day 1, followed by their randomized fixed dose on Day 8, and monthly thereafter, with no oral antipsychotic supplementation. The onset of efficacy was defined as the first timepoint at which the paliperidone palmitate group showed significant improvement in the Positive and Negative Syndrome Scale (PANSS) score compared to placebo (analysis of covariance [ANCOVA] models and last-observation-carried-forward [LOCF] methodology, without adjusting for multiplicity), using data from the Day 4, 8, 22, and 36 assessments. Adverse event (AE) rates and relative risks (RR) with 95% confidence intervals (CI) versus placebo were determined. Results: Paliperidone palmitate 234 mg on Day 1 was associated with greater improvement than placebo in least squares (LS) mean PANSS total score at Day 8 (p = 0.037). After the Day 8 injection of 156 mg, there was continued PANSS improvement at Day 22 (p ≤ 0.007 vs. placebo) and Day 36. Conclusions: Significantly greater symptom improvement was observed by Day 8 with paliperidone palmitate (234 mg on Day 1) compared to placebo; this effect was maintained after the 156 mg Day 8 injection, with a trend towards a dose
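
    The tolerability comparison above rests on relative risks with 95% confidence intervals versus placebo. A minimal sketch of the standard large-sample calculation (Wald interval on the log scale), using invented event counts rather than trial data:

```python
import math

def relative_risk(events_trt, n_trt, events_ctl, n_ctl, z=1.96):
    """Relative risk of an adverse event vs. placebo with a Wald 95% CI
    computed on the log scale (the usual large-sample approximation)."""
    rr = (events_trt / n_trt) / (events_ctl / n_ctl)
    se = math.sqrt(1/events_trt - 1/n_trt + 1/events_ctl - 1/n_ctl)
    lo, hi = (math.exp(math.log(rr) + s * z * se) for s in (-1, 1))
    return rr, lo, hi

# Hypothetical counts, not taken from the trial:
rr, lo, hi = relative_risk(30, 160, 15, 160)
print(f"RR = {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
# RR = 2.00 (95% CI 1.12-3.57)
```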

  4. Estimated medical expenditure and risk of job loss among rheumatoid arthritis patients undergoing tofacitinib treatment: post hoc analyses of two randomized clinical trials.

    Science.gov (United States)

    Rendas-Baum, Regina; Kosinski, Mark; Singh, Amitabh; Mebus, Charles A; Wilkinson, Bethany E; Wallenstein, Gene V

    2017-08-01

    RA causes high levels of disability and reduces health-related quality of life, triggering increased costs and risk of unemployment. Tofacitinib is an oral Janus kinase inhibitor for the treatment of RA. These post hoc analyses of phase 3 data aimed to assess monthly medical expenditure (MME) and risk of job loss for tofacitinib treatment vs placebo. Data analysed were from two randomized phase 3 studies of RA patients (n = 1115) with inadequate response to MTX or TNF inhibitors (TNFi) receiving tofacitinib 5 or 10 mg twice daily, adalimumab (one study only) or placebo, in combination with MTX. Short Form 36 version 2 Health Survey physical and mental component summary scores were translated into predicted MME via an algorithm, and into concurrent inability-to-work and job-loss risks at 6, 12 and 24 months, using Medical Outcomes Study data. MME reduction by month 3 was $100 greater for tofacitinib- than placebo-treated TNFi inadequate responders (P < 0.001); ⩾20 and 6% reductions from baseline, respectively. By month 3 of tofacitinib treatment, the odds of inability to work decreased ⩾16%, and risk of future job loss decreased ∼20% (P < 0.001 vs placebo). MME reduction by month 3 was $70 greater for tofacitinib- than placebo-treated MTX inadequate responders (P < 0.001); ⩾23 and 13% reductions from baseline, respectively. By month 3 of tofacitinib treatment, the odds of inability to work decreased ⩾31% and risk of future job loss decreased ⩾25% (P < 0.001 vs placebo). Tofacitinib treatment had a positive impact on estimated medical expenditure and risk of job loss for RA patients with inadequate response to MTX or TNFi.

  5. Clobazam is equally safe and efficacious for seizures associated with Lennox-Gastaut syndrome across different age groups: Post hoc analyses of short- and long-term clinical trial results.

    Science.gov (United States)

    Ng, Yu-Tze; Conry, Joan; Mitchell, Wendy G; Buchhalter, Jeffrey; Isojarvi, Jouko; Lee, Deborah; Drummond, Rebecca; Chung, Steve

    2015-05-01

    The peak age at onset of Lennox-Gastaut syndrome (LGS) is between 3 and 5 years. Patients with LGS frequently experience multiple types of treatment-refractory seizures and require lifelong therapy with several antiepileptic drugs. Here, post hoc analyses of clinical trials (phase III trial OV-1012 and open-label extension trial OV-1004) provide short- and long-term efficacy and safety data of adjunctive clobazam in patients with LGS stratified by age at baseline (≥2 to … years). Clobazam over the short and long term was similarly effective and well tolerated in both pediatric and adult patients with LGS.

  6. Post-Hoc Pattern-Oriented Testing and Tuning of an Existing Large Model: Lessons from the Field Vole

    DEFF Research Database (Denmark)

    Topping, Christopher John; Dalkvist, Trine; Grimm, Volker

    2012-01-01

    develop an existing agent-based model of the field vole (Microtus agrestis), which was developed and tested within the ALMaSS framework. This framework is complex because it includes a high-resolution representation of the landscape and its dynamics, of the individual’s behavior, and of the interaction...

  7. POST-HOC SEGMENTATION USING MARKETING RESEARCH

    Directory of Open Access Journals (Sweden)

    CRISTINEL CONSTANTIN

    2012-10-01

    This paper describes an instrumental research study conducted to compare the information given by two multivariate data analysis methods used for dividing a population into clusters. These methods are K-means clustering and TwoStep clustering, both available in the SPSS system. Such methods can be used in post-hoc market segmentation, which allows companies to find segments with specific behaviours or attitudes. The scope of the research is to find which of the two methods is better suited to market segmentation practice. The outcomes reveal that each method has strong points and weaknesses, related to the relevance of the segment descriptions and the statistical significance of the differences between segments. Researchers should therefore compare the results of the two analyses and choose the method that better discriminates between the market segments.
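
    Of the two SPSS procedures compared, K-means is simple enough to sketch from scratch (TwoStep is SPSS-specific and omitted here). A toy Lloyd's-algorithm implementation on hypothetical two-feature respondent scores:

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Minimal Lloyd's k-means for post-hoc segmentation sketches.
    points: list of numeric feature vectors, e.g. attitude/behaviour scores."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each respondent to the nearest current center
            i = min(range(k),
                    key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])))
            clusters[i].append(p)
        # move each center to the mean of its cluster (keep old center if empty)
        centers = [tuple(sum(c) / len(c) for c in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

# Two obvious "segments" of hypothetical survey respondents:
pts = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
centers, clusters = kmeans(pts, 2)
print(sorted(len(c) for c in clusters))  # [3, 3]
```

    Real survey data would need standardized features and a defensible choice of k; SPSS's TwoStep additionally handles categorical variables and selects the number of clusters automatically.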

  8. Safety and efficacy of canagliflozin in Japanese patients with type 2 diabetes mellitus: post hoc subgroup analyses according to body mass index in a 52-week open-label study.

    Science.gov (United States)

    Inagaki, Nobuya; Goda, Maki; Yokota, Shoko; Maruyama, Nobuko; Iijima, Hiroaki

    2015-01-01

    The safety and efficacy of sodium glucose co-transporter 2 inhibitors in non-obese compared with obese patients with type 2 diabetes mellitus is unknown. We conducted post hoc analyses of the results of a 52-week open-label study of Japanese type 2 diabetes mellitus patients treated with 100 or 200 mg canagliflozin. Patients were divided into four subgroups according to their baseline body mass index (BMI): group I, BMI …; outcomes were evaluated at both canagliflozin doses. Hemoglobin A1c, fasting plasma glucose and body weight decreased significantly from baseline to week 52 at both canagliflozin doses. The changes in hemoglobin A1c and fasting plasma glucose were not significantly different among the four BMI subgroups for either dose. Canagliflozin was tolerated in patients irrespective of their BMI at the start of treatment, although some caution may be needed.

  9. Flourishing in people with depressive symptomatology increases with Acceptance and Commitment Therapy. Post-hoc analyses of a randomized controlled trial

    NARCIS (Netherlands)

    Bohlmeijer, Ernst Thomas; Lamers, S.M.A.; Fledderus, M.

    2015-01-01

    Mental health is more than the absence of mental illness. Rather, both well-being (positive mental health) and mental illness are actually two related continua, with higher levels of well-being defined as “flourishing.” This two-continua model and existing studies about the impact of flourishing on

  11. Flourishing in people with depressive symptomatology increases with Acceptance and Commitment Therapy. Post-hoc analyses of a randomized controlled trial.

    Science.gov (United States)

    Bohlmeijer, Ernst T; Lamers, Sanne M A; Fledderus, Martine

    2015-02-01

    Mental health is more than the absence of mental illness. Rather, well-being (positive mental health) and mental illness are two related continua, with higher levels of well-being defined as "flourishing." This two-continua model and existing studies about the impact of flourishing on psychopathology underscore the need for interventions that enhance flourishing and well-being. Acceptance and Commitment Therapy (ACT) is a model of cognitive behavioral therapy that aims not only to reduce psychopathology but also to promote flourishing. This is the first study to evaluate the impact of ACT on flourishing. A post-hoc analysis was conducted on an earlier randomized controlled trial of a sample of adults with depressive symptomatology who participated in a guided self-help ACT intervention. This analysis showed a 5%-28% increase in flourishing among the participants, and the effects on flourishing were maintained at the three-month follow-up. Compared with participants in a control group, the flourishing of the ACT-trained participants increased from 5% to about 14% after nine weeks. In addition to levels of positive mental health at baseline, an increase in psychological flexibility during the intervention was a significant predictor of flourishing at the three-month follow-up. Copyright © 2014 Elsevier Ltd. All rights reserved.

  12. Clobazam is efficacious for patients across the spectrum of disease severity of Lennox-Gastaut syndrome: post hoc analyses of clinical trial results by baseline seizure-frequency quartiles and VNS experience.

    Science.gov (United States)

    Wheless, James W; Isojarvi, Jouko; Lee, Deborah; Drummond, Rebecca; Benbadis, Selim R

    2014-12-01

    Lennox-Gastaut syndrome (LGS) severity varies considerably, so the potential impact of differences in baseline severity on patient outcome following treatment is clinically informative. Here, two surrogate indicators of LGS severity (baseline seizure frequency and vagus nerve stimulation [VNS] use) were used in post hoc analyses of both short- and long-term clobazam trials (Phase III OV-1012 [CONTAIN] and open-label extension [OLE] OV-1004). In CONTAIN, 217 patients comprised the modified intention-to-treat (mITT) population. Each baseline seizure-frequency quartile had ~40 patients, with baseline weekly drop-seizure frequency ranges of …; … of clobazam-treated patients (vs. 7% for placebo) in Quartile 1 achieved 100% reduction in drop seizures. Five percent of clobazam-treated patients in Quartile 4 (most severe LGS) vs. 0% for placebo achieved 100% reduction in drop seizures. A total of 267 of 306 possible patients entered the OLE (61/68 from a Phase II study and 206/238 from Phase III CONTAIN). Each quartile had ~66 patients, with baseline weekly drop-seizure ranges of …; ≥50% of patients in all 4 quartiles demonstrated ≥50% decreases in weekly drop-seizure frequency. More than 12% of patients in Quartile 4 achieved 100% reduction in drop seizures from Month 3 through Year 5. For the VNS analyses in CONTAIN, the least-squares mean decreases in average weekly rate of drop seizures (mITT population) were 52% for VNS patients receiving clobazam vs. -22% for placebo, and … for non-VNS patients receiving clobazam vs. 26% for placebo. Clobazam-treated patients in both the VNS and non-VNS groups demonstrated ≥50% decreases in average weekly drop- and total-seizure frequencies, and 11% and 14% in the two groups, respectively, achieved drop-seizure freedom. Analyses using baseline seizure frequency and VNS use as surrogates for disease severity showed that clobazam treatment was equally efficacious in patients with less severe and more severe LGS.

  13. Does rectal indomethacin eliminate the need for prophylactic pancreatic stent placement in patients undergoing high-risk ERCP? Post hoc efficacy and cost-benefit analyses using prospective clinical trial data.

    Science.gov (United States)

    Elmunzer, B Joseph; Higgins, Peter D R; Saini, Sameer D; Scheiman, James M; Parker, Robert A; Chak, Amitabh; Romagnuolo, Joseph; Mosler, Patrick; Hayward, Rodney A; Elta, Grace H; Korsnes, Sheryl J; Schmidt, Suzette E; Sherman, Stuart; Lehman, Glen A; Fogel, Evan L

    2013-03-01

    A recent large-scale randomized controlled trial (RCT) demonstrated that rectal indomethacin administration, in addition to pancreatic stent placement (PSP), is effective for preventing post-endoscopic retrograde cholangiopancreatography (ERCP) pancreatitis (PEP) in high-risk cases. We performed a post hoc analysis of this RCT to explore whether rectal indomethacin can replace PSP in the prevention of PEP and to estimate the potential cost savings of such an approach. We retrospectively classified RCT subjects into four prevention groups: (1) no prophylaxis, (2) PSP alone, (3) rectal indomethacin alone, and (4) the combination of PSP and indomethacin. Multivariable logistic regression was used to adjust for imbalances in the prevalence of risk factors for PEP between the groups. Based on these adjusted PEP rates, we conducted an economic analysis comparing the costs associated with PEP prevention strategies employing rectal indomethacin alone, PSP alone, or the combination of both. After adjusting for risk using two different logistic regression models, rectal indomethacin alone appeared to be more effective for preventing PEP than no prophylaxis, PSP alone, and the combination of indomethacin and PSP. Economic analysis revealed that indomethacin alone was a cost-saving strategy in 96% of Monte Carlo trials. A prevention strategy employing rectal indomethacin alone could save approximately $150 million annually in the United States compared with a strategy of PSP alone, and $85 million compared with a strategy of indomethacin and PSP. This hypothesis-generating study suggests that prophylactic rectal indomethacin could replace PSP in patients undergoing high-risk ERCP, potentially improving clinical outcomes and reducing healthcare costs. An RCT comparing rectal indomethacin alone vs. indomethacin plus PSP is needed.
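
    The cost comparison above rests on Monte Carlo simulation of prevention strategies. A minimal sketch of that style of analysis; the PEP rates and cost figures below are invented for illustration, not the paper's inputs:

```python
import random

def strategy_cost(rng, n_patients, pep_rate, prophylaxis_cost, pep_cost=10_000):
    """Total cost of one simulated cohort: prophylaxis for every patient,
    plus treatment costs for however many patients develop PEP."""
    pep_cases = sum(rng.random() < pep_rate for _ in range(n_patients))
    return n_patients * prophylaxis_cost + pep_cases * pep_cost

# All rates and costs below are illustrative assumptions.
rng = random.Random(42)
trials, wins = 2000, 0
for _ in range(trials):
    indo_alone = strategy_cost(rng, 500, pep_rate=0.09, prophylaxis_cost=5)
    stent_alone = strategy_cost(rng, 500, pep_rate=0.15, prophylaxis_cost=1500)
    wins += indo_alone < stent_alone
print(f"indomethacin alone cheaper in {100 * wins / trials:.0f}% of trials")
```

    A full analysis would also draw the rates and costs themselves from distributions (reflecting parameter uncertainty) rather than fixing them, which is what produces results like "cost-saving in 96% of trials".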

  14. Impact of a switch to fingolimod versus staying on glatiramer acetate or beta interferons on patient- and physician-reported outcomes in relapsing multiple sclerosis: post hoc analyses of the EPOC trial.

    Science.gov (United States)

    Calkwood, Jonathan; Cree, Bruce; Crayton, Heidi; Kantor, Daniel; Steingo, Brian; Barbato, Luigi; Hashmonay, Ron; Agashivala, Neetu; McCague, Kevin; Tenenbaum, Nadia; Edwards, Keith

    2014-11-26

    The Evaluate Patient OutComes (EPOC) study assessed physician- and patient-reported outcomes in individuals with relapsing multiple sclerosis who switched directly from injectable disease-modifying therapy (iDMT; glatiramer acetate, intramuscular or subcutaneous interferon beta-1a, or interferon beta-1b) to once-daily, oral fingolimod. Post hoc analyses evaluated the impact of a switch to fingolimod versus staying on each of the four individual iDMTs. Overall, 1053 patients were randomized 3:1 to switch to fingolimod or remain on iDMT. The primary endpoint was the change in Treatment Satisfaction Questionnaire for Medication (TSQM) Global Satisfaction score. Secondary endpoints included changes in scores for TSQM Effectiveness, Side Effects and Convenience subscales, Beck Depression Inventory-II (BDI-II), Fatigue Severity Scale (FSS), Patient-Reported Outcome Indices for Multiple Sclerosis (PRIMUS) Activities, 36-item Short-Form Health Survey (SF-36) Mental Component Summary (MCS) and Physical Component Summary (PCS) and mean investigator-reported Clinical Global Impressions of Improvement (CGI-I). All outcomes were evaluated after 6 months of treatment. Changes in TSQM Global Satisfaction scores were superior after a switch to fingolimod when compared with scores in patients remaining on any of the iDMTs (all p <0.001). Likewise, all TSQM subscale scores improved following a switch to fingolimod (all p <0.001), except when compared with glatiramer acetate for the TSQM Side Effects subscale (p = 0.111). FSS scores were found to be superior for fingolimod versus remaining on subcutaneous interferon beta-1a and interferon beta-1b, BDI-II scores were significantly improved for fingolimod except for the comparison with intramuscular interferon beta-1a, and SF-36 scores were superior with fingolimod compared with remaining on interferon beta-1b (MCS and PCS; p = 0.030 and p = 0.022, respectively) and subcutaneous interferon beta-1a (PCS only; p = 0

  15. Modelling of increased homocysteine in ischaemic stroke: post-hoc cross-sectional matched case-control analysis in young patients

    Directory of Open Access Journals (Sweden)

    Penka A. Atanassova

    2007-03-01

    BACKGROUND & PURPOSE: Hyperhomocysteinaemia has been postulated to participate in the pathogenesis of ischaemic stroke (IS). In young adults especially, IS risk may be significantly increased by raised yet 'normal' homocysteinaemia, i.e., a 'hidden' ('pathologically dormant') elevation within the conventionally defined healthy range. We performed a post-hoc modelling investigation of plasma total homocysteinaemia (THCY) in gender- and age-matched young patients in the acute IS phase, evaluating relationships between THCY and the prevalence of other potential risk factors in 41 patients vs. 41 healthy controls. METHOD: We used clinical methods, instrumental and neuroimaging procedures, risk-factor examination, total plasma homocysteine measurements, and other laboratory and statistical modelling techniques. RESULTS: IS patients and healthy controls were similar not only on the matching variables but also in smoking, main vitamin status, serum creatinine and lipid profile. Patients with IS, however, had lower vitamin B6 levels and higher THCY, fibrinogen and triglycerides (TGL). In multivariate stepwise logistic regression, only increased THCY and TGL were significantly and independently associated with the risk of stroke (72% model accuracy, p=0.001). An increase in THCY of 1.0 µmol/L was associated with a 22% higher risk of ischaemic stroke (adjusted OR=1.22, 95% CI 1.03-1.44). In this way, a novel lower cut-off value for HCY of 11.58 µmol/L in younger patients was revealed (ROC AUC=0.67, 95% CI 0.55-0.78, p=0.009). CONCLUSION: The new THCY cut-off clearly discriminated between absence and presence of IS (sensitivity >63%, specificity >68%) irrespective of age and gender, and may be applied to better evaluate and more precisely define, as early as possible, young patients at increased IS risk.
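
    The reported adjusted OR of 1.22 per 1.0 µmol/L compounds multiplicatively across larger differences, as logistic-regression coefficients do on the odds scale. A small illustration; the deltas below are arbitrary, not values from the study:

```python
# A per-unit odds ratio from logistic regression compounds multiplicatively:
# if +1.0 umol/L of total homocysteine carries an adjusted OR of 1.22,
# then a difference of d umol/L carries an OR of 1.22**d.
OR_PER_UNIT = 1.22

def odds_ratio(delta_umol_l):
    return OR_PER_UNIT ** delta_umol_l

for d in (1, 3, 5):
    print(f"+{d} umol/L -> OR = {odds_ratio(d):.2f}")
# +1 -> 1.22, +3 -> 1.82, +5 -> 2.70
```

    Note that an odds ratio only approximates a relative risk when the outcome is rare, which is why "22% higher risk" should be read loosely.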

  16. A-priori and post-hoc segmentation in the design of healthy eating campaigns

    DEFF Research Database (Denmark)

    Kazbare, Laura; van Trijp, Hans C. M.; Eskildsen, Jacob Kjær

    2010-01-01

    … Although such practice may be justifiable from the practical point of view, it is unclear how effective these implicit segmentations are. In this study the authors argue that it is important to transcend demographic boundaries and to further segment demographic groups. A study with 13-15-year-old adolescents is used to prove that segmentation generates valuable insights for planning promotion of healthy eating. Four types of predictive segmentation models are compared: a one-segment model, an a-priori segmentation based on a demographic variable, an a-priori segmentation based on behavioural variables, and a post-hoc segmentation. The results of the study show that it is useful and also ethical to differentiate people using segmentation methods, since it facilitates reaching more vulnerable segments of society that in general resist change. It also demonstrates that post-hoc segmentation is more helpful...

  18. Perioperative hyperoxia - Long-term impact on cardiovascular complications after abdominal surgery, a post hoc analysis of the PROXI trial

    DEFF Research Database (Denmark)

    Fonnes, Siv; Gogenur, Ismail; Sondergaard, Edith Smed;

    2016-01-01

    BACKGROUND: Increased long-term mortality was found in patients exposed to perioperative hyperoxia in the PROXI trial, where patients undergoing laparotomy were randomised to 80% versus 30% oxygen during and after surgery. This post hoc follow-up study assessed the impact of perioperative hyperoxia...... on long-term risk of cardiovascular events. METHODS: A total of 1386 patients undergoing either elective or emergency laparotomy were randomised to 80% versus 30% oxygen during and two hours after surgery. At follow-up, the primary outcome of acute coronary syndrome was assessed. Secondary outcomes...... included myocardial infarction, other heart disease, and acute coronary syndrome or death. Data were analysed in the Cox proportional hazards model. RESULTS: The primary outcome, acute coronary syndrome, occurred in 2.5% versus 1.3% in the 80% versus 30% oxygen group; HR 2.15 (95% CI 0.96-4.84). Patients...

  19. The post hoc use of randomised controlled trials to explore drug associated cancer outcomes

    DEFF Research Database (Denmark)

    Stefansdottir, Gudrun; Zoungas, Sophia; Chalmers, John

    2013-01-01

    INTRODUCTION: Drug-induced cancer risk is of increasing interest. Both observational studies and data from clinical trials have linked several widely used treatments to cancer. When a signal for a potential drug-cancer association is generated, substantiation is required to assess the impact...... on public health before proper regulatory action can be taken. This paper aims to discuss challenges of exploring drug-associated cancer outcomes by post-hoc analyses of Randomised controlled trials (RCTs) designed for other purposes. METHODOLOGICAL CHALLENGES TO CONSIDER: We set out to perform a post...

  20. Post Hoc Tourist Segmentation with Conjoint and Cluster Analysis

    Directory of Open Access Journals (Sweden)

    Sérgio Dominique Ferreira Lopes

    2009-01-01

    In this paper the authors illustrate the advantages of the combined use of conjoint analysis and cluster analysis in segmenting the tourism market. The benefits are easily understood: conjoint analysis allows researchers to learn the structure of consumers' preferences, and cluster analysis groups consumers into segments based on those preferences. Given the enormous complexity and diversification of today's tourism market, it makes little sense to adopt a-priori segmentation strategies based solely on classic sociodemographic variables, whose explanatory power has proved very limited. Instead, opting for post hoc segmentation procedures that incorporate more elaborate information, such as tourist consumers' preferences (estimated with advanced statistical procedures such as conjoint analysis), becomes a competitive advantage. Preference-based segmentation gives researchers and managers more precise knowledge of the market and enables them to develop marketing strategies suited to each segment of interest.

  1. Objectivity in confirmation: post hoc monsters and novel predictions.

    Science.gov (United States)

    Votsis, Ioannis

    2014-03-01

    The aim of this paper is to put in place some cornerstones in the foundations for an objective theory of confirmation by considering lessons from the failures of predictivism. Discussion begins with a widely accepted challenge, to find out what is needed in addition to the right kind of inferential-semantical relations between hypothesis and evidence to have a complete account of confirmation, one that gives a definitive answer to the question whether hypotheses branded as "post hoc monsters" can be confirmed. The predictivist view is then presented as a way to meet this challenge. Particular attention is paid to Worrall's version of predictivism, as it appears to be the most sophisticated of the lot. It is argued that, despite its faults, his view turns our heads in the right direction by attempting to remove contingent considerations from confirmational matters. The demand to remove such considerations becomes the first of four cornerstones. Each cornerstone is put in place with the aim to steer clear of the sort of failures that plague various kinds of predictivism. In the process, it becomes obvious that the original challenge is wrongheaded and in need of revision. The paper ends with just such a revision.

  2. Lurasidone for major depressive disorder with mixed features and irritability: a post-hoc analysis.

    Science.gov (United States)

    Swann, Alan C; Fava, Maurizio; Tsai, Joyce; Mao, Yongcai; Pikalov, Andrei; Loebel, Antony

    2017-04-01

    The aim of this post-hoc analysis was to evaluate the efficacy of lurasidone in treating major depressive disorder (MDD) with mixed features including irritability. The data in this analysis were derived from a study of patients meeting DSM-IV-TR criteria for unipolar MDD, with a Montgomery-Åsberg Depression Rating Scale (MADRS) total score ≥26, presenting with two or three protocol-defined manic symptoms, who were randomized to 6 weeks of double-blind treatment with either lurasidone 20-60 mg/d (n=109) or placebo (n=100). We defined "irritability" as a score ≥2 on both the Young Mania Rating Scale (YMRS) irritability item (#5) and the disruptive-aggressive item (#9). Endpoint change in the MADRS and YMRS items 5 and 9 was analyzed using a mixed model for repeated measures for patients with and without irritability. Some 20.7% of patients met the criteria for irritability. Treatment with lurasidone was associated with a significant week 6 change vs. placebo in MADRS score in patients both with (-22.6 vs. -9.5) and without irritability. Treatment with lurasidone was also associated with significant week 6 changes vs. placebo in both the YMRS irritability item (-1.4 vs. -0.3, p=0.0012, ES=1.0) and the YMRS disruptive-aggressive item (-1.0 vs. -0.3, p=0.0002, ES=1.2). In our post-hoc analysis of a randomized, placebo-controlled, 6-week trial, treatment with lurasidone significantly improved depressive symptoms in MDD patients with mixed features including irritability. In addition, irritability symptoms significantly improved in patients treated with lurasidone.

  3. Consultants' forum: should post hoc sample size calculations be done?

    Science.gov (United States)

    Walters, Stephen J

    2009-01-01

    Pre-study sample size calculations for clinical trial research protocols are now mandatory. When an investigator is designing a study to compare the outcomes of an intervention, an essential step is the calculation of sample sizes that will allow a reasonable chance (power) of detecting a pre-determined difference (effect size) in the outcome variable, at a given level of statistical significance. Frequently studies will recruit fewer patients than the initial pre-study sample size calculation suggested. Investigators are faced with the fact that their study may be inadequately powered to detect the pre-specified treatment effect and the statistical analysis of the collected outcome data may or may not report a statistically significant result. If the data produces a "non-statistically significant result" then investigators are frequently tempted to ask the question "Given the actual final study size, what is the power of the study, now, to detect a treatment effect or difference?" The aim of this article is to debate whether or not it is desirable to answer this question and to undertake a power calculation, after the data have been collected and analysed.
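
    Whatever one concludes about its merits, the post hoc power calculation under debate is mechanically simple. A sketch using the normal approximation for a two-sample comparison of means; the effect size, SD and sample sizes are illustrative numbers, not from any particular study:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def observed_power(delta, sd, n_per_group, alpha_z=1.959964):
    """Normal-approximation power of a two-sample comparison of means:
    the chance of detecting a true difference `delta` (common SD `sd`)
    at the two-sided 5% level with `n_per_group` patients per arm."""
    ncp = abs(delta) / (sd * math.sqrt(2 / n_per_group))
    return norm_cdf(ncp - alpha_z)

# Planned to detect delta=5 (sd=10) with 64 patients/arm -> ~0.81 power
print(f"planned n=64: power = {observed_power(5, 10, 64):.2f}")
# Under-recruitment to 40/arm drops the same calculation to ~0.61
print(f"actual  n=40: power = {observed_power(5, 10, 40):.2f}")
```

    The article's point is that plugging the *observed* effect into this formula after the fact ("observed power") is just a transformation of the p-value, which is why the practice is questioned.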

  4. Antiosteoporotic Activity of Genistein Aglycone in Postmenopausal Women: Evidence from a Post-Hoc Analysis of a Multicenter Randomized Controlled Trial.

    Science.gov (United States)

    Arcoraci, Vincenzo; Atteritano, Marco; Squadrito, Francesco; D'Anna, Rosario; Marini, Herbert; Santoro, Domenico; Minutoli, Letteria; Messina, Sonia; Altavilla, Domenica; Bitto, Alessandra

    2017-02-22

    Genistein has a preventive role against bone mass loss during menopause. Moreover, experimental data in animal models of osteoporosis suggest an anti-osteoporotic potential for this isoflavone. We performed a post-hoc analysis of a previously published trial investigating the effects of genistein in postmenopausal women with low bone mineral density. The parent study was a randomized, double-blind, placebo-controlled trial involving postmenopausal women with a femoral neck (FN) density <0.795 g/cm2. At the end of the study, only 18 postmenopausal women had osteoporosis in the genistein group (a prevalence of 12%), whereas in the placebo group the number of postmenopausal women with osteoporosis was unchanged after 24 months. This post-hoc analysis is a proof-of-concept study suggesting that genistein may be useful not only in postmenopausal osteopenia but also in osteoporosis. However, this proof-of-concept study needs to be confirmed by a large, well-designed, and appropriately focused randomized clinical trial in a population at high risk of fractures.

  5. Glutamine and antioxidants in the critically ill patient: a post hoc analysis of a large-scale randomized trial.

    Science.gov (United States)

    Heyland, Daren K; Elke, Gunnar; Cook, Deborah; Berger, Mette M; Wischmeyer, Paul E; Albert, Martin; Muscedere, John; Jones, Gwynne; Day, Andrew G

    2015-05-01

    The recent large randomized controlled trial of glutamine and antioxidant supplementation suggested that high-dose glutamine is associated with increased mortality in critically ill patients with multiorgan failure. The objectives of the present analyses were to reevaluate the effect of supplementation after controlling for baseline covariates and to identify potentially important subgroup effects. This study was a post hoc analysis of a prospective factorial 2 × 2 randomized trial conducted in 40 intensive care units in North America and Europe. In total, 1223 mechanically ventilated adult patients with multiorgan failure were randomized to receive glutamine, antioxidants, both glutamine and antioxidants, or placebo administered separate from artificial nutrition. We compared each of the 3 active treatment arms (glutamine alone, antioxidants alone, and glutamine + antioxidants) with placebo on 28-day mortality. Post hoc, treatment effects were examined within subgroups defined by baseline patient characteristics. Logistic regression was used to estimate treatment effects within subgroups after adjustment for baseline covariates and to identify treatment-by-subgroup interactions (effect modification). The 28-day mortality rates in the placebo, glutamine, antioxidant, and combination arms were 25%, 32%, 29%, and 33%, respectively. After adjusting for prespecified baseline covariates, the adjusted odds ratio of 28-day mortality vs placebo was 1.5 (95% confidence interval, 1.0-2.1, P = .05), 1.2 (0.8-1.8, P = .40), and 1.4 (0.9-2.0, P = .09) for glutamine, antioxidant, and glutamine plus antioxidant arms, respectively. In the post hoc subgroup analysis, both glutamine and antioxidants appeared most harmful in patients with baseline renal dysfunction. No subgroups suggested reduced mortality with supplements. 
After adjustment for baseline covariates, early provision of high-dose glutamine administered separately from artificial nutrition was not beneficial and may be
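    As a toy illustration of the arm-versus-placebo comparisons summarized above, the sketch below computes an unadjusted odds ratio with a Woolf 95% CI from hypothetical arm counts (the trial's actual analysis adjusted for baseline covariates via logistic regression; the counts here are invented to echo the 32% vs 25% mortality rates).

```python
from math import exp, log, sqrt

def odds_ratio(deaths_trt, n_trt, deaths_ctl, n_ctl):
    """Unadjusted odds ratio of death (treatment vs control) with a
    Woolf 95% CI computed on the log-odds scale."""
    a, b = deaths_trt, n_trt - deaths_trt   # treatment: died / survived
    c, d = deaths_ctl, n_ctl - deaths_ctl   # control:   died / survived
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo, hi = exp(log(or_) - 1.96 * se), exp(log(or_) + 1.96 * se)
    return or_, lo, hi

# Invented arms of 300 patients each, echoing 32% vs 25% mortality
print(odds_ratio(96, 300, 75, 300))   # OR ≈ 1.41; the CI spans 1
```

    A subgroup screen of the kind described would repeat this within strata (e.g., baseline renal dysfunction) and then test the treatment-by-subgroup interaction in a regression model.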

  6. Mediterranean diet, retinopathy, nephropathy, and microvascular diabetes complications: a post hoc analysis of a randomized trial.

    OpenAIRE

    Díaz‑López, Andrés; Babio, Nancy; Martínez-González, Miguel A; Corella, Dolores; Amor, Antonio J.; Fitó Colomer, Montserrat; Estruch, Ramón; Arós, F.; Gómez-Gracia, Enrique; Fiol, M.; Lapetra, José; Serra-Majem, Luis; Basora, J. (Josep); Basterra-Gortari, F. Javier; Zanon-Moreno, Vicente

    2015-01-01

    OBJECTIVE: To date no clinical trials have evaluated the role of dietary patterns on the incidence of microvascular diabetes complications. We hypothesized that a nutritional intervention based on the Mediterranean diet (MedDiet) would have greater protective effect on diabetic retinopathy and nephropathy than a low-fat control diet. RESEARCH DESIGN AND METHODS: This was a post hoc analysis of a cohort of patients with type 2 diabetes participating in the PREvención con DIeta MEDiterránea (PR...

  7. Rotigotine transdermal system and evaluation of pain in patients with Parkinson’s disease: a post hoc analysis of the RECOVER study

    Science.gov (United States)

    2014-01-01

    Background Pain is a troublesome non-motor symptom of Parkinson’s disease (PD). The RECOVER (Randomized Evaluation of the 24-hour Coverage: Efficacy of Rotigotine; ClinicalTrials.gov: NCT00474058) study demonstrated significant improvements in early-morning motor function (UPDRS III) and sleep disturbances (PDSS-2) with rotigotine transdermal system. Improvements were also reported on a Likert pain scale (measuring any type of pain). This post hoc analysis of RECOVER further evaluates the effect of rotigotine on pain, and whether improvements in pain may be attributable to benefits in motor function or sleep disturbance. Methods PD patients with unsatisfactory early-morning motor impairment were randomized to optimal-dose (up to 16 mg/24 h) rotigotine or placebo, maintained for 4 weeks. Pain was assessed in the early morning using an 11-point Likert pain scale (rating average severity of pain of any type over the preceding 12 hours, from 0 [no pain] to 10 [worst pain ever experienced]). Post hoc analyses were performed for patients reporting ‘any’ pain (pain score ≥1) at baseline, and for subgroups reporting ‘mild’ (score 1–3) and ‘moderate-to-severe’ pain (score ≥4). Change from baseline in the Likert pain scale in rotigotine-treated patients was further analyzed based on a UPDRS III/PDSS-2 responder analysis (a responder defined as showing a ≥30% reduction in early-morning UPDRS III total score or PDSS-2 total score). As these are post hoc analyses, all p values presented are exploratory. Results Of 267 patients with Likert pain data (178 rotigotine, 89 placebo), 187 (70%) reported ‘any’ pain; of these, 87 (33%) reported ‘mild’ and 100 (37%) ‘moderate-to-severe’ pain. Pain scores decreased from baseline with rotigotine compared with placebo in patients with ‘any’ pain (-0.88 [95% CI: -1.56, -0.19], p = 0.013), and in the subgroup with ‘moderate-to-severe’ pain (-1.38 [-2.44, -0.31], p = 0.012). UPDRS III or PDSS-2 responders

  8. Quantitative Research Methods in Chaos and Complexity: From Probability to Post Hoc Regression Analyses

    Science.gov (United States)

    Gilstrap, Donald L.

    2013-01-01

    In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…

  9. Long-term healthcare costs and functional outcomes associated with lack of remission in schizophrenia: a post-hoc analysis of a prospective observational study

    Directory of Open Access Journals (Sweden)

    Haynes Virginia S

    2012-12-01

    Abstract Background Little is known about the long-term outcomes of patients with schizophrenia who fail to achieve symptomatic remission. This post-hoc analysis of a 3-year study compared the costs of mental health services and functional outcomes between individuals with schizophrenia who did or did not meet cross-sectional symptom remission at study enrollment. Methods This post-hoc analysis used data from a large, 3-year, prospective, non-interventional observational study of individuals treated for schizophrenia in the United States, conducted between July 1997 and September 2003. At study enrollment, individuals were classified as non-remitted or remitted using the Schizophrenia Working Group definition of symptom remission (8 core symptoms rated as mild or less). Mental health service use was measured using medical records. Costs were based on the sites’ medical information systems. Functional outcomes were measured with multiple patient-reported measures and the clinician-rated Quality of Life Scale (QLS). Symptoms were measured using the Positive and Negative Syndrome Scale (PANSS). Outcomes for non-remitted and remitted patients were compared over time using mixed-effects models for repeated measures or generalized estimating equations after adjusting for multiple baseline characteristics. Results At enrollment, most of the 2,284 study participants (76.1%) did not meet remission criteria. Non-remitted patients had significantly higher PANSS total scores at baseline, a lower likelihood of being Caucasian, a higher likelihood of hospitalization in the previous year, and a greater likelihood of a substance use diagnosis (all p < …). Conclusions In this post-hoc analysis of a 3-year prospective observational study, the failure to achieve symptomatic remission at enrollment was associated with higher subsequent healthcare costs and worse functional outcomes. Further examination of outcomes for schizophrenia patients who fail to achieve remission at

  10. F-18 fluorodeoxyglucose PET/CT and post hoc PET/MRI in a case of primary meningeal melanomatosis

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hong Je [Dept. of Nuclear Medicine, Dongnam Institute of Radiological and Medical Sciences (DIRAMS), Busan (Korea, Republic of); Ahn, Byeong Cheol; Hwang, Seong Wook; Kim, Hae Won; Lee, Sang Woo; Hwang, Jeong Hyun; Lee, Jae Tae [Kyungpook National University School of Medicine, Kyungpook National University Hospital, Daegu (Korea, Republic of); Cho, Suk Kyong [Dept. of Nuclear Medicine, Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of)

    2013-04-15

    Primary meningeal melanomatosis is a rare, aggressive variant of primary malignant melanoma of the central nervous system, which arises from melanocytes within the leptomeninges and carries a poor prognosis. We report a case of primary meningeal melanomatosis in a 17-year-old man, diagnosed with 18F-fluorodeoxyglucose (F-18 FDG) PET/CT and post hoc F-18 FDG PET/MRI fusion images. Whole-body F-18 FDG PET/CT was helpful in ruling out an extracranial origin of the melanoma lesions and in assessing the therapeutic response. Post hoc PET/MRI fusion images facilitated the correlation between PET and MRI images and demonstrated the hypermetabolic lesions more accurately than the unenhanced PET/CT images. Whole-body F-18 FDG PET/CT and post hoc PET/MRI images might help clinicians determine the best therapeutic strategy for patients with primary meningeal melanomatosis.

  11. The “Gender Factor” in Wearing-Off among Patients with Parkinson’s Disease: A Post Hoc Analysis of DEEP Study

    Directory of Open Access Journals (Sweden)

    Delia Colombo

    2015-01-01

    Background. The early detection of wearing-off in Parkinson disease (DEEP) observational study demonstrated that women with Parkinson’s disease (PD) carry an increased risk (80.1%) of wearing-off (WO). This post hoc analysis of the DEEP study evaluates gender differences in WO and associated phenomena. Methods. Patients on dopaminergic treatment for ≥1 year were included in this multicenter observational cross-sectional study. In a single visit, WO was diagnosed based on the neurologist’s assessment as well as on the 19-item wearing-off questionnaire (WOQ-19), with WO defined as a score ≥2. Post hoc analyses were conducted to investigate gender differences in demographic and clinical features with respect to WO. Results. Of 617 patients enrolled, 236 were women and 381 were men. The prevalence of WO was higher among women, according to both the neurologists’ judgment (61.9% versus 53.8%, P=0.045) and the WOQ-19 analysis (72.5% versus 64.0%, P=0.034). Among patients with WO (WOQ-19), women experienced ≥1 motor symptom in 72.5% versus 64.0% of men, and ≥1 nonmotor symptom in 44.5% versus 36.7%. Conclusions. Our results suggest that WO is more common among women, for both motor and nonmotor symptoms. Prospective studies are warranted to investigate this potential gender effect.

  12. Post hoc pattern matching: assigning significance to statistically defined expression patterns in single channel microarray data

    Directory of Open Access Journals (Sweden)

    Blalock Eric M

    2007-07-01

    Abstract Background Researchers using RNA expression microarrays in experimental designs with more than two treatment groups often identify statistically significant genes with ANOVA approaches. However, the ANOVA test does not discriminate which of the multiple treatment groups differ from one another. Thus, post hoc tests, such as linear contrasts, template correlations, and pairwise comparisons, are used. Linear contrasts and template correlations work extremely well, especially when the researcher has a priori information pointing to a particular pattern/template among the different treatment groups. Further, all pairwise comparisons can be used to identify particular, treatment group-dependent patterns of gene expression. However, these approaches are biased by the researcher's assumptions, and some treatment-based patterns may fail to be detected using these approaches. Finally, different patterns may have different probabilities of occurring by chance, importantly influencing researchers' conclusions about a pattern and its constituent genes. Results We developed a four-step, post hoc pattern matching (PPM) algorithm to automate single-channel gene expression pattern identification/significance. First, a 1-way analysis of variance (ANOVA), coupled with post hoc 'all pairwise' comparisons, is calculated for all genes. Second, for each ANOVA-significant gene, all pairwise contrast results are encoded to create unique pattern ID numbers. The number of genes found in each pattern in the data is identified as that pattern's 'actual' frequency. Third, using Monte Carlo simulations, those patterns' frequencies are estimated in random data (the 'random' gene pattern frequency). Fourth, a Z-score for overrepresentation of the pattern is calculated ('actual' against 'random' gene pattern frequencies). We wrote a Visual Basic program (StatiGen) that automates the PPM procedure, constructs an Excel workbook with standardized graphs of overrepresented patterns, and lists of
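    A compressed sketch of the pattern-encoding and Monte Carlo steps described above, with invented simplifications: three groups, a fixed difference threshold standing in for the post hoc significance calls, and Python in place of the authors' Visual Basic implementation.

```python
import random

def pattern_id(g1, g2, g3, delta=0.5):
    """Encode the pairwise ordering of three group means as a string:
    one of '>', '<', '=' per pair, where exceeding `delta` stands in
    for a significant post hoc pairwise comparison."""
    def cmp(a, b):
        if a - b > delta:
            return ">"
        if b - a > delta:
            return "<"
        return "="
    return cmp(g1, g2) + cmp(g1, g3) + cmp(g2, g3)

def random_pattern_freq(pattern, n_sim=20000, seed=1):
    """Monte Carlo estimate of how often `pattern` arises from pure
    noise (three independent standard-normal group means)."""
    rng = random.Random(seed)
    hits = sum(
        pattern_id(rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(0, 1)) == pattern
        for _ in range(n_sim)
    )
    return hits / n_sim

# Comparing a pattern's 'actual' frequency in the data against this
# chance frequency yields the overrepresentation Z-score.
print(random_pattern_freq(">>>"), random_pattern_freq("==="))
```

    Real microarray data would substitute per-gene ANOVA plus all-pairwise post hoc calls for the threshold comparison, but the encoding and simulation logic are the same.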

  13. Headache relief after anterior cervical discectomy: post hoc analysis of a randomized investigational device exemption trial: clinical article.

    Science.gov (United States)

    Schrot, Rudolph J; Mathew, Jesna S; Li, Yueju; Beckett, Laurel; Bae, Hyun W; Kim, Kee D

    2014-08-01

    The authors analyzed headache relief after anterior cervical discectomy. Headache may be relieved after anterior cervical discectomy, but the mechanism is unknown. If headaches were directly referred from upper cervical pathology, more headache relief would be expected from surgery performed at higher cervical levels. If spinal kinesthetics were the mechanism, then headache relief might differ between arthroplasty and fusion. Headache relief after anterior cervical discectomy was quantified by operated disc level and by method of operation (arthroplasty vs arthrodesis). The authors performed a post hoc analysis of an artificial disc trial. Data on headache pain were extracted from the Neck Disability Index (NDI) questionnaire. A total of 260 patients underwent single-level arthroplasty or arthrodesis. Preoperatively, 52% reported NDI headache scores of 3 or greater, compared with only 13%-17% postoperatively. The model-based mean NDI headache score at baseline was 2.5 (95% CI 2.3-2.7) and was reduced by 1.3 points after surgery (95% CI 1.2-1.4, p < …). The operated level was not associated with the degree of headache relief, and there was no significant difference in headache relief between arthroplasty and arthrodesis. Most patients with symptomatic cervical spondylosis have headache as a preoperative symptom (88%). Anterior cervical discectomy with both arthroplasty and arthrodesis is associated with a durable decrease in headache. Headache relief is not related to the level of operation. The mechanism for headache reduction remains unclear.

  14. Evolution of Blood Lactate and 90-Day Mortality in Septic Shock. A Post Hoc Analysis of the FINNAKI Study

    DEFF Research Database (Denmark)

    Varis, Elina; Pettilä, Ville; Poukkanen, Meri;

    2016-01-01

    Hyperlactatemia predicts mortality in patients with sepsis and septic shock, and its normalization is a potential treatment goal. We investigated the association of blood lactate and its changes over time with 90-day mortality in septic shock. We performed a post hoc analysis of 513 septic shock...

  15. Post-hoc Analysis on the R&D Capabilities of Chemical and Metallurgical Manufacturing

    Directory of Open Access Journals (Sweden)

    Herman Shah Anuar

    2013-09-01

    The purpose of this paper is to evaluate how internal R&D, external R&D, and patenting affect the behavior of foreign, local, and joint-venture manufacturing companies operating in Malaysia. Different types of manufacturing companies may take different approaches to their R&D capabilities and patenting activity. The paper is built on a post-hoc analysis evaluating how internal R&D, external R&D, and patenting affect the behavior of these foreign, local, and joint-venture companies. The research was conducted using survey questionnaires, with 124 chemical and metallurgical manufacturing companies participating. The results indicated that the three types of companies behave differently with respect to internal R&D, external R&D, and patenting, which can be attributed to their different strategic business directions. It is suggested that future research examine other types of manufacturing companies, or draw on a larger sample, to better generalize the behavior of these companies.

  16. Should we really use post-hoc tests based on mean-ranks?

    CERN Document Server

    Benavoli, Alessio; Mangili, Francesca

    2015-01-01

    The statistical comparison of multiple algorithms over multiple data sets is fundamental in machine learning. This is typically carried out by the Friedman test. When the Friedman test rejects the null hypothesis, multiple comparisons are carried out to establish which are the significant differences among algorithms. The multiple comparisons are usually performed using the mean-ranks test. The aim of this technical note is to discuss the inconsistencies of the mean-ranks post-hoc test with the goal of discouraging its use in machine learning as well as in medicine, psychology, etc. We show that the outcome of the mean-ranks test depends on the pool of algorithms originally included in the experiment. In other words, the outcome of the comparison between algorithms A and B depends also on the performance of the other algorithms included in the original experiment. This can lead to paradoxical situations. For instance the difference between A and B could be declared significant if the pool comprises algorithm…
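    The dependence on the pool of algorithms is easy to reproduce. In the sketch below (invented scores), B outranks A when only the two are compared, but adding a third algorithm C, without touching A's or B's scores, erases the difference in their mean ranks.

```python
def mean_ranks(scores):
    """scores: dict name -> list of per-dataset scores (higher is better).
    Returns dict name -> mean rank across datasets (rank 1 = best)."""
    names = list(scores)
    n_data = len(next(iter(scores.values())))
    totals = {n: 0.0 for n in names}
    for i in range(n_data):
        ordered = sorted(names, key=lambda n: -scores[n][i])
        for rank, n in enumerate(ordered, start=1):
            totals[n] += rank
    return {n: totals[n] / n_data for n in names}

a, b = [0.9, 0.7, 0.7], [0.8, 0.8, 0.8]
c = [0.85, 0.6, 0.6]   # a third algorithm, irrelevant to the A-vs-B question

print(mean_ranks({"A": a, "B": b}))            # B ahead of A (1.33 vs 1.67)
print(mean_ranks({"A": a, "B": b, "C": c}))    # A and B now tied
```

    C sits between A and B on the data set A wins but below both on the data sets B wins, so inserting it reweights the per-dataset rank gaps; this is the pool-dependence the note criticises.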

  17. M.mode.ify: A Free Online Tool to Generate Post Hoc M-Mode Images From Any Ultrasound Clip.

    Science.gov (United States)

    Smith, Benjamin C; Avila, Jacob

    2016-02-01

    We present a software tool designed to generate an M-mode image post hoc from any B-mode ultrasound clip, along any possible axis. M.mode.ify works by breaking down an ultrasound clip into individual frames. It then rotates and crops these frames by using a user-selected M-mode line. The post hoc M-mode image is created by splicing these frames together. Users can measure time and distance after proper calibration through the M.mode.ify interface. This tool opens up new possibilities for clinical application, quality assurance, and research. It is available free for public use at http://www.ultrasoundoftheweek.com/M.mode.ify/.
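    The splicing step can be sketched in a few lines: sample a fixed scan line from each frame and stack the samples as columns. The snippet below uses a synthetic clip and a vertical line only (M.mode.ify supports arbitrary axes via rotation and cropping); all names and data here are illustrative, not the tool's actual code.

```python
import numpy as np

def post_hoc_m_mode(frames, x_column):
    """frames: array of shape (n_frames, height, width). Samples the
    vertical line at x_column from every frame and stacks the samples
    as columns, giving a (height, n_frames) M-mode image."""
    return np.stack([f[:, x_column] for f in frames], axis=1)

# Synthetic "clip": a bright horizontal band oscillating up and down,
# standing in for a moving reflector in a B-mode loop.
n_frames, h, w = 30, 64, 64
frames = np.zeros((n_frames, h, w))
for t in range(n_frames):
    y = int(32 + 10 * np.sin(2 * np.pi * t / 15))
    frames[t, y - 2:y + 2, :] = 1.0

mmode = post_hoc_m_mode(frames, x_column=32)
print(mmode.shape)   # → (64, 30)
```

    In the resulting image the band traces a sinusoid over time, which is exactly what an M-mode display of a moving structure shows; time and distance calibration would then convert pixel offsets to seconds and centimetres.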

  18. A Post Hoc Analysis of D-Threo-Methylphenidate Hydrochloride (Focalin) Versus D,l-Threo-Methylphenidate Hydrochloride (Ritalin)

    Science.gov (United States)

    Weiss, Margaret; Wasdell, Michael; Patin, John

    2004-01-01

    Objective: To evaluate clinical measures of the benefit/risk ratio in a post hoc analysis of a clinical trial of d-threo-methylphenidate hydrochloride (d-MPH) and d,l-threo-methylphenidate hydrochloride (d,l-MPH). Method: Data from a phase III clinical trial was used to compare equimolar doses of d-MPH and d,l-MPH treatment for…

  19. Clinical Factors Associated with Dose of Loop Diuretics After Pediatric Cardiac Surgery: Post Hoc Analysis.

    Science.gov (United States)

    Haiberger, Roberta; Favia, Isabella; Romagnoli, Stefano; Cogo, Paola; Ricci, Zaccaria

    2016-06-01

    A post hoc analysis of a randomized controlled trial comparing the clinical effects of furosemide and ethacrynic acid was conducted. Infants undergoing cardiac surgery with cardiopulmonary bypass were included in order to explore which clinical factors are associated with diuretic dose in infants with congenital heart disease. Overall, 67 patients with median (interquartile range) age of 48 (13-139) days were enrolled. Median diuretic dose was 0.34 (0.25-0.4) mg/kg/h at the end of postoperative day (POD) 0 and it significantly decreased (p = 0.04) over the following PODs; during this period, the ratio between urine output and diuretic dose increased significantly (p = 0.04). Age (r -0.26, p = 0.02), weight (r -0.28, p = 0.01), cross-clamp time (r 0.27, p = 0.03), administration of ethacrynic acid (OR 0.01, p = 0.03), and, at the end of POD0, creatinine levels (r 0.3, p = 0.009), renal near-infrared spectroscopy saturation (r -0.44, p = 0.008), whole-blood neutrophil gelatinase-associated lipocalin levels (r 0.30, p = 0.01), pH (r -0.26, p = 0.02), urinary volume (r -0.2755, p = 0.03), and fluid balance (r 0.2577, p = 0.0266) showed a significant association with diuretic dose. In multivariable logistic regression, cross-clamp time (OR 1.007, p = 0.04), use of ethacrynic acid (OR 0.2, p = 0.01), and blood pH at the end of POD0 (OR 0.0001, p = 0.03) were independently associated with diuretic dose. Early resistance to continuous infusion of loop diuretics is evident in post-cardiac surgery infants: higher doses are administered to patients with lower urinary output. The variables independently associated with diuretic dose in our population were cross-clamp time, administration of ethacrynic acid, and blood pH.

  20. A gender-medicine post hoc analysis (MetaGeM project) to test sex differences in previous observational studies in different diseases: methodology

    Directory of Open Access Journals (Sweden)

    Colombo D

    2014-10-01

    Delia Colombo, Gilberto Bellia, Donatella Vassellatti, Emanuela Zagni (Novartis Farma, Origgio); Simona Sgarbi, Sara Rizzoli (MediData, Modena, Italy). Abstract: Only recently has medical research begun to understand the importance of taking sex into account, recognizing that symptoms and responses to medical treatment may be very different between males and females. However, the analyses provided by the pharmaceutical industry to regulatory authorities often do not present safety and efficacy data by sex. Novartis has started a gender-medicine project called MetaGeM, which includes nine observational studies sponsored by Novartis Farma, Italy, conducted in Italy between 2002 and 2013 in a range of different clinical areas. The MetaGeM project aims to analyze and describe, by means of post hoc analyses and meta-analyses, the clinical outcomes, therapeutic approaches, and safety data of these studies by sex: PSYCHAE and GENDER ATTENTION in psoriasis; Synergy in psoriatic arthritis; ICEBERG in HBsAg carriers; SURF and CETRA in liver- and renal-transplanted patients, respectively; DEEP in Parkinson's disease; and EVOLUTION and AXEPT in Alzheimer's disease. The present paper describes the methodology of the MetaGeM project. Keywords: gender-medicine, MetaGeM project, methodology

  1. Are we drawing the right conclusions from randomised placebo-controlled trials? A post-hoc analysis of data from a randomised controlled trial

    Directory of Open Access Journals (Sweden)

    Bone Kerry M

    2009-06-01

    Abstract Background Assumptions underlying placebo-controlled trials include that the placebo effect impacts all study arms equally, and that treatment effects are additional to the placebo effect. However, these assumptions have recently been challenged, and different mechanisms may potentially be operating in the placebo and treatment arms. The objective of the current study was to explore the nature of placebo versus pharmacological effects by comparing predictors of the placebo response with predictors of the treatment response in a randomised, placebo-controlled trial of a phytotherapeutic combination for the treatment of menopausal symptoms. A substantial placebo response was observed, but no significant difference in efficacy between the two arms. Methods A post hoc analysis was conducted on data from 93 participants who completed this previously published study. Baseline variables were investigated as potential predictors of the response on any of the endpoints of flushing, overall menopausal symptoms, and depression. Focused tests were conducted using hierarchical linear regression analyses; based on these findings, analyses were then conducted for each group separately. The findings are discussed in relation to the existing literature on placebo effects. Results Distinct differences in predictors were observed between the placebo and active groups. A significant difference was found for study-entry anxiety and Greene Climacteric Scale (GCS) scores on all three endpoints. Attitude to menopause was found to differ significantly between the two groups for GCS scores. Examination of the individual arms found anxiety at study entry to predict the placebo response on all three outcome measures individually. In contrast, low anxiety was significantly associated with improvement in the active treatment group. None of the variables found to predict the placebo response was relevant to the treatment arm.
Conclusion This study was a post hoc analysis

  2. Correlation between homocysteine and Vitamin B12 levels: A post-hoc analysis from North-West India

    Directory of Open Access Journals (Sweden)

    Sunil Kumar Raina

    2015-01-01

    Background: Homocysteine, a degradation product of sulfur-containing amino acids, has been shown to be a risk factor for cardiovascular disease. The aim of this post-hoc analysis was to estimate homocysteine levels among voluntarily consenting healthy adults in the context of other hematological parameters. Methods: The data for this post-hoc analysis were derived from an observational study carried out at a medical college in rural North-West India. Results: About 77.42% of the enrolled participants with a serum homocysteine level above 30 μmol/L had suboptimal serum Vitamin B12 (<200 pg/mL). On regression analysis, serum homocysteine showed an inverse correlation with serum Vitamin B12 levels. Conclusions: The hyperhomocysteinemia observed in our study was sufficiently common and wholly ascribable to low Vitamin B12 concentration, as we did not find any case of subnormal serum folic acid.
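    The inverse correlation reported above amounts to a negative sign on the ordinary least-squares slope of homocysteine regressed on Vitamin B12. A minimal sketch with invented values (the study's actual data are not reproduced here):

```python
from statistics import mean

def ols_slope(x, y):
    """Least-squares slope of y regressed on x."""
    mx, my = mean(x), mean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

b12 = [150, 180, 200, 250, 300, 400]   # serum Vitamin B12, pg/mL (invented)
hcy = [38, 34, 29, 22, 18, 12]         # serum homocysteine, umol/L (invented)

print(ols_slope(b12, hcy))   # negative slope => inverse correlation
```

    A full analysis would also report the correlation coefficient and a confidence interval for the slope, but the sign of this quantity is the headline finding.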

  3. Antiosteoporotic Activity of Genistein Aglycone in Postmenopausal Women: Evidence from a Post-Hoc Analysis of a Multicenter Randomized Controlled Trial

    Directory of Open Access Journals (Sweden)

    Vincenzo Arcoraci

    2017-02-01

    Genistein has a preventive role against bone mass loss during menopause. Moreover, experimental data in animal models of osteoporosis suggest an anti-osteoporotic potential for this isoflavone. We performed a post-hoc analysis of a previously published trial investigating the effects of genistein in postmenopausal women with low bone mineral density. The parent study was a randomized, double-blind, placebo-controlled trial involving postmenopausal women with a femoral neck (FN) density <0.795 g/cm2. A cohort of the enrolled women was identified at baseline as osteoporotic (n = 121) on the basis of their T-scores and analyzed thereafter over the 24 months' treatment with either 1000 mg of calcium and 800 IU of vitamin D3 (placebo; n = 59), or calcium, vitamin D3, and genistein aglycone (54 mg/day; genistein; n = 62). According to the femoral neck T-scores, 31.3% of the genistein and 30.9% of the placebo recipients were osteoporotic at baseline. In the placebo and genistein groups, the 10-year hip fracture probability assessed by the Fracture Risk Assessment tool (FRAX) was 4.1 ± 1.9 (SD) and 4.2 ± 2.1 (SD), respectively. Mean bone mineral density (BMD) at the femoral neck increased from 0.62 g/cm2 at baseline to 0.68 g/cm2 at 1 year and 0.70 g/cm2 at 2 years in genistein recipients, and decreased from 0.61 g/cm2 at baseline to 0.60 g/cm2 at 1 year and 0.57 g/cm2 at 2 years in placebo recipients. At the end of the study only 18 postmenopausal women had osteoporosis in the genistein group (a prevalence of 12%), whereas in the placebo group the number of postmenopausal women with osteoporosis was unchanged after 24 months. This post-hoc analysis is a proof-of-concept study suggesting that genistein may be useful not only in postmenopausal osteopenia but also in osteoporosis. However, this proof-of-concept study needs to be confirmed by a large, well-designed, and appropriately focused randomized clinical trial in a population at high risk

  4. Brain BLAQ: Post-hoc thick-section histochemistry for localizing optogenetic constructs in neurons and their distal terminals.

    Science.gov (United States)

    Kupferschmidt, David A; Cody, Patrick A; Lovinger, David M; Davis, Margaret I

    2015-01-01

    Optogenetic constructs have revolutionized modern neuroscience, but the ability to accurately and efficiently assess their expression in the brain and associate it with prior functional measures remains a challenge. High-resolution imaging of thick, fixed brain sections would make such post-hoc assessment and association possible; however, thick sections often display autofluorescence that limits their compatibility with fluorescence microscopy. We describe and evaluate a method we call "Brain BLAQ" (Block Lipids and Aldehyde Quench) to rapidly reduce autofluorescence in thick brain sections, enabling efficient axon-level imaging of neurons and their processes in conventional tissue preparations using standard epifluorescence microscopy. Following viral-mediated transduction of optogenetic constructs and fluorescent proteins in mouse cortical pyramidal and dopaminergic neurons, we used BLAQ to assess innervation patterns in the striatum, a region in which autofluorescence often obscures the imaging of fine neural processes. After BLAQ treatment of 250-350 μm-thick brain sections, axons and puncta of labeled afferents were visible throughout the striatum using a standard epifluorescence stereomicroscope. BLAQ histochemistry confirmed that motor cortex (M1) projections preferentially innervated the matrix component of lateral striatum, whereas medial prefrontal cortex projections terminated largely in dorsal striosomes and distinct nucleus accumbens subregions. Ventral tegmental area dopaminergic projections terminated in a similarly heterogeneous pattern within nucleus accumbens and ventral striatum. Using a minimal number of easily manipulated and visualized sections, and microscopes available in most neuroscience laboratories, BLAQ enables simple, high-resolution assessment of virally transduced optogenetic construct expression, and post-hoc association of this expression with molecular markers, physiology and behavior.

  5. On Post Hoc Self-rescue in Criminal Law

    Institute of Scientific and Technical Information of China (English)

    马荣春; 陈芹

    2014-01-01

    Post hoc self-rescue in criminal law has a special significance that distinguishes it from self-rescue in the ordinary sense: it is one of the ex post grounds in criminal law for precluding illegality, that is, for justification. Post hoc self-rescue is subject to a series of constitutive requirements or conditions of establishment, running from the cause requirement and the timing requirement, through the subject requirement and the subjective requirement, to the object requirement and the limit requirement. A self-rescue act that fails to satisfy these requirements constitutes, in lighter cases, a negligent crime and, in graver cases, an intentional crime; against such an act one may not only exercise justifiable defense or act out of necessity, but may even carry out "post hoc counter-self-rescue in criminal law."

  6. A post hoc analysis of long-term prognosis after exenatide treatment in patients with ST-segment elevation myocardial infarction

    DEFF Research Database (Denmark)

    Kyhl, Kasper; Lønborg, Jacob; Vejlstrup, Niels

    2016-01-01

    AIMS: We aimed to assess the effect of exenatide treatment as an adjunct to primary percutaneous coronary intervention (PCI) on long-term clinical outcome. METHODS AND RESULTS: We performed a post hoc analysis in 334 patients with a first STEMI included in a previous study randomised to exenatide......% in the exenatide group versus 9% in the placebo group (HR 1.45, p=0.20). CONCLUSIONS: In this post hoc analysis of patients with a STEMI, treatment with exenatide at the time of primary PCI did not reduce the primary composite endpoint or the secondary endpoint of all-cause -mortality. However, exenatide treatment...

  7. A review and additional post-hoc analyses of the incidence and impact of constipation observed in darifenacin clinical trials

    Directory of Open Access Journals (Sweden)

    Tack J

    2012-09-01

    Full Text Available Jan Tack,1 Jean-Jacques Wyndaele,2 Greg Ligozio,3 Mathias Egermark4; 1University of Leuven, Gastroenterology Section, Leuven, Belgium; 2University of Antwerp, Department of Urology, Antwerp, Belgium; 3Novartis Pharmaceuticals Corporation, NJ, USA; 4Roche Diagnostics Scandinavia AB, Bromma, Sweden, and formerly of Novartis Pharma AG, Basel, Switzerland. Background: Constipation is a common side effect of antimuscarinic treatment for overactive bladder (OAB). This review evaluates the incidence and impact of constipation on the lives of patients with OAB being treated with darifenacin. Methods: Constipation data from published Phase III and Phase IIIb/IV darifenacin studies were reviewed and analyzed. Over 4000 patients with OAB (aged 18–89 years; ≥80% female) were enrolled in nine studies (three Phase III [data from these fixed-dose studies were pooled and provide the primary focus for this review], three Phase IIIb, and three Phase IV). The impact of constipation was assessed by discontinuations, use of concomitant laxatives, patient-reported perception of treatment, and a bowel habit questionnaire. Results: In the pooled Phase III trials, 14.8% (50/337) of patients on darifenacin 7.5 mg/day and 21.3% (71/334) on 15 mg/day experienced constipation, compared with 12.6% (28/223) and 6.2% (24/388) with tolterodine and placebo, respectively. In addition, a few patients discontinued treatment due to constipation (0.6% [2/337], 1.2% [4/334], 1.8% [4/223], and 0.3% [1/388] in the darifenacin 7.5 mg/day, darifenacin 15 mg/day, tolterodine, and placebo groups, respectively) or required concomitant laxatives (3.3% [11/337], 6.6% [22/334], 7.2% [16/223], and 1.5% [6/388], respectively). Patient-reported perception of treatment quality was observed to be similar between patients who experienced constipation and those who did not.
During the long-term extension study, a bowel habit questionnaire showed only small numerical changes over time in frequency of bowel movements, straining to empty bowels, or number of days with hard stools.Conclusion: While constipation associated with darifenacin was reported in ≤21% of the patient population, it only led to concomitant laxative use in approximately one-third of these patients and a low incidence of treatment discontinuation. These data suggest that constipation did not impact patient perception of treatment quality.Keywords: antimuscarinics, tolerability, overactive bladder

  8. Early-Stage Hyperoxia Is Associated with Favorable Neurological Outcomes and Survival after Severe Traumatic Brain Injury: A Post-Hoc Analysis of the Brain Hypothermia Study.

    Science.gov (United States)

    Fujita, Motoki; Oda, Yasutaka; Yamashita, Susumu; Kaneda, Kotaro; Kaneko, Tadashi; Suehiro, Eiichi; Dohi, Kenji; Kuroda, Yasuhiro; Kobata, Hitoshi; Tsuruta, Ryosuke; Maekawa, Tsuyoshi

    2017-01-19

    The effects of hyperoxia on the neurological outcomes of patients with severe traumatic brain injury (TBI) are still controversial. We examined whether the partial pressure of arterial oxygen (PaO2) and hyperoxia were associated with neurological outcomes and survival by conducting post-hoc analyses of the Brain Hypothermia (B-HYPO) study, a multi-center randomized controlled trial of mild therapeutic hypothermia for severe TBI. The differences in PaO2 and PaO2/fraction of inspiratory oxygen (P/F) ratio on the 1st day of admission were compared between patients with favorable (n = 64) and unfavorable (n = 65) neurological outcomes and between survivors (n = 90) and deceased patients (n = 39). PaO2 and the P/F ratio were significantly greater in patients with favorable outcomes than in patients with unfavorable neurological outcomes (PaO2: 252 ± 122 vs. 202 ± 87 mm Hg, respectively, p = 0.008; P/F ratio: 455 ± 171 vs. 389 ± 155, respectively, p = 0.022) and in survivors than in deceased patients (PaO2: 242 ± 117 vs. 193 ± 75 mm Hg, respectively, p = 0.005; P/F ratio: 445 ± 171 vs. 370 ± 141, respectively, p = 0.018). Similar tendencies were observed in subgroup analyses in patients with fever control and therapeutic hypothermia, and in patients with an evacuated mass or other lesions (unevacuated lesions). PaO2 was independently associated with survival (odds ratio 1.008, p = 0.037). These results suggested that early-stage hyperoxia might be associated with favorable neurological outcomes and survival following severe TBI.

  9. Delphi consensus on the diagnosis and management of dyslipidaemia in chronic kidney disease patients: A post hoc analysis of the DIANA study.

    Science.gov (United States)

    Cases Amenós, Aleix; Pedro-Botet Montoya, Juan; Pascual Fuster, Vicente; Barrios Alonso, Vivencio; Pintó Sala, Xavier; Ascaso Gimilio, Juan F; Millán Nuñez-Cortés, Jesús; Serrano Cumplido, Adalberto

    This post hoc study analysed the perception of the relevance of chronic kidney disease (CKD) in dyslipidaemia screening and the choice of statin among primary care physicians (PCPs) and other specialists through a Delphi questionnaire. The questionnaire included 4 blocks of questions concerning dyslipidaemic patients with impaired carbohydrate metabolism. This study presents the results of the impact of CKD on screening and the choice of statin. Of the 497 experts included, 58% were PCPs and 42% were specialists (35.7% were nephrologists). There was consensus among both PCPs and specialists, with no difference between the two groups, that CKD patients should undergo dyslipidaemia screening and that this screening should be part of routine clinical practice. However, there was no consensus on considering the estimated glomerular filtration rate (eGFR) (although there was consensus among PCPs and nephrologists) or albuminuria when selecting a statin, or on determining albuminuria during follow-up after having initiated treatment with statins (although there was consensus among the nephrologists). The consensus on analysing the lipid profile in CKD patients suggests acknowledgment of the high cardiovascular risk of this condition. However, the lack of consensus on considering renal function or albuminuria, both when selecting a statin and during follow-up, suggests limited knowledge of the differences between statins in relation to CKD. It would therefore be advisable to develop a guideline/consensus document on the use of statins in CKD. Copyright © 2016 Sociedad Española de Nefrología. Published by Elsevier España, S.L.U. All rights reserved.

  10. Lenient vs. strict rate control in patients with atrial fibrillation and heart failure : a post-hoc analysis of the RACE II study

    NARCIS (Netherlands)

    Mulder, Bart A.; Van Veldhuisen, Dirk J.; Crijns, Harry J. G. M.; Tijssen, Jan G. P.; Hillege, Hans L.; Alings, Marco; Rienstra, Michel; Groenveld, Hessel F.; Van den Berg, Maarten P.; Van Gelder, Isabelle C.

    2013-01-01

    AIMS: It is unknown whether lenient rate control is an acceptable strategy in patients with AF and heart failure. We evaluated differences in outcome in patients with AF and heart failure treated with lenient or strict rate control. METHODS AND RESULTS: This post-hoc analysis of the RACE II trial in

  11. The effect of fluid resuscitation on the effective circulating volume in patients undergoing liver surgery: a post-hoc analysis of a randomized controlled trial

    NARCIS (Netherlands)

    Vos, J.J. (Jaap Jan); Kalmar, A.F.; H.G.D. Hendriks (Herman); J.F. Bakker (Jurriaan); Scheeren, T.W.L.

    2017-01-01

    textabstractTo assess the significance of an analogue of the mean systemic filling pressure (Pmsa) and its derived variables, in providing a physiology based discrimination between responders and non-responders to fluid resuscitation during liver surgery. A post-hoc analysis of data from 30 patients

  12. The effect of fluid resuscitation on the effective circulating volume in patients undergoing liver surgery : a post-hoc analysis of a randomized controlled trial

    NARCIS (Netherlands)

    Vos, Jaap Jan; Kalmar, A F; Hendriks, H. G. D.; Bakker, J.; Scheeren, T W L

    2017-01-01

    To assess the significance of an analogue of the mean systemic filling pressure (Pmsa) and its derived variables, in providing a physiology based discrimination between responders and non-responders to fluid resuscitation during liver surgery. A post-hoc analysis of data from 30 patients undergoing

  13. Increased serum potassium affects renal outcomes : a post hoc analysis of the Reduction of Endpoints in NIDDM with the Angiotensin II Antagonist Losartan (RENAAL) trial

    NARCIS (Netherlands)

    Miao, Y.; Dobre, D.; Lambers Heerspink, H. J.; Brenner, B. M.; Cooper, M. E.; Parving, H-H.; Shahinfar, S.; Grobbee, D.; de Zeeuw, D.

    2011-01-01

    To assess the effect of an angiotensin receptor blocker (ARB) on serum potassium and the effect of a serum potassium change on renal outcomes in patients with type 2 diabetes and nephropathy. We performed a post hoc analysis in patients with type 2 diabetes participating in the Reduction of Endpoint

  14. Influencing Anesthesia Provider Behavior Using Anesthesia Information Management System Data for Near Real-Time Alerts and Post Hoc Reports.

    Science.gov (United States)

    Epstein, Richard H; Dexter, Franklin; Patel, Neil

    2015-09-01

    In this review article, we address issues related to using data from anesthesia information management systems (AIMS) to deliver near real-time alerts via AIMS workstation popups and/or alphanumeric pagers and post hoc reports via e-mail. We focus on reports and alerts for influencing the behavior of anesthesia providers (i.e., anesthesiologists, anesthesia residents, and nurse anesthetists). Multiple studies have shown that anesthesia clinical decision support (CDS) improves adherence to protocols and increases financial performance through facilitation of billing, regulatory, and compliance documentation; however, improved clinical outcomes have not been demonstrated. We inform developers and users of feedback systems about the multitude of concerns to consider during development and implementation of CDS to increase its effectiveness and to mitigate its potentially disruptive aspects. We discuss the timing and modalities used to deliver messages, implications of outlier-only versus individualized feedback, the need to consider possible unintended consequences of such feedback, regulations, sustainability, and portability among systems. We discuss statistical issues related to the appropriate evaluation of CDS efficacy. We provide a systematic review of the published literature (indexed in PubMed) of anesthesia CDS and offer 2 case studies of CDS interventions using AIMS data from our own institution illustrating the salient points. Because of the considerable expense and complexity of maintaining near real-time CDS systems, as compared with providing individual reports via e-mail after the fact, we suggest that if the same goal can be accomplished via delayed reporting versus immediate feedback, the former approach is preferable. Nevertheless, some processes require near real-time alerts to produce the desired improvement. Post hoc e-mail reporting from enterprise-wide electronic health record systems is straightforward and can be accomplished using system

  15. Efficacy and Safety of Risedronate in Osteoporosis Subjects with Comorbid Diabetes, Hypertension, and/or Dyslipidemia: A Post Hoc Analysis of Phase III Trials Conducted in Japan.

    Science.gov (United States)

    Inoue, Daisuke; Muraoka, Ryoichi; Okazaki, Ryo; Nishizawa, Yoshiki; Sugimoto, Toshitsugu

    2016-02-01

    Many osteoporosis patients have comorbid diabetes mellitus (DM), hypertension (HT), and dyslipidemia (DL). However, whether such comorbidities alter the response to anti-osteoporotic treatment is unknown. We performed post hoc analyses of combined data from three Japanese phase III trials of risedronate to determine whether the presence of DM, HT, or DL affects its efficacy and safety. Data from 885 subjects who received 48-week treatment with risedronate were collected and combined from the three phase III trials. They were divided into two groups by the presence or absence of comorbidities: DM (n = 53) versus non-DM (n = 832); HT (n = 278) versus non-HT (n = 607); and DL (n = 292) versus non-DL (n = 593). Bone mineral density (BMD), urinary type 1 collagen N-telopeptide (uNTX), and serum bone-specific alkaline phosphatase (BAP) were measured at baseline and sequentially until 48 weeks. Baseline BMD and bone markers did not differ between the two groups of any pairing. Overall, BMD increased by 5.52%, and uNTX and BAP decreased by 35.4 and 33.8%, respectively. Some bone markers were slightly lower in DM and DL subjects, but the responses to risedronate were not significantly different. Statin users had lower uNTX and BAP, but showed no difference in the treatment response. All the other medications had no apparent effect. Adverse event incidence was marginally higher in DL compared with non-DL (relative risk 1.06; 95% confidence interval 1.01-1.11), but was not related to an increase in any specific event. Risedronate shows consistent safety and efficacy in suppressing bone turnover and increasing BMD in osteoporosis patients with comorbid DM, HT, and/or DL.
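    The relative risk and confidence interval quoted above (RR 1.06; 95% CI 1.01-1.11) are standard 2×2-table statistics. As an illustration of how such figures are derived, here is a minimal Python sketch of a relative risk with a Wald-type confidence interval computed on the log scale; the event counts are invented for the example and are not taken from the trial.

```python
import math

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk of group A vs group B with a Wald-type 95% CI
    computed on the log scale (Katz method)."""
    p_a = events_a / n_a
    p_b = events_b / n_b
    rr = p_a / p_b
    # Standard error of log(RR)
    se = math.sqrt(1/events_a - 1/n_a + 1/events_b - 1/n_b)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

# Illustrative counts only (the trial's raw adverse-event counts are
# not reported in the abstract)
rr, lo, hi = relative_risk(200, 292, 380, 593)
```

A CI that excludes 1.0, as in the abstract, is what marks the excess risk as statistically significant.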

  16. The impact of treatment with indacaterol in patients with COPD: A post-hoc analysis according to GOLD 2011 categories A to D.

    Science.gov (United States)

    Kerstjens, Huib A M; Deslée, Gaëtan; Dahl, Ronald; Donohue, James F; Young, David; Lawrence, David; Kornmann, Oliver

    2015-06-01

    Indacaterol is an inhaled, once-daily, ultra-long-acting β2-agonist for the treatment of chronic obstructive pulmonary disease (COPD). We report on the effectiveness of indacaterol and other bronchodilators compared with placebo in patients across the Global Initiative for Chronic Obstructive Lung Disease (GOLD) 2011 categories A to D. A post-hoc, subgroup pooled analysis of 6-month efficacy data from three randomized, placebo-controlled, parallel-group studies involving 3862 patients was performed across GOLD 2011 categories A to D, according to baseline forced expiratory volume in 1 s (FEV1) % predicted, modified Medical Research Council (mMRC) dyspnea scale, and exacerbation history in the 12 months prior to entry. Efficacy of once-daily indacaterol 150 and 300 μg, open-label tiotropium 18 μg, twice-daily salmeterol 50 μg, and formoterol 12 μg was compared with placebo. End points analysed were trough FEV1, transition dyspnea index (TDI), and St George's Respiratory Questionnaire (SGRQ) total score, all at Week 26, and mean rescue medication use over 26 weeks. Indacaterol 150 and 300 μg significantly improved FEV1 compared with placebo across all GOLD groups. Indacaterol 150 and 300 μg also significantly improved TDI, SGRQ total score, and mean rescue medication use compared with placebo across most GOLD subgroups. Treatment selection according to patients' symptoms as well as lung function is an important consideration in maintenance treatment of COPD. Indacaterol 150 and 300 μg effectively improved lung function and symptoms in patients across all GOLD 2011 categories. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  17. Crenobalneotherapy (spa therapy) in patients with knee and generalized osteoarthritis: a post-hoc subgroup analysis of a large multicentre randomized trial.

    Science.gov (United States)

    Forestier, R; Genty, C; Waller, B; Françon, A; Desfour, H; Rolland, C; Roques, C-F; Bosson, J-L

    2014-06-01

    To determine whether the addition of spa therapy to home exercises provides any benefit over exercises and the usual treatment alone in the management of generalised osteoarthritis (GOA) associated with knee osteoarthritis (OA). This study was a post-hoc subgroup analysis of our randomised multicentre trial (www.clinicaltrial.gov: NCT00348777). Participants who met the inclusion criteria of generalised osteoarthritis (Kellgren, American College of Rheumatology, or Dougados criteria) were extracted from the original randomised controlled trial. They had been randomised using Zelen randomisation. The treatment group received 18 days of spa treatment in addition to a home exercise programme. The main outcome was the number of patients achieving minimal clinically important improvement (MCII) at six months (an improvement of ≥ 19.9 mm on the VAS pain scale and/or ≥ 9.1 points on a WOMAC function subscale) and no knee surgery. Secondary outcomes included the "patient acceptable symptom state" (PASS), defined as VAS pain ≤ 32.3 mm and/or WOMAC function subscale ≤ 31 points. From the original 462 participants, 214 patients could be categorized as having generalised osteoarthritis. At six months, 182 patients (88 in the control group and 94 in the spa group) were analysed for the main criterion. MCII was observed more often in the spa group (n=52/94 vs. 38/88, P=0.010). There was no difference for the PASS (n=19/88 vs. 26/94, P=0.343). This study indicates that spa therapy with home exercises may be superior to home exercise alone in the management of patients with GOA associated with knee OA. Copyright © 2014 Elsevier Masson SAS. All rights reserved.
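    The MCII and PASS responder definitions above are explicit enough to express as code. A small Python sketch using the thresholds quoted in the abstract (the function names are our own, and score changes are expressed as signed differences, so an improvement is negative):

```python
def mcii(vas_change_mm, womac_fn_change, had_knee_surgery=False):
    """Minimal clinically important improvement at six months:
    a drop of >= 19.9 mm on the VAS pain scale and/or >= 9.1 points
    on the WOMAC function subscale, with no knee surgery."""
    improved = vas_change_mm <= -19.9 or womac_fn_change <= -9.1
    return improved and not had_knee_surgery

def pass_state(vas_mm, womac_fn):
    """Patient acceptable symptom state: VAS pain <= 32.3 mm
    and/or WOMAC function subscale <= 31 points."""
    return vas_mm <= 32.3 or womac_fn <= 31
```

Note the "and/or" in both definitions: a patient qualifies on either criterion alone, which is why the two outcomes can disagree, as they do in the trial's results.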

  18. An extensible analysable system model

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, Rene Rydhof

    2008-01-01

    , this does not hold for real physical systems. Approaches such as threat modelling try to target the formalisation of the real-world domain, but still are far from the rigid techniques available in security research. Many currently available approaches to assurance of critical infrastructure security...... allows for easy development of analyses for the abstracted systems. We briefly present one application of our approach, namely the analysis of systems for potential insider threats....

  19. A Post Hoc Analysis of the Effect of Weight on Efficacy in Depressed Patients Treated With Desvenlafaxine 50 mg/d and 100 mg/d

    Science.gov (United States)

    Fayyad, Rana S.; Guico-Pabia, Christine J.; Boucher, Matthieu

    2015-01-01

    Objective: To assess the effect of baseline body mass index (BMI) on efficacy and weight change in adults with major depressive disorder (MDD) treated with desvenlafaxine or placebo in a pooled, post hoc analysis. Method: Adults with MDD were randomly assigned to placebo or desvenlafaxine (50 mg or 100 mg) in 8 short-term, double-blind studies and 1 longer-term randomized withdrawal study (the studies were published between 2007 and 2013). Change from baseline in 17-item Hamilton Depression Rating Scale (HDRS-17) total score at week 8 was analyzed in normal (BMI ≤ 25 kg/m2), overweight (BMI 25 to 30 kg/m2), and obese (BMI > 30 kg/m2) subgroups using analysis of covariance (ANCOVA). Weight change was analyzed in BMI subgroups using ANCOVA and a mixed-effects model for repeated measures. Results: Desvenlafaxine 50 mg/d or 100 mg/d improved HDRS-17 scores significantly from baseline to week 8 (last observation carried forward) versus placebo in all BMI subgroups (normal: n = 1,122; overweight: n = 960; obese: n = 1,302; all P ≤ .0027); improvement was greatest in normal BMI patients. There was a statistically significant decrease in weight with desvenlafaxine 50 mg/d and 100 mg/d versus placebo in all BMI subgroups over the 8-week studies; in the longer-term study, there was no significant difference in weight change for desvenlafaxine versus placebo in any BMI subgroup. Baseline BMI predicted weight change in short-term and longer-term desvenlafaxine treatment. Conclusions: Desvenlafaxine significantly improved symptoms of depression versus placebo regardless of baseline BMI. In all BMI subgroups, desvenlafaxine was associated with statistically significant weight loss (< 1 kg) versus placebo over 8 weeks, but no significant differences longer term. Trial Registration: ClinicalTrials.gov identifiers: NCT00072774, NCT00277823, NCT00300378, NCT00384033, NCT00798707, NCT00863798, NCT01121484, NCT00824291, NCT00887224 PMID:26644956

  20. Lurasidone for major depressive disorder with mixed features and anxiety: a post-hoc analysis of a randomized, placebo-controlled study.

    Science.gov (United States)

    Tsai, Joyce; Thase, Michael E; Mao, Yongcai; Ng-Mak, Daisy; Pikalov, Andrei; Loebel, Antony

    2017-04-01

    The aim of this post-hoc analysis was to evaluate the efficacy of lurasidone in treating patients with major depressive disorder (MDD) with mixed features who present with mild and moderate-to-severe levels of anxiety. The data in this analysis were derived from a study of patients meeting the DSM-IV-TR criteria for unipolar MDD, with a Montgomery-Åsberg Depression Rating Scale (MADRS) total score ≥26, presenting with two or three protocol-defined manic symptoms, who were randomized to 6 weeks of double-blind treatment with either lurasidone 20-60 mg/day (n=109) or placebo (n=100). Anxiety severity was evaluated using the Hamilton Anxiety Rating Scale (HAM-A). To evaluate the effect of baseline anxiety on response to lurasidone, the following two anxiety groups were defined: mild anxiety (HAM-A≤14) and moderate-to-severe anxiety (HAM-A≥15). Change from baseline in MADRS total score was analyzed for each group using a mixed model for repeated measures. Treatment with lurasidone was associated with a significant week 6 change versus placebo in MADRS total score for patients with both mild (-18.4 vs. -12.8) and moderate-to-severe anxiety, and in HAM-A total score for patients with both mild (-7.6 vs. -4.0) and moderate-to-severe anxiety. Overall, treatment with lurasidone was associated with significant improvement in both depressive and anxiety symptoms in subgroups with mild and moderate-to-severe levels of anxiety at baseline.

  1. Impact of Bone-targeted Therapies in Chemotherapy-naïve Metastatic Castration-resistant Prostate Cancer Patients Treated with Abiraterone Acetate: Post Hoc Analysis of Study COU-AA-302

    Science.gov (United States)

    Saad, Fred; Shore, Neal; Van Poppel, Hendrik; Rathkopf, Dana E.; Smith, Matthew R.; de Bono, Johann S.; Logothetis, Christopher J.; de Souza, Paul; Fizazi, Karim; Mulders, Peter F.A.; Mainwaring, Paul; Hainsworth, John D.; Beer, Tomasz M.; North, Scott; Fradet, Yves; Griffin, Thomas A.; De Porre, Peter; Londhe, Anil; Kheoh, Thian; Small, Eric J.; Scher, Howard I.; Molina, Arturo; Ryan, Charles J.

    2016-01-01

    Background Metastatic castration-resistant prostate cancer (mCRPC) often involves bone, and bone-targeted therapy (BTT) has become part of the overall treatment strategy. Objective Investigation of outcomes for concomitant BTT in a post hoc analysis of the COU-AA-302 trial, which demonstrated an overall clinical benefit of abiraterone acetate (AA) plus prednisone over placebo plus prednisone in asymptomatic or mildly symptomatic chemotherapy-naïve mCRPC patients. Design, setting, and participants This report describes the third interim analysis (prespecified at 55% overall survival [OS] events) for the COU-AA-302 trial. Intervention Patients were grouped by concomitant BTT use or no BTT use. Outcome measurements and statistical analysis Radiographic progression-free survival and OS were coprimary end points. This report describes the third interim analysis (prespecified at 55% OS events) and involves patients treated with or without concomitant BTT during the COU-AA-302 study. Median follow-up for OS was 27.1 mo. Median time-to-event variables with 95% confidence intervals (CIs) were estimated using the Kaplan-Meier method. Adjusted hazard ratios (HRs), 95% CIs, and p values for concomitant BTT versus no BTT were obtained via Cox models. Results and limitations While the post hoc nature of the analysis is a limitation, superiority of AA and prednisone versus prednisone alone was demonstrated for clinical outcomes with or without BTT use. Compared with no BTT use, concomitant BTT significantly improved OS (HR 0.75; p = 0.01) and increased the time to ECOG deterioration (HR 0.75; p < 0.001) and time to opiate use for cancer-related pain (HR 0.80; p = 0.036). The safety profile of concomitant BTT with AA was similar to that reported for AA in the overall intent-to-treat population. Osteonecrosis of the jaw (all grade 1/2) with concomitant BTT use was reported in <3% of patients. 
Conclusions AA with concomitant BTT was safe and well tolerated in men with chemotherapy-naïve mCRPC.
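    The time-to-event machinery named in the methods (Kaplan-Meier medians, Cox models for hazard ratios) can be illustrated compactly. Below is a minimal, self-contained Python sketch of the Kaplan-Meier product-limit estimator and a median read-off, run on toy data rather than trial data; a Cox model needs a dedicated library and is omitted.

```python
def kaplan_meier(times, events):
    """Kaplan-Meier product-limit estimator.

    `events[i]` is True for an observed event and False for a
    censored observation. Returns the survival curve as a list of
    (event_time, survival_probability) steps."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    s = 1.0
    steps = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = removed = 0
        while i < len(data) and data[i][0] == t:
            d += 1 if data[i][1] else 0
            removed += 1
            i += 1
        if d:  # survival drops only at event times, not at censorings
            s *= (at_risk - d) / at_risk
            steps.append((t, s))
        at_risk -= removed
    return steps

def median_survival(steps):
    """Smallest event time at which S(t) falls to 0.5 or below."""
    for t, s in steps:
        if s <= 0.5:
            return t
    return None  # median not reached

# Toy follow-up times (months), not trial data; third subject censored
curve = kaplan_meier([1, 2, 3, 4, 5], [True, True, False, True, True])
```

Censored subjects leave the risk set without dropping the curve, which is why censoring-heavy data can leave the median "not reached", a common situation in interim analyses like the one described.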

  2. Effects of Safinamide on Pain in Fluctuating Parkinson’s Disease Patients: A Post-Hoc Analysis

    Science.gov (United States)

    Cattaneo, Carlo; Barone, Paolo; Bonizzoni, Erminio; Sardina, Marco

    2016-01-01

    Background: Pain, a frequent non-motor symptom in Parkinson’s Disease (PD), significantly impacts on quality of life. Safinamide is a new drug with dopaminergic and non-dopaminergic properties, approved in Europe as adjunct therapy to levodopa for the treatment of fluctuating PD patients. Results from two 24-month, double-blind, placebo-controlled studies demonstrated that safinamide has positive effects on both motor functions and quality of life in PD patients. Objective: To investigate the effects of safinamide on pain management in PD patients with motor fluctuations using pooled data from studies 016 and SETTLE. Methods: This post-hoc analysis evaluated the reduction of concomitant pain treatments and the changes in the scores of the items related to pain of the Parkinson’s Disease Quality of Life Questionnaire (PDQ-39). A path analysis was performed in order to examine direct and indirect associations between safinamide and PDQ-39 pain-related items assessed after 6-months of treatment. Results: The percentage of patients with no pain treatments at the end of the trials was significantly lower in the safinamide group compared to the placebo group. Safinamide 100 mg/day significantly reduced on average the individual use of pain treatments by ≈24% and significantly improved two out of three PDQ-39 pain-related items of the “Bodily discomfort” domain. Path analysis showed that the direct effect of safinamide on pain accounted for about 80% of the total effect. Conclusions: These results suggest that safinamide may have a positive effect on pain, one of the most underestimated non-motor symptoms. Prospective studies are warranted to investigate this potential benefit. PMID:27802242
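    Path analysis of the kind described, splitting a total effect into direct and indirect components, can be sketched with two ordinary least-squares regressions. The variables and coefficients below are synthetic stand-ins, not the study's data; for linear models the identity total = direct + indirect (a × b) holds exactly, which is what lets a "share of the total effect that is direct" figure like the 80% above be reported.

```python
import numpy as np

def ols(y, *cols):
    """OLS coefficients (intercept first) via least squares."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Synthetic data: x = treatment, m = mediator, y = outcome
rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(scale=0.5, size=n)
y = 0.8 * x + 0.3 * m + rng.normal(scale=0.5, size=n)

total = ols(y, x)[1]       # total effect of x on y
direct = ols(y, x, m)[1]   # direct effect, adjusting for the mediator
a = ols(m, x)[1]           # x -> m path
b = ols(y, x, m)[2]        # m -> y path
indirect = a * b
share_direct = direct / total  # fraction of the total effect that is direct
```

Real path analyses add standard errors and fit indices via dedicated software, but the decomposition itself is this simple in the linear case.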

  3. Efficacy of lisdexamfetamine dimesylate in children with attention-deficit/hyperactivity disorder previously treated with methylphenidate: a post hoc analysis

    Directory of Open Access Journals (Sweden)

    Jain Rakesh

    2011-11-01

    Full Text Available Abstract Background Attention-deficit/hyperactivity disorder (ADHD) is a common neurobehavioral psychiatric disorder that afflicts children, with a reported prevalence of 2.4% to 19.8% worldwide. Stimulants (methylphenidate [MPH] and amphetamine) are considered first-line ADHD pharmacotherapy. MPH is a catecholamine reuptake inhibitor, whereas amphetamines have additional presynaptic activity. Although MPH and amphetamine can effectively manage ADHD symptoms in most pediatric patients, many still fail to respond optimally to either. After administration, the prodrug stimulant lisdexamfetamine dimesylate (LDX) is converted to l-lysine and therapeutically active d-amphetamine in the blood. The objective of this study was to evaluate the clinical efficacy of LDX in children with ADHD who remained symptomatic (ie, nonremitters; ADHD Rating Scale IV [ADHD-RS-IV] total score > 18) on MPH therapy prior to enrollment in a 4-week placebo-controlled LDX trial, compared with the overall population. Methods In this post hoc analysis of data from a multicenter, randomized, double-blind, forced-dose titration study, we evaluated the clinical efficacy of LDX in children aged 6-12 years with and without prior MPH treatment at screening. ADHD symptoms were assessed using the ADHD-RS-IV scale, the Conners' Parent Rating Scale-Revised short form (CPRS-R), and the Clinical Global Impressions-Improvement (CGI-I) scale at screening, baseline, and endpoint. ADHD-RS-IV total and CPRS-R ADHD Index scores were summarized as mean (SD). Clinical response for the subgroup analysis was defined as a ≥ 30% reduction from baseline in ADHD-RS-IV score and a CGI-I score of 1 or 2. The Dunnett test was used to compare change from baseline in all groups. Number needed to treat to achieve one clinical responder or one symptomatic remitter was calculated as the reciprocal of the difference in their proportions on active treatment and placebo at endpoint. Results Of 290 randomized participants enrolled, 28

  4. A novel method to prioritize RNAseq data for post-hoc analysis based on absolute changes in transcript abundance.

    Science.gov (United States)

    McNutt, Patrick; Gut, Ian; Hubbard, Kyle; Beske, Phil

    2015-06-01

    The use of fold-change (FC) to prioritize differentially expressed genes (DEGs) for post-hoc characterization is a common technique in the analysis of RNA sequencing datasets. However, the use of FC can overlook certain populations of DEGs, such as high-copy-number transcripts that undergo metabolically expensive changes in expression yet fail to exceed the ratiometric FC cut-off, thereby missing potentially important biological information. Here we evaluate an alternative approach to prioritizing RNAseq data based on absolute changes in normalized transcript counts (ΔT) between control and treatment conditions. In five pairwise comparisons with a wide range of effect sizes, rank-ordering of DEGs based on the magnitude of ΔT produced a power curve-like distribution, in which 4.7-5.0% of transcripts were responsible for 36-50% of the cumulative change. Thus, differential gene expression is characterized by the high production-cost expression of a small number of genes (large ΔT genes), while the differential expression of the majority of genes involves a much smaller metabolic investment by the cell. To determine whether the large ΔT datasets are representative of coordinated changes in the transcriptional program, we evaluated large ΔT genes for enrichment of gene ontologies (GOs) and predicted protein interactions. In comparison to randomly selected DEGs, the large ΔT transcripts were significantly enriched for both GOs and predicted protein interactions. Furthermore, enrichments were consistent with the biological context of each comparison yet distinct from those produced using equal-sized populations of large FC genes, indicating that the large ΔT genes represent an orthogonal transcriptional response. Finally, the composition of the large ΔT gene sets was unique to each pairwise comparison, indicating that they represent coherent and context-specific responses to biological conditions rather than the non-specific upregulation of a family of genes.
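    The ΔT prioritization described above reduces to ranking transcripts by absolute change in normalized counts and checking how much of the cumulative change the top few percent carry. A toy Python sketch (gene names and counts are invented for illustration):

```python
def prioritize(counts_control, counts_treated, top_fraction=0.05):
    """Rank transcripts by absolute change in normalized counts
    (delta-T) and report the share of total change carried by the
    top fraction. Inputs are dicts: gene -> normalized count."""
    delta = {g: abs(counts_treated[g] - counts_control[g])
             for g in counts_control}
    ranked = sorted(delta, key=delta.get, reverse=True)
    k = max(1, int(len(ranked) * top_fraction))
    total = sum(delta.values())
    share = sum(delta[g] for g in ranked[:k]) / total
    return ranked[:k], share

ctrl = {"gene_a": 10000, "gene_b": 100, "gene_c": 50, "gene_d": 40}
trt  = {"gene_a": 14000, "gene_b": 300, "gene_c": 55, "gene_d": 42}
top, share = prioritize(ctrl, trt, top_fraction=0.25)
# gene_a (delta-T = 4000) tops the delta-T ranking even though its
# 1.4-fold change is smaller than gene_b's 3-fold change
```

This captures the paper's central contrast: an FC ranking would put gene_b first, while the ΔT ranking surfaces the high-copy-number transcript whose change is metabolically far more expensive.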

  5. Graphical models for genetic analyses

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt; Sheehan, Nuala A.

    2003-01-01

    This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas of graphical models and genetics. The potential of graphical models is explored and illustrated through a number of example applications where the genetic element is substantial or dominating....


  7. Post-hoc principal component analysis on a largely illiterate elderly population from North-west India to identify important elements of mini-mental state examination

    Directory of Open Access Journals (Sweden)

    Sunil Kumar Raina

    2016-01-01

    Background: The Mini-Mental State Examination (MMSE) scale measures cognition using specific elements that can be isolated, defined, and subsequently measured. This study was conducted with the aim of analyzing the factorial structure of the MMSE in a largely illiterate, elderly population in India and reducing the number of variables to a few meaningful and interpretable combinations. Methodology: Principal component analysis (PCA) was performed post-hoc on the data generated by a research project conducted to estimate the prevalence of dementia in four geographically defined habitations in the Himachal Pradesh state of India. Results: Questions on orientation and registration account for a high percentage of cumulative variance in comparison to other questions. Discussion: The PCA conducted on data derived from a largely illiterate population reveals that the most important components to consider for the estimation of cognitive impairment in an illiterate Indian population are temporal orientation, spatial orientation, and immediate memory.
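    As a sketch of the analysis style described above, PCA can be carried out by eigendecomposition of the item covariance matrix, with the largest-variance components identifying the most informative items. The data below are random toy values, not the study's MMSE responses:

    ```python
    import numpy as np

    # Minimal PCA via eigendecomposition of the covariance matrix,
    # illustrating how item-level scores (e.g. MMSE elements) can be
    # reduced to a few components. Random toy data, not the study's.
    rng = np.random.default_rng(0)
    scores = rng.normal(size=(100, 6))          # 100 subjects x 6 items
    centered = scores - scores.mean(axis=0)     # center each item
    cov = np.cov(centered, rowvar=False)        # 6 x 6 item covariance
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]           # largest variance first
    explained = eigvals[order] / eigvals.sum()  # variance explained per component
    components = centered @ eigvecs[:, order]   # subject scores per component
    print(explained.round(2))
    ```

    The `explained` vector is what underlies the abstract's "percentage of cumulative variance": items loading heavily on the first few components (here, hypothetically, orientation and registration items) dominate the reduced representation.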

  8. The efficacy of levomilnacipran ER across symptoms of major depressive disorder: a post hoc analysis of 5 randomized, double-blind, placebo-controlled trials.

    Science.gov (United States)

    McIntyre, Roger S; Gommoll, Carl; Chen, Changzheng; Ruth, Adam

    2016-10-01

    A post hoc analysis evaluated the effects of levomilnacipran ER on individual symptoms and symptom domains in adults with major depressive disorder (MDD). Data were pooled from 5 Phase III trials comprising 2598 patients. Effects on depression symptoms were analyzed based on change from baseline in individual Montgomery-Åsberg Depression Rating Scale (MADRS) item scores. Additional evaluations included resolution of individual symptoms (defined as a MADRS item score ≤1 at end of treatment) and concurrent resolution of all 10 MADRS items, all MADRS6 subscale items, and all items included in different symptom clusters (Dysphoria, Retardation, Vegetative Symptoms, Anhedonia). Significantly greater mean improvements were found on all MADRS items except Reduced Appetite with levomilnacipran ER treatment compared with placebo. Resolution of individual symptoms occurred more frequently with levomilnacipran ER than placebo for each MADRS item (all P values significant), indicating efficacy of levomilnacipran ER across depression symptoms and symptom domains.

  9. The Value of Abdominal Drainage After Laparoscopic Cholecystectomy for Mild or Moderate Acute Calculous Cholecystitis: A Post Hoc Analysis of a Randomized Clinical Trial.

    Science.gov (United States)

    Prevot, Flavien; Fuks, David; Cosse, Cyril; Pautrat, Karine; Msika, Simon; Mathonnet, Muriel; Khalil, Haitham; Mauvais, François; Regimbeau, Jean-Marc

    2016-11-01

    Although the preoperative management of mild and moderate (Grade I-II) acute calculous cholecystitis (ACC) has been standardized, there is no consensus on the value of abdominal drainage after early cholecystectomy. In a post hoc analysis of a randomized controlled trial (NCT01015417) focused on the value of postoperative antibiotic therapy in patients with ACC, we determined the value of abdominal drainage in patients having undergone laparoscopic cholecystectomy for Grade I-II ACC. All postoperative complications were analyzed after using a propensity score. A post hoc test was used to assess the statistical robustness of our results. Of the 414 enrolled patients, 178 did not have abdominal drainage (forming the no-drainage group) and 236 had drainage (the drainage group). After matching on the propensity score, the deep incisional site infection rate was 1.1% versus 0.8% (p = 0.78). Results were similar for superficial incisional site infections, distant infections, overall morbidity, and the readmission rate. Only hospital length of stay was significantly longer in the drainage group (3.3 vs. 5.1 days, p = 0.003). Neither abdominal drainage nor the absence of postoperative antibiotic therapy was found to be a risk factor for deep incisional site infections. The use of abdominal drainage depends on the surgeon's personal preferences but is often used in high-risk populations. However, abdominal drainage does not appear to be of any benefit (in terms of postoperative outcomes) and may even compromise recovery in patients having undergone early laparoscopic cholecystectomy for mild or moderate ACC.

  10. Nausea and Vomiting following Balanced Xenon Anesthesia Compared to Sevoflurane: A Post-Hoc Explorative Analysis of a Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Astrid V Fahlenkamp

    Like other inhalational anesthetics, xenon seems to be associated with post-operative nausea and vomiting (PONV). We assessed nausea incidence following balanced xenon anesthesia compared to sevoflurane, and dexamethasone for its prophylaxis, in a randomized controlled trial with post-hoc explorative analysis. 220 subjects with elevated PONV risk (Apfel score ≥ 2) undergoing elective abdominal surgery were randomized to receive xenon or sevoflurane anesthesia and dexamethasone or placebo after written informed consent. 93 subjects in the xenon group and 94 subjects in the sevoflurane group completed the trial. General anesthesia was maintained with 60% xenon or 2.0% sevoflurane. Dexamethasone 4 mg or placebo was administered in the first hour. Subjects were analyzed for nausea and vomiting in predefined intervals during a 24-h post-anesthesia follow-up. Logistic regression, controlled for dexamethasone and anesthesia/dexamethasone interaction, showed a significantly elevated risk of developing nausea following xenon anesthesia (OR 2.30, 95% CI 1.02-5.19, p = 0.044). Early-onset nausea incidence was 46% after xenon and 35% after sevoflurane anesthesia (p = 0.138). After xenon, nausea occurred significantly earlier (p = 0.014), was more frequent, and was rated worse in the beginning. Dexamethasone did not markedly reduce nausea occurrence in either group. Late-onset nausea showed no considerable difference between the groups. In our study setting, xenon anesthesia was associated with an elevated risk of nausea in sensitive subjects. Dexamethasone 4 mg was not effective in preventing nausea in our study. Group size or dosage might have been too small, and the change of statistical analysis parameters in the post-hoc evaluation might have further limited our results. Further trials will be needed to address prophylaxis of xenon-induced nausea. EU Clinical Trials EudraCT-2008-004132-20; ClinicalTrials.gov NCT00793663.

  11. Challenges and Opportunities in Analysing Students Modelling

    Science.gov (United States)

    Blanco-Anaya, Paloma; Justi, Rosária; Díaz de Bustamante, Joaquín

    2017-01-01

    Modelling-based teaching activities have been designed and analysed from distinct theoretical perspectives. In this paper, we use one of them--the model of modelling diagram (MMD)--as an analytical tool in a regular classroom context. This paper examines the challenges that arise when the MMD is used as an analytical tool to characterise the…

  12. Duloxetine for the management of diabetic peripheral neuropathic pain: evidence-based findings from post hoc analysis of three multicenter, randomized, double-blind, placebo-controlled, parallel-group studies

    DEFF Research Database (Denmark)

    Kajdasz, Daniel K; Iyengar, Smriti; Desaiah, Durisala

    2007-01-01

    OBJECTIVE: This post hoc analysis was aimed to summarize the efficacy and tolerability of duloxetine as represented by number needed to treat (NNT) and number needed to harm (NNH) to provide a clinically useful assessment of the position of duloxetine among current agents used to treat diabetic p...

  13. History of early abuse as a predictor of treatment response in patients with fibromyalgia : A post-hoc analysis of a 12-week, randomized, double-blind, placebo-controlled trial of paroxetine controlled release

    NARCIS (Netherlands)

    Pae, Chi-Un; Masand, Prakash S.; Marks, David M.; Krulewicz, Stan; Han, Changsu; Peindl, Kathleen; Mannelli, Paolo; Patkar, Ashwin A.

    2009-01-01

    Objectives. We conducted a post-hoc analysis to determine whether a history of physical or sexual abuse was associated with response to treatment in a double-blind, randomized, placebo-controlled trial of paroxetine controlled release (CR) in fibromyalgia. Methods. A randomized, double-blind,


  15. The lipid-lowering effects of lomitapide are unaffected by adjunctive apheresis in patients with homozygous familial hypercholesterolaemia – a post-hoc analysis of a Phase 3, single-arm, open-label trial

    Science.gov (United States)

    Stefanutti, C; Blom, DJ; Averna, MR; Meagher, EA; Theron, HdT; Marais, AD; Hegele, RA; Sirtori, CR; Shah, PK; Gaudet, D; Vigna, GB; Sachais, BS; Di Giacomo, S; du Plessis, AME; Bloedon, LT; Balser, J; Rader, DJ; Cuchel, M

    2015-01-01

    Objective Lomitapide (a microsomal triglyceride transfer protein inhibitor) is an adjunctive treatment for homozygous familial hypercholesterolaemia (HoFH), a rare genetic condition characterised by elevated low-density lipoprotein-cholesterol (LDL-C), and premature, severe, accelerated atherosclerosis. Standard of care for HoFH includes lipid-lowering drugs and lipoprotein apheresis. We conducted a post-hoc analysis using data from a Phase 3 study to assess whether concomitant apheresis affected the lipid-lowering efficacy of lomitapide. Methods Existing lipid-lowering therapy, including apheresis, was to remain stable from Week −6 to Week 26. Lomitapide dose was escalated on the basis of individual safety/tolerability from 5 mg to 60 mg a day (maximum). The primary endpoint was mean percent change in LDL-C from baseline to Week 26 (efficacy phase), after which patients remained on lomitapide through Week 78 for safety assessment and further evaluation of efficacy. During this latter period, apheresis could be adjusted. We analysed the impact of apheresis on LDL-C reductions in patients receiving lomitapide. Results Of the 29 patients that entered the efficacy phase, 18 (62%) were receiving apheresis at baseline. Twenty-three patients (13 receiving apheresis) completed the Week 26 evaluation. Of the six patients who discontinued in the first 26 weeks, five were receiving apheresis. There were no significant differences in percent change from baseline of LDL-C at Week 26 in patients treated (−48%) and not treated (−55%) with apheresis (p=0.545). Changes in Lp(a) levels were modest and not different between groups (p=0.436). Conclusion The LDL-C lowering efficacy of lomitapide is unaffected by lipoprotein apheresis. PMID:25897792

  16. Effect of moderate alcohol consumption on fetuin-A levels in men and women: post-hoc analyses of three open-label randomized crossover trials

    NARCIS (Netherlands)

    Joosten, M.M.; Schrieks, I.C.; Hendriks, H.F.J.

    2014-01-01

    Background Fetuin-A, a liver-derived glycoprotein that impairs insulin-signalling, has emerged as a biomarker for diabetes risk. Although moderate alcohol consumption has been inversely associated with fetuin-A, data from clinical trials are lacking. Thus, we evaluated whether moderate alcohol consu


  20. Indomethacin reduces glomerular and tubular damage markers but not renal inflammation in chronic kidney disease patients: a post-hoc analysis.

    Science.gov (United States)

    de Borst, Martin H; Nauta, Ferdau L; Vogt, Liffert; Laverman, Gozewijn D; Gansevoort, Ron T; Navis, Gerjan

    2012-01-01

    Under specific conditions, non-steroidal anti-inflammatory drugs (NSAIDs) may be used to lower therapy-resistant proteinuria. The potentially beneficial anti-proteinuric, tubulo-protective, and anti-inflammatory effects of NSAIDs may be offset by an increased risk of (renal) side effects. We investigated the effect of indomethacin on urinary markers of glomerular and tubular damage and renal inflammation. We performed a post-hoc analysis of a prospective open-label crossover study in chronic kidney disease patients (n = 12) with mild renal function impairment and stable residual proteinuria of 4.7±4.1 g/d. After a wash-out period of six weeks without any RAAS-blocking agents or other therapy to lower proteinuria (untreated proteinuria (UP)), patients subsequently received indomethacin 75 mg BID for 4 weeks (NSAID). Healthy subjects (n = 10) screened for kidney donation served as controls. Urine and plasma levels of total IgG, IgG4, KIM-1, beta-2-microglobulin, H-FABP, MCP-1 and NGAL were determined using ELISA. Following NSAID treatment, 24-h urinary excretion of glomerular and proximal tubular damage markers was reduced in comparison with the period without anti-proteinuric treatment (total IgG: UP 131 [38-513] vs NSAID 38 [17-218] mg/24 h). Whether the observed glomerulo- and tubulo-protective effects outweigh the possible long-term side effects of NSAID treatment remains to be determined.

  1. A post-hoc analysis of reduction in diabetic foot ulcer size at 4 weeks as a predictor of healing by 12 weeks.

    Science.gov (United States)

    Snyder, Robert J; Cardinal, Matthew; Dauphinée, Damien M; Stavosky, James

    2010-03-01

    Percent area reduction (PAR) after 4 weeks of diabetic foot ulcer (DFU) treatment has been suggested as a clinical monitoring parameter to distinguish DFUs that will heal within 12 weeks from those that will not despite standard wound care. The purpose of this post-hoc analysis of control DFU treatment outcomes from two published, randomized, controlled studies was to assess the relationship between PAR during early standard wound care and ulcer closure by week 12. The proportion of DFUs healed after 12 weeks was 57% (39 out of 69; 95% confidence interval [CI], 44% to 68%) in study A and 52% (38 out of 73; 95% CI, 40% to 64%) in study B for wounds with ≥ 50% PAR by week 4, and 5% (three out of 64; 95% CI, 1% to 13%) and 2% (one out of 44; 95% CI, 0.1% to 12%), respectively, for DFUs with < 50% PAR. Protocols of care should be re-evaluated if ≥ 50% PAR is not achieved by week 4. Studies to assess DFU healing before and after 4 weeks of standard wound care are needed to further refine these guidelines of care.
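    The ≥ 50% PAR benchmark described above is a simple calculation over two wound-area measurements; a minimal sketch, with hypothetical wound areas:

    ```python
    # Percent area reduction (PAR) from baseline to week 4, and the
    # >=50% PAR decision rule suggested for re-evaluating care.
    # Wound areas (e.g. in cm^2) are hypothetical examples.

    def percent_area_reduction(baseline_area, week4_area):
        """Percent reduction in wound area from baseline to week 4."""
        return 100.0 * (baseline_area - week4_area) / baseline_area

    def on_track(baseline_area, week4_area, threshold=50.0):
        """True if the DFU meets the >=50% PAR benchmark at week 4."""
        return percent_area_reduction(baseline_area, week4_area) >= threshold

    print(on_track(4.0, 1.8))   # 55% PAR -> True
    print(on_track(4.0, 3.0))   # 25% PAR -> False
    ```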

  2. Post hoc analysis of the efficacy and safety of desvenlafaxine 50 mg/day in a randomized, placebo-controlled study of perimenopausal and postmenopausal women with major depressive disorder.

    Science.gov (United States)

    Kornstein, Susan G; Clayton, Anita; Bao, Weihang; Guico-Pabia, Christine J

    2014-08-01

    This post hoc analysis assessed the efficacy of desvenlafaxine 50 mg/day for treating major depressive disorder in perimenopausal versus postmenopausal women enrolled in a 10-week, double-blind, placebo-controlled study. Perimenopausal and postmenopausal women (40-70 y) diagnosed with major depressive disorder were randomly assigned to receive desvenlafaxine 50 mg/day or placebo. Changes from baseline in the primary efficacy variable (17-item Hamilton Rating Scale for Depression [HAM-D17] score, week 8) and in other secondary efficacy variables (Sheehan Disability Scale and Menopause Rating Scale) were analyzed using analysis of covariance with treatment, region, and baseline in the model. Clinical Global Impressions-Improvement Scale was analyzed with the Cochran-Mantel-Haenszel test. Response and remission rates were evaluated using logistic regression with treatment, region, and baseline HAM-D17 in the model. Of 426 women (desvenlafaxine, n = 216; placebo, n = 210) included in this analysis, 135 (32%) were perimenopausal and 291 (68%) were postmenopausal at baseline. In both subgroups, improvement from baseline in HAM-D17 scores was significantly greater for desvenlafaxine 50 mg/day than for placebo. Menopause status and time since menopause did not significantly affect HAM-D17 total score. The drug-placebo difference in Sheehan Disability Scale scores was significant in perimenopausal women (-9.3 vs. -5.1) but not in postmenopausal women. Desvenlafaxine 50 mg/day is effective in treating depression in both perimenopausal and postmenopausal women. Placebo response on measures of functional impairment is lower in perimenopausal women than in postmenopausal women, resulting in a greater apparent treatment benefit with desvenlafaxine among perimenopausal women.

  3. Wasting and sudden cardiac death in hemodialysis patients: a post hoc analysis of 4D (Die Deutsche Diabetes Dialyse Studie).

    Science.gov (United States)

    Drechsler, Christiane; Grootendorst, Diana C; Pilz, Stefan; Tomaschitz, Andreas; Krane, Vera; Dekker, Friedo; März, Winfried; Ritz, Eberhard; Wanner, Christoph

    2011-10-01

    Wasting is common in hemodialysis patients and often is accompanied by cardiovascular disease and inflammation. The cardiovascular risk profile meaningfully changes with the progression of kidney disease, and little is known about the impact of wasting on specific clinical outcomes. This study examined the effects of wasting on the various components of cardiovascular outcome and deaths caused by infection in hemodialysis patients. Prospective cohort study. 1,255 hemodialysis patients from 178 centers participating in Die Deutsche Diabetes Dialyse Studie (4D) in 1998-2004. Moderate wasting was defined as body mass index, albumin, and creatinine values less than the median (26.7 kg/m(2), 3.8 g/dL, and 6.8 mg/dL, respectively) and C-reactive protein level less than the median (5 mg/L) at baseline. Severe wasting was defined as body mass index, albumin, and creatinine levels less than the median and C-reactive protein level greater than the median at baseline. Risks of sudden cardiac death (SCD), myocardial infarction (MI), stroke, combined cardiovascular events, deaths due to infection, and all-cause mortality were determined using Cox regression analyses during a median of 4 years of follow-up. 196 patients had wasting (severe, n = 109; moderate, n = 87). Overall, 617 patients died (160 of SCD and 128 of infectious deaths). Furthermore, 469 patients experienced a cardiovascular event, with MI and stroke occurring in 200 and 103 patients, respectively. Compared with patients without wasting (n = 1,059), patients with severe wasting had significantly increased risks of SCD (adjusted HR, 1.8; 95% CI, 1.1-3.1), all-cause mortality (adjusted HR, 1.8; 95% CI, 1.4-2.4), and deaths due to infection (adjusted HR, 2.3; 95% CI, 1.2-4.3). In contrast, MI was not affected. The increased risk of cardiovascular events (adjusted HR, 1.5; 95% CI, 1.0-2.1) was explained mainly by the effect of wasting on SCD. Limitation: selective patient cohort. Wasting was associated strongly with SCD, but

  4. Externalizing Behaviour for Analysing System Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof

    2013-01-01

    System models have recently been introduced to model organisations and evaluate their vulnerability to threats, especially insider threats. For the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outside attackers. Therefore, many attacks are considerably easier for insiders to perform than for outsiders. However, current models do not support explicit specification of different behaviours. Instead, behaviour is deeply embedded in the analyses supported by the models, meaning that it is a complex, if not impossible, task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make behaviour a separate component in system models, and explore how to integrate....

  5. Does education level affect the efficacy of a community based salt reduction program? - A post-hoc analysis of the China Rural Health Initiative Sodium Reduction Study (CRHI-SRS).

    Science.gov (United States)

    Wang, Xin; Li, Xian; Vaartjes, Ilonca; Neal, Bruce; Bots, Michiel L; Hoes, Arno W; Wu, Yangfeng

    2016-08-11

    Whether educational level influences the effects of health education is not clearly defined. This study examined whether the impact of a community-based dietary salt reduction program was affected by the level of education of participants. The China Rural Health Initiative Sodium Reduction Study (CRHI-SRS) was a cluster-randomized controlled trial conducted in 120 villages from five Northern Chinese provinces. The intervention comprised a village-wide health education program and availability of salt substitute at village shops. 24-h urine samples were collected from 1903 participants for primary evaluation of the intervention effect. A post-hoc analysis was done to explore heterogeneity of intervention effects by education level using generalized estimating equations. All models were adjusted for age, sex, body mass index and province. Daily salt intake was lower in the intervention group than in the control group at all educational levels, with no evidence of a difference in the effect of the intervention across different levels of education. The P value for the interaction term between education level and the intervention was 0.35. There was likewise no evidence of an interaction for effects of the intervention on potassium intake (p = 0.71), the sodium-to-potassium ratio (p = 0.07), or knowledge and behaviors related to salt (all p > 0.05). The study suggests that the effects of the intervention were achieved regardless of the level of education and that the intervention should therefore be broadly effective in rural Chinese populations. The trial was registered with clinicaltrials.gov (NCT01259700).

  6. Maternal waterpipe smoke exposure and the risk of asthma and allergic diseases in childhood: A post hoc analysis.

    Science.gov (United States)

    Waked, Mirna; Salameh, Pascale

    2015-01-01

    This analysis was conducted with the objective of evaluating the association between passive waterpipe smoke exposure and asthma and allergies among Lebanese children. Data were taken from a cross-sectional study of children from public and private schools. A sample of 22 schools participated in the study, where standardized written core questionnaires were distributed. Students aged 5-12 years filled in the questionnaires at home, while 13-14-year-old students filled them in in class. In total, 5522 children were evaluated for the prevalence of asthma, allergic rhinitis and atopic eczema, and their associated factors, including waterpipe exposure due to parents' smoking. The descriptive results of parental smoking were as follows: among mothers, 1609 (29%) smoked cigarettes, 385 (7%) smoked waterpipe and 98 (1.8%) smoked both; among fathers, 2449 (44.2%) smoked cigarettes, 573 (10.3%) smoked waterpipe and 197 (3.5%) smoked both. Maternal waterpipe smoking was significantly and moderately associated with allergic diseases, while paternal waterpipe smoking was not associated with any of the diseases. Parental cigarette smoking demonstrated some positive associations: father's cigarette smoking did not show an association with dermatitis or asthma diagnosed by a physician, while mother's cigarette smoking showed a positive association only with probable asthma. Moreover, no interactions between cigarette and waterpipe smoking were observed. Maternal waterpipe smoking should be regarded as a high-risk behavior; however, additional studies are necessary to confirm this finding. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  7. Pregabalin versus SSRIs and SNRIs in benzodiazepine-refractory outpatients with generalized anxiety disorder: a post hoc cost-effectiveness analysis in usual medical practice in Spain

    Directory of Open Access Journals (Sweden)

    De Salas-Cansado M

    2012-06-01

    Marina De Salas-Cansado,1 José M Olivares,2 Enrique Álvarez,3 Jose L Carrasco,4 Andoni Barrueta,5 Javier Rejas5; 1Trial Form Support Spain, Madrid; 2Department of Psychiatry, Hospital Meixoeiro, Complejo Hospitalario Universitario, Vigo; 3Department of Psychiatry, Hospital de la Santa Creu i San Pau, Barcelona; 4Department of Psychiatry, Hospital Clínico San Carlos, Madrid; 5Health Outcomes Research Department, Medical Unit, Pfizer Spain, Alcobendas, Madrid, Spain. Background: Generalized anxiety disorder (GAD) is a prevalent health condition which seriously affects both patient quality of life and the National Health System. The aim of this research was to carry out a post hoc cost-effectiveness analysis of the effect of pregabalin versus selective serotonin reuptake inhibitors (SSRIs)/serotonin norepinephrine reuptake inhibitors (SNRIs) in treated benzodiazepine-refractory outpatients with GAD. Methods: This post hoc cost-effectiveness analysis used secondary data extracted from the 6-month cohort, prospective, noninterventional ADAN study, which was conducted to ascertain the cost of illness in GAD subjects diagnosed according to Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition criteria. Benzodiazepine-refractory subjects were those who claimed persistent symptoms of anxiety and showed a suboptimal response (Hamilton Anxiety Rating Scale ≥ 16) to benzodiazepines, alone or in combination, over 6 months. Patients could switch to pregabalin (as monotherapy or add-on) or to an SSRI or SNRI, alone or in combination. Effectiveness was expressed as quality-adjusted life years gained, and the perspective was that of the National Health System in the year 2008. A sensitivity analysis was performed using bootstrapping techniques (10,000 resamples) in order to obtain a cost-effectiveness plane and a corresponding acceptability curve. Results: A total of 282 subjects (mean Hamilton Anxiety Rating Scale score 25.8) were

  8. Records of pan (floodplain wetland) sedimentation as an approach for post-hoc investigation of the hydrological impacts of dam impoundment: The Pongolo river, KwaZulu-Natal.

    Science.gov (United States)

    Heath, S K; Plater, A J

    2010-07-01

    River impoundment by dams has far-reaching consequences for downstream floodplains in terms of hydrology, water quality, geomorphology, ecology and ecosystem services. With the imperative of economic development, there is the danger that potential environmental impacts are not assessed adequately or monitored appropriately. Here, an investigation of sediment composition of two pans (floodplain wetlands) in the Pongolo River floodplain, KwaZulu-Natal, downstream of the Pongolapoort dam constructed in 1974, is considered as a method for post-hoc assessment of the impacts on river hydrology, sediment supply and water quality. Bumbe and Sokhunti pans have contrasting hydrological regimes in terms of their connection to the main Pongolo channel: Bumbe is a shallow ephemeral pan and Sokhunti is a deep, perennial water body. The results of X-ray fluorescence (XRF) geochemical analysis of their sediment records over a depth of >1 m show that whilst the two pans exhibit similar sediment composition and variability in their lower part, Bumbe pan exhibits a shift toward increased fine-grained mineral supply and associated nutrient influx at a depth of c. 45 cm, whilst Sokhunti pan is characterised by increased biogenic productivity at a depth of c. 26 cm due to enhanced nutrient status. The underlying cause is interpreted as a shift in hydrology to a 'post-dam' flow regime of reduced flood frequencies with more regular baseline flows which reduce the average flow velocity. In addition, Sokhunti shows a greater sensitivity to soil influx during flood events due to the nature of its 'background' of autochthonous biogenic sedimentation. The timing of the overall shift in sediment composition and the dates of the mineral inwash events are not well defined, but the potential for these wetlands as sensitive recorders of dam-induced changes in floodplain hydrology, especially those with a similar setting to Sokhunti pan, is clearly demonstrated. Copyright 2010 Elsevier Ltd.

  9. Factors associated with dose escalation of fesoterodine for treatment of overactive bladder in people >65 years of age: A post hoc analysis of data from the SOFIA study.

    Science.gov (United States)

    Wagg, Adrian; Darekar, Amanda; Arumi, Daniel; Khullar, Vik; Oelke, Matthias

    2015-06-01

    To investigate factors that may influence dose escalation of antimuscarinics for overactive bladder (OAB) in older patients and how dose escalation affects treatment efficacy. A post hoc analysis of data from the 12-week randomized, placebo-controlled phase of the SOFIA study investigating treatment with fesoterodine in older people with OAB. Predictors and outcomes in patients aged ≥65 years with OAB who did or did not choose to escalate from fesoterodine 4 to 8 mg were assessed before the first dose-escalation choice point (week 4) and at the end of the study (week 12). Variables that significantly increased the likelihood of dose escalation were, at baseline, body mass index (OR: 1.06, 95% CI 1.01, 1.12; P = 0.0222) and male gender (OR: 2.06, 95% CI 1.28, 3.32; P = 0.0028) and, at week 4, change from baseline in urgency episodes (OR: 1.12, 95% CI 1.05, 1.20; P = 0.0008) and patient perception of bladder control (PPBC) (OR: 1.44, 95% CI 1.12, 1.84; P = 0.004). At week 12, dose escalation was associated with slightly reduced treatment outcomes compared with week 4 non-escalators. No baseline disease-related factor associated with dose escalation was identified. Magnitude of change in urgency episodes and reduction in PPBC at 4 weeks were associated with dose escalation. These data may help healthcare providers make treatment decisions tailored to individual patients. At the end of treatment, improvements in efficacy and quality of life were achieved in both escalators and non-escalators. © 2014 Wiley Periodicals, Inc.
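
The odds ratios above come from logistic regression, where effects are multiplicative on the odds scale: a k-unit difference in a predictor scales the odds by OR^k. A minimal sketch of that arithmetic (the 5-unit BMI difference is a hypothetical illustration, not a comparison reported in the study):

```python
def scaled_odds_ratio(or_per_unit: float, k_units: float) -> float:
    """Odds ratio for a k-unit change in a predictor, assuming the usual
    logistic-regression linearity on the log-odds scale."""
    return or_per_unit ** k_units

# Hypothetical illustration: with the reported OR of 1.06 per BMI unit,
# a 5-unit higher BMI multiplies the odds of escalation by 1.06**5, about 1.34.
bmi_or_5_units = scaled_odds_ratio(1.06, 5)
```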

  10. Effects of statins on the progression of cerebral white matter lesion: Post hoc analysis of the ROCAS (Regression of Cerebral Artery Stenosis) study.

    Science.gov (United States)

    Mok, Vincent C T; Lam, Wynnie W M; Fan, Yu Hua; Wong, Adrian; Ng, Ping Wing; Tsoi, Tak Hon; Yeung, Vincent; Wong, Ka Sing

    2009-05-01

    Arteriosclerosis-related cerebral white matter lesion (WML) is associated with increased risk of death, stroke, dementia, depression, gait disturbance, and urinary incontinence. We investigated the effects of statins on WML progression by performing a post hoc analysis of the ROCAS (Regression of Cerebral Artery Stenosis) study, a randomized, double-blind, placebo-controlled study evaluating the effects of statins on asymptomatic middle cerebral artery stenosis progression among stroke-free individuals. Two hundred and eight randomized subjects were assigned to either placebo (n = 102) or simvastatin 20 mg daily (n = 106) for 2 years. Baseline severity of WML was graded visually as none, mild, or severe. WML volume (cm3) was determined quantitatively at baseline and at the end of the study using a semi-automated method based on MRI. The primary outcome was the change in WML volume over 2 years. After 2 years of follow-up, there was no significant difference in WML volume change between the active and placebo groups as a whole. However, stratified analysis showed that for those with severe WML at baseline, the median volume increase in the active group (1.9 cm3) was smaller than that in the placebo group (3.0 cm3; P = 0.047). Linear multivariate regression analysis identified baseline WML volume (beta = 0.63, P < 0.001) and simvastatin treatment (beta = -0.214, P = 0.043) as independent predictors of change in WML volume. Our findings suggest that statins may delay the progression of cerebral WML only among those who already have severe WML at baseline.

  11. Metabolic syndrome cluster does not provide incremental prognostic information in patients with stable cardiovascular disease: A post hoc analysis of the AIM-HIGH trial.

    Science.gov (United States)

    Lyubarova, Radmila; Robinson, Jennifer G; Miller, Michael; Simmons, Debra L; Xu, Ping; Abramson, Beth L; Elam, Marshall B; Brown, Todd M; McBride, Ruth; Fleg, Jerome L; Desvigne-Nickens, Patrice; Ayenew, Woubeshet; Boden, William E

    Metabolic syndrome (MS) is a well-known risk factor for the development of cardiovascular (CV) disease; yet, controversy persists whether it adds incremental prognostic value in patients with established CV disease. This study was performed to determine if MS is associated with worse CV outcomes in patients with established CV disease treated intensively with statins. We performed a post hoc analysis of the Atherothrombosis Intervention in Metabolic Syndrome with Low HDL/High Triglycerides and Impact on Global Health Outcomes trial, in which patients with established CV disease and atherogenic dyslipidemia (n = 3414) were randomly assigned to receive extended release niacin or placebo during a mean 36-month follow-up, to assess whether the presence of MS or the number of MS components contributed to CV outcomes. The composite primary end point of CV events occurred in 15.1% of patients without MS vs 13.8%, 16.9%, and 16.8% of patients with MS in the subsets with 3, 4, and 5 MS components, respectively (corresponding adjusted hazard ratios 0.9, 1.1, and 1.1 relative to patients without MS), P = .55. Comparing subgroups with 3 vs 4 or 5 MS components, there was no significant difference in either the composite primary end point or secondary end points. Patients with diabetes mellitus had higher event rates, with or without the presence of MS. The presence of MS was not associated with worse CV outcomes in the AIM-HIGH population. The rate of CV events in statin-treated Atherothrombosis Intervention in Metabolic Syndrome with Low HDL/High Triglycerides and Impact on Global Health Outcomes patients with MS was not significantly influenced by the number of MS components. Copyright © 2017 National Lipid Association. All rights reserved.

  12. Modelling and Analysing Socio-Technical Systems

    DEFF Research Database (Denmark)

    Aslanyan, Zaruhi; Ivanova, Marieta Georgieva; Nielson, Flemming

    2015-01-01

    with social engineering. Due to this combination of attack steps on technical and social levels, risk assessment in socio-technical systems is complex. Therefore, established risk assessment methods often abstract away the internal structure of an organisation and ignore human factors when modelling...... and assessing attacks. In our work we model all relevant levels of socio-technical systems, and propose evaluation techniques for analysing the security properties of the model. Our approach simplifies the identification of possible attacks and provides qualified assessment and ranking of attacks based...... on the expected impact. We demonstrate our approach on a home-payment system. The system is specifically designed to help elderly or disabled people, who may have difficulties leaving their home, to pay for some services, e.g., care-taking or rent. The payment is performed using the remote control of a television...

  13. Indomethacin reduces glomerular and tubular damage markers but not renal inflammation in chronic kidney disease patients: a post-hoc analysis.

    Directory of Open Access Journals (Sweden)

    Martin H de Borst

    Full Text Available Under specific conditions, non-steroidal anti-inflammatory drugs (NSAIDs) may be used to lower therapy-resistant proteinuria. The potentially beneficial anti-proteinuric, tubulo-protective, and anti-inflammatory effects of NSAIDs may be offset by an increased risk of (renal) side effects. We investigated the effect of indomethacin on urinary markers of glomerular and tubular damage and renal inflammation. We performed a post-hoc analysis of a prospective open-label crossover study in chronic kidney disease patients (n = 12) with mild renal function impairment and stable residual proteinuria of 4.7±4.1 g/d. After a wash-out period of six wks without any RAAS-blocking agents or other therapy to lower proteinuria (untreated proteinuria, UP), patients subsequently received indomethacin 75 mg BID for 4 wks (NSAID). Healthy subjects (n = 10) screened for kidney donation served as controls. Urine and plasma levels of total IgG, IgG4, KIM-1, beta-2-microglobulin, H-FABP, MCP-1 and NGAL were determined using ELISA. Following NSAID treatment, 24-h urinary excretion of glomerular and proximal tubular damage markers was reduced in comparison with the period without anti-proteinuric treatment (total IgG: UP 131[38-513] vs NSAID 38[17-218] mg/24 h, p<0.01; IgG4: 50[16-68] vs 10[1-38] mg/24 h, p<0.001; beta-2-microglobulin: 200[55-404] vs 50[28-110] ug/24 h, p = 0.03; KIM-1: 9[5-14] vs 5[2-9] ug/24 h, p = 0.01). Fractional excretions of these damage markers were also reduced by NSAID. The distal tubular marker H-FABP showed a trend to reduction following NSAID treatment. Surprisingly, NSAID treatment did not reduce urinary excretion of the inflammation markers MCP-1 and NGAL, but did reduce plasma MCP-1 levels, resulting in an increased fractional MCP-1 excretion. In conclusion, the anti-proteinuric effect of indomethacin is associated with reduced urinary excretion of glomerular and tubular damage markers, but not with reduced excretion of renal
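
The fractional excretions mentioned above are conventionally computed by indexing a marker's urine/plasma ratio to that of creatinine. A minimal sketch of the standard creatinine-indexed formula, with invented values (the study's actual measurements are not reproduced here):

```python
def fractional_excretion(u_x: float, p_x: float,
                         u_cr: float, p_cr: float) -> float:
    """FE_X (%) = (U_X / P_X) / (U_cr / P_cr) * 100, the conventional
    creatinine-indexed definition (units cancel if used consistently)."""
    return (u_x / p_x) / (u_cr / p_cr) * 100.0

# Invented values for illustration only:
fe = fractional_excretion(u_x=50.0, p_x=100.0, u_cr=5000.0, p_cr=1.0)
```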

  14. Evaluation of blood pressure and heart rate in patients with hypertension who received tapentadol extended release for chronic pain: a post hoc, pooled data analysis.

    Science.gov (United States)

    Biondi, David M; Xiang, Jim; Etropolski, Mila; Moskovitz, Bruce

    2014-08-01

    Hypertension is one of the most common co-existing conditions in patients with chronic pain, and the potential effects of an analgesic on heart rate and blood pressure are of particular concern for patients with hypertension. The purpose of this analysis was to evaluate changes in blood pressure and heart rate with tapentadol extended release (ER) treatment in patients with hypertension. We performed a post hoc analysis of data pooled from three randomized, placebo- and active-controlled, phase III studies of tapentadol ER for managing chronic osteoarthritis knee (NCT00421928, NCT00486811) or low back (NCT00449176) pain (15-week, double-blind treatment period). Data were independently analyzed for patients with a listed medical history of hypertension at baseline and patients with at least one listed concomitant antihypertensive medication at baseline. Heart rate, systolic blood pressure (SBP), and diastolic blood pressure (DBP) were measured at each visit. In patients with a listed medical history of hypertension (n = 1,464), least-squares mean (LSM [standard error (SE)]) changes from baseline to endpoint with placebo, tapentadol ER, and oxycodone HCl controlled release (CR), respectively, were -0.7 (0.44), 0.2 (0.43), and -0.9 (0.45) beats per minute (bpm) for heart rate; -2.4 (0.64), -2.7 (0.64), and -3.7 (0.67) mmHg for SBP; and -1.0 (0.39), -1.3 (0.39), and -2.3 (0.41) mmHg for DBP; in patients with at least one listed concomitant antihypertensive medication (n = 1,376), the LSM (SE) changes from baseline to endpoint were -0.6 (0.45), 0.1 (0.44), and -0.7 (0.47) bpm for heart rate; -1.8 (0.66), -3.3 (0.65), and -3.7 (0.69) mmHg for SBP; and -0.7 (0.40), -1.4 (0.40), and -2.3 (0.42) mmHg for DBP. No clinically meaningful mean changes in heart rate or blood pressure were observed for the evaluated cohorts of patients with hypertension who were treated with tapentadol ER (100-250 mg twice daily).

  15. Limitations of log-rank tests for analysing longevity data in biogerontology.

    Science.gov (United States)

    Le Bourg, Eric

    2014-08-01

    Log-rank tests are sometimes used to analyse longevity data when other tests should be preferred. When the experimental design involves more than one factor, some authors perform several log-rank tests on the same data, which increases the risk of wrongly concluding that a difference among groups exists and does not allow interactions to be tested. When analysing the effect of a single factor with more than two groups, some authors also perform several tests (e.g. comparing a control group to each of the experimental groups), because post hoc analysis is not available with log-rank tests. These errors prevent readers from fully appreciating the longevity results of these articles, and the problem could easily be overcome by using statistical methods suited to one-way or multi-way designs, such as Cox's models, analysis of variance, and generalised linear models.
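
The inflation of the false-positive risk from repeated log-rank tests can be made concrete: assuming independent tests at a nominal significance level (a simplification), the familywise error rate grows as 1 - (1 - alpha)^m for m tests.

```python
def familywise_error_rate(alpha: float, m: int) -> float:
    """Chance of at least one false positive across m independent tests,
    each run at significance level alpha (an idealised simplification)."""
    return 1.0 - (1.0 - alpha) ** m

# Comparing one control group against each of 4 experimental groups:
fwer = familywise_error_rate(0.05, 4)  # about 0.185, well above the nominal 0.05
```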

  16. Externalizing Behaviour for Analysing System Models

    NARCIS (Netherlands)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof; Kammüller, Florian

    Systems models have recently been introduced to model organisations and evaluate their vulnerability to threats and especially insider threats. Especially for the latter these models are very suitable, since insiders can be assumed to have more knowledge about the attacked organisation than outside

  17. Photovoltaic System Modeling. Uncertainty and Sensitivity Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Clifford W. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Martin, Curtis E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    We report an uncertainty and sensitivity analysis for modeling AC energy from photovoltaic systems. Output from a PV system is predicted by a sequence of models. We quantify uncertainty in the output of each model using empirical distributions of each model's residuals. We propagate uncertainty through the sequence of models by sampling these distributions to obtain an empirical distribution of a PV system's output. We consider models that: (1) translate measured global horizontal, direct and global diffuse irradiance to plane-of-array irradiance; (2) estimate effective irradiance; (3) predict cell temperature; (4) estimate DC voltage, current and power; (5) reduce DC power for losses due to inefficient maximum power point tracking or mismatch among modules; and (6) convert DC to AC power. Our analysis considers a notional PV system comprising an array of FirstSolar FS-387 modules and a 250 kW AC inverter; we use measured irradiance and weather at Albuquerque, NM. We found the uncertainty in PV system output to be relatively small, on the order of 1% for daily energy. We found that uncertainties in the models for POA irradiance and effective irradiance were the dominant contributors to uncertainty in predicted daily energy. Our analysis indicates that efforts to reduce the uncertainty in PV system output predictions may yield the greatest improvements by focusing on the POA and effective irradiance models.
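
The propagation scheme described (sample each model's residual distribution, then push the samples through the model sequence) can be sketched as follows; the two-stage chain and residual values below are invented placeholders, not the report's actual models:

```python
import random

def stage_poa(ghi: float) -> float:
    """Placeholder plane-of-array irradiance model (invented)."""
    return 0.9 * ghi

def stage_power(poa: float) -> float:
    """Placeholder irradiance-to-power model (invented)."""
    return 0.2 * poa

def propagate(ghi: float, res1: list, res2: list, n: int, seed: int = 0) -> list:
    """Empirical output distribution: sample one residual per model stage."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        poa = stage_poa(ghi) + rng.choice(res1)          # stage-1 uncertainty
        out.append(stage_power(poa) + rng.choice(res2))  # stage-2 uncertainty
    return out

# Empirical distribution of output for a fixed input:
samples = propagate(1000.0, [-10.0, 0.0, 10.0], [-1.0, 0.0, 1.0], n=1000)
spread = max(samples) - min(samples)
```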

  18. Bayesian Uncertainty Analyses Via Deterministic Model

    Science.gov (United States)

    Krzysztofowicz, R.

    2001-05-01

    Rational decision-making requires that the total uncertainty about a variate of interest (a predictand) be quantified in terms of a probability distribution, conditional on all available information and knowledge. Suppose the state-of-knowledge is embodied in a deterministic model, which is imperfect and outputs only an estimate of the predictand. Fundamentals are presented of three Bayesian approaches to producing a probability distribution of the predictand via any deterministic model. The Bayesian Processor of Output (BPO) quantifies the total uncertainty in terms of a posterior distribution, conditional on model output. The Bayesian Processor of Ensemble (BPE) quantifies the total uncertainty in terms of a posterior distribution, conditional on an ensemble of model output. The Bayesian Forecasting System (BFS) decomposes the total uncertainty into input uncertainty and model uncertainty, which are characterized independently and then integrated into a predictive distribution.
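
The BPO idea of converting a deterministic estimate into a posterior distribution can be illustrated with the textbook normal-normal conjugate update; this is a generic example, not the BPO's own formulation:

```python
def posterior_normal(prior_mean: float, prior_var: float,
                     estimate: float, error_var: float) -> tuple:
    """Normal-normal update: posterior of the predictand given a model
    estimate with known error variance (generic conjugate example)."""
    precision = 1.0 / prior_var + 1.0 / error_var
    post_var = 1.0 / precision
    post_mean = post_var * (prior_mean / prior_var + estimate / error_var)
    return post_mean, post_var

# Prior N(10, 4); deterministic model outputs 14 with error variance 4:
post = posterior_normal(10.0, 4.0, 14.0, 4.0)  # (12.0, 2.0)
```

Note how the posterior variance (2.0) is smaller than both the prior and error variances: combining the two sources of information tightens the distribution.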

  19. Analysing Social Epidemics by Delayed Stochastic Models

    Directory of Open Access Journals (Sweden)

    Francisco-José Santonja

    2012-01-01

    Full Text Available We investigate the dynamics of a delayed stochastic mathematical model to understand the evolution of alcohol consumption in Spain. A sufficient condition for stability in probability of the equilibrium point of the dynamic model with aftereffect and stochastic perturbations is obtained via Kolmanovskii and Shaikhet's general method of Lyapunov functionals construction. We conclude that alcohol consumption in Spain will be constant (with stability) in time, with around 36.47% nonconsumers, 62.94% non-risk consumers, and 0.59% risk consumers. This approach highlights the potential of such dynamical models for studying human behaviour.

  20. Modelling, analyses and design of switching converters

    Science.gov (United States)

    Cuk, S. M.; Middlebrook, R. D.

    1978-01-01

    A state-space averaging method for modelling switching dc-to-dc converters for both continuous and discontinuous conduction mode is developed. In each case the starting point is the unified state-space representation, and the end result is a complete linear circuit model, for each conduction mode, which correctly represents all essential features, namely, the input, output, and transfer properties (static dc as well as dynamic ac small-signal). While the method is generally applicable to any switching converter, it is extensively illustrated for the three common power stages (buck, boost, and buck-boost). The results for these converters are then easily tabulated owing to the fixed equivalent circuit topology of their canonical circuit model. The insights that emerge from the general state-space modelling approach lead to the design of new converter topologies through the study of generic properties of the cascade connection of basic buck and boost converters.
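
For the three power stages named above, the ideal continuous-conduction-mode dc conversion ratios M(D) = Vout/Vin are standard textbook results, stated here for orientation rather than derived from the abstract:

```python
def buck(d: float) -> float:
    """Ideal buck converter dc gain M(D) = D."""
    return d

def boost(d: float) -> float:
    """Ideal boost converter dc gain M(D) = 1/(1-D)."""
    return 1.0 / (1.0 - d)

def buck_boost(d: float) -> float:
    """Ideal buck-boost dc gain magnitude M(D) = D/(1-D) (output inverted)."""
    return d / (1.0 - d)

# At D = 0.5 the buck halves the input, the boost doubles it,
# and the buck-boost returns unity magnitude.
ratios = (buck(0.5), boost(0.5), buck_boost(0.5))
```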

  1. Effect of monthly high-dose vitamin D supplementation on falls and non-vertebral fractures: secondary and post-hoc outcomes from the randomised, double-blind, placebo-controlled ViDA trial.

    Science.gov (United States)

    Khaw, Kay-Tee; Stewart, Alistair W; Waayer, Debbie; Lawes, Carlene M M; Toop, Les; Camargo, Carlos A; Scragg, Robert

    2017-06-01

    Adults with low concentrations of 25-hydroxyvitamin D (25[OH]D) in blood have an increased risk of falls and fractures, but randomised trials of vitamin D supplementation have had inconsistent results. We aimed to assess the effect of high-dose vitamin D supplementation on fractures and falls. The Vitamin D Assessment (ViDA) Study was a randomised, double-blind, placebo-controlled trial of healthy volunteers aged 50-84 years conducted at one centre in Auckland, New Zealand. Participants were randomly assigned to receive either an initial oral dose of 200 000 IU (5·0 mg) colecalciferol (vitamin D3) followed by monthly 100 000 IU (2·5 mg) colecalciferol or equivalent placebo dosing. The prespecified primary outcome was cardiovascular disease and secondary outcomes were respiratory illness and fractures. Here, we report secondary outcome data for fractures and post-hoc outcome data for falls. Cox proportional hazards models were used to estimate hazard ratios (HRs) for time to first fracture or time to first fall in individuals allocated vitamin D compared with placebo. The analysis of fractures included all participants who gave consent and was by intention-to-treat; the analysis of falls included all individuals who returned one or more questionnaires. This trial is registered with the Australian New Zealand Clinical Trials Registry, number ACTRN12611000402943. Between April 5, 2011, and Nov 6, 2012, 5110 participants were recruited and randomly assigned either colecalciferol (n=2558) or placebo (n=2552). Two participants allocated placebo withdrew consent after randomisation; thus, a total of 5108 individuals were included in the analysis of fractures. The mean age of participants was 65·9 years (SD 8·3) and 2971 (58%) were men. The mean concentration of 25(OH)D in blood was 63 nmol/L (SD 24) at baseline, with 1534 (30%) having 25(OH)D concentrations lower than 50 nmol/L. Follow-up was until July 31, 2015, with a mean treatment duration of 3·4 years (SD 0

  2. Modelling and Analyses of Embedded Systems Design

    DEFF Research Database (Denmark)

    Brekling, Aske Wiid

    We present the MoVES languages: a language with which embedded systems can be specified at a stage in the development process where an application is identified and should be mapped to an execution platform (potentially multi- core). We give a formal model for MoVES that captures and gives......-based verification is a promising approach for assisting developers of embedded systems. We provide examples of system verifications that, in size and complexity, point in the direction of industrially-interesting systems....

  3. Clinical response to eliglustat in treatment-naïve patients with Gaucher disease type 1: Post-hoc comparison to imiglucerase-treated patients enrolled in the International Collaborative Gaucher Group Gaucher Registry

    Directory of Open Access Journals (Sweden)

    Jennifer Ibrahim

    2016-09-01

    Full Text Available Eliglustat is a recently approved oral therapy in the United States and Europe for adults with Gaucher disease type 1 who are CYP2D6 extensive, intermediate, or poor metabolizers (>90% of patients) that has been shown to decrease spleen and liver volume and increase hemoglobin concentrations and platelet counts in untreated adults with Gaucher disease type 1 and maintain these parameters in patients previously stabilized on enzyme replacement therapy. In a post-hoc analysis, we compared the results of eliglustat treatment in treatment-naïve patients in two clinical studies with the results of imiglucerase treatment among a cohort of treatment-naïve patients with comparable baseline hematologic and visceral parameters in the International Collaborative Gaucher Group Gaucher Registry. Organ volumes and hematologic parameters improved from baseline in both treatment groups, with a time course and degree of improvement in eliglustat-treated patients similar to imiglucerase-treated patients.

  4. Clinical response to eliglustat in treatment-naïve patients with Gaucher disease type 1: Post-hoc comparison to imiglucerase-treated patients enrolled in the International Collaborative Gaucher Group Gaucher Registry.

    Science.gov (United States)

    Ibrahim, Jennifer; Underhill, Lisa H; Taylor, John S; Angell, Jennifer; Peterschmitt, M Judith

    2016-09-01

    Eliglustat is a recently approved oral therapy in the United States and Europe for adults with Gaucher disease type 1 who are CYP2D6 extensive, intermediate, or poor metabolizers (> 90% of patients) that has been shown to decrease spleen and liver volume and increase hemoglobin concentrations and platelet counts in untreated adults with Gaucher disease type 1 and maintain these parameters in patients previously stabilized on enzyme replacement therapy. In a post-hoc analysis, we compared the results of eliglustat treatment in treatment-naïve patients in two clinical studies with the results of imiglucerase treatment among a cohort of treatment-naïve patients with comparable baseline hematologic and visceral parameters in the International Collaborative Gaucher Group Gaucher Registry. Organ volumes and hematologic parameters improved from baseline in both treatment groups, with a time course and degree of improvement in eliglustat-treated patients similar to imiglucerase-treated patients.

  5. Prior human papillomavirus-16/18 AS04-adjuvanted vaccination prevents recurrent high grade cervical intraepithelial neoplasia after definitive surgical therapy: Post-hoc analysis from a randomized controlled trial.

    Science.gov (United States)

    Garland, Suzanne M; Paavonen, Jorma; Jaisamrarn, Unnop; Naud, Paulo; Salmerón, Jorge; Chow, Song-Nan; Apter, Dan; Castellsagué, Xavier; Teixeira, Júlio C; Skinner, S Rachel; Hedrick, James; Limson, Genara; Schwarz, Tino F; Poppe, Willy A J; Bosch, F Xavier; de Carvalho, Newton S; Germar, Maria Julieta V; Peters, Klaus; Del Rosario-Raymundo, M Rowena; Catteau, Grégory; Descamps, Dominique; Struyf, Frank; Lehtinen, Matti; Dubin, Gary

    2016-12-15

    We evaluated the efficacy of the human papillomavirus (HPV)-16/18 AS04-adjuvanted vaccine in preventing HPV-related disease after surgery for cervical lesions in a post-hoc analysis of the PApilloma TRIal against Cancer In young Adults (PATRICIA; NCT00122681). Healthy women aged 15-25 years were randomized (1:1) to receive vaccine or control at months 0, 1 and 6 and followed for 4 years. Women were enrolled regardless of their baseline HPV DNA status, HPV-16/18 serostatus, or cytology, but excluded if they had previous or planned colposcopy. The primary and secondary endpoints of PATRICIA have been reported previously; the present post-hoc analysis evaluated efficacy in a subset of women who underwent an excisional procedure for cervical lesions after vaccination. The main outcome was the incidence of subsequent HPV-related cervical intraepithelial neoplasia grade 2 or greater (CIN2+) 60 days or more post-surgery. Other outcomes included the incidence of HPV-related CIN1+, and vulvar or vaginal intraepithelial neoplasia (VIN/VaIN) 60 days or more post-surgery. Of the total vaccinated cohort of 18,644 women (vaccine = 9,319; control = 9,325), 454 (vaccine = 190, control = 264) underwent an excisional procedure during the trial. Efficacy 60 days or more post-surgery for a first lesion, irrespective of HPV DNA results, was 88.2% (95% CI: 14.8, 99.7) against CIN2+ and 42.6% (-21.1, 74.1) against CIN1+. No VIN was reported and one woman in each group had VaIN2+ 60 days or more post-surgery. Women who undergo surgical therapy for cervical lesions after vaccination with the HPV-16/18 vaccine may continue to benefit from vaccination, with a reduced risk of developing subsequent CIN2+. © 2016 UICC.
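
The efficacy percentages quoted above follow the usual definition of vaccine efficacy as a relative risk reduction, VE (%) = (1 - RR) × 100. A minimal sketch with hypothetical case counts (not the PATRICIA data):

```python
def vaccine_efficacy(cases_vax: int, n_vax: int,
                     cases_ctrl: int, n_ctrl: int) -> float:
    """Vaccine efficacy (%) as relative risk reduction: (1 - RR) * 100."""
    rr = (cases_vax / n_vax) / (cases_ctrl / n_ctrl)
    return (1.0 - rr) * 100.0

# Hypothetical counts: 2 cases among 190 vaccinees vs 17 among 264 controls
ve = vaccine_efficacy(2, 190, 17, 264)  # about 83.7% efficacy
```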

  6. Remission and recovery associated with lurasidone in the treatment of major depressive disorder with subthreshold hypomanic symptoms (mixed features): post-hoc analysis of a randomized, placebo-controlled study with longer-term extension.

    Science.gov (United States)

    Goldberg, Joseph F; Ng-Mak, Daisy; Siu, Cynthia; Chuang, Chien-Chia; Rajagopalan, Krithika; Loebel, Antony

    2017-04-01

    This post-hoc analysis assessed rates of symptomatic and functional remission, as well as recovery (combination of symptomatic and functional remission), in patients treated with lurasidone for major depressive disorder (MDD) associated with subthreshold hypomanic symptoms (mixed features). Patients with MDD plus two or three manic symptoms (defined as per the DSM-5 mixed-features specifier) were randomly assigned to flexible-dose lurasidone 20-60 mg/day (n=109) or placebo (n=100) for 6 weeks, followed by a 3-month open-label, flexible-dose extension study for U.S. sites only (n=48). Cross-sectional recovery was defined as the presence of both symptomatic remission (Montgomery-Åsberg Depression Rating Scale score ≤ 12) and functional remission (all Sheehan Disability Scale [SDS] domain scores ≤3) at week 6, and at both months 1 and 3 of the extension study ("sustained recovery"). A significantly higher proportion of lurasidone-treated patients (31.3%) achieved recovery (assessed cross-sectionally) compared to placebo (12.2%, p=0.002) at week 6. The number of manic symptoms at baseline moderated the effect size for attaining cross-sectional recovery for lurasidone treatment (vs. placebo) (p=0.028). Sustained recovery rates were higher in patients initially treated with lurasidone (20.8%) versus placebo (12.5%). In this post-hoc analysis of a placebo-controlled study with open-label extension that involved patients with MDD and mixed features, lurasidone was found to significantly improve the rate of recovery at 6 weeks (vs. placebo) that was sustained at month 3 of the extension study. The presence of two (as opposed to three) manic symptoms moderated recovery at the acute study endpoint.
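
The cross-sectional recovery criterion used in this analysis (MADRS total ≤ 12 together with all SDS domain scores ≤ 3) is a simple conjunction; a direct encoding, with a function name of our own choosing:

```python
def is_recovered(madrs_total: int, sds_domains: list) -> bool:
    """Cross-sectional recovery: symptomatic remission (MADRS <= 12)
    plus functional remission (every SDS domain score <= 3)."""
    symptomatic = madrs_total <= 12
    functional = all(score <= 3 for score in sds_domains)
    return symptomatic and functional

# A patient with MADRS 10 and SDS domain scores 3, 2, 1 meets both criteria:
recovered = is_recovered(10, [3, 2, 1])
```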

  7. A post hoc analysis of negative symptoms and psychosocial function in patients with schizophrenia: a 40-week randomized, double-blind study of ziprasidone versus haloperidol followed by a 3-year double-blind extension trial.

    Science.gov (United States)

    Stahl, Stephen M; Malla, Ashok; Newcomer, John W; Potkin, Steven G; Weiden, Peter J; Harvey, Philip D; Loebel, Antony; Watsky, Eric; Siu, Cynthia O; Romano, Steve

    2010-08-01

    Schizophrenia is a persistent, lifelong illness such that enduring functional improvements may only occur over the course of years [corrected]. This post hoc analysis in stable outpatients with schizophrenia investigated the negative symptom efficacy and treatment outcomes of ziprasidone (80-160 mg/d given twice a day, mean modal dose of 112 mg/d; and 80-120 mg/d given every day, mean modal dose of 96 mg/d) versus haloperidol (5-20 mg/d, mean modal dose of 12 mg/d) in a randomized, 40-week, double-blind study, followed by a double-blind continuation trial that extended up to 156 additional weeks. Symptomatic and functional recovery criteria were met when subjects attained both negative symptom remission and adequate psychosocial functioning based on the 4 Quality-of-Life subscales (instrumental role, interpersonal relations, participation in community, and intrapsychic foundations). Negative symptom remission (P = 0.005), as well as sustained adequate functioning (6 months) in instrumental role (P = 0.04) and participation in community (P = 0.02), was associated with significantly shorter time to remission in the ziprasidone 80 to 160 mg group than in the haloperidol group, as was the combination of symptomatic and functional recovery during the 196-week double-blind study period. A similar pattern was observed for the ziprasidone 80 to 120 mg group, which showed significant differences versus haloperidol in negative symptom remission and instrumental role functioning (but not other Quality-of-Life subscale measures). The clinically relevant outcome differences detected in this post hoc exploratory analysis support the potential for both enhanced remission in negative symptoms and psychosocial recovery during long-term treatment with an atypical agent and add to our understanding regarding the degree to which negative symptom remission can be attained in the maintenance phase.

  8. Exploring the variation in implementation of a COPD disease management programme and its impact on health outcomes: a post hoc analysis of the RECODE cluster randomised trial.

    Science.gov (United States)

    Boland, Melinde R S; Kruis, Annemarije L; Huygens, Simone A; Tsiachristas, Apostolos; Assendelft, Willem J J; Gussekloo, Jacobijn; Blom, Coert M G; Chavannes, Niels H; Rutten-van Mölken, Maureen P M H

    2015-12-17

    This study aims to (1) examine the variation in implementation of a 2-year chronic obstructive pulmonary disease (COPD) management programme called RECODE, (2) analyse the facilitators and barriers to implementation and (3) investigate the influence of this variation on health outcomes. Implementation variation among the 20 primary-care teams was measured directly using a self-developed scale and indirectly through the level of care integration as measured with the Patient Assessment of Chronic Illness Care (PACIC) and the Assessment of Chronic Illness Care (ACIC). Interviews were held to obtain detailed information regarding the facilitators and barriers to implementation. Multilevel models were used to investigate the association between variation in implementation and change in outcomes. The teams implemented, on average, eight of the 19 interventions, and the specific package of interventions varied widely. Important barriers and facilitators of implementation were (in)sufficient motivation of healthcare provider and patient, the high starting level of COPD care, the small size of the COPD population per team, the mild COPD population, practicalities of the information and communication technology (ICT) system, and hurdles in reimbursement. Level of implementation as measured with our own scale and the ACIC was not associated with health outcomes. A higher level of implementation measured with the PACIC was positively associated with improved self-management capabilities, but this association was not found for other outcomes. There was a wide variety in the implementation of RECODE, associated with barriers at individual, social, organisational and societal level. There was little association between extent of implementation and health outcomes.

  9. Exploring the variation in implementation of a COPD disease management programme and its impact on health outcomes : A post hoc analysis of the RECODE cluster randomised trial

    NARCIS (Netherlands)

    M.R.S. Boland (Melinde); A.L. Kruis (Annemarije); S.A. Huygens (Simone); A. Tsiachristas (Apostolos); W.J.J. Assendelft (Willem); J. Gussekloo (Jacobijn); C.M.G. Blom (Coert); N.H. Chavannes (Nicolas); M.P.M.H. Rutten-van Mölken (Maureen)

    2015-01-01

    This study aims to (1) examine the variation in implementation of a 2-year chronic obstructive pulmonary disease (COPD) management programme called RECODE, (2) analyse the facilitators and barriers to implementation and (3) investigate the influence of this variation on health

  10. Exploring the variation in implementation of a COPD disease management programme and its impact on health outcomes: a post hoc analysis of the RECODE cluster randomised trial

    NARCIS (Netherlands)

    Boland, M.R.; Kruis, A.L.; Huygens, S.A.; Tsiachristas, A.; Assendelft, W.J.J.; Gussekloo, J.; Blom, C.M.G.; Chavannes, N.H.; Molken, M.P. Rutten-van

    2015-01-01

    This study aims to (1) examine the variation in implementation of a 2-year chronic obstructive pulmonary disease (COPD) management programme called RECODE, (2) analyse the facilitators and barriers to implementation and (3) investigate the influence of this variation on health outcomes.

  11. Exploring the variation in implementation of a COPD disease management programme and its impact on health outcomes: a post hoc analysis of the RECODE cluster randomised trial

    NARCIS (Netherlands)

    Boland, M.R.; Kruis, A.L.; Huygens, S.A.; Tsiachristas, A.; Assendelft, W.J.J.; Gussekloo, J.; Blom, C.M.G.; Chavannes, N.H.; Molken, M.P. Rutten-van

    2015-01-01

    This study aims to (1) examine the variation in implementation of a 2-year chronic obstructive pulmonary disease (COPD) management programme called RECODE, (2) analyse the facilitators and barriers to implementation and (3) investigate the influence of this variation on health outcomes. Implementati

  12. The association of the effect of lithium in the maintenance treatment of bipolar disorder with lithium plasma levels : a post hoc analysis of a double-blind study comparing switching to lithium or placebo in patients who responded to quetiapine (Trial 144)

    NARCIS (Netherlands)

    Nolen, Willem A.; Weisler, Richard H.

    Nolen WA, Weisler RH. The association of the effect of lithium in the maintenance treatment of bipolar disorder with lithium plasma levels: a post hoc analysis of a double-blind study comparing switching to lithium or placebo in patients who responded to quetiapine (Trial 144). Bipolar Disord 2012:

  13. VIPRE modeling of VVER-1000 reactor core for DNB analyses

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Y.; Nguyen, Q. [Westinghouse Electric Corporation, Pittsburgh, PA (United States)]; Cizek, J. [Nuclear Research Institute, Prague (Czech Republic)]

    1995-09-01

    Based on the one-pass modeling approach, the hot channels and the VVER-1000 reactor core can be modeled in 30 channels for DNB analyses using the VIPRE-01/MOD02 (VIPRE) code (VIPRE is owned by Electric Power Research Institute, Palo Alto, California). The VIPRE one-pass model does not compromise any accuracy in the hot channel local fluid conditions. Extensive qualifications include sensitivity studies of radial noding and crossflow parameters and comparisons with the results from THINC and CALOPEA subchannel codes. The qualifications confirm that the VIPRE code with the Westinghouse modeling method provides good computational performance and accuracy for VVER-1000 DNB analyses.

  14. Effect of maternal death reviews and training on maternal mortality among cesarean delivery : post-hoc analysis of a cluster-randomized controlled trial

    OpenAIRE

    Zongo, A.; Dumont, Alexandre; Fournier, P.; Traore, M.; Kouanda, S.; B. Sondo

    2015-01-01

    Objectives: To explore the differential effect of a multifaceted intervention on hospital-based maternal mortality between patients with cesarean and vaginal delivery in low-resource settings. Study design: We reanalyzed the data from a major cluster-randomized controlled trial, QUARITE (Quality of care, Risk management and technology in obstetrics). These subgroup analyses were not prespecified and were treated as exploratory. The intervention consisted of an initial interactive workshop and...

  15. Delphi consensus on the diagnosis and management of dyslipidaemia in chronic kidney disease patients: A post hoc analysis of the DIANA study

    Directory of Open Access Journals (Sweden)

    Aleix Cases Amenós

    2016-11-01

    Conclusions: The consensus to analyse the lipid profile in CKD patients suggests acknowledgement of the high cardiovascular risk of this condition. However, the lack of consensus in considering renal function or albuminuria, both when selecting a statin and during follow-up, suggests a limited knowledge of the differences between statins in relation to CKD. Thus, it would be advisable to develop a guideline/consensus document on the use of statins in CKD.

  16. Effect of Renin-Angiotensin-Aldosterone System Inhibition, Dietary Sodium Restriction, and/or Diuretics on Urinary Kidney Injury Molecule 1 Excretion in Nondiabetic Proteinuric Kidney Disease: A Post Hoc Analysis of a Randomized Controlled Trial

    Science.gov (United States)

    Waanders, Femke; Vaidya, Vishal S.; van Goor, Harry; Leuvenink, Henri; Damman, Kevin; Hamming, Inge; Bonventre, Joseph V.; Vogt, Liffert; Navis, Gerjan

    2012-01-01

    Background Tubulointerstitial damage plays an important role in chronic kidney disease (CKD) with proteinuria. Urinary kidney injury molecule 1 (KIM-1) reflects tubular KIM-1 and is considered a sensitive biomarker for early tubular damage. We hypothesized that a decrease in proteinuria by using therapeutic interventions is associated with decreased urinary KIM-1 levels. Study Design Post hoc analysis of a randomized, double-blind, placebo-controlled, crossover trial. Setting & Participants 34 proteinuric patients without diabetes from our outpatient renal clinic. Intervention Stepwise 6-week interventions of losartan, sodium restriction (low-sodium [LS] diet), their combination, losartan plus hydrochlorothiazide (HCT), and the latter plus an LS diet. Outcomes & Measurements Urinary excretion of KIM-1, total protein, and N-acetyl-β-D-glucosaminidase (NAG) as a positive control for tubular injury. Results Mean baseline urine protein level was 3.8 ± 0.4 (SE) g/d, and KIM-1 level was 1,706 ± 498 ng/d (increased compared with healthy controls; 74 ng/d). KIM-1 level was decreased by using placebo/LS (1,201 ± 388 ng/d; P = 0.04), losartan/high sodium (1,184 ± 296 ng/d; P = 0.09), losartan/LS (921 ± 176 ng/d; P = 0.008), losartan/high sodium plus HCT (862 ± 151 ng/d; P = 0.008) and losartan/LS plus HCT (743 ± 170 ng/d; P = 0.001). The decrease in urinary KIM-1 levels paralleled the decrease in proteinuria (R = 0.523; P < 0.001), but not blood pressure or creatinine clearance. 16 patients reached target proteinuria with protein less than 1 g/d, whereas KIM-1 levels normalized in only 2 patients. Urinary NAG level was increased at baseline and significantly decreased during the treatment periods of combined losartan plus HCT only. The decrease in urinary NAG levels was not closely related to proteinuria. Limitations Post hoc analysis. Conclusions Urinary KIM-1 level was increased in patients with nondiabetic CKD with proteinuria and decreased in parallel with
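
The "decrease in urinary KIM-1 levels paralleled the decrease in proteinuria (R = 0.523)" reported above is a simple correlation between paired per-patient changes. A minimal sketch with invented paired values (not the trial data):

```python
import numpy as np

# Hypothetical per-patient changes (invented, NOT the trial data): percent
# reduction in proteinuria and in urinary KIM-1 under treatment.
proteinuria_drop = np.array([10, 25, 40, 55, 60, 70, 80, 35, 20, 65], float)
kim1_drop = 0.8 * proteinuria_drop + np.array(
    [5, -4, 6, -8, 3, 2, -5, 4, -2, 1], float)   # small patient-level noise

# Pearson correlation coefficient, analogous to the R reported in the abstract.
r = np.corrcoef(proteinuria_drop, kim1_drop)[0, 1]
```

With noise this small relative to the spread of the proteinuria changes, `r` comes out strongly positive; the real data were noisier, hence the moderate R = 0.523.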

  17. Modelling longevity bonds: Analysing the Swiss Re Kortis bond

    OpenAIRE

    2015-01-01

    A key contribution to the development of the traded market for longevity risk was the issuance of the Kortis bond, the world's first longevity trend bond, by Swiss Re in 2010. We analyse the design of the Kortis bond, develop suitable mortality models to analyse its payoff and discuss the key risk factors for the bond. We also investigate how the design of the Kortis bond can be adapted and extended to further develop the market for longevity risk.

  18. Early Hepatic Dysfunction Is Associated with a Worse Outcome in Patients Presenting with Acute Respiratory Distress Syndrome: A Post-Hoc Analysis of the ACURASYS and PROSEVA Studies

    Science.gov (United States)

    Dizier, Stéphanie; Forel, Jean-Marie; Ayzac, Louis; Richard, Jean-Christophe; Hraiech, Sami; Lehingue, Samuel; Loundou, Anderson; Roch, Antoine; Guerin, Claude; Papazian, Laurent

    2015-01-01

    Introduction Bilirubin is a well-recognized marker of hepatic dysfunction in intensive care unit (ICU) patients. Multiple organ failure often complicates acute respiratory distress syndrome (ARDS) evolution and is associated with high mortality. The effect of early hepatic dysfunction on ARDS mortality has been poorly investigated. We evaluated the incidence and the prognostic significance of increased serum bilirubin levels in the initial phase of ARDS. Methods The data of 805 patients with ARDS were retrospectively analysed. This population was extracted from two recent multicenter, prospective and randomised trials. Patients presenting with ARDS with a ratio of the partial pressure of arterial oxygen to the fraction of inspired oxygen < 150 mmHg measured with a PEEP ≥ 5 cm of water were included. Total serum bilirubin was measured at inclusion and at days 2, 4, 7 and 14. The primary objective was to analyse the bilirubin at inclusion according to the 90-day mortality rate. Results The 90-day mortality rate was 33.8% (n = 272). The non-survivors were older, had higher Sepsis-related Organ Failure Assessment (SOFA) scores and were more likely to have a medical diagnosis on admission than the survivors. At inclusion, the SOFA score without the liver score (10.3±2.9 vs. 9.0±3.0, p<0.0001) and the serum bilirubin levels (36.1±57.0 vs. 20.5±31.5 μmol/L, p<0.0001) were significantly higher in the non-survivors than in the survivors. Age, the hepatic SOFA score, the coagulation SOFA score, the arterial pH level, and the plateau pressure were independently associated with 90-day mortality in patients with ARDS. Conclusion Bilirubin used as a surrogate marker of hepatic dysfunction and measured early in the course of ARDS was associated with the 90-day mortality rate. PMID:26636318

  19. Physicians' Experience with and Expectations of the Safety and Tolerability of WHO-Step III Opioids for Chronic (Low) Back Pain: Post Hoc Analysis of Data from a German Cross-Sectional Physician Survey

    Directory of Open Access Journals (Sweden)

    Michael A. Ueberall

    2015-01-01

    Objective. To describe physicians' daily-life experience with WHO-step III opioids in the treatment of chronic (low) back pain (CLBP). Methods. Post hoc analysis of data from a cross-sectional online survey of 4,283 German physicians. Results. With a reported median use in 17% of affected patients, WHO-step III opioids play a minor role in the treatment of CLBP in daily practice, associated with a broad spectrum of positive and negative effects. If prescribed, potent opioids were reported to show clinically relevant effects (such as ≥50% pain relief) in approximately 3 of 4 patients (median 72%). The analgesic effects reported were frequently accompanied by adverse events (AEs); only 20% of patients were reported to remain free of any AE. The most frequently reported AE was constipation (50%), also graded highest for AE-related daily-life restrictions (median 46%). Specific AE countermeasures were reported to be necessary in approximately half of patients (median 45%); nevertheless, reported AE-related premature discontinuation rates were high (median 22%). Fentanyl/morphine were the most/least prevalently prescribed potent opioids mentioned (median 20% versus 8%). Conclusion. Overall, use of WHO-step III opioids for CLBP is low. AEs, especially constipation, are commonly reported and interfere significantly with analgesic effects in daily practice. Nevertheless, beneficial effects outweigh related AEs in most patients with CLBP.

  20. Dose and aging effect on patients reported treatment benefit switching from the first overactive bladder therapy with tolterodine ER to fesoterodine: post-hoc analysis from an observational and retrospective study

    Directory of Open Access Journals (Sweden)

    Castro-Diaz David

    2012-07-01

    Background Previous randomized studies have demonstrated that fesoterodine significantly improves Overactive Bladder (OAB) symptoms and their assessment by patients compared with tolterodine extended-release (ER). This study aimed to assess the effect of aging and dose escalation on patient-reported treatment benefit after changing the first OAB therapy from tolterodine-ER to fesoterodine in daily clinical practice. Methods A post-hoc analysis of data from a retrospective, cross-sectional and observational study was performed in a cohort of 748 adult OAB patients (OAB-V8 score ≥8) who switched to fesoterodine from their first tolterodine-ER-based therapy within the 3–4 months before the study visit. Effect of fesoterodine doses (4 mg vs. 8 mg) and patient age ( Results Improvements were not affected by age. Fesoterodine 8 mg vs. 4 mg provided significant improvements in terms of treatment benefit [TBS 97.1% vs. 88.4%, p  Conclusions A change from tolterodine ER therapy to fesoterodine with dose escalation to 8 mg in symptomatic OAB patients seems to be associated with greater improvement in terms of both patient-reported treatment benefit and clinical global impression of change. Improvement was not affected by age.

  1. Dose and aging effect on patients reported treatment benefit switching from the first overactive bladder therapy with tolterodine ER to fesoterodine: post-hoc analysis from an observational and retrospective study.

    Science.gov (United States)

    Castro-Diaz, David; Miranda, Pilar; Sanchez-Ballester, Francisco; Lizarraga, Isabel; Arumí, Daniel; Rejas, Javier

    2012-07-26

    Previous randomized studies have demonstrated that fesoterodine significantly improves Overactive Bladder (OAB) symptoms and their assessment by patients compared with tolterodine extended-release (ER). This study aimed to assess the effect of aging and dose escalation on patient-reported treatment benefit, after changing their first OAB therapy with tolterodine-ER to fesoterodine in daily clinical practice. A post-hoc analysis of data from a retrospective, cross-sectional and observational study was performed in a cohort of 748 adult OAB patients (OAB-V8 score ≥8), who switched to fesoterodine from their first tolterodine-ER-based therapy within the 3-4 months before the study visit. Effect of fesoterodine doses (4 mg vs. 8 mg) and patient age (Fesoterodine 8 mg vs. 4 mg provides significant improvements in terms of treatment benefit [TBS 97.1% vs. 88.4%, p fesoterodine with dose escalation to 8 mg in symptomatic OAB patients seems to be associated with greater improvement in terms of both patient-reported treatment benefit and clinical global impression of change. Improvement was not affected by age.

  2. Healthy Aging 5 Years After a Period of Daily Supplementation With Antioxidant Nutrients: A Post Hoc Analysis of the French Randomized Trial SU.VI.MAX.

    Science.gov (United States)

    Assmann, Karen E; Andreeva, Valentina A; Jeandel, Claude; Hercberg, Serge; Galan, Pilar; Kesse-Guyot, Emmanuelle

    2015-10-15

    This study's objective was to investigate healthy aging in older French adults 5 years after a period of daily nutritional-dose supplementation with antioxidant nutrients. The study was based on the double-blind, randomized trial, Supplementation with Antioxidant Vitamins and Minerals (SU.VI.MAX) Study (1994-2002) and the SU.VI.MAX 2 Follow-up Study (2007-2009). During 1994-2002, participants received a daily combination of vitamin C (120 mg), β-carotene (6 mg), vitamin E (30 mg), selenium (100 µg), and zinc (20 mg) or placebo. Healthy aging was assessed in 2007-2009 by using multiple criteria, including the absence of major chronic disease and good physical and cognitive functioning. Data from a subsample of the SU.VI.MAX 2 cohort, initially free of major chronic disease, with a mean age of 65.3 years in 2007-2009 (n = 3,966), were used to calculate relative risks. Supplementation was associated with a greater healthy aging probability among men (relative risk = 1.16, 95% confidence interval: 1.04, 1.29) but not among women (relative risk = 0.98, 95% confidence interval: 0.86, 1.11) or all participants (relative risk = 1.07, 95% confidence interval: 0.99, 1.16). Moreover, exploratory subgroup analyses indicated effect modification by initial serum concentrations of zinc and vitamin C. In conclusion, an adequate supply of antioxidant nutrients (equivalent to quantities provided by a balanced diet rich in fruits and vegetables) may have a beneficial role for healthy aging.
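
The relative risks with 95% confidence intervals quoted above can be computed from 2×2 counts using a Wald interval on the log scale. A minimal sketch; the counts below are hypothetical (chosen so the point estimate matches the 1.16 quoted for men), NOT the SU.VI.MAX data:

```python
import math

def relative_risk(events_a, n_a, events_b, n_b, z=1.96):
    """Relative risk of group A vs. group B with a Wald 95% CI on the log scale."""
    rr = (events_a / n_a) / (events_b / n_b)
    se_log_rr = math.sqrt(1 / events_a - 1 / n_a + 1 / events_b - 1 / n_b)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical counts: 232/1000 healthy agers with supplementation
# vs. 200/1000 under placebo.
rr, lower, upper = relative_risk(232, 1000, 200, 1000)
```

An interval that crosses 1.0 corresponds to a non-significant association, as reported above for women and for the pooled sample.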

  3. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or m

  4. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Ihekwaba, Adoha

    2007-01-01

    A. Ihekwaba, R. Mardare. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems. Case study: NFkB system. In Proc. of International Conference of Computational Methods in Sciences and Engineering (ICCMSE), American Institute of Physics, AIP Proceedings, N 2...

  6. The method of characteristics applied to analyse 2DH models

    NARCIS (Netherlands)

    Sloff, C.J.

    1992-01-01

    To gain insight into the physical behaviour of 2D hydraulic models (mathematically formulated as a system of partial differential equations), the method of characteristics is used to analyse the propagation of physical meaningful disturbances. These disturbances propagate as wave fronts along bichar

  7. Analysing the temporal dynamics of model performance for hydrological models

    Directory of Open Access Journals (Sweden)

    D. E. Reusser

    2008-11-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure. Dealing with a set of performance measures evaluated at a high temporal resolution implies analyzing and interpreting a high dimensional data set. This paper presents a method for such a hydrological model performance assessment with a high temporal resolution and illustrates its application for two very different rainfall-runoff modeling case studies. The first is the Wilde Weisseritz case study, a headwater catchment in the eastern Ore Mountains, simulated with the conceptual model WaSiM-ETH. The second is the Malalcahuello case study, a headwater catchment in the Chilean Andes, simulated with the physics-based model Catflow. The proposed time-resolved performance assessment starts with the computation of a large set of classically used performance measures for a moving window. The key of the developed approach is a data-reduction method based on self-organizing maps (SOMs) and cluster analysis to classify the high-dimensional performance matrix. Synthetic peak errors are used to interpret the resulting error classes. The final outcome of the proposed method is a time series of the occurrence of dominant error types. For the two case studies analyzed here, 6 such error types have been identified. They show clear temporal patterns which can lead to the identification of model structural errors.

  8. Analysing the temporal dynamics of model performance for hydrological models

    Directory of Open Access Journals (Sweden)

    E. Zehe

    2009-07-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or model structure. Dealing with a set of performance measures evaluated at a high temporal resolution implies analyzing and interpreting a high dimensional data set. This paper presents a method for such a hydrological model performance assessment with a high temporal resolution and illustrates its application for two very different rainfall-runoff modeling case studies. The first is the Wilde Weisseritz case study, a headwater catchment in the eastern Ore Mountains, simulated with the conceptual model WaSiM-ETH. The second is the Malalcahuello case study, a headwater catchment in the Chilean Andes, simulated with the physics-based model Catflow. The proposed time-resolved performance assessment starts with the computation of a large set of classically used performance measures for a moving window. The key of the developed approach is a data-reduction method based on self-organizing maps (SOMs) and cluster analysis to classify the high-dimensional performance matrix. Synthetic peak errors are used to interpret the resulting error classes. The final outcome of the proposed method is a time series of the occurrence of dominant error types. For the two case studies analyzed here, 6 such error types have been identified. They show clear temporal patterns, which can lead to the identification of model structural errors.
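
The first step of the approach described above, evaluating classical performance measures on a moving window, can be sketched as follows. The data are synthetic and the SOM and cluster-analysis steps are omitted; only RMSE and the Nash-Sutcliffe efficiency (NSE) are computed here:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, <=0 is no better than the mean."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - np.mean(obs)) ** 2)

def windowed_performance(obs, sim, width):
    """Evaluate (RMSE, NSE) on a moving window; one row per window position."""
    rows = []
    for start in range(len(obs) - width + 1):
        o, s = obs[start:start + width], sim[start:start + width]
        rows.append((np.sqrt(np.mean((o - s) ** 2)), nse(o, s)))
    return np.array(rows)

# Synthetic example: the simulation misses an observed peak around t = 60,
# so windows overlapping the peak stand out as an error episode.
t = np.arange(100)
obs = np.sin(t / 8.0) + np.where((t > 55) & (t < 65), 1.0, 0.0)
sim = np.sin(t / 8.0)
perf = windowed_performance(obs, sim, width=20)
```

In the full method, many such measures form a high-dimensional performance matrix, which the SOM and cluster analysis then reduce to a time series of dominant error types.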

  9. Social Network Analyses and Nutritional Behavior: An Integrated Modeling Approach

    Directory of Open Access Journals (Sweden)

    Alistair McNair Senior

    2016-01-01

    Animals have evolved complex foraging strategies to obtain a nutritionally balanced diet and associated fitness benefits. Recent advances in nutrition research, combining state-space models of nutritional geometry with agent-based models of systems biology, show how nutrient-targeted foraging behavior can also influence animal social interactions, ultimately affecting collective dynamics and group structures. Here we demonstrate how social network analyses can be integrated into such a modeling framework and provide a tangible and practical analytical tool to compare experimental results with theory. We illustrate our approach by examining the case of nutritionally mediated dominance hierarchies. First, we show how nutritionally explicit agent-based models that simulate the emergence of dominance hierarchies can be used to generate social networks. Importantly, the structural properties of our simulated networks bear similarities to dominance networks of real animals (where conflicts are not always directly related to nutrition). Finally, we demonstrate how metrics from social network analyses can be used to predict the fitness of agents in these simulated competitive environments. Our results highlight the potential importance of nutritional mechanisms in shaping dominance interactions in a wide range of social and ecological contexts. Nutrition likely influences social interaction in many species, and yet a theoretical framework for exploring these effects is currently lacking. Combining social network analyses with computational models from nutritional ecology may bridge this divide, representing a pragmatic approach for generating theoretical predictions for nutritional experiments.
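
As a small illustration of deriving a dominance metric from competitive interactions of the kind such simulations produce, the sketch below computes David's score, a standard dominance index, from a win matrix. The agents and win counts are invented:

```python
import numpy as np

# Hypothetical win counts among 4 agents: wins[i, j] = times agent i beat agent j.
wins = np.array([
    [0, 5, 4, 6],
    [1, 0, 3, 4],
    [2, 1, 0, 3],
    [0, 2, 1, 0],
], dtype=float)

# Proportion-of-wins matrix P (guarding against dyads that never interacted).
total = wins + wins.T
P = wins / np.where(total == 0, 1, total)

# David's score: DS_i = w_i + w2_i - l_i - l2_i, where w/l are direct win/loss
# rates and w2/l2 weight them by the opponents' own win/loss rates.
w = P.sum(axis=1)
l = P.sum(axis=0)
w2 = P @ w
l2 = P.T @ l
davids_score = w + w2 - l - l2
ranking = np.argsort(-davids_score)   # most dominant agent first
```

In the framework described above, such per-agent network metrics would then be correlated with agent fitness in the simulated competitive environment.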

  10. Graphic-based musculoskeletal model for biomechanical analyses and animation.

    Science.gov (United States)

    Chao, Edmund Y S

    2003-04-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.

  11. Dose escalation improves therapeutic outcome: post hoc analysis of data from a 12-week, multicentre, double-blind, parallel-group trial of trospium chloride in patients with urinary urge incontinence

    Directory of Open Access Journals (Sweden)

    Bödeker Rolf-Hasso

    2010-09-01

    Background Flexible dosing of anticholinergics used for overactive bladder (OAB) treatment is a useful strategy in clinical practice for achieving a maximum effective and maximum tolerated level of therapeutic benefit. In this post hoc analysis we evaluated the efficacy and tolerability of trospium chloride treatment for urinary urge incontinence (UUI) with a focus on flexible dosing. Methods The data came from a 12-week, randomised, double-blind, phase IIIb study in which 1658 patients with urinary frequency plus urge incontinence received trospium chloride 15 mg TID (n = 828) or 2.5 mg oxybutynin hydrochloride TID (n = 830). After four weeks, daily doses were doubled and not readjusted in 29.2% (242/828) of patients in the trospium group and in 23.3% (193/830) in the oxybutynin group, until the end of treatment. We assessed the absolute reduction in weekly UUI episodes and the change in intensity of dry mouth, recorded in patients' micturition diaries. Adverse events were also evaluated. Statistics were descriptive. Results Dose escalation of either trospium or oxybutynin increased the reduction in UUI episodes in the population studied. At study end, there were no relevant differences between the "dose adjustment" subgroups and the respective "no dose adjustment" subgroups (trospium: P = 0.249; oxybutynin: P = 0.349). After dose escalation, worsening of dry mouth was higher in both dose-adjusted subgroups compared to the respective "no dose adjustment" subgroups (P P Conclusions Flexible dosing of trospium proved to be as effective as, but better tolerated than, the officially approved adjusted dose of oxybutynin. Trial registration (parent study) The study was registered with the German Federal Institute for Drugs and Medical Devices (BfArM, Berlin, Germany), registration number 4022383, as required at the time of planning this study.

  12. Onset of efficacy with acute long-acting injectable paliperidone palmitate treatment in markedly to severely ill patients with schizophrenia: post hoc analysis of a randomized, double-blind clinical trial

    Directory of Open Access Journals (Sweden)

    Ma Yi-Wen

    2011-04-01

    Background This post hoc analysis (trial registration: ClinicalTrials.gov NCT00590577) assessed the onset of efficacy and tolerability of acute treatment with once-monthly paliperidone palmitate (PP), a long-acting atypical antipsychotic initiated by day 1 and day 8 injections, in a markedly to severely ill schizophrenia population. Methods Subjects entering the 13-week, double-blind trial were randomized to PP (39, 156, or 234 mg [25, 100, and 150 mg eq of paliperidone, respectively]) or placebo. This subgroup analysis included those with a baseline Clinical Global Impressions-Severity (CGI-S) score indicating marked to severe illness. PP subjects received a 234-mg day 1 injection (deltoid), followed by their assigned dose on day 8 and monthly thereafter (deltoid or gluteal). Thus, data for the PP groups were pooled for days 4 and 8. Measures included the Positive and Negative Syndrome Scale (PANSS), CGI-S, Personal and Social Performance (PSP), and adverse events (AEs). Analysis of covariance (ANCOVA) and last-observation-carried-forward (LOCF) methodologies, without multiplicity adjustments, were used to assess changes in continuous measures. Onset of efficacy was defined as the first time point at which a treatment group showed significant PANSS improvement (assessed days 4, 8, 22, 36, 64, and 92) versus placebo that was maintained through end point. Results A total of 312 subjects met the inclusion criterion for this subgroup analysis. After the day 1 injection, mean PANSS total scores improved significantly with PP (all received 234 mg) versus placebo at day 4 (P = 0.012) and day 8 (P = 0.007). After the day 8 injection, a significant PANSS improvement persisted at all subsequent time points in the 234-mg group versus placebo (P P P P Conclusions In this markedly to severely ill population, acute treatment with 234 mg PP improved psychotic symptoms compared with placebo by day 4. After subsequent injections, observed improvements are suggestive of a dose

  13. The Effect of Sitagliptin on the Regression of Carotid Intima-Media Thickening in Patients with Type 2 Diabetes Mellitus: A Post Hoc Analysis of the Sitagliptin Preventive Study of Intima-Media Thickness Evaluation

    Science.gov (United States)

    Katakami, Naoto; Shiraiwa, Toshihiko; Yoshii, Hidenori; Gosho, Masahiko; Shimomura, Iichiro; Watada, Hirotaka

    2017-01-01

    Background. The effect of dipeptidyl peptidase-4 (DPP-4) inhibitors on the regression of carotid IMT remains largely unknown. The present study aimed to clarify whether sitagliptin, a DPP-4 inhibitor, could regress carotid intima-media thickness (IMT) in insulin-treated patients with type 2 diabetes mellitus (T2DM). Methods. This is an exploratory analysis of a randomized trial in which we investigated the effect of sitagliptin on the progression of carotid IMT in insulin-treated patients with T2DM. Here, in a post hoc analysis, we compared the effect of sitagliptin treatment on the number of patients who showed regression of carotid IMT of ≥0.10 mm. Results. The percentage of patients who showed regression of mean-IMT-CCA (28.9% in the sitagliptin group versus 16.4% in the conventional group, P = 0.022) and left max-IMT-CCA (43.0% in the sitagliptin group versus 26.2% in the conventional group, P = 0.007), but not right max-IMT-CCA, was higher in the sitagliptin treatment group than in the non-DPP-4 inhibitor treatment group. In multiple logistic regression analysis, sitagliptin treatment achieved significantly higher target attainment of regression of mean-IMT-CCA ≥0.10 mm and right and left max-IMT-CCA ≥0.10 mm compared with conventional treatment. Conclusions. Our data suggest that DPP-4 inhibitors are associated with the regression of carotid atherosclerosis in insulin-treated T2DM patients. This study has been registered with the University Hospital Medical Information Network Clinical Trials Registry (UMIN000007396). PMID:28250768

  14. Health-Related Quality-of-Life after Laparoscopic Gastric Bypass Surgery with or Without Closure of the Mesenteric Defects: a Post-hoc Analysis of Data from a Randomized Clinical Trial.

    Science.gov (United States)

    Stenberg, Erik; Szabo, Eva; Ottosson, Johan; Thorell, Anders; Näslund, Ingmar

    2017-07-04

    Mesenteric defect closure in laparoscopic gastric bypass surgery has been reported to reduce the risk for small bowel obstruction. Little is known, however, about the effect of mesenteric defect closure on patient-reported outcomes. The aim of the present study was to determine whether mesenteric defect closure affects health-related quality-of-life (HRQoL) after laparoscopic gastric bypass. Patients operated on at 12 centers for bariatric surgery participated in this randomized two-arm parallel study. During the operation, patients were randomized to closure of the mesenteric defects or non-closure. This study was a post-hoc analysis comparing the HRQoL of the two groups before surgery and at 1 and 2 years after the operation. HRQoL was estimated using the short form 36 (SF-36-RAND) and the obesity problems (OP) scale. Between May 1, 2010, and November 14, 2011, 2507 patients were included in the study and randomly assigned to mesenteric defect closure (n = 1259) or non-closure (n = 1248). In total, 1619 patients (64.6%) reported on their HRQoL at the 2-year follow-up. Mesenteric defect closure was associated with slightly higher ratings of social functioning (87 ± 22.1 vs. 85 ± 24.2, p = 0.047) and role emotional (85 ± 31.5 vs. 82 ± 35.0, p = 0.027). No difference was seen on the OP scale (open defects 22 ± 24.8 vs. closed defects 20 ± 23.8, p = 0.125). When comparing mesenteric defect closure with non-closure, there is no clinically relevant difference in HRQoL after laparoscopic gastric bypass surgery.

  15. Comparison of steady-state plasma concentrations of armodafinil and modafinil late in the day following morning administration: post hoc analysis of two randomized, double-blind, placebo-controlled, multiple-dose studies in healthy male subjects.

    Science.gov (United States)

    Darwish, Mona; Kirby, Mary; Hellriegel, Edward T

    2009-01-01

    Armodafinil, the R- and longer-lasting isomer of modafinil, may maintain higher plasma drug concentrations compared with racemic modafinil because of stereospecific differences in elimination of its isomers. This analysis set out to compare the steady-state pharmacokinetic profiles of armodafinil and modafinil on a milligram-to-milligram basis following once-daily administration. A post hoc analysis of two multiple-dose pharmacokinetic studies in healthy male subjects aged 18-50 years was conducted to compare dose-normalized (200 mg/day) plasma drug concentration and pharmacokinetic data for subjects in each study who completed 7 days of once-daily (morning) administration of armodafinil (n = 34) or modafinil (n = 18). Dose-normalized plasma concentrations of armodafinil on day 7 were higher than those of modafinil, with the greatest differences being observed later in the day. Across the 24-hour dose interval, plasma drug concentration fluctuation and swing were 28% and 42% less, respectively, with armodafinil than with modafinil. In addition, average late-day (3 pm to 7 pm after an 8 am dosing) plasma drug concentrations and partial values for the area under the plasma concentration versus time curve for 7-11 hours after dosing were both 44% higher with armodafinil. At steady state, armodafinil produces consistently higher plasma drug concentrations late in the day than modafinil when compared on a milligram-to-milligram basis. The distinct pharmacokinetic profile of armodafinil compared with that of the racemate may result in fundamentally different durations of action. These differences between the two medications cannot be made equivalent by increasing the dose of the racemate without introducing potential safety concerns.

  16. Effects of non-invasive vagus nerve stimulation on attack frequency over time and expanded response rates in patients with chronic cluster headache: a post hoc analysis of the randomised, controlled PREVA study.

    Science.gov (United States)

    Gaul, Charly; Magis, Delphine; Liebler, Eric; Straube, Andreas

    2017-12-01

    In the PREVention and Acute treatment of chronic cluster headache (PREVA) study, attack frequency reductions from baseline were significantly more pronounced with non-invasive vagus nerve stimulation plus standard of care (nVNS + SoC) than with SoC alone. Given the intensely painful and frequent nature of chronic cluster headache attacks, additional patient-centric outcomes, including the time to and level of therapeutic response, were evaluated in a post hoc analysis of the PREVA study. After a 2-week baseline phase, 97 patients with chronic cluster headache entered a 4-week randomised phase to receive nVNS + SoC (n = 48) or SoC alone (n = 49). All 92 patients who continued into a 4-week extension phase received nVNS + SoC. Compared with SoC alone, nVNS + SoC led to a significantly lower mean weekly attack frequency by week 2 of the randomised phase; the attack frequency remained significantly lower in the nVNS + SoC group through week 3 of the extension phase. nVNS reduced cluster headache attack frequency within 2 weeks after its addition to SoC and was associated with significantly higher ≥25%, ≥50%, and ≥75% response rates than SoC alone. The rapid decrease in weekly attack frequency justifies a 4-week trial period to identify responders to nVNS, with a high degree of confidence, among patients with chronic cluster headache.

  17. Post Hoc Analysis of Data from Two Clinical Trials Evaluating the Minimal Clinically Important Change in International Restless Legs Syndrome Sum Score in Patients with Restless Legs Syndrome (Willis-Ekbom Disease).

    Science.gov (United States)

    Ondo, William G; Grieger, Frank; Moran, Kimberly; Kohnen, Ralf; Roth, Thomas

    2016-01-01

    Determine the minimal clinically important change (MCIC), the minimum change in scale score perceived as clinically beneficial, for the international restless legs syndrome (IRLS) and restless legs syndrome 6-item questionnaire (RLS-6) in patients with moderate to severe restless legs syndrome (RLS/Willis-Ekbom disease) treated with the rotigotine transdermal system. This post hoc analysis used data from two 6-mo randomized, double-blind, placebo-controlled studies (SP790 [NCT00136045]; SP792 [NCT00135993]), individually and as a pooled analysis, in rotigotine-treated patients with baseline and end-of-maintenance IRLS and Clinical Global Impressions of change (CGI Item 2) scores available for analysis. An anchor-based approach and receiver operating characteristic (ROC) curves were used to determine the MCIC for the IRLS and RLS-6. We specifically compared "much improved vs minimally improved," "much improved/very much improved vs minimally improved or worse," and "minimally improved or better vs no change or worse" on the CGI-2 using the full analysis set (data as observed). The MCIC IRLS cut-off scores for SP790 and SP792 were similar. Using the pooled SP790+SP792 analysis, the MCIC total IRLS cut-off score (sensitivity, specificity) for "much improved vs minimally improved" was -9 (0.69, 0.66), for "much improved/very much improved vs minimally improved or worse" was -11 (0.81, 0.84), and for "minimally improved or better vs no change or worse" was -9 (0.79, 0.88). MCIC ROC cut-offs were also calculated for each RLS-6 item. In patients with RLS, the MCIC values derived in the current analysis provide a basis for defining meaningful clinical improvement based on changes in the IRLS and RLS-6 following treatment with rotigotine. © 2016 American Academy of Sleep Medicine.
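    The anchor-based ROC procedure this record describes can be sketched in a few lines: for every candidate change-score cutoff, compute sensitivity and specificity against the anchor (e.g., CGI "much improved" vs "minimally improved") and keep the cutoff maximising Youden's J. The scores and anchor labels below are invented for illustration only; they are not trial data.

    ```python
    def roc_cutoff(changes, improved):
        """Pick the change-score cutoff best separating anchor-defined
        responders from non-responders by maximising Youden's
        J = sensitivity + specificity - 1. A change <= cutoff counts as
        a responder (more negative = larger improvement, as for IRLS)."""
        best = None
        for cut in sorted(set(changes)):
            tp = sum(1 for c, imp in zip(changes, improved) if imp and c <= cut)
            fn = sum(1 for c, imp in zip(changes, improved) if imp and c > cut)
            tn = sum(1 for c, imp in zip(changes, improved) if not imp and c > cut)
            fp = sum(1 for c, imp in zip(changes, improved) if not imp and c <= cut)
            sens = tp / (tp + fn) if tp + fn else 0.0
            spec = tn / (tn + fp) if tn + fp else 0.0
            j = sens + spec - 1.0
            if best is None or j > best[0]:
                best = (j, cut, sens, spec)
        return best[1], best[2], best[3]

    # Hypothetical change scores (negative = improvement) and anchor ratings.
    changes  = [-15, -12, -11, -10, -9, -8, -6, -5, -3, -1]
    improved = [True, True, True, True, True,
                False, False, False, False, False]
    cutoff, sens, spec = roc_cutoff(changes, improved)
    ```

    With real trial data the separation is imperfect, which is why the record reports sensitivity/specificity pairs such as (0.69, 0.66) alongside each cutoff.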

  18. Prevalence of acute and chronic viral seropositivity and characteristics of disease in patients with psoriatic arthritis treated with cyclosporine: a post hoc analysis from a sex point of view on the observational study of infectious events in psoriasis complicated by active psoriatic arthritis

    Science.gov (United States)

    Colombo, Delia; Chimenti, Sergio; Grossi, Paolo Antonio; Marchesoni, Antonio; Bardazzi, Federico; Ayala, Fabio; Simoni, Lucia; Vassellatti, Donatella; Bellia, Gilberto

    2016-01-01

    Background Sex medicine studies have shown that there are sex differences with regard to disease characteristics in immune-mediated inflammatory diseases, including psoriasis, as well as in immune response and susceptibility to viral infections. We performed a post hoc analysis of the Observational Study of infectious events in psoriasis complicated by active psoriatic arthritis (SYNERGY) in patients with psoriatic arthritis (PsA) treated with immunosuppressive regimens including cyclosporine, in order to evaluate potential between-sex differences in severity of disease and prevalence of viral infections. Methods SYNERGY was an observational study conducted in 24 Italian dermatology clinics, which included 238 consecutively enrolled patients with PsA under treatment with immunosuppressant regimens including cyclosporin A. In this post hoc analysis, patients’ demographic data and clinical characteristics of psoriasis, severity and activity of PsA, prevalence of seropositivity for at least one viral infection, and treatments administered for PsA and infections were compared between sexes. Results A total of 225 patients were evaluated in this post hoc analysis, and 121 (54%) were males. Demographic characteristics and concomitant diseases were comparable between sexes. Statistically significant sex differences were observed at baseline in Psoriasis Area and Severity Index score (higher in males), mean number of painful joints, Bath Ankylosing Spondylitis Disease Activity Index, and the global activity of disease assessed by patients (all higher in females). The percentage of patients with at least one seropositivity detected at baseline, indicative of concomitant or former viral infection, was significantly higher among women than among men. No between-sex differences were detected in other measures, at other time points, or in treatments. No hepatitis B virus or hepatitis C virus reactivation occurred during cyclosporine treatment. 
Conclusion Our post hoc

  19. Comparing modelling techniques for analysing urban pluvial flooding.

    Science.gov (United States)

    van Dijk, E; van der Meulen, J; Kluck, J; Straatman, J H M

    2014-01-01

    Short peak rainfall intensities cause sewer systems to overflow leading to flooding of streets and houses. Due to climate change and densification of urban areas, this is expected to occur more often in the future. Hence, next to their minor (i.e. sewer) system, municipalities have to analyse their major (i.e. surface) system in order to anticipate urban flooding during extreme rainfall. Urban flood modelling techniques are powerful tools in both public and internal communications and transparently support design processes. To provide more insight into the (im)possibilities of different urban flood modelling techniques, simulation results have been compared for an extreme rainfall event. The results show that, although modelling software is tending to evolve towards coupled one-dimensional (1D)-two-dimensional (2D) simulation models, surface flow models, using an accurate digital elevation model, prove to be an easy and fast alternative to identify vulnerable locations in hilly and flat areas. In areas at the transition between hilly and flat, however, coupled 1D-2D simulation models give better results since catchments of major and minor systems can differ strongly in these areas. During the decision making process, surface flow models can provide a first insight that can be complemented with complex simulation models for critical locations.

  20. Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Du, Qiang [Pennsylvania State Univ., State College, PA (United States)

    2014-11-12

    The rational design of materials, the development of accurate and efficient material simulation algorithms, and the determination of the response of materials to environments and loads occurring in practice all require an understanding of mechanics at disparate spatial and temporal scales. The project addresses mathematical and numerical analyses for material problems for which relevant scales range from those usually treated by molecular dynamics all the way up to those most often treated by classical elasticity. The prevalent approach towards developing a multiscale material model couples two or more well known models, e.g., molecular dynamics and classical elasticity, each of which is useful at a different scale, creating a multiscale multi-model. However, the challenges behind such a coupling are formidable and largely arise because the atomistic and continuum models employ nonlocal and local models of force, respectively. The project focuses on a multiscale analysis of the peridynamics materials model. Peridynamics can be used as a transition between molecular dynamics and classical elasticity so that the difficulties encountered when directly coupling those two models are mitigated. In addition, in some situations, peridynamics can be used all by itself as a material model that accurately and efficiently captures the behavior of materials over a wide range of spatial and temporal scales. Peridynamics is well suited to these purposes because it employs a nonlocal model of force, analogous to that of molecular dynamics; furthermore, at sufficiently large length scales and assuming smooth deformation, peridynamics can be approximated by classical elasticity. The project will extend the emerging mathematical and numerical analysis of peridynamics. One goal is to develop a peridynamics-enabled multiscale multi-model that potentially provides a new and more extensive mathematical basis for coupling classical elasticity and molecular dynamics, thus enabling next

  1. Modeling hard clinical end-point data in economic analyses.

    Science.gov (United States)

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data are reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are more appropriate to accurately reflect the trial data.
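    The review's central distinction, constant event rates versus time-dependent risks, can be illustrated with a minimal Markov cohort sketch. All numbers below (rates, cycle length, cohort size) are hypothetical and not drawn from any of the reviewed models.

    ```python
    import math

    def annual_prob(rate):
        """Convert a constant event rate (events/person-year) into a
        one-year transition probability."""
        return 1.0 - math.exp(-rate)

    def run_cohort(n_years, event_prob_fn, cohort=1000.0):
        """Two-state Markov cohort model: event-free -> post-event
        (absorbing). event_prob_fn(year) returns the transition
        probability for that cycle, so constant rates and time-dependent
        risks share the same machinery."""
        event_free, post_event = cohort, 0.0
        trace = []
        for year in range(n_years):
            events = event_free * event_prob_fn(year)
            event_free -= events
            post_event += events
            trace.append((event_free, post_event))
        return trace

    # Constant low event rate (e.g. 0.5%/year), the simplification the
    # review recommends when events are rare.
    constant = run_cohort(10, lambda _: annual_prob(0.005))

    # Time-dependent risk: a hypothetical hazard rising with duration,
    # as used in the reviewed models of high-risk populations.
    time_dep = run_cohort(10, lambda year: annual_prob(0.005 + 0.002 * year))
    ```

    Risk equations based on patient characteristics fit the same pattern: `event_prob_fn` would then take a patient covariate vector instead of (or in addition to) the cycle index.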

  2. Analysing regenerative potential in zebrafish models of congenital muscular dystrophy.

    Science.gov (United States)

    Wood, A J; Currie, P D

    2014-11-01

    The congenital muscular dystrophies (CMDs) are a clinically and genetically heterogeneous group of muscle disorders. Clinically, hypotonia is present from birth, with progressive muscle weakness and wasting through development. For the most part, CMDs can mechanistically be attributed to failure of the basement membrane protein laminin-α2 to bind sufficiently to correctly glycosylated α-dystroglycan. The majority of CMDs therefore arise as the result of either a deficiency of laminin-α2 (MDC1A) or hypoglycosylation of α-dystroglycan (dystroglycanopathy). Here we consider whether, by filling a regenerative medicine niche, the zebrafish model can address the present challenge of delivering novel therapeutic solutions for CMD. In the first instance, the readiness and appropriateness of the zebrafish as a model organism for pioneering regenerative medicine therapies in CMD are analysed, in particular for MDC1A and the dystroglycanopathies. Despite the recent rapid progress made in gene editing technology, these approaches have yet to yield any novel zebrafish models of CMD. Currently the most genetically relevant zebrafish models in the field of CMD have all been created by N-ethyl-N-nitrosourea (ENU) mutagenesis. Once genetically relevant models have been established, the zebrafish has several important facets for investigating the mechanistic cause of CMD, including rapid ex vivo development, optical transparency up to the larval stages of development and relative ease in creating transgenic reporter lines. Together, these tools are well suited for use in live-imaging studies such as in vivo modelling of muscle fibre detachment. Secondly, the zebrafish's contribution to progress in effective treatment of CMD was analysed. Two approaches were identified in which zebrafish could potentially contribute to effective therapies. The first hinges on the augmentation of functional redundancy within the system, such as upregulating alternative laminin chains in the candyfloss

  3. [Approach to depressogenic genes from genetic analyses of animal models].

    Science.gov (United States)

    Yoshikawa, Takeo

    2004-01-01

    Human depression or mood disorder is defined as a complex disease, making positional cloning of susceptibility genes a formidable task. We have undertaken genetic analyses of three different animal models of depression, comparing our results with advanced database resources. We first performed quantitative trait loci (QTL) analysis on two mouse models of "despair", namely the forced swim test (FST) and tail suspension test (TST), and detected multiple chromosomal loci that control immobility time in these tests. Since one QTL detected on mouse chromosome 11 harbors the GABA A receptor subunit genes, we tested these genes for association in human mood disorder patients. We obtained significant associations of the alpha 1 and alpha 6 subunit genes with the disease, particularly in females. This result was striking, because we had previously detected an epistatic interaction between mouse chromosomes 11 and X that regulates immobility time in these animals. Next, we performed genome-wide expression analyses using a rat model of depression, learned helplessness (LH). We found that in the frontal cortex of LH rats, a region implicated in the disease, the LIM kinase 1 gene (Limk 1) showed the greatest alteration, in this case down-regulation. By combining data from the QTL analysis of FST/TST and DNA microarray analysis of mouse frontal cortex, we identified adenylyl cyclase-associated CAP protein 1 (Cap 1) as another candidate gene for depression susceptibility. Both Limk 1 and Cap 1 are key players in the modulation of actin G-F conversion. In summary, our current study using animal models suggests disturbances of GABAergic neurotransmission and actin turnover as potential pathophysiologies for mood disorder.

  4. Safety of daily teriparatide treatment: a post hoc analysis of a Phase III study to investigate the possible association of teriparatide treatment with calcium homeostasis in patients with serum procollagen type I N-terminal propeptide elevation

    Directory of Open Access Journals (Sweden)

    Yamamoto T

    2015-07-01

    Takanori Yamamoto, Mika Tsujimoto, Hideaki Sowa (Medical Science, Lilly Research Laboratories, Medicines Development Unit Japan; Asia Pacific Statistical Science-Japan, Science and Regulatory Affairs, LRL MDU-Japan, Eli Lilly Japan K.K., Kobe, Hyogo, Japan). Objective: Serum procollagen type I N-terminal propeptide (PINP), a representative marker of bone anabolic action, is strongly related to bone mineral density during teriparatide therapy. This post hoc study analyzed data from a Phase III study (ClinicalTrials.gov identifier NCT00433160) to determine if there was an association between serum PINP elevation and serum calcium concentration or calcium metabolism-related disorders. Research design and methods: Japanese subjects with osteoporosis at high risk of fracture were randomized 2:1 to teriparatide 20 µg/day (n=137) or placebo (n=70) for a 12-month double-blind treatment period, followed by 12 months of open-label teriparatide treatment of all subjects. Main outcome measures: Serum PINP levels were measured at baseline and after 1, 3, 6, 12, 18, and 24 months of treatment. Serum calcium levels were measured at baseline and after 1, 3, 6, 9, 12, 15, 18, 21, and 24 months of treatment. Results: Serum PINP increased from baseline to 1 month of treatment and then remained high through 24 months. Twenty-eight of 195 subjects experienced PINP elevations >200 µg/L during teriparatide treatment. Serum calcium concentration in both the teriparatide and placebo groups remained within the normal range. There was no clinically relevant difference in serum calcium concentration between subjects with PINP >200 µg/L and subjects with PINP ≤200 µg/L. Two subjects experienced hypercalcemia and recovered without altering teriparatide treatment. Adverse events possibly related to calcium metabolism disorders included periarthritis calcarea (one subject) and chondrocalcinosis pyrophosphate (two subjects), but neither was accompanied by a significant increase

  5. Influence of preinfarction angina and coronary collateral blood flow on the efficacy of remote ischaemic conditioning in patients with ST segment elevation myocardial infarction: post hoc subgroup analysis of a randomised controlled trial

    Science.gov (United States)

    Pryds, Kasper; Bøttcher, Morten; Sloth, Astrid Drivsholm; Munk, Kim; Rahbek Schmidt, Michael; Bøtker, Hans Erik

    2016-01-01

    Objectives Remote ischaemic conditioning (RIC) confers cardioprotection in patients with ST segment elevation myocardial infarction (STEMI) undergoing primary percutaneous coronary intervention (pPCI). We investigated whether preinfarction angina and coronary collateral blood flow (CCBF) to the infarct-related artery modify the efficacy of RIC. Design Post hoc subgroup analysis of a randomised controlled trial. Participants A total of 139 patients with STEMI randomised to treatment with pPCI or RIC+pPCI. Interventions RIC was performed prior to pPCI as four cycles of 5 min upper arm ischaemia and reperfusion with a blood pressure cuff. Primary outcome measure Myocardial salvage index (MSI) assessed by single-photon emission computerised tomography. We evaluated the efficacy of RIC in subgroups of patients with or without preinfarction angina or CCBF. Results Of 139 patients included in the study, 109 had available data for preinfarction angina status and 54 had preinfarction angina. Among 83 patients with Thrombolysis In Myocardial Infarction flow 0/1 on arrival, 43 had CCBF. Overall, RIC+pPCI increased median MSI compared with pPCI alone (0.75 vs 0.56, p=0.045). Mean MSI did not differ between patients with and without preinfarction angina in either the pPCI alone (0.58 and 0.57; 95% CI −0.17 to 0.19, p=0.94) or the RIC+pPCI group (0.66 and 0.69; 95% CI −0.18 to 0.10, p=0.58). Mean MSI did not differ between patients with and without CCBF in the pPCI alone group (0.51 and 0.55; 95% CI −0.20 to 0.13, p=0.64), but was increased in patients with CCBF versus without CCBF in the RIC+pPCI group (0.75 vs 0.58; 95% CI 0.03 to 0.31, p=0.02; effect modification from CCBF on the effect of RIC on MSI, p=0.06). Conclusions Preinfarction angina did not modify the efficacy of RIC in patients with STEMI undergoing pPCI. CCBF to the infarct-related artery seems to be of importance for the cardioprotective efficacy of RIC. Trial registration number NCT00435266, Post

  6. Magnetic fabric analyses in analogue models of clays

    Science.gov (United States)

    García-Lasanta, Cristina; Román-Berdiel, Teresa; Izquierdo-Llavall, Esther; Casas-Sainz, Antonio

    2017-04-01

    Anisotropy of magnetic susceptibility (AMS) studies in sedimentary rocks subjected to deformation indicate that magnetic fabric orientation can be conditioned by multiple factors: sedimentary conditions, magnetic mineralogy, successive tectonic events, etc. All of these factors complicate the interpretation of AMS as a marker of deformation conditions. Analogue modeling makes it possible to isolate the variables acting in a geological process and to determine which factors influence the process, and to what extent. This study shows magnetic fabric analyses applied to several analogue models developed with common commercial red clays. This material resembles natural clay materials that, despite their greater degree of impurities and heterogeneity, have been proved to record a robust magnetic signal carried by a mixture of para- and ferromagnetic minerals. The magnetic behavior of the modeled clay has been characterized by temperature-dependent magnetic susceptibility curves (from 40 to 700°C). The measurements were performed combining a KLY-3S Kappabridge susceptometer with a CS3 furnace (AGICO Inc., Czech Republic). The obtained results indicate the presence of an important content of hematite as the ferromagnetic phase, as well as a remarkable paramagnetic fraction, probably constituted by phyllosilicates. This mineralogy is common in natural materials such as Permo-Triassic red facies, and magnetic fabric analyses in these natural examples have given consistent results in different tectonic contexts. In this study, sedimentary conditions and magnetic mineralogy are kept constant and the influence of the tectonic regime on the magnetic fabrics is analyzed. Our main objective is to reproduce several tectonic contexts (strike-slip and compression) in a sedimentary environment where material is not yet compacted, in order to determine how tectonic conditions influence the magnetic fabric registered in each case. 
By dispersing the clays in water and after allowing their

  7. Multi-state models: metapopulation and life history analyses

    Directory of Open Access Journals (Sweden)

    Arnason, A. N.

    2004-06-01

    Multi–state models are designed to describe populations that move among a fixed set of categorical states. The obvious application is to population interchange among geographic locations such as breeding sites or feeding areas (e.g., Hestbeck et al., 1991; Blums et al., 2003; Cam et al., 2004), but they are increasingly used to address important questions of evolutionary biology and life history strategies (Nichols & Kendall, 1995). In these applications, the states include life history stages such as breeding states. The multi–state models, by permitting estimation of stage–specific survival and transition rates, can help assess trade–offs between life history mechanisms (e.g., Yoccoz et al., 2000). These trade–offs are also important in meta–population analyses where, for example, the pre– and post–breeding rates of transfer among sub–populations can be analysed in terms of target colony distance, density, and other covariates (e.g., Lebreton et al., 2003; Breton et al., in review). Further examples of the use of multi–state models in analysing dispersal and life–history trade–offs can be found in the session on Migration and Dispersal. In this session, we concentrate on applications that did not involve dispersal. These applications fall in two main categories: those that address life history questions using stage categories, and a more technical use of multi–state models to address problems arising from the violation of mark–recapture assumptions, which can otherwise lead to seriously biased predictions or misleading insights from the models. Our plenary paper, by William Kendall (Kendall, 2004), gives an overview of the use of Multi–state Mark–Recapture (MSMR) models to address two such violations. The first is the occurrence of unobservable states that can arise, for example, from temporary emigration or by incomplete sampling coverage of a target population. Such states can also occur for life history reasons, such
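    The stage-specific survival and transition rates that MSMR models estimate drive a simple bookkeeping step. The sketch below only illustrates that mechanics (propagating a cohort forward, not estimating the rates); the two states and all probabilities are invented for illustration.

    ```python
    def step(counts, survival, transition):
        """One time step of a multi-state model: animals first survive
        with a state-specific probability, then move among states
        according to a transition matrix whose rows sum to 1."""
        n = len(counts)
        survivors = [counts[i] * survival[i] for i in range(n)]
        return [sum(survivors[i] * transition[i][j] for i in range(n))
                for j in range(n)]

    # Two hypothetical states: 'breeder' and 'non-breeder'.
    survival   = [0.8, 0.9]
    transition = [[0.7, 0.3],   # breeders: 70% keep breeding, 30% skip
                  [0.4, 0.6]]   # non-breeders: 40% start breeding
    counts = [100.0, 50.0]
    for _ in range(3):
        counts = step(counts, survival, transition)
    ```

    Fitting an MSMR model inverts this picture: the survival and transition entries become unknowns estimated from capture histories, and unobservable states show up as rows whose occupants cannot be detected.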

  8. Dipole model test with one superconducting coil; results analysed

    CERN Document Server

    Durante, M; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D

    2013-01-01

    This report is the deliverable report 7.3.1 “Dipole model test with one superconducting coil; results analysed”. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The four report parts show that, although the magnet construction will only be completed by the end of 2014, all elements are present for a successful completion. Given the importance of the project for the future of the participants and the significant investments they have made, there is a full commitment to finishing the project.

  9. Dipole model test with one superconducting coil: results analysed

    CERN Document Server

    Bajas, H; Benda, V; Berriaud, C; Bajko, M; Bottura, L; Caspi, S; Charrondiere, M; Clément, S; Datskov, V; Devaux, M; Durante, M; Fazilleau, P; Ferracin, P; Fessia, P; Gauthier, R; Giloux, C; Guinchard, M; Kircher, F; Manil, P; Milanese, A; Millot, J-F; Muñoz Garcia, J-E; Oberli, L; Perez, J-C; Pietrowicz, S; Rifflet, J-M; de Rijk, G; Rondeaux, F; Todesco, E; Viret, P; Ziemianski, D

    2013-01-01

    This report is the deliverable report 7.3.1 “Dipole model test with one superconducting coil; results analysed”. The report has four parts: “Design report for the dipole magnet”, “Dipole magnet structure tested in LN2”, “Nb3Sn strand procured for one dipole magnet” and “One test double pancake copper coil made”. The four report parts show that, although the magnet construction will only be completed by the end of 2014, all elements are present for a successful completion. Given the importance of the project for the future of the participants and the significant investments they have made, there is a full commitment to finishing the project.

  10. Incorporating flood event analyses and catchment structures into model development

    Science.gov (United States)

    Oppel, Henning; Schumann, Andreas

    2016-04-01

    The space-time variability in catchment response results from several hydrological processes which differ in their relevance in an event-specific way. An approach to characterise this variability consists of comparisons between flood events in a catchment and between the flood responses of several sub-basins in such an event. In analytical frameworks, the impact of the space and time variability of rainfall on runoff generation due to rainfall excess can be characterised. Moreover, the effect of hillslope and channel network routing on runoff timing can be specified. Hence, a modelling approach is needed to specify runoff generation and formation. Knowing the space-time variability of rainfall and the (spatially averaged) response of a catchment, it seems worthwhile to develop new models based on event and catchment analyses. The consideration of spatial order and the distribution of catchment characteristics, in their spatial variability and interaction with the space-time variability of rainfall, provides additional knowledge about hydrological processes at the basin scale. For this purpose a new procedure to characterise the spatial heterogeneity of catchment characteristics in their succession along the flow distance (differentiated between river network and hillslopes) was developed. It was applied to a study of flood responses in a set of nested catchments in a river basin in eastern Germany. In this study the highest observed rainfall-runoff events were analysed, beginning at the catchment outlet and moving upstream. With regard to the spatial heterogeneities of catchment characteristics, sub-basins were separated by new algorithms to attribute runoff-generation, hillslope and river network processes. With this procedure the cumulative runoff response at the outlet can be decomposed and individual runoff features can be assigned to individual aspects of the catchment. Through comparative analysis between the sub-catchments and the assigned effects on runoff dynamics new

  11. A theoretical model for analysing gender bias in medicine

    Directory of Open Access Journals (Sweden)

    Johansson Eva E

    2009-08-01

    Full Text Available Abstract During the last decades research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change by facts only. We suggest consciousness-raising activities and continuous reflection on gender attitudes among students, teachers, researchers and decision-makers.

  12. An Illumination Modeling System for Human Factors Analyses

    Science.gov (United States)

    Huynh, Thong; Maida, James C.; Bond, Robert L. (Technical Monitor)

    2002-01-01

    Seeing is critical to human performance. Lighting is critical for seeing. Therefore, lighting is critical to human performance. This is common sense, and here on earth, it is easily taken for granted. However, on orbit, because the sun will rise or set every 45 minutes on average, humans working in space must cope with extremely dynamic lighting conditions. Contrast conditions of harsh shadowing and glare are also severe. The prediction of lighting conditions for critical operations is essential. Crew training can factor lighting into the lesson plans when necessary. Mission planners can determine whether low-light video cameras are required or whether additional luminaires need to be flown. The optimization of the quantity and quality of light is needed because of the effects on crew safety, on electrical power and on equipment maintainability. To address all of these issues, an illumination modeling system has been developed by the Graphics Research and Analyses Facility (GRAF) and Lighting Environment Test Facility (LETF) in the Space Human Factors Laboratory at NASA Johnson Space Center. The system uses physically based ray tracing software (Radiance) developed at Lawrence Berkeley Laboratories, a human factors oriented geometric modeling system (PLAID) and an extensive database of humans and environments. Material reflectivity properties of major surfaces and critical surfaces are measured using a gonio-reflectometer. Luminaires (lights) are measured for beam spread distribution, color and intensity. Video camera performance is measured for color and light sensitivity. 3D geometric models of humans and the environment are combined with the material and light models to form a system capable of predicting lighting conditions and visibility conditions in space.

  13. Comparison of two potato simulation models under climate change. I. Model calibration and sensitivity analyses

    NARCIS (Netherlands)

    Wolf, J.

    2002-01-01

    To analyse the effects of climate change on potato growth and production, both a simple growth model, POTATOS, and a comprehensive model, NPOTATO, were applied. Both models were calibrated and tested against results from experiments and variety trials in The Netherlands. The sensitivity of model

  14. Effects of canagliflozin, a sodium glucose co-transporter 2 inhibitor, on blood pressure and markers of arterial stiffness in patients with type 2 diabetes mellitus: a post hoc analysis.

    Science.gov (United States)

    Pfeifer, Michael; Townsend, Raymond R; Davies, Michael J; Vijapurkar, Ujjwala; Ren, Jimmy

    2017-02-27

    Physiologic determinants, such as pulse pressure [difference between systolic blood pressure (SBP) and diastolic BP (DBP)], mean arterial pressure (2/3 DBP + 1/3 SBP), and double product [beats per minute (bpm) × SBP], are linked to cardiovascular outcomes. The effects of canagliflozin, a sodium glucose co-transporter 2 (SGLT2) inhibitor, on pulse pressure, mean arterial pressure, and double product were assessed in patients with type 2 diabetes mellitus (T2DM). This post hoc analysis was based on pooled data from four 26-week, randomized, double-blind, placebo-controlled studies evaluating canagliflozin in patients with T2DM (N = 2313) and a 6-week, randomized, double-blind, placebo-controlled, ambulatory BP monitoring (ABPM) study evaluating canagliflozin in patients with T2DM and hypertension (N = 169). Changes from baseline in SBP, DBP, pulse pressure, mean arterial pressure, and double product were assessed using seated BP measurements (pooled studies) or averaged 24-h BP assessments (ABPM study). Safety was assessed based on adverse event reports. In the pooled studies, canagliflozin 100 and 300 mg reduced SBP (-4.3 and -5.0 vs -0.3 mmHg) and DBP (-2.5 and -2.4 vs -0.6 mmHg) versus placebo at week 26. Reductions in pulse pressure (-1.8 and -2.6 vs 0.2 mmHg), mean arterial pressure (-3.1 and -3.3 vs -0.5 mmHg), and double product (-381 and -416 vs -30 bpm × mmHg) were also seen with canagliflozin 100 and 300 mg versus placebo. In the ABPM study, canagliflozin 100 and 300 mg reduced mean 24-h SBP (-4.5 and -6.2 vs -1.2 mmHg) and DBP (-2.2 and -3.2 vs -0.3 mmHg) versus placebo at week 6. Canagliflozin 300 mg provided reductions in pulse pressure (-3.3 vs -0.8 mmHg) and mean arterial pressure (-4.2 vs -0.6 mmHg) compared with placebo, while canagliflozin 100 mg had more modest effects on these parameters. Canagliflozin was generally well tolerated in both study populations. Canagliflozin improved all three cardiovascular physiologic
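
    The three physiologic indices assessed in this analysis are simple arithmetic combinations of blood pressure and heart rate, exactly as defined in the abstract. A minimal sketch (function names are illustrative, not from the study):

```python
def pulse_pressure(sbp, dbp):
    """Pulse pressure: systolic minus diastolic blood pressure (mmHg)."""
    return sbp - dbp

def mean_arterial_pressure(sbp, dbp):
    """Mean arterial pressure: 2/3 DBP + 1/3 SBP (mmHg)."""
    return (2 * dbp + sbp) / 3

def double_product(heart_rate_bpm, sbp):
    """Double (rate-pressure) product: heart rate x SBP (bpm x mmHg)."""
    return heart_rate_bpm * sbp

# A typical reading of 120/80 mmHg at 70 bpm:
print(pulse_pressure(120, 80))                    # 40 mmHg
print(round(mean_arterial_pressure(120, 80), 1))  # 93.3 mmHg
print(double_product(70, 120))                    # 8400 bpm x mmHg
```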

  15. Prevalence of acute and chronic viral seropositivity and characteristics of disease in patients with psoriatic arthritis treated with cyclosporine: a post hoc analysis from a sex point of view on the observational study of infectious events in psoriasis complicated by active psoriatic arthritis

    Directory of Open Access Journals (Sweden)

    Colombo D

    2015-12-01

    Full Text Available Delia Colombo,1 Sergio Chimenti,2 Paolo Antonio Grossi,3 Antonio Marchesoni,4 Federico Bardazzi,5 Fabio Ayala,6 Lucia Simoni,7 Donatella Vassellatti,1 Gilberto Bellia1 On behalf of SYNERGY Study Group 1Novartis Farma Italia, Origgio (VA); 2Tor Vergata Polyclinic, Rome; 3Macchi Hospital and Foundation, Varese; 4Orthopaedic Institute Pini, Milan; 5S Orsola-Malpighi Polyclinic, Bologna; 6University Federico II, Naples; 7MediData srl, Modena, Italy Background: Sex medicine studies have shown that there are sex differences with regard to disease characteristics in immune-mediated inflammatory diseases, including psoriasis, in immune response and susceptibility to viral infections. We performed a post hoc analysis of the Observational Study of infectious events in psoriasis complicated by active psoriatic arthritis (SYNERGY) in patients with psoriatic arthritis (PsA) treated with immunosuppressive regimens including cyclosporine, in order to evaluate potential between-sex differences in severity of disease and prevalence of viral infections. Methods: SYNERGY was an observational study conducted in 24 Italian dermatology clinics, which included 238 consecutively enrolled patients with PsA under treatment with immunosuppressant regimens including cyclosporin A. In this post hoc analysis, patients' demographical data and clinical characteristics of psoriasis, severity and activity of PsA, prevalence of seropositivity for at least one viral infection, and treatments administered for PsA and infections were compared between sexes. Results: A total of 225 patients were evaluated in this post hoc analysis, and 121 (54%) were males. Demographic characteristics and concomitant diseases were comparable between sexes. Statistically significant sex differences were observed at baseline in Psoriasis Area and Severity Index score (higher in males), mean number of painful joints, Bath Ankylosing Spondylitis Disease Activity Index, and the global activity of disease

  16. Micromechanical Failure Analyses for Finite Element Polymer Modeling

    Energy Technology Data Exchange (ETDEWEB)

    CHAMBERS,ROBERT S.; REEDY JR.,EARL DAVID; LO,CHI S.; ADOLF,DOUGLAS B.; GUESS,TOMMY R.

    2000-11-01

    Polymer stresses around sharp corners and in constrained geometries of encapsulated components can generate cracks leading to system failures. Often, analysts use maximum stresses as a qualitative indicator for evaluating the strength of encapsulated component designs. Although this approach has been useful for making relative comparisons when screening prospective design changes, it has not been tied quantitatively to failure. Accurate failure models are needed for analyses to predict whether encapsulated components meet life cycle requirements. With Sandia's recently developed nonlinear viscoelastic polymer models, it has been possible to examine more accurately the local stress-strain distributions in zones of likely failure initiation, looking for physically based failure mechanisms and continuum metrics that correlate with the cohesive failure event. This study has identified significant differences between rubbery and glassy failure mechanisms that suggest reasonable alternatives for cohesive failure criteria and metrics. Rubbery failure seems best characterized by the mechanism of finite extensibility and appears to correlate with maximum strain predictions. Glassy failure, however, seems driven by cavitation and correlates with the maximum hydrostatic tension. Using these metrics, two three-point bending geometries were tested and analyzed under variable loading rates, different temperatures and comparable mesh resolution (i.e., accuracy) to make quantitative failure predictions. The resulting predictions and observations agreed well, suggesting the need for additional research. In a separate, additional study, the asymptotically singular stress state found at the tip of a rigid, square inclusion embedded within a thin, linear elastic disk was determined for uniform cooling. The singular stress field is characterized by a single stress intensity factor K{sub a} and the applicable K{sub a} calibration relationship has been determined for both fully bonded and

  17. Analyses on Four Models and Cases of Enterprise Informatization

    Institute of Scientific and Technical Information of China (English)

    Shi Chunsheng(石春生); Han Xinjuan; Yang Cuilan; Zhao Dongbai

    2003-01-01

    The basic conditions of enterprise informatization in Heilongjiang province are analyzed, and four models are designed to drive the informatization of industrial and commercial enterprises. The four models are the Resource Integration Informatization Model, the Flow Management Informatization Model, the Intranet E-commerce Informatization Model and the Network Enterprise Informatization Model. The conditions for using these four models and the problems needing attention are also analyzed.

  18. Mathematical and Numerical Analyses of Peridynamics for Multiscale Materials Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Gunzburger, Max [Florida State Univ., Tallahassee, FL (United States)

    2015-02-17

    We have treated the modeling, analysis, numerical analysis, and algorithmic development for nonlocal models of diffusion and mechanics. Variational formulations were developed and finite element methods were developed based on those formulations for both steady state and time dependent problems. Obstacle problems and optimization problems for the nonlocal models were also treated and connections made with fractional derivative models.
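
    For orientation, the steady-state nonlocal diffusion problem treated in this line of work typically takes the following form (this is the standard statement from the nonlocal-calculus literature; the report's exact notation may differ):

```latex
-\mathcal{L}u(x) \;=\; 2\int_{B_\delta(x)} \big(u(x) - u(y)\big)\,\gamma(x,y)\,dy \;=\; f(x), \qquad x \in \Omega,
```

    where $\gamma(x,y)$ is a nonnegative kernel supported on the interaction ball $B_\delta(x)$ of horizon $\delta$. In place of a surface boundary condition, a volume constraint $u = g$ is prescribed on an interaction layer of thickness $\delta$ surrounding $\Omega$; the variational formulations and finite element methods mentioned above are built on the corresponding constrained energy.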

  19. Unmix 6.0 Model for environmental data analyses

    Science.gov (United States)

    Unmix Model is a mathematical receptor model developed by EPA scientists that provides scientific support for the development and review of the air and water quality standards, exposure research, and environmental forensics.

  20. Analysing Models as a Knowledge Technology in Transport Planning

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik

    2011-01-01

    Models belong to a wider family of knowledge technologies, applied in the transport area. Models sometimes share with other such technologies the fate of not being used as intended, or not at all. The result may be ill-conceived plans as well as wasted resources. Frequently, the blame ... critical analytic literature on knowledge utilization and policy influence. A simple scheme based in this literature is drawn up to provide a framework for discussing the interface between urban transport planning and model use. A successful example of model use in Stockholm, Sweden is used as a heuristic

  1. Analyses of Tsunami Events using Simple Propagation Models

    Science.gov (United States)

    Chilvery, Ashwith Kumar; Tan, Arjun; Aggarwal, Mohan

    2012-03-01

    Tsunamis exhibit the characteristics of ``canal waves'' or ``gravity waves'' which belong to the class of ``long ocean waves on shallow water.'' The memorable tsunami events including the 2004 Indian Ocean tsunami and the 2011 Pacific Ocean tsunami off the coast of Japan are analyzed by constructing simple tsunami propagation models including the following: (1) One-dimensional propagation model; (2) Two-dimensional propagation model on flat surface; (3) Two-dimensional propagation model on spherical surface; and (4) A finite line-source model on two-dimensional surface. It is shown that Model 1 explains the basic features of the tsunami including the propagation speed, depth of the ocean, dispersion-less propagation and bending of tsunamis around obstacles. Models 2 and 3 explain the observed amplitude variations for long-distance tsunami propagation across the Pacific Ocean, including the effect of the equatorial ocean current on the arrival times. Model 3 further explains the enhancement effect on the amplitude due to the curvature of the Earth past the equatorial distance. Finally, Model 4 explains the devastating effect of superposition of tsunamis from two subduction events, which struck the Phuket region during the 2004 Indian Ocean tsunami.
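
    The ``long ocean waves on shallow water'' behaviour invoked in Model 1 gives the textbook phase speed c = sqrt(g*d), which ties observed travel times to ocean depth. A minimal sketch (a uniform-depth basin is assumed; the numbers are illustrative, not from the paper):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def shallow_water_speed(depth_m):
    """Phase speed of a long gravity wave on shallow water: c = sqrt(g*d).
    Valid when the wavelength is much larger than the depth, as for tsunamis."""
    return math.sqrt(G * depth_m)

def travel_time_hours(distance_km, depth_m):
    """Crossing time for a basin of uniform depth (a deliberate simplification)."""
    return distance_km * 1000.0 / shallow_water_speed(depth_m) / 3600.0

# Open Pacific, ~4 km deep: the wave travels at roughly jet-airliner speed
print(round(shallow_water_speed(4000.0)))           # ~198 m/s
print(round(travel_time_hours(8000.0, 4000.0), 1))  # hours to cross 8000 km
```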

  2. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require consistent updates to both the implementation and the feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The pur

  3. Hyperelastic Modelling and Finite Element Analysing of Rubber Bushing

    Directory of Open Access Journals (Sweden)

    Merve Yavuz ERKEK

    2015-03-01

    Full Text Available The objective of this paper is to obtain stiffness curves of rubber bushings, which are used in the automotive industry, with a hyperelastic finite element model. Hyperelastic material models were obtained from different material tests. Stress and strain values and static stiffness curves were determined. It is shown that static stiffness curves are nonlinear. The level of stiffness affects the vehicle dynamics behaviour.
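
    The nonlinearity of such stiffness curves can be illustrated with the simplest hyperelastic law, the incompressible neo-Hookean model (the paper fits its models to material tests that are not reproduced here; the shear modulus below is an invented value). Under uniaxial stretch lambda the nominal stress is P = mu*(lambda - lambda**-2), so the tangent stiffness dP/dlambda = mu*(1 + 2*lambda**-3) varies with stretch:

```python
def neo_hookean_nominal_stress(stretch, mu):
    """Uniaxial nominal (engineering) stress of an incompressible
    neo-Hookean solid: P = mu * (lambda - lambda**-2)."""
    return mu * (stretch - stretch**-2)

def tangent_stiffness(stretch, mu):
    """dP/d(lambda) = mu * (1 + 2*lambda**-3): stretch-dependent, i.e. the
    stiffness curve is nonlinear."""
    return mu * (1 + 2 * stretch**-3)

mu = 0.6  # shear modulus in MPa, an illustrative value for a filled rubber
for lam in (1.0, 1.5, 2.0):
    print(lam,
          round(neo_hookean_nominal_stress(lam, mu), 3),
          round(tangent_stiffness(lam, mu), 3))
```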

  4. Bayes factors for reinforcement-learning models of the Iowa Gambling Task

    NARCIS (Netherlands)

    Steingroever, H.; Wetzels, R.; Wagenmakers, E.-J.

    2016-01-01

    The psychological processes that underlie performance on the Iowa gambling task (IGT) are often isolated with the help of reinforcement-learning (RL) models. The most popular method to compare RL models is the BIC post hoc fit criterion—a criterion that considers goodness-of-fit relative to model co
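
    The BIC criterion referred to here trades goodness-of-fit against parameter count: BIC = k*ln(n) - 2*ln(L_hat), with lower values preferred. A minimal sketch with invented log-likelihoods (the two candidates stand in for typical IGT reinforcement-learning models):

```python
import math

def bic(log_likelihood, n_params, n_observations):
    """Bayesian information criterion: k*ln(n) - 2*ln(L_hat).
    Lower values indicate a better fit after penalizing model complexity."""
    return n_params * math.log(n_observations) - 2 * log_likelihood

# Hypothetical maximized log-likelihoods for two RL models fitted to one
# participant's 100 IGT trials (values invented for illustration):
bic_a = bic(log_likelihood=-120.0, n_params=3, n_observations=100)
bic_b = bic(log_likelihood=-116.0, n_params=4, n_observations=100)
print(round(bic_a, 2), round(bic_b, 2))
# exp((bic_a - bic_b) / 2) approximates the Bayes factor favouring model B
print(round(math.exp((bic_a - bic_b) / 2), 2))
```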

  5. Modelling theoretical uncertainties in phenomenological analyses for particle physics

    CERN Document Server

    Charles, Jérôme; Niess, Valentin; Silva, Luiz Vale

    2016-01-01

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding $p$-values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive $p$-value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavour p...

  6. Modeling theoretical uncertainties in phenomenological analyses for particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)

    2017-04-15

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)
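
    The nuisance treatment of a theoretical uncertainty can be sketched as follows: the prediction may be shifted by an unknown bias b confined to [-delta, +delta], and the quoted p value is the most conservative one over that range. This is only an illustration of the ingredients; the paper's adaptive p value, which lets the range depend on the significance considered, is not reproduced here.

```python
import math

def p_value_gaussian(x, mu, sigma):
    """Two-sided Gaussian p value for an observation x given mean mu, width sigma."""
    z = abs(x - mu) / sigma
    return math.erfc(z / math.sqrt(2.0))

def nuisance_p_value(x, mu, sigma, delta):
    """Most conservative p value when the prediction mu may be biased by any
    b in [-delta, +delta]: the supremum is reached at the admissible bias
    closest to the observation."""
    b = max(-delta, min(delta, x - mu))  # clip x - mu into the bias range
    return p_value_gaussian(x, mu + b, sigma)

# An observation 3 sigma from the central prediction, with a theory bias
# allowed up to 1 sigma, is only as significant as a 2 sigma discrepancy:
print(nuisance_p_value(3.0, 0.0, 1.0, 1.0))
print(p_value_gaussian(2.0, 0.0, 1.0))
```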

  7. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.
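
    The core ingredient of such a comparison, a per-cell loss field between each candidate slip model and the reference, is easy to sketch. This toy version only averages the loss differential; the actual SPCT additionally tests whether that mean differs significantly from zero while accounting for the spatial correlation of the loss field, which is deliberately omitted here. All grids and values are invented.

```python
def mean_loss_differential(model_a, model_b, reference):
    """Mean over the grid of loss(A) - loss(B), with squared-error loss.
    A negative value favours model A. (The real SPCT turns this statistic
    into a significance test that respects spatial correlation.)"""
    n, total = 0, 0.0
    for row_a, row_b, row_r in zip(model_a, model_b, reference):
        for a, b, r in zip(row_a, row_b, row_r):
            total += (a - r) ** 2 - (b - r) ** 2
            n += 1
    return total / n

# Tiny 2-D 'slip' grids (metres of slip per cell); purely illustrative values
reference = [[1.0, 2.0], [3.0, 4.0]]
model_a   = [[1.1, 2.1], [2.9, 4.0]]   # close to the reference
model_b   = [[0.0, 3.0], [1.0, 6.0]]   # far from the reference
print(mean_loss_differential(model_a, model_b, reference))  # negative: A wins
```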

  8. Assessment of a geological model by surface wave analyses

    Science.gov (United States)

    Martorana, R.; Capizzi, P.; Avellone, G.; D'Alessandro, A.; Siragusa, R.; Luzio, D.

    2017-02-01

    A set of horizontal to vertical spectral ratio (HVSR) and multichannel analysis of surface waves (MASW) measurements, carried out in the Altavilla Milicia (Sicily) area, is analyzed to test a geological model of the area. Statistical techniques have been used in different stages of the data analysis to optimize the reliability of the information extracted from geophysical measurements. In particular, cluster analysis algorithms have been implemented to select the time windows of the microseismic signal to be used for calculating the spectral ratio H/V and to identify sets of spectral ratio peaks likely caused by the same underground structures. Using results of reflection seismic lines, typical values of P-wave and S-wave velocity were estimated for each geological formation present in the area. These were used to narrow down the parameter search space for the HVSR interpretation. MASW profiles, carried out close to each HVSR measuring point, provided the parameters of the shallower layers for the HVSR models. MASW inversion has been constrained by extrapolating thicknesses from a known stratigraphic sequence. Preliminary 1D seismic models were obtained by adding deeper layers to models that resulted from MASW inversion. These justify the peaks of the HVSR curves due to layers deeper than the MASW investigation depth. Furthermore, much deeper layers were included in the HVSR model, as suggested by the geological setting and stratigraphic sequence. This choice was made considering that these latter layers do not generate other HVSR peaks and do not significantly affect the misfit. The starting models have been used to limit the starting search space for a more accurate interpretation, made considering the noise as a superposition of Rayleigh and Love waves. The results allowed four main seismic layers to be recognized and associated with the main stratigraphic successions. The lateral correlation of seismic velocity models, joined with tectonic evidence

  9. Compound dislocation models (CDMs) for volcano deformation analyses

    Science.gov (United States)

    Nikkhoo, Mehdi; Walter, Thomas R.; Lundgren, Paul R.; Prats-Iraola, Pau

    2017-02-01

    Volcanic crises are often preceded and accompanied by volcano deformation caused by magmatic and hydrothermal processes. Fast and efficient model identification and parameter estimation techniques for various sources of deformation are crucial for process understanding, volcano hazard assessment and early warning purposes. As a simple model that can be a basis for rapid inversion techniques, we present a compound dislocation model (CDM) that is composed of three mutually orthogonal rectangular dislocations (RDs). We present new RD solutions, which are free of artefact singularities and that also possess full rotational degrees of freedom. The CDM can represent both planar intrusions in the near field and volumetric sources of inflation and deflation in the far field. Therefore, this source model can be applied to shallow dikes and sills, as well as to deep planar and equidimensional sources of any geometry, including oblate, prolate and other triaxial ellipsoidal shapes. In either case the sources may possess any arbitrary orientation in space. After systematically evaluating the CDM, we apply it to the co-eruptive displacements of the 2015 Calbuco eruption observed by the Sentinel-1A satellite in both ascending and descending orbits. The results show that the deformation source is a deflating vertical lens-shaped source at an approximate depth of 8 km centred beneath Calbuco volcano. The parameters of the optimal source model clearly show that it is significantly different from an isotropic point source or a single dislocation model. The Calbuco example reflects the convenience of using the CDM for a rapid interpretation of deformation data.

  10. A Formal Model to Analyse the Firewall Configuration Errors

    Directory of Open Access Journals (Sweden)

    T. T. Myo

    2015-01-01

    Full Text Available The firewall is widely known as a brandmauer (security-edge gateway). To provide the demanded security, the firewall has to be appropriately adjusted, i.e. configured. Unfortunately, even skilled administrators may make mistakes when configuring, which result in a decreased level of network security and in the infiltration of undesirable packets into the network. The network can be exposed to various threats and attacks. One of the mechanisms used to ensure network security is the firewall. The firewall is a network component which, using a security policy, controls packets passing through the borders of a secured network. The security policy represents a set of rules. Packet filters work in stateless mode: they investigate packets as independent objects. Rules take the following form: (condition, action). The firewall analyses the entering traffic based on the IP address of the sender and recipient, the port number of the sender and recipient, and the protocol used. When a packet meets a rule's conditions, the action specified in the rule is carried out: allow or deny. The aim of this article is to develop tools to analyse a firewall configuration with state inspection. The input data are a file with the set of rules. It is required to present the analysis of a security policy in an informative graphic form and to reveal inconsistencies in the rules. The article presents a security policy visualization algorithm and a program which shows how the firewall rules act on all possible packets. To represent the result in an intelligible form, the concept of an equivalence region is introduced. Our task is for the program to display the results of rule actions on packets in a convenient graphic form and to reveal contradictions between the rules. One of the problems is the large number of dimensions. As noted above, the following parameters are specified in a rule: source IP address, destination IP
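
    The (condition, action) rule format and first-match semantics described above can be sketched directly; the shadowing check below is one simple form of the rule contradictions such a tool is meant to reveal. The field model is deliberately simplified to exact strings or "*" wildcards (real filters match address prefixes and port ranges), and all rules and packets are invented:

```python
# A rule is (src, dst, port, proto, action); the first matching rule wins.
FIELDS = ("src", "dst", "port", "proto")

def matches(rule, packet):
    """True if every condition field is a wildcard or equals the packet field."""
    return all(v == "*" or packet[k] == v for k, v in zip(FIELDS, rule))

def decide(rules, packet, default="deny"):
    """First-match evaluation with a default-deny fallback."""
    for rule in rules:
        if matches(rule, packet):
            return rule[4]
    return default

def shadowed(rules):
    """Indices of rules that can never fire because an earlier rule matches
    every packet they match (field-by-field: earlier field is '*' or equal)."""
    hits = []
    for i, r in enumerate(rules):
        for e in rules[:i]:
            if all(a == "*" or a == b for a, b in zip(e[:4], r[:4])):
                hits.append(i)
                break
    return hits

rules = [
    ("*", "10.0.0.5", "80", "tcp", "allow"),
    ("*", "*",        "*",  "*",   "deny"),
    ("*", "10.0.0.5", "80", "tcp", "deny"),   # never reached: shadowed
]
pkt = {"src": "192.168.1.7", "dst": "10.0.0.5", "port": "80", "proto": "tcp"}
print(decide(rules, pkt))   # allow
print(shadowed(rules))      # [2]
```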

  11. Analysing the Organizational Culture of Universities: Two Models

    Science.gov (United States)

    Folch, Marina Tomas; Ion, Georgeta

    2009-01-01

    This article presents the findings of two research projects, examining organizational culture by means of two different models of analysis--one at university level and one at department level--which were carried out over the last four years at Catalonian public universities (Spain). Theoretical and methodological approaches for the two…

  12. Enhancing Technology-Mediated Communication: Tools, Analyses, and Predictive Models

    Science.gov (United States)

    2007-09-01

    the home (see, for example, Nagel, Hudson, & Abowd, 2004), in social settings (see Kern, Antifakos, Schiele ...on Computer Supported Cooperative Work (CSCW 2006), pp. 525-528, ACM Press. Kern, N., Antifakos, S., Schiele, B., & Schwaninger, A. (2004). A model

  13. Gene Discovery and Functional Analyses in the Model Plant Arabidopsis

    Institute of Scientific and Technical Information of China (English)

    Cai-Ping Feng; John Mundy

    2006-01-01

    The present mini-review describes newer methods and strategies, including transposon and T-DNA insertions,TILLING, Deleteagene, and RNA interference, to functionally analyze genes of interest in the model plant Arabidopsis. The relative advantages and disadvantages of the systems are also discussed.

  14. Gene Discovery and Functional Analyses in the Model Plant Arabidopsis

    DEFF Research Database (Denmark)

    Feng, Cai-ping; Mundy, J.

    2006-01-01

    The present mini-review describes newer methods and strategies, including transposon and T-DNA insertions, TILLING, Deleteagene, and RNA interference, to functionally analyze genes of interest in the model plant Arabidopsis. The relative advantages and disadvantages of the systems are also...

  15. A new model for analysing thermal stress in granular composite

    Institute of Scientific and Technical Information of China (English)

    郑茂盛; 金志浩; 浩宏奇

    1995-01-01

    A double-embedding model, in which a reinforcement grain and a hollow matrix ball are embedded in the effective medium of a particulate-reinforced composite, is advanced. With this model, the distributions of thermal stress in the different phases of the composite during cooling are studied. Various expressions for predicting elastic and elastoplastic thermal stresses are derived. It is found that the reinforcement suffers compressive hydrostatic stress while the hydrostatic stress in the matrix zone is tensile when temperature decreases; as temperature decreases further, a yield area forms in the matrix; as the volume fraction of reinforcement is enlarged, the compressive stress on the grain and the tensile hydrostatic stress in the matrix zone decrease; the initial temperature difference for yielding at the reinforcement/matrix interface rises, while that for overall matrix yielding decreases.

  16. Phase I/II trials of {sup 186}Re-HEDP in metastatic castration-resistant prostate cancer: post-hoc analysis of the impact of administered activity and dosimetry on survival

    Energy Technology Data Exchange (ETDEWEB)

    Denis-Bacelar, Ana M.; Chittenden, Sarah J.; Divoli, Antigoni; Flux, Glenn D. [The Institute of Cancer Research and The Royal Marsden Hospital NHS Foundation Trust, Joint Department of Physics, London (United Kingdom); Dearnaley, David P.; Johnson, Bernadette [The Institute of Cancer Research and The Royal Marsden Hospital NHS Foundation Trust, Division of Radiotherapy and Imaging, London (United Kingdom); O' Sullivan, Joe M. [Queen' s University Belfast, Centre for Cancer Research and Cell Biology, Belfast (United Kingdom); McCready, V.R. [Brighton and Sussex University Hospitals NHS Trust, Department of Nuclear Medicine, Brighton (United Kingdom); Du, Yong [The Royal Marsden Hospital NHS Foundation Trust, Department of Nuclear Medicine and PET/CT, London (United Kingdom)

    2017-04-15

    To investigate the role of patient-specific dosimetry as a predictive marker of survival and as a potential tool for individualised molecular radiotherapy treatment planning of bone metastases from castration-resistant prostate cancer, and to assess whether higher administered levels of activity are associated with a survival benefit. Clinical data from 57 patients who received 2.5-5.1 GBq of {sup 186}Re-HEDP as part of NIH-funded phase I/II clinical trials were analysed. Whole-body and SPECT-based absorbed doses to the whole body and bone lesions were calculated for 22 patients receiving 5 GBq. The patient mean absorbed dose was defined as the mean of all bone lesion-absorbed doses in any given patient. Kaplan-Meier curves, log-rank tests, Cox's proportional hazards model and Pearson's correlation coefficients were used for overall survival (OS) and correlation analyses. A statistically significantly longer OS was associated with administered activities above 3.5 GBq in the 57 patients (20.1 vs 7.1 months, hazard ratio: 0.39, 95 % CI: 0.10-0.58, P = 0.002). A total of 379 bone lesions were identified in 22 patients. The mean of the patient mean absorbed dose was 19 (±6) Gy and the mean of the whole-body absorbed dose was 0.33 (±0.11) Gy for the 22 patients. The patient mean absorbed dose (r = 0.65, P = 0.001) and the whole-body absorbed dose (r = 0.63, P = 0.002) showed a positive correlation with disease volume. Significant differences in OS were observed for the univariate group analyses according to disease volume as measured from SPECT imaging of {sup 186}Re-HEDP (P = 0.03) and patient mean absorbed dose (P = 0.01), whilst only the disease volume remained significant in a multivariable analysis (P = 0.004). This study demonstrated that higher administered activities led to prolonged survival and that for a fixed administered activity, the whole-body and patient mean absorbed doses correlated with the extent of disease, which, in turn, correlated
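
The survival analyses above use standard nonparametric machinery; as a sketch of the first step, a Kaplan-Meier estimator can be written in a few lines (illustrative code with invented data, not the trial's analysis, which also involved log-rank tests and Cox regression):

```python
# Minimal Kaplan-Meier estimator: survival probability S(t) at each
# distinct event time, given follow-up times and event indicators.
def kaplan_meier(times, events):
    """times: follow-up (e.g. months); events: 1 = death observed, 0 = censored."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0
    curve = []  # (time, S(t)) recorded at each event time
    i = 0
    while i < len(order):
        t = times[order[i]]
        deaths = 0
        removed = 0
        # Handle ties: process all subjects sharing this follow-up time.
        while i < len(order) and times[order[i]] == t:
            deaths += events[order[i]]
            removed += 1
            i += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= removed
    return curve
```

Median OS is then read off as the first time at which S(t) falls to 0.5 or below.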

  17. Analysing an Analytical Solution Model for Simultaneous Mobility

    Directory of Open Access Journals (Sweden)

    Md. Ibrahim Chowdhury

    2013-12-01

    Full Text Available Current models of simultaneous mobility are convoluted both in designing simultaneous movement, where mobile nodes (MNs) travel randomly from two adjacent cells at the same time, and in measuring the occurrences of simultaneous handover. The simultaneous mobility problem arises when two MNs start handover at approximately the same time. Since simultaneous mobility differs from other mobility patterns and generally occurs relatively rarely in real time, we argue that a simplified simultaneous mobility model can be obtained by considering only symmetric positions of MNs with random steps. In addition, we simulated the model using mSCTP and compared the simulation results in different scenarios with customized cell ranges. The analytical results show that the bigger the cell size, the less frequently simultaneous handover with random steps occurs, whereas for sequential mobility (where the initial positions of the MNs are predetermined) with random steps, simultaneous handover is more frequent.

  18. A simulation model for analysing brain structure deformations

    Energy Technology Data Exchange (ETDEWEB)

    Bona, Sergio Di [Institute for Information Science and Technologies, Italian National Research Council (ISTI-CNR), Via G Moruzzi, 1-56124 Pisa (Italy); Lutzemberger, Ludovico [Department of Neuroscience, Institute of Neurosurgery, University of Pisa, Via Roma, 67-56100 Pisa (Italy); Salvetti, Ovidio [Institute for Information Science and Technologies, Italian National Research Council (ISTI-CNR), Via G Moruzzi, 1-56124 Pisa (Italy)

    2003-12-21

    Recent developments of medical software applications, from the simulation to the planning of surgical operations, have revealed the need for modelling human tissues and organs not only from a geometric point of view but also from a physical one, i.e. soft tissues, rigid bodies, viscoelasticity, etc. This has given rise to the term 'deformable objects', which refers to objects with a morphology and a physical and mechanical behaviour of their own that reflect their natural properties. In this paper, we propose a model, based upon physical laws, suitable for the realistic manipulation of geometric reconstructions of volumetric data taken from MR and CT scans. In particular, a physically based model of the brain is presented that is able to simulate the evolution of intra-cranial pathological phenomena of different natures, such as haemorrhages, neoplasms, haematomas, etc, and to describe the consequences caused by their volume expansion and the influence they have on the anatomical and neuro-functional structures of the brain.

  19. Analyses of Cometary Silicate Crystals: DDA Spectral Modeling of Forsterite

    Science.gov (United States)

    Wooden, Diane

    2012-01-01

    Comets are the Solar System's deep freezers of the gases, ices, and particulates that were present in the outer protoplanetary disk. Comet nuclei accreted where it was so cold that CO ice (approximately 50 K) and other supervolatile ices like ethane (C2H6) were preserved. However, comets also accreted high-temperature minerals: silicate crystals that either condensed (greater than or equal to 1400 K) or were annealed from amorphous (glassy) silicates (greater than 850-1000 K). Given their rarity in the interstellar medium, cometary crystalline silicates are thought to be grains that formed in the inner disk and were then radially transported out to the cold, ice-rich regimes near Neptune. The questions that comets can potentially address are: how fast, how far, and over what duration were crystals that formed in the inner disk transported out to the comet-forming region(s)? In comets, the mass fractions of silicates that are crystalline, f_cryst, translate to benchmarks for protoplanetary disk radial transport models. The infamous comet Hale-Bopp has crystalline fractions of over 55%. The values for cometary crystalline mass fractions, however, are derived assuming that the mineralogy assessed for the submicron- to micron-sized portion of the size distribution represents the compositional makeup of all larger grains in the coma. Models for fitting cometary SEDs make this assumption because they can only fit the observed features with submicron- to micron-sized discrete crystals. On the other hand, larger (0.1-100 micrometer radii) porous grains composed of amorphous silicates and amorphous carbon can easily be computed with mixed-medium theory, wherein vacuum mixed into a spherical particle mimics a porous aggregate. If crystalline silicates are mixed in, the models completely fail to match the observations.
    Moreover, models for a size distribution of discrete crystalline forsterite grains commonly employ the CDE computational method for ellipsoidal platelets (c:a:b=8

  20. Temporal variations analyses and predictive modeling of microbiological seawater quality.

    Science.gov (United States)

    Lušić, Darija Vukić; Kranjčević, Lado; Maćešić, Senka; Lušić, Dražen; Jozić, Slaven; Linšak, Željko; Bilajac, Lovorka; Grbčić, Luka; Bilajac, Neiro

    2017-08-01

    Bathing water quality is a major public health issue, especially for tourism-oriented regions. The methods currently used within the EU require at least 2.2 days to obtain analytical results, so the information forwarded to the public is already outdated. The results obtained, and the resulting beach assessment, are influenced by the temporal and spatial characteristics of sample collection and by numerous environmental parameters, as well as by differences between official water standards. This paper examines the temporal variation of microbiological parameters during the day, as well as the influence of the sampling hour on decision processes in beach management. Apart from the fecal indicators stipulated by the EU Bathing Water Directive (E. coli and enterococci), additional fecal (C. perfringens) and non-fecal (S. aureus and P. aeruginosa) parameters were analyzed. Moreover, the effects of applying different evaluation criteria (national, EU and U.S. EPA) to beach ranking were studied, and the most common reasons for exceeding water-quality standards were investigated. In order to upgrade routine monitoring, a predictive statistical model was developed. The highest concentrations of fecal indicators were recorded early in the morning (6 AM) owing to the lack of solar radiation during the night period. When compared to enterococci, the E. coli criteria appear to be more stringent for the detection of fecal pollution. In comparison to the EU and U.S. EPA criteria, the Croatian national evaluation criteria provide stricter public health standards. Solar radiation and precipitation were the predominant environmental parameters affecting beach water quality, and these parameters were included in the predictive model setup. Predictive models revealed great potential for the monitoring of recreational water bodies and, with further development, can become a useful tool for the improvement of public health protection. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Analysing the Competency of Mathematical Modelling in Physics

    CERN Document Server

    Redish, Edward F

    2016-01-01

    A primary goal of physics is to create mathematical models that allow both predictions and explanations of physical phenomena. We weave maths extensively into our physics instruction beginning in high school, and the level and complexity of the maths we draw on grows as our students progress through a physics curriculum. Despite much research on the learning of both physics and maths, the problem of how to successfully teach most of our students to use maths in physics effectively remains unsolved. A fundamental issue is that in physics, we don't just use maths, we think about the physical world with it. As a result, we make meaning with mathematical symbology in a different way than mathematicians do. In this talk we analyze how developing the competency of mathematical modeling is more than just "learning to do math": it requires learning to blend physical meaning into mathematical representations and to use that physical meaning in solving problems. Examples are drawn from across the curriculum.

  2. Fluctuating selection models and McDonald-Kreitman type analyses.

    Directory of Open Access Journals (Sweden)

    Toni I Gossmann

    Full Text Available It is likely that the strength of selection acting upon a mutation varies through time due to changes in the environment. However, most population genetic theory assumes that the strength of selection remains constant. Here we investigate the consequences of fluctuating selection pressures on the quantification of adaptive evolution using McDonald-Kreitman (MK) style approaches. In agreement with previous work, we show that fluctuating selection can generate evidence of adaptive evolution even when the expected strength of selection on a mutation is zero. However, we also find that, under fluctuating selection models, the mutations that contribute to both polymorphism and divergence tend, on average, to be positively selected during their lifetime. This is because mutations that fluctuate, by chance, to positively selected values tend to reach higher frequencies in the population than those that fluctuate towards negative values. Hence the evidence of adaptive evolution detected by MK-type approaches under a fluctuating selection model is genuine, since fixed mutations tend to be advantageous on average during their lifetime. Nevertheless, we show that such methods tend to underestimate the rate of adaptive evolution when selection fluctuates.

  3. A workflow model to analyse pediatric emergency overcrowding.

    Science.gov (United States)

    Zgaya, Hayfa; Ajmi, Ines; Gammoudi, Lotfi; Hammadi, Slim; Martinot, Alain; Beuscart, Régis; Renard, Jean-Marie

    2014-01-01

    The greatest source of delay in patient flow is the waiting time from the healthcare request, and especially from the bed request, to exit from the Pediatric Emergency Department (PED) for hospital admission; it represents 70% of the time that these patients spend in the PED waiting rooms. Our objective in this study is to identify tension indicators and bottlenecks that contribute to overcrowding. Patient flow through the PED was mapped over a continuous two-year period from January 2011 to December 2012. Our method uses the real data collected from actual visits to the PED of the Regional University Hospital Center (CHRU) of Lille (France) to construct an accurate and complete representation of the PED processes. The result of this representation is a workflow model of the patient journey in the PED that represents as faithfully as possible the reality of the PED of the CHRU of Lille. This model allowed us to identify sources of delay in patient flow and aspects of PED activity that could be improved. It must be sufficiently detailed to support an analysis that identifies the dysfunctions of the PED and to propose and estimate indicators for preventing strain. Our study is part of the French National Research Agency project titled "Hospital: optimization, simulation and avoidance of strain" (ANR HOST).

  4. DeepMIP: experimental design for model simulations of the EECO, PETM, and pre-PETM

    NARCIS (Netherlands)

    Lunt, D. J.; Huber, M.; Baatsen, M. L. J.; Caballero, R.; DeConto, R.; Donnadieu, Y.; Evans, D.; Feng, R.; Foster, G.; Gasson, E.; von der Heydt, A. S.; Hollis, C. J.; Kirtland Turner, S.; Korty, R. L.; Kozdon, R.; Krishnan, S.; Ladant, J. -B.; Langebroek, P.; Lear, C. H.; LeGrande, A. N.; Littler, K.; Markwick, P.; Otto-Bliesner, B.; Pearson, P.; Poulsen, C.; Salzmann, U.; Shields, C.; Snell, K.; Starz, M.; Super, J.; Tabour, C.; Tierney, J.; Tourte, G. J. L.; Upchurch, G. R.; Wade, B.; Wing, S. L.; Winguth, A. M. E.; Wright, N.; Zachos, J. C.; Zeebe, R.

    2016-01-01

    Past warm periods provide an opportunity to evaluate climate models under extreme forcing scenarios, in particular high (> 800 ppmv) atmospheric CO2 concentrations. Although a post-hoc intercomparison of Eocene (~50 million years ago, Ma) climate model simulations and geological data has been carried out

  5. Geographically Isolated Wetlands and Catchment Hydrology: A Modified Model Analyses

    Science.gov (United States)

    Evenson, G.; Golden, H. E.; Lane, C.; D'Amico, E.

    2014-12-01

    Geographically isolated wetlands (GIWs), typically defined as depressional wetlands surrounded by uplands, support an array of hydrological and ecological processes. However, key research questions concerning the hydrological connectivity of GIWs and their impacts on downgradient surface waters remain unanswered. This is particularly important for regulation and management of these systems. For example, in the past decade United States Supreme Court decisions suggest that GIWs can be afforded protection if significant connectivity exists between these waters and traditional navigable waters. Here we developed a simulation procedure to quantify the effects of various spatial distributions of GIWs across the landscape on the downgradient hydrograph using a refined version of the Soil and Water Assessment Tool (SWAT), a catchment-scale hydrological simulation model. We modified the SWAT FORTRAN source code and employed an alternative hydrologic response unit (HRU) definition to facilitate an improved representation of GIW hydrologic processes and connectivity relationships to other surface waters, and to quantify their downgradient hydrological effects. We applied the modified SWAT model to an ~ 202 km2 catchment in the Coastal Plain of North Carolina, USA, exhibiting a substantial population of mapped GIWs. Results from our series of GIW distribution scenarios suggest that: (1) Our representation of GIWs within SWAT conforms to field-based characterizations of regional GIWs in most respects; (2) GIWs exhibit substantial seasonally-dependent effects upon downgradient base flow; (3) GIWs mitigate peak flows, particularly following high rainfall events; and (4) The presence of GIWs on the landscape impacts the catchment water balance (e.g., by increasing groundwater outflows). Our outcomes support the hypothesis that GIWs have an important catchment-scale effect on downgradient streamflow.

  6. Using System Dynamic Model and Neural Network Model to Analyse Water Scarcity in Sudan

    Science.gov (United States)

    Li, Y.; Tang, C.; Xu, L.; Ye, S.

    2017-07-01

    Many parts of the world are facing the problem of water scarcity. Analysing water scarcity quantitatively is an important step towards solving the problem. Water scarcity in a region is gauged by the water scarcity index (WSI), which incorporates water supply and water demand. To obtain the WSI, a neural network model and a system dynamics model (SDM) are developed to depict how environmental and social factors affect water supply and demand. The uneven distribution of water resources and water demand across a region leads to an uneven distribution of the WSI within that region. To predict the WSI for the future, a logistic model, Grey Prediction, and statistical methods are applied to forecast the underlying variables. Sudan suffers from a severe water scarcity problem, with a WSI of 1 in 2014 and unevenly distributed water resources. According to the results of the modified model, Sudan's water situation will improve after intervention.
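
As a toy illustration of the quantities involved, the WSI as a demand/supply ratio and a discrete logistic projection of a demand driver can be sketched as follows (the function names, the ratio definition, and the logistic form are illustrative assumptions, not the paper's actual equations):

```python
# Toy water-scarcity-index sketch: WSI as demand/supply, plus a discrete
# logistic projection for a demand driver such as population.
def wsi(demand, supply):
    """WSI >= 1 indicates demand meeting or exceeding supply (scarcity)."""
    return demand / supply

def logistic_projection(value, rate, capacity, years):
    """Step a driver forward with discrete logistic growth toward a capacity."""
    for _ in range(years):
        value += rate * value * (1.0 - value / capacity)
    return value
```

Projected drivers feed back into wsi() to give a future index for each region.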

  7. Pan-European modelling of riverine nutrient concentrations - spatial patterns, source detection, trend analyses, scenario modelling

    Science.gov (United States)

    Bartosova, Alena; Arheimer, Berit; Capell, Rene; Donnelly, Chantal; Strömqvist, Johan

    2016-04-01

    Nutrient transport models are important tools for large-scale assessments of macro-nutrient fluxes (nitrogen, phosphorus) and can thus serve as support tools for environmental assessment and management. Results from model applications over large areas, i.e. from major river basin to continental scales, can fill a gap where monitoring data are not available. Here, we present results from the pan-European rainfall-runoff and nutrient transfer model E-HYPE, which is based on open data sources. We investigate the ability of the E-HYPE model to replicate the spatial and temporal variations found in observed time series of riverine N and P concentrations, and illustrate the model's usefulness for nutrient source detection, trend analyses, and scenario modelling. The results show spatial patterns in N concentration in rivers across Europe which can further our understanding of nutrient issues across the European continent. E-HYPE results show hot spots, with the highest concentrations of total nitrogen in Western Europe along the North Sea coast. Source apportionment was performed to rank sources of nutrient inflow from land to sea along the European coast. An integrated dynamic model such as E-HYPE also allows us to investigate the impacts of climate change and programmes of measures, which was illustrated in a couple of scenarios for the Baltic Sea. Comparing model results with observations shows large uncertainty in many of the data sets and in the assumptions used in the model set-up, e.g. point-source release estimates. However, evaluation of model performance at a number of measurement sites in Europe shows that mean N concentration levels are generally well simulated. P levels are less well predicted, which is expected, as the variability of P concentrations in both time and space is higher. Comparing model performance with model set-ups using local data for the Weaver River (UK) did not result in systematically better model performance, which highlights the complexity of model

  8. Taxing CO2 and subsidising biomass: Analysed in a macroeconomic and sectoral model

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik

    2000-01-01

    This paper analyses the combination of taxes and subsidies as an instrument to enable a reduction in CO2 emissions. The objective of the study is to compare recycling of a CO2 tax revenue as a subsidy for biomass use as opposed to traditional recycling such as reduced income or corporate taxation.... A model of Denmark's energy supply sector is used to analyse the effect of a CO2 tax combined with using the tax revenue for biomass subsidies. The energy supply model is linked to a macroeconomic model such that the macroeconomic consequences of tax policies can be analysed along with the consequences...

  9. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analysis of longitudinal data, analyses based on generalized linear models (GLM) are criticized for violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analysis package commonly used by researchers, documentation on LMM procedures in SPSS is neither thorough nor user friendly. In light of this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
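
The kind of two-level growth-curve model the paper works with can be written, in generic notation (not tied to the Project P.A.T.H.S. variables), as:

```latex
y_{ij} = \beta_0 + \beta_1 t_{ij} + u_{0j} + u_{1j} t_{ij} + \varepsilon_{ij},
\qquad
\begin{pmatrix} u_{0j} \\ u_{1j} \end{pmatrix} \sim N(\mathbf{0}, \Sigma_u),
\quad
\varepsilon_{ij} \sim N(0, \sigma^2)
```

where \(y_{ij}\) is the outcome at measurement occasion \(i\) for participant \(j\), the \(\beta\) terms are fixed effects shared across participants, and the \(u\) terms are participant-specific random intercepts and slopes; it is these random effects that relax the independence assumption criticized for GLM above.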

  10. Pathway models for analysing and managing the introduction of alien plant pests - an overview and categorization

    NARCIS (Netherlands)

    Douma, J.C.; Pautasso, M.; Venette, R.C.; Robinet, C.; Hemerik, L.; Mourits, M.C.M.; Schans, J.; Werf, van der W.

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative

  11. An improved lake model for climate simulations: Model structure, evaluation, and sensitivity analyses in CESM1

    Directory of Open Access Journals (Sweden)

    Zachary Subin

    2012-02-01

    Full Text Available Lakes can influence regional climate, yet most general circulation models have, at best, simple and largely untested representations of lakes. We developed the Lake, Ice, Snow, and Sediment Simulator (LISSS) for inclusion in the land-surface component (CLM4) of an earth system model (CESM1). The existing CLM4 lake model performed poorly at all sites tested; for temperate lakes, summer surface water temperature predictions were 10–25 °C lower than observations. CLM4-LISSS modifies the existing model by including (1) a treatment of snow; (2) freezing, melting, and ice physics; (3) a sediment thermal submodel; (4) spatially variable prescribed lake depth; (5) improved parameterizations of lake surface properties; (6) increased mixing under ice and in deep lakes; and (7) correction of previous errors. We evaluated the lake model predictions of water temperature and surface fluxes at three small temperate and boreal lakes where extensive observational data were available. We also evaluated the predicted water temperature and/or ice and snow thicknesses for ten other lakes where less comprehensive forcing observations were available. CLM4-LISSS performed very well compared to observations for shallow- to medium-depth small lakes. For large, deep lakes, the under-prediction of mixing was improved by increasing the lake eddy diffusivity by a factor of 10, consistent with previously published analyses. Surface temperature and surface flux predictions were improved when the aerodynamic roughness lengths were calculated as a function of friction velocity, rather than using a constant value of 1 mm or greater. We evaluated the sensitivity of surface energy fluxes to modeled lake processes and parameters. Large changes in monthly averaged surface fluxes (up to 30 W m⁻²) were found when excluding snow insulation or phase-change physics and when varying the opacity, depth, albedo of melting lake ice, and mixing strength across ranges commonly found in real lakes. Typical
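
One common way to make the roughness length a function of friction velocity, as the abstract describes, is a Charnock-type relation; the sketch below is a generic illustration, where the constant alpha and the smooth-flow floor are typical open-water values assumed here, not necessarily those used in CLM4-LISSS:

```python
# Charnock-type aerodynamic roughness length over water: z0 = alpha * u*^2 / g,
# with a lower bound for calm conditions.
def roughness_length(u_star, alpha=0.013, g=9.81, z0_min=1e-5):
    """u_star: friction velocity (m/s); returns roughness length z0 in metres."""
    return max(alpha * u_star ** 2 / g, z0_min)
```

Because z0 grows quadratically with u*, windy conditions yield a rougher surface and stronger turbulent fluxes than any fixed 1 mm value would.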

  12. A modified Lee-Carter model for analysing short-base-period data.

    Science.gov (United States)

    Zhao, Bojuan Barbara

    2012-03-01

    This paper introduces a new modified Lee-Carter model for analysing short-base-period mortality data, for which the original Lee-Carter model produces severely fluctuating predicted age-specific mortality. Approximating the unknown parameters in the modified model by linearized cubic splines and other additive functions, the model can be simplified into a logistic regression when fitted to binomial data. The expected death rate estimated from the modified model is smooth, not only over ages but also over years. The analysis of mortality data in China (2000-08) demonstrates the advantages of the new model over existing models.
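
For reference, the original Lee-Carter model that the paper modifies, log m(x,t) = a_x + b_x k_t, is usually fitted by a rank-1 SVD; a sketch on synthetic data (all numbers invented, and the paper's spline/logistic modification is not shown) is:

```python
import numpy as np

# Classic Lee-Carter fit via rank-1 SVD on synthetic log-mortality data.
rng = np.random.default_rng(0)
ages, years = 10, 20
a_true = np.linspace(-6.0, -2.0, ages)      # age pattern of log-mortality
b_true = np.full(ages, 1.0 / ages)          # age sensitivities, summing to 1
k_true = np.linspace(2.0, -2.0, years)      # declining period mortality index
log_m = (a_true[:, None] + np.outer(b_true, k_true)
         + rng.normal(0.0, 0.01, (ages, years)))

a_x = log_m.mean(axis=1)                          # a_x: mean log-rate per age
U, s, Vt = np.linalg.svd(log_m - a_x[:, None])    # rank-1 fit of the residual
b_x = U[:, 0] / U[:, 0].sum()                     # normalise so sum(b_x) = 1
k_t = s[0] * Vt[0] * U[:, 0].sum()                # rescale k_t to compensate
fit = a_x[:, None] + np.outer(b_x, k_t)           # fitted log m(x,t)
```

With a short base period, k_t is estimated from few columns, which is exactly the situation where this SVD fit fluctuates and the paper's smoothed model helps.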

  13. Quality measure attainment with dapagliflozin plus metformin extended-release as initial combination therapy in patients with type 2 diabetes: a post hoc pooled analysis of two clinical studies

    Science.gov (United States)

    Bell, Kelly F; Katz, Arie; Sheehan, John J

    2016-01-01

    Background The use of quality measures attempts to improve safety and health outcomes and to reduce costs. In two Phase III trials in treatment-naive patients with type 2 diabetes, dapagliflozin 5 or 10 mg/d as initial combination therapy with metformin extended-release (XR) significantly reduced glycated hemoglobin (A1C) from baseline to 24 weeks and allowed higher proportions of patients to achieve A1C LDL-C) as well as vital status measures of blood pressure (BP) and body mass index (BMI). The proportion of patients achieving A1C, BP, and LDL-C individual and composite measures was assessed, as was the proportion with baseline BMI ≥25 kg/m2 who lost ≥4.5 kg. Subgroup analyses by baseline BMI were also performed. Results A total of 194 and 211 patients were treated with dapagliflozin 5- or 10-mg/d combination therapy, respectively, and 409 with metformin monotherapy. Significantly higher proportions of patients achieved A1C ≤6.5%, LDL-C <100 mg/dL across treatment groups. A higher proportion of patients with baseline BMI ≥25 kg/m2 lost ≥4.5 kg with combination therapy. Combination therapy had a more robust effect on patients with higher baseline BMI. Conclusion Initial combination therapy with dapagliflozin 5 or 10 mg/d and metformin improved quality measures relevant to clinical outcomes and diabetes care.

  14. Comparison of linear measurements and analyses taken from plaster models and three-dimensional images.

    Science.gov (United States)

    Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins

    2014-11-01

    Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and reproducibility of measurements of tooth sizes and interdental distances, and of analyses of occlusion, using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). On the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic®, Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif). For the digital images, the measurement tools used were those from the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test. The measurements obtained from the plaster models using the caliper and from the digital models using O3d software were identical.

  15. Three-dimensional lake water quality modeling: sensitivity and uncertainty analyses.

    Science.gov (United States)

    Missaghi, Shahram; Hondzo, Miki; Melching, Charles

    2013-11-01

    Two sensitivity and uncertainty analysis methods are applied to a three-dimensional coupled hydrodynamic-ecological model (ELCOM-CAEDYM) of a morphologically complex lake. The primary goals of the analyses are to increase confidence in the model predictions, identify influential model parameters, quantify the uncertainty of model predictions, and explore the spatial and temporal variability of model predictions. The influence of model parameters on four model-predicted variables (model output) and the contributions of each of the model-predicted variables to the total variation in model output are presented. Predicted water temperature, dissolved oxygen, total phosphorus, and algal biomass contributed 3, 13, 26, and 58% of the total model output variance, respectively. The fraction of variance resulting from model parameter uncertainty was calculated by two methods and used for the evaluation and ranking of the most influential model parameters. Nine of the top 10 parameters identified by the two methods agreed, but their ranks differed. Spatial and temporal changes in model uncertainty were investigated and visualized. Model uncertainty appeared to be concentrated around specific water depths and dates that corresponded to significant storm events. The results suggest that spatial and temporal variations in the predicted water quality variables are sensitive to the hydrodynamics of physical perturbations such as those caused by stream inflows generated by storm events. The sensitivity and uncertainty analyses identified the mineralization of dissolved organic carbon, the sediment phosphorus release rate, the algal metabolic loss rate, the internal phosphorus concentration, and the phosphorus uptake rate as the most influential model parameters.

  16. Processes models, environmental analyses, and cognitive architectures: quo vadis quantum probability theory?

    Science.gov (United States)

    Marewski, Julian N; Hoffrage, Ulrich

    2013-06-01

    A lot of research in cognition and decision making suffers from a lack of formalism. The quantum probability program could help to improve this situation, but we wonder whether it would provide even more added value if its presumed focus on outcome models were complemented by process models that are, ideally, informed by ecological analyses and integrated into cognitive architectures.

  17. Validity, reliability, and clinical importance of change in a 0-10 numeric rating scale measure of spasticity: a post hoc analysis of a randomized, double-blind, placebo-controlled trial.

    Science.gov (United States)

    Farrar, John T; Troxel, Andrea B; Stott, Colin; Duncombe, Paul; Jensen, Mark P

    2008-05-01

    The measurement of spasticity as a symptom of neurologic disease is an area of growing interest. Clinician-rated measures of spasticity purport to be objective but do not measure the patient's experience and may not be sensitive to changes that are meaningful to the patient. In a patient with clinical spasticity, the best judge of the perceived severity of the symptom is the patient. The aim of this study was to assess the validity and reliability, and to determine the clinical importance, of change on a 0-10 numeric rating scale (NRS) as a patient-rated measure of the perceived severity of spasticity. Using data from a large, randomized, double-blind, placebo-controlled study of an endocannabinoid system modulator in patients with multiple sclerosis-related spasticity, we evaluated the test-retest reliability and comparison-based validity of a patient-reported 0-10 NRS measure of spasticity severity against the Ashworth Scale and Spasm Frequency Scale. We estimated the level of change from baseline on the 0-10 NRS spasticity scale that constituted a clinically important difference (CID) and a minimal CID (MCID), as anchored to the patient's global impression of change (PGIC). Data from a total of 189 patients were included in this assessment (114 women, 75 men; mean age, 49.1 years). The test-retest reliability analysis found an intraclass correlation coefficient of 0.83, and there were significant correlations between change on the 0-10 NRS and change in the Spasm Frequency Scale (r = 0.63) and between change on the 0-10 NRS and the PGIC (r = 0.47), with a change of 18% constituting the MCID. The measurement of the symptom of spasticity using a patient-rated 0-10 NRS was found to be both reliable and valid. The definitions of CID and MCID will facilitate the use of appropriate responder analyses and help clinicians interpret the significance of future results.
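
The responder analyses the authors recommend reduce to computing percent change from baseline and comparing it with a threshold; a minimal sketch follows (the 18% MCID comes from the abstract; the CID value is truncated in this record, so it is deliberately not hard-coded, and the example scores are invented):

```python
# Percent change from baseline on a 0-10 NRS, with responder classification
# against the minimal clinically important difference (MCID).
def pct_improvement(baseline, followup):
    """Percent reduction in spasticity score; positive values = improvement."""
    return 100.0 * (baseline - followup) / baseline

def meets_mcid(baseline, followup, mcid_pct=18.0):
    """True if the percent improvement reaches the MCID threshold."""
    return pct_improvement(baseline, followup) >= mcid_pct
```

A responder analysis then just counts, per treatment arm, the patients for whom meets_mcid() is true.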

  18. Analyses and simulations in income frame regulation model for the network sector from 2007; Analyser og simuleringer i inntektsrammereguleringsmodellen for nettbransjen fra 2007

    Energy Technology Data Exchange (ETDEWEB)

    Askeland, Thomas Haave; Fjellstad, Bjoern

    2007-07-01

Analyses of the income frame regulation model for the network sector in Norway, introduced 1st of January 2007. The model's treatment of the norm cost is evaluated, especially the efficiency analyses carried out by a so-called Data Envelopment Analysis model. It is argued that an age-related bias may exist in the data set, and that this should and can be corrected for in the efficiency analyses by introducing an age parameter into the data set. Analyses have been made of how the calibration effects in the regulation model affect the industry's total income frame, as well as each network company's income frame. It is argued that the calibration, as presented, is not working according to its intention and should be adjusted in order to provide the sector with the reference rate of return.
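The Data Envelopment Analysis step mentioned above benchmarks each company against the most efficient peers. A toy sketch, assuming the simplest one-input, one-output case (where the DEA linear program collapses to a ratio comparison); the company names and figures are hypothetical, not from the Norwegian data set:

```python
# Minimal efficiency scoring in the spirit of a CCR DEA model, restricted to
# one input and one output so no LP solver is needed: each unit's
# output/input ratio is compared against the best ratio in the sample.

def dea_efficiency(units: dict) -> dict:
    """units maps name -> (input, output); returns efficiency in (0, 1]."""
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

companies = {                      # hypothetical (cost, delivered output) pairs
    "NetCo A": (100.0, 80.0),
    "NetCo B": (50.0, 45.0),
    "NetCo C": (80.0, 40.0),
}
scores = dea_efficiency(companies)
for name, s in sorted(scores.items()):
    print(f"{name}: {s:.2f}")
```

Real DEA with multiple inputs and outputs solves one linear program per unit; correcting for age, as the abstract proposes, would amount to adding an age variable to each unit's input set.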

  19. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2011-10-01

The article presents the fundamental aspects of linear regression as a toolbox that can be used in macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, and the concepts of homoscedasticity and heteroskedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and the interpretations that can be drawn at this level.
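The parameter estimation the article describes can be sketched with ordinary least squares on a toy, noise-free macro series (illustrative numbers, not real data):

```python
import numpy as np

# Hypothetical macro series: consumption regressed on income, with an exact
# linear relation so the least-squares estimates can be verified by eye.
income = np.array([10.0, 12.0, 15.0, 18.0, 20.0, 24.0])
consumption = 1.0 + 2.0 * income

# OLS: stack an intercept column and solve via least squares.
X = np.column_stack([np.ones_like(income), income])
beta, *_ = np.linalg.lstsq(X, consumption, rcond=None)
residuals = consumption - X @ beta

print(np.round(beta, 6))   # intercept and slope, [1, 2] by construction
# Heteroskedasticity would show up as residual spread varying with income;
# here the residuals are numerically zero.
```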

  20. Omega-3 fatty acid supplementation in adolescents with borderline personality disorder and ultra-high risk criteria for psychosis: a post hoc subgroup analysis of a double-blind, randomized controlled trial.

    Science.gov (United States)

    Amminger, G Paul; Chanen, Andrew M; Ohmann, Susanne; Klier, Claudia M; Mossaheb, Nilufar; Bechdolf, Andreas; Nelson, Barnaby; Thompson, Andrew; McGorry, Patrick D; Yung, Alison R; Schäfer, Miriam R

    2013-07-01

Objective: To investigate whether omega-3 (n-3) long-chain polyunsaturated fatty acids (LC-PUFAs) improve functioning and psychiatric symptoms in young people with borderline personality disorder (BPD) who also meet ultra-high-risk criteria for psychosis. Methods: We conducted a post hoc subgroup analysis of a double-blind, randomized controlled trial. Fifteen adolescents with BPD (mean age 16.2 years [SD 2.1]) were randomly assigned to 1.2 g/day of n-3 LC-PUFAs or placebo. The intervention period was 12 weeks. Study measures included the Positive and Negative Syndrome Scale, the Montgomery–Åsberg Depression Rating Scale, and the Global Assessment of Functioning. Side effects were documented with the Udvalg for Kliniske Undersøgelser scale. Erythrocyte fatty acids were analyzed by capillary gas chromatography. Results: At baseline, erythrocyte n-3 LC-PUFA levels correlated positively with psychosocial functioning and negatively with psychopathology. At the end of the intervention, n-3 LC-PUFAs had significantly improved functioning and reduced psychiatric symptoms compared with placebo. Side effects did not differ between groups. Conclusions: n-3 LC-PUFAs merit further exploration as a viable treatment strategy with minimal associated risk in young people with BPD. (Clinical trial registration number: NCT00396643.)

  1. Quality measure attainment with dapagliflozin plus metformin extended-release as initial combination therapy in patients with type 2 diabetes: a post hoc pooled analysis of two clinical studies

    Directory of Open Access Journals (Sweden)

    Bell KF

    2016-10-01

Kelly F Bell, Arie Katz, John J Sheehan AstraZeneca, Wilmington, DE, USA Background: The use of quality measures attempts to improve safety and health outcomes and to reduce costs. In two Phase III trials in treatment-naive patients with type 2 diabetes, dapagliflozin 5 or 10 mg/d as initial combination therapy with metformin extended-release (XR) significantly reduced glycated hemoglobin (A1C) from baseline to 24 weeks and allowed higher proportions of patients to achieve A1C <7% vs dapagliflozin or metformin monotherapy. Objective: A pooled analysis of data from these two studies assessed the effect of dapagliflozin 5 or 10 mg/d plus metformin XR (combination therapy) compared with placebo plus metformin XR (metformin monotherapy) on diabetes quality measures. Quality measures include laboratory measures of A1C and low-density lipoprotein cholesterol (LDL-C) as well as vital sign measures of blood pressure (BP) and body mass index (BMI). The proportion of patients achieving A1C, BP, and LDL-C individual and composite measures was assessed, as was the proportion with baseline BMI ≥25 kg/m2 who lost ≥4.5 kg. Subgroup analyses by baseline BMI were also performed. Results: A total of 194 and 211 patients were treated with dapagliflozin 5- or 10-mg/d combination therapy, respectively, and 409 with metformin monotherapy. Significantly higher proportions of patients achieved A1C ≤6.5%, <7%, or <8% with combination therapy vs metformin monotherapy (P<0.02). Significantly higher proportions of patients achieved BP <140/90 mmHg (P<0.02 for each dapagliflozin dose) and BP <130/80 mmHg (P<0.02, dapagliflozin 5 mg/d only) with combination therapy vs metformin monotherapy. Similar proportions (29%–33%) of patients had LDL-C <100 mg/dL across treatment groups. A higher proportion of patients with baseline BMI ≥25 kg/m2 lost ≥4.5 kg with combination therapy. Combination therapy had a more robust effect on patients with higher baseline BMI. Conclusion
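Attainment of quality measures like these reduces to counting patients below each threshold. A minimal sketch with synthetic patient records; the thresholds follow the abstract, the data do not:

```python
# Synthetic patient-level records: (a1c_percent, systolic_bp, diastolic_bp, ldl_mg_dl)
patients = [
    (6.4, 128, 78, 95),
    (6.9, 135, 85, 110),
    (7.5, 142, 88, 99),
    (8.2, 150, 95, 130),
]

def attainment(rows):
    """Fraction of patients meeting each quality-measure threshold."""
    n = len(rows)
    return {
        "A1C<7%":    sum(a1c < 7.0 for a1c, *_ in rows) / n,
        "BP<140/90": sum(s < 140 and d < 90 for _, s, d, _ in rows) / n,
        "LDL-C<100": sum(ldl < 100 for *_, ldl in rows) / n,
    }

print(attainment(patients))
```

Composite measures are the same idea with the conditions combined with `and` inside one `sum`.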

  2. Post hoc analysis of the relationship between baseline white blood cell count and survival outcome in a randomized Phase III trial of decitabine in older patients with newly diagnosed acute myeloid leukemia

    Directory of Open Access Journals (Sweden)

    Arthur C

    2015-01-01

Christopher Arthur,1 Jaroslav Cermak,2 Jacques Delaunay,3 Jirí Mayer,4 Grzegorz Mazur,5 Xavier Thomas,6 Agnieszka Wierzbowska,7 Mark M Jones,8 Erhan Berrak,8 Hagop Kantarjian9 1Department of Haematology, Royal North Shore Hospital, Sydney, NSW, Australia; 2Institute of Hematology and Blood Transfusion, Prague, Czech Republic; 3Department of Clinical Hematology, University of Nantes, Nantes, France; 4Department of Internal Medicine, Masaryk University Hospital Brno, Central European Institute of Technology, Brno, Czech Republic; 5Department of Hematology, Wroclaw Medical University, Wroclaw, Poland; 6Department of Hematology, Edouard Herriot Hospital, Lyon, France; 7Copernicus Memorial Hospital, Lodz, Poland; 8Oncology Product Creation Unit, Eisai Inc., Woodcliff Lake, NJ, USA; 9Department of Leukemia, University of Texas MD Anderson Cancer Center, Houston, TX, USA Background: In a Phase III trial, 485 patients (≥65 years) with newly diagnosed acute myeloid leukemia received decitabine 20 mg/m2 intravenously for 5 days every 4 weeks or a treatment choice (supportive care or cytarabine 20 mg/m2 subcutaneously for 10 days every 4 weeks). Materials and methods: We summarized overall and progression-free survival by baseline white blood cell count using two analyses: <1, 1–5, or >5×10⁹/L; and ≤10 or >10×10⁹/L. Results: There were 446 deaths (treatment choice, n=227; decitabine, n=219). Median overall survival was 5.0 (treatment choice) versus 7.7 months (decitabine); nominal P=0.037. Overall survival differences between white blood cell groups were not significant; hazard ratios (HRs) favored decitabine. Significant progression-free survival differences favored decitabine for the groups 1–5×10⁹/L (P=0.005, HR=0.67), greater than 5×10⁹/L (P=0.027, HR=0.71), and up to 10×10⁹/L (P=0.003, HR=0.72). Conclusion: There was a trend toward improved outcome with decitabine, regardless of baseline white blood cell count. Keywords: decitabine, acute myeloid
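Median overall survival in such trials is usually read off a Kaplan-Meier curve. A self-contained sketch of the estimator on toy follow-up data, not the trial's:

```python
import numpy as np

def km_median(times, events):
    """Kaplan-Meier median survival: smallest time with S(t) <= 0.5.

    times  - follow-up (e.g. months); events - 1 = death observed, 0 = censored.
    Returns NaN when the median is not reached.
    """
    order = np.argsort(times)
    times = np.asarray(times, dtype=float)[order]
    events = np.asarray(events)[order]
    surv = 1.0
    # One subject leaves the risk set per row once sorted by time.
    for t, e, n_risk in zip(times, events, range(len(times), 0, -1)):
        if e:                        # survival only drops at observed events
            surv *= 1.0 - 1.0 / n_risk
        if surv <= 0.5:
            return float(t)
    return float("nan")

# Ten uncensored deaths at months 1..10: S drops 10% per event, median = 5.
print(km_median(list(range(1, 11)), [1] * 10))  # → 5.0
```

Comparing two arms then amounts to computing the curve per arm; the hazard ratios in the abstract come from a Cox model, which this sketch does not attempt.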

  3. Sensitivity analyses of spatial population viability analysis models for species at risk and habitat conservation planning.

    Science.gov (United States)

    Naujokaitis-Lewis, Ilona R; Curtis, Janelle M R; Arcese, Peter; Rosenfeld, Jordan

    2009-02-01

    Population viability analysis (PVA) is an effective framework for modeling species- and habitat-recovery efforts, but uncertainty in parameter estimates and model structure can lead to unreliable predictions. Integrating complex and often uncertain information into spatial PVA models requires that comprehensive sensitivity analyses be applied to explore the influence of spatial and nonspatial parameters on model predictions. We reviewed 87 analyses of spatial demographic PVA models of plants and animals to identify common approaches to sensitivity analysis in recent publications. In contrast to best practices recommended in the broader modeling community, sensitivity analyses of spatial PVAs were typically ad hoc, inconsistent, and difficult to compare. Most studies applied local approaches to sensitivity analyses, but few varied multiple parameters simultaneously. A lack of standards for sensitivity analysis and reporting in spatial PVAs has the potential to compromise the ability to learn collectively from PVA results, accurately interpret results in cases where model relationships include nonlinearities and interactions, prioritize monitoring and management actions, and ensure conservation-planning decisions are robust to uncertainties in spatial and nonspatial parameters. Our review underscores the need to develop tools for global sensitivity analysis and apply these to spatial PVA.
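The contrast drawn above between local, one-at-a-time perturbation and global sensitivity analysis can be illustrated by sampling all parameters at once and ranking their influence on a stand-in response. The response function and parameter names below are invented for the sketch, not from any PVA:

```python
import numpy as np

# Crude global sensitivity screen: sample every parameter simultaneously
# and rank parameters by |correlation| with the model output.
rng = np.random.default_rng(42)
n = 2000
growth = rng.uniform(0.0, 1.0, n)      # hypothetical vital rate
capacity = rng.uniform(0.0, 1.0, n)    # hypothetical habitat parameter

# Stand-in model output, constructed so that `growth` dominates (10x vs 0.1x).
response = 10.0 * growth + 0.1 * capacity

sens_growth = abs(np.corrcoef(growth, response)[0, 1])
sens_capacity = abs(np.corrcoef(capacity, response)[0, 1])
print(sens_growth > sens_capacity)  # → True
```

A local analysis would perturb one parameter around a nominal point; the global version above explores the whole parameter space and, unlike one-at-a-time scans, also picks up interactions when a variance-based index (e.g. Sobol) replaces the simple correlation.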

  4. Effect of glatiramer acetate three-times weekly on the evolution of new, active multiple sclerosis lesions into T1-hypointense "black holes": a post hoc magnetic resonance imaging analysis.

    Science.gov (United States)

    Zivadinov, Robert; Dwyer, Michael; Barkay, Hadas; Steinerman, Joshua R; Knappertz, Volker; Khan, Omar

    2015-03-01

    Conversion of active lesions to black holes has been associated with disability progression in subjects with relapsing-remitting multiple sclerosis (RRMS) and represents a complementary approach to evaluating clinical efficacy. The objective of this study was to assess the conversion of new active magnetic resonance imaging (MRI) lesions, identified 6 months after initiating treatment with glatiramer acetate 40 mg/mL three-times weekly (GA40) or placebo, to T1-hypointense black holes in subjects with RRMS. Subjects received GA40 (n = 943) or placebo (n = 461) for 12 months. MRI was obtained at baseline and Months 6 and 12. New lesions were defined as either gadolinium-enhancing T1 or new T2 lesions at Month 6 that were not present at baseline. The adjusted mean numbers of new active lesions at Month 6 converting to black holes at Month 12 were analyzed using a negative binomial model; adjusted proportions of new active lesions at Month 6 converting to black holes at Month 12 were analyzed using a logistic regression model. Of 1,292 subjects with complete MRI data, 433 (50.3 %) GA-treated and 247 (57.2 %) placebo-treated subjects developed new lesions at Month 6. Compared with placebo, GA40 significantly reduced the mean number (0.31 versus 0.45; P = .0258) and proportion (15.8 versus 19.6 %; P = .006) of new lesions converting to black holes. GA significantly reduced conversion of new active lesions to black holes, highlighting the ability of GA40 to prevent tissue damage in RRMS.
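The study compared adjusted proportions via logistic regression; as a deliberately simpler stand-in, the same kind of comparison can be sketched with an unadjusted two-proportion z-test. The denominators below are invented, only the two percentages echo the abstract:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test comparing two independent proportions (unadjusted)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))   # two-sided normal tail
    return z, p_value

# 15.8% vs 19.6% conversion, with hypothetical denominators of 1000 lesions each.
z, p = two_proportion_z(158, 1000, 196, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

The trial's actual analysis adjusts for covariates and handles within-subject lesion counts (negative binomial and logistic models), which a plain z-test does not.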

  5. Design evaluation and optimisation in crossover pharmacokinetic studies analysed by nonlinear mixed effects models

    OpenAIRE

    Nguyen, Thu Thuy; Bazzoli, Caroline; Mentré, France

    2012-01-01

Bioequivalence or interaction trials are commonly studied in crossover design and can be analysed by nonlinear mixed effects models as an alternative to the noncompartmental approach. We propose an extension of the population Fisher information matrix in nonlinear mixed effects models to design crossover pharmacokinetic trials, using a linearisation of the model around the random effect expectation, including within-subject variability and discrete covariates fixed or chan...

  6. Analysing outsourcing policies in an asset management context: a six-stage model

    OpenAIRE

    Schoenmaker, R.; Verlaan, J.G.

    2013-01-01

Asset managers of civil infrastructure are increasingly outsourcing their maintenance. Whereas maintenance is a cyclic process, decisions to outsource are often project-based, confusing the discussion on the degree of outsourcing. This paper presents a six-stage model that facilitates the top-down discussion for analysing the degree of outsourcing of maintenance. The model is based on the cyclic nature of maintenance. The six-stage model can: (1) give clear statements about the pre...

  7. Efficacy and Safety of Duloxetine in Patients with Chronic Low Back Pain Who Used versus Did Not Use Concomitant Nonsteroidal Anti-Inflammatory Drugs or Acetaminophen: A Post Hoc Pooled Analysis of 2 Randomized, Placebo-Controlled Trials

    Directory of Open Access Journals (Sweden)

    Vladimir Skljarevski

    2012-01-01

This subgroup analysis assessed the efficacy of duloxetine in patients with chronic low back pain (CLBP) who did or did not use concomitant nonsteroidal anti-inflammatory drugs (NSAIDs) or acetaminophen (APAP). Data were pooled from two 13-week randomized trials in patients with CLBP who were stratified according to NSAID/APAP use at baseline: duloxetine NSAID/APAP user (n=137), placebo NSAID/APAP user (n=82), duloxetine NSAID/APAP nonuser (n=206), and placebo NSAID/APAP nonuser (n=156). NSAID/APAP users were those patients who took NSAID/APAP for at least 14 days per month during the 3 months prior to study entry. An analysis of covariance model that included therapy, study, baseline NSAID/APAP use (yes/no), and therapy-by-NSAID/APAP subgroup interaction was used to assess the efficacy. The treatment-by-NSAID/APAP use interaction was not statistically significant (P=0.31), suggesting no substantial evidence of differential efficacy for duloxetine over placebo on pain reduction or improvement in physical function between concomitant NSAID/APAP users and nonusers.
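The interaction test described above hinges on the coefficient of a therapy-by-subgroup product term. A noise-free sketch with constructed data (not the duloxetine trials), fitted by least squares so the coefficients can be read off exactly:

```python
import numpy as np

# Fit y = b0 + b1*therapy + b2*subgroup + b3*(therapy*subgroup); b3 is the
# interaction. Two patients per cell of the 2x2 design.
therapy = np.array([0, 0, 0, 0, 1, 1, 1, 1], dtype=float)   # 1 = active drug
user = np.array([0, 0, 1, 1, 0, 0, 1, 1], dtype=float)      # 1 = NSAID/APAP user

# Constructed with NO interaction: treatment effect is 2.0 in both subgroups.
pain_change = 1.0 + 2.0 * therapy + 0.5 * user

X = np.column_stack([np.ones_like(therapy), therapy, user, therapy * user])
beta, *_ = np.linalg.lstsq(X, pain_change, rcond=None)
print(np.round(beta, 6))   # b3 = 0: same treatment effect in users and nonusers
```

In the real ANCOVA, study and baseline covariates join the design matrix and the interaction coefficient gets a P-value; a b3 indistinguishable from zero corresponds to the abstract's non-significant interaction.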

  8. Geographical variation of sporadic Legionnaires' disease analysed in a grid model

    DEFF Research Database (Denmark)

    Rudbeck, M.; Jepsen, Martin Rudbeck; Sonne, I.B.;

    2010-01-01

The aim was to analyse variation in the incidence of sporadic Legionnaires' disease in a geographical information system in three time periods (1990-2005) by the application of a grid model, and to assess the model's validity by analysing variation according to grid position. Coordinates ... clusters. Four cells had excess incidence in all three time periods. The analysis in 25 different grid positions indicated a low risk of overlooking cells with excess incidence in a random grid. The coefficient of variation ranged from 0.08 to 0.11, independent of the threshold. By application of a random ...

  9. X-ray CT analyses, models and numerical simulations: a comparison with petrophysical analyses in an experimental CO2 study

    Science.gov (United States)

    Henkel, Steven; Pudlo, Dieter; Enzmann, Frieder; Reitenbach, Viktor; Albrecht, Daniel; Ganzer, Leonhard; Gaupp, Reinhard

    2016-06-01

An essential part of the collaborative research project H2STORE (hydrogen to store), which is funded by the German government, was a comparison of various analytical methods for characterizing reservoir sandstones from different stratigraphic units. In this context Permian, Triassic and Tertiary reservoir sandstones were analysed. Rock core materials, provided by RWE Gasspeicher GmbH (Dortmund, Germany), GDF Suez E&P Deutschland GmbH (Lingen, Germany), E.ON Gas Storage GmbH (Essen, Germany) and RAG Rohöl-Aufsuchungs Aktiengesellschaft (Vienna, Austria), were processed by different laboratory techniques; thin sections were prepared, rock fragments were crushed and cubes of 1 cm edge length and plugs 3 to 5 cm in length with a diameter of about 2.5 cm were sawn from macroscopically homogeneous cores. With this prepared sample material, polarized light microscopy and scanning electron microscopy, coupled with image analyses, specific surface area measurements (after Brunauer, Emmett and Teller, 1938; BET), He-porosity and N2-permeability measurements and high-resolution microcomputer tomography (μ-CT), which were used for numerical simulations, were applied. All these methods were applied to largely the same sample material before, and for selected Permian sandstones also after, static CO2 experiments under reservoir conditions. A major concern in comparing the results of these methods is an appraisal of the reliability of the given porosity, permeability and mineral-specific reactive (inner) surface area data. The CO2 experiments modified the petrophysical as well as the mineralogical/geochemical rock properties. These changes are detectable by all applied analytical methods. Nevertheless, a major outcome of the high-resolution μ-CT analyses and the subsequent numerical simulations was that quite similar data sets and data interpretations were obtained by the different petrophysical standard methods. Moreover, the μ-CT analyses are not only time saving, but also non
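One quantity the μ-CT workflow yields is porosity from a segmented voxel volume. A minimal sketch on a synthetic binary volume, not H2STORE data:

```python
import numpy as np

# After binarisation of a micro-CT scan, porosity is simply the
# pore-voxel fraction of the segmented volume.
volume = np.zeros((4, 4, 4), dtype=np.uint8)   # 0 = mineral matrix
volume[:2, :2, :] = 1                          # 1 = pore space (16 of 64 voxels)

porosity = volume.mean()                       # pore voxels / total voxels
print(porosity)  # → 0.25
```

He-porosimetry measures connected porosity only, so a μ-CT estimate like this can legitimately differ from the laboratory value, which is part of the reliability comparison the abstract describes.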

  10. RooStatsCms: a tool for analyses modelling, combination and statistical studies

    Science.gov (United States)

    Piparo, D.; Schott, G.; Quast, G.

    2009-12-01

    The RooStatsCms (RSC) software framework allows analysis modelling and combination, statistical studies together with the access to sophisticated graphics routines for results visualisation. The goal of the project is to complement the existing analyses by means of their combination and accurate statistical studies.

  11. RooStatsCms: a tool for analyses modelling, combination and statistical studies

    CERN Document Server

Piparo, D; Quast, G

    2008-01-01

    The RooStatsCms (RSC) software framework allows analysis modelling and combination, statistical studies together with the access to sophisticated graphics routines for results visualisation. The goal of the project is to complement the existing analyses by means of their combination and accurate statistical studies.

  12. Combined Task and Physical Demands Analyses towards a Comprehensive Human Work Model

    Science.gov (United States)

    2014-09-01

velocities, and accelerations over time for each postural sequence. Neck strain measures derived from biomechanical analyses of these postural ... and whole missions. The result is a comprehensive model of tasks and associated physical demands from which one can estimate the accumulative neck ... Griffon Helicopter aircrew (Pilots and Flight Engineers) reported neck pain, particularly when wearing Night Vision Goggles (NVGs) (Forde et al., 2011).

  13. Dutch AG-MEMOD model; A tool to analyse the agri-food sector

    NARCIS (Netherlands)

    Leeuwen, van M.G.A.; Tabeau, A.A.

    2005-01-01

    Agricultural policies in the European Union (EU) have a history of continuous reform. AG-MEMOD, acronym for Agricultural sector in the Member states and EU: econometric modelling for projections and analysis of EU policies on agriculture, forestry and the environment, provides a system for analysing

  14. Supply Chain Modeling for Fluorspar and Hydrofluoric Acid and Implications for Further Analyses

    Science.gov (United States)

    2015-04-01

analysis. Subject terms: supply chain, model, fluorspar, hydrofluoric acid, shortfall, substitution, Defense Logistics Agency, National Defense ... IDA Document D-5379 (Log: H 15-000099), Institute for Defense Analyses, 4850 Mark Center Drive, Alexandria, Virginia 22311-1882. D. Sean Barnett and Jerome Bracken, Supply Chain Modeling for Fluorspar and Hydrofluoric Acid and Implications for Further Analyses.

  15. Wavelet-based spatial comparison technique for analysing and evaluating two-dimensional geophysical model fields

    Directory of Open Access Journals (Sweden)

    S. Saux Picart

    2011-11-01

Complex numerical models of the Earth's environment, based around 3-D or 4-D time and space domains, are routinely used for applications including climate predictions, weather forecasts, fishery management and environmental impact assessments. Quantitatively assessing the ability of these models to accurately reproduce geographical patterns at a range of spatial and temporal scales has always been a difficult problem to address. However, this is crucial if we are to rely on these models for decision making. Satellite data are potentially the only observational dataset able to cover the large spatial domains analysed by many types of geophysical models. Consequently, optical-wavelength satellite data are beginning to be used to evaluate model hindcast fields of terrestrial and marine environments. However, these satellite data invariably contain regions of occluded or missing data due to clouds, further complicating or impacting on any comparisons with the model. A methodology has recently been developed to evaluate precipitation forecasts using radar observations. It allows model skill to be evaluated at a range of spatial scales and rain intensities. Here we extend the original method to allow its generic application to a range of continuous and discontinuous geophysical data fields, thereby allowing its use with optical satellite data. This is achieved through two major improvements to the original method: (i) all thresholds are determined based on the statistical distribution of the input data, so no a priori knowledge about the model fields being analysed is required, and (ii) occluded data can be analysed without impacting on the metric results. The method can be used to assess a model's ability to simulate geographical patterns over a range of spatial scales. We illustrate how the method provides a compact and concise way of visualising the degree of agreement between spatial features in two datasets. 
The application of the new method, its
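The two methodological improvements listed above, data-driven thresholds and tolerance of occluded pixels, can be sketched on a synthetic field:

```python
import numpy as np

# (i) thresholds taken from the statistical distribution of the field itself,
# so no a priori knowledge of the data is needed, and (ii) occluded
# (cloud-masked) pixels excluded so they cannot distort the thresholds.
# The field below is synthetic, not satellite data.

field = np.arange(100, dtype=float).reshape(10, 10)
occluded = np.zeros_like(field, dtype=bool)
occluded[0, :] = True                       # pretend the first row is cloud

valid = field[~occluded]                    # drop occluded pixels entirely
thresholds = np.percentile(valid, [25, 50, 75])   # data-driven category bounds
print(thresholds)
```

The full method then compares the thresholded model and observation fields at a cascade of spatial scales (the wavelet step), which this fragment does not reproduce.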

  16. Ozone therapy as an adjuvant for endodontic protocols: microbiological ex vivo study and cytotoxicity analyses

    Science.gov (United States)

    NOGALES, Carlos Goes; FERREIRA, Marina Beloti; MONTEMOR, Antonio Fernando; RODRIGUES, Maria Filomena de Andrade; Lage-MARQUES, José Luiz; ANTONIAZZI, João Humberto

    2016-01-01

ABSTRACT Objectives This study evaluated the antimicrobial efficacy of ozone therapy in teeth contaminated with Pseudomonas aeruginosa, Enterococcus faecalis, and Staphylococcus aureus using a mono-species biofilm model. Parallel to this, the study aimed to evaluate the cytotoxicity of ozone for human gingival fibroblasts. Material and Methods: One hundred and eighty single-root teeth were contaminated with a mono-species biofilm of Enterococcus faecalis, Pseudomonas aeruginosa, and Staphylococcus aureus. Groups were formed: Group I – control; Group II – standard protocol; Group III – standard protocol + ozone gas at 40 µg/mL; and Group IV – standard protocol + aqueous ozone at 8 µg/mL. In parallel, human gingival fibroblasts were submitted to the MTT test. Cells were plated, then ozone was applied as follows: Group I (control) – broth medium; Group II – aqueous ozone at 2 µg/mL; Group III – aqueous ozone at 5 µg/mL; and Group IV – aqueous ozone at 8 µg/mL. Data were submitted to the Kruskal-Wallis test and Bonferroni post hoc analyses to assess microbiology and cytotoxicity, respectively (p<0.05). Results The results revealed antimicrobial efficacy in Group IV, with no CFU count. The cytotoxicity assay showed Groups III and IV to be the most aggressive, providing a decrease in cell viability at hour 0 from 100% to 77.3% and 68.6%, respectively. Such a decrease in cell viability was reverted, and after 72 hours Groups III and IV provided the greatest increase in cell viability, being statistically different from Groups I and II. Conclusion According to the applied methodology and the limitations of this study, it was possible to conclude that ozone therapy improved the decontamination of the root canal ex vivo. Ozone was toxic to the cells on first contact, but cell viability was recovered. Thus, these findings suggest that ozone might be useful to improve root canal results. PMID:28076466
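The Kruskal-Wallis test used for the microbiology data is rank-based. A self-contained sketch of the H statistic (without the tie-correction factor), on synthetic counts rather than the study's CFU data:

```python
import numpy as np

def average_ranks(values: np.ndarray) -> np.ndarray:
    """Ranks starting at 1, with ties sharing their average rank."""
    order = np.argsort(values, kind="stable")
    ranks = np.empty(len(values))
    ranks[order] = np.arange(1, len(values) + 1)
    for v in np.unique(values):      # assign tied values their mean rank
        mask = values == v
        ranks[mask] = ranks[mask].mean()
    return ranks

def kruskal_wallis_h(*groups):
    """H statistic of the Kruskal-Wallis rank test (no tie correction)."""
    pooled = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    n_total = len(pooled)
    ranks = average_ranks(pooled)
    h, start = 0.0, 0
    for g in groups:
        r_mean = ranks[start:start + len(g)].mean()
        h += len(g) * r_mean ** 2
        start += len(g)
    return 12.0 / (n_total * (n_total + 1)) * h - 3.0 * (n_total + 1)

# Clearly separated synthetic groups give a large H; identical groups give 0.
print(round(kruskal_wallis_h([1, 2, 3], [4, 5, 6], [7, 8, 9]), 3))
print(round(abs(kruskal_wallis_h([5, 5, 5], [5, 5, 5])), 3))
```

H is then compared with a chi-squared distribution with (number of groups − 1) degrees of freedom to obtain the p-value quoted in studies like this one.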

  17. A Model for Integrating Fixed-, Random-, and Mixed-Effects Meta-Analyses into Structural Equation Modeling

    Science.gov (United States)

    Cheung, Mike W.-L.

    2008-01-01

    Meta-analysis and structural equation modeling (SEM) are two important statistical methods in the behavioral, social, and medical sciences. They are generally treated as two unrelated topics in the literature. The present article proposes a model to integrate fixed-, random-, and mixed-effects meta-analyses into the SEM framework. By applying an…
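The fixed- and random-effects pooling that the article integrates into SEM can be sketched outside the SEM framework with inverse-variance weights and a DerSimonian-Laird between-study variance. The effect sizes and variances below are illustrative:

```python
import numpy as np

def meta_analysis(effects, variances):
    """Fixed-effect pooling plus a DerSimonian-Laird random-effects step.

    Returns (fixed_pooled, tau2, random_pooled) for per-study effect sizes
    and their sampling variances.
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)

    w = 1.0 / variances                                  # inverse-variance weights
    fixed = (w * effects).sum() / w.sum()

    q = (w * (effects - fixed) ** 2).sum()               # homogeneity statistic
    df = len(effects) - 1
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - df) / c)                        # between-study variance

    w_re = 1.0 / (variances + tau2)                      # random-effects weights
    random = (w_re * effects).sum() / w_re.sum()
    return fixed, tau2, random

# Perfectly homogeneous studies: tau2 = 0 and both models agree.
fixed, tau2, random = meta_analysis([0.4, 0.4, 0.4], [0.01, 0.02, 0.04])
print(fixed, tau2, random)
```

The mixed-effects case adds study-level moderators to the random-effects model, which in the article's framework becomes a structural regression inside the SEM.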

  18. Stellar abundance analyses in the light of 3D hydrodynamical model atmospheres

    CERN Document Server

    Asplund, M

    2003-01-01

    I describe recent progress in terms of 3D hydrodynamical model atmospheres and 3D line formation and their applications to stellar abundance analyses of late-type stars. Such 3D studies remove the free parameters inherent in classical 1D investigations (mixing length parameters, macro- and microturbulence) yet are highly successful in reproducing a large arsenal of observational constraints such as detailed line shapes and asymmetries. Their potential for abundance analyses is illustrated by discussing the derived oxygen abundances in the Sun and in metal-poor stars, where they seem to resolve long-standing problems as well as significantly alter the inferred conclusions.

  19. WOMBAT: a tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML).

    Science.gov (United States)

    Meyer, Karin

    2007-11-01

WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear, mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs, suitable for large scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual, and a set of worked examples, and can be downloaded free of charge from (http://agbu.une.edu.au/~kmeyer/wombat.html).

  20. Application of an approximate vectorial diffraction model to analysing diffractive micro-optical elements

    Institute of Scientific and Technical Information of China (English)

    Niu Chun-Hui; Li Zhi-Yuan; Ye Jia-Sheng; Gu Ben-Yuan

    2005-01-01

Scalar diffraction theory, although simple and efficient, is too rough for analysing diffractive micro-optical elements. Rigorous vectorial diffraction theory requires extensive numerical efforts, and is not a convenient design tool. In this paper we employ a simple approximate vectorial diffraction model which combines the principle of the scalar diffraction theory with an approximate local field model to analyse the diffraction of optical waves by some typical two-dimensional diffractive micro-optical elements. The TE and TM polarization modes are both considered. We have found that the approximate vectorial diffraction model can agree much better with the rigorous electromagnetic simulation results than the scalar diffraction theory for these micro-optical elements.

  1. Analysing and combining atmospheric general circulation model simulations forced by prescribed SST. Tropical response

    Energy Technology Data Exchange (ETDEWEB)

    Moron, V. [Universite' de Provence, UFR des sciences geographiques et de l' amenagement, Aix-en-Provence (France); Navarra, A. [Istituto Nazionale di Geofisica e Vulcanologia, Bologna (Italy); Ward, M. N. [University of Oklahoma, Cooperative Institute for Mesoscale Meteorological Studies, Norman OK (United States); Foland, C. K. [Hadley Center for Climate Prediction and Research, Meteorological Office, Bracknell (United Kingdom); Friederichs, P. [Meteorologisches Institute des Universitaet Bonn, Bonn (Germany); Maynard, K.; Polcher, J. [Paris Universite' Pierre et Marie Curie, Paris (France). Centre Nationale de la Recherche Scientifique, Laboratoire de Meteorologie Dynamique, Paris

    2001-08-01

The ECHAM 3.2 (T21), ECHAM (T30) and LMD (version 6, grid-point resolution with 96 longitudes x 72 latitudes) atmospheric general circulation models were integrated through the period 1961 to 1993, forced with the same observed Sea Surface Temperatures (SSTs) as compiled at the Hadley Centre. Three runs were made for each model starting from different initial conditions. The large-scale tropical inter-annual variability is analysed to give a picture of the skill of each model and of combinations of the three models. To analyse the similarity of model response averaged over the same key regions, several widely-used indices are calculated: Southern Oscillation Index (SOI), large-scale wind shear indices of the boreal summer monsoon in Asia and West Africa, and rainfall indices for NE Brazil, the Sahel and India. Even for the indices where internal noise is large, some years are consistent amongst all the runs, suggesting inter-annual variability of the strength of SST forcing. Averaging the ensemble mean of the three models (the super-ensemble mean) yields improved skill. When each run is weighted according to its skill, taking three runs from different models instead of three runs of the same model improves the mean skill. There is also some indication that one run of a given model could be better than another, suggesting that persistent anomalies could change its sensitivity to SST. The index approach lacks flexibility to assess whether a model's response to SST has been geographically displaced. Attention can instead focus on the first mode in the global tropics, found through singular value decomposition analysis, which is clearly related to El Nino/Southern Oscillation (ENSO) in all seasons. The Observed-Model and Model-Model analyses lead to almost the same patterns, suggesting that the dominant pattern of model response is also the most skilful mode. Seasonal modulation of both skill and spatial patterns (both model and observed) clearly exists with highest skill
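The skill-weighted combination of runs described above can be sketched with inverse-MSE weights on synthetic series; weighting by a correlation-based skill score, as in the study, follows the same pattern:

```python
import numpy as np

# Plain super-ensemble mean versus weighting each run by its skill
# (inverse mean-squared error). Series are synthetic stand-ins for an
# observed climate index and two model runs.
obs = np.arange(10, dtype=float)
run_good = obs + 0.1          # small constant bias -> high skill
run_poor = obs + 5.0          # large bias -> low skill

def rmse(pred, truth):
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

simple_mean = (run_good + run_poor) / 2.0

weights = np.array([1.0 / rmse(run_good, obs) ** 2,
                    1.0 / rmse(run_poor, obs) ** 2])
weighted_mean = (weights[0] * run_good + weights[1] * run_poor) / weights.sum()

print(rmse(simple_mean, obs), rmse(weighted_mean, obs))
```

The skill-weighted combination stays close to the better run instead of splitting the difference, which is the effect the abstract reports when weighting runs from different models.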

  2. Analysing, Interpreting, and Testing the Invariance of the Actor-Partner Interdependence Model

    Directory of Open Access Journals (Sweden)

    Gareau, Alexandre

    2016-09-01

Although in recent years researchers have begun to utilize dyadic data analyses such as the actor-partner interdependence model (APIM), certain limitations to the applicability of these models still exist. Given the complexity of APIMs, most researchers will often use observed scores to estimate the model's parameters, which can significantly limit and underestimate statistical results. The aim of this article is to highlight the importance of conducting a confirmatory factor analysis (CFA) of equivalent constructs between dyad members (i.e., measurement equivalence/invariance; ME/I). Different steps for merging CFA and APIM procedures will be detailed in order to shed light on new and integrative methods.

  3. Distinguishing Mediational Models and Analyses in Clinical Psychology: Atemporal Associations Do Not Imply Causation.

    Science.gov (United States)

    Winer, E Samuel; Cervone, Daniel; Bryant, Jessica; McKinney, Cliff; Liu, Richard T; Nadorff, Michael R

    2016-09-01

    A popular way to attempt to discern causality in clinical psychology is through mediation analysis. However, mediation analysis is sometimes applied to research questions in clinical psychology when inferring causality is impossible. This practice may soon increase with new, readily available, and easy-to-use statistical advances. Thus, we here provide a heuristic to remind clinical psychological scientists of the assumptions of mediation analyses. We describe recent statistical advances and unpack assumptions of causality in mediation, underscoring the importance of time in understanding mediational hypotheses and analyses in clinical psychology. Example analyses demonstrate that statistical mediation can occur despite theoretical mediation being improbable. We propose a delineation of mediational effects derived from cross-sectional designs into the terms temporal and atemporal associations to emphasize time in conceptualizing process models in clinical psychology. The general implications for mediational hypotheses and the temporal frameworks from within which they may be drawn are discussed. © 2016 Wiley Periodicals, Inc.
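
The abstract's warning, that statistical mediation can occur even when theoretical mediation is improbable, is easy to demonstrate with simulated cross-sectional data. The sketch below (all parameter values invented, not taken from the paper) generates data in which the candidate mediator M has no causal effect on Y, yet the usual product-of-coefficients "indirect effect" comes out clearly nonzero:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulate a NON-mediational truth: X causes M and Y separately,
# and a shared nuisance factor U confounds the M-Y association.
X = rng.normal(size=n)
U = rng.normal(size=n)
M = 0.6 * X + 0.7 * U + rng.normal(size=n)
Y = 0.5 * X + 0.7 * U + rng.normal(size=n)

def ols(y, *xs):
    """Least-squares coefficients (intercept first) via lstsq."""
    A = np.column_stack([np.ones_like(y), *xs])
    return np.linalg.lstsq(A, y, rcond=None)[0]

a = ols(M, X)[1]        # X -> M path
b = ols(Y, X, M)[2]     # M -> Y path, adjusting for X
indirect = a * b        # "mediated" effect: nonzero despite no causal mediation
```

Running this yields a substantial indirect effect purely from the atemporal X-M-Y associations, which is exactly the pattern the authors propose to label an atemporal association rather than evidence of a causal process.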

  4. FluxExplorer: A general platform for modeling and analyses of metabolic networks based on stoichiometry

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Stoichiometry-based analyses of metabolic networks have aroused significant interest among systems biology researchers in recent years. It is necessary to develop a more convenient modeling platform on which users can reconstruct their network models using completely graphical operations, and explore them with powerful analysis modules to gain a better understanding of the properties of metabolic systems. Herein, an in silico platform, FluxExplorer, for metabolic modeling and analyses based on stoichiometry has been developed as a publicly available tool for systems biology research. This platform integrates various analytic approaches, including flux balance analysis (FBA), minimization of metabolic adjustment, extreme pathways analysis, shadow price analysis, and singular value decomposition, providing a thorough characterization of the metabolic system. Using a graphical modeling process, metabolic networks can be reconstructed and modified intuitively and conveniently. Inconsistencies of a model with respect to the FBA principles can be identified automatically. In addition, this platform supports the systems biology markup language (SBML). FluxExplorer has been applied to rebuild a metabolic network in mammalian mitochondria, producing meaningful results. Generally, it is a powerful and very convenient tool for metabolic network modeling and analysis.
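
Flux balance analysis, the first approach listed, reduces to a linear program over the stoichiometric matrix: maximize an objective flux subject to steady state (S v = 0) and flux bounds. A minimal sketch on a hypothetical three-metabolite toy network (this is an illustration of the FBA principle, not FluxExplorer's own code):

```python
import numpy as np
from scipy.optimize import linprog

# Toy network A -> B -> C with exchange reactions for A (uptake) and C (secretion).
# Stoichiometric matrix S (metabolites x reactions):
# columns: v1 (A->B), v2 (B->C), b1 (A uptake), b2 (C secretion)
S = np.array([
    [-1,  0,  1,  0],   # metabolite A
    [ 1, -1,  0,  0],   # metabolite B
    [ 0,  1,  0, -1],   # metabolite C
])

# All reactions irreversible; uptake of A capped at 10 flux units.
bounds = [(0, 1000), (0, 1000), (0, 10), (0, 1000)]

# Maximize secretion b2 (index 3): linprog minimizes, so negate the objective.
c = np.zeros(4)
c[3] = -1.0

res = linprog(c, A_eq=S, b_eq=np.zeros(3), bounds=bounds, method="highs")
optimal_flux = -res.fun   # limited by the A-uptake bound
```

The optimum is pinned at the uptake bound, illustrating how FBA identifies the constraint that limits the objective flux.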

  5. Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts.

    Energy Technology Data Exchange (ETDEWEB)

    Sevougian, S. David; Freeze, Geoffrey A.; Gardner, William Payton; Hammond, Glenn Edward; Mariner, Paul

    2014-09-01

    directly, rather than through simplified abstractions. It also allows for complex representations of the source term, e.g., the explicit representation of many individual waste packages (i.e., meter-scale detail of an entire waste emplacement drift). This report fulfills the Generic Disposal System Analysis Work Package Level 3 Milestone - Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts (M3FT-14SN0808032).

  6. Calibration of back-analysed model parameters for landslides using classification statistics

    Science.gov (United States)

    Cepeda, Jose; Henderson, Laura

    2016-04-01

    Back-analyses are useful for characterizing the geomorphological and mechanical processes and parameters involved in the initiation and propagation of landslides. These processes and parameters can in turn be used for improving forecasts of scenarios and hazard assessments in areas or sites which have similar settings to the back-analysed cases. The selection of the modeled landslide that produces the best agreement with the actual observations requires running a number of simulations by varying the type of model and the sets of input parameters. The comparison of the simulated and observed parameters is normally performed by visual comparison of geomorphological or dynamic variables (e.g., geometry of scarp and final deposit, maximum velocities and depths). Over the past six years, a method developed by NGI has been used by some researchers for a more objective selection of back-analysed input model parameters. That method includes an adaptation of the equations for calculation of classifiers, and a comparative evaluation of classifiers of the selected parameter sets in the Receiver Operating Characteristic (ROC) space. This contribution presents an updating of the methodology. The proposed procedure allows comparisons between two or more "clouds" of classifiers. Each cloud represents the performance of a model over a range of input parameters (e.g., samples of probability distributions). Considering the fact that each cloud does not necessarily produce a full ROC curve, two new normalised ROC-space parameters are introduced for characterizing the performance of each cloud. The first parameter is representative of the cloud position relative to the point of perfect classification. The second parameter characterizes the position of the cloud relative to the theoretically perfect ROC curve and the no-discrimination line. The methodology is illustrated with back-analyses of slope stability and landslide runout of selected case studies. 
This research activity has been
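
The two normalised ROC-space parameters are described only qualitatively above. One plausible formalisation, the mean distance of a classifier cloud to the perfect-classification corner (FPR = 0, TPR = 1) and its mean signed distance above the no-discrimination diagonal, can be sketched as follows (the exact formulas are an assumption for illustration, not taken from the cited method):

```python
import numpy as np

def roc_cloud_metrics(tpr, fpr):
    """Summarize a 'cloud' of classifiers in ROC space.

    Returns (d_perfect, d_diag): mean distance of the cloud to the
    perfect-classification point (FPR=0, TPR=1), normalised by sqrt(2),
    and mean signed perpendicular distance above the diagonal TPR = FPR
    (positive = better than chance). Hypothetical formalisation.
    """
    tpr = np.asarray(tpr, float)
    fpr = np.asarray(fpr, float)
    d_perfect = np.mean(np.hypot(fpr - 0.0, tpr - 1.0)) / np.sqrt(2)
    d_diag = np.mean((tpr - fpr) / np.sqrt(2))
    return d_perfect, d_diag

# A cloud of classifiers from three back-analysis parameter sets (invented)
d_p, d_d = roc_cloud_metrics(tpr=[0.9, 0.85, 0.8], fpr=[0.1, 0.2, 0.15])
```

Smaller `d_perfect` and larger `d_diag` both indicate a better-performing cloud, so two model/parameter clouds can be ranked even when neither traces a full ROC curve.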

  7. Volvo Logistics Corporation Returnable Packaging System : a model for analysing cost savings when switching packaging system

    OpenAIRE

    2008-01-01

    This thesis is a study for analysing costs affected by packaging in a producing industry. The purpose is to develop a model that will calculate and present possible cost savings for the customer by using Volvo Logistics Corporations, VLC’s, returnable packaging instead of other packaging solutions. The thesis is based on qualitative data gained from both theoretical and empirical studies. The methodology for gaining information has been to study theoretical sources such as course literature a...

  8. Computational model for supporting SHM systems design: Damage identification via numerical analyses

    Science.gov (United States)

    Sartorato, Murilo; de Medeiros, Ricardo; Vandepitte, Dirk; Tita, Volnei

    2017-02-01

    This work presents a computational model to simulate thin structures monitored by piezoelectric sensors in order to support the design of SHM systems, which use vibration based methods. Thus, a new shell finite element model was proposed and implemented via a User ELement subroutine (UEL) into the commercial package ABAQUS™. This model was based on a modified First Order Shear Theory (FOST) for piezoelectric composite laminates. After that, damaged cantilever beams with two piezoelectric sensors in different positions were investigated by using experimental analyses and the proposed computational model. A maximum difference in the magnitude of the FRFs between numerical and experimental analyses of 7.45% was found near the resonance regions. For damage identification, different levels of damage severity were evaluated by seven damage metrics, including one proposed by the present authors. Numerical and experimental damage metric values were compared, showing a good correlation in terms of tendency. Finally, based on comparisons of numerical and experimental results, the potentials and limitations of the proposed computational model for supporting SHM system design are discussed.

  9. Model error analyses of photochemistry mechanisms using the BEATBOX/BOXMOX data assimilation toy model

    Science.gov (United States)

    Knote, C. J.; Eckl, M.; Barré, J.; Emmons, L. K.

    2016-12-01

    Simplified descriptions of photochemistry in the atmosphere ('photochemical mechanisms') necessary to reduce the computational burden of a model simulation contribute significantly to the overall uncertainty of an air quality model. Understanding how the photochemical mechanism contributes to observed model errors through examination of results of the complete model system is next to impossible due to cancellation and amplification effects amongst the tightly interconnected model components. Here we present BEATBOX, a novel method to evaluate photochemical mechanisms using the underlying chemistry box model BOXMOX. With BOXMOX we can rapidly initialize various mechanisms (e.g. MOZART, RACM, CBMZ, MCM) with homogenized observations (e.g. from field campaigns) and conduct idealized 'chemistry in a jar' simulations under controlled conditions. BEATBOX is a data assimilation toy model built upon BOXMOX which allows one to simulate the effects of assimilating observations (e.g., CO, NO2, O3) into these simulations. In this presentation we show how we use the Master Chemical Mechanism (MCM, U Leeds) as a benchmark for more simplified mechanisms like MOZART, use BEATBOX to homogenize the chemical environment, and diagnose errors within the more simplified mechanisms. We present BEATBOX as a new, freely available tool that allows researchers to rapidly evaluate their chemistry mechanism against a range of others under varying chemical conditions.

  10. Modeling and performance analyses of evaporators in frozen-food supermarket display cabinets at low temperatures

    Energy Technology Data Exchange (ETDEWEB)

    Getu, H.M.; Bansal, P.K. [Department of Mechanical Engineering, The University of Auckland, Private Bag 92019, Auckland (New Zealand)

    2007-11-15

    This paper presents modeling and experimental analyses of evaporators in 'in situ' frozen-food display cabinets at low temperatures in the supermarket industry. Extensive experiments were conducted to measure store and display cabinet relative humidities and temperatures, and pressures, temperatures and mass flow rates of the refrigerant. The mathematical model adopts various empirical correlations of heat transfer coefficients and frost properties in a fin-tube heat exchanger in order to investigate the influence of indoor conditions on the performance of the display cabinets. The model is validated with the experimental data of 'in situ' cabinets. The model would be a good guide tool to the design engineers to evaluate the performance of supermarket display cabinet heat exchangers under various store conditions. (author)

  11. Using Weather Data and Climate Model Output in Economic Analyses of Climate Change

    Energy Technology Data Exchange (ETDEWEB)

    Auffhammer, M.; Hsiang, S. M.; Schlenker, W.; Sobel, A.

    2013-06-28

    Economists are increasingly using weather data and climate model output in analyses of the economic impacts of climate change. This article introduces a set of weather data sets and climate models that are frequently used, discusses the most common mistakes economists make in using these products, and identifies ways to avoid these pitfalls. We first provide an introduction to weather data, including a summary of the types of datasets available, and then discuss five common pitfalls that empirical researchers should be aware of when using historical weather data as explanatory variables in econometric applications. We then provide a brief overview of climate models and discuss two common and significant errors often made by economists when climate model output is used to simulate the future impacts of climate change on an economic outcome of interest.

  12. Risk Factor Analyses for the Return of Spontaneous Circulation in the Asphyxiation Cardiac Arrest Porcine Model

    Directory of Open Access Journals (Sweden)

    Cai-Jun Wu

    2015-01-01

    Full Text Available Background: Animal models of asphyxiation cardiac arrest (ACA) are frequently used in basic research to mirror the clinical course of cardiac arrest (CA). The rates of return of spontaneous circulation (ROSC) in ACA animal models are lower than those from studies that have utilized ventricular fibrillation (VF) animal models. The purpose of this study was to characterize the factors associated with ROSC in the ACA porcine model. Methods: Forty-eight healthy miniature pigs underwent endotracheal tube clamping to induce CA. Once induced, CA was maintained untreated for a period of 8 min. Two minutes following the initiation of cardiopulmonary resuscitation (CPR), defibrillation was attempted until ROSC was achieved or the animal died. To assess the factors associated with ROSC in this CA model, logistic regression analyses were performed to analyze gender, the time of preparation, the amplitude spectrum area (AMSA) from the beginning of CPR and the pH at the beginning of CPR. A receiver-operating characteristic (ROC) curve was used to evaluate the predictive value of AMSA for ROSC. Results: ROSC was achieved in only 52.1% of animals in this ACA porcine model. The multivariate logistic regression analyses revealed that ROSC significantly depended on the time of preparation, AMSA at the beginning of CPR and pH at the beginning of CPR. The area under the ROC curve for AMSA at the beginning of CPR in predicting ROSC was 0.878 (95% confidence interval: 0.773∼0.983), and the optimum cut-off value was 15.62 (specificity 95.7% and sensitivity 80.0%). Conclusions: The time of preparation, AMSA and the pH at the beginning of CPR were associated with ROSC in this ACA porcine model. AMSA also predicted the likelihood of ROSC in this ACA animal model.
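
The area under the ROC curve reported for AMSA equals the Mann-Whitney probability that a randomly chosen ROSC animal shows a higher AMSA than a randomly chosen non-ROSC animal. A sketch of that equivalence with invented AMSA values (not the study's data):

```python
import numpy as np

def auc_mann_whitney(scores_pos, scores_neg):
    """AUC as P(score_pos > score_neg), with ties counted as 1/2."""
    sp = np.asarray(scores_pos, float)[:, None]
    sn = np.asarray(scores_neg, float)[None, :]
    return np.mean(sp > sn) + 0.5 * np.mean(sp == sn)

# Hypothetical AMSA values at the start of CPR (mV-Hz)
amsa_rosc = [18.2, 22.5, 13.5, 25.1, 19.8]      # animals achieving ROSC
amsa_no_rosc = [9.4, 14.1, 12.3, 16.0, 8.7]     # animals without ROSC
auc = auc_mann_whitney(amsa_rosc, amsa_no_rosc)
```

With real data, sweeping a cut-off over the pooled AMSA values traces the ROC curve itself, and the study's optimum cut-off (15.62) is the threshold maximizing the sensitivity/specificity trade-off.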

  13. Prediction Uncertainty Analyses for the Combined Physically-Based and Data-Driven Models

    Science.gov (United States)

    Demissie, Y. K.; Valocchi, A. J.; Minsker, B. S.; Bailey, B. A.

    2007-12-01

    The unavoidable simplification associated with physically-based mathematical models can result in biased parameter estimates and correlated model calibration errors, which in turn affect the accuracy of model predictions and the corresponding uncertainty analyses. In this work, a physically-based groundwater model (MODFLOW) together with error-correcting artificial neural networks (ANN) are used in a complementary fashion to obtain an improved prediction (i.e. prediction with reduced bias and error correlation). The associated prediction uncertainty of the coupled MODFLOW-ANN model is then assessed using three alternative methods. The first method estimates the combined model confidence and prediction intervals using first-order least-squares regression approximation theory. The second method uses Monte Carlo and bootstrap techniques for MODFLOW and ANN, respectively, to construct the combined model confidence and prediction intervals. The third method relies on a Bayesian approach that uses analytical or Monte Carlo methods to derive the intervals. The performance of these approaches is compared with Generalized Likelihood Uncertainty Estimation (GLUE) and Calibration-Constrained Monte Carlo (CCMC) intervals of the MODFLOW predictions alone. The results are demonstrated for a hypothetical case study developed based on a phytoremediation site at the Argonne National Laboratory. This case study comprises structural, parameter, and measurement uncertainties. The preliminary results indicate that the proposed three approaches yield comparable confidence and prediction intervals, thus making the computationally efficient first-order least-squares regression approach attractive for estimating the coupled model uncertainty. These results will be compared with GLUE and CCMC results.
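
The bootstrap step of the second method can be illustrated in a few lines: resample the calibration residuals of the physics-based model to form a prediction interval around a new prediction, and bootstrap the residual mean for a confidence interval on the bias. This is a generic residual-bootstrap sketch with made-up numbers, not the coupled MODFLOW-ANN procedure itself:

```python
import numpy as np

rng = np.random.default_rng(42)

# Residuals of the physics-based model at calibration points (hypothetical)
residuals = rng.normal(0.0, 0.3, size=200)
point_prediction = 12.5   # model prediction at a new location (made up)

# Bootstrap the residual mean for a confidence interval on systematic bias
boot_means = np.array([
    rng.choice(residuals, size=residuals.size, replace=True).mean()
    for _ in range(2000)
])
ci_lo, ci_hi = np.percentile(boot_means, [2.5, 97.5])

# Empirical residual quantiles give a 95% prediction interval for a new value
lo, hi = np.percentile(residuals, [2.5, 97.5])
prediction_interval = (point_prediction + lo, point_prediction + hi)
```

The distinction the abstract draws between confidence and prediction intervals is visible here: the confidence interval on bias shrinks with more calibration data, while the prediction interval stays roughly as wide as the residual scatter.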

  14. The importance of accurate muscle modelling for biomechanical analyses: a case study with a lizard skull

    Science.gov (United States)

    Gröning, Flora; Jones, Marc E. H.; Curtis, Neil; Herrel, Anthony; O'Higgins, Paul; Evans, Susan E.; Fagan, Michael J.

    2013-01-01

    Computer-based simulation techniques such as multi-body dynamics analysis are becoming increasingly popular in the field of skull mechanics. Multi-body models can be used for studying the relationships between skull architecture, muscle morphology and feeding performance. However, to be confident in the modelling results, models need to be validated against experimental data, and the effects of uncertainties or inaccuracies in the chosen model attributes need to be assessed with sensitivity analyses. Here, we compare the bite forces predicted by a multi-body model of a lizard (Tupinambis merianae) with in vivo measurements, using anatomical data collected from the same specimen. This subject-specific model predicts bite forces that are very close to the in vivo measurements and also shows a consistent increase in bite force as the bite position is moved posteriorly on the jaw. However, the model is very sensitive to changes in muscle attributes such as fibre length, intrinsic muscle strength and force orientation, with bite force predictions varying considerably when these three variables are altered. We conclude that accurate muscle measurements are crucial to building realistic multi-body models and that subject-specific data should be used whenever possible. PMID:23614944

  15. Analysing adverse events by time-to-event models: the CLEOPATRA study.

    Science.gov (United States)

    Proctor, Tanja; Schumacher, Martin

    2016-07-01

    When analysing primary and secondary endpoints in a clinical trial with patients suffering from a chronic disease, statistical models for time-to-event data are commonly used and accepted. This is in contrast to the analysis of data on adverse events where often only a table with observed frequencies and corresponding test statistics is reported. An example is the recently published CLEOPATRA study where a three-drug regimen is compared with a two-drug regimen in patients with HER2-positive first-line metastatic breast cancer. Here, as described earlier, primary and secondary endpoints (progression-free and overall survival) are analysed using time-to-event models, whereas adverse events are summarized in a simple frequency table, although the duration of study treatment differs substantially. In this paper, we demonstrate the application of time-to-event models to first serious adverse events using the data of the CLEOPATRA study. This will cover the broad range between a simple incidence rate approach over survival and competing risks models (with death as a competing event) to multi-state models. We illustrate all approaches by means of graphical displays highlighting the temporal dynamics and compare the obtained results. For the CLEOPATRA study, the resulting hazard ratios are all in the same order of magnitude. But the use of time-to-event models provides valuable and additional information that would potentially be overlooked by only presenting incidence proportions. These models adequately address the temporal dynamics of serious adverse events as well as death of patients. Copyright © 2016 John Wiley & Sons, Ltd.
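
The contrast drawn above, incidence proportions versus time-to-event summaries, can be made concrete with a toy dataset of first serious adverse events (all values invented). The incidence proportion ignores that patients were on treatment for very different durations, whereas an incidence rate and a Kaplan-Meier estimate account for exposure time and censoring:

```python
import numpy as np

# Hypothetical per-patient data: days on study treatment, and whether a
# first serious adverse event (SAE) occurred at that time (0 = censored)
time = np.array([30, 120, 200, 45, 365, 90, 250, 365, 15, 180])
event = np.array([1,  0,   1,   1,  0,   1,  0,   0,   1,  1])

# (1) Naive incidence proportion: ignores differing exposure times
incidence_prop = event.mean()

# (2) Incidence rate: events per person-year at risk
rate_per_year = event.sum() / (time.sum() / 365.25)

# (3) Kaplan-Meier estimate of remaining SAE-free through 200 days
order = np.argsort(time)
t, d = time[order], event[order]
at_risk = len(t) - np.arange(len(t))       # patients still at risk at each time
km = np.cumprod(1 - d / at_risk)
surv_200 = km[t <= 200][-1]
```

With unequal treatment durations, as in CLEOPATRA, the frequency-table summary (1) and the time-to-event summaries (2)-(3) can rank treatment arms differently, which is the abstract's central point.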

  16. Models and analyses for inertial-confinement fusion-reactor studies

    Energy Technology Data Exchange (ETDEWEB)

    Bohachevsky, I.O.

    1981-05-01

    This report describes models and analyses devised at Los Alamos National Laboratory to determine the technical characteristics of different inertial confinement fusion (ICF) reactor elements required for component integration into a functional unit. We emphasize the generic properties of the different elements rather than specific designs. The topics discussed are general ICF reactor design considerations; reactor cavity phenomena, including the restoration of interpulse ambient conditions; first-wall temperature increases and material losses; reactor neutronics and hydrodynamic blanket response to neutron energy deposition; and analyses of loads and stresses in the reactor vessel walls, including remarks about the generation and propagation of very short wavelength stress waves. A discussion of analytic approaches useful in integrations and optimizations of ICF reactor systems concludes the report.

  17. Dynamics and spatial structure of ENSO from re-analyses versus CMIP5 models

    Science.gov (United States)

    Serykh, Ilya; Sonechkin, Dmitry

    2016-04-01

    Based on a mathematical idea about the so-called strange nonchaotic attractor (SNA) in quasi-periodically forced dynamical systems, the currently available re-analysis data are considered. It is found that the El Niño - Southern Oscillation (ENSO) is driven not only by the seasonal heating, but also by three more external periodicities (incommensurate with the annual period) associated with the ~18.6-year lunar-solar nutation of the Earth's rotation axis, the ~11-year sunspot activity cycle and the ~14-month Chandler wobble in the Earth's pole motion. Because of the incommensurability of their periods, all four forces affect the system at mutually inappropriate time moments. As a result, the ENSO time series look very complex (strange in mathematical terms) but nonchaotic. The power spectra of ENSO indices reveal numerous peaks located at periods that are multiples of the above periodicities, as well as at their sub- and super-harmonics. In spite of this complexity, a mutual order seems to be inherent to the ENSO time series and their spectra. This order reveals itself in a scaling of the power spectrum peaks and respective rhythms in the ENSO dynamics that resemble the power spectrum and dynamics of an SNA. It means there is, in principle, no limit to forecasting ENSO. In practice, it opens a possibility to forecast ENSO several years ahead. Global spatial structures of anomalies during El Niño and power spectra of ENSO indices from re-analyses are compared with the respective output quantities of the CMIP5 climate models (the Historical experiment). It is found that the models reproduce global spatial structures of the near-surface temperature and sea level pressure anomalies during El Niño very similar to these fields in the re-analyses considered. But the power spectra of the ENSO indices from the CMIP5 models show no peaks at the same periods as the re-analysis power spectra. We suppose that it is possible to improve modeled
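
The kind of power-spectrum diagnostic described, peaks at the annual period plus longer incommensurate forcing periods, can be reproduced on synthetic data with a simple periodogram. Amplitudes, record length and the choice of periods below are arbitrary illustration values:

```python
import numpy as np

# Synthetic "ENSO-like" index: annual cycle plus two incommensurate
# external periods (~11 yr solar cycle and ~18.6 yr nutation)
dt = 1 / 12                      # monthly sampling, in years
t = np.arange(0, 128, dt)        # 128-year record
x = (np.sin(2 * np.pi * t)                 # annual cycle
     + 0.5 * np.sin(2 * np.pi * t / 11.0)  # ~11-yr component
     + 0.3 * np.sin(2 * np.pi * t / 18.6)) # ~18.6-yr component

# Periodogram: peaks should appear near 1, 1/11 and 1/18.6 cycles per year
freqs = np.fft.rfftfreq(t.size, d=dt)
power = np.abs(np.fft.rfft(x)) ** 2

peak_freq = freqs[np.argmax(power)]   # dominant peak: the annual cycle
```

Applied to a model index versus a re-analysis index, the same periodogram comparison would reveal the missing peaks at the forcing periods that the abstract reports for the CMIP5 Historical runs.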

  18. How robust are probabilistic models of higher-level cognition?

    Science.gov (United States)

    Marcus, Gary F; Davis, Ernest

    2013-12-01

    An increasingly popular theory holds that the mind should be viewed as a near-optimal or rational engine of probabilistic inference, in domains as diverse as word learning, pragmatics, naive physics, and predictions of the future. We argue that this view, often identified with Bayesian models of inference, is markedly less promising than widely believed, and is undermined by post hoc practices that merit wholesale reevaluation. We also show that the common equation between probabilistic and rational or optimal is not justified.

  19. Design evaluation and optimisation in crossover pharmacokinetic studies analysed by nonlinear mixed effects models.

    Science.gov (United States)

    Nguyen, Thu Thuy; Bazzoli, Caroline; Mentré, France

    2012-05-20

    Bioequivalence or interaction trials are commonly studied in crossover designs and can be analysed by nonlinear mixed effects models as an alternative to the noncompartmental approach. We propose an extension of the population Fisher information matrix in nonlinear mixed effects models to design crossover pharmacokinetic trials, using a linearisation of the model around the random effect expectation, including within-subject variability and discrete covariates fixed or changing between periods. We use the expected standard errors of treatment effect to compute the power for the Wald test of comparison or equivalence and the number of subjects needed for a given power. We perform various simulations mimicking crossover two-period trials to show the relevance of these developments. We then apply these developments to design a crossover pharmacokinetic study of amoxicillin in piglets and implement them in the new version 3.2 of the R function PFIM.

  20. An age-dependent model to analyse the evolutionary stability of bacterial quorum sensing.

    Science.gov (United States)

    Mund, A; Kuttler, C; Pérez-Velázquez, J; Hense, B A

    2016-09-21

    Bacterial communication is enabled through the collective release and sensing of signalling molecules in a process called quorum sensing. Cooperative processes can easily be destabilized by the appearance of cheaters, who contribute little or nothing at all to the production of common goods. This especially applies to planktonic cultures. In this study, we analyse the dynamics of bacterial quorum sensing and its evolutionary stability under two levels of cooperation, namely signal and enzyme production. The model accounts for mutation rates and switches between planktonic and biofilm states of growth. We present a mathematical approach to model these dynamics using age-dependent colony models. We explore the conditions under which cooperation is stable and find that spatial structuring can lead to long-term scenarios such as coexistence or bistability, depending on the non-linear combination of different parameters like death rates and production costs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Assessing Cognitive Processes with Diffusion Model Analyses: A Tutorial based on fast-dm-30

    Directory of Open Access Journals (Sweden)

    Andreas eVoss

    2015-03-01

    Full Text Available Diffusion models can be used to infer cognitive processes involved in fast binary decision tasks. The model assumes that information is accumulated continuously until one of two thresholds is hit. In the analysis, response time distributions from numerous trials of the decision task are used to estimate a set of parameters mapping distinct cognitive processes. In recent years, diffusion model analyses have become more and more popular in different fields of psychology. This increased popularity is based on the recent development of several software solutions for parameter estimation. Although these programs make the application of the model relatively easy, there is a shortage of knowledge about the different steps of a state-of-the-art diffusion model study. In this paper, we give a concise tutorial on diffusion modelling, and we present fast-dm-30, a thoroughly revised and extended version of the fast-dm software (Voss & Voss, 2007) for diffusion model data analysis. The most important improvement in this version of fast-dm is the possibility to choose between different optimization criteria (i.e., Maximum Likelihood, Chi-Square, and Kolmogorov-Smirnov), which differ in applicability to different data sets.
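
The accumulation-to-threshold process that diffusion models assume can be simulated directly with an Euler-Maruyama scheme. This is a generic illustration of the basic Wiener diffusion process, not fast-dm's estimation code, and the parameter values are arbitrary:

```python
import numpy as np

def simulate_diffusion(n_trials, v=1.0, a=1.5, z=0.75, t0=0.3,
                       dt=1e-3, sigma=1.0, seed=0):
    """Euler-Maruyama simulation of the basic Wiener diffusion model.

    v: drift rate, a: threshold separation, z: starting point (0 < z < a),
    t0: non-decision time. Returns response times and choices (1 = upper).
    """
    rng = np.random.default_rng(seed)
    rts = np.empty(n_trials)
    choices = np.empty(n_trials, dtype=int)
    for i in range(n_trials):
        x, t = z, 0.0
        while 0.0 < x < a:   # accumulate evidence until a threshold is hit
            x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts[i] = t0 + t
        choices[i] = int(x >= a)
    return rts, choices

rts, choices = simulate_diffusion(500)
upper_rate = choices.mean()   # positive drift: mostly upper-threshold responses
```

Fitting a diffusion model inverts this simulation: fast-dm searches for the parameter set whose predicted response-time distributions best match the observed ones under the chosen optimization criterion.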

  2. Post hoc analysis of the PATRICIA randomized trial of the efficacy of human papillomavirus type 16 (HPV-16)/HPV-18 AS04-adjuvanted vaccine against incident and persistent infection with nonvaccine oncogenic HPV types using an alternative multiplex type-specific PCR assay for HPV DNA.

    Science.gov (United States)

    Struyf, Frank; Colau, Brigitte; Wheeler, Cosette M; Naud, Paulo; Garland, Suzanne; Quint, Wim; Chow, Song-Nan; Salmerón, Jorge; Lehtinen, Matti; Del Rosario-Raymundo, M Rowena; Paavonen, Jorma; Teixeira, Júlio C; Germar, Maria Julieta; Peters, Klaus; Skinner, S Rachel; Limson, Genara; Castellsagué, Xavier; Poppe, Willy A J; Ramjattan, Brian; Klein, Terry D; Schwarz, Tino F; Chatterjee, Archana; Tjalma, Wiebren A A; Diaz-Mitoma, Francisco; Lewis, David J M; Harper, Diane M; Molijn, Anco; van Doorn, Leen-Jan; David, Marie-Pierre; Dubin, Gary

    2015-02-01

    The efficacy of the human papillomavirus type 16 (HPV-16)/HPV-18 AS04-adjuvanted vaccine against cervical infections with HPV in the Papilloma Trial against Cancer in Young Adults (PATRICIA) was evaluated using a combination of the broad-spectrum L1-based SPF10 PCR-DNA enzyme immunoassay (DEIA)/line probe assay (LiPA25) system with type-specific PCRs for HPV-16 and -18. Broad-spectrum PCR assays may underestimate the presence of HPV genotypes present at relatively low concentrations in multiple infections, due to competition between genotypes. Therefore, samples were retrospectively reanalyzed using a testing algorithm incorporating the SPF10 PCR-DEIA/LiPA25 plus a novel E6-based multiplex type-specific PCR and reverse hybridization assay (MPTS12 RHA), which permits detection of a panel of nine oncogenic HPV genotypes (types 16, 18, 31, 33, 35, 45, 52, 58, and 59). For the vaccine against HPV types 16 and 18, there was no major impact on estimates of vaccine efficacy (VE) for incident or 6-month or 12-month persistent infections when the MPTS12 RHA was included in the testing algorithm versus estimates with the protocol-specified algorithm. However, the alternative testing algorithm showed greater sensitivity than the protocol-specified algorithm for detection of some nonvaccine oncogenic HPV types. More cases were gained in the control group than in the vaccine group, leading to higher point estimates of VE for 6-month and 12-month persistent infections for the nonvaccine oncogenic types included in the MPTS12 RHA assay (types 31, 33, 35, 45, 52, 58, and 59). This post hoc analysis indicates that the per-protocol testing algorithm used in PATRICIA underestimated the VE against some nonvaccine oncogenic HPV types and that the choice of the HPV DNA testing methodology is important for the evaluation of VE in clinical trials. (This study has been registered at ClinicalTrials.gov under registration no. NCT00122681.). Copyright © 2015, American Society for

  3. Models of population-based analyses for data collected from large extended families.

    Science.gov (United States)

    Wang, Wenyu; Lee, Elisa T; Howard, Barbara V; Fabsitz, Richard R; Devereux, Richard B; MacCluer, Jean W; Laston, Sandra; Comuzzie, Anthony G; Shara, Nawar M; Welty, Thomas K

    2010-12-01

    Large studies of extended families usually collect valuable phenotypic data that may have scientific value for purposes other than testing genetic hypotheses if the families were not selected in a biased manner. These purposes include assessing population-based associations of diseases with risk factors/covariates and estimating population characteristics such as disease prevalence and incidence. Relatedness among participants, however, violates the traditional assumption of independent observations in these classic analyses. The commonly used adjustment method for relatedness in population-based analyses is to use marginal models, in which clusters (families) are assumed to be independent (unrelated) with a simple and identical covariance (family) structure, such as the independent, exchangeable and unstructured covariance structures. However, using these simple covariance structures may not be optimally appropriate for outcomes collected from large extended families, and may under- or over-estimate the variances of estimators and thus lead to uncertainty in inferences. Moreover, the assumption that families are unrelated with an identical family structure in a marginal model may not be satisfied for family studies with large extended families. The aim of this paper is to propose models incorporating marginal-model approaches with a covariance structure for assessing population-based associations of diseases with their risk factors/covariates and estimating population characteristics for epidemiological studies, while adjusting for the complicated relatedness among outcomes (continuous/categorical, normally/non-normally distributed) collected from large extended families. We also discuss theoretical issues of the proposed models and show that the proposed models and covariance structure are appropriate for and capable of achieving the aim.
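
The core problem the abstract raises, relatedness distorting the estimated uncertainty of population-based estimates, can be illustrated with a cluster-robust (sandwich) variance estimator, one common marginal-model-style correction. This is a generic sketch on simulated family data, not the authors' proposed model:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated extended-family data: a shared family effect induces relatedness
n_fam, fam_size = 60, 5
fam = np.repeat(np.arange(n_fam), fam_size)
x = rng.normal(size=n_fam * fam_size)
fam_effect = rng.normal(0.0, 1.0, size=n_fam)[fam]
y = 2.0 + 0.5 * x + fam_effect + rng.normal(size=x.size)

# OLS point estimates remain consistent despite the clustering
X = np.column_stack([np.ones_like(x), x])
beta = np.linalg.solve(X.T @ X, X.T @ y)
resid = y - X @ beta

# Naive variance (assumes independence) vs cluster-robust sandwich variance
bread = np.linalg.inv(X.T @ X)
naive_var = resid.var(ddof=2) * bread
meat = np.zeros((2, 2))
for g in range(n_fam):
    idx = fam == g
    s = X[idx].T @ resid[idx]       # per-family score contribution
    meat += np.outer(s, s)
robust_var = bread @ meat @ bread

se_naive = np.sqrt(naive_var[0, 0])
se_robust = np.sqrt(robust_var[0, 0])
```

With a family random effect present, the naive standard error of the intercept is clearly understated relative to the cluster-robust one, which is the under-estimation of variances the abstract warns about.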

  4. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    Science.gov (United States)

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
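The conversion-model idea described above can be sketched with ordinary least squares: fit a linear relationship between ΣPCB sums from two congener sets measured on the same samples, then use it to convert new values. The data, the ~93% capture rate, and the plain (non-log) scale below are illustrative assumptions, not the paper's actual dataset.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical paired measurements on the same samples: a 119-congener
# sum and the corresponding full 209-congener sum with analytical noise.
sigma119 = rng.uniform(10, 500, size=30)             # smaller congener set
sigma209 = sigma119 / 0.93 + rng.normal(0, 5, 30)    # full set, ~93% captured

# Ordinary least squares conversion model: sigma209 ~ a + b * sigma119
b, a = np.polyfit(sigma119, sigma209, deg=1)

def convert(s119):
    """Estimate a Sigma209 concentration from a Sigma119 measurement."""
    return a + b * s119

# Proportional contribution of the smaller congener set to total PCB
capture = 1.0 / b
```

In practice the regression would be fitted per matrix and geographic origin, as the abstract stresses that conversion models are specific to similar samples.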

  5. Sampling and sensitivity analyses tools (SaSAT) for computational modelling

    Directory of Open Access Journals (Sweden)

    Wilson David P

    2008-02-01

    Full Text Available Abstract SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated.

  6. Sampling and sensitivity analyses tools (SaSAT) for computational modelling.

    Science.gov (United States)

    Hoare, Alexander; Regan, David G; Wilson, David P

    2008-02-27

    SaSAT (Sampling and Sensitivity Analysis Tools) is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab, a numerical mathematical software package, and utilises algorithms contained in the Matlab Statistics Toolbox. However, Matlab is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated.
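SaSAT itself is a Matlab toolbox, but the workflow it automates — sample parameter space with a Latin hypercube, run the model at each sample, rank parameters by correlation with the output — can be sketched in Python. The toy "epidemic" model and parameter ranges below are invented for illustration.

```python
import numpy as np
from scipy.stats import qmc, spearmanr

# Toy model: an epidemic "final size" that rises with transmission (beta),
# falls with recovery (gamma), and is barely affected by a nuisance input.
def model(beta, gamma, nuisance):
    return beta / (beta + gamma) + 0.01 * nuisance

# Latin hypercube sample of the 3-dimensional parameter space
sampler = qmc.LatinHypercube(d=3, seed=1)
unit = sampler.random(n=200)
lower = [0.1, 0.05, -1.0]
upper = [1.0, 0.50, 1.0]
params = qmc.scale(unit, lower, upper)

output = np.array([model(*p) for p in params])

# Spearman rank correlation as a simple sensitivity index per parameter
sens = [spearmanr(params[:, j], output)[0] for j in range(3)]
```

Here the rank correlations recover the expected ordering: beta strongly positive, gamma strongly negative, and the nuisance input near zero — the same kind of factor prioritisation SaSAT reports via tornado plots.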

  7. Analysing animal social network dynamics: the potential of stochastic actor-oriented models.

    Science.gov (United States)

    Fisher, David N; Ilany, Amiyaal; Silk, Matthew J; Tregenza, Tom

    2017-03-01

    Animals are embedded in dynamically changing networks of relationships with conspecifics. These dynamic networks are fundamental aspects of their environment, creating selection on behaviours and other traits. However, most social network-based approaches in ecology are constrained to considering networks as static, despite several calls for such analyses to become more dynamic. There are a number of statistical analyses developed in the social sciences that are increasingly being applied to animal networks, of which stochastic actor-oriented models (SAOMs) are a principal example. SAOMs are a class of individual-based models designed to model transitions in networks between discrete time points, as influenced by network structure and covariates. It is not clear, however, how useful such techniques are to ecologists, and whether they are suited to animal social networks. We review the recent applications of SAOMs to animal networks, outlining findings and assessing the strengths and weaknesses of SAOMs when applied to animal rather than human networks. We go on to highlight the types of ecological and evolutionary processes that SAOMs can be used to study. SAOMs can include effects and covariates for individuals, dyads and populations, which can be constant or variable. This allows for the examination of a wide range of questions of interest to ecologists. However, high-resolution data are required, meaning SAOMs will not be useable in all study systems. It remains unclear how robust SAOMs are to missing data and uncertainty around social relationships. Ultimately, we encourage the careful application of SAOMs in appropriate systems, with dynamic network analyses likely to prove highly informative. Researchers can then extend the basic method to tackle a range of existing questions in ecology and explore novel lines of questioning. © 2016 The Authors. Journal of Animal Ecology published by John Wiley & Sons Ltd on behalf of British Ecological Society.

  8. Power analyses for negative binomial models with application to multiple sclerosis clinical trials.

    Science.gov (United States)

    Rettiganti, Mallik; Nagaraja, H N

    2012-01-01

    We use negative binomial (NB) models for the magnetic resonance imaging (MRI)-based brain lesion count data from parallel group (PG) and baseline versus treatment (BVT) trials for relapsing remitting multiple sclerosis (RRMS) patients, and describe the associated likelihood ratio (LR), score, and Wald tests. We perform power analyses and sample size estimation using the simulated percentiles of the exact distribution of the test statistics for the PG and BVT trials. When compared to the corresponding nonparametric test, the LR test results in 30-45% reduction in sample sizes for the PG trials and 25-60% reduction for the BVT trials.
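A simulation-based power calculation of the kind the abstract describes can be sketched as follows. Note this uses the nonparametric comparator (Wilcoxon rank-sum) rather than the paper's NB likelihood ratio test, and the lesion-count means, treatment effect, and dispersion are illustrative assumptions, not values from the trials.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def nb_draw(rng, mean, size_k, n):
    # NumPy's parametrization: n = dispersion k, p = k / (k + mean)
    return rng.negative_binomial(size_k, size_k / (size_k + mean), n)

def power(n_per_arm, mean_control=4.0, effect=0.5, size_k=1.0,
          alpha=0.05, n_sims=500, seed=7):
    """Estimate power of a two-sided rank-sum test on simulated
    parallel-group lesion counts (effect = relative mean reduction)."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_sims):
        control = nb_draw(rng, mean_control, size_k, n_per_arm)
        treated = nb_draw(rng, mean_control * (1 - effect), size_k, n_per_arm)
        p = mannwhitneyu(control, treated, alternative="two-sided").pvalue
        rejections += p < alpha
    return rejections / n_sims
```

The same loop with an NB likelihood ratio statistic in place of the rank-sum test would reproduce the paper's comparison, where the parametric test needs markedly smaller samples.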

  9. Analysing and modelling battery drain of 3G terminals due to port scan attacks

    OpenAIRE

    Pascual Trigos, Mar

    2010-01-01

    This thesis identifies a threat to 3G mobile phones, specifically the eventual draining of a terminal's battery due to undesired data traffic. The objectives of the thesis are to analyse the battery drain of 3G mobile phones caused by uplink and downlink traffic and to model that battery drain. First, we describe how a mobile phone can be made to increase its consumption, and therefore to shorten its battery lifetime. Concretely, we focus on data traffic. This traffic ca...

  10. Analysing the Effects of Flood-Resilience Technologies in Urban Areas Using a Synthetic Model Approach

    Directory of Open Access Journals (Sweden)

    Reinhard Schinke

    2016-11-01

    Full Text Available Flood protection systems with their spatial effects play an important role in managing and reducing flood risks. The planning and decision process as well as the technical implementation are well organized and often exercised. However, building-related flood-resilience technologies (FReT) are often neglected due to the absence of suitable approaches to analyse and to integrate such measures in large-scale flood damage mitigation concepts. Against this backdrop, a synthetic model approach was extended by a few complementary methodical steps in order to calculate flood damage to buildings considering the effects of building-related FReT and to analyse the area-related reduction of flood risks by geo-information systems (GIS) with high spatial resolution. It includes a civil-engineering-based investigation of characteristic building properties and construction, including a selection and combination of appropriate FReT, as a basis for the derivation of synthetic depth-damage functions. Depending on the real exposition and the implementation level of FReT, the functions can be used and allocated in spatial damage and risk analyses. The application of the extended approach is shown in a case study in Valencia (Spain). In this way, the overall research findings improve the integration of FReT in flood risk management. They also provide useful information for advising individuals at risk, supporting the selection and implementation of FReT.

  11. Modeling of high homologous temperature deformation behavior for stress and life-time analyses

    Energy Technology Data Exchange (ETDEWEB)

    Krempl, E. [Rensselaer Polytechnic Institute, Troy, NY (United States)

    1997-12-31

    Stress and lifetime analyses need realistic and accurate constitutive models for the inelastic deformation behavior of engineering alloys at low and high temperatures. Conventional creep and plasticity models have fundamental difficulties in reproducing high homologous temperature behavior. To improve the modeling capabilities, "unified" state variable theories were conceived. They consider all inelastic deformation to be rate-dependent and do not have separate repositories for creep and plasticity. The viscoplasticity theory based on overstress (VBO), one of the unified theories, is introduced and its properties are delineated. At high homologous temperature, where secondary and tertiary creep are observed, modeling is primarily accomplished by a static recovery term and a softening isotropic stress. At low temperatures creep is merely a manifestation of rate dependence. The primary creep modeled at low homologous temperature is due to the rate dependence of the flow law. The model is unaltered in the transition from low to high temperature except that the softening of the isotropic stress and the influence of the static recovery term increase with an increase of the temperature.

  12. Incorporating uncertainty of management costs in sensitivity analyses of matrix population models.

    Science.gov (United States)

    Salomon, Yacov; McCarthy, Michael A; Taylor, Peter; Wintle, Brendan A

    2013-02-01

    The importance of accounting for economic costs when making environmental-management decisions subject to resource constraints has been increasingly recognized in recent years. In contrast, uncertainty associated with such costs has often been ignored. We developed a method, on the basis of economic theory, that accounts for the uncertainty in population-management decisions. We considered the case where, rather than taking fixed values, model parameters are random variables that represent the situation when parameters are not precisely known. Hence, the outcome is not precisely known either. Instead of maximizing the expected outcome, we maximized the probability of obtaining an outcome above a threshold of acceptability. We derived explicit analytical expressions for the optimal allocation and its associated probability, as a function of the threshold of acceptability, where the model parameters were distributed according to normal and uniform distributions. To illustrate our approach we revisited a previous study that incorporated cost-efficiency analyses in management decisions that were based on perturbation analyses of matrix population models. Incorporating derivations from this study into our framework, we extended the model to address potential uncertainties. We then applied these results to 2 case studies: management of a Koala (Phascolarctos cinereus) population and conservation of an olive ridley sea turtle (Lepidochelys olivacea) population. For low aspirations, that is, when the threshold of acceptability is relatively low, the optimal strategy was obtained by diversifying the allocation of funds. Conversely, for high aspirations, the budget was directed toward management actions with the highest potential effect on the population. The exact optimal allocation was sensitive to the choice of uncertainty model. Our results highlight the importance of accounting for uncertainty when making decisions and suggest that more effort should be placed on
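The central idea above — maximize the probability that a management outcome exceeds a threshold of acceptability, rather than the expected outcome — can be illustrated numerically. The paper derives analytical optima; the grid search, the two-action setting, and the normal benefit parameters below are invented for illustration only.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical benefits per unit budget of two management actions:
# action 1 has the higher mean but much larger uncertainty.
mu = np.array([1.0, 0.8])   # expected benefit per unit budget
sd = np.array([0.4, 0.1])   # uncertainty (independent normal benefits)

def prob_above(x, threshold):
    """P(x*B1 + (1-x)*B2 >= threshold) for independent normal benefits."""
    w = np.array([x, 1.0 - x])
    m = w @ mu
    s = np.sqrt((w**2 * sd**2).sum())
    return norm.sf((threshold - m) / s)

def best_allocation(threshold, grid=np.linspace(0.0, 1.0, 201)):
    """Budget fraction for action 1 maximizing P(outcome >= threshold)."""
    probs = [prob_above(x, threshold) for x in grid]
    return grid[int(np.argmax(probs))]
```

Running this reproduces the abstract's qualitative finding: for a low threshold the optimum is an interior (diversified) allocation, while for a high threshold the whole budget goes to the action with the highest potential effect.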

  13. Reading Ability Development from Kindergarten to Junior Secondary: Latent Transition Analyses with Growth Mixture Modeling

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2016-10-01

    Full Text Available The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study (Kindergarten Class of 1998-99 data; Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006) under the dynamic systems framework. To depict children's growth pattern, we extended the measurement part of latent transition analysis to the growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group, with few cross-level changes in their classes. After adding the environmental factors as predictors, analyses showed that children receiving higher teachers' ratings, with higher socioeconomic status, and of above-average poverty status, would have a higher probability of transitioning into the higher ability group.

  14. Structural identifiability analyses of candidate models for in vitro Pitavastatin hepatic uptake.

    Science.gov (United States)

    Grandjean, Thomas R B; Chappell, Michael J; Yates, James W T; Evans, Neil D

    2014-05-01

    In this paper a review of the application of four different techniques (a version of the similarity transformation approach for autonomous uncontrolled systems, a non-differential input/output observable normal form approach, the characteristic set differential algebra and a recent algebraic input/output relationship approach) to determine the structural identifiability of certain in vitro nonlinear pharmacokinetic models is provided. The Organic Anion Transporting Polypeptide (OATP) substrate, Pitavastatin, is used as a probe on freshly isolated animal and human hepatocytes. Candidate pharmacokinetic non-linear compartmental models have been derived to characterise the uptake process of Pitavastatin. As a prerequisite to parameter estimation, structural identifiability analyses are performed to establish that all unknown parameters can be identified from the experimental observations available.

  15. A conceptual model for analysing informal learning in online social networks for health professionals.

    Science.gov (United States)

    Li, Xin; Gray, Kathleen; Chang, Shanton; Elliott, Kristine; Barnett, Stephen

    2014-01-01

    Online social networking (OSN) provides a new way for health professionals to communicate, collaborate and share ideas with each other for informal learning on a massive scale. It has important implications for ongoing efforts to support Continuing Professional Development (CPD) in the health professions. However, the challenge of analysing the data generated in OSNs makes it difficult to understand whether and how they are useful for CPD. This paper presents a conceptual model for using mixed methods to study data from OSNs to examine the efficacy of OSN in supporting informal learning of health professionals. It is expected that using this model with the dataset generated in OSNs for informal learning will produce new and important insights into how well this innovation in CPD is serving professionals and the healthcare system.

  16. Rockslide and Impulse Wave Modelling in the Vajont Reservoir by DEM-CFD Analyses

    Science.gov (United States)

    Zhao, T.; Utili, S.; Crosta, G. B.

    2016-06-01

    This paper investigates the generation of hydrodynamic water waves due to rockslides plunging into a water reservoir. Quasi-3D DEM analyses in plane strain by a coupled DEM-CFD code are adopted to simulate the rockslide from its onset to the impact with the still water and the subsequent generation of the wave. The employed numerical tools and upscaling of hydraulic properties allow predicting a physical response in broad agreement with the observations notwithstanding the assumptions and characteristics of the adopted methods. The results obtained by the DEM-CFD coupled approach are compared to those published in the literature and those presented by Crosta et al. (Landslide spreading, impulse waves and modelling of the Vajont rockslide. Rock mechanics, 2014) in a companion paper obtained through an ALE-FEM method. Analyses performed along two cross sections are representative of the limit conditions of the eastern and western slope sectors. The maximum average rockslide velocity and the water wave velocity reach ca. 22 and 20 m/s, respectively. The maximum computed run-up amounts to ca. 120 and 170 m for the eastern and western lobe cross sections, respectively. These values are reasonably similar to those recorded during the event (i.e. ca. 130 and 190 m, respectively). Therefore, the overall study lays out a possible DEM-CFD framework for the modelling of the generation of the hydrodynamic wave due to the impact of a rapid moving rockslide or rock-debris avalanche.

  17. Economic modeling of electricity production from hot dry rock geothermal reservoirs: methodology and analyses. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Cummings, R.G.; Morris, G.E.

    1979-09-01

    An analytical methodology is developed for assessing alternative modes of generating electricity from hot dry rock (HDR) geothermal energy sources. The methodology is used in sensitivity analyses to explore relative system economics. The methodology used a computerized, intertemporal optimization model to determine the profit-maximizing design and management of a unified HDR electric power plant with a given set of geologic, engineering, and financial conditions. By iterating this model on price, a levelized busbar cost of electricity is established. By varying the conditions of development, the sensitivity of both optimal management and busbar cost to these conditions are explored. A plausible set of reference case parameters is established at the outset of the sensitivity analyses. This reference case links a multiple-fracture reservoir system to an organic, binary-fluid conversion cycle. A levelized busbar cost of 43.2 mills/kWh ($1978) was determined for the reference case, which had an assumed geothermal gradient of 40°C/km, a design well-flow rate of 75 kg/s, an effective heat transfer area per pair of wells of 1.7 × 10⁶ m², and plant design temperature of 160°C. Variations in the presumed geothermal gradient, size of the reservoir, drilling costs, real rates of return, and other system parameters yield minimum busbar costs between -40% and +76% of the reference case busbar cost.

  18. Establishing a Numerical Modeling Framework for Hydrologic Engineering Analyses of Extreme Storm Events

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xiaodong; Hossain, Faisal; Leung, L. Ruby

    2017-08-01

    In this study a numerical modeling framework for simulating extreme storm events was established using the Weather Research and Forecasting (WRF) model. Such a framework is necessary for the derivation of engineering parameters such as probable maximum precipitation that are the cornerstone of large water management infrastructure design. Here this framework was built based on a heavy storm that occurred in Nashville (USA) in 2010, and verified using two other extreme storms. To achieve the optimal setup, several combinations of model resolutions, initial/boundary conditions (IC/BC), cloud microphysics and cumulus parameterization schemes were evaluated using multiple metrics of precipitation characteristics. The evaluation suggests that WRF is most sensitive to IC/BC option. Simulation generally benefits from finer resolutions up to 5 km. At the 15km level, NCEP2 IC/BC produces better results, while NAM IC/BC performs best at the 5km level. Recommended model configuration from this study is: NAM or NCEP2 IC/BC (depending on data availability), 15km or 15km-5km nested grids, Morrison microphysics and Kain-Fritsch cumulus schemes. Validation of the optimal framework suggests that these options are good starting choices for modeling extreme events similar to the test cases. This optimal framework is proposed in response to emerging engineering demands of extreme storm events forecasting and analyses for design, operations and risk assessment of large water infrastructures.

  19. Estimating required information size by quantifying diversity in random-effects model meta-analyses

    DEFF Research Database (Denmark)

    Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper;

    2009-01-01

    an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. RESULTS: We devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between-trial variability and … interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D2 ≥ I2, for all meta-analyses. CONCLUSION: We conclude that D2 seems a better alternative than I2 to consider model variation in any random-effects model meta-analysis.
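The diversity measure defined above — the relative reduction in the variance of the pooled estimate when moving from a random-effects to a fixed-effect model — can be computed directly from trial-level data. The sketch below uses the standard DerSimonian-Laird estimate of the between-trial variance; the five trial effects and variances are invented for illustration.

```python
import numpy as np

# Hypothetical trial-level data: effect estimates and within-trial variances
effects = np.array([0.10, 0.35, -0.05, 0.50, 0.20])
variances = np.array([0.02, 0.03, 0.01, 0.05, 0.02])

w = 1.0 / variances                      # fixed-effect weights
pooled_fixed = (w * effects).sum() / w.sum()
Q = (w * (effects - pooled_fixed) ** 2).sum()   # Cochran's Q
k = len(effects)

# DerSimonian-Laird between-trial variance (tau^2)
tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w**2).sum() / w.sum()))

w_star = 1.0 / (variances + tau2)        # random-effects weights
vF = 1.0 / w.sum()                       # variance of fixed-effect pooled estimate
vR = 1.0 / w_star.sum()                  # variance of random-effects pooled estimate

D2 = (vR - vF) / vR                      # diversity
I2 = max(0.0, (Q - (k - 1)) / Q)         # inconsistency
```

On these data D2 exceeds I2, consistent with the abstract's result that D2 ≥ I2 for all meta-analyses; the required information size under the random-effects model is then inflated by 1/(1 − D2).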

  20. Development of steady-state model for MSPT and detailed analyses of receiver

    Science.gov (United States)

    Yuasa, Minoru; Sonoda, Masanori; Hino, Koichi

    2016-05-01

    Molten salt parabolic trough system (MSPT) uses molten salt as heat transfer fluid (HTF) instead of synthetic oil. The demonstration plant of MSPT was constructed by Chiyoda Corporation and Archimede Solar Energy in Italy in 2013. Chiyoda Corporation developed a steady-state model for predicting the theoretical behavior of the demonstration plant. The model was designed to calculate the concentrated solar power and heat loss using ray tracing of incident solar light and finite element modeling of thermal energy transferred into the medium. This report describes the verification of the model using test data on the demonstration plant, detailed analyses on the relation between flow rate and temperature difference on the metal tube of receiver and the effect of defocus angle on concentrated power rate, for solar collector assembly (SCA) development. The model is accurate to an extent of 2.0% as systematic error and 4.2% as random error. The relationships between flow rate and temperature difference on metal tube and the effect of defocus angle on concentrated power rate are shown.

  1. A STRONGLY COUPLED REACTOR CORE ISOLATION COOLING SYSTEM MODEL FOR EXTENDED STATION BLACK-OUT ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua [Idaho National Laboratory; Zhang, Hongbin [Idaho National Laboratory; Zou, Ling [Idaho National Laboratory; Martineau, Richard Charles [Idaho National Laboratory

    2015-03-01

    The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is from the battery to maintain the logic circuits to control the opening and/or closure of valves in the RCIC systems in order to control the RPV water level by shutting down the RCIC pump to avoid overfilling the RPV and flooding the steam line to the RCIC turbine. It is generally assumed in almost all existing station black-out (SBO) accident analyses that loss of the DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, where it is assumed that the turbine would then be disabled. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models for RCIC system components are needed to understand the extended SBO for BWRs. As part of the effort to develop the next generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike traditional SBO simulations, where mass flow rates are typically given in the input file through time-dependent functions, the real mass flow rates through the turbine and the pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operation curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components for the primary system of a BWR, as well as the safety

  2. Evaluation of hydrological models for scenario analyses: signal-to-noise-ratio between scenario effects and model uncertainty

    Directory of Open Access Journals (Sweden)

    H. Bormann

    2005-01-01

    Full Text Available Many model applications suffer from the fact that, although model application is well known to involve different sources of uncertainty, there is no objective criterion for deciding whether a model is suitable for a particular application. This paper introduces a comparative index between the uncertainty of a model and the change effects of scenario calculations, which enables the modeller to decide objectively whether a model is suitable for scenario analysis studies. The index is called the "signal-to-noise-ratio", and it is applied to an exemplary scenario study performed within the GLOWA-IMPETUS project in Benin. The conceptual UHP model was applied to the upper Ouémé basin. Although model calibration and validation were successful, uncertainties in model parameters and input data could be identified. Applying the "signal-to-noise-ratio" to regional-scale subcatchments of the upper Ouémé, comparing water availability indicators between uncertainty studies and scenario analyses, the UHP model turned out to be suitable for predicting long-term water balances under the present poor data availability and changing environmental conditions in subhumid West Africa.
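The index described above can be sketched as the magnitude of the scenario-induced change (signal) divided by the spread of model predictions under parameter/input uncertainty (noise). This is a paraphrase of the abstract's idea, not the paper's exact formula, and the runoff numbers below are invented for illustration.

```python
import numpy as np

# Ensemble of model runs under present conditions (uncertainty spread),
# e.g. annual runoff in mm/yr from different plausible parameter sets
baseline_runs = np.array([310.0, 298.0, 305.0, 322.0, 295.0])

# Matching ensemble under a climate/land-use scenario (mean shift of -40,
# plus run-to-run variation)
scenario_runs = baseline_runs - 40.0 + np.array([3.0, -2.0, 5.0, -4.0, 1.0])

signal = abs(scenario_runs.mean() - baseline_runs.mean())  # scenario effect
noise = baseline_runs.std(ddof=1)                          # model uncertainty

snr = signal / noise
# snr > 1 suggests the scenario effect exceeds the model's uncertainty,
# i.e. the model can meaningfully discriminate the scenario from noise.
```

With a ratio well above 1, as here, scenario conclusions are defensible despite parameter uncertainty; a ratio near or below 1 would flag the model as unsuitable for that scenario question.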

  3. A model intercomparison analysing the link between column ozone and geopotential height anomalies in January

    Directory of Open Access Journals (Sweden)

    P. Braesicke

    2008-05-01

    Full Text Available A statistical framework to evaluate the performance of chemistry-climate models with respect to the interaction between meteorology and column ozone during northern hemisphere mid-winter, in particular January, is used. Different statistical diagnostics from four chemistry-climate models (E39C, ME4C, UMUCAM, ULAQ) are compared with the ERA-40 re-analysis. First, we analyse vertical coherence in geopotential height anomalies as described by linear correlations between two different pressure levels (30 and 200 hPa) of the atmosphere. In addition, linear correlations between column ozone and geopotential height anomalies at 200 hPa are discussed to motivate a simple picture of the meteorological impacts on column ozone on interannual timescales. Secondly, we discuss characteristic spatial structures in geopotential height and column ozone anomalies as given by their first two empirical orthogonal functions. Finally, we describe the covariance patterns between reconstructed anomalies of geopotential height and column ozone. In general we find good agreement between the models with higher horizontal resolution (E39C, ME4C, UMUCAM) and ERA-40. The Pacific-North American (PNA) pattern emerges as a useful qualitative benchmark for model performance. Models with higher horizontal resolution and a high upper boundary (ME4C and UMUCAM) show good agreement with the PNA tripole derived from ERA-40 data, including the column ozone modulation over the Pacific sector. The model with the lowest horizontal resolution (ULAQ) does not show a classic PNA pattern, and the model with the lowest upper boundary (E39C) does not capture the PNA-related column ozone variations over the Pacific sector. Those discrepancies have to be taken into account when providing confidence intervals for climate change integrations.

  4. PASMet: a web-based platform for prediction, modelling and analyses of metabolic systems.

    Science.gov (United States)

    Sriyudthsak, Kansuporn; Mejia, Ramon Francisco; Arita, Masanori; Hirai, Masami Yokota

    2016-07-01

    PASMet (Prediction, Analysis and Simulation of Metabolic networks) is a web-based platform for proposing and verifying mathematical models to understand the dynamics of metabolism. The advantages of PASMet include user-friendliness and accessibility, which enable biologists and biochemists to easily perform mathematical modelling. PASMet offers a series of user-functions to handle the time-series data of metabolite concentrations. The functions are organised into four steps: (i) Prediction of a probable metabolic pathway and its regulation; (ii) Construction of mathematical models; (iii) Simulation of metabolic behaviours; and (iv) Analysis of metabolic system characteristics. Each function contains various statistical and mathematical methods that can be used independently. Users who may not have enough knowledge of computing or programming can easily and quickly analyse their local data without software downloads, updates or installations. Users only need to upload their files in comma-separated values (CSV) format or enter their model equations directly into the website. Once the time-series data or mathematical equations are uploaded, PASMet automatically performs computation on server-side. Then, users can interactively view their results and directly download them to their local computers. PASMet is freely available with no login requirement at http://pasmet.riken.jp/ from major web browsers on Windows, Mac and Linux operating systems.

  5. Correlation of Klebsiella pneumoniae comparative genetic analyses with virulence profiles in a murine respiratory disease model.

    Directory of Open Access Journals (Sweden)

    Ramy A Fodah

    Full Text Available Klebsiella pneumoniae is a bacterial pathogen of worldwide importance and a significant contributor to multiple disease presentations associated with both nosocomial and community-acquired disease. ATCC 43816 is a well-studied K. pneumoniae strain which is capable of causing an acute respiratory disease in surrogate animal models. In this study, we performed sequencing of the ATCC 43816 genome to support future efforts characterizing genetic elements required for disease. Furthermore, we performed comparative genetic analyses with the previously sequenced genomes of NTUH-K2044 and MGH 78578 to gain an understanding of the conservation of known virulence determinants amongst the three strains. We found that ATCC 43816 and NTUH-K2044 both possess the known virulence determinant for yersiniabactin, as well as a Type 4 secretion system (T4SS), a CRISPR system, and an allantoin catabolism locus, all absent from MGH 78578. While both NTUH-K2044 and MGH 78578 are clinical isolates, little is known about the disease potential of these strains in cell culture and animal models. Thus, we also performed functional analyses in the murine macrophage cell lines RAW264.7 and J774A.1 and found that MGH 78578 (K52 serotype) was internalized at higher levels than ATCC 43816 (K2) and NTUH-K2044 (K1), consistent with previous characterization of the antiphagocytic properties of K1 and K2 serotype capsules. We also examined the three K. pneumoniae strains in a novel BALB/c respiratory disease model and found that ATCC 43816 and NTUH-K2044 are highly virulent (LD50 < 100 CFU) while MGH 78578 is relatively avirulent.

  6. Kinetic analyses and mathematical modeling of primary photochemical and photoelectrochemical processes in plant photosystems.

    Science.gov (United States)

    Vredenberg, Wim

    2011-02-01

    In this paper the model and simulation of primary photochemical and photo-electrochemical reactions in dark-adapted intact plant leaves are presented. A descriptive algorithm has been derived from analyses of variable chlorophyll a fluorescence and P700 oxidation kinetics upon excitation with multi-turnover pulses (MTFs) of variable intensity and duration. These analyses have led to the definition and formulation of rate equations that describe the sequence of primary linear electron transfer (LET) steps in photosystem II (PSII) and of cyclic electron transport (CET) in PSI. The model considers heterogeneity in PSII reaction centers (RCs) associated with the S-states of the OEC and incorporates in the dark-adapted state the presence of a 15-35% fraction of Q(B)-nonreducing RCs that probably is identical with the S₀ fraction. The fluorescence induction algorithm (FIA) in the 10 μs-1 s excitation time range considers a photochemical O-J-D phase, a photo-electrochemical J-I phase and an I-P phase reflecting the response of the variable fluorescence to the electric trans-thylakoid potential generated by the proton pump fuelled by CET in PSI. The photochemical phase incorporates the kinetics associated with the double reduction of the acceptor pair of pheophytin (Phe) and plastoquinone Q(A) [PheQ(A)] in Q(B)-nonreducing RCs and the associated doubling of the variable fluorescence, in agreement with the three-state trapping model (TSTM) of PSII. The decline in fluorescence emission during the so-called SMT in the 1-100 s excitation time range, known as the Kautsky curve, is shown to be associated with a substantial decrease of CET-powered proton efflux from the stroma into the chloroplast lumen through the ATP synthase of the photosynthetic machinery.

  7. 3D Recording for 2D Delivering - The Employment of 3D Models for Studies and Analyses

    Science.gov (United States)

    Rizzi, A.; Baratti, G.; Jiménez, B.; Girardi, S.; Remondino, F.

    2011-09-01

    In the last years, thanks to the advances of surveying sensors and techniques, many heritage sites could be accurately replicated in digital form with very detailed and impressive results. The actual limits are mainly related to hardware capabilities, computation time and the low performance of personal computers. Often, the produced models are not visible on a normal computer and the only solution to easily visualise them is offline, using rendered videos. This kind of 3D representation is useful for digital conservation, divulgation purposes or virtual tourism, where people can visit places otherwise closed for preservation or security reasons. But many more potentialities and possible applications are available using a 3D model. The problem is the ability to handle 3D data, as without adequate knowledge this information is reduced to standard 2D data. This article presents some surveying and 3D modeling experiences within the APSAT project ("Ambiente e Paesaggi dei Siti d'Altura Trentini", i.e. Environment and Landscapes of Upland Sites in Trentino). APSAT is a multidisciplinary project funded by the Autonomous Province of Trento (Italy) with the aim of documenting, surveying, studying, analysing and preserving mountainous and hill-top heritage sites located in the region. The project focuses on theoretical, methodological and technological aspects of the archaeological investigation of mountain landscape, considered as the product of sequences of settlements, parcelling-outs, communication networks, resources, and symbolic places. The mountain environment preserves better than others the traces of hunting and gathering, breeding, agricultural, metallurgical and symbolic activities characterised by different lengths and environmental impacts, from Prehistory to the Modern Period. Therefore the correct surveying and documentation of these heritage sites and material is very important. 
Within the project, the 3DOM unit of FBK is delivering all the surveying and 3D material to

  8. Normalisation genes for expression analyses in the brown alga model Ectocarpus siliculosus

    Directory of Open Access Journals (Sweden)

    Rousvoal Sylvie

    2008-08-01

    Full Text Available Abstract Background Brown algae are multi-cellular plant organisms occupying most of the world's coasts and are essential actors in the constitution of ecological niches at the shoreline. Ectocarpus siliculosus is an emerging model for brown algal research. Its genome has been sequenced, and several tools are being developed to perform analyses at different levels of cell organization, including transcriptomic expression analyses. Several topics, including physiological responses to osmotic stress and to exposure to contaminants and solvents, are being studied in order to better understand the adaptive capacity of brown algae to pollution and environmental changes. A series of genes that can be used to normalise expression analyses is required for these studies. Results We monitored the expression of 13 genes under 21 different culture conditions. These included genes encoding proteins and factors involved in protein translation (ribosomal protein 26S, EF1alpha, IF2A, IF4E), protein degradation (ubiquitin, ubiquitin conjugating enzyme) or folding (cyclophilin), proteins involved in both the structure of the cytoskeleton (tubulin alpha, actin, actin-related proteins) and its trafficking function (dynein), as well as a protein implicated in carbon metabolism (glucose 6-phosphate dehydrogenase). The stability of their expression level was assessed using the Ct range, and by applying both the geNorm and the NormFinder principles of calculation. Conclusion Comparisons of the data obtained with the three methods of calculation indicated that EF1alpha (EF1a) was the best reference gene for normalisation. The normalisation factor should be calculated with at least two genes, alpha tubulin, ubiquitin-conjugating enzyme or actin-related proteins being good partners of EF1a. Our results exclude actin as a good normalisation gene, and, in this, are in agreement with previous studies in other organisms.
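The geNorm principle mentioned above ranks candidate reference genes by a stability measure M: for each gene, the average standard deviation of its pairwise log2 expression ratios against every other candidate (lower M = more stable). A minimal stdlib-only sketch, with toy data and gene labels that are illustrative only:

```python
import math

def genorm_m(values):
    """values: dict gene -> list of linear-scale expression across samples.
    Returns dict gene -> geNorm M value (lower = more stable)."""
    genes = list(values)
    n = len(values[genes[0]])
    def sd(xs):
        m = sum(xs) / len(xs)
        return math.sqrt(sum((x - m) ** 2 for x in xs) / (len(xs) - 1))
    m_scores = {}
    for j in genes:
        sds = []
        for k in genes:
            if k == j:
                continue
            ratios = [math.log2(values[j][i] / values[k][i]) for i in range(n)]
            sds.append(sd(ratios))   # spread of the pairwise log-ratio
        m_scores[j] = sum(sds) / len(sds)
    return m_scores

# Toy data: 'ef1a' and 'tua' co-vary (a stable pair); 'act' fluctuates freely.
data = {
    "ef1a": [1.0, 2.0, 1.5, 1.2],
    "tua":  [2.0, 4.1, 2.9, 2.5],
    "act":  [1.0, 0.2, 3.0, 0.7],
}
scores = genorm_m(data)
```

On this toy input the independently fluctuating gene receives the largest M, mirroring the paper's exclusion of actin as a normalisation gene.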

  9. DESCRIPTION OF MODELING ANALYSES IN SUPPORT OF THE 200-ZP-1 REMEDIAL DESIGN/REMEDIAL ACTION

    Energy Technology Data Exchange (ETDEWEB)

    VONGARGEN BH

    2009-11-03

    The Feasibility Study for the 200-ZP-1 Groundwater Operable Unit (DOE/RL-2007-28) and the Proposed Plan for Remediation of the 200-ZP-1 Groundwater Operable Unit (DOE/RL-2007-33) describe the use of groundwater pump-and-treat technology for the 200-ZP-1 Groundwater Operable Unit (OU) as part of an expanded groundwater remedy. During fiscal year 2008 (FY08), a groundwater flow and contaminant transport (flow and transport) model was developed to support remedy design decisions at the 200-ZP-1 OU. This model was developed because the size and influence of the proposed 200-ZP-1 groundwater pump-and-treat remedy will have a larger areal extent than the current interim remedy, and modeling is required to provide estimates of influent concentrations and contaminant mass removal rates to support the design of the aboveground treatment train. The 200 West Area Pre-Conceptual Design for Final Extraction/Injection Well Network: Modeling Analyses (DOE/RL-2008-56) documents the development of the first version of the MODFLOW/MT3DMS model of the Hanford Site's Central Plateau, as well as the initial application of that model to simulate a potential well field for the 200-ZP-1 remedy (considering only the contaminants carbon tetrachloride and technetium-99). This document focuses on the use of the flow and transport model to identify suitable extraction and injection well locations as part of the 200 West Area 200-ZP-1 Pump-and-Treat Remedial Design/Remedial Action Work Plan (DOE/RL-2008-78). Currently, the model has been developed to the extent necessary to provide approximate results and to lay a foundation for the design basis concentrations that are required in support of the remedial design/remedial action (RD/RA) work plan. The discussion in this document includes the following: (1) Assignment of flow and transport parameters for the model; (2) Definition of initial conditions for the transport model for each simulated contaminant of concern (COC) (i.e., carbon

  10. Assessing the hydrodynamic boundary conditions for risk analyses in coastal areas: a stochastic storm surge model

    Directory of Open Access Journals (Sweden)

    T. Wahl

    2011-11-01

    Full Text Available This paper describes a methodology to stochastically simulate a large number of storm surge scenarios (here: 10 million). The applied model is computationally very cheap and will contribute to improving the overall results from integrated risk analyses in coastal areas. Initially, the observed storm surge events from the tide gauges of Cuxhaven (located in the Elbe estuary) and Hörnum (located in the southeast of Sylt Island) are parameterised by taking into account 25 parameters (19 sea level parameters and 6 time parameters). Throughout the paper, the total water levels are considered. The astronomical tides are semidiurnal in the investigation area, with a tidal range >2 m. The second step of the stochastic simulation consists in fitting parametric distribution functions to the data sets resulting from the parameterisation. The distribution functions are then used to run Monte Carlo simulations. Based on the simulation results, a large number of storm surge scenarios are reconstructed. Parameter interdependencies are considered and different filter functions are applied to avoid inconsistencies. Storm surge scenarios, which are of interest for risk analyses, can easily be extracted from the results.
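As a toy analogue of the fit-then-simulate procedure (the actual model fits 25 parameters and applies interdependency filters), the sketch below fits a single Gumbel distribution to illustrative surge peaks by the method of moments and draws Monte Carlo scenarios via the inverse CDF. The observed values are invented for illustration:

```python
import math, random

def fit_gumbel(peaks):
    """Method-of-moments fit of a Gumbel distribution (location mu, scale beta)."""
    n = len(peaks)
    mean = sum(peaks) / n
    var = sum((x - mean) ** 2 for x in peaks) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi          # Gumbel variance = pi^2 beta^2 / 6
    mu = mean - 0.5772156649 * beta                # Euler-Mascheroni constant
    return mu, beta

def sample_gumbel(mu, beta, n, rng):
    """Monte Carlo draws via the inverse CDF: x = mu - beta*ln(-ln(u))."""
    return [mu - beta * math.log(-math.log(rng.random())) for _ in range(n)]

rng = random.Random(42)
observed = [2.1, 2.4, 1.9, 3.0, 2.2, 2.7, 2.0, 2.5]   # illustrative surge peaks (m)
mu, beta = fit_gumbel(observed)
scenarios = sample_gumbel(mu, beta, 10_000, rng)
```

Because the fit matches the first two moments, the mean of a large scenario set should land close to the observed mean, which is an easy consistency check before using the scenarios downstream.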

  11. Evaluating and Refining the Conceptual Model Used in the Study of Health and Activity in Preschool Environments (SHAPES) Intervention.

    Science.gov (United States)

    Saunders, Ruth P; Pfeiffer, Karin; Brown, William H; Howie, Erin K; Dowda, Marsha; O'Neill, Jennifer R; McIver, Kerry; Pate, Russell R

    2017-01-01

    This study investigated the utility of the Study of Health and Activity in Preschool Environments (SHAPES) conceptual model, which targeted physical activity (PA) behavior in preschool children, by examining the relationship between implementation monitoring data and child PA during the school day. We monitored implementation completeness and fidelity based on multiple elements identified in the conceptual model. Comparing high-implementing, low-implementing, and control groups revealed no association between implementation and outcomes. We performed post hoc analyses, using process data, to refine our conceptual model's depiction of an effective preschool PA-promoting environment. Results suggest that a single component of the original four-component conceptual model, providing opportunities for moderate-to-vigorous physical activity through recess for 4-year-old children in preschool settings, may be a good starting place for increasing moderate-to-vigorous physical activity. Interventions that are implemented with optimal levels of completeness and fidelity are more likely to achieve behavior change if they are based on accurate conceptual models. Examining the mechanisms through which an intervention produces its effects, as articulated in the conceptual model that guides it, is particularly important for environmentally focused interventions because they are guided by emerging frameworks. The results of this study underscore the utility of using implementation monitoring data to examine the conceptual model on which the intervention is based.

  12. Models for regionalizing economic data and their applications within the scope of forensic disaster analyses

    Science.gov (United States)

    Schmidt, Hanns-Maximilian; Wiens, Marcus; Schultmann, Frank

    2015-04-01

    The impact of natural hazards on the economic system can be observed in many different regions all over the world. Once the local economic structure is hit by an event, direct costs instantly occur. However, the disturbance on a local level (e.g. parts of a city or industries along a river bank) might also cause monetary damages in other, indirectly affected sectors. If the impact of an event is strong, these damages are likely to cascade and spread even on an international scale (e.g. the eruption of Eyjafjallajökull and its impact on the automotive sector in Europe). In order to determine these special impacts, one has to gain insights into the directly hit economic structure before being able to calculate the side effects. Especially regarding the development of a model for near real-time forensic disaster analyses, any simulation needs to be based on data that is rapidly available or easily computed. Therefore, we investigated commonly used or recently discussed methodologies for regionalizing economic data. Surprisingly, even for German federal states there is no official input-output data available that can be used, although it would provide detailed figures concerning economic interrelations between different industry sectors. In the case of highly developed countries, such as Germany, we focus on models for regionalizing the nationwide input-output table, which is usually available at the national statistical offices. However, when it comes to developing countries (e.g. South-East Asia), the data quality and availability is usually much poorer. In this case, other sources need to be found for the proper assessment of regional economic performance. We developed an indicator-based model that can fill this gap because of its flexibility regarding the level of aggregation and the composability of different input parameters. 
Our poster presentation brings up a literature review and a summary on potential models that seem to be useful for this specific task

  13. Modifications in the AA5083 Johnson-Cook Material Model for Use in Friction Stir Welding Computational Analyses

    Science.gov (United States)

    2011-12-30

    Authors: M. Grujicic, B. Pandurangan, C.-F. Yen, B. A. Cheeseman (Clemson University). Keywords: AA5083, friction stir welding, Johnson-Cook material model. Abstract: The Johnson-Cook strength material model is frequently used in finite-element

  14. A model for analysing factors which may influence quality management procedures in higher education

    Directory of Open Access Journals (Sweden)

    Cătălin MAICAN

    2015-12-01

    Full Text Available In all universities, the Office for Quality Assurance defines the procedure for assessing the performance of the teaching staff, with a view to establishing students' perception of the teachers' activity in terms of the quality of the teaching process, the relationship with the students, and the assistance provided for learning. The present paper aims at creating a combined model for evaluation, based on Data Mining statistical methods: starting from the findings revealed by the evaluations teachers performed on students, and using cluster analysis and discriminant analysis, we identified the subjects which produced significant differences between students' grades; these subjects were subsequently subjected to an evaluation by students. The results of these analyses allowed the formulation of measures for enhancing the quality of the evaluation process.

  15. Evaluation of Temperature and Humidity Profiles of Unified Model and ECMWF Analyses Using GRUAN Radiosonde Observations

    Directory of Open Access Journals (Sweden)

    Young-Chan Noh

    2016-07-01

    Full Text Available Temperature and water vapor profiles from the Korea Meteorological Administration (KMA) and the United Kingdom Met Office (UKMO) Unified Model (UM) data assimilation systems and from reanalysis fields of the European Centre for Medium-Range Weather Forecasts (ECMWF) were assessed using collocated radiosonde observations from the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) for January–December 2012. The motivation was to examine the overall performance of data assimilation outputs. The difference statistics of the collocated model outputs versus the radiosonde observations indicated good agreement for temperature amongst the datasets, while less agreement was found for relative humidity. A comparison of the UM outputs from the UKMO and KMA revealed that they are similar to each other. The introduction of the new version of UM into the KMA in May 2012 resulted in improved analysis performance, particularly for the moisture field. On the other hand, ECMWF reanalysis data showed slightly reduced performance for relative humidity compared with the UM, with a significant humid bias in the upper troposphere. ECMWF reanalysis temperature fields showed nearly the same performance as the two UM analyses. The root mean square differences (RMSDs) of the relative humidity for the three models were larger for more humid conditions, suggesting that humidity forecasts are less reliable under these conditions.

  16. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    Directory of Open Access Journals (Sweden)

    Sung-Chien Lin

    2014-07-01

    Full Text Available In this study, we used the approach of topic modeling to uncover the possible structure of research topics in the field of Informetrics, to explore the distribution of the topics over the years, and to compare the core journals. In order to infer the structure of the topics in the field, the data of the papers published in the Journal of Informetrics and Scientometrics during 2007 to 2013 were retrieved from the Web of Science database as input for the topic modeling approach. The results of this study show that when the number of topics was set to 10, the topic model had the smallest perplexity. Although the data scope and analysis methods differ from those of previous studies, the topics generated in this study are consistent with the results produced by expert analyses. Empirical case studies and measurements of bibliometric indicators were considered important in every year of the analysed period, and the field was increasingly stable. Both core journals paid broad attention to all of the topics in the field of Informetrics. The Journal of Informetrics put particular emphasis on the construction and application of bibliometric indicators, while Scientometrics focused on the evaluation and the factors of productivity of countries, institutions, domains, and journals.
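Perplexity, used above to select the number of topics, is simply the exponential of the negative mean log-likelihood per token; lower values mean the model predicts held-out text better. A minimal illustration of the definition (not the authors' LDA setup):

```python
import math

def perplexity(log_probs):
    """exp of the negative mean log-likelihood per token; lower is better."""
    return math.exp(-sum(log_probs) / len(log_probs))

# A uniform model over 4 word types assigns log(1/4) to every held-out token,
# so its perplexity equals the vocabulary size, 4.
lp = [math.log(0.25)] * 4
ppl = perplexity(lp)
```

In practice one computes this for each candidate topic count on held-out documents and keeps the count with the smallest value, which is how the study arrived at 10 topics.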

  17. Testing a dual-systems model of adolescent brain development using resting-state connectivity analyses.

    Science.gov (United States)

    van Duijvenvoorde, A C K; Achterberg, M; Braams, B R; Peters, S; Crone, E A

    2016-01-01

    The current study aimed to test a dual-systems model of adolescent brain development by studying changes in intrinsic functional connectivity within and across networks typically associated with cognitive-control and affective-motivational processes. To this end, resting-state and task-related fMRI data were collected of 269 participants (ages 8-25). Resting-state analyses focused on seeds derived from task-related neural activation in the same participants: the dorsal lateral prefrontal cortex (dlPFC) from a cognitive rule-learning paradigm and the nucleus accumbens (NAcc) from a reward paradigm. Whole-brain seed-based resting-state analyses showed an age-related increase in dlPFC connectivity with the caudate and thalamus, and an age-related decrease in connectivity with the (pre)motor cortex. NAcc connectivity showed a strengthening of connectivity with the dorsal anterior cingulate cortex (ACC) and subcortical structures such as the hippocampus, and a specific age-related decrease in connectivity with the ventral medial PFC (vmPFC). Behavioral measures from both functional paradigms correlated with resting-state connectivity strength with their respective seed. That is, age-related change in learning performance was mediated by connectivity between the dlPFC and thalamus, and age-related change in winning pleasure was mediated by connectivity between the NAcc and vmPFC. These patterns indicate (i) strengthening of connectivity between regions that support control and learning, (ii) more independent functioning of regions that support motor and control networks, and (iii) more independent functioning of regions that support motivation and valuation networks with age. These results are interpreted vis-à-vis a dual-systems model of adolescent brain development.

  18. Comparative modeling analyses of Cs-137 fate in the rivers impacted by Chernobyl and Fukushima accidents

    Energy Technology Data Exchange (ETDEWEB)

    Zheleznyak, M.; Kivva, S. [Institute of Environmental Radioactivity, Fukushima University (Japan)

    2014-07-01

    The consequences of the two largest nuclear accidents of recent decades, at the Chernobyl Nuclear Power Plant (ChNPP, 1986) and at the Fukushima Daiichi NPP (FDNPP, 2011), clearly demonstrated that radioactive contamination of water bodies in the vicinity of an NPP and on the waterways from it (the river-reservoir system after the Chernobyl accident, and rivers and coastal marine waters after the Fukushima accident) was in both cases one of the main sources of public concern about the accident consequences. Water contamination carries a higher weight in the public perception of the accident consequences than the actual fraction of doses received via aquatic pathways relative to other dose components; this is a specificity of the public perception of environmental contamination. This psychological phenomenon, confirmed after both accidents, provides supplementary arguments that reliable simulation and prediction of radionuclide dynamics in water and sediments is an important part of post-accident radioecological research. The purpose of the research is to use the experience of the modeling activities conducted over more than 25 years within the Chernobyl-affected Pripyat River and Dnieper River watershed, together with data from new monitoring studies in Japan of the Abukuma River (the largest in the region, with a watershed area of 5400 km{sup 2}), Kuchibuto River, Uta River, Niita River, Natsui River and Same River, as well as studies on the specifics of the 'water-sediment' {sup 137}Cs exchanges in this area, to refine the 1-D model RIVTOX and the 2-D model COASTOX and increase the predictive power of the modeling technologies. The results of the modeling studies are applied for more accurate prediction of water/sediment radionuclide contamination of rivers and reservoirs in the Fukushima Prefecture and for comparative analyses of the efficiency of the post-accident measures to diminish the contamination of the water bodies.

  19. Development of microbial-enzyme-mediated decomposition model parameters through steady-state and dynamic analyses

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Gangsheng [ORNL; Post, Wilfred M [ORNL; Mayes, Melanie [ORNL

    2013-01-01

    We developed a Microbial-ENzyme-mediated Decomposition (MEND) model, based on Michaelis-Menten kinetics, that describes the dynamics of physically defined pools of soil organic carbon (SOC). These include particulate, mineral-associated, and dissolved organic matter (POC, MOC, and DOC, respectively), microbial biomass, and associated exoenzymes. The ranges and/or distributions of parameters were determined by both analytical steady-state and dynamic analyses with SOC data from the literature. We used an improved multi-objective parameter sensitivity analysis (MOPSA) to identify the most important parameters for the full model: maintenance of microbial biomass, turnover and synthesis of enzymes, and carbon use efficiency (CUE). The model predicted that an increase of 2 °C (baseline temperature = 12 °C) caused the pools of POC-Cellulose, MOC, and total SOC to increase with dynamic CUE and decrease with constant CUE, as indicated by the 50% confidence intervals. Regardless of dynamic or constant CUE, the pool sizes of POC, MOC, and total SOC varied from −8% to 8% under +2 °C. The scenario analysis using a single parameter set indicates that higher temperature with dynamic CUE might result in greater net increases in both POC-Cellulose and MOC pools. Different dynamics of various SOC pools reflected the catalytic functions of specific enzymes targeting specific substrates and the interactions between microbes, enzymes, and SOC. With the feasible parameter values estimated in this study, models incorporating fundamental principles of microbial-enzyme dynamics can lead to simulation results qualitatively different from traditional models with fast/slow/passive pools.
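The Michaelis-Menten core of such a model can be sketched in a few lines: a decomposition flux proportional to microbial biomass and saturating in substrate, with growth scaled by CUE and reduced by maintenance. All parameter values below are illustrative assumptions, not the calibrated MEND values:

```python
# Toy forward run of a microbial-enzyme-mediated decomposition step.
# C: substrate carbon pool; B: microbial biomass. Illustrative parameters only.

def step(C, B, vmax=1.0, km=50.0, cue=0.4, maint=0.01, dt=0.1):
    uptake = vmax * B * C / (km + C)       # Michaelis-Menten decomposition flux
    dC = -uptake * dt
    dB = (cue * uptake - maint * B) * dt   # growth minus maintenance respiration
    return C + dC, B + dB

C, B = 100.0, 2.0
for _ in range(1000):
    C, B = step(C, B)
```

Because only the CUE fraction of uptake becomes biomass and maintenance respires carbon away, the summed pools can only shrink over time; that loss of total carbon is the respiration flux a full model would track explicitly.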

  20. Genomic analyses with biofilter 2.0: knowledge driven filtering, annotation, and model development.

    Science.gov (United States)

    Pendergrass, Sarah A; Frase, Alex; Wallace, John; Wolfe, Daniel; Katiyar, Neerja; Moore, Carrie; Ritchie, Marylyn D

    2013-12-30

    The ever-growing wealth of biological information available through multiple comprehensive database repositories can be leveraged for advanced analysis of data. We have now extensively revised and updated the multi-purpose software tool Biofilter, which allows researchers to annotate and/or filter data as well as generate gene-gene interaction models based on existing biological knowledge. Biofilter now has the Library of Knowledge Integration (LOKI) for accessing and integrating existing comprehensive database information, including more flexibility for how ambiguity of gene identifiers is handled. We have also updated the way importance scores for interaction models are generated. In addition, Biofilter 2.0 now works with a range of types and formats of data, including single nucleotide polymorphism (SNP) identifiers, rare variant identifiers, base pair positions, gene symbols, genetic regions, and copy number variant (CNV) location information. Biofilter provides a convenient single interface for accessing multiple publicly available human genetic data sources that have been compiled in the supporting database of LOKI. Information within LOKI includes genomic locations of SNPs and genes, as well as known relationships among genes and proteins such as interaction pairs, pathways and ontological categories. Via Biofilter 2.0 researchers can: • Annotate genomic location or region based data, such as results from association studies or CNV analyses, with relevant biological knowledge for deeper interpretation; • Filter genomic location or region based data on biological criteria, such as filtering a series of SNPs to retain only SNPs present in specific genes within specific pathways of interest; • Generate predictive models for gene-gene, SNP-SNP, or CNV-CNV interactions based on biological information, with priority for models to be tested based on biological relevance, thus narrowing the search space and reducing multiple hypothesis-testing. Biofilter is a software

  1. Controls on Yardang Morphology: Insights from Field Measurements, Lidar Topographic Analyses, and Numerical Modeling

    Science.gov (United States)

    Pelletier, J. D.; Kapp, P. A.

    2014-12-01

    Yardangs are streamlined bedforms sculpted by the wind and wind-blown sand. They can form as relatively resistant exposed rocks erode more slowly than surrounding exposed rocks, thus causing the more resistant rocks to stand higher in the landscape and deflect the wind and wind-blown sand into adjacent troughs in a positive feedback. How this feedback gives rise to streamlined forms that locally have a consistent size is not well understood theoretically. In this study we combine field measurements in the yardangs of Ocotillo Wells SVRA with analyses of airborne and terrestrial lidar datasets and numerical modeling to quantify and understand the controls on yardang morphology. The classic model for yardang morphology is that they evolve to an ideal 4:1 length-to-width aspect ratio that minimizes aerodynamic drag. We show using computational fluid dynamics (CFD) modeling that this model is incorrect: the 4:1 aspect ratio is the value corresponding to minimum drag for free bodies, i.e. obstacles around which air flows on all sides. Yardangs, in contrast, are embedded in Earth's surface. For such rough streamlined half-bodies, the aspect ratio corresponding to minimum drag is larger than 20:1. As an alternative to the minimum-drag model, we propose that the aspect ratio of yardangs not significantly influenced by structural controls is controlled by the angle of dispersion of the aerodynamic jet created as deflected wind and wind-blown sand exits the troughs between incipient yardang noses. Aerodynamic jets have a universal dispersion angle of 11.8 degrees, thus predicting a yardang aspect ratio of ~5:1. We developed a landscape evolution model that combines the physics of boundary layer flow with aeolian saltation and bedrock erosion to form yardangs with a range of sizes and aspect ratios similar to those observed in nature. 
Yardangs with aspect ratios both larger and smaller than 5:1 occur in the model since the strike and dip of the resistant rock unit also exerts
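The ~5:1 prediction follows directly from the quoted dispersion angle if one reads the geometry as trough width growing like length × tan θ, so that length/width ≈ 1/tan θ. A one-line check under that assumption (the geometric reading is ours, not stated explicitly in the record):

```python
import math

# Aspect ratio implied by a jet dispersion angle of 11.8 degrees, assuming
# trough width grows like length * tan(theta).
theta = math.radians(11.8)
aspect_ratio = 1.0 / math.tan(theta)   # roughly 4.8, i.e. ~5:1
```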

  2. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    Science.gov (United States)

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

    Certain contaminants may travel faster through soils when they are sorbed to subsurface colloidal particles. Indeed, subsurface colloids may act as carriers of some contaminants, accelerating their translocation through the soil into the water table. This phenomenon is known as colloid-facilitated contaminant transport. It plays a significant role in contaminant transport in soils and has been recognized as a source of groundwater contamination. From a mechanistic point of view, the attachment/detachment of colloidal particles from the soil matrix or from the air-water interface and the straining process may modify the hydraulic properties of the porous media. Šimůnek et al. (2006) developed a model that can simulate colloid-facilitated contaminant transport in variably saturated porous media. The model is based on the solution of a modified advection-dispersion equation that accounts for several processes, namely: straining, exclusion and attachment/detachment kinetics of colloids through the soil matrix. The solutions of these governing partial differential equations are obtained using a standard Galerkin-type, linear finite element scheme, implemented in the HYDRUS-2D/3D software (Šimůnek et al., 2012). Modeling colloid transport through the soil and the interaction of colloids with the soil matrix and other contaminants is complex and requires the characterization of many model parameters. In practice, it is very difficult to assess actual transport parameter values, so they are often calibrated. However, before calibration, one needs to know which parameters have the greatest impact on output variables. This kind of information can be obtained through a sensitivity analysis of the model. The main objective of this work is to perform local and global sensitivity analyses of the colloid-facilitated contaminant transport module of HYDRUS. Sensitivity analysis was performed in two steps: (i) we applied a screening method based on Morris' elementary
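The screening method named at the end of the abstract (Morris' elementary effects) perturbs one parameter at a time from random starting points and averages the absolute response per unit step; parameters with large mean effects are flagged for calibration. A simplified stdlib-only sketch on a toy model (not the HYDRUS module itself; the trajectory design here is reduced to independent one-at-a-time steps):

```python
import random

def morris_screening(f, bounds, n_traj=20, delta=0.1, seed=1):
    """Mean absolute elementary effect per parameter (one-at-a-time steps).
    bounds: list of (lo, hi) per parameter; a simplified Morris-style design."""
    rng = random.Random(seed)
    k = len(bounds)
    mu_star = [0.0] * k
    for _ in range(n_traj):
        # Sample a base point leaving room for the +delta step in each direction.
        x = [lo + rng.random() * ((hi - lo) * (1.0 - delta)) for lo, hi in bounds]
        fx = f(x)
        for i in range(k):
            lo, hi = bounds[i]
            xi = list(x)
            xi[i] += delta * (hi - lo)                 # perturb one parameter
            mu_star[i] += abs((f(xi) - fx) / delta)    # elementary effect
    return [m / n_traj for m in mu_star]

# Toy model: the output is far more sensitive to x0 than to x1.
effects = morris_screening(lambda x: 10.0 * x[0] + 0.1 * x[1],
                           bounds=[(0, 1), (0, 1)])
```

For this linear toy model the mean effects recover the coefficients, so the ranking (x0 dominant) is exactly what a screening step is meant to deliver before an expensive global analysis.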

  3. A Hidden Markov model web application for analysing bacterial genomotyping DNA microarray experiments.

    Science.gov (United States)

    Newton, Richard; Hinds, Jason; Wernisch, Lorenz

    2006-01-01

    Whole genome DNA microarray genomotyping experiments compare the gene content of different species or strains of bacteria. A statistical approach to analysing the results of these experiments was developed, based on a Hidden Markov model (HMM), which takes adjacency of genes along the genome into account when calling genes present or absent. The model was implemented in the statistical language R and applied to three datasets. The method is numerically stable with good convergence properties. Error rates are reduced compared with approaches that ignore spatial information. Moreover, the HMM circumvents a problem encountered in a conventional analysis: determining the cut-off value to use to classify a gene as absent. An Apache Struts web interface for the R script was created for the benefit of users unfamiliar with R. The application may be found at http://hmmgd.cryst.bbk.ac.uk/hmmgd. The source code illustrating how to run R scripts from an Apache Struts-based web application is available from the corresponding author on request. The application is also available for local installation if required.
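
    The idea of exploiting gene adjacency can be sketched with a two-state HMM decoded by the Viterbi algorithm. The emission means, transition probabilities, and observations below are illustrative, not the values fitted in the paper (which uses R, not Python):

    ```python
    import numpy as np

    # States: 0 = gene present, 1 = gene absent/divergent.
    # Observations: log2 ratios along the genome; present genes cluster
    # near 0, absent genes near -2 (illustrative Gaussian emissions).

    def viterbi(obs, trans, means, sds, init):
        """Most likely state path for a Gaussian-emission HMM (log domain)."""
        n, k = len(obs), len(init)
        def logpdf(x, m, s):
            return -0.5 * ((x - m) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))
        logd = np.zeros((n, k))
        back = np.zeros((n, k), dtype=int)
        for j in range(k):
            logd[0, j] = np.log(init[j]) + logpdf(obs[0], means[j], sds[j])
        for t in range(1, n):
            for j in range(k):
                scores = logd[t - 1] + np.log(trans[:, j])
                back[t, j] = np.argmax(scores)
                logd[t, j] = scores[back[t, j]] + logpdf(obs[t], means[j], sds[j])
        path = [int(np.argmax(logd[-1]))]
        for t in range(n - 1, 0, -1):
            path.append(back[t, path[-1]])
        return path[::-1]

    obs = np.array([0.1, -0.2, 0.0, -2.1, -1.8, -2.3, 0.2, 0.1])
    trans = np.array([[0.9, 0.1],    # adjacent genes tend to share a state,
                      [0.1, 0.9]])   # which smooths isolated noisy spots
    states = viterbi(obs, trans, means=[0.0, -2.0], sds=[0.5, 0.5],
                     init=[0.5, 0.5])
    ```

    Because the transition matrix penalizes state switches, a single noisy probe inside a run of present genes is less likely to be called absent, which is how the spatial information reduces error rates relative to a per-gene cut-off.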

  4. Global isoprene emissions estimated using MEGAN, ECMWF analyses and a detailed canopy environment model

    Directory of Open Access Journals (Sweden)

    J.-F. Müller

    2008-03-01

    Full Text Available The global emissions of isoprene are calculated at 0.5° resolution for each year between 1995 and 2006, based on the MEGAN (Model of Emissions of Gases and Aerosols from Nature, version 2) model (Guenther et al., 2006) and a detailed multi-layer canopy environment model for the calculation of leaf temperature and visible radiation fluxes. The calculation is driven by meteorological fields – air temperature, cloud cover, downward solar irradiance, windspeed, volumetric soil moisture in 4 soil layers – provided by analyses of the European Centre for Medium-Range Weather Forecasts (ECMWF). The estimated annual global isoprene emission ranges between 374 Tg (in 1996) and 449 Tg (in 1998 and 2005), for an average of ca. 410 Tg/year over the whole period, i.e. about 30% less than the standard MEGAN estimate (Guenther et al., 2006). This difference is due, to a large extent, to the impact of the soil moisture stress factor, which is found here to decrease the global emissions by more than 20%. In qualitative agreement with past studies, high annual emissions are found to be generally associated with El Niño events. The emission inventory is evaluated against flux measurement campaigns at Harvard Forest (Massachusetts) and Tapajós (Amazonia), showing that the model can capture quite well the short-term variability of emissions, but that it fails to reproduce the observed seasonal variation at the tropical rainforest site, with largely overestimated wet season fluxes. The comparison of the HCHO vertical columns calculated by a chemistry and transport model (CTM) with HCHO distributions retrieved from space provides useful insights on tropical isoprene emissions. For example, the relatively low emissions calculated over Western Amazonia (compared to the corresponding estimates in the inventory of Guenther et al., 1995) are validated by the excellent agreement found between the CTM and HCHO data over this region. The parameterized impact of the soil moisture

  5. Stream Tracer Integrity: Comparative Analyses of Rhodamine-WT and Sodium Chloride through Transient Storage Modeling

    Science.gov (United States)

    Smull, E. M.; Wlostowski, A. N.; Gooseff, M. N.; Bowden, W. B.; Wollheim, W. M.

    2013-12-01

    Solute transport in natural channels describes the transport of water and dissolved matter through a river reach of interest. Conservative tracers allow us to label a parcel of stream water, such that we can track its movement downstream through space and time. A transient storage model (TSM) can be fit to the breakthrough curve (BTC) following a stream tracer experiment, as a way to quantify advection, dispersion, and transient storage processes. Arctic streams and rivers, in particular, are continuously underlain by permafrost, which provides for a simplified surface water-groundwater exchange. Sodium chloride (NaCl) and Rhodamine-WT (RWT) are widely used tracers, and differences between the two in conservative behavior and detection limits have been noted in small-scale field and laboratory studies. This study seeks to further this understanding by applying the OTIS model to NaCl and RWT BTC data from a field study on the Kuparuk River, Alaska, at varying flow rates. There are two main questions to be answered: 1) Do differences in NaCl and RWT manifest in OTIS parameter values? 2) Are the OTIS model results reliable for NaCl, RWT, or both? Fieldwork was performed in the summer of 2012 on the Kuparuk River, and modeling was performed using a modified OTIS framework, which provided for parameter optimization and further global sensitivity analyses. The results of this study will contribute to the greater body of literature surrounding Arctic stream hydrology and will inform methodology for future tracer field studies. Additionally, the modeling work will provide an analysis of OTIS parameter identifiability, and assess stream tracer integrity (i.e. how well the BTC data represents the system) and its relation to TSM performance (i.e. how well the TSM can find a unique fit to the BTC data). The quantitative tools used can be applied to other solute transport studies, to better understand potential deviations in model outcome due to stream tracer choice and
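
    For reference, the standard OTIS formulation (Runkel, 1998) couples the in-stream concentration $C$ to a storage-zone concentration $C_S$; written here from the published form but without lateral inflow terms, so treat it as a sketch:

    ```latex
    \frac{\partial C}{\partial t}
      = -\frac{Q}{A}\frac{\partial C}{\partial x}
      + \frac{1}{A}\frac{\partial}{\partial x}\!\left( A D \frac{\partial C}{\partial x} \right)
      + \alpha \left( C_S - C \right),
    \qquad
    \frac{\partial C_S}{\partial t} = \alpha \frac{A}{A_S}\left( C - C_S \right)
    ```

    Here $Q$ is discharge, $A$ and $A_S$ are the main-channel and storage-zone cross-sectional areas, $D$ is the dispersion coefficient, and $\alpha$ is the storage exchange coefficient; the parameters typically estimated from the BTC are $D$, $A_S$, and $\alpha$, which is where identifiability questions such as those raised above arise.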

  6. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    Directory of Open Access Journals (Sweden)

    Varsha H. Rallapalli

    2016-10-01

    Full Text Available Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRENV) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S) and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRENV has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding to speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRENV. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating feasibility of neural SNRENV computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence in individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of the neural SNRENV in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.
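
    As background, the sEPSM decision metric in each modulation band $k$ is the envelope power of the noisy speech in excess of the noise floor, relative to that floor; this is quoted from the general sEPSM literature (Jørgensen and Dau's formulation) rather than from this abstract, so treat the exact form as a sketch:

    ```latex
    \mathrm{SNR}_{\mathrm{env},k}
      = \frac{P_{\mathrm{env},S+N,k} - P_{\mathrm{env},N,k}}{P_{\mathrm{env},N,k}},
    \qquad
    \mathrm{SNR}_{\mathrm{env}} = \left( \sum_{k} \mathrm{SNR}_{\mathrm{env},k}^{2} \right)^{1/2}
    ```

    The neural analysis described above replaces the acoustic envelope powers $P_{\mathrm{env}}$ with modulation-band estimates derived from shuffled correlograms of model spike trains.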

  7. Dynamic causal modelling of brain-behaviour relationships.

    Science.gov (United States)

    Rigoux, L; Daunizeau, J

    2015-08-15

    In this work, we expose a mathematical treatment of brain-behaviour relationships, which we coin behavioural Dynamic Causal Modelling or bDCM. This approach aims at decomposing the brain's transformation of stimuli into behavioural outcomes, in terms of the relative contribution of brain regions and their connections. In brief, bDCM places the brain at the interplay between stimulus and behaviour: behavioural outcomes arise from coordinated activity in (hidden) neural networks, whose dynamics are driven by experimental inputs. Estimating neural parameters that control network connectivity and plasticity effectively performs a neurobiologically-constrained approximation to the brain's input-outcome transform. In other words, neuroimaging data essentially serves to enforce the realism of bDCM's decomposition of input-output relationships. In addition, post-hoc artificial lesion analyses allow us to predict induced behavioural deficits and quantify the importance of network features for funnelling input-output relationships. This is important because it enables one to bridge the gap with neuropsychological studies of brain-damaged patients. We demonstrate the face validity of the approach using Monte-Carlo simulations, and its predictive validity using empirical fMRI/behavioural data from an inhibitory control task. Lastly, we discuss promising applications of this work, including the assessment of functional degeneracy (in the healthy brain) and the prediction of functional recovery after lesions (in neurological patients).
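
    For context, the neural state equation underlying standard DCM, which bDCM extends with an additional observation mapping from hidden states to behavioural outcomes, is the bilinear form:

    ```latex
    \dot{x}(t) = \Bigl( A + \sum_{j} u_j(t)\, B^{(j)} \Bigr) x(t) + C\, u(t)
    ```

    Here $x$ are the hidden neural states, $u$ the experimental inputs, $A$ the fixed (endogenous) connectivity among regions, $B^{(j)}$ the modulation of connections by input $u_j$, and $C$ the direct driving inputs. An artificial lesion then amounts to zeroing selected entries of $A$ or $B^{(j)}$ and simulating the predicted behavioural response.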

  8. A comparison of two coaching approaches to enhance implementation of a recovery-oriented service model.

    Science.gov (United States)

    Deane, Frank P; Andresen, Retta; Crowe, Trevor P; Oades, Lindsay G; Ciarrochi, Joseph; Williams, Virginia

    2014-09-01

    Moving to recovery-oriented service provision in mental health may entail retraining existing staff, as well as training new staff. This represents a substantial burden on organisations, particularly since transfer of training into practice is often poor. Follow-up supervision and/or coaching have been found to improve the implementation and sustainment of new approaches. We compared the effect of two coaching conditions, skills-based and transformational coaching, on the implementation of a recovery-oriented model following training. Training followed by coaching led to significant sustained improvements in the quality of care planning in accordance with the new model over the 12-month study period. No interaction effect was observed between the two conditions. However, post hoc analyses suggest that transformational coaching warrants further exploration. The results support the provision of supervision in the form of coaching in the implementation of a recovery-oriented service model, and suggest the need to better elucidate the mechanisms within different coaching approaches that might contribute to improved care.

  9. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    NARCIS (Netherlands)

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input subs
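
    The traditional linear IO model that the NLIO approach generalizes can be sketched in a few lines. The two sectors, technical coefficients, and demand shock below are invented for illustration:

    ```python
    import numpy as np

    # Two-sector illustration; rows/columns = [accommodation, food services]
    A = np.array([[0.10, 0.20],   # technical coefficients: input of sector i
                  [0.30, 0.05]])  # required per unit of output of sector j
    d = np.array([100.0, 50.0])   # final-demand shock, e.g. tourist spending

    # Traditional (linear) IO assumes fixed coefficients and perfectly
    # elastic supply: total output x solves x = A x + d,
    # i.e. x = (I - A)^{-1} d
    x = np.linalg.solve(np.eye(2) - A, d)
    multiplier = x.sum() / d.sum()   # total output per unit of final demand
    ```

    The NLIO model relaxes exactly these assumptions (fixed input proportions and unconstrained supply), allowing price-induced input substitution, which is why its impact estimates can differ from the linear multipliers above.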

  10. Molecular analyses of neurogenic defects in a human pluripotent stem cell model of fragile X syndrome.

    Science.gov (United States)

    Boland, Michael J; Nazor, Kristopher L; Tran, Ha T; Szücs, Attila; Lynch, Candace L; Paredes, Ryder; Tassone, Flora; Sanna, Pietro Paolo; Hagerman, Randi J; Loring, Jeanne F

    2017-01-29

    New research suggests that common pathways are altered in many neurodevelopmental disorders including autism spectrum disorder; however, little is known about early molecular events that contribute to the pathology of these diseases. The study of monogenic, neurodevelopmental disorders with a high incidence of autistic behaviours, such as fragile X syndrome, has the potential to identify genes and pathways that are dysregulated in autism spectrum disorder as well as fragile X syndrome. In vitro generation of human disease-relevant cell types provides the ability to investigate aspects of disease that are impossible to study in patients or animal models. Differentiation of human pluripotent stem cells recapitulates development of the neocortex, an area affected in both fragile X syndrome and autism spectrum disorder. We have generated induced human pluripotent stem cells from several individuals clinically diagnosed with fragile X syndrome and autism spectrum disorder. When differentiated to dorsal forebrain cell fates, our fragile X syndrome human pluripotent stem cell lines exhibited reproducible aberrant neurogenic phenotypes. Using global gene expression and DNA methylation profiling, we have analysed the early stages of neurogenesis in fragile X syndrome human pluripotent stem cells. We discovered aberrant DNA methylation patterns at specific genomic regions in fragile X syndrome cells, and identified dysregulated gene- and network-level correlates of fragile X syndrome that are associated with developmental signalling, cell migration, and neuronal maturation. Integration of our gene expression and epigenetic analysis identified altered epigenetic-mediated transcriptional regulation of a distinct set of genes in fragile X syndrome. These fragile X syndrome-aberrant networks are significantly enriched for genes associated with autism spectrum disorder, giving support to the idea that underlying similarities exist among these neurodevelopmental diseases.

  11. Using EEG and stimulus context to probe the modelling of auditory-visual speech.

    Science.gov (United States)

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2016-02-01

    We investigated whether internal models of the relationship between lip movements and corresponding speech sounds [Auditory-Visual (AV) speech] could be updated via experience. AV associations were indexed by early and late event related potentials (ERPs) and by oscillatory power and phase locking. Different AV experience was produced via a context manipulation. Participants were presented with valid (the conventional pairing) and invalid AV speech items in either a 'reliable' context (80% AVvalid items) or an 'unreliable' context (80% AVinvalid items). The results showed that for the reliable context, there was N1 facilitation for AV compared to auditory only speech. This N1 facilitation was not affected by AV validity. Later ERPs showed a difference in amplitude between valid and invalid AV speech and there was significant enhancement of power for valid versus invalid AV speech. These response patterns did not change over the context manipulation, suggesting that the internal models of AV speech were not updated by experience. The results also showed that the facilitation of N1 responses did not vary as a function of the salience of visual speech (as previously reported); in post-hoc analyses, it appeared instead that N1 facilitation varied according to the relative time of the acoustic onset, suggesting that for AV events the N1 may be more sensitive to AV timing relationships than to form. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  12. A simple beam model to analyse the durability of adhesively bonded tile floorings in presence of shrinkage

    Directory of Open Access Journals (Sweden)

    S. de Miranda

    2014-07-01

    Full Text Available A simple beam model for the evaluation of tile debonding due to substrate shrinkage is presented. The tile-adhesive-substrate package is modeled as an Euler-Bernoulli beam lying on a two-layer elastic foundation. An effective discrete model for inter-tile grouting is introduced with the aim of modelling workmanship defects due to partially filled groutings. The model is validated using the results of a 2D FE model. Different defect configurations and adhesive typologies are analysed, focusing attention on the prediction of normal stresses in the adhesive layer under the assumption of Mode I failure of the adhesive.
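
    The classical single-parameter (Winkler) form of the beam-on-elastic-foundation equation, which the paper's two-layer foundation generalizes, reads:

    ```latex
    E I \, \frac{\mathrm{d}^{4} w}{\mathrm{d} x^{4}} + k \, w(x) = q(x)
    ```

    Here $w$ is the transverse deflection, $EI$ the bending stiffness of the beam, $k$ the foundation modulus, and $q$ the applied transverse load. In the two-layer setting of the paper, the single modulus $k$ is replaced by the in-series compliance of the adhesive and substrate layers, so that shrinkage strains in the substrate enter the model as an imposed loading on the foundation.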

  13. Monte Carlo modeling and analyses of YALINA-booster subcritical assembly part 1: analytical models and main neutronics parameters.

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division

    2008-09-11

    This study was carried out to model and analyze the YALINA-Booster facility, of the Joint Institute for Power and Nuclear Research of Belarus, with the long-term objective of advancing the utilization of accelerator driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions. In the latter two cases a deuteron beam is used to generate the neutrons. This study is a part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility, and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity, without any geometrical homogenization, using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses were extended using the MCB code, an extension of MCNP with burnup capability and additional features for analyzing source-driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1.

  14. Insights into the evolution of tectonically-active glaciated mountain ranges from digital elevation model analyses

    Science.gov (United States)

    Brocklehurst, S. H.; Whipple, K. X.

    2003-12-01

    Glaciers have played an important role in the development of most active mountain ranges around the world during the Quaternary, but the interaction between glacial erosion (as modulated by climate change) and tectonic processes is poorly understood. The so-called glacial buzzsaw hypothesis (Brozovic et al., 1997) proposes that glaciers can incise as rapidly as the most rapid rock uplift rates, such that glaciated landscapes experiencing different rock uplift rates but the same snowline elevation will look essentially the same, with mean elevations close to the snowline. Digital elevation model-based analyses of the glaciated landscapes of the Nanga Parbat region, Pakistan, and the Southern Alps, New Zealand, lend some support to this hypothesis, but also reveal considerably more variety to the landscapes of glaciated, tectonically-active mountain ranges. Larger glaciers in the Nanga Parbat region maintain a low downvalley gradient and valley floor elevations close to the snowline, even in the face of extremely rapid rock uplift. However, smaller glaciers steepen in response to rapid uplift, similar to the response of rivers. A strong correlation between the height of hillslopes rising from the cirque floors and rock uplift rates implies that erosion processes on hillslopes cannot initially keep up with more rapid glacial incision rates. It is these staggering hillslopes that permit mountain peaks to rise above 8000m. The glacial buzzsaw hypothesis does not describe the evolution of the Southern Alps as well, because here mean elevations rise in areas of more rapid rock uplift. The buzzsaw hypothesis may work well in the Nanga Parbat region because the zone of rapid rock uplift is structurally confined to a narrow region. Alternatively, the Southern Alps may not have been rising sufficiently rapidly or sufficiently long for the glacial buzzsaw to be imposed outside the most rapidly uplifting region, around Mount Cook. The challenge now is to understand in detail

  15. Soil carbon response to land-use change: evaluation of a global vegetation model using observational meta-analyses

    Science.gov (United States)

    Nyawira, Sylvia S.; Nabel, Julia E. M. S.; Don, Axel; Brovkin, Victor; Pongratz, Julia

    2016-10-01

    Global model estimates of soil carbon changes from past land-use changes remain uncertain. We develop an approach for evaluating dynamic global vegetation models (DGVMs) against existing observational meta-analyses of soil carbon changes following land-use change. Using the DGVM JSBACH, we perform idealized simulations where the entire globe is covered by one vegetation type, which then undergoes a land-use change to another vegetation type. We select the grid cells that represent the climatic conditions of the meta-analyses and compare the mean simulated soil carbon changes to the meta-analyses. Our simulated results show model agreement with the observational data on the direction of changes in soil carbon for some land-use changes, although the model simulated a generally smaller magnitude of changes. The conversion of crop to forest resulted in soil carbon gain of 10 % compared to a gain of 42 % in the data, whereas the forest-to-crop change resulted in a simulated loss of -15 % compared to -40 %. The model and the observational data disagreed for the conversion of crop to grasslands. The model estimated a small soil carbon loss (-4 %), while observational data indicate a 38 % gain in soil carbon for the same land-use change. These model deviations from the observations are substantially reduced by explicitly accounting for crop harvesting and ignoring burning in grasslands in the model. We conclude that our idealized simulation approach provides an appropriate framework for evaluating DGVMs against meta-analyses and that this evaluation helps to identify the causes of deviation of simulated soil carbon changes from the meta-analyses.

  16. A very simple dynamic soil acidification model for scenario analyses and target load calculations

    NARCIS (Netherlands)

    Posch, M.; Reinds, G.J.

    2009-01-01

    A very simple dynamic soil acidification model, VSD, is described, which has been developed as the simplest extension of steady-state models for critical load calculations and with an eye on regional applications. The model requires only a minimum set of inputs (compared to more detailed models) and

  17. A Conceptual Model for Analysing Management Development in the UK Hospitality Industry

    Science.gov (United States)

    Watson, Sandra

    2007-01-01

    This paper presents a conceptual, contingent model of management development. It explains the nature of the UK hospitality industry and its potential influence on MD practices, prior to exploring dimensions and relationships in the model. The embryonic model is presented as one that can enhance our understanding of the complexities of the…

  18. Secondary Evaluations of MTA 36-Month Outcomes: Propensity Score and Growth Mixture Model Analyses

    Science.gov (United States)

    Swanson, James M.; Hinshaw, Stephen P.; Arnold, L. Eugene; Gibbons, Robert D.; Marcus, Sue; Hur, Kwan; Jensen, Peter S.; Vitiello, Benedetto; Abikoff, Howard B.; Greenhill, Laurence L.; Hechtman, Lily; Pelham, William E.; Wells, Karen C.; Conners, C. Keith; March, John S.; Elliott, Glen R.; Epstein, Jeffery N.; Hoagwood, Kimberly; Hoza, Betsy; Molina, Brooke S. G.; Newcorn, Jeffrey H.; Severe, Joanne B.; Wigal, Timothy

    2007-01-01

    Objective: To evaluate two hypotheses: that self-selection bias contributed to lack of medication advantage at the 36-month assessment of the Multimodal Treatment Study of Children With ADHD (MTA) and that overall improvement over time obscured treatment effects in subgroups with different outcome trajectories. Method: Propensity score analyses,…
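
    The propensity-score idea tested here can be sketched on simulated data (not the MTA data): children with higher baseline severity are made more likely to stay on medication, which is exactly the self-selection the analysis corrects for. A plain gradient-ascent logistic regression stands in for the statistics package that would normally be used:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated observational data: treatment probability rises with
    # baseline severity (hypothetical confounder).
    n = 500
    severity = rng.normal(0.0, 1.0, n)
    p_true = 1 / (1 + np.exp(-1.5 * severity))
    treated = (rng.uniform(size=n) < p_true).astype(float)

    # Fit the propensity model P(treated | severity) by logistic
    # regression via gradient ascent on the log-likelihood.
    X = np.column_stack([np.ones(n), severity])
    beta = np.zeros(2)
    for _ in range(5000):
        p = 1 / (1 + np.exp(-X @ beta))
        beta += 0.05 * X.T @ (treated - p) / n

    propensity = 1 / (1 + np.exp(-X @ beta))

    # Stratify into propensity quintiles; outcomes are then compared
    # within strata, where treated and untreated cases are comparable.
    edges = np.quantile(propensity, [0.2, 0.4, 0.6, 0.8])
    strata = np.digitize(propensity, edges)
    ```

    Comparing medicated and unmedicated outcomes within propensity strata (rather than overall) removes the bias induced by severity-driven self-selection, under the usual no-unmeasured-confounding assumption.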

  20. Efficacy of extended release quetiapine fumarate monotherapy in elderly patients with major depressive disorder: secondary analyses in subgroups of patients according to baseline anxiety, sleep disturbance, and pain levels.

    Science.gov (United States)

    Montgomery, Stuart A; Altamura, A Carlo; Katila, Heikki; Datto, Catherine; Szamosi, Johan; Eriksson, Hans

    2014-03-01

    This study evaluated extended release quetiapine fumarate (quetiapine XR) monotherapy in elderly patients with major depressive disorder (MDD) according to baseline levels of anxiety, sleep disturbance, and pain. Post-hoc analyses of data from an 11-week (9-week randomized-treatment, 2-week post-treatment phase), double-blind, placebo-controlled study of quetiapine XR (50-300 mg/day) monotherapy in elderly (≥66 years) patients (n=338) with MDD were carried out. Outcomes included randomization to week 9 change in Montgomery Åsberg Depression Rating Scale (MADRS) score and week 9 response (≥50% MADRS score reduction) rates. Post-hoc analyses were carried out to assess subgroups of patients with MDD defined by the following baseline levels: higher or lower anxiety (Hamilton Rating Scale for Anxiety total score ≥20 or <20), sleep disturbance (Hamilton Rating Scale for Depression sleep disturbance factor [items 4+5+6] score ≥5 or <5), and pain. Quetiapine XR improved depressive symptoms in elderly patients with MDD, irrespective of baseline levels of anxiety, sleep disturbance, and pain.

  1. Antiapoptotic and neuroprotective role of Curcumin in Pentylenetetrazole (PTZ) induced kindling model in rat.

    Science.gov (United States)

    Saha, Lekha; Chakrabarti, Amitava; Kumari, Sweta; Bhatia, Alka; Banerjee, Dibyojyoti

    2016-02-01

    Kindling, a sub-threshold chemical or electrical stimulation, increases seizure duration and enhances accompanying behavior until it reaches a sort of equilibrium state. The present study aimed to explore the effect of curcumin on the development of kindling in PTZ-kindled rats and its role in apoptosis and neuronal damage. In a PTZ-kindled Wistar rat model, different doses of curcumin (100, 200 and 300 mg/kg) were administered orally one hour before the PTZ injections on alternate days throughout the kindling period. The following parameters were compared between control and experimental groups: the course of kindling, stages of seizures, histopathological scoring of the hippocampus, antioxidant parameters in the hippocampus, DNA fragmentation and caspase-3 expression in the hippocampus, and neuron-specific enolase in the blood. One-way ANOVA followed by Bonferroni post hoc analysis and Fisher's exact test were used for statistical analyses. PTZ, 30 mg/kg, induced kindling in rats after 32.0 ± 1.4 days. Curcumin showed a dose-dependent anti-seizure effect. Curcumin (300 mg/kg) significantly increased the latency to myoclonic jerks, clonic seizures, and generalized tonic-clonic seizures, improved the seizure score and decreased the number of myoclonic jerks. PTZ kindling induced significant neuronal injury, oxidative stress and apoptosis, which were reversed by pretreatment with curcumin in a dose-dependent manner. Our study suggests that curcumin has a potential antiepileptogenic effect on kindling-induced epileptogenesis.
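
    The one-way ANOVA with Bonferroni post hoc testing used above can be sketched as follows. The latency numbers are invented for illustration, not taken from the study:

    ```python
    from itertools import combinations
    from scipy import stats

    # Illustrative latency-to-seizure data (minutes): PTZ alone vs
    # three curcumin doses (hypothetical values).
    groups = {
        "PTZ":     [12.1, 10.8, 11.5, 12.7, 11.0],
        "cur 100": [13.0, 12.5, 13.8, 12.9, 13.4],
        "cur 200": [15.2, 14.8, 16.1, 15.5, 14.9],
        "cur 300": [18.3, 17.9, 19.0, 18.5, 17.6],
    }

    # Omnibus one-way ANOVA across the four groups
    f_stat, p_omnibus = stats.f_oneway(*groups.values())

    # Bonferroni post hoc: pairwise t-tests with p-values multiplied
    # by the number of comparisons (capped at 1)
    pairs = list(combinations(groups, 2))
    posthoc = {}
    for a, b in pairs:
        t, p = stats.ttest_ind(groups[a], groups[b])
        posthoc[(a, b)] = min(p * len(pairs), 1.0)
    ```

    The omnibus test establishes that some group differs; the Bonferroni-adjusted pairwise tests then identify which dose levels differ from control while controlling the family-wise error rate across the six comparisons.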

  2. Using an operating cost model to analyse the selection of aircraft type on short-haul routes

    CSIR Research Space (South Africa)

    Ssamula, B

    2006-08-01

    Full Text Available and the effect of passenger volume analysed. The model was applied to a specific route within Africa, and thereafter varying passenger numbers, to choose the least costly aircraft. The results showed that smaller capacity aircraft, even though limited by maximum...

  3. Pathway models for analysing and managing the introduction of alien plant pests—an overview and categorization

    Science.gov (United States)

    J.C. Douma; M. Pautasso; R.C. Venette; C. Robinet; L. Hemerik; M.C.M. Mourits; J. Schans; W. van der Werf

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests are analysed with pathway models to provide risk managers with quantitative estimates of introduction risks and effectiveness of management options....

  4. Developing computational model-based diagnostics to analyse clinical chemistry data

    NARCIS (Netherlands)

    Schalkwijk, D.B. van; Bochove, K. van; Ommen, B. van; Freidig, A.P.; Someren, E.P. van; Greef, J. van der; Graaf, A.A. de

    2010-01-01

    This article provides methodological and technical considerations to researchers starting to develop computational model-based diagnostics using clinical chemistry data. These models are of increasing importance, since novel metabolomics and proteomics measuring technologies are able to produce large

  5. Bio-economic farm modelling to analyse agricultural land productivity in Rwanda

    NARCIS (Netherlands)

    Bidogeza, J.C.

    2011-01-01

    Keywords: Rwanda; farm household typology; sustainable technology adoption; multivariate analysis; land degradation; food security; bioeconomic model; crop simulation models; organic fertiliser; inorganic fertiliser; policy incentives.
    In Rwanda, land degradation contributes to the low and

  6. Comparative study analysing women's childbirth satisfaction and obstetric outcomes across two different models of maternity care

    OpenAIRE

    Conesa Ferrer, Ma Belén; Canteras Jordana, Manuel; Ballesteros Meseguer, Carmen; Carrillo García, César; Martínez Roche, M Emilia

    2016-01-01

    Objectives To describe the differences in obstetrical results and women's childbirth satisfaction across 2 different models of maternity care (biomedical model and humanised birth). Setting 2 university hospitals in south-eastern Spain from April to October 2013. Design A correlational descriptive study. Participants A convenience sample of 406 women participated in the study, 204 of the biomedical model and 202 of the humanised model. Results The differences in obstetrical results were (biom...

  7. Quantifying and Analysing Neighbourhood Characteristics Supporting Urban Land-Use Modelling

    DEFF Research Database (Denmark)

    Hansen, Henning Sten

    2009-01-01

    Land-use modelling and spatial scenarios have gained increased attention as a means to meet the challenge of reducing uncertainty in the spatial planning and decision-making. Several organisations have developed software for land-use modelling. Many of the recent modelling efforts incorporate cel...

  8. Driver Model of a Powered Wheelchair Operation as a Tool of Theoretical Analyses

    Science.gov (United States)

    Ito, Takuma; Inoue, Takenobu; Shino, Motoki; Kamata, Minoru

    This paper describes the construction of a driver model of powered wheelchair operation for understanding the characteristics of the driver. Most existing research on driver models targets the operation of automobiles and motorcycles, not low-speed vehicles such as powered wheelchairs. Therefore, we started by verifying the possibility of modeling the turning operation at a corner of a corridor. First, we conducted an experiment with a daily powered-wheelchair user driving his own vehicle. High reproducibility of driving and the driving characteristics needed for the construction of a driver model were both confirmed from the results of the experiment. Next, experiments with driving simulators were conducted to collect quantitative driving data. The parameters of the proposed driver model were identified from the experimental results. From simulations with the proposed driver model and identified parameters, the characteristics of the proposed driver model were analyzed.

  9. Fixed- and random-effects meta-analytic structural equation modeling: examples and analyses in R.

    Science.gov (United States)

    Cheung, Mike W-L

    2014-03-01

    Meta-analytic structural equation modeling (MASEM) combines the ideas of meta-analysis and structural equation modeling for the purpose of synthesizing correlation or covariance matrices and fitting structural equation models on the pooled correlation or covariance matrix. Cheung and Chan (Psychological Methods 10:40-64, 2005b, Structural Equation Modeling 16:28-53, 2009) proposed a two-stage structural equation modeling (TSSEM) approach to conducting MASEM that was based on a fixed-effects model by assuming that all studies have the same population correlation or covariance matrices. The main objective of this article is to extend the TSSEM approach to a random-effects model by the inclusion of study-specific random effects. Another objective is to demonstrate the procedures with two examples using the metaSEM package implemented in the R statistical environment. Issues related to and future directions for MASEM are discussed.
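
    The fixed- versus random-effects distinction described above can be illustrated at its simplest, for a single pooled correlation. The sketch below (Python, numpy only) uses Fisher's z weighting with a DerSimonian-Laird estimate of the between-study variance; the TSSEM approach itself pools full correlation matrices via the metaSEM R package and is not reproduced here.

```python
import numpy as np

def pool_correlations(r, n, random_effects=True):
    """Pool per-study correlations via Fisher's z.

    Fixed effects: each study i is weighted by n_i - 3, the inverse
    of Var(z_i). Random effects: a between-study variance tau^2
    (DerSimonian-Laird estimate) is added to each study's variance
    before re-weighting.
    """
    r, n = np.asarray(r, float), np.asarray(n, float)
    z = np.arctanh(r)                    # Fisher z-transform
    w = n - 3.0                          # inverse within-study variances
    z_bar = np.sum(w * z) / np.sum(w)
    if random_effects:
        q = np.sum(w * (z - z_bar) ** 2)             # Cochran's Q
        c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
        tau2 = max(0.0, (q - (len(z) - 1)) / c)      # DL estimator
        w = 1.0 / (1.0 / (n - 3.0) + tau2)           # re-weight
        z_bar = np.sum(w * z) / np.sum(w)
    return np.tanh(z_bar)                # back to the r scale

# three nearly homogeneous studies: both models give ~0.30
print(pool_correlations([0.30, 0.32, 0.29], [100, 150, 120], False))
print(pool_correlations([0.30, 0.32, 0.29], [100, 150, 120], True))
```

    With homogeneous studies the estimated between-study variance is zero and the two models coincide; heterogeneity inflates the random-effects variances and shifts weight toward smaller studies.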

  10. Healthy volunteers can be phenotyped using cutaneous sensitization pain models.

    Directory of Open Access Journals (Sweden)

    Mads U Werner

    Full Text Available BACKGROUND: Human experimental pain models leading to development of secondary hyperalgesia are used to estimate efficacy of analgesics and antihyperalgesics. The ability to develop an area of secondary hyperalgesia varies substantially between subjects, but little is known about the agreement following repeated measurements. The aim of this study was to determine if the areas of secondary hyperalgesia were sufficiently robust to be useful for phenotyping subjects, based on their pattern of sensitization by the heat pain models. METHODS: We performed post-hoc analyses of 10 completed healthy volunteer studies (n = 342 [409 repeated measurements]). Three different models were used to induce secondary hyperalgesia to monofilament stimulation: the heat/capsaicin sensitization (H/C), the brief thermal sensitization (BTS), and the burn injury (BI) models. Three studies included both the H/C and BTS models. RESULTS: Within-subject compared to between-subject variability was low, and there was substantial strength of agreement between repeated induction-sessions in most studies. The intraclass correlation coefficient (ICC) improved little with repeated testing beyond two sessions. There was good agreement in categorizing subjects into 'small-area' (1st quartile) and 'large-area' (4th quartile [>75%]) responders: 56-76% of subjects consistently fell into the same 'small-area' or 'large-area' category on two consecutive study days. There was moderate to substantial agreement between the areas of secondary hyperalgesia induced on the same day using the H/C (forearm) and BTS (thigh) models. CONCLUSION: Secondary hyperalgesia induced by experimental heat pain models seems a consistent measure of sensitization in pharmacodynamic and physiological research. The analysis indicates that healthy volunteers can be phenotyped based on their pattern of sensitization by the heat [and heat plus capsaicin] pain models.
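
    The intraclass correlation coefficient used in the study above to quantify session-to-session agreement can be computed from a subjects-by-sessions matrix. This is a minimal Python sketch of the one-way random-effects form, ICC(1); the abstract does not state which ICC form the authors used, so the choice here is illustrative only.

```python
import numpy as np

def icc_oneway(x):
    """One-way random-effects intraclass correlation, ICC(1).

    x : (n_subjects, k_sessions) array of repeated measurements.
    ICC(1) = (MSB - MSW) / (MSB + (k - 1) * MSW), where MSB and MSW
    are the between- and within-subject mean squares from a one-way
    ANOVA with subject as the random factor.
    """
    x = np.asarray(x, float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)
    msb = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msw = np.sum((x - row_means[:, None]) ** 2) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# hypothetical hyperalgesia areas (cm^2) over two sessions:
# subjects keep their 'small' vs 'large' ordering, so agreement is high
areas = np.array([[12.0, 14.0], [55.0, 60.0], [20.0, 18.0], [80.0, 75.0]])
print(round(icc_oneway(areas), 2))
```

    An ICC near 1 means between-subject differences dominate session-to-session noise, which is exactly the condition that makes phenotyping by sensitization pattern feasible.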

  11. Structural Equation Modeling with Mplus Basic Concepts, Applications, and Programming

    CERN Document Server

    Byrne, Barbara M

    2011-01-01

    Modeled after Barbara Byrne's other best-selling structural equation modeling (SEM) books, this practical guide reviews the basic concepts and applications of SEM using Mplus Versions 5 & 6. The author reviews SEM applications based on actual data taken from her own research. Using non-mathematical language, it is written for the novice SEM user. With each application chapter, the author "walks" the reader through all steps involved in testing the SEM model, including an explanation of the issues addressed and illustrated and annotated testing of the hypothesized and post hoc models expl

  12. WOMBAT: A tool for mixed model analyses in quantitative genetics by restricted maximum likelihood (REML)

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    WOMBAT is a software package for quantitative genetic analyses of continuous traits, fitting a linear mixed model; estimates of covariance components and the resulting genetic parameters are obtained by restricted maximum likelihood. A wide range of models, comprising numerous traits, multiple fixed and random effects, selected genetic covariance structures, random regression models and reduced rank estimation, are accommodated. WOMBAT employs up-to-date numerical and computational methods. Together with the use of efficient compilers, this generates fast executable programs suitable for large-scale analyses. Use of WOMBAT is illustrated for a bivariate analysis. The package consists of the executable program, available for LINUX and WINDOWS environments, a manual and a set of worked examples, and can be downloaded free of charge from http://agbu.une.edu.au/~kmeyer/wombat.html

  13. Kinetic models for analysing myocardial [{sup 11}C]palmitate data

    Energy Technology Data Exchange (ETDEWEB)

    Jong, Hugo W.A.M. de [University Medical Centre Utrecht, Department of Radiology and Nuclear Medicine, Utrecht (Netherlands); VU University Medical Centre, Department of Nuclear Medicine and PET Research, Amsterdam (Netherlands); Rijzewijk, Luuk J.; Diamant, Michaela [VU University Medical Centre, Diabetes Centre, Amsterdam (Netherlands); Lubberink, Mark; Lammertsma, Adriaan A. [VU University Medical Centre, Department of Nuclear Medicine and PET Research, Amsterdam (Netherlands); Meer, Rutger W. van der; Lamb, Hildo J. [Leiden University Medical Centre, Department of Radiology, Leiden (Netherlands); Smit, Jan W.A. [Leiden University Medical Centre, Department of Endocrinology, Leiden (Netherlands)

    2009-06-15

    [{sup 11}C]Palmitate PET can be used to study myocardial fatty acid metabolism in vivo. Several models have been applied to describe and quantify its kinetics, but to date no systematic analysis has been performed to define the most suitable model. In this study a total of 21 plasma input models comprising one to three compartments and up to six free rate constants were compared using statistical analysis of clinical data and simulations. To this end, 14 healthy volunteers were scanned using [{sup 11}C]palmitate, whilst myocardial blood flow was measured using H{sub 2} {sup 15}O. Models including an oxidative pathway, representing production of {sup 11}CO{sub 2}, provided significantly better fits to the data than other models. Model robustness was increased by fixing efflux of {sup 11}CO{sub 2} to the oxidation rate. Simulations showed that a three-tissue compartment model describing oxidation and esterification was feasible when no more than three free rate constants were included. Although further studies in patients are required to substantiate this choice, based on the accuracy of data description, the number of free parameters and generality, the three-tissue model with three free rate constants was the model of choice for describing [{sup 11}C]palmitate kinetics in terms of oxidation and fatty acid accumulation in the cell. (orig.)

  14. A novel substance flow analysis model for analysing multi-year phosphorus flow at the regional scale.

    Science.gov (United States)

    Chowdhury, Rubel Biswas; Moore, Graham A; Weatherley, Anthony J; Arora, Meenakshi

    2016-12-01

    Achieving sustainable management of phosphorus (P) is crucial for both global food security and global environmental protection. In order to formulate informed policy measures to overcome existing barriers to achieving sustainable P management, there is a need for a sound understanding of the nature and magnitude of P flow through various systems at different geographical and temporal scales. So far, there is a limited understanding of the nature and magnitude of P flow over multiple years at the regional scale. In this study, we have developed a novel substance flow analysis (SFA) model in the MATLAB/Simulink® software platform that can be effectively utilized to analyse the nature and magnitude of multi-year P flow at the regional scale. The model is inclusive of all P flows and storage relating to all key systems, subsystems, processes or components, and the associated interactions of P flow required to represent a typical P flow system at the regional scale. In an annual time step, this model can analyse P flow and storage over as many years as required at a time, and therefore can indicate the trends and changes in P flow and storage over many years, which is not offered by the existing regional-scale SFA models of P. The model is flexible enough to allow any modification or the inclusion of any degree of complexity, and therefore can be utilized for analysing P flow in any region around the world. The application of the model to the Gippsland region of Australia has revealed that the model generates essential information about the nature and magnitude of P flow at the regional scale which can be utilized for making improved management decisions towards attaining P sustainability. A systematic reliability check on the findings of the model application also indicates that the model produces reliable results.
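
    The annual-time-step mass-balance bookkeeping at the heart of a substance flow analysis can be sketched in a few lines. The compartment, flow names and coefficients below are invented for illustration and are not taken from the published Gippsland model.

```python
import numpy as np

def simulate_p_flows(years, soil0, fert, uptake_frac, loss_frac):
    """Toy annual-time-step substance flow model for phosphorus.

    A single 'agricultural soil' stock receives fertiliser input and
    loses P to crop uptake and runoff. Mass balance per year:
        soil[t+1] = soil[t] + fert[t] - uptake[t] - loss[t]
    where uptake and loss are modeled as fixed fractions of the stock.
    """
    soil = np.empty(years + 1)
    soil[0] = soil0
    uptake = np.empty(years)
    loss = np.empty(years)
    for t in range(years):
        uptake[t] = uptake_frac * soil[t]
        loss[t] = loss_frac * soil[t]
        soil[t + 1] = soil[t] + fert[t] - uptake[t] - loss[t]
    return soil, uptake, loss

fert = np.full(10, 8.0)   # constant fertiliser input, arbitrary units
soil, uptake, loss = simulate_p_flows(10, 100.0, fert, 0.05, 0.01)
print(soil)               # stock trends toward fert / (uptake + loss fracs)
```

    Running the balance over many years is what distinguishes a multi-year SFA from a single-year snapshot: the trajectory of the soil stock, not just one year's flows, becomes visible.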

  15. Comparative study analysing women's childbirth satisfaction and obstetric outcomes across two different models of maternity care.

    Science.gov (United States)

    Conesa Ferrer, Ma Belén; Canteras Jordana, Manuel; Ballesteros Meseguer, Carmen; Carrillo García, César; Martínez Roche, M Emilia

    2016-08-26

    To describe the differences in obstetrical results and women's childbirth satisfaction across 2 different models of maternity care (biomedical model and humanised birth). 2 university hospitals in south-eastern Spain from April to October 2013. A correlational descriptive study. A convenience sample of 406 women participated in the study, 204 of the biomedical model and 202 of the humanised model. The differences in obstetrical results were (biomedical model/humanised model): onset of labour (spontaneous 66/137, augmentation 70/1, p=0.0005), pain relief (epidural 172/132, no pain relief 9/40, p=0.0005), mode of delivery (normal vaginal 140/165, instrumental 48/23, p=0.004), length of labour (0-4 hours 69/93, >4 hours 133/108, p=0.011), condition of perineum (intact perineum or tear 94/178, episiotomy 100/24, p=0.0005). The total questionnaire score (100) gave a mean (M) of 78.33 and SD of 8.46 in the biomedical model of care and an M of 82.01 and SD of 7.97 in the humanised model of care (p=0.0005). In the analysis of the results per items, statistical differences were found in 8 of the 9 subscales. The highest scores were reached in the humanised model of maternity care. The humanised model of maternity care offers better obstetrical outcomes and women's satisfaction scores during the labour, birth and immediate postnatal period than does the biomedical model. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  16. Comparative study analysing women's childbirth satisfaction and obstetric outcomes across two different models of maternity care

    Science.gov (United States)

    Conesa Ferrer, Ma Belén; Canteras Jordana, Manuel; Ballesteros Meseguer, Carmen; Carrillo García, César; Martínez Roche, M Emilia

    2016-01-01

    Objectives To describe the differences in obstetrical results and women's childbirth satisfaction across 2 different models of maternity care (biomedical model and humanised birth). Setting 2 university hospitals in south-eastern Spain from April to October 2013. Design A correlational descriptive study. Participants A convenience sample of 406 women participated in the study, 204 of the biomedical model and 202 of the humanised model. Results The differences in obstetrical results were (biomedical model/humanised model): onset of labour (spontaneous 66/137, augmentation 70/1, p=0.0005), pain relief (epidural 172/132, no pain relief 9/40, p=0.0005), mode of delivery (normal vaginal 140/165, instrumental 48/23, p=0.004), length of labour (0–4 hours 69/93, >4 hours 133/108, p=0.011), condition of perineum (intact perineum or tear 94/178, episiotomy 100/24, p=0.0005). The total questionnaire score (100) gave a mean (M) of 78.33 and SD of 8.46 in the biomedical model of care and an M of 82.01 and SD of 7.97 in the humanised model of care (p=0.0005). In the analysis of the results per items, statistical differences were found in 8 of the 9 subscales. The highest scores were reached in the humanised model of maternity care. Conclusions The humanised model of maternity care offers better obstetrical outcomes and women's satisfaction scores during the labour, birth and immediate postnatal period than does the biomedical model. PMID:27566632

  17. A computationally efficient parallel Levenberg-Marquardt algorithm for highly parameterized inverse model analyses

    Science.gov (United States)

    Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.

    2016-09-01

    Inverse modeling seeks model parameters given a set of observations. However, for practical problems, conventional methods for inverse modeling can be computationally expensive because the number of measurements is often large and the model parameters are also numerous. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2-D and a random hydraulic conductivity field in 3-D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multicore computational environment. Therefore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate to large-scale problems.
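
    The damped normal-equations step that the paper accelerates can be sketched in its textbook dense form. This Python/numpy version solves (JᵀJ + λI)Δx = -Jᵀr directly and adapts λ; the paper's contribution (Julia implementation, Krylov-subspace projection and recycling across damping parameters) is not reproduced here, and the exponential-fit test problem is invented for illustration.

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-3, iters=50):
    """Minimal dense Levenberg-Marquardt iteration.

    Minimizes ||r(x)||^2 by repeatedly solving
        (J^T J + lam * I) dx = -J^T r
    and adapting the damping parameter lam up or down depending on
    whether the proposed step reduces the residual norm.
    """
    x = np.asarray(x0, float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        A = J.T @ J + lam * np.eye(x.size)
        dx = np.linalg.solve(A, -J.T @ r)
        if np.sum(residual(x + dx) ** 2) < np.sum(r ** 2):
            x, lam = x + dx, lam * 0.5   # accept step, relax damping
        else:
            lam *= 2.0                   # reject step, damp harder
    return x

# fit y = a * exp(b * t) to noiseless synthetic data
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
res = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
p = levenberg_marquardt(res, jac, [1.0, 0.0])
print(np.round(p, 3))
```

    Each change of λ changes only the diagonal shift of the system matrix, which is precisely why a Krylov subspace built for one damping parameter can be recycled for the next.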

  18. Comparative Analyses of MIRT Models and Software (BMIRT and flexMIRT)

    Science.gov (United States)

    Yavuz, Guler; Hambleton, Ronald K.

    2017-01-01

    Application of MIRT modeling procedures is dependent on the quality of parameter estimates provided by the estimation software and techniques used. This study investigated model parameter recovery of two popular MIRT packages, BMIRT and flexMIRT, under some common measurement conditions. These packages were specifically selected to investigate the…

  19. The Cannon 2: A data-driven model of stellar spectra for detailed chemical abundance analyses

    CERN Document Server

    Casey, Andrew R; Ness, Melissa; Rix, Hans-Walter; Ho, Anna Q Y; Gilmore, Gerry

    2016-01-01

    We have shown that data-driven models are effective for inferring physical attributes of stars (labels; Teff, logg, [M/H]) from spectra, even when the signal-to-noise ratio is low. Here we explore whether this is possible when the dimensionality of the label space is large (Teff, logg, and 15 abundances: C, N, O, Na, Mg, Al, Si, S, K, Ca, Ti, V, Mn, Fe, Ni) and the model is non-linear in its response to abundance and parameter changes. We adopt ideas from compressed sensing to limit overall model complexity while retaining model freedom. The model is trained with a set of 12,681 red-giant stars with high signal-to-noise spectroscopic observations and stellar parameters and abundances taken from the APOGEE Survey. We find that we can successfully train and use a model with 17 stellar labels. Validation shows that the model does a good job of inferring all 17 labels (typical abundance precision is 0.04 dex), even when we degrade the signal-to-noise by discarding ~50% of the observing time. The model dependencie...

  20. Analysing empowerment-oriented email consultation for parents : Development of the Guiding the Empowerment Process model

    NARCIS (Netherlands)

    Nieuwboer, C.C.; Fukkink, R.G.; Hermanns, J.M.A.

    2017-01-01

    Online consultation is increasingly offered by parenting practitioners, but it is not clear if it is feasible to provide empowerment-oriented support in a single session email consultation. Based on the empowerment theory, we developed the Guiding the Empowerment Process model (GEP model) to evaluat

  1. Transport of nutrients from land to sea: Global modeling approaches and uncertainty analyses

    NARCIS (Netherlands)

    Beusen, A.H.W.

    2014-01-01

    This thesis presents four examples of global models developed as part of the Integrated Model to Assess the Global Environment (IMAGE). They describe different components of global biogeochemical cycles of the nutrients nitrogen (N), phosphorus (P) and silicon (Si), with a focus on approaches to

  2. Modelling and analysing track cycling Omnium performances using statistical and machine learning techniques.

    Science.gov (United States)

    Ofoghi, Bahadorreza; Zeleznikow, John; Dwyer, Dan; Macmahon, Clare

    2013-01-01

    This article describes the utilisation of an unsupervised machine learning technique and statistical approaches (e.g., the Kolmogorov-Smirnov test) that assist cycling experts in the crucial decision-making processes for athlete selection, training, and strategic planning in the track cycling Omnium. The Omnium is a multi-event competition that will be included in the summer Olympic Games for the first time in 2012. Presently, selectors and cycling coaches make decisions based on experience and intuition. They rarely have access to objective data. We analysed both the old five-event (first raced internationally in 2007) and new six-event (first raced internationally in 2011) Omniums and found that the addition of the elimination race component to the Omnium has, contrary to expectations, not favoured track endurance riders. We analysed the Omnium data and also determined the inter-relationships between different individual events as well as between those events and the final standings of riders. In further analysis, we found that there is no maximum ranking (poorest performance) in each individual event that riders can afford whilst still winning a medal. We also found the required times for riders to finish the timed components that are necessary for medal winning. The results of this study consider the scoring system of the Omnium and inform decision-making toward successful participation in future major Omnium competitions.
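
    The Kolmogorov-Smirnov test mentioned above compares two samples by the maximum vertical distance between their empirical CDFs. A minimal Python sketch of the statistic (not the p-value), with made-up inputs rather than actual Omnium results:

```python
import numpy as np

def ks_two_sample(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the supremum of the
    absolute difference between the empirical CDFs of a and b,
    evaluated at the pooled data points (where the supremum is
    always attained).
    """
    a = np.sort(np.asarray(a, float))
    b = np.sort(np.asarray(b, float))
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / a.size
    cdf_b = np.searchsorted(b, grid, side="right") / b.size
    return np.max(np.abs(cdf_a - cdf_b))

print(ks_two_sample([1, 2, 3], [1, 2, 3]))     # identical samples -> 0.0
print(ks_two_sample([1, 2, 3], [10, 11, 12]))  # disjoint samples -> 1.0
```

    A statistic near 0 suggests the two score distributions (e.g., medalists vs non-medalists in one event) are similar; values near 1 indicate nearly non-overlapping distributions.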

  3. Mathematical modeling of materially nonlinear problems in structural analyses, Part II: Application in contemporary software

    Directory of Open Access Journals (Sweden)

    Bonić Zoran

    2010-01-01

    Full Text Available The paper presents the application of nonlinear material models in the software package Ansys. The development of the model theory was presented in Part I of this paper on the mathematical modeling of materially nonlinear problems in structural analysis (theoretical foundations); here, the incremental-iterative procedure used by this package to solve materially nonlinear problems is described, and an example of modeling a spread footing using the bilinear kinematic and Drucker-Prager models is given. A comparative analysis of the results obtained by this modeling and the author's experimental research was made. The load level corresponding to the onset of plastic deformation was noted, the development of deformations with increasing load was followed, and the distribution of dilatation in the footing was observed. Comparison of calculated and measured values of reinforcement dilatation shows very good agreement.

  4. Crowd-structure interaction in footbridges: Modelling, application to a real case-study and sensitivity analyses

    Science.gov (United States)

    Bruno, Luca; Venuti, Fiammetta

    2009-06-01

    A mathematical and computational model used to simulate crowd-structure interaction in lively footbridges is presented in this work. The model is based on the mathematical and numerical decomposition of the coupled multiphysical nonlinear system into two interacting subsystems. The model was conceived to simulate the synchronous lateral excitation phenomenon caused by pedestrians walking on footbridges. The model was first applied to simulate a crowd event on an actual footbridge, the T-bridge in Japan. Three sensitivity analyses were then performed on the same benchmark to evaluate the properties of the model. The simulation results show good agreement with the experimental data found in literature and the model could be considered a useful tool for designers and engineers in the different phases of footbridge design.

  5. Bicultural competence, acculturative family distancing, and future depression in Latino/a college students: a moderated mediation model.

    Science.gov (United States)

    Carrera, Stephanie G; Wei, Meifen

    2014-07-01

    In his acculturative family distancing (AFD) theory, Hwang (2006b) argued that acculturation gaps among parents and youth may lead to psychological and emotional distancing. AFD includes 2 dimensions: incongruent cultural values and breakdowns in communication. This study examined whether bicultural competence (BC) served as a mediator and moderator for the relationship between AFD and depression using structural equation modeling. Two hundred and forty-one Latino/a college students attending predominantly White, midwestern universities completed an online survey at 2 time points. For mediation, results indicated that BC at Time 2 (T2) mediated the relationship between AFD at Time 1 (T1) and depression at T2 above and beyond the effects of depression, acculturation, and enculturation at T1. A bootstrap method estimated the significance of the indirect effect. Moreover, 16% of the variance in BC at T2 was explained by acculturation, enculturation, and AFD at T1; 30% of the variance in depression at T2 was explained by BC at T2 and depression at T1. Post hoc analyses of the AFD and BC dimensions suggested that (a) positive attitudes toward both groups, communication ability, and social groundedness were significant mediators for the incongruent cultural values-depression link and (b) communication ability and social groundedness were significant mediators for the communication breakdown-depression link. For moderation, the AFD × BC interaction did not significantly predict depression at T2. Limitations, future research directions, and counseling implications are discussed.
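
    The bootstrap test of the indirect effect used in the study above is a generic technique that can be sketched compactly. The code below (Python, numpy only, OLS rather than SEM, with simulated data) estimates a percentile confidence interval for the product a·b in a simple x → m → y mediation model; it is an illustration of the method, not the study's actual analysis.

```python
import numpy as np

def bootstrap_indirect(x, m, y, n_boot=2000, seed=0):
    """Percentile bootstrap CI for the indirect effect a*b in a
    simple mediation model (x -> m -> y), estimated with OLS:
    a is the x->m slope, b is the m->y slope controlling for x.
    """
    rng = np.random.default_rng(seed)
    x, m, y = (np.asarray(v, float) for v in (x, m, y))
    n = x.size
    est = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, n)          # resample cases with replacement
        xb, mb, yb = x[idx], m[idx], y[idx]
        a = np.polyfit(xb, mb, 1)[0]         # x -> m slope
        X = np.column_stack([np.ones(n), xb, mb])
        b = np.linalg.lstsq(X, yb, rcond=None)[0][2]  # m -> y slope | x
        est[i] = a * b
    return np.percentile(est, [2.5, 97.5])

rng = np.random.default_rng(1)
x = rng.normal(size=200)
m = 0.5 * x + 0.5 * rng.normal(size=200)     # true a = 0.5
y = 0.6 * m + 0.5 * rng.normal(size=200)     # true b = 0.6
lo, hi = bootstrap_indirect(x, m, y)
print(lo > 0)  # CI excludes zero -> indirect effect is significant
```

    Mediation is supported when the bootstrap interval for a·b excludes zero, which avoids the normality assumption of the Sobel test.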

  6. Stochastic Spatio-Temporal Models for Analysing NDVI Distribution of GIMMS NDVI3g Images

    Directory of Open Access Journals (Sweden)

    Ana F. Militino

    2017-01-01

    Full Text Available The normalized difference vegetation index (NDVI) is an important indicator for evaluating vegetation change, monitoring land surface fluxes or predicting crop models. Due to the great availability of images provided by different satellites in recent years, much attention has been devoted to testing trend changes with time series of individual NDVI pixels. However, the spatial dependence inherent in these data is usually lost unless global scales are analyzed. In this paper, we propose incorporating both the spatial and the temporal dependence among pixels using a stochastic spatio-temporal model for estimating the NDVI distribution thoroughly. The stochastic model is a state-space model that uses meteorological data from the Climatic Research Unit (CRU) TS3.10 as auxiliary information. The model is estimated with the Expectation-Maximization (EM) algorithm. The result is a set of smoothed images providing an overall analysis of the NDVI distribution across space and time, where fluctuations generated by atmospheric disturbances, fire events, land-use/cover changes or engineering problems from image capture are treated as random fluctuations. The illustration is carried out with the third generation of NDVI images, termed NDVI3g, of the Global Inventory Modeling and Mapping Studies (GIMMS) in continental Spain. These data are taken at bimonthly periods from January 2011 to December 2013, but the model can be applied to many other variables, countries or regions with different resolutions.

  7. Neural Network-Based Model for Landslide Susceptibility and Soil Longitudinal Profile Analyses

    DEFF Research Database (Denmark)

    Farrokhzad, F.; Barari, Amin; Choobbasti, A. J.

    2011-01-01

    The purpose of this study was to create an empirical model for assessing the landslide risk potential at Savadkouh Azad University, which is located in the rural surroundings of Savadkouh, about 5 km from the city of Pol-Sefid in northern Iran. The soil longitudinal profile of the city of Babol, located 25 km from the Caspian Sea, also was predicted with an artificial neural network (ANN). A multilayer perceptron neural network model was applied to the landslide area and was used to analyze specific elements in the study area that contributed to previous landsliding events. The ANN models were ... studies in landslide susceptibility zonation ...

  8. Model-Based Fault Diagnosis: Performing Root Cause and Impact Analyses in Real Time

    Science.gov (United States)

    Figueroa, Jorge F.; Walker, Mark G.; Kapadia, Ravi; Morris, Jonathan

    2012-01-01

    Generic, object-oriented fault models, built according to causal-directed graph theory, have been integrated into an overall software architecture dedicated to monitoring and predicting the health of mission- critical systems. Processing over the generic fault models is triggered by event detection logic that is defined according to the specific functional requirements of the system and its components. Once triggered, the fault models provide an automated way for performing both upstream root cause analysis (RCA), and for predicting downstream effects or impact analysis. The methodology has been applied to integrated system health management (ISHM) implementations at NASA SSC's Rocket Engine Test Stands (RETS).

  9. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of this thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section transmission electron microscopy is used to acquire images from two experimental groups of rats: 1) rats subjected to a behavioral model of stress and 2) rats subjected to sham stress as the control group. The synaptic vesicle distribution and interactions are modeled by employing a point process approach. The model is able to correctly separate the two experimental groups. Two different approaches to estimate the thickness of each section of specimen being imaged are introduced. The first approach uses Darboux frame and Cartan matrix to measure the isophote curvature and the second approach is based ...

  10. Genomic analyses with biofilter 2.0: knowledge driven filtering, annotation, and model development

    National Research Council Canada - National Science Library

    Pendergrass, Sarah A; Frase, Alex; Wallace, John; Wolfe, Daniel; Katiyar, Neerja; Moore, Carrie; Ritchie, Marylyn D

    2013-01-01

    .... We have now extensively revised and updated the multi-purpose software tool Biofilter that allows researchers to annotate and/or filter data as well as generate gene-gene interaction models based...

  11. Wave modelling for the North Indian Ocean using MSMR analysed winds

    Digital Repository Service at National Institute of Oceanography (India)

    Vethamony, P.; Sudheesh, K.; Rupali, S.P.; Babu, M.T.; Jayakumar, S.; Saran, A.K.; Basu, S.K.; Kumar, R.; Sarkar, A.

    NCMRWF (National Centre for Medium Range Weather Forecast) winds assimilated with MSMR (Multi-channel Scanning Microwave Radiometer) winds are used as input to MIKE21 Offshore Spectral Wave model (OSW) which takes into account wind induced wave...

  12. The strut-and-tie models in reinforced concrete structures analysed by a numerical technique

    Directory of Open Access Journals (Sweden)

    V. S. Almeida

    Full Text Available The strut-and-tie models are appropriate for designing and detailing certain types of structural elements in reinforced concrete and regions of stress concentration, called "D" regions. This model is a good representation of the structural behavior and mechanism. The numerical techniques presented herein are used to identify stress regions that represent the strut-and-tie elements and to quantify their respective efforts. Linear elastic plane problems are analyzed using strut-and-tie models by coupling the classical evolutionary structural optimization (ESO) with a new variant called SESO (Smoothing ESO) in a finite element formulation. The SESO method is based on gradually reducing the stiffness contribution of inefficient elements at lower stress until they no longer have any influence. Optimal topologies of strut-and-tie models are presented in several instances, in good agreement with other pioneering works, allowing the design of reinforcement for structural elements.

  13. The Job Demands-Resources model: a motivational analysis from Self-Determination Theory

    OpenAIRE

    2013-01-01

    This article details the doctoral dissertation of Anja Van Broeck (2010), which examines employee motivation from two recent perspectives: the job demands-resources model (JD-R model) and the self-determination theory (SDT). The article primarily highlights how the studies in this dissertation add to the JD-R model by relying on SDT. First, a distinction is made between two types of job demands: job hindrances and job challenges. Second, motivation is shown to represent the underlying mechanism ...

  14. Optimization of extraction procedures for ecotoxicity analyses: Use of TNT contaminated soil as a model

    Energy Technology Data Exchange (ETDEWEB)

    Sunahara, G.I.; Renoux, A.Y.; Dodard, S.; Paquet, L.; Hawari, J. [BRI, Montreal, Quebec (Canada); Ampleman, G.; Lavigne, J.; Thiboutot, S. [DREV, Courcelette, Quebec (Canada)

    1995-12-31

    The environmental impact of energetic substances (TNT, RDX, GAP, NC) in soil is being examined using ecotoxicity bioassays. An extraction method was characterized to optimize bioassay assessment of TNT toxicity in different soil types. Using the Microtox{trademark} (Photobacterium phosphoreum) assay and non-extracted samples, TNT was the most acutely toxic (IC{sub 50} = 1--9 ppm), followed by RDX and GAP; NC did not show obvious toxicity (probably due to solubility limitations). TNT (in 0.25% DMSO) yielded an IC{sub 50} of 0.98 {+-} 0.10 (SD) ppm. The 96-h EC{sub 50} (Selenastrum capricornutum growth inhibition) of TNT (1.1 ppm) was higher than those of GAP and RDX; NC was not apparently toxic (probably due to solubility limitations). Soil samples (sand or a silt-sand mix) were spiked with either 2,000 or 20,000 mg TNT/kg soil and adjusted to 20% moisture. Samples were later mixed with acetonitrile, sonicated, and then treated with CaCl{sub 2} before filtration, HPLC and ecotoxicity analyses. Results indicated that the recovery of TNT from soil (97.51% {+-} 2.78) was independent of the type of soil or moisture content, that CaCl{sub 2} interfered with TNT toxicity, and that acetonitrile extracts could not be used directly for algal testing. When TNT extracts were diluted to fixed concentrations, similar TNT-induced ecotoxicities were generally observed, suggesting that, apart from the expected effects of TNT concentration in the soil, soil texture and moisture effects were minimal. The extraction procedure permits HPLC analyses as well as ecotoxicity testing and minimizes secondary soil-matrix effects. Further studies will examine the toxic effects of other energetic substances present in soil using this approach.

  15. Analysing stratified medicine business models and value systems: innovation-regulation interactions.

    Science.gov (United States)

    Mittra, James; Tait, Joyce

    2012-09-15

    Stratified medicine offers both opportunities and challenges to the conventional business models that drive pharmaceutical R&D. Given the increasingly unsustainable blockbuster model of drug development, due in part to maturing product pipelines, alongside increasing demands from regulators, healthcare providers and patients for higher standards of safety, efficacy and cost-effectiveness of new therapies, stratified medicine promises a range of benefits to pharmaceutical and diagnostic firms as well as healthcare providers and patients. However, the transition from 'blockbusters' to what might now be termed 'niche-busters' will require the adoption of new, innovative business models, the identification of different and perhaps novel types of value along the R&D pathway, and a smarter approach to regulation to facilitate innovation in this area. In this paper we apply the Innogen Centre's interdisciplinary ALSIS methodology, which we have developed for the analysis of life science innovation systems in contexts where the value creation process is lengthy, expensive and highly uncertain, to this emerging field of stratified medicine. In doing so, we consider the complex collaboration, timing, coordination and regulatory interactions that shape business models, value chains and value systems relevant to stratified medicine. More specifically, we explore in some depth two convergence models for co-development of a therapy and diagnostic before market authorisation, highlighting the regulatory requirements and policy initiatives within the broader value system environment that have a key role in determining the probable success and sustainability of these models.

  16. Analysing the Costs of Integrated Care: A Case on Model Selection for Chronic Care Purposes

    Directory of Open Access Journals (Sweden)

    Marc Carreras

    2016-08-01

    Full Text Available Background: The objective of this study is to investigate whether the algorithm proposed by Manning and Mullahy, a consolidated health economics procedure, can also be used to estimate individual costs for different groups of healthcare services in the context of integrated care. Methods: A cross-sectional study focused on the population of the Baix Empordà (Catalonia, Spain) for the year 2012 (N = 92,498 individuals). A set of individual cost models as a function of sex, age and morbidity burden were adjusted, and individual healthcare costs were calculated using a retrospective full-costing system. The individual morbidity burden was inferred using the Clinical Risk Groups (CRG) patient classification system. Results: Depending on the characteristics of the data, and according to the algorithm criteria, the choice of model was a linear model on the log of costs or a generalized linear model with a log link. We checked the models obtained for goodness of fit, accuracy, linear structure and heteroscedasticity. Conclusion: The proposed algorithm identified a set of suitable cost models for the distinct groups of services integrated care entails. The individual morbidity burden was found to be indispensable when allocating appropriate resources to targeted individuals.
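One branch of the Manning and Mullahy algorithm mentioned above is OLS on log costs, which then requires a retransformation (such as Duan's smearing estimator) to predict costs on the original scale. A minimal sketch on synthetic data; the variable names, coefficients and noise level are invented for illustration:

```python
import numpy as np

# Sketch of the log-OLS branch of the Manning-Mullahy choice, with Duan's
# smearing estimator mapping log-scale predictions back to the cost scale.
# Synthetic data: true model is log(cost) = 2.0 + 0.03 * age + noise.
rng = np.random.default_rng(0)
n = 500
age = rng.uniform(20, 80, n)
log_cost = 2.0 + 0.03 * age + rng.normal(0, 0.5, n)
X = np.column_stack([np.ones(n), age])

beta, *_ = np.linalg.lstsq(X, log_cost, rcond=None)
resid = log_cost - X @ beta

# Duan's smearing factor: mean of exponentiated residuals (>= 1 by Jensen's
# inequality); naively exponentiating predictions would underestimate costs.
smear = np.exp(resid).mean()
pred_cost = np.exp(X @ beta) * smear
print(round(smear, 3), round(pred_cost.mean(), 1))
```

When the algorithm instead selects a GLM with a log link, no retransformation step is needed, which is one of the practical trade-offs between the two branches.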

  17. An LP-model to analyse economic and ecological sustainability on Dutch dairy farms: model presentation and application for experimental farm "de Marke"

    NARCIS (Netherlands)

    Calker, van K.J.; Berentsen, P.B.M.; Boer, de I.J.M.; Giesen, G.W.J.; Huirne, R.B.M.

    2004-01-01

    Farm-level modelling can be used to determine how farm management adjustments and environmental policy affect different sustainability indicators. In this paper, indicators were included in a dairy farm LP (linear programming) model to analyse the effects of environmental policy and management

  18. Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses

    CERN Document Server

    Li, Shu; The ATLAS collaboration

    2017-01-01

    Proceeding for the poster presentation at LHCP2017, Shanghai, China on the topic of "Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses" (ATL-PHYS-SLIDE-2017-265 https://cds.cern.ch/record/2265389) Deadline: 01/09/2017

  19. AMME: an Automatic Mental Model Evaluation to analyse user behaviour traced in a finite, discrete state space.

    Science.gov (United States)

    Rauterberg, M

    1993-11-01

    To support the human factors engineer in designing a good user interface, a method has been developed to analyse empirical data of interactive user behaviour traced in a finite, discrete state space. The sequences of actions produced by the user contain valuable information about the mental model of this user, the individual problem-solving strategies for a given task, and the hierarchical structure of the task-subtask relationships. The presented method, AMME, can analyse the action sequences and automatically generate (1) a net description of the task-dependent model of the user, (2) a complete state transition matrix, and (3) various quantitative measures of the user's task-solving process. The behavioural complexity of task-solving processes carried out by novices has been found to be significantly greater than the complexity of task-solving processes carried out by experts.
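The state transition matrix AMME generates (item 2 above) amounts to counting state-to-state transitions over the logged action sequences. A minimal sketch of that counting step, with invented state names:

```python
# Sketch of deriving a state transition matrix from traced user action
# sequences, as in AMME's second output. State names are invented.
from collections import defaultdict

def transition_matrix(sequences):
    """Count observed transitions between consecutive states."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {state: dict(targets) for state, targets in counts.items()}

logs = [["menu", "edit", "save", "menu"],
        ["menu", "edit", "edit", "save"]]
print(transition_matrix(logs))
```

From such a matrix, quantitative measures of the task-solving process (e.g. the number of distinct transitions used, a proxy for behavioural complexity) can be computed directly.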

  20. Model-driven meta-analyses for informing health care: a diabetes meta-analysis as an exemplar.

    Science.gov (United States)

    Brown, Sharon A; Becker, Betsy Jane; García, Alexandra A; Brown, Adama; Ramírez, Gilbert

    2015-04-01

    A relatively novel type of meta-analysis, a model-driven meta-analysis, involves the quantitative synthesis of descriptive, correlational data and is useful for identifying key predictors of health outcomes and informing clinical guidelines. Few such meta-analyses have been conducted and thus, large bodies of research remain unsynthesized and uninterpreted for application in health care. We describe the unique challenges of conducting a model-driven meta-analysis, focusing primarily on issues related to locating a sample of published and unpublished primary studies, extracting and verifying descriptive and correlational data, and conducting analyses. A current meta-analysis of the research on predictors of key health outcomes in diabetes is used to illustrate our main points.

  1. Multicollinearity in prognostic factor analyses using the EORTC QLQ-C30: identification and impact on model selection.

    Science.gov (United States)

    Van Steen, Kristel; Curran, Desmond; Kramer, Jocelyn; Molenberghs, Geert; Van Vreckem, Ann; Bottomley, Andrew; Sylvester, Richard

    2002-12-30

    Clinical and quality of life (QL) variables from an EORTC clinical trial of first-line chemotherapy in advanced breast cancer were used in a prognostic factor analysis of survival and response to chemotherapy. For response, different final multivariate models were obtained from forward and backward selection methods, suggesting a disconcerting instability. Quality of life was measured using the EORTC QLQ-C30 questionnaire completed by patients. Subscales on the questionnaire are known to be highly correlated, and it was therefore hypothesized that multicollinearity contributed to the model instability. A correlation matrix indicated that global QL was highly correlated with 7 of 11 variables. In a first attempt to explore multicollinearity, we used global QL as the dependent variable in a regression model with the other QL subscales as predictors. Afterwards, standard diagnostic tests for multicollinearity were performed. An exploratory principal components analysis and factor analysis of the QL subscales identified at most three important components and indicated that inclusion of global QL made minimal difference to the loadings on each component, suggesting that it is redundant in the model. In a second approach, we advocate a bootstrap technique to assess the stability of the models. Based on these analyses, and since global QL exacerbates problems of multicollinearity, we recommend that global QL be excluded from prognostic factor analyses using the QLQ-C30. The prognostic factor analysis was rerun without global QL in the model and selected the same significant prognostic factors as before.
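A standard diagnostic of the kind alluded to above is the variance inflation factor (VIF), computed from the R² of regressing each predictor on all the others. A sketch on synthetic data mimicking a redundant, highly correlated subscale; the variable layout and the conventional VIF > 10 rule of thumb are illustrative, not taken from the study:

```python
import numpy as np

# Variance inflation factors: VIF_j = 1 / (1 - R²_j), where R²_j comes
# from regressing predictor j on all other predictors.
def vif(X):
    out = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.column_stack([np.ones(len(y)), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        resid = y - Z @ coef
        r2 = 1.0 - resid.var() / y.var()
        out.append(1.0 / (1.0 - r2))
    return out

rng = np.random.default_rng(1)
base = rng.normal(size=200)
X = np.column_stack([base + rng.normal(0, 0.1, 200),   # "global QL"-like score
                     base + rng.normal(0, 0.1, 200),   # highly correlated subscale
                     rng.normal(size=200)])            # independent scale
print([round(v, 1) for v in vif(X)])
```

The two collinear columns produce large VIFs, while the independent column stays near 1, which is the pattern that motivates dropping a redundant variable such as global QL.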

  2. Analysing improvements to on-street public transport systems: a mesoscopic model approach

    DEFF Research Database (Denmark)

    Ingvardson, Jesper Bláfoss; Kornerup Jensen, Jonas; Nielsen, Otto Anker

    2017-01-01

    Light rail transit and bus rapid transit have shown to be efficient and cost-effective in improving public transport systems in cities around the world. As these systems comprise various elements, which can be tailored to any given setting, e.g. pre-board fare-collection, holding strategies...... and other advanced public transport systems (APTS), the attractiveness of such systems depends heavily on their implementation. In the early planning stage it is advantageous to deploy simple and transparent models to evaluate possible ways of implementation. For this purpose, the present study develops...... a mesoscopic model which makes it possible to evaluate public transport operations in details, including dwell times, intelligent traffic signal timings and holding strategies while modelling impacts from other traffic using statistical distributional data thereby ensuring simplicity in use and fast...

  3. Latent Variable Modelling and Item Response Theory Analyses in Marketing Research

    Directory of Open Access Journals (Sweden)

    Brzezińska Justyna

    2016-12-01

    Full Text Available Item Response Theory (IRT) is a modern statistical method using latent variables designed to model the interaction between a subject's ability and the item-level stimuli (difficulty, guessing). Item responses are treated as the outcome (dependent) variables, and the examinee's ability and the items' characteristics are the latent predictor (independent) variables. IRT models the relationship between a respondent's trait (ability, attitude) and the pattern of item responses. Thus, the estimates of individual latent traits can differ even for two individuals with the same total score. IRT scores can yield additional benefits, and this is discussed in detail. In this paper, theory and applications with R software, using packages designed for IRT modelling, are presented.
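The paper demonstrates IRT with R packages; the simplest IRT model, the one-parameter Rasch model, is compact enough to sketch directly (shown here in Python for illustration):

```python
import math

# Rasch (1PL) model: the probability of a correct/endorsed response depends
# only on the difference between person ability (theta) and item difficulty (b).
def rasch_p(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# The same respondent faces items of different difficulty: because response
# patterns carry information beyond the total score, two examinees with
# identical totals can receive different latent-trait estimates.
print(round(rasch_p(0.0, 0.0), 2))   # ability matches difficulty: 0.5
print(round(rasch_p(0.0, 1.5), 2))   # harder item: lower probability
```

Two- and three-parameter models extend this by adding discrimination and guessing parameters to the item side of the equation.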

  4. Modeling human papillomavirus and cervical cancer in the United States for analyses of screening and vaccination

    Directory of Open Access Journals (Sweden)

    Ortendahl Jesse

    2007-10-01

    Full Text Available Background To provide quantitative insight into current U.S. policy choices for cervical cancer prevention, we developed a model of human papillomavirus (HPV) and cervical cancer, explicitly incorporating uncertainty about the natural history of disease. Methods We developed a stochastic microsimulation of cervical cancer that distinguishes different HPV types by their incidence, clearance, persistence, and progression. Input parameter sets were sampled randomly from uniform distributions, and simulations were undertaken with each set. Through systematic reviews and formal data synthesis, we established multiple epidemiologic targets for model calibration, including age-specific prevalence of HPV by type, age-specific prevalence of cervical intraepithelial neoplasia (CIN), HPV type distribution within CIN and cancer, and age-specific cancer incidence. For each set of sampled input parameters, likelihood-based goodness-of-fit (GOF) scores were computed based on comparisons between model-predicted outcomes and calibration targets. Using 50 randomly resampled, good-fitting parameter sets, we assessed the external consistency and face validity of the model, comparing predicted screening outcomes to independent data. To illustrate the advantage of this approach in reflecting parameter uncertainty, we used the 50 sets to project the distribution of health outcomes in U.S. women under different cervical cancer prevention strategies. Results Approximately 200 good-fitting parameter sets were identified from 1,000,000 simulated sets. Modeled screening outcomes were externally consistent with results from multiple independent data sources. Based on 50 good-fitting parameter sets, the expected reductions in lifetime risk of cancer with annual or biennial screening were 76% (range across 50 sets: 69-82%) and 69% (60-77%), respectively. The reduction from vaccination alone was 75%, although it ranged from 60% to 88%, reflecting considerable parameter uncertainty.
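The calibration scheme described above (sample parameter sets from uniform priors, score each against calibration targets, keep the good-fitting sets, then project outcomes across all retained sets) reduces to an accept/reject loop. The one-parameter "model", target and tolerance below are invented purely to show the mechanics:

```python
import random

# Toy version of likelihood-free calibration by random sampling:
# draw parameters from a uniform prior, score against a target,
# and retain the good-fitting sets.
random.seed(42)

def model_output(p):
    return 2.0 * p  # stand-in for a full microsimulation run

target = 1.0
good_sets = [p for p in (random.uniform(0.0, 1.0) for _ in range(10000))
             if abs(model_output(p) - target) < 0.02]  # GOF tolerance

# Projections are then run across all good-fitting sets, so parameter
# uncertainty is reported as a range of outcomes rather than a point.
print(len(good_sets), round(min(good_sets), 2), round(max(good_sets), 2))
```

The retained sets cluster around the parameter value consistent with the target, and their spread is what drives the outcome ranges (e.g. 69-82%) reported in the abstract.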

  5. Analysing the distribution of synaptic vesicles using a spatial point process model

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta

    2014-01-01

    Stress can affect brain functionality in many ways. As synaptic vesicles have a major role in nervous signal transmission in synapses, their distribution in relation to the active zone is very important in studying neuron responses. We study the effect of stress on brain functionality... in the two groups. The spatial distributions are modelled using spatial point process models with an inhomogeneous conditional intensity and repulsive pairwise interactions. Our results verify the hypothesis that the two groups have different spatial distributions.

  6. Analysing Amazonian forest productivity using a new individual and trait-based model (TFS v.1)

    Science.gov (United States)

    Fyllas, N. M.; Gloor, E.; Mercado, L. M.; Sitch, S.; Quesada, C. A.; Domingues, T. F.; Galbraith, D. R.; Torre-Lezama, A.; Vilanova, E.; Ramírez-Angulo, H.; Higuchi, N.; Neill, D. A.; Silveira, M.; Ferreira, L.; Aymard C., G. A.; Malhi, Y.; Phillips, O. L.; Lloyd, J.

    2014-07-01

    Repeated long-term censuses have revealed large-scale spatial patterns in Amazon basin forest structure and dynamism, with some forests in the west of the basin having rates of aboveground biomass production and tree recruitment up to twice as high as forests in the east. Possible causes for this variation are the climatic and edaphic gradients across the basin and/or the spatial distribution of tree species composition. To help understand the causes of this variation, a new individual-based model of tropical forest growth, designed to take full advantage of the forest census data available from the Amazonian Forest Inventory Network (RAINFOR), has been developed. The model allows for within-stand variation in tree size distribution and key functional traits and for between-stand differences in climate and soil physical and chemical properties. It runs at the stand level with four functional traits - leaf dry mass per area (Ma), leaf nitrogen (NL) and phosphorus (PL) content, and wood density (DW) - varying from tree to tree in a way that replicates the observed continua found within each stand. We first applied the model to validate canopy-level water fluxes at three eddy covariance flux measurement sites; for all three sites the canopy-level water fluxes were adequately simulated. We then applied the model at seven plots where intensive measurements of carbon allocation are available. Tree-by-tree multi-annual growth rates generally agreed well with observations for small trees, but deviations were identified for larger trees. At the stand level, simulations at 40 plots were used to explore the influence of climate and soil nutrient availability on the gross (ΠG) and net (ΠN) primary production rates as well as the carbon use efficiency (CU). Simulated ΠG, ΠN and CU were not associated with temperature; on the other hand, all three measures of stand-level productivity were positively related to both mean annual precipitation and soil nutrient status.

  7. Integrated modeling/analyses of thermal-shock effects in SNS targets

    Energy Technology Data Exchange (ETDEWEB)

    Taleyarkhan, R.P.; Haines, J. [Oak Ridge National Lab., TN (United States)

    1996-06-01

    In a spallation neutron source (SNS), extremely rapid energy pulses are introduced in target materials such as mercury, lead, tungsten and uranium. Shock phenomena in such systems may possibly lead to structural material damage beyond the design basis. As expected, the progression of shock waves and their interaction with surrounding materials in liquid targets can be quite different from that in solid targets. The purpose of this paper is to describe ORNL's modeling framework for 'integrated' assessment of thermal-shock issues in liquid and solid target designs. This modeling framework is being developed based upon expertise from past reactor safety studies, especially those related to the Advanced Neutron Source (ANS) Project. Unlike the separate-effects modeling approaches employed previously for evaluating target behavior under thermal shocks, the present approach treats the overall problem in a coupled manner using state-of-the-art equations of state for the materials of interest (viz., mercury, tungsten and uranium). That is, the modeling framework simultaneously accounts for localized (and distributed) compression pressure-pulse generation due to transient heat deposition, the outward transport of this shock wave, interaction with surrounding boundaries, feedback to mercury from structures, multi-dimensional reflection patterns, and possible stress-induced breakup or fracture.

  8. Testing Mediation Using Multiple Regression and Structural Equation Modeling Analyses in Secondary Data

    Science.gov (United States)

    Li, Spencer D.

    2011-01-01

    Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…
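In the multiple-regression approach to mediation, the indirect effect is commonly estimated as the product of path a (X to M) and path b (M to Y, controlling for X). A minimal sketch on synthetic data with known paths; the path values and sample size are invented for illustration:

```python
import numpy as np

# Product-of-coefficients mediation sketch: indirect effect = a * b,
# where a is from regressing M on X, and b is the coefficient of M in
# the regression of Y on X and M. True paths here: a = 0.5, b = 0.7.
rng = np.random.default_rng(0)
n = 5000
x = rng.normal(size=n)
m = 0.5 * x + rng.normal(0, 1, n)             # path a
y = 0.7 * m + 0.2 * x + rng.normal(0, 1, n)   # path b plus a direct effect

def ols(predictors, outcome):
    X = np.column_stack([np.ones(len(outcome))] + list(predictors))
    return np.linalg.lstsq(X, outcome, rcond=None)[0]

a = ols([x], m)[1]       # slope of M on X
b = ols([x, m], y)[2]    # slope of M in Y ~ X + M
print(round(a * b, 2))   # indirect effect, close to 0.5 * 0.7 = 0.35
```

An SEM fits the same paths simultaneously and adds fit indices and latent variables, which is the main practical difference between the two methods the article compares.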

  9. Using Latent Trait Measurement Models to Analyse Attitudinal Data: A Synthesis of Viewpoints.

    Science.gov (United States)

    Andrich, David

    A Rasch model for ordered response categories is derived and it is shown that it retains the key features of both the Thurstone and Likert approaches to studying attitude. Key features of the latter approaches are reviewed. Characteristics in common with the Thurstone approach are: statements are scaled with respect to their affective values;…

  10. An anisotropic numerical model for thermal hydraulic analyses: application to liquid metal flow in fuel assemblies

    Science.gov (United States)

    Vitillo, F.; Vitale Di Maio, D.; Galati, C.; Caruso, G.

    2015-11-01

    A CFD analysis has been carried out to study the thermal-hydraulic behavior of liquid metal coolant in a fuel assembly with a triangular lattice. In order to obtain fast and accurate results, the isotropic two-equation RANS approach is often used in nuclear engineering applications. A different approach is provided by Non-Linear Eddy Viscosity Models (NLEVM), which try to take anisotropic effects into account through a nonlinear formulation of the Reynolds stress tensor. This approach is very promising, as it results in very good numerical behavior and a potentially better description of the fluid flow than classical isotropic models. An Anisotropic Shear Stress Transport (ASST) model, implemented in a commercial code, has been applied in previous studies, showing very reliable results for a large variety of flows and applications. In this paper, the ASST model is used to analyse the fluid flow inside the fuel assembly of the ALFRED lead-cooled fast reactor. A comparison is then presented between the results of wall-resolved conjugate heat transfer computations and those of a decoupled analysis using a suitable thermal wall function previously implemented in the solver.

  11. Cyclodextrin--piroxicam inclusion complexes: analyses by mass spectrometry and molecular modelling

    Science.gov (United States)

    Gallagher, Richard T.; Ball, Christopher P.; Gatehouse, Deborah R.; Gates, Paul J.; Lobell, Mario; Derrick, Peter J.

    1997-11-01

    Mass spectrometry has been used to investigate the nature of the non-covalent complexes formed between the anti-inflammatory drug piroxicam and α-, β- and γ-cyclodextrins. The energies of these complexes have been calculated by means of molecular modelling. There is a correlation between the peak intensities in the mass spectra and the calculated energies.

  12. Survival data analyses in ecotoxicology: critical effect concentrations, methods and models. What should we use?

    Science.gov (United States)

    Forfait-Dubuc, Carole; Charles, Sandrine; Billoir, Elise; Delignette-Muller, Marie Laure

    2012-05-01

    In ecotoxicology, critical effect concentrations are the most common indicators for quantitatively assessing risks to species exposed to contaminants. Three types of critical effect concentrations are classically used: the lowest/no observed effect concentration (LOEC/NOEC), the LC(x) (x% lethal concentration) and the NEC (no effect concentration). In this article, for each of these three types, we compared the methods or models used for their estimation and proposed the most appropriate one. We then compared these critical effect concentrations to each other, using nine survival data sets corresponding to D. magna exposure to nine different contaminants, for which the time course of the response was monitored. Our results showed that: (i) LOEC/NOEC values at day 21 were method-dependent, and the Cochran-Armitage test with a step-down procedure appeared to be the most protective for the environment; (ii) all the concentration-response models we compared gave close values of the LC50 at day 21, though the Weibull model had the lowest global mean deviance; (iii) a simple threshold NEC model, dependent on both concentration and time, more completely described the whole data set (i.e. all time points) and enabled a precise estimation of the NEC. We then compared the three critical effect concentrations and argued that use of the NEC might be a good option for environmental risk assessment.
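Under a Weibull concentration-response model for survival, S(c) = exp(-(c/scale)**shape), any x% lethal concentration has a closed form: LC(x) = scale * (-ln(1 - x/100))**(1/shape). A sketch with illustrative parameter values (not estimates from the study):

```python
import math

# LC(x) under a Weibull survival model S(c) = exp(-(c/scale)**shape):
# solving S(c) = 1 - x/100 gives LC(x) = scale * (-ln(1 - x/100))**(1/shape).
def lc_x(x_percent, shape, scale):
    frac = x_percent / 100.0
    return scale * (-math.log(1.0 - frac)) ** (1.0 / shape)

# Illustrative parameters, e.g. fitted to a 21-day survival data set.
lc50 = lc_x(50, shape=2.0, scale=3.0)
print(round(lc50, 3))
```

By construction, survival evaluated at this LC50 equals exactly 0.5, and LC(x) increases monotonically with x, which is a quick sanity check after any fit.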

  13. Transformation of Baumgarten's aesthetics into a tool for analysing works and for modelling

    DEFF Research Database (Denmark)

    Thomsen, Bente Dahl

    2006-01-01

      Abstract: Is this the best form, or does it need further work? The aesthetic object does not possess the perfect qualities; but how do I proceed with the form? These are questions that all modellers ask themselves at some point, and with which they can grapple for days - even weeks - before the...

  14. Modelling and analysing 3D buildings with a primal/dual data structure

    NARCIS (Netherlands)

    Boguslawski, P.; Gold, C.; Ledoux, H.

    2011-01-01

    While CityGML permits us to represent 3D city models, its use for applications where spatial analysis and/or real-time modifications are required is limited since at this moment the possibility to store topological relationships between the elements is rather limited and often not exploited. We pres

  16. A multi-scale modelling approach for analysing landscape service dynamics

    NARCIS (Netherlands)

    Willemen, L.; Veldkamp, A.; Verburg, P.H.; Hein, L.G.; Leemans, R.

    2012-01-01

    Shifting societal needs drive and shape landscapes and the provision of their services. This paper presents a modelling approach to visualize the regional spatial and temporal dynamics in landscape service supply as a function of changing landscapes and societal demand. This changing demand can resu

  17. GSEVM v.2: MCMC software to analyse genetically structured environmental variance models

    DEFF Research Database (Denmark)

    Ibáñez-Escriche, N; Garcia, M; Sorensen, D

    2010-01-01

    This note provides a description of software that allows to fit Bayesian genetically structured variance models using Markov chain Monte Carlo (MCMC). The gsevm v.2 program was written in Fortran 90. The DOS and Unix executable programs, the user's guide, and some example files are freely availab...

  18. Analysing outsourcing policies in an asset management context: a six-stage model

    NARCIS (Netherlands)

    Schoenmaker, R.; Verlaan, J.G.

    2013-01-01

    Asset managers of civil infrastructure are increasingly outsourcing their maintenance. Whereas maintenance is a cyclic process, decisions to outsource are often project-based, which confuses the discussion on the degree of outsourcing. This paper presents a six-stage model that facilitates

  20. Analysing green supply chain management practices in Brazil's electrical/electronics industry using interpretive structural modelling

    DEFF Research Database (Denmark)

    Govindan, Kannan; Kannan, Devika; Mathiyazhagan, K.

    2013-01-01

    that exists between GSCM practices with regard to their adoption within Brazilian electrical/electronic industry with the help of interpretive structural modelling (ISM). From the results, we infer that cooperation with customers for eco-design practice is driving other practices, and this practice acts...

  1. Analysing the Severity and Frequency of Traffic Crashes in Riyadh City Using Statistical Models

    Directory of Open Access Journals (Sweden)

    Saleh Altwaijri

    2012-12-01

    Full Text Available Traffic crashes in Riyadh city cause losses in the form of deaths, injuries and property damage, in addition to the pain and social tragedy affecting the families of victims. In 2005, a total of 47,341 injury traffic crashes occurred in Riyadh city (19% of all KSA crashes), and 9% of those crashes were severe. Road safety in Riyadh city may have been adversely affected by high car ownership, migration of people to the city, a high number of daily trips (about 6 million), high incomes, low-cost petrol, drivers of many different nationalities, young drivers, and tremendous population growth, which creates a high level of mobility and transport activity in the city. The primary objective of this paper is therefore to explore the factors affecting the severity and frequency of road crashes in Riyadh city using appropriate statistical models, aiming to establish effective safety policies ready to be implemented to reduce the severity and frequency of road crashes. Crash data for Riyadh city were collected from the Higher Commission for the Development of Riyadh (HCDR) for a period of five years, from 1425H to 1429H (roughly corresponding to 2004-2008). Crash data were classified into three categories: fatal, serious-injury and slight-injury. Two nominal response models were developed for the injury-related crash data: a standard multinomial logit model (MNL) and a mixed logit model. Because of a severe underreporting problem for slight-injury crashes, binary and mixed binary logistic regression models were also estimated for two categories of severity: fatal and serious crashes. For frequency, count models such as Negative Binomial (NB) models were employed, with 168 HAIs (wards) in Riyadh city as the units of analysis; ward-level crash data are disaggregated by crash severity (fatal and serious-injury crashes).
The results from both the multinomial and binary response models are found to be fairly consistent but

  2. Using species abundance distribution models and diversity indices for biogeographical analyses

    Science.gov (United States)

    Fattorini, Simone; Rigal, François; Cardoso, Pedro; Borges, Paulo A. V.

    2016-01-01

    We examine whether Species Abundance Distribution models (SADs) and diversity indices can describe how species colonization status influences community assembly on oceanic islands. Our hypothesis is that, because of the lack of source-sink dynamics at the archipelago scale, Single Island Endemics (SIEs), i.e. endemic species restricted to only one island, should be represented by few rare species and consequently have abundance patterns that differ from those of more widespread species. To test our hypothesis, we used arthropod data from the Azorean archipelago (North Atlantic). We divided the species into three colonization categories: SIEs, archipelagic endemics (AZEs, present on at least two islands) and native non-endemics (NATs). For each category, we modelled rank-abundance plots using both the geometric series and the Gambin model, a measure of distributional amplitude. We also calculated Shannon entropy and Buzas and Gibson's evenness. We show that the slopes of the regression lines modelling the SADs were significantly steeper for SIEs, indicating a relative predominance of a few highly abundant species and a lack of rare species, which also depresses the diversity indices. This may be a consequence of two factors: (i) some forest-specialist SIEs may be at an advantage over other, less adapted species; (ii) the entire populations of SIEs are by definition concentrated on a single island, without the possibility of inter-island source-sink dynamics; hence all populations must have a minimum number of individuals to survive natural, often unpredictable, fluctuations. These findings are supported by higher values of the α parameter of the Gambin model for SIEs. In contrast, AZEs and NATs had shallower regression slopes and lower α but higher diversity indices, resulting from their widespread distribution over several islands. We conclude that these differences in the SAD models and diversity indices demonstrate that the study of these metrics is useful for
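The two kinds of metrics compared above, SAD shape (here summarized by the slope of a log-abundance rank plot, as in a geometric-series fit) and diversity indices (Shannon entropy), can be sketched with invented abundance vectors:

```python
import math

# Two SAD-style descriptors: the slope of a log-abundance vs. rank
# regression (steeper = dominance by a few species, the SIE pattern
# described above) and Shannon entropy. Abundances are invented examples.
def rank_slope(abundances):
    ys = [math.log(a) for a in sorted(abundances, reverse=True)]
    xs = list(range(1, len(ys) + 1))
    n = len(ys)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def shannon(abundances):
    total = sum(abundances)
    return -sum(a / total * math.log(a / total) for a in abundances)

sie_like = [900, 50, 30, 15, 5]       # a few dominant species, few rare ones
nat_like = [210, 205, 200, 195, 190]  # evenly spread abundances
print(rank_slope(sie_like) < rank_slope(nat_like))  # steeper (more negative) slope
print(shannon(sie_like) < shannon(nat_like))        # lower diversity
```

The dominance-heavy vector yields both a steeper rank-abundance slope and a lower Shannon entropy, mirroring the SIE versus AZE/NAT contrast reported in the abstract.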

  3. Evaluation of a dentoalveolar model for testing mouthguards: stress and strain analyses.

    Science.gov (United States)

    Verissimo, Crisnicaw; Costa, Paulo Victor Moura; Santos-Filho, Paulo César Freitas; Fernandes-Neto, Alfredo Júlio; Tantbirojn, Daranee; Versluis, Antheunis; Soares, Carlos José

    2016-02-01

    Custom-fitted mouthguards are devices used to decrease the likelihood of dental trauma. The aim of this study was to develop an experimental bovine dentoalveolar model with periodontal ligament to evaluate mouthguard shock absorption and impact strain and stress behavior. A pendulum impact device was developed to perform the impact tests with two different impact materials (steel ball and baseball). Five bovine jaws were selected with standard age and dimensions. Six-mm mouthguards were made for the impact tests. The jaws were fixed in a pendulum device and impacts were performed from 90, 60, and 45° angles, with and without mouthguard. Strain gauges were attached at the palatal surface of the impacted tooth. The strain and shock absorption of the mouthguards were calculated and data were analyzed with three-way ANOVA and Tukey's test (α = 0.05). Two-dimensional finite element models were created based on the cross-section of the bovine dentoalveolar model used in the experiment. A nonlinear dynamic impact analysis was performed to evaluate the strain and stress distributions. Without mouthguards, the increase in impact angulation significantly increased strains and stresses. Mouthguards reduced strain and stress values. Impact velocity, impact object (steel ball or baseball), and mouthguard presence affected the impact stresses and strains in a bovine dentoalveolar model. Experimental strain measurements and finite element models predicted similar behavior; therefore, both methodologies are suitable for evaluating the biomechanical performance of mouthguards. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
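Shock absorption is commonly reported as the percent reduction in peak strain when the mouthguard is worn. A minimal sketch of that calculation follows; the strain values are hypothetical, and this particular formula is an assumption rather than a quotation from the paper:

```python
def shock_absorption_percent(strain_without, strain_with):
    """Percent reduction in peak strain attributable to the mouthguard:
    100 * (strain_without - strain_with) / strain_without.
    A common convention, assumed here, not taken verbatim from the paper."""
    return 100.0 * (strain_without - strain_with) / strain_without

# Hypothetical peak microstrain readings from a palatal strain gauge,
# impacted without and with the 6-mm mouthguard.
print(shock_absorption_percent(1500.0, 450.0))
```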

  4. Novel basophil- or eosinophil-depleted mouse models for functional analyses of allergic inflammation.

    Science.gov (United States)

    Matsuoka, Kunie; Shitara, Hiroshi; Taya, Choji; Kohno, Kenji; Kikkawa, Yoshiaki; Yonekawa, Hiromichi

    2013-01-01

    Basophils and eosinophils play important roles in various host defense mechanisms but also act as harmful effectors in allergic disorders. We generated novel basophil- and eosinophil-depletion mouse models by introducing the human diphtheria toxin (DT) receptor gene under the control of the mouse CD203c and the eosinophil peroxidase promoter, respectively, to study the critical roles of these cells in the immunological response. These mice exhibited selective depletion of the target cells upon DT administration. In the basophil-depletion model, DT administration attenuated a drop in body temperature in IgG-mediated systemic anaphylaxis in a dose-dependent manner and almost completely abolished the development of ear swelling in IgE-mediated chronic allergic inflammation (IgE-CAI), a typical skin swelling reaction with massive eosinophil infiltration. In contrast, in the eosinophil-depletion model, DT administration ameliorated the ear swelling in IgE-CAI whether DT was administered before, simultaneously, or after, antigen challenge, with significantly lower numbers of eosinophils infiltrating into the swelling site. These results confirm that basophils and eosinophils act as the initiator and the effector, respectively, in IgE-CAI. In addition, antibody array analysis suggested that eotaxin-2 is a principal chemokine that attracts proinflammatory cells, leading to chronic allergic inflammation. Thus, the two mouse models established in this study are potentially useful and powerful tools for studying the in vivo roles of basophils and eosinophils. The combination of basophil- and eosinophil-depletion mouse models provides a new approach to understanding the complicated mechanism of allergic inflammation in conditions such as atopic dermatitis and asthma.

  5. Novel basophil- or eosinophil-depleted mouse models for functional analyses of allergic inflammation.

    Directory of Open Access Journals (Sweden)

    Kunie Matsuoka

    Full Text Available Basophils and eosinophils play important roles in various host defense mechanisms but also act as harmful effectors in allergic disorders. We generated novel basophil- and eosinophil-depletion mouse models by introducing the human diphtheria toxin (DT) receptor gene under the control of the mouse CD203c and the eosinophil peroxidase promoter, respectively, to study the critical roles of these cells in the immunological response. These mice exhibited selective depletion of the target cells upon DT administration. In the basophil-depletion model, DT administration attenuated a drop in body temperature in IgG-mediated systemic anaphylaxis in a dose-dependent manner and almost completely abolished the development of ear swelling in IgE-mediated chronic allergic inflammation (IgE-CAI), a typical skin swelling reaction with massive eosinophil infiltration. In contrast, in the eosinophil-depletion model, DT administration ameliorated the ear swelling in IgE-CAI whether DT was administered before, simultaneously, or after, antigen challenge, with significantly lower numbers of eosinophils infiltrating into the swelling site. These results confirm that basophils and eosinophils act as the initiator and the effector, respectively, in IgE-CAI. In addition, antibody array analysis suggested that eotaxin-2 is a principal chemokine that attracts proinflammatory cells, leading to chronic allergic inflammation. Thus, the two mouse models established in this study are potentially useful and powerful tools for studying the in vivo roles of basophils and eosinophils. The combination of basophil- and eosinophil-depletion mouse models provides a new approach to understanding the complicated mechanism of allergic inflammation in conditions such as atopic dermatitis and asthma.

  6. Static simulation and analyses of mower's ROPS behavior in a finite element model.

    Science.gov (United States)

    Wang, X; Ayers, P; Womac, A R

    2009-10-01

    The goal of this research was to numerically predict the maximum lateral force acting on a mower rollover protective structure (ROPS) and the energy absorbed by the ROPS during a lateral continuous roll. A finite element (FE) model of the ROPS was developed using elastic and plastic theories including nonlinear relationships between stresses and strains in the plastic deformation range. Model validation was performed using field measurements of ROPS behavior in a lateral continuous roll on a purpose-designed test slope. Field tests determined the maximum deformation of the ROPS of a 900 kg John Deere F925 mower with a 183 cm (72 in.) mowing deck during an actual lateral roll on a pad and on soil. In the FE model, lateral force was gradually added to the ROPS until the field-measured maximum deformation was achieved. The results from the FE analysis indicated that the top corners of the ROPS enter slightly into the plastic deformation region. Maximum lateral forces acting on the ROPS during the simulated impact with the pad and soil were 19650 N and 22850 N, respectively. The FE model predicted that the energy absorbed by the ROPS (643 J) in the lateral roll test on the pad was less than the static test requirements (1575 J) of Organisation for Economic Co-operation and Development (OECD) Code 6. In addition, the energy absorbed by the ROPS (1813 J) in the test on the soil met the static test requirements (1575 J). Both the FE model and the field test results indicated that the deformed ROPS of the F925 mower with deck did not intrude into the occupant clearance zone during the lateral continuous or non-continuous roll.

  7. MONTE CARLO ANALYSES OF THE YALINA THERMAL FACILITY WITH SERPENT STEREOLITHOGRAPHY GEOMETRY MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, Y.

    2015-01-01

    This paper analyzes the YALINA Thermal subcritical assembly of Belarus using two different Monte Carlo transport programs, SERPENT and MCNP. The MCNP model is based on combinatorial geometry and a universe hierarchy, while the SERPENT model is based on Stereolithography geometry. The latter consists of unstructured triangulated surfaces defined by their normals and vertices. This geometry format is used by 3D printers, and the model was created using the CUBIT software, MATLAB scripts, and C code. All the Monte Carlo simulations have been performed using the ENDF/B-VII.0 nuclear data library. Both MCNP and SERPENT share the same geometry specifications, which describe the facility details without using any material homogenization. Three different configurations have been studied, using 216, 245, or 280 fuel rods, respectively. The numerical simulations show that the agreement between SERPENT and MCNP results is within a few tens of pcm.
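Agreement between two Monte Carlo eigenvalue estimates is conventionally quoted as a reactivity difference in pcm (1 pcm = 1e-5). A quick sketch of that conversion, with hypothetical k-eff values standing in for actual SERPENT and MCNP output:

```python
def delta_rho_pcm(k_a, k_b):
    """Reactivity difference between two k-eff estimates in pcm
    (1 pcm = 1e-5), using delta_rho = (k_b - k_a) / (k_a * k_b) * 1e5."""
    return (k_b - k_a) / (k_a * k_b) * 1e5

# Hypothetical SERPENT vs. MCNP eigenvalues for one fuel loading.
print(delta_rho_pcm(0.98712, 0.98745))  # a few tens of pcm
```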

  8. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of this thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section...... on differences of statistical measures in section and the same measures in between sections. Three-dimensional (3D) datasets are reconstructed by using image registration techniques and estimated thicknesses. We distinguish the effect of stress by estimating the synaptic vesicle densities and modeling......, which leads to more accurate results. Finally, we present a thorough statistical investigation of the shape, orientation and interactions of the synaptic vesicles during active time of the synapse. Focused ion beam-scanning electron microscopy images of a male mammalian brain are used for this study...

  9. A note on the Fourier series model for analysing line transect data.

    Science.gov (United States)

    Buckland, S T

    1982-06-01

    The Fourier series model offers a powerful procedure for the estimation of animal population density from line transect data. The estimate is reliable over a wide range of detection functions. In contrast, analytic confidence intervals yield, at best, 90% confidence for nominal 95% intervals. Three solutions, one using Monte Carlo techniques, another making direct use of replicate lines and the third based on the jackknife method, are discussed and compared.
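The Fourier series estimator itself is compact: f(0) is a cosine-series reconstruction of the detection density at zero perpendicular distance, and density follows as D = n·f(0)/(2L). A sketch of the standard form (the truncation point m and the data in the test are illustrative, and the confidence-interval methods the note compares are not reproduced here):

```python
import math

def fourier_f0(distances, w, m=3):
    """Fourier series estimate of the detection density at zero distance:
    f(0) = 1/w + sum_{k=1..m} a_k, with
    a_k = (2 / (n*w)) * sum_i cos(k*pi*x_i / w),
    where w is the truncation distance and x_i the sighting distances."""
    n = len(distances)
    f0 = 1.0 / w
    for k in range(1, m + 1):
        a_k = (2.0 / (n * w)) * sum(math.cos(k * math.pi * x / w)
                                    for x in distances)
        f0 += a_k
    return f0

def density_per_area(distances, w, line_length, m=3):
    """Line transect density estimate D = n * f(0) / (2 * L)."""
    return len(distances) * fourier_f0(distances, w, m) / (2.0 * line_length)
```

With sighting distances spread uniformly over [0, w] (i.e. no fall-off in detectability), the cosine corrections vanish and the estimate reduces to 1/w, which makes a convenient check.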

  10. Analysing Amazonian forest productivity using a new individual and trait-based model (TFS v.1

    Directory of Open Access Journals (Sweden)

    N. M. Fyllas

    2014-02-01

    Full Text Available Repeated long-term censuses have revealed large-scale spatial patterns in Amazon Basin forest structure and dynamism, with some forests in the west of the Basin having up to twice the rate of aboveground biomass production and tree recruitment of forests in the east. Possible causes for this variation could be the climatic and edaphic gradients across the Basin and/or the spatial distribution of tree species composition. To help understand the causes of this variation, a new individual-based model of tropical forest growth, designed to take full advantage of the forest census data available from the Amazonian Forest Inventory Network (RAINFOR), has been developed. The model incorporates variations in tree size distribution, functional traits and soil physical properties and runs at the stand level, with four functional traits, leaf dry mass per area (Ma), leaf nitrogen (NL) and phosphorus (PL) content, and wood density (DW), used to represent a continuum of plant strategies found in tropical forests. We first applied the model to validate canopy-level water fluxes at three Amazon eddy flux sites. For all three sites the canopy-level water fluxes were adequately simulated. We then applied the model at seven plots, where intensive measurements of carbon allocation are available. Tree-by-tree multi-annual growth rates generally agreed well with observations for small trees, but with deviations identified for large trees. At the stand level, simulations at 40 plots were used to explore the influence of climate and soil fertility on the gross (ΠG) and net (ΠN) primary production rates as well as the carbon use efficiency (CU). Simulated ΠG, ΠN and CU were not associated with temperature. However, all three measures of stand-level productivity were positively related to annual precipitation and soil fertility.

  11. Sensitivity to model geometry in finite element analyses of reconstructed skeletal structures: experience with a juvenile pelvis.

    Science.gov (United States)

    Watson, Peter J; Fagan, Michael J; Dobson, Catherine A

    2015-01-01

    Biomechanical analysis of juvenile pelvic growth can be used in the evaluation of medical devices and investigation of hip joint disorders. This requires access to scan data of healthy juveniles, which are not always freely available. This article analyses the application of a geometric morphometric technique, which facilitates the reconstruction of the articulated juvenile pelvis from cadaveric remains, in biomechanical modelling. The sensitivity of predicted stress/strain distributions to variation in the reconstructed morphologies is of particular interest. A series of finite element analyses of a 9-year-old hemi-pelvis were performed to examine differences in predicted strain distributions between a reconstructed model and the originally fully articulated specimen. Only minor differences in the minimum principal strain distributions were observed between two varying hemi-pelvic morphologies and that of the original articulation. A Wilcoxon rank-sum test determined that there was no statistically significant difference between the nodal strains recorded at 60 locations throughout the hemi-pelvic structures. This example suggests that finite element models created by this geometric morphometric reconstruction technique can be used with confidence, and as observed with this hemi-pelvis model, even a visible morphological difference does not significantly affect the predicted results. The validated use of this geometric morphometric reconstruction technique in biomechanical modelling reduces the dependency on clinical scan data.

  12. Systematic Selection of Key Logistic Regression Variables for Risk Prediction Analyses: A Five-Factor Maximum Model.

    Science.gov (United States)

    Hewett, Timothy E; Webster, Kate E; Hurd, Wendy J

    2017-08-16

    The evolution of clinical practice and medical technology has yielded an increasing number of clinical measures and tests to assess a patient's progression and return-to-sport readiness after injury. The plethora of available tests may be burdensome to clinicians in the absence of evidence that demonstrates the utility of a given measurement. Thus, there is a critical need to identify a discrete number of metrics to capture during clinical assessment to effectively and concisely guide patient care. The data sources included PubMed and PubMed Central articles on the topic. We present a systematic approach to injury risk analyses and show how this concept may be used in algorithms for risk analyses for primary anterior cruciate ligament (ACL) injury in healthy athletes and patients after ACL reconstruction. In this article, we present the five-factor maximum model, which states that in any predictive model, a maximum of 5 variables will contribute in a meaningful manner to any risk factor analysis. We demonstrate how this model already exists for prevention of primary ACL injury, how this model may guide development of the second ACL injury risk analysis, and how the five-factor maximum model may be applied across the injury spectrum for development of the injury risk analysis.

  13. Hydrogeologic analyses in support of the conceptual model for the LANL Area G LLRW performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Vold, E.L.; Birdsell, K.; Rogers, D.; Springer, E.; Krier, D.; Turin, H.J.

    1996-04-01

    The Los Alamos National Laboratory low-level radioactive waste disposal facility at Area G is currently completing a draft of the site Performance Assessment. Results from previous field studies have estimated a range in recharge rate up to 1 cm/yr. Recent estimates of unsaturated hydraulic conductivity for each stratigraphic layer under a unit-gradient assumption show a wide range in recharge rate, from 10^-4 to 1 cm/yr, depending upon location. Numerical computations show that a single net infiltration rate at the mesa surface does not match the moisture profile in each stratigraphic layer simultaneously, suggesting local source or sink terms, possibly due to surface-connected porous regions. The best fit to field data at deeper stratigraphic layers occurs for a net infiltration of about 0.1 cm/yr. A recent detailed analysis evaluated liquid-phase vertical moisture flux, based on moisture profiles in several boreholes and van Genuchten fits to the hydraulic properties for each of the stratigraphic units. Results show a near-surface infiltration region averaging 8 m deep, below which is a dry, low-moisture-content, low-flux region where liquid-phase recharge averages to zero. Analysis shows this low-flux region is dominated by vapor movement. Field data from tritium diffusion studies, from pressure fluctuation attenuation studies, and from comparisons of in-situ and core sample permeabilities indicate that the vapor diffusion is enhanced above that expected in the matrix, presumably due to enhanced flow through the fractures. Below this dry region within the mesa is a moisture spike which analyses show corresponds to a moisture source. The likely physical explanation is seasonal transient infiltration through surface-connected fractures. This anomalous region is being investigated in current field studies, because it is critical in understanding the moisture flux which continues to deeper regions through the unsaturated zone.
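Under a unit-gradient assumption the recharge flux simply equals the unsaturated hydraulic conductivity at the measured water content. A sketch using the Mualem–van Genuchten conductivity form; the parameter values in the example are hypothetical, not the Area G fits:

```python
def unit_gradient_recharge(theta, theta_r, theta_s, n, K_s):
    """Recharge flux under a unit-gradient assumption, q = K(theta),
    with K from the Mualem-van Genuchten model:
      Se = (theta - theta_r) / (theta_s - theta_r),  m = 1 - 1/n,
      K  = K_s * Se**0.5 * (1 - (1 - Se**(1/m))**m)**2.
    Units of q follow K_s (e.g. cm/yr)."""
    m = 1.0 - 1.0 / n
    se = (theta - theta_r) / (theta_s - theta_r)
    return K_s * se ** 0.5 * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2

# Hypothetical tuff-like parameters: recharge drops steeply as the
# rock dries, spanning orders of magnitude as in the field estimates.
print(unit_gradient_recharge(0.32, 0.05, 0.40, 2.0, 10.0))
```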

  14. A new compact solid-state neutral particle analyser at ASDEX Upgrade: Setup and physics modeling

    Science.gov (United States)

    Schneider, P. A.; Blank, H.; Geiger, B.; Mank, K.; Martinov, S.; Ryter, F.; Weiland, M.; Weller, A.

    2015-07-01

    At ASDEX Upgrade (AUG), a new compact solid-state detector has been installed to measure the energy spectrum of fast neutrals based on the principle described by Shinohara et al. [Rev. Sci. Instrum. 75, 3640 (2004)]. The diagnostic relies on the usual charge exchange of supra-thermal fast-ions with neutrals in the plasma. Therefore, the measured energy spectra directly correspond to those of confined fast-ions with a pitch angle defined by the line of sight of the detector. Experiments in AUG showed the good signal to noise characteristics of the detector. It is energy calibrated and can measure energies of 40-200 keV with count rates of up to 140 kcps. The detector has an active view on one of the heating beams. The heating beam increases the neutral density locally; thereby, information about the central fast-ion velocity distribution is obtained. The measured fluxes are modeled with a newly developed module for the 3D Monte Carlo code F90FIDASIM [Geiger et al., Plasma Phys. Controlled Fusion 53, 65010 (2011)]. The modeling allows to distinguish between the active (beam) and passive contributions to the signal. Thereby, the birth profile of the measured fast neutrals can be reconstructed. This model reproduces the measured energy spectra with good accuracy when the passive contribution is taken into account.

  15. A new compact solid-state neutral particle analyser at ASDEX Upgrade: Setup and physics modeling

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, P. A.; Blank, H.; Geiger, B.; Mank, K.; Martinov, S.; Ryter, F.; Weiland, M.; Weller, A. [Max-Planck-Institut für Plasmaphysik, Garching (Germany)

    2015-07-15

    At ASDEX Upgrade (AUG), a new compact solid-state detector has been installed to measure the energy spectrum of fast neutrals based on the principle described by Shinohara et al. [Rev. Sci. Instrum. 75, 3640 (2004)]. The diagnostic relies on the usual charge exchange of supra-thermal fast-ions with neutrals in the plasma. Therefore, the measured energy spectra directly correspond to those of confined fast-ions with a pitch angle defined by the line of sight of the detector. Experiments in AUG showed the good signal to noise characteristics of the detector. It is energy calibrated and can measure energies of 40-200 keV with count rates of up to 140 kcps. The detector has an active view on one of the heating beams. The heating beam increases the neutral density locally; thereby, information about the central fast-ion velocity distribution is obtained. The measured fluxes are modeled with a newly developed module for the 3D Monte Carlo code F90FIDASIM [Geiger et al., Plasma Phys. Controlled Fusion 53, 65010 (2011)]. The modeling allows to distinguish between the active (beam) and passive contributions to the signal. Thereby, the birth profile of the measured fast neutrals can be reconstructed. This model reproduces the measured energy spectra with good accuracy when the passive contribution is taken into account.

  16. High-temperature series analyses of the classical Heisenberg and XY model

    CERN Document Server

    Adler, J; Janke, W

    1993-01-01

    Although there is now a good measure of agreement between Monte Carlo and high-temperature series expansion estimates for Ising ($n=1$) models, published results for the critical temperature from series expansions up to 12th order for the three-dimensional classical Heisenberg ($n=3$) and XY ($n=2$) model do not agree very well with recent high-precision Monte Carlo estimates. In order to clarify this discrepancy we have analyzed extended high-temperature series expansions of the susceptibility, the second correlation moment, and the second field derivative of the susceptibility, which have been derived a few years ago by Lüscher and Weisz for general $O(n)$ vector spin models on $D$-dimensional hypercubic lattices up to 14th order in $K \equiv J/k_B T$. By analyzing these series expansions in three dimensions with two different methods that allow for confluent correction terms, we obtain good agreement with the standard field theory exponent estimates and with the critical temperature estimates...
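One of the simplest series-analysis tools, the ratio method, is easy to sketch: the coefficient ratios of the susceptibility series extrapolate linearly in 1/n to the inverse critical coupling. The synthetic series below has a known K_c built in; the confluent-correction methods of the paper are considerably more elaborate:

```python
def ratio_estimate_kc(coeffs):
    """Ratio-method estimate of the critical coupling K_c from
    high-temperature series coefficients c_n of chi = sum_n c_n K^n.
    Ratios r_n = c_n / c_{n-1} behave like (1/K_c)(1 + (gamma-1)/n);
    a least-squares fit of r_n against 1/n gives 1/K_c as intercept."""
    ratios = [(n, coeffs[n] / coeffs[n - 1]) for n in range(1, len(coeffs))]
    xs = [1.0 / n for n, _ in ratios]
    ys = [r for _, r in ratios]
    npts = len(xs)
    xbar, ybar = sum(xs) / npts, sum(ys) / npts
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
            sum((x - xbar) ** 2 for x in xs)
    intercept = ybar - slope * xbar   # -> 1/K_c
    return 1.0 / intercept

# Synthetic check: chi = (1 - K/Kc)^(-2) has c_n = (n+1)/Kc^n, so the
# ratios are exactly ((n+1)/n)/Kc and the fit recovers Kc.
kc = 0.25
coeffs = [(n + 1) / kc ** n for n in range(15)]
print(ratio_estimate_kc(coeffs))
```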

  17. Metabolic model for the filamentous ‘Candidatus Microthrix parvicella' based on genomic and metagenomic analyses

    Science.gov (United States)

    Jon McIlroy, Simon; Kristiansen, Rikke; Albertsen, Mads; Michael Karst, Søren; Rossetti, Simona; Lund Nielsen, Jeppe; Tandoi, Valter; James Seviour, Robert; Nielsen, Per Halkjær

    2013-01-01

    ‘Candidatus Microthrix parvicella' is a lipid-accumulating, filamentous bacterium so far found only in activated sludge wastewater treatment plants, where it is a common causative agent of sludge separation problems. Despite attracting considerable interest, its detailed physiology is still unclear. In this study, the genome of the RN1 strain was sequenced and annotated, which facilitated the construction of a theoretical metabolic model based on available in situ and axenic experimental data. This model proposes that under anaerobic conditions, this organism accumulates preferentially long-chain fatty acids as triacylglycerols. Utilisation of trehalose and/or polyphosphate stores or partial oxidation of long-chain fatty acids may supply the energy required for anaerobic lipid uptake and storage. Comparing the genome sequence of this isolate with metagenomes from two full-scale wastewater treatment plants with enhanced biological phosphorus removal reveals high similarity, with few metabolic differences between the axenic and the dominant community ‘Ca. M. parvicella' strains. Hence, the metabolic model presented in this paper could be considered generally applicable to strains in full-scale treatment systems. The genomic information obtained here will provide the basis for future research into in situ gene expression and regulation. Such information will give substantial insight into the ecophysiology of this unusual and biotechnologically important filamentous bacterium. PMID:23446830

  18. Consequence modeling for nuclear weapons probabilistic cost/benefit analyses of safety retrofits

    Energy Technology Data Exchange (ETDEWEB)

    Harvey, T.F.; Peters, L.; Serduke, F.J.D.; Hall, C.; Stephens, D.R.

    1998-01-01

    The consequence models used in former studies of costs and benefits of enhanced safety retrofits are considered for (1) fuel fires; (2) non-nuclear detonations; and (3) unintended nuclear detonations. Estimates of consequences were made using a representative accident location, i.e., an assumed mixed suburban-rural site. We have explicitly quantified land-use impacts and human-health effects (e.g., prompt fatalities, prompt injuries, latent cancer fatalities, low levels of radiation exposure, and clean-up areas). Uncertainty in the wind direction is quantified and used in a Monte Carlo calculation to estimate a range of results for a fuel fire with uncertain respirable amounts of released Pu. We define a nuclear source term and discuss damage levels of concern. Ranges of damages are estimated by quantifying health impacts and property damages. We discuss our dispersal and prompt effects models in some detail. The models used to loft the Pu and fission products and their particle sizes are emphasized.

  19. A new compact solid-state neutral particle analyser at ASDEX Upgrade: Setup and physics modeling.

    Science.gov (United States)

    Schneider, P A; Blank, H; Geiger, B; Mank, K; Martinov, S; Ryter, F; Weiland, M; Weller, A

    2015-07-01

    At ASDEX Upgrade (AUG), a new compact solid-state detector has been installed to measure the energy spectrum of fast neutrals based on the principle described by Shinohara et al. [Rev. Sci. Instrum. 75, 3640 (2004)]. The diagnostic relies on the usual charge exchange of supra-thermal fast-ions with neutrals in the plasma. Therefore, the measured energy spectra directly correspond to those of confined fast-ions with a pitch angle defined by the line of sight of the detector. Experiments in AUG showed the good signal to noise characteristics of the detector. It is energy calibrated and can measure energies of 40-200 keV with count rates of up to 140 kcps. The detector has an active view on one of the heating beams. The heating beam increases the neutral density locally; thereby, information about the central fast-ion velocity distribution is obtained. The measured fluxes are modeled with a newly developed module for the 3D Monte Carlo code F90FIDASIM [Geiger et al., Plasma Phys. Controlled Fusion 53, 65010 (2011)]. The modeling allows to distinguish between the active (beam) and passive contributions to the signal. Thereby, the birth profile of the measured fast neutrals can be reconstructed. This model reproduces the measured energy spectra with good accuracy when the passive contribution is taken into account.

  20. Analysing the origin of long-range interactions in proteins using lattice models

    Directory of Open Access Journals (Sweden)

    Unger Ron

    2009-01-01

    Full Text Available Abstract Background Long-range communication is very common in proteins but the physical basis of this phenomenon remains unclear. In order to gain insight into this problem, we decided to explore whether long-range interactions exist in lattice models of proteins. Lattice models of proteins have proven to capture some of the basic properties of real proteins and, thus, can be used for elucidating general principles of protein stability and folding. Results Using a computational version of double-mutant cycle analysis, we show that long-range interactions emerge in lattice models even though they are not an input feature of them. The coupling energy of both short- and long-range pairwise interactions is found to become more positive (destabilizing) in a linear fashion with increasing 'contact-frequency', an entropic term that corresponds to the fraction of states in the conformational ensemble of the sequence in which the pair of residues is in contact. A mathematical derivation of the linear dependence of the coupling energy on 'contact-frequency' is provided. Conclusion Our work shows how 'contact-frequency' should be taken into account in attempts to stabilize proteins by introducing (or stabilizing) contacts in the native state and/or through 'negative design' of non-native contacts.
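The arithmetic behind a double-mutant cycle coupling energy is a one-liner; the stability values below are hypothetical (kcal/mol), chosen only to show the independent and interacting cases:

```python
def coupling_energy(dg_wt, dg_mut1, dg_mut2, dg_double):
    """Double-mutant cycle coupling energy:
       ddG_int = dG_wt + dG_double - dG_mut1 - dG_mut2.
    Zero when the two mutated residues act independently; a nonzero
    value indicates an energetic interaction (short- or long-range)."""
    return dg_wt + dg_double - dg_mut1 - dg_mut2

# Hypothetical folding stabilities: in the first case the two single
# mutations add up exactly in the double mutant (no coupling); in the
# second the double mutant is 1 kcal/mol less destabilized (coupled).
print(coupling_energy(-10.0, -8.5, -9.0, -7.5))
print(coupling_energy(-10.0, -8.5, -9.0, -6.5))
```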

  1. Analyses of the redistribution of work following cardiac resynchronisation therapy in a patient specific model.

    Directory of Open Access Journals (Sweden)

    Steven Alexander Niederer

    Full Text Available Regulation of regional work is essential for efficient cardiac function. In patients with heart failure and electrical dysfunction such as left bundle branch block, regional work is often depressed in the septum. Following cardiac resynchronisation therapy (CRT) this heterogeneous distribution of work can be rebalanced by altering the pattern of electrical activation. To investigate the changes in regional work in these patients and the mechanisms underpinning the improved function following CRT, we have developed a personalised computational model. Simulations of electromechanical cardiac function in the model estimate the regional stress, strain and work pre- and post-CRT. These simulations predict that the increase in observed work performed by the septum following CRT is not due to an increase in the volume of myocardial tissue recruited during contraction; rather, the volume of recruited myocardium remains the same and the average peak work rate per unit volume increases. This increase in the peak average work rate is attributed to slower and more effective contraction in the septum, as opposed to a change in active tension. Model results predict that this improved septal work rate following CRT is a result of resistance to septal contraction provided by the LV free wall. This resistance results in septal shortening over a longer period which, in turn, allows the septum to contract while generating higher levels of active tension to produce a higher work rate.

  2. Marginal estimation for multi-stage models: waiting time distributions and competing risks analyses.

    Science.gov (United States)

    Satten, Glen A; Datta, Somnath

    2002-01-15

    We provide non-parametric estimates of the marginal cumulative distribution of stage occupation times (waiting times) and non-parametric estimates of marginal cumulative incidence function (proportion of persons who leave stage j for stage j' within time t of entering stage j) using right-censored data from a multi-stage model. We allow for stage and path dependent censoring where the censoring hazard for an individual may depend on his or her natural covariate history such as the collection of stages visited before the current stage and their occupation times. Additional external time dependent covariates that may induce dependent censoring can also be incorporated into our estimates, if available. Our approach requires modelling the censoring hazard so that an estimate of the integrated censoring hazard can be used in constructing the estimates of the waiting times distributions. For this purpose, we propose the use of an additive hazard model which results in very flexible (robust) estimates. Examples based on data from burn patients and simulated data with tracking are also provided to demonstrate the performance of our estimators.
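For the special case of independent censoring, the marginal cumulative incidence the authors generalise reduces to the familiar Aalen-Johansen form, which is short to sketch. The paper's estimator additionally models stage- and path-dependent censoring via an additive hazard model, which this toy deliberately omits:

```python
def cumulative_incidence(times, events, cause, t):
    """Nonparametric cumulative incidence for one failure cause under
    independent right censoring (Aalen-Johansen form):
      CIF_j(t) = sum over event times t_i <= t of S(t_i-) * d_ij / n_i,
    where S is the all-cause Kaplan-Meier survivor function, d_ij the
    cause-j events at t_i, n_i the number at risk, and events use 0
    for a censored observation."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    n_at_risk = len(times)
    surv, cif = 1.0, 0.0
    i = 0
    while i < len(order):
        ti = times[order[i]]
        if ti > t:
            break
        d_all = d_cause = censored = 0
        while i < len(order) and times[order[i]] == ti:  # group ties
            e = events[order[i]]
            if e == 0:
                censored += 1
            else:
                d_all += 1
                if e == cause:
                    d_cause += 1
            i += 1
        cif += surv * d_cause / n_at_risk
        surv *= 1.0 - d_all / n_at_risk
        n_at_risk -= d_all + censored
    return cif

# Hypothetical data: four subjects, two competing causes, no censoring.
print(cumulative_incidence([1, 2, 3, 4], [1, 2, 1, 1], cause=1, t=10))
```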

  3. Promoting Social Inclusion through Sport for Refugee-Background Youth in Australia: Analysing Different Participation Models

    Directory of Open Access Journals (Sweden)

    Karen Block

    2017-06-01

    Full Text Available Sports participation can confer a range of physical and psychosocial benefits and, for refugee and migrant youth, may even act as a critical mediator for achieving positive settlement and engaging meaningfully in Australian society. This group has low participation rates however, with identified barriers including costs; discrimination and a lack of cultural sensitivity in sporting environments; lack of knowledge of mainstream sports services on the part of refugee-background settlers; inadequate access to transport; culturally determined gender norms; and family attitudes. Organisations in various sectors have devised programs and strategies for addressing these participation barriers. In many cases however, these responses appear to be ad hoc and under-theorised. This article reports findings from a qualitative exploratory study conducted in a range of settings to examine the benefits, challenges and shortcomings associated with different participation models. Interview participants were drawn from non-government organisations, local governments, schools, and sports clubs. Three distinct models of participation were identified, including short term programs for refugee-background children; ongoing programs for refugee-background children and youth; and integration into mainstream clubs. These models are discussed in terms of their relative challenges and benefits and their capacity to promote sustainable engagement and social inclusion for this population group.

  4. A biophysically-based finite state machine model for analysing gastric experimental entrainment and pacing recordings

    Science.gov (United States)

    Sathar, Shameer; Trew, Mark L.; Du, Peng; O’Grady, Greg; Cheng, Leo K.

    2014-01-01

    Gastrointestinal motility is coordinated by slow waves (SWs) generated by the interstitial cells of Cajal (ICC). Experimental studies have shown that SWs spontaneously activate at different intrinsic frequencies in isolated tissue, whereas in intact tissues they are entrained to a single frequency. Gastric pacing has been used in an attempt to improve motility in disorders such as gastroparesis by modulating entrainment, but the optimal methods of pacing are currently unknown. Computational models can aid in the interpretation of complex in-vivo recordings and help to determine optimal pacing strategies. However, previous computational models of SW entrainment are limited to the intrinsic pacing frequency as the primary determinant of the conduction velocity, and are not able to accurately represent the effects of external stimuli and electrical anisotropies. In this paper, we present a novel computationally efficient method for modelling SW propagation through the ICC network while accounting for conductivity parameters and fiber orientations. The method successfully reproduced experimental recordings of entrainment following gastric transection and the effects of gastric pacing on SW activity. It provides a reliable new tool for investigating gastric electrophysiology in normal and diseased states, and to guide and focus future experimental studies. PMID:24276722
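The entrainment phenomenon described above can be illustrated with a deliberately minimal state-machine sketch (not the paper's biophysically-based ICC model): a chain of pacemaker cells, each with its own intrinsic period, where a firing cell activates its neighbours after a conduction delay unless they are refractory. All parameter values are hypothetical.

```python
import math

def simulate_chain(periods, delay=1.0, refractory=5.0, t_end=100.0):
    """Event-driven sketch: each cell fires at its intrinsic period unless a
    firing neighbour activates it earlier (outside its refractory window)."""
    n = len(periods)
    next_fire = list(periods)          # first intrinsic firing times
    last_fire = [-math.inf] * n
    fires = [[] for _ in range(n)]
    while True:
        i = min(range(n), key=lambda k: next_fire[k])
        t = next_fire[i]
        if t > t_end:
            break
        fires[i].append(t)
        last_fire[i] = t
        next_fire[i] = t + periods[i]  # reset this cell's intrinsic clock
        for j in (i - 1, i + 1):       # propagate to chain neighbours
            if 0 <= j < n:
                cand = t + delay
                if cand < next_fire[j] and cand >= last_fire[j] + refractory:
                    next_fire[j] = cand
    return fires

# entrainment: the slowest cell ends up firing at the fastest cell's period
fires = simulate_chain([10.0, 12.0, 14.0, 16.0])
intervals = [b - a for a, b in zip(fires[-1], fires[-1][1:])]
```

Even though the last cell's intrinsic period is 16, its observed firing interval settles at 10, the period of the fastest pacemaker, which is the hallmark of entrainment in intact tissue.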

  5. Study on dynamic response of embedded long span corrugated steel culverts using scaled model shaking table tests and numerical analyses

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    A series of scaled-model shaking table tests and simulation analyses using the dynamic finite element method were performed to clarify the dynamic behaviors and the seismic stability of embedded corrugated steel culverts subjected to strong earthquakes like the 1995 Hyogoken-nanbu earthquake. The dynamic strains of the embedded culvert models and the seismic soil pressure acting on the models due to sinusoidal and random strong motions were investigated. This study verified that the corrugated culvert model was subjected to dynamic horizontal forces (lateral seismic soil pressure) from the surrounding ground, which caused large bending strains in the structure; and that the structures do not exceed the allowable plastic deformation and do not collapse completely during strong earthquakes like the Hyogoken-nanbu earthquake. The results obtained are useful for the design and construction of embedded long span corrugated steel culverts in seismic regions.

  6. Model-independent analyses of non-Gaussianity in Planck CMB maps using Minkowski functionals

    Science.gov (United States)

    Buchert, Thomas; France, Martin J.; Steiner, Frank

    2017-05-01

    Despite the wealth of Planck results, there are difficulties in disentangling the primordial non-Gaussianity of the Cosmic Microwave Background (CMB) from the secondary and the foreground non-Gaussianity (NG). For each of these forms of NG the lack of complete data introduces model-dependences. Aiming at detecting the NGs of the CMB temperature anisotropy δT, while paying particular attention to a model-independent quantification of NGs, our analysis is based upon statistical and morphological univariate descriptors, respectively: the probability density function P(δT), related to v0, the first Minkowski Functional (MF), and the two other MFs, v1 and v2. From their analytical Gaussian predictions we build the discrepancy functions Δ_k (k = P, 0, 1, 2), which are applied to an ensemble of 10^5 CMB realization maps of the ΛCDM model and to the Planck CMB maps. In our analysis we use general Hermite expansions of the Δ_k up to the 12th order, where the coefficients are explicitly given in terms of cumulants. Assuming hierarchical ordering of the cumulants, we obtain the perturbative expansions generalizing the second-order expansions of Matsubara to arbitrary order in the standard deviation σ0 for P(δT) and v0, where the perturbative expansion coefficients are explicitly given in terms of complete Bell polynomials. The comparison of the Hermite expansions and the perturbative expansions is performed for the ΛCDM map sample and the Planck data. We confirm the weak level of non-Gaussianity (1-2)σ of the foreground-corrected masked Planck 2015 maps.
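The core idea behind the discrepancy functions can be illustrated at leading order (a much-simplified sketch, not the paper's 12th-order machinery): the deviation of a nearly-Gaussian PDF from its Gaussian prediction is expanded in probabilists' Hermite polynomials whose coefficients are the standardized cumulants. The two-point data set below is purely illustrative.

```python
def he3(x):
    """Probabilists' Hermite polynomial He3."""
    return x**3 - 3*x

def he4(x):
    """Probabilists' Hermite polynomial He4."""
    return x**4 - 6*x**2 + 3

def standardized_cumulants(data):
    """Sample skewness (3rd cumulant) and excess kurtosis (4th cumulant) of
    standardized data -- the leading coefficients of the expansion."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean)**2 for x in data) / n
    zs = [(x - mean) / var**0.5 for x in data]
    k3 = sum(z**3 for z in zs) / n
    k4 = sum(z**4 for z in zs) / n - 3.0
    return k3, k4

def delta_p(x, k3, k4):
    """Leading-order discrepancy of the PDF from Gaussian at standardized x,
    relative to the Gaussian density: (k3/6)He3(x) + (k4/24)He4(x)."""
    return k3 / 6.0 * he3(x) + k4 / 24.0 * he4(x)
```

For Gaussian data both cumulants vanish and the discrepancy is zero; for the symmetric two-point distribution {-1, +1} the skewness is 0 and the excess kurtosis is -2, giving a negative discrepancy at the origin.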

  7. Computational and Statistical Analyses of Insertional Polymorphic Endogenous Retroviruses in a Non-Model Organism

    Directory of Open Access Journals (Sweden)

    Le Bao

    2014-11-01

    Full Text Available Endogenous retroviruses (ERVs are a class of transposable elements found in all vertebrate genomes that contribute substantially to genomic functional and structural diversity. A host species acquires an ERV when an exogenous retrovirus infects a germ cell of an individual and becomes part of the genome inherited by viable progeny. ERVs that colonized ancestral lineages are fixed in contemporary species. However, in some extant species, ERV colonization is ongoing, which results in variation in ERV frequency in the population. To study the consequences of ERV colonization of a host genome, methods are needed to assign each ERV to a location in a species’ genome and determine which individuals have acquired each ERV by descent. Because well annotated reference genomes are not widely available for all species, de novo clustering approaches provide an alternative to reference mapping; they are insensitive to differences between query and reference and are amenable to mobile-element studies in both model and non-model organisms. However, there is substantial uncertainty both in identifying ERV genomic position and in assigning each unique ERV integration site to individuals in a population. We present an analysis suitable for detecting ERV integration sites in species without the need for a reference genome. Our approach is based on improved de novo clustering methods and statistical models that take the uncertainty of assignment into account and yield a probability matrix of shared ERV integration sites among individuals. We demonstrate that polymorphic integrations of a recently identified endogenous retrovirus in deer reflect contemporary relationships among individuals and populations.

  8. Analysing and modelling the impact of habitat fragmentation on species diversity: a macroecological perspective

    Directory of Open Access Journals (Sweden)

    Thomas Matthews

    2015-07-01

    Full Text Available My research aimed to examine a variety of macroecological and biogeographical patterns using a large number of purely habitat island datasets (i.e. isolated patches of natural habitat set within a matrix of human land uses), sourced both from the literature and from my own sampling. These patterns can be grouped under four broad headings: 1) species–area relationships (SARs), 2) nestedness, 3) species abundance distributions (SADs) and 4) species incidence functions (function of area). Overall, I found that there were few hard macroecological generalities that hold in all cases across habitat island systems. This is because most habitat island systems are highly disturbed environments, with a variety of confounding variables and ‘undesirable’ species (e.g. species associated with human land uses) acting to modulate the patterns of interest. Nonetheless, some clear patterns did emerge. For example, the power model was by far the best general SAR model for habitat islands. The slope of the island species–area relationship (ISAR) was related to the matrix type surrounding archipelagos, such that habitat island ISARs were shallower than true island ISARs. Significant compositional and functional nestedness was rare in habitat island datasets, although island area was seemingly responsible for what nestedness was observed. Species abundance distribution models were found to provide useful information for conservation in fragmented landscapes, but the presence of undesirable species substantially affected the shape of the SAD. In conclusion, I found that the application of theory derived from the study of true islands to habitat island systems is inappropriate, as it fails to incorporate factors that are unique to habitat islands.
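The power model of the species–area relationship mentioned above, S = cA^z, is conventionally fitted by ordinary least squares on log-transformed axes. The sketch below shows that standard fit; the habitat-island data are hypothetical, constructed to follow the power law exactly.

```python
import math

def fit_power_sar(areas, richness):
    """Fit the power SAR, S = c * A**z, by least squares on log10 S vs log10 A."""
    xs = [math.log10(a) for a in areas]
    ys = [math.log10(s) for s in richness]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    z = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar)**2 for x in xs))   # slope = SAR exponent z
    c = 10 ** (ybar - z * xbar)              # intercept back-transformed
    return c, z

# hypothetical habitat-island data following S = 10 * A**0.25 exactly
areas = [1.0, 10.0, 100.0, 1000.0]
richness = [10.0 * a**0.25 for a in areas]
c, z = fit_power_sar(areas, richness)
```

The fitted exponent z is the ISAR slope discussed in the record: shallower z for habitat islands than for true islands.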

  9. Using Rasch Modeling to Re-Evaluate Rapid Malaria Diagnosis Test Analyses

    Directory of Open Access Journals (Sweden)

    Dawit G. Ayele

    2014-06-01

    Full Text Available The objective of this study was to demonstrate the use of the Rasch model by assessing the appropriateness of demographic, socioeconomic and geographic factors in providing a total score for malaria rapid diagnostic tests (RDTs) in accordance with the model’s expectations. The baseline malaria indicator survey was conducted in the Amhara, Oromiya and Southern Nation Nationalities and People (SNNP) regions of Ethiopia by The Carter Center in 2007. The results show high reliability and little disordering of thresholds, with no evidence of differential item functioning.
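The Rasch model referenced above reduces each response probability to the difference between a person parameter (ability) and an item parameter (difficulty), which is what makes the raw total score a sufficient statistic. A minimal sketch of that core relationship (illustrative only; the study's estimation details are not reproduced here):

```python
import math

def rasch_prob(theta, b):
    """Rasch model: P(positive response) for person ability theta
    and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def expected_total_score(theta, difficulties):
    """Expected raw total score across items -- under the Rasch model the
    total score carries all the information about theta."""
    return sum(rasch_prob(theta, b) for b in difficulties)
```

When ability equals difficulty the response probability is exactly 0.5, and the probability increases monotonically with ability, which underlies the threshold-ordering checks reported in the record.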

  10. Analysing the distribution of synaptic vesicles using a spatial point process model

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta

    2014-01-01

    Stress can affect brain functionality in many ways. As synaptic vesicles play a major role in nervous signal transmission in synapses, their distribution in relation to the active zone is very important in studying neuron responses. We study the effect of stress on brain

  11. The influence of jet-grout constitutive modelling in excavation analyses

    OpenAIRE

    Ciantia, M.; Arroyo Alvarez de Toledo, Marcos; Castellanza, R; Gens Solé, Antonio

    2012-01-01

    A bonded elasto-plastic soil model is employed to characterize cement-treated clay in the finite element analysis of an excavation on soft clay supported with a soil-cement slab at the bottom. The soft clay is calibrated to represent the behaviour of Bangkok soft clay. A parametric study is run for a series of materials characterised by increasing cement content in the clay-cement mixture. The different mixtures are indirectly specified by means of their unconfined compressive strength. A ...

  12. Analyses of Methods and Algorithms for Modelling and Optimization of Biotechnological Processes

    Directory of Open Access Journals (Sweden)

    Stoyan Stoyanov

    2009-08-01

    Full Text Available A review of the problems in modeling, optimization and control of biotechnological processes and systems is given in this paper. An analysis of existing and some new practical optimization methods for searching for the global optimum, based on various advanced strategies (heuristic, stochastic, genetic and combined), is presented in the paper. Methods based on sensitivity theory, and stochastic and mixed strategies for optimization with partial knowledge of kinetic, technical and economic parameters in optimization problems, are discussed. Several approaches to multi-criteria optimization tasks are analyzed. Problems concerning the optimal control of biotechnological systems are also discussed.
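Of the strategies surveyed above, the genetic approach is easily sketched. Below is a deliberately minimal real-coded genetic algorithm (truncation selection, Gaussian mutation, elitism) maximizing a toy one-dimensional objective; the objective, bounds and all tuning parameters are hypothetical, chosen only to illustrate the mechanics.

```python
import random

def genetic_maximize(f, bounds, pop_size=30, generations=60,
                     sigma=0.3, seed=1):
    """Minimal real-coded GA: truncation selection, Gaussian mutation,
    and elitism (the best individuals always survive)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=f, reverse=True)
        parents = pop[:pop_size // 2]        # truncation selection
        children = [min(hi, max(lo, p + rng.gauss(0.0, sigma)))
                    for p in parents]        # Gaussian mutation, clipped
        pop = parents + children             # elitism via kept parents
    return max(pop, key=f)

# toy objective with a single optimum at x = 3
best = genetic_maximize(lambda x: -(x - 3.0) ** 2, bounds=(-10.0, 10.0))
```

Because the kept parents never lose the best individual, the search can only improve between generations; for multimodal bioprocess objectives one would typically add crossover and restarts, which are omitted here for brevity.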

  13. Daniel K. Inouye Solar Telescope: computational fluid dynamic analyses and evaluation of the air knife model

    Science.gov (United States)

    McQuillen, Isaac; Phelps, LeEllen; Warner, Mark; Hubbard, Robert

    2016-08-01

    Implementation of an air curtain at the thermal boundary between conditioned and ambient spaces allows for observation over wavelength ranges not practical when using optical glass as a window. The air knife model of the Daniel K. Inouye Solar Telescope (DKIST) project, a 4-meter solar observatory that will be built on Haleakalā, Hawai'i, deploys such an air curtain while also supplying ventilation through the ceiling of the coudé laboratory. The findings of computational fluid dynamics (CFD) analysis and subsequent changes to the air knife model are presented. Major design constraints include adherence to the Interface Control Document (ICD), separation of ambient and conditioned air, unidirectional outflow into the coudé laboratory, integration of a deployable glass window, and maintenance and accessibility requirements. The optimized design of the air knife successfully holds the full 12 Pa backpressure under temperature gradients of up to 20°C while maintaining unidirectional outflow. This is a significant improvement upon the 0.25 Pa pressure differential that the initial configuration, tested by Linden and Phelps, indicated the curtain could hold. CFD post-processing, developed by Vogiatzis, is validated against interferometry results of the initial air knife seeing evaluation, performed by Hubbard and Schoening. This is done by developing a CFD simulation of the initial experiment and using Vogiatzis' method to calculate the error introduced along the optical path. Seeing errors for both temperature differentials tested in the initial experiment match well with the seeing results obtained from the CFD analysis and thus validate the post-processing model. Application of this model to the realizable air knife assembly yields seeing errors that are well within the error budget under which the air knife interface falls, even with a temperature differential of 20°C between laboratory and ambient spaces. With ambient temperature set to 0°C and conditioned temperature set to 20

  14. Perspectives on econometric modelling to inform policy: a UK qualitative case study of minimum unit pricing of alcohol.

    Science.gov (United States)

    Katikireddi, Srinivasa V; Bond, Lyndal; Hilton, Shona

    2014-06-01

    Novel policy interventions may lack evaluation-based evidence. Considerations to introduce minimum unit pricing (MUP) of alcohol in the UK were informed by econometric modelling (the 'Sheffield model'). We aim to investigate policy stakeholders' views of the utility of modelling studies for public health policy. In-depth qualitative interviews with 36 individuals involved in MUP policy debates (purposively sampled to include civil servants, politicians, academics, advocates and industry-related actors) were conducted and thematically analysed. Interviewees felt familiar with modelling studies and often displayed detailed understandings of the Sheffield model. Despite this, many were uneasy about the extent to which the Sheffield model could be relied on for informing policymaking and preferred traditional evaluations. A tension was identified between this preference for post hoc evaluations and a desire for evidence derived from local data, with modelling seen to offer high external validity. MUP critics expressed concern that the Sheffield model did not adequately capture the 'real life' world of the alcohol market, which was conceptualized as a complex and, to some extent, inherently unpredictable system. Communication of modelling results was considered intrinsically difficult but presenting an appropriate picture of the uncertainties inherent in modelling was viewed as desirable. There was general enthusiasm for increased use of econometric modelling to inform future policymaking but an appreciation that such evidence should only form one input into the process. Modelling studies are valued by policymakers as they provide contextually relevant evidence for novel policies, but tensions exist with views of traditional evaluation-based evidence. © The Author 2013. Published by Oxford University Press on behalf of the European Public Health Association.

  15. Subchannel and Computational Fluid Dynamic Analyses of a Model Pin Bundle

    Energy Technology Data Exchange (ETDEWEB)

    Gairola, A.; Arif, M.; Suh, K. Y. [Seoul National Univ., Seoul (Korea, Republic of)

    2014-05-15

    The current study showed that the simplistic approach of the subchannel analysis code MATRA did not adequately capture the physical behavior of the coolant inside the rod bundle. With the incorporation of a more detailed geometry of the grid spacer in the CFX code, it was possible to approach the experimental values. However, it is vital to incorporate more advanced turbulence mixing models to more realistically simulate the behavior of the liquid metal coolant inside the model pin bundle, in parallel with the incorporation of the bottom and top grid structures. In the framework of the 11th international meeting of the International Association for Hydraulic Research and Engineering (IAHR) working group on advanced reactor thermal hydraulics, a standard problem was conducted. The essence of the problem was to examine the hydraulics and heat transfer in a novel pin bundle with different pitch-to-rod-diameter ratios and heat fluxes, cooled by liquid metal. The standard problem stems from the field of nuclear safety research, with the idea of validating and checking the performance of computer codes against experimental results. Comprehensive checks between the two will help improve the reliability and accuracy of the codes used for accident simulations.

  16. Integration of 3d Models and Diagnostic Analyses Through a Conservation-Oriented Information System

    Science.gov (United States)

    Mandelli, A.; Achille, C.; Tommasi, C.; Fassi, F.

    2017-08-01

    In recent years, technologies for producing high-quality virtual 3D replicas of Cultural Heritage (CH) artefacts have matured thanks to the progress of Information Technology (IT) tools. These methods are an efficient way to present digital models that can be used for several purposes: heritage management, support for conservation, virtual restoration, reconstruction and colouring, art cataloguing and visual communication. The work presented is an emblematic case study oriented to preventive conservation through monitoring activities, using different acquisition methods and instruments. It was developed within a project funded by the Lombardy Region, Italy, called "Smart Culture", which aimed to realise a platform that gives users easy access to CH artefacts, using a very famous statue as an example. The final product is a 3D reality-based model that contains a great deal of information, and that can be consulted through a common web browser. In the end, it was possible to define general strategies oriented to the maintenance and valorisation of CH artefacts, which, in this specific case, must consider the integration of different techniques and competencies to obtain complete, accurate and continuous monitoring of the statue.

  17. Preliminary Thermal Hydraulic Analyses of the Conceptual Core Models with Tubular Type Fuel Assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Chae, Hee Taek; Park, Jong Hark; Park, Cheol

    2006-11-15

    A new research reactor (AHR, Advanced HANARO Reactor) based on the HANARO has been under conceptual development for the future needs of research reactors. A tubular type fuel was considered as one of the fuel options for the AHR. A tubular fuel assembly has several curved fuel plates arranged with a constant small gap to build up cooling channels, which is very similar to an annular pipe with many layers. This report presents a preliminary analysis of the thermal hydraulic characteristics and safety margins for three conceptual core models using tubular fuel assemblies. Four design criteria, namely the fuel temperature, ONB (Onset of Nucleate Boiling) margin, minimum DNBR (Departure from Nucleate Boiling Ratio) and OFIR (Onset of Flow Instability Ratio), were investigated along with various core flow velocities under normal operating conditions. The primary coolant flow rate, based on a conceptual core model, was suggested as design information for the process design of the primary cooling system. A computational fluid dynamics analysis was also carried out to evaluate the coolant velocity distributions between tubular channels and the pressure drop characteristics of the tubular fuel assembly.

  18. A new non-randomized model for analysing sensitive questions with binary outcomes.

    Science.gov (United States)

    Tian, Guo-Liang; Yu, Jun-Wu; Tang, Man-Lai; Geng, Zhi

    2007-10-15

    We propose a new non-randomized model for assessing the association of two sensitive questions with binary outcomes. Under the new model, respondents only need to answer a non-sensitive question instead of the original two sensitive questions. As a result, it can protect a respondent's privacy, avoid the usage of any randomizing device, and be applied to both the face-to-face interview and the mail questionnaire. We derive the constrained maximum likelihood estimates of the cell probabilities and the odds ratio for two binary variables associated with the sensitive questions via the EM algorithm. The corresponding standard error estimates are then obtained by a bootstrap approach. A likelihood ratio test and a chi-squared test are developed for testing association between the two binary variables. We discuss the loss of information due to the introduction of the non-sensitive question, and the design of the co-operative parameters. Simulations are performed to evaluate the empirical type I error rates and powers for the two tests. In addition, a simulation is conducted to study the relationship between the probability of obtaining valid estimates and the sample size for any given cell probability vector. A real data set from an AIDS study is used to illustrate the proposed methodologies.
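The EM estimation mentioned above can be illustrated generically: when the non-sensitive question only reveals which *group* of cells a respondent falls in, the E-step splits each group count across its cells in proportion to the current probabilities and the M-step renormalizes. The grouping design and counts below are hypothetical, not the paper's actual model.

```python
def em_cell_probs(groups, counts, n_cells, iters=500):
    """EM for multinomial cell probabilities observed only through
    groups of indistinguishable cells.

    groups : list of tuples of cell indices observed together
    counts : observed count for each group
    """
    p = [1.0 / n_cells] * n_cells
    for _ in range(iters):
        expected = [0.0] * n_cells
        for cells, n in zip(groups, counts):
            total = sum(p[c] for c in cells)
            for c in cells:                    # E-step: split group counts
                expected[c] += n * p[c] / total
        grand = sum(expected)
        p = [e / grand for e in expected]      # M-step: renormalize
    return p

# hypothetical design: cells 1 and 2 are only ever observed together
p = em_cell_probs(groups=[(0,), (1, 2), (3,)], counts=[30, 50, 20], n_cells=4)
```

With a symmetric starting point the indistinguishable cells share their group's mass equally, which shows the loss of information the record discusses: the data alone cannot separate cells that are always observed together.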

  19. Coupled biophysical global ocean model and molecular genetic analyses identify multiple introductions of cryptogenic species.

    Science.gov (United States)

    Dawson, Michael N; Sen Gupta, Alex; England, Matthew H

    2005-08-23

    The anthropogenic introduction of exotic species is one of the greatest modern threats to marine biodiversity. Yet exotic species introductions remain difficult to predict and are easily misunderstood because knowledge of natural dispersal patterns, species diversity, and biogeography is often insufficient to distinguish between a broadly dispersed natural population and an exotic one. Here we compare a global molecular phylogeny of a representative marine meroplanktonic taxon, the moon-jellyfish Aurelia, with natural dispersion patterns predicted by a global biophysical ocean model. Despite assumed high dispersal ability, the phylogeny reveals many cryptic species and predominantly regional structure with one notable exception: the globally distributed Aurelia sp.1, which, molecular data suggest, may occasionally traverse the Pacific unaided. This possibility is refuted by the ocean model, which shows much more limited dispersion and patterns of distribution broadly consistent with modern biogeographic zones, thus identifying multiple introductions worldwide of this cryptogenic species. This approach also supports existing evidence that (i) the occurrence in Hawaii of Aurelia sp. 4 and other native Indo-West Pacific species with similar life histories is most likely due to anthropogenic translocation, and (ii) there may be a route for rare natural colonization of northeast North America by the European marine snail Littorina littorea, whose status as endemic or exotic is unclear.

  20. Cotton chromosome substitution lines crossed with cultivars: genetic model evaluation and seed trait analyses.

    Science.gov (United States)

    Wu, Jixiang; McCarty, Jack C; Jenkins, Johnie N

    2010-05-01

    Seed from upland cotton, Gossypium hirsutum L., provides a desirable and important nutrition profile. In this study, several seed traits (protein content, oil content, seed hull fiber content, seed index, seed volume, embryo percentage) for F3 hybrids of 13 cotton chromosome substitution lines crossed with five elite cultivars over four environments were evaluated. Oil and protein were expressed both as percentage of total seed weight and as an index which is the grams of product/100 seeds. An additive and dominance (AD) genetic model with cytoplasmic effects was designed, assessed by simulations, and employed to analyze these seed traits. Simulated results showed that this model was sufficient for analyzing the data structure with F3 and parents in multiple environments without replications. Significant cytoplasmic effects were detected for seed oil content, oil index, seed index, seed volume, and seed embryo percentage. Additive effects were significant for protein content, fiber content, protein index, oil index, fiber index, seed index, seed volume, and embryo percentage. Dominance effects were significant for oil content, oil index, seed index, and seed volume. Cytoplasmic and additive effects for parents and dominance effects in homozygous and heterozygous forms were predicted. Favorable genetic effects were predicted in this study and the results provided evidence that these seed traits can be genetically improved. In addition, chromosome associations with AD effects were detected and discussed in this study.

  1. Analysing and combining atmospheric general circulation model simulations forced by prescribed SST: northern extratropical response

    Directory of Open Access Journals (Sweden)

    K. Maynard

    2001-06-01

    Full Text Available The ECHAM 3.2 (T21), ECHAM 4 (T30) and LMD (version 6, grid-point resolution with 96 longitudes × 72 latitudes) atmospheric general circulation models were integrated through the period 1961 to 1993, forced with the same observed Sea Surface Temperatures (SSTs) as compiled at the Hadley Centre. Three runs were made for each model starting from different initial conditions. The mid-latitude circulation pattern which maximises the covariance between the simulation and the observations, i.e. the most skilful mode, and the one which maximises the covariance amongst the runs, i.e. the most reproducible mode, is calculated as the leading mode of a Singular Value Decomposition (SVD) analysis of observed and simulated Sea Level Pressure (SLP) and geopotential height at 500 hPa (Z500) seasonal anomalies. A common response amongst the different models, having different resolutions and parametrizations, should be considered as a more robust atmospheric response to SST than the same response obtained with only one model. A robust skilful mode is found mainly in December-February (DJF) and in June-August (JJA). In DJF, this mode is close to the SST-forced pattern found by Straus and Shukla (2000) over the North Pacific and North America, with a wavy out-of-phase relationship between the NE Pacific and the SE US on the one hand and NE North America on the other. This pattern evolves into a NAO-like pattern over the North Atlantic and Europe (SLP) and into a more N-S tripole on the Atlantic and European sector, with an out-of-phase relationship between middle Europe on the one hand and the northern and southern parts on the other (Z500). There are almost no spatial shifts between either field around North America (just a slight eastward shift of the highest absolute heterogeneous correlations for SLP relative to the Z500 ones).
The time evolution of the SST-forced mode is moderately to strongly related to the ENSO/LNSO events but the spread amongst the ensemble of runs is not systematically related

  2. Analysing and combining atmospheric general circulation model simulations forced by prescribed SST. Northern extra tropical response

    Energy Technology Data Exchange (ETDEWEB)

    Moron, V. [Université de Provence, UFR des sciences géographiques et de l'aménagement, Aix-en-Provence (France); Navarra, A. [Istituto Nazionale di Geofisica e Vulcanologia, Bologna (Italy); Ward, M. N. [University of Oklahoma, Cooperative Institute for Mesoscale Meteorological Studies, Norman OK (United States); Folland, C. K. [Hadley Centre for Climate Prediction and Research, Meteorological Office, Bracknell (United Kingdom); Friederichs, P. [Meteorologisches Institut der Universität Bonn, Bonn (Germany); Maynard, K.; Polcher, J. [Université Pierre et Marie Curie, Paris (France). Centre National de la Recherche Scientifique, Laboratoire de Météorologie Dynamique, Paris]

    2001-08-01

    The ECHAM 3.2 (T21), ECHAM 4 (T30) and LMD (version 6, grid-point resolution with 96 longitudes x 72 latitudes) atmospheric general circulation models were integrated through the period 1961 to 1993 forced with the same observed Sa Surface Temperatures (SSTs) as compiled at the Hadley Centre. Three runs were made for each model starting from different initial conditions. The mid-latitude circulation pattern which maximises the covariance between the simulation and the observations, i.e. the most skilful mode, and the one which maximises the covariance amongst the runs, i.e. the most reproducible mode, is calculated as the leading mode of a Singular Value Decomposition (SVD) analysis of observed and simulated Sea Level Pressure (SLP) and geo potential height at 500 hPa (Z500) seasonal anomalies. A common response amongst the different models, having different resolution and parametrization should be considered as a more robust atmospheric response to SST than the sam response obtained with only one model A robust skilful mode is found mainly in December-February (DJF), and in June-August (JJA). In DJF, this mode is close to the SST-forced pattern found by Straus nd Shukla (2000) over the North Pacific and North America with a wavy out-of-phase between the NE Pacific and the SE US on the one hand and the NE North America on the other. This pattern evolves in a NAO-like pattern over the North Atlantic and Europe (SLP) and in a more N-S tripote on the Atlantic and European sector with an out-of-phase between the middle Europe on the one hand and the northern and southern parts on the other (Z500). There are almost no spatial shifts between either field around North America (just a slight eastward shift of the highest absolute heterogenous correlations for SLP relative to the Z500 ones). 
The time evolution of the SST-forced mode is moderately to strongly related to El Niño/La Niña (ENSO) events, but the spread amongst the ensemble of runs is not systematically related at all to
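The "most skilful mode" described above is the leading SVD mode of the cross-covariance between observed and simulated anomaly fields. A minimal sketch of that computation, with synthetic anomaly matrices standing in for the SLP/Z500 fields (the grid sizes and shared signal below are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 33 seasons (1961-1993) of observed and simulated
# anomalies at 40 and 50 grid points respectively, sharing one common signal.
t, p, q = 33, 40, 50
signal = rng.standard_normal(t)
obs = np.outer(signal, rng.standard_normal(p)) + 0.5 * rng.standard_normal((t, p))
sim = np.outer(signal, rng.standard_normal(q)) + 0.5 * rng.standard_normal((t, q))

# Remove the time mean to obtain seasonal anomalies.
obs -= obs.mean(axis=0)
sim -= sim.mean(axis=0)

# Cross-covariance matrix between the two fields.
C = obs.T @ sim / (t - 1)

# SVD: the leading left/right singular vectors are the paired spatial
# patterns whose expansion coefficients have maximum covariance.
U, S, Vt = np.linalg.svd(C, full_matrices=False)
obs_pattern, sim_pattern = U[:, 0], Vt[0, :]

# The covariance of the leading expansion coefficients equals the first
# singular value, which is the "maximum covariance" property used here.
a = obs @ obs_pattern
b = sim @ sim_pattern
print(float(a @ b / (t - 1)), float(S[0]))
```

The same construction applied to runs of different models (rather than model vs. observations) yields the "most reproducible" mode.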

  3. Possibilities for a sustainable development [Muligheter for en bærekraftig utvikling; analyses on the ''World Model'']

    Energy Technology Data Exchange (ETDEWEB)

    Bjerkholt, O.; Johnsen, T.; Thonstad, K.

    1993-01-01

    This report is the final report of a project carried out by the Central Bureau of Statistics of Norway. The report presents analyses of the relations between economic development, energy consumption and emission of pollutants to air in a global perspective. The analyses are based on the ''World Model'', developed at the Institute for Economic Analysis at New York University. The analyses show that it will be very difficult to obtain a global stabilization of CO2 emissions at the 1990 level. In the reference scenario of the United Nations report ''Our Common Future'', the increase of CO2 emissions from 1990 to 2020 was 73%. Even in the scenario with the most drastic measures, the emissions in 2020 will be about 43% above the 1990 level, according to the present report. A stabilization of the global emissions at the 1990 level will require strong measures beyond those assumed in the model calculations, or a considerable breakthrough in energy technology. 17 refs., 5 figs., 21 tabs.

  4. A model using marginal efficiency of investment to analyse carbon and nitrogen interactions in forested ecosystems

    Science.gov (United States)

    Thomas, R. Q.; Williams, M.

    2014-12-01

    Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System modelling community. Here we explore the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants, using a new, simple model of ecosystem C-N cycling and interactions (ACONITE). ACONITE builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C:N, N fixation, and plant C use efficiency) based on the optimization of the marginal change in net C or N uptake associated with a change in allocation of C or N to plant tissues. We simulated and evaluated steady-state and transient ecosystem stocks and fluxes in three different forest ecosystem types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C:N differed among the three ecosystem types. Gross primary productivity (GPP) and net primary productivity (NPP) estimates compared well to observed fluxes at the simulation sites. A sensitivity analysis revealed that parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C:N. Also, a widely used linear leaf N-respiration relationship did not yield a realistic leaf C:N, while a more recently reported non-linear relationship simulated leaf C:N that compared better to the global trait database than the linear relationship. Overall, our ability to constrain leaf area index and allow spatially and temporally variable leaf C:N can help address challenges simulating these properties in ecosystem and Earth System models. Furthermore, the simple approach with emergent properties based on coupled C-N dynamics has
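The marginal-efficiency idea behind ACONITE, allocating to a tissue only while the marginal return of further allocation exceeds its cost, can be illustrated numerically. The sketch below is not ACONITE itself: it uses a hypothetical saturating GPP curve and a constant per-unit leaf cost purely to show the stopping rule:

```python
import math

# Hypothetical parameters (not from ACONITE): saturating canopy
# photosynthesis GPP(L) = g_max * (1 - exp(-k * L)) for leaf area index L,
# and a constant carbon cost per unit leaf area (construction + respiration).
g_max = 30.0   # maximum GPP
k = 0.5        # curvature (light-extinction-like)
cost = 3.0     # marginal C cost of one unit of leaf area

def marginal_gain(lai):
    """Derivative of GPP with respect to leaf area index."""
    return g_max * k * math.exp(-k * lai)

# Grow the canopy while an extra increment of leaf area still pays for itself.
lai, step = 0.0, 1e-3
while marginal_gain(lai) > cost:
    lai += step

# The analytic optimum sets marginal gain equal to marginal cost:
# L* = ln(g_max * k / cost) / k
lai_analytic = math.log(g_max * k / cost) / k
print(round(lai, 2), round(lai_analytic, 2))
```

With a saturating gain curve the marginal return falls monotonically, so the incremental search and the closed-form optimum coincide.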

  5. Application of satellite precipitation data to analyse and model arbovirus activity in the tropics

    Directory of Open Access Journals (Sweden)

    Corner Robert J

    2011-01-01

    Full Text Available Abstract Background Murray Valley encephalitis virus (MVEV) is a mosquito-borne flavivirus (Flaviviridae: Flavivirus) which is closely related to Japanese encephalitis virus, West Nile virus and St. Louis encephalitis virus. MVEV is enzootic in northern Australia and Papua New Guinea and epizootic in other parts of Australia. Activity of MVEV in Western Australia (WA) is monitored by detection of seroconversions in flocks of sentinel chickens at selected sample sites throughout WA. Rainfall is a major environmental factor influencing MVEV activity. Utilising data on rainfall and seroconversions, statistical relationships between MVEV occurrence and rainfall can be determined. These relationships can be used to predict MVEV activity, which, in turn, provides the general public with important information about disease transmission risk. Since ground measurements of rainfall are sparse and irregularly distributed, especially in northern WA where rainfall is spatially and temporally highly variable, remote sensing (RS) data represent an attractive alternative to ground measurements. However, a number of competing products are available and careful evaluation is essential to determine the most appropriate one for a given problem. Results The Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 product was chosen from a range of RS rainfall products to develop rainfall-based predictor variables and build logistic regression models for the prediction of MVEV activity in the Kimberley and Pilbara regions of WA. Two models employing monthly time-lagged rainfall variables showed the strongest discriminatory ability, with values of 0.74 and 0.80 as measured by the Receiver Operating Characteristic area under the curve (ROC AUC). Conclusions TMPA data provide a state-of-the-art data source for the development of rainfall-based predictive models for flavivirus activity in tropical WA. 
Compared to
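The study's core statistical step, a logistic regression on time-lagged rainfall scored by ROC AUC, can be sketched in miniature. The rainfall values, the 5-unit threshold, and the single-lag design below are invented stand-ins, not the TMPA-based predictors used in the paper:

```python
import math
import random

random.seed(42)

# Invented stand-in data: one lagged monthly rainfall predictor and a binary
# seroconversion outcome that is positive whenever rainfall exceeds 5 units.
n = 200
rain = [random.uniform(0.0, 10.0) for _ in range(n)]
y = [1 if r > 5.0 else 0 for r in rain]

def sigmoid(z):
    # Numerically stable logistic function.
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

# Fit logistic regression by plain gradient descent on the log-loss.
w, b, lr = 0.0, 0.0, 0.05
for _ in range(500):
    gw = sum((sigmoid(w * r + b) - t) * r for r, t in zip(rain, y)) / n
    gb = sum((sigmoid(w * r + b) - t) for r, t in zip(rain, y)) / n
    w -= lr * gw
    b -= lr * gb

scores = [w * r + b for r in rain]

def roc_auc(scores, labels):
    """ROC AUC computed as the concordance probability over pos/neg pairs."""
    pos = [s for s, t in zip(scores, labels) if t == 1]
    neg = [s for s, t in zip(scores, labels) if t == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

print(round(w, 3), round(roc_auc(scores, y), 3))
```

An AUC of 0.74-0.80, as reported for the TMPA-based models, would correspond to the classifier ranking a random positive month above a random negative month 74-80% of the time.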

  6. IMPROVEMENTS IN HANFORD TRANSURANIC (TRU) PROGRAM UTILIZING SYSTEMS MODELING AND ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    UYTIOCO EM

    2007-11-12

    Hanford's Transuranic (TRU) Program is responsible for certifying contact-handled (CH) TRU waste and shipping the certified waste to the Waste Isolation Pilot Plant (WIPP). Hanford's CH TRU waste includes material that is in retrievable storage as well as above ground storage, and newly generated waste. Certifying a typical container entails retrieving and then characterizing it (Real-Time Radiography, Non-Destructive Assay, and Head Space Gas Sampling), validating records (data review and reconciliation), and designating the container for a payload. The certified payload is then shipped to WIPP. Systems modeling and analysis techniques were applied to Hanford's TRU Program to help streamline the certification process and increase shipping rates.

  7. Analysing green supply chain management practices in Brazil's electrical/electronics industry using interpretive structural modelling

    DEFF Research Database (Denmark)

    Govindan, Kannan; Kannan, Devika; Mathiyazhagan, K.

    2013-01-01

    Industries need to adopt environmental management concepts in traditional supply chain management. Green supply chain management (GSCM) is an established concept to ensure environment-friendly activities in industry. This paper identifies the relationships of driving and dependence that exist between GSCM practices with regard to their adoption within the Brazilian electrical/electronics industry, with the help of interpretive structural modelling (ISM). From the results, we infer that cooperation with customers for eco-design is the practice driving the other practices, and that it plays a vital role among them. Commitment to GSCM from senior managers and cooperation with customers for cleaner production occupy the highest level. © 2013 Taylor & Francis.

  8. Statistical Analyses and Modeling of the Implementation of Agile Manufacturing Tactics in Industrial Firms

    Directory of Open Access Journals (Sweden)

    Mohammad D. AL-Tahat

    2012-01-01

    Full Text Available This paper provides a review of and introduction to agile manufacturing. Tactics of agile manufacturing are mapped into different production areas (eight latent constructs: manufacturing equipment and technology, process technology and know-how, quality and productivity improvement, production planning and control, shop floor management, product design and development, supplier relationship management, and customer relationship management). The implementation level of agile manufacturing tactics is investigated in each area. A structural equation model is proposed and hypotheses are formulated. Feedback from 456 firms is collected using a five-point Likert-scale questionnaire. Statistical analysis is carried out using IBM SPSS and AMOS. Multicollinearity, content validity, consistency, construct validity, ANOVA, and relationships between agile components are tested. The results of this study show that agile manufacturing tactics have a positive effect on the overall agility level. This conclusion can be used by manufacturing firms to manage challenges when trying to be agile.

  9. Reporting Results from Structural Equation Modeling Analyses in Archives of Scientific Psychology.

    Science.gov (United States)

    Hoyle, Rick H; Isherwood, Jennifer C

    2013-02-01

    Psychological research typically involves the analysis of data (e.g., questionnaire responses, records of behavior) using statistical methods. The description of how those methods are used and the results they produce is a key component of scholarly publications. Despite their importance, these descriptions are not always complete and clear. In order to ensure the completeness and clarity of these descriptions, the Archives of Scientific Psychology requires that authors of manuscripts to be considered for publication adhere to a set of publication standards. Although the current standards cover most of the statistical methods commonly used in psychological research, they do not cover them all. In this manuscript, we propose adjustments to the current standards and the addition of new standards for a statistical method not adequately covered at present: structural equation modeling (SEM). Adherence to the standards we propose would ensure that scholarly publications reporting results of data analyzed using SEM are complete and clear.

  10. Personality change over 40 years of adulthood: hierarchical linear modeling analyses of two longitudinal samples.

    Science.gov (United States)

    Helson, Ravenna; Jones, Constance; Kwan, Virginia S Y

    2002-09-01

    Normative personality change over 40 years was shown in 2 longitudinal cohorts with hierarchical linear modeling of California Psychological Inventory data obtained at multiple times between ages 21 and 75. Although themes of change and the paucity of differences attributable to gender and cohort largely supported findings of multiethnic cross-sectional samples, the authors also found much quadratic change and much individual variability. The form of quadratic change supported predictions about the influence of period of life and social climate as factors in change over the adult years: Scores on Dominance and Independence peaked in the middle age of both cohorts, and scores on Responsibility were lowest during peak years of the culture of individualism. The idea that personality change is most pronounced before age 30 and then reaches a plateau received no support.
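The quadratic change reported above (scores peaking in middle age) is, in essence, a second-order polynomial growth curve. A toy illustration of recovering such a peak from simulated scores rather than CPI data (the peak age of 52 and all coefficients are invented, and the pooled fit stands in for the fixed-effects part of a hierarchical model):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated scores: a quadratic trajectory peaking at age 52, plus noise.
# All numbers are illustrative, not CPI estimates.
ages = rng.uniform(21, 75, size=500)
scores = 60.0 - 0.02 * (ages - 52.0) ** 2 + rng.normal(0.0, 1.0, size=500)

# Fit score = a*age^2 + b*age + c (highest-degree coefficient first).
a, b, c = np.polyfit(ages, scores, 2)

# A negative quadratic term with vertex -b/(2a) in mid-life reproduces the
# "peaks in middle age" pattern.
peak_age = -b / (2.0 * a)
print(round(float(peak_age), 1))
```

In a full hierarchical analysis, each person would additionally get random intercept and slope terms around this population curve.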

  11. Exploring prospective secondary mathematics teachers' interpretation of student thinking through analysing students' work in modelling

    Science.gov (United States)

    Didis, Makbule Gozde; Erbas, Ayhan Kursat; Cetinkaya, Bulent; Cakiroglu, Erdinc; Alacaci, Cengiz

    2016-09-01

    Researchers point out the importance of teachers' knowledge of student thinking and the role of examining student work in various contexts to develop a knowledge base regarding students' ways of thinking. This study investigated prospective secondary mathematics teachers' interpretations of students' thinking as manifested in students' work that embodied solutions of mathematical modelling tasks. The data were collected from 25 prospective mathematics teachers enrolled in an undergraduate course through four 2-week-long cycles. Analysis of the data revealed that the prospective teachers interpreted students' thinking in four ways: describing, questioning, explaining, and comparing. Moreover, whereas some of the prospective teachers tended to attend more to the meaning of students' ways of thinking as they engaged with students' work in depth over time and with experience, others continued to focus only on judging the accuracy of students' thinking. The implications of the findings for understanding and developing prospective teachers' ways of interpreting students' thinking are discussed.

  12. The usefulness of optical analyses for detecting vulnerable plaques using rabbit models

    Science.gov (United States)

    Nakai, Kanji; Ishihara, Miya; Kawauchi, Satoko; Shiomi, Masashi; Kikuchi, Makoto; Kaji, Tatsumi

    2011-03-01

    Purpose: Carotid artery stenting (CAS) has become a widely used option for treatment of carotid stenosis. Although technical improvements have led to a decrease in complications related to CAS, distal embolism continues to be a problem. The purpose of this research was to investigate the usefulness of optical methods (Time-Resolved Laser-Induced Fluorescence Spectroscopy [TR-LIFS] and reflection spectroscopy [RS]) as diagnostic tools for assessment of vulnerable atherosclerotic lesions, using rabbit models of vulnerable plaque. Materials & Methods: Male Japanese white rabbits were divided into a high-cholesterol-diet group and a normal-diet group. In addition, we used a Watanabe heritable hyperlipidemic (WHHL) rabbit to confirm the reliability of our animal model for this study. Experiment 1: TR-LIFS. Fluorescence was induced using the third harmonic of a Q-switched Nd:YAG laser. The TR-LIFS was performed using a photonic multi-channel analyzer with an ICCD (wavelength range, 200-860 nm). Experiment 2: RS. Reflection spectra in the wavelength range of 900 to 1700 nm were acquired using a spectrometer. Results: In the TR-LIFS, the peak wavelength shifted to longer wavelengths with plaque formation. The TR-LIFS method revealed a difference in peak levels between a normal aorta and a lipid-rich aorta. The RS method showed increased absorption from 1450 to 1500 nm for lipid-rich plaques. We observed absorption around 1200 nm due to lipid only in the WHHL group. Conclusion: These optical analysis methods might be useful for diagnosis of vulnerable plaques. Keywords: Carotid artery stenting, vulnerable plaque, Time-Resolved Laser-Induced Fluorescence

  13. Comparison of statistical inferences from the DerSimonian-Laird and alternative random-effects model meta-analyses - an empirical assessment of 920 Cochrane primary outcome meta-analyses

    DEFF Research Database (Denmark)

    Thorlund, Kristian; Wetterslev, Jørn; Awad, Tahany;

    2011-01-01

    In random-effects model meta-analysis, the conventional DerSimonian-Laird (DL) estimator typically underestimates the between-trial variance. Alternative variance estimators have been proposed to address this bias. This study aims to empirically compare statistical inferences from random-effects model meta-analyses on the basis of the DL estimator and four alternative estimators, as well as distributional assumptions (normal distribution and t-distribution) about the pooled intervention effect. We evaluated the discrepancies of p-values, 95% confidence intervals (CIs) in statistically significant meta-analyses, and the degree (percentage) of statistical heterogeneity (e.g. I(2)) across 920 Cochrane primary outcome meta-analyses. In total, 414 of the 920 meta-analyses were statistically significant with the DL meta-analysis, and 506 were not. Compared with the DL estimator, the four
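The DerSimonian-Laird estimator discussed above has a closed form: with study effects y_i and within-study variances v_i, fixed-effect weights w_i = 1/v_i give Cochran's Q, and tau² = max(0, (Q − (k−1)) / (Σw − Σw²/Σw)). A small sketch with made-up study data:

```python
# DerSimonian-Laird between-trial variance for a toy meta-analysis.
# Effect sizes and within-study variances below are invented for illustration.
effects = [0.0, 0.5, 1.0]
variances = [0.04, 0.04, 0.04]

k = len(effects)
w = [1.0 / v for v in variances]              # fixed-effect weights
sw = sum(w)
ybar = sum(wi * yi for wi, yi in zip(w, effects)) / sw   # pooled estimate

# Cochran's Q measures the spread of study effects around the pooled mean.
Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, effects))

# DL moment estimator, truncated at zero because a variance cannot be negative.
tau2 = max(0.0, (Q - (k - 1)) / (sw - sum(wi ** 2 for wi in w) / sw))
print(tau2)
```

The random-effects weights are then 1/(v_i + tau²); underestimating tau², as the abstract notes the DL estimator tends to, makes those weights (and the resulting CIs) overconfident.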

  14. In silico analyses of dystrophin Dp40 cellular distribution, nuclear export signals and structure modeling

    Directory of Open Access Journals (Sweden)

    Alejandro Martínez-Herrera

    2015-09-01

    Full Text Available Dystrophin Dp40 is the shortest protein encoded by the DMD (Duchenne muscular dystrophy) gene. This protein is unique since it lacks the C-terminal end of dystrophins. In this data article, we describe the subcellular localization, nuclear export signals (NES) and three-dimensional structure modeling of putative Dp40 proteins using bioinformatics tools. The wild-type Dp40 protein was predicted to be cytoplasmic, while Dp40n4 was predicted to be nuclear. The changes L93P and L170P are involved in the nuclear localization of the Dp40n4 protein. A closer analysis showed that amino acids 93-101 (LEQEHNNLV) and 168-176 (LLLHDSIQI) could function as NES sequences, and the NES scores are lost in Dp40n4. In addition, the changes L93/170P modify the tertiary structure of the putative Dp40 mutants. The analysis showed that changing residues 93 and 170 from leucine to proline allows the nuclear localization of Dp40 proteins. The data described here are related to the research article entitled "EF-hand domains are involved in the differential cellular distribution of dystrophin Dp40" (J. Aragón et al., Neurosci. Lett. 600 (2015) 115-120) [1].

  15. Analysing hydro-mechanical behaviour of reinforced slopes through centrifuge modelling

    Science.gov (United States)

    Veenhof, Rick; Wu, Wei

    2017-04-01

    Every year, slope instability causes casualties and damage to property and the environment. The behaviour of slopes during and after these kinds of events is complex and depends on meteorological conditions, slope geometry, hydro-mechanical soil properties, boundary conditions and the initial state of the soils. This study describes the effects of adding reinforcement, consisting of randomly distributed polyolefin monofilament fibres or Ryegrass (Lolium), on the behaviour of medium-fine sand in loose and medium dense conditions. Direct shear tests were performed on sand specimens with different void ratios, water content and fibre or root density, respectively. To simulate the stress state of real-scale field situations, centrifuge model tests were conducted on sand specimens with different slope angles, thickness of the reinforced layer, fibre density, void ratio and water content. An increase in peak shear strength is observed in all reinforced cases. Centrifuge tests show that for reinforced slopes, the period until failure is extended. The location of shear band formation and patch displacement behaviour indicate that the design of slope reinforcement has a significant effect on the failure behaviour. Future research will focus on the effect of plant water uptake on soil cohesion.

  16. Biomechanical analyses of prosthetic mesh repair in a hiatal hernia model.

    Science.gov (United States)

    Alizai, Patrick Hamid; Schmid, Sofie; Otto, Jens; Klink, Christian Daniel; Roeth, Anjali; Nolting, Jochen; Neumann, Ulf Peter; Klinge, Uwe

    2014-10-01

    Recurrence rate of hiatal hernia can be reduced with prosthetic mesh repair; however, type and shape of the mesh are still a matter of controversy. The purpose of this study was to investigate the biomechanical properties of four conventional meshes: pure polypropylene mesh (PP-P), polypropylene/poliglecaprone mesh (PP-U), polyvinylidenefluoride/polypropylene mesh (PVDF-I), and pure polyvinylidenefluoride mesh (PVDF-S). Meshes were tested either in warp direction (parallel to production direction) or perpendicular to the warp direction. A Zwick testing machine was used to measure elasticity and effective porosity of the textile probes. Stretching of the meshes in warp direction required forces that were up to 85-fold higher than the same elongation in perpendicular direction. Stretch stress led to loss of effective porosity in most meshes, except for PVDF-S. Biomechanical impact of the mesh was additionally evaluated in a hiatal hernia model. The different meshes were used either as rectangular patches or as circular meshes. Circular meshes led to a significant reinforcement of the hiatus, largely unaffected by the orientation of the warp fibers. In contrast, rectangular meshes provided a significant reinforcement only when warp fibers ran perpendicular to the crura. Anisotropic elasticity of prosthetic meshes should therefore be considered in hiatal closure with rectangular patches.

  17. Alpins and Thibos vectorial astigmatism analyses: proposal of a linear regression model between methods

    Directory of Open Access Journals (Sweden)

    Giuliano de Oliveira Freitas

    2013-10-01

    Full Text Available PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assigned to two phacoemulsification groups: one received an AcrySof® Toric intraocular lens (IOL) in both eyes and the other received an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between postoperative and preoperative Thibos APV (APVratio) and its linear regression against the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected, and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the APVratio and the Alpins percentage of success (%Success) was found (Spearman's ρ = -0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) x 100. CONCLUSION: The linear regression we found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.
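The reported regression has a direct reading: an APVratio of 1 (postoperative astigmatic power vector unchanged from baseline) maps to 0% success, and a ratio of 0 (astigmatism fully eliminated) maps to 100%. A one-line sketch of the fitted relationship:

```python
def percent_success(apv_ratio: float) -> float:
    """Alpins %Success predicted from the Thibos APV ratio via the
    paper's fitted line: %Success = (-APVratio + 1.00) x 100."""
    return (-apv_ratio + 1.00) * 100.0

# Unchanged astigmatism -> 0% success; three-quarters reduced -> 75%.
print(percent_success(1.00), percent_success(0.25))
```

Note the line is only a fitted approximation (Spearman's ρ = -0.93), so individual eyes will scatter around it.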

  18. Fungal-Induced Deterioration of Mural Paintings: In Situ and Mock-Model Microscopy Analyses.

    Science.gov (United States)

    Unković, Nikola; Grbić, Milica Ljaljević; Stupar, Miloš; Savković, Željko; Jelikić, Aleksa; Stanojević, Dragan; Vukojević, Jelena

    2016-04-01

    Fungal deterioration of frescoes was studied in situ on a selected Serbian church, and on a laboratory model, utilizing standard and newly implemented microscopy techniques. Scanning electron microscopy (SEM) with energy-dispersive X-ray confirmed the limestone components of the plaster. Pigments used were identified as carbon black, green earth, iron oxide, ocher, and an ocher/cinnabar mixture. In situ microscopy, applied via a portable microscope ShuttlePix P-400R, proved very useful for detection of invisible micro-impairments and hidden, symptomless, microbial growth. SEM and optical microscopy established that the observed deterioration symptoms, predominantly discoloration and pulverization of painted layers, were due to bacterial filaments and fungal hyphal penetration, and to the formation of a wide range of fungal structures (i.e., melanized hyphae, chlamydospores, microcolonial clusters, Cladosporium-like conidia, and Chaetomium perithecia and ascospores). The year-round monitoring of spontaneous and induced fungal colonization of a "mock painting" under controlled laboratory conditions confirmed the decisive role of humidity level (70.18±6.91% RH) in efficient colonization of painted surfaces, and demonstrated increased bioreceptivity of painted surfaces to fungal colonization when plant-based adhesives (ilinocopie, murdent) are used for pigment sizing, compared with adhesives of animal origin (bone glue, egg white).

  19. Analysing movements in investor’s risk aversion using the Heston volatility model

    Directory of Open Access Journals (Sweden)

    Alexie ALUPOAIEI

    2013-03-01

    Full Text Available In this paper we intend to identify and analyze, if present, an "epidemiological" relationship between the forecasts of professional investors and short-term developments in the EUR/RON exchange rate. Although we do not employ a typical epidemiological model of the kind used in biological research, we investigate the hypothesis that, after the Lehman Brothers collapse and the onset of the current financial crisis, the forecasts of professional investors have significant explanatory power for short-run movements of EUR/RON futures. How does this mechanism work? First, professional forecasters account for the current macroeconomic, financial and political conditions, and then they produce forecasts. Second, based on those forecasts, they take positions in the Romanian exchange market for hedging and/or speculative purposes. Their positions, however, incorporate different degrees of uncertainty. In parallel, part of their anticipations are disseminated to the public via media channels. When important movements are observed in the macroeconomic, financial or political spheres, the positions of professional investors in the FX derivatives market are activated. The current study represents a first step in that direction of analysis for the Romanian case. For the objectives formulated above, different measures of EUR/RON rate volatility have been estimated and compared with implied volatilities. In a second stage, we employed cointegration and dynamic-correlation tools to investigate the relationship between implied volatility and daily returns of the EUR/RON exchange rate.
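The Heston model named in the title specifies a mean-reverting stochastic variance, dv_t = κ(θ − v_t)dt + ξ√v_t dW_t, alongside the price process. A minimal Euler simulation of the variance path, with all parameter values purely illustrative rather than estimates for EUR/RON:

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative Heston variance parameters (not fitted to EUR/RON data).
kappa = 2.0    # speed of mean reversion
theta = 0.04   # long-run variance
xi = 0.3       # volatility of variance
v0 = 0.04
T, dt = 50.0, 0.01
n = int(T / dt)

# Euler scheme with full truncation: use max(v, 0) inside the square root
# so the discretized variance never produces an invalid square root.
v = np.empty(n + 1)
v[0] = v0
for i in range(n):
    vp = max(v[i], 0.0)
    dw = rng.normal(0.0, np.sqrt(dt))
    v[i + 1] = v[i] + kappa * (theta - vp) * dt + xi * np.sqrt(vp) * dw

# Mean reversion pulls the time-averaged variance toward theta.
print(round(float(v.mean()), 3))
```

Implied-volatility series of the kind compared in the paper can then be thought of as market expectations of the path of √v.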

  20. Model-based analyses of bioequivalence crossover trials using the stochastic approximation expectation maximisation algorithm.

    Science.gov (United States)

    Dubois, Anne; Lavielle, Marc; Gsteiger, Sandro; Pigeolet, Etienne; Mentré, France

    2011-09-20

    In this work, we develop a bioequivalence analysis using nonlinear mixed effects models (NLMEM) that mimics the standard noncompartmental analysis (NCA). We estimate NLMEM parameters, including between-subject and within-subject variability and treatment, period and sequence effects. We explain how to perform a Wald test on a secondary parameter, and we propose an extension of the likelihood ratio test for bioequivalence. We compare these NLMEM-based bioequivalence tests with standard NCA-based tests. We evaluate by simulation the NCA and NLMEM estimates and the type I error of the bioequivalence tests. For NLMEM, we use the stochastic approximation expectation maximisation (SAEM) algorithm implemented in Monolix. We simulate crossover trials under H(0) using different numbers of subjects and of samples per subject. We simulate with different settings for between-subject and within-subject variability and for the residual error variance. The simulation study illustrates the accuracy of NLMEM-based geometric means estimated with the SAEM algorithm, whereas the NCA estimates are biased for sparse designs. NCA-based bioequivalence tests show good type I error except for high variability. For a rich design, type I errors of NLMEM-based bioequivalence tests (Wald test and likelihood ratio test) do not differ from the nominal level of 5%. Type I errors are inflated for sparse designs. We apply the bioequivalence Wald test based on NCA and NLMEM estimates to a three-way crossover trial, showing that Omnitrope® (Sandoz GmbH, Kundl, Austria) powder and solution are bioequivalent to Genotropin® (Pfizer Pharma GmbH, Karlsruhe, Germany). NLMEM-based bioequivalence tests are an alternative to standard NCA-based tests. However, caution is needed for small sample sizes and highly variable drugs.
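As a point of reference for the bioequivalence testing discussed above, the standard average-bioequivalence criterion compares a 90% confidence interval for the geometric mean ratio against the 0.80-1.25 limits on the log scale. The sketch below is a deliberately simplified paired version with invented exposure values and a large-sample normal quantile, not the NLMEM/SAEM machinery of the paper:

```python
import math
import statistics

# Invented per-subject exposure (e.g. AUC) under reference and test
# formulations in a crossover; values are illustrative only.
ref = [10.0, 12.0, 11.0, 9.0, 10.5, 11.5, 10.2, 9.8]
test = [r * f for r, f in zip(ref, [1.02, 0.98, 1.05, 0.97,
                                    1.01, 0.99, 1.03, 0.96])]

# Within-subject log differences; their mean estimates log(GMR).
d = [math.log(t) - math.log(r) for t, r in zip(test, ref)]
n = len(d)
mean_d = statistics.fmean(d)
se = statistics.stdev(d) / math.sqrt(n)

# 90% CI with a normal quantile (a large-sample simplification; a
# t-quantile would be used for n = 8 in practice).
z = 1.645
lo, hi = mean_d - z * se, mean_d + z * se

# Average bioequivalence: the whole CI must sit inside [log 0.8, log 1.25].
bioequivalent = math.log(0.8) < lo and hi < math.log(1.25)
print(bioequivalent)
```

The NLMEM approach of the paper replaces the per-subject NCA exposures with model-estimated treatment effects, but the final comparison against the 0.80-1.25 limits is the same.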

  1. Tropical cyclones in a T159 resolution global climate model: comparison with observations and re-analyses

    Science.gov (United States)

    Bengtsson, L.; Hodges, K. I.; Esch, M.

    2007-08-01

    Tropical cyclones have been investigated in a T159 version of the MPI ECHAM5 climate model using a novel technique to diagnose the evolution of the three-dimensional vorticity structure of tropical cyclones, including their full life cycle from weak initial vortices to their possible extra-tropical transition. Results have been compared with re-analyses [the European Centre for Medium-Range Weather Forecasts (ECMWF) 40-yr Re-analysis (ERA40) and the Japanese 25-yr re-analysis (JRA25)] and observed tropical storms during the period 1978-1999 for the Northern Hemisphere. There is no indication of any trend in the number or intensity of tropical storms during this period in ECHAM5 or in the re-analyses, but there are distinct inter-annual variations. The storms simulated by ECHAM5 are realistic both in space and time, but the model, and even more so the re-analyses, underestimate the intensities of the most intense storms (in terms of their maximum wind speeds). There is an indication of a response to the El Niño-Southern Oscillation (ENSO), with a smaller number of Atlantic storms during El Niño, in agreement with previous studies. The global divergence circulation responds to El Niño by setting up a large-scale convergence flow, with the centre over the central Pacific and enhanced subsidence over the tropical Atlantic. At the same time there is an increase in the vertical wind shear in the region of the tropical Atlantic where tropical storms normally develop. There is a good correspondence between the model and ERA40 except that the divergence circulation is somewhat stronger in the model. The model underestimates storms in the Atlantic but tends to overestimate them in the Western Pacific and in the North Indian Ocean. It is suggested that the overestimation of storms in the Pacific by the model is related to an overly strong response to the tropical Pacific sea surface temperature (SST) anomalies. The overestimation in the North Indian Ocean is likely to be due to an over

  2. Pathophysiologic and transcriptomic analyses of viscerotropic yellow fever in a rhesus macaque model.

    Science.gov (United States)

    Engelmann, Flora; Josset, Laurence; Girke, Thomas; Park, Byung; Barron, Alex; Dewane, Jesse; Hammarlund, Erika; Lewis, Anne; Axthelm, Michael K; Slifka, Mark K; Messaoudi, Ilhem

    2014-01-01

    Infection with yellow fever virus (YFV), an explosively replicating flavivirus, results in viral hemorrhagic disease characterized by cardiovascular shock and multi-organ failure. Unvaccinated populations experience 20 to 50% fatality. Few studies have examined the pathophysiological changes that occur in humans during YFV infection due to the sporadic nature and remote locations of outbreaks. Rhesus macaques are highly susceptible to YFV infection, providing a robust animal model to investigate host-pathogen interactions. In this study, we characterized disease progression as well as alterations in immune system homeostasis, cytokine production and gene expression in rhesus macaques infected with the virulent YFV strain DakH1279 (YFV-DakH1279). Following infection, YFV-DakH1279 replicated to high titers resulting in viscerotropic disease with ∼72% mortality. Data presented in this manuscript demonstrate for the first time that lethal YFV infection results in profound lymphopenia that precedes the hallmark changes in liver enzymes and that although tissue damage was noted in liver, kidneys, and lymphoid tissues, viral antigen was only detected in the liver. These observations suggest that additional tissue damage could be due to indirect effects of viral replication. Indeed, circulating levels of several cytokines peaked shortly before euthanasia. Our study also includes the first description of YFV-DakH1279-induced changes in gene expression within peripheral blood mononuclear cells 3 days post-infection prior to any clinical signs. These data show that infection with wild type YFV-DakH1279 or live-attenuated vaccine strain YFV-17D, resulted in 765 and 46 differentially expressed genes (DEGs), respectively. DEGs detected after YFV-17D infection were mostly associated with innate immunity, whereas YFV-DakH1279 infection resulted in dysregulation of genes associated with the development of immune response, ion metabolism, and apoptosis. 
Therefore, WT-YFV infection

  3. [Selection of a statistical model for the evaluation of the reliability of the results of toxicological analyses. II. Selection of our statistical model for the evaluation].

    Science.gov (United States)

    Antczak, K; Wilczyńska, U

    1980-01-01

Part II presents the statistical model devised by the authors for evaluating the results of toxicological analyses. The model comprises: 1. Establishment of a reference value, based on the authors' own measurements taken by two independent analytical methods. 2. Selection of laboratories, based on the deviation of the obtained values from the reference ones. 3. Evaluation of subsequent quality controls and of particular laboratories by means of analysis of variance, Student's t-test, and a test of differences.
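The evaluation steps above (reference value, deviation screening, significance testing) can be sketched as follows. This is an illustrative reconstruction, not the authors' published procedure; the data, the acceptance rule, and the function name are assumptions.

```python
from math import sqrt
from statistics import mean, stdev

def evaluate_lab(measurements, reference, t_crit):
    """One-sample t-test of a laboratory's results against a reference value.

    Returns (mean deviation from reference, t statistic, accepted?).
    A lab is 'accepted' when its mean does not differ significantly
    from the reference at the chosen critical value.
    """
    n = len(measurements)
    m = mean(measurements)
    se = stdev(measurements) / sqrt(n)   # standard error of the mean
    t = (m - reference) / se
    return m - reference, t, abs(t) <= t_crit

# Hypothetical control sample with a reference value of 50.0 units;
# t_crit = 2.776 is the two-sided 5% critical value for df = 4.
dev, t, ok = evaluate_lab([49.2, 50.1, 48.8, 49.5, 50.3], 50.0, t_crit=2.776)
```

Here the laboratory's mean deviates by -0.42 units but the deviation is not significant (|t| ≈ 1.51 < 2.776), so the lab would pass this screening step.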

  4. An assessment of the wind re-analyses in the modelling of an extreme sea state in the Black Sea

    Science.gov (United States)

    Akpinar, Adem; Ponce de León, S.

    2016-03-01

This study aims at an assessment of wind re-analyses for modelling storms in the Black Sea. A wind-wave modelling system (Simulating WAves Nearshore, SWAN) is applied to the Black Sea basin and calibrated with buoy data for three recent re-analysis wind sources, namely the European Centre for Medium-Range Weather Forecasts Reanalysis-Interim (ERA-Interim), the Climate Forecast System Reanalysis (CFSR), and the Modern Era Retrospective Analysis for Research and Applications (MERRA), during an extreme wave event that occurred in the northeastern part of the Black Sea. The SWAN model simulations are carried out with both default and tuned settings for the deep-water source terms, especially whitecapping. The performance of the best model configurations, based on calibration against buoy data, is discussed using data from the JASON2, TOPEX-Poseidon, ENVISAT and GFO satellites. The SWAN model calibration shows that the best configuration is obtained with the Janssen and Komen formulations for wave generation by wind and whitecapping dissipation, with a whitecapping coefficient (Cds) equal to 1.8e-5, using ERA-Interim winds. From the SWAN results collocated against the satellite records, however, the best configuration is the one using the CFSR winds. The numerical results thus show that the accuracy of a wave forecast depends on the quality of the wind field and on the ability of the SWAN model to simulate waves under extreme wind conditions in fetch-limited situations.
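Calibration and validation of the kind described above ultimately reduce to error statistics between modelled and observed significant wave heights. The sketch below computes the usual bias, RMSE and scatter index for collocated pairs; the wave-height values are invented for illustration, not taken from the study.

```python
from math import sqrt

def error_stats(model, obs):
    """Bias, RMSE and scatter index (RMSE normalised by the
    observed mean) for collocated model/observation pairs."""
    n = len(obs)
    diffs = [m - o for m, o in zip(model, obs)]
    bias = sum(diffs) / n
    rmse = sqrt(sum(d * d for d in diffs) / n)
    si = rmse / (sum(obs) / n)
    return bias, rmse, si

# Hypothetical significant wave heights (m): model vs. buoy.
model_hs = [2.1, 3.4, 4.0, 5.2]
obs_hs = [2.0, 3.0, 4.5, 5.0]
bias, rmse, si = error_stats(model_hs, obs_hs)
```

Comparing these statistics across wind sources (ERA-Interim, CFSR, MERRA) is how one configuration is judged "best" against buoys or altimeters.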

  5. Modeling and stress analyses of a normal foot-ankle and a prosthetic foot-ankle complex.

    Science.gov (United States)

    Ozen, Mustafa; Sayman, Onur; Havitcioglu, Hasan

    2013-01-01

Total ankle replacement (TAR) is a relatively new concept and is becoming more popular for the treatment of ankle arthritis and fractures. Because of the high costs and difficulties of experimental studies, the development of TAR prostheses is progressing slowly. For this reason, medical imaging techniques such as CT and MR have become more and more useful. The finite element method (FEM) is a widely used technique for estimating the mechanical behavior of materials and structures in engineering applications. FEM has also been increasingly applied to biomechanical analyses of human bones, tissues and organs, thanks to developments in both computing capabilities and medical imaging techniques. 3-D finite element models of the human foot and ankle reconstructed from MR and CT images have been investigated by several authors. In this study, the geometries of a normal and a prosthetic foot and ankle used in the modeling were obtained from a 3D reconstruction of CT images. The segmentation software MIMICS was used to generate the 3D images of the bony structures, soft tissues and prosthesis components of the normal and prosthetic ankle-foot complexes. Except for the fused spaces between the adjacent surfaces of the phalanges, the metatarsals, cuneiforms, cuboid, navicular, talus and calcaneus bones, soft tissues and prosthesis components were developed independently to form the foot and ankle complex. The SOLIDWORKS program was used to form the boundary surfaces of all model components, and the solid models were then obtained from these boundary surfaces. The finite element analysis software ABAQUS was used to perform the numerical stress analyses of these models for the balanced standing position. Plantar pressure and von Mises stress distributions of the normal and prosthetic ankles were compared with each other.
There was a peak pressure increase at the 4th metatarsal, first metatarsal and talus bones and a decrease at the intermediate cuneiform and calcaneus bones, in

  6. Experiments and sensitivity analyses for heat transfer in a meter-scale regularly fractured granite model with water flow

    Institute of Scientific and Technical Information of China (English)

    Wei LU; Yan-yong XIANG

    2012-01-01

Experiments on saturated water flow and heat transfer were conducted for a meter-scale model of regularly fractured granite. The fractured rock model (height 1502.5 mm, width 904 mm, and thickness 300 mm), embedded with two vertical and two horizontal fractures of pre-set apertures, was constructed using 18 pieces of intact granite. The granite was taken from a site currently being investigated for a high-level nuclear waste repository in China. The experiments involved different heat source temperatures and vertical water fluxes, with the embedded fractures either open or filled with sand. A finite difference scheme and computer code for the calculation of water flow and heat transfer in regularly fractured rocks was developed, verified against both the experimental data and calculations from the TOUGH2 code, and employed for parametric sensitivity analyses. The experiments revealed that, among other things, the temperature distribution was influenced by water flow in the fractures, especially the water flow in the vertical fracture adjacent to the heat source, and that heat conduction between the neighboring rock blocks in the model with sand-filled fractures was enhanced by the sand, with a larger range of influence of the heat source and a longer time for approaching the asymptotic steady state than in the model with open fractures. The temperatures from the experiments were in general slightly lower than those from the numerical calculations, probably because a certain amount of outward heat transfer at the model perimeter was unavoidable in the experiments. The parametric sensitivity analyses indicated that the temperature distribution was highly sensitive to water flow in the fractures, and that the water temperature in the vertical fracture adjacent to the heat source was rather insensitive to water flow in the other fractures.
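The conduction part of a finite difference scheme like the one the authors describe can be illustrated with a minimal 1D explicit (FTCS) solver. This is a sketch only: the grid, thermal diffusivity and boundary temperatures are invented, and the actual code additionally coupled conduction to water flow in the fractures.

```python
def heat_1d(n, alpha, dx, dt, t_left, t_right, t_init, steps):
    """Explicit FTCS scheme for 1D transient heat conduction with
    fixed-temperature boundaries. Stability requires r <= 0.5."""
    r = alpha * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this dt/dx"
    temp = [t_left] + [t_init] * (n - 2) + [t_right]
    for _ in range(steps):
        temp = ([t_left] +
                [temp[i] + r * (temp[i - 1] - 2 * temp[i] + temp[i + 1])
                 for i in range(1, n - 1)] +
                [t_right])
    return temp

# Hypothetical rock slab held at 80 degC on the heated side, 20 degC opposite.
profile = heat_1d(n=11, alpha=1e-6, dx=0.01, dt=40.0,
                  t_left=80.0, t_right=20.0, t_init=20.0, steps=2000)
```

After enough steps the profile relaxes to the linear steady state between the two boundary temperatures, which is the behavior a verification against TOUGH2-style calculations would check first.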

  7. Spatial Modeling Techniques for Characterizing Geomaterials: Deterministic vs. Stochastic Modeling for Single-Variable and Multivariate Analyses

    Institute of Scientific and Technical Information of China (English)

    Katsuaki Koike

    2011-01-01

Sample data in the Earth and environmental sciences are limited in quantity and sampling location; sophisticated spatial modeling techniques are therefore indispensable for accurate imaging of the complicated structures and properties of geomaterials. This paper presents several effective methods that are grouped into two categories depending on the nature of the regionalized data used. Type I data originate from plural populations, while type II data satisfy the prerequisite of stationarity and have distinct spatial correlations. For type I data, three methods are shown to be effective and demonstrated to produce plausible results: (1) a spline-based method, (2) a combination of a spline-based method with a stochastic simulation, and (3) a neural network method. Geostatistics proves to be a powerful tool for type II data. Three new geostatistical approaches are presented with case studies: an application to directional data such as fractures, multi-scale modeling that incorporates a scaling law, and space-time joint analysis for multivariate data. Methods for improving the contribution of such spatial modeling to the Earth and environmental sciences are also discussed, and important problems to be solved in the future are summarized.

  8. Using machine learning to model dose-response relationships.

    Science.gov (United States)

    Linden, Ariel; Yarnold, Paul R; Nallamothu, Brahmajee K

    2016-12-01

    Establishing the relationship between various doses of an exposure and a response variable is integral to many studies in health care. Linear parametric models, widely used for estimating dose-response relationships, have several limitations. This paper employs the optimal discriminant analysis (ODA) machine-learning algorithm to determine the degree to which exposure dose can be distinguished based on the distribution of the response variable. By framing the dose-response relationship as a classification problem, machine learning can provide the same functionality as conventional models, but can additionally make individual-level predictions, which may be helpful in practical applications like establishing responsiveness to prescribed drug regimens. Using data from a study measuring the responses of blood flow in the forearm to the intra-arterial administration of isoproterenol (separately for 9 black and 13 white men, and pooled), we compare the results estimated from a generalized estimating equations (GEE) model with those estimated using ODA. Generalized estimating equations and ODA both identified many statistically significant dose-response relationships, separately by race and for pooled data. Post hoc comparisons between doses indicated ODA (based on exact P values) was consistently more conservative than GEE (based on estimated P values). Compared with ODA, GEE produced twice as many instances of paradoxical confounding (findings from analysis of pooled data that are inconsistent with findings from analyses stratified by race). Given its unique advantages and greater analytic flexibility, maximum-accuracy machine-learning methods like ODA should be considered as the primary analytic approach in dose-response applications. © 2016 John Wiley & Sons, Ltd.
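The idea of framing a dose-response relationship as a classification problem can be sketched with an ODA-style exhaustive search for the response cutpoint that maximises classification accuracy between two dose groups. This is a toy reimplementation of the principle, not the ODA software, and the blood-flow values are invented.

```python
def best_cutpoint(resp_low, resp_high):
    """Search response thresholds (midpoints between sorted values)
    and return (threshold, accuracy): observations above the
    threshold are classified as the high-dose group."""
    values = sorted(set(resp_low + resp_high))
    n = len(resp_low) + len(resp_high)
    best = (None, 0.0)
    for a, b in zip(values, values[1:]):
        cut = (a + b) / 2
        correct = (sum(r <= cut for r in resp_low) +
                   sum(r > cut for r in resp_high))
        if correct / n > best[1]:
            best = (cut, correct / n)
    return best

# Hypothetical forearm blood-flow responses at two isoproterenol doses.
cut, acc = best_cutpoint([1.2, 1.9, 2.4], [4.8, 5.5, 6.1])
```

A permutation test on the maximised accuracy would then give the exact P value that ODA reports; here the groups separate perfectly, so the accuracy is 1.0.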

  9. Genetic analyses using GGE model and a mixed linear model approach, and stability analyses using AMMI bi-plot for late-maturity alpha-amylase activity in bread wheat genotypes.

    Science.gov (United States)

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Fofana, Bourlaye

    2017-06-01

A low falling number, and the discounting of grain when it is downgraded in class, are consequences of excessive late-maturity α-amylase activity (LMAA) in bread wheat (Triticum aestivum L.). Grain expressing high LMAA produces poorer-quality bread products. To breed effectively for low LMAA, it is necessary to understand which genes control it and how they are expressed, particularly when genotypes are grown in different environments. In this study, an International Collection (IC) of 18 spring wheat genotypes and another set of 15 spring wheat cultivars adapted to South Dakota (SD), USA were assessed to characterize the genetic component of LMAA over 5 and 13 environments, respectively. The data were analysed using a GGE model with a mixed linear model approach, and stability analysis was presented using an AMMI bi-plot in R software. All estimated variance components and their proportions of the total phenotypic variance were highly significant for both sets of genotypes, which was validated by the AMMI model analysis. Broad-sense heritability for LMAA was higher in the SD-adapted cultivars (53%) than in the IC (49%). Significant genetic effects and stability analyses showed that some genotypes, e.g. 'Lancer', 'Chester' and 'LoSprout' from the IC, and 'Alsen', 'Traverse' and 'Forefront' from the SD cultivars, could be used as parents to develop new cultivars expressing low levels of LMAA. Stability analysis using an AMMI bi-plot revealed that 'Chester', 'Lancer' and 'Advance' were the most stable across environments, while 'Kinsman', 'Lerma52' and 'Traverse' exhibited the lowest stability for LMAA across environments.

  10. Data Assimilation Tools for CO2 Reservoir Model Development – A Review of Key Data Types, Analyses, and Selected Software

    Energy Technology Data Exchange (ETDEWEB)

    Rockhold, Mark L.; Sullivan, E. C.; Murray, Christopher J.; Last, George V.; Black, Gary D.

    2009-09-30

    Pacific Northwest National Laboratory (PNNL) has embarked on an initiative to develop world-class capabilities for performing experimental and computational analyses associated with geologic sequestration of carbon dioxide. The ultimate goal of this initiative is to provide science-based solutions for helping to mitigate the adverse effects of greenhouse gas emissions. This Laboratory-Directed Research and Development (LDRD) initiative currently has two primary focus areas—advanced experimental methods and computational analysis. The experimental methods focus area involves the development of new experimental capabilities, supported in part by the U.S. Department of Energy’s (DOE) Environmental Molecular Science Laboratory (EMSL) housed at PNNL, for quantifying mineral reaction kinetics with CO2 under high temperature and pressure (supercritical) conditions. The computational analysis focus area involves numerical simulation of coupled, multi-scale processes associated with CO2 sequestration in geologic media, and the development of software to facilitate building and parameterizing conceptual and numerical models of subsurface reservoirs that represent geologic repositories for injected CO2. This report describes work in support of the computational analysis focus area. The computational analysis focus area currently consists of several collaborative research projects. These are all geared towards the development and application of conceptual and numerical models for geologic sequestration of CO2. The software being developed for this focus area is referred to as the Geologic Sequestration Software Suite or GS3. A wiki-based software framework is being developed to support GS3. This report summarizes work performed in FY09 on one of the LDRD projects in the computational analysis focus area. The title of this project is Data Assimilation Tools for CO2 Reservoir Model Development. Some key objectives of this project in FY09 were to assess the current state

  11. Kvalitative analyser [Qualitative analyses]

    DEFF Research Database (Denmark)

    Boolsen, Merete Watt

The book explains the fundamental steps of the research process and applies them to selected qualitative analyses: content analysis, Grounded Theory, argumentation analysis, and discourse analysis.

  12. PartitionFinder 2: New Methods for Selecting Partitioned Models of Evolution for Molecular and Morphological Phylogenetic Analyses.

    Science.gov (United States)

    Lanfear, Robert; Frandsen, Paul B; Wright, April M; Senfeld, Tereza; Calcott, Brett

    2017-03-01

    PartitionFinder 2 is a program for automatically selecting best-fit partitioning schemes and models of evolution for phylogenetic analyses. PartitionFinder 2 is substantially faster and more efficient than version 1, and incorporates many new methods and features. These include the ability to analyze morphological datasets, new methods to analyze genome-scale datasets, new output formats to facilitate interoperability with downstream software, and many new models of molecular evolution. PartitionFinder 2 is freely available under an open source license and works on Windows, OSX, and Linux operating systems. It can be downloaded from www.robertlanfear.com/partitionfinder. The source code is available at https://github.com/brettc/partitionfinder. © The Author 2016. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  13. ANALYSES ON NONLINEAR COUPLING OF MAGNETO-THERMO-ELASTICITY OF FERROMAGNETIC THIN SHELL-Ⅱ: FINITE ELEMENT MODELING AND APPLICATION

    Institute of Scientific and Technical Information of China (English)

    Xingzhe Wang; Xiaojing Zheng

    2009-01-01

Based on the generalized variational principle of magneto-thermo-elasticity of ferromagnetic thin shells established in Part I (Analyses on nonlinear coupling of magneto-thermo-elasticity of ferromagnetic thin shell-Ⅰ), the present paper develops a finite element model for the mechanical-magneto-thermal multi-field coupling of a ferromagnetic thin shell. The numerical model comprises finite element equations for the three sub-systems of magnetic, thermal and deformation fields, together with iterative methods for the nonlinearities of the geometrical large deflection and the multi-field coupling of the ferromagnetic shell. As examples, numerical simulations of the magneto-elastic behavior of a ferromagnetic cylindrical shell in an applied magnetic field, and of the magneto-thermo-elastic behavior of the shell in applied magnetic and thermal fields, are carried out. The results are in good agreement with experimental ones.

  14. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes

    Directory of Open Access Journals (Sweden)

    Ilona Naujokaitis-Lewis

    2016-07-01

Full Text Available Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust.
Our results underscore the importance of considering habitat

  15. Advances in global sensitivity analyses of demographic-based species distribution models to address uncertainties in dynamic landscapes.

    Science.gov (United States)

    Naujokaitis-Lewis, Ilona; Curtis, Janelle M R

    2016-01-01

Developing a rigorous understanding of multiple global threats to species persistence requires the use of integrated modeling methods that capture processes which influence species distributions. Species distribution models (SDMs) coupled with population dynamics models can incorporate relationships between changing environments and demographics and are increasingly used to quantify relative extinction risks associated with climate and land-use changes. Despite their appeal, uncertainties associated with complex models can undermine their usefulness for advancing predictive ecology and informing conservation management decisions. We developed a computationally-efficient and freely available tool (GRIP 2.0) that implements and automates a global sensitivity analysis of coupled SDM-population dynamics models for comparing the relative influence of demographic parameters and habitat attributes on predicted extinction risk. Advances over previous global sensitivity analyses include the ability to vary habitat suitability across gradients, as well as habitat amount and configuration of spatially-explicit suitability maps of real and simulated landscapes. Using GRIP 2.0, we carried out a multi-model global sensitivity analysis of a coupled SDM-population dynamics model of whitebark pine (Pinus albicaulis) in Mount Rainier National Park as a case study and quantified the relative influence of input parameters and their interactions on model predictions. Our results differed from the one-at-a-time analyses used in the original study, and we found that the most influential parameters included the total amount of suitable habitat within the landscape, survival rates, and effects of a prevalent disease, white pine blister rust. Strong interactions between habitat amount and survival rates of older trees suggest the importance of habitat in mediating the negative influences of white pine blister rust.
Our results underscore the importance of considering habitat attributes along
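The contrast drawn above between one-at-a-time and global sensitivity analysis can be illustrated with a toy variance-based calculation on a model containing an interaction term. This sketch is not GRIP 2.0; the "extinction-risk" model and the uniform grids are invented for demonstration.

```python
def first_order_index(model, grid, which):
    """Crude first-order Sobol index on a uniform grid for a
    two-parameter model: Var(E[Y | x_which]) / Var(Y)."""
    ys = [model(a, b) for a in grid for b in grid]
    mu = sum(ys) / len(ys)
    var_y = sum((y - mu) ** 2 for y in ys) / len(ys)
    cond_means = []
    for v in grid:
        ym = ([model(v, b) for b in grid] if which == 0
              else [model(a, v) for a in grid])
        cond_means.append(sum(ym) / len(ym))
    var_cond = sum((m - mu) ** 2 for m in cond_means) / len(cond_means)
    return var_cond / var_y

def risk(habitat, survival):
    # Toy model: risk driven by a habitat x survival interaction.
    return habitat * survival

grid = [i / 100 for i in range(101)]
s_habitat = first_order_index(risk, grid, 0)
s_survival = first_order_index(risk, grid, 1)
```

Because the first-order indices sum to less than one, the remainder is attributable to the habitat-survival interaction, which is exactly the kind of effect a one-at-a-time analysis cannot separate out.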

  16. ATOP - The Advanced Taiwan Ocean Prediction System Based on the mpiPOM. Part 1: Model Descriptions, Analyses and Results

    Directory of Open Access Journals (Sweden)

    Leo Oey

    2013-01-01

Full Text Available A data-assimilated Taiwan Ocean Prediction (ATOP) system is being developed at the National Central University, Taiwan. The model simulates sea-surface height, three-dimensional currents, temperature and salinity, and turbulent mixing. The model has options for tracer and particle-tracking algorithms, as well as for wave-induced Stokes drift and wave-enhanced mixing and bottom drag. Two different forecast domains have been tested: a large-grid domain that encompasses the entire North Pacific Ocean at 0.1° × 0.1° horizontal resolution and 41 vertical sigma levels, and a smaller western North Pacific domain which at present has the same horizontal resolution. In both domains, 25-year spin-up runs from 1988 - 2011 were first conducted, forced by six-hourly Cross-Calibrated Multi-Platform (CCMP) and NCEP reanalysis Global Forecast System (GFS) winds. The results are then used as initial conditions for ocean analyses from January 2012 through February 2012, when updated hindcasts and real-time forecasts begin using the GFS winds. This paper describes the ATOP system and compares the forecast results against satellite altimetry data to assess model skill. The model results are also shown to compare well with observations of (i) the Kuroshio intrusion in the northern South China Sea, and (ii) the Subtropical Countercurrent. A review of, and comparison with, other models in the literature on (i) are also given.

  17. Using FOSM-Based Data Worth Analyses to Design Geophysical Surveys to Reduce Uncertainty in a Regional Groundwater Model Update

    Science.gov (United States)

    Smith, B. D.; White, J.; Kress, W. H.; Clark, B. R.; Barlow, J.

    2016-12-01

Hydrogeophysical surveys have become an integral part of understanding the hydrogeological frameworks used in groundwater models. Regional models cover large areas where water well data are, at best, scattered and irregular. Since budgets are finite, priorities must be assigned to select optimal areas for geophysical surveys. For airborne electromagnetic (AEM) geophysical surveys, optimization of mapping depth and line spacing needs to take into account the objectives of the groundwater models. The approach discussed here uses a first-order, second-moment (FOSM) uncertainty analysis, which assumes an approximately linear relation between model parameters and observations. This assumption allows FOSM analyses to estimate the value of increased parameter knowledge for reducing forecast uncertainty. Here, FOSM is used to optimize yet-to-be-completed geophysical surveying so as to reduce model forecast uncertainty. The main objective of the geophysical surveying is assumed to be estimating values and spatial variation in hydrologic parameters (i.e. hydraulic conductivity), as well as mapping lower-permeability layers that influence the spatial distribution of recharge flux. The proposed data worth analysis was applied to the Mississippi Embayment Regional Aquifer Study (MERAS), which is being updated. The objective of MERAS is to assess the ground-water availability (status and trends) of the Mississippi embayment aquifer system. The study area covers portions of eight states including Alabama, Arkansas, Illinois, Kentucky, Louisiana, Mississippi, Missouri, and Tennessee. The active model grid covers approximately 70,000 square miles, and incorporates some 6,000 miles of major rivers and over 100,000 water wells. In the FOSM analysis, a dense network of pilot points was used to capture uncertainty in hydraulic conductivity and recharge.
To simulate the effect of AEM flight lines, the prior uncertainty for hydraulic conductivity and recharge pilots along potential flight lines was
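Under the linearity assumption stated above, the worth of a yet-to-be-collected observation can be computed directly from sensitivities and the prior covariance. The miniature example below (two parameters, a scalar forecast, one candidate observation at a time; all numbers invented) shows the scalar Schur-complement formula at the core of FOSM data-worth analysis.

```python
def forecast_variance_reduction(C, s, x, r):
    """Reduction in forecast variance from adding one observation.

    C : 2x2 prior parameter covariance (list of lists)
    s : forecast sensitivity vector (d forecast / d parameters)
    x : observation sensitivity vector
    r : observation noise variance
    Reduction = (s C x')**2 / (x C x' + r)  -- scalar Schur complement.
    """
    def quad(u, v):  # u . C . v
        return sum(u[i] * C[i][j] * v[j] for i in range(2) for j in range(2))
    return quad(s, x) ** 2 / (quad(x, x) + r)

C = [[1.0, 0.0], [0.0, 1.0]]   # prior: uncorrelated unit variances
s = [1.0, 0.5]                 # forecast mostly driven by parameter 1
prior_var = sum(s[i] * C[i][j] * s[j] for i in range(2) for j in range(2))

# Two hypothetical candidate observations, each sensitive to one parameter.
worth_1 = forecast_variance_reduction(C, s, [1.0, 0.0], r=0.1)
worth_2 = forecast_variance_reduction(C, s, [0.0, 1.0], r=0.1)
```

The observation sensitive to the parameter that drives the forecast buys the larger variance reduction, which is precisely the ranking used to prioritise AEM flight lines.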

  18. Using plant growth modeling to analyse C source-sink relations under drought: inter and intra specific comparison

    Directory of Open Access Journals (Sweden)

    Benoit ePallas

    2013-11-01

Full Text Available The ability to assimilate C and allocate NSC (non-structural carbohydrates) to the most appropriate organs is crucial to maximize plant ecological or agronomic performance. Such C source and sink activities are differentially affected by environmental constraints. Under drought, plant growth is generally more sink- than source-limited, as organ expansion or appearance rate is affected earlier and more strongly than C assimilation. This favors plant survival and recovery but not always agronomic performance, as NSC are stored rather than used for growth owing to a modified metabolism in source and sink leaves. Such interactions between plant C and water balance are complex, and plant modeling can help analyze their impact on plant phenotype. This paper addresses the impact of trade-offs between C sink and source activities on plant production under drought, combining experimental and modeling approaches. Two contrasting monocotyledonous species (rice, oil palm) were studied. Experimentally, the sink limitation of plant growth under moderate drought was confirmed, as were the modifications in NSC metabolism in source and sink organs. Under severe stress, when the C source became limiting, plant NSC concentration decreased. Two plant models dedicated to oil palm and rice morphogenesis were used to perform a sensitivity analysis and further explore how to optimize C sink and source drought sensitivity to maximize plant growth. The modeling results highlighted that optimal drought sensitivity depends both on drought type and on species, and that modeling is a powerful way to analyze such complex processes. Further modeling needs, and more generally the challenge of using models to support complex-trait breeding, are discussed.

  19. A multinomial logit model-Bayesian network hybrid approach for driver injury severity analyses in rear-end crashes.

    Science.gov (United States)

    Chen, Cong; Zhang, Guohui; Tarefder, Rafiqul; Ma, Jianming; Wei, Heng; Guan, Hongzhi

    2015-07-01

Rear-end crashes are among the most common types of traffic crashes in the U.S. A good understanding of their characteristics and contributing factors is of practical importance. Previously, both multinomial logit models and Bayesian network methods have been used in crash modeling and analysis, although each has its own application restrictions and limitations. In this study, a hybrid approach is developed that combines multinomial logit models and Bayesian network methods to comprehensively analyze driver injury severities in rear-end crashes, based on state-wide crash data collected in New Mexico from 2010 to 2011. A multinomial logit model is developed to investigate and identify significant contributing factors for rear-end crash driver injury severities, classified into three categories: no injury, injury, and fatality. The identified significant factors are then used to establish a Bayesian network that explicitly formulates statistical associations between injury severity outcomes and explanatory attributes, including driver behavior, demographic features, vehicle factors, and geometric and environmental characteristics. The test results demonstrate that the proposed hybrid approach performs reasonably well. The Bayesian network inference analyses indicate that factors including truck involvement, inferior lighting conditions, windy weather conditions, and the number of vehicles involved could significantly increase driver injury severities in rear-end crashes. The developed methodology and estimation results provide insights for developing effective countermeasures to reduce rear-end crash injury severities and improve traffic system safety performance.
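The multinomial logit component above assigns each crash a probability over the three severity categories via a softmax of linear predictors, with one category as the reference. The sketch below uses invented coefficients for two contributing factors (truck involvement, poor lighting); it illustrates the probability calculation only, not the fitting of the New Mexico data.

```python
from math import exp

def mnl_probs(features, coefs):
    """Multinomial logit class probabilities. `coefs` maps each
    non-reference category to (intercept, weight vector); the
    reference category ('no injury') has utility 0."""
    utils = {"no injury": 0.0}
    for cat, (b0, w) in coefs.items():
        utils[cat] = b0 + sum(wi * xi for wi, xi in zip(w, features))
    z = sum(exp(u) for u in utils.values())
    return {cat: exp(u) / z for cat, u in utils.items()}

# Hypothetical coefficients: both factors shift probability mass
# toward more severe outcomes.
coefs = {"injury": (-1.0, [0.8, 0.6]),
         "fatality": (-3.0, [1.5, 1.2])}
p = mnl_probs([1.0, 1.0], coefs)   # truck involved, poor lighting
```

In the hybrid approach, the factors found significant here would then become nodes of the Bayesian network.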

  20. Analyses of simulations of three-dimensional lattice proteins in comparison with a simplified statistical mechanical model of protein folding.

    Science.gov (United States)

    Abe, H; Wako, H

    2006-07-01

    Folding and unfolding simulations of three-dimensional lattice proteins were analyzed using a simplified statistical mechanical model in which their amino acid sequences and native conformations were incorporated explicitly. Using this statistical mechanical model, under the assumption that only interactions between amino acid residues within a local structure in a native state are considered, the partition function of the system can be calculated for a given native conformation without any adjustable parameter. The simulations were carried out for two different native conformations, for each of which two foldable amino acid sequences were considered. The native and non-native contacts between amino acid residues occurring in the simulations were examined in detail and compared with the results derived from the theoretical model. The equilibrium thermodynamic quantities (free energy, enthalpy, entropy, and the probability of each amino acid residue being in the native state) at various temperatures obtained from the simulations and the theoretical model were also examined in order to characterize the folding processes that depend on the native conformations and the amino acid sequences. Finally, the free energy landscapes were discussed based on these analyses.
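Because the theoretical model above counts a native contact only when all intervening residues are in their native state, its partition function can be enumerated exactly for a short chain. The sketch below does this by brute force for a hypothetical 6-residue "protein" (contacts, energies and temperatures are invented; the published model evaluates the partition function by recursion rather than enumeration).

```python
from itertools import product
from math import exp

def partition(n, contacts, kT):
    """Enumerate all 2**n native(1)/non-native(0) residue states.
    A contact (i, j, e) contributes energy e only when residues
    i..j are all native (the local-interaction assumption)."""
    def energy(state):
        return sum(e for i, j, e in contacts if all(state[i:j + 1]))
    weights = {s: exp(-energy(s) / kT) for s in product((0, 1), repeat=n)}
    z = sum(weights.values())
    return z, weights[(1,) * n] / z   # Z and P(fully native state)

contacts = [(0, 3, -1.0), (2, 5, -1.0)]   # two native contacts
z_cold, p_cold = partition(6, contacts, kT=0.1)
z_hot, p_hot = partition(6, contacts, kT=5.0)
```

At low temperature the fully native state dominates, while at high temperature its weight is negligible against the entropy of the 2^n states, mirroring the folding/unfolding transition examined in the paper.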

  1. Comparative analyses reveal potential uses of Brachypodium distachyon as a model for cold stress responses in temperate grasses

    Directory of Open Access Journals (Sweden)

    Li Chuan

    2012-05-01

Full Text Available Background: Little is known about the potential of Brachypodium distachyon as a model for low temperature stress responses in Pooideae. The ice recrystallization inhibition protein (IRIP) genes, fructosyltransferase (FST) genes, and many C-repeat binding factor (CBF) genes are Pooideae-specific and important in low temperature responses. Here we used comparative analyses to study conservation and evolution of these gene families in B. distachyon to better understand its potential as a model species for agriculturally important temperate grasses. Results: Brachypodium distachyon contains cold-responsive IRIP genes which have evolved through Brachypodium-specific gene family expansions. A large cold-responsive CBF3 subfamily was identified in B. distachyon, while CBF4 homologs are absent from the genome. No B. distachyon FST gene homologs encode typical core-Pooideae FST motifs, and low temperature induced fructan accumulation was dramatically different in B. distachyon compared to core Pooideae species. Conclusions: We conclude that B. distachyon can serve as an interesting model for specific molecular mechanisms involved in low temperature responses in core Pooideae species. However, the evolutionary history of key genes involved in low temperature responses has been different in Brachypodium and core Pooideae species. These differences limit the use of B. distachyon as a model for holistic studies relevant to agricultural core Pooideae species.

  2. 3D RECORDING FOR 2D DELIVERING – THE EMPLOYMENT OF 3D MODELS FOR STUDIES AND ANALYSES

    Directory of Open Access Journals (Sweden)

    A. Rizzi

    2012-09-01

    Full Text Available In recent years, thanks to advances in surveying sensors and techniques, many heritage sites have been accurately replicated in digital form with very detailed and impressive results. The actual limits are mainly related to hardware capabilities, computation time, and the low performance of personal computers. Often the produced models are not viewable on a normal computer, and the only way to visualize them easily is offline, using rendered videos. This kind of 3D representation is useful for digital conservation, dissemination purposes, or virtual tourism, where people can visit places otherwise closed for preservation or security reasons. But many more potentialities and possible applications are available using a 3D model. The problem is the ability to handle 3D data, as without adequate knowledge this information is reduced to standard 2D data. This article presents some surveying and 3D modeling experiences within the APSAT project ("Ambiente e Paesaggi dei Siti d’Altura Trentini", i.e. Environment and Landscapes of Upland Sites in Trentino). APSAT is a multidisciplinary project funded by the Autonomous Province of Trento (Italy) with the aim of documenting, surveying, studying, analysing and preserving mountainous and hill-top heritage sites located in the region. The project focuses on theoretical, methodological and technological aspects of the archaeological investigation of mountain landscape, considered as the product of sequences of settlements, parcelling-outs, communication networks, resources, and symbolic places. The mountain environment preserves better than others the traces of hunting and gathering, breeding, agricultural, metallurgical, and symbolic activities characterised by different durations and environmental impacts, from Prehistory to the Modern Period. Therefore the correct surveying and documentation of these heritage sites and materials is very important. 
Within the project, the 3DOM unit of FBK is delivering all the surveying

  3. Assessing models of speciation under different biogeographic scenarios; An empirical study using multi-locus and RNA-seq analyses

    Science.gov (United States)

    Edwards, Taylor; Tollis, Marc; Hsieh, PingHsun; Gutenkunst, Ryan N.; Liu, Zhen; Kusumi, Kenro; Culver, Melanie; Murphy, Robert W.

    2016-01-01

    Evolutionary biology often seeks to decipher the drivers of speciation, and much debate persists over the relative importance of isolation and gene flow in the formation of new species. Genetic studies of closely related species can assess if gene flow was present during speciation, because signatures of past introgression often persist in the genome. We test hypotheses on which mechanisms of speciation drove diversity among three distinct lineages of desert tortoise in the genus Gopherus. These lineages offer a powerful system to study speciation, because different biogeographic patterns (physical vs. ecological segregation) are observed at opposing ends of their distributions. We use 82 samples collected from 38 sites, representing the entire species' distribution and generate sequence data for mtDNA and four nuclear loci. A multilocus phylogenetic analysis in *BEAST estimates the species tree. RNA-seq data yield 20,126 synonymous variants from 7665 contigs from two individuals of each of the three lineages. Analyses of these data using the demographic inference package ∂a∂i serve to test the null hypothesis of no gene flow during divergence. The best-fit demographic model for the three taxa is concordant with the *BEAST species tree, and the ∂a∂i analysis does not indicate gene flow among any of the three lineages during their divergence. These analyses suggest that divergence among the lineages occurred in the absence of gene flow and in this scenario the genetic signature of ecological isolation (parapatric model) cannot be differentiated from geographic isolation (allopatric model).
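The model-selection logic behind such demographic inference (competing models ranked by fit, with extra parameters penalized) can be sketched generically. The log-likelihoods and parameter counts below are hypothetical, not values from the study; the sketch shows only the AIC comparison step that penalizes the added migration parameters:

```python
def aic(log_likelihood, k):
    """Akaike information criterion: 2k - 2*lnL (lower is better)."""
    return 2 * k - 2 * log_likelihood

# hypothetical composite log-likelihoods and free-parameter counts
models = {
    "no_migration":   {"loglik": -1250.4, "k": 5},  # divergence without gene flow
    "with_migration": {"loglik": -1249.8, "k": 7},  # adds two migration rates
}
scores = {name: aic(m["loglik"], m["k"]) for name, m in models.items()}
best = min(scores, key=scores.get)  # model favored after the parameter penalty
```

With these illustrative numbers the small likelihood gain from migration does not justify the two extra parameters, mirroring the qualitative conclusion of no gene flow during divergence.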

  4. Virus-induced gene silencing as a tool for functional analyses in the emerging model plant Aquilegia (columbine, Ranunculaceae

    Directory of Open Access Journals (Sweden)

    Kramer Elena M

    2007-04-01

    Full Text Available Abstract Background The lower eudicot genus Aquilegia, commonly known as columbine, is currently the subject of extensive genetic and genomic research aimed at developing this taxon as a new model for the study of ecology and evolution. The ability to perform functional genetic analyses is a critical component of this development process and ultimately has the potential to provide insight into the genetic basis for the evolution of a wide array of traits that differentiate flowering plants. Aquilegia is of particular interest due to both its recent evolutionary history, which involves a rapid adaptive radiation, and its intermediate phylogenetic position between core eudicot (e.g., Arabidopsis) and grass (e.g., Oryza) model species. Results Here we demonstrate the effective use of a reverse genetic technique, virus-induced gene silencing (VIGS), to study gene function in this emerging model plant. Using Agrobacterium mediated transfer of tobacco rattle virus (TRV) based vectors, we induce silencing of PHYTOENE DESATURASE (AqPDS) in Aquilegia vulgaris seedlings, and ANTHOCYANIDIN SYNTHASE (AqANS) and the B-class floral organ identity gene PISTILLATA in A. vulgaris flowers. For all of these genes, silencing phenotypes are associated with consistent reduction in endogenous transcript levels. In addition, we show that silencing of AqANS has no effect on overall floral morphology and is therefore a suitable marker for the identification of silenced flowers in dual-locus silencing experiments. Conclusion Our results show that TRV-VIGS in Aquilegia vulgaris allows data to be rapidly obtained and can be reproduced with effective survival and silencing rates. Furthermore, this method can successfully be used to evaluate the function of early-acting developmental genes. 
In the future, data derived from VIGS analyses will be combined with large-scale sequencing and microarray experiments already underway in order to address both recent and ancient evolutionary

  5. Standards, accuracy, and questions of bias in Rorschach meta-analyses: reply to Wood, Garb, Nezworski, Lilienfeld, and Duke (2015).

    Science.gov (United States)

    Mihura, Joni L; Meyer, Gregory J; Bombel, George; Dumitrascu, Nicolae

    2015-01-01

    Wood, Garb, Nezworski, Lilienfeld, and Duke (2015) found our systematic review and meta-analyses of 65 Rorschach variables to be accurate and unbiased, and hence removed their previous recommendation for a moratorium on the applied use of the Rorschach. However, Wood et al. (2015) hypothesized that publication bias would exist for 4 Rorschach variables. To test this hypothesis, they replicated our meta-analyses for these 4 variables and added unpublished dissertations to the pool of articles. In the process, they used procedures that contradicted their standards and recommendations for sound Rorschach research, which consistently led to significantly lower effect sizes. In reviewing their meta-analyses, we found numerous methodological errors, data errors, and omitted studies. In contrast to their strict requirements for interrater reliability in the Rorschach meta-analyses of other researchers, they did not report interrater reliability for any of their coding and classification decisions. In addition, many of their conclusions were based on a narrative review of individual studies and post hoc analyses rather than their meta-analytic findings. Finally, we challenge their sole use of dissertations to test publication bias because (a) they failed to reconcile their conclusion that publication bias was present with the analyses we conducted showing its absence, and (b) we found numerous problems with dissertation study quality. In short, one cannot rely on the findings or the conclusions reported in Wood et al.
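The kind of meta-analytic pooling at issue in this exchange can be sketched with a standard random-effects estimator. This is a generic DerSimonian-Laird implementation with hypothetical effect sizes and variances, not a reanalysis of the Rorschach data:

```python
def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with DerSimonian-Laird tau^2."""
    k = len(effects)
    w = [1.0 / v for v in variances]           # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    Q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, effects))
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (Q - (k - 1)) / c)         # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * yi for wi, yi in zip(w_re, effects)) / sum(w_re)
    se = (1.0 / sum(w_re)) ** 0.5
    return pooled, se, tau2

# hypothetical validity coefficients (r) and variances from five studies
effects = [0.30, 0.45, 0.28, 0.50, 0.38]
variances = [0.010, 0.020, 0.015, 0.030, 0.012]
pooled, se, tau2 = dersimonian_laird(effects, variances)
```

Adding or excluding studies (e.g. unpublished dissertations) changes `effects` and `variances`, which is why the inclusion decisions debated above can shift the pooled estimate.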

  6. Acoustic analyses of thyroidectomy-related changes in vowel phonation.

    Science.gov (United States)

    Solomon, Nancy Pearl; Awan, Shaheen N; Helou, Leah B; Stojadinovic, Alexander

    2012-11-01

    Changes in vocal function that can occur after thyroidectomy were tracked with acoustic analyses of sustained vowel productions. The purpose was to determine which time-based or spectral/cepstral-based measures of two vowels were able to detect voice changes over time in patients undergoing thyroidectomy. Prospective, longitudinal, and observational clinical trial. Voice samples of sustained /ɑ/ and /i/ recorded from 70 adults before and approximately 2 weeks, 3 months, and 6 months after thyroid surgery were analyzed for jitter, shimmer, harmonic-to-noise ratio (HNR), cepstral peak prominence (CPP), low-to-high ratio of spectral energy (L/H ratio), and the standard deviations of CPP and L/H ratio. Three trained listeners rated vowel and sentence productions for the four data collection sessions for each participant. For analysis purposes, participants were categorized post hoc according to voice outcome (VO) at their first postthyroidectomy assessment session. Shimmer, HNR, and CPP differed significantly across sessions; follow-up analyses revealed the strongest effect for CPP. CPP for /ɑ/ and /i/ differed significantly between groups of participants with normal versus negative (adverse) VO and between the pre- and 2-week postthyroidectomy sessions for the negative VO group. HNR, CPP, and L/H ratio differed across vowels, but both /ɑ/ and /i/ were similarly effective in tracking voice changes over time and differentiating VO groups. This study indicated that shimmer, HNR, and CPP determined from vowel productions can be used to track changes in voice over time as patients undergo and subsequently recover from thyroid surgery, with CPP being the strongest variable for this purpose. Evidence did not clearly reveal whether acoustic voice evaluations should include both /ɑ/ and /i/ vowels, but they should specify which vowel is used to allow for comparisons across studies and multiple clinical assessments. Copyright © 2012 The Voice Foundation. All rights reserved.
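The time-based perturbation measures mentioned (jitter and shimmer) share one formula: the mean absolute difference between consecutive cycles, relative to the mean, applied to cycle periods or peak amplitudes respectively. A minimal sketch with hypothetical cycle data (CPP, a spectral/cepstral measure, requires a full cepstrum computation and is not shown):

```python
def local_perturbation(values):
    """Mean absolute difference between consecutive cycles, relative to
    the mean, in percent (jitter for periods, shimmer for amplitudes)."""
    diffs = [abs(a - b) for a, b in zip(values, values[1:])]
    return (sum(diffs) / len(diffs)) / (sum(values) / len(values)) * 100.0

# hypothetical cycle-by-cycle measurements from a sustained vowel
periods_ms = [5.00, 5.02, 4.98, 5.01, 4.99, 5.03]   # glottal cycle lengths
amplitudes = [0.80, 0.82, 0.79, 0.81, 0.80, 0.78]   # cycle peak amplitudes

jitter_pct = local_perturbation(periods_ms)
shimmer_pct = local_perturbation(amplitudes)
```

In practice the cycle marks come from pitch-period detection on the recorded waveform; the perturbation statistics themselves reduce to this simple relative-difference form.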

  7. Systems genetics of obesity in an F2 pig model by genome-wide association, genetic network and pathway analyses

    Directory of Open Access Journals (Sweden)

    Lisette J. A. Kogelman

    2014-07-01

    Full Text Available Obesity is a complex condition with world-wide exponentially rising prevalence rates, linked with severe diseases like Type 2 Diabetes. Economic and welfare consequences have led to a raised interest in a better understanding of the biological and genetic background. To date, whole genome investigations focusing on single genetic variants have achieved limited success, and the importance of including genetic interactions is becoming evident. Here, the aim was to perform an integrative genomic analysis in an F2 pig resource population that was constructed with an aim to maximize genetic variation of obesity-related phenotypes and genotyped using the 60K SNP chip. Firstly, Genome Wide Association (GWA) analysis was performed on the Obesity Index to locate candidate genomic regions that were further validated using combined Linkage Disequilibrium Linkage Analysis and investigated by evaluation of haplotype blocks. We built Weighted Interaction SNP Hub (WISH) and differentially wired (DW) networks using genotypic correlations amongst obesity-associated SNPs resulting from GWA analysis. GWA results and SNP modules detected by WISH and DW analyses were further investigated by functional enrichment analyses. The functional annotation of SNPs revealed several genes associated with obesity, e.g. NPC2 and OR4D10. Moreover, gene enrichment analyses identified several significantly associated pathways, over and above the GWA study results, that may influence obesity and obesity related diseases, e.g. metabolic processes. WISH networks based on genotypic correlations allowed further identification of various gene ontology terms and pathways related to obesity and related traits, which were not identified by the GWA study. 
In conclusion, this is the first study to develop a (genetic) obesity index and employ systems genetics in a porcine model to provide important insights into the complex genetic architecture associated with obesity and many biological pathways

  8. Systems genetics of obesity in an F2 pig model by genome-wide association, genetic network, and pathway analyses.

    Science.gov (United States)

    Kogelman, Lisette J A; Pant, Sameer D; Fredholm, Merete; Kadarmideen, Haja N

    2014-01-01

    Obesity is a complex condition with world-wide exponentially rising prevalence rates, linked with severe diseases like Type 2 Diabetes. Economic and welfare consequences have led to a raised interest in a better understanding of the biological and genetic background. To date, whole genome investigations focusing on single genetic variants have achieved limited success, and the importance of including genetic interactions is becoming evident. Here, the aim was to perform an integrative genomic analysis in an F2 pig resource population that was constructed with an aim to maximize genetic variation of obesity-related phenotypes and genotyped using the 60K SNP chip. Firstly, Genome Wide Association (GWA) analysis was performed on the Obesity Index to locate candidate genomic regions that were further validated using combined Linkage Disequilibrium Linkage Analysis and investigated by evaluation of haplotype blocks. We built Weighted Interaction SNP Hub (WISH) and differentially wired (DW) networks using genotypic correlations amongst obesity-associated SNPs resulting from GWA analysis. GWA results and SNP modules detected by WISH and DW analyses were further investigated by functional enrichment analyses. The functional annotation of SNPs revealed several genes associated with obesity, e.g., NPC2 and OR4D10. Moreover, gene enrichment analyses identified several significantly associated pathways, over and above the GWA study results, that may influence obesity and obesity related diseases, e.g., metabolic processes. WISH networks based on genotypic correlations allowed further identification of various gene ontology terms and pathways related to obesity and related traits, which were not identified by the GWA study. 
In conclusion, this is the first study to develop a (genetic) obesity index and employ systems genetics in a porcine model to provide important insights into the complex genetic architecture associated with obesity and many biological pathways that underlie
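The WISH-style step of building a network from genotypic correlations among trait-associated SNPs can be sketched as follows. This is a simplified illustration with hypothetical 0/1/2 allele-count genotypes and a hard correlation threshold, not the published WISH weighting scheme:

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length numeric vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def correlation_network(genotypes, threshold=0.6):
    """Edges between SNP pairs whose genotypic correlation exceeds threshold."""
    snps = list(genotypes)
    edges = []
    for i, a in enumerate(snps):
        for b in snps[i + 1:]:
            r = pearson(genotypes[a], genotypes[b])
            if abs(r) >= threshold:
                edges.append((a, b, round(r, 3)))
    return edges

# hypothetical 0/1/2 allele counts for four SNPs across eight animals
genotypes = {
    "SNP1": [0, 1, 2, 2, 1, 0, 1, 2],
    "SNP2": [0, 1, 2, 1, 1, 0, 1, 2],   # in LD with SNP1
    "SNP3": [2, 1, 0, 0, 1, 2, 2, 0],   # anti-correlated with SNP1
    "SNP4": [1, 1, 0, 2, 0, 2, 1, 1],   # roughly independent
}
edges = correlation_network(genotypes, threshold=0.8)
```

Highly interconnected SNPs in such a network (hub SNPs) are the candidates WISH-type analyses pass on to module detection and functional enrichment.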

  9. Analyses of the Classical Model for Porous Materials

    Institute of Scientific and Technical Information of China (English)

    刘培生; 夏凤金; 罗军

    2009-01-01

    New developments are continually being made in the preparation, application, and property study of porous materials. Among theories of the structure and properties of porous materials, the famous classical Gibson-Ashby model is widely endorsed in the field worldwide and remains, to this day, the theoretical foundation applied by numerous investigators in their research. This paper offers some supplementary considerations and analyses of the shortcomings that remain in this model, and finds that some of them can even undermine the completeness the model originally appeared to have. After summarizing these problems, another model is introduced that can overcome or compensate for the shortcomings of the Gibson-Ashby model.

  10. CREB3 subfamily transcription factors are not created equal: Recent insights from global analyses and animal models

    Directory of Open Access Journals (Sweden)

    Chan Chi-Ping

    2011-02-01

    Full Text Available Abstract The CREB3 subfamily of membrane-bound bZIP transcription factors has five members in mammals known as CREB3 and CREB3L1-L4. One current model suggests that CREB3 subfamily transcription factors are similar to ATF6 in regulated intramembrane proteolysis and transcriptional activation. Particularly, they were all thought to be proteolytically activated in response to endoplasmic reticulum (ER) stress to stimulate genes that are involved in the unfolded protein response (UPR). Although the physiological inducers of their proteolytic activation remain to be identified, recent findings from microarray analyses, RNAi screens and gene knockouts not only demonstrated their critical roles in regulating development, metabolism, secretion, survival and tumorigenesis, but also revealed cell type-specific patterns in the activation of their target genes. Members of the CREB3 subfamily show differential activity despite their structural similarity. The spectrum of their biological function expands beyond ER stress and UPR. Further analyses are required to elucidate the mechanism of their proteolytic activation and the molecular basis of their target recognition.

  11. Hierarchical linear modeling analyses of the NEO-PI-R scales in the Baltimore Longitudinal Study of Aging.

    Science.gov (United States)

    Terracciano, Antonio; McCrae, Robert R; Brant, Larry J; Costa, Paul T

    2005-09-01

    The authors examined age trends in the 5 factors and 30 facets assessed by the Revised NEO Personality Inventory in Baltimore Longitudinal Study of Aging data (N=1,944; 5,027 assessments) collected between 1989 and 2004. Consistent with cross-sectional results, hierarchical linear modeling analyses showed gradual personality changes in adulthood: a decline in Neuroticism up to age 80, stability and then decline in Extraversion, decline in Openness, increase in Agreeableness, and increase in Conscientiousness up to age 70. Some facets showed different curves from the factor they define. Birth cohort effects were modest, and there were no consistent Gender x Age interactions. Significant nonnormative changes were found for all 5 factors; they were not explained by attrition but might be due to genetic factors, disease, or life experience. Copyright (c) 2005 APA, all rights reserved.

  12. Hierarchical Linear Modeling Analyses of NEO-PI-R Scales In the Baltimore Longitudinal Study of Aging

    Science.gov (United States)

    Terracciano, Antonio; McCrae, Robert R.; Brant, Larry J.; Costa, Paul T.

    2009-01-01

    We examined age trends in the five factors and 30 facets assessed by the Revised NEO Personality Inventory in Baltimore Longitudinal Study of Aging data (N = 1,944; 5,027 assessments) collected between 1989 and 2004. Consistent with cross-sectional results, Hierarchical Linear Modeling analyses showed gradual personality changes in adulthood: a decline up to age 80 in Neuroticism, stability and then decline in Extraversion, decline in Openness, increase in Agreeableness, and increase up to age 70 in Conscientiousness. Some facets showed different curves from the factor they define. Birth cohort effects were modest, and there were no consistent Gender × Age interactions. Significant non-normative changes were found for all five factors; they were not explained by attrition but might be due to genetic factors, disease, or life experience. PMID:16248708

  13. COUPLING EFFECTS FOR CELL-TRUSS SPAR PLATFORM: COMPARISON OF FREQUENCY- AND TIME-DOMAIN ANALYSES WITH MODEL TESTS

    Institute of Scientific and Technical Information of China (English)

    ZHANG Fan; YANG Jian-min; LI Run-pei; CHEN Gang

    2008-01-01

    For floating structures in deep water, the coupling effects of the mooring lines and risers on the motion responses of the structures become increasingly significant. Viscous damping, inertial mass, current loading, restoring forces, etc. from these slender structures should be carefully handled to accurately predict the motion responses and line tensions. For spar platforms, coupling the mooring system and risers with the vessel motion typically results in a reduction in extreme motion responses. This article presents numerical simulations and model tests on a new cell-truss spar platform carried out in the State Key Laboratory of Ocean Engineering at Shanghai Jiaotong University. Results from three calculation methods, including frequency-domain analysis and time-domain semi-coupled and fully-coupled analyses, were compared with the experimental data to assess the applicability of the different approaches. Proposals for improving the numerical calculations and the experimental technique are put forward as well.

  14. Using niche-modelling and species-specific cost analyses to determine a multispecies corridor in a fragmented landscape

    Science.gov (United States)

    Zurano, Juan Pablo; Selleski, Nicole; Schneider, Rosio G.

    2017-01-01

    types independent of the degree of legal protection. These data used with multifocal GIS analyses balance the varying degree of overlap and unique properties among them allowing for comprehensive conservation strategies to be developed relatively rapidly. Our comprehensive approach serves as a model to other regions faced with habitat loss and lack of data. The five carnivores focused on in our study have wide ranges, so the results from this study can be expanded and combined with surrounding countries, with analyses at the species or community level. PMID:28841692

  15. From Global Climate Model Projections to Local Impacts Assessments: Analyses in Support of Planning for Climate Change

    Science.gov (United States)

    Snover, A. K.; Littell, J. S.; Mantua, N. J.; Salathe, E. P.; Hamlet, A. F.; McGuire Elsner, M.; Tohver, I.; Lee, S.

    2010-12-01

    Assessing and planning for the impacts of climate change require regionally-specific information. Information is required not only about projected changes in climate but also the resultant changes in natural and human systems at the temporal and spatial scales of management and decision making. Therefore, climate impacts assessment typically results in a series of analyses, in which relatively coarse-resolution global climate model projections of changes in regional climate are downscaled to provide appropriate input to local impacts models. This talk will describe recent examples in which coarse-resolution (~150 to 300km) GCM output was “translated” into information requested by decision makers at relatively small (watershed) and large (multi-state) scales using regional climate modeling, statistical downscaling, hydrologic modeling, and sector-specific impacts modeling. Projected changes in local air temperature, precipitation, streamflow, and stream temperature were developed to support Seattle City Light’s assessment of climate change impacts on hydroelectric operations, future electricity load, and resident fish populations. A state-wide assessment of climate impacts on eight sectors (agriculture, coasts, energy, forests, human health, hydrology and water resources, salmon, and urban stormwater infrastructure) was developed for Washington State to aid adaptation planning. Hydro-climate change scenarios for approximately 300 streamflow locations in the Columbia River basin and selected coastal drainages west of the Cascades were developed in partnership with major water management agencies in the Pacific Northwest to allow planners to consider how hydrologic changes may affect management objectives. Treatment of uncertainty in these assessments included: using “bracketing” scenarios to describe a range of impacts, using ensemble averages to characterize the central estimate of future conditions (given an emissions scenario), and explicitly assessing
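One common "translation" step from coarse GCM output to local scales is the delta (change-factor) method: shift the observed local climatology by the GCM-projected change. A minimal sketch with hypothetical temperatures (the assessments described here also used regional climate modeling and more sophisticated statistical downscaling):

```python
def delta_downscale(obs_monthly, gcm_hist_monthly, gcm_future_monthly):
    """Shift observed station climatology by the GCM-projected change
    (future minus historical), month by month."""
    return [obs + (fut - hist)
            for obs, hist, fut in zip(obs_monthly, gcm_hist_monthly,
                                      gcm_future_monthly)]

# hypothetical January-June mean temperatures (deg C)
obs  = [2.1, 3.4, 6.0, 9.2, 13.1, 16.5]    # station observations
hist = [1.0, 2.5, 5.1, 8.8, 12.0, 15.2]    # GCM, historical run
fut  = [2.8, 4.1, 6.9, 10.3, 13.9, 17.4]   # GCM, future scenario

local_future = delta_downscale(obs, hist, fut)
```

The method corrects for GCM bias at the station by construction, but assumes the bias is stationary; that assumption is one reason ensembles and bracketing scenarios are used to characterize uncertainty.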

  16. Civil engineering: EDF needs for concrete modelling; Genie civile: analyse des besoins EDF en modelisation du comportement des betons

    Energy Technology Data Exchange (ETDEWEB)

    Didry, O.; Gerard, B.; Bui, D. [Electricite de France (EDF), Direction des Etudes et Recherches, 92 - Clamart (France)

    1997-12-31

    Concrete structures which are encountered at EDF, like all civil engineering structures, age. In order to adapt the maintenance conditions of these structures, particularly to extend their service life, and also to prepare constructions of future structures, tools for predicting the behaviour of these structures in their environment should be available. For EDF the technical risks are high and consequently very appropriate R and D actions are required. In this context the Direction des Etudes et Recherches (DER) has developed a methodology for analysing concrete structure behaviour modelling. This approach has several aims: - making a distinction between the problems which refer to the existing models and those which require R and D; - displaying disciplinary links between different problems encountered on EDF structures (non-linear mechanical, chemical - hydraulic - mechanical coupling, etc); - listing of the existing tools and positioning the DER `Aster` finite element code among them. This document is a state of the art of scientific knowledge intended to shed light on the fields in which one should be involved when there is, on one part a strong requirement on the side of structure operators, and on the other one, the present tools do not allow this requirement to be satisfactorily met. The analysis has been done on 12 scientific subjects: 1) Hydration of concrete at early ages: exothermicity, hardening, autogenous shrinkage; 2) Drying and drying shrinkage; 3) Alkali-silica reaction and bulky stage formation; 4) Long term deterioration by leaching; 5) Ionic diffusion and associated attacks: the chlorides case; 6) Permeability / tightness of concrete; 7) Concretes -nonlinear behaviour and cracking (I): contribution of the plasticity models; 8) Concretes - nonlinear behaviour and cracking (II): contribution of the damage models; 9) Concretes - nonlinear behaviour and cracking (III): the contribution of the probabilistic analysis model; 10) Delayed behaviour of

  17. Spatially quantitative models for vulnerability analyses and resilience measures in flood risk management: Case study Rafina, Greece

    Science.gov (United States)

    Karagiorgos, Konstantinos; Chiari, Michael; Hübl, Johannes; Maris, Fotis; Thaler, Thomas; Fuchs, Sven

    2013-04-01

    We address spatially quantitative models for vulnerability analyses in flood risk management in the catchment of Rafina, 25 km east of Athens, Greece, and potential measures to reduce damage costs. The evaluation of flood damage losses is relatively advanced. Nevertheless, major problems arise because no market prices are available for the evaluation process. Moreover, there is a particular gap in quantifying the damages, and the necessary expenditures for implementing mitigation measures, with respect to flash floods. The key issue is to develop prototypes for assessing flood losses and the impact of mitigation measures on flood resilience by adjusting a vulnerability model, and to further develop the method in a Mediterranean region influenced by both mountain and coastal characteristics of land development. The objective of this study is to create a spatial and temporal analysis of the vulnerability factors based on a method combining spatially explicit loss data, data on the value of exposed elements at risk, and data on flood intensities. In this contribution, a methodology for the development of a flood damage assessment as a function of the process intensity and the degree of loss is presented. It is shown that (1) such relationships for defined object categories depend on site-specific and process-specific characteristics, but there is a correlation between process types that have similar characteristics; (2) existing semi-quantitative approaches of vulnerability assessment for elements at risk can be improved based on the proposed quantitative method; and (3) the concept of risk can be enhanced with respect to a standardised and comprehensive implementation by applying the vulnerability functions to be developed within the proposed research. Therefore, loss data were collected from the responsible administrative bodies and analysed on an object level. 
The model used is based on a basin-scale approach as well as data on elements at risk exposed
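A vulnerability function of the kind described, relating process intensity to a degree of loss between 0 and 1, can be sketched with a bounded parametric curve. The Weibull-type form and its parameters below are hypothetical, chosen only to illustrate how such a function converts intensity and exposed value into expected damage:

```python
import math

def degree_of_loss(intensity, scale=1.5, shape=2.0):
    """Weibull-type vulnerability curve: degree of loss in [0, 1)
    as a function of flood process intensity (e.g. water depth in m).
    scale and shape are hypothetical, to be fitted from loss data."""
    return 1.0 - math.exp(-((intensity / scale) ** shape))

def expected_damage(intensity, exposed_value):
    """Monetary loss = degree of loss x value of the element at risk."""
    return degree_of_loss(intensity) * exposed_value

# hypothetical building worth 200,000 EUR at two flood depths
shallow = expected_damage(0.5, 200_000)   # 0.5 m water depth
deep = expected_damage(2.0, 200_000)      # 2.0 m water depth
```

Fitting `scale` and `shape` to the object-level loss data collected from administrative bodies is exactly the step that turns such a parametric form into an empirical vulnerability function.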

  18. Modeling dose-response relationships of the effects of fesoterodine in patients with overactive bladder

    Directory of Open Access Journals (Sweden)

    Cardozo Linda

    2010-08-01

    Full Text Available Abstract Background Fesoterodine is an antimuscarinic for the treatment of overactive bladder, a syndrome of urgency, with or without urgency urinary incontinence (UUI), usually with increased daytime frequency and nocturia. Our objective was to develop predictive models to describe the dose response of fesoterodine. Methods Data from subjects enrolled in double-blind, placebo-controlled phase II and III trials were used for developing longitudinal dose-response models. Results The models predicted that clinically significant and near-maximum treatment effects would be seen within 3 to 4 weeks after treatment initiation. For a typical patient with 11 micturitions per 24 hours at baseline, predicted change was -1.2, -1.7, and -2.2 micturitions for placebo and fesoterodine 4 mg and 8 mg, respectively. For a typical patient with 2 UUI episodes per 24 hours at baseline, predicted change was -1.05, -1.26, and -1.43 UUI episodes for placebo and fesoterodine 4 mg and 8 mg, respectively. Increase in mean voided volume was estimated at 9.7 mL for placebo, with an additional 14.2 mL and 28.4 mL for fesoterodine 4 mg and 8 mg, respectively. Conclusions A consistent dose response for fesoterodine was demonstrated for bladder diary endpoints in subjects with overactive bladder, a result that supports the greater efficacy seen with fesoterodine 8 mg in post hoc analyses of clinical trial data. The dose-response models can be used to predict outcomes for doses not studied or for patient subgroups underrepresented in clinical trials. Trial Registration The phase III trials used in this analysis have been registered at ClinicalTrials.gov (NCT00220363 and NCT00138723).

  19. Modeling dose-response relationships of the effects of fesoterodine in patients with overactive bladder.

    Science.gov (United States)

    Cardozo, Linda; Khullar, Vik; El-Tahtawy, Ahmed; Guan, Zhonghong; Malhotra, Bimal; Staskin, David

    2010-08-19

    Fesoterodine is an antimuscarinic for the treatment of overactive bladder, a syndrome of urgency, with or without urgency urinary incontinence (UUI), usually with increased daytime frequency and nocturia. Our objective was to develop predictive models to describe the dose response of fesoterodine. Data from subjects enrolled in double-blind, placebo-controlled phase II and III trials were used for developing longitudinal dose-response models. The models predicted that clinically significant and near-maximum treatment effects would be seen within 3 to 4 weeks after treatment initiation. For a typical patient with 11 micturitions per 24 hours at baseline, predicted change was -1.2, -1.7, and -2.2 micturitions for placebo and fesoterodine 4 mg and 8 mg, respectively. For a typical patient with 2 UUI episodes per 24 hours at baseline, predicted change was -1.05, -1.26, and -1.43 UUI episodes for placebo and fesoterodine 4 mg and 8 mg, respectively. Increase in mean voided volume was estimated at 9.7 mL for placebo, with an additional 14.2 mL and 28.4 mL for fesoterodine 4 mg and 8 mg, respectively. A consistent dose response for fesoterodine was demonstrated for bladder diary endpoints in subjects with overactive bladder, a result that supports the greater efficacy seen with fesoterodine 8 mg in post hoc analyses of clinical trial data. The dose-response models can be used to predict outcomes for doses not studied or for patient subgroups underrepresented in clinical trials. The phase III trials used in this analysis have been registered at ClinicalTrials.gov (NCT00220363 and NCT00138723).
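The longitudinal dose-response behaviour described, a near-maximal effect within 3 to 4 weeks approaching dose-specific asymptotes, can be sketched with an exponential-onset model. The asymptotes below are the micturition changes quoted in the abstract; the functional form and the onset rate `K` are hypothetical, not the published model:

```python
import math

# Asymptotic changes in micturitions/24 h from the pooled analyses
# (placebo, fesoterodine 4 mg, 8 mg); onset rate K is hypothetical,
# chosen so the effect is near-maximal by weeks 3-4.
ASYMPTOTE = {"placebo": -1.2, "feso_4mg": -1.7, "feso_8mg": -2.2}
K = 1.2  # per week (hypothetical)

def predicted_change(arm, week):
    """Exponential-onset longitudinal effect model."""
    return ASYMPTOTE[arm] * (1.0 - math.exp(-K * week))

# predicted change from baseline at week 4, by treatment arm
week4 = {arm: predicted_change(arm, 4) for arm in ASYMPTOTE}
```

A model of this shape is what allows interpolation to doses or time points not directly studied, as the conclusions note.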

  20. Statistical correlations and risk analyses techniques for a diving dual phase bubble model and data bank using massively parallel supercomputers.

    Science.gov (United States)

    Wienke, B R; O'Leary, T R

    2008-05-01

    Linking model and data, we detail the LANL diving reduced gradient bubble model (RGBM), dynamical principles, and correlation with data in the LANL Data Bank. Table, profile, and meter risks are obtained from likelihood analysis and quoted for air, nitrox, helitrox no-decompression time limits, repetitive dive tables, and selected mixed gas and repetitive profiles. Application analyses include the EXPLORER decompression meter algorithm, NAUI tables, University of Wisconsin Seafood Diver tables, comparative NAUI, PADI, Oceanic NDLs and repetitive dives, comparative nitrogen and helium mixed gas risks, USS Perry deep rebreather (RB) exploration dive, world record open circuit (OC) dive, and Woodville Karst Plain Project (WKPP) extreme cave exploration profiles. The algorithm has seen extensive and utilitarian application in mixed gas diving, both in recreational and technical sectors, and forms the basis for released tables and decompression meters used by scientific, commercial, and research divers. The LANL Data Bank is described, and the methods used to deduce risk are detailed. Risk functions for dissolved gas and bubbles are summarized. Parameters that can be used to estimate profile risk are tallied. To fit data, a modified Levenberg-Marquardt routine is employed with an L2 error norm. Appendices sketch the numerical methods, and list reports from field testing for (real) mixed gas diving. A Monte Carlo-like sampling scheme for fast numerical analysis of the data is also detailed, as a coupled variance reduction technique and additional check on the canonical approach to estimating diving risk. The method suggests alternatives to the canonical approach. This work represents a first-time correlation effort linking a dynamical bubble model with deep stop data. Supercomputing resources are requisite to connect model and data in application.
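The likelihood machinery mentioned here can be illustrated in miniature. The sketch below fits a single underlying DCS hit probability to hypothetical (made-up) dive-outcome counts by maximising a binomial log-likelihood over a grid, standing in for the modified Levenberg-Marquardt fit of the full RGBM risk functions described in the abstract.

```python
import math

# Hypothetical (dives, DCS hits) counts for three profile groups; the real
# LANL Data Bank records are not reproduced here.
groups = [(1000, 3), (500, 4), (250, 5)]

def log_likelihood(p):
    """Binomial log-likelihood of a single underlying hit probability p."""
    return sum(k * math.log(p) + (n - k) * math.log(1.0 - p) for n, k in groups)

# A coarse grid search stands in for a Levenberg-Marquardt step on the full model.
best_p = max((i / 100000 for i in range(1, 10000)), key=log_likelihood)
print(round(best_p, 5))
```

For this one-parameter model the maximiser lands on the pooled incidence (total hits divided by total dives), which is a useful sanity check before fitting richer risk functions.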

  1. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    Science.gov (United States)

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  2. Time Headway Modelling of Motorcycle-Dominated Traffic to Analyse Traffic Safety Performance and Road Link Capacity of Single Carriageways

    Directory of Open Access Journals (Sweden)

    D. M. Priyantha Wedagama

    2017-04-01

    Full Text Available This study aims to develop time headway distribution models to analyse traffic safety performance and road link capacities for motorcycle-dominated traffic in Denpasar, Bali. Three road links selected as the case study are Jl. Hayam Wuruk, Jl. Hang Tuah, and Jl. Padma. Data analysis showed that between 55%-80% of motorists in Denpasar during morning and evening peak hours paid little attention to keeping a safe distance from the vehicles in front. The study found that lognormal distribution models best fit the time headway data during morning peak hours, while either the Weibull (3P) or the Pearson III distribution is best for evening peak hours. Road link capacities for mixed traffic, predominantly motorcycles, are apparently affected by the behaviour of motorists in keeping a safe distance from the vehicles in front. Theoretical road link capacities for Jl. Hayam Wuruk, Jl. Hang Tuah and Jl. Padma are 3,186 vehicles/hour, 3,077 vehicles/hour and 1,935 vehicles/hour respectively.
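The lognormal headway fit reported for morning peaks amounts to fitting a normal distribution to the log-headways. The sketch below does exactly that; the headway sample and the 2-second "safe" threshold are made up for illustration, not taken from the Denpasar data.

```python
import math
import statistics

# Hypothetical time headways in seconds (made-up sample for illustration).
headways = [0.8, 1.1, 1.4, 0.9, 2.5, 1.7, 0.6, 3.2, 1.2, 2.0]

# Lognormal fit: estimate normal parameters on the log scale.
logs = [math.log(h) for h in headways]
mu = statistics.fmean(logs)
sigma = statistics.stdev(logs)  # sample sd; a strict MLE would divide by n

def lognormal_cdf(x):
    """P(headway <= x) under the fitted lognormal."""
    return 0.5 * (1.0 + math.erf((math.log(x) - mu) / (sigma * math.sqrt(2))))

# Share of drivers closer than an assumed 2-second "safe" headway.
print(round(lognormal_cdf(2.0), 3))
```

The same fitted CDF is what links the safety finding (share of short headways) to the capacity estimates, since capacity is roughly the reciprocal of the mean headway.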

  3. Bayesian salamanders: analysing the demography of an underground population of the European plethodontid Speleomantes strinatii with state-space modelling

    Directory of Open Access Journals (Sweden)

    Salvidio Sebastiano

    2010-02-01

    Full Text Available Abstract Background It has been suggested that Plethodontid salamanders are excellent candidates for indicating ecosystem health. However, detailed, long-term data sets of their populations are rare, limiting our understanding of the demographic processes underlying their population fluctuations. Here we present a demographic analysis based on a 1996-2008 data set on an underground population of Speleomantes strinatii (Aellen) in NW Italy. We utilised a Bayesian state-space approach allowing us to parameterise a stage-structured Lefkovitch model. We used all the available population data from annual temporary removal experiments to provide us with the baseline data on the numbers of juveniles, subadults and adult males and females present at any given time. Results Sampling the posterior chains of the converged state-space model gives us the likelihood distributions of the state-specific demographic rates and the associated uncertainty of these estimates. Analysing the resulting parameterised Lefkovitch matrices shows that the population growth is very close to 1, and that at population equilibrium we expect half of the individuals present to be adults of reproductive age, which is what we also observe in the data. Elasticity analysis shows that adult survival is the key determinant for population growth. Conclusion This analysis demonstrates how an understanding of population demography can be gained from structured population data even in a case where following marked individuals over their whole lifespan is not practical.
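Once the state-space model yields a parameterised Lefkovitch matrix, the growth rate and elasticities follow from its dominant eigen-pair. The matrix entries below are illustrative, chosen so the dominant eigenvalue is exactly 1 (the equilibrium reported above); they are not the study's posterior estimates.

```python
# Stage-structured Lefkovitch model (juveniles, subadults, adults).
# Illustrative entries, not the salamander study's posterior estimates.
A = [
    [0.0, 0.0, 0.525],  # adult fecundity into the juvenile stage
    [0.5, 0.3, 0.0],    # juvenile->subadult transition, subadult retention
    [0.0, 0.4, 0.85],   # subadult->adult transition, adult survival
]

def matvec(M, x):
    return [sum(M[i][j] * x[j] for j in range(len(x))) for i in range(len(M))]

def dominant_pair(M, iters=1000):
    """Dominant eigenvalue and (sum-normalised) eigenvector by power iteration."""
    x = [1.0] * len(M)
    lam = 0.0
    for _ in range(iters):
        y = matvec(M, x)
        lam = sum(y)
        x = [v / lam for v in y]
    return lam, x

lam, w = dominant_pair(A)                        # right eigenvector: stable stage structure
T = [[A[j][i] for j in range(3)] for i in range(3)]
_, v = dominant_pair(T)                          # left eigenvector: reproductive values

vw = sum(v[i] * w[i] for i in range(3))
elasticity = [[A[i][j] * v[i] * w[j] / (lam * vw) for j in range(3)] for i in range(3)]
print(round(lam, 4), round(elasticity[2][2], 3))
```

With these entries roughly half the equilibrium population is adult, and the adult-survival entry carries the largest elasticity, mirroring the abstract's conclusion.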

  4. Predictive validation of modeled health technology assessment claims: lessons from NICE.

    Science.gov (United States)

    Belsey, Jonathan

    2015-01-01

    The use of cost-effectiveness modeling to prioritize healthcare spending has become a key foundation of UK government policy. Although the preferred method of evaluation, cost-utility analysis, is not without its critics, it represents a standard approach that can arguably be used to assess relative value for money across a range of disease types and interventions. A key limitation of economic modeling, however, is that its conclusions hinge on the input assumptions, many of which are derived from randomized controlled trials or meta-analyses that cannot be reliably linked to real-world performance of treatments in a broader clinical context. This means that spending decisions are frequently based on artificial constructs that may project costs and benefits significantly at odds with those achievable in reality. There is a clear agenda to carry out some form of predictive validation of the model claims, in order both to assess whether the spending decisions made can be justified post hoc and to ensure that budgetary expenditure continues to be allocated in the most rational way. To date, however, no timely, effective system to carry out this testing has been implemented, with the consequence that there is little objective evidence as to whether the prioritization decisions made are actually living up to expectations. This article reviews two unfulfilled initiatives that have been carried out in the UK over the past 20 years, each of which had the potential to address this objective, and considers why they failed to deliver the expected outcomes.

  5. Comparison of statistical inferences from the DerSimonian-Laird and alternative random-effects model meta-analyses - an empirical assessment of 920 Cochrane primary outcome meta-analyses.

    Science.gov (United States)

    Thorlund, Kristian; Wetterslev, Jørn; Awad, Tahany; Thabane, Lehana; Gluud, Christian

    2011-12-01

    In random-effects model meta-analysis, the conventional DerSimonian-Laird (DL) estimator typically underestimates the between-trial variance. Alternative variance estimators have been proposed to address this bias. This study aims to empirically compare statistical inferences from random-effects model meta-analyses on the basis of the DL estimator and four alternative estimators, as well as distributional assumptions (normal distribution and t-distribution) about the pooled intervention effect. We evaluated the discrepancies of p-values, 95% confidence intervals (CIs) in statistically significant meta-analyses, and the degree (percentage) of statistical heterogeneity (e.g., I²) across 920 Cochrane primary outcome meta-analyses. In total, 414 of the 920 meta-analyses were statistically significant with the DL meta-analysis, and 506 were not. Compared with the DL estimator, the four alternative estimators yielded p-values and CIs that could be interpreted as discordant in up to 11.6% or 6% of the included meta-analyses, depending on whether a normal distribution or a t-distribution of the intervention effect estimates was assumed. Large discrepancies were observed for the measures of degree of heterogeneity when comparing DL with each of the four alternative estimators. Estimating the degree (percentage) of heterogeneity on the basis of less biased between-trial variance estimators seems preferable to current practice. Disclosing inferential sensitivity of p-values and CIs may also be necessary when borderline significant results have substantial impact on the conclusion. Copyright © 2012 John Wiley & Sons, Ltd.
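The DL estimator under discussion is a simple moment estimator; a sketch with made-up trial data shows where its zero-truncation enters and how I² derives from the same Q statistic.

```python
# DerSimonian-Laird between-trial variance and the I^2 heterogeneity measure,
# computed for a hypothetical set of trial effect estimates and variances.
effects = [0.10, 0.55, -0.20, 0.62, 0.20]   # per-trial effect estimates (made up)
variances = [0.02, 0.05, 0.03, 0.04, 0.02]  # per-trial within-trial variances

w = [1.0 / v for v in variances]
pooled_fixed = sum(wi * yi for wi, yi in zip(w, effects)) / sum(w)
Q = sum(wi * (yi - pooled_fixed) ** 2 for wi, yi in zip(w, effects))
k = len(effects)

# DL moment estimator: truncated at zero, one source of its downward bias.
tau2_dl = max(0.0, (Q - (k - 1)) / (sum(w) - sum(wi ** 2 for wi in w) / sum(w)))
i2 = max(0.0, (Q - (k - 1)) / Q)  # share of total variability beyond chance

print(round(tau2_dl, 4), round(i2, 3))
```

The alternative estimators compared in the study replace the `tau2_dl` line with different between-trial variance formulas; Q, the weights, and I² are computed the same way, which is why the discrepancies show up mainly in the heterogeneity measures and borderline CIs.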

  6. Using ecological niche models and niche analyses to understand speciation patterns: the case of sister neotropical orchid bees.

    Directory of Open Access Journals (Sweden)

    Daniel P Silva

    Full Text Available The role of past connections between the two major South American forested biomes on current species distribution has been recognized a long time ago. Climatic oscillations that further separated these biomes have promoted parapatric speciation, in which many species had their continuous distribution split, giving rise to different but related species (i.e., different potential distributions and realized niche features). The distribution of many sister species of orchid bees follow this pattern. Here, using ecological niche models and niche analyses, we (1) tested the role of ecological niche differentiation on the divergence between sister orchid-bees (genera Eulaema and Eufriesea) from the Amazon and Atlantic forests, and (2) highlighted interesting areas for new surveys. Amazonian species occupied different realized niches than their Atlantic sister species. Conversely, species of sympatric but distantly related Eulaema bees occupied similar realized niches. Amazonian species had a wide potential distribution in South America, whereas Atlantic Forest species were more limited to the eastern coast of the continent. Additionally, we identified several areas in need of future surveys. Our results show that the realized niche of Atlantic-Amazonian sister species of orchid bees, which have been previously treated as allopatric populations of three species, had limited niche overlap and similarity. These findings agree with their current taxonomy, which treats each of those populations as distinct valid species.

  7. Using ecological niche models and niche analyses to understand speciation patterns: the case of sister neotropical orchid bees.

    Science.gov (United States)

    Silva, Daniel P; Vilela, Bruno; De Marco, Paulo; Nemésio, André

    2014-01-01

    The role of past connections between the two major South American forested biomes on current species distribution has been recognized a long time ago. Climatic oscillations that further separated these biomes have promoted parapatric speciation, in which many species had their continuous distribution split, giving rise to different but related species (i.e., different potential distributions and realized niche features). The distribution of many sister species of orchid bees follow this pattern. Here, using ecological niche models and niche analyses, we (1) tested the role of ecological niche differentiation on the divergence between sister orchid-bees (genera Eulaema and Eufriesea) from the Amazon and Atlantic forests, and (2) highlighted interesting areas for new surveys. Amazonian species occupied different realized niches than their Atlantic sister species. Conversely, species of sympatric but distantly related Eulaema bees occupied similar realized niches. Amazonian species had a wide potential distribution in South America, whereas Atlantic Forest species were more limited to the eastern coast of the continent. Additionally, we identified several areas in need of future surveys. Our results show that the realized niche of Atlantic-Amazonian sister species of orchid bees, which have been previously treated as allopatric populations of three species, had limited niche overlap and similarity. These findings agree with their current taxonomy, which treats each of those populations as distinct valid species.

  8. Water flow experiments and analyses on the cross-flow type mercury target model with the flow guide plates

    CERN Document Server

    Haga, K; Kaminaga, M; Hino, R

    2001-01-01

    A mercury target is used in the spallation neutron source driven by a high-intensity proton accelerator. In this study, the effectiveness of the cross-flow type mercury target structure was evaluated experimentally and analytically. Prior to the experiment, the mercury flow field and the temperature distribution in the target container were analyzed assuming a proton beam energy and power of 1.5 GeV and 5 MW, respectively, and the feasibility of the cross-flow type target was evaluated. Then the average water flow velocity field in the target mock-up model, which was fabricated from Plexiglass for a water experiment, was measured at room temperature using the PIV technique. Water flow analyses were conducted and the analytical results were compared with the experimental results. The experimental results showed that the cross-flow could be realized in most of the proton beam path area and the analytical result of the water flow velocity field showed good correspondence to the experimental results in the case w...

  9. Modeling Acequia Irrigation Systems Using System Dynamics: Model Development, Evaluation, and Sensitivity Analyses to Investigate Effects of Socio-Economic and Biophysical Feedbacks

    Directory of Open Access Journals (Sweden)

    Benjamin L. Turner

    2016-10-01

    Full Text Available Agriculture-based irrigation communities of northern New Mexico have survived for centuries despite the arid environment in which they reside. These irrigation communities are threatened by regional population growth, urbanization, a changing demographic profile, economic development, climate change, and other factors. Within this context, we investigated the extent to which community resource management practices centering on shared resources (e.g., water for agriculture in the floodplains and grazing resources in the uplands) and mutualism (i.e., shared responsibility of local residents for maintaining traditional irrigation policies and upholding cultural and spiritual observances embedded within the community structure) influence acequia function. We used a system dynamics modeling approach as an interdisciplinary platform to integrate these systems, specifically the relationship between community structure and resource management. In this paper we describe the background and context of acequia communities in northern New Mexico and the challenges they face. We formulate a Dynamic Hypothesis capturing the endogenous feedbacks driving acequia community vitality. Development of the model centered on major stock-and-flow components, including linkages for hydrology, ecology, community, and economics. Calibration metrics were used for model evaluation, including statistical correlation of observed and predicted values and Theil inequality statistics. Results indicated that the model reproduced trends exhibited by the observed system. Sensitivity analyses of socio-cultural processes identified absentee decisions, cumulative income effect on time in agriculture, land use preference due to time allocation, community demographic effect, effect of employment on participation, and farm size effect as key determinants of system behavior and response.
Sensitivity analyses of biophysical parameters revealed that several key parameters (e.g., acres per
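The Theil inequality statistics used for model evaluation above can be computed directly from an observed and a simulated series; the two short series below are hypothetical stand-ins for the calibration data.

```python
import math

# Observed series vs. model-simulated series (hypothetical values).
observed  = [10.0, 11.5, 12.1, 13.0, 12.4, 13.8]
simulated = [10.4, 11.1, 12.6, 12.7, 12.9, 13.5]

n = len(observed)
mse = sum((s - o) ** 2 for s, o in zip(simulated, observed)) / n

# Theil inequality coefficient: 0 = perfect fit, 1 = worst possible.
U = math.sqrt(mse) / (
    math.sqrt(sum(s * s for s in simulated) / n)
    + math.sqrt(sum(o * o for o in observed) / n)
)

# Decomposition of MSE into bias, variance, and covariance proportions.
mean_s = sum(simulated) / n
mean_o = sum(observed) / n
sd_s = math.sqrt(sum((s - mean_s) ** 2 for s in simulated) / n)
sd_o = math.sqrt(sum((o - mean_o) ** 2 for o in observed) / n)
r = sum((s - mean_s) * (o - mean_o)
        for s, o in zip(simulated, observed)) / (n * sd_s * sd_o)

um = (mean_s - mean_o) ** 2 / mse     # bias proportion
us = (sd_s - sd_o) ** 2 / mse         # unequal-variance proportion
uc = 2 * (1 - r) * sd_s * sd_o / mse  # unequal-covariance proportion
print(round(U, 4), round(um + us + uc, 3))
```

The three proportions sum to one by construction; a good calibration concentrates the error in `uc` (unsystematic error) rather than in the bias or variance terms.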

  10. Epidemiology of HPV 16 and cervical cancer in Finland and the potential impact of vaccination: mathematical modelling analyses.

    Directory of Open Access Journals (Sweden)

    Ruanne V Barnabas

    2006-05-01

    Full Text Available BACKGROUND: Candidate human papillomavirus (HPV) vaccines have demonstrated almost 90%-100% efficacy in preventing persistent, type-specific HPV infection over 18 mo in clinical trials. If these vaccines go on to demonstrate prevention of precancerous lesions in phase III clinical trials, they will be licensed for public use in the near future. How these vaccines will be used in countries with national cervical cancer screening programmes is an important question. METHODS AND FINDINGS: We developed a transmission model of HPV 16 infection and progression to cervical cancer and calibrated it to Finnish HPV 16 seroprevalence over time. The model was used to estimate the transmission probability of the virus, to look at the effect of changes in patterns of sexual behaviour and smoking on age-specific trends in cancer incidence, and to explore the impact of HPV 16 vaccination. We estimated a high per-partnership transmission probability of HPV 16, of 0.6. The modelling analyses showed that changes in sexual behaviour and smoking accounted, in part, for the increase seen in cervical cancer incidence in 35- to 39-y-old women from 1990 to 1999. At both low (10%, in opportunistic immunisation) and high (90%, in a national immunisation programme) coverage of the adolescent population, vaccinating women and men had little benefit over vaccinating women alone. We estimate that vaccinating 90% of young women before sexual debut has the potential to decrease HPV type-specific (e.g., type 16) cervical cancer incidence by 91%. If older women are more likely to have persistent infections and progress to cancer, then vaccination with a duration of protection of less than 15 y could result in an older susceptible cohort and no decrease in cancer incidence. While vaccination has the potential to significantly reduce type-specific cancer incidence, its combination with screening further improves cancer prevention. CONCLUSIONS: HPV vaccination has the potential to
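The qualitative behaviour of a transmission model like this can be sketched with a minimal SIS-type calculation. Only the per-partnership transmission probability (0.6) comes from the abstract; the partner change rate, clearance rate, and vaccine efficacy below are illustrative assumptions, and the real model is far richer (age structure, progression to cancer, waning protection).

```python
# Minimal SIS sketch of HPV 16 transmission under vaccination.
beta = 0.6        # per-partnership transmission probability (from the abstract)
c = 2.0           # new partnerships per year (illustrative)
clearance = 0.9   # infections cleared per year (illustrative)

def r0(coverage, efficacy=0.95):
    """Reproduction number after vaccinating a fraction of the cohort
    with a vaccine of assumed efficacy."""
    return beta * c / clearance * (1.0 - coverage * efficacy)

def endemic_prevalence(coverage):
    """Steady-state prevalence in a simple SIS model: 1 - 1/R0 (0 if R0 <= 1)."""
    r = r0(coverage)
    return max(0.0, 1.0 - 1.0 / r)

for cov in (0.0, 0.1, 0.9):  # none, opportunistic, national programme
    print(cov, round(endemic_prevalence(cov), 3))
```

The sketch shows the threshold effect the abstract relies on: once coverage pushes the reproduction number below 1, type-specific infection is eliminated rather than merely reduced.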

  11. A multilevel model to address batch effects in copy number estimation using SNP arrays.

    Science.gov (United States)

    Scharpf, Robert B; Ruczinski, Ingo; Carvalho, Benilton; Doan, Betty; Chakravarti, Aravinda; Irizarry, Rafael A

    2011-01-01

    Submicroscopic changes in chromosomal DNA copy number dosage are common and have been implicated in many heritable diseases and cancers. Recent high-throughput technologies have a resolution that permits the detection of segmental changes in DNA copy number that span thousands of base pairs in the genome. Genomewide association studies (GWAS) may simultaneously screen for copy number phenotype and single nucleotide polymorphism (SNP) phenotype associations as part of the analytic strategy. However, genomewide array analyses are particularly susceptible to batch effects as the logistics of preparing DNA and processing thousands of arrays often involves multiple laboratories and technicians, or changes over calendar time to the reagents and laboratory equipment. Failure to adjust for batch effects can lead to incorrect inference and requires inefficient post hoc quality control procedures to exclude regions that are associated with batch. Our work extends previous model-based approaches for copy number estimation by explicitly modeling batch and using shrinkage to improve locus-specific estimates of copy number uncertainty. Key features of this approach include the use of biallelic genotype calls from experimental data to estimate batch-specific and locus-specific parameters of background and signal without the requirement of training data. We illustrate these ideas using a study of bipolar disease and a study of chromosome 21 trisomy. The former has batch effects that dominate much of the observed variation in the quantile-normalized intensities, while the latter illustrates the robustness of our approach to a data set in which approximately 27% of the samples have altered copy number. Locus-specific estimates of copy number can be plotted on the copy number scale to investigate mosaicism and guide the choice of appropriate downstream approaches for smoothing the copy number as a function of physical position. The software is open source and implemented in the R

  12. Multiplicative models of analysis : a description and the use in analysing accident ratios as a function of hourly traffic volume and road-surface skidding resistance.

    NARCIS (Netherlands)

    Oppe, S.

    1977-01-01

    Accident ratios are analysed with regard to the variables road surface skidding resistance and hourly traffic volume. It is concluded that the multiplicative model describes the data better than the additive model, and, moreover, that there is no interaction between skidding resistance and traffic volume.
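The additive-versus-multiplicative comparison can be reproduced in miniature: fit both models to a small two-way table of accident ratios and compare residual sums of squares. The table below is made up with exactly multiplicative structure, so the multiplicative model (the additive fit applied to logs) recovers it perfectly while the additive model does not.

```python
import math

# Hypothetical accident ratios indexed by skidding-resistance class (rows)
# and traffic-volume class (columns); constructed with multiplicative structure.
table = [
    [0.50, 1.00, 2.00],
    [0.75, 1.50, 3.00],
    [1.25, 2.50, 5.00],
]

def additive_fit(t):
    """Fitted values from grand mean + row effect + column effect."""
    nr, nc = len(t), len(t[0])
    grand = sum(sum(r) for r in t) / (nr * nc)
    rows = [sum(r) / nc - grand for r in t]
    cols = [sum(t[i][j] for i in range(nr)) / nr - grand for j in range(nc)]
    return [[grand + rows[i] + cols[j] for j in range(nc)] for i in range(nr)]

def sse(t, fit):
    return sum((t[i][j] - fit[i][j]) ** 2
               for i in range(len(t)) for j in range(len(t[0])))

add_fit = additive_fit(table)
logs = [[math.log(x) for x in row] for row in table]
mult_fit = [[math.exp(v) for v in row] for row in additive_fit(logs)]

print(round(sse(table, add_fit), 4), round(sse(table, mult_fit), 10))
```

On real accident data neither model fits exactly, but the same residual comparison is what supports the paper's conclusion in favour of the multiplicative form.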

  13. A second-generation device for automated training and quantitative behavior analyses of molecularly-tractable model organisms.

    Directory of Open Access Journals (Sweden)

    Douglas Blackiston

    Full Text Available A deep understanding of cognitive processes requires functional, quantitative analyses of the steps leading from genetics and the development of nervous system structure to behavior. Molecularly-tractable model systems such as Xenopus laevis and planaria offer an unprecedented opportunity to dissect the mechanisms determining the complex structure of the brain and CNS. A standardized platform that facilitated quantitative analysis of behavior would make a significant impact on evolutionary ethology, neuropharmacology, and cognitive science. While some animal tracking systems exist, the available systems do not allow automated training (feedback to individual subjects in real time), which is necessary for operant conditioning assays. The lack of standardization in the field, and the numerous technical challenges that face the development of a versatile system with the necessary capabilities, comprise a significant barrier keeping molecular developmental biology labs from integrating behavior analysis endpoints into their pharmacological and genetic perturbations. Here we report the development of a second-generation system that is a highly flexible, powerful machine vision and environmental control platform. In order to enable multidisciplinary studies aimed at understanding the roles of genes in brain function and behavior, and to aid other laboratories that do not have the facilities to undertake complex engineering development, we describe the device and the problems that it overcomes. We also present sample data using frog tadpoles and flatworms to illustrate its use. Having solved significant engineering challenges in its construction, the resulting design is a relatively inexpensive instrument of wide relevance for several fields, and will accelerate interdisciplinary discovery in pharmacology, neurobiology, regenerative medicine, and cognitive science.

  14. Transcriptomics and proteomics analyses of the PACAP38 influenced ischemic brain in permanent middle cerebral artery occlusion model mice

    Directory of Open Access Journals (Sweden)

    Hori Motohide

    2012-11-01

    Full Text Available Abstract Introduction The neuropeptide pituitary adenylate cyclase-activating polypeptide (PACAP) is considered to be a potential therapeutic agent for prevention of cerebral ischemia. Ischemia is among the most common causes of death after heart attack and cancer, causing major negative social and economic consequences. This study was designed to investigate the effect of PACAP38 injected intracerebroventricularly in a mouse model of permanent middle cerebral artery occlusion (PMCAO), along with a corresponding SHAM control that used 0.9% saline injection. Methods Ischemic and non-ischemic brain tissues were sampled at 6 and 24 hours post-treatment. Following behavioral analyses to confirm whether ischemia had occurred, we investigated the genome-wide changes in gene and protein expression using a DNA microarray chip (4x44K, Agilent) and two-dimensional gel electrophoresis (2-DGE) coupled with matrix-assisted laser desorption/ionization-time of flight-mass spectrometry (MALDI-TOF-MS), respectively. Western blotting and immunofluorescent staining were also used to further examine the identified protein factor. Results Our results revealed numerous changes in the transcriptome of the ischemic hemisphere (ipsilateral) treated with PACAP38 compared to the saline-injected SHAM control hemisphere (contralateral). Previously known (such as the interleukin family) and novel (Gabra6, Crtam) genes were identified under PACAP influence. In parallel, 2-DGE analysis revealed a highly expressed protein spot in the ischemic hemisphere that was identified as dihydropyrimidinase-related protein 2 (DPYL2). DPYL2, also known as Crmp2, is a marker for axonal growth and nerve development. Interestingly, PACAP treatment slightly increased its abundance (by 2-DGE and immunostaining) at 6 h but not at 24 h in the ischemic hemisphere, suggesting that PACAP activates a neuronal defense mechanism early on. 
Conclusions This study provides a detailed inventory of PACAP-influenced gene expressions

  15. Aeroelastic Analyses of the SemiSpan SuperSonic Transport (S4T) Wind Tunnel Model at Mach 0.95

    Science.gov (United States)

    Hur, Jiyoung

    2014-01-01

    Detailed aeroelastic analyses of the SemiSpan SuperSonic Transport (S4T) wind tunnel model at Mach 0.95 with a 1.75° fixed angle of attack are presented. First, a numerical procedure using the Computational Fluids Laboratory 3-Dimensional (CFL3D) Version 6.4 flow solver is investigated. The mesh update method for structured multi-block grids was successfully applied to the Navier-Stokes simulations. Second, the steady aerodynamic analyses with a rigid structure of the S4T wind tunnel model are reviewed in transonic flow. Third, static analyses were performed for both the Euler and Navier-Stokes equations. Both the Euler and Navier-Stokes equations predicted a significant increase of lift forces, compared to the results from the rigid structure of the S4T wind-tunnel model, over various dynamic pressures. Finally, dynamic aeroelastic analyses were performed to investigate the flutter condition of the S4T wind tunnel model at the transonic Mach number. Flutter was observed at a dynamic pressure of approximately 75.0 psf for the Navier-Stokes simulations, whereas it occurred at a dynamic pressure of approximately 47.27 psf for the Euler simulations. Also, the computational efficiency of the aeroelastic analyses for the S4T wind tunnel model has been assessed.

  16. Post hoc evaluation of a common-sense intervention for asthma management in community pharmacy

    Science.gov (United States)

    Seubert, Liza; Schneider, Carl R; Clifford, Rhonda

    2016-01-01

    Objectives The aim was to evaluate a common-sense, behavioural change intervention to implement clinical guidelines for asthma management in the community pharmacy setting. Design The components of the common-sense intervention were described in terms of categories and dimensions using the Intervention Taxonomy (ITAX) and Behaviour Change Techniques (BCTs) using the Behaviour Change Wheel (BCW), Capability, Opportunity and Motivation-Behaviour (COM-B) System and Behaviour Change Techniques Taxonomy (BCTTv1). The retrospective application of these existing tools facilitated evaluation of the mechanism, fidelity, logistics and rationale of the common-sense intervention. Intervention The initial intervention study was conducted in 336 community pharmacies in the metropolitan area of Perth, Western Australia. Small-group workshops were conducted in 25 pharmacies; 162 received academic detailing and 149 acted as controls. The intervention was designed to improve pharmacy compliance with guidelines for a non-prescription supply of asthma reliever medications. Results Retrospective application of ITAX identified mechanisms for the short-acting β agonists intervention including improving knowledge, behavioural skills, problem-solving skills, motivation and self-efficacy. All the logistical elements were considered in the intervention design but the duration and intensity of the intervention was minimal. The intervention was delivered as intended (as a workshop) to 13.4% of participants indicating compromised fidelity and significant adaptation. Retrospective application of the BCW, COM-B system and BCTTv1 identified 9 different behaviour change techniques as the rationale for promoting guideline-based practice change. Conclusions There was a sound rationale and clear mechanism for all the components of the intervention but issues related to logistics, adaptability and fidelity might have affected outcomes. 
Small group workshops could be a useful implementation strategy in community pharmacy, if logistical issues can be overcome and less adaptation occurs. Duration, intensity and reinforcement need consideration for successful wider implementation. Further qualitative evaluations, triangulation of research and evaluations across interventions should be used to provide a greater understanding of unresolved issues. PMID:27864251

  17. Reducing consumption of confectionery foods: A post-hoc segmentation analysis using a social cognition approach.

    Science.gov (United States)

    Naughton, Paul; McCarthy, Mary; McCarthy, Sinéad

    2017-10-01

    Considering confectionary consumption behaviour this