WorldWideScience

Sample records for model post-hoc analyses

  1. Deconstructing tolerance with clobazam: Post hoc analyses from an open-label extension study.

    Science.gov (United States)

    Gidal, Barry E; Wechsler, Robert T; Sankar, Raman; Montouris, Georgia D; White, H Steve; Cloyd, James C; Kane, Mary Clare; Peng, Guangbin; Tworek, David M; Shen, Vivienne; Isojarvi, Jouko

    2016-10-25

    To evaluate potential development of tolerance to adjunctive clobazam in patients with Lennox-Gastaut syndrome. Eligible patients enrolled in open-label extension study OV-1004, which continued until clobazam was commercially available in the United States or for a maximum of 2 years outside the United States. Enrolled patients started at 0.5 mg·kg⁻¹·d⁻¹ clobazam, not to exceed 40 mg/d. After 48 hours, dosages could be adjusted up to 2.0 mg·kg⁻¹·d⁻¹ (maximum 80 mg/d) on the basis of efficacy and tolerability. Post hoc analyses evaluated mean dosages and drop-seizure rates for the first 2 years of the open-label extension based on responder categories and baseline seizure quartiles in OV-1012. Individual patient listings were reviewed for dosage increases ≥40% and increasing seizure rates. Data from 200 patients were included. For patients free of drop seizures, there was no notable change in dosage over 24 months. For responder groups still exhibiting drop seizures, dosages were increased. Weekly drop-seizure rates for 100% and ≥75% responders demonstrated a consistent response over time. Few patients had a dosage increase ≥40% associated with an increase in seizure rates. Two-year findings suggest that the majority of patients do not develop tolerance to the antiseizure actions of clobazam. Observed dosage increases may reflect best efforts to achieve seizure freedom. It is possible that the clinical development of tolerance to clobazam has been overstated. NCT00518713 and NCT01160770. This study provides Class III evidence that the majority of patients do not develop tolerance to clobazam over 2 years of treatment. © 2016 American Academy of Neurology.

  2. Switching to olanzapine long-acting injection from either oral olanzapine or any other antipsychotic: comparative post hoc analyses

    Directory of Open Access Journals (Sweden)

    Ciudad A

    2013-11-01

    Full Text Available Antonio Ciudad,1 Ernie Anand,2 Lovisa Berggren,3 Marta Casillas,4 Alexander Schacht,3 Elena Perrin5 1Department of Clinical Research and Development, Eli Lilly & Co, Madrid, Spain; 2Neuroscience Medical Affairs – EU, Lilly Research Centre, Windlesham, Surrey, UK; 3Global Statistical Sciences, Eli Lilly & Co, Bad Homburg, Germany; 4European Scientific Communications, Eli Lilly & Co, Madrid, Spain; 5Medical Department, Eli Lilly & Co, Suresnes, Paris, France Background: A considerable proportion of patients suffering from schizophrenia show suboptimal responses to oral antipsychotics due to inadequate adherence. Hence, they are likely to benefit from switching to a long-acting injectable formulation. These post hoc analyses assessed the clinical effects of switching to olanzapine long-acting injection (OLAI) from either oral olanzapine (OLZ) or other antipsychotics (non-OLZ). Methods: Post hoc analyses were done based on two randomized studies (one short-term, one long-term) conducted in patients suffering from schizophrenia and treated with OLAI. The short-term study was an 8-week placebo-controlled, double-blind trial in acute patients, and the long-term study was a 2-year, oral olanzapine-controlled, open-label, follow-up of stabilized outpatients. Results: These analyses used data from 62 OLAI-treated patients (12 switched from OLZ, 50 from non-OLZ) from the short-term study and 190 OLAI-treated patients (56 switched from OLZ, 134 from non-OLZ) from the long-term study. Kaplan–Meier survival analyses of time to all-cause discontinuation of OLAI treatment did not differ significantly between OLZ and non-OLZ patients in the short-term study (P=0.209) or long-term study (P=0.448). Similarly, the proportions of OLZ and non-OLZ patients that discontinued OLAI were not statistically different in the short-term (16.7% versus 36.0%, respectively; P=0.198) or long-term (57.1% versus 47.8%, respectively; P=0.238) studies. In the short-term study, no

  3. Symptomatic efficacy of rasagiline monotherapy in early Parkinson's disease: post-hoc analyses from the ADAGIO trial.

    Science.gov (United States)

    Jankovic, Joseph; Berkovich, Elijahu; Eyal, Eli; Tolosa, Eduardo

    2014-06-01

    The ADAGIO study included a large cohort of patients with early PD (baseline total-UPDRS = 20) who were initially randomized to rasagiline and placebo, thereby allowing analyses of symptomatic efficacy. Post-hoc analyses comparing the efficacy of rasagiline 1 mg/day (n = 288) versus placebo (n = 588) on key symptoms at 36 weeks, and on total-UPDRS scores over 72 weeks (completer population: rasagiline 1 mg/day n = 221, placebo n = 392) were performed. Treatment with rasagiline resulted in significantly better tremor, bradykinesia, rigidity and postural-instability-gait-difficulty scores at week 36 versus placebo. Whereas the placebo group experienced progressive deterioration from baseline (2.6 UPDRS points at week 36), patients in the rasagiline group were maintained at baseline values at week 60 (UPDRS-change of 0.3 points). At week 72, patients who had received continuous monotherapy with rasagiline experienced a worsening of only 1.6 points. Treatment with rasagiline maintained motor function to baseline values for at least a year with significant benefits observed in all key PD motor symptoms. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Anti-Vascular Endothelial Growth Factor Comparative Effectiveness Trial for Diabetic Macular Edema: Additional Efficacy Post Hoc Analyses of a Randomized Clinical Trial.

    Science.gov (United States)

    Jampol, Lee M; Glassman, Adam R; Bressler, Neil M; Wells, John A; Ayala, Allison R

    2016-12-01

    Post hoc analyses from the Diabetic Retinopathy Clinical Research Network randomized clinical trial comparing aflibercept, bevacizumab, and ranibizumab for diabetic macular edema (DME) might influence interpretation of study results. To provide additional outcomes comparing 3 anti-vascular endothelial growth factor (VEGF) agents for DME. Post hoc analyses performed from May 3, 2016, to June 21, 2016, of a randomized clinical trial performed from August 22, 2012, to September 23, 2015, of 660 participants comparing 3 anti-VEGF treatments in eyes with center-involved DME causing vision impairment. Randomization to intravitreous aflibercept (2.0 mg), bevacizumab (1.25 mg), or ranibizumab (0.3 mg) administered up to monthly based on a structured retreatment regimen. Focal/grid laser treatment was added after 6 months for the treatment of persistent DME. Change in visual acuity (VA) area under the curve and change in central subfield thickness (CST) within subgroups based on whether an eye received laser treatment for DME during the study. Post hoc analyses were performed for 660 participants (mean [SD] age, 61 [10] years; 47% female, 65% white, 16% black or African American, 16% Hispanic, and 3% other). For eyes with an initial VA of 20/50 or worse, VA improvement was greater with aflibercept than the other agents at 1 year but superior only to bevacizumab at 2 years. Mean (SD) letter change in VA over 2 years (area under curve) was greater with aflibercept (+17.1 [9.7]) than with bevacizumab (+12.1 [9.4]; 95% CI, +1.6 to +7.3; P grid laser treatment was performed for DME, the only participants to have a substantial reduction in mean CST between 1 and 2 years were those with a baseline VA of 20/50 or worse receiving bevacizumab and laser treatment (mean [SD], -55 [108] µm; 95% CI, -82 to -28 µm; P grid laser treatment, ceiling and floor effects, or both may account for mean thickness reductions noted only in bevacizumab-treated eyes between 1 and 2 years

  5. Post-hoc pattern-oriented testing and tuning of an existing large model: lessons from the field vole.

    Directory of Open Access Journals (Sweden)

    Christopher J Topping

    Full Text Available Pattern-oriented modeling (POM) is a general strategy for modeling complex systems. In POM, multiple patterns observed at different scales and hierarchical levels are used to optimize model structure, to test and select sub-models of key processes, and for calibration. So far, POM has been used for developing new models and for models of low to moderate complexity. It remains unclear, though, whether the basic idea of POM, to utilize multiple patterns, could also be used to test and possibly develop existing and established models of high complexity. Here, we use POM to test, calibrate, and further develop an existing agent-based model of the field vole (Microtus agrestis), which was developed and tested within the ALMaSS framework. This framework is complex because it includes a high-resolution representation of the landscape and its dynamics, of the individual's behavior, and of the interaction between landscape and individual behavior. Results of fitting to the range of patterns chosen were generally very good, but the procedure required to achieve this was long and complicated. To obtain good correspondence between model and the real world it was often necessary to model the real world environment closely. We therefore conclude that post-hoc POM is a useful and viable way to test a highly complex simulation model, but also warn against the dangers of over-fitting to real world patterns that lack details in their explanatory driving factors. To overcome some of these obstacles we suggest the adoption of open-science and open-source approaches to ecological simulation modeling.
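
    The pattern-oriented idea of keeping only parameterizations that reproduce several observed patterns at once can be sketched generically. The snippet below is a minimal illustration of that filtering step, not the ALMaSS vole model; the toy simulation, pattern names, and tolerances are all assumptions.

```python
# Illustrative sketch of pattern-oriented calibration: candidate parameter sets
# are retained only if the simulated output reproduces several observed
# patterns simultaneously, each within its own tolerance.
import numpy as np

rng = np.random.default_rng(0)

def run_model(growth_rate, dispersal):
    """Stand-in for a complex simulation; returns three summary 'patterns'."""
    mean_density = 50 * growth_rate / (1 + dispersal)
    range_size = 10 + 80 * dispersal
    autumn_peak = 1.5 * growth_rate
    return {"density": mean_density, "range": range_size, "peak": autumn_peak}

# Observed patterns and acceptable relative error for each (assumed values)
observed = {"density": 40.0, "range": 55.0, "peak": 1.8}
tolerance = {"density": 0.2, "range": 0.2, "peak": 0.15}

accepted = []
for _ in range(5000):
    params = {"growth_rate": rng.uniform(0.5, 2.0), "dispersal": rng.uniform(0.1, 1.0)}
    sim = run_model(**params)
    if all(abs(sim[k] - observed[k]) / observed[k] <= tolerance[k] for k in observed):
        accepted.append(params)

print(f"{len(accepted)} parameter sets reproduce all three patterns simultaneously")
```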

  6. Onset of efficacy and tolerability following the initiation dosing of long-acting paliperidone palmitate: post-hoc analyses of a randomized, double-blind clinical trial

    Directory of Open Access Journals (Sweden)

    Fu Dong-Jing

    2011-05-01

    Full Text Available Abstract Background Paliperidone palmitate is a long-acting injectable atypical antipsychotic for the acute and maintenance treatment of adults with schizophrenia. The recommended initiation dosing regimen is 234 mg on Day 1 and 156 mg on Day 8 via intramuscular (deltoid) injection; followed by 39 to 234 mg once-monthly thereafter (deltoid or gluteal). These post-hoc analyses addressed two commonly encountered clinical issues regarding the initiation dosing: the time to onset of efficacy and the associated tolerability. Methods In a 13-week double-blind trial, 652 subjects with schizophrenia were randomized to paliperidone palmitate 39, 156, or 234 mg (corresponding to 25, 100, or 150 mg equivalents of paliperidone, respectively) or placebo (NCT#00590577). Subjects randomized to paliperidone palmitate received 234 mg on Day 1, followed by their randomized fixed dose on Day 8, and monthly thereafter, with no oral antipsychotic supplementation. The onset of efficacy was defined as the first timepoint where the paliperidone palmitate group showed significant improvement in the Positive and Negative Syndrome Scale (PANSS) score compared to placebo (Analysis of Covariance [ANCOVA] models and Last Observation Carried Forward [LOCF] methodology) without adjusting for multiplicity, using data from the Days 4, 8, 22, and 36 assessments. Adverse event (AE) rates and relative risks (RR) with 95% confidence intervals (CI) versus placebo were determined. Results Paliperidone palmitate 234 mg on Day 1 was associated with greater improvement than placebo on Least Squares (LS) mean PANSS total score at Day 8 (p = 0.037). After the Day 8 injection of 156 mg, there was continued PANSS improvement at Day 22 (p ≤ 0.007 vs. placebo) and Day 36 (p Conclusions Significantly greater symptom improvement was observed by Day 8 with paliperidone palmitate (234 mg on Day 1) compared to placebo; this effect was maintained after the 156 mg Day 8 injection, with a trend towards a dose

  7. Post Hoc Analyses of Randomized Clinical Trial for the Effect of Clopidogrel Added to Aspirin on Kidney Function.

    Science.gov (United States)

    Ikeme, Jesse C; Pergola, Pablo E; Scherzer, Rebecca; Shlipak, Michael G; Benavente, Oscar R; Peralta, Carmen A

    2017-07-07

    Despite the high burden of CKD, few specific therapies are available that can halt disease progression. In animal models, clopidogrel has emerged as a potential therapy to preserve kidney function. The effect of clopidogrel on kidney function in humans has not been established. The Secondary Prevention of Small Subcortical Strokes Study randomized participants with prior lacunar stroke to treatment with aspirin or aspirin plus clopidogrel. We compared annual eGFR decline and incidence of rapid eGFR decline (≥30% from baseline) using generalized estimating equations and interval-censored proportional hazards regression, respectively. We also stratified our analyses by baseline eGFR, systolic BP target, and time after randomization. At randomization, median age was 62 (interquartile range, 55-71) years old; 36% had a history of diabetes, 90% had hypertension, and the median eGFR was 81 (interquartile range, 65-94) ml/min per 1.73 m². Persons receiving aspirin plus clopidogrel had an average annual change in kidney function of -1.39 (95% confidence interval, -1.15 to -1.62) ml/min per 1.73 m² per year compared with -1.52 (95% confidence interval, -1.30 to -1.74) ml/min per 1.73 m² per year among persons receiving aspirin only (P=0.42). Rapid kidney function decline occurred in 21% of participants receiving clopidogrel plus aspirin compared with 22% of participants receiving aspirin plus placebo (hazard ratio, 0.94; 95% confidence interval, 0.79 to 1.10; P=0.42). Findings did not vary by baseline eGFR, time after randomization, or systolic BP target (all P values for interaction were >0.3). We found no effect of clopidogrel added to aspirin compared with aspirin alone on kidney function decline among persons with prior lacunar stroke. Copyright © 2017 by the American Society of Nephrology.
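
    The slope comparison described here (annual eGFR change by arm, accounting for repeated measurements within patients) can be estimated with generalized estimating equations. The sketch below is a hedged illustration on simulated data; the column names (patient_id, years, egfr, clopidogrel) are assumptions, not the trial's variables.

```python
# Hedged sketch: annual eGFR slope by treatment arm via GEE with an
# exchangeable working correlation; the data frame is simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n, visits = 200, 4
df = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n), visits),
    "years": np.tile(np.arange(visits), n),
    "clopidogrel": np.repeat(rng.integers(0, 2, n), visits),
})
# Simulated eGFR: roughly -1.5 mL/min/1.73 m^2 per year, nearly unrelated to treatment
df["egfr"] = (80 - 1.5 * df["years"] + 0.1 * df["clopidogrel"] * df["years"]
              + rng.normal(0, 5, len(df)))

model = smf.gee("egfr ~ years * clopidogrel", groups="patient_id", data=df,
                cov_struct=sm.cov_struct.Exchangeable())
result = model.fit()
# The years:clopidogrel coefficient is the between-arm difference in annual slope
print(result.summary())
```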

  8. Estimated medical expenditure and risk of job loss among rheumatoid arthritis patients undergoing tofacitinib treatment: post hoc analyses of two randomized clinical trials.

    Science.gov (United States)

    Rendas-Baum, Regina; Kosinski, Mark; Singh, Amitabh; Mebus, Charles A; Wilkinson, Bethany E; Wallenstein, Gene V

    2017-08-01

    RA causes high disability levels and reduces health-related quality of life, triggering increased costs and risk of unemployment. Tofacitinib is an oral Janus kinase inhibitor for the treatment of RA. These post hoc analyses of phase 3 data aimed to assess monthly medical expenditure (MME) and risk of job loss for tofacitinib treatment vs placebo. Data analysed were from two randomized phase 3 studies of RA patients (n = 1115) with inadequate response to MTX or TNF inhibitors (TNFi) receiving tofacitinib 5 or 10 mg twice daily, adalimumab (one study only) or placebo, in combination with MTX. Short Form 36 version 2 Health Survey physical and mental component summary scores were translated into predicted MME via an algorithm and concurrent inability to work and job loss risks at 6, 12 and 24 months, using Medical Outcomes Study data. MME reduction by month 3 was $100 greater for tofacitinib- than placebo-treated TNFi inadequate responders (P 20 and 6% reductions from baseline, respectively. By month 3 of tofacitinib treatment, the odds of inability to work decreased ⩾16%, and risk of future job loss decreased ∼20% (P tofacitinib- than placebo-treated MTX inadequate responders (P tofacitinib treatment, the odds of inability to work decreased ⩾31% and risk of future job loss decreased ⩾25% (P Tofacitinib treatment had a positive impact on estimated medical expenditure and risk of job loss for RA patients with inadequate response to MTX or TNFi. © The Author 2017. Published by Oxford University Press on behalf of the British Society for Rheumatology.

  9. Post-Hoc Pattern-Oriented Testing and Tuning of an Existing Large Model: Lessons from the Field Vole

    DEFF Research Database (Denmark)

    Topping, Christopher John; Dalkvist, Trine; Grimm, Volker

    2012-01-01

    Pattern-oriented modeling (POM) is a general strategy for modeling complex systems. In POM, multiple patterns observed at different scales and hierarchical levels are used to optimize model structure, to test and select sub-models of key processes, and for calibration. So far, POM has been used f...

  10. Effects of memantine on cognition in patients with moderate to severe Alzheimer's disease: post-hoc analyses of ADAS-cog and SIB total and single-item scores from six randomized, double-blind, placebo-controlled studies.

    Science.gov (United States)

    Mecocci, Patrizia; Bladström, Anna; Stender, Karina

    2009-05-01

    The post-hoc analyses reported here evaluate the specific effects of memantine treatment on ADAS-cog single-items or SIB subscales for patients with moderate to severe AD. Data from six multicentre, randomised, placebo-controlled, parallel-group, double-blind, 6-month studies were used as the basis for these post-hoc analyses. All patients with a Mini-Mental State Examination (MMSE) score of less than 20 were included. Analyses of patients with moderate AD (MMSE: 10-19), evaluated with the Alzheimer's disease Assessment Scale (ADAS-cog) and analyses of patients with moderate to severe AD (MMSE: 3-14), evaluated using the Severe Impairment Battery (SIB), were performed separately. The mean change from baseline showed a significant benefit of memantine treatment on both the ADAS-cog (p ADAS-cog single-item analyses showed significant benefits of memantine treatment, compared to placebo, for mean change from baseline for commands (p < 0.001), ideational praxis (p < 0.05), orientation (p < 0.01), comprehension (p < 0.05), and remembering test instructions (p < 0.05) for observed cases (OC). The SIB subscale analyses showed significant benefits of memantine, compared to placebo, for mean change from baseline for language (p < 0.05), memory (p < 0.05), orientation (p < 0.01), praxis (p < 0.001), and visuospatial ability (p < 0.01) for OC. Memantine shows significant benefits on overall cognitive abilities as well as on specific key cognitive domains for patients with moderate to severe AD. (c) 2009 John Wiley & Sons, Ltd.

  11. Bardoxolone Methyl Improves Kidney Function in Patients with Chronic Kidney Disease Stage 4 and Type 2 Diabetes: Post-Hoc Analyses from Bardoxolone Methyl Evaluation in Patients with Chronic Kidney Disease and Type 2 Diabetes Study

    Science.gov (United States)

    Chin, Melanie P.; Bakris, George L.; Block, Geoffrey A.; Chertow, Glenn M.; Goldsberry, Angie; Inker, Lesley A.; Heerspink, Hiddo J.L.; O'Grady, Megan; Pergola, Pablo E.; Wanner, Christoph; Warnock, David G.; Meyer, Colin J.

    2018-01-01

    Background Increases in measured inulin clearance, measured creatinine clearance, and estimated glomerular filtration rate (eGFR) have been observed with bardoxolone methyl in 7 studies enrolling approximately 2,600 patients with type 2 diabetes (T2D) and chronic kidney disease (CKD). The largest of these studies was Bardoxolone Methyl Evaluation in Patients with Chronic Kidney Disease and Type 2 Diabetes (BEACON), a multinational, randomized, double-blind, placebo-controlled phase 3 trial which enrolled patients with T2D and CKD stage 4. The BEACON trial was terminated after preliminary analyses showed that patients randomized to bardoxolone methyl experienced significantly higher rates of heart failure events. We performed post-hoc analyses to characterize changes in kidney function induced by bardoxolone methyl. Methods Patients in BEACON (n = 2,185) were randomized 1:1 to receive once-daily bardoxolone methyl (20 mg) or placebo. We compared the effects of bardoxolone methyl and placebo on a post-hoc composite renal endpoint consisting of ≥30% decline from baseline in eGFR, eGFR <15 mL/min/1.73 m², and end-stage renal disease (ESRD) events (provision of dialysis or kidney transplantation). Results Consistent with prior studies, patients randomized to bardoxolone methyl experienced mean increases in eGFR that were sustained through study week 48. Moreover, increases in eGFR from baseline were sustained 4 weeks after cessation of treatment. Patients randomized to bardoxolone methyl were significantly less likely to experience the composite renal endpoint (hazard ratio 0.48 [95% CI 0.36–0.64]; p < 0.0001). Conclusions Bardoxolone methyl preserves kidney function and may delay the onset of ESRD in patients with T2D and stage 4 CKD. PMID:29402767
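
    A hazard ratio for a time-to-first-event composite endpoint of this kind is typically obtained from a Cox proportional hazards model. The sketch below uses the lifelines package on simulated data; the assumed effect size, follow-up length, and column names are illustrative only and are not BEACON data.

```python
# Minimal sketch: hazard ratio for a composite renal endpoint with a Cox model.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(2)
n = 1000
treated = rng.integers(0, 2, n)
# Exponential event times with a lower hazard in the treated arm (assumed HR ~0.5)
time_to_event = rng.exponential(scale=np.where(treated == 1, 200, 100))
follow_up = 48.0  # weeks of follow-up
df = pd.DataFrame({
    "bardoxolone": treated,
    "duration": np.minimum(time_to_event, follow_up),
    "composite_event": (time_to_event <= follow_up).astype(int),
})

cph = CoxPHFitter()
cph.fit(df, duration_col="duration", event_col="composite_event")
cph.print_summary()  # exp(coef) is the estimated hazard ratio with 95% CI
```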

  12. Post hoc subgroups in clinical trials: Anathema or analytics?

    Science.gov (United States)

    Weisberg, Herbert I; Pontes, Victor P

    2015-08-01

    There is currently much interest in generating more individualized estimates of treatment effects. However, traditional statistical methods are not well suited to this task. Post hoc subgroup analyses of clinical trials are fraught with methodological problems. We suggest that the alternative research paradigm of predictive analytics, widely used in many business contexts, can be adapted to help. We compare the statistical and analytics perspectives and suggest that predictive modeling should often replace subgroup analysis. We then introduce a new approach, cadit modeling, that can be useful to identify and test individualized causal effects. The cadit technique is particularly useful in the context of selecting from among a large number of potential predictors. We describe a new variable-selection algorithm that has been applied in conjunction with cadit. The cadit approach is illustrated through a reanalysis of data from the Randomized Aldactone Evaluation Study trial, which studied the efficacy of spironolactone in heart-failure patients. The trial was successful, but a serious adverse effect (hyperkalemia) was subsequently discovered. Our reanalysis suggests that it may be possible to predict the degree of hyperkalemia based on a logistic model and to identify a subgroup in which the effect is negligible. Cadit modeling is a promising alternative to subgroup analyses. Cadit regression is relatively straightforward to implement, generates results that are easy to present and explain, and can mesh straightforwardly with many variable-selection algorithms. © The Author(s) 2015.
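
    The cadit technique itself is not specified in this abstract; the snippet below is only a generic illustration of the predictive-analytics idea it contrasts with subgroup analysis: fit an outcome model with treatment-by-covariate interactions and read off individualized effect estimates. All variable names and data are simulated, and this is not the authors' method or the RALES data.

```python
# Generic individualized-effect sketch: predicted hyperkalemia risk with vs.
# without treatment for each (simulated) patient from a logistic model.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 2000
df = pd.DataFrame({
    "spironolactone": rng.integers(0, 2, n),
    "baseline_k": rng.normal(4.3, 0.4, n),   # baseline potassium (mmol/L)
    "egfr": rng.normal(60, 15, n),
})
# Simulated risk rising with treatment, baseline potassium, and low eGFR
logit = (-8 + 1.0 * df["spironolactone"] + 1.2 * df["baseline_k"] - 0.02 * df["egfr"]
         + 0.5 * df["spironolactone"] * (df["baseline_k"] - 4.3))
df["hyperkalemia"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = df[["spironolactone", "baseline_k", "egfr"]].copy()
X["treat_x_k"] = X["spironolactone"] * X["baseline_k"]
model = LogisticRegression(max_iter=1000).fit(X, df["hyperkalemia"])

# Individualized effect: predicted risk with vs. without treatment per patient
X_on, X_off = X.copy(), X.copy()
X_on["spironolactone"], X_on["treat_x_k"] = 1, X["baseline_k"]
X_off["spironolactone"], X_off["treat_x_k"] = 0, 0.0
risk_diff = model.predict_proba(X_on)[:, 1] - model.predict_proba(X_off)[:, 1]
print("Patients with negligible predicted excess risk:", (risk_diff < 0.01).sum())
```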

  13. Post hoc analyses of the impact of previous medication on the efficacy of lisdexamfetamine dimesylate in the treatment of attention-deficit/hyperactivity disorder in a randomized, controlled trial

    Directory of Open Access Journals (Sweden)

    Coghill DR

    2014-10-01

    =317; LDX, -18.6 [-21.5, -15.7]; OROS-MPH, -13.0 [-15.9, -10.2]) and treatment-naïve individuals (n=147; LDX, -15.1 [-19.4, -10.9]; OROS-MPH, -12.7 [-16.8, -8.5]) or patients previously treated with any ADHD medication (n=170; LDX, -21.5 [-25.5, -17.6]; OROS-MPH, -14.2 [-18.1, -10.3]). In addition, similar proportions of patients receiving active treatment were categorized as improved based on CGI-I score (CGI-I of 1 or 2) in the overall study population and among treatment-naïve individuals or patients previously treated with any ADHD medication. Conclusion: In these post hoc analyses, the response to LDX treatment, and to the reference treatment OROS-MPH, was similar to that observed for the overall study population in subgroups of patients categorized according to whether or not they had previously received ADHD medication. Keywords: attention-deficit/hyperactivity disorder, lisdexamfetamine dimesylate, methylphenidate, central nervous system stimulants

  14. Does rectal indomethacin eliminate the need for prophylactic pancreatic stent placement in patients undergoing high-risk ERCP? Post hoc efficacy and cost-benefit analyses using prospective clinical trial data.

    Science.gov (United States)

    Elmunzer, B Joseph; Higgins, Peter D R; Saini, Sameer D; Scheiman, James M; Parker, Robert A; Chak, Amitabh; Romagnuolo, Joseph; Mosler, Patrick; Hayward, Rodney A; Elta, Grace H; Korsnes, Sheryl J; Schmidt, Suzette E; Sherman, Stuart; Lehman, Glen A; Fogel, Evan L

    2013-03-01

    A recent large-scale randomized controlled trial (RCT) demonstrated that rectal indomethacin administration is effective in addition to pancreatic stent placement (PSP) for preventing post-endoscopic retrograde cholangiopancreatography (ERCP) pancreatitis (PEP) in high-risk cases. We performed a post hoc analysis of this RCT to explore whether rectal indomethacin can replace PSP in the prevention of PEP and to estimate the potential cost savings of such an approach. We retrospectively classified RCT subjects into four prevention groups: (1) no prophylaxis, (2) PSP alone, (3) rectal indomethacin alone, and (4) the combination of PSP and indomethacin. Multivariable logistic regression was used to adjust for imbalances in the prevalence of risk factors for PEP between the groups. Based on these adjusted PEP rates, we conducted an economic analysis comparing the costs associated with PEP prevention strategies employing rectal indomethacin alone, PSP alone, or the combination of both. After adjusting for risk using two different logistic regression models, rectal indomethacin alone appeared to be more effective for preventing PEP than no prophylaxis, PSP alone, and the combination of indomethacin and PSP. Economic analysis revealed that indomethacin alone was a cost-saving strategy in 96% of Monte Carlo trials. A prevention strategy employing rectal indomethacin alone could save approximately $150 million annually in the United States compared with a strategy of PSP alone, and $85 million compared with a strategy of indomethacin and PSP. This hypothesis-generating study suggests that prophylactic rectal indomethacin could replace PSP in patients undergoing high-risk ERCP, potentially improving clinical outcomes and reducing healthcare costs. An RCT comparing rectal indomethacin alone vs. indomethacin plus PSP is needed.
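
    The Monte Carlo cost comparison described can be sketched in a few lines: sample pancreatitis rates and unit costs from plausible ranges and count how often the indomethacin-only strategy is cheapest. All numbers below are invented assumptions for illustration, not the study's inputs.

```python
# Loose sketch of a Monte Carlo cost comparison of PEP prevention strategies.
import numpy as np

rng = np.random.default_rng(11)
n_sim, n_patients = 10000, 1000
indomethacin_cheapest = 0

for _ in range(n_sim):
    pep_rate = {"indo": rng.uniform(0.05, 0.12), "stent": rng.uniform(0.08, 0.16),
                "both": rng.uniform(0.04, 0.10)}
    drug_cost = rng.uniform(50, 150)        # per-patient indomethacin cost (assumed)
    stent_cost = rng.uniform(1000, 2000)    # per-patient stent placement cost (assumed)
    pep_cost = rng.uniform(5000, 15000)     # cost of treating one PEP episode (assumed)
    cost = {
        "indo": n_patients * (drug_cost + pep_rate["indo"] * pep_cost),
        "stent": n_patients * (stent_cost + pep_rate["stent"] * pep_cost),
        "both": n_patients * (drug_cost + stent_cost + pep_rate["both"] * pep_cost),
    }
    if min(cost, key=cost.get) == "indo":
        indomethacin_cheapest += 1

print(f"Indomethacin alone was the least costly strategy in "
      f"{indomethacin_cheapest / n_sim:.0%} of simulations")
```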

  15. Modelling of increased homocysteine in ischaemic stroke: post-hoc cross-sectional matched case-control analysis in young patients

    Directory of Open Access Journals (Sweden)

    Penka A. Atanassova

    2007-03-01

    Full Text Available BACKGROUND & PURPOSE: Hyperhomocysteinaemia has been postulated to participate in the pathogenesis of ischaemic stroke (IS). However, especially in young adults, there is a possibility of significantly increased IS risk due to increased 'normal' homocysteinaemia, i.e., 'hidden' ('pathologically dormant') prevalence within a healthy, normally-defined range. We performed a post-hoc modelling investigation of plasma total homocysteinaemia (THCY) in gender- and age-matched young patients in the acute IS phase. We evaluated relationships between THCY and the prevalence of other potential risk factors in 41 patients vs. 41 healthy controls. METHOD: We used clinical methods, instrumental and neuroimaging procedures, risk factor examination, total plasma homocysteine measurements and other laboratory and statistical modelling techniques. RESULTS: IS patients and healthy controls were similar not only for matching variables, but also for smoking, main vitamin status, serum creatinine and lipid profile. Patients with IS, however, had lower vitamin B6 levels and higher THCY, fibrinogen and triglycerides (TGL). In multivariate stepwise logistic regression, only increased THCY and TGL were significantly and independently associated with the risk for stroke (72% model accuracy, p model=0.001). An increase of THCY by 1.0 µmol/L was associated with a 22% higher risk of ischaemic stroke [adjusted OR=1.22 (95% CI 1.03–1.44)]. In this way, a novel lower cut-off value for HCY of 11.58 µmol/L in younger patients was revealed (ROC AUC=0.67, 95% CI 0.55–0.78, p=0.009). CONCLUSION: The new THCY cut-off clearly discriminated between absence and presence of IS (sensitivity >63%, specificity >68%) irrespective of age and gender and may be applied to better evaluate and more precisely define, as early as possible, the young patients at increased IS risk.
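
    A risk-marker cut-off with an associated AUC, sensitivity and specificity, like the 11.58 µmol/L threshold above, is commonly derived from an ROC curve by maximizing Youden's J. The sketch below shows that step on simulated homocysteine values; the distributions are assumptions, not the study's measurements.

```python
# Small sketch: derive an ROC-based cut-off for a continuous risk marker.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(4)
# Simulated total homocysteine (µmol/L) in controls and young stroke cases
controls = rng.normal(9.5, 2.0, 41)
cases = rng.normal(11.5, 2.5, 41)
thcy = np.concatenate([controls, cases])
stroke = np.concatenate([np.zeros(41), np.ones(41)])

fpr, tpr, thresholds = roc_curve(stroke, thcy)
youden_j = tpr - fpr
best = np.argmax(youden_j)
print(f"AUC = {roc_auc_score(stroke, thcy):.2f}")
print(f"Optimal cut-off ≈ {thresholds[best]:.2f} µmol/L "
      f"(sensitivity {tpr[best]:.0%}, specificity {1 - fpr[best]:.0%})")
```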

  16. Dose-related beneficial and harmful effects of gabapentin in postoperative pain management – post hoc analyses from a systematic review with meta-analyses and trial sequential analyses

    Directory of Open Access Journals (Sweden)

    Fabritius ML

    2017-11-01

    Full Text Available Maria Louise Fabritius,1 Jørn Wetterslev,2 Ole Mathiesen,3 Jørgen B Dahl1 1Department of Anaesthesiology and Intensive Care, Bispebjerg and Frederiksberg Hospitals, Copenhagen, Denmark; 2Copenhagen Trial Unit, Centre for Clinical Intervention Research, Copenhagen University Hospital, Copenhagen, Denmark; 3Department of Anaesthesiology, Zealand University Hospital, Køge, Denmark Background: During the last 15 years, gabapentin has become an established component of postoperative pain treatment. Gabapentin has been employed in a wide range of doses, but little is known about the optimal dose, providing the best balance between benefit and harm. This systematic review with meta-analyses aimed to explore the beneficial and harmful effects of various doses of gabapentin administered to surgical patients. Materials and methods: Data in this paper were derived from an original review, and the subgroup analyses were predefined in an International Prospective Register of Systematic Reviews published protocol: PROSPERO (ID: CRD42013006538). The methods followed Cochrane guidelines. The Cochrane Library’s CENTRAL, PubMed, EMBASE, Science Citation Index Expanded, Google Scholar, and FDA database were searched for relevant trials. Randomized clinical trials comparing gabapentin versus placebo were included. Four different dose intervals were investigated: 0–350, 351–700, 701–1050, and >1050 mg. Primary co-outcomes were 24-hour morphine consumption and serious adverse events (SAEs), with emphasis put on trials with low risk of bias. Results: One hundred and twenty-two randomized clinical trials, with 8466 patients, were included. Sixteen were overall low risk of bias. No consistent increase in morphine-sparing effect was observed with increasing doses of gabapentin from the trials with low risk of bias. Analyzing all trials, the smallest and the highest dose subgroups demonstrated numerically the most prominent reduction in morphine consumption
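
    Pooling 24-hour morphine consumption within a dose subgroup, as described above, amounts to a random-effects meta-analysis of mean differences. The sketch below implements a standard DerSimonian-Laird pooling step; the trial-level mean differences and standard errors are invented for illustration and are not values from this review.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of mean differences
# (24-hour morphine consumption vs. placebo) within one dose subgroup.
import numpy as np

# (mean difference in mg morphine, standard error) for hypothetical trials
trials = [(-8.0, 2.5), (-5.5, 3.0), (-11.0, 4.0), (-3.0, 2.0)]
md = np.array([t[0] for t in trials])
se = np.array([t[1] for t in trials])

w_fixed = 1 / se**2
q = np.sum(w_fixed * (md - np.sum(w_fixed * md) / np.sum(w_fixed))**2)
dof = len(md) - 1
c = np.sum(w_fixed) - np.sum(w_fixed**2) / np.sum(w_fixed)
tau2 = max(0.0, (q - dof) / c)          # between-trial variance
w_random = 1 / (se**2 + tau2)

pooled = np.sum(w_random * md) / np.sum(w_random)
pooled_se = np.sqrt(1 / np.sum(w_random))
print(f"Pooled mean difference {pooled:.1f} mg "
      f"(95% CI {pooled - 1.96*pooled_se:.1f} to {pooled + 1.96*pooled_se:.1f}), "
      f"tau^2 = {tau2:.2f}")
```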

  17. Post hoc analyses of East Asian patients from the randomized placebo-controlled PREVAIL trial of enzalutamide in patients with chemotherapy-naïve, metastatic castration-resistant prostate cancer.

    Science.gov (United States)

    Kim, Choung Soo; Choi, Young Deuk; Lee, Sang Eun; Lee, Hyun Moo; Ueda, Takeshi; Yonese, Junji; Fukagai, Takashi; Chiong, Edmund; Lau, Weber; Abhyankar, Sarang; Theeuwes, Ad; Tombal, Bertrand; Beer, Tomasz M; Kimura, Go

    2017-07-01

    Enzalutamide is an androgen receptor (AR) inhibitor that acts on different steps in the AR signaling pathway. In PREVAIL, an international, phase III, double-blind, placebo-controlled trial, enzalutamide significantly reduced the risk of radiographic progression by 81% (hazard ratio [HR], 0.19; P PREVAIL trial, we performed a post hoc analysis of the Japanese, Korean, and Singaporean patients. PREVAIL enrolled patients with asymptomatic or mildly symptomatic chemotherapy-naïve metastatic castration-resistant prostate cancer who had progressed on androgen deprivation therapy. During the study, patients received enzalutamide (160 mg/d) or placebo (1:1) until death or discontinuation because of radiographic progression or skeletal-related event and initiation of subsequent therapy. Centrally assessed radiographic progression-free survival (rPFS) and overall survival (OS) were coprimary endpoints. The secondary endpoints of the PREVAIL trial were investigator-assessed rPFS, time to initiation of chemotherapy, time to prostate-specific antigen (PSA) progression, and PSA response (≥50% decline). Of 1717 patients, 148 patients were enrolled at sites in East Asia (enzalutamide 73, placebo 75). Treatment effect of enzalutamide versus placebo was consistent with that for the overall population as indicated by the HRs (95% confidence interval) of 0.38 (0.10-1.44) for centrally assessed rPFS, 0.59 (0.29-1.23) for OS, 0.33 (0.19-0.60) for time to chemotherapy, and 0.32 (0.20-0.50) for time to PSA progression. In East Asian patients, PSA responses were observed in 68.5% and 14.7% of enzalutamide- and placebo-treated patients, respectively. The enzalutamide plasma concentration ratio (East Asian:non-Asian patients) was 1.12 (90% confidence interval, 1.05-1.20) at 13 weeks. Treatment-related adverse events grade ≥ 3 occurred in 1.4% and 2.7% of enzalutamide- and placebo-treated East Asian patients, respectively. Treatment effects and safety of enzalutamide in East Asian

  18. Perioperative hyperoxia - Long-term impact on cardiovascular complications after abdominal surgery, a post hoc analysis of the PROXI trial

    DEFF Research Database (Denmark)

    Fonnes, Siv; Gogenur, Ismail; Sondergaard, Edith Smed

    2016-01-01

    BACKGROUND: Increased long-term mortality was found in patients exposed to perioperative hyperoxia in the PROXI trial, where patients undergoing laparotomy were randomised to 80% versus 30% oxygen during and after surgery. This post hoc follow-up study assessed the impact of perioperative hyperoxia...... included myocardial infarction, other heart disease, and acute coronary syndrome or death. Data were analysed in the Cox proportional hazards model. RESULTS: The primary outcome, acute coronary syndrome, occurred in 2.5% versus 1.3% in the 80% versus 30% oxygen group; HR 2.15 (95% CI 0.96-4.84). Patients...

  19. The post hoc use of randomised controlled trials to explore drug associated cancer outcomes

    DEFF Research Database (Denmark)

    Stefansdottir, Gudrun; Zoungas, Sophia; Chalmers, John

    2013-01-01

    on public health before proper regulatory action can be taken. This paper aims to discuss challenges of exploring drug-associated cancer outcomes by post-hoc analyses of randomised controlled trials (RCTs) designed for other purposes. METHODOLOGICAL CHALLENGES TO CONSIDER: We set out to perform a post-hoc nested case-control analysis in the ADVANCE trial in order to examine the association between insulin use and cancer. We encountered several methodological challenges that made the results difficult to interpret, including short duration of exposure of interest, lack of power, and correlation between exposure and potential confounders. Considering these challenges, we concluded that using the data would not enlighten the discussion about insulin use and cancer risk and only serve to further complicate any understanding. Therefore, we decided to use our experience to illustrate methodological challenges

  20. Sigsearch: a new term for post hoc unplanned search for statistically significant relationships with the intent to create publishable findings.

    Science.gov (United States)

    Hashim, Muhammad Jawad

    2010-09-01

    Post-hoc secondary data analysis with no prespecified hypotheses has been discouraged by textbook authors and journal editors alike. Unfortunately no single term describes this phenomenon succinctly. I would like to coin the term "sigsearch" to define this practice and bring it within the teaching lexicon of statistics courses. Sigsearch would include any unplanned, post-hoc search for statistical significance using multiple comparisons of subgroups. It would also include data analysis with outcomes other than the prespecified primary outcome measure of a study as well as secondary data analyses of earlier research.
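
    The hazard that "sigsearch" describes can be demonstrated with a short simulation: under a truly null treatment effect, scanning many unplanned subgroups for p < 0.05 flags a "significant" result far more often than 5% of the time. The subgroup count and sample sizes below are arbitrary assumptions chosen for illustration.

```python
# Simulation: false-positive rate of unplanned subgroup scanning under the null.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(5)
n_trials, n_per_arm, n_subgroups = 1000, 200, 10
false_positive_trials = 0

for _ in range(n_trials):
    treatment = rng.normal(0, 1, n_per_arm)   # no true effect in any subgroup
    control = rng.normal(0, 1, n_per_arm)
    sub_t = rng.integers(0, n_subgroups, n_per_arm)
    sub_c = rng.integers(0, n_subgroups, n_per_arm)
    p_values = [ttest_ind(treatment[sub_t == g], control[sub_c == g]).pvalue
                for g in range(n_subgroups)]
    if min(p_values) < 0.05:
        false_positive_trials += 1

print(f"At least one 'significant' subgroup in "
      f"{false_positive_trials / n_trials:.0%} of null trials")
```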

  1. Preventive Ceftriaxone in Patients with Stroke Treated with Intravenous Thrombolysis: Post Hoc Analysis of the Preventive Antibiotics in Stroke Study

    NARCIS (Netherlands)

    Vermeij, Jan-Dirk; Westendorp, Willeke F.; Roos, Yvo B.; Brouwer, Matthijs C.; van de Beek, Diederik; Nederkoorn, Paul J.

    2016-01-01

    The Preventive Antibiotics in Stroke Study (PASS), a randomized open-label masked endpoint trial, showed that preventive ceftriaxone did not improve functional outcome at 3 months in patients with acute stroke (adjusted common OR 0.95; 95% CI 0.82-1.09). Post-hoc analyses showed that among patients

  2. simulate_CAT: A Computer Program for Post-Hoc Simulation for Computerized Adaptive Testing

    Directory of Open Access Journals (Sweden)

    İlker Kalender

    2015-06-01

    Full Text Available This paper presents computer software developed by the author. The software conducts post-hoc simulations for computerized adaptive testing based on real responses of examinees to paper-and-pencil tests, under different parameters that can be defined by the user. In this paper, brief information is given about post-hoc simulations. After that, the working principle of the software is provided and a sample simulation with the required input files is shown. Finally, the output files are described.
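
    The core of a post-hoc CAT simulation is that items are selected adaptively but the answers come from an examinee's already-recorded responses, so no new responses are generated. The sketch below illustrates that loop with a simple Rasch model and maximum-information selection; the item parameters and responses are simulated, and this is not the author's software.

```python
# Rough sketch of a post-hoc CAT simulation against a fixed response record.
import numpy as np

rng = np.random.default_rng(6)
n_items, max_len = 100, 20
b = rng.normal(0, 1, n_items)                     # Rasch item difficulties
true_theta = 0.4
responses = rng.binomial(1, 1 / (1 + np.exp(-(true_theta - b))))  # the fixed record

def mle_theta(items, resp, iters=20):
    """Newton-Raphson ability estimate for a Rasch model, bounded to [-4, 4]."""
    theta = 0.0
    for _ in range(iters):
        p = 1 / (1 + np.exp(-(theta - b[items])))
        grad = np.sum(resp - p)
        hess = -np.sum(p * (1 - p))
        theta = float(np.clip(theta - grad / hess, -4.0, 4.0))
    return theta

administered, theta_hat = [], 0.0
for _ in range(max_len):
    available = [i for i in range(n_items) if i not in administered]
    p = 1 / (1 + np.exp(-(theta_hat - b[available])))
    info = p * (1 - p)                            # Rasch item information at theta_hat
    administered.append(available[int(np.argmax(info))])
    theta_hat = mle_theta(administered, responses[administered])

print(f"Post-hoc CAT estimate after {max_len} items: {theta_hat:.2f} (true theta {true_theta})")
```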

  3. Post-hoc analysis of randomised, placebo-controlled, double-blind study (MCI186-19) of edaravone (MCI-186) in amyotrophic lateral sclerosis.

    Science.gov (United States)

    Takei, Koji; Takahashi, Fumihiro; Liu, Shawn; Tsuda, Kikumi; Palumbo, Joseph

    2017-10-01

    Post-hoc analyses of the ALS Functional Rating Scale-Revised (ALSFRS-R) score data, the primary endpoint in the 24-week double-blind placebo-controlled study of edaravone (MCI186-19, NCT01492686), were performed to confirm statistical robustness of the result. The previously reported original analysis had used a last observation carried forward (LOCF) method and also excluded patients with fewer than three completed treatment cycles. The post-hoc sensitivity analyses used different statistical methods as follows: 1) including all patients regardless of treatment cycles received (ALL LOCF); 2) a mixed model for repeated measurements (MMRM) analysis; and 3) the Combined Assessment of Function and Survival (CAFS) endpoint. Findings were consistent with the original primary analysis in showing superiority of edaravone over placebo. We also investigated the distribution of change in ALSFRS-R total score across all patients in the study as well as which ALSFRS-R items and domains may have contributed to the overall efficacy findings. The distribution of changes in ALSFRS-R total score from baseline to the end of cycle 6 (ALL LOCF) shifted in favour of edaravone compared to placebo. Edaravone was descriptively favoured for each ALSFRS-R item and each of the four ALSFRS-R domains at the end of cycle 6 (ALL LOCF), suggesting a generalised effect of edaravone in slowing functional decline across all anatomical regions. The effect of edaravone appeared to be similar in patients with bulbar onset and limb onset. Together, these observations would be consistent with its putative neuroprotective effects against the development of oxidative damage unspecific to anatomical regions.

  4. Post hoc support vector machine learning for impedimetric biosensors based on weak protein-ligand interactions.

    Science.gov (United States)

    Rong, Y; Padron, A V; Hagerty, K J; Nelson, N; Chi, S; Keyhani, N O; Katz, J; Datta, S P A; Gomes, C; McLamore, E S

    2018-04-30

    Impedimetric biosensors for measuring small molecules based on weak/transient interactions between bioreceptors and target analytes are a challenge for detection electronics, particularly in field studies or in the analysis of complex matrices. Protein-ligand binding sensors have enormous potential for biosensing, but achieving accuracy in complex solutions is a major challenge. There is a need for simple post hoc analytical tools that are not computationally expensive, yet provide near real time feedback on data derived from impedance spectra. Here, we show the use of a simple, open source support vector machine learning algorithm for analyzing impedimetric data in lieu of using equivalent circuit analysis. We demonstrate two different protein-based biosensors to show that the tool can be used for various applications. We conclude with a mobile phone-based demonstration focused on the measurement of acetone, an important biomarker related to the onset of diabetic ketoacidosis. In all conditions tested, the open source classifier was capable of performing as well as, or better, than the equivalent circuit analysis for characterizing weak/transient interactions between a model ligand (acetone) and a small chemosensory protein derived from the tsetse fly. In addition, the tool has a low computational requirement, facilitating use for mobile acquisition systems such as mobile phones. The protocol is deployed through Jupyter notebook (an open source computing environment available for mobile phone, tablet or computer use) and the code was written in Python. For each of the applications, we provide step-by-step instructions in English, Spanish, Mandarin and Portuguese to facilitate widespread use. All codes were based on scikit-learn, an open source software machine learning library in the Python language, and were processed in Jupyter notebook, an open-source web application for Python. The tool can easily be integrated with the mobile biosensor equipment for rapid
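
    The classification step described (an open-source SVM applied to impedance data in lieu of equivalent circuit fitting) can be outlined with scikit-learn, which the abstract names as the underlying library. The features below are synthetic stand-ins for impedance measurements at several frequencies, not the authors' sensor data.

```python
# Minimal sketch: an SVM classifier on features derived from impedance spectra.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(7)
n_samples, n_freqs = 300, 12
acetone_present = rng.integers(0, 2, n_samples)
# Impedance magnitude at n_freqs frequencies; analyte binding shifts the spectrum slightly
spectra = (rng.normal(1000, 50, (n_samples, n_freqs))
           - 30 * acetone_present[:, None] * np.linspace(0.2, 1.0, n_freqs))

X_train, X_test, y_train, y_test = train_test_split(
    spectra, acetone_present, test_size=0.3, random_state=0)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(f"Held-out classification accuracy: {clf.score(X_test, y_test):.2f}")
```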

  5. A-priori and post-hoc segmentation in the design of healthy eating campaigns

    DEFF Research Database (Denmark)

    Kazbare, Laura; van Trijp, Hans C. M.; Eskildsen, Jacob Kjær

    2010-01-01

    . Although such practice may be justifiable from the practical point of view, it is unclear how effective these implicit segmentations are. In this study the authors argue that  it is important to transcend demographic boundaries and to further segment demographic groups. A study with 13-15-year-old...... and a post-hoc segmentation. The results of the study show that it is useful and also ethical to differentiate people using segmentation methods, since it facilitates reaching more vulnerable segments of society that in general resist change. It also demonstrates that post-hoc segmentation is more helpful...

  6. Correctness-by-construction and post-hoc verification : a marriage of convenience?

    NARCIS (Netherlands)

    Watson, B.W.; Kourie, D.G.; Schaefer, I.; Cleophas, L.G.W.A.; Margaria, T.; Steffen, B.

    2016-01-01

    Correctness-by-construction (CbC), traditionally based on weakest precondition semantics, and post-hoc verification (PhV) aspire to ensure functional correctness. We argue for a lightweight approach to CbC where lack of formal rigour increases productivity. In order to mitigate the risk of

  7. Impulse control disorder related behaviours during long-term rotigotine treatment: a post hoc analysis.

    Science.gov (United States)

    Antonini, A; Chaudhuri, K R; Boroojerdi, B; Asgharnejad, M; Bauer, L; Grieger, F; Weintraub, D

    2016-10-01

    Dopamine agonists in Parkinson's disease (PD) are associated with impulse control disorders (ICDs) and other compulsive behaviours (together called ICD behaviours). The frequency of ICD behaviours reported as adverse events (AEs) in long-term studies of rotigotine transdermal patch in PD was evaluated. This was a post hoc analysis of six open-label extension studies up to 6 years in duration. Analyses included patients treated with rotigotine for at least 6 months and administered the modified Minnesota Impulse Disorders Interview. ICD behaviours reported as AEs were identified and categorized. For 786 patients, the mean (±SD) exposure to rotigotine was 49.4 ± 17.6 months. 71 (9.0%) patients reported 106 ICD AEs cumulatively. Occurrence was similar across categories: 2.5% patients reported 'compulsive sexual behaviour', 2.3% 'buying disorder', 2.0% 'compulsive gambling', 1.7% 'compulsive eating' and 1.7% 'punding behaviour'. Examining at 6-month intervals, the incidence was relatively low during the first 30 months; it was higher over the next 30 months, peaking in the 54-60-month period. No ICD AEs were serious, and 97% were mild or moderate in intensity. Study discontinuation occurred in seven (9.9%) patients with ICD AEs; these then resolved in five patients. Dose reduction occurred for 23 AEs, with the majority (73.9%) resolving. In this analysis of >750 patients with PD treated with rotigotine, the frequency of ICD behaviour AEs was 9.0%, with a specific incidence timeline observed. Active surveillance as duration of treatment increases may help early identification and management; once ICD behaviours are present rotigotine dose reduction may be considered. © 2016 The Authors. European Journal of Neurology published by John Wiley & Sons Ltd on behalf of European Academy of Neurology.

  8. Regressive research: The pitfalls of post hoc data selection in the study of unconscious mental processes.

    Science.gov (United States)

    Shanks, David R

    2017-06-01

    Many studies of unconscious processing involve comparing a performance measure (e.g., some assessment of perception or memory) with an awareness measure (such as a verbal report or a forced-choice response) taken either concurrently or separately. Unconscious processing is inferred when above-chance performance is combined with null awareness. Often, however, aggregate awareness is better than chance, and data analysis therefore employs a form of extreme group analysis focusing post hoc on participants, trials, or items where awareness is absent or at chance. The pitfalls of this analytic approach are described with particular reference to recent research on implicit learning and subliminal perception. Because of regression to the mean, the approach can mislead researchers into erroneous conclusions concerning unconscious influences on behavior. Recommendations are made about future use of post hoc selection in research on unconscious cognition.
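
    The regression-to-the-mean pitfall described here is easy to reproduce numerically: when awareness and performance are noisy measures of the same latent influence, participants selected post hoc for "null" measured awareness still show above-chance mean performance. The simulation below is a compact illustration with assumed effect sizes, not a reanalysis of any study.

```python
# Compact simulation of regression to the mean under post hoc "unaware" selection.
import numpy as np

rng = np.random.default_rng(8)
n = 10000
true_knowledge = rng.normal(0.10, 0.05, n)             # single latent influence
awareness = true_knowledge + rng.normal(0, 0.10, n)    # noisy awareness measure
performance = true_knowledge + rng.normal(0, 0.10, n)  # noisy performance measure

null_awareness = awareness <= 0                        # post hoc "unaware" subgroup
print(f"Mean awareness in selected subgroup:  {awareness[null_awareness].mean():.3f}")
print(f"Mean performance in selected subgroup: {performance[null_awareness].mean():.3f} "
      "(above chance despite 'null' awareness)")
```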

  9. Level of literacy and dementia: A secondary post-hoc analysis from North-West India

    Directory of Open Access Journals (Sweden)

    Sunil Kumar Raina

    2014-01-01

    Full Text Available Introduction: A relation between literacy and dementia has been studied in past and an association has been documented. This is in spite of some studies pointing to the contrary. The current study was aimed at investigating the influence of level of literacy on dementia in a sample stratified by geography (Migrant, Urban, Rural and Tribal areas of sub-Himalayan state of Himachal Pradesh, India. Materials and Methods: The study was based on post-hoc analysis of data obtained from a study conducted on elderly population (60 years and above from selected geographical areas (Migrant, Urban, Rural and Tribal of Himachal Pradesh state in North-west India. Results: Analysis of variance revealed an effect of education on cognitive scores [F = 2.823, P =0.01], however, post-hoc Tukey′s HSD test did not reveal any significant pairwise comparisons. Discussion: The possibility that education effects dementia needs further evaluation, more so in Indian context.

  10. Outcome when adrenaline (epinephrine) was actually given vs. not given - post hoc analysis of a randomized clinical trial.

    Science.gov (United States)

    Olasveengen, Theresa M; Wik, Lars; Sunde, Kjetil; Steen, Petter A

    2012-03-01

    IV line insertion and drugs did not affect long-term survival in an out-of-hospital cardiac arrest (OHCA) randomized clinical trial (RCT). In a previous large registry study adrenaline was negatively associated with survival from OHCA. The present post hoc analysis on the RCT data compares outcomes for patients actually receiving adrenaline to those not receiving adrenaline. Patients from a RCT performed May 2003 to April 2008 were included. Three patients from the original intention-to-treat analysis were excluded due to insufficient documentation of adrenaline administration. Quality of cardiopulmonary resuscitation (CPR) and clinical outcomes were compared. Clinical characteristics were similar and CPR quality comparable and within guideline recommendations for 367 patients receiving adrenaline and 481 patients not receiving adrenaline. Odds ratio (OR) for being admitted to hospital, being discharged from hospital and surviving with favourable neurological outcome for the adrenaline vs. no-adrenaline group was 2.5 (CI 1.9, 3.4), 0.5 (CI 0.3, 0.8) and 0.4 (CI 0.2, 0.7), respectively. Ventricular fibrillation, response interval, witnessed arrest, gender, age and endotracheal intubation were confounders in multivariate logistic regression analysis. OR for survival for adrenaline vs. no-adrenaline adjusted for confounders was 0.52 (95% CI: 0.29, 0.92). Receiving adrenaline was associated with improved short-term survival, but decreased survival to hospital discharge and survival with favourable neurological outcome after OHCA. This post hoc survival analysis is in contrast to the previous intention-to-treat analysis of the same data, but agrees with previous non-randomized registry data. This shows limitations of non-randomized or non-intention-to-treat analyses. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
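
    The adjustment step reported above (an odds ratio for survival with adrenaline after controlling for confounders) corresponds to a multivariable logistic regression. The sketch below shows that calculation on simulated data; the variables, coefficients, and sample are assumptions, not the trial's records.

```python
# Hedged sketch: adjusted odds ratio from a multivariable logistic regression.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(9)
n = 848
df = pd.DataFrame({
    "adrenaline": rng.integers(0, 2, n),
    "vf_rhythm": rng.integers(0, 2, n),   # ventricular fibrillation as initial rhythm
    "witnessed": rng.integers(0, 2, n),
    "age": rng.normal(65, 12, n),
})
logit = (-2 + 1.5 * df["vf_rhythm"] + 0.7 * df["witnessed"]
         - 0.02 * (df["age"] - 65) - 0.6 * df["adrenaline"])
df["survived"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["adrenaline", "vf_rhythm", "witnessed", "age"]])
fit = sm.Logit(df["survived"], X).fit(disp=False)
or_adrenaline = np.exp(fit.params["adrenaline"])
ci_low, ci_high = np.exp(fit.conf_int().loc["adrenaline"])
print(f"Adjusted OR for survival with adrenaline: {or_adrenaline:.2f} "
      f"(95% CI {ci_low:.2f}-{ci_high:.2f})")
```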

  11. Onychomycosis of Toenails and Post-hoc Analyses with Efinaconazole 10% Solution Once-daily Treatment

    Science.gov (United States)

    2016-01-01

    Topical treatment for toenail onychomycosis has been fraught with a long-standing reputation of poor efficaey, primarily due to physical properties of the nail unit that impede drug penetration. Newer topical agents have been formulated as Solution, which appear to provide better therapeutic response in properly selected patients. It is important to recognize the impact the effects that mitigating and concomitant factors can have on efficaey. These factors include disease severity, gender, presence of tinea pedis, and diabetes. This article reviews results achieved in Phase 3 pivotal studies with topical efinaconazole 10% Solution applied once daily for 48 weeks with a focus on how the aforementioned factors influenced therapeutic outcomes. It is important for clinicians treating patients for onychomycosis to evaluate severity, treat concomitant tinea pedis, address control of diabetes if present by encouraging involvement of the patient’s primary care physician, and consider longer treatment courses when clinically relevant. PMID:27047631

  12. Quantitative Research Methods in Chaos and Complexity: From Probability to Post Hoc Regression Analyses

    Science.gov (United States)

    Gilstrap, Donald L.

    2013-01-01

    In addition to qualitative methods presented in chaos and complexity theories in educational research, this article addresses quantitative methods that may show potential for future research studies. Although much in the social and behavioral sciences literature has focused on computer simulations, this article explores current chaos and…

  13. Post-hoc simulation study to adopt a computerized adaptive testing (CAT) for a Korean Medical License Examination.

    Science.gov (United States)

    Seo, Dong Gi; Choi, Jeongwook

    2018-05-17

    Computerized adaptive testing (CAT) has been adopted in licensing examinations because of its test efficiency and accuracy. Much research on CAT has been published demonstrating the efficiency and accuracy of measurement. This simulation study investigated scoring methods and item selection methods for implementing CAT in the Korean Medical License Examination (KMLE). The study used a post-hoc (real data) simulation design. The item bank comprised all items from the 2017 KMLE. All CAT algorithms for this study were implemented with the 'catR' package in R. In terms of accuracy, the Rasch and 2-parameter logistic (PL) models performed better than the 3PL model. Modal a Posteriori (MAP) and Expected a Posteriori (EAP) estimation provided more accurate estimates than MLE and WLE. Furthermore, maximum posterior weighted information (MPWI) and minimum expected posterior variance (MEPV) performed better than other item selection methods. In terms of efficiency, the Rasch model was recommended to reduce test length. A simulation study should be performed under varied test conditions before adopting a live CAT; based on such a study, specific scoring and item selection methods should be predetermined before implementing a live CAT.
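
    The scoring comparison above contrasts Bayesian estimators (MAP, EAP) with likelihood-based ones (MLE, WLE). The sketch below shows EAP by simple numerical quadrature next to a grid MLE for one response pattern under a 2PL model; the item parameters and responses are invented for illustration, not KMLE items, and this is not the catR implementation.

```python
# Small sketch: EAP (posterior mean over a normal prior) vs. MLE for one examinee.
import numpy as np

a = np.array([1.2, 0.8, 1.5, 1.0, 0.9, 1.3])    # 2PL discriminations
b = np.array([-1.0, -0.3, 0.2, 0.5, 1.1, 1.6])  # 2PL difficulties
responses = np.array([1, 1, 1, 0, 0, 0])

def likelihood(theta):
    p = 1 / (1 + np.exp(-a * (theta - b)))
    return np.prod(np.where(responses == 1, p, 1 - p))

grid = np.linspace(-4, 4, 161)
prior = np.exp(-grid**2 / 2)                    # standard normal prior (unnormalized)
post = prior * np.array([likelihood(t) for t in grid])
eap = np.sum(grid * post) / np.sum(post)        # posterior mean

mle = grid[np.argmax([likelihood(t) for t in grid])]  # likelihood alone, no shrinkage
print(f"EAP estimate {eap:.2f} vs. MLE estimate {mle:.2f}")
```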

  14. Assessment of direct analgesic effect of duloxetine for chronic low back pain: post hoc path analysis of double-blind, placebo-controlled studies

    Directory of Open Access Journals (Sweden)

    Enomoto H

    2017-06-01

    Full Text Available Hiroyuki Enomoto,1 Shinji Fujikoshi,2 Jumpei Funai,3 Nao Sasaki,4 Michael H Ossipov,5 Toshinaga Tsuji,6 Levent Alev,7 Takahiro Ushida8 1Medical Science, Eli Lilly Japan K.K., Tokyo, 2Statistical Science, 3Science Communications, 4Medical Science, Eli Lilly Japan K.K., Kobe, Japan; 5Clinical Division, inVentiv Health, LLC, Blue Bell, PA, USA; 6Medical Affairs Department, Shionogi & Co., Ltd., Osaka, Japan; 7Medical Department, Lilly Turkey, Istanbul, Turkey; 8Multidisciplinary Pain Center, Aichi Medical University, Nagakute, Aichi, Japan Background: Comorbid depression and depressive symptoms are common in patients with chronic low back pain (CLBP). Duloxetine is clinically effective in major depressive disorder and several chronic pain states, including CLBP. The objective of this post hoc meta-analysis was to assess direct and indirect analgesic efficacy of duloxetine for patients with CLBP in previous clinical trials. Methods: Post hoc path analyses were conducted of 3 randomized, double-blind, clinical studies of patients receiving duloxetine or placebo for CLBP. The primary outcome measure for pain was the Brief Pain Inventory, average pain score. A secondary outcome measure, the Beck Depression Inventory-II, was used for depressive symptoms. The changes in score from baseline to endpoint were determined for each index. Path analyses were employed to calculate the proportion of analgesia that may be attributed to a direct effect of duloxetine on pain. Results: A total of 851 patients (400 duloxetine and 451 placebo) were included in this analysis. Duloxetine significantly improved pain scores compared with placebo (p<0.001). It also significantly improved depressive scores compared with placebo (p=0.015). Path analyses showed that 91.1% of the analgesic effect of duloxetine could be attributed to a direct analgesic effect, and 8.9% to its antidepressant effect. Similar results were obtained when data were evaluated at weeks 4 and 7, and when
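
    The decomposition of a treatment effect into direct and indirect (mediated) components, as in the path analysis above, can be illustrated with a product-of-coefficients mediation model. The sketch below uses simulated data with assumed effect sizes and variable names; it is not the published path model.

```python
# Hedged illustration: direct vs. indirect (via depressive symptoms) effect on pain.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(10)
n = 851
df = pd.DataFrame({"duloxetine": rng.integers(0, 2, n)})
# Simulated changes: a small antidepressant effect and a mostly direct analgesic effect
df["bdi_change"] = -1.0 * df["duloxetine"] + rng.normal(0, 4, n)
df["bpi_change"] = -0.9 * df["duloxetine"] + 0.05 * df["bdi_change"] + rng.normal(0, 1.5, n)

mediator = smf.ols("bdi_change ~ duloxetine", data=df).fit()
outcome = smf.ols("bpi_change ~ duloxetine + bdi_change", data=df).fit()

direct = outcome.params["duloxetine"]
indirect = mediator.params["duloxetine"] * outcome.params["bdi_change"]
total = direct + indirect
print(f"Direct effect {direct:.2f}, indirect via depression {indirect:.2f}, "
      f"direct share {direct / total:.0%}")
```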

  15. Residual symptoms and functioning in depression, does the type of residual symptom matter? A post-hoc analysis

    Directory of Open Access Journals (Sweden)

    Romera Irene

    2013-02-01

    Full Text Available Abstract Background The degree to which residual symptoms in major depressive disorder (MDD) adversely affect patient functioning is not known. This post-hoc analysis explored the association between different residual symptoms and patient functioning. Methods Patients with MDD who responded (≥50% on the 17-item Hamilton Rating Scale for Depression; HAMD-17) after 3 months of treatment (624/930) were included. Residual core mood-symptoms (HAMD-17 core symptom subscale ≥1), residual insomnia-symptoms (HAMD-17 sleep subscale ≥1), residual anxiety-symptoms (HAMD-17 anxiety subscale ≥1), residual somatic-symptoms (HAMD-17 Item 13 ≥1), pain (Visual Analogue Scale ≥30), and functioning were assessed after 3 months treatment. A stepwise logistic regression model with normal functioning (Social and Occupational Functioning Assessment Scale ≥80) as the dependent variable was used. Results After 3 months, 59.5% of patients (371/624) achieved normal functioning and 66.0% (412/624) were in remission. Residual symptom prevalence was: core mood symptoms 72%; insomnia 63%; anxiety 78%; and somatic symptoms 41%. Pain was reported in 18%. Factors associated with normal functioning were absence of core mood symptoms (odds ratio [OR] 8.7; 95% confidence interval [CI], 4.6–16.7), absence of insomnia symptoms (OR 1.8; 95% CI, 1.2–2.7), episode length (4–24 weeks vs. ≥24 weeks [OR 2.0; 95% CI, 1.1–3.6]) and better baseline functioning (OR 1.0; 95% CI, 1.0–1.1). A significant interaction between residual anxiety symptoms and pain was found (p = 0.0080). Conclusions Different residual symptoms are associated to different degrees with patient functioning. To achieve normal functioning, specific residual symptom domains might be targeted for treatment.

  16. Atomoxetine treatment outcomes in adolescents and young adults with attention-deficit/hyperactivity disorder: results from a post hoc, pooled analysis.

    Science.gov (United States)

    Adler, Lenard A; Wilens, Timothy; Zhang, Shuyu; Dittmann, Ralf W; D'Souza, Deborah N; Schuh, Leslie; Durell, Todd M

    2012-02-01

    Many children with attention-deficit/hyperactivity disorder (ADHD) continue to experience this disorder as adults, which may, in part, be due to the discontinuity of health care that often occurs during the transition period between late adolescence and young adulthood. Although atomoxetine is reported to be efficacious in both adolescents and young adults, no longitudinal studies have been designed to assess directly the effects of atomoxetine treatment during this transition period. As a first step, we present the results of a post hoc, pooled analysis that compared the efficacy and safety profile of atomoxetine in these 2 patient populations. The aim of the present study was to assess the efficacy and safety profile of atomoxetine treatment in adolescents and young adults with ADHD. A post hoc, pooled analysis was conducted by combining data from 6 double-blind trials (6-9 weeks in duration) that studied adolescents (12-17 years of age; atomoxetine, n = 154; placebo, n = 88; mean final dose = 1.38 mg/kg) and 3 trials (10 weeks in duration) that studied young adults (18-30 years of age; atomoxetine, n = 117; placebo, n = 125; mean final dose = 1.21 mg/kg). Efficacy measures used in these analyses were ADHD Rating Scale (ADHDRS) for adolescents, Conners' Adult ADHD Rating Scale (CAARS) for young adults, and Clinical Global Impressions-ADHD-Severity (CGI-ADHD-S) for both age groups. Treatment response was defined as ≥30% reduction from baseline in total ADHD symptom score. In adolescents (mean age, 13.4 years), atomoxetine improved ADHD significantly compared with placebo (ADHDRS total score change, -12.9 vs -7.5; P young adults (mean age, 24.7 years), atomoxetine improved ADHD significantly (CAARS total score change, -13.6 vs -7.7; P young adults (13.7% vs 4.8%, respectively; P = 0.024); in adolescents no statistically significant differences were observed in frequency of nausea between atomoxetine and placebo treatment (4.5% vs 10.2%, respectively; P = 0

  17. Protective effect of yerba mate intake on the cardiovascular system: a post hoc analysis study in postmenopausal women

    OpenAIRE

    da Veiga, D.T.A.; Bringhenti, R.; Copes, R.; Tatsch, E.; Moresco, R.N.; Comim, F.V.; Premaor, M.O.

    2018-01-01

The prevalence of cardiovascular and metabolic diseases is increased in postmenopausal women, which contributes to the burden of illnesses in this period of life. Yerba mate (Ilex paraguariensis) is a native bush from Southern South America. Its leaves are rich in phenolic components, which may have antioxidant, vasodilating, hypocholesterolemic, and hypoglycemic properties. This post hoc analysis of the case-control study nested in the Obesity and Bone Fracture Cohort evaluated the consumpt...

  18. Clinical Factors Associated with Dose of Loop Diuretics After Pediatric Cardiac Surgery: Post Hoc Analysis.

    Science.gov (United States)

    Haiberger, Roberta; Favia, Isabella; Romagnoli, Stefano; Cogo, Paola; Ricci, Zaccaria

    2016-06-01

A post hoc analysis of a randomized controlled trial comparing the clinical effects of furosemide and ethacrynic acid was conducted. Infants undergoing cardiac surgery with cardiopulmonary bypass were included in order to explore which clinical factors are associated with diuretic dose in infants with congenital heart disease. Overall, 67 patients with median (interquartile range) age of 48 (13-139) days were enrolled. Median diuretic dose was 0.34 (0.25-0.4) mg/kg/h at the end of postoperative day (POD) 0 and it significantly decreased (p = 0.04) over the following PODs; during this period, the ratio between urine output and diuretic dose increased significantly (p = 0.04). Age (r -0.26, p = 0.02), weight (r -0.28, p = 0.01), cross-clamp time (r 0.27, p = 0.03), administration of ethacrynic acid (OR 0.01, p = 0.03), and, at the end of POD0, creatinine levels (r 0.3, p = 0.009), renal near-infrared spectroscopy saturation (r -0.44, p = 0.008), whole-blood neutrophil gelatinase-associated lipocalin levels (r 0.30, p = 0.01), pH (r -0.26, p = 0.02), urinary volume (r -0.2755, p = 0.03), and fluid balance (r 0.2577, p = 0.0266) showed a significant association with diuretic dose. In multivariable logistic regression, cross-clamp time (OR 1.007, p = 0.04), use of ethacrynic acid (OR 0.2, p = 0.01), and blood pH at the end of POD0 (OR 0.0001, p = 0.03) were independently associated with diuretic dose. Early resistance to continuous infusion of loop diuretics is evident in post-cardiac surgery infants: higher doses are administered to patients with lower urinary output. The variables independently associated with diuretic dose in our population appeared to be cross-clamp time, the administration of ethacrynic acid, and blood pH.

  19. Response to duloxetine in chronic low back pain: exploratory post hoc analysis of a Japanese Phase III randomized study

    Directory of Open Access Journals (Sweden)

    Tsuji T

    2017-09-01

Full Text Available Toshinaga Tsuji,1 Naohiro Itoh,1 Mitsuhiro Ishida,2 Toshimitsu Ochiai,3 Shinichi Konno4 1Medical Affairs Department, 2Clinical Research Development, 3Biostatistics Department, Shionogi & Co. Ltd, Osaka, 4Department of Orthopedic Surgery, Fukushima Medical University, Fukushima, Japan Purpose: Duloxetine is efficacious for chronic low back pain (CLBP). This post hoc analysis of a Japanese randomized, placebo-controlled trial (ClinicalTrials.gov, NCT01855919) assessed whether patients with CLBP with early pain reduction or treatment-related adverse events of special interest (TR-AESIs; nausea, somnolence, constipation) have enhanced responses to duloxetine. Patients and methods: Patients (N = 456) with CLBP for ≥6 months and Brief Pain Inventory (BPI) average pain severity score of ≥4 were randomized (1:1) to duloxetine 60 mg/day or placebo for 14 weeks. Primary outcome was change from baseline in BPI average pain severity score (pain reduction). Subgroup analyses included early pain reduction (≥30%, 10%–30%, or <10% at Week 4) and early TR-AESIs (with or without TR-AESIs by Week 2). Measures included changes from baseline in BPI average pain severity score and BPI Interference scores (quality of life; QOL), and response rate (≥30% or ≥50% pain reduction at Week 14). Results: Patients with ≥30% early pain reduction (n = 108) or early TR-AESIs (n = 50) had significantly greater improvements in pain and QOL than placebo-treated patients (n = 226), whereas patients with 10%–30% (n = 63) or <10% (n = 48) pain reduction did not; patients without early TR-AESIs (n = 180) had significant improvements in pain at Week 14. Response rates (≥30%/≥50% pain reduction) were 94.4%/82.4%, 66.7%/49.2%, and 25.0%/18.8% for patients with ≥30%, 10%–30%, and <10% early pain reduction, respectively, 74.0%/64.0% for patients with early TR-AESIs, 67.2%/54.4% for patients without early TR-AESIs, and 52.2%/39.4% for placebo. Conclusion: Early pain reduction or TR

  20. Structural effects of sprifermin in knee osteoarthritis: a post-hoc analysis on cartilage and non-cartilaginous tissue alterations in a randomized controlled trial.

    Science.gov (United States)

    Roemer, Frank W; Aydemir, Aida; Lohmander, Stefan; Crema, Michel D; Marra, Monica Dias; Muurahainen, Norma; Felson, David T; Eckstein, Felix; Guermazi, Ali

    2016-07-09

A recent publication on the efficacy of sprifermin for knee osteoarthritis (OA), using quantitatively MRI-defined central medial tibio-femoral compartment cartilage thickness as the structural primary endpoint, reported no statistically significant dose response. However, sprifermin was associated with statistically significant, dose-dependent reductions in loss of total and lateral tibio-femoral cartilage thickness. Based on these promising preliminary data, a post-hoc analysis of secondary assessments and endpoints was performed to evaluate potential effects of sprifermin on semi-quantitatively evaluated structural MRI parameters. The aim of the present analysis was to determine the effects of sprifermin on several knee joint tissues over a 12-month period. 1.5 T or 3 T MRIs were acquired at baseline and 12-month follow-up using a standard protocol. MRIs were read according to the Whole-Organ Magnetic Resonance Imaging Score (WORMS) scoring system (in 14 articular subregions) by four musculoskeletal radiologists independently. Analyses focused on semiquantitative changes in multiple MRI-defined structural alterations in the 100 μg subgroup and the matching placebo group. Analyses included a delta-subregional and delta-sum approach for the whole knee and the medial and lateral tibio-femoral (MTFJ, LTFJ) and patello-femoral (PFJ) compartments, taking into account the number of subregions showing no change, improvement, or worsening and changes in the sum of subregional scores. Mann-Whitney-Wilcoxon tests assessed differences between groups. Fifty-seven and 18 patients were included in the treatment and matched placebo subgroups, respectively. Less worsening of cartilage damage was observed from baseline to 12 months in the PFJ (0.02, 95% confidence interval (CI) (-0.04, 0.08) vs. placebo 0.22, 95% CI (-0.05, 0.49), p = 0.046). For bone marrow lesions (BMLs), more improvement was observed from 6 to 12 months for whole knee analyses (-0.14, 95% CI (-0.48, 0.19) vs. placebo 0.44, 95

  1. Are we drawing the right conclusions from randomised placebo-controlled trials? A post-hoc analysis of data from a randomised controlled trial

    Directory of Open Access Journals (Sweden)

    Bone Kerry M

    2009-06-01

Full Text Available Abstract Background Assumptions underlying placebo-controlled trials include that the placebo effect impacts on all study arms equally, and that treatment effects are additional to the placebo effect. However, these assumptions have recently been challenged, and different mechanisms may potentially be operating in the placebo and treatment arms. The objective of the current study was to explore the nature of placebo versus pharmacological effects by comparing predictors of the placebo response with predictors of the treatment response in a randomised, placebo-controlled trial of a phytotherapeutic combination for the treatment of menopausal symptoms. A substantial placebo response was observed, but no significant difference in efficacy was found between the two arms. Methods A post hoc analysis was conducted on data from 93 participants who completed this previously published study. Variables at baseline were investigated as potential predictors of the response on any of the endpoints of flushing, overall menopausal symptoms and depression. Focused tests were conducted using hierarchical linear regression analyses. Based on these findings, analyses were conducted for both groups separately. These findings are discussed in relation to existing literature on placebo effects. Results Distinct differences in predictors were observed between the placebo and active groups. A significant difference was found for study-entry anxiety and Greene Climacteric Scale (GCS) scores on all three endpoints. Attitude to menopause was found to differ significantly between the two groups for GCS scores. Examination of the individual arms found anxiety at study entry to predict placebo response on all three outcome measures individually. In contrast, low anxiety was significantly associated with improvement in the active treatment group. None of the variables found to predict the placebo response was relevant to the treatment arm. Conclusion This study was a post hoc analysis

  2. Subsequent Chemotherapy and Treatment Patterns After Abiraterone Acetate in Patients with Metastatic Castration-resistant Prostate Cancer: Post Hoc Analysis of COU-AA-302.

    Science.gov (United States)

    de Bono, Johann S; Smith, Matthew R; Saad, Fred; Rathkopf, Dana E; Mulders, Peter F A; Small, Eric J; Shore, Neal D; Fizazi, Karim; De Porre, Peter; Kheoh, Thian; Li, Jinhui; Todd, Mary B; Ryan, Charles J; Flaig, Thomas W

    2017-04-01

Treatment patterns for metastatic castration-resistant prostate cancer (mCRPC) have changed substantially in the last few years. In trial COU-AA-302 (chemotherapy-naïve men with mCRPC), abiraterone acetate plus prednisone (AA) significantly improved radiographic progression-free survival and overall survival (OS) when compared to placebo plus prednisone (P). This post hoc analysis investigated clinical responses to docetaxel as first subsequent therapy (FST) among patients who progressed following protocol-specified treatment with AA, and characterized subsequent treatment patterns among older (≥75 yr) and younger (<75 yr) patients. Patients in the AA arm could go on to receive subsequent treatment with one or more agents approved for mCRPC. Efficacy analysis was performed for patients for whom baseline and at least one post-baseline prostate-specific antigen (PSA) values were available. Baseline and at least one post-baseline PSA values were available for 100 AA patients who received docetaxel as FST. While acknowledging the limitations of post hoc analyses, 40% (40/100) of these patients had an unconfirmed ≥50% PSA decline with first subsequent docetaxel therapy, and 27% (27/100) had a confirmed ≥50% PSA decline. The median docetaxel treatment duration among these 100 patients was 4.2 mo. Docetaxel was the most common FST among older and younger patients from each treatment arm. However, 43% (79/185) of older patients who progressed on AA received no subsequent therapy for mCRPC, compared with 17% (60/361) of younger patients. Patients with mCRPC who progress with AA treatment may still derive benefit from subsequent docetaxel therapy. These data support further assessment of treatment patterns following AA treatment for mCRPC, particularly among older patients. ClinicalTrials.gov NCT00887198. Treatment patterns for advanced prostate cancer have changed substantially in the last few years. This additional analysis provides evidence of clinical benefit for subsequent chemotherapy in men with advanced

  3. Once-monthly injection of paliperidone palmitate in patients with recently diagnosed and chronic schizophrenia: a post-hoc comparison of efficacy and safety.

    Science.gov (United States)

    Si, Tianmei; Zhuo, Jianmin; Turkoz, Ibrahim; Mathews, Maju; Tan, Wilson; Feng, Yu

    2017-12-01

The use of long-acting injectable antipsychotics in recently diagnosed schizophrenia remains less explored. We evaluated the efficacy and safety of paliperidone palmitate once-monthly (PP1M) treatment in adult patients with recently diagnosed vs. chronic schizophrenia. These post-hoc analyses included two multicenter studies. Study 1 (NCT01527305) enrolled recently diagnosed (≤5 years) and chronic (>5 years) patients; Study 2 (NCT01051531) enrolled recently diagnosed patients only. Recently diagnosed patients were further sub-grouped into ≤2 years or 2-5 years. The primary efficacy endpoint was the change from baseline in Positive and Negative Syndrome Scale (PANSS) total score. In Study 1, 41.5% of patients had a recent diagnosis (≤2 years: 56.8%; 2-5 years: 43.2%); 58.5% had chronic schizophrenia. In Study 2, 52.8% and 47.2% of patients were grouped into ≤2 years and 2-5 years, respectively. PANSS total score showed significantly greater improvement in patients with recently diagnosed vs. chronic schizophrenia. Similar results were obtained for PANSS responder rate, improvements in PANSS, and CGI-S scores. PP1M was efficacious in both recently diagnosed and chronic schizophrenia, with the benefits being more pronounced in patients with recently diagnosed schizophrenia. This adds to the growing evidence recommending long-acting antipsychotic interventions at early stages of schizophrenia.

  4. A SAS(®) macro implementation of a multiple comparison post hoc test for a Kruskal-Wallis analysis.

    Science.gov (United States)

    Elliott, Alan C; Hynan, Linda S

    2011-04-01

The Kruskal-Wallis (KW) nonparametric analysis of variance is often used instead of a standard one-way ANOVA when data are from a suspected non-normal population. The KW omnibus procedure tests for some differences between groups, but provides no specific post hoc pairwise comparisons. This paper provides a SAS(®) macro implementation of a multiple comparison test based on significant Kruskal-Wallis results from the SAS NPAR1WAY procedure. The implementation is designed for up to 20 groups at a user-specified alpha significance level. A Monte-Carlo simulation compared this nonparametric procedure to commonly used parametric multiple comparison tests. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
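
The macro described above performs pairwise comparisons after a significant Kruskal-Wallis result. A rough Python analogue, assuming a Dunn-type z statistic with Bonferroni adjustment and no tie correction, might look like the sketch below; it is not a port of the published SAS macro.

```python
# Rough Python analogue of the procedure described above: a Kruskal-Wallis
# omnibus test followed by Dunn-type pairwise comparisons with Bonferroni
# adjustment. Simplified sketch (no tie correction), not the published SAS macro.
from itertools import combinations
import numpy as np
from scipy import stats

def kw_with_dunn(groups, alpha=0.05):
    """groups: list of 1-D arrays, one per treatment group."""
    h, p_omnibus = stats.kruskal(*groups)
    print(f"Kruskal-Wallis H = {h:.3f}, p = {p_omnibus:.4f}")
    if p_omnibus >= alpha:
        return  # no post hoc comparisons if the omnibus test is not significant

    pooled = np.concatenate(groups)
    ranks = stats.rankdata(pooled)                 # ranks of all observations
    n_total = len(pooled)
    sizes = [len(g) for g in groups]
    # mean rank per group (pooled ranks are in group order after concatenate)
    mean_ranks, start = [], 0
    for n_i in sizes:
        mean_ranks.append(ranks[start:start + n_i].mean())
        start += n_i

    m = len(groups) * (len(groups) - 1) // 2       # number of pairwise tests
    for i, j in combinations(range(len(groups)), 2):
        se = np.sqrt(n_total * (n_total + 1) / 12.0
                     * (1.0 / sizes[i] + 1.0 / sizes[j]))
        z = (mean_ranks[i] - mean_ranks[j]) / se
        p = min(1.0, 2 * stats.norm.sf(abs(z)) * m)  # Bonferroni-adjusted
        print(f"group {i} vs {j}: z = {z:.2f}, adjusted p = {p:.4f}")

# usage:
# kw_with_dunn([np.array([...]), np.array([...]), np.array([...])])
```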

  5. A review and additional post-hoc analyses of the incidence and impact of constipation observed in darifenacin clinical trials

    Directory of Open Access Journals (Sweden)

    Tack J

    2012-09-01

Full Text Available Jan Tack,1 Jean-Jacques Wyndaele,2 Greg Ligozio,3 Mathias Egermark4 1University of Leuven, Gastroenterology Section, Leuven, 2University of Antwerp, Department of Urology, Antwerp, Belgium; 3Novartis Pharmaceuticals Corporation, NJ, USA; 4Roche Diagnostics Scandinavia AB, Bromma, Sweden and formerly of Novartis Pharma AG, Basel, Switzerland Background: Constipation is a common side effect of antimuscarinic treatment for overactive bladder (OAB). This review evaluates the incidence and impact of constipation on the lives of patients with OAB being treated with darifenacin. Methods: Constipation data from published Phase III and Phase IIIb/IV darifenacin studies were reviewed and analyzed. Over 4000 patients with OAB (aged 18–89 years; ≥80% female) enrolled in nine studies (three Phase III [data from these fixed-dose studies were pooled and provide the primary focus for this review], three Phase IIIb, and three Phase IV). The impact of constipation was assessed by discontinuations, use of concomitant laxatives, patient-reported perception of treatment, and a bowel habit questionnaire. Results: In the pooled Phase III trials, 14.8% (50/337) of patients on darifenacin 7.5 mg/day and 21.3% (71/334) on 15 mg/day experienced constipation compared with 12.6% (28/223) and 6.2% (24/388) with tolterodine and placebo, respectively. In addition, a few patients discontinued treatment due to constipation (0.6% [2/337], 1.2% [4/334], 1.8% [4/223], and 0.3% [1/388] in the darifenacin 7.5 mg/day or 15 mg/day, tolterodine, and placebo groups, respectively) or required concomitant laxatives (3.3% [11/337], 6.6% [22/334], 7.2% [16/223], and 1.5% [6/388] in the darifenacin 7.5 mg/day or 15 mg/day, tolterodine, and placebo groups, respectively). Patient-reported perception of treatment quality was observed to be similar between patients who experienced constipation and those who did not. During the long-term extension study, a bowel habit questionnaire showed only small numerical changes over time in frequency of bowel movements, straining to empty bowels, or number of days with hard stools. Conclusion: While constipation associated with darifenacin was reported in ≤21% of the patient population, it only led to concomitant laxative use in approximately one-third of these patients and a low incidence of treatment discontinuation. These data suggest that constipation did not impact patient perception of treatment quality. Keywords: antimuscarinics, tolerability, overactive bladder

  6. Illustrating, Quantifying, and Correcting for Bias in Post-hoc Analysis of Gene-Based Rare Variant Tests of Association

    Directory of Open Access Journals (Sweden)

    Kelsey E. Grinde

    2017-09-01

Full Text Available To date, gene-based rare variant testing approaches have focused on aggregating information across sets of variants to maximize statistical power in identifying genes showing significant association with diseases. Beyond identifying genes that are associated with diseases, the identification of causal variant(s) in those genes and estimation of their effect is crucial for planning replication studies and characterizing the genetic architecture of the locus. However, we illustrate that straightforward single-marker association statistics can suffer from substantial bias introduced by conditioning on gene-based test significance, due to the phenomenon often referred to as “winner's curse.” We illustrate the ramifications of this bias on variant effect size estimation and variant prioritization/ranking approaches, outline parameters of genetic architecture that affect this bias, and propose a bootstrap resampling method to correct for this bias. We find that our correction method significantly reduces the bias due to winner's curse (average two-fold decrease in bias, p < 2.2 × 10^-6) and, consequently, substantially improves mean squared error and variant prioritization/ranking. The method is particularly helpful in adjustment for winner's curse effects when the initial gene-based test has low power and for relatively more common, non-causal variants. Adjustment for winner's curse is recommended for all post-hoc estimation and ranking of variants after a gene-based test. Further work is necessary to continue seeking ways to reduce bias and improve inference in post-hoc analysis of gene-based tests under a wide variety of genetic architectures.
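
The proposed correction conditions bootstrap replicates on gene-based test significance, estimates the resulting bias in per-variant effect estimates, and subtracts it. The toy sketch below illustrates that idea with a simple burden-style regression test; the test statistic, thresholds, and simulated data are stand-ins and not the authors' implementation.

```python
# Toy sketch of a bootstrap correction for "winner's curse" bias in
# single-variant effect estimates after a significant gene-based (burden-style)
# test. Illustrative only -- the gene-based statistic, thresholds, and data are
# simplified stand-ins, not the authors' method.
import numpy as np
from scipy import stats

def burden_pvalue(G, y):
    """Gene-based test: regress phenotype on the rare-allele burden score."""
    burden = G.sum(axis=1)
    return stats.linregress(burden, y).pvalue

def variant_betas(G, y):
    """Naive per-variant effect estimates (simple regression slopes)."""
    return np.array([stats.linregress(G[:, j], y).slope
                     for j in range(G.shape[1])])

def winners_curse_corrected_betas(G, y, alpha=0.05, n_boot=500, seed=1):
    rng = np.random.default_rng(seed)
    n = len(y)
    beta_hat = variant_betas(G, y)
    kept = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                # resample individuals
        Gb, yb = G[idx], y[idx]
        if burden_pvalue(Gb, yb) < alpha:          # condition on significance,
            kept.append(variant_betas(Gb, yb))     # as in the original analysis
    if not kept:
        return beta_hat                            # cannot estimate the bias
    bias = np.mean(kept, axis=0) - beta_hat        # conditioning-induced bias
    return beta_hat - bias

# usage with simulated data: 2000 people, 10 rare variants, one causal
rng = np.random.default_rng(0)
G = rng.binomial(1, 0.01, size=(2000, 10))
y = 0.8 * G[:, 0] + rng.normal(size=2000)
print(winners_curse_corrected_betas(G, y))
```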

  7. Lenient vs. strict rate control in patients with atrial fibrillation and heart failure: a post-hoc analysis of the RACE II study

    NARCIS (Netherlands)

    Mulder, Bart A.; van Veldhuisen, Dirk J.; Crijns, Harry J. G. M.; Tijssen, Jan G. P.; Hillege, Hans L.; Alings, Marco; Rienstra, Michiel; Groenveld, Hessel F.; van den Berg, Maarten P.; van Gelder, Isabelle C.

    2013-01-01

    It is unknown whether lenient rate control is an acceptable strategy in patients with AF and heart failure. We evaluated differences in outcome in patients with AF and heart failure treated with lenient or strict rate control. This post-hoc analysis of the RACE II trial included patients with an

  8. Influencing Anesthesia Provider Behavior Using Anesthesia Information Management System Data for Near Real-Time Alerts and Post Hoc Reports.

    Science.gov (United States)

    Epstein, Richard H; Dexter, Franklin; Patel, Neil

    2015-09-01

    In this review article, we address issues related to using data from anesthesia information management systems (AIMS) to deliver near real-time alerts via AIMS workstation popups and/or alphanumeric pagers and post hoc reports via e-mail. We focus on reports and alerts for influencing the behavior of anesthesia providers (i.e., anesthesiologists, anesthesia residents, and nurse anesthetists). Multiple studies have shown that anesthesia clinical decision support (CDS) improves adherence to protocols and increases financial performance through facilitation of billing, regulatory, and compliance documentation; however, improved clinical outcomes have not been demonstrated. We inform developers and users of feedback systems about the multitude of concerns to consider during development and implementation of CDS to increase its effectiveness and to mitigate its potentially disruptive aspects. We discuss the timing and modalities used to deliver messages, implications of outlier-only versus individualized feedback, the need to consider possible unintended consequences of such feedback, regulations, sustainability, and portability among systems. We discuss statistical issues related to the appropriate evaluation of CDS efficacy. We provide a systematic review of the published literature (indexed in PubMed) of anesthesia CDS and offer 2 case studies of CDS interventions using AIMS data from our own institution illustrating the salient points. Because of the considerable expense and complexity of maintaining near real-time CDS systems, as compared with providing individual reports via e-mail after the fact, we suggest that if the same goal can be accomplished via delayed reporting versus immediate feedback, the former approach is preferable. Nevertheless, some processes require near real-time alerts to produce the desired improvement. Post hoc e-mail reporting from enterprise-wide electronic health record systems is straightforward and can be accomplished using system

  9. Effect of Concomitant Medications on the Safety and Efficacy of Extended-Release Carbidopa-Levodopa (IPX066) in Patients With Advanced Parkinson Disease: A Post Hoc Analysis.

    Science.gov (United States)

    LeWitt, Peter A; Verhagen Metman, Leo; Rubens, Robert; Khanna, Sarita; Kell, Sherron; Gupta, Suneel

    Extended-release (ER) carbidopa-levodopa (CD-LD) (IPX066/RYTARY/NUMIENT) produces improvements in "off" time, "on" time without troublesome dyskinesia, and Unified Parkinson Disease Rating Scale scores compared with immediate-release (IR) CD-LD or IR CD-LD plus entacapone (CLE). Post hoc analyses of 2 ER CD-LD phase 3 trials evaluated whether the efficacy and safety of ER CD-LD relative to the respective active comparators were altered by concomitant medications (dopaminergic agonists, monoamine oxidase B [MAO-B] inhibitors, or amantadine). ADVANCE-PD (n = 393) assessed safety and efficacy of ER CD-LD versus IR CD-LD. ASCEND-PD (n = 91) evaluated ER CD-LD versus CLE. In both studies, IR- and CLE-experienced patients underwent a 6-week, open-label dose-conversion period to ER CD-LD prior to randomization. For analysis, the randomized population was divided into 3 subgroups: dopaminergic agonists, rasagiline or selegiline, and amantadine. For each subgroup, changes from baseline in PD diary measures ("off" time and "on" time with and without troublesome dyskinesia), Unified Parkinson Disease Rating Scale Parts II + III scores, and adverse events were analyzed, comparing ER CD-LD with the active comparator. Concomitant dopaminergic agonist or MAO-B inhibitor use did not diminish the efficacy (improvement in "off" time and "on" time without troublesome dyskinesia) of ER CD-LD compared with IR CD-LD or CLE, whereas the improvement with concomitant amantadine failed to reach significance. Safety and tolerability were similar among the subgroups, and ER CD-LD did not increase troublesome dyskinesia. For patients on oral LD regimens and taking a dopaminergic agonist, and/or a MAO-B inhibitor, changing from an IR to an ER CD-LD formulation provides approximately an additional hour of "good" on time.

  10. Illustrating, Quantifying, and Correcting for Bias in Post-hoc Analysis of Gene-Based Rare Variant Tests of Association

    Science.gov (United States)

    Grinde, Kelsey E.; Arbet, Jaron; Green, Alden; O'Connell, Michael; Valcarcel, Alessandra; Westra, Jason; Tintle, Nathan

    2017-01-01

To date, gene-based rare variant testing approaches have focused on aggregating information across sets of variants to maximize statistical power in identifying genes showing significant association with diseases. Beyond identifying genes that are associated with diseases, the identification of causal variant(s) in those genes and estimation of their effect is crucial for planning replication studies and characterizing the genetic architecture of the locus. However, we illustrate that straightforward single-marker association statistics can suffer from substantial bias introduced by conditioning on gene-based test significance, due to the phenomenon often referred to as “winner's curse.” We illustrate the ramifications of this bias on variant effect size estimation and variant prioritization/ranking approaches, outline parameters of genetic architecture that affect this bias, and propose a bootstrap resampling method to correct for this bias. We find that our correction method significantly reduces the bias due to winner's curse (average two-fold decrease in bias, p < 2.2 × 10^-6) and, consequently, substantially improves mean squared error and variant prioritization/ranking. The method is particularly helpful in adjustment for winner's curse effects when the initial gene-based test has low power and for relatively more common, non-causal variants. Adjustment for winner's curse is recommended for all post-hoc estimation and ranking of variants after a gene-based test. Further work is necessary to continue seeking ways to reduce bias and improve inference in post-hoc analysis of gene-based tests under a wide variety of genetic architectures. PMID:28959274

  11. Fluoxetine-clonazepam cotherapy for anxious depression: an exploratory, post-hoc analysis of a randomized, double blind study.

    Science.gov (United States)

    Papakostas, George I; Clain, Alisabet; Ameral, Victoria E; Baer, Lee; Brintz, Carrie; Smith, Ward T; Londborg, Peter D; Glaudin, Vincent; Painter, John R; Fava, Maurizio

    2010-01-01

Anxious depression, defined as major depressive disorder (MDD) accompanied by high levels of anxiety, seems to be both common and difficult to treat, with antidepressant monotherapy often yielding modest results. We sought to examine the relative benefits of antidepressant-anxiolytic cotherapy versus antidepressant monotherapy for patients with anxious depression versus those without anxious depression. We conducted a post-hoc analysis of an existing dataset (N=80), from a 3-week, randomized, double-blind trial which demonstrated that cotherapy with fluoxetine and clonazepam resulted in superior efficacy compared with fluoxetine monotherapy in MDD. The present analysis involved examining whether anxious depression status served as a predictor and moderator of symptom improvement. Anxious depression status was not found to predict symptom improvement, or serve as a moderator of clinical improvement to cotherapy versus monotherapy. However, the advantage in remission rates in favor of cotherapy versus monotherapy was, numerically, much larger for patients with anxious depression (32.2%) than it was for patients without anxious MDD (9.7%). The respective number needed to treat statistics for these two differences in response rates were, approximately, one in three for patients with anxious depression versus one in 10 for patients without anxious depression. The efficacy of fluoxetine-clonazepam cotherapy compared with fluoxetine monotherapy was numerically, but not statistically, greater for patients with anxious depression than for those without anxious depression.

  12. Risk reductions for cardiovascular disease with pravastatin treatment by dyslipidemia phenotype: a post hoc analysis of the MEGA Study.

    Science.gov (United States)

    Nishiwaki, Masato; Ikewaki, Katsunori; Ayaori, Makoto; Mizuno, Kyoichi; Ohashi, Yasuo; Ohsuzu, Fumitaka; Ishikawa, Toshitsugu; Nakamura, Haruo

    2013-03-01

The beneficial effect of statins for cardiovascular disease (CVD) prevention has been well established. However, the effectiveness among different phenotypes of dyslipidemia has not been confirmed. We evaluated the effect of pravastatin on the incidence of CVD in relation to the phenotype of dyslipidemia. The MEGA Study evaluated the effect of low-dose pravastatin on primary prevention of CVD in 7832 Japanese patients, who were randomized to diet alone or diet plus pravastatin and followed for more than 5 years. These patients were classified into phenotype IIa (n=5589) and IIb (n=2041) based on the electrophoretic pattern for this post hoc analysis. In the diet group there was no significant difference in the incidence of coronary heart disease (CHD), stroke, CVD, and total mortality between the two phenotypes. Phenotype IIb patients, compared to phenotype IIa, had lower levels of high-density lipoprotein cholesterol (HDL-C) and a significantly higher incidence of CVD in relation to a low HDL-C level. Significant risk reductions were observed for CHD by 38% (p=0.04) and CVD by 31% (p=0.02) in phenotype IIa dyslipidemia but not in phenotype IIb. Pravastatin therapy provided significant risk reductions for CHD and CVD in patients with phenotype IIa dyslipidemia, but not in those with phenotype IIb dyslipidemia. Copyright © 2012 Japanese College of Cardiology. Published by Elsevier Ltd. All rights reserved.

  13. The longitudinal interplay between negative and positive symptom trajectories in patients under antipsychotic treatment: a post hoc analysis of data from a randomized, 1-year pragmatic trial.

    Science.gov (United States)

    Chen, Lei; Johnston, Joseph A; Kinon, Bruce J; Stauffer, Virginia; Succop, Paul; Marques, Tiago R; Ascher-Svanum, Haya

    2013-11-28

    Schizophrenia is a highly heterogeneous disorder with positive and negative symptoms being characteristic manifestations of the disease. While these two symptom domains are usually construed as distinct and orthogonal, little is known about the longitudinal pattern of negative symptoms and their linkage with the positive symptoms. This study assessed the temporal interplay between these two symptom domains and evaluated whether the improvements in these symptoms were inversely correlated or independent with each other. This post hoc analysis used data from a multicenter, randomized, open-label, 1-year pragmatic trial of patients with schizophrenia spectrum disorder who were treated with first- and second-generation antipsychotics in the usual clinical settings. Data from all treatment groups were pooled resulting in 399 patients with complete data on both the negative and positive subscale scores from the Positive and Negative Syndrome Scale (PANSS). Individual-based growth mixture modeling combined with interplay matrix was used to identify the latent trajectory patterns in terms of both the negative and positive symptoms. Pearson correlation coefficients were calculated to examine the relationship between the changes of these two symptom domains within each combined trajectory pattern. We identified four distinct negative symptom trajectories and three positive symptom trajectories. The trajectory matrix formed 11 combined trajectory patterns, which evidenced that negative and positive symptom trajectories moved generally in parallel. Correlation coefficients for changes in negative and positive symptom subscale scores were positive and statistically significant (P negative and positive symptoms (n = 70, 18%), (2) mild and sustained improvement in negative and positive symptoms (n = 237, 59%), and (3) no improvement in either negative or positive symptoms (n = 82, 21%). This study of symptom trajectories over 1 year shows that changes in negative

  14. Relationship between response to aripiprazole once-monthly and paliperidone palmitate on work readiness and functioning in schizophrenia: A post-hoc analysis of the QUALIFY study.

    Directory of Open Access Journals (Sweden)

    Steven G Potkin

Full Text Available Schizophrenia is a chronic disease with negative impact on patients' employment status and quality of life. This post-hoc analysis uses data from the QUALIFY study to elucidate the relationship between work readiness and health-related quality of life and functioning. QUALIFY was a 28-week, randomized study (NCT01795547) comparing the treatment effectiveness of aripiprazole once-monthly 400 mg and paliperidone palmitate once-monthly using the Heinrichs-Carpenter Quality-of-Life Scale as the primary endpoint. Also, patients' capacity to work and work readiness (Yes/No) were assessed with the Work Readiness Questionnaire. We categorized patients, irrespective of treatment, by work readiness at baseline and week 28: No to Yes (n = 41), Yes to Yes (n = 49), or No at week 28 (n = 118). Quality-of-Life Scale total, domains, and item scores were assessed with a mixed model of repeated measures. Patients who shifted from No to Yes in work readiness showed robust improvements on Quality-of-Life Scale total scores, significantly greater than patients not ready to work at week 28 (least squares mean difference: 11.6±2.6, p<0.0001). Scores on the Quality-of-Life Scale instrumental role domain and items therein (occupational role, work functioning, work levels, work satisfaction) significantly improved in patients shifting from No to Yes in work readiness (vs patients No at week 28). Quality-of-Life Scale total scores also significantly predicted work readiness at week 28. Overall, these results highlight a strong association between improvements in health-related quality of life and work readiness, and suggest that increasing patients' capacity to work is an achievable and meaningful goal in the treatment of impaired functioning in schizophrenia.

  15. Relationship of Glucose Variability With Glycated Hemoglobin and Daily Mean Glucose: A Post Hoc Analysis of Data From 5 Phase 3 Studies.

    Science.gov (United States)

    Luo, Junxiang; Qu, Yongming; Zhang, Qianyi; Chang, Annette M; Jacober, Scott J

    2018-03-01

    The association of glucose variability (GV) with other glycemic measures is emerging as a topic of interest. The aim of this analysis is to study the correlation between GV and measures of glycemic control, such as glycated hemoglobin (HbA1c) and daily mean glucose (DMG). Data from 5 phase 3 trials were pooled into 3 analysis groups: type 2 diabetes (T2D) treated with basal insulin only, T2D treated with basal-bolus therapy, and type 1 diabetes (T1D). A generalized boosted model was used post hoc to assess the relationship of the following variables with glycemic control parameters (HbA1c and DMG): within-day GV, between-day GV (calculated using self-monitored blood glucose and fasting blood glucose [FBG]), hypoglycemia rate, and certain baseline characteristics. Within-day GV (calculated using standard deviation [SD]) was found to have a significant influence on endpoints HbA1c and DMG in all 3 patient groups. Between-day GV from FBG (calculated using SD), within-day GV (calculated using coefficient of variation), and hypoglycemia rate were found to significantly influence the endpoint HbA1c in the T2D basal-only group. Lower within-day GV was significantly associated with improvement in DMG and HbA1c. This finding suggests that GV could be a marker in the early phases of new antihyperglycemic therapy development for predicting clinical outcomes in terms of HbA1c and DMG.
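
The generalized boosted model described above ranks how strongly within-day GV, between-day GV, hypoglycemia rate, and baseline characteristics influence the endpoints. A rough scikit-learn analogue using gradient boosting and relative feature importances is sketched below; the data frame and column names are hypothetical placeholders, not the study's dataset or model settings.

```python
# Rough analogue of the generalized boosted model described above, using
# scikit-learn gradient boosting to rank predictor influence on endpoint HbA1c.
# The data frame `df` and its columns are hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

PREDICTORS = ["within_day_gv_sd", "within_day_gv_cv", "between_day_gv_fbg_sd",
              "hypoglycemia_rate", "baseline_hba1c", "age", "bmi",
              "diabetes_duration"]

def rank_gv_predictors(df: pd.DataFrame, endpoint: str = "endpoint_hba1c"):
    """Fit a boosted regression model and return relative variable influence."""
    model = GradientBoostingRegressor(
        n_estimators=500, learning_rate=0.01, max_depth=3, subsample=0.8,
        random_state=0)
    model.fit(df[PREDICTORS], df[endpoint])
    influence = pd.Series(model.feature_importances_, index=PREDICTORS)
    return influence.sort_values(ascending=False) * 100  # percent influence

# usage: print(rank_gv_predictors(patient_df))
```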

  16. Dose-related beneficial and harmful effects of gabapentin in postoperative pain management - post hoc analyses from a systematic review with meta-analyses and trial sequential analyses

    DEFF Research Database (Denmark)

    Fabritius, Maria Louise; Wetterslev, Jørn; Mathiesen, Ole

    2017-01-01

    versus placebo were included. Four different dose intervals were investigated: 0-350, 351-700, 701-1050, and >1050 mg. Primary co-outcomes were 24-hour morphine consumption and serious adverse events (SAEs), with emphasis put on trials with low risk of bias. Results: One hundred and twenty-two randomized...

  17. Post-hoc analysis of MCI186-17, the extension study to MCI186-16, the confirmatory double-blind, parallel-group, placebo-controlled study of edaravone in amyotrophic lateral sclerosis.

    Science.gov (United States)

    Takahashi, Fumihiro; Takei, Koji; Tsuda, Kikumi; Palumbo, Joseph

    2017-10-01

    In the 24-week double-blind study of edaravone in ALS (MCI186-16), edaravone did not show a statistically significant difference versus placebo for the primary efficacy endpoint. For post-hoc analyses, two subpopulations were identified in which edaravone might be expected to show efficacy: the efficacy-expected subpopulation (EESP), defined by scores of ≥2 points on all 12 items of the ALS Functional Rating Scale-Revised (ALSFRS-R) and a percent predicted forced vital capacity (%FVC) ≥80% at baseline; and the definite/probable EESP 2 years (dpEESP2y) subpopulation which, in addition to EESP criteria, had definite or probable ALS diagnosed by El Escorial revised criteria, and disease duration of ≤2 years. In the 36-week extension study of MCI186-16, a 24-week double-blind comparison followed by 12 weeks of open-label edaravone (MCI186-17; NCT00424463), analyses of ALSFRS-R scores of the edaravone-edaravone group and edaravone-placebo group for the full analysis set (FAS) and EESP, as prospectively defined, were reported in a previous article. Here we additionally report results in patients who met dpEESP2y criteria at the baseline of MCI186-16. In the dpEESP2y, the difference in ALSFRS-R changes from 24 to 48 weeks between the edaravone-edaravone and edaravone-placebo groups was 2.79 (p = 0.0719), which was greater than the differences previously reported for the EESP and the FAS. The pattern of adverse events in the dpEESP2y did not show any additional safety findings to those from the earlier prospective study. In conclusion, this post-hoc analysis suggests a potential effect of edaravone between 24 and 48 weeks in patients meeting dpEESP2y criteria at baseline.

  18. Tiotropium and Salmeterol in COPD Patients at Risk of Exacerbations: A Post Hoc Analysis from POET-COPD(®).

    Science.gov (United States)

    Vogelmeier, Claus F; Asijee, Guus M; Kupas, Katrin; Beeh, Kai M

    2015-06-01

    Among patients with chronic obstructive pulmonary disease (COPD), the frequency and severity of past exacerbations potentiates future events. The impact of current therapies on exacerbation frequency and severity in patients with different exacerbation risks is not well known. A post hoc analysis of patients at low (≤1 exacerbation [oral steroids/antibiotics requirement] and no COPD-related hospitalization in the year preceding trial entry) or high (≥2 exacerbations [oral steroids/antibiotics requirement] or ≥1 COPD-related hospitalization[s] in the year preceding trial entry) exacerbation risk, from the Prevention of Exacerbations with Tiotropium in Chronic Obstructive Pulmonary Disease (POET-COPD(®)) database. Compared with salmeterol, tiotropium significantly increased time to first COPD exacerbation (hazard ratio 0.84; 95% confidence interval [CI] 0.76-0.92; p = 0.0002) and reduced the number of COPD exacerbations (rate ratio 0.90; 95% CI 0.81-0.99; p = 0.0383) in patients at high exacerbation risk. With treatment, the risk of remaining in the high-risk exacerbator subgroup was statistically lower with tiotropium versus salmeterol (risk ratio [RR] 0.89; 95% CI 0.80-1.00; p = 0.0478). For low-risk patients, time to first COPD exacerbation and number of COPD exacerbations were numerically lower with tiotropium versus salmeterol. With treatment, the risk of transitioning from a low to a high exacerbation risk was lower with tiotropium versus salmeterol (RR 0.87; 95% CI 0.71-1.07; p = 0.1968). This analysis confirms the higher efficacy of tiotropium versus salmeterol in prolonging time to first COPD exacerbation and reducing number of exacerbations in patients both at low and high exacerbation risk. Boehringer Ingelheim and Pfizer. ClinicalTrials.gov NCT00563381.

  19. Protective effect of yerba mate intake on the cardiovascular system: a post hoc analysis study in postmenopausal women.

    Science.gov (United States)

    da Veiga, D T A; Bringhenti, R; Copes, R; Tatsch, E; Moresco, R N; Comim, F V; Premaor, M O

    2018-01-01

The prevalence of cardiovascular and metabolic diseases is increased in postmenopausal women, which contributes to the burden of illnesses in this period of life. Yerba mate (Ilex paraguariensis) is a native bush from Southern South America. Its leaves are rich in phenolic components, which may have antioxidant, vasodilating, hypocholesterolemic, and hypoglycemic properties. This post hoc analysis of the case-control study nested in the Obesity and Bone Fracture Cohort evaluated the consumption of yerba mate and the prevalence of hypertension, dyslipidemia, and coronary diseases in postmenopausal women. Ninety-five postmenopausal women were included in this analysis. A questionnaire was applied to evaluate the risk factors and diagnosis of cardiovascular diseases and consumption of yerba mate infusion. Student's t-test and chi-square test were used to assess significant differences between groups. The group that consumed more than 1 L/day of mate infusion had significantly fewer diagnoses of coronary disease, dyslipidemia, and hypertension (P<0.049, P<0.048, and P<0.016, respectively). Furthermore, the serum levels of glucose were lower in the group with a higher consumption of yerba mate infusion (P<0.013). The serum levels of total cholesterol, LDL-cholesterol, HDL-cholesterol, and triglycerides were similar between the groups. This pragmatic study points out the benefits of yerba mate consumption for the cardiovascular and metabolic systems. The ingestion of more than 1 L/day of mate infusion was associated with fewer self-reported cardiovascular diseases and lower serum levels of glucose. Longitudinal studies are needed to evaluate the association between yerba mate infusion and reduction of cardiovascular diseases in postmenopausal women.

  20. Radiographic progression with nonrising PSA in metastatic castration-resistant prostate cancer: post hoc analysis of PREVAIL.

    Science.gov (United States)

    Bryce, A H; Alumkal, J J; Armstrong, A; Higano, C S; Iversen, P; Sternberg, C N; Rathkopf, D; Loriot, Y; de Bono, J; Tombal, B; Abhyankar, S; Lin, P; Krivoshik, A; Phung, D; Beer, T M

    2017-06-01

    Advanced prostate cancer is a phenotypically diverse disease that evolves through multiple clinical courses. PSA level is the most widely used parameter for disease monitoring, but it has well-recognized limitations. Unlike in clinical trials, in practice, clinicians may rely on PSA monitoring alone to determine disease status on therapy. This approach has not been adequately tested. Chemotherapy-naive asymptomatic or mildly symptomatic men (n=872) with metastatic castration-resistant prostate cancer (mCRPC) who were treated with the androgen receptor inhibitor enzalutamide in the PREVAIL study were analyzed post hoc for rising versus nonrising PSA (empirically defined as >1.05 vs ⩽1.05 times the PSA level from 3 months earlier) at the time of radiographic progression. Clinical characteristics and disease outcomes were compared between the rising and nonrising PSA groups. Of 265 PREVAIL patients with radiographic progression and evaluable PSA levels on the enzalutamide arm, nearly one-quarter had a nonrising PSA. Median progression-free survival in this cohort was 8.3 months versus 11.1 months in the rising PSA cohort (hazard ratio 1.68; 95% confidence interval 1.26-2.23); overall survival was similar between the two groups, although less than half of patients in either group were still at risk at 24 months. Baseline clinical characteristics of the two groups were similar. Non-rising PSA at radiographic progression is a common phenomenon in mCRPC patients treated with enzalutamide. As restaging in advanced prostate cancer patients is often guided by increases in PSA levels, our results demonstrate that disease progression on enzalutamide can occur without rising PSA levels. Therefore, a disease monitoring strategy that includes imaging not entirely reliant on serial serum PSA measurement may more accurately identify disease progression.

  1. Perampanel with concomitant levetiracetam and topiramate: Post hoc analysis of adverse events related to hostility and aggression.

    Science.gov (United States)

    Chung, Steve; Williams, Betsy; Dobrinsky, Cindy; Patten, Anna; Yang, Haichen; Laurenza, Antonio

    2017-10-01

    In 4 Phase III registration trials (3 in patients with partial seizures, N=1480; 1 in patients with PGTCS, N=163), perampanel administered to patients already receiving 1-3 concomitant antiepileptic drugs (AEDs) demonstrated statistically superior efficacy compared to placebo in reducing seizure frequency. However, use of perampanel in these studies was associated with a risk of psychiatric and behavioral adverse reactions, including aggression, hostility, irritability, anger, and homicidal ideation and threats. The present study is a post hoc analysis of pooled data from these 4 trials to determine if concomitant treatment with levetiracetam and/or topiramate increased the risk of hostility- and aggression-related AEs. Treatment-emergent AEs (TEAEs) were determined using a "Narrow & Broad" search based on the Medical Dictionary for Regulatory Activities (MedDRA) standard MedDRA query (SMQ) for hostility- and aggression-related events. The rate of hostility- and aggression-related TEAEs was observed to be similar among perampanel-treated patients: a) receiving levetiracetam (N=340) compared to those not receiving levetiracetam (N=779); b) receiving topiramate (N=223) compared to those not receiving topiramate (N=896); and c) receiving both levetiracetam and topiramate (N=47) compared to those not receiving levetiracetam and topiramate (N=1072). Severe and serious TEAEs related to hostility and aggression were rare and occurred at a similar rate regardless of concomitant levetiracetam and/or topiramate therapy. Taken together, these results suggest that concomitant treatment with levetiracetam and/or topiramate has no appreciable effect on the occurrence of hostility- or aggression-related TEAEs in patients receiving perampanel. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  2. Post hoc interlaboratory comparison of single particle ICP-MS size measurements of NIST gold nanoparticle reference materials.

    Science.gov (United States)

    Montoro Bustos, Antonio R; Petersen, Elijah J; Possolo, Antonio; Winchester, Michael R

    2015-09-01

    Single particle inductively coupled plasma-mass spectrometry (spICP-MS) is an emerging technique that enables simultaneous measurement of nanoparticle size and number quantification of metal-containing nanoparticles at realistic environmental exposure concentrations. Such measurements are needed to understand the potential environmental and human health risks of nanoparticles. Before spICP-MS can be considered a mature methodology, additional work is needed to standardize this technique including an assessment of the reliability and variability of size distribution measurements and the transferability of the technique among laboratories. This paper presents the first post hoc interlaboratory comparison study of the spICP-MS technique. Measurement results provided by six expert laboratories for two National Institute of Standards and Technology (NIST) gold nanoparticle reference materials (RM 8012 and RM 8013) were employed. The general agreement in particle size between spICP-MS measurements and measurements by six reference techniques demonstrates the reliability of spICP-MS and validates its sizing capability. However, the precision of the spICP-MS measurement was better for the larger 60 nm gold nanoparticles and evaluation of spICP-MS precision indicates substantial variability among laboratories, with lower variability between operators within laboratories. Global particle number concentration and Au mass concentration recovery were quantitative for RM 8013 but significantly lower and with a greater variability for RM 8012. Statistical analysis did not suggest an optimal dwell time, because this parameter did not significantly affect either the measured mean particle size or the ability to count nanoparticles. Finally, the spICP-MS data were often best fit with several single non-Gaussian distributions or mixtures of Gaussian distributions, rather than the more frequently used normal or log-normal distributions.
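
The closing observation, that size data were often better described by mixtures of Gaussians than by a single normal or log-normal distribution, suggests a simple model-comparison workflow. The sketch below fits a log-normal and Gaussian mixtures of increasing order and compares them by information criteria; it is illustrative only and not the statistical analysis performed in the study.

```python
# Illustrative sketch of the model comparison mentioned above: fit a log-normal
# and Gaussian mixtures of increasing order to a particle-size sample and
# compare them by AIC/BIC. Not the study's actual workflow; `sizes_nm` is a
# placeholder array of measured particle diameters (nm).
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

def compare_size_models(sizes_nm, max_components=3):
    x = np.asarray(sizes_nm, dtype=float).reshape(-1, 1)

    # Log-normal fit (location fixed at 0); AIC = 2k - 2 ln L with k = 2
    shape, _, scale = stats.lognorm.fit(x.ravel(), floc=0)
    loglik = stats.lognorm.logpdf(x.ravel(), shape, 0, scale).sum()
    results = {"lognormal": {"AIC": 2 * 2 - 2 * loglik}}

    # Gaussian mixtures with 1..max_components components
    for k in range(1, max_components + 1):
        gm = GaussianMixture(n_components=k, random_state=0).fit(x)
        results[f"gaussian_mixture_{k}"] = {"AIC": gm.aic(x), "BIC": gm.bic(x)}
    return results

# usage with a simulated bimodal sample (e.g., monomers plus agglomerates):
rng = np.random.default_rng(0)
sizes = np.concatenate([rng.normal(27, 2, 900), rng.normal(54, 3, 100)])
for name, crit in compare_size_models(sizes).items():
    print(name, {k: round(v, 1) for k, v in crit.items()})
```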

  3. Evaluation of patient-rated stiffness associated with fibromyalgia: a post-hoc analysis of 4 pooled, randomized clinical trials of duloxetine.

    Science.gov (United States)

    Bennett, Robert; Russell, I Jon; Choy, Ernest; Spaeth, Michael; Mease, Philip; Kajdasz, Daniel; Walker, Daniel; Wang, Fujun; Chappell, Amy

    2012-04-01

    Patients with fibromyalgia (FM) rate stiffness as one of the most troublesome symptoms of the disorder. However, there are few published studies that have focused on better understanding the nature of stiffness in FM. The primary objectives of these analyses were to characterize the distribution of stiffness severity in patients at baseline, evaluate changes in stiffness after 12 weeks of treatment with duloxetine, and determine which outcomes were correlated with stiffness. These were post-hoc analyses of 3-month data from 4 randomized, double-blind, placebo-controlled studies that assessed efficacy of duloxetine in adults with FM. Severity of stiffness was assessed by using the Fibromyalgia Impact Questionnaire (FIQ) on a scale from 0 (no stiffness) to 10 (most severe stiffness). The association between changes in stiffness and other measures was evaluated by using Pearson's correlation coefficient. The FIQ total score and items, the Brief Pain Inventory (BPI-modified short form), the Clinical Global Impression-Severity scale, the Multidimensional Fatigue Inventory, the 17-item Hamilton Depression Rating Scale, the Sheehan Disability Scale, the 36-item Short-Form Health Survey, and the EuroQoL Questionnaire-5 Dimensions were evaluated in the correlation analyses. Stepwise linear regression was used to identify the variables that were most highly predictive of the changes in FIQ stiffness. The analysis included 1332 patients (mean age, 50.2 years; 94.7% female; and 87.8% white). The mean (SD) baseline FIQ stiffness score was 7.7 (2.0), and this score correlated with baseline BPI pain score and FIQ function. Duloxetine significantly improved the FIQ stiffness score compared with placebo (P FIQ pain and interference scores, FIQ nonrefreshing sleep, FIQ anxiety, 36-item Short-Form Health Survey bodily pain, and Sheehan Disability Scale total score. Variables related to severity of pain, pain interfering with daily activities, and physical functioning were predictors

  4. Clinical Outcomes from Androgen Signaling-directed Therapy after Treatment with Abiraterone Acetate and Prednisone in Patients with Metastatic Castration-resistant Prostate Cancer: Post Hoc Analysis of COU-AA-302.

    Science.gov (United States)

    Smith, Matthew R; Saad, Fred; Rathkopf, Dana E; Mulders, Peter F A; de Bono, Johann S; Small, Eric J; Shore, Neal D; Fizazi, Karim; Kheoh, Thian; Li, Jinhui; De Porre, Peter; Todd, Mary B; Yu, Margaret K; Ryan, Charles J

    2017-07-01

    In the COU-AA-302 trial, abiraterone acetate plus prednisone significantly increased overall survival for patients with chemotherapy-naïve metastatic castration-resistant prostate cancer (mCRPC). Limited information exists regarding response to subsequent androgen signaling-directed therapies following abiraterone acetate plus prednisone in patients with mCRPC. We investigated clinical outcomes associated with subsequent abiraterone acetate plus prednisone (55 patients) and enzalutamide (33 patients) in a post hoc analysis of COU-AA-302. Prostate-specific antigen (PSA) response was assessed. Median time to PSA progression was estimated using the Kaplan-Meier method. The PSA response rate (≥50% PSA decline, unconfirmed) was 44% and 67%, respectively. The median time to PSA progression was 3.9 mo (range 2.6-not estimable) for subsequent abiraterone acetate plus prednisone and 2.8 mo (range 1.8-not estimable) for subsequent enzalutamide. The majority of patients (68%) received intervening chemotherapy before subsequent abiraterone acetate plus prednisone or enzalutamide. While acknowledging the limitations of post hoc analyses and high censoring (>75%) in both treatment groups, these results suggest that subsequent therapy with abiraterone acetate plus prednisone or enzalutamide for patients who progressed on abiraterone acetate is associated with limited clinical benefit. This analysis showed limited clinical benefit for subsequent abiraterone acetate plus prednisone or enzalutamide in patients with metastatic castration-resistant prostate cancer following initial treatment with abiraterone acetate plus prednisone. This analysis does not support prioritization of subsequent abiraterone acetate plus prednisone or enzalutamide following initial therapy with abiraterone acetate plus prednisone. Copyright © 2017 European Association of Urology. Published by Elsevier B.V. All rights reserved.

  5. Apolipoprotein A-IV concentrations and clinical outcomes in haemodialysis patients with type 2 diabetes mellitus--a post hoc analysis of the 4D Study.

    Science.gov (United States)

    Kollerits, B; Krane, V; Drechsler, C; Lamina, C; März, W; Ritz, E; Wanner, C; Kronenberg, F

    2012-12-01

    Apolipoprotein A-IV (apoA-IV) is an anti-atherogenic and anti-oxidative plasma glycoprotein involved in reverse cholesterol transport. The aim of this study was to examine the association between apoA-IV and all-cause mortality, cardiovascular endpoints and parameters of protein-energy wasting and nutrition in haemodialysis patients. This post hoc analysis was performed in the German Diabetes Dialysis Study (4D Study) evaluating atorvastatin in 1255 haemodialysis patients with type 2 diabetes mellitus, followed for a median of 4 years. The association between apoA-IV and relevant outcomes was analysed using Cox proportional hazards regression analyses. Body mass index (BMI) was used as a marker of protein-energy wasting. In addition, a definition of extended wasting was applied, combining median values of BMI, serum albumin, creatinine and sensitive C-reactive protein, to classify patients. Mean (±SD) apoA-IV concentration was 49.8 ± 14.2 mg dL(-1). Age- and gender-adjusted apoA-IV concentrations were strongly associated with the presence of congestive heart failure at baseline [odds ratio = 0.81, 95% confidence interval (CI) 0.74-0.88 per 10 mg dL(-1) increase; P 1). During the prospective follow-up, the strongest association was found for all-cause mortality [hazard ratio (HR) = 0.89, 95% CI 0.85-0.95, P = 0.001), which was mainly because of patients with BMI > 23 kg m(-2) (HR = 0.87, 95% CI 0.82-0.94, P 1) and those in the nonwasting group according to the extended definition (HR = 0.89, 95% CI 0.84-0.96, P = 0.001). This association remained significant after additionally adjusting for parameters associated with apoA-IV at baseline. Further associations were observed for sudden cardiac death. ApoA-IV was less strongly associated with atherogenic events such as myocardial infarction. Low apoA-IV levels seem to be a risk predictor of all-cause mortality and sudden cardiac death. This association might be modified by nutritional status. © 2012 The Association

  6. Relationship between intraoperative regional cerebral oxygen saturation trends and cognitive decline after total knee replacement: a post-hoc analysis.

    Science.gov (United States)

    Salazar, Fátima; Doñate, Marta; Boget, Teresa; Bogdanovich, Ana; Basora, Misericordia; Torres, Ferran; Gracia, Isabel; Fàbregas, Neus

    2014-01-01

    Bilateral regional brain oxygen saturation (rSO2) trends, reflecting intraoperative brain oxygen imbalance, could warn of brain dysfunction. Various types of cognitive impairment, such as memory decline, alterations in executive function or subjective complaints, have been described three months after surgery. Our aim was to explore the potential utility of rSO2 values as a warning sign for the development of different types of decline in postoperative psychological function. Observational post-hoc analysis of data for the patient sample (n = 125) of a previously conducted clinical trial in patients over the age of 65 years undergoing total knee replacement under spinal anesthesia. Demographic, hemodynamic and bilateral rSO2 intraoperative values were recorded. Decreases in rSO2 of 20% or >25% below the baseline value were chosen as relevant cutoffs. Composite function test scores were created from baseline to three months for each patient and adjusted for the mean (SD) score changes for a control group (n = 55). Tests were used to assess visual-motor coordination and executive function (VM-EF) (Wechsler Digit Symbol-Coding and Visual Reproduction, Trail Making Test) and memory (Auditory Verbal Learning, Wechsler Memory Scale); scales were used to assess psychological symptoms. We observed no differences in baseline rSO2 values; rSO2 decreased significantly in all patients during surgery. Left and right rSO2 values were asymmetric in patients who had memory decline (mean [SD] left-right ratio of 95.03 [8.51] vs 101.29 [6.7] for patients with no changes, P = 0.0012). The mean right-left difference in rSO2 was also significant in these patients (-2.87% [4.73%], lower on the right, P = 0.0034). Detection of a trend toward asymmetry in rSO2 values can warn of possible postoperative onset of memory decline. Psychological symptoms and memory decline were common three months after knee replacement in our patients over the age of 65 years.

  7. Efficacy of lisdexamfetamine dimesylate in children with attention-deficit/hyperactivity disorder previously treated with methylphenidate: a post hoc analysis

    Directory of Open Access Journals (Sweden)

    Jain Rakesh

    2011-11-01

    Full Text Available Abstract Background Attention-deficit/hyperactivity disorder (ADHD) is a common neurobehavioral psychiatric disorder that afflicts children, with a reported prevalence of 2.4% to 19.8% worldwide. Stimulants (methylphenidate [MPH] and amphetamine) are considered first-line ADHD pharmacotherapy. MPH is a catecholamine reuptake inhibitor, whereas amphetamines have additional presynaptic activity. Although MPH and amphetamine can effectively manage ADHD symptoms in most pediatric patients, many still fail to respond optimally to either. After administration, the prodrug stimulant lisdexamfetamine dimesylate (LDX) is converted to l-lysine and therapeutically active d-amphetamine in the blood. The objective of this study was to evaluate the clinical efficacy of LDX in children with ADHD who remained symptomatic (ie, nonremitters; ADHD Rating Scale IV [ADHD-RS-IV] total score > 18) on MPH therapy prior to enrollment in a 4-week placebo-controlled LDX trial, compared with the overall population. Methods In this post hoc analysis of data from a multicenter, randomized, double-blind, forced-dose titration study, we evaluated the clinical efficacy of LDX in children aged 6-12 years with and without prior MPH treatment at screening. ADHD symptoms were assessed using the ADHD-RS-IV scale, Conners' Parent Rating Scale-Revised short form (CPRS-R), and Clinical Global Impressions-Improvement scale, at screening, baseline, and endpoint. ADHD-RS-IV total and CPRS-R ADHD Index scores were summarized as mean (SD). Clinical response for the subgroup analysis was defined as a ≥ 30% reduction from baseline in ADHD-RS-IV score and a CGI-I score of 1 or 2. Dunnett test was used to compare change from baseline in all groups. Number needed to treat to achieve one clinical responder or one symptomatic remitter was calculated as the reciprocal of the difference in their proportions on active treatment and placebo at endpoint. Results Of 290 randomized participants enrolled, 28
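
    The number-needed-to-treat rule quoted in this abstract (the reciprocal of the difference in responder proportions between active treatment and placebo) can be written out directly; the counts in the example below are placeholders, not the trial's results.

```python
# Number needed to treat, as defined in the abstract:
# NNT = 1 / (responder proportion on active treatment - responder proportion on placebo).
def nnt(responders_active: int, n_active: int, responders_placebo: int, n_placebo: int) -> float:
    risk_difference = responders_active / n_active - responders_placebo / n_placebo
    if risk_difference == 0:
        raise ValueError("No difference in response proportions; NNT is undefined.")
    return 1.0 / risk_difference

# Placeholder counts (not trial results): 70/100 responders vs 30/100 -> NNT = 2.5
print(nnt(70, 100, 30, 100))
```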

  8. Prognostic Significance of Creatinine Increases During an Acute Heart Failure Admission in Patients With and Without Residual Congestion: A Post Hoc Analysis of the PROTECT Data.

    Science.gov (United States)

    Metra, Marco; Cotter, Gad; Senger, Stefanie; Edwards, Christopher; Cleland, John G; Ponikowski, Piotr; Cursack, Guillermo C; Milo, Olga; Teerlink, John R; Givertz, Michael M; O'Connor, Christopher M; Dittrich, Howard C; Bloomfield, Daniel M; Voors, Adriaan A; Davison, Beth A

    2018-05-01

    The importance of a serum creatinine increase, traditionally considered worsening renal function (WRF), during admission for acute heart failure has recently been debated, with data suggesting an interaction between congestion and creatinine changes. In post hoc analyses, we analyzed the association of WRF with length of hospital stay, 30-day death or cardiovascular/renal readmission and 90-day mortality in the PROTECT study (Placebo-Controlled Randomized Study of the Selective A1 Adenosine Receptor Antagonist Rolofylline for Patients Hospitalized With Acute Decompensated Heart Failure and Volume Overload to Assess Treatment Effect on Congestion and Renal Function). Daily creatinine changes from baseline were categorized as WRF (an increase of 0.3 mg/dL or more) or not. Daily congestion scores were computed by summing scores for orthopnea, edema, and jugular venous pressure. Of the 2033 total patients randomized, 1537 patients had both creatinine and congestion assessments available at study day 14. Length of hospital stay was longer and 30-day cardiovascular/renal readmission or death more common in patients with WRF. However, these were driven by significant associations in patients with concomitant congestion at the time of assessment of renal function. The mean difference in length of hospital stay because of WRF was 3.51 (95% confidence interval, 1.29-5.73) more days (P=0.0019), and the hazard ratio for WRF on 30-day death or heart failure hospitalization was 1.49 (95% confidence interval, 1.06-2.09) times higher (P=0.0205), in significantly congested than nonsignificantly congested patients. A similar trend was observed with 90-day mortality although not statistically significant. In patients admitted for acute heart failure, WRF defined as a creatinine increase of ≥0.3 mg/dL was associated with longer length of hospital stay, and worse 30- and 90-day outcomes. However, effects were largely driven by patients who had residual congestion at the time of renal function assessment. URL: https
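
    A minimal sketch of the two working definitions used above, worsening renal function (creatinine increase ≥0.3 mg/dL from baseline) and a congestion score summing orthopnea, edema and jugular venous pressure scores, is shown below. Column names, score ranges and the congestion threshold are assumptions for illustration.

```python
# Illustrative only: flagging WRF (creatinine increase >= 0.3 mg/dL from baseline)
# and summing a congestion score from orthopnea, edema and JVP scores.
import pandas as pd

patients = pd.DataFrame({
    "baseline_creatinine": [1.1, 1.4, 0.9],   # mg/dL
    "day14_creatinine":    [1.5, 1.5, 1.1],   # mg/dL
    "orthopnea_score":     [2, 0, 1],
    "edema_score":         [3, 1, 0],
    "jvp_score":           [2, 0, 1],
})

patients["wrf"] = (patients["day14_creatinine"] - patients["baseline_creatinine"]) >= 0.3
patients["congestion_score"] = (
    patients["orthopnea_score"] + patients["edema_score"] + patients["jvp_score"]
)
patients["congested"] = patients["congestion_score"] > 0   # assumed threshold for residual congestion
print(patients[["wrf", "congestion_score", "congested"]])
```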

  9. Effect of sacubitril/valsartan versus enalapril on glycaemic control in patients with heart failure and diabetes: a post-hoc analysis from the PARADIGM-HF trial.

    Science.gov (United States)

    Seferovic, Jelena P; Claggett, Brian; Seidelmann, Sara B; Seely, Ellen W; Packer, Milton; Zile, Michael R; Rouleau, Jean L; Swedberg, Karl; Lefkowitz, Martin; Shi, Victor C; Desai, Akshay S; McMurray, John J V; Solomon, Scott D

    2017-05-01

    Diabetes is an independent risk factor for heart failure progression. Sacubitril/valsartan, a combination angiotensin receptor-neprilysin inhibitor, improves morbidity and mortality in patients with heart failure with reduced ejection fraction (HFrEF), compared with the angiotensin-converting enzyme inhibitor enalapril, and improves peripheral insulin sensitivity in obese hypertensive patients. We aimed to investigate the effect of sacubitril/valsartan versus enalapril on HbA1c and time to first-time initiation of insulin or oral antihyperglycaemic drugs in patients with diabetes and HFrEF. In a post-hoc analysis of the PARADIGM-HF trial, we included 3778 patients with known diabetes or an HbA1c ≥6·5% at screening out of 8399 patients with HFrEF who were randomly assigned to treatment with sacubitril/valsartan or enalapril. Of these patients, most (98%) had type 2 diabetes. We assessed changes in HbA1c, triglycerides, HDL cholesterol and BMI in a mixed effects longitudinal analysis model. Time to initiation of oral antihyperglycaemic drugs or insulin in subjects previously not treated with these agents was compared between treatment groups. There were no significant differences in HbA1c concentrations between randomised groups at screening. During the first year of follow-up, HbA1c concentrations decreased by 0·16% (SD 1·40) in the enalapril group and 0·26% (SD 1·25) in the sacubitril/valsartan group (between-group reduction 0·13%, 95% CI 0·05-0·22, p=0·0023). HbA1c concentrations were persistently lower in the sacubitril/valsartan group than in the enalapril group over the 3-year follow-up (between-group reduction 0·14%, 95% CI 0·06-0·23, p=0·0055). New use of insulin was 29% lower in patients receiving sacubitril/valsartan (114 [7%] patients) compared with patients receiving enalapril (153 [10%]; hazard ratio 0·71, 95% CI 0·56-0·90, p=0·0052). Similarly, fewer patients were started on oral antihyperglycaemic therapy (0·77, 0·58-1·02
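
    The HbA1c comparison above comes from a mixed-effects longitudinal analysis. The hedged sketch below fits a model of that general form (treatment-by-visit effect with a random intercept per patient) with statsmodels on simulated data; variable names and effect sizes are invented and do not reproduce PARADIGM-HF.

```python
# Illustrative only: mixed-effects model of HbA1c with a treatment-by-visit effect
# and a random intercept per patient. Simulated data, invented effect sizes.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_patients, n_visits = 200, 4
long = pd.DataFrame({
    "patient_id": np.repeat(np.arange(n_patients), n_visits),
    "visit": np.tile(np.arange(n_visits), n_patients),
    "treatment": np.repeat(rng.integers(0, 2, n_patients), n_visits),  # 1 = sacubitril/valsartan
})
long["hba1c"] = (7.2 - 0.05 * long["visit"]
                 - 0.03 * long["treatment"] * long["visit"]
                 + rng.normal(0, 0.5, len(long)))

model = smf.mixedlm("hba1c ~ treatment * visit", long, groups=long["patient_id"])
result = model.fit()
# The treatment:visit coefficient captures the between-group difference in HbA1c over time.
print(result.summary())
```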

  10. Post hoc analysis of plasma amino acid profiles: towards a specific pattern in autism spectrum disorder and intellectual disability.

    Science.gov (United States)

    Delaye, Jean-Baptiste; Patin, Franck; Lagrue, Emmanuelle; Le Tilly, Olivier; Bruno, Clement; Vuillaume, Marie-Laure; Raynaud, Martine; Benz-De Bretagne, Isabelle; Laumonnier, Frederic; Vourc'h, Patrick; Andres, Christian; Blasco, Helene

    2018-01-01

    Objectives Autism spectrum disorders and intellectual disability present a challenge for therapeutic and dietary management. We performed a re-analysis of plasma amino acid chromatography of children with autism spectrum disorders (n = 22) or intellectual disability (n = 29) to search for a metabolic signature that can distinguish individuals with these disorders from controls (n = 30). Methods We performed univariate and multivariate analyses using different machine learning strategies, from the raw data of the amino acid chromatography. Finally, we analysed the metabolic pathways associated with discriminant biomarkers. Results Multivariate analysis revealed models that discriminate patients with autism spectrum disorders or intellectual disability from controls on the basis of plasma amino acid profiles. Autism spectrum disorder and intellectual disability patients shared similar differences relative to controls, including lower glutamate. Pathway analysis for autism spectrum disorders and intellectual disability revealed the involvement of urea, 3-methyl-histidine and histidine metabolism. Biosigner analysis and univariate analysis confirmed the role of 3-methylhistidine (P = 0.004), histidine (P = 0.003), urea (P = 0.0006) and lysine (P = 0.002). Conclusions We revealed discriminant metabolic patterns between autism spectrum disorders, intellectual disability and controls. Amino acids known to play a role in neurotransmission were discriminant in the models comparing autism spectrum disorders or intellectual disability to controls, and histidine and β-alanine metabolism was specifically highlighted in the model.
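
    The record above combines univariate testing with multivariate machine-learning models on amino acid profiles. The sketch below illustrates that two-layer approach on fabricated data, using a Mann-Whitney test per amino acid and a cross-validated logistic regression; the specific tests, amino acids and group sizes are assumptions, not the study's actual pipeline.

```python
# Illustrative only: univariate tests plus a cross-validated multivariate classifier
# on fabricated amino acid profiles (not the study's data or exact methods).
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
amino_acids = ["glutamate", "histidine", "3-methylhistidine", "lysine", "urea"]
X_cases = rng.normal(0.0, 1.0, size=(22, len(amino_acids)))     # e.g. ASD group
X_controls = rng.normal(0.3, 1.0, size=(30, len(amino_acids)))  # control group
X = np.vstack([X_cases, X_controls])
y = np.array([1] * len(X_cases) + [0] * len(X_controls))

# Univariate comparison per amino acid (the choice of test is an assumption here)
for i, name in enumerate(amino_acids):
    _, p = mannwhitneyu(X_cases[:, i], X_controls[:, i])
    print(f"{name}: p = {p:.3f}")

# Multivariate model with 5-fold cross-validation
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
print("Cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```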

  11. Graphical models for genetic analyses

    DEFF Research Database (Denmark)

    Lauritzen, Steffen Lilholt; Sheehan, Nuala A.

    2003-01-01

    This paper introduces graphical models as a natural environment in which to formulate and solve problems in genetics and related areas. Particular emphasis is given to the relationships among various local computation algorithms which have been developed within the hitherto mostly separate areas of graphical models and genetics. The potential of graphical models is explored and illustrated through a number of example applications where the genetic element is substantial or dominating.

  12. An extensible analysable system model

    DEFF Research Database (Denmark)

    Probst, Christian W.; Hansen, Rene Rydhof

    2008-01-01

    , this does not hold for real physical systems. Approaches such as threat modelling try to target the formalisation of the real-world domain, but still are far from the rigid techniques available in security research. Many currently available approaches to assurance of critical infrastructure security...

  13. Improvement in 24-hour bronchodilation and symptom control with aclidinium bromide versus tiotropium and placebo in symptomatic patients with COPD: post hoc analysis of a Phase IIIb study

    Directory of Open Access Journals (Sweden)

    Beier J

    2017-06-01

    Full Text Available Jutta Beier,1 Robert Mroz,2,3 Anne-Marie Kirsten,4 Ferran Chuecos,5 Esther Garcia Gil5 1insaf Respiratory Research Institute, Wiesbaden, Germany; 2Centrum Medycyny Oddechowej, 3Medical University of Białystok, Białystok, Poland; 4Pulmonary Research Institute at LungenClinic Grosshansdorf, Airway Research Center North, German Center for Lung Research, Grosshansdorf, Germany; 5AstraZeneca PLC, Barcelona, Spain Background: A previous Phase IIIb study (NCT01462929) in patients with moderate to severe COPD demonstrated that 6 weeks of treatment with aclidinium led to improvements in 24-hour bronchodilation comparable to those with tiotropium, and improvement of symptoms versus placebo. This post hoc analysis was performed to assess the effect of treatment in the symptomatic patient group participating in the study. Methods: Symptomatic patients (defined as those with Evaluating Respiratory Symptoms [E-RS™] in COPD baseline score ≥10 units) received aclidinium bromide 400 µg twice daily (BID), tiotropium 18 µg once daily (QD), or placebo, for 6 weeks. Lung function, COPD respiratory symptoms, and incidence of adverse events (AEs) were assessed. Results: In all, 277 symptomatic patients were included in this post hoc analysis. Aclidinium and tiotropium treatment improved forced expiratory volume in 1 second (FEV1) from baseline to week 6 at all time points over 24 hours versus placebo. In addition, improvements in FEV1 from baseline during the nighttime period were observed for aclidinium versus tiotropium on day 1 (aclidinium 157 mL, tiotropium 67 mL; P<0.001) and week 6 (aclidinium 153 mL, tiotropium 90 mL; P<0.05). Aclidinium improved trough FEV1 from baseline versus placebo and tiotropium at day 1 (aclidinium 136 mL, tiotropium 68 mL; P<0.05) and week 6 (aclidinium 137 mL, tiotropium 71 mL; P<0.05). Aclidinium also improved early-morning and nighttime symptom severity, limitation of early-morning activities, and E-RS Total and domain scores versus

  14. Morning pulse pressure is associated more strongly with elevated albuminuria than systolic blood pressure in patients with type 2 diabetes mellitus: post hoc analysis of a cross-sectional multicenter study.

    Science.gov (United States)

    Ushigome, Emi; Fukui, Michiaki; Hamaguchi, Masahide; Matsumoto, Shinobu; Mineoka, Yusuke; Nakanishi, Naoko; Senmaru, Takafumi; Yamazaki, Masahiro; Hasegawa, Goji; Nakamura, Naoto

    2013-09-01

    Recently, focus has been directed toward pulse pressure as a potentially independent risk factor for micro- and macrovascular disease. This study was designed to examine the relationship between pulse pressure taken at home and elevated albuminuria in patients with type 2 diabetes. This study is a post hoc analysis of a cross-sectional multicenter study. Home blood pressure measurements were performed for 14 consecutive days in 858 patients with type 2 diabetes. We investigated the relationship between systolic blood pressure or pulse pressure in the morning or in the evening and urinary albumin excretion using univariate and multivariate analyses. Furthermore, we measured area under the receiver-operating characteristic curve (AUC) to compare the ability to identify elevated albuminuria, defined as urinary albumin excretion equal to or more than 30 mg/g creatinine, of systolic blood pressure or pulse pressure. Morning systolic blood pressure (β=0.339) and morning pulse pressure (β=0.378) were both significantly associated with urinary albumin excretion. The AUC for elevated albuminuria was 0.668 (0.632-0.705) for morning systolic blood pressure, and the AUC of morning pulse pressure was significantly greater than that of morning systolic blood pressure (P=0.040). Our findings indicate that morning pulse pressure is associated with elevated albuminuria in patients with type 2 diabetes, which suggests that lowering morning pulse pressure could prevent the development and progression of diabetic nephropathy. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
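
    The AUC comparison described above can be illustrated as follows: compute the ROC AUC of each home blood pressure measure for elevated albuminuria and compare them, here with a simple paired bootstrap (the published analysis may have used a different comparison test). All values are simulated.

```python
# Illustrative only: comparing ROC AUCs of morning pulse pressure and morning SBP
# for elevated albuminuria, with a simple paired bootstrap of the AUC difference.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(4)
n = 858
elevated_albuminuria = rng.integers(0, 2, n)
pulse_pressure = 50 + 10 * elevated_albuminuria + rng.normal(0, 12, n)
systolic_bp = 130 + 8 * elevated_albuminuria + rng.normal(0, 18, n)

auc_pp = roc_auc_score(elevated_albuminuria, pulse_pressure)
auc_sbp = roc_auc_score(elevated_albuminuria, systolic_bp)

diffs = []
for _ in range(2000):
    idx = rng.integers(0, n, n)                       # resample patients with replacement
    if elevated_albuminuria[idx].min() == elevated_albuminuria[idx].max():
        continue                                      # skip resamples containing only one class
    diffs.append(roc_auc_score(elevated_albuminuria[idx], pulse_pressure[idx])
                 - roc_auc_score(elevated_albuminuria[idx], systolic_bp[idx]))

print(f"AUC pulse pressure = {auc_pp:.3f}, AUC systolic BP = {auc_sbp:.3f}")
print("Bootstrap 95% CI for the AUC difference:", np.percentile(diffs, [2.5, 97.5]))
```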

  15. Post-hoc principal component analysis on a largely illiterate elderly population from North-west India to identify important elements of mini-mental state examination

    Directory of Open Access Journals (Sweden)

    Sunil Kumar Raina

    2016-01-01

    Full Text Available Background: Mini-mental state examination (MMSE) scale measures cognition using specific elements that can be isolated, defined, and subsequently measured. This study was conducted with the aim to analyze the factorial structure of MMSE in a largely, illiterate, elderly population in India and to reduce the number of variables to a few meaningful and interpretable combinations. Methodology: Principal component analysis (PCA) was performed post-hoc on the data generated by a research project conducted to estimate the prevalence of dementia in four geographically defined habitations in Himachal Pradesh state of India. Results: Questions on orientation and registration account for high percentage of cumulative variance in comparison to other questions. Discussion: The PCA conducted on the data derived from a largely, illiterate population reveals that the most important components to consider for the estimation of cognitive impairment in illiterate Indian population are temporal orientation, spatial orientation, and immediate memory.

  16. Post-hoc principal component analysis on a largely illiterate elderly population from North-west India to identify important elements of mini-mental state examination.

    Science.gov (United States)

    Raina, Sunil Kumar; Chander, Vishav; Raina, Sujeet; Grover, Ashoo

    2016-01-01

    Mini-mental state examination (MMSE) scale measures cognition using specific elements that can be isolated, defined, and subsequently measured. This study was conducted with the aim to analyze the factorial structure of MMSE in a largely, illiterate, elderly population in India and to reduce the number of variables to a few meaningful and interpretable combinations. Principal component analysis (PCA) was performed post-hoc on the data generated by a research project conducted to estimate the prevalence of dementia in four geographically defined habitations in Himachal Pradesh state of India. Questions on orientation and registration account for high percentage of cumulative variance in comparison to other questions. The PCA conducted on the data derived from a largely, illiterate population reveals that the most important components to consider for the estimation of cognitive impairment in illiterate Indian population are temporal orientation, spatial orientation, and immediate memory.
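
    Both versions of this record describe a principal component analysis over MMSE item scores. The sketch below shows such an analysis on fabricated item-level data with scikit-learn; the item names, score ranges and respondents are placeholders, not the field-survey data.

```python
# Illustrative only: principal component analysis over MMSE-style item scores.
# Item names, score ranges and respondents are placeholders.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
items = ["temporal_orientation", "spatial_orientation", "registration",
         "attention", "recall", "language"]
scores = rng.integers(0, 6, size=(300, len(items))).astype(float)

pca = PCA()
pca.fit(scores)
for i, var in enumerate(pca.explained_variance_ratio_, start=1):
    print(f"Component {i}: {var:.1%} of variance")
# Loadings show which items dominate the leading components.
print(np.round(pca.components_[:2], 2))
```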

  17. Investigating the Reliability and Factor Structure of Kalichman's "Survey 2: Research Misconduct" Questionnaire: A Post Hoc Analysis Among Biomedical Doctoral Students in Scandinavia.

    Science.gov (United States)

    Holm, Søren; Hofmann, Bjørn

    2017-10-01

    A precondition for reducing scientific misconduct is evidence about scientists' attitudes. We need reliable survey instruments, and this study investigates the reliability of Kalichman's "Survey 2: research misconduct" questionnaire. The study is a post hoc analysis of data from three surveys among biomedical doctoral students in Scandinavia (2010-2015). We perform reliability analysis, and exploratory and confirmatory factor analysis using a split-sample design as a partial validation. The results indicate that a reliable 13-item scale can be formed (Cronbach's α = .705), and factor analysis indicates that there are four reliable subscales each tapping a different construct: (a) general attitude to misconduct (α = .768), (b) attitude to personal misconduct (α = .784), (c) attitude to whistleblowing (α = .841), and (d) attitude to blameworthiness/punishment (α = .877). A full validation of the questionnaire requires further research. We, nevertheless, hope that the results will facilitate the increased use of the questionnaire in research.
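
    The reliability figures above are Cronbach's alpha values for the full 13-item scale and its subscales. A small, self-contained implementation of the alpha statistic on simulated item responses is sketched below.

```python
# Cronbach's alpha for a k-item scale:
# alpha = k/(k-1) * (1 - sum(item variances) / variance of the summed scores).
# Simulated 13-item responses for illustration.
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """item_scores: rows = respondents, columns = items."""
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

rng = np.random.default_rng(6)
latent = rng.normal(size=(120, 1))                        # shared attitude factor
items = latent + rng.normal(scale=0.8, size=(120, 13))    # 13 noisy indicators of it
print(f"Cronbach's alpha = {cronbach_alpha(items):.3f}")
```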

  18. Nausea and Vomiting following Balanced Xenon Anesthesia Compared to Sevoflurane: A Post-Hoc Explorative Analysis of a Randomized Controlled Trial.

    Directory of Open Access Journals (Sweden)

    Astrid V Fahlenkamp

    Full Text Available Like other inhalational anesthetics, xenon seems to be associated with post-operative nausea and vomiting (PONV). We assessed nausea incidence following balanced xenon anesthesia compared to sevoflurane, and dexamethasone for its prophylaxis in a randomized controlled trial with post-hoc explorative analysis. 220 subjects with elevated PONV risk (Apfel score ≥2) undergoing elective abdominal surgery were randomized to receive xenon or sevoflurane anesthesia and dexamethasone or placebo after written informed consent. 93 subjects in the xenon group and 94 subjects in the sevoflurane group completed the trial. General anesthesia was maintained with 60% xenon or 2.0% sevoflurane. Dexamethasone 4mg or placebo was administered in the first hour. Subjects were analyzed for nausea and vomiting in predefined intervals during a 24h post-anesthesia follow-up. Logistic regression, controlled for dexamethasone and anesthesia/dexamethasone interaction, showed a significant risk to develop nausea following xenon anesthesia (OR 2.30, 95% CI 1.02-5.19, p = 0.044). Early-onset nausea incidence was 46% after xenon and 35% after sevoflurane anesthesia (p = 0.138). After xenon, nausea occurred significantly earlier (p = 0.014), was more frequent and rated worse in the beginning. Dexamethasone did not markedly reduce nausea occurrence in either group. Late-onset nausea showed no considerable difference between the groups. In our study setting, xenon anesthesia was associated with an elevated risk to develop nausea in sensitive subjects. Dexamethasone 4mg was not effective in preventing nausea in our study. Group size or dosage might have been too small, and change of statistical analysis parameters in the post-hoc evaluation might have further contributed to a limitation of our results. Further trials will be needed to address prophylaxis of xenon-induced nausea. EU Clinical Trials EudraCT-2008-004132-20; ClinicalTrials.gov NCT00793663.
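
    The analysis above is a logistic regression of nausea on anesthetic group, controlled for dexamethasone and the anesthesia/dexamethasone interaction, reported as an odds ratio with a 95% CI. The sketch below fits a model of that shape with statsmodels on simulated data; the coefficients and group sizes are illustrative and do not reproduce the trial's estimates.

```python
# Illustrative only: logistic regression of nausea on anesthetic group, dexamethasone
# and their interaction, reported as odds ratios with 95% CIs. Simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 187
df = pd.DataFrame({
    "xenon": rng.integers(0, 2, n),           # 1 = xenon, 0 = sevoflurane
    "dexamethasone": rng.integers(0, 2, n),   # 1 = dexamethasone 4 mg, 0 = placebo
})
logit_p = -0.6 + 0.8 * df["xenon"] - 0.1 * df["dexamethasone"]
df["nausea"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit_p))).astype(int)

model = smf.logit("nausea ~ xenon * dexamethasone", data=df).fit(disp=False)
odds_ratios = np.exp(model.params).rename("OR")
conf_int = np.exp(model.conf_int())
print(pd.concat([odds_ratios, conf_int], axis=1))
```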

  19. A post-hoc subgroup analysis of outcomes in the first phase III clinical study of edaravone (MCI-186) in amyotrophic lateral sclerosis.

    Science.gov (United States)

    2017-10-01

    Our first phase III study failed to demonstrate efficacy of edaravone for amyotrophic lateral sclerosis (ALS) compared to placebo. Here, we performed post-hoc subgroup analysis to identify a subgroup in which edaravone might be expected to show efficacy. We focussed on two newly defined subgroups, EESP and dpEESP2y. The EESP was defined as the efficacy-expected subpopulation with % forced vital capacity of ≥80%, and ≥2 points for all item scores in the revised ALS functional rating scale (ALSFRS-R) score before treatment. The dpEESP2y was defined as the greater-efficacy-expected subpopulation within EESP having a diagnosis of 'definite' or 'probable' ALS according to the El Escorial revised Airlie House diagnostic criteria and onset of disease within two years. The primary endpoint of the post-hoc analysis was the change in the ALSFRS-R score during the 24-week treatment period. The intergroup differences of the least-squares mean change in the ALSFRS-R score ± standard error during treatment were 0.65 ± 0.78 (p = 0.4108) in the full analysis set, 2.20 ± 1.03 (p = 0.0360) in the EESP, and 3.01 ± 1.33 (p = 0.0270) in the dpEESP2y. Edaravone exhibited efficacy in the dpEESP2y subgroup. A further clinical study in patients meeting dpEESP2y criteria is warranted.

  20. Insight and Treatment Outcomes in Schizophrenia: Post-hoc Analysis of a Long-term, Double-blind Study Comparing Lurasidone and Quetiapine XR.

    Science.gov (United States)

    Harvey, Philip D; Siu, Cynthia O; Loebel, Antony D

    2017-12-01

    Objective: The objective of this post-hoc analysis was to evaluate the effect of lurasidone and quetiapine extended-release (XR) on insight and judgment and assess the longitudinal relationships between improvement in insight and cognitive performance, functional capacity, quality of well-being, and depressive symptoms in patients with schizophrenia. Design: Clinically unstable patients with schizophrenia (N=488) were randomized to once-daily, fixed-dose treatment with lurasidone 80mg, lurasidone 160mg, quetiapine XR 600mg, or placebo, followed by a long-term, double-blind, flexible-dose continuation study involving these agents. Results: Significantly greater improvement in insight and judgment (assessed by the Positive and Negative Syndrome Scale G12 item) for the lurasidone and quetiapine XR groups, compared to the placebo group, was observed at Week 6. Over a subsequent six-month continuation period, the flexible dose lurasidone group showed significantly greater improvement in insight from acute phase baseline compared to the flexible-dose quetiapine XR group (QXR-QXR) (p=0.032). Improvement in insight was significantly correlated with improvement in cognition ( p =0.014), functional capacity (p=0.006, UPSA-B), quality of well-being ( p =0.033, QWB), and depressive symptoms ( p =0.05, Montgomery-Åsberg Depression Rating Scale [MADRS] score) across treatment groups and study periods. Conclusion: In this post-hoc analysis, flexibly dosed lurasidone 40 to 160mg/d was found to be associated with significantly greater improvement in insight compared to flexibly dosed quetiapine XR 200 to 800mg/d over long-term treatment in patients with schizophrenia. Across treatment groups, improvement in insight and judgment was significantly associated with improvement in cognition, functional capacity, quality of well-being, and depressive symptoms over time.

  1. Predictive value of 18F-FDG PET/CT in adults with T-cell lymphoblastic lymphoma. Post hoc analysis of results from the GRAALL-LYSA LL03 trial

    Energy Technology Data Exchange (ETDEWEB)

    Becker, Stephanie; Vera, Pierre [Centre Henri-Becquerel, Department of Nuclear Medicine, Rouen (France); University of Rouen, QuantIF-LITIS (EA [Equipe d' Accueil] 4108), Faculty of Medicine, Rouen (France); Vermeulin, Thomas [Rouen University Hospital, Department of Biostatistics, Rouen (France); Cottereau, Anne-Segolene [Hopital Tenon, AP-HP, Department of Nuclear Medicine, Paris (France); Boissel, Nicolas [Universite Paris Diderot, Department of Hematology, Hopital Saint-Louis, AP-HP, Paris (France); Lepretre, Stephane [Centre Henri Becquerel and Normandie Univ UNIROUEN, Inserm U1245 and Department of Hematology, Rouen (France)

    2017-11-15

    We examined whether FDG PET can be used to predict outcome in patients with lymphoblastic lymphoma (LL). This was a retrospective post hoc analysis of data from the GRAALL-LYSA LL03 trial, in which the treatment of LL using an adapted paediatric-like acute lymphoblastic leukaemia protocol was evaluated. PET data acquired at baseline and after induction were analysed. Maximum standardized uptake values (SUVmax), total metabolic tumour volume and total lesion glycolysis were measured at baseline. The relative changes in SUVmax from baseline (ΔSUVmax) and the Deauville score were determined after induction. The population analysed comprised 36 patients with T-type LL. SUVmax using a cut-off value of ≤8.76 vs. >8.76 was predictive of 3-year event-free survival (31.6% vs. 80.4%; p = 0.013) and overall survival (35.0% vs. 83.7%; p = 0.028). ΔSUVmax using a cut-off value of ≤80% vs. >80% also tended to be predictive of 3-year event-free survival (40.0% vs. 76.0%; p = 0.054) and overall survival (49.2% vs. 85.6%; p = 0.085). Total metabolic tumour volume, baseline total lesion glycolysis and response according to the Deauville score were not predictive of outcome. A low initial SUVmax was predictive of worse outcomes in our series of patients with T-type LL. Although relatively few patients were included, the study also suggested that ΔSUVmax may be useful for predicting therapeutic efficacy. (orig.)

  2. History of early abuse as a predictor of treatment response in patients with fibromyalgia : A post-hoc analysis of a 12-week, randomized, double-blind, placebo-controlled trial of paroxetine controlled release

    NARCIS (Netherlands)

    Pae, Chi-Un; Masand, Prakash S.; Marks, David M.; Krulewicz, Stan; Han, Changsu; Peindl, Kathleen; Mannelli, Paolo; Patkar, Ashwin A.

    2009-01-01

    Objectives. We conducted a post-hoc analysis to determine whether a history of physical or sexual abuse was associated with response to treatment in a double-blind, randomized, placebo-controlled trial of paroxetine controlled release (CR) in fibromyalgia. Methods. A randomized, double-blind,

  3. Sustained disease-activity-free status in patients with relapsing-remitting multiple sclerosis treated with cladribine tablets in the CLARITY study: a post-hoc and subgroup analysis

    DEFF Research Database (Denmark)

    Giovannoni, Gavin; Cook, Stuart; Rammohan, Kottil

    2011-01-01

    /kg over 96 weeks was more effective than placebo. Achieving sustained freedom from disease activity is becoming a viable treatment goal in RRMS; we therefore aimed to assess the effects of cladribine on this composite outcome measure by doing a post-hoc analysis of data from the CLARITY study.

  4. Neuromathematical Trichotomous Mixed Methods Analysis: Using the Neuroscientific Tri-Squared Test Statistical Metric as a Post Hoc Analytic to Determine North Carolina School of Science and Mathematics Leadership Efficacy

    Science.gov (United States)

    Osler, James Edward, II; Mason, Letita R.

    2016-01-01

    This study examines the leadership efficacy amongst graduates of The North Carolina School of Science and Mathematics (NCSSM) for the classes of 2000 through 2007 from a neuroscientific and neuromathematic perspective. NCSSM alumni (as the primary unit of analysis) were examined using a novel neuromathematic post hoc method of analysis. This study…

  5. Does the effect of one-day simulation team training in obstetric emergencies decline within one year? A post-hoc analysis of a multicentre cluster randomised controlled trial

    NARCIS (Netherlands)

    van de Ven, J.; Fransen, A F; Schuit, E.; van Runnard Heimel, P.J.; Mol, Ben W.; Oei, Swan G.

    2017-01-01

    Objective: To investigate whether the effect of a

  6. PDE5 inhibitor treatment persistence and adherence in Brazilian men: post-hoc analyses from a 6-month, prospective, observational study.

    Science.gov (United States)

    Cairoli, Carlos; Reyes, Luis Antonio; Henneges, Carsten; Sorsaburu, Sebastian

    2014-01-01

    Characterize persistence and adherence to phosphodiesterase type-5 inhibitor (PDE5I) on-demand therapy over 6 months among Brazilian men in an observational, non-interventional study of Latin American men naïve to PDE5Is with erectile dysfunction (ED). Men were prescribed PDE5Is per routine clinical practice. Persistence was defined as using ≥ 1 dose during the previous 4 weeks, and adherence as following dosing instructions for the most recent dose, assessed using the Persistence and Adherence Questionnaire. Other measures included the Self-Esteem and Relationship (SEAR) Questionnaire, and International Index of Erectile Function (IIEF). Multivariate logistic regression was used to identify factors associated with persistence/adherence. 104 Brazilian men were enrolled; mean age by treatment was 53 to 59 years, and most presented with moderate ED (61.7%). The prescribed PDE5I was sildenafil citrate for 50 (48.1%), tadalafil for 36 (34.6%), vardenafil for 15 (14.4%), and lodenafil for 3 patients (2.9%). Overall treatment persistence was 69.2% and adherence was 70.2%; both were numerically higher with tadalafil (75.0%) versus sildenafil or vardenafil (range 60.0% to 68.0%). Potential associations of persistence and/or adherence were observed with education level, ED etiology, employment status, and coronary artery disease. Improvements in all IIEF domain scores, and both SEAR domain scores were observed for all treatments. Study limitations included the observational design, brief duration, dependence on patient self-reporting, and limited sample size. Approximately two-thirds of PDE5I-naive, Brazilian men with ED were treatment persistent and adherent after 6 months. Further study is warranted to improve long-term outcomes of ED treatment.

  7. PDE5 Inhibitor Treatment Persistence and Adherence in Brazilian Men: Post-hoc Analyses from a 6-Month, Prospective, Observational Study

    Directory of Open Access Journals (Sweden)

    Carlos Cairoli

    2014-06-01

    Full Text Available Purpose Characterize persistence and adherence to phosphodiesterase type-5 inhibitor (PDE5I) on-demand therapy over 6 months among Brazilian men in an observational, non-interventional study of Latin American men naïve to PDE5Is with erectile dysfunction (ED). Materials and Methods Men were prescribed PDE5Is per routine clinical practice. Persistence was defined as using ≥ 1 dose during the previous 4 weeks, and adherence as following dosing instructions for the most recent dose, assessed using the Persistence and Adherence Questionnaire. Other measures included the Self-Esteem and Relationship (SEAR) Questionnaire, and International Index of Erectile Function (IIEF). Multivariate logistic regression was used to identify factors associated with persistence/adherence. Results 104 Brazilian men were enrolled; mean age by treatment was 53 to 59 years, and most presented with moderate ED (61.7%). The prescribed PDE5I was sildenafil citrate for 50 (48.1%), tadalafil for 36 (34.6%), vardenafil for 15 (14.4%), and lodenafil for 3 patients (2.9%). Overall treatment persistence was 69.2% and adherence was 70.2%; both were numerically higher with tadalafil (75.0%) versus sildenafil or vardenafil (range 60.0% to 68.0%). Potential associations of persistence and/or adherence were observed with education level, ED etiology, employment status, and coronary artery disease. Improvements in all IIEF domain scores, and both SEAR domain scores were observed for all treatments. Study limitations included the observational design, brief duration, dependence on patient self-reporting, and limited sample size. Conclusion Approximately two-thirds of PDE5I-naive, Brazilian men with ED were treatment persistent and adherent after 6 months. Further study is warranted to improve long-term outcomes of ED treatment.

  8. Effect of moderate alcohol consumption on fetuin-A levels in men and women: post-hoc analyses of three open-label randomized crossover trials

    NARCIS (Netherlands)

    Joosten, M.M.; Schrieks, I.C.; Hendriks, H.F.J.

    2014-01-01

    Background Fetuin-A, a liver-derived glycoprotein that impairs insulin-signalling, has emerged as a biomarker for diabetes risk. Although moderate alcohol consumption has been inversely associated with fetuin-A, data from clinical trials are lacking. Thus, we evaluated whether moderate alcohol

  9. Effects of vilazodone on suicidal ideation and behavior in adults with major depressive disorder or generalized anxiety disorder: post-hoc analysis of randomized, double-blind, placebo-controlled trials.

    Science.gov (United States)

    Thase, Michael E; Edwards, John; Durgam, Suresh; Chen, Changzheng; Chang, Cheng-Tao; Mathews, Maju; Gommoll, Carl P

    2017-09-01

    Treatment-emergent suicidal ideation and behavior are ongoing concerns with antidepressants. Vilazodone, currently approved for the treatment of major depressive disorder (MDD) in adults, has also been evaluated in generalized anxiety disorder (GAD). Post-hoc analyses of vilazodone trials were carried out to examine its effects on suicidal ideation and behavior in adults with MDD or GAD. Data were pooled from vilazodone trials in MDD (four studies) and GAD (three studies). The incidence of suicide-related events was analyzed on the basis of treatment-emergent adverse event reporting and Columbia-Suicide Severity Rating Scale (C-SSRS) monitoring. Treatment-emergent suicidal ideation was analyzed on the basis of a C-SSRS category shift from no suicidal ideation/behavior (C-SSRS=0) at baseline to suicide ideation (C-SSRS=1-5) during treatment. In pooled safety populations (MDD, n=2233; GAD, n=1475), suicide-related treatment-emergent adverse events occurred in less than 1% of vilazodone-treated and placebo-treated patients. Incidences of C-SSRS suicidal ideation were as follows: MDD (vilazodone=19.9%, placebo=24.7%); GAD (vilazodone=7.7%, placebo=9.4%). Shifts from no suicidal ideation/behavior at baseline to suicidal ideation during treatment were as follows: MDD (vilazodone=9.4%, placebo=10.3%); GAD (vilazodone=4.4%, placebo=6.1%). Data from placebo-controlled studies indicate little or no risk of treatment-emergent suicidal ideation or behavior with vilazodone in adults with MDD or GAD. Nevertheless, all patients should be monitored for suicidal thoughts and behaviors during antidepressant treatment.

  10. Inherent Risk Factors for Nosocomial Infection in the Long Stay Critically Ill Child Without Known Baseline Immunocompromise: A Post Hoc Analysis of the CRISIS Trial.

    Science.gov (United States)

    Carcillo, Joseph A; Dean, J Michael; Holubkov, Richard; Berger, John; Meert, Kathleen L; Anand, Kanwaljeet J S; Zimmerman, Jerry; Newth, Christopher J; Harrison, Rick; Burr, Jeri; Willson, Douglas F; Nicholson, Carol; Bell, Michael J; Berg, Robert A; Shanley, Thomas P; Heidemann, Sabrina M; Dalton, Heidi; Jenkins, Tammara L; Doctor, Allan; Webster, Angie

    2016-11-01

    Nosocomial infection remains an important health problem in long stay (>3 days) pediatric intensive care unit (PICU) patients. Admission risk factors related to the development of nosocomial infection in long stay immune competent patients in particular are not known. Post-hoc analysis of the previously published Critical Illness Stress induced Immune Suppression (CRISIS) prevention trial database, to identify baseline risk factors for nosocomial infection. Because there was no difference between treatment arms of that study in nosocomial infection in the population without known baseline immunocompromise, both arms were combined and the cohort that developed nosocomial infection was compared with the cohort that did not. There were 254 long stay PICU patients without known baseline immunocompromise. Ninety (35%) developed nosocomial infection, and 164 (65%) did not. Admission characteristics associated with increased nosocomial infection risk were increased age, higher Pediatric Risk of Mortality version III score, the diagnoses of trauma or cardiac arrest, and lymphopenia. Increasing age, cardiac arrest and lymphopenia remained risk factors (P < 0.05), whereas trauma tended to be related to nosocomial infection development (P = 0.07). These data suggest that increasing age, cardiac arrest and lymphopenia predispose long stay PICU patients without known baseline immunocompromise to nosocomial infection. These findings may inform pre-hoc stratification randomization strategies for prospective studies designed to prevent nosocomial infection in this population.

  11. Comparison of Efficacy and Safety of Liraglutide 3.0 mg in Individuals with BMI above and below 35 kg/m²: A Post-hoc Analysis.

    Science.gov (United States)

    le Roux, Carel; Aroda, Vanita; Hemmingsson, Joanna; Cancino, Ana Paula; Christensen, Rune; Pi-Sunyer, Xavier

    2017-01-01

    To investigate whether the efficacy and safety of liraglutide 3.0 mg differed between two subgroups, BMI 27 to <35 kg/m² and BMI ≥35 kg/m². Treatment effects of liraglutide 3.0 mg were evaluated by testing the interaction between treatment group and baseline BMI subgroup. Significantly greater weight loss (0-56 weeks) was observed with liraglutide 3.0 mg versus placebo in all patient groups while on treatment. There was no evidence that the weight-lowering effect of liraglutide 3.0 mg differed between BMI subgroups (interaction p > 0.05). Similarly, for most secondary endpoints significantly greater improvements were observed with liraglutide 3.0 mg versus placebo, with no indication of treatment effects differing between subgroups. The safety profile of liraglutide 3.0 mg was broadly similar across BMI subgroups. This post-hoc analysis did not indicate any differences in the treatment effects, or safety profile, of liraglutide 3.0 mg between individuals with BMI 27 to <35 kg/m² and those with BMI ≥35 kg/m². Liraglutide 3.0 mg can therefore be considered for individuals with a BMI of ≥35 as well as for those with a BMI of 27 to <35 kg/m². © 2017 The Author(s) Published by S. Karger GmbH, Freiburg.
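
    The subgroup comparison above rests on testing a treatment-by-BMI-subgroup interaction. The sketch below shows one way such an interaction term can be tested with statsmodels on simulated weight-change data; a non-significant interaction corresponds to the abstract's finding of no evidence that the treatment effect differs between subgroups. Variable names and effect sizes are assumptions.

```python
# Illustrative only: testing a treatment-by-BMI-subgroup interaction on weight change.
# Simulated data with the same treatment effect in both subgroups.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(8)
n = 600
df = pd.DataFrame({
    "liraglutide": rng.integers(0, 2, n),       # 1 = liraglutide 3.0 mg, 0 = placebo
    "bmi_35_or_more": rng.integers(0, 2, n),    # 1 = BMI >= 35, 0 = BMI 27 to <35
})
df["weight_change_pct"] = -2.0 - 5.0 * df["liraglutide"] + rng.normal(0, 4, n)

model = smf.ols("weight_change_pct ~ liraglutide * bmi_35_or_more", data=df).fit()
print(model.summary().tables[1])
# A non-significant interaction term = no evidence the treatment effect differs by subgroup.
print("Interaction p-value:", model.pvalues["liraglutide:bmi_35_or_more"])
```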

  12. Comparison of Efficacy and Safety of Liraglutide 3.0 mg in Individuals with BMI above and below 35 kg/m²: A Post-hoc Analysis

    Science.gov (United States)

    le Roux, Carel; Aroda, Vanita; Hemmingsson, Joanna; Cancino, Ana Paula; Christensen, Rune; Pi-Sunyer, Xavier

    2018-01-01

    Objective To investigate whether the efficacy and safety of liraglutide 3.0 mg differed between two subgroups, BMI 27 to <35 kg/m² and BMI ≥35 kg/m². Treatment effects of liraglutide 3.0 mg were evaluated by testing the interaction between treatment group and baseline BMI subgroup. Results Significantly greater weight loss (0–56 weeks) was observed with liraglutide 3.0 mg versus placebo in all patient groups while on treatment. There was no evidence that the weight-lowering effect of liraglutide 3.0 mg differed between BMI subgroups (interaction p > 0.05). Similarly, for most secondary endpoints significantly greater improvements were observed with liraglutide 3.0 mg versus placebo, with no indication of treatment effects differing between subgroups. The safety profile of liraglutide 3.0 mg was broadly similar across BMI subgroups. Conclusion This post-hoc analysis did not indicate any differences in the treatment effects, or safety profile, of liraglutide 3.0 mg between individuals with BMI 27 to <35 kg/m² and those with BMI ≥35 kg/m². Liraglutide 3.0 mg can therefore be considered for individuals with a BMI of ≥35 as well as for those with a BMI of 27 to <35 kg/m². PMID:29145215

  13. Influence of complete administration of adjuvant chemotherapy cycles on overall and disease-free survival in locally advanced rectal cancer: post hoc analysis of a randomized, multicenter, non-inferiority, phase 3 trial.

    Science.gov (United States)

    Sandra-Petrescu, Flavius; Herrle, Florian; Burkholder, Iris; Kienle, Peter; Hofheinz, Ralf-Dieter

    2018-04-03

    A randomized trial demonstrated that capecitabine is at least as effective as fluorouracil in the adjuvant treatment of patients with locally advanced rectal cancer. However, not all patients receive all planned cycles of chemotherapy. Therefore it is of interest how complete or partial administration of chemotherapy influences oncological outcome. A post hoc analysis of a trial with 401 randomized patients, nine being excluded because of missing data, was performed. 392 patients (197 - capecitabine, 195 - fluorouracil) could be analyzed regarding the number of administered adjuvant chemotherapy cycles. In the subgroup of 361 patients with an overall survival of at least six months, five-year overall and disease-free survival were analyzed in respect to completion (complete vs. incomplete) of chemotherapy cycles. Survival rates and curves were calculated and compared using the log-rank test. The effect of completion of chemotherapy was adjusted for relevant confounding factors. Two hundred fifty-one (64.0%) of analyzed patients received all postoperative scheduled cycles. Five-year overall survival was significantly better in these patients compared to the incomplete group (76.0 vs. 60.6%). In the subgroup of patients surviving at least six months, five-year overall survival was also significantly better than in the incomplete group (76.0 vs. 66.4%, p = 0.0073). Five-year disease-free survival was numerically better (64.9 vs. 58.7%, p = 0.0646; HR [not all cycles vs. all cycles] = 1.42, 95% CI: [0.98, 2.07]). Cox regression models showed non-significantly better OS (p = 0.061) and DFS (p = 0.083) when chemotherapy cycles were administered completely. Complete administration of chemotherapy cycles was associated with improved five-year overall and disease-free survival in patients with locally advanced rectal cancer.
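
    Survival in the complete versus incomplete chemotherapy groups was compared with the log-rank test. The sketch below runs that test with lifelines on simulated survival times; the group sizes loosely follow the abstract (251 complete, 141 incomplete of 392 analyzed patients), but the times and events are fabricated.

```python
# Illustrative only: log-rank comparison of overall survival between patients who
# completed all adjuvant chemotherapy cycles and those who did not. Simulated data.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(10)
t_complete = rng.exponential(scale=120, size=251).clip(max=60)    # months, capped at 5 years
t_incomplete = rng.exponential(scale=80, size=141).clip(max=60)
e_complete = rng.random(251) < 0.3                                # True = death observed
e_incomplete = rng.random(141) < 0.4

result = logrank_test(t_complete, t_incomplete,
                      event_observed_A=e_complete, event_observed_B=e_incomplete)
print("Log-rank p-value:", result.p_value)
```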

  14. Modelling and analysing oriented fibrous structures

    International Nuclear Information System (INIS)

    Rantala, M; Lassas, M; Siltanen, S; Sampo, J; Takalo, J; Timonen, J

    2014-01-01

    A mathematical model for fibrous structures using a direction-dependent scaling law is presented. The orientation of fibrous nets (e.g. paper) is analysed with a method based on the curvelet transform. The curvelet-based orientation analysis has been tested successfully on real data from paper samples: the major directions of fibre orientation can apparently be recovered. Similar results are achieved in tests on data simulated by the new model, allowing a comparison with ground truth.

  15. Pregabalin versus SSRIs and SNRIs in benzodiazepine-refractory outpatients with generalized anxiety disorder: a post hoc cost-effectiveness analysis in usual medical practice in Spain

    Directory of Open Access Journals (Sweden)

    De Salas-Cansado M

    2012-06-01

    Full Text Available Marina De Salas-Cansado,1 José M Olivares,2 Enrique Álvarez,3 Jose L Carrasco,4 Andoni Barrueta,5 Javier Rejas,5 1Trial Form Support Spain, Madrid; 2Department of Psychiatry, Hospital Meixoeiro, Complejo Hospitalario Universitario, Vigo; 3Department of Psychiatry, Hospital de la Santa Creu i San Pau, Barcelona; 4Department of Psychiatry, Hospital Clínico San Carlos, Madrid; 5Health Outcomes Research Department, Medical Unit, Pfizer Spain, Alcobendas, Madrid, Spain Background: Generalized anxiety disorder (GAD) is a prevalent health condition which seriously affects both patient quality of life and the National Health System. The aim of this research was to carry out a post hoc cost-effectiveness analysis of the effect of pregabalin versus selective serotonin reuptake inhibitors (SSRIs)/serotonin norepinephrine reuptake inhibitors (SNRIs) in treated benzodiazepine-refractory outpatients with GAD. Methods: This post hoc cost-effectiveness analysis used secondary data extracted from the 6-month cohort, prospective, noninterventional ADAN study, which was conducted to ascertain the cost of illness in GAD subjects diagnosed according to Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition criteria. Benzodiazepine-refractory subjects were those who claimed persistent symptoms of anxiety and showed a suboptimal response (Hamilton Anxiety Rating Scale ≥16) to benzodiazepines, alone or in combination, over 6 months. Patients could switch to pregabalin (as monotherapy or add-on) or to an SSRI or SNRI, alone or in combination. Effectiveness was expressed as quality-adjusted life years gained, and the perspective was that of the National Health System in the year 2008. A sensitivity analysis was performed using bootstrapping techniques (10,000 resamples were obtained) in order to obtain a cost-effectiveness plane and a corresponding acceptability curve. Results: A total of 282 subjects (mean Hamilton Anxiety Rating Scale score 25.8) were
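
    The sensitivity analysis described above bootstraps incremental costs and QALYs (10,000 resamples) to build a cost-effectiveness plane and acceptability curve. The sketch below reproduces the mechanics of that resampling on fabricated cost and QALY data; the per-arm sample sizes, cost distributions and willingness-to-pay threshold are assumptions.

```python
# Illustrative only: bootstrapping incremental cost and incremental QALYs to populate
# a cost-effectiveness plane and estimate an acceptability probability at one threshold.
# Sample sizes, cost/QALY distributions and the threshold are assumptions.
import numpy as np

rng = np.random.default_rng(9)
n_pregabalin, n_comparator = 141, 141
cost_pregabalin = rng.gamma(shape=2.0, scale=900.0, size=n_pregabalin)    # euros over 6 months
cost_comparator = rng.gamma(shape=2.0, scale=1000.0, size=n_comparator)
qaly_pregabalin = rng.normal(0.36, 0.08, n_pregabalin)
qaly_comparator = rng.normal(0.33, 0.08, n_comparator)

incr_cost, incr_qaly = [], []
for _ in range(10_000):
    i = rng.integers(0, n_pregabalin, n_pregabalin)     # resample each arm with replacement
    j = rng.integers(0, n_comparator, n_comparator)
    incr_cost.append(cost_pregabalin[i].mean() - cost_comparator[j].mean())
    incr_qaly.append(qaly_pregabalin[i].mean() - qaly_comparator[j].mean())
incr_cost, incr_qaly = np.array(incr_cost), np.array(incr_qaly)

threshold = 30_000  # assumed willingness to pay per QALY (euros)
prob_cost_effective = np.mean(threshold * incr_qaly - incr_cost > 0)
print(f"Probability cost-effective at {threshold} EUR/QALY: {prob_cost_effective:.2f}")
```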

  16. The Impact of Conscious Sedation versus General Anesthesia for Stroke Thrombectomy on the Predictive Value of Collateral Status: A Post Hoc Analysis of the SIESTA Trial.

    Science.gov (United States)

    Schönenberger, S; Pfaff, J; Uhlmann, L; Klose, C; Nagel, S; Ringleb, P A; Hacke, W; Kieser, M; Bendszus, M; Möhlenbruch, M A; Bösel, J

    2017-08-01

    Radiologic selection criteria to identify patients likely to benefit from endovascular stroke treatment are still controversial. In this post hoc analysis of the recent randomized Sedation versus Intubation for Endovascular Stroke TreAtment (SIESTA) trial, we aimed to investigate the impact of sedation mode (conscious sedation versus general anesthesia) on the predictive value of collateral status. Using imaging data from SIESTA, we assessed collateral status with the collateral score of Tan et al and graded it from absent to good collaterals (0-3). We examined the association of collateral status with 24-hour improvement of the NIHSS score, infarct volume, and mRS at 3 months according to the sedation regimen. In a cohort of 104 patients, the NIHSS score improved significantly in patients with moderate or good collaterals (2-3) compared with patients with no or poor collaterals (0-1) (P = .011; mean, -5.8 ± 7.6 versus -1.1 ± 10.7). Tan 2-3 was also associated with significantly higher ASPECTS before endovascular stroke treatment (median, 9 versus 7). The sedation modes conscious sedation and general anesthesia were not associated with significant differences in the predictive value of collateral status regarding infarction size or functional outcome. The sedation mode, conscious sedation or general anesthesia, did not influence the predictive value of collaterals in patients with large-vessel occlusion anterior circulation stroke undergoing thrombectomy in the SIESTA trial. © 2017 by American Journal of Neuroradiology.

  17. Impact of time of initiation of once-monthly paliperidone palmitate in hospitalized Asian patients with acute exacerbation of schizophrenia: a post hoc analysis from the PREVAIL study.

    Science.gov (United States)

    Li, Huafang; Li, Yan; Feng, Yu; Zhuo, Jianmin; Turkoz, Ibrahim; Mathews, Maju; Tan, Wilson

    2018-01-01

    To evaluate the differences in efficacy and safety outcomes in acute exacerbating schizophrenia patients between 2 subgroups (≤1 week and >1 week), differing in time interval from hospitalization to time of initiation of once-monthly paliperidone palmitate. PREVAIL was a multicenter, single-arm, open-label, prospective Phase IV study in hospitalized Asian patients (either sex, aged 18-65 years) diagnosed with schizophrenia ( Diagnostic and Statistical Manual of Mental Disorders , Fourth Edition). Change from baseline to week 13 in primary (Positive and Negative Syndrome Scale [PANSS] total score), secondary endpoints (PANSS responder rate, PANSS subscale, PANSS Marder factor, Clinical Global Impression-Severity, and Personal and Social Performance scale scores, readiness for hospital discharge questionnaire) and safety were assessed in this post hoc analysis. Significant mean reduction from baseline to week 13 in the PANSS total score, 30% PANSS responder rates ( P ≤0.01), PANSS subscales (positive and general psychopathology; all P ≤0.01), PANSS Marder factor (positive symptoms, uncontrolled hostility, and excitement and anxiety/depression; all P ≤0.01), Personal and Social Performance scale scores ( P ≤0.05) and Clinical Global Impression-Severity categorical summary ( P ≤0.05) were significantly greater in the ≤1 week subgroup versus >1 week subgroup ( P ≤0.05). The readiness for hospital discharge questionnaire improved over time for the overall study population, but remained similar between subgroups at all-time points. Treatment-emergent adverse events were similar between the subgroups. Early initiation of once-monthly paliperidone palmitate in hospitalized patients with acute exacerbation of schizophrenia led to greater improvements in psychotic symptoms with comparable safety than treatment initiation following 1 week of hospitalization.

  18. The PREVAIL trial of enzalutamide in men with chemotherapy-naïve, metastatic castration-resistant prostate cancer: Post hoc analysis of Korean patients.

    Science.gov (United States)

    Kim, Choung-Soo; Theeuwes, Ad; Kwon, Dong Deuk; Choi, Young Deuk; Chung, Byung Ha; Lee, Hyun Moo; Lee, Kang Hyun; Lee, Sang Eun

    2016-05-01

    This post hoc analysis evaluated treatment effects, safety, and pharmacokinetics of enzalutamide in Korean patients in the phase 3, double-blind, placebo-controlled PREVAIL trial. Asymptomatic or mildly symptomatic chemotherapy-naive men with metastatic castration-resistant prostate cancer that progressed on androgen deprivation therapy received 160 mg/d oral enzalutamide or placebo (1:1) until death or discontinuation due to radiographic progression or skeletal-related event and initiation of subsequent therapy. Coprimary end points were centrally assessed radiographic progression-free survival (rPFS) and overall survival (OS). Secondary end points included investigator-assessed rPFS, time to initiation of chemotherapy, time to prostate-specific antigen (PSA) progression, PSA response (≥50% decline), and time to skeletal-related event. Of 1,717 total patients, 78 patients were enrolled in Korea (enzalutamide, n=40; placebo, n=38). Hazard ratios (95% confidence interval) for enzalutamide versus placebo were 0.23 (0.02-2.24) for centrally assessed rPFS, 0.77 (0.28-2.15) for OS, 0.21 (0.08-0.51) for time to chemotherapy, and 0.31 (0.17-0.56) for time to PSA progression. A PSA response was observed in 70.0% of enzalutamide-treated and 10.5% of placebo-treated Korean patients. Adverse events of grade ≥3 occurred in 33% of enzalutamide-treated and 11% of placebo-treated Korean patients, with median treatment durations of 13.0 and 5.1 months, respectively. At 13 weeks, the plasma concentration of enzalutamide plus N-desmethyl enzalutamide was similar in Korean and non-Korean patients (geometric mean ratio, 1.04; 90% confidence interval, 0.97-1.10). In Korean patients, treatment effects and safety of enzalutamide were consistent with those observed in the overall PREVAIL study population (ClinicalTrials.gov Identifier: NCT01212991).

  19. Records of pan (floodplain wetland) sedimentation as an approach for post-hoc investigation of the hydrological impacts of dam impoundment: The Pongolo river, KwaZulu-Natal.

    Science.gov (United States)

    Heath, S K; Plater, A J

    2010-07-01

    River impoundment by dams has far-reaching consequences for downstream floodplains in terms of hydrology, water quality, geomorphology, ecology and ecosystem services. With the imperative of economic development, there is the danger that potential environmental impacts are not assessed adequately or monitored appropriately. Here, an investigation of sediment composition of two pans (floodplain wetlands) in the Pongolo River floodplain, KwaZulu-Natal, downstream of the Pongolapoort dam constructed in 1974, is considered as a method for post-hoc assessment of the impacts on river hydrology, sediment supply and water quality. Bumbe and Sokhunti pans have contrasting hydrological regimes in terms of their connection to the main Pongolo channel - Bumbe is a shallow ephemeral pan and Sokhunti is a deep, perennial water body. The results of X-ray fluorescence (XRF) geochemical analysis of their sediment records over a depth of >1 m show that whilst the two pans exhibit similar sediment composition and variability in their lower part, Bumbe pan exhibits a shift toward increased fine-grained mineral supply and associated nutrient influx at a depth of c. 45 cm whilst Sokhunti pan is characterised by increased biogenic productivity at a depth of c. 26 cm due to enhanced nutrient status. The underlying cause is interpreted as a shift in hydrology to a 'post-dam' flow regime of reduced flood frequencies with more regular baseline flows which reduce the average flow velocity. In addition, Sokhunti shows a greater sensitivity to soil influx during flood events due to the nature of its 'background' of autochthonous biogenic sedimentation. The timing of the overall shift in sediment composition and the dates of the mineral inwash events are not well defined, but the potential for these wetlands as sensitive recorders of dam-induced changes in floodplain hydrology, especially those with a similar setting to Sokhunti pan, is clearly demonstrated. Copyright 2010 Elsevier Ltd. All

  20. Blood transfusion strategy and risk of postoperative delirium in nursing homes residents with hip fracture. A post hoc analysis based on the TRIFE randomized controlled trial.

    Science.gov (United States)

    Blandfort, Sif; Gregersen, Merete; Borris, Lars Carl; Damsgaard, Else Marie

    2017-06-01

    To investigate whether a liberal blood transfusion strategy [Hb levels ≥11.3 g/dL (7 mmol/L)] reduces the risk of postoperative delirium (POD) on day 10, among nursing home residents with hip fracture, compared to a restrictive transfusion strategy [Hb levels ≥9.7 g/dL (6 mmol/L)]. Furthermore, to investigate whether POD influences mortality within 90 days after hip surgery. This is a post hoc analysis based on The TRIFE - a randomized controlled trial. Frail anemic patients from the Orthopedic Surgical Ward at Aarhus University Hospital were enrolled consecutively between January 18, 2010 and June 6, 2013. These patients (aged ≥65 years) had been admitted from nursing homes for unilateral hip fracture surgery. After surgery, 179 patients were included in this study. On the first day of hospitalization, all enrolled patients were examined for cognitive impairment (assessed by MMSE) and delirium (assessed by CAM). Delirium was also assessed on the tenth postoperative day. The prevalence of delirium was 10 % in patients allocated to a liberal blood transfusion strategy (LB) and 21 % in the group with a restrictive blood transfusion strategy (RB). LB prevents development of delirium on day 10, compared to RB, odds ratio 0.41 (95 % CI 0.17-0.96), p = 0.04. Development of POD on day 10 increased the risk of 90-day death, hazard ratio 3.14 (95 % CI 1.72-5.78), p < 0.001. In nursing home residents undergoing surgery for hip fracture, maintaining hemoglobin level above 11.3 g/dL reduces the rate of POD on day 10 compared to a RB. Development of POD is associated with increased mortality.

  1. Effectiveness of treat-to-target strategy for LDL-cholesterol control in type 2 diabetes: post-hoc analysis of data from the MIND.IT study.

    Science.gov (United States)

    Ardigò, Diego; Vaccaro, Olga; Cavalot, Franco; Rivellese, Albarosa Angela; Franzini, Laura; Miccoli, Roberto; Patti, Lidia; Boemi, Massimo; Trovati, Mariella; Zavaroni, Ivana

    2014-04-01

    The paper presents a post-hoc analysis of the intensity of dyslipidaemia care operated in the first 2 years of Multiple-Intervention-in-type-2-Diabetes.ITaly (MIND.IT) study. MIND.IT is a multicentric, randomized, two-parallel arm trial involving 1461 type 2 diabetic patients at high cardiovascular (CV) risk. The study compares the usual care (UC) of CV prevention with a multifactorial intensive care (IC) approach aiming at achieving target values for the main CV risk factors according to a step-wise treat-to-target approach. Proportion of patients on target for low-density lipoprotein cholesterol (LDL-C) was about 10% at baseline and increased significantly more with IC than UC (43 vs. 27%; p < 0.001). However, the majority (57%) of patients, in this intended intensively treated cohort, failed to achieve the proposed target. Average LDL-C decreased from 144 ± 35 to 108 ± 31 mg/dl with IC and from 142 ± 28 to 118 ± 32 with UC (p-for-interaction <0.0001). IC was associated with a significantly greater increase in statin prescription and lower withdrawal from treatment than UC (43 vs. 11% and 28 vs. 61%, respectively; both p < 0.001). However, the new treatments were characterized in both groups by the use of low starting doses (≤ 10 mg of atorvastatin, equivalent dose in more than 90% of patients) without increase in case of missed target. The application of a multifactorial treat-to-target intervention is associated with a significant improvement in LDL-C beyond usual practice. However, the change in LDL-C appears to be more related to an increased number of treated patients and a decreased treatment withdrawal than to a true treat-to-target approach.

  2. Costs and health resources utilization following switching to pregabalin in individuals with gabapentin-refractory neuropathic pain: a post hoc analysis.

    Science.gov (United States)

    Navarro, Ana; Saldaña, María T; Pérez, Concepción; Masramón, Xavier; Rejas, Javier

    2012-06-01

    To analyze the changes in pain severity and associated costs resulting from resource utilization and reduced productivity in patients with gabapentin-refractory peripheral neuropathic pain who switched to pregabalin therapy in primary care settings in Spain. This is a post hoc analysis of a 12-week, multicentre, noninterventional cost-of-illness study. Patients were included in the study if they were over 18 years of age and had a diagnosis of chronic, treatment-refractory peripheral neuropathic pain. The analysis included all pregabalin-naïve patients who had previously shown an inadequate response to gabapentin and switched to pregabalin. Severity of pain before and after treatment with pregabalin, alone or as an add-on therapy, was assessed using the Short-Form McGill Pain Questionnaire (SF-MPQ) and its related visual analogue scale (VAS). Healthcare resource utilization, productivity (including lost-workday equivalents [LWDE]), and related costs were assessed at baseline and after pregabalin treatment. The 174 patients who switched to pregabalin had significant and clinically relevant reductions in pain severity (mean [SD] change on the SF-MPQ VAS, -31.9 [22.1]), and reductions in healthcare resource utilization (including medication use [in the pregabalin add-on group], ancillary tests, and unscheduled medical visits) were observed at the end of the trial. Additionally, there were substantial improvements in productivity, including a reduction in the number of LWDE following pregabalin treatment (-18.9 [26.0]; P < 0.0001). These changes correlated with substantial reductions in both direct (-652.9 ± 1622.4 €; P < 0.0001) and indirect healthcare costs (-851.6 [1259.6] €; P < 0.0001). The cost of care in patients with gabapentin-refractory peripheral neuropathic pain appeared to be significantly reduced after switching to pregabalin treatment, alone or in combination with other analgesic drugs, in a real-life setting. © 2011 The Authors. Pain Practice © 2011 World Institute of Pain.

  3. Externalizing Behaviour for Analysing System Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof

    2013-01-01

    System models have recently been introduced to model organisations and evaluate their vulnerability to threats, especially insider threats. These models are particularly suitable for the latter, since insiders can be assumed to have more knowledge about the attacked organisation than outside attackers; many attacks are therefore considerably easier for insiders to perform than for outsiders. However, current models do not support explicit specification of different behaviours. Instead, behaviour is deeply embedded in the analyses supported by the models, meaning that it is a complex, if not impossible, task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach that makes behaviour a separate component in system models, and explore how to integrate...
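
    The central idea of the record above - pulling behaviour out of the hard-coded analysis and treating it as an exchangeable component of the system model - can be illustrated with a small Python sketch. The class and function names below are illustrative assumptions, not the notation used by the authors.

        from dataclasses import dataclass
        from typing import Callable, Dict, List

        @dataclass
        class Location:
            """A location in the modelled organisation."""
            name: str
            requires_badge: bool

        # Behaviour is a separate, swappable component: given what an actor knows,
        # it decides which locations the actor will attempt to reach.
        Behaviour = Callable[[Dict[str, bool], List[Location]], List[str]]

        def insider_behaviour(knowledge: Dict[str, bool], locations: List[Location]) -> List[str]:
            # Insiders are assumed to know badge codes, so access control does not stop them.
            return [loc.name for loc in locations]

        def outsider_behaviour(knowledge: Dict[str, bool], locations: List[Location]) -> List[str]:
            # Outsiders only try locations that need no badge or that they have scouted.
            return [loc.name for loc in locations
                    if not loc.requires_badge or knowledge.get(loc.name, False)]

        def reachable_locations(behaviour: Behaviour, locations: List[Location]) -> List[str]:
            # The analysis itself is unchanged; only the behaviour component varies.
            return behaviour({"server room": False}, locations)

        office = [Location("lobby", False), Location("server room", True)]
        print("insider :", reachable_locations(insider_behaviour, office))
        print("outsider:", reachable_locations(outsider_behaviour, office))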

  4. Health-related quality of life effects of enzalutamide in patients with metastatic castration-resistant prostate cancer: an in-depth post hoc analysis of EQ-5D data from the PREVAIL trial.

    Science.gov (United States)

    Devlin, Nancy; Herdman, Michael; Pavesi, Marco; Phung, De; Naidoo, Shevani; Beer, Tomasz M; Tombal, Bertrand; Loriot, Yohann; Ivanescu, Cristina; Parli, Teresa; Balk, Mark; Holmstrom, Stefan

    2017-06-23

    The effect of enzalutamide on health-related quality of life (HRQoL) in the PREVAIL trial in chemotherapy-naïve men with metastatic castration-resistant prostate cancer was analyzed using the generic EQ-5D instrument. Patients received oral enzalutamide 160 mg/day (n = 872) or placebo (n = 845). EQ-5D index and EQ-5D visual analogue scale (EQ-5D VAS) scores were evaluated at baseline, week 13, and every 12 weeks until week 61 due to sample size reduction thereafter. Changes on individual dimensions were assessed, and Paretian Classification of Health Change (PCHC) and time-to-event analyses were conducted. With enzalutamide, EQ-5D index and EQ-5D VAS scores declined more slowly versus placebo and time to diverge from full health was prolonged. Average decline in EQ-5D index (-0.042 vs. -0.070; P < .0001) and EQ-5D VAS (-1.3 vs. -4.4; P < .0001) was significantly smaller with enzalutamide. There were significant (P < .05) between-group differences favoring enzalutamide in Pain/Discomfort to week 37, Anxiety/Depression at week 13, and Usual Activities at week 25, but no significant differences for Mobility and Self-care. The PCHC analysis showed more enzalutamide patients reporting improvement than placebo patients at weeks 13, 25, and 49 (all P < .05) and week 37 (P = .0512). Enzalutamide was superior (P ≤ .0003) to placebo for time to diverge from full health and time to first deterioration on Pain/Discomfort and Anxiety/Depression dimensions. This in-depth post hoc analysis showed that enzalutamide delayed HRQoL deterioration and had beneficial effects on several HRQoL domains, including Pain/Discomfort and the proportion of patients in full health, compared with placebo, and may help to support future analyses of this type. NCT01212991.

  5. Effectiveness and tolerability of second-line therapy with vildagliptin versus other oral agents in type 2 diabetes (EDGE): post-hoc subanalysis of the Belgian data.

    Science.gov (United States)

    Hoste, J; Daci, E; Mathieu, C

    2014-06-01

    To assess the efficacy and safety of vildagliptin versus other oral glucose-lowering drugs added to antidiabetic monotherapy in Belgian patients with type 2 diabetes mellitus, in comparison to the global EDGE study results. This is a pre-specified post-hoc subanalysis of the Belgian patient cohort from a worldwide 1-year observational study that compared the effectiveness and tolerability of vildagliptin to other oral antidiabetic agents in type 2 diabetes patients failing monotherapy with oral glucose-lowering agents (EDGE). A total of 1793 Belgian patients were enrolled. Physicians could add any oral antidiabetic drug and patients entered either into the vildagliptin or the comparator cohort. The primary effectiveness and tolerability endpoint was defined as the proportion of patients having a treatment response (HbA1c reduction from baseline to month 12 endpoint >0·3%) without hypoglycemia, weight gain, peripheral oedema, or gastrointestinal side-effects. In the Belgian population, 37·8% of patients in the vildagliptin group and 32·8% in the comparator group had a decrease in HbA1c of >0·3% without the predefined tolerability issues of hypoglycemia, weight gain, oedema, or gastrointestinal complaints (primary endpoint), resulting in an unadjusted odds ratio of 1·24 (95% CI: 0·96-1·61). Mean HbA1c change from baseline was -0·81% in the vildagliptin cohort and -0·75% in the comparator cohort. Overall, vildagliptin was well tolerated, with similarly low incidences of total adverse events (14·9% versus 14·5% in the comparator group) and serious adverse events (2·7% versus 2·5% in the comparator group). In this EDGE subgroup of Belgian patients with type 2 diabetes who do not achieve the glycemic targets with monotherapy, a similar trend as in the global EDGE study was observed. Adding vildagliptin as a second oral glucose-lowering agent resulted in lowering HbA1c to <7% without weight gain, hypoglycemia or peripheral oedema in a higher proportion of

  6. Impact of time of initiation of once-monthly paliperidone palmitate in hospitalized Asian patients with acute exacerbation of schizophrenia: a post hoc analysis from the PREVAIL study

    Directory of Open Access Journals (Sweden)

    Li H

    2018-04-01

    Full Text Available Huafang Li,1,2 Yan Li,1,2 Yu Feng,3 Jianmin Zhuo,4 Ibrahim Turkoz,5 Maju Mathews,5 Wilson Tan3 1Shanghai Mental Health Centre, Shanghai Jiao Tong University School of Medicine, Shanghai, China; 2Shanghai Key Laboratory of Psychotic Disorders, Shanghai, China; 3Janssen Pharmaceutical Companies of Johnson and Johnson, Singapore; 4Janssen China Research and Development, Shanghai, China; 5Janssen Research & Development LLC, Titusville, NJ, USA Purpose: To evaluate the differences in efficacy and safety outcomes in acute exacerbating schizophrenia patients between 2 subgroups (≤1 week and >1 week) differing in time interval from hospitalization to time of initiation of once-monthly paliperidone palmitate. Patients and methods: PREVAIL was a multicenter, single-arm, open-label, prospective Phase IV study in hospitalized Asian patients (either sex, aged 18–65 years) diagnosed with schizophrenia (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition). Change from baseline to week 13 in the primary endpoint (Positive and Negative Syndrome Scale [PANSS] total score), secondary endpoints (PANSS responder rate, PANSS subscale, PANSS Marder factor, Clinical Global Impression-Severity, and Personal and Social Performance scale scores), the readiness for hospital discharge questionnaire, and safety were assessed in this post hoc analysis. Results: Mean reductions from baseline to week 13 in the PANSS total score, 30% PANSS responder rates (P≤0.01), PANSS subscales (positive and general psychopathology; all P≤0.01), PANSS Marder factors (positive symptoms, uncontrolled hostility/excitement, and anxiety/depression; all P≤0.01), Personal and Social Performance scale scores (P≤0.05) and Clinical Global Impression-Severity categorical summary (P≤0.05) were significantly greater in the ≤1 week subgroup versus the >1 week subgroup. The readiness for hospital discharge questionnaire improved over time for the overall study population, but

  7. Enzalutamide in Japanese patients with chemotherapy-naïve, metastatic castration-resistant prostate cancer: A post-hoc analysis of the placebo-controlled PREVAIL trial.

    Science.gov (United States)

    Kimura, Go; Yonese, Junji; Fukagai, Takashi; Kamba, Tomomi; Nishimura, Kazuo; Nozawa, Masahiro; Mansbach, Hank; Theeuwes, Ad; Beer, Tomasz M; Tombal, Bertrand; Ueda, Takeshi

    2016-05-01

    To evaluate the treatment effects, safety and pharmacokinetics of enzalutamide in Japanese patients. This was a post-hoc analysis of the phase 3, double-blind, placebo-controlled PREVAIL trial. Asymptomatic or mildly symptomatic chemotherapy-naïve patients with metastatic castration-resistant prostate cancer progressing on androgen deprivation therapy were randomized one-to-one to 160 mg/day oral enzalutamide or placebo until discontinuation on radiographic progression or skeletal-related event and initiation of subsequent antineoplastic therapy. Coprimary end-points were centrally assessed radiographic progression-free survival and overall survival. Secondary end-points were investigator-assessed radiographic progression-free survival, time to initiation of chemotherapy, time to prostate-specific antigen progression, prostate-specific antigen response (≥50% decline) and time to skeletal-related event. Of 1717 patients, 61 were enrolled in Japan (enzalutamide, n = 28; placebo, n = 33); hazard ratios (95% confidence interval) of 0.30 for centrally assessed radiographic progression-free survival (0.03-2.95), 0.59 for overall survival (0.20-1.8), 0.46 for time to chemotherapy (0.22-0.96) and 0.36 for time to prostate-specific antigen progression (0.17-0.75) showed the treatment benefit of enzalutamide over the placebo. Prostate-specific antigen responses were observed in 60.7% of enzalutamide-treated men versus 21.2% of placebo-treated men. Plasma concentrations of enzalutamide were higher in Japanese patients: the geometric mean ratio of Japanese/non-Japanese patients was 1.126 (90% confidence interval 1.018-1.245) at 13 weeks. Treatment-related adverse events grade ≥3 occurred in 3.6% of enzalutamide- and 6.1% of placebo-treated Japanese patients. Treatment effects and safety in Japanese patients were generally consistent with the overall results from PREVAIL. © 2016 The Authors. International Journal of Urology published by John Wiley & Sons Australia, Ltd on

  8. Effect of fenofibrate on uric acid and gout in type 2 diabetes: a post-hoc analysis of the randomised, controlled FIELD study.

    Science.gov (United States)

    Waldman, Boris; Ansquer, Jean-Claude; Sullivan, David R; Jenkins, Alicia J; McGill, Neil; Buizen, Luke; Davis, Timothy M E; Best, James D; Li, Liping; Feher, Michael D; Foucher, Christelle; Kesaniemi, Y Antero; Flack, Jeffrey; d'Emden, Michael C; Scott, Russell S; Hedley, John; Gebski, Val; Keech, Anthony C

    2018-04-01

    Gout is a painful disorder and is common in type 2 diabetes. Fenofibrate lowers uric acid and reduces gout attacks in small, short-term studies. Whether fenofibrate produces sustained reductions in uric acid and gout attacks is unknown. In the Fenofibrate Intervention and Event Lowering in Diabetes (FIELD) trial, participants aged 50-75 years with type 2 diabetes were randomly assigned to receive either co-micronised fenofibrate 200 mg once per day or matching placebo for a median of 5 years of follow-up. We did a post-hoc analysis of recorded on-study gout attacks and plasma uric acid concentrations according to treatment allocation. The outcomes of this analysis were change in uric acid concentrations and risk of on-study gout attacks. The FIELD study is registered with ISRCTN, number ISRCTN64783481. Between Feb 23, 1998, and Nov 3, 2000, 9795 patients were randomly assigned to fenofibrate (n=4895) or placebo (n=4900) in the FIELD study. Uric acid concentrations fell by 20·2% (95% CI 19·9-20·5) during the 6-week active fenofibrate run-in period immediately pre-randomisation (a reduction of 0·06 mmol/L or 1 mg/dL) and remained 20·1% (18·5-21·7) lower than with placebo during follow-up. On-study gout attacks occurred in 13·9% of placebo-group participants with baseline uric acid concentrations higher than 0·42 mmol/L; the corresponding rates in the fenofibrate group were 3·4% (baseline higher than 0·36 mmol/L) and 5·7% (baseline higher than 0·42 mmol/L). Risk reductions were similar among men and women and those with dyslipidaemia, on diuretics, and with elevated uric acid concentrations. For participants with elevated baseline uric acid concentrations despite taking allopurinol at study entry, there was no heterogeneity of the treatment effect of fenofibrate on gout risk. Taking account of all gout events, fenofibrate treatment halved the risk (HR 0·48, 95% CI 0·37-0·60). Overall, fenofibrate lowered uric acid concentrations by 20% and almost halved first on-study gout events over 5 years of treatment. Fenofibrate could be a useful adjunct for preventing gout in diabetes.

  9. Treatment with macrolides and glucocorticosteroids in severe community-acquired pneumonia: A post-hoc exploratory analysis of a randomized controlled trial.

    Directory of Open Access Journals (Sweden)

    Adrian Ceccato

    Full Text Available Systemic corticosteroids have anti-inflammatory effects, whereas macrolides also have immunomodulatory activity in addition to their primary antimicrobial actions. We aimed to evaluate the potential interaction effect between corticosteroids and macrolides on the systemic inflammatory response in patients with severe community-acquired pneumonia to determine if combining these two immunomodulating agents was harmful, or possibly beneficial. We performed a post-hoc exploratory analysis of a randomized clinical trial conducted in three tertiary hospitals in Spain. This trial included patients with severe community-acquired pneumonia with a high inflammatory response (C-reactive protein [CRP] >15 mg/dL) who were randomized to receive methylprednisolone 0.5 mg/kg twice per day or placebo. The choice of antibiotic treatment was at the physician's discretion. One hundred and six patients were classified into four groups according to antimicrobial therapy combination (β-lactam plus macrolide or β-lactam plus fluoroquinolone) and corticosteroid arm (placebo or corticosteroids). The primary outcome was treatment failure (a composite of early treatment failure, late treatment failure, or both). The methylprednisolone with β-lactam plus macrolide group had more elderly patients, more comorbidities, and a higher proportion of pneumonia severity index (PSI) risk class V, but a lower proportion of intensive care unit admission, compared to the other groups. We found no differences in overall treatment failure between groups (p = 0.374); however, a significant difference in late treatment failure was observed (4 patients [31%] in the placebo with β-lactam plus macrolide group vs. 9 patients [24%] in the placebo with β-lactam plus fluoroquinolone group vs. 0 patients [0%] in the methylprednisolone with β-lactam plus macrolide group vs. 2 patients [5%] in the methylprednisolone with β-lactam plus fluoroquinolone group; overall p = 0.009). We found

  10. Indomethacin reduces glomerular and tubular damage markers but not renal inflammation in chronic kidney disease patients: a post-hoc analysis.

    Directory of Open Access Journals (Sweden)

    Martin H de Borst

    Full Text Available Under specific conditions non-steroidal anti-inflammatory drugs (NSAIDs) may be used to lower therapy-resistant proteinuria. The potentially beneficial anti-proteinuric, tubulo-protective, and anti-inflammatory effects of NSAIDs may be offset by an increased risk of (renal) side effects. We investigated the effect of indomethacin on urinary markers of glomerular and tubular damage and renal inflammation. We performed a post-hoc analysis of a prospective open-label crossover study in chronic kidney disease patients (n = 12) with mild renal function impairment and stable residual proteinuria of 4.7±4.1 g/d. After a wash-out period of six wks without any RAAS blocking agents or other therapy to lower proteinuria (untreated proteinuria, UP), patients subsequently received indomethacin 75 mg BID for 4 wks (NSAID). Healthy subjects (n = 10) screened for kidney donation served as controls. Urine and plasma levels of total IgG, IgG4, KIM-1, beta-2-microglobulin, H-FABP, MCP-1 and NGAL were determined using ELISA. Following NSAID treatment, 24-h urinary excretion of glomerular and proximal tubular damage markers was reduced in comparison with the period without anti-proteinuric treatment (total IgG: UP 131[38-513] vs NSAID 38[17-218] mg/24 h, p<0.01; IgG4: 50[16-68] vs 10[1-38] mg/24 h, p<0.001; beta-2-microglobulin: 200[55-404] vs 50[28-110] ug/24 h, p = 0.03; KIM-1: 9[5-14] vs 5[2-9] ug/24 h, p = 0.01). Fractional excretions of these damage markers were also reduced by NSAID. The distal tubular marker H-FABP showed a trend to reduction following NSAID treatment. Surprisingly, NSAID treatment did not reduce urinary excretion of the inflammation markers MCP-1 and NGAL, but did reduce plasma MCP-1 levels, resulting in an increased fractional MCP-1 excretion. In conclusion, the anti-proteinuric effect of indomethacin is associated with reduced urinary excretion of glomerular and tubular damage markers, but not with reduced excretion of renal

  11. Bayesian uncertainty analyses of probabilistic risk models

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1989-01-01

    Applications of Bayesian principles to the uncertainty analyses are discussed in the paper. A short review of the most important uncertainties and their causes is provided. An application of the principle of maximum entropy to the determination of Bayesian prior distributions is described. An approach based on so called probabilistic structures is presented in order to develop a method of quantitative evaluation of modelling uncertainties. The method is applied to a small example case. Ideas for application areas for the proposed method are discussed
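
    As a concrete illustration of the maximum-entropy idea mentioned in the record: if only the mean of a positive quantity such as a failure rate is fixed, the maximum-entropy prior on (0, ∞) is the exponential distribution with that mean, which combines with Poisson count data into a Gamma posterior. The Python sketch below is a generic textbook illustration under these assumptions, not the method actually implemented in the paper.

        import numpy as np

        # Suppose expert judgement fixes only the mean failure rate (per year).
        mean_rate = 1.0e-3

        # The maximum-entropy distribution on (0, inf) with a fixed mean is the
        # exponential distribution with that mean (= Gamma with shape 1).
        rng = np.random.default_rng(0)
        prior_samples = rng.exponential(scale=mean_rate, size=100_000)

        # With a Gamma(1, 1/mean_rate) prior and Poisson count data, the posterior
        # for the failure rate is again Gamma (conjugate update).
        events, exposure_years = 2, 1500.0
        posterior_shape = 1 + events
        posterior_rate = 1.0 / mean_rate + exposure_years
        posterior_mean = posterior_shape / posterior_rate
        print(f"prior mean ~ {prior_samples.mean():.2e}, posterior mean = {posterior_mean:.2e}")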

  12. YALINA Booster subcritical assembly modeling and analyses

    International Nuclear Information System (INIS)

    Talamo, A.; Gohar, Y.; Aliberti, G.; Cao, Y.; Zhong, Z.; Kiyavitskaya, H.; Bournos, V.; Fokov, Y.; Routkovskaya, C.; Sadovich, S.

    2010-01-01

    Full text: Accurate simulation models of the YALINA Booster assembly of the Joint Institute for Power and Nuclear Research (JIPNR)-Sosny, Belarus have been developed by Argonne National Laboratory (ANL) of the USA. YALINA-Booster has coupled zones operating with fast and thermal neutron spectra, which requires a special attention in the modelling process. Three different uranium enrichments of 90%, 36% or 21% were used in the fast zone and 10% uranium enrichment was used in the thermal zone. Two of the most advanced Monte Carlo computer programs have been utilized for the ANL analyses: MCNP of the Los Alamos National Laboratory and MONK of the British Nuclear Fuel Limited and SERCO Assurance. The developed geometrical models for both computer programs modelled all the details of the YALINA Booster facility as described in the technical specifications defined in the International Atomic Energy Agency (IAEA) report without any geometrical approximation or material homogenization. Materials impurities and the measured material densities have been used in the models. The obtained results for the neutron multiplication factors calculated in criticality mode (keff) and in source mode (ksrc) with an external neutron source from the two Monte Carlo programs are very similar. Different external neutron sources have been investigated including californium, deuterium-deuterium (D-D), and deuterium-tritium (D-T) neutron sources. The spatial neutron flux profiles and the neutron spectra in the experimental channels were calculated. In addition, the kinetic parameters were defined including the effective delayed neutron fraction, the prompt neutron lifetime, and the neutron generation time. A new calculation methodology has been developed at ANL to simulate the pulsed neutron source experiments. In this methodology, the MCNP code is used to simulate the detector response from a single pulse of the external neutron source and a C code is used to superimpose the pulse until the
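
    The superposition step described at the end of the record - building the detector response to a periodic pulse train from the simulated response to a single source pulse - can be sketched generically in Python. The pulse period, die-away constant and response shape below are illustrative assumptions, not YALINA parameters or the ANL implementation.

        import numpy as np

        # Detector response to ONE source pulse (e.g. from a Monte Carlo run),
        # here idealised as a prompt spike followed by exponential die-away.
        dt = 1.0e-6                          # time bin width (s)
        t = np.arange(0.0, 5.0e-3, dt)       # 5 ms of response per pulse
        single_pulse = np.exp(-t / 2.0e-4)   # assumed die-away time constant: 200 us

        period = 1.0e-3                      # assumed pulse repetition period (1 kHz)
        n_pulses = 50
        shift = int(round(period / dt))

        # Superimpose time-shifted copies of the single-pulse response.
        train = np.zeros(len(t) + shift * (n_pulses - 1))
        for k in range(n_pulses):
            train[k * shift : k * shift + len(t)] += single_pulse

        # After several pulses the response between pulses approaches equilibrium.
        print("peak after 1st pulse :", train[:shift].max())
        print("peak after last pulse:", train[(n_pulses - 1) * shift:].max())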

  13. Post hoc analysis of Japanese patients from the placebo-controlled PREVAIL trial of enzalutamide in patients with chemotherapy-naive, metastatic castration-resistant prostate cancer-updated results.

    Science.gov (United States)

    Kimura, Go; Ueda, Takeshi

    2017-03-01

    A post hoc analysis of interim results from PREVAIL, a Phase III, double-blind, placebo-controlled trial of men with metastatic castration-resistant prostate cancer, demonstrated that the treatment effects, safety and pharmacokinetics of enzalutamide in Japanese patients were generally consistent with those of the overall population. A recent longer term analysis of PREVAIL demonstrated continued benefit of enzalutamide treatment over placebo. Here, we report results from a post hoc analysis of Japanese patients enrolled in PREVAIL at the prespecified number of deaths for the final analysis. In Japanese patients, enzalutamide reduced the risk of death by 35% (hazard ratio, 0.65; 95% confidence interval, 0.28-1.51) and the risk of investigator-assessed radiographic progression or death by 60% (hazard ratio, 0.40; 95% confidence interval, 0.18-0.90). These results show that treatment effects and safety in Japanese patients in the final analysis of PREVAIL continued to be generally consistent with those of the overall population. © The Author 2016. Published by Oxford University Press.

  14. Post hoc analysis of Japanese patients from the placebo-controlled PREVAIL trial of enzalutamide in patients with chemotherapy-naive, metastatic castration-resistant prostate cancer—updated results

    Science.gov (United States)

    Ueda, Takeshi

    2017-01-01

    Abstract A post hoc analysis of interim results from PREVAIL, a Phase III, double-blind, placebo-controlled trial of men with metastatic castration-resistant prostate cancer, demonstrated that the treatment effects, safety and pharmacokinetics of enzalutamide in Japanese patients were generally consistent with those of the overall population. A recent longer term analysis of PREVAIL demonstrated continued benefit of enzalutamide treatment over placebo. Here, we report results from a post hoc analysis of Japanese patients enrolled in PREVAIL at the prespecified number of deaths for the final analysis. In Japanese patients, enzalutamide reduced the risk of death by 35% (hazard ratio, 0.65; 95% confidence interval, 0.28–1.51) and the risk of investigator-assessed radiographic progression or death by 60% (hazard ratio, 0.40; 95% confidence interval, 0.18–0.90). These results show that treatment effects and safety in Japanese patients in the final analysis of PREVAIL continued to be generally consistent with those of the overall population. PMID:28003320

  15. Analysing Feature Model Changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large-scale, highly variable system is a challenging task. For such a system, evolution operations often require consistent updates to both the implementation and the feature model. In this context, the evolution of the feature model closely follows the evolution of the system.

  16. Modelling and Analyses of Embedded Systems Design

    DEFF Research Database (Denmark)

    Brekling, Aske Wiid

    We present the MoVES language: a language with which embedded systems can be specified at a stage in the development process where an application is identified and should be mapped to an execution platform (potentially multi-core). We give a formal model for MoVES that captures and gives semantics to the elements of specifications in the MoVES language. We show that even for seemingly simple systems, the complexity of verifying real-time constraints can be overwhelming - but we give an upper limit to the size of the search-space that needs examining. Furthermore, the formal model exposes... ...-based verification is a promising approach for assisting developers of embedded systems. We provide examples of system verifications that, in size and complexity, point in the direction of industrially-interesting systems.

  17. Applications of Historical Analyses in Combat Modelling

    Science.gov (United States)

    2011-12-01

    causes of those results [2]. Models can be classified into three descriptive types [8], according to the degree of abstraction required: iconic, ... (appendix derivation involving Equations A3, A5 and A6 omitted; the equations did not survive text extraction).

  18. Radiobiological analyse based on cell cluster models

    International Nuclear Information System (INIS)

    Lin Hui; Jing Jia; Meng Damin; Xu Yuanying; Xu Liangfeng

    2010-01-01

    The influence of cell cluster dimension on EUD and TCP for targeted radionuclide therapy was studied using radiobiological methods. The radiobiological features of a tumor lacking activity in its core were evaluated and analyzed by relating EUD, TCP and SF. The results show that EUD increases with tumor dimension under a homogeneous activity distribution; if extra-cellular activity is taken into consideration, EUD increases by 47%. With activity lacking in the tumor center and a requirement of TCP = 0.90, the α cross-fire effect of 211At could compensate for an activity-lack region of at most (48 μm)3 for the Nucleus source, but (72 μm)3 for the Cytoplasm, Cell Surface, Cell and Voxel sources. In the clinic, the physician could prefer the suggested dose of the Cell Surface source to guard against failure of local tumor control due to under-dosing. In general, TCP clearly shows the difference in effect between under-dose and due-dose, but not between due-dose and over-dose, which makes TCP more suitable for choosing a therapy plan. EUD clearly shows the differences between models and activity distributions, which makes it more suitable for research work. When EUD is used to study the influence of an inhomogeneous activity distribution, the configuration and volume of the models being compared should be kept consistent. (authors)
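
    For orientation, the quantities discussed in this record are linked by standard textbook formulas: the surviving fraction SF follows the linear-quadratic model, TCP is commonly taken as the Poisson probability that no clonogen survives, and EUD is the uniform dose yielding the same mean survival as the actual dose distribution. The Python sketch below uses these generic forms with made-up parameter values; it is not the cell-cluster model of the paper.

        import numpy as np

        # Illustrative linear-quadratic parameters and clonogen number (assumptions).
        alpha, beta = 0.35, 0.035      # Gy^-1, Gy^-2
        n_clonogens = 1.0e6

        def surviving_fraction(dose):
            """LQ model: SF = exp(-(alpha*D + beta*D^2))."""
            return np.exp(-(alpha * dose + beta * dose ** 2))

        def tcp(doses):
            """Poisson TCP: probability that no clonogen survives."""
            expected_survivors = n_clonogens * np.mean(surviving_fraction(doses))
            return np.exp(-expected_survivors)

        def eud(doses):
            """Uniform dose giving the same mean cell kill (alpha-only form)."""
            return -np.log(np.mean(np.exp(-alpha * doses))) / alpha

        uniform = np.full(1000, 60.0)                              # homogeneous 60 Gy
        cold_core = np.where(np.arange(1000) < 100, 10.0, 60.0)    # 10% of voxels under-dosed
        for name, d in [("uniform", uniform), ("cold core", cold_core)]:
            print(f"{name:9s} EUD = {eud(d):5.1f} Gy, TCP = {tcp(d):.3f}")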

  19. Clinical response to eliglustat in treatment-naïve patients with Gaucher disease type 1: Post-hoc comparison to imiglucerase-treated patients enrolled in the International Collaborative Gaucher Group Gaucher Registry

    Directory of Open Access Journals (Sweden)

    Jennifer Ibrahim

    2016-09-01

    Full Text Available Eliglustat is a recently approved oral therapy in the United States and Europe for adults with Gaucher disease type 1 who are CYP2D6 extensive, intermediate, or poor metabolizers (>90% of patients that has been shown to decrease spleen and liver volume and increase hemoglobin concentrations and platelet counts in untreated adults with Gaucher disease type 1 and maintain these parameters in patients previously stabilized on enzyme replacement therapy. In a post-hoc analysis, we compared the results of eliglustat treatment in treatment-naïve patients in two clinical studies with the results of imiglucerase treatment among a cohort of treatment-naïve patients with comparable baseline hematologic and visceral parameters in the International Collaborative Gaucher Group Gaucher Registry. Organ volumes and hematologic parameters improved from baseline in both treatment groups, with a time course and degree of improvement in eliglustat-treated patients similar to imiglucerase-treated patients.

  20. Clinical response to eliglustat in treatment-naïve patients with Gaucher disease type 1: Post-hoc comparison to imiglucerase-treated patients enrolled in the International Collaborative Gaucher Group Gaucher Registry.

    Science.gov (United States)

    Ibrahim, Jennifer; Underhill, Lisa H; Taylor, John S; Angell, Jennifer; Peterschmitt, M Judith

    2016-09-01

    Eliglustat is a recently approved oral therapy in the United States and Europe for adults with Gaucher disease type 1 who are CYP2D6 extensive, intermediate, or poor metabolizers (> 90% of patients) that has been shown to decrease spleen and liver volume and increase hemoglobin concentrations and platelet counts in untreated adults with Gaucher disease type 1 and maintain these parameters in patients previously stabilized on enzyme replacement therapy. In a post-hoc analysis, we compared the results of eliglustat treatment in treatment-naïve patients in two clinical studies with the results of imiglucerase treatment among a cohort of treatment-naïve patients with comparable baseline hematologic and visceral parameters in the International Collaborative Gaucher Group Gaucher Registry. Organ volumes and hematologic parameters improved from baseline in both treatment groups, with a time course and degree of improvement in eliglustat-treated patients similar to imiglucerase-treated patients.

  1. Effect of maternal death reviews and training on maternal mortality among cesarean delivery: post-hoc analysis of a cluster-randomized controlled trial.

    Science.gov (United States)

    Zongo, Augustin; Dumont, Alexandre; Fournier, Pierre; Traore, Mamadou; Kouanda, Séni; Sondo, Blaise

    2015-02-01

    To explore the differential effect of a multifaceted intervention on hospital-based maternal mortality between patients with cesarean and vaginal delivery in low-resource settings. We reanalyzed the data from a major cluster-randomized controlled trial, QUARITE (Quality of care, Risk management and technology in obstetrics). These subgroup analyses were not pre-specified and were treated as exploratory. The intervention consisted of an initial interactive workshop and quarterly educational clinically oriented and evidence-based outreach visits focused on maternal death reviews (MDR) and best practices implementation. The trial originally recruited 191,167 patients who delivered in each of the 46 participating hospitals in Mali and Senegal, between 2007 and 2011. The primary endpoint was hospital-based maternal mortality. Subgroup-specific Odds Ratios (ORs) of maternal mortality were computed and tested for differential intervention effect using generalized linear mixed model between two subgroups (cesarean: 40,975; and vaginal delivery: 150,192). The test for homogeneity of intervention effects on hospital-based maternal mortality among the two delivery mode subgroups was statistically significant (p-value: 0.0201). Compared to the control, the adjusted OR of maternal mortality was 0.71 (95% CI: 0.58-0.82, p=0.0034) among women with cesarean delivery. The intervention had no significant effect among women with vaginal delivery (adjusted OR 0.87, 95% CI 0.69-1.11, p=0.6213). This differential effect was particularly marked for district hospitals. Maternal deaths reviews and on-site training on emergency obstetric care may be more effective in reducing maternal mortality among high-risk women who need a cesarean section than among low-risk women with vaginal delivery. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  2. Exploring the variation in implementation of a COPD disease management programme and its impact on health outcomes: a post hoc analysis of the RECODE cluster randomised trial.

    Science.gov (United States)

    Boland, Melinde R S; Kruis, Annemarije L; Huygens, Simone A; Tsiachristas, Apostolos; Assendelft, Willem J J; Gussekloo, Jacobijn; Blom, Coert M G; Chavannes, Niels H; Rutten-van Mölken, Maureen P M H

    2015-12-17

    This study aims to (1) examine the variation in implementation of a 2-year chronic obstructive pulmonary disease (COPD) management programme called RECODE, (2) analyse the facilitators and barriers to implementation and (3) investigate the influence of this variation on health outcomes. Implementation variation among the 20 primary-care teams was measured directly using a self-developed scale and indirectly through the level of care integration as measured with the Patient Assessment of Chronic Illness Care (PACIC) and the Assessment of Chronic Illness Care (ACIC). Interviews were held to obtain detailed information regarding the facilitators and barriers to implementation. Multilevel models were used to investigate the association between variation in implementation and change in outcomes. The teams implemented, on average, eight of the 19 interventions, and the specific package of interventions varied widely. Important barriers and facilitators of implementation were (in)sufficient motivation of healthcare provider and patient, the high starting level of COPD care, the small size of the COPD population per team, the mild COPD population, practicalities of the information and communication technology (ICT) system, and hurdles in reimbursement. Level of implementation as measured with our own scale and the ACIC was not associated with health outcomes. A higher level of implementation measured with the PACIC was positively associated with improved self-management capabilities, but this association was not found for other outcomes. There was a wide variety in the implementation of RECODE, associated with barriers at individual, social, organisational and societal level. There was little association between extent of implementation and health outcomes.
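
    The final analytic step described here - relating a team-level implementation score to patient-level change in outcomes while accounting for clustering within teams - is typically fitted as a random-intercept multilevel model. The Python sketch below shows one way to do this with statsmodels on simulated data; the variable names and numbers are assumptions for illustration and do not reproduce the RECODE dataset.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)

        # Simulate 20 primary-care teams, each with an implementation score (0-19
        # interventions) and 30 patients whose outcome change is recorded.
        records = []
        for team in range(20):
            implementation = rng.integers(0, 20)
            team_effect = rng.normal(0, 0.3)
            for _ in range(30):
                outcome_change = 0.02 * implementation + team_effect + rng.normal(0, 1)
                records.append({"team": team,
                                "implementation": implementation,
                                "outcome_change": outcome_change})
        data = pd.DataFrame(records)

        # Random intercept per team, fixed effect of implementation level.
        model = smf.mixedlm("outcome_change ~ implementation", data, groups=data["team"])
        print(model.fit().summary())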

  3. Exploring the variation in implementation of a COPD disease management programme and its impact on health outcomes: a post hoc analysis of the RECODE cluster randomised trial

    Science.gov (United States)

    Boland, Melinde R S; Kruis, Annemarije L; Huygens, Simone A; Tsiachristas, Apostolos; Assendelft, Willem J J; Gussekloo, Jacobijn; Blom, Coert M G; Chavannes, Niels H; Rutten-van Mölken, Maureen P M H

    2015-01-01

    This study aims to (1) examine the variation in implementation of a 2-year chronic obstructive pulmonary disease (COPD) management programme called RECODE, (2) analyse the facilitators and barriers to implementation and (3) investigate the influence of this variation on health outcomes. Implementation variation among the 20 primary-care teams was measured directly using a self-developed scale and indirectly through the level of care integration as measured with the Patient Assessment of Chronic Illness Care (PACIC) and the Assessment of Chronic Illness Care (ACIC). Interviews were held to obtain detailed information regarding the facilitators and barriers to implementation. Multilevel models were used to investigate the association between variation in implementation and change in outcomes. The teams implemented, on average, eight of the 19 interventions, and the specific package of interventions varied widely. Important barriers and facilitators of implementation were (in)sufficient motivation of healthcare provider and patient, the high starting level of COPD care, the small size of the COPD population per team, the mild COPD population, practicalities of the information and communication technology (ICT) system, and hurdles in reimbursement. Level of implementation as measured with our own scale and the ACIC was not associated with health outcomes. A higher level of implementation measured with the PACIC was positively associated with improved self-management capabilities, but this association was not found for other outcomes. There was a wide variety in the implementation of RECODE, associated with barriers at individual, social, organisational and societal level. There was little association between extent of implementation and health outcomes. PMID:26677770

  4. Delphi consensus on the diagnosis and management of dyslipidaemia in chronic kidney disease patients: A post hoc analysis of the DIANA study

    Directory of Open Access Journals (Sweden)

    Aleix Cases Amenós

    2016-11-01

    Conclusions: The consensus to analyse the lipid profile in CKD patients suggests acknowledgement of the high cardiovascular risk of this condition. However, the lack of consensus in considering renal function or albuminuria, both when selecting a statin and during follow-up, suggests a limited knowledge of the differences between statins in relation to CKD. Thus, it would be advisable to develop a guideline/consensus document on the use of statins in CKD.

  5. The association of the effect of lithium in the maintenance treatment of bipolar disorder with lithium plasma levels : a post hoc analysis of a double-blind study comparing switching to lithium or placebo in patients who responded to quetiapine (Trial 144)

    NARCIS (Netherlands)

    Nolen, Willem A.; Weisler, Richard H.

    Nolen WA, Weisler RH. The association of the effect of lithium in the maintenance treatment of bipolar disorder with lithium plasma levels: a post hoc analysis of a double-blind study comparing switching to lithium or placebo in patients who responded to quetiapine (Trial 144). Bipolar Disord 2012:

  6. Does the effect of one-day simulation team training in obstetric emergencies decline within one year? A post-hoc analysis of a multicentre cluster randomised controlled trial.

    Science.gov (United States)

    van de Ven, J; Fransen, A F; Schuit, E; van Runnard Heimel, P J; Mol, B W; Oei, S G

    2017-09-01

    OBJECTIVE: To investigate whether the effect of a one-day simulation-based obstetric team training on patient outcome changes over time. Post-hoc analysis of a multicentre, open, randomised controlled trial that evaluated team training in obstetrics (TOSTI study). We studied women with a singleton pregnancy beyond 24 weeks of gestation in 24 obstetric units. Included obstetric units were randomised to either a one-day, multi-professional, simulation-based team training focusing on crew resource management in a medical simulation centre (12 units) or to no team training (12 units). We assessed whether outcomes differed between both groups in each of the first four quarters following the team training and compared the effect of team training over quarters. The primary outcome was a composite of low Apgar score, severe postpartum haemorrhage, trauma due to shoulder dystocia, eclampsia and hypoxic-ischemic encephalopathy. During a one-year period after the team training, the rate of obstetric complications, both at the composite level and at the individual component level, did not differ between any of the quarters. For trauma due to shoulder dystocia, team training led to a significant decrease in the first quarter (0.06% versus 0.26%, OR 0.19, 95% CI 0.03 to 0.98), but in the subsequent quarters no significant reductions were observed. Similar results were found for invasive treatment for severe postpartum haemorrhage, where a significant increase was only seen in the first quarter (0.4% versus 0.03%, OR 19, 95% CI 2.5-147), and not thereafter. The beneficial effect of a one-day, simulation-based, multiprofessional, obstetric team training seems to decline after three months. If team training is further evaluated or

  7. Post Hoc Analysis of Potential Predictors of Response to Atomoxetine for the Treatment of Adults with Attention-Deficit/Hyperactivity Disorder using an Integrated Database.

    Science.gov (United States)

    Bushe, Chris; Sobanski, Esther; Coghill, David; Berggren, Lovisa; De Bruyckere, Katrien; Leppämäki, Sami

    2016-04-01

    Responses to atomoxetine vary for individual patients with attention-deficit/hyperactivity disorder (ADHD). However, we do not know whether any factors can be used to reliably predict how individuals with ADHD will respond to treatment. Our objective was to evaluate background variables that facilitate early identification of those adults with ADHD who are likely to respond to treatment with atomoxetine. We pooled data for atomoxetine-treated adults with ADHD from 12 clinical trials for a short-term (10-week) analysis, and from 11 clinical trials for a long-term (24-week) analysis. Patients not meeting a response definition [≥30 % reduction in Conners' Adult ADHD Rating Scales-Investigator Rated: Screening Version (CAARS-Inv:SV) total score and Clinical Global Impressions of ADHD Severity Scale (CGI-S) score ≤3 at endpoint], or who discontinued, were defined as non-responders. Another definition of response (≥30 % reduction in CAARS-Inv:SV total score at endpoint) was also used in these analyses; only the results with the former definition are shown in this abstract, as the same conclusions were gained with both definitions. A treatment-specified subgroup detection tool (a resampling-based ensemble tree method) was used to identify predictors of response. Of 1945 adults in the long-term analysis, 548 (28.2 %) were responders to atomoxetine at week 24; 65.2 % of 1397 non-responders had discontinued. Of 4524 adults in the short-term analysis, 1490 (32.9 %) were responders at week 10; 33.2 % of 1006 non-responders had discontinued. No analyzed baseline parameters (age, sex, prior stimulant use, ADHD subtype, CAARS-Inv:SV, CGI-S) were statistically significant predictors of response. Reductions in CAARS-Inv:SV total, CAARS-Inv:SV subscores, and CGI-S at week 4 in the short-term analysis, and at weeks 4 or 10 in the long-term analysis, were statistically significant predictors of response, i.e., patients with versus without these reductions early in

  8. Healthy Aging 5 Years After a Period of Daily Supplementation With Antioxidant Nutrients: A Post Hoc Analysis of the French Randomized Trial SU.VI.MAX.

    Science.gov (United States)

    Assmann, Karen E; Andreeva, Valentina A; Jeandel, Claude; Hercberg, Serge; Galan, Pilar; Kesse-Guyot, Emmanuelle

    2015-10-15

    This study's objective was to investigate healthy aging in older French adults 5 years after a period of daily nutritional-dose supplementation with antioxidant nutrients. The study was based on the double-blind, randomized trial, Supplementation with Antioxidant Vitamins and Minerals (SU.VI.MAX) Study (1994-2002) and the SU.VI.MAX 2 Follow-up Study (2007-2009). During 1994-2002, participants received a daily combination of vitamin C (120 mg), β-carotene (6 mg), vitamin E (30 mg), selenium (100 µg), and zinc (20 mg) or placebo. Healthy aging was assessed in 2007-2009 by using multiple criteria, including the absence of major chronic disease and good physical and cognitive functioning. Data from a subsample of the SU.VI.MAX 2 cohort, initially free of major chronic disease, with a mean age of 65.3 years in 2007-2009 (n = 3,966), were used to calculate relative risks. Supplementation was associated with a greater healthy aging probability among men (relative risk = 1.16, 95% confidence interval: 1.04, 1.29) but not among women (relative risk = 0.98, 95% confidence interval: 0.86, 1.11) or all participants (relative risk = 1.07, 95% confidence interval: 0.99, 1.16). Moreover, exploratory subgroup analyses indicated effect modification by initial serum concentrations of zinc and vitamin C. In conclusion, an adequate supply of antioxidant nutrients (equivalent to quantities provided by a balanced diet rich in fruits and vegetables) may have a beneficial role for healthy aging. © The Author 2015. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  9. Efficacy of Cladribine Tablets in high disease activity subgroups of patients with relapsing multiple sclerosis: A post hoc analysis of the CLARITY study.

    Science.gov (United States)

    Giovannoni, Gavin; Soelberg Sorensen, Per; Cook, Stuart; Rammohan, Kottil W; Rieckmann, Peter; Comi, Giancarlo; Dangond, Fernando; Hicking, Christine; Vermersch, Patrick

    2018-04-01

    In the CLARITY (CLAdRIbine Tablets treating multiple sclerosis orallY) study, Cladribine Tablets significantly improved clinical and magnetic resonance imaging (MRI) outcomes (vs placebo) in patients with relapsing-remitting multiple sclerosis. Describe two clinically relevant definitions for patients with high disease activity (HDA) at baseline of the CLARITY study (utility verified in patients receiving placebo) and assess the treatment effects of Cladribine Tablets 3.5 mg/kg compared with the overall study population. Outcomes of patients randomised to Cladribine Tablets 3.5 mg/kg or placebo were analysed for subgroups using HDA definitions based on high relapse activity (HRA; patients with ⩾2 relapses during the year prior to study entry, whether on DMD treatment or not) or HRA plus disease activity on treatment (HRA + DAT; patients with ⩾2 relapses during the year prior to study entry, whether on DMD treatment or not, PLUS patients with ⩾1 relapse during the year prior to study entry while on therapy with other DMDs and ⩾1 T1 Gd+ or ⩾9 T2 lesions). In the overall population, Cladribine Tablets 3.5 mg/kg reduced the risk of 6-month-confirmed Expanded Disability Status Scale (EDSS) worsening by 47% vs placebo. A risk reduction of 82% vs placebo was seen in both the HRA and HRA + DAT subgroups (vs 19% for non-HRA and 18% for non-HRA + DAT), indicating greater responsiveness to Cladribine Tablets 3.5 mg/kg in patients with HDA. There were consistent results for other efficacy endpoints. The safety profile in HDA patients was consistent with the overall CLARITY population. Patients with HDA showed clinical and MRI responses to Cladribine Tablets 3.5 mg/kg that were generally better than, or at least comparable with, the outcomes seen in the overall CLARITY population.

  10. VIPRE modeling of VVER-1000 reactor core for DNB analyses

    Energy Technology Data Exchange (ETDEWEB)

    Sung, Y.; Nguyen, Q. [Westinghouse Electric Corporation, Pittsburgh, PA (United States); Cizek, J. [Nuclear Research Institute, Prague, (Czech Republic)

    1995-09-01

    Based on the one-pass modeling approach, the hot channels and the VVER-1000 reactor core can be modeled in 30 channels for DNB analyses using the VIPRE-01/MOD02 (VIPRE) code (VIPRE is owned by Electric Power Research Institute, Palo Alto, California). The VIPRE one-pass model does not compromise any accuracy in the hot channel local fluid conditions. Extensive qualifications include sensitivity studies of radial noding and crossflow parameters and comparisons with the results from THINC and CALOPEA subchannel codes. The qualifications confirm that the VIPRE code with the Westinghouse modeling method provides good computational performance and accuracy for VVER-1000 DNB analyses.

  11. Validation of previously reported predictors for radiation-induced hypothyroidism in nasopharyngeal cancer patients treated with intensity-modulated radiation therapy, a post hoc analysis from a Phase III randomized trial.

    Science.gov (United States)

    Lertbutsayanukul, Chawalit; Kitpanit, Sarin; Prayongrat, Anussara; Kannarunimit, Danita; Netsawang, Buntipa; Chakkabat, Chakkapong

    2018-05-10

    This study aimed to validate previously reported dosimetric parameters, including thyroid volume, mean dose, percentage of thyroid volume receiving at least 40, 45 and 50 Gy (V40, V45 and V50), absolute thyroid volume spared (VS) from 45, 50 and 60 Gy (VS45, VS50 and VS60), and clinical factors affecting the development of radiation-induced hypothyroidism (RHT). A post hoc analysis was performed in 178 euthyroid nasopharyngeal cancer (NPC) patients from a Phase III study comparing sequential versus simultaneous-integrated boost intensity-modulated radiation therapy. RHT was determined by increased thyroid-stimulating hormone (TSH) with or without reduced free thyroxin, regardless of symptoms. The median follow-up time was 42.5 months. The 1-, 2- and 3-year freedom from RHT rates were 78.4%, 56.4% and 43.4%, respectively. The median latency period was 21 months. The thyroid gland received a median mean dose of 53.5 Gy. Female gender, smaller thyroid volume, higher pretreatment TSH level (≥1.55 μU/ml) and VS60 were associated with the development of RHT and should be taken into account during treatment planning.

  12. Effect of aripiprazole 2 to 15 mg/d on health-related quality of life in the treatment of irritability associated with autistic disorder in children: a post hoc analysis of two controlled trials.

    Science.gov (United States)

    Varni, James W; Handen, Benjamin L; Corey-Lisle, Patricia K; Guo, Zhenchao; Manos, George; Ammerman, Diane K; Marcus, Ronald N; Owen, Randall; McQuade, Robert D; Carson, William H; Mathew, Suja; Mankoski, Raymond

    2012-04-01

    There are limited published data on the impact of treatment on the health-related quality of life (HRQOL) in individuals with autistic disorder. The aim of this study was to evaluate the impact of aripiprazole on HRQOL in the treatment of irritability in pediatric patients (aged 6-17 years) with autistic disorder. This post hoc analysis assessed data from two 8-week, double-blind, randomized, placebo-controlled studies that compared the efficacy of aripiprazole (fixed-dose study, 5, 10, and 15 mg/d; flexible-dose study, 2-15 mg/d) with placebo in the treatment of irritability associated with autistic disorder. HRQOL was assessed at baseline and week 8 using 3 Pediatric Quality of Life Inventory (PedsQL™) scales. Clinically relevant improvement in HRQOL was determined using an accepted distribution-based criterion (1 standard error of measurement). In total, 316 patients were randomly assigned to receive treatment with aripiprazole (fixed-dose study, 166; flexible-dose study, 47) or placebo (fixed-dose study, 52; flexible-dose study, 51). Aripiprazole was associated with significantly greater improvement than placebo in the PedsQL combined-scales total score (difference, 7.8; 95% CI, 3.8-11.8), suggesting improved HRQOL with aripiprazole in the treatment of irritability associated with autistic disorder. Copyright © 2012 Elsevier HS Journals, Inc. All rights reserved.

  13. Physicians' Experience with and Expectations of the Safety and Tolerability of WHO-Step III Opioids for Chronic (Low) Back Pain: Post Hoc Analysis of Data from a German Cross-Sectional Physician Survey

    Directory of Open Access Journals (Sweden)

    Michael A. Ueberall

    2015-01-01

    Full Text Available Objective. To describe physicians' daily life experience with WHO-step III opioids in the treatment of chronic (low) back pain (CLBP). Methods. Post hoc analysis of data from a cross-sectional online survey with 4,283 German physicians. Results. With a reported median use in 17% of affected patients, WHO-step III opioids play a minor role in the treatment of CLBP in daily practice and are associated with a broad spectrum of positive and negative effects. If prescribed, potent opioids were reported to show clinically relevant effects (such as ≥50% pain relief) in approximately 3 of 4 patients (median 72%). The analgesic effects reported are frequently accompanied by adverse events (AEs); only 20% of patients were reported to remain free of any AE. The most frequently reported AE was constipation (50%), which was also graded highest for AE-related daily life restrictions (median 46%). Specific AE countermeasures were reported to be necessary in approximately half of patients (median 45%); nevertheless, AE-related premature discontinuation rates were high (median 22%). Fentanyl and morphine were the most and least prevalently prescribed potent opioids mentioned (median 20% versus 8%). Conclusion. Overall, use of WHO-step III opioids for CLBP is low. AEs, especially constipation, are commonly reported and interfere significantly with analgesic effects in daily practice. Nevertheless, beneficial effects outweigh related AEs in most patients with CLBP.

  14. The relationship, structure and profiles of schizophrenia measurements: a post-hoc analysis of the baseline measures from a randomized clinical trial

    Directory of Open Access Journals (Sweden)

    Chen Lei

    2011-12-01

    Full Text Available Background To fully assess the various dimensions affected by schizophrenia, clinical trials often include multiple scales measuring various symptom profiles, cognition, quality of life, subjective well-being, and functional impairment. In this exploratory study, we characterized the relationships among six clinical, functional, cognitive, and quality-of-life measures, identifying a parsimonious set of measurements. Methods We used baseline data from a randomized, multicenter study of patients diagnosed with schizophrenia, schizoaffective disorder, or schizophreniform disorder who were experiencing an acute symptom exacerbation (n = 628) to examine the relationship among several outcome measures. These measures included the Positive and Negative Syndrome Scale (PANSS), Montgomery-Asberg Depression Rating Scale (MADRS), Brief Assessment of Cognition in Schizophrenia Symbol Coding Test, Subjective Well-being Under Neuroleptics Scale Short Form (SWN-K), Schizophrenia Objective Functioning Instrument (SOFI), and Quality of Life Scale (QLS). Three analytic approaches were used: (1) path analysis; (2) factor analysis; and (3) categorical latent variable analysis. In the optimal path model, the SWN-K was selected as the final outcome, while the SOFI mediated the effect of the exogenous variables (PANSS, MADRS) on the QLS. Results The overall model explained 47% of variance in QLS and 17% of the variance in SOFI, but only 15% in SWN-K. Factor analysis suggested four factors: "Functioning," "Daily Living," "Depression," and "Psychopathology." A strong positive correlation was observed between the SOFI and QLS (r = 0.669), and both the QLS and SOFI loaded on the "Functioning" factor, suggesting redundancy between these scales. The measurement profiles from the categorical latent variable analysis showed significant variation in functioning and quality of life despite similar levels of psychopathology. Conclusions Researchers should consider collecting PANSS, SOFI, and
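
    The factor-analytic step described above can be sketched in a few lines. The sketch below is illustrative only: the scores are random placeholders standing in for the six instruments, and scikit-learn's FactorAnalysis is used as a generic stand-in for the exploratory factor analysis actually reported.

      import numpy as np
      import pandas as pd
      from sklearn.decomposition import FactorAnalysis

      # Hypothetical baseline scores for the six instruments (one row per patient);
      # real scale scores would replace these random placeholders.
      rng = np.random.default_rng(11)
      scores = pd.DataFrame(
          rng.normal(size=(628, 6)),
          columns=["PANSS", "MADRS", "BACS_SC", "SWN_K", "SOFI", "QLS"],
      )

      # Exploratory factor analysis with four factors, mirroring the four-factor
      # solution ("Functioning", "Daily Living", "Depression", "Psychopathology")
      # reported in the abstract.
      fa = FactorAnalysis(n_components=4, random_state=0).fit(scores)

      # Loadings: how strongly each instrument loads on each latent factor; high
      # loadings of SOFI and QLS on the same factor would indicate the kind of
      # redundancy between the two functioning scales discussed above.
      loadings = pd.DataFrame(fa.components_.T, index=scores.columns,
                              columns=[f"factor_{i+1}" for i in range(4)])
      print(loadings.round(2))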

  15. Development of ITER 3D neutronics model and nuclear analyses

    International Nuclear Information System (INIS)

    Zeng, Q.; Zheng, S.; Lu, L.; Li, Y.; Ding, A.; Hu, H.; Wu, Y.

    2007-01-01

    ITER nuclear analyses rely on calculations with a three-dimensional (3D) Monte Carlo code, e.g. the widely used MCNP. However, continuous changes in the design of the components require that the 3D neutronics model used for nuclear analyses be updated. Modeling a complex geometry with MCNP by hand is a very time-consuming task, so an efficient way forward is to develop CAD-based interface codes for automatic conversion from CAD models to MCNP input files. Based on the latest CAD model and the available interface codes, two approaches to updating the 3D neutronics model have been discussed by the ITER IT (International Team): the first is to start with the existing MCNP model 'Brand' and update it through a combination of direct modification of the MCNP input file and generation of models for some components directly from the CAD data; the second is to start from the full CAD model, make the necessary simplifications, and generate the MCNP model with one of the interface codes. MCAM, an advanced CAD-based MCNP interface code developed by the FDS Team in China, has been successfully applied to update the ITER 3D neutronics model using both approaches. The Brand model has been updated by generating portions of the geometry from the newest CAD model with MCAM. MCAM has also successfully converted to an MCNP neutronics model a full ITER CAD model that was simplified and issued by ITER IT to benchmark the above interface codes. Based on the two updated 3D neutronics models, the related nuclear analyses are performed. This paper presents the status of ITER 3D modeling using MCAM and its nuclear analyses, as well as a brief introduction to an advanced version of MCAM. (authors)

  16. Sensitivity and uncertainty analyses for performance assessment modeling

    International Nuclear Information System (INIS)

    Doctor, P.G.

    1988-08-01

    Sensitivity and uncertainty analyses methods for computer models are being applied in performance assessment modeling in the geologic high level radioactive waste repository program. The models used in performance assessment tend to be complex physical/chemical models with large numbers of input variables. There are two basic approaches to sensitivity and uncertainty analyses: deterministic and statistical. The deterministic approach to sensitivity analysis involves numerical calculation or employs the adjoint form of a partial differential equation to compute partial derivatives; the uncertainty analysis is based on Taylor series expansions of the input variables propagated through the model to compute means and variances of the output variable. The statistical approach to sensitivity analysis involves a response surface approximation to the model with the sensitivity coefficients calculated from the response surface parameters; the uncertainty analysis is based on simulation. The methods each have strengths and weaknesses. 44 refs
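
    As a concrete illustration of the deterministic (Taylor-series) approach described above, the sketch below propagates input uncertainties through a toy model via finite-difference sensitivities. The model, nominal values and input standard deviations are hypothetical placeholders, not taken from the report.

      import numpy as np

      def model(x):
          # Hypothetical performance-assessment model: a scalar response
          # built from two input variables (stand-in for a complex code).
          k, h = x
          return np.exp(-k) * h**2

      x0 = np.array([1.5, 0.8])          # nominal input values (assumed)
      sigma = np.array([0.2, 0.1])       # input standard deviations (assumed)

      # Finite-difference estimates of the partial derivatives at the nominal point.
      eps = 1e-6
      grad = np.array([
          (model(x0 + eps * np.eye(2)[i]) - model(x0 - eps * np.eye(2)[i])) / (2 * eps)
          for i in range(2)
      ])

      # First-order (Taylor-series) propagation: mean ~ model(x0),
      # variance ~ sum of (sensitivity * input sigma)^2 for independent inputs.
      mean_y = model(x0)
      var_y = np.sum((grad * sigma) ** 2)

      # The sensitivity coefficients rank the inputs by importance.
      print("output mean:", mean_y, "output std:", np.sqrt(var_y))
      print("sensitivity coefficients:", grad)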

  17. Analysing the temporal dynamics of model performance for hydrological models

    NARCIS (Netherlands)

    Reusser, D.E.; Blume, T.; Schaefli, B.; Zehe, E.

    2009-01-01

    The temporal dynamics of hydrological model performance gives insights into errors that cannot be obtained from global performance measures assigning a single number to the fit of a simulated time series to an observed reference series. These errors can include errors in data, model parameters, or

  18. Meta-analyses of KIF6 Trp719Arg in coronary heart disease and statin therapeutic effect.

    Directory of Open Access Journals (Sweden)

    Ping Peng

    Full Text Available The goal of our study is to assess the contribution of KIF6 Trp719Arg to both the risk of CHD and the efficacy of statin therapy in CHD patients. Meta-analysis of 8 prospective studies among 77,400 Caucasians provides evidence that 719Arg increases the risk of CHD (P<0.001, HR = 1.27, 95% CI = 1.15-1.41). However, another meta-analysis of 7 case-control studies among 65,200 individuals fails to find a significant relationship between Trp719Arg and the risk of CHD (P = 0.642, OR = 1.02, 95% CI = 0.95-1.08). This suggests that the contribution of Trp719Arg to CHD varies in different ethnic groups. An additional meta-analysis also shows that statin therapy benefits only the vascular patients carrying the 719Arg allele (P<0.001, relative ratio (RR) = 0.60, 95% CI = 0.54-0.67). To examine the role of this genetic variant in CHD risk in Han Chinese, we conducted a case-control study with 289 CHD cases, 193 non-CHD controls, and 329 unrelated healthy volunteers as healthy controls. On post hoc analysis, a significant allele frequency difference for 719Arg is observed between female CHD cases and female total controls under the dominant model (P = 0.04, χ² = 4.228, df = 1, odds ratio (OR) = 1.979, 95% confidence interval (CI) = 1.023-3.828). Similar trends are observed in the post hoc analysis between female CHD cases and female healthy controls (dominant model: P = 0.04, χ² = 4.231, df = 1, OR = 2.015, 95% CI = 1.024-3.964). Non-genetic CHD risk factors were not controlled in these analyses. Our meta-analysis demonstrates the role of Trp719Arg of the KIF6 gene in the risk of CHD in Caucasians. The meta-analysis also suggests a role for this variant in the statin therapeutic response in vascular diseases. Our case-control study suggests, through a post hoc analysis, that Trp719Arg of the KIF6 gene is associated with CHD in female Han Chinese.
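
    The pooling step behind figures such as HR = 1.27 (95% CI 1.15-1.41) can be illustrated with a minimal fixed-effect (inverse-variance) meta-analysis sketch. The per-study estimates below are invented placeholders, not the published KIF6 data, and the method shown is the generic inverse-variance approach rather than the authors' exact procedure.

      import numpy as np
      from scipy import stats

      # Hypothetical per-study estimates (log odds ratios and their standard errors);
      # these are illustrative placeholders, not the KIF6 study data.
      log_or = np.array([0.22, 0.31, 0.18, 0.25, 0.35, 0.20, 0.28, 0.15])
      se     = np.array([0.10, 0.12, 0.09, 0.11, 0.15, 0.08, 0.13, 0.10])

      # Fixed-effect (inverse-variance) pooling, as in a simple meta-analysis.
      w = 1.0 / se**2
      pooled = np.sum(w * log_or) / np.sum(w)
      pooled_se = np.sqrt(1.0 / np.sum(w))

      ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
      z = pooled / pooled_se
      p = 2 * stats.norm.sf(abs(z))

      print(f"pooled OR = {np.exp(pooled):.2f} "
            f"(95% CI {np.exp(ci_low):.2f}-{np.exp(ci_high):.2f}), P = {p:.3g}")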

  19. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses at the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. A case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). A multiple Boolean viewshed analysis and a global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. In addition, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is presented. The case study proved that large scale models are an appropriate data source for visibility analyses at the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
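
    A Boolean viewshed of the kind mentioned above reduces, cell by cell, to a line-of-sight test against the surface model. The sketch below shows one such test on a toy raster; the grid, observer height and ridge are invented for illustration, and the implementation is deliberately simplified compared with production GIS viewshed algorithms.

      import numpy as np

      def visible(dsm, observer, target, observer_height=1.6):
          """Boolean line-of-sight test on a raster surface model (simplified sketch).

          dsm: 2-D array of surface elevations; observer/target: (row, col) cells.
          Returns True if no intermediate cell blocks the sightline.
          """
          (r0, c0), (r1, c1) = observer, target
          n = int(max(abs(r1 - r0), abs(c1 - c0)))
          if n == 0:
              return True
          z0 = dsm[r0, c0] + observer_height
          z1 = dsm[r1, c1]
          for i in range(1, n):
              t = i / n
              r = int(round(r0 + t * (r1 - r0)))
              c = int(round(c0 + t * (c1 - c0)))
              sightline_z = z0 + t * (z1 - z0)   # elevation of the sightline here
              if dsm[r, c] > sightline_z:        # surface rises above the sightline
                  return False
          return True

      # Toy surface with a ridge between observer and target.
      dsm = np.zeros((50, 50))
      dsm[:, 25] = 12.0
      print(visible(dsm, (10, 5), (10, 45)))   # False: the ridge blocks the view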

  20. Onychomycosis of Toenails and Post-hoc Analyses with Efinaconazole 10% Solution Once-daily Treatment: Impact of Disease Severity and Other Concomitant Associated Factors on Selection of Therapy and Therapeutic Outcomes.

    Science.gov (United States)

    Del Rosso, James Q

    2016-02-01

    Topical treatment for toenail onychomycosis has been fraught with a long-standing reputation of poor efficacy, primarily due to physical properties of the nail unit that impede drug penetration. Newer topical agents have been formulated as solutions, which appear to provide a better therapeutic response in properly selected patients. It is important to recognize the impact that mitigating and concomitant factors can have on efficacy. These factors include disease severity, gender, presence of tinea pedis, and diabetes. This article reviews results achieved in Phase 3 pivotal studies with topical efinaconazole 10% Solution applied once daily for 48 weeks, with a focus on how the aforementioned factors influenced therapeutic outcomes. It is important for clinicians treating patients for onychomycosis to evaluate severity, treat concomitant tinea pedis, address control of diabetes if present by encouraging involvement of the patient's primary care physician, and consider longer treatment courses when clinically relevant.

  1. Insights into the efficacy of golimumab plus methotrexate in patients with active rheumatoid arthritis who discontinued prior anti-tumour necrosis factor therapy: post-hoc analyses from the GO-AFTER study

    NARCIS (Netherlands)

    Smolen, Josef S.; Kay, Jonathan; Matteson, Eric L.; Landewé, Robert; Hsia, Elizabeth C.; Xu, Stephen; Zhou, Yiying; Doyle, Mittie K.

    2014-01-01

    Evaluate golimumab in patients with active rheumatoid arthritis (RA) and previous tumour necrosis factor-α (TNF) inhibitor use. Patients (n=461) previously receiving ≥1 TNF inhibitor were randomised to subcutaneous injections of placebo, golimumab 50 mg or golimumab 100 mg q4 weeks. Primary endpoint

  2. Use of flow models to analyse loss of coolant accidents

    International Nuclear Information System (INIS)

    Pinet, Bernard

    1978-01-01

    This article summarises current work on developing the use of flow models to analyse loss-of-coolant accidents in pressurized-water plants. This work is being done jointly, in the context of the LOCA Technical Committee, by the CEA, EDF and FRAMATOME. The construction of the flow model is very closely based on some theoretical studies of the two-fluid model. The laws of transfer at the interface and at the wall are tested experimentally. The representativity of the model then has to be checked in experiments involving several elementary physical phenomena [fr

  3. Effects of non-invasive vagus nerve stimulation on attack frequency over time and expanded response rates in patients with chronic cluster headache: a post hoc analysis of the randomised, controlled PREVA study.

    Science.gov (United States)

    Gaul, Charly; Magis, Delphine; Liebler, Eric; Straube, Andreas

    2017-12-01

    In the PREVention and Acute treatment of chronic cluster headache (PREVA) study, attack frequency reductions from baseline were significantly more pronounced with non-invasive vagus nerve stimulation plus standard of care (nVNS + SoC) than with SoC alone. Given the intensely painful and frequent nature of chronic cluster headache attacks, additional patient-centric outcomes, including the time to and level of therapeutic response, were evaluated in a post hoc analysis of the PREVA study. After a 2-week baseline phase, 97 patients with chronic cluster headache entered a 4-week randomised phase to receive nVNS + SoC (n = 48) or SoC alone (n = 49). All 92 patients who continued into a 4-week extension phase received nVNS + SoC. Compared with SoC alone, nVNS + SoC led to a significantly lower mean weekly attack frequency by week 2 of the randomised phase; the attack frequency remained significantly lower in the nVNS + SoC group through week 3 of the extension phase (P cluster headache attack frequency within 2 weeks after its addition to SoC and was associated with significantly higher ≥25%, ≥50%, and ≥75% response rates than SoC alone. The rapid decrease in weekly attack frequency justifies a 4-week trial period to identify responders to nVNS, with a high degree of confidence, among patients with chronic cluster headache.

  4. The Effect of Sitagliptin on the Regression of Carotid Intima-Media Thickening in Patients with Type 2 Diabetes Mellitus: A Post Hoc Analysis of the Sitagliptin Preventive Study of Intima-Media Thickness Evaluation

    Directory of Open Access Journals (Sweden)

    Tomoya Mita

    2017-01-01

    Full Text Available Background. The effect of dipeptidyl peptidase-4 (DPP-4) inhibitors on the regression of carotid IMT remains largely unknown. The present study aimed to clarify whether sitagliptin, a DPP-4 inhibitor, could regress carotid intima-media thickness (IMT) in insulin-treated patients with type 2 diabetes mellitus (T2DM). Methods. This is an exploratory analysis of a randomized trial in which we investigated the effect of sitagliptin on the progression of carotid IMT in insulin-treated patients with T2DM. Here, in a post hoc analysis, we compared the effect of sitagliptin treatment on the number of patients who showed regression of carotid IMT of ≥0.10 mm. Results. The percentages of patients who showed regression of mean-IMT-CCA (28.9% in the sitagliptin group versus 16.4% in the conventional group, P = 0.022) and left max-IMT-CCA (43.0% in the sitagliptin group versus 26.2% in the conventional group, P = 0.007), but not right max-IMT-CCA, were higher in the sitagliptin treatment group than in the non-DPP-4 inhibitor treatment group. In multiple logistic regression analysis, sitagliptin treatment achieved significantly higher attainment of regression of mean-IMT-CCA ≥0.10 mm and of right and left max-IMT-CCA ≥0.10 mm compared with conventional treatment. Conclusions. Our data suggested that DPP-4 inhibitors were associated with the regression of carotid atherosclerosis in insulin-treated T2DM patients. This study has been registered with the University Hospital Medical Information Network Clinical Trials Registry (UMIN000007396).
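
    The multiple logistic regression step reported above can be sketched as follows. The dataset, covariate names and effect sizes are random placeholders rather than the trial data; the sketch only illustrates fitting an adjusted odds ratio for a binary "regression of IMT ≥0.10 mm" endpoint with statsmodels.

      import numpy as np
      import pandas as pd
      import statsmodels.api as sm

      # Hypothetical analysis dataset: one row per patient, with a binary indicator
      # of IMT regression >= 0.10 mm, a treatment flag and a few covariates.
      rng = np.random.default_rng(0)
      n = 200
      df = pd.DataFrame({
          "regressed":    rng.integers(0, 2, n),   # 1 = IMT regression of >= 0.10 mm
          "sitagliptin":  rng.integers(0, 2, n),   # 1 = sitagliptin, 0 = conventional
          "age":          rng.normal(65, 8, n),
          "baseline_imt": rng.normal(0.85, 0.15, n),
          "hba1c":        rng.normal(7.8, 0.9, n),
      })

      # Multiple logistic regression: treatment effect adjusted for covariates.
      X = sm.add_constant(df[["sitagliptin", "age", "baseline_imt", "hba1c"]])
      fit = sm.Logit(df["regressed"], X).fit(disp=False)

      # Odds ratio and 95% CI for sitagliptin vs conventional treatment.
      or_ = np.exp(fit.params["sitagliptin"])
      ci = np.exp(fit.conf_int().loc["sitagliptin"])
      print(f"adjusted OR = {or_:.2f} (95% CI {ci[0]:.2f}-{ci[1]:.2f})")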

  5. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Ihekwaba, Adoha

    2007-01-01

    A. Ihekwaba, R. Mardare. A Calculus for Modelling, Simulating and Analysing Compartmentalized Biological Systems. Case study: NFkB system. In Proc. of International Conference of Computational Methods in Sciences and Engineering (ICCMSE), American Institute of Physics, AIP Proceedings, N 2...

  6. SVM models for analysing the headstreams of mine water inrush

    Energy Technology Data Exchange (ETDEWEB)

    Yan Zhi-gang; Du Pei-jun; Guo Da-zhi [China University of Science and Technology, Xuzhou (China). School of Environmental Science and Spatial Informatics

    2007-08-15

    The support vector machine (SVM) model was introduced to analyse the headstream of water inrush in a coal mine. The SVM model, based on a hydrogeochemical method, was constructed for recognising two kinds of headstreams, and the H-SVMs model was constructed for recognising multiple headstreams. The SVM method was applied to analyse the conditions of two mixed headstreams, and the value of the SVM decision function was investigated as a means of denoting hydrogeochemical abnormality. The experimental results show that the SVM is based on a strict mathematical theory, has a simple structure and gives good overall performance. Moreover, the parameter W in the decision function can describe the weights of the discrimination indices of the headstream of water inrush. The value of the decision function can denote hydrogeochemical abnormality, which is significant for the prevention of water inrush in a coal mine. 9 refs., 1 fig., 7 tabs.
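
    A minimal sketch of the linear-SVM idea described above is given below, assuming hypothetical ion-concentration data for two headstreams. The fitted weight vector plays the role of the parameter W (the weights of the discrimination indices), and a decision-function value near zero for a mixed sample illustrates the notion of hydrogeochemical abnormality; none of the numbers come from the paper.

      import numpy as np
      from sklearn.svm import SVC

      # Hypothetical hydrogeochemical training data: each row holds ion
      # concentrations (e.g. Ca2+, Mg2+, Na++K+, HCO3-, SO4^2-, Cl-) for a water
      # sample; labels 0/1 are the two known headstreams (aquifers).
      rng = np.random.default_rng(1)
      aquifer_a = rng.normal([120, 35, 60, 300, 150, 40], 10, size=(30, 6))
      aquifer_b = rng.normal([60, 15, 180, 200, 400, 90], 10, size=(30, 6))
      X = np.vstack([aquifer_a, aquifer_b])
      y = np.array([0] * 30 + [1] * 30)

      # Linear SVM: the decision function is f(x) = w.x + b.
      clf = SVC(kernel="linear").fit(X, y)

      # The weight vector (analogous to the parameter W in the abstract) indicates
      # how strongly each ion contributes to discriminating the two headstreams.
      print("weights w:", clf.coef_[0])

      # A mixed or anomalous sample: a decision-function value near zero flags a
      # hydrogeochemical "abnormality" between the two pure headstream signatures.
      mixed = 0.5 * aquifer_a.mean(axis=0) + 0.5 * aquifer_b.mean(axis=0)
      print("decision value:", clf.decision_function(mixed.reshape(1, -1))[0])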

  7. Vocational Teachers and Professionalism - A Model Based on Empirical Analyses

    DEFF Research Database (Denmark)

    Duch, Henriette Skjærbæk; Andreasen, Karen E

    Vocational Teachers and Professionalism - A Model Based on Empirical Analyses Several theorists have developed models to illustrate the processes of adult learning and professional development (e.g. Illeris, Argyris, Engeström; Wahlgren & Aarkorg, Kolb and Wenger). Models can sometimes be criticized...... emphasis on the adult employee, the organization, its surroundings as well as other contextual factors. Our concern is adult vocational teachers attending a pedagogical course and teaching at vocational colleges. The aim of the paper is to discuss different models and develop a model concerning teachers...... at vocational colleges based on empirical data in a specific context, a vocational teacher-training course in Denmark. By offering a basis and concepts for analysis of practice, such a model is meant to support the development of vocational teachers’ professionalism at courses and in organizational contexts...

  8. Performance of neutron kinetics models for ADS transient analyses

    International Nuclear Information System (INIS)

    Rineiski, A.; Maschek, W.; Rimpault, G.

    2002-01-01

    Within the framework of the SIMMER code development, neutron kinetics models for simulating transients and hypothetical accidents in advanced reactor systems, in particular in Accelerator Driven Systems (ADSs), have been developed at FZK/IKET in cooperation with CE Cadarache. SIMMER is a fluid-dynamics/thermal-hydraulics code, coupled with a structure model and a space-, time- and energy-dependent neutronics module for analyzing transients and accidents. The advanced kinetics models have also been implemented into KIN3D, a module of the VARIANT/TGV code (stand-alone neutron kinetics) for broadening application and for testing and benchmarking. In the paper, a short review of the SIMMER and KIN3D neutron kinetics models is given. Some typical transients related to ADS perturbations are analyzed. The general models of SIMMER and KIN3D are compared with more simple techniques developed in the context of this work to get a better understanding of the specifics of transients in subcritical systems and to estimate the performance of different kinetics options. These comparisons may also help in elaborating new kinetics models and extending existing computation tools for ADS transient analyses. The traditional point-kinetics model may give rather inaccurate transient reaction rate distributions in an ADS even if the material configuration does not change significantly. This inaccuracy is not related to the problem of choosing a 'right' weighting function: the point-kinetics model with any weighting function cannot take into account pronounced flux shape variations related to possible significant changes in the criticality level or to fast beam trips. To improve the accuracy of the point-kinetics option for slow transients, we have introduced a correction factor technique. The related analyses give a better understanding of 'long-timescale' kinetics phenomena in the subcritical domain and help to evaluate the performance of the quasi-static scheme in a particular case. One
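
    For reference, the simple point-kinetics technique that the advanced SIMMER/KIN3D models are compared against can be written, in standard textbook notation with an external source term for a subcritical (ADS) core, as

      \frac{dn}{dt} = \frac{\rho(t) - \beta}{\Lambda}\, n(t) + \sum_i \lambda_i c_i(t) + S(t),
      \qquad
      \frac{dc_i}{dt} = \frac{\beta_i}{\Lambda}\, n(t) - \lambda_i c_i(t),

    where n is the flux amplitude, c_i the delayed-neutron precursor concentrations, ρ the reactivity, β = Σ β_i, Λ the generation time and S the accelerator-driven external source. The abstract's point is that this amplitude-only description cannot capture the pronounced flux-shape variations that follow fast beam trips or large changes in the criticality level. (This is the standard form, not taken from the SIMMER or KIN3D documentation.)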

  9. Graphic-based musculoskeletal model for biomechanical analyses and animation.

    Science.gov (United States)

    Chao, Edmund Y S

    2003-04-01

    The ability to combine physiology and engineering analyses with computer sciences has opened the door to the possibility of creating the 'Virtual Human' reality. This paper presents a broad foundation for a full-featured biomechanical simulator for the human musculoskeletal system physiology. This simulation technology unites the expertise in biomechanical analysis and graphic modeling to investigate joint and connective tissue mechanics at the structural level and to visualize the results in both static and animated forms together with the model. Adaptable anatomical models including prosthetic implants and fracture fixation devices and a robust computational infrastructure for static, kinematic, kinetic, and stress analyses under varying boundary and loading conditions are incorporated on a common platform, the VIMS (Virtual Interactive Musculoskeletal System). Within this software system, a manageable database containing long bone dimensions, connective tissue material properties and a library of skeletal joint system functional activities and loading conditions are also available and they can easily be modified, updated and expanded. Application software is also available to allow end-users to perform biomechanical analyses interactively. This paper details the design, capabilities, and features of the VIMS development at Johns Hopkins University, an effort possible only through academic and commercial collaborations. Examples using these models and the computational algorithms in a virtual laboratory environment are used to demonstrate the utility of this unique database and simulation technology. This integrated system will impact on medical education, basic research, device development and application, and clinical patient care related to musculoskeletal diseases, trauma, and rehabilitation.

  10. Model-based Recursive Partitioning for Subgroup Analyses

    OpenAIRE

    Seibold, Heidi; Zeileis, Achim; Hothorn, Torsten

    2016-01-01

    The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulated challenges to the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by...

  11. The relationship of renal function to outcome: A post hoc analysis from the EdoxabaN versus warfarin in subjectS UndeRgoing cardiovErsion of Atrial Fibrillation (ENSURE-AF) study.

    Science.gov (United States)

    Lip, Gregory Y H; Al-Saady, Naab; Ezekowitz, Michael D; Banach, Maciej; Goette, Andreas

    2017-11-01

    The ENSURE-AF study (NCT 02072434) of anticoagulation for electrical cardioversion in nonvalvular atrial fibrillation (NVAF) showed comparable low rates of bleeding and thromboembolism between the edoxaban and the enoxaparin-warfarin treatment arms. This post hoc analysis investigated the relationship between renal function and clinical outcomes. ENSURE-AF was a multicenter, PROBE evaluation trial of edoxaban 60 mg, or dose reduced to 30 mg/d for weight≤60 kg, creatinine clearance (CrCl; Cockcroft-Gault) ≤50 mL/min, or concomitant P-glycoprotein inhibitors compared with therapeutically monitored enoxaparin-warfarin in 2,199 NVAF patients undergoing electrical cardioversion. Efficacy and safety outcomes and time in therapeutic range in the warfarin arm were analyzed in relation to CrCl in prespecified ranges ≥15 and ≤30, >30 and ≤50, >50 and warfarin. Mean age was 64.3±10 and 64.2±11 years. Mean time in therapeutic range was progressively lower with reducing CrCl strata, being 66.8% in those with CrCl >30 to ≤50 compared with 71.8% in those with CrCl ≥80. The odds ratios for the primary efficacy and safety end points were comparable for the different predefined renal function strata; given the small numbers, the 95% CI included 1.0. In the subset of those with CrCl ≥95, the odds ratios showed consistency with the other CrCl strata. When CrCl was assessed as a continuous variable, there was a nonsignificant trend toward higher major or clinically relevant nonmajor bleeding with reducing CrCl levels, with no significant differences between the 2 treatment arms. When we assessed CrCl at baseline compared with end of treatment, there were no significant differences in CrCl change between the edoxaban and enoxaparin-warfarin arms. The proportions with worsening of renal function (defined as a decrease of >20% from baseline) were similar in the 2 treatment arms. Given the small number of events in ENSURE-AF, no effect of renal (dys)function was

  12. Modeling hard clinical end-point data in economic analyses.

    Science.gov (United States)

    Kansal, Anuraag R; Zheng, Ying; Palencia, Roberto; Ruffolo, Antonio; Hass, Bastian; Sorensen, Sonja V

    2013-11-01

    The availability of hard clinical end-point data, such as that on cardiovascular (CV) events among patients with type 2 diabetes mellitus, is increasing, and as a result there is growing interest in using hard end-point data of this type in economic analyses. This study investigated published approaches for modeling hard end-points from clinical trials and evaluated their applicability in health economic models with different disease features. A review of cost-effectiveness models of interventions in clinically significant therapeutic areas (CV diseases, cancer, and chronic lower respiratory diseases) was conducted in PubMed and Embase using a defined search strategy. Only studies integrating hard end-point data from randomized clinical trials were considered. For each study included, clinical input characteristics and modeling approach were summarized and evaluated. A total of 33 articles (23 CV, eight cancer, two respiratory) were accepted for detailed analysis. Decision trees, Markov models, discrete event simulations, and hybrids were used. Event rates were incorporated either as constant rates, time-dependent risks, or risk equations based on patient characteristics. Risks dependent on time and/or patient characteristics were used where major event rates were >1%/year in models with fewer health states. Models of infrequent events or with numerous health states generally preferred constant event rates. The detailed modeling information and terminology varied, sometimes requiring interpretation. Key considerations for cost-effectiveness models incorporating hard end-point data include the frequency and characteristics of the relevant clinical events and how the trial data are reported. When event risk is low, simplification of both the model structure and event rate modeling is recommended. When event risk is common, such as in high-risk populations, more detailed modeling approaches, including individual simulations or explicitly time-dependent event rates, are
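
    The "constant event rate" option discussed above is easiest to see in a minimal Markov cohort model. The three states, transition probabilities, costs and utilities below are illustrative assumptions only, not values from any of the reviewed models.

      import numpy as np

      # Simplified three-state Markov cohort model (well -> post-event -> dead),
      # annual cycles, constant event rates; all transition probabilities, costs
      # and utilities are illustrative assumptions only.
      P = np.array([
          [0.94, 0.04, 0.02],   # from "well":       stay, CV event, die
          [0.00, 0.90, 0.10],   # from "post_event": cannot return, stay, die
          [0.00, 0.00, 1.00],   # from "dead":       absorbing state
      ])
      cost    = np.array([500.0, 4000.0, 0.0])   # annual cost per state
      utility = np.array([0.85, 0.70, 0.0])      # annual QALY weight per state

      cohort = np.array([1.0, 0.0, 0.0])         # everyone starts in "well"
      horizon, discount = 20, 0.035
      total_cost = total_qaly = 0.0

      for year in range(horizon):
          d = 1.0 / (1.0 + discount) ** year
          total_cost += d * cohort @ cost
          total_qaly += d * cohort @ utility
          cohort = cohort @ P                    # advance the cohort one cycle

      print(f"discounted cost: {total_cost:.0f}, discounted QALYs: {total_qaly:.2f}")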

  13. A 1024 channel analyser of model FH 465

    International Nuclear Information System (INIS)

    Tang Cunxun

    1988-01-01

    The FH 465 is a renewed version of the model FH 451 1024-channel analyser. In addition to the simple operation and clear display featured by its predecessor, the core memory has been replaced by semiconductor memory, the level of integration has been improved, and the use of widely available low-power 74LS devices has greatly decreased the cost; the analyser can also be easily interfaced with Apple-II, Great Wall-0520-CH or IBM-PC/XT microcomputers. The operating principle, main specifications and test results are described

  14. Applications of one-dimensional models in simplified inelastic analyses

    International Nuclear Information System (INIS)

    Kamal, S.A.; Chern, J.M.; Pai, D.H.

    1980-01-01

    This paper presents an approximate inelastic analysis based on geometric simplification with emphasis on its applicability, modeling, and the method of defining the loading conditions. Two problems are investigated: a one-dimensional axisymmetric model of generalized plane strain thick-walled cylinder is applied to the primary sodium inlet nozzle of the Clinch River Breeder Reactor Intermediate Heat Exchanger (CRBRP-IHX), and a finite cylindrical shell is used to simulate the branch shell forging (Y) junction. The results are then compared with the available detailed inelastic analyses under cyclic loading conditions in terms of creep and fatigue damages and inelastic ratchetting strains per the ASME Code Case N-47 requirements. In both problems, the one-dimensional simulation is able to trace the detailed stress-strain response. The quantitative comparison is good for the nozzle, but less satisfactory for the Y junction. Refinements are suggested to further improve the simulation

  15. A non-equilibrium neutral model for analysing cultural change.

    Science.gov (United States)

    Kandler, Anne; Shennan, Stephen

    2013-08-07

    Neutral evolution is a frequently used model to analyse changes in frequencies of cultural variants over time. Variants are chosen to be copied according to their relative frequency and new variants are introduced by a process of random mutation. Here we present a non-equilibrium neutral model which accounts for temporally varying population sizes and mutation rates and makes it possible to analyse the cultural system under consideration at any point in time. This framework gives an indication whether observed changes in the frequency distributions of a set of cultural variants between two time points are consistent with the random copying hypothesis. We find that the likelihood of the existence of the observed assemblage at the end of the considered time period (expressed by the probability of the observed number of cultural variants present in the population during the whole period under neutral evolution) is a powerful indicator of departures from neutrality. Further, we study the effects of frequency-dependent selection on the evolutionary trajectories and present a case study of change in the decoration of pottery in early Neolithic Central Europe. Based on the framework developed we show that neutral evolution is not an adequate description of the observed changes in frequency. Copyright © 2013 Elsevier Ltd. All rights reserved.
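
    The copying-plus-innovation process at the heart of the neutral model can be sketched as a Wright-Fisher-style simulation with time-varying population size and mutation rate, in the spirit of the non-equilibrium framework described above. All parameter values are arbitrary illustrations, and the sketch omits the likelihood-based test of neutrality that the paper actually develops.

      import numpy as np

      def neutral_step(variants, pop_size, mu, rng):
          """One generation of neutral copying with innovation (mutation)."""
          # Each new individual copies a variant from the previous generation at
          # random (frequency-proportional copying) ...
          new = rng.choice(variants, size=pop_size, replace=True)
          # ... unless it innovates, in which case it introduces a brand-new variant.
          innovate = rng.random(pop_size) < mu
          n_new = innovate.sum()
          if n_new:
              start = variants.max() + 1
              new[innovate] = np.arange(start, start + n_new)
          return new

      rng = np.random.default_rng(42)
      variants = np.zeros(200, dtype=int)          # initial population, one variant

      # Non-equilibrium setting: population size and mutation rate change over time.
      pop_sizes = np.linspace(200, 800, 60).astype(int)
      mus = np.linspace(0.01, 0.002, 60)

      richness = []
      for n, mu in zip(pop_sizes, mus):
          variants = neutral_step(variants, n, mu, rng)
          richness.append(len(np.unique(variants)))  # number of variants present

      print("variant counts over time:", richness[::10])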

  16. Multi-state models: metapopulation and life history analyses

    Directory of Open Access Journals (Sweden)

    Arnason, A. N.

    2004-06-01

    Full Text Available Multi–state models are designed to describe populations that move among a fixed set of categorical states. The obvious application is to population interchange among geographic locations such as breeding sites or feeding areas (e.g., Hestbeck et al., 1991; Blums et al., 2003; Cam et al., 2004), but they are increasingly used to address important questions of evolutionary biology and life history strategies (Nichols & Kendall, 1995). In these applications, the states include life history stages such as breeding states. The multi–state models, by permitting estimation of stage–specific survival and transition rates, can help assess trade–offs between life history mechanisms (e.g. Yoccoz et al., 2000). These trade–offs are also important in meta–population analyses where, for example, the pre– and post–breeding rates of transfer among sub–populations can be analysed in terms of target colony distance, density, and other covariates (e.g., Lebreton et al. 2003; Breton et al., in review). Further examples of the use of multi–state models in analysing dispersal and life–history trade–offs can be found in the session on Migration and Dispersal. In this session, we concentrate on applications that did not involve dispersal. These applications fall in two main categories: those that address life history questions using stage categories, and a more technical use of multi–state models to address problems arising from the violation of mark–recapture assumptions leading to the potential for seriously biased predictions or misleading insights from the models. Our plenary paper, by William Kendall (Kendall, 2004), gives an overview of the use of Multi–state Mark–Recapture (MSMR) models to address two such violations. The first is the occurrence of unobservable states that can arise, for example, from temporary emigration or by incomplete sampling coverage of a target population. Such states can also occur for life history reasons, such

  17. Antiplatelet therapy and the effects of B vitamins in patients with previous stroke or transient ischaemic attack: a post-hoc subanalysis of VITATOPS, a randomised, placebo-controlled trial.

    Science.gov (United States)

    Hankey, Graeme J; Eikelboom, John W; Yi, Qilong; Lees, Kennedy R; Chen, Christopher; Xavier, Denis; Navarro, Jose C; Ranawaka, Udaya K; Uddin, Wasim; Ricci, Stefano; Gommans, John; Schmidt, Reinhold

    2012-06-01

    Previous studies have suggested that any benefits of folic acid-based therapy to lower serum homocysteine in prevention of cardiovascular events might be offset by concomitant use of antiplatelet therapy. We aimed to establish whether there is an interaction between antiplatelet therapy and the effects of folic acid-based homocysteine-lowering therapy on major vascular events in patients with stroke or transient ischaemic attack enrolled in the vitamins to prevent stroke (VITATOPS) trial. In the VITATOPS trial, 8164 patients with recent stroke or transient ischaemic attack were randomly allocated to double-blind treatment with one tablet daily of placebo or B vitamins (2 mg folic acid, 25 mg vitamin B6, and 500 μg vitamin B12) and followed up for a median 3·4 years (IQR 2·0-5·5) for the primary composite outcome of stroke, myocardial infarction, or death from vascular causes. In our post-hoc analysis of the interaction between antiplatelet therapy and the effects of treatment with B vitamins on the primary outcome, we used Cox proportional hazards regression before and after adjusting for imbalances in baseline prognostic factors in participants who were and were not taking antiplatelet drugs at baseline and in participants assigned to receive B vitamins or placebo. We also assessed the interaction in different subgroups of patients and different secondary outcomes. The VITATOPS trial is registered with ClinicalTrials.gov, number NCT00097669, and Current Controlled Trials, number ISRCTN74743444. At baseline, 6609 patients were taking antiplatelet therapy and 1463 were not. Patients not receiving antiplatelet therapy were more likely to be younger, east Asian, and disabled, to have a haemorrhagic stroke or cardioembolic ischaemic stroke, and to have a history of hypertension or atrial fibrillation. They were less likely to be smokers and to have a history of peripheral artery disease, hypercholesterolaemia, diabetes, ischaemic heart disease, and a
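
    The interaction test described above (treatment effect of B vitamins by baseline antiplatelet use) can be sketched with a Cox proportional hazards model containing a product term. The data frame below is randomly generated and the column names are hypothetical; the sketch uses the lifelines package as an assumed convenience, not the software used in the trial analysis.

      import numpy as np
      import pandas as pd
      from lifelines import CoxPHFitter

      # Hypothetical trial-like dataset: follow-up time in years, a vascular-event
      # indicator, randomised B-vitamin treatment, baseline antiplatelet use and
      # their product term for the interaction test.
      rng = np.random.default_rng(7)
      n = 2000
      df = pd.DataFrame({
          "b_vitamins":   rng.integers(0, 2, n),
          "antiplatelet": rng.integers(0, 2, n),
          "time":         rng.exponential(3.4, n),
          "event":        rng.integers(0, 2, n),
      })
      df["interaction"] = df["b_vitamins"] * df["antiplatelet"]

      # Cox proportional hazards regression; the coefficient of the product term
      # estimates how much the B-vitamin effect differs by antiplatelet use.
      cph = CoxPHFitter()
      cph.fit(df, duration_col="time", event_col="event")
      print(cph.summary[["coef", "exp(coef)", "p"]])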

  18. Effect of ELOM-080 on exacerbations and symptoms in COPD patients with a chronic bronchitis phenotype – a post-hoc analysis of a randomized, double-blind, placebo-controlled clinical trial

    Directory of Open Access Journals (Sweden)

    Beeh KM

    2016-11-01

    Full Text Available Kai-Michael Beeh,1 Jutta Beier,1 Henning Candler,2 Thomas Wittig2 1Insaf Respiratory Research Institute, Wiesbaden, Germany; 2G. Pohl-Boskamp GmbH & Co KG, Hohenlockstedt, Germany Background: Treating symptoms and preventing exacerbations are key components of chronic obstructive pulmonary disease (COPD) long-term management. Recently, a more tailored treatment approach has been proposed, in particular for two well-established clinical phenotypes, frequent exacerbators and chronic bronchitis-dominant COPD. ELOM-080 has demonstrated clinical efficacy in treating symptoms and preventing exacerbations in subjects with chronic bronchitis. However, little is known about the potential effects of ELOM-080 in COPD patients. Aim: To evaluate the effect of long-term treatment with ELOM-080 on exacerbations, cough, sputum, and general state of health in COPD patients with an exacerbation history and chronic bronchitis. Methods: We performed a post-hoc analysis of a randomized, double-blinded, placebo-controlled parallel-group clinical trial of a 6-month treatment with ELOM-080 (3×300 mg) in patients with chronic bronchitis and concomitant COPD. The primary outcome was the proportion of subjects with at least one exacerbation over the 6-month study period. Secondary outcomes included the total number of exacerbations (ie, cumulative occurrence of exacerbations during the study period), the proportion of acute exacerbations necessitating antibiotic treatment, monthly evaluations of sputum and cough symptoms, the general state of health, and a safety analysis. Results: Of 260 randomized subjects, 64 patients fulfilled the inclusion criteria for COPD (ELOM-080: 35, placebo: 29). Compared with placebo, ELOM-080 reduced the percentage of subjects with at least one exacerbation (29% versus 55%, P=0.031) and the overall occurrence of exacerbations (ELOM-080: 10, placebo: 21, P=0.012) during the winter season. The percentage of asymptomatic or

  19. Model-Based Recursive Partitioning for Subgroup Analyses.

    Science.gov (United States)

    Seibold, Heidi; Zeileis, Achim; Hothorn, Torsten

    2016-05-01

    The identification of patient subgroups with differential treatment effects is the first step towards individualised treatments. A current draft guideline by the EMA discusses potentials and problems in subgroup analyses and formulated challenges to the development of appropriate statistical procedures for the data-driven identification of patient subgroups. We introduce model-based recursive partitioning as a procedure for the automated detection of patient subgroups that are identifiable by predictive factors. The method starts with a model for the overall treatment effect as defined for the primary analysis in the study protocol and uses measures for detecting parameter instabilities in this treatment effect. The procedure produces a segmented model with differential treatment parameters corresponding to each patient subgroup. The subgroups are linked to predictive factors by means of a decision tree. The method is applied to the search for subgroups of patients suffering from amyotrophic lateral sclerosis that differ with respect to their Riluzole treatment effect, the only currently approved drug for this disease.
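
    The core idea of model-based recursive partitioning, fitting an overall treatment-effect model and then searching covariates for split points at which the treatment parameter is unstable, can be caricatured in a few lines. The sketch below is a deliberately simplified stand-in (difference-in-means effects, an RSS-improvement criterion, no formal instability test or stopping rule) and uses invented data; it is not the authors' implementation.

      import numpy as np

      def fit_effect(y, treat):
          """Least-squares treatment effect (difference in group means) and its RSS."""
          effect = y[treat == 1].mean() - y[treat == 0].mean()
          pred = np.where(treat == 1, y[treat == 1].mean(), y[treat == 0].mean())
          return effect, np.sum((y - pred) ** 2)

      def best_split(y, treat, covariate):
          """Scan split points of one covariate for instability in the treatment effect."""
          base_effect, base_rss = fit_effect(y, treat)
          best = (None, 0.0)
          for cut in np.quantile(covariate, np.linspace(0.1, 0.9, 17)):
              left, right = covariate <= cut, covariate > cut
              if left.sum() < 20 or right.sum() < 20:
                  continue
              _, rss_l = fit_effect(y[left], treat[left])
              _, rss_r = fit_effect(y[right], treat[right])
              gain = base_rss - (rss_l + rss_r)   # improvement from separate effects
              if gain > best[1]:
                  best = (cut, gain)
          return base_effect, best

      # Hypothetical data: the treatment only helps patients with low covariate values.
      rng = np.random.default_rng(3)
      n = 400
      treat = rng.integers(0, 2, n)
      age = rng.normal(60, 10, n)
      y = 1.5 * treat * (age < 60) + rng.normal(0, 1, n)

      overall, (cut, gain) = best_split(y, treat, age)
      print(f"overall effect {overall:.2f}; best split at age {cut:.1f} (gain {gain:.1f})")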

  20. A theoretical model for analysing gender bias in medicine

    Directory of Open Access Journals (Sweden)

    Johansson Eva E

    2009-08-01

    Full Text Available Abstract During the last decades, research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model, gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change through facts alone. We suggest consciousness-raising activities and continuous reflection on gender attitudes among students, teachers, researchers and decision-makers.

  1. A theoretical model for analysing gender bias in medicine.

    Science.gov (United States)

    Risberg, Gunilla; Johansson, Eva E; Hamberg, Katarina

    2009-08-03

    During the last decades, research has reported unmotivated differences in the treatment of women and men in various areas of clinical and academic medicine. There is an ongoing discussion on how to avoid such gender bias. We developed a three-step theoretical model to understand how gender bias in medicine can occur and be understood. In this paper we present the model and discuss its usefulness in the efforts to avoid gender bias. In the model, gender bias is analysed in relation to assumptions concerning difference/sameness and equity/inequity between women and men. Our model illustrates that gender bias in medicine can arise from assuming sameness and/or equity between women and men when there are genuine differences to consider in biology and disease, as well as in life conditions and experiences. However, gender bias can also arise from assuming differences when there are none, when and if dichotomous stereotypes about women and men are understood as valid. This conceptual thinking can be useful for discussing and avoiding gender bias in clinical work, medical education, career opportunities and documents such as research programs and health care policies. To meet the various forms of gender bias, different facts and measures are needed. Knowledge about biological differences between women and men will not reduce bias caused by gendered stereotypes or by unawareness of health problems and discrimination associated with gender inequity. Such bias reflects unawareness of gendered attitudes and will not change through facts alone. We suggest consciousness-raising activities and continuous reflection on gender attitudes among students, teachers, researchers and decision-makers.

  2. Impact of sophisticated fog spray models on accident analyses

    International Nuclear Information System (INIS)

    Roblyer, S.P.; Owzarski, P.C.

    1978-01-01

    The N-Reactor confinement system release dose to the public in a postulated accident is reduced by washing the confinement atmosphere with fog sprays. This allows a low pressure release of confinement atmosphere containing fission products through filters and out an elevated stack. The current accident analysis required revision of the CORRAL code and other codes such as CONTEMPT to properly model the N Reactor confinement as a system of multiple fog-sprayed compartments. In revising these codes, more sophisticated models for the fog sprays and iodine plateout were incorporated to remove some of the conservatism in the steam condensing rate, fission product washout and iodine plateout assumptions used in previous studies. The CORRAL code, which was used to describe the transport and deposition of airborne fission products in LWR containment systems for the Rasmussen Study, was revised to describe fog spray removal of molecular iodine (I2) and particulates in multiple compartments for sprays having individual characteristics of on-off times, flow rates, fall heights, and drop sizes in changing containment atmospheres. During postulated accidents, the code determined the fission product removal rates internally rather than from input decontamination factors. A discussion is given of how the calculated plateout and washout rates vary with time throughout the analysis. The results of the accident analyses indicated that more credit could be given to fission product washout and plateout. An important finding was that the release of fission products to the atmosphere and adsorption of fission products on the filters were significantly lower than previous studies had indicated.

  3. A dialogue game for analysing group model building: framing collaborative modelling and its facilitation

    NARCIS (Netherlands)

    Hoppenbrouwers, S.J.B.A.; Rouwette, E.A.J.A.

    2012-01-01

    This paper concerns a specific approach to analysing and structuring operational situations in collaborative modelling. Collaborative modelling is viewed here as 'the goal-driven creation and shaping of models that are based on the principles of rational description and reasoning'. Our long term

  4. Micromechanical Failure Analyses for Finite Element Polymer Modeling

    Energy Technology Data Exchange (ETDEWEB)

    CHAMBERS,ROBERT S.; REEDY JR.,EARL DAVID; LO,CHI S.; ADOLF,DOUGLAS B.; GUESS,TOMMY R.

    2000-11-01

    Polymer stresses around sharp corners and in constrained geometries of encapsulated components can generate cracks leading to system failures. Often, analysts use maximum stresses as a qualitative indicator for evaluating the strength of encapsulated component designs. Although this approach has been useful for making relative comparisons screening prospective design changes, it has not been tied quantitatively to failure. Accurate failure models are needed for analyses to predict whether encapsulated components meet life cycle requirements. With Sandia's recently developed nonlinear viscoelastic polymer models, it has been possible to examine more accurately the local stress-strain distributions in zones of likely failure initiation looking for physically based failure mechanisms and continuum metrics that correlate with the cohesive failure event. This study has identified significant differences between rubbery and glassy failure mechanisms that suggest reasonable alternatives for cohesive failure criteria and metrics. Rubbery failure seems best characterized by the mechanisms of finite extensibility and appears to correlate with maximum strain predictions. Glassy failure, however, seems driven by cavitation and correlates with the maximum hydrostatic tension. Using these metrics, two three-point bending geometries were tested and analyzed under variable loading rates, different temperatures and comparable mesh resolution (i.e., accuracy) to make quantitative failure predictions. The resulting predictions and observations agreed well suggesting the need for additional research. In a separate, additional study, the asymptotically singular stress state found at the tip of a rigid, square inclusion embedded within a thin, linear elastic disk was determined for uniform cooling. The singular stress field is characterized by a single stress intensity factor K{sub a} and the applicable K{sub a} calibration relationship has been determined for both fully bonded and
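
    The two continuum metrics singled out above can be computed directly from the local tensors at an integration point: maximum principal strain for the rubbery (finite-extensibility) mechanism and maximum hydrostatic tension for the glassy (cavitation) mechanism. The tensors in the sketch are invented example values.

      import numpy as np

      # Hypothetical stress (MPa) and small-strain tensors at a finite element
      # integration point near a sharp corner; symmetric 3x3 matrices.
      stress = np.array([[80.0, 15.0,  5.0],
                         [15.0, 60.0, 10.0],
                         [ 5.0, 10.0, 45.0]])
      strain = np.array([[0.030, 0.004, 0.001],
                         [0.004, 0.022, 0.002],
                         [0.001, 0.002, 0.015]])

      # Glassy-failure metric: hydrostatic tension (mean of the normal stresses);
      # cavitation-driven failure correlates with large positive values.
      hydrostatic = np.trace(stress) / 3.0

      # Rubbery-failure metric: maximum principal strain (largest eigenvalue of
      # the strain tensor), reflecting the finite-extensibility mechanism.
      max_principal_strain = np.linalg.eigvalsh(strain).max()

      print(f"hydrostatic tension: {hydrostatic:.1f} MPa")
      print(f"maximum principal strain: {max_principal_strain:.4f}")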

  5. Modelling and analysing interoperability in service compositions using COSMO

    NARCIS (Netherlands)

    Quartel, Dick; van Sinderen, Marten J.

    2008-01-01

    A service composition process typically involves multiple service models. These models may represent the composite and composed services from distinct perspectives, e.g. to model the role of some system that is involved in a service, and at distinct abstraction levels, e.g. to model the goal,

  6. A comparison of linear tyre models for analysing shimmy

    NARCIS (Netherlands)

    Besselink, I.J.M.; Maas, J.W.L.H.; Nijmeijer, H.

    2011-01-01

    A comparison is made between three linear, dynamic tyre models using low speed step responses and yaw oscillation tests. The match with the measurements improves with increasing complexity of the tyre model. Application of the different tyre models to a two degree of freedom trailing arm suspension

  7. Phase I/II trials of {sup 186}Re-HEDP in metastatic castration-resistant prostate cancer: post-hoc analysis of the impact of administered activity and dosimetry on survival

    Energy Technology Data Exchange (ETDEWEB)

    Denis-Bacelar, Ana M.; Chittenden, Sarah J.; Divoli, Antigoni; Flux, Glenn D. [The Institute of Cancer Research and The Royal Marsden Hospital NHS Foundation Trust, Joint Department of Physics, London (United Kingdom); Dearnaley, David P.; Johnson, Bernadette [The Institute of Cancer Research and The Royal Marsden Hospital NHS Foundation Trust, Division of Radiotherapy and Imaging, London (United Kingdom); O' Sullivan, Joe M. [Queen' s University Belfast, Centre for Cancer Research and Cell Biology, Belfast (United Kingdom); McCready, V.R. [Brighton and Sussex University Hospitals NHS Trust, Department of Nuclear Medicine, Brighton (United Kingdom); Du, Yong [The Royal Marsden Hospital NHS Foundation Trust, Department of Nuclear Medicine and PET/CT, London (United Kingdom)

    2017-04-15

    To investigate the role of patient-specific dosimetry as a predictive marker of survival and as a potential tool for individualised molecular radiotherapy treatment planning of bone metastases from castration-resistant prostate cancer, and to assess whether higher administered levels of activity are associated with a survival benefit. Clinical data from 57 patients who received 2.5-5.1 GBq of {sup 186}Re-HEDP as part of NIH-funded phase I/II clinical trials were analysed. Whole-body and SPECT-based absorbed doses to the whole body and bone lesions were calculated for 22 patients receiving 5 GBq. The patient mean absorbed dose was defined as the mean of all bone lesion-absorbed doses in any given patient. Kaplan-Meier curves, log-rank tests, Cox's proportional hazards model and Pearson's correlation coefficients were used for overall survival (OS) and correlation analyses. A statistically significantly longer OS was associated with administered activities above 3.5 GBq in the 57 patients (20.1 vs 7.1 months, hazard ratio: 0.39, 95 % CI: 0.10-0.58, P = 0.002). A total of 379 bone lesions were identified in 22 patients. The mean of the patient mean absorbed dose was 19 (±6) Gy and the mean of the whole-body absorbed dose was 0.33 (±0.11) Gy for the 22 patients. The patient mean absorbed dose (r = 0.65, P = 0.001) and the whole-body absorbed dose (r = 0.63, P = 0.002) showed a positive correlation with disease volume. Significant differences in OS were observed for the univariate group analyses according to disease volume as measured from SPECT imaging of {sup 186}Re-HEDP (P = 0.03) and patient mean absorbed dose (P = 0.01), whilst only the disease volume remained significant in a multivariable analysis (P = 0.004). This study demonstrated that higher administered activities led to prolonged survival and that for a fixed administered activity, the whole-body and patient mean absorbed doses correlated with the extent of disease, which, in turn, correlated
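
    The survival comparison by administered activity reported above can be sketched with Kaplan-Meier estimates and a log-rank test. The patient-level data below are randomly generated stand-ins (not the trial data), and the lifelines package is used as an assumed convenience rather than the software used in the study.

      import numpy as np
      import pandas as pd
      from lifelines import KaplanMeierFitter
      from lifelines.statistics import logrank_test

      # Hypothetical patient-level data: overall survival in months, a death
      # indicator and the administered-activity group (>3.5 GBq vs <=3.5 GBq).
      rng = np.random.default_rng(5)
      df = pd.DataFrame({
          "os_months": np.concatenate([rng.exponential(20, 30), rng.exponential(7, 27)]),
          "died":      rng.integers(0, 2, 57),
          "high_activity": np.array([1] * 30 + [0] * 27),
      })

      high = df[df.high_activity == 1]
      low  = df[df.high_activity == 0]

      # Kaplan-Meier estimate of median OS in each group.
      for name, grp in [("high", high), ("low", low)]:
          km = KaplanMeierFitter().fit(grp.os_months, event_observed=grp.died)
          print(name, "median OS:", km.median_survival_time_)

      # Log-rank test for a difference in survival between the two activity groups.
      res = logrank_test(high.os_months, low.os_months,
                         event_observed_A=high.died, event_observed_B=low.died)
      print("log-rank P =", res.p_value)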

  8. Modelling, singular perturbation and bifurcation analyses of bitrophic food chains.

    Science.gov (United States)

    Kooi, B W; Poggiale, J C

    2018-04-20

    Two predator-prey model formulations are studied: the classical Rosenzweig-MacArthur (RM) model and the Mass Balance (MB) chemostat model. When the growth and loss rates of the predator are much smaller than those of the prey, these models are slow-fast systems, leading mathematically to a singular perturbation problem. In contrast to the RM model, the resource for the prey is modelled explicitly in the MB model, but this comes with additional parameters. These parameter values are chosen such that the two models become easy to compare. Both models exhibit a transcritical bifurcation, a threshold above which invasion of the predator into the prey-only system occurs, and a Hopf bifurcation where the interior equilibrium becomes unstable, leading to a stable limit cycle. The fast-slow limit cycles are called relaxation oscillations, which for increasing differences in time scales lead to the well-known degenerate trajectories consisting of concatenations of slow and fast parts of the trajectory. In the fast-slow version of the RM model a canard explosion of the stable limit cycles occurs in the oscillatory region of the parameter space. To our knowledge this type of dynamics has not previously been observed for the RM model, nor even for more complex ecosystem models. When a bifurcation parameter crosses the Hopf bifurcation point, the amplitude of the emerging stable limit cycles increases. However, depending on the perturbation parameter, the shape of this limit cycle changes abruptly from one consisting of two concatenated slow and fast episodes with small amplitude, to a large-amplitude shape similar to the relaxation oscillation, the well-known degenerate phase trajectory consisting of four episodes (a concatenation of two slow and two fast). The canard explosion point is accurately predicted by using an extended asymptotic expansion technique in the perturbation and bifurcation parameter simultaneously where the small
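
    For orientation, one common dimensionless slow-fast form of the Rosenzweig-MacArthur equations (the exact scaling used in the paper may differ) is

      \frac{dx}{dt} = x(1 - x) - \frac{a x}{1 + b x}\, y,
      \qquad
      \frac{dy}{dt} = \varepsilon \left( \frac{c\, a x}{1 + b x} - m \right) y,

    where x and y are scaled prey and predator densities, a x/(1 + b x) is a Holling type II functional response, c a conversion efficiency, m the predator loss rate, and 0 < ε ≪ 1 the ratio of predator to prey time scales. The singular perturbation analysis concerns the limit ε → 0, where trajectories collapse onto concatenations of slow and fast segments.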

  9. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; van Deursen, A.; Pinzger, M.

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require consistent updates to both the implementation and the feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The

  10. Analysing the Linux kernel feature model changes using FMDiff

    NARCIS (Netherlands)

    Dintzner, N.J.R.; Van Deursen, A.; Pinzger, M.

    2015-01-01

    Evolving a large scale, highly variable system is a challenging task. For such a system, evolution operations often require consistent updates to both the implementation and the feature model. In this context, the evolution of the feature model closely follows the evolution of the system. The

  11. Analysing Models as a Knowledge Technology in Transport Planning

    DEFF Research Database (Denmark)

    Gudmundsson, Henrik

    2011-01-01

    critical analytic literature on knowledge utilization and policy influence. A simple scheme based in this literature is drawn up to provide a framework for discussing the interface between urban transport planning and model use. A successful example of model use in Stockholm, Sweden is used as a heuristic......Models belong to a wider family of knowledge technologies, applied in the transport area. Models sometimes share with other such technologies the fate of not being used as intended, or not at all. The result may be ill-conceived plans as well as wasted resources. Frequently, the blame...... device to illuminate how such an analytic scheme may allow patterns of insight about the use, influence and role of models in planning to emerge. The main contribution of the paper is to demonstrate that concepts and terminologies from knowledge use literature can provide interpretations of significance...

  12. GOTHIC MODEL OF BWR SECONDARY CONTAINMENT DRAWDOWN ANALYSES

    International Nuclear Information System (INIS)

    Hansen, P.N.

    2004-01-01

    This article introduces a GOTHIC version 7.1 model of the Secondary Containment Reactor Building post-LOCA drawdown analysis for a BWR. GOTHIC is an EPRI-sponsored thermal hydraulic code. This analysis is required by the Utility to demonstrate an ability to restore and maintain the Secondary Containment Reactor Building negative pressure condition. The technical and regulatory issues associated with this modeling are presented. The analysis includes the effect of wind, elevation and thermal impacts on pressure conditions. The model includes a multiple volume representation which includes the spent fuel pool. In addition, heat sources and sinks are modeled as one dimensional heat conductors. The leakage into the building is modeled to include both laminar and turbulent behavior, as established by actual plant test data. The GOTHIC code provides components to model heat exchangers used to provide fuel pool cooling as well as area cooling via air coolers. The results of the evaluation are used to demonstrate the time that the Reactor Building is at a pressure that exceeds external conditions. This time period is established with the GOTHIC model based on the worst case pressure conditions on the building. For this time period the Utility must assume the primary containment leakage goes directly to the environment. Once the building pressure is restored below outside conditions the release to the environment can be credited as a filtered release.

  13. Marginal Utility of Conditional Sensitivity Analyses for Dynamic Models

    Science.gov (United States)

    Background/Question/Methods: Dynamic ecological processes may be influenced by many factors. Simulation models that mimic these processes often have complex implementations with many parameters. Sensitivity analyses are subsequently used to identify critical parameters whose uncertai...

  14. A shock absorber model for structure-borne noise analyses

    Science.gov (United States)

    Benaziz, Marouane; Nacivet, Samuel; Thouverez, Fabrice

    2015-08-01

    Shock absorbers are often responsible for undesirable structure-borne noise in cars. The early numerical prediction of this noise in the automobile development process can save time and money and yet remains a challenge for industry. In this paper, a new approach to predicting shock absorber structure-borne noise is proposed; it consists in modelling the shock absorber and including the main nonlinear phenomena responsible for discontinuities in the response. The model set forth herein features: compressible fluid behaviour, nonlinear flow rate-pressure relations, valve mechanical equations and rubber mounts. The piston, base valve and complete shock absorber model are compared with experimental results. Sensitivity of the shock absorber response is evaluated and the most important parameters are classified. The response envelope is also computed. This shock absorber model is able to accurately reproduce local nonlinear phenomena and improves our state of knowledge on potential noise sources within the shock absorber.

  15. Plasma-safety assessment model and safety analyses of ITER

    International Nuclear Information System (INIS)

    Honda, T.; Okazaki, T.; Bartels, H.-H.; Uckan, N.A.; Sugihara, M.; Seki, Y.

    2001-01-01

    A plasma-safety assessment model has been provided on the basis of the plasma physics database of the International Thermonuclear Experimental Reactor (ITER) to analyze events including plasma behavior. The model was implemented in a safety analysis code (SAFALY), which consists of a 0-D dynamic plasma model and a 1-D thermal behavior model of the in-vessel components. Unusual plasma events of ITER, e.g., overfueling, were calculated using the code and plasma burning is found to be self-bounded by operation limits or passively shut down due to impurity ingress from overheated divertor targets. Sudden transition of divertor plasma might lead to failure of the divertor target because of a sharp increase of the heat flux. However, the effects of the aggravating failure can be safely handled by the confinement boundaries. (author)

  16. Modeling theoretical uncertainties in phenomenological analyses for particle physics

    Energy Technology Data Exchange (ETDEWEB)

    Charles, Jerome [CNRS, Aix-Marseille Univ, Universite de Toulon, CPT UMR 7332, Marseille Cedex 9 (France); Descotes-Genon, Sebastien [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Niess, Valentin [CNRS/IN2P3, UMR 6533, Laboratoire de Physique Corpusculaire, Aubiere Cedex (France); Silva, Luiz Vale [CNRS, Univ. Paris-Sud, Universite Paris-Saclay, Laboratoire de Physique Theorique (UMR 8627), Orsay Cedex (France); Univ. Paris-Sud, CNRS/IN2P3, Universite Paris-Saclay, Groupe de Physique Theorique, Institut de Physique Nucleaire, Orsay Cedex (France); J. Stefan Institute, Jamova 39, P. O. Box 3000, Ljubljana (Slovenia)

    2017-04-15

    The determination of the fundamental parameters of the Standard Model (and its extensions) is often limited by the presence of statistical and theoretical uncertainties. We present several models for the latter uncertainties (random, nuisance, external) in the frequentist framework, and we derive the corresponding p values. In the case of the nuisance approach where theoretical uncertainties are modeled as biases, we highlight the important, but arbitrary, issue of the range of variation chosen for the bias parameters. We introduce the concept of adaptive p value, which is obtained by adjusting the range of variation for the bias according to the significance considered, and which allows us to tackle metrology and exclusion tests with a single and well-defined unified tool, which exhibits interesting frequentist properties. We discuss how the determination of fundamental parameters is impacted by the model chosen for theoretical uncertainties, illustrating several issues with examples from quark flavor physics. (orig.)

  17. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.; Mai, Paul Martin; Thingbaijam, Kiran Kumar; Razafindrakoto, H. N. T.; Genton, Marc G.

    2014-01-01

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.

  18. Analysing earthquake slip models with the spatial prediction comparison test

    KAUST Repository

    Zhang, L.

    2014-11-10

    Earthquake rupture models inferred from inversions of geophysical and/or geodetic data exhibit remarkable variability due to uncertainties in modelling assumptions, the use of different inversion algorithms, or variations in data selection and data processing. A robust statistical comparison of different rupture models obtained for a single earthquake is needed to quantify the intra-event variability, both for benchmark exercises and for real earthquakes. The same approach may be useful to characterize (dis-)similarities in events that are typically grouped into a common class of events (e.g. moderate-size crustal strike-slip earthquakes or tsunamigenic large subduction earthquakes). For this purpose, we examine the performance of the spatial prediction comparison test (SPCT), a statistical test developed to compare spatial (random) fields by means of a chosen loss function that describes an error relation between a 2-D field (‘model’) and a reference model. We implement and calibrate the SPCT approach for a suite of synthetic 2-D slip distributions, generated as spatial random fields with various characteristics, and then apply the method to results of a benchmark inversion exercise with known solution. We find the SPCT to be sensitive to different spatial correlation lengths, and different heterogeneity levels of the slip distributions. The SPCT approach proves to be a simple and effective tool for ranking the slip models with respect to a reference model.
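
    The SPCT itself is a specialised statistical test, but the core idea of scoring candidate slip models against a reference field with a pointwise loss function can be illustrated with a short sketch. The arrays and the squared-error loss below are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 2-D slip distributions (hypothetical stand-ins for inverted rupture models).
nx, nz = 40, 20
reference = rng.gamma(shape=2.0, scale=0.5, size=(nz, nx))  # "true" slip, in metres
model_a = reference + rng.normal(0.0, 0.10, size=(nz, nx))  # candidate close to the reference
model_b = reference + rng.normal(0.0, 0.40, size=(nz, nx))  # noisier candidate

def loss_field(model, ref):
    """Pointwise squared-error loss between a candidate model and the reference."""
    return (model - ref) ** 2

# The SPCT works on the difference of two loss fields: where it is systematically
# positive, model_a fits the reference worse than model_b, and vice versa.
loss_a = loss_field(model_a, reference)
loss_b = loss_field(model_b, reference)
diff = loss_a - loss_b

print(f"mean loss, model A: {loss_a.mean():.4f}")
print(f"mean loss, model B: {loss_b.mean():.4f}")
print(f"mean loss difference (A - B): {diff.mean():.4f}")
# A full SPCT would additionally test whether this mean difference departs from
# zero while accounting for the spatial correlation within `diff`.
```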

  19. Business models for telehealth in the US: analyses and insights

    Directory of Open Access Journals (Sweden)

    Pereira F

    2017-02-01

    Full Text Available Francis Pereira, Data Sciences and Operations, Marshall School of Business, University of Southern California, Los Angeles, CA, USA. Abstract: A growing shortage of medical doctors and nurses globally, coupled with increasing life expectancy, is generating greater cost pressures on health care, in the US and globally. In this respect, telehealth can help alleviate these pressures, as well as extend medical services to underserved or unserved areas. However, its relatively slow adoption in the US, as well as in other markets, suggests the presence of barriers and challenges. The use of a business model framework helps identify the value proposition of telehealth as well as these challenges, which include identifying the right revenue model, organizational structure, and, perhaps more importantly, the stakeholders in the telehealth ecosystem. Successful and cost-effective deployment of telehealth requires a redefinition of the ecosystem and a comprehensive review of all benefits and beneficiaries of such a system; hence a reassessment of all the stakeholders that could benefit from such a system, beyond the traditional patient–health provider–insurer model, and thus of “who should pay” for such a system, is needed, and the driving efforts of a “keystone” player in developing this initiative would help. Keywords: telehealth, business model framework, stakeholders, ecosystem, VISOR business model

  20. A Cyber-Attack Detection Model Based on Multivariate Analyses

    Science.gov (United States)

    Sakai, Yuto; Rinsaka, Koichiro; Dohi, Tadashi

    In the present paper, we propose a novel cyber-attack detection model based on two multivariate-analysis methods applied to the audit data observed on a host machine. The statistical techniques used here are the well-known Hayashi's quantification method IV and cluster analysis. We quantify the observed qualitative audit event sequences via the quantification method IV, and collect similar audit event sequences into the same groups based on the cluster analysis. It is shown in simulation experiments that our model can improve cyber-attack detection accuracy in some realistic cases where both normal and attack activities are intermingled.
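
    Hayashi's quantification method IV is not available in common Python libraries, so the sketch below substitutes a generic numerical embedding (token counts reduced with PCA) before the clustering step; the audit event sequences are invented for illustration and the whole pipeline is only a rough analogue of the model described above.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

# Hypothetical audit event sequences observed on a host (event codes as tokens).
sequences = [
    "login read read write logout",
    "login read write write logout",
    "login sudo read write logout",
    "login sudo sudo chmod exec exec",  # suspicious pattern
    "login exec chmod sudo exec exec",  # suspicious pattern
]

# Step 1: quantify the qualitative event sequences as numeric vectors
# (a stand-in for Hayashi's quantification method IV, which is not part of
# standard Python libraries).
counts = CountVectorizer().fit_transform(sequences).toarray().astype(float)
coords = PCA(n_components=2).fit_transform(counts)

# Step 2: group similar sequences with cluster analysis.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)

for seq, label in zip(sequences, labels):
    print(f"cluster {label}: {seq}")
```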

  1. Model analyses for sustainable energy supply under CO2 restrictions

    International Nuclear Information System (INIS)

    Matsuhashi, Ryuji; Ishitani, Hisashi.

    1995-01-01

    This paper aims at clarifying key points for realizing a sustainable energy supply under restrictions on CO2 emissions. For this purpose, the possibility of a solar breeding system is investigated as a key technology for the sustainable energy supply. The authors describe their mathematical model simulating global energy supply and demand over the ultra-long term. Depletion of non-renewable resources and constraints on CO2 emissions are taken into consideration in the model. Computed results have shown that, with appropriate incentives, the present energy system based on non-renewable resources shifts to a system based on renewable resources in the ultra-long term.

  2. Models for transient analyses in advanced test reactors

    International Nuclear Information System (INIS)

    Gabrielli, Fabrizio

    2011-01-01

    Several strategies are developed worldwide to respond to the world's increasing demand for electricity. Modern nuclear facilities are under construction or in the planning phase. In parallel, advanced nuclear reactor concepts are being developed to achieve sustainability, minimize waste, and ensure uranium resources. To optimize the performance of components (fuels and structures) of these systems, significant efforts are under way to design new Material Test Reactor facilities in Europe that employ water as a coolant. Safety provisions and the analyses of severe accidents are key points in the determination of sound designs. In this frame, the SIMMER multiphysics code system is a very attractive tool as it can simulate transients and phenomena within and beyond the design basis in a tightly coupled way. This thesis is primarily focused upon the extension of the SIMMER multigroup cross-sections processing scheme (based on the Bondarenko method) for a proper heterogeneity treatment in the analyses of water-cooled thermal neutron systems. Since the SIMMER code was originally developed for liquid metal-cooled fast reactor analyses, the effect of heterogeneity had been neglected. As a result, the application of the code to water-cooled systems leads to a significant overestimation of the reactivity feedbacks and in turn to non-conservative results. To treat the heterogeneity, the multigroup cross-sections should be computed by properly taking into account the resonance self-shielding effects and the fine intra-cell flux distribution in space group-wise. In this thesis, significant improvements of the SIMMER cross-section processing scheme are described. A new formulation of the background cross-section, based on the Bell and Wigner correlations, is introduced and pre-calculated reduction factors (Effective Mean Chord Lengths) are used to take proper account of the resonance self-shielding effects of non-fuel isotopes. Moreover, pre-calculated parameters are applied

  3. A Formal Model to Analyse the Firewall Configuration Errors

    Directory of Open Access Journals (Sweden)

    T. T. Myo

    2015-01-01

    Full Text Available The firewall is widely known as a brandmauer (security-edge gateway). To provide the demanded security, the firewall has to be appropriately adjusted, i.e. configured. Unfortunately, even skilled administrators may make configuration mistakes, which lower the level of network security and allow undesirable packets to infiltrate the network. The network can be exposed to various threats and attacks; one of the mechanisms used to ensure network security is the firewall. The firewall is a network component which, using a security policy, controls packets passing through the borders of a secured network. The security policy is represented as a set of rules. Packet filters work in stateless mode: they inspect packets as independent objects. Rules take the form (condition, action). The firewall analyses incoming traffic based on the sender and recipient IP addresses, the sender and recipient port numbers, and the protocol used. When a packet meets the conditions of a rule, the action specified in that rule is carried out; it can be allow or deny. The aim of this article is to develop tools to analyse a firewall configuration with inspection of states. The input data is a file with the set of rules. The analysis of the security policy must be presented in an informative graphic form, and inconsistencies in the rules must be revealed. The article presents a security policy visualization algorithm and a program which shows how the firewall rules act on all possible packets. To represent the result in an intelligible form, the concept of an equivalence region is introduced. Our task is a program that displays the results of rule actions on packets in a convenient graphic form and reveals contradictions between the rules. One of the problems is the large number of dimensions. As noted above, the following parameters are specified in a rule: source IP address, destination IP

  4. Theoretical modeling and experimental analyses of laminated wood composite poles

    Science.gov (United States)

    Cheng Piao; Todd F. Shupe; Vijaya Gopu; Chung Y. Hse

    2005-01-01

    Wood laminated composite poles consist of trapezoid-shaped wood strips bonded with synthetic resin. The thick-walled hollow poles had adequate strength and stiffness properties and were a promising substitute for solid wood poles. It was necessary to develop theoretical models to facilitate the manufacture and future installation and maintenance of this novel...

  5. Gene Discovery and Functional Analyses in the Model Plant Arabidopsis

    DEFF Research Database (Denmark)

    Feng, Cai-ping; Mundy, J.

    2006-01-01

    The present mini-review describes newer methods and strategies, including transposon and T-DNA insertions, TILLING, Deleteagene, and RNA interference, to functionally analyze genes of interest in the model plant Arabidopsis. The relative advantages and disadvantages of the systems are also discus...

  6. Capacity allocation in wireless communication networks - models and analyses

    NARCIS (Netherlands)

    Litjens, Remco

    2003-01-01

    This monograph has concentrated on capacity allocation in cellular and Wireless Local Area Networks, primarily from a network operator’s perspective. In the introductory chapter, a reference model has been proposed for the extensive suite of capacity allocation mechanisms that can be applied at

  7. Vegetable parenting practices scale: Item response modeling analyses

    Science.gov (United States)

    Our objective was to evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We al...

  8. Complex accident scenarios modelled and analysed by Stochastic Petri Nets

    International Nuclear Information System (INIS)

    Nývlt, Ondřej; Haugen, Stein; Ferkl, Lukáš

    2015-01-01

    This paper is focused on the usage of Petri nets for effective modelling and simulation of complicated accident scenarios, where the order of events can vary and some events may occur anywhere in an event chain. These cases are hardly manageable by traditional methods such as event trees – e.g. one pivotal event must often be inserted several times into one branch of the tree. Our approach is based on Stochastic Petri Nets with Predicates and Assertions and on an idea which comes from the area of Programmable Logic Controllers: an accident scenario is described as a net of interconnected blocks, which represent parts of the scenario. So the scenario is firstly divided into parts, which are then modelled by Petri nets. Every block can be easily interconnected with other blocks by input/output variables to create complex ones. In the presented approach, every event or part of a scenario is modelled only once, independently of the number of its occurrences in the scenario. The final model is much more transparent than the corresponding event tree. The method is shown in two case studies, where the advanced one contains dynamic behavior. - Highlights: • Event & Fault trees have problems with scenarios where the order of events can vary. • Paper presents a method for modelling and analysis of dynamic accident scenarios. • The presented method is based on Petri nets. • The proposed method solves mentioned problems of traditional approaches. • The method is shown in two case studies: simple and advanced (with dynamic behavior)

  9. Analyses of Lattice Traffic Flow Model on a Gradient Highway

    International Nuclear Information System (INIS)

    Gupta Arvind Kumar; Redhu Poonam; Sharma Sapna

    2014-01-01

    The optimal current difference lattice hydrodynamic model is extended to investigate the traffic flow dynamics on a unidirectional single-lane gradient highway. The effect of slope on an uphill/downhill highway is examined through linear stability analysis, and it is shown that the slope significantly affects the stability region on the phase diagram. Using nonlinear stability analysis, the Burgers, Korteweg-deVries (KdV) and modified Korteweg-deVries (mKdV) equations are derived in the stable, metastable and unstable regions, respectively. The effect of the reaction coefficient is examined, and it is concluded that it plays an important role in suppressing traffic jams on a gradient highway. The theoretical findings have been verified through numerical simulation, which confirms that the slope on a gradient highway significantly influences the traffic dynamics and that traffic jams can be suppressed efficiently by considering the optimal current difference effect in the new lattice model. (nuclear physics)

  10. Aggregated Wind Park Models for Analysing Power System Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Poeller, Markus; Achilles, Sebastian [DIgSILENT GmbH, Gomaringen (Germany)

    2003-11-01

    The increasing amount of wind power generation in European power systems requires stability analysis considering the interaction between wind farms and transmission systems. Dynamics introduced by dispersed wind generators at the distribution level can usually be neglected. However, large on- and offshore wind farms have a considerable influence on power system dynamics and must definitely be considered when analyzing power system dynamics. Compared to conventional power stations, wind power plants consist of a large number of generators of small size. Representing every wind generator individually therefore increases the calculation time of dynamic simulations considerably, so model aggregation techniques should be applied to reduce calculation times. This paper presents aggregated models for wind parks consisting of fixed or variable speed wind generators.

  11. A simulation model for analysing brain structure deformations

    Energy Technology Data Exchange (ETDEWEB)

    Bona, Sergio Di [Institute for Information Science and Technologies, Italian National Research Council (ISTI-CNR), Via G Moruzzi, 1-56124 Pisa (Italy); Lutzemberger, Ludovico [Department of Neuroscience, Institute of Neurosurgery, University of Pisa, Via Roma, 67-56100 Pisa (Italy); Salvetti, Ovidio [Institute for Information Science and Technologies, Italian National Research Council (ISTI-CNR), Via G Moruzzi, 1-56124 Pisa (Italy)

    2003-12-21

    Recent developments of medical software applications, from the simulation to the planning of surgical operations, have revealed the need for modelling human tissues and organs, not only from a geometric point of view but also from a physical one, i.e. soft tissues, rigid body, viscoelasticity, etc. This has given rise to the term 'deformable objects', which refers to objects with a morphology and a physical and mechanical behaviour of their own that reflect their natural properties. In this paper, we propose a model, based upon physical laws, suitable for the realistic manipulation of geometric reconstructions of volumetric data taken from MR and CT scans. In particular, a physically based model of the brain is presented that is able to simulate the evolution of pathological intra-cranial phenomena of different natures, such as haemorrhages, neoplasms, haematomas, etc, and to describe the consequences caused by their volume expansion and the influence they have on the anatomical and neuro-functional structures of the brain.

  12. Sequence Modeling for Analysing Student Interaction with Educational Systems

    DEFF Research Database (Denmark)

    Hansen, Christian; Hansen, Casper; Hjuler, Niklas Oskar Daniel

    2017-01-01

    The analysis of log data generated by online educational systems is an important task for improving the systems, and furthering our knowledge of how students learn. This paper uses previously unseen log data from Edulab, the largest provider of digital learning for mathematics in Denmark...... as exhibiting unproductive student behaviour. Based on our results this student representation is promising, especially for educational systems offering many different learning usages, and offers an alternative to common approaches like modelling student behaviour as a single Markov chain often done...

  13. On model-independent analyses of elastic hadron scattering

    International Nuclear Information System (INIS)

    Avila, R.F.; Campos, S.D.; Menon, M.J.; Montanha, J.

    2007-01-01

    By means of an almost model-independent parametrization for the elastic hadron-hadron amplitude, as a function of the energy and the momentum transfer, we obtain good descriptions of the physical quantities that characterize elastic proton-proton and antiproton-proton scattering (total cross section, ρ parameter and differential cross section). The parametrization is inferred on empirical grounds and selected according to high energy theorems and limits from axiomatic quantum field theory. Based on the predictive character of the approach we present predictions for the above physical quantities at the Brookhaven RHIC, Fermilab Tevatron and CERN LHC energies. (author)

  14. Intensive multifactorial treatment modifies the effect of family history of diabetes on glycaemic control in people with Type 2 diabetes: a post hoc analysis of the ADDITION-Denmark randomized controlled trial.

    Science.gov (United States)

    Eliraqi, G M; Vistisen, D; Lauritzen, T; Sandbaek, A; Jørgensen, M E; Faerch, K

    2015-08-01

    To investigate whether intensive multifactorial treatment can reverse the predisposed adverse phenotype of people with Type 2 diabetes who have a family history of diabetes. Data from the randomized controlled trial ADDITION-Denmark were used. A total of 1441 newly diagnosed patients with diabetes (598 with family history of diabetes) were randomized to intensive treatment or routine care. Family history of diabetes was defined as having one parent and/or sibling with diabetes. Linear mixed-effects models were used to assess the changes in risk factors (BMI, waist circumference, blood pressure, lipids and HbA1c ) after 5 years of follow-up in participants with and without a family history of diabetes. An interaction term between family history of diabetes and treatment group was included in the models to test for a modifying effect of the intervention. All analyses were adjusted for age, sex, baseline value of the risk factor and general practice (random effect). At baseline, participants with a family history of diabetes were younger and had a 1.1 mmol/mol (0.1%) higher HbA1c concentration at the time of diagnosis than those without a family history of diabetes. Family history of diabetes modified the effect of the intervention on changes in HbA1c levels. In the group receiving routine care, participants with a family history of diabetes experienced an improvement in HbA1c concentration that was 3.3 mmol/mol (0.3%) lower than the improvement found in those without a family history of diabetes after 5 years of follow-up. In the intensive treatment group, however, there was no difference in HbA1c concentrations between participants with and without a family history of diabetes after 5 years of treatment. Intensive treatment of diabetes may partly remove the adverse effects of family history of diabetes on glycaemic control. The effect of this improvement on long-term diabetic complications warrants further investigation. © 2015 The Authors. Diabetic Medicine
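
    A minimal sketch of the kind of model described above: a linear mixed-effects model with a family-history-by-treatment interaction and a random intercept for general practice, fitted with statsmodels on simulated data. All variable names, sample sizes and effect sizes below are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 600

# Simulated stand-in for the trial data (all variable names and effects are hypothetical).
df = pd.DataFrame({
    "family_history": rng.integers(0, 2, n),   # 1 = parent and/or sibling with diabetes
    "intensive": rng.integers(0, 2, n),        # 1 = intensive multifactorial treatment arm
    "age": rng.normal(60, 7, n),
    "sex": rng.integers(0, 2, n),
    "hba1c_baseline": rng.normal(48, 6, n),    # mmol/mol
    "practice": rng.integers(0, 40, n),        # general practice (random effect)
})
# Simulated 5-year change in HbA1c: a smaller improvement for family history under
# routine care only, which is the pattern the interaction term should detect.
df["hba1c_change"] = (
    -4.0
    + 3.3 * df["family_history"] * (1 - df["intensive"])
    + rng.normal(0, 5, n)
)

# Linear mixed model: the interaction tests whether treatment modifies the effect
# of family history, with a random intercept per general practice.
result = smf.mixedlm(
    "hba1c_change ~ family_history * intensive + age + sex + hba1c_baseline",
    data=df,
    groups=df["practice"],
).fit()
print(result.summary())
```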

  15. Biofuel market and carbon modeling to analyse French biofuel policy

    International Nuclear Information System (INIS)

    Bernard, F.; Prieur, A.

    2007-01-01

    In order to comply with European Union objectives, France has set up an ambitious biofuel plan. This plan is evaluated on the basis of two criteria: tax exemption on fossil fuels and greenhouse gases (GHG) emission savings. An economic marginal analysis and a life cycle assessment (LCA) are provided using a coupling procedure between a partial agro-industrial equilibrium model and an oil refining optimization model. Thus, we determine the minimum tax exemption needed to place on the market a targeted quantity of biofuel by deducting the biofuel long-run marginal revenue of refiners from the agro-industrial marginal cost of biofuel production. With a clear view of the refiner's economic choices, total pollutant emissions along the biofuel production chains are quantified and used to feed an LCA. The French biofuel plan is evaluated for 2008, 2010 and 2012 using prospective scenarios. Results suggest that biofuel competitiveness depends on crude oil prices and demand for petroleum products and consequently these parameters should be taken into account by authorities to modulate biofuel tax exemption. LCA results show that biofuel production and use, from 'seed to wheel', would facilitate the French Government's compliance with its 'Plan Climat' objectives by reducing up to 5% GHG emissions in the French road transport sector by 2010

  16. Glycomic analyses of mouse models of congenital muscular dystrophy.

    Science.gov (United States)

    Stalnaker, Stephanie H; Aoki, Kazuhiro; Lim, Jae-Min; Porterfield, Mindy; Liu, Mian; Satz, Jakob S; Buskirk, Sean; Xiong, Yufang; Zhang, Peng; Campbell, Kevin P; Hu, Huaiyu; Live, David; Tiemeyer, Michael; Wells, Lance

    2011-06-17

    Dystroglycanopathies are a subset of congenital muscular dystrophies wherein α-dystroglycan (α-DG) is hypoglycosylated. α-DG is an extensively O-glycosylated extracellular matrix-binding protein and a key component of the dystrophin-glycoprotein complex. Previous studies have shown α-DG to be post-translationally modified by both O-GalNAc- and O-mannose-initiated glycan structures. Mutations in defined or putative glycosyltransferase genes involved in O-mannosylation are associated with a loss of ligand-binding activity of α-DG and are causal for various forms of congenital muscular dystrophy. In this study, we sought to perform glycomic analysis on brain O-linked glycan structures released from proteins of three different knock-out mouse models associated with O-mannosylation (POMGnT1, LARGE (Myd), and DAG1(-/-)). Using mass spectrometry approaches, we were able to identify nine O-mannose-initiated and 25 O-GalNAc-initiated glycan structures in wild-type littermate control mouse brains. Through our analysis, we were able to confirm that POMGnT1 is essential for the extension of all observed O-mannose glycan structures with β1,2-linked GlcNAc. Loss of LARGE expression in the Myd mouse had no observable effect on the O-mannose-initiated glycan structures characterized here. Interestingly, we also determined that similar amounts of O-mannose-initiated glycan structures are present on brain proteins from α-DG-lacking mice (DAG1) compared with wild-type mice, indicating that there must be additional proteins that are O-mannosylated in the mammalian brain. Our findings illustrate that classical β1,2-elongation and β1,6-GlcNAc branching of O-mannose glycan structures are dependent upon the POMGnT1 enzyme and that O-mannosylation is not limited solely to α-DG in the brain.

  17. Facet personality and surface-level diversity as team mental model antecedents: implications for implicit coordination.

    Science.gov (United States)

    Fisher, David M; Bell, Suzanne T; Dierdorff, Erich C; Belohlav, James A

    2012-07-01

    Team mental models (TMMs) have received much attention as important drivers of effective team processes and performance. Less is known about the factors that give rise to these shared cognitive structures. We examined potential antecedents of TMMs, with a specific focus on team composition variables, including various facets of personality and surface-level diversity. Further, we examined implicit coordination as an important outcome of TMMs. Results suggest that team composition in terms of the cooperation facet of agreeableness and racial diversity were significantly related to team-focused TMM similarity. TMM similarity was also positively predictive of implicit coordination, which mediated the relationship between TMM similarity and team performance. Post hoc analyses revealed a significant interaction between the trust facet of agreeableness and racial diversity in predicting TMM similarity. Results are discussed in terms of facilitating the emergence of TMMs and corresponding implications for team-related human resource practices. (PsycINFO Database Record (c) 2012 APA, all rights reserved).

  18. Maximum home systolic blood pressure is a useful indicator of arterial stiffness in patients with type 2 diabetes mellitus: post hoc analysis of a cross-sectional multicenter study.

    Science.gov (United States)

    Ushigome, Emi; Fukui, Michiaki; Hamaguchi, Masahide; Tanaka, Toru; Atsuta, Haruhiko; Mogami, Shin-ichi; Tsunoda, Sei; Yamazaki, Masahiro; Hasegawa, Goji; Nakamura, Naoto

    2014-09-01

    Maximum (max) home systolic blood pressure (HSBP), like mean HSBP and HSBP variability, has been reported to add to the prediction of target organ damage. Yet, the association between max HSBP and target organ damage in patients with type 2 diabetes has never been reported. The aim of this study was to investigate the association between max HSBP and pulse wave velocity (PWV), a marker of arterial stiffness which in turn is a marker of target organ damage, in patients with type 2 diabetes. We assessed the relationship of mean HSBP or max HSBP to PWV, and compared the area under the receiver-operating characteristic curve (AUC) of mean HSBP or max HSBP for arterial stiffness in 758 patients with type 2 diabetes. In the univariate analyses, age, duration of diabetes mellitus, body mass index, mean clinic systolic blood pressure (SBP), mean HSBP and max HSBP were associated with PWV. Multivariate linear regression analyses indicated that mean morning SBP (β=0.156, P=0.001) or max morning SBP (β=0.146, P=0.001) were significantly associated with PWV. AUC (95% CI) for arterial stiffness, defined as PWV equal to or more than 1800 cm per second, in mean morning SBP and max morning SBP were 0.622 (0.582-0.662; P<0.001) and 0.631 (0.591-0.670; P<0.001), respectively. Our findings indicate that max HSBP as well as mean HSBP was significantly associated with arterial stiffness in patients with type 2 diabetes. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
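
    A short sketch of the AUC comparison described above, using scikit-learn on simulated home blood-pressure readings; the data-generating assumptions are invented and the resulting AUC values will not match the study's.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 758  # same size as the study cohort, but the data below are simulated

# Simulated morning home SBP readings (14 days per patient, hypothetical values).
readings = rng.normal(135, 12, size=(n, 14)) + rng.normal(0, 8, size=(n, 1))
mean_sbp = readings.mean(axis=1)
max_sbp = readings.max(axis=1)

# Simulated binary arterial-stiffness label (PWV >= 1800 cm/s), loosely driven
# by the underlying blood-pressure level plus noise.
stiff = (0.03 * (mean_sbp - 135) + rng.normal(0, 1, n) > 0.5).astype(int)

print(f"AUC, mean morning SBP: {roc_auc_score(stiff, mean_sbp):.3f}")
print(f"AUC, max morning SBP:  {roc_auc_score(stiff, max_sbp):.3f}")
```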

  19. High titers of both rheumatoid factor and anti-CCP antibodies at baseline in patients with rheumatoid arthritis are associated with increased circulating baseline TNF level, low drug levels, and reduced clinical responses: a post hoc analysis of the RISING study.

    Science.gov (United States)

    Takeuchi, Tsutomu; Miyasaka, Nobuyuki; Inui, Takashi; Yano, Toshiro; Yoshinari, Toru; Abe, Tohru; Koike, Takao

    2017-09-02

    Although both rheumatoid factor (RF) and anticyclic citrullinated peptide antibodies (anti-CCP) are useful for diagnosing rheumatoid arthritis (RA), the impact of these autoantibodies on the efficacy of tumor necrosis factor (TNF) inhibitors has been controversial. The aim of this post hoc analysis of a randomized double-blind study (the RISING study) was to investigate the influences of RF and anti-CCP on the clinical response to infliximab in patients with RA. Methotrexate-refractory patients with RA received 3 mg/kg of infliximab from weeks 0 to 6 and then 3, 6, or 10 mg/kg every 8 weeks from weeks 14 to 46. In this post hoc analysis, patients were stratified into three classes on the basis of baseline RF/anti-CCP titers: "low/low-C" (RF < 55 IU/ml, anti-CCP < 42 U/ml), "high/high-C" (RF ≥ 160 IU/ml, anti-CCP ≥ 100 U/ml), and "middle-C" (neither low/low-C nor high/high-C). Baseline plasma TNF level, serum infliximab level, and disease activity were compared between the three classes. Baseline RF and anti-CCP titers showed significant correlations with baseline TNF and infliximab levels in weeks 2-14. Comparison of the three classes showed that baseline TNF level was lowest in the low/low-C group and highest in the high/high-C group (median 0.73 versus 1.15 pg/ml), that infliximab levels at week 14 were highest in the low/low-C group and lowest in the high/high-C group (median 1.0 versus 0.1 μg/ml), and that Disease Activity Score in 28 joints based on C-reactive protein at week 14 was lowest in the low/low-C group and highest in the high/high-C group (median 3.17 versus 3.82). A similar correlation was observed at week 54 in the 3 mg/kg dosing group, but not in the 6 or 10 mg/kg group. Significant decreases in both RF and anti-CCP were observed during infliximab treatment. RF/anti-CCP titers correlated with TNF level. This might explain the association of RF/anti-CCP with infliximab level and clinical response in patients with RA

  20. Longitudinal Data Analyses Using Linear Mixed Models in SPSS: Concepts, Procedures and Illustrations

    Directory of Open Access Journals (Sweden)

    Daniel T. L. Shek

    2011-01-01

    Full Text Available Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.

  1. Longitudinal data analyses using linear mixed models in SPSS: concepts, procedures and illustrations.

    Science.gov (United States)

    Shek, Daniel T L; Ma, Cecilia M S

    2011-01-05

    Although different methods are available for the analyses of longitudinal data, analyses based on generalized linear models (GLM) are criticized as violating the assumption of independence of observations. Alternatively, linear mixed models (LMM) are commonly used to understand changes in human behavior over time. In this paper, the basic concepts surrounding LMM (or hierarchical linear models) are outlined. Although SPSS is a statistical analyses package commonly used by researchers, documentation on LMM procedures in SPSS is not thorough or user friendly. With reference to this limitation, the related procedures for performing analyses based on LMM in SPSS are described. To demonstrate the application of LMM analyses in SPSS, findings based on six waves of data collected in the Project P.A.T.H.S. (Positive Adolescent Training through Holistic Social Programmes) in Hong Kong are presented.
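
    The SPSS procedures described above have a close Python counterpart; the sketch below fits a random-intercept growth model to simulated six-wave data with statsmodels, purely to illustrate the LMM idea (the wave structure and effect sizes are made up).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n_subjects, n_waves = 200, 6

# Simulated six-wave longitudinal data (a made-up stand-in for the survey waves).
subject = np.repeat(np.arange(n_subjects), n_waves)
wave = np.tile(np.arange(n_waves), n_subjects)
person_effect = rng.normal(0, 1.0, n_subjects)[subject]  # stable between-person differences
score = 10 + 0.5 * wave + person_effect + rng.normal(0, 0.8, n_subjects * n_waves)

df = pd.DataFrame({"subject": subject, "wave": wave, "score": score})

# Linear mixed model: fixed linear growth over waves plus a random intercept per
# subject, so repeated observations on the same person are not treated as
# independent (the assumption an ordinary GLM analysis would violate).
result = smf.mixedlm("score ~ wave", data=df, groups=df["subject"]).fit()
print(result.summary())
```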

  2. Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach

    International Nuclear Information System (INIS)

    2014-12-01

    In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise: · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population, · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included, · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps, · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details, · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and · an overview of the

  3. Provisional safety analyses for SGT stage 2 -- Models, codes and general modelling approach

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2014-12-15

    In the framework of the provisional safety analyses for Stage 2 of the Sectoral Plan for Deep Geological Repositories (SGT), deterministic modelling of radionuclide release from the barrier system along the groundwater pathway during the post-closure period of a deep geological repository is carried out. The calculated radionuclide release rates are interpreted as annual effective dose for an individual and assessed against the regulatory protection criterion 1 of 0.1 mSv per year. These steps are referred to as dose calculations. Furthermore, from the results of the dose calculations so-called characteristic dose intervals are determined, which provide input to the safety-related comparison of the geological siting regions in SGT Stage 2. Finally, the results of the dose calculations are also used to illustrate and to evaluate the post-closure performance of the barrier systems under consideration. The principal objective of this report is to describe comprehensively the technical aspects of the dose calculations. These aspects comprise: · the generic conceptual models of radionuclide release from the solid waste forms, of radionuclide transport through the system of engineered and geological barriers, of radionuclide transfer in the biosphere, as well as of the potential radiation exposure of the population, · the mathematical models for the explicitly considered release and transport processes, as well as for the radiation exposure pathways that are included, · the implementation of the mathematical models in numerical codes, including an overview of these codes and the most relevant verification steps, · the general modelling approach when using the codes, in particular the generic assumptions needed to model the near field and the geosphere, along with some numerical details, · a description of the work flow related to the execution of the calculations and of the software tools that are used to facilitate the modelling process, and · an overview of the

  4. Quality measure attainment with dapagliflozin plus metformin extended-release as initial combination therapy in patients with type 2 diabetes: a post hoc pooled analysis of two clinical studies

    Directory of Open Access Journals (Sweden)

    Bell KF

    2016-10-01

    Full Text Available Kelly F Bell, Arie Katz, John J Sheehan, AstraZeneca, Wilmington, DE, USA. Background: The use of quality measures attempts to improve safety and health outcomes and to reduce costs. In two Phase III trials in treatment-naive patients with type 2 diabetes, dapagliflozin 5 or 10 mg/d as initial combination therapy with metformin extended-release (XR) significantly reduced glycated hemoglobin (A1C) from baseline to 24 weeks and allowed higher proportions of patients to achieve A1C <7% vs dapagliflozin or metformin monotherapy. Objective: A pooled analysis of data from these two studies assessed the effect of dapagliflozin 5 or 10 mg/d plus metformin XR (combination therapy) compared with placebo plus metformin XR (metformin monotherapy) on diabetes quality measures. Quality measures include laboratory measures of A1C and low-density lipoprotein cholesterol (LDL-C) as well as vital status measures of blood pressure (BP) and body mass index (BMI). The proportion of patients achieving A1C, BP, and LDL-C individual and composite measures was assessed, as was the proportion with baseline BMI ≥25 kg/m2 who lost ≥4.5 kg. Subgroup analyses by baseline BMI were also performed. Results: A total of 194 and 211 patients were treated with dapagliflozin 5- or 10-mg/d combination therapy, respectively, and 409 with metformin monotherapy. Significantly higher proportions of patients achieved A1C ≤6.5%, <7%, or <8% with combination therapy vs metformin monotherapy (P<0.02). Significantly higher proportions of patients achieved BP <140/90 mmHg (P<0.02 for each dapagliflozin dose) and BP <130/80 mmHg (P<0.02, dapagliflozin 5 mg/d only) with combination therapy vs metformin monotherapy. Similar proportions (29%–33%) of patients had LDL-C <100 mg/dL across treatment groups. A higher proportion of patients with baseline BMI ≥25 kg/m2 lost ≥4.5 kg with combination therapy. Combination therapy had a more robust effect on patients with higher baseline BMI. Conclusion

  5. Relationship of the adherence to the Mediterranean diet with health-related quality of life and treatment satisfaction in patients with type 2 diabetes mellitus: a post-hoc analysis of a cross-sectional study.

    Science.gov (United States)

    Alcubierre, Nuria; Martinez-Alonso, Montserrat; Valls, Joan; Rubinat, Esther; Traveset, Alicia; Hernández, Marta; Martínez-González, Maria Dolores; Granado-Casas, Minerva; Jurjo, Carmen; Vioque, Jesus; Navarrete-Muñoz, Eva Maria; Mauricio, Didac

    2016-05-04

    The main aim of this study was to assess the association between adherence to the traditional Mediterranean diet (MedDiet) and health-related quality of life (HRQoL) and treatment satisfaction in patients with type 2 diabetes mellitus (T2DM). This cross-sectional study included 294 patients with T2DM (146 with diabetic retinopathy and 148 without retinopathy). HRQoL and treatment satisfaction were assessed with the Audit Diabetes-Dependent Quality of Life and Diabetes Treatment Satisfaction Questionnaires, respectively. Adherence to the MedDiet was evaluated with the relative Mediterranean Diet Score (rMED). The rMED was added to multivariate linear regression models to assess its relative contribution as a quantitative as well as a qualitative variable after recoding to maximize each of the model's coefficients of determination to explain quality of life as well as treatment satisfaction dimensions. The adherence to the Mediterranean diet showed no significant association with the overall quality of life score. However, rMED was associated with some HRQoL dimensions: travels, self-confidence and freedom to eat and drink (p = 0.020, p = 0.015, p = 0.037 and p = 0.015, respectively). Concerning treatment satisfaction, rMED was positively associated with its overall score (p = 0.046), and especially with the understanding of diabetes (p = 0.0004) and treatment recommendation (p = 0.036), as well as with the perceived frequency of hyperglycaemias (p = 0.039). Adherence to the Mediterranean diet was associated with greater treatment satisfaction in patients with T2DM. Although we found no association with overall HRQoL, adherence to this dietary pattern was associated with some quality of life dimensions.

  6. Effect of glatiramer acetate three-times weekly on the evolution of new, active multiple sclerosis lesions into T1-hypointense "black holes": a post hoc magnetic resonance imaging analysis.

    Science.gov (United States)

    Zivadinov, Robert; Dwyer, Michael; Barkay, Hadas; Steinerman, Joshua R; Knappertz, Volker; Khan, Omar

    2015-03-01

    Conversion of active lesions to black holes has been associated with disability progression in subjects with relapsing-remitting multiple sclerosis (RRMS) and represents a complementary approach to evaluating clinical efficacy. The objective of this study was to assess the conversion of new active magnetic resonance imaging (MRI) lesions, identified 6 months after initiating treatment with glatiramer acetate 40 mg/mL three-times weekly (GA40) or placebo, to T1-hypointense black holes in subjects with RRMS. Subjects received GA40 (n = 943) or placebo (n = 461) for 12 months. MRI was obtained at baseline and Months 6 and 12. New lesions were defined as either gadolinium-enhancing T1 or new T2 lesions at Month 6 that were not present at baseline. The adjusted mean numbers of new active lesions at Month 6 converting to black holes at Month 12 were analyzed using a negative binomial model; adjusted proportions of new active lesions at Month 6 converting to black holes at Month 12 were analyzed using a logistic regression model. Of 1,292 subjects with complete MRI data, 433 (50.3 %) GA-treated and 247 (57.2 %) placebo-treated subjects developed new lesions at Month 6. Compared with placebo, GA40 significantly reduced the mean number (0.31 versus 0.45; P = .0258) and proportion (15.8 versus 19.6 %; P = .006) of new lesions converting to black holes. GA significantly reduced conversion of new active lesions to black holes, highlighting the ability of GA40 to prevent tissue damage in RRMS.
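
    A rough sketch of the two model types mentioned above (negative binomial for adjusted lesion counts, logistic regression for conversion proportions), fitted with statsmodels on simulated data; the treatment effect, dispersion and sample sizes are assumptions, not trial values.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 680  # roughly the number of subjects with new lesions at Month 6 (simulated here)

df = pd.DataFrame({"treated": rng.integers(0, 2, n)})  # 1 = GA40, 0 = placebo
# Simulated, overdispersed count of Month-6 lesions converting to black holes by Month 12.
lam = np.exp(-0.8 - 0.35 * df["treated"]) * rng.gamma(2.0, 0.5, n)
df["n_black_holes"] = rng.poisson(lam)

# Negative binomial model for the adjusted mean number of converting lesions.
nb = smf.glm(
    "n_black_holes ~ treated",
    data=df,
    family=sm.families.NegativeBinomial(alpha=1.0),
).fit()
print(nb.summary())

# Logistic regression for the proportion converting (reduced here to a per-subject
# binary indicator for simplicity).
df["any_conversion"] = (df["n_black_holes"] > 0).astype(int)
logit = smf.logit("any_conversion ~ treated", data=df).fit()
print(logit.summary())
```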

  7. On the use of uncertainty analyses to test hypotheses regarding deterministic model predictions of environmental processes

    International Nuclear Information System (INIS)

    Gilbert, R.O.; Bittner, E.A.; Essington, E.H.

    1995-01-01

    This paper illustrates the use of Monte Carlo parameter uncertainty and sensitivity analyses to test hypotheses regarding predictions of deterministic models of environmental transport, dose, risk and other phenomena. The methodology is illustrated by testing whether 238Pu is transferred more readily than 239+240Pu from the gastrointestinal (GI) tract of cattle to their tissues (muscle, liver and blood). This illustration is based on a study wherein beef-cattle grazed for up to 1064 days on a fenced plutonium (Pu)-contaminated arid site in Area 13 near the Nevada Test Site in the United States. Periodically, cattle were sacrificed and their tissues analyzed for Pu and other radionuclides. Conditional sensitivity analyses of the model predictions were also conducted. These analyses indicated that Pu cattle tissue concentrations had the largest impact of any model parameter on the pdf of predicted Pu fractional transfers. Issues that arise in conducting uncertainty and sensitivity analyses of deterministic models are discussed. (author)
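
    A minimal Monte Carlo sketch of the approach: propagate hypothetical parameter distributions through a toy transfer model and read the hypothesis test off the resulting distributions. The distributions and transfer fractions below are invented and are not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(11)
n_draws = 10_000

# Hypothetical parameter distributions for a toy transfer model:
# tissue concentration = daily intake * gut-to-tissue transfer fraction.
intake = rng.lognormal(mean=np.log(50.0), sigma=0.3, size=n_draws)  # Bq/day, invented
f_238 = rng.lognormal(mean=np.log(5e-4), sigma=0.5, size=n_draws)   # 238Pu fraction, invented
f_239 = rng.lognormal(mean=np.log(3e-4), sigma=0.5, size=n_draws)   # 239+240Pu fraction, invented

tissue_238 = intake * f_238
tissue_239 = intake * f_239

# Monte Carlo "hypothesis test": how often does the model predict a higher
# 238Pu than 239+240Pu transfer, given the parameter uncertainty?
p_greater = (tissue_238 > tissue_239).mean()
print(f"P(238Pu transfer > 239+240Pu transfer) = {p_greater:.3f}")
print(f"median predicted 238Pu/239+240Pu ratio = {np.median(tissue_238 / tissue_239):.2f}")
```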

  8. Analyses and simulations in income frame regulation model for the network sector from 2007

    International Nuclear Information System (INIS)

    Askeland, Thomas Haave; Fjellstad, Bjoern

    2007-01-01

    Analyses of the income frame regulation model for the network sector in Norway, introduced on the 1st of January 2007. The model's treatment of the norm cost is evaluated, especially the effect analyses carried out by a so-called Data Envelopment Analysis model. It is argued that there may exist an age lopsidedness in the data set, and that this should and can be corrected in the effect analyses. It is proposed to correct for this by introducing an age parameter in the data set. Analyses of how the calibration effects in the regulation model affect the business' total income frame, as well as each network company's income frame, have been made. It is argued that the calibration, the way it is presented, is not working according to its intention, and should be adjusted in order to provide the sector with the reference rate of return (ml)

  9. Pathway models for analysing and managing the introduction of alien plant pests - an overview and categorization

    NARCIS (Netherlands)

    Douma, J.C.; Pautasso, M.; Venette, R.C.; Robinet, C.; Hemerik, L.; Mourits, M.C.M.; Schans, J.; Werf, van der W.

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative

  10. Relationship between patient-reported outcomes and clinical outcomes in metastatic castration-resistant prostate cancer: post hoc analysis of COU-AA-301 and COU-AA-302.

    Science.gov (United States)

    Cella, D; Traina, S; Li, T; Johnson, K; Ho, K F; Molina, A; Shore, N D

    2018-02-01

    Patient-reported outcomes (PROs) are used to assess benefit-risk in drug development. The relationship between PROs and clinical outcomes is not well understood. We aim to elucidate the relationships between changes in PRO measures and clinical outcomes in metastatic castration-resistant prostate cancer (mCRPC). We investigated relationships between changes in self-reported fatigue, pain, functional well-being (FWB), physical well-being (PWB) and prostate cancer-specific symptoms with overall survival (OS) and radiographic progression-free survival (rPFS) after 6 and 12 months of treatment in COU-AA-301 (N = 1195) or COU-AA-302 (N = 1088). Eligible COU-AA-301 patients had progressed after docetaxel and had Eastern Cooperative Oncology Group performance status (ECOG PS) ≤ 2. Eligible COU-AA-302 patients had no prior chemotherapy and ECOG PS 0 or 1. Patients were treated with abiraterone acetate (1000 mg/day) plus prednisone (10 mg/day) or prednisone alone daily. Association between self-reported fatigue, pain and functional status, and OS and/or rPFS, using pooled data regardless of treatment, was assessed. Cox proportional hazard regression modeled time to death or radiographic progression. In COU-AA-301 patients, PRO improvements were associated with longer OS and longer time to radiographic progression versus worsening or stable PROs. In COU-AA-302 patients, worsening PROs were associated with higher likelihood of radiographic progression (P ≤ 0.025) compared with improved or stable PROs. In multivariate models, worsening PWB remained associated with worse rPFS. The 12-month analysis confirmed the 6-month results. PROs are significantly associated with clinically relevant time-to-event efficacy outcomes in clinical trials and may complement and help predict traditional clinical practice methods for monitoring patients for disease progression. © The Author 2017. Published by Oxford University Press on behalf of the European Society for
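
    A compact sketch of the Cox proportional hazards step, using the lifelines package (assumed to be installed) on simulated data: the binary PRO-improvement covariate, censoring fraction and hazard ratio are all invented for illustration.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(5)
n = 1000

# Simulated stand-in for pooled trial data: 1 = PRO (e.g. fatigue) improved at
# 6 months, 0 = stable or worsened.
pro_improved = rng.integers(0, 2, n)
time_to_event = rng.exponential(scale=24.0, size=n) * np.where(pro_improved == 1, 1.4, 1.0)
event_observed = (rng.random(n) < 0.7).astype(int)  # roughly 30% censored

df = pd.DataFrame({
    "months": time_to_event,
    "event": event_observed,
    "pro_improved": pro_improved,
    "ecog": rng.integers(0, 3, n),
})

# Cox proportional hazards model: a hazard ratio below 1 for pro_improved would
# indicate longer survival among patients whose PRO improved.
cph = CoxPHFitter()
cph.fit(df, duration_col="months", event_col="event")
cph.print_summary()
```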

  11. Efficacy and Safety of Duloxetine in Patients with Chronic Low Back Pain Who Used versus Did Not Use Concomitant Nonsteroidal Anti-Inflammatory Drugs or Acetaminophen: A Post Hoc Pooled Analysis of 2 Randomized, Placebo-Controlled Trials

    Directory of Open Access Journals (Sweden)

    Vladimir Skljarevski

    2012-01-01

    Full Text Available This subgroup analysis assessed the efficacy of duloxetine in patients with chronic low back pain (CLBP) who did or did not use concomitant nonsteroidal anti-inflammatory drugs (NSAIDs) or acetaminophen (APAP). Data were pooled from two 13-week randomized trials in patients with CLBP who were stratified according to NSAID/APAP use at baseline: duloxetine NSAID/APAP user (n = 137), placebo NSAID/APAP user (n = 82), duloxetine NSAID/APAP nonuser (n = 206), and placebo NSAID/APAP nonuser (n = 156). NSAID/APAP users were those patients who took NSAID/APAP for at least 14 days per month during the 3 months prior to study entry. An analysis of covariance model that included therapy, study, baseline NSAID/APAP use (yes/no), and therapy-by-NSAID/APAP subgroup interaction was used to assess the efficacy. The treatment-by-NSAID/APAP use interaction was not statistically significant (P = 0.31), suggesting no substantial evidence of differential efficacy for duloxetine over placebo on pain reduction or improvement in physical function between concomitant NSAID/APAP users and non-users.
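
    A hedged sketch of the analysis-of-covariance model described above, with a therapy-by-NSAID/APAP interaction term, fitted with statsmodels on simulated data; sample size, covariates and the (absent) interaction effect are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(9)
n = 581  # total across the four pooled subgroups; the data below are simulated

df = pd.DataFrame({
    "duloxetine": rng.integers(0, 2, n),   # 1 = duloxetine, 0 = placebo
    "nsaid_user": rng.integers(0, 2, n),   # 1 = NSAID/APAP user at baseline
    "study": rng.integers(0, 2, n),        # pooled from two trials
    "pain_baseline": rng.normal(6.0, 1.5, n),
})
# Simulated 13-week pain change: a drug effect that does not depend on NSAID/APAP
# use, i.e. no true interaction, mirroring the reported finding.
df["pain_change"] = (
    -1.0
    - 0.8 * df["duloxetine"]
    + 0.2 * (df["pain_baseline"] - 6.0)
    + rng.normal(0, 1.5, n)
)

# ANCOVA with a therapy-by-NSAID/APAP interaction; a non-significant interaction
# term suggests no differential efficacy between the user and non-user subgroups.
model = smf.ols(
    "pain_change ~ duloxetine * nsaid_user + C(study) + pain_baseline",
    data=df,
).fit()
print(model.summary())
```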

  12. An IEEE 802.11 EDCA Model with Support for Analysing Networks with Misbehaving Nodes

    Directory of Open Access Journals (Sweden)

    Szott Szymon

    2010-01-01

    Full Text Available We present a novel model of IEEE 802.11 EDCA with support for analysing networks with misbehaving nodes. In particular, we consider backoff misbehaviour. Firstly, we verify the model by extensive simulation analysis and by comparing it to three other IEEE 802.11 models. The results show that our model behaves satisfactorily and outperforms other widely acknowledged models. Secondly, a comparison with simulation results in several scenarios with misbehaving nodes proves that our model performs correctly for these scenarios. The proposed model can, therefore, be considered as an original contribution to the area of EDCA models and backoff misbehaviour.

  13. Prevotella-to-Bacteroides ratio predicts body weight and fat loss success on 24-week diets varying in macronutrient composition and dietary fiber: results from a post-hoc analysis.

    Science.gov (United States)

    Hjorth, Mads F; Blædel, Trine; Bendtsen, Line Q; Lorenzen, Janne K; Holm, Jacob B; Kiilerich, Pia; Roager, Henrik M; Kristiansen, Karsten; Larsen, Lesli H; Astrup, Arne

    2018-05-17

    Individuals with a high pre-treatment bacterial Prevotella-to-Bacteroides (P/B) ratio have been reported to lose more body weight on diets high in fiber than subjects with a low P/B ratio. Therefore, the aim of the present study was to examine potential differences in dietary weight loss responses between participants with low and high P/B. Eighty overweight participants were randomized (52 completed) to a 500 kcal/d energy-deficit diet with a macronutrient composition of 30 energy percentage (E%) fat, 52 E% carbohydrate and 18 E% protein, either high (≈1500 mg calcium/day) or low (≤600 mg calcium/day) in dairy products, for 24 weeks. Body weight, body fat, and dietary intake (by 7-day dietary records) were determined. Individuals were dichotomized according to their pre-treatment P/B ratio, derived from 16S rRNA gene sequencing of collected fecal samples, to test the potential modification of dietary effects using linear mixed models. Independent of the randomized diets, individuals with high P/B lost 3.8 kg (95% CI 1.8; 5.8) more than individuals with low P/B; individuals with high P/B ratio lost 8.3 kg (95% CI 5.8; 10.9), compared with individuals with low P/B ratio [mean difference: 5.1 kg (95% CI 1.7; 8.6, P = 0.003)]. Partial correlation coefficients between fiber intake and weight change were 0.90 among individuals with high P/B ratio and 0.25 (P = 0.29) among individuals with low P/B ratio. Individuals with high P/B lost more body weight and body fat compared to individuals with low P/B, confirming that individuals with a high P/B are more susceptible to weight loss on a diet rich in fiber.
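    A hedged sketch of the analysis strategy described above: dichotomise participants by their pre-treatment P/B ratio and test effect modification with a linear mixed model. The subjects, time points, and effect sizes below are simulated placeholders, not study data.

```python
# Hedged sketch: dichotomise by median P/B ratio and test whether weight-loss
# trajectories over 24 weeks differ between groups (random intercept per subject).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, weeks = 20, [0, 12, 24]
pb_ratio = rng.lognormal(mean=0.0, sigma=1.0, size=n)
group = np.where(pb_ratio > np.median(pb_ratio), "high_PB", "low_PB")

rows = []
for i in range(n):
    slope = -0.25 if group[i] == "high_PB" else -0.10   # kg change per week (toy effect)
    for w in weeks:
        rows.append({"subject": i, "week": w, "group": group[i],
                     "weight_change": slope * w + rng.normal(0, 1.0)})
df = pd.DataFrame(rows)

# The group-by-time interaction tests whether trajectories differ by P/B group.
m = smf.mixedlm("weight_change ~ week * group", df, groups=df["subject"]).fit()
print(m.summary())
```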

  14. Regional analyses of labor markets and demography: a model based Norwegian example.

    Science.gov (United States)

    Stambol, L S; Stolen, N M; Avitsland, T

    1998-01-01

    The authors discuss the regional REGARD model, developed by Statistics Norway to analyze the regional implications of macroeconomic development of employment, labor force, and unemployment. "In building the model, empirical analyses of regional producer behavior in manufacturing industries have been performed, and the relation between labor market development and regional migration has been investigated. Apart from providing a short description of the REGARD model, this article demonstrates the functioning of the model, and presents some results of an application." excerpt

  15. Global analyses of historical masonry buildings: Equivalent frame vs. 3D solid models

    Science.gov (United States)

    Clementi, Francesco; Mezzapelle, Pardo Antonio; Cocchi, Gianmichele; Lenci, Stefano

    2017-07-01

    The paper analyses the seismic vulnerability of two different masonry buildings. It provides both an advanced 3D modelling with solid elements and an equivalent frame modelling. The global structural behaviour and the dynamic properties of the compound have been evaluated using the Finite Element Modelling (FEM) technique, where the nonlinear behaviour of masonry has been taken into account by proper constitutive assumptions. A sensitivity analysis is done to evaluate the effect of the choice of the structural models.

  16. Material model for non-linear finite element analyses of large concrete structures

    NARCIS (Netherlands)

    Engen, Morten; Hendriks, M.A.N.; Øverli, Jan Arve; Åldstedt, Erik; Beushausen, H.

    2016-01-01

    A fully triaxial material model for concrete was implemented in a commercial finite element code. The only required input parameter was the cylinder compressive strength. The material model was suitable for non-linear finite element analyses of large concrete structures. The importance of including

  17. USE OF THE SIMPLE LINEAR REGRESSION MODEL IN MACRO-ECONOMICAL ANALYSES

    Directory of Open Access Journals (Sweden)

    Constantin ANGHELACHE

    2011-10-01

    Full Text Available The article presents the fundamental aspects of linear regression as a toolbox which can be used in macroeconomic analyses. The article describes the estimation of the parameters, the statistical tests used, and homoscedasticity and heteroskedasticity. The use of econometric instruments in macroeconomics is an important factor that guarantees the quality of the models, analyses, results and possible interpretations that can be drawn at this level.
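    As an illustration of the toolbox the article describes, a minimal regression with a heteroskedasticity check is sketched below; the macroeconomic series are synthetic stand-ins, not real data.

```python
# Hedged sketch: simple linear regression plus a Breusch-Pagan test for
# heteroskedasticity, on synthetic macro-style series.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(1)
gdp_growth = rng.normal(2.0, 1.0, 50)                 # hypothetical regressor
consumption_growth = 0.8 * gdp_growth + rng.normal(0, 0.5, 50)

X = sm.add_constant(gdp_growth)
ols = sm.OLS(consumption_growth, X).fit()
print(ols.params, ols.bse)                            # estimates and standard errors

lm_stat, lm_pvalue, f_stat, f_pvalue = het_breuschpagan(ols.resid, X)
print("Breusch-Pagan p-value:", lm_pvalue)            # small p-value suggests heteroskedasticity
```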

  18. Taxing CO2 and subsidising biomass: Analysed in a macroeconomic and sectoral model

    DEFF Research Database (Denmark)

    Klinge Jacobsen, Henrik

    2000-01-01

    This paper analyses the combination of taxes and subsidies as an instrument to enable a reduction in CO2 emission. The objective of the study is to compare recycling of a CO2 tax revenue as a subsidy for biomass use as opposed to traditional recycling such as reduced income or corporate taxation... A model of Denmark's energy supply sector is used to analyse the effect of a CO2 tax combined with using the tax revenue for biomass subsidies. The energy supply model is linked to a macroeconomic model such that the macroeconomic consequences of tax policies can be analysed along with the consequences... for specific sectors such as agriculture. Electricity and heat are produced at heat and power plants utilising fuels which minimise total fuel cost, while the authorities regulate capacity expansion technologies. The effect of fuel taxes and subsidies on fuels is very sensitive to the fuel substitution...

  19. Experimental and Computational Modal Analyses for Launch Vehicle Models considering Liquid Propellant and Flange Joints

    Directory of Open Access Journals (Sweden)

    Chang-Hoon Sim

    2018-01-01

    Full Text Available In this research, modal tests and analyses are performed for a simplified and scaled first-stage model of a space launch vehicle using liquid propellant. This study aims to establish finite element modeling techniques for computational modal analyses by considering the liquid propellant and flange joints of launch vehicles. The modal tests measure the natural frequencies and mode shapes in the first and second lateral bending modes. As the liquid filling ratio increases, the measured frequencies decrease. In addition, as the number of flange joints increases, the measured natural frequencies increase. Computational modal analyses using the finite element method are conducted. The liquid is modeled by the virtual mass method, and the flange joints are modeled using one-dimensional spring elements along with the node-to-node connection. Comparison of the modal test results and predicted natural frequencies shows good or moderate agreement. The correlation between the modal tests and analyses establishes finite element modeling techniques for modeling the liquid propellant and flange joints of space launch vehicles.

  20. Cost-effectiveness of natalizumab vs fingolimod for the treatment of relapsing-remitting multiple sclerosis: analyses in Sweden.

    Science.gov (United States)

    O'Day, Ken; Meyer, Kellie; Stafkey-Mailey, Dana; Watson, Crystal

    2015-04-01

    To assess the cost-effectiveness of natalizumab vs fingolimod over 2 years in relapsing-remitting multiple sclerosis (RRMS) patients and patients with rapidly evolving severe disease in Sweden. A decision analytic model was developed to estimate the incremental cost per relapse avoided of natalizumab and fingolimod from the perspective of the Swedish healthcare system. Modeled 2-year costs in Swedish kronor of treating RRMS patients included drug acquisition costs, administration and monitoring costs, and costs of treating MS relapses. Effectiveness was measured in terms of MS relapses avoided using data from the AFFIRM and FREEDOMS trials for all patients with RRMS and from post-hoc sub-group analyses for patients with rapidly evolving severe disease. Probabilistic sensitivity analyses were conducted to assess uncertainty. The analysis showed that, in all patients with MS, treatment with fingolimod costs less (440,463 Kr vs 444,324 Kr), but treatment with natalizumab results in more relapses avoided (0.74 vs 0.59), resulting in an incremental cost-effectiveness ratio (ICER) of 25,448 Kr per relapse avoided. In patients with rapidly evolving severe disease, natalizumab dominated fingolimod. Results of the sensitivity analysis demonstrate the robustness of the model results. At a willingness-to-pay (WTP) threshold of 500,000 Kr per relapse avoided, natalizumab is cost-effective in >80% of simulations in both patient populations. Limitations include absence of data from direct head-to-head studies comparing natalizumab and fingolimod, use of relapse rate reduction rather than sustained disability progression as the primary model outcome, assumption of 100% adherence to MS treatment, and exclusion of adverse event costs in the model. Natalizumab remains a cost-effective treatment option for patients with MS in Sweden. In the RRMS patient population, the incremental cost per relapse avoided is well below a 500,000 Kr WTP threshold per relapse avoided. In the rapidly
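    The ICER arithmetic underlying the headline figure can be reproduced directly from the numbers quoted above; the small gap to the published 25,448 Kr presumably reflects rounding of the published inputs.

```python
# Worked arithmetic for the incremental cost-effectiveness ratio (ICER).
cost_natalizumab, cost_fingolimod = 444_324, 440_463      # 2-year costs, Kr
relapses_avoided_nat, relapses_avoided_fin = 0.74, 0.59   # per patient over 2 years

icer = (cost_natalizumab - cost_fingolimod) / (relapses_avoided_nat - relapses_avoided_fin)
print(f"ICER ≈ {icer:,.0f} Kr per additional relapse avoided")   # ≈ 25,740 Kr
```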

  1. Present status of theories and data analyses of mathematical models for carcinogenesis

    International Nuclear Information System (INIS)

    Kai, Michiaki; Kawaguchi, Isao

    2007-01-01

    Reviewed here are the basic mathematical models (hazard functions), the present trend of model studies, and models for radiation carcinogenesis. Hazard functions of carcinogenesis are described for the multi-stage model and the 2-event model related to cell dynamics. At present, the age distribution of cancer mortality is analyzed, the relationship between mutation and carcinogenesis is discussed, and models for colorectal carcinogenesis are presented. As for radiation carcinogenesis, the Armitage-Doll model and the generalized MVK (Moolgavkar, Venzon, Knudson, 1971-1990) two-stage clonal expansion model have been applied to analyses of carcinogenesis in A-bomb survivors, uranium miners (Rn exposure), smoking doctors in the UK, and other cases, whose characteristics are discussed. In analyses of A-bomb survivors, the above models are applied to solid tumors and leukemia to see the effect, if any, of stage, age at exposure, time progression, etc. For miners and smokers, the initiation, promotion and progression stages of carcinogenesis are discussed on the basis of the analyses. Other topics include analyses of workers at a Canadian atomic power plant and of patients who underwent radiation therapy. Model analysis can help to understand the carcinogenic process quantitatively rather than merely describe it. (R.T.)
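    For orientation, a commonly quoted approximation for the multistage (Armitage-Doll) hazard reviewed above is sketched below; this is textbook background under the usual small-rate assumption, not a formula quoted from the paper.

```latex
% Armitage-Doll k-stage approximation with stage-transition rates u_1,...,u_k:
h(t) \;\approx\; \frac{u_1 u_2 \cdots u_k}{(k-1)!}\; t^{\,k-1},
\qquad \log h(t) \approx \text{const} + (k-1)\log t .
```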

  2. Comparison of linear measurements and analyses taken from plaster models and three-dimensional images.

    Science.gov (United States)

    Porto, Betina Grehs; Porto, Thiago Soares; Silva, Monica Barros; Grehs, Renésio Armindo; Pinto, Ary dos Santos; Bhandi, Shilpa H; Tonetto, Mateus Rodrigues; Bandéca, Matheus Coelho; dos Santos-Pinto, Lourdes Aparecida Martins

    2014-11-01

    Digital models are an alternative for carrying out analyses and devising treatment plans in orthodontics. The objective of this study was to evaluate the accuracy and the reproducibility of measurements of tooth sizes, interdental distances and analyses of occlusion using plaster models and their digital images. Thirty pairs of plaster models were chosen at random, and the digital images of each plaster model were obtained using a laser scanner (3Shape R-700, 3Shape A/S). With the plaster models, the measurements were taken using a caliper (Mitutoyo Digimatic(®), Mitutoyo (UK) Ltd) and the MicroScribe (MS) 3DX (Immersion, San Jose, Calif). For the digital images, the measurement tools used were those from the O3d software (Widialabs, Brazil). The data obtained were compared statistically using the Dahlberg formula, analysis of variance and the Tukey test (p < 0.05). The majority of the measurements, obtained using the caliper and O3d were identical, and both were significantly different from those obtained using the MS. Intra-examiner agreement was lowest when using the MS. The results demonstrated that the accuracy and reproducibility of the tooth measurements and analyses from the plaster models using the caliper and from the digital models using O3d software were identical.
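    The Dahlberg formula mentioned above is the standard method-error statistic for paired measurements; for reference, its usual textbook form is given below (not copied from the paper), with d_i the difference between the i-th pair of measurements and n the number of pairs.

```latex
% Dahlberg's method error:
D \;=\; \sqrt{\frac{\sum_{i=1}^{n} d_i^{2}}{2n}}
```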

  3. arXiv Statistical Analyses of Higgs- and Z-Portal Dark Matter Models

    CERN Document Server

    Ellis, John; Marzola, Luca; Raidal, Martti

    2018-06-12

    We perform frequentist and Bayesian statistical analyses of Higgs- and Z-portal models of dark matter particles with spin 0, 1/2 and 1. Our analyses incorporate data from direct detection and indirect detection experiments, as well as LHC searches for monojet and monophoton events, and we also analyze the potential impacts of future direct detection experiments. We find acceptable regions of the parameter spaces for Higgs-portal models with real scalar, neutral vector, Majorana or Dirac fermion dark matter particles, and Z-portal models with Majorana or Dirac fermion dark matter particles. In many of these cases, there are interesting prospects for discovering dark matter particles in Higgs or Z decays, as well as dark matter particles weighing $\\gtrsim 100$ GeV. Negative results from planned direct detection experiments would still allow acceptable regions for Higgs- and Z-portal models with Majorana or Dirac fermion dark matter particles.

  4. Vibration tests and analyses of the reactor building model on a small scale

    International Nuclear Information System (INIS)

    Tsuchiya, Hideo; Tanaka, Mitsuru; Ogihara, Yukio; Moriyama, Ken-ichi; Nakayama, Masaaki

    1985-01-01

    The purpose of this paper is to describe the vibration tests and the simulation analyses of a small-scale reactor building model. The model vibration tests were performed to investigate the vibrational characteristics of the combined super-structure and to verify the computer code based on Dr. H. Tajimi's Thin Layered Element Theory, using a uniaxial shaking table (60 cm x 60 cm). The specimens consist of a ground model, three structural models (prestressed concrete containment vessel, inner concrete structure, and enclosure building), a combined structural model and a combined structure-soil interaction model. These models are made of silicone rubber and have a scale of 1:600. Harmonic step-by-step excitation of 40 gals was performed to investigate the vibrational characteristics of each structural model. The responses of the specimens to harmonic excitation were measured by optical displacement meters and analyzed by a real-time spectrum analyzer. The resonance and phase-lag curves of the specimens relative to the shaking table were obtained. In the tests of the combined structure-soil interaction model, three predominant frequencies were observed in the resonance curves. These values were in good agreement with the analytical transfer function curves from the computer code. From the vibration tests and the simulation analyses, the silicone-rubber model test is useful for the fundamental study of structural problems, and the computer code based on the Thin Layered Element Theory can simulate the test results well. (Kobozono, M.)

  5. Comparison of plasma input and reference tissue models for analysing [(11)C]flumazenil studies

    NARCIS (Netherlands)

    Klumpers, Ursula M. H.; Veltman, Dick J.; Boellaard, Ronald; Comans, Emile F.; Zuketto, Cassandra; Yaqub, Maqsood; Mourik, Jurgen E. M.; Lubberink, Mark; Hoogendijk, Witte J. G.; Lammertsma, Adriaan A.

    2008-01-01

    A single-tissue compartment model with plasma input is the established method for analysing [(11)C]flumazenil ([(11)C]FMZ) studies. However, arterial cannulation and measurement of metabolites are time-consuming. Therefore, a reference tissue approach is appealing, but this approach has not been

  6. Kinetic analyses and mathematical modeling of primary photochemical and photoelectrochemical processes in plant photosystems

    NARCIS (Netherlands)

    Vredenberg, W.J.

    2011-01-01

    In this paper the model and simulation of primary photochemical and photo-electrochemical reactions in dark-adapted intact plant leaves is presented. A descriptive algorithm has been derived from analyses of variable chlorophyll a fluorescence and P700 oxidation kinetics upon excitation with

  7. Analysing and controlling the tax evasion dynamics via majority-vote model

    Energy Technology Data Exchange (ETDEWEB)

    Lima, F W S, E-mail: fwslima@gmail.co, E-mail: wel@ufpi.edu.b [Departamento de Fisica, Universidade Federal do PiauI, 64049-550, Teresina - PI (Brazil)

    2010-09-01

    Within the context of agent-based Monte Carlo simulations, we study the well-known majority-vote model (MVM) with noise applied to tax evasion on simple square lattices, Voronoi-Delaunay random lattices, Barabasi-Albert networks, and Erdoes-Renyi random graphs. In order to analyse and control the fluctuations of tax evasion in the economics model proposed by Zaklan, the MVM is applied in the neighborhood of the critical noise q_c to evolve the Zaklan model. The Zaklan model had recently been studied using the equilibrium Ising model. Here we show that the Zaklan model is robust, because it can be studied using the equilibrium dynamics of the Ising model as well as the nonequilibrium MVM, on all of the topologies cited above, giving the same behavior regardless of the dynamics or topology used.

  8. Analysing and controlling the tax evasion dynamics via majority-vote model

    International Nuclear Information System (INIS)

    Lima, F W S

    2010-01-01

    Within the context of agent-based Monte Carlo simulations, we study the well-known majority-vote model (MVM) with noise applied to tax evasion on simple square lattices, Voronoi-Delaunay random lattices, Barabasi-Albert networks, and Erdoes-Renyi random graphs. In order to analyse and control the fluctuations of tax evasion in the economics model proposed by Zaklan, the MVM is applied in the neighborhood of the critical noise q_c to evolve the Zaklan model. The Zaklan model had recently been studied using the equilibrium Ising model. Here we show that the Zaklan model is robust, because it can be studied using the equilibrium dynamics of the Ising model as well as the nonequilibrium MVM, on all of the topologies cited above, giving the same behavior regardless of the dynamics or topology used.
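    A minimal sketch of the majority-vote dynamics with noise q that both records above build on, on a square lattice with periodic boundaries; the Zaklan-model tax/audit bookkeeping is omitted, and the quoted critical noise is the value commonly reported in the literature, not taken from these papers.

```python
# Hedged sketch of the majority-vote model with noise q on an L x L square lattice.
import numpy as np

def mvm_sweep(spins, q, rng):
    """One Monte Carlo sweep of the majority-vote model with noise parameter q."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        neigh = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                 + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        majority = np.sign(neigh)
        if majority == 0:                       # tie: choose a state at random
            spins[i, j] = rng.choice([-1, 1])
        elif rng.random() < q:                  # noise: go against the local majority
            spins[i, j] = -majority
        else:                                   # otherwise follow the local majority
            spins[i, j] = majority
    return spins

rng = np.random.default_rng(42)
L, q = 32, 0.05                                 # q below the commonly reported critical noise (~0.075)
spins = rng.choice([-1, 1], size=(L, L))
for sweep in range(200):
    mvm_sweep(spins, q, rng)
print("magnetisation per site:", spins.mean())  # |m| near 1 in the ordered phase
```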

  9. Nurses' intention to leave: critically analyse the theory of reasoned action and organizational commitment model.

    Science.gov (United States)

    Liou, Shwu-Ru

    2009-01-01

    To systematically analyse the Organizational Commitment model and Theory of Reasoned Action and determine concepts that can better explain nurses' intention to leave their job. The Organizational Commitment model and Theory of Reasoned Action have been proposed and applied to understand intention to leave and turnover behaviour, which are major contributors to nursing shortage. However, the appropriateness of applying these two models in nursing was not analysed. Three main criteria of a useful model were used for the analysis: consistency in the use of concepts, testability and predictability. Both theories use concepts consistently. Concepts in the Theory of Reasoned Action are defined broadly whereas they are operationally defined in the Organizational Commitment model. Predictability of the Theory of Reasoned Action is questionable whereas the Organizational Commitment model can be applied to predict intention to leave. A model was proposed based on this analysis. Organizational commitment, intention to leave, work experiences, job characteristics and personal characteristics can be concepts for predicting nurses' intention to leave. Nursing managers may consider nurses' personal characteristics and experiences to increase their organizational commitment and enhance their intention to stay. Empirical studies are needed to test and cross-validate the re-synthesized model for nurses' intention to leave their job.

  10. A model finite-element to analyse the mechanical behavior of a PWR fuel rod

    International Nuclear Information System (INIS)

    Galeao, A.C.N.R.; Tanajura, C.A.S.

    1988-01-01

    A model to analyse the mechanical behavior of a PWR fuel rod is presented. We focus on the phenomenon of pellet-pellet and pellet-cladding contact by taking advantage of an elastic model which includes the effects of thermal gradients, cladding internal and external pressures, swelling and initial relocation. The problem of contact gives rise to a variational formulation which employs Lagrangian multipliers. An iterative scheme is constructed and the finite element method is applied to obtain the numerical solution. Some results and comments are presented to examine the performance of the model. (author) [pt

  11. Analysing, Interpreting, and Testing the Invariance of the Actor-Partner Interdependence Model

    Directory of Open Access Journals (Sweden)

    Gareau, Alexandre

    2016-09-01

    Full Text Available Although in recent years researchers have begun to utilize dyadic data analyses such as the actor-partner interdependence model (APIM), certain limitations to the applicability of these models still exist. Given the complexity of APIMs, most researchers will often use observed scores to estimate the model's parameters, which can significantly limit and underestimate statistical results. The aim of this article is to highlight the importance of conducting a confirmatory factor analysis (CFA) of equivalent constructs between dyad members (i.e., measurement equivalence/invariance; ME/I). Different steps for merging CFA and APIM procedures will be detailed in order to shed light on new and integrative methods.

  12. Tests and analyses of 1/4-scale upgraded nine-bay reinforced concrete basement models

    International Nuclear Information System (INIS)

    Woodson, S.C.

    1983-01-01

    Two nine-bay prototype structures, a flat plate and two-way slab with beams, were designed in accordance with the 1977 ACI code. A 1/4-scale model of each prototype was constructed, upgraded with timber posts, and statically tested. The development of the timber posts placement scheme was based upon yield-line analyses, punching shear evaluation, and moment-thrust interaction diagrams of the concrete slab sections. The flat plate model and the slab with beams model withstood approximate overpressures of 80 and 40 psi, respectively, indicating that required hardness may be achieved through simple upgrading techniques

  13. Demographic origins of skewed operational and adult sex ratios: perturbation analyses of two-sex models.

    Science.gov (United States)

    Veran, Sophie; Beissinger, Steven R

    2009-02-01

    Skewed sex ratios - operational (OSR) and adult (ASR) - arise from sexual differences in reproductive behaviours and adult survival rates due to the cost of reproduction. However, a skewed sex ratio at birth, sex-biased dispersal and immigration, and sexual differences in juvenile mortality may also contribute. We present a framework to decompose the roles of demographic traits in sex ratios using perturbation analyses of two-sex matrix population models. Metrics of sensitivity are derived from analyses of sensitivity, elasticity, life-table response experiments and life stage simulation analyses, and applied to the stable stage distribution instead of lambda. We use these approaches to examine causes of male-biased sex ratios in two populations of green-rumped parrotlets (Forpus passerinus) in Venezuela. Female local juvenile survival contributed the most to the unbalanced OSR and ASR due to a female-biased dispersal rate, suggesting that sexual differences in philopatry can influence sex ratios more strongly than the cost of reproduction.
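    A hedged sketch of the general idea: build a small two-sex projection matrix, take the stable stage distribution, and numerically perturb vital rates to see which ones the adult sex ratio responds to most. The stage structure and rates below are invented for illustration and are not the parrotlet model.

```python
# Hedged sketch: stable stage distribution of a toy two-sex matrix model and
# finite-difference sensitivities of the adult sex ratio (ASR) to vital rates.
import numpy as np

def projection_matrix(p):
    f, sjf, sjm, saf, sam = p["fec"], p["juv_surv_f"], p["juv_surv_m"], p["ad_surv_f"], p["ad_surv_m"]
    # Stages: juvenile female, juvenile male, adult female, adult male
    return np.array([
        [0.0, 0.0, 0.5 * f, 0.0],   # daughters produced per adult female
        [0.0, 0.0, 0.5 * f, 0.0],   # sons produced per adult female
        [sjf, 0.0, saf,     0.0],   # female juvenile-to-adult transition and adult survival
        [0.0, sjm, 0.0,     sam],   # male   juvenile-to-adult transition and adult survival
    ])

def adult_sex_ratio(p):
    A = projection_matrix(p)
    eigvals, eigvecs = np.linalg.eig(A)
    w = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)   # stable stage distribution
    w /= w.sum()
    return w[3] / (w[2] + w[3])                            # proportion of adults that are male

base = {"fec": 2.0, "juv_surv_f": 0.3, "juv_surv_m": 0.4, "ad_surv_f": 0.6, "ad_surv_m": 0.7}
asr0 = adult_sex_ratio(base)
for k in base:
    bumped = dict(base, **{k: base[k] * 1.01})             # 1% perturbation
    print(k, (adult_sex_ratio(bumped) - asr0) / (0.01 * base[k]))  # finite-difference sensitivity
```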

  14. Beta-Poisson model for single-cell RNA-seq data analyses.

    Science.gov (United States)

    Vu, Trung Nghia; Wills, Quin F; Kalari, Krishna R; Niu, Nifang; Wang, Liewei; Rantalainen, Mattias; Pawitan, Yudi

    2016-07-15

    Single-cell RNA-sequencing technology allows detection of gene expression at the single-cell level. One typical feature of the data is a bimodality in the cellular distribution even for highly expressed genes, primarily caused by a proportion of non-expressing cells. The standard and the over-dispersed gamma-Poisson models that are commonly used in bulk-cell RNA-sequencing are not able to capture this property. We introduce a beta-Poisson mixture model that can capture the bimodality of the single-cell gene expression distribution. We further integrate the model into the generalized linear model framework in order to perform differential expression analyses. The whole analytical procedure is called BPSC. The results from several real single-cell RNA-seq datasets indicate that ∼90% of the transcripts are well characterized by the beta-Poisson model; the model-fit from BPSC is better than the fit of the standard gamma-Poisson model in > 80% of the transcripts. Moreover, in differential expression analyses of simulated and real datasets, BPSC performs well against edgeR, a conventional method widely used in bulk-cell RNA-sequencing data, and against scde and MAST, two recent methods specifically designed for single-cell RNA-seq data. An R package BPSC for model fitting and differential expression analyses of single-cell RNA-seq data is available under GPL-3 license at https://github.com/nghiavtr/BPSC CONTACT: yudi.pawitan@ki.se or mattias.rantalainen@ki.se Supplementary data are available at Bioinformatics online. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
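    A hedged sketch of beta-Poisson sampling as a data-generating process (cell-specific rates drawn from a Beta distribution, counts from a Poisson); the parameter values are illustrative assumptions and are not BPSC defaults.

```python
# Hedged sketch: each cell's rate is lambda_max scaled by a Beta(alpha, beta)
# draw, and the observed count is Poisson with that rate.
import numpy as np

rng = np.random.default_rng(7)
n_cells, lam_max, alpha, beta = 2000, 50.0, 0.4, 1.5

rates = lam_max * rng.beta(alpha, beta, size=n_cells)   # cell-specific expression rates
counts = rng.poisson(rates)                             # observed single-cell counts

# With alpha < 1 many cells draw near-zero rates, reproducing the bimodal
# "expressing vs non-expressing" pattern described in the abstract.
print("fraction of cells with zero counts:", np.mean(counts == 0))
print("mean count among expressing cells:", counts[counts > 0].mean())
```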

  15. Quantifying and Analysing Neighbourhood Characteristics Supporting Urban Land-Use Modelling

    DEFF Research Database (Denmark)

    Hansen, Henning Sten

    2009-01-01

    Land-use modelling and spatial scenarios have gained increased attention as a means to meet the challenge of reducing uncertainty in spatial planning and decision-making. Several organisations have developed software for land-use modelling. Many of the recent modelling efforts incorporate... cellular automata (CA) to accomplish spatially explicit land-use change modelling. Spatial interaction between neighbouring land uses is an important component in urban cellular automata. Nevertheless, this component is usually calibrated through trial-and-error estimation. The aim of the current research project has... been to quantify and analyse land-use neighbourhood characteristics and impart useful information for cell-based land-use modelling. The results of our research are a major step forward, because we have estimated rules for neighbourhood interaction from actually observed land-use changes on a yearly basis...

  16. Evaluation of Uncertainties in hydrogeological modeling and groundwater flow analyses. Model calibration

    International Nuclear Information System (INIS)

    Ijiri, Yuji; Ono, Makoto; Sugihara, Yutaka; Shimo, Michito; Yamamoto, Hajime; Fumimura, Kenichi

    2003-03-01

    This study involves the evaluation of uncertainty in hydrogeological modeling and groundwater flow analysis. Three-dimensional groundwater flow at the Shobasama site in Tono was analyzed using two continuum models and one discontinuous model. The domain of this study covered an area of four kilometers in the east-west direction and six kilometers in the north-south direction. Moreover, to evaluate how the uncertainties included in the modeling of the hydrogeological structure and in the results of the groundwater simulations decreased as the investigations progressed, the models were updated and calibrated for several hydrogeological modeling and groundwater flow analysis techniques, based on newly acquired information and knowledge. The acquired knowledge is as follows. When parameters and structures were set in the model updates following last year's work, there was no large difference in handling between the modeling methods. Model calibration was performed by matching numerical simulations to observations of the pressure response caused by the opening and closing of a packer in the MIU-2 borehole. Each analysis technique reduced the residual sum of squares between the observations and the numerical simulation results by adjusting hydrogeological parameters. However, each model adjusted different parameters, such as water conductivity, effective porosity, specific storage, and anisotropy. When calibrating models, it is sometimes impossible to explain the phenomena only by adjusting parameters. In such cases, further investigation may be required to clarify details of the hydrogeological structure. Comparing the research from its beginning to this year, the following conclusions about the investigations were obtained. (1) Transient hydraulic data are an effective means of reducing the uncertainty of the hydrogeological structure. (2) Effective porosity for calculating pore water velocity of

  17. Modular 3-D solid finite element model for fatigue analyses of a PWR coolant system

    International Nuclear Information System (INIS)

    Garrido, Oriol Costa; Cizelj, Leon; Simonovski, Igor

    2012-01-01

    Highlights: ► A 3-D model of a reactor coolant system for fatigue usage assessment. ► The performed simulations are heat transfer and stress analyses. ► The main results are the expected ranges of fatigue loadings. - Abstract: The extension of operational licenses of second generation pressurized water reactor (PWR) nuclear power plants depends to a large extent on the analyses of fatigue usage of the reactor coolant pressure boundary. The reliable estimation of the fatigue usage requires detailed thermal and stress analyses of the affected components. Analyses based upon the in-service transient loads should be compared to the loads analyzed at the design stage. The thermal and stress transients can be efficiently analyzed using the finite element method. This requires that a 3-D solid model of a given system is discretized with finite elements (FE). The FE mesh density is crucial for both the accuracy and the cost of the analysis. The main goal of the paper is to propose a set of computational tools which assist a user in deploying a modular spatial FE model of the main components of a typical reactor coolant system, e.g., pipes, pressure vessels and pumps. The modularity ensures that the components can be analyzed individually or in a system. Also, individual components can be meshed with different mesh densities, as required by the specifics of the particular transient studied. For optimal accuracy, all components are meshed with hexahedral elements with quadratic interpolation. The performance of the model is demonstrated with simulations performed with a complete two-loop PWR coolant system (RCS). Heat transfer analysis and stress analysis for a complete loading and unloading cycle of the RCS are performed. The main results include expected ranges of fatigue loading for the pipelines and coolant pump components under the given conditions.

  18. Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts.

    Energy Technology Data Exchange (ETDEWEB)

    Sevougian, S. David; Freeze, Geoffrey A.; Gardner, William Payton; Hammond, Glenn Edward; Mariner, Paul

    2014-09-01

    directly, rather than through simplified abstractions. It also allows for complex representations of the source term, e.g., the explicit representation of many individual waste packages (i.e., meter-scale detail of an entire waste emplacement drift). This report fulfills the Generic Disposal System Analysis Work Package Level 3 Milestone - Performance Assessment Modeling and Sensitivity Analyses of Generic Disposal System Concepts (M3FT-14SN0808032).

  19. An open-population hierarchical distance sampling model

    Science.gov (United States)

    Sollmann, Rachel; Gardner, Beth; Chandler, Richard B.; Royle, J. Andrew; Sillett, T. Scott

    2015-01-01

    Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for direct estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for island scrub-jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying number of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.

  20. An open-population hierarchical distance sampling model.

    Science.gov (United States)

    Sollmann, Rahel; Gardner, Beth; Chandler, Richard B; Royle, J Andrew; Sillett, T Scott

    2015-02-01

    Modeling population dynamics while accounting for imperfect detection is essential to monitoring programs. Distance sampling allows estimating population size while accounting for imperfect detection, but existing methods do not allow for estimation of demographic parameters. We develop a model that uses temporal correlation in abundance arising from underlying population dynamics to estimate demographic parameters from repeated distance sampling surveys. Using a simulation study motivated by designing a monitoring program for Island Scrub-Jays (Aphelocoma insularis), we investigated the power of this model to detect population trends. We generated temporally autocorrelated abundance and distance sampling data over six surveys, using population rates of change of 0.95 and 0.90. We fit the data generating Markovian model and a mis-specified model with a log-linear time effect on abundance, and derived post hoc trend estimates from a model estimating abundance for each survey separately. We performed these analyses for varying numbers of survey points. Power to detect population changes was consistently greater under the Markov model than under the alternatives, particularly for reduced numbers of survey points. The model can readily be extended to more complex demographic processes than considered in our simulations. This novel framework can be widely adopted for wildlife population monitoring.
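    A hedged sketch of the data-generating side of such a design: abundance at survey points evolving with a constant rate of change, and half-normal detection as a function of distance from the point. All values below are illustrative assumptions, and the fitting step (the hierarchical model itself) is not shown.

```python
# Hedged sketch: simulate repeated point-count distance sampling data with
# Markovian population dynamics between surveys.
import numpy as np

rng = np.random.default_rng(3)
n_points, n_surveys = 100, 6
lam0, rate_of_change = 8.0, 0.90          # initial expected abundance, population trend
sigma, truncation = 60.0, 150.0           # half-normal detection scale and truncation distance (m)

N = rng.poisson(lam0, size=n_points)      # abundance at each point, survey 1
observed = []
for t in range(n_surveys):
    dists = [truncation * np.sqrt(rng.random(n)) for n in N]          # uniform in area around each point
    detected = [d[rng.random(d.size) < np.exp(-d**2 / (2 * sigma**2))] for d in dists]
    observed.append(detected)
    N = rng.poisson(rate_of_change * N)   # Markovian dynamics to the next survey

print("mean detections per point, surveys 1 and 6:",
      np.mean([len(d) for d in observed[0]]), np.mean([len(d) for d in observed[-1]]))
```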

  1. Generic uncertainty model for DETRA for environmental consequence analyses. Application and sample outputs

    International Nuclear Information System (INIS)

    Suolanen, V.; Ilvonen, M.

    1998-10-01

    The computer model DETRA applies a dynamic compartment modelling approach. The compartment structure of each considered application can be tailored individually. This flexible modelling method makes it possible to consider the transfer of radionuclides in various cases: the aquatic environment and related food chains, the terrestrial environment, food chains in general and foodstuffs, body burden analyses of humans, etc. In a former study on this subject, modernization of the user interface of the DETRA code was carried out. The new interface works in a Windows environment and the usability of the code has been improved. The objective of this study has been to further develop and diversify the user interface so that probabilistic uncertainty analyses can also be performed by DETRA. The most common probability distributions are available: uniform, truncated Gaussian and triangular. The corresponding logarithmic distributions are also available. All input data related to a considered case can be varied, although this option is seldom needed. The calculated output values can be selected as monitored values at certain simulation time points defined by the user. The results of a sensitivity run are immediately available after simulation as graphical presentations. These outcomes are distributions generated for the varied parameters, density functions of monitored parameters and complementary cumulative distribution functions (CCDF). An application considered in connection with this work was the estimation of the contamination of milk caused by radioactive deposition of Cs (10 kBq(Cs-137)/m2). The multi-sequence calculation model applied consisted of a pasture modelling part and a dormant-season modelling part. These two sequences were linked periodically, simulating the realistic practice of caring for domestic animals in Finland. The most important parameters were varied in this exercise. The diversification of the DETRA user interface performed here seems to provide an easily

  2. A response-modeling alternative to surrogate models for support in computational analyses

    International Nuclear Information System (INIS)

    Rutherford, Brian

    2006-01-01

    Often, the objectives in a computational analysis involve characterization of system performance based on some function of the computed response. In general, this characterization includes (at least) an estimate or prediction for some performance measure and an estimate of the associated uncertainty. Surrogate models can be used to approximate the response in regions where simulations were not performed. For most surrogate modeling approaches, however, (1) estimates are based on smoothing of available data and (2) uncertainty in the response is specified in a point-wise (in the input space) fashion. These aspects of the surrogate model construction might limit their capabilities. One alternative is to construct a probability measure, G(r), for the computer response, r, based on available data. This 'response-modeling' approach will permit probability estimation for an arbitrary event, E(r), based on the computer response. In this general setting, event probabilities can be computed: prob(E) = ∫_r I(E(r)) dG(r), where I is the indicator function. Furthermore, one can use G(r) to calculate an induced distribution on a performance measure, pm. For prediction problems where the performance measure is a scalar, its distribution F_pm is determined by F_pm(z) = ∫_r I(pm(r) ≤ z) dG(r). We introduce response models for scalar computer output and then generalize the approach to more complicated responses that utilize multiple response models

  3. BWR Mark III containment analyses using a GOTHIC 8.0 3D model

    International Nuclear Information System (INIS)

    Jimenez, Gonzalo; Serrano, César; Lopez-Alonso, Emma; Molina, M del Carmen; Calvo, Daniel; García, Javier; Queral, César; Zuriaga, J. Vicente; González, Montserrat

    2015-01-01

    Highlights: • The development of a 3D GOTHIC code model of BWR Mark-III containment is described. • Suppression pool modelling based on the POOLEX STB-20 and STB-16 experimental tests. • LOCA and SBO transients simulated to verify the behaviour of the 3D GOTHIC model. • Comparison between the 3D GOTHIC model and MAAP4.07 model is conducted. • Accurate reproduction of pre severe accident conditions with the 3D GOTHIC model. - Abstract: The purpose of this study is to establish a detailed three-dimensional model of the Cofrentes NPP BWR/6 Mark III containment building using the containment code GOTHIC 8.0. This paper presents the model construction, the phenomenology tests conducted and the transients selected for the model evaluation. In order to study the proper settings for the model in the suppression pool, two experiments conducted with the experimental installation POOLEX have been simulated, allowing proper behaviour of the model to be obtained under different suppression pool phenomena. In the transient analyses, a Loss of Coolant Accident (LOCA) and a Station Blackout (SBO) transient have been simulated. The main results of the simulations of those transients were qualitatively compared with the results obtained from simulations with the MAAP 4.07 Cofrentes NPP model, used by the plant for simulating severe accidents. From this comparison, a verification of the model in terms of pressurization, asymmetric discharges and high pressure release was obtained. This model has proved to adequately simulate the thermal-hydraulic phenomena which occur in the containment during accident sequences

  4. Using Weather Data and Climate Model Output in Economic Analyses of Climate Change

    Energy Technology Data Exchange (ETDEWEB)

    Auffhammer, M.; Hsiang, S. M.; Schlenker, W.; Sobel, A.

    2013-06-28

    Economists are increasingly using weather data and climate model output in analyses of the economic impacts of climate change. This article introduces a set of weather data sets and climate models that are frequently used, discusses the most common mistakes economists make in using these products, and identifies ways to avoid these pitfalls. We first provide an introduction to weather data, including a summary of the types of datasets available, and then discuss five common pitfalls that empirical researchers should be aware of when using historical weather data as explanatory variables in econometric applications. We then provide a brief overview of climate models and discuss two common and significant errors often made by economists when climate model output is used to simulate the future impacts of climate change on an economic outcome of interest.

  5. Risk Factor Analyses for the Return of Spontaneous Circulation in the Asphyxiation Cardiac Arrest Porcine Model

    Directory of Open Access Journals (Sweden)

    Cai-Jun Wu

    2015-01-01

    Full Text Available Background: Animal models of asphyxiation cardiac arrest (ACA) are frequently used in basic research to mirror the clinical course of cardiac arrest (CA). The rates of return of spontaneous circulation (ROSC) in ACA animal models are lower than those from studies that have utilized ventricular fibrillation (VF) animal models. The purpose of this study was to characterize the factors associated with ROSC in the ACA porcine model. Methods: Forty-eight healthy miniature pigs underwent endotracheal tube clamping to induce CA. Once induced, CA was maintained untreated for a period of 8 min. Two minutes following the initiation of cardiopulmonary resuscitation (CPR), defibrillation was attempted until ROSC was achieved or the animal died. To assess the factors associated with ROSC in this CA model, logistic regression analyses were performed to analyze gender, the time of preparation, the amplitude spectrum area (AMSA) from the beginning of CPR and the pH at the beginning of CPR. A receiver-operating characteristic (ROC) curve was used to evaluate the predictive value of AMSA for ROSC. Results: ROSC was achieved in only 52.1% of animals in this ACA porcine model. The multivariate logistic regression analyses revealed that ROSC significantly depended on the time of preparation, AMSA at the beginning of CPR and pH at the beginning of CPR. The area under the ROC curve for AMSA at the beginning of CPR in predicting ROSC was 0.878 (95% confidence interval: 0.773∼0.983), and the optimum cut-off value was 15.62 (specificity 95.7% and sensitivity 80.0%). Conclusions: The time of preparation, AMSA and the pH at the beginning of CPR were associated with ROSC in this ACA porcine model. AMSA also predicted the likelihood of ROSC in this ACA animal model.
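    A hedged sketch of the reported analysis pattern: logistic regression of ROSC on AMSA, a ROC curve, and a cut-off chosen by the Youden index. The data below are simulated, not the porcine data, and the real study also adjusted for preparation time and pH.

```python
# Hedged sketch: logistic regression, ROC curve, and Youden-index cut-off.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(5)
n = 48
amsa = rng.normal(16, 5, n)                                  # simulated AMSA values
p_rosc = 1 / (1 + np.exp(-(amsa - 15.6) * 0.6))              # toy relationship, not the study's
rosc = (rng.random(n) < p_rosc).astype(int)

model = LogisticRegression().fit(amsa.reshape(-1, 1), rosc)
scores = model.predict_proba(amsa.reshape(-1, 1))[:, 1]

fpr, tpr, thresholds = roc_curve(rosc, scores)
best = np.argmax(tpr - fpr)                                  # Youden index
print("AUC:", roc_auc_score(rosc, scores))
print("optimal probability threshold:", thresholds[best],
      "(sensitivity", tpr[best], ", specificity", 1 - fpr[best], ")")
```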

  6. Growth Modeling with Non-Ignorable Dropout: Alternative Analyses of the STAR*D Antidepressant Trial

    Science.gov (United States)

    Muthén, Bengt; Asparouhov, Tihomir; Hunter, Aimee; Leuchter, Andrew

    2011-01-01

    This paper uses a general latent variable framework to study a series of models for non-ignorable missingness due to dropout. Non-ignorable missing data modeling acknowledges that missingness may depend on not only covariates and observed outcomes at previous time points as with the standard missing at random (MAR) assumption, but also on latent variables such as values that would have been observed (missing outcomes), developmental trends (growth factors), and qualitatively different types of development (latent trajectory classes). These alternative predictors of missing data can be explored in a general latent variable framework using the Mplus program. A flexible new model uses an extended pattern-mixture approach where missingness is a function of latent dropout classes in combination with growth mixture modeling using latent trajectory classes. A new selection model allows not only an influence of the outcomes on missingness, but allows this influence to vary across latent trajectory classes. Recommendations are given for choosing models. The missing data models are applied to longitudinal data from STAR*D, the largest antidepressant clinical trial in the U.S. to date. Despite the importance of this trial, STAR*D growth model analyses using non-ignorable missing data techniques have not been explored until now. The STAR*D data are shown to feature distinct trajectory classes, including a low class corresponding to substantial improvement in depression, a minority class with a U-shaped curve corresponding to transient improvement, and a high class corresponding to no improvement. The analyses provide a new way to assess drug efficiency in the presence of dropout. PMID:21381817

  7. NUMERICAL MODELLING AS NON-DESTRUCTIVE METHOD FOR THE ANALYSES AND DIAGNOSIS OF STONE STRUCTURES: MODELS AND POSSIBILITIES

    Directory of Open Access Journals (Sweden)

    Nataša Štambuk-Cvitanović

    1999-12-01

    Full Text Available Given the necessity of analysing, diagnosing and preserving existing valuable stone masonry structures and ancient monuments in today's European urban cores, numerical modelling has become an efficient tool for investigating structural behaviour. It should be supported by experimentally determined input data and taken as part of a general combined approach, particularly alongside non-destructive techniques applied to the structure or to a model of it. For structures or details which may require more complex analyses, three numerical models based upon the finite element technique are suggested: (1) a standard linear model; (2) a linear model with contact (interface) elements; and (3) a non-linear elasto-plastic and orthotropic model. The applicability of these models depends upon the required accuracy and the type of problem, and is demonstrated on some characteristic examples.

  8. Models and analyses for inertial-confinement fusion-reactor studies

    International Nuclear Information System (INIS)

    Bohachevsky, I.O.

    1981-05-01

    This report describes models and analyses devised at Los Alamos National Laboratory to determine the technical characteristics of different inertial confinement fusion (ICF) reactor elements required for component integration into a functional unit. We emphasize the generic properties of the different elements rather than specific designs. The topics discussed are general ICF reactor design considerations; reactor cavity phenomena, including the restoration of interpulse ambient conditions; first-wall temperature increases and material losses; reactor neutronics and hydrodynamic blanket response to neutron energy deposition; and analyses of loads and stresses in the reactor vessel walls, including remarks about the generation and propagation of very short wavelength stress waves. A discussion of analytic approaches useful in integrations and optimizations of ICF reactor systems concludes the report

  9. Quantitative Model for Economic Analyses of Information Security Investment in an Enterprise Information System

    Directory of Open Access Journals (Sweden)

    Bojanc Rok

    2012-11-01

    Full Text Available The paper presents a mathematical model for the optimal security-technology investment evaluation and decision-making processes based on the quantitative analysis of security risks and digital asset assessments in an enterprise. The model makes use of the quantitative analysis of different security measures that counteract individual risks by identifying the information system processes in an enterprise and the potential threats. The model comprises the target security levels for all identified business processes and the probability of a security accident together with the possible loss the enterprise may suffer. The selection of security technology is based on the efficiency of selected security measures. Economic metrics are applied for the efficiency assessment and comparative analysis of different protection technologies. Unlike the existing models for evaluation of the security investment, the proposed model allows direct comparison and quantitative assessment of different security measures. The model allows deep analyses and computations providing quantitative assessments of different options for investments, which translate into recommendations facilitating the selection of the best solution and the decision-making thereof. The model was tested using empirical examples with data from real business environment.
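    As a generic illustration of the economic metrics such models rest on (not the paper's specific formulation), a textbook-style annual-loss-expectancy and return-on-security-investment calculation is sketched below; all figures are invented.

```python
# Hedged, generic illustration: ALE and ROSI for a single security control.
def annual_loss_expectancy(asset_value, exposure_factor, annual_rate_of_occurrence):
    """ALE = single-loss expectancy x expected incidents per year."""
    return asset_value * exposure_factor * annual_rate_of_occurrence

ale_before = annual_loss_expectancy(asset_value=500_000, exposure_factor=0.4,
                                    annual_rate_of_occurrence=0.3)
ale_after  = annual_loss_expectancy(asset_value=500_000, exposure_factor=0.4,
                                    annual_rate_of_occurrence=0.05)   # with the control in place
control_cost = 20_000

rosi = (ale_before - ale_after - control_cost) / control_cost
print(f"ALE before: {ale_before:,.0f}  ALE after: {ale_after:,.0f}  ROSI: {rosi:.2f}")
```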

  10. A modeling approach to compare ΣPCB concentrations between congener-specific analyses

    Science.gov (United States)

    Gibson, Polly P.; Mills, Marc A.; Kraus, Johanna M.; Walters, David M.

    2017-01-01

    Changes in analytical methods over time pose problems for assessing long-term trends in environmental contamination by polychlorinated biphenyls (PCBs). Congener-specific analyses vary widely in the number and identity of the 209 distinct PCB chemical configurations (congeners) that are quantified, leading to inconsistencies among summed PCB concentrations (ΣPCB) reported by different studies. Here we present a modeling approach using linear regression to compare ΣPCB concentrations derived from different congener-specific analyses measuring different co-eluting groups. The approach can be used to develop a specific conversion model between any two sets of congener-specific analytical data from similar samples (similar matrix and geographic origin). We demonstrate the method by developing a conversion model for an example data set that includes data from two different analytical methods, a low resolution method quantifying 119 congeners and a high resolution method quantifying all 209 congeners. We used the model to show that the 119-congener set captured most (93%) of the total PCB concentration (i.e., Σ209PCB) in sediment and biological samples. ΣPCB concentrations estimated using the model closely matched measured values (mean relative percent difference = 9.6). General applications of the modeling approach include (a) generating comparable ΣPCB concentrations for samples that were analyzed for different congener sets; and (b) estimating the proportional contribution of different congener sets to ΣPCB. This approach may be especially valuable for enabling comparison of long-term remediation monitoring results even as analytical methods change over time. 
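    A hedged sketch of the conversion-model idea: regress totals from the fuller congener set on totals from the reduced set and use the fitted line to convert. The paired values below are synthetic stand-ins for real analytical results.

```python
# Hedged sketch: linear conversion model between two congener-specific totals.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(11)
sum_119 = rng.lognormal(mean=3.0, sigma=0.8, size=40)        # ng/g, reduced (119-congener) method
sum_209 = 1.07 * sum_119 + rng.normal(0, 0.05 * sum_119)     # fuller method captures slightly more

X = sm.add_constant(sum_119)
fit = sm.OLS(sum_209, X).fit()
print(fit.params)                                            # intercept and slope of the conversion

new_sum_119 = np.array([12.0, 85.0])                         # samples analysed only by the reduced method
converted = fit.predict(sm.add_constant(new_sum_119))
print("estimated 209-congener totals:", converted)
```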

  11. Sampling and sensitivity analyses tools (SaSAT for computational modelling

    Directory of Open Access Journals (Sweden)

    Wilson David P

    2008-02-01

    Full Text Available Abstract SaSAT (Sampling and Sensitivity Analysis Tools is a user-friendly software package for applying uncertainty and sensitivity analyses to mathematical and computational models of arbitrary complexity and context. The toolbox is built in Matlab®, a numerical mathematical software package, and utilises algorithms contained in the Matlab® Statistics Toolbox. However, Matlab® is not required to use SaSAT as the software package is provided as an executable file with all the necessary supplementary files. The SaSAT package is also designed to work seamlessly with Microsoft Excel but no functionality is forfeited if that software is not available. A comprehensive suite of tools is provided to enable the following tasks to be easily performed: efficient and equitable sampling of parameter space by various methodologies; calculation of correlation coefficients; regression analysis; factor prioritisation; and graphical output of results, including response surfaces, tornado plots, and scatterplots. Use of SaSAT is exemplified by application to a simple epidemic model. To our knowledge, a number of the methods available in SaSAT for performing sensitivity analyses have not previously been used in epidemiological modelling and their usefulness in this context is demonstrated.
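    A hedged sketch of the SaSAT-style workflow (Latin hypercube sampling, model evaluation, rank-correlation-based sensitivity), using SciPy rather than the Matlab toolbox itself; the toy model and parameter ranges are assumptions for illustration only.

```python
# Hedged sketch: Latin hypercube sampling plus rank-correlation sensitivity ranking.
import numpy as np
from scipy.stats import qmc, spearmanr

sampler = qmc.LatinHypercube(d=3, seed=2024)
unit = sampler.random(n=500)
# Scale the unit hypercube to parameter ranges: beta, gamma, initial prevalence
lo, hi = [0.1, 0.05, 0.001], [0.5, 0.25, 0.05]
params = qmc.scale(unit, lo, hi)

def toy_model(beta, gamma, i0):
    """A crude epidemic 'output': basic reproduction number times initial prevalence."""
    return (beta / gamma) * i0

output = np.array([toy_model(*p) for p in params])

for name, column in zip(["beta", "gamma", "i0"], params.T):
    rho, p = spearmanr(column, output)
    print(f"{name}: Spearman rank correlation with output = {rho:+.2f} (p = {p:.1e})")
```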

  12. A chip-level modeling approach for rail span collapse and survivability analyses

    International Nuclear Information System (INIS)

    Marvis, D.G.; Alexander, D.R.; Dinger, G.L.

    1989-01-01

    A general semiautomated analysis technique has been developed for analyzing rail span collapse and survivability of VLSI microcircuits in high ionizing dose rate radiation environments. Hierarchical macrocell modeling permits analyses at the chip level and interactive graphical postprocessing provides a rapid visualization of voltage, current and power distributions over an entire VLSIC. The technique is demonstrated for a 16k CMOS/SOI SRAM and a CMOS/SOS 8-bit multiplier. The authors also present an efficient method to treat memory arrays as well as a three-dimensional integration technique to compute sapphire photoconduction from the design layout.

  13. Analyses and testing of model prestressed concrete reactor vessels with built-in planes of weakness

    International Nuclear Information System (INIS)

    Dawson, P.; Paton, A.A.; Fleischer, C.C.

    1990-01-01

    This paper describes the design, construction, analyses and testing of two small scale, single cavity prestressed concrete reactor vessel models, one without planes of weakness and one with planes of weakness immediately behind the cavity liner. This work was carried out to extend a previous study which had suggested the likely feasibility of constructing regions of prestressed concrete reactor vessels and biological shields, which become activated, using easily removable blocks, separated by a suitable membrane. The paper describes the results obtained and concludes that the planes of weakness concept could offer a means of facilitating the dismantling of activated regions of prestressed concrete reactor vessels, biological shields and similar types of structure. (author)

  14. Integrated tokamak modelling with the fast-ion Fokker–Planck solver adapted for transient analyses

    International Nuclear Information System (INIS)

    Toma, M; Hamamatsu, K; Hayashi, N; Honda, M; Ide, S

    2015-01-01

    Integrated tokamak modelling that enables the simulation of an entire discharge period is indispensable for designing advanced tokamak plasmas. For this purpose, we extend the integrated code TOPICS to make it more suitable for transient analyses in the fast-ion part. The fast-ion Fokker–Planck solver is integrated into TOPICS at the same level as the bulk transport solver so that the time evolutions of the fast ions and the bulk plasma are consistent with each other as well as with the equilibrium magnetic field. The fast-ion solver simultaneously handles neutral beam-injected ions and alpha particles. Parallelisation of the fast-ion solver, in addition to its computational lightness owing to a dimensional reduction in phase space, enables transient analyses over long periods of the order of tens of seconds. The fast-ion Fokker–Planck calculation is compared with, and confirmed to be in good agreement with, an orbit-following Monte Carlo calculation. The integrated code is applied to ramp-up simulations for JT-60SA and ITER to confirm its capability and effectiveness in transient analyses. In the integrated simulations, the coupled evolution of the fast ions, plasma profiles, and equilibrium magnetic fields is presented. In addition, the electric acceleration effect on fast ions is shown and discussed. (paper)

  15. Analysing the Effects of Flood-Resilience Technologies in Urban Areas Using a Synthetic Model Approach

    Directory of Open Access Journals (Sweden)

    Reinhard Schinke

    2016-11-01

Full Text Available Flood protection systems with their spatial effects play an important role in managing and reducing flood risks. The planning and decision process as well as the technical implementation are well organized and often exercised. However, building-related flood-resilience technologies (FReT) are often neglected due to the absence of suitable approaches to analyse and integrate such measures in large-scale flood damage mitigation concepts. Against this backdrop, a synthetic model approach was extended by a few complementary methodological steps in order to calculate flood damage to buildings considering the effects of building-related FReT and to analyse the area-related reduction of flood risks with geo-information systems (GIS) at high spatial resolution. It includes a civil-engineering-based investigation of characteristic building properties and construction types, including a selection and combination of appropriate FReT, as a basis for the derivation of synthetic depth-damage functions. Depending on the actual exposure and the implementation level of FReT, the functions can be used and allocated in spatial damage and risk analyses. The application of the extended approach is shown in a case study in Valencia (Spain). In this way, the overall research findings improve the integration of FReT in flood risk management. They also provide useful information for advising individuals at risk, supporting the selection and implementation of FReT.

  16. Structural identifiability analyses of candidate models for in vitro Pitavastatin hepatic uptake.

    Science.gov (United States)

    Grandjean, Thomas R B; Chappell, Michael J; Yates, James W T; Evans, Neil D

    2014-05-01

    In this paper a review of the application of four different techniques (a version of the similarity transformation approach for autonomous uncontrolled systems, a non-differential input/output observable normal form approach, the characteristic set differential algebra and a recent algebraic input/output relationship approach) to determine the structural identifiability of certain in vitro nonlinear pharmacokinetic models is provided. The Organic Anion Transporting Polypeptide (OATP) substrate, Pitavastatin, is used as a probe on freshly isolated animal and human hepatocytes. Candidate pharmacokinetic non-linear compartmental models have been derived to characterise the uptake process of Pitavastatin. As a prerequisite to parameter estimation, structural identifiability analyses are performed to establish that all unknown parameters can be identified from the experimental observations available. Copyright © 2013. Published by Elsevier Ireland Ltd.
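As a hedged illustration of what a structural identifiability check establishes (this is a toy one-compartment example, not one of the Pitavastatin uptake models from the study), the sketch below uses symbolic algebra to ask whether the structural parameters can be recovered uniquely from the coefficients observable in the output.

```python
import sympy as sp

k1, k2, V, D, A, B = sp.symbols('k1 k2 V D A B', positive=True)

# Toy model: dx/dt = -(k1 + k2) * x, x(0) = D (known dose), y = x / V.
# Output y(t) = (D/V) * exp(-(k1 + k2) * t): the data determine only the
# observable coefficients A = D/V and B = k1 + k2.
equations = [sp.Eq(A, D / V), sp.Eq(B, k1 + k2)]

# Try to solve for the structural parameters in terms of the observables
solutions = sp.solve(equations, [V, k1, k2], dict=True)
print(solutions)
# Expected: V is recovered uniquely (V = D/A), while k1 and k2 appear only
# through their sum, i.e. they are not individually structurally identifiable
# in this toy model.
```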

  17. Reading Ability Development from Kindergarten to Junior Secondary: Latent Transition Analyses with Growth Mixture Modeling

    Directory of Open Access Journals (Sweden)

    Yuan Liu

    2016-10-01

Full Text Available The present study examined the reading ability development of children in the large-scale Early Childhood Longitudinal Study (Kindergarten Class of 1998-99 data; Tourangeau, Nord, Lê, Pollack, & Atkins-Burnett, 2006) under a dynamic systems framework. To depict children's growth pattern, we extended the measurement part of latent transition analysis to the growth mixture model and found that the new model fitted the data well. Results also revealed that most of the children stayed in the same ability group with few cross-level changes in their classes. After adding the environmental factors as predictors, analyses showed that children receiving higher teachers' ratings, with higher socioeconomic status, and of above-average poverty status, had a higher probability of transitioning into the higher ability group.

  18. Estimating required information size by quantifying diversity in random-effects model meta-analyses

    DEFF Research Database (Denmark)

    Wetterslev, Jørn; Thorlund, Kristian; Brok, Jesper

    2009-01-01

an intervention effect suggested by trials with low risk of bias. METHODS: Information size calculations need to consider the total model variance in a meta-analysis to control type I and type II errors. Here, we derive an adjusting factor for the required information size under any random-effects model meta-analysis. RESULTS: We devise a measure of diversity (D2) in a meta-analysis, which is the relative variance reduction when the meta-analysis model is changed from a random-effects into a fixed-effect model. D2 is the percentage that the between-trial variability constitutes of the sum of the between… and interpreted using several simulations and clinical examples. In addition we show mathematically that diversity is equal to or greater than inconsistency, that is D2 ≥ I2, for all meta-analyses. CONCLUSION: We conclude that D2 seems a better alternative than I2 to consider model variation in any random…
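Based only on the definition quoted above (D2 as the relative variance reduction when moving from a random-effects to a fixed-effect pooled estimate), a minimal illustrative computation might look like the following; the within-trial variances and tau-squared value are made-up numbers, and I2 is reported in its moment form purely for comparison.

```python
import numpy as np

def diversity_d2(within_trial_var, tau2):
    """Return (D2, I2) for an inverse-variance weighted meta-analysis."""
    v = np.asarray(within_trial_var, dtype=float)
    # Variance of the pooled estimate under the fixed-effect model
    var_fixed = 1.0 / np.sum(1.0 / v)
    # Variance of the pooled estimate under the random-effects model
    var_random = 1.0 / np.sum(1.0 / (v + tau2))
    d2 = 1.0 - var_fixed / var_random        # relative variance reduction
    # I2 in its moment form, using the 'typical' within-trial variance;
    # reported only so the relation D2 >= I2 can be inspected numerically.
    w = 1.0 / v
    s2_typical = (len(v) - 1) * np.sum(w) / (np.sum(w) ** 2 - np.sum(w ** 2))
    i2 = tau2 / (tau2 + s2_typical)
    return d2, i2

d2, i2 = diversity_d2([0.04, 0.09, 0.02, 0.06], tau2=0.03)
print(f"D2 = {d2:.2f}, I2 = {i2:.2f}")   # expect D2 >= I2
```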

  19. What is needed to eliminate new pediatric HIV infections: The contribution of model-based analyses

    Science.gov (United States)

    Doherty, Katie; Ciaranello, Andrea

    2013-01-01

    Purpose of Review Computer simulation models can identify key clinical, operational, and economic interventions that will be needed to achieve the elimination of new pediatric HIV infections. In this review, we summarize recent findings from model-based analyses of strategies for prevention of mother-to-child HIV transmission (MTCT). Recent Findings In order to achieve elimination of MTCT (eMTCT), model-based studies suggest that scale-up of services will be needed in several domains: uptake of services and retention in care (the PMTCT “cascade”), interventions to prevent HIV infections in women and reduce unintended pregnancies (the “four-pronged approach”), efforts to support medication adherence through long periods of pregnancy and breastfeeding, and strategies to make breastfeeding safer and/or shorter. Models also project the economic resources that will be needed to achieve these goals in the most efficient ways to allocate limited resources for eMTCT. Results suggest that currently recommended PMTCT regimens (WHO Option A, Option B, and Option B+) will be cost-effective in most settings. Summary Model-based results can guide future implementation science, by highlighting areas in which additional data are needed to make informed decisions and by outlining critical interventions that will be necessary in order to eliminate new pediatric HIV infections. PMID:23743788

  20. Control designs and stability analyses for Helly’s car-following model

    Science.gov (United States)

    Rosas-Jaimes, Oscar A.; Quezada-Téllez, Luis A.; Fernández-Anaya, Guillermo

Car-following is an approach to understanding traffic behavior restricted to pairs of cars, identifying a “leader” moving in front of a “follower”, which is assumed not to overtake the leader. From the first attempts to formulate the way in which individual cars affect one another on a road through these models, linear differential equations were suggested by authors such as Pipes or Helly. These expressions represent such phenomena quite well, even though they have been superseded by more recent and accurate models. However, in this paper we show that those early formulations have some properties that are not fully reported, presenting the different ways in which they can be expressed and analyzing their stability behavior. Pipes’ model can be extended to what is known as Helly’s model, which is viewed as a more precise model to emulate this microscopic approach to traffic. Once some convenient forms of expression are established, two control designs are suggested herein. These regulation schemes are complemented with their respective stability analyses, which reflect some important properties with implications for real driving. Notably, these linear designs are easy to understand and to implement, while including important features related to safety and comfort.
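A minimal simulation sketch of the kind of linear car-following dynamics discussed above follows. It assumes a common textbook form of Helly's model, with the follower's acceleration proportional to the speed difference and to the deviation of the gap from a speed-dependent desired gap; the gains and desired-gap parameters are illustrative and the reaction delay is neglected, so this is not the paper's control design.

```python
import numpy as np

C1, C2 = 0.5, 0.125        # control gains (illustrative)
alpha, beta = 5.0, 1.5     # desired-gap parameters: D_des = alpha + beta * v_f
dt, t_end = 0.1, 60.0      # Euler step and simulation horizon (s)

def v_lead(t):
    """Leader speed: cruise at 20 m/s with a slowdown between 20 s and 30 s."""
    return 20.0 - 5.0 * (20.0 <= t <= 30.0)

x_l, x_f, v_f = 50.0, 0.0, 20.0    # leader position, follower position/speed
history = []
for t in np.arange(0.0, t_end, dt):
    vl = v_lead(t)
    gap = x_l - x_f
    # Helly-type law: react to speed difference and to gap error
    a_f = C1 * (vl - v_f) + C2 * (gap - (alpha + beta * v_f))
    x_l += vl * dt                 # explicit Euler integration
    x_f += v_f * dt
    v_f += a_f * dt
    history.append((t, gap, v_f))

t_last, gap_last, v_last = history[-1]
print(f"final gap {gap_last:.1f} m, final follower speed {v_last:.1f} m/s")
```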

  1. Development of steady-state model for MSPT and detailed analyses of receiver

    Science.gov (United States)

    Yuasa, Minoru; Sonoda, Masanori; Hino, Koichi

    2016-05-01

The molten salt parabolic trough system (MSPT) uses molten salt as the heat transfer fluid (HTF) instead of synthetic oil. The demonstration plant of the MSPT was constructed by Chiyoda Corporation and Archimede Solar Energy in Italy in 2013. Chiyoda Corporation developed a steady-state model for predicting the theoretical behavior of the demonstration plant. The model was designed to calculate the concentrated solar power and heat loss using ray tracing of incident solar light and finite element modeling of thermal energy transferred into the medium. This report describes the verification of the model using test data from the demonstration plant, detailed analyses of the relation between flow rate and temperature difference on the metal tube of the receiver, and the effect of defocus angle on the concentrated power rate, in support of solar collector assembly (SCA) development. The model is accurate to within a systematic error of 2.0% and a random error of 4.2%. The relationships between flow rate and temperature difference on the metal tube, and the effect of defocus angle on the concentrated power rate, are shown.

  2. A STRONGLY COUPLED REACTOR CORE ISOLATION COOLING SYSTEM MODEL FOR EXTENDED STATION BLACK-OUT ANALYSES

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Haihua [Idaho National Laboratory; Zhang, Hongbin [Idaho National Laboratory; Zou, Ling [Idaho National Laboratory; Martineau, Richard Charles [Idaho National Laboratory

    2015-03-01

The reactor core isolation cooling (RCIC) system in a boiling water reactor (BWR) provides makeup cooling water to the reactor pressure vessel (RPV) when the main steam lines are isolated and the normal supply of water to the reactor vessel is lost. The RCIC system operates independently of AC power, service air, or external cooling water systems. The only required external energy source is the battery, which maintains the logic circuits that control the opening and/or closure of valves in the RCIC system in order to control the RPV water level by shutting down the RCIC pump, thereby avoiding overfilling the RPV and flooding the steam line to the RCIC turbine. It is generally assumed in almost all existing station blackout (SBO) analyses that loss of DC power would result in overfilling the steam line and allowing liquid water to flow into the RCIC turbine, where it is assumed that the turbine would then be disabled. This behavior, however, was not observed in the Fukushima Daiichi accidents, where the Unit 2 RCIC functioned without DC power for nearly three days. Therefore, more detailed mechanistic models for RCIC system components are needed to understand the extended SBO for BWRs. As part of the effort to develop the next generation reactor system safety analysis code RELAP-7, we have developed a strongly coupled RCIC system model, which consists of a turbine model, a pump model, a check valve model, a wet well model, and their coupling models. Unlike traditional SBO simulations, where mass flow rates are typically given in the input file through time-dependent functions, the real mass flow rates through the turbine and pump loops in our model are dynamically calculated according to conservation laws and turbine/pump operation curves. A simplified SBO demonstration RELAP-7 model with this RCIC model has been successfully developed. The demonstration model includes the major components for the primary system of a BWR, as well as the safety

  3. Biosphere Modeling and Analyses in Support of Total System Performance Assessment

    International Nuclear Information System (INIS)

    Tappen, J. J.; Wasiolek, M. A.; Wu, D. W.; Schmitt, J. F.; Smith, A. J.

    2002-01-01

The Nuclear Waste Policy Act of 1982 established the obligations of and the relationship between the U.S. Environmental Protection Agency (EPA), the U.S. Nuclear Regulatory Commission (NRC), and the U.S. Department of Energy (DOE) for the management and disposal of high-level radioactive wastes. In 1985, the EPA promulgated regulations that included a definition of performance assessment that did not consider potential dose to a member of the general public. This definition would influence the scope of activities conducted by DOE in support of the total system performance assessment program until 1995. The release of a National Academy of Sciences (NAS) report on the technical basis for a Yucca Mountain-specific standard provided the impetus for the DOE to initiate activities that would consider the attributes of the biosphere, i.e. that portion of the earth where living things, including man, exist and interact with the environment around them. The evolution of NRC and EPA Yucca Mountain-specific regulations, originally proposed in 1999, was critical to the development and integration of biosphere modeling and analyses into the total system performance assessment program. These proposed regulations initially differed in the conceptual representation of the receptor of interest to be considered in assessing performance. The publication in 2001 of final regulations in which the NRC adopted the standard will permit the continued improvement and refinement of biosphere modeling and analyses activities in support of assessment activities

  4. Rockslide and Impulse Wave Modelling in the Vajont Reservoir by DEM-CFD Analyses

    Science.gov (United States)

    Zhao, T.; Utili, S.; Crosta, G. B.

    2016-06-01

This paper investigates the generation of hydrodynamic water waves due to rockslides plunging into a water reservoir. Quasi-3D DEM analyses in plane strain by a coupled DEM-CFD code are adopted to simulate the rockslide from its onset to the impact with the still water and the subsequent generation of the wave. The employed numerical tools and upscaling of hydraulic properties allow predicting a physical response in broad agreement with the observations, notwithstanding the assumptions and characteristics of the adopted methods. The results obtained by the DEM-CFD coupled approach are compared to those published in the literature and those presented by Crosta et al. (Landslide spreading, impulse waves and modelling of the Vajont rockslide. Rock mechanics, 2014) in a companion paper obtained through an ALE-FEM method. Analyses performed along two cross sections are representative of the limit conditions of the eastern and western slope sectors. The maximum average rockslide velocity and the water wave velocity reach ca. 22 and 20 m/s, respectively. The maximum computed run-up amounts to ca. 120 and 170 m for the eastern and western lobe cross sections, respectively. These values are reasonably similar to those recorded during the event (i.e. ca. 130 and 190 m, respectively). Therefore, the overall study lays out a possible DEM-CFD framework for the modelling of the generation of the hydrodynamic wave due to the impact of a rapidly moving rockslide or rock-debris avalanche.

  5. Biosphere Modeling and Analyses in Support of Total System Performance Assessment

    International Nuclear Information System (INIS)

    Jeff Tappen; M.A. Wasiolek; D.W. Wu; J.F. Schmitt

    2001-01-01

The Nuclear Waste Policy Act of 1982 established the obligations of and the relationship between the U.S. Environmental Protection Agency (EPA), the U.S. Nuclear Regulatory Commission (NRC), and the U.S. Department of Energy (DOE) for the management and disposal of high-level radioactive wastes. In 1985, the EPA promulgated regulations that included a definition of performance assessment that did not consider potential dose to a member of the general public. This definition would influence the scope of activities conducted by DOE in support of the total system performance assessment program until 1995. The release of a National Academy of Sciences (NAS) report on the technical basis for a Yucca Mountain-specific standard provided the impetus for the DOE to initiate activities that would consider the attributes of the biosphere, i.e. that portion of the earth where living things, including man, exist and interact with the environment around them. The evolution of NRC and EPA Yucca Mountain-specific regulations, originally proposed in 1999, was critical to the development and integration of biosphere modeling and analyses into the total system performance assessment program. These proposed regulations initially differed in the conceptual representation of the receptor of interest to be considered in assessing performance. The publication in 2001 of final regulations in which the NRC adopted the standard will permit the continued improvement and refinement of biosphere modeling and analyses activities in support of assessment activities

  6. APF530 versus ondansetron, each in a guideline-recommended three-drug regimen, for the prevention of chemotherapy-induced nausea and vomiting due to anthracycline plus cyclophosphamide–based highly emetogenic chemotherapy regimens: a post hoc subgroup analysis of the Phase III randomized MAGIC trial

    Directory of Open Access Journals (Sweden)

    Schnadig ID

    2017-05-01

Full Text Available Ian D Schnadig1, Richy Agajanian2, Christopher Dakhil3, Nashat Gabrail4, Jeffrey Vacirca5, Charles Taylor6, Sharon Wilks7, Eduardo Braun8, Michael C Mosier9, Robert B Geller10, Lee Schwartzberg11, Nicholas Vogelzang12 1Compass Oncology, US Oncology Research, Tualatin, OR, 2The Oncology Institute of Hope and Innovation, Whittier, CA, 3Cancer Center of Kansas, Wichita, KS, 4Gabrail Cancer Center, Canton, OH, 5North Shore Hematology Oncology, East Setauket, NY, 6Tulsa Cancer Institute, Tulsa, OK, 7Cancer Care Centers of South Texas, San Antonio, TX, 8Michiana Hematology Oncology, Westville, IN, 9Biostatistics, EMB Statistical Solutions, LLC, Overland Park, KS, 10Medical Affairs, Heron Therapeutics, Inc., San Diego, CA, 11West Cancer Center, Germantown, TN, 12Comprehensive Cancer Centers of Nevada, Las Vegas, NV, USA Background: APF530, a novel extended-release granisetron injection, was superior to ondansetron in a guideline-recommended three-drug regimen in preventing delayed-phase chemotherapy-induced nausea and vomiting (CINV) among patients receiving highly emetogenic chemotherapy (HEC) in the double-blind Phase III Modified Absorption of Granisetron In the prevention of CINV (MAGIC) trial. Patients and methods: This MAGIC post hoc analysis evaluated CINV prevention efficacy and safety of APF530 versus ondansetron, each with fosaprepitant and dexamethasone, in the patient subgroup receiving an anthracycline plus cyclophosphamide (AC) regimen. Patients were randomized 1:1 to APF530 500 mg subcutaneously (granisetron 10 mg) or ondansetron 0.15 mg/kg intravenously (IV) (≤16 mg); stratification was by planned cisplatin ≥50 mg/m2 (yes/no). Patients were to receive fosaprepitant 150 mg IV and dexamethasone 12 mg IV on day 1, then dexamethasone 8 mg orally once daily on day 2 and twice daily on days 3 and 4. Patients were mostly younger females (APF530 arm, mean age 54.1 years, female, 99.3%; ondansetron arm, 53.8 years, female 98.3%). The primary

  7. Challenges of Analysing Gene-Environment Interactions in Mouse Models of Schizophrenia

    Directory of Open Access Journals (Sweden)

    Peter L. Oliver

    2011-01-01

    Full Text Available The modelling of neuropsychiatric disease using the mouse has provided a wealth of information regarding the relationship between specific genetic lesions and behavioural endophenotypes. However, it is becoming increasingly apparent that synergy between genetic and nongenetic factors is a key feature of these disorders that must also be taken into account. With the inherent limitations of retrospective human studies, experiments in mice have begun to tackle this complex association, combining well-established behavioural paradigms and quantitative neuropathology with a range of environmental insults. The conclusions from this work have been varied, due in part to a lack of standardised methodology, although most have illustrated that phenotypes related to disorders such as schizophrenia are consistently modified. Far fewer studies, however, have attempted to generate a “two-hit” model, whereby the consequences of a pathogenic mutation are analysed in combination with environmental manipulation such as prenatal stress. This significant, yet relatively new, approach is beginning to produce valuable new models of neuropsychiatric disease. Focussing on prenatal and perinatal stress models of schizophrenia, this review discusses the current progress in this field, and highlights important issues regarding the interpretation and comparative analysis of such complex behavioural data.

  8. Correlation of Klebsiella pneumoniae comparative genetic analyses with virulence profiles in a murine respiratory disease model.

    Directory of Open Access Journals (Sweden)

    Ramy A Fodah

Full Text Available Klebsiella pneumoniae is a bacterial pathogen of worldwide importance and a significant contributor to multiple disease presentations associated with both nosocomial and community acquired disease. ATCC 43816 is a well-studied K. pneumoniae strain which is capable of causing an acute respiratory disease in surrogate animal models. In this study, we performed sequencing of the ATCC 43816 genome to support future efforts characterizing genetic elements required for disease. Furthermore, we performed comparative genetic analyses to the previously sequenced genomes from NTUH-K2044 and MGH 78578 to gain an understanding of the conservation of known virulence determinants amongst the three strains. We found that ATCC 43816 and NTUH-K2044 both possess the known virulence determinant for yersiniabactin, as well as a Type 4 secretion system (T4SS), a CRISPR system, and an acetoin catabolism locus, all absent from MGH 78578. While both NTUH-K2044 and MGH 78578 are clinical isolates, little is known about the disease potential of these strains in cell culture and animal models. Thus, we also performed functional analyses in the murine macrophage cell lines RAW264.7 and J774A.1 and found that MGH 78578 (K52 serotype) was internalized at higher levels than ATCC 43816 (K2) and NTUH-K2044 (K1), consistent with previous characterization of the antiphagocytic properties of K1 and K2 serotype capsules. We also examined the three K. pneumoniae strains in a novel BALB/c respiratory disease model and found that ATCC 43816 and NTUH-K2044 are highly virulent (LD50<100 CFU) while MGH 78578 is relatively avirulent.

  9. Genetic analyses of partial egg production in Japanese quail using multi-trait random regression models.

    Science.gov (United States)

    Karami, K; Zerehdaran, S; Barzanooni, B; Lotfi, E

    2017-12-01

1. The aim of the present study was to estimate genetic parameters for average egg weight (EW) and egg number (EN) at different ages in Japanese quail using multi-trait random regression (MTRR) models. 2. A total of 8534 records from 900 quail, hatched between 2014 and 2015, were used in the study. Average weekly egg weights and egg numbers were measured from the second until the sixth week of egg production. 3. Nine random regression models were compared to identify the best order of the Legendre polynomials (LP). The optimal model was identified by the Bayesian Information Criterion. A model with second order of LP for fixed effects, second order of LP for additive genetic effects and third order of LP for permanent environmental effects (MTRR23) was found to be the best. 4. According to the MTRR23 model, direct heritability for EW increased from 0.26 in the second week to 0.53 in the sixth week of egg production, whereas the ratio of permanent environment to phenotypic variance decreased from 0.48 to 0.1. Direct heritability for EN was low, whereas the ratio of permanent environment to phenotypic variance decreased from 0.57 to 0.15 during the production period. 5. For each trait, estimated genetic correlations among weeks of egg production were high (from 0.85 to 0.98). Genetic correlations between EW and EN were low and negative for the first two weeks, but they were low and positive for the rest of the egg production period. 6. In conclusion, random regression models can be used effectively for analysing egg production traits in Japanese quail. Response to selection for increased egg weight would be higher at older ages because of its higher heritability, and such a breeding program would have no negative genetic impact on egg production.
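For readers unfamiliar with random regression, the core design step is evaluating normalised Legendre polynomials on the standardised age trajectory; the sketch below builds such covariates for weeks 2-6 at the orders used in model MTRR23. It only constructs the covariables (fitting the full multi-trait animal model would require specialised mixed-model software), and the normalisation constant is the one commonly used in this literature.

```python
import numpy as np
from numpy.polynomial import legendre as leg

weeks = np.arange(2, 7)                               # weeks 2..6 of lay
# Standardise age to [-1, 1], the domain of the Legendre polynomials
x = 2.0 * (weeks - weeks.min()) / (weeks.max() - weeks.min()) - 1.0

def legendre_covariates(x, order):
    """Normalised Legendre covariates phi_0..phi_order evaluated at x."""
    cols = []
    for k in range(order + 1):
        coeffs = np.zeros(k + 1)
        coeffs[k] = 1.0                               # select P_k
        # sqrt((2k+1)/2) is the normalisation commonly used in random regression
        cols.append(np.sqrt((2 * k + 1) / 2.0) * leg.legval(x, coeffs))
    return np.column_stack(cols)

Z_additive = legendre_covariates(x, order=2)   # additive genetic part (order 2)
Z_perm_env = legendre_covariates(x, order=3)   # permanent environment (order 3)
print(np.round(Z_additive, 3))
```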

  10. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 3

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Corum, J.M.; Bryson, J.W.

    1975-06-01

The third in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: the experimental data provide design information directly applicable to nozzles in cylindrical vessels; and the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 3 had a 10 in. OD and the nozzle had a 1.29 in. OD, giving a d0/D0 ratio of 0.129. The OD/thickness ratios for the cylinder and the nozzle were 50 and 7.68 respectively. Thirteen separate loading cases were analyzed. In each, one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for all the loadings were obtained using 158 three-gage strain rosettes located on the inner and outer surfaces. The loading cases were also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  11. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 4

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Bryson, J.W.

    1975-06-01

The last in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. The models in the series are idealized thin-shell structures consisting of two circular cylindrical shells that intersect at right angles. There are no transitions, reinforcements, or fillets in the junction region. This series of model tests serves two basic purposes: (1) the experimental data provide design information directly applicable to nozzles in cylindrical vessels, and (2) the idealized models provide test results for use in developing and evaluating theoretical analyses applicable to nozzles in cylindrical vessels and to thin piping tees. The cylinder of model 4 had an outside diameter of 10 in., and the nozzle had an outside diameter of 1.29 in., giving a d0/D0 ratio of 0.129. The OD/thickness ratios were 50 and 20.2 for the cylinder and nozzle respectively. Thirteen separate loading cases were analyzed. For each loading condition one end of the cylinder was rigidly held. In addition to an internal pressure loading, three mutually perpendicular force components and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. The experimental stress distributions for each of the 13 loadings were obtained using 157 three-gage strain rosettes located on the inner and outer surfaces. Each of the 13 loading cases was also analyzed theoretically using a finite-element shell analysis developed at the University of California, Berkeley. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good agreement for this model. (U.S.)

  12. Uncertainty and sensitivity analyses for age-dependent unavailability model integrating test and maintenance

    International Nuclear Information System (INIS)

    Kančev, Duško; Čepin, Marko

    2012-01-01

Highlights: ► Application of analytical unavailability model integrating T and M, ageing, and test strategy. ► Ageing data uncertainty propagation on system level assessed via Monte Carlo simulation. ► Uncertainty impact is growing with the extension of the surveillance test interval. ► Calculated system unavailability dependence on two different sensitivity study ageing databases. ► System unavailability sensitivity insights regarding specific groups of BEs as test intervals extend. - Abstract: The interest in operational lifetime extension of the existing nuclear power plants is growing. Consequently, plant life management programs, considering safety component ageing, are being developed and employed. Ageing represents a gradual degradation of the physical properties and functional performance of different components, consequently implying their reduced availability. Analyses made in the direction of nuclear power plant lifetime extension are based upon component ageing management programs. On the other hand, the large uncertainties of the ageing parameters as well as the uncertainties associated with most of the reliability data collections are widely acknowledged. This paper addresses the uncertainty and sensitivity analyses conducted utilizing a previously developed age-dependent unavailability model, integrating effects of test and maintenance activities, for a selected stand-by safety system in a nuclear power plant. The most important problem is the lack of data concerning the effects of ageing as well as the relatively high uncertainty associated with these data, which would correspond to more detailed modelling of ageing. A standard Monte Carlo simulation was coded for the purpose of this paper and utilized in the process of assessment of the component ageing parameters uncertainty propagation on system level. The obtained results from the uncertainty analysis indicate the extent to which the uncertainty of the selected
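A minimal sketch of the kind of Monte Carlo propagation described in the abstract is shown below. It assumes a linear ageing model for the standby component failure rate and a simple time-averaged unavailability between surveillance tests, neglecting test and repair downtime; the distributions, parameter values and test interval are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_samples = 10_000
test_interval = 8760.0 / 4                 # h, quarterly surveillance test

# Uncertain inputs: baseline failure rate and linear ageing rate (illustrative)
lambda0 = rng.lognormal(mean=np.log(1e-5), sigma=0.5, size=n_samples)   # 1/h
ageing_a = rng.lognormal(mean=np.log(1e-9), sigma=1.0, size=n_samples)  # 1/h^2

tau = np.linspace(0.0, test_interval, 200)  # time since the last test (uniform grid)
unavailability = np.empty(n_samples)
for i in range(n_samples):
    # Probability the component has failed by time tau since the last test,
    # with lambda(t) = lambda0 + a*t -> cumulative hazard lambda0*tau + a*tau^2/2
    q = 1.0 - np.exp(-(lambda0[i] * tau + 0.5 * ageing_a[i] * tau**2))
    unavailability[i] = q.mean()            # time-averaged unavailability

print("mean U = %.2e, 5th-95th percentile = %.2e .. %.2e"
      % (unavailability.mean(),
         np.percentile(unavailability, 5),
         np.percentile(unavailability, 95)))
```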

  13. Normalisation genes for expression analyses in the brown alga model Ectocarpus siliculosus

    Directory of Open Access Journals (Sweden)

    Rousvoal Sylvie

    2008-08-01

Full Text Available Abstract Background Brown algae are plant multi-cellular organisms occupying most of the world coasts and are essential actors in the constitution of ecological niches at the shoreline. Ectocarpus siliculosus is an emerging model for brown algal research. Its genome has been sequenced, and several tools are being developed to perform analyses at different levels of cell organization, including transcriptomic expression analyses. Several topics, including physiological responses to osmotic stress and to exposure to contaminants and solvents, are being studied in order to better understand the adaptive capacity of brown algae to pollution and environmental changes. A series of genes that can be used to normalise expression analyses is required for these studies. Results We monitored the expression of 13 genes under 21 different culture conditions. These included genes encoding proteins and factors involved in protein translation (ribosomal protein 26S, EF1alpha, IF2A, IF4E) and protein degradation (ubiquitin, ubiquitin conjugating enzyme) or folding (cyclophilin), and proteins involved in both the structure of the cytoskeleton (tubulin alpha, actin, actin-related proteins) and its trafficking function (dynein), as well as a protein implicated in carbon metabolism (glucose 6-phosphate dehydrogenase). The stability of their expression level was assessed using the Ct range, and by applying both the geNorm and the Normfinder principles of calculation. Conclusion Comparisons of the data obtained with the three methods of calculation indicated that EF1alpha (EF1a) was the best reference gene for normalisation. The normalisation factor should be calculated with at least two genes, alpha tubulin, ubiquitin-conjugating enzyme or actin-related proteins being good partners of EF1a. Our results exclude actin as a good normalisation gene, and, in this, are in agreement with previous studies in other organisms.
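As a hedged illustration of the geNorm principle mentioned above, the sketch below computes the usual gene-stability measure M (the average standard deviation of pairwise log-ratios across conditions) for a set of candidate reference genes; the expression matrix is random placeholder data, not the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
genes = ["EF1a", "tub_alpha", "ubce", "arp", "actin"]
expr = rng.lognormal(mean=0.0, sigma=0.3, size=(21, len(genes)))  # 21 conditions

log_expr = np.log2(expr)
M = []
for j in range(len(genes)):
    # Standard deviation of the log2 ratio of gene j against every other gene,
    # averaged over partners: lower M means a more stably expressed gene.
    sds = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
           for k in range(len(genes)) if k != j]
    M.append(np.mean(sds))

for gene, m in sorted(zip(genes, M), key=lambda gm: gm[1]):
    print(f"{gene:10s} M = {m:.3f}")
```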

  14. Models for regionalizing economic data and their applications within the scope of forensic disaster analyses

    Science.gov (United States)

    Schmidt, Hanns-Maximilian; Wiens, rer. pol. Marcus, , Dr.; Schultmann, rer. pol. Frank, Prof. _., Dr.

    2015-04-01

The impact of natural hazards on the economic system can be observed in many different regions all over the world. Once the local economic structure is hit by an event, direct costs instantly occur. However, the disturbance on a local level (e.g. parts of a city or industries along a river bank) might also cause monetary damages in other, indirectly affected sectors. If the impact of an event is strong, these damages are likely to cascade and spread even on an international scale (e.g. the eruption of Eyjafjallajökull and its impact on the automotive sector in Europe). In order to determine these wider impacts, one has to gain insights into the directly hit economic structure before being able to calculate these side effects. Especially regarding the development of a model used for near real-time forensic disaster analyses, any simulation needs to be based on data that is rapidly available or easily computed. Therefore, we investigated commonly used or recently discussed methodologies for regionalizing economic data. Surprisingly, even for German federal states there is no official input-output data available that can be used, although it would provide detailed figures concerning economic interrelations between different industry sectors. In the case of highly developed countries, such as Germany, we focus on models for regionalizing the nationwide input-output table, which is usually available from the national statistical offices. However, when it comes to developing countries (e.g. South-East Asia), the data quality and availability is usually much poorer. In this case, other sources need to be found for the proper assessment of regional economic performance. We developed an indicator-based model that can fill this gap because of its flexibility regarding the level of aggregation and the composability of different input parameters. Our poster presentation brings up a literature review and a summary of potential models that seem to be useful for this specific task

  15. Evaluation of Temperature and Humidity Profiles of Unified Model and ECMWF Analyses Using GRUAN Radiosonde Observations

    Directory of Open Access Journals (Sweden)

    Young-Chan Noh

    2016-07-01

Full Text Available Temperature and water vapor profiles from the Korea Meteorological Administration (KMA) and the United Kingdom Met Office (UKMO) Unified Model (UM) data assimilation systems and from reanalysis fields from the European Centre for Medium-Range Weather Forecasts (ECMWF) were assessed using collocated radiosonde observations from the Global Climate Observing System (GCOS) Reference Upper-Air Network (GRUAN) for January–December 2012. The motivation was to examine the overall performance of data assimilation outputs. The difference statistics of the collocated model outputs versus the radiosonde observations indicated a good agreement for the temperature, amongst datasets, while less agreement was found for the relative humidity. A comparison of the UM outputs from the UKMO and KMA revealed that they are similar to each other. The introduction of the new version of UM into the KMA in May 2012 resulted in an improved analysis performance, particularly for the moisture field. On the other hand, ECMWF reanalysis data showed slightly reduced performance for relative humidity compared with the UM, with a significant humid bias in the upper troposphere. ECMWF reanalysis temperature fields showed nearly the same performance as the two UM analyses. The root mean square differences (RMSDs) of the relative humidity for the three models were larger for more humid conditions, suggesting that humidity forecasts are less reliable under these conditions.
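The collocation statistics reported above reduce to per-level bias and root mean square difference between model and radiosonde profiles; a minimal sketch of that computation, on placeholder arrays rather than real GRUAN or model data, is given below.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_levels = 500, 20
# Placeholder collocated relative-humidity profiles (n_collocations x n_levels)
sonde = rng.normal(loc=50.0, scale=15.0, size=(n_obs, n_levels))   # radiosonde RH (%)
model = sonde + rng.normal(loc=2.0, scale=8.0, size=sonde.shape)   # model RH (%)

diff = model - sonde
bias = np.nanmean(diff, axis=0)                    # per-level mean difference
rmsd = np.sqrt(np.nanmean(diff**2, axis=0))        # per-level RMSD

for lev in range(0, n_levels, 5):
    print(f"level {lev:2d}: bias = {bias[lev]:+5.2f} %RH, RMSD = {rmsd[lev]:5.2f} %RH")
```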

  16. Application of a weighted spatial probability model in GIS to analyse landslides in Penang Island, Malaysia

    Directory of Open Access Journals (Sweden)

    Samy Ismail Elmahdy

    2016-01-01

Full Text Available In the current study, Penang Island, which is one of the several mountainous areas in Malaysia that is often subjected to landslide hazard, was chosen for further investigation. A multi-criteria evaluation and spatial probability weighted approach with model builder was applied to map and analyse landslides in Penang Island. A set of automated algorithms was used to construct new essential geological and morphometric thematic maps from remote sensing data. The maps were ranked using the weighted probability spatial model based on their contribution to the landslide hazard. Results obtained showed that sites at an elevation of 100–300 m, with steep slopes of 10°–37° and slope direction (aspect) in the E and SE directions, were areas of very high and high probability for landslide occurrence; the total areas were 21.393 km2 (11.84%) and 58.690 km2 (32.48%), respectively. The obtained map was verified by comparing variogram models of the mapped and the observed landslide locations and showed a strong correlation with the locations of occurred landslides, indicating that the proposed method can successfully predict landslide hazard. The method is time and cost effective and can be used as a reference for geological and geotechnical engineers.

  17. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    Directory of Open Access Journals (Sweden)

    Sung-Chien Lin

    2014-07-01

Full Text Available In this study, we used the approach of topic modeling to uncover the possible structure of research topics in the field of Informetrics, to explore the distribution of the topics over years, and to compare the core journals. In order to infer the structure of the topics in the field, the data of the papers published in the Journal of Informetrics and Scientometrics from 2007 to 2013 were retrieved from the database of the Web of Science as input for the topic modeling approach. The results of this study show that when the number of topics was set to 10, the topic model had the smallest perplexity. Although the data scope and analysis methods differ from previous studies, the topics generated in this study are consistent with the results produced by analyses of experts. Empirical case studies and measurements of bibliometric indicators were considered important in every year during the whole analysis period, and the field was gaining stability. Both core journals broadly paid attention to all of the topics in the field of Informetrics. The Journal of Informetrics put particular emphasis on the construction and applications of bibliometric indicators, and Scientometrics focused on the evaluation and the factors of productivity of countries, institutions, domains, and journals.
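A minimal sketch of the perplexity-based choice of topic number described above is shown below, using scikit-learn's LDA implementation (the study's own tooling is not specified in the abstract); the tiny document list is a placeholder for the Journal of Informetrics and Scientometrics corpus.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "citation analysis of journal impact indicators",
    "h-index and research productivity of institutions",
    "co-authorship networks and collaboration patterns",
    "topic models and perplexity based model selection",
    "evaluation of countries institutions and journals",
    # ... in practice, thousands of abstracts would go here
]

# Bag-of-words document-term matrix
X = CountVectorizer(stop_words="english").fit_transform(docs)

for n_topics in (2, 5, 10, 15):
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    lda.fit(X)
    # Lower perplexity indicates a better-fitting topic model; the study
    # reports the minimum at 10 topics on its real corpus.
    print(n_topics, round(lda.perplexity(X), 1))
```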

  18. Developing a system dynamics model to analyse environmental problem in construction site

    Science.gov (United States)

    Haron, Fatin Fasehah; Hawari, Nurul Nazihah

    2017-11-01

This study aims to develop a system dynamics model of a construction site to analyse the impact of environmental problems. Construction sites may cause damage to the environment and interference in the daily lives of residents. A proper environmental management system must be used to reduce pollution, enhance bio-diversity, conserve water, respect people and their local environment, measure performance and set targets for the environment and sustainability. This study investigates the damaging impacts that normally occur during the construction stage. Environmental problems can cause costly mistakes in project implementation, either because of the environmental damage that is likely to arise during project implementation, or because of modifications that may be required subsequently in order to make the action environmentally acceptable. Thus, findings from this study have helped in significantly reducing the damaging impact towards the environment, and in improving the environmental management system performance at the construction site.

  19. Development and application of model RAIA uranium on-line analyser

    International Nuclear Information System (INIS)

    Dong Yanwu; Song Yufen; Zhu Yaokun; Cong Peiyuan; Cui Songru

    1999-01-01

The working principle, structure, adjustment and application of the model RAIA on-line analyser are reported. The performance of this instrument is reliable. For an identical sample, the signal fluctuation in continuous monitoring over four months is less than ±1%. According to the required measurement range, an appropriate length of the sample cell is chosen. The precision of the measurement process is better than 1% at 100 g/L U. The detection limit is 50 mg/L. The uranium concentration in the process stream can be displayed automatically and printed at any time. The analyser outputs a 4–20 mA current signal proportional to the uranium concentration. This is a significant step towards continuous process control and computer management

  20. Applying the Land Use Portfolio Model with Hazus to analyse risk from natural hazard events

    Science.gov (United States)

    Dinitz, Laura B.; Taketa, Richard A.

    2013-01-01

    This paper describes and demonstrates the integration of two geospatial decision-support systems for natural-hazard risk assessment and management. Hazus is a risk-assessment tool developed by the Federal Emergency Management Agency to identify risks and estimate the severity of risk from natural hazards. The Land Use Portfolio Model (LUPM) is a risk-management tool developed by the U.S. Geological Survey to evaluate plans or actions intended to reduce risk from natural hazards. We analysed three mitigation policies for one earthquake scenario in the San Francisco Bay area to demonstrate the added value of using Hazus and the LUPM together. The demonstration showed that Hazus loss estimates can be input to the LUPM to obtain estimates of losses avoided through mitigation, rates of return on mitigation investment, and measures of uncertainty. Together, they offer a more comprehensive approach to help with decisions for reducing risk from natural hazards.
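A hedged sketch of the kind of combination described above - scenario loss estimates with and without a mitigation policy feeding a losses-avoided and return-on-investment calculation - is given below; all figures are illustrative and the formulas are a simplification of what the LUPM actually evaluates.

```python
# Illustrative inputs: Hazus-style scenario loss estimates and a mitigation cost
loss_no_mitigation = 120.0e6   # estimated scenario loss without mitigation ($)
loss_with_mitigation = 85.0e6  # estimated scenario loss with mitigation ($)
mitigation_cost = 20.0e6       # up-front cost of the mitigation policy ($)
event_probability = 0.05       # chance the scenario event occurs in the period

losses_avoided = loss_no_mitigation - loss_with_mitigation
expected_benefit = event_probability * losses_avoided
roi = (expected_benefit - mitigation_cost) / mitigation_cost

print(f"losses avoided if the event occurs: ${losses_avoided:,.0f}")
print(f"expected benefit over the period:   ${expected_benefit:,.0f}")
print(f"return on mitigation investment:    {roi:.1%}")
```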

  1. A model for analysing factors which may influence quality management procedures in higher education

    Directory of Open Access Journals (Sweden)

    Cătălin MAICAN

    2015-12-01

Full Text Available In all universities, the Office for Quality Assurance defines the procedure for assessing the performance of the teaching staff, with a view to establishing students’ perception of the teachers’ activity in terms of the quality of the teaching process, the relationship with the students and the assistance provided for learning. The present paper aims at creating a combined model for evaluation, based on data mining statistical methods: starting from the findings revealed by the teachers’ evaluations of students, and using cluster analysis and discriminant analysis, we identified the subjects which produced significant differences between students’ grades; these subjects were subsequently evaluated by the students. The results of these analyses allowed the formulation of certain measures for enhancing the quality of the evaluation process.

  2. An application of the 'Bayesian cohort model' to nuclear power plant cost analyses

    International Nuclear Information System (INIS)

    Ono, Kenji; Nakamura, Takashi

    2002-01-01

We have developed a new method for identifying the effects of calendar year, plant age and commercial operation starting year on the costs and performances of nuclear power plants, and have also developed an analysis system running on personal computers. The method extends the Bayesian cohort model for time series social survey data proposed by one of the authors. The proposed method was shown to be able to separate the above three effects more properly than traditional methods such as taking simple means over the time domain. The analyses of US nuclear plant cost and performance data using the proposed method suggest that many of the US plants spent relatively long periods and considerable capital cost on modifications at ages of about 10 to 20 years, but that, after those ages, they performed fairly well with lower and stabilized O and M and additional capital costs. (author)

  3. Continuous spatial modelling to analyse planning and economic consequences of offshore wind energy

    International Nuclear Information System (INIS)

    Moeller, Bernd

    2011-01-01

Offshore wind resources appear abundant, but technological, economic and planning issues significantly reduce the theoretical potential. While massive investments are anticipated and planners and developers are scouting for viable locations and considering risk and impact, few studies simultaneously address potentials and costs together with the consequences of proposed planning in an analytical and continuous manner and for larger areas at once. The consequences may be investments that fall short of efficiency and equity, and failed planning routines. A spatial resource economic model for the Danish offshore waters is presented, used to analyse area constraints, technological risks, priorities for development and opportunity costs of maintaining competing area uses. The SCREAM-offshore wind model (Spatially Continuous Resource Economic Analysis Model) uses raster-based geographical information systems (GIS) and considers numerous geographical factors, technology and cost data as well as planning information. Novel elements are weighted visibility analysis and geographically recorded shipping movements as variable constraints. A number of scenarios have been described, which include restrictions on using offshore areas, as well as alternative uses such as conservation and tourism. The results comprise maps, tables and cost-supply curves for further resource economic assessment and policy analysis. A discussion of parameter variations exposes uncertainties of technology development, environmental protection as well as competing area uses, and illustrates how such models might assist in ameliorating public planning, while procuring decision bases for the political process. The method can be adapted to different research questions, and is largely applicable in other parts of the world. - Research Highlights: → A model for the spatially continuous evaluation of offshore wind resources. → Assessment of spatial constraints, costs and resources for each location. → Planning tool for

  4. Comparative modeling analyses of Cs-137 fate in the rivers impacted by Chernobyl and Fukushima accidents

    Energy Technology Data Exchange (ETDEWEB)

    Zheleznyak, M.; Kivva, S. [Institute of Environmental Radioactivity, Fukushima University (Japan)

    2014-07-01

The consequences of the two largest nuclear accidents of recent decades - at the Chernobyl Nuclear Power Plant (ChNPP) (1986) and at the Fukushima Daiichi NPP (FDNPP) (2011) - clearly demonstrated that radioactive contamination of water bodies in the vicinity of an NPP and on the waterways from it, e.g., river-reservoir water after the Chernobyl accident and rivers and coastal marine waters after the Fukushima accident, was in both cases one of the main sources of public concern about the accident consequences. The greater weight of water contamination in public perception of accident consequences, in comparison with the actual fraction of doses received via aquatic pathways relative to other dose components, is a specific feature of public perception of environmental contamination. This psychological phenomenon, confirmed after both accidents, provides supplementary arguments that reliable simulation and prediction of radionuclide dynamics in water and sediments is an important part of post-accident radioecological research. The purpose of the research is to use the experience of the modeling activities conducted over more than 25 years within the Chernobyl-affected Pripyat River and Dnieper River watershed, together with data from new monitoring studies in Japan of the Abukuma River (the largest in the region - the watershed area is 5400 km2), Kuchibuto River, Uta River, Niita River, Natsui River and Same River, as well as studies on the specifics of the 'water-sediment' 137Cs exchanges in this area, to refine the 1-D model RIVTOX and the 2-D model COASTOX and increase the predictive power of the modeling technologies. The results of the modeling studies are applied for more accurate prediction of water/sediment radionuclide contamination of rivers and reservoirs in Fukushima Prefecture and for comparative analyses of the efficiency of post-accident measures to diminish the contamination of the water bodies.

  5. Parameterization and sensitivity analyses of a radiative transfer model for remote sensing plant canopies

    Science.gov (United States)

    Hall, Carlton Raden

A major objective of remote sensing is determination of biochemical and biophysical characteristics of plant canopies utilizing high spectral resolution sensors. Canopy reflectance signatures are dependent on absorption and scattering processes of the leaf, canopy properties, and the ground beneath the canopy. This research investigates, through field and laboratory data collection, and computer model parameterization and simulations, the relationships between leaf optical properties, canopy biophysical features, and the nadir-viewed above-canopy reflectance signature. Emphasis is placed on parameterization and application of an existing irradiance radiative transfer model developed for aquatic systems. Data and model analyses provide knowledge on the relative importance of leaves and canopy biophysical features in estimating the diffuse absorption a(λ), diffuse backscatter b(λ), beam attenuation α(λ), and beam-to-diffuse conversion c(λ) coefficients (all in m-1) of the two-flow irradiance model. Data sets include field and laboratory measurements from three plant species, live oak (Quercus virginiana), Brazilian pepper (Schinus terebinthifolius) and grapefruit (Citrus paradisi), sampled on Cape Canaveral Air Force Station and Kennedy Space Center, Florida, in March and April of 1997. Features measured were depth h (m), projected foliage coverage PFC, leaf area index LAI, and zenith leaf angle. Optical measurements, collected with a Spectron SE 590 high-sensitivity narrow-bandwidth spectrograph, included above-canopy reflectance, internal canopy transmittance and reflectance, and bottom reflectance. Leaf samples were returned to the laboratory, where optical, physical and chemical measurements of leaf thickness, leaf area, leaf moisture and pigment content were made. A new term, the leaf volume correction index (LVCI), was developed and demonstrated in support of model coefficient parameterization. The LVCI is based on angle adjusted leaf

  6. Sensitivity analyses of a colloid-facilitated contaminant transport model for unsaturated heterogeneous soil conditions.

    Science.gov (United States)

    Périard, Yann; José Gumiere, Silvio; Rousseau, Alain N.; Caron, Jean

    2013-04-01

Certain contaminants may travel faster through soils when they are sorbed to subsurface colloidal particles. Indeed, subsurface colloids may act as carriers of some contaminants, accelerating their translocation through the soil into the water table. This phenomenon is known as colloid-facilitated contaminant transport. It plays a significant role in contaminant transport in soils and has been recognized as a source of groundwater contamination. From a mechanistic point of view, the attachment/detachment of the colloidal particles from the soil matrix or from the air-water interface and the straining process may modify the hydraulic properties of the porous media. Šimůnek et al. (2006) developed a model that can simulate the colloid-facilitated contaminant transport in variably saturated porous media. The model is based on the solution of a modified advection-dispersion equation that accounts for several processes, namely: straining, exclusion and attachment/detachment kinetics of colloids through the soil matrix. The solutions of these governing partial differential equations are obtained using a standard Galerkin-type, linear finite element scheme, implemented in the HYDRUS-2D/3D software (Šimůnek et al., 2012). Modeling colloid transport through the soil and the interaction of colloids with the soil matrix and other contaminants is complex and requires the characterization of many model parameters. In practice, it is very difficult to assess actual transport parameter values, so they are often calibrated. However, before calibration, one needs to know which parameters have the greatest impact on output variables. This kind of information can be obtained through a sensitivity analysis of the model. The main objective of this work is to perform local and global sensitivity analyses of the colloid-facilitated contaminant transport module of HYDRUS. Sensitivity analysis was performed in two steps: (i) we applied a screening method based on Morris' elementary
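The screening step mentioned at the end of the abstract is a Morris elementary-effects analysis; a minimal sketch using the SALib library is shown below, with a stand-in surrogate function in place of HYDRUS-2D/3D runs and illustrative parameter names and bounds.

```python
import numpy as np
from SALib.sample.morris import sample as morris_sample
from SALib.analyze.morris import analyze as morris_analyze

# Hypothetical parameters and bounds, for illustration only
problem = {
    "num_vars": 3,
    "names": ["attachment_rate", "detachment_rate", "straining_coeff"],
    "bounds": [[1e-4, 1e-1], [1e-5, 1e-2], [0.1, 2.0]],
}

def surrogate_model(x):
    # Placeholder for a HYDRUS run returning, e.g., a peak colloid concentration
    ka, kd, beta = x
    return ka / (kd + 1e-6) * np.exp(-beta)

X = morris_sample(problem, N=50, num_levels=4)        # Morris trajectories
Y = np.apply_along_axis(surrogate_model, 1, X)        # one output per sample

Si = morris_analyze(problem, X, Y, num_levels=4, print_to_console=False)
for name, mu_star, sigma in zip(problem["names"], Si["mu_star"], Si["sigma"]):
    print(f"{name:18s} mu* = {mu_star:10.3f}  sigma = {sigma:10.3f}")
```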

  7. Modeling and analysing storage systems in agricultural biomass supply chain for cellulosic ethanol production

    International Nuclear Information System (INIS)

    Ebadian, Mahmood; Sowlati, Taraneh; Sokhansanj, Shahab; Townley-Smith, Lawrence; Stumborg, Mark

    2013-01-01

Highlights: ► Studied the agricultural biomass supply chain for cellulosic ethanol production. ► Evaluated the impact of storage systems on different supply chain actors. ► Developed a combined simulation/optimization model to evaluate storage systems. ► Compared two satellite storage systems with roadside storage in terms of costs and emitted CO2. ► SS would lead to a more cost-efficient supply chain compared to roadside storage. -- Abstract: In this paper, a combined simulation/optimization model is developed to better understand and evaluate the impact of the storage systems on the costs incurred by each actor in the agricultural biomass supply chain, including farmers, hauling contractors and the cellulosic ethanol plant. The optimization model prescribes the optimum number and location of farms and storages. It also determines the supply radius, the number of farms required to secure the annual supply of biomass and also the assignment of farms to storage locations. Given the specific design of the supply chain determined by the optimization model, the simulation model determines the number of required machines for each operation, their daily working schedule and utilization rates, along with the capacities of storages. To evaluate the impact of the storage systems on the delivered costs, three storage systems are modelled and compared: the roadside storage (RS) system and two satellite storage (SS) systems, including SS with fixed hauling distance (SF) and SS with variable hauling distance (SV). In all storage systems, it is assumed the loading equipment is dedicated to storage locations. The obtained results from a real case study provide detailed cost figures for each storage system, since the developed model analyses the supply chain on an hourly basis and considers time-dependence and stochasticity of the supply chain. Comparison of the storage systems shows SV would outperform SF and RS by reducing the total delivered cost by 8% and 6%, respectively.

  8. Neural Spike-Train Analyses of the Speech-Based Envelope Power Spectrum Model

    Science.gov (United States)

    Rallapalli, Varsha H.

    2016-01-01

    Diagnosing and treating hearing impairment is challenging because people with similar degrees of sensorineural hearing loss (SNHL) often have different speech-recognition abilities. The speech-based envelope power spectrum model (sEPSM) has demonstrated that the signal-to-noise ratio (SNRENV) from a modulation filter bank provides a robust speech-intelligibility measure across a wider range of degraded conditions than many long-standing models. In the sEPSM, noise (N) is assumed to: (a) reduce S + N envelope power by filling in dips within clean speech (S) and (b) introduce an envelope noise floor from intrinsic fluctuations in the noise itself. While the promise of SNRENV has been demonstrated for normal-hearing listeners, it has not been thoroughly extended to hearing-impaired listeners because of limited physiological knowledge of how SNHL affects speech-in-noise envelope coding relative to noise alone. Here, envelope coding to speech-in-noise stimuli was quantified from auditory-nerve model spike trains using shuffled correlograms, which were analyzed in the modulation-frequency domain to compute modulation-band estimates of neural SNRENV. Preliminary spike-train analyses show strong similarities to the sEPSM, demonstrating feasibility of neural SNRENV computations. Results suggest that individual differences can occur based on differential degrees of outer- and inner-hair-cell dysfunction in listeners currently diagnosed into the single audiological SNHL category. The predicted acoustic-SNR dependence in individual differences suggests that the SNR-dependent rate of susceptibility could be an important metric in diagnosing individual differences. Future measurements of the neural SNRENV in animal studies with various forms of SNHL will provide valuable insight for understanding individual differences in speech-in-noise intelligibility.
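
    A minimal sketch of the acoustic SNRENV idea referred to above: the envelope power of speech-plus-noise relative to noise alone in one modulation band. This is a simplified, single-band acoustic version, not the auditory-nerve/shuffled-correlogram computation used in the study; the sampling rates, band edges and synthetic signals are assumptions for illustration.

```python
import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt, resample_poly

def envelope(x, fs, fs_env=200):
    """Hilbert envelope, downsampled to fs_env for modulation-domain analysis."""
    env = np.abs(hilbert(x))
    return resample_poly(env, up=fs_env, down=fs), fs_env

def modulation_band_power(env, fs_env, f_lo, f_hi):
    """AC-coupled envelope power in one modulation band, normalised by DC envelope power."""
    sos = butter(2, [f_lo, f_hi], btype="bandpass", fs=fs_env, output="sos")
    band = sosfiltfilt(sos, env - env.mean())
    return np.mean(band ** 2) / (env.mean() ** 2 + 1e-12)

def snr_env_db(speech_plus_noise, noise, fs, band=(4.0, 8.0)):
    """Envelope SNR (dB) in one modulation band, in the spirit of the sEPSM:
    SNRenv = (P_env,S+N - P_env,N) / P_env,N."""
    env_sn, fs_env = envelope(speech_plus_noise, fs)
    env_n, _ = envelope(noise, fs)
    p_sn = modulation_band_power(env_sn, fs_env, *band)
    p_n = modulation_band_power(env_n, fs_env, *band)
    return 10.0 * np.log10(max(p_sn - p_n, 1e-12) / (p_n + 1e-12))

# Illustrative use with synthetic signals: noise alone vs. noise plus a 4-Hz-modulated tone.
fs = 16000
t = np.arange(0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
noise = 0.5 * rng.standard_normal(t.size)
speech_like = (1.0 + 0.8 * np.sin(2 * np.pi * 4.0 * t)) * np.sin(2 * np.pi * 500.0 * t)
print(round(snr_env_db(speech_like + noise, noise, fs), 1))
```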

  9. Targeting the robo-advice customer: the development of a psychographic segmentation model for financial advice robots

    OpenAIRE

    van Thiel, D.; van Raaij, W.F.

    2017-01-01

    The purpose of this study is to develop the world’s first psychographic market segmentation model that supports personalization, customer education, customer activation, and customer engagement strategies with financial advice robots. As traditional segmentation models in consumer finance primarily focus on externally observed demographics or economic criteria such as profession, age, income, or wealth, post-hoc psychographic segmentation further supports personalization in the digital adviso...

  10. Analysing model fit of psychometric process models: An overview, a new test and an application to the diffusion model.

    Science.gov (United States)

    Ranger, Jochen; Kuhn, Jörg-Tobias; Szardenings, Carsten

    2017-05-01

    Cognitive psychometric models embed cognitive process models into a latent trait framework in order to allow for individual differences. Due to their close relationship to the response process, the models allow for profound conclusions about the test takers. However, before such a model can be used its fit has to be checked carefully. In this manuscript we give an overview of existing tests of model fit and show their relation to the generalized moment test of Newey (Econometrica, 53, 1985, 1047) and Tauchen (J. Econometrics, 30, 1985, 415). We also present a new test, the Hausman test of misspecification (Hausman, Econometrica, 46, 1978, 1251). The Hausman test consists of a comparison of two estimates of the same item parameters, which should be similar if the model holds. The performance of the Hausman test is evaluated in a simulation study. In this study we illustrate its application to two popular models in cognitive psychometrics, the Q-diffusion model and the D-diffusion model (van der Maas, Molenaar, Maris, Kievit, & Borsboom, Psychol. Rev., 118, 2011, 339; Molenaar, Tuerlinckx, & van der Maas, J. Stat. Softw., 66, 2015, 1). We also compare the performance of the test to four alternative tests of model fit, namely the M2 test (Molenaar et al., J. Stat. Softw., 66, 2015, 1), the moment test (Ranger et al., Br. J. Math. Stat. Psychol., 2016) and the test for binned time (Ranger & Kuhn, Psychol. Test. Assess., 56, 2014b, 370). The simulation study indicates that the Hausman test is superior to the latter tests. The test closely adheres to the nominal Type I error rate and has higher power in most simulation conditions. © 2017 The British Psychological Society.
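
    As a generic sketch of the comparison described above, the following computes a Hausman-type statistic from two estimates of the same parameter vector and their covariance matrices; the numbers are illustrative and the estimators are not the specific ones used in the paper.

```python
import numpy as np
from scipy.stats import chi2

def hausman_test(b_efficient, cov_efficient, b_consistent, cov_consistent):
    """Generic Hausman statistic H = d' (V_c - V_e)^{-1} d with d = b_c - b_e.

    Under the null hypothesis of correct model specification both estimators target
    the same parameters and H is asymptotically chi-square with len(d) degrees of freedom.
    """
    d = np.asarray(b_consistent) - np.asarray(b_efficient)
    v = np.asarray(cov_consistent) - np.asarray(cov_efficient)
    h = float(d @ np.linalg.pinv(v) @ d)          # pseudo-inverse guards against a near-singular V
    return h, 1.0 - chi2.cdf(h, d.size)

# Illustrative numbers only: two estimates of three item parameters.
b_e = np.array([1.10, 0.45, -0.30])
V_e = np.diag([0.010, 0.012, 0.011])
b_c = np.array([1.18, 0.40, -0.22])
V_c = np.diag([0.018, 0.020, 0.019])
H, p = hausman_test(b_e, V_e, b_c, V_c)
print(f"H = {H:.2f}, p = {p:.3f}")
```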

  11. Microsegregation in multicomponent alloy analysed by quantitative phase-field model

    International Nuclear Information System (INIS)

    Ohno, M; Takaki, T; Shibuta, Y

    2015-01-01

    Microsegregation behaviour in a ternary alloy system has been analysed by means of quantitative phase-field (Q-PF) simulations, with particular attention directed at the influence of the tie-line shift stemming from the different liquid diffusivities of the solute elements. The Q-PF model developed for non-isothermal solidification in multicomponent alloys with non-zero solid diffusivities was applied to the analysis of microsegregation in a ternary alloy consisting of fast and slow diffusing solute elements. The accuracy of the Q-PF simulation was first verified by performing a convergence test of the segregation ratio with respect to the interface thickness. From one-dimensional analysis, it was found that the microsegregation of the slow diffusing element is reduced due to the tie-line shift. In two-dimensional simulations, the refinement of the microstructure, viz. the decrease of secondary arm spacing, occurs at low cooling rates due to the formation of a diffusion layer of the slow diffusing element. This yields reductions in the degree of microsegregation for both the fast and slow diffusing elements. Importantly, in a wide range of cooling rates, the degree of microsegregation of the slow diffusing element is always lower than that of the fast diffusing element, which is entirely ascribable to the influence of the tie-line shift. (paper)

  12. Modelling and Analysing Access Control Policies in XACML 3.0

    DEFF Research Database (Denmark)

    Ramli, Carroline Dewi Puspa Kencana

    Manual analysis of the overall effect and consequences of a large XACML policy set is a very daunting and time-consuming task (cf. GM03, Mos05, Ris13). In this thesis we address the problem of understanding the semantics of the access control policy language XACML, in particular XACML version 3.0. The main focus of this thesis is modelling and analysing access control policies in XACML 3.0. There are two main contributions in this thesis. First, we study and formalise XACML 3.0, in particular the Policy Decision Point (PDP). The concrete syntax of XACML is based on the XML format, while its standard semantics is described normatively using natural language. The use of English text in standardisation leads to the risk of misinterpretation and ambiguity. In order to avoid this drawback, we define an abstract syntax of XACML 3.0 and a formal XACML semantics. Second, we propose a logic-based XACML analysis.
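
    As a toy illustration of the kind of combining-algorithm semantics such a formalisation has to pin down, the sketch below encodes simplified deny-overrides and permit-overrides combining over abstract decisions. It is not the formalisation from the thesis, and the real XACML 3.0 algorithms additionally distinguish several Indeterminate variants (Indeterminate{D}, {P}, {DP}).

```python
from enum import Enum

class Decision(Enum):
    PERMIT = "Permit"
    DENY = "Deny"
    NOT_APPLICABLE = "NotApplicable"
    INDETERMINATE = "Indeterminate"

def deny_overrides(decisions):
    """Simplified XACML-style deny-overrides combining over a list of rule decisions."""
    if Decision.DENY in decisions:
        return Decision.DENY
    if Decision.INDETERMINATE in decisions:
        return Decision.INDETERMINATE
    if Decision.PERMIT in decisions:
        return Decision.PERMIT
    return Decision.NOT_APPLICABLE

def permit_overrides(decisions):
    """Simplified XACML-style permit-overrides combining over a list of rule decisions."""
    if Decision.PERMIT in decisions:
        return Decision.PERMIT
    if Decision.INDETERMINATE in decisions:
        return Decision.INDETERMINATE
    if Decision.DENY in decisions:
        return Decision.DENY
    return Decision.NOT_APPLICABLE

rules = [Decision.NOT_APPLICABLE, Decision.PERMIT, Decision.DENY]
print(deny_overrides(rules), permit_overrides(rules))   # Decision.DENY  Decision.PERMIT
```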

  13. VOC composition of current motor vehicle fuels and vapors, and collinearity analyses for receptor modeling.

    Science.gov (United States)

    Chin, Jo-Yu; Batterman, Stuart A

    2012-03-01

    The formulation of motor vehicle fuels can alter the magnitude and composition of evaporative and exhaust emissions occurring throughout the fuel cycle. Information regarding the volatile organic compound (VOC) composition of motor fuels other than gasoline is scarce, especially for bioethanol and biodiesel blends. This study examines the liquid and vapor (headspace) composition of four contemporary and commercially available fuels: gasoline, E85, ultra-low sulfur diesel (ULSD), and B20 (20% soy-biodiesel and 80% ULSD). The composition of gasoline and E85, in both neat fuel and headspace vapor, was dominated by aromatics and n-heptane. Despite its low gasoline content, E85 vapor contained higher concentrations of several VOCs than those in gasoline vapor, likely due to adjustments in its formulation. Temperature changes produced greater changes in the partial pressures of 17 VOCs in E85 than in gasoline, and large shifts in the VOC composition. B20 and ULSD were dominated by C9 to C16 n-alkanes and low levels of aromatics, and the two fuels had similar headspace vapor compositions and concentrations. While the headspace composition predicted using vapor-liquid equilibrium theory was closely correlated with measurements, E85 vapor concentrations were underpredicted. Based on variance decomposition analyses, the VOC profiles of gasoline and diesel fuels and their vapors were distinct, but B20 and ULSD fuels and vapors were highly collinear. These results can be used to estimate fuel-related emissions and exposures, particularly in receptor models that apportion emission sources, and the collinearity analysis suggests that gasoline- and diesel-related emissions can be distinguished. Copyright © 2011 Elsevier Ltd. All rights reserved.
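
    A minimal sketch of the vapor-liquid equilibrium (Raoult's law) headspace prediction mentioned above: the partial pressure of a fuel component is its liquid mole fraction times its pure-component vapor pressure. The compound, mole fraction and vapor pressure below are illustrative assumptions, and real fuels deviate from ideality (activity coefficients are taken as 1).

```python
def headspace_ppm(mole_fraction, pure_vapor_pressure_kpa, total_pressure_kpa=101.325):
    """Ideal (Raoult's law) headspace mixing ratio of one VOC in ppm(v).

    partial pressure p_i = x_i * P_i_sat; mixing ratio = p_i / P_total.
    Assumes an ideal solution; activity coefficients of 1 are a simplification for real fuels.
    """
    p_i = mole_fraction * pure_vapor_pressure_kpa
    return 1e6 * p_i / total_pressure_kpa

# Illustrative: toluene at x = 0.05 in the liquid fuel, with P_sat ~ 3.8 kPa at 25 degC.
print(round(headspace_ppm(0.05, 3.8)))   # ~1875 ppm(v)
```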

  14. Usefulness of non-linear input-output models for economic impact analyses in tourism and recreation

    NARCIS (Netherlands)

    Klijs, J.; Peerlings, J.H.M.; Heijman, W.J.M.

    2015-01-01

    In tourism and recreation management it is still common practice to apply traditional input–output (IO) economic impact models, despite their well-known limitations. In this study the authors analyse the usefulness of applying a non-linear input–output (NLIO) model, in which price-induced input

  15. Process of Integrating Screening and Detailed Risk-based Modeling Analyses to Ensure Consistent and Scientifically Defensible Results

    International Nuclear Information System (INIS)

    Buck, John W.; McDonald, John P.; Taira, Randal Y.

    2002-01-01

    To support cleanup and closure of these tanks, modeling is performed to understand and predict potential impacts to human health and the environment. Pacific Northwest National Laboratory developed a screening tool for the United States Department of Energy, Office of River Protection that estimates the long-term human health risk, from a strategic planning perspective, posed by potential tank releases to the environment. This tool is being conditioned to more detailed model analyses to ensure consistency between studies and to provide scientific defensibility. Once the conditioning is complete, the system will be used to screen alternative cleanup and closure strategies. The integration of screening and detailed models provides consistent analyses, efficiencies in resources, and positive feedback between the various modeling groups. This approach of conditioning a screening methodology to more detailed analyses provides decision-makers with timely and defensible information and increases confidence in the results on the part of clients, regulators, and stakeholders

  16. A simple beam model to analyse the durability of adhesively bonded tile floorings in presence of shrinkage

    Directory of Open Access Journals (Sweden)

    S. de Miranda

    2014-07-01

    A simple beam model for the evaluation of tile debonding due to substrate shrinkage is presented. The tile-adhesive-substrate package is modeled as an Euler-Bernoulli beam lying on a two-layer elastic foundation. An effective discrete model for inter-tile grouting is introduced with the aim of modelling workmanship defects due to partially filled groutings. The model is validated using the results of a 2D FE model. Different defect configurations and adhesive typologies are analysed, focusing attention on the prediction of normal stresses in the adhesive layer under the assumption of Mode I failure of the adhesive.
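
    For reference, the governing equation of an Euler-Bernoulli beam resting on an elastic foundation, the class of model the paper builds on, is (a single-layer Winkler foundation is shown for brevity, whereas the paper uses a two-layer foundation):

$$ EI\,\frac{d^{4}w}{dx^{4}} + k\,w(x) = q(x), $$

    where EI is the bending stiffness of the tile-adhesive package, k the foundation modulus per unit length, w the transverse deflection and q the equivalent load induced by substrate shrinkage.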

  17. Monte Carlo modeling and analyses of YALINA-booster subcritical assembly part 1: analytical models and main neutronics parameters

    International Nuclear Information System (INIS)

    Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division

    2008-01-01

    This study was carried out to model and analyze the YALINA-Booster facility, of the Joint Institute for Power and Nuclear Research of Belarus, with the long term objective of advancing the utilization of accelerator driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions. In the latter two cases a deuteron beam is used to generate the neutrons. This study is a part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity without any geometrical homogenization using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, which is an extension of the MCNP code with burnup capability because of its additional feature for analyzing source driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1

  18. Monte Carlo modeling and analyses of YALINA-booster subcritical assembly part 1: analytical models and main neutronics parameters.

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, M. Y. A.; Nuclear Engineering Division

    2008-09-11

    This study was carried out to model and analyze the YALINA-Booster facility, of the Joint Institute for Power and Nuclear Research of Belarus, with the long term objective of advancing the utilization of accelerator driven systems for the incineration of nuclear waste. The YALINA-Booster facility is a subcritical assembly, driven by an external neutron source, which has been constructed to study the neutron physics and to develop and refine methodologies to control the operation of accelerator driven systems. The external neutron source consists of Californium-252 spontaneous fission neutrons, 2.45 MeV neutrons from Deuterium-Deuterium reactions, or 14.1 MeV neutrons from Deuterium-Tritium reactions. In the latter two cases a deuteron beam is used to generate the neutrons. This study is a part of the collaborative activity between Argonne National Laboratory (ANL) of USA and the Joint Institute for Power and Nuclear Research of Belarus. In addition, the International Atomic Energy Agency (IAEA) has a coordinated research project benchmarking and comparing the results of different numerical codes with the experimental data available from the YALINA-Booster facility and ANL has a leading role coordinating the IAEA activity. The YALINA-Booster facility has been modeled according to the benchmark specifications defined for the IAEA activity without any geometrical homogenization using the Monte Carlo codes MONK and MCNP/MCNPX/MCB. The MONK model perfectly matches the MCNP one. The computational analyses have been extended through the MCB code, which is an extension of the MCNP code with burnup capability because of its additional feature for analyzing source driven multiplying assemblies. The main neutronics parameters of the YALINA-Booster facility were calculated using these computer codes with different nuclear data libraries based on ENDF/B-VI-0, -6, JEF-2.2, and JEF-3.1.
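
    For orientation, the textbook source-multiplication relation for an externally driven subcritical assembly such as YALINA-Booster is

$$ M = \frac{1}{1 - k_{s}}, \qquad P \propto \frac{S_{\mathrm{ext}}}{1 - k_{s}}, $$

    where k_s is the source (subcritical) multiplication factor, S_ext the external neutron source strength and M the total neutron multiplication; this standard relation is quoted here only for context and is not a result of the study.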

  19. Comparison of prosthetic models produced by traditional and additive manufacturing methods.

    Science.gov (United States)

    Park, Jin-Young; Kim, Hae-Young; Kim, Ji-Hwan; Kim, Jae-Hong; Kim, Woong-Chul

    2015-08-01

    The purpose of this study was to verify the clinical feasibility of additive manufacturing by comparing the accuracy of four different manufacturing methods for metal copings: the conventional lost wax technique (CLWT); a subtractive method, wax blank milling (WBM); and two additive methods, multi jet modeling (MJM) and micro-stereolithography (Micro-SLA). Thirty study models were created using an acrylic model with the maxillary upper right canine, first premolar, and first molar teeth. Based on the scan files from a non-contact blue light scanner (Identica; Medit Co. Ltd., Seoul, Korea), thirty cores were produced using the WBM, MJM, and Micro-SLA methods, respectively, and another thirty frameworks were produced using the CLWT method. To measure the marginal and internal gap, the silicone replica method was adopted, and the silicone images obtained were evaluated using a digital microscope (KH-7700; Hirox, Tokyo, Japan) at 140X magnification. Analyses were performed using two-way analysis of variance (ANOVA) and the Tukey post hoc test (α=.05). The mean marginal and internal gaps showed significant differences according to tooth type and manufacturing method. However, the gaps obtained with all four manufacturing methods were within a clinically allowable range, and, thus, the clinical use of additive manufacturing methods is acceptable as an alternative to the traditional lost wax technique and subtractive manufacturing.
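
    A sketch of the two-way ANOVA with Tukey HSD post hoc comparison named above, using statsmodels on a hypothetical gap dataset; the column names, group means and sample sizes are invented for illustration and do not reproduce the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical marginal-gap measurements (micrometres), for illustration only.
rng = np.random.default_rng(0)
methods = ["CLWT", "WBM", "MJM", "MicroSLA"]
teeth = ["canine", "premolar", "molar"]
rows = [{"gap": rng.normal(80 + 5 * i + 3 * j, 8), "method": m, "tooth": t}
        for i, m in enumerate(methods)
        for j, t in enumerate(teeth)
        for _ in range(10)]
df = pd.DataFrame(rows)

# Two-way ANOVA: gap ~ tooth type * manufacturing method.
model = ols("gap ~ C(tooth) * C(method)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey HSD post hoc test on the manufacturing-method factor (alpha = .05).
print(pairwise_tukeyhsd(df["gap"], df["method"], alpha=0.05))
```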

  20. A comparison of two coaching approaches to enhance implementation of a recovery-oriented service model.

    Science.gov (United States)

    Deane, Frank P; Andresen, Retta; Crowe, Trevor P; Oades, Lindsay G; Ciarrochi, Joseph; Williams, Virginia

    2014-09-01

    Moving to recovery-oriented service provision in mental health may entail retraining existing staff, as well as training new staff. This represents a substantial burden on organisations, particularly since transfer of training into practice is often poor. Follow-up supervision and/or coaching have been found to improve the implementation and sustainment of new approaches. We compared the effect of two coaching conditions, skills-based and transformational coaching, on the implementation of a recovery-oriented model following training. Training followed by coaching led to significant sustained improvements in the quality of care planning in accordance with the new model over the 12-month study period. No interaction effect was observed between the two conditions. However, post hoc analyses suggest that transformational coaching warrants further exploration. The results support the provision of supervision in the form of coaching in the implementation of a recovery-oriented service model, and suggest the need to better elucidate the mechanisms within different coaching approaches that might contribute to improved care.

  1. Effect of one-lung ventilation on end-tidal carbon dioxide during cardiopulmonary resuscitation in a pig model of cardiac arrest.

    Science.gov (United States)

    Ryu, Dong Hyun; Jung, Yong Hun; Jeung, Kyung Woon; Lee, Byung Kook; Jeong, Young Won; Yun, Jong Geun; Lee, Dong Hun; Lee, Sung Min; Heo, Tag; Min, Yong Il

    2018-01-01

    Unrecognized endobronchial intubation frequently occurs after emergency intubation. However, no study has evaluated the effect of one-lung ventilation on end-tidal carbon dioxide (ETCO2) during cardiopulmonary resuscitation (CPR). We compared the hemodynamic parameters, blood gases, and ETCO2 during one-lung ventilation with those during conventional two-lung ventilation in a pig model of CPR, to determine the effect of the former on ETCO2. A randomized crossover study was conducted in 12 pigs intubated with a double-lumen endobronchial tube to achieve lung separation. During CPR, the animals underwent three 5-min ventilation trials based on a randomized crossover design: left-lung, right-lung, or two-lung ventilation. Arterial blood gases were measured at the end of each ventilation trial. Ventilation was provided using the same tidal volume throughout the ventilation trials. Comparison using a generalized linear mixed model revealed no significant group effects with respect to aortic pressure, coronary perfusion pressure, and carotid blood flow; however, a significant group effect in terms of ETCO2 was found (P < 0.001). In the post hoc analyses, ETCO2 was lower during right-lung ventilation than during two-lung (P = 0.006) or left-lung ventilation (P < 0.001). However, no difference in ETCO2 was detected between the left-lung and two-lung ventilations. The partial pressure of arterial carbon dioxide (PaCO2), partial pressure of arterial oxygen (PaO2), and oxygen saturation (SaO2) differed among the three types of ventilation (P = 0.003, P = 0.001, and P = 0.001, respectively). The post hoc analyses revealed a higher PaCO2, lower PaO2, and lower SaO2 during right-lung ventilation than during two-lung or left-lung ventilation. However, the levels of these blood gases did not differ between the left-lung and two-lung ventilations. In a pig model of CPR, ETCO2 was significantly lower during right-lung ventilation than during two-lung ventilation. However

  2. A Conceptual Model for Analysing Management Development in the UK Hospitality Industry

    Science.gov (United States)

    Watson, Sandra

    2007-01-01

    This paper presents a conceptual, contingent model of management development. It explains the nature of the UK hospitality industry and its potential influence on MD practices, prior to exploring dimensions and relationships in the model. The embryonic model is presented as a model that can enhance our understanding of the complexities of the…

  3. Inverse analyses of effective diffusion parameters relevant for a two-phase moisture model of cementitious materials

    DEFF Research Database (Denmark)

    Addassi, Mouadh; Johannesson, Björn; Wadsö, Lars

    2018-01-01

    Here we present an inverse analysis approach to determining the two-phase moisture transport properties relevant to concrete durability modeling. The proposed moisture transport model was based on a continuum approach with two truly separate equations for the liquid and gas phase being connected ... test, and (iv) capillary suction test. Mass change over time, as obtained from the drying test, the two different cup test intervals and the capillary suction test, was used to obtain the effective diffusion parameters using the proposed inverse analysis approach. The moisture properties obtained ...

  4. The Evaluation of Bivariate Mixed Models in Meta-analyses of Diagnostic Accuracy Studies with SAS, Stata and R.

    Science.gov (United States)

    Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc

    2018-05-01

    Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also model implementation in the software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity - the two outcomes of interest in meta-analyses of diagnostic accuracy studies - utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three sizes for meta-analyses (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations of less than two percentage points. proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma, rather shows convergence problems. The random effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification together with convergence robustness should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach. Schattauer GmbH.
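
    In its usual form (notation mine, not quoted from the paper), the generalized bivariate mixed model for study i, with TP_i true positives out of n_1i diseased and TN_i true negatives out of n_0i non-diseased subjects, reads

$$ TP_i \sim \mathrm{Bin}\!\left(n_{1i}, \mathrm{se}_i\right), \qquad TN_i \sim \mathrm{Bin}\!\left(n_{0i}, \mathrm{sp}_i\right), $$
$$ \begin{pmatrix} \operatorname{logit}(\mathrm{se}_i) \\ \operatorname{logit}(\mathrm{sp}_i) \end{pmatrix} \sim \mathcal{N}\!\left( \begin{pmatrix} \mu_{\mathrm{se}} \\ \mu_{\mathrm{sp}} \end{pmatrix}, \begin{pmatrix} \sigma_{\mathrm{se}}^{2} & \rho\,\sigma_{\mathrm{se}}\sigma_{\mathrm{sp}} \\ \rho\,\sigma_{\mathrm{se}}\sigma_{\mathrm{sp}} & \sigma_{\mathrm{sp}}^{2} \end{pmatrix} \right), $$

    where se_i and sp_i are the study-level sensitivity and specificity, and the correlation ρ captures the threshold-induced trade-off between them; the "general" (Reitsma) variant replaces the binomial layer with an approximate normal model for the observed logits.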

  5. Results of radiotherapy in craniopharyngiomas analysed by the linear quadratic model

    Energy Technology Data Exchange (ETDEWEB)

    Guerkaynak, M. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Oezyar, E. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Zorlu, F. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Akyol, F.H. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey); Lale Atahan, I. [Dept. of Radiation Oncology, Hacettepe Univ., Ankara (Turkey)

    1994-12-31

    In 23 craniopharyngioma patients treated by limited surgery and external radiotherapy, the results concerning local control were analysed with the linear quadratic formula. A biologically effective dose (BED) of 55 Gy, calculated with a time factor and an α/β value of 10 Gy, seemed to be adequate for local control. (orig.)
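
    The time-corrected linear-quadratic expression for biologically effective dose referred to above is, in its standard textbook form (the symbols are the usual ones; only the α/β value of 10 Gy comes from the study),

$$ \mathrm{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right) - \frac{\ln 2}{\alpha}\,\frac{T - T_{k}}{T_{\mathrm{pot}}}, $$

    where n is the number of fractions, d the dose per fraction, T the overall treatment time, T_k the kick-off time for accelerated repopulation and T_pot the potential doubling time.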

  6. From intermediate to final behavioral endpoints : Modeling cognitions in (cost-)effectiveness analyses in health promotion

    NARCIS (Netherlands)

    Prenger, Hendrikje Cornelia

    2012-01-01

    Cost-effectiveness analyses (CEAs) are considered an increasingly important tool in health promotion and psychology. In health promotion adequate effectiveness data of innovative interventions are often lacking. In case of many promising interventions the available data are inadequate for CEAs due

  7. Analyses, algorithms, and computations for models of high-temperature superconductivity. Final report

    International Nuclear Information System (INIS)

    Du, Q.

    1997-01-01

    Under the sponsorship of the Department of Energy, the authors have achieved significant progress in the modeling, analysis, and computation of superconducting phenomena. The work so far has focused on mesoscale models as typified by the celebrated Ginzburg-Landau equations; these models are intermediate between the microscopic models (that can be used to understand the basic structure of superconductors and of the atomic and sub-atomic behavior of these materials) and the macroscale, or homogenized, models (that can be of use for the design of devices). The models they have considered include a time dependent Ginzburg-Landau model, a variable thickness thin film model, models for high values of the Ginzburg-Landau parameter, models that account for normal inclusions and fluctuations and Josephson effects, and the anisotropic Ginzburg-Landau and Lawrence-Doniach models for layered superconductors, including those with high critical temperatures. In each case, they have developed or refined the models, derived rigorous mathematical results that enhance the state of understanding of the models and their solutions, and developed, analyzed, and implemented finite element algorithms for the approximate solution of the model equations

  8. Analyses, algorithms, and computations for models of high-temperature superconductivity. Final technical report

    International Nuclear Information System (INIS)

    Gunzburger, M.D.; Peterson, J.S.

    1998-01-01

    Under the sponsorship of the Department of Energy, the authors have achieved significant progress in the modeling, analysis, and computation of superconducting phenomena. Their work has focused on mesoscale models as typified by the celebrated Ginzburg-Landau equations; these models are intermediate between the microscopic models (that can be used to understand the basic structure of superconductors and of the atomic and sub-atomic behavior of these materials) and the macroscale, or homogenized, models (that can be of use for the design of devices). The models the authors have considered include a time dependent Ginzburg-Landau model, a variable thickness thin film model, models for high values of the Ginzburg-Landau parameter, models that account for normal inclusions and fluctuations and Josephson effects, and the anisotropic Ginzburg-Landau and Lawrence-Doniach models for layered superconductors, including those with high critical temperatures. In each case, they have developed or refined the models, derived rigorous mathematical results that enhance the state of understanding of the models and their solutions, and developed, analyzed, and implemented finite element algorithms for the approximate solution of the model equations
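
    For reference, one common nondimensional form of the time-dependent Ginzburg-Landau system named in both reports is (a standard textbook form, not quoted from the reports):

$$ \frac{\partial\psi}{\partial t} + i\kappa\phi\,\psi + \left(\frac{i}{\kappa}\nabla + \mathbf{A}\right)^{2}\psi + \left(|\psi|^{2} - 1\right)\psi = 0, $$
$$ \sigma\left(\frac{\partial\mathbf{A}}{\partial t} + \nabla\phi\right) + \nabla\times\nabla\times\mathbf{A} - \nabla\times\mathbf{H} = \frac{1}{2i\kappa}\left(\bar{\psi}\,\nabla\psi - \psi\,\nabla\bar{\psi}\right) - |\psi|^{2}\mathbf{A}, $$

    where ψ is the complex order parameter, A the magnetic vector potential, φ the electric potential, H the applied field, κ the Ginzburg-Landau parameter and σ the normal-state conductivity.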

  9. Using EEG and stimulus context to probe the modelling of auditory-visual speech.

    Science.gov (United States)

    Paris, Tim; Kim, Jeesun; Davis, Chris

    2016-02-01

    We investigated whether internal models of the relationship between lip movements and corresponding speech sounds [Auditory-Visual (AV) speech] could be updated via experience. AV associations were indexed by early and late event related potentials (ERPs) and by oscillatory power and phase locking. Different AV experience was produced via a context manipulation. Participants were presented with valid (the conventional pairing) and invalid AV speech items in either a 'reliable' context (80% AVvalid items) or an 'unreliable' context (80% AVinvalid items). The results showed that for the reliable context, there was N1 facilitation for AV compared to auditory only speech. This N1 facilitation was not affected by AV validity. Later ERPs showed a difference in amplitude between valid and invalid AV speech and there was significant enhancement of power for valid versus invalid AV speech. These response patterns did not change over the context manipulation, suggesting that the internal models of AV speech were not updated by experience. The results also showed that the facilitation of N1 responses did not vary as a function of the salience of visual speech (as previously reported); in post-hoc analyses, it appeared instead that N1 facilitation varied according to the relative time of the acoustic onset, suggesting that for AV events the N1 may be more sensitive to AV timing than to AV form. Crown Copyright © 2015. Published by Elsevier Ltd. All rights reserved.

  10. Bio-economic farm modelling to analyse agricultural land productivity in Rwanda

    NARCIS (Netherlands)

    Bidogeza, J.C.

    2011-01-01

    Keywords: Rwanda; farm household typology; sustainable technology adoption; multivariate analysis; land degradation; food security; bioeconomic model; crop simulation models; organic fertiliser; inorganic fertiliser; policy incentives

    In Rwanda, land degradation contributes to the

  11. Scalable Coupling of Multiscale AEH and PARADYN Analyses for Impact Modeling

    National Research Council Canada - National Science Library

    Valisetty, Rama R; Chung, Peter W; Namburu, Raju R

    2005-01-01

    An asymptotic expansion homogenization (AEH)-based microstructural model available for modeling microstructural aspects of modern armor materials is coupled with PARADYN, a parallel explicit Lagrangian finite-element code...

  12. Development of CFD fire models for deterministic analyses of the cable issues in the nuclear power plant

    International Nuclear Information System (INIS)

    Lin, C.-H.; Ferng, Y.-M.; Pei, B.-S.

    2009-01-01

    Additional fire barriers for electrical cables are required for the nuclear power plants (NPPs) in Taiwan due to the separation requirements of Appendix R to 10 CFR Part 50. The risk-informed fire analysis (RIFA) may provide a viable method to resolve these fire barrier issues. However, it is necessary to perform fire scenario analyses so that RIFA can quantitatively determine the risk related to the fire barrier wrap. CFD fire models are therefore proposed in this paper to help RIFA resolve these issues. Three typical fire scenarios are selected to assess the present CFD models. Compared with the experimental data and other models' simulations, the present calculated results show reasonable agreement, indicating that the present CFD fire models can provide the quantitative information for RIFA analyses to release the cable wrap requirements for NPPs

  13. The application of model with lumped parameters for transient condition analyses of NPP

    International Nuclear Information System (INIS)

    Stankovic, B.; Stevanovic, V.

    1985-01-01

    The transient behaviour of NPP Krsko during an accident in which the pressurizer spray valve is stuck open has been simulated by a lumped-parameter model of the PWR coolant system components, developed at the Faculty of Mechanical Engineering, University of Belgrade. Elementary volumes, which are characterised by the process and state parameters, and junctions, which are characterised by the geometrical and flow parameters, form the basic structure of the physical model. The process parameters obtained by the model RESI show qualitative agreement with the measured values, to the degree to which the actions of the reactor engineered safety systems and the emergency core cooling system are adequately modelled, in spite of the elementary physical model structure, the modelling of thermal processes in the reactor core only, and the equilibrium treatment of the pressurizer and steam generator. The pressurizer pressure and liquid level predicted by the non-equilibrium pressurizer model SOP show good agreement until the HIPS (high pressure pumps) is activated. (author)

  14. Conducting requirements analyses for research using routinely collected health data: a model driven approach.

    Science.gov (United States)

    de Lusignan, Simon; Cashman, Josephine; Poh, Norman; Michalakidis, Georgios; Mason, Aaron; Desombre, Terry; Krause, Paul

    2012-01-01

    Medical research increasingly requires the linkage of data from different sources. Conducting a requirements analysis for a new application is an established part of software engineering, but rarely reported in the biomedical literature; and no generic approaches have been published as to how to link heterogeneous health data. Literature review, followed by a consensus process to define how requirements for research using multiple data sources might be modeled. We have developed a requirements analysis: i-ScheDULEs - the first components of the modeling process are indexing and creating a rich picture of the research study. Secondly, we developed a series of reference models of progressive complexity: data flow diagrams (DFD) to define data requirements; unified modeling language (UML) use case diagrams to capture study-specific and governance requirements; and finally, business process models, using business process modeling notation (BPMN). These requirements and their associated models should become part of research study protocols.

  15. Pathway models for analysing and managing the introduction of alien plant pests—an overview and categorization

    Science.gov (United States)

    J.C. Douma; M. Pautasso; R.C. Venette; C. Robinet; L. Hemerik; M.C.M. Mourits; J. Schans; W. van der Werf

    2016-01-01

    Alien plant pests are introduced into new areas at unprecedented rates through global trade, transport, tourism and travel, threatening biodiversity and agriculture. Increasingly, the movement and introduction of pests is analysed with pathway models to provide risk managers with quantitative estimates of introduction risks and effectiveness of management options....

  16. Economical analyses of build-operate-transfer model in establishing alternative power plants

    Energy Technology Data Exchange (ETDEWEB)

    Yumurtaci, Zehra [Yildiz Technical University, Department of Mechanical Engineering, Y.T.U. Mak. Fak. Mak. Muh. Bolumu, Besiktas, 34349 Istanbul (Turkey)]. E-mail: zyumur@yildiz.edu.tr; Erdem, Hasan Hueseyin [Yildiz Technical University, Department of Mechanical Engineering, Y.T.U. Mak. Fak. Mak. Muh. Bolumu, Besiktas, 34349 Istanbul (Turkey)

    2007-01-15

    The most widely employed method to meet the increasing electricity demand is building new power plants. The most important issue in building new power plants is to find financial funds. Various models are employed, especially in developing countries, in order to overcome this problem and to find a financial source. One of these models is the build-operate-transfer (BOT) model. In this model, the investor raises all the funds for mandatory expenses and provides financing, builds the plant and, after a certain plant operation period, transfers the plant to the national power organization. In this model, the objective is to decrease the burden of power plants on the state budget. The most important issue in the BOT model is the dependence of the unit electricity cost on the transfer period. In this study, the model giving the unit electricity cost depending on the transfer period of plants established according to the BOT model has been discussed. Unit electricity investment cost and unit electricity cost in relation to the transfer period have been determined for the plant types. Furthermore, the change in unit electricity cost depending on the load factor, which is one of the parameters affecting annual electricity production, has been determined, and the results have been analyzed. This method can be employed for comparing the production costs of different plants that are planned to be established according to the BOT model, or it can be employed to determine the appropriateness of the BOT model.

  17. Economical analyses of build-operate-transfer model in establishing alternative power plants

    International Nuclear Information System (INIS)

    Yumurtaci, Zehra; Erdem, Hasan Hueseyin

    2007-01-01

    The most widely employed method to meet the increasing electricity demand is building new power plants. The most important issue in building new power plants is to find financial funds. Various models are employed, especially in developing countries, in order to overcome this problem and to find a financial source. One of these models is the build-operate-transfer (BOT) model. In this model, the investor raises all the funds for mandatory expenses and provides financing, builds the plant and, after a certain plant operation period, transfers the plant to the national power organization. In this model, the objective is to decrease the burden of power plants on the state budget. The most important issue in the BOT model is the dependence of the unit electricity cost on the transfer period. In this study, the model giving the unit electricity cost depending on the transfer period of plants established according to the BOT model has been discussed. Unit electricity investment cost and unit electricity cost in relation to the transfer period have been determined for the plant types. Furthermore, the change in unit electricity cost depending on the load factor, which is one of the parameters affecting annual electricity production, has been determined, and the results have been analyzed. This method can be employed for comparing the production costs of different plants that are planned to be established according to the BOT model, or it can be employed to determine the appropriateness of the BOT model.
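
    A generic levelized unit electricity cost relation of the kind used in such BOT analyses (a standard formulation, not the authors' specific model) is

$$ c_{\mathrm{unit}} = \frac{\sum_{t=1}^{T} \left(I_{t} + O_{t} + F_{t}\right)(1+r)^{-t}}{\sum_{t=1}^{T} f\,P\,8760\,(1+r)^{-t}}, $$

    where I_t, O_t and F_t are the investment, operation-and-maintenance and fuel expenditures in year t, r the discount rate, T the transfer (operation) period, P the installed capacity and f the load factor, so that f·P·8760 is the annual electricity generation; the dependence of c_unit on T and f mirrors the two sensitivities examined in the study.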

  18. RETRAN nonequilibrium two-phase flow model for operational transient analyses

    International Nuclear Information System (INIS)

    Paulsen, M.P.; Hughes, E.D.

    1982-01-01

    The field balance equations, flow-field models, and equation of state for a nonequilibrium two-phase flow model for RETRAN are given. The differential field balance model equations are: (1) conservation of mixture mass; (2) conservation of vapor mass; (3) balance of mixture momentum; (4) a dynamic-slip model for the velocity difference; and (5) conservation of mixture energy. The equation of state is formulated such that the liquid phase may be subcooled, saturated, or superheated. The vapor phase is constrained to be at the saturation state. The dynamic-slip model includes wall-to-phase and interphase momentum exchanges. A mechanistic vapor generation model is used to describe vapor production under bulk subcooling conditions. The speed of sound for the mixture under nonequilibrium conditions is obtained from the equation of state formulation. The steady-state and transient solution methods are described
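
    Representative one-dimensional forms of the first two field balance equations listed above (mixture mass, and vapor mass with a vapor-generation source) are, in standard two-phase notation rather than RETRAN's own,

$$ \frac{\partial \rho_{m}}{\partial t} + \frac{1}{A}\frac{\partial}{\partial z}\left(\rho_{m} u_{m} A\right) = 0, \qquad \frac{\partial}{\partial t}\left(\alpha \rho_{g}\right) + \frac{1}{A}\frac{\partial}{\partial z}\left(\alpha \rho_{g} u_{g} A\right) = \Gamma_{g}, $$

    where ρ_m and u_m are the mixture density and velocity, α the void fraction, ρ_g and u_g the vapor density and velocity, A the flow area and Γ_g the vapor generation rate per unit volume (the quantity supplied by the mechanistic vapor generation model).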

  19. Complementary modelling approaches for analysing several effects of privatization on electricity investment

    Energy Technology Data Exchange (ETDEWEB)

    Bunn, D.W.; Vlahos, K. [London Business School (United Kingdom); Larsen, E.R. [Bologna Univ. (Italy)

    1997-11-01

    This chapter examines two modelling approaches, optimisation and system dynamics, for describing the effects of the privatisation of the UK electricity supply industry. Modelling the transfer-of-ownership effects is discussed and the implications of the rate of return, tax and debt are considered. The modelling of the competitive effects is addressed, and the effects of market structure, risk and uncertainty, and strategic competition are explored in detail. (UK)

  20. Uncertainty analyses of the calibrated parameter values of a water quality model

    Science.gov (United States)

    Rode, M.; Suhr, U.; Lindenschmidt, K.-E.

    2003-04-01

    For river basin management, water quality models are increasingly used for the analysis and evaluation of different management measures. However, substantial uncertainties exist in parameter values depending on the available calibration data. In this paper an uncertainty analysis for a water quality model is presented, which considers the impact of the available model calibration data and the variance of the input variables. The investigation was conducted based on four extensive flow-time-related longitudinal surveys in the River Elbe in the years 1996 to 1999 with varying discharges and seasonal conditions. For the model calculations the deterministic model QSIM of the BfG (Germany) was used. QSIM is a one-dimensional water quality model and uses standard algorithms for hydrodynamics and phytoplankton dynamics in running waters, e.g. Michaelis-Menten/Monod kinetics, which are used in a wide range of models. The multi-objective calibration of the model was carried out with the nonlinear parameter estimator PEST. The results show that for individual flow-time-related measuring surveys very good agreement between model calculation and measured values can be obtained. If these parameters are applied to deviating boundary conditions, substantial errors in the model calculation can occur. These uncertainties can be decreased with an increased calibration database. More reliable model parameters can be identified, which supply reasonable results for broader boundary conditions. The extension of the application of the parameter set to a wider range of water quality conditions leads to a slight reduction of the model precision for the specific water quality situation. Moreover, the investigations show that highly variable water quality variables like the algal biomass always allow a lower forecast accuracy than variables with lower coefficients of variation, such as nitrate.
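
    The Michaelis-Menten/Monod kinetics mentioned above take the standard form

$$ \mu = \mu_{\max}\,\frac{S}{K_{S} + S}, $$

    where μ is the specific (phytoplankton) growth rate, μ_max its maximum value, S the concentration of the limiting substrate and K_S the half-saturation constant; this textbook relation is quoted for context only.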

  1. Comparative study analysing women's childbirth satisfaction and obstetric outcomes across two different models of maternity care

    Science.gov (United States)

    Conesa Ferrer, Ma Belén; Canteras Jordana, Manuel; Ballesteros Meseguer, Carmen; Carrillo García, César; Martínez Roche, M Emilia

    2016-01-01

    Objectives To describe the differences in obstetrical results and women's childbirth satisfaction across 2 different models of maternity care (biomedical model and humanised birth). Setting 2 university hospitals in south-eastern Spain from April to October 2013. Design A correlational descriptive study. Participants A convenience sample of 406 women participated in the study, 204 of the biomedical model and 202 of the humanised model. Results The differences in obstetrical results were (biomedical model/humanised model): onset of labour (spontaneous 66/137, augmentation 70/1, p=0.0005), pain relief (epidural 172/132, no pain relief 9/40, p=0.0005), mode of delivery (normal vaginal 140/165, instrumental 48/23, p=0.004), length of labour (0–4 hours 69/93, >4 hours 133/108, p=0.011), condition of perineum (intact perineum or tear 94/178, episiotomy 100/24, p=0.0005). The total questionnaire score (100) gave a mean (M) of 78.33 and SD of 8.46 in the biomedical model of care and an M of 82.01 and SD of 7.97 in the humanised model of care (p=0.0005). In the analysis of the results per items, statistical differences were found in 8 of the 9 subscales. The highest scores were reached in the humanised model of maternity care. Conclusions The humanised model of maternity care offers better obstetrical outcomes and women's satisfaction scores during the labour, birth and immediate postnatal period than does the biomedical model. PMID:27566632

  2. Solving scheduling problems by untimed model checking. The clinical chemical analyser case study

    NARCIS (Netherlands)

    Margaria, T.; Wijs, Anton J.; Massink, M.; van de Pol, Jan Cornelis; Bortnik, Elena M.

    2009-01-01

    In this article, we show how scheduling problems can be modelled in untimed process algebra, by using special tick actions. A minimal-cost trace leading to a particular action, is one that minimises the number of tick steps. As a result, we can use any (timed or untimed) model checking tool to find

  3. A dynamic bivariate Poisson model for analysing and forecasting match results in the English Premier League

    NARCIS (Netherlands)

    Koopman, S.J.; Lit, R.

    2015-01-01

    Summary: We develop a statistical model for the analysis and forecasting of football match results which assumes a bivariate Poisson distribution with intensity coefficients that change stochastically over time. The dynamic model is a novelty in the statistical time series analysis of match results
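
    The bivariate Poisson distribution underlying such match-result models has, in its usual trivariate-reduction form (notation mine, with the paper's time-varying intensities suppressed), the probability mass function

$$ P(X=x, Y=y) = e^{-(\lambda_{1}+\lambda_{2}+\lambda_{3})}\,\frac{\lambda_{1}^{x}}{x!}\,\frac{\lambda_{2}^{y}}{y!} \sum_{k=0}^{\min(x,y)} \binom{x}{k}\binom{y}{k}\, k!\left(\frac{\lambda_{3}}{\lambda_{1}\lambda_{2}}\right)^{k}, $$

    where X and Y are the home and away goal counts, λ1 and λ2 the teams' scoring intensities and λ3 the common (covariance) component.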

  4. Integrated freight network model : a GIS-based platform for transportation analyses.

    Science.gov (United States)

    2015-01-01

    The models currently used to examine the behavior of transportation systems are usually mode-specific. That is, they focus on a single mode (i.e. railways, highways, or waterways). The lack of integration limits the usefulness of models to analyze the...

  5. A laboratory-calibrated model of coho salmon growth with utility for ecological analyses

    Science.gov (United States)

    Manhard, Christopher V.; Som, Nicholas A.; Perry, Russell W.; Plumb, John M.

    2018-01-01

    We conducted a meta-analysis of laboratory- and hatchery-based growth data to estimate broadly applicable parameters of mass- and temperature-dependent growth of juvenile coho salmon (Oncorhynchus kisutch). Following studies of other salmonid species, we incorporated the Ratkowsky growth model into an allometric model and fit this model to growth observations from eight studies spanning ten different populations. To account for changes in growth patterns with food availability, we reparameterized the Ratkowsky model to scale several of its parameters relative to ration. The resulting model was robust across a wide range of ration allocations and experimental conditions, accounting for 99% of the variation in final body mass. We fit this model to growth data from coho salmon inhabiting tributaries and constructed ponds in the Klamath Basin by estimating habitat-specific indices of food availability. The model produced evidence that constructed ponds provided higher food availability than natural tributaries. Because of their simplicity (only mass and temperature are required as inputs) and robustness, ration-varying Ratkowsky models have utility as an ecological tool for capturing growth in freshwater fish populations.
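
    A sketch of a Ratkowsky-type temperature response embedded in an allometric, ration-scaled growth rate, in the spirit of the model described above; all parameter values are placeholders for illustration and are not the fitted meta-analysis estimates.

```python
import numpy as np

def ratkowsky_allometric_growth(mass_g, temp_c, *,
                                d=0.3, b=0.02, c=0.15,
                                t_min=1.0, t_max=24.0, ration=1.0):
    """Daily growth (g/day) = allometric mass term x Ratkowsky temperature term.

    The temperature response follows the Ratkowsky form
        sqrt(r(T)) = b * (T - T_min) * (1 - exp(c * (T - T_max)))
    and the mass dependence is a power function; here `ration` simply rescales the
    whole response, whereas the paper scales several parameters by ration.
    All parameter values are placeholders.
    """
    sqrt_r = b * (temp_c - t_min) * (1.0 - np.exp(c * (temp_c - t_max)))
    sqrt_r = np.clip(sqrt_r, 0.0, None)            # no growth outside the thermal window
    return ration * (mass_g ** d) * sqrt_r ** 2

# Growth of a 5-g juvenile across a range of temperatures.
for T in (5, 10, 15, 20):
    print(T, round(float(ratkowsky_allometric_growth(5.0, T)), 3))
```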

  6. Transport of nutrients from land to sea: Global modeling approaches and uncertainty analyses

    NARCIS (Netherlands)

    Beusen, A.H.W.

    2014-01-01

    This thesis presents four examples of global models developed as part of the Integrated Model to Assess the Global Environment (IMAGE). They describe different components of global biogeochemical cycles of the nutrients nitrogen (N), phosphorus (P) and silicon (Si), with a focus on approaches to

  7. Comparative Analyses of MIRT Models and Software (BMIRT and flexMIRT)

    Science.gov (United States)

    Yavuz, Guler; Hambleton, Ronald K.

    2017-01-01

    Application of MIRT modeling procedures is dependent on the quality of parameter estimates provided by the estimation software and techniques used. This study investigated model parameter recovery of two popular MIRT packages, BMIRT and flexMIRT, under some common measurement conditions. These packages were specifically selected to investigate the…

  8. Development and preliminary analyses of material balance evaluation model in nuclear fuel cycle

    International Nuclear Information System (INIS)

    Matsumura, Tetsuo

    1994-01-01

    A material balance evaluation model for the nuclear fuel cycle has been developed using the ORIGEN-2 code as its basic engine. The model can treat more than 1000 nuclides, including minor actinides and fission products, and offers flexible modeling and graph output on an engineering workstation. A preliminary calculation of the effect of LWR fuel high burnup (reloading fuel average burnup of 60 GWd/t) on the nuclear fuel cycle was made. The preliminary calculation shows that LWR fuel high burnup has a large effect on the Japanese Pu balance problem. (author)

  9. Analysing the distribution of synaptic vesicles using a spatial point process model

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh; Waagepetersen, Rasmus; Nava, Nicoletta

    2014-01-01

    functionality by statistically modelling the distribution of the synaptic vesicles in two groups of rats: a control group subjected to sham stress and a stressed group subjected to a single acute foot-shock (FS)-stress episode. We hypothesize that the synaptic vesicles have different spatial distributions in the two groups. The spatial distributions are modelled using spatial point process models with an inhomogeneous conditional intensity and repulsive pairwise interactions. Our results verify the hypothesis that the two groups have different spatial distributions.

  10. Neural Network-Based Model for Landslide Susceptibility and Soil Longitudinal Profile Analyses

    DEFF Research Database (Denmark)

    Farrokhzad, F.; Barari, Amin; Choobbasti, A. J.

    2011-01-01

    The purpose of this study was to create an empirical model for assessing the landslide risk potential at Savadkouh Azad University, which is located in the rural surroundings of Savadkouh, about 5 km from the city of Pol-Sefid in northern Iran. The soil longitudinal profile of the city of Babol, located 25 km from the Caspian Sea, also was predicted with an artificial neural network (ANN). A multilayer perceptron neural network model was applied to the landslide area and was used to analyze specific elements in the study area that contributed to previous landsliding events. The ANN models were ... studies in landslide susceptibility zonation.

  11. Statistical Modelling of Synaptic Vesicles Distribution and Analysing their Physical Characteristics

    DEFF Research Database (Denmark)

    Khanmohammadi, Mahdieh

    This Ph.D. thesis deals with mathematical and statistical modeling of synaptic vesicle distribution, shape, orientation and interactions. The first major part of this thesis treats the problem of determining the effect of stress on synaptic vesicle distribution and interactions. Serial section transmission electron microscopy is used to acquire images from two experimental groups of rats: 1) rats subjected to a behavioral model of stress and 2) rats subjected to sham stress as the control group. The synaptic vesicle distribution and interactions are modeled by employing a point process approach ... on differences of statistical measures in section and the same measures in between sections. Three-dimensional (3D) datasets are reconstructed by using image registration techniques and estimated thicknesses. We distinguish the effect of stress by estimating the synaptic vesicle densities and modeling ...

  12. Modeling of Control Costs, Emissions, and Control Retrofits for Cost Effectiveness and Feasibility Analyses

    Science.gov (United States)

    Learn about EPA’s use of the Integrated Planning Model (IPM) to develop estimates of SO2 and NOx emission control costs, projections of future emissions, and projections of the capacity of future control retrofits, assuming controls on EGUs.

  13. Assessment applicability of selected models of multiple discriminant analyses to forecast financial situation of Polish wood sector enterprises

    Directory of Open Access Journals (Sweden)

    Adamowicz Krzysztof

    2017-03-01

    In the last three decades, forecasting bankruptcy of enterprises has been an important and difficult problem, used as an impulse for many research projects (Ribeiro et al. 2012). At present many methods of bankruptcy prediction are available. In view of the specific character of economic activity in individual sectors, specialised methods adapted to a given branch of industry are being used increasingly often. For this reason an important scientific problem is the indication of an appropriate model or group of models to prepare forecasts for a given branch of industry. Thus research has been conducted to select an appropriate model of Multiple Discriminant Analysis (MDA), best adapted to forecasting changes in the wood industry. This study analyses 10 prediction models popular in Poland. The effectiveness of the model proposed by Jagiełło, developed for all industrial enterprises, may be labelled accidental. That model is not adapted to predicting financial changes in wood sector companies in Poland.
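
    For readers unfamiliar with how such MDA models are applied, the sketch below evaluates a generic Altman-style linear discriminant score with invented coefficients and financial ratios; it is not one of the ten Polish models assessed in the paper.

```python
import numpy as np

def mda_score(ratios, weights, intercept=0.0):
    """Linear discriminant score Z = intercept + sum_i w_i * x_i."""
    return intercept + float(np.dot(weights, ratios))

def classify(z, cutoff=0.0):
    """Firms scoring below the cut-off are flagged as at risk of bankruptcy."""
    return "at risk" if z < cutoff else "sound"

# Hypothetical financial ratios (e.g. working capital/assets, ROA, debt ratio, asset turnover)
ratios = np.array([0.12, 0.05, 0.65, 1.10])
weights = np.array([1.5, 3.3, -1.0, 0.6])          # illustrative coefficients only
z = mda_score(ratios, weights, intercept=-0.3)
print(round(z, 3), classify(z))
```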

  14. Numerical tools for musical instruments acoustics: analysing nonlinear physical models using continuation of periodic solutions

    OpenAIRE

    Karkar , Sami; Vergez , Christophe; Cochelin , Bruno

    2012-01-01

    We propose a new approach based on numerical continuation and bifurcation analysis for the study of physical models of instruments that produce self-sustained oscillation. Numerical continuation consists in following how a given solution of a set of equations is modified when one (or several) parameter of these equations are allowed to vary. Several physical models (clarinet, saxophone, and violin) are formulated as nonlinear dynamical systems, whose periodic solution...

  15. Analysing stratified medicine business models and value systems: innovation-regulation interactions.

    Science.gov (United States)

    Mittra, James; Tait, Joyce

    2012-09-15

    Stratified medicine offers both opportunities and challenges to the conventional business models that drive pharmaceutical R&D. Given the increasingly unsustainable blockbuster model of drug development, due in part to maturing product pipelines, alongside increasing demands from regulators, healthcare providers and patients for higher standards of safety, efficacy and cost-effectiveness of new therapies, stratified medicine promises a range of benefits to pharmaceutical and diagnostic firms as well as healthcare providers and patients. However, the transition from 'blockbusters' to what might now be termed 'niche-busters' will require the adoption of new, innovative business models, the identification of different and perhaps novel types of value along the R&D pathway, and a smarter approach to regulation to facilitate innovation in this area. In this paper we apply the Innogen Centre's interdisciplinary ALSIS methodology, which we have developed for the analysis of life science innovation systems in contexts where the value creation process is lengthy, expensive and highly uncertain, to this emerging field of stratified medicine. In doing so, we consider the complex collaboration, timing, coordination and regulatory interactions that shape business models, value chains and value systems relevant to stratified medicine. More specifically, we explore in some depth two convergence models for co-development of a therapy and diagnostic before market authorisation, highlighting the regulatory requirements and policy initiatives within the broader value system environment that have a key role in determining the probable success and sustainability of these models. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Analysing the Costs of Integrated Care: A Case on Model Selection for Chronic Care Purposes

    Directory of Open Access Journals (Sweden)

    Marc Carreras

    2016-08-01

    Full Text Available Background: The objective of this study is to investigate whether the algorithm proposed by Manning and Mullahy, a consolidated health economics procedure, can also be used to estimate individual costs for different groups of healthcare services in the context of integrated care. Methods: A cross-sectional study focused on the population of the Baix Empordà (Catalonia-Spain) for the year 2012 (N = 92,498 individuals). A set of individual cost models as a function of sex, age and morbidity burden were adjusted and individual healthcare costs were calculated using a retrospective full-costing system. The individual morbidity burden was inferred using the Clinical Risk Groups (CRG) patient classification system. Results: Depending on the characteristics of the data, and according to the algorithm criteria, the choice of model was a linear model on the log of costs or a generalized linear model with a log link. We checked for goodness of fit, accuracy, linear structure and heteroscedasticity for the models obtained. Conclusion: The proposed algorithm identified a set of suitable cost models for the distinct groups of services integrated care entails. The individual morbidity burden was found to be indispensable when allocating appropriate resources to targeted individuals.
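
    As an illustration of the two candidate specifications named in the abstract (a linear model on the log of costs versus a generalized linear model with a log link), the following minimal Python sketch fits both to synthetic cost data and runs a simple Park-type check on how the variance scales with the mean. It assumes numpy, pandas and statsmodels are available; the variable names, the synthetic "morbidity" score and the data are placeholders, not the CRG-based dataset or the authors' actual algorithm.

    ```python
    # Sketch: comparing the two cost-model families mentioned above on synthetic data.
    # Variable names and data are illustrative only.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 5000
    df = pd.DataFrame({
        "age": rng.integers(18, 90, n),
        "female": rng.integers(0, 2, n),
        "morbidity": rng.integers(1, 6, n),   # stand-in for a CRG-like burden score
    })
    # Skewed, strictly positive costs, as is typical for healthcare expenditure data
    mu = np.exp(3.0 + 0.02 * df["age"] + 0.4 * df["morbidity"])
    df["cost"] = rng.gamma(shape=2.0, scale=mu / 2.0)

    # Candidate 1: linear model on log(cost)
    ols_log = smf.ols("np.log(cost) ~ age + female + morbidity", data=df).fit()

    # Candidate 2: generalized linear model with a log link (Gamma family)
    glm_log = smf.glm("cost ~ age + female + morbidity", data=df,
                      family=sm.families.Gamma(link=sm.families.links.Log())).fit()

    # Park-type check: regress log squared residuals on the log of the fitted values
    # to gauge how the variance scales with the mean (informs the family choice).
    park_df = pd.DataFrame({"resid2": (df["cost"] - glm_log.fittedvalues) ** 2,
                            "xb": np.log(glm_log.fittedvalues)})
    park = smf.ols("np.log(resid2) ~ xb", data=park_df).fit()
    print(ols_log.params, glm_log.params, park.params["xb"], sep="\n")
    ```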

  17. Scenario sensitivity analyses performed on the PRESTO-EPA LLW risk assessment models

    International Nuclear Information System (INIS)

    Bandrowski, M.S.

    1988-01-01

    The US Environmental Protection Agency (EPA) is currently developing standards for the land disposal of low-level radioactive waste. As part of the standard development, EPA has performed risk assessments using the PRESTO-EPA codes. A program of sensitivity analysis was conducted on the PRESTO-EPA codes, consisting of single parameter sensitivity analysis and scenario sensitivity analysis. The results of the single parameter sensitivity analysis were discussed at the 1987 DOE LLW Management Conference. Specific scenario sensitivity analyses have been completed and evaluated. Scenario assumptions that were analyzed include: site location, disposal method, form of waste, waste volume, analysis time horizon, critical radionuclides, use of buffer zones, and global health effects

  18. Antiapoptotic and neuroprotective role of Curcumin in Pentylenetetrazole (PTZ) induced kindling model in rat.

    Science.gov (United States)

    Saha, Lekha; Chakrabarti, Amitava; Kumari, Sweta; Bhatia, Alka; Banerjee, Dibyojyoti

    2016-02-01

    Kindling, a sub-threshold chemical or electrical stimulation, increases seizure duration and enhances the accompanying behavior until it reaches a sort of equilibrium state. The present study aimed to explore the effect of curcumin on the development of kindling in PTZ kindled rats and its role in apoptosis and neuronal damage. In a PTZ kindled Wistar rat model, different doses of curcumin (100, 200 and 300 mg/kg) were administered orally one hour before the PTZ injections on alternate days throughout the kindling period. The following parameters were compared between control and experimental groups: the course of kindling, stages of seizures, histopathological scoring of the hippocampus, antioxidant parameters in the hippocampus, DNA fragmentation and caspase-3 expression in the hippocampus, and neuron-specific enolase in the blood. One-way ANOVA followed by Bonferroni post hoc analysis and Fisher's exact test were used for statistical analyses. PTZ, 30 mg/kg, induced kindling in rats after 32.0 ± 1.4 days. Curcumin showed a dose-dependent anti-seizure effect. Curcumin (300 mg/kg) significantly increased the latency to myoclonic jerks, clonic seizures and generalized tonic-clonic seizures, improved the seizure score and decreased the number of myoclonic jerks. PTZ kindling induced significant neuronal injury, oxidative stress and apoptosis, which were reversed by pretreatment with curcumin in a dose-dependent manner. Our study suggests that curcumin has a potential antiepileptogenic effect on kindling-induced epileptogenesis.
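
    The statistical comparison described above (one-way ANOVA followed by Bonferroni-corrected post hoc comparisons) can be sketched as follows; the group labels and latency values are invented placeholders, not the study's data, and scipy is assumed to be available.

    ```python
    # Sketch: one-way ANOVA with Bonferroni-corrected pairwise comparisons,
    # mirroring the analysis described above. Data are synthetic placeholders.
    from itertools import combinations
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    groups = {
        "PTZ_control":    rng.normal(60, 10, 12),   # latency to myoclonic jerks (s), hypothetical
        "curcumin_100mg": rng.normal(70, 10, 12),
        "curcumin_200mg": rng.normal(85, 10, 12),
        "curcumin_300mg": rng.normal(100, 10, 12),
    }

    f_stat, p_overall = stats.f_oneway(*groups.values())
    print(f"one-way ANOVA: F = {f_stat:.2f}, p = {p_overall:.4f}")

    # Bonferroni post hoc: multiply each pairwise p-value by the number of comparisons
    pairs = list(combinations(groups, 2))
    for a, b in pairs:
        t, p = stats.ttest_ind(groups[a], groups[b])
        print(f"{a} vs {b}: adjusted p = {min(p * len(pairs), 1.0):.4f}")
    ```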

  19. Comprehensive analyses of ventricular myocyte models identify targets exhibiting favorable rate dependence.

    Directory of Open Access Journals (Sweden)

    Megan A Cummins

    2014-03-01

    Full Text Available Reverse rate dependence is a problematic property of antiarrhythmic drugs that prolong the cardiac action potential (AP). The prolongation caused by reverse rate dependent agents is greater at slow heart rates, resulting in both reduced arrhythmia suppression at fast rates and increased arrhythmia risk at slow rates. The opposite property, forward rate dependence, would theoretically overcome these parallel problems, yet forward rate dependent (FRD) antiarrhythmics remain elusive. Moreover, there is evidence that reverse rate dependence is an intrinsic property of perturbations to the AP. We have addressed the possibility of forward rate dependence by performing a comprehensive analysis of 13 ventricular myocyte models. By simulating populations of myocytes with varying properties and analyzing population results statistically, we simultaneously predicted the rate-dependent effects of changes in multiple model parameters. An average of 40 parameters were tested in each model, and effects on AP duration were assessed at slow (0.2 Hz) and fast (2 Hz) rates. The analysis identified a variety of FRD ionic current perturbations and generated specific predictions regarding their mechanisms. For instance, an increase in L-type calcium current is FRD when this is accompanied by indirect, rate-dependent changes in slow delayed rectifier potassium current. A comparison of predictions across models identified inward rectifier potassium current and the sodium-potassium pump as the two targets most likely to produce FRD AP prolongation. Finally, a statistical analysis of results from the 13 models demonstrated that models displaying minimal rate-dependent changes in AP shape have little capacity for FRD perturbations, whereas models with large shape changes have considerable FRD potential. This can explain differences between species and between ventricular cell types. Overall, this study provides new insights, both specific and general, into the determinants of
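
    A minimal sketch of the rate-dependence classification implied above: a perturbation is forward rate dependent if it prolongs the action potential more at the fast rate (2 Hz) than at the slow rate (0.2 Hz). The APD values below are invented placeholders, not output from any of the 13 models.

    ```python
    # Sketch: classifying a perturbation as forward or reverse rate dependent from
    # its effect on action potential duration (APD) at slow and fast pacing rates.
    def rate_dependence(apd_slow_ctrl, apd_slow_pert, apd_fast_ctrl, apd_fast_pert):
        """Forward rate dependence: larger APD prolongation at the fast rate."""
        d_slow = apd_slow_pert - apd_slow_ctrl   # prolongation at 0.2 Hz (ms)
        d_fast = apd_fast_pert - apd_fast_ctrl   # prolongation at 2 Hz (ms)
        label = "forward" if d_fast > d_slow else "reverse"
        return label, d_slow, d_fast

    label, d_slow, d_fast = rate_dependence(
        apd_slow_ctrl=310.0, apd_slow_pert=325.0,   # +15 ms at 0.2 Hz (hypothetical)
        apd_fast_ctrl=230.0, apd_fast_pert=255.0,   # +25 ms at 2 Hz (hypothetical)
    )
    print(f"{label} rate dependent (dAPD slow = {d_slow} ms, dAPD fast = {d_fast} ms)")
    ```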

  20. Thermo-mechanical analyses and model validation in the HAW test field. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Heijdra, J J; Broerse, J; Prij, J

    1995-01-01

    An overview is given of the thermo-mechanical analysis work done for the design of the High Active Waste experiment and for the purpose of validation of the used models through comparison with experiments. A brief treatise is given on the problems of validation of models used for the prediction of physical behaviour which cannot be determined with experiments. The analysis work encompasses investigations into the initial state of stress in the field, the constitutive relations, the temperature rise, and the pressure on the liner tubes inserted in the field to guarantee the retrievability of the radioactive sources used for the experiment. The measurements of temperatures, deformations, and stresses are described and an evaluation is given of the comparison of measured and calculated data. An attempt has been made to qualify or even quantify the discrepancies, if any, between measurements and calculations. It was found that the model for the temperature calculations performed adequately. For the stresses the general tendency was good, however, large discrepancies exist mainly due to inaccuracies in the measurements. For the deformations again the general tendency of the model predictions was in accordance with the measurements. However, from the evaluation it appears that in spite of the efforts to estimate the correct initial rock pressure at the location of the experiment, this pressure has been underestimated. The evaluation has contributed to a considerable increase in confidence in the models and gives no reason to question the constitutive model for rock salt. However, due to the quality of the measurements of the stress and the relatively short period of the experiments no quantitatively firm support for the constitutive model is acquired. Collections of graphs giving the measured and calculated data are attached as appendices. (orig.).

  2. Analysing bifurcations encountered in numerical modelling of current transfer to cathodes of dc glow and arc discharges

    International Nuclear Information System (INIS)

    Almeida, P G C; Benilov, M S; Cunha, M D; Faria, M J

    2009-01-01

    Bifurcations and/or their consequences are frequently encountered in numerical modelling of current transfer to cathodes of gas discharges, even in apparently simple situations, and a failure to recognize and properly analyse a bifurcation may create difficulties in the modelling and hinder the understanding of numerical results and the underlying physics. This work is concerned with analysis of bifurcations that have been encountered in the modelling of steady-state current transfer to cathodes of glow and arc discharges. All basic types of steady-state bifurcations (fold, transcritical, pitchfork) have been identified and analysed. The analysis provides explanations for many results obtained in numerical modelling. In particular, it is shown that dramatic changes in patterns of current transfer to cathodes of both glow and arc discharges, described by numerical modelling, occur through perturbed transcritical bifurcations of first- and second-order contact. The analysis elucidates the reason why the mode of glow discharge associated with the falling section of the current-voltage characteristic in the solution of von Engel and Steenbeck seems not to appear in 2D numerical modelling and the subnormal and normal modes appear instead. A similar effect has been identified in numerical modelling of arc cathodes and explained.

  3. Modeling of in-vessel fission product release including fuel morphology effects for severe accident analyses

    International Nuclear Information System (INIS)

    Suh, K.Y.

    1989-10-01

    A new in-vessel fission product release model has been developed and implemented to perform best-estimate calculations of realistic source terms including fuel morphology effects. The proposed bulk mass transfer correlation determines the product of fission product release and equiaxed grain size as a function of the inverse fuel temperature. The model accounts for the fuel-cladding interaction over the temperature range between 770 K and 3000 K in the steam environment. A separate driver has been developed for the in-vessel thermal hydraulic and fission product behavior models that were developed by the Department of Energy for the Modular Accident Analysis Package (MAAP). Calculational results of these models have been compared to the results of the Power Burst Facility Severe Fuel Damage tests. The code predictions utilizing the mass transfer correlation agreed with the experimentally determined fractional release rates during the course of the heatup, power hold, and cooldown phases of the high temperature transients. Compared to such conventional literature correlations as the steam oxidation model and the NUREG-0956 correlation, the mass transfer correlation resulted in lower and less rapid releases in closer agreement with the on-line and grab sample data from the Severe Fuel Damage tests. The proposed mass transfer correlation can be applied for best-estimate calculations of fission product release from the UO2 fuel in both nominal and severe accident conditions. 15 refs., 10 figs., 2 tabs

  4. Modelling and simulation of complex sociotechnical systems: envisioning and analysing work environments

    Science.gov (United States)

    Hettinger, Lawrence J.; Kirlik, Alex; Goh, Yang Miang; Buckle, Peter

    2015-01-01

    Accurate comprehension and analysis of complex sociotechnical systems is a daunting task. Empirically examining, or simply envisioning the structure and behaviour of such systems challenges traditional analytic and experimental approaches as well as our everyday cognitive capabilities. Computer-based models and simulations afford potentially useful means of accomplishing sociotechnical system design and analysis objectives. From a design perspective, they can provide a basis for a common mental model among stakeholders, thereby facilitating accurate comprehension of factors impacting system performance and potential effects of system modifications. From a research perspective, models and simulations afford the means to study aspects of sociotechnical system design and operation, including the potential impact of modifications to structural and dynamic system properties, in ways not feasible with traditional experimental approaches. This paper describes issues involved in the design and use of such models and simulations and describes a proposed path forward to their development and implementation. Practitioner Summary: The size and complexity of real-world sociotechnical systems can present significant barriers to their design, comprehension and empirical analysis. This article describes the potential advantages of computer-based models and simulations for understanding factors that impact sociotechnical system design and operation, particularly with respect to process and occupational safety. PMID:25761227

  5. GSEVM v.2: MCMC software to analyse genetically structured environmental variance models

    DEFF Research Database (Denmark)

    Ibáñez-Escriche, N; Garcia, M; Sorensen, D

    2010-01-01

    This note provides a description of software that allows to fit Bayesian genetically structured variance models using Markov chain Monte Carlo (MCMC). The gsevm v.2 program was written in Fortran 90. The DOS and Unix executable programs, the user's guide, and some example files are freely available...... for research purposes at http://www.bdporc.irta.es/estudis.jsp. The main feature of the program is to compute Monte Carlo estimates of marginal posterior distributions of parameters of interest. The program is quite flexible, allowing the user to fit a variety of linear models at the level of the mean...

  6. Analysing Amazonian forest productivity using a new individual and trait-based model (TFS v.1)

    Science.gov (United States)

    Fyllas, N. M.; Gloor, E.; Mercado, L. M.; Sitch, S.; Quesada, C. A.; Domingues, T. F.; Galbraith, D. R.; Torre-Lezama, A.; Vilanova, E.; Ramírez-Angulo, H.; Higuchi, N.; Neill, D. A.; Silveira, M.; Ferreira, L.; Aymard C., G. A.; Malhi, Y.; Phillips, O. L.; Lloyd, J.

    2014-07-01

    Repeated long-term censuses have revealed large-scale spatial patterns in Amazon basin forest structure and dynamism, with some forests in the west of the basin having rates of aboveground biomass production and tree recruitment up to twice as high as forests in the east. Possible causes for this variation could be the climatic and edaphic gradients across the basin and/or the spatial distribution of tree species composition. To help understand the causes of this variation, a new individual-based model of tropical forest growth, designed to take full advantage of the forest census data available from the Amazonian Forest Inventory Network (RAINFOR), has been developed. The model allows for within-stand variations in tree size distribution and key functional traits and between-stand differences in climate and soil physical and chemical properties. It runs at the stand level with four functional traits - leaf dry mass per area (Ma), leaf nitrogen (NL) and phosphorus (PL) content and wood density (DW) varying from tree to tree - in a way that replicates the observed continua found within each stand. We first applied the model to validate canopy-level water fluxes at three eddy covariance flux measurement sites. For all three sites the canopy-level water fluxes were adequately simulated. We then applied the model at seven plots, where intensive measurements of carbon allocation are available. Tree-by-tree multi-annual growth rates generally agreed well with observations for small trees, but with deviations identified for larger trees. At the stand level, simulations at 40 plots were used to explore the influence of climate and soil nutrient availability on the gross (ΠG) and net (ΠN) primary production rates as well as the carbon use efficiency (CU). Simulated ΠG, ΠN and CU were not associated with temperature. On the other hand, all three measures of stand level productivity were positively related to both mean annual precipitation and soil nutrient status

  7. Influence of the Human Skin Tumor Type in Photodynamic Therapy Analysed by a Predictive Model

    Directory of Open Access Journals (Sweden)

    I. Salas-García

    2012-01-01

    Full Text Available Photodynamic Therapy (PDT) modeling allows the prediction of the treatment results depending on the lesion properties, the photosensitizer distribution, or the optical source characteristics. We employ a predictive PDT model and apply it to different skin tumors. It takes into account the optical radiation distribution, a nonhomogeneous topical photosensitizer spatio-temporal distribution, and the time-dependent photochemical interaction. The predicted singlet oxygen molecular concentrations with varying optical irradiance are compared and could be directly related to the necrosis area. The results show a strong dependence on the particular lesion. This suggests the need to design optimal PDT treatment protocols adapted to the specific patient and lesion.

  8. Analyses of Research Topics in the Field of Informetrics Based on the Method of Topic Modeling

    OpenAIRE

    Sung-Chien Lin

    2014-01-01

    In this study, we used the approach of topic modeling to uncover the possible structure of research topics in the field of Informetrics, to explore the distribution of the topics over years, and to compare the core journals. In order to infer the structure of the topics in the field, the data of the papers published in the Journal of Informetrics and Scientometrics during 2007 to 2013 are retrieved from the database of the Web of Science as input of the approach of topic modeling. The results ...
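
    A toy sketch of the topic-modeling step described above, using Latent Dirichlet Allocation from scikit-learn; the five-document corpus and the choice of two topics are placeholders, not the Journal of Informetrics/Scientometrics dataset or the authors' exact pipeline.

    ```python
    # Sketch: Latent Dirichlet Allocation over a toy corpus, illustrating the kind
    # of topic-modeling workflow described above.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs = [
        "citation analysis of journal impact factors",
        "h-index and author-level bibliometric indicators",
        "co-authorship networks and research collaboration",
        "altmetrics and social media mentions of papers",
        "citation networks and co-citation clustering",
    ]

    vec = CountVectorizer(stop_words="english")
    X = vec.fit_transform(docs)

    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(X)

    terms = vec.get_feature_names_out()
    for k, weights in enumerate(lda.components_):
        top = [terms[i] for i in weights.argsort()[::-1][:5]]
        print(f"topic {k}: {', '.join(top)}")
    ```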

  9. Studies of the Earth Energy Budget and Water Cycle Using Satellite Observations and Model Analyses

    Science.gov (United States)

    Campbell, G. G.; VonderHarr, T. H.; Randel, D. L.; Kidder, S. Q.

    1997-01-01

    During this research period we have utilized the ERBE data set in comparisons to surface properties and water vapor observations in the atmosphere. A relationship between cloudiness and surface temperature anomalies was found. This same relationship was found in a general circulation model, verifying the model. The attempt to construct a homogeneous time series from Nimbus 6, Nimbus 7 and ERBE data is not complete because we are still waiting for the ERBE reanalysis to be completed. It will be difficult to merge the Nimbus 6 data in because its observations occurred when the average weather was different than the other periods, so regression adjustments are not effective.

  10. Modeling human papillomavirus and cervical cancer in the United States for analyses of screening and vaccination

    Directory of Open Access Journals (Sweden)

    Ortendahl Jesse

    2007-10-01

    Full Text Available Abstract Background To provide quantitative insight into current U.S. policy choices for cervical cancer prevention, we developed a model of human papillomavirus (HPV) and cervical cancer, explicitly incorporating uncertainty about the natural history of disease. Methods We developed a stochastic microsimulation of cervical cancer that distinguishes different HPV types by their incidence, clearance, persistence, and progression. Input parameter sets were sampled randomly from uniform distributions, and simulations undertaken with each set. Through systematic reviews and formal data synthesis, we established multiple epidemiologic targets for model calibration, including age-specific prevalence of HPV by type, age-specific prevalence of cervical intraepithelial neoplasia (CIN), HPV type distribution within CIN and cancer, and age-specific cancer incidence. For each set of sampled input parameters, likelihood-based goodness-of-fit (GOF) scores were computed based on comparisons between model-predicted outcomes and calibration targets. Using 50 randomly resampled, good-fitting parameter sets, we assessed the external consistency and face validity of the model, comparing predicted screening outcomes to independent data. To illustrate the advantage of this approach in reflecting parameter uncertainty, we used the 50 sets to project the distribution of health outcomes in U.S. women under different cervical cancer prevention strategies. Results Approximately 200 good-fitting parameter sets were identified from 1,000,000 simulated sets. Modeled screening outcomes were externally consistent with results from multiple independent data sources. Based on 50 good-fitting parameter sets, the expected reductions in lifetime risk of cancer with annual or biennial screening were 76% (range across 50 sets: 69–82%) and 69% (60–77%), respectively. The reduction from vaccination alone was 75%, although it ranged from 60% to 88%, reflecting considerable parameter
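
    The calibration idea described above (sampling parameter sets from uniform distributions and ranking them with a likelihood-based goodness-of-fit score against epidemiologic targets) can be sketched as follows. The "microsimulation" is replaced by a trivial stand-in, and the targets, parameter ranges and sample sizes are invented, so this shows only the shape of the procedure, not the authors' model.

    ```python
    # Sketch: sample natural-history parameters from uniform ranges, score each set
    # against calibration targets with a normal log-likelihood, keep the best sets.
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical calibration targets: age-group-specific HPV prevalence (mean, se)
    targets = {"18-24": (0.25, 0.02), "25-34": (0.15, 0.015), "35-49": (0.07, 0.01)}

    def simulate_prevalence(incidence, clearance):
        """Toy stand-in for the microsimulation: crude prevalence by age group."""
        base = incidence / (incidence + clearance)
        return {"18-24": base, "25-34": 0.6 * base, "35-49": 0.3 * base}

    def gof(sim):
        # Normal log-likelihood of model output given the calibration targets
        return sum(-0.5 * ((sim[g] - m) / s) ** 2 for g, (m, s) in targets.items())

    n_draws = 20_000
    draws = np.column_stack([rng.uniform(0.05, 0.50, n_draws),   # annual incidence
                             rng.uniform(0.30, 0.90, n_draws)])  # annual clearance
    scores = np.array([gof(simulate_prevalence(i, c)) for i, c in draws])

    good = draws[np.argsort(scores)[-50:]]        # retain the 50 best-fitting sets
    print("retained incidence range:", good[:, 0].min(), "-", good[:, 0].max())
    print("retained clearance range:", good[:, 1].min(), "-", good[:, 1].max())
    ```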

  11. Empirical analyses of a choice model that captures ordering among attribute values

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard

    2017-01-01

    an alternative additionally because it has the highest price. In this paper, we specify a discrete choice model that takes into account the ordering of attribute values across alternatives. This model is used to investigate the effect of attribute value ordering in three case studies related to alternative-fuel...... vehicles, mode choice, and route choice. In our application to choices among alternative-fuel vehicles, we see that especially the price coefficient is sensitive to changes in ordering. The ordering effect is also found in the applications to mode and route choice data where both travel time and cost...

  12. Accounting for Heterogeneity in Relative Treatment Effects for Use in Cost-Effectiveness Models and Value-of-Information Analyses.

    Science.gov (United States)

    Welton, Nicky J; Soares, Marta O; Palmer, Stephen; Ades, Anthony E; Harrison, David; Shankar-Hari, Manu; Rowan, Kathy M

    2015-07-01

    Cost-effectiveness analysis (CEA) models are routinely used to inform health care policy. Key model inputs include relative effectiveness of competing treatments, typically informed by meta-analysis. Heterogeneity is ubiquitous in meta-analysis, and random effects models are usually used when there is variability in effects across studies. In the absence of observed treatment effect modifiers, various summaries from the random effects distribution (random effects mean, predictive distribution, random effects distribution, or study-specific estimate [shrunken or independent of other studies]) can be used depending on the relationship between the setting for the decision (population characteristics, treatment definitions, and other contextual factors) and the included studies. If covariates have been measured that could potentially explain the heterogeneity, then these can be included in a meta-regression model. We describe how covariates can be included in a network meta-analysis model and how the output from such an analysis can be used in a CEA model. We outline a model selection procedure to help choose between competing models and stress the importance of clinical input. We illustrate the approach with a health technology assessment of intravenous immunoglobulin for the management of adult patients with severe sepsis in an intensive care setting, which exemplifies how risk of bias information can be incorporated into CEA models. We show that the results of the CEA and value-of-information analyses are sensitive to the model and highlight the importance of sensitivity analyses when conducting CEA in the presence of heterogeneity. The methods presented extend naturally to heterogeneity in other model inputs, such as baseline risk. © The Author(s) 2015.
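
    As a concrete reminder of two of the random-effects summaries discussed above (the random-effects mean versus the wider predictive distribution for a new setting), here is a minimal sketch of a DerSimonian-Laird meta-analysis; the study estimates are invented, and this is not the network meta-regression used in the sepsis case study.

    ```python
    # Sketch: DerSimonian-Laird random-effects meta-analysis, contrasting the
    # random-effects mean with the predictive interval for a new study/setting.
    import numpy as np
    from scipy import stats

    log_or = np.array([-0.35, -0.10, -0.55, 0.05, -0.25])   # study log odds ratios
    se = np.array([0.15, 0.20, 0.25, 0.18, 0.22])            # their standard errors

    w = 1.0 / se**2
    mu_fe = np.sum(w * log_or) / np.sum(w)                    # fixed-effect mean
    Q = np.sum(w * (log_or - mu_fe) ** 2)
    k = len(log_or)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

    w_re = 1.0 / (se**2 + tau2)
    mu_re = np.sum(w_re * log_or) / np.sum(w_re)              # random-effects mean
    se_mu = np.sqrt(1.0 / np.sum(w_re))

    z = stats.norm.ppf(0.975)
    ci_mean = (mu_re - z * se_mu, mu_re + z * se_mu)
    pred = (mu_re - z * np.sqrt(se_mu**2 + tau2),
            mu_re + z * np.sqrt(se_mu**2 + tau2))
    print(f"tau^2 = {tau2:.3f}")
    print(f"random-effects mean 95% CI: ({ci_mean[0]:.3f}, {ci_mean[1]:.3f})")
    print(f"95% predictive interval:    ({pred[0]:.3f}, {pred[1]:.3f})")
    ```

    In a CEA model, feeding in the mean interval versus the predictive interval can give quite different value-of-information results, which is the sensitivity the abstract highlights.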

  13. Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses

    CERN Document Server

    Li, Shu; The ATLAS collaboration

    2017-01-01

    Proceeding for the poster presentation at LHCP2017, Shanghai, China on the topic of "Monte Carlo modeling of Standard Model multi-boson production processes for $\\sqrt{s} = 13$ TeV ATLAS analyses" (ATL-PHYS-SLIDE-2017-265 https://cds.cern.ch/record/2265389) Deadline: 01/09/2017

  14. An LP-model to analyse economic and ecological sustainability on Dutch dairy farms: model presentation and application for experimental farm "de Marke"

    NARCIS (Netherlands)

    Calker, van K.J.; Berentsen, P.B.M.; Boer, de I.J.M.; Giesen, G.W.J.; Huirne, R.B.M.

    2004-01-01

    Farm level modelling can be used to determine how farm management adjustments and environmental policy affect different sustainability indicators. In this paper indicators were included in a dairy farm LP (linear programming)-model to analyse the effects of environmental policy and management

  15. Balmorel: A model for analyses of the electricity and CHP markets in the Baltic Sea Region. Appendices

    International Nuclear Information System (INIS)

    Ravn, H.F.; Munksgaard, J.; Ramskov, J.; Grohnheit, P.E.; Larsen, H.V.

    2001-03-01

    This report describes the motivations behind the development of the Balmorel model as well as the model itself. The purpose of the Balmorel project is to develop a model for analyses of the power and CHP sectors in the Baltic Sea Region. The model is directed towards the analysis of relevant policy questions to the extent that they contain substantial international aspects. The model is developed in response to the trend towards internationalisation in the electricity sector. This trend is seen in increased international trade of electricity, in investment strategies among producers and otherwise. Also environmental considerations and policies are to an increasing extent gaining an international perspective in relation to the greenhouse gasses. Further, the ongoing process of deregulation of the energy sector highlights this and contributes to the need for overview and analysis. A guiding principle behind the construction of the model has been that it may serve as a means of communication in relation to the policy issues that already are or that may become important for the region. Therefore, emphasis has been put on documentation, transparency and flexibility of the model. This is achieved in part by formulating the model in a high level modelling language, and by making the model, including data, available at the internet. Potential users of the Balmorel model include research institutions, consulting companies, energy authorities, transmission system operators and energy companies. (au)

  16. Spent fuel waste disposal: analyses of model uncertainty in the MICADO project

    International Nuclear Information System (INIS)

    Grambow, B.; Ferry, C.; Casas, I.; Bruno, J.; Quinones, J.; Johnson, L.

    2010-01-01

    The objective was to find out whether international research has now provided sufficiently reliable models to assess the corrosion behavior of spent fuel in groundwater and by this to contribute to answering the question whether the highly radioactive used fuel from nuclear reactors can be disposed of safely in a geological repository. Principal project results are described in the paper

  17. Analyses of gust fronts by means of limited area NWP model outputs

    Czech Academy of Sciences Publication Activity Database

    Kašpar, Marek

    67-68, - (2003), s. 559-572 ISSN 0169-8095 R&D Projects: GA ČR GA205/00/1451 Institutional research plan: CEZ:AV0Z3042911 Keywords : gust front * limited area NWP model * output Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 1.012, year: 2003

  18. Analysing outsourcing policies in an asset management context : A six-stage model

    NARCIS (Netherlands)

    Schoenmaker, R.; Verlaan, J.G.

    2013-01-01

    Asset managers of civil infrastructure are increasingly outsourcing their maintenance. Whereas maintenance is a cyclic process, decisions to outsource are often project-based, which confuses the discussion on the degree of outsourcing. This paper presents a six-stage model that facilitates

  19. Cyclodextrin--piroxicam inclusion complexes: analyses by mass spectrometry and molecular modelling

    Science.gov (United States)

    Gallagher, Richard T.; Ball, Christopher P.; Gatehouse, Deborah R.; Gates, Paul J.; Lobell, Mario; Derrick, Peter J.

    1997-11-01

    Mass spectrometry has been used to investigate the natures of non-covalent complexes formed between the anti-inflammatory drug piroxicam and α-, β- and γ-cyclodextrins. Energies of these complexes have been calculated by means of molecular modelling. There is a correlation between peak intensities in the mass spectra and the calculated energies.

  20. Testing Mediation Using Multiple Regression and Structural Equation Modeling Analyses in Secondary Data

    Science.gov (United States)

    Li, Spencer D.

    2011-01-01

    Mediation analysis in child and adolescent development research is possible using large secondary data sets. This article provides an overview of two statistical methods commonly used to test mediated effects in secondary analysis: multiple regression and structural equation modeling (SEM). Two empirical studies are presented to illustrate the…
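
    A minimal sketch of the regression-based route to testing mediation (paths a and b plus a Sobel test) on synthetic data; the SEM route would typically be handled with dedicated software, and the variable names here are placeholders rather than anything from the secondary data sets discussed.

    ```python
    # Sketch: testing a mediated effect with two regressions and a Sobel test.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    n = 500
    x = rng.normal(size=n)                    # predictor
    m = 0.5 * x + rng.normal(size=n)          # mediator
    y = 0.4 * m + 0.1 * x + rng.normal(size=n)
    df = pd.DataFrame({"x": x, "m": m, "y": y})

    path_a = smf.ols("m ~ x", data=df).fit()          # X -> M
    path_bc = smf.ols("y ~ m + x", data=df).fit()     # M -> Y, controlling for X

    a, se_a = path_a.params["x"], path_a.bse["x"]
    b, se_b = path_bc.params["m"], path_bc.bse["m"]

    indirect = a * b
    sobel_se = np.sqrt(a**2 * se_b**2 + b**2 * se_a**2)
    print(f"indirect effect = {indirect:.3f}, Sobel z = {indirect / sobel_se:.2f}, "
          f"direct effect c' = {path_bc.params['x']:.3f}")
    ```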

  1. Wave modelling for the North Indian Ocean using MSMR analysed winds

    Digital Repository Service at National Institute of Oceanography (India)

    Vethamony, P.; Sudheesh, K.; Rupali, S.P.; Babu, M.T.; Jayakumar, S.; Saran, A; Basu, S.K.; Kumar, R.; Sarkar, A

    prediction when NCMRWF winds blended with MSMR winds are utilised in the wave model. A comparison between buoy and TOPEX wave heights of May 2000 at 4 buoy locations provides a good match, showing the merit of using altimeter data, wherever it is difficult...

  2. Automated analyses of model-driven artifacts : obtaining insights into industrial application of MDE

    NARCIS (Netherlands)

    Mengerink, J.G.M.; Serebrenik, A.; Schiffelers, R.R.H.; van den Brand, M.G.J.

    2017-01-01

    Over the past years, there has been an increase in the application of model-driven engineering in industry. As in traditional software engineering, understanding how technologies are actually used in practice is essential for developing good tooling and decision-making processes.

  3. Analysing the Severity and Frequency of Traffic Crashes in Riyadh City Using Statistical Models

    Directory of Open Access Journals (Sweden)

    Saleh Altwaijri

    2012-12-01

    Full Text Available Traffic crashes in Riyadh city cause losses in the form of deaths, injuries and property damage, in addition to the pain and social tragedy affecting the families of the victims. In 2005, there was a total of 47,341 injury traffic crashes in Riyadh city (19% of the total KSA crashes), and 9% of those crashes were severe. Road safety in Riyadh city may have been adversely affected by: high car ownership, migration of people to Riyadh city, a high number of daily trips (about 6 million), high income levels, low-cost petrol, drivers from different nationalities, young drivers and tremendous growth in population, which together create a high level of mobility and transport activity in the city. The primary objective of this paper is therefore to explore the factors affecting the severity and frequency of road crashes in Riyadh city using appropriate statistical models, with the aim of establishing effective safety policies ready to be implemented to reduce the severity and frequency of road crashes in Riyadh city. Crash data for Riyadh city were collected from the Higher Commission for the Development of Riyadh (HCDR) for a period of five years, from 1425H to 1429H (roughly corresponding to 2004-2008). Crash data were classified into three categories: fatal, serious-injury and slight-injury. Two nominal response models were developed: a standard multinomial logit model (MNL) and a mixed logit model, fitted to the injury-related crash data. Due to a severe underreporting problem for slight-injury crashes, binary and mixed binary logistic regression models were also estimated for two categories of severity: fatal and serious crashes. For frequency, two count models of the Negative Binomial (NB) type were employed, and the unit of analysis was the 168 HAIs (wards) in Riyadh city. Ward-level crash data are disaggregated by severity of the crash (such as fatal and serious-injury crashes). The results from both the multinomial and binary response models are found to be fairly consistent but
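
    The two model families mentioned above can be sketched with statsmodels: a multinomial logit for crash-level severity and a negative binomial model for ward-level crash counts. Everything below (covariates, coefficients, the 168 synthetic wards) is invented for illustration, not the HCDR data or the paper's specifications.

    ```python
    # Sketch: multinomial logit for crash severity plus a negative binomial count
    # model for ward-level crash frequency, on synthetic data.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(4)

    # Crash-level severity: 0 = slight, 1 = serious, 2 = fatal (synthetic data)
    n = 3000
    crashes = pd.DataFrame({
        "speed_limit": rng.choice([60, 80, 100, 120], n),
        "young_driver": rng.integers(0, 2, n),
        "night": rng.integers(0, 2, n),
    })
    util = 0.02 * (crashes["speed_limit"] - 60) + 0.4 * crashes["young_driver"]
    u = rng.uniform(size=n)
    p_fatal = 1 / (1 + np.exp(-(util - 3)))
    p_serious = 1 / (1 + np.exp(-(util - 1.5)))
    crashes["severity"] = np.where(u < p_fatal, 2, np.where(u < p_serious, 1, 0))

    mnl = smf.mnlogit("severity ~ speed_limit + young_driver + night",
                      data=crashes).fit(disp=0)

    # Ward-level frequency: negative binomial counts for 168 synthetic wards
    wards = pd.DataFrame({
        "population_k": rng.uniform(5, 60, 168),
        "road_km": rng.uniform(10, 200, 168),
    })
    lam = np.exp(0.5 + 0.02 * wards["population_k"] + 0.005 * wards["road_km"])
    wards["crashes"] = rng.negative_binomial(2, 2.0 / (2.0 + lam))

    nb = smf.glm("crashes ~ population_k + road_km", data=wards,
                 family=sm.families.NegativeBinomial()).fit()
    print(mnl.params.round(3))
    print(nb.params.round(3))
    ```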

  4. Using species abundance distribution models and diversity indices for biogeographical analyses

    Science.gov (United States)

    Fattorini, Simone; Rigal, François; Cardoso, Pedro; Borges, Paulo A. V.

    2016-01-01

    We examine whether Species Abundance Distribution models (SADs) and diversity indices can describe how species colonization status influences species community assembly on oceanic islands. Our hypothesis is that, because of the lack of source-sink dynamics at the archipelago scale, Single Island Endemics (SIEs), i.e. endemic species restricted to only one island, should be represented by few rare species and consequently have abundance patterns that differ from those of more widespread species. To test our hypothesis, we used arthropod data from the Azorean archipelago (North Atlantic). We divided the species into three colonization categories: SIEs, archipelagic endemics (AZEs, present in at least two islands) and native non-endemics (NATs). For each category, we modelled rank-abundance plots using both the geometric series and the Gambin model, a measure of distributional amplitude. We also calculated Shannon entropy and Buzas and Gibson's evenness. We show that the slopes of the regression lines modelling SADs were significantly higher for SIEs, which indicates a relative predominance of a few highly abundant species and a lack of rare species, which also depresses diversity indices. This may be a consequence of two factors: (i) some forest specialist SIEs may be at an advantage over other, less adapted species; (ii) the entire populations of SIEs are by definition concentrated on a single island, without the possibility of inter-island source-sink dynamics; hence all populations must have a minimum number of individuals to survive natural, often unpredictable, fluctuations. These findings are supported by higher values of the α parameter of the Gambin model for SIEs. In contrast, AZEs and NATs had lower regression slopes, lower α but higher diversity indices, resulting from their widespread distribution over several islands. We conclude that these differences in the SAD models and diversity indices demonstrate that the study of these metrics is useful for
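
    The metrics compared above can be computed in a few lines; the sketch below derives the geometric-series (rank-abundance) slope, Shannon entropy and Buzas and Gibson's evenness for one invented abundance vector, standing in for a single colonization category (it does not include the Gambin fit, which needs a dedicated package).

    ```python
    # Sketch: rank-abundance slope, Shannon entropy and Buzas-Gibson evenness
    # for one species category. The abundance vector is invented.
    import numpy as np

    abundances = np.array([120, 45, 30, 12, 8, 5, 3, 2, 1, 1])

    # Geometric-series fit: log abundance declines roughly linearly with rank
    ranks = np.arange(1, len(abundances) + 1)
    slope, intercept = np.polyfit(ranks, np.log(abundances), 1)

    # Shannon entropy H' and Buzas-Gibson evenness E = exp(H') / S
    p = abundances / abundances.sum()
    H = -np.sum(p * np.log(p))
    E = np.exp(H) / len(abundances)

    print(f"rank-abundance slope = {slope:.3f}, H' = {H:.3f}, evenness = {E:.3f}")
    ```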

  5. Structural Equation Modeling with Mplus Basic Concepts, Applications, and Programming

    CERN Document Server

    Byrne, Barbara M

    2011-01-01

    Modeled after Barbara Byrne's other best-selling structural equation modeling (SEM) books, this practical guide reviews the basic concepts and applications of SEM using Mplus Versions 5 & 6. The author reviews SEM applications based on actual data taken from her own research. Using non-mathematical language, it is written for the novice SEM user. With each application chapter, the author "walks" the reader through all steps involved in testing the SEM model including: an explanation of the issues addressed illustrated and annotated testing of the hypothesized and post hoc models expl

  6. Multicollinearity in prognostic factor analyses using the EORTC QLQ-C30: identification and impact on model selection.

    Science.gov (United States)

    Van Steen, Kristel; Curran, Desmond; Kramer, Jocelyn; Molenberghs, Geert; Van Vreckem, Ann; Bottomley, Andrew; Sylvester, Richard

    2002-12-30

    Clinical and quality of life (QL) variables from an EORTC clinical trial of first line chemotherapy in advanced breast cancer were used in a prognostic factor analysis of survival and response to chemotherapy. For response, different final multivariate models were obtained from forward and backward selection methods, suggesting a disconcerting instability. Quality of life was measured using the EORTC QLQ-C30 questionnaire completed by patients. Subscales on the questionnaire are known to be highly correlated, and therefore it was hypothesized that multicollinearity contributed to model instability. A correlation matrix indicated that global QL was highly correlated with 7 out of 11 variables. In a first attempt to explore multicollinearity, we used global QL as dependent variable in a regression model with other QL subscales as predictors. Afterwards, standard diagnostic tests for multicollinearity were performed. An exploratory principal components analysis and factor analysis of the QL subscales identified at most three important components and indicated that inclusion of global QL made minimal difference to the loadings on each component, suggesting that it is redundant in the model. In a second approach, we advocate a bootstrap technique to assess the stability of the models. Based on these analyses and since global QL exacerbates problems of multicollinearity, we therefore recommend that global QL be excluded from prognostic factor analyses using the QLQ-C30. The prognostic factor analysis was rerun without global QL in the model, and selected the same significant prognostic factors as before. Copyright 2002 John Wiley & Sons, Ltd.
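
    Two of the checks discussed above, variance inflation factors and a bootstrap look at the stability of stepwise selection, can be sketched as follows on synthetic, deliberately correlated subscales; the subscale names are placeholders and the selection rule is a simple forward procedure, not the exact method of the paper.

    ```python
    # Sketch: VIF diagnostics plus bootstrap stability of forward selection,
    # using invented, correlated "QL subscale" variables.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from statsmodels.stats.outliers_influence import variance_inflation_factor

    rng = np.random.default_rng(5)
    n = 300
    fatigue = rng.normal(size=n)
    pain = 0.6 * fatigue + 0.8 * rng.normal(size=n)
    physical = -0.5 * fatigue + 0.9 * rng.normal(size=n)
    global_ql = 0.5 * fatigue + 0.4 * pain + 0.3 * physical + 0.5 * rng.normal(size=n)
    X = pd.DataFrame({"global_ql": global_ql, "fatigue": fatigue,
                      "pain": pain, "physical": physical})

    Xc = sm.add_constant(X)
    for i, col in enumerate(Xc.columns):
        if col != "const":
            print(f"VIF {col}: {variance_inflation_factor(Xc.values, i):.2f}")

    # Bootstrap stability of forward selection against a synthetic binary response
    y = (0.8 * fatigue - 0.5 * physical + rng.logistic(size=n) > 0).astype(int)
    counts = {c: 0 for c in X.columns}
    for _ in range(200):
        idx = rng.integers(0, n, n)
        Xb, yb = X.iloc[idx], y[idx]
        selected, remaining = [], list(X.columns)
        while remaining:
            pvals = {}
            for c in remaining:
                mod = sm.Logit(yb, sm.add_constant(Xb[selected + [c]])).fit(disp=0)
                pvals[c] = mod.pvalues[c]
            best = min(pvals, key=pvals.get)
            if pvals[best] < 0.05:
                selected.append(best)
                remaining.remove(best)
            else:
                break
        for c in selected:
            counts[c] += 1
    print("selection frequency over 200 bootstraps:", counts)
    ```

    Predictors that are selected only in a fraction of bootstrap samples signal the kind of model instability that motivated dropping the redundant global QL subscale.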

  7. Healthy volunteers can be phenotyped using cutaneous sensitization pain models.

    Directory of Open Access Journals (Sweden)

    Mads U Werner

    Full Text Available BACKGROUND: Human experimental pain models leading to development of secondary hyperalgesia are used to estimate the efficacy of analgesics and antihyperalgesics. The ability to develop an area of secondary hyperalgesia varies substantially between subjects, but little is known about the agreement following repeated measurements. The aim of this study was to determine if the areas of secondary hyperalgesia were consistently robust enough to be useful for phenotyping subjects, based on their pattern of sensitization by the heat pain models. METHODS: We performed post-hoc analyses of 10 completed healthy volunteer studies (n = 342 [409 repeated measurements]). Three different models were used to induce secondary hyperalgesia to monofilament stimulation: the heat/capsaicin sensitization (H/C), the brief thermal sensitization (BTS), and the burn injury (BI) models. Three studies included both the H/C and BTS models. RESULTS: Within-subject variability was low compared to between-subject variability, and there was substantial strength of agreement between repeated induction sessions in most studies. The intraclass correlation coefficient (ICC) improved little with repeated testing beyond two sessions. There was good agreement in categorizing subjects into 'small area' (1st quartile [<25%]) and 'large area' (4th quartile [>75%]) responders: 56-76% of subjects consistently fell into the same 'small-area' or 'large-area' category on two consecutive study days. There was moderate to substantial agreement between the areas of secondary hyperalgesia induced on the same day using the H/C (forearm) and BTS (thigh) models. CONCLUSION: Secondary hyperalgesia induced by experimental heat pain models seems to be a consistent measure of sensitization in pharmacodynamic and physiological research. The analysis indicates that healthy volunteers can be phenotyped based on their pattern of sensitization by the heat [and heat plus capsaicin] pain models.
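
    The kind of between-session agreement assessed above can be sketched as follows: quartile-based categorization of the areas on two sessions, simple percentage agreement, Cohen's kappa for the extreme categories, and a between-session correlation. The areas are simulated placeholders, not the pooled volunteer data, and the ICC itself is not reproduced here.

    ```python
    # Sketch: day-to-day agreement for secondary hyperalgesia areas on simulated data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(6)
    n = 80
    day1 = rng.gamma(shape=4, scale=20, size=n)               # area (cm^2), session 1
    day2 = 0.8 * day1 + rng.gamma(shape=2, scale=10, size=n)  # correlated session 2

    def category(area, reference):
        """'small' = lowest quartile, 'large' = highest quartile, else 'mid'."""
        q1, q3 = np.percentile(reference, [25, 75])
        return np.where(area <= q1, "small", np.where(area >= q3, "large", "mid"))

    c1, c2 = category(day1, day1), category(day2, day2)
    same = np.mean(c1 == c2)

    # Cohen's kappa on the extreme categories only (small vs large responders)
    mask = (c1 != "mid") & (c2 != "mid")
    a = (c1[mask] == "small").astype(int)
    b = (c2[mask] == "small").astype(int)
    po = np.mean(a == b)
    pe = np.mean(a) * np.mean(b) + (1 - np.mean(a)) * (1 - np.mean(b))
    kappa = (po - pe) / (1 - pe)

    r, _ = stats.pearsonr(day1, day2)
    print(f"same category: {same:.0%}, kappa (extremes): {kappa:.2f}, r = {r:.2f}")
    ```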

  8. Evaluation and Improvement of Cloud and Convective Parameterizations from Analyses of ARM Observations and Models

    Energy Technology Data Exchange (ETDEWEB)

    Del Genio, Anthony D. [NASA Goddard Inst. for Space Studies (GISS), New York, NY (United States)

    2016-03-11

    Over this period the PI and his group performed a broad range of data analysis, model evaluation, and model improvement studies using ARM data. These included cloud regimes in the TWP and their evolution over the MJO; M-PACE IOP SCM-CRM intercomparisons; simulations of convective updraft strength and depth during TWP-ICE; evaluation of convective entrainment parameterizations using TWP-ICE simulations; evaluation of GISS GCM cloud behavior vs. long-term SGP cloud statistics; classification of aerosol semi-direct effects on cloud cover; depolarization lidar constraints on cloud phase; preferred states of the winter Arctic atmosphere, surface, and sub-surface; sensitivity of convection to tropospheric humidity; constraints on the parameterization of mesoscale organization from TWP-ICE WRF simulations; updraft and downdraft properties in TWP-ICE simulated convection; insights from long-term ARM records at Manus and Nauru.

  9. Complementary modelling approaches for analysing several effects of privatization on electricity investment

    Energy Technology Data Exchange (ETDEWEB)

    Bunn, D.W.; Larsen, E.R.; Vlahos, K. (London Business School (United Kingdom))

    1993-10-01

    Through the impacts of higher required rates of return, debt, taxation changes and a new competitive structure for the industry, investment in electricity generating capacity has taken a shift to less capital-intensive technologies in the UK. This paper reports on the use of large-scale, long-term capacity planning models, of both an optimization and system dynamics nature, to reflect these separate factors, investigate their sensitivities and to generate future scenarios for the investment in the industry. Some new policy implications for the regulation of the industry become apparent, but the main focus of the paper is to develop some of the methodological changes required by the planning models to suit the privatized context. (Author)

  10. Modeling Freight Ocean Rail and Truck Transportation Flows to Support Policy Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Gearhart, Jared Lee [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wang, Hao [Cornell Univ., Ithaca, NY (United States); Nozick, Linda Karen [Cornell Univ., Ithaca, NY (United States); Xu, Ningxiong [Cornell Univ., Ithaca, NY (United States)

    2017-11-01

    Freight transportation represents about 9.5% of GDP, is responsible for about 8% of greenhouse gas emissions and supports the import and export of about $3.6 trillion in international trade; hence it is important that our national freight transportation system is designed and operated efficiently and embodies user fees and other policies that balance costs and environmental consequences. This paper therefore develops a mathematical model to estimate international and domestic freight flows across ocean, rail and truck modes, which can be used to study the impacts of changes in our infrastructure as well as the imposition of new user fees and changes in operating policies. This model is applied to two case studies: (1) a disruption of the maritime ports at Los Angeles/Long Beach similar to the impacts that would be felt in an earthquake; and (2) implementation of new user fees at the California ports.

  11. MONTE CARLO ANALYSES OF THE YALINA THERMAL FACILITY WITH SERPENT STEREOLITHOGRAPHY GEOMETRY MODEL

    Energy Technology Data Exchange (ETDEWEB)

    Talamo, A.; Gohar, Y.

    2015-01-01

    This paper analyzes the YALINA Thermal subcritical assembly of Belarus using two different Monte Carlo transport programs, SERPENT and MCNP. The MCNP model is based on combinatorial geometry and a universe hierarchy, while the SERPENT model is based on Stereolithography geometry. The latter consists of unstructured triangulated surfaces defined by their normals and vertices. This geometry format is used by 3D printers, and it was created using the CUBIT software, MATLAB scripts, and C code. All the Monte Carlo simulations have been performed using the ENDF/B-VII.0 nuclear data library. Both MCNP and SERPENT share the same geometry specifications, which describe the facility details without using any material homogenization. Three different configurations have been studied with different numbers of fuel rods. The three fuel configurations use 216, 245, or 280 fuel rods, respectively. The numerical simulations show that the agreement between SERPENT and MCNP results is within a few tens of pcm.

  12. Integrate urban‐scale seismic hazard analyses with the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Luco, Nicolas; Frankel, Arthur; Petersen, Mark D.; Aagaard, Brad T.; Baltay, Annemarie S.; Blanpied, Michael; Boyd, Oliver; Briggs, Richard; Gold, Ryan D.; Graves, Robert; Hartzell, Stephen; Rezaeian, Sanaz; Stephenson, William J.; Wald, David J.; Williams, Robert A.; Withers, Kyle

    2018-01-01

    For more than 20 yrs, damage patterns and instrumental recordings have highlighted the influence of the local 3D geologic structure on earthquake ground motions (e.g., M 6.7 Northridge, California, Gao et al., 1996; M 6.9 Kobe, Japan, Kawase, 1996; M 6.8 Nisqually, Washington, Frankel, Carver, and Williams, 2002). Although this and other local-scale features are critical to improving seismic hazard forecasts, historically they have not been explicitly incorporated into the U.S. National Seismic Hazard Model (NSHM, national model and maps), primarily because the necessary basin maps and methodologies were not available at the national scale. Instead,...

  13. Analysing the strength of friction stir welded dissimilar aluminium alloys using Sugeno Fuzzy model

    Science.gov (United States)

    Barath, V. R.; Vaira Vignesh, R.; Padmanaban, R.

    2018-02-01

    Friction stir welding (FSW) is a promising solid-state joining technique for aluminium alloys. In this study, FSW trials were conducted on two dissimilar plates of aluminium alloys AA2024 and AA7075 by varying the tool rotation speed (TRS) and welding speed (WS). The tensile strength (TS) of the joints was measured and a Sugeno fuzzy model was developed to relate the FSW process parameters to the tensile strength. From the developed model, it was observed that the optimum heat generation at a WS of 15 mm.min-1 and a TRS of 1050 rpm resulted in dynamic recovery and dynamic recrystallization of the material. This refined the grains in the FSW zone and resulted in the peak tensile strength among the tested specimens. A crest (peak-shaped) parabolic trend was observed in tensile strength with variation of TRS from 900 rpm to 1200 rpm and WS from 10 mm.min-1 to 20 mm.min-1.

  14. Analyses of the energy-dependent single separable potential models for the NN scattering

    International Nuclear Information System (INIS)

    Ahmad, S.S.; Beghi, L.

    1981-08-01

    Starting from a systematic study of the salient features regarding the quantum-mechanical two-particle scattering off an energy-dependent (ED) single separable potential and its connection with the rank-2 energy-independent (EI) separable potential in the T-(K-) amplitude formulation, the present status of the ED single separable potential models due to Tabakin (M1), Garcilazo (M2) and Ahmad (M3) has been discussed. It turned out that the incorporation of a self-consistent optimization procedure improves considerably the results of the 1S0 and 3S1 scattering phase shifts for the models (M2) and (M3) up to the CM wave number q = 2.5 fm^-1, although the extrapolation of the results up to q = 10 fm^-1 reveals that the two models follow the typical behaviour of the well-known super-soft core potentials. It has been found that a variant of (M3) - i.e. (M4) involving one more parameter - gives the phase shifts results which are generally in excellent agreement with the data up to q = 2.5 fm^-1 and the extrapolation of the results for the 1S0 case in the higher wave number range not only follows the corresponding data qualitatively but also reflects a behaviour similar to the Reid soft core and Hamada-Johnston potentials together with a good agreement with the recent [4/3] Padé fits. A brief discussion regarding the features resulting from the variations in the ED parts of all the four models under consideration and their correlations with the inverse scattering theory methodology concludes the paper. (author)

  15. Analyses of Spring Barley Evapotranspiration Rates Based on Gradient Measurements and Dual Crop Coefficient Model

    Czech Academy of Sciences Publication Activity Database

    Pozníková, Gabriela; Fischer, Milan; Pohanková, Eva; Trnka, Miroslav

    2014-01-01

    Roč. 62, č. 5 (2014), s. 1079-1086 ISSN 1211-8516 R&D Projects: GA MŠk LH12037; GA MŠk(CZ) EE2.3.20.0248 Institutional support: RVO:67179843 Keywords : evapotranspiration * dual crop coefficient model * Bowen ratio/energy balance method * transpiration * soil evaporation * spring barley Subject RIV: EH - Ecology, Behaviour OBOR OECD: Environmental sciences (social aspects to be 5.7)

  16. Modeling and analyses of postulated UF6 release accidents in gaseous diffusion plant

    International Nuclear Information System (INIS)

    Kim, S.H.; Taleyarkhan, R.P.; Keith, K.D.; Schmidt, R.W.; Carter, J.C.; Dyer, R.H.

    1995-10-01

    Computer models have been developed to simulate the transient behavior of aerosols and vapors as a result of a postulated accident involving the release of uranium hexafluoride (UF6) into the process building of a gaseous diffusion plant. UF6 undergoes an exothermic chemical reaction with moisture (H2O) in the air to form hydrogen fluoride (HF) and radioactive uranyl fluoride (UO2F2). As part of a facility-wide safety evaluation, this study evaluated source terms consisting of UO2F2 as well as HF during a postulated UF6 release accident in a process building. In the postulated accident scenario, ∼7900 kg (17,500 lb) of hot UF6 vapor is released over a 5 min period from the process piping into the atmosphere of a large process building. UO2F2 mainly remains as airborne-solid particles (aerosols), and HF is in a vapor form. Some UO2F2 aerosols are removed from the air flow due to gravitational settling. The HF and the remaining UO2F2 are mixed with air and exhausted through the building ventilation system. The MELCOR computer code was selected for simulating aerosols and vapor transport in the process building. The MELCOR model was first used to develop a single volume representation of a process building and its results were compared with those from past lumped parameter models specifically developed for studying UF6 release accidents. Preliminary results indicate that MELCOR predicted results (using a lumped formulation) are comparable with those from previously developed models

  17. Models and error analyses of measuring instruments in accountability systems in safeguards control

    International Nuclear Information System (INIS)

    Dattatreya, E.S.

    1977-05-01

    Essentially three types of measuring instruments are used in plutonium accountability systems: (1) the bubblers, for measuring the total volume of liquid in the holding tanks, (2) coulometers, titration apparatus and calorimeters, for measuring the concentration of plutonium; and (3) spectrometers, for measuring isotopic composition. These three classes of instruments are modeled and analyzed. Finally, the uncertainty in the estimation of total plutonium in the holding tank is determined

  18. Analysing the uncertain future of copper with three exploratory system dynamics models

    OpenAIRE

    Auping, W.; Pruyt, E.; Kwakkel, J.H.

    2012-01-01

    High copper prices, the prospect of a transition to a more sustainable energy mix and increasing copper demands from emerging economies have not led to increased attention to the base metal copper in mineral scarcity discussions. The copper system is well documented, but many uncertainties exist, especially regarding copper demand. In order to create insight into this system's behaviour in the coming 40 years, an Exploratory System Dynamics Modelling and Analysis study was performed. T...

  19. Stability, convergence and Hopf bifurcation analyses of the classical car-following model

    OpenAIRE

    Kamath, Gopal Krishna; Jagannathan, Krishna; Raina, Gaurav

    2016-01-01

    Reaction delays play an important role in determining the qualitative dynamical properties of a platoon of vehicles traversing a straight road. In this paper, we investigate the impact of delayed feedback on the dynamics of the Classical Car-Following Model (CCFM). Specifically, we analyze the CCFM in the no-delay, small-delay and arbitrary-delay regimes. First, we derive a sufficient condition for local stability of the CCFM in the no-delay and small-delay regimes. Next, we derive the necessar...

  20. Transformation of Baumgarten's aesthetics into a tool for analysing works and for modelling

    DEFF Research Database (Denmark)

    Thomsen, Bente Dahl

    2006-01-01

      Abstract: Is this the best form, or does it need further work? The aesthetic object does not possess the perfect qualities; but how do I proceed with the form? These are questions that all modellers ask themselves at some point, and with which they can grapple for days - even weeks - before the......, or convince him-/herself about its strengths. The cards also contain aesthetical reflections that may be of inspiration in the development of the form....

  1. Large-scale inverse model analyses employing fast randomized data reduction

    Science.gov (United States)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10^7 or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
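
    The core trick described above, multiplying the data (and forward operator) by a random "sketching" matrix before solving, can be illustrated in a few lines. The sketch below is plain Python/NumPy on a random linear problem with arbitrary sizes; it is not the authors' Julia/MADS implementation or their geostatistical parameterization.

    ```python
    # Sketch: randomized sketching of a large observation set before a least-squares
    # inverse solve. The forward operator and data are random stand-ins.
    import numpy as np

    rng = np.random.default_rng(7)
    n_obs, n_par, k = 20000, 200, 500        # many observations, few parameters, sketch size

    G = rng.normal(size=(n_obs, n_par))       # forward (sensitivity) operator
    m_true = rng.normal(size=n_par)
    d = G @ m_true + 0.01 * rng.normal(size=n_obs)

    # Gaussian sketching matrix S (k x n_obs): solve with S@G and S@d instead of G and d
    S = rng.normal(size=(k, n_obs)) / np.sqrt(k)
    m_sketch, *_ = np.linalg.lstsq(S @ G, S @ d, rcond=None)
    m_full, *_ = np.linalg.lstsq(G, d, rcond=None)

    err = lambda m: np.linalg.norm(m - m_true) / np.linalg.norm(m_true)
    print(f"relative error (full data): {err(m_full):.4f}")
    print(f"relative error (sketched):  {err(m_sketch):.4f}")
    ```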

  2. Distributed organization of a brain microcircuit analysed by three-dimensional modeling: the olfactory bulb

    Directory of Open Access Journals (Sweden)

    Michele eMigliore

    2014-04-01

    Full Text Available The functional consequences of the laminar organization observed in cortical systems cannot be easily studied using standard experimental techniques, abstract theoretical representations, or dimensionally reduced models built from scratch. To solve this problem we have developed a full implementation of an olfactory bulb microcircuit using realistic three-dimensional inputs, cell morphologies, and network connectivity. The results provide new insights into the relations between the functional properties of individual cells and the networks in which they are embedded. To our knowledge, this is the first model of the mitral-granule cell network to include a realistic representation of the experimentally-recorded complex spatial patterns elicited in the glomerular layer by natural odor stimulation. Although the olfactory bulb, due to its organization, has unique advantages with respect to other brain systems, the method is completely general, and can be integrated with more general approaches to other systems. The model makes experimentally testable predictions on distributed processing and on the differential backpropagation of somatic action potentials in each lateral dendrite following odor learning, providing a powerful three-dimensional framework for investigating the functions of brain microcircuits.

  3. Drying of mint leaves in a solar dryer and under open sun: Modelling, performance analyses

    International Nuclear Information System (INIS)

    Akpinar, E. Kavak

    2010-01-01

    This study investigated the thin-layer drying characteristics of mint leaves in a solar dryer with forced convection and under open sun with natural convection, and performed energy and exergy analyses of the solar drying process. An indirect forced convection solar dryer consisting of a solar air collector and a drying cabinet was used in the experiments. The drying data were fitted to ten different mathematical models. Among the models, the Wang and Singh model was found to best explain the thin-layer drying behaviour of mint leaves for both forced solar drying and natural sun drying. The energy analysis of the solar drying process was performed using the first law of thermodynamics, while the exergy analysis was carried out by applying the second law. Energy utilization ratio (EUR) values of the drying cabinet varied between 7.826% and 46.285%. The exergetic efficiency was found to be in the range of 34.760-87.717%. The improvement potential varied between 0 and 0.017 kJ s⁻¹. Energy utilization ratio and improvement potential decreased with increasing drying time and ambient temperature, while exergetic efficiency increased.
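
    For reference, the Wang and Singh thin-layer equation expresses the moisture ratio as MR = 1 + a·t + b·t². A small curve-fitting sketch on synthetic data (not the study's measurements) is shown below; the time grid, noise level and coefficient values are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Wang and Singh thin-layer drying equation: MR = 1 + a*t + b*t**2.
# The "observations" below are synthetic and purely illustrative.

def wang_singh(t, a, b):
    return 1.0 + a * t + b * t ** 2

t = np.linspace(0, 300, 31)                       # drying time, min (assumed)
mr_obs = wang_singh(t, -5.2e-3, 7.1e-6)           # synthetic moisture-ratio curve
mr_obs += np.random.default_rng(1).normal(0, 0.01, t.size)

(a, b), _ = curve_fit(wang_singh, t, mr_obs)      # fit the two model coefficients
rmse = np.sqrt(np.mean((wang_singh(t, a, b) - mr_obs) ** 2))
print(f"a = {a:.3e}, b = {b:.3e}, RMSE = {rmse:.4f}")
```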

  4. Bag-model analyses of proton-antiproton scattering and atomic bound states

    International Nuclear Information System (INIS)

    Alberg, M.A.; Freedman, R.A.; Henley, E.M.; Hwang, W.P.; Seckel, D.; Wilets, L.

    1983-01-01

    We study proton-antiproton (pp̄) scattering using the static real potential of Bryan and Phillips outside a cutoff radius r₀ and two different shapes for the imaginary potential inside a radius R*. These forms, motivated by bag models, are a one-gluon-annihilation potential and a simple geometric-overlap form. In both cases there are three adjustable parameters: the effective bag radius R*, the effective strong coupling constant αₛ*, and r₀. There is also a choice for the form of the real potential inside the cutoff radius r₀. Analysis of the pp̄ scattering data in the laboratory-momentum region 0.4-0.7 GeV/c yields an effective nucleon bag radius R* in the range 0.6-1.1 fm, with the best fit obtained for R* = 0.86 fm. Arguments are presented that the deduced value of R* is likely to be an upper bound on the isolated nucleon bag radius. The present results are consistent with the range of bag radii in current bag models. We have also used the resultant optical potential to calculate the shifts and widths of the ³S₁ and ¹S₀ atomic bound states of the pp̄ system. For both states we find upward (repulsive) shifts and widths of about 1 keV. We find no evidence for narrow, strongly bound pp̄ states in our potential model.

  5. Preliminary sensitivity analyses of corrosion models for BWIP [Basalt Waste Isolation Project] container materials

    International Nuclear Information System (INIS)

    Anantatmula, R.P.

    1984-01-01

    A preliminary sensitivity analysis was performed for the corrosion models developed for Basalt Waste Isolation Project container materials. The models describe corrosion behavior of the candidate container materials (low carbon steel and Fe9Cr1Mo), in various environments that are expected in the vicinity of the waste package, by separate equations. The present sensitivity analysis yields an uncertainty in total uniform corrosion on the basis of assumed uncertainties in the parameters comprising the corrosion equations. Based on the sample scenario and the preliminary corrosion models, the uncertainty in total uniform corrosion of low carbon steel and Fe9Cr1Mo for the 1000 yr containment period are 20% and 15%, respectively. For containment periods ≥ 1000 yr, the uncertainty in corrosion during the post-closure aqueous periods controls the uncertainty in total uniform corrosion for both low carbon steel and Fe9Cr1Mo. The key parameters controlling the corrosion behavior of candidate container materials are temperature, radiation, groundwater species, etc. Tests are planned in the Basalt Waste Isolation Project containment materials test program to determine in detail the sensitivity of corrosion to these parameters. We also plan to expand the sensitivity analysis to include sensitivity coefficients and other parameters in future studies. 6 refs., 3 figs., 9 tabs
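
    A generic Monte Carlo sketch of the kind of uncertainty propagation described above; the Arrhenius-type rate law and all parameter values and uncertainties are hypothetical placeholders, not the BWIP corrosion models.

```python
import numpy as np

# Illustrative Monte Carlo propagation of parameter uncertainty to total uniform
# corrosion over a 1000-yr period.  Rate law, parameters and spreads are invented.

rng = np.random.default_rng(42)
n = 100_000

k = rng.normal(5.0, 0.5, n)          # rate constant at reference temperature, um/yr (assumed)
ea = rng.normal(40e3, 4e3, n)        # activation energy, J/mol (assumed)
temp = rng.normal(333.0, 5.0, n)     # temperature, K (assumed)
R, t_ref, years = 8.314, 333.0, 1000.0

rate = k * np.exp(-ea / R * (1.0 / temp - 1.0 / t_ref))   # Arrhenius-type rate, um/yr
total = rate * years                                       # total uniform corrosion, um

mean, std = total.mean(), total.std()
print(f"total corrosion: {mean:.0f} um +/- {100 * std / mean:.1f}%")
```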

  6. A model for asymmetric ballooning and analyses of ballooning behaviour of single rods with probabilistic methods

    International Nuclear Information System (INIS)

    Keusenhoff, J.G.; Schubert, J.D.; Chakraborty, A.K.

    1985-01-01

    Plastic deformation behaviour of Zircaloy cladding has been extensively examined in the past and can be described best by a model for asymmetric deformation. Slight displacement between the pellet and cladding will always exist and this will lead to the formation of azimuthal temperature differences. The ballooning process is strongly temperature dependent and, as a result of the built up temperature differences, differing deformation behaviours along the circumference of the cladding result. The calculated ballooning of cladding is mainly influenced by its temperature, the applied burst criterion and the parameters used in the deformation model. All these influencing parameters possess uncertainties. In order to quantify these uncertainties and to estimate distribution functions of important parameters such as temperature and deformation the response surface method was applied. For a hot rod the calculated standard deviation of cladding temperature amounts to 50 K. From this high value the large influence of the external cooling conditions on the deformation and burst behaviour of cladding can be estimated. In an additional statistical examination the parameters of deformation and burst models have been included and their influence on the deformation of the rod has been studied. (author)

  7. Analysing black phosphorus transistors using an analytic Schottky barrier MOSFET model.

    Science.gov (United States)

    Penumatcha, Ashish V; Salazar, Ramon B; Appenzeller, Joerg

    2015-11-13

    Owing to the difficulties associated with substitutional doping of low-dimensional nanomaterials, most field-effect transistors built from carbon nanotubes, two-dimensional crystals and other low-dimensional channels are Schottky barrier MOSFETs (metal-oxide-semiconductor field-effect transistors). The transmission through a Schottky barrier MOSFET is dominated by the gate-dependent transmission through the Schottky barriers at the metal-to-channel interfaces. This makes the use of conventional transistor models highly inappropriate and has led researchers in the past frequently to extract incorrect intrinsic properties, for example, mobility, for many novel nanomaterials. Here we propose a simple modelling approach to quantitatively describe the transfer characteristics of Schottky barrier MOSFETs from ultra-thin body materials accurately in the device off-state. In particular, after validating the model through the analysis of a set of ultra-thin silicon field-effect transistor data, we have successfully applied our approach to extract Schottky barrier heights for electrons and holes in black phosphorus devices for a large range of body thicknesses.
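
    As a generic illustration of barrier-height extraction (not the analytic SB-MOSFET model proposed in the paper), the textbook thermionic-emission picture gives I ∝ T²·exp(−qΦ_B/k_BT), so an Arrhenius plot of ln(I/T²) versus 1/T yields the barrier height from its slope. A synthetic-data sketch:

```python
import numpy as np

# Generic textbook extraction of an effective Schottky barrier height from the
# temperature dependence of the current, assuming thermionic emission
# I ∝ T^2 * exp(-phi_B / (kB * T)).  NOT the paper's analytic model; data synthetic.

kB = 8.617e-5                       # Boltzmann constant, eV/K
phi_true = 0.25                     # synthetic barrier height, eV

T = np.array([200., 225., 250., 275., 300.])
I = 1e-3 * T ** 2 * np.exp(-phi_true / (kB * T))      # synthetic currents, arb. units

# Arrhenius plot: ln(I/T^2) vs 1/T has slope -phi_B/kB.
slope, _ = np.polyfit(1.0 / T, np.log(I / T ** 2), 1)
print(f"extracted barrier height: {-slope * kB:.3f} eV")
```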

  8. Thermal conductivity degradation analyses of LWR MOX fuel by the quasi-two phase material model

    International Nuclear Information System (INIS)

    Kosaka, Yuji; Kurematsu, Shigeru; Kitagawa, Takaaki; Suzuki, Akihiro; Terai, Takayuki

    2012-01-01

    Temperature measurements of mixed oxide (MOX) and UO₂ fuels during irradiation suggested that the thermal conductivity of MOX fuel should degrade more slowly with burnup than that of UO₂ fuel. In order to explain the difference in degradation rates, the quasi-two phase material model is proposed to assess the thermal conductivity degradation of the MIMAS MOX fuel; it takes into account the distribution of Pu-rich agglomerates in the as-fabricated MOX fuel matrix. The quasi-two phase model calculation shows a gradual increase of the difference with burnup and predicts more than 10% higher thermal conductivity values around 75 GWd/t. While these results do not fully agree with the thermal conductivity degradation models implemented by some industrial fuel manufacturers, they are consistent with the results from the irradiation tests and indicate that the inhomogeneity of Pu content in the MOX fuel can be one of the major reasons for the moderation of the thermal conductivity degradation of the MOX fuel. (author)
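
    As a rough illustration of a two-phase effective-conductivity estimate, a classical Maxwell-Eucken mixing rule for dispersed inclusions can stand in for the paper's quasi-two-phase model; the conductivity values and agglomerate volume fraction below are made up.

```python
# Classical Maxwell-Eucken relation for dispersed spherical inclusions in a
# continuous matrix.  Generic mixing rule, not the authors' model; values invented.

def maxwell_eucken(k_matrix, k_inclusion, vol_frac_inclusion):
    """Effective conductivity of a dispersed phase inside a continuous matrix."""
    num = 2 * k_matrix + k_inclusion - 2 * vol_frac_inclusion * (k_matrix - k_inclusion)
    den = 2 * k_matrix + k_inclusion + vol_frac_inclusion * (k_matrix - k_inclusion)
    return k_matrix * num / den

# Assumed values (W/m/K): a more degraded Pu-rich agglomerate phase inside a
# less degraded matrix, with a 15% agglomerate volume fraction.
print(f"k_eff = {maxwell_eucken(4.0, 2.5, 0.15):.2f} W/m/K")
```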

  9. Analysing the origin of long-range interactions in proteins using lattice models

    Directory of Open Access Journals (Sweden)

    Unger Ron

    2009-01-01

    Full Text Available Abstract Background Long-range communication is very common in proteins but the physical basis of this phenomenon remains unclear. In order to gain insight into this problem, we decided to explore whether long-range interactions exist in lattice models of proteins. Lattice models of proteins have proven to capture some of the basic properties of real proteins and, thus, can be used for elucidating general principles of protein stability and folding. Results Using a computational version of double-mutant cycle analysis, we show that long-range interactions emerge in lattice models even though they are not an input feature of them. The coupling energy of both short- and long-range pairwise interactions is found to become more positive (destabilizing) in a linear fashion with increasing 'contact-frequency', an entropic term that corresponds to the fraction of states in the conformational ensemble of the sequence in which the pair of residues is in contact. A mathematical derivation of the linear dependence of the coupling energy on 'contact-frequency' is provided. Conclusion Our work shows how 'contact-frequency' should be taken into account in attempts to stabilize proteins by introducing (or stabilizing) contacts in the native state and/or through 'negative design' of non-native contacts.
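
    The double-mutant cycle quantity used in such analyses is the non-additive part of the two single-mutation effects. A minimal sketch with placeholder free-energy values (not lattice-model output):

```python
# Minimal double-mutant cycle: the coupling (interaction) energy between two
# residues is the deviation from additivity of the two single mutations.
# Free-energy values are arbitrary placeholders.

def coupling_energy(dg_wt, dg_mut1, dg_mut2, dg_double):
    """Coupling energy of a double-mutant cycle.

    A value near zero means the two mutations act additively, i.e. the two
    residues do not interact energetically (directly or indirectly).
    """
    return (dg_double - dg_mut1) - (dg_mut2 - dg_wt)

# Example: folding free energies in arbitrary units.
print(coupling_energy(dg_wt=-10.0, dg_mut1=-8.5, dg_mut2=-9.0, dg_double=-7.0))  # 0.5
```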

  10. A new compact solid-state neutral particle analyser at ASDEX Upgrade: Setup and physics modeling

    Energy Technology Data Exchange (ETDEWEB)

    Schneider, P. A.; Blank, H.; Geiger, B.; Mank, K.; Martinov, S.; Ryter, F.; Weiland, M.; Weller, A. [Max-Planck-Institut für Plasmaphysik, Garching (Germany)

    2015-07-15

    At ASDEX Upgrade (AUG), a new compact solid-state detector has been installed to measure the energy spectrum of fast neutrals based on the principle described by Shinohara et al. [Rev. Sci. Instrum. 75, 3640 (2004)]. The diagnostic relies on the usual charge exchange of supra-thermal fast ions with neutrals in the plasma. Therefore, the measured energy spectra directly correspond to those of confined fast ions with a pitch angle defined by the line of sight of the detector. Experiments in AUG demonstrated the good signal-to-noise characteristics of the detector. It is energy calibrated and can measure energies of 40-200 keV with count rates of up to 140 kcps. The detector has an active view on one of the heating beams. The heating beam increases the neutral density locally; thereby, information about the central fast-ion velocity distribution is obtained. The measured fluxes are modeled with a newly developed module for the 3D Monte Carlo code F90FIDASIM [Geiger et al., Plasma Phys. Controlled Fusion 53, 65010 (2011)]. The modeling makes it possible to distinguish between the active (beam) and passive contributions to the signal. Thereby, the birth profile of the measured fast neutrals can be reconstructed. This model reproduces the measured energy spectra with good accuracy when the passive contribution is taken into account.

  11. Theoretical and experimental stress analyses of ORNL thin-shell cylinder-to-cylinder model 2

    International Nuclear Information System (INIS)

    Gwaltney, R.C.; Bolt, S.E.; Bryson, J.W.

    1975-10-01

    Model 2 in a series of four thin-shell cylinder-to-cylinder models was tested, and the experimentally determined elastic stress distributions were compared with theoretical predictions obtained from a thin-shell finite-element analysis. Both the cylinder and the nozzle of model 2 had outside diameters of 10 in., giving a d₀/D₀ ratio of 1.0, and both had outside diameter/thickness ratios of 100. Sixteen separate loading cases in which one end of the cylinder was rigidly held were analyzed. An internal pressure loading, three mutually perpendicular force components, and three mutually perpendicular moment components were individually applied at the free end of the cylinder and at the end of the nozzle. In addition to these 13 loadings, 3 additional loads were applied to the nozzle (in-plane bending moment, out-of-plane bending moment, and axial force) with the free end of the cylinder restrained. The experimental stress distributions for each of the 16 loadings were obtained using 152 three-gage strain rosettes located on the inner and outer surfaces. All the 16 loading cases were also analyzed theoretically using a finite-element shell analysis. The analysis used flat-plate elements and considered five degrees of freedom per node in the final assembled equations. The comparisons between theory and experiment show reasonably good general agreement, and it is felt that the analysis would be satisfactory for most engineering purposes. (auth)

  12. Reproduction of the Yucca Mountain Project TSPA-LA Uncertainty and Sensitivity Analyses and Preliminary Upgrade of Models

    Energy Technology Data Exchange (ETDEWEB)

    Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis; Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States). Nuclear Waste Disposal Research and Analysis

    2016-09-01

    Sandia National Laboratories (SNL) continued its evaluation of total system performance assessment (TSPA) computing systems for the previously considered Yucca Mountain Project (YMP). This was done to maintain the operational readiness of the computing infrastructure (computer hardware and software) and the knowledge capability for TSPA-type analysis, as directed by the National Nuclear Security Administration (NNSA), DOE 2010. This work is a continuation of the ongoing readiness evaluation reported in Lee and Hadgu (2014) and Hadgu et al. (2015). The TSPA computing hardware (CL2014) and storage system described in Hadgu et al. (2015) were used for the current analysis. One floating license of GoldSim with Versions 9.60.300, 10.5 and 11.1.6 was installed on the cluster head node, and its distributed processing capability was mapped onto the cluster processors. Other supporting software was tested and installed to support TSPA-type analysis on the server cluster. The current tasks included verification of the TSPA-LA uncertainty and sensitivity analyses, and a preliminary upgrade of the TSPA-LA from Version 9.60.300 to the latest version 11.1. All the TSPA-LA uncertainty and sensitivity analysis modeling cases were successfully tested and verified for model reproducibility on the upgraded 2014 server cluster (CL2014). The uncertainty and sensitivity analyses used TSPA-LA modeling case output generated in FY15 based on GoldSim Version 9.60.300, documented in Hadgu et al. (2015). The model upgrade task successfully converted the Nominal Modeling case to GoldSim Version 11.1. Upgrade of the remaining modeling cases and distributed processing tasks will continue. The 2014 server cluster and supporting software systems are fully operational to support TSPA-LA type analysis.

  13. Towards an Industrial Application of Statistical Uncertainty Analysis Methods to Multi-physical Modelling and Safety Analyses

    International Nuclear Information System (INIS)

    Zhang, Jinzhao; Segurado, Jacobo; Schneidesch, Christophe

    2013-01-01

    Since the 1980s, Tractebel Engineering (TE) has developed and applied a multi-physical modelling and safety analysis capability, based on a code package consisting of the best-estimate 3D neutronics (PANTHER), system thermal hydraulics (RELAP5), core sub-channel thermal hydraulics (COBRA-3C), and fuel thermal mechanics (FRAPCON/FRAPTRAN) codes. A series of methodologies have been developed to perform and to license the reactor safety analysis and core reload design, based on the deterministic bounding approach. Following the recent trends in research and development as well as in industrial applications, TE has been working since 2010 towards the application of statistical sensitivity and uncertainty analysis methods to multi-physical modelling and licensing safety analyses. In this paper, the TE multi-physical modelling and safety analysis capability is first described, followed by the proposed TE best estimate plus statistical uncertainty analysis method (BESUAM). The chosen statistical sensitivity and uncertainty analysis methods (non-parametric order statistic method or bootstrap) and tool (DAKOTA) are then presented, followed by some preliminary results of their application to FRAPCON/FRAPTRAN simulations of the OECD RIA fuel rod codes benchmark and RELAP5/MOD3.3 simulations of THTF tests. (authors)
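
    For orientation, the non-parametric order-statistic approach mentioned above is often applied through Wilks' formula, which fixes the number of code runs needed for a given coverage/confidence statement. A generic sketch (not the BESUAM implementation):

```python
import math

# First-order, one-sided Wilks formula: the smallest sample size N such that the
# maximum of N random code runs bounds the p-quantile of the output with
# confidence c, i.e. 1 - p**N >= c.  Generic textbook formula only.

def wilks_sample_size(p=0.95, c=0.95):
    return math.ceil(math.log(1.0 - c) / math.log(p))

print(wilks_sample_size())            # 59 runs for a 95%/95% statement
print(wilks_sample_size(0.95, 0.99))  # 90 runs for a 95%/99% statement
```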

  14. Computational and Statistical Analyses of Insertional Polymorphic Endogenous Retroviruses in a Non-Model Organism

    Directory of Open Access Journals (Sweden)

    Le Bao

    2014-11-01

    Full Text Available Endogenous retroviruses (ERVs are a class of transposable elements found in all vertebrate genomes that contribute substantially to genomic functional and structural diversity. A host species acquires an ERV when an exogenous retrovirus infects a germ cell of an individual and becomes part of the genome inherited by viable progeny. ERVs that colonized ancestral lineages are fixed in contemporary species. However, in some extant species, ERV colonization is ongoing, which results in variation in ERV frequency in the population. To study the consequences of ERV colonization of a host genome, methods are needed to assign each ERV to a location in a species’ genome and determine which individuals have acquired each ERV by descent. Because well annotated reference genomes are not widely available for all species, de novo clustering approaches provide an alternative to reference mapping that are insensitive to differences between query and reference and that are amenable to mobile element studies in both model and non-model organisms. However, there is substantial uncertainty in both identifying ERV genomic position and assigning each unique ERV integration site to individuals in a population. We present an analysis suitable for detecting ERV integration sites in species without the need for a reference genome. Our approach is based on improved de novo clustering methods and statistical models that take the uncertainty of assignment into account and yield a probability matrix of shared ERV integration sites among individuals. We demonstrate that polymorphic integrations of a recently identified endogenous retrovirus in deer reflect contemporary relationships among individuals and populations.

  15. Diffusion model analyses of the experimental data of 12C+27Al, 40Ca dissipative collisions

    International Nuclear Information System (INIS)

    SHEN Wen-qing; QIAO Wei-min; ZHU Yong-tai; ZHAN Wen-long

    1985-01-01

    Assuming that the intermediate system decays with a statistical lifetime, the general behavior of the threefold differential cross section d³σ/(dZ dE dθ) in the dissipative collisions of the 68 MeV ¹²C + ²⁷Al and 68.6 MeV ¹²C + ⁴⁰Ca systems is analyzed in the diffusion model framework. The lifetime of the intermediate system and the separation distance for the completely damped deep-inelastic component are obtained. The calculated results are compared with the experimental angular distributions and Wilczynski plots. The probable reasons for the differences between them are briefly discussed.

  16. IMPROVEMENTS IN HANFORD TRANSURANIC (TRU) PROGRAM UTILIZING SYSTEMS MODELING AND ANALYSES

    International Nuclear Information System (INIS)

    UYTIOCO EM

    2007-01-01

    Hanford's Transuranic (TRU) Program is responsible for certifying contact-handled (CH) TRU waste and shipping the certified waste to the Waste Isolation Pilot Plant (WIPP). Hanford's CH TRU waste includes material that is in retrievable storage as well as above ground storage, and newly generated waste. Certifying a typical container entails retrieving and then characterizing it (Real-Time Radiography, Non-Destructive Assay, and Head Space Gas Sampling), validating records (data review and reconciliation), and designating the container for a payload. The certified payload is then shipped to WIPP. Systems modeling and analysis techniques were applied to Hanford's TRU Program to help streamline the certification process and increase shipping rates

  17. Water requirements of short rotation poplar coppice: Experimental and modelling analyses across Europe

    Czech Academy of Sciences Publication Activity Database

    Fischer, Milan; Zenone, T.; Trnka, Miroslav; Orság, Matěj; Montagnani, L.; Ward, E. J.; Tripathi, Abishek; Hlavinka, Petr; Seufert, G.; Žalud, Zdeněk; King, J.; Ceulemans, R.

    2018-01-01

    Vol. 250, March 2018, pp. 343-360. ISSN 0168-1923. R&D Projects: GA MŠk(CZ) LO1415. Institutional support: RVO:86652079. Keywords: energy-balance closure * dual crop coefficient * radiation use efficiency * simulate yield response * below-ground carbon * vs. 2nd rotation * flux data * biomass production * forest model * stand-scale * Bioenergy * Bowen ratio and energy balance * Crop coefficient * Eddy covariance * Evapotranspiration * Water balance. Subject RIV: GC - Agronomy. OECD field: Agriculture. Impact factor: 3.887 (2016)

  18. Domain analyses of Usher syndrome causing Clarin-1 and GPR98 protein models.

    Science.gov (United States)

    Khan, Sehrish Haider; Javed, Muhammad Rizwan; Qasim, Muhammad; Shahzadi, Samar; Jalil, Asma; Rehman, Shahid Ur

    2014-01-01

    Usher syndrome is an autosomal recessive disorder that causes hearing loss, Retinitis Pigmentosa (RP) and vestibular dysfunction. It is a clinically and genetically heterogeneous disorder that is clinically divided into three types, i.e. type I, type II and type III. To date, about twelve loci and ten identified genes are associated with Usher syndrome. A mutation in any of these genes, e.g. CDH23, CLRN1, GPR98, MYO7A, PCDH15, USH1C, USH1G, USH2A and DFNB31, can result in Usher syndrome or non-syndromic deafness. These genes provide instructions for making proteins that play important roles in normal hearing, balance and vision. The protein structures of only seven of these genes have been determined experimentally; the structures of the remaining three, Clarin-1, GPR98 and Usherin, are unavailable. In the absence of an experimentally determined structure, homology modeling and threading often provide a useful 3D model of a protein. Therefore, in the current study the Clarin-1 and GPR98 proteins were analyzed for signal peptides, domains and motifs. Clarin-1 was found to lack a signal peptide and to consist of a prokar lipoprotein domain; it is classified within the claudin 2 superfamily and consists of twelve motifs. GPR98, in contrast, has a 29-amino-acid signal peptide and is classified within GPCR family 2, containing a Concanavalin A-like lectin/glucanase superfamily domain; it was found to consist of GPS and G protein receptor F2 domains and twenty-nine motifs. Their 3D structures were predicted using the I-TASSER server. The model of Clarin-1 showed only α-helices and no β-sheets, while the model of GPR98 showed both α-helices and β-sheets. The predicted structures were then evaluated and validated with MolProbity and Ramachandran plots. The evaluation showed 78.9% of Clarin-1 residues and 78.9% of GPR98 residues within favored regions. The findings of the present study have resulted in the

  19. Analyses of Methods and Algorithms for Modelling and Optimization of Biotechnological Processes

    Directory of Open Access Journals (Sweden)

    Stoyan Stoyanov

    2009-08-01

    Full Text Available A review of the problems in modeling, optimization and control of biotechnological processes and systems is given in this paper. An analysis of existing and some new practical optimization methods for finding a global optimum, based on various advanced strategies (heuristic, stochastic, genetic and combined), is presented. Methods based on sensitivity theory, stochastic and mixed strategies for optimization with partial knowledge of the kinetic, technical and economic parameters of the optimization problem are discussed. Several approaches to multi-criteria optimization tasks are analyzed. Problems concerning the optimal control of biotechnological systems are also discussed.

  20. A Conceptual Model for Analysing Collaborative Work and Products in Groupware Systems

    Science.gov (United States)

    Duque, Rafael; Bravo, Crescencio; Ortega, Manuel

    Collaborative work using groupware systems is a dynamic process in which many tasks, in different application domains, are carried out. Currently, one of the biggest challenges in the field of CSCW (Computer-Supported Cooperative Work) research is to establish conceptual models which allow for the analysis of collaborative activities and their resulting products. In this article, we propose an ontology that conceptualizes the required elements which enable an analysis to infer a set of analysis indicators, thus evaluating both the individual and group work and the artefacts which are produced.

  1. The variants of an LOD of a 3D building model and their influence on spatial analyses

    Science.gov (United States)

    Biljecki, Filip; Ledoux, Hugo; Stoter, Jantien; Vosselman, George

    2016-06-01

    The level of detail (LOD) of a 3D city model indicates the model's grade and usability. However, there exist multiple valid variants of each LOD. As a consequence, the LOD concept is inconclusive as an instruction for the acquisition of 3D city models. For instance, the top surface of an LOD1 block model may be modelled at the eaves of a building or at its ridge height. Such variants, which we term geometric references, are often overlooked and are usually not documented in the metadata. Furthermore, the influence of a particular geometric reference on the performance of a spatial analysis is not known. In response to this research gap, we investigate a variety of LOD1 and LOD2 geometric references that are commonly employed, and perform numerical experiments to investigate their relative difference when used as input for different spatial analyses. We consider three use cases (estimation of the area of the building envelope, building volume, and shadows cast by buildings), and compute the deviations in a Monte Carlo simulation. The experiments, carried out with procedurally generated models, indicate that two 3D models representing the same building at the same LOD, but modelled according to different geometric references, may yield substantially different results when used in a spatial analysis. The outcome of our experiments also suggests that the geometric reference may have a bigger influence than the LOD, since an LOD1 with a specific geometric reference may yield a more accurate result than when using LOD2 models.

  2. Integration of 3d Models and Diagnostic Analyses Through a Conservation-Oriented Information System

    Science.gov (United States)

    Mandelli, A.; Achille, C.; Tommasi, C.; Fassi, F.

    2017-08-01

    In recent years, mature technologies for producing high-quality virtual 3D replicas of Cultural Heritage (CH) artefacts have grown thanks to the progress of Information Technology (IT) tools. These methods are an efficient way to present digital models that can be used for several purposes: heritage management, support to conservation, virtual restoration, reconstruction and colouring, art cataloguing and visual communication. The work presented is an emblematic case study oriented to preventive conservation through monitoring activities, using different acquisition methods and instruments. It was developed within a project funded by the Lombardy Region, Italy, called "Smart Culture", which aimed to realise a platform giving users easy access to CH artefacts, using a very famous statue as an example. The final product is a 3D reality-based model that contains a large amount of information and can be consulted through a common web browser. In the end, it was possible to define general strategies oriented to the maintenance and valorisation of CH artefacts, which, in this specific case, must consider the integration of different techniques and competencies to obtain complete, accurate and continuous monitoring of the statue.

  3. Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.

    Science.gov (United States)

    Samoli, Evangelia; Butland, Barbara K

    2017-12-01

    Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parameter bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be later applied, while methodological advances are needed under the multi-pollutants setting.
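
    A toy sketch of one of the reviewed methods, regression calibration: the error-prone modelled exposure is replaced by an estimate of E[X|W] fitted on a validation subset before the health model is fitted. All data below are simulated, and the classical-error setup is an assumption.

```python
import numpy as np

# Toy regression calibration: calibrate the error-prone exposure W against the
# true exposure X on a validation subset, then fit the health model on E[X|W].
# Entirely synthetic and generic; not the method of any specific cohort study.

rng = np.random.default_rng(7)
n, n_val = 5000, 300

x = rng.normal(10, 3, n)                  # true exposure (unobserved in practice)
w = x + rng.normal(0, 2, n)               # modelled exposure with classical error
y = 0.5 * x + rng.normal(0, 1, n)         # health outcome, true slope = 0.5

naive_slope = np.polyfit(w, y, 1)[0]      # attenuated by measurement error

val = rng.choice(n, n_val, replace=False) # validation subset with both X and W
a, b = np.polyfit(w[val], x[val], 1)      # calibration model: E[X|W] ≈ a*W + b
x_cal = a * w + b

calibrated_slope = np.polyfit(x_cal, y, 1)[0]
print(f"naive: {naive_slope:.3f}, calibrated: {calibrated_slope:.3f} (true 0.5)")
```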

  4. Why weight? Modelling sample and observational level variability improves power in RNA-seq analyses.

    Science.gov (United States)

    Liu, Ruijie; Holik, Aliaksei Z; Su, Shian; Jansz, Natasha; Chen, Kelan; Leong, Huei San; Blewitt, Marnie E; Asselin-Labat, Marie-Liesse; Smyth, Gordon K; Ritchie, Matthew E

    2015-09-03

    Variations in sample quality are frequently encountered in small RNA-sequencing experiments, and pose a major challenge in a differential expression analysis. Removal of high variation samples reduces noise, but at a cost of reducing power, thus limiting our ability to detect biologically meaningful changes. Similarly, retaining these samples in the analysis may not reveal any statistically significant changes due to the higher noise level. A compromise is to use all available data, but to down-weight the observations from more variable samples. We describe a statistical approach that facilitates this by modelling heterogeneity at both the sample and observational levels as part of the differential expression analysis. At the sample level this is achieved by fitting a log-linear variance model that includes common sample-specific or group-specific parameters that are shared between genes. The estimated sample variance factors are then converted to weights and combined with observational level weights obtained from the mean-variance relationship of the log-counts-per-million using 'voom'. A comprehensive analysis involving both simulations and experimental RNA-sequencing data demonstrates that this strategy leads to a universally more powerful analysis and fewer false discoveries when compared to conventional approaches. This methodology has wide application and is implemented in the open-source 'limma' package. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
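
    The weighting principle (though not the voom/limma estimation machinery itself) can be illustrated with a generic inverse-variance weighted least-squares fit of a single gene; the sample standard deviations and effect size below are invented.

```python
import numpy as np

# Generic illustration of the principle only (NOT the limma/voom implementation):
# observations from noisier samples are down-weighted with inverse-variance
# weights in a weighted least-squares fit of a two-group comparison.

rng = np.random.default_rng(3)
group = np.array([0, 0, 0, 1, 1, 1])              # two experimental groups
sample_sd = np.array([1., 1., 4., 1., 1., 4.])    # third sample in each group is noisy
y = 2.0 * group + rng.normal(0, sample_sd)        # expression of one gene, true effect 2

X = np.column_stack([np.ones_like(group), group])
w = 1.0 / sample_sd ** 2                          # inverse-variance sample weights
W = np.diag(w)

# Weighted least squares: solve (X' W X) beta = X' W y.
beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(f"estimated group effect: {beta[1]:.2f} (true 2.0)")
```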

  5. Innovative three-dimensional neutronics analyses directly coupled with cad models of geometrically complex fusion systems

    International Nuclear Information System (INIS)

    Sawan, M.; Wilson, P.; El-Guebaly, L.; Henderson, D.; Sviatoslavsky, G.; Bohm, T.; Kiedrowski, B.; Ibrahim, A.; Smith, B.; Slaybaugh, R.; Tautges, T.

    2007-01-01

    Fusion systems are, in general, geometrically complex requiring detailed three-dimensional (3-D) nuclear analysis. This analysis is required to address tritium self-sufficiency, nuclear heating, radiation damage, shielding, and radiation streaming issues. To facilitate such calculations, we developed an innovative computational tool that is based on the continuous energy Monte Carlo code MCNP and permits the direct use of CAD-based solid models in the ray-tracing. This allows performing the neutronics calculations in a model that preserves the geometrical details without any simplification, eliminates possible human error in modeling the geometry for MCNP, and allows faster design iterations. In addition to improving the work flow for simulating complex 3- D geometries, it allows a richer representation of the geometry compared to the standard 2nd order polynomial representation. This newly developed tool has been successfully tested for a detailed 40 degree sector benchmark of the International Thermonuclear Experimental Reactor (ITER). The calculations included determining the poloidal variation of the neutron wall loading, flux and nuclear heating in the divertor components, nuclear heating in toroidal field coils, and radiation streaming in the mid-plane port. The tool has been applied to perform 3-D nuclear analysis for several fusion designs including the ARIES Compact Stellarator (ARIES-CS), the High Average Power Laser (HAPL) inertial fusion power plant, and ITER first wall/shield (FWS) modules. The ARIES-CS stellarator has a first wall shape and a plasma profile that varies toroidally within each field period compared to the uniform toroidal shape in tokamaks. Such variation cannot be modeled analytically in the standard MCNP code. The impact of the complex helical geometry and the non-uniform blanket and divertor on the overall tritium breeding ratio and total nuclear heating was determined. In addition, we calculated the neutron wall loading variation in

  6. A model using marginal efficiency of investment to analyse carbon and nitrogen interactions in forested ecosystems

    Science.gov (United States)

    Thomas, R. Q.; Williams, M.

    2014-12-01

    Carbon (C) and nitrogen (N) cycles are coupled in terrestrial ecosystems through multiple processes including photosynthesis, tissue allocation, respiration, N fixation, N uptake, and decomposition of litter and soil organic matter. Capturing the constraint of N on terrestrial C uptake and storage has been a focus of the Earth System modelling community. Here we explore the trade-offs and sensitivities of allocating C and N to different tissues in order to optimize the productivity of plants using a new, simple model of ecosystem C-N cycling and interactions (ACONITE). ACONITE builds on theory related to plant economics in order to predict key ecosystem properties (leaf area index, leaf C:N, N fixation, and plant C use efficiency) based on the optimization of the marginal change in net C or N uptake associated with a change in allocation of C or N to plant tissues. We simulated and evaluated steady-state and transient ecosystem stocks and fluxes in three different forest ecosystem types (tropical evergreen, temperate deciduous, and temperate evergreen). Leaf C:N differed among the three ecosystem types, in a manner consistent with a global database describing plant traits. Gross primary productivity (GPP) and net primary productivity (NPP) estimates compared well to observed fluxes at the simulation sites. A sensitivity analysis revealed that parameterization of the relationship between leaf N and leaf respiration had the largest influence on leaf area index and leaf C:N. Also, a widely used linear leaf N-respiration relationship did not yield a realistic leaf C:N, while a more recently reported non-linear relationship simulated leaf C:N that compared better to the global trait database than the linear relationship. Overall, our ability to constrain leaf area index and allow spatially and temporally variable leaf C:N can help address challenges simulating these properties in ecosystem and Earth System models. Furthermore, the simple approach with emergent properties based on

  7. Model-based analyses to compare health and economic outcomes of cancer control: inclusion of disparities.

    Science.gov (United States)

    Goldie, Sue J; Daniels, Norman

    2011-09-21

    Disease simulation models of the health and economic consequences of different prevention and treatment strategies can guide policy decisions about cancer control. However, models that also consider health disparities can identify strategies that improve both population health and its equitable distribution. We devised a typology of cancer disparities that considers types of inequalities among black, white, and Hispanic populations across different cancers and characteristics important for near-term policy discussions. We illustrated the typology in the specific example of cervical cancer using an existing disease simulation model calibrated to clinical, epidemiological, and cost data for the United States. We calculated average reduction in cancer incidence overall and for black, white, and Hispanic women under five different prevention strategies (Strategies A1, A2, A3, B, and C) and estimated average costs and life expectancy per woman, and the cost-effectiveness ratio for each strategy. Strategies that may provide greater aggregate health benefit than existing options may also exacerbate disparities. Combining human papillomavirus vaccination (Strategy A2) with current cervical cancer screening patterns (Strategy A1) resulted in an average reduction of 69% in cancer incidence overall but a 71.6% reduction for white women, 68.3% for black women, and 63.9% for Hispanic women. Other strategies targeting risk-based screening to racial and ethnic minorities reduced disparities among racial subgroups and resulted in more equitable distribution of benefits among subgroups (reduction in cervical cancer incidence, white vs. Hispanic women, 69.7% vs. 70.1%). Strategies that employ targeted risk-based screening and new screening algorithms, with or without vaccination (Strategies B and C), provide excellent value. The most effective strategy (Strategy C) had a cost-effectiveness ratio of $28,200 per year of life saved when compared with the same strategy without

  8. A review on design of experiments and surrogate models in aircraft real-time and many-query aerodynamic analyses

    Science.gov (United States)

    Yondo, Raul; Andrés, Esther; Valero, Eusebio

    2018-01-01

    Full-scale aerodynamic wind tunnel testing, numerical simulation of high-dimensional (full-order) aerodynamic models and flight testing are some of the fundamental but complex steps in the various design phases of recent civil transport aircraft. Current aircraft aerodynamic designs have increased in complexity (multidisciplinary, multi-objective or multi-fidelity) and need to address the challenges posed by the nonlinearity of the objective functions and constraints, uncertainty quantification in aerodynamic problems, and restricted computational budgets. With the aim of reducing the computational burden and generating low-cost but accurate models that mimic the full-order models at different values of the design variables, recent work has introduced surrogate-based approaches into real-time and many-query analyses as rapid and cheaper-to-simulate models. In this paper, a comprehensive and state-of-the-art survey of common surrogate modeling techniques and surrogate-based optimization methods is given, with an emphasis on model selection and validation, dimensionality reduction, sensitivity analyses, constraint handling, and infill and stopping criteria. Benefits, drawbacks and comparative discussions in applying those methods are described. Furthermore, the paper familiarizes readers with surrogate models that have been successfully applied to the general field of fluid dynamics, but not yet in the aerospace industry. Additionally, the review revisits the most popular sampling strategies used in conducting physical and simulation-based experiments in aircraft aerodynamic design. Attractive or smart designs infrequently used in the field and discussions of advanced sampling methodologies are presented, to give a glance at the various efficient possibilities for a priori sampling of the parameter space. Closing remarks address future perspectives, challenges and shortcomings associated with the use of surrogate models by aircraft industrial
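
    A minimal surrogate-modelling sketch in the spirit of the review: Latin hypercube sampling of a 2-D design space and a Gaussian-process surrogate fitted to a cheap toy objective standing in for an expensive solver. The library choices and the toy function are assumptions, not tied to any method in the paper.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Sample the design space, evaluate a placeholder objective, fit a surrogate,
# and query the surrogate (with its uncertainty) at unseen design points.

def expensive_objective(x):               # placeholder for a high-fidelity solver
    return np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1]) + 0.5 * x[:, 0]

sampler = qmc.LatinHypercube(d=2, seed=0)
x_train = sampler.random(n=40)            # 40 design points in [0, 1]^2
y_train = expensive_objective(x_train)

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), normalize_y=True)
surrogate.fit(x_train, y_train)

x_new = np.array([[0.25, 0.75], [0.8, 0.1]])
y_pred, y_std = surrogate.predict(x_new, return_std=True)
print(np.c_[y_pred, y_std])               # surrogate prediction and its uncertainty
```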

  9. Use of model analysis to analyse Thai students’ attitudes and approaches to physics problem solving

    Science.gov (United States)

    Rakkapao, S.; Prasitpong, S.

    2018-03-01

    This study applies the model analysis technique to explore the distribution of Thai students’ attitudes and approaches to physics problem solving and how those attitudes and approaches change as a result of different experiences in physics learning. We administered the Attitudes and Approaches to Problem Solving (AAPS) survey to over 700 Thai university students from five different levels, namely students entering science, first-year science students, and second-, third- and fourth-year physics students. We found that their inferred mental states were generally mixed. The largest gap between physics experts and all levels of the students was about the role of equations and formulas in physics problem solving, and in views towards difficult problems. Most participants of all levels believed that being able to handle the mathematics is the most important part of physics problem solving. Most students’ views did not change even though they gained experiences in physics learning.

  10. Statistical Analyses and Modeling of the Implementation of Agile Manufacturing Tactics in Industrial Firms

    Directory of Open Access Journals (Sweden)

    Mohammad D. AL-Tahat

    2012-01-01

    Full Text Available This paper provides a review of and introduction to agile manufacturing. Tactics of agile manufacturing are mapped onto different production areas (eight latent constructs: manufacturing equipment and technology, process technology and know-how, quality and productivity improvement, production planning and control, shop floor management, product design and development, supplier relationship management, and customer relationship management). The implementation level of agile manufacturing tactics is investigated in each area. A structural equation model is proposed and hypotheses are formulated. Feedback from 456 firms is collected using a five-point Likert-scale questionnaire. Statistical analysis is carried out using IBM SPSS and AMOS. Multicollinearity, content validity, consistency, construct validity, ANOVA, and relationships between agile components are tested. The results of this study show that agile manufacturing tactics have a positive effect on the overall agility level. This conclusion can be used by manufacturing firms to manage challenges when trying to become agile.

  11. Analysing improvements to on-street public transport systems: a mesoscopic model approach

    DEFF Research Database (Denmark)

    Ingvardson, Jesper Bláfoss; Kornerup Jensen, Jonas; Nielsen, Otto Anker

    2017-01-01

    and other advanced public transport systems (APTS), the attractiveness of such systems depends heavily on their implementation. In the early planning stage it is advantageous to deploy simple and transparent models to evaluate possible ways of implementation. For this purpose, the present study develops...... headway time regularity and running time variability, i.e. taking into account waiting time and in-vehicle time. The approach was applied on a case study by assessing the effects of implementing segregated infrastructure and APTS elements, individually and in combination. The results showed...... that the reliability of on-street public transport operations mainly depends on APTS elements, and especially holding strategies, whereas pure infrastructure improvements induced travel time reductions. The results further suggested that synergy effects can be obtained by planning on-street public transport coherently...

  12. Biogeochemical Responses and Feedbacks to Climate Change: Synthetic Meta-Analyses Relevant to Earth System Models

    Energy Technology Data Exchange (ETDEWEB)

    van Gestel, Natasja; Jan van Groenigen, Kees; Osenberg, Craig; Dukes, Jeffrey; Dijkstra, Paul

    2018-03-20

    This project examined the sensitivity of carbon in land ecosystems to environmental change, focusing on carbon contained in soil, and the role of carbon-nitrogen interactions in regulating ecosystem carbon storage. The project used a combination of empirical measurements, mathematical models, and statistics to partition effects of climate change on soil into processes enhancing soil carbon and processes through which it decomposes. By synthesizing results from experiments around the world, the work provided novel insight on ecological controls and responses across broad spatial and temporal scales. The project developed new approaches in meta-analysis using principles of element mass balance and large datasets to derive metrics of ecosystem responses to environmental change. The project used meta-analysis to test how nutrients regulate responses of ecosystems to elevated CO2 and warming, in particular responses of nitrogen fixation, critical for regulating long-term C balance.

  13. Analysing pseudoephedrine/methamphetamine policy options in Australia using multi-criteria decision modelling.

    Science.gov (United States)

    Manning, Matthew; Wong, Gabriel T W; Ransley, Janet; Smith, Christine

    2016-06-01

    In this paper we capture and synthesize the unique knowledge of experts so that choices regarding policy measures to address methamphetamine consumption and dependency in Australia can be strengthened. We examine perceptions of: (1) the influence of underlying factors that impact on the methamphetamine problem; (2) the importance of various models of intervention that have the potential to affect the success of policies; and (3) the efficacy of alternative pseudoephedrine policy options. We adopt a multi-criteria decision model to unpack factors that affect decisions made by experts and examine potential variations in weights/preferences among groups. Seventy experts from five groups (i.e. academia (18.6%), government and policy (27.1%), health (18.6%), pharmaceutical (17.1%) and police (18.6%)) in Australia participated in the survey. Social characteristics are considered the most important underlying factor, prevention the most effective strategy and Project STOP the most preferred policy option with respect to reducing methamphetamine consumption and dependency in Australia. One-way repeated ANOVAs indicate a statistically significant difference with regard to the influence of underlying factors (F(2.3, 144.5) = 11.256, p < …) on methamphetamine consumption and dependency. Most experts support the use of preventative mechanisms to inhibit drug initiation and delay drug uptake. Compared to other policies, Project STOP (which aims to disrupt the initial diversion of pseudoephedrine) appears to be a preferable preventative mechanism to control the production and subsequent sale and use of methamphetamine. This regulatory civil law lever engages third parties in controlling drug-related crime. The literature supports third-party partnerships as they engage experts who have knowledge and expertise with respect to prevention and harm minimization. Copyright © 2016 Elsevier B.V. All rights reserved.

  14. Analyses of non-fatal accidents in an opencast mine by logistic regression model - a case study.

    Science.gov (United States)

    Onder, Seyhan; Mutlu, Mert

    2017-09-01

    Accidents cause major damage to both workers and enterprises in the mining industry. To reduce the number of occupational accidents, these incidents should be properly registered and carefully analysed. This study examines occupational accident records from the opencast coal mine of the Aegean Lignite Enterprise (ELI) of Turkish Coal Enterprises (TKI) in Soma between 2006 and 2011, which were used for statistical analyses. A total of 231 occupational accidents were analysed for this study. The accident records were categorized into seven groups: area, reason, occupation, part of body, age, shift hour and lost days. The SPSS package was used for logistic regression analyses, which predicted the probability of accidents resulting in greater or less than 3 lost workdays for non-fatal injuries. Social facilities, surface installations, workshops and opencast mining areas are the areas with the highest probability of accidents with greater than 3 lost workdays for non-fatal injuries, while the reasons with the highest probability for these types of accidents are transporting and manual handling. Additionally, the model was tested against accidents reported in 2012 for the ELI in Soma and correctly estimated the probability of accidents with lost workdays in 70% of cases.
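
    A toy logistic-regression sketch in the spirit of the study, predicting whether an accident leads to more than 3 lost workdays from coded attributes; the data, category codes and coefficients below are synthetic, not the ELI/TKI records.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic accident records: predict P(more than 3 lost workdays) from a few
# coded attributes.  Categories and effect sizes are invented for illustration.

rng = np.random.default_rng(0)
n = 500
age = rng.integers(18, 60, n)
shift_hour = rng.integers(0, 24, n)
reason = rng.integers(0, 5, n)            # e.g. 0 = manual handling, 1 = transporting, ...

logit = -3.0 + 0.04 * age + 0.6 * (reason == 1) + 0.02 * shift_hour
y = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))   # True = more than 3 lost workdays

X = np.column_stack([age, shift_hour, reason == 1])
model = LogisticRegression(max_iter=1000).fit(X, y)
print("coefficients:", model.coef_.round(2), "intercept:", model.intercept_.round(2))
print("predicted probabilities:", model.predict_proba(X[:3])[:, 1].round(2))
```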

  15. Cutting Edge PBPK Models and Analyses: Providing the Basis for Future Modeling Efforts and Bridges to Emerging Toxicology Paradigms

    Directory of Open Access Journals (Sweden)

    Jane C. Caldwell

    2012-01-01

    Full Text Available Physiologically based Pharmacokinetic (PBPK) models are used for predictions of internal or target dose from environmental and pharmacologic chemical exposures. Their use in human risk assessment is dependent on the nature of the databases (animal or human) used to develop and test them, and includes extrapolations across species and experimental paradigms, and determination of variability of response within human populations. Integration of state-of-the-science PBPK modeling with emerging computational toxicology models is critical for extrapolation between in vitro exposures, in vivo physiologic exposure, whole organism responses, and long-term health outcomes. This special issue contains papers that can provide the basis for future modeling efforts and provide bridges to emerging toxicology paradigms. In this overview paper, we present an overview of the field and an introduction to these papers that includes discussions of model development, best practices, risk-assessment applications of PBPK models, and limitations and bridges of modeling approaches for future applications. Specifically, issues addressed include: (a) increased understanding of human variability of pharmacokinetics and pharmacodynamics in the population, (b) exploration of mode-of-action (MOA) hypotheses, (c) application of biological modeling in the risk assessment of individual chemicals and chemical mixtures, and (d) identification and discussion of uncertainties in the modeling process.
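
    For readers new to the area, a deliberately minimal one-compartment pharmacokinetic model is sketched below to illustrate the mass-balance ODE building blocks that full PBPK models assemble per tissue; the parameter values are illustrative only.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal one-compartment oral-dose PK model (gut depot + central compartment).
# Real PBPK models couple many such tissue compartments via blood-flow terms.
# Parameter values below are illustrative, not taken from any study.

ka, ke, V = 1.2, 0.25, 42.0       # absorption rate (1/h), elimination rate (1/h), volume (L)
dose = 100.0                       # oral dose (mg)

def pk(t, y):
    gut, central = y
    return [-ka * gut,                      # absorption from the gut depot
            ka * gut - ke * central]        # uptake into and elimination from plasma

sol = solve_ivp(pk, (0.0, 24.0), [dose, 0.0], dense_output=True)
t = np.linspace(0.0, 24.0, 7)
conc = sol.sol(t)[1] / V                    # plasma concentration, mg/L
print(np.round(np.c_[t, conc], 3))
```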

  16. Parametric analyses of DEMO Divertor using two dimensional transient thermal hydraulic modelling

    Science.gov (United States)

    Domalapally, Phani; Di Caro, Marco

    2018-05-01

    Among the options considered for cooling the plasma-facing components of the DEMO reactor, water cooling is a conservative option because of its high heat removal capability. In this work a two-dimensional transient thermal hydraulic code is developed to support the design of the divertor for the projected DEMO reactor with water as a coolant. The mathematical model accounts for transient 2D heat conduction in the divertor section. Temperature-dependent properties are used for more accurate analysis. Correlations for single-phase forced convection, partially developed subcooled nucleate boiling, fully developed subcooled nucleate boiling and film boiling are used to calculate the heat transfer coefficients on the channel side considering the swirl flow, wherein different correlations found in the literature are compared against each other. A correlation for the critical heat flux is used to estimate its limit for given flow conditions. The paper then presents a parametric analysis of how flow velocity, diameter of the coolant channel, thickness of the coolant pipe, thickness of the armor material, inlet temperature and operating pressure affect the behavior of the divertor under steady or transient heat fluxes. This code will help in understanding the basic parameters' effect on the behavior of the divertor, in order to achieve a better design from a thermal hydraulic point of view.
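
    A minimal explicit finite-difference sketch of 2-D transient heat conduction, the core ingredient of the kind of code described above; the geometry, material properties, heat flux and boundary treatment are placeholders, not the DEMO divertor design.

```python
import numpy as np

# Explicit finite-difference solution of 2-D transient heat conduction in a
# small rectangular block: heat flux applied on the top surface, fixed coolant
# temperature on the bottom.  Side boundaries are effectively periodic here for
# brevity.  All values are assumed placeholders.

nx, ny = 40, 20
dx = dy = 1.0e-3                 # grid spacing, m
alpha = 1.1e-4                   # thermal diffusivity of a Cu-like material, m^2/s (assumed)
k = 350.0                        # thermal conductivity, W/(m K) (assumed)
dt = 0.2 * dx**2 / alpha         # time step respecting the explicit stability limit

T = np.full((ny, nx), 423.0)     # initial temperature, K (coolant temperature, assumed)
q_top = 10e6                     # heat flux on the plasma-facing side, W/m^2 (assumed)

for _ in range(2000):
    lap = ((np.roll(T, 1, 0) + np.roll(T, -1, 0) - 2 * T) / dy**2 +
           (np.roll(T, 1, 1) + np.roll(T, -1, 1) - 2 * T) / dx**2)
    T += alpha * dt * lap
    T[0, :] += alpha * dt * q_top / (k * dy)   # heated top surface (flux BC, simplified)
    T[-1, :] = 423.0                           # cooled bottom surface held at coolant temp

print(f"peak temperature after {2000 * dt:.2f} s: {T.max():.0f} K")
```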

  17. Analysing hydro-mechanical behaviour of reinforced slopes through centrifuge modelling

    Science.gov (United States)

    Veenhof, Rick; Wu, Wei

    2017-04-01

    Every year, slope instability is causing casualties and damage to properties and the environment. The behaviour of slopes during and after these kind of events is complex and depends on meteorological conditions, slope geometry, hydro-mechanical soil properties, boundary conditions and the initial state of the soils. This study describes the effects of adding reinforcement, consisting of randomly distributed polyolefin monofilament fibres or Ryegrass (Lolium), on the behaviour of medium-fine sand in loose and medium dense conditions. Direct shear tests were performed on sand specimens with different void ratios, water content and fibre or root density, respectively. To simulate the stress state of real scale field situations, centrifuge model tests were conducted on sand specimens with different slope angles, thickness of the reinforced layer, fibre density, void ratio and water content. An increase in peak shear strength is observed in all reinforced cases. Centrifuge tests show that for slopes that are reinforced the period until failure is extended. The location of shear band formation and patch displacement behaviour indicate that the design of slope reinforcement has a significant effect on the failure behaviour. Future research will focus on the effect of plant water uptake on soil cohesion.

  18. Modelling and Analysing Deadlock in Flexible Manufacturing System using Timed Petri Net

    Directory of Open Access Journals (Sweden)

    Assem Hatem Taha

    2017-03-01

    Full Text Available A flexible manufacturing system (FMS) has several advantages compared to conventional systems, such as higher machine utilization, higher efficiency, lower inventory, and shorter production time. On the other hand, an FMS is expensive and complicated. One of the main problems that may occur is deadlock: a state in which one or more operations are unable to complete their tasks because they are waiting for resources held by other processes. This may arise from inappropriate sharing of resources or improper resource allocation logic, given the complexity of assigning shared resources to different tasks in an efficient way. One of the most effective tools to model and detect deadlocks is the Petri net. In this research, MATLAB was used to detect deadlock in two parallel lines with one shared machine. The analysis shows that deadlock exists at the transition with high utilization and the place with high waiting time.
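
    To show how a Petri net exposes deadlock markings, the Python sketch below runs a brute-force reachability search on a hypothetical three-place, two-transition net with one shared resource. The net, the initial marking and all names are invented for illustration and do not reproduce the MATLAB analysis described above.

      # Toy Petri net: brute-force reachability search for deadlock markings.
      from collections import deque

      # pre[t][p]  = tokens place p must hold for transition t to fire
      # post[t][p] = tokens added to place p when transition t fires
      pre  = [(1, 0, 1), (0, 1, 1)]      # two transitions competing for place 2 (shared machine)
      post = [(0, 1, 0), (1, 0, 0)]      # firing consumes the shared token without returning it
      m0   = (1, 1, 1)                   # initial marking

      def enabled(marking, t):
          return all(marking[p] >= pre[t][p] for p in range(len(marking)))

      def fire(marking, t):
          return tuple(marking[p] - pre[t][p] + post[t][p] for p in range(len(marking)))

      def find_deadlocks(start):
          seen, queue, deadlocks = {start}, deque([start]), []
          while queue:
              m = queue.popleft()
              successors = [fire(m, t) for t in range(len(pre)) if enabled(m, t)]
              if not successors:          # no transition enabled: deadlock marking
                  deadlocks.append(m)
              for m2 in successors:
                  if m2 not in seen:
                      seen.add(m2)
                      queue.append(m2)
          return deadlocks

      print(find_deadlocks(m0))           # markings from which the net cannot progress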

  19. Microarray and bioinformatic analyses suggest models for carbon metabolism in the autotroph Acidithiobacillus ferrooxidans

    Energy Technology Data Exchange (ETDEWEB)

    C. Appia-ayme; R. Quatrini; Y. Denis; F. Denizot; S. Silver; F. Roberto; F. Veloso; J. Valdes; J. P. Cardenas; M. Esparza; O. Orellana; E. Jedlicki; V. Bonnefoy; D. Holmes

    2006-09-01

    Acidithiobacillus ferrooxidans is a chemolithoautotrophic bacterium that uses iron or sulfur as an energy and electron source. Bioinformatic analysis was used to identify putative genes and potential metabolic pathways involved in CO2 fixation, 2P-glycolate detoxification, carboxysome formation and glycogen utilization in At. ferrooxidans. Microarray transcript profiling was carried out to compare the relative expression of the predicted genes of these pathways when the microorganism was grown in the presence of iron versus sulfur. Several gene expression patterns were confirmed by real-time PCR. Genes for each of the above predicted pathways were found to be organized into discrete clusters. Clusters exhibited differential gene expression depending on the presence of iron or sulfur in the medium. Concordance of gene expression within each cluster suggested that they are operons. Most notably, clusters of genes predicted to be involved in CO2 fixation, carboxysome formation, 2P-glycolate detoxification and glycogen biosynthesis were up-regulated in sulfur medium, whereas genes involved in glycogen utilization were preferentially expressed in iron medium. These results can be explained in terms of models of gene regulation that suggest how At. ferrooxidans can adjust its central carbon management to respond to changing environmental conditions.

  20. Three-dimensional finite element model for flexible pavement analyses based on field modulus measurements

    International Nuclear Information System (INIS)

    Lacey, G.; Thenoux, G.; Rodriguez-Roa, F.

    2008-01-01

    In accordance with the present development of empirical-mechanistic tools, this paper presents an alternative to traditional analysis methods for flexible pavements using a three-dimensional finite element formulation based on a linear-elastic perfectly-plastic Drucker-Prager model for granular soil layers and a linear-elastic stress-strain law for the asphalt layer. From the sensitivity analysis performed, it was found that variations of ±4° in the internal friction angle of granular soil layers did not significantly affect the analyzed pavement response. On the other hand, a null dilation angle is conservatively proposed for design purposes. The use of a Light Falling Weight Deflectometer is also proposed as an effective and practical tool for on-site elastic modulus determination of granular soil layers. However, the stiffness value obtained from the tested layer should be corrected when the measured peak deflection and the peak force do not occur at the same time. In addition, some practical observations are given to achieve successful field measurements. The importance of using a 3D FE analysis to predict the maximum tensile strain at the bottom of the asphalt layer (related to pavement fatigue) and the maximum vertical compressive strain transmitted to the top of the granular soil layers (related to rutting) is also shown. (author)

  1. Modelling and optimization of combined cycle power plant based on exergoeconomic and environmental analyses

    International Nuclear Information System (INIS)

    Ganjehkaviri, A.; Mohd Jaafar, M.N.; Ahmadi, P.; Barzegaravval, H.

    2014-01-01

    This research paper presents a comprehensive thermodynamic modelling study of a combined cycle power plant (CCPP). The effects of economic strategies and design parameters on the plant optimization are also studied. Exergoeconomic analysis is conducted in order to determine the cost of electricity and the cost of exergy destruction. In addition, a comprehensive optimization study is performed to determine the optimal design parameters of the power plant. Next, the effects of variations in economic parameters on the sustainability, carbon dioxide emission and fuel consumption of the plant are investigated and presented for a typical combined cycle power plant. The results show that changes in economic parameters shift the balance between the cash flows and the fixed costs of the plant at the optimum point. Moreover, economic strategies greatly limited the maximum reasonable reduction in carbon emission and fuel consumption. The results showed that by using the optimum values, the exergy efficiency increases by about 6%, while CO2 emission decreases by 5.63%. However, the variation in the cost was less than 1% because a cost constraint was implemented. In addition, the sensitivity analysis for the optimization study had to be curtailed; therefore, the sensitivity of the optimization process and results to two important parameters is presented and discussed.

  2. Alpins and Thibos vectorial astigmatism analyses: proposal of a linear regression model between methods

    Directory of Open Access Journals (Sweden)

    Giuliano de Oliveira Freitas

    2013-10-01

    Full Text Available PURPOSE: To determine linear regression models between Alpins descriptive indices and Thibos astigmatic power vectors (APV), assessing the validity and strength of such correlations. METHODS: This case series prospectively assessed 62 eyes of 31 consecutive cataract patients with preoperative corneal astigmatism between 0.75 and 2.50 diopters in both eyes. Patients were randomly assigned to one of two phacoemulsification groups: one assigned to receive an AcrySof® Toric intraocular lens (IOL) in both eyes and another assigned to have an AcrySof Natural IOL associated with limbal relaxing incisions, also in both eyes. All patients were reevaluated postoperatively at 6 months, when refractive astigmatism analysis was performed using both the Alpins and Thibos methods. The ratio between the Thibos postoperative APV and preoperative APV (APVratio) and its linear regression to the Alpins percentage of success of astigmatic surgery, percentage of astigmatism corrected and percentage of astigmatism reduction at the intended axis were assessed. RESULTS: A significant negative correlation between the ratio of post- and preoperative Thibos APV (APVratio) and the Alpins percentage of success (%Success) was found (Spearman's ρ = -0.93); the linear regression is given by the following equation: %Success = (-APVratio + 1.00) x 100. CONCLUSION: The linear regression we found between the APVratio and %Success permits a validated mathematical inference concerning the overall success of astigmatic surgery.
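
    The reported regression can be applied directly. The Python sketch below computes the Alpins percentage of success from pre- and postoperative Thibos astigmatic power vector magnitudes using the equation above; the example vector values are illustrative only, not data from the study.

      # %Success = (-APVratio + 1.00) x 100, as reported above.
      def percent_success(apv_post, apv_pre):
          apv_ratio = apv_post / apv_pre          # Thibos postoperative / preoperative APV
          return (1.0 - apv_ratio) * 100.0

      # e.g. astigmatic power vector reduced from 1.50 D to 0.30 D
      print(percent_success(0.30, 1.50))          # 80.0 -> 80% success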

  3. Model tests and numerical analyses on horizontal impedance functions of inclined single piles embedded in cohesionless soil

    Science.gov (United States)

    Goit, Chandra Shekhar; Saitoh, Masato

    2013-03-01

    Horizontal impedance functions of inclined single piles are measured experimentally for model soil-pile systems with both the effects of local soil nonlinearity and resonant characteristics. Two practical pile inclinations of 5° and 10°, in addition to a vertical pile, embedded in cohesionless soil and subjected to lateral harmonic pile head loadings for a wide range of frequencies are considered. Results obtained with low-to-high amplitudes of lateral loading on model soil-pile systems encased in a laminar shear box show that the local nonlinearities have a profound impact on the horizontal impedance functions of piles. Horizontal impedance functions of inclined piles are found to be smaller than those of the vertical pile, and the values decrease as the angle of pile inclination increases. Distinct values of horizontal impedance functions are obtained for the 'positive' and 'negative' cycles of harmonic loadings, leading to asymmetric force-displacement relationships for the inclined piles. Validation of these experimental results is carried out through three-dimensional nonlinear finite element analyses, and the results from the numerical models are in good agreement with the experimental data. Sensitivity analyses conducted on the numerical models suggest that the consideration of local nonlinearity in the vicinity of the soil-pile interface influences the response of the soil-pile systems.

  4. Analyses and optimization of Lee propagation model for LoRa 868 MHz network deployments in urban areas

    Directory of Open Access Journals (Sweden)

    Dobrilović Dalibor

    2017-01-01

    Full Text Available In the recent period, fast ICT expansion and the rapid appearance of new technologies have raised the importance of fast and accurate planning and deployment of emerging communication technologies, especially wireless ones. This paper analyses the possible use of the Lee propagation model for the planning, design and management of networks based on LoRa 868 MHz technology. LoRa is a wireless technology which can be deployed in various Internet of Things and Smart City scenarios in urban areas. The analyses are based on a comparison of field measurements with model calculations. Besides analysing the usability of the Lee propagation model, possible optimization of the model is discussed as well. The research results can be used for accurate design and planning and for the preparation of high-performance wireless resource management of various Internet of Things and Smart City applications in urban areas based on LoRa or similar wireless technologies. The equipment used for the measurements is based on open-source hardware.
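
    For readers unfamiliar with the Lee model, the Python sketch below shows its slope-intercept path-loss form. The reference loss, path-loss exponent and correction factor are illustrative placeholders, not the 868 MHz values calibrated or optimized in the study above.

      import math

      # Lee model, slope-intercept form: PL(d) = L0 + 10*gamma*log10(d/d0) - F0,
      # where L0 is the median loss at reference distance d0, gamma the path-loss
      # exponent and F0 a product of correction factors (all placeholders here).
      def lee_path_loss(d_km, L0=110.0, gamma=3.8, d0_km=1.0, correction_db=0.0):
          return L0 + 10.0 * gamma * math.log10(d_km / d0_km) - correction_db

      for d in (0.2, 0.5, 1.0, 2.0):
          print(f"{d:4.1f} km -> {lee_path_loss(d):6.1f} dB")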

  5. Analysing movements in investor’s risk aversion using the Heston volatility model

    Directory of Open Access Journals (Sweden)

    Alexie ALUPOAIEI

    2013-03-01

    Full Text Available In this paper we intend to identify and analyse, if it is the case, an “epidemiological” relationship between the forecasts of professional investors and short-term developments in the EUR/RON exchange rate. Even though we do not use a typical epidemiological model like those employed in biological research, we investigated the hypothesis that, after the Lehman Brothers crash and the ensuing financial crisis, the forecasts of professional investors have significant explanatory power for the subsequent short-run movements of EUR/RON. How does this mechanism work? First, professional forecasters take account of the current macro, financial and political conditions and then elaborate forecasts. Second, based on those forecasts, they take positions in the Romanian exchange market for hedging and/or speculation purposes. Their positions, however, also incorporate different degrees of uncertainty. In parallel, a part of their anticipations is disseminated to the public via media channels. When important movements are observed in the macro, financial or political fields, the positions of professional investors in the FX derivatives market are activated. The current study represents a first step in that direction of analysis for the Romanian case. For the objectives formulated above, different measures of EUR/RON rate volatility have been estimated and compared with implied volatilities. In a second step, cointegration and dynamic correlation based tools were used to investigate the relationship between implied volatility and daily returns of the EUR/RON exchange rate.

  6. Model-based performance and energy analyses of reverse osmosis to reuse wastewater in a PVC production site.

    Science.gov (United States)

    Hu, Kang; Fiedler, Thorsten; Blanco, Laura; Geissen, Sven-Uwe; Zander, Simon; Prieto, David; Blanco, Angeles; Negro, Carlos; Swinnen, Nathalie

    2017-11-10

    A pilot-scale reverse osmosis (RO) unit downstream of a membrane bioreactor (MBR) was developed for desalination to reuse wastewater at a PVC production site. The solution-diffusion-film model (SDFM), based on the solution-diffusion model (SDM) and film theory, was proposed to describe the rejection of electrolyte mixtures in the MBR effluent, which consists of dominant ions (Na+ and Cl-) and several trace ions (Ca2+, Mg2+, K+ and SO42-). The universal global optimisation method was used to estimate the ion permeability coefficients (B) and mass transfer coefficients (K) in the SDFM. The membrane performance was then evaluated based on the estimated parameters, which demonstrated that the theoretical simulations were in line with the experimental results for the dominant ions. Moreover, an energy analysis model that takes into account the limitation imposed by the thermodynamic restriction was proposed to analyse the specific energy consumption of the pilot-scale RO system in various scenarios.
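
    The rejection mechanics of the SDFM can be summarized compactly for a single salt. The Python sketch below combines the solution-diffusion real rejection with film-theory concentration polarization to give the observed rejection; the water flux, ion permeability (B) and mass transfer coefficient (K) values are placeholders, not the coefficients estimated for the pilot plant.

      import math

      # Simplified single-salt SDFM: real rejection from the solution-diffusion
      # model, observed rejection after film-theory concentration polarization.
      def rejections(jw, B, k):
          """jw: water flux, B: ion permeability, k: mass transfer coefficient, all in m/s."""
          real = jw / (jw + B)                                   # solution-diffusion model
          observed = 1.0 / (1.0 + (B / jw) * math.exp(jw / k))   # with polarization, ln((1-Ro)/Ro) = ln(B/jw) + jw/k
          return real, observed

      print(rejections(jw=1.5e-5, B=5.0e-8, k=4.0e-5))           # placeholder NaCl-like values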

  7. Computational Modeling of Oxygen Transport in the Microcirculation: From an Experiment-Based Model to Theoretical Analyses

    OpenAIRE

    Lücker, Adrien

    2017-01-01

    Oxygen supply to cells by the cardiovascular system involves multiple physical and chemical processes that aim to satisfy fluctuating metabolic demand. Regulation mechanisms range from increased heart rate to minute adaptations in the microvasculature. The challenges and limitations of experimental studies in vivo make computational models an invaluable complement. In this thesis, oxygen transport from capillaries to tissue is investigated using a new numerical model that is tailored for vali...

  8. Devising a New Model of Demand-Based Learning Integrated with Social Networks and Analyses of its Performance

    Directory of Open Access Journals (Sweden)

    Bekim Fetaji

    2018-02-01

    Full Text Available The focus of this research study is to devise a new model for demand-based learning that is integrated with social networks such as Facebook, Twitter and others. The study investigates this by reviewing the published literature and carrying out a case study analysis in order to examine the analytical perspectives of the new model's practical implementation. The study focuses on analyzing demand-based learning and investigating how it can be improved by devising a specific model that incorporates social network use. Statistical analyses of the questionnaire results, addressing the research questions and hypotheses, showed that there is a need to introduce new models into the teaching process. The originality lies in the introduction of the social login approach to an educational environment; the contribution is the development of a demand-based web application, which aims to modernize the educational pattern of communication, introduce the social login approach, increase knowledge transfer, and improve learners’ performance and skills. Insights and recommendations are provided, argued and discussed.

  9. Extending and Applying Spartan to Perform Temporal Sensitivity Analyses for Predicting Changes in Influential Biological Pathways in Computational Models.

    Science.gov (United States)

    Alden, Kieran; Timmis, Jon; Andrews, Paul S; Veiga-Fernandes, Henrique; Coles, Mark

    2017-01-01

    Through integrating real-time imaging, computational modelling, and statistical analysis approaches, previous work has suggested that the induction of and response to cell adhesion factors is the key initiating pathway in early lymphoid tissue development, in contrast to the previously accepted view that the process is triggered by chemokine-mediated cell recruitment. These model-derived hypotheses were developed using spartan, an open-source sensitivity analysis toolkit designed to establish and understand the relationship between a computational model and the biological system that the model captures. Here, we extend the functionality available in spartan to permit the production of statistical analyses that contrast the behavior exhibited by a computational model at various simulated time-points, enabling a temporal analysis that could suggest whether the influence of biological mechanisms changes over time. We exemplify this extended functionality by using the computational model of lymphoid tissue development as a time-lapse tool. By generating results at twelve-hour intervals, we show how the extensions to spartan have been used to suggest that lymphoid tissue development could be biphasic, and to predict the time-point when a switch in the influence of biological mechanisms might occur.

  10. An efficient modeling of fine air-gaps in tokamak in-vessel components for electromagnetic analyses

    International Nuclear Information System (INIS)

    Oh, Dong Keun; Pak, Sunil; Jhang, Hogun

    2012-01-01

    Highlights: ► A simple and efficient modeling technique is introduced to avoid the undesirably massive air mesh that is usually encountered when modeling fine structures in tokamak in-vessel components. ► The modeling method is based on decoupled nodes at the element boundaries mimicking the air gaps. ► We demonstrate its viability and efficacy by comparing the method with brute-force modeling of air gaps and with an effective-resistivity approximation used in place of detailed modeling. ► Application of the method to the ITER machine was successfully carried out without sacrificing computational resources and speed. - Abstract: A simple and efficient modeling technique is presented for a proper analysis of complicated eddy current flows in conducting structures with fine air gaps. It is based on the idea of replacing a slit with the decoupled boundary of finite elements. The viability and efficacy of the technique are demonstrated in a simple problem. Application of the method to electromagnetic load analyses during plasma disruptions in ITER has been successfully carried out without sacrificing computational resources and speed. This shows that the proposed method is applicable to a practical system with complicated geometrical structures.

  11. An assessment of the wind re-analyses in the modelling of an extreme sea state in the Black Sea

    Science.gov (United States)

    Akpinar, Adem; Ponce de León, S.

    2016-03-01

    This study aims at an assessment of wind re-analyses for modelling storms in the Black Sea. A wind-wave modelling system (Simulating WAves Nearshore, SWAN) is applied to the Black Sea basin and calibrated with buoy data for three recent re-analysis wind sources, namely the European Centre for Medium-Range Weather Forecasts Reanalysis-Interim (ERA-Interim), the Climate Forecast System Reanalysis (CFSR), and the Modern Era Retrospective Analysis for Research and Applications (MERRA), during an extreme wave condition that occurred in the north-eastern part of the Black Sea. The SWAN model simulations are carried out with both default and tuned settings for the deep-water source terms, especially whitecapping. The performance of the best model configurations based on calibration with buoy data is discussed using data from the JASON2, TOPEX-Poseidon, ENVISAT and GFO satellites. The SWAN model calibration shows that the best configuration is obtained with the Janssen and Komen formulations for wave generation by wind and whitecapping dissipation, with a whitecapping coefficient (Cds) equal to 1.8e-5, using ERA-Interim. In addition, from the collocated SWAN results against the satellite records, the best configuration is determined to be the SWAN run using the CFSR winds. Numerical results thus show that the accuracy of a wave forecast depends on the quality of the wind field and on the ability of the SWAN model to simulate the waves under extreme, fetch-limited wind conditions.

  12. Thermodynamic analysis and modeling of thermo compressor; Analyse et modelisation thermodynamique du mouvement du piston d'un thermocompresseur

    Energy Technology Data Exchange (ETDEWEB)

    Arques, Ph. [Ecole Centrale de Lyon, 69 - Ecully (France)

    1998-07-01

    A thermo-compressor is a compressor that transforms the heat released by a source directly into pressure energy, without intermediate mechanical work. It is derived from a Stirling engine converted into a driven machine, in which the piston that provides the work has been eliminated. In this article, we present analytical and numerical analyses of the modeling of heat and mass transfer in the different volumes of the thermo-compressor. This engine comprises a free displacer piston that separates the cold and hot gas. (author)

  13. Modeling and stress analyses of a normal foot-ankle and a prosthetic foot-ankle complex.

    Science.gov (United States)

    Ozen, Mustafa; Sayman, Onur; Havitcioglu, Hasan

    2013-01-01

    Total ankle replacement (TAR) is a relatively new concept and is becoming more popular for the treatment of ankle arthritis and fractures. Because of the high costs and difficulties of experimental studies, the development of TAR prostheses is progressing very slowly. For this reason, medical imaging techniques such as CT and MR have become more and more useful. The finite element method (FEM) is a widely used technique to estimate the mechanical behavior of materials and structures in engineering applications. FEM has also been increasingly applied to biomechanical analyses of human bones, tissues and organs, thanks to the development of both computing capabilities and medical imaging techniques. 3-D finite element models of the human foot and ankle reconstructed from MR and CT images have been investigated by some authors. In this study, the geometries (used in modeling) of a normal and a prosthetic foot and ankle were obtained from a 3D reconstruction of CT images. The segmentation software MIMICS was used to generate the 3D images of the bony structures, soft tissues and prosthesis components of the normal and prosthetic ankle-foot complex. Except for the fused spaces between the adjacent surfaces of the phalanges, the metatarsals, cuneiforms, cuboid, navicular, talus and calcaneus bones, soft tissues and prosthesis components were developed independently to form the foot and ankle complex. The SOLIDWORKS program was used to form the boundary surfaces of all model components, and the solid models were then obtained from these boundary surfaces. The finite element analysis software ABAQUS was used to perform the numerical stress analyses of these models for the balanced standing position. Plantar pressure and von Mises stress distributions of the normal and prosthetic ankles were compared with each other. There was a peak pressure increase at the 4th metatarsal, first metatarsal and talus bones and a decrease at the intermediate cuneiform and calcaneus bones, in

  14. Gamma-ray pulsar physics: gap-model populations and light-curve analyses in the Fermi era

    International Nuclear Information System (INIS)

    Pierbattista, M.

    2010-01-01

    This thesis research focusses on the study of the young and energetic isolated ordinary pulsar population detected by the Fermi gamma-ray space telescope. We compared the expectations of four emission models with the LAT data. We found that all the models fail to reproduce the LAT detections, in particular the large number of high-E objects observed. This inconsistency is not model dependent. A discrepancy between the ratio of radio-loud to radio-quiet objects was also found between the observed and predicted samples. The Lγ ∝ E^0.5 relation is robustly confirmed by all the assumed models, with particular agreement in the slot gap (SG) case. On the basis of luminosity, the intermediate-altitude emission of the two-pole caustic SG model is favoured. The beaming factor fΩ shows an E dependency that is slightly visible in the SG case. Estimates of the pulsar orientations have been obtained to explain the simultaneous gamma-ray and radio light curves. By analysing the solutions we found a relation between the observed energy cutoff and the width of the emission slot gap. This relation has been theoretically predicted. A possible alignment of the magnetic obliquity α with time is rejected, for all the models, on timescales of the order of 10^6 years. The light-curve morphology study shows that the outer magnetosphere gap (OG) emission is favoured to explain the observed radio-gamma lag. The light-curve moment studies (symmetry and sharpness), on the contrary, favour a two-pole caustic SG emission. All the model predictions suggest a different magnetic field layout, with a hybrid two-pole caustic and intermediate-altitude emission, to explain both the pulsar luminosity and the light-curve morphology. The low-magnetosphere emission mechanism of the polar cap model is systematically rejected by all the tests done. (author) [fr]

  15. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    Science.gov (United States)

    Ligmann-Zielinska, Arika; Kramer, Daniel B; Spence Cheruvelil, Kendra; Soranno, Patricia A

    2014-01-01

    Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.

  16. Using uncertainty and sensitivity analyses in socioecological agent-based models to improve their analytical performance and policy relevance.

    Directory of Open Access Journals (Sweden)

    Arika Ligmann-Zielinska

    Full Text Available Agent-based models (ABMs) have been widely used to study socioecological systems. They are useful for studying such systems because of their ability to incorporate micro-level behaviors among interacting agents, and to understand emergent phenomena due to these interactions. However, ABMs are inherently stochastic and require proper handling of uncertainty. We propose a simulation framework based on quantitative uncertainty and sensitivity analyses to build parsimonious ABMs that serve two purposes: exploration of the outcome space to simulate low-probability but high-consequence events that may have significant policy implications, and explanation of model behavior to describe the system with higher accuracy. The proposed framework is applied to the problem of modeling farmland conservation resulting in land use change. We employ output variance decomposition based on quasi-random sampling of the input space and perform three computational experiments. First, we perform uncertainty analysis to improve model legitimacy, where the distribution of results informs us about the expected value that can be validated against independent data, and provides information on the variance around this mean as well as the extreme results. In our last two computational experiments, we employ sensitivity analysis to produce two simpler versions of the ABM. First, input space is reduced only to inputs that produced the variance of the initial ABM, resulting in a model with output distribution similar to the initial model. Second, we refine the value of the most influential input, producing a model that maintains the mean of the output of initial ABM but with less spread. These simplifications can be used to 1) efficiently explore model outcomes, including outliers that may be important considerations in the design of robust policies, and 2) conduct explanatory analysis that exposes the smallest number of inputs influencing the steady state of the modeled system.
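
    The variance-decomposition step described above can be illustrated with a pick-freeze (Sobol-style) estimator of first-order sensitivity indices. The Python sketch below applies it to a toy stand-in function rather than the farmland-conservation ABM, and uses plain Monte Carlo sampling instead of the quasi-random design mentioned in the abstract.

      import numpy as np

      rng = np.random.default_rng(1)

      def toy_model(x):
          # hypothetical response: one dominant input, one weak input, one nonlinear input
          return 4.0 * x[:, 0] + 0.5 * x[:, 1] + np.sin(2 * np.pi * x[:, 2])

      def first_order_indices(model, n_inputs, n_samples=20000):
          A = rng.random((n_samples, n_inputs))
          B = rng.random((n_samples, n_inputs))
          yA, yB = model(A), model(B)
          var_y = np.concatenate([yA, yB]).var()
          indices = []
          for i in range(n_inputs):
              ABi = A.copy()
              ABi[:, i] = B[:, i]                  # re-sample only input i
              yABi = model(ABi)
              vi = np.mean(yB * (yABi - yA))       # Saltelli-style first-order estimator
              indices.append(vi / var_y)
          return indices

      print(first_order_indices(toy_model, n_inputs=3))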

  17. Modeling and analyses for an extended car-following model accounting for drivers' situation awareness from cyber physical perspective

    Science.gov (United States)

    Chen, Dong; Sun, Dihua; Zhao, Min; Zhou, Tong; Cheng, Senlin

    2018-07-01

    The driving process is a typical cyber-physical process, which tightly couples the cyber factor of traffic information with the physical components of the vehicles. Meanwhile, drivers have situation awareness in the driving process: they respond not only to the current traffic state but also extrapolate its changing trend. In this paper, an extended car-following model is proposed to account for drivers' situation awareness. The stability criterion of the proposed model is derived via linear stability analysis. The results show that the stable region of the proposed model is enlarged on the phase diagram compared with previous models. By employing the reductive perturbation method, the modified Korteweg-de Vries (mKdV) equation is obtained. The kink-antikink soliton of the mKdV equation theoretically describes the evolution of traffic jams. Numerical simulations are conducted to verify the analytical results. Two typical traffic scenarios are investigated. The simulation results demonstrate that drivers' situation awareness plays a key role in traffic flow oscillations and the congestion transition.
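
    As background for this family of models, the Python sketch below integrates a baseline optimal-velocity car-following model on a ring road with explicit Euler stepping. The situation-awareness terms of the proposed extension are not reproduced here, and all parameter values are illustrative only.

      import numpy as np

      N, L = 50, 200.0                  # vehicles, ring length [m]
      a, vmax, hc = 1.0, 2.0, 4.0       # sensitivity [1/s], max speed, safe headway (placeholders)
      dt, steps = 0.1, 2000

      def optimal_velocity(headway):
          return 0.5 * vmax * (np.tanh(headway - hc) + np.tanh(hc))

      rng = np.random.default_rng(0)
      x = np.linspace(0.0, L, N, endpoint=False) + 0.1 * rng.standard_normal(N)
      v = np.full(N, optimal_velocity(L / N))

      for _ in range(steps):
          headway = (np.roll(x, -1) - x) % L            # distance to the leader on the ring
          v += a * (optimal_velocity(headway) - v) * dt # relax toward the optimal velocity
          x = (x + v * dt) % L

      # a nonzero spread indicates stop-and-go waves (congestion) have developed
      print("speed spread after relaxation:", v.max() - v.min())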

  18. Predictive models for conversion of prediabetes to diabetes.

    Science.gov (United States)

    Yokota, N; Miyakoshi, T; Sato, Y; Nakasone, Y; Yamashita, K; Imai, T; Hirabayashi, K; Koike, H; Yamauchi, K; Aizawa, T

    2017-08-01

    To clarify the natural course of prediabetes and develop predictive models for conversion to diabetes. A retrospective longitudinal study of 2105 adults with prediabetes was carried out with a mean observation period of 4.7 years. Models were developed using multivariate logistic regression analysis and verified by 10-fold cross-validation. The relationship between [final BMI minus baseline BMI] (δBMI) and incident diabetes was analyzed post hoc by comparing the diabetes conversion rate for low (Prediabetes conversion to diabetes could be predicted with accuracy, and weight reduction during the observation was associated with a lowered conversion rate. Copyright © 2017 Elsevier Inc. All rights reserved.
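
    The modelling-and-verification recipe above (multivariate logistic regression checked by 10-fold cross-validation) can be sketched in Python as follows; the covariates, their effect sizes and the simulated cohort are placeholders, not the study data.

      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      n = 2000
      X = np.column_stack([
          rng.normal(42, 12, n),       # age (placeholder covariate)
          rng.normal(24, 4, n),        # BMI (placeholder covariate)
          rng.normal(5.9, 0.3, n),     # HbA1c (placeholder covariate)
      ])
      lin = -2.0 + 0.03 * (X[:, 0] - 42) + 0.10 * (X[:, 1] - 24) + 3.0 * (X[:, 2] - 5.9)
      y = rng.random(n) < 1.0 / (1.0 + np.exp(-lin))   # simulated conversion to diabetes

      model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
      auc = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
      print(f"10-fold cross-validated AUC: {auc.mean():.2f} +/- {auc.std():.2f}")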

  19. Perspectives on econometric modelling to inform policy: a UK qualitative case study of minimum unit pricing of alcohol.

    Science.gov (United States)

    Katikireddi, Srinivasa V; Bond, Lyndal; Hilton, Shona

    2014-06-01

    Novel policy interventions may lack evaluation-based evidence. Considerations to introduce minimum unit pricing (MUP) of alcohol in the UK were informed by econometric modelling (the 'Sheffield model'). We aim to investigate policy stakeholders' views of the utility of modelling studies for public health policy. In-depth qualitative interviews with 36 individuals involved in MUP policy debates (purposively sampled to include civil servants, politicians, academics, advocates and industry-related actors) were conducted and thematically analysed. Interviewees felt familiar with modelling studies and often displayed detailed understandings of the Sheffield model. Despite this, many were uneasy about the extent to which the Sheffield model could be relied on for informing policymaking and preferred traditional evaluations. A tension was identified between this preference for post hoc evaluations and a desire for evidence derived from local data, with modelling seen to offer high external validity. MUP critics expressed concern that the Sheffield model did not adequately capture the 'real life' world of the alcohol market, which was conceptualized as a complex and, to some extent, inherently unpredictable system. Communication of modelling results was considered intrinsically difficult but presenting an appropriate picture of the uncertainties inherent in modelling was viewed as desirable. There was general enthusiasm for increased use of econometric modelling to inform future policymaking but an appreciation that such evidence should only form one input into the process. Modelling studies are valued by policymakers as they provide contextually relevant evidence for novel policies, but tensions exist with views of traditional evaluation-based evidence. © The Author 2013. Published by Oxford University Press on behalf of the European Public Health Association.

  20. Genetic analyses using GGE model and a mixed linear model approach, and stability analyses using AMMI bi-plot for late-maturity alpha-amylase activity in bread wheat genotypes.

    Science.gov (United States)

    Rasul, Golam; Glover, Karl D; Krishnan, Padmanaban G; Wu, Jixiang; Berzonsky, William A; Fofana, Bourlaye

    2017-06-01

    Low falling number and discounting of grain when it is downgraded in class are consequences of excessive late-maturity α-amylase activity (LMAA) in bread wheat (Triticum aestivum L.). Grain expressing high LMAA produces poorer quality bread products. To effectively breed for low LMAA, it is necessary to understand which genes control it and how they are expressed, particularly when genotypes are grown in different environments. In this study, an International Collection (IC) of 18 spring wheat genotypes and another set of 15 spring wheat cultivars adapted to South Dakota (SD), USA, were assessed to characterize the genetic component of LMAA over 5 and 13 environments, respectively. The data were analysed using a GGE model with a mixed linear model approach, and stability analysis was presented using an AMMI bi-plot in R software. All estimated variance components and their proportions of the total phenotypic variance were highly significant for both sets of genotypes, which was validated by the AMMI model analysis. Broad-sense heritability for LMAA was higher in SD-adapted cultivars (53%) compared to that in the IC (49%). Significant genetic effects and stability analyses showed that some genotypes, e.g. 'Lancer', 'Chester' and 'LoSprout' from the IC, and 'Alsen', 'Traverse' and 'Forefront' from the SD cultivars, could be used as parents to develop new cultivars expressing low levels of LMAA. Stability analysis using an AMMI bi-plot revealed that 'Chester', 'Lancer' and 'Advance' were the most stable across environments, while in contrast, 'Kinsman', 'Lerma52' and 'Traverse' exhibited the lowest stability for LMAA across environments.

  1. Data Assimilation Tools for CO2 Reservoir Model Development – A Review of Key Data Types, Analyses, and Selected Software

    Energy Technology Data Exchange (ETDEWEB)

    Rockhold, Mark L.; Sullivan, E. C.; Murray, Christopher J.; Last, George V.; Black, Gary D.

    2009-09-30

    Pacific Northwest National Laboratory (PNNL) has embarked on an initiative to develop world-class capabilities for performing experimental and computational analyses associated with geologic sequestration of carbon dioxide. The ultimate goal of this initiative is to provide science-based solutions for helping to mitigate the adverse effects of greenhouse gas emissions. This Laboratory-Directed Research and Development (LDRD) initiative currently has two primary focus areas—advanced experimental methods and computational analysis. The experimental methods focus area involves the development of new experimental capabilities, supported in part by the U.S. Department of Energy’s (DOE) Environmental Molecular Science Laboratory (EMSL) housed at PNNL, for quantifying mineral reaction kinetics with CO2 under high temperature and pressure (supercritical) conditions. The computational analysis focus area involves numerical simulation of coupled, multi-scale processes associated with CO2 sequestration in geologic media, and the development of software to facilitate building and parameterizing conceptual and numerical models of subsurface reservoirs that represent geologic repositories for injected CO2. This report describes work in support of the computational analysis focus area. The computational analysis focus area currently consists of several collaborative research projects. These are all geared towards the development and application of conceptual and numerical models for geologic sequestration of CO2. The software being developed for this focus area is referred to as the Geologic Sequestration Software Suite or GS3. A wiki-based software framework is being developed to support GS3. This report summarizes work performed in FY09 on one of the LDRD projects in the computational analysis focus area. The title of this project is Data Assimilation Tools for CO2 Reservoir Model Development. Some key objectives of this project in FY09 were to assess the current state

  2. ATOP - The Advanced Taiwan Ocean Prediction System Based on the mpiPOM. Part 1: Model Descriptions, Analyses and Results

    Directory of Open Access Journals (Sweden)

    Leo Oey

    2013-01-01

    Full Text Available A data-assimilated Taiwan Ocean Prediction (ATOP) system is being developed at the National Central University, Taiwan. The model simulates sea-surface height, three-dimensional currents, temperature and salinity, and turbulent mixing. The model has options for tracer and particle-tracking algorithms, as well as for wave-induced Stokes drift and wave-enhanced mixing and bottom drag. Two different forecast domains have been tested: a large-grid domain that encompasses the entire North Pacific Ocean at 0.1° × 0.1° horizontal resolution and 41 vertical sigma levels, and a smaller western North Pacific domain which at present also has the same horizontal resolution. In both domains, 25-year spin-up runs from 1988 - 2011 were first conducted, forced by six-hourly Cross-Calibrated Multi-Platform (CCMP) and NCEP reanalysis Global Forecast System (GFS) winds. The results are then used as initial conditions to conduct ocean analyses from January 2012 through February 2012, when updated hindcasts and real-time forecasts begin using the GFS winds. This paper describes the ATOP system and compares the forecast results against satellite altimetry data for assessing model skill. The model results are also shown to compare well with observations of (i) the Kuroshio intrusion in the northern South China Sea, and (ii) the subtropical counter current. A review of, and comparison with, other models of (i) in the literature are also given.

  3. One size does not fit all: On how Markov model order dictates performance of genomic sequence analyses

    Science.gov (United States)

    Narlikar, Leelavati; Mehta, Nidhi; Galande, Sanjeev; Arjunwadkar, Mihir

    2013-01-01

    The structural simplicity and ability to capture serial correlations make Markov models a popular modeling choice in several genomic analyses, such as the identification of motifs, genes and regulatory elements. A critical, yet relatively unexplored, issue is the determination of the order of the Markov model. Most biological applications use a predetermined order for all data sets indiscriminately. Here, we show the vast variation in the performance of such applications with the order. To identify the ‘optimal’ order, we investigated two model selection criteria: the Akaike information criterion and the Bayesian information criterion (BIC). The BIC-optimal order delivers the best performance for mammalian phylogeny reconstruction and motif discovery. Importantly, this order is different from the orders typically used by many tools, suggesting that a simple additional step determining this order can significantly improve results. Further, we describe a novel classification approach based on BIC-optimal Markov models to predict the functionality of tissue-specific promoters. Our classifier discriminates between promoters active across 12 different tissues with remarkable accuracy, yielding 3 times the precision expected by chance. Application to the metagenomics problem of identifying the taxon from a short DNA fragment yields accuracies at least as high as the more complex mainstream methodologies, while retaining conceptual and computational simplicity. PMID:23267010
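
    Order selection by BIC can be sketched briefly. The Python example below fits k-order Markov models to a synthetic DNA string by maximum likelihood and penalizes the 3·4^k free parameters, mirroring the criterion discussed above but none of the specific genomic applications; the sequence is a placeholder.

      import math
      import random
      from collections import Counter

      random.seed(0)
      seq = "".join(random.choice("ACGT") for _ in range(5000))   # synthetic placeholder sequence

      def bic(seq, k):
          """BIC of a k-order Markov model fitted to seq by maximum likelihood."""
          context_counts, pair_counts = Counter(), Counter()
          for i in range(k, len(seq)):
              ctx, nxt = seq[i - k:i], seq[i]
              context_counts[ctx] += 1
              pair_counts[(ctx, nxt)] += 1
          loglik = sum(c * math.log(c / context_counts[ctx])
                       for (ctx, _), c in pair_counts.items())
          n_params = 3 * 4 ** k                 # (4 - 1) free probabilities per context
          return -2.0 * loglik + n_params * math.log(len(seq) - k)

      for k in range(4):
          print(k, round(bic(seq, k), 1))       # the order with the smallest BIC is preferred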

  4. Using FOSM-Based Data Worth Analyses to Design Geophysical Surveys to Reduce Uncertainty in a Regional Groundwater Model Update

    Science.gov (United States)

    Smith, B. D.; White, J.; Kress, W. H.; Clark, B. R.; Barlow, J.

    2016-12-01

    Hydrogeophysical surveys have become an integral part of understanding hydrogeological frameworks used in groundwater models. Regional models cover a large area where water well data are, at best, scattered and irregular. Since budgets are finite, priorities must be assigned to select optimal areas for geophysical surveys. For airborne electromagnetic (AEM) geophysical surveys, optimization of mapping depth and line spacing needs to take into account the objectives of the groundwater models. The approach discussed here uses first-order, second-moment (FOSM) uncertainty analysis, which assumes an approximately linear relation between model parameters and observations. This assumption allows FOSM analyses to be applied to estimate the value of increased parameter knowledge for reducing forecast uncertainty. FOSM is used to facilitate optimization of yet-to-be-completed geophysical surveying to reduce model forecast uncertainty. The main objective of the geophysical surveying is assumed to be estimating the values and spatial variation of hydrologic parameters (i.e. hydraulic conductivity) as well as mapping lower-permeability layers that influence the spatial distribution of recharge flux. The proposed data worth analysis was applied to the Mississippi Embayment Regional Aquifer Study (MERAS), which is being updated. The objective of MERAS is to assess the groundwater availability (status and trends) of the Mississippi embayment aquifer system. The study area covers portions of eight states including Alabama, Arkansas, Illinois, Kentucky, Louisiana, Mississippi, Missouri, and Tennessee. The active model grid covers approximately 70,000 square miles, and incorporates some 6,000 miles of major rivers and over 100,000 water wells. In the FOSM analysis, a dense network of pilot points was used to capture uncertainty in hydraulic conductivity and recharge. To simulate the effect of AEM flight lines, the prior uncertainty for hydraulic conductivity and recharge pilots along potential flight lines was
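
    The FOSM data-worth calculation can be sketched in a few lines: a linearized forecast is propagated through the posterior parameter covariance with and without a candidate block of AEM observations, and the worth of the flight line is the resulting reduction in forecast variance. The Python sketch below uses small random placeholder Jacobians, not the MERAS model matrices.

      import numpy as np

      rng = np.random.default_rng(3)
      n_par = 20
      prior_cov = np.eye(n_par)                       # prior parameter covariance (placeholder)
      forecast_sens = rng.normal(size=n_par)          # d(forecast)/d(parameters), placeholder

      def forecast_variance(jacobians, noise_var=0.1):
          """FOSM forecast variance from stacked observation Jacobians."""
          J = np.vstack(jacobians)
          info = np.linalg.inv(prior_cov) + J.T @ J / noise_var
          post_cov = np.linalg.inv(info)              # linearized posterior parameter covariance
          return float(forecast_sens @ post_cov @ forecast_sens)

      existing = rng.normal(size=(30, n_par))         # sensitivities of existing observations
      candidate = rng.normal(size=(10, n_par))        # sensitivities along a candidate flight line

      v_base = forecast_variance([existing])
      v_with = forecast_variance([existing, candidate])
      print(f"forecast variance {v_base:.2f} -> {v_with:.2f} "
            f"({100 * (1 - v_with / v_base):.0f}% reduction)")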

  5. Using plant growth modeling to analyse C source-sink relations under drought: inter and intra specific comparison

    Directory of Open Access Journals (Sweden)

    Benoit ePallas

    2013-11-01

    Full Text Available The ability to assimilate C and allocate NSC (non-structural carbohydrates) to the most appropriate organs is crucial to maximize plant ecological or agronomic performance. Such C source and sink activities are differentially affected by environmental constraints. Under drought, plant growth is generally more sink than source limited, as organ expansion or appearance rate is affected earlier and more strongly than C assimilation. This favors plant survival and recovery but not always agronomic performance, as NSC are stored rather than used for growth due to a modified metabolism in source and sink leaves. Such interactions between plant C and water balance are complex and plant modeling can help analyse their impact on plant phenotype. This paper addresses the impact of trade-offs between C sink and source activities on plant production under drought, combining experimental and modeling approaches. Two contrasting monocotyledonous species (rice, oil palm) were studied. Experimentally, the sink limitation of plant growth under moderate drought was confirmed, as was the modification of NSC metabolism in source and sink organs. Under severe stress, when the C source became limiting, plant NSC concentration decreased. Two plant models dedicated to oil palm and rice morphogenesis were used to perform a sensitivity analysis and further explore how to optimize C sink and source drought sensitivity to maximize plant growth. Modeling results highlighted that optimal drought sensitivity depends both on drought type and species and that modeling is a great opportunity to analyse such complex processes. Further modeling needs, and more generally the challenge of using models to support complex trait breeding, are discussed.

  6. Comparative sequence and structural analyses of G-protein-coupled receptor crystal structures and implications for molecular models.

    Directory of Open Access Journals (Sweden)

    Catherine L Worth

    Full Text Available BACKGROUND: Up until recently the only available experimental (high resolution) structure of a G-protein-coupled receptor (GPCR) was that of bovine rhodopsin. In the past few years the determination of GPCR structures has accelerated with three new receptors, as well as squid rhodopsin, being successfully crystallized. All share a common molecular architecture of seven transmembrane helices and can therefore serve as templates for building molecular models of homologous GPCRs. However, despite the common general architecture of these structures key differences do exist between them. The choice of which experimental GPCR structure(s) to use for building a comparative model of a particular GPCR is unclear and, without detailed structural and sequence analyses, could be arbitrary. The aim of this study is therefore to perform a systematic and detailed analysis of sequence-structure relationships of known GPCR structures. METHODOLOGY: We analyzed in detail conserved and unique sequence motifs and structural features in experimentally-determined GPCR structures. Deeper insight into specific and important structural features of GPCRs as well as valuable information for template selection has been gained. Using key features a workflow has been formulated for identifying the most appropriate template(s) for building homology models of GPCRs of unknown structure. This workflow was applied to a set of 14 human family A GPCRs suggesting for each the most appropriate template(s) for building a comparative molecular model. CONCLUSIONS: The available crystal structures represent only a subset of all possible structural variation in family A GPCRs. Some GPCRs have structural features that are distributed over different crystal structures or which are not present in the templates suggesting that homology models should be built using multiple templates. This study provides a systematic analysis of GPCR crystal structures and a consistent method for identifying

  7. Comparative sequence and structural analyses of G-protein-coupled receptor crystal structures and implications for molecular models.

    Science.gov (United States)

    Worth, Catherine L; Kleinau, Gunnar; Krause, Gerd

    2009-09-16

    Up until recently the only available experimental (high resolution) structure of a G-protein-coupled receptor (GPCR) was that of bovine rhodopsin. In the past few years the determination of GPCR structures has accelerated with three new receptors, as well as squid rhodopsin, being successfully crystallized. All share a common molecular architecture of seven transmembrane helices and can therefore serve as templates for building molecular models of homologous GPCRs. However, despite the common general architecture of these structures key differences do exist between them. The choice of which experimental GPCR structure(s) to use for building a comparative model of a particular GPCR is unclear and without detailed structural and sequence analyses, could be arbitrary. The aim of this study is therefore to perform a systematic and detailed analysis of sequence-structure relationships of known GPCR structures. We analyzed in detail conserved and unique sequence motifs and structural features in experimentally-determined GPCR structures. Deeper insight into specific and important structural features of GPCRs as well as valuable information for template selection has been gained. Using key features a workflow has been formulated for identifying the most appropriate template(s) for building homology models of GPCRs of unknown structure. This workflow was applied to a set of 14 human family A GPCRs suggesting for each the most appropriate template(s) for building a comparative molecular model. The available crystal structures represent only a subset of all possible structural variation in family A GPCRs. Some GPCRs have structural features that are distributed over different crystal structures or which are not present in the templates suggesting that homology models should be built using multiple templates. This study provides a systematic analysis of GPCR crystal structures and a consistent method for identifying suitable templates for GPCR homology modelling that will

  8. Predicting Young Adults Binge Drinking in Nightlife Scenes: An Evaluation of the D-ARIANNA Risk Estimation Model.

    Science.gov (United States)

    Crocamo, Cristina; Bartoli, Francesco; Montomoli, Cristina; Carrà, Giuseppe

    2018-05-25

    Binge drinking (BD) among young people has significant public health implications. Thus, there is a need to target the users most at risk. We estimated the discriminative accuracy of an innovative model nested in a recently developed e-Health app (Digital-Alcohol RIsk Alertness Notifying Network for Adolescents and young adults [D-ARIANNA]) for BD in young people, examining its performance in predicting short-term BD episodes. We consecutively recruited young adults in pubs, discos, or live music events. Participants self-administered the app D-ARIANNA, which incorporates an evidence-based risk estimation model for the dependent variable BD. They were re-evaluated after 2 weeks using a single-item BD behavior measure as reference. We estimated the discriminative ability of D-ARIANNA through measures of sensitivity and specificity, as well as likelihood ratios. ROC curve analyses were carried out, exploring the variability of discriminative ability across subgroups. The analyses included 507 subjects, of whom 18% reported at least 1 BD episode at follow-up. The majority of these had been identified as at high/moderate or high risk (65%) at induction. Higher scores from the D-ARIANNA risk estimation model reflected an increase in the likelihood of BD. Additional risk factors such as high pocket money availability and alcohol expectancies influence the predictive ability of the model. The D-ARIANNA model showed an appreciable, though modest, predictive ability for subsequent BD episodes. The post-hoc model showed slightly better predictive properties. Using up-to-date technology, D-ARIANNA appears to be an innovative and promising screening tool for BD among young people. Its long-term impact remains to be established, as does the role of additional social and environmental factors.
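
    The discriminative-accuracy measures named above are straightforward to compute. The Python sketch below derives sensitivity, specificity, likelihood ratios and ROC AUC from a simulated risk score and follow-up outcome, with a hypothetical risk threshold; none of the numbers come from D-ARIANNA data.

      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(7)
      n = 507
      outcome = rng.random(n) < 0.18                       # simulated BD episodes at follow-up
      score = rng.normal(0.4, 0.2, n) + 0.25 * outcome     # simulated risk score, higher if outcome

      threshold = 0.5                                      # hypothetical "moderate/high risk" cut-off
      positive = score >= threshold
      tp = np.sum(positive & outcome); fn = np.sum(~positive & outcome)
      fp = np.sum(positive & ~outcome); tn = np.sum(~positive & ~outcome)

      sens, spec = tp / (tp + fn), tn / (tn + fp)
      print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
            f"LR+={sens / (1 - spec):.2f} LR-={(1 - sens) / spec:.2f} "
            f"AUC={roc_auc_score(outcome, score):.2f}")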

  9. Comparative analyses reveal potential uses of Brachypodium distachyon as a model for cold stress responses in temperate grasses

    Directory of Open Access Journals (Sweden)

    Li Chuan

    2012-05-01

    Full Text Available Abstract Background Little is known about the potential of Brachypodium distachyon as a model for low temperature stress responses in Pooideae. The ice recrystallization inhibition protein (IRIP) genes, fructosyltransferase (FST) genes, and many C-repeat binding factor (CBF) genes are Pooideae-specific and important in low temperature responses. Here we used comparative analyses to study conservation and evolution of these gene families in B. distachyon to better understand its potential as a model species for agriculturally important temperate grasses. Results Brachypodium distachyon contains cold-responsive IRIP genes which have evolved through Brachypodium-specific gene family expansions. A large cold-responsive CBF3 subfamily was identified in B. distachyon, while CBF4 homologs are absent from the genome. No B. distachyon FST gene homologs encode typical core Pooideae FST motifs, and low temperature induced fructan accumulation was dramatically different in B. distachyon compared to core Pooideae species. Conclusions We conclude that B. distachyon can serve as an interesting model for specific molecular mechanisms involved in low temperature responses in core Pooideae species. However, the evolutionary history of key genes involved in low temperature responses has been different in Brachypodium and core Pooideae species. These differences limit the use of B. distachyon as a model for holistic studies relevant for agricultural core Pooideae species.

  10. Assessing models of speciation under different biogeographic scenarios; An empirical study using multi-locus and RNA-seq analyses

    Science.gov (United States)

    Edwards, Taylor; Tollis, Marc; Hsieh, PingHsun; Gutenkunst, Ryan N.; Liu, Zhen; Kusumi, Kenro; Culver, Melanie; Murphy, Robert W.

    2016-01-01

    Evolutionary biology often seeks to decipher the drivers of speciation, and much debate persists over the relative importance of isolation and gene flow in the formation of new species. Genetic studies of closely related species can assess if gene flow was present during speciation, because signatures of past introgression often persist in the genome. We test hypotheses on which mechanisms of speciation drove diversity among three distinct lineages of desert tortoise in the genus Gopherus. These lineages offer a powerful system to study speciation, because different biogeographic patterns (physical vs. ecological segregation) are observed at opposing ends of their distributions. We use 82 samples collected from 38 sites, representing the entire species' distribution and generate sequence data for mtDNA and four nuclear loci. A multilocus phylogenetic analysis in *BEAST estimates the species tree. RNA-seq data yield 20,126 synonymous variants from 7665 contigs from two individuals of each of the three lineages. Analyses of these data using the demographic inference package ∂a∂i serve to test the null hypothesis of no gene flow during divergence. The best-fit demographic model for the three taxa is concordant with the *BEAST species tree, and the ∂a∂i analysis does not indicate gene flow among any of the three lineages during their divergence. These analyses suggest that divergence among the lineages occurred in the absence of gene flow and in this scenario the genetic signature of ecological isolation (parapatric model) cannot be differentiated from geographic isolation (allopatric model).

  11. Clustering structures of large proteins using multifractal analyses based on a 6-letter model and hydrophobicity scale of amino acids

    International Nuclear Information System (INIS)

    Yang Jianyi; Yu Zuguo; Anh, Vo

    2009-01-01

    The Schneider and Wrede hydrophobicity scale of amino acids and the 6-letter model of proteins are used to study the relationship between the primary structure and the secondary structural classification of proteins. Two kinds of multifractal analyses are performed on the two measures obtained from these two kinds of data on large proteins. Nine parameters from the multifractal analyses are considered to construct the parameter spaces. Each protein is represented by one point in these spaces. A procedure is proposed to separate large proteins into the α, β, α + β and α/β structural classes in these parameter spaces. Fisher's linear discriminant algorithm is used to assess our clustering accuracy on the 49 selected large proteins. Numerical results indicate that the discriminant accuracies are satisfactory. In particular, they reach 100.00% and 84.21% in separating the α proteins from the {β, α + β, α/β} proteins in a parameter space; 92.86% and 86.96% in separating the β proteins from the {α + β, α/β} proteins in another parameter space; 91.67% and 83.33% in separating the α/β proteins from the α + β proteins in the last parameter space.
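
    The sketch below illustrates, under simplified assumptions, how a generalized-dimension (D_q) spectrum of the kind used in such multifractal analyses can be estimated from a measure derived from amino-acid hydrophobicity values. The toy sequence, the hydrophobicity values and the box sizes are made up; the published work uses the full Schneider-Wrede scale and a 6-letter alphabet.

```python
import numpy as np

# Hypothetical hydrophobicity values for a toy sequence (a Schneider-Wrede-style scale is assumed).
hydro = {"A": 0.10, "L": 0.55, "K": -0.99, "D": -0.72, "F": 0.61, "G": 0.0}
sequence = "ALKDFGALKDFGALKAAGDF" * 20
values = np.array([hydro[a] for a in sequence])

# Turn the sequence into a normalised measure (shift to positive, then normalise).
measure = values - values.min() + 1e-9
measure /= measure.sum()

def generalized_dimensions(mu, qs, box_sizes):
    """Estimate D_q from the slope of the log partition function vs log box size."""
    dims = []
    for q in qs:
        logZ, logeps = [], []
        for size in box_sizes:
            n_boxes = len(mu) // size
            boxes = mu[: n_boxes * size].reshape(n_boxes, size).sum(axis=1)
            boxes = boxes[boxes > 0]
            if abs(q - 1.0) < 1e-9:            # D_1 uses the entropy form
                logZ.append(np.sum(boxes * np.log(boxes)))
            else:
                logZ.append(np.log(np.sum(boxes ** q)) / (q - 1.0))
            logeps.append(np.log(size / len(mu)))
        dims.append(np.polyfit(logeps, logZ, 1)[0])
    return np.array(dims)

qs = np.linspace(-5, 5, 11)
Dq = generalized_dimensions(measure, qs, box_sizes=[4, 8, 16, 32, 64])
print(dict(zip(np.round(qs, 1), np.round(Dq, 3))))
```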

  12. A hidden Markov model for reconstructing animal paths from solar geolocation loggers using templates for light intensity.

    Science.gov (United States)

    Rakhimberdiev, Eldar; Winkler, David W; Bridge, Eli; Seavy, Nathaniel E; Sheldon, Daniel; Piersma, Theunis; Saveliev, Anatoly

    2015-01-01

    Solar archival tags (henceforth called geolocators) are tracking devices deployed on animals to reconstruct their long-distance movements on the basis of locations inferred post hoc with reference to the geographical and seasonal variations in the timing and speeds of sunrise and sunset. The increased use of geolocators has created a need for analytical tools to produce accurate and objective estimates of migration routes that are explicit in their uncertainty about the position estimates. We developed a hidden Markov chain model for the analysis of geolocator data. This model estimates tracks for animals with complex migratory behaviour by combining: (1) a shading-insensitive, template-fit physical model, (2) an uncorrelated random walk movement model that includes migratory and sedentary behavioural states, and (3) spatially explicit behavioural masks. The model is implemented in a specially developed open source R package FLightR. We used the particle filter (PF) algorithm to provide relatively fast model posterior computation. We illustrate our modelling approach with analysis of simulated data for stationary tags and of real tracks of both a tree swallow Tachycineta bicolor migrating along the east coast and a golden-crowned sparrow Zonotrichia atricapilla migrating along the west coast of North America. We provide a model that increases accuracy in analyses of noisy data and movements of animals with complicated migration behaviour. It provides posterior distributions for the positions of animals, their behavioural states (e.g., migrating or sedentary), and distance and direction of movement. Our approach allows biologists to estimate locations of animals with complex migratory behaviour based on raw light data. This model advances the current methods for estimating migration tracks from solar geolocation, and will benefit a fast-growing number of tracking studies with this technology.
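
    The following is a minimal, generic bootstrap particle filter in Python, not the FLightR implementation, meant only to illustrate how posterior position estimates arise from combining a movement model (sedentary vs. migratory steps) with noisy observations; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a 1-D "migration" track observed with heavy noise (a stand-in for light-derived positions).
T = 60
true_pos = np.cumsum(rng.choice([0.0, 1.0], size=T, p=[0.4, 0.6]))  # sedentary vs migrating steps
obs = true_pos + rng.normal(0, 2.0, size=T)

# Bootstrap particle filter.
n_particles = 5000
particles = np.zeros(n_particles)
estimates = []
for t in range(T):
    # Movement model: mixture of a sedentary state and a migratory step.
    moving = rng.random(n_particles) < 0.6
    particles = particles + moving * rng.normal(1.0, 0.5, n_particles)
    # Weight by the observation likelihood (Gaussian observation error assumed).
    w = np.exp(-0.5 * ((obs[t] - particles) / 2.0) ** 2)
    w /= w.sum()
    # Resample (multinomial resampling kept for brevity; systematic resampling is usually preferred).
    idx = rng.choice(n_particles, size=n_particles, p=w)
    particles = particles[idx]
    estimates.append(particles.mean())

print("RMSE of posterior mean track:", np.sqrt(np.mean((np.array(estimates) - true_pos) ** 2)))
```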

  13. Exploratory multinomial logit model-based driver injury severity analyses for teenage and adult drivers in intersection-related crashes.

    Science.gov (United States)

    Wu, Qiong; Zhang, Guohui; Ci, Yusheng; Wu, Lina; Tarefder, Rafiqul A; Alcántara, Adélamar Dely

    2016-05-18

    Teenage drivers are more likely to be involved in severely incapacitating and fatal crashes compared to adult drivers. Moreover, because two-thirds of urban vehicle miles traveled are on signal-controlled roadways, significant research efforts are needed to investigate intersection-related teenage driver injury severities and their contributing factors in terms of driver behavior, vehicle-infrastructure interactions, environmental characteristics, roadway geometric features, and traffic compositions. Therefore, this study aims to explore the characteristic differences between teenage and adult drivers in intersection-related crashes, identify the significant contributing attributes, and analyze their impacts on driver injury severities. Using crash data collected in New Mexico from 2010 to 2011, 2 multinomial logit regression models were developed to analyze injury severities for teenage and adult drivers, respectively. Elasticity analyses and transferability tests were conducted to better understand the quantitative impacts of these factors and the teenage driver injury severity model's generality. The results showed that although many of the same contributing factors were found to be significant in both the teenage and adult driver models, certain different attributes must be distinguished to specifically develop effective safety solutions for the 2 driver groups. The research findings are helpful to better understand teenage crash uniqueness and develop cost-effective solutions to reduce intersection-related teenage injury severities and facilitate driver injury mitigation research.
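
    A minimal sketch of a multinomial logit severity model of the type described, fitted with statsmodels on synthetic data; the covariates, coefficients and pseudo-elasticity calculation are illustrative only and do not reproduce the New Mexico analysis.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic stand-in for crash records: injury severity (0 = none, 1 = injury, 2 = severe/fatal)
# modelled against a few illustrative covariates.
n = 2000
df = pd.DataFrame({
    "speeding": rng.integers(0, 2, n),
    "seatbelt": rng.integers(0, 2, n),
    "night": rng.integers(0, 2, n),
})
logits = np.column_stack([
    np.zeros(n),
    0.8 * df.speeding - 0.6 * df.seatbelt + 0.3 * df.night,
    1.4 * df.speeding - 1.0 * df.seatbelt + 0.5 * df.night,
])
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
df["severity"] = [rng.choice(3, p=pi) for pi in p]

X = sm.add_constant(df[["speeding", "seatbelt", "night"]])
model = sm.MNLogit(df["severity"], X).fit(disp=False)
print(model.summary())

# Pseudo-elasticity for a binary covariate: change in average predicted probabilities when it flips 0 -> 1.
X1, X0 = X.copy(), X.copy()
X1["speeding"], X0["speeding"] = 1, 0
print(model.predict(X1).mean(axis=0) - model.predict(X0).mean(axis=0))
```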

  14. LOCO - a linearised model for analysing the onset of coolant oscillations and frequency response of boiling channels

    International Nuclear Information System (INIS)

    Romberg, T.M.

    1982-12-01

    Industrial plants such as heat exchangers and nuclear and conventional boilers are prone to coolant flow oscillations which may not be detected. In this report, a hydrodynamic model is formulated in which the one-dimensional, non-linear, partial differential equations for the conservation of mass, energy and momentum are perturbed with respect to time, linearised, and Laplace-transformed into the s-domain for frequency response analysis. A computer program has been developed to integrate numerically the resulting non-linear ordinary differential equations by finite difference methods. A sample problem demonstrates how the computer code is used to analyse the frequency response and flow stability characteristics of a heated channel.
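
    Once the equations have been linearised and Laplace-transformed, the channel reduces to a transfer function whose frequency response can be evaluated directly. The snippet below is not the LOCO code; it uses an arbitrary second-order transfer function G(s) = K / (s^2 + 2*zeta*wn*s + wn^2) only to show that final frequency-response step.

```python
import numpy as np
from scipy import signal

# Arbitrary, lightly damped second-order system standing in for a linearised boiling channel.
# A resonance peak in the gain is the kind of feature a flow-stability analysis looks for.
wn, zeta, K = 2.0, 0.1, 1.0
G = signal.TransferFunction([K], [1.0, 2 * zeta * wn, wn ** 2])

# Evaluate the frequency response over a logarithmic frequency grid.
w, mag, phase = signal.bode(G, w=np.logspace(-1, 1.5, 200))
peak = w[np.argmax(mag)]
print(f"Resonance near {peak:.2f} rad/s, peak gain {mag.max():.1f} dB")
```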

  15. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
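
    A bare-bones version of the idea, stripped of the boosted GAMLSS machinery: permute the device labels and compare the observed device effects on location (mean) and scale (spread) with their permutation distributions. The data and effect sizes below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic measurements from two devices: device B has a small systematic bias
# (location shift) and a larger random error (scale inflation).
a = rng.normal(50.0, 2.0, 300)
b = rng.normal(50.5, 3.0, 300)
x = np.concatenate([a, b])
labels = np.array([0] * len(a) + [1] * len(b))

def effects(vals, lab):
    g0, g1 = vals[lab == 0], vals[lab == 1]
    return g1.mean() - g0.mean(), np.log(g1.std() / g0.std())

obs_loc, obs_scale = effects(x, labels)

# Permutation distribution of both effects under the null of no device difference.
n_perm = 5000
perm_loc = np.empty(n_perm)
perm_scale = np.empty(n_perm)
for i in range(n_perm):
    perm = rng.permutation(labels)
    perm_loc[i], perm_scale[i] = effects(x, perm)

p_loc = np.mean(np.abs(perm_loc) >= abs(obs_loc))
p_scale = np.mean(np.abs(perm_scale) >= abs(obs_scale))
print(f"location effect p = {p_loc:.4f}, scale effect p = {p_scale:.4f}")
```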

  16. Motivation and justification: a dual-process model of culture in action.

    Science.gov (United States)

    Vaisey, Stephen

    2009-05-01

    This article presents a new model of culture in action. Although most sociologists who study culture emphasize its role in post hoc sense making, sociologists of religion and social psychologists tend to focus on the role beliefs play in motivation. The dual-process model integrates justificatory and motivational approaches by distinguishing between "discursive" and "practical" modes of culture and cognition. The author uses panel data from the National Study of Youth and Religion to illustrate the model's usefulness. Consistent with its predictions, he finds that though respondents cannot articulate clear principles of moral judgment, their choice from a list of moral-cultural scripts strongly predicts later behavior.

  17. Using niche-modelling and species-specific cost analyses to determine a multispecies corridor in a fragmented landscape

    Science.gov (United States)

    Zurano, Juan Pablo; Selleski, Nicole; Schneider, Rosio G.

    2017-01-01

    types independent of the degree of legal protection. These data, used with multifocal GIS analyses, balance the varying degrees of overlap and unique properties among them, allowing comprehensive conservation strategies to be developed relatively rapidly. Our comprehensive approach serves as a model for other regions faced with habitat loss and a lack of data. The five carnivores focused on in our study have wide ranges, so the results from this study can be expanded and combined with those from surrounding countries, with analyses at the species or community level. PMID:28841692

  18. Comparison of optical-model and Lane-model analyses of sub-Coulomb protons on 92,94Zr

    International Nuclear Information System (INIS)

    Schrils, R.; Flynn, D.S.; Hershberger, R.L.; Gabbard, F.

    1979-01-01

    Accurate proton elastic-scattering cross sections were measured with enriched targets of 92,94Zr from E_p = 2.0 to 6.5 MeV. The elastic-scattering cross sections, together with absorption cross sections, were analyzed with a Lane model which employed the optical potential of Johnson et al. The resulting parameters were compared with those obtained with a single-channel optical model and negligible differences were found. Significant differences between the 92Zr and 94Zr real diffusenesses resulted from the inclusion of the (p,p) data in the analyses.

  19. Computational modeling and statistical analyses on individual contact rate and exposure to disease in complex and confined transportation hubs

    Science.gov (United States)

    Wang, W. L.; Tsui, K. L.; Lo, S. M.; Liu, S. B.

    2018-01-01

    Crowded transportation hubs such as metro stations are thought to be ideal places for the development and spread of epidemics. However, because of their complex spatial layouts and confined environments with large numbers of highly mobile individuals, it is difficult to quantify human contacts in such settings, and disease-spreading dynamics there have been little explored in previous studies. Owing to the heterogeneity and dynamic nature of human interactions, a growing number of studies have demonstrated the importance of contact distance and contact duration for transmission probabilities. In this study, we show how detailed information on contact and exposure patterns can be obtained through statistical analyses of microscopic crowd simulation data. Specifically, a pedestrian simulation model, CityFlow, was employed to reproduce individuals' movements in a metro station based on site survey data, and values and distributions of individual contact rate and exposure in different simulation cases were obtained and analyzed. Interestingly, a Weibull distribution fitted the histogram of individual-based exposure in each case very well. Moreover, we found that both individual contact rate and exposure had a linear relationship with the average crowd density of the environment. The results obtained in this paper can serve as a reference for epidemic studies in complex and confined transportation hubs and help refine existing disease-spreading models.
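
    A small sketch of the two analysis steps mentioned above: fitting a Weibull distribution to (synthetic) individual exposure values and regressing mean exposure on crowd density. None of the numbers come from the CityFlow simulations.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

# Synthetic individual exposure values for one simulation case (not CityFlow output).
exposure = stats.weibull_min.rvs(c=1.6, scale=4.0, size=1000, random_state=rng)

# Fit a Weibull distribution (location fixed at 0) and test the goodness of fit.
shape, loc, scale = stats.weibull_min.fit(exposure, floc=0)
ks = stats.kstest(exposure, "weibull_min", args=(shape, loc, scale))
print(f"Weibull shape={shape:.2f}, scale={scale:.2f}, KS p-value={ks.pvalue:.3f}")

# Illustrative check of a linear relationship between mean exposure and crowd density.
density = np.array([0.5, 1.0, 1.5, 2.0, 2.5])            # persons per m^2 (made up)
mean_exposure = 1.2 + 2.8 * density + rng.normal(0, 0.2, density.size)
slope, intercept, r, p, se = stats.linregress(density, mean_exposure)
print(f"mean exposure ~ {intercept:.2f} + {slope:.2f} * density (R^2 = {r**2:.3f})")
```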

  20. Treatment of visceral leishmaniasis: model-based analyses on the spread of antimony-resistant L. donovani in Bihar, India.

    Directory of Open Access Journals (Sweden)

    Anette Stauch

    Full Text Available BACKGROUND: Pentavalent antimonials have been the mainstay of antileishmanial therapy for decades, but increasing failure rates under antimonial treatment have challenged further use of these drugs in the Indian subcontinent. Experimental evidence has suggested that parasites which are resistant to antimonials survive better than sensitive ones even in the absence of antimonial treatment. METHODS AND FINDINGS: We use simulation studies based on a mathematical L. donovani transmission model to identify parameters which can explain why treatment failure rates under antimonial treatment increased up to 65% in Bihar between 1980 and 1997. Model analyses suggest that resistance to treatment alone cannot explain the observed treatment failure rates. We explore two hypotheses referring to an increased fitness of antimony-resistant parasites: the additional fitness is (i) disease-related, by causing more clinical cases (higher pathogenicity) or more severe disease (higher virulence), or (ii) transmission-related, by increasing the transmissibility from sand flies to humans or vice versa. CONCLUSIONS: Both hypotheses can potentially explain the Bihar observations. However, increased transmissibility as an explanation appears more plausible because it can occur in the background of asymptomatically transmitted infection, whereas disease-related factors would most probably be observable. Irrespective of the cause of the fitness advantage, parasites with higher fitness will eventually replace sensitive parasites, even if antimonials are replaced by another drug.
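
    The sketch below is a deliberately simplified two-strain competition model, not the published transmission model: the resistant strain is assumed to have a higher transmission rate, and only sensitive infections respond to treatment. All rates are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Minimal two-strain competition sketch: hosts infected with antimony-sensitive (Is) or
# resistant (Ir) parasites; the resistant strain transmits more effectively, and only
# sensitive infections are cleared by antimonial treatment. All rates are made up.
beta_s, beta_r = 0.30, 0.36      # transmission rates (resistant strain fitter)
gamma = 0.05                      # natural recovery
tau = 0.10                        # treatment-induced clearance (sensitive strain only)

def rhs(t, y):
    S, Is, Ir = y
    new_s = beta_s * S * Is
    new_r = beta_r * S * Ir
    return [-new_s - new_r + gamma * (Is + Ir) + tau * Is,
            new_s - (gamma + tau) * Is,
            new_r - gamma * Ir]

sol = solve_ivp(rhs, (0, 400), [0.98, 0.015, 0.005])
S, Is, Ir = sol.y[:, -1]
print(f"final resistant share of infections: {Ir / (Is + Ir):.2%}")
```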

  1. Bayesian salamanders: analysing the demography of an underground population of the European plethodontid Speleomantes strinatii with state-space modelling

    Directory of Open Access Journals (Sweden)

    Salvidio Sebastiano

    2010-02-01

    Full Text Available Abstract Background It has been suggested that Plethodontid salamanders are excellent candidates for indicating ecosystem health. However, detailed, long-term data sets of their populations are rare, limiting our understanding of the demographic processes underlying their population fluctuations. Here we present a demographic analysis based on a 1996-2008 data set on an underground population of Speleomantes strinatii (Aellen) in NW Italy. We utilised a Bayesian state-space approach allowing us to parameterise a stage-structured Lefkovitch model. We used all the available population data from annual temporary removal experiments to provide us with the baseline data on the numbers of juveniles, subadults and adult males and females present at any given time. Results Sampling the posterior chains of the converged state-space model gives us the likelihood distributions of the state-specific demographic rates and the associated uncertainty of these estimates. Analysing the resulting parameterised Lefkovitch matrices shows that the population growth rate is very close to 1, and that at population equilibrium we expect half of the individuals present to be adults of reproductive age, which is what we also observe in the data. Elasticity analysis shows that adult survival is the key determinant for population growth. Conclusion This analysis demonstrates how an understanding of population demography can be gained from structured population data even in a case where following marked individuals over their whole lifespan is not practical.
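
    The elasticity analysis mentioned above can be reproduced in miniature from any parameterised Lefkovitch matrix. The matrix entries below are invented (they are not the posterior estimates of the study); the elasticity formula e_ij = (a_ij / lambda) * v_i * w_j / <v, w> is the standard one.

```python
import numpy as np

# Hypothetical three-stage Lefkovitch matrix (juvenile, subadult, adult):
# columns are current stage, rows are next-year stage; the top-right entry is adult fecundity.
A = np.array([
    [0.00, 0.00, 0.60],   # reproduction by adults
    [0.45, 0.30, 0.00],   # juvenile survival/growth, subadult stasis
    [0.00, 0.50, 0.85],   # subadult maturation, adult survival
])

eigvals, right = np.linalg.eig(A)
k = np.argmax(eigvals.real)
lam = eigvals[k].real                        # asymptotic population growth rate
w = np.abs(right[:, k].real); w /= w.sum()   # stable stage distribution

eigvals_t, left = np.linalg.eig(A.T)
kt = np.argmin(np.abs(eigvals_t - eigvals[k]))
v = np.abs(left[:, kt].real)                 # reproductive values

# Sensitivities s_ij = v_i * w_j / <v, w>; elasticities e_ij = (a_ij / lambda) * s_ij.
S = np.outer(v, w) / (v @ w)
E = (A / lam) * S
print("lambda =", round(lam, 3))
print("elasticity matrix:\n", np.round(E, 3))
print("adult survival elasticity:", round(E[2, 2], 3))
```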

  2. Round-robin pretest analyses of a 1:6-scale reinforced concrete containment model subject to static internal pressurization

    International Nuclear Information System (INIS)

    Clauss, D.B.

    1987-05-01

    Analyses of a 1:6-scale reinforced concrete containment model that will be tested to failure at Sandia National Laboratories in the spring of 1987 were conducted by the following organizations in the United States and Europe: Sandia National Laboratories (USA), Argonne National Laboratory (USA), Electric Power Research Institute (USA), Commissariat a L'Energie Atomique (France), HM Nuclear Installations Inspectorate (UK), Comitato Nazionale per la ricerca e per lo sviluppo dell'Energia Nucleare e delle Energie Alternative (Italy), UK Atomic Energy Authority, Safety and Reliability Directorate (UK), Gesellschaft fuer Reaktorsicherheit (FRG), Brookhaven National Laboratory (USA), and Central Electricity Generating Board (UK). Each organization was supplied with a standard information package, which included construction drawings and actual material properties for most of the materials used in the model. Each organization worked independently using its own analytical methods. This report includes descriptions of the various analytical approaches and pretest predictions submitted by each organization. Significant milestones that occur with increasing pressure, such as damage to the concrete (cracking and crushing) and yielding of the steel components, and the failure pressure (capacity) and failure mechanism are described. Analytical predictions for pressure histories of strain in the liner and rebar and displacements are compared at locations where experimental results will be available after the test. Thus, these predictions can be compared to one another and to experimental results after the test.

  3. Longitudinal changes in telomere length and associated genetic parameters in dairy cattle analysed using random regression models.

    Directory of Open Access Journals (Sweden)

    Luise A Seeker

    Full Text Available Telomeres cap the ends of linear chromosomes and shorten with age in many organisms. In humans short telomeres have been linked to morbidity and mortality. With the accumulation of longitudinal datasets the focus shifts from investigating telomere length (TL) to exploring TL change within individuals over time. Some studies indicate that the speed of telomere attrition is predictive of future disease. The objectives of the present study were to (1) characterize the change in bovine relative leukocyte TL (RLTL) across the lifetime in Holstein Friesian dairy cattle, (2) estimate genetic parameters of RLTL over time and (3) investigate the association of differences in individual RLTL profiles with productive lifespan. RLTL measurements were analysed using Legendre polynomials in a random regression model to describe TL profiles and genetic variance over age. The analyses were based on 1,328 repeated RLTL measurements of 308 female Holstein Friesian dairy cattle. A quadratic Legendre polynomial was fitted to the fixed effect of age in months and to the random effect of the animal identity. Changes in RLTL, heritability and within-trait genetic correlation along the age trajectory were calculated and illustrated. At a population level, the relationship between RLTL and age was described by a positive quadratic function. Individuals varied significantly regarding the direction and amount of RLTL change over life. The heritability of RLTL ranged from 0.36 to 0.47 (SE = 0.05-0.08) and remained statistically unchanged over time. The genetic correlation of RLTL at birth with measurements later in life decreased with the time interval between samplings from near unity to 0.69, indicating that TL later in life might be regulated by different genes than TL early in life. Even though animals differed in their RLTL profiles significantly, those differences were not correlated with productive lifespan (p = 0.954).
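
    The sketch below shows, with synthetic data, how a quadratic Legendre basis over a standardised age trajectory is constructed and how the population-level (fixed-effect) curve can be recovered. The animal-specific random regressions and genetic (co)variances of the actual study would require a mixed-model/REML fit, which is not attempted here; all values are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in for repeated RLTL measurements: 30 animals measured yearly from 12 to 108 months.
age = np.tile(np.arange(12, 120, 12), 30)
animal = np.repeat(np.arange(30), 9)
x = 2 * (age - age.min()) / (age.max() - age.min()) - 1      # standardise age to [-1, 1]

# Quadratic Legendre basis for the age trajectory (the same basis would carry the
# fixed and animal-specific random regressions in the model described above).
Phi = np.polynomial.legendre.legvander(x, 2)                 # columns: P0, P1, P2

# Simulate a shallow population-level quadratic plus animal-specific intercept deviations.
true_beta = np.array([1.0, 0.05, 0.08])
u = rng.normal(0, 0.05, 30)                                  # animal random intercepts
y = Phi @ true_beta + u[animal] + rng.normal(0, 0.03, len(x))

# Population-level (fixed-effect) curve recovered by ordinary least squares on the basis.
beta_hat, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print("estimated Legendre coefficients:", np.round(beta_hat, 3))
```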

  4. A Meta-Meta-Analysis: Empirical Review of Statistical Power, Type I Error Rates, Effect Sizes, and Model Selection of Meta-Analyses Published in Psychology

    Science.gov (United States)

    Cafri, Guy; Kromrey, Jeffrey D.; Brannick, Michael T.

    2010-01-01

    This article uses meta-analyses published in "Psychological Bulletin" from 1995 to 2005 to describe meta-analyses in psychology, including examination of statistical power, Type I errors resulting from multiple comparisons, and model choice. Retrospective power estimates indicated that univariate categorical and continuous moderators, individual…

  5. Statistical Analyses of High-Resolution Aircraft and Satellite Observations of Sea Ice: Applications for Improving Model Simulations

    Science.gov (United States)

    Farrell, S. L.; Kurtz, N. T.; Richter-Menge, J.; Harbeck, J. P.; Onana, V.

    2012-12-01

    Satellite-derived estimates of ice thickness and observations of ice extent over the last decade point to a downward trend in the basin-scale ice volume of the Arctic Ocean. This loss has broad-ranging impacts on the regional climate and ecosystems, as well as implications for regional infrastructure, marine navigation, national security, and resource exploration. New observational datasets at small spatial and temporal scales are now required to improve our understanding of physical processes occurring within the ice pack and advance parameterizations in the next generation of numerical sea-ice models. High-resolution airborne and satellite observations of the sea ice are now available at meter-scale resolution or better that provide new details on the properties and morphology of the ice pack across basin scales. For example the NASA IceBridge airborne campaign routinely surveys the sea ice of the Arctic and Southern Oceans with an advanced sensor suite including laser and radar altimeters and digital cameras that together provide high-resolution measurements of sea ice freeboard, thickness, snow depth and lead distribution. Here we present statistical analyses of the ice pack primarily derived from the following IceBridge instruments: the Digital Mapping System (DMS), a nadir-looking, high-resolution digital camera; the Airborne Topographic Mapper, a scanning lidar; and the University of Kansas snow radar, a novel instrument designed to estimate snow depth on sea ice. Together these instruments provide data from which a wide range of sea ice properties may be derived. We provide statistics on lead distribution and spacing, lead width and area, floe size and distance between floes, as well as ridge height, frequency and distribution. The goals of this study are to (i) identify unique statistics that can be used to describe the characteristics of specific ice regions, for example first-year/multi-year ice, diffuse ice edge/consolidated ice pack, and convergent

  6. On groundwater flow modelling in safety analyses of spent fuel disposal. A comparative study with emphasis on boundary conditions

    Energy Technology Data Exchange (ETDEWEB)

    Jussila, P

    1999-11-01

    Modelling groundwater flow is an essential part of the safety assessment of spent fuel disposal because moving groundwater makes a physical connection between a geological repository and the biosphere. Some of the common approaches to model groundwater flow in bedrock are equivalent porous continuum (EC), stochastic continuum and various fracture network concepts. The actual flow system is complex and measuring data are limited. Multiple distinct approaches and models, alternative scenarios as well as calibration and sensitivity analyses are used to give confidence in the results of the calculations. The correctness and orders of magnitude of results of such complex research can be assessed by comparing them to the results of simplified and robust approaches. The first part of this study is a survey of the objects, contents and methods of the groundwater flow modelling performed in the safety assessment of the spent fuel disposal in Finland and Sweden. The most apparent difference of the Swedish studies compared to the Finnish ones is the use of a wider range of models, which is enabled by the greater resources available in Sweden. The results of more comprehensive approaches provided by international co-operation are very useful to give perspective to the results obtained in Finland. In the second part of this study, the influence of boundary conditions on the flow fields of a simple 2D model is examined. The assumptions and simplifications in this approach include e.g. the following: (1) the EC model is used, in which the 2-dimensional domain is considered a continuum of equivalent properties without fractures present, (2) the calculations are done for stationary fields, without sources or sinks present in the domain and with a constant density of the groundwater, (3) the repository is represented by an isotropic plate, the hydraulic conductivity of which is given fictitious values, (4) the hydraulic conductivity of rock is supposed to have an exponential

  7. Multiscale Thermohydrologic Model Analyses of Heterogeneity and Thermal-Loading Factors for the Proposed Repository at Yucca Mountain

    International Nuclear Information System (INIS)

    Glascoe, L.G.; Buscheck, T.A.; Gansemer, J.; Sun, Y.; Lee, K.

    2002-01-01

    The MultiScale ThermoHydrologic Model (MSTHM) predicts thermohydrologic (TH) conditions in emplacement drifts and the adjoining host rock throughout the proposed nuclear-waste repository at Yucca Mountain. The MSTHM is a computationally efficient approach that accounts for TH processes occurring at a scale of a few tens of centimeters around individual waste packages and emplacement drifts, and for heat flow at the multi-kilometer scale at Yucca Mountain. The modeling effort presented here is an early investigation of the repository and is simulated at a lower temperature mode and with a different panel loading than the repository currently being considered for license application. We present these recent lower temperature mode MSTHM simulations that address the influence of repository-scale thermal-conductivity heterogeneity and the influence of preclosure operational factors affecting thermal-loading conditions. We can now accommodate a complex repository layout with emplacement drifts lying in non-parallel planes using a superposition process that combines results from multiple mountain-scale submodels. This development, along with other improvements to the MSTHM, enables more rigorous analyses of preclosure operational factors. These improvements include the ability to (1) predict TH conditions on a drift-by-drift basis, (2) represent sequential emplacement of waste packages along the drifts, and (3) incorporate distance- and time-dependent heat-removal efficiency associated with drift ventilation. Alternative approaches to addressing repository-scale thermal-conductivity heterogeneity are investigated. We find that only one of the four MSTHM submodel types needs to incorporate thermal-conductivity heterogeneity. For a particular repository design, we find that the most influential parameters are (1) percolation-flux distribution, (2) thermal-conductivity heterogeneity within the host-rock units, (3) the sequencing of waste-package emplacement, and (4) the

  8. Water flow experiments and analyses on the cross-flow type mercury target model with the flow guide plates

    CERN Document Server

    Haga, K; Kaminaga, M; Hino, R

    2001-01-01

    A mercury target is used in the spallation neutron source driven by a high-intensity proton accelerator. In this study, the effectiveness of the cross-flow type mercury target structure was evaluated experimentally and analytically. Prior to the experiment, the mercury flow field and the temperature distribution in the target container were analyzed assuming a proton beam energy and power of 1.5 GeV and 5 MW, respectively, and the feasibility of the cross-flow type target was evaluated. Then the average water flow velocity field in the target mock-up model, which was fabricated from Plexiglass for a water experiment, was measured at room temperature using the PIV technique. Water flow analyses were conducted and the analytical results were compared with the experimental results. The experimental results showed that the cross-flow could be realized in most of the proton beam path area and the analytical result of the water flow velocity field showed good correspondence to the experimental results in the case w...

  9. Discontinuation of continuous renal replacement therapy: A post hoc analysis of a prospective multicenter observational study

    NARCIS (Netherlands)

    Uchino, Shigehiko; Bellomo, Rinaldo; Morimatsu, Hiroshi; Morgera, Stanislao; Schetz, Miet; Tan, Ian; Bouman, Catherine; Macedo, Ettiene; Gibney, Noel; Tolwani, Ashita; Oudemans-van Straaten, Heleen; Ronco, Claudio; Kellum, John A.

    2009-01-01

    Objectives: To describe current practice for the discontinuation of continuous renal replacement therapy in a multinational setting and to identify variables associated with successful discontinuation. The approach to discontinue continuous renal replacement therapy may affect patient outcomes.

  10. Telephone audit for monitoring stroke unit facilities: a post hoc analysis from PROSIT study.

    Science.gov (United States)

    Candelise, Livia; Gattinoni, Monica; Bersano, Anna

    2015-01-01

    Although several valid approaches exist to measure the number and the quality of acute stroke units, only a few studies have tested their reliability. This study is aimed at establishing whether the telephone administration of the PROject of Stroke unIt ITaly (PROSIT) audit questionnaire is reliable compared with direct face-to-face interview. Forty-three medical leaders in charge of in-hospital stroke services were interviewed twice using the same PROSIT questionnaire with 2 different modalities. First, the interviewers approached the medical leaders by telephone. Thereafter, they went to the hospital site and performed a direct face-to-face interview. Six independent pairs of trained researchers conducted the audit interviews. The degree of intermodality agreement was measured with the kappa statistic. We found perfect agreement for stroke unit identification between the 2 different audit modalities (K = 1.00; standard error [SE], 1.525). The agreement was also very good for stroke dedicated beds (K = 1.00; SE, 1.525) and dedicated personnel (K = 1.00; SE, 1.525), which are the 2 components of stroke unit definition. The agreement was lower for the declared in-use processes of care and the availability of diagnostic investigations. The telephone audit can be used for monitoring stroke unit structures. It is more rapid, less expensive, and can repeatedly be used at appropriate intervals. However, a reliable description of the process-of-care and diagnostic-investigation indicators should be obtained by either a local site audit visit or a prospective stroke register based on individual patient data. Copyright © 2015 National Stroke Association. Published by Elsevier Inc. All rights reserved.
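
    Inter-modality agreement of the kind reported above is typically quantified with Cohen's kappa; a toy computation on hypothetical paired answers from 43 centres is sketched below (the data are invented, not the PROSIT responses).

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(5)

# Hypothetical paired yes/no answers from 43 centres: telephone vs face-to-face interview.
face_to_face = rng.integers(0, 2, 43)
telephone = face_to_face.copy()
telephone[rng.choice(43, size=4, replace=False)] ^= 1   # introduce a few disagreements

kappa = cohen_kappa_score(telephone, face_to_face)
agreement = np.mean(telephone == face_to_face)
print(f"raw agreement = {agreement:.2f}, Cohen's kappa = {kappa:.2f}")
```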

  11. Post Hoc Analysis of