WorldWideScience

Sample records for model significantly overestimates

  1. Resource overestimates

    Indian Academy of Sciences (India)

    Extensive field studies revealed over-estimates of bamboo stocks by a factor of ten. Forest compartments that had been completely clear felled to set up WCPM still showed large stocks because ...

  2. Head multidetector computed tomography: emergency medicine physicians overestimate the pretest probability and legal risk of significant findings.

    Science.gov (United States)

    Baskerville, Jerry Ray; Herrick, John

    2012-02-01

    This study focuses on clinically assigned prospective estimated pretest probability and pretest perception of legal risk as independent variables in the ordering of multidetector computed tomographic (MDCT) head scans. Our primary aim is to measure the association between pretest probability of a significant finding and pretest perception of legal risk. Secondarily, we measure the percentage of MDCT scans that physicians would not order if there was no legal risk. This study is a prospective, cross-sectional, descriptive analysis of patients 18 years and older for whom emergency medicine physicians ordered a head MDCT. We collected a sample of 138 patients subjected to head MDCT scans. The prevalence of a significant finding in our population was 6%, yet the pretest probability expectation of a significant finding was 33%. The legal risk presumed was even more dramatic at 54%. These data support the hypothesis that physicians presume the legal risk to be significantly higher than the risk of a significant finding. A total of 21% of patients (95% confidence interval, ±5.9%) would not have been subjected to MDCT if there was no legal risk. Physicians overestimated the probability that the computed tomographic scan would yield a significant result and indicated an even greater perceived medicolegal risk if the scan was not obtained. Physician test-ordering behavior is complex, and our study queries pertinent aspects of MDCT testing. The magnification of legal risk vs the pretest probability of a significant finding is demonstrated. Physicians significantly overestimated the pretest probability of a significant finding on head MDCT scans and the presumed legal risk. Copyright © 2012 Elsevier Inc. All rights reserved.

  3. Predictors and overestimation of recalled mobile phone use among children and adolescents.

    Science.gov (United States)

    Aydin, Denis; Feychting, Maria; Schüz, Joachim; Andersen, Tina Veje; Poulsen, Aslak Harbo; Prochazka, Michaela; Klæboe, Lars; Kuehni, Claudia E; Tynes, Tore; Röösli, Martin

    2011-12-01

    A growing body of literature addresses possible health effects of mobile phone use in children and adolescents by relying on the study participants' retrospective reconstruction of mobile phone use. In this study, we used data from the international case-control study CEFALO to compare self-reported with objectively operator-recorded mobile phone use. The aim of the study was to assess predictors of the level of mobile phone use as well as factors associated with overestimating one's own mobile phone use. For cumulative number and duration of calls, as well as for time since first subscription, we calculated the ratio of self-reported to operator-recorded mobile phone use. We used multiple linear regression models to assess possible predictors of the average number and duration of calls per day, and logistic regression models to assess possible predictors of overestimation. The cumulative number and duration of calls, as well as the time since first subscription, were overestimated on average by the study participants. The likelihood of overestimating the number and duration of calls was not significantly different for controls compared to cases (OR=1.1, 95%-CI: 0.5 to 2.5 and OR=1.9, 95%-CI: 0.85 to 4.3, respectively). However, the likelihood of overestimating was associated with other health-related factors such as age and sex. As a consequence, such factors act as confounders in studies relying solely on self-reported mobile phone use and have to be considered in the analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Overestimating resource value and its effects on fighting decisions.

    Directory of Open Access Journals (Sweden)

    Lee Alan Dugatkin

    Much work in behavioral ecology has shown that animals fight over resources such as food, and that they make strategic decisions about when to engage in such fights. Here, we examine the evolution of one, heretofore unexamined, component of that strategic decision about whether to fight for a resource. We present the results of a computer simulation that examined the evolution of over- or underestimating the value of a resource (food) as a function of an individual's current hunger level. In our model, animals fought for food when they perceived their current food level to be below the mean for the environment. We considered seven strategies for estimating food value: (1) always underestimate food value, (2) always overestimate food value, (3) never over- or underestimate food value, (4) overestimate food value when hungry, (5) underestimate food value when hungry, (6) overestimate food value when relatively satiated, and (7) underestimate food value when relatively satiated. We first competed all seven strategies against each other when they began at approximately equal frequencies. In such a competition, two strategies, "always overestimate food value" and "overestimate food value when hungry", were very successful. We next competed each of these strategies against the default strategy of "never over- or underestimate", when the default strategy was set at 99% of the population. Again, the strategies of "always overestimate food value" and "overestimate food value when hungry" fared well. Our results suggest that overestimating food value when deciding whether to fight should be favored by natural selection.

  5. Predictive Validity of Explicit and Implicit Threat Overestimation in Contamination Fear

    Science.gov (United States)

    Green, Jennifer S.; Teachman, Bethany A.

    2012-01-01

    We examined the predictive validity of explicit and implicit measures of threat overestimation in relation to contamination-fear outcomes using structural equation modeling. Undergraduate students high in contamination fear (N = 56) completed explicit measures of contamination threat likelihood and severity, as well as looming vulnerability cognitions, in addition to an implicit measure of danger associations with potential contaminants. Participants also completed measures of contamination-fear symptoms, as well as subjective distress and avoidance during a behavioral avoidance task, and state looming vulnerability cognitions during an exposure task. The latent explicit (but not implicit) threat overestimation variable was a significant and unique predictor of contamination-fear symptoms and self-reported affective and cognitive facets of contamination fear. In contrast, the implicit (but not explicit) latent measure predicted behavioral avoidance (at the level of a trend). Results are discussed in terms of the differential predictive validity of implicit versus explicit markers of threat processing and multiple fear response systems. PMID:24073390

  6. Reassessment of soil erosion on the Chinese loess plateau: were rates overestimated?

    Science.gov (United States)

    Zhao, Jianlin; Govers, Gerard

    2014-05-01

    Several studies have estimated regional soil erosion rates (rill and interrill erosion) on the Chinese loess plateau using an erosion model such as the RUSLE (e.g. Fu et al., 2011; Sun et al., 2013). However, the question may be asked whether such estimates are realistic: studies have shown that the use of models for large areas may lead to significant overestimations (Quinton et al., 2010). In this study, soil erosion rates on the Chinese loess plateau were reevaluated by using field-measured soil erosion data from erosion plots (216 plots and 1380 plot-years) in combination with a careful extrapolation procedure. Data analysis showed that the relationship between slope and erosion rate on arable land could be well described by erosion-slope relationships reported in the literature (Nearing, 1997). The increase of average erosion rate with slope length was clearly degressive, as could be expected from earlier research. However, for plots with permanent vegetation (grassland, shrub, forest) no relationship was found between erosion rates and slope gradient and/or slope length. This is important, as it implies that spatial variations of erosion on permanently vegetated areas cannot be modeled using topographical functions derived from observations on arable land; applying relationships developed for arable land will lead to a significant overestimation of soil erosion rates. Based on our analysis, we estimate that soil erosion averages ca. 6.78 t ha-1 yr-1 over the whole loess plateau, resulting in a total sediment mobilisation of ca. 0.38 Gt yr-1. Erosion rates on arable land average ca. 15.10 t ha-1 yr-1. These estimates are 2 to 3 times lower than previously published estimates. The main reason why previous estimates are likely to be too high is that the values of (R)USLE parameters such as the K, P and LS factors were overestimated. Overestimations of the K factor are due to reliance on nomograph calculations, resulting ...
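    The (R)USLE structure referenced above makes clear why inflated parameter values propagate so strongly: the factors multiply, so errors in K and LS compound rather than add. A minimal sketch, with purely hypothetical parameter values (none taken from the study):

```python
# Illustrative sketch of the (R)USLE soil-loss equation, A = R * K * LS * C * P.
# All parameter values below are hypothetical, chosen only to show how
# overestimating K and LS inflates the predicted erosion rate multiplicatively.

def rusle_soil_loss(R, K, LS, C, P):
    """Annual soil loss A (t ha-1 yr-1) from rainfall erosivity R, soil
    erodibility K, slope length/steepness LS, cover C and practice P."""
    return R * K * LS * C * P

# Hypothetical "true" parameters for a vegetated plot ...
true_A = rusle_soil_loss(R=100, K=0.25, LS=2.0, C=0.125, P=1.0)     # 6.25 t/ha/yr

# ... and the same plot with K and LS each overestimated by 50%.
overest_A = rusle_soil_loss(R=100, K=0.375, LS=3.0, C=0.125, P=1.0)  # 14.0625 t/ha/yr

print(true_A, overest_A, overest_A / true_A)  # the two errors compound to 2.25x
```

Because the factors enter as a product, two 50% overestimates yield a 125% overestimate of A, consistent with regional estimates ending up 2 to 3 times too high.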

  7. Calcified Plaque of Coronary Artery: Factors Influencing Overestimation of Coronary Artery Stenosis on Coronary CT Angiography

    International Nuclear Information System (INIS)

    Kim, Mok Hee; Kim, Yun Hyeon; Choi, Song; Seon, Hyun Ju; Jeong, Gwang Woo; Park, Jin Gyoon; Kang, Heoung Keun; Ko, Joon Seok

    2010-01-01

    To assess the influence of calcified plaque characteristics on the overestimation of coronary arterial stenosis on coronary CT angiography (CCTA). The study included 271 coronary arteries with calcified plaques identified by CCTA, drawn from 928 coronary arteries in 232 patients who underwent both CCTA and invasive coronary angiography (ICA). Individual coronary arteries were classified into two groups by agreement between the degrees of stenosis on CCTA and ICA: 1) group A, arteries with concordant CCTA and ICA results, and 2) group B, arteries in which CCTA overestimated stenosis relative to ICA. Parameters including total calcium score, calcium score of an individual coronary artery, calcium burden number of an individual coronary artery, and the density of each calcified plaque (calcium score / number of calcium burdens) for each individual coronary artery were compared between the two groups. Of the 271 coronary arteries, 164 (60.5%) were overestimated on CCTA. The left anterior descending artery (LAD) had a significantly lower rate of overestimation (47.1%) compared to the other coronary arteries (p=0.001). No significant differences in total calcium score, calcium score of individual coronary arteries, or the density of each calcified plaque were observed between the two groups. However, a decreasing tendency in the rate of overestimation on CCTA was observed with increasing calcium burden of individual coronary arteries (p<0.05). The evaluation suggests that the degree of coronary arterial stenosis tends to be overestimated in the presence of calcified plaques on CCTA. However, the rate of overestimation was not significantly influenced by total calcium score, calcium score of individual coronary arteries, or the density of each calcified plaque.

  8. Prediction Equations Overestimate the Energy Requirements More for Obesity-Susceptible Individuals.

    Science.gov (United States)

    McLay-Cooke, Rebecca T; Gray, Andrew R; Jones, Lynnette M; Taylor, Rachael W; Skidmore, Paula M L; Brown, Rachel C

    2017-09-13

    Predictive equations to estimate resting metabolic rate (RMR) are often used in dietary counseling and by online apps to set energy intake goals for weight loss. It is critical to know whether such equations are appropriate for those susceptible to obesity. We measured RMR by indirect calorimetry after an overnight fast in 26 obesity-susceptible (OSI) and 30 obesity-resistant (ORI) individuals, identified using a simple 6-item screening tool. Predicted RMR was calculated using the FAO/WHO/UNU (Food and Agricultural Organisation/World Health Organisation/United Nations University), Oxford and Mifflin-St Jeor equations. Absolute measured RMR did not differ significantly between OSI and ORI (6339 vs. 5893 kJ·d-1, p = 0.313). All three prediction equations overestimated RMR for both OSI and ORI when measured RMR was ≤5000 kJ·d-1. For measured RMR ≤7000 kJ·d-1 there was statistically significant evidence that the equations overestimate RMR to a greater extent for those classified as obesity-susceptible, with biases ranging from around 10% to nearly 30% depending on the equation. The use of prediction equations may overestimate RMR and energy requirements, particularly in those who self-identify as being susceptible to obesity, which has implications for effective weight management.
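    The Mifflin-St Jeor equation named above is simple enough to sketch. The implementation below uses the published kcal/day form converted to kJ/day; the example subject is hypothetical and chosen only to land in the range of the measured RMRs reported in the abstract:

```python
def mifflin_st_jeor_rmr_kj(weight_kg, height_cm, age_yr, male):
    """Mifflin-St Jeor resting metabolic rate, converted from kcal/day to kJ/day.
    kcal/day = 10*weight(kg) + 6.25*height(cm) - 5*age(yr) + 5 (men) / -161 (women)."""
    kcal = 10.0 * weight_kg + 6.25 * height_cm - 5.0 * age_yr + (5.0 if male else -161.0)
    return kcal * 4.184  # 1 kcal = 4.184 kJ

# Hypothetical subject: 70 kg, 170 cm, 30-year-old woman.
rmr = mifflin_st_jeor_rmr_kj(70, 170, 30, male=False)
print(round(rmr))  # 6073 kJ/day, comparable to the measured group means above
```

Note the equation has no term for obesity susceptibility, body composition, or metabolic adaptation, which is one plausible reason a fixed formula can be biased for particular subgroups.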

  9. Total body surface area overestimation at referring institutions in children transferred to a burn center.

    Science.gov (United States)

    Swords, Douglas S; Hadley, Edmund D; Swett, Katrina R; Pranikoff, Thomas

    2015-01-01

    Total body surface area (TBSA) burned is a powerful descriptor of burn severity and influences the volume of resuscitation required in burn patients. The incidence and severity of TBSA overestimation by referring institutions (RIs) in children transferred to a burn center (BC) are unclear. The association between TBSA overestimation and overresuscitation is unknown as is that between TBSA overestimation and outcome. The trauma registry at a BC was queried over 7.25 years for children presenting with burns. TBSA estimate at RIs and BC, total fluid volume given before arrival at a BC, demographic variables, and clinical variables were reviewed. Nearly 20 per cent of children arrived from RIs without TBSA estimation. Nearly 50 per cent were overestimated by 5 per cent or greater TBSA and burn sizes were overestimated by up to 44 per cent TBSA. Average TBSA measured at BC was 9.5 ± 8.3 per cent compared with 15.5 ± 11.8 per cent as measured at RIs (P < 0.0001). Burns between 10 and 19.9 per cent TBSA were overestimated most often and by the greatest amounts. There was a statistically significant relationship between overestimation of TBSA by 5 per cent or greater and overresuscitation by 10 mL/kg or greater (P = 0.02). No patient demographic or clinical factors were associated with TBSA overestimation. Education efforts aimed at emergency department physicians regarding the importance of always calculating TBSA as well as the mechanics of TBSA estimation and calculating resuscitation volume are needed. Further studies should evaluate the association of TBSA overestimation by RIs with adverse outcomes and complications in the burned child.
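    To see how a TBSA overestimate translates directly into overresuscitation, consider the widely used Parkland formula (not named in the abstract; shown here only as an illustrative resuscitation guideline): 24-hour crystalloid volume = 4 mL × weight (kg) × %TBSA. The patient below is hypothetical.

```python
def parkland_total_ml(weight_kg, tbsa_pct):
    """Parkland formula: 24-hour crystalloid volume = 4 mL x kg x %TBSA."""
    return 4.0 * weight_kg * tbsa_pct

# A hypothetical 20 kg child: true burn 10% TBSA, referred with a
# 5-point overestimate (15% TBSA), the threshold discussed above.
true_vol = parkland_total_ml(20, 10)   # 800 mL
over_vol = parkland_total_ml(20, 15)   # 1200 mL

excess_per_kg = (over_vol - true_vol) / 20
print(excess_per_kg)  # 20.0 mL/kg, double the 10 mL/kg overresuscitation cutoff above
```

Because the formula is linear in %TBSA, every 1-point TBSA overestimate adds a fixed 4 mL/kg of fluid, so the 5-point threshold used in the study already implies 20 mL/kg of excess volume.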

  10. Overestimation of own body weights in female university students: associations with lifestyles, weight control behaviors and depression.

    Science.gov (United States)

    Kim, Miso; Lee, Hongmie

    2010-12-01

    The study aimed to analyze the lifestyles, weight control behaviors, dietary habits, and depression of female university students. The subjects were 532 students from 8 universities located in 4 provinces in Korea. According to percent ideal body weight based on self-reported height and weight, 33 (6.4%), 181 (34.0%), 283 (53.2%), 22 (4.1%) and 13 (2.5%) were severely underweight, underweight, normal, overweight and obese, respectively. As many as 64.1% overestimated, and only 2.4% underestimated, their body weight status. Six overweight subjects were excluded from the overestimation group for the purpose of this study, so that the overestimation group consisted of only underweight and normal-weight subjects. Compared to those from the normal-perception group, significantly more subjects from the overestimation group were currently smoking (P = 0.017) and drank more often than once a week (P = 0.015), without any significant differences in dietary habits. Despite similar BMIs, subjects who overestimated their own weight status had significantly higher weight dissatisfaction (P = 0.000), obesity stress (P = 0.000), obsession to lose weight (P = 0.007) and depression (P = 0.018). Also, more of them wanted to lose weight (P = 0.000), checked their body weights more often than once a week (P = 0.025), had dieting experiences using 'reducing meal size' (P = 0.012), 'reducing snacks' (P = 0.042) and 'taking prescribed pills' (P = 0.032), and presented 'for a wider range of clothes selection' as the reason for weight loss (P = 0.039), although none was actually overweight or obese. Unlike the case with overestimating one's own weight, being overweight was associated with less drinking (P = 0.035), exercising more often (P = 0.001) and for longer (P = 0.001), and healthier reasons for weight control (P = 0.002), despite no differences in frequency of weighing and depression. The results showed that weight overestimation, independent of weight status ...

  11. Americans Still Overestimate Social Class Mobility: A Pre-Registered Self-Replication.

    Science.gov (United States)

    Kraus, Michael W

    2015-01-01

    Kraus and Tan (2015) hypothesized that Americans tend to overestimate social class mobility in society, and do so because they seek to protect the self. This paper reports a pre-registered exact replication of Study 3 from the original paper and finds, consistent with the original study, that Americans substantially overestimate social class mobility, that people provide greater overestimates when made while thinking of similar others, and that high perceived social class is related to greater overestimates. The current results provide additional evidence consistent with the idea that people overestimate class mobility to protect their beliefs in the promise of equality of opportunity. The discussion considers the utility of pre-registered self-replications as one tool for encouraging replication efforts and assessing the robustness of effect sizes.

  12. Americans Still Overestimate Social Class Mobility: A Pre-Registered Self-Replication

    Directory of Open Access Journals (Sweden)

    Michael W. Kraus

    2015-11-01

    Kraus and Tan (2015) hypothesized that Americans tend to overestimate social class mobility in society, and do so because they seek to protect the self. This paper reports a pre-registered exact replication of Study 3 from the original paper and finds, consistent with the original study, that Americans substantially overestimate social class mobility, that people provide greater overestimates when made while thinking of similar others, and that high perceived social class is related to greater overestimates. The current results provide additional evidence consistent with the idea that people overestimate class mobility to protect their beliefs in the promise of equality of opportunity. The discussion considers the utility of pre-registered self-replications as one tool for encouraging replication efforts and assessing the robustness of effect sizes.

  13. Factors associated with overestimation of asthma control: A cross-sectional study in Australia.

    Science.gov (United States)

    Bereznicki, Bonnie J; Chapman, Millicent P; Bereznicki, Luke R E

    2017-05-01

    To investigate actual and perceived disease control in Australians with asthma, and to identify factors associated with overestimation of asthma control. This was a cross-sectional study of Australian adults with asthma, who were recruited via Facebook to complete an online survey. The survey included basic demographic questions and validated tools assessing asthma knowledge, medication adherence, medicine beliefs, illness perception and asthma control. Items that measured symptoms and frequency of reliever medication use were compared to respondents' self-rating of their own asthma control. Predictors of overestimation of asthma control were determined using multivariate logistic regression. Of 2971 survey responses, 1950 (65.6%) were complete and eligible for inclusion. Overestimation of control was apparent in 45.9% of respondents. Factors independently associated with overestimation of asthma control included education level (OR = 0.755, 95% CI: 0.612-0.931, P = 0.009), asthma knowledge (OR = 0.942, 95% CI: 0.892-0.994, P = 0.029), total asthma control (OR = 0.842, 95% CI: 0.818-0.867, P < 0.001), believing asthma medication to be addictive (OR = 1.144, 95% CI: 1.017-1.287, P = 0.025), and increased feelings of control over asthma (OR = 1.261, 95% CI: 1.191-1.335, P < 0.001). Overestimation of asthma control remains a significant issue in Australians with asthma. The study highlights the importance of encouraging patients to express their feelings about asthma control and beliefs about medicines, and to be more forthcoming about their asthma symptoms. This would help to reveal any discrepancies between perceived and actual asthma control.

  14. Instantaneous-to-daily GPP upscaling schemes based on a coupled photosynthesis-stomatal conductance model: correcting the overestimation of GPP by directly using daily average meteorological inputs.

    Science.gov (United States)

    Wang, Fumin; Gonsamo, Alemu; Chen, Jing M; Black, T Andrew; Zhou, Bin

    2014-11-01

    Daily canopy photosynthesis is usually temporally upscaled from the instantaneous (i.e., seconds) photosynthesis rate. The nonlinear response of photosynthesis to meteorological variables makes the temporal scaling a significant challenge. In this study, two temporal upscaling schemes for daily photosynthesis, the integrated daily model (IDM) and the segmented daily model (SDM), are presented by considering the diurnal variations of meteorological variables based on a coupled photosynthesis-stomatal conductance model. The two models, as well as a simple average daily model (SADM) with daily average meteorological inputs, were validated using the tower-derived gross primary production (GPP) to assess their abilities in simulating daily photosynthesis. The results showed IDM closely followed the seasonal trend of the tower-derived GPP, with an average RMSE of 1.63 g C m-2 day-1 and an average Nash-Sutcliffe model efficiency coefficient (E) of 0.87. SDM performed similarly to IDM in GPP simulation but decreased the computation time by >66%. SADM overestimated daily GPP by about 15% during the growing season compared to IDM. Both IDM and SDM greatly decreased the overestimation by SADM, and improved the simulation of daily GPP by reducing the RMSE by 34% and 30%, respectively. The results indicated that IDM and SDM are useful temporal upscaling approaches, and both are superior to SADM in daily GPP simulation because they take into account the diurnally varying responses of photosynthesis to meteorological variables. SDM is computationally more efficient, and therefore more suitable for long-term and large-scale GPP simulations.
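    The SADM overestimation described above is an instance of Jensen's inequality: photosynthesis saturates with light, so evaluating a concave model at the daily-average input exceeds averaging the instantaneous responses. A toy numeric illustration (not the paper's model; the curve and diurnal values are made up):

```python
# Why feeding daily-average meteorology into a nonlinear photosynthesis model
# overestimates daily GPP: for a concave (saturating) response, the response at
# the mean input exceeds the mean of the responses (Jensen's inequality).

def photosynthesis(par, p_max=30.0, k=400.0):
    """Toy saturating light-response curve (rectangular hyperbola); arbitrary units."""
    return p_max * par / (par + k)

hourly_par = [0, 0, 100, 400, 800, 1000, 800, 400, 100, 0, 0, 0]  # crude diurnal course

# IDM/SDM-like: evaluate the model on each instantaneous input, then average.
mean_of_response = sum(photosynthesis(p) for p in hourly_par) / len(hourly_par)

# SADM-like: average the input first, then evaluate the model once.
response_of_mean = photosynthesis(sum(hourly_par) / len(hourly_par))

print(response_of_mean > mean_of_response)  # True: averaging the inputs overestimates
```

With these made-up numbers the input-averaged estimate is roughly 50% too high, which is why the IDM/SDM schemes integrate the diurnal course instead.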

  15. Kaplan-Meier survival analysis overestimates cumulative incidence of health-related events in competing risk settings: a meta-analysis.

    Science.gov (United States)

    Lacny, Sarah; Wilson, Todd; Clement, Fiona; Roberts, Derek J; Faris, Peter; Ghali, William A; Marshall, Deborah A

    2018-01-01

    Kaplan-Meier survival analysis overestimates cumulative incidence in competing risks (CRs) settings. The extent of overestimation (or its clinical significance) has been questioned, and CRs methods are infrequently used. This meta-analysis compares the Kaplan-Meier method to the cumulative incidence function (CIF), a CRs method. We searched MEDLINE, EMBASE, BIOSIS Previews, Web of Science (1992-2016), and article bibliographies for studies estimating cumulative incidence using the Kaplan-Meier method and CIF. For studies with sufficient data, we calculated pooled risk ratios (RRs) comparing Kaplan-Meier and CIF estimates using DerSimonian and Laird random effects models. We performed stratified meta-analyses by clinical area, rate of CRs (CRs/events of interest), and follow-up time. Of 2,192 identified abstracts, we included 77 studies in the systematic review and meta-analyzed 55. The pooled RR demonstrated the Kaplan-Meier estimate was 1.41 [95% confidence interval (CI): 1.36, 1.47] times higher than the CIF. Overestimation was highest among studies with high rates of CRs [RR = 2.36 (95% CI: 1.79, 3.12)], studies related to hepatology [RR = 2.60 (95% CI: 2.12, 3.19)], and obstetrics and gynecology [RR = 1.84 (95% CI: 1.52, 2.23)]. The Kaplan-Meier method overestimated the cumulative incidence across 10 clinical areas. Using CRs methods will ensure accurate results inform clinical and policy decisions. Copyright © 2017 Elsevier Inc. All rights reserved.
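    The mechanism behind the overestimation is easy to reproduce: when competing events are treated as censoring, 1 − KM exceeds the cumulative incidence function. A minimal sketch on hypothetical data (not from the meta-analysis), using the discrete Aalen-Johansen form of the CIF:

```python
# Toy competing-risks example: 1 - Kaplan-Meier (competing events censored)
# versus the cumulative incidence function (CIF). Hypothetical data only.

def one_minus_km(records, cause):
    """1 - KM for `cause`, treating competing events as censoring.
    records: iterable of (time, cause) pairs, one event per subject."""
    surv, at_risk = 1.0, len(records)
    for _t, c in sorted(records):
        if c == cause:
            surv *= 1.0 - 1.0 / at_risk
        at_risk -= 1  # every event (either cause) leaves the risk set
    return 1.0 - surv

def cif(records, cause):
    """Cumulative incidence of `cause`: sum of P(event-free to t-) * hazard at t."""
    total, overall_surv, at_risk = 0.0, 1.0, len(records)
    for _t, c in sorted(records):
        if c == cause:
            total += overall_surv * (1.0 / at_risk)
        overall_surv *= 1.0 - 1.0 / at_risk  # any event removes the subject
        at_risk -= 1
    return total

# 10 subjects; cause 1 = event of interest, cause 2 = competing event.
data = [(1, 2), (2, 1), (3, 2), (4, 1), (5, 2),
        (6, 1), (7, 2), (8, 1), (9, 2), (10, 1)]
print(cif(data, 1), one_minus_km(data, 1))  # 0.5 vs 1.0: KM doubles the estimate
```

With half the cohort removed by competing events, 1 − KM here is twice the true cumulative incidence, the same direction (and similar magnitude) as the pooled high-competing-risk ratio above.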

  16. Lake Wobegon’s Guns: Overestimating Our Gun-Related Competences

    Directory of Open Access Journals (Sweden)

    Emily Stark

    2016-02-01

    The Lake Wobegon Effect is a general tendency for people to overestimate their own abilities. In this study, the authors conducted a large, nationally representative survey of U.S. citizens to test whether Americans overestimate their own gun-relevant personality traits, gun safety knowledge, and ability to use a gun in an emergency. The authors also tested how gun control attitudes, political identification, gender, and gun experience affect self-perceptions. Consistent with prior research on the Lake Wobegon Effect, participants overestimated their gun-related competencies. Conservatives, males, and pro-gun advocates self-enhanced somewhat more than their counterparts, but this effect was primarily due to greater gun experience among these participants. These findings are important to policymakers in the area of gun use, because overconfidence in one's gun-related abilities may lead to a reduced perceived need for gun training.

  17. Voltage and pace-capture mapping of linear ablation lesions overestimates chronic ablation gap size.

    Science.gov (United States)

    O'Neill, Louisa; Harrison, James; Chubb, Henry; Whitaker, John; Mukherjee, Rahul K; Bloch, Lars Ølgaard; Andersen, Niels Peter; Dam, Høgni; Jensen, Henrik K; Niederer, Steven; Wright, Matthew; O'Neill, Mark; Williams, Steven E

    2018-04-26

    Conducting gaps in lesion sets are a major reason for failure of ablation procedures. Voltage mapping and pace-capture have been proposed for intra-procedural identification of gaps. We aimed to compare gap size measured acutely and chronically post-ablation to macroscopic gap size in a porcine model. Intercaval linear ablation was performed in eight Göttingen minipigs with a deliberate gap of ∼5 mm left in the ablation line. Gap size was measured by interpolating ablation contact force values between ablation tags and thresholding at a low force cut-off of 5 g. Bipolar voltage mapping and pace-capture mapping along the length of the line were performed immediately, and at 2 months, post-ablation. Animals were euthanized and gap sizes were measured macroscopically. Voltage thresholds to define scar were determined by receiver operating characteristic analysis for the voltage, pace-capture, and ablation contact force maps. All modalities overestimated chronic gap size: by 1.4 ± 2.0 mm (ablation contact force map), 5.1 ± 3.4 mm (pace-capture), and 9.5 ± 3.8 mm (voltage mapping). Errors in ablation contact force map gap measurements were significantly smaller than those for voltage mapping (P = 0.003, Tukey's multiple comparisons test). Chronically, voltage mapping and pace-capture mapping overestimated macroscopic gap size by 11.9 ± 3.7 and 9.8 ± 3.5 mm, respectively. Bipolar voltage and pace-capture mapping overestimate the size of chronic gaps in linear ablation lesions. The most accurate estimation of chronic gap size was achieved by analysis of catheter-myocardium contact force during ablation.

  18. Reducing WCET Overestimations by Correcting Errors in Loop Bound Constraints

    Directory of Open Access Journals (Sweden)

    Fanqi Meng

    2017-12-01

    In order to reduce overestimations of worst-case execution time (WCET), in this article we first report a kind of specific WCET overestimation caused by non-orthogonal nested loops. We then propose a novel correction approach with three basic steps. The first step is to locate the worst-case execution path (WCEP) in the control flow graph and map it onto source code. The second step is to identify non-orthogonal nested loops in the WCEP by means of an abstract syntax tree. The last step is to recursively calculate the WCET errors caused by the loose loop bound constraints, and then subtract the total errors from the overestimations. The novelty lies in the fact that the WCET correction is only conducted on the non-branching part of the WCEP, thus avoiding potential safety risks caused by possible WCEP switches. Experimental results show that our approach reduces the specific WCET overestimation by an average of more than 82%, and 100% of the corrected WCETs are no less than the actual WCET. Thus, our approach is not only effective but also safe. It will help developers to design energy-efficient and safe real-time systems.
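    The kind of overestimation the article targets can be reproduced with a triangular (non-orthogonal) nested loop: the inner bound depends on the outer index, but a loose loop-bound constraint charges the worst inner bound on every outer iteration. A minimal counting sketch (illustrative only, not the authors' analysis):

```python
# Non-orthogonal nested loop: the inner loop runs i+1 times on outer iteration i.
# A loose bound constraint "inner loop <= N iterations" applied on all N outer
# iterations overestimates the true iteration count by nearly a factor of two.

N = 100

# Actual iterations of the triangular loop nest: 1 + 2 + ... + N = N(N+1)/2.
actual = sum(1 for i in range(N) for _j in range(i + 1))

# Loose bound, as a WCET analyzer without the correction would charge it.
loose_bound = N * N

error = loose_bound - actual
print(actual, loose_bound, error)  # 5050 10000 4950: ~49% of the bound is error
```

If each iteration had equal cost, subtracting the computed `error`, which is the correction the article's last step performs on loop bounds, would remove essentially all of the overestimation here.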

  19. Partners' Overestimation of Patients' Pain Severity: Relationships with Partners' Interpersonal Responses.

    Science.gov (United States)

    Junghaenel, Doerte U; Schneider, Stefan; Broderick, Joan E

    2017-09-26

    The present study examined whether concordance between patients' and their partners' reports of patient pain severity relates to partners' social support and behavioral responses in couples coping with chronic pain. Fifty-two couples completed questionnaires about the patient's pain severity. Both dyad members also rated the partner's social support and negative, solicitous, and distracting responses toward the patient when in pain. Bivariate correlations showed moderate correspondence between patient and partner ratings of pain severity (r = 0.55) and negative (r = 0.46), solicitous (r = 0.47), and distracting responses (r = 0.53), but lower correspondence for social support (r = 0.28). Twenty-eight couples (54%) were concordant in their perceptions of patient pain; partners overestimated pain in 14 couples (27%), and partners underestimated pain in 10 couples (19%). Couple concordance in pain perceptions was not related to patients' reports; however, it significantly predicted partners' reports: Partners who overestimated pain reported giving more social support (β = 0.383, P = 0.016), fewer negative responses (β = -0.332, P = 0.029), and more solicitous responses (β = 0.438, P = 0.016) than partners who were in agreement or who underestimated pain. Partner overestimation of pain severity is associated with partner-reported but not with patient-reported support-related responses. This finding has important clinical implications for couple interventions in chronic pain. © 2017 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  20. Adolescent-perceived parent and teacher overestimation of mathematics ability: Developmental implications for students' mathematics task values.

    Science.gov (United States)

    Gniewosz, Burkhard; Watt, Helen M G

    2017-07-01

    This study examines whether and how student-perceived parents' and teachers' overestimation of students' own perceived mathematical ability can explain trajectories for adolescents' mathematical task values (intrinsic and utility) controlling for measured achievement, following expectancy-value and self-determination theories. Longitudinal data come from a 3-cohort (mean ages 13.25, 12.36, and 14.41 years; Grades 7-10), 4-wave data set of 1,271 Australian secondary school students. Longitudinal structural equation models revealed positive effects of student-perceived overestimation of math ability by parents and teachers on students' intrinsic and utility math task values development. Perceived parental overestimations predicted intrinsic task value changes between all measurement occasions, whereas utility task value changes only were predicted between Grades 9 and 10. Parental influences were stronger for intrinsic than utility task values. Teacher influences were similar for both forms of task values and commenced after the curricular school transition in Grade 8. Results support the assumptions that the perceived encouragement conveyed by student-perceived mathematical ability beliefs of parents and teachers, promote positive mathematics task values development. Moreover, results point to different mechanisms underlying parents' and teachers' support. Finally, the longitudinal changes indicate transition-related increases in the effects of student-perceived overestimations and stronger effects for intrinsic than utility values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Peer substance use overestimation among French university students: a cross-sectional survey

    Directory of Open Access Journals (Sweden)

    Dautzenberg Bertrand

    2010-03-01

    Background: Normative misperceptions have been widely documented for alcohol use among U.S. college students. There is less research on other substances or on European cultural contexts. This study explores which factors are associated with alcohol, tobacco and cannabis use misperceptions among French college students, focusing on substance use. Methods: 12 classes of second-year college students (n = 731) in sociology, medicine, nursing or foreign languages estimated the proportion of tobacco, cannabis and alcohol use and heavy episodic drinking among their peers and reported their own use. Results: Peer substance use overestimation frequency was 84% for tobacco, 55% for cannabis, 37% for alcohol and 56% for heavy episodic drinking. Cannabis users (p = 0.006), alcohol users (p = 0.003) and heavy episodic drinkers (p = 0.002) are more likely to overestimate the prevalence of these consumptions. Tobacco users are less likely to overestimate the peer prevalence of smoking (p = 0.044). Women are more likely to overestimate tobacco use (p …). Conclusions: Local interventions that focus on creating realistic perceptions of substance use prevalence could be considered for cannabis and alcohol prevention on French campuses.

  2. Do young novice drivers overestimate their driving skills?

    NARCIS (Netherlands)

    Craen, S. de; Twisk, D.A.M.; Hagenzieker, M.P.; Elffers, H.; Brookhuis, K.A.

    2007-01-01

    In this study the authors argue that, in order to sufficiently adapt to task demands in traffic, drivers have to make an assessment of their own driving skills. There are indications that drivers in general, and novice drivers in particular, overestimate their driving skills. The objective of this

  3. Coronal 2D MR cholangiography overestimates the length of the right hepatic duct in liver transplantation donors

    International Nuclear Information System (INIS)

    Kim, Bohyun; Kim, Kyoung Won; Kim, So Yeon; Park, So Hyun; Lee, Jeongjin; Song, Gi Won; Jung, Dong-Hwan; Ha, Tae-Yong; Lee, Sung Gyu

    2017-01-01

    To compare the length of the right hepatic duct (RHD) measured on rotatory coronal 2D MR cholangiography (MRC), rotatory axial 2D MRC, and reconstructed 3D MRC. Sixty-seven donors underwent coronal and axial 2D projection MRC and 3D MRC. RHD length was measured and categorized as ultrashort (≤1 mm), short (>1-14 mm), and long (>14 mm). The measured length, frequency of overestimation, and the degree of underestimation in the two 2D MRC sets were compared to 3D MRC. The length of the RHD from 3D MRC, coronal 2D MRC, and axial 2D MRC showed significant differences (p < 0.05). The RHD was more frequently overestimated on coronal than on axial 2D MRC (61.2% vs. 9%; p < 0.0001). On coronal 2D MRC, four donors (6%) with short RHD and one (1.5%) with ultrashort RHD were over-categorized as long RHD. On axial 2D MRC, overestimation was mostly <1 mm (83.3%), with none exceeding 3 mm or over-categorized. The degree of underestimation between the two projection planes was comparable. Coronal 2D MRC overestimates the RHD in liver donors. We suggest adding axial 2D MRC to conventional coronal 2D MRC in the preoperative workup protocol for living liver donors to avoid unexpected confrontation with multiple ductal openings when harvesting the graft. (orig.)

  4. Coronal 2D MR cholangiography overestimates the length of the right hepatic duct in liver transplantation donors

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Bohyun [University of Ulsan College of Medicine, Department of Radiology, Asan Medical Center, 88, Olympic-ro 43-gil, Songpa-gu, Seoul (Korea, Republic of); Ajou University School of Medicine, Department of Radiology, Ajou University Medical Center, Suwon (Korea, Republic of); Kim, Kyoung Won; Kim, So Yeon; Park, So Hyun [University of Ulsan College of Medicine, Department of Radiology, Asan Medical Center, 88, Olympic-ro 43-gil, Songpa-gu, Seoul (Korea, Republic of); Lee, Jeongjin [Soongsil University, School of Computer Science and Engineering, Seoul (Korea, Republic of); Song, Gi Won; Jung, Dong-Hwan; Ha, Tae-Yong; Lee, Sung Gyu [University of Ulsan College of Medicine, Department of Surgery, Division of Hepatobiliary and Liver Transplantation Surgery, Asan Medical Center, Seoul (Korea, Republic of)

    2017-05-15

    To compare the length of the right hepatic duct (RHD) measured on rotatory coronal 2D MR cholangiography (MRC), rotatory axial 2D MRC, and reconstructed 3D MRC. Sixty-seven donors underwent coronal and axial 2D projection MRC and 3D MRC. RHD length was measured and categorized as ultrashort (≤1 mm), short (>1-14 mm), and long (>14 mm). The measured length, frequency of overestimation, and the degree of underestimation in the two 2D MRC sets were compared to 3D MRC. The length of the RHD from 3D MRC, coronal 2D MRC, and axial 2D MRC showed significant differences (p < 0.05). The RHD was more frequently overestimated on coronal than on axial 2D MRC (61.2% vs. 9%; p < 0.0001). On coronal 2D MRC, four donors (6%) with short RHD and one (1.5%) with ultrashort RHD were over-categorized as long RHD. On axial 2D MRC, overestimation was mostly <1 mm (83.3%), with none exceeding 3 mm or over-categorized. The degree of underestimation between the two projection planes was comparable. Coronal 2D MRC overestimates the RHD in liver donors. We suggest adding axial 2D MRC to conventional coronal 2D MRC in the preoperative workup protocol for living liver donors to avoid unexpected confrontation with multiple ductal openings when harvesting the graft. (orig.)

  5. Why do general circulation models overestimate the aerosol cloud lifetime effect? A case study comparing CAM5 and a CRM

    Science.gov (United States)

    Zhou, Cheng; Penner, Joyce E.

    2017-01-01

    Observation-based studies have shown that the aerosol cloud lifetime effect or the increase of cloud liquid water path (LWP) with increased aerosol loading may have been overestimated in climate models. Here, we simulate shallow warm clouds on 27 May 2011 at the southern Great Plains (SGP) measurement site established by the Department of Energy's (DOE) Atmospheric Radiation Measurement (ARM) program using a single-column version of a global climate model (Community Atmosphere Model or CAM) and a cloud resolving model (CRM). The LWP simulated by CAM increases substantially with aerosol loading while that in the CRM does not. The increase of LWP in CAM is caused by a large decrease of the autoconversion rate when cloud droplet number increases. In the CRM, the autoconversion rate is also reduced, but this is offset or even outweighed by the increased evaporation of cloud droplets near the cloud top, resulting in an overall decrease in LWP. Our results suggest that climate models need to include the dependence of cloud top growth and the evaporation/condensation process on cloud droplet number concentrations.

  6. Volume-Dependent Overestimation of Spontaneous Intracerebral Hematoma Volume by the ABC/2 Formula

    International Nuclear Information System (INIS)

    Chih-Wei Wang; Chun-Jung Juan; Hsian-He Hsu; Hua-Shan Liu; Cheng-Yu Chen; Chun-Jen Hsueh; Hung-Wen Kao; Guo-Shu Huang; Yi-Jui Liu; Chung-Ping Lo

    2009-01-01

    Background: Although the ABC/2 formula has been widely used to estimate the volume of intracerebral hematoma (ICH), the formula tends to overestimate hematoma volume. The volume-related imprecision of the ABC/2 formula has not been documented quantitatively. Purpose: To investigate the volume-dependent overestimation of the ABC/2 formula by comparing it with computer-assisted volumetric analysis (CAVA). Material and Methods: Forty patients who had suffered spontaneous ICH and who had undergone non-enhanced brain computed tomography scans were enrolled in this study. The ICH volume was estimated based on the ABC/2 formula and also calculated by CAVA. Based on the ICH volume calculated by the CAVA method, the patients were divided into three groups: group 1 consisted of 17 patients with an ICH volume of less than 20 ml; group 2 comprised 13 patients with an ICH volume of 20 to 40 ml; and group 3 was composed of 10 patients with an ICH volume larger than 40 ml. Results: The mean estimated hematoma volume was 43.6 ml when using the ABC/2 formula, compared with 33.8 ml when using the CAVA method. The mean estimated difference was 1.3 ml, 4.4 ml, and 31.4 ml for groups 1, 2, and 3, respectively, corresponding to an estimation error of 9.9%, 16.7%, and 37.1% by the ABC/2 formula (P<0.05). Conclusion: The ABC/2 formula significantly overestimates the volume of ICH. A positive association between the estimation error and the volume of ICH is demonstrated
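    The ABC/2 formula above is simple enough to sketch directly; the following is an illustrative Python example (not the study's code) using hypothetical diameters and a hypothetical CAVA reference volume to show how the percent estimation error is computed.

```python
# Illustrative sketch of the ABC/2 hematoma-volume estimate and the
# percent error relative to a volumetric (CAVA-style) reference.
# All numeric values below are hypothetical, not taken from the study.

def abc2_volume(a_cm: float, b_cm: float, c_cm: float) -> float:
    """ABC/2 approximation: A and B are the largest perpendicular
    diameters (cm) on the axial slice, C is the vertical extent (cm);
    returns an estimated volume in ml."""
    return a_cm * b_cm * c_cm / 2.0

def estimation_error_pct(abc2_ml: float, reference_ml: float) -> float:
    """Percent overestimation of ABC/2 relative to a reference volume."""
    return (abc2_ml - reference_ml) / reference_ml * 100.0

# Hypothetical large hematoma measuring 6 x 5 x 4 cm:
est = abc2_volume(6, 5, 4)
err = estimation_error_pct(est, 45.0)  # hypothetical CAVA reference of 45 ml
print(est, round(err, 1))  # 60.0 33.3
```

The positive error grows with hematoma size in the study's data, consistent with the volume-dependent bias reported above.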

  7. Neglect Of Parameter Estimation Uncertainty Can Significantly Overestimate Structural Reliability

    Directory of Open Access Journals (Sweden)

    Rózsás Árpád

    2015-12-01

    Parameter estimation uncertainty is often neglected in reliability studies, i.e. point estimates of distribution parameters are used for representative fractiles and in probabilistic models. A numerical example examines the effect of this uncertainty on structural reliability using Bayesian statistics. The study reveals that neglecting parameter estimation uncertainty may lead to an order-of-magnitude underestimation of the failure probability.
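    The direction of the effect can be shown with a minimal sketch, assuming a normal strength model whose mean is estimated from n tests: with a flat prior on the mean, the Bayesian predictive distribution inflates the variance by a factor (1 + 1/n), which raises the computed failure probability relative to the plug-in point estimate. All numbers are hypothetical.

```python
import math

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical model: resistance R ~ Normal(mu, sigma); failure if R < load s.
mu_hat, sigma, s, n = 100.0, 10.0, 70.0, 10  # mu_hat estimated from n = 10 tests

# Plug-in (point-estimate) failure probability: ignores uncertainty in mu_hat.
pf_point = norm_cdf((s - mu_hat) / sigma)

# Predictive failure probability: flat prior on mu gives
# R | data ~ Normal(mu_hat, sigma * sqrt(1 + 1/n)).
pf_pred = norm_cdf((s - mu_hat) / (sigma * math.sqrt(1.0 + 1.0 / n)))

print(pf_point, pf_pred)
assert pf_pred > pf_point  # neglecting uncertainty underestimates failure risk
```

With small samples or deeper tail probabilities the ratio between the two grows quickly, which is how an order-of-magnitude underestimation can arise.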

  8. Kaplan-Meier Survival Analysis Overestimates the Risk of Revision Arthroplasty: A Meta-analysis.

    Science.gov (United States)

    Lacny, Sarah; Wilson, Todd; Clement, Fiona; Roberts, Derek J; Faris, Peter D; Ghali, William A; Marshall, Deborah A

    2015-11-01

    Although Kaplan-Meier survival analysis is commonly used to estimate the cumulative incidence of revision after joint arthroplasty, it theoretically overestimates the risk of revision in the presence of competing risks (such as death). Because the magnitude of overestimation is not well documented, the potential associated impact on clinical and policy decision-making remains unknown. We performed a meta-analysis to answer the following questions: (1) To what extent does the Kaplan-Meier method overestimate the cumulative incidence of revision after joint replacement compared with alternative competing-risks methods? (2) Is the extent of overestimation influenced by followup time or rate of competing risks? We searched Ovid MEDLINE, EMBASE, BIOSIS Previews, and Web of Science (1946, 1980, 1980, and 1899, respectively, to October 26, 2013) and included article bibliographies for studies comparing estimated cumulative incidence of revision after hip or knee arthroplasty obtained using both Kaplan-Meier and competing-risks methods. We excluded conference abstracts, unpublished studies, or studies using simulated data sets. Two reviewers independently extracted data and evaluated the quality of reporting of the included studies. Among 1160 abstracts identified, six studies were included in our meta-analysis. The principal reason for the steep attrition (1160 to six) was that the initial search was for studies in any clinical area that compared the cumulative incidence estimated using the Kaplan-Meier versus competing-risks methods for any event (not just the cumulative incidence of hip or knee revision); we did this to minimize the likelihood of missing any relevant studies. We calculated risk ratios (RRs) comparing the cumulative incidence estimated using the Kaplan-Meier method with the competing-risks method for each study and used DerSimonian and Laird random effects models to pool these RRs. Heterogeneity was explored using stratified meta-analyses and
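    The overestimation mechanism can be seen analytically with constant hazards: treating deaths as censoring, the Kaplan-Meier complement estimates 1 − exp(−λr·t), whereas the competing-risks (Aalen-Johansen) cumulative incidence is λr/(λr+λd) · (1 − exp(−(λr+λd)·t)), which is strictly smaller for t > 0. A minimal sketch with hypothetical hazards:

```python
import math

def km_complement(lam_r: float, t: float) -> float:
    # 1 - KM survival when deaths are (incorrectly) treated as censoring
    return 1.0 - math.exp(-lam_r * t)

def cumulative_incidence(lam_r: float, lam_d: float, t: float) -> float:
    # Competing-risks cumulative incidence of revision (constant hazards)
    lam = lam_r + lam_d
    return (lam_r / lam) * (1.0 - math.exp(-lam * t))

lam_r, lam_d = 0.02, 0.05   # hypothetical yearly hazards of revision and death
for t in (5, 10, 20):
    km, cif = km_complement(lam_r, t), cumulative_incidence(lam_r, lam_d, t)
    print(t, round(km, 4), round(cif, 4))
    assert km > cif  # KM overestimates whenever the competing hazard is nonzero
```

The ratio km/cif plays the role of the risk ratios pooled in the meta-analysis, and it grows with follow-up time and with the rate of the competing risk.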

  9. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance.

    Science.gov (United States)

    Kepes, Sven; McDaniel, Michael A

    2015-01-01

    Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effect sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation.

  10. MRI Overestimates Excitotoxic Amygdala Lesion Damage in Rhesus Monkeys

    Directory of Open Access Journals (Sweden)

    Benjamin M. Basile

    2017-06-01

    Selective, fiber-sparing excitotoxic lesions are a state-of-the-art tool for determining the causal contributions of different brain areas to behavior. For nonhuman primates especially, it is advantageous to keep subjects with high-quality lesions alive and contributing to science for many years. However, this requires the ability to estimate lesion extent accurately. Previous research has shown that in vivo T2-weighted magnetic resonance imaging (MRI) accurately estimates damage following selective ibotenic acid lesions of the hippocampus. Here, we show that the same does not apply to lesions of the amygdala. Across 19 hemispheres from 13 rhesus monkeys, MRI assessment consistently overestimated amygdala damage as assessed by microscopic examination of Nissl-stained histological material. Two outliers suggested a linear relation for lower damage levels, and values of unintended amygdala damage from a previous study fell directly on that regression line, demonstrating that T2 hypersignal accurately predicts damage levels below 50%. For unintended damage, MRI estimates correlated with histological assessment for entorhinal cortex, perirhinal cortex and hippocampus, though MRI significantly overestimated the extent of that damage in all structures. Nevertheless, ibotenic acid injections routinely produced extensive intentional amygdala damage with minimal unintended damage to surrounding structures, validating the general success of the technique. The field will benefit from more research into in vivo lesion assessment techniques, and additional evaluation of the accuracy of MRI assessment in different brain areas. For now, in vivo MRI assessment of ibotenic acid lesions of the amygdala can be used to confirm successful injections, but MRI estimates of lesion extent should be interpreted with caution.

  11. The prevalence of maternal F cells in a pregnant population and potential overestimation of foeto-maternal haemorrhage as a consequence.

    LENUS (Irish Health Repository)

    Corcoran, Deirdre

    2014-06-12

    Acid elution (AE) is used to estimate foeto-maternal haemorrhage (FMH). However, AE cannot differentiate foetal red cells from adult red cells that contain foetal haemoglobin (F cells), potentially leading to false-positive results or an overestimate of the amount of FMH. The prevalence of F cells in pregnant populations remains poorly characterised. The purpose of this study was to ascertain the incidence of HbF-containing red cells in our pregnant population using anti-HbF-fluorescein isothiocyanate flow cytometry (anti-HbF FC) and to assess whether their presence leads to a significant overestimate of FMH.

  12. Are the performance overestimates given by boys with ADHD self-protective?

    Science.gov (United States)

    Ohan, Jeneva L; Johnston, Charlotte

    2002-06-01

    Tested the self-protective hypothesis that boys with attention deficit hyperactivity disorder (ADHD) overestimate their performance to protect a positive self-image. We examined the impact of performance feedback on the social and academic performance self-perceptions of 45 boys with and 43 boys without ADHD ages 7 to 12. Consistent with the self-protective hypothesis, positive feedback led to increases in social performance estimates in boys without ADHD but to decreases in estimates given by boys with ADHD. This suggests that boys with ADHD can give more realistic self-appraisals when their self-image has been bolstered. In addition, social performance estimates in boys with ADHD were correlated with measures of self-esteem and positive presentation bias. In contrast, for academic performance estimates, boys in both groups increased their performance estimates after receiving positive versus average or no feedback, and estimates were not correlated with self-esteem or social desirability for boys with ADHD. We conclude that the self-protective hypothesis can account for social performance overestimations given by boys with ADHD but that other factors may better account for their academic performance overestimates.

  13. Overestimation of body size in eating disorders and its association to body-related avoidance behavior.

    Science.gov (United States)

    Vossbeck-Elsebusch, Anna N; Waldorf, Manuel; Legenbauer, Tanja; Bauer, Anika; Cordes, Martin; Vocks, Silja

    2015-06-01

    Body-related avoidance behavior, e.g., not looking in the mirror, is a common feature of eating disorders. It is assumed that it leads to insufficient feedback concerning one's own real body form and might thus contribute to distorted mental representation of one's own body. However, this assumption still lacks empirical foundation. Therefore, the aim of the present study was to examine the relationship between misperception of one's own body and body-related avoidance behavior in N = 78 female patients with Bulimia nervosa and eating disorder not otherwise specified. Body-size misperception was assessed using a digital photo distortion technique based on an individual picture of each participant which was taken in a standardized suit. In a regression analysis with body-related avoidance behavior, body mass index and weight and shape concerns as predictors, only body-related avoidance behavior significantly contributed to the explanation of body-size overestimation. This result supports the theoretical assumption that body-related avoidance behavior makes body-size overestimation more likely.

  14. Overestimation of infant and toddler energy intake by 24-h recall compared with weighed food records.

    Science.gov (United States)

    Fisher, Jennifer O; Butte, Nancy F; Mendoza, Patricia M; Wilson, Theresa A; Hodges, Eric A; Reidy, Kathleen C; Deming, Denise

    2008-08-01

    Twenty-four-hour dietary recalls have been used in large surveys of infant and toddler energy intake, but the accuracy of the method for young children is not well documented. We aimed to determine the accuracy of infant and toddler energy intakes by a single, telephone-administered, multiple-pass 24-h recall as compared with 3-d weighed food records. A within-subjects design was used in which a 24-h recall and 3-d weighed food records were completed within 2 wk by 157 mothers (56 non-Hispanic white, 51 non-Hispanic black, and 50 Hispanic) of 7-11-mo-old infants or 12-24-mo-old toddlers. Child and caregiver anthropometrics, child eating patterns, and caregiver demographics and social desirability were evaluated as correlates of reporting bias. Intakes based on 3-d weighed food records were within 5% of estimated energy requirements. Compared with the 3-d weighed food records, the 24-h recall overestimated energy intake by 13% among infants (740 +/- 154 and 833 +/- 255 kcal, respectively) and by 29% among toddlers (885 +/- 197 and 1140 +/- 299 kcal, respectively). Eating patterns (ie, frequency and location) did not differ appreciably between methods. Macronutrient and micronutrient intakes were higher by 24-h recall than by 3-d weighed food record. Dairy and grains contributed the most energy to the diet and accounted for 74% and 54% of the overestimation seen in infants and toddlers, respectively. Greater overestimation was associated with a greater number of food items reported by the caregiver and lower child weight-for-length z scores. The use of a single, telephone-administered, multiple-pass 24-h recall may significantly overestimate infant or toddler energy and nutrient intakes because of portion size estimation errors.
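    The reported overestimation percentages follow directly from the abstract's mean intakes; a one-line check of that arithmetic:

```python
# Percent overestimation of the 24-h recall relative to the weighed record.
def pct_over(recall_kcal: float, record_kcal: float) -> float:
    return (recall_kcal - record_kcal) / record_kcal * 100.0

# Mean intakes reported in the abstract (weighed record vs. 24-h recall)
infants = pct_over(833, 740)    # infants: 740 kcal record, 833 kcal recall
toddlers = pct_over(1140, 885)  # toddlers: 885 kcal record, 1140 kcal recall
print(round(infants), round(toddlers))  # 13 29
```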

  15. Influencing Factors on the Overestimation of Self-Reported Physical Activity: A Cross-Sectional Analysis of Low Back Pain Patients and Healthy Controls

    Directory of Open Access Journals (Sweden)

    Andrea Schaller

    2016-01-01

    Introduction: The aim of the present study was to determine the closeness of agreement between a self-reported and an objective measure of physical activity in low back pain patients and healthy controls. Beyond that, influencing factors on overestimation were identified. Methods: 27 low back pain patients and 53 healthy controls wore an accelerometer (objective measure) for seven consecutive days and answered a questionnaire on physical activity (self-report) over the same period of time. Differences between self-reported and objective data were tested by Wilcoxon test. Bland-Altman analysis was conducted to describe the closeness of agreement. Linear regression models were calculated to identify the influence of age, sex, and body mass index on the overestimation by self-report. Results: Participants overestimated self-reported moderate activity on average by 42 min/day (p = 0.003) and vigorous activity by 39 min/day (p < 0.001). Self-reported sedentary time was underestimated by 122 min/day (p < 0.001). No individual-related variables influenced the overestimation of physical activity. Low back pain patients were more likely to underestimate sedentary time compared to healthy controls. Discussion: In rehabilitation and health promotion, the application-oriented measurement of physical activity remains a challenge. The present results contradict other studies that had identified an influence of age, sex, and body mass index on the overestimation of physical activity.
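    The Bland-Altman analysis used above reduces to the mean difference (bias) between the two measures and its 95% limits of agreement; a minimal sketch on hypothetical minutes-per-day data (not the study's data):

```python
import statistics as st

def bland_altman(self_report, objective):
    """Return the mean bias and 95% limits of agreement (bias ± 1.96·SD
    of the paired differences) between two measurement methods."""
    diffs = [a - b for a, b in zip(self_report, objective)]
    bias = st.mean(diffs)
    sd = st.stdev(diffs)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical moderate-activity minutes/day for five participants
reported = [60, 90, 45, 120, 80]   # questionnaire self-report
measured = [30, 40, 35, 60, 50]    # accelerometer
bias, (lo, hi) = bland_altman(reported, measured)
print(round(bias, 1), round(lo, 1), round(hi, 1))  # 36.0 -2.2 74.2
```

A positive bias with wide limits of agreement, as in this toy data, is the typical signature of self-report overestimation with large individual scatter.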

  16. Forgetting to remember our experiences: People overestimate how much they will retrospect about personal events.

    Science.gov (United States)

    Tully, Stephanie; Meyvis, Tom

    2017-12-01

    People value experiences in part because of the memories they create. Yet, we find that people systematically overestimate how much they will retrospect about their experiences. This overestimation results from people focusing on their desire to retrospect about experiences, while failing to consider the experience's limited enduring accessibility in memory. Consistent with this view, we find that desirability is a stronger predictor of forecasted retrospection than it is of reported retrospection, resulting in greater overestimation when the desirability of retrospection is higher. Importantly, the desire to retrospect does not change over time. Instead, past experiences become less top-of-mind over time and, as a result, people simply forget to remember. In line with this account, our results show that obtaining physical reminders of an experience reduces the overestimation of retrospection by increasing how much people retrospect, bringing their realized retrospection more in line with their forecasts (and aspirations). We further observe that the extent to which reported retrospection falls short of forecasted retrospection reliably predicts declining satisfaction with an experience over time. Despite this potential negative consequence of retrospection falling short of expectations, we suggest that the initial overestimation itself may in fact be adaptive. This possibility and other potential implications of this work are discussed.

  17. Were mercury emission factors for Chinese non-ferrous metal smelters overestimated? Evidence from onsite measurements in six smelters

    International Nuclear Information System (INIS)

    Zhang Lei; Wang Shuxiao; Wu Qingru; Meng Yang; Yang Hai; Wang Fengyang; Hao Jiming

    2012-01-01

    Non-ferrous metal smelting takes up a large proportion of the anthropogenic mercury emission inventory in China. Zinc, lead and copper smelting are three leading sources. Onsite measurements of mercury emissions were conducted for six smelters. The mercury emission factors were 0.09–2.98 g Hg/t metal produced. Acid plants with the double-conversion double-absorption process had a mercury removal efficiency of over 99%. In the flue gas after acid plants, 45–88% of the mercury was oxidized mercury, which can be easily scavenged in the flue gas scrubber. 70–97% of the mercury was removed from the flue gas to the waste water and 1–17% to the sulfuric acid product. In total, 0.3–13.5% of the mercury in the metal concentrate was emitted to the atmosphere. Therefore, acid plants in non-ferrous metal smelters have a significant co-benefit for mercury removal, and the mercury emission factors for Chinese non-ferrous metal smelters were probably overestimated in previous studies. - Highlights: ► Acid plants in smelters provide significant co-benefits for mercury removal (over 99%). ► Most of the mercury in metal concentrates for smelting ended up in waste water. ► Previously published emission factors for Chinese metal smelters were probably overestimated. - Acid plants in smelters have high mercury removal efficiency, and thus mercury emission factors for Chinese non-ferrous metal smelters were probably overestimated.

  18. Overestimation of Knowledge about Word Meanings: The "Misplaced Meaning" Effect

    Science.gov (United States)

    Kominsky, Jonathan F.; Keil, Frank C.

    2014-01-01

    Children and adults may not realize how much they depend on external sources in understanding word meanings. Four experiments investigated the existence and developmental course of a "Misplaced Meaning" (MM) effect, wherein children and adults overestimate their knowledge about the meanings of various words by underestimating how much…

  19. Mobility overestimation due to gated contacts in organic field-effect transistors

    Science.gov (United States)

    Bittle, Emily G.; Basham, James I.; Jackson, Thomas N.; Jurchescu, Oana D.; Gundlach, David J.

    2016-01-01

    Parameters used to describe the electrical properties of organic field-effect transistors, such as mobility and threshold voltage, are commonly extracted from measured current–voltage characteristics and interpreted by using the classical metal oxide–semiconductor field-effect transistor model. However, in recent reports of devices with ultra-high mobility (>40 cm² V⁻¹ s⁻¹), the device characteristics deviate from this idealized model and show an abrupt turn-on in the drain current when measured as a function of gate voltage. In order to investigate this phenomenon, here we report on single-crystal rubrene transistors intentionally fabricated to exhibit an abrupt turn-on. We disentangle the channel properties from the contact resistance by using impedance spectroscopy and show that the current in such devices is governed by a gate-bias dependence of the contact resistance. As a result, mobility values extracted from d.c. current–voltage characterization are overestimated by one order of magnitude or more. PMID:26961271

  20. Gun Carrying by High School Students in Boston, MA: Does Overestimation of Peer Gun Carrying Matter?

    Science.gov (United States)

    Hemenway, David; Vriniotis, Mary; Johnson, Renee M.; Miller, Matthew; Azrael, Deborah

    2011-01-01

    This paper investigates: (1) whether high school students overestimate gun carrying by their peers, and (2) whether those students who overestimate peer gun carrying are more likely to carry firearms. Data come from a randomly sampled survey conducted in 2008 of over 1700 high school students in Boston, MA. Over 5% of students reported carrying a…

  1. Debate on the Chernobyl disaster: on the causes of Chernobyl overestimation.

    Science.gov (United States)

    Jargin, Sergei V

    2012-01-01

    After the Chernobyl accident, many publications appeared that overestimated its medical consequences. Some of them are discussed in this article. Among the motives for the overestimation were anti-nuclear sentiments, widespread among some adherents of the Green movement; however, their attitude has not been wrong: nuclear facilities should have been prevented from spreading to overpopulated countries governed by unstable regimes and regions where conflicts and terrorism cannot be excluded. The Chernobyl accident has hindered worldwide development of atomic industry. Today, there are no alternatives to nuclear power: nonrenewable fossil fuels will become more and more expensive, contributing to affluence in the oil-producing countries and poverty in the rest of the world. Worldwide introduction of nuclear energy will become possible only after a concentration of authority within an efficient international executive. This will enable construction of nuclear power plants in optimally suitable places, considering all sociopolitical, geographic, geologic, and other preconditions. In this way, accidents such as that in Japan in 2011 will be prevented.

  2. Overestimation of test performance by ROC analysis: Effect of small sample size

    International Nuclear Information System (INIS)

    Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.

    1984-01-01

    New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample size systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P<0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described
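    The AUC in such rating studies is typically the empirical Mann-Whitney statistic; a minimal sketch (hypothetical ratings, not the study's Monte Carlo simulation) showing how ties on a 6-point confidence scale are handled:

```python
def auc_mann_whitney(signal, noise):
    """Empirical AUC: the fraction of (signal, noise) rating pairs where
    the signal rating is higher, with ties counted as half."""
    wins = sum(1.0 if s > n else 0.5 if s == n else 0.0
               for s in signal for n in noise)
    return wins / (len(signal) * len(noise))

# Hypothetical 6-point ratings from one simulated observer (SS = 4 per class)
signal_ratings = [3, 4, 5, 6]
noise_ratings = [1, 2, 3, 4]
print(auc_mann_whitney(signal_ratings, noise_ratings))  # 0.875
```

With sample sizes this small, each run's AUC estimate is highly variable, which is the setting in which the simulation above detects a systematic upward bias.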

  3. Do children overestimate the extent of smoking among their peers? A feasibility study of the social norms approach to prevent smoking.

    Science.gov (United States)

    Elsey, Helen; Owiredu, Elizabeth; Thomson, Heather; Mann, Gemma; Mehta, Rashesh; Siddiqi, Kamran

    2015-02-01

    Social norms approaches (SNA) are based on the premise that we frequently overestimate risk behaviours among our peers. By conducting campaigns to reduce these misperceptions, SNAs aim to reduce risk behaviours. This study examines the extent to which 12- to 13-year-old pupils overestimate smoking among their peers and explores the appropriateness of using SNA in secondary schools to prevent smoking uptake. The extent of overestimation of smoking among peers was assessed through an online SNA questionnaire in five schools (n=595). Based on the questionnaire results, pupils developed SNA campaigns in each school. Qualitative methods of focus groups (7), interviews (7) and observation were used to explore in depth, from the perspective of staff and pupils, the appropriateness and feasibility of the SNA to prevent smoking uptake in secondary schools. A quarter of pupils, 25.9% (95% CI 25.6% to 26.1%), believed that most of their peers smoked; however, only 3% (95% CI 2.8% to 3.3%) reported that they actually did, a difference of 22.9% (95% CI 19.1% to 26.6%). Self-reported smoking was not significantly different between schools (χ² = 8.7, p = 0.064); however, perceptions of year-group smoking differed significantly across schools (χ² = 63.9). Pupils thus substantially overestimated smoking among peers in secondary schools, supporting a key premise of social norms theory. Implementing SNAs and studying their effects is feasible within secondary schools. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Existing creatinine-based equations overestimate glomerular filtration rate in Indians.

    Science.gov (United States)

    Kumar, Vivek; Yadav, Ashok Kumar; Yasuda, Yoshinari; Horio, Masaru; Kumar, Vinod; Sahni, Nancy; Gupta, Krishan L; Matsuo, Seiichi; Kohli, Harbir Singh; Jha, Vivekanand

    2018-02-01

    Accurate estimation of glomerular filtration rate (GFR) is important for diagnosis and risk stratification in chronic kidney disease and for selection of living donors. Ethnic differences have required correction factors in the originally developed creatinine-based GFR estimation equations for populations around the world. Existing equations have not been validated in the largely vegetarian Indian population. We examined the performance of creatinine- and cystatin-based GFR estimating equations in Indians. GFR was measured by urinary clearance of inulin. Serum creatinine was measured using IDMS-traceable Jaffe and enzymatic assays, and cystatin C by colloidal gold immunoassay. Dietary protein intake was calculated by measuring urinary nitrogen appearance. Bias, precision and accuracy were calculated for the eGFR equations. A total of 130 participants (63 healthy kidney donors and 67 with CKD) were studied. About 50% were vegetarians, and the remainder ate meat 3.8 times per month on average. The average creatinine excretion was 14.7 mg/kg/day (95% CI: 13.5 to 15.9 mg/kg/day) in males and 12.4 mg/kg/day (95% CI: 11.2 to 13.6 mg/kg/day) in females. The average daily protein intake was 46.1 g/day (95% CI: 43.2 to 48.8 g/day). The mean mGFR in the study population was 51.66 ± 31.68 ml/min/1.73 m². All creatinine-based eGFR equations overestimated GFR (p < 0.01 for each creatinine-based eGFR equation). However, eGFR by CKD-EPI Cys was not significantly different from mGFR (p = 0.38). CKD-EPI Cys exhibited the lowest bias [mean bias: -3.53 ± 14.70 ml/min/1.73 m² (95% CI: -6.08 to -0.98)] and the highest accuracy (P30: 74.6%). The GFR in the healthy population was 79.44 ± 20.19 (range: 41.90-134.50) ml/min/1.73 m². Existing creatinine-based GFR estimating equations overestimate GFR in Indians. An appropriately powered study is needed to develop either a correction factor or a new equation for accurate assessment of kidney function in the
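
The agreement statistics quoted above (bias, precision, P30 accuracy) can be computed from paired measured and estimated GFR values as follows; the function name and the illustrative numbers are hypothetical, not data from the study.

```python
import numpy as np

def egfr_performance(egfr, mgfr):
    """Agreement statistics for paired estimated (eGFR) and measured (mGFR)
    GFR, both in ml/min/1.73 m^2.

    bias: mean(eGFR - mGFR); a positive value means overestimation.
    precision: SD of the paired differences.
    p30: percentage of estimates within +/-30% of the measured value.
    """
    egfr = np.asarray(egfr, dtype=float)
    mgfr = np.asarray(mgfr, dtype=float)
    diff = egfr - mgfr
    bias = float(diff.mean())
    precision = float(diff.std(ddof=1))
    p30 = float(100.0 * np.mean(np.abs(diff) <= 0.30 * mgfr))
    return bias, precision, p30

# Hypothetical paired values, for illustration only.
bias, precision, p30 = egfr_performance([60, 75, 90, 40, 55],
                                        [50, 70, 80, 38, 52])
```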

  5. The Overestimation Phenomenon in a Skill-Based Gaming Context: The Case of March Madness Pools.

    Science.gov (United States)

    Kwak, Dae Hee

    2016-03-01

    Over 100 million people are estimated to take part in the NCAA Men's Basketball Tournament Championship bracket contests. However, relatively little is known about consumer behavior in skill-based gaming situations (e.g., sports betting). In two studies, we investigated the overestimation phenomenon in the "March Madness" context. In Study 1 (N = 81), we found that individuals who were allowed to make their own predictions were significantly more optimistic about their performance than individuals who did not make their own selections. In Study 2 (N = 197), all subjects participated in a mock competitive bracket pool. In line with the illusion of control theory, results showed that higher self-ratings of probability of winning significantly increased maximum willingness to wager but did not improve actual performance. Lastly, perceptions of high probability of winning significantly contributed to consumers' enjoyment and willingness to participate in a bracket pool in the future.

  6. Overestimation of Knowledge About Word Meanings: The “Misplaced Meaning” Effect

    OpenAIRE

    Kominsky, Jonathan F.; Keil, Frank C.

    2014-01-01

    Children and adults may not realize how much they depend on external sources in understanding word meanings. Four experiments investigated the existence and developmental course of a “Misplaced Meaning” (MM) effect, wherein children and adults overestimate their knowledge about the meanings of various words by underestimating how much they rely on outside sources to determine precise reference. Studies 1 & 2 demonstrate that children and adults show a highly consistent MM effect, and that it ...

  7. The number of patients and events required to limit the risk of overestimation of intervention effects in meta-analysis--a simulation study

    DEFF Research Database (Denmark)

    Thorlund, Kristian; Imberger, Georgina; Walsh, Michael

    2011-01-01

    Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been...
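
The optimal information size mentioned above is conventionally computed like an ordinary trial sample size. A hedged sketch for a continuous outcome, assuming a two-sided alpha of 0.05 and 90% power; the anticipated mean difference `delta` and SD `sigma` below are illustrative placeholders, not values from the study.

```python
from statistics import NormalDist

def optimal_information_size(delta, sigma, alpha=0.05, power=0.90):
    """Total number of patients (two arms, 1:1 allocation) needed to detect
    a mean difference `delta` given outcome SD `sigma`; the conventional
    formula behind the optimal information size in trial sequential analysis:
    N = 4 * (z_{1-alpha/2} + z_{power})^2 * sigma^2 / delta^2
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1.0 - alpha / 2.0)
    z_power = z.inv_cdf(power)
    return 4.0 * (z_alpha + z_power) ** 2 * sigma ** 2 / delta ** 2

# Illustrative numbers: detect a 5-point difference with SD 20.
ois = optimal_information_size(delta=5.0, sigma=20.0)
```

A meta-analysis whose accumulated sample size falls well short of this N is in the regime where random error alone can produce sizeable overestimates.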

  8. Field significance of performance measures in the context of regional climate model evaluation. Part 1: temperature

    Science.gov (United States)

    Ivanov, Martin; Warrach-Sagi, Kirsten; Wulfmeyer, Volker

    2018-04-01

    A new approach for rigorous spatial analysis of the downscaling performance of regional climate model (RCM) simulations is introduced. It is based on a multiple comparison of the local tests at the grid cells and is also known as "field" or "global" significance. New performance measures for estimating the added value of downscaled data relative to the large-scale forcing fields are developed. The methodology is demonstrated on a standard EURO-CORDEX hindcast simulation with the Weather Research and Forecasting (WRF) model coupled with the land surface model NOAH at 0.11° grid resolution. Monthly temperature climatology for the 1990-2009 period is analysed for Germany for winter and summer in comparison with high-resolution gridded observations from the German Weather Service. The field significance test controls the proportion of falsely rejected local tests in a meaningful way and is robust to spatial dependence. Hence, the spatial patterns of the statistically significant local tests are also meaningful. We interpret them from a process-oriented perspective. In winter and in most regions in summer, the downscaled distributions are statistically indistinguishable from the observed ones. A systematic cold summer bias occurs in deep river valleys due to overestimated elevations, in coastal areas probably due to enhanced sea breeze circulation, and over large lakes due to the interpolation of water temperatures. Urban areas in concave topography forms have a warm summer bias due to the strong heat islands, not reflected in the observations. WRF-NOAH generates appropriate fine-scale features in the monthly temperature field over regions of complex topography, but over spatially homogeneous areas even small biases can lead to significant deteriorations relative to the driving reanalysis. As the added value of global climate model (GCM)-driven simulations cannot be smaller than this perfect-boundary estimate, this work demonstrates in a rigorous manner the

  9. Biased processing of threat-related information rather than knowledge deficits contributes to overestimation of threat in obsessive-compulsive disorder.

    Science.gov (United States)

    Moritz, Steffen; Pohl, Rüdiger F

    2009-11-01

    Overestimation of threat (OET) has been implicated in the pathogenesis of obsessive-compulsive disorder (OCD). The present study deconstructed this complex concept and looked for specific deviances in OCD relative to controls. A total of 46 participants with OCD and 51 nonclinical controls were asked: (a) to estimate the incidence rate for 20 events relating to washing, checking, positive, or negative incidents. Furthermore, they were required (b) to assess their personal vulnerability to experience each event type, and (c) to judge the degree of accompanying worry. Later, participants were confronted with the correct statistics and asked (d) to rate their degree of worry versus relief. OCD participants did not provide higher estimates for OCD-related events than healthy participants, thus rendering a knowledge deficit unlikely. The usual unrealistic optimism bias was found in both groups but was markedly attenuated in OCD participants. OCD-related events worried OCD participants more than controls. Confrontation with the correct statistics appeased OCD participants less than healthy participants. Even in the case of large initial overestimations for OCD-related events, correct information appeased OCD participants significantly less than healthy participants. Our results suggest that OCD is not associated with a knowledge deficit regarding OCD-related events but that patients feel personally more vulnerable than nonclinical controls.

  10. Over-estimation of glomerular filtration rate by single injection [51Cr]EDTA plasma clearance determination in patients with ascites

    DEFF Research Database (Denmark)

    Henriksen, Jens Henrik Sahl; Brøchner-Mortensen, J; Malchow-Møller, A

    1980-01-01

    The total plasma (Clt) and the renal plasma (Clr) clearances of [51Cr]EDTA were determined simultaneously in nine patients with ascites due to liver cirrhosis. Clt (mean 78 ml/min, range 34-115 ml/min) was significantly higher than Clr (mean 52 ml/min, range 13-96 ml/min, P ... fluid-plasma activity ratio of [51Cr]EDTA increased throughout the investigation period (5h). The results suggest that [51Cr]EDTA equilibrates slowly with the peritoneal space which indicates that Clt will over-estimate the glomerular filtration rate by approximately 20 ml/min in patients with ascites...

  11. Overestimation of molecular and modelling methods and underestimation of traditional taxonomy leads to real problems in assessing and handling of the world's biodiversity.

    Science.gov (United States)

    Löbl, Ivan

    2014-02-27

    Since the 1992 Rio Convention on Biological Diversity, the earth's biodiversity is a matter of constant public interest, but the community of scientists who describe and delimit species in mega-diverse animal groups, i.e. the bulk of global biodiversity, faces ever-increasing impediments. The problems are rooted in poor understanding of specificity of taxonomy, and overestimation of quantitative approaches and modern technology. A high proportion of the animal species still remains to be discovered and studied, so a more balanced approach to the situation is needed.

  12. Responsibility/Threat Overestimation Moderates the Relationship Between Contamination-Based Disgust and Obsessive-Compulsive Concerns About Sexual Orientation.

    Science.gov (United States)

    Ching, Terence H W; Williams, Monnica T; Siev, Jedidiah; Olatunji, Bunmi O

    2018-05-01

    Disgust has been shown to perform a "disease-avoidance" function in contamination fears. However, no studies have examined the relevance of disgust to obsessive-compulsive (OC) concerns about sexual orientation (e.g., fear of one's sexual orientation transforming against one's will, and compulsive avoidance of same-sex and/or gay or lesbian individuals to prevent that from happening). Therefore, we investigated whether the specific domain of contamination-based disgust (i.e., evoked by the perceived threat of transmission of essences between individuals) predicted OC concerns about sexual orientation, and whether this effect was moderated/amplified by obsessive beliefs, in evaluation of a "sexual orientation transformation-avoidance" function. We recruited 283 self-identified heterosexual college students (152 females, 131 males; mean age = 20.88 years, SD = 3.19) who completed three measures assessing disgust, obsessive beliefs, and OC concerns about sexual orientation. Results showed that contamination-based disgust (β = .17), responsibility/threat overestimation beliefs (β = .15), and their interaction (β = .17) each uniquely predicted OC concerns about sexual orientation (ts = 2.22, 2.50, and 2.90, respectively). Specifically, contamination-based disgust accompanied by strong responsibility/threat overestimation beliefs predicted more severe OC concerns about sexual orientation (β = .48, t = 3.24). These findings suggest that OC concerns about sexual orientation are predicted by contamination-based disgust and exacerbated by responsibility/threat overestimation beliefs. Treatment for OC concerns about sexual orientation should target such beliefs.

  13. Overestimation of closed-chamber soil CO2 effluxes at low atmospheric turbulence

    DEFF Research Database (Denmark)

    Brændholt, Andreas; Larsen, Klaus Steenberg; Ibrom, Andreas

    2017-01-01

    Soil respiration (R-s) is an important component of ecosystem carbon balance, and accurate quantification of the diurnal and seasonal variation of R-s is crucial for a correct interpretation of the response of R-s to biotic and abiotic factors, as well as for estimating annual soil CO2 efflux rates...... be eliminated if proper mixing of air is ensured, and indeed the use of fans removed the overestimation of R-s rates during low u(*). Artificial turbulent air mixing may thus provide a method to overcome the problems of using closed-chamber gas-exchange measurement techniques during naturally occurring low...

  14. Ignoring detailed fast-changing dynamics of land use overestimates regional terrestrial carbon sequestration

    Directory of Open Access Journals (Sweden)

    S. Q. Zhao

    2009-08-01

    Land use change is critical in determining the distribution, magnitude and mechanisms of terrestrial carbon budgets at local to global scales. To date, almost all regional to global carbon cycle studies are driven by a static land use map or by land use change statistics with decadal time intervals. The biases in quantifying carbon exchange between terrestrial ecosystems and the atmosphere caused by using such land use change information have not been investigated. Here, we used the General Ensemble biogeochemical Modeling System (GEMS), along with consistent and spatially explicit land use change scenarios with different intervals (1 yr, 5 yrs, 10 yrs, and static), to evaluate the impacts of land use change data frequency on estimates of regional carbon sequestration in the southeastern United States. Our results indicate that ignoring the detailed fast-changing dynamics of land use can lead to a significant overestimation of carbon uptake by the terrestrial ecosystem. Estimated regional carbon sequestration increased from 0.27 to 0.69, 0.80 and 0.97 Mg C ha−1 yr−1 as the land use change data frequency shifted from a 1-year interval to 5-year and 10-year intervals and to static land use information, respectively. Carbon removal by forest harvesting and the prolonged cumulative impacts of historical land use change on the carbon cycle accounted for the differences in carbon sequestration between the static and dynamic land use change scenarios. The results suggest that it is critical to incorporate the detailed dynamics of land use change into local to global carbon cycle studies. Otherwise, it is impossible to accurately quantify the geographic distributions, magnitudes, and mechanisms of terrestrial carbon sequestration at local to global scales.

  15. Overestimation of Knowledge About Word Meanings: The “Misplaced Meaning” Effect

    Science.gov (United States)

    Kominsky, Jonathan F.; Keil, Frank C.

    2014-01-01

    Children and adults may not realize how much they depend on external sources in understanding word meanings. Four experiments investigated the existence and developmental course of a “Misplaced Meaning” (MM) effect, wherein children and adults overestimate their knowledge about the meanings of various words by underestimating how much they rely on outside sources to determine precise reference. Studies 1 & 2 demonstrate that children and adults show a highly consistent MM effect, and that it is stronger in young children. Study 3 demonstrates that adults are explicitly aware of the availability of outside knowledge, and that this awareness may be related to the strength of the MM effect. Study 4 rules out general overconfidence effects by examining a metalinguistic task in which adults are well-calibrated. PMID:24890038

  16. Overestimation of reliability by Guttman’s λ4, λ5, and λ6, and the greatest lower bound

    NARCIS (Netherlands)

    Oosterwijk, P.R.; van der Ark, L.A.; Sijtsma, K.; van der Ark, L.A.; Wiberg, M.; Culpepper, S.A.; Douglas, J.A.; Wang, W.-C.

    2017-01-01

    For methods using statistical optimization to estimate lower bounds to test-score reliability, we investigated the degree to which they overestimate true reliability. Optimization methods do not only exploit real relationships between items but also tend to capitalize on sampling error and do this
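
A small sketch of the capitalization-on-chance mechanism described above: with mutually independent items the true reliability is zero, yet maximizing Guttman's λ4 over many random split-halves returns a positive value. The random-split search below is a simplification of the optimization methods the chapter studies, intended only to illustrate the overestimation.

```python
import numpy as np

rng = np.random.default_rng(1)

def lambda4(data, mask):
    """Guttman's lambda4 for one split-half: 2 * (1 - (var(A) + var(B)) / var(X)),
    where A and B are the half-test sum scores and X = A + B."""
    a = data[:, mask].sum(axis=1)
    b = data[:, ~mask].sum(axis=1)
    total = a + b
    return 2.0 * (1.0 - (a.var(ddof=1) + b.var(ddof=1)) / total.var(ddof=1))

def max_lambda4(data, n_splits=200):
    """Maximum lambda4 over random equal splits: the optimization step
    that capitalizes on sampling error in small samples."""
    n_items = data.shape[1]
    best = -np.inf
    for _ in range(n_splits):
        mask = np.zeros(n_items, dtype=bool)
        mask[rng.choice(n_items, n_items // 2, replace=False)] = True
        best = max(best, lambda4(data, mask))
    return float(best)

# 50 respondents, 10 mutually independent items: true reliability is 0,
# yet the maximized lambda4 comes out positive.
data = rng.normal(size=(50, 10))
overestimate = max_lambda4(data)
```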

  17. Do young novice drivers overestimate their driving skills more than experienced drivers? : different methods lead to different conclusions.

    NARCIS (Netherlands)

    Craen, S. de; Twisk, D.A.M.; Hagenzieker, M.P.; Elffers, H.; Brookhuis, K.A.

    2011-01-01

    In this study the authors argue that drivers have to make an assessment of their own driving skills, in order to sufficiently adapt to their task demands in traffic. There are indications that drivers in general, but novice drivers in particular, overestimate their driving skills. However, study

  18. Generalized PSF modeling for optimized quantitation in PET imaging.

    Science.gov (United States)

    Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman

    2017-06-21

    Point-spread function (PSF) modeling offers the ability to account for resolution-degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution-degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying PSF-modeled kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF

  19. The economic impact of subclinical ketosis at the farm level: Tackling the challenge of over-estimation due to multiple interactions.

    Science.gov (United States)

    Raboisson, D; Mounié, M; Khenifar, E; Maigné, E

    2015-12-01

    Subclinical ketosis (SCK) is a major metabolic disorder that affects dairy cows, and its lactational prevalence in Europe is estimated at 25%. Nonetheless, few data are available on the economics of SCK, although its management clearly must be improved. With this in mind, this study develops a double-step stochastic approach to evaluate the total cost of SCK to dairy farming. First, all the production and reproduction changes and all the health disorders associated with SCK were quantified using the meta-analysis from a previous study. Second, the total cost of SCK was determined with a stochastic model using distribution laws as input parameters. The mean total cost of SCK was estimated to be €257 per calving cow with SCK (95% prediction interval (PI): €72-442). The margin over feeding costs slightly influenced the results. When the parameters of the model are not modified to account for the conclusions from the meta-analysis and for the prevalence of health disorders in the population without SCK, the mean cost of SCK was overestimated by 68%, reaching €434 per calving cow (95% PI: €192-676). This result indicates that the total cost of complex health disorders is likely to be substantially overestimated when calculations use raw results from the literature or, even worse, single point estimates. Excluding labour costs from the estimation reduced the SCK total cost by 12%, whereas also excluding contributors with scarce data and imprecise calibrations (for lameness and udder health) reduced costs by another 18-20% (€210, 95% PI: €30-390). The proposed method accounted for uncertainty and variability in inputs by using distributions instead of point estimates. The mean value and associated prediction intervals (PIs) yielded good insight into the economic consequences of this complex disease and can be easily and practically used by decision makers in the field while simultaneously accounting for biological variability. Moreover, PIs can help prevent the blind use of economic
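
The double-step stochastic idea (distributions as inputs, mean plus 95% prediction interval as output) can be sketched as a simple Monte Carlo. Every distribution, parameter, and cost component below is hypothetical, chosen only to illustrate the mechanics, not taken from the cited study.

```python
import numpy as np

rng = np.random.default_rng(7)

def simulate_sck_cost(n_draws=100_000):
    """Monte Carlo sketch of a stochastic cost model: each input is a
    distribution rather than a point estimate. All parameter choices
    below are hypothetical."""
    milk_loss = rng.normal(120.0, 40.0, n_draws)      # EUR/case, production loss
    repro_loss = rng.normal(60.0, 25.0, n_draws)      # EUR/case, reproduction effects
    disease_risk = rng.beta(2.0, 8.0, n_draws)        # extra probability of a secondary disorder
    disease_cost = rng.gamma(2.0, 150.0, n_draws)     # EUR per secondary disorder case
    total = milk_loss + repro_loss + disease_risk * disease_cost
    lo, hi = np.percentile(total, [2.5, 97.5])
    return float(total.mean()), float(lo), float(hi)

mean_cost, pi_low, pi_high = simulate_sck_cost()
```

Reporting the 2.5th and 97.5th percentiles of the simulated totals, rather than a single mean, is what lets the prediction interval carry the biological variability the abstract emphasizes.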

  20. Advances in the physics modelling of CANDU liquid injection shutdown systems

    International Nuclear Information System (INIS)

    Smith, H.J.; Robinson, R.; Guertin, C.

    1993-01-01

    The physics modelling of liquid poison injection shutdown systems in CANDU reactors accounts for the major phenomena taking place by combining the effects of both moderator hydraulics and neutronics. This paper describes the advances in the physics modelling of liquid poison injection shutdown systems (LISS), discusses some of the effects of the more realistic modelling, and briefly describes the automation methodology. Modifications to the LISS methodology have improved the realism of the physics modelling, showing that the previous methodology significantly overestimated energy deposition during the simulation of a loss-of-coolant transient in Bruce A by overestimating the reactivity transient. Furthermore, the automation of the modelling process has reduced the time needed to carry out LISS evaluations to the same level as required for shutoff-rod evaluations, while at the same time minimizing the amount of input, and providing a method for tracing all files used, thus adding a level of quality assurance to the calculation. 5 refs., 11 figs

  1. The modification of the typhoon rainfall climatology model in Taiwan

    Directory of Open Access Journals (Sweden)

    C.-S. Lee

    2013-01-01

    This study is focused on the modification of a typhoon rainfall climatology model, using the dataset up to 2006 and including data collected from rain gauge stations established after the 921 earthquake (1999). Subsequently, climatology rainfall models for westward- and northward-moving typhoons are established using the typhoon track classification from the Central Weather Bureau. These models are also evaluated and examined using dependent cases collected between 1989 and 2006 and independent cases collected from 2007 to 2011. For the dependent cases, the average total rainfall at all rain gauge stations forecasted using the climatology rainfall models for westward- (W-TRCM12) and northward-moving (N-TRCM12) typhoons is superior to that obtained using the original climatological model (TRCM06). Model W-TRCM12 significantly improves on the precipitation underestimation of model TRCM06. The independent cases show that model W-TRCM12 provides better accumulated rainfall forecasts and distributions than model TRCM06. A climatological model for typhoons accompanied by northeastern monsoons (A-TRCM12), a special typhoon type, has also been established. The current A-TRCM12 model contains only five historical cases, and various typhoon combinations can cause precipitation in different regions. Therefore, precipitation is likely to be significantly overestimated and high false alarm ratios are likely to occur in specific regions. For example, model A-TRCM12 significantly overestimates the rainfall forecast for Typhoon Mitag, an independent case from 2007. However, it has a higher probability of detection than model TRCM06. From a disaster prevention perspective, a high probability of detection is much more important than a high false alarm ratio. The modified models can contribute significantly to operational forecasting.
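
The verification scores weighed against each other above, probability of detection (POD) and false alarm ratio (FAR), come from a standard 2x2 forecast contingency table. A minimal sketch; the counts are hypothetical.

```python
def pod(hits, misses):
    """Probability of detection: fraction of observed events that were forecast."""
    return hits / (hits + misses)

def far(hits, false_alarms):
    """False alarm ratio: fraction of forecast events that did not occur."""
    return false_alarms / (hits + false_alarms)

# Hypothetical verification counts for a heavy-rain threshold.
pod_val = pod(hits=42, misses=8)
far_val = far(hits=42, false_alarms=28)
```

A model can raise POD at the cost of a higher FAR by forecasting rain more liberally, which is exactly the trade-off the abstract accepts for disaster prevention.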

  2. Back-calculating baseline creatinine overestimates prevalence of acute kidney injury with poor sensitivity.

    Science.gov (United States)

    Kork, F; Balzer, F; Krannich, A; Bernardi, M H; Eltzschig, H K; Jankowski, J; Spies, C

    2017-03-01

    Acute kidney injury (AKI) is diagnosed by a 50% increase in creatinine. For patients without a baseline creatinine measurement, guidelines suggest estimating baseline creatinine by back-calculation. The aim of this study was to evaluate different glomerular filtration rate (GFR) equations and different GFR assumptions for back-calculating baseline creatinine, as well as the effect on the diagnosis of AKI. The Modification of Diet in Renal Disease (MDRD), the Chronic Kidney Disease Epidemiology (CKD-EPI) and the Mayo quadratic (MQ) equations were evaluated to estimate baseline creatinine, each under the assumption of either a fixed GFR of 75 mL/min/1.73 m² or an age-adjusted GFR. Estimated baseline creatinine, diagnoses and severity stages of AKI based on estimated baseline creatinine were compared to measured baseline creatinine and the corresponding diagnoses and severity stages of AKI. The data of 34 690 surgical patients were analysed. Back-calculation overestimated baseline creatinine. Diagnosing AKI based on estimated baseline creatinine had only substantial agreement with AKI diagnoses based on measured baseline creatinine [Cohen's κ ranging from 0.66 (95% CI 0.65-0.68) to 0.77 (95% CI 0.76-0.79)] and overestimated AKI prevalence with fair sensitivity [ranging from 74.3% (95% CI 72.3-76.2) to 90.1% (95% CI 88.6-92.1)]. Staging AKI severity based on estimated baseline creatinine had moderate agreement with AKI severity based on measured baseline creatinine [Cohen's κ ranging from 0.43 (95% CI 0.42-0.44) to 0.53 (95% CI 0.51-0.55)]. Diagnosing AKI and staging AKI severity on the basis of estimated baseline creatinine in surgical patients is not feasible. Patients at risk for post-operative AKI should have a pre-operative creatinine measurement to adequately assess post-operative AKI. © 2016 Scandinavian Physiological Society. Published by John Wiley & Sons Ltd.
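
A sketch of the back-calculation the abstract evaluates, inverting the 4-variable MDRD study equation under the fixed-GFR-of-75 assumption. This is an illustrative simplification: the constant 186 corresponds to conventional creatinine assays (175 is used for IDMS-traceable assays), and the race coefficient of the original equation is omitted for brevity.

```python
def mdrd_egfr(scr_mg_dl, age_years, female):
    """4-variable MDRD study equation (constant 186 for conventional assays);
    returns eGFR in mL/min/1.73 m^2, serum creatinine in mg/dL.
    Race coefficient omitted for brevity."""
    sex_factor = 0.742 if female else 1.0
    return 186.0 * scr_mg_dl ** -1.154 * age_years ** -0.203 * sex_factor

def back_calculated_baseline_scr(age_years, female, assumed_gfr=75.0):
    """Invert the MDRD equation for creatinine under the guideline
    assumption of a fixed baseline GFR of 75 mL/min/1.73 m^2."""
    sex_factor = 0.742 if female else 1.0
    return (assumed_gfr / (186.0 * age_years ** -0.203 * sex_factor)) ** (-1.0 / 1.154)

# Back-calculated baseline creatinine for a hypothetical 60-year-old man.
scr0 = back_calculated_baseline_scr(age_years=60, female=False)
```

Plugging the back-calculated creatinine into the forward equation returns the assumed GFR, which is the round-trip property any such inversion must satisfy; the study's point is that the assumed GFR, not the algebra, is what misclassifies patients.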

  3. Simple additive simulation overestimates real influence: altered nitrogen and rainfall modulate the effect of warming on soil carbon fluxes.

    Science.gov (United States)

    Ni, Xiangyin; Yang, Wanqin; Qi, Zemin; Liao, Shu; Xu, Zhenfeng; Tan, Bo; Wang, Bin; Wu, Qinggui; Fu, Changkun; You, Chengming; Wu, Fuzhong

    2017-08-01

    Experiments and models have led to a consensus that there is positive feedback between carbon (C) fluxes and climate warming. However, the effect of warming may be altered by regional and global changes in nitrogen (N) and rainfall levels, but the current understanding is limited. Through synthesizing global data on soil C pool, input and loss from experiments simulating N deposition, drought and increased precipitation, we quantified the responses of soil C fluxes and equilibrium to the three single factors and their interactions with warming. We found that warming slightly increased the soil C input and loss by 5% and 9%, respectively, but had no significant effect on the soil C pool. Nitrogen deposition alone increased the soil C input (+20%), but the interaction of warming and N deposition greatly increased the soil C input by 49%. Drought alone decreased the soil C input by 17%, while the interaction of warming and drought decreased the soil C input to a greater extent (-22%). Increased precipitation stimulated the soil C input by 15%, but the interaction of warming and increased precipitation had no significant effect on the soil C input. However, the soil C loss was not significantly affected by any of the interactions, although it was constrained by drought (-18%). These results implied that the positive C fluxes-climate warming feedback was modulated by the changing N and rainfall regimes. Further, we found that the additive effects of [warming × N deposition] and [warming × drought] on the soil C input and of [warming × increased precipitation] on the soil C loss were greater than their interactions, suggesting that simple additive simulation using single-factor manipulations may overestimate the effects on soil C fluxes in the real world. Therefore, we propose that more multifactorial experiments should be considered in studying Earth systems. © 2016 John Wiley & Sons Ltd.

  4. Extrinsic value orientation and affective forecasting: overestimating the rewards, underestimating the costs.

    Science.gov (United States)

    Sheldon, Kennon M; Gunz, Alexander; Nichols, Charles P; Ferguson, Yuna

    2010-02-01

    We examined affective forecasting errors as a possible explanation of the perennial appeal of extrinsic values and goals. Study 1 found that although people relatively higher in extrinsic (money, fame, image) compared to intrinsic (growth, intimacy, community) value orientation (REVO) are less happy, they nevertheless believe that attaining extrinsic goals offers a strong potential route to happiness. Study 2's longitudinal experimental design randomly assigned participants to pursue either 3 extrinsic or 3 intrinsic goals over 4 weeks, and REVO again predicted stronger forecasts regarding extrinsic goals. However, not even extrinsically oriented participants gained well-being benefits from attaining extrinsic goals, whereas all participants tended to gain in happiness from attaining intrinsic goals. Study 3 showed that the effect of REVO on forecasts is mediated by extrinsic individuals' belief that extrinsic goals will satisfy autonomy and competence needs. It appears that some people overestimate the emotional benefits of achieving extrinsic goals, to their potential detriment.

  5. Partial report and other sampling procedures overestimate the duration of iconic memory.

    Science.gov (United States)

    Appelman, I B

    1980-03-01

    In three experiments, subjects estimated the duration of a brief visual image (iconic memory) either directly by adjusting onset of a click to offset of the visual image, or indirectly with a Sperling partial report (sampling) procedure. The results indicated that partial report and other sampling procedures may reflect other brief phenomena along with iconic memory. First, the partial report procedure yields a greater estimate of the duration of iconic memory than the more direct click method. Second, the partial report estimate of the duration of iconic memory is affected if the subject is required to simultaneously retain a list of distractor items (memory load), while the click method estimate of the duration of iconic memory is not affected by a memory load. Finally, another sampling procedure based on visual cuing yields different estimates of the duration of iconic memory depending on how many items are cued. It was concluded that partial report and other sampling procedures overestimate the duration of iconic memory.

  6. Why do Models Overestimate Surface Ozone in the Southeastern United States?

    Science.gov (United States)

    Travis, Katherine R.; Jacob, Daniel J.; Fisher, Jenny A.; Kim, Patrick S.; Marais, Eloise A.; Zhu, Lei; Yu, Karen; Miller, Christopher C.; Yantosca, Robert M.; Sulprizio, Melissa P.; Thompson, Anne M.; Wennberg, Paul O.; Crounse, John D.; St Clair, Jason M.; Cohen, Ronald C.; Laughner, Joshua L.; Dibb, Jack E.; Hall, Samuel R.; Ullmann, Kirk; Wolfe, Glenn M.; Pollack, Illana B.; Peischl, Jeff; Neuman, Jonathan A.; Zhou, Xianliang

    2018-01-01

Ozone pollution in the Southeast US involves complex chemistry driven by emissions of anthropogenic nitrogen oxide radicals (NOx ≡ NO + NO2) and biogenic isoprene. Model estimates of surface ozone concentrations tend to be biased high in the region, and this is of concern for designing effective emission control strategies to meet air quality standards. We use detailed chemical observations from the SEAC4RS aircraft campaign in August and September 2013, interpreted with the GEOS-Chem chemical transport model at 0.25°×0.3125° horizontal resolution, to better understand the factors controlling surface ozone in the Southeast US. We find that the National Emission Inventory (NEI) for NOx from the US Environmental Protection Agency (EPA) is too high. This finding is based on SEAC4RS observations of NOx and its oxidation products, surface network observations of nitrate wet deposition fluxes, and OMI satellite observations of tropospheric NO2 columns. Our results indicate that NEI NOx emissions from mobile and industrial sources must be reduced by 30–60%, depending on the assumed contribution of soil NOx emissions. Upper tropospheric NO2 from lightning makes a large contribution to satellite observations of tropospheric NO2 that must be accounted for when using these data to estimate surface NOx emissions. We find that only half of isoprene oxidation proceeds by the high-NOx pathway to produce ozone; this fraction is only moderately sensitive to changes in NOx emissions because isoprene and NOx emissions are spatially segregated. GEOS-Chem with reduced NOx emissions provides an unbiased simulation of ozone observations from the aircraft, and reproduces the observed ozone production efficiency in the boundary layer as derived from a regression of ozone and NOx oxidation products. However, the model is still biased high by 8±13 ppb relative to observed surface ozone in the Southeast US. Ozonesondes launched during midday hours show a 7 ppb ozone decrease

  7. Why do models overestimate surface ozone in the Southeast United States?

    Directory of Open Access Journals (Sweden)

    K. R. Travis

    2016-11-01

Ozone pollution in the Southeast US involves complex chemistry driven by emissions of anthropogenic nitrogen oxide radicals (NOx ≡ NO + NO2) and biogenic isoprene. Model estimates of surface ozone concentrations tend to be biased high in the region, and this is of concern for designing effective emission control strategies to meet air quality standards. We use detailed chemical observations from the SEAC4RS aircraft campaign in August and September 2013, interpreted with the GEOS-Chem chemical transport model at 0.25° × 0.3125° horizontal resolution, to better understand the factors controlling surface ozone in the Southeast US. We find that the National Emission Inventory (NEI) for NOx from the US Environmental Protection Agency (EPA) is too high. This finding is based on SEAC4RS observations of NOx and its oxidation products, surface network observations of nitrate wet deposition fluxes, and OMI satellite observations of tropospheric NO2 columns. Our results indicate that NEI NOx emissions from mobile and industrial sources must be reduced by 30–60 %, depending on the assumed contribution of soil NOx emissions. Upper-tropospheric NO2 from lightning makes a large contribution to satellite observations of tropospheric NO2 that must be accounted for when using these data to estimate surface NOx emissions. We find that only half of isoprene oxidation proceeds by the high-NOx pathway to produce ozone; this fraction is only moderately sensitive to changes in NOx emissions because isoprene and NOx emissions are spatially segregated. GEOS-Chem with reduced NOx emissions provides an unbiased simulation of ozone observations from the aircraft and reproduces the observed ozone production efficiency in the boundary layer as derived from a regression of ozone and NOx oxidation products. However, the model is still biased high by 6 ± 14 ppb relative to observed surface ozone in the Southeast US. Ozonesondes

  8. Why do Models Overestimate Surface Ozone in the Southeastern United States?

    Science.gov (United States)

Travis, Katherine R.; Jacob, Daniel J.; Fisher, Jenny A.; Kim, Patrick S.; Marais, Eloise A.; Zhu, Lei; Yu, Karen; Miller, Christopher C.; Yantosca, Robert M.; Sulprizio, Melissa P.

    2016-01-01

Ozone pollution in the Southeast US involves complex chemistry driven by emissions of anthropogenic nitrogen oxide radicals (NOx ≡ NO + NO2) and biogenic isoprene. Model estimates of surface ozone concentrations tend to be biased high in the region, and this is of concern for designing effective emission control strategies to meet air quality standards. We use detailed chemical observations from the SEAC4RS aircraft campaign in August and September 2013, interpreted with the GEOS-Chem chemical transport model at 0.25° × 0.3125° horizontal resolution, to better understand the factors controlling surface ozone in the Southeast US. We find that the National Emission Inventory (NEI) for NOx from the US Environmental Protection Agency (EPA) is too high. This finding is based on SEAC4RS observations of NOx and its oxidation products, surface network observations of nitrate wet deposition fluxes, and OMI satellite observations of tropospheric NO2 columns. Our results indicate that NEI NOx emissions from mobile and industrial sources must be reduced by 30-60%, depending on the assumed contribution of soil NOx emissions. Upper tropospheric NO2 from lightning makes a large contribution to satellite observations of tropospheric NO2 that must be accounted for when using these data to estimate surface NOx emissions. We find that only half of isoprene oxidation proceeds by the high-NOx pathway to produce ozone; this fraction is only moderately sensitive to changes in NOx emissions because isoprene and NOx emissions are spatially segregated. GEOS-Chem with reduced NOx emissions provides an unbiased simulation of ozone observations from the aircraft, and reproduces the observed ozone production efficiency in the boundary layer as derived from a regression of ozone and NOx oxidation products. However, the model is still biased high by 8 ± 13 ppb relative to observed surface ozone in the Southeast US.
Ozonesondes launched during midday hours show a 7 ppb ozone
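The ozone production efficiency mentioned in these three records is conventionally estimated as the slope of an ordinary least-squares regression of ozone against NOx oxidation products (NOz). A minimal sketch of that slope calculation, using invented concentrations rather than SEAC4RS data:

```python
def ozone_production_efficiency(o3, noz):
    """Slope of an ordinary least-squares fit of O3 (ppb) against
    NOz (ppb): the usual observational estimate of ozone production
    efficiency (ppb ozone per ppb NOx oxidized)."""
    n = len(o3)
    mean_x = sum(noz) / n
    mean_y = sum(o3) / n
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(noz, o3))
    sxx = sum((x - mean_x) ** 2 for x in noz)
    return sxy / sxx

# Invented boundary-layer sample: 40 ppb background ozone plus
# 15 ppb of ozone per ppb of NOz.
print(round(ozone_production_efficiency([47.5, 55.0, 70.0],
                                        [0.5, 1.0, 2.0]), 6))  # → 15.0
```

In practice the regression is applied to many aircraft or surface observations at once, and an intercept (background ozone) is implicit in the least-squares fit.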

  9. Over-estimation of sea level measurements arising from water density anomalies within tide-wells - A case study at Zuari Estuary, Goa

    Digital Repository Service at National Institute of Oceanography (India)

    Joseph, A.; VijayKumar, K.; Desa, E.S.; Desa, E.; Peshwe, V.B.

    at the mouth of the Zuari estuary, and anomalies were reported at all periods except during peak summer and the onset of the summer monsoon. These anomalies lead to an over-estimation of sea level by a tide-well based gauge. The density difference, delta p...

  10. Overestimation of organic phosphorus in wetland soils by alkaline extraction and molybdate colorimetry.

    Science.gov (United States)

    Turner, Benjamin L; Newman, Susan; Reddy, K Ramesh

    2006-05-15

Accurate information on the chemical nature of soil phosphorus is essential for understanding its bioavailability and fate in wetland ecosystems. Solution phosphorus-31 nuclear magnetic resonance (31P NMR) spectroscopy was used to assess the conventional colorimetric procedure for phosphorus speciation in alkaline extracts of organic soils from the Florida Everglades. Molybdate colorimetry markedly overestimated organic phosphorus by between 30 and 54% compared to NMR spectroscopy. This was due in large part to the association of inorganic phosphate with organic matter, although the error was exacerbated in some samples by the presence of pyrophosphate, an inorganic polyphosphate that is not detected by colorimetry. The results have important implications for our understanding of phosphorus biogeochemistry in wetlands and suggest that alkaline extraction and solution 31P NMR spectroscopy is the only accurate method for quantifying organic phosphorus in wetland soils.
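The 30-54% figure above is the relative excess of the colorimetric organic-P estimate over the NMR-based value. The arithmetic is simple; the sketch below uses hypothetical extract concentrations, not the Everglades measurements:

```python
def organic_p_overestimate(colorimetric_po, nmr_po):
    """Percent by which a colorimetric organic-P estimate exceeds the
    31P NMR value; both inputs in the same units (e.g. mg P per kg soil)."""
    return 100.0 * (colorimetric_po - nmr_po) / nmr_po

# Hypothetical values: 400 mg/kg by colorimetry vs 300 mg/kg by NMR.
print(round(organic_p_overestimate(400.0, 300.0), 1))  # → 33.3
```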

  11. Evaluating significance in linear mixed-effects models in R.

    Science.gov (United States)

    Luke, Steven G

    2017-08-01

    Mixed-effects models are being used ever more frequently in the analysis of experimental data. However, in the lme4 package in R the standards for evaluating significance of fixed effects in these models (i.e., obtaining p-values) are somewhat vague. There are good reasons for this, but as researchers who are using these models are required in many cases to report p-values, some method for evaluating the significance of the model output is needed. This paper reports the results of simulations showing that the two most common methods for evaluating significance, using likelihood ratio tests and applying the z distribution to the Wald t values from the model output (t-as-z), are somewhat anti-conservative, especially for smaller sample sizes. Other methods for evaluating significance, including parametric bootstrapping and the Kenward-Roger and Satterthwaite approximations for degrees of freedom, were also evaluated. The results of these simulations suggest that Type 1 error rates are closest to .05 when models are fitted using REML and p-values are derived using the Kenward-Roger or Satterthwaite approximations, as these approximations both produced acceptable Type 1 error rates even for smaller samples.
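For a single fixed effect, the likelihood-ratio test evaluated in this paper reduces to comparing twice the log-likelihood difference of two nested models against a chi-square distribution with one degree of freedom. A minimal Python sketch of that p-value (the paper's context is R's lme4; here the chi-square(1) tail is written with the standard library's erfc, and the log-likelihood inputs are assumed to come from maximum-likelihood, not REML, fits):

```python
import math

def lrt_pvalue_df1(loglik_null, loglik_full):
    """p-value of a likelihood-ratio test for nested models differing
    by one fixed effect (1 degree of freedom).
    Uses the chi-square(1) tail: P(X > x) = erfc(sqrt(x / 2))."""
    stat = max(2.0 * (loglik_full - loglik_null), 0.0)
    return math.erfc(math.sqrt(stat / 2.0))

# The familiar 5% cutoff corresponds to a statistic of ~3.84 on 1 df.
print(round(lrt_pvalue_df1(-100.0, -98.07925), 3))  # → 0.05
```

This is the reference calculation whose small-sample anti-conservatism the simulations quantify; the Kenward-Roger and Satterthwaite approximations the paper recommends instead adjust the degrees of freedom of a t statistic rather than using this asymptotic chi-square tail.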

  12. Overestimation of Crop Root Biomass in Field Experiments Due to Extraneous Organic Matter.

    Science.gov (United States)

    Hirte, Juliane; Leifeld, Jens; Abiven, Samuel; Oberholzer, Hans-Rudolf; Hammelehle, Andreas; Mayer, Jochen

    2017-01-01

Root biomass is one of the most relevant root parameters for studies of plant response to environmental change, soil carbon modeling or estimations of soil carbon sequestration. A major source of error in root biomass quantification of agricultural crops in the field is the presence of extraneous organic matter in soil: dead roots from previous crops, weed roots, incorporated above ground plant residues and organic soil amendments, or remnants of soil fauna. Using the isotopic difference between recent maize root biomass and predominantly C3-derived extraneous organic matter, we determined the proportions of maize root biomass carbon of total carbon in root samples from the Swiss long-term field trial "DOK." We additionally evaluated the effects of agricultural management (bio-organic and conventional), sampling depth (0-0.25, 0.25-0.5, 0.5-0.75 m) and position (within and between maize rows), and root size class (coarse and fine roots) as defined by sieve mesh size (2 and 0.5 mm) on those proportions, and quantified the success rate of manual exclusion of extraneous organic matter from root samples. Only 60% of the root mass that we retrieved from field soil cores was actual maize root biomass from the current season. While the proportions of maize root biomass carbon were not affected by agricultural management, they increased consistently with soil depth, were higher within than between maize rows, and were higher in coarse (>2 mm) than in fine (≤2 and >0.5 mm) root samples. The success rate of manual exclusion of extraneous organic matter from root samples was related to agricultural management and, at best, about 60%. We assume that the composition of extraneous organic matter is strongly influenced by agricultural management and soil depth and governs the effect size of the investigated factors.
Extraneous organic matter may result in severe overestimation of recovered root biomass and has, therefore, large implications for soil carbon modeling and estimations
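The isotopic partitioning this study relies on is a two-end-member mixing model between C4 (maize) and C3 carbon. A sketch of that calculation, with typical literature delta-13C end-member values rather than the DOK trial's own:

```python
def maize_carbon_fraction(delta_sample, delta_c3=-27.0, delta_maize=-12.5):
    """Two-end-member isotope mixing: fraction of sample carbon derived
    from maize (C4) versus C3 sources. End-member delta-13C values
    (permil vs VPDB) are typical literature figures, assumed here for
    illustration only."""
    return (delta_sample - delta_c3) / (delta_maize - delta_c3)

# A root sample measuring -18.3 permil would be ~60% maize-derived,
# in line with the 60% share of true maize root biomass reported above.
print(round(maize_carbon_fraction(-18.3), 2))  # → 0.6
```

Multiplying this fraction by the total carbon mass of a root sample gives the maize-derived root carbon; everything else is attributed to extraneous organic matter.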

  13. Students with Non-Proficient Information Seeking Skills Greatly Over-Estimate Their Abilities. A Review of: Gross, Melissa, and Don Latham.

    Directory of Open Access Journals (Sweden)

    David Herron

    2008-06-01

the actual percentage correct was 65%. The estimated score was 50 and the actual 39, and the estimated comparison with their peers was 77% and the actual 53%. All three measures demonstrated a significant difference between estimated and actual values. On the ILT, the mean score for the bottom tier of the students was 34, and the mean score for the top tier was 42, showing a significant difference between the top and bottom tiers in a t-test. On the ILT, 23 students scored less than 39 (non-proficient) and 27 scored between 39 and 53 (proficient). Only one student, a top-quartile participant, showed advanced information literacy skills (a score above 53). In the post-survey, the students still over-estimated their performance, but to a lesser degree. All three groups adjusted their self-estimates in the post-survey in response to information skills testing, but the non-proficient group over-estimated their skills to a higher degree, on both pre- and post-surveys. The estimated percentage of correct answers for the whole group was 69%, but the actual was 65%; the estimated score was 44 and the actual 39; and the estimated comparison with their peers was 70% and the actual 53%. All three measures demonstrated a significant difference. All results show that the original hypothesis holds: non-proficient students over-estimate their abilities to a greater degree, both in terms of score and of performance relative to their peers. The LAS was used to see whether there was a relationship between student scores on the ILT and library anxiety. Bi-variate analysis was performed on the ILT scores and student total scores on the LAS, and the results show that library anxiety tended to decrease with higher scores on the ILT. This result was not expected from the theory. In the pre-survey, the students were asked how they had obtained their information literacy skills. The top tier indicated a reliance on more formal sources (e.g.,
school library media center, classroom, and/or public library), while the

  14. Ecosystem Model Performance at Wetlands: Results from the North American Carbon Program Site Synthesis

    Science.gov (United States)

    Sulman, B. N.; Desai, A. R.; Schroeder, N. M.; NACP Site Synthesis Participants

    2011-12-01

    Northern peatlands contain a significant fraction of the global carbon pool, and their responses to hydrological change are likely to be important factors in future carbon cycle-climate feedbacks. Global-scale carbon cycle modeling studies typically use general ecosystem models with coarse spatial resolution, often without peatland-specific processes. Here, seven ecosystem models were used to simulate CO2 fluxes at three field sites in Canada and the northern United States, including two nutrient-rich fens and one nutrient-poor, sphagnum-dominated bog, from 2002-2006. Flux residuals (simulated - observed) were positively correlated with measured water table for both gross ecosystem productivity (GEP) and ecosystem respiration (ER) at the two fen sites for all models, and were positively correlated with water table at the bog site for the majority of models. Modeled diurnal cycles at fen sites agreed well with eddy covariance measurements overall. Eddy covariance GEP and ER were higher during dry periods than during wet periods, while model results predicted either the opposite relationship or no significant difference. At the bog site, eddy covariance GEP had no significant dependence on water table, while models predicted higher GEP during wet periods. All models significantly over-estimated GEP at the bog site, and all but one over-estimated ER at the bog site. Carbon cycle models in peatland-rich regions could be improved by incorporating better models or measurements of hydrology and by inhibiting GEP and ER rates under saturated conditions. Bogs and fens likely require distinct treatments in ecosystem models due to differences in nutrients, peat properties, and plant communities.

  15. Standard duplex criteria overestimate the degree of stenosis after eversion carotid endarterectomy.

    Science.gov (United States)

    Benzing, Travis; Wilhoit, Cameron; Wright, Sharee; McCann, P Aaron; Lessner, Susan; Brothers, Thomas E

    2015-06-01

, 146-432 cm/s) after eCEA that were subsequently examined by axial imaging, the mean percentage stenosis was 8% ± 11% by NASCET, 11% ± 5% by ECST, and 20% ± 9% by CSA criteria. For eight pCEA arteries with PSV >125 cm/s (median velocity, 148 cm/s; interquartile range, 139-242 cm/s), the corresponding NASCET, ECST, and CSA stenoses were 8% ± 35%, 26% ± 32%, and 25% ± 33%, respectively. NASCET internal carotid diameter reduction of at least 50% was noted by axial imaging after two of the eight pCEAs, and the PSV exceeded 200 cm/s in each case. The presence of hemodynamically significant carotid artery restenosis may be overestimated by standard duplex criteria after eCEA and perhaps after pCEA. Insufficient information currently exists to determine what PSV corresponds to hemodynamically significant restenosis. Published by Elsevier Inc.

  16. Plant pathogens as biocontrol agents of Cirsium arvense – an overestimated approach?

    Directory of Open Access Journals (Sweden)

    Esther Müller

    2011-11-01

Cirsium arvense is one of the worst weeds in agriculture. As herbicides are not very effective and are not accepted in organic farming or in special habitats, possible biocontrol agents have been investigated for many decades. In particular, plant pathogens of C. arvense have received considerable interest and have been promoted as "mycoherbicides" or "bioherbicides". A total of 10 fungi and one bacterium have been proposed and tested as biocontrol agents against C. arvense. A variety of experiments analysed the noxious influence of spores or other parts of living fungi or bacteria on plants, while others used fungal or bacterial products, usually toxins. Combinations of spores with herbicides and combinations of several pathogens were also tested. All approaches turned out to be inappropriate with regard to target-plant specificity, effectiveness and application possibilities. As yet, none of the tested species or substances has achieved marketability, despite two patents on the use of Septoria cirsii and Phomopsis cirsii. We conclude that the potential of pathogens for biocontrol of C. arvense has largely been overestimated.

  17. Overestimation of myocardial infarct size on two-dimensional echocardiograms due to remodelling of the infarct zone.

    Science.gov (United States)

    Johnston, B J; Blinston, G E; Jugdutt, B I

    1994-01-01

To assess the effect of early regional diastolic shape distortion or bulging of infarct zones due to infarct expansion on estimates of regional left ventricular dysfunction and infarct size by two-dimensional echocardiographic imaging. Quantitative two-dimensional echocardiograms from patients with a first Q wave myocardial infarction and creatine kinase infarct size data, and normal subjects, were subjected to detailed analysis of regional left ventricular dysfunction and shape distortion in short-axis images by established methods. Regional left ventricular asynergy (akinesis and dyskinesis) and shape distortion indices (e.g., peak [Pk]/radius [ri]) were measured on endocardial diastolic outlines of short-axis images in 43 postinfarction patients (28 anterior and 15 inferior, 5.9 h after onset) and 11 normal subjects (controls). In the infarction group, endocardial surface area of asynergy was calculated by three-dimensional reconstruction of the images and infarct size from serial creatine kinase blood levels. Diastolic bulging of asynergic zones was found in all infarction patients. The regional shape distortion indices characterizing the area between the 'actual' bulging asynergic segment and the derived 'ideal' circular segment (excluding the bulge) on indexed sections were greater in infarct than control groups (Pk/ri 0.31 versus 0, P < 0.001). Importantly, the degree of distortion correlated with overestimation of asynergy (r = 0.89, P < 0.001), and the relation between infarct size and total 'ideal' asynergy showed a leftward shift from that with 'actual' asynergy. Early regional diastolic bulging of the infarct zone results in overestimation of regional ventricular dysfunction, especially in patients with anterior infarction. This effect should be considered when assessing effects of therapy on infarct size, remodelling and dysfunction using tomographic imaging.

  18. Central Pressure Appraisal: Clinical Validation of a Subject-Specific Mathematical Model.

    Directory of Open Access Journals (Sweden)

    Francesco Tosello

Current evidence suggests that aortic blood pressure has a superior prognostic value with respect to brachial pressure for cardiovascular events, but direct measurement is not feasible in daily clinical practice. The aim of the present study is the clinical validation of a multiscale mathematical model for non-invasive appraisal of central blood pressure from subject-specific characteristics. A total of 51 young males were selected for the present study. Aortic systolic and diastolic pressures were estimated with a mathematical model and compared to the most widely used non-invasive validated technique (SphygmoCor device, AtCor Medical, Australia). SphygmoCor was calibrated through diastolic and systolic brachial pressure obtained with a sphygmomanometer, while model inputs consist of brachial pressure, height, weight, age, left-ventricular end-systolic and end-diastolic volumes, and data from a pulse wave velocity study. Model-estimated systolic and diastolic central blood pressures were significantly related to SphygmoCor-assessed central systolic (r = 0.65, p < 0.0001) and diastolic (r = 0.84, p < 0.0001) blood pressures. The model showed a significant overestimation of systolic pressure (+7.8 (-2.2; 14) mmHg, p = 0.0003) and a significant underestimation of diastolic values (-3.2 (-7.5; 1.6) mmHg, p = 0.004), which imply a significant overestimation of central pulse pressure. Interestingly, the model prediction errors mirror the mean errors reported in a large meta-analysis characterizing the use of the SphygmoCor when non-invasive calibration is performed. In conclusion, multiscale mathematical model predictions are significantly related to SphygmoCor ones. Model-predicted systolic and diastolic aortic pressures differed from SphygmoCor-obtained pressures by less than 10 mmHg in 51% and 84% of the subjects, respectively.
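The overestimation and underestimation reported here are mean paired differences between the model estimate and the SphygmoCor reference, with an interval around each. A Bland-Altman-style sketch of that comparison, using invented pressure readings rather than the study's data:

```python
def bias_and_limits(model_vals, reference_vals):
    """Mean difference (bias) and 95% limits of agreement between a
    model estimate and a reference measurement, Bland-Altman style."""
    diffs = [m - r for m, r in zip(model_vals, reference_vals)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = (sum((d - bias) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Invented paired systolic readings (mmHg): model vs SphygmoCor.
bias, low, high = bias_and_limits([110.0, 115.0, 120.0, 125.0],
                                  [105.0, 110.0, 111.0, 116.0])
print(round(bias, 1))  # → 7.0
```

A positive bias for systolic pressure and a negative one for diastolic pressure is exactly the pattern the study reports, and together they inflate the model's central pulse pressure (systolic minus diastolic).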

  19. Evaluation of dust and trace metal estimates from the Community Multiscale Air Quality (CMAQ model version 5.0

    Directory of Open Access Journals (Sweden)

    K. W. Appel

    2013-07-01

The Community Multiscale Air Quality (CMAQ) model is a state-of-the-science air quality model that simulates the emission, transformation, transport, and fate of the many different air pollutant species that comprise particulate matter (PM), including dust (or soil). The CMAQ model version 5.0 (CMAQv5.0) has several enhancements over the previous version of the model for estimating the emission and transport of dust, including the ability to track the specific elemental constituents of dust and have the model-derived concentrations of those elements participate in chemistry. The latest version of the model also includes a parameterization to estimate emissions of dust due to wind action. The CMAQv5.0 modeling system was used to simulate the entire year 2006 for the continental United States, and the model estimates were evaluated against daily surface-based measurements from several air quality networks. The CMAQ modeling system overall did well replicating the observed soil concentrations in the western United States (mean bias generally around ±0.5 μg m−3); however, the model consistently overestimated the observed soil concentrations in the eastern United States (mean bias generally between 0.5–1.5 μg m−3), regardless of season. The performance of the individual trace metals was highly dependent on the network, species, and season, with relatively small biases for Fe, Al, Si, and Ti throughout the year at the Interagency Monitoring of Protected Visual Environments (IMPROVE) sites, while Ca, K, and Mn were overestimated and Mg underestimated. For the urban Chemical Speciation Network (CSN) sites, Fe, Mg, and Mn, while overestimated, had comparatively better performance throughout the year than the other trace metals, which were consistently overestimated, including very large overestimations of Al (380%), Ti (370%) and Si (470%) in the fall. An underestimation of nighttime mixing in the urban areas appears to contribute to the overestimation of

  20. Corrigendum to 'A novel model evaluation approach focusing on local and advected contributions to urban PM2.5 levels - application to Paris, France' published in Geosci. Model Dev., 7, 1483-1505, 2014

    International Nuclear Information System (INIS)

    Petetin, H.; Beekmann, M.; Sciare, J.; Bressi, M.; Rosso, A.; Sanchez, O.; Ghersi, V.

    2014-01-01

Due to an oversight in the production process, an essential word (overestimation) was left out of the abstract. The correct version of the abstract can be seen below. Aerosol simulations in chemistry transport models (CTMs) still suffer from numerous uncertainties, and diagnostic evaluations are required to point out major error sources. This paper presents an original approach to evaluate CTMs based on local and imported contributions in a large mega-city rather than urban background concentrations. The study is applied to the CHIMERE model in the Paris region (France) and considers the fine particulate matter (PM2.5) and its main chemical constituents (elemental and organic carbon, nitrate, sulfate and ammonium), for which daily measurements are available during a whole year at various stations (PARTICULES project). Back-trajectory data are used to locate the upwind station, from which the concentration is identified as the import, the local production being deduced from the urban concentration by subtraction. Uncertainties on these contributions are quantified. Small biases in urban background PM2.5 simulations (bias of +16%) hide significant error compensations between local and advected contributions, as well as in PM2.5 chemical compounds. In particular, wintertime organic matter (OM) imports appear strongly underestimated, while local OM and elemental carbon (EC) production is overestimated throughout the year. Erroneous continental wood burning emissions and missing secondary organic aerosol (SOA) pathways may explain errors in advected OM, while the overestimation of carbonaceous compounds is likely related to errors in emissions and dynamics. A statistically significant local formation of nitrate is also highlighted from observations, but missed by the model. Together with the overestimation of nitrate imports, it leads to a bias of +51% in the local PM2.5 contribution. Such an evaluation finally gives more

  1. Why overestimate or underestimate chronic kidney disease when correct estimation is possible?

    Science.gov (United States)

    De Broe, Marc E; Gharbi, Mohamed Benghanem; Zamd, Mohamed; Elseviers, Monique

    2017-04-01

There is no doubt that the introduction of the Kidney Disease: Improving Global Outcomes (KDIGO) guidelines 14 years ago, and their subsequent updates, have substantially contributed to the early detection of different stages of chronic kidney disease (CKD). Several recent studies from different parts of the world mention a CKD prevalence of 8-13%. However, some editorials and reviews have begun to describe the weaknesses of a substantial number of studies. Maremar (maladies rénales chroniques au Maroc) is a recently published prevalence study of CKD, hypertension, diabetes and obesity in a randomized, representative and high response rate (85%) sample of the adult population of Morocco that strictly applied the KDIGO guidelines. When adjusted to the actual adult population of Morocco (2015), a rather low prevalence of CKD (2.9%) was found. Several reasons for this low prevalence were identified; the tagine-like population pyramid of the Maremar population was a factor, but even more important were the confirmation of proteinuria found at first screening and the proof of chronicity of decreased estimated glomerular filtration rate (eGFR), eliminating false-positive results. In addition, it was found that applying an arbitrary single eGFR threshold (<60 mL/min/1.73 m2) resulted in significant 'overdiagnosis' (false positives) in older individuals (>55 years of age), particularly in those without proteinuria, haematuria or hypertension. It also resulted in a significant 'underdiagnosis' (false negatives) in younger individuals with an eGFR >60 mL/min/1.73 m2 and below the third percentile of their age-/gender-category. The use of the third percentile eGFR level as a cut-off, based on age-/gender-specific reference values of eGFR, allows the detection of these false positives and negatives. There is an urgent need for additional quality studies of the prevalence of CKD using the recent KDIGO guidelines in the correct way, to avoid overestimation of the true disease state of CKD by ≥50% with potentially dramatic consequences. © The Author 2017. Published by Oxford

  2. Overestimation of heterosexually attributed AIDS deaths is associated with immature psychological defence mechanisms and clitoral masturbation during penile-vaginal intercourse.

    Science.gov (United States)

    Brody, S; Costa, R M

    2009-12-01

Research shows that (1) greater use of immature psychological defence mechanisms (associated with psychopathology) is associated with lesser orgasmic consistency from penile-vaginal intercourse (PVI), but greater frequency of other sexual behaviours and greater condom use for PVI, and (2) unlike the vectors of receptive anal intercourse and punctures, HIV acquisition during PVI is extremely unlikely in reasonably healthy persons. However, research on the relationship between overestimation of AIDS deaths due to 'heterosexual transmission' (often misunderstood as only PVI), sexual behaviour and mental health has been lacking. Two hundred and twenty-one Scottish women completed the Defense Style Questionnaire, reported past-month frequencies of their various sexual activities, and estimated the total number of women who died from AIDS in Scotland nominally as a result of heterosexual transmission in the UK from a partner not known to be an injecting drug user, bisexual or infected through transfusion. The average respondent overestimated by 226,000%. Women providing lower estimates were less likely to use immature psychological defences, and had a lower frequency of orgasms from clitoral masturbation during PVI and from vibrator use. The results indicate that those who perceive that 'heterosexual transmission' led to many AIDS deaths have poorer psychological functioning, and might be less able to appreciate PVI.

  3. Validation of lower tropospheric carbon monoxide inferred from MOZART model simulation over India

    Science.gov (United States)

    Yarragunta, Y.; Srivastava, S.; Mitra, D.

    2017-02-01

    In the present study, a MOZART-4 (Model for Ozone and Related chemical Tracers, version 4) simulation has been made for 2003-2007 and compared with satellite and in-situ observations, with a specific focus on the Indian subcontinent, to illustrate the capabilities of the MOZART-4 model. The model-simulated CO has been compared with the latest version (version 6) of MOPITT (Measurements Of Pollution In The Troposphere) carbon monoxide (CO) retrievals at 900, 800 and 700 hPa. The model reproduces the major features present in the satellite observations. However, the model significantly overestimates CO over the entire Indian region at 900 hPa and moderately overestimates it at 800 hPa and 700 hPa. The frequency distribution of all simulated data points with respect to the MOZART error shows a maximum in the 10-20% error range at all pressure levels. Over the total Indian landmass, the percentages of gridded CO data overestimated in the range of 0-30% at 900 hPa, 800 hPa and 700 hPa are 58%, 62% and 66%, respectively. The study reflects very good correlation between the two datasets over Central India (CI) and Southern India (SI), with a coefficient of determination (r2) of 0.68-0.78 and 0.70-0.78 over CI and SI, respectively. Weak correlation is evident over Northern India (NI), with r2 values of 0.1-0.3. Over Eastern India (EI), good correlation is observed at 800 hPa (r2 = 0.72) and 700 hPa (r2 = 0.66), whereas the correlation at 900 hPa is moderately weak (r2 = 0.48). In contrast, over Western India (WI), strong correlation is evident at 900 hPa (r2 = 0.64) and a moderately weak association is present at 800 hPa and 700 hPa. The model fairly reproduces the seasonal cycle of CO in the lower troposphere over most of the Indian regions. However, during June to December, the model shows an overestimation over NI whose magnitude increases from the 900 hPa to the 700 hPa level. During April-June, model results coincide with observed CO concentrations over SI.
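    For readers reproducing this kind of gridpoint model-versus-observation comparison, the agreement statistics quoted above (r2 and mean bias) are straightforward to compute from paired values. The sketch below uses made-up CO values purely for illustration, not the study's data.

```python
# Illustrative sketch (hypothetical data, not the MOZART/MOPITT values):
# summarising a model-vs-observation comparison with the coefficient of
# determination (r2) and the mean bias.

def r_squared(model, obs):
    """Square of the Pearson correlation between two equal-length series."""
    n = len(model)
    mean_m = sum(model) / n
    mean_o = sum(obs) / n
    cov = sum((m - mean_m) * (o - mean_o) for m, o in zip(model, obs))
    var_m = sum((m - mean_m) ** 2 for m in model)
    var_o = sum((o - mean_o) ** 2 for o in obs)
    return cov * cov / (var_m * var_o)

# Hypothetical CO values (ppbv) at a few grid points
obs = [120.0, 150.0, 180.0, 210.0, 240.0]
model = [140.0, 165.0, 200.0, 225.0, 270.0]  # biased high, as in the study

bias = sum(m - o for m, o in zip(model, obs)) / len(obs)
print(round(r_squared(model, obs), 3), round(bias, 1))
```

    An r2 near 1 combined with a positive mean bias is exactly the situation the abstract describes: the model captures the spatial pattern while overestimating the magnitude.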

  4. Fronts and precipitation in CMIP5 models for the austral winter of the Southern Hemisphere

    Science.gov (United States)

    Blázquez, Josefina; Solman, Silvina A.

    2018-04-01

    The wintertime front climatology and the relationship between fronts and precipitation, as depicted by a group of CMIP5 models, are evaluated over the Southern Hemisphere (SH). Frontal activity is represented by an index that takes into account the vorticity, the temperature gradient and the specific humidity at the 850 hPa level. ERA-Interim reanalysis and GPCP datasets are used to assess the performance of the models in the present climate. Overall, it is found that the models adequately reproduce the main features of frontal activity and front frequency over the SH. Total precipitation is overestimated in most of the models, especially the maximum values over the mid-latitudes. This overestimation could be related to the high precipitation frequencies identified in some of the models evaluated. The relationship between fronts and precipitation has also been evaluated in terms of both the frequency of frontal precipitation and the percentage of precipitation due to fronts. In general terms, the models overestimate the proportion of frontal to total precipitation. In contrast with the frequency of total precipitation, the frequency of frontal precipitation is well reproduced by the models, with the highest values located at the mid-latitudes. The results suggest that the models represent very well the dynamic forcing (fronts) and the frequency of frontal precipitation, though the amount of precipitation due to fronts is overestimated.

  5. Assessment of an extended version of the Jenkinson-Collison classification on CMIP5 models over Europe

    Science.gov (United States)

    Otero, Noelia; Sillmann, Jana; Butler, Tim

    2018-03-01

    A gridded, geographically extended weather type classification has been developed based on the Jenkinson-Collison (JC) classification system and used to evaluate the representation of weather types over Europe in a suite of climate model simulations. To this aim, a set of models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) is compared with the circulation from two reanalysis products. Furthermore, we examine seasonal changes between simulated frequencies of weather types under present and future climate conditions. The models are in reasonably good agreement with the reanalyses, but some discrepancies occur: cyclonic days are overestimated over North Europe and underestimated over South Europe, while anticyclonic situations are overestimated over South Europe and underestimated over North Europe. Low flow conditions are generally underestimated, especially in summer over South Europe, and Westerly conditions are generally overestimated. The projected frequencies of weather types in the late twenty-first century suggest an increase of Anticyclonic days over South Europe in all seasons except summer, while Westerly days increase over North and Central Europe, particularly in winter. We find significant changes in the frequency of Low flow conditions and the Easterly type, which become more frequent during the warmer seasons over Southeast and Southwest Europe, respectively. Our results indicate that in winter the Westerly type has significant impacts on positive anomalies of maximum and minimum temperature over most of Europe. Except in winter, the warmer temperatures are linked to Easterly, Anticyclonic and Low flow conditions, especially over the Mediterranean area. Furthermore, we show that changes in the frequency of weather types represent a minor contribution to the total change in European temperatures, which is mainly driven by changes in the temperature anomalies associated with the weather types themselves.

  6. Recent progress in biomass burning research: a perspective from analyses of satellite data and model studies. (Invited)

    Science.gov (United States)

    Logan, J. A.

    2010-12-01

    Significant progress has been made in using satellite data to provide bottom-up constraints on biomass burning (BB) emissions. However, inverse studies with CO satellite data imply that tropical emissions are underestimated by current inventories, while model simulations of the ARCTAS period imply that the FLAMBE estimates of extratropical emissions are significantly overestimated. Injection heights of emissions from BB have been quantified recently using MISR data, and these data provide some constraints on 1-d plume models. I will discuss recent results in these areas, highlighting future research needs.

  7. On low-frequency errors of uniformly modulated filtered white-noise models for ground motions

    Science.gov (United States)

    Safak, Erdal; Boore, David M.

    1988-01-01

    Low-frequency errors of a commonly used non-stationary stochastic model (the uniformly modulated filtered white-noise model) for earthquake ground motions are investigated. It is shown both analytically and by numerical simulation that uniformly modulated filtered white-noise-type models systematically overestimate the spectral response for periods longer than the effective duration of the earthquake, because of the built-in low-frequency errors in the model. These errors, which are significant for low-magnitude short-duration earthquakes, can be eliminated by using filtered shot-noise-type models (i.e., white noise modulated by the envelope first, and then filtered).
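    The distinction between the two constructions is purely one of operation order. The sketch below (an illustration, not the paper's implementation) contrasts "filter then modulate" (the uniformly modulated filtered white-noise model) with "modulate then filter" (the filtered shot-noise model); the envelope shape and the one-pole filter are arbitrary choices for demonstration.

```python
# Sketch of the two ground-motion model constructions discussed above.
# Envelope and filter are illustrative; both operate on the same white noise.
import math
import random

def one_pole_lowpass(x, a=0.9):
    """Simple recursive low-pass filter: y[n] = a*y[n-1] + (1-a)*x[n]."""
    y, prev = [], 0.0
    for v in x:
        prev = a * prev + (1 - a) * v
        y.append(prev)
    return y

rng = random.Random(0)
n = 2000
white = [rng.gauss(0.0, 1.0) for _ in range(n)]
# Envelope rising then decaying over the record (gamma-like shape)
env = [(i / n) * math.exp(-5.0 * i / n) for i in range(n)]

# Filtered shot-noise type: modulate first, then filter
modulated_filtered = one_pole_lowpass([w * e for w, e in zip(white, env)])
# Uniformly modulated type: filter first, then modulate
filtered_modulated = [y * e for y, e in zip(one_pole_lowpass(white), env)]
```

    The two records agree closely where the envelope varies slowly, but diverge near its fastest changes; it is this difference in the low-frequency content that the paper identifies as the source of the long-period overestimation.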

  8. Multi-model evaluation of short-lived pollutant distributions over east Asia during summer 2008

    Science.gov (United States)

    Quennehen, B.; Raut, J.-C.; Law, K. S.; Daskalakis, N.; Ancellet, G.; Clerbaux, C.; Kim, S.-W.; Lund, M. T.; Myhre, G.; Olivié, D. J. L.; Safieddine, S.; Skeie, R. B.; Thomas, J. L.; Tsyro, S.; Bazureau, A.; Bellouin, N.; Hu, M.; Kanakidou, M.; Klimont, Z.; Kupiainen, K.; Myriokefalitakis, S.; Quaas, J.; Rumbold, S. T.; Schulz, M.; Cherian, R.; Shimizu, A.; Wang, J.; Yoon, S.-C.; Zhu, T.

    2016-08-01

    The ability of seven state-of-the-art chemistry-aerosol models to reproduce distributions of tropospheric ozone and its precursors, as well as aerosols over eastern Asia in summer 2008, is evaluated. The study focuses on the performance of models used to assess impacts of pollutants on climate and air quality as part of the EU ECLIPSE project. Models, run using the same ECLIPSE emissions, are compared over different spatial scales to in situ surface, vertical profiles and satellite data. Several rather clear biases are found between model results and observations, including overestimation of ozone at rural locations downwind of the main emission regions in China, as well as downwind over the Pacific. Several models produce too much ozone over polluted regions, which is then transported downwind. Analysis points to different factors related to the ability of models to simulate VOC-limited regimes over polluted regions and NOx-limited regimes downwind. This may also be linked to biases compared to satellite NO2, indicating overestimation of NO2 over and to the north of the northern China Plain emission region. On the other hand, model NO2 is too low to the south and west of this region and over South Korea/Japan. Overestimation of ozone is linked to systematic underestimation of CO, particularly at rural sites and downwind of the main Chinese emission regions. This is likely to be due to enhanced destruction of CO by OH. Overestimation of Asian ozone and its transport downwind implies that radiative forcing from this source may be overestimated. Model-observation discrepancies over Beijing do not appear to be due to emission controls linked to the Olympic Games in summer 2008. With regard to aerosols, most models reproduce the satellite-derived AOD patterns over eastern China. Our study nevertheless reveals an overestimation of ECLIPSE model mean surface BC and sulphate aerosols in urban China in summer 2008. 
The effect of the short-term emission mitigation in Beijing

  9. Multi-model evaluation of short-lived pollutant distributions over east Asia during summer 2008

    Directory of Open Access Journals (Sweden)

    B. Quennehen

    2016-08-01

    Full Text Available The ability of seven state-of-the-art chemistry–aerosol models to reproduce distributions of tropospheric ozone and its precursors, as well as aerosols over eastern Asia in summer 2008, is evaluated. The study focuses on the performance of models used to assess impacts of pollutants on climate and air quality as part of the EU ECLIPSE project. Models, run using the same ECLIPSE emissions, are compared over different spatial scales to in situ surface, vertical profiles and satellite data. Several rather clear biases are found between model results and observations, including overestimation of ozone at rural locations downwind of the main emission regions in China, as well as downwind over the Pacific. Several models produce too much ozone over polluted regions, which is then transported downwind. Analysis points to different factors related to the ability of models to simulate VOC-limited regimes over polluted regions and NOx-limited regimes downwind. This may also be linked to biases compared to satellite NO2, indicating overestimation of NO2 over and to the north of the northern China Plain emission region. On the other hand, model NO2 is too low to the south and west of this region and over South Korea/Japan. Overestimation of ozone is linked to systematic underestimation of CO, particularly at rural sites and downwind of the main Chinese emission regions. This is likely to be due to enhanced destruction of CO by OH. Overestimation of Asian ozone and its transport downwind implies that radiative forcing from this source may be overestimated. Model-observation discrepancies over Beijing do not appear to be due to emission controls linked to the Olympic Games in summer 2008. With regard to aerosols, most models reproduce the satellite-derived AOD patterns over eastern China. Our study nevertheless reveals an overestimation of ECLIPSE model mean surface BC and sulphate aerosols in urban China in summer 2008. The effect of the short-term emission

  10. Instability of Reference Diameter in the Evaluation of Stenosis After Coronary Angioplasty: Percent Diameter Stenosis Overestimates Dilative Effects Due to Reference Diameter Reduction

    International Nuclear Information System (INIS)

    Hirami, Ryouichi; Iwasaki, Kohichiro; Kusachi, Shozo; Murakami, Takashi; Hina, Kazuyoshi; Matano, Shigeru; Murakami, Masaaki; Kita, Toshimasa; Sakakibara, Noburu; Tsuji, Takao

    2000-01-01

    Purpose: To examine changes in the reference segment luminal diameter after coronary angioplasty. Methods: Sixty-one patients with stable angina pectoris or old myocardial infarction were examined. Coronary angiograms were recorded before coronary angioplasty (pre-angioplasty), immediately after (post-angioplasty), and 3 months after. Artery diameters were measured on cine film using quantitative coronary angiographic analysis. Results: The diameters of the proximal segment not involved in the balloon inflation and of segments in the other artery did not change significantly after angioplasty, but the reference segment diameter decreased significantly (by 4.7%). More than 10% luminal reduction was observed in seven patients (11%) and more than 5% reduction in 25 patients (41%). More than 5% underestimation of the stenosis was observed in 22 patients (36%) when the post-angioplasty reference diameter was used instead of the pre-angioplasty measurement, and more than 10% underestimation was observed in five patients (8%). Conclusion: This study indicated that evaluation by percent diameter stenosis, with the reference diameter taken immediately after angioplasty, overestimates the dilative effects of coronary angioplasty; it is therefore better to evaluate the efficacy of angioplasty using the absolute diameter in addition to percent luminal stenosis.
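    The mechanism is simple arithmetic: percent diameter stenosis is computed relative to the reference diameter, so if the reference segment itself shrinks after the procedure, the same residual lumen yields a lower stenosis figure. The worked example below uses hypothetical diameters (only the ~4.7% reference reduction is taken from the abstract).

```python
# Worked example (hypothetical diameters) of the bias described above:
# a post-angioplasty reduction of the reference diameter makes the
# residual stenosis look milder than it is.

def percent_stenosis(lesion_mm, reference_mm):
    """Percent diameter stenosis relative to a reference segment."""
    return (1.0 - lesion_mm / reference_mm) * 100.0

lesion_after = 1.5                      # mm, residual lumen at the dilated site
ref_before = 3.00                       # mm, pre-angioplasty reference diameter
ref_after = ref_before * (1 - 0.047)    # ~4.7% reference reduction, as reported

using_pre = percent_stenosis(lesion_after, ref_before)
using_post = percent_stenosis(lesion_after, ref_after)
print(round(using_pre, 1), round(using_post, 1))
```

    With these numbers the residual stenosis is 50.0% against the pre-angioplasty reference but only about 47.5% against the shrunken post-angioplasty reference, i.e. the dilative effect is overestimated by a few percentage points, consistent with the study's conclusion.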

  11. The Effect of Primary Cancer Cell Culture Models on the Results of Drug Chemosensitivity Assays: The Application of Perfusion Microbioreactor System as Cell Culture Vessel

    Science.gov (United States)

    Chen, Yi-Dao; Huang, Shiang-Fu; Wang, Hung-Ming

    2015-01-01

    To precisely and faithfully perform cell-based drug chemosensitivity assays, a well-defined and biologically relevant culture condition is required. For the former, a perfusion microbioreactor system capable of providing a stable culture condition was adopted. For the latter, however, little is known about the impact of culture models on the physiology and chemosensitivity assay results of primary oral cavity cancer cells. To address these issues, experiments were performed. Results showed that a minor environmental pH change could significantly affect the metabolic activity of cells, demonstrating the importance of a stable culture condition for such assays. Moreover, the culture models could also significantly influence the metabolic activity and proliferation of cells. Furthermore, the choice of culture models might lead to different outcomes of chemosensitivity assays. Compared with the similar test based on tumor-level assays, the spheroid model could overestimate the drug resistance of cells to cisplatin, whereas the 2D and 3D culture models might overestimate the chemosensitivity of cells to such an anticancer drug. In this study, the 3D culture models with the same cell density as that in tumor samples showed chemosensitivity assay results comparable to the tumor-level assays. Overall, this study has provided some fundamental information for establishing a precise and faithful drug chemosensitivity assay. PMID:25654105

  12. Evaluation of the WRF-Urban Modeling System Coupled to Noah and Noah-MP Land Surface Models Over a Semiarid Urban Environment

    Science.gov (United States)

    Salamanca, Francisco; Zhang, Yizhou; Barlage, Michael; Chen, Fei; Mahalov, Alex; Miao, Shiguang

    2018-03-01

    We have augmented the existing capabilities of the integrated Weather Research and Forecasting (WRF)-urban modeling system by coupling three urban canopy models (UCMs) available in the WRF model with the new community Noah with multiparameterization options (Noah-MP) land surface model (LSM). The WRF-urban modeling system's performance has been evaluated by conducting six numerical experiments at high spatial resolution (1 km horizontal grid spacing) during a 15 day clear-sky summertime period for a semiarid urban environment. To assess the relative importance of representing urban surfaces, three different urban parameterizations are used with the Noah and Noah-MP LSMs, respectively, over the two major metropolitan areas of Arizona: Phoenix and Tucson. Our results demonstrate that Noah-MP reproduces the daily evolution of surface skin temperature, near-surface air temperature (especially nighttime temperature) and wind speed somewhat better than Noah. Concerning the urban areas, the bulk urban parameterization overestimates nighttime 2 m air temperature compared to the single-layer and multilayer UCMs, which more accurately reproduce the daily evolution of near-surface air temperature. Regarding near-surface wind speed, only the multilayer UCM was able to realistically reproduce the daily evolution of wind speed, although maximum winds were slightly overestimated, while both the single-layer and bulk urban parameterizations overestimated wind speed considerably. Based on these results, this paper demonstrates that the new community Noah-MP LSM coupled to a UCM is a promising physics-based predictive modeling tool for urban applications.

  13. Overestimation of Albumin Measured by Bromocresol Green vs Bromocresol Purple Method: Influence of Acute-Phase Globulins.

    Science.gov (United States)

    Garcia Moreira, Vanessa; Beridze Vaktangova, Nana; Martinez Gago, Maria Dolores; Laborda Gonzalez, Belen; Garcia Alonso, Sara; Fernandez Rodriguez, Eloy

    2018-05-22

    Serum albumin is usually measured with dye-binding assays such as the bromocresol green (BCG) and bromocresol purple (BCP) methods. The aim of this paper was to examine the differences in albumin measurements between the Advia2400 BCG method (AlbBCG), the Dimension RxL BCP method (AlbBCP) and capillary zone electrophoresis (CZE). Albumin concentrations from 165 serum samples were analysed using AlbBCG, AlbBCP and CZE. CZE was employed to estimate the different serum protein fractions. The influence of globulins on the albumin concentration discrepancies between methods was estimated, as well as the impact of the albumin method on adjusted calcium (aCa) concentrations. MedCalc was employed for statistical analysis, with P < 0.05 considered significant. AlbBCG results were positively biased versus CZE (3.54 g/L). There was good agreement between CZE and AlbBCP. Albumin results from the BCP and BCG methods may show unacceptable differences and cause clinical confusion, especially at lower albumin concentrations. Serum acute-phase proteins contribute to overestimation of the albumin concentration using AlbBCG.

  14. A review and model assessment of 32P and 33P uptake to biota in freshwater systems

    International Nuclear Information System (INIS)

    Smith, J.T.; Bowes, M.J.; Cailes, C.R.

    2011-01-01

    Bioaccumulation of key short-lived radionuclides such as 131 I and 32,33 P may be over-estimated since concentration ratios (CRs) are often based on values for the corresponding stable isotope which do not account for radioactive decay during uptake via the food chain. This study presents estimates for bioaccumulation of radioactive phosphorus which account for both radioactive decay and varying ambient levels of stable P in the environment. Recommended interim CR values for radioactive forms of P as a function of bioavailable stable phosphorus in the water body are presented. Values of CR are presented for three different trophic levels of the aquatic food chain; foodstuffs from all three trophic levels may potentially be consumed by humans. It is concluded that current recommended values of the CR are likely to be significantly over-estimated for radioactive phosphorus in many freshwater systems, particularly lowland rivers. Further research is recommended to field-validate these models and assess their uncertainty. The relative importance of food-chain uptake and direct uptake from water are also assessed from a review of the literature. It can be concluded that food-chain uptake is the dominant accumulation pathway in fish and hence accumulation factors for radioactive phosphorus in farmed fish are likely to be significantly lower than those for wild fish. - Highlights: → A model is developed for radiophosphorus uptake to fish. → Concentration ratios for 32,33 P in fish may be over-estimated in freshwater systems. → New recommended values for 32,33 P concentration ratios are given. → Farmed fish are likely to have much lower 32,33 P uptake than wild fish.
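    The core point of the abstract, that radioactive decay competes with biological uptake and turnover, can be illustrated with a simple first-order kinetics sketch. This is not the authors' model: the steady-state formula below, the turnover rate and the stable-P concentration ratio are illustrative assumptions; only the half-lives of 32P (~14.3 d) and 33P (~25.3 d) are physical constants.

```python
# Illustrative first-order sketch (not the paper's model) of why a
# concentration ratio (CR) based on stable P over-estimates the CR for a
# short-lived isotope: at steady state, organism concentration is
# uptake / (biological loss + radioactive decay).
import math

def decay_constant(half_life_days):
    return math.log(2) / half_life_days

def cr_radioactive(cr_stable, bio_loss_rate_per_day, half_life_days):
    """Steady-state CR for the radioisotope under first-order uptake/loss."""
    lam = decay_constant(half_life_days)
    return cr_stable * bio_loss_rate_per_day / (bio_loss_rate_per_day + lam)

cr_stable = 50000.0   # hypothetical stable-P concentration ratio for fish
k_b = 0.02            # per day, hypothetical biological turnover rate

cr_p32 = cr_radioactive(cr_stable, k_b, 14.3)   # 32P, T1/2 ~ 14.3 d
cr_p33 = cr_radioactive(cr_stable, k_b, 25.3)   # 33P, T1/2 ~ 25.3 d
print(round(cr_p32), round(cr_p33))
```

    In this toy model the radioisotope CR is well below the stable-P CR whenever the decay constant is comparable to or larger than the biological turnover rate, and the longer-lived 33P retains a larger fraction of the stable-P CR than 32P, which is the qualitative behaviour the review argues current recommended values ignore.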

  15. The reliability of grazing rate estimates from dilution experiments: Have we over-estimated rates of organic carbon consumption by microzooplankton?

    Directory of Open Access Journals (Sweden)

    J. R. Dolan

    2005-01-01

    Full Text Available According to a recent global analysis, microzooplankton grazing is surprisingly invariant, ranging only between 59 and 74% of phytoplankton primary production across systems differing in seasonality, trophic status, latitude, or salinity. Thus an important biological process in the world ocean, the daily consumption of recently fixed carbon, appears nearly constant. We believe this conclusion is an artefact because dilution experiments are (1) prone to providing over-estimates of grazing rates and (2) unlikely to furnish evidence of low grazing rates. In our view the overall average rate of microzooplankton grazing probably does not exceed 50% of primary production and may be even lower in oligotrophic systems.

  16. Teaching physical activities to students with significant disabilities using video modeling.

    Science.gov (United States)

    Cannella-Malone, Helen I; Mizrachi, Sharona V; Sabielny, Linsey M; Jimenez, Eliseo D

    2013-06-01

    The objective of this study was to examine the effectiveness of video modeling on teaching physical activities to three adolescents with significant disabilities. The study implemented a multiple baseline across six physical activities (three per student): jumping rope, scooter board with cones, ladder drill (i.e., feet going in and out), ladder design (i.e., multiple steps), shuttle run, and disc ride. Additional prompt procedures (i.e., verbal, gestural, visual cues, and modeling) were implemented within the study. After the students mastered the physical activities, we tested to see if they would link the skills together (i.e., complete an obstacle course). All three students made progress learning the physical activities, but only one learned them with video modeling alone (i.e., without error correction). Video modeling can be an effective tool for teaching students with significant disabilities various physical activities, though additional prompting procedures may be needed.

  17. Carbon and energy fluxes in cropland ecosystems: a model-data comparison

    Science.gov (United States)

    Lokupitiya, E.; Denning, A. Scott; Schaefer, K.; Ricciuto, D.; Anderson, R.; Arain, M. A.; Baker, I.; Barr, A. G.; Chen, G.; Chen, J.M.; Ciais, P.; Cook, D.R.; Dietze, M.C.; El Maayar, M.; Fischer, M.; Grant, R.; Hollinger, D.; Izaurralde, C.; Jain, A.; Kucharik, C.J.; Li, Z.; Liu, S.; Li, L.; Matamala, R.; Peylin, P.; Price, D.; Running, S. W.; Sahoo, A.; Sprintsin, M.; Suyker, A.E.; Tian, H.; Tonitto, Christina; Torn, M.S.; Verbeeck, Hans; Verma, S.B.; Xue, Y.

    2016-01-01

    Croplands are highly productive ecosystems that contribute to land–atmosphere exchange of carbon, energy, and water during their short growing seasons. We evaluated and compared net ecosystem exchange (NEE), latent heat flux (LE), and sensible heat flux (H) simulated by a suite of ecosystem models at five agricultural eddy covariance flux tower sites in the central United States as part of the North American Carbon Program Site Synthesis project. Most of the models overestimated H and underestimated LE during the growing season, leading to overall higher Bowen ratios compared to the observations. Most models systematically underpredicted NEE, especially at rain-fed sites. Certain crop-specific models that were developed considering the high productivity and associated physiological changes in specific crops better predicted the NEE and LE at both rain-fed and irrigated sites. Models with specific parameterization for different crops better simulated the inter-annual variability of NEE for maize-soybean rotation compared to those models with a single generic crop type. Stratification according to basic model formulation and phenological methodology did not explain significant variation in model performance across these sites and crops. The underprediction of NEE and LE and overprediction of H by most of the models suggests that models developed and parameterized for natural ecosystems cannot accurately predict the more robust physiology of highly bred and intensively managed crop ecosystems. When coupled in Earth System Models, it is likely that the excessive physiological stress simulated in many land surface component models leads to overestimation of temperature and atmospheric boundary layer depth, and underestimation of humidity and CO2 seasonal uptake over agricultural regions.

  18. Carbon and energy fluxes in cropland ecosystems: a model-data comparison

    Energy Technology Data Exchange (ETDEWEB)

    Lokupitiya, E.; Denning, A. S.; Schaefer, K.; Ricciuto, D.; Anderson, R.; Arain, M. A.; Baker, I.; Barr, A. G.; Chen, G.; Chen, J. M.; Ciais, P.; Cook, D. R.; Dietze, M.; El Maayar, M.; Fischer, M.; Grant, R.; Hollinger, D.; Izaurralde, C.; Jain, A.; Kucharik, C.; Li, Z.; Liu, S.; Li, L.; Matamala, R.; Peylin, P.; Price, D.; Running, S. W.; Sahoo, A.; Sprintsin, M.; Suyker, A. E.; Tian, H.; Tonitto, C.; Torn, M.; Verbeeck, Hans; Verma, S. B.; Xue, Y.

    2016-06-03

    Croplands are highly productive ecosystems that contribute to land–atmosphere exchange of carbon, energy, and water during their short growing seasons. We evaluated and compared net ecosystem exchange (NEE), latent heat flux (LE), and sensible heat flux (H) simulated by a suite of ecosystem models at five agricultural eddy covariance flux tower sites in the central United States as part of the North American Carbon Program Site Synthesis project. Most of the models overestimated H and underestimated LE during the growing season, leading to overall higher Bowen ratios compared to the observations. Most models systematically underpredicted NEE, especially at rain-fed sites. Certain crop-specific models that were developed considering the high productivity and associated physiological changes in specific crops better predicted the NEE and LE at both rain-fed and irrigated sites. Models with specific parameterization for different crops better simulated the inter-annual variability of NEE for maize-soybean rotation compared to those models with a single generic crop type. Stratification according to basic model formulation and phenological methodology did not explain significant variation in model performance across these sites and crops. The underprediction of NEE and LE and overprediction of H by most of the models suggests that models developed and parameterized for natural ecosystems cannot accurately predict the more robust physiology of highly bred and intensively managed crop ecosystems. When coupled in Earth System Models, it is likely that the excessive physiological stress simulated in many land surface component models leads to overestimation of temperature and atmospheric boundary layer depth, and underestimation of humidity and CO2 seasonal uptake over agricultural regions.

  19. Influence of an urban canopy model and PBL schemes on vertical mixing for air quality modeling over Greater Paris

    Science.gov (United States)

    Kim, Youngseob; Sartelet, Karine; Raut, Jean-Christophe; Chazette, Patrick

    2015-04-01

    Impacts of meteorological modeling in the planetary boundary layer (PBL) and urban canopy model (UCM) on the vertical mixing of pollutants are studied. Concentrations of gaseous chemical species, including ozone (O3) and nitrogen dioxide (NO2), and particulate matter over Paris and the near suburbs are simulated using the 3-dimensional chemistry-transport model Polair3D of the Polyphemus platform. Simulated concentrations of O3, NO2 and PM10/PM2.5 (particulate matter of aerodynamic diameter lower than 10 μm/2.5 μm, respectively) are first evaluated using ground measurements. Higher surface concentrations are obtained for PM10, PM2.5 and NO2 with the MYNN PBL scheme than the YSU PBL scheme because of lower PBL heights in the MYNN scheme. Differences between simulations using different PBL schemes are lower than differences between simulations with and without the UCM and the Corine land-use over urban areas. Regarding the root mean square error, the simulations using the UCM and the Corine land-use tend to perform better than the simulations without it. At urban stations, the PM10 and PM2.5 concentrations are over-estimated and the over-estimation is reduced using the UCM and the Corine land-use. The ability of the model to reproduce vertical mixing is evaluated using NO2 measurement data at the upper air observation station of the Eiffel Tower, and measurement data at a ground station near the Eiffel Tower. Although NO2 is under-estimated in all simulations, vertical mixing is greatly improved when using the UCM and the Corine land-use. Comparisons of the modeled PM10 vertical distributions to distributions deduced from surface and mobile lidar measurements are performed. The use of the UCM and the Corine land-use is crucial to accurately model PM10 concentrations during nighttime in the center of Paris. In the nocturnal stable boundary layer, PM10 is relatively well modeled, although it is over-estimated on 24 May and under-estimated on 25 May. However, PM10 is

  20. The use of Chernobyl fallout to test model predictions of the transfer of radioiodine from air to vegetation to milk

    International Nuclear Information System (INIS)

    Hoffman, F.O.; Amaral, E.

    1989-01-01

    Comparison of observed values with model predictions indicates a tendency for the models to overpredict the air-vegetation-milk transfer of Chernobyl I-131 by one to two orders of magnitude. Detailed analysis of the data indicated that, in general, most overpredictions were accounted for by the portion of the air-pasture-cow-milk pathway dealing with the transfer from air to pasture vegetation, rather than the transfer from vegetation to milk. A partial analysis using available data to infer site-specific conditions and parameter values indicates that differences between model predictions and observations can be explained by: 1) overestimation of the fraction of the total amount of I-131 in air that was present as molecular vapour, 2) overestimation of wet and dry deposition of elemental and organic iodine and particulate aerosols, 3) overestimation of initial vegetation interception of material deposited during severe thunderstorms, 4) underestimation of the rates of weathering and growth dilution of material deposited on vegetation during periods of spring growth, 5) underestimation of the amount of uncontaminated feed consumed by dairy cows, and 6) overestimation of the diet-to-milk transfer coefficient for I-131. (orig./HP)
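    Because screening models of this pathway are essentially a product of transfer factors, an overestimate in any one factor propagates multiplicatively to the milk prediction, which is how individually modest biases compound into one to two orders of magnitude. The sketch below is an illustrative equilibrium screening model with made-up parameter values, not the models assessed in the paper; only the I-131 half-life (~8.02 d) is a physical constant.

```python
# Minimal multiplicative screening sketch (illustrative values) of the
# air -> pasture -> cow -> milk pathway for I-131, showing how biases in
# individual transfer factors compound linearly in the final prediction.
import math

def milk_concentration(c_air, v_d, interception, yield_kg_m2,
                       weathering_half_life_d, decay_half_life_d,
                       feed_kg_d, f_milk):
    """Equilibrium milk concentration (Bq/L) for a constant air concentration.

    c_air in Bq/m3, v_d (deposition velocity) in m/s, f_milk in d/L.
    """
    lam_eff = (math.log(2) / weathering_half_life_d
               + math.log(2) / decay_half_life_d)        # per day
    deposition = c_air * v_d * 86400                     # Bq/m2 per day
    c_pasture = deposition * interception / (yield_kg_m2 * lam_eff)  # Bq/kg
    return c_pasture * feed_kg_d * f_milk

base = milk_concentration(1.0, 2e-3, 0.5, 0.3, 14.0, 8.02, 16.0, 3e-3)
# Halving both the deposition velocity and the interception fraction
# quarters the predicted milk concentration:
reduced = milk_concentration(1.0, 1e-3, 0.25, 0.3, 14.0, 8.02, 16.0, 3e-3)
print(round(base / reduced, 2))
```

    The linearity in each factor is the point: a 3x overestimate of deposition combined with a 3x overestimate of interception already gives a roughly 10x overprediction of milk activity, matching the order of the discrepancies reported above.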

  1. [Can overestimating one's own capacities of action lead to fall? A study on the perception of affordance in the elderly].

    Science.gov (United States)

    Luyat, Marion; Domino, Delphine; Noël, Myriam

    2008-12-01

    Falls are frequent in the elderly and account for medical complications and loss of autonomy. Affordance, a concept proposed by Gibson, can help to understand a possible cause of falls. An affordance is defined as a potentiality of action offered by the environment in relation to both the properties of this environment and the properties of the organism. Most of our daily activities reflect a perfect adjustment between the perception of these potentialities of action and our actual action abilities. In other words, we correctly perceive affordances. However, in the elderly, postural abilities are reduced and balance is less stable. Thus, some falls could result from a misperception of the affordances of posturability. The aim of our study was to test the hypothesis that cognitive overestimation of real postural abilities in the elderly may cause falls. There would be a gap between what old subjects believe they are able to do and what they actually can do. Fifteen young adults (mean age = 24 years) and fifteen older adults (mean age = 72 years) had to judge if they were able to stand upright on an inclined surface. The exploration of the inclined surface was made in two conditions: visually, and by haptics (without vision, with a cane). In a second part, we measured their actual postural abilities on the inclined surface. The results show that the perceptual judgments did not differ between old and young participants. However, as expected, the old subjects had lower postural boundaries than the younger ones: they could stand only on shallower inclinations of the surface. These results show a decline in the perception of affordances with aging. They support the hypothesis of a cognitive overestimation of action abilities in the elderly, possibly due to a difficulty in updating the new limits for action.

  2. Significant uncertainty in global scale hydrological modeling from precipitation data errors

    NARCIS (Netherlands)

    Sperna Weiland, F.; Vrugt, J.A.; Beek, van P.H.; Weerts, A.H.; Bierkens, M.F.P.

    2015-01-01

    In the past decades significant progress has been made in the fitting of hydrologic models to data. Most of this work has focused on simple, CPU-efficient, lumped hydrologic models using discharge, water table depth, soil moisture, or tracer data from relatively small river basins. In this paper, we

  3. Significant uncertainty in global scale hydrological modeling from precipitation data errors

    NARCIS (Netherlands)

    Weiland, Frederiek C. Sperna; Vrugt, Jasper A.; van Beek, Rens (L. ) P. H.; Weerts, Albrecht H.; Bierkens, Marc F. P.

    2015-01-01

    In the past decades significant progress has been made in the fitting of hydrologic models to data. Most of this work has focused on simple, CPU-efficient, lumped hydrologic models using discharge, water table depth, soil moisture, or tracer data from relatively small river basins. In this paper, we

  4. The asymmetric effects of El Niño and La Niña on the East Asian winter monsoon and their simulation by CMIP5 atmospheric models

    Science.gov (United States)

    Guo, Zhun; Zhou, Tianjun; Wu, Bo

    2017-02-01

    El Niño-Southern Oscillation (ENSO) events significantly affect the year-by-year variations of the East Asian winter monsoon (EAWM). However, the effect of La Niña events on the EAWM is not a mirror image of that of El Niño events. Although the EAWM becomes generally weaker during El Niño events and stronger during La Niña winters, the enhanced precipitation over southeastern China and warmer surface air temperature along the East Asian coastline during El Niño years are more significant. These asymmetric effects are caused by the asymmetric longitudinal positions of the western North Pacific (WNP) anticyclone during El Niño events and the WNP cyclone during La Niña events; specifically, the center of the WNP cyclone during La Niña events is westward-shifted relative to its El Niño counterpart. This central-position shift results from the longitudinal shift of remote El Niño and La Niña anomalous heating, and asymmetry in the amplitude of local sea surface temperature anomalies over the WNP. However, such asymmetric effects of ENSO on the EAWM are barely reproduced by the atmospheric models of Phase 5 of the Coupled Model Intercomparison Project (CMIP5), although the spatial patterns of anomalous circulations are reasonably reproduced. The major limitation of the CMIP5 models is an overestimation of the anomalous WNP anticyclone/cyclone, which leads to stronger EAWM rainfall responses. The overestimated latent heat flux anomalies near the South China Sea and the northern WNP might be a key factor behind the overestimated anomalous circulations.

  5. Geometric mean IELT and premature ejaculation: appropriate statistics to avoid overestimation of treatment efficacy.

    Science.gov (United States)

    Waldinger, Marcel D; Zwinderman, Aeilko H; Olivier, Berend; Schweitzer, Dave H

    2008-02-01

    The intravaginal ejaculation latency time (IELT) behaves in a skewed manner and needs the appropriate statistics for correct interpretation of treatment results. To explain the correct use of geometric mean IELT values and the fold increase of the geometric mean IELT, given the positively skewed IELT distribution. Linking theoretical arguments to the outcome of several selective serotonin reuptake inhibitor and modern antidepressant study results. Geometric mean IELT and fold increase of geometric mean IELT. Log-transforming each separate IELT measurement of each individual man is the basis for the calculation of the geometric mean IELT. A drug-induced positively skewed IELT distribution necessitates the calculation of the geometric mean IELTs at baseline and during drug treatment. In a positively skewed IELT distribution, the use of the "arithmetic" mean IELT risks an overestimation of the drug-induced ejaculation delay, as the arithmetic mean IELT is always higher than the geometric mean IELT. Strong ejaculation-delaying drugs give rise to a strongly positively skewed IELT distribution, whereas weak ejaculation-delaying drugs give rise to (much) less skewed IELT distributions. Ejaculation delay is expressed in fold increase of the geometric mean IELT. Drug-induced ejaculatory performance discloses a positively skewed IELT distribution, requiring the use of the geometric mean IELT and the fold increase of the geometric mean IELT.
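The statistic described above can be computed directly: log-transform each IELT measurement, average the logs, and exponentiate. A minimal sketch with hypothetical latency values (the numbers are illustrative, not from the study):

```python
import math

def geometric_mean(values):
    """exp of the mean of log-transformed values."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical IELT measurements (minutes) for one subject.
baseline = [0.5, 1.0, 2.0]   # right-skewed, short latencies
on_drug = [2.0, 4.0, 8.0]    # right-skewed under treatment

gm_base = geometric_mean(baseline)     # 1.0
gm_drug = geometric_mean(on_drug)      # 4.0
fold_increase = gm_drug / gm_base      # 4.0-fold ejaculation delay

# The arithmetic mean overstates the delay in a skewed distribution:
am_drug = sum(on_drug) / len(on_drug)  # 4.67 > geometric mean of 4.0
```

The skew makes the arithmetic mean exceed the geometric mean, which is exactly the overestimation risk the abstract warns about.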

  6. How Often Is the Misfit of Item Response Theory Models Practically Significant?

    Science.gov (United States)

    Sinharay, Sandip; Haberman, Shelby J.

    2014-01-01

    Standard 3.9 of the Standards for Educational and Psychological Testing (1999) demands evidence of model fit when item response theory (IRT) models are fitted to test data. Hambleton and Han (2005) and Sinharay (2005) recommended assessing the practical significance of misfit of IRT models, but…

  7. A mechanistic diagnosis of the simulation of soil CO2 efflux of the ACME Land Model

    Science.gov (United States)

    Liang, J.; Ricciuto, D. M.; Wang, G.; Gu, L.; Hanson, P. J.; Mayes, M. A.

    2017-12-01

    Accurate simulation of the CO2 efflux from soils (i.e., soil respiration) to the atmosphere is critical to project global biogeochemical cycles and the magnitude of climate change in Earth system models (ESMs). Currently, soil respiration simulated by ESMs still has a large uncertainty. In this study, a mechanistic diagnosis of soil respiration in the Accelerated Climate Model for Energy (ACME) Land Model (ALM) was conducted using long-term observations at the Missouri Ozark AmeriFlux (MOFLUX) forest site in the central U.S. The results showed that the ALM default run significantly underestimated annual soil respiration and gross primary production (GPP), while incorrectly estimating soil water potential. Improved simulations of soil water potential with site-specific data significantly improved the modeled annual soil respiration, primarily because annual GPP was simultaneously improved. Therefore, accurate simulations of soil water potential must be carefully calibrated in ESMs. Despite improved annual soil respiration, the ALM continued to underestimate soil respiration during peak growing seasons, and to overestimate soil respiration during non-peak growing seasons. Simulations involving increased GPP during peak growing seasons increased soil respiration, while neither improved plant phenology nor increased temperature sensitivity affected the simulation of soil respiration during non-peak growing seasons. One potential reason for the overestimation of the soil respiration during non-peak growing seasons may be that the current model structure is substrate-limited, while microbial dormancy under stress may cause the system to become decomposer-limited. Further studies with more microbial data are required to provide adequate representation of soil respiration and to understand the underlying reasons for inaccurate model simulations.

  8. Transcriptional responses of zebrafish to complex metal mixtures in laboratory studies overestimate the responses observed with environmental water.

    Science.gov (United States)

    Pradhan, Ajay; Ivarsson, Per; Ragnvaldsson, Daniel; Berg, Håkan; Jass, Jana; Olsson, Per-Erik

    2017-04-15

    Metals released into the environment continue to be of concern for human health. However, risk assessment of metal exposure is often based on total metal levels and usually does not take bioavailability data, metal speciation or matrix effects into consideration. The continued development of biological endpoint analyses is therefore of high importance for improved eco-toxicological risk analyses. While there is an on-going debate concerning synergistic or additive effects of low-level mixed exposures, there is little environmental data confirming the observations obtained from laboratory experiments. In the present study we utilized qRT-PCR analysis to identify key metal response genes to develop a method for biomonitoring and risk-assessment of metal pollution. The gene expression patterns were determined for juvenile zebrafish exposed to waters from sites down-stream of a closed mining operation. Genes representing different physiological processes including stress response, inflammation, apoptosis, drug metabolism, ion channels and receptors, and genotoxicity were analyzed. The gene expression patterns of zebrafish exposed to laboratory prepared metal mixes were compared to the patterns obtained with fish exposed to the environmental samples with the same metal composition and concentrations. Exposure to environmental samples resulted in fewer alterations in gene expression compared to laboratory mixes. A biotic ligand model (BLM) was used to approximate the bioavailability of the metals in the environmental setting. However, the BLM results were not in agreement with the experimental data, suggesting that the BLM may be overestimating the risk in the environment. The present study therefore supports the inclusion of site-specific biological analyses to complement the present chemical based assays used for environmental risk-assessment. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Have We Overestimated Saline Aquifer CO2 Storage Capacities?

    International Nuclear Information System (INIS)

    Thibeau, S.; Mucha, V.

    2011-01-01

    approach, it is applied to the Utsira aquifer in the North Sea. In Sections 3 and 4, we discuss possible effects that may lead to higher or lower CO2 storage efficiencies. Water production appears to be an attractive strategy to address regional-scale pressure build-up and, consequently, to increase the storage capacity. Following these quantitative applications, we recommend evaluating the CO2 storage capacities of an aquifer, during a screening study for ranking purposes, using a pressure and compressibility formula rather than a volumetric approach, in order to avoid large overestimation of the aquifer storage capacity. Further studies are naturally required to validate the storage capacities at a qualification stage. (authors)
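The contrast between the two screening approaches can be illustrated with a closed-aquifer calculation: in a closed system, the injectable volume is limited by the pore volume times the total compressibility times the tolerable pressure rise. All numbers below are illustrative assumptions, not values for the Utsira aquifer:

```python
# Volumetric vs. pressure/compressibility CO2 storage estimates.
# Every numeric value here is an assumed, illustrative input.

rho_co2 = 700.0   # kg/m3, CO2 density at storage conditions (assumed)
v_pore = 1.0e12   # m3, aquifer pore volume (assumed, basin scale)

# Volumetric approach: a fixed storage-efficiency factor E.
E_volumetric = 0.02
m_volumetric = rho_co2 * v_pore * E_volumetric  # kg of CO2

# Pressure/compressibility approach: injection is limited by how much
# the brine + rock can be compressed before exceeding the allowable
# regional pressure build-up delta_p.
c_total = 1.0e-9  # 1/Pa, combined rock + brine compressibility (assumed)
delta_p = 5.0e6   # Pa, tolerable regional pressure increase (assumed)
m_pressure = rho_co2 * v_pore * c_total * delta_p  # kg of CO2

ratio = m_volumetric / m_pressure  # 4.0: volumetric estimate is 4x larger here
```

With these assumed inputs the effective storage efficiency of the closed system is only c_total * delta_p = 0.5%, versus the 2% volumetric factor, which is the kind of overestimation the abstract cautions against.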

  10. Evaluation of a seven-year air quality simulation using the Weather Research and Forecasting (WRF)/Community Multiscale Air Quality (CMAQ) models in the eastern United States.

    Science.gov (United States)

    Zhang, Hongliang; Chen, Gang; Hu, Jianlin; Chen, Shu-Hua; Wiedinmyer, Christine; Kleeman, Michael; Ying, Qi

    2014-03-01

    The performance of the Weather Research and Forecasting (WRF)/Community Multi-scale Air Quality (CMAQ) system in the eastern United States is analyzed based on results from a seven-year modeling study with a 4-km spatial resolution. For 2-m temperature, the monthly averaged mean bias (MB) and gross error (GE) values are generally within the recommended performance criteria, although temperature is over-predicted with MB values up to 2K. Water vapor at 2-m is well-predicted but significant biases (>2 g kg(-1)) were observed in wintertime. Predictions for wind speed are satisfactory but biased towards over-prediction. Nitrate and sulfate concentrations are also well reproduced. The other unresolved PM2.5 components (OTHER) are significantly overestimated by more than a factor of two. No conclusive explanations can be made regarding the possible cause of this universal overestimation, which warrants a follow-up study to better understand this problem. Copyright © 2013 Elsevier B.V. All rights reserved.
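The mean bias (MB) and gross error (GE) statistics used in this kind of evaluation are, as commonly defined in meteorological model assessment, the averages of the signed and absolute model-observation differences. A sketch with made-up station values:

```python
import numpy as np

def mean_bias(pred, obs):
    """Mean bias (MB): average signed error, model minus observation."""
    return float(np.mean(np.asarray(pred) - np.asarray(obs)))

def gross_error(pred, obs):
    """Gross error (GE): average absolute error."""
    return float(np.mean(np.abs(np.asarray(pred) - np.asarray(obs))))

# Hypothetical hourly 2-m temperatures (K): model vs. station.
obs = [288.0, 290.0, 293.0, 295.0]
pred = [289.5, 291.0, 295.0, 295.5]

mb = mean_bias(pred, obs)    # 1.25 K: a warm bias, as in the evaluation above
ge = gross_error(pred, obs)  # 1.25 K here, since every error is positive
```

When MB and GE coincide, all errors share one sign; a small MB with a large GE would instead indicate compensating errors.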

  11. Comparison of midlatitude ionospheric F region peak parameters and topside Ne profiles from IRI2012 model prediction with ground-based ionosonde and Alouette II observations

    Science.gov (United States)

    Gordiyenko, G. I.; Yakovets, A. F.

    2017-07-01

    difference in the shape of the Alouette-, NeQuick-, IRI02-corr, and IRI2001-derived Ne profiles, with overestimated Ne values at some altitudes and underestimated Ne values at others. The results obtained in the study showed that the observation-model differences were significant, especially for the real observed (not median) data. For practical application, it is clearly important for the IRI2012 model to be adapted to the observed F2-layer peak parameters. However, the model does not offer a simple solution to predict the shape of the vertical electron density profile in the topside ionosphere, because of the problem with the topside shape parameters.

  12. A parameterization of the heterogeneous hydrolysis of N2O5 for mass-based aerosol models: improvement of particulate nitrate prediction

    Science.gov (United States)

    Chen, Ying; Wolke, Ralf; Ran, Liang; Birmili, Wolfram; Spindler, Gerald; Schröder, Wolfram; Su, Hang; Cheng, Yafang; Tegen, Ina; Wiedensohler, Alfred

    2018-01-01

    The heterogeneous hydrolysis of N2O5 on the surface of deliquescent aerosol leads to HNO3 formation and acts as a major sink of NOx in the atmosphere during night-time. The reaction constant of this heterogeneous hydrolysis is determined by temperature (T), relative humidity (RH), aerosol particle composition, and the surface area concentration (S). However, these parameters were not comprehensively considered in the parameterization of the heterogeneous hydrolysis of N2O5 in previous mass-based 3-D aerosol modelling studies. In this investigation, we propose a sophisticated parameterization (NewN2O5) of N2O5 heterogeneous hydrolysis with respect to T, RH, aerosol particle compositions, and S based on laboratory experiments. We evaluated closure between NewN2O5 and a state-of-the-art parameterization based on a sectional aerosol treatment. The comparison showed a good linear relationship (R = 0.91) between these two parameterizations. NewN2O5 was incorporated into a 3-D fully online coupled model, COSMO-MUSCAT, with the mass-based aerosol treatment. As a case study, we used the data from the HOPE Melpitz campaign (10-25 September 2013) to validate model performance. Here, we investigated the improvement of nitrate prediction over western and central Europe. The modelled particulate nitrate mass concentrations ([NO3-]) were validated by filter measurements over Germany (Neuglobsow, Schmücke, Zingst, and Melpitz). The modelled [NO3-] was significantly overestimated for this period by a factor of 5-19, with the corrected NH3 emissions (reduced by 50 %) and the original parameterization of N2O5 heterogeneous hydrolysis. The NewN2O5 significantly reduces the overestimation of [NO3-] by ~35 %. Particularly, the overestimation factor was reduced to approximately 1.4 in our case study (12, 17-18 and 25 September 2013) when [NO3-] was dominated by local chemical formation. In our case, the suppression of organic coating was negligible over western and central Europe
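The first-order loss rate implied by such a parameterization follows the standard free-molecular uptake expression k = γ·c̄·S/4, where c̄ is the mean molecular speed of N2O5; the uptake coefficient γ is what the paper parameterizes as a function of T, RH, and composition. A sketch with an assumed constant γ (the functional form of NewN2O5 itself is not reproduced here):

```python
import math

R = 8.314        # J mol-1 K-1, gas constant
M_N2O5 = 0.108   # kg mol-1, molar mass of N2O5

def mean_molecular_speed(T):
    """Mean thermal speed c_bar = sqrt(8*R*T / (pi*M)) in m/s."""
    return math.sqrt(8.0 * R * T / (math.pi * M_N2O5))

def n2o5_loss_rate(gamma, T, S):
    """First-order loss rate k = gamma * c_bar * S / 4, in s^-1.

    gamma : uptake coefficient (in the paper, a function of T, RH and
            particle composition; treated as a given number here)
    S     : aerosol surface area concentration, m2 per m3 of air
    """
    return 0.25 * gamma * mean_molecular_speed(T) * S

# Illustrative inputs: gamma = 0.02, T = 285 K,
# S = 300 um2/cm3 = 3e-4 m2/m3 (moderately polluted conditions, assumed).
k = n2o5_loss_rate(0.02, 285.0, 3e-4)
lifetime_min = 1.0 / k / 60.0  # night-time N2O5 lifetime, tens of minutes
```

Because k scales linearly with γ, an overestimated uptake coefficient translates directly into overestimated night-time HNO3 (and hence nitrate) production.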

  13. Flexible building stock modelling with array-programming

    DEFF Research Database (Denmark)

    Brøgger, Morten; Wittchen, Kim Bjarne

    2017-01-01

    Many building stock models employ archetype-buildings in order to capture the essential characteristics of a diverse building stock. However, these models often require multiple archetypes, which make them inflexible. This paper proposes an array-programming based model, which calculates the heat… tend to overestimate potential energy-savings, if we do not consider these discrepancies. The proposed model makes it possible to compute and visualize potential energy-savings in a flexible and transparent way.

  14. Using beryllium-7 to assess cross-tropopause transport in global models

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Hongyu [National Institute of Aerospace, Hampton, VA (United States); Considine, David B. [NASA Langley Research Center, Hampton, VA (United States); Horowitz, Larry W. [NOAA Geophysical Fluid and Dynamics Laboratory, Princeton, NJ (United States); and others

    2016-07-01

    We use the Global Modeling Initiative (GMI) modeling framework to assess the utility of cosmogenic beryllium-7 ({sup 7}Be), a natural aerosol tracer, for evaluating cross-tropopause transport in global models. The GMI chemical transport model (CTM) was used to simulate atmospheric {sup 7}Be distributions using four different meteorological data sets (GEOS1-STRAT DAS, GISS II{sup '} GCM, fvGCM, and GEOS4-DAS), featuring significantly different stratosphere-troposphere exchange (STE) characteristics. The simulations were compared with the upper troposphere and/or lower stratosphere (UT/LS) {sup 7}Be climatology constructed from ≈ 25 years of aircraft and balloon data, as well as climatological records of surface concentrations and deposition fluxes. Comparison of the fraction of surface air of stratospheric origin estimated from the {sup 7}Be simulations with observationally derived estimates indicates excessive cross-tropopause transport at mid-latitudes in simulations using GEOS1-STRAT and at high latitudes using GISS II{sup '} meteorological data. These simulations also overestimate {sup 7}Be deposition fluxes at mid-latitudes (GEOS1-STRAT) and at high latitudes (GISS II{sup '}), respectively. We show that excessive cross-tropopause transport of {sup 7}Be corresponds to overestimated stratospheric contribution to tropospheric ozone. Our perspectives on STE in these meteorological fields based on {sup 7}Be simulations are consistent with previous modeling studies of tropospheric ozone using the same meteorological fields. We conclude that the observational constraints for {sup 7}Be and observed {sup 7}Be total deposition fluxes can be used routinely as a first-order assessment of cross-tropopause transport in global models.

  15. Measurement and modeling of shortwave irradiance components in cloud-free atmospheres

    Energy Technology Data Exchange (ETDEWEB)

    Halthore, R.N.

    1999-08-04

    The atmosphere scatters and absorbs incident solar radiation, modifying its spectral content and decreasing its intensity at the surface. It is very useful to classify the earth-atmospheric solar radiation into several components--direct solar surface irradiance (E{sub direct}), diffuse-sky downward surface irradiance (E{sub diffuse}), total surface irradiance, and upwelling flux at the surface and at the top-of-the-atmosphere. E{sub direct} depends only on the extinction properties of the atmosphere without regard to details of extinction, namely scattering or absorption; furthermore, it can be measured to high accuracy (0.3%) with the aid of an active cavity radiometer (ACR). E{sub diffuse} has relatively larger uncertainties both in its measurement using shaded pyranometers and in model estimates, owing to the difficulty in accurately characterizing pyranometers and in measuring model inputs such as surface reflectance, aerosol single scattering albedo, and phase function. Radiative transfer model simulations of the above surface radiation components in cloud-free skies using measured atmospheric properties show that while E{sub direct} estimates are closer to measurements, E{sub diffuse} is overestimated by an amount larger than the combined uncertainties in model inputs and measurements, illustrating a fundamental gap in the understanding of the magnitude of atmospheric absorption in cloud-free skies. The excess continuum-type absorption required to reduce the E{sub diffuse} model overestimate (≈3-8% absorptance) would significantly impact climate prediction and remote sensing. It is not clear at present what the source for this continuum absorption is. Here issues related to measurements and modeling of the surface irradiance components are discussed.

  16. Regional climate modeling over the Maritime Continent: Assessment of RegCM3-BATS1e and RegCM3-IBIS

    Science.gov (United States)

    Gianotti, R. L.; Zhang, D.; Eltahir, E. A.

    2010-12-01

    Despite its importance to global rainfall and circulation processes, the Maritime Continent remains a region that is poorly simulated by climate models. Relatively few studies have been undertaken using a model with fine enough resolution to capture the small-scale spatial heterogeneity of this region and associated land-atmosphere interactions. These studies have shown that even regional climate models (RCMs) struggle to reproduce the climate of this region, particularly the diurnal cycle of rainfall. This study builds on previous work by undertaking a more thorough evaluation of RCM performance in simulating the timing and intensity of rainfall over the Maritime Continent, with identification of major sources of error. An assessment was conducted of the Regional Climate Model Version 3 (RegCM3) used in a coupled system with two land surface schemes: Biosphere Atmosphere Transfer System Version 1e (BATS1e) and Integrated Biosphere Simulator (IBIS). The model’s performance in simulating precipitation was evaluated against the 3-hourly TRMM 3B42 product, with some validation provided of this TRMM product against ground station meteorological data. It is found that the model suffers from three major errors in the rainfall histogram: underestimation of the frequency of dry periods, overestimation of the frequency of low intensity rainfall, and underestimation of the frequency of high intensity rainfall. Additionally, the model shows error in the timing of the diurnal rainfall peak, particularly over land surfaces. These four errors were largely insensitive to the choice of boundary conditions, convective parameterization scheme or land surface scheme. The presence of a wet or dry bias in the simulated volumes of rainfall was, however, dependent on the choice of convection scheme and boundary conditions. This study also showed that the coupled model system has significant error in overestimation of latent heat flux and evapotranspiration from the land surface, and

  17. Performance evaluation of Maxwell and Cercignani-Lampis gas-wall interaction models in the modeling of thermally driven rarefied gas transport

    KAUST Repository

    Liang, Tengfei

    2013-07-16

    A systematic study on the performance of two empirical gas-wall interaction models, the Maxwell model and the Cercignani-Lampis (CL) model, in the entire Knudsen range is conducted. The models are evaluated by examining the accuracy of key macroscopic quantities such as temperature, density, and pressure, in three benchmark thermal problems, namely the Fourier thermal problem, the Knudsen force problem, and the thermal transpiration problem. The reference solutions are obtained from a validated hybrid DSMC-MD algorithm developed in-house. It has been found that while both models predict temperature and density reasonably well in the Fourier thermal problem, the pressure profile obtained from Maxwell model exhibits a trend that opposes that from the reference solution. As a consequence, the Maxwell model is unable to predict the orientation change of the Knudsen force acting on a cold cylinder embedded in a hot cylindrical enclosure at a certain Knudsen number. In the simulation of the thermal transpiration coefficient, although all three models overestimate the coefficient, the coefficient obtained from CL model is the closest to the reference solution. The Maxwell model performs the worst. The cause of the overestimated coefficient is investigated and its link to the overly constrained correlation between the tangential momentum accommodation coefficient and the tangential energy accommodation coefficient inherent in the models is pointed out. Directions for further improvement of models are suggested.

  18. Short-Range Prediction of Monsoon Precipitation by NCMRWF Regional Unified Model with Explicit Convection

    Science.gov (United States)

    Mamgain, Ashu; Rajagopal, E. N.; Mitra, A. K.; Webster, S.

    2018-03-01

    There are increasing efforts towards the prediction of high-impact weather systems and understanding of related dynamical and physical processes. High-resolution numerical model simulations can be used directly to model the impact in fine-scale detail. Improvement in forecast accuracy can help in disaster management planning and execution. The National Centre for Medium Range Weather Forecasting (NCMRWF) has implemented a high-resolution regional unified modeling system with explicit convection embedded within a coarser-resolution global model with parameterized convection. The model configurations are based on the UK Met Office unified seamless modeling system. Recent land use/land cover data (2012-2013) obtained from the Indian Space Research Organisation (ISRO) are also used in model simulations. Results based on short-range forecasts of both the global and regional models over India for a month indicate that convection-permitting simulations by the high-resolution regional model are able to reduce the dry bias over southern parts of the West Coast and the monsoon trough zone, with more intense rainfall mainly towards northern parts of the monsoon trough zone. The regional model with explicit convection has significantly improved the phase of the diurnal cycle of rainfall as compared to the global model. Results from two monsoon depression cases during the study period show substantial improvement in details of the rainfall pattern. Many rainfall categories defined for operational forecast purposes by Indian forecasters are also well represented in the case of convection-permitting high-resolution simulations. For the statistics of the number of days within a range of rain categories between `No-Rain' and `Heavy Rain', the regional model outperforms the global model in all the ranges. In the very heavy and extremely heavy categories, the regional simulations show overestimation of rainfall days. The global model with parameterized convection has a tendency to overestimate the light rainfall days and

  19. Limb Symmetry Indexes Can Overestimate Knee Function After Anterior Cruciate Ligament Injury.

    Science.gov (United States)

    Wellsandt, Elizabeth; Failla, Mathew J; Snyder-Mackler, Lynn

    2017-05-01

    Study Design Prospective cohort. Background The high risk of second anterior cruciate ligament (ACL) injuries after return to sport highlights the importance of return-to-sport decision making. Objective return-to-sport criteria frequently use limb symmetry indexes (LSIs) to quantify quadriceps strength and hop scores. Whether using the uninvolved limb in LSIs is optimal is unknown. Objectives To evaluate the uninvolved limb as a reference standard for LSIs utilized in return-to-sport testing and its relationship with second ACL injury rates. Methods Seventy athletes completed quadriceps strength and 4 single-leg hop tests before anterior cruciate ligament reconstruction (ACLR) and 6 months after ACLR. Limb symmetry indexes for each test compared involved-limb measures at 6 months to uninvolved-limb measures at 6 months. Estimated preinjury capacity (EPIC) levels for each test compared involved-limb measures at 6 months to uninvolved-limb measures before ACLR. Second ACL injuries were tracked for a minimum follow-up of 2 years after ACLR. Results Forty (57.1%) patients achieved 90% LSIs for quadriceps strength and all hop tests. Only 20 (28.6%) patients met 90% EPIC levels (comparing the involved limb at 6 months after ACLR to the uninvolved limb before ACLR) for quadriceps strength and all hop tests. Twenty-four (34.3%) patients who achieved 90% LSIs for all measures 6 months after ACLR did not achieve 90% EPIC levels for all measures. Estimated preinjury capacity levels were more sensitive than LSIs in predicting second ACL injuries (LSIs, 0.273; 95% confidence interval [CI]: 0.010, 0.566 and EPIC, 0.818; 95% CI: 0.523, 0.949). Conclusion Limb symmetry indexes frequently overestimate knee function after ACLR and may be related to second ACL injury risk. These findings raise concern about whether the variable ACL return-to-sport criteria utilized in current clinical practice are stringent enough to achieve safe and successful return to sport. Level of Evidence
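The two indexes compared in this study differ only in their denominator: the LSI references the uninvolved limb at the same (6-month) time point, while EPIC references the uninvolved limb before surgery. A sketch with hypothetical hop distances (the values are illustrative, not patient data):

```python
def lsi(involved_6mo, uninvolved_6mo):
    """Limb Symmetry Index: involved vs. uninvolved limb, both at 6 months."""
    return 100.0 * involved_6mo / uninvolved_6mo

def epic(involved_6mo, uninvolved_preop):
    """Estimated Pre-Injury Capacity: involved limb at 6 months vs. the
    uninvolved limb measured before surgery."""
    return 100.0 * involved_6mo / uninvolved_preop

# Hypothetical single-leg hop distances (cm). The uninvolved limb often
# detrains after surgery, which shrinks the LSI denominator and inflates
# the apparent symmetry.
hop_uninvolved_pre = 160.0  # before ACLR
hop_uninvolved_6mo = 140.0  # 6 months after ACLR (detrained)
hop_involved_6mo = 130.0

lsi_score = lsi(hop_involved_6mo, hop_uninvolved_6mo)    # ~92.9: passes a 90% cutoff
epic_score = epic(hop_involved_6mo, hop_uninvolved_pre)  # 81.25: fails the same cutoff
```

The same involved-limb performance passes the conventional 90% LSI criterion yet fails the EPIC criterion, which is the overestimation pattern the abstract reports.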

  20. ARMA modeling of stochastic processes in nuclear reactor with significant detection noise

    International Nuclear Information System (INIS)

    Zavaljevski, N.

    1992-01-01

    The theoretical basis of ARMA modelling of stochastic processes in a nuclear reactor was presented in a previous paper, neglecting observational noise. The identification of real reactor data indicated that in some experiments the detection noise is significant. Thus a more rigorous theoretical modelling of stochastic processes in a nuclear reactor is performed. Starting from the fundamental stochastic differential equations of the Langevin type for the interaction of the detector with the neutron field, a new theoretical ARMA model is developed. Preliminary identification results confirm the theoretical expectations. (author)
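The role of detection noise can be illustrated numerically: an AR process observed through additive white noise is exactly an ARMA process, so fitting a pure AR model to a noisy detector record gives biased coefficients. A toy numpy sketch (a generic AR(1) example, not the authors' Langevin-based model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) "process" x and a detector record y = x + white noise.
phi, n = 0.8, 100_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.standard_normal()
y = x + 1.5 * rng.standard_normal(n)  # significant detection noise

def ar1_yule_walker(z):
    """Yule-Walker estimate of the AR(1) coefficient: r(1) / r(0)."""
    z = z - z.mean()
    r0 = np.dot(z, z) / len(z)
    r1 = np.dot(z[:-1], z[1:]) / len(z)
    return r1 / r0

phi_clean = ar1_yule_walker(x)  # close to the true 0.8
phi_noisy = ar1_yule_walker(y)  # biased low: the noise inflates r(0) only
```

The observation noise adds variance to r(0) but not to r(1), attenuating the estimate; an ARMA model that represents the noise explicitly avoids this bias.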

  1. A study of modelling simplifications in ground vibration predictions for railway traffic at grade

    Science.gov (United States)

    Germonpré, M.; Degrande, G.; Lombaert, G.

    2017-10-01

    Accurate computational models are required to predict ground-borne vibration due to railway traffic. Such models generally require a substantial computational effort. Therefore, much research has focused on developing computationally efficient methods, by either exploiting the regularity of the problem geometry in the direction along the track or assuming a simplified track structure. This paper investigates the modelling errors caused by commonly made simplifications of the track geometry. A case study is presented investigating a ballasted track in an excavation. The soil underneath the ballast is stiffened by a lime treatment. First, periodic track models with different cross sections are analyzed, revealing that a prediction of the rail receptance only requires an accurate representation of the soil layering directly underneath the ballast. A much more detailed representation of the cross sectional geometry is required, however, to calculate vibration transfer from track to free field. Second, simplifications in the longitudinal track direction are investigated by comparing 2.5D and periodic track models. This comparison shows that the 2.5D model slightly overestimates the track stiffness, while the transfer functions between track and free field are well predicted. Using a 2.5D model to predict the response during a train passage leads to an overestimation of both train-track interaction forces and free field vibrations. A combined periodic/2.5D approach is therefore proposed in this paper. First, the dynamic axle loads are computed by solving the train-track interaction problem with a periodic model. Next, the vibration transfer to the free field is computed with a 2.5D model. This combined periodic/2.5D approach only introduces small modelling errors compared to an approach in which a periodic model is used in both steps, while significantly reducing the computational cost.

  2. The quest for significance model of radicalization: implications for the management of terrorist detainees.

    Science.gov (United States)

    Dugas, Michelle; Kruglanski, Arie W

    2014-01-01

    Radicalization and its culmination in terrorism represent a grave threat to the security and stability of the world. A related challenge is effective management of extremists who are detained in prison facilities. The major aim of this article is to review the significance quest model of radicalization and its implications for management of terrorist detainees. First, we review the significance quest model, which elaborates on the roles of motivation, ideology, and social processes in radicalization. Secondly, we explore the implications of the model in relation to the risks of prison radicalization. Finally, we analyze the model's implications for deradicalization strategies and review preliminary evidence for the effectiveness of a rehabilitation program targeting components of the significance quest. Based on this evidence, we argue that the psychology of radicalization provides compelling reason for the inclusion of deradicalization efforts as an essential component of the management of terrorist detainees. Copyright © 2014 John Wiley & Sons, Ltd.

  3. Evaluation of daily maximum and minimum 2-m temperatures as simulated with the Regional Climate Model COSMO-CLM over Africa

    Directory of Open Access Journals (Sweden)

    Stefan Krähenmann

    2013-07-01

    Full Text Available The representation of the diurnal 2-m temperature cycle is challenging because of the many processes involved, particularly land-atmosphere interactions. This study examines the ability of the regional climate model COSMO-CLM (version 4.8) to capture the statistics of daily maximum and minimum 2-m temperatures (Tmin/Tmax) over Africa. The simulations are carried out at two different horizontal grid-spacings (0.22° and 0.44°), and are driven by ECMWF ERA-Interim reanalyses as near-perfect lateral boundary conditions. As evaluation reference, a high-resolution gridded dataset of daily maximum and minimum temperatures (Tmin/Tmax) for Africa (covering the period 2008–2010) is created using the regression-kriging-regression-kriging (RKRK) algorithm. RKRK applies, among other predictors, the remotely sensed predictors land surface temperature and cloud cover to compensate for the missing information about the temperature pattern due to the low station density over Africa. This dataset allows the evaluation of temperature characteristics like the frequencies of Tmin/Tmax, the diurnal temperature range, and the 90th percentile of Tmax. Although the large-scale patterns of temperature are reproduced well, COSMO-CLM shows significant under- and overestimation of temperature at regional scales. The hemispheric summers are generally too warm and the day-to-day temperature variability is overestimated over northern and southern extra-tropical Africa. The average diurnal temperature range is underestimated by about 2°C across arid areas, yet overestimated by around 2°C over the African tropics. An evaluation based on frequency distributions shows good model performance for simulated Tmin (the simulated frequency distributions capture more than 80% of the observed ones), but poorer performance for Tmax (capture below 70%). Further, over wide parts of Africa a too large fraction of daily Tmax values exceeds the observed 90th percentile of Tmax, particularly

  4. Evaluation of daily maximum and minimum 2-m temperatures as simulated with the regional climate model COSMO-CLM over Africa

    Energy Technology Data Exchange (ETDEWEB)

    Kraehenmann, Stefan; Kothe, Steffen; Ahrens, Bodo [Frankfurt Univ. (Germany). Inst. for Atmospheric and Environmental Sciences; Panitz, Hans-Juergen [Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen (Germany)

    2013-10-15

    The representation of the diurnal 2-m temperature cycle is challenging because of the many processes involved, particularly land-atmosphere interactions. This study examines the ability of the regional climate model COSMO-CLM (version 4.8) to capture the statistics of daily maximum and minimum 2-m temperatures (Tmin/Tmax) over Africa. The simulations are carried out at two different horizontal grid-spacings (0.22° and 0.44°), and are driven by ECMWF ERA-Interim reanalyses as near-perfect lateral boundary conditions. As evaluation reference, a high-resolution gridded dataset of daily maximum and minimum temperatures (Tmin/Tmax) for Africa (covering the period 2008-2010) is created using the regression-kriging-regression-kriging (RKRK) algorithm. RKRK applies, among other predictors, the remotely sensed predictors land surface temperature and cloud cover to compensate for the missing information about the temperature pattern due to the low station density over Africa. This dataset allows the evaluation of temperature characteristics like the frequencies of Tmin/Tmax, the diurnal temperature range, and the 90th percentile of Tmax. Although the large-scale patterns of temperature are reproduced well, COSMO-CLM shows significant under- and overestimation of temperature at regional scales. The hemispheric summers are generally too warm and the day-to-day temperature variability is overestimated over northern and southern extra-tropical Africa. The average diurnal temperature range is underestimated by about 2°C across arid areas, yet overestimated by around 2°C over the African tropics. An evaluation based on frequency distributions shows good model performance for simulated Tmin (the simulated frequency distributions capture more than 80% of the observed ones), but poorer performance for Tmax (capture below 70%). Further, over wide parts of Africa a too large fraction of daily Tmax values exceeds the observed 90th percentile of Tmax, particularly across
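
    The reported capture percentages can be illustrated with a simple histogram-overlap statistic: the shared area of the observed and simulated frequency distributions, in percent. This is one plausible way to express such a metric, not necessarily the one used in the paper, and the temperature samples below are synthetic.

```python
import numpy as np

def distribution_capture(obs, sim, bins=50):
    """Percent overlap of two normalized histograms (100 = identical)."""
    lo = min(obs.min(), sim.min())
    hi = max(obs.max(), sim.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(obs, bins=edges)
    q, _ = np.histogram(sim, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    return 100.0 * np.minimum(p, q).sum()

rng = np.random.default_rng(1)
tmin_obs = rng.normal(18.0, 2.0, 5000)   # hypothetical observed Tmin (°C)
tmin_sim = rng.normal(18.3, 2.2, 5000)   # hypothetical simulated Tmin
print(round(distribution_capture(tmin_obs, tmin_sim)))
```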

  5. Mixed layer depth calculation in deep convection regions in ocean numerical models

    Science.gov (United States)

    Courtois, Peggy; Hu, Xianmin; Pennelly, Clark; Spence, Paul; Myers, Paul G.

    2017-12-01

    Mixed Layer Depths (MLDs) diagnosed by conventional numerical models are generally based on a density difference with the surface (e.g., 0.01 kg m-3). However, temperature-salinity compensation and a lack of vertical resolution contribute to overestimated MLDs, especially in regions of deep convection. In the present work, we examined the diagnostic MLD, associated with the deep convection of the Labrador Sea Water (LSW), calculated with a simple density difference criterion. The overestimated MLD led us to develop a new tool, based on an observational approach, to recalculate MLD from model output. We used an eddy-permitting, 1/12° regional configuration of the Nucleus for European Modelling of the Ocean (NEMO) to test and discuss our newly defined MLD. We compared our new MLD with that from observations, and we showed a major improvement with our new algorithm. To show that the new MLD is not dependent on a single model and its horizontal resolution, we extended our analysis to include 1/4° eddy-permitting simulations, and simulations using the Modular Ocean Model (MOM).
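
    The conventional density-difference criterion described above can be sketched in a few lines: the MLD is the shallowest depth at which potential density exceeds its surface value by a fixed threshold (0.01 kg/m3, following the example in the abstract). The profile values below are illustrative, not model output.

```python
import numpy as np

def mld_density_threshold(depth, sigma, threshold=0.01):
    """MLD from a density-difference criterion.

    depth: 1-D array of depths (m, increasing downward).
    sigma: 1-D array of potential density (kg/m3) at those depths.
    Returns the first depth where sigma exceeds surface sigma + threshold,
    or the deepest level if the column is mixed throughout.
    """
    exceed = np.nonzero(sigma - sigma[0] > threshold)[0]
    return depth[exceed[0]] if exceed.size else depth[-1]

depth = np.array([0., 10., 20., 50., 100., 200., 500., 1000.])
sigma = np.array([27.60, 27.602, 27.605, 27.608, 27.615, 27.70, 27.85, 27.95])
print(mld_density_threshold(depth, sigma))  # 100.0: first level > surface + 0.01
```

A coarse vertical grid makes this estimate jump between widely spaced levels, which is one reason a density threshold can overestimate the MLD in deep-convection regions.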

  6. Impact of hydrological variations on modeling of peatland CO2 fluxes: Results from the North American Carbon Program site synthesis

    Science.gov (United States)

    Sulman, Benjamin N.; Desai, Ankur R.; Schroeder, Nicole M.; Ricciuto, Dan; Barr, Alan; Richardson, Andrew D.; Flanagan, Lawrence B.; Lafleur, Peter M.; Tian, Hanqin; Chen, Guangsheng; Grant, Robert F.; Poulter, Benjamin; Verbeeck, Hans; Ciais, Philippe; Ringeval, Bruno; Baker, Ian T.; Schaefer, Kevin; Luo, Yiqi; Weng, Ensheng

    2012-03-01

    Northern peatlands are likely to be important in future carbon cycle-climate feedbacks due to their large carbon pools and vulnerability to hydrological change. Use of non-peatland-specific models could lead to bias in modeling studies of peatland-rich regions. Here, seven ecosystem models were used to simulate CO2 fluxes at three wetland sites in Canada and the northern United States, including two nutrient-rich fens and one nutrient-poor, Sphagnum-dominated bog, over periods between 1999 and 2007. Models consistently overestimated mean annual gross ecosystem production (GEP) and ecosystem respiration (ER) at all three sites. Monthly flux residuals (simulated - observed) were correlated with measured water table for GEP and ER at the two fen sites, but were not consistently correlated with water table at the bog site. Models that inhibited soil respiration under saturated conditions had less mean bias than models that did not. Modeled diurnal cycles agreed well with eddy covariance measurements at fen sites, but overestimated fluxes at the bog site. Eddy covariance GEP and ER at fens were higher during dry periods than during wet periods, while models predicted either the opposite relationship or no significant difference. At the bog site, eddy covariance GEP did not depend on water table, while simulated GEP was higher during wet periods. Carbon cycle modeling in peatland-rich regions could be improved by incorporating wetland-specific hydrology and by inhibiting GEP and ER under saturated conditions. Bogs and fens likely require distinct plant and soil parameterizations in ecosystem models due to differences in nutrients, peat properties, and plant communities.

  7. Validation of precipitation over Japan during 1985-2004 simulated by three regional climate models and two multi-model ensemble means

    Energy Technology Data Exchange (ETDEWEB)

    Ishizaki, Yasuhiro [Meteorological Research Institute, Tsukuba (Japan); National Institute for Environmental Studies, Tsukuba (Japan); Nakaegawa, Toshiyuki; Takayabu, Izuru [Meteorological Research Institute, Tsukuba (Japan)

    2012-07-15

    We dynamically downscaled Japanese reanalysis data (JRA-25) for 60 regions of Japan using three regional climate models (RCMs): the Non-Hydrostatic Regional Climate Model (NHRCM), modified RAMS version 4.3 (NRAMS), and modified Weather Research and Forecasting model (TWRF). We validated their simulations of the precipitation climatology and interannual variations of summer and winter precipitation. We also validated precipitation for two multi-model ensemble means: the arithmetic ensemble mean (AEM) and an ensemble mean weighted according to model reliability. In the 60 regions NRAMS simulated both the winter and summer climatological precipitation better than JRA-25, and NHRCM simulated the wintertime precipitation better than JRA-25. TWRF, however, overestimated precipitation in the 60 regions in both the winter and summer, and NHRCM overestimated precipitation in the summer. The three RCMs simulated interannual variations, particularly summer precipitation, better than JRA-25. AEM simulated both climatological precipitation and interannual variations during the two seasons more realistically than JRA-25 and the three RCMs overall, but the best RCM was often superior to the AEM result. In contrast, the weighted ensemble mean skills were usually superior to those of the best RCM. Thus, both RCMs and multi-model ensemble means, especially multi-model ensemble means weighted according to model reliability, are powerful tools for simulating seasonal and interannual variability of precipitation in Japan under the current climate. (orig.)
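
    A reliability-weighted multi-model ensemble mean of the kind described above can be sketched by weighting each RCM by the inverse of its mean-square error against observations. This is one common weighting choice, not necessarily the authors' exact scheme; the data below are synthetic.

```python
import numpy as np

def weighted_ensemble_mean(models, obs):
    """models: (n_models, n_time) hindcasts; obs: (n_time,) observations.
    Returns the inverse-MSE-weighted ensemble mean and the weights."""
    mse = np.mean((models - obs) ** 2, axis=1)
    w = 1.0 / mse
    w /= w.sum()                      # normalize to sum to 1
    return w @ models, w

rng = np.random.default_rng(2)
truth = rng.normal(5.0, 1.0, 240)     # e.g. monthly precipitation anomalies
# three hypothetical RCMs with increasing error levels
models = np.stack([truth + rng.normal(0, s, 240) for s in (0.3, 0.6, 1.2)])
ens, w = weighted_ensemble_mean(models, truth)
print(np.round(w, 2))  # the most reliable model receives the largest weight
```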

  8. Digital models: How can dental arch form be verified chairside?

    Directory of Open Access Journals (Sweden)

    Alana Tavares

    Full Text Available ABSTRACT Introduction: Plaster dental casts are routinely used during clinical practice to assess maxillary dental arch form and assist in the fabrication of individualized orthodontic archwires. Recently introduced digital model technology, however, may pose a limitation in obtaining a physical record of the dental arch. In this context, a chairside tool for dental arch form assessment is necessary when employing digital models, and a paper print of the dental arch appears useful for this purpose. Methods: In the present study, 37 lower arch models were used. Intercanine and intermolar widths and dental arch length measurements were performed and compared using plaster dental casts, digital models and paper print images of the models. The Ortho Insight 3D scanner was employed for model digitalization. Results: No statistically significant differences were noted regarding the measurements performed on the plaster or digital models (p > 0.05). Paper print images, however, showed underestimated values for intercanine and intermolar widths and overestimated values for dental arch length. Despite being statistically significant (p < 0.001), the differences were considered clinically negligible. Conclusion: The present study suggests that paper print images obtained from digital models are clinically accurate and can be used as a tool for dental arch form assessment for the fabrication of individualized orthodontic archwires.

  9. Digital models: How can dental arch form be verified chairside?

    Science.gov (United States)

    Tavares, Alana; Braga, Emanuel; de Araújo, Telma Martins

    2017-01-01

    ABSTRACT Introduction: Plaster dental casts are routinely used during clinical practice to assess maxillary dental arch form and assist in the fabrication of individualized orthodontic archwires. Recently introduced digital model technology, however, may pose a limitation in obtaining a physical record of the dental arch. In this context, a chairside tool for dental arch form assessment is necessary when employing digital models, and a paper print of the dental arch appears useful for this purpose. Methods: In the present study, 37 lower arch models were used. Intercanine and intermolar widths and dental arch length measurements were performed and compared using plaster dental casts, digital models and paper print images of the models. The Ortho Insight 3D scanner was employed for model digitalization. Results: No statistically significant differences were noted regarding the measurements performed on the plaster or digital models (p > 0.05). Paper print images, however, showed underestimated values for intercanine and intermolar widths and overestimated values for dental arch length. Despite being statistically significant (p < 0.001), the differences were considered clinically negligible. Conclusion: The present study suggests that paper print images obtained from digital models are clinically accurate and can be used as a tool for dental arch form assessment for the fabrication of individualized orthodontic archwires. PMID:29364382

  10. A parameterization of the heterogeneous hydrolysis of N2O5 for mass-based aerosol models: improvement of particulate nitrate prediction

    Directory of Open Access Journals (Sweden)

    Y. Chen

    2018-01-01

    Full Text Available The heterogeneous hydrolysis of N2O5 on the surface of deliquescent aerosol leads to HNO3 formation and acts as a major sink of NOx in the atmosphere during night-time. The reaction constant of this heterogeneous hydrolysis is determined by temperature (T), relative humidity (RH), aerosol particle composition, and the surface area concentration (S). However, these parameters were not comprehensively considered in the parameterization of the heterogeneous hydrolysis of N2O5 in previous mass-based 3-D aerosol modelling studies. In this investigation, we propose a sophisticated parameterization (NewN2O5) of N2O5 heterogeneous hydrolysis with respect to T, RH, aerosol particle composition, and S based on laboratory experiments. We evaluated closure between NewN2O5 and a state-of-the-art parameterization based on a sectional aerosol treatment. The comparison showed a good linear relationship (R = 0.91) between these two parameterizations. NewN2O5 was incorporated into a 3-D fully online coupled model, COSMO-MUSCAT, with the mass-based aerosol treatment. As a case study, we used the data from the HOPE Melpitz campaign (10–25 September 2013) to validate model performance. Here, we investigated the improvement of nitrate prediction over western and central Europe. The modelled particulate nitrate mass concentrations ([NO3−]) were validated by filter measurements over Germany (Neuglobsow, Schmücke, Zingst, and Melpitz). The modelled [NO3−] was significantly overestimated for this period by a factor of 5–19, with the corrected NH3 emissions (reduced by 50 %) and the original parameterization of N2O5 heterogeneous hydrolysis. NewN2O5 significantly reduces the overestimation of [NO3−] by ∼35 %. In particular, the overestimation factor was reduced to approximately 1.4 in our case study (12, 17–18 and 25 September 2013) when [NO3−] was dominated by local chemical formation. In our case, the suppression of organic coating
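
    For context, the first-order loss rate of N2O5 to the aerosol surface is conventionally written as k = (1/4) * gamma * c_mean * S, where gamma is the uptake coefficient (which a parameterization such as NewN2O5 makes a function of T, RH, and composition), c_mean the mean molecular speed, and S the aerosol surface area concentration. The sketch below uses a fixed placeholder gamma rather than the paper's parameterization.

```python
import numpy as np

R = 8.314        # gas constant, J mol-1 K-1
M_N2O5 = 0.108   # molar mass of N2O5, kg mol-1

def k_het(gamma, T, S):
    """First-order N2O5 heterogeneous loss rate (s-1).

    gamma: uptake coefficient (dimensionless), T: temperature (K),
    S: aerosol surface area concentration (m2 per m3 of air).
    """
    c_mean = np.sqrt(8.0 * R * T / (np.pi * M_N2O5))  # mean speed, m/s
    return 0.25 * gamma * c_mean * S

# Example: gamma = 0.02 (placeholder), T = 285 K, S = 200 um2/cm3
S = 200e-12 / 1e-6   # convert um2 cm-3 to m2 m-3
print(f"{k_het(0.02, 285.0, S):.2e} s^-1")
```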

  11. Mapping the Most Significant Computer Hacking Events to a Temporal Computer Attack Model

    OpenAIRE

    Heerden, Renier; Pieterse, Heloise; Irwin, Barry

    2012-01-01

    Part 4: Section 3: ICT for Peace and War; International audience; This paper presents eight of the most significant computer hacking events (also known as computer attacks). These events were selected because of their unique impact, methodology, or other properties. A temporal computer attack model is presented that can be used to model computer based attacks. This model consists of the following stages: Target Identification, Reconnaissance, Attack, and Post-Attack Reconnaissance stages. The...

  12. The measure and significance of Bateman's principles.

    Science.gov (United States)

    Collet, Julie M; Dean, Rebecca F; Worley, Kirsty; Richardson, David S; Pizzari, Tommaso

    2014-05-07

    Bateman's principles explain sex roles and sexual dimorphism through sex-specific variance in mating success, reproductive success and their relationships within sexes (Bateman gradients). Empirical tests of these principles, however, have come under intense scrutiny. Here, we experimentally show that in replicate groups of red junglefowl, Gallus gallus, mating and reproductive successes were more variable in males than in females, resulting in a steeper male Bateman gradient, consistent with Bateman's principles. However, we use novel quantitative techniques to reveal that current methods typically overestimate Bateman's principles because they (i) infer mating success indirectly from offspring parentage, and thus miss matings that fail to result in fertilization, and (ii) measure Bateman gradients through the univariate regression of reproductive over mating success, without considering the substantial influence of other components of male reproductive success, namely female fecundity and paternity share. We also find a significant female Bateman gradient but show that this likely emerges as a spurious consequence of male preference for fecund females, emphasizing the need for experimental approaches to establish the causal relationship between reproductive and mating success. While providing qualitative support for Bateman's principles, our study demonstrates how current approaches can generate a misleading view of sex differences and roles.
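
    The univariate regression critiqued above, reproductive success regressed on mating success within each sex, can be sketched as follows. The data are simulated for illustration, not taken from the study.

```python
import numpy as np

def bateman_gradient(mating_success, reproductive_success):
    """Slope of the univariate regression of reproductive on mating success."""
    slope, _intercept = np.polyfit(mating_success, reproductive_success, 1)
    return slope

rng = np.random.default_rng(3)
# simulated males: reproductive success rises steeply with matings
matings_m = rng.poisson(3.0, 100)
offspring_m = rng.poisson(4.0 * matings_m + 1)
# simulated females: weak dependence on number of matings
matings_f = rng.poisson(3.0, 100)
offspring_f = rng.poisson(0.5 * matings_f + 10)

print(bateman_gradient(matings_m, offspring_m) >
      bateman_gradient(matings_f, offspring_f))  # True: steeper male gradient
```

As the abstract notes, this regression alone ignores unfertilized matings, female fecundity and paternity share, which is why it can overestimate Bateman's principles.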

  13. Forecasting the mortality rates using Lee-Carter model and Heligman-Pollard model

    Science.gov (United States)

    Ibrahim, R. I.; Ngataman, N.; Abrisam, W. N. A. Wan Mohd

    2017-09-01

    Improvement in life expectancies has driven further declines in mortality. The sustained reduction in mortality rates and its systematic underestimation has attracted significant interest from researchers in recent years because of its potential impact on population size and structure, social security systems, and (from an actuarial perspective) the life insurance and pensions industry worldwide. Among forecasting methods, the Lee-Carter model has been widely accepted by the actuarial community, and the Heligman-Pollard model has been widely used by researchers in modelling and forecasting future mortality. Therefore, this paper focuses on the Lee-Carter and Heligman-Pollard models. The main objective of this paper is to investigate how accurately these two models perform using Malaysian data. Since these models involve nonlinear equations that are difficult to solve explicitly, the Matrix Laboratory Version 8.0 (MATLAB 8.0) software is used to estimate the parameters of the models. An Autoregressive Integrated Moving Average (ARIMA) procedure is applied to forecast the parameters of both models, and the forecasted mortality rates are obtained from the forecasted parameter values. To investigate the accuracy of the estimation, the forecasted results are compared against actual mortality data. The results indicate that both models perform better for the male population. However, for the elderly female population, the Heligman-Pollard model seems to underestimate the mortality rates while the Lee-Carter model seems to overestimate them.
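
    The Lee-Carter decomposition log m(x,t) = a_x + b_x * k_t is commonly estimated via an SVD of the centred log-rate matrix: a_x is the row mean, and (b_x, k_t) come from the leading singular triplet. The sketch below runs on synthetic rates (not Malaysian data) and is a simplification of the paper's MATLAB workflow.

```python
import numpy as np

rng = np.random.default_rng(4)
ages, years = 20, 30
# build an exactly rank-1 synthetic log-rate surface for illustration
kt_true = np.linspace(1.0, -1.0, years)        # declining mortality index
bx_true = rng.uniform(0.2, 0.8, ages)          # age sensitivities
ax_true = np.linspace(-6.0, -2.0, ages)        # baseline log rates by age
log_m = ax_true[:, None] + bx_true[:, None] * kt_true[None, :]

ax = log_m.mean(axis=1)                        # a_x: mean log rate per age
U, s, Vt = np.linalg.svd(log_m - ax[:, None], full_matrices=False)
bx = U[:, 0] / U[:, 0].sum()                   # usual constraint: sum(b_x) = 1
kt = s[0] * Vt[0] * U[:, 0].sum()              # rescaled mortality index

print(np.allclose(ax[:, None] + bx[:, None] * kt, log_m))  # True
```

In practice k_t is then modelled with an ARIMA process (a random walk with drift is the classic choice) to forecast future rates.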

  14. Comparing estimates of genetic variance across different relationship models.

    Science.gov (United States)

    Legarra, Andres

    2016-02-01

    Use of relationships between individuals to estimate genetic variances and heritabilities via mixed models is standard practice in human, plant and livestock genetics. Different models or information for relationships may give different estimates of genetic variances. However, comparing these estimates across different relationship models is not straightforward as the implied base populations differ between relationship models. In this work, I present a method to compare estimates of variance components across different relationship models. I suggest referring genetic variances obtained using different relationship models to the same reference population, usually a set of individuals in the population. Expected genetic variance of this population is the estimated variance component from the mixed model times a statistic, Dk, which is the average self-relationship minus the average (self- and across-) relationship. For most typical models of relationships, Dk is close to 1. However, this is not true for very deep pedigrees, for identity-by-state relationships, or for non-parametric kernels, which tend to overestimate the genetic variance and the heritability. Using mice data, I show that heritabilities from identity-by-state and kernel-based relationships are overestimated. Weighting these estimates by Dk scales them to a base comparable to genomic or pedigree relationships, avoiding wrong comparisons, for instance, "missing heritabilities". Copyright © 2015 Elsevier Inc. All rights reserved.
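
    The statistic Dk described above can be computed directly from a relationship matrix over the reference individuals: the average self-relationship minus the average over all (self- and across-) relationships. A minimal sketch with a made-up matrix:

```python
import numpy as np

def dk(K):
    """Dk = mean self-relationship minus mean overall relationship.

    K: (n, n) relationship matrix of the reference population.
    The expected genetic variance in this base is the estimated variance
    component from the mixed model multiplied by dk(K).
    """
    return np.mean(np.diag(K)) - np.mean(K)

# toy pedigree-like relationship matrix for three individuals
K = np.array([[1.00, 0.25, 0.10],
              [0.25, 1.00, 0.50],
              [0.10, 0.50, 1.05]])
print(round(dk(K), 4))  # 0.4889
```

For typical pedigree or genomic relationships Dk is close to 1, so the rescaling matters most for deep pedigrees, identity-by-state relationships, or non-parametric kernels.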

  15. Evaluation of convection-resolving models using satellite data: The diurnal cycle of summer convection over the Alps

    Directory of Open Access Journals (Sweden)

    Michael Keller

    2016-05-01

    Full Text Available Diurnal moist convection is an important element of summer precipitation over Central Europe and the Alps. It is poorly represented in models using parameterized convection. In this study, we investigate the diurnal cycle of convection during 11 days in June 2007 using the COSMO model. The numerical simulations are compared with satellite measurements of GERB and SEVIRI, AVHRR satellite-based cloud properties and ground-based precipitation and temperature measurements. The simulations use horizontal resolutions of 12 km (convection-parameterizing model, CPM) and 2 km (convection-resolving model, CRM) and either a one-moment microphysics scheme (1M) or a two-moment microphysics scheme (2M). They are conducted for a computational domain that covers an extended Alpine area from Northern Italy to Northern Germany. The CPM with 1M exhibits a significant overestimation of high cloud cover. This results in a compensation effect in the top of the atmosphere energy budget due to an underestimation of outgoing longwave radiation (OLR) and an overestimation of reflected solar radiation (RSR). The CRM reduces high cloud cover and improves the OLR bias from a domain mean of −20.1 to −2.6 W/m2. When using 2M with ice sedimentation in the CRM, high cloud cover is further reduced. The stronger diurnal cycle of high cloud cover and associated convection over the Alps, compared to less mountainous regions, is well represented by the CRM but underestimated by the CPM. Despite substantial differences in high cloud cover, the use of a 2M has no significant impact on the diurnal cycle of precipitation. Furthermore, a negative mid-level cloud bias is found for all simulations.

  16. The HIRLAM fast radiation scheme for mesoscale numerical weather prediction models

    Science.gov (United States)

    Rontu, Laura; Gleeson, Emily; Räisänen, Petri; Pagh Nielsen, Kristian; Savijärvi, Hannu; Hansen Sass, Bent

    2017-07-01

    This paper provides an overview of the HLRADIA shortwave (SW) and longwave (LW) broadband radiation schemes used in the HIRLAM numerical weather prediction (NWP) model and available in the HARMONIE-AROME mesoscale NWP model. The advantage of broadband, over spectral, schemes is that they can be called more frequently within the model, without compromising on computational efficiency. In mesoscale models fast interactions between clouds and radiation and the surface and radiation can be of greater importance than accounting for the spectral details of clear-sky radiation; thus calling the routines more frequently can be of greater benefit than the deterioration due to loss of spectral details. Fast but physically based radiation parametrizations are expected to be valuable for high-resolution ensemble forecasting, because, in addition to their speed of execution, they may provide realistic physical perturbations. Results from single-column diagnostic experiments based on CIRC benchmark cases and an evaluation of 10 years of radiation output from the FMI operational archive of HIRLAM forecasts indicate that HLRADIA performs sufficiently well with respect to the clear-sky downwelling SW and LW fluxes at the surface. In general, HLRADIA tends to overestimate surface fluxes, with the exception of LW fluxes under cold and dry conditions. The most obvious overestimation of the surface SW flux was seen in the cloudy cases in the 10-year comparison; this bias may be related to using a cloud inhomogeneity correction, which was too large. According to the CIRC comparisons, the outgoing LW and SW fluxes at the top of atmosphere are mostly overestimated by HLRADIA and the net LW flux is underestimated above clouds. The absorption of SW radiation by the atmosphere seems to be underestimated and LW absorption seems to be overestimated. Despite these issues, the overall results are satisfying, and work continues on improving HLRADIA for use in the HARMONIE-AROME NWP system.

  17. The Harm Done to Reproducibility by the Culture of Null Hypothesis Significance Testing.

    Science.gov (United States)

    Lash, Timothy L

    2017-09-15

    In the last few years, stakeholders in the scientific community have raised alarms about a perceived lack of reproducibility of scientific results. In reaction, guidelines for journals have been promulgated and grant applicants have been asked to address the rigor and reproducibility of their proposed projects. Neither solution addresses a primary culprit, which is the culture of null hypothesis significance testing that dominates statistical analysis and inference. In an innovative research enterprise, selection of results for further evaluation based on null hypothesis significance testing is doomed to yield a low proportion of reproducible results and a high proportion of effects that are initially overestimated. In addition, the culture of null hypothesis significance testing discourages quantitative adjustments to account for systematic errors and quantitative incorporation of prior information. These strategies would otherwise improve reproducibility and have not been previously proposed in the widely cited literature on this topic. Without discarding the culture of null hypothesis significance testing and implementing these alternative methods for statistical analysis and inference, all other strategies for improving reproducibility will yield marginal gains at best. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
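
    The inflation of selected effects described above can be demonstrated with a short simulation: among many noisy estimates of a small true effect, those that pass p < 0.05 are, on average, larger than the truth. The effect size and standard error below are arbitrary illustrative values.

```python
import numpy as np

rng = np.random.default_rng(5)
true_effect, se, n_studies = 0.2, 0.15, 200_000

# each "study" yields an unbiased but noisy estimate of the true effect
estimates = rng.normal(true_effect, se, n_studies)
z = estimates / se
significant = estimates[np.abs(z) > 1.96]   # two-sided p < 0.05 survivors

print(round(estimates.mean(), 2))           # ~0.20: all estimates are unbiased
print(round(significant.mean(), 2))         # noticeably inflated after selection
```

Selecting results by significance thus guarantees that the surviving effects are initially overestimated, which is exactly the reproducibility failure the abstract describes.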

  18. Azimuth cut-off model for significant wave height investigation along coastal water of Kuala Terengganu, Malaysia

    Science.gov (United States)

    Marghany, Maged; Ibrahim, Zelina; Van Genderen, Johan

    2002-11-01

    The present work operationalizes the azimuth cut-off concept in the study of significant wave height. Three ERS-1 images acquired along the coastal waters of Terengganu, Malaysia, have been used. The quasi-linear transform was applied to map the SAR wave spectra into real ocean wave spectra. The azimuth cut-off was then used to model the significant wave height. The results show that the azimuth cut-off varied between the ERS-1 images from different periods, because the azimuth cut-off is a function of wind speed and significant wave height. It is of interest that the significant wave height modeled from the azimuth cut-off agrees well with ground wave conditions. It can be concluded that ERS-1 can be used as a monitoring tool for detecting significant wave height variation, and that the azimuth cut-off can be used to model the significant wave height. This means that the quasi-linear transform could be a good approach to studying significant wave height variation during different seasons.

  19. Significance of settling model structures and parameter subsets in modelling WWTPs under wet-weather flow and filamentous bulking conditions.

    Science.gov (United States)

    Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen; Plósz, Benedek Gy

    2014-10-15

    Current research focuses on predicting and mitigating the impacts of high hydraulic loadings on centralized wastewater treatment plants (WWTPs) under wet-weather conditions. The maximum permissible inflow to WWTPs depends not only on the settleability of activated sludge in secondary settling tanks (SSTs) but also on the hydraulic behaviour of SSTs. The present study investigates the impacts of ideal and non-ideal flow (dry and wet weather) and settling (good settling and bulking) boundary conditions on the sensitivity of WWTP model outputs to uncertainties intrinsic to the one-dimensional (1-D) SST model structures and parameters. We identify the critical sources of uncertainty in WWTP models through global sensitivity analysis (GSA) using the Benchmark simulation model No. 1 in combination with first- and second-order 1-D SST models. The results obtained illustrate that the contribution of settling parameters to the total variance of the key WWTP process outputs significantly depends on the influent flow and settling conditions. The magnitude of the impact is found to vary, depending on which type of 1-D SST model is used. Therefore, we identify and recommend potential parameter subsets for WWTP model calibration, and propose optimal choice of 1-D SST models under different flow and settling boundary conditions. Additionally, the hydraulic parameters in the second-order SST model are found significant under dynamic wet-weather flow conditions. These results highlight the importance of developing a more mechanistic based flow-dependent hydraulic sub-model in second-order 1-D SST models in the future. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Comparison of Wake models with data [Efficient Development of Offshore Windfarms]

    Energy Technology Data Exchange (ETDEWEB)

    Rados, K. [Robert Gordon Univ., School of Engineering, Aberdeen, Scotland (United Kingdom); Larsen, G.; Barthelmie, R. [Risoe National Lab., Wind Energy Dept., Roskilde (Denmark); Schlez, W. [Garrad Hassan and Partners, Ltd., Bristol (United Kingdom); Lange, B. [Univ. of Oldenburg, Dept. of Energy and Semiconductor Research EHF, Oldenburg (Germany); Schepers, G.; Hegberg, T. [Netherlands Energy Research Foundation ECN, Solar and Wind Energy, Petten (NL); Magnusson, M. [Uppsala Univ., Dept. of Earth Sciences, Meteorology, Uppsala (Sweden)

    2002-03-01

    A major objective of the ENDOW project is to evaluate the performance of wake models in offshore environments in order to ascertain the improvements required to enhance the prediction of power output within large offshore wind farms. The strategy for achieving this objective is to compare the performance of the models in a wide range of conditions which are expected to be encountered during turbine operation offshore. Six models of varying complexity were initially evaluated against the Vindeby single-wake data, where it was found that almost all of them overestimate the wake effects, and significant inconsistencies between the model predictions appeared in the near-wake and turbulence-intensity results. Based on the conclusions of that study, the wake modelling groups have already implemented a number of modifications to their original models. In the present paper, new single-wake results are presented against experimental data from the Vindeby and Bockstigen wind farms. Clearly, some of the model discrepancies previously observed in the Vindeby cases have been smoothed out, and overall the performance is improved. (au)

  1. Effects of uncertainty in model predictions of individual tree volume on large area volume estimates

    Science.gov (United States)

    Ronald E. McRoberts; James A. Westfall

    2014-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...

  2. Representing Microbial Dormancy in Soil Decomposition Models Improves Model Performance and Reveals Key Ecosystem Controls on Microbial Activity

    Science.gov (United States)

    He, Y.; Yang, J.; Zhuang, Q.; Wang, G.; Liu, Y.

    2014-12-01

    Climate feedbacks from soils can result from environmental change and the subsequent responses of plant and microbial communities and nutrient cycling. Explicit consideration of microbial life-history traits and strategies may be necessary to predict climate feedbacks arising from microbial physiology and community changes and their associated effects on carbon cycling. In this study, we developed an explicit microbial-enzyme decomposition model and examined model performance with and without a representation of dormancy at six temperate forest sites, with observed soil efflux records spanning 4 to 10 years across different forest types. We then extrapolated the model to all temperate forests in the Northern Hemisphere (25-50°N) to investigate spatial controls on microbial and soil C dynamics. Both models captured the observed soil heterotrophic respiration (RH), yet the no-dormancy model consistently exhibited a large seasonal amplitude and overestimation of microbial biomass. Spatially, the total RH from temperate forests amounts to 6.88 Pg C/yr with the dormancy model and 7.99 Pg C/yr with the no-dormancy model. The no-dormancy model also notably overestimated the ratio of microbial biomass to SOC. Spatial correlation analysis revealed a key control of the soil C:N ratio on the active proportion of microbial biomass, whereas local dormancy is primarily controlled by soil moisture and temperature, indicating scale-dependent environmental and biotic controls on microbial and SOC dynamics. These developments should provide essential support for modeling future soil carbon dynamics and strengthen collaboration between empirical soil experiments and modeling, in the sense that more microbial physiological measurements are needed to better constrain and evaluate the models.
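
    The dormancy mechanism can be caricatured as a two-pool switch in which biomass moves between active and dormant states with soil moisture; all rate constants and functional forms below are hypothetical placeholders, not the paper's parameterization:

```python
def step_microbes(active, dormant, moisture, dt=0.1,
                  k_act=0.5, k_dorm=0.5, growth=0.03, death=0.0):
    """One explicit-Euler step of a minimal active/dormant biomass pair.
    moisture in [0, 1]: wetter soil reactivates dormant biomass,
    drier soil pushes active biomass into dormancy."""
    to_active = k_act * moisture * dormant
    to_dormant = k_dorm * (1.0 - moisture) * active
    d_active = (growth - death) * active + to_active - to_dormant
    d_dormant = to_dormant - to_active
    return active + dt * d_active, dormant + dt * d_dormant

# Under persistently dry conditions most biomass ends up dormant,
# which is the behaviour that damps the seasonal amplitude of
# respiration relative to a model without dormancy.
a, d = 1.0, 0.0
for _ in range(200):
    a, d = step_microbes(a, d, moisture=0.1)
```

A model without the dormant pool would instead carry the full biomass as active through dry periods, overestimating both biomass and its seasonal swing, consistent with the comparison reported above.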

  3. Modeling carbon dioxide sequestration in saline aquifers: Significance of elevated pressures and salinities

    International Nuclear Information System (INIS)

    Allen, D.E.; Strazisar, B.R.; Soong, Y.; Hedges, S.W.

    2005-01-01

    The ultimate capacity of saline formations to sequester carbon dioxide by solubility and mineral trapping must be determined by simulating sequestration with geochemical models. These models, however, are only as reliable as the data and reaction schemes on which they are based. Several models have been used to estimate carbon dioxide solubility and mineral formation as a function of pressure and fluid composition. Intercomparison of modeling results indicates that failure to adjust all equilibrium constants to account for elevated carbon dioxide pressures results in significant errors in both solubility and mineral formation estimates. The absence of experimental data at high carbon dioxide pressures and high salinities makes verification of model results difficult. Results indicate that standalone solubility models that do not take mineral reactions into account will underestimate the total capacity of aquifers to sequester carbon dioxide in the long term through enhanced solubility and mineral trapping mechanisms. Overall, it is difficult to confidently predict the ultimate sequestration capacity of deep saline aquifers using geochemical models. (author)

  4. Photogrammetry experiments with a model eye.

    Science.gov (United States)

    Rosenthal, A R; Falconer, D G; Pieper, I

    1980-01-01

    Digital photogrammetry was performed on stereophotographs of the optic nerve head of a modified Zeiss model eye in which optic cups of varying depths could be simulated. Experiments were undertaken to determine the impact of both photographic and ocular variables on the photogrammetric measurements of cup depth. The photogrammetric procedure tolerates refocusing, repositioning, and realignment as well as small variations in the geometric position of the camera. Progressive underestimation of cup depth was observed with increasing myopia, while progressive overestimation was noted with increasing hyperopia. High cylindrical errors at axis 90 degrees led to significant errors in cup depth estimates, while high cylindrical errors at axis 180 degrees did not materially affect the accuracy of the analysis. Finally, cup depths were seriously underestimated when the pupil diameter was less than 5.0 mm. PMID:7448139

  5. Significant uncertainty in global scale hydrological modeling from precipitation data errors

    Science.gov (United States)

    Sperna Weiland, Frederiek C.; Vrugt, Jasper A.; van Beek, Rens (L.) P. H.; Weerts, Albrecht H.; Bierkens, Marc F. P.

    2015-10-01

    In the past decades significant progress has been made in the fitting of hydrologic models to data. Most of this work has focused on simple, CPU-efficient, lumped hydrologic models using discharge, water table depth, soil moisture, or tracer data from relatively small river basins. In this paper, we focus on large-scale hydrologic modeling and analyze the effect of parameter and rainfall data uncertainty on simulated discharge dynamics with the global hydrologic model PCR-GLOBWB. We use three rainfall data products: the CFSR reanalysis, the ERA-Interim reanalysis, and a combined ERA-40 reanalysis and CRU dataset. Parameter uncertainty is derived from Latin Hypercube Sampling (LHS) using monthly discharge data from five of the largest river systems in the world. Our results demonstrate that the default parameterization of PCR-GLOBWB, derived from global datasets, can be improved by calibrating the model against monthly discharge observations. Yet, it is difficult to find a single parameterization of PCR-GLOBWB that works well for all of the five river basins considered herein and shows consistent performance during both the calibration and evaluation period. Still, there may be possibilities for regionalization based on catchment similarities. Our simulations illustrate that parameter uncertainty constitutes only a minor part of predictive uncertainty. Thus, the apparent dichotomy between simulations of global-scale hydrologic behavior and actual data cannot be resolved by simply increasing the model complexity of PCR-GLOBWB and resolving sub-grid processes. Instead, it would be more productive to improve the characterization of global rainfall amounts at spatial resolutions of 0.5° and smaller.
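
    Latin Hypercube Sampling, as used above to propagate parameter uncertainty, stratifies each parameter range into n equal-probability bins and draws exactly one value per bin. A generic sketch (the two parameter ranges are illustrative, not PCR-GLOBWB's):

```python
import numpy as np

def latin_hypercube(n, bounds, seed=None):
    """Latin Hypercube Sample: stratify each dimension into n
    equal-probability bins and draw one value per bin, with the bin
    order shuffled independently per dimension."""
    rng = np.random.default_rng(seed)
    d = len(bounds)
    # one random permutation of the bin indices per dimension
    bins = np.column_stack([rng.permutation(n) for _ in range(d)])
    u = (bins + rng.random((n, d))) / n          # position within each bin
    lo = np.array([b[0] for b in bounds])
    hi = np.array([b[1] for b in bounds])
    return lo + u * (hi - lo)

# Illustrative ranges for two hypothetical hydrologic parameters
samples = latin_hypercube(100, [(0.1, 10.0), (0.5, 2.0)], seed=42)
```

Compared with plain random sampling, every marginal is covered evenly even for small n, which is why LHS is the usual choice when each model run is expensive.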

  6. Mechanisms controlling primary and new production in a global ecosystem model – Part II: The role of the upper ocean short-term periodic and episodic mixing events

    Directory of Open Access Journals (Sweden)

    E. E. Popova

    2006-01-01

    The use of 6 h, daily, weekly and monthly atmospheric forcing resulted in dramatically different predictions of plankton productivity in a global 3-D coupled physical-biogeochemical model. Resolving the diurnal cycle of atmospheric variability by use of 6 h forcing, and hence also diurnal variability in upper mixed layer (UML) depth, produced the largest difference, reducing predicted global primary and new production by 25% and 10% respectively relative to that predicted with daily and weekly forcing. This decrease varied regionally, being a 30% reduction in equatorial areas, primarily because of increased light limitation resulting from deepening of the mixed layer overnight as well as enhanced storm activity, and 25% at moderate and high latitudes, primarily due to increased grazing pressure resulting from late winter stratification events. Mini-blooms of phytoplankton and zooplankton occur in the model during these events, leading to zooplankton populations being sufficiently well developed to suppress the progress of phytoplankton blooms. A 10% increase in primary production was predicted in the peripheries of the oligotrophic gyres due to increased storm-induced nutrient supply and enhanced winter production during the short-term stratification events that are resolved in the run forced by 6 h meteorological fields. By resolving the diurnal cycle, model performance was significantly improved with respect to several common problems: underestimated primary production in the oligotrophic gyres; overestimated primary production in the Southern Ocean; overestimated magnitude of the spring bloom in the subarctic Pacific Ocean; and overestimated primary production in equatorial areas. The effect of using 6 h forcing on predicted ecosystem dynamics was profound, the effects persisting far beyond the hourly timescale and having major consequences for predicted global primary and new production on an annual basis.

  7. Ecological validity of cost-effectiveness models of universal HPV vaccination: A systematic literature review.

    Science.gov (United States)

    Favato, Giampiero; Easton, Tania; Vecchiato, Riccardo; Noikokyris, Emmanouil

    2017-05-09

    The protective (herd) effect of the selective vaccination of pubertal girls against human papillomavirus (HPV) implies a high probability that one of the two partners involved in intercourse is immunised, hence protecting the other from this sexually transmitted infection. The dynamic transmission models used to inform immunisation policy should include consideration of sexual behaviours and population mixing in order to demonstrate an ecological validity, whereby the scenarios modelled remain faithful to the real-life social and cultural context. The primary aim of this review is to test the ecological validity of the universal HPV vaccination cost-effectiveness modelling available in the published literature. The research protocol related to this systematic review has been registered in the International Prospective Register of Systematic Reviews (PROSPERO: CRD42016034145). Eight published economic evaluations were reviewed. None of the studies showed due consideration of the complexities of human sexual behaviour and the impact this may have on the transmission of HPV. Our findings indicate that all the included models might be affected by a different degree of ecological bias, which implies an inability to reflect the natural demographic and behavioural trends in their outcomes and, consequently, to accurately inform public healthcare policy. In particular, ecological biases have the effect of over-estimating the preference-based outcomes of selective immunisation. A relatively small (15-20%) over-estimation of quality-adjusted life years (QALYs) gained with selective immunisation programmes could induce a significant error in the estimate of cost-effectiveness of universal immunisation, by inflating its incremental cost-effectiveness ratio (ICER) beyond the acceptability threshold. The results modelled here demonstrate the limitations of the cost-effectiveness studies for HPV vaccination, and highlight the concern that public healthcare policy might have been

  8. Assessment of runoff contributing catchment areas in rainfall runoff modelling

    DEFF Research Database (Denmark)

    Thorndahl, Søren; Johansen, C.; Schaarup-Jensen, Kjeld

    2006-01-01

    In numerical modelling of rainfall-caused runoff in urban sewer systems an essential parameter is the hydrological reduction factor, which defines the percentage of the impervious area contributing to the surface flow towards the sewer. As the hydrological processes during a rainfall are difficult... to determine with significant precision, the hydrological reduction factor is implemented to account for all hydrological losses except the initial loss. This paper presents an inconsistency between calculations of the hydrological reduction factor, based on measurements of rainfall and runoff, and till now... recommended literature values for residential areas. It is proven by comparing rainfall-runoff measurements from four different residential catchments that the literature values of the hydrological reduction factor are over-estimated for this type of catchment. In addition, different catchment descriptions...

  9. Improving the singles rate method for modeling accidental coincidences in high-resolution PET

    International Nuclear Information System (INIS)

    Oliver, Josep F; Rafecas, Magdalena

    2010-01-01

    Random coincidences ('randoms') are one of the main sources of image degradation in PET imaging. In order to correct for this effect, an accurate method to estimate the contribution of random events is necessary. This aspect becomes especially relevant for high-resolution PET scanners, where the highest image quality is sought and accurate quantitative analysis is undertaken. One common approach to estimate randoms is the so-called singles rate method (SR), widely used because of its good statistical properties. SR is based on the measurement of the singles rate in each detector element. However, recent studies suggest that SR systematically overestimates the correct random rate. This overestimation can be particularly marked for the low energy thresholds, below 250 keV, used in some applications, and could entail significant image degradation. In this work, we investigate the performance of SR as a function of the activity, the geometry of the source and the energy acceptance window used. We also investigate the performance of an alternative method, which we call 'singles trues' (ST), that improves SR by properly modeling the presence of true coincidences in the sample. Nevertheless, in any real data acquisition the knowledge of which singles are members of a true coincidence is lost. Therefore, we propose an iterative method, STi, that provides an estimation based on ST but requires only the knowledge of measurable quantities: prompts and singles. Due to inter-crystal scatter, for wide energy windows ST only partially corrects the SR overestimations. While SR deviations are in the range 86-300% (depending on the source geometry), the ST deviations are systematically smaller, contained in the range 4-60%. STi fails to reproduce the ST results, although for not too high activities the deviation with respect to ST is only a few percent. For conventional energy windows, i.e. those without inter-crystal scatter, the ST method corrects the SR overestimations, and deviations from
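
    The singles rate method forms the randoms estimate for each detector pair from the product of the singles rates and the coincidence window, R_ij = 2τ·S_i·S_j. A minimal sketch, with the ST-style variant written under the idealized assumption that the true-coincidence singles are known (in practice they are not, which is what motivates the iterative STi):

```python
def randoms_sr(s_i, s_j, two_tau):
    """Singles-rate (SR) estimate of accidentals for a detector pair:
    R_ij = 2*tau * S_i * S_j, with two_tau the coincidence window (s)."""
    return two_tau * s_i * s_j

def randoms_st(s_i, s_j, t_i, t_j, two_tau):
    """'Singles-trues' (ST) style correction: exclude singles that
    actually belong to true coincidences before forming the product
    (idealized form; the paper develops the practical estimator
    iteratively from prompts and singles)."""
    return two_tau * (s_i - t_i) * (s_j - t_j)

# 50 kcps singles on each detector, 12 ns coincidence window:
r_sr = randoms_sr(5.0e4, 5.0e4, 12e-9)   # 30 accidentals/s
# if 5 kcps of each detector's singles belong to true coincidences:
r_st = randoms_st(5.0e4, 5.0e4, 5.0e3, 5.0e3, 12e-9)
```

Because true-coincidence singles inflate S_i and S_j, the plain SR product is biased high, which is the overestimation the abstract quantifies.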

  10. Firetube model and hadron-hadron collisions

    International Nuclear Information System (INIS)

    Nazareth, R.A.M.S.; Kodama, T.; Portes Junior, D.A.

    1992-01-01

    A new version of the fire tube model is developed to describe hadron-hadron collisions at ultrarelativistic energies. Several improvements are introduced in order to include the longitudinal expansion of intermediate fireballs, which remedies the overestimates of the transverse momenta in the previous version. It is found that, within a wide range of incident energies, the model describes the experimental data well for the single-particle rapidity distribution, two-body correlations in pseudo-rapidity, the transverse momentum spectra of pions and kaons, the leading-particle spectra and the K/π ratio. (author)

  11. Variable population exposure and distributed travel speeds in least-cost tsunami evacuation modelling

    Science.gov (United States)

    Fraser, Stuart A.; Wood, Nathan J.; Johnston, David A.; Leonard, Graham S.; Greening, Paul D.; Rossetto, Tiziana

    2014-01-01

    Evacuation of the population from a tsunami hazard zone is vital to reducing loss of life due to inundation. Geospatial least-cost distance modelling provides one approach to assessing tsunami evacuation potential. Previous models have generally used two static exposure scenarios and fixed travel speeds to represent population movement. Some analyses have assumed immediate departure or a common evacuation departure time for all exposed population. Here, a method is proposed to incorporate time-variable exposure, distributed travel speeds, and uncertain evacuation departure time into an existing anisotropic least-cost path distance framework. The method is demonstrated for hypothetical local-source tsunami evacuation in Napier City, Hawke's Bay, New Zealand. There is significant diurnal variation in pedestrian evacuation potential at the suburb level, although the total number of people unable to evacuate is stable across all scenarios. Whilst some fixed travel speeds approximate a distributed-speed approach, others may overestimate evacuation potential. The evacuation departure time is a significant contributor to total evacuation time. This method improves least-cost modelling of evacuation dynamics for evacuation planning, casualty modelling, and development of emergency response training scenarios. However, it requires detailed exposure data, which may preclude its use in many situations.
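
    Least-cost evacuation surfaces of this kind are typically computed with a Dijkstra-style traversal seeded from all safe cells at once. A simplified 4-connected grid sketch (a real analysis would use anisotropic costs for slope and land cover, which this omits):

```python
import heapq

def evacuation_times(speed, safe, cell_size=1.0):
    """Least travel time (s) from every grid cell to the nearest 'safe'
    cell via Dijkstra on 4-connected neighbours.
    speed[r][c]: pedestrian speed (m/s) through a cell, 0 = impassable;
    safe: set of (row, col) destination cells."""
    rows, cols = len(speed), len(speed[0])
    INF = float("inf")
    time = [[INF] * cols for _ in range(rows)]
    pq = [(0.0, r, c) for (r, c) in safe]
    for _, r, c in pq:
        time[r][c] = 0.0
    heapq.heapify(pq)
    while pq:
        t, r, c = heapq.heappop(pq)
        if t > time[r][c]:
            continue                       # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and speed[nr][nc] > 0:
                nt = t + cell_size / speed[nr][nc]
                if nt < time[nr][nc]:
                    time[nr][nc] = nt
                    heapq.heappush(pq, (nt, nr, nc))
    return time

# Toy 3x3 grid, 50 m cells, one impassable cell (e.g. a building):
speeds = [[1.2, 1.2, 1.2],
          [1.2, 0.0, 1.2],
          [1.2, 1.2, 1.2]]
times = evacuation_times(speeds, safe={(0, 0)}, cell_size=50.0)
```

Distributed travel speeds are then handled by repeating the computation over speed values drawn from, say, an assumed walking-speed distribution rather than a single fixed speed.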

  12. A keyword spotting model using perceptually significant energy features

    Science.gov (United States)

    Umakanthan, Padmalochini

    The task of a keyword recognition system is to detect the presence of certain words in a conversation based on the linguistic information present in human speech. Such keyword spotting systems have applications in homeland security, telephone surveillance and human-computer interfacing. The general procedure of a keyword spotting system involves feature generation and matching. In this work, a new set of features based on the psycho-acoustic masking nature of human speech is proposed. After developing these features, a time-aligned pattern matching process was implemented to locate the keywords in a set of unknown words. A word boundary detection technique based on frame classification using the nonlinear characteristics of speech is also addressed in this work. Validation of this keyword spotting model was done using the widely used cepstral features. The experimental results indicate the viability of using these perceptually significant features as an augmented feature set in keyword spotting.

  13. Neck Muscle Moment Arms Obtained In-Vivo from MRI: Effect of Curved and Straight Modeled Paths.

    Science.gov (United States)

    Suderman, Bethany L; Vasavada, Anita N

    2017-08-01

    Musculoskeletal models of the cervical spine commonly represent neck muscles with straight paths. However, straight lines do not best represent the natural curvature of muscle paths in the neck, because the paths are constrained by bone and soft tissue. The purpose of this study was to estimate moment arms of curved and straight neck muscle paths using different moment arm calculation methods: tendon excursion, geometric, and effective torque. Curved and straight muscle paths were defined for two subject-specific cervical spine models derived from in vivo magnetic resonance images (MRI). Modeling neck muscle paths with curvature provides significantly different moment arm estimates than straight paths for 10 of 15 neck muscles (p < 0.05). Using straight lines to model muscle paths can lead to overestimating the neck extension moment. However, moment arm methods for curved paths should be investigated further, as different methods of calculating moment arm can provide different estimates.
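
    Of the moment arm methods named above, tendon excursion is the simplest to sketch: the moment arm is the negative derivative of musculotendon length with respect to joint angle, approximated here by a central difference (the sign convention and the toy path model are illustrative, not the study's):

```python
def tendon_excursion_moment_arm(path_length, q, h=1e-5):
    """Tendon-excursion moment arm r(q) = -dL/dq, approximated by a
    central finite difference. path_length: callable returning
    musculotendon length (m) at joint angle q (rad)."""
    return -(path_length(q + h) - path_length(q - h)) / (2.0 * h)

# Toy path: a muscle wrapping a circular arc of radius 0.03 m shortens
# linearly with joint angle, so its moment arm is exactly 0.03 m.
L = lambda q: 0.20 - 0.03 * q
r = tendon_excursion_moment_arm(L, q=0.4)
```

The curved-path versus straight-path question in the study amounts to which `path_length` function is differentiated: a straight chord between attachment points, or a path constrained to wrap around bone and soft tissue.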

  14. Bioavailability of particulate metal to zebra mussels: biodynamic modelling shows that assimilation efficiencies are site-specific.

    Science.gov (United States)

    Bourgeault, Adeline; Gourlay-Francé, Catherine; Priadi, Cindy; Ayrault, Sophie; Tusseau-Vuillemin, Marie-Hélène

    2011-12-01

    This study investigates the ability of the biodynamic model to predict the trophic bioaccumulation of cadmium (Cd), chromium (Cr), copper (Cu), nickel (Ni) and zinc (Zn) in a freshwater bivalve. Zebra mussels were transplanted to three sites along the Seine River (France) and collected monthly for 11 months. Measurements of the metal body burdens in mussels were compared with the predictions from the biodynamic model. The exchangeable fraction of metal particles did not account for the bioavailability of particulate metals, since it did not capture the differences between sites. The assimilation efficiency (AE) parameter is necessary to take into account biotic factors influencing particulate metal bioavailability. The biodynamic model, applied with AEs from the literature, overestimated the measured concentrations in zebra mussels, the extent of overestimation being site-specific. Therefore, an original methodology was proposed for in situ AE measurements for each site and metal. Copyright © 2011 Elsevier Ltd. All rights reserved.
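
    The biodynamic model balances dissolved and dietary uptake against efflux and growth dilution, giving a closed-form steady-state tissue concentration. A sketch with purely illustrative parameter values; the site-specific AE is exactly the quantity the study argues must be measured in situ:

```python
def steady_state_metal(k_u, C_w, AE, IR, C_p, k_e, g):
    """Steady-state tissue metal concentration (ug/g) from the
    biodynamic model:
        Css = (k_u*C_w + AE*IR*C_p) / (k_e + g)
    k_u: dissolved uptake rate constant (L/g/d); C_w: dissolved metal
    (ug/L); AE: assimilation efficiency (-); IR: ingestion rate
    (g/g/d); C_p: particulate metal (ug/g); k_e: efflux rate (1/d);
    g: growth dilution rate (1/d)."""
    return (k_u * C_w + AE * IR * C_p) / (k_e + g)

# Illustrative values only: halving AE (a site-specific value instead
# of a literature one) halves the dietary term and lowers Css.
css_lit = steady_state_metal(0.2, 1.0, 0.30, 0.5, 10.0, 0.02, 0.005)
css_site = steady_state_metal(0.2, 1.0, 0.15, 0.5, 10.0, 0.02, 0.005)
```

This makes the study's conclusion concrete: with literature AEs the dietary term, and hence the predicted body burden, is inflated wherever the true site-specific AE is lower.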

  15. The Combined Effects of Measurement Error and Omitting Confounders in the Single-Mediator Model.

    Science.gov (United States)

    Fritz, Matthew S; Kenny, David A; MacKinnon, David P

    2016-01-01

    Mediation analysis requires a number of strong assumptions be met in order to make valid causal inferences. Failing to account for violations of these assumptions, such as not modeling measurement error or omitting a common cause of the effects in the model, can bias the parameter estimates of the mediated effect. When the independent variable is perfectly reliable, for example when participants are randomly assigned to levels of treatment, measurement error in the mediator tends to underestimate the mediated effect, while the omission of a confounding variable of the mediator-to-outcome relation tends to overestimate the mediated effect. Violations of these two assumptions often co-occur, however, in which case the mediated effect could be overestimated, underestimated, or even, in very rare circumstances, unbiased. To explore the combined effect of measurement error and omitted confounders in the same model, the effect of each violation on the single-mediator model is first examined individually. Then the combined effect of having measurement error and omitted confounders in the same model is discussed. Throughout, an empirical example is provided to illustrate the effect of violating these assumptions on the mediated effect.
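
    The attenuation described above (measurement error in the mediator shrinking the estimated mediated effect a*b) is easy to reproduce by simulation. A sketch with arbitrary effect sizes and a hypothetical reliability of 0.6:

```python
import numpy as np

def mediated_effect(n, a, b, reliability, seed=0):
    """Estimate the mediated effect a_hat*b_hat when the mediator M is
    observed with error. reliability = var(true M) / var(observed M)."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal(n)
    M = a * X + rng.standard_normal(n)            # true mediator
    Y = b * M + rng.standard_normal(n)            # outcome, no direct effect
    err_sd = np.sqrt(M.var() * (1.0 - reliability) / reliability)
    M_obs = M + rng.standard_normal(n) * err_sd   # unreliable measurement
    a_hat = np.cov(X, M_obs)[0, 1] / X.var()      # X -> M path (unbiased)
    # b path: regress Y on M_obs controlling for X
    Z = np.column_stack([np.ones(n), M_obs, X])
    b_hat = np.linalg.lstsq(Z, Y, rcond=None)[0][1]
    return a_hat * b_hat

ab_perfect = mediated_effect(50_000, a=0.5, b=0.4, reliability=1.0)
ab_noisy = mediated_effect(50_000, a=0.5, b=0.4, reliability=0.6)
```

Only the b path is attenuated here (error in M does not bias the X-to-M regression), so the product a_hat*b_hat shrinks; adding an omitted confounder of the M-to-Y relation to the simulation would push the estimate in the opposite direction, as the abstract describes.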

  16. The Combined Effects of Measurement Error and Omitting Confounders in the Single-Mediator Model

    Science.gov (United States)

    Fritz, Matthew S.; Kenny, David A.; MacKinnon, David P.

    2016-01-01

    Mediation analysis requires a number of strong assumptions be met in order to make valid causal inferences. Failing to account for violations of these assumptions, such as not modeling measurement error or omitting a common cause of the effects in the model, can bias the parameter estimates of the mediated effect. When the independent variable is perfectly reliable, for example when participants are randomly assigned to levels of treatment, measurement error in the mediator tends to underestimate the mediated effect, while the omission of a confounding variable of the mediator to outcome relation tends to overestimate the mediated effect. Violations of these two assumptions often co-occur, however, in which case the mediated effect could be overestimated, underestimated, or even, in very rare circumstances, unbiased. In order to explore the combined effect of measurement error and omitted confounders in the same model, the impact of each violation on the single-mediator model is first examined individually. Then the combined effect of having measurement error and omitted confounders in the same model is discussed. Throughout, an empirical example is provided to illustrate the effect of violating these assumptions on the mediated effect. PMID:27739903

  17. Premodelling of the importance of the location of the upstream hydraulic boundary of a regional flow model of the Laxemar-Simpevarp area. Site descriptive modelling SDM-Site Laxemar

    International Nuclear Information System (INIS)

    Holmen, Johan G.

    2008-03-01

    The location of the westernmost hydraulic boundary of a regional groundwater flow model representing the Laxemar investigation area is of importance, as the regional flow of groundwater is primarily from the west towards the sea (as given by the regional topography). If the westernmost boundary condition of a regional flow model is located too close to the investigation area, the regional flow model may underestimate the magnitude of the regional groundwater flow (at the investigation area), as well as overestimate breakthrough times of flow paths from the repository area, etc. Groundwater flows have been calculated by use of two mathematical (numerical) models: a very large groundwater flow model, much larger than the regional flow model used in the Laxemar site description version 1.2, and a smaller flow model that is of a comparable size to the regional model used in the site description. The models are identical except for the different horizontal extensions of the models; the large model extends much further to the west than the small model. The westernmost lateral boundary of the small model is a topographic water divide approx. 7 km from the central parts of the Laxemar investigation area, and the westernmost lateral boundary of the large model is a topographic water divide approx. 40 km from the central parts of the Laxemar investigation area. In the models the lateral boundaries are defined as no-flow boundaries. The objective of the study is to calculate and compare the groundwater flow properties at a tentative repository area at Laxemar by use of a large flow model and a small flow model. The comparisons include the following three parameters: - Length of flow paths from the tentative repository area. - Advective breakthrough time for flow paths from the tentative repository area. - Magnitude of flow at the tentative repository area. The comparisons demonstrated the following considering the median values of the obtained distributions of flow paths

  18. Four-phonon scattering significantly reduces intrinsic thermal conductivity of solids

    Science.gov (United States)

    Feng, Tianli; Lindsay, Lucas; Ruan, Xiulin

    2017-10-01

    For decades, the three-phonon scattering process has been considered to govern thermal transport in solids, while the role of higher-order four-phonon scattering has been persistently unclear and so ignored. However, recent quantitative calculations of three-phonon scattering have often shown a significant overestimation of thermal conductivity as compared to experimental values. In this Rapid Communication we show that four-phonon scattering is generally important in solids and can remedy such discrepancies. For silicon and diamond, the predicted thermal conductivity is reduced by 30% at 1000 K after including four-phonon scattering, bringing predictions into excellent agreement with measurements. For the projected ultrahigh-thermal-conductivity material zinc-blende BAs, a competitor of diamond as a heat-sink material, four-phonon scattering is found to be strikingly strong, as three-phonon processes have an extremely limited phase space for scattering. The four-phonon scattering reduces the predicted thermal conductivity from 2200 to 1400 W/m K at room temperature; the reduction at 1000 K is 60%. We also find that optical phonon scattering rates are strongly affected, which is important in applications such as phonon bottlenecks in equilibrating electronic excitations. Recognizing that four-phonon scattering is expensive to calculate, we close with some guidelines on how to quickly assess the significance of four-phonon scattering, based on energy-surface anharmonicity and the scattering phase space. Our work resolves the decades-long fundamental question of the significance of higher-order scattering, and points out ways to improve thermoelectrics, thermal barrier coatings, nuclear materials, and radiative heat transfer.

  19. Meiofauna metabolism in suboxic sediments: currently overestimated.

    Directory of Open Access Journals (Sweden)

    Ulrike Braeckman

    Oxygen is recognized as a structuring factor of metazoan communities in marine sediments. The importance of oxygen as a controlling factor on meiofauna (32 µm-1 mm in size) respiration rates is however less clear. Typically, respiration rates are measured under oxic conditions, after which these rates are used in food web studies to quantify the role of meiofauna in sediment carbon turnover. Sediment oxygen concentration ([O2]) is generally far from saturated, implying that (1) current estimates of the role of meiofauna in carbon cycling may be biased and (2) meiofaunal organisms need strategies to survive in oxygen-stressed environments. Two main survival strategies are often hypothesized: (1) frequent migration to oxic layers and (2) morphological adaptation. To evaluate these hypotheses, we (1) used a model of oxygen turnover in the meiofauna body as a function of ambient [O2], and (2) performed respiration measurements at a range of [O2] conditions. The oxygen turnover model predicts a tight coupling between ambient [O2] and meiofauna body [O2], with oxygen within the body being consumed in seconds. This fast turnover favors long and slender organisms in sediments with low ambient [O2], but even then frequent migration between suboxic and oxic layers is not a viable strategy for most organisms to alleviate oxygen limitation. Respiration rates of all measured meiofauna organisms slowed down in response to decreasing ambient [O2], with Nematoda displaying the highest metabolic sensitivity to declining [O2], followed by Foraminifera and juvenile Gastropoda. Ostracoda showed a behavioral stress response when ambient [O2] reached a critical level. Reduced respiration at low ambient [O2] implies that meiofauna in natural, i.e. suboxic, sediments must have a lower metabolism than inferred from earlier respiration rates measured under oxic conditions. The implications of these findings are discussed for the contribution of meiofauna to carbon

  20. A Bayesian approach to modeling and predicting pitting flaws in steam generator tubes

    International Nuclear Information System (INIS)

    Yuan, X.-X.; Mao, D.; Pandey, M.D.

    2009-01-01

    Steam generators in nuclear power plants have experienced varying degrees of under-deposit pitting corrosion. A probabilistic model to accurately predict pitting damage is necessary for effective life-cycle management of steam generators. This paper presents an advanced probabilistic model of pitting corrosion characterizing the inherent randomness of the pitting process and the measurement uncertainties of the in-service inspection (ISI) data obtained from eddy current (EC) inspections. A Markov chain Monte Carlo simulation-based Bayesian method, enhanced by a data augmentation technique, is developed for estimating the model parameters. The proposed model is able to predict the actual pit number, the actual pit depth as well as the maximum pit depth, which is the main interest of the pitting corrosion model. The study also reveals the significance of inspection uncertainties in the modeling of pitting flaws using the ISI data: without considering the probability-of-detection issues and measurement errors, the leakage risk resulting from the pitting corrosion would be under-estimated, despite the fact that the actual pit depth would usually be over-estimated.

  1. Numerical study of corner separation in a linear compressor cascade using various turbulence models

    Directory of Open Access Journals (Sweden)

    Liu Yangwei

    2016-06-01

    Full Text Available Three-dimensional corner separation is a common phenomenon that significantly affects compressor performance. Turbulence modeling remains a weakness of the RANS method for predicting corner separation flow accurately. In the present study, corner separation in a linear highly loaded prescribed velocity distribution (PVD) compressor cascade has been investigated numerically using seven frequently used turbulence models: the Spalart–Allmaras model, standard k–ɛ model, realizable k–ɛ model, standard k–ω model, shear stress transport k–ω model, v2–f model and Reynolds stress model. The results of these turbulence models have been compared and analyzed in detail against available experimental data. It is found that the standard k–ɛ model, realizable k–ɛ model, v2–f model and Reynolds stress model can provide reasonable results for predicting three-dimensional corner separation in the compressor cascade, whereas the Spalart–Allmaras model, standard k–ω model and shear stress transport k–ω model overestimate the corner separation region at an incidence of 0°. The turbulence characteristics are discussed, and turbulence anisotropy is observed to be stronger in the corner separation region.

  2. Significance of categorization and the modeling of age related factors for radiation protection

    International Nuclear Information System (INIS)

    Matsuoka, Osamu

    1987-01-01

    It is proposed that categorization and modelling are necessary with regard to age-related factors of radionuclide metabolism for radiation protection of the public. In order to utilize age-related information as a model for lifetime risk estimates for the public, it is necessary to generalize and simplify it according to categorized model patterns. Since the patterns of age-related changes in various parameters of radionuclide metabolism seem rather simple, it is possible to categorize them into eleven types of model patterns. Among these, five are selected as significant models to be considered. Examples are given of fitting representative physiological and metabolic parameters of radionuclides to the proposed models. The range of deviation from the adult standard value is also analyzed for each model. The fitting of each parameter to the categorized models, and its comparative consideration, provides effective information on the physiological basis of radionuclide metabolism. Problems encountered in applying available age-related information to radiation protection of the public are discussed, i.e. distribution of categorized parameters, period of life covered, range of deviation from the adult value, and implications for other dosimetric and pathological models and for the final estimation. 5 refs.; 3 figs.; 4 tabs

  3. Computation of spatial significance of mountain objects extracted from multiscale digital elevation models

    International Nuclear Information System (INIS)

    Sathyamoorthy, Dinesh

    2014-01-01

    The derivation of spatial significance is an important aspect of geospatial analysis, and hence various methods have been proposed to compute the spatial significance of entities based on their spatial distances to other entities within a cluster. This paper is aimed at studying the spatial significance of mountain objects extracted from multiscale digital elevation models (DEMs). At each scale, the value of the spatial significance index (SSI) of a mountain object is the minimum number of morphological dilation iterations required to occupy all the other mountain objects in the terrain. The mountain object with the lowest value of SSI is the spatially most significant mountain object, indicating that it has the shortest distance to the other mountain objects. It is observed that as the area of the mountain objects reduces with increasing scale, the distances between the mountain objects increase, resulting in increasing values of SSI. The results obtained indicate that the strategic location of a mountain object at the centre of the terrain is more important than its size in determining its reach to other mountain objects and thus its spatial significance
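
    The dilation-count definition of SSI can be reproduced on a toy grid; the three single-cell "mountain objects" and the 8-neighbour structuring element below are illustrative assumptions, not the paper's DEM data.

```python
# Toy grid: each mountain object is a set of (row, col) cells.
objects = {
    "A": {(2, 2)},   # near the centre of the terrain
    "B": {(0, 0)},   # corner
    "C": {(4, 4)},   # opposite corner
}

def dilate(cells):
    """One 8-neighbour morphological dilation step."""
    grown = set(cells)
    for r, c in cells:
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                grown.add((r + dr, c + dc))
    return grown

def ssi(name):
    """Dilation iterations needed for `name` to occupy every other object."""
    region = set(objects[name])
    others = [objects[k] for k in objects if k != name]
    n = 0
    while not all(region & other for other in others):
        region = dilate(region)
        n += 1
    return n

print({k: ssi(k) for k in objects})  # {'A': 2, 'B': 4, 'C': 4}
```

The central object A reaches both corners in two dilations, while each corner object needs four, so A has the lowest SSI and is the spatially most significant, mirroring the paper's centre-beats-size observation.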

  4. A theory-based parameterization for heterogeneous ice nucleation and implications for the simulation of ice processes in atmospheric models

    Science.gov (United States)

    Savre, J.; Ekman, A. M. L.

    2015-05-01

    A new parameterization for heterogeneous ice nucleation constrained by laboratory data and based on classical nucleation theory is introduced. Key features of the parameterization include the following: a consistent and modular modeling framework for treating condensation/immersion and deposition freezing, the possibility to consider various potential ice nucleating particle types (e.g., dust, black carbon, and bacteria), and the possibility to account for an aerosol size distribution. The ice nucleating ability of each aerosol type is described using a contact angle (θ) probability density function (PDF). A new modeling strategy is described to allow the θ PDF to evolve in time so that the most efficient ice nuclei (associated with the lowest θ values) are progressively removed as they nucleate ice. A computationally efficient quasi-Monte Carlo method is used to integrate the computed ice nucleation rates over both size and contact angle distributions. The parameterization is employed in a parcel model, forced by an ensemble of Lagrangian trajectories extracted from a three-dimensional simulation of a springtime low-level Arctic mixed-phase cloud, in order to evaluate the accuracy and convergence of the method using different settings. The same model setup is then employed to examine the importance of various parameters for the simulated ice production. Modeling the time evolution of the θ PDF is found to be particularly crucial; assuming a time-independent θ PDF significantly overestimates the ice nucleation rates. It is stressed that the capacity of black carbon (BC) to form ice in the condensation/immersion freezing mode is highly uncertain, in particular at temperatures warmer than -20°C. In its current version, the parameterization most likely overestimates ice initiation by BC.
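
    The integration of nucleation rates over a contact-angle PDF can be sketched with plain Monte Carlo sampling; the normal θ distribution and the CNT-like rate expression below are illustrative placeholders, not the laboratory-constrained forms of the parameterization.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical contact-angle PDF: a normal distribution (radians) truncated
# to (0, pi) by rejection; the real parameterization fits this PDF to data.
theta = rng.normal(1.0, 0.3, size=100_000)
theta = theta[(theta > 0.0) & (theta < np.pi)]

def nucleation_rate(theta):
    """Illustrative CNT-like rate: the energy barrier scales with the
    geometric factor f(theta), so the rate drops steeply as theta grows."""
    f = (2.0 + np.cos(theta)) * (1.0 - np.cos(theta)) ** 2 / 4.0
    return 1e6 * np.exp(-30.0 * f)

# Monte Carlo estimate of the PDF-weighted mean rate, i.e. the integral
# the paper evaluates with a quasi-Monte Carlo rule over size and angle.
mean_rate = nucleation_rate(theta).mean()
print(mean_rate)
```

Because the rate falls by orders of magnitude across the θ distribution, the mean is dominated by the low-θ tail, which is why progressively removing the lowest-θ nuclei (the paper's evolving PDF) matters so much.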

  5. Skills of General Circulation and Earth System Models in reproducing streamflow to the ocean: the case of Congo river

    Science.gov (United States)

    Santini, M.; Caporaso, L.

    2017-12-01

    Despite the importance of water resources in the context of climate change, it is still difficult to correctly simulate the freshwater cycle over land with General Circulation and Earth System Models (GCMs and ESMs). Existing efforts from the Coupled Model Intercomparison Project 5 (CMIP5) were mainly devoted to the validation of atmospheric variables like temperature and precipitation, with little attention to discharge. Here we investigate the present-day performance of GCMs and ESMs participating in CMIP5 in simulating the discharge of the Congo River to the sea, thanks to: (i) the long-term availability of discharge data for the Kinshasa hydrological station, representative of more than 95% of the water flowing through the whole catchment; and (ii) the river's still low degree of human influence, which enables comparison with the (mostly) natural streamflow simulated within CMIP5. Our findings suggest that most models overestimate the streamflow in terms of the seasonal cycle, especially in late winter and spring, while overestimation and across-model variability are lower in late summer. Weighted ensemble means are also calculated, based on simulation performance as measured by several metrics, showing some improvement of the results. Although simulated inter-monthly and inter-annual percent anomalies do not appear significantly different from those in observed data, when translated into well-consolidated indicators of drought attributes (frequency, magnitude, timing, duration), usually adopted for more immediate communication to stakeholders and decision makers, such anomalies can be misleading. These inconsistencies produce incorrect assessments for water management planning and infrastructure (e.g. dams or irrigated areas), especially if models are used instead of measurements, as in the case of ungauged basins or basins with insufficient data, as well as when relying on models for future estimates without a preliminary quantification of model biases.

  6. Intriguing model significantly reduces boarding of psychiatric patients, need for inpatient hospitalization.

    Science.gov (United States)

    2015-01-01

    As new approaches to the care of psychiatric emergencies emerge, one solution is gaining particular traction. Under the Alameda model, which has been put into practice in Alameda County, CA, patients who are brought to regional EDs with emergency psychiatric issues are quickly transferred to a designated emergency psychiatric facility as soon as they are medically stabilized. This alleviates boarding problems in area EDs while also quickly connecting patients with specialized care. With data in hand on the model's effectiveness, developers believe the approach could alleviate boarding problems in other communities as well. The model is funded through a billing code established by California's Medicaid program for crisis stabilization services. Currently, only 22% of the patients brought to the emergency psychiatric facility ultimately need to be hospitalized; the other 78% are able to go home or to an alternative situation. In a 30-day study of the model, involving five community hospitals in Alameda County, CA, researchers found that ED boarding times were as much as 80% lower than comparable ED averages, and that patients were stabilized at least 75% of the time, significantly reducing the need for inpatient hospitalization.

  7. Modelling multi-site transmission of the human papillomavirus and its impact on vaccination effectiveness

    Directory of Open Access Journals (Sweden)

    P. Lemieux-Mellouki

    2017-12-01

    Conclusions: Modelling genital-site only transmission may overestimate vaccination impact if extragenital infections contribute to systemic natural immunity or underestimate vaccination impact if a high proportion of genital infections originate from extragenital infections. Under current understanding of heterosexual HPV transmission and immunity, a substantial bias from using uni-site models in predicting vaccination effectiveness against genital HPV infection is unlikely to occur.

  8. The upper limit of the cardiorespiratory training zone (40-84%HRR) is overestimated for postmenopausal women.

    Science.gov (United States)

    Aragão, Florbela; Moreira, Maria Helena; Gabriel, Ronaldo Eugénio; Abrantes, Catarina Gavião

    2013-11-01

    The purpose of this study was to examine the heart rate reserve (HRR) at the first and second ventilatory thresholds (VTs) in postmenopausal women and compare it with the optimal intensity range recommended by the ACSM (40-84%HRR). An additional aim was to evaluate whether a higher aerobic power level corresponded to a higher HRR at the VTs. Fifty-eight postmenopausal women participated in this study (aged 48-69 years). A graded 25 W·min(-1) cycle ergometer (Monark E839) exercise protocol was performed in order to assess aerobic power. Heart rate and gas-exchange variables were measured continuously using a portable gas analyzer system (Cosmed K4b). The first (VT1) and second (VT2) ventilatory thresholds were determined from the time course curves of ventilation and the O2 and CO2 ventilatory equivalents. A K-means clustering analysis was used to identify VO2max groups (cut-off of 30.5 ml·kg(-1)·min(-1)) and differences were evaluated by an independent-sample t-test. Bland-Altman plots were performed to illustrate the agreement between methods. The women's HRR values at VT1 were similar to 40%HRR in both VO2max groups. At VT2, both VO2max groups exhibited negative differences (P < …; −…% in the lower VO2max group and −16.32% in the higher VO2max group). An upper limit of 84% overestimates the %HRR value at the second ventilatory threshold, suggesting that the cardiorespiratory target zone for this population should be lower and narrower (40-70%HRR). Copyright © 2012 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
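
    The %HRR quantities discussed here follow the Karvonen relation, target HR = HRrest + p · (HRmax − HRrest); a minimal sketch with hypothetical resting and maximal heart rates:

```python
def hr_at_pct_hrr(hr_rest, hr_max, pct):
    """Karvonen formula: target heart rate at a given fraction of HRR."""
    return hr_rest + pct * (hr_max - hr_rest)

def pct_hrr(hr, hr_rest, hr_max):
    """Fraction of heart rate reserve corresponding to an observed HR."""
    return (hr - hr_rest) / (hr_max - hr_rest)

# Hypothetical subject: resting HR 70 bpm, maximal HR 160 bpm.
print(round(hr_at_pct_hrr(70, 160, 0.40), 1))  # lower ACSM limit -> 106.0 bpm
print(round(hr_at_pct_hrr(70, 160, 0.70), 1))  # narrower upper limit -> 133.0 bpm
```

For this subject, narrowing the zone from 40-84%HRR to the suggested 40-70%HRR lowers the prescribed ceiling from about 146 to 133 bpm.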

  9. Limitations and pitfalls in measurements of right ventricular stroke volume in an animal model of right heart failure

    International Nuclear Information System (INIS)

    Vildbrad, Mads Dam; Andersen, Asger; Andersen, Thomas Krarup; Axelgaard, Sofie; Holmboe, Sarah; Andersen, Stine; Nielsen-Kudsk, Jens Erik; Ringgaard, Steffen

    2015-01-01

    Right heart failure occurs in various heart and pulmonary vascular diseases and may be fatal. We aimed to identify limitations in non-invasive measurements of right ventricular stroke volume in an animal model of right ventricular failure. Data from previous studies randomising rats to pulmonary trunk banding (PTB, n = 33), causing pressure-overload right ventricular failure, or sham operation (n = 16) were evaluated retrospectively. We measured right ventricular stroke volume by high-frequency echocardiography and magnetic resonance imaging (MRI). We found a correlation between right ventricular stroke volume measured by echocardiography and MRI in the sham animals (r = 0.677, p = 0.004) but not in the PTB group. Echocardiography overestimated the stroke volume compared to MRI in both groups. Intra- and inter-observer variation did not explain the difference. Technical, physiological and anatomical issues in the pulmonary artery might explain why echocardiography overestimates stroke volume. Flow acceleration close to the pulmonary artery banding can cause uncertainties in the PTB model and might explain the lack of correlation. In conclusion, we found a correlation in right ventricular stroke volume measured by echocardiography versus MRI in the sham group but not in the PTB group. Echocardiography overestimated right ventricular stroke volume compared to MRI. (paper)
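
    The method-agreement analysis behind such comparisons (Bland-Altman) reduces to the mean and spread of paired differences; a minimal sketch with hypothetical stroke-volume readings, not the study's data:

```python
import numpy as np

# Hypothetical paired stroke-volume readings (µL) from six sham rats.
echo = np.array([210.0, 195.0, 230.0, 250.0, 220.0, 205.0])
mri = np.array([190.0, 180.0, 205.0, 228.0, 200.0, 182.0])

diff = echo - mri
bias = diff.mean()                      # systematic overestimation by echo
half_width = 1.96 * diff.std(ddof=1)    # half-width of 95% limits of agreement
print(f"bias = {bias:.1f} µL, 95% limits of agreement = "
      f"[{bias - half_width:.1f}, {bias + half_width:.1f}]")
```

A positive bias with narrow limits, as here, is the Bland-Altman signature of a consistent overestimation by one method rather than random disagreement.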

  10. ABPM Induced Alarm Reaction: A Possible Cause of Overestimation of Daytime Blood Pressure Values Reduced By Treatment with Beta-Blockers.

    Science.gov (United States)

    Salvo, Francesco; Lonati, Chiara; Albano, Monica; Fogliacco, Paolo; Errani, Andrea Riccardo; Vallo, Cinzia; Berardi, Michele; Meinero, Vito; Muzzulini, Carlo Lorenzo; Morganti, Alberto

    2016-09-01

    Alarm reaction to clinical blood pressure (BP) measurement, defined as the white-coat effect (WCE), can cause overestimation of true BP values. To assess whether ambulatory blood pressure monitoring (ABPM) can similarly affect BP values during the initial hours of recording. In 420 ABPMs selected for a first systolic BP (SBP) reading at least 10 mmHg higher than the mean daytime SBP, we calculated mean diurnal and 24 h SBP with and without the exclusion of the first two hours of recording, defined as the WCE window (WCEw). We also calculated the magnitude and duration of the WCE. These analyses were also performed separately in patients off anti-hypertensive treatment (n = 156) and on treatment with and without the inclusion of beta-blockers (n = 113 and 151, respectively). Exclusion of the WCEw period reduced mean diurnal and 24 h SBP, respectively, from 135 ± 0.5 to 133 ± 0.5 (p < …). ABPM is not free from WCE. The WCE may affect the overall estimation of the BP profile and is longer but less blunted by beta-blockers in females than in males.

  11. A Global Atmospheric Model of Meteoric Iron

    Science.gov (United States)

    Feng, Wuhu; Marsh, Daniel R.; Chipperfield, Martyn P.; Janches, Diego; Hoffner, Josef; Yi, Fan; Plane, John M. C.

    2013-01-01

    The first global model of meteoric iron in the atmosphere (WACCM-Fe) has been developed by combining three components: the Whole Atmosphere Community Climate Model (WACCM), a description of the neutral and ion-molecule chemistry of iron in the mesosphere and lower thermosphere (MLT), and a treatment of the injection of meteoric constituents into the atmosphere. The iron chemistry treats seven neutral and four ionized iron-containing species with 30 neutral and ion-molecule reactions. The meteoric input function (MIF), which describes the injection of Fe as a function of height, latitude, and day, is precalculated from an astronomical model coupled to a chemical meteoric ablation model (CABMOD). This newly developed WACCM-Fe model has been evaluated against a number of available ground-based lidar observations and performs well in simulating the mesospheric atomic Fe layer. The model reproduces the strong positive correlation of temperature and Fe density around the Fe layer peak and the large anticorrelation around 100 km. The diurnal tide has a significant effect in the middle of the layer, and the model also captures well the observed seasonal variations. However, the model overestimates the peak Fe+ concentration compared with the limited rocket-borne mass spectrometer data available, although good agreement on the ion layer underside can be obtained by adjusting the rate coefficients for dissociative recombination of Fe-molecular ions with electrons. Sensitivity experiments with the same chemistry in a 1-D model are used to highlight significant remaining uncertainties in reaction rate coefficients, and to explore the dependence of the total Fe abundance on the MIF and rate of vertical transport.

  12. [Significance of COI disclosure in medical research in Japan].

    Science.gov (United States)

    Sone, Saburo

    2011-11-01

    In medical research, the remarkable increase in collaboration between industry and public organizations such as universities, research institutions, and academic societies has made researchers more deeply involved with the activities of commercial entities. The educational and research activities that are the responsibility of academic institutions and societies can conflict with the interests of individuals engaged in industry-academia collaboration. Management of such conflicts of interest (COI) is of great importance for academic institutions and societies in order to appropriately promote industry-academia collaborative activities. In the medical field in particular, participation of patients as well as healthy individuals is essential as subjects of clinical research. For those involved in medical research, the more serious the COI with commercial entities, who are the providers of funding or benefits, becomes, the more the human rights of subjects could be violated, the safety of life endangered, and research methods, data analysis and interpretation of results distorted. It is also possible that research may be unfairly evaluated or not published even if the results are accurate, sometimes resulting in reporting bias, including overestimation of the efficacy and underestimation of the safety risks of interventions. The significance of COI management is discussed according to the COI management guideline of the Japanese Association of Medical Science (JAMS).

  13. The importance of tumor volume in the prognosis of patients with glioblastoma. Comparison of computerized volumetry and geometric models

    International Nuclear Information System (INIS)

    Iliadis, Georgios; Misailidou, Despina; Selviaridis, Panagiotis; Chatzisotiriou, Athanasios; Kalogera-Fountzila, Anna; Fragkoulidi, Anna; Fountzilas, George; Baltas, Dimos; Tselis, Nikolaos; Zamboglou, Nikolaos

    2009-01-01

    Background and purpose: The importance of tumor volume as a prognostic factor in high-grade gliomas is highly controversial, and there are numerous methods for estimating this parameter. In this study, a computer-based application was used to assess tumor volume from hard copies, and a survival analysis was conducted to evaluate the prognostic significance of preoperative volumetric data in patients harboring glioblastomas. Patients and methods: 50 patients suffering from glioblastoma were analyzed retrospectively. Tumor volume was determined by various geometric models as well as by the authors' own specialized software (Volumio). Age, performance status, type of excision, and tumor location were also included in the multivariate analysis. Results: The spheroid and rectangular models overestimated tumor volume, while the ellipsoid model offered the best approximation. Volume failed to attain statistical significance in prognosis, while age and performance status confirmed their importance for progression-free and overall survival. Conclusion: Geometric models provide only a rough approximation of tumor volume and should not be used, as accurate determination of size is of paramount importance for drawing safe conclusions in oncology. Although the significance of volumetry was not demonstrated here, further studies are definitely required. (orig.)

  14. Is paediatric trauma severity overestimated at triage?

    DEFF Research Database (Denmark)

    DO, H Q; Hesselfeldt, R; Steinmetz, J

    2014-01-01

    BACKGROUND: Severe paediatric trauma is rare, and pre-hospital and local hospital personnel experience with injured children is often limited. We hypothesised that a higher proportion of paediatric trauma victims were taken to the regional trauma centre (TC). METHODS: This is an observational...... follow-up study that involves one level I TC and seven local hospitals. We included paediatric (trauma patients with a driving distance to the TC > 30 minutes. The primary end-point was the proportion of trauma patients arriving in the TC. RESULTS: We included 1934...... trauma patients, 238 children and 1696 adults. A total of 33/238 children (13.9%) vs. 304/1696 adults (17.9%) were transported to the TC post-injury (P = 0.14). Among these, children were significantly less injured than adults [median Injury Severity Score (ISS) 9 vs. 14, P 

  15. Overestimation of on-road air quality surveying data measured with a mobile laboratory caused by exhaust plumes of a vehicle ahead in dense traffic areas.

    Science.gov (United States)

    Woo, Sang-Hee; Kwak, Kyung-Hwan; Bae, Gwi-Nam; Kim, Kyung Hwan; Kim, Chang Hyeok; Yook, Se-Jin; Jeon, Sangzin; Kwon, Sangil; Kim, Jeongsoo; Lee, Seung-Bok

    2016-11-01

    The unintended influence of exhaust plumes emitted from a vehicle ahead on on-road air quality data measured with a mobile laboratory (ML) at 20-40 km h(-1) in dense traffic areas was investigated by experiment and full-scale computational fluid dynamics (CFD) simulation. The ML, equipped with sampling inlets arranged in five columns by four rows, was used to measure the spatial distribution of CO2 and NOx concentrations when following 5-20 m behind a sport utility vehicle (SUV) used as an emitter vehicle and equipped with a portable emission monitoring system (PEMS). The PEMS measured exhaust gases at the tailpipe to provide input data for the CFD simulations. After the CFD method was verified against the experimental results for the SUV, dispersion of exhaust plumes emitted from a bus and a sedan was numerically analyzed. More dilution of the exhaust plume was observed at higher vehicle speeds, probably because of eddy diffusion, which is proportional to turbulent kinetic energy and vehicle speed. The CO2 and NOx concentrations behind the emitter vehicle showed less overestimation as both the distance between the two vehicles and their background concentrations increased. If the height of the ML inlet is lower than 2 m and the ML travels within 20 m behind an SUV or a sedan at 20 km h(-1), an overestimation of as much as 200 ppb in NOx and 80 ppm in CO2 should be taken into account. Following a bus should be avoided if possible, because the effect of exhaust plumes from a bus ahead was not negligible even when the distance between the bus and the ML, with an inlet height of 2 m, was more than 40 m. Recommendations are provided to avoid the unintended influence of exhaust plumes from vehicles ahead of the ML during on-road measurements in urban dense traffic conditions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Model of the Correlation between Lidar Systems and Wind Turbines for Lidar-Assisted Control

    DEFF Research Database (Denmark)

    Schlipf, David; Cheng, Po Wen; Mann, Jakob

    2013-01-01

    Investigations of lidar-assisted control to optimize the energy yield and to reduce loads of wind turbines have increased significantly in recent years. For this kind of control, it is crucial to know the correlation between the rotor effective wind speed and the wind preview provided by a nacelle- or spinner-based lidar system. If, on the one hand, the assumed correlation is overestimated, then the uncorrelated frequencies of the preview will cause unnecessary control action, inducing undesired loads. On the other hand, the benefits of the lidar-assisted controller will not be fully exploited if correlated frequencies are filtered out. To avoid these miscalculations, this work presents a method to model the correlation between lidar systems and wind turbines using Kaimal wind spectra. The derived model accounts for different measurement configurations and spatial averaging of the lidar system…
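
    The Kaimal spectrum on which such a correlation model is built has the standard IEC form S(f) = 4σ²(L/U) / (1 + 6fL/U)^(5/3); a minimal sketch with illustrative parameter values:

```python
import numpy as np

def kaimal_spectrum(f, sigma=1.0, L=340.2, U=8.0):
    """IEC-style Kaimal velocity spectrum S(f) in (m/s)^2 per Hz.
    sigma: turbulence standard deviation (m/s), L: integral length
    scale (m), U: mean wind speed (m/s) -- illustrative values."""
    return 4.0 * sigma**2 * (L / U) / (1.0 + 6.0 * f * L / U) ** (5.0 / 3.0)

freqs = np.array([0.01, 0.1, 1.0])   # Hz
print(kaimal_spectrum(freqs))        # energy falls off toward high frequency
```

The steep high-frequency roll-off is what makes the filtering question in the abstract delicate: most of the preview's useful energy sits at low frequencies, where lidar and rotor remain correlated.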

  17. Global model simulations of air pollution during the 2003 European heat wave

    Directory of Open Access Journals (Sweden)

    C. Ordóñez

    2010-01-01

    Full Text Available Three global Chemistry Transport Models – MOZART, MOCAGE, and TM5 – as well as MOZART coupled to the IFS meteorological model, including assimilation of ozone (O3) and carbon monoxide (CO) satellite column retrievals, have been compared to surface measurements and MOZAIC vertical profiles in the troposphere over Western/Central Europe for summer 2003. The models reproduce the meteorological features and the enhancement of pollution during the period 2–14 August, but not fully the ozone and CO mixing ratios measured during that episode. Modified normalised mean biases are around −25% (except ~5% for MOCAGE) in the case of ozone and from −80% to −30% for CO in the boundary layer above Frankfurt. The coupling and assimilation of CO columns from MOPITT overcomes some of the deficiencies in the treatment of transport, chemistry and emissions in MOZART, reducing the negative biases to around 20%. The high reactivity and small dry deposition velocities in MOCAGE seem to be responsible for the overestimation of O3 in this model. Results from sensitivity simulations indicate that an increase of the horizontal resolution to around 1°×1° and potential uncertainties in European anthropogenic emissions or in long-range transport of pollution cannot completely account for the underestimation of CO and O3 found for most models. A process-oriented TM5 sensitivity simulation in which soil wetness was reduced results in a decrease in dry deposition fluxes and a subsequent ozone increase larger than the ozone changes due to the previous sensitivity runs. However, this latter simulation still underestimates ozone during the heat wave and overestimates it outside that period. Most probably, a combination of the mentioned factors together with underrepresented biogenic emissions in the models, uncertainties in the modelling of vertical/horizontal transport processes in the proximity of the boundary layer as well as limitations of

  18. Use of Monte Carlo modeling approach for evaluating risk and environmental compliance

    International Nuclear Information System (INIS)

    Higley, K.A.; Strenge, D.L.

    1988-09-01

    Evaluating compliance with environmental regulations, specifically those regulations that pertain to human exposure, can be a difficult task. Historically, maximum individual or worst-case exposures have been calculated as a basis for evaluating risk or compliance with such regulations. However, these calculations may significantly overestimate exposure and may not provide a clear understanding of the uncertainty in the analysis. The use of Monte Carlo modeling techniques can provide a better understanding of the potential range of exposures and the likelihood of high (worst-case) exposures. This paper compares the results of standard exposure estimation techniques with the Monte Carlo modeling approach. The authors discuss the potential application of this approach for demonstrating regulatory compliance, along with the strengths and weaknesses of the approach. Suggestions on implementing this method as a routine tool in exposure and risk analyses are also presented. 16 refs., 5 tabs
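
    The gap between worst-case and probabilistic estimates that this approach exposes can be illustrated with a toy dose model; the multiplicative form and every distribution below are assumptions for illustration only, not the paper's exposure pathways.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical three-factor dose model:
#   exposure = concentration * intake rate * exposure time
conc = rng.lognormal(mean=0.0, sigma=0.5, size=n)        # activity concentration
intake = rng.uniform(15.0, 25.0, size=n)                 # intake rate
duration = rng.triangular(100.0, 250.0, 365.0, size=n)   # days per year

dose = conc * intake * duration

# Worst case: an upper-bound value assumed for every factor simultaneously.
worst_case = np.exp(3 * 0.5) * 25.0 * 365.0

ratio = np.percentile(dose, 95) / worst_case
print(f"95th-percentile dose is {ratio:.2f} of the worst-case estimate")
```

Because the worst case compounds an extreme value of every factor at once, even the 95th percentile of the simulated distribution falls well below it, which is exactly the overestimation the Monte Carlo approach quantifies.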

  19. A multiparametric magnetic resonance imaging-based risk model to determine the risk of significant prostate cancer prior to biopsy.

    Science.gov (United States)

    van Leeuwen, Pim J; Hayen, Andrew; Thompson, James E; Moses, Daniel; Shnier, Ron; Böhm, Maret; Abuodha, Magdaline; Haynes, Anne-Maree; Ting, Francis; Barentsz, Jelle; Roobol, Monique; Vass, Justin; Rasiah, Krishan; Delprado, Warick; Stricker, Phillip D

    2017-12-01

    To develop and externally validate a predictive model for the detection of significant prostate cancer. Development of the model was based on a prospective cohort of 393 men who underwent multiparametric magnetic resonance imaging (mpMRI) before biopsy. External validity of the model was then examined retrospectively in 198 men from a separate institution who underwent mpMRI followed by biopsy for an abnormal prostate-specific antigen (PSA) level or digital rectal examination (DRE). A model was developed with age, PSA level, DRE, prostate volume, previous biopsy, and Prostate Imaging Reporting and Data System (PIRADS) score as predictors of significant prostate cancer (Gleason 7 with >5% grade 4, ≥20% cores positive or ≥7 mm of cancer in any core). Probability was studied via logistic regression. Discriminatory performance was quantified by concordance statistics and internally validated with bootstrap resampling. In all, 393 men had complete data and 149 (37.9%) had significant prostate cancer. While the variable model had good accuracy in predicting significant prostate cancer, with an area under the curve (AUC) of 0.80, the advanced model (incorporating mpMRI) had a significantly higher AUC of 0.88 (P < …) for significant prostate cancer. Individualised risk assessment of significant prostate cancer using a predictive model that incorporates the mpMRI PIRADS score and clinical data allows a considerable reduction in unnecessary biopsies and a reduction in the risk of over-detection of insignificant prostate cancer at the cost of a very small increase in the number of significant cancers missed. © 2017 The Authors BJU International © 2017 BJU International Published by John Wiley & Sons Ltd.
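
    A logistic-regression risk model with a concordance (AUC) statistic, of the general kind described here, can be sketched on simulated data; the predictors, coefficients, and cohort below are entirely synthetic, not the study's.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 400

# Synthetic cohort: age, log(PSA), and an ordinal imaging score stand in
# for the study's clinical predictors; the outcome follows an assumed
# logistic data-generating model.
X = np.column_stack([
    rng.normal(65.0, 7.0, n),      # age (years)
    rng.normal(1.8, 0.5, n),       # log PSA
    rng.integers(1, 6, n),         # imaging score, 1-5
])
logit = -12.0 + 0.08 * X[:, 0] + 1.2 * X[:, 1] + 1.0 * X[:, 2]
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Fit by plain gradient descent on the standardized design matrix.
Z = np.column_stack([np.ones(n), (X - X.mean(0)) / X.std(0)])
w = np.zeros(Z.shape[1])
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-Z @ w))
    w -= 0.1 * Z.T @ (p - y) / n

# Concordance statistic (AUC): P(predicted risk of a case > of a control).
p = 1.0 / (1.0 + np.exp(-Z @ w))
cases, controls = p[y == 1], p[y == 0]
auc = (cases[:, None] > controls[None, :]).mean()
print(f"AUC = {auc:.2f}")
```

The pairwise-comparison form of the AUC used here is the same concordance statistic the abstract reports; adding an informative predictor (as mpMRI did) shows up directly as a higher fraction of correctly ordered case-control pairs.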

  20. The Significance of the Bystander Effect: Modeling, Experiments, and More Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Brenner, David J.

    2009-07-22

    Non-targeted (bystander) effects of ionizing radiation are caused by intercellular signaling; they include production of DNA damage and alterations in cell fate (i.e. apoptosis, differentiation, senescence or proliferation). Biophysical models capable of quantifying these effects may improve cancer risk estimation at radiation doses below the epidemiological detection threshold. Understanding the spatial patterns of bystander responses is important, because it provides estimates of how many bystander cells are affected per irradiated cell. In a first approach to modeling of bystander spatial effects in a three-dimensional artificial tissue, we assumed the following: (1) The bystander phenomenon results from signaling molecules (S) that rapidly propagate from irradiated cells and decrease in concentration (exponentially in the case of planar symmetry) as distance increases. (2) These signals can convert cells to a long-lived epigenetically activated state, e.g. a state of oxidative stress; cells in this state are more prone to DNA damage and behavior alterations than normal and therefore exhibit an increased response (R) for many end points (e.g. apoptosis, differentiation, micronucleation). These assumptions were implemented by a mathematical formalism and computational algorithms. The model adequately described data on bystander responses in the 3D system using a small number of adjustable parameters. Mathematical models of radiation carcinogenesis are important for understanding mechanisms and for interpreting or extrapolating risk. There are two classes of such models: (1) long-term formalisms that track pre-malignant cell numbers throughout an entire lifetime but treat initial radiation dose-response simplistically and (2) short-term formalisms that provide a detailed initial dose-response even for complicated radiation protocols, but address its modulation during the subsequent cancer latency period only indirectly. We argue that integrating short- and long
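
    Assumption (1), exponential decay of the signal under planar symmetry, combined with a saturating activation response, can be sketched as follows; the decay length L and sensitivity k are hypothetical values, not fitted parameters from the 3D-tissue experiments.

```python
import numpy as np

# Signal concentration decays exponentially with distance x (mm) from the
# irradiated cell layer; the probability that a bystander cell enters the
# epigenetically activated state saturates with local signal level.
L, k = 0.5, 2.0   # decay length (mm) and sensitivity -- illustrative

def signal(x, s0=1.0):
    """Signal concentration at distance x under planar symmetry."""
    return s0 * np.exp(-x / L)

def activation_prob(x):
    """Probability of epigenetic activation at distance x."""
    return 1.0 - np.exp(-k * signal(x))

x = np.array([0.0, 0.5, 1.0, 2.0])
print(np.round(activation_prob(x), 3))  # [0.865 0.521 0.237 0.036]
```

The rapid fall-off with distance is what lets such models estimate how many bystander cells are affected per irradiated cell: beyond a few decay lengths, the activation probability is negligible.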

  1. Modeling of environmentally significant interfaces: Two case studies

    International Nuclear Information System (INIS)

    Williford, R.E.

    2006-01-01

    When some parameters cannot be easily measured experimentally, mathematical models can often be used to deconvolute or interpret data collected on complex systems, such as those characteristic of many environmental problems. These models can help quantify the contributions of various physical or chemical phenomena that contribute to the overall behavior, thereby enabling the scientist to control and manipulate these phenomena, and thus to optimize the performance of the material or device. In the first case study presented here, a model is used to test the hypothesis that oxygen interactions with hydrogen on the catalyst particles of solid oxide fuel cell anodes can sometimes occur a finite distance away from the triple phase boundary (TPB), so that such reactions are not restricted to the TPB as normally assumed. The model may help explain a discrepancy between the observed structure of SOFCs and their performance. The second case study develops a simple physical model that allows engineers to design and control the sizes and shapes of mesopores in silica thin films. Such pore design can be useful for enhancing the selectivity and reactivity of environmental sensors and catalysts. This paper demonstrates the mutually beneficial interactions between experiment and modeling in the solution of a wide range of problems

  2. Performance of five surface energy balance models for estimating daily evapotranspiration in high biomass sorghum

    Science.gov (United States)

    Wagle, Pradeep; Bhattarai, Nishan; Gowda, Prasanna H.; Kakani, Vijaya G.

    2017-06-01

    Robust evapotranspiration (ET) models are required to predict water usage in a variety of terrestrial ecosystems under different geographical and agrometeorological conditions. As a result, several remote sensing-based surface energy balance (SEB) models have been developed to estimate ET over large regions. However, comparisons of the performance of several SEB models at the same site are limited. In addition, none of the SEB models have been evaluated for their ability to predict ET in rain-fed high biomass sorghum grown for biofuel production. In this paper, we evaluated the performance of five widely used single-source SEB models, namely the Surface Energy Balance Algorithm for Land (SEBAL), Mapping ET with Internalized Calibration (METRIC), the Surface Energy Balance System (SEBS), the Simplified Surface Energy Balance Index (S-SEBI), and the operational Simplified Surface Energy Balance (SSEBop), for estimating ET over a high biomass sorghum field during the 2012 and 2013 growing seasons. The predicted ET values were compared against eddy covariance (EC) measured ET (ETEC) for 19 cloud-free Landsat images. In general, S-SEBI, SEBAL, and SEBS performed reasonably well for the study period, while METRIC and SSEBop performed poorly. All SEB models substantially overestimated ET under extremely dry conditions because they underestimated sensible heat (H) and overestimated latent heat (LE) fluxes when partitioning the available energy. METRIC, SEBAL, and SEBS overestimated LE regardless of wet or dry periods. Consequently, the seasonal cumulative ET predicted by METRIC, SEBAL, and SEBS was higher than the seasonal cumulative ETEC in both seasons. In contrast, S-SEBI and SSEBop substantially underestimated ET under very wet conditions, and the seasonal cumulative ET predicted by S-SEBI and SSEBop was lower than the seasonal cumulative ETEC in the relatively wetter 2013 growing season. Our results indicate the necessity of inclusion of soil moisture or plant water stress

  3. Testing nonlocal models of electron thermal conduction for magnetic and inertial confinement fusion applications

    Science.gov (United States)

    Brodrick, J. P.; Kingham, R. J.; Marinak, M. M.; Patel, M. V.; Chankin, A. V.; Omotani, J. T.; Umansky, M. V.; Del Sorbo, D.; Dudson, B.; Parker, J. T.; Kerbel, G. D.; Sherlock, M.; Ridgers, C. P.

    2017-09-01

    Three models for nonlocal electron thermal transport are here compared against Vlasov-Fokker-Planck (VFP) codes to assess their accuracy in situations relevant to both inertial fusion hohlraums and tokamak scrape-off layers. The models tested are (i) a moment-based approach using an eigenvector integral closure (EIC) originally developed by Ji, Held, and Sovinec [Phys. Plasmas 16, 022312 (2009)]; (ii) the non-Fourier Landau-fluid (NFLF) model of Dimits, Joseph, and Umansky [Phys. Plasmas 21, 055907 (2014)]; and (iii) Schurtz, Nicolaï, and Busquet's [Phys. Plasmas 7, 4238 (2000)] multigroup diffusion model (SNB). We find that while the EIC and NFLF models accurately predict the damping rate of a small-amplitude temperature perturbation (within 10% at moderate collisionalities), they overestimate the peak heat flow by as much as 35% and do not predict preheat in the more relevant case where there is a large temperature difference. The SNB model, however, agrees better with VFP results for the latter problem if care is taken with the definition of the mean free path. Additionally, we present for the first time a comparison of the SNB model against a VFP code for a hohlraum-relevant problem with inhomogeneous ionisation and show that the model overestimates the heat flow in the helium gas-fill by a factor of ˜2 despite predicting the peak heat flux to within 16%.
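    For context, the local transport that all three nonlocal models correct is Spitzer-Härm conduction, often patched in radiation-hydrodynamics codes with a flux limiter. The sketch below is that crude baseline, not any of the tested models; the conductivity coefficient and limiter value f are placeholders:

```python
import numpy as np

def spitzer_harm_flux(T_eV, x_m, kappa0=1.0e-11):
    """Local Spitzer-Harm conduction q = -kappa0 * T^(5/2) * dT/dx.
    kappa0 is a placeholder coefficient, not a fitted value."""
    dTdx = np.gradient(T_eV, x_m)
    return -kappa0 * T_eV**2.5 * dTdx

def flux_limited(q_sh, n_e, T_eV, f=0.05):
    """Harmonic flux limiter: cap |q| near f times the free-streaming
    flux q_fs ~ n_e * T * v_te (illustrative form)."""
    m_e = 9.109e-31                     # electron mass, kg
    T_J = T_eV * 1.602e-19              # temperature in joules
    v_te = np.sqrt(T_J / m_e)           # thermal velocity scale
    q_fs = n_e * T_J * v_te
    return q_sh / (1.0 + np.abs(q_sh) / (f * q_fs))

# Steep 1 keV -> 100 eV temperature ramp over 1 mm (hypothetical profile)
x = np.linspace(0.0, 1.0e-3, 200)
T = 1000.0 - 900.0 * x / x[-1]
q = flux_limited(spitzer_harm_flux(T, x), n_e=1.0e27, T_eV=T)
```

    The harmonic form guarantees the limited flux never exceeds either the local Spitzer-Härm value or f times the free-streaming flux, which is the behaviour the nonlocal models aim to reproduce more physically.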

  4. Initial spread of {sup 137}Cs from the Fukushima Dai-ichi Nuclear Power Plant over the Japan continental shelf. A study using a high-resolution, global-coastal nested ocean model

    Energy Technology Data Exchange (ETDEWEB)

    Lai, Z. [Sun Yat-Sen Univ., Guangzhou (China). School of Marine Sciences; Univ. of Massachusetts-Dartmouth, New Bedford, MA (United States). School for Marine Science and Technology; Key Laboratory of Marine Resources and Coastal Engineering in Guangdong Province, Guangzhou (China); Chen, C.; Lin, H. [Univ. of Massachusetts-Dartmouth, New Bedford, MA (United States). School for Marine Science and Technology; Shanghai Ocean Univ. (China). International Center for Marine Studies; Beardsley, R. [Woods Hole Oceanographic Institution, Woods Hole, MA (United States). Dept. of Physical Oceanography; Ji, R. [Woods Hole Oceanographic Institution, Woods Hole, MA (United States). Dept. of Biology; Shanghai Ocean Univ. (China). International Center for Marine Studies; Sasaki, J. [The Univ. of Tokyo, Kashiwa (Japan). Dept. of Socio-Cultural Environmental Studies; Lin, J. [Woods Hole Oceanographic Institution, Woods Hole, MA (United States). Dept. of Geology and Geophysics

    2013-07-01

    The 11 March 2011 tsunami triggered by the M9 and M7.9 earthquakes off the Tohoku coast destroyed facilities at the Fukushima Dai-ichi Nuclear Power Plant (FNPP) leading to a significant long-term flow of the radionuclide {sup 137}Cs into coastal waters. A high-resolution, global-coastal nested ocean model was first constructed to simulate the 11 March tsunami and coastal inundation. Based on the model's success in reproducing the observed tsunami and coastal inundation, model experiments were then conducted with differing grid resolution to assess the initial spread of {sup 137}Cs over the eastern shelf of Japan. The {sup 137}Cs was tracked as a conservative tracer (without radioactive decay) in the three-dimensional model flow field over the period of 26 March-31 August 2011. The results clearly show that for the same {sup 137}Cs discharge, the model-predicted spreading of {sup 137}Cs was sensitive not only to model resolution but also the FNPP seawall structure. A coarse-resolution (≈2 km) model simulation led to an overestimation of lateral diffusion and thus faster dispersion of {sup 137}Cs from the coast to the deep ocean, while advective processes played a more significant role when the model resolution at and around the FNPP was refined to ≈5 m. By resolving the pathways from the leaking source to the southern and northern discharge canals, the high-resolution model better predicted the {sup 137}Cs spreading in the inner shelf where in situ measurements were made at 30 km off the coast. The overestimation of {sup 137}Cs concentration near the coast is thought to be due to the omission of sedimentation and biogeochemical processes as well as uncertainties in the amount of {sup 137}Cs leaking from the source in the model. As a result, a biogeochemical module should be included in the model for more realistic simulations of the fate and spreading of {sup 137}Cs in the ocean.
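    The resolution sensitivity described here is, in part, the generic numerical diffusion of advection schemes. A 1-D toy demonstration (not the ocean model used in the study): first-order upwind advection of a tracer pulse, where the coarser grid spreads the pulse faster, mimicking the overestimated lateral diffusion:

```python
import numpy as np

def advect_upwind(nx, L=100.0, u=1.0, t_end=20.0, cfl=0.5):
    """Advect a Gaussian tracer pulse with first-order upwind; the
    scheme's implicit numerical diffusivity ~ u*dx/2 grows with dx."""
    dx = L / nx
    dt = cfl * dx / u
    x = (np.arange(nx) + 0.5) * dx
    c = np.exp(-((x - 20.0) / 3.0) ** 2)            # initial pulse
    for _ in range(int(t_end / dt)):
        c = c - u * dt / dx * (c - np.roll(c, 1))   # periodic upwind step
    return x, c

def variance(x, c):
    """Spatial variance of the tracer distribution (spreading measure)."""
    m = c.sum()
    mean = (x * c).sum() / m
    return (((x - mean) ** 2) * c).sum() / m

x_c, c_coarse = advect_upwind(nx=100)   # coarse grid (analogue of 2 km)
x_f, c_fine = advect_upwind(nx=800)     # refined grid
```

    The coarse run ends with a visibly wider pulse even though both transport the same tracer with the same physics, which is the mechanism behind the coarse model's too-fast coast-to-ocean dispersion.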

  5. Physical models of corrosion of iron and nickel in liquid sodium

    International Nuclear Information System (INIS)

    Skyrme, G.

    1975-11-01

    The possible physical models for the corrosion of iron and nickel in liquid sodium loops are considered. The models are assessed in the light of the available experimental evidence, in particular the magnitude of the corrosion rate and the velocity, downstream-position, temperature and oxygen effects. Currently recommended solubility values are used throughout. It is shown that the simple model based on these recommended values, which assumes that the dissolved metals are in equilibrium throughout the loop, overestimates the corrosion rate by three orders of magnitude. (author)

  6. Modeling number of bacteria per food unit in comparison to bacterial concentration in quantitative risk assessment: impact on risk estimates.

    Science.gov (United States)

    Pouillot, Régis; Chen, Yuhuan; Hoelzer, Karin

    2015-02-01

    When developing quantitative risk assessment models, a fundamental consideration for risk assessors is to decide whether to evaluate changes in bacterial levels in terms of concentrations or in terms of bacterial numbers. Although modeling bacteria in terms of integer numbers may be regarded as a more intuitive and rigorous choice, modeling bacterial concentrations is more popular as it is generally less mathematically complex. We tested three different modeling approaches in a simulation study. The first approach considered bacterial concentrations; the second considered the number of bacteria in contaminated units, and the third considered the expected number of bacteria in contaminated units. Simulation results indicate that modeling concentrations tends to overestimate risk compared to modeling the number of bacteria. A sensitivity analysis using a regression tree suggests that processes which include drastic scenarios consisting of combinations of large bacterial inactivation followed by large bacterial growth frequently lead to a >10-fold overestimation of the average risk when modeling concentrations as opposed to bacterial numbers. Alternatively, the approach of modeling the expected number of bacteria in positive units generates results similar to the second method and is easier to use, thus potentially representing a promising compromise. Published by Elsevier Ltd.
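    The drastic scenario identified by the sensitivity analysis (large inactivation followed by large growth) can be reproduced in a few lines. All numbers below are hypothetical, chosen only to show why scaling a concentration overestimates prevalence relative to tracking integer survivors:

```python
import numpy as np

rng = np.random.default_rng(1)
n_units = 100_000
n0 = rng.poisson(10.0, n_units)     # initial bacteria per unit (integers)
p_survive = 1e-6                    # 6-log inactivation step
growth = 1e6                        # 6-log growth step

# Concentration-based: deterministic scaling of a mean level.
conc = 10.0 * p_survive * growth    # back to 10 cells/unit on average
prev_conc = 1.0 if conc > 0 else 0.0   # every unit predicted contaminated

# Number-based: integer survivors; units driven extinct stay extinct,
# no matter how large the subsequent growth factor.
survivors = rng.binomial(n0, p_survive)
final = survivors * growth
prev_num = np.mean(final > 0)
```

    The concentration approach predicts universal contamination, while the integer approach leaves almost every unit sterile, illustrating the >10-fold risk overestimation the abstract describes.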

  7. Simulated cold bias being improved by using MODIS time-varying albedo in the Tibetan Plateau in WRF model

    Science.gov (United States)

    Meng, X.; Lyu, S.; Zhang, T.; Zhao, L.; Li, Z.; Han, B.; Li, S.; Ma, D.; Chen, H.; Ao, Y.; Luo, S.; Shen, Y.; Guo, J.; Wen, L.

    2018-04-01

    Systematic cold biases exist in simulations of 2 m air temperature in the Tibetan Plateau (TP) when using regional climate models and global atmospheric general circulation models. We updated the albedo in the Weather Research and Forecasting (WRF) Model lower boundary condition using the Global LAnd Surface Satellite Moderate-Resolution Imaging Spectroradiometer albedo products and demonstrated a clear reduction of the cold temperature biases in the TP. It is the large overestimation of albedo in winter and spring in the WRF model that resulted in the large cold temperature biases. The overestimated albedo was caused by the simulated precipitation biases and over-parameterization of snow albedo. Furthermore, light-absorbing aerosols can result in a large reduction of albedo in snow and ice cover. The results suggest the necessity of developing snow albedo parameterization using observations in the TP, where snow cover and melting are very different from other low-elevation regions, and the influence of aerosols should be considered as well. In addition to refining snow albedo, our results show an urgent need to improve precipitation simulation in the TP.

  8. Initial spread of ¹³⁷Cs from the Fukushima Dai-ichi Nuclear Power Plant over the Japan continental shelf. A study using a high-resolution, global-coastal nested ocean model

    International Nuclear Information System (INIS)

    Lai, Z.; Chen, C.; Lin, H.; Shanghai Ocean Univ.; Beardsley, R.; Ji, R.; Shanghai Ocean Univ.; Sasaki, J.; Lin, J.

    2013-01-01

    The 11 March 2011 tsunami triggered by the M9 and M7.9 earthquakes off the Tohoku coast destroyed facilities at the Fukushima Dai-ichi Nuclear Power Plant (FNPP) leading to a significant long-term flow of the radionuclide ¹³⁷Cs into coastal waters. A high-resolution, global-coastal nested ocean model was first constructed to simulate the 11 March tsunami and coastal inundation. Based on the model's success in reproducing the observed tsunami and coastal inundation, model experiments were then conducted with differing grid resolution to assess the initial spread of ¹³⁷Cs over the eastern shelf of Japan. The ¹³⁷Cs was tracked as a conservative tracer (without radioactive decay) in the three-dimensional model flow field over the period of 26 March-31 August 2011. The results clearly show that for the same ¹³⁷Cs discharge, the model-predicted spreading of ¹³⁷Cs was sensitive not only to model resolution but also the FNPP seawall structure. A coarse-resolution (≈2 km) model simulation led to an overestimation of lateral diffusion and thus faster dispersion of ¹³⁷Cs from the coast to the deep ocean, while advective processes played a more significant role when the model resolution at and around the FNPP was refined to ≈5 m. By resolving the pathways from the leaking source to the southern and northern discharge canals, the high-resolution model better predicted the ¹³⁷Cs spreading in the inner shelf where in situ measurements were made at 30 km off the coast. The overestimation of ¹³⁷Cs concentration near the coast is thought to be due to the omission of sedimentation and biogeochemical processes as well as uncertainties in the amount of ¹³⁷Cs leaking from the source in the model. As a result, a biogeochemical module should be included in the model for more realistic simulations of the fate and spreading of ¹³⁷Cs in the ocean.

  9. Inelastic constitutive models for the simulation of a cyclic softening behavior of modified 9Cr-1Mo steel at elevated temperatures

    International Nuclear Information System (INIS)

    Koo, Gyeong Hoi; Lee, Jae Han

    2007-01-01

    In this paper, the inelastic constitutive models for simulating the cyclic softening behavior of modified 9Cr-1Mo steel, which has a significant cyclic softening characteristic especially at elevated temperatures, are investigated in detail. To do this, the plastic modulus, which primarily governs the calculation scheme of the plasticity, is formulated for inelastic constitutive models such as the Armstrong-Frederick model, the Chaboche model, and the Ohno-Wang model. By implementing the extracted plastic modulus and the consistency conditions into a computer program, the inelastic constitutive parameters are identified to give the best fit to the uniaxial cyclic test data in strain-controlled simulations. From the computer simulations using the obtained constitutive parameters, it is found that the Armstrong-Frederick model is simple to use but yields significantly overestimated strains when compared with the Chaboche and Ohno-Wang models. From the ratcheting simulation results, it is also found that the cyclic softening behavior of modified 9Cr-1Mo steel can invoke ratcheting instability when the applied cyclic loads exceed a certain level of the ratchet loading condition
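    The Armstrong-Frederick model mentioned above evolves a single backstress α by dα = C dεp − γ α |dεp|, which saturates at C/γ under monotonic loading, a feature behind its limited fit to cyclic data. A minimal sketch with hypothetical constants C = 100 000 MPa and γ = 500, giving saturation at C/γ = 200 MPa:

```python
def af_backstress(deps_p_list, C=100.0e3, gamma=500.0):
    """Integrate the 1-D Armstrong-Frederick rule
    d_alpha = C*d_eps_p - gamma*alpha*|d_eps_p| with explicit Euler."""
    alpha = 0.0
    history = []
    for de in deps_p_list:
        alpha += C * de - gamma * alpha * abs(de)
        history.append(alpha)
    return history

# Monotonic plastic straining: backstress saturates toward C/gamma = 200 MPa
hist = af_backstress([1e-4] * 300)
```

    The single saturating backstress is what the Chaboche model generalises by superposing several such terms.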

  10. Model of pulse extraction from a copper laser amplifier

    International Nuclear Information System (INIS)

    Boley, C.D.; Warner, B.E.

    1997-03-01

    A computational model of pulse propagation through a copper laser amplifier has been developed. The model contains a system of 1-D (in the axial direction), time-dependent equations for the laser intensity and amplified spontaneous emission (ASE), coupled to rate equations for the atomic levels. Detailed calculations are presented for a high-power amplifier at Lawrence Livermore National Laboratory. The extracted power agrees with experiment near saturation. At lower input power the calculation overestimates the measured output, probably because of increased ASE effects. 6 refs., 6 figs
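    For comparison, the simplest steady-state description of amplifier saturation, ignoring ASE and the level kinetics entirely, is dI/dz = g0·I/(1 + I/Isat). The values below are hypothetical; the sketch only reproduces the qualitative regimes (exponential small-signal gain, additive extraction at saturation) discussed above:

```python
def amplify(I_in, length=1.0, g0=3.0, I_sat=1.0, n=10_000):
    """Integrate dI/dz = g0*I/(1 + I/I_sat) with explicit Euler.
    All quantities are in arbitrary, illustrative units."""
    I = I_in
    dz = length / n
    for _ in range(n):
        I += g0 * I / (1.0 + I / I_sat) * dz
    return I

small = amplify(1e-6)   # small-signal regime: gain ~ exp(g0*L) ~ 20
big = amplify(10.0)     # saturated regime: adds ~ g0*I_sat per unit length
```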

  11. Can CFMIP2 models reproduce the leading modes of cloud vertical structure in the CALIPSO-GOCCP observations?

    Science.gov (United States)

    Wang, Fang; Yang, Song

    2018-02-01

    Using principal component (PC) analysis, three leading modes of cloud vertical structure (CVS) are revealed by the GCM-Oriented CALIPSO Cloud Product (GOCCP), i.e. the tropical high, subtropical anticyclonic and extratropical cyclonic cloud modes (THCM, SACM and ECCM, respectively). THCM mainly reflects the contrast between tropical high clouds and clouds in middle/high latitudes. SACM is closely associated with middle-high clouds in tropical convective cores, few-cloud regimes in subtropical anticyclonic regions and stratocumulus over subtropical eastern oceans. ECCM mainly corresponds to clouds along extratropical cyclonic regions. Models from phase 2 of the Cloud Feedback Model Intercomparison Project (CFMIP2) reproduce the THCM well, but SACM and ECCM are generally poorly simulated compared to GOCCP. Standardized PCs corresponding to CVS modes are generally captured, whereas original PCs (OPCs) are consistently underestimated (overestimated) for THCM (SACM and ECCM) by CFMIP2 models. The effects of CVS modes on relative shortwave and longwave cloud radiative forcing (RSCRF/RLCRF, with RSCRF calculated at the surface and RLCRF at the top of the atmosphere) are studied with a principal component regression method. Results show that CFMIP2 models tend to overestimate the RSCRF/RLCRF radiative effects (REs) of ECCM, and to underestimate (or simulate with the opposite sign) those of THCM and SACM, per unit global mean OPC compared to observations. These RE biases may be attributed to two factors: one is the underestimation (overestimation) of low/middle clouds (high clouds), equivalently stronger (weaker) REs per unit of low/middle (high) cloud, in simulated global mean cloud profiles; the other is eigenvector biases in the CVS modes (especially for SACM and ECCM). It is suggested that much more attention should be paid to the improvement of CVS, especially cloud parameterization associated with particular physical processes (e.g. downwelling regimes with the Hadley circulation, extratropical storm tracks and others), which

  12. Air quality modelling in the summer over the eastern Mediterranean using WRF-Chem: chemistry and aerosol mechanism intercomparison

    Science.gov (United States)

    Georgiou, George K.; Christoudias, Theodoros; Proestos, Yiannis; Kushta, Jonilda; Hadjinicolaou, Panos; Lelieveld, Jos

    2018-02-01

    We employ the WRF-Chem model to study summertime air pollution, the intense photochemical activity and their impact on air quality over the eastern Mediterranean. We utilize three nested domains with horizontal resolutions of 80, 16 and 4 km, with the finest grid focusing on the island of Cyprus, where the CYPHEX campaign took place in July 2014. Anthropogenic emissions are based on the EDGAR HTAP global emission inventory, while dust and biogenic emissions are calculated online. Three simulations utilizing the CBMZ-MOSAIC, MOZART-MOSAIC, and RADM2-MADE/SORGAM gas-phase and aerosol mechanisms are performed. The results are compared with measurements from a dense observational network of 14 ground stations in Cyprus. The model simulates 2-m temperature (T2m), surface pressure (Psurf), and 10-m wind direction (WD10m) accurately, with minor differences in 10-m wind speed (WS10m) between model and observations at coastal and mountainous stations attributed to limitations in the representation of the complex topography in the model. It is shown that the south-eastern part of Cyprus is mostly affected by emissions from within the island, under the dominant (60 %) westerly flow during summertime. Clean maritime air from the Mediterranean can reduce concentrations of local air pollutants over the region during westerlies. Ozone concentrations are overestimated by all three mechanisms (9 % ≤ NMB ≤ 23 %), with the smallest mean bias (4.25 ppbV) obtained by the RADM2-MADE/SORGAM mechanism. Differences in ozone concentrations can be attributed to the VOC treatment by the three mechanisms. The diurnal variability of pollution and ozone precursors is not captured (hourly correlation coefficients for O3 ≤ 0.29). This might be attributed to the underestimation of NOx concentrations from local emissions by up to 50 %. For the fine particulate matter (PM2.5), the lowest mean bias (9 µg m-3) is obtained with the RADM2-MADE/SORGAM mechanism, with overestimates in sulfate and ammonium aerosols. Overestimation of sulfate aerosols by this mechanism may be

  13. Modelling ohmic confinement experiments on the START tokamak

    International Nuclear Information System (INIS)

    Roach, C.M.

    1996-05-01

    Ohmic confinement data from the tight aspect ratio tokamak START have been analysed using the ASTRA transport simulation code. Neoclassical expressions have been modified to describe tight aspect ratio configurations, and the comparison between START data and models of anomalous transport has been made quantitative using the standard χ² test from statistics. Four confinement models (T11, Rebut-Lallia-Watkins, Lackner-Gottardi, and Taroni et al.'s Bohm model) have been compared with the START data. Three of the models are found to simulate START's electron temperature data moderately well, while Taroni et al.'s Bohm model overestimates electron temperatures in START by an order of magnitude. Thus comparison with START data tends to discriminate against Bohm models; these models are pessimistic for ITER. (author)
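    The χ² figure of merit that makes such a comparison quantitative is the variance-weighted sum of squared residuals. A sketch with hypothetical electron-temperature data, showing how an order-of-magnitude overestimate is penalised:

```python
def chi_squared(model, data, sigma):
    """Standard chi^2 = sum((model - data)^2 / sigma^2)."""
    return sum((m - d) ** 2 / s ** 2 for m, d, s in zip(model, data, sigma))

# Hypothetical electron temperatures (eV) at three radial points.
data = [200.0, 150.0, 100.0]
sigma = [20.0, 15.0, 10.0]
ok_model = [210.0, 145.0, 105.0]          # moderately good fit
bohm_like = [2000.0, 1500.0, 1000.0]      # ~10x overestimate

ok_chi = chi_squared(ok_model, data, sigma)
bohm_chi = chi_squared(bohm_like, data, sigma)
```

    A model within roughly one error bar per point scores χ² of order the number of points, while the ten-fold overestimate scores thousands, which is how the fit discriminates against the Bohm model.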

  14. Strategies for Testing Statistical and Practical Significance in Detecting DIF with Logistic Regression Models

    Science.gov (United States)

    Fidalgo, Angel M.; Alavi, Seyed Mohammad; Amirian, Seyed Mohammad Reza

    2014-01-01

    This study examines three controversial aspects in differential item functioning (DIF) detection by logistic regression (LR) models: first, the relative effectiveness of different analytical strategies for detecting DIF; second, the suitability of the Wald statistic for determining the statistical significance of the parameters of interest; and…

  15. A Systematic Evaluation of Ultrasound-based Fetal Weight Estimation Models on Indian Population

    Directory of Open Access Journals (Sweden)

    Sujitkumar S. Hiwale

    2017-12-01

    Conclusion: We found that the existing fetal weight estimation models have high systematic and random errors in the Indian population, with a general tendency to overestimate fetal weight in the LBW category and underestimate it in the HBW category. We also observed that these models have a limited ability to identify babies at risk of either low or high birth weight. It is recommended that clinicians consider all these factors when interpreting the estimated weight given by the existing models.
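    Systematic and random error in this literature are conventionally the mean and standard deviation of the signed percentage error. A sketch with hypothetical weights (grams) exhibiting the LBW-overestimation and HBW-underestimation pattern described:

```python
import statistics

def percentage_errors(estimated, actual):
    """Signed percentage error per fetus: (EFW - BW) / BW * 100."""
    return [(e - a) / a * 100.0 for e, a in zip(estimated, actual)]

def systematic_and_random_error(estimated, actual):
    """Systematic error = mean PE; random error = SD of PE."""
    pe = percentage_errors(estimated, actual)
    return statistics.mean(pe), statistics.stdev(pe)

# Hypothetical estimated/actual birth weight pairs in grams.
lbw_est, lbw_act = [2100, 2300, 2000], [1900, 2100, 1850]   # overestimated
hbw_est, hbw_act = [3900, 4100], [4200, 4400]               # underestimated

sys_lbw, rand_lbw = systematic_and_random_error(lbw_est, lbw_act)
sys_hbw, rand_hbw = systematic_and_random_error(hbw_est, hbw_act)
```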

  16. Comparison between environmental measurements and model calculations of radioactivity in fish at the Swedish nuclear power plants and Studsvik

    International Nuclear Information System (INIS)

    Karlberg, O.

    1995-02-01

    Doses to critical groups from the activity released from Swedish reactors were modelled in 1983. In this report, those calculations are compared with doses calculated (using the same assumptions as in the 1983 model) from the activity measured in the water recipient. The study shows that the model overestimates activity in biota and sediments, which was expected, since the model was constructed to be conservative. 13 refs, 5 figs, 6 tabs

  17. A systematic experimental investigation of significant parameters affecting model tire hydroplaning

    Science.gov (United States)

    Wray, G. A.; Ehrlich, I. R.

    1973-01-01

    The results of a comprehensive parametric study of model and small pneumatic tires operating on a wet surface are presented. Hydroplaning inception (spin down) and rolling restoration (spin up) are discussed. Conclusions indicate that hydroplaning inception occurs at a speed significantly higher than the rolling restoration speed. Hydroplaning speed increases considerably with tread depth, surface roughness, and tire inflation pressure or footprint pressure, and only moderately with increased load. Water film thickness affects spin down speed only slightly. Spin down speed varies inversely as approximately the one-sixth power of film thickness. Empirical equations relating tire inflation pressure, normal load, tire diameter and water film thickness have been generated for various tire tread and surface configurations.
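    The reported one-sixth-power inverse dependence of spin-down speed on film thickness translates directly into a scaling rule. The reference speed and film thickness below are hypothetical calibration values, not figures from the study:

```python
def spindown_speed(film_mm, v_ref=80.0, film_ref_mm=1.0):
    """Spin-down speed scaling V ~ film^(-1/6), the empirical trend
    reported above; v_ref at film_ref_mm is a hypothetical calibration."""
    return v_ref * (film_mm / film_ref_mm) ** (-1.0 / 6.0)

v_thin = spindown_speed(1.0)
v_thick = spindown_speed(2.0)   # doubling the film lowers V by 2^(-1/6)
```

    Doubling the film thickness reduces the predicted speed by only about 11 %, consistent with the "affects spin down speed only slightly" observation.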

  18. Processes influencing model-data mismatch in drought-stressed, fire-disturbed eddy flux sites

    Science.gov (United States)

    Mitchell, Stephen; Beven, Keith; Freer, Jim; Law, Beverly

    2011-06-01

    Semiarid forests are very sensitive to climatic change and among the most difficult ecosystems to accurately model. We tested the performance of the Biome-BGC model against eddy flux data taken from young (years 2004-2008), mature (years 2002-2008), and old-growth (year 2000) ponderosa pine stands at Metolius, Oregon, and subsequently examined several potential causes for model-data mismatch. We used the Generalized Likelihood Uncertainty Estimation methodology, which involved 500,000 model runs for each stand (1,500,000 total). Each simulation was run with randomly generated parameter values from a uniform distribution based on published parameter ranges, resulting in modeled estimates of net ecosystem CO2 exchange (NEE) that were compared to measured eddy flux data. Simulations for the young stand exhibited the highest level of performance, though they overestimated ecosystem C accumulation (-NEE) 99% of the time. Among the simulations for the mature and old-growth stands, 100% and 99% of the simulations underestimated ecosystem C accumulation. One obvious area of model-data mismatch is soil moisture, which was overestimated by the model in the young and old-growth stands yet underestimated in the mature stand. However, modeled estimates of soil water content and associated water deficits did not appear to be the primary cause of model-data mismatch; our analysis indicated that gross primary production can be accurately modeled even if soil moisture content is not. Instead, difficulties in adequately modeling ecosystem respiration, mainly autotrophic respiration, appeared to be the fundamental cause of model-data mismatch.
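    In outline, GLUE is uniform prior sampling, an informal likelihood from model-data mismatch, and retention of "behavioral" runs. A toy model stands in for Biome-BGC below; all parameter ranges and the 5% retention rule are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_nee_model(p1, p2, t):
    """Two-parameter stand-in for Biome-BGC producing an NEE series."""
    return -p1 * np.sin(2.0 * np.pi * t) + p2

t = np.linspace(0.0, 1.0, 50)
observed = toy_nee_model(2.0, 0.5, t) + rng.normal(0.0, 0.1, t.size)

# GLUE step 1: sample parameters uniformly from prior ranges.
n = 5000
p1s = rng.uniform(0.0, 5.0, n)
p2s = rng.uniform(-2.0, 2.0, n)

# GLUE step 2: score each run with an informal likelihood (here, MSE).
errs = np.array([np.mean((toy_nee_model(a, b, t) - observed) ** 2)
                 for a, b in zip(p1s, p2s)])

# GLUE step 3: keep the best-scoring runs as the behavioral set.
behavioral = errs < np.quantile(errs, 0.05)
```

    The behavioral set concentrates around the true parameters, the same mechanism by which the 500,000 Biome-BGC runs per stand constrain the NEE predictions.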

  19. Evaluation and application of site-specific data to revise the first-order decay model for estimating landfill gas generation and emissions at Danish landfills

    DEFF Research Database (Denmark)

    Mou, Zishen; Scheutz, Charlotte; Kjeldsen, Peter

    2015-01-01

    Methane (CH4) generated from low-organic waste degradation at four Danish landfills was estimated by three first-order decay (FOD) landfill gas (LFG) generation models (LandGEM, IPCC, and Afvalzorg). Actual waste data from Danish landfills were applied to fit the waste categories required by the IPCC and Afvalzorg models. In general, the single-phase model, LandGEM, significantly overestimated CH4 generation, because it applied default values that are too high for the key parameters to handle low-organic waste scenarios. The key parameters were the biochemical CH4 potential (BMP) and the CH4 generation rate constant (k). … landfills (from the start of disposal until 2020 and until 2100). Through a CH4 mass balance approach, fugitive CH4 emissions from whole sites and from a specific cell for shredder waste were aggregated based on the revised Afvalzorg model outcomes. Aggregated results were in good agreement with field …
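    All three models share the same first-order decay core: waste mass W disposed in year i contributes k·L0·W·e^(−k(t−i)) to generation in year t. The sketch below shows how generic high-organic defaults inflate the estimate relative to parameters chosen for low-organic waste; all masses and both parameter sets are hypothetical illustrations, not the study's fitted values:

```python
import math

def fod_ch4(waste_by_year, k, L0, year):
    """First-order decay CH4 generation (m3/yr) in `year` from annual
    disposals {year_i: mass_Mg}: sum of k*L0*W*exp(-k*(year - year_i))."""
    return sum(k * L0 * w * math.exp(-k * (year - yi))
               for yi, w in waste_by_year.items() if yi <= year)

waste = {y: 50_000.0 for y in range(2000, 2011)}    # Mg/yr, hypothetical

high = fod_ch4(waste, k=0.05, L0=170.0, year=2015)  # generic defaults
low = fod_ch4(waste, k=0.02, L0=60.0, year=2015)    # low-organic choices
```

    With both k and L0 reduced, the same disposal history yields several-fold less CH4, which is the direction of the LandGEM overestimation described above.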

  20. Parametrization of turbulence models using 3DVAR data assimilation in laboratory conditions

    Science.gov (United States)

    Olbert, A. I.; Nash, S.; Ragnoli, E.; Hartnett, M.

    2013-12-01

    In this research the 3DVAR data assimilation scheme is implemented in the numerical model DIVAST in order to optimize the performance of the numerical model by selecting an appropriate turbulence scheme and tuning its parameters. Two turbulence closure schemes, the Prandtl mixing length model and the two-equation k-ε model, were incorporated into DIVAST and examined with respect to their universality of application, complexity of solutions, computational efficiency and numerical stability. A square harbour with one symmetrical entrance subject to tide-induced flows was selected to investigate the structure of turbulent flows. The experimental part of the research was conducted in a tidal basin. A significant advantage of such a laboratory experiment is the fully controlled environment, where domain setup and forcing are user-defined. The research shows that the Prandtl mixing length model and the two-equation k-ε model, with default parameterization predefined according to literature recommendations, overestimate eddy viscosity, which in turn results in a significant underestimation of velocity magnitudes in the harbour. The data assimilation of the model-predicted velocity and laboratory observations significantly improves model predictions for both turbulence models by adjusting modelled flows in the harbour to match de-errored observations. Such analysis gives an optimal solution based on which numerical model parameters can be estimated. The process of turbulence model optimization by reparameterization and tuning towards the optimal state led to new constants that may potentially be applied to complex turbulent flows, such as rapidly developing flows or recirculating flows. This research further demonstrates how 3DVAR can be utilized to identify and quantify shortcomings of the numerical model and consequently to improve forecasting by correct parameterization of the turbulence models. Such improvements may greatly benefit physical oceanography in terms of
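    The two closures under comparison differ only in how the eddy viscosity is obtained: Prandtl's mixing-length form ν_t = l_m²·|du/dz| versus the k-ε form ν_t = C_μ·k²/ε (C_μ = 0.09 being the conventional literature value). In sketch form, with hypothetical flow values:

```python
import numpy as np

def nu_t_mixing_length(dudz, l_m):
    """Prandtl mixing-length closure: nu_t = l_m^2 * |du/dz|."""
    return l_m ** 2 * np.abs(dudz)

def nu_t_k_epsilon(k, eps, c_mu=0.09):
    """Standard k-epsilon closure: nu_t = C_mu * k^2 / eps."""
    return c_mu * k ** 2 / eps

# Hypothetical values: shear 2 s^-1, mixing length 0.1 m,
# turbulent kinetic energy 0.01 m^2/s^2, dissipation 1e-4 m^2/s^3.
nu1 = nu_t_mixing_length(dudz=2.0, l_m=0.1)
nu2 = nu_t_k_epsilon(k=0.01, eps=1.0e-4)
```

    Tuning either closure amounts to adjusting l_m or the k-ε constants, which is exactly what the 3DVAR-based optimization does against the laboratory observations.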

  1. A parametric study of the influence of short-term soil deformability on ...

    African Journals Online (AJOL)

    The fixed-base model underestimates both axial forces and moments in some columns of the dual system. The fixed-base model tends to underestimate the shear wall bending moments and axial forces, whereas it consistently overestimates the shear forces. Significant differences in the reaction moments at the foundation ...

  2. Psychometric evaluation of the Overexcitability Questionnaire-Two applying Bayesian Structural Equation Modeling (BSEM) and multiple-group BSEM-based alignment with approximate measurement invariance

    Directory of Open Access Journals (Sweden)

    Niki De Bondt

    2015-12-01

    The Overexcitability Questionnaire-Two (OEQ-II) measures the degree and nature of overexcitability, which assists in determining the developmental potential of an individual according to Dabrowski's Theory of Positive Disintegration. Previous validation studies using frequentist confirmatory factor analysis, which postulates exact parameter constraints, led to model rejection and a long series of model modifications. Bayesian structural equation modeling (BSEM) allows the application of zero-mean, small-variance priors for cross-loadings, residual covariances, and differences in measurement parameters across groups, better reflecting substantive theory and leading to better model fit and less overestimation of factor correlations. Our BSEM analysis with a sample of 516 students in higher education yields positive results regarding the factorial validity of the OEQ-II. Likewise, applying BSEM-based alignment with approximate measurement invariance, the absence of non-invariant factor loadings and intercepts across gender supports the psychometric quality of the OEQ-II. Compared to males, females scored significantly higher on emotional and sensual overexcitability, and significantly lower on psychomotor overexcitability.

  3. Can the Responses of Photosynthesis and Stomatal Conductance to Water and Nitrogen Stress Combinations Be Modeled Using a Single Set of Parameters?

    Science.gov (United States)

    Zhang, Ningyi; Li, Gang; Yu, Shanxiang; An, Dongsheng; Sun, Qian; Luo, Weihong; Yin, Xinyou

    2017-01-01

    Accurately predicting photosynthesis in response to water and nitrogen stress is the first step toward predicting crop growth, yield and many quality traits under fluctuating environmental conditions. While mechanistic models are capable of predicting photosynthesis under fluctuating environmental conditions, simplifying the parameterization procedure is important toward a wide range of model applications. In this study, the biochemical photosynthesis model of Farquhar, von Caemmerer and Berry (the FvCB model) and the stomatal conductance model of Ball, Woodrow and Berry which was revised by Leuning and Yin (the BWB-Leuning-Yin model) were parameterized for Lilium (L. auratum × speciosum “Sorbonne”) grown under different water and nitrogen conditions. Linear relationships were found between biochemical parameters of the FvCB model and leaf nitrogen content per unit leaf area (Na), and between mesophyll conductance and Na under different water and nitrogen conditions. By incorporating these Na-dependent linear relationships, the FvCB model was able to predict the net photosynthetic rate (An) in response to all water and nitrogen conditions. In contrast, stomatal conductance (gs) could be accurately predicted only if parameters in the BWB-Leuning-Yin model were adjusted specifically to water conditions; otherwise gs was underestimated by 9% under well-watered conditions and was overestimated by 13% under water-deficit conditions. However, the 13% overestimation of gs under water-deficit conditions led to only 9% overestimation of An by the coupled FvCB and BWB-Leuning-Yin model, whereas the 9% underestimation of gs under well-watered conditions had little effect on the prediction of An. Our results indicate that to accurately predict An and gs under different water and nitrogen conditions, only a few parameters in the BWB-Leuning-Yin model need to be adjusted according to water conditions whereas all other parameters are either conservative or can be adjusted according to
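
    The Na-linear parameterization described above can be sketched as follows. The slope values, the dark-respiration and electron-transport relations, and the kinetic constants are illustrative assumptions for a standard FvCB form, not the fitted values from the study.

```python
# Minimal FvCB sketch with a hypothetical linear Na-dependence of the
# photosynthetic capacities. All coefficients are illustrative.

GAMMA_STAR = 42.75               # CO2 compensation point (umol mol-1) at 25 C
KC, KO, O = 404.9, 278.4, 210.0  # Michaelis constants; O in mmol mol-1

def fvcb_an(na, ci, par):
    """Net photosynthesis (umol m-2 s-1) for leaf N content na (g m-2)."""
    vcmax = 60.0 * na            # hypothetical linear Na-dependence
    jmax = 1.8 * vcmax           # hypothetical proportionality
    rd = 0.01 * vcmax
    # Electron transport from a simple saturating light response
    j = jmax * par / (par + 2.0 * jmax)
    # Rubisco-limited and RuBP-regeneration-limited rates
    ac = vcmax * (ci - GAMMA_STAR) / (ci + KC * (1.0 + O / KO))
    aj = j * (ci - GAMMA_STAR) / (4.0 * ci + 8.0 * GAMMA_STAR)
    return min(ac, aj) - rd

an_low = fvcb_an(na=1.0, ci=245.0, par=1000.0)
an_high = fvcb_an(na=2.0, ci=245.0, par=1000.0)
```

    With both capacities tied to Na, a single leaf-nitrogen measurement drives the whole An prediction, which is the parameter-economy argument made in the abstract.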

  4. A model independent safeguard against background mismodeling for statistical inference

    Energy Technology Data Exchange (ETDEWEB)

    Priel, Nadav; Landsman, Hagar; Manfredini, Alessandro; Budnik, Ranny [Department of Particle Physics and Astrophysics, Weizmann Institute of Science, Herzl St. 234, Rehovot (Israel); Rauch, Ludwig, E-mail: nadav.priel@weizmann.ac.il, E-mail: rauch@mpi-hd.mpg.de, E-mail: hagar.landsman@weizmann.ac.il, E-mail: alessandro.manfredini@weizmann.ac.il, E-mail: ran.budnik@weizmann.ac.il [Teilchen- und Astroteilchenphysik, Max-Planck-Institut für Kernphysik, Saupfercheckweg 1, 69117 Heidelberg (Germany)

    2017-05-01

    We propose a safeguard procedure for statistical inference that provides universal protection against mismodeling of the background. The method quantifies and incorporates the signal-like residuals of the background model into the likelihood function, using information available in a calibration dataset. This prevents possible false discovery claims that may arise through unknown mismodeling, and corrects the bias in limit setting created by overestimated or underestimated background. We demonstrate how the method removes the bias created by an incomplete background model using three realistic case studies.

  5. Bayesian mixture modeling of significant p values: A meta-analytic method to estimate the degree of contamination from H₀.

    Science.gov (United States)

    Gronau, Quentin Frederik; Duizer, Monique; Bakker, Marjan; Wagenmakers, Eric-Jan

    2017-09-01

    Publication bias and questionable research practices have long been known to corrupt the published record. One method to assess the extent of this corruption is to examine the meta-analytic collection of significant p values, the so-called p-curve (Simonsohn, Nelson, & Simmons, 2014a). Inspired by statistical research on false-discovery rates, we propose a Bayesian mixture model analysis of the p-curve. Our mixture model assumes that significant p values arise either from the null hypothesis H₀ (when their distribution is uniform) or from the alternative hypothesis H₁ (when their distribution is accounted for by a simple parametric model). The mixture model estimates the proportion of significant results that originate from H₀, but it also estimates the probability that each specific p value originates from H₀. We apply our model to 2 examples. The first concerns the set of 587 significant p values for all t tests published in the 2007 volumes of Psychonomic Bulletin & Review and the Journal of Experimental Psychology: Learning, Memory, and Cognition; the mixture model reveals that p values higher than about .005 are more likely to stem from H₀ than from H₁. The second example concerns 159 significant p values from studies on social priming and 130 from yoked control studies. The results from the yoked controls confirm the findings from the first example, whereas the results from the social priming studies are difficult to interpret because they are sensitive to the prior specification. To maximize accessibility, we provide a web application that allows researchers to apply the mixture model to any set of significant p values. (PsycINFO Database Record © 2017 APA, all rights reserved).
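
    A minimal stand-in for the mixture idea: significant p values are uniform on (0, .05) under H₀ and follow a fixed beta-shaped density under H₁, and the H₀ proportion is fit by EM. This is a simplified maximum-likelihood sketch, not the authors' Bayesian implementation, and the H₁ shape parameter is an assumption.

```python
# Two-component mixture of significant p-values:
#   H0: uniform on (0, ALPHA);  H1: f1(p) = A * p**(A-1) / ALPHA**A
import random

ALPHA, A = 0.05, 0.3
random.seed(1)
# Simulate: 30% of significant p-values from H0, 70% from H1
pvals = [random.uniform(0, ALPHA) if random.random() < 0.3
         else ALPHA * random.random() ** (1 / A) for _ in range(2000)]

f0 = 1.0 / ALPHA          # uniform density under H0
phi = 0.5                 # initial guess for the H0 proportion
for _ in range(200):      # EM iterations
    post = []
    for p in pvals:
        f1 = A * p ** (A - 1) / ALPHA ** A
        # posterior probability that this p-value came from H0
        post.append(phi * f0 / (phi * f0 + (1 - phi) * f1))
    phi = sum(post) / len(post)
```

    The per-p-value posteriors in `post` mirror the model's second output: larger significant p values get higher posterior probability of stemming from H₀, because f1 concentrates near zero.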

  6. A comparative empirical analysis of statistical models for evaluating highway segment crash frequency

    Directory of Open Access Journals (Sweden)

    Bismark R.D.K. Agbelie

    2016-08-01

    Full Text Available The present study conducted an empirical highway segment crash frequency analysis on the basis of fixed-parameters negative binomial and random-parameters negative binomial models. Using 4 years of data from a total of 158 highway segments, with a total of 11,168 crashes, the results from both models were presented, discussed, and compared. About 58% of the selected variables produced normally distributed parameters across highway segments, while the remaining produced fixed parameters. The presence of a noise barrier along a highway segment would increase mean annual crash frequency by 0.492 for 88.21% of the highway segments, and would decrease crash frequency for the remaining 11.79% of the highway segments. In addition, the number of vertical curves per mile along a segment would increase mean annual crash frequency by 0.006 for 84.13% of the highway segments, and would decrease crash frequency for the remaining 15.87% of the highway segments. Thus, constraining the parameters to be fixed across all highway segments would lead to an inaccurate conclusion. Although the estimated parameters from both models showed consistency in direction, the magnitudes were significantly different. Out of the two models, the random-parameters negative binomial model was found to be statistically superior in evaluating highway segment crashes compared with the fixed-parameters negative binomial model. On average, the marginal effects from the fixed-parameters negative binomial model were observed to be significantly overestimated compared with those from the random-parameters negative binomial model.
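
    The fixed- versus random-parameters contrast can be illustrated with a small Monte Carlo sketch under a log link: forcing one common slope across heterogeneous segments changes the implied average marginal effect. All numbers here are invented, and the direction and size of the distortion depend on the assumed heterogeneity, so this only demonstrates the mechanism, not the study's estimates.

```python
# Heterogeneous (random) slopes versus one pooled (fixed) slope in a
# log-link crash-frequency model. Illustrative parameters only.
import math, random

random.seed(7)
B0 = -0.5
betas = [random.gauss(0.4, 0.5) for _ in range(158)]   # segment-specific slopes
x = [random.uniform(0.0, 2.0) for _ in range(158)]     # covariate values

# Random-parameters average marginal effect: mean of beta_i * lambda_i
me_random = sum(b * math.exp(B0 + b * xi) for b, xi in zip(betas, x)) / 158

# Fixed-parameters analogue: one pooled slope applied to every segment
b_fixed = sum(betas) / 158
me_fixed = b_fixed * sum(math.exp(B0 + b_fixed * xi) for xi in x) / 158

# Share of segments where the covariate increases crashes (cf. 84-88% above)
share_positive = sum(b > 0 for b in betas) / 158
```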

  7. The cost of simplifying air travel when modeling disease spread.

    Directory of Open Access Journals (Sweden)

    Justin Lessler

    Full Text Available BACKGROUND: Air travel plays a key role in the spread of many pathogens. Modeling the long-distance spread of infectious disease in these cases requires an air travel model. Highly detailed air transportation models can be overdetermined and computationally problematic. We compared the predictions of a simplified air transport model with those of a model of all routes and assessed the impact of differences on models of infectious disease. METHODOLOGY/PRINCIPAL FINDINGS: Using U.S. ticket data from 2007, we compared a simplified "pipe" model, in which individuals flow in and out of the air transport system based on the number of arrivals and departures from a given airport, to a fully saturated model where all routes are modeled individually. We also compared the pipe model to a "gravity" model where the probability of travel is scaled by physical distance; the gravity model did not differ significantly from the pipe model. The pipe model roughly approximated actual air travel, but tended to overestimate the number of trips between small airports and underestimate travel between major east and west coast airports. For most routes, the maximum number of false (or missed) introductions of disease is small (<1 per day) but for a few routes this rate is greatly underestimated by the pipe model. CONCLUSIONS/SIGNIFICANCE: If our interest is in large scale regional and national effects of disease, the simplified pipe model may be adequate. If we are interested in specific effects of interventions on particular air routes or the time for the disease to reach a particular location, a more complex point-to-point model will be more accurate. For many problems a hybrid model that independently models some frequently traveled routes may be the best choice. Regardless of the model used, the effect of simplifications and sensitivity to errors in parameter estimation should be analyzed.
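
    One plausible reading of the pipe model can be sketched as follows: travellers leaving airport i are distributed over destinations j in proportion to j's share of total arrivals, with no route-level information. The airport counts are invented for illustration.

```python
# Pipe-model flow allocation sketch: departures split by arrival shares.
departures = {"A": 1000, "B": 300, "C": 50}
arrivals = {"A": 900, "B": 400, "C": 50}
total_arrivals = sum(arrivals.values())

def pipe_flow(i, j):
    """Expected daily travellers i -> j under the pipe model."""
    if i == j:
        return 0.0
    # renormalise arrival shares over destinations other than i
    denom = total_arrivals - arrivals[i]
    return departures[i] * arrivals[j] / denom

flow_ab = pipe_flow("A", "B")
flow_ac = pipe_flow("A", "C")
```

    By construction the flows out of an airport sum to its departures, which is exactly why the model can misallocate trips between particular airport pairs while conserving totals.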

  8. Simulating snow maps for Norway: description and statistical evaluation of the seNorge snow model

    Directory of Open Access Journals (Sweden)

    T. M. Saloranta

    2012-11-01

    Full Text Available Daily maps of snow conditions have been produced in Norway with the seNorge snow model since 2004. The seNorge snow model operates with 1 × 1 km resolution, uses gridded observations of daily temperature and precipitation as its input forcing, and simulates, among others, snow water equivalent (SWE), snow depth (SD), and the snow bulk density (ρ). In this paper the set of equations contained in the seNorge model code is described and a thorough spatiotemporal statistical evaluation of the model performance for 1957–2011 is made using the two major sets of extensive in situ snow measurements that exist for Norway. The evaluation results show that the seNorge model generally overestimates both SWE and ρ, and that the overestimation of SWE increases with elevation throughout the snow season. However, the R2-values for model fit are 0.60 for (log-transformed) SWE and 0.45 for ρ, indicating that after removal of the detected systematic model biases (e.g. by recalibrating the model or expressing snow conditions in relative units) the model performs rather well. The seNorge model provides a relatively simple, not very data-demanding, yet nonetheless process-based method to construct snow maps of high spatiotemporal resolution. It is an especially well suited alternative for operational snow mapping in regions with rugged topography and large spatiotemporal variability in snow conditions, as is the case in the mountainous Norway.
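
    seNorge's actual snow routines are more detailed, but the temperature-and-precipitation forcing idea can be sketched with a minimal degree-day model; the threshold and melt factor below are illustrative, not the seNorge values.

```python
# Degree-day snow sketch: precipitation accumulates as snow below a
# threshold temperature, melt is proportional to degrees above it.
T_THRESH = 0.5   # deg C rain/snow and melt threshold (illustrative)
DDF = 3.5        # degree-day melt factor, mm per deg C per day (illustrative)

def simulate_swe(temps, precips):
    """Daily SWE series (mm) from daily mean temperature and precipitation."""
    swe, series = 0.0, []
    for t, p in zip(temps, precips):
        if t < T_THRESH:
            swe += p                                     # snowfall accumulates
        else:
            swe = max(0.0, swe - DDF * (t - T_THRESH))   # degree-day melt
        series.append(swe)
    return series

swe = simulate_swe([-3, -1, 0, 2, 5, 8], [10, 5, 8, 0, 0, 0])
```

    A systematic SWE bias of the kind reported above would enter such a scheme through the precipitation forcing or the accumulation/melt coefficients, which is why recalibration can remove it.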

  9. A cutoff value based on analysis of a reference population decreases overestimation of the prevalence of nocturnal polyuria.

    Science.gov (United States)

    van Haarst, Ernst P; Bosch, J L H Ruud

    2012-09-01

    We sought criteria for nocturnal polyuria in asymptomatic, nonurological adults of all ages by reporting reference values of the ratio of daytime and nighttime urine volumes, and finding nocturia predictors. Data from a database of frequency-volume charts from a reference population of 894 nonurological, asymptomatic volunteers of all age groups were analyzed. The nocturnal polyuria index and the nocturia index were calculated and factors influencing these values were determined by multivariate analysis. The nocturnal polyuria index had wide variation but a normal distribution with a mean ± SD of 30% ± 12%. The 95th percentile of the values was 53%. Above this cutoff a patient had nocturnal polyuria. This value contrasts with the International Continence Society definition of 33% but agrees with several other reports. On multivariate regression analysis with the nocturnal polyuria index as the dependent variable, sleeping time, maximum voided volume and age were the covariates. However, the increase in the nocturnal polyuria index by age was small. Excluding polyuria and nocturia from analysis did not alter the results in a relevant way. The nocturnal voiding frequency depended on sleeping time and maximum voided volume but most of all on the nocturia index. The prevalence of nocturnal polyuria is overestimated. We suggest a new cutoff value for the nocturnal polyuria index: nocturnal polyuria exists when the index exceeds 53%. The nocturia index is the best predictor of nocturia. Copyright © 2012 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.
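
    The proposed cutoff is straightforward to apply to a frequency-volume chart. A worked example with invented volumes, comparing the 53% cutoff against the 33% ICS definition:

```python
# Nocturnal polyuria index (NPi): nighttime urine volume as a share of
# the 24 h total. Volumes below are invented for illustration.
NEW_CUTOFF, ICS_CUTOFF = 0.53, 0.33

day_voids_ml = [350, 300, 280, 320]   # daytime voided volumes
night_voids_ml = [400, 380]           # nocturnal volumes incl. first morning void

night = sum(night_voids_ml)
total = night + sum(day_voids_ml)
npi = night / total                   # 780 / 2030, about 0.38

polyuria_ics = npi > ICS_CUTOFF       # flagged under the 33% definition
polyuria_new = npi > NEW_CUTOFF       # not flagged under the 53% cutoff
```

    This patient illustrates the paper's point: the ICS definition labels the chart as nocturnal polyuria while the population-based 53% cutoff does not.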

  10. Bioavailability of particulate metal to zebra mussels: Biodynamic modelling shows that assimilation efficiencies are site-specific

    Energy Technology Data Exchange (ETDEWEB)

    Bourgeault, Adeline, E-mail: bourgeault@ensil.unilim.fr [Cemagref, Unite de Recherche Hydrosystemes et Bioprocedes, 1 rue Pierre-Gilles de Gennes, 92761 Antony (France); FIRE, FR-3020, 4 place Jussieu, 75005 Paris (France); Gourlay-France, Catherine, E-mail: catherine.gourlay@cemagref.fr [Cemagref, Unite de Recherche Hydrosystemes et Bioprocedes, 1 rue Pierre-Gilles de Gennes, 92761 Antony (France); FIRE, FR-3020, 4 place Jussieu, 75005 Paris (France); Priadi, Cindy, E-mail: cindy.priadi@eng.ui.ac.id [LSCE/IPSL CEA-CNRS-UVSQ, Avenue de la Terrasse, 91198 Gif-sur-Yvette (France); Ayrault, Sophie, E-mail: Sophie.Ayrault@lsce.ipsl.fr [LSCE/IPSL CEA-CNRS-UVSQ, Avenue de la Terrasse, 91198 Gif-sur-Yvette (France); Tusseau-Vuillemin, Marie-Helene, E-mail: Marie-helene.tusseau@ifremer.fr [IFREMER Technopolis 40, 155 rue Jean-Jacques Rousseau, 92138 Issy-Les-Moulineaux (France)

    2011-12-15

    This study investigates the ability of the biodynamic model to predict the trophic bioaccumulation of cadmium (Cd), chromium (Cr), copper (Cu), nickel (Ni) and zinc (Zn) in a freshwater bivalve. Zebra mussels were transplanted to three sites along the Seine River (France) and collected monthly for 11 months. Measurements of the metal body burdens in mussels were compared with the predictions from the biodynamic model. The exchangeable fraction of metal particles did not account for the bioavailability of particulate metals, since it did not capture the differences between sites. The assimilation efficiency (AE) parameter is necessary to take into account biotic factors influencing particulate metal bioavailability. The biodynamic model, applied with AEs from the literature, overestimated the measured concentrations in zebra mussels, the extent of overestimation being site-specific. Therefore, an original methodology was proposed for in situ AE measurements for each site and metal. - Highlights: > Exchangeable fraction of metal particles did not account for the bioavailability of particulate metals. > Need for site-specific biodynamic parameters. > Field-determined AE provide a good fit between the biodynamic model predictions and bioaccumulation measurements. - The interpretation of metal bioaccumulation in transplanted zebra mussels with biodynamic modelling highlights the need for site-specific assimilation efficiencies of particulate metals.
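
    The first-order biodynamic model referenced above (waterborne uptake plus dietary uptake scaled by the assimilation efficiency, minus first-order efflux) can be sketched as follows; all rate constants and concentrations are invented, not the site-specific values from the study.

```python
# Biodynamic bioaccumulation sketch: dC/dt = ku*Cw + AE*IR*Cf - ke*C,
# Euler-stepped to near steady state. Illustrative parameters only.
KU = 0.2    # water uptake rate constant (L g-1 d-1)
IR = 0.1    # ingestion rate (g g-1 d-1)
KE = 0.05   # efflux rate constant (d-1)

def body_burden(cw, cf, ae, days, dt=0.1):
    """Tissue concentration after `days`, for dissolved (cw) and food (cf)
    metal concentrations and assimilation efficiency ae."""
    c = 0.0
    for _ in range(int(days / dt)):
        c += dt * (KU * cw + ae * IR * cf - KE * c)
    return c

# A literature AE versus a lower, site-specific AE: with diet dominating
# uptake, the AE choice drives the predicted burden.
c_lit = body_burden(cw=0.5, cf=200.0, ae=0.6, days=300)
c_site = body_burden(cw=0.5, cf=200.0, ae=0.2, days=300)
```

    The gap between `c_lit` and `c_site` mirrors the abstract's finding: applying literature AEs where the true, site-specific AE is lower overestimates the measured body burden.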

  11. Field significance of performance measures in the context of regional climate model evaluation. Part 2: precipitation

    Science.gov (United States)

    Ivanov, Martin; Warrach-Sagi, Kirsten; Wulfmeyer, Volker

    2018-04-01

    A new approach for rigorous spatial analysis of the downscaling performance of regional climate model (RCM) simulations is introduced. It is based on a multiple comparison of the local tests at the grid cells and is also known as `field' or `global' significance. The block length for the local resampling tests is precisely determined to adequately account for the time series structure. New performance measures for estimating the added value of downscaled data relative to the large-scale forcing fields are developed. The methodology is exemplarily applied to a standard EURO-CORDEX hindcast simulation with the Weather Research and Forecasting (WRF) model coupled with the land surface model NOAH at 0.11° grid resolution. Daily precipitation climatology for the 1990-2009 period is analysed for Germany for winter and summer in comparison with high-resolution gridded observations from the German Weather Service. The field significance test controls the proportion of falsely rejected local tests in a meaningful way and is robust to spatial dependence. Hence, the spatial patterns of the statistically significant local tests are also meaningful. We interpret them from a process-oriented perspective. While the downscaled precipitation distributions are statistically indistinguishable from the observed ones in most regions in summer, the biases of some distribution characteristics are significant over large areas in winter. WRF-NOAH generates appropriate stationary fine-scale climate features in the daily precipitation field over regions of complex topography in both seasons and appropriate transient fine-scale features almost everywhere in summer. As the added value of global climate model (GCM)-driven simulations cannot be smaller than this perfect-boundary estimate, this work demonstrates in a rigorous manner the clear additional value of dynamical downscaling over global climate simulations. The evaluation methodology has a broad spectrum of applicability as it is

  12. Modelling vocal anatomy's significant effect on speech

    NARCIS (Netherlands)

    de Boer, B.

    2010-01-01

    This paper investigates the effect of larynx position on the articulatory abilities of a humanlike vocal tract. Previous work has investigated models that were built to resemble the anatomy of existing species or fossil ancestors. This has led to conflicting conclusions about the relation between

  13. Evaluation of climate model aerosol seasonal and spatial variability over Africa using AERONET

    Science.gov (United States)

    Horowitz, Hannah M.; Garland, Rebecca M.; Thatcher, Marcus; Landman, Willem A.; Dedekind, Zane; van der Merwe, Jacobus; Engelbrecht, Francois A.

    2017-11-01

    The sensitivity of climate models to the characterization of African aerosol particles is poorly understood. Africa is a major source of dust and biomass burning aerosols and this represents an important research gap in understanding the impact of aerosols on radiative forcing of the climate system. Here we evaluate the current representation of aerosol particles in the Conformal Cubic Atmospheric Model (CCAM) with ground-based remote retrievals across Africa, and additionally provide an analysis of observed aerosol optical depth at 550 nm (AOD550 nm) and Ångström exponent data from 34 Aerosol Robotic Network (AERONET) sites. Analysis of the 34 long-term AERONET sites confirms the importance of dust and biomass burning emissions to the seasonal cycle and magnitude of AOD550 nm across the continent and the transport of these emissions to regions outside of the continent. In general, CCAM captures the seasonality of the AERONET data across the continent. The magnitude of modeled and observed multiyear monthly average AOD550 nm overlap within ±1 standard deviation of each other for at least 7 months at all sites except the Réunion St Denis Island site (Réunion St. Denis). The timing of modeled peak AOD550 nm in southern Africa occurs 1 month prior to the observed peak, which does not align with the timing of maximum fire counts in the region. For the western and northern African sites, it is evident that CCAM currently overestimates dust in some regions while others (e.g., the Arabian Peninsula) are better characterized. This may be due to overestimated dust lifetime, or that the characterization of the soil for these areas needs to be updated with local information. The CCAM simulated AOD550 nm for the global domain is within the spread of previously published results from CMIP5 and AeroCom experiments for black carbon, organic carbon, and sulfate aerosols. The model's performance provides confidence for using the model to estimate large-scale regional impacts

  14. Evaluation of climate model aerosol seasonal and spatial variability over Africa using AERONET

    Directory of Open Access Journals (Sweden)

    H. M. Horowitz

    2017-11-01

    Full Text Available The sensitivity of climate models to the characterization of African aerosol particles is poorly understood. Africa is a major source of dust and biomass burning aerosols and this represents an important research gap in understanding the impact of aerosols on radiative forcing of the climate system. Here we evaluate the current representation of aerosol particles in the Conformal Cubic Atmospheric Model (CCAM) with ground-based remote retrievals across Africa, and additionally provide an analysis of observed aerosol optical depth at 550 nm (AOD550 nm) and Ångström exponent data from 34 Aerosol Robotic Network (AERONET) sites. Analysis of the 34 long-term AERONET sites confirms the importance of dust and biomass burning emissions to the seasonal cycle and magnitude of AOD550 nm across the continent and the transport of these emissions to regions outside of the continent. In general, CCAM captures the seasonality of the AERONET data across the continent. The magnitude of modeled and observed multiyear monthly average AOD550 nm overlap within ±1 standard deviation of each other for at least 7 months at all sites except the Réunion St Denis Island site (Réunion St. Denis). The timing of modeled peak AOD550 nm in southern Africa occurs 1 month prior to the observed peak, which does not align with the timing of maximum fire counts in the region. For the western and northern African sites, it is evident that CCAM currently overestimates dust in some regions while others (e.g., the Arabian Peninsula) are better characterized. This may be due to overestimated dust lifetime, or that the characterization of the soil for these areas needs to be updated with local information. The CCAM simulated AOD550 nm for the global domain is within the spread of previously published results from CMIP5 and AeroCom experiments for black carbon, organic carbon, and sulfate aerosols. The model's performance provides confidence for using the model to estimate

  15. Modeling misidentification errors that result from use of genetic tags in capture-recapture studies

    Science.gov (United States)

    Yoshizaki, J.; Brownie, C.; Pollock, K.H.; Link, W.A.

    2011-01-01

    Misidentification of animals is potentially important when naturally existing features (natural tags) such as DNA fingerprints (genetic tags) are used to identify individual animals. For example, when misidentification leads to multiple identities being assigned to an animal, traditional estimators tend to overestimate population size. Accounting for misidentification in capture-recapture models requires detailed understanding of the mechanism. Using genetic tags as an example, we outline a framework for modeling the effect of misidentification in closed population studies when individual identification is based on natural tags that are consistent over time (non-evolving natural tags). We first assume a single sample is obtained per animal for each capture event, and then generalize to the case where multiple samples (such as hair or scat samples) are collected per animal per capture occasion. We introduce methods for estimating population size and, using a simulation study, we show that our new estimators perform well for cases with moderately high capture probabilities or high misidentification rates. In contrast, conventional estimators can seriously overestimate population size when errors due to misidentification are ignored. © 2009 Springer Science+Business Media, LLC.
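
    The overestimation mechanism can be shown with a toy two-occasion simulation: misidentification creates "ghost" identities that are never recaptured, deflating the number of matches and inflating a naive Lincoln-Petersen estimate. All rates below are illustrative, and this is not the estimator developed in the paper.

```python
# Ghost identities inflate a naive Lincoln-Petersen population estimate.
import random

random.seed(3)
N, P_CAP, P_MISID = 2000, 0.4, 0.2   # true size, capture prob, misid prob

def lincoln_petersen(misid_rate):
    occ1, occ2 = set(), set()
    ghost = 0
    for animal in range(N):
        for occ in (occ1, occ2):
            if random.random() < P_CAP:
                if random.random() < misid_rate:
                    ghost += 1
                    occ.add(("ghost", ghost))   # unique wrong identity
                else:
                    occ.add(("real", animal))
    recaptures = len(occ1 & occ2)
    return len(occ1) * len(occ2) / max(recaptures, 1)

n_hat_clean = lincoln_petersen(0.0)      # near the true N
n_hat_misid = lincoln_petersen(P_MISID)  # inflated by ghost identities
```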

  16. Overly persistent circulation in climate models contributes to overestimated frequency and duration of heat waves and cold spells

    Czech Academy of Sciences Publication Activity Database

    Plavcová, Eva; Kyselý, Jan

    2016-01-01

    Roč. 46, č. 9 (2016), s. 2805-2820 ISSN 0930-7575 R&D Projects: GA ČR GAP209/10/2265; GA MŠk 7AMB15AR001 EU Projects: European Commission(XE) 505539 - ENSEMBLES Program:FP6 Institutional support: RVO:68378289 Keywords : heat wave * cold spell * atmospheric circulation * persistence * regional climate models * Central Europe Subject RIV: DG - Athmosphere Sciences, Meteorology Impact factor: 4.146, year: 2016 http://link.springer.com/article/10.1007%2Fs00382-015-2733-8

  17. Sensitivity analysis of WRF model PBL schemes in simulating boundary-layer variables in southern Italy: An experimental campaign

    DEFF Research Database (Denmark)

    Avolio, E.; Federico, S.; Miglietta, M.

    2017-01-01

    The sensitivity of boundary layer variables to five (two non-local and three local) planetary boundary-layer (PBL) parameterization schemes, available in the Weather Research and Forecasting (WRF) mesoscale meteorological model, is evaluated in an experimental site in Calabria region (southern Italy)... the surface, where the model uncertainties are, usually, smaller than at the surface. A general anticlockwise rotation of the simulated flow with height is found at all levels. The mixing height is overestimated by all schemes and a possible role of the simulated sensible heat fluxes for this mismatching is investigated. On a single-case basis, significantly better results are obtained when the atmospheric conditions near the measurement site are dominated by synoptic forcing rather than by local circulations. From this study, it follows that the two first order non-local schemes, ACM2 and YSU, are the schemes...

  18. Overestimation of the earthquake hazard along the Himalaya: constraints in bracketing of medieval earthquakes from paleoseismic studies

    Science.gov (United States)

    Arora, Shreya; Malik, Javed N.

    2017-12-01

    The Himalaya is one of the most seismically active regions of the world. The occurrence of several large magnitude earthquakes, viz. the 1905 Kangra earthquake (Mw 7.8), the 1934 Bihar-Nepal earthquake (Mw 8.2), the 1950 Assam earthquake (Mw 8.4), the 2005 Kashmir earthquake (Mw 7.6), and the 2015 Gorkha earthquake (Mw 7.8), is testimony to ongoing tectonic activity. In the last few decades, tremendous efforts have been made along the Himalayan arc to understand the patterns of earthquake occurrences, size, extent, and return periods. Some of the large magnitude earthquakes produced surface rupture, while some remained blind. Furthermore, due to the incompleteness of the earthquake catalogue, very few events can be correlated with medieval earthquakes. Based on the existing paleoseismic data, it remains difficult to precisely determine the extent of surface rupture of these earthquakes, and also of those events that occurred during historic times. In this paper, we have compiled the paleo-seismological data and recalibrated the radiocarbon ages from the trenches excavated by previous workers along the entire Himalaya and compared the earthquake scenario with the past. Our studies suggest that there were multiple earthquake events with overlapping surface ruptures in small patches with an average rupture length of 300 km limiting Mw 7.8-8.0 for the Himalayan arc, rather than two or three giant earthquakes rupturing the whole front. It has been identified that the large magnitude Himalayan earthquakes, such as 1905 Kangra, 1934 Bihar-Nepal, and 1950 Assam, occurred within a time frame of 45 years. Now, if these events are dated, there is a high possibility that within the range of ±50 years, they may be considered as the remnant of one giant earthquake rupturing the entire Himalayan arc, therefore leading to an overestimation of the seismic hazard scenario in the Himalaya.

  19. Short-term light and leaf photosynthetic dynamics affect estimates of daily understory photosynthesis in four tree species.

    Science.gov (United States)

    Naumburg, Elke; Ellsworth, David S

    2002-04-01

    Instantaneous measurements of photosynthesis are often implicitly or explicitly scaled to longer time frames to provide an understanding of plant performance in a given environment. For plants growing in a forest understory, results from photosynthetic light response curves in conjunction with diurnal light data are frequently extrapolated to daily photosynthesis (A(day)), ignoring dynamic photosynthetic responses to light. In this study, we evaluated the importance of two factors on A(day) estimates: dynamic physiological responses to photosynthetic photon flux density (PPFD); and time-resolution of the PPFD data used for modeling. We used a dynamic photosynthesis model to investigate how these factors interact with species-specific photosynthetic traits, forest type, and sky conditions to affect the accuracy of A(day) predictions. Increasing time-averaging of PPFD significantly increased the relative overestimation of A(day) similarly for all study species because of the nonlinear response of photosynthesis to PPFD (15% with 5-min PPFD means). Depending on the light environment characteristics and species-specific dynamic responses to PPFD, understory tree A(day) can be overestimated by 6-42% for the study species by ignoring these dynamics. Although these overestimates decrease under cloudy conditions where direct sunlight and consequently understory sunfleck radiation is reduced, they are still significant. Within a species, overestimation of A(day) as a result of ignoring dynamic responses was highly dependent on daily sunfleck PPFD and the frequency and irradiance of sunflecks. Overall, large overestimates of A(day) in understory trees may cause misleading inferences concerning species growth and competition in forest understories with sunlight. We conclude that comparisons of A(day) among co-occurring understory species in deep shade will be enhanced by consideration of sunflecks by using high-resolution PPFD data and understanding the physiological
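
    The time-averaging effect described above follows from the concave (saturating) shape of the light response: by Jensen's inequality, photosynthesis evaluated at mean PPFD exceeds the mean of photosynthesis over fluctuating PPFD. A sketch with an illustrative rectangular-hyperbola response and an invented sunfleck sequence:

```python
# Jensen's-inequality demonstration: A(mean PPFD) > mean A(PPFD)
# for a saturating light response. Parameters are illustrative.
AMAX, ALPHA_Q = 20.0, 0.05   # asymptote (umol m-2 s-1), initial slope

def a_light(ppfd):
    """Rectangular-hyperbola light response."""
    return AMAX * ALPHA_Q * ppfd / (AMAX + ALPHA_Q * ppfd)

# One minute of understory light: deep shade with a 10 s sunfleck spike
ppfd_1s = [30.0] * 50 + [1500.0] * 10   # 60 one-second samples

a_true = sum(a_light(q) for q in ppfd_1s) / len(ppfd_1s)
a_from_mean = a_light(sum(ppfd_1s) / len(ppfd_1s))
overestimate = a_from_mean / a_true - 1.0
```

    Note this static sketch only captures the time-averaging bias; the dynamic induction responses discussed in the abstract add a further, separate overestimation.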

  20. Determining the energy performance of manually controlled solar shades: A stochastic model based co-simulation analysis

    International Nuclear Information System (INIS)

    Yao, Jian

    2014-01-01

    Highlights: • Driving factor for adjustment of manually controlled solar shades was determined. • A stochastic model for manual solar shades was constructed using Markov method. • Co-simulation with Energyplus was carried out in BCVTB. • External shading even manually controlled should be used prior to LOW-E windows. • Previous studies on manual solar shades may overestimate energy savings. - Abstract: Solar shading devices play a significant role in reducing building energy consumption and maintaining a comfortable indoor condition. In this paper, a typical office building with internal roller shades in the hot summer and cold winter zone was selected to determine the driving factor of the control behavior of manual solar shades. Solar radiation was determined as the major factor driving solar shading adjustment, based on field measurements and logit analysis, and a stochastic model for manually adjusted solar shades was then constructed using the Markov method. This model was used in BCVTB for further co-simulation with Energyplus to determine the impact of the control behavior of solar shades on energy performance. The results show that manually adjusted solar shades, whether located inside or outside, save substantially more energy than clear-pane windows, while only external shades perform better than regularly used LOW-E windows. Simulation also indicates that using an ideal assumption of solar shade adjustment, as most studies do in building simulation, may lead to an overestimation of energy savings by about 16–30%. There is a need to improve occupants’ actions on shades to more effectively respond to outdoor conditions in order to lower energy consumption, and this improvement can be easily achieved by using simple strategies as a guide to control manual solar shades
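
    A Markov chain of the kind described above can be sketched with two shade states and irradiance-dependent transition probabilities from a logistic (logit) function. The coefficients and the irradiance series are invented, not the fitted values from the study.

```python
# Two-state Markov sketch of manual shade adjustment driven by solar
# radiation. Logit coefficients are illustrative assumptions.
import math, random

def p_lower(irr_wm2):
    """Probability an occupant lowers a raised shade this timestep."""
    return 1.0 / (1.0 + math.exp(-(0.015 * irr_wm2 - 6.0)))

def p_raise(irr_wm2):
    """Probability an occupant raises a lowered shade this timestep."""
    return 1.0 / (1.0 + math.exp(-(2.0 - 0.01 * irr_wm2)))

random.seed(0)
irradiance = [50, 120, 300, 650, 800, 700, 400, 150]   # hourly W/m2
state, states = "up", []
for irr in irradiance:
    if state == "up" and random.random() < p_lower(irr):
        state = "down"
    elif state == "down" and random.random() < p_raise(irr):
        state = "up"
    states.append(state)
```

    Driving a building simulation with such a stochastic occupant, instead of an ideal always-optimal controller, is what removes the 16–30% overestimation the abstract reports.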

  1. Formation of organic aerosol in the Paris region during the MEGAPOLI summer campaign: evaluation of the volatility-basis-set approach within the CHIMERE model

    Directory of Open Access Journals (Sweden)

    Q. J. Zhang

    2013-06-01

Full Text Available Simulations with the chemistry transport model CHIMERE are compared to measurements performed during the MEGAPOLI (Megacities: Emissions, urban, regional and Global Atmospheric POLlution and climate effects, and Integrated tools for assessment and mitigation) summer campaign in the Greater Paris region in July 2009. The volatility-basis-set approach (VBS) is implemented into this model, taking into account the volatility of primary organic aerosol (POA) and the chemical aging of semi-volatile organic species. Organic aerosol is the main focus and is simulated with three different configurations with a modified treatment of POA volatility and modified secondary organic aerosol (SOA) formation schemes. In addition, two types of emission inventories are used as model input in order to test the uncertainty related to the emissions. Predictions of basic meteorological parameters and primary and secondary pollutant concentrations are evaluated, and four pollution regimes are defined according to the air mass origin. Primary pollutants are generally overestimated, while ozone is consistent with observations. Sulfate is generally overestimated, while ammonium and nitrate levels are well simulated with the refined emission data set. As expected, the simulation with non-volatile POA and a single-step SOA formation mechanism largely overestimates POA and underestimates SOA. Simulation of organic aerosol with the VBS approach taking into account the aging of semi-volatile organic compounds (SVOC) shows the best correlation with measurements. High-concentration events observed mostly after long-range transport are well reproduced by the model. Depending on the emission inventory used, simulated POA levels are either reasonable or underestimated, while SOA levels tend to be overestimated.
Several uncertainties related to the VBS scheme (POA volatility, SOA yields, the aging parameterization, to emission input data, and to simulated OH levels can be responsible for
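The equilibrium partitioning at the heart of the VBS approach can be sketched with a simple absorptive-partitioning solver. The bin masses and saturation concentrations below are placeholders, not the CHIMERE configuration.

```python
def vbs_partition(c_tot, c_star, c_oa_guess=1.0, iters=200):
    """Equilibrium gas/particle partitioning across volatility bins.
    The particle-phase fraction of bin i is xi_i = (1 + C*_i / C_OA)^-1,
    where C_OA is the total organic-aerosol mass; solved here by
    fixed-point iteration.
    c_tot:  total (gas + particle) mass per bin (ug/m3)
    c_star: saturation concentration per bin (ug/m3)"""
    c_oa = c_oa_guess
    for _ in range(iters):
        c_oa = sum(ct / (1.0 + cs / c_oa) for ct, cs in zip(c_tot, c_star))
    return c_oa

# Placeholder bins spanning 0.1-100 ug/m3 saturation concentration
c_oa = vbs_partition(c_tot=[2.0, 3.0, 4.0, 6.0],
                     c_star=[0.1, 1.0, 10.0, 100.0])
```

Aging of semi-volatile species is typically represented on top of this by shifting mass toward lower-volatility bins as oxidation proceeds.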

  2. Residual risk over-estimated

    International Nuclear Information System (INIS)

    Anon.

    1982-01-01

The way nuclear power plants are built practically excludes accidents with serious consequences. This is ensured by careful selection of materials, control of fabrication and regular retesting, as well as by several independently operating safety systems. But the remaining risk, a 'hypothetical' uncontrollable incident with catastrophic effects, is the main subject of the discussion on the peaceful utilization of nuclear power. This year's 'Annual Meeting on Nuclear Engineering' in Mannheim and the 'Reactor Safety Research' meeting in Cologne showed that risk studies so far have been too pessimistic. 'Best estimate' calculations suggest that core melt-down accidents occur only if almost all safety systems fail, that accidents develop much more slowly, and that the release of radioactive fission products is several orders of magnitude lower than assumed until now. (orig.) [de

  3. Significance of Bias Correction in Drought Frequency and Scenario Analysis Based on Climate Models

    Science.gov (United States)

    Aryal, Y.; Zhu, J.

    2015-12-01

Assessment of future drought characteristics is difficult as climate models usually have bias in simulating precipitation frequency and intensity. To overcome this limitation, output from climate models needs to be bias corrected according to the specific purpose of the application. In this study, we examine the significance of bias correction in the context of drought frequency and scenario analysis using output from climate models. In particular, we investigate the performance of three widely used bias correction techniques: (1) monthly bias correction (MBC), (2) nested bias correction (NBC), and (3) equidistant quantile mapping (EQM). The effect of bias correction on future scenarios of drought frequency is also analyzed. The characteristics of drought are investigated in terms of frequency and severity at nine representative locations in different climatic regions across the United States using regional climate model (RCM) output from the North American Regional Climate Change Assessment Program (NARCCAP). The Standardized Precipitation Index (SPI) is used as the means to compare and forecast drought characteristics at different timescales. Systematic biases in the RCM precipitation output are corrected against the National Centers for Environmental Prediction (NCEP) North American Regional Reanalysis (NARR) data. The results demonstrate that bias correction significantly decreases the RCM errors in reproducing drought frequency derived from the NARR data. Preserving the mean and standard deviation is essential for climate models in drought frequency analysis. RCM biases have both regional and timescale dependence. Different timescales of input precipitation in the bias correction show similar results. Drought frequency obtained from the RCM future (2040-2070) scenarios is compared with that from the historical simulations. The changes in drought characteristics occur in all climatic regions. The relative changes in drought frequency in future scenario in relation to
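As an illustration of the quantile-mapping family of corrections discussed above, a bare-bones empirical version can be sketched as follows. Note that the EQM method evaluated in the study additionally preserves the modelled change signal at each quantile; this sketch omits that step.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_future):
    """Empirical quantile mapping: each future model value is assigned the
    quantile it occupies in the historical model distribution, then replaced
    by the observed value at that same quantile."""
    q = np.interp(model_future,
                  np.sort(model_hist),
                  np.linspace(0.0, 1.0, len(model_hist)))
    return np.quantile(obs_hist, q)
```

A corrected precipitation series produced this way would then feed the SPI calculation at the chosen timescale.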

  4. The Alpine snow-albedo feedback in regional climate models

    Science.gov (United States)

    Winter, Kevin J.-P. M.; Kotlarski, Sven; Scherrer, Simon C.; Schär, Christoph

    2017-02-01

The effect of the snow-albedo feedback (SAF) on 2m temperatures and their future changes in the European Alps is investigated in the ENSEMBLES regional climate models (RCMs) with a focus on the spring season. A total of 14 re-analysis-driven RCM experiments covering the period 1961-2000 and 10 GCM-driven transient climate change projections for 1950-2099 are analysed. A positive springtime SAF is found in all RCMs, but the range of the diagnosed SAF is large. Results are compared against an observation-based SAF estimate. For some RCMs, values very close to this estimate are found; other models show a considerable overestimation of the SAF. Net shortwave radiation has the largest influence of all components of the energy balance on the diagnosed SAF and can partly explain its spatial variability. Model deficiencies in reproducing 2m temperatures above snow and ice and associated cold temperature biases at high elevations seem to contribute to a SAF overestimation in several RCMs. The diagnosed SAF in the observational period strongly influences the estimated SAF contribution to twenty-first-century temperature changes in the European Alps. This contribution is subject to a clear elevation dependency that is governed by the elevation-dependent change in the number of snow days. Elevations of maximum SAF contribution range from 1500 to 2000 m in spring and are found above 2000 m in summer. Here, a SAF contribution to the total simulated temperature change between 0 and 0.5 °C until 2099 (multi-model mean in spring: 0.26 °C) or 0 and 14 % (multi-model mean in spring: 8 %) is obtained for models showing a realistic SAF. These numbers represent a well-founded but only approximate estimate of the SAF contribution to future warming, and a remaining contribution of model-specific SAF misrepresentations cannot be ruled out.

  5. Capital adjustment cost and bias in income based dynamic panel models with fixed effects

    OpenAIRE

    Yoseph Yilma Getachew; Keshab Bhattarai; Parantap Basu

    2012-01-01

The fixed effects (FE) estimator of "conditional convergence" in income-based dynamic panel models could be biased downward when capital adjustment cost is present. Such a capital adjustment cost implies a rising marginal cost of investment, which could slow down convergence. The standard FE regression fails to take this capital adjustment cost into account and thus could overestimate the rate of convergence. Using a Ramsey model with long-run adjustment cost of capital, we characteriz...

  6. Non-stationarities significantly distort short-term spectral, symbolic and entropy heart rate variability indices

    International Nuclear Information System (INIS)

    Magagnin, Valentina; Bassani, Tito; Bari, Vlasta; Turiel, Maurizio; Porta, Alberto; Maestri, Roberto; Pinna, Gian Domenico

    2011-01-01

Autonomic regulation is non-invasively estimated from heart rate variability (HRV). Many methods utilized to assess autonomic regulation require stationarity of the HRV recordings. However, non-stationarities are frequently present even during well-controlled experiments, thus potentially biasing HRV indices. The aim of our study is to quantify the potential bias of spectral, symbolic and entropy HRV indices due to non-stationarities. We analyzed HRV series recorded in healthy subjects during uncontrolled daily life activities typical of 24 h Holter recordings and during predetermined levels of robotic-assisted treadmill-based physical exercise. A stationarity test checking the stability of the mean and variance over short HRV series (about 300 cardiac beats) was utilized to distinguish stationary periods from non-stationary ones. Spectral, symbolic and entropy indices evaluated solely over stationary periods were contrasted with those derived from all the HRV segments. When indices were calculated solely over stationary series, we found that (i) during both uncontrolled daily life activities and controlled physical exercise, the entropy-based complexity indices were significantly larger; (ii) during uncontrolled daily life activities, the spectral and symbolic indices linked to sympathetic modulation were significantly smaller and those associated with vagal modulation were significantly larger; (iii) the variance of spectral, symbolic and entropy rate indices was significantly larger during uncontrolled daily life activities but smaller during controlled physical exercise. The study suggests that non-stationarities increase the likelihood of overestimating the contribution of sympathetic control and affect the power of statistical tests utilized to discriminate conditions and/or groups.

  7. Constrained parameterisation of photosynthetic capacity causes significant increase of modelled tropical vegetation surface temperature

    Science.gov (United States)

    Kattge, J.; Knorr, W.; Raddatz, T.; Wirth, C.

    2009-04-01

Photosynthetic capacity is one of the most sensitive parameters of terrestrial biosphere models, and its representation in global-scale simulations has been severely hampered by a lack of systematic analyses using a sufficiently broad database. Due to its coupling to stomatal conductance, changes in the parameterisation of photosynthetic capacity may influence transpiration rates and vegetation surface temperature. Here, we provide a constrained parameterisation of photosynthetic capacity for different plant functional types (PFTs) in the context of the photosynthesis model proposed by Farquhar et al. (1980), based on a comprehensive compilation of leaf photosynthesis rates and leaf nitrogen content. Mean values of photosynthetic capacity were implemented into the coupled climate-vegetation model ECHAM5/JSBACH and modelled gross primary production (GPP) is compared to a compilation of independent observations on stand scale. Compared to the current standard parameterisation, the root-mean-squared difference between modelled and observed GPP is substantially reduced for almost all PFTs by the new parameterisation of photosynthetic capacity. We find a systematic depression of NUE (photosynthetic capacity divided by leaf nitrogen content) on certain tropical soils that are known to be deficient in phosphorus. Photosynthetic capacity of tropical trees derived by this study is substantially lower than the standard estimates currently used in terrestrial biosphere models. This causes a decrease of modelled GPP while it significantly increases modelled tropical vegetation surface temperatures, by up to 0.8°C. These results emphasise the importance of a constrained parameterisation of photosynthetic capacity not only for the carbon cycle, but also for the climate system.

  8. A critical review of predictive models for the onset of significant void in forced-convection subcooled boiling

    International Nuclear Information System (INIS)

    Dorra, H.; Lee, S.C.; Bankoff, S.G.

    1993-06-01

The predictive models for the onset of significant void (OSV) in forced-convection subcooled boiling are reviewed and compared with extensive data. Three analytical models and seven empirical correlations are considered in this review. These models and correlations are put onto a common basis and compared, on that basis, with a variety of data. Their ranges of validity and applicability under various operating conditions are discussed. The results show that the Saha-Zuber correlation appears to be the best model for predicting OSV in vertical subcooled boiling flow.

  9. Knowledge-fused differential dependency network models for detecting significant rewiring in biological networks.

    Science.gov (United States)

    Tian, Ye; Zhang, Bai; Hoffman, Eric P; Clarke, Robert; Zhang, Zhen; Shih, Ie-Ming; Xuan, Jianhua; Herrington, David M; Wang, Yue

    2014-07-24

Modeling biological networks serves as both a major goal and an effective tool of systems biology in studying mechanisms that orchestrate the activities of gene products in cells. Biological networks are context-specific and dynamic in nature. To systematically characterize the selectively activated regulatory components and mechanisms, modeling tools must be able to effectively distinguish significant rewiring from random background fluctuations. While differential networks cannot be constructed by existing knowledge alone, novel incorporation of prior knowledge into data-driven approaches can improve the robustness and biological relevance of network inference. However, the major unresolved roadblocks include: a large solution space but a small sample size; highly complex networks; imperfect prior knowledge; missing significance assessment; and heuristic structural parameter learning. To address these challenges, we formulated the inference of differential dependency networks that incorporate both conditional data and prior knowledge as a convex optimization problem, and developed an efficient learning algorithm to jointly infer the conserved biological network and the significant rewiring across different conditions. We used a novel sampling scheme to estimate the expected error rate due to "random" knowledge. Based on that scheme, we developed a strategy that fully exploits the benefit of this data-knowledge integrated approach. We demonstrated and validated the principle and performance of our method using synthetic datasets. We then applied our method to yeast cell line and breast cancer microarray data and obtained biologically plausible results. The open-source R software package and the experimental data are freely available at http://www.cbil.ece.vt.edu/software.htm.
Experiments on both synthetic and real data demonstrate the effectiveness of the knowledge-fused differential dependency network in revealing the statistically significant rewiring in biological

  10. Exciton model and quantum molecular dynamics in inclusive nucleon-induced reactions

    International Nuclear Information System (INIS)

    Bevilacqua, Riccardo; Pomp, Stephan; Watanabe, Yukinobu

    2011-01-01

We compared inclusive nucleon-induced reactions with two-component exciton model calculations and Kalbach systematics; these successfully describe the production of protons but fail to reproduce the emission of composite particles, generally overestimating it. We show that the Kalbach phenomenological model needs to be revised for energies above 90 MeV; agreement improves when a new energy dependence is introduced for the direct-like mechanisms described by the Kalbach model. Our revised model calculations suggest multiple preequilibrium emission of light charged particles. We have also compared recent neutron-induced data with quantum molecular dynamics (QMD) calculations complemented by the surface coalescence model (SCM); we observed that the SCM improves the predictive power of QMD. (author)

  11. Exploring the MACH Model's Potential as a Metacognitive Tool to Help Undergraduate Students Monitor Their Explanations of Biological Mechanisms

    Science.gov (United States)

    Trujillo, Caleb M.; Anderson, Trevor R.; Pelaez, Nancy J.

    2016-01-01

    When undergraduate biology students learn to explain biological mechanisms, they face many challenges and may overestimate their understanding of living systems. Previously, we developed the MACH model of four components used by expert biologists to explain mechanisms: Methods, Analogies, Context, and How. This study explores the implementation of…

  12. Power penalties for multi-level PAM modulation formats at arbitrary bit error rates

    Science.gov (United States)

    Kaliteevskiy, Nikolay A.; Wood, William A.; Downie, John D.; Hurley, Jason; Sterlingov, Petr

    2016-03-01

    There is considerable interest in combining multi-level pulsed amplitude modulation formats (PAM-L) and forward error correction (FEC) in next-generation, short-range optical communications links for increased capacity. In this paper we derive new formulas for the optical power penalties due to modulation format complexity relative to PAM-2 and due to inter-symbol interference (ISI). We show that these penalties depend on the required system bit-error rate (BER) and that the conventional formulas overestimate link penalties. Our corrections to the standard formulas are very small at conventional BER levels (typically 1×10-12) but become significant at the higher BER levels enabled by FEC technology, especially for signal distortions due to ISI. The standard formula for format complexity, P = 10log(L-1), is shown to overestimate the actual penalty for PAM-4 and PAM-8 by approximately 0.1 and 0.25 dB respectively at 1×10-3 BER. Then we extend the well-known PAM-2 ISI penalty estimation formula from the IEEE 802.3 standard 10G link modeling spreadsheet to the large BER case and generalize it for arbitrary PAM-L formats. To demonstrate and verify the BER dependence of the ISI penalty, a set of PAM-2 experiments and Monte-Carlo modeling simulations are reported. The experimental results and simulations confirm that the conventional formulas can significantly overestimate ISI penalties at relatively high BER levels. In the experiments, overestimates up to 2 dB are observed at 1×10-3 BER.
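The BER dependence of the format-complexity penalty reported above can be reproduced with a short calculation, assuming Gray coding and equally spaced levels (a textbook PAM-L BER model, not the paper's full derivation):

```python
import math

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def Q_inv(p):
    """Invert Q by bisection (Q is monotonically decreasing)."""
    lo, hi = 0.0, 20.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Q(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def complexity_penalty_db(L, ber):
    """Optical power penalty of PAM-L relative to PAM-2 at a target BER,
    for Gray-coded, equally spaced levels: 10*log10((L-1) * xL / x2),
    where xL is the Q-function argument PAM-L needs to hit that BER."""
    xL = Q_inv(ber * L * math.log2(L) / (2.0 * (L - 1)))
    x2 = Q_inv(ber)
    return 10.0 * math.log10((L - 1) * xL / x2)

def conventional_penalty_db(L):
    """BER-independent textbook formula P = 10*log10(L-1)."""
    return 10.0 * math.log10(L - 1)

# At BER = 1e-3 the conventional formula overestimates the penalty by
# roughly 0.1 dB for PAM-4 and 0.25 dB for PAM-8, as stated in the abstract.
d4 = conventional_penalty_db(4) - complexity_penalty_db(4, 1e-3)
d8 = conventional_penalty_db(8) - complexity_penalty_db(8, 1e-3)
```

At the conventional BER of 1e-12 both formulas nearly coincide, which is why the discrepancy only matters in FEC-enabled links operating at high raw BER.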

  13. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, S. K.; Kang, G. B.; Ko, W. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycle in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because nuclear fuel cycle costs may vary from country to country, and the estimated cost usually prevails over the real cost, any existing uncertainty needs to be removed where possible to produce reliable cost information when evaluating economic efficiency. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (high-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising in nuclear fuel cycle cost evaluation from the viewpoint of the cost estimation model. Compared with a model that applies the same discount rate throughout, the nuclear fuel cycle cost of a model with different discount rates is lower, because the generation quantity in the denominator of the cost equation is also discounted. Namely, if the discount rate is reduced for the back-end processes of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that, overall, the same-discount-rate model overestimates the cost compared with the different-discount-rate model.
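The role of the discounted generation quantity in the denominator can be seen in a generic levelized-unit-cost calculation. This is a standard textbook formulation, not the paper's specific fuel-cycle model:

```python
def levelized_unit_cost(costs, energy, rate):
    """Levelized unit cost: the discounted cost stream divided by the
    discounted generation stream (a single discount rate is used here
    for both numerator and denominator)."""
    num = sum(c / (1.0 + rate) ** t for t, c in enumerate(costs))
    den = sum(e / (1.0 + rate) ** t for t, e in enumerate(energy))
    return num / den

# Up-front cost with generation arriving later: discounting shrinks the
# generation denominator, so the unit cost changes with the chosen rate.
flat = levelized_unit_cost([200.0, 0.0, 0.0], [0.0, 10.0, 10.0], 0.0)
disc = levelized_unit_cost([200.0, 0.0, 0.0], [0.0, 10.0, 10.0], 0.05)
```

Applying stage-dependent rates, as the paper's different-discount-rate model does, amounts to using different `rate` values for front-end and back-end cost terms.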

  14. An Adjusted Discount Rate Model for Fuel Cycle Cost Estimation

    International Nuclear Information System (INIS)

    Kim, S. K.; Kang, G. B.; Ko, W. I.

    2013-01-01

Owing to the diverse nuclear fuel cycle options available, including direct disposal, it is necessary to select the optimum nuclear fuel cycle in consideration of the political and social environments as well as the technical stability and economic efficiency of each country. Economic efficiency is therefore one of the significant evaluation standards. In particular, because nuclear fuel cycle costs may vary from country to country, and the estimated cost usually prevails over the real cost, any existing uncertainty needs to be removed where possible to produce reliable cost information when evaluating economic efficiency. Many countries still do not have reprocessing facilities, and no globally commercialized HLW (high-level waste) repository is available. A nuclear fuel cycle cost estimation model is therefore inevitably subject to uncertainty. This paper analyzes the uncertainty arising in nuclear fuel cycle cost evaluation from the viewpoint of the cost estimation model. Compared with a model that applies the same discount rate throughout, the nuclear fuel cycle cost of a model with different discount rates is lower, because the generation quantity in the denominator of the cost equation is also discounted. Namely, if the discount rate is reduced for the back-end processes of the nuclear fuel cycle, the nuclear fuel cycle cost is also reduced. Further, it was found that, overall, the same-discount-rate model overestimates the cost compared with the different-discount-rate model.

  15. On the significance of the noise model for the performance of a linear MPC in closed-loop operation

    DEFF Research Database (Denmark)

    Hagdrup, Morten; Boiroux, Dimitri; Mahmoudi, Zeinab

    2016-01-01

This paper discusses the significance of the noise model for the performance of a Model Predictive Controller operating in closed loop. The process model is parametrized as a continuous-time (CT) model and the relevant sampled-data filtering and control algorithms are developed. Using CT...... models typically means fewer parameters to identify. Systematic tuning of such controllers is discussed. Simulation studies are conducted for linear time-invariant systems showing that choosing a noise model of low order is beneficial for closed-loop performance. (C) 2016, IFAC (International Federation...

  16. Phasic firing in vasopressin cells: understanding its functional significance through computational models.

    Directory of Open Access Journals (Sweden)

    Duncan J MacGregor

Full Text Available Vasopressin neurons, responding to input generated by osmotic pressure, use an intrinsic mechanism to shift from slow irregular firing to a distinct phasic pattern, consisting of long bursts and silences lasting tens of seconds. With increased input, bursts lengthen, eventually shifting to continuous firing. The phasic activity remains asynchronous across the cells and is not reflected in the population output signal. Here we have used a computational vasopressin neuron model to investigate the functional significance of the phasic firing pattern. We generated a concise model of the synaptic input driven spike firing mechanism that gives a close quantitative match to vasopressin neuron spike activity recorded in vivo, tested against endogenous activity and experimental interventions. The integrate-and-fire based model provides a simple physiological explanation of the phasic firing mechanism involving an activity-dependent slow depolarising afterpotential (DAP) generated by a calcium-inactivated potassium leak current. This is modulated by the slower, opposing, action of activity-dependent dendritic dynorphin release, which inactivates the DAP, the opposing effects generating successive periods of bursting and silence. Model cells are not spontaneously active, but fire when perturbed by random perturbations mimicking synaptic input. We constructed one population of such phasic neurons, and another population of similar cells but which lacked the ability to fire phasically. We then studied how these two populations differed in the way that they encoded changes in afferent inputs. By comparison with the non-phasic population, the phasic population responds linearly to increases in tonic synaptic input.
Non-phasic cells respond to transient elevations in synaptic input in a way that strongly depends on background activity levels, whereas phasic cells respond in a way that is independent of background levels and show a similarly strong linearization of the response
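The DAP/dynorphin tug-of-war described above can be caricatured in a toy integrate-and-fire simulation. All constants here are illustrative placeholders, not the fitted parameters of the published model:

```python
import random

def simulate_spikes(n_steps=50_000, dt=0.001, seed=2):
    """Toy integrate-and-fire cell: each spike increments a depolarising
    afterpotential (DAP) and, more weakly, a much slower opposing variable
    standing in for dendritic dynorphin; the interplay of the two slow
    variables can alternate periods of bursting and silence."""
    rng = random.Random(seed)
    v, dap, dyn = -62.0, 0.0, 0.0
    spike_times = []
    for i in range(n_steps):
        drive = 8.0 * (dap - dyn)               # net slow depolarisation (mV)
        v += (-(v + 62.0) + drive) * dt / 0.02  # leaky integration to rest
        v += 2.0 * rng.gauss(0.0, 1.0)          # noise mimicking synaptic input
        dap -= dap * dt / 1.0                   # DAP decay, tau ~ 1 s
        dyn -= dyn * dt / 10.0                  # dynorphin decay, tau ~ 10 s
        if v > -50.0:                           # threshold crossing = spike
            spike_times.append(i * dt)
            v = -62.0                           # reset
            dap += 0.2                          # spike boosts the DAP...
            dyn += 0.03                         # ...and, more slowly, dynorphin
    return spike_times

spikes = simulate_spikes()
```

Because the cell is silent without the noise term, this sketch also reflects the model's property that cells fire only when perturbed by input.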

  17. A dependent stress-strength interference model based on mixed copula function

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Jian Xiong; An, Zong Wen; Liu, Bo [School of Mechatronics Engineering, Lanzhou University of Technology, Lanzhou (China)

    2016-10-15

In the traditional Stress-strength interference (SSI) model, stress and strength must satisfy the basic assumption of mutual independence. However, complex dependence between stress and strength exists in practical engineering. To evaluate structural reliability in the case where stress and strength are dependent, a mixed copula function is introduced into a new dependent SSI model. This model can fully characterize the dependence between stress and strength. The residual sum of squares method and a genetic algorithm are used to estimate the unknown parameters of the model. Finally, the validity of the proposed model is demonstrated via a practical case. Results show that the traditional SSI model, which ignores the dependence between stress and strength, is more likely to overestimate product reliability than the new dependent SSI model.
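The effect of dependence on an SSI reliability estimate can be illustrated by Monte-Carlo simulation. The study uses a mixed copula; a single Gaussian copula and the lognormal margin parameters below are illustrative simplifications, not the paper's case data:

```python
import numpy as np

def reliability(rho, n=200_000, seed=0):
    """Monte-Carlo estimate of R = P(strength > stress) when a Gaussian
    copula with correlation rho links lognormal stress and strength
    margins (placeholder parameters)."""
    rng = np.random.default_rng(seed)
    z = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
    stress = np.exp(4.0 + 0.3 * z[:, 0])
    strength = np.exp(4.5 + 0.3 * z[:, 1])
    return float(np.mean(strength > stress))

r_indep = reliability(0.0)   # traditional SSI independence assumption
r_dep = reliability(-0.5)    # a negatively dependent case
```

With these placeholder margins the independent case yields a higher reliability than the negatively dependent case, illustrating how ignoring dependence can overestimate reliability; the sign and size of the gap depend on the actual copula and margins.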

  18. The improvement of MOSFET prediction in space environments using the conversion model

    International Nuclear Information System (INIS)

    Shvetzov-Shilovsky, I.N.; Cherepko, S.V.; Pershenkov, V.S.

    1994-01-01

The modeling of MOS device response to low-dose-rate irradiation has been performed. The existing conversion model, based on a linear dependence between positive oxide charge annealing and interface trap buildup, accurately predicts the long-time response of MOSFETs with relatively thick oxides but overestimates the threshold voltage shift for radiation-hardened MOSFETs with thin oxides. To explain this fact, the authors investigate the impulse response function for the threshold voltage. A revised model, which incorporates the different energy levels of hole traps in the oxide, improves the fit between the model and data and explains the dependence of the fitting parameters on the oxide field.

  19. Reducing equifinality using isotopes in a process-based stream nitrogen model highlights the flux of algal nitrogen from agricultural streams

    Science.gov (United States)

    Ford, William I.; Fox, James F.; Pollock, Erik

    2017-08-01

    The fate of bioavailable nitrogen species transported through agricultural landscapes remains highly uncertain given complexities of measuring fluxes impacting the fluvial N cycle. We present and test a new numerical model named Technology for Removable Annual Nitrogen in Streams For Ecosystem Restoration (TRANSFER), which aims to reduce model uncertainty due to erroneous parameterization, i.e., equifinality, in stream nitrogen cycle assessment and quantify the significance of transient and permanent removal pathways. TRANSFER couples nitrogen elemental and stable isotope mass-balance equations with existing hydrologic, hydraulic, sediment transport, algal biomass, and sediment organic matter mass-balance subroutines and a robust GLUE-like uncertainty analysis. We test the model in an agriculturally impacted, third-order stream reach located in the Bluegrass Region of Central Kentucky. Results of the multiobjective model evaluation for the model application highlight the ability of sediment nitrogen fingerprints including elemental concentrations and stable N isotope signatures to reduce equifinality of the stream N model. Advancements in the numerical simulations allow for illumination of the significance of algal sloughing fluxes for the first time in relation to denitrification. Broadly, model estimates suggest that denitrification is slightly greater than algal N sloughing (10.7% and 6.3% of dissolved N load on average), highlighting the potential for overestimation of denitrification by 37%. We highlight the significance of the transient N pool given the potential for the N store to be regenerated to the water column in downstream reaches, leading to harmful and nuisance algal bloom development.

  20. MRI estimation of total renal volume demonstrates significant association with healthy donor weight

    International Nuclear Information System (INIS)

    Cohen, Emil I.; Kelly, Sarah A.; Edye, Michael; Mitty, Harold A.; Bromberg, Jonathan S.

    2009-01-01

Purpose: The purpose of this study was to correlate total renal volume (TRV) calculations, obtained through the voxel-count method and the ellipsoid formula, with various physical characteristics. Materials and methods: MRI reports and physical examinations from 210 healthy kidney donors (420 kidneys), in whom renal volumes were obtained using the voxel-count method, were retrospectively reviewed. These values, along with ones obtained through the more traditional ellipsoid-formula method, were correlated with subject height, body weight, body mass index (BMI), and age. Results: TRV correlated strongly with body weight (r = 0.7) and to a lesser degree with height, age, or BMI (r = 0.5, -0.2, 0.3, respectively). The left kidney volume was greater than the right on average (p < 0.001). The ellipsoid-formula method over-estimated renal volume by 17% on average, which was significant (p < 0.001). Conclusions: Body weight was the physical characteristic that demonstrated the strongest correlation with renal volume in healthy subjects. Given this finding, a formula was derived for estimating the TRV of a given patient based on his or her weight: TRV = 2.96 x weight (kg) + 113 ± 64.
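The weight-based estimate above, and the ellipsoid formula it is contrasted with, are straightforward to express. The prolate-ellipsoid form (pi/6 × L × W × D) is the usual kidney approximation and is assumed here rather than quoted from the record:

```python
import math

def estimate_trv_ml(weight_kg):
    """Total renal volume (mL) from donor body weight, per the study's fit:
    TRV = 2.96 * weight + 113, with a residual of about +/- 64 mL."""
    return 2.96 * weight_kg + 113.0

def ellipsoid_volume_ml(length_cm, width_cm, depth_cm):
    """Single-kidney volume from the prolate-ellipsoid formula; the study
    found this approach overestimates voxel-count volume by ~17% on average."""
    return math.pi / 6.0 * length_cm * width_cm * depth_cm

trv = estimate_trv_ml(70.0)  # ~320 mL expected for a 70 kg donor
```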

  1. Radiative effects of interannually varying vs. interannually invariant aerosol emissions from fires

    Directory of Open Access Journals (Sweden)

    B. S. Grandey

    2016-11-01

Full Text Available Open-burning fires play an important role in the earth's climate system. In addition to contributing a substantial fraction of global emissions of carbon dioxide, they are a major source of atmospheric aerosols containing organic carbon, black carbon, and sulfate. These “fire aerosols” can influence the climate via direct and indirect radiative effects. In this study, we investigate these radiative effects and the hydrological fast response using the Community Atmosphere Model version 5 (CAM5). Emissions of fire aerosols exert a global mean net radiative effect of −1.0 W m−2, dominated by the cloud shortwave response to organic carbon aerosol. The net radiative effect is particularly strong over boreal regions. Conventionally, many climate modelling studies have used an interannually invariant monthly climatology of emissions of fire aerosols. However, by comparing simulations using interannually varying emissions vs. interannually invariant emissions, we find that ignoring the interannual variability of the emissions can lead to systematic overestimation of the strength of the net radiative effect of the fire aerosols. Globally, the overestimation is +23 % (−0.2 W m−2). Regionally, the overestimation can be substantially larger. For example, over Australia and New Zealand the overestimation is +58 % (−1.2 W m−2), while over Boreal Asia the overestimation is +43 % (−1.9 W m−2). The systematic overestimation of the net radiative effect of the fire aerosols is likely due to the non-linear influence of aerosols on clouds. However, ignoring interannual variability in the emissions does not appear to significantly impact the hydrological fast response. In order to improve understanding of the climate system, we need to take into account the interannual variability of aerosol emissions.
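The attribution to non-linearity can be illustrated with a saturating toy forcing function: by Jensen's inequality, evaluating a concave response at the mean emission overstates the mean of the response over varying emissions. The functional form and numbers below are purely illustrative, not the CAM5 aerosol-cloud physics:

```python
import math

def forcing_wm2(emission, scale=1.0, e0=1.0):
    """Toy saturating (concave-in-emission) aerosol radiative effect."""
    return -scale * math.log1p(emission / e0)

emissions = [0.2, 0.5, 3.0, 0.3, 6.0]  # interannually varying (arbitrary units)
varying = sum(forcing_wm2(e) for e in emissions) / len(emissions)
invariant = forcing_wm2(sum(emissions) / len(emissions))  # climatology first
```

Here `invariant` is more negative than `varying`: averaging the emissions before applying the saturating response overestimates the strength of the effect, mirroring the sign of the bias reported in the record.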

  2. Evaluation of uncertainty in dam-break analysis resulting from dynamic representation of a reservoir; Evaluation de l'incertitude due au modele de representation du reservoir dans les analyses de rupture de barrage

    Energy Technology Data Exchange (ETDEWEB)

    Tchamen, G.W.; Gaucher, J. [Hydro-Quebec Production, Montreal, PQ (Canada). Direction Barrage et Environnement, Unite Barrages et Hydraulique

    2010-08-15

    Owners and operators of high capacity dams in Quebec have a legal obligation to conduct dam break analysis for each of their dams in order to ensure public safety. This paper described traditional hydraulic methodologies and models used to perform dam break analyses. In particular, it examined the influence of the reservoir drawdown submodel on the numerical results of a dam break analysis. Numerical techniques from the field of fluid mechanics and aerodynamics have provided the basis for developing effective hydrodynamic codes that reduce the level of uncertainties associated with dam-break analysis. A static representation that considers the storage curve was compared with a dynamic representation based on Saint-Venant equations and the real bathymetry of the reservoir. The comparison was based on breach of reservoir, maximum water level, flooded area, and wave arrival time in the valley downstream. The study showed that the greatest difference in attained water level was in the vicinity of the dam, and the difference decreased as the distance from the reservoir increased. The analysis showed that the static representation overestimated the maximum depth and inundated area by as much as 20 percent. This overestimation can be reduced by 30 to 40 percent by using dynamic representation. A dynamic model based on a synthetic trapezoidal reconstruction of the storage curve was used, given the lack of bathymetric data for the reservoir. It was concluded that this model can significantly reduce the uncertainty associated with the static model. 7 refs., 9 tabs., 7 figs.

  3. Inferring Muscle-Tendon Unit Power from Ankle Joint Power during the Push-Off Phase of Human Walking: Insights from a Multiarticular EMG-Driven Model.

    Science.gov (United States)

    Honert, Eric C; Zelik, Karl E

    2016-01-01

    Inverse dynamics joint kinetics are often used to infer contributions from underlying groups of muscle-tendon units (MTUs). However, such interpretations are confounded by multiarticular (multi-joint) musculature, which can cause inverse dynamics to over- or under-estimate net MTU power. Misestimation of MTU power could lead to incorrect scientific conclusions, or to empirical estimates that misguide musculoskeletal simulations, assistive device designs, or clinical interventions. The objective of this study was to investigate the degree to which ankle joint power overestimates net plantarflexor MTU power during the Push-off phase of walking, due to the behavior of the flexor digitorum and hallucis longus (FDHL), multiarticular MTUs crossing the ankle and metatarsophalangeal (toe) joints. We performed a gait analysis study on six healthy participants, recording ground reaction forces, kinematics, and electromyography (EMG). Empirical data were input into an EMG-driven musculoskeletal model to estimate ankle power. This model enabled us to parse contributions from mono- and multi-articular MTUs, and required only one scaling and one time-delay factor for each subject and speed, which were solved for based on empirical data. Net plantarflexor MTU power was computed by the model and quantitatively compared to inverse dynamics ankle power. The EMG-driven model was able to reproduce inverse dynamics ankle power across a range of gait speeds (R² ≥ 0.97), while also providing MTU-specific power estimates. We found that FDHL dynamics caused ankle power to slightly overestimate net plantarflexor MTU power, but only by ~2-7%. During Push-off, FDHL MTU dynamics do not substantially confound the inference of net plantarflexor MTU power from inverse dynamics ankle power. However, other methodological limitations may cause inverse dynamics to overestimate net MTU power; for instance, due to rigid-body foot assumptions. Moving forward, the EMG-driven modeling approach presented …
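    The abstract states that the EMG-driven model needs only one scaling factor and one time delay per subject and speed. A minimal sketch of that two-parameter mapping from an EMG envelope to a muscle activation signal is shown below; the padding scheme and all names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def emg_to_activation(emg_envelope, gain, delay):
    """Apply one scaling factor (gain) and one time delay (in samples) to an
    EMG envelope, the two per-subject, per-speed free parameters described
    in the abstract. Padding with the first sample is an assumption."""
    emg_envelope = np.asarray(emg_envelope, dtype=float)
    if delay > 0:
        shifted = np.concatenate([np.full(delay, emg_envelope[0]),
                                  emg_envelope[:-delay]])
    else:
        shifted = emg_envelope.copy()
    return gain * shifted
```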

  4. Self-reported everyday memory and depression in patients with multiple sclerosis.

    Science.gov (United States)

    Bruce, Jared M; Arnett, Peter A

    2004-04-01

    Depression and memory difficulties are among the most common complaints voiced by patients with multiple sclerosis (MS). Nevertheless, little is known about how depression might affect patients' perceptions of their memory difficulties. The present investigation was designed to explore this issue. Results supported a model that integrates aspects of Beck's theory of depression and the concept of depressive realism. Consistent with the depressive realism literature, nondepressed MS patients significantly overestimated their everyday memory compared with their actual performance on verbal memory and attention/concentration indices, whereas moderately depressed patients' everyday memory ratings mirrored their actual neuropsychological performance. Supporting Beck's negative cognitive schema notion, mildly depressed patients significantly overestimated their memory difficulties. Implications for the treatment of memory problems among MS patients are discussed.

  5. Potential overestimation of HPV vaccine impact due to unmasking of non-vaccine types: quantification using a multi-type mathematical model.

    Science.gov (United States)

    Choi, Yoon Hong; Chapman, Ruth; Gay, Nigel; Jit, Mark

    2012-05-14

    Estimates of human papillomavirus (HPV) vaccine impact in clinical trials and modelling studies rely on DNA tests of cytology or biopsy specimens to determine the HPV type responsible for a cervical lesion. DNA of several oncogenic HPV types may be detectable in a specimen. However, only one type may be responsible for a particular cervical lesion. Misattribution of the causal HPV type for a particular abnormality may give rise to an apparent increase in disease due to non-vaccine HPV types following vaccination ("unmasking"). To investigate the existence and magnitude of unmasking, we analysed data from residual cytology and biopsy specimens in English women aged 20-64 years using a stochastic type-specific individual-based model of HPV infection, progression and disease. The model parameters were calibrated to data on the prevalence of HPV DNA and cytological lesions of different grades, and used to assign causal HPV types to cervical lesions. The difference between the prevalence of all disease due to non-vaccine HPV types, and disease due to non-vaccine HPV types in the absence of vaccine HPV types, was then estimated. There could be an apparent maximum increase of 3-10% in long-term cervical cancer incidence due to non-vaccine HPV types following vaccination. Unmasking may be an important phenomenon in HPV post-vaccination epidemiology, in the same way that has been observed following pneumococcal conjugate vaccination. Copyright © 2012 Elsevier Ltd. All rights reserved.

  6. Using global magnetospheric models for simulation and interpretation of Swarm external field measurements

    DEFF Research Database (Denmark)

    Moretto, T.; Vennerstrøm, Susanne; Olsen, Nils

    2006-01-01

    … simulated external contributions relevant for internal field modeling. These have proven very valuable for the design and planning of the upcoming multi-satellite Swarm mission. In addition, a real event simulation was carried out for a moderately active time interval when observations from the Ørsted … it consistently underestimates the dayside region 2 currents and overestimates the horizontal ionospheric closure currents in the dayside polar cap. Furthermore, with this example we illustrate the great benefit of utilizing the global model for the interpretation of Swarm external field observations and, likewise, the potential of using Swarm measurements to test and improve the global model.

  7. Biological variability in biomechanical engineering research: Significance and meta-analysis of current modeling practices.

    Science.gov (United States)

    Cook, Douglas; Julias, Margaret; Nauman, Eric

    2014-04-11

    Biological systems are characterized by high levels of variability, which can affect the results of biomechanical analyses. As a review of this topic, we first surveyed levels of variation in materials relevant to biomechanics, and compared these values to standard engineered materials. As expected, we found significantly higher levels of variation in biological materials. A meta-analysis was then performed based on thorough reviews of 60 research studies from the field of biomechanics to assess the methods and manner in which biological variation is currently handled in our field. The results of our meta-analysis revealed interesting trends in modeling practices, and suggest a need for more biomechanical studies that fully incorporate biological variation in biomechanical models and analyses. Finally, we provide case-study examples of how biological variability may provide valuable insights or lead to surprising results. The purpose of this study is to promote the advancement of biomechanics research by encouraging broader treatment of biological variability in biomechanical modeling. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. Scoping review identifies significant number of knowledge translation theories, models and frameworks with limited use.

    Science.gov (United States)

    Strifler, Lisa; Cardoso, Roberta; McGowan, Jessie; Cogo, Elise; Nincic, Vera; Khan, Paul A; Scott, Alistair; Ghassemi, Marco; MacDonald, Heather; Lai, Yonda; Treister, Victoria; Tricco, Andrea C; Straus, Sharon E

    2018-04-13

    To conduct a scoping review of knowledge translation (KT) theories, models and frameworks that have been used to guide dissemination or implementation of evidence-based interventions targeted to prevention and/or management of cancer or other chronic diseases. We used a comprehensive multistage search process from 2000-2016, which included traditional bibliographic database searching, searching using names of theories, models and frameworks, and cited reference searching. Two reviewers independently screened the literature and abstracted data. We found 596 studies reporting on the use of 159 KT theories, models or frameworks. A majority (87%) of the identified theories, models or frameworks were used in five or fewer studies, with 60% used once. The theories, models and frameworks were most commonly used to inform planning/design, implementation and evaluation activities, and least commonly used to inform dissemination and sustainability/scalability activities. Twenty-six were used across the full implementation spectrum (from planning/design to sustainability/scalability) either within or across studies. All were used for at least individual-level behavior change, while 48% were used for organization-level, 33% for community-level and 17% for system-level change. We found a significant number of KT theories, models and frameworks with a limited evidence base describing their use. Copyright © 2018. Published by Elsevier Inc.

  9. A new model using routinely available clinical parameters to predict significant liver fibrosis in chronic hepatitis B.

    Directory of Open Access Journals (Sweden)

    Wai-Kay Seto

    Full Text Available OBJECTIVE: We developed a predictive model for significant fibrosis in chronic hepatitis B (CHB) based on routinely available clinical parameters. METHODS: 237 treatment-naïve CHB patients [58.4% hepatitis B e antigen (HBeAg)-positive] who had undergone liver biopsy were randomly divided into two cohorts: training group (n = 108) and validation group (n = 129). Liver histology was assessed for fibrosis. All common demographics, viral serology, viral load and liver biochemistry were analyzed. RESULTS: Based on 12 available clinical parameters (age, sex, HBeAg status, HBV DNA, platelet, albumin, bilirubin, ALT, AST, ALP, GGT and AFP), a model to predict significant liver fibrosis (Ishak fibrosis score ≥3) was derived using the five best parameters (age, ALP, AST, AFP and platelet). Using the formula log(index+1) = 0.025 + 0.0031(age) + 0.1483 log(ALP) + 0.004 log(AST) + 0.0908 log(AFP+1) − 0.028 log(platelet), the PAPAS (Platelet/Age/Phosphatase/AFP/AST) index predicts significant fibrosis with an area under the receiver operating characteristic (AUROC) curve of 0.776 [0.797 for patients with ALT <2× upper limit of normal (ULN)]. The negative predictive value to exclude significant fibrosis was 88.4%. This predictive power is superior to other non-invasive models using common parameters, including the AST/platelet/GGT/AFP (APGA) index, AST/platelet ratio index (APRI), and the FIB-4 index (AUROC of 0.757, 0.708 and 0.723 respectively). Using the PAPAS index, 67.5% of liver biopsies for patients being considered for treatment with ALT <2×ULN could be avoided. CONCLUSION: The PAPAS index can predict and exclude significant fibrosis, and may reduce the need for liver biopsy in CHB patients.
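    The PAPAS formula in the abstract can be coded directly. The abstract does not state the logarithm base; base 10 is assumed here, and the function name is illustrative. This is a sketch of the published formula, not a validated clinical tool.

```python
import math

def papas_index(age, alp, ast, afp, platelet):
    """PAPAS index from the abstract's formula:
    log(index+1) = 0.025 + 0.0031*age + 0.1483*log(ALP) + 0.004*log(AST)
                 + 0.0908*log(AFP+1) - 0.028*log(platelet)
    Base-10 logarithms are an assumption; the abstract does not specify."""
    y = (0.025
         + 0.0031 * age
         + 0.1483 * math.log10(alp)
         + 0.004 * math.log10(ast)
         + 0.0908 * math.log10(afp + 1)
         - 0.028 * math.log10(platelet))
    return 10 ** y - 1
```

    Note the index increases with age (positive coefficient) and decreases with platelet count, consistent with the clinical direction of each predictor.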

  10. Modeling the impact of prostate edema on LDR brachytherapy: a Monte Carlo dosimetry study based on a 3D biphasic finite element biomechanical model

    Science.gov (United States)

    Mountris, K. A.; Bert, J.; Noailly, J.; Rodriguez Aguilera, A.; Valeri, A.; Pradier, O.; Schick, U.; Promayon, E.; Gonzalez Ballester, M. A.; Troccaz, J.; Visvikis, D.

    2017-03-01

    Prostate volume changes due to edema occurrence during transperineal permanent brachytherapy should be taken into consideration to ensure optimal dose delivery. Available edema models, based on prostate volume observations, face several limitations. Therefore, patient-specific models need to be developed to accurately account for the impact of edema. In this study we present a biomechanical model developed to reproduce edema resolution patterns documented in the literature. Using the biphasic mixture theory and finite element analysis, the proposed model takes into consideration the mechanical properties of the pubic area tissues in the evolution of prostate edema. The model's computed deformations are incorporated in a Monte Carlo simulation to investigate their effect on post-operative dosimetry. The comparison of Day1 and Day30 dosimetry results demonstrates the capability of the proposed model for patient-specific dosimetry improvements, considering the edema dynamics. The proposed model shows excellent ability to reproduce previously described edema resolution patterns and was validated based on previous findings. According to our results, for a prostate volume increase of 10-20% the Day30 urethra D10 dose metric is higher by 4.2%-10.5% compared to the Day1 value. The introduction of the edema dynamics into Day30 dosimetry reveals a significant global dose overestimation in the conventional static Day30 dosimetry. In conclusion, the proposed edema biomechanical model can improve the treatment planning of transperineal permanent brachytherapy by accounting for post-implant dose alterations during the planning procedure.

  11. The estimation of body mass index and physical attractiveness is dependent on the observer's own body mass index.

    Science.gov (United States)

    Tovée, M J; Emery, J L; Cohen-Tovée, E M

    2000-01-01

    A disturbance in the evaluation of personal body mass and shape is a key feature of both anorexia and bulimia nervosa. However, it is uncertain whether overestimation is a causal factor in the development of these eating disorders or is merely a secondary effect of having a low body mass. Moreover, does this overestimation extend to the perception of other people's bodies? Since body mass is an important factor in the perception of physical attractiveness, we wanted to determine whether this putative overestimation of self body mass extended to include the perceived attractiveness of others. We asked 204 female observers (31 anorexic, 30 bulimic and 143 control) to estimate the body mass and rate the attractiveness of a set of 25 photographic images showing people of varying body mass index (BMI). BMI is a measure of weight scaled for height (kg m⁻²). The observers also estimated their own BMI. Anorexic and bulimic observers systematically overestimated the body mass of both their own and other people's bodies, relative to controls, and they rated a significantly lower body mass to be optimally attractive. When the degree of overestimation is plotted against the BMI of the observer there is a strong correlation. Taken across all our observers, as the BMI of the observer declines, the overestimation of body mass increases. One possible explanation for this result is that the overestimation is a secondary effect caused by weight loss. Moreover, if the degree of body mass overestimation is taken into account, then there are no significant differences in the perceptions of attractiveness between anorexic and bulimic observers and control observers. Our results suggest a significant perceptual overestimation of BMI that is based on the observer's own BMI and not correlated with cognitive factors, and suggest that this overestimation in eating-disordered patients must be addressed directly in treatment regimes. PMID:11075712
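    The BMI measure used throughout the study is weight scaled for height, which can be stated as a one-line function (function name assumed):

```python
def bmi(weight_kg, height_m):
    """Body mass index in kg m^-2: body weight scaled by height squared."""
    return weight_kg / height_m ** 2
```

    For example, 70 kg at 1.75 m gives a BMI of about 22.9 kg m⁻².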

  12. Integrating wildfire plume rises within atmospheric transport models

    Science.gov (United States)

    Mallia, D. V.; Kochanski, A.; Wu, D.; Urbanski, S. P.; Krueger, S. K.; Lin, J. C.

    2016-12-01

    Wildfires can generate significant pyro-convection that is responsible for releasing pollutants, greenhouse gases, and trace species into the free troposphere, which are then transported a significant distance downwind from the fire. Oftentimes, atmospheric transport and chemistry models have a difficult time resolving the transport of smoke from these wildfires, primarily due to deficiencies in estimating the plume injection height, which has been highlighted in previous work as the most important aspect of simulating wildfire plume transport. As a result of the uncertainties associated with modeled wildfire plume rise, researchers face difficulties modeling the impacts of wildfire smoke on air quality and constraining fire emissions using inverse modeling techniques. Currently, several plume rise parameterizations exist that are able to determine the injection height of fire emissions; however, the success of these parameterizations has been mixed. With the advent of WRF-SFIRE, the wildfire plume rise and injection height can now be explicitly calculated using a fire spread model (SFIRE) that is dynamically linked with the atmosphere simulated by WRF. However, this model has only been tested on a limited basis due to computational costs. Here, we will test the performance of WRF-SFIRE in addition to several commonly adopted plume parameterizations (Freitas, Sofiev, and Briggs) for the 2013 Patch Springs (Utah) and 2012 Baker Canyon (Washington) fires, for both of which observations of plume rise heights are available. These plume rise techniques will then be incorporated within a Lagrangian atmospheric transport model (STILT) in order to simulate CO and CO2 concentrations during NASA's CARVE Earth Science Airborne Program over Alaska during the summer of 2012. Initial model results showed that STILT model simulations were unable to reproduce enhanced CO concentrations produced by Alaskan fires observed during 2012. Near-surface concentrations were drastically …
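    Of the parameterizations named above, the Briggs formulation estimates plume rise from the buoyancy flux of the source. The sketch below is the minimal neutral-condition transitional form; operational fire-plume schemes add stability dependence and fire-specific heat-release terms, so treat this as a simplified assumption rather than the scheme used in the study.

```python
def briggs_rise(buoyancy_flux, downwind_dist, wind_speed):
    """Transitional plume rise, Briggs buoyant-plume form for neutral
    conditions: dh = 1.6 * F^(1/3) * x^(2/3) / u.
    buoyancy_flux F in m^4 s^-3, downwind_dist x in m, wind_speed u in m s^-1.
    A simplified sketch of one of the named parameterizations."""
    return 1.6 * buoyancy_flux ** (1 / 3) * downwind_dist ** (2 / 3) / wind_speed
```

    Note the inverse dependence on wind speed: weaker winds allow the plume to rise higher before bending over.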

  13. Significance evaluation in factor graphs

    DEFF Research Database (Denmark)

    Madsen, Tobias; Hobolth, Asger; Jensen, Jens Ledet

    2017-01-01

    … in genomics and the multiple-testing issues accompanying them, accurate significance evaluation is of great importance. We here address the problem of evaluating statistical significance of observations from factor graph models. Results: Two novel numerical approximations for evaluation of statistical significance are presented: first, a method using importance sampling; second, a saddlepoint approximation based method. We develop algorithms to efficiently compute the approximations and compare them to naive sampling and the normal approximation. The individual merits of the methods are analysed both from … Conclusions: The applicability of saddlepoint approximation and importance sampling is demonstrated on known models in the factor graph framework. Using the two methods we can substantially improve computational cost without compromising accuracy. This contribution allows analyses of large datasets …
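    The importance-sampling idea can be illustrated on a toy target: estimating a small Gaussian tail probability by drawing from a proposal shifted into the tail and reweighting each draw by the density ratio. The Gaussian target here is an illustrative stand-in, not the paper's factor-graph score distribution.

```python
import math
import random

def tail_prob_importance(t, n=100_000, seed=1):
    """Estimate P(Z >= t) for Z ~ N(0, 1) by importance sampling from the
    shifted proposal N(t, 1). Each accepted draw is weighted by
    phi(z) / phi(z - t), the target-to-proposal density ratio."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(t, 1.0)                # proposal centred in the tail
        if z >= t:
            # weight = exp(-z^2/2) / exp(-(z-t)^2/2)
            total += math.exp(-0.5 * z * z + 0.5 * (z - t) ** 2)
    return total / n
```

    For t = 3 the naive estimator would need millions of draws to see even a few tail events, whereas the shifted proposal places roughly half its samples in the tail.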

  14. Ising model for neural data

    DEFF Research Database (Denmark)

    Roudi, Yasser; Tyrcha, Joanna; Hertz, John

    2009-01-01

    (no Danish abstract available) We study pairwise Ising models for describing the statistics of multi-neuron spike trains, using data from a simulated cortical network. We explore efficient ways of finding the optimal couplings in these models and examine their statistical properties. To do this, we extract the optimal couplings for subsets of size up to 200 neurons, essentially exactly, using Boltzmann learning. We then study the quality of several approximate methods for finding the couplings by comparing their results with those found from Boltzmann learning. Two of these methods -- inversion of the Thouless-Anderson-Palmer equations and an approximation proposed by Sessak and Monasson -- are remarkably accurate. Using these approximations for larger subsets of neurons, we find that extracting couplings using data from a subset smaller than the full network tends systematically to overestimate …
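    A cheap baseline for the coupling-extraction problem discussed above is the naive mean-field inversion, in which the coupling matrix is approximated by the negative inverse of the connected correlation matrix. This is a simplification, related to but not identical with the Boltzmann-learning and TAP approaches the study compares; names and data layout are assumptions.

```python
import numpy as np

def mean_field_couplings(spins):
    """Naive mean-field inverse Ising: J is approximately -(C^-1) with zero
    diagonal, where C is the connected correlation (covariance) matrix of
    +/-1 spins. `spins` has shape (n_samples, n_neurons)."""
    C = np.cov(spins, rowvar=False)     # connected correlations
    J = -np.linalg.inv(C)
    np.fill_diagonal(J, 0.0)            # no self-couplings in the Ising model
    return J
```

    The inferred matrix is symmetric (up to numerical error), as Ising couplings must be.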

  15. A special covariance structure for random coefficient models with both between and within covariates

    International Nuclear Information System (INIS)

    Riedel, K.S.

    1990-07-01

    We review random coefficient (RC) models in linear regression and propose a bias correction to the maximum likelihood (ML) estimator. Asymptotic expansions of the ML equations are given when the between-individual variance is much larger or smaller than the variance from within-individual fluctuations. The standard model assumes that all but one covariate vary within each individual (we denote the within covariates by the vector χ₁). We consider random coefficient models where some of the covariates do not vary in any single individual (we denote the between covariates by the vector χ₀). The regression coefficients β_k can only be estimated in the subspace X_k of X. Thus the number of individuals necessary to estimate β and the covariance matrix Δ of β increases significantly in the presence of more than one between covariate. When the number of individuals is sufficient to estimate β but not the entire matrix Δ, additional assumptions must be imposed on the structure of Δ. A simple reduced model is that the between component of β is fixed and only the within component varies randomly. This model fails because it is not invariant under linear coordinate transformations and it can significantly overestimate the variance of new observations. We propose a covariance structure for Δ without these difficulties by first projecting the within covariates onto the space perpendicular to the between covariates. (orig.)
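    The final construction, projecting the within covariates onto the space perpendicular to the between covariates, is an ordinary least-squares residual. A minimal numpy sketch follows; variable names follow the abstract's χ₀/χ₁ notation but the function itself is an illustrative assumption.

```python
import numpy as np

def project_out_between(X1, X0):
    """Residual of the within covariates X1 after projecting out the between
    covariates X0: the component of X1 perpendicular to the column space of
    X0. Both arguments are 2-D arrays with one row per observation."""
    beta, *_ = np.linalg.lstsq(X0, X1, rcond=None)  # least-squares fit
    return X1 - X0 @ beta                           # orthogonal residual
```

    With X0 a column of ones, this reduces to centering X1 about its mean, the simplest case of the projection.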

  16. Significance of settling model structures and parameter subsets in modelling WWTPs under wet-weather flow and filamentous bulking conditions

    DEFF Research Database (Denmark)

    Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen

    2014-01-01

    Current research focuses on predicting and mitigating the impacts of high hydraulic loadings on centralized wastewater treatment plants (WWTPs) under wet-weather conditions. The maximum permissible inflow to WWTPs depends not only on the settleability of activated sludge in secondary settling tanks (SSTs) but also on the hydraulic behaviour of SSTs. The present study investigates the impacts of ideal and non-ideal flow (dry and wet weather) and settling (good settling and bulking) boundary conditions on the sensitivity of WWTP model outputs to uncertainties intrinsic to the one-dimensional (1-D) … of settling parameters to the total variance of the key WWTP process outputs significantly depends on the influent flow and settling conditions. The magnitude of the impact is found to vary, depending on which type of 1-D SST model is used. Therefore, we identify and recommend potential parameter subsets …

  17. Initial Comparison of Direct and Legacy Modeling Approaches for Radial Core Expansion Analysis

    International Nuclear Information System (INIS)

    Shemon, Emily R.

    2016-01-01

    Radial core expansion in sodium-cooled fast reactors provides an important reactivity feedback effect. As the reactor power increases due to normal start up conditions or accident scenarios, the core and surrounding materials heat up, causing both grid plate expansion and bowing of the assembly ducts. When the core restraint system is designed correctly, the resulting structural deformations introduce negative reactivity which decreases the reactor power. Historically, an indirect procedure has been used to estimate the reactivity feedback due to structural deformation which relies upon perturbation theory and coupling legacy physics codes with limited geometry capabilities. With advancements in modeling and simulation, radial core expansion phenomena can now be modeled directly, providing an assessment of the accuracy of the reactivity feedback coefficients generated by indirect legacy methods. Recently a new capability was added to the PROTEUS-SN unstructured geometry neutron transport solver to analyze deformed meshes quickly and directly. By supplying the deformed mesh in addition to the base configuration input files, PROTEUS-SN automatically processes material adjustments including calculation of region densities to conserve mass, calculation of isotopic densities according to material models (for example, sodium density as a function of temperature), and subsequent re-homogenization of materials. To verify the new capability of directly simulating deformed meshes, PROTEUS-SN was used to compute reactivity feedback for a series of contrived yet representative deformed configurations for the Advanced Burner Test Reactor design. The indirect legacy procedure was also performed to generate reactivity feedback coefficients for the same deformed configurations. Interestingly, the legacy procedure consistently overestimated reactivity feedbacks by 35% compared to direct simulations by PROTEUS-SN. 
This overestimation indicates that the legacy procedures are in fact …

  18. Initial Comparison of Direct and Legacy Modeling Approaches for Radial Core Expansion Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Shemon, Emily R. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-10-10

    Radial core expansion in sodium-cooled fast reactors provides an important reactivity feedback effect. As the reactor power increases due to normal start up conditions or accident scenarios, the core and surrounding materials heat up, causing both grid plate expansion and bowing of the assembly ducts. When the core restraint system is designed correctly, the resulting structural deformations introduce negative reactivity which decreases the reactor power. Historically, an indirect procedure has been used to estimate the reactivity feedback due to structural deformation which relies upon perturbation theory and coupling legacy physics codes with limited geometry capabilities. With advancements in modeling and simulation, radial core expansion phenomena can now be modeled directly, providing an assessment of the accuracy of the reactivity feedback coefficients generated by indirect legacy methods. Recently a new capability was added to the PROTEUS-SN unstructured geometry neutron transport solver to analyze deformed meshes quickly and directly. By supplying the deformed mesh in addition to the base configuration input files, PROTEUS-SN automatically processes material adjustments including calculation of region densities to conserve mass, calculation of isotopic densities according to material models (for example, sodium density as a function of temperature), and subsequent re-homogenization of materials. To verify the new capability of directly simulating deformed meshes, PROTEUS-SN was used to compute reactivity feedback for a series of contrived yet representative deformed configurations for the Advanced Burner Test Reactor design. The indirect legacy procedure was also performed to generate reactivity feedback coefficients for the same deformed configurations. Interestingly, the legacy procedure consistently overestimated reactivity feedbacks by 35% compared to direct simulations by PROTEUS-SN. 
This overestimation indicates that the legacy procedures are in fact …

  19. Heat transfer corrected isothermal model for devolatilization of thermally-thick biomass particles

    DEFF Research Database (Denmark)

    Luo, Hao; Wu, Hao; Lin, Weigang

    The isothermal model used in current computational fluid dynamics (CFD) models neglects internal heat transfer during biomass devolatilization. This assumption is not reasonable for thermally-thick particles. To solve this issue, a heat transfer corrected isothermal model is introduced. In this model, two heat transfer correction coefficients, HT-correction of heat transfer and HR-correction of reaction, are defined to cover the effects of internal heat transfer. A series of single-particle biomass devolatilization cases have been modelled to validate this model; the results show that the devolatilization behaviors of both thermally-thick and thermally-thin particles are predicted reasonably by the heat transfer corrected model, while the isothermal model overestimates the devolatilization rate and heating rate for thermally-thick particles. This model probably has better performance than the isothermal model when it is coupled …
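    The thermally-thin vs. thermally-thick distinction above is commonly made with the Biot number, the ratio of external convective to internal conductive heat transfer. The sketch below uses the usual rule of thumb; the cutoff value of 0.1 and the function names are assumptions, not taken from the paper.

```python
def biot_number(h, k, d_char):
    """Biot number Bi = h * L / k.
    h: convective coefficient (W m^-2 K^-1), k: particle thermal
    conductivity (W m^-1 K^-1), d_char: characteristic length (m)."""
    return h * d_char / k

def is_thermally_thin(h, k, d_char, threshold=0.1):
    """Rule of thumb (assumed cutoff): Bi < 0.1 means internal temperature
    gradients are negligible and an isothermal particle model is defensible;
    larger Bi calls for a heat-transfer-corrected model."""
    return biot_number(h, k, d_char) < threshold
```

    For example, a millimetre-scale biomass particle in a hot gas flow easily exceeds Bi = 0.1, which is why the isothermal assumption breaks down for thermally-thick particles.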

  20. Causes of variation among rice models in yield response to CO2 examined with Free-Air CO2 Enrichment and growth chamber experiments.

    Science.gov (United States)

    Hasegawa, Toshihiro; Li, Tao; Yin, Xinyou; Zhu, Yan; Boote, Kenneth; Baker, Jeffrey; Bregaglio, Simone; Buis, Samuel; Confalonieri, Roberto; Fugice, Job; Fumoto, Tamon; Gaydon, Donald; Kumar, Soora Naresh; Lafarge, Tanguy; Marcaida Iii, Manuel; Masutomi, Yuji; Nakagawa, Hiroshi; Oriol, Philippe; Ruget, Françoise; Singh, Upendra; Tang, Liang; Tao, Fulu; Wakatsuki, Hitomi; Wallach, Daniel; Wang, Yulong; Wilson, Lloyd Ted; Yang, Lianxin; Yang, Yubin; Yoshida, Hiroe; Zhang, Zhao; Zhu, Jianguo

    2017-11-01

    The CO2 fertilization effect is a major source of uncertainty in crop models for future yield forecasts, but coordinated efforts to determine the mechanisms of this uncertainty have been lacking. Here, we studied causes of uncertainty among 16 crop models in predicting rice yield in response to elevated [CO2] (E-[CO2]) by comparison to free-air CO2 enrichment (FACE) and chamber experiments. The model ensemble reproduced the experimental results well. However, yield prediction in response to E-[CO2] varied significantly among the rice models. The variation was not random: models that overestimated at one experiment simulated greater yield enhancements at the others. The variation was not associated with model structure or magnitude of photosynthetic response to E-[CO2] but was significantly associated with the predictions of leaf area. This suggests that modelled secondary effects of E-[CO2] on morphological development, primarily leaf area, are the sources of model uncertainty. Rice morphological development is conservative to carbon acquisition. Uncertainty will be reduced by incorporating this conservative nature of the morphological response to E-[CO2] into the models. Nitrogen levels, particularly under limited situations, make the prediction more uncertain. Improving models to account for [CO2] × N interactions is necessary to better evaluate management practices under climate change.

  1. Comparison of Steady-State SVC Models in Load Flow Calculations

    DEFF Research Database (Denmark)

    Chen, Peiyuan; Chen, Zhe; Bak-Jensen, Birgitte

    2008-01-01

    This paper compares in a load flow calculation three existing steady-state models of static var compensator (SVC), i.e. the generator-fixed susceptance model, the total susceptance model and the firing angle model. The comparison is made in terms of the voltage at the SVC regulated bus, equivalent...... SVC susceptance at the fundamental frequency and the load flow convergence rate both when SVC is operating within and on the limits. The latter two models give inaccurate results of the equivalent SVC susceptance as compared to the generator model due to the assumption of constant voltage when the SVC...... is operating within the limits. This may underestimate or overestimate the SVC regulating capability. Two modified models are proposed to improve the SVC regulated voltage according to its steady-state characteristic. The simulation results of the two modified models show the improved accuracy...

  2. Correction of the significance level when attempting multiple transformations of an explanatory variable in generalized linear models

    Science.gov (United States)

    2013-01-01

    Background In statistical modeling, finding the most favorable coding for an exploratory quantitative variable involves many tests. This process raises a multiple testing problem and requires a correction of the significance level. Methods For each coding, a test on the nullity of the coefficient associated with the newly coded variable is computed. The selected coding is the one associated with the largest test statistic (or, equivalently, the smallest p-value). In the context of the Generalized Linear Model, Liquet and Commenges (Stat Probability Lett, 71:33–38, 2005) proposed an asymptotic correction of the significance level. This procedure, based on the score test, was developed for dichotomous and Box-Cox transformations. In this paper, we suggest the use of resampling methods to estimate the significance level for categorical transformations with more than two levels and, by definition, those that involve more than one parameter in the model. The categorical transformation is a more flexible way to explore the unknown shape of the effect between an explanatory and a dependent variable. Results The simulations we ran in this study showed good performance of the proposed methods. These methods are illustrated using data from a study of the relationship between cholesterol and dementia. Conclusion The algorithms were implemented using R, and the associated CPMCGLM R package is available on CRAN. PMID:23758852
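
    The resampling idea in this record can be illustrated with a minimal sketch (not the CPMCGLM implementation): several candidate categorical codings of x compete, the largest association statistic is kept, and a permutation null distribution of that same maximum yields a corrected p-value. All data and cut points here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_association(x, y, cut_sets):
    """Largest between-group sum of squares over several categorical codings of x."""
    best = 0.0
    for cuts in cut_sets:
        codes = np.digitize(x, cuts)                 # one categorical transformation of x
        stat = sum(
            (codes == g).sum() * (y[codes == g].mean() - y.mean()) ** 2
            for g in np.unique(codes)
        )
        best = max(best, stat)
    return best

n = 200
x = rng.normal(size=n)
y = rng.normal(size=n)                               # null case: y unrelated to x
cut_sets = [
    np.quantile(x, [0.5]),                           # 2 levels
    np.quantile(x, [1 / 3, 2 / 3]),                  # 3 levels
    np.quantile(x, [0.25, 0.5, 0.75]),               # 4 levels
]

observed = max_association(x, y, cut_sets)
# permutation null distribution of the *maximum* statistic, so the selection
# over codings is built into the reference distribution
null = [max_association(x, rng.permutation(y), cut_sets) for _ in range(999)]
p_corrected = (1 + sum(s >= observed for s in null)) / (999 + 1)
```

    Because the maximum over codings is recomputed on every permuted dataset, the resulting p-value is automatically adjusted for having tried several transformations.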

  3. Model to calculate mass flow rate and other quantities of two-phase flow in a pipe with a densitometer, a drag disk, and a turbine meter

    International Nuclear Information System (INIS)

    Aya, I.

    1975-11-01

    The proposed model was developed at ORNL to calculate the mass flow rate and other quantities of two-phase flow in a pipe when the flow is dispersed with slip between the phases. The calculational model is based on assumptions concerning the characteristics of a turbine meter and a drag disk, and it should be validated with experimental data before being used in blowdown analysis. In order to compare dispersed flow and homogeneous flow, the ratio of readings from each flow regime for each device discussed is calculated for a given mass flow rate and steam quality. The sensitivity analysis shows that the calculated flow rate of a steam-water mixture (based on the measurements of a drag disk and a gamma densitometer in which the flow is assumed to be homogeneous even if there is some slip between phases) is very close to the real flow rate in the case of dispersed flow at low quality. As the steam quality increases at a constant slip ratio, all models are prone to overestimate. At 20 percent quality the overestimates reach 8 percent in the proposed model, 15 percent in Rouhani's model, 38 percent in the homogeneous model, and 75 percent in Popper's model.
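
    The homogeneous combination of the two instruments mentioned in this record can be sketched as follows: the gamma densitometer supplies the mixture density ρ, the drag disk supplies the momentum flux ρv², so the homogeneous mass flow rate is A·√(ρ·ρv²) = A·ρ·v. The instrument readings below are hypothetical round numbers, not data from the report.

```python
import math

def homogeneous_mass_flow(area_m2, rho_gamma, drag_rho_v2):
    """Mass flow rate under the homogeneous-flow assumption:
    gamma densitometer -> density rho [kg/m^3],
    drag disk          -> momentum flux rho*v^2 [kg/(m*s^2)],
    so m_dot = A * sqrt(rho * (rho * v^2)) = A * rho * v  [kg/s]."""
    return area_m2 * math.sqrt(rho_gamma * drag_rho_v2)

# illustrative values: 0.1 m^2 pipe, rho = 500 kg/m^3, v = 2 m/s
m_dot = homogeneous_mass_flow(0.1, 500.0, 500.0 * 2.0**2)
# equals 0.1 * 500 * 2 = 100 kg/s when the flow truly is homogeneous;
# with slip between phases, this figure overestimates as described above
```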

  4. Comparing potential recharge estimates from three Land Surface Models across the Western US

    Science.gov (United States)

    NIRAULA, REWATI; MEIXNER, THOMAS; AJAMI, HOORI; RODELL, MATTHEW; GOCHIS, DAVID; CASTRO, CHRISTOPHER L.

    2018-01-01

    Groundwater is a major source of water in the western US. However, there are limited recharge estimates available in this region due to the complexity of recharge processes and the challenge of direct observations. Land Surface Models (LSMs) could be a valuable tool for estimating current recharge and projecting changes due to future climate change. In this study, simulations of three LSMs (Noah, Mosaic and VIC) obtained from the North American Land Data Assimilation System (NLDAS-2) are used to estimate potential recharge in the western US. Modeled recharge was compared with published recharge estimates for several aquifers in the region. Annual recharge-to-precipitation ratios across the study basins varied from 0.01–15% for Mosaic, 3.2–42% for Noah, and 6.7–31.8% for VIC simulations. Mosaic consistently underestimates recharge across all basins. Noah captures recharge reasonably well in wetter basins but overestimates it in drier basins. VIC slightly overestimates recharge in drier basins and slightly underestimates it in wetter basins. While the average annual recharge values vary among the models, the models were consistent in identifying high- and low-recharge areas in the region, and they agree that recharge occurs predominantly during spring across the region. Overall, our results highlight that LSMs have the potential to capture the spatial and temporal patterns as well as the seasonality of recharge at large scales. Therefore, LSMs (specifically VIC and Noah) can be used as a tool for estimating future recharge rates in data-limited regions. PMID:29618845

  5. Risk assessment of oil price from static and dynamic modelling approaches

    DEFF Research Database (Denmark)

    Mi, Zhi-Fu; Wei, Yi-Ming; Tang, Bao-Jun

    2017-01-01

    ) and GARCH model on the basis of generalized error distribution (GED). The results show that EVT is a powerful approach to capture the risk in the oil markets. On the contrary, the traditional variance–covariance (VC) and Monte Carlo (MC) approaches tend to overestimate risk when the confidence level is 95......%, but underestimate risk at the confidence level of 99%. The VaR of WTI returns is larger than that of Brent returns at identical confidence levels. Moreover, the GED-GARCH model can estimate the downside dynamic VaR accurately for WTI and Brent oil returns....
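
    The contrast this record reports between the variance-covariance approach and empirical tail estimates can be reproduced with a minimal sketch on synthetic heavy-tailed returns (Student-t, a stand-in for real WTI/Brent series; the GED-GARCH and EVT machinery of the paper is not implemented here).

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical heavy-tailed daily returns (Student-t with 4 d.o.f., scaled);
# a real oil-price return series would replace this
returns = rng.standard_t(df=4, size=200_000) * 0.02

Z = {0.95: 1.645, 0.99: 2.326}            # standard normal quantiles

def var_vc(r, level):
    """Variance-covariance VaR: assumes normally distributed returns."""
    return -(r.mean() - Z[level] * r.std())

def var_hist(r, level):
    """Empirical-quantile VaR: no distributional assumption."""
    return -np.quantile(r, 1 - level)

# Heavy tails reproduce the pattern reported above: the normal-based VC figure
# exceeds the empirical loss quantile at the 95% level but falls short at 99%.
v95_vc, v95_emp = var_vc(returns, 0.95), var_hist(returns, 0.95)
v99_vc, v99_emp = var_vc(returns, 0.99), var_hist(returns, 0.99)
```

    The normal distribution has more mass than the t near its 5% quantile but far less in the extreme 1% tail, which is exactly why VC-style VaR can overestimate at 95% yet underestimate at 99%.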

  6. SWAT Modeling for Depression-Dominated Areas: How Do Depressions Manipulate Hydrologic Modeling?

    Directory of Open Access Journals (Sweden)

    Mohsen Tahmasebi Nasab

    2017-01-01

    Full Text Available Modeling hydrologic processes for depression-dominated areas such as the North American Prairie Pothole Region is complex and reliant on a clear understanding of the dynamic filling-spilling-merging-splitting processes of numerous depressions over the surface. Puddles are spatially distributed over a watershed and their sizes, storages, and interactions vary over time. However, most hydrologic models fail to account for these dynamic processes. Like other traditional methods, depressions are filled as a required preprocessing step in the Soil and Water Assessment Tool (SWAT). The objective of this study was to facilitate hydrologic modeling for depression-dominated areas by coupling SWAT with a Puddle Delineation (PD) algorithm. In the coupled PD-SWAT model, the PD algorithm was utilized to quantify topographic details, including the characteristics, distribution, and hierarchical relationships of depressions, which were incorporated into SWAT at the hydrologic response unit (HRU) scale. The new PD-SWAT model was tested for a large watershed in North Dakota under real precipitation events. In addition, hydrologic modeling of a small watershed was conducted under two extreme high and low synthetic precipitation conditions. In particular, the PD-SWAT was compared against the regular SWAT based on depressionless DEMs. The impact of depressions on the hydrologic modeling of the large and small watersheds was evaluated. The simulation results for the large watershed indicated that SWAT systematically overestimated the outlet discharge, which can be attributed to the failure to account for the hydrologic effects of depressions. It was found from the PD-SWAT modeling results that at the HRU scale surface runoff initiation was significantly delayed due to the threshold control of depressions. Under the high precipitation scenario, depressions increased the surface runoff peak. However, the low precipitation scenario could not fully fill depressions to reach

  7. Evaluation of Land Surface Models in Reproducing Satellite-Derived LAI over the High-Latitude Northern Hemisphere. Part I: Uncoupled DGVMs

    Directory of Open Access Journals (Sweden)

    Ning Zeng

    2013-10-01

    Full Text Available Leaf Area Index (LAI) represents the total surface area of leaves above a unit area of ground and is a key variable in any vegetation model, as well as in climate models. New high-resolution LAI satellite data are now available covering a period of several decades. This provides a unique opportunity to validate LAI estimates from multiple vegetation models. The objective of this paper is to compare new, satellite-derived LAI measurements with modeled output for the Northern Hemisphere. We compare monthly LAI output from eight land surface models from the TRENDY compendium with satellite data from an Artificial Neural Network (ANN) from the latest version (third generation) of GIMMS AVHRR NDVI data over the period 1986–2005. Our results show that all the models overestimate the mean LAI, particularly over the boreal forest. We also find that seven out of the eight models overestimate the length of the active vegetation-growing season, mostly due to a late dormancy as a result of a late summer phenology. Finally, we find that the models report a much larger positive trend in LAI over this period than the satellite observations suggest, which translates into a higher trend in the growing season length. These results highlight the need to incorporate a larger number of more accurate plant functional types in all models and, in particular, to improve the phenology of deciduous trees.

  8. Modelling of Transport Projects Uncertainties

    DEFF Research Database (Denmark)

    Salling, Kim Bang; Leleur, Steen

    2009-01-01

    This paper proposes a new way of handling the uncertainties present in transport decision making based on infrastructure appraisals. The paper suggests combining the principle of Optimism Bias, which depicts the historical tendency of overestimating transport-related benefits and underestimating...... to supplement Optimism Bias and the associated Reference Class Forecasting (RCF) technique with a new technique that makes use of a scenario-grid. We tentatively introduce and refer to this as Reference Scenario Forecasting (RSF). The final RSF output from the CBA-DK model consists of a set of scenario......-based graphs which function as risk-related decision support for the appraised transport infrastructure project....

  9. Mixed Effects Modeling Using Stochastic Differential Equations: Illustrated by Pharmacokinetic Data of Nicotinic Acid in Obese Zucker Rats.

    Science.gov (United States)

    Leander, Jacob; Almquist, Joachim; Ahlström, Christine; Gabrielsson, Johan; Jirstrand, Mats

    2015-05-01

    Inclusion of stochastic differential equations in mixed effects models provides means to quantify and distinguish three sources of variability in data. In addition to the two commonly encountered sources, measurement error and interindividual variability, we also consider uncertainty in the dynamical model itself. To this end, we extend the ordinary differential equation setting used in nonlinear mixed effects models to include stochastic differential equations. The approximate population likelihood is derived using the first-order conditional estimation with interaction method and extended Kalman filtering. To illustrate the application of the stochastic differential mixed effects model, two pharmacokinetic models are considered. First, we use a stochastic one-compartmental model with first-order input and nonlinear elimination to generate synthetic data in a simulated study. We show that by using the proposed method, the three sources of variability can be successfully separated. If the stochastic part is neglected, the parameter estimates become biased, and the measurement error variance is significantly overestimated. Second, we consider an extension to a stochastic pharmacokinetic model in a preclinical study of nicotinic acid kinetics in obese Zucker rats. The parameter estimates are compared between a deterministic and a stochastic NiAc disposition model, respectively. Discrepancies between model predictions and observations, previously described as measurement noise only, are now separated into a comparatively lower level of measurement noise and a significant uncertainty in model dynamics. These examples demonstrate that stochastic differential mixed effects models are useful tools for identifying incomplete or inaccurate model dynamics and for reducing potential bias in parameter estimates due to such model deficiencies.

  10. Glass operational file. Operational models and integration calculations

    International Nuclear Information System (INIS)

    Ribet, I.

    2004-01-01

    This document presents the operational choices of dominating phenomena, hypotheses, equations and numerical data of the parameters used in the two operational models elaborated for the calculation of the glass source terms with respect to the waste packages considered: existing packages (R7T7, AVM and CEA glasses) and future ones (UOX2, UOX3, UMo, others). The overall operational choices are justified and demonstrated, and a critical analysis of the approach is systematically proposed. The use of the operational model (OPM) V0 → Vr, realistic, conservative and robust, is recommended for glasses with a high thermal and radioactive load, which represent the main part of the vitrified wastes. The OPM V0S, much more overestimating but faster to parameterize, can be used for the long-term behaviour forecasting of glasses with a low thermal and radioactive load, considering today's lack of knowledge for the parameterization of a V0 → Vr type OPM. Efficiency estimations have been made for R7T7 glasses (OPM V0 → Vr) and AVM glasses (OPM V0S), which correspond to more than 99.9% of the vitrified waste packages' activity. The strongly contrasting results obtained illustrate the importance of the choice of operational models: in conditions representative of a geologic disposal, the estimated lifetime of R7T7-type packages exceeds several hundred thousand years. Even if the estimated lifetime of AVM packages is much shorter (because of the overestimating character of the OPM V0S), the released potential radiotoxicity is of the same order as that of R7T7 packages. (J.S.)

  11. Overestimating fish counts by non-instantaneous visual censuses: consequences for population and community descriptions.

    Directory of Open Access Journals (Sweden)

    Christine Ward-Paige

    Full Text Available BACKGROUND: Increasingly, underwater visual censuses (UVC) are used to assess fish populations. Several studies have demonstrated the effectiveness of protected areas for increasing fish abundance or provided insight into the natural abundance and structure of reef fish communities in remote areas. Recently, high apex predator densities (>100,000 individuals km⁻²) and biomasses (>4 tonnes ha⁻¹) have been reported for some remote islands, suggesting the occurrence of inverted trophic biomass pyramids. However, few studies have critically evaluated the methods used for sampling conspicuous and highly mobile fish such as sharks. Ideally, UVC are done instantaneously; however, researchers often count animals that enter the survey area after the survey has started, thus performing non-instantaneous UVC. METHODOLOGY/PRINCIPAL FINDINGS: We developed a simulation model to evaluate counts obtained by divers deploying non-instantaneous belt-transect and stationary-point-count techniques. We assessed how fish speed and survey procedure (visibility, diver speed, survey time and dimensions) affect observed fish counts. Results indicate that the bias caused by fish speed alone is substantial, while survey procedures had varying effects. Because the fastest fishes tend to be the largest, the bias has significant implications for their biomass contribution. Therefore, caution is needed when describing abundance, biomass, and community structure based on non-instantaneous UVC, especially for highly mobile species such as sharks. CONCLUSIONS/SIGNIFICANCE: Based on our results, we urge that published literature state explicitly whether instantaneous counts were made and that survey procedures be accounted for when non-instantaneous counts are used. Using published density and biomass values of communities that include sharks we explore the effect of this bias and suggest that further investigation may be needed to determine pristine shark abundances and the
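
    The direction of this bias can be seen in a deliberately simplified toy model (not the paper's full simulation): if fish entering the strip during the survey are counted, a belt transect of length L surveyed over time t effectively samples a length of roughly L + v·t for fish swimming at speed v. The transect dimensions and speeds below are hypothetical.

```python
def overestimation_factor(fish_speed_ms, survey_time_s, transect_length_m):
    """Toy model of a non-instantaneous belt transect: fish that enter the
    strip during the survey are counted, so the effectively sampled length
    grows from L to L + v*t, inflating density by (L + v*t) / L."""
    L, v, t = transect_length_m, fish_speed_ms, survey_time_s
    return (L + v * t) / L

# hypothetical 30 m transect surveyed over 10 minutes
slow_grazer = overestimation_factor(0.1, 600, 30)   # slow fish
fast_shark = overestimation_factor(1.0, 600, 30)    # fast, mobile shark
```

    Even this crude estimate shows why fast movers such as sharks are disproportionately inflated, and why reported densities from non-instantaneous counts can overstate apex predator abundance by an order of magnitude.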

  12. Significance tests to determine the direction of effects in linear regression models.

    Science.gov (United States)

    Wiedermann, Wolfgang; Hagmann, Michael; von Eye, Alexander

    2015-02-01

    Previous studies have discussed asymmetric interpretations of the Pearson correlation coefficient and have shown that higher moments can be used to decide on the direction of dependence in the bivariate linear regression setting. The current study extends this approach by illustrating that the third moment of regression residuals may also be used to derive conclusions concerning the direction of effects. Assuming non-normally distributed variables, it is shown that the distribution of residuals of the correctly specified regression model (e.g., Y is regressed on X) is more symmetric than the distribution of residuals of the competing model (i.e., X is regressed on Y). Based on this result, 4 one-sample tests are discussed which can be used to decide which variable is more likely to be the response and which one is more likely to be the explanatory variable. A fifth significance test is proposed based on the differences of skewness estimates, which leads to a more direct test of a hypothesis that is compatible with direction of dependence. A Monte Carlo simulation study was performed to examine the behaviour of the procedures under various degrees of associations, sample sizes, and distributional properties of the underlying population. An empirical example is given which illustrates the application of the tests in practice. © 2014 The British Psychological Society.
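
    The core idea of this record, that residuals of the correctly specified regression are more symmetric than residuals of the reversed model, can be sketched on synthetic data (a simplification of the paper's tests, with an assumed exponential explanatory variable):

```python
import numpy as np

rng = np.random.default_rng(2)

def skewness(z):
    """Sample skewness: third moment of the standardized values."""
    z = (z - z.mean()) / z.std()
    return (z ** 3).mean()

n = 20_000
x = rng.exponential(size=n)          # non-normal (skewed) explanatory variable
y = 0.8 * x + rng.normal(size=n)     # true direction of effect: x -> y

# residuals of the correctly specified model (Y regressed on X) ...
b_yx = np.cov(x, y)[0, 1] / x.var()
res_yx = y - b_yx * x
# ... versus the competing, mis-specified model (X regressed on Y)
b_xy = np.cov(x, y)[0, 1] / y.var()
res_xy = x - b_xy * y

# res_yx is close to the symmetric noise term, while res_xy inherits
# part of the skewness of x, so |skew(res_yx)| < |skew(res_xy)|
```

    Comparing these two skewness estimates is the raw ingredient of the fifth significance test described above; the actual tests in the paper add the appropriate sampling distributions.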

  13. Calculating the true level of predictors significance when carrying out the procedure of regression equation specification

    Directory of Open Access Journals (Sweden)

    Nikita A. Moiseev

    2017-01-01

    Full Text Available The paper is devoted to a new randomization method that yields unbiased adjustments of p-values for the predictors of linear regression models by incorporating the number of potential explanatory variables, their variance-covariance matrix and its uncertainty, based on the number of observations. This adjustment helps to control type I errors in scientific studies, significantly decreasing the number of publications that report false relations as authentic ones. Comparative analysis with existing methods such as the Bonferroni correction and the Shehata and White adjustments explicitly shows their imperfections, especially in the case when the number of observations and the number of potential explanatory variables are approximately equal. The comparative analysis also showed that when the variance-covariance matrix of a set of potential predictors is diagonal, i.e. the data are independent, the proposed simple correction is the best and easiest way to obtain unbiased corrections of traditional p-values. However, in the presence of strongly correlated data, the simple correction overestimates the true p-values, which can lead to type II errors. It was also found that the corrected p-values depend on the number of observations, the number of potential explanatory variables and the sample variance-covariance matrix. For example, if there are only two potential explanatory variables competing for one position in the regression model and they are weakly correlated, the corrected p-value will be lower when the number of observations is smaller, and vice versa; if the data are highly correlated, the case with a larger number of observations will show a lower corrected p-value. With increasing correlation, all corrections, regardless of the number of observations, tend to the original p-value.
    This phenomenon is easy to explain: as the correlation coefficient tends to one, the two variables almost linearly depend on each
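
    The selection effect being corrected here can be illustrated with a minimal randomization sketch (not the paper's analytic adjustment): two candidate predictors compete for one slot, so the reference distribution is the null maximum of their absolute correlations with y, with the predictors' own correlation preserved. All data below are synthetic.

```python
import numpy as np

rng = np.random.default_rng(3)

def corrected_pvalue(x1, x2, y, n_sim=2000):
    """Adjusts for trying two candidate predictors: permuting y breaks any
    x-y association while keeping the x1-x2 correlation structure intact."""
    obs = max(abs(np.corrcoef(x1, y)[0, 1]), abs(np.corrcoef(x2, y)[0, 1]))
    hits = 0
    for _ in range(n_sim):
        y0 = rng.permutation(y)
        null = max(abs(np.corrcoef(x1, y0)[0, 1]), abs(np.corrcoef(x2, y0)[0, 1]))
        hits += null >= obs
    return (hits + 1) / (n_sim + 1)

n = 300
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)   # moderately correlated competitors
y = rng.normal(size=n)               # null: y unrelated to both predictors
p = corrected_pvalue(x1, x2, y)
```

    As the x1-x2 correlation approaches one the two candidates become interchangeable, the null maximum collapses toward a single correlation, and the corrected p-value tends back to the ordinary one, matching the behaviour described above.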

  14. Macroscopic damping model for zero degree energy distribution in ultra-relativistic heavy ion collisions

    International Nuclear Information System (INIS)

    Gao Chongshou; Wang Chengshing

    1993-01-01

    A macroscopic damping model is proposed to calculate the zero-degree energy distribution in ultra-relativistic heavy ion collisions. The main features of the measured distributions are reproduced: good agreement is obtained in the middle-energy region, while the model overestimates on the high-energy side. The average energy-loss coefficient of incident nucleons, which varies within the reasonable range 0.2-0.6, depends on beam energy and target size.

  15. Assessment of Runoff Contributing Catchment Areas in Rainfall Runoff Modelling

    DEFF Research Database (Denmark)

    Thorndahl, Søren Liedtke; Johansen, C.; Schaarup-Jensen, Kjeld

    2005-01-01

    to determine with significant precision the hydrological reduction factor is implemented to account for all hydrological losses except the initial loss. This paper presents an inconsistency between calculations of the hydrological reduction factor, based on measurements of rainfall and runoff, and till now...... recommended literature values for residential areas. It is proven by comparing rainfall-runoff measurements from four different residential catchments that the literature values of the hydrological reduction factor are over-estimated for this type of catchment. In addition, different catchment descriptions...

  16. Sediment plume model-a comparison between use of measured turbidity data and satellite images for model calibration.

    Science.gov (United States)

    Sadeghian, Amir; Hudson, Jeff; Wheater, Howard; Lindenschmidt, Karl-Erich

    2017-08-01

    In this study, we built a two-dimensional sediment transport model of Lake Diefenbaker, Saskatchewan, Canada. It was calibrated by using measured turbidity data from stations along the reservoir and satellite images based on a flood event in 2013. In June 2013, there was heavy rainfall for two consecutive days on the frozen and snow-covered ground in the higher elevations of western Alberta, Canada. The runoff from the rainfall and the melted snow caused one of the largest recorded inflows to the headwaters of the South Saskatchewan River and Lake Diefenbaker downstream. An estimated discharge peak of over 5200 m³/s arrived at the reservoir inlet with a thick sediment front within a few days. The sediment plume moved quickly through the entire reservoir and remained visible from satellite images for over 2 weeks along most of the reservoir, leading to concerns regarding water quality. The aims of this study are to compare, quantitatively and qualitatively, the efficacy of using turbidity data and satellite images for sediment transport model calibration and to determine how accurately a sediment transport model can simulate sediment transport based on each of them. Both turbidity data and satellite images were very useful for calibrating the sediment transport model quantitatively and qualitatively. Model predictions and turbidity measurements show that the flood water and suspended sediments entered upstream fairly well mixed and moved downstream as overflow with a sharp gradient at the plume front. The model results suggest that the settling and resuspension rates of sediment are directly proportional to flow characteristics and that the use of constant coefficients leads to model underestimation or overestimation unless more data on sediment formation become available. Hence, this study reiterates the significance of the availability of data on sediment distribution and characteristics for building a robust and reliable sediment transport model.

  17. Natural and drought scenarios in an east central Amazon forest: Fidelity of the Community Land Model 3.5 with three biogeochemical models

    Science.gov (United States)

    Sakaguchi, Koichi; Zeng, Xubin; Christoffersen, Bradley J.; Restrepo-Coupe, Natalia; Saleska, Scott R.; Brando, Paulo M.

    2011-03-01

    Recent development of general circulation models involves biogeochemical cycles: flows of carbon and other chemical species that circulate through the Earth system. Such models are valuable tools for future projections of climate, but still bear large uncertainties in the model simulations. One of the regions with especially high uncertainty is the Amazon forest where large-scale dieback associated with the changing climate is predicted by several models. In order to better understand the capability and weakness of global-scale land-biogeochemical models in simulating a tropical ecosystem under the present day as well as significantly drier climates, we analyzed the off-line simulations for an east central Amazon forest by the Community Land Model version 3.5 of the National Center for Atmospheric Research and its three independent biogeochemical submodels (CASA', CN, and DGVM). Intense field measurements carried out under Large Scale Biosphere-Atmosphere Experiment in Amazonia, including forest response to drought from a throughfall exclusion experiment, are utilized to evaluate the whole spectrum of biogeophysical and biogeochemical aspects of the models. Our analysis shows reasonable correspondence in momentum and energy turbulent fluxes, but it highlights three processes that are not in agreement with observations: (1) inconsistent seasonality in carbon fluxes, (2) biased biomass size and allocation, and (3) overestimation of vegetation stress to short-term drought but underestimation of biomass loss from long-term drought. Without resolving these issues the modeled feedbacks from the biosphere in future climate projections would be questionable. We suggest possible directions for model improvements and also emphasize the necessity of more studies using a variety of in situ data for both driving and evaluating land-biogeochemical models.

  18. Disorder effects in the t-J model

    International Nuclear Information System (INIS)

    Caprara, S.; De Palo, S.; Castellani, C.; Di Castro, C.; Grilli, M.

    1995-01-01

    We investigate the effects of disorder in the single-band t-J model, mainly devoting our analysis to the superconducting phases with d-wave or s-wave symmetry. We present evidence that, in the presence of strong correlation with reduced bandwidth and a Van Hove singularity in the density of states, a self-consistent approach for the self-energy associated with the impurities is required. Numerical estimates of the reduction of the critical temperature with disorder are given. When a constant imaginary part of the self-energy is taken, avoiding self-consistency, the reduction of the critical temperature is overestimated.

  19. Isoscalar giant resonances in a relativistic model

    International Nuclear Information System (INIS)

    L'Huillier, M.; Nguyen Van Giai.

    1988-07-01

    Isoscalar giant resonances in finite nuclei are studied in a relativistic Random Phase Approximation (RRPA) approach. The model is self-consistent in the sense that one set of coupling constants generates the Dirac-Hartree single-particle spectrum and the residual particle-hole interaction. The RRPA is used to calculate response functions of multipolarity L = 0,2,3, and 4 in light and medium nuclei. It is found that monopole and quadrupole modes exhibit a collective character. The peak energies are overestimated, but not as much as one might think if the bulk properties (compression modulus, effective mass) were the only relevant quantities

  20. A plug flow reactor model of a vanadium redox flow battery considering the conductive current collectors

    Science.gov (United States)

    König, S.; Suriyah, M. R.; Leibfried, T.

    2017-08-01

    A lumped-parameter model for vanadium redox flow batteries, which use metallic current collectors, is extended into a one-dimensional model using the plug flow reactor principle. Thus, the commonly used simplification of a perfectly mixed cell is no longer required. The resistances of the cell components are derived in the in-plane and through-plane directions. The copper current collector is the only component with a significant in-plane conductance, which allows for a simplified electrical network. The division of a full-scale flow cell into 10 layers in the direction of fluid flow represents a reasonable compromise between computational effort and accuracy. Due to the variations in the state of charge and thus the open circuit voltage of the electrolyte, the currents in the individual layers vary considerably. Hence, there are situations, in which the first layer, directly at the electrolyte input, carries a multiple of the last layer's current. The conventional model overestimates the cell performance. In the worst-case scenario, the more accurate 20-layer model yields a discharge capacity 9.4% smaller than that computed with the conventional model. The conductive current collector effectively eliminates the high over-potentials in the last layers of the plug flow reactor models that have been reported previously.
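
    The layer-current imbalance described in this record can be sketched with a small parallel-circuit solve (assumed, illustrative parameters, not those of the paper): behind a highly conductive copper collector all layers share one terminal voltage V, while each layer's open circuit voltage falls along the flow path as the state of charge drops on discharge.

```python
def layer_currents(ocv, r_layer, total_current):
    """Layers in parallel behind a common collector: solve
    I = sum_k (ocv_k - V) / r_k for the shared terminal voltage V,
    then return the per-layer discharge currents."""
    g = [1.0 / r for r in r_layer]                                   # layer conductances
    v = (sum(o * gi for o, gi in zip(ocv, g)) - total_current) / sum(g)
    return [(o - v) / r for o, r in zip(ocv, r_layer)]

# hypothetical 4-layer discharge: OCV highest at the electrolyte inlet layer
ocv = [1.45, 1.42, 1.39, 1.36]        # V, inlet -> outlet
r = [0.01] * 4                        # ohm per layer
i_layers = layer_currents(ocv, r, 40.0)
# the inlet layer carries a multiple of the outlet layer's current,
# and the layer currents sum to the total cell current
```

    Even this crude sketch reproduces the qualitative finding: with a conductive collector, the inlet layer (highest local state of charge) carries a multiple of the outlet layer's current, which a perfectly mixed lumped model cannot represent.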

  1. Interpretation of protein quantitation using the Bradford assay: comparison with two calculation models.

    Science.gov (United States)

    Ku, Hyung-Keun; Lim, Hyuk-Min; Oh, Kyong-Hwa; Yang, Hyo-Jin; Jeong, Ji-Seon; Kim, Sook-Kyung

    2013-03-01

    The Bradford assay is a simple method for protein quantitation, but variation in the results between proteins is a matter of concern. In this study, we compared and normalized quantitative values from two models for protein quantitation, where the residues in the protein that bind to anionic Coomassie Brilliant Blue G-250 comprise either Arg and Lys (Method 1, M1) or Arg, Lys, and His (Method 2, M2). Use of the M2 model yielded much more consistent quantitation values compared with use of the M1 model, which exhibited marked overestimations against protein standards. Copyright © 2012 Elsevier Inc. All rights reserved.
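
    The difference between the two calculation models is simply which residues are assumed to bind the dye, which is easy to sketch (the peptide below is a hypothetical example, not a standard from the study):

```python
def bradford_binding_residues(seq, model="M2"):
    """Count residues assumed to bind anionic Coomassie Brilliant Blue G-250:
    Arg and Lys only (M1), or Arg, Lys, and His (M2)."""
    residues = "RK" if model == "M1" else "RKH"
    return sum(aa in residues for aa in seq.upper())

# hypothetical peptide in one-letter code
pep = "MKRHLLK"
m1 = bradford_binding_residues(pep, "M1")   # counts K, R, K
m2 = bradford_binding_residues(pep, "M2")   # additionally counts H
```

    For His-rich proteins the two counts diverge most, which is consistent with the report that normalizing by M2 gives more consistent quantitation than M1.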

  2. Mitigation potential of soil carbon management overestimated by neglecting N2O emissions

    Science.gov (United States)

    Lugato, Emanuele; Leip, Adrian; Jones, Arwyn

    2018-03-01

    International initiatives such as the '4 per 1000' are promoting enhanced carbon (C) sequestration in agricultural soils as a way to mitigate greenhouse gas emissions [1]. However, changes in soil organic C turnover feed back into the nitrogen (N) cycle [2], meaning that variation in soil nitrous oxide (N2O) emissions may offset or enhance C sequestration actions [3]. Here we use a biogeochemistry model on approximately 8,000 soil sampling locations in the European Union [4] to quantify the net CO2-equivalent (CO2e) fluxes associated with representative C-mitigating agricultural practices. Practices based on integrated crop residue retention and lower soil disturbance are found not to increase N2O emissions as long as C accumulation continues (until around 2040), thereafter leading to a moderate C sequestration offset, mostly below 47%, by 2100. The introduction of N-fixing cover crops allowed higher C accumulation over the initial 20 years, but this gain was progressively offset by higher N2O emissions over time. By 2060, around half of the sites became a net source of greenhouse gases. We conclude that significant CO2 mitigation can be achieved in the initial 20-30 years of any C management scheme, but after that N inputs should be controlled through appropriate management.

  3. A forward model for the helium plume effect and the interpretation of helium charge exchange measurements at ASDEX Upgrade

    Science.gov (United States)

    Kappatou, A.; McDermott, R. M.; Pütterich, T.; Dux, R.; Geiger, B.; Jaspers, R. J. E.; Donné, A. J. H.; Viezzer, E.; Cavedon, M.; the ASDEX Upgrade Team

    2018-05-01

    The analysis of the charge exchange measurements of helium is hindered by an additional emission contributing to the spectra, the helium ‘plume’ emission (Fonck et al 1984 Phys. Rev. A 29 3288), which complicates the interpretation of the measurements. The plume emission is indistinguishable from the active charge exchange signal when standard analysis of the spectra is applied and its intensity is of comparable magnitude for ASDEX Upgrade conditions, leading to a significant overestimation of the He2+ densities if not properly treated. Furthermore, the spectral line shape of the plume emission is non-Gaussian and leads to wrong ion temperature and flow measurements when not taken into account. A kinetic model for the helium plume emission has been developed for ASDEX Upgrade. The model is benchmarked against experimental measurements and is shown to capture the underlying physics mechanisms of the plume effect, as it can reproduce the experimental spectra and provides consistent values for the ion temperature, plasma rotation, and He2+ density.

  4. Optimal Physics Parameterization Scheme Combination of the Weather Research and Forecasting Model for Seasonal Precipitation Simulation over Ghana

    Directory of Open Access Journals (Sweden)

    Richard Yao Kuma Agyeman

    2017-01-01

    Full Text Available Seasonal predictions of precipitation, among others, are important to help mitigate the effects of drought and floods on agriculture, hydropower generation, disasters, and many more. This work seeks to obtain a suitable combination of physics schemes of the Weather Research and Forecasting (WRF model for seasonal precipitation simulation over Ghana. Using the ERA-Interim reanalysis as forcing data, simulation experiments spanning eight months (from April to November were performed for two different years: a dry year (2001 and a wet year (2008. A double nested approach was used with the outer domain at 50 km resolution covering West Africa and the inner domain covering Ghana at 10 km resolution. The results suggest that the WRF model generally overestimated the observed precipitation by a mean value between 3% and 64% for both years. Most of the scheme combinations overestimated (underestimated precipitation over coastal (northern zones of Ghana for both years but estimated precipitation reasonably well over forest and transitional zones. On the whole, the combination of WRF Single-Moment 6-Class Microphysics Scheme, Grell-Devenyi Ensemble Cumulus Scheme, and Asymmetric Convective Model Planetary Boundary Layer Scheme simulated the best temporal pattern and temporal variability with the least relative bias for both years and therefore is recommended for Ghana.
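
The relative-bias metric used above to rank scheme combinations can be sketched as follows (the precipitation values are invented, not the study's):

```python
# Relative bias of simulated vs. observed totals, in percent:
# positive = overestimation, negative = underestimation.
def relative_bias_percent(sim, obs):
    return 100.0 * (sum(sim) - sum(obs)) / sum(obs)

# Hypothetical monthly precipitation totals (mm):
print(round(relative_bias_percent([110, 95, 130], [100, 90, 120]), 1))  # -> 8.1
```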

  5. Finite-Element Modeling of Timber Joints with Punched Metal Plate Fasteners

    DEFF Research Database (Denmark)

    Ellegaard, Peter

    2006-01-01

    The focus of this paper is to describe the idea and the theory behind a finite-element model developed for analysis of timber trusses with punched metal plate fasteners (nail plates). The finite-element model includes the semirigid and nonlinear behavior of the joints (nonlinear nail and plate...... elements) and contact between timber beams, if any (bilinear contact elements). The timber beams have linear-elastic properties. The section forces needed for design of the joints are given directly by the finite-element model, since special elements are used to model the nail groups and the nail plate...... the behavior of the joints very well at lower load levels. At higher load levels the stiffness is overestimated due to development of cracks in the timber and the linear-elastic timber properties in the finite-element model....
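
A bilinear contact law of the kind mentioned for the contact elements between timber beams can be sketched as below (branch stiffnesses, units, and the breakpoint are hypothetical, not the paper's values):

```python
# Bilinear contact element: no force while the gap is open, then two
# linear stiffness branches once the surfaces overlap.
def contact_force(overlap_mm, k1=50.0, k2=200.0, break_mm=0.5):
    if overlap_mm <= 0.0:            # gap open: no contact force
        return 0.0
    if overlap_mm <= break_mm:       # first linear branch
        return k1 * overlap_mm
    # second, stiffer branch beyond the breakpoint
    return k1 * break_mm + k2 * (overlap_mm - break_mm)

print(contact_force(1.0))  # -> 125.0 (hypothetical kN)
```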

  6. Beyond mean-field approximations for accurate and computationally efficient models of on-lattice chemical kinetics

    Science.gov (United States)

    Pineda, M.; Stamatakis, M.

    2017-07-01

    Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computation cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intense than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.
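
A minimal mean-field sketch (not the paper's NO-oxidation model): the coverage of a single adsorbate under adsorption/desorption, ignoring all spatial correlations, which is exactly the approximation the cluster expansions above improve upon. Rates are hypothetical.

```python
# Mean-field rate equation d(theta)/dt = k_ads*(1 - theta) - k_des*theta,
# integrated with forward Euler until steady state.
def mean_field_coverage(k_ads=1.0, k_des=0.5, dt=1e-3, steps=20000):
    theta = 0.0
    for _ in range(steps):
        theta += dt * (k_ads * (1.0 - theta) - k_des * theta)
    return theta

print(round(mean_field_coverage(), 3))  # -> 0.667, i.e. k_ads/(k_ads + k_des)
```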

  7. Fruit tree model for uptake of organic compounds from soil

    DEFF Research Database (Denmark)

    Trapp, Stefan; Rasmussen, D.; Samsoe-Petersen, L.

    2003-01-01

    -state, and an example calculation is given. The Fruit Tree Model is compared to the empirical equation of Travis and Arms (T&A), and to results from fruits, collected in contaminated areas. For polar compounds, both T&A and the Fruit Tree Model predict bioconcentration factors fruit to soil (BCF, wet weight based......) of > 1. No empirical data are available to support this prediction. For very lipophilic compounds (log K-OW > 5), T&A overestimates the uptake. The conclusion from the Fruit Tree Model is that the transfer of lipophilic compounds into fruits is not relevant. This was also found by an empirical study...... with PCDD/F. According to the Fruit Tree Model, polar chemicals are transferred efficiently into fruits, but empirical data to verify these predictions are lacking....

  8. Numerical simulation of heat transfer process in solar enhanced natural draft dry cooling tower with radiation model

    International Nuclear Information System (INIS)

    Wang, Qiuhuan; Zhu, Jialing; Lu, Xinli

    2017-01-01

    Graphical abstract: A 3-D numerical model integrated with a discrete ordinate (DO) solar radiation model (considering solar radiation effect in the room of solar collector) was developed to investigate the influence of solar radiation intensity and ambient pressure on the efficiency and thermal characteristics of the SENDDCT. Our study shows that introducing such a radiation model can more accurately simulate the heat transfer process in the SENDDCT. Calculation results indicate that previous simulations overestimated solar energy obtained by the solar collector and underestimated the heat loss. The cooling performance is improved when the solar radiation intensity or ambient pressure is high. Air temperature and velocity increase with the increase of solar radiation intensity. But ambient pressure has inverse effects on the changes of air temperature and velocity. Under a condition that the solar load increases but the ambient pressure decreases, the increased rate of heat transferred in the heat exchanger is not obvious. Thus the performance of the SENDDCT not only depends on the solar radiation intensity but also depends on the ambient pressure. - Highlights: • A radiation model has been introduced to accurately simulate heat transfer process. • Heat transfer rate would be overestimated if the radiation model was not introduced. • The heat transfer rate is approximately proportional to solar radiation intensity. • The higher the solar radiation or ambient pressure, the better SENDDCT performance. - Abstract: Solar enhanced natural draft dry cooling tower (SENDDCT) is more efficient than natural draft dry cooling tower by utilizing solar radiation in arid region. A three-dimensional numerical model considering solar radiation effect was developed to investigate the influence of solar radiation intensity and ambient pressure on the efficiency and thermal characteristics of SENDDCT. The numerical simulation outcomes reveal that a model with consideration of

  9. Deterministic and stochastic trends in the Lee-Carter mortality model

    DEFF Research Database (Denmark)

    Callot, Laurent; Haldrup, Niels; Kallestrup-Lamb, Malene

    2015-01-01

    The Lee and Carter (1992) model assumes that the deterministic and stochastic time series dynamics load with identical weights when describing the development of age-specific mortality rates. Effectively this means that the main characteristics of the model simplify to a random walk model with age...... mortality data. We find empirical evidence that this feature of the Lee–Carter model overly restricts the system dynamics and we suggest to separate the deterministic and stochastic time series components at the benefit of improved fit and forecasting performance. In fact, we find that the classical Lee......–Carter model will otherwise overestimate the reduction of mortality for the younger age groups and will underestimate the reduction of mortality for the older age groups. In practice, our recommendation means that the Lee–Carter model instead of a one-factor model should be formulated as a two- (or several...
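
The classical Lee-Carter structure log m(x,t) = a_x + b_x k_t can be estimated in a first stage as sketched below (using the common approximation: a_x as the time mean, k_t as the age sum of residuals, b_x by least squares on k_t; the mortality data are synthetic, not real):

```python
# First-stage Lee-Carter estimation under the norms sum(b_x)=1, sum(k_t)=0.
def lee_carter(logm):                      # logm[x][t]: log mortality rates
    X, T = len(logm), len(logm[0])
    a = [sum(row) / T for row in logm]     # a_x: time mean of log rates
    resid = [[logm[x][t] - a[x] for t in range(T)] for x in range(X)]
    k = [sum(resid[x][t] for x in range(X)) for t in range(T)]  # mortality index
    ss = sum(kt * kt for kt in k)
    b = [sum(resid[x][t] * k[t] for t in range(T)) / ss for x in range(X)]
    return a, b, k

# Synthetic example: two age groups, a declining-mortality time index.
true_a, true_b, true_k = [-4.0, -2.0], [0.7, 0.3], [2.0, 1.0, 0.0, -1.0, -2.0]
logm = [[true_a[x] + true_b[x] * kt for kt in true_k] for x in range(2)]
a, b, k = lee_carter(logm)
print([round(v, 2) for v in b])  # -> [0.7, 0.3]
```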

  10. Predicting significant torso trauma.

    Science.gov (United States)

    Nirula, Ram; Talmor, Daniel; Brasel, Karen

    2005-07-01

    Identification of motor vehicle crash (MVC) characteristics associated with thoracoabdominal injury would advance the development of automatic crash notification systems (ACNS) by improving triage and response times. Our objective was to determine the relationships between MVC characteristics and thoracoabdominal trauma to develop a torso injury probability model. Drivers involved in crashes from 1993 to 2001 within the National Automotive Sampling System were reviewed. Relationships between torso injury and MVC characteristics were assessed using multivariate logistic regression. Receiver operating characteristic curves were used to compare the model to current ACNS models. There were a total of 56,466 drivers. Age, ejection, braking, avoidance, velocity, restraints, passenger-side impact, rollover, and vehicle weight and type were associated with injury (p < 0.05). The area under the receiver operating characteristic curve (83.9) was significantly greater than current ACNS models. We have developed a thoracoabdominal injury probability model that may improve patient triage when used with ACNS.
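
A multivariate logistic model of the kind described maps crash characteristics to an injury probability through a logistic link; the sketch below uses invented coefficients, not those fitted in the study:

```python
import math

# Illustrative logistic injury-probability model (all weights hypothetical).
def injury_probability(delta_v_kph, ejected, rollover, belted):
    z = (-6.0 + 0.08 * delta_v_kph + 2.0 * ejected
         + 1.2 * rollover - 0.9 * belted)   # linear predictor
    return 1.0 / (1.0 + math.exp(-z))       # logistic link

p = injury_probability(delta_v_kph=50, ejected=0, rollover=1, belted=1)
print(round(p, 3))  # -> 0.154
```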

  11. Transport simulations of TFTR experiments to test theoretical models for χe and χi

    International Nuclear Information System (INIS)

    Redi, M.H.; Bateman, G.

    1990-08-01

    1-1/2-d BALDUR transport code predictions using recent theoretically-based models for thermal and particle transport are compared to measured profiles of electron plasma density and electron and ion temperatures for TFTR ohmic, L-mode and supershot discharges. The profile-consistent drift wave model is found to overestimate ion temperatures at high heating powers, so that a third mode or loss process is needed in addition to drift wave transport (TEM, η_i) and an edge loss model. None of several versions of local multiple-mode models, using the 1989 Carreras-Diamond resistive ballooning model, gives T_e, T_i within 20% for all three TFTR regimes studied. 36 refs., 7 figs., 4 tabs

  12. Evaluation of different methods to model near-surface turbulent fluxes for a mountain glacier in the Cariboo Mountains, BC, Canada

    Science.gov (United States)

    Radić, Valentina; Menounos, Brian; Shea, Joseph; Fitzpatrick, Noel; Tessema, Mekdes A.; Déry, Stephen J.

    2017-12-01

    As part of surface energy balance models used to simulate glacier melting, choosing parameterizations to adequately estimate turbulent heat fluxes is extremely challenging. This study aims to evaluate a set of four aerodynamic bulk methods (labeled as C methods), commonly used to estimate turbulent heat fluxes for a sloped glacier surface, and two less commonly used bulk methods developed from katabatic flow models. The C methods differ in their parameterizations of the bulk exchange coefficient that relates the fluxes to the near-surface measurements of mean wind speed, air temperature, and humidity. The methods' performance in simulating 30 min sensible- and latent-heat fluxes is evaluated against the measured fluxes from an open-path eddy-covariance (OPEC) method. The evaluation is performed at a point scale of a mountain glacier, using one-level meteorological and OPEC observations from multi-day periods in the 2010 and 2012 summer seasons. The analysis of the two independent seasons yielded the same key findings, which include the following: first, the bulk method, with or without the commonly used Monin-Obukhov (M-O) stability functions, overestimates the turbulent heat fluxes over the observational period, mainly due to a substantial overestimation of the friction velocity. This overestimation is most pronounced during the katabatic flow conditions, corroborating the previous findings that the M-O theory works poorly in the presence of a low wind speed maximum. Second, the method based on a katabatic flow model (labeled as the KInt method) outperforms any C method in simulating the friction velocity; however, the C methods outperform the KInt method in simulating the sensible-heat fluxes. Third, the best overall performance is given by a hybrid method, which combines the KInt approach with the C method; i.e., it parameterizes eddy viscosity differently than eddy diffusivity. An error analysis reveals that the uncertainties in the measured meteorological
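
The bulk-aerodynamic form underlying the C methods is QH = rho * cp * CH * u * (Ta - Ts), with a neutral exchange coefficient CH = k^2 / ln(z/z0)^2 before any M-O stability correction is applied. The sketch below uses assumed roughness length, measurement height, and air properties:

```python
import math

# Neutral bulk-aerodynamic sensible-heat flux over a melting glacier surface
# (Ts = 0 C). All parameter values are illustrative assumptions.
def sensible_heat_flux(u, Ta, Ts=0.0, z=2.0, z0=1e-3,
                       rho=1.1, cp=1005.0, k=0.4):
    CH = (k / math.log(z / z0)) ** 2      # neutral bulk transfer coefficient
    return rho * cp * CH * u * (Ta - Ts)  # W m^-2, positive toward the surface

print(round(sensible_heat_flux(u=4.0, Ta=5.0), 1))  # -> 61.2
```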


  14. VALIDITY OF GARBER MODEL IN PREDICTING PAVEMENT CONDITION INDEX OF FLEXIBLE PAVEMENT IN KERBALA CITY

    Directory of Open Access Journals (Sweden)

    Hussein A. Ewadh

    2018-05-01

    Full Text Available Pavement Condition Index (PCI) is one of the fundamentals of a pavement maintenance management system (PMMS) and is used to evaluate current and future pavement condition. It is important in decision making to determine maintenance needs, types of treatment, and maintenance priority. The aim of this research is to estimate the PCI value for flexible-pavement urban roads in the study area (Kerbala city) using the model developed by Garber et al. Based on previous research, data were collected for variables that have a significant impact on pavement condition. Data for pavement age (AGE), average daily traffic (ADT), and structural number (SN) were collected for 44 sections of the road network. A field survey (destructive core test) and a laboratory test (Marshall test) were used to determine the structural capacity of the pavement layers (SN). The condition index (CI) output from the developed model was compared with the PCI output of PAVER 6.5.7 using a statistical analysis test. At the 95% confidence level, the developed model overestimates the value of CI relative to the PCI estimated from PAVER 6.5.7 (R = 0.771 for the 44 arterial and collector sections).

  15. Multi-variable evaluation of hydrological model predictions for a headwater basin in the Canadian Rocky Mountains

    Directory of Open Access Journals (Sweden)

    X. Fang

    2013-04-01

    Full Text Available One of the purposes of the Cold Regions Hydrological Modelling platform (CRHM is to diagnose inadequacies in the understanding of the hydrological cycle and its simulation. A physically based hydrological model including a full suite of snow and cold regions hydrology processes as well as warm season, hillslope and groundwater hydrology was developed in CRHM for application in the Marmot Creek Research Basin (~ 9.4 km2, located in the Front Ranges of the Canadian Rocky Mountains. Parameters were selected from digital elevation model, forest, soil, and geological maps, and from the results of many cold regions hydrology studies in the region and elsewhere. Non-calibrated simulations were conducted for six hydrological years during the period 2005–2011 and were compared with detailed field observations of several hydrological cycle components. The results showed good model performance for snow accumulation and snowmelt compared to the field observations for four seasons during the period 2007–2011, with a small bias and normalised root mean square difference (NRMSD ranging from 40 to 42% for the subalpine conifer forests and from 31 to 67% for the alpine tundra and treeline larch forest environments. Overestimation or underestimation of the peak SWE ranged from 1.6 to 29%. Simulations matched well with the observed unfrozen moisture fluctuation in the top soil layer at a lodgepole pine site during the period 2006–2011, with a NRMSD ranging from 17 to 39%, but with consistent overestimation of 7 to 34%. Evaluations of seasonal streamflow during the period 2006–2011 revealed that the model generally predicted well compared to observations at the basin scale, with a NRMSD of 60% and small model bias (1%, while at the sub-basin scale NRMSDs were larger, ranging from 72 to 76%, though overestimation or underestimation for the cumulative seasonal discharge was within 29%. Timing of discharge was better predicted at the Marmot Creek basin outlet
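
The normalised root mean square difference (NRMSD) quoted above can be sketched as below; normalising by the observed range is one common convention, and the study's exact choice may differ. Values are invented.

```python
import math

# NRMSD in percent: RMSD of (sim - obs) normalised by the observed range.
def nrmsd_percent(sim, obs):
    rmsd = math.sqrt(sum((s - o) ** 2 for s, o in zip(sim, obs)) / len(obs))
    return 100.0 * rmsd / (max(obs) - min(obs))

print(round(nrmsd_percent([2, 4, 7], [1, 5, 6]), 1))  # -> 20.0
```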

  16. CAUSES: Attribution of Surface Radiation Biases in NWP and Climate Models near the U.S. Southern Great Plains

    Science.gov (United States)

    Van Weverberg, K.; Morcrette, C. J.; Petch, J.; Klein, S. A.; Ma, H.-Y.; Zhang, C.; Xie, S.; Tang, Q.; Gustafson, W. I.; Qian, Y.; Berg, L. K.; Liu, Y.; Huang, M.; Ahlgrimm, M.; Forbes, R.; Bazile, E.; Roehrig, R.; Cole, J.; Merryfield, W.; Lee, W.-S.; Cheruy, F.; Mellul, L.; Wang, Y.-C.; Johnson, K.; Thieman, M. M.

    2018-04-01

    Many Numerical Weather Prediction (NWP) and climate models exhibit too warm lower tropospheres near the midlatitude continents. The warm bias has been shown to coincide with important surface radiation biases that likely play a critical role in the inception or the growth of the warm bias. This paper presents an attribution study on the net radiation biases in nine model simulations, performed in the framework of the CAUSES project (Clouds Above the United States and Errors at the Surface). Contributions from deficiencies in the surface properties, clouds, water vapor, and aerosols are quantified, using an array of radiation measurement stations near the Atmospheric Radiation Measurement Southern Great Plains site. Furthermore, an in-depth analysis is shown to attribute the radiation errors to specific cloud regimes. The net surface shortwave radiation is overestimated in all models throughout most of the simulation period. Cloud errors are shown to contribute most to this overestimation, although nonnegligible contributions from the surface albedo exist in most models. Missing deep cloud events and/or simulating deep clouds with too weak cloud radiative effects dominate in the cloud-related radiation errors. Some models have compensating errors between excessive occurrence of deep cloud but largely underestimating their radiative effect, while other models miss deep cloud events altogether. Surprisingly, even the latter models tend to produce too much and too frequent afternoon surface precipitation. This suggests that rather than issues with the triggering of deep convection, cloud radiative deficiencies are related to too weak convective cloud detrainment and too large precipitation efficiencies.

  17. A housing stock model of non-heating end-use energy in England verified by aggregate energy use data

    International Nuclear Information System (INIS)

    Lorimer, Stephen

    2012-01-01

    This paper proposes a housing stock model of non-heating end-use energy for England that can be verified using aggregate energy use data available for small areas. These end-uses, commonly referred to as appliances and lighting, are a rapidly increasing part of residential energy demand. This paper proposes a model that can be verified using aggregated data of electricity meters in small areas and census data on housing. Secondly, any differences that open up between major collections of housing could potentially be resolved by using data from frequently updated expenditure surveys. For the year 2008, the model overestimated domestic non-heating energy use at the national scale by 1.5%. This model was then used on the residential sector with various area classifications, which found that rural and suburban areas were generally underestimated by up to 3.3% and urban areas overestimated by up to 5.2% with the notable exception of “professional city life” classifications. The model proposed in this paper has the potential to be a verifiable and adaptable model for non-heating end-use energy in households in England for the future. - Highlights: ► Housing stock energy model was developed for end-uses outside of heating for UK context. ► This entailed changes to the building energy model that serves as the bottom of the stock model. ► The model is adaptable to reflect rapid changes in consumption between major housing surveys. ► Verification was done against aggregated consumption data and for the first time uses a measured size of the housing stock. ► The verification process revealed spatial variations in consumption patterns for future research.

  18. The importance of accurate glacier albedo for estimates of surface mass balance on Vatnajökull: evaluating the surface energy budget in a regional climate model with automatic weather station observations

    Science.gov (United States)

    Steffensen Schmidt, Louise; Aðalgeirsdóttir, Guðfinna; Guðmundsson, Sverrir; Langen, Peter L.; Pálsson, Finnur; Mottram, Ruth; Gascoin, Simon; Björnsson, Helgi

    2017-07-01

    A simulation of the surface climate of Vatnajökull ice cap, Iceland, carried out with the regional climate model HIRHAM5 for the period 1980-2014, is used to estimate the evolution of the glacier surface mass balance (SMB). This simulation uses a new snow albedo parameterization that allows albedo to exponentially decay with time and is surface temperature dependent. The albedo scheme utilizes a new background map of the ice albedo created from observed MODIS data. The simulation is evaluated against observed daily values of weather parameters from five automatic weather stations (AWSs) from the period 2001-2014, as well as in situ SMB measurements from the period 1995-2014. The model agrees well with observations at the AWS sites, albeit with a general underestimation of the net radiation. This is due to an underestimation of the incoming radiation and a general overestimation of the albedo. The average modelled albedo is overestimated in the ablation zone, which we attribute to an overestimation of the thickness of the snow layer and not taking the surface darkening from dirt and volcanic ash deposition during dust storms and volcanic eruptions into account. A comparison with the specific summer, winter, and net mass balance for the whole of Vatnajökull (1995-2014) shows a good overall fit during the summer, with a small mass balance underestimation of 0.04 m w.e. on average, whereas the winter mass balance is overestimated by on average 0.5 m w.e. due to too large precipitation at the highest areas of the ice cap. A simple correction of the accumulation at the highest points of the glacier reduces this to 0.15 m w.e. Here, we use HIRHAM5 to simulate the evolution of the SMB of Vatnajökull for the period 1981-2014 and show that the model provides a reasonable representation of the SMB for this period. However, a major source of uncertainty in the representation of the SMB is the representation of the albedo, and processes currently not accounted for in RCMs
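
An exponential snow-albedo decay of the kind described can be sketched as below (the fresh/old albedo limits and the decay timescale are illustrative, not HIRHAM5's values):

```python
import math

# Snow albedo relaxing exponentially from a fresh-snow value toward an
# aged-snow floor with e-folding time tau (days). All parameters assumed.
def snow_albedo(t_days, a_fresh=0.85, a_old=0.45, tau_days=21.7):
    return a_old + (a_fresh - a_old) * math.exp(-t_days / tau_days)

print(round(snow_albedo(0), 2))   # -> 0.85, fresh snow
print(round(snow_albedo(30), 2))  # -> 0.55, decaying toward 0.45
```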


  20. Comparison of experimental slant electron content and IRI model for moderate solar activity conditions

    International Nuclear Information System (INIS)

    Cabrera, M.A.; Ezquer, R.G.; Mosert, M.; Jadur, C.A.

    2002-01-01

    The International Reference Ionosphere (IRI) model gives only the vertical electron content (VTEC). In this paper the slant electron content (SEC) for the ATS 6 satellite - Palehua (21.4 deg. N, 201.9 deg. E) radio signal path for a year of moderate solar activity is calculated. To this end, the IRI model is used to obtain the electron density at different points along the signal path. Equinoxes and solstices are considered. Measurements obtained with the Faraday rotation technique at Palehua are compared with the modelled values. Although overestimation was observed during night hours, the results show good SEC predictions for several hours around the period of maximum ionisation, suggesting that it would be possible to model the SEC using IRI. (author)
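
The slant content is the line integral of electron density along the ray path. A toy numerical version is sketched below, using a Chapman profile with invented parameters and flat-Earth geometry rather than IRI output:

```python
import math

# Chapman-layer electron density (electrons / m^3); parameters assumed.
def chapman_ne(h_km, nmax=1e12, hmax=350.0, H=60.0):
    z = (h_km - hmax) / H
    return nmax * math.exp(1.0 - z - math.exp(-z))

# Slant electron content via midpoint rule; ds = dh / sin(elevation)
# in the flat-Earth approximation. Result in TEC units (1e16 el/m^2).
def slant_ec(elev_deg, h_top=1000.0, dh=1.0):
    sin_e = math.sin(math.radians(elev_deg))
    sec, h = 0.0, 0.0
    while h < h_top:
        sec += chapman_ne(h + 0.5 * dh) * (dh * 1e3) / sin_e
        h += dh
    return sec / 1e16

print(round(slant_ec(90.0), 1))  # -> 16.3 TECU for the vertical path
```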

  1. A Comparative Study of CFD Models of a Real Wind Turbine in Solar Chimney Power Plants

    Directory of Open Access Journals (Sweden)

    Ehsan Gholamalizadeh

    2017-10-01

    Full Text Available A solar chimney power plant consists of four main parts, a solar collector, a chimney, an energy storage layer, and a wind turbine. So far, several investigations on the performance of the solar chimney power plant have been conducted. Among them, different approaches have been applied to model the turbine inside the system. In particular, a real wind turbine coupled to the system was simulated using computational fluid dynamics (CFD in three investigations. Gholamalizadeh et al. simulated a wind turbine with the same blade profile as the Manzanares SCPP’s turbine (FX W-151-A blade profile, while a CLARK Y blade profile was modelled by Guo et al. and Ming et al. In this study, simulations of the Manzanares prototype were carried out using the CFD model developed by Gholamalizadeh et al. Then, results obtained by modelling different turbine blade profiles at different turbine rotational speeds were compared. The results showed that a turbine with the CLARK Y blade profile significantly overestimates the value of the pressure drop across the Manzanares prototype turbine as compared to the FX W-151-A blade profile. In addition, modelling of both blade profiles led to very similar trends in changes in turbine efficiency and power output with respect to rotational speed.

  2. A stepped leader model for lightning including charge distribution in branched channels

    Energy Technology Data Exchange (ETDEWEB)

    Shi, Wei; Zhang, Li [School of Electrical Engineering, Shandong University, Jinan 250061 (China); Li, Qingmin, E-mail: lqmeee@ncepu.edu.cn [Beijing Key Lab of HV and EMC, North China Electric Power University, Beijing 102206 (China); State Key Lab of Alternate Electrical Power System with Renewable Energy Sources, Beijing 102206 (China)

    2014-09-14

    The stepped leader process in negative cloud-to-ground lightning plays a vital role in lightning protection analysis. As lightning discharge usually presents significant branched or tortuous channels, the charge distribution along the branched channels and the stochastic feature of stepped leader propagation were investigated in this paper. The charge density along the leader channel and the charge in the leader tip for each lightning branch were approximated by introducing branch correlation coefficients. In combination with geometric characteristics of natural lightning discharge, a stochastic stepped leader propagation model was presented based on the fractal theory. By comparing simulation results with the statistics of natural lightning discharges, it was found that the fractal dimension of lightning trajectory in simulation was in the range of that observed in nature and the calculation results of electric field at ground level were in good agreement with the measurements of a negative flash, which shows the validity of this proposed model. Furthermore, a new equation to estimate the lightning striking distance to flat ground was suggested based on the present model. The striking distance obtained by this new equation is smaller than the value estimated by previous equations, which indicates that the traditional equations may somewhat overestimate the attractive effect of the ground.
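The fractal-dimension comparison mentioned above can be illustrated with a generic box-counting estimate. This is not the authors' leader-propagation model, just a standard sketch of how a trajectory's fractal dimension is measured; it is applied here to a trivially straight "channel", which should come out close to dimension 1, whereas tortuous, branched lightning trajectories yield higher values.

```python
import math

def box_count_dimension(points, sizes=(4, 8, 16, 32)):
    """Estimate the fractal dimension of a point set in the unit square by
    counting occupied grid boxes at several resolutions and fitting the
    least-squares slope of log N(boxes) against log(grid resolution)."""
    logs = []
    for n in sizes:
        boxes = {(int(x * n), int(y * n)) for x, y in points}
        logs.append((math.log(n), math.log(len(boxes))))
    mx = sum(l for l, _ in logs) / len(logs)
    my = sum(c for _, c in logs) / len(logs)
    num = sum((l - mx) * (c - my) for l, c in logs)
    den = sum((l - mx) ** 2 for l, _ in logs)
    return num / den  # slope = box-counting dimension

# a straight, unbranched vertical "channel" in the unit square
line = [(0.5, i / 2000.0) for i in range(2000)]
dim = box_count_dimension(line)
```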

  4. Model training across multiple breeding cycles significantly improves genomic prediction accuracy in rye (Secale cereale L.).

    Science.gov (United States)

    Auinger, Hans-Jürgen; Schönleben, Manfred; Lehermeier, Christina; Schmidt, Malthe; Korzun, Viktor; Geiger, Hartwig H; Piepho, Hans-Peter; Gordillo, Andres; Wilde, Peer; Bauer, Eva; Schön, Chris-Carolin

    2016-11-01

    Genomic prediction accuracy can be significantly increased by model calibration across multiple breeding cycles as long as selection cycles are connected by common ancestors. In hybrid rye breeding, application of genome-based prediction is expected to increase selection gain because of long selection cycles in population improvement and development of hybrid components. Essentially two prediction scenarios arise: (1) prediction of the genetic value of lines from the same breeding cycle in which model training is performed and (2) prediction of lines from subsequent cycles. It is the latter from which a reduction in cycle length and consequently the strongest impact on selection gain is expected. We empirically investigated genome-based prediction of grain yield, plant height and thousand kernel weight within and across four selection cycles of a hybrid rye breeding program. Prediction performance was assessed using genomic and pedigree-based best linear unbiased prediction (GBLUP and PBLUP). A total of 1040 S2 lines were genotyped with 16k SNPs and each year testcrosses of 260 S2 lines were phenotyped in seven or eight locations. The performance gap between GBLUP and PBLUP increased significantly for all traits when model calibration was performed on aggregated data from several cycles. Prediction accuracies obtained from cross-validation were in the order of 0.70 for all traits when data from all cycles (N_CS = 832) were used for model training and exceeded within-cycle accuracies in all cases. As long as selection cycles are connected by a sufficient number of common ancestors and prediction accuracy has not reached a plateau when increasing sample size, aggregating data from several preceding cycles is recommended for predicting genetic values in subsequent cycles despite decreasing relatedness over time.
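The GBLUP step can be sketched through its ridge-regression equivalent (RR-BLUP), where marker effects are shrunk by solving (Z'Z + λI)u = Z'y. The snippet below is a generic illustration with simulated genotypes, not the study's pipeline; λ, the marker count, the sample size, and the simulated effects are all arbitrary assumptions.

```python
import random

def rrblup_effects(Z, y, lam):
    """RR-BLUP marker effects: solve (Z'Z + lam*I) u = Z'y by Gaussian
    elimination. Equivalent to GBLUP with G = ZZ'/m under standard scaling."""
    m = len(Z[0])
    A = [[sum(Z[i][j] * Z[i][k] for i in range(len(Z))) + (lam if j == k else 0.0)
          for k in range(m)] for j in range(m)]
    b = [sum(Z[i][j] * y[i] for i in range(len(Z))) for j in range(m)]
    for col in range(m):                      # elimination with partial pivoting
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    u = [0.0] * m
    for r in range(m - 1, -1, -1):            # back substitution
        u[r] = (b[r] - sum(A[r][c] * u[c] for c in range(r + 1, m))) / A[r][r]
    return u

random.seed(1)
true_u = [1.0, -0.5, 0.0, 2.0]                         # simulated marker effects
Z = [[random.choice([0, 1, 2]) for _ in true_u] for _ in range(50)]  # genotype codes
y = [sum(z * u for z, u in zip(row, true_u)) + random.gauss(0, 0.1) for row in Z]
u_hat = rrblup_effects(Z, y, lam=0.5)
```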

  5. Validation of a Process-Based Agro-Ecosystem Model (Agro-IBIS) for Maize in Xinjiang, Northwest China

    Directory of Open Access Journals (Sweden)

    Tureniguli Amuti

    2018-03-01

    Full Text Available Agricultural oasis expansion and intensive management practices have occurred in arid and semiarid regions of China during the last few decades. Accordingly, regional carbon and water budgets have been profoundly impacted by agroecosystems in these regions. Therefore, study of the methods used to accurately estimate energy, water, and carbon exchanges is becoming increasingly important. Process-based models can represent the complex processes between land and atmosphere among agricultural ecosystems. However, before the models can be applied they must be validated under different environmental and climatic conditions. In this study, a process-based agricultural ecosystem model (Agro-IBIS) was validated for maize crops using 3 years of soil and biometric measurements at the Wulanwusu agrometeorological site (WAS) located in the Shihezi oasis in Xinjiang, northwest China. The model satisfactorily represented leaf area index (LAI) during the growing season, simulating its peak values within the magnitude of 0–10%. The total biomass carbon was overestimated by 15%, 8%, and 16% in 2004, 2005, and 2006, respectively. The model satisfactorily simulated the soil temperature (0–10 cm) and volumetric water content (VWC) (0–25 cm) of farmland during the growing season. However, it overestimated soil temperature by approximately 4 °C and VWC by 15–30% during the winter, coinciding with the period of no vegetation cover in Xinjiang. Overall, the results indicate that the model could represent crop growth, and seems to be applicable in multiple sites in arid oasis agroecosystems of Xinjiang. Future application of the model will impose more comprehensive validation using eddy covariance flux data, and consider including dynamics of crop residue and improving characterization of the final stage of leaf development.

  6. Inferring Muscle-Tendon Unit Power from Ankle Joint Power during the Push-Off Phase of Human Walking: Insights from a Multiarticular EMG-Driven Model.

    Directory of Open Access Journals (Sweden)

    Eric C Honert

    Full Text Available Inverse dynamics joint kinetics are often used to infer contributions from underlying groups of muscle-tendon units (MTUs). However, such interpretations are confounded by multiarticular (multi-joint) musculature, which can cause inverse dynamics to over- or under-estimate net MTU power. Misestimation of MTU power could lead to incorrect scientific conclusions, or to empirical estimates that misguide musculoskeletal simulations, assistive device designs, or clinical interventions. The objective of this study was to investigate the degree to which ankle joint power overestimates net plantarflexor MTU power during the Push-off phase of walking, due to the behavior of the flexor digitorum and hallucis longus (FDHL), multiarticular MTUs crossing the ankle and metatarsophalangeal (toe) joints. We performed a gait analysis study on six healthy participants, recording ground reaction forces, kinematics, and electromyography (EMG). Empirical data were input into an EMG-driven musculoskeletal model to estimate ankle power. This model enabled us to parse contributions from mono- and multi-articular MTUs, and required only one scaling and one time delay factor for each subject and speed, which were solved for based on empirical data. Net plantarflexing MTU power was computed by the model and quantitatively compared to inverse dynamics ankle power. The EMG-driven model was able to reproduce inverse dynamics ankle power across a range of gait speeds (R2 ≥ 0.97), while also providing MTU-specific power estimates. We found that FDHL dynamics caused ankle power to slightly overestimate net plantarflexor MTU power, but only by ~2–7%. During Push-off, FDHL MTU dynamics do not substantially confound the inference of net plantarflexor MTU power from inverse dynamics ankle power. However, other methodological limitations may cause inverse dynamics to overestimate net MTU power; for instance, due to rigid-body foot assumptions. Moving forward, the EMG-driven modeling

  7. A Snow Density Dataset for Improving Surface Boundary Conditions in Greenland Ice Sheet Firn Modeling

    Directory of Open Access Journals (Sweden)

    Robert S. Fausto

    2018-05-01

    Full Text Available The surface snow density of glaciers and ice sheets is of fundamental importance in converting volume to mass in both altimetry and surface mass balance studies, yet it is often poorly constrained. Site-specific surface snow densities are typically derived from empirical relations based on temperature and wind speed. These parameterizations commonly calculate the average density of the top meter of snow, thereby systematically overestimating snow density at the actual surface. Therefore, constraining surface snow density to the top 0.1 m can improve boundary conditions in high-resolution firn-evolution modeling. We have compiled an extensive dataset of 200 point measurements of surface snow density from firn cores and snow pits on the Greenland ice sheet. We find that surface snow density within 0.1 m of the surface has an average value of 315 kg m−3 with a standard deviation of 44 kg m−3, and has an insignificant annual air temperature dependency. We demonstrate that two widely-used surface snow density parameterizations dependent on temperature systematically overestimate surface snow density over the Greenland ice sheet by 17–19%, and that using a constant density of 315 kg m−3 may give superior results when applied in surface mass budget modeling.
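The reported 17–19% overestimation is a straightforward mean percentage bias; the worked sketch below reproduces a number in that range. Only the 315 kg m−3 observed mean and 44 kg m−3 standard deviation come from the abstract; the "parameterized" densities are hypothetical stand-ins for a top-1 m parameterization.

```python
def mean_bias_percent(model_vals, obs_vals):
    """Mean percentage bias of modelled values relative to observations."""
    m = sum(model_vals) / len(model_vals)
    o = sum(obs_vals) / len(obs_vals)
    return 100.0 * (m - o) / o

# observed top-0.1 m densities centred on 315 kg m-3 (std. dev. 44 kg m-3)
obs = [315 - 44, 315, 315 + 44]
# hypothetical top-1 m parameterization output near 370 kg m-3
param = [370.0, 372.0, 371.0]
bias = mean_bias_percent(param, obs)
```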

  8. Post-cracking tensile behaviour of steel-fibre-reinforced roller-compacted-concrete for FE modelling and design purposes

    International Nuclear Information System (INIS)

    Jafarifar, N.; Pilakoutas, K.; Angelakopoulos, H.; Bennett, T.

    2017-01-01

    Fracture of steel-fibre-reinforced-concrete occurs mostly in the form of a smeared crack band undergoing progressive microcracking. For FE modelling and design purposes, this crack band could be characterised by a stress-strain (σ-ε) relationship. For industrially-produced steel fibres, existing methodologies such as RILEM TC 162-TDF (2003) propose empirical equations to predict a trilinear σ-ε relationship directly from bending test results. This paper evaluates the accuracy of these methodologies and their applicability for roller-compacted-concrete and concrete incorporating steel fibres recycled from post-consumer tyres. It is shown that the energy absorption capacity is generally overestimated by these methodologies, sometimes up to 60%, for both conventional and roller-compacted concrete. Tensile behaviour of fibre-reinforced-concrete is estimated in this paper by inverse analysis of bending test results, examining a variety of concrete mixes and steel fibres. A multilinear relationship is proposed which largely eliminates the overestimation problem and can lead to safer designs.

  9. A Self-Determination Model of Childhood Exposure, Perceived Prevalence, Justification, and Perpetration of Intimate Partner Violence

    OpenAIRE

    Neighbors, Clayton; Walker, Denise D.; Mbilinyi, Lyungai F.; Zegree, Joan; Foster, Dawn W.; Roffman, Roger A.

    2013-01-01

    The present research was designed to evaluate self-determination theory as a framework for integrating factors associated with intimate partner violence (IPV) perpetration. The proposed model suggests that childhood exposure to parental violence may influence global motivational orientations which, in turn result in greater cognitive biases (overestimating the prevalence of IPV and justification of IPV) which, in turn, contribute to an individual’s decision to use abusive behavior. Participan...

  10. Sensitivity of Greenland Ice Sheet surface mass balance to surface albedo parameterization: a study with a regional climate model

    OpenAIRE

    Angelen, J. H.; Lenaerts, J. T. M.; Lhermitte, S.; Fettweis, X.; Kuipers Munneke, P.; Broeke, M. R.; Meijgaard, E.; Smeets, C. J. P. P.

    2012-01-01

    We present a sensitivity study of the surface mass balance (SMB) of the Greenland Ice Sheet, as modeled using a regional atmospheric climate model, to various parameter settings in the albedo scheme. The snow albedo scheme uses grain size as a prognostic variable and further depends on cloud cover, solar zenith angle and black carbon concentration. For the control experiment the overestimation of absorbed shortwave radiation (+6%) at the K-transect (west Greenland) for the period 2004–2009 is...

  11. Evaluating the AS-level Internet models: beyond topological characteristics

    International Nuclear Information System (INIS)

    Fan Zheng-Ping

    2012-01-01

    A large number of models have been proposed to model the Internet in the past decades. However, which models better describe the Internet remains an open problem. By analysing the evolving dynamics of the Internet, we suggest that at the autonomous system (AS) level, a suitable Internet model should at least be heterogeneous and have a linearly growing mechanism. More importantly, we show that the roles of topological characteristics in evaluating and differentiating Internet models are apparently over-estimated from an engineering perspective. Also, we find that an assortative network is not necessarily more robust than a disassortative network and that a smaller average shortest path length does not necessarily mean a higher robustness, which is different from previous observations. Our analytic results are helpful not only for the Internet, but also for other general complex networks. (interdisciplinary physics and related areas of science and technology)
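The (dis)assortativity property discussed above is the Pearson correlation of the degrees at either end of each edge. A minimal stdlib sketch, generic and not tied to the AS-level datasets used in the paper:

```python
def degree_assortativity(edges):
    """Pearson correlation of the degrees at the two endpoints of each edge."""
    deg = {}
    for u, v in edges:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
    xs, ys = [], []
    for u, v in edges:
        # count each edge in both directions so the measure is symmetric
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# a star graph is maximally disassortative: the hub connects only to leaves
star = [(0, i) for i in range(1, 6)]
r = degree_assortativity(star)
```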

  12. Modeling coupled nanoparticle aggregation and transport in porous media: a Lagrangian approach.

    Science.gov (United States)

    Taghavy, Amir; Pennell, Kurt D; Abriola, Linda M

    2015-01-01

    Changes in nanoparticle size and shape due to particle-particle interactions (i.e., aggregation or agglomeration) may significantly alter particle mobility and retention in porous media. To date, however, few modeling studies have considered the coupling of transport and particle aggregation processes. The majority of particle transport models employ an Eulerian modeling framework and are, consequently, limited in the types of collisions and aggregate sizes that can be considered. In this work, a more general Lagrangian modeling framework is developed and implemented to explore coupled nanoparticle aggregation and transport processes. The model was verified through comparison of model simulations to published results of an experimental and Eulerian modeling study (Raychoudhury et al., 2012) of carboxymethyl cellulose (CMC)-modified nano-sized zero-valent iron particle (nZVI) transport and retention in water-saturated sand columns. A model sensitivity analysis reveals the influence of influent particle concentration (ca. 70 to 700 mg/L), primary particle size (10-100 nm) and pore water velocity (ca. 1-6 m/day) on particle-particle, and, consequently, particle-collector interactions. Model simulations demonstrate that, when environmental conditions promote particle-particle interactions, neglecting aggregation effects can lead to under- or over-estimation of nanoparticle mobility. Results also suggest that the extent to which higher order particle-particle collisions influence aggregation kinetics will increase with the fraction of primary particles. This work demonstrates the potential importance of time-dependent aggregation processes on nanoparticle mobility and provides a numerical model capable of capturing/describing these interactions in water-saturated porous media. Copyright © 2014 Elsevier B.V. All rights reserved.
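A population-balance counterpart to the aggregation process described above is the discrete Smoluchowski coagulation equation. The explicit-Euler sketch below is a minimal generic illustration, not the paper's Lagrangian model: the constant collision kernel, time step, and finite size cutoff are all arbitrary assumptions.

```python
def smoluchowski_step(n, K, dt):
    """One explicit Euler step of the discrete Smoluchowski coagulation
    equation with constant kernel K; n[k] is the concentration of (k+1)-mers.
    Aggregates larger than len(n) are dropped (finite size cutoff)."""
    m = len(n)
    total = sum(n)
    out = list(n)
    for s in range(1, m + 1):
        # formation of s-mers from all collisions of an a-mer with an (s-a)-mer
        form = 0.5 * sum(K * n[a - 1] * n[s - a - 1] for a in range(1, s))
        # loss of s-mers by collision with any other particle
        loss = K * n[s - 1] * total
        out[s - 1] += (form - loss) * dt
    return out

# start from primary particles only; aggregation shifts number to larger sizes
state = [1.0, 0.0, 0.0, 0.0]
for _ in range(100):
    state = smoluchowski_step(state, K=0.1, dt=0.1)
```

The monotone decay of the primary-particle concentration and growth of dimers and trimers mirrors the abstract's point that neglecting aggregation misrepresents the particle population that actually interacts with collector surfaces.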

  13. Unconventional Constraints on Nitrogen Chemistry using DC3 Observations and Trajectory-based Chemical Modeling

    Science.gov (United States)

    Shu, Q.; Henderson, B. H.

    2017-12-01

    Chemical transport models underestimate nitrogen dioxide observations in the upper troposphere (UT). Previous research in the UT succeeded in combining model predictions with field campaign measurements to demonstrate that the nitric acid formation rate (HO + NO2 → HNO3 (R1)) is overestimated by 22% (Henderson et al., 2012). A subsequent publication (Seltzer et al., 2015) demonstrated that this single chemical constraint alters ozone and aerosol formation/composition. This work attempts to replicate previous chemical constraints with newer observations and a different modeling framework. We apply the previously successful constraint framework to Deep Convection Clouds and Chemistry (DC3), a more recent field campaign where simulated nitrogen imbalances still exist. Freshly convected air parcels identified in the DC3 dataset serve as initial coordinates for Lagrangian trajectories. Along each trajectory, we simulate the air parcel chemical state. Samples along the trajectories form ensembles that represent possible realizations of UT air parcels. We then apply Bayesian inference to constrain nitrogen chemistry and compare results to the existing literature. We anticipate that the results will confirm the overestimation of the HNO3 formation rate found in previous work and provide further constraints on other nitrogen reaction rate coefficients that affect terminal products from NOx. We will particularly focus on organic nitrate chemistry that laboratory literature has yet to fully address. The results will provide useful insights into nitrogen chemistry that affects climate and human health.

  14. Modeling Anti-HIV Activity of HEPT Derivatives Revisited. Multiregression Models Are Not Inferior Ones

    International Nuclear Information System (INIS)

    Basic, Ivan; Nadramija, Damir; Flajslik, Mario; Amic, Dragan; Lucic, Bono

    2007-01-01

    Several quantitative structure-activity studies of this data set containing 107 HEPT derivatives have been performed since 1997, using the same set of molecules with (more or less) different classes of molecular descriptors. Multivariate Regression (MR) and Artificial Neural Network (ANN) models were developed, and in each study the authors concluded that ANN models are superior to MR ones. We re-calculated multivariate regression models for this set of molecules using the same set of descriptors, and compared our results with the previous ones. Two main reasons for overestimation of the quality of the ANN models in previous studies compared with MR models are: (1) wrong calculation of the leave-one-out (LOO) cross-validated (CV) correlation coefficient for MR models in Luco et al., J. Chem. Inf. Comput. Sci. 37 392-401 (1997), and (2) incorrect estimation/interpretation of the leave-one-out (LOO) cross-validated and predictive performance and power of ANN models. A more precise and fairer comparison of fit and LOO CV statistical parameters shows that MR models are more stable. In addition, MR models are much simpler than ANN ones. To truly test the predictive performance of both classes of models, more HEPT derivatives are needed, because all ANN models that presented results for an external set of molecules used experimental values in optimization of the modeling procedure and model parameters.

  15. Risk reserve constrained economic dispatch model with wind power penetration

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, W.; Sun, H.; Peng, Y. [Department of Electrical and Electronics Engineering, Dalian University of Technology, Dalian, 116024 (China)

    2010-12-15

    This paper develops a modified economic dispatch (ED) optimization model with wind power penetration. Due to the uncertain nature of wind speed, both overestimation and underestimation of the available wind power are compensated using the up and down spinning reserves. In order to determine these two reserve demands, risk-based up and down spinning reserve constraints are presented, considering not only the uncertainty of available wind power, but also the load forecast error and generator outage rates. The predictor-corrector primal-dual interior point method is utilized to solve the proposed ED model. Simulation results for a system with ten conventional generators and one wind farm demonstrate the effectiveness of the proposed method. (authors)
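The compensation idea above can be illustrated by computing expected up and down reserve needs from sampled wind power. This Monte Carlo sketch assumes a Weibull wind-speed distribution pushed through a simplified cubic power curve; all numbers are hypothetical, and the paper's risk constraints and interior-point ED formulation are not reproduced here.

```python
import random

def reserve_requirements(scheduled_mw, samples):
    """Expected up/down reserve (MW) to cover wind under-/over-delivery."""
    up = sum(max(scheduled_mw - w, 0.0) for w in samples) / len(samples)
    down = sum(max(w - scheduled_mw, 0.0) for w in samples) / len(samples)
    return up, down

def power(v, rated=100.0, cut_in=3.0, rated_v=12.0, cut_out=25.0):
    """Simplified turbine power curve: cubic ramp between cut-in and rated speed."""
    if v < cut_in or v >= cut_out:
        return 0.0
    if v >= rated_v:
        return rated
    return rated * ((v - cut_in) / (rated_v - cut_in)) ** 3

random.seed(0)
# hypothetical wind farm: Weibull wind speeds (scale 8 m/s, shape 2)
wind = [power(random.weibullvariate(8.0, 2.0)) for _ in range(20000)]
up, down = reserve_requirements(50.0, wind)
mean_w = sum(wind) / len(wind)
```

A useful identity for checking such a calculation: expected up reserve minus expected down reserve equals the scheduled output minus the mean available wind power.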

  16. Modeling Cycle Dependence in Credit Insurance

    Directory of Open Access Journals (Sweden)

    Anisa Caja

    2014-03-01

    Full Text Available Business and credit cycles have an impact on credit insurance, as they do on other businesses. Nevertheless, in credit insurance, the impact of systemic risk is even more important and can lead to major losses during a crisis. Because of this, the insurer monitors and manages policies almost continuously. The management actions it takes limit the consequences of a downturning cycle. However, traditional modeling of economic capital does not take this important feature of credit insurance into account. This paper proposes a model aiming to estimate future losses of a credit insurance portfolio, while taking into account the insurer’s management actions. The model considers the capacity of the credit insurer to take on less risk in the case of a cycle downturn, but also the inverse, in the case of a cycle upturn; so, losses are predicted from a more dynamic perspective. According to our results, the economic capital is over-estimated when the management actions of the insurer are not considered.

  17. Simulating the Water Use of Thermoelectric Power Plants in the United States: Model Development and Verification

    Science.gov (United States)

    Betrie, G.; Yan, E.; Clark, C.

    2016-12-01

    Thermoelectric power plants use more freshwater than any sector except agriculture. However, there is a scarcity of information characterizing the freshwater use of these plants in the United States, which could be attributed to the lack of models and data required to conduct analysis and gain insights. Competition for freshwater among sectors will increase in the future as freshwater becomes limited due to climate change and population growth. A model that makes use of less data is urgently needed to conduct analysis and identify adaptation strategies. The objectives of this study are to develop a model and simulate the water use of thermoelectric power plants in the United States. The developed model has heat-balance, climate, cooling-system, and optimization modules. It computes the amount of heat rejected to the environment, estimates the quantity of heat exchanged with the environment through latent and sensible heat, and computes the amount of water required per unit generation of electricity. To verify the model, we simulated a total of 876 fossil-fired, nuclear, and gas-turbine power plants with different cooling systems (CS) using 2010-2014 data obtained from the Energy Information Administration. The CS include once-through with cooling pond, once-through without cooling pond, recirculating with induced draft, and recirculating with natural draft. The results show that the model reproduced the observed water use per unit generation of electricity for most of the power plants. It is also noticed that the model slightly overestimates the water use during the summer period, when input water temperatures are higher. We are investigating the possible reasons for the overestimation and will address them in future work. The model could be used individually or coupled to regional models to analyze various adaptation strategies and improve the water-use efficiency of thermoelectric power plants.

  18. Preliminary evaluation of the Community Multiscale Air Quality model for 2002 over the Southeastern United States.

    Science.gov (United States)

    Morris, Ralph E; McNally, Dennis E; Tesche, Thomas W; Tonnesen, Gail; Boylan, James W; Brewer, Patricia

    2005-11-01

    The Visibility Improvement State and Tribal Association of the Southeast (VISTAS) is one of five Regional Planning Organizations that is charged with the management of haze, visibility, and other regional air quality issues in the United States. The VISTAS Phase I work effort modeled three episodes (January 2002, July 1999, and July 2001) to identify the optimal model configuration(s) to be used for the 2002 annual modeling in Phase II. Using model configurations recommended in the Phase I analysis, 2002 annual meteorological (Mesoscale Meteorological Model [MM5]), emissions (Sparse Matrix Operator Kernel Emissions [SMOKE]), and air quality (Community Multiscale Air Quality [CMAQ]) simulations were performed on a 36-km grid covering the continental United States and a 12-km grid covering the Eastern United States. Model estimates were then compared against observations. This paper presents the results of the preliminary CMAQ model performance evaluation for the initial 2002 annual base case simulation. Model performance is presented for the Eastern United States using speciated fine particle concentration and wet deposition measurements from several monitoring networks. Initial results indicate fairly good performance for sulfate with fractional bias values generally within +/-20%. Nitrate is overestimated in the winter by approximately +50% and underestimated in the summer by more than -100%. Organic carbon exhibits a large summer underestimation bias of approximately -100%, with much improved performance seen in the winter with a bias near zero. Performance for elemental carbon is reasonable with fractional bias values within +/-40%. Other fine particulate (soil) and coarse particulate matter exhibit large (80-150%) overestimation in the winter but improved performance in the summer. The preliminary 2002 CMAQ runs identified several areas of enhancements to improve model performance, including revised temporal allocation factors for ammonia emissions to improve
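The fractional bias metric quoted throughout the abstract is, in one common convention, bounded within ±200%, which is why an underestimate can read as "more than -100%". A small sketch of that convention (the exact formula used in the VISTAS evaluation may differ):

```python
def fractional_bias(model, obs):
    """Mean fractional bias in percent: (100/N) * sum of 2*(M-O)/(M+O).
    Bounded within [-200%, +200%] by construction."""
    terms = [2.0 * (m - o) / (m + o) for m, o in zip(model, obs)]
    return 100.0 * sum(terms) / len(terms)

fb_perfect = fractional_bias([1.0, 2.0], [1.0, 2.0])  # no bias
fb_triple = fractional_bias([3.0, 6.0], [1.0, 2.0])   # uniform 3x overestimate
```

Because the denominator averages model and observation, a factor-of-3 overestimate yields +100% rather than +200%, and the metric treats over- and under-prediction symmetrically.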

  19. Expression and significance of Bax protein in model of radiation injury in mouse skin

    International Nuclear Information System (INIS)

    Feng Yizhong; Mo Yahong

    2002-01-01

    Objective: To find valuable criteria for the diagnosis and treatment of radiation injury in skin. Methods: The expression of Bax protein was studied by SP immunohistochemistry in 40 cases of a mouse model of radiation skin injury. The relationship with radiation dose was also investigated. Results: The expression rates of Bax were 30%, 30%, 70%, and 70% in the 5 Gy, 15 Gy, 30 Gy, and 45 Gy groups, respectively. There was no significant correlation between the expression of Bax and radiation group. Conclusions: The experiment shows that radiation can increase the expression of Bax protein, which might be related to poor healing in radiation skin injury.

  20. A 40-year accumulation dataset for Adelie Land, Antarctica and its application for model validation

    Energy Technology Data Exchange (ETDEWEB)

    Agosta, Cecile; Favier, Vincent [UJF-Grenoble 1 / CNRS, Laboratoire de Glaciologie et de Geophysique de l' Environnement UMR 5183, Saint Martin d' Heres (France); Genthon, Christophe; Gallee, Hubert; Krinner, Gerhard [CNRS / UJF-Grenoble 1, Laboratoire de Glaciologie et de Geophysique de l' Environnement UMR 5183, Saint Martin d' Heres (France); Lenaerts, Jan T.M.; Broeke, Michiel R. van den [Utrecht University, Institute for Marine and Atmospheric Research Utrecht (Netherlands)

    2012-01-15

    The GLACIOCLIM-SAMBA (GS) Antarctic accumulation monitoring network, which extends from the coast of Adelie Land to the Antarctic plateau, has been surveyed annually since 2004. The network includes a 156-km stake-line from the coast inland, along which accumulation shows high spatial and interannual variability with a mean value of 362 mm water equivalent a⁻¹. In this paper, this accumulation is compared with older accumulation reports from between 1971 and 1991. The mean, annual standard deviation, and km-scale spatial pattern of accumulation were seen to be very similar in the older and more recent data. The data did not reveal any significant accumulation trend over the last 40 years. The ECMWF analysis-based forecasts (ERA-40 and ERA-Interim), a stretched-grid global general circulation model (LMDZ4) and three regional circulation models (PMM5, MAR and RACMO2), all with high resolution over Antarctica (27-125 km), were tested against the GS reports. All but MAR qualitatively reproduced the meso-scale spatial pattern of the annual-mean accumulation. MAR significantly underestimated mean accumulation, while LMDZ4 and RACMO2 overestimated it. ERA-40 and the regional models that use ERA-40 as lateral boundary condition qualitatively reproduced the chronology of interannual variability but underestimated the magnitude of interannual variations. Two widely used climatologies for Antarctic accumulation agreed well with the mean GS data. The model-based climatology was also able to reproduce the observed spatial pattern. These data thus provide new stringent constraints on models and other large-scale evaluations of the Antarctic accumulation. (orig.)

  1. Topside electron density: comparison of experimental and IRI model profiles during low solar activity period

    International Nuclear Information System (INIS)

    Alazo, K.; Coisson, P.; Radicella, S.M.

    2003-01-01

    The pattern of the topside electron density profiles is not yet very well represented by the IRI model. In this work the topside profiles obtained by the ISIS-2 satellite during low solar activity conditions are compared to those modeled by IRI. A quantitative parameter, ε, is used to measure the deviation of the model from the observed profiles. The results showed that the IRI overestimation of the topside profile is higher for low dip latitudes. The dispersion of the ε values ranges from 40% to 140%, higher in equinoctial months and somewhat lower in winter. The best agreement, about 20% to 40%, is found at middle and high latitudes of the Northern Hemisphere. (author)

  2. Intercomparison between CMIP5 model and MODIS satellite-retrieved data of aerosol optical depth, cloud fraction, and cloud-aerosol interactions

    Science.gov (United States)

    Sockol, Alyssa; Small Griswold, Jennifer D.

    2017-08-01

    Aerosols are a critical component of the Earth's atmosphere and can affect the climate of the Earth through their interactions with solar radiation and clouds. Cloud fraction (CF) and aerosol optical depth (AOD) at 550 nm from the Moderate Resolution Imaging Spectroradiometer (MODIS) are used with analogous cloud and aerosol properties from Historical runs of Coupled Model Intercomparison Project Phase 5 (CMIP5) models that explicitly include anthropogenic aerosols and parameterized cloud-aerosol interactions. The models underestimate AOD by approximately 15% and underestimate CF by approximately 10% overall on a global scale. A regional analysis is then used to evaluate model performance in two regions with known biomass burning activity and absorbing aerosol, South America (SAM) and South Africa (SAF). In SAM, the models overestimate AOD by 4.8% and underestimate CF by 14%. In SAF, the models underestimate AOD by 35% and overestimate CF by 13.4%. Average annual cycles show that the monthly timing of AOD peaks closely matches satellite data in both SAM and SAF for all except the Community Atmosphere Model 5 and Geophysical Fluid Dynamics Laboratory (GFDL) models. Monthly timing of CF peaks closely matches for all models (except GFDL) for SAM and SAF. Sorting monthly averaged 2° × 2.5° model or MODIS CF as a function of AOD does not result in the previously observed "boomerang"-shaped CF versus AOD relationship characteristic of regions with absorbing aerosols from biomass burning. Cloud-aerosol interactions, as observed using daily (or higher) temporal resolution data, are not reproducible at the spatial or temporal resolution provided by the CMIP5 models.

  3. An improved model for predicting electrical contact resistance between bipolar plate and gas diffusion layer in proton exchange membrane fuel cells

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Zhiliang; Wang, Shuxin [School of Mechanical Engineering, Tianjin University, Tianjin 300072 (China); Zhou, Yuanyuan; Lin, Guosong; Hu, S. Jack [Department of Mechanical Engineering, The University of Michigan, Ann Arbor, MI 48109-2125 (United States)

    2008-07-15

    Electrical contact resistance between bipolar plates (BPPs) and gas diffusion layers (GDLs) in PEM fuel cells has attracted much attention since it is a significant part of the total contact resistance, which in turn plays an important role in fuel cell performance. This paper extends a previous model by Zhou et al. [Y. Zhou, G. Lin, A.J. Shih, S.J. Hu, J. Power Sources 163 (2007) 777-783] for predicting electrical contact resistance within PEM fuel cells. The original microscale numerical model was based on the Hertz solution for individual elastic contacts, assuming that the contact bodies, GDL carbon fibers and BPP asperities, are isotropic elastic half-spaces. The new model represents the contact more realistically by taking into account the bending behavior of the carbon fibers as well as their anisotropic properties. The microscale single-contact process is solved numerically using the finite element method (FEM). The relationship between contact pressure and electrical resistance at the GDL/BPP interface is then derived by multiple regression. Comparisons of the original model by Zhou et al. and the new model with experimental data show that the original model slightly overestimates the electrical contact resistance, whereas better agreement with experimental data is observed using the new model. (author)
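    The pressure-resistance relationship described above is obtained by regression. As a hedged sketch of the idea, one can fit a power law R = a·P^(-b) to hypothetical pressure/resistance pairs by least squares in log-log space; the paper's actual regression form and data are not reproduced here.

```python
import numpy as np

# Hypothetical clamping pressure (MPa) vs. area-specific contact
# resistance (mOhm cm^2); resistance falls with pressure roughly
# as a power law, R = a * P**(-b)
pressure = np.array([0.5, 1.0, 1.5, 2.0, 3.0])
resistance = np.array([40.0, 22.0, 16.0, 12.5, 9.0])

# Ordinary least squares on log R = log a - b log P
slope, log_a = np.polyfit(np.log(pressure), np.log(resistance), 1)
a, b = np.exp(log_a), -slope
print(f"R ~ {a:.1f} * P**(-{b:.2f})")
```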

  4. Evaluating the Community Land Model (CLM4.5) at a coniferous forest site in northwestern United States using flux and carbon-isotope measurements

    Science.gov (United States)

    Duarte, Henrique F.; Raczka, Brett M.; Ricciuto, Daniel M.; Lin, John C.; Koven, Charles D.; Thornton, Peter E.; Bowling, David R.; Lai, Chun-Ta; Bible, Kenneth J.; Ehleringer, James R.

    2017-09-01

    Droughts in the western United States are expected to intensify with climate change. Thus, an adequate representation of ecosystem response to water stress in land models is critical for predicting carbon dynamics. The goal of this study was to evaluate the performance of the Community Land Model (CLM) version 4.5 against observations at an old-growth coniferous forest site in the Pacific Northwest region of the United States (Wind River AmeriFlux site), characterized by a Mediterranean climate that subjects trees to water stress each summer. CLM was driven by site-observed meteorology and calibrated primarily using parameter values observed at the site or at similar stands in the region. Key model adjustments included parameters controlling specific leaf area and stomatal conductance. Default values of these parameters led to significant underestimation of gross primary production, overestimation of evapotranspiration, and consequently overestimation of photosynthetic 13C discrimination, reflected in reduced 13C : 12C ratios of carbon fluxes and pools. Adjustments in soil hydraulic parameters within CLM were also critical, preventing significant underestimation of soil water content and unrealistic soil moisture stress during summer. After calibration, CLM was able to simulate energy and carbon fluxes, leaf area index, biomass stocks, and carbon isotope ratios of carbon fluxes and pools in reasonable agreement with site observations. Overall, the calibrated CLM was able to simulate the observed response of canopy conductance to atmospheric vapor pressure deficit (VPD) and soil water content, reasonably capturing the impact of water stress on ecosystem functioning. Both simulations and observations indicate that stomatal response from water stress at Wind River was primarily driven by VPD and not soil moisture. The calibration of the Ball-Berry stomatal conductance slope (mbb) at Wind River aligned with findings from recent CLM experiments at sites characterized by
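    The Ball-Berry slope mbb mentioned above enters the standard Ball-Berry relation g_s = g0 + mbb·A·h_s/c_s. A minimal sketch with illustrative parameter values, not the Wind River calibration:

```python
def ball_berry_gs(A, hs, cs, mbb, g0=0.01):
    """Ball-Berry stomatal conductance (mol m^-2 s^-1).

    A   : net assimilation rate (umol CO2 m^-2 s^-1)
    hs  : fractional relative humidity at the leaf surface (0-1)
    cs  : CO2 concentration at the leaf surface (umol mol^-1)
    mbb : Ball-Berry slope (dimensionless), the calibrated parameter
    g0  : minimum (residual) conductance; value here is illustrative
    """
    return g0 + mbb * A * hs / cs

# Illustrative midday values: drier air (lower hs, i.e. higher VPD)
# yields lower stomatal conductance
print(ball_berry_gs(A=10.0, hs=0.7, cs=400.0, mbb=9.0))
print(ball_berry_gs(A=10.0, hs=0.4, cs=400.0, mbb=9.0))
```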

  5. Evaluating the Community Land Model (CLM4.5) at a coniferous forest site in northwestern United States using flux and carbon-isotope measurements

    Directory of Open Access Journals (Sweden)

    H. F. Duarte

    2017-09-01

    Full Text Available Droughts in the western United States are expected to intensify with climate change. Thus, an adequate representation of ecosystem response to water stress in land models is critical for predicting carbon dynamics. The goal of this study was to evaluate the performance of the Community Land Model (CLM) version 4.5 against observations at an old-growth coniferous forest site in the Pacific Northwest region of the United States (Wind River AmeriFlux site), characterized by a Mediterranean climate that subjects trees to water stress each summer. CLM was driven by site-observed meteorology and calibrated primarily using parameter values observed at the site or at similar stands in the region. Key model adjustments included parameters controlling specific leaf area and stomatal conductance. Default values of these parameters led to significant underestimation of gross primary production, overestimation of evapotranspiration, and consequently overestimation of photosynthetic 13C discrimination, reflected in reduced 13C : 12C ratios of carbon fluxes and pools. Adjustments in soil hydraulic parameters within CLM were also critical, preventing significant underestimation of soil water content and unrealistic soil moisture stress during summer. After calibration, CLM was able to simulate energy and carbon fluxes, leaf area index, biomass stocks, and carbon isotope ratios of carbon fluxes and pools in reasonable agreement with site observations. Overall, the calibrated CLM was able to simulate the observed response of canopy conductance to atmospheric vapor pressure deficit (VPD) and soil water content, reasonably capturing the impact of water stress on ecosystem functioning. Both simulations and observations indicate that stomatal response from water stress at Wind River was primarily driven by VPD and not soil moisture. The calibration of the Ball–Berry stomatal conductance slope (mbb) at Wind River aligned with findings from recent CLM experiments at

  6. Modeling the growth of Listeria monocytogenes in mold-ripened cheeses.

    Science.gov (United States)

    Lobacz, Adriana; Kowalik, Jaroslaw; Tarczynska, Anna

    2013-06-01

    This study presents possible applications of predictive microbiology to model the safety of mold-ripened cheeses with respect to bacteria of the species Listeria monocytogenes during (1) the ripening of Camembert cheese, (2) cold storage of Camembert cheese at temperatures ranging from 3 to 15°C, and (3) cold storage of blue cheese at temperatures ranging from 3 to 15°C. The primary models used in this study, the Baranyi model and the modified Gompertz function, were fitted to growth curves. The Baranyi model yielded the most accurate goodness of fit, and the growth rates generated by this model were used for secondary modeling (Ratkowsky simple square-root and polynomial models). The polynomial model more accurately predicted the influence of temperature on the growth rate, reaching adjusted coefficients of multiple determination of 0.97 and 0.92 for Camembert and blue cheese, respectively. The observed growth rates of L. monocytogenes in mold-ripened cheeses were compared with simulations run with the Pathogen Modeling Program (PMP 7.0, USDA, Wyndmoor, PA) and ComBase Predictor (Institute of Food Research, Norwich, UK). However, the latter predictions proved to be consistently overestimated and contained a significant level of error. In addition, a validation process using independent data generated in dairy products from the ComBase database (www.combase.cc) was performed. In conclusion, it was found that L. monocytogenes grows much faster in Camembert than in blue cheese. Both the Baranyi and Gompertz models described this phenomenon accurately, although the Baranyi model contained a smaller error. Secondary modeling and further validation of the generated models highlighted the issue of the usability and applicability of predictive models in the food processing industry, by developing models targeted at a specific product or a group of similar products. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
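    The Ratkowsky square-root secondary model referenced above relates the square root of the growth rate linearly to temperature, √μ = b(T − Tmin). A hedged sketch of fitting it, using synthetic growth rates rather than the study's data:

```python
import numpy as np

# Synthetic growth rates (h^-1) at storage temperatures spanning
# the 3-15 degC range studied above (illustrative values only)
temps = np.array([3.0, 7.0, 11.0, 15.0])
rates = np.array([0.0036, 0.0196, 0.0484, 0.0900])

# Ratkowsky square-root model: sqrt(mu) = b * (T - Tmin);
# fit a line to sqrt(rate) vs. temperature, then recover Tmin
b, intercept = np.polyfit(temps, np.sqrt(rates), 1)
Tmin = -intercept / b
print(f"b = {b:.4f} per degC, Tmin = {Tmin:.1f} degC")
```

    Tmin is the notional minimum growth temperature extrapolated from the linear fit, not a directly observed quantity.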

  7. Evaluating the effect of alternative carbon allocation schemes in a land surface model (CLM4.5) on carbon fluxes, pools, and turnover in temperate forests

    Science.gov (United States)

    Montané, Francesc; Fox, Andrew M.; Arellano, Avelino F.; MacBean, Natasha; Alexander, M. Ross; Dye, Alex; Bishop, Daniel A.; Trouet, Valerie; Babst, Flurin; Hessl, Amy E.; Pederson, Neil; Blanken, Peter D.; Bohrer, Gil; Gough, Christopher M.; Litvak, Marcy E.; Novick, Kimberly A.; Phillips, Richard P.; Wood, Jeffrey D.; Moore, David J. P.

    2017-09-01

    How carbon (C) is allocated to different plant tissues (leaves, stem, and roots) determines how long C remains in plant biomass, and is thus a central challenge for understanding the global C cycle. We used a diverse set of observations (AmeriFlux eddy covariance tower observations, biomass estimates from tree-ring data, and leaf area index (LAI) measurements) to compare C fluxes, pools, and LAI data with those predicted by a land surface model (LSM), the Community Land Model (CLM4.5). We ran CLM4.5 for nine temperate (including evergreen and deciduous) forests in North America between 1980 and 2013 using four different C allocation schemes: (i) a dynamic C allocation scheme (named "D-CLM4.5") with one dynamic allometric parameter, in which the allocation of C between stem and leaves varies in time as a function of annual net primary production (NPP); (ii) an alternative dynamic C allocation scheme (named "D-Litton"), where, as in (i), C allocation is a dynamic function of annual NPP, but which, unlike (i), includes two dynamic allometric parameters governing allocation to leaves, stem, and coarse roots; (iii)-(iv) a fixed C allocation scheme with two variants, one representative of observations in evergreen forests (named "F-Evergreen") and the other of observations in deciduous forests (named "F-Deciduous"). D-CLM4.5 generally overestimated gross primary production (GPP) and ecosystem respiration, and underestimated net ecosystem exchange (NEE). In D-CLM4.5, initial aboveground biomass in 1980 was largely overestimated (between 10 527 and 12 897 g C m-2) for deciduous forests, whereas aboveground biomass accumulation through time (between 1980 and 2011) was highly underestimated (between 1222 and 7557 g C m-2) for both evergreen and deciduous sites due to a lower stem turnover rate in the sites than the one used in the model. D-CLM4.5 overestimated LAI in both evergreen and deciduous sites because the leaf C-LAI relationship in the model did not match the observed leaf C
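    In the D-CLM4.5 scheme, allocation to stem relative to leaves increases with annual NPP. The sketch below uses the logistic form and constants commonly quoted for CLM4.5's dynamic allocation; treat both the formula and the constants as assumptions of this illustration rather than a definitive statement of the model code.

```python
import math

def stem_leaf_ratio(npp_annual):
    """Stem:leaf allocation ratio as a dynamic function of annual NPP
    (g C m^-2 yr^-1), the single dynamic allometric parameter of the
    "D-CLM4.5" scheme. Constants follow the commonly quoted CLM4.5
    form and are assumptions of this sketch."""
    return 2.7 / (1.0 + math.exp(-0.004 * (npp_annual - 300.0))) - 0.4

# Allocation shifts toward stem wood as annual NPP increases
for npp in (100.0, 300.0, 800.0):
    print(npp, round(stem_leaf_ratio(npp), 2))
```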

  8. High spatial resolution radiation budget for Europe: derived from satellite data, validation of a regional model; Raeumlich hochaufgeloeste Strahlungsbilanz ueber Europa: Ableitung aus Satellitendaten, Validation eines regionalen Modells

    Energy Technology Data Exchange (ETDEWEB)

    Hollmann, R. [GKSS-Forschungszentrum Geesthacht GmbH (Germany). Inst. fuer Atmosphaerenphysik

    2000-07-01

    For forty years, instruments onboard satellites have demonstrated their usefulness for many applications in the fields of meteorology and oceanography. Several experiments, like ERBE, were dedicated to establishing a climatology of the global Earth radiation budget at the top of the atmosphere. The focus has now shifted to the regional scale, e.g. GEWEX with its regional sub-experiments like BALTEX. To obtain a regional radiation budget for Europe, in the first part of this work the well-calibrated measurements from ScaRaB (scanner for radiation budget) are used to derive a narrow-to-broadband conversion that is applicable to the AVHRR (advanced very high resolution radiometer). It is shown that the accuracy of the method is of the order of that of ScaRaB itself. In the second part of the work, results of REMO are compared with measurements of ScaRaB and AVHRR for March 1994. The model reproduces the measurements well overall, but it overestimates the cold areas and underestimates the warm areas in the longwave spectral domain. Similarly, it overestimates the dark areas and underestimates the bright areas in the solar spectral domain. (orig.)

  9. Experimental and numerical modeling of chloride diffusivity in hardened cement concrete considering the aggregate shapes and exposure-duration effects

    Directory of Open Access Journals (Sweden)

    Wu Jie

    Full Text Available This paper presents an experimental and numerical model describing the effects of aggregate shape and exposure duration on chloride diffusion into cement-based materials. A simple chloride diffusion test was performed on a concrete specimen composed of a mixture of cement mortar with crushed granites and round gravels. A simulation was then carried out, with the numerical model applied to the matrix at the meso-scale level, and chloride diffusivity was investigated at 30, 60, and 90 days. The experimental and simulation results showed that aggregate shape and the exposure duration of chloride diffusing into concrete are highly significant. The model with crushed granite presents good resistance against chloride ingress, while the model with rounded gravels shows some sensitivity to chloride penetration. It was also found that when the time dependence of the diffusion coefficient is not taken into account, the diffusion rate is overestimated. The meso-scale model developed in this study also provides a new method for analyzing the chloride and water transport that causes damage to concrete, considering particle inclusion and diffusion duration. Keywords: Meso-scale modeling, Chloride diffusivity, Concrete, Effects of aggregates shape and exposure duration, FEM
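    The role of a time-dependent diffusion coefficient can be illustrated with the error-function solution of Fick's second law, C(x,t) = Cs·(1 − erf(x/2√(Dt))), combined with an aging law D(t) = D28·(28/t)^m. All values below are illustrative assumptions, not the paper's measurements.

```python
import math

def chloride_profile(x_mm, t_days, Cs=0.6, D28=1.0e-11, m=0.5):
    """Chloride content (% binder) at depth x_mm after t_days of
    exposure: error-function solution of Fick's second law with a
    time-dependent diffusion coefficient D(t) = D28 * (28/t)**m.
    All parameter values here are illustrative assumptions."""
    D = D28 * (28.0 / t_days) ** m       # aging reduces diffusivity
    x = x_mm / 1000.0                    # mm -> m
    t = t_days * 86400.0                 # days -> s
    return Cs * (1.0 - math.erf(x / (2.0 * math.sqrt(D * t))))

# Ignoring the time dependence (m = 0) yields a higher concentration
# at depth, i.e. an overestimated diffusion rate, consistent with
# the finding above
print(chloride_profile(10.0, 90.0, m=0.5))
print(chloride_profile(10.0, 90.0, m=0.0))
```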

  10. Assessment of ENSEMBLES regional climate models for the representation of monthly wind characteristics in the Aegean Sea (Greece): Mean and extremes analysis

    Science.gov (United States)

    Anagnostopoulou, Christina; Tolika, Konstantia; Tegoulias, Ioannis; Velikou, Kondylia; Vagenas, Christos

    2013-04-01

    The main scope of the present study is to assess the ability of three of the most up-to-date regional climate models, developed in the frame of the European research project ENSEMBLES (http://www.ensembles-eu.org/), to simulate the wind characteristics of the Aegean Sea in Greece. The examined models are KNMI-RACMO2, MPI-MREMO, and ICTP-RegCM3. They all have the same spatial resolution (25x25 km) and for their future projections they use the A1B SRES emission scenario. Their simulated wind data (speed and direction) were compared with observational data from several stations over the domain of study for a period of 25 years, from 1980 to 2004, on a monthly basis. The primary data were available every three or six hours, from which we computed the mean daily wind speed and the prevailing daily wind direction. It should be mentioned that the comparison was made for the grid point closest to each station over land. Moreover, the extreme speed values were also calculated both for the observational and the simulated data, in order to assess the ability of the models to capture the most intense wind conditions. The first results of the study showed that the prevailing winds during the winter and spring months have a north-northeastern or a south-southwestern direction in most parts of the Aegean Sea. The models under examination capture quite satisfactorily this pattern as well as the general characteristics of the winds in this area. During summer, winds in the Aegean Sea are mainly northerly, and the models agree quite well with both this direction and the wind speed. Concerning the extreme wind speeds (percentiles), it was found that for the stations in the northern Aegean all the models overestimate the extreme wind indices. For the eastern parts of the Aegean, the KNMI and MPI models underestimate the extreme wind speeds, while the ICTP model overestimates them. Finally for the
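    The extreme-wind comparison described above reduces to comparing upper percentiles of daily wind speed between observations and each model. A hedged sketch with synthetic Weibull-distributed winds (a common assumption for wind-speed statistics) and an artificial 15% model speed bias:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic daily-mean wind speeds (m/s): Weibull-distributed
# "observations" and a model with an artificial 15% speed bias
obs = 6.0 * rng.weibull(2.0, 9000)
mod = 1.15 * 6.0 * rng.weibull(2.0, 9000)

# Compare upper percentiles, i.e. the extreme wind indices
for q in (90, 95, 98):
    print(q, round(np.percentile(obs, q), 1), round(np.percentile(mod, q), 1))
```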

  11. When is a heavy quark not a parton? Charged Higgs production and heavy quark mass effects in the QCD-based parton model

    International Nuclear Information System (INIS)

    Olness, F.I.; Tung, Wu-Ki

    1989-10-01

    Applications of the QCD-based parton model to new physics processes involving heavy partons are illustrated using charged Higgs production. The naive parton model predictions are found to overestimate the actual cross section by a factor of 2 to 5. The role of the top quark as a "parton" is examined, and the energy range over which heavy quarks (or other particles) should or should not be naturally treated as "partons" is delineated. 12 refs., 5 figs

  12. Modeling decadal timescale interactions between surface water and ground water in the central Everglades, Florida, USA

    Science.gov (United States)

    Harvey, Judson W.; Newlin, Jessica T.; Krupa, Steven L.

    2006-04-01

    Surface-water and ground-water flow are coupled in the central Everglades, although the remoteness of this system has hindered many previous attempts to quantify interactions between surface water and ground water. We modeled flow through a 43,000 ha basin in the central Everglades called Water Conservation Area 2A. The purpose of the model was to quantify recharge and discharge in the basin's vast interior areas. The presence and distribution of tritium in ground water was the principal constraint on the modeling, based on measurements in 25 research wells ranging in depth from 2 to 37 m. In addition to average characteristics of surface-water flow, the model parameters included the depth of the layer of 'interactive' ground water that is actively exchanged with surface water, the average residence time of interactive ground water, and the associated recharge and discharge fluxes across the wetland ground surface. Results indicated that only a relatively thin (8 m) layer of the 60 m deep surficial aquifer actively exchanges surface water and ground water on a decadal timescale. The calculated storage depth of interactive ground water was 3.1 m after adjustment for the porosity of peat and sandy limestone. Modeling of the tritium data yielded an average residence time of 90 years in interactive ground water, with associated recharge and discharge fluxes equal to 0.01 cm d-1. 3H/3He isotopic ratio measurements (which correct for effects of vertical mixing in the aquifer with deeper, tritium-dead water) were available from several wells, and these indicated an average residence time of 25 years, suggesting that residence time was overestimated using tritium measurements alone. Indeed, both residence time and storage depth would be expected to be overestimated due to vertical mixing. The estimate of recharge and discharge (0.01 cm d-1) that resulted from tritium modeling is therefore still considered reliable, because the ratio of residence time and storage depth (used to
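    The closing argument rests on simple arithmetic: the recharge-discharge flux is the ratio of interactive storage depth to residence time, so an overestimate that inflates both quantities proportionally largely cancels. With the abstract's numbers:

```python
# flux = storage depth / residence time (values from the abstract)
storage_cm = 3.1 * 100.0           # 3.1 m of interactive ground water
residence_days = 90 * 365.25       # ~90-year tritium residence time

flux_cm_per_day = storage_cm / residence_days
print(round(flux_cm_per_day, 4))   # close to the reported 0.01 cm/d
```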

  13. Simple Comparative Analyses of Differentially Expressed Gene Lists May Overestimate Gene Overlap.

    Science.gov (United States)

    Lawhorn, Chelsea M; Schomaker, Rachel; Rowell, Jonathan T; Rueppell, Olav

    2018-04-16

    Comparing the overlap between sets of differentially expressed genes (DEGs) within or between transcriptome studies is regularly used to infer similarities between biological processes. Significant overlap between two sets of DEGs is usually determined by a simple test: the number of genes that could potentially overlap is compared to the number that actually occur in both lists, treating every gene as equal. However, gene expression is controlled by transcription factors that bind to a variable number of transcription factor binding sites, leading to variation among genes in the overall variability of their expression. Neglecting this variability could therefore lead to inflated estimates of significant overlap between DEG lists. With computer simulations, we demonstrate that such biases arise from variation in the control of gene expression. Significant overlap commonly arises between two lists of DEGs that are randomly generated, assuming that the control of gene expression is variable among genes but consistent between corresponding experiments. More overlap is observed when transcription factors are specific to their binding sites and when the number of genes is considerably higher than the number of different transcription factors. In contrast, overlap between two DEG lists is always lower than expected when the genetic architecture of expression is independent between the two experiments. Thus, the current methods for determining significant overlap between DEGs are potentially confounding biologically meaningful overlap with overlap that arises due to variability in control of expression among genes, and more sophisticated approaches are needed.
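    The "simple test" examined above is typically a hypergeometric tail probability: the chance of observing at least k shared genes when two lists are drawn at random from N genes, with every gene equally likely. A self-contained sketch with illustrative list sizes:

```python
from math import comb

def overlap_pvalue(N, n1, n2, k):
    """P(overlap >= k) when lists of n1 and n2 genes are drawn
    uniformly at random from N genes: the hypergeometric tail used
    by the simple test, which treats every gene as equal."""
    total = comb(N, n2)
    return sum(comb(n1, i) * comb(N - n1, n2 - i)
               for i in range(k, min(n1, n2) + 1)) / total

# Illustrative: 200- and 150-gene DEG lists from 10,000 genes share
# 15 genes, while the expected overlap by chance is only 3
p = overlap_pvalue(10_000, 200, 150, 15)
print(f"{p:.2e}")
```

    The paper's point is that this null model is too permissive when expression variability differs among genes, so a small p-value here need not indicate biologically meaningful overlap.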

  14. Impact of a new wavelength-dependent representation of methane photolysis branching ratios on the modeling of Titan’s atmospheric photochemistry

    Science.gov (United States)

    Gans, B.; Peng, Z.; Carrasco, N.; Gauyacq, D.; Lebonnois, S.; Pernot, P.

    2013-03-01

    A new wavelength-dependent model for CH4 photolysis branching ratios is proposed, based on the values measured recently by Gans et al. (Gans, B. et al. [2011]. Phys. Chem. Chem. Phys. 13, 8140-8152). We quantify the impact of this representation on the predictions of a photochemical model of Titan’s atmosphere and on their precision, and compare it to earlier representations. Although the observed effects on the mole fractions of the species are small (never larger than 50%), it is possible to draw some recommendations for further studies: (i) the Ly-α branching ratios of Wang et al. (Wang, J.H. et al. [2000]. J. Chem. Phys. 113, 4146-4152) used in recent models overestimate the CH2:CH3 ratio, to which many species are sensitive; (ii) the description of out-of-Ly-α branching ratios by the “100% CH3” scenario should be avoided, as it can significantly bias the mole fractions of some important species (C3H8); and (iii) complementary experimental data in the 130-140 nm range would be useful to constrain the models in the Ly-α-deprived 500-700 km altitude range.

  15. Forsmark site investigation. Assessment of the validity of the rock domain model, version 1.2, based on the modelling of gravity and petrophysical data

    International Nuclear Information System (INIS)

    Isaksson, Hans; Stephens, Michael B.

    2007-11-01

    This document reports the results gained by the geophysical modelling of rock domains based on gravity and petrophysical data, which is one of the activities performed within the site investigation work at Forsmark. The main objective of this activity is to assess the validity of the geological rock domain model, version 1.2, and to identify discrepancies in the model that may indicate a need for revision of the model or for additional investigations. The verification is carried out by comparing the calculated gravity model response, which takes account of the geological model, with a local gravity anomaly that represents the measured data. The model response is obtained from the three-dimensional geometry and the petrophysical data provided for each rock domain in the geological model. Due to model boundary conditions, the study is carried out in a smaller area within the regional model area. Gravity model responses are calculated in three stages: an initial model, a base model and a refined base model. The refined base model is preferred and is used for comparison purposes. In general, there is good agreement between the refined base model, which makes use of the rock domain model version 1.2, and the measured gravity data, not least concerning the depth extension of the critical rock domain RFM029. The most significant discrepancy occurs in the area extending from the SFR office to the SFR underground facility and further to the northwest. It is speculated that this discrepancy is caused by a combination of an overestimation of the volume of gabbro (RFM016) that plunges towards the southeast in the rock domain model, and an underestimation of the volume of pegmatite and pegmatitic granite, which are known to be present and to occur as larger bodies around SFR. Other discrepancies are noted in rock domain RFM022, which is considered to be overestimated in the rock domain model, version 1.2, and in rock domain RFM017, where the gravity

  16. Evaluation of RRTMG and Fu-Liou RTM Performance against LBLRTM-DISORT Simulations and CERES Data in terms of Ice Clouds Radiative Effects

    Science.gov (United States)

    Gu, B.; Yang, P.; Kuo, C. P.; Mlawer, E. J.

    2017-12-01

    Ice clouds play an important role in the climate system, especially in the Earth's radiation balance and hydrological cycle. However, the representation of ice cloud radiative effects (CRE) remains significantly uncertain, because the scattering properties of ice clouds are not well treated in general circulation models (GCMs). We analyze the strengths and weaknesses of the Rapid Radiative Transfer Model for GCM Applications (RRTMG) and the Fu-Liou Radiative Transfer Model (RTM) against rigorous LBLRTM-DISORT (a combination of the Line-By-Line Radiative Transfer Model and the Discrete Ordinate Radiative Transfer Model) calculations and CERES (Clouds and the Earth's Radiant Energy System) flux observations. In total, 6 US standard atmospheric profiles and 42 atmospheric profiles from Atmospheric and Environmental Research (AER) are used to evaluate RRTMG and the Fu-Liou RTM against LBLRTM-DISORT calculations from 0 to 3250 cm-1. Ice cloud radiative effect simulations with RRTMG and the Fu-Liou RTM are initialized using ice cloud properties from MODIS Collection-6 products. Simulations of single-layer ice cloud CRE by RRTMG and LBLRTM-DISORT show that RRTMG, neglecting scattering, overestimates the TOA flux by about 0-15 W/m2 depending on the cloud particle size and optical depth; the most significant overestimation occurs when the particle effective radius is small (around 10 μm) and the cloud optical depth is intermediate (about 1-10). The overestimation is reduced significantly when the similarity rule is applied to RRTMG. We combine ice cloud properties from MODIS Collection-6 and atmospheric profiles from the Modern
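    The similarity rule mentioned above lets an absorption-only solver approximate a scattering calculation by reducing the cloud optical depth by the forward-scattered fraction. One common form is τ' = (1 − ω(1+g)/2)·τ; treat this particular scaling, and the numbers below, as illustrative assumptions rather than the exact RRTMG implementation.

```python
def similarity_scaled_tau(tau, omega, g):
    """Scale cloud optical depth so that an absorption-only solver
    mimics a scattering calculation. One common form of the
    similarity rule (variants exist) removes the forward-scattered
    fraction: tau' = (1 - omega * (1 + g) / 2) * tau."""
    return (1.0 - omega * (1.0 + g) / 2.0) * tau

# Ice particles are strongly forward-scattering (asymmetry g ~ 0.9),
# so the effective absorption optical depth is much reduced
print(similarity_scaled_tau(5.0, 0.5, 0.9))
```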

  17. Development of an inorganic and organic aerosol model (CHIMERE 2017β v1.0): seasonal and spatial evaluation over Europe

    Science.gov (United States)

    Couvidat, Florian; Bessagnet, Bertrand; Garcia-Vivanco, Marta; Real, Elsa; Menut, Laurent; Colette, Augustin

    2018-01-01

    A new aerosol module was developed and integrated in the air quality model CHIMERE. Developments include the use of the Model of Emissions of Gases and Aerosols from Nature (MEGAN) 2.1 for biogenic emissions, the implementation of the inorganic thermodynamic model ISORROPIA 2.1, a revision of the wet deposition processes and of the condensation/evaporation and coagulation algorithms, and the implementation of the secondary organic aerosol (SOA) mechanism H2O and the thermodynamic model SOAP. Concentrations of particles over Europe were simulated by the model for the year 2013. Modeled concentrations were compared to European Monitoring and Evaluation Programme (EMEP) observations and other observations available in the EBAS database to evaluate the performance of the model. Performance was determined for several components of particles (sea salt, sulfate, ammonium, nitrate, organic aerosol) with a seasonal and regional analysis of the results. The model gives satisfactory performance in general. For sea salt, the model succeeds in reproducing the seasonal evolution of concentrations for western and central Europe. For sulfate, except for an overestimation in northern Europe, modeled concentrations are close to observations and the model succeeds in reproducing the seasonal evolution of concentrations. For organic aerosol, the model reproduces concentrations satisfactorily for stations with strong modeled biogenic SOA concentrations. However, the model strongly overestimates ammonium nitrate concentrations during late autumn (possibly due to problems in the temporal evolution of emissions) and strongly underestimates summer organic aerosol concentrations at most of the stations (especially in the northern half of Europe). This underestimation could be due to a lack of anthropogenic SOA or biogenic emissions in northern Europe. A list of recommended tests and developments to improve the model is also given.

  18. Subjective versus objective risk in genetic counseling for hereditary breast and/or ovarian cancers

    Directory of Open Access Journals (Sweden)

    Sperduti Isabella

    2009-12-01

    Full Text Available Abstract Background Although genetic counseling in oncology provides information regarding objective risks, a contrast can be found between subjective and objective risk. The aims of this study were to evaluate the accuracy of the perceived risk compared to the objective risk estimated by the BRCApro computer model, and to evaluate any associations between medical, demographic and psychological variables and the accuracy of risk perception. Methods 130 subjects were given a medical-demographic file, the Cancer and Genetic Risk Perception scale, and the Hospital Anxiety-Depression Scale. An objective evaluation of the risk was also computed with the BRCApro model. Results The subjective risk was significantly higher than the objective risk. The risk of tumour was overestimated by 56%, and the genetic risk by 67%. Subjects with fewer cancer-affected relatives significantly overestimated their risk of being mutation carriers and made a less accurate estimation than high-risk subjects. Conclusion The description of this sample shows a general overestimation of the risk, inaccurate perception compared to the BRCApro calculation, and a more accurate estimation in subjects with more cancer-affected relatives (high-risk subjects). No correlation was found between the levels of risk perception and anxiety or depression. Based on our findings, it is worth pursuing improved communication strategies about actual cancer and genetic risk, especially for subjects at "intermediate and slightly increased risk" of developing a hereditary breast and/or ovarian cancer or of being a mutation carrier.

  19. The impact of structural error on parameter constraint in a climate model

    Science.gov (United States)

    McNeall, Doug; Williams, Jonny; Booth, Ben; Betts, Richard; Challenor, Peter; Wiltshire, Andy; Sexton, David

    2016-11-01

    Uncertainty in the simulation of the carbon cycle contributes significantly to uncertainty in the projections of future climate change. We use observations of forest fraction to constrain carbon cycle and land surface input parameters of the global climate model FAMOUS, in the presence of an uncertain structural error. Using an ensemble of climate model runs to build a computationally cheap statistical proxy (emulator) of the climate model, we use history matching to rule out input parameter settings where the corresponding climate model output is judged sufficiently different from observations, even allowing for uncertainty. Regions of parameter space where FAMOUS best simulates the Amazon forest fraction are incompatible with the regions where FAMOUS best simulates other forests, indicating a structural error in the model. We use the emulator to simulate the forest fraction at the best set of parameters implied by matching the model to the Amazon, Central African, South East Asian, and North American forests in turn. We can find parameters that lead to a realistic forest fraction in the Amazon, but using the Amazon alone to tune the simulator would result in a significant overestimate of forest fraction in the other forests. Conversely, using the other forests to tune the simulator leads to a larger underestimate of the Amazon forest fraction. We use sensitivity analysis to find the parameters which have the most impact on simulator output and perform a history-matching exercise using credible estimates for simulator discrepancy and observational uncertainty terms. We are unable to constrain the parameters individually, but we rule out just under half of joint parameter space as being incompatible with forest observations. We discuss the possible sources of the discrepancy in the simulated Amazon, including missing processes in the land surface component and a bias in the climatology of the Amazon.
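    History matching rules out a parameter setting x when its implausibility, the standardized distance between the observation and the emulator's prediction at x, exceeds a threshold (conventionally 3). A minimal sketch with illustrative numbers, not the study's actual discrepancy and uncertainty estimates:

```python
import math

def implausibility(obs, em_mean, em_var, disc_var, obs_var):
    """History-matching implausibility I(x): standardized distance
    between an observation and the emulator mean at input x, with
    emulator, structural-discrepancy, and observational variances
    combined in the denominator."""
    return abs(obs - em_mean) / math.sqrt(em_var + disc_var + obs_var)

# Illustrative forest-fraction check at one candidate parameter
# setting; rule out x if I(x) exceeds the conventional threshold 3
I = implausibility(obs=0.8, em_mean=0.35, em_var=0.002,
                   disc_var=0.01, obs_var=0.004)
print(round(I, 2), "ruled out" if I > 3.0 else "retained")
```

    Note that inflating the discrepancy variance (the structural-error term the abstract emphasizes) shrinks I(x), so acknowledging structural error rules out less of parameter space.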

  20. Evaluation of global continental hydrology as simulated by the Land-surface Processes and eXchanges Dynamic Global Vegetation Model

    Directory of Open Access Journals (Sweden)

    S. J. Murray

    2011-01-01

    Full Text Available Global freshwater resources are sensitive to changes in climate, land cover and population density and distribution. The Land-surface Processes and eXchanges Dynamic Global Vegetation Model is a recent development of the Lund-Potsdam-Jena model with improved representation of fire-vegetation interactions. It allows simultaneous consideration of the effects of changes in climate, CO2 concentration, natural vegetation and fire regime shifts on the continental hydrological cycle. Here the model is assessed for its ability to simulate large-scale spatial and temporal runoff patterns, in order to test its suitability for modelling future global water resources. Comparisons are made against observations of streamflow and a composite dataset of modelled and observed runoff (1986–1995), and the simulations are also evaluated against soil moisture data and the Palmer Drought Severity Index. The model captures the main features of the geographical distribution of global runoff, but tends to overestimate runoff in much of the Northern Hemisphere (where this can be somewhat accounted for by freshwater consumption and the unrealistic accumulation of the simulated winter snowpack in permafrost regions) and the southern tropics. Interannual variability is represented reasonably well at the large catchment scale, as are seasonal flow timings and monthly high and low flow events. Further improvements to the simulation of intra-annual runoff might be achieved via the addition of river flow routing. Overestimates of runoff in some basins could likely be corrected by the inclusion of transmission losses and direct-channel evaporation.

  1. Climate variability and predictability associated with the Indo-Pacific Oceanic Channel Dynamics in the CCSM4 Coupled System Model

    Science.gov (United States)

    Yuan, Dongliang; Xu, Peng; Xu, Tengfei

    2017-01-01

    An experiment using the Community Climate System Model (CCSM4), a participant of the Coupled Model Intercomparison Project phase-5 (CMIP5), is analyzed to assess the skills of this model in simulating and predicting the climate variabilities associated with the oceanic channel dynamics across the Indo-Pacific Oceans. The results of these analyses suggest that the model is able to reproduce the observed lag correlation between the oceanic anomalies in the southeastern tropical Indian Ocean and those in the cold tongue in the eastern equatorial Pacific Ocean at a time lag of 1 year. This success may be largely attributed to the successful simulation of the interannual variations of the Indonesian Throughflow, which carries the anomalies of the Indian Ocean Dipole (IOD) into the western equatorial Pacific Ocean to produce subsurface temperature anomalies, which in turn propagate to the eastern equatorial Pacific to generate ENSO. This connection is termed the "oceanic channel dynamics" and is shown to be consistent with the observational analyses. However, the model simulates a weaker connection between the IOD and the interannual variability of the Indonesian Throughflow transport than found in the observations. In addition, the model overestimates the westerly wind anomalies in the western-central equatorial Pacific in the year following the IOD, which forces unrealistic upwelling Rossby waves in the western equatorial Pacific and downwelling Kelvin waves in the east. This assessment suggests that the CCSM4 coupled climate system has underestimated the oceanic channel dynamics and overestimated the atmospheric bridge processes.

  2. How significant is the ‘significant other’? Associations between significant others’ health behaviors and attitudes and young adults’ health outcomes

    Directory of Open Access Journals (Sweden)

    Berge Jerica M

    2012-04-01

    Full Text Available Abstract Background Having a significant other has been shown to be protective against physical and psychological health conditions for adults. Less is known about the period of emerging young adulthood and associations between significant others’ weight and weight-related health behaviors (e.g. healthy dietary intake, the frequency of physical activity, weight status). This study examined the association between significant others’ health attitudes and behaviors regarding eating and physical activity and young adults’ weight status, dietary intake, and physical activity. Methods This study uses data from Project EAT-III, a population-based cohort study with emerging young adults from diverse ethnic and socioeconomic backgrounds (n = 1212). Logistic regression models examining cross-sectional associations, adjusted for sociodemographics and health behaviors five years earlier, were used to estimate predicted probabilities and calculate prevalence differences. Results Young adult women whose significant others had health promoting attitudes/behaviors were significantly less likely to be overweight/obese and were more likely to eat ≥ 5 fruits/vegetables per day and engage in ≥ 3.5 hours/week of physical activity, compared to women whose significant others did not have health promoting behaviors/attitudes. Young adult men whose significant others had health promoting behaviors/attitudes were more likely to engage in ≥ 3.5 hours/week of physical activity compared to men whose significant others did not have health promoting behaviors/attitudes. Conclusions Findings suggest the protective nature of the significant other with regard to weight-related health behaviors of young adults, particularly for young adult women. Obesity prevention efforts should consider the importance of including the significant other in intervention efforts with young adult women and potentially men.

  3. Significance of uncertainties derived from settling tank model structure and parameters on predicting WWTP performance - A global sensitivity analysis study

    DEFF Research Database (Denmark)

    Ramin, Elham; Sin, Gürkan; Mikkelsen, Peter Steen

    2011-01-01

    Uncertainty derived from one of the process models – such as one-dimensional secondary settling tank (SST) models – can impact the output of the other process models, e.g., biokinetic (ASM1), as well as the integrated wastewater treatment plant (WWTP) models. The model structure and parameter...... and from the last aerobic bioreactor upstream to the SST (Garrett/hydraulic method). For model structure uncertainty, two one-dimensional secondary settling tank (1-D SST) models are assessed, including a first-order model (the widely used Takács-model), in which the feasibility of using measured...... uncertainty of settler models can therefore propagate, and add to the uncertainties in prediction of any plant performance criteria. Here we present an assessment of the relative significance of secondary settling model performance in WWTP simulations. We perform a global sensitivity analysis (GSA) based...

  4. Comparison of two recent models for estimating actual evapotranspiration using only regularly recorded data

    Science.gov (United States)

    Ali, M. F.; Mawdsley, J. A.

    1987-09-01

    An advection-aridity model for estimating actual evapotranspiration (ET) is tested with over 700 days of lysimeter evapotranspiration and meteorological data from barley, turf and rye-grass from three sites in the U.K. The performance of the model is also compared with the API model. It is observed from the test that the advection-aridity model overestimates nonpotential ET and tends to underestimate potential ET, but when tested with potential and nonpotential data together, the tendencies appear to cancel each other. On a daily basis the performance level of this model is found to be of the same order as that of the API model: correlation coefficients of 0.62 and 0.68, respectively, were obtained between the model estimates and the lysimeter data. For periods greater than one day, the performance of the models generally improves. Proposed by Mawdsley and Ali (1979)

  5. Significance of flow clustering and sequencing on sediment transport: 1D sediment transport modelling

    Science.gov (United States)

    Hassan, Kazi; Allen, Deonie; Haynes, Heather

    2016-04-01

    Results illustrate that clustered flood events generated sediment loads up to an order of magnitude greater than those of individual events of the same flood volume. Correlations were significant for sediment volume compared to both maximum flow discharge (R2<0.8) and number of events (R2 -0.5 to -0.7) within the cluster. The strongest correlations occurred for clusters with a greater number of flow events only slightly above threshold. This illustrates that the numerical model can capture a degree of the non-linear morphological response to flow magnitude. The relationship between morphological change and the skewness of flow events within each cluster was also analysed, illustrating only minor sensitivity to the skewness of the cluster peak distribution. This is surprising, and discussion is presented on model limitations, including the capability of sediment transport formulae to account effectively for temporal processes such as antecedent flow, hysteresis, and local supply.

  6. Mixed models in cerebral ischemia study

    Directory of Open Access Journals (Sweden)

    Matheus Henrique Dal Molin Ribeiro

    2016-06-01

    Full Text Available The modeling of data from longitudinal studies stands out in the current scientific scenario, especially in the areas of health and biological sciences, as such studies induce a correlation between measurements on the same observed unit. Thus, modeling the intra-individual dependency is required through the choice of a covariance structure that is able to accommodate the sample variability. However, the lack of a suitable methodology for correlated data analysis may result in an increased occurrence of type I or type II errors and in underestimated or overestimated standard errors of the model estimates. In the present study, a Gaussian mixed model was adopted for the response variable latency in an experiment investigating memory deficits in animals subjected to cerebral ischemia when treated with fish oil (FO). The model parameters were estimated by maximum likelihood methods. Based on the restricted likelihood ratio test and information criteria, an autoregressive covariance matrix was adopted for the errors. The diagnostic analyses for the model were satisfactory, since the basic assumptions were met and the results obtained corroborate the biological evidence; that is, the FO treatment was found to be effective in alleviating the cognitive effects caused by cerebral ischemia.
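The autoregressive error structure adopted here implies a covariance between repeated measures that decays geometrically with the time lag. A minimal sketch of a first-order autoregressive (AR(1)) covariance matrix, with illustrative parameter values:

```python
def ar1_cov(n, sigma2, rho):
    """AR(1) covariance matrix for n repeated measures on one subject:
    Cov(e_i, e_j) = sigma2 * rho**|i - j|, so adjacent measurements are
    most strongly correlated and distant ones progressively less so."""
    return [[sigma2 * rho ** abs(i - j) for j in range(n)] for i in range(n)]

# Illustrative: 4 latency measurements, residual variance 2.0, lag-1 correlation 0.5.
cov = ar1_cov(4, sigma2=2.0, rho=0.5)
```

This is the structure a mixed-model fit selects when, as here, the restricted likelihood ratio test favours autoregressive over independent errors.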

  7. Theoretical cytotoxicity models for combined exposure of cells to different radiations

    International Nuclear Information System (INIS)

    Scott, B.R.

    1981-01-01

    Theoretical cytotoxicity models for predicting cell survival after sequential or simultaneous exposure of cells to high and low linear energy transfer (LET) radiation are discussed. Major findings are that (1) ordering of sequential exposures can influence the level of cell killing achieved; (2) synergism is unimportant at low doses; (3) effects at very low doses should be additive; (4) use of the conventional relative biological effectiveness approach for predicting combined effects of different radiations is unnecessary at very low doses and can lead to overestimation of risk at moderate and high doses
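The additivity of effects at very low doses can be illustrated with a generic linear-quadratic survival expression that includes a cross term for the high-/low-LET interaction. This is an illustrative form with made-up coefficients, not the specific models of the paper:

```python
import math

def survival(d_hi, d_lo, a1=1.0, b1=0.1, a2=0.3, b2=0.02, k=0.2):
    """Surviving fraction after a high-LET dose d_hi plus a low-LET dose
    d_lo; the k*d_hi*d_lo term represents synergy between the radiations."""
    effect = a1*d_hi + b1*d_hi**2 + a2*d_lo + b2*d_lo**2 + k*d_hi*d_lo
    return math.exp(-effect)

# At very low doses the interaction term k*d_hi*d_lo is second order, so
# combined survival is essentially the product of the single-agent survivals
# (i.e. effects are additive and synergism is unimportant).
low_combined = survival(0.01, 0.01)
low_additive = survival(0.01, 0.0) * survival(0.0, 0.01)
```

At moderate and high doses the cross term is no longer negligible, which is why the abstract warns that a simple RBE-based combination can misestimate risk there.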

  8. Multiresolution wavelet-ANN model for significant wave height forecasting.

    Digital Repository Service at National Institute of Oceanography (India)

    Deka, P.C.; Mandal, S.; Prahlada, R.

    Hybrid wavelet artificial neural network (WLNN) has been applied in the present study to forecast significant wave heights (Hs). Here Discrete Wavelet Transformation is used to preprocess the time series data (Hs) prior to Artificial Neural Network...
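As a concrete (hypothetical) illustration of the preprocessing step, a one-level Haar DWT splits the Hs series into low-frequency approximation and high-frequency detail coefficients, which would then be fed to the neural network:

```python
import math

def haar_dwt(x):
    """One-level Haar discrete wavelet transform: pairwise averages
    (approximation) and pairwise differences (detail), scaled by 1/sqrt(2)
    so that total energy is preserved."""
    assert len(x) % 2 == 0
    s = math.sqrt(2.0)
    approx = [(x[i] + x[i + 1]) / s for i in range(0, len(x), 2)]
    detail = [(x[i] - x[i + 1]) / s for i in range(0, len(x), 2)]
    return approx, detail

hs = [1.2, 1.4, 2.0, 1.8, 0.9, 1.1, 1.5, 1.7]  # hypothetical Hs record (m)
a, d = haar_dwt(hs)
```

The smooth approximation series carries the swell-scale trend while the detail series isolates short-period fluctuations, which is what makes the decomposition useful before ANN forecasting.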

  9. Evaluation of Stochastic Rainfall Models in Capturing Climate Variability for Future Drought and Flood Risk Assessment

    Science.gov (United States)

    Chowdhury, A. F. M. K.; Lockart, N.; Willgoose, G. R.; Kuczera, G. A.; Kiem, A.; Nadeeka, P. M.

    2016-12-01

    One of the key objectives of stochastic rainfall modelling is to capture the full variability of the climate system for future drought and flood risk assessment. However, it is not clear how well these models can capture future climate variability when they are calibrated to Global/Regional Climate Model (GCM/RCM) data, as these datasets are usually available only for very short future periods (e.g. 20 years). This study has assessed the ability of two stochastic daily rainfall models to capture climate variability by calibrating them to a dynamically downscaled RCM dataset in an east Australian catchment for the 1990-2010, 2020-2040, and 2060-2080 epochs. The two stochastic models are: (1) a hierarchical Markov Chain (MC) model, which we developed in a previous study, and (2) a semi-parametric MC model developed by Mehrotra and Sharma (2007). Our hierarchical model uses stochastic parameters of the MC and a Gamma distribution, while the semi-parametric model uses a modified MC process with memory of past periods and kernel density estimation. This study has generated multiple realizations of rainfall series by using the parameters of each model calibrated to the RCM dataset for each epoch. The generated rainfall series are used to generate synthetic streamflow using the SimHyd hydrology model. Assessing the synthetic rainfall and streamflow series, this study has found that both stochastic models can incorporate a range of variability in rainfall as well as streamflow generation for both current and future periods. However, the hierarchical model tends to overestimate the multiyear variability of wet spell lengths (and is therefore less likely to simulate long periods of drought and flood), while the semi-parametric model tends to overestimate the mean annual rainfall depths and streamflow volumes (hence simulated droughts are likely to be less severe). The implications of these limitations of both stochastic models for future drought and flood risk assessment will be discussed.
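The occurrence part of such daily models is commonly a two-state first-order Markov chain, with wet-day depths drawn from a gamma distribution. A minimal sketch under assumed transition probabilities (not the calibrated values from the study):

```python
import random

def simulate_rainfall(n_days, p_wd, p_ww, shape, scale, seed=1):
    """Daily rainfall from a first-order Markov chain: p_wd is the
    dry-to-wet transition probability, p_ww the wet-to-wet persistence;
    wet-day depths are drawn from a gamma(shape, scale) distribution."""
    rng = random.Random(seed)
    wet, series = False, []
    for _ in range(n_days):
        wet = rng.random() < (p_ww if wet else p_wd)
        series.append(rng.gammavariate(shape, scale) if wet else 0.0)
    return series

# One 10-year realization; multiple realizations come from varying the seed.
series = simulate_rainfall(3650, p_wd=0.2, p_ww=0.6, shape=0.8, scale=10.0)
```

High wet-to-wet persistence (p_ww) is what controls the wet-spell lengths whose multiyear variability the hierarchical model is reported to overestimate.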

  10. Methods for significance testing of categorical covariates in logistic regression models after multiple imputation: power and applicability analysis

    NARCIS (Netherlands)

    Eekhout, I.; Wiel, M.A. van de; Heymans, M.W.

    2017-01-01

    Background. Multiple imputation is a recommended method to handle missing data. For significance testing after multiple imputation, Rubin’s Rules (RR) are easily applied to pool parameter estimates. In a logistic regression model, to consider whether a categorical covariate with more than two levels
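Rubin's Rules combine the within- and between-imputation variances when pooling a parameter across m imputed datasets. A minimal sketch with made-up numbers:

```python
import math

def pool_rubin(estimates, variances):
    """Pool a parameter estimate across m imputations (Rubin's Rules):
    the pooled estimate is the mean of the per-imputation estimates, and
    the total variance is T = W + (1 + 1/m) * B, where W is the mean
    within-imputation variance and B the between-imputation variance."""
    m = len(estimates)
    qbar = sum(estimates) / m
    w = sum(variances) / m
    b = sum((q - qbar) ** 2 for q in estimates) / (m - 1)
    t = w + (1 + 1 / m) * b
    return qbar, t, qbar / math.sqrt(t)  # estimate, total variance, Wald statistic

qbar, t, wald = pool_rubin([0.50, 0.62, 0.58], [0.040, 0.050, 0.045])
```

For a multi-level categorical covariate, the complication the abstract addresses is that its overall significance requires pooling a multi-parameter test rather than a single Wald statistic like this one.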

  11. Modelling of Transport Projects Uncertainties

    DEFF Research Database (Denmark)

    Salling, Kim Bang; Leleur, Steen

    2012-01-01

    This paper proposes a new way of handling the uncertainties present in transport decision making based on infrastructure appraisals. The paper suggests to combine the principle of Optimism Bias, which depicts the historical tendency of overestimating transport related benefits and underestimating...... to supplement Optimism Bias and the associated Reference Class Forecasting (RCF) technique with a new technique that makes use of a scenario-grid. We tentatively introduce and refer to this as Reference Scenario Forecasting (RSF). The final RSF output from the CBA-DK model consists of a set of scenario......-based graphs which functions as risk-related decision support for the appraised transport infrastructure project. The presentation of RSF is demonstrated by using an appraisal case concerning a new airfield in the capital of Greenland, Nuuk....

  12. Sheath and arc-column voltages in high-pressure arc discharges

    International Nuclear Information System (INIS)

    Benilov, M S; Benilova, L G; Li Heping; Wu Guiqing

    2012-01-01

    Electrical characteristics of a 1 cm-long free-burning atmospheric-pressure argon arc are calculated by means of a model taking into account the existence of a near-cathode space-charge sheath and the discrepancy between the electron and heavy-particle temperatures in the arc column. The computed arc voltage exhibits a variation with the arc current I similar to the one revealed by the experiment and exceeds experimental values by no more than approximately 2 V in the current range 20-175 A. The sheath contributes about two-thirds or more of the arc voltage. The LTE model predicts a different variation of the arc voltage with I and underestimates the experimental values appreciably for low currents but by no more than approximately 2 V for I ≳ 120 A. However, the latter can hardly be considered as a proof of unimportance of the space-charge sheath at high currents: the LTE model overestimates both the resistance of the bulk of the arc column and the resistance of the part of the column that is adjacent to the cathode, and this overestimation to a certain extent compensates for the neglect of the voltage drop in the sheath. Furthermore, if the latter resistance were evaluated in the framework of the LTE model in an accurate way, then the overestimation would be still much stronger and the obtained voltage would significantly exceed those observed in the experiment.

  13. Soils apart from equilibrium – consequences for soil carbon balance modelling

    Directory of Open Access Journals (Sweden)

    T. Wutzler

    2007-01-01

    Full Text Available Many projections of the soil carbon sink or source are based on kinetically defined carbon pool models. Parameters of these models are often determined in a way that the steady state of the model matches observed carbon stocks. The underlying simplifying assumption is that observed carbon stocks are near equilibrium. This assumption is challenged by observations of very old soils that do still accumulate carbon. In this modelling study we explored the consequences of the case where soils are apart from equilibrium. Calculations of the equilibrium states of soils that are currently accumulating small amounts of carbon were performed using the Yasso model. It was found that already very small current accumulation rates cause big changes in theoretical equilibrium stocks, which can virtually approach infinity. We conclude that soils that were disturbed several centuries ago are not in equilibrium but in a transient state because of the slowly ongoing accumulation of the slowest pool. A first consequence is that model calibrations to current carbon stocks that assume an equilibrium state overestimate the decay rate of the slowest pool. A second consequence is that spin-up runs (simulations until equilibrium) overestimate the stocks of recently disturbed sites. In order to account for these consequences, we propose a transient correction. This correction prescribes a lower decay rate of the slowest pool and accounts for disturbances in the past by decreasing the spin-up-run predicted stocks to match an independent estimate of current soil carbon stocks. Application of this transient correction at a Central European beech forest site with a typical disturbance history resulted in an additional carbon fixation of 5.7±1.5 tC/ha within 100 years. Carbon storage capacity of disturbed forest soils is potentially much higher than currently assumed. Simulations that do not adequately account for the transient state of soil carbon stocks neglect a considerable
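The sensitivity of equilibrium stocks to small accumulation rates can be seen in a one-pool sketch, dC/dt = I - kC, whose steady state is C* = I/k. The numbers below are illustrative, not Yasso parameters:

```python
def implied_decay_rate(input_flux, stock, accumulation):
    """If a pool of size `stock` receives `input_flux` and is still
    accumulating at rate `accumulation`, then dC/dt = I - k*C gives
    k = (I - dC/dt) / C."""
    return (input_flux - accumulation) / stock

def equilibrium_stock(input_flux, decay_rate):
    """Steady state of dC/dt = I - k*C is C* = I / k."""
    return input_flux / decay_rate

# Slow pool: input 0.06 tC/ha/yr, current stock 50 tC/ha.
k_eq = implied_decay_rate(0.06, 50.0, 0.0)    # calibrated assuming equilibrium
k_tr = implied_decay_rate(0.06, 50.0, 0.05)   # tiny ongoing accumulation
c_eq = equilibrium_stock(0.06, k_eq)          # 50 tC/ha by construction
c_tr = equilibrium_stock(0.06, k_tr)          # 300 tC/ha: six times larger
```

Calibrating under the equilibrium assumption yields a decay rate six times too high here (the first consequence above), and as the accumulation rate approaches the input flux, k approaches zero and the theoretical equilibrium stock grows without bound, which is the "virtually approach infinity" effect.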

  14. Development of an inorganic and organic aerosol model (CHIMERE 2017β v1.0: seasonal and spatial evaluation over Europe

    Directory of Open Access Journals (Sweden)

    F. Couvidat

    2018-01-01

    Full Text Available A new aerosol module was developed and integrated in the air quality model CHIMERE. Developments include the use of the Model of Emissions of Gases and Aerosols from Nature (MEGAN) 2.1 for biogenic emissions, the implementation of the inorganic thermodynamic model ISORROPIA 2.1, the revision of wet deposition processes and of the algorithms for condensation/evaporation and coagulation, and the implementation of the secondary organic aerosol (SOA) mechanism H2O and the thermodynamic model SOAP. Concentrations of particles over Europe were simulated by the model for the year 2013. Model concentrations were compared to the European Monitoring and Evaluation Programme (EMEP) observations and other observations available in the EBAS database to evaluate the performance of the model. Performances were determined for several components of particles (sea salt, sulfate, ammonium, nitrate, organic aerosol) with a seasonal and regional analysis of results. The model gives satisfactory performance in general. For sea salt, the model succeeds in reproducing the seasonal evolution of concentrations for western and central Europe. For sulfate, except for an overestimation in northern Europe, modeled concentrations are close to observations and the model succeeds in reproducing the seasonal evolution of concentrations. For organic aerosol, the model reproduces concentrations satisfactorily for stations with strong modeled biogenic SOA concentrations. However, the model strongly overestimates ammonium nitrate concentrations during late autumn (possibly due to problems in the temporal evolution of emissions) and strongly underestimates summer organic aerosol concentrations over most of the stations (especially in the northern half of Europe). This underestimation could be due to a lack of anthropogenic SOA or biogenic emissions in northern Europe. A list of recommended tests and developments to improve the model is also given.

  15. An Equation-of-State Compositional In-Situ Combustion Model: A Study of Phase Behavior Sensitivity

    DEFF Research Database (Denmark)

    Kristensen, Morten Rode; Gerritsen, M. G.; Thomsen, Per Grove

    2009-01-01

    phase behavior sensitivity for in situ combustion, a thermal oil recovery process. For the one-dimensional model we first study the sensitivity to numerical discretization errors and provide grid density guidelines for proper resolution of in situ combustion behavior. A critical condition for success...... to ignition. For a particular oil we show that the simplified approach overestimates the required air injection rate for sustained front propagation by 17% compared to the equation of state-based approach....

  16. Moving forward socio-economically focused models of deforestation.

    Science.gov (United States)

    Dezécache, Camille; Salles, Jean-Michel; Vieilledent, Ghislain; Hérault, Bruno

    2017-09-01

    Whilst high-resolution spatial variables contribute to a good fit of spatially explicit deforestation models, socio-economic processes are often beyond the scope of these models. Such a low level of interest in the socio-economic dimension of deforestation limits the relevance of these models for decision-making and may be the cause of their failure to accurately predict observed deforestation trends in the medium term. This study aims to propose a flexible methodology for taking into account multiple drivers of deforestation in tropical forested areas, where the intensity of deforestation is explicitly predicted based on socio-economic variables. By coupling a model of deforestation location based on spatial environmental variables with several sub-models of deforestation intensity based on socio-economic variables, we were able to create a map of predicted deforestation over the period 2001-2014 in French Guiana. This map was compared to a reference map for accuracy assessment, not only at the pixel scale but also over cells ranging from 1 to approximately 600 sq. km. Highly significant relationships were explicitly established between deforestation intensity and several socio-economic variables: population growth, the amount of agricultural subsidies, and gold and wood production. Such a precise characterization of socio-economic processes makes it possible to avoid overestimation biases in high deforestation areas, suggesting a better integration of socio-economic processes in the models. Whilst considering deforestation as a purely geographical process contributes to the creation of conservative models unable to effectively assess changes in the socio-economic and political contexts influencing deforestation trends, this explicit characterization of the socio-economic dimension of deforestation is critical for the creation of deforestation scenarios in REDD+ projects. © 2017 John Wiley & Sons Ltd.

  17. Recurrence risk perception and quality of life following treatment of breast cancer.

    Science.gov (United States)

    Hawley, Sarah T; Janz, Nancy K; Griffith, Kent A; Jagsi, Reshma; Friese, Christopher R; Kurian, Allison W; Hamilton, Ann S; Ward, Kevin C; Morrow, Monica; Wallner, Lauren P; Katz, Steven J

    2017-02-01

    Little is known about different ways of assessing risk of distant recurrence following cancer treatment (e.g., numeric or descriptive). We sought to evaluate the association between overestimation of risk of distant recurrence of breast cancer and key patient-reported outcomes, including quality of life and worry. We surveyed a weighted random sample of newly diagnosed patients with early-stage breast cancer identified through SEER registries of Los Angeles County & Georgia (2013-14) ~2 months after surgery (N = 2578, RR = 71%). Actual 10-year risk of distant recurrence after treatment was based on clinical factors for women with DCIS & low-risk invasive cancer (Stg 1A, ER+, HER2-, Gr 1-2). Women reported perceptions of their risk numerically (0-100%), with values ≥10% for DCIS & ≥20% for invasive considered overestimates. Perceptions of "moderate, high or very high" risk were considered descriptive overestimates. In our analytic sample (N = 927), we assessed factors correlated with both types of overestimation and report multivariable associations between overestimation and QoL (PROMIS physical & mental health) and frequent worry. 30.4% of women substantially overestimated their risk of distant recurrence numerically and 14.7% descriptively. Few factors other than family history were significantly associated with either type of overestimation. Both types of overestimation were significantly associated with frequent worry, and lower QoL. Ensuring understanding of systemic recurrence risk, particularly among patients with favorable prognosis, is important. Better risk communication by clinicians may translate to better risk comprehension among patients and to improvements in QoL.
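The two overestimation measures used in the survey can be written down directly; the thresholds and labels below are those stated in the abstract, and the function names are illustrative:

```python
def overestimates_numeric(perceived_pct, diagnosis):
    """Numeric overestimation of 10-year distant recurrence risk:
    perceived risk >= 10% for DCIS, >= 20% for low-risk invasive cancer."""
    threshold = 10 if diagnosis == "DCIS" else 20
    return perceived_pct >= threshold

def overestimates_descriptive(label):
    """Descriptive overestimation: a perceived risk of 'moderate',
    'high' or 'very high'."""
    return label.strip().lower() in {"moderate", "high", "very high"}
```

Applied to the analytic sample, these two definitions yield the 30.4% (numeric) and 14.7% (descriptive) overestimation rates reported above.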

  18. A WRF/Chem sensitivity study using ensemble modelling for a high ozone episode in Slovenia and the Northern Adriatic area

    Science.gov (United States)

    Žabkar, Rahela; Koračin, Darko; Rakovec, Jože

    2013-10-01

    A high ozone (O3) concentrations episode during a heat wave event in the Northeastern Mediterranean was investigated using the WRF/Chem model. To understand the major model uncertainties and errors as well as the impacts of model inputs on the model accuracy, an ensemble modelling experiment was conducted. The 51-member ensemble was designed by varying model physics parameterization options (PBL schemes with different surface layer and land-surface modules, and radiation schemes); chemical initial and boundary conditions; anthropogenic and biogenic emission inputs; and model domain setup and resolution. The main impacts of the geographical and emission characteristics of three distinct regions (suburban Mediterranean, continental urban, and continental rural) on the model accuracy and O3 predictions were investigated. In spite of the large ensemble set size, the model generally failed to simulate the extremes; however, as expected from probabilistic forecasting the ensemble spread improved results with respect to extremes compared to the reference run. Noticeable model nighttime overestimations at the Mediterranean and some urban and rural sites can be explained by too strong simulated winds, which reduce the impact of dry deposition and O3 titration in the near surface layers during the nighttime. Another possible explanation could be inaccuracies in the chemical mechanisms, which are suggested also by model insensitivity to variations in the nitrogen oxides (NOx) and volatile organic compounds (VOC) emissions. Major impact factors for underestimations of the daytime O3 maxima at the Mediterranean and some rural sites include overestimation of the PBL depths, a lack of information on forest fires, too strong surface winds, and also possible inaccuracies in biogenic emissions. This numerical experiment with the ensemble runs also provided guidance on an optimum model setup and input data.

  19. Uncertainty Source of Modeled Ecosystem Productivity in East Asian Monsoon Region: A Traceability Analysis

    Science.gov (United States)

    Cui, E.; Xia, J.; Huang, K.; Ito, A.; Arain, M. A.; Jain, A. K.; Poulter, B.; Peng, C.; Hayes, D. J.; Ricciuto, D. M.; Huntzinger, D. N.; Tian, H.; Mao, J.; Fisher, J.; Schaefer, K. M.; Huang, M.; Peng, S.; Wang, W.

    2017-12-01

    The East Asian monsoon region, benefiting from sufficient water-heat availability and increasing nitrogen deposition, exhibits significantly higher net ecosystem productivity than the same latitudes of Europe-Africa and North America. A better understanding of the major contributions to the uncertainties of the terrestrial carbon cycle in this region is of great importance for evaluating the global carbon balance. This study analyzed the key carbon processes and parameters derived from a series of terrestrial biosphere models. A wide range of inter-model disagreement on GPP was found in China's subtropical regions. This large difference was then traced to a few traceable components of the terrestrial carbon cycle. The increase in ensemble mean GPP over 1901-2010 resulted predominantly from increasing atmospheric CO2 concentration and nitrogen deposition, while the frequent land-use change over this region showed a slightly negative effect on GPP. However, inter-model differences in GPP were mainly attributed to the baseline simulations without changes in external forcing. According to the variance decomposition, the large spread in simulated GPP was well explained by the differences in leaf area index (LAI) and specific leaf area (SLA) among models. In addition, the underlying errors in simulated GPP propagate through the model and introduce additional errors into the simulation of NPP and biomass. By comparing the simulations with satellite-derived, data-oriented and observation-based datasets, we further found that GPP, vegetation carbon turnover time, aboveground biomass, LAI and SLA were all overestimated in most of the models, while the biomass distribution in leaves was significantly underestimated. The results of this study indicate that model performance on ecosystem productivity in the East Asian monsoon region can be improved by a more realistic representation of leaf functional traits.

  20. Constraining the uncertainty in emissions over India with a regional air quality model evaluation

    Science.gov (United States)

    Karambelas, Alexandra; Holloway, Tracey; Kiesewetter, Gregor; Heyes, Chris

    2018-02-01

    To evaluate uncertainty in the spatial distribution of air emissions over India, we compare satellite and surface observations with simulations from the U.S. Environmental Protection Agency (EPA) Community Multi-Scale Air Quality (CMAQ) model. Seasonally representative simulations were completed for January, April, July, and October 2010 at 36 km × 36 km resolution using anthropogenic emissions from the Greenhouse Gas-Air Pollution Interactions and Synergies (GAINS) model following version 5a of the Evaluating the Climate and Air Quality Impacts of Short-Lived Pollutants project (ECLIPSE v5a). We use both tropospheric columns from the Ozone Monitoring Instrument (OMI) and surface observations from the Central Pollution Control Board (CPCB) to closely examine modeled nitrogen dioxide (NO2) biases in urban and rural regions across India. Spatially averaged evaluation against satellite retrievals indicates a low bias in the modeled tropospheric column (-63.3%), which reflects broad low biases across predominantly non-urban regions (-70.1% in rural areas) of the sub-continent, with somewhat smaller low biases in semi-urban areas (-44.7%); the threshold between semi-urban and rural is defined as 400 people per km2. In contrast, modeled surface NO2 concentrations exhibit a slight high bias of +15.6% when compared to surface CPCB observations, which are predominantly located in urban areas. Conversely, in extremely population-dense urban regions with more than 5000 people per km2 (dense-urban), we find model overestimates in both the column (+57.8%) and at the surface (+131.2%) compared to observations. Based on these results, we find that existing emission fields for India may overestimate urban emissions in densely populated regions and underestimate rural emissions. However, if we rely on model evaluation with predominantly urban surface observations from the CPCB alone, comparisons reflect model high biases, contradicting the knowledge gained using satellite observations. Satellites thus...
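The percentage biases quoted in this abstract are of the kind produced by a normalized mean bias (NMB) metric. As a minimal sketch (the metric choice and the sample values below are assumptions, not taken from the study), NMB can be computed as:

```python
# Hypothetical sketch: normalized mean bias (NMB), the style of metric behind
# percentage biases such as -63.3% for the tropospheric column quoted above.
# The sample values below are illustrative, not from the study.

def normalized_mean_bias(model, obs):
    """NMB = 100 * sum(model - obs) / sum(obs), in percent."""
    if len(model) != len(obs) or not obs:
        raise ValueError("model and obs must be equal-length, non-empty")
    return 100.0 * sum(m - o for m, o in zip(model, obs)) / sum(obs)

# A model that is uniformly 40% low has an NMB of -40%.
obs = [1.0, 2.0, 4.0]
model = [0.6 * o for o in obs]
print(round(normalized_mean_bias(model, obs), 1))  # -40.0
```

Grouping the station or pixel pairs by population-density class before applying the metric reproduces the rural/semi-urban/dense-urban breakdown described in the abstract.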

  1. Effects of Initial Drivers and Land Use on WRF Modeling for Near-Surface Fields and Atmospheric Boundary Layer over the Northeastern Tibetan Plateau

    Directory of Open Access Journals (Sweden)

    Junhua Yang

    2016-01-01

    Full Text Available To improve the simulation performance of mesoscale models over the northeastern Tibetan Plateau, two reanalysis initial datasets (NCEP FNL and ERA-Interim) and two MODIS (Moderate-Resolution Imaging Spectroradiometer) land-use datasets (from 2001 and 2010) are used in WRF (Weather Research and Forecasting) modeling. The model can reproduce the variations of 2 m temperature (T2) and 2 m relative humidity (RH2), but T2 is overestimated and RH2 is underestimated in the control experiment. With the new initial driver and land-use data, the simulated T2 improves through correction of the overestimated net surface energy flux, and RH2 improves owing to the lower T2 and larger soil moisture. Because of a systematic bias in WRF wind-speed modeling, we design another experiment that includes the Jimenez subgrid-scale orography scheme; it reduces the frequency of low wind speeds and increases the frequency of high wind speeds, which is more consistent with observations. Meanwhile, the new driver and land-use data lead to a lower boundary-layer height and influence the potential temperature and wind speed in both the lower atmosphere and the upper layer, while the impact on the water vapor mixing ratio is primarily confined to the lower atmosphere.

  2. Celecoxib does not significantly delay bone healing in a rat femoral osteotomy model: a bone histomorphometry study

    Directory of Open Access Journals (Sweden)

    Iwamoto J

    2011-12-01

    Full Text Available Jun Iwamoto,1 Azusa Seki,2 Yoshihiro Sato,3 Hideo Matsumoto1; 1Institute for Integrated Sports Medicine, Keio University School of Medicine, Tokyo, Japan; 2Hamri Co, Ltd, Tokyo, Japan; 3Department of Neurology, Mitate Hospital, Fukuoka, Japan. Background and objective: The objective of the present study was to determine whether celecoxib, a cyclo-oxygenase-2 inhibitor, delays bone healing in a rat femoral osteotomy model, as assessed by bone histomorphometry parameters. Methods: Twenty-one 6-week-old female Sprague-Dawley rats underwent a unilateral osteotomy of the femoral diaphysis followed by intramedullary wire fixation; the rats were then divided into three groups: a vehicle administration group (control, n = 8), a vitamin K2 administration group (menatetrenone 30 mg/kg orally, five times a week; positive control, n = 5), and a celecoxib administration group (4 mg/kg orally, five times a week; n = 8). After 6 weeks of treatment, the wires were removed, and a bone histomorphometric analysis was performed on the bone tissue inside the callus. Results: The lamellar area relative to the bone area was significantly higher, and the total area and the woven area relative to the bone area were significantly lower, in the vitamin K2 group than in the vehicle group. However, none of the structural parameters (such as the callus and bone area relative to the total area and the lamellar and woven areas relative to the bone area) or the formative and resorptive parameters (such as osteoclast surface, number of osteoclasts, osteoblast surface, osteoid surface, eroded surface, and bone formation rate per bone surface) differed significantly between the vehicle and celecoxib groups. Conclusion: The present study implies that celecoxib may not significantly delay bone healing in a rat femoral osteotomy model, based on the results of a bone histomorphometric analysis. Keywords: femoral osteotomy, bone healing, callus, rat, celecoxib

  3. Coronary risk assessment by point-based vs. equation-based Framingham models: significant implications for clinical care.

    Science.gov (United States)

    Gordon, William J; Polansky, Jesse M; Boscardin, W John; Fung, Kathy Z; Steinman, Michael A

    2010-11-01

    US cholesterol guidelines use original and simplified versions of the Framingham model to estimate future coronary risk and thereby classify patients into risk groups with different treatment strategies. We sought to compare risk estimates and risk group classification generated by the original, complex Framingham model and the simplified, point-based version. We assessed 2,543 subjects age 20-79 from the 2001-2006 National Health and Nutrition Examination Surveys (NHANES) for whom Adult Treatment Panel III (ATP-III) guidelines recommend formal risk stratification. For each subject, we calculated the 10-year risk of major coronary events using the original and point-based Framingham models, and then compared differences in these risk estimates and whether these differences would place subjects into different ATP-III risk groups (<10%, 10-20%, or >20% risk). Using standard procedures, all analyses were adjusted for survey weights, clustering, and stratification to make our results nationally representative. Among 39 million eligible adults, the original Framingham model categorized 71% of subjects as having "moderate" (<10%) risk. Estimates of coronary risk by the original and point-based models often differed substantially. The point-based system classified 15% of adults (5.7 million) into different risk groups than the original model, with 10% (3.9 million) misclassified into higher risk groups and 5% (1.8 million) into lower risk groups, for a net impact of classifying 2.1 million adults into higher risk groups. These risk group misclassifications would impact guideline-recommended drug treatment strategies for 25-46% of affected subjects. Patterns of misclassification varied significantly by gender, age, and underlying CHD risk. Compared to the original Framingham model, the point-based version misclassifies millions of Americans into risk groups for which guidelines recommend different treatment strategies.
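The risk-group comparison described above can be sketched as follows. The ATP-III cutpoints (<10%, 10-20%, >20%) are standard; the risk values and the tallying helper below are illustrative inventions, not the study's survey-weighted NHANES analysis.

```python
# Illustrative sketch: each subject gets a 10-year risk estimate from both
# models, is binned by the ATP-III cutpoints, and disagreements are tallied.
# The risk values are made up; the real study used survey-weighted data.

def atp3_group(risk_pct):
    if risk_pct < 10:
        return "moderate"
    if risk_pct <= 20:
        return "moderately high"
    return "high"

ORDER = {"moderate": 0, "moderately high": 1, "high": 2}

def tally(original, point_based):
    """Count subjects the point-based model shifts up or down a group."""
    up = down = 0
    for r_orig, r_pts in zip(original, point_based):
        d = ORDER[atp3_group(r_pts)] - ORDER[atp3_group(r_orig)]
        if d > 0:
            up += 1
        elif d < 0:
            down += 1
    return up, down

orig_risk = [4.0, 9.5, 12.0, 18.0, 25.0]   # hypothetical original-model risks
pts_risk = [5.0, 11.0, 12.0, 22.0, 19.0]   # hypothetical point-based risks
print(tally(orig_risk, pts_risk))  # (2, 1): two shifted up, one shifted down
```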

  4. On the electrical conductivity for the mixed-valence model with d-f correlations

    International Nuclear Information System (INIS)

    Borgiel, W.; Matlak, M.

    1984-08-01

    The static electrical conductivity of mixed-valence systems is calculated in the model of Matlak and Nolting [Solid State Commun. 47, 11 (1983); Z. Phys. B 55, 103 (1984)]. The method takes the atomic properties into account more exactly than those connected with bands, and hence emphasizes the ionic aspect of the problem; indeed, the calculations overestimate the atomic properties. Some results are presented in a graph. It is found that the electrical conductivity depends strongly on temperature and on the electron-hole attraction constant.

  5. Constructing Quality Adjusted Price Indexes: a Comparison of Hedonic and Discrete Choice Models

    OpenAIRE

    N. Jonker

    2001-01-01

    The Boskin report (1996) concluded that the US consumer price index (CPI) overestimated inflation by 1.1 percentage points. This was due to several measurement errors in the CPI. One of them is called quality change bias. In this paper two methods are compared which can be used to eliminate quality change bias, namely the hedonic method and a method based on the use of discrete choice models. The underlying micro-economic foundations of the two methods are compared as well as their empiric...

  6. Radiation absorption and use by humid savanna grassland: assessment using remote sensing and modelling

    International Nuclear Information System (INIS)

    Roux, X. le; Gauthier, H.; Begue, A.; Sinoquet, H.

    1997-01-01

    The components of the canopy radiation balance in photosynthetically active radiation (PAR), phytomass, and leaf area index (LAI) were measured during a complete annual cycle in an annually burned African humid savanna. Directional reflectances measured by a hand-held radiometer were used to compute the canopy normalized difference vegetation index (NDVI). The fraction fAPAR of PAR absorbed by the canopy (APAR) and canopy reflectances were simulated by the scattering from arbitrarily inclined leaves (SAIL) and the radiation interception in row intercropping (RIRI) models. The daily PAR to solar radiation ratio was linearly related to the daily fraction of diffuse solar radiation, with an annual value around 0.47. The observed fAPAR was non-linearly related to NDVI. The SAIL model simulated directional reflectances reasonably well but noticeably overestimated fAPAR during most of the growing season. Comparison of simulations performed with the 1D and 3D versions of the RIRI model highlighted the weak influence on total fAPAR of the heterogeneous structure of the canopy after fire and of the vertical distribution of dead and green leaves. Daily fAPAR values simulated by the 3D-RIRI model were linearly related to, and 9.8% higher than, observed values. For sufficient soil water availability, the net production efficiency εn of the savanna grass canopy was 1.92 and 1.28 g DM MJ−1 APAR (where DM stands for dry matter) during early regrowth and the mature stage, respectively. In conclusion, the linear relationship between NDVI and fAPAR used in most primary production models operating at large scales may slightly overestimate fAPAR by green leaves for the humid savanna biome. Moreover, the net production efficiency of humid savannas is close to or higher than values reported for the other major natural biomes. (author)
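NDVI itself is the standard normalized difference of near-infrared and red reflectances, and a linear NDVI-to-fAPAR mapping of the kind the abstract cautions about can be sketched as follows. The coefficients a and b below are placeholders, since the paper reports that the true relation is non-linear for this canopy.

```python
# NDVI is the standard normalized difference of NIR and red reflectances.
# The linear fAPAR mapping uses hypothetical coefficients (a, b) purely for
# illustration; the study found the observed relation to be non-linear.

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def fapar_linear(ndvi_value, a=1.2, b=-0.2):  # a, b are placeholder values
    return max(0.0, min(1.0, a * ndvi_value + b))  # clamp to [0, 1]

v = ndvi(0.45, 0.09)
print(round(v, 3))                 # 0.667
print(round(fapar_linear(v), 3))   # 0.6
```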

  7. Assessment of a turbulence model for numerical predictions of sheet-cavitating flows in centrifugal pumps

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Houlin; Wang, Yong; Liu, Dongxi; Yuan, Shouqi; Wang, Jian [Jiangsu University, Zhenjiang (China)

    2013-09-15

    Various approaches have been developed for numerical predictions of unsteady cavitating turbulent flows. To verify the influence of the turbulence model on the simulation of unsteady attached sheet-cavitating flows in centrifugal pumps, two modified RNG k-ε models (DCM and FBM) are implemented in ANSYS-CFX 13.0 through secondary development, so as to compare three widespread turbulence models on the same platform. The simulation has been executed and compared to experimental results for three different flow coefficients. For four operating conditions, qualitative comparisons are carried out between experimental and numerical cavitation patterns, which are visualized by a high-speed camera and depicted as isosurfaces of vapor volume fraction αv = 0.1, respectively. The comparison results indicate that, for the development of the attached sheet cavities on the suction side of the impeller blades, the numerical results with different turbulence models are very close to each other and slightly overestimate the experimental ones. However, compared to the experimental cavitation performance curves, the numerical results differ markedly: the prediction precision with the FBM is higher than with the other two turbulence models. In addition, the loading distributions around the blade section at midspan are analyzed in detail. The research results suggest that, for numerical prediction of cavitating flows in centrifugal pumps, the turbulence model has little influence on the development of cavitation bubbles, but an advanced turbulence model can significantly improve the prediction precision of head coefficients and critical cavitation numbers.

  8. Modeling surface energy fluxes and thermal dynamics of a seasonally ice-covered hydroelectric reservoir.

    Science.gov (United States)

    Wang, Weifeng; Roulet, Nigel T; Strachan, Ian B; Tremblay, Alain

    2016-04-15

    The thermal dynamics of human-created northern reservoirs (e.g., water temperatures and ice cover dynamics) influence carbon processing and air-water gas exchange. Here, we developed a process-based one-dimensional model (Snow, Ice, WAter, and Sediment: SIWAS) to simulate a full year's surface energy fluxes and thermal dynamics for a moderately large (>500 km²) boreal hydroelectric reservoir in northern Quebec, Canada. There is a lack of climate and weather data for most of the Canadian boreal, so we designed SIWAS with a minimum of inputs and with a daily time step. The modeled surface energy fluxes were consistent with six years of observations from eddy covariance measurements taken in the middle of the reservoir. The simulated water temperature profiles agreed well with observations from over 100 sites across the reservoir. The model successfully captured the observed annual trend of ice cover timing, although it overestimated the length of the ice cover period (by 15 days). Sensitivity analysis revealed that air temperature significantly affects the ice cover duration and the water and sediment temperatures, but that dissolved organic carbon concentrations have little effect on the heat fluxes and on water and sediment temperatures. We conclude that the SIWAS model is capable of simulating surface energy fluxes and thermal dynamics for boreal reservoirs in regions where high temporal resolution climate data are not available. SIWAS is suitable for integration into biogeochemical models for simulating a reservoir's carbon cycle. Copyright © 2016 Elsevier B.V. All rights reserved.
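A model like SIWAS advances water temperature from a daily surface energy balance. The sketch below is a heavily simplified single-layer illustration of that idea only; the transfer coefficients, the crude Bowen-style latent-heat tie, and all forcing values are assumptions, not SIWAS equations.

```python
# Highly simplified sketch of a daily surface energy balance step: net
# radiation is partitioned into sensible, latent, and storage terms, and
# storage updates a single mixed-layer water temperature. All coefficients
# and forcing values are illustrative placeholders, not SIWAS parameters.

RHO_W, CP_W = 1000.0, 4186.0  # water density (kg/m3), heat capacity (J/kg/K)

def daily_step(t_water, t_air, net_rad, wind, depth=10.0):
    """Advance mixed-layer water temperature (deg C) by one day.

    net_rad: daily-mean net radiation absorbed by the water (W/m2).
    """
    h_coef = 10.0 + 4.0 * wind                         # bulk transfer (assumed)
    sensible = h_coef * (t_water - t_air)              # W/m2, lost when warmer
    latent = 0.6 * sensible if sensible > 0 else 0.0   # crude Bowen-style tie
    storage = net_rad - sensible - latent              # W/m2 into the column
    dT = storage * 86400.0 / (RHO_W * CP_W * depth)
    return t_water + dT

t = 4.0
for _ in range(30):  # a month of constant forcing
    t = daily_step(t, t_air=10.0, net_rad=120.0, wind=3.0)
print(t > 4.0)  # True: positive net radiation and warmer air heat the water
```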

  9. Development of Shear Capacity Prediction Model for FRP-RC Beam without Web Reinforcement

    Directory of Open Access Journals (Sweden)

    Md. Arman Chowdhury

    2016-01-01

    Full Text Available Available codes and models generally use a partially modified shear design equation, developed earlier for steel-reinforced concrete, to predict the shear capacity of FRP-RC members. Consequently, the calculated shear capacity shows under- or overestimation. Furthermore, most models overlook some parameters that affect shear strength. In this study, a new and simplified shear capacity prediction model is proposed that considers all of these parameters. A large database containing 157 experimental results for FRP-RC beams without shear reinforcement is assembled from the published literature. A parametric study is then performed to verify the accuracy of the proposed model. A comprehensive review of 9 codes and 12 available models published from 1997 to date is also carried out for comparison with the proposed model. The proposed equation is observed to show the best overall performance compared to all the codes and models within the range of the experimental dataset used.

  10. Identification of sequence motifs significantly associated with antisense activity

    Directory of Open Access Journals (Sweden)

    Peek Andrew S

    2007-06-01

    Full Text Available Abstract Background Predicting the suppression activity of antisense oligonucleotide sequences is the main goal of the rational design of nucleic acids. To create an effective predictive model, it is important to know which properties of an oligonucleotide sequence associate significantly with antisense activity. Also, for the model to be efficient, we must know which properties do not associate significantly and can be omitted from the model. This paper discusses the results of a randomization procedure to find motifs that associate significantly with either high or low antisense suppression activity, an analysis of their properties, and the results of support vector machine modelling using these significant motifs as features. Results We discovered 155 motifs that associate significantly with high antisense suppression activity and 202 motifs that associate significantly with low suppression activity. The motifs range in length from 2 to 5 bases, include several motifs previously found to associate highly with antisense activity, and have thermodynamic properties consistent with previous work associating the thermodynamic properties of sequences with their antisense activity. Statistical analysis revealed no correlation between a motif's position within an antisense sequence and that sequence's antisense activity. Also, many significant motifs existed as subwords of other significant motifs. Support vector regression experiments indicated that the feature set of significant motifs increased correlation compared to all possible motifs as well as several subsets of the significant motifs. Conclusion The thermodynamic properties of the significantly associated motifs support existing data correlating the thermodynamic properties of the antisense oligonucleotide with antisense efficiency, reinforcing our hypothesis that antisense suppression is strongly associated with probe/target thermodynamics, as there are no enzymatic...
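The randomization procedure described above can be sketched as a label-shuffling permutation test: compare the mean activity of motif-bearing sequences against means obtained under shuffled activity labels. The data below are toy values, not the study's oligonucleotide set, and the helper names are hypothetical.

```python
# Sketch of a permutation (randomization) test for motif-activity association.
# Toy data only; the study's actual procedure and dataset differ.
import random

def mean_with(mask, values):
    """Mean of the values whose mask entry is True."""
    sel = [v for m, v in zip(mask, values) if m]
    return sum(sel) / len(sel)

def motif_pvalue(has_motif, activity, n_perm=2000, seed=1):
    """Empirical p-value that motif-bearing sequences have higher activity."""
    random.seed(seed)
    obs = mean_with(has_motif, activity)
    labels = list(activity)
    hits = 0
    for _ in range(n_perm):
        random.shuffle(labels)
        if mean_with(has_motif, labels) >= obs:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction

# Toy data: 5 motif-bearing oligos with high suppression, 15 without.
has_motif = [True] * 5 + [False] * 15
activity = [0.9, 0.8, 0.85, 0.95, 0.7] + [0.2] * 15
p = motif_pvalue(has_motif, activity)
print(p < 0.05)  # True: the association is unlikely under shuffling
```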

  11. Building a better methane generation model: Validating models with methane recovery rates from 35 Canadian landfills.

    Science.gov (United States)

    Thompson, Shirley; Sawyer, Jennifer; Bonam, Rathan; Valdivia, J E

    2009-07-01

    The German EPER, TNO, Belgium, LandGEM, and Scholl Canyon models for estimating methane production were compared to methane recovery rates for 35 Canadian landfills, assuming that 20% of emissions were not recovered. Two different fractions of degradable organic carbon (DOC(f)) were applied in all models. Most models performed better when the DOC(f) was 0.5 rather than 0.77. The Belgium, Scholl Canyon, and LandGEM version 2.01 models produced the best results of the existing models, with respective mean absolute errors relative to methane generation rates (recovery rates + 20%) of 91%, 71%, and 89% at 0.50 DOC(f) and 171%, 115%, and 81% at 0.77 DOC(f). The Scholl Canyon model typically overestimated methane recovery rates, and the LandGEM version 2.01 model, which modifies the Scholl Canyon model by dividing waste by ten, consistently underestimated them; this comparison suggested that setting the waste divisor in the Scholl Canyon model to a value between one and ten could improve its accuracy. At 0.50 DOC(f) and 0.77 DOC(f), the modified model had the lowest mean absolute error with divisors of 1.5 (63 ± 45%) and 2.3 (57 ± 47%), respectively. These modified models reduced error and variability substantially, and both have a strong correlation of r = 0.92.
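The divisor modification discussed above fits naturally into the first-order-decay structure shared by the Scholl Canyon and LandGEM models. The sketch below treats the waste divisor as a parameter (1 for the Scholl Canyon form, 10 for LandGEM v2.01, and values like 1.5 or 2.3 for the modified models); the k and L0 values are illustrative placeholders, not the study's parameters.

```python
# First-order-decay sketch with the waste divisor as a free parameter.
# k (1/yr) and L0 (m3 CH4 per Mg waste) below are illustrative only.
import math

def methane_generation(annual_waste, year, k=0.05, L0=100.0, divisor=10.0):
    """CH4 generated in `year` from waste accepted in years 0, 1, 2, ..."""
    q = 0.0
    for i, mass in enumerate(annual_waste):
        age = year - i
        if age > 0:
            q += k * L0 * (mass / divisor) * math.exp(-k * age)
    return q

waste = [1000.0] * 5  # five years of constant filling (illustrative)
scholl = methane_generation(waste, year=5, divisor=1.0)
landgem = methane_generation(waste, year=5, divisor=10.0)
print(round(scholl / landgem, 6))  # 10.0: the forms differ only by the divisor
```

Re-fitting the divisor between 1 and 10, as the study suggests, simply rescales the same decay curve toward the observed recovery rates.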

  12. Carbon dioxide abatement in an empirical model of the Indian economy: an integration of micro and macro analysis

    International Nuclear Information System (INIS)

    Gupta, S.

    1995-01-01

    Global warming and associated climate change are the likely results of an enhanced greenhouse effect due to excessive emission of greenhouse gases. Carbon dioxide (CO2) is the largest contributor to the greenhouse effect. The costs of stabilising or reducing CO2 emissions are estimated by two types of models. Macro models, based on aggregate macroeconomic relationships, study the macroeconomic impacts of and responses to different policies; these overestimate costs because technological responses are not adequately modelled. Micro models contain the necessary technical information to assess the abatement potential, but exclude indirect costs. In this study, a methodology for integrating the two approaches for developing countries is proposed and illustrated for India. The problems associated with modelling developing economies are recognized in the integrated model proposed. (Author)

  13. A long range dependent model with nonlinear innovations for simulating daily river flows

    Directory of Open Access Journals (Sweden)

    P. Elek

    2004-01-01

    Full Text Available We present the analysis aimed at the estimation of flood risks of the Tisza River in Hungary on the basis of daily river discharge data registered in the last 100 years. The deseasonalised series has a skewed and leptokurtic distribution, and various methods suggest that it possesses substantial long memory. This motivates the attempt to fit a fractional ARIMA model with non-Gaussian innovations as a first step. Synthetic streamflow series can then be generated from the bootstrapped innovations. However, there remains a significant difference between the empirical and the synthetic density functions as well as the quantiles. This draws attention to the fact that the innovations are not independent: both their squares and absolute values are autocorrelated. Furthermore, the innovations display non-seasonal periods of high and low variance. This behaviour is characteristic of generalised autoregressive conditional heteroscedastic (GARCH) models. However, when the innovations are simulated as GARCH processes, the quantiles and extremes of the discharge series are heavily overestimated. Therefore we suggest fitting a smooth transition GARCH process to the innovations. In a standard GARCH model the dependence of the variance on the lagged innovation is quadratic, whereas in our proposed model it is a bounded function. While preserving long memory and eliminating the correlation from both the generating noise and from its square, the new model is superior to the previously mentioned ones in approximating the probability density, the high quantiles and the extremal behaviour of the empirical river flows.
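The contrast between a quadratic and a bounded news term can be illustrated with a minimal simulation. The bounded form below is one possible choice, not the authors' exact smooth-transition specification, and all parameter values are illustrative.

```python
# Minimal GARCH(1,1)-style simulation contrasting the quadratic news term
# with a bounded one (the idea behind smooth-transition GARCH). The bounded
# form and all parameters are illustrative assumptions, not the paper's fit.
import math
import random

def simulate(n, news, omega=0.1, alpha=0.2, beta=0.7, seed=42):
    """Simulate n innovations with variance var = omega + alpha*news + beta*var."""
    random.seed(seed)
    var = omega / (1 - alpha - beta)  # start at the unconditional variance
    out = []
    for _ in range(n):
        eps = math.sqrt(var) * random.gauss(0, 1)
        out.append(eps)
        var = omega + alpha * news(eps) + beta * var
    return out

def quadratic(e):
    return e * e  # standard GARCH news impact

def bounded(e):
    return 4.0 * (1.0 - math.exp(-e * e))  # bounded news impact (illustrative)

x_garch = simulate(5000, quadratic)
x_stg = simulate(5000, bounded)
# The bounded term caps the variance response to large shocks, which is why
# it tames the overestimated extremes mentioned in the abstract.
print(max(abs(v) for v in x_garch), max(abs(v) for v in x_stg))
```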

  14. Field clearance of an intertidal bivalve bed: relative significance of the co-occurring blue mussel Mytilus edulis and Pacific oyster Crassostrea gigas

    DEFF Research Database (Denmark)

    Vismann, Bent; Holm, Mark Wejlemann; Davids, Jens

    2016-01-01

    was estimated by combining field measurements of clearance rates and modelling of the bivalve bed (topography, biomass distribution, temporal and spatial water coverage and depth). The average density of C. gigas and M. edulis was 35 ± 36 and 1001 ± 685 ind. m−2, respectively. The water volume cleared during...... a tidal cycle was estimated at 45 838 m3, of which C. gigas and M. edulis contributed 9169 and 36 669 m3, respectively. Therefore, M. edulis contributed 4 times as much as C. gigas to the bivalve bed’s clearance, and the 2 bivalves were estimated to clear the water volume 1.9 times during each tidal cycle....... However, the estimated water column cleared during low tide is overestimated due to phytoplankton depletion. Hence, it is concluded that the bivalve bed clears the water close to 1 time each tidal cycle. This, together with a low dry weight of soft parts, indicates that the bivalve bed, in general...
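The clearance figures in this excerpt are internally consistent, as a quick arithmetic check shows; the tidal water volume is back-calculated here and is not stated in the excerpt.

```python
# Consistency check of the quoted clearance figures: the two species'
# contributions should sum to the total, and the ratio and "times cleared
# per tidal cycle" follow by division. The tidal water volume is
# back-calculated, not stated in the excerpt.
gigas, edulis = 9_169.0, 36_669.0  # m3 cleared per tidal cycle
total = gigas + edulis
print(total)                  # 45838.0, matching the quoted total
print(round(edulis / gigas))  # 4: M. edulis clears ~4x as much as C. gigas
tidal_volume = total / 1.9    # implied water volume over the bed per cycle
print(round(tidal_volume))    # 24125 m3 (back-calculated, approximate)
```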

  15. Applications of Living Fire PRA models to Fire Protection Significance Determination Process in Taiwan

    International Nuclear Information System (INIS)

    De-Cheng, Chen; Chung-Kung, Lo; Tsu-Jen, Lin; Ching-Hui, Wu; Lin, James C.

    2004-01-01

    The living fire probabilistic risk assessment (PRA) models for all three operating nuclear power plants (NPPs) in Taiwan were established in December 2000. In that study, a scenario-based PRA approach was adopted to systematically evaluate the fire and smoke hazards and associated risks. Using the fire PRA models developed, a risk-informed application project had also been completed in December 2002 for the evaluation of cable-tray fire-barrier wrapping exemption. This paper presents a new application of the fire PRA models to fire protection issues using the fire protection significance determination process (FP SDP). The fire protection issues studied may involve the selection of appropriate compensatory measures during the period when an automatic fire detection or suppression system in a safety-related fire zone becomes inoperable. The compensatory measure can be either a 24-hour fire watch or an hourly fire patrol. The living fire PRA models were used to estimate the increase in risk associated with the fire protection issue in terms of changes in core damage frequency (CDF) and large early release frequency (LERF). In compliance with the at-power SDP and the acceptance guidelines specified in RG 1.174, the fire protection issues in question can be grouped into four categories: red, yellow, white, and green, in accordance with the guidelines developed for the FP SDP. A 24-hour fire watch is suggested only for the yellow condition, while an hourly fire patrol may be adopted for the white condition. A more limiting requirement is suggested for the red condition, while no special consideration is needed for the green condition. In calculating the risk measures, the risk impacts of any additional fire scenarios that may have been introduced, as well as of the more severe initiating events and fire damage that may accompany the fire protection issue, should be considered carefully. Examples are presented in this paper to illustrate the evaluation process. (authors)
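The color binning described above can be sketched as a threshold lookup on the estimated increase in CDF. The numerical thresholds below are the commonly cited SDP screening values and should be treated as illustrative here, since the paper applies plant- and guideline-specific criteria.

```python
# Sketch of the color-binning step: the increase in core damage frequency
# (delta-CDF, per year) maps to an SDP color. Thresholds are the commonly
# cited SDP screening values, used here for illustration only.

def sdp_color(delta_cdf):
    if delta_cdf >= 1e-4:
        return "red"
    if delta_cdf >= 1e-5:
        return "yellow"
    if delta_cdf >= 1e-6:
        return "white"
    return "green"

print(sdp_color(3e-5))  # yellow -> a 24-hour fire watch per the scheme above
print(sdp_color(2e-6))  # white  -> an hourly fire patrol
```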

  16. Hydrothermal Fe cycling and deep ocean organic carbon scavenging: Model-based evidence for significant POC supply to seafloor sediments

    Digital Repository Service at National Institute of Oceanography (India)

    German, C.R.; Legendre, L.L.; Sander, S.G.;; Niquil, N.; Luther-III, G.W.; LokaBharathi, P.A.; Han, X.; LeBris, N.

    by more than ~10% over background values, what the model does indicate is that scavenging of carbon in association with Fe-rich hydrothermal plume particles should play a significant role in the delivery of particulate organic carbon to deep ocean...

  17. Meteorological Modeling Using the WRF-ARW Model for Grand Bay Intensive Studies of Atmospheric Mercury

    Directory of Open Access Journals (Sweden)

    Fong Ngan

    2015-02-01

    Full Text Available Measurements at the Grand Bay National Estuarine Research Reserve support a range of research activities aimed at improving the understanding of the atmospheric fate and transport of mercury. Routine monitoring was enhanced by two intensive measurement periods conducted at the site in summer 2010 and spring 2011. Detailed meteorological data are required to properly represent the weather conditions, to determine the transport and dispersion of plumes and to understand the wet and dry deposition of mercury. To describe the mesoscale features that might influence future plume calculations for mercury episodes during the Grand Bay Intensive campaigns, fine-resolution meteorological simulations using the Weather Research and Forecasting (WRF model were conducted with various initialization and nudging configurations. The WRF simulations with nudging generated reasonable results in comparison with conventional observations in the region and measurements obtained at the Grand Bay site, including surface and sounding data. The grid nudging, together with observational nudging, had a positive effect on wind prediction. However, the nudging of mass fields (temperature and moisture led to overestimates of precipitation, which may introduce significant inaccuracies if the data were to be used for subsequent atmospheric mercury modeling. The regional flow prediction was also influenced by the reanalysis data used to initialize the WRF simulations. Even with observational nudging, the summer case simulation results in the fine resolution domain inherited features of the reanalysis data, resulting in different regional wind patterns. By contrast, the spring intensive period showed less influence from the reanalysis data.

  18. Combining interviewing and modeling for end-user energy conservation

    International Nuclear Information System (INIS)

    Goldblatt, David L.; Hartmann, Christoph; Duerrenberger, Gregor

    2005-01-01

    Studying energy consumption through the lens of households is an increasingly popular research avenue. This paper focuses on residential end-user energy conservation. It describes an approach that combines energy modeling and in-depth interviews for communicating about energy use and revealing consumer preferences for change at different levels and intervention points. Expert knowledge was embodied in a computer model for householders that calculates an individual's current energy consumption and helps assess personal savings potentials, while also bringing in socio-technical and economic elements beyond the user's direct control. The paper gives a detailed account of this computer information tool developed for interviewing purposes. It then describes the interview guidelines, data analysis, and main results. In general, interview subjects overestimated the environmental friendliness of their lifestyles. After experience with the program, they tended to rate external (technological, societal) factors as somewhat stronger determinants of their consumption levels than personal (behavioral and household investment) factors, with the notable exception of mobility. Concerning long-term energy perspectives, the majority of subjects felt that society has the ability to make a collective choice towards significantly lower energy consumption levels. Interviewees confirmed that the software and interactive sessions helped them think more holistically about the personal, social, and technological dimensions of energy conservation. Lessons can be applied to the development of future energy communication tools.

  19. SVM and ANFIS Models for precipitaton Modeling (Case Study: GonbadKavouse

    Directory of Open Access Journals (Sweden)

    N. Zabet Pishkhani

    2016-10-01

    since it is less computationally exhaustive and more transparent than other models. A consequent membership function (MF) of the Sugeno model can be any arbitrary parameterized function of the crisp inputs, most likely a polynomial. Zero- and first-order polynomials were used as consequent MFs in the constant and linear Sugeno models, respectively. In addition, the defuzzification process in Sugeno fuzzy models is a simple weighted-average calculation. The fuzzy space was divided via grid partitioning according to the number of antecedent MFs, and each fuzzy region was covered with a fuzzy rule. Results and Discussion: The statistical results showed that the first structure did not perform well in precipitation prediction in either training or testing; ANFIS and SVM had determination coefficients of 0.67 and 0.33 in the training phase and 0.45 and 0.40 in the test phase, and the RMSE values likewise showed that both models failed to predict precipitation with the first structure. For the second structure, the determination coefficient of ANFIS in training and testing was 0.93 and 0.87, respectively, and the RMSE was 7.06 and 9.28, respectively. MBE values showed that ANFIS underestimated in the training phase and overestimated in the test phase. The determination coefficient of SVM in training and testing was 0.89 and 0.91, respectively, and the RMSE was 9.28 and 5.59, respectively. SVM underestimated precipitation in the training phase and overestimated it in the test phase. ANFIS and SVM modeled precipitation from precipitation gauging stations with reasonable accuracy. The determination coefficient in the test phase was almost the same for ANFIS and SVM, but the RMSE of the SVM model was about 20% lower than that of ANFIS. The determination coefficient and error values indicated that SVM had greater accuracy than ANFIS. ANFIS overestimated precipitation below 20 mm, but for higher values its predictions were distributed uniformly around the 1:1 line. SVM
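The skill scores quoted throughout this abstract (determination coefficient, RMSE, MBE) can be computed as below; the observation and simulation values are toy numbers, not the station data.

```python
# The three skill scores used in the abstract: determination coefficient
# (R2), root-mean-square error (RMSE), and mean bias error (MBE, where a
# positive value indicates overestimation). Toy data, not the station set.

def scores(obs, sim):
    n = len(obs)
    mean_obs = sum(obs) / n
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    r2 = 1 - ss_res / ss_tot
    rmse = (ss_res / n) ** 0.5
    mbe = sum(s - o for o, s in zip(obs, sim)) / n
    return r2, rmse, mbe

obs = [5.0, 12.0, 30.0, 8.0, 20.0]   # toy observed precipitation (mm)
sim = [6.0, 10.0, 28.0, 9.0, 22.0]   # toy simulated precipitation (mm)
r2, rmse, mbe = scores(obs, sim)
print(round(r2, 2), round(rmse, 2), round(mbe, 2))  # 0.97 1.67 0.0
```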

  20. Cyclosporin A significantly improves preeclampsia signs and suppresses inflammation in a rat model.

    Science.gov (United States)

    Hu, Bihui; Yang, Jinying; Huang, Qian; Bao, Junjie; Brennecke, Shaun Patrick; Liu, Huishu

    2016-05-01

    Preeclampsia is associated with an increased inflammatory response. Immune suppression might be an effective treatment. The aim of this study was to examine whether Cyclosporin A (CsA), an immunosuppressant, improves the clinical characteristics of preeclampsia and suppresses inflammation in a lipopolysaccharide (LPS) induced preeclampsia rat model. Pregnant rats were randomly divided into 4 groups: group 1 (PE) rats each received LPS via the tail vein on gestational day (GD) 14; group 2 (PE+CsA5) rats were pretreated with LPS (1.0 μg/kg) on GD 14 and were then treated with CsA (5 mg/kg, ip) on GDs 16, 17 and 18; group 3 (PE+CsA10) rats were pretreated with LPS (1.0 μg/kg) on GD 14 and were then treated with CsA (10 mg/kg, ip) on GDs 16, 17 and 18; group 4 (pregnant control, PC) rats were treated with the vehicle (saline) used for groups 1, 2 and 3. Systolic blood pressure, urinary albumin, biometric parameters and the levels of serum cytokines were measured on day 20. CsA treatment significantly reduced LPS-induced systolic blood pressure and the mean 24-h urinary albumin excretion. The pro-inflammatory cytokines IL-6, IL-17, IFN-γ and TNF-α were increased in the LPS treatment group but were reduced in the (LPS+CsA) groups (P < 0.05). CsA treatment improved preeclampsia signs and attenuated inflammatory responses in the LPS-induced preeclampsia rat model, which suggests that immunosuppressants might be an alternative management option for preeclampsia. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. A geochemical and geophysical reappraisal to the significance of the recent unrest at Campi Flegrei caldera (Southern Italy)

    Science.gov (United States)

    Moretti, Roberto; De Natale, Giuseppe; Troise, Claudia

    2017-04-01

    Volcanic unrest at calderas involves complex interaction between magma, hydrothermal fluids and crustal stress and strain. Campi Flegrei caldera (CFc), located in the Naples (Italy) area and characterised by the highest volcanic risk on Earth owing to its extreme urbanisation, has undergone unrest phenomena involving several metres of uplift and intense shallow micro-seismicity for several decades. Although unrest episodes in the last decade have displayed only moderate ground deformation and seismicity, current interpretations of geochemical data point to a highly pressurized hydrothermal system. We show that at CFc the usual assumption of vapour-liquid coexistence in the fumarole plumes leads to largely overestimated hydrothermal pressures and, accordingly, to interpretations of elevated unrest. By relaxing unconstrained geochemical assumptions, we infer an alternative model yielding better agreement between geophysical and geochemical observations. The model reconciles discrepancies between what was observed 1) for two decades after the 1982-84 large unrest, when shallow magma was supplying heat and fluids to the hydrothermal system, and 2) in the last decade. Compared to the 1980s unrest, the post-2005 phenomena are characterized by much lower aquifer overpressure and magmatic involvement, as indicated by geophysical data and despite large changes in geochemical indicators. Our interpretation points to a model in which the shallow sills intruded during 1969-1984 have completely cooled, so that fumarole emissions are now affected by deeper, CO2-richer magmatic gases producing only relatively modest heating and overpressure of the hydrothermal system. Our results have important implications for short-term eruption hazard assessment and for the best strategies for monitoring and interpreting geochemical data.

  2. Lagrangian Timescales of Southern Ocean Upwelling in a Hierarchy of Model Resolutions

    Science.gov (United States)

    Drake, Henri F.; Morrison, Adele K.; Griffies, Stephen M.; Sarmiento, Jorge L.; Weijer, Wilbert; Gray, Alison R.

    2018-01-01

    In this paper we study upwelling pathways and timescales of Circumpolar Deep Water (CDW) in a hierarchy of models using a Lagrangian particle tracking method. Lagrangian timescales of CDW upwelling decrease from 87 years to 31 years to 17 years as the ocean resolution is refined from 1° to 0.25° to 0.1°. We attribute some of the differences in timescale to the strength of the eddy fields, as demonstrated by temporally degrading high-resolution model velocity fields. Consistent with the timescale dependence, we find that an average Lagrangian particle completes 3.2 circumpolar loops in the 1° model in comparison to 0.9 loops in the 0.1° model. These differences suggest that advective timescales and thus interbasin merging of upwelling CDW may be overestimated by coarse-resolution models, potentially affecting the skill of centennial scale climate change projections.

  3. Exaggerating Accessible Differences: When Gender Stereotypes Overestimate Actual Group Differences.

    Science.gov (United States)

    Eyal, Tal; Epley, Nicholas

    2017-09-01

    Stereotypes are often presumed to exaggerate group differences, but empirical evidence is mixed. We suggest exaggeration is moderated by the accessibility of specific stereotype content. In particular, because the most accessible stereotype contents are attributes perceived to differ between groups, those attributes are most likely to exaggerate actual group differences due to regression to the mean. We tested this hypothesis using a highly accessible gender stereotype: that women are more socially sensitive than men. We confirmed that the most accessible stereotype content involves attributes perceived to differ between groups (pretest), and that these stereotypes contain some accuracy but significantly exaggerate actual gender differences (Experiment 1). We observed less exaggeration when judging less accessible stereotype content (Experiment 2), or when judging individual men and women (Experiment 3). Considering the accessibility of specific stereotype content may explain when stereotypes exaggerate actual group differences and when they do not.
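
    The regression-to-the-mean mechanism proposed above can be illustrated with a toy simulation (all distributions and counts below are invented for illustration): attributes selected because they are *perceived* to differ most will, on average, exaggerate their true differences, because the noise that made them look extreme does not transfer to the true values:

```python
import random

random.seed(0)

# Toy model: each of 1000 attributes has a true group difference drawn
# around zero; the *perceived* difference adds independent judgment noise.
n = 1000
true_diff = [random.gauss(0.0, 1.0) for _ in range(n)]
perceived = [d + random.gauss(0.0, 1.0) for d in true_diff]

# "Accessible" stereotype content: the 50 attributes perceived to differ most.
idx = sorted(range(n), key=lambda i: abs(perceived[i]), reverse=True)[:50]

mean_perceived = sum(abs(perceived[i]) for i in idx) / len(idx)
mean_true = sum(abs(true_diff[i]) for i in idx) / len(idx)
# Selecting on perceived extremity makes mean_perceived exceed mean_true
# on average: regression to the mean.
```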

  4. Arterio-venous concentration difference of [51Cr]EDTA after a single injection in man. Significance of renal function and local blood flow

    DEFF Research Database (Denmark)

    Rehling, M; Hyldstrup, L; Henriksen, Jens Henrik Sahl

    1989-01-01

    ... introduced in the measurement of renal plasma clearance and total plasma clearance by using venous blood samples instead of arterial. In 13 patients with GFR ranging from 29 to 150 ml min-1, Ca was higher than Cv immediately after the injection. After a mean of 38 min (range 12-82 min) the two curves crossed..., whereas the difference was very sensitive to even small changes in forearm blood flow within the physiological range. For measurement of renal plasma clearance it is recommended to use one long period: from the time of injection until 300 min p.i. or longer. If the clearance period is too short, the use of venous samples will overestimate the true renal clearance. Plasma clearance determined by venous and arterial blood samples does not differ significantly as long as the concentration is followed from the time of injection and a long period is applied. When simplified plasma clearance techniques are used...
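
    The rationale behind the long-period recommendation follows from clearance = dose / AUC: truncating the sampling period cuts off tail area under the concentration curve, shrinks the AUC and so inflates the apparent clearance. A minimal sketch, assuming a hypothetical mono-exponential plasma curve (the dose, rate constant and concentrations are illustrative, not values from the study):

```python
import math

dose = 100.0   # injected activity (arbitrary units)
k = 0.01       # hypothetical elimination rate constant (per min)
c0 = 1.0       # plasma concentration at t = 0

def auc(t_end, dt=0.1):
    """Trapezoidal area under a mono-exponential concentration curve."""
    n_steps = round(t_end / dt)
    ts = [i * dt for i in range(n_steps + 1)]
    cs = [c0 * math.exp(-k * t) for t in ts]
    return sum((cs[i] + cs[i + 1]) / 2 * dt for i in range(len(cs) - 1))

cl_300 = dose / auc(300.0)   # recommended long period
cl_60 = dose / auc(60.0)     # too-short period: smaller AUC, inflated clearance
```

    With these numbers the 60-min window misses most of the tail, so `cl_60` comes out roughly twice `cl_300`, mirroring the overestimation the abstract warns about.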

  5. Improvement of snowpack simulations in a regional climate model

    Energy Technology Data Exchange (ETDEWEB)

    Jin, J.; Miller, N.L.

    2011-01-10

    To improve simulations of regional-scale snow processes and related cold-season hydroclimate, the Community Land Model version 3 (CLM3), developed by the National Center for Atmospheric Research (NCAR), was coupled with the Pennsylvania State University/NCAR fifth-generation Mesoscale Model (MM5). CLM3 physically describes the mass and heat transfer within the snowpack using five snow layers that include liquid water and solid ice. The coupled MM5–CLM3 model performance was evaluated for the snowmelt season in the Columbia River Basin in the Pacific Northwestern United States using gridded temperature and precipitation observations, along with station observations. The results from MM5–CLM3 show a significant improvement in the snow water equivalent (SWE) simulation, which has been underestimated in the original version of MM5 coupled with the Noah land-surface model. One important cause for the underestimated SWE in Noah is its unrealistic land-surface structure configuration, where vegetation, snow and the topsoil layer are blended when snow is present. This study demonstrates the importance of the sheltering effects of the forest canopy on snow surface energy budgets, which is included in CLM3. Such effects are further seen in the simulations of surface air temperature and precipitation in regional weather and climate models such as MM5. In addition, the snow-season surface albedo overestimated by MM5–Noah is now more accurately predicted by MM5–CLM3 using a more realistic albedo algorithm that intensifies the solar radiation absorption on the land surface, reducing the strong near-surface cold bias in MM5–Noah. The cold bias is further alleviated due to a slower snowmelt rate in MM5–CLM3 during the early snowmelt stage, which is closer to observations than the comparable components of MM5–Noah. In addition, the over-predicted precipitation in the Pacific Northwest as shown in MM5–Noah is significantly decreased in MM5–CLM3 due to the lower evaporation resulting from the

  6. GRACE gravity data help constrain seismic models of the 2004 Sumatran earthquake

    Science.gov (United States)

    Cambiotti, G.; Bordoni, A.; Sabadini, R.; Colli, L.

    2011-10-01

    The analysis of Gravity Recovery and Climate Experiment (GRACE) Level 2 data time series from the Center for Space Research (CSR) and GeoForschungsZentrum (GFZ) allows us to extract a new estimate of the co-seismic gravity signal due to the 2004 Sumatran earthquake. Using compressible self-gravitating Earth models, including sea level feedback in a new self-consistent way and designed to compute gravitational perturbations due to volume changes separately, we are able to prove that the asymmetry in the co-seismic gravity pattern, in which the north-eastern negative anomaly is twice as large as the south-western positive anomaly, is not due to the previously overestimated dilatation in the crust. The overestimate was due to a large dilatation localized at the fault discontinuity, the gravitational effect of which is compensated by an opposite contribution from topography due to the uplifted crust. After this localized dilatation is removed, we instead predict compression in the footwall and dilatation in the hanging wall. The overall anomaly is then mainly due to the additional gravitational effects of the ocean after water is displaced away from the uplifted crust, as first indicated by de Linage et al. (2009). We also detail the differences between compressible and incompressible material properties. By focusing on the most robust estimates from GRACE data, consisting of the peak-to-peak gravity anomaly and an asymmetry coefficient given by the ratio of the negative gravity anomaly over the positive anomaly, we show that they are quite sensitive to seismic source depths and dip angles. This allows us to exploit space gravity data for the first time to help constrain centroid-moment-tensor (CMT) source analyses of the 2004 Sumatran earthquake and to conclude that the seismic moment has been released mainly in the lower crust rather than the lithospheric mantle. 
Thus, GRACE data and CMT source analyses, as well as geodetic slip distributions aided

  7. Development of a CFD Model Including Tree's Drag Parameterizations: Application to Pedestrian's Wind Comfort in an Urban Area

    Science.gov (United States)

    Kang, G.; Kim, J.

    2017-12-01

    This study investigated the trees' effect on wind comfort at pedestrian height in an urban area using a computational fluid dynamics (CFD) model. We implemented the tree drag parameterization scheme in the CFD model and validated the simulated results against wind-tunnel measurement data as well as LES data via several statistical methods. The CFD model underestimated (overestimated) the concentrations on the leeward (windward) walls inside the street canyon in the presence of trees, because the CFD model cannot resolve the latticed cage and cannot reflect the concentration increase and decrease caused by the latticed cage in the simulations. However, the scalar pollutants' dispersion simulated by the CFD model was quite similar to that in the wind-tunnel measurement in pattern and magnitude, on the whole. The CFD model overall satisfied the statistical validation indices (root normalized mean square error, geometric mean variance, correlation coefficient, and FAC2) but failed to satisfy the fractional bias and geometric mean bias due to the underestimation on the leeward wall and overestimation on the windward wall, showing that its performance was comparable to the LES's performance. We then applied the CFD model to evaluate the trees' effect on pedestrians' wind comfort in an urban area. To investigate sensory levels for human activities, wind-comfort criteria based on Beaufort wind-force scales (BWSs) were used. In the tree-free scenario, BWS 4 and 5 (unpleasant conditions for sitting long and sitting short, respectively) appeared in the narrow spaces between buildings, on the upwind side of buildings, and in the unobstructed areas. In the tree scenario, BWSs decreased by 1–3 grades inside the campus of Pukyong National University located in the target area, which indicated that trees planted in the campus effectively improved pedestrians' wind comfort.
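
    The validation indices named above have standard definitions in model evaluation. A sketch with invented observed/modelled pairs (the sign convention for fractional bias is one common choice):

```python
import math

def validation_indices(obs, mod):
    """Common model-evaluation indices for paired observed/modelled values."""
    n = len(obs)
    mo = sum(obs) / n
    mm = sum(mod) / n
    fb = 2.0 * (mo - mm) / (mo + mm)                # fractional bias
    mg = math.exp(sum(math.log(o / m) for o, m in zip(obs, mod)) / n)  # geometric mean bias
    fac2 = sum(1 for o, m in zip(obs, mod) if 0.5 <= m / o <= 2.0) / n # factor-of-two fraction
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sm = math.sqrt(sum((m - mm) ** 2 for m in mod))
    r = sum((o - mo) * (m - mm) for o, m in zip(obs, mod)) / (so * sm)
    return fb, mg, fac2, r

# Hypothetical observed and modelled concentrations
obs = [1.0, 2.0, 3.0, 4.0, 5.0]
mod = [1.2, 1.8, 3.5, 3.9, 5.4]
fb, mg, fac2, r = validation_indices(obs, mod)
```

    A perfect model gives fb = 0, mg = 1, fac2 = 1 and r = 1; a model can score well on FAC2 and correlation while still failing the bias indices, exactly the pattern reported above.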

  8. Towards the improvements of simulating the chemical and optical properties of Chinese aerosols using an online coupled model – CUACE/Aero

    Directory of Open Access Journals (Sweden)

    Chun-Hong Zhou

    2012-07-01

    Full Text Available CUACE/Aero, the China Meteorological Administration (CMA) Unified Atmospheric Chemistry Environment for aerosols, is a comprehensive numerical aerosol module incorporating emissions, gaseous chemistry and a size-segregated multi-component aerosol algorithm. On-line coupled into a meso-scale weather forecast model (MM5), its performance and improvements for aerosol chemical and optical simulations have been evaluated using observation data of aerosols/gases from the intensive observations and from the CMA Atmosphere Watch network, plus aerosol optical depth (AOD) data from the CMA Aerosol Remote Sensing network (CARSNET) and from the Moderate Resolution Imaging Spectroradiometer (MODIS). Targeting Beijing and the North China region from July 13 to 31, 2008, when a heavy hazy weather system occurred, the model captured the general variations of PM10, with most of the data within a factor of 2 of the observations and a combined correlation coefficient (r) of 0.38 (significance level = 0.05). The correlation coefficients are better at rural than at urban sites, and better at daytime than at nighttime. Chemically, the correlation coefficients between the daily-averaged modelled and observed concentrations range from 0.34 for black carbon (BC) to 0.09 for nitrates, with sulphate, ammonium and organic carbon (OC) in between. Like PM10, the values of the chemical species are higher for the daytime than for the nighttime. On average, sulphate, ammonium, nitrate and OC are underestimated by about 60, 70, 96.0 and 10.8%, respectively. Black carbon is overestimated by about 120%. A new size distribution for the primary particle emissions was constructed for most of the anthropogenic aerosols such as BC, OC, sulphate, nitrate and ammonium from the observed size distribution of atmospheric aerosols in Beijing. This not only improves the correlation between the modelled and observed AOD, but also reduces the overestimation of AOD simulated by the original model size

  9. Significance of matrix diagonalization in modelling inelastic electron scattering

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Z. [University of Ulm, Ulm 89081 (Germany); Hambach, R. [University of Ulm, Ulm 89081 (Germany); University of Jena, Jena 07743 (Germany); Kaiser, U.; Rose, H. [University of Ulm, Ulm 89081 (Germany)

    2017-04-15

    Electron scattering is routinely applied to investigate nanostructures, and hardware developments offer ever more prospects for this technique. For example, imaging nanostructures with inelastically scattered electrons may allow the production of component-sensitive images with atomic resolution. Modelling inelastic electron scattering is therefore essential for interpreting these images. The main obstacle to studying the inelastic scattering problem is its complexity. During inelastic scattering, incident electrons entangle with objects, and the description of this process involves a multidimensional array. Since the simulation usually involves four-dimensional Fourier transforms, the computation is highly inefficient. In this work we offer one solution to handle the multidimensional problem. By transforming a high-dimensional array into a two-dimensional array, we are able to perform matrix diagonalization and approximate the original multidimensional array with its two-dimensional eigenvectors. Our procedure reduces the complicated multidimensional problem to a two-dimensional problem. In addition, it minimizes the number of two-dimensional problems. This method is very useful for studying multiple inelastic scattering. - Highlights: • 4D problems are involved in modelling inelastic electron scattering. • By means of matrix diagonalization, the 4D problems can be simplified to 2D problems. • The number of 2D problems is minimized by using this approach.
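
    The reshape-and-diagonalize idea can be sketched with NumPy on toy data (the 4D array below is random, standing in for a real scattering kernel): reshape A[i,j,k,l] to a matrix M[(ij),(kl)], diagonalize, and approximate M with its dominant eigenpairs:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy 4D scattering kernel A[i, j, k, l], reshaped to a 2D matrix M[(ij), (kl)]
n = 6
A = rng.standard_normal((n, n, n, n))
M = A.reshape(n * n, n * n)
M = M + M.T   # symmetrize so an eigendecomposition applies

# Diagonalize and keep the r dominant eigenpairs: the 4D problem is
# replaced by r two-dimensional "eigen-images".
w, v = np.linalg.eigh(M)
order = np.argsort(np.abs(w))[::-1]
r = 10
wr, vr = w[order[:r]], v[:, order[:r]]
M_approx = (vr * wr) @ vr.T

rel_err = np.linalg.norm(M - M_approx) / np.linalg.norm(M)
rel_err_full = np.linalg.norm(M - (v * w) @ v.T) / np.linalg.norm(M)
```

    Keeping all eigenpairs reproduces M to machine precision; truncating to the dominant r pairs trades a controlled Frobenius-norm error for a much smaller set of 2D problems.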

  10. Asian dust outflow in the PBL and free atmosphere retrieved by NASA CALIPSO and an assimilated dust transport model

    OpenAIRE

    Y. Hara; K. Yumimoto; I. Uno; A. Shimizu; N. Sugimoto; Z. Liu; D. M. Winker

    2009-01-01

    Three-dimensional structures of Asian dust transport in the planetary boundary layer (PBL) and free atmosphere occurring successively during the end of May 2007 were clarified using results of space-borne backscatter lidar, Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP), and results simulated using a data-assimilated version of a dust transport model (RC4) based on a ground-based NIES lidar network. Assimilated results mitigated overestimation of dust concen...

  11. Propagation of uncertainty in nasal spray in vitro performance models using Monte Carlo simulation: Part II. Error propagation during product performance modeling.

    Science.gov (United States)

    Guo, Changning; Doub, William H; Kauffman, John F

    2010-08-01

    Monte Carlo simulations were applied to investigate the propagation of uncertainty in both input variables and response measurements on model prediction for nasal spray product performance design of experiment (DOE) models in the first part of this study, with an initial assumption that the models perfectly represent the relationship between input variables and the measured responses. In this article, we discarded that assumption and extended the Monte Carlo simulation study to examine the influence of both input variable variation and product performance measurement variation on the uncertainty in DOE model coefficients. The Monte Carlo simulations presented in this article illustrate the importance of careful error propagation during product performance modeling. Our results show that the error estimates based on Monte Carlo simulation result in smaller model coefficient standard deviations than those from regression methods. This suggests that the estimated standard deviations from regression may overestimate the uncertainties in the model coefficients. Monte Carlo simulations provide a simple software solution to understand the propagation of uncertainty in complex DOE models so that design space can be specified with statistically meaningful confidence levels. (c) 2010 Wiley-Liss, Inc. and the American Pharmacists Association
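
    The error-propagation procedure described, refitting the model under jittered inputs and responses and collecting the spread of the fitted coefficients, can be sketched as follows. The two-factor design, true coefficients and uncertainty magnitudes are hypothetical, not values from the study:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-factor DOE: response y = b0 + b1*x1 + b2*x2 + noise
true_b = np.array([2.0, 0.5, -1.0])
X = np.array([[1.0, x1, x2] for x1 in (-1, 0, 1) for x2 in (-1, 0, 1)])
y = X @ true_b + rng.normal(0, 0.1, len(X))

# Monte Carlo propagation: jitter inputs and responses by their assumed
# measurement uncertainties, refit by least squares, collect the spread.
sd_x, sd_y = 0.05, 0.1
coefs = []
for _ in range(2000):
    Xj = X.copy()
    Xj[:, 1:] += rng.normal(0, sd_x, Xj[:, 1:].shape)
    yj = y + rng.normal(0, sd_y, y.shape)
    coefs.append(np.linalg.lstsq(Xj, yj, rcond=None)[0])
coef_sd = np.std(coefs, axis=0)   # MC estimate of coefficient uncertainty
```

    The resulting `coef_sd` can then be compared against the analytic standard errors reported by a regression routine, which is the comparison the abstract draws.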

  12. Global tropospheric ozone modeling: Quantifying errors due to grid resolution

    Science.gov (United States)

    Wild, Oliver; Prather, Michael J.

    2006-06-01

    Ozone production in global chemical models is dependent on model resolution because ozone chemistry is inherently nonlinear, the timescales for chemical production are short, and precursors are artificially distributed over the spatial scale of the model grid. In this study we examine the sensitivity of ozone, its precursors, and its production to resolution by running a global chemical transport model at four different resolutions between T21 (5.6° × 5.6°) and T106 (1.1° × 1.1°) and by quantifying the errors in regional and global budgets. The sensitivity to vertical mixing through the parameterization of boundary layer turbulence is also examined. We find less ozone production in the boundary layer at higher resolution, consistent with slower chemical production in polluted emission regions and greater export of precursors. Agreement with ozonesonde and aircraft measurements made during the NASA TRACE-P campaign over the western Pacific in spring 2001 is consistently better at higher resolution. We demonstrate that the numerical errors in transport processes on a given resolution converge geometrically for a tracer at successively higher resolutions. The convergence in ozone production on progressing from T21 to T42, T63, and T106 resolution is likewise monotonic but indicates that there are still large errors at 120 km scales, suggesting that T106 resolution is too coarse to resolve regional ozone production. Diagnosing the ozone production and precursor transport that follow a short pulse of emissions over east Asia in springtime allows us to quantify the impacts of resolution on both regional and global ozone. Production close to continental emission regions is overestimated by 27% at T21 resolution, by 13% at T42 resolution, and by 5% at T106 resolution. However, subsequent ozone production in the free troposphere is not greatly affected. We find that the export of short-lived precursors such as NOx by convection is overestimated at coarse resolution.
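
    The reported near-geometric convergence (production overestimates of 27%, 13% and 5% at successively refined resolution) invites extrapolation toward the resolved limit. As an illustration not used in the paper itself, Aitken's delta-squared process recovers the limit exactly for a synthetic sequence with a constant error ratio:

```python
def aitken(v1, v2, v3):
    """Aitken delta-squared extrapolation of a geometrically converging sequence."""
    d1, d2 = v2 - v1, v3 - v2
    return v3 - d2 * d2 / (d2 - d1)

# Synthetic regional ozone-production estimates converging geometrically to a
# (hypothetical) true value of 100 with error ratio q = 0.45 per refinement
L, c, q = 100.0, 27.0, 0.45
v = [L + c * q ** n for n in range(3)]   # errors: 27.0, 12.15, 5.47 (cf. 27%, 13%, 5%)
estimate = aitken(*v)
```

    For real model output the error ratio is only approximately constant, so the extrapolated value is an estimate of the converged budget rather than an exact limit.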

  13. Possible overestimation of surface disinfection efficiency by assessment methods based on liquid sampling procedures as demonstrated by in situ quantification of spore viability.

    Science.gov (United States)

    Grand, I; Bellon-Fontaine, M-N; Herry, J-M; Hilaire, D; Moriconi, F-X; Naïtali, M

    2011-09-01

    The standard test methods used to assess the efficiency of a disinfectant applied to surfaces are often based on counting the microbial survivors sampled in a liquid, but total cell removal from surfaces is seldom achieved. One might therefore wonder whether evaluations of microbial survivors in liquid-sampled cells are representative of the levels of survivors in whole populations. The present study was thus designed to determine the "damaged/undamaged" status induced by a peracetic acid disinfection for Bacillus atrophaeus spores deposited on glass coupons directly on this substrate and to compare it to the status of spores collected in liquid by a sampling procedure. The method utilized to assess the viability of both surface-associated and liquid-sampled spores included fluorescence labeling with a combination of Syto 61 and Chemchrome V6 dyes and quantifications by analyzing the images acquired by confocal laser scanning microscopy. The principal result of the study was that the viability of spores sampled in the liquid was found to be poorer than that of surface-associated spores. For example, after 2 min of peracetic acid disinfection, less than 17% ± 5% of viable cells were detected among liquid-sampled cells compared to 79% ± 5% or 47% ± 4%, respectively, when the viability was evaluated on the surface after or without the sampling procedure. Moreover, assessments of the survivors collected in the liquid phase, evaluated using the microscopic method and standard plate counts, were well correlated. Evaluations based on the determination of survivors among the liquid-sampled cells can thus overestimate the efficiency of surface disinfection procedures.

  14. Evaluating the effect of alternative carbon allocation schemes in a land surface model (CLM4.5) on carbon fluxes, pools, and turnover in temperate forests

    Directory of Open Access Journals (Sweden)

    F. Montané

    2017-09-01

    How carbon (C) is allocated to different plant tissues (leaves, stem, and roots) determines how long C remains in plant biomass and thus remains a central challenge for understanding the global C cycle. We used a diverse set of observations (AmeriFlux eddy covariance tower observations, biomass estimates from tree-ring data, and leaf area index (LAI) measurements) to compare C fluxes, pools, and LAI data with those predicted by a land surface model (LSM), the Community Land Model (CLM4.5). We ran CLM4.5 for nine temperate (including evergreen and deciduous) forests in North America between 1980 and 2013 using four different C allocation schemes: i. a dynamic C allocation scheme (named "D-CLM4.5") with one dynamic allometric parameter, which allows C allocation to the stem and leaves to vary in time as a function of annual net primary production (NPP); ii. an alternative dynamic C allocation scheme (named "D-Litton"), where, similar to (i), C allocation is a dynamic function of annual NPP, but unlike (i) includes two dynamic allometric parameters involving allocation to leaves, stem, and coarse roots; iii.–iv. a fixed C allocation scheme with two variants, one representative of observations in evergreen forests (named "F-Evergreen") and the other of observations in deciduous forests (named "F-Deciduous"). D-CLM4.5 generally overestimated gross primary production (GPP) and ecosystem respiration, and underestimated net ecosystem exchange (NEE). In D-CLM4.5, initial aboveground biomass in 1980 was largely overestimated (between 10 527 and 12 897 g C m−2) for deciduous forests, whereas aboveground biomass accumulation through time (between 1980 and 2011) was highly underestimated (between 1222 and 7557 g C m−2) for both evergreen and deciduous sites due to a lower stem turnover rate in the sites than the one used in the model. D-CLM4.5 overestimated LAI in both evergreen and deciduous sites because the leaf C–LAI relationship in the model did not match the

  15. Noninvasive Coronary Angiography using 64-Detector-Row Computed Tomography in Patients with a Low to Moderate Pretest Probability of Significant Coronary Artery Disease

    International Nuclear Information System (INIS)

    Schlosser, T.; Mohrs, O.K.; Magedanz, A.; Nowak, B.; Voigtlaender, T.; Barkhausen, J.; Schmermund, A.

    2007-01-01

    Purpose: To evaluate the value of 64-detector-row computed tomography for ruling out high-grade coronary stenoses in patients with a low to moderate pretest probability of significant coronary artery disease. Material and Methods: The study included 61 patients with a suspicion of coronary artery disease on the basis of atypical angina or ambiguous findings in noninvasive stress testing and a class II indication for invasive coronary angiography (ICA). All patients were examined by 64-detector-row computed tomography angiography (CTA) and ICA. On a coronary segmental level, the presence of significant (>50% diameter) stenoses was examined. Results: In a total of 915 segments, CTA detected 62 significant stenoses. Thirty-four significant stenoses were confirmed by ICA, whereas 28 stenoses could not be confirmed by ICA. Twenty-two of them showed wall irregularities on ICA, and six were angiographically normal. Accordingly, on a coronary segmental basis, 28 false-positive and 0 false-negative findings resulted in a sensitivity of 100%, a specificity of 96.8%, a positive predictive value of 54.8%, and a negative predictive value of 100%. The diagnostic accuracy was 96.9%. Conclusion: Sixty-four-detector-row computed tomography reliably detects significant coronary stenoses in patients with suspected coronary artery disease and appears to be helpful in the selection of patients who need to undergo ICA. Calcified and non-calcified plaques are detected. Grading of stenoses in areas with calcification is difficult. Frequently, stenosis severity is overestimated by 64-detector-row computed tomography
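
    The reported statistics follow directly from the confusion counts given in the abstract (34 true positives, 28 false positives and 0 false negatives among 915 segments):

```python
# Segment-level confusion counts from the abstract
tp, fp, fn = 34, 28, 0
tn = 915 - tp - fp - fn   # remaining segments are true negatives

sensitivity = tp / (tp + fn)                   # 100%
specificity = tn / (tn + fp)                   # 96.8%
ppv = tp / (tp + fp)                           # 54.8%
npv = tn / (tn + fn)                           # 100%
accuracy = (tp + tn) / (tp + tn + fp + fn)     # 96.9%
```

    The low positive predictive value alongside perfect sensitivity is exactly the pattern expected when CTA tends to overestimate stenosis severity: every real stenosis is flagged, but nearly half of the flags are false alarms.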

  16. Noninvasive Coronary Angiography using 64-Detector-Row Computed Tomography in Patients with a Low to Moderate Pretest Probability of Significant Coronary Artery Disease

    Energy Technology Data Exchange (ETDEWEB)

    Schlosser, T.; Mohrs, O.K.; Magedanz, A.; Nowak, B.; Voigtlaender, T.; Barkhausen, J.; Schmermund, A. [Dept. of Diagnostic and Interventional Radiology and Neuroradiology, University Hospital Essen, Essen (Germany)

    2007-04-15

    Purpose: To evaluate the value of 64-detector-row computed tomography for ruling out high-grade coronary stenoses in patients with a low to moderate pretest probability of significant coronary artery disease. Material and Methods: The study included 61 patients with a suspicion of coronary artery disease on the basis of atypical angina or ambiguous findings in noninvasive stress testing and a class II indication for invasive coronary angiography (ICA). All patients were examined by 64-detector-row computed tomography angiography (CTA) and ICA. On a coronary segmental level, the presence of significant (>50% diameter) stenoses was examined. Results: In a total of 915 segments, CTA detected 62 significant stenoses. Thirty-four significant stenoses were confirmed by ICA, whereas 28 stenoses could not be confirmed by ICA. Twenty-two of them showed wall irregularities on ICA, and six were angiographically normal. Accordingly, on a coronary segmental basis, 28 false-positive and 0 false-negative findings resulted in a sensitivity of 100%, a specificity of 96.8%, a positive predictive value of 54.8%, and a negative predictive value of 100%. The diagnostic accuracy was 96.9%. Conclusion: Sixty-four-detector-row computed tomography reliably detects significant coronary stenoses in patients with suspected coronary artery disease and appears to be helpful in the selection of patients who need to undergo ICA. Calcified and non-calcified plaques are detected. Grading of stenoses in areas with calcification is difficult. Frequently, stenosis severity is overestimated by 64-detector-row computed tomography.

  17. Modelling of impaired cerebral blood flow due to gaseous emboli

    International Nuclear Information System (INIS)

    Hague, J P; Banahan, C; Chung, E M L

    2013-01-01

    Bubbles introduced to the arterial circulation during invasive medical procedures can have devastating consequences for brain function but their effects are currently difficult to quantify. Here we present a Monte Carlo simulation investigating the impact of gas bubbles on cerebral blood flow. For the first time, this model includes realistic adhesion forces, bubble deformation, fluid dynamical considerations, and bubble dissolution. This allows investigation of the effects of buoyancy, solubility, and blood pressure on embolus clearance. Our results illustrate that blockages depend on several factors, including the number and size distribution of incident emboli, dissolution time and blood pressure. We found it essential to model the deformation of bubbles to avoid overestimation of arterial obstruction. Incorporation of buoyancy effects within our model slightly reduced the overall level of obstruction but did not decrease embolus clearance times. We found that higher blood pressures generate lower levels of obstruction and improve embolus clearance. Finally, we demonstrate the effects of gas solubility and discuss potential clinical applications of the model. (paper)
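
    The importance of modelling bubble deformation can be illustrated with a toy Monte Carlo (the vessel geometry, size distribution and squeeze factor below are invented, not the paper's model): a rigid-sphere assumption lodges at least as many emboli as a deformable one, overestimating obstruction:

```python
import random

random.seed(7)

# Hypothetical branching tree: vessel diameters (microns) halve per generation
vessel_diams = [500.0 / (2 ** g) for g in range(6)]

def lodges(bubble_d, squeeze):
    """True if the bubble blocks some generation of the tree. A deformable
    bubble squeezes into vessels down to `squeeze` times its spherical
    diameter; squeeze = 1.0 models a rigid sphere."""
    return any(bubble_d * squeeze > vd for vd in vessel_diams)

# Log-normally distributed bubble diameters (median ~55 microns, hypothetical)
bubbles = [random.lognormvariate(4.0, 0.5) for _ in range(5000)]

rigid = sum(lodges(d, 1.0) for d in bubbles)
deformable = sum(lodges(d, 0.7) for d in bubbles)
```

    Because any bubble that lodges under the deformable rule also lodges under the rigid rule, `rigid >= deformable` holds by construction, which is the direction of bias the abstract describes.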

  18. Modeling the global atmospheric transport and deposition of mercury to the Great Lakes

    Directory of Open Access Journals (Sweden)

    Mark D. Cohen

    2016-07-01

    Mercury contamination in the Great Lakes continues to have important public health and wildlife ecotoxicology impacts, and atmospheric deposition is a significant ongoing loading pathway. The objective of this study was to estimate the amount and source-attribution of atmospheric mercury deposition to each lake, information needed to prioritize amelioration efforts. A new global, Eulerian version of the HYSPLIT-Hg model was used to simulate the 2005 global atmospheric transport and deposition of mercury to the Great Lakes. In addition to the base case, 10 alternative model configurations were used to examine sensitivity to uncertainties in atmospheric mercury chemistry and surface exchange. A novel atmospheric lifetime analysis was used to characterize fate and transport processes within the model. Model-estimated wet deposition and atmospheric concentrations of gaseous elemental mercury (Hg(0)) were generally within ∼10% of measurements in the Great Lakes region. The model overestimated non-Hg(0) concentrations by a factor of 2–3, similar to other modeling studies. Potential reasons for this disagreement include model inaccuracies, differences in the atmospheric Hg fractions being compared, and the measurements being biased low. Lake Erie, downwind of significant local/regional emissions sources, was estimated by the model to be the most impacted by direct anthropogenic emissions (58% of the base-case total deposition), while Lake Superior, with the fewest upwind local/regional sources, was the least impacted (27%). The U.S. was the largest national contributor, followed by China, contributing 25% and 6%, respectively, on average, for the Great Lakes. The contribution of U.S. direct anthropogenic emissions to total mercury deposition varied between 46% for the base case (with a range of 24–51% over all model configurations) for Lake Erie and 11% (range 6–13%) for Lake Superior. These results illustrate the importance of atmospheric

  19. Dosimetric and radiobiological comparison of TG-43 and Monte Carlo calculations in 192Ir breast brachytherapy applications.

    Science.gov (United States)

    Peppa, V; Pappas, E P; Karaiskos, P; Major, T; Polgár, C; Papagiannis, P

    2016-10-01

    To investigate the clinical significance of introducing model-based dose calculation algorithms (MBDCAs) as an alternative to TG-43 in 192Ir interstitial breast brachytherapy. A 57-patient cohort was used in a retrospective comparison between TG-43 based dosimetry data exported from a treatment planning system and Monte Carlo (MC) dosimetry performed using MCNP v. 6.1 with plan and anatomy information in DICOM-RT format. Comparison was performed for the target, ipsilateral lung, heart, skin, breast and ribs, using dose distributions, dose-volume histograms (DVH) and plan quality indices clinically used for plan evaluation, as well as radiobiological parameters. TG-43 overestimation of target DVH parameters is statistically significant but small (less than 2% for the target coverage indices and 4% for homogeneity indices, on average). Significant dose differences (>5%) were observed close to the skin and at relatively large distances from the implant, leading to a TG-43 dose overestimation for the organs at risk. These differences correspond to low dose regions (<50% of the prescribed dose), being less than 2% of the prescribed dose. Detected dosimetric differences did not induce clinically significant differences in calculated tumor control probabilities (mean absolute difference <0.2%) and normal tissue complication probabilities. While TG-43 shows a statistically significant overestimation of most indices used for plan evaluation, differences are small and therefore not clinically significant. Improved MBDCA dosimetry could be important for re-irradiation, technique inter-comparison and/or the assessment of secondary cancer induction risk, where accurate dosimetry in the whole patient anatomy is of the essence. Copyright © 2016 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.

  20. Evaluation of land surface model simulations of evapotranspiration over a 12 year crop succession: impact of the soil hydraulic properties

    Science.gov (United States)

    Garrigues, S.; Olioso, A.; Calvet, J.-C.; Martin, E.; Lafont, S.; Moulin, S.; Chanzy, A.; Marloie, O.; Desfonds, V.; Bertrand, N.; Renard, D.

    2014-10-01

    transpiration at the end of the crop cycles. The overestimation of the soil moisture at saturation triggers the underestimation of the soil evaporation during the wet soil periods. The use of field capacity values derived from laboratory retention measurements leads to inaccurate simulation of soil evaporation due to the lack of representativeness of the soil structure variability at the field scale. The most accurate simulation is achieved with the values of the soil hydraulic properties derived from field measured soil moisture. Their temporal analysis over each crop cycle provides meaningful estimates of the wilting point, the field capacity and the rooting depth to represent the crop water needs and accurately simulate the evapotranspiration over the crop succession. We showed that the uncertainties in the eddy-covariance measurements are significant and can explain a large part of the unresolved random differences between the simulations and the measurements of evapotranspiration. Other possible model shortcomings include the lack of representation of soil vertical heterogeneity and root profile along with inaccurate energy balance partitioning between the soil and the vegetation at low LAI.

  1. HTTR criticality calculations with SCALE6: Studies of various geometric and unit-cell options in modeling

    Energy Technology Data Exchange (ETDEWEB)

    Wang, J. Y.; Chiang, M. H.; Sheu, R. J.; Liu, Y. W. H. [Inst. of Nuclear Engineering and Science, National Tsing Hua Univ., Hsinchu 30013, Taiwan (China)

    2012-07-01

    The fuel element of the High Temperature Engineering Test Reactor (HTTR) presents a doubly heterogeneous geometry, where tiny TRISO fuel particles dispersed in a graphite matrix form the fuel region of a cylindrical fuel rod, and a number of fuel rods together with moderator or reflector then constitute the lattice design of the core. In this study, a series of full-core HTTR criticality calculations were performed with the SCALE6 code system using various geometric and unit-cell options in order to systematically investigate their effects on neutronic analysis. Two geometric descriptions (ARRAY or HOLE) in SCALE6 can be used to construct a complicated and repeated model. The result shows that eliminating the use of HOLE in the HTTR geometric model can save the computation time by a factor of 4. Four unit-cell treatments for resonance self-shielding corrections in SCALE6 were tested to create problem-specific multigroup cross sections for the HTTR core model. Based on the same ENDF/B-VII cross-section library, their results were evaluated by comparing with continuous-energy calculations. The comparison indicates that the INFHOMMEDIUM result overestimates the system multiplication factor (k_eff) by 55 mk, whereas the LATTICECELL and MULTIREGION treatments predict the k_eff values with similar biases of approximately 10 mk overestimation. The DOUBLEHET result shows a more satisfactory agreement, about 4.2 mk underestimation in the k_eff value. In addition, using cell-weighted cross sections instead of an explicit modeling of TRISO particles in the fuel region can further reduce the computation time by a factor of 5 without sacrificing accuracy. (authors)
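
    A quick way to read the "mk" figures above: 1 mk is a difference of 10^-3 in the multiplication factor k_eff. A minimal sketch, using hypothetical k_eff values rather than the study's actual results:

```python
# Bias of a calculated multiplication factor against a reference, in mk
# (1 mk = 1e-3 in k_eff). The k_eff values below are hypothetical
# stand-ins chosen to match the magnitudes quoted in the abstract.

def bias_mk(k_calc: float, k_ref: float) -> float:
    """Reactivity-style bias of k_calc relative to k_ref, in milli-k."""
    return (k_calc - k_ref) * 1000.0

k_ref = 1.0                             # assumed continuous-energy reference
overestimate = bias_mk(1.055, k_ref)    # about +55 mk, like INFHOMMEDIUM
underestimate = bias_mk(0.9958, k_ref)  # about -4.2 mk, like DOUBLEHET
```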

  2. The Surface Energy Balance at Local and Regional Scales-A Comparison of General Circulation Model Results with Observations.

    Science.gov (United States)

    Garratt, J. R.; Krummel, P. B.; Kowalczyk, E. A.

    1993-06-01

    Aspects of the mean monthly energy balance at continental surfaces are examined by appeal to the results of general circulation model (GCM) simulations, climatological maps of surface fluxes, and direct observations. Emphasis is placed on net radiation and evaporation for (i) five continental regions (each approximately 20°×150°) within Africa, Australia, Eurasia, South America, and the United States; (ii) a number of continental sites in both hemispheres. Both the mean monthly values of the local and regional fluxes and the mean monthly diurnal cycles of the local fluxes are described. Mostly, GCMs tend to overestimate the mean monthly levels of net radiation by about 15%-20% on an annual basis, for observed annual values in the range 50 to 100 W m-2. This is probably the result of several deficiencies, including (i) continental surface albedos being undervalued in a number of the models, resulting in overestimates of the net shortwave flux at the surface (though this deficiency is steadily being addressed by modelers); (ii) incoming shortwave fluxes being overestimated due to uncertainties in cloud schemes and clear-sky absorption; (iii) land-surface temperatures being underestimated, resulting in an underestimate of the outgoing longwave flux. In contrast, and even allowing for the poor observational base for evaporation, there is no obvious overall bias in mean monthly levels of evaporation determined in GCMs, with one or two exceptions. Rather, and far more so than with net radiation, there is a wide range in values of evaporation for all regions investigated. For continental regions and at times of the year of low to moderate rainfall, there is a tendency for the simulated evaporation to be closely related to the precipitation; this is not surprising. In contrast, for regions where there is sufficient or excessive rainfall, the evaporation tends to follow the behavior of the net radiation. Again, this is not surprising given the close relation between
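
    The albedo deficiency listed under (i) can be made concrete with the surface radiation balance Rn = S↓(1 - α) + L↓ - L↑. A small sketch with illustrative flux values (chosen for the example, not taken from the paper):

```python
# Net radiation from its shortwave and longwave components (W/m^2).
# All flux values and albedos below are illustrative only, chosen to
# show how an undervalued surface albedo inflates modeled net radiation.

def net_radiation(sw_down, albedo, lw_down, lw_up):
    """Absorbed shortwave plus net longwave at the surface (W/m^2)."""
    return sw_down * (1.0 - albedo) + lw_down - lw_up

sw_down, lw_down, lw_up = 250.0, 320.0, 400.0
rn_obs = net_radiation(sw_down, 0.25, lw_down, lw_up)    # realistic albedo
rn_model = net_radiation(sw_down, 0.15, lw_down, lw_up)  # albedo undervalued
excess = rn_model - rn_obs  # roughly 25 W/m^2 too high in this example
```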

  3. Qualification of a Plant Disease Simulation Model: Performance of the LATEBLIGHT Model Across a Broad Range of Environments.

    Science.gov (United States)

    Andrade-Piedra, Jorge L; Forbes, Gregory A; Shtienberg, Dani; Grünwald, Niklaus J; Chacón, María G; Taipe, Marco V; Hijmans, Robert J; Fry, William E

    2005-12-01

    ABSTRACT The concept of model qualification, i.e., discovering the domain over which a validated model may be properly used, was illustrated with LATEBLIGHT, a mathematical model that simulates the effect of weather, host growth and resistance, and fungicide use on asexual development and growth of Phytophthora infestans on potato foliage. Late blight epidemics from Ecuador, Mexico, Israel, and the United States involving 13 potato cultivars (32 epidemics in total) were compared with model predictions using graphical and statistical tests. Fungicides were not applied in any of the epidemics. For the simulations, a host resistance level was assigned to each cultivar based on general categories reported by local investigators. For eight cultivars, the model predictions fit the observed data. For four cultivars, the model predictions overestimated disease, likely due to inaccurate estimates of host resistance. Model predictions were inconsistent for one cultivar and for one location. It was concluded that the domain of applicability of LATEBLIGHT can be extended from the range of conditions in Peru for which it has been previously validated to those observed in this study. A sensitivity analysis showed that, within the range of values observed empirically, LATEBLIGHT is more sensitive to changes in variables related to initial inoculum and to weather than to changes in variables relating to host resistance.

  4. Structural mode significance using INCA. [Interactive Controls Analysis computer program

    Science.gov (United States)

    Bauer, Frank H.; Downing, John P.; Thorpe, Christopher J.

    1990-01-01

    Structural finite element models are often too large to be used in the design and analysis of control systems. Model reduction techniques must be applied to reduce the structural model to manageable size. In the past, engineers either performed the model order reduction by hand or used distinct computer programs to retrieve the data, to perform the significance analysis and to reduce the order of the model. To expedite this process, the latest version of INCA has been expanded to include an interactive graphical structural mode significance and model order reduction capability.

  5. Peripheral doses in intensity-modulated radiotherapy (IMRT) and their implications for radiological protection

    International Nuclear Information System (INIS)

    Cobos, Agustin C.; Sanz, Dario E.; Alvarez, Guilhermo D.

    2013-01-01

    A calculation model based on photon-transport theory was developed to estimate the peripheral energy fluence (the fluence occurring outside the radiation beam) produced by scattering from the photon compensating filters used in IMRT, for a treatment room of the FUESMEN radiotherapy service. To validate the model, fluences and peripheral doses were determined experimentally for three different sizes of compensating filters. A slight systematic overestimation of the model with respect to the experimental results was found. The experimental values also allowed comparison of the peripheral doses with those of other modalities. Furthermore, a model was developed to estimate the annual dose at any point to be protected with a shield, from the theoretical values of the peripheral energy fluence. Using the theoretical values automatically ensures a conservative approach because of the slight overestimation already mentioned, providing a calculation model for widespread use. The contribution of the peripheral dose to the annual dose was found to be more than significant, suggesting that it should be considered in the design calculations of secondary barriers.

  6. Sensitivity and Interaction Analysis Based on Sobol’ Method and Its Application in a Distributed Flood Forecasting Model

    Directory of Open Access Journals (Sweden)

    Hui Wan

    2015-06-01

    Full Text Available Sensitivity analysis is a fundamental approach to identify the most significant and sensitive parameters, helping us to understand complex hydrological models, particularly for time-consuming distributed flood forecasting models based on complicated theory with numerous parameters. Based on the Sobol’ method, this study compared the sensitivity and interactions of distributed flood forecasting model parameters with and without accounting for correlation. Four objective functions: (1) Nash–Sutcliffe efficiency (ENS); (2) water balance coefficient (WB); (3) peak discharge efficiency (EP); and (4) time to peak efficiency (ETP) were applied to the Liuxihe model with hourly rainfall-runoff data collected in the Nanhua Creek catchment, Pearl River, China. Results of the sensitivity and interaction analyses were also compared among small, medium, and large flood magnitudes. Results demonstrated that the choice of objective functions had no effect on the sensitivity classification, while it had great influence on the sensitivity ranking for both uncorrelated and correlated cases. The Liuxihe model behaved and responded uniquely to various flood conditions. The results also indicated that the pairwise parameter interactions revealed a non-ignorable contribution to the model output variance. Parameters with high first- or total-order sensitivity indices presented correspondingly high second-order sensitivity indices and correlation coefficients with other parameters. Without considering parameter correlations, the variance contributions of highly sensitive parameters might be underestimated and those of normally sensitive parameters might be overestimated. This research laid a basic foundation to improve the understanding of complex model behavior.
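
    The first-order Sobol’ index behind such an analysis, Si = V(E[Y|Xi])/V(Y), can be estimated with a pick-and-freeze Monte Carlo scheme. A minimal sketch on a toy two-parameter function (not the Liuxihe model), for which the analytic indices are S1 = 16/17 ≈ 0.94 and S2 = 1/17 ≈ 0.06:

```python
# Pick-and-freeze Monte Carlo estimator of first-order Sobol' indices
# for a toy two-input model with independent uniform [0,1] inputs.
import random

random.seed(0)

def toy_model(x1, x2):
    # Illustrative response with one dominant parameter (not a real
    # hydrological model).
    return 4.0 * x1 + 1.0 * x2

def sobol_first_order(f, n=100_000):
    """Estimate S1 for each of the two inputs of f."""
    a = [(random.random(), random.random()) for _ in range(n)]
    b = [(random.random(), random.random()) for _ in range(n)]
    ya = [f(*p) for p in a]
    mean = sum(ya) / n
    var = sum((y - mean) ** 2 for y in ya) / n
    indices = []
    for i in range(2):
        # Freeze input i from sample A, redraw the other input from B.
        mixed = [f(*(a[k][j] if j == i else b[k][j] for j in range(2)))
                 for k in range(n)]
        cov = sum(ya[k] * mixed[k] for k in range(n)) / n - mean * mean
        indices.append(cov / var)
    return indices

s1, s2 = sobol_first_order(toy_model)  # close to 16/17 and 1/17
```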

  7. Genome-wide significant localization for working and spatial memory: Identifying genes for psychosis using models of cognition.

    Science.gov (United States)

    Knowles, Emma E M; Carless, Melanie A; de Almeida, Marcio A A; Curran, Joanne E; McKay, D Reese; Sprooten, Emma; Dyer, Thomas D; Göring, Harald H; Olvera, Rene; Fox, Peter; Almasy, Laura; Duggirala, Ravi; Kent, Jack W; Blangero, John; Glahn, David C

    2014-01-01

    It is well established that risk for developing psychosis is largely mediated by the influence of genes, but identifying precisely which genes underlie that risk has been problematic. Focusing on endophenotypes, rather than illness risk, is one solution to this problem. Impaired cognition is a well-established endophenotype of psychosis. Here we aimed to characterize the genetic architecture of cognition using phenotypically detailed models as opposed to relying on general IQ or individual neuropsychological measures. In so doing we hoped to identify genes that mediate cognitive ability, which might also contribute to psychosis risk. Hierarchical factor models of genetically clustered cognitive traits were subjected to linkage analysis followed by QTL region-specific association analyses in a sample of 1,269 Mexican American individuals from extended pedigrees. We identified four genome wide significant QTLs, two for working and two for spatial memory, and a number of plausible and interesting candidate genes. The creation of detailed models of cognition seemingly enhanced the power to detect genetic effects on cognition and provided a number of possible candidate genes for psychosis. © 2013 Wiley Periodicals, Inc.

  8. Beyond Rational Decision-Making: Modelling the Influence of Cognitive Biases on the Dynamics of Vaccination Coverage.

    Directory of Open Access Journals (Sweden)

    Marina Voinson

    Full Text Available Theoretical studies predict that it is not possible to eradicate a disease under voluntary vaccination because of the emergence of non-vaccinating "free-riders" when vaccination coverage increases. A central tenet of this approach is that human behaviour follows an economic model of rational choice. Yet, empirical studies reveal that vaccination decisions do not necessarily maximize individual self-interest. Here we investigate the dynamics of vaccination coverage using an approach that dispenses with payoff maximization and assumes that risk perception results from the interaction between epidemiology and cognitive biases. We consider a behaviour-incidence model in which individuals perceive actual epidemiological risks as a function of their opinion of vaccination. As a result of confirmation bias, sceptical individuals (negative opinion) overestimate infection cost while pro-vaccine individuals (positive opinion) overestimate vaccination cost. We considered a feedback between individuals and their environment as individuals could change their opinion, and thus the way they perceive risks, as a function of both the epidemiology and the most common opinion in the population. For all parameter values investigated, the infection is never eradicated under voluntary vaccination. For moderately contagious diseases, oscillations in vaccination coverage emerge because individuals process epidemiological information differently depending on their opinion. Conformism does not generate oscillations but slows down the cultural response to epidemiological change. Failure to eradicate vaccine preventable disease emerges from the model because of cognitive biases that maintain heterogeneity in how people perceive risks. Thus, assumptions of economic rationality and payoff maximization are not mandatory for predicting commonly observed dynamics of vaccination coverage. This model shows that alternative notions of rationality, such as that of ecological

  9. Beyond Rational Decision-Making: Modelling the Influence of Cognitive Biases on the Dynamics of Vaccination Coverage.

    Science.gov (United States)

    Voinson, Marina; Billiard, Sylvain; Alvergne, Alexandra

    2015-01-01

    Theoretical studies predict that it is not possible to eradicate a disease under voluntary vaccination because of the emergence of non-vaccinating "free-riders" when vaccination coverage increases. A central tenet of this approach is that human behaviour follows an economic model of rational choice. Yet, empirical studies reveal that vaccination decisions do not necessarily maximize individual self-interest. Here we investigate the dynamics of vaccination coverage using an approach that dispenses with payoff maximization and assumes that risk perception results from the interaction between epidemiology and cognitive biases. We consider a behaviour-incidence model in which individuals perceive actual epidemiological risks as a function of their opinion of vaccination. As a result of confirmation bias, sceptical individuals (negative opinion) overestimate infection cost while pro-vaccines individuals (positive opinion) overestimate vaccination cost. We considered a feedback between individuals and their environment as individuals could change their opinion, and thus the way they perceive risks, as a function of both the epidemiology and the most common opinion in the population. For all parameter values investigated, the infection is never eradicated under voluntary vaccination. For moderately contagious diseases, oscillations in vaccination coverage emerge because individuals process epidemiological information differently depending on their opinion. Conformism does not generate oscillations but slows down the cultural response to epidemiological change. Failure to eradicate vaccine preventable disease emerges from the model because of cognitive biases that maintain heterogeneity in how people perceive risks. Thus, assumptions of economic rationality and payoff maximization are not mandatory for predicting commonly observed dynamics of vaccination coverage. This model shows that alternative notions of rationality, such as that of ecological rationality whereby
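
    The behaviour-incidence feedback described above can be caricatured in a few lines: each season, uptake depends on risks as filtered through an opinion-dependent bias, and incidence then responds to uptake. The functional forms, thresholds and parameter values below are hypothetical simplifications, not the authors' model:

```python
# Caricature of a behaviour-incidence feedback with confirmation bias.
# All parameters, thresholds and update rules are hypothetical.

def season_uptake(prevalence, frac_sceptic, bias=2.0,
                  cost_inf=1.0, cost_vac=0.1):
    """Fraction of the population vaccinating this season."""
    def vaccinates(inflate_inf, inflate_vac):
        # Vaccinate when the perceived infection cost exceeds the
        # perceived vaccination cost.
        perceived_inf = prevalence * cost_inf * inflate_inf
        perceived_vac = cost_vac * inflate_vac
        return 1.0 if perceived_inf > perceived_vac else 0.0
    # Mapping follows the abstract's wording: sceptics inflate infection
    # cost, pro-vaccine individuals inflate vaccination cost.
    return (frac_sceptic * vaccinates(bias, 1.0)
            + (1.0 - frac_sceptic) * vaccinates(1.0, bias))

prev, coverages = 0.05, []
for _ in range(30):
    cov = season_uptake(prev, frac_sceptic=0.4)
    coverages.append(cov)
    # Toy transmission step; the small floor stands in for imported cases.
    prev = max(1e-4, 2.0 * prev * (1.0 - cov))

# Coverage cycles between low and high values while infection persists,
# echoing the oscillations and failed eradication described above.
```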

  10. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part II: Multi-layered cloud

    Energy Technology Data Exchange (ETDEWEB)

    Morrison, H; McCoy, R B; Klein, S A; Xie, S; Luo, Y; Avramov, A; Chen, M; Cole, J; Falk, M; Foster, M; Genio, A D; Harrington, J; Hoose, C; Khairoutdinov, M; Larson, V; Liu, X; McFarquhar, G; Poellot, M; Shipway, B; Shupe, M; Sud, Y; Turner, D; Veron, D; Walker, G; Wang, Z; Wolf, A; Xu, K; Yang, F; Zhang, G

    2008-02-27

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a deep, multi-layered, mixed-phase cloud system observed during the ARM Mixed-Phase Arctic Cloud Experiment. This cloud system was associated with strong surface turbulent sensible and latent heat fluxes as cold air flowed over the open Arctic Ocean, combined with a low pressure system that supplied moisture at mid-level. The simulations, performed by 13 single-column and 4 cloud-resolving models, generally overestimate the liquid water path and strongly underestimate the ice water path, although there is a large spread among the models. This finding is in contrast with results for the single-layer, low-level mixed-phase stratocumulus case in Part I of this study, as well as previous studies of shallow mixed-phase Arctic clouds, that showed an underprediction of liquid water path. The overestimate of liquid water path and underestimate of ice water path occur primarily when deeper mixed-phase clouds extending into the mid-troposphere were observed. These results suggest important differences in the ability of models to simulate Arctic mixed-phase clouds that are deep and multi-layered versus shallow and single-layered. In general, models with a more sophisticated, two-moment treatment of the cloud microphysics produce a somewhat smaller liquid water path that is closer to observations. The cloud-resolving models tend to produce a larger cloud fraction than the single-column models. The liquid water path and especially the cloud fraction have a large impact on the cloud radiative forcing at the surface, which is dominated by the longwave flux for this case.

  11. Modeling of the metallic port in breast tissue expanders for photon radiotherapy.

    Science.gov (United States)

    Yoon, Jihyung; Xie, Yibo; Heins, David; Zhang, Rui

    2018-03-30

    The purpose of this study was to model the metallic port in breast tissue expanders and to improve the accuracy of dose calculations in a commercial photon treatment planning system (TPS). The density of the model was determined by comparing TPS calculations and ion chamber (IC) measurements. The model was further validated and compared with two widely used clinical models by using a simplified anthropomorphic phantom and thermoluminescent dosimeter (TLD) measurements. Dose perturbations and target coverage for a single postmastectomy radiotherapy (PMRT) patient were also evaluated. The dimensions of the metallic port model were determined to be 1.75 cm in diameter and 5 mm in thickness. The density of the port was adjusted to be 7.5 g/cm3, which minimized the differences between IC measurements and TPS calculations. Using the simplified anthropomorphic phantom, we found the TPS-calculated point doses based on the new model were in agreement with TLD measurements within 5.0% and were more accurate than doses calculated based on the clinical models. Based on the photon treatment plans for a real patient, we found that the metallic port has a negligible dosimetric impact on the chest wall, while the port introduced a significant dose shadow in the skin area. The current clinical port models either overestimate or underestimate the attenuation from the metallic port, and the dose perturbation depends on the plan and the model in a complex way. TPS calculations based on our model of the metallic port showed good agreement with measurements for all cases. This new model could improve the accuracy of dose calculations for PMRT patients who have temporary tissue expanders implanted during radiotherapy and could potentially reduce the risk of complications after the treatment. © 2018 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.

  12. A regional scale model for ozone in the United States with subgrid representation of urban and power plant plumes

    International Nuclear Information System (INIS)

    Sillman, S.; Logan, J.A.; Wofsy, S.C.

    1990-01-01

    A new approach to modeling regional air chemistry is presented for application to industrialized regions such as the continental US. Rural chemistry and transport are simulated using a coarse grid, while chemistry and transport in urban and power plant plumes are represented by detailed subgrid models. Emissions from urban and power plant sources are processed in generalized plumes where chemistry and dilution proceed for 8-12 hours before mixing with air in a large resolution element. A realistic fraction of pollutants reacts under high-NOx conditions, and NOx is removed significantly before dispersal. Results from this model are compared with results from grid models that do not distinguish plumes and with observational data defining regional ozone distributions. Grid models with coarse resolution are found to artificially disperse NOx over rural areas, therefore overestimating rural levels of both NOx and O3. Regional net ozone production is too high in coarse grid models, because production of O3 is more efficient per molecule of NOx in the low-concentration regime of rural areas than in heavily polluted plumes from major emission sources. Ozone levels simulated by this model are shown to agree with observations in urban plumes and in rural regions. The model reproduces accurately average regional and peak ozone concentrations observed during a 4-day ozone episode. Computational costs for the model are reduced 25- to 100-fold as compared to fine-mesh models.

  13. Bayesian spatial modelling and the significance of agricultural land use to scrub typhus infection in Taiwan.

    Science.gov (United States)

    Wardrop, Nicola A; Kuo, Chi-Chien; Wang, Hsi-Chieh; Clements, Archie C A; Lee, Pei-Fen; Atkinson, Peter M

    2013-11-01

    Scrub typhus is transmitted by the larval stage of trombiculid mites. Environmental factors, including land cover and land use, are known to influence breeding and survival of trombiculid mites and, thus, also the spatial heterogeneity of scrub typhus risk. Here, a spatially autoregressive modelling framework was applied to scrub typhus incidence data from Taiwan, covering the period 2003 to 2011, to provide increased understanding of the spatial pattern of scrub typhus risk and the environmental and socioeconomic factors contributing to this pattern. A clear spatial pattern in scrub typhus incidence was observed within Taiwan, and incidence was found to be significantly correlated with several land cover classes, temperature, elevation, normalized difference vegetation index, rainfall, population density, average income and the proportion of the population that work in agriculture. The final multivariate regression model included statistically significant correlations between scrub typhus incidence and average income (negatively correlated), the proportion of land that contained mosaics of cropland and vegetation (positively correlated) and elevation (positively correlated). These results highlight the importance of land cover on scrub typhus incidence: mosaics of cropland and vegetation represent a transitional land cover type which can provide favourable habitats for rodents and, therefore, trombiculid mites. In Taiwan, these transitional land cover areas tend to occur in less populated and mountainous areas, following the frontier establishment and subsequent partial abandonment of agricultural cultivation, due to demographic and socioeconomic changes. Future land use policy decision-making should ensure that potential public health outcomes, such as modified risk of scrub typhus, are considered.

  14. Vertical dispersion from surface and elevated releases: An investigation of a Non-Gaussian plume model

    International Nuclear Information System (INIS)

    Brown, M.J.; Arya, S.P.; Snyder, W.H.

    1993-01-01

    The vertical diffusion of a passive tracer released from surface and elevated sources in a neutrally stratified boundary layer has been studied by comparing field and laboratory experiments with a non-Gaussian K-theory model that assumes power-law profiles for the mean velocity and vertical eddy diffusivity. Several important differences between model predictions and experimental data were discovered: (1) the model overestimated ground-level concentrations from surface and elevated releases at distances beyond the peak concentration; (2) the model overpredicted vertical mixing near elevated sources, especially in the upward direction; (3) the model-predicted exponent α in the exponential vertical concentration profile for a surface release [C̄(z) ∝ exp(-z^α)] was smaller than the experimentally measured exponent. Model closure assumptions and experimental shortcomings are discussed in relation to their probable effect on model predictions and experimental measurements. 42 refs., 13 figs., 3 tabs
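
    The surface-release profile quoted in item (3) is a stretched exponential, C̄(z) ∝ exp(-z^α); a smaller exponent α leaves relatively more tracer aloft. A small sketch (the height scale h and all numeric values are illustrative additions, not from the study):

```python
# Stretched-exponential vertical concentration profile for a surface
# release. The height scale h and all values are illustrative only.
import math

def conc(z, c0=1.0, h=50.0, alpha=1.5):
    """Mean concentration at height z (arbitrary units)."""
    return c0 * math.exp(-((z / h) ** alpha))

# A smaller model alpha than the measured one implies relatively more
# tracer predicted aloft (for heights above the scale h):
z = 100.0
print(conc(z, alpha=1.2) > conc(z, alpha=1.7))  # True
```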

  15. Understanding & modeling bus transit driver availability.

    Science.gov (United States)

    2014-07-01

    Bus transit agencies are required to hire extraboard (i.e. back-up) operators to account for unexpected absences. Incorrect sizing of extra driver workforce is problematic for a number of reasons. Overestimating the appropriate number of extraboard o...

  16. Validation of the Eddy Viscosity and Lange Wake Models using Measured Wake Flow Characteristics Behind a Large Wind Turbine Rotor

    Energy Technology Data Exchange (ETDEWEB)

    Jeon, Sang Hyeon; Kim, Bum Suk; Huh, Jong Chul [Jeju National Univ., Jeju (Korea, Republic of); Go, Young Jun [Hanjin Ind, Co., Ltd., Yangsan (Korea, Republic of)

    2016-01-15

    The wake effects behind wind turbines were investigated using data from a Met Mast tower and the SCADA (Supervisory Control and Data Acquisition) system for a wind turbine. Velocity deficits predicted by the eddy viscosity model and turbulence intensities predicted by the Lange model were compared with the measured wake characteristics. As a result, the velocity deficit and turbulence intensity of the wake increased as the free stream wind speed decreased. In addition, the magnitude of the velocity deficit at the center of the wake was overestimated by the eddy viscosity model, while the turbulence intensity from the Lange model was similar to the measured values.

  17. Implementation of viscoelastic mud-induced energy attenuation in the third-generation wave model, SWAN

    Science.gov (United States)

    Beyramzade, Mostafa; Siadatmousavi, Seyed Mostafa

    2018-01-01

    The interaction of waves with fluid mud can dissipate the wave energy significantly over a few wavelengths. In this study, the third-generation wave model, SWAN, was extended to include attenuation of wave energy due to interaction with a viscoelastic fluid mud layer. The performance of the implemented viscoelastic models was verified against an analytical solution and viscous formulations for simple one-dimensional propagation cases. Stationary and non-stationary test cases on the Surinam coast and the Atchafalaya Shelf showed that the inclusion of the mud-wave interaction term in the third-generation wave model enhances the model performance in real applications. A high value of mud viscosity (of the order of 0.1 m2/s) was required in both field cases to remedy model overestimation in the high-frequency range of the wave spectrum. The use of a frequency-dependent mud viscosity improved the performance of the model, especially in the 0.2-0.35 Hz range of the wave spectrum. In addition, the mud-wave interaction might affect the high-frequency part of the spectrum, and this part of the wave spectrum is also affected by energy transfer from wind to waves, even for fetch lengths of the order of 10 km. It is shown that excluding the wind input term in such cases might result in different values for the mud layer parameters when the inverse modeling procedure is employed. Unlike with viscous models for wave-mud interaction, inverse modeling with the viscoelastic model yields a set of mud parameters with the same performance. This provides an opportunity to select realistic mud parameters that are in better agreement with in situ measurements.

  18. Reassessing the variability in atmospheric H2 using the two-way nested TM5 model

    Energy Technology Data Exchange (ETDEWEB)

    Pieterse, G.; Batenburg, A.M.; Roeckmann, T. [Institute for Marine and Atmospheric Research Utrecht (IMAU), Utrecht (Netherlands); Krol, M.C. [Department of Meteorology and Air Quality at Wageningen University, Wageningen (Netherlands); Brenninkmeijer, C.A.M. [Max-Planck-Institut fuer Chemie, Air Chemistry Division, Mainz (Germany); Popa, M.E.; Vermeulen, A.T. [Department of Air Quality and Climate Research at the Energy Research Centre of the Netherlands ECN, Petten (Netherlands); O'Doherty, S.; Grant, A. [School of Chemistry, University of Bristol, Bristol (United Kingdom); Steele, L.P.; Krummel, P.B.; Langenfelds, R.L. [Centre for Australian Weather and Climate Research, CSIRO Marine and Atmospheric Research, Aspendale, Victoria (Australia); Wang, H.J. [School of Earth and Atmospheric Sciences, Georgia Institute of Technology, Atlanta, GA (United States); Schmidt, M.; Yver, C. [Laboratoire des Sciences du Climat et de l'Environnement (LSCE), Gif-sur-Yvette (France); Jordan, A. [Max-Planck Institut fuer Biogeochemie, Jena (Germany); Engel, A. [Institut fuer Meteorologie und Geophysik, Goethe-Universitaet Frankfurt, Frankfurt (Germany); Fisher, R.E.; Lowry, D.; Nisbet, E.G. [Department of Earth Sciences, Royal Holloway, University of London, Egham (United Kingdom); Reimann, S.; Vollmer, M.K.; Steinbacher, M. [Empa, Swiss Federal Institute for Materials Science and Technology, Laboratory for Air Pollution/Environmental Technology, Duebendorf (Switzerland); Hammer, S. [Institut fuer Umweltphysik, Heidelberg Universitaet, Heidelberg (Germany); Forster, G.; Sturges, W.T. [School of Environmental Sciences, University of East Anglia, Norwich (United Kingdom)

    2013-05-16

    This work reassesses the global atmospheric budget of H2 with the TM5 model. The recent adjustment of the calibration scale for H2 translates into a change in the tropospheric burden. Furthermore, the ECMWF Reanalysis-Interim (ERA-Interim) data from the European Centre for Medium-Range Weather Forecasts (ECMWF) used in this study show slower vertical transport than the operational data used before. Consequently, more H2 is removed by deposition. The deposition parametrization is updated because significant deposition fluxes for snow, water, and vegetation surfaces were calculated in our previous study. Timescales of 1-2 h are asserted for the transport of H2 through the canopies of densely vegetated regions. The global-scale variability of H2 and ρ(ΔH2) is well represented by the updated model. H2 is slightly overestimated in the Southern Hemisphere because too little H2 is removed by dry deposition to rainforests and savannahs. The variability in H2 over Europe is further investigated using a high-resolution model subdomain. It is shown that discrepancies between the model and the observations are mainly caused by the finite model resolution. The tropospheric burden is estimated at 165±8 Tg H2. The removal rates of H2 by deposition and photochemical oxidation are estimated at 53±4 and 23±2 Tg H2/yr, resulting in a tropospheric lifetime of 2.2±0.2 years.
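
    The reported tropospheric lifetime follows directly from the budget terms quoted above (lifetime = burden / total removal rate); a quick consistency check:

```python
# Back-of-envelope check of the tropospheric H2 lifetime from the abstract's
# budget terms: lifetime = burden / (deposition + photochemical oxidation).
burden_tg = 165.0          # Tg H2
deposition_tg_yr = 53.0    # Tg H2/yr
oxidation_tg_yr = 23.0     # Tg H2/yr

lifetime_yr = burden_tg / (deposition_tg_yr + oxidation_tg_yr)
print(round(lifetime_yr, 1))  # → 2.2, consistent with the stated 2.2±0.2 years
```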

  19. Cloud-Resolving Modeling Intercomparison Study of a Squall Line Case from MC3E - Properties of Convective Core

    Science.gov (United States)

    Fan, J.; Han, B.; Varble, A.; Morrison, H.; North, K.; Kollias, P.; Chen, B.; Dong, X.; Giangrande, S. E.; Khain, A.; Lin, Y.; Mansell, E.; Milbrandt, J.; Stenz, R.; Thompson, G.; Wang, Y.

    2016-12-01

    The large spread in CRM simulations of deep convection and aerosol effects on deep convective clouds (DCCs) makes it difficult to (1) further our understanding of deep convection and (2) define "benchmarks" for use in parameterization development. A constrained model intercomparison study of a mid-latitude mesoscale squall line is performed using the Weather Research & Forecasting (WRF) model at 1-km horizontal grid spacing with eight cloud microphysics schemes, to understand the specific processes that lead to the large spread of simulated convection and precipitation. Various observational data are employed to evaluate the baseline simulations. All simulations tend to produce a wider convective area but a much narrower stratiform area. The magnitudes of the virtual potential temperature drop, pressure rise, and wind speed peak associated with the passage of the gust front are significantly smaller than observed, suggesting the simulated cold pools are weaker. Simulations generally overestimate the vertical velocity and radar reflectivity in convective cores compared with the retrievals. The modeled updraft velocity and precipitation show a significant spread across the eight schemes. The spread in updraft velocity arises from the combination of the low-level pressure perturbation gradient (PPG) and buoyancy: both PPG and thermal buoyancy are small in simulations of weak convection and large in those of strong convection. Ice-related parameterizations contribute most to the spread in updraft velocity, but they do not explain the large spread in precipitation. The understanding gained in this study can help focus future observations and parameterization development.

  20. Effect of modelling slum populations on influenza spread in Delhi

    Science.gov (United States)

    Chen, Jiangzhuo; Chu, Shuyu; Chungbaek, Youngyun; Khan, Maleq; Kuhlman, Christopher; Marathe, Achla; Mortveit, Henning; Vullikanti, Anil; Xie, Dawen

    2016-01-01

    Objectives This research studies the impact of an influenza epidemic in the slum and non-slum areas of Delhi, the National Capital Territory of India, by taking proper account of slum demographics and residents’ activities, using a highly resolved social contact network of the 13.8 million residents of Delhi. Methods An SEIR model is used to simulate the spread of influenza on two different synthetic social contact networks of Delhi: one in which slums and non-slums are treated the same in terms of their demographics and daily sets of activities, and another in which slum and non-slum regions have different attributes. Results Differences between the epidemic outcomes on the two networks are large. When slum attributes are ignored, time-to-peak infection is overestimated by several weeks, and the cumulative infection rate and peak infection rate are underestimated by 10–50%. Conclusions Slum populations have a significant effect on influenza transmission in urban areas. Improper specification of slums in large urban regions results in underestimation of infections in the entire population and hence will lead to misguided interventions by policy planners. PMID:27687898
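
    The study's transmission process can be caricatured with a deterministic SEIR compartment model; the network-based simulation in the paper is far richer, and the rate parameters below are hypothetical, chosen only to produce a plausible epidemic curve.

```python
import numpy as np

def seir(beta, sigma, gamma, n, i0, days, dt=0.1):
    """Minimal deterministic SEIR integration (forward Euler). A toy stand-in
    for the network-based SEIR simulation in the study; beta (transmission),
    sigma (1/latent period), gamma (1/infectious period) are assumed values."""
    s, e, i, r = n - i0, 0.0, float(i0), 0.0
    infectious = []
    for _ in range(int(days / dt)):
        new_exp = beta * s * i / n * dt
        new_inf = sigma * e * dt
        new_rec = gamma * i * dt
        s, e, i, r = s - new_exp, e + new_exp - new_inf, i + new_inf - new_rec, r + new_rec
        infectious.append(i)
    return np.array(infectious)

# Population size matches Delhi's 13.8 million; the rate parameters are toy values.
curve = seir(beta=0.75, sigma=0.5, gamma=0.25, n=13_800_000, i0=100, days=200)
peak_day = curve.argmax() * 0.1  # time-to-peak, the quantity most sensitive to slum attributes
```

Running the same model on two differently specified contact structures is the essence of the paper's comparison; here a parameter change would play that role.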

  1. The significance of some methodological effects on filtration and ingestion rates of the rotifer Brachionus plicatilis

    Science.gov (United States)

    Schlosser, H. J.; Anger, K.

    1982-06-01

    Filtration rate (F) and ingestion rate (I) were measured in the rotifer Brachionus plicatilis feeding on the flagellate Dunaliella spec. and on yeast cells (Saccharomyces cerevisiae). 60-min experiments in rotating bottles served as a standard for testing methodological effects on levels of F and I. A lack of rotation reduced F values by 40 %, and a rise in temperature from 18° to 23.5 °C increased them by 42 %. Ingestion rates increased significantly up to a particle (yeast) concentration of ca. 600-800 cells · μl-1; then they remained constant, whereas filtration rates decreased beyond this threshold. Rotifer density (up to 1000 ind · ml-1) and previous starvation (up to 40 h) did not significantly influence food uptake rates. The duration of the experiment proved to have the most significant effect on F and I values: in 240-min experiments, these values were on average more than 90 % lower than in 15-min experiments. From this finding it is concluded that ingestion rates obtained from short-term experiments (60 min or less) cannot be used in energy budgets, because they severely overestimate the actual long-term feeding capacity of the rotifers. At the lower end of the particle size spectrum (2 to 3 µm) there are not only food cells but apparently also contaminating faecal particles. Their number increased with increasing duration of the experiments and led to an underestimation of F and I. Elemental analyses of rotifers and their food suggest that B. plicatilis can ingest up to 0.6 mJ or ca. 14 % of its own body carbon within 15 min. The long-term average was estimated as 3.4 mJ · ind-1 · d-1 or ca. 75 % of body carbon · d-1.
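
    Filtration and ingestion rates of this kind are conventionally computed from the decline in food-cell concentration using Gauld-type clearance equations; the sketch below uses those standard formulas with illustrative numbers, not data from the study.

```python
import math

def filtration_ingestion(c0, ct, volume_ul, n_animals, t_min):
    """Filtration rate F (ul per animal per min) and ingestion rate I (cells
    per animal per min) from initial (c0) and final (ct) cell concentrations
    in cells/ul, using standard Gauld-type clearance equations."""
    f = volume_ul / (n_animals * t_min) * math.log(c0 / ct)  # clearance rate
    c_mean = (c0 - ct) / math.log(c0 / ct)                   # log-mean concentration
    i = f * c_mean                                           # ingestion rate
    return f, i

# Illustrative values: 10 ml bottle, 100 rotifers, 60 min, 800 -> 600 cells/ul
f, i = filtration_ingestion(c0=800.0, ct=600.0, volume_ul=10_000.0,
                            n_animals=100, t_min=60.0)
```

Note that F times the log-mean concentration reduces exactly to V(c0 - ct)/(n t), so I is simply the cells removed per animal per minute.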

  2. Evaluation Of Statistical Models For Forecast Errors From The HBV-Model

    Science.gov (United States)

    Engeland, K.; Kolberg, S.; Renard, B.; Stensland, I.

    2009-04-01

    Three statistical models for the forecast errors for inflow to the Langvatn reservoir in Northern Norway have been constructed and tested according to how well the distribution and median values of the forecast errors fit the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order autoregressive model was constructed for the forecast errors; the parameters were conditioned on climatic conditions. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order autoregressive model was constructed for the forecast errors. For the last model, positive and negative errors were modeled separately: the errors were first NQT-transformed before a model was constructed in which the mean values were conditioned on climate, forecasted inflow, and the previous day's error. To test the three models we applied three criteria: we wanted (a) the median values to be close to the observed values; (b) the forecast intervals to be narrow; (c) the distribution to be correct. The results showed that it is difficult to obtain a correct model for the forecast errors, and that the main challenge is to account for the auto-correlation in the errors. Models 1 and 2 gave similar results, and their main drawback is that the distributions are not correct. The 95% forecast intervals were well identified, but smaller forecast intervals were over-estimated and larger intervals were under-estimated. Model 3 gave a distribution that fits better, but the median values do not fit well since the auto-correlation is not properly accounted for. If the 95% forecast interval is of interest, Model 2 is recommended. If the whole distribution is of interest, Model 3 is recommended.
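
    A minimal sketch of the first model's two ingredients, a Box-Cox transform of the inflows followed by a first-order autoregressive (AR(1)) model of the transformed errors, with synthetic data and an assumed lambda and AR coefficient:

```python
import numpy as np

def box_cox(x, lam=0.3):
    """Box-Cox transform; the lambda value is an assumption for illustration."""
    return (x ** lam - 1.0) / lam if lam != 0 else np.log(x)

def ar1_next_error(errors, phi=0.6):
    """One-step-ahead prediction of the next transformed error from an AR(1)
    model with coefficient phi (assumed fixed here; conditioned on climate
    in the study)."""
    return phi * errors[-1]

rng = np.random.default_rng(0)
obs = rng.gamma(shape=3.0, scale=20.0, size=100)   # synthetic inflows
fcst = obs * rng.normal(1.0, 0.1, size=100)        # synthetic forecasts
err = box_cox(obs) - box_cox(fcst)                 # errors in transformed space
next_err = ar1_next_error(err)
```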

  3. Statistically significant relational data mining :

    Energy Technology Data Exchange (ETDEWEB)

    Berry, Jonathan W.; Leung, Vitus Joseph; Phillips, Cynthia Ann; Pinar, Ali; Robinson, David Gerald; Berger-Wolf, Tanya; Bhowmick, Sanjukta; Casleton, Emily; Kaiser, Mark; Nordman, Daniel J.; Wilson, Alyson G.

    2014-02-01

    This report summarizes the work performed under the project "Statistically significant relational data mining." The goal of the project was to add more statistical rigor to the fairly ad hoc area of data mining on graphs. Our goal was to develop better algorithms and better ways to evaluate algorithm quality. We concentrated on algorithms for community detection, approximate pattern matching, and graph similarity measures. Approximate pattern matching involves finding an instance of a relatively small pattern, expressed with tolerance, in a large graph of data observed with uncertainty. This report gathers the abstracts and references for the eight refereed publications that have appeared as part of this work. We then archive three pieces of research that have not yet been published. The first is theoretical and experimental evidence that a popular statistical measure for comparison of community assignments favors over-resolved communities over approximations to a ground truth. The second is a set of statistically motivated methods for measuring the quality of an approximate match of a small pattern in a large graph. The third is a new probabilistic random graph model; statisticians favor such models for graph analysis. The new local structure graph model overcomes some of the issues with popular models such as exponential random graph models and latent variable models.

  4. Impact of the snow cover scheme on snow distribution and energy budget modeling over the Tibetan Plateau

    Science.gov (United States)

    Xie, Zhipeng; Hu, Zeyong; Xie, Zhenghui; Jia, Binghao; Sun, Genhou; Du, Yizhen; Song, Haiqing

    2018-02-01

    This paper presents the impact of two snow cover schemes (NY07 and SL12) in the Community Land Model version 4.5 (CLM4.5) on the snow distribution and surface energy budget over the Tibetan Plateau. The simulated snow cover fraction (SCF), snow depth, and snow cover days were evaluated against in situ snow depth observations and a satellite-based snow cover product and snow depth dataset. The results show that the SL12 scheme, which considers snow accumulation and snowmelt processes separately, has a higher overall accuracy (81.8%) than NY07 (75.8%); however, SL12 underestimated the SCF (15.1% underestimation rate), whereas NY07 overestimated it (15.2% overestimation rate). Both schemes capture the distribution of the maximum snow depth well but show large positive biases in the average value through all periods (3.37, 3.15, and 1.48 cm for NY07; 3.91, 3.52, and 1.17 cm for SL12) and overestimate snow cover days compared with the satellite-based product and in situ observations. Higher altitudes show larger root-mean-square errors (RMSEs) in the simulations of snow depth and snow cover days during the snow-free period. Moreover, the surface energy flux estimates from the SL12 scheme are generally superior to those from NY07 when evaluated against ground-based observations, in particular for net radiation and sensible heat flux. This study has great implications for further improvement of subgrid-scale snow variations over the Tibetan Plateau.
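
    For concreteness, the NY07 scheme is often written as a hyperbolic-tangent function of snow depth and snow density; the sketch below uses that commonly cited form with illustrative parameter values, which may differ from CLM4.5's exact settings.

```python
import math

def scf_ny07(snow_depth_m, rho_snow=250.0, rho_new=100.0, z0g=0.01, m=1.0):
    """Snow cover fraction in the form commonly attributed to Niu and Yang (2007):
    fsno = tanh(h / (2.5 * z0g * (rho_snow / rho_new) ** m)), where z0g is the
    ground roughness length (m). Defaults are illustrative, not CLM4.5's settings."""
    return math.tanh(snow_depth_m / (2.5 * z0g * (rho_snow / rho_new) ** m))

shallow = scf_ny07(0.02)  # a few cm of aged snow -> partial cover
deep = scf_ny07(0.5)      # half a metre -> essentially full cover
```

Because denser (aged) snow lowers the SCF for a given depth in this form, biases of such schemes can differ between accumulation and melt seasons, which motivates SL12's separate treatment of the two.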

  5. A simulation model to estimate the cost and effectiveness of alternative dialysis initiation strategies.

    Science.gov (United States)

    Lee, Chris P; Chertow, Glenn M; Zenios, Stefanos A

    2006-01-01

    Patients with end-stage renal disease (ESRD) require dialysis to maintain survival. The optimal timing of dialysis initiation in terms of cost-effectiveness has not been established. We developed a simulation model of individuals progressing towards ESRD and requiring dialysis; it can be used to analyze dialysis strategies and scenarios, and it was embedded in an optimization framework to derive improved strategies. Actual (historical) and simulated survival curves and hospitalization rates were virtually indistinguishable. The model overestimated transplantation costs (by 10%), but this was related to confounding by Medicare coverage. To assess the model's robustness, we examined several dialysis strategies while input parameters were perturbed. Under all 38 scenarios, relative rankings remained unchanged. An improved policy for a hypothetical patient was derived using an optimization algorithm. The model produces reliable results and is robust. It enables the cost-effectiveness analysis of dialysis strategies.

  6. Arsenic levels in wipe samples collected from play structures constructed with CCA-treated wood: Impact on exposure estimates

    Energy Technology Data Exchange (ETDEWEB)

    Barraj, Leila M. [Chemical Regulation and Food Safety, Exponent, Inc., Suite 1100, 1150 Connecticut Ave., NW, Washington, DC 20036 (United States)], E-mail: lbarraj@exponent.com; Scrafford, Carolyn G. [Chemical Regulation and Food Safety, Exponent, Inc., Suite 1100, 1150 Connecticut Ave., NW, Washington, DC 20036 (United States); Eaton, W. Cary [RTI International, 3040 Cornwallis Road, Research Triangle Park, NC 27709 (United States); Rogers, Robert E.; Jeng, Chwen-Jyh [Toxcon Health Sciences Research Centre Inc., 9607 - 41 Avenue, Edmonton, Alberta, T6E 5X7 (Canada)

    2009-04-01

    Lumber treated with chromated copper arsenate (CCA) has been used in residential outdoor wood structures and playgrounds. The U.S. EPA has conducted a probabilistic assessment of children's exposure to arsenic from CCA-treated structures using the Stochastic Human Exposure and Dose Simulation model for the wood preservative scenario (SHEDS-Wood). The EPA assessment relied on data from an experimental study using adult volunteers and designed to measure arsenic in maximum hand and wipe loadings. Analyses using arsenic handloading data from a study of children playing on CCA-treated play structures in Edmonton, Canada, indicate that the maximum handloading values significantly overestimate the exposure that occurs during actual play. The objective of our paper is to assess whether the dislodgeable arsenic residues from structures in the Edmonton study are comparable to those observed in other studies and whether they support the conclusion that the values derived by EPA using modeled maximum loading values overestimate hand exposures. We compared dislodgeable arsenic residue data from structures in the playgrounds in the Edmonton study to levels observed in studies used in EPA's assessment. Our analysis showed that the dislodgeable arsenic levels in the Edmonton playground structures are similar to those in the studies used by EPA. Hence, the exposure estimates derived using the handloading data from children playing on CCA-treated structures are more representative of children's actual exposures than the overestimates derived by EPA using modeled maximum values. Handloading data from children playing on CCA-treated structures should be used to reduce the uncertainty of modeled estimates derived using the SHEDS-Wood model.

  7. The choice of a constitutive formulation for modeling limb flexion-induced deformations and stresses in the human femoropopliteal arteries of different ages.

    Science.gov (United States)

    Desyatova, Anastasia; MacTaggart, Jason; Poulson, William; Deegan, Paul; Lomneth, Carol; Sandip, Anjali; Kamenskiy, Alexey

    2017-06-01

    Open and endovascular treatments for peripheral arterial disease are notorious for high failure rates. Severe mechanical deformations experienced by the femoropopliteal artery (FPA) during limb flexion, and interactions between the artery and repair materials, play important roles and may contribute to poor clinical outcomes. Computational modeling can help optimize FPA repair, but these simulations depend heavily on the choice of constitutive model describing the arterial behavior. In this study, a finite element model of the FPA in the standing (straight) and gardening (acutely bent) postures was built using computed tomography data, longitudinal pre-stretch, and biaxially determined mechanical properties. Springs and dashpots were used to represent the surrounding tissue forces associated with limb flexion-induced deformations. These forces were then used with age-specific longitudinal pre-stretch and mechanical properties to obtain deformed FPA configurations for seven age groups. Four commonly used invariant-based constitutive models were compared to determine the accuracy of capturing deformations and stresses in each age group. The four-fiber FPA model most accurately portrayed arterial behavior at all ages, but in subjects younger than 40 years the performance of all constitutive formulations was similar. In older subjects, the Demiray (Delfino) and classic two-fiber Holzapfel-Gasser-Ogden formulations were better than the Neo-Hookean model at predicting deformations due to limb flexion, but both significantly overestimated principal stresses compared with the FPA or Neo-Hookean models.
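
    The contrast between the formulations can be seen directly in their strain-energy functions: an exponential (Demiray/Delfino-type) energy stiffens far faster with stretch than a Neo-Hookean one, which is consistent with the overestimated stresses noted above. The material constants below are illustrative, not fitted FPA values.

```python
import math

def w_neo_hookean(i1, c10=0.05):
    """Neo-Hookean strain energy, W = c10 * (I1 - 3)."""
    return c10 * (i1 - 3.0)

def w_demiray(i1, a=0.05, b=4.0):
    """Demiray (Delfino) exponential strain energy,
    W = (a / b) * (exp(b * (I1 - 3) / 2) - 1)."""
    return (a / b) * (math.exp(0.5 * b * (i1 - 3.0)) - 1.0)

# At large first invariant I1 the exponential model's energy (and hence its
# stress response) grows much faster than the Neo-Hookean one.
ratio_large_stretch = w_demiray(6.0) / w_neo_hookean(6.0)
```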

  8. Valence-bond theory of linear Hubbard and Pariser-Parr-Pople models

    Science.gov (United States)

    Soos, Z. G.; Ramasesha, S.

    1984-05-01

    The ground and low-lying states of finite quantum-cell models with one state per site are obtained exactly through a real-space basis of valence-bond (VB) diagrams that explicitly conserve the total spin. Regular and alternating Hubbard and Pariser-Parr-Pople (PPP) chains and rings with Ne electrons on N(PPP models, but differ from mean-field results. Molecular PPP parameters describe well the excitations of finite polyenes, odd polyene ions, linear cyanine dyes, and slightly overestimate the absorption peaks in polyacetylene (CH)x. Molecular correlations contrast sharply with uncorrelated descriptions of topological solitons, which are modeled by regular polyene radicals and their ions for both wide and narrow alternation crossovers. Neutral solitons have no midgap absorption and negative spin densities, while the intensity of the in-gap excitation of charged solitons is not enhanced. The properties of correlated states in quantum-cell models with one valence state per site are discussed in the adiabatic limit for excited-state geometries and instabilities to dimerization.

  9. Inter-comparison between HERMESv2.0 and TNO-MACC-II emission data using the CALIOPE air quality system (Spain)

    Science.gov (United States)

    Guevara, Marc; Pay, María Teresa; Martínez, Francesc; Soret, Albert; Denier van der Gon, Hugo; Baldasano, José M.

    2014-12-01

    This work examines and compares the performance of two emission datasets for modelling air quality concentrations over Spain: (i) the High-Elective Resolution Modelling Emissions System (HERMESv2.0) and (ii) the TNO-MACC-II emission inventory. For this purpose, the air quality system CALIOPE-AQFS (WRF-ARW/CMAQ/BSC-DREAM8b) was run over Spain for February and June 2009 using the two emission datasets (4 km × 4 km and 1 h). Modelled nitrogen dioxide (NO2), sulphur dioxide (SO2), ozone (O3) and particulate matter (PM10) concentrations were compared with measurements at different types of air quality stations (i.e. rural background, urban, suburban industrial). A preliminary emission comparison showed significant discrepancies between the two datasets, highlighting an overestimation of industrial emissions in urban areas when using TNO-MACC-II. However, simulations showed similar performance of both emission datasets in terms of air quality. Modelled NO2 concentrations were similar between the two datasets at the background stations, although TNO-MACC-II presented lower underestimations due to differences in industrial, other mobile source, and residential emissions. At Madrid urban stations NO2 was significantly underestimated in both cases, despite the fact that HERMESv2.0 estimates traffic emissions using more local information and a more detailed methodology. This NO2 underestimation problem was not found in Barcelona, due to the influence of international shipping emissions along the coastline. An inadequate characterization of some TNO-MACC-II point sources led to high SO2 biases at industrial stations, especially in northwest Spain where large facilities are grouped. In general, surface O3 was overestimated regardless of the emission dataset used, reflecting CMAQ's known tendency to overestimate low ozone at night. On the other hand, modelled PM10 concentrations were less underestimated in urban areas when applying HERMESv2.0 due to the inclusion of road dust

  10. Composition of fibrin glues significantly influences axial vascularization and degradation in isolation chamber model.

    Science.gov (United States)

    Arkudas, Andreas; Pryymachuk, Galyna; Hoereth, Tobias; Beier, Justus P; Polykandriotis, Elias; Bleiziffer, Oliver; Gulle, Heinz; Horch, Raymund E; Kneser, Ulrich

    2012-07-01

    In this study, different fibrin sealants with varying concentrations of the fibrin components were evaluated in terms of matrix degradation and vascularization in the arteriovenous loop (AVL) model of the rat. An AVL was placed in a Teflon isolation chamber filled with 500 μl fibrin gel. The matrix was composed of commercially available fibrin gels, namely Beriplast (Behring GmbH, Marburg, Germany) (group A), Evicel (Omrix Biopharmaceuticals S.A., Somerville, New Jersey, USA) (group B), and Tisseel VH S/D (Baxter, Vienna, Austria) with a thrombin concentration of 4 IU/ml and a fibrinogen concentration of 80 mg/ml [Tisseel S F80 (Baxter), group C] or a fibrinogen concentration of 20 mg/ml [Tisseel S F20 (Baxter), group D]. After 2 and 4 weeks, five constructs per group and time point were investigated using micro-computed tomography and histological and morphometrical analysis techniques. The aprotinin, factor XIII, and thrombin concentrations did not affect the degree of clot degradation. An inverse relationship was found between fibrin matrix degradation and the sprouting of blood vessels. With the reduced fibrinogen concentration in group D, a significantly decreased construct weight and an increased generation of vascularized connective tissue were detected. Fibrinogen, as the major matrix component, had a significant impact on the matrix properties. Altering fibrin gel properties might optimize the formation of blood vessels.

  11. Double-layer structure model of the uranium generating bed in the land basins of the northwestern China and its significance

    International Nuclear Information System (INIS)

    Wang Zhilong

    1988-04-01

    The paper puts forward a double-layer structure model of the uranium-generating bed in the land basins of Northwestern China, i.e. uranium-generating bed = uranium source layer + uranium-gathering layer. The mechanism of its formation: feldspar was hydromicatized, and some feldspar and quartz detrital silicate minerals were reddened by authigenic hematite and goethite. In the course of this oxidation, a little uranium is released from the detrital minerals. Because of the oxidizing environment, the released uranium could not be precipitated; it could only diffuse into the adjacent grey bed, which has a low Eh value, with uranium-bearing 'stagnant water' fixed in pores during the dewatering process of diagenesis, and form a minable uranium deposit. The significance of the model for uranium prospecting is as follows: (1) the range of potential uranium sources for prospecting in sandstone is much expanded; (2) for the potential assessment of basins and the selection of potential areas, the model is an important prospecting criterion; (3) buried ore bodies can be found provided that arkosic red beds are regarded as a significant criterion of the uranium-generating bed.

  12. Investigating added value of regional climate modeling in North American winter storm track simulations

    Science.gov (United States)

    Poan, E. D.; Gachon, P.; Laprise, R.; Aider, R.; Dueymes, G.

    2018-03-01

    Extratropical cyclone (EC) characteristics depend on a combination of large-scale factors and regional processes. However, the latter are considered to be poorly represented in global climate models (GCMs), partly because their resolution is too coarse. This paper describes a framework that uses the possibilities offered by regional climate models (RCMs) to gain insight into storm activity during winter over North America (NA). The recent past climate period (1981-2005) is considered to assess EC activity over NA using the NCEP regional reanalysis (NARR) as a reference, along with the European reanalysis ERA-Interim (ERAI) and two CMIP5 GCMs used to drive the Canadian Regional Climate Model version 5 (CRCM5) and the corresponding regional-scale simulations. While ERAI and GCM simulations show basic agreement with NARR in terms of climatological storm track patterns, detailed bias analyses show that, on the one hand, ERAI presents statistically significant positive biases in terms of EC genesis and therefore occurrence, while capturing their intensity fairly well. On the other hand, GCMs present large negative intensity biases over the overall NA domain, particularly over the NA eastern coast. In addition, storm occurrence over the northwestern topographic regions is highly overestimated. When CRCM5 is driven by ERAI, no significant skill deterioration arises and, more importantly, all storm characteristics near areas with marked relief and over regions with large water masses are significantly improved with respect to ERAI. Conversely, in GCM-driven simulations, the added value contributed by CRCM5 is less prominent and systematic, except over western NA areas with high topography and over the western Atlantic coastlines where the most frequent and intense ECs are located. Despite this significant added value in seasonal-mean characteristics, a caveat is raised regarding the RCM's ability to handle storm temporal `seriality', as a measure of their temporal variability at a given

  13. Using field data to assess model predictions of surface and ground fuel consumption by wildfire in coniferous forests of California

    Science.gov (United States)

    Lydersen, Jamie M.; Collins, Brandon M.; Ewell, Carol M.; Reiner, Alicia L.; Fites, Jo Ann; Dow, Christopher B.; Gonzalez, Patrick; Saah, David S.; Battles, John J.

    2014-03-01

    Inventories of greenhouse gas (GHG) emissions from wildfire provide essential information to the state of California, USA, and other governments that have enacted emission reductions. Wildfires can release a substantial amount of GHGs and other compounds to the atmosphere, so recent increases in fire activity may be increasing GHG emissions. Quantifying wildfire emissions, however, can be difficult due to inherent variability in fuel loads and consumption and a lack of field data on fuel consumption by wildfire. We compare a unique set of fuel data collected immediately before and after six wildfires in coniferous forests of California to fuel consumption predictions of the first-order fire effects model (FOFEM), based on two different available fuel characterizations. We found strong regional differences in the performance of the fuel characterizations, with FOFEM overestimating fuel consumption to a greater extent in the Klamath Mountains than in the Sierra Nevada. Inaccurate fuel load inputs caused the largest differences between predicted and observed fuel consumption. Fuel classifications tended to overestimate duff load and underestimate litter load, leading to differences in predicted emissions for some pollutants. When considering total ground and surface fuels, modeled consumption was fairly accurate on average, although the range of error in estimates of plot-level consumption was very large. These results highlight the importance of fuel load inputs to the accuracy of modeled fuel consumption and GHG emissions from wildfires in coniferous forests.
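
    Comparisons like this typically come down to the mean bias and RMSE of predicted versus observed consumption; a minimal sketch with hypothetical numbers (not the study's data):

```python
import numpy as np

def bias_and_rmse(predicted, observed):
    """Mean bias (positive = overestimation, as reported for FOFEM) and RMSE
    of modeled vs. observed fuel consumption (e.g., Mg/ha)."""
    p, o = np.asarray(predicted, dtype=float), np.asarray(observed, dtype=float)
    diff = p - o
    return float(diff.mean()), float(np.sqrt((diff ** 2).mean()))

# Hypothetical plot-level consumption values, for illustration only.
bias, rmse = bias_and_rmse(predicted=[12.0, 8.0, 15.0], observed=[9.0, 7.5, 11.0])
```

A near-zero mean bias combined with a large RMSE would match the paper's finding of accurate average consumption but large plot-level errors.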

  14. Determining metal origins and availability in fluvial deposits by analysis of geochemical baselines and solid-solution partitioning measurements and modelling.

    Science.gov (United States)

    Vijver, Martina G; Spijker, Job; Vink, Jos P M; Posthuma, Leo

    2008-12-01

    Metals in floodplain soils and sediments (deposits) can originate from lithogenic and anthropogenic sources, and their availability for uptake by biota is hypothesized to depend on both origin and local sediment conditions. In criteria-based environmental risk assessments these issues are often neglected, implying that local risks are often over-estimated. Current problem definitions in river basin management tend to require a refined, site-specific focus, resulting in a need to address both aspects. This paper focuses on the determination of local environmental availabilities of metals in fluvial deposits by addressing both the origins of the metals and their partitioning over the solid and solution phases. The environmental availability of metals is assumed to be a key factor influencing exposure levels in field soils and sediments. Anthropogenic enrichments of Cu, Zn, and Pb in top layers could be distinguished from lithogenic background concentrations and described using an aluminium proxy. Cd in top layers was attributed almost fully to anthropogenic enrichment. Anthropogenic enrichments of Cu and Zn were also well represented by cold 2M HNO3 extraction of site samples; for Pb, the extractions over-estimated the enrichments. Metal partitioning was measured, and the measurements were compared to predictions generated by an empirical regression model and by a mechanistic-kinetic model. The partitioning models predicted metal partitioning in floodplain deposits within about one order of magnitude, though a large inter-sample variability was found for Pb.
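
    Solid-solution partitioning is conventionally summarized by the distribution coefficient Kd = C_solid / C_solution on a log10 scale, so "within about one order of magnitude" means |log10 Kd(pred) - log10 Kd(obs)| <= 1. A sketch with hypothetical concentrations:

```python
import math

def log_kd(c_solid_mg_kg, c_solution_mg_l):
    """log10 of the solid-solution partition coefficient Kd (l/kg)."""
    return math.log10(c_solid_mg_kg / c_solution_mg_l)

obs = log_kd(120.0, 0.05)    # hypothetical measured Cu partitioning
pred = log_kd(300.0, 0.04)   # hypothetical model prediction
within_order = abs(pred - obs) <= 1.0  # the paper's accuracy criterion
```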

  15. A Self-Determination Model of Childhood Exposure, Perceived Prevalence, Justification, and Perpetration of Intimate Partner Violence.

    Science.gov (United States)

    Neighbors, Clayton; Walker, Denise D; Mbilinyi, Lyungai F; Zegree, Joan; Foster, Dawn W; Roffman, Roger A

    2013-02-01

    The present research was designed to evaluate self-determination theory as a framework for integrating factors associated with intimate partner violence (IPV) perpetration. The proposed model suggests that childhood exposure to parental violence may influence global motivational orientations, which, in turn, result in greater cognitive biases (overestimating the prevalence of IPV and justification of IPV), which, in turn, contribute to an individual's decision to use abusive behavior. Participants included 124 men who had engaged in abusive behavior toward an intimate partner. Results provided reasonable support for the proposed model and stronger support for a revised model suggesting that controlled orientation, rather than autonomy orientation, appears to play a stronger role in the association between childhood exposure to parental violence and cognitive biases associated with abusive behavior.

  16. A physically based model of global freshwater surface temperature

    Science.gov (United States)

    van Beek, Ludovicus P. H.; Eikelboom, Tessa; van Vliet, Michelle T. H.; Bierkens, Marc F. P.

    2012-09-01

    the Arctic rivers because the timing of ice breakup is predicted too late in the year due to the lack of including a mechanical breakup mechanism. Moreover, surface water temperatures for tropical rivers were overestimated, most likely due to an overestimation of rainfall temperature and incoming shortwave radiation. The spatiotemporal variation of water temperature reveals large temperature differences between water and atmosphere for the higher latitudes, while considerable lateral transport of heat can be observed for rivers crossing hydroclimatic zones, such as the Nile, the Mississippi, and the large rivers flowing to the Arctic. Overall, our model results show promise for future projection of global surface freshwater temperature under global change.

  17. Analysis of significance of environmental factors in landslide susceptibility modeling: Case study Jemma drainage network, Ethiopia

    Directory of Open Access Journals (Sweden)

    Vít Maca

    2017-06-01

    Full Text Available The aim of the paper is to describe a methodology for calculating the significance of environmental factors in landslide susceptibility modeling and to present the results of a selected one. Part of the Jemma basin in the Ethiopian Highlands, a locality highly affected by mass-movement processes, is used as the study area. In the first part, all major factors and their influence are described briefly. The majority of the work focuses on a survey of methodologies used in other susceptibility models and on the design of our own methodology. Unlike most methods in use, this method is completely objective, so it is not possible to intervene in the results. All inputs and outputs of the method are described, as well as all stages of the calculations, and the results are illustrated with specific examples. In the study area, the most important factor for landslide susceptibility is slope; the least important is land cover. At the end of the article, a landslide susceptibility map is created. The article closes with a discussion of the results and possible improvements to the methodology.

  18. Models to capture the potential for disease transmission in domestic sheep flocks.

    Science.gov (United States)

    Schley, David; Whittle, Sophie; Taylor, Michael; Kiss, Istvan Zoltan

    2012-09-15

    Successful control of livestock diseases requires an understanding of how they spread amongst animals and between premises. Mathematical models can offer important insight into the dynamics of disease, especially when built upon experimental and/or field data. Here the dynamics of a range of epidemiological models are explored in order to determine which models perform best in capturing real-world heterogeneities at sufficient resolution. Individual-based network models are considered together with one- and two-class compartmental models, for which the final epidemic size is calculated as a function of the probability of disease transmission occurring during a given physical contact between two individuals. For numerical results, the special cases of a viral disease with a fast recovery rate (foot-and-mouth disease) and a bacterial disease with a slow recovery rate (brucellosis) amongst sheep are considered. Quantitative results from observational studies of physical contact amongst domestic sheep are applied, and results from differently structured flocks (ewes with newborn lambs, ewes with nearly weaned lambs and ewes only) are compared. These indicate that the breeding cycle leads to significant changes in the expected basic reproduction ratio of diseases. The observed heterogeneity of contacts amongst animals is best captured by full network simulations; simple compartmental models describe the key features of an outbreak but, as expected, often overestimate its speed. Here the weights of contacts are heterogeneous, with many low-weight links, but due to the well-connected nature of the networks this has little effect and differences between models remain small. These results indicate that simple compartmental models can be a useful tool for modelling real-world flocks; their applicability will be greater still for more homogeneously mixed livestock, which could be promoted by higher-intensity farming practices.
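For a one-class compartmental (SIR-type) model of the kind mentioned above, the final epidemic size follows from the standard implicit final-size relation z = 1 - exp(-R0 z), where z is the attack rate. A sketch; the R0 values below are illustrative, not the sheep-flock estimates:

```python
import math

def final_epidemic_size(r0, tol=1e-12, max_iter=1000):
    """Solve the implicit final-size relation z = 1 - exp(-r0 * z) for the
    attack rate z (the fraction ever infected) by fixed-point iteration."""
    z = 0.5  # initial guess for the attack rate
    for _ in range(max_iter):
        z_new = 1.0 - math.exp(-r0 * z)
        if abs(z_new - z) < tol:
            return z_new
        z = z_new
    return z
```

For R0 = 2 the attack rate converges to about 0.797, while below the threshold R0 = 1 the iteration collapses to zero, i.e. no major outbreak.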

  19. On the influence of cell size in physically-based distributed hydrological modelling to assess extreme values in water resource planning

    Directory of Open Access Journals (Sweden)

    M. Egüen

    2012-05-01

    Full Text Available This paper studies the influence of spatial resolution on the implementation of distributed hydrological modelling for water resource planning in Mediterranean areas. Different cell sizes were used to investigate variations in the basin hydrologic response given by the model WiMMed, developed in Andalusia (Spain), in a selected watershed. The model was calibrated on a monthly basis from the available daily flow data at the reservoir that closes the watershed, for three different cell sizes: 30, 100, and 500 m. The effects of this change on the hydrological response of the basin were analysed by comparing hydrological variables at different time scales over a 3-yr period, together with the effective values of the calibration parameters obtained for each spatial resolution. The variation in the distribution of the input parameters with spatial resolution resulted in changes in the derived hydrological networks and significant differences in other hydrological variables, both in basin-scale means and in values distributed at the cell level. Differences in the magnitude of annual and global runoff, together with other hydrological components of the water balance, became apparent. This study demonstrates the importance of choosing the appropriate spatial scale in the implementation of a distributed hydrological model to reach a balance between the quality of results and the computational cost: 30- and 100-m cell sizes could be chosen for water resource management without a significant decrease in the accuracy of the simulation, but the 500-m cell size resulted in significant overestimation of runoff and consequently could lead to uncertain decisions based on the expected availability of rainfall excess for storage in the reservoirs. Particular values of the effective calibration parameters are also provided for this hydrological model and the study area.

  20. Computer modeling of oil spill trajectories with a high accuracy method

    International Nuclear Information System (INIS)

    Garcia-Martinez, Reinaldo; Flores-Tovar, Henry

    1999-01-01

    This paper proposes a high accuracy numerical method to model oil spill trajectories using a particle-tracking algorithm. The Euler method, used to calculate oil trajectories, can give adequate solutions in most open ocean applications. However, this method may not predict accurate particle trajectories in certain highly non-uniform velocity fields near coastal zones or in river problems. Simple numerical experiments show that the Euler method may also introduce artificial numerical dispersion that could lead to overestimation of spill areas. This article proposes a fourth-order Runge-Kutta method with fourth-order velocity interpolation to calculate oil trajectories that minimise these problems. The algorithm is implemented in the OilTrack model to predict oil trajectories following the 'Nissos Amorgos' oil spill accident that occurred in the Gulf of Venezuela in 1997. Despite lack of adequate field information, model results compare well with observations in the impacted area. (Author)
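The artificial dispersion of the Euler scheme in a non-uniform velocity field, and its suppression by fourth-order Runge-Kutta stepping, can be illustrated with a minimal particle tracker in an idealized rotating current (a stand-in for the coastal fields discussed above; the step count and size are arbitrary):

```python
import math

def velocity(x, y):
    """Idealized solid-body-rotation current; a stand-in for the highly
    non-uniform coastal velocity fields discussed above."""
    return -y, x

def euler_step(x, y, h):
    u, v = velocity(x, y)
    return x + h * u, y + h * v

def rk4_step(x, y, h):
    """Classical fourth-order Runge-Kutta step in the same velocity field."""
    k1 = velocity(x, y)
    k2 = velocity(x + 0.5 * h * k1[0], y + 0.5 * h * k1[1])
    k3 = velocity(x + 0.5 * h * k2[0], y + 0.5 * h * k2[1])
    k4 = velocity(x + h * k3[0], y + h * k3[1])
    return (x + h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0,
            y + h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0)

def final_radius(step, n=1000, h=0.01):
    """Track one particle; the exact trajectory stays on the unit circle,
    so any radius growth is purely numerical dispersion."""
    x, y = 1.0, 0.0
    for _ in range(n):
        x, y = step(x, y, h)
    return math.hypot(x, y)
```

Each Euler step multiplies the radius by sqrt(1 + h^2), so after 1000 steps with h = 0.01 the particle has drifted about 5% outward, the kind of artificial spreading that inflates predicted spill areas, whereas the RK4 trajectory stays on the circle to within round-off.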

  1. Mesoscopic modeling of DNA denaturation rates: Sequence dependence and experimental comparison

    Energy Technology Data Exchange (ETDEWEB)

    Dahlen, Oda, E-mail: oda.dahlen@ntnu.no; Erp, Titus S. van, E-mail: titus.van.erp@ntnu.no [Department of Chemistry, Norwegian University of Science and Technology (NTNU), Høgskoleringen 5, Realfagbygget D3-117 7491 Trondheim (Norway)

    2015-06-21

    Using rare event simulation techniques, we calculated DNA denaturation rate constants for a range of sequences and temperatures for the Peyrard-Bishop-Dauxois (PBD) model with two different parameter sets. We studied a larger variety of sequences compared to previous studies that only consider DNA homopolymers and DNA sequences containing an equal amount of weak AT- and strong GC-base pairs. Our results show that, contrary to previous findings, an even distribution of the strong GC-base pairs does not always result in the fastest possible denaturation. In addition, we applied an adaptation of the PBD model to study hairpin denaturation for which experimental data are available. This is the first quantitative study in which dynamical results from the mesoscopic PBD model have been compared with experiments. Our results show that present parameterized models, although giving good results regarding thermodynamic properties, overestimate denaturation rates by orders of magnitude. We believe that our dynamical approach is, therefore, an important tool for verifying DNA models and for developing next generation models that have higher predictive power than present ones.
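The PBD model referred to above represents each base pair by an on-site Morse potential V(y) = D(exp(-a y) - 1)^2 in the stretching coordinate y. A sketch; the parameter magnitudes below are typical of published AT/GC values, but exact numbers differ between the parameter sets studied, so treat them as illustrative:

```python
import math

def morse(y, D=0.04, a=4.45):
    """On-site Morse potential V(y) = D * (exp(-a * y) - 1)**2 of the PBD
    model for base-pair stretching y. D and a here have the magnitudes often
    quoted for AT pairs (eV and 1/Angstrom), but the exact values are
    parameter-set dependent and should be treated as illustrative."""
    return D * (math.exp(-a * y) - 1.0) ** 2

# A stronger GC-like well (illustrative values): deeper well, higher
# dissociation plateau, hence slower denaturation of GC-rich sequences.
v_open_at = morse(10.0)                   # plateau ~ D: AT-like pair broken
v_open_gc = morse(10.0, D=0.075, a=6.9)   # plateau ~ D: GC-like pair costs more
```

The potential is zero at the closed state (y = 0) and saturates at D for large stretching, so D sets the dissociation cost that rare-event methods must overcome when computing denaturation rates.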

  2. Preclinical Models in Chimeric Antigen Receptor-Engineered T-Cell Therapy.

    Science.gov (United States)

    Siegler, Elizabeth Louise; Wang, Pin

    2018-05-01

    Cancer immunotherapy has enormous potential in inducing long-term remission in cancer patients, and chimeric antigen receptor (CAR)-engineered T cells have been largely successful in treating hematological malignancies in the clinic. CAR-T therapy has not been as effective in treating solid tumors, in part due to the immunosuppressive tumor microenvironment. Additionally, CAR-T therapy can cause dangerous side effects, including off-tumor toxicity, cytokine release syndrome, and neurotoxicity. Animal models of CAR-T therapy often fail to predict such adverse events and frequently overestimate the efficacy of the treatment. Nearly all preclinical CAR-T studies have been performed in mice, including syngeneic, xenograft, transgenic, and humanized mouse models. Recently, a few studies have used primate models to mimic clinical side effects better. To date, no single model perfectly recapitulates the human immune system and tumor microenvironment, and some models have revealed CAR-T limitations that were contradicted or missed entirely in other models. Careful model selection based on the primary goals of the study is a crucial step in evaluating CAR-T treatment. Advancements are being made in preclinical models, with the ultimate objective of providing safer, more effective CAR-T therapy to patients.

  3. Do simple models give a correct description of the wind condition in a coastal area ?

    Energy Technology Data Exchange (ETDEWEB)

    Kaellstrand, B. [Uppsala Univ. (Sweden). Dept. of Meteorology

    1996-12-01

    When the surface conditions change at a coastline, an internal boundary layer (IBL) evolves, with a wind speed and turbulence intensity influenced by these new conditions. Aircraft measurements across the coastline, performed during near-neutral conditions, are compared with a model and thirteen simpler expressions for the growth of the IBL. The majority of the expressions overestimate the IBL height, while others underestimate it. Some of the expressions give reasonable results close to the coast; the model gives good agreement even at larger distances. The vertical potential temperature gradient turned out to be an important parameter for the growth of the IBL, even under these near-neutral conditions. 21 refs, 5 figs, 1 tab
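Most simple IBL-growth expressions of the kind compared above are power laws in fetch. The coefficients below are assumptions for illustration, not any of the thirteen expressions tested, but they show how strongly such formulas can disagree downstream of the coastline:

```python
def ibl_height(fetch, z0=0.05, a=0.75, b=0.8):
    """Generic power-law internal-boundary-layer growth h = a * z0 * (x/z0)**b.
    The coefficients a, b and the roughness length z0 are assumptions for
    illustration; published expressions differ mainly in these constants."""
    return a * z0 * (fetch / z0) ** b

# Two coefficient sets applied at the same 5 km fetch differ by a factor of
# two, mirroring the over/underestimation spread reported above.
h_low = ibl_height(5000.0, a=0.5)
h_high = ibl_height(5000.0, a=1.0)
```

Because h is linear in the prefactor a, any spread in the published prefactors translates directly into the same spread in predicted IBL height at every fetch.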

  4. The Significant of Model School in Pluralistic Society of the Three Southern Border Provinces of Thailand

    Directory of Open Access Journals (Sweden)

    Haji-Awang Faisol

    2016-01-01

    The results of the study show that the significant traits of the model schools in this multi-cultural society are not merely that they perform well in administrative procedure and in the teaching and learning process; these schools are also able to carry the social norms and religious beliefs of the communities into practical life as a truly “Malay-Muslim” society. This means that the schools are able to run integrated programs under the philosophy of Islamic education in parallel with the national education aims, ensuring that the outcomes of the programs serve both sides: national education on the one hand, and the Malay-Muslim communities’ satisfaction on the other.

  5. Long-term changes in lower tropospheric baseline ozone concentrations: Comparing chemistry-climate models and observations at northern midlatitudes

    Science.gov (United States)

    Parrish, D. D.; Lamarque, J.-F.; Naik, V.; Horowitz, L.; Shindell, D. T.; Staehelin, J.; Derwent, R.; Cooper, O. R.; Tanimoto, H.; Volz-Thomas, A.; Gilge, S.; Scheel, H.-E.; Steinbacher, M.; Fröhlich, M.

    2014-05-01

    Two recent papers have quantified long-term ozone (O3) changes observed at northern midlatitude sites that are believed to represent baseline (here understood as representative of continental to hemispheric scales) conditions. Three chemistry-climate models (NCAR CAM-chem, GFDL-CM3, and GISS-E2-R) have calculated retrospective tropospheric O3 concentrations as part of the Atmospheric Chemistry and Climate Model Intercomparison Project and Coupled Model Intercomparison Project Phase 5 model intercomparisons. We present an approach for quantitative comparisons of model results with measurements for seasonally averaged O3 concentrations. There is considerable qualitative agreement between the measurements and the models, but there are also substantial and consistent quantitative disagreements. Most notably, models (1) overestimate absolute O3 mixing ratios, on average by 5 to 17 ppbv in the year 2000, (2) capture only 50% of the O3 changes observed over the past five to six decades, and little of the observed seasonal differences, and (3) capture only 25 to 45% of the observed long-term rate of change. These disagreements are significant enough to indicate that only limited confidence can be placed on estimates of present-day radiative forcing of tropospheric O3 derived from modeled historic concentration changes and on predicted future O3 concentrations. Evidently our understanding of tropospheric O3, or the incorporation of chemistry and transport processes into current chemical climate models, is incomplete. Modeled O3 trends approximately parallel estimated trends in anthropogenic emissions of NOx, an important O3 precursor, while measured O3 changes increase more rapidly than these emission estimates.
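The statement that models "capture 25 to 45% of the rate of change" can be computed as a ratio of least-squares trend slopes. A sketch with synthetic series; the numbers are invented, chosen only to mimic a high-bias, weak-trend model:

```python
def ols_slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

years = [1960, 1970, 1980, 1990, 2000]
observed = [20.0, 25.0, 30.0, 35.0, 40.0]  # ppbv, synthetic
modeled = [32.0, 33.5, 35.0, 36.5, 38.0]   # ppbv, synthetic: high bias, weak trend

captured_pct = 100.0 * ols_slope(years, modeled) / ols_slope(years, observed)
mean_bias = sum(m - o for m, o in zip(modeled, observed)) / len(observed)
```

With these invented series the model captures 30% of the observed trend while overestimating the mean by 5 ppbv, the same qualitative pattern of biases reported above.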

  6. Modelling of the spallation reaction: analysis and testing of nuclear models

    International Nuclear Information System (INIS)

    Toccoli, C.

    2000-01-01

    The spallation reaction is considered as a 2-step process. The first, very quick stage (10^-22 to 10^-29 s) corresponds to the individual interaction between the incident projectile and nucleons; this interaction is followed by a series of nucleon-nucleon collisions (intranuclear cascade) during which fast particles are emitted and the nucleus is left in a strongly excited state. In the second, slower stage (10^-18 to 10^-19 s) the nucleus is expected to de-excite completely, by evaporation of light particles (n, p, d, t, 3He, 4He) and/or fission and/or fragmentation. The HETC code has been designed to simulate spallation reactions; the simulation is based on the 2-step process and on several models of intranuclear cascades (Bertini model, Cugnon model, Helder Duarte model), while the evaporation model relies on the statistical theory of Weisskopf-Ewing. The purpose of this work is to evaluate the ability of the HETC code to predict experimental results. A methodology for the comparison of relevant experimental data with results of calculation is presented, and a preliminary estimation of the systematic error of the HETC code is proposed. The main problem of cascade models originates in the difficulty of simulating inelastic nucleon-nucleon collisions: the emission of pions is overestimated and the corresponding differential spectra are badly reproduced. The inaccuracy of cascade models has a great impact on the determination of the excitation of the nucleus at the end of the first step and, indirectly, on the distribution of final residual nuclei. The test of the evaporation model has shown that the emission of high-energy light particles is underestimated. (A.C.)

  7. Performance of the Bulgarian WRF-CMAQ modelling system for three subdomains in Europe

    Energy Technology Data Exchange (ETDEWEB)

    Syrakov, D.; Prodanova, M.; Georgieva, E.

    2015-07-01

    The air quality modelling system WRF-CMAQ running at the National Institute of Meteorology and Hydrology (NIMH) in Sofia was applied to the European domain for the year 2010 in the frame of the Air Quality Model Evaluation International Initiative (AQMEII), Phase 2. The model system was set up for a domain of 5000x5000 km2 size with horizontal resolution of 25 km. The model options used and the emission input are briefly outlined. The model performance was investigated based on graphical plots and statistical indexes obtained by the web-based model evaluation platform ENSEMBLE. A preliminary operational model evaluation for ozone and particulate matter was conducted, comparing simulated and observed concentrations at ground level in three sub-domains of Europe. The analysis shows model overestimation for ozone and model underestimation for particulate matter. The best statistical indicators are for ozone concentrations during summer, when comparing data for EMEP stations in the EU domain. The worst results are for PM10 winter concentrations in the region of the Balkan countries. (Author)

  8. Low modeled ozone production suggests underestimation of precursor emissions (especially NOx) in Europe

    Science.gov (United States)

    Oikonomakis, Emmanouil; Aksoyoglu, Sebnem; Ciarelli, Giancarlo; Baltensperger, Urs; Prévôt, André Stephan Henry

    2018-02-01

    High surface ozone concentrations, which usually occur when photochemical ozone production takes place, pose a great risk to human health and vegetation. Air quality models are often used by policy makers as tools for the development of ozone mitigation strategies. However, the modeled ozone production is often not evaluated, or not evaluated sufficiently, in ozone modeling studies. The focus of this work is to evaluate the modeled ozone production in Europe indirectly, using the ozone-temperature correlation for the summer of 2010, and to analyze its sensitivity to precursor emissions and meteorology with the regional air quality model, the Comprehensive Air Quality Model with Extensions (CAMx). The results show that the model significantly underestimates the observed high afternoon surface ozone mixing ratios (≥ 60 ppb) by 10-20 ppb and overestimates the lower ones (degradation of the model performance for the lower ozone mixing ratios. The model performance for the ozone-temperature correlation is also better when NOx emissions are doubled. In the Benelux area, however, the third scenario (where both NOx and VOC emissions are increased) leads to a better model performance. Although increasing only the traffic NOx emissions by a factor of 4 gave very similar results to doubling all NOx emissions, the first scenario is more consistent with the uncertainties reported by other studies than the latter, suggesting that high uncertainties in NOx emissions might originate mainly from the road-transport sector rather than from other sectors. The impact of meteorology was examined with three sensitivity tests: (i) increased surface temperature by 4 °C, (ii) reduced wind speed by 50 % and (iii) doubled wind speed. The first two scenarios led to a consistent increase in all surface ozone mixing ratios, thus improving the model performance for the high ozone values but significantly degrading it for the low ozone values, while the third scenario had exactly the

  9. Statistical significance of cis-regulatory modules

    Directory of Open Access Journals (Sweden)

    Smith Andrew D

    2007-01-01

    Full Text Available Abstract. Background: It is becoming increasingly important for researchers to be able to scan through large genomic regions for transcription factor binding sites or clusters of binding sites forming cis-regulatory modules. Correspondingly, there has been a push to develop algorithms for the rapid detection and assessment of cis-regulatory modules. While various algorithms for this purpose have been introduced, most are not well suited for rapid, genome-scale scanning. Results: We introduce methods designed for the detection and statistical evaluation of cis-regulatory modules, modeled as either clusters of individual binding sites or as combinations of sites with constrained organization. In order to determine the statistical significance of module sites, we first need a method to determine the statistical significance of single transcription factor binding site matches. We introduce a straightforward method of estimating the statistical significance of single site matches using a database of known promoters to produce data structures that can be used to estimate p-values for binding site matches. We next introduce a technique to calculate the statistical significance of the arrangement of binding sites within a module using a max-gap model. If the module scanned for has defined organizational parameters, the probability of the module is corrected to account for organizational constraints. The statistical significance of single site matches and the architecture of sites within the module can be combined to provide an overall estimation of the statistical significance of cis-regulatory module sites. Conclusion: The methods introduced in this paper allow for the detection and statistical evaluation of single transcription factor binding sites and cis-regulatory modules. The features described are implemented in the Search Tool for Occurrences of Regulatory Motifs (STORM) and the MODSTORM software.
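The paper's first ingredient, estimating the statistical significance of a single binding-site match from a background set of promoter-derived scores, can be sketched as an empirical p-value over scanned windows. The toy position weight matrix and sequences below are invented for illustration, not the STORM implementation:

```python
# Toy log-odds position weight matrix (PWM) for a 3-bp motif; the values and
# sequences are invented for illustration.
PWM = [
    {"A": 1.0, "C": -1.0, "G": -1.0, "T": -1.0},
    {"A": -1.0, "C": 1.0, "G": -1.0, "T": -1.0},
    {"A": -1.0, "C": -1.0, "G": 1.0, "T": -1.0},
]

def pwm_score(window):
    """Sum of per-position log-odds scores for one sequence window."""
    return sum(col[base] for col, base in zip(PWM, window))

def window_scores(seq, width):
    """Scores of every window of the given width along a sequence."""
    return [pwm_score(seq[i:i + width]) for i in range(len(seq) - width + 1)]

def empirical_p(score, background):
    """Estimate P(background score >= observed score); the +1 terms keep the
    estimate strictly positive, as with permutation p-values."""
    return (sum(1 for s in background if s >= score) + 1) / (len(background) + 1)

background = window_scores("TTTTACGTTTTACGTTTT", 3)  # promoter-like background (toy)
p_match = empirical_p(pwm_score("ACG"), background)
```

A real implementation would precompute the background score distribution from a promoter database into a lookup structure, so that each of millions of genome-scan windows gets its p-value in constant time.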

  10. Comparison of Langevin and Markov channel noise models for neuronal signal generation.

    Science.gov (United States)

    Sengupta, B; Laughlin, S B; Niven, J E

    2010-01-01

    The stochastic opening and closing of voltage-gated ion channels produce noise in neurons. The effect of this noise on neuronal performance has been modeled using either an approximate (Langevin) model based on stochastic differential equations or an exact model based on a Markov process description of channel gating. Yet whether the Langevin model accurately reproduces the channel noise produced by the Markov model remains unclear. Here we present a comparison between Langevin and Markov models of channel noise in neurons using single-compartment Hodgkin-Huxley models containing either Na+ and K+, or only K+, voltage-gated ion channels. The performance of the Langevin and Markov models was quantified over a range of stimulus statistics, membrane areas, and channel numbers. We find that, in comparison to the Markov model, the Langevin model underestimates the noise contributed by voltage-gated ion channels, and consequently overestimates information rates for both spiking and nonspiking membranes. Even with increasing numbers of channels, the difference between the two models persists. This suggests that the Langevin model may not be suitable for accurately simulating channel noise in neurons, even in simulations with large numbers of ion channels.
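The Markov-versus-Langevin comparison can be sketched for the simplest possible case: a population of independent two-state channels, where an exact Gillespie simulation of the open-channel count is compared with Euler-Maruyama integration of the corresponding chemical Langevin equation. The rates, channel number, and durations below are illustrative, not the Hodgkin-Huxley parameters of the study; in this reduced setting the two variances agree closely, which is exactly why the full multi-state gating comparison of the paper is needed:

```python
import math
import random

random.seed(1)

N = 100                   # number of identical two-state channels (illustrative)
alpha, beta = 1.0, 1.0    # opening / closing rates, arbitrary units
p_inf = alpha / (alpha + beta)   # stationary open probability
T = 500.0                 # simulated time span

def markov_variance():
    """Exact (Gillespie) simulation of the aggregate open-channel count;
    returns the time-weighted variance of the open fraction."""
    k = N // 2            # open channels, started at the stationary mean
    t = wsum = wmean = wm2 = 0.0
    while t < T:
        r_open, r_close = (N - k) * alpha, k * beta
        total = r_open + r_close
        dwell = random.expovariate(total)
        x = k / N         # open fraction held during this dwell time
        wsum += dwell     # time-weighted running variance (West's algorithm)
        delta = x - wmean
        wmean += delta * dwell / wsum
        wm2 += dwell * delta * (x - wmean)
        k += 1 if random.random() < r_open / total else -1
        t += dwell
    return wm2 / wsum

def langevin_variance(dt=0.01):
    """Euler-Maruyama integration of the diffusion (Langevin) approximation
    for the open fraction p."""
    p, t, vals = p_inf, 0.0, []
    while t < T:
        drift = alpha * (1 - p) - beta * p
        noise = math.sqrt(max((alpha * (1 - p) + beta * p) / N, 0.0))
        p += drift * dt + noise * math.sqrt(dt) * random.gauss(0.0, 1.0)
        p = min(max(p, 0.0), 1.0)   # clip impossible fractions back into [0, 1]
        vals.append(p)
        t += dt
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

theory = p_inf * (1 - p_inf) / N   # binomial variance of the open fraction
```

Both estimates land near the binomial value p(1-p)/N here; the underestimation reported above emerges once the Langevin approximation is applied to the coupled multi-state gating variables of full Hodgkin-Huxley channels.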

  11. Evaluation of water vapor distribution in general circulation models using satellite observations

    Science.gov (United States)

    Soden, Brian J.; Bretherton, Francis P.

    1994-01-01

    This paper presents a comparison of the water vapor distribution obtained from two general circulation models, the European Centre for Medium-Range Weather Forecasts (ECMWF) model and the National Center for Atmospheric Research (NCAR) Community Climate Model (CCM), with satellite observations of total precipitable water (TPW) from Special Sensor Microwave/Imager (SSM/I) and upper tropospheric relative humidity (UTH) from GOES. Overall, both models are successful in capturing the primary features of the observed water vapor distribution and its seasonal variation. For the ECMWF model, however, a systematic moist bias in TPW is noted over well-known stratocumulus regions in the eastern subtropical oceans. Comparison with radiosonde profiles suggests that this problem is attributable to difficulties in modeling the shallowness of the boundary layer and large vertical water vapor gradients which characterize these regions. In comparison, the CCM is more successful in capturing the low values of TPW in the stratocumulus regions, although it tends to exhibit a dry bias over the eastern half of the subtropical oceans and a corresponding moist bias in the western half. The CCM also significantly overestimates the daily variability of the moisture fields in convective regions, suggesting a problem in simulating the temporal nature of moisture transport by deep convection. Comparison of the monthly mean UTH distribution indicates generally larger discrepancies than were noted for TPW owing to the greater influence of large-scale dynamical processes in determining the distribution of UTH. In particular, the ECMWF model exhibits a distinct dry bias along the Intertropical Convergence Zone (ITCZ) and a moist bias over the subtropical descending branches of the Hadley cell, suggesting an underprediction in the strength of the Hadley circulation. The CCM, on the other hand, demonstrates greater discrepancies in UTH than are observed for the ECMWF model, but none that are as

  12. A comprehensive assessment of land surface-atmosphere interactions in a WRF/Urban modeling system for Indianapolis, IN

    Directory of Open Access Journals (Sweden)

    Daniel P. Sarmiento

    2017-05-01

    Full Text Available As part of the Indianapolis Flux (INFLUX experiment, the accuracy and biases of simulated meteorological fields were assessed for the city of Indianapolis, IN. The INFLUX project allows for a unique opportunity to conduct an extensive observation-to-model comparison in order to assess model errors for the following meteorological variables: latent heat and sensible heat fluxes, air temperature near the surface and in the planetary boundary layer (PBL, wind speed and direction, and PBL height. In order to test the sensitivity of meteorological simulations to different model packages, a set of simulations was performed by implementing different PBL schemes, urban canopy models (UCMs, and a model subroutine that was created in order to reduce an inherent model overestimation of urban land cover. It was found that accurately representing the amount of urban cover in the simulations reduced the biases in most cases during the summertime (SUMMER simulations. The simulations that used the BEP urban canopy model and the Bougeault & Lacarrere (BouLac PBL scheme had the smallest biases in the wintertime (WINTER simulations for most meteorological variables, with the exception being wind direction. The model configuration chosen had a larger impact on model errors during the WINTER simulations, whereas the differences between most of the model configurations during the SUMMER simulations were not statistically significant. By learning the behaviors of different PBL schemes and urban canopy models, researchers can start to understand the expected biases in certain model configurations for their own simulations and have a hypothesis as to the potential errors and biases that might occur when using a multi-physics ensemble based modeling approach.

  13. Stochastic radiative transfer model for mixture of discontinuous vegetation canopies

    International Nuclear Information System (INIS)

    Shabanov, Nikolay V.; Huang, D.; Knjazikhin, Y.; Dickinson, R.E.; Myneni, Ranga B.

    2007-01-01

    Modeling of the radiation regime of a mixture of vegetation species is a fundamental problem of the Earth's land remote sensing and climate applications. The major existing approaches, including the linear mixture model and the turbid medium (TM) mixture radiative transfer model, provide only an approximate solution to this problem. In this study, we developed the stochastic mixture radiative transfer (SMRT) model, a mathematically exact tool to evaluate the radiation regime in a natural canopy with spatially varying optical properties, that is, a canopy that exhibits a structured mixture of vegetation species and gaps. The model solves for the radiation quantities that are direct input to remote sensing/climate applications: mean radiation fluxes over the whole mixture and over individual species. The canopy structure is parameterized in the SMRT model in terms of two stochastic moments: the probability of finding species and the conditional pair-correlation of species. The second moment is responsible for the 3D radiation effects, namely, radiation streaming through gaps without interaction with vegetation and variation of the radiation fluxes between different species. We performed analytical and numerical analysis of the radiation effects, simulated with the SMRT model for three cases of canopy structure: (a) non-ordered mixture of species and gaps (TM); (b) ordered mixture of species without gaps; and (c) ordered mixture of species with gaps. The analysis indicates that the variation of radiation fluxes between different species is proportional to the variation of species optical properties (leaf albedo, density of foliage, etc.). Gaps introduce significant disturbance to the radiation regime in the canopy as their optical properties constitute a major contrast to those of any vegetation species.
The SMRT model resolves deficiencies of the major existing mixture models: ignorance of species radiation coupling via multiple scattering of photons (the linear mixture model


  15. Endogenous innovation, the economy and the environment : impacts of a technology-based modelling approach for energy-intensive industries in Germany

    International Nuclear Information System (INIS)

    Lutz, C.; Meyer, B.; Nathani, C.; Schleich, J.

    2007-01-01

    Policy simulations in environmental-economic models are influenced by the modelling of technological change. However, environmental-economic models have generally treated technological change as exogenous. This paper presented simulations with a new modelling approach in which technological change was portrayed and linked to actual production processes in 3 industry sectors in Germany, namely iron and steel; cement; and, pulp and paper. Technological choice was modelled via investments in new production process lines. The generic modelling procedure for all 3 industry sectors was presented along with an overview of the relevant sector-specific modelling results and the integration into the macro-economic model PANTA RHEI. The new modelling approach endogenizes technological change. As such, it considers that policy interventions may affect the rate and direction of technological progress. Carbon tax simulations were also performed to investigate the influence of a tax in both the new and conventional modelling approach. For the energy- and capital-intensive industries considered in this study, the conventional top-down approach overestimated the short-term possibilities to adapt to higher carbon dioxide (CO2) prices in the early years. Since the new approach includes policy-induced technological change and process shifts, it also captures the long-term effects on CO2 emissions far beyond the initial price impulse. It was concluded that the new modelling approach results in significantly higher emission reductions than the conventional approach. Therefore, the estimated costs of the climate policy are lower using the new modelling approach. 37 refs., 2 tabs., 4 figs., 1 appendix

  16. Dose-rate dependent stochastic effects in radiation cell-survival models

    International Nuclear Information System (INIS)

    Sachs, R.K.; Hlatky, L.R.

    1990-01-01

    When cells are subjected to ionizing radiation the specific energy rate (microscopic analog of dose-rate) varies from cell to cell. Within one cell, this rate fluctuates during the course of time; a crossing of a sensitive cellular site by a high energy charged particle produces many ionizations almost simultaneously, but during the interval between events no ionizations occur. In any cell-survival model one can incorporate the effect of such fluctuations without changing the basic biological assumptions. Using stochastic differential equations and Monte Carlo methods to take into account stochastic effects, we calculated the dose-survival relationships in a number of current cell survival models. Some of the models assume quadratic misrepair; others assume saturable repair enzyme systems. It was found that a significant effect of random fluctuations is to decrease the theoretically predicted amount of dose-rate sparing. In the limit of low dose-rates, neglecting the stochastic nature of specific energy rates often leads to qualitatively misleading results by drastically overestimating the surviving fraction. In the opposite limit of acute irradiation, analyzing the fluctuations in rates merely amounts to analyzing fluctuations in total specific energy via the usual microdosimetric specific energy distribution function, and neglecting fluctuations usually underestimates the surviving fraction. The Monte Carlo methods interpolate systematically between the low dose-rate and high dose-rate limits. As in other approaches, the slope of the survival curve at low dose-rates is virtually independent of dose and equals the initial slope of the survival curve for acute radiation. (orig.)
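    As a toy illustration of the acute-irradiation limit discussed above, where analyzing rate fluctuations reduces to analyzing the spread in total specific energy, the sketch below compares a deterministic linear-quadratic survival prediction at the mean dose with a Monte Carlo average over cells receiving a Poisson-distributed number of particle traversals. All parameter values (α, β, energy per traversal, mean traversal count) are made up for illustration and are not taken from the paper:

```python
import math
import random

random.seed(42)

ALPHA, BETA = 0.2, 0.02   # illustrative LQ coefficients (1/Gy, 1/Gy^2)
Z_EVENT = 0.2             # specific energy deposited per traversal, Gy (illustrative)
MEAN_EVENTS = 10          # mean number of traversals per cell
N_CELLS = 200_000

def poisson(lam):
    """Knuth's method; adequate for a small mean."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Deterministic prediction: evaluate survival at the mean dose.
mean_dose = MEAN_EVENTS * Z_EVENT
s_det = math.exp(-ALPHA * mean_dose - BETA * mean_dose ** 2)

# Stochastic prediction: average survival over cell-to-cell fluctuations
# in the total specific energy.
total = 0.0
for _ in range(N_CELLS):
    z = poisson(MEAN_EVENTS) * Z_EVENT
    total += math.exp(-ALPHA * z - BETA * z ** 2)
s_mc = total / N_CELLS

print(s_det, s_mc)
```

With these values the Monte Carlo mean survival slightly exceeds the deterministic prediction, consistent with the statement that neglecting fluctuations usually underestimates the surviving fraction for acute irradiation.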

  17. VALORA: database system for storage of significant information used in behavior modelling in the biosphere

    International Nuclear Information System (INIS)

    Valdes R, M.; Aguero P, A.; Perez S, D.; Cancio P, D.

    2006-01-01

    Nuclear and radioactive facilities can release effluents containing radionuclides into the environment, where they disperse and/or accumulate in the atmosphere, on the terrestrial surface and in surface waters. Radiological impact assessments require both qualitative and quantitative analyses. In many cases the real values of the parameters used in the modelling are not available and cannot be measured, so the evaluation requires an extensive search of the literature for plausible values of each parameter under conditions similar to those of the case under study; this work can be laborious. This paper describes the characteristics of the VALORA database system, developed to organize and automate significant information appearing in different sources (scientific or technical literature) on the parameters used in modelling the behavior of pollutants in the environment, and on the values assigned to those parameters in evaluations of potential radiological impact. VALORA allows the consultation and selection of parametric data characteristic of the different situations and processes required by the calculation model implemented. The VALORA software is one component of a set of computational tools intended to help solve dispersion and pollutant transfer models. (Author)

  18. CAUSES: Diagnosis of the Summertime Warm Bias in CMIP5 Climate Models at the ARM Southern Great Plains Site

    Science.gov (United States)

    Zhang, Chengzhu; Xie, Shaocheng; Klein, Stephen A.; Ma, Hsi-yen; Tang, Shuaiqi; Van Weverberg, Kwinten; Morcrette, Cyril J.; Petch, Jon

    2018-03-01

    All the weather and climate models participating in the Clouds Above the United States and Errors at the Surface (CAUSES) project show a summertime surface air temperature (T2m) warm bias in the region of the central United States. To understand the warm bias in long-term climate simulations, we assess the Atmospheric Model Intercomparison Project simulations from the Coupled Model Intercomparison Project Phase 5, with long-term observations mainly from the Atmospheric Radiation Measurement program Southern Great Plains site. Quantities related to the surface energy and water budget and the large-scale circulation are analyzed to identify possible factors and plausible links involved in the warm bias. The systematic warm season bias is characterized by an overestimation of T2m and underestimation of surface humidity, precipitation, and precipitable water. Accompanying the warm bias is an overestimation of absorbed solar radiation at the surface, which is due to a combination of insufficient cloud reflection, insufficient clear-sky shortwave absorption by water vapor and an underestimation in surface albedo. The bias in cloud is shown to contribute most to the radiation bias. The surface layer soil moisture impacts T2m through its control on evaporative fraction. The error in evaporative fraction is another important contributor to T2m. Similar sources of error are found in hindcasts from other CAUSES studies. In Atmospheric Model Intercomparison Project simulations, biases in meridional wind velocity associated with the low-level jet and the 500 hPa vertical velocity may also relate to the T2m bias through their control on the surface energy and water budget.
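    The evaporative fraction mentioned above is conventionally defined as the share of the turbulent surface energy flux used for evaporation, EF = LE/(H + LE). A minimal sketch with made-up flux values shows how drier soil shifts energy into sensible heating of the air, which warms T2m:

```python
def evaporative_fraction(latent_heat_flux, sensible_heat_flux):
    """EF = LE / (H + LE): fraction of turbulent surface energy used for evaporation."""
    return latent_heat_flux / (latent_heat_flux + sensible_heat_flux)

# Illustrative midday fluxes in W/m2 (made-up values):
ef_moist = evaporative_fraction(300.0, 100.0)  # wet soil: most energy evaporates water
ef_dry = evaporative_fraction(100.0, 300.0)    # dry soil: more sensible heating of the air
print(ef_moist, ef_dry)
```

An underestimated EF in a model means too much of the available energy goes into sensible heat, one pathway to the T2m warm bias discussed above.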

  19. Joint modeling of correlated binary outcomes: The case of contraceptive use and HIV knowledge in Bangladesh.

    Directory of Open Access Journals (Sweden)

    Di Fang

    Full Text Available Recent advances in statistical methods enable the study of correlation among outcomes through joint modeling, thereby addressing spillover effects. By joint modeling, we refer to simultaneously analyzing two or more different response variables emanating from the same individual. Using the 2011 Bangladesh Demographic and Health Survey, we jointly address spillover effects between contraceptive use (CUC) and knowledge of HIV and other sexually transmitted diseases. Jointly modeling these two outcomes is appropriate because certain types of contraceptive use contribute to the prevention of HIV and STDs, and knowledge and awareness of HIV and STDs typically lead to protection during sexual intercourse. In particular, we compared the differences as they pertained to the interpretive advantage of jointly modeling the spillover effects of HIV knowledge and CUC as opposed to addressing them separately. We also identified risk factors that determine contraceptive use and knowledge of HIV and STDs among women in Bangladesh. We found that by jointly modeling the correlation between HIV knowledge and contraceptive use, the importance of education decreased. The HIV prevention program had a spillover effect on CUC: what seemed to be an effect of education can be partially attributed to one's exposure to HIV knowledge. The joint model revealed a less significant impact of covariates than both separate models and standard models did. Additionally, we found a spillover effect that would otherwise have gone undiscovered had we not modeled jointly. These findings further suggest that joint modeling adequately accounts for the commonality between correlated responses and deflates effects that are otherwise overestimated when the outcomes are examined separately.

  20. Significance of the expression of matrix metalloproteinase-9 (MMP-9) in brain tissue of rat models of experimental intracerebral haemorrhage (ICH)

    International Nuclear Information System (INIS)

    Wu Jiami; Liu Shengda

    2005-01-01

    Objective: To study the relationship between the brain tissue expression of MMP-9 and brain water content in rat models of experimental ICH. Methods: Rat models of ICH were prepared with intracerebral (caudate nuclei) injection of autologous noncoagulated blood (50 μl). Animals were sacrificed at 6h, 12h, 24h, 48h, 72h, 120h, 1w and 2w, and the MMP-9 expressions at the periphery of the intracerebral hematoma were examined with immunohistochemistry. The brain water content was also determined at the same time. Control models were prepared with intracerebral sham injection of normal saline. Results: (1) In the ICH models, the number of MMP-9 positive capillaries at the periphery of the hematoma began to rise at 6h (vs that of the sham group, P<0.01) with a peak at 48h, then gradually dropped. At 1w, the number was still significantly higher than that in the sham group (P<0.01). However, there was no expression at 2w. (2) The brain water content in the ICH group was significantly increased at 12h (vs sham group, P<0.05) with a peak at 72h. At 1w, the brain water content was still significantly higher in the ICH group (P<0.01) but at 2w, the brain water content was about the same in both groups. (3) Animals injected with different amounts of blood (30 μl, 50 μl, 100 μl) showed increased expression of MMP-9 along with the increase of dose (P<0.01). (4) The MMP-9 expression was positively correlated with the brain water content (r=0.8291, P<0.05). Conclusion: In the rat models, MMP-9 expression was activated after ICH. The increase paralleled that of the amount of haemorrhage and the brain water content. It was postulated that MMP-9 enhanced the development of brain edema through degradation of blood-brain barrier components. (authors)

  1. Application of multi-scale wavelet entropy and multi-resolution Volterra models for climatic downscaling

    Science.gov (United States)

    Sehgal, V.; Lakhanpal, A.; Maheswaran, R.; Khosa, R.; Sridhar, Venkataramana

    2018-01-01

    This study proposes a wavelet-based multi-resolution modeling approach for statistical downscaling of GCM variables to mean monthly precipitation for five locations in the Krishna Basin, India. Climatic data from NCEP are used for training the proposed models (Jan.'69 to Dec.'94), which are then applied to corresponding CanCM4 GCM variables to simulate precipitation for the validation (Jan.'95-Dec.'05) and forecast (Jan.'06-Dec.'35) periods. The observed precipitation data are obtained from the India Meteorological Department (IMD) gridded precipitation product at 0.25 degree spatial resolution. This paper proposes a novel Multi-Scale Wavelet Entropy (MWE) based approach for clustering climatic variables into suitable clusters using the k-means methodology. Principal Component Analysis (PCA) is used to obtain the representative Principal Components (PC) explaining 90-95% of the variance for each cluster. A multi-resolution non-linear approach combining Discrete Wavelet Transform (DWT) and Second Order Volterra (SoV) models is used to model the representative PCs and obtain the downscaled precipitation for each downscaling location (W-P-SoV model). The results establish that wavelet-based multi-resolution SoV models perform significantly better than traditional Multiple Linear Regression (MLR) and Artificial Neural Network (ANN) based frameworks. It is observed that the proposed MWE-based clustering and subsequent PCA help reduce the dimensionality of the input climatic variables while capturing more variability than stand-alone k-means (no MWE). The proposed models perform better in estimating the number of precipitation events during the non-monsoon periods, whereas the models with clustering but without MWE overestimate the rainfall during the dry season.
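    Multi-scale wavelet entropy is commonly defined as the Shannon entropy of the relative wavelet energies across decomposition scales. The following minimal pure-Python sketch uses a Haar decomposition (not necessarily the wavelet or settings used in the study; the signals are made up) to illustrate the idea that smooth signals concentrate energy at coarse scales, giving lower entropy than irregular ones:

```python
import math
import random

def haar_dwt(signal, levels):
    """Multilevel Haar decomposition: detail bands per level plus final approximation."""
    approx, details = list(signal), []
    for _ in range(levels):
        pairs = range(len(approx) // 2)
        details.append([(approx[2*i] - approx[2*i+1]) / math.sqrt(2) for i in pairs])
        approx = [(approx[2*i] + approx[2*i+1]) / math.sqrt(2) for i in pairs]
    return details, approx

def wavelet_entropy(signal, levels=3):
    """Shannon entropy of the relative wavelet energies across scales."""
    details, approx = haar_dwt(signal, levels)
    energies = [sum(c * c for c in band) for band in details + [approx]]
    total = sum(energies)
    probs = [e / total for e in energies if e > 0.0]
    return -sum(p * math.log(p) for p in probs)

# A smooth signal concentrates energy at coarse scales (low entropy);
# an irregular signal spreads energy across scales (higher entropy).
smooth = [math.sin(2.0 * math.pi * i / 32.0) for i in range(64)]
random.seed(1)
rough = [random.gauss(0.0, 0.5) for _ in range(64)]
e_smooth, e_rough = wavelet_entropy(smooth), wavelet_entropy(rough)
print(e_smooth, e_rough)
```

Clustering on such entropy values groups variables by how their variance is distributed over time scales, which is the property the MWE-based k-means step above exploits.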

  2. Critique of the Board-Hall model for thermal detonations in UO2-Na systems

    International Nuclear Information System (INIS)

    Williams, D.C.

    1976-01-01

    The Board-Hall model for detonating thermal explosions is reviewed and some criticisms are offered in terms of its application to UO2-Na systems. The basic concept of a detonation-like thermal explosion is probably valid provided certain fundamental conditions can be met; however, Board and Hall's arguments as to just how these conditions can be met in UO2-Na mixtures appear to contain serious flaws. Even as given, the model itself predicts that a very large triggering event is needed to initiate the process. More importantly, the model for shock-induced fragmentation greatly overestimates the tendency for such fragmentation to occur. The shock-dispersive effects of mixtures are ignored. Altogether, the model's deficiencies imply that, as given, it is not applicable to LMFBR accident analysis; nonetheless, one cannot completely rule out the possibility of meeting the fundamental conditions for detonation by other mechanisms

  3. Air density dependence of the response of the PTW SourceCheck 4pi ionization chamber for 125I brachytherapy seeds.

    Science.gov (United States)

    Torres Del Río, J; Tornero-López, A M; Guirado, D; Pérez-Calatayud, J; Lallena, A M

    2017-06-01

    To analyze the air density dependence of the response of the new SourceCheck 4pi ionization chamber, manufactured by PTW. The air density dependence of three different SourceCheck 4pi chambers was studied by measuring 125I sources. Measurements were taken by varying the pressure from 746.6 to 986.6 hPa in a pressure chamber. Three different HDR 1000 Plus ionization chambers were also analyzed under similar conditions. A linear and a potential-like function of the air density were fitted to the experimental data and their success in describing the data was analyzed. The SourceCheck 4pi chamber response showed a residual dependence on the air density once the standard pressure and temperature factor was applied. The chamber response was overestimated when the air density was below that under normal atmospheric conditions. A similar dependence was found for the HDR 1000 Plus chambers analyzed. A linear function of the air density permitted a very good description of this residual dependence, better than a potential function. No significant variability was found between the different specimens of the same chamber model studied. The overestimation observed in the chamber responses once they are corrected with the standard pressure and temperature factor may represent a non-negligible ∼4% overestimation in high-altitude cities such as ours (700 m AMSL). This overestimation behaves linearly with the air density in all cases analyzed. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
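    For reference, the standard pressure-temperature factor and the air density it compensates can be computed as below. The linear residual factor is only a hypothetical sketch of the kind of density-dependent correction the study reports: the slope and reference density used here are made up, and a real coefficient would have to be fitted per chamber type:

```python
def ptp_factor(pressure_hpa, temp_c, p0_hpa=1013.25, t0_c=20.0):
    """Standard pressure-temperature correction for a vented ionization chamber."""
    return (p0_hpa / pressure_hpa) * ((273.15 + temp_c) / (273.15 + t0_c))

def air_density(pressure_hpa, temp_c):
    """Dry-air density in kg/m3 from the ideal gas law (R_specific = 287.05 J/(kg K))."""
    return (pressure_hpa * 100.0) / (287.05 * (273.15 + temp_c))

def residual_correction(density, rho_ref=1.20, slope=0.5):
    """Hypothetical linear-in-density residual factor: k = 1 + slope*(rho/rho_ref - 1).
    A density below the reference gives k < 1, reducing the overestimated reading."""
    return 1.0 + slope * (density / rho_ref - 1.0)

rho = air_density(932.0, 20.0)   # roughly 700 m AMSL at 20 C
k_total = ptp_factor(932.0, 20.0) * residual_correction(rho)
print(rho, k_total)
```

With the made-up slope above, the residual factor at this altitude is a few percent below unity, the same order as the ∼4% effect quantified in the study.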

  4. Errors in Computing the Normalized Protein Catabolic Rate due to Use of Single-pool Urea Kinetic Modeling or to Omission of the Residual Kidney Urea Clearance.

    Science.gov (United States)

    Daugirdas, John T

    2017-07-01

    The protein catabolic rate normalized to body size (PCRn) often is computed in dialysis units to obtain information about protein ingestion. However, errors can manifest when inappropriate modeling methods are used. We used a variable volume 2-pool urea kinetic model to examine the percent errors in PCRn due to use of a 1-pool urea kinetic model or after omission of residual urea clearance (Kru). When a single-pool model was used, 2 sources of error were identified. The first, dependent on the ratio of dialyzer urea clearance to urea distribution volume (K/V), resulted in a 7% inflation of the PCRn when K/V was in the range of 6 mL/min per L. A second, larger error appeared when Kt/V values were below 1.0 and was related to underestimation of urea distribution volume (due to overestimation of effective clearance) by the single-pool model. A previously reported prediction equation for PCRn was valid, but the data suggest that it should be modified using 2-pool eKt/V and V coefficients instead of single-pool values. A third source of error, this one unrelated to use of a single-pool model, namely omission of Kru, was shown to result in an underestimation of PCRn, such that each mL/min of Kru per 35 L of V caused a 5.6% underestimate in PCRn. Marked overestimation of PCRn can result from inappropriate use of a single-pool urea kinetic model, particularly when Kt/V <1.0 (as in short daily dialysis), or after omission of residual native kidney clearance. Copyright © 2017 National Kidney Foundation, Inc. Published by Elsevier Inc. All rights reserved.
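    The reported rule of thumb for the third error source can be turned into a small helper. The function below simply restates the quoted relationship (a 5.6% underestimate per mL/min of Kru per 35 L of V) and is not itself a kinetic model:

```python
def pcrn_underestimate_pct(kru_ml_min, v_liters):
    """Percent underestimate of PCRn when residual kidney urea clearance (Kru)
    is omitted, per the reported rule: 5.6% per (mL/min of Kru per 35 L of V)."""
    return 5.6 * kru_ml_min * (35.0 / v_liters)

# A patient with Kru = 2 mL/min and a urea distribution volume of 35 L:
err = pcrn_underestimate_pct(2.0, 35.0)
print(err)
```

For this example the omission of Kru understates PCRn by roughly 11%, illustrating why residual clearance should not be dropped from the computation.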

  5. Caesium-137 and strontium-90 in the food chains: model development using fallout data. [Pasture-cow-milk pathway

    Energy Technology Data Exchange (ETDEWEB)

    Haywood, S M [National Radiological Protection Board, Harwell (UK)]

    1980-11-01

    The development of the models for the movement of 137Cs and 90Sr in the pasture-cow-milk pathway is briefly discussed. Using recorded deposition rates of these radionuclides as input to the basic model, the models for 137Cs and 90Sr in their initial form were found to predict higher levels of activity in milk than those recorded. The predicted contributions of the processes of root uptake into pasture grass and the resuspension of deposited activity onto pasture surfaces were shown to be responsible for much of the overestimation. Following a re-evaluation of all transfer parameters, major improvements in the quality of fit between the model predictions and measured levels of 137Cs and 90Sr in milk were obtained.
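    A minimal sketch of the kind of compartment model used for this pathway: first-order loss of activity from pasture grass (weathering plus radioactive decay) driven by a deposition pulse, followed by an equilibrium transfer coefficient into milk. All parameter values are illustrative, not those of the NRPB model:

```python
import math

# Illustrative (made-up) parameter values; a real assessment would fit these to data.
WEATHERING_HALF_LIFE_D = 14.0      # loss of deposit from grass surfaces, days
CS137_HALF_LIFE_D = 30.1 * 365.25  # radioactive half-life of Cs-137, days
F_M = 0.008                        # transfer coefficient to milk, d/L
INTAKE_KG_D = 12.0                 # daily pasture intake of the cow, kg/d

LAM = math.log(2.0) * (1.0 / WEATHERING_HALF_LIFE_D + 1.0 / CS137_HALF_LIFE_D)

def grass_activity(n_days, deposition_bq_m2_d, yield_kg_m2=0.3, deposit_days=5):
    """Grass activity (Bq/kg) under a short deposition pulse, daily Euler steps."""
    a, out = 0.0, []
    for t in range(n_days):
        dep = deposition_bq_m2_d / yield_kg_m2 if t < deposit_days else 0.0
        a = a * (1.0 - LAM) + dep   # first-order loss plus fresh deposit
        out.append(a)
    return out

grass = grass_activity(60, deposition_bq_m2_d=10.0)
milk = [F_M * INTAKE_KG_D * g for g in grass]  # equilibrium milk concentration, Bq/L
print(max(milk), milk[-1])
```

Overstating terms such as root uptake or resuspension in a model of this form inflates the grass-activity compartment and hence the predicted milk levels, which is the kind of overestimation the re-evaluated parameters corrected.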

  6. Multilevel linear modelling of the response-contingent learning of young children with significant developmental delays.

    Science.gov (United States)

    Raab, Melinda; Dunst, Carl J; Hamby, Deborah W

    2018-02-27

    The purpose of the study was to isolate the sources of variations in the rates of response-contingent learning among young children with multiple disabilities and significant developmental delays randomly assigned to contrasting types of early childhood intervention. Multilevel, hierarchical linear growth curve modelling was used to analyze four different measures of child response-contingent learning where repeated child learning measures were nested within individual children (Level-1), children were nested within practitioners (Level-2), and practitioners were nested within the contrasting types of intervention (Level-3). Findings showed that sources of variations in rates of child response-contingent learning were associated almost entirely with type of intervention after the variance associated with differences in practitioners nested within groups was accounted for. Rates of child learning were greater among children whose existing behaviours were used as the building blocks for promoting child competence (asset-based practices) compared to children for whom the focus of intervention was promoting child acquisition of missing skills (needs-based practices). The methods of analysis illustrate a practical approach to clustered data analysis and the presentation of results in ways that highlight sources of variations in the rates of response-contingent learning among young children with multiple developmental disabilities and significant developmental delays. Copyright © 2018 The Author(s). Published by Elsevier Ltd. All rights reserved.

  7. Evaluating model parameterizations of submicron aerosol scattering and absorption with in situ data from ARCTAS 2008

    Directory of Open Access Journals (Sweden)

    M. J. Alvarado

    2016-07-01

    Full Text Available Accurate modeling of the scattering and absorption of ultraviolet and visible radiation by aerosols is essential for accurate simulations of atmospheric chemistry and climate. Closure studies using in situ measurements of aerosol scattering and absorption can be used to evaluate and improve models of aerosol optical properties without interference from model errors in aerosol emissions, transport, chemistry, or deposition rates. Here we evaluate the ability of four externally mixed, fixed size distribution parameterizations used in global models to simulate submicron aerosol scattering and absorption at three wavelengths using in situ data gathered during the 2008 Arctic Research of the Composition of the Troposphere from Aircraft and Satellites (ARCTAS) campaign. The four models are the NASA Global Modeling Initiative (GMI) Combo model, GEOS-Chem v9-02, the baseline configuration of a version of GEOS-Chem with online radiative transfer calculations (called GC-RT), and the Optical Properties of Aerosol and Clouds (OPAC) v3.1 package. We also use the ARCTAS data to perform the first evaluation of the ability of the Aerosol Simulation Program (ASP) v2.1 to simulate submicron aerosol scattering and absorption when in situ data on the aerosol size distribution are used, and examine the impact of different mixing rules for black carbon (BC) on the results. We find that the GMI model tends to overestimate submicron scattering and absorption at shorter wavelengths by 10–23 %, and that GMI has smaller absolute mean biases for submicron absorption than OPAC v3.1, GEOS-Chem v9-02, or GC-RT. However, the changes to the density and refractive index of BC in GC-RT improve the simulation of submicron aerosol absorption at all wavelengths relative to GEOS-Chem v9-02. Adding a variable size distribution, as in ASP v2.1, improves model performance for scattering but not for absorption, likely due to the assumption in ASP v2.1 that BC is present at a constant mass

  8. The importance of age dependent mortality and the extrinsic incubation period in models of mosquito-borne disease transmission and control.

    Directory of Open Access Journals (Sweden)

    Steve E Bellan

    2010-04-01

    Full Text Available Nearly all mathematical models of vector-borne diseases have assumed that vectors die at constant rates. However, recent empirical research suggests that mosquito mortality rates are frequently age dependent. This work develops a simple mathematical model to assess how relaxing the classical assumption of constant mortality affects the predicted effectiveness of anti-vectorial interventions. The effectiveness of mosquito control when mosquitoes die at age dependent rates was also compared across different extrinsic incubation periods. Compared to a more realistic age dependent model, constant mortality models overestimated the sensitivity of disease transmission to interventions that reduce mosquito survival. Interventions that reduce mosquito survival were also found to be slightly less effective when implemented in systems with shorter EIPs. Future transmission models that examine anti-vectorial interventions should incorporate realistic age dependent mortality rates.
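    The contrast between the two mortality assumptions can be sketched by comparing the probability that a newly emerged mosquito survives the extrinsic incubation period (EIP) under an exponential (constant-hazard) and a Gompertz (age-increasing-hazard) lifespan, and how each probability responds when an intervention scales the hazard. All parameter values are made up for illustration, the two lifespans are only roughly comparable, and both hazards are scaled by the same factor; this is not the paper's transmission model:

```python
import math

EIP_DAYS = 10.0   # illustrative extrinsic incubation period

def surv_constant(mu, t):
    """Survival to age t under a constant hazard mu (exponential lifespan)."""
    return math.exp(-mu * t)

def surv_gompertz(b, k, t):
    """Survival to age t under an age-increasing Gompertz hazard b*exp(k*a)."""
    return math.exp(-(b / k) * (math.exp(k * t) - 1.0))

mu = 1.0 / 12.0      # constant hazard: mean lifespan 12 days (made up)
b, k = 0.02, 0.15    # Gompertz parameters (made up)

# Probability a newly emerged mosquito survives the EIP under each assumption:
p_const = surv_constant(mu, EIP_DAYS)
p_gomp = surv_gompertz(b, k, EIP_DAYS)

# An intervention that scales the mortality hazard up by 50% in both models;
# r_* is the relative EIP-survival remaining after the intervention.
r_const = surv_constant(1.5 * mu, EIP_DAYS) / p_const
r_gomp = surv_gompertz(1.5 * b, k, EIP_DAYS) / p_gomp
print(p_const, p_gomp, r_const, r_gomp)
```

With these values the constant-hazard model predicts a larger relative drop in EIP survival than the age-dependent one, illustrating how constant-mortality models can overestimate the effect of interventions that reduce mosquito survival.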

  9. Development and validation of models for bubble coalescence and breakup

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Yiaxiang

    2013-10-08

    A generalized model for bubble coalescence and breakup has been developed, which is based on a comprehensive survey of existing theories and models. One important feature of the model is that all important mechanisms leading to bubble coalescence and breakup in a turbulent gas-liquid flow are considered. The new model is tested extensively in a 1D Test Solver and a 3D CFD code ANSYS CFX for the case of vertical gas-liquid pipe flow under adiabatic conditions, respectively. Two kinds of extensions of the standard multi-fluid model, i.e. the discrete population model and the inhomogeneous MUSIG (multiple-size group) model, are available in the two solvers, respectively. These extensions with suitable closure models such as those for coalescence and breakup are able to predict the evolution of bubble size distribution in dispersed flows and to overcome the mono-dispersed flow limitation of the standard multi-fluid model. For the validation of the model the high quality database of the TOPFLOW L12 experiments for air-water flow in a vertical pipe was employed. A wide range of test points, which cover the bubbly flow, turbulent-churn flow as well as the transition regime, is involved in the simulations. The comparison between the simulated results such as bubble size distribution, gas velocity and volume fraction and the measured ones indicates a generally good agreement for all selected test points. As the superficial gas velocity increases, bubble size distribution evolves via coalescence dominant regimes first, then breakup-dominant regimes and finally turns into a bimodal distribution. The tendency of the evolution is well reproduced by the model. However, the tendency is almost always overestimated, i.e. too much coalescence in the coalescence dominant case while too much breakup in breakup dominant ones. The reason of this problem is discussed by studying the contribution of each coalescence and breakup mechanism at different test points. 
The redistribution of the

  10. Development and validation of models for bubble coalescence and breakup

    International Nuclear Information System (INIS)

    Liao, Yiaxiang

    2013-01-01

    A generalized model for bubble coalescence and breakup has been developed, which is based on a comprehensive survey of existing theories and models. One important feature of the model is that all important mechanisms leading to bubble coalescence and breakup in a turbulent gas-liquid flow are considered. The new model is tested extensively in a 1D Test Solver and a 3D CFD code ANSYS CFX for the case of vertical gas-liquid pipe flow under adiabatic conditions, respectively. Two kinds of extensions of the standard multi-fluid model, i.e. the discrete population model and the inhomogeneous MUSIG (multiple-size group) model, are available in the two solvers, respectively. These extensions with suitable closure models such as those for coalescence and breakup are able to predict the evolution of bubble size distribution in dispersed flows and to overcome the mono-dispersed flow limitation of the standard multi-fluid model. For the validation of the model the high quality database of the TOPFLOW L12 experiments for air-water flow in a vertical pipe was employed. A wide range of test points, which cover the bubbly flow, turbulent-churn flow as well as the transition regime, is involved in the simulations. The comparison between the simulated results such as bubble size distribution, gas velocity and volume fraction and the measured ones indicates a generally good agreement for all selected test points. As the superficial gas velocity increases, bubble size distribution evolves via coalescence dominant regimes first, then breakup-dominant regimes and finally turns into a bimodal distribution. The tendency of the evolution is well reproduced by the model. However, the tendency is almost always overestimated, i.e. too much coalescence in the coalescence dominant case while too much breakup in breakup dominant ones. The reason of this problem is discussed by studying the contribution of each coalescence and breakup mechanism at different test points. 
The redistribution of the

  11. The validity of EORTC GBM prognostic calculator on survival of GBM patients in the West of Scotland.

    Science.gov (United States)

    Teo, Mario; Clark, Brian; MacKinnon, Mairi; Stewart, Willie; Paul, James; St George, Jerome

    2014-06-01

    It is now accepted that the addition of temozolomide to radiotherapy in the treatment of patients with newly diagnosed glioblastoma multiforme (GBM) significantly improves survival. In 2008, a subanalysis of the original study data was performed, and an online "GBM Calculator" was made available on the European Organisation for Research and Treatment of Cancer (EORTC) website allowing users to estimate patients' survival outcomes. We tested this calculator against actual local survival data to validate its use in our patients. Prospectively collected clinical data were analysed on 105 consecutive patients receiving concurrent chemoradiotherapy following surgical treatment of GBM between December 2004 and February 2009. Using the EORTC online calculator, survival outcomes were generated for these patients and compared with their actual survival. The median overall survival for the entire cohort was 15.3 months (range 2.8-50.5 months), with 1-year and 2-year overall survival of 65.7% and 19%, respectively. This compares with a predicted median overall survival of 21.3 months, with predicted 1-year and 2-year survival of 95% and 39.5%, respectively. Case-by-case analysis also showed that survival was overestimated in nearly 80% of patients. Subgroup analyses showed similar overestimation of patients' survival, except for calculator Model 3, which utilised MGMT status. Use of the EORTC GBM prognostic calculator would have overestimated the survival of the majority of our patients with GBM. Uncertainty exists as to the cause of the overestimation in this cohort, although local socioeconomic factors might play a role. The different calculator models yielded different outcomes, and the "best" predictor of survival for the cohort under study utilised the tumour MGMT status. We would strongly encourage similar local validity testing prior to employing the online prognostic calculator for other population groups.

  12. Photochemical model evaluation of 2013 California wild fire air quality impacts using surface, aircraft, and satellite data.

    Science.gov (United States)

    Baker, K R; Woody, M C; Valin, L; Szykman, J; Yates, E L; Iraci, L T; Choi, H D; Soja, A J; Koplitz, S N; Zhou, L; Campuzano-Jost, Pedro; Jimenez, Jose L; Hair, J W

    2018-10-01

    The Rim Fire was one of the largest wildfires in California history, burning over 250,000 acres during August and September 2013 and affecting air quality locally and regionally in the western U.S. Routine surface monitors, remotely sensed data, and aircraft based measurements were used to assess how well the Community Multiscale Air Quality (CMAQ) photochemical grid model applied at 4 and 12 km resolution represented regional plume transport and chemical evolution during this extreme wildland fire episode. Impacts were generally similar at both grid resolutions although notable differences were seen in some secondary pollutants (e.g., formaldehyde and peroxyacyl nitrate) near the Rim fire. The modeling system does well at capturing near-fire to regional scale smoke plume transport compared to remotely sensed aerosol optical depth (AOD) and aircraft transect measurements. Plume rise for the Rim fire was well characterized as the modeled plume top was consistent with remotely sensed data and the altitude of aircraft measurements, which were typically made at the top edge of the plume. Aircraft-based lidar suggests O3 downwind in the Rim fire plume was vertically stratified and tended to be higher at the plume top, while CMAQ estimated a more uniformly mixed column of O3. Predicted wildfire ozone (O3) was overestimated both at the plume top and at nearby rural and urban surface monitors. Photolysis rates were well characterized by the model compared with aircraft measurements, meaning aerosol attenuation was reasonably estimated and unlikely to be contributing to O3 overestimates at the top of the plume. Organic carbon was underestimated close to the Rim fire compared to aircraft data, but was consistent with nearby surface measurements. Periods of elevated surface PM2.5 at rural monitors near the Rim fire were not usually coincident with elevated O3. Published by Elsevier B.V.

  13. Dynamics of {sup 40,48}Ca+{sup 238}U→{sup 278,286}112{sup ⁎} reactions across the Coulomb barrier using dynamical cluster decay model

    Energy Technology Data Exchange (ETDEWEB)

    Sandhu, Kirandeep; Kaur, Gurvinder; Sharma, Manoj K., E-mail: msharma@thapar.edu

    2014-01-15

    The role of deformations and related orientations (optimum or compact) is investigated in reference to the dynamics of {sup 40,48}Ca+{sup 238}U→{sup 278,286}112{sup ⁎} reactions using the dynamical cluster decay model (DCM). The use of quadrupole and hexadecapole deformations in the decay of the compound system suggests that the degree of compactness changes with the addition of higher-order deformations. The decay cross-sections are calculated in reference to the available data, including β{sub 2}-static deformations within the ‘optimum’ orientation approach. A comparative analysis of spherical, β{sub 2}-static and dynamic, along with β{sub 4}-static, deformations is carried out at a comparable center-of-mass energy of 230 MeV for both nuclei. To address the specific role of optimized orientations in the decay of the {sup 278}112{sup ⁎} and {sup 286}112{sup ⁎} nuclei, the calculations are done using equatorial compact and polar elongated orientations. Using hot equatorial collisions, symmetric fission is observed as the dominant decay mode across the barrier, which otherwise becomes asymmetric for the cold elongated approach. The calculated cross-sections match the experimental data nicely for the hot configuration, but are overestimated for the cold (polar) orientation approach in the deep sub-barrier region. This overestimation in the deep sub-barrier region may be associated with the quasi-fission (QF) decay channel. The contribution of QF in both {sup 278}112{sup ⁎} and {sup 286}112{sup ⁎} nuclei is predicted through the overestimated cross-sections, being larger for the neutron-deficient {sup 278}112{sup ⁎} nucleus, in agreement with experimental results. A larger barrier modification ΔV{sub B} is observed at sub-barrier energies for both isotopes of the Z=112 nucleus. Also, the contribution of ΔV{sub B} at lower incident energies is relatively higher for the cold elongated polar configuration than for the hot compact equatorial configuration, causing overestimation of cross-sections.

  14. Road traffic impact on urban water quality: a step towards integrated traffic, air and stormwater modelling.

    Science.gov (United States)

    Fallah Shorshani, Masoud; Bonhomme, Céline; Petrucci, Guido; André, Michel; Seigneur, Christian

    2014-04-01

    Methods for simulating air pollution due to road traffic and the associated effects on stormwater runoff quality in an urban environment are examined, with particular emphasis on the integration of the various simulation models into a consistent modelling chain. To that end, models for traffic, pollutant emissions, atmospheric dispersion and deposition, and stormwater contamination are reviewed. The present study focuses on the implementation of a modelling chain for an actual urban case study: the contamination of water runoff by cadmium (Cd), lead (Pb), and zinc (Zn) in the Grigny urban catchment near Paris, France. First, traffic emissions are calculated from traffic inputs using the COPERT4 methodology. Next, the atmospheric dispersion of pollutants is simulated with the Polyphemus line-source model, and pollutant deposition fluxes in the different subcatchment areas are calculated. Finally, the SWMM water quantity and quality model is used to estimate the concentrations of pollutants in stormwater runoff. The simulation results are compared to mass flow rates and concentrations of Cd, Pb and Zn measured at the catchment outlet. The contribution of local traffic to stormwater contamination is estimated to be significant for Pb and, to a lesser extent, for Zn and Cd; however, the Pb contribution is most likely overestimated due to outdated emission factors. The results demonstrate the importance of treating distributed traffic emissions from major roadways explicitly, since the impact of these sources on concentrations at the catchment outlet is underestimated when those traffic emissions are spatially averaged over the catchment area.

  15. Pharmacological kynurenine 3-monooxygenase enzyme inhibition significantly reduces neuropathic pain in a rat model.

    Science.gov (United States)

    Rojewska, Ewelina; Piotrowska, Anna; Makuch, Wioletta; Przewlocka, Barbara; Mika, Joanna

    2016-03-01

    Recent studies have highlighted the involvement of the kynurenine pathway in the pathology of neurodegenerative diseases, but the role of this system in neuropathic pain requires further extensive research. Therefore, the aim of our study was to examine the role of kynurenine 3-monooxygenase (Kmo), an important enzyme in this pathway, in a rat model of neuropathy after chronic constriction injury (CCI) to the sciatic nerve. For the first time, we demonstrated that the injury-induced increase in Kmo mRNA levels in the spinal cord and the dorsal root ganglia (DRG) was reduced by chronic administration of the microglial inhibitor minocycline and that this effect paralleled a decrease in the intensity of neuropathy. Further, minocycline administration alleviated the lipopolysaccharide (LPS)-induced upregulation of Kmo mRNA expression in microglial cell cultures. Moreover, we demonstrated that not only indirect inhibition of Kmo using minocycline but also direct inhibition using Kmo inhibitors (Ro61-6048 and JM6) decreased neuropathic pain intensity on the third and seventh days after CCI. Chronic Ro61-6048 administration diminished the protein levels of IBA-1, IL-6, IL-1beta and NOS2 in the spinal cord and/or the DRG. Both Kmo inhibitors potentiated the analgesic properties of morphine. In summary, our data suggest that, in a neuropathic pain model, inhibiting Kmo function significantly reduces pain symptoms and enhances the effectiveness of morphine. The results of our studies show that the kynurenine pathway is an important mediator of neuropathic pain pathology and indicate that Kmo represents a novel pharmacological target for the treatment of neuropathy. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. Top-down Estimates of Isoprene Emissions in Australia Inferred from OMI Satellite Data.

    Science.gov (United States)

    Greenslade, J.; Fisher, J. A.; Surl, L.; Palmer, P. I.

    2017-12-01

    Australia is a global hotspot for biogenic isoprene emission factors predicted by process-based models such as the Model of Emissions of Gases and Aerosols from Nature (MEGAN). It is also prone to increasingly frequent temperature extremes that can drive episodically high emissions. Estimates of biogenic isoprene emissions from Australia are poorly constrained, with the frequently used MEGAN model overestimating emissions by a factor of 4-6 in some areas. Evaluating MEGAN and other models in Australia is difficult due to sparse measurements of emissions and their ensuing chemical products. In this talk, we will describe efforts to better quantify Australian isoprene emissions using top-down estimates based on formaldehyde (HCHO) observations from the OMI satellite instrument, combined with modelled isoprene-to-HCHO yields obtained from the GEOS-Chem chemical transport model. The OMI-based estimates are evaluated using in situ observations from field campaigns conducted in southeast Australia. We also investigate the impact of the horizontal resolution used for the yield calculations on the inferred emissions, particularly in regions on the boundary between low- and high-NOx chemistry. The prevalence of fire smoke plumes roughly halves the available satellite dataset over Australia for much of the year; however, seasonal averages remain robust. Preliminary results show that the top-down isoprene emissions are lower than MEGAN estimates by up to 90% in summer. The overestimates are greatest along the eastern coast, including areas surrounding Australia's major population centres of Sydney, Melbourne, and Brisbane. The coarse horizontal resolution of the model significantly affects the emissions estimates, as many biogenic emitting regions lie along narrow coastal stretches. Our results confirm previous findings that the MEGAN biogenic emission model is poorly calibrated for the Australian environment and suggest that chemical transport models driven by MEGAN are likely
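
The top-down scaling described above can be sketched in its simplest form: a prior (MEGAN) emission estimate is adjusted by the ratio of observed to modeled HCHO columns. This is an illustrative simplification with invented numbers, not the GEOS-Chem inversion itself (which also accounts for isoprene-to-HCHO yields, background HCHO, and transport smearing):

```python
# Hypothetical sketch of a column-ratio top-down emission adjustment.
# All variable names and values are invented for illustration.

def top_down_emission(e_prior, omega_obs, omega_mod):
    """Scale a prior emission estimate by the observed/modeled HCHO column ratio."""
    if omega_mod <= 0:
        raise ValueError("modeled column must be positive")
    return e_prior * (omega_obs / omega_mod)

# A 90% reduction relative to MEGAN corresponds (all else equal) to an
# observed HCHO column about one tenth of the modeled one:
e_megan = 10.0                                   # prior emission, arbitrary units
e_top_down = top_down_emission(e_megan, omega_obs=0.1e16, omega_mod=1.0e16)
print(e_top_down)                                # 90% lower than the prior
```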

  17. The true meaning of 'exotic species' as a model for genetically engineered organisms.

    Science.gov (United States)

    Regal, P J

    1993-03-15

    The exotic or non-indigenous species model for deliberately introduced genetically engineered organisms (GEOs) has often been misunderstood or misrepresented. Yet proper comparisons of ecologically competent GEOs to the patterns of adaptation of introduced species have been highly useful among scientists attempting to determine how to apply biological theory to specific GEO risk issues, and to define the probabilities and scale of ecological risks with GEOs. In truth, the model predicts that most projects may be environmentally safe, but that a significant minority may be very risky. The model includes a history of institutional follies that should also remind workers of the danger of oversimplifying biological issues, and warn against repeating the sorts of professional misjudgements that have too often been made in introducing organisms to new settings. We once expected that the non-indigenous species model would be refined by further analysis of species eruptions, ecological genetics, and the biology of select GEOs themselves, as outlined. But there has been political resistance to the effective regulation of GEOs, and a bureaucratic tendency to focus research agendas on narrow data collection. Thus there has been too little promotion by responsible agencies of studies to provide the broad conceptual base for truly science-based regulation. In its presently unrefined state, the non-indigenous species comparison would overestimate the risks of GEOs if it were (mis)applied to genetically disrupted, ecologically crippled GEOs, but in some cases of wild-type organisms with novel engineered traits, it could greatly underestimate the risks. Further analysis is urgently needed.

  18. On testing the significance of atmospheric response to smoke from the Kuwaiti oil fires using the Los Alamos general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Kao, C.J.; Glatzmaier, G.A.; Malone, R.C. [Los Alamos National Laboratory, Los Alamos, NM (United States)

    1994-07-01

    The response of the Los Alamos atmospheric general circulation model to the smoke from the Kuwaiti oil fires set in 1991 is examined. The model has an interactive soot transport module that uses a Lagrangian tracer particle scheme. The statistical significance of the results is evaluated using a methodology based on the classic Student's t test. Among various estimated smoke emission rates and associated visible absorption coefficients, the worst- and best-case scenarios are selected. In each of the scenarios, an ensemble of ten 30-day June simulations is conducted with the smoke and compared to the same ten June simulations without the smoke. The results of the worst-case scenario show that a statistically significant wave train pattern propagates eastward-poleward downstream from the source. The signals compare favorably with the observed climate anomalies in summer 1991, albeit some possible El Nino-Southern Oscillation effects were involved in the actual climate. The results of the best-case (i.e., least-impact) scenario show that the significance is rather small but that its general pattern is quite similar to that in the worst-case scenario.

  19. On testing the significance of atmospheric response to smoke from the Kuwaiti oil fires using the Los Alamos general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Chih-Yue Jim Kao; Glatzmaier, G.A.; Malone, R.C. [Los Alamos National Lab., NM (United States)

    1994-07-20

    The response of the Los Alamos atmospheric general circulation model to the smoke from the Kuwaiti oil fires set in 1991 is examined. The model has an interactive soot transport module that uses a Lagrangian tracer particle scheme. The statistical significance of the results is evaluated using a methodology based on the classic Student's t test. Among various estimated smoke emission rates and associated visible absorption coefficients, the worst- and best-case scenarios are selected. In each of the scenarios, an ensemble of ten 30-day June simulations is conducted with the smoke and compared to the same ten June simulations without the smoke. The results of the worst-case scenario show that a statistically significant wave train pattern propagates eastward-poleward downstream from the source. The signals compare favorably with the observed climate anomalies in summer 1991, albeit some possible El Nino-Southern Oscillation effects were involved in the actual climate. The results of the best-case (i.e., least-impact) scenario show that the significance is rather small but that its general pattern is quite similar to that in the worst-case scenario. 24 refs., 5 figs.
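
The ensemble significance test described in these two records can be sketched as a pooled two-sample Student's t statistic comparing two ten-member ensembles (e.g. with-smoke vs. no-smoke June means at one grid point). The values below are invented, not model output:

```python
# Minimal sketch of a pooled-variance two-sample t test over two equal-size
# ensembles. Ensemble values are fabricated for illustration.
from statistics import mean, variance

def t_statistic(a, b):
    """Pooled-variance two-sample Student's t statistic."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

smoke   = [15.2, 14.8, 15.5, 15.1, 14.9, 15.3, 15.0, 15.4, 14.7, 15.2]
control = [14.1, 14.3, 13.9, 14.2, 14.0, 14.4, 13.8, 14.2, 14.1, 14.0]
t = t_statistic(smoke, control)
# |t| well above the ~2.1 critical value at 18 degrees of freedom (5% level)
# would mark this grid point as statistically significant.
print(round(t, 2))
```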

  20. Stage-specific predictive models for breast cancer survivability.

    Science.gov (United States)

    Kate, Rohit J; Nadig, Ramya

    2017-01-01

    Survivability rates vary widely among the various stages of breast cancer. Although machine learning models built in the past to predict breast cancer survivability were given stage as one of the features, they were not trained or evaluated separately for each stage. Our objective was to investigate whether there are differences in the performance of machine learning models trained and evaluated on different stages for predicting breast cancer survivability. Using three different machine learning methods, we built models to predict breast cancer survivability separately for each stage and compared them with the traditional joint models built for all stages. We also evaluated the models separately for each stage and jointly for all stages. Our results show that the most suitable model to predict survivability for a specific stage is the model trained on that particular stage. In our experiments, using additional examples from other stages during training did not help; in fact, it made performance worse in some cases. The most important features for predicting survivability were also found to differ between stages. By evaluating the models separately on different stages, we found that performance varied widely across them. We also demonstrate that evaluating predictive models for survivability on all stages together, as was done in the past, is misleading because it overestimates performance. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
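
The evaluation pitfall the abstract identifies, pooled evaluation overstating performance for the harder stages, can be illustrated with a toy calculation on fabricated labels:

```python
# Toy illustration (invented data): an easy stage dominates the pooled
# accuracy, masking poor performance on a rare, hard stage.

def accuracy(pairs):
    """Fraction of (true, predicted) pairs that match."""
    return sum(1 for y, p in pairs if y == p) / len(pairs)

# (true survival label, predicted label) pairs per stage -- fabricated
stage1 = [(1, 1)] * 90 + [(0, 1)] * 10      # easy, common stage: 90% correct
stage4 = [(0, 0)] * 6 + [(1, 0)] * 4        # hard, rare stage: 60% correct

pooled = accuracy(stage1 + stage4)
# Pooled accuracy sits near the easy stage's 90%, hiding the 60% stage.
print(accuracy(stage1), accuracy(stage4), round(pooled, 3))
```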

  1. Pressure balance inconsistency exhibited in a statistical model of magnetospheric plasma

    Science.gov (United States)

    Garner, T. W.; Wolf, R. A.; Spiro, R. W.; Thomsen, M. F.; Korth, H.

    2003-08-01

    While quantitative theories of plasma flow from the magnetotail to the inner magnetosphere typically assume adiabatic convection, it has long been understood that these convection models tend to overestimate the plasma pressure in the inner magnetosphere. This phenomenon is called the pressure crisis or the pressure balance inconsistency. In order to analyze it in a new and more detailed manner we utilize an empirical model of the proton and electron distribution functions in the near-Earth plasma sheet (-50 RE attributed to gradient/curvature drift for large isotropic energy invariants but not for small invariants. The tailward gradient of the distribution function indicates a violation of the adiabatic drift condition in the plasma sheet. It also confirms the existence of a "number crisis" in addition to the pressure crisis. In addition, plasma sheet pressure gradients, when crossed with the gradient of flux tube volume computed from the [1989] magnetic field model, indicate Region 1 currents on the dawn and dusk sides of the outer plasma sheet.

  2. Analytical study of performance evaluation for seismic retrofitting of reinforced concrete building using 3D dynamic nonlinear finite element analysis

    Science.gov (United States)

    Sato, Yuichi; Kajihara, Shinichi; Kaneko, Yoshio

    2011-06-01

    This paper presents three-dimensional finite element (FE) analyses of an all-frame model of a three-story reinforced concrete (RC) building damaged in the 1999 Taiwan Chi-Chi Earthquake. Non-structural brick walls of the building acted as a seismic-resistant element although their contributions were neglected in the design. Hence, the entire structure of a typical frame was modeled, and static and dynamic nonlinear analyses were conducted to evaluate the contributions of the brick walls. However, the analyses considerably overestimated the structural response due to coarse mesh discretizations, which were unavoidable given the limited computer resources. This study corrects the overestimations by modifying (1) the tensile strengths and (2) the shear stiffness reduction factors of concrete and brick. The results indicate that the brick walls improve frame strength, although shear failures occur in columns shortened by spandrel walls. The effectiveness of three types of seismic retrofit is then evaluated: the maximum drift of the first floor is reduced by 89.3%, 94.8%, and 27.5% by the Steel-confined, Full-RC, and Full-brick models, respectively. Finally, feasibility analyses of models including soils were conducted; these indicated that the soils elongate the natural period of the building models, although no other significant differences were observed.

  3. Cloud-resolving model intercomparison of an MC3E squall line case: Part I-Convective updrafts: CRM Intercomparison of a Squall Line

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Jiwen [Pacific Northwest National Laboratory, Richland Washington USA; Han, Bin [Pacific Northwest National Laboratory, Richland Washington USA; School of Atmospheric Sciences, Nanjing University, Nanjing China; Varble, Adam [Department of Atmospheric Sciences, University of Utah, Salt Lake City Utah USA; Morrison, Hugh [National Center for Atmospheric Research, Boulder Colorado USA; North, Kirk [Department of Atmospheric and Oceanic Sciences, McGill University, Montreal Quebec USA; Kollias, Pavlos [Department of Atmospheric and Oceanic Sciences, McGill University, Montreal Quebec USA; School of Marine and Atmospheric Sciences, Stony Brook University, Stony Brook New York USA; Chen, Baojun [School of Atmospheric Sciences, Nanjing University, Nanjing China; Dong, Xiquan [Department of Hydrology and Atmospheric Sciences, University of Arizona, Tucson Arizona USA; Giangrande, Scott E. [Environmental and Climate Sciences Department, Brookhaven National Laboratory, Upton New York USA; Khain, Alexander [The Institute of the Earth Science, The Hebrew University of Jerusalem, Jerusalem Israel; Lin, Yun [Department of Atmospheric Sciences, Texas A& M University, College Station Texas USA; Mansell, Edward [NOAA/OAR/National Severe Storms Laboratory, Norman Oklahoma USA; Milbrandt, Jason A. [Meteorological Research Division, Environment and Climate Change Canada, Dorval Canada; Stenz, Ronald [Department of Atmospheric Sciences, University of North Dakota, Grand Forks North Dakota USA; Thompson, Gregory [National Center for Atmospheric Research, Boulder Colorado USA; Wang, Yuan [Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena California USA

    2017-09-06

    A constrained model intercomparison study of a midlatitude mesoscale squall line is performed using the Weather Research and Forecasting (WRF) model at 1-km horizontal grid spacing with eight cloud microphysics schemes to understand the specific processes that lead to the large spread of simulated cloud and precipitation at cloud-resolving scales, with this paper focusing on convective cores. Various observational data are employed to evaluate the baseline simulations. All simulations tend to produce a wider convective area than observed but a much narrower stratiform area, with most bulk schemes overpredicting radar reflectivity. The magnitudes of the virtual potential temperature drop, the pressure rise, and the peak wind speed associated with the passage of the gust front are significantly smaller than observed, suggesting the simulated cold pools are weaker. Simulations also overestimate the vertical velocity and reflectivity (Ze) in convective cores compared with observational retrievals. The modeled updraft velocity and precipitation have a significant spread across the eight schemes even in this strongly dynamically driven system. The spread of updraft velocity is attributed to the combined effects of the low-level perturbation pressure gradient determined by cold pool intensity and of buoyancy, which is not necessarily well correlated with differences in latent heating among the simulations. Variability of updraft velocity between schemes is also related to differences in ice-related parameterizations, whereas precipitation variability increases in no-ice simulations because of scheme differences in collision-coalescence parameterizations.

  4. A more robust model of the biodiesel reaction, allowing identification of process conditions for significantly enhanced rate and water tolerance.

    Science.gov (United States)

    Eze, Valentine C; Phan, Anh N; Harvey, Adam P

    2014-03-01

    A more robust kinetic model of base-catalysed transesterification than the conventional reaction scheme has been developed. All the relevant reactions in the base-catalysed transesterification of rapeseed oil (RSO) to fatty acid methyl ester (FAME) were investigated experimentally and validated numerically in a model implemented in MATLAB. It was found that including the saponification of RSO and FAME side reactions and hydroxide-methoxide equilibrium data explained various effects that are not captured by simpler conventional models. Both the experiments and the modelling showed that the "biodiesel reaction" can reach the desired level of conversion (>95%) in less than 2 min. Given the right set of conditions, the transesterification can reach over 95% conversion before the saponification losses become significant. This means that the reaction must be performed in a reactor exhibiting good mixing and good control of residence time, and the reaction mixture must be quenched rapidly as it leaves the reactor. Copyright © 2014 Elsevier Ltd. All rights reserved.
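
A minimal sketch of the kind of kinetics described above: a lumped pseudo-first-order transesterification step competing with a slower saponification loss, integrated by forward Euler. The lumping and the rate constants are invented for illustration; this is not the authors' MATLAB model:

```python
# Hypothetical lumped kinetics: TG -> FAME (fast), FAME -> soap (slow).
# Rate constants k_t, k_s (1/min) are invented for illustration.

def simulate(k_t=3.0, k_s=0.01, dt=0.001, t_end=2.0):
    """Return (FAME fraction, soap fraction) after t_end minutes of reaction."""
    tg, fame, soap = 1.0, 0.0, 0.0        # normalized concentrations
    t = 0.0
    while t < t_end:
        r_t = k_t * tg                    # transesterification (pseudo-first-order)
        r_s = k_s * fame                  # saponification loss of FAME
        tg += -r_t * dt
        fame += (r_t - r_s) * dt
        soap += r_s * dt
        t += dt
    return fame, soap

fame, soap = simulate()
# With a fast main step and slow side reaction, >95% conversion is reached
# within 2 min before soap losses become significant, as the abstract reports.
print(fame > 0.95, soap < 0.05)
```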

  5. Possible Overestimation of Surface Disinfection Efficiency by Assessment Methods Based on Liquid Sampling Procedures as Demonstrated by In Situ Quantification of Spore Viability

    Science.gov (United States)

    Grand, I.; Bellon-Fontaine, M.-N.; Herry, J.-M.; Hilaire, D.; Moriconi, F.-X.; Naïtali, M.

    2011-01-01

    The standard test methods used to assess the efficiency of a disinfectant applied to surfaces are often based on counting the microbial survivors sampled in a liquid, but total cell removal from surfaces is seldom achieved. One might therefore wonder whether evaluations of microbial survivors in liquid-sampled cells are representative of the levels of survivors in whole populations. The present study was thus designed to determine the “damaged/undamaged” status induced by a peracetic acid disinfection for Bacillus atrophaeus spores deposited on glass coupons directly on this substrate and to compare it to the status of spores collected in liquid by a sampling procedure. The method utilized to assess the viability of both surface-associated and liquid-sampled spores included fluorescence labeling with a combination of Syto 61 and Chemchrome V6 dyes and quantifications by analyzing the images acquired by confocal laser scanning microscopy. The principal result of the study was that the viability of spores sampled in the liquid was found to be poorer than that of surface-associated spores. For example, after 2 min of peracetic acid disinfection, less than 17% ± 5% of viable cells were detected among liquid-sampled cells compared to 79% ± 5% or 47% ± 4%, respectively, when the viability was evaluated on the surface after or without the sampling procedure. Moreover, assessments of the survivors collected in the liquid phase, evaluated using the microscopic method and standard plate counts, were well correlated. Evaluations based on the determination of survivors among the liquid-sampled cells can thus overestimate the efficiency of surface disinfection procedures. PMID:21742922
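
The size of the overestimation can be made concrete with the abstract's own figures: 17% viability among liquid-sampled spores versus 79% measured in situ after the sampling procedure. Expressed as apparent log reductions (a common disinfection-efficiency measure, used here purely illustratively):

```python
# Converting surviving viable fractions into apparent log10 reductions.
# The 0.17 and 0.79 fractions are taken from the abstract; the log-reduction
# framing is an illustrative convention, not the paper's own metric.
from math import log10

def log_reduction(viable_fraction):
    """Apparent log10 reduction implied by a surviving viable fraction."""
    return -log10(viable_fraction)

liquid = 0.17    # viable fraction among liquid-sampled spores (2 min PAA)
surface = 0.79   # viable fraction among surface-associated spores
# The liquid-based assay implies a much larger apparent kill than the
# in situ measurement, i.e. it overestimates disinfection efficiency.
print(round(log_reduction(liquid), 2), round(log_reduction(surface), 2))
```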

  6. Do older adults perceive postural constraints for reach estimation?

    Science.gov (United States)

    Cordova, Alberto; Gabbard, Carl

    2014-01-01

    BACKGROUND/STUDY CONTEXT: Recent evidence indicates that older persons have difficulty mentally representing intended movements. Furthermore, in an estimation-of-reach paradigm using motor imagery, a form of mental representation, older persons significantly overestimated their ability compared with young adults. The authors tested the notion that older adults may also have difficulty perceiving the postural constraints associated with reach estimation. The authors compared young (Mage = 22 years) and older (Mage = 67 years) adults on reach estimation while seated and in a more posturally demanding standing-and-leaning-forward position. The expectation was a significant postural effect in the standing condition, as evidenced by reduced overestimation. While there was no difference between groups in the seated condition (both overestimated), in the standing condition older adults underestimated whereas the younger group once again overestimated. From one perspective, these results show that older adults do perceive postural constraints in light of their own physical capabilities. That is, that group perceived greater postural demands in the standing posture and elected to program a more conservative strategy, resulting in underestimation.

  7. A computational model of pile vertical vibration in saturated soil based on the radial disturbed zone of pile driving

    International Nuclear Information System (INIS)

    Li Qiang; Shi Qian; Wang Kuihua

    2010-01-01

    In this study, a simplified computational model of pile vertical vibration was developed. The model was based on the inhomogeneous radial disturbed zone of soil in the vicinity of a pile disturbed by pile driving. The model contained two regions: the disturbed zone, located in the immediate vicinity of the pile, and the undisturbed region external to it. In the model, the excess pore pressure in the disturbed zone caused by pile driving was assumed to follow a logarithmic distribution. The stress-strain relationships in the disturbed zone were based on the principle of effective stress under plane-strain conditions. The external zone was governed by the poroelastic theory proposed by Biot. With the use of a variable separation method, an analytical solution in the frequency domain was obtained. Furthermore, a semi-analytical solution was attained by employing a numerical convolution method. Numerical results in the frequency and time domains indicated that the equivalent radius of the disturbed zone and the ratio of excess pore pressure have a significant effect on pile dynamic response. Actual interactions between pile and soil will be weaker due to the presence of the radial disturbed zone caused by pile driving. Consequently, the ideal undisturbed model overestimates the pile-soil interaction, whereas the proposed model reflects it better than the perfect-contact model. Numerical results indicate that the model can account for the time effect of pile dynamic tests.
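
One plausible reading of the assumed logarithmic excess-pore-pressure distribution (the record does not give the exact form, so this profile is hypothetical) is a decay from the pile shaft to the edge of the disturbed zone:

```python
# Hypothetical logarithmic excess-pore-pressure profile across the disturbed
# zone: u = u_max at the pile shaft, u = 0 at the zone boundary. All names
# and numbers are illustrative assumptions, not the paper's formulation.
from math import log

def excess_pore_pressure(r, r_pile, r_zone, u_max):
    """Log-decay profile of excess pore pressure at radius r (r_pile <= r <= r_zone)."""
    if not r_pile <= r <= r_zone:
        raise ValueError("r outside the disturbed zone")
    return u_max * log(r_zone / r) / log(r_zone / r_pile)

u_shaft = excess_pore_pressure(0.3, r_pile=0.3, r_zone=3.0, u_max=100.0)
u_edge = excess_pore_pressure(3.0, r_pile=0.3, r_zone=3.0, u_max=100.0)
print(u_shaft, u_edge)   # full excess pressure at the shaft, zero at the boundary
```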

  8. Thermal comfort in residential buildings - Failure to predict by Standard model

    Energy Technology Data Exchange (ETDEWEB)

    Becker, R. [Faculty of Civil and Environmental Engineering, Technion - Israel Institute of Technology, Rabin Building, Technion City, Haifa 32000 (Israel); Paciuk, M. [National Building Research Institute, Technion - IIT, Haifa 32000 (Israel)

    2009-05-15

    A field study, conducted in 189 dwellings in winter and 205 dwellings in summer, included measurement of hygro-thermal conditions and documentation of occupant responses and behavior patterns. Both samples included passive as well as actively space-conditioned dwellings. Predicted mean votes (PMV) computed using Fanger's model yielded significantly lower-than-reported thermal sensation (TS) values, especially for the winter heated and summer air-conditioned groups. The basic model assumption of a proportional relationship between thermal response and thermal load proved to be inadequate, with actual thermal comfort achieved at substantially lower loads than predicted. Survey results also refuted the model's second assumption, that symmetrical responses in the negative and positive directions of the scale represent similar comfort levels. Results showed that the model's curve of predicted percentage of dissatisfied (PPD) substantially overestimated the actual percentage of dissatisfied within the partial group of respondents who voted TS > 0 in winter, as well as within the partial group who voted TS < 0 in summer. Analyses of sensitivity to possible survey-related inaccuracy factors (metabolic rate, clothing thermal resistance) did not explain the systematic discrepancies. These discrepancies highlight the role of contextual variables (local climate, expectations, available control) in thermal adaptation in actual settings. The collected data were analyzed statistically to establish baseline data for local standardized thermal and energy calculations. A 90% satisfaction criterion yielded 19.5 °C and 26 °C as limit values for passive winter and summer design conditions, respectively, while during active conditioning periods, set-point temperatures of 21.5 °C and 23 °C should be assumed for winter and summer, respectively. (author)
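
The PPD curve referred to above follows the standard Fanger relation as given in ISO 7730; a short sketch shows why a 90% satisfaction criterion corresponds to an absolute PMV of roughly 0.5:

```python
# Predicted percentage of dissatisfied (PPD) as a function of predicted mean
# vote (PMV), per the standard Fanger / ISO 7730 relation.
from math import exp

def ppd(pmv):
    """PPD (%) from PMV: 100 - 95*exp(-0.03353*PMV^4 - 0.2179*PMV^2)."""
    return 100.0 - 95.0 * exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

print(round(ppd(0.0), 1))   # minimum PPD of 5% at neutral PMV
print(round(ppd(0.5), 1))   # ~10%, i.e. roughly the 90% satisfaction limit
```

Note that the field study above found this symmetric, load-proportional curve overestimated actual dissatisfaction; the sketch only reproduces the model being critiqued.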

  9. Computer simulation model of reflex e-beam systems coupled to an external circuit

    International Nuclear Information System (INIS)

    Jungwirth, K.; Stavinoha, P.

    1982-01-01

    The dynamics of ions and relativistic electrons in various high-voltage reflexing systems (reflex diodes and triodes) were investigated numerically by means of the 1 1/2-dimensional PIC simulation model OREBIA. Its improved version, OREBIA-REX, also accounts for the coupling of the system to an external power-source circuit, thus yielding the currents and applied voltage self-consistently. Various modes of operation of reflex diodes and triodes were studied using both models. It is shown that neglecting the influence of the external circuit can lead to severe overestimation of both ion currents and electron accumulation rates. In coupled systems with ions, repeated collapses of impedance due to electron-ion relaxation processes are observed. The current and voltage pulses calculated for several reflex diodes and triodes with and without ions are presented. (J.U.)

  10. Evaluation of the ENVI-Met Vegetation Model of Four Common Tree Species in a Subtropical Hot-Humid Area

    Directory of Open Access Journals (Sweden)

    Zhixin Liu

    2018-05-01

    Full Text Available Urban trees can significantly improve the outdoor thermal environment, especially in subtropical zones. However, due to the lack of fundamental evaluations of numerical simulation models, design and modification strategies for optimizing the thermal environment in subtropical hot-humid climate zones cannot be proposed accurately. To resolve this issue, this study investigated the physiological parameters (leaf surface temperature and vapor flux) and thermal effects (solar radiation, air temperature, and humidity) of four common tree species (Michelia alba, Mangifera indica, Ficus microcarpa, and Bauhinia blakeana) in both spring and summer in Guangzhou, China. A comprehensive comparison of the observed data and the modeled data from ENVI-met (v4.2 Science), a three-dimensional microclimate model, was performed. The results show that the most fundamental weakness of ENVI-met is the limitation on input solar radiation, which cannot be specified hourly in the current version and may affect the simulated thermal environment. For the tree model, the discrepancy between modeled and observed microclimate parameters was acceptable. However, for the physiological parameters, ENVI-met tended to overestimate the leaf surface temperature and underestimate the vapor flux, especially at midday in summer. The simplified calculation of the tree model may be one of the main reasons. Furthermore, the thermal effects of trees, meaning the differences between nearby treeless sites and shaded areas, were all underestimated in ENVI-met for each microclimate variable. This study shows that the tree model is suitable for subtropical hot-humid climates, but also needs some improvement.
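
Model-observation comparisons of this kind are commonly summarized with a mean bias and a root-mean-square error; the abstract does not name its statistics, so the metrics and the leaf-temperature values below are illustrative assumptions only:

```python
# Illustrative model-evaluation metrics for paired model/observation series.
# The leaf-temperature values (degrees C) are invented; a positive mean bias
# mimics the reported overestimation of leaf surface temperature.

def mean_bias(model, obs):
    """Mean of model-minus-observation differences."""
    return sum(m - o for m, o in zip(model, obs)) / len(obs)

def rmse(model, obs):
    """Root-mean-square error of model against observation."""
    return (sum((m - o) ** 2 for m, o in zip(model, obs)) / len(obs)) ** 0.5

obs_leaf_t = [31.0, 33.5, 35.2, 34.0]        # observed, midday summer
model_leaf_t = [33.0, 35.0, 38.0, 36.5]      # modeled: systematically warmer
print(round(mean_bias(model_leaf_t, obs_leaf_t), 2),
      round(rmse(model_leaf_t, obs_leaf_t), 2))
```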

  11. Bioenergy Supply and Environmental Impacts on Cropland: Insights from Multi-market Forecasts in a Great Lakes Subregional Bioeconomic Model

    Energy Technology Data Exchange (ETDEWEB)

    Egbendewe-Mondzozo, Aklesso [Univ. of Lome, Lome (Togo); Swinton, Scott M. [Univ. of Lome, Lome (Togo); Kang, Shujiang [Univ. of Lome, Lome (Togo); Post, Wilfred M. [Univ. of Lome, Lome (Togo); Binfield, Julian C. [Univ. of Lome, Lome (Togo); Thompson, Wyatt [Univ. of Lome, Lome (Togo)

    2015-01-03

    Using subregional models of crop production choices in central Wisconsin and southwest Michigan, we predict biomass production, land use, and environmental impacts with details that are unavailable from national scale models. When biomass prices are raised exogenously, we find that the subregional models overestimate the supply, the land use, and the beneficial environmental aspects of perennial biomass crops. Multi-market price feedbacks tied to realistic policy parameters predict high threshold absolute prices for biomass to enter production, resulting in intensified production of biomass from annual grain crops with damaging environmental impacts. Multi-market feedbacks also predict regional specialization in energy biomass production in areas with lower yields of food crops. Furthermore, policies promoting biofuels will not necessarily generate environmental benefits in the absence of environmental regulations.

  12. THE COMPARISON BETWEEN COMPUTER SIMULATION AND PHYSICAL MODEL IN CALCULATING ILLUMINANCE LEVEL OF ATRIUM BUILDING

    Directory of Open Access Journals (Sweden)

    Sushardjanti Felasari

    2003-01-01

    This research examines the accuracy of computer programmes in simulating the illuminance level in atrium buildings compared with measurements in physical models. The case study was an atrium building with four roof types: a pitched roof, a barrel vault roof, a monitor roof (both monitor pitched and monitor barrel vault), and a north light roof (with both north and south orientations). The results show that the two methods agree in some respects and disagree in others. They show the same pattern of daylight distribution. On the other hand, in terms of daylight factors, the computer simulation tends to underestimate values compared with the physical model measurements, while for average and minimum illuminance it tends to overestimate them.
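The daylight factor compared in this study is the indoor horizontal illuminance expressed as a percentage of the simultaneous unobstructed outdoor illuminance. A minimal sketch with hypothetical readings (not the study's data):

```python
def daylight_factor(indoor_lux, outdoor_lux):
    """Daylight factor: indoor horizontal illuminance as a percentage
    of the simultaneous unobstructed outdoor illuminance."""
    return 100.0 * indoor_lux / outdoor_lux

# Hypothetical paired readings for one measurement point under an
# overcast sky (the study's actual data are not reproduced here):
simulated = daylight_factor(180.0, 10000.0)   # from computer simulation
measured = daylight_factor(220.0, 10000.0)    # from the physical model
bias = simulated - measured                   # negative: simulation under-estimates
```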

  13. Decadal predictions of Southern Ocean sea ice : testing different initialization methods with an Earth-system Model of Intermediate Complexity

    Science.gov (United States)

    Zunz, Violette; Goosse, Hugues; Dubinkina, Svetlana

    2013-04-01

    The sea ice extent in the Southern Ocean has increased since 1979, but the causes of this expansion have not been firmly identified. In particular, the contributions of internal variability and external forcing to this positive trend have not been fully established. In this region, the lack of observations and the overestimation of the internal variability of the sea ice by contemporary General Circulation Models (GCMs) make it difficult to understand the behaviour of the sea ice. Nevertheless, if its evolution is governed by the internal variability of the system, and if this internal variability is in some way predictable, a suitable initialization method should lead to simulation results that better fit reality. Current GCM decadal predictions are generally initialized through nudging towards observed fields. This relatively simple method does not seem to be appropriate for the initialization of sea ice in the Southern Ocean. The present study aims at identifying an initialization method that could improve the quality of decadal-timescale predictions of Southern Ocean sea ice. We use LOVECLIM, an Earth-system Model of Intermediate Complexity that allows us to perform, within a reasonable computational time, the large number of simulations required to test different initialization procedures systematically. These involve three data assimilation methods: a nudging, a particle filter and an efficient particle filter. In a first step, simulations are performed in an idealized framework, i.e. data from a reference simulation of LOVECLIM are used instead of observations (hereinafter called pseudo-observations). In this configuration, the internal variability of the model by construction agrees with that of the pseudo-observations. This allows us to set aside the issues related to models overestimating the internal variability compared with observations. In this way, we can work out a suitable methodology to assess the efficiency of the different initialization methods.
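The particle filter named among the assimilation methods can be illustrated with a minimal, generic analysis step: weight an ensemble of model states by a Gaussian observation likelihood, then resample. This sketch is not the LOVECLIM implementation, and all values are made up:

```python
import math
import random

def particle_filter_step(particles, obs, obs_error, rng):
    """One analysis step: Gaussian likelihood weights + multinomial resampling."""
    weights = [math.exp(-0.5 * ((p - obs) / obs_error) ** 2) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # Resample particles in proportion to their weights.
    return rng.choices(particles, weights=weights, k=len(particles))

rng = random.Random(0)
ensemble = [rng.gauss(0.0, 2.0) for _ in range(500)]   # prior ensemble of scalar states
analysis = particle_filter_step(ensemble, obs=1.5, obs_error=0.5, rng=rng)

# The resampled ensemble mean is pulled toward the observation.
prior_mean = sum(ensemble) / len(ensemble)
post_mean = sum(analysis) / len(analysis)
```

The "efficient" variant in the abstract refers to a refinement of this basic scheme; the sketch above shows only the common core.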

  14. Nonequilibrium shock-heated nitrogen flows using a rovibrational state-to-state method

    Science.gov (United States)

    Panesi, M.; Munafò, A.; Magin, T. E.; Jaffe, R. L.

    2014-07-01

    A rovibrational collisional model is developed to study the internal energy excitation and dissociation processes behind a strong shock wave in a nitrogen flow. The reaction rate coefficients are obtained from the ab initio database of the NASA Ames Research Center. The master equation is coupled with a one-dimensional flow solver to study the nonequilibrium phenomena encountered in the gas during a hyperbolic reentry into Earth's atmosphere. The analysis of the populations of the rovibrational levels demonstrates that rotational and vibrational relaxation proceed at the same rate. This contrasts with the common misconception that translational and rotational relaxation occur concurrently. A significant part of the relaxation process occurs in non-quasi-steady-state conditions. Exchange processes are found to have a significant impact on the relaxation of the gas, while predissociation has a negligible effect. The results obtained by means of the full rovibrational collisional model are used to assess the validity of reduced order models (vibrational collisional and multitemperature) that are based on the same kinetic database. It is found that thermalization and dissociation are drastically overestimated by the reduced order models. The reasons for the failure differ in the two cases. In the vibrational collisional model, the overestimation of dissociation is a consequence of the assumption of equilibrium between the rotational and translational energy. The multitemperature model fails to predict the correct thermochemical relaxation owing to the breakdown of the quasi-steady-state assumption used to derive the phenomenological rate coefficient for dissociation.
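A master equation of the kind described tracks level populations through gains and losses driven by state-to-state rate coefficients. A toy two-level sketch (the rates are illustrative, not taken from the NASA Ames database):

```python
def master_equation_step(pops, rates, dt):
    """One explicit Euler step of dn_i/dt = sum_j (k[j][i]*n_j - k[i][j]*n_i)."""
    n = len(pops)
    new = []
    for i in range(n):
        gain = sum(rates[j][i] * pops[j] for j in range(n) if j != i)
        loss = sum(rates[i][j] for j in range(n) if j != i) * pops[i]
        new.append(pops[i] + dt * (gain - loss))
    return new

# Toy two-level system: k[0][1] excites, k[1][0] de-excites.
rates = [[0.0, 1.0],
         [2.0, 0.0]]
pops = [1.0, 0.0]          # everything initially in the ground level
for _ in range(10000):     # integrate to t = 10 with dt = 0.001
    pops = master_equation_step(pops, rates, dt=0.001)
# Populations relax toward detailed balance: n1/n0 -> k[0][1]/k[1][0] = 0.5
```

The full model in the abstract evolves thousands of rovibrational levels with ab initio rate coefficients and couples this system to a flow solver; the structure of each step is nonetheless the same gain/loss balance shown here.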

  15. Verification of high-speed solar wind stream forecasts using operational solar wind models

    DEFF Research Database (Denmark)

    Reiss, Martin A.; Temmer, Manuela; Veronig, Astrid M.

    2016-01-01

    High-speed solar wind streams emanating from coronal holes are frequently impinging on the Earth's magnetosphere, causing recurrent, medium-level geomagnetic storm activity. Modeling high-speed solar wind streams is thus an essential element of successful space weather forecasting. Here we evaluate high-speed stream forecasts made by the empirical solar wind forecast (ESWF) and the semiempirical Wang-Sheeley-Arge (WSA) model based on the in situ plasma measurements from the Advanced Composition Explorer (ACE) spacecraft for the years 2011 to 2014. While the ESWF makes use of an empirical relation ... and the background solar wind conditions. We found that both solar wind models are capable of predicting the large-scale features of the observed solar wind speed (root-mean-square error, RMSE ≈ 100 km/s) but tend to either overestimate (ESWF) or underestimate (WSA) the number of high-speed solar wind streams (threat ...
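The verification measures cited, RMSE for the continuous speed series and (as the truncated "threat ..." suggests) a threat score for event detection, can be sketched generically; the series below are placeholders, not ACE measurements:

```python
import math

def rmse(forecast, observed):
    """Root-mean-square error between forecast and observed series."""
    return math.sqrt(sum((f - o) ** 2 for f, o in zip(forecast, observed)) / len(forecast))

def threat_score(hits, false_alarms, misses):
    """Threat score (critical success index) for event-based verification."""
    return hits / (hits + false_alarms + misses)

# Placeholder solar-wind-speed series in km/s:
obs = [350, 420, 610, 580, 390]
fc  = [380, 400, 500, 620, 410]
speed_rmse = rmse(fc, obs)
ts = threat_score(hits=8, false_alarms=3, misses=4)
```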

  16. Chromosome aberration model combining radiation tracks, chromatin structure, DSB repair and chromatin mobility

    International Nuclear Information System (INIS)

    Friedland, W.; Kundrat, P.

    2015-01-01

    The module that simulates the kinetics and yields of radiation-induced chromosome aberrations within the biophysical code PARTRAC is described. Radiation track structures simulated by Monte Carlo methods are overlapped with multi-scale models of DNA and chromatin to assess the resulting DNA damage. Spatial mobility of individual DNA ends from double-strand breaks is modelled simultaneously with their processing by the non-homologous end-joining enzymes. To score diverse types of chromosome aberrations, the joined ends are classified regarding their original chromosomal location, orientation and the involvement of centromeres. A comparison with experimental data on dicentrics induced by gamma and alpha particles shows that their relative dose dependence is predicted correctly, although the absolute yields are overestimated. The critical model assumptions on chromatin mobility and on the initial damage recognition and chromatin remodelling steps and their future refinements to solve this issue are discussed. (authors)

  17. Transport energy modeling with meta-heuristic harmony search algorithm, an application to Turkey

    Energy Technology Data Exchange (ETDEWEB)

    Ceylan, Huseyin; Ceylan, Halim; Haldenbilen, Soner; Baskan, Ozgur [Department of Civil Engineering, Engineering Faculty, Pamukkale University, Muh. Fak. Denizli 20017 (Turkey)

    2008-07-15

    This study proposes a new method for estimating transport energy demand using a harmony search (HS) approach. HArmony Search Transport Energy Demand Estimation (HASTEDE) models are developed taking population, gross domestic product and vehicle-kilometers as inputs. The HASTEDE models take linear, exponential and quadratic mathematical forms, and they are applied to the energy consumption of the Turkish transportation sector. Optimum or near-optimum values of the HS parameters are obtained with a sensitivity analysis (SA). The performance of all models is compared with the Ministry of Energy and Natural Resources (MENR) projections. The results showed that the HS algorithm may be used for energy modeling, but SA is required to obtain the best values of the HS parameters. Compared with the MENR projections, the quadratic form of HASTEDE overestimates transport-sector energy consumption by about 26%, while the linear and exponential forms underestimate it by about 21%. This may be due to the modeling procedure and the parameters selected for the models, but determining the upper and lower bounds of transportation-sector energy consumption provides a framework and flexibility for setting up energy policies. (author)
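Harmony search, the meta-heuristic behind HASTEDE, keeps a memory of candidate solutions and improvises new ones by memory consideration, pitch adjustment and random selection. A minimal sketch on a toy objective (the parameter values here are illustrative, not those tuned in the study's sensitivity analysis):

```python
import random

def harmony_search(objective, n_vars, bounds, iters=2000, hms=10,
                   hmcr=0.9, par=0.3, seed=1):
    """Minimal harmony search minimizing `objective`.

    hmcr: harmony memory considering rate; par: pitch adjusting rate.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(n_vars)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for j in range(n_vars):
            if rng.random() < hmcr:
                x = rng.choice(memory)[j]                     # memory consideration
                if rng.random() < par:
                    x += rng.uniform(-0.1, 0.1) * (hi - lo)   # pitch adjustment
            else:
                x = rng.uniform(lo, hi)                       # random selection
            new.append(min(hi, max(lo, x)))
        worst = max(memory, key=objective)
        if objective(new) < objective(worst):                 # replace worst harmony
            memory[memory.index(worst)] = new
    return min(memory, key=objective)

# Toy objective: sum of squares, optimum at the origin.
best = harmony_search(lambda v: sum(x * x for x in v), n_vars=3, bounds=(-5.0, 5.0))
```

In the study the objective would instead be the fit error of a linear, exponential or quadratic demand model against historical consumption data.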

  18. Evaluation of statistical models for forecast errors from the HBV model

    Science.gov (United States)

    Engeland, Kolbjørn; Renard, Benjamin; Steinsland, Ingelin; Kolberg, Sjur

    2010-04-01

    Three statistical models for the forecast errors for inflow into the Langvatn reservoir in Northern Norway have been constructed and tested according to the agreement between (i) the forecast distribution and the observations and (ii) the median values of the forecast distribution and the observations. For the first model, observed and forecasted inflows were transformed by the Box-Cox transformation before a first-order auto-regressive model was constructed for the forecast errors; the parameters were conditioned on weather classes. In the second model, the Normal Quantile Transformation (NQT) was applied to observed and forecasted inflows before a similar first-order auto-regressive model was constructed for the forecast errors. In the third model, positive and negative errors were modeled separately: the errors were first NQT-transformed before the mean error values were conditioned on climate, forecasted inflow and the previous day's error. To test the three models we applied three criteria: we wanted (a) the forecast distribution to be reliable; (b) the forecast intervals to be narrow; and (c) the median values of the forecast distribution to be close to the observed values. Models 1 and 2 gave almost identical results. The median values improved the forecast, with the Nash-Sutcliffe efficiency increasing from 0.77 for the original forecast to 0.87 for the corrected forecasts. Models 1 and 2 over-estimated the forecast intervals but gave the narrowest intervals; their main drawback was that their distributions were less reliable than Model 3's. For Model 3 the median values did not fit well, since the auto-correlation was not accounted for. Since Model 3 did not benefit from the potential variance reduction that lies in bias estimation and removal, it gave on average wider forecast intervals than the other two models. At the same time, Model 3 on average slightly under-estimated the forecast intervals, probably explained by the use of average measures to evaluate the fit.
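The first error model (a Box-Cox transformation followed by a first-order auto-regressive model of the forecast errors) can be sketched as follows; the inflow series and the lambda value are illustrative, not the Langvatn data:

```python
def box_cox(x, lam):
    """Box-Cox transformation (lam != 0 branch): (x^lam - 1) / lam."""
    return (x ** lam - 1.0) / lam

def fit_ar1(errors):
    """Least-squares estimate of phi in e_t = phi * e_{t-1} + noise."""
    num = sum(errors[t] * errors[t - 1] for t in range(1, len(errors)))
    den = sum(e * e for e in errors[:-1])
    return num / den

# Illustrative inflow series (observed vs forecasted, in m3/s):
observed = [12.0, 15.5, 14.2, 18.9, 16.1, 13.4]
forecast = [11.0, 14.0, 15.0, 17.5, 17.0, 12.5]
lam = 0.3
errors = [box_cox(o, lam) - box_cox(f, lam) for o, f in zip(observed, forecast)]
phi = fit_ar1(errors)
next_error = phi * errors[-1]   # one-step-ahead forecast-error correction
```

In the actual model the AR(1) parameters were additionally conditioned on weather classes; that conditioning is omitted here.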

  19. Performance of Linear and Nonlinear Two-Leaf Light Use Efficiency Models at Different Temporal Scales

    Directory of Open Access Journals (Sweden)

    Xiaocui Wu

    2015-02-01

    The reliable simulation of gross primary productivity (GPP) at various spatial and temporal scales is of significance for quantifying the net exchange of carbon between terrestrial ecosystems and the atmosphere. This study aimed to verify the ability of a nonlinear two-leaf model (TL-LUEn), a linear two-leaf model (TL-LUE), and a big-leaf light use efficiency model (MOD17) to simulate GPP at half-hourly, daily and 8-day scales, using GPP derived from 58 eddy-covariance flux sites in Asia, Europe and North America as benchmarks. The model evaluation showed that the overall performance of TL-LUEn was slightly, but not significantly, better than that of TL-LUE at the half-hourly and daily scales, while the overall performance of both TL-LUEn and TL-LUE was significantly better (p < 0.0001) than that of MOD17 at these two temporal scales. The improvement of TL-LUEn over TL-LUE was relatively small in comparison with the improvement of TL-LUE over MOD17. However, the differences between TL-LUEn and MOD17, and between TL-LUE and MOD17, became less distinct at the 8-day scale. As for different vegetation types, TL-LUEn and TL-LUE performed better than MOD17 for all vegetation types except crops at the half-hourly scale. At the daily and 8-day scales, both TL-LUEn and TL-LUE outperformed MOD17 for forests. However, TL-LUEn had a mixed performance for the three non-forest types, while TL-LUE outperformed MOD17 slightly for all of them at the daily and 8-day scales. The better performance of TL-LUEn and TL-LUE for forests was mainly achieved by correcting the underestimation/overestimation of GPP simulated by MOD17 under low/high solar radiation and sky clearness conditions. TL-LUEn is more applicable at individual sites at the half-hourly scale, while TL-LUE could be used regionally at half-hourly, daily and 8-day scales. MOD17 is also an applicable option regionally at the 8-day scale.
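The big-leaf light use efficiency logic that MOD17 represents computes GPP as a maximum light use efficiency down-regulated by minimum-temperature and vapour-pressure-deficit scalars, multiplied by absorbed PAR. A generic sketch (the ramp bounds and parameter values are illustrative, not the MOD17 calibration):

```python
def ramp(x, lo, hi):
    """Linear ramp scalar: 0 below lo, rising linearly to 1 at hi."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def gpp_big_leaf(par, fpar, lue_max, tmin, vpd):
    """GPP = LUE_max * f(Tmin) * f(VPD) * fPAR * PAR (big-leaf LUE form).

    par: incident photosynthetically active radiation (MJ m-2 d-1)
    fpar: fraction of PAR absorbed by the canopy
    lue_max: maximum light use efficiency (g C MJ-1)
    """
    t_scalar = ramp(tmin, -8.0, 10.0)             # illustrative Tmin ramp (deg C)
    vpd_scalar = 1.0 - ramp(vpd, 650.0, 4600.0)   # illustrative VPD ramp (Pa)
    return lue_max * t_scalar * vpd_scalar * fpar * par

gpp = gpp_big_leaf(par=10.0, fpar=0.8, lue_max=1.2, tmin=12.0, vpd=900.0)
```

The two-leaf models in the study refine this scheme by treating sunlit and shaded leaves separately, which is what corrects the radiation-dependent bias described above.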

  20. A threshold-voltage model for small-scaled GaAs nMOSFET with stacked high-k gate dielectric

    International Nuclear Information System (INIS)

    Liu Chaowen; Xu Jingping; Liu Lu; Lu Hanhan; Huang Yuan

    2016-01-01

    A threshold-voltage model for a GaAs MOSFET with a stacked high-k gate dielectric is established by solving the two-dimensional Poisson's equation in the channel and considering short-channel, drain-induced barrier lowering (DIBL) and quantum effects. The simulated results are in good agreement with Silvaco TCAD data, confirming the correctness and validity of the model. Using the model, the impacts of the structural and physical parameters of the stacked high-k gate dielectric on the threshold-voltage shift and on the temperature characteristics of the threshold voltage are investigated. The results show that the stacked gate-dielectric structure can effectively suppress the fringing-field and DIBL effects and improve the threshold-voltage and temperature characteristics; on the other hand, the influence of temperature on the threshold voltage is overestimated if the quantum effect is ignored. (paper)