WorldWideScience

Sample records for models generally overestimate

  1. Resource overestimates

    Indian Academy of Sciences (India)

    First page Back Continue Last page Graphics. Extensive field studies revealed over-estimates of bamboo stocks by a factor of ten! Extensive field studies revealed over-estimates of bamboo stocks by a factor of ten! Forest compartments that had been completely clear felled to set up WCPM still showed large stocks because ...

  2. Why do general circulation models overestimate the aerosol cloud lifetime effect? A case study comparing CAM5 and a CRM

    Science.gov (United States)

    Zhou, Cheng; Penner, Joyce E.

    2017-01-01

    Observation-based studies have shown that the aerosol cloud lifetime effect or the increase of cloud liquid water path (LWP) with increased aerosol loading may have been overestimated in climate models. Here, we simulate shallow warm clouds on 27 May 2011 at the southern Great Plains (SGP) measurement site established by the Department of Energy's (DOE) Atmospheric Radiation Measurement (ARM) program using a single-column version of a global climate model (Community Atmosphere Model or CAM) and a cloud resolving model (CRM). The LWP simulated by CAM increases substantially with aerosol loading while that in the CRM does not. The increase of LWP in CAM is caused by a large decrease of the autoconversion rate when cloud droplet number increases. In the CRM, the autoconversion rate is also reduced, but this is offset or even outweighed by the increased evaporation of cloud droplets near the cloud top, resulting in an overall decrease in LWP. Our results suggest that climate models need to include the dependence of cloud top growth and the evaporation/condensation process on cloud droplet number concentrations.

  3. Overestimating resource value and its effects on fighting decisions.

    Directory of Open Access Journals (Sweden)

    Lee Alan Dugatkin

    Full Text Available Much work in behavioral ecology has shown that animals fight over resources such as food, and that they make strategic decisions about when to engage in such fights. Here, we examine the evolution of one, heretofore unexamined, component of that strategic decision about whether to fight for a resource. We present the results of a computer simulation that examined the evolution of over- or underestimating the value of a resource (food as a function of an individual's current hunger level. In our model, animals fought for food when they perceived their current food level to be below the mean for the environment. We considered seven strategies for estimating food value: 1 always underestimate food value, 2 always overestimate food value, 3 never over- or underestimate food value, 4 overestimate food value when hungry, 5 underestimate food value when hungry, 6 overestimate food value when relatively satiated, and 7 underestimate food value when relatively satiated. We first competed all seven strategies against each other when they began at approximately equal frequencies. In such a competition, two strategies--"always overestimate food value," and "overestimate food value when hungry"--were very successful. We next competed each of these strategies against the default strategy of "never over- or underestimate," when the default strategy was set at 99% of the population. Again, the strategies of "always overestimate food value" and "overestimate food value when hungry" fared well. Our results suggest that overestimating food value when deciding whether to fight should be favored by natural selection.

  4. Lake Wobegon’s Guns: Overestimating Our Gun-Related Competences

    Directory of Open Access Journals (Sweden)

    Emily Stark

    2016-02-01

    Full Text Available The Lake Wobegon Effect is a general tendency for people to overestimate their own abilities. In this study, the authors conducted a large, nationally-representative survey of U.S. citizens to test whether Americans overestimate their own gun-relevant personality traits, gun safety knowledge, and ability to use a gun in an emergency. The authors also tested how gun control attitudes, political identification, gender, and gun experience affect self-perceptions. Consistent with prior research on the Lake Wobegon Effect, participants overestimated their gun-related competencies. Conservatives, males, and pro-gun advocates self-enhanced somewhat more than their counterparts but this effect was primarily due to increased gun experience among these participants. These findings are important to policymakers in the area of gun use, because overconfidence in one’s gun-related abilities may lead to a reduced perceived need for gun training.

  5. Do young novice drivers overestimate their driving skills?

    NARCIS (Netherlands)

    Craen, S. de Twisk, D.A.M. Hagenzieker, M.P. Elffers, H. & Brookhuis, K.A.

    2007-01-01

    In this study the authors argue that, in order to sufficiently adapt to task demands in traffic, drivers have to make an assessment of their own driving skills. There are indications that drivers in general, and novice drivers in particular, overestimate their driving skills. The objective of this

  6. Generalized PSF modeling for optimized quantitation in PET imaging.

    Science.gov (United States)

    Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman

    2017-06-21

    modeling does not offer optimized PET quantitation, and that PSF overestimation may provide enhanced SUV quantitation. Furthermore, generalized PSF modeling may provide a valuable approach for quantitative tasks such as treatment-response assessment and prognostication.

  7. Predictors and overestimation of recalled mobile phone use among children and adolescents.

    Science.gov (United States)

    Aydin, Denis; Feychting, Maria; Schüz, Joachim; Andersen, Tina Veje; Poulsen, Aslak Harbo; Prochazka, Michaela; Klæboe, Lars; Kuehni, Claudia E; Tynes, Tore; Röösli, Martin

    2011-12-01

    A growing body of literature addresses possible health effects of mobile phone use in children and adolescents by relying on the study participants' retrospective reconstruction of mobile phone use. In this study, we used data from the international case-control study CEFALO to compare self-reported with objectively operator-recorded mobile phone use. The aim of the study was to assess predictors of level of mobile phone use as well as factors that are associated with overestimating own mobile phone use. For cumulative number and duration of calls as well as for time since first subscription we calculated the ratio of self-reported to operator-recorded mobile phone use. We used multiple linear regression models to assess possible predictors of the average number and duration of calls per day and logistic regression models to assess possible predictors of overestimation. The cumulative number and duration of calls as well as the time since first subscription of mobile phones were overestimated on average by the study participants. Likelihood to overestimate number and duration of calls was not significantly different for controls compared to cases (OR=1.1, 95%-CI: 0.5 to 2.5 and OR=1.9, 95%-CI: 0.85 to 4.3, respectively). However, likelihood to overestimate was associated with other health related factors such as age and sex. As a consequence, such factors act as confounders in studies relying solely on self-reported mobile phone use and have to be considered in the analysis. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Americans Still Overestimate Social Class Mobility: A Pre-Registered Self-Replication.

    Science.gov (United States)

    Kraus, Michael W

    2015-01-01

    Kraus and Tan (2015) hypothesized that Americans tend to overestimate social class mobility in society, and do so because they seek to protect the self. This paper reports a pre-registered exact replication of Study 3 from this original paper and finds, consistent with the original study, that Americans substantially overestimate social class mobility, that people provide greater overestimates when made while thinking of similar others, and that high perceived social class is related to greater overestimates. The current results provide additional evidence consistent with the idea that people overestimate class mobility to protect their beliefs in the promise of equality of opportunity. Discussion considers the utility of pre-registered self-replications as one tool for encouraging replication efforts and assessing the robustness of effect sizes.

  9. Americans Still Overestimate Social Class Mobility: A Pre-Registered Self-Replication

    Directory of Open Access Journals (Sweden)

    Michael W. Kraus

    2015-11-01

    Full Text Available Kraus and Tan (2015 hypothesized that Americans tend to overestimate social class mobility in society, and do so because they seek to protect the self. This paper reports a pre-registered exact replication of Study 3 from this original paper and finds, consistent with the original study, that Americans substantially overestimate social class mobility, that people provide greater overestimates when made while thinking of similar others, and that high perceived social class is related to greater overestimates. The current results provide additional evidence consistent with the idea that people overestimate class mobility to protect their beliefs in the promise of equality of opportunity. Discussion considers the utility of pre-registered self-replications as one tool for encouraging replication efforts and assessing the robustness of effect sizes.

  10. Predictive Validity of Explicit and Implicit Threat Overestimation in Contamination Fear

    Science.gov (United States)

    Green, Jennifer S.; Teachman, Bethany A.

    2012-01-01

    We examined the predictive validity of explicit and implicit measures of threat overestimation in relation to contamination-fear outcomes using structural equation modeling. Undergraduate students high in contamination fear (N = 56) completed explicit measures of contamination threat likelihood and severity, as well as looming vulnerability cognitions, in addition to an implicit measure of danger associations with potential contaminants. Participants also completed measures of contamination-fear symptoms, as well as subjective distress and avoidance during a behavioral avoidance task, and state looming vulnerability cognitions during an exposure task. The latent explicit (but not implicit) threat overestimation variable was a significant and unique predictor of contamination fear symptoms and self-reported affective and cognitive facets of contamination fear. On the contrary, the implicit (but not explicit) latent measure predicted behavioral avoidance (at the level of a trend). Results are discussed in terms of differential predictive validity of implicit versus explicit markers of threat processing and multiple fear response systems. PMID:24073390

  11. Reducing WCET Overestimations by Correcting Errors in Loop Bound Constraints

    Directory of Open Access Journals (Sweden)

    Fanqi Meng

    2017-12-01

    Full Text Available In order to reduce overestimations of worst-case execution time (WCET, in this article, we firstly report a kind of specific WCET overestimation caused by non-orthogonal nested loops. Then, we propose a novel correction approach which has three basic steps. The first step is to locate the worst-case execution path (WCEP in the control flow graph and then map it onto source code. The second step is to identify non-orthogonal nested loops from the WCEP by means of an abstract syntax tree. The last step is to recursively calculate the WCET errors caused by the loose loop bound constraints, and then subtract the total errors from the overestimations. The novelty lies in the fact that the WCET correction is only conducted on the non-branching part of WCEP, thus avoiding potential safety risks caused by possible WCEP switches. Experimental results show that our approach reduces the specific WCET overestimation by an average of more than 82%, and 100% of corrected WCET is no less than the actual WCET. Thus, our approach is not only effective but also safe. It will help developers to design energy-efficient and safe real-time systems.

  12. Do young novice drivers overestimate their driving skills more than experienced drivers? : different methods lead to different conclusions.

    NARCIS (Netherlands)

    Craen, S. de Twisk, D.A.M. Hagenzieker, M.P. Elffers, H. & Brookhuis, K.A.

    2011-01-01

    In this study the authors argue that drivers have to make an assessment of their own driving skills, in order to sufficiently adapt to their task demands in traffic. There are indications that drivers in general, but novice drivers in particular, overestimate their driving skills. However, study

  13. MRI Overestimates Excitotoxic Amygdala Lesion Damage in Rhesus Monkeys

    Directory of Open Access Journals (Sweden)

    Benjamin M. Basile

    2017-06-01

    Full Text Available Selective, fiber-sparing excitotoxic lesions are a state-of-the-art tool for determining the causal contributions of different brain areas to behavior. For nonhuman primates especially, it is advantageous to keep subjects with high-quality lesions alive and contributing to science for many years. However, this requires the ability to estimate lesion extent accurately. Previous research has shown that in vivo T2-weighted magnetic resonance imaging (MRI accurately estimates damage following selective ibotenic acid lesions of the hippocampus. Here, we show that the same does not apply to lesions of the amygdala. Across 19 hemispheres from 13 rhesus monkeys, MRI assessment consistently overestimated amygdala damage as assessed by microscopic examination of Nissl-stained histological material. Two outliers suggested a linear relation for lower damage levels, and values of unintended amygdala damage from a previous study fell directly on that regression line, demonstrating that T2 hypersignal accurately predicts damage levels below 50%. For unintended damage, MRI estimates correlated with histological assessment for entorhinal cortex, perirhinal cortex and hippocampus, though MRI significantly overestimated the extent of that damage in all structures. Nevertheless, ibotenic acid injections routinely produced extensive intentional amygdala damage with minimal unintended damage to surrounding structures, validating the general success of the technique. The field will benefit from more research into in vivo lesion assessment techniques, and additional evaluation of the accuracy of MRI assessment in different brain areas. For now, in vivo MRI assessment of ibotenic acid lesions of the amygdala can be used to confirm successful injections, but MRI estimates of lesion extent should be interpreted with caution.

  14. Total body surface area overestimation at referring institutions in children transferred to a burn center.

    Science.gov (United States)

    Swords, Douglas S; Hadley, Edmund D; Swett, Katrina R; Pranikoff, Thomas

    2015-01-01

    Total body surface area (TBSA) burned is a powerful descriptor of burn severity and influences the volume of resuscitation required in burn patients. The incidence and severity of TBSA overestimation by referring institutions (RIs) in children transferred to a burn center (BC) are unclear. The association between TBSA overestimation and overresuscitation is unknown as is that between TBSA overestimation and outcome. The trauma registry at a BC was queried over 7.25 years for children presenting with burns. TBSA estimate at RIs and BC, total fluid volume given before arrival at a BC, demographic variables, and clinical variables were reviewed. Nearly 20 per cent of children arrived from RIs without TBSA estimation. Nearly 50 per cent were overestimated by 5 per cent or greater TBSA and burn sizes were overestimated by up to 44 per cent TBSA. Average TBSA measured at BC was 9.5 ± 8.3 per cent compared with 15.5 ± 11.8 per cent as measured at RIs (P < 0.0001). Burns between 10 and 19.9 per cent TBSA were overestimated most often and by the greatest amounts. There was a statistically significant relationship between overestimation of TBSA by 5 per cent or greater and overresuscitation by 10 mL/kg or greater (P = 0.02). No patient demographic or clinical factors were associated with TBSA overestimation. Education efforts aimed at emergency department physicians regarding the importance of always calculating TBSA as well as the mechanics of TBSA estimation and calculating resuscitation volume are needed. Further studies should evaluate the association of TBSA overestimation by RIs with adverse outcomes and complications in the burned child.

  15. Adolescent-perceived parent and teacher overestimation of mathematics ability: Developmental implications for students' mathematics task values.

    Science.gov (United States)

    Gniewosz, Burkhard; Watt, Helen M G

    2017-07-01

    This study examines whether and how student-perceived parents' and teachers' overestimation of students' own perceived mathematical ability can explain trajectories for adolescents' mathematical task values (intrinsic and utility) controlling for measured achievement, following expectancy-value and self-determination theories. Longitudinal data come from a 3-cohort (mean ages 13.25, 12.36, and 14.41 years; Grades 7-10), 4-wave data set of 1,271 Australian secondary school students. Longitudinal structural equation models revealed positive effects of student-perceived overestimation of math ability by parents and teachers on students' intrinsic and utility math task values development. Perceived parental overestimations predicted intrinsic task value changes between all measurement occasions, whereas utility task value changes only were predicted between Grades 9 and 10. Parental influences were stronger for intrinsic than utility task values. Teacher influences were similar for both forms of task values and commenced after the curricular school transition in Grade 8. Results support the assumptions that the perceived encouragement conveyed by student-perceived mathematical ability beliefs of parents and teachers, promote positive mathematics task values development. Moreover, results point to different mechanisms underlying parents' and teachers' support. Finally, the longitudinal changes indicate transition-related increases in the effects of student-perceived overestimations and stronger effects for intrinsic than utility values. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. Reassessment of soil erosion on the Chinese loess plateau: were rates overestimated?

    Science.gov (United States)

    Zhao, Jianlin; Govers, Gerard

    2014-05-01

    Several studies have estimated regional soil erosion rates (rill and interrill erosion) on the Chinese loess plateau using an erosion model such as the RUSLE (e.g. Fu et al., 2011; Sun et al., 2013). However, the question may be asked whether such estimates are realistic: studies have shown that the use of models for large areas may lead to significant overestimations (Quinton et al., 2010). In this study, soil erosion rates on the Chinese loess plateau were reevaluated by using field measured soil erosion data from erosion plots (216 plots and 1380 plot years) in combination with a careful extrapolation procedure. Data analysis showed that the relationship between slope and erosion rate on arable land could be well described by erosion-slope relationships reported in the literature (Nearing, 1997). The increase of average erosion rate with slope length was clearly degressive, as could be expected from earlier research. However, for plots with permanent vegetation (grassland, shrub, forest) no relationship was found between erosion rates and slope gradient and/or slope length. This is important, as it implies that spatial variations of erosion on permanently vegetated areas cannot be modeled using topographical functions derived from observations on arable land. Application of relationships developed for arable land will lead to a significant overestimation of soil erosion rates. Based on our analysis we estimate the total soil erosion rate in the Chinese Loess plateau averages ca. 6.78 t ha-1 yr-1 for the whole loess plateau, resulting in a total sediment mobilisation of ca. 0.38 Gt yr-1. Erosion rates on arable land average ca. 15.10 t ha-1 yr-1. These estimates are 2 to 3 times lower than previously published estimates. The main reason why previous estimates are likely to be too high is that the values of (R)USLE parameters such as K, P and LS factor were overestimated. Overestimations of the K factor are due to the reliance of nomograph calculations, resulting

  17. Peer substance use overestimation among French university students: a cross-sectional survey

    Directory of Open Access Journals (Sweden)

    Dautzenberg Bertrand

    2010-03-01

    Full Text Available Abstract Background Normative misperceptions have been widely documented for alcohol use among U.S. college students. There is less research on other substances or European cultural contexts. This study explores which factors are associated with alcohol, tobacco and cannabis use misperceptions among French college students, focusing on substance use. Methods 12 classes of second-year college students (n = 731 in sociology, medicine, nursing or foreign language estimated the proportion of tobacco, cannabis, alcohol use and heavy episodic drinking among their peers and reported their own use. Results Peer substance use overestimation frequency was 84% for tobacco, 55% for cannabis, 37% for alcohol and 56% for heavy episodic drinking. Cannabis users (p = 0.006, alcohol (p = 0.003 and heavy episodic drinkers (p = 0.002, are more likely to overestimate the prevalence of use of these consumptions. Tobacco users are less likely to overestimate peer prevalence of smoking (p = 0.044. Women are more likely to overestimate tobacco (p Conclusions Local interventions that focus on creating realistic perceptions of substance use prevalence could be considered for cannabis and alcohol prevention in French campuses.

  18. Overestimation of Knowledge About Word Meanings: The “Misplaced Meaning” Effect

    Science.gov (United States)

    Kominsky, Jonathan F.; Keil, Frank C.

    2014-01-01

    Children and adults may not realize how much they depend on external sources in understanding word meanings. Four experiments investigated the existence and developmental course of a “Misplaced Meaning” (MM) effect, wherein children and adults overestimate their knowledge about the meanings of various words by underestimating how much they rely on outside sources to determine precise reference. Studies 1 & 2 demonstrate that children and adults show a highly consistent MM effect, and that it is stronger in young children. Study 3 demonstrates that adults are explicitly aware of the availability of outside knowledge, and that this awareness may be related to the strength of the MM effect. Study 4 rules out general overconfidence effects by examining a metalinguistic task in which adults are well-calibrated. PMID:24890038

  19. Kaplan-Meier Survival Analysis Overestimates the Risk of Revision Arthroplasty: A Meta-analysis.

    Science.gov (United States)

    Lacny, Sarah; Wilson, Todd; Clement, Fiona; Roberts, Derek J; Faris, Peter D; Ghali, William A; Marshall, Deborah A

    2015-11-01

    Although Kaplan-Meier survival analysis is commonly used to estimate the cumulative incidence of revision after joint arthroplasty, it theoretically overestimates the risk of revision in the presence of competing risks (such as death). Because the magnitude of overestimation is not well documented, the potential associated impact on clinical and policy decision-making remains unknown. We performed a meta-analysis to answer the following questions: (1) To what extent does the Kaplan-Meier method overestimate the cumulative incidence of revision after joint replacement compared with alternative competing-risks methods? (2) Is the extent of overestimation influenced by followup time or rate of competing risks? We searched Ovid MEDLINE, EMBASE, BIOSIS Previews, and Web of Science (1946, 1980, 1980, and 1899, respectively, to October 26, 2013) and included article bibliographies for studies comparing estimated cumulative incidence of revision after hip or knee arthroplasty obtained using both Kaplan-Meier and competing-risks methods. We excluded conference abstracts, unpublished studies, or studies using simulated data sets. Two reviewers independently extracted data and evaluated the quality of reporting of the included studies. Among 1160 abstracts identified, six studies were included in our meta-analysis. The principal reason for the steep attrition (1160 to six) was that the initial search was for studies in any clinical area that compared the cumulative incidence estimated using the Kaplan-Meier versus competing-risks methods for any event (not just the cumulative incidence of hip or knee revision); we did this to minimize the likelihood of missing any relevant studies. We calculated risk ratios (RRs) comparing the cumulative incidence estimated using the Kaplan-Meier method with the competing-risks method for each study and used DerSimonian and Laird random effects models to pool these RRs. Heterogeneity was explored using stratified meta-analyses and

  20. The Validity of Conscientiousness Is Overestimated in the Prediction of Job Performance.

    Science.gov (United States)

    Kepes, Sven; McDaniel, Michael A

    2015-01-01

    Sensitivity analyses refer to investigations of the degree to which the results of a meta-analysis remain stable when conditions of the data or the analysis change. To the extent that results remain stable, one can refer to them as robust. Sensitivity analyses are rarely conducted in the organizational science literature. Despite conscientiousness being a valued predictor in employment selection, sensitivity analyses have not been conducted with respect to meta-analytic estimates of the correlation (i.e., validity) between conscientiousness and job performance. To address this deficiency, we reanalyzed the largest collection of conscientiousness validity data in the personnel selection literature and conducted a variety of sensitivity analyses. Publication bias analyses demonstrated that the validity of conscientiousness is moderately overestimated (by around 30%; a correlation difference of about .06). The misestimation of the validity appears to be due primarily to suppression of small effects sizes in the journal literature. These inflated validity estimates result in an overestimate of the dollar utility of personnel selection by millions of dollars and should be of considerable concern for organizations. The fields of management and applied psychology seldom conduct sensitivity analyses. Through the use of sensitivity analyses, this paper documents that the existing literature overestimates the validity of conscientiousness in the prediction of job performance. Our data show that effect sizes from journal articles are largely responsible for this overestimation.

  1. Factors associated with overestimation of asthma control: A cross-sectional study in Australia.

    Science.gov (United States)

    Bereznicki, Bonnie J; Chapman, Millicent P; Bereznicki, Luke R E

    2017-05-01

    To investigate actual and perceived disease control in Australians with asthma, and identify factors associated with overestimation of asthma control. This was a cross-sectional study of Australian adults with asthma, who were recruited via Facebook to complete an online survey. The survey included basic demographic questions, and validated tools assessing asthma knowledge, medication adherence, medicine beliefs, illness perception and asthma control. Items that measured symptoms and frequency of reliever medication use were compared to respondents' self-rating of their own asthma control. Predictors of overestimation of asthma control were determined using multivariate logistic regression. Of 2971 survey responses, 1950 (65.6%) were complete and eligible for inclusion. Overestimation of control was apparent in 45.9% of respondents. Factors independently associated with overestimation of asthma control included education level (OR = 0.755, 95% CI: 0.612-0.931, P = 0.009), asthma knowledge (OR = 0.942, 95% CI: 0.892-0.994, P = 0.029), total asthma control, (OR = 0.842, 95% CI: 0.818-0.867, P addictive (OR = 1.144, 95% CI: 1.017-1.287, P = 0.025), and increased feelings of control over asthma (OR = 1.261, 95% CI: 1.191-1.335), P < 0.001). Overestimation of asthma control remains a significant issue in Australians with asthma. The study highlights the importance of encouraging patients to express their feelings about asthma control and beliefs about medicines, and to be more forthcoming with their asthma symptoms. This would help to reveal any discrepancies between perceived and actual asthma control.

  2. Instantaneous-to-daily GPP upscaling schemes based on a coupled photosynthesis-stomatal conductance model: correcting the overestimation of GPP by directly using daily average meteorological inputs.

    Science.gov (United States)

    Wang, Fumin; Gonsamo, Alemu; Chen, Jing M; Black, T Andrew; Zhou, Bin

    2014-11-01

    Daily canopy photosynthesis is usually temporally upscaled from instantaneous (i.e., seconds) photosynthesis rate. The nonlinear response of photosynthesis to meteorological variables makes the temporal scaling a significant challenge. In this study, two temporal upscaling schemes of daily photosynthesis, the integrated daily model (IDM) and the segmented daily model (SDM), are presented by considering the diurnal variations of meteorological variables based on a coupled photosynthesis-stomatal conductance model. The two models, as well as a simple average daily model (SADM) with daily average meteorological inputs, were validated using the tower-derived gross primary production (GPP) to assess their abilities in simulating daily photosynthesis. The results showed IDM closely followed the seasonal trend of the tower-derived GPP with an average RMSE of 1.63 g C m(-2) day(-1), and an average Nash-Sutcliffe model efficiency coefficient (E) of 0.87. SDM performed similarly to IDM in GPP simulation but decreased the computation time by >66%. SADM overestimated daily GPP by about 15% during the growing season compared to IDM. Both IDM and SDM greatly decreased the overestimation by SADM, and improved the simulation of daily GPP by reducing the RMSE by 34 and 30%, respectively. The results indicated that IDM and SDM are useful temporal upscaling approaches, and both are superior to SADM in daily GPP simulation because they take into account the diurnally varying responses of photosynthesis to meteorological variables. SDM is computationally more efficient, and therefore more suitable for long-term and large-scale GPP simulations.

  3. Are the performance overestimates given by boys with ADHD self-protective?

    Science.gov (United States)

    Ohan, Jeneva L; Johnston, Charlotte

    2002-06-01

    Tested the self-protective hypothesis that boys with attention deficit hyperactivity disorder (ADHD) overestimate their performance to protect a positive self-image. We examined the impact of performance feedback on the social and academic performance self-perceptions of 45 boys with and 43 boys without ADHD ages 7 to 12. Consistent with the self-protective hypothesis, positive feedback led to increases in social performance estimates in boys without ADHD but to decreases in estimates given by boys with ADHD. This suggests that boys with ADHD can give more realistic self-appraisals when their self-image has been bolstered. In addition, social performance estimates in boys with ADHD were correlated with measures of self-esteem and positive presentation bias. In contrast, for academic performance estimates, boys in both groups increased their performance estimates after receiving positive versus average or no feedback, and estimates were not correlated with self-esteem or social desirability for boys with ADHD. We conclude that the self-protective hypothesis can account for social performance overestimations given by boys with ADHD but that other factors may better account for their academic performance overestimates.

  4. Kaplan-Meier survival analysis overestimates cumulative incidence of health-related events in competing risk settings: a meta-analysis.

    Science.gov (United States)

    Lacny, Sarah; Wilson, Todd; Clement, Fiona; Roberts, Derek J; Faris, Peter; Ghali, William A; Marshall, Deborah A

    2018-01-01

    Kaplan-Meier survival analysis overestimates cumulative incidence in competing risks (CRs) settings. The extent of overestimation (or its clinical significance) has been questioned, and CRs methods are infrequently used. This meta-analysis compares the Kaplan-Meier method to the cumulative incidence function (CIF), a CRs method. We searched MEDLINE, EMBASE, BIOSIS Previews, Web of Science (1992-2016), and article bibliographies for studies estimating cumulative incidence using the Kaplan-Meier method and CIF. For studies with sufficient data, we calculated pooled risk ratios (RRs) comparing Kaplan-Meier and CIF estimates using DerSimonian and Laird random effects models. We performed stratified meta-analyses by clinical area, rate of CRs (CRs/events of interest), and follow-up time. Of 2,192 identified abstracts, we included 77 studies in the systematic review and meta-analyzed 55. The pooled RR demonstrated the Kaplan-Meier estimate was 1.41 [95% confidence interval (CI): 1.36, 1.47] times higher than the CIF. Overestimation was highest among studies with high rates of CRs [RR = 2.36 (95% CI: 1.79, 3.12)], studies related to hepatology [RR = 2.60 (95% CI: 2.12, 3.19)], and obstetrics and gynecology [RR = 1.84 (95% CI: 1.52, 2.23)]. The Kaplan-Meier method overestimated the cumulative incidence across 10 clinical areas. Using CRs methods will ensure accurate results inform clinical and policy decisions. Copyright © 2017 Elsevier Inc. All rights reserved.

  5. Prediction Equations Overestimate the Energy Requirements More for Obesity-Susceptible Individuals.

    Science.gov (United States)

    McLay-Cooke, Rebecca T; Gray, Andrew R; Jones, Lynnette M; Taylor, Rachael W; Skidmore, Paula M L; Brown, Rachel C

    2017-09-13

    Predictive equations to estimate resting metabolic rate (RMR) are often used in dietary counseling and by online apps to set energy intake goals for weight loss. It is critical to know whether such equations are appropriate for those susceptible to obesity. We measured RMR by indirect calorimetry after an overnight fast in 26 obesity susceptible (OSI) and 30 obesity resistant (ORI) individuals, identified using a simple 6-item screening tool. Predicted RMR was calculated using the FAO/WHO/UNU (Food and Agricultural Organisation/World Health Organisation/United Nations University), Oxford and Miflin-St Jeor equations. Absolute measured RMR did not differ significantly between OSI versus ORI (6339 vs. 5893 kJ·d -1 , p = 0.313). All three prediction equations over-estimated RMR for both OSI and ORI when measured RMR was ≤5000 kJ·d -1 . For measured RMR ≤7000 kJ·d -1 there was statistically significant evidence that the equations overestimate RMR to a greater extent for those classified as obesity susceptible with biases ranging between around 10% to nearly 30% depending on the equation. The use of prediction equations may overestimate RMR and energy requirements particularly in those who self-identify as being susceptible to obesity, which has implications for effective weight management.

  6. Influencing Factors on the Overestimation of Self-Reported Physical Activity: A Cross-Sectional Analysis of Low Back Pain Patients and Healthy Controls

    Directory of Open Access Journals (Sweden)

    Andrea Schaller

    2016-01-01

    Full Text Available Introduction. The aim of the present study was to determine the closeness of agreement between a self-reported and an objective measure of physical activity in low back pain patients and healthy controls. Beyond, influencing factors on overestimation were identified. Methods. 27 low back pain patients and 53 healthy controls wore an accelerometer (objective measure for seven consecutive days and answered a questionnaire on physical activity (self-report over the same period of time. Differences between self-reported and objective data were tested by Wilcoxon test. Bland-Altman analysis was conducted for describing the closeness of agreement. Linear regression models were calculated to identify the influence of age, sex, and body mass index on the overestimation by self-report. Results. Participants overestimated self-reported moderate activity in average by 42 min/day (p=0.003 and vigorous activity by 39 min/day (p<0.001. Self-reported sedentary time was underestimated by 122 min/day (p<0.001. No individual-related variables influenced the overestimation of physical activity. Low back pain patients were more likely to underestimate sedentary time compared to healthy controls. Discussion. In rehabilitation and health promotion, the application-oriented measurement of physical activity remains a challenge. The present results contradict other studies that had identified an influence of age, sex, and body mass index on the overestimation of physical activity.

  7. Calcified Plaque of Coronary Artery: Factors Influencing Overestimation of Coronary Artery Stenosis on Coronary CT Angiography

    International Nuclear Information System (INIS)

    Kim, Mok Hee; Kim, Yun Hyeon; Choi, Song; Seon, Hyun Ju; Jeong, Gwang Woo; Park, Jin Gyoon; Kang, Heoung Keun; Ko, Joon Seok

    2010-01-01

    To assess the influence of calcified plaque characteristics on the overestimation of coronary arterial stenosis on a coronary CT angiography (CCTA). The study included 271 coronary arteries with calcified plaques identified by CCTA, and based on 928 coronary arteries from 232 patients who underwent both CCTA and invasive coronary angiography (ICA). Individual coronary arteries were classified into two groups by agreement based on the degree of stenosis from each CCTA and ICA: 1) group A includes patients with concordant CCTA and ICA results and, 2) group B includes patients with an overestimation of CCTA compared to ICA. Parameters including total calcium score, calcium score of an individual coronary artery, calcium burden number of an individual coronary artery, and the density of each calcified plaque (calcium score / number of calcium burden) for each individual coronary artery were compared between the two groups. Of the 271 coronary arteries, 164 (60.5%) were overestimated on CCTA. The left anterior descending artery (LAD) had a significantly low rate of overestimation (47.1%) compared to the other coronary arteries (p=0.001). No significant differences for total calcium score, calcium score of individual coronary artery, and the density of each calcified plaque from individual coronary arteries between two groups was observed. However, a decreasing tendency for the rate of overestimation on CCTA was observed with an increase in calcium burden of individual coronary arteries (p<0.05). The evaluation of coronary arteries suggests that the degree of coronary arterial stenosis had a tendency to be overestimated by calcified plaques on CCTA. However, the rate of overestimation for the degree of coronary arterial stenosis by calcified plaques was not significantly influenced by total calcium score, calcium score of individual coronary artery, and density of each calcified plaque

  8. Forgetting to remember our experiences: People overestimate how much they will retrospect about personal events.

    Science.gov (United States)

    Tully, Stephanie; Meyvis, Tom

    2017-12-01

    People value experiences in part because of the memories they create. Yet, we find that people systematically overestimate how much they will retrospect about their experiences. This overestimation results from people focusing on their desire to retrospect about experiences, while failing to consider the experience's limited enduring accessibility in memory. Consistent with this view, we find that desirability is a stronger predictor of forecasted retrospection than it is of reported retrospection, resulting in greater overestimation when the desirability of retrospection is higher. Importantly, the desire to retrospect does not change over time. Instead, past experiences become less top-of-mind over time and, as a result, people simply forget to remember. In line with this account, our results show that obtaining physical reminders of an experience reduces the overestimation of retrospection by increasing how much people retrospect, bringing their realized retrospection more in line with their forecasts (and aspirations). We further observe that the extent to which reported retrospection falls short of forecasted retrospection reliably predicts declining satisfaction with an experience over time. Despite this potential negative consequence of retrospection falling short of expectations, we suggest that the initial overestimation itself may in fact be adaptive. This possibility and other potential implications of this work are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Voltage and pace-capture mapping of linear ablation lesions overestimates chronic ablation gap size.

    Science.gov (United States)

    O'Neill, Louisa; Harrison, James; Chubb, Henry; Whitaker, John; Mukherjee, Rahul K; Bloch, Lars Ølgaard; Andersen, Niels Peter; Dam, Høgni; Jensen, Henrik K; Niederer, Steven; Wright, Matthew; O'Neill, Mark; Williams, Steven E

    2018-04-26

    Conducting gaps in lesion sets are a major reason for failure of ablation procedures. Voltage mapping and pace-capture have been proposed for intra-procedural identification of gaps. We aimed to compare gap size measured acutely and chronically post-ablation to macroscopic gap size in a porcine model. Intercaval linear ablation was performed in eight Göttingen minipigs with a deliberate gap of ∼5 mm left in the ablation line. Gap size was measured by interpolating ablation contact force values between ablation tags and thresholding at a low force cut-off of 5 g. Bipolar voltage mapping and pace-capture mapping along the length of the line were performed immediately, and at 2 months, post-ablation. Animals were euthanized and gap sizes were measured macroscopically. Voltage thresholds to define scar were determined by receiver operating characteristic analysis as voltage, pace-capture, and ablation contact force maps. All modalities overestimated chronic gap size, by 1.4 ± 2.0 mm (ablation contact force map), 5.1 ± 3.4 mm (pace-capture), and 9.5 ± 3.8 mm (voltage mapping). Error on ablation contact force map gap measurements were significantly less than for voltage mapping (P = 0.003, Tukey's multiple comparisons test). Chronically, voltage mapping and pace-capture mapping overestimated macroscopic gap size by 11.9 ± 3.7 and 9.8 ± 3.5 mm, respectively. Bipolar voltage and pace-capture mapping overestimate the size of chronic gap formation in linear ablation lesions. The most accurate estimation of chronic gap size was achieved by analysis of catheter-myocardium contact force during ablation.

  10. Overestimation of own body weights in female university students: associations with lifestyles, weight control behaviors and depression.

    Science.gov (United States)

    Kim, Miso; Lee, Hongmie

    2010-12-01

    The study aimed to analyze the lifestyles, weight control behavior, dietary habits, and depression of female university students. The subjects were 532 students from 8 universities located in 4 provinces in Korea. According to percent ideal body weight, 33 (6.4%), 181 (34.0%), 283 (53.2%), 22 (4.1%) and 13 (2.5%) were severely underweight, underweight, normal, overweight and obese, respectively, based on self-reported height and weight. As much as 64.1% and only 2.4%, respectively, overestimated and underestimated their body weight status. Six overweight subjects were excluded from overestimation group for the purpose of this study, resulting in overestimation group consisting of only underweight and normal weight subjects. Compared to those from the normal perception group, significantly more subjects from the overestimation group were currently smoking (P = 0.017) and drank more often than once a week (P = 0.015), without any significant differences in dietary habits. Despite similar BMIs, subjects who overestimated their own weight statuses had significantly higher weight dissatisfaction (P = 0.000), obesity stress (P = 0.000), obsession to lose weight (P = 0.007) and depression (P = 0.018). Also, more of them wanted to lose weight (P = 0.000), checked their body weights more often than once a week (P = 0.025) and had dieting experiences using 'reducing meal size' (P = 0.012), 'reducing snacks' (P = 0.042) and 'taking prescribed pills' (P = 0.032), and presented 'for a wider range of clothes selection' as the reason for weight loss (P = 0.039), although none was actually overweight or obese. Unlike the case with overestimating one's own weight, being overweight was associated with less drinking (P = 0.035) and exercising more often (P = 0.001) and for longer (P = 0.001) and healthier reasons for weight control (P = 0.002), despite no differences in frequency of weighing and depression. The results showed that weight overestimation, independent of weight status

  11. Overestimation of Knowledge about Word Meanings: The "Misplaced Meaning" Effect

    Science.gov (United States)

    Kominsky, Jonathan F.; Keil, Frank C.

    2014-01-01

    Children and adults may not realize how much they depend on external sources in understanding word meanings. Four experiments investigated the existence and developmental course of a "Misplaced Meaning" (MM) effect, wherein children and adults overestimate their knowledge about the meanings of various words by underestimating how much…

  12. Mobility overestimation due to gated contacts in organic field-effect transistors

    Science.gov (United States)

    Bittle, Emily G.; Basham, James I.; Jackson, Thomas N.; Jurchescu, Oana D.; Gundlach, David J.

    2016-01-01

    Parameters used to describe the electrical properties of organic field-effect transistors, such as mobility and threshold voltage, are commonly extracted from measured current–voltage characteristics and interpreted by using the classical metal oxide–semiconductor field-effect transistor model. However, in recent reports of devices with ultra-high mobility (>40 cm2 V−1 s−1), the device characteristics deviate from this idealized model and show an abrupt turn-on in the drain current when measured as a function of gate voltage. In order to investigate this phenomenon, here we report on single crystal rubrene transistors intentionally fabricated to exhibit an abrupt turn-on. We disentangle the channel properties from the contact resistance by using impedance spectroscopy and show that the current in such devices is governed by a gate bias dependence of the contact resistance. As a result, extracted mobility values from d.c. current–voltage characterization are overestimated by one order of magnitude or more. PMID:26961271

  13. Gun Carrying by High School Students in Boston, MA: Does Overestimation of Peer Gun Carrying Matter?

    Science.gov (United States)

    Hemenway, David; Vriniotis, Mary; Johnson, Renee M.; Miller, Matthew; Azrael, Deborah

    2011-01-01

    This paper investigates: (1) whether high school students overestimate gun carrying by their peers, and (2) whether those students who overestimate peer gun carrying are more likely to carry firearms. Data come from a randomly sampled survey conducted in 2008 of over 1700 high school students in Boston, MA. Over 5% of students reported carrying a…

  14. Partners' Overestimation of Patients' Pain Severity: Relationships with Partners' Interpersonal Responses.

    Science.gov (United States)

    Junghaenel, Doerte U; Schneider, Stefan; Broderick, Joan E

    2017-09-26

    The present study examined whether concordance between patients' and their partners' reports of patient pain severity relates to partners' social support and behavioral responses in couples coping with chronic pain. Fifty-two couples completed questionnaires about the patient's pain severity. Both dyad members also rated the partner's social support and negative, solicitous, and distracting responses toward the patient when in pain. Bivariate correlations showed moderate correspondence between patient and partner ratings of pain severity (r = 0.55) and negative (r = 0.46), solicitous (r = 0.47), and distracting responses (r = 0.53), but lower correspondence for social support (r = 0.28). Twenty-eight couples (54%) were concordant in their perceptions of patient pain; partners overestimated pain in 14 couples (27%), and partners underestimated pain in 10 couples (19%). Couple concordance in pain perceptions was not related to patients' reports; however, it significantly predicted partners' reports: Partners who overestimated pain reported giving more social support (β = 0.383, P = 0.016), fewer negative responses (β = -0.332, P = 0.029), and more solicitous responses (β = 0.438, P = 0.016) than partners who were in agreement or who underestimated pain. Partner overestimation of pain severity is associated with partner-reported but not with patient-reported support-related responses. This finding has important clinical implications for couple interventions in chronic pain. © 2017 American Academy of Pain Medicine. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  15. Evaluation of water vapor distribution in general circulation models using satellite observations

    Science.gov (United States)

    Soden, Brian J.; Bretherton, Francis P.

    1994-01-01

    This paper presents a comparison of the water vapor distribution obtained from two general circulation models, the European Centre for Medium-Range Weather Forecasts (ECMWF) model and the National Center for Atmospheric Research (NCAR) Community Climate Model (CCM), with satellite observations of total precipitable water (TPW) from Special Sensor Microwave/Imager (SSM/I) and upper tropospheric relative humidity (UTH) from GOES. Overall, both models are successful in capturing the primary features of the observed water vapor distribution and its seasonal variation. For the ECMWF model, however, a systematic moist bias in TPW is noted over well-known stratocumulus regions in the eastern subtropical oceans. Comparison with radiosonde profiles suggests that this problem is attributable to difficulties in modeling the shallowness of the boundary layer and large vertical water vapor gradients which characterize these regions. In comparison, the CCM is more successful in capturing the low values of TPW in the stratocumulus regions, although it tends to exhibit a dry bias over the eastern half of the subtropical oceans and a corresponding moist bias in the western half. The CCM also significantly overestimates the daily variability of the moisture fields in convective regions, suggesting a problem in simulating the temporal nature of moisture transport by deep convection. Comparison of the monthly mean UTH distribution indicates generally larger discrepancies than were noted for TPW owing to the greater influence of large-scale dynamical processes in determining the distribution of UTH. In particular, the ECMWF model exhibits a distinct dry bias along the Intertropical Convergence Zone (ITCZ) and a moist bias over the subtropical descending branches of the Hadley cell, suggesting an underprediction in the strength of the Hadley circulation. The CCM, on the other hand, demonstrates greater discrepancies in UTH than are observed for the ECMWF model, but none that are as

  16. Debate on the Chernobyl disaster: on the causes of Chernobyl overestimation.

    Science.gov (United States)

    Jargin, Sergei V

    2012-01-01

    After the Chernobyl accident, many publications appeared that overestimated its medical consequences. Some of them are discussed in this article. Among the motives for the overestimation were anti-nuclear sentiments, widespread among some adherents of the Green movement; however, their attitude has not been wrong: nuclear facilities should have been prevented from spreading to overpopulated countries governed by unstable regimes and regions where conflicts and terrorism cannot be excluded. The Chernobyl accident has hindered worldwide development of atomic industry. Today, there are no alternatives to nuclear power: nonrenewable fossil fuels will become more and more expensive, contributing to affluence in the oil-producing countries and poverty in the rest of the world. Worldwide introduction of nuclear energy will become possible only after a concentration of authority within an efficient international executive. This will enable construction of nuclear power plants in optimally suitable places, considering all sociopolitical, geographic, geologic, and other preconditions. In this way, accidents such as that in Japan in 2011 will be prevented.

  17. Skills of General Circulation and Earth System Models in reproducing streamflow to the ocean: the case of Congo river

    Science.gov (United States)

    Santini, M.; Caporaso, L.

    2017-12-01

    Although the importance of water resources in the context of climate change, it is still difficult to correctly simulate the freshwater cycle over the land via General Circulation and Earth System Models (GCMs and ESMs). Existing efforts from the Climate Model Intercomparison Project 5 (CMIP5) were mainly devoted to the validation of atmospheric variables like temperature and precipitation, with low attention to discharge.Here we investigate the present-day performances of GCMs and ESMs participating to CMIP5 in simulating the discharge of the river Congo to the sea thanks to: i) the long-term availability of discharge data for the Kinshasa hydrological station representative of more than 95% of the water flowing in the whole catchment; and ii) the River's still low influence by human intervention, which enables comparison with the (mostly) natural streamflow simulated within CMIP5.Our findings suggest how most of models appear overestimating the streamflow in terms of seasonal cycle, especially in the late winter and spring, while overestimation and variability across models are lower in late summer. Weighted ensemble means are also calculated, based on simulations' performances given by several metrics, showing some improvements of results.Although simulated inter-monthly and inter-annual percent anomalies do not appear significantly different from those in observed data, when translated into well consolidated indicators of drought attributes (frequency, magnitude, timing, duration), usually adopted for more immediate communication to stakeholders and decision makers, such anomalies can be misleading.These inconsistencies produce incorrect assessments towards water management planning and infrastructures (e.g. dams or irrigated areas), especially if models are used instead of measurements, as in case of ungauged basins or for basins with insufficient data, as well as when relying on models for future estimates without a preliminary quantification of model biases.

  18. The Surface Energy Balance at Local and Regional Scales-A Comparison of General Circulation Model Results with Observations.

    Science.gov (United States)

    Garratt, J. R.; Krummel, P. B.; Kowalczyk, E. A.

    1993-06-01

    Aspects of the mean monthly energy balance at continental surfaces are examined by appeal to the results of general circulation model (GCM) simulations, climatological maps of surface fluxes, and direct observations. Emphasis is placed on net radiation and evaporation for (i) five continental regions (each approximately 20°×150°) within Africa, Australia, Eurasia, South America, and the United States; (ii) a number of continental sites in both hemispheres. Both the mean monthly values of the local and regional fluxes and the mean monthly diurnal cycles of the local fluxes are described. Mostly, GCMs tend to overestimate the mean monthly levels of net radiation by about 15% -20% on an annual basis, for observed annual values in the range 50 to 100 Wm2. This is probably the result of several deficiencies, including (i) continental surface albedos being undervalued in a number of the models, resulting in overestimates of the net shortwave flux at the surface (though this deficiency is steadily being addressed by modelers); (ii) incoming shortwave fluxes being overestimated due to uncertainties in cloud schemes and clear-sky absorption; (iii) land-surface temperatures being under-estimated resulting in an underestimate of the outgoing longwave flux. In contrast, and even allowing for the poor observational base for evaporation, there is no obvious overall bias in mean monthly levels of evaporation determined in GCMS, with one or two exceptions. Rather, and far more so than with net radiation, there is a wide range in values of evaporation for all regions investigated. For continental regions and at times of the year of low to moderate rainfall, there is a tendency for the simulated evaporation to be closely related to the precipitation-this is not surprising. In contrast, for regions where there is sufficient or excessive rainfall, the evaporation tends to follow the behavior of the net radiation. Again, this is not surprising given the close relation between

  19. Evaluation of dust and trace metal estimates from the Community Multiscale Air Quality (CMAQ model version 5.0

    Directory of Open Access Journals (Sweden)

    K. W. Appel

    2013-07-01

    Full Text Available The Community Multiscale Air Quality (CMAQ model is a state-of-the-science air quality model that simulates the emission, transformation, transport, and fate of the many different air pollutant species that comprise particulate matter (PM, including dust (or soil. The CMAQ model version 5.0 (CMAQv5.0 has several enhancements over the previous version of the model for estimating the emission and transport of dust, including the ability to track the specific elemental constituents of dust and have the model-derived concentrations of those elements participate in chemistry. The latest version of the model also includes a parameterization to estimate emissions of dust due to wind action. The CMAQv5.0 modeling system was used to simulate the entire year 2006 for the continental United States, and the model estimates were evaluated against daily surface-based measurements from several air quality networks. The CMAQ modeling system overall did well replicating the observed soil concentrations in the western United States (mean bias generally around ±0.5 μg m−3; however, the model consistently overestimated the observed soil concentrations in the eastern United States (mean bias generally between 0.5–1.5 μg m−3, regardless of season. The performance of the individual trace metals was highly dependent on the network, species, and season, with relatively small biases for Fe, Al, Si, and Ti throughout the year at the Interagency Monitoring of Protected Visual Environments (IMPROVE sites, while Ca, K, and Mn were overestimated and Mg underestimated. For the urban Chemical Speciation Network (CSN sites, Fe, Mg, and Mn, while overestimated, had comparatively better performance throughout the year than the other trace metals, which were consistently overestimated, including very large overestimations of Al (380%, Ti (370% and Si (470% in the fall. An underestimation of nighttime mixing in the urban areas appears to contribute to the overestimation of

  20. Coronal 2D MR cholangiography overestimates the length of the right hepatic duct in liver transplantation donors

    International Nuclear Information System (INIS)

    Kim, Bohyun; Kim, Kyoung Won; Kim, So Yeon; Park, So Hyun; Lee, Jeongjin; Song, Gi Won; Jung, Dong-Hwan; Ha, Tae-Yong; Lee, Sung Gyu

    2017-01-01

    To compare the length of the right hepatic duct (RHD) measured on rotatory coronal 2D MR cholangiography (MRC), rotatory axial 2D MRC, and reconstructed 3D MRC. Sixty-seven donors underwent coronal and axial 2D projection MRC and 3D MRC. RHD length was measured and categorized as ultrashort (≤1 mm), short (>1-14 mm), and long (>14 mm). The measured length, frequency of overestimation, and the degree of underestimation between two 2D MRC sets were compared to 3D MRC. The length of the RHD from 3D MRC, coronal 2D MRC, and axial 2D MRC showed significant difference (p < 0.05). RHD was frequently overestimated on the coronal than on axial 2D MRC (61.2 % vs. 9 %; p <.0001). On coronal 2D MRC, four (6 %) with short RHD and one (1.5 %) with ultrashort RHD were over-categorized as long RHD. On axial 2D MRC, overestimation was mostly <1 mm (83.3 %), none exceeding 3 mm or over-categorized. The degree of underestimation between the two projection planes was comparable. Coronal 2D MRC overestimates the RHD in liver donors. We suggest adding axial 2D MRC to conventional coronal 2D MRC in the preoperative workup protocol for living liver donors to avoid unexpected confrontation with multiple ductal openings when harvesting the graft. (orig.)

  1. Coronal 2D MR cholangiography overestimates the length of the right hepatic duct in liver transplantation donors

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Bohyun [University of Ulsan College of Medicine, Department of Radiology, Asan Medical Center, 88, Olympic-ro 43-gil, Songpa-gu, Seoul (Korea, Republic of); Ajou University School of Medicine, Department of Radiology, Ajou University Medical Center, Suwon (Korea, Republic of); Kim, Kyoung Won; Kim, So Yeon; Park, So Hyun [University of Ulsan College of Medicine, Department of Radiology, Asan Medical Center, 88, Olympic-ro 43-gil, Songpa-gu, Seoul (Korea, Republic of); Lee, Jeongjin [Soongsil University, School of Computer Science and Engineering, Seoul (Korea, Republic of); Song, Gi Won; Jung, Dong-Hwan; Ha, Tae-Yong; Lee, Sung Gyu [University of Ulsan College of Medicine, Department of Surgery, Division of Hepatobiliary and Liver Transplantation Surgery, Asan Medical Center, Seoul (Korea, Republic of)

    2017-05-15

    To compare the length of the right hepatic duct (RHD) measured on rotatory coronal 2D MR cholangiography (MRC), rotatory axial 2D MRC, and reconstructed 3D MRC. Sixty-seven donors underwent coronal and axial 2D projection MRC and 3D MRC. RHD length was measured and categorized as ultrashort (≤1 mm), short (>1-14 mm), and long (>14 mm). The measured length, frequency of overestimation, and the degree of underestimation between two 2D MRC sets were compared to 3D MRC. The length of the RHD from 3D MRC, coronal 2D MRC, and axial 2D MRC showed significant difference (p < 0.05). RHD was frequently overestimated on the coronal than on axial 2D MRC (61.2 % vs. 9 %; p <.0001). On coronal 2D MRC, four (6 %) with short RHD and one (1.5 %) with ultrashort RHD were over-categorized as long RHD. On axial 2D MRC, overestimation was mostly <1 mm (83.3 %), none exceeding 3 mm or over-categorized. The degree of underestimation between the two projection planes was comparable. Coronal 2D MRC overestimates the RHD in liver donors. We suggest adding axial 2D MRC to conventional coronal 2D MRC in the preoperative workup protocol for living liver donors to avoid unexpected confrontation with multiple ductal openings when harvesting the graft. (orig.)

  2. Volume-Dependent Overestimation of Spontaneous Intracerebral Hematoma Volume by the ABC/2 Formula

    International Nuclear Information System (INIS)

    Chih-Wei Wang; Chun-Jung Juan; Hsian-He Hsu; Hua-Shan Liu; Cheng-Yu Chen; Chun-Jen Hsueh; Hung-Wen Kao; Guo-Shu Huang; Yi-Jui Liu; Chung-Ping Lo

    2009-01-01

    Background: Although the ABC/2 formula has been widely used to estimate the volume of intracerebral hematoma (ICH), the formula tends to overestimate hematoma volume. The volume-related imprecision of the ABC/2 formula has not been documented quantitatively. Purpose: To investigate the volume-dependent overestimation of the ABC/2 formula by comparing it with computer-assisted volumetric analysis (CAVA). Material and Methods: Forty patients who had suffered spontaneous ICH and who had undergone non-enhanced brain computed tomography scans were enrolled in this study. The ICH volume was estimated based on the ABC/2 formula and also calculated by CAVA. Based on the ICH volume calculated by the CAVA method, the patients were divided into three groups: group 1 consisted of 17 patients with an ICH volume of less than 20 ml; group 2 comprised 13 patients with an ICH volume of 20 to 40 ml; and group 3 was composed of 10 patients with an ICH volume larger than 40 ml. Results: The mean estimated hematoma volume was 43.6 ml when using the ABC/2 formula, compared with 33.8 ml when using the CAVA method. The mean estimated difference was 1.3 ml, 4.4 ml, and 31.4 ml for groups 1, 2, and 3, respectively, corresponding to an estimation error of 9.9%, 16.7%, and 37.1% by the ABC/2 formula (P<0.05). Conclusion: The ABC/2 formula significantly overestimates the volume of ICH. A positive association between the estimation error and the volume of ICH is demonstrated

  3. Overestimation of Knowledge About Word Meanings: The “Misplaced Meaning” Effect

    OpenAIRE

    Kominsky, Jonathan F.; Keil, Frank C.

    2014-01-01

    Children and adults may not realize how much they depend on external sources in understanding word meanings. Four experiments investigated the existence and developmental course of a “Misplaced Meaning” (MM) effect, wherein children and adults overestimate their knowledge about the meanings of various words by underestimating how much they rely on outside sources to determine precise reference. Studies 1 & 2 demonstrate that children and adults show a highly consistent MM effect, and that it ...

  4. The number of patients and events required to limit the risk of overestimation of intervention effects in meta-analysis--a simulation study

    DEFF Research Database (Denmark)

    Thorlund, Kristian; Imberger, Georgina; Walsh, Michael

    2011-01-01

    Meta-analyses including a limited number of patients and events are prone to yield overestimated intervention effect estimates. While many assume bias is the cause of overestimation, theoretical considerations suggest that random error may be an equal or more frequent cause. The independent impact...... of random error on meta-analyzed intervention effects has not previously been explored. It has been suggested that surpassing the optimal information size (i.e., the required meta-analysis sample size) provides sufficient protection against overestimation due to random error, but this claim has not yet been...

  5. Overestimation of infant and toddler energy intake by 24-h recall compared with weighed food records.

    Science.gov (United States)

    Fisher, Jennifer O; Butte, Nancy F; Mendoza, Patricia M; Wilson, Theresa A; Hodges, Eric A; Reidy, Kathleen C; Deming, Denise

    2008-08-01

    Twenty-four-hour dietary recalls have been used in large surveys of infant and toddler energy intake, but the accuracy of the method for young children is not well documented. We aimed to determine the accuracy of infant and toddler energy intakes by a single, telephone-administered, multiple-pass 24-h recall as compared with 3-d weighed food records. A within-subjects design was used in which a 24-h recall and 3-d weighed food records were completed within 2 wk by 157 mothers (56 non-Hispanic white, 51 non-Hispanic black, and 50 Hispanic) of 7-11-mo-old infants or 12-24-mo-old toddlers. Child and caregiver anthropometrics, child eating patterns, and caregiver demographics and social desirability were evaluated as correlates of reporting bias. Intakes based on 3-d weighed food records were within 5% of estimated energy requirements. Compared with the 3-d weighed food records, the 24-h recall overestimated energy intake by 13% among infants (740 +/- 154 and 833 +/- 255 kcal, respectively) and by 29% among toddlers (885 +/- 197 and 1140 +/- 299 kcal, respectively). Eating patterns (ie, frequency and location) did not differ appreciably between methods. Macronutrient and micronutrient intakes were higher by 24-h recall than by 3-d weighed food record. Dairy and grains contributed the most energy to the diet and accounted for 74% and 54% of the overestimation seen in infants and toddlers, respectively. Greater overestimation was associated with a greater number of food items reported by the caregiver and lower child weight-for-length z scores. The use of a single, telephone-administered, multiple-pass 24-h recall may significantly overestimate infant or toddler energy and nutrient intakes because of portion size estimation errors.

  6. Ignoring detailed fast-changing dynamics of land use overestimates regional terrestrial carbon sequestration

    Directory of Open Access Journals (Sweden)

    S. Q. Zhao

    2009-08-01

    Full Text Available Land use change is critical in determining the distribution, magnitude and mechanisms of terrestrial carbon budgets at the local to global scales. To date, almost all regional to global carbon cycle studies are driven by a static land use map or land use change statistics with decadal time intervals. The biases in quantifying carbon exchange between the terrestrial ecosystems and the atmosphere caused by using such land use change information have not been investigated. Here, we used the General Ensemble biogeochemical Modeling System (GEMS, along with consistent and spatially explicit land use change scenarios with different intervals (1 yr, 5 yrs, 10 yrs and static, respectively, to evaluate the impacts of land use change data frequency on estimating regional carbon sequestration in the southeastern United States. Our results indicate that ignoring the detailed fast-changing dynamics of land use can lead to a significant overestimation of carbon uptake by the terrestrial ecosystem. Regional carbon sequestration increased from 0.27 to 0.69, 0.80 and 0.97 Mg C ha−1 yr−1 when land use change data frequency shifting from 1 year to 5 years, 10 years interval and static land use information, respectively. Carbon removal by forest harvesting and prolonged cumulative impacts of historical land use change on carbon cycle accounted for the differences in carbon sequestration between static and dynamic land use change scenarios. The results suggest that it is critical to incorporate the detailed dynamics of land use change into local to global carbon cycle studies. Otherwise, it is impossible to accurately quantify the geographic distributions, magnitudes, and mechanisms of terrestrial carbon sequestration at the local to global scales.

  7. Introduction to generalized linear models

    CERN Document Server

    Dobson, Annette J

    2008-01-01

    Introduction Background Scope Notation Distributions Related to the Normal Distribution Quadratic Forms Estimation Model Fitting Introduction Examples Some Principles of Statistical Modeling Notation and Coding for Explanatory Variables Exponential Family and Generalized Linear Models Introduction Exponential Family of Distributions Properties of Distributions in the Exponential Family Generalized Linear Models Examples Estimation Introduction Example: Failure Times for Pressure Vessels Maximum Likelihood Estimation Poisson Regression Example Inference Introduction Sampling Distribution for Score Statistics Taylor Series Approximations Sampling Distribution for MLEs Log-Likelihood Ratio Statistic Sampling Distribution for the Deviance Hypothesis Testing Normal Linear Models Introduction Basic Results Multiple Linear Regression Analysis of Variance Analysis of Covariance General Linear Models Binary Variables and Logistic Regression Probability Distributions ...

  8. Fronts and precipitation in CMIP5 models for the austral winter of the Southern Hemisphere

    Science.gov (United States)

    Blázquez, Josefina; Solman, Silvina A.

    2018-04-01

    Wintertime fronts climatology and the relationship between fronts and precipitation as depicted by a group of CMIP5 models are evaluated over the Southern Hemisphere (SH). The frontal activity is represented by an index that takes into account the vorticity, the gradient of temperature and the specific humidity at the 850 hPa level. ERA-Interim reanalysis and GPCP datasets are used to assess the performance of the models in the present climate. Overall, it is found that the models can reproduce adequately the main features of frontal activity and front frequency over the SH. The total precipitation is overestimated in most of the models, especially the maximum values over the mid latitudes. This overestimation could be related to the high values of precipitation frequency that are identified in some of the models evaluated. The relationship between fronts and precipitation has also been evaluated in terms of both frequency of frontal precipitation and percentage of precipitation due to fronts. In general terms, the models overestimate the proportion between frontal and total precipitation. In contrast with frequency of total precipitation, the frequency of frontal precipitation is well reproduced by the models, with the higher values located at the mid latitudes. The results suggest that models represent very well the dynamic forcing (fronts) and the frequency of frontal precipitation, though the amount of precipitation due to fronts is overestimated.

  9. Testing the generalized partial credit model

    OpenAIRE

    Glas, Cornelis A.W.

    1996-01-01

    The partial credit model (PCM) (G.N. Masters, 1982) can be viewed as a generalization of the Rasch model for dichotomous items to the case of polytomous items. In many cases, the PCM is too restrictive to fit the data. Several generalizations of the PCM have been proposed. In this paper, a generalization of the PCM (GPCM), a further generalization of the one-parameter logistic model, is discussed. The model is defined and the conditional maximum likelihood procedure for the method is describe...

  10. On the intra-seasonal variability within the extratropics in the ECHAM3 general circulation model

    International Nuclear Information System (INIS)

    May, W.

    1994-01-01

    First we consider the GCM's capability to reproduce the midlatitude variability on intra-seasonal time scales by a comparison with observational data (ECMWF analyses). Secondly we assess the possible influence of Sea Surface Temperatures on the intra-seasonal variability by comparing estimates obtained from different simulations performed with ECHAM3 with varying and fixed SST as boundary forcing. The intra-seasonal variability as simulated by ECHAM3 is underestimated over most of the Northern Hemisphere. While the contributions of the high-frequency transient fluctuations are reasonably well captured by the model, ECHAM3 fails to reproduce the observed level of low-frequency intra-seasonal variability. This is mainly due to the underestimation of the variability caused by the ultra-long planetary waves in the Northern Hemisphere midlatitudes by the model. In the Southern Hemisphere midlatitudes, on the other hand, the intra-seasonal variability as simulated by ECHAM3 is generally underestimated in the area north of about 50 southern latitude, but overestimated at higher latitudes. This is the case for the contributions of the high-frequency and the low-frequency transient fluctuations as well. Further, the model indicates a strong tendency for zonal symmetry, in particular with respect to the high-frequency transient fluctuations. While the two sets of simulations with varying and fixed Sea Surface Temepratures as boundary forcing reveal only small regional differences in the Southern Hemisphere, there is a strong response to be found in the Northern Hemisphere. The contributions of the high-frequency transient fluctuations to the intra-seasonal variability are generally stronger in the simulations with fixed SST. Further, the Pacific storm track is shifted slightly poleward in this set of simulations. For the low-frequency intra-seasonal variability the model gives a strong, but regional response to the interannual variations of the SST. (orig.)

  11. Generalized, Linear, and Mixed Models

    CERN Document Server

    McCulloch, Charles E; Neuhaus, John M

    2011-01-01

    An accessible and self-contained introduction to statistical models-now in a modernized new editionGeneralized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects.A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed m

  12. Overestimation of body size in eating disorders and its association to body-related avoidance behavior.

    Science.gov (United States)

    Vossbeck-Elsebusch, Anna N; Waldorf, Manuel; Legenbauer, Tanja; Bauer, Anika; Cordes, Martin; Vocks, Silja

    2015-06-01

    Body-related avoidance behavior, e.g., not looking in the mirror, is a common feature of eating disorders. It is assumed that it leads to insufficient feedback concerning one's own real body form and might thus contribute to distorted mental representation of one's own body. However, this assumption still lacks empirical foundation. Therefore, the aim of the present study was to examine the relationship between misperception of one's own body and body-related avoidance behavior in N = 78 female patients with Bulimia nervosa and eating disorder not otherwise specified. Body-size misperception was assessed using a digital photo distortion technique based on an individual picture of each participant which was taken in a standardized suit. In a regression analysis with body-related avoidance behavior, body mass index and weight and shape concerns as predictors, only body-related avoidance behavior significantly contributed to the explanation of body-size overestimation. This result supports the theoretical assumption that body-related avoidance behavior makes body-size overestimation more likely.

  13. Generalized complex geometry, generalized branes and the Hitchin sigma model

    International Nuclear Information System (INIS)

    Zucchini, Roberto

    2005-01-01

    Hitchin's generalized complex geometry has been shown to be relevant in compactifications of superstring theory with fluxes and is expected to lead to a deeper understanding of mirror symmetry. Gualtieri's notion of generalized complex submanifold seems to be a natural candidate for the description of branes in this context. Recently, we introduced a Batalin-Vilkovisky field theoretic realization of generalized complex geometry, the Hitchin sigma model, extending the well known Poisson sigma model. In this paper, exploiting Gualtieri's formalism, we incorporate branes into the model. A detailed study of the boundary conditions obeyed by the world sheet fields is provided. Finally, it is found that, when branes are present, the classical Batalin-Vilkovisky cohomology contains an extra sector that is related non trivially to a novel cohomology associated with the branes as generalized complex submanifolds. (author)

  14. Effects of uncertainty in model predictions of individual tree volume on large area volume estimates

    Science.gov (United States)

    Ronald E. McRoberts; James A. Westfall

    2014-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...

  15. The prevalence of maternal F cells in a pregnant population and potential overestimation of foeto-maternal haemorrhage as a consequence.

    LENUS (Irish Health Repository)

    Corcoran, Deirdre

    2014-06-12

    Acid elution (AE) is used to estimate foeto-maternal haemorrhage (FMH). However AE cannot differentiate between cells containing foetal or adult haemoglobin F (F cells), potentially leading to false positive results or an overestimate of the amount of FMH. The prevalence of F cells in pregnant populations remains poorly characterised. The purpose of this study was to ascertain the incidence of HbF-containing red cells in our pregnant population using anti-HbF-fluorescein isothiocyanate flow cytometry (anti-HbF FC) and to assess whether its presence leads to a significant overestimate of FMH.

  16. A general consumer-resource population model

    Science.gov (United States)

    Lafferty, Kevin D.; DeLeo, Giulio; Briggs, Cheryl J.; Dobson, Andrew P.; Gross, Thilo; Kuris, Armand M.

    2015-01-01

    Food-web dynamics arise from predator-prey, parasite-host, and herbivore-plant interactions. Models for such interactions include up to three consumer activity states (questing, attacking, consuming) and up to four resource response states (susceptible, exposed, ingested, resistant). Articulating these states into a general model allows for dissecting, comparing, and deriving consumer-resource models. We specify this general model for 11 generic consumer strategies that group mathematically into predators, parasites, and micropredators and then derive conditions for consumer success, including a universal saturating functional response. We further show how to use this framework to create simple models with a common mathematical lineage and transparent assumptions. Underlying assumptions, missing elements, and composite parameters are revealed when classic consumer-resource models are derived from the general model.

  17. Overestimation of molecular and modelling methods and underestimation of traditional taxonomy leads to real problems in assessing and handling of the world's biodiversity.

    Science.gov (United States)

    Löbl, Ivan

    2014-02-27

    Since the 1992 Rio Convention on Biological Diversity, the earth's biodiversity is a matter of constant public interest, but the community of scientists who describe and delimit species in mega-diverse animal groups, i.e. the bulk of global biodiversity, faces ever-increasing impediments. The problems are rooted in poor understanding of specificity of taxonomy, and overestimation of quantitative approaches and modern technology. A high proportion of the animal species still remains to be discovered and studied, so a more balanced approach to the situation is needed.

  18. Longitudinal Biases in the Seychelles Dome Simulated by 34 Ocean-Atmosphere Coupled General Circulation Models

    Science.gov (United States)

    Nagura, M.; Sasaki, W.; Tozuka, T.; Luo, J.; Behera, S. K.; Yamagata, T.

    2012-12-01

    The upwelling dome of the southern tropical Indian Ocean is examined by using simulated results from 34 ocean-atmosphere coupled general circulation models (CGCMs) including those from the phase five of the Coupled Model Intercomparison Project (CMIP5). Among the current set of the 34 CGCMs, 12 models erroneously produce the upwelling dome in the eastern half of the basin while the observed Seychelles Dome is located in the southwestern tropical Indian Ocean (Figure 1). The annual mean Ekman pumping velocity is almost zero in the southern off-equatorial region in these models. This is in contrast with the observations that show Ekman upwelling as the cause of the Seychelles Dome. In the models that produce the dome in the eastern basin, the easterly biases are prominent along the equator in boreal summer and fall that cause shallow thermocline biases along the Java and Sumatra coasts via Kelvin wave dynamics and result in a spurious upwelling dome there. In addition, these models tend to overestimate (underestimate) the magnitude of annual (semiannual) cycle of thermocline depth variability in the dome region, which is another consequence of the easterly wind biases in boreal summer-fall. Compared to the CMIP3 models (Yokoi et al. 2009), the CMIP5 models are even worse in simulating the dome longitudes and magnitudes of annual and semiannual cycles of thermocline depth variability in the dome region. Considering the increasing need to understand regional impacts of climate modes, these results may give serious caveats to interpretation of model results and help in further model developments.; Figure 1: The longitudes of the shallowest annual-mean D20 in 5°S-12°S. The open and filled circles are for the observations and the CGCMs, respectively.

  19. Responsibility/Threat Overestimation Moderates the Relationship Between Contamination-Based Disgust and Obsessive-Compulsive Concerns About Sexual Orientation.

    Science.gov (United States)

    Ching, Terence H W; Williams, Monnica T; Siev, Jedidiah; Olatunji, Bunmi O

    2018-05-01

    Disgust has been shown to perform a "disease-avoidance" function in contamination fears. However, no studies have examined the relevance of disgust to obsessive-compulsive (OC) concerns about sexual orientation (e.g., fear of one's sexual orientation transforming against one's will, and compulsive avoidance of same-sex and/or gay or lesbian individuals to prevent that from happening). Therefore, we investigated whether the specific domain of contamination-based disgust (i.e., evoked by the perceived threat of transmission of essences between individuals) predicted OC concerns about sexual orientation, and whether this effect was moderated/amplified by obsessive beliefs, in evaluation of a "sexual orientation transformation-avoidance" function. We recruited 283 self-identified heterosexual college students (152 females, 131 males; mean age = 20.88 years, SD = 3.19) who completed three measures assessing disgust, obsessive beliefs, and OC concerns about sexual orientation. Results showed that contamination-based disgust (β = .17), responsibility/threat overestimation beliefs (β = .15), and their interaction (β = .17) each uniquely predicted OC concerns about sexual orientation, ts = 2.22, 2.50, and 2.90, ps contamination-based disgust accompanied by strong responsibility/threat overestimation beliefs predicted more severe OC concerns about sexual orientation, β = .48, t = 3.24, p contamination-based disgust, and exacerbated by responsibility/threat overestimation beliefs. Treatment for OC concerns about sexual orientation should target such beliefs.

  20. The use of Chernobyl fallout to test model predictions of the transfer of radioiodine from air to vegetation to milk

    International Nuclear Information System (INIS)

    Hoffman, F.O.; Amaral, E.

    1989-01-01

    Comparison of observed values with model predictions indicate a tendency for the models to overpredict the air-vegetation-milk transfer of Chernobyl I-131 by one to two orders of magnitude. Detailed analysis of the data indicated that, in general, most overpredictions were accounted for by the portion of the air-pasture-cow-milk pathway dealing with the transfer from air to pasture vegetation rather than the transfer from vegetation to milk. A partial analysis using available data to infer site-specific conditions and parameter values indicates that differences between model predictions and observations can be explained by: 1) overestimation of the fraction of the total amount of I-131 in air that was present as molecular vapour, 2) overestimation of wet and dry deposition of elemental and organic iodine and particulate aerosols, 3) overestimation of initial vegetation interception of material deposited during sever thunderstorms, 4) underestimation of the rates of weathering and growth dilution of material deposited on vegetation during periods of spring growth, 5) underestimation of the amount of uncontaminated feed consumed by dairy cows, and 6) overestimation of the diet-to-milk transfer coefficient for I-131. (orig./HP)

  1. Multivariate generalized linear mixed models using R

    CERN Document Server

    Berridge, Damon Mark

    2011-01-01

    Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...

  2. Overestimation of test performance by ROC analysis: Effect of small sample size

    International Nuclear Information System (INIS)

    Seeley, G.W.; Borgstrom, M.C.; Patton, D.D.; Myers, K.J.; Barrett, H.H.

    1984-01-01

    New imaging systems are often observer-rated by ROC techniques. For practical reasons the number of different images, or sample size (SS), is kept small. Any systematic bias due to small SS would bias system evaluation. The authors set about to determine whether the area under the ROC curve (AUC) would be systematically biased by small SS. Monte Carlo techniques were used to simulate observer performance in distinguishing signal (SN) from noise (N) on a 6-point scale; P(SN) = P(N) = .5. Four sample sizes (15, 25, 50 and 100 each of SN and N), three ROC slopes (0.8, 1.0 and 1.25), and three intercepts (0.8, 1.0 and 1.25) were considered. In each of the 36 combinations of SS, slope and intercept, 2000 runs were simulated. Results showed a systematic bias: the observed AUC exceeded the expected AUC in every one of the 36 combinations for all sample sizes, with the smallest sample sizes having the largest bias. This suggests that evaluations of imaging systems using ROC curves based on small sample size systematically overestimate system performance. The effect is consistent but subtle (maximum 10% of AUC standard deviation), and is probably masked by the s.d. in most practical settings. Although there is a statistically significant effect (F = 33.34, P<0.0001) due to sample size, none was found for either the ROC curve slope or intercept. Overestimation of test performance by small SS seems to be an inherent characteristic of the ROC technique that has not previously been described

  3. Overestimation of closed-chamber soil CO2 effluxes at low atmospheric turbulence

    DEFF Research Database (Denmark)

    Brændholt, Andreas; Larsen, Klaus Steenberg; Ibrom, Andreas

    2017-01-01

    Soil respiration (R-s) is an important component of ecosystem carbon balance, and accurate quantification of the diurnal and seasonal variation of R-s is crucial for a correct interpretation of the response of R-s to biotic and abiotic factors, as well as for estimating annual soil CO2 efflux rates...... be eliminated if proper mixing of air is ensured, and indeed the use of fans removed the overestimation of R-s rates during low u(*). Artificial turbulent air mixing may thus provide a method to overcome the problems of using closed-chamber gas-exchange measurement techniques during naturally occurring low...

  4. Micro Data and General Equilibrium Models

    DEFF Research Database (Denmark)

    Browning, Martin; Hansen, Lars Peter; Heckman, James J.

    1999-01-01

    Dynamic general equilibrium models are required to evaluate policies applied at the national level. To use these models to make quantitative forecasts requires knowledge of an extensive array of parameter values for the economy at large. This essay describes the parameters required for different...... economic models, assesses the discordance between the macromodels used in policy evaluation and the microeconomic models used to generate the empirical evidence. For concreteness, we focus on two general equilibrium models: the stochastic growth model extended to include some forms of heterogeneity...

  5. Glauber model and its generalizations

    International Nuclear Information System (INIS)

    Bialkowski, G.

    The physical aspects of the Glauber model problems are studied: potential model, profile function and Feynman diagrams approaches. Different generalizations of the Glauber model are discussed: particularly higher and lower energy processes and large angles [fr

  6. The generalized circular model

    NARCIS (Netherlands)

    Webers, H.M.

    1995-01-01

    In this paper we present a generalization of the circular model. In this model there are two concentric circular markets, which enables us to study two types of markets simultaneously. There are switching costs involved for moving from one circle to the other circle, which can also be thought of as

  7. Existing creatinine-based equations overestimate glomerular filtration rate in Indians.

    Science.gov (United States)

    Kumar, Vivek; Yadav, Ashok Kumar; Yasuda, Yoshinari; Horio, Masaru; Kumar, Vinod; Sahni, Nancy; Gupta, Krishan L; Matsuo, Seiichi; Kohli, Harbir Singh; Jha, Vivekanand

    2018-02-01

    Accurate estimation of glomerular filtration rate (GFR) is important for diagnosis and risk stratification in chronic kidney disease and for selection of living donors. Ethnic differences have required correction factors in the originally developed creatinine-based GFR estimation equations for populations around the world. Existing equations have not been validated in the vegetarian Indian population. We examined the performance of creatinine and cystatin-based GFR estimating equations in Indians. GFR was measured by urinary clearance of inulin. Serum creatinine was measured using IDMS-traceable Jaffe's and enzymatic assays, and cystatin C by colloidal gold immunoassay. Dietary protein intake was calculated by measuring urinary nitrogen appearance. Bias, precision and accuracy were calculated for the eGFR equations. A total of 130 participants (63 healthy kidney donors and 67 with CKD) were studied. About 50% were vegetarians, and the remainder ate meat 3.8 times every month. The average creatinine excretion were 14.7 mg/kg/day (95% CI: 13.5 to 15.9 mg/kg/day) and 12.4 mg/kg/day (95% CI: 11.2 to 13.6 mg/kg/day) in males and females, respectively. The average daily protein intake was 46.1 g/day (95% CI: 43.2 to 48.8 g/day). The mean mGFR in the study population was 51.66 ± 31.68 ml/min/1.73m 2 . All creatinine-based eGFR equations overestimated GFR (p < 0.01 for each creatinine based eGFR equation). However, eGFR by CKD-EPI Cys was not significantly different from mGFR (p = 0.38). The CKD-EPI Cys exhibited lowest bias [mean bias: -3.53 ± 14.70 ml/min/1.73m 2 (95% CI: -0.608 to -0.98)] and highest accuracy (P 30 : 74.6%). The GFR in the healthy population was 79.44 ± 20.19 (range: 41.90-134.50) ml/min/1.73m 2 . Existing creatinine-based GFR estimating equations overestimate GFR in Indians. An appropriately powered study is needed to develop either a correction factor or a new equation for accurate assessment of kidney function in the

  8. Were mercury emission factors for Chinese non-ferrous metal smelters overestimated? Evidence from onsite measurements in six smelters

    International Nuclear Information System (INIS)

    Zhang Lei; Wang Shuxiao; Wu Qingru; Meng Yang; Yang Hai; Wang Fengyang; Hao Jiming

    2012-01-01

    Non-ferrous metal smelting takes up a large proportion of the anthropogenic mercury emission inventory in China. Zinc, lead and copper smelting are three leading sources. Onsite measurements of mercury emissions were conducted for six smelters. The mercury emission factors were 0.09–2.98 g Hg/t metal produced. Acid plants with the double-conversion double-absorption process had mercury removal efficiency of over 99%. In the flue gas after acid plants, 45–88% was oxidized mercury which can be easily scavenged in the flue gas scrubber. 70–97% of the mercury was removed from the flue gas to the waste water and 1–17% to the sulfuric acid product. Totally 0.3–13.5% of the mercury in the metal concentrate was emitted to the atmosphere. Therefore, acid plants in non-ferrous metal smelters have significant co-benefit on mercury removal, and the mercury emission factors from Chinese non-ferrous metal smelters were probably overestimated in previous studies. - Highlights: ► Acid plants in smelters provide significant co-benefits for mercury removal (over 99%). ► Most of the mercury in metal concentrates for smelting ended up in waste water. ► Previously published emission factors for Chinese metal smelters were probably overestimated. - Acid plants in smelters have high mercury removal efficiency, and thus mercury emission factors for Chinese non-ferrous metal smelters were probably overestimated.

  9. Testing the generalized partial credit model

    NARCIS (Netherlands)

    Glas, Cornelis A.W.

    1996-01-01

    The partial credit model (PCM) (G.N. Masters, 1982) can be viewed as a generalization of the Rasch model for dichotomous items to the case of polytomous items. In many cases, the PCM is too restrictive to fit the data. Several generalizations of the PCM have been proposed. In this paper, a

  10. Generalized Nonlinear Yule Models

    OpenAIRE

    Lansky, Petr; Polito, Federico; Sacerdote, Laura

    2016-01-01

    With the aim of considering models with persistent memory we propose a fractional nonlinear modification of the classical Yule model often studied in the context of macrovolution. Here the model is analyzed and interpreted in the framework of the development of networks such as the World Wide Web. Nonlinearity is introduced by replacing the linear birth process governing the growth of the in-links of each specific webpage with a fractional nonlinear birth process with completely general birth...

  11. A generalized additive regression model for survival times

    DEFF Research Database (Denmark)

    Scheike, Thomas H.

    2001-01-01

    Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models......Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models...

  12. Generalized Nonlinear Yule Models

    Science.gov (United States)

    Lansky, Petr; Polito, Federico; Sacerdote, Laura

    2016-11-01

    With the aim of considering models related to random graphs growth exhibiting persistent memory, we propose a fractional nonlinear modification of the classical Yule model often studied in the context of macroevolution. Here the model is analyzed and interpreted in the framework of the development of networks such as the World Wide Web. Nonlinearity is introduced by replacing the linear birth process governing the growth of the in-links of each specific webpage with a fractional nonlinear birth process with completely general birth rates. Among the main results we derive the explicit distribution of the number of in-links of a webpage chosen uniformly at random recognizing the contribution to the asymptotics and the finite time correction. The mean value of the latter distribution is also calculated explicitly in the most general case. Furthermore, in order to show the usefulness of our results, we particularize them in the case of specific birth rates giving rise to a saturating behaviour, a property that is often observed in nature. The further specialization to the non-fractional case allows us to extend the Yule model accounting for a nonlinear growth.

  13. Climatology of the HOPE-G global ocean general circulation model - Sea ice general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Legutke, S. [Deutsches Klimarechenzentrum (DKRZ), Hamburg (Germany); Maier-Reimer, E. [Max-Planck-Institut fuer Meteorologie, Hamburg (Germany)

    1999-12-01

    The HOPE-G global ocean general circulation model (OGCM) climatology, obtained in a long-term forced integration is described. HOPE-G is a primitive-equation z-level ocean model which contains a dynamic-thermodynamic sea-ice model. It is formulated on a 2.8 grid with increased resolution in low latitudes in order to better resolve equatorial dynamics. The vertical resolution is 20 layers. The purpose of the integration was both to investigate the models ability to reproduce the observed general circulation of the world ocean and to obtain an initial state for coupled atmosphere - ocean - sea-ice climate simulations. The model was driven with daily mean data of a 15-year integration of the atmosphere general circulation model ECHAM4, the atmospheric component in later coupled runs. Thereby, a maximum of the flux variability that is expected to appear in coupled simulations is included already in the ocean spin-up experiment described here. The model was run for more than 2000 years until a quasi-steady state was achieved. It reproduces the major current systems and the main features of the so-called conveyor belt circulation. The observed distribution of water masses is reproduced reasonably well, although with a saline bias in the intermediate water masses and a warm bias in the deep and bottom water of the Atlantic and Indian Oceans. The model underestimates the meridional transport of heat in the Atlantic Ocean. The simulated heat transport in the other basins, though, is in good agreement with observations. (orig.)

  14. The General Education Collaboration Model: A Model for Successful Mainstreaming.

    Science.gov (United States)

    Simpson, Richard L.; Myles, Brenda Smith

    1990-01-01

    The General Education Collaboration Model is designed to support general educators teaching mainstreamed disabled students, through collaboration with special educators. The model is based on flexible departmentalization, program ownership, identification and development of supportive attitudes, student assessment as a measure of program…

  15. A new General Lorentz Transformation model

    International Nuclear Information System (INIS)

    Novakovic, Branko; Novakovic, Alen; Novakovic, Dario

    2000-01-01

    A new general structure of Lorentz Transformations, in the form of General Lorentz Transformation model (GLT-model), has been derived. This structure includes both Lorentz-Einstein and Galilean Transformations as its particular (special) realizations. Since the free parameters of GLT-model have been identified in a gravitational field, GLT-model can be employed both in Special and General Relativity. Consequently, the possibilities of an unification of Einstein's Special and General Theories of Relativity, as well as an unification of electromagnetic and gravitational fields are opened. If GLT-model is correct then there exist four new observation phenomena (a length and time neutrality, and a length dilation and a time contraction). Besides, the well-known phenomena (a length contraction, and a time dilation) are also the constituents of GLT-model. It means that there is a symmetry in GLT-model, where the center of this symmetry is represented by a length and a time neutrality. A time and a length neutrality in a gravitational field can be realized if the velocity of a moving system is equal to the free fall velocity. A time and a length neutrality include an observation of a particle mass neutrality. A special consideration has been devoted to a correlation between GLT-model and a limitation on particle velocities in order to investigate the possibility of a travel time reduction. It is found out that an observation of a particle speed faster then c=299 792 458 m/s, is possible in a gravitational field, if certain conditions are fulfilled

  16. Overestimation of reliability by Guttman’s λ4, λ5, and λ6, and the greatest lower bound

    NARCIS (Netherlands)

    Oosterwijk, P.R.; van der Ark, L.A.; Sijtsma, K.; van der Ark, L.A.; Wiberg, M.; Culpepper, S.A.; Douglas, J.A.; Wang, W.-C.

    2017-01-01

    For methods using statistical optimization to estimate lower bounds to test-score reliability, we investigated the degree to which they overestimate true reliability. Optimization methods do not only exploit real relationships between items but also tend to capitalize on sampling error and do this

  17. The General Aggression Model

    NARCIS (Netherlands)

    Allen, Johnie J.; Anderson, Craig A.; Bushman, Brad J.

    The General Aggression Model (GAM) is a comprehensive, integrative, framework for understanding aggression. It considers the role of social, cognitive, personality, developmental, and biological factors on aggression. Proximate processes of GAM detail how person and situation factors influence

  18. Mixed layer depth calculation in deep convection regions in ocean numerical models

    Science.gov (United States)

    Courtois, Peggy; Hu, Xianmin; Pennelly, Clark; Spence, Paul; Myers, Paul G.

    2017-12-01

    Mixed Layer Depths (MLDs) diagnosed by conventional numerical models are generally based on a density difference with the surface (e.g., 0.01 kg.m-3). However, the temperature-salinity compensation and the lack of vertical resolution contribute to over-estimated MLD, especially in regions of deep convection. In the present work, we examined the diagnostic MLD, associated with the deep convection of the Labrador Sea Water (LSW), calculated with a simple density difference criterion. The over-estimated MLD led us to develop a new tool, based on an observational approach, to recalculate MLD from model output. We used an eddy-permitting, 1/12° regional configuration of the Nucleus for European Modelling of the Ocean (NEMO) to test and discuss our newly defined MLD. We compared our new MLD with that from observations, and we showed a major improvement with our new algorithm. To show the new MLD is not dependent on a single model and its horizontal resolution, we extended our analysis to include 1/4° eddy-permitting simulations, and simulations using the Modular Ocean Model (MOM) model.

  19. Generalized bi-additive modelling for categorical data

    NARCIS (Netherlands)

    P.J.F. Groenen (Patrick); A.J. Koning (Alex)

    2004-01-01

    textabstractGeneralized linear modelling (GLM) is a versatile technique, which may be viewed as a generalization of well-known techniques such as least squares regression, analysis of variance, loglinear modelling, and logistic regression. In may applications, low-order interaction (such as

  20. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders; Rabe-Hesketh, Sophia

    2004-01-01

    This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

  1. On the Use of Generalized Volume Scattering Models for the Improvement of General Polarimetric Model-Based Decomposition

    Directory of Open Access Journals (Sweden)

    Qinghua Xie

    2017-01-01

    Full Text Available Recently, a general polarimetric model-based decomposition framework was proposed by Chen et al., which addresses several well-known limitations in previous decomposition methods and implements a simultaneous full-parameter inversion by using complete polarimetric information. However, it only employs four typical models to characterize the volume scattering component, which limits the parameter inversion performance. To overcome this issue, this paper presents two general polarimetric model-based decomposition methods by incorporating the generalized volume scattering model (GVSM or simplified adaptive volume scattering model, (SAVSM proposed by Antropov et al. and Huang et al., respectively, into the general decomposition framework proposed by Chen et al. By doing so, the final volume coherency matrix structure is selected from a wide range of volume scattering models within a continuous interval according to the data itself without adding unknowns. Moreover, the new approaches rely on one nonlinear optimization stage instead of four as in the previous method proposed by Chen et al. In addition, the parameter inversion procedure adopts the modified algorithm proposed by Xie et al. which leads to higher accuracy and more physically reliable output parameters. A number of Monte Carlo simulations of polarimetric synthetic aperture radar (PolSAR data are carried out and show that the proposed method with GVSM yields an overall improvement in the final accuracy of estimated parameters and outperforms both the version using SAVSM and the original approach. In addition, C-band Radarsat-2 and L-band AIRSAR fully polarimetric images over the San Francisco region are also used for testing purposes. A detailed comparison and analysis of decomposition results over different land-cover types are conducted. According to this study, the use of general decomposition models leads to a more accurate quantitative retrieval of target parameters. However, there

  2. Simple implementation of general dark energy models

    International Nuclear Information System (INIS)

    Bloomfield, Jolyon K.; Pearson, Jonathan A.

    2014-01-01

    We present a formalism for the numerical implementation of general theories of dark energy, combining the computational simplicity of the equation of state for perturbations approach with the generality of the effective field theory approach. An effective fluid description is employed, based on a general action describing single-scalar field models. The formalism is developed from first principles, and constructed keeping the goal of a simple implementation into CAMB in mind. Benefits of this approach include its straightforward implementation, the generality of the underlying theory, the fact that the evolved variables are physical quantities, and that model-independent phenomenological descriptions may be straightforwardly investigated. We hope this formulation will provide a powerful tool for the comparison of theoretical models of dark energy with observational data

  3. Actuarial statistics with generalized linear mixed models

    NARCIS (Netherlands)

    Antonio, K.; Beirlant, J.

    2007-01-01

    Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics

  4. Multi-model evaluation of short-lived pollutant distributions over east Asia during summer 2008

    Science.gov (United States)

    Quennehen, B.; Raut, J.-C.; Law, K. S.; Daskalakis, N.; Ancellet, G.; Clerbaux, C.; Kim, S.-W.; Lund, M. T.; Myhre, G.; Olivié, D. J. L.; Safieddine, S.; Skeie, R. B.; Thomas, J. L.; Tsyro, S.; Bazureau, A.; Bellouin, N.; Hu, M.; Kanakidou, M.; Klimont, Z.; Kupiainen, K.; Myriokefalitakis, S.; Quaas, J.; Rumbold, S. T.; Schulz, M.; Cherian, R.; Shimizu, A.; Wang, J.; Yoon, S.-C.; Zhu, T.

    2016-08-01

    is too weak to explain the differences between the models. Our results rather point to an overestimation of SO2 emissions, in particular, close to the surface in Chinese urban areas. However, we also identify a clear underestimation of aerosol concentrations over northern India, suggesting that the rapid recent growth of emissions in India, as well as their spatial extension, is underestimated in emission inventories. Model deficiencies in the representation of pollution accumulation due to the Indian monsoon may also be playing a role. Comparison with vertical aerosol lidar measurements highlights a general underestimation of scattering aerosols in the boundary layer associated with overestimation in the free troposphere pointing to modelled aerosol lifetimes that are too long. This is likely linked to too strong vertical transport and/or insufficient deposition efficiency during transport or export from the boundary layer, rather than chemical processing (in the case of sulphate aerosols). Underestimation of sulphate in the boundary layer implies potentially large errors in simulated aerosol-cloud interactions, via impacts on boundary-layer clouds.This evaluation has important implications for accurate assessment of air pollutants on regional air quality and global climate based on global model calculations. Ideally, models should be run at higher resolution over source regions to better simulate urban-rural pollutant gradients and/or chemical regimes, and also to better resolve pollutant processing and loss by wet deposition as well as vertical transport. Discrepancies in vertical distributions require further quantification and improvement since these are a key factor in the determination of radiative forcing from short-lived pollutants.

  5. Multi-model evaluation of short-lived pollutant distributions over east Asia during summer 2008

    Directory of Open Access Journals (Sweden)

    B. Quennehen

    2016-08-01

    mitigation in Beijing is too weak to explain the differences between the models. Our results rather point to an overestimation of SO2 emissions, in particular, close to the surface in Chinese urban areas. However, we also identify a clear underestimation of aerosol concentrations over northern India, suggesting that the rapid recent growth of emissions in India, as well as their spatial extension, is underestimated in emission inventories. Model deficiencies in the representation of pollution accumulation due to the Indian monsoon may also be playing a role. Comparison with vertical aerosol lidar measurements highlights a general underestimation of scattering aerosols in the boundary layer associated with overestimation in the free troposphere pointing to modelled aerosol lifetimes that are too long. This is likely linked to too strong vertical transport and/or insufficient deposition efficiency during transport or export from the boundary layer, rather than chemical processing (in the case of sulphate aerosols. Underestimation of sulphate in the boundary layer implies potentially large errors in simulated aerosol–cloud interactions, via impacts on boundary-layer clouds.This evaluation has important implications for accurate assessment of air pollutants on regional air quality and global climate based on global model calculations. Ideally, models should be run at higher resolution over source regions to better simulate urban–rural pollutant gradients and/or chemical regimes, and also to better resolve pollutant processing and loss by wet deposition as well as vertical transport. Discrepancies in vertical distributions require further quantification and improvement since these are a key factor in the determination of radiative forcing from short-lived pollutants.

  6. A Generalized QMRA Beta-Poisson Dose-Response Model.

    Science.gov (United States)

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2016-10-01

    Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting PI(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model, PI(d|α,β,r*), is proposed in which the minimum number of organisms required for causing infection, K min , is not fixed, but a random variable following a geometric distribution with parameter 0Poisson model, PI(d|α,β), is a special case of the generalized model with K min = 1 (which implies r*=1). The generalized beta-Poisson model is based on a conceptual model with greater detail in the dose-response mechanism. Since a maximum likelihood solution is not easily available, a likelihood-free approximate Bayesian computation (ABC) algorithm is employed for parameter estimation. By fitting the generalized model to four experimental data sets from the literature, this study reveals that the posterior median r* estimates produced fall short of meeting the required condition of r* = 1 for single-hit assumption. However, three out of four data sets fitted by the generalized models could not achieve an improvement in goodness of fit. These combined results imply that, at least in some cases, a single-hit assumption for characterizing the dose-response process may not be appropriate, but that the more complex models may be difficult to support especially if the sample size is small. The three-parameter generalized model provides a possibility to investigate the mechanism of a dose-response process in greater detail than is possible under a single-hit model. © 2016 Society for Risk Analysis.

  7. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    KAUST Repository

    Liang, Faming

    2013-06-01

    This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  8. A Systematic Evaluation of Ultrasound-based Fetal Weight Estimation Models on Indian Population

    Directory of Open Access Journals (Sweden)

    Sujitkumar S. Hiwale

    2017-12-01

    Conclusion: We found that the existing fetal weight estimation models have high systematic and random errors on Indian population, with a general tendency of overestimation of fetal weight in the LBW category and underestimation in the HBW category. We also observed that these models have a limited ability to predict babies at a risk of either low or high birth weight. It is recommended that the clinicians should consider all these factors, while interpreting estimated weight given by the existing models.

  9. A Generalized Deduction of the Ideal-Solution Model

    Science.gov (United States)

    Leo, Teresa J.; Perez-del-Notario, Pedro; Raso, Miguel A.

    2006-01-01

    A new general procedure for deriving the Gibbs energy of mixing is developed through general thermodynamic considerations, and the ideal-solution model is obtained as a special particular case of the general one. The deduction of the Gibbs energy of mixing for the ideal-solution model is a rational one and viewed suitable for advanced students who…

  10. The economic impact of subclinical ketosis at the farm level: Tackling the challenge of over-estimation due to multiple interactions.

    Science.gov (United States)

    Raboisson, D; Mounié, M; Khenifar, E; Maigné, E

    2015-12-01

    Subclinical ketosis (SCK) is a major metabolic disorder that affects dairy cows, and its lactational prevalence in Europe is estimated to be at 25%. Nonetheless, few data are available on the economics of SCK, although its management clearly must be improved. With this in mind, this study develops a double-step stochastic approach to evaluate the total cost of SCK to dairy farming. First, all the production and reproduction changes and all the health disorders associated with SCK were quantified using the meta-analysis from a previous study. Second, the total cost of SCK was determined with a stochastic model using distribution laws as input parameters. The mean total cost of SCK was estimated to be Є257 per calving cow with SCK (95% prediction interval (PI): Є72-442). The margin over feeding costs slightly influenced the results. When the parameters of the model are not modified to account for the conclusions from the meta-analysis and for the prevalence of health disorders in the population without SCK, the mean cost of SCK was overestimated by 68%, reaching Є434 per calving cow (95%PI: Є192-676). This result indicates that the total cost of complex health disorders is likely to be substantially overestimated when calculations use raw results from the literature or-even worse-punctual data. Excluding labour costs in the estimation reduced the SCK total cost by 12%, whereas excluding contributors with scarce data and imprecise calibrations (for lameness and udder health) reduced costs by another 18-20% (Є210, 95%PI=30-390). The proposed method accounted for uncertainty and variability in inputs by using distributions instead of point estimates. The mean value and associated prediction intervals (PIs) yielded good insight into the economic consequences of this complex disease and can be easily and practically used by decision makers in the field while simultaneously accounting for biological variability. Moreover, PIs can help prevent the blind use of economic

  11. Topics in the generalized vector dominance model

    International Nuclear Information System (INIS)

    Chavin, S.

    1976-01-01

    Two topics are covered in the generalized vector dominance model. In the first topic a model is constructed for dilepton production in hadron-hadron interactions based on the idea of generalized vector-dominance. It is argued that in the high mass region the generalized vector-dominance model and the Drell-Yan parton model are alternative descriptions of the same underlying physics. In the low mass regions the models differ; the vector-dominance approach predicts a greater production of dileptons. It is found that the high mass vector mesons which are the hallmark of the generalized vector-dominance model make little contribution to the large yield of leptons observed in the transverse-momentum range 1 less than p/sub perpendicular/ less than 6 GeV. The recently measured hadronic parameters lead one to believe that detailed fits to the data are possible under the model. The possibility was expected, and illustrated with a simple model the extreme sensitivity of the large-p/sub perpendicular/ lepton yield to the large-transverse-momentum tail of vector-meson production. The second topic is an attempt to explain the mysterious phenomenon of photon shadowing in nuclei utilizing the contribution of the longitudinally polarized photon. It is argued that if the scalar photon anti-shadows, it could compensate for the transverse photon, which is presumed to shadow. It is found in a very simple model that the scalar photon could indeed anti-shadow. The principal feature of the model is a cancellation of amplitudes. The scheme is consistent with scalar photon-nucleon data as well. The idea is tested with two simple GVDM models and finds that the anti-shadowing contribution of the scalar photon is not sufficient to compensate for the contribution of the transverse photon. It is found doubtful that the scalar photon makes a significant contribution to the total photon-nuclear cross section

  12. A Generalized Random Regret Minimization Model

    NARCIS (Netherlands)

    Chorus, C.G.

    2013-01-01

    This paper presents, discusses and tests a generalized Random Regret Minimization (G-RRM) model. The G-RRM model is created by replacing a fixed constant in the attribute-specific regret functions of the RRM model, by a regret-weight variable. Depending on the value of the regret-weights, the G-RRM

  13. The DINA model as a constrained general diagnostic model: Two variants of a model equivalency.

    Science.gov (United States)

    von Davier, Matthias

    2014-02-01

    The 'deterministic-input noisy-AND' (DINA) model is one of the more frequently applied diagnostic classification models for binary observed responses and binary latent variables. The purpose of this paper is to show that the model is equivalent to a special case of a more general compensatory family of diagnostic models. Two equivalencies are presented. Both project the original DINA skill space and design Q-matrix using mappings into a transformed skill space as well as a transformed Q-matrix space. Both variants of the equivalency produce a compensatory model that is mathematically equivalent to the (conjunctive) DINA model. This equivalency holds for all DINA models with any type of Q-matrix, not only for trivial (simple-structure) cases. The two versions of the equivalency presented in this paper are not implied by the recently suggested log-linear cognitive diagnosis model or the generalized DINA approach. The equivalencies presented here exist independent of these recently derived models since they solely require a linear - compensatory - general diagnostic model without any skill interaction terms. Whenever it can be shown that one model can be viewed as a special case of another more general one, conclusions derived from any particular model-based estimates are drawn into question. It is widely known that multidimensional models can often be specified in multiple ways while the model-based probabilities of observed variables stay the same. This paper goes beyond this type of equivalency by showing that a conjunctive diagnostic classification model can be expressed as a constrained special case of a general compensatory diagnostic modelling framework. © 2013 The British Psychological Society.

  14. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. A Note on the Identifiability of Generalized Linear Mixed Models

    DEFF Research Database (Denmark)

    Labouriau, Rodrigo

    2014-01-01

    I present here a simple proof that, under general regularity conditions, the standard parametrization of generalized linear mixed model is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first and second order moments and some general mild regularity...... conditions, and, therefore, is extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with dispersion parameter are identifiable when equipped with the standard parametrization...

  16. Multiple phase transitions in the generalized Curie-Weiss model

    International Nuclear Information System (INIS)

    Eisele, T.; Ellis, R.S.

    1988-01-01

    The generalized Curie-Weiss model is an extension of the classical Curie-Weiss model in which the quadratic interaction function of the mean spin value is replaced by a more general interaction function. It is shown that the generalized Curie-Weiss model can have a sequence of phase transitions at different critical temperatures. Both first-order and second-order phase transitions can occur, and explicit criteria for the two types are given. Three examples of generalized Curie-Weiss models are worked out in detail, including one example with infinitely many phase transitions. A number of results are derived using large-deviation techniques

  17. Generalization of the quark rearrangement model

    International Nuclear Information System (INIS)

    Fields, T.; Chen, C.K.

    1976-01-01

    An extension and generalization of the quark rearrangement model of baryon annihilation is described which can be applied to all annihilation reactions and which incorporates some of the features of the highly successful quark parton model. Some p anti-p interactions are discussed

  18. Can CFMIP2 models reproduce the leading modes of cloud vertical structure in the CALIPSO-GOCCP observations?

    Science.gov (United States)

    Wang, Fang; Yang, Song

    2018-02-01

    Using principal component (PC) analysis, three leading modes of cloud vertical structure (CVS) are revealed by the GCM-Oriented CALIPSO Cloud Product (GOCCP), i.e. tropical high, subtropical anticyclonic and extratropical cyclonic cloud modes (THCM, SACM and ECCM, respectively). THCM mainly reflect the contrast between tropical high clouds and clouds in middle/high latitudes. SACM is closely associated with middle-high clouds in tropical convective cores, few-cloud regimes in subtropical anticyclonic clouds and stratocumulus over subtropical eastern oceans. ECCM mainly corresponds to clouds along extratropical cyclonic regions. Models of phase 2 of Cloud Feedback Model Intercomparison Project (CFMIP2) well reproduce the THCM, but SACM and ECCM are generally poorly simulated compared to GOCCP. Standardized PCs corresponding to CVS modes are generally captured, whereas original PCs (OPCs) are consistently underestimated (overestimated) for THCM (SACM and ECCM) by CFMIP2 models. The effects of CVS modes on relative cloud radiative forcing (RSCRF/RLCRF) (RSCRF being calculated at the surface while RLCRF at the top of atmosphere) are studied in terms of principal component regression method. Results show that CFMIP2 models tend to overestimate (underestimated or simulate the opposite sign) RSCRF/RLCRF radiative effects (REs) of ECCM (THCM and SACM) in unit global mean OPC compared to observations. These RE biases may be attributed to two factors, one of which is underestimation (overestimation) of low/middle clouds (high clouds) (also known as stronger (weaker) REs in unit low/middle (high) clouds) in simulated global mean cloud profiles, the other is eigenvector biases in CVS modes (especially for SACM and ECCM). It is suggested that much more attention should be paid on improvement of CVS, especially cloud parameterization associated with particular physical processes (e.g. downwelling regimes with the Hadley circulation, extratropical storm tracks and others), which

  19. A generalized logarithmic image processing model based on the gigavision sensor model.

    Science.gov (United States)

    Deng, Guang

    2012-03-01

    The logarithmic image processing (LIP) model is a mathematical theory providing generalized linear operations for image processing. The gigavision sensor (GVS) is a new imaging device that can be described by a statistical model. In this paper, by studying these two seemingly unrelated models, we develop a generalized LIP (GLIP) model. With the LIP model being its special case, the GLIP model not only provides new insights into the LIP model but also defines new image representations and operations for solving general image processing problems that are not necessarily related to the GVS. A new parametric LIP model is also developed. To illustrate the application of the new scalar multiplication operation, we propose an energy-preserving algorithm for tone mapping, which is a necessary step in image dehazing. By comparing with results using two state-of-the-art algorithms, we show that the new scalar multiplication operation is an effective tool for tone mapping.

  20. Do children overestimate the extent of smoking among their peers? A feasibility study of the social norms approach to prevent smoking.

    Science.gov (United States)

    Elsey, Helen; Owiredu, Elizabeth; Thomson, Heather; Mann, Gemma; Mehta, Rashesh; Siddiqi, Kamran

    2015-02-01

    Social norms approaches (SNA) are based on the premise that we frequently overestimate risk behaviours among our peers. By conducting campaigns to reduce these misperceptions, SNAs aim to reduce risk behaviours. This study examines the extent to which 12 to 13year old pupils overestimate smoking among their peers and explores the appropriateness of using SNA in secondary schools to prevent smoking uptake. The extent of overestimation of smoking among peers was assessed through an on-line SNA questionnaire in five schools (n=595). Based on questionnaire results, pupils developed SNA campaigns in each school. Qualitative methods of focus groups (7), interviews (7) and observation were used to explore in-depth, from the perspective of staff and pupils, the appropriateness and feasibility of the SNA to prevent smoking uptake in secondary schools. A quarter of pupils, 25.9% (95% CI 25.6% to 26.1%) believed that most of their peers smoked, however, only 3% (95% CI 2.8% to 3.3%) reported that they actually did; a difference of 22.9% (95% CI 19.1% to 26.6%). Self-reported smoking was not significantly different between schools (X(2)=8.7 p=0.064), however, perceptions of year group smoking was significantly different across schools (X(2)=63.9 psmoking among peers in secondary schools, thus supporting a key premise of social norms theory. Implementing SNAs and studying effects is feasible within secondary schools. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. The DART general equilibrium model: A technical description

    OpenAIRE

    Springer, Katrin

    1998-01-01

    This paper provides a technical description of the Dynamic Applied Regional Trade (DART) General Equilibrium Model. The DART model is a recursive dynamic, multi-region, multi-sector computable general equilibrium model. All regions are fully specified and linked by bilateral trade flows. The DART model can be used to project economic activities, energy use and trade flows for each of the specified regions to simulate various trade policy as well as environmental policy scenarios, and to analy...

  2. A generalized model via random walks for information filtering

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Zhuo-Ming, E-mail: zhuomingren@gmail.com [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Kong, Yixiu [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Shang, Ming-Sheng, E-mail: msshang@cigit.ac.cn [Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Zhang, Yi-Cheng [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland)

    2016-08-06

    There could exist a simple general mechanism lurking beneath collaborative filtering and interdisciplinary physics approaches which have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in the bipartite networks. Taking into account the degree information, the proposed generalized model could deduce the collaborative filtering, interdisciplinary physics approaches and even the enormous expansion of them. Furthermore, we analyze the generalized model with single and hybrid of degree information on the process of random walk in bipartite networks, and propose a possible strategy by using the hybrid degree information for different popular objects to toward promising precision of the recommendation. - Highlights: • We propose a generalized recommendation model employing the random walk dynamics. • The proposed model with single and hybrid of degree information is analyzed. • A strategy with the hybrid degree information improves precision of recommendation.

  3. A generalized model via random walks for information filtering

    International Nuclear Information System (INIS)

    Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2016-01-01

    There could exist a simple general mechanism lurking beneath collaborative filtering and interdisciplinary physics approaches which have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in the bipartite networks. Taking into account the degree information, the proposed generalized model could deduce the collaborative filtering, interdisciplinary physics approaches and even the enormous expansion of them. Furthermore, we analyze the generalized model with single and hybrid of degree information on the process of random walk in bipartite networks, and propose a possible strategy by using the hybrid degree information for different popular objects to toward promising precision of the recommendation. - Highlights: • We propose a generalized recommendation model employing the random walk dynamics. • The proposed model with single and hybrid of degree information is analyzed. • A strategy with the hybrid degree information improves precision of recommendation.

  4. Generalized Ordinary Differential Equation Models.

    Science.gov (United States)

    Miao, Hongyu; Wu, Hulin; Xue, Hongqi

    2014-10-01

    Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method.

  5. Assessment of an extended version of the Jenkinson-Collison classification on CMIP5 models over Europe

    Science.gov (United States)

    Otero, Noelia; Sillmann, Jana; Butler, Tim

    2018-03-01

    A gridded, geographically extended weather type classification has been developed based on the Jenkinson-Collison (JC) classification system and used to evaluate the representation of weather types over Europe in a suite of climate model simulations. To this aim, a set of models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) is compared with the circulation from two reanalysis products. Furthermore, we examine seasonal changes between simulated frequencies of weather types at present and future climate conditions. The models are in reasonably good agreement with the reanalyses, but some discrepancies occur in cyclonic days being overestimated over North, and underestimated over South Europe, while anticyclonic situations were overestimated over South, and underestimated over North Europe. Low flow conditions were generally underestimated, especially in summer over South Europe, and Westerly conditions were generally overestimated. The projected frequencies of weather types in the late twenty-first century suggest an increase of Anticyclonic days over South Europe in all seasons except summer, while Westerly days increase over North and Central Europe, particularly in winter. We find significant changes in the frequency of Low flow conditions and the Easterly type that become more frequent during the warmer seasons over Southeast and Southwest Europe, respectively. Our results indicate that in winter the Westerly type has significant impacts on positive anomalies of maximum and minimum temperature over most of Europe. Except in winter, the warmer temperatures are linked to Easterlies, Anticyclonic and Low Flow conditions, especially over the Mediterranean area. Furthermore, we show that changes in the frequency of weather types represent a minor contribution of the total change of European temperatures, which would be mainly driven by changes in the temperature anomalies associated with the weather types themselves.

  6. Kalman Filter for Generalized 2-D Roesser Models

    Institute of Scientific and Technical Information of China (English)

    SHENG Mei; ZOU Yun

    2007-01-01

    The design problem of the state filter for the generalized stochastic 2-D Roesser models, which appears when both the state and measurement are simultaneously subjected to the interference from white noise, is discussed. The wellknown Kalman filter design is extended to the generalized 2-D Roesser models. Based on the method of "scanning line by line", the filtering problem of generalized 2-D Roesser models with mode-energy reconstruction is solved. The formula of the optimal filtering, which minimizes the variance of the estimation error of the state vectors, is derived. The validity of the designed filter is verified by the calculation steps and the examples are introduced.

  7. Back-calculating baseline creatinine overestimates prevalence of acute kidney injury with poor sensitivity.

    Science.gov (United States)

    Kork, F; Balzer, F; Krannich, A; Bernardi, M H; Eltzschig, H K; Jankowski, J; Spies, C

    2017-03-01

    Acute kidney injury (AKI) is diagnosed by a 50% increase in creatinine. For patients without a baseline creatinine measurement, guidelines suggest estimating baseline creatinine by back-calculation. The aim of this study was to evaluate different glomerular filtration rate (GFR) equations and different GFR assumptions for back-calculating baseline creatinine as well as the effect on the diagnosis of AKI. The Modification of Diet in Renal Disease, the Chronic Kidney Disease Epidemiology (CKD-EPI) and the Mayo quadratic (MQ) equation were evaluated to estimate baseline creatinine, each under the assumption of either a fixed GFR of 75 mL min -1  1.73 m -2 or an age-adjusted GFR. Estimated baseline creatinine, diagnoses and severity stages of AKI based on estimated baseline creatinine were compared to measured baseline creatinine and corresponding diagnoses and severity stages of AKI. The data of 34 690 surgical patients were analysed. Estimating baseline creatinine overestimated baseline creatinine. Diagnosing AKI based on estimated baseline creatinine had only substantial agreement with AKI diagnoses based on measured baseline creatinine [Cohen's κ ranging from 0.66 (95% CI 0.65-0.68) to 0.77 (95% CI 0.76-0.79)] and overestimated AKI prevalence with fair sensitivity [ranging from 74.3% (95% CI 72.3-76.2) to 90.1% (95% CI 88.6-92.1)]. Staging AKI severity based on estimated baseline creatinine had moderate agreement with AKI severity based on measured baseline creatinine [Cohen's κ ranging from 0.43 (95% CI 0.42-0.44) to 0.53 (95% CI 0.51-0.55)]. Diagnosing AKI and staging AKI severity on the basis of estimated baseline creatinine in surgical patients is not feasible. Patients at risk for post-operative AKI should have a pre-operative creatinine measurement to adequately assess post-operative AKI. © 2016 Scandinavian Physiological Society. Published by John Wiley & Sons Ltd.

  8. A Model Fit Statistic for Generalized Partial Credit Model

    Science.gov (United States)

    Liang, Tie; Wells, Craig S.

    2009-01-01

    Investigating the fit of a parametric model is an important part of the measurement process when implementing item response theory (IRT), but research examining it is limited. A general nonparametric approach for detecting model misfit, introduced by J. Douglas and A. S. Cohen (2001), has exhibited promising results for the two-parameter logistic…

  9. A general model for membrane-based separation processes

    DEFF Research Database (Denmark)

    Soni, Vipasha; Abildskov, Jens; Jonsson, Gunnar Eigil

    2009-01-01

    behaviour will play an important role. In this paper, modelling of membrane-based processes for separation of gas and liquid mixtures are considered. Two general models, one for membrane-based liquid separation processes (with phase change) and another for membrane-based gas separation are presented....... The separation processes covered are: membrane-based gas separation processes, pervaporation and various types of membrane distillation processes. The specific model for each type of membrane-based process is generated from the two general models by applying the specific system descriptions and the corresponding...

  10. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    NARCIS (Netherlands)

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)} where G is a known function, b is an unknown parameter vector, and m is an unknown function.The paper introduces a test statistic which allows to decide between a parametric and a semiparametric model: (i) m is linear, i.e.

  11. Calibration and validation of a general infiltration model

    Science.gov (United States)

    Mishra, Surendra Kumar; Ranjan Kumar, Shashi; Singh, Vijay P.

    1999-08-01

    A general infiltration model proposed by Singh and Yu (1990) was calibrated and validated using a split sampling approach for 191 sets of infiltration data observed in the states of Minnesota and Georgia in the USA. Of the five model parameters, fc (the final infiltration rate), So (the available storage space) and exponent n were found to be more predictable than the other two parameters: m (exponent) and a (proportionality factor). A critical examination of the general model revealed that it is related to the Soil Conservation Service (1956) curve number (SCS-CN) method and its parameter So is equivalent to the potential maximum retention of the SCS-CN method and is, in turn, found to be a function of soil sorptivity and hydraulic conductivity. The general model was found to describe infiltration rate with time varying curve number.

  12. Partially Observed Mixtures of IRT Models: An Extension of the Generalized Partial-Credit Model

    Science.gov (United States)

    Von Davier, Matthias; Yamamoto, Kentaro

    2004-01-01

    The generalized partial-credit model (GPCM) is used frequently in educational testing and in large-scale assessments for analyzing polytomous data. Special cases of the generalized partial-credit model are the partial-credit model--or Rasch model for ordinal data--and the two parameter logistic (2PL) model. This article extends the GPCM to the…

  13. Extrinsic value orientation and affective forecasting: overestimating the rewards, underestimating the costs.

    Science.gov (United States)

    Sheldon, Kennon M; Gunz, Alexander; Nichols, Charles P; Ferguson, Yuna

    2010-02-01

    We examined affective forecasting errors as a possible explanation of the perennial appeal of extrinsic values and goals. Study 1 found that although people relatively higher in extrinsic (money, fame, image) compared to intrinsic (growth, intimacy, community) value orientation (REVO) are less happy, they nevertheless believe that attaining extrinsic goals offers a strong potential route to happiness. Study 2's longitudinal experimental design randomly assigned participants to pursue either 3 extrinsic or 3 intrinsic goals over 4 weeks, and REVO again predicted stronger forecasts regarding extrinsic goals. However, not even extrinsically oriented participants gained well-being benefits from attaining extrinsic goals, whereas all participants tended to gain in happiness from attaining intrinsic goals. Study 3 showed that the effect of REVO on forecasts is mediated by extrinsic individuals' belief that extrinsic goals will satisfy autonomy and competence needs. It appears that some people overestimate the emotional benefits of achieving extrinsic goals, to their potential detriment.

  14. Partial report and other sampling procedures overestimate the duration of iconic memory.

    Science.gov (United States)

    Appelman, I B

    1980-03-01

    In three experiments, subjects estimated the duration of a brief visual image (iconic memory) either directly by adjusting onset of a click to offset of the visual image, or indirectly with a Sperling partial report (sampling) procedure. The results indicated that partial report and other sampling procedures may reflect other brief phenomena along with iconic memory. First, the partial report procedure yields a greater estimate of the duration of iconic memory than the more direct click method. Second, the partial report estimate of the duration of iconic memory is affected if the subject is required to simultaneously retain a list of distractor items (memory load), while the click method estimate of the duration of iconic memory is not affected by a memory load. Finally, another sampling procedure based on visual cuing yields different estimates of the duration of iconic memory depending on how many items are cued. It was concluded that partial report and other sampling procedures overestimate the duration of iconic memory.

  15. Cosmological models in general relativity

    Indian Academy of Sciences (India)

    Cosmological models in general relativity. B B PAUL. Department of Physics, Nowgong College, Nagaon, Assam, India. MS received 4 October 2002; revised 6 March 2003; accepted 21 May 2003. Abstract. LRS Bianchi type-I space-time filled with perfect fluid is considered here with deceler- ation parameter as variable.

  16. Why do Models Overestimate Surface Ozone in the Southeastern United States?

    Science.gov (United States)

    Travis, Katherine R.; Jacob, Daniel J.; Fisher, Jenny A.; Kim, Patrick S.; Marais, Eloise A.; Zhu, Lei; Yu, Karen; Miller, Christopher C.; Yantosca, Robert M.; Sulprizio, Melissa P.; Thompson, Anne M.; Wennberg, Paul O.; Crounse, John D.; St Clair, Jason M.; Cohen, Ronald C.; Laughner, Joshua L.; Dibb, Jack E.; Hall, Samuel R.; Ullmann, Kirk; Wolfe, Glenn M.; Pollack, Illana B.; Peischl, Jeff; Neuman, Jonathan A.; Zhou, Xianliang

    2018-01-01

    Ozone pollution in the Southeast US involves complex chemistry driven by emissions of anthropogenic nitrogen oxide radicals (NOx ≡ NO + NO2) and biogenic isoprene. Model estimates of surface ozone concentrations tend to be biased high in the region and this is of concern for designing effective emission control strategies to meet air quality standards. We use detailed chemical observations from the SEAC4RS aircraft campaign in August and September 2013, interpreted with the GEOS-Chem chemical transport model at 0.25°×0.3125° horizontal resolution, to better understand the factors controlling surface ozone in the Southeast US. We find that the National Emission Inventory (NEI) for NOx from the US Environmental Protection Agency (EPA) is too high. This finding is based on SEAC4RS observations of NOx and its oxidation products, surface network observations of nitrate wet deposition fluxes, and OMI satellite observations of tropospheric NO2 columns. Our results indicate that NEI NOx emissions from mobile and industrial sources must be reduced by 30–60%, dependent on the assumption of the contribution by soil NOx emissions. Upper tropospheric NO2 from lightning makes a large contribution to satellite observations of tropospheric NO2 that must be accounted for when using these data to estimate surface NOx emissions. We find that only half of isoprene oxidation proceeds by the high-NOx pathway to produce ozone; this fraction is only moderately sensitive to changes in NOx emissions because isoprene and NOx emissions are spatially segregated. GEOS-Chem with reduced NOx emissions provides an unbiased simulation of ozone observations from the aircraft, and reproduces the observed ozone production efficiency in the boundary layer as derived from a regression of ozone and NOx oxidation products. However, the model is still biased high by 8±13 ppb relative to observed surface ozone in the Southeast US. Ozonesondes launched during midday hours show a 7 ppb ozone decrease

  17. Why do models overestimate surface ozone in the Southeast United States?

    Directory of Open Access Journals (Sweden)

    K. R. Travis

    2016-11-01

    Full Text Available Ozone pollution in the Southeast US involves complex chemistry driven by emissions of anthropogenic nitrogen oxide radicals (NOx  ≡  NO + NO2 and biogenic isoprene. Model estimates of surface ozone concentrations tend to be biased high in the region and this is of concern for designing effective emission control strategies to meet air quality standards. We use detailed chemical observations from the SEAC4RS aircraft campaign in August and September 2013, interpreted with the GEOS-Chem chemical transport model at 0.25°  ×  0.3125° horizontal resolution, to better understand the factors controlling surface ozone in the Southeast US. We find that the National Emission Inventory (NEI for NOx from the US Environmental Protection Agency (EPA is too high. This finding is based on SEAC4RS observations of NOx and its oxidation products, surface network observations of nitrate wet deposition fluxes, and OMI satellite observations of tropospheric NO2 columns. Our results indicate that NEI NOx emissions from mobile and industrial sources must be reduced by 30–60 %, dependent on the assumption of the contribution by soil NOx emissions. Upper-tropospheric NO2 from lightning makes a large contribution to satellite observations of tropospheric NO2 that must be accounted for when using these data to estimate surface NOx emissions. We find that only half of isoprene oxidation proceeds by the high-NOx pathway to produce ozone; this fraction is only moderately sensitive to changes in NOx emissions because isoprene and NOx emissions are spatially segregated. GEOS-Chem with reduced NOx emissions provides an unbiased simulation of ozone observations from the aircraft and reproduces the observed ozone production efficiency in the boundary layer as derived from a regression of ozone and NOx oxidation products. However, the model is still biased high by 6 ± 14 ppb relative to observed surface ozone in the Southeast US. Ozonesondes

  18. Why do Models Overestimate Surface Ozone in the Southeastern United States?

    Science.gov (United States)

    Travis, Katherine R.; Jacob, Daniel J.; Fisher, Jenny A.; Kim, Patrick S.; Marais, Eloise A.; Zhu, Lei; Yu, Karen; Miller, Christopher C.; Yantosca, Robert M.; Sulprizio, Melissa P.; hide

    2016-01-01

    Ozone pollution in the Southeast US involves complex chemistry driven by emissions of anthropogenic nitrogen oxide radicals (NOx = NO + NO2) and biogenic isoprene. Model estimates of surface ozone concentrations tend to be biased high in the region and this is of concern for designing effective emission control strategies to meet air quality standards. We use detailed chemical observations from the SEAC4RS aircraft campaign in August and September 2013, interpreted with the GEOS-Chem chemical transport model at 0.25 deg. x 0.3125 deg. horizontal resolution, to better understand the factors controlling surface ozone in the Southeast US. We find that the National Emission Inventory (NEI) for NOx from the US Environmental Protection Agency (EPA) is too high. This finding is based on SEAC4RS observations of NOx and its oxidation products, surface network observations of nitrate wet deposition fluxes, and OMI satellite observations of tropospheric NO2 columns. Our results indicate that NEI NOx emissions from mobile and industrial sources must be reduced by 30-60%, dependent on the assumption of the contribution by soil NOx emissions. Upper tropospheric NO2 from lightning makes a large contribution to satellite observations of tropospheric NO2 that must be accounted for when using these data to estimate surface NOx emissions. We find that only half of isoprene oxidation proceeds by the high-NOx pathway to produce ozone; this fraction is only moderately sensitive to changes in NOx emissions because isoprene and NOx emissions are spatially segregated. GEOS-Chem with reduced NOx emissions provides an unbiased simulation of ozone observations from the aircraft, and reproduces the observed ozone production efficiency in the boundary layer as derived from a 15 regression of ozone and NOx oxidation products. However, the model is still biased high by 8 +/- 13 ppb relative to observed surface ozone in the Southeast US. Ozonesondes launched during midday hours show a 7 ppb ozone

  19. Generalizations of the noisy-or model

    Czech Academy of Sciences Publication Activity Database

    Vomlel, Jiří

    2015-01-01

    Roč. 51, č. 3 (2015), s. 508-524 ISSN 0023-5954 R&D Projects: GA ČR GA13-20012S Institutional support: RVO:67985556 Keywords : Bayesian networks * noisy-or model * classification * generalized linear models Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.628, year: 2015 http://library.utia.cas.cz/separaty/2015/MTR/vomlel-0447357.pdf

  20. Generalized Born Models of Macromolecular Solvation Effects

    Science.gov (United States)

    Bashford, Donald; Case, David A.

    2000-10-01

    It would often be useful in computer simulations to use a simple description of solvation effects, instead of explicitly representing the individual solvent molecules. Continuum dielectric models often work well in describing the thermodynamic aspects of aqueous solvation, and approximations to such models that avoid the need to solve the Poisson equation are attractive because of their computational efficiency. Here we give an overview of one such approximation, the generalized Born model, which is simple and fast enough to be used for molecular dynamics simulations of proteins and nucleic acids. We discuss its strengths and weaknesses, both for its fidelity to the underlying continuum model and for its ability to replace explicit consideration of solvent molecules in macromolecular simulations. We focus particularly on versions of the generalized Born model that have a pair-wise analytical form, and therefore fit most naturally into conventional molecular mechanics calculations.

  1. Infrared problems in two-dimensional generalized σ-models

    International Nuclear Information System (INIS)

    Curci, G.; Paffuti, G.

    1989-01-01

    We study the correlations of the energy-momentum tensor for classically conformally invariant generalized σ-models in the Wilson operator-product-expansion approach. We find that these correlations are, in general, infrared divergent. The absence of infrared divergences is obtained, as one can expect, for σ-models on a group manifold or for σ-models with a string-like interpretation. Moreover, the infrared divergences spoil the naive scaling arguments used by Zamolodchikov in the demonstration of the C-theorem. (orig.)

  2. Generalized Landau-Lifshitz models on the interval

    International Nuclear Information System (INIS)

    Doikou, Anastasia; Karaiskos, Nikos

    2011-01-01

    We study the classical generalized gl n Landau-Lifshitz (L-L) model with special boundary conditions that preserve integrability. We explicitly derive the first non-trivial local integral of motion, which corresponds to the boundary Hamiltonian for the sl 2 L-L model. Novel expressions of the modified Lax pairs associated to the integrals of motion are also extracted. The relevant equations of motion with the corresponding boundary conditions are determined. Dynamical integrable boundary conditions are also examined within this spirit. Then the generalized isotropic and anisotropic gl n Landau-Lifshitz models are considered, and novel expressions of the boundary Hamiltonians and the relevant equations of motion and boundary conditions are derived.

  3. A QCD Model Using Generalized Yang-Mills Theory

    International Nuclear Information System (INIS)

    Wang Dianfu; Song Heshan; Kou Lina

    2007-01-01

    Generalized Yang-Mills theory has a covariant derivative, which contains both vector and scalar gauge bosons. Based on this theory, we construct a strong interaction model by using the group U(4). By using this U(4) generalized Yang-Mills model, we also obtain a gauge potential solution, which can be used to explain the asymptotic behavior and color confinement.

  4. Have We Overestimated Saline Aquifer CO2 Storage Capacities?

    International Nuclear Information System (INIS)

    Thibeau, S.; Mucha, V.

    2011-01-01

    During future, large scale CO 2 geological storage in saline aquifers, fluid pressure is expected to rise as a consequence of CO 2 injection, but the pressure build up will have to stay below specified values to ensure a safe and long term containment of the CO 2 in the storage site. The pressure build up is the result of two different effects. The first effect is a local overpressure around the injectors, which is due to the high CO 2 velocities around the injectors, and which can be mitigated by adding CO 2 injectors. The second effect is a regional scale pressure build up that will take place if the storage aquifer is closed or if the formation water that flows away from the pressurised area is not large enough to compensate volumetrically the CO 2 injection. This second effect cannot be mitigated by adding additional injectors. In the first section of this paper, we review some major global and regional assessments of CO 2 storage capacities in deep saline aquifers, in term of mass and storage efficiency. These storage capacities are primarily based on a volumetric approach: storage capacity is the volumetric sum of the CO 2 that can be stored through various trapping mechanisms. We then discuss in Section 2 storage efficiencies derived from a pressure build up approach, as stated in the CO2STORE final report (Chadwick A. et al. (eds) (2008) Best Practice for the Storage of CO 2 in Saline Aquifers, Observations and Guidelines from the SACS and CO2STORE Projects, Keyworth, Nottingham, BGS Occasional Publication No. 14) and detailed by Van der Meer and Egberts (van der Meer L.G.H., Egberts P.J.P. (2008) A General Method for Calculating Subsurface CO 2 Storage Capacity, OTC Paper 19309, presented at the OTC Conference held in Houston, Texas, USA, 5-8 May). A quantitative range of such storage efficiency is presented, based on a review of orders of magnitudes of pore and water compressibilities and allowable pressure increase. To illustrate the relevance of this

  5. A generalized model via random walks for information filtering

    Science.gov (United States)

    Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2016-08-01

    There could exist a simple general mechanism lurking beneath collaborative filtering and interdisciplinary physics approaches which have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in the bipartite networks. Taking into account the degree information, the proposed generalized model could deduce the collaborative filtering, interdisciplinary physics approaches and even the enormous expansion of them. Furthermore, we analyze the generalized model with single and hybrid of degree information on the process of random walk in bipartite networks, and propose a possible strategy by using the hybrid degree information for different popular objects to toward promising precision of the recommendation.

  6. Over-estimation of sea level measurements arising from water density anomalies within tide-wells - A case study at Zuari Estuary, Goa

    Digital Repository Service at National Institute of Oceanography (India)

    Joseph, A.; VijayKumar, K.; Desa, E.S.; Desa, E.; Peshwe, V.B.

    at the mouth of the Zuari estuary, and anomalies were reported at all periods except during peak summer and the onset of the summer monsoon. These anomalies lead to an over-estimation of sea level by a tide-well based gauge. The density difference, delta p...

  7. Overestimation of organic phosphorus in wetland soils by alkaline extraction and molybdate colorimetry.

    Science.gov (United States)

    Turner, Benjamin L; Newman, Susan; Reddy, K Ramesh

    2006-05-15

    Accurate information on the chemical nature of soil phosphorus is essential for understanding its bioavailability and fate in wetland ecosystems. Solution phosphorus-31 nuclear magnetic resonance (31P NMR) spectroscopy was used to assess the conventional colorimetric procedure for phosphorus speciation in alkaline extracts of organic soils from the Florida Everglades. Molybdate colorimetry markedly overestimated organic phosphorus by between 30 and 54% compared to NMR spectroscopy. This was due in large part to the association of inorganic phosphate with organic matter, although the error was exacerbated in some samples by the presence of pyrophosphate, an inorganic polyphosphate that is not detected by colorimetry. The results have important implications for our understanding of phosphorus biogeochemistry in wetlands and suggest that alkaline extraction and solution 31p NMR spectroscopy is the only accurate method for quantifying organic phosphorus in wetland soils.

  8. Non-linear general instability of ring-stiffened conical shells under external hydrostatic pressure

    International Nuclear Information System (INIS)

    Ross, C T F; Kubelt, C; McLaughlin, I; Etheridge, A; Turner, K; Paraskevaides, D; Little, A P F

    2011-01-01

    The paper presents the experimental results for 15 ring-stiffened circular steel conical shells, which failed by non-linear general instability. The results of these investigations were compared with various theoretical analyses, including an ANSYS eigen buckling analysis and another ANSYS analysis; which involved a step-by-step method until collapse; where both material and geometrical nonlinearity were considered. The investigation also involved an analysis using BS5500 (PD 5500), together with the method of Ross of the University of Portsmouth. The ANSYS eigen buckling analysis tended to overestimate the predicted buckling pressures; whereas the ANSYS nonlinear results compared favourably with the experimental results. The PD5500 analysis was very time consuming and tended to grossly underestimate the experimental buckling pressures and in some cases, overestimate them. In contrast to PD5500 and ANSYS, the design charts of Ross of the University of Portsmouth were the easiest of all these methods to use and generally only slightly underestimated the experimental collapse pressures. The ANSYS analyses gave some excellent graphical displays.

  9. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  10. EOP MIT General Circulation Model (MITgcm)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data contains a regional implementation of the Massachusetts Institute of Technology general circulation model (MITgcm) at a 1-km spatial resolution for the...

  11. Overestimation of Crop Root Biomass in Field Experiments Due to Extraneous Organic Matter.

    Science.gov (United States)

    Hirte, Juliane; Leifeld, Jens; Abiven, Samuel; Oberholzer, Hans-Rudolf; Hammelehle, Andreas; Mayer, Jochen

    2017-01-01

    Root biomass is one of the most relevant root parameters for studies of plant response to environmental change, soil carbon modeling or estimations of soil carbon sequestration. A major source of error in root biomass quantification of agricultural crops in the field is the presence of extraneous organic matter in soil: dead roots from previous crops, weed roots, incorporated above ground plant residues and organic soil amendments, or remnants of soil fauna. Using the isotopic difference between recent maize root biomass and predominantly C3-derived extraneous organic matter, we determined the proportions of maize root biomass carbon of total carbon in root samples from the Swiss long-term field trial "DOK." We additionally evaluated the effects of agricultural management (bio-organic and conventional), sampling depth (0-0.25, 0.25-0.5, 0.5-0.75 m) and position (within and between maize rows), and root size class (coarse and fine roots) as defined by sieve mesh size (2 and 0.5 mm) on those proportions, and quantified the success rate of manual exclusion of extraneous organic matter from root samples. Only 60% of the root mass that we retrieved from field soil cores was actual maize root biomass from the current season. While the proportions of maize root biomass carbon were not affected by agricultural management, they increased consistently with soil depth, were higher within than between maize rows, and were higher in coarse (>2 mm) than in fine (≤2 and >0.5) root samples. The success rate of manual exclusion of extraneous organic matter from root samples was related to agricultural management and, at best, about 60%. We assume that the composition of extraneous organic matter is strongly influenced by agricultural management and soil depth and governs the effect size of the investigated factors. Extraneous organic matter may result in severe overestimation of recovered root biomass and has, therefore, large implications for soil carbon modeling and estimations

  12. a Proposal for Generalization of 3d Models

    Science.gov (United States)

    Uyar, A.; Ulugtekin, N. N.

    2017-11-01

    In recent years, 3D models have been created of many cities around the world. Most of the 3D city models have been introduced as completely graphic or geometric models, and the semantic and topographic aspects of the models have been neglected. In order to use 3D city models beyond the task, a generalization is necessary. CityGML is an open data model and XML-based format for the storage and exchange of virtual 3D city models. Level of Details (LoD) which is an important concept for 3D modelling, can be defined as outlined degree or prior representation of real-world objects. The paper aim is first describes some requirements of 3D model generalization, then presents problems and approaches that have been developed in recent years. In conclude the paper will be a summary and outlook on problems and future work.

  13. Crash data modeling with a generalized estimator.

    Science.gov (United States)

    Ye, Zhirui; Xu, Yueru; Lord, Dominique

    2018-05-11

    The investigation of relationships between traffic crashes and relevant factors is important in traffic safety management. Various methods have been developed for modeling crash data. In real world scenarios, crash data often display the characteristics of over-dispersion. However, on occasions, some crash datasets have exhibited under-dispersion, especially in cases where the data are conditioned upon the mean. The commonly used models (such as the Poisson and the NB regression models) have associated limitations to cope with various degrees of dispersion. In light of this, a generalized event count (GEC) model, which can be generally used to handle over-, equi-, and under-dispersed data, is proposed in this study. This model was first applied to case studies using data from Toronto, characterized by over-dispersion, and then to crash data from railway-highway crossings in Korea, characterized with under-dispersion. The results from the GEC model were compared with those from the Negative binomial and the hyper-Poisson models. The cases studies show that the proposed model provides good performance for crash data characterized with over- and under-dispersion. Moreover, the proposed model simplifies the modeling process and the prediction of crash data. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. Generalized versus non-generalized neural network model for multi-lead inflow forecasting at Aswan High Dam

    Directory of Open Access Journals (Sweden)

    A. El-Shafie

    2011-03-01

    Full Text Available Artificial neural networks (ANN have been found efficient, particularly in problems where characteristics of the processes are stochastic and difficult to describe using explicit mathematical models. However, time series prediction based on ANN algorithms is fundamentally difficult and faces problems. One of the major shortcomings is the search for the optimal input pattern in order to enhance the forecasting capabilities for the output. The second challenge is the over-fitting problem during the training procedure and this occurs when ANN loses its generalization. In this research, autocorrelation and cross correlation analyses are suggested as a method for searching the optimal input pattern. On the other hand, two generalized methods namely, Regularized Neural Network (RNN and Ensemble Neural Network (ENN models are developed to overcome the drawbacks of classical ANN models. Using Generalized Neural Network (GNN helped avoid over-fitting of training data which was observed as a limitation of classical ANN models. Real inflow data collected over the last 130 years at Lake Nasser was used to train, test and validate the proposed model. Results show that the proposed GNN model outperforms non-generalized neural network and conventional auto-regressive models and it could provide accurate inflow forecasting.

  15. Simulated cold bias being improved by using MODIS time-varying albedo in the Tibetan Plateau in WRF model

    Science.gov (United States)

    Meng, X.; Lyu, S.; Zhang, T.; Zhao, L.; Li, Z.; Han, B.; Li, S.; Ma, D.; Chen, H.; Ao, Y.; Luo, S.; Shen, Y.; Guo, J.; Wen, L.

    2018-04-01

    Systematic cold biases exist in the simulation for 2 m air temperature in the Tibetan Plateau (TP) when using regional climate models and global atmospheric general circulation models. We updated the albedo in the Weather Research and Forecasting (WRF) Model lower boundary condition using the Global LAnd Surface Satellite Moderate-Resolution Imaging Spectroradiometer albedo products and demonstrated evident improvement for cold temperature biases in the TP. It is the large overestimation of albedo in winter and spring in the WRF model that resulted in the large cold temperature biases. The overestimated albedo was caused by the simulated precipitation biases and over-parameterization of snow albedo. Furthermore, light-absorbing aerosols can result in a large reduction of albedo in snow and ice cover. The results suggest the necessity of developing snow albedo parameterization using observations in the TP, where snow cover and melting are very different from other low-elevation regions, and the influence of aerosols should be considered as well. In addition to defining snow albedo, our results show an urgent call for improving precipitation simulation in the TP.

  16. Exciton model and quantum molecular dynamics in inclusive nucleon-induced reactions

    International Nuclear Information System (INIS)

    Bevilacqua, Riccardo; Pomp, Stephan; Watanabe, Yukinobu

    2011-01-01

    We compared inclusive nucleon-induced reactions with two-component exciton model calculations and Kalbach systematics; these successfully describe the production of protons, whereas fail to reproduce the emission of composite particles, generally overestimating it. We show that the Kalbach phenomenological model needs to be revised for energies above 90 MeV; agreement improves introducing a new energy dependence for direct-like mechanisms described by the Kalbach model. Our revised model calculations suggest multiple preequilibrium emission of light charged particles. We have also compared recent neutron-induced data with quantum molecular dynamics (QMD) calculations complemented by the surface coalescence model (SCM); we observed that the SCM improves the predictive power of QMD. (author)

  17. Generalized heat-transport equations: parabolic and hyperbolic models

    Science.gov (United States)

    Rogolino, Patrizia; Kovács, Robert; Ván, Peter; Cimmelli, Vito Antonio

    2018-03-01

    We derive two different generalized heat-transport equations: the most general one, of the first order in time and second order in space, encompasses some well-known heat equations and describes the hyperbolic regime in the absence of nonlocal effects. Another, less general, of the second order in time and fourth order in space, is able to describe hyperbolic heat conduction also in the presence of nonlocal effects. We investigate the thermodynamic compatibility of both models by applying some generalizations of the classical Liu and Coleman-Noll procedures. In both cases, constitutive equations for the entropy and for the entropy flux are obtained. For the second model, we consider a heat-transport equation which includes nonlocal terms and study the resulting set of balance laws, proving that the corresponding thermal perturbations propagate with finite speed.

  18. The general dynamic model

    DEFF Research Database (Denmark)

    Borregaard, Michael K.; Matthews, Thomas J.; Whittaker, Robert James

    2016-01-01

    Aim: Island biogeography focuses on understanding the processes that underlie a set of well-described patterns on islands, but it lacks a unified theoretical framework for integrating these processes. The recently proposed general dynamic model (GDM) of oceanic island biogeography offers a step...... towards this goal. Here, we present an analysis of causality within the GDM and investigate its potential for the further development of island biogeographical theory. Further, we extend the GDM to include subduction-based island arcs and continental fragment islands. Location: A conceptual analysis...... of evolutionary processes in simulations derived from the mechanistic assumptions of the GDM corresponded broadly to those initially suggested, with the exception of trends in extinction rates. Expanding the model to incorporate different scenarios of island ontogeny and isolation revealed a sensitivity...

  19. Generalized Path Analysis and Generalized Simultaneous Equations Model for Recursive Systems with Responses of Mixed Types

    Science.gov (United States)

    Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang

    2006-01-01

    This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…

  20. Reliability assessment of competing risks with generalized mixed shock models

    International Nuclear Information System (INIS)

    Rafiee, Koosha; Feng, Qianmei; Coit, David W.

    2017-01-01

    This paper investigates reliability modeling for systems subject to dependent competing risks considering the impact from a new generalized mixed shock model. Two dependent competing risks are soft failure due to a degradation process, and hard failure due to random shocks. The shock process contains fatal shocks that can cause hard failure instantaneously, and nonfatal shocks that impact the system in three different ways: 1) damaging the unit by immediately increasing the degradation level, 2) speeding up the deterioration by accelerating the degradation rate, and 3) weakening the unit strength by reducing the hard failure threshold. While the first impact from nonfatal shocks comes from each individual shock, the other two impacts are realized when the condition for a new generalized mixed shock model is satisfied. Unlike most existing mixed shock models that consider a combination of two shock patterns, our new generalized mixed shock model includes three classic shock patterns. According to the proposed generalized mixed shock model, the degradation rate and the hard failure threshold can simultaneously shift multiple times, whenever the condition for one of these three shock patterns is satisfied. An example using micro-electro-mechanical systems devices illustrates the effectiveness of the proposed approach with sensitivity analysis. - Highlights: • A rich reliability model for systems subject to dependent failures is proposed. • The degradation rate and the hard failure threshold can shift simultaneously. • The shift is triggered by a new generalized mixed shock model. • The shift can occur multiple times under the generalized mixed shock model.

  1. Foundations of linear and generalized linear models

    CERN Document Server

    Agresti, Alan

    2015-01-01

    A valuable overview of the most important ideas and results in statistical analysis Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,

  2. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Science.gov (United States)

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  3. Risk assessment of oil price from static and dynamic modelling approaches

    DEFF Research Database (Denmark)

    Mi, Zhi-Fu; Wei, Yi-Ming; Tang, Bao-Jun

    2017-01-01

    ) and GARCH model on the basis of generalized error distribution (GED). The results show that EVT is a powerful approach to capture the risk in the oil markets. On the contrary, the traditional variance–covariance (VC) and Monte Carlo (MC) approaches tend to overestimate risk when the confidence level is 95......%, but underestimate risk at the confidence level of 99%. The VaR of WTI returns is larger than that of Brent returns at identical confidence levels. Moreover, the GED-GARCH model can estimate the downside dynamic VaR accurately for WTI and Brent oil returns....

  4. Modeling number of bacteria per food unit in comparison to bacterial concentration in quantitative risk assessment: impact on risk estimates.

    Science.gov (United States)

    Pouillot, Régis; Chen, Yuhuan; Hoelzer, Karin

    2015-02-01

    When developing quantitative risk assessment models, a fundamental consideration for risk assessors is to decide whether to evaluate changes in bacterial levels in terms of concentrations or in terms of bacterial numbers. Although modeling bacteria in terms of integer numbers may be regarded as a more intuitive and rigorous choice, modeling bacterial concentrations is more popular as it is generally less mathematically complex. We tested three different modeling approaches in a simulation study. The first approach considered bacterial concentrations; the second considered the number of bacteria in contaminated units, and the third considered the expected number of bacteria in contaminated units. Simulation results indicate that modeling concentrations tends to overestimate risk compared to modeling the number of bacteria. A sensitivity analysis using a regression tree suggests that processes which include drastic scenarios consisting of combinations of large bacterial inactivation followed by large bacterial growth frequently lead to a >10-fold overestimation of the average risk when modeling concentrations as opposed to bacterial numbers. Alternatively, the approach of modeling the expected number of bacteria in positive units generates results similar to the second method and is easier to use, thus potentially representing a promising compromise. Published by Elsevier Ltd.

  5. Plant pathogens as biocontrol agents of Cirsium arvense – an overestimated approach?

    Directory of Open Access Journals (Sweden)

    Esther Müller

    2011-11-01

    Full Text Available Cirsium arvense is one of the worst weeds in agriculture. As herbicides are not very effective and not accepted by organic farming and special habitats, possible biocontrol agents have been investigated since many decades. In particular plant pathogens of C. arvense have received considerable interest and have been promoted as “mycoherbicides” or “bioherbicides”. A total of 10 fungi and one bacterium have been proposed and tested as biocontrol agents against C. arvense. A variety of experiments analysed the noxious influence of spores or other parts of living fungi or bacteria on plants while others used fungal or bacterial products, usually toxins. Also combinations of spores with herbicides and combinations of several pathogens were tested. All approaches turned out to be inappropriate with regard to target plant specificity, effectiveness and application possibilities. As yet, none of the tested species or substances has achieved marketability, despite two patents on the use of Septoria cirsii and Phomopsis cirsii. We conclude that the potential of pathogens for biocontrol of C. arvense has largely been overestimated.

  6. Generalized algebra-valued models of set theory

    NARCIS (Netherlands)

    Löwe, B.; Tarafder, S.

    2015-01-01

    We generalize the construction of lattice-valued models of set theory due to Takeuti, Titani, Kozawa and Ozawa to a wider class of algebras and show that this yields a model of a paraconsistent logic that validates all axioms of the negation-free fragment of Zermelo-Fraenkel set theory.

  7. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  8. Formation of organic aerosol in the Paris region during the MEGAPOLI summer campaign: evaluation of the volatility-basis-set approach within the CHIMERE model

    Directory of Open Access Journals (Sweden)

    Q. J. Zhang

    2013-06-01

    Full Text Available Simulations with the chemistry transport model CHIMERE are compared to measurements performed during the MEGAPOLI (Megacities: Emissions, urban, regional and Global Atmospheric POLlution and climate effects, and Integrated tools for assessment and mitigation summer campaign in the Greater Paris region in July 2009. The volatility-basis-set approach (VBS is implemented into this model, taking into account the volatility of primary organic aerosol (POA and the chemical aging of semi-volatile organic species. Organic aerosol is the main focus and is simulated with three different configurations with a modified treatment of POA volatility and modified secondary organic aerosol (SOA formation schemes. In addition, two types of emission inventories are used as model input in order to test the uncertainty related to the emissions. Predictions of basic meteorological parameters and primary and secondary pollutant concentrations are evaluated, and four pollution regimes are defined according to the air mass origin. Primary pollutants are generally overestimated, while ozone is consistent with observations. Sulfate is generally overestimated, while ammonium and nitrate levels are well simulated with the refined emission data set. As expected, the simulation with non-volatile POA and a single-step SOA formation mechanism largely overestimates POA and underestimates SOA. Simulation of organic aerosol with the VBS approach taking into account the aging of semi-volatile organic compounds (SVOC shows the best correlation with measurements. High-concentration events observed mostly after long-range transport are well reproduced by the model. Depending on the emission inventory used, simulated POA levels are either reasonable or underestimated, while SOA levels tend to be overestimated. Several uncertainties related to the VBS scheme (POA volatility, SOA yields, the aging parameterization, to emission input data, and to simulated OH levels can be responsible for

  9. Biased processing of threat-related information rather than knowledge deficits contributes to overestimation of threat in obsessive-compulsive disorder.

    Science.gov (United States)

    Moritz, Steffen; Pohl, Rüdiger F

    2009-11-01

    Overestimation of threat (OET) has been implicated in the pathogenesis of obsessive-compulsive disorder (OCD). The present study deconstructed this complex concept and looked for specific deviances in OCD relative to controls. A total of 46 participants with OCD and 51 nonclinical controls were asked: (a) to estimate the incidence rate for 20 events relating to washing, checking, positive, or negative incidents. Furthermore, they were required (b) to assess their personal vulnerability to experience each event type, and (c) to judge the degree of accompanying worry. Later, participants were confronted with the correct statistics and asked (d) to rate their degree of worry versus relief. OCD participants did not provide higher estimates for OCD-related events than healthy participants, thus rendering a knowledge deficit unlikely. The usual unrealistic optimism bias was found in both groups but was markedly attenuated in OCD participants. OCD-related events worried OCD participants more than controls. Confrontation with the correct statistics appeased OCD participants less than healthy participants. Even in the case of large initial overestimations for OCD-related events, correct information appeased OCD participants significantly less than healthy participants. Our results suggest that OCD is not associated with a knowledge deficit regarding OCD-related events but that patients feel personally more vulnerable than nonclinical controls.

  10. Overestimation of myocardial infarct size on two-dimensional echocardiograms due to remodelling of the infarct zone.

    Science.gov (United States)

    Johnston, B J; Blinston, G E; Jugdutt, B I

    1994-01-01

    To assess the effect of early regional diastolic shape distortion or bulging of infarct zones due to infarct expansion on estimates of regional left ventricular dysfunction and infarct size by two-dimensional echocardiographic imaging. Quantitative two-dimensional echocardiograms from patients with a first Q wave myocardial infarction and creatine kinase infarct size data, and normal subjects, were subjected to detailed analysis of regional left ventricular dysfunction and shape distortion in short-axis images by established methods. Regional left ventricular asynergy (akinesis and dyskinesis) and shape distortion indices (eg, peak [Pk]/radius [ri]) were measured on endocardial diastolic outlines of short-axis images in 43 postinfarction patients (28 anterior and 15 inferior, 5.9 h after onset) and 11 normal subjects (controls). In the infarction group, endocardial surface area of asynergy was calculated by three-dimensional reconstruction of the images and infarct size from serial creatine kinase blood levels. Diastolic bulging of asynergic zones was found in all infarction patients. The regional shape distortion indices characterizing the area between the 'actual' bulging asynergic segment and the derived 'ideal' circular segment (excluding the bulge) on indexed sections were greater in infarct than control groups (Pk/ri 0.31 versus 0, P 0.001). Importantly, the degree of distortion correlated with overestimation of asynergy (r = 0.89, P < 0.001), and the relation between infarct size and total 'ideal' asynergy showed a leftward shift from that with 'actual' asynergy. Early regional diastolic bulging of the infarct zone results in overestimation of regional ventricular dysfunction, especially in patients with anterior infarction. This effect should be considered when assessing effects of therapy on infarct size, remodelling and dysfunction using tomographical imaging.

  11. Multivariate covariance generalized linear models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Jørgensen, Bent

    2016-01-01

    are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions......We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link...... function combined with a matrix linear predictor involving known matrices. The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated...

  12. General regression and representation model for classification.

    Directory of Open Access Journals (Sweden)

    Jianjun Qian

    Full Text Available Recently, the regularized coding-based classification methods (e.g. SRC and CRC show a great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR for classification. GRR not only has advantages of CRC, but also takes full use of the prior information (e.g. the correlations between representation residuals and representation coefficients and the specific information (weight matrix of image pixels to enhance the classification performance. GRR uses the generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel weights of the test sample. With the proposed model as a platform, we design two classifiers: basic general regression and representation classifier (B-GRR and robust general regression and representation classifier (R-GRR. The experimental results demonstrate the performance advantages of proposed methods over state-of-the-art algorithms.

  13. Modelling uncertainty with generalized credal sets: application to conjunction and decision

    Science.gov (United States)

    Bronevich, Andrey G.; Rozenberg, Igor N.

    2018-01-01

    To model conflict, non-specificity and contradiction in information, upper and lower generalized credal sets are introduced. Any upper generalized credal set is a convex subset of plausibility measures interpreted as lower probabilities whose bodies of evidence consist of singletons and a certain event. Analogously, contradiction is modelled in the theory of evidence by a belief function that is greater than zero at empty set. Based on generalized credal sets, we extend the conjunctive rule for contradictory sources of information, introduce constructions like natural extension in the theory of imprecise probabilities and show that the model of generalized credal sets coincides with the model of imprecise probabilities if the profile of a generalized credal set consists of probability measures. We give ways how the introduced model can be applied to decision problems.

  14. Generalized continua as models for classical and advanced materials

    CERN Document Server

    Forest, Samuel

    2016-01-01

    This volume is devoted to an actual topic which is the focus world-wide of various research groups. It contains contributions describing the material behavior on different scales, new existence and uniqueness theorems, the formulation of constitutive equations for advanced materials. The main emphasis of the contributions is directed on the following items - Modelling and simulation of natural and artificial materials with significant microstructure, - Generalized continua as a result of multi-scale models, - Multi-field actions on materials resulting in generalized material models, - Theories including higher gradients, and - Comparison with discrete modelling approaches.

  15. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  16. Generalized waste package containment model

    International Nuclear Information System (INIS)

    Liebetrau, A.M.; Apted, M.J.

    1985-02-01

    The US Department of Energy (DOE) is developing a performance assessment strategy to demonstrate compliance with standards and technical requirements of the Environmental Protection Agency (EPA) and the Nuclear Regulatory Commission (NRC) for the permanent disposal of high-level nuclear wastes in geologic repositories. One aspect of this strategy is the development of a unified performance model of the entire geologic repository system. Details of a generalized waste package containment (WPC) model and its relationship with other components of an overall repository model are presented in this paper. The WPC model provides stochastically determined estimates of the distributions of times-to-failure of the barriers of a waste package by various corrosion mechanisms and degradation processes. The model consists of a series of modules which employ various combinations of stochastic (probabilistic) and mechanistic process models, and which are individually designed to reflect the current state of knowledge. The WPC model is designed not only to take account of various site-specific conditions and processes, but also to deal with a wide range of site, repository, and waste package configurations. 11 refs., 3 figs., 2 tabs

  17. Geometrical efficiency in computerized tomography: generalized model

    International Nuclear Information System (INIS)

    Costa, P.R.; Robilotta, C.C.

    1992-01-01

    A simplified model for producing sensitivity and exposure profiles in computerized tomographic system was recently developed allowing the forecast of profiles behaviour in the rotation center of the system. The generalization of this model for some point of the image plane was described, and the geometrical efficiency could be evaluated. (C.G.C.)

  18. Generalized formal model of Big Data

    OpenAIRE

    Shakhovska, N.; Veres, O.; Hirnyak, M.

    2016-01-01

    This article dwells on the basic characteristic features of the Big Data technologies. It is analyzed the existing definition of the “big data” term. The article proposes and describes the elements of the generalized formal model of big data. It is analyzed the peculiarities of the application of the proposed model components. It is described the fundamental differences between Big Data technology and business analytics. Big Data is supported by the distributed file system Google File System ...

  19. Adaptive Inference on General Graphical Models

    OpenAIRE

    Acar, Umut A.; Ihler, Alexander T.; Mettu, Ramgopal; Sumer, Ozgur

    2012-01-01

    Many algorithms and applications involve repeatedly solving variations of the same inference problem; for example we may want to introduce new evidence to the model or perform updates to conditional dependencies. The goal of adaptive inference is to take advantage of what is preserved in the model and perform inference more rapidly than from scratch. In this paper, we describe techniques for adaptive inference on general graphs that support marginal computation and updates to the conditional ...

  20. Higher dimensional generalizations of the SYK model

    Energy Technology Data Exchange (ETDEWEB)

    Berkooz, Micha [Department of Particle Physics and Astrophysics, Weizmann Institute of Science,Rehovot 7610001 (Israel); Narayan, Prithvi [International Centre for Theoretical Sciences, Hesaraghatta,Bengaluru North, 560 089 (India); Rozali, Moshe [Department of Physics and Astronomy, University of British Columbia,Vancouver, BC V6T 1Z1 (Canada); Simón, Joan [School of Mathematics and Maxwell Institute for Mathematical Sciences, University of Edinburgh,King’s Buildings, Edinburgh EH9 3FD (United Kingdom)

    2017-01-31

    We discuss a 1+1 dimensional generalization of the Sachdev-Ye-Kitaev model. The model contains N Majorana fermions at each lattice site with a nearest-neighbour hopping term. The SYK random interaction is restricted to low momentum fermions of definite chirality within each lattice site. This gives rise to an ordinary 1+1 field theory above some energy scale and a low energy SYK-like behavior. We exhibit a class of low-pass filters which give rise to a rich variety of hyperscaling behaviour in the IR. We also discuss another set of generalizations which describes probing an SYK system with an external fermion, together with the new scaling behavior they exhibit in the IR.

  1. Learning general phonological rules from distributional information: a computational model.

    Science.gov (United States)

    Calamaro, Shira; Jarosz, Gaja

    2015-04-01

    Phonological rules create alternations in the phonetic realizations of related words. These rules must be learned by infants in order to identify the phonological inventory, the morphological structure, and the lexicon of a language. Recent work proposes a computational model for the learning of one kind of phonological alternation, allophony (Peperkamp, Le Calvez, Nadal, & Dupoux, 2006). This paper extends the model to account for learning of a broader set of phonological alternations and the formalization of these alternations as general rules. In Experiment 1, we apply the original model to new data in Dutch and demonstrate its limitations in learning nonallophonic rules. In Experiment 2, we extend the model to allow it to learn general rules for alternations that apply to a class of segments. In Experiment 3, the model is further extended to allow for generalization by context; we argue that this generalization must be constrained by linguistic principles. Copyright © 2014 Cognitive Science Society, Inc.

  2. The HIRLAM fast radiation scheme for mesoscale numerical weather prediction models

    Science.gov (United States)

    Rontu, Laura; Gleeson, Emily; Räisänen, Petri; Pagh Nielsen, Kristian; Savijärvi, Hannu; Hansen Sass, Bent

    2017-07-01

    This paper provides an overview of the HLRADIA shortwave (SW) and longwave (LW) broadband radiation schemes used in the HIRLAM numerical weather prediction (NWP) model and available in the HARMONIE-AROME mesoscale NWP model. The advantage of broadband, over spectral, schemes is that they can be called more frequently within the model, without compromising on computational efficiency. In mesoscale models fast interactions between clouds and radiation and the surface and radiation can be of greater importance than accounting for the spectral details of clear-sky radiation; thus calling the routines more frequently can be of greater benefit than the deterioration due to loss of spectral details. Fast but physically based radiation parametrizations are expected to be valuable for high-resolution ensemble forecasting, because as well as the speed of their execution, they may provide realistic physical perturbations. Results from single-column diagnostic experiments based on CIRC benchmark cases and an evaluation of 10 years of radiation output from the FMI operational archive of HIRLAM forecasts indicate that HLRADIA performs sufficiently well with respect to the clear-sky downwelling SW and longwave LW fluxes at the surface. In general, HLRADIA tends to overestimate surface fluxes, with the exception of LW fluxes under cold and dry conditions. The most obvious overestimation of the surface SW flux was seen in the cloudy cases in the 10-year comparison; this bias may be related to using a cloud inhomogeneity correction, which was too large. According to the CIRC comparisons, the outgoing LW and SW fluxes at the top of atmosphere are mostly overestimated by HLRADIA and the net LW flux is underestimated above clouds. The absorption of SW radiation by the atmosphere seems to be underestimated and LW absorption seems to be overestimated. Despite these issues, the overall results are satisfying and work on the improvement of HLRADIA for the use in HARMONIE-AROME NWP system

  3. Automation of electroweak NLO corrections in general models

    Energy Technology Data Exchange (ETDEWEB)

    Lang, Jean-Nicolas [Universitaet Wuerzburg (Germany)

    2016-07-01

    I discuss the automation of generation of scattering amplitudes in general quantum field theories at next-to-leading order in perturbation theory. The work is based on Recola, a highly efficient one-loop amplitude generator for the Standard Model, which I have extended so that it can deal with general quantum field theories. Internally, Recola computes off-shell currents and for new models new rules for off-shell currents emerge which are derived from the Feynman rules. My work relies on the UFO format which can be obtained by a suited model builder, e.g. FeynRules. I have developed tools to derive the necessary counterterm structures and to perform the renormalization within Recola in an automated way. I describe the procedure using the example of the two-Higgs-doublet model.

  4. Comparison of body composition between fashion models and women in general.

    Science.gov (United States)

    Park, Sunhee

    2017-12-31

    The present study compared the physical characteristics and body composition of professional fashion models and women in general, utilizing the skinfold test. The research sample consisted of 90 professional fashion models presently active in Korea and 100 females in the general population, all selected through convenience sampling. Measurement was done following standardized methods and procedures set by the International Society for the Advancement of Kinanthropometry. Body density (mg/ mm) and body fat (%) were measured at the biceps, triceps, subscapular, and suprailiac areas. The results showed that the biceps, triceps, subscapular, and suprailiac areas of professional fashion models were significantly thinner than those of women in general (pfashion models were significantly lower than those in women in general (pfashion models was significantly greater (pfashion models is higher, due to taller stature, than in women in general. Moreover, there is an effort on the part of fashion models to lose weight in order to maintain a thin body and a low weight for occupational reasons. ©2017 The Korean Society for Exercise Nutrition

  5. Generalized Linear Models with Applications in Engineering and the Sciences

    CERN Document Server

    Myers, Raymond H; Vining, G Geoffrey; Robinson, Timothy J

    2012-01-01

    Praise for the First Edition "The obvious enthusiasm of Myers, Montgomery, and Vining and their reliance on their many examples as a major focus of their pedagogy make Generalized Linear Models a joy to read. Every statistician working in any area of applied science should buy it and experience the excitement of these new approaches to familiar activities."-Technometrics Generalized Linear Models: With Applications in Engineering and the Sciences, Second Edition continues to provide a clear introduction to the theoretical foundations and key applications of generalized linear models (GLMs). Ma

  6. Double generalized linear compound poisson models to insurance claims data

    DEFF Research Database (Denmark)

    Andersen, Daniel Arnfeldt; Bonat, Wagner Hugo

    2017-01-01

    This paper describes the specification, estimation and comparison of double generalized linear compound Poisson models based on the likelihood paradigm. The models are motivated by insurance applications, where the distribution of the response variable is composed by a degenerate distribution...... implementation and illustrate the application of double generalized linear compound Poisson models using a data set about car insurances....

  7. Modeling age-specific mortality for countries with generalized HIV epidemics.

    Directory of Open Access Journals (Sweden)

    David J Sharrow

    Full Text Available In a given population the age pattern of mortality is an important determinant of total number of deaths, age structure, and through effects on age structure, the number of births and thereby growth. Good mortality models exist for most populations except those experiencing generalized HIV epidemics and some developing country populations. The large number of deaths concentrated at very young and adult ages in HIV-affected populations produce a unique 'humped' age pattern of mortality that is not reproduced by any existing mortality models. Both burden of disease reporting and population projection methods require age-specific mortality rates to estimate numbers of deaths and produce plausible age structures. For countries with generalized HIV epidemics these estimates should take into account the future trajectory of HIV prevalence and its effects on age-specific mortality. In this paper we present a parsimonious model of age-specific mortality for countries with generalized HIV/AIDS epidemics.The model represents a vector of age-specific mortality rates as the weighted sum of three independent age-varying components. We derive the age-varying components from a Singular Value Decomposition of the matrix of age-specific mortality rate schedules. The weights are modeled as a function of HIV prevalence and one of three possible sets of inputs: life expectancy at birth, a measure of child mortality, or child mortality with a measure of adult mortality. We calibrate the model with 320 five-year life tables for each sex from the World Population Prospects 2010 revision that come from the 40 countries of the world that have and are experiencing a generalized HIV epidemic. Cross validation shows that the model is able to outperform several existing model life table systems.We present a flexible, parsimonious model of age-specific mortality for countries with generalized HIV epidemics. Combined with the outputs of existing epidemiological and

  8. Modeling misidentification errors that result from use of genetic tags in capture-recapture studies

    Science.gov (United States)

    Yoshizaki, J.; Brownie, C.; Pollock, K.H.; Link, W.A.

    2011-01-01

    Misidentification of animals is potentially important when naturally existing features (natural tags) such as DNA fingerprints (genetic tags) are used to identify individual animals. For example, when misidentification leads to multiple identities being assigned to an animal, traditional estimators tend to overestimate population size. Accounting for misidentification in capture-recapture models requires detailed understanding of the mechanism. Using genetic tags as an example, we outline a framework for modeling the effect of misidentification in closed population studies when individual identification is based on natural tags that are consistent over time (non-evolving natural tags). We first assume a single sample is obtained per animal for each capture event, and then generalize to the case where multiple samples (such as hair or scat samples) are collected per animal per capture occasion. We introduce methods for estimating population size and, using a simulation study, we show that our new estimators perform well for cases with moderately high capture probabilities or high misidentification rates. In contrast, conventional estimators can seriously overestimate population size when errors due to misidentification are ignored. ?? 2009 Springer Science+Business Media, LLC.

  9. Membrane models and generalized Z2 gauge theories

    International Nuclear Information System (INIS)

    Lowe, M.J.; Wallace, D.J.

    1980-01-01

    We consider models of (d-n)-dimensional membranes fluctuating in a d-dimensional space under the action of surface tension. We investigate the renormalization properties of these models perturbatively and in 1/n expansion. The potential relationships of these models to generalized Z 2 gauge theories are indicated. (orig.)

  10. A General Model for Estimating Macroevolutionary Landscapes.

    Science.gov (United States)

    Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef

    2018-03-01

    The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].

  11. Dynamical CP violation of the generalized Yang-Mills model

    International Nuclear Information System (INIS)

    Wang Dianfu; Chang Xiaojing; Sun Xiaoyu

    2011-01-01

    Starting from the generalized Yang-Mills model which contains, besides the vector part V μ , also a scalar part S and a pseudoscalar part P . It is shown, in terms of the Nambu-Jona-Lasinio (NJL) mechanism, that CP violation can be realized dynamically. The combination of the generalized Yang-Mills model and the NJL mechanism provides a new way to explain CP violation. (authors)

  12. Generalized Reduced Order Model Generation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to develop a generalized reduced order model generation method. This method will allow for creation of reduced order aeroservoelastic state...

  13. Anisotropic charged generalized polytropic models

    Science.gov (United States)

    Nasim, A.; Azam, M.

    2018-06-01

    In this paper, we found some new anisotropic charged models admitting generalized polytropic equation of state with spherically symmetry. An analytic solution of the Einstein-Maxwell field equations is obtained through the transformation introduced by Durgapal and Banerji (Phys. Rev. D 27:328, 1983). The physical viability of solutions corresponding to polytropic index η =1/2, 2/3, 1, 2 is analyzed graphically. For this, we plot physical quantities such as radial and tangential pressure, anisotropy, speed of sound which demonstrated that these models achieve all the considerable physical conditions required for a relativistic star. Further, it is mentioned here that previous results for anisotropic charged matter with linear, quadratic and polytropic equation of state can be retrieved.

  14. Generalized model of the microwave auditory effect

    International Nuclear Information System (INIS)

    Yitzhak, N M; Ruppin, R; Hareuveny, R

    2009-01-01

    A generalized theoretical model for evaluating the amplitudes of the sound waves generated in a spherical head model, which is irradiated by microwave pulses, is developed. The thermoelastic equation of motion is solved for a spherically symmetric heating pattern of arbitrary form. For previously treated heating patterns that are peaked at the sphere centre, the results reduce to those presented before. The generalized model is applied to the case in which the microwave absorption is concentrated near the sphere surface. It is found that, for equal average specific absorption rates, the sound intensity generated by a surface localized heating pattern is comparable to that generated by a heating pattern that is peaked at the centre. The dependence of the induced sound pressure on the shape of the microwave pulse is explored. Another theoretical extension, to the case of repeated pulses, is developed and applied to the interpretation of existing experimental data on the dependence of the human hearing effect threshold on the pulse repetition frequency.

  15. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  16. Models of clinical reasoning with a focus on general practice: A critical review.

    Science.gov (United States)

    Yazdani, Shahram; Hosseinzadeh, Mohammad; Hosseini, Fakhrolsadat

    2017-10-01

    Diagnosis lies at the heart of general practice. Every day general practitioners (GPs) visit patients with a wide variety of complaints and concerns, with often minor but sometimes serious symptoms. General practice has many features which differentiate it from specialty care setting, but during the last four decades little attention was paid to clinical reasoning in general practice. Therefore, we aimed to critically review the clinical reasoning models with a focus on the clinical reasoning in general practice or clinical reasoning of general practitioners to find out to what extent the existing models explain the clinical reasoning specially in primary care and also identity the gaps of the model for use in primary care settings. A systematic search to find models of clinical reasoning were performed. To have more precision, we excluded the studies that focused on neurobiological aspects of reasoning, reasoning in disciplines other than medicine decision making or decision analysis on treatment or management plan. All the articles and documents were first scanned to see whether they include important relevant contents or any models. The selected studies which described a model of clinical reasoning in general practitioners or with a focus on general practice were then reviewed and appraisal or critics of other authors on these models were included. The reviewed documents on the model were synthesized. Six models of clinical reasoning were identified including hypothetic-deductive model, pattern recognition, a dual process diagnostic reasoning model, pathway for clinical reasoning, an integrative model of clinical reasoning, and model of diagnostic reasoning strategies in primary care. Only one model had specifically focused on general practitioners reasoning. A Model of clinical reasoning that included specific features of general practice to better help the general practitioners with the difficulties of clinical reasoning in this setting is needed.

  17. Models of clinical reasoning with a focus on general practice: a critical review

    Directory of Open Access Journals (Sweden)

    SHAHRAM YAZDANI

    2017-10-01

    Full Text Available Introduction: Diagnosis lies at the heart of general practice. Every day general practitioners (GPs visit patients with a wide variety of complaints and concerns, with often minor but sometimes serious symptoms. General practice has many features which differentiate it from specialty care setting, but during the last four decades little attention was paid to clinical reasoning in general practice. Therefore, we aimed to critically review the clinical reasoning models with a focus on the clinical reasoning in general practice or clinical reasoning of general practitioners to find out to what extent the existing models explain the clinical reasoning specially in primary care and also identity the gaps of the model for use in primary care settings Methods: A systematic search to find models of clinical reasoning were performed. To have more precision, we excluded the studies that focused on neurobiological aspects of reasoning, reasoning in disciplines other than medicine decision making or decision analysis on treatment or management plan. All the articles and documents were first scanned to see whether they include important relevant contents or any models. The selected studies which described a model of clinical reasoning in general practitioners or with a focus on general practice were then reviewed and appraisal or critics of other authors on these models were included. The reviewed documents on the model were synthesized Results: Six models of clinical reasoning were identified including hypothetic-deductive model, pattern recognition, a dual process diagnostic reasoning model, pathway for clinical reasoning, an integrative model of clinical reasoning, and model of diagnostic reasoning strategies in primary care. Only one model had specifically focused on general practitioners reasoning. Conclusion: A Model of clinical reasoning that included specific features of general practice to better help the general practitioners with the difficulties

  18. Overestimation of heterosexually attributed AIDS deaths is associated with immature psychological defence mechanisms and clitoral masturbation during penile-vaginal intercourse.

    Science.gov (United States)

    Brody, S; Costa, R M

    2009-12-01

    Research shows that (1) greater use of immature psychological defence mechanisms (associated with psychopathology) is associated with lesser orgasmic consistency from penile-vaginal intercourse (PVI), but greater frequency of other sexual behaviours and greater condom use for PVI, and (2) unlike the vectors of receptive anal intercourse and punctures, HIV acquisition during PVI is extremely unlikely in reasonably healthy persons. However, the relationship between overestimation of AIDS deaths due to 'heterosexual transmission' (often misunderstood as only PVI), sexual behaviour and mental health has been lacking. Two hundred and twenty-one Scottish women completed the Defense Style Questionnaire, reported past month frequencies of their various sexual activities, and estimated the total number of women who died from AIDS in Scotland nominally as a result of heterosexual transmission in the UK from a partner not known to be an injecting drug user, bisexual or infected through transfusion. The average respondent overestimated by 226,000%. Women providing lower estimates were less likely to use immature psychological defences, and had a lower frequency of orgasms from clitoral masturbation during PVI and from vibrator use. The results indicate that those who perceive 'heterosexual transmission' led to many AIDS deaths have poorer psychological functioning, and might be less able to appreciate PVI.

  19. Generalized entropy formalism and a new holographic dark energy model

    Science.gov (United States)

    Sayahian Jahromi, A.; Moosavi, S. A.; Moradpour, H.; Morais Graça, J. P.; Lobo, I. P.; Salako, I. G.; Jawad, A.

    2018-05-01

    Recently, the Rényi and Tsallis generalized entropies have extensively been used in order to study various cosmological and gravitational setups. Here, using a special type of generalized entropy, a generalization of both the Rényi and Tsallis entropy, together with holographic principle, we build a new model for holographic dark energy. Thereinafter, considering a flat FRW universe, filled by a pressureless component and the new obtained dark energy model, the evolution of cosmos has been investigated showing satisfactory results and behavior. In our model, the Hubble horizon plays the role of IR cutoff, and there is no mutual interaction between the cosmos components. Our results indicate that the generalized entropy formalism may open a new window to become more familiar with the nature of spacetime and its properties.

  20. A Generalized Yang-Mills Model and Dynamical Breaking of Gauge Symmetry

    International Nuclear Information System (INIS)

    Wang Dianfu; Song Heshan

    2005-01-01

    A generalized Yang-Mills model, which contains, besides the vector part V μ , also a scalar part S, is constructed and the dynamical breaking of gauge symmetry in the model is also discussed. It is shown, in terms of Nambu-Jona-Lasinio (NJL) mechanism, that the gauge symmetry breaking can be realized dynamically in the generalized Yang-Mills model. The combination of the generalized Yang-Mills model and the NJL mechanism provides a way to overcome the difficulties related to the Higgs field and the Higgs mechanism in the usual spontaneous symmetry breaking theory.

  1. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap are......, respectively, examples of binomial and count datasets modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides similar estimates to Markov Chain Monte Carlo likelihood, Monte Carlo expectation maximization, and modified Laplace approximation. Some advantages...... of Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility to obtain realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...

  2. A generalized statistical model for the size distribution of wealth

    International Nuclear Information System (INIS)

    Clementi, F; Gallegati, M; Kaniadakis, G

    2012-01-01

    In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and conclude an excellent agreement with the data that is superior to any other model already known in the literature. (paper)

  3. A generalized statistical model for the size distribution of wealth

    Science.gov (United States)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2012-12-01

    In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and conclude an excellent agreement with the data that is superior to any other model already known in the literature.

  4. Generalized Tavis-Cummings models and quantum networks

    Science.gov (United States)

    Gorokhov, A. V.

    2018-04-01

    The properties of quantum networks based on generalized Tavis-Cummings models are theoretically investigated. We have calculated the information transfer success rate from one node to another in a simple model of a quantum network realized with two-level atoms placed in the cavities and interacting with an external laser field and cavity photons. The method of dynamical group of the Hamiltonian and technique of corresponding coherent states were used for investigation of the temporal dynamics of the two nodes model.

  5. Head multidetector computed tomography: emergency medicine physicians overestimate the pretest probability and legal risk of significant findings.

    Science.gov (United States)

    Baskerville, Jerry Ray; Herrick, John

    2012-02-01

    This study focuses on clinically assigned prospective estimated pretest probability and pretest perception of legal risk as independent variables in the ordering of multidetector computed tomographic (MDCT) head scans. Our primary aim is to measure the association between pretest probability of a significant finding and pretest perception of legal risk. Secondarily, we measure the percentage of MDCT scans that physicians would not order if there was no legal risk. This study is a prospective, cross-sectional, descriptive analysis of patients 18 years and older for whom emergency medicine physicians ordered a head MDCT. We collected a sample of 138 patients subjected to head MDCT scans. The prevalence of a significant finding in our population was 6%, yet the pretest probability expectation of a significant finding was 33%. The legal risk presumed was even more dramatic at 54%. These data support the hypothesis that physicians presume the legal risk to be significantly higher than the risk of a significant finding. A total of 21% or 15% patients (95% confidence interval, ±5.9%) would not have been subjected to MDCT if there was no legal risk. Physicians overestimated the probability that the computed tomographic scan would yield a significant result and indicated an even greater perceived medicolegal risk if the scan was not obtained. Physician test-ordering behavior is complex, and our study queries pertinent aspects of MDCT testing. The magnification of legal risk vs the pretest probability of a significant finding is demonstrated. Physicians significantly overestimated pretest probability of a significant finding on head MDCT scans and presumed legal risk. Copyright © 2012 Elsevier Inc. All rights reserved.

  6. General Friction Model Extended by the Effect of Strain Hardening

    DEFF Research Database (Denmark)

    Nielsen, Chris V.; Martins, Paulo A.F.; Bay, Niels

    2016-01-01

    An extension to the general friction model proposed by Wanheim and Bay [1] to include the effect of strain hardening is proposed. The friction model relates the friction stress to the fraction of real contact area by a friction factor under steady state sliding. The original model for the real...... contact area as function of the normalized contact pressure is based on slip-line analysis and hence on the assumption of rigid-ideally plastic material behavior. In the present work, a general finite element model is established to, firstly, reproduce the original model under the assumption of rigid...

  7. A General Microscopic Traffic Model Yielding Dissipative Shocks

    DEFF Research Database (Denmark)

    Gaididei, Yuri Borisovich; Caputo, Jean Guy; Christiansen, Peter Leth

    2018-01-01

    We consider a general microscopic traffic model with a delay. An algebraic traffic function reduces the equation to the Aw-Rascle microscopic model while a sigmoid function gives the standard “follow the leader”. For zero delay we prove that the homogeneous solution is globally stable...

  8. Meiofauna metabolism in suboxic sediments: currently overestimated.

    Directory of Open Access Journals (Sweden)

    Ulrike Braeckman

    Full Text Available Oxygen is recognized as a structuring factor of metazoan communities in marine sediments. The importance of oxygen as a controlling factor on meiofauna (32 µm-1 mm in size respiration rates is however less clear. Typically, respiration rates are measured under oxic conditions, after which these rates are used in food web studies to quantify the role of meiofauna in sediment carbon turnover. Sediment oxygen concentration ([O(2] is generally far from saturated, implying that (1 current estimates of the role of meiofauna in carbon cycling may be biased and (2 meiofaunal organisms need strategies to survive in oxygen-stressed environments. Two main survival strategies are often hypothesized: 1 frequent migration to oxic layers and 2 morphological adaptation. To evaluate these hypotheses, we (1 used a model of oxygen turnover in the meiofauna body as a function of ambient [O(2], and (2 performed respiration measurements at a range of [O(2] conditions. The oxygen turnover model predicts a tight coupling between ambient [O(2] and meiofauna body [O(2] with oxygen within the body being consumed in seconds. This fast turnover favors long and slender organisms in sediments with low ambient [O(2] but even then frequent migration between suboxic and oxic layers is for most organisms not a viable strategy to alleviate oxygen limitation. Respiration rates of all measured meiofauna organisms slowed down in response to decreasing ambient [O(2], with Nematoda displaying the highest metabolic sensitivity for declining [O(2] followed by Foraminifera and juvenile Gastropoda. Ostracoda showed a behavioral stress response when ambient [O(2] reached a critical level. Reduced respiration at low ambient [O(2] implies that meiofauna in natural, i.e. suboxic, sediments must have a lower metabolism than inferred from earlier respiration rates conducted under oxic conditions. The implications of these findings are discussed for the contribution of meiofauna to carbon

  9. A generalized conditional heteroscedastic model for temperature downscaling

    Science.gov (United States)

    Modarres, R.; Ouarda, T. B. M. J.

    2014-11-01

    This study describes a method for deriving the time varying second order moment, or heteroscedasticity, of local daily temperature and its association to large Coupled Canadian General Circulation Models predictors. This is carried out by applying a multivariate generalized autoregressive conditional heteroscedasticity (MGARCH) approach to construct the conditional variance-covariance structure between General Circulation Models (GCMs) predictors and maximum and minimum temperature time series during 1980-2000. Two MGARCH specifications namely diagonal VECH and dynamic conditional correlation (DCC) are applied and 25 GCM predictors were selected for a bivariate temperature heteroscedastic modeling. It is observed that the conditional covariance between predictors and temperature is not very strong and mostly depends on the interaction between the random process governing temporal variation of predictors and predictants. The DCC model reveals a time varying conditional correlation between GCM predictors and temperature time series. No remarkable increasing or decreasing change is observed for correlation coefficients between GCM predictors and observed temperature during 1980-2000 while weak winter-summer seasonality is clear for both conditional covariance and correlation. Furthermore, the stationarity and nonlinearity Kwiatkowski-Phillips-Schmidt-Shin (KPSS) and Brock-Dechert-Scheinkman (BDS) tests showed that GCM predictors, temperature and their conditional correlation time series are nonlinear but stationary during 1980-2000 according to BDS and KPSS test results. However, the degree of nonlinearity of temperature time series is higher than most of the GCM predictors.

  10. The generalized spherical model of ferromagnetic films

    International Nuclear Information System (INIS)

    Costache, G.

    1977-12-01

    The D→ infinity of the D-vectorial model of a ferromagnetic film with free surfaces is exactly solved. The mathematical mechanism responsible for the onset of a phase transition in the system is a generalized sticking phenomenon. It is shown that the temperature at which the sticking appears, the transition temperature of the model is monotonously increasing with increasing the number of layers of the film, contrary to what happens in the spherical model with overall constraint. Certain correlation inequalities of Griffiths type are shown to hold. (author)

  11. Simulation modelling in agriculture: General considerations. | R.I. ...

    African Journals Online (AJOL)

    A computer simulation model is a detailed working hypothesis about a given system. The computer does all the necessary arithmetic when the hypothesis is invoked to predict the future behaviour of the simulated system under given conditions.A general pragmatic approach to model building is discussed; techniques are ...

  12. Fermions as generalized Ising models

    Directory of Open Access Journals (Sweden)

    C. Wetterich

    2017-04-01

    Full Text Available We establish a general map between Grassmann functionals for fermions and probability or weight distributions for Ising spins. The equivalence between the two formulations is based on identical transfer matrices and expectation values of products of observables. The map preserves locality properties and can be realized for arbitrary dimensions. We present a simple example where a quantum field theory for free massless Dirac fermions in two-dimensional Minkowski space is represented by an asymmetric Ising model on a euclidean square lattice.

  13. International Competition and Inequality: A Generalized Ricardian Model

    OpenAIRE

    Adolfo Figueroa

    2014-01-01

    Why does the gap in real wage rates persist between the First World and the Third World after so many years of increasing globalization? The standard neoclassical trade model predicts that real wage rates will be equalized with international trade, whereas the standard Ricardian trade model does not. Facts are thus consistent with the Ricardian model. However, this model leaves undetermined income distribution. The objective of this paper is to fill this gap by developing a generalized Ricard...

  14. Students with Non-Proficient Information Seeking Skills Greatly Over-Estimate Their Abilities. A Review of: Gross, Melissa, and Don Latham.

    Directory of Open Access Journals (Sweden)

    David Herron

    2008-06-01

    Full Text Available Objective – The objective of this study is an investigation of the relationship between students’ self-assessment of their information literacy skills and their actual skill level, as well as an analysis of whether library anxiety is related to information skill attainment. Design – Quantitative research design (Information Literacy Test (ILT, Library Anxiety Scale (LAS, pre and post surveys.Setting – Florida State University, United States.Subjects – Students, incoming freshmen.Methods – Information literacy skills were measured using the Information Literacy Test (ILT, presenting subjects with 65 multiple choice items designed around four of the five ACRL information literacy standards, in which students were expectedto: 1 determine the nature and extent of the information needed; 2 access needed information effectively and efficiently; 3 evaluate information and its sources critically and incorporates selected information into his/her knowledge base system; 4 understand many of the economic, legal and social issues surrounding the use of information and accesses and uses information ethically and legally. The ILT categorized participant scores as non-proficient(Main Results – The main aim of the study was to test the hypothesis that students who test non-proficient on an information literacy test tend to overestimate their competency to a higher degree than proficient and advanced students. In the pre- and post-surveys, the students were asked to estimate their performance onthe ILT in terms of the expected percentage of questions they would answer correctly, the number of questions they expected to answer correctly, and how their performance on the ILT would compare toothers taking the test (in percentage. The results of the study show that all students overestimate their abilities, both in terms of performance and relative performance, in the pre-survey. The estimated percentage correct answers for the whole group was 75%, but

  15. Awareness of Stroke Risk after TIA in Swiss General Practitioners and Hospital Physicians.

    Science.gov (United States)

    Streit, Sven; Baumann, Philippe; Barth, Jürgen; Mattle, Heinrich P; Arnold, Marcel; Bassetti, Claudio L; Meli, Damian N; Fischer, Urs

    2015-01-01

    Transient ischemic attacks (TIA) are stroke warning signs and emergency situations, and, if immediately investigated, doctors can intervene to prevent strokes. Nevertheless, many patients delay going to the doctor, and doctors might delay urgently needed investigations and preventative treatments. We set out to determine how much general practitioners (GPs) and hospital physicians (HPs) knew about stroke risk after TIA, and to measure their referral rates. We used a structured questionnaire to ask GPs and HPs in the catchment area of the University Hospital of Bern to estimate a patient's risk of stroke after TIA. We also assessed their referral behavior. We then statistically analysed their reasons for deciding not to immediately refer patients. Of the 1545 physicians, 40% (614) returned the survey. Of these, 75% (457) overestimated stroke risk within 24 hours, and 40% (245) overestimated risk within 3 months after TIA. Only 9% (53) underestimated stroke risk within 24 hours and 26% (158) underestimated risk within 3 months; 78% (473) of physicians overestimated the amount that carotid endarterectomy reduces stroke risk; 93% (543) would rigorously investigate the cause of a TIA, but only 38% (229) would refer TIA patients for urgent investigations "very often". Physicians most commonly gave these reasons for not making emergency referrals: patient's advanced age; patient's preference; patient was multimorbid; and, patient needed long-term care. Although physicians overestimate stroke risk after TIA, their rate of emergency referral is modest, mainly because they tend not to refer multimorbid and elderly patients at the appropriate rate. Since old and frail patients benefit from urgent investigations and treatment after TIA as much as younger patients, future educational campaigns should focus on the importance of emergency evaluations for all TIA patients.

  16. Awareness of Stroke Risk after TIA in Swiss General Practitioners and Hospital Physicians.

    Directory of Open Access Journals (Sweden)

    Sven Streit

    Full Text Available Transient ischemic attacks (TIA are stroke warning signs and emergency situations, and, if immediately investigated, doctors can intervene to prevent strokes. Nevertheless, many patients delay going to the doctor, and doctors might delay urgently needed investigations and preventative treatments. We set out to determine how much general practitioners (GPs and hospital physicians (HPs knew about stroke risk after TIA, and to measure their referral rates.We used a structured questionnaire to ask GPs and HPs in the catchment area of the University Hospital of Bern to estimate a patient's risk of stroke after TIA. We also assessed their referral behavior. We then statistically analysed their reasons for deciding not to immediately refer patients.Of the 1545 physicians, 40% (614 returned the survey. Of these, 75% (457 overestimated stroke risk within 24 hours, and 40% (245 overestimated risk within 3 months after TIA. Only 9% (53 underestimated stroke risk within 24 hours and 26% (158 underestimated risk within 3 months; 78% (473 of physicians overestimated the amount that carotid endarterectomy reduces stroke risk; 93% (543 would rigorously investigate the cause of a TIA, but only 38% (229 would refer TIA patients for urgent investigations "very often". Physicians most commonly gave these reasons for not making emergency referrals: patient's advanced age; patient's preference; patient was multimorbid; and, patient needed long-term care.Although physicians overestimate stroke risk after TIA, their rate of emergency referral is modest, mainly because they tend not to refer multimorbid and elderly patients at the appropriate rate. Since old and frail patients benefit from urgent investigations and treatment after TIA as much as younger patients, future educational campaigns should focus on the importance of emergency evaluations for all TIA patients.

  17. On a Generalized Squared Gaussian Diffusion Model for Option Valuation

    Directory of Open Access Journals (Sweden)

    Edeki S.O.

    2017-01-01

    Full Text Available In financial mathematics, option pricing models are vital tools whose usefulness cannot be overemphasized. Modern approaches and modelling of financial derivatives are therefore required in option pricing and valuation settings. In this paper, we derive via the application of Ito lemma, a pricing model referred to as Generalized Squared Gaussian Diffusion Model (GSGDM for option pricing and valuation. Same approach can be considered via Stratonovich stochastic dynamics. We also show that the classical Black-Scholes, and the square root constant elasticity of variance models are special cases of the GSGDM. In addition, general solution of the GSGDM is obtained using modified variational iterative method (MVIM.

  18. Simulating snow maps for Norway: description and statistical evaluation of the seNorge snow model

    Directory of Open Access Journals (Sweden)

    T. M. Saloranta

    2012-11-01

    Full Text Available Daily maps of snow conditions have been produced in Norway with the seNorge snow model since 2004. The seNorge snow model operates with 1 × 1 km resolution, uses gridded observations of daily temperature and precipitation as its input forcing, and simulates, among others, snow water equivalent (SWE, snow depth (SD, and the snow bulk density (ρ. In this paper the set of equations contained in the seNorge model code is described and a thorough spatiotemporal statistical evaluation of the model performance from 1957–2011 is made using the two major sets of extensive in situ snow measurements that exist for Norway. The evaluation results show that the seNorge model generally overestimates both SWE and ρ, and that the overestimation of SWE increases with elevation throughout the snow season. However, the R2-values for model fit are 0.60 for (log-transformed SWE and 0.45 for ρ, indicating that after removal of the detected systematic model biases (e.g. by recalibrating the model or expressing snow conditions in relative units the model performs rather well. The seNorge model provides a relatively simple, not very data-demanding, yet nonetheless process-based method to construct snow maps of high spatiotemporal resolution. It is an especially well suited alternative for operational snow mapping in regions with rugged topography and large spatiotemporal variability in snow conditions, as is the case in the mountainous Norway.

  19. Processes influencing model-data mismatch in drought-stressed, fire-disturbed eddy flux sites

    Science.gov (United States)

    Mitchell, Stephen; Beven, Keith; Freer, Jim; Law, Beverly

    2011-06-01

    Semiarid forests are very sensitive to climatic change and among the most difficult ecosystems to accurately model. We tested the performance of the Biome-BGC model against eddy flux data taken from young (years 2004-2008), mature (years 2002-2008), and old-growth (year 2000) ponderosa pine stands at Metolius, Oregon, and subsequently examined several potential causes for model-data mismatch. We used the Generalized Likelihood Uncertainty Estimation methodology, which involved 500,000 model runs for each stand (1,500,000 total). Each simulation was run with randomly generated parameter values from a uniform distribution based on published parameter ranges, resulting in modeled estimates of net ecosystem CO2 exchange (NEE) that were compared to measured eddy flux data. Simulations for the young stand exhibited the highest level of performance, though they overestimated ecosystem C accumulation (-NEE) 99% of the time. Among the simulations for the mature and old-growth stands, 100% and 99% of the simulations underestimated ecosystem C accumulation. One obvious area of model-data mismatch is soil moisture, which was overestimated by the model in the young and old-growth stands yet underestimated in the mature stand. However, modeled estimates of soil water content and associated water deficits did not appear to be the primary cause of model-data mismatch; our analysis indicated that gross primary production can be accurately modeled even if soil moisture content is not. Instead, difficulties in adequately modeling ecosystem respiration, mainly autotrophic respiration, appeared to be the fundamental cause of model-data mismatch.

  20. Generalized network modeling of capillary-dominated two-phase flow.

    Science.gov (United States)

    Raeini, Ali Q; Bijeljic, Branko; Blunt, Martin J

    2018-02-01

    We present a generalized network model for simulating capillary-dominated two-phase flow through porous media at the pore scale. Three-dimensional images of the pore space are discretized using a generalized network-described in a companion paper [A. Q. Raeini, B. Bijeljic, and M. J. Blunt, Phys. Rev. E 96, 013312 (2017)2470-004510.1103/PhysRevE.96.013312]-which comprises pores that are divided into smaller elements called half-throats and subsequently into corners. Half-throats define the connectivity of the network at the coarsest level, connecting each pore to half-throats of its neighboring pores from their narrower ends, while corners define the connectivity of pore crevices. The corners are discretized at different levels for accurate calculation of entry pressures, fluid volumes, and flow conductivities that are obtained using direct simulation of flow on the underlying image. This paper discusses the two-phase flow model that is used to compute the averaged flow properties of the generalized network, including relative permeability and capillary pressure. We validate the model using direct finite-volume two-phase flow simulations on synthetic geometries, and then present a comparison of the model predictions with a conventional pore-network model and experimental measurements of relative permeability in the literature.

  1. Generalized network modeling of capillary-dominated two-phase flow

    Science.gov (United States)

    Raeini, Ali Q.; Bijeljic, Branko; Blunt, Martin J.

    2018-02-01

    We present a generalized network model for simulating capillary-dominated two-phase flow through porous media at the pore scale. Three-dimensional images of the pore space are discretized using a generalized network—described in a companion paper [A. Q. Raeini, B. Bijeljic, and M. J. Blunt, Phys. Rev. E 96, 013312 (2017), 10.1103/PhysRevE.96.013312]—which comprises pores that are divided into smaller elements called half-throats and subsequently into corners. Half-throats define the connectivity of the network at the coarsest level, connecting each pore to half-throats of its neighboring pores from their narrower ends, while corners define the connectivity of pore crevices. The corners are discretized at different levels for accurate calculation of entry pressures, fluid volumes, and flow conductivities that are obtained using direct simulation of flow on the underlying image. This paper discusses the two-phase flow model that is used to compute the averaged flow properties of the generalized network, including relative permeability and capillary pressure. We validate the model using direct finite-volume two-phase flow simulations on synthetic geometries, and then present a comparison of the model predictions with a conventional pore-network model and experimental measurements of relative permeability in the literature.

  2. Generalized modeling of the fractional-order memcapacitor and its character analysis

    Science.gov (United States)

    Guo, Zhang; Si, Gangquan; Diao, Lijie; Jia, Lixin; Zhang, Yanbin

    2018-06-01

    Memcapacitor is a new type of memory device generalized from the memristor. This paper proposes a generalized fractional-order memcapacitor model by introducing the fractional calculus into the model. The generalized formulas are studied and the two fractional-order parameter α, β are introduced where α mostly affects the fractional calculus value of charge q within the generalized Ohm's law and β generalizes the state equation which simulates the physical mechanism of a memcapacitor into the fractional sense. This model will be reduced to the conventional memcapacitor as α = 1 , β = 0 and to the conventional memristor as α = 0 , β = 1 . Then the numerical analysis of the fractional-order memcapacitor is studied. And the characteristics and output behaviors of the fractional-order memcapacitor applied with sinusoidal charge are derived. The analysis results have shown that there are four basic v - q and v - i curve patterns when the fractional order α, β respectively equal to 0 or 1, moreover all v - q and v - i curves of the other fractional-order models are transition curves between the four basic patterns.

  3. Current definition and a generalized federbush model

    International Nuclear Information System (INIS)

    Singh, L.P.S.; Hagen, C.R.

    1978-01-01

    The Federbush model is studied, with particular attention being given to the definition of currents. Inasmuch as there is no a priori restriction of local gauge invariance, the currents in the interacting case can be defined more generally than in Q.E.D. It is found that two arbitrary parameters are thereby introduced into the theory. Lowest order perturbation calculations for the current correlation functions and the Fermion propagators indicate that the theory admits a whole class of solutions dependent upon these parameters with the closed solution of Federbush emerging as a special case. The theory is shown to be locally covariant, and a conserved energy--momentum tensor is displayed. One finds in addition that the generators of gauge transformations for the fields are conserved. Finally it is shown that the general theory yields the Federbush solution if suitable Thirring model type counterterms are added

  4. Retrofitting Non-Cognitive-Diagnostic Reading Assessment under the Generalized DINA Model Framework

    Science.gov (United States)

    Chen, Huilin; Chen, Jinsong

    2016-01-01

    Cognitive diagnosis models (CDMs) are psychometric models developed mainly to assess examinees' specific strengths and weaknesses in a set of skills or attributes within a domain. By adopting the Generalized-DINA model framework, the recently developed general modeling framework, we attempted to retrofit the PISA reading assessments, a…

  5. Tilted Bianchi type I dust fluid cosmological model in general relativity

    Indian Academy of Sciences (India)

    Tilted Bianchi type I dust fluid cosmological model in general relativity ... In this paper, we have investigated a tilted Bianchi type I cosmological model filled with dust of perfect fluid in general relativity. ... Pramana – Journal of Physics | News ...

  6. Combining a popularity-productivity stochastic block model with a discriminative-content model for general structure detection.

    Science.gov (United States)

    Chai, Bian-fang; Yu, Jian; Jia, Cai-Yan; Yang, Tian-bao; Jiang, Ya-wen

    2013-07-01

    Latent community discovery that combines links and contents of a text-associated network has drawn more attention with the advance of social media. Most of the previous studies aim at detecting densely connected communities and are not able to identify general structures, e.g., bipartite structure. Several variants based on the stochastic block model are more flexible for exploring general structures by introducing link probabilities between communities. However, these variants cannot identify the degree distributions of real networks due to a lack of modeling of the differences among nodes, and they are not suitable for discovering communities in text-associated networks because they ignore the contents of nodes. In this paper, we propose a popularity-productivity stochastic block (PPSB) model by introducing two random variables, popularity and productivity, to model the differences among nodes in receiving links and producing links, respectively. This model has the flexibility of existing stochastic block models in discovering general community structures and inherits the richness of previous models that also exploit popularity and productivity in modeling the real scale-free networks with power law degree distributions. To incorporate the contents in text-associated networks, we propose a combined model which combines the PPSB model with a discriminative model that models the community memberships of nodes by their contents. We then develop expectation-maximization (EM) algorithms to infer the parameters in the two models. Experiments on synthetic and real networks have demonstrated that the proposed models can yield better performances than previous models, especially on networks with general structures.

  7. Maximally Generalized Yang-Mills Model and Dynamical Breaking of Gauge Symmetry

    International Nuclear Information System (INIS)

    Wang Dianfu; Song Heshan

    2006-01-01

    A maximally generalized Yang-Mills model, which contains, besides the vector part V μ , also an axial-vector part A μ , a scalar part S, a pseudoscalar part P, and a tensor part T μν , is constructed and the dynamical breaking of gauge symmetry in the model is also discussed. It is shown, in terms of the Nambu-Jona-Lasinio mechanism, that the gauge symmetry breaking can be realized dynamically in the maximally generalized Yang-Mills model. The combination of the maximally generalized Yang-Mills model and the NJL mechanism provides a way to overcome the difficulties related to the Higgs field and the Higgs mechanism in the usual spontaneous symmetry breaking theory.

  8. Anomaly General Circulation Models.

    Science.gov (United States)

    Navarra, Antonio

    The feasibility of the anomaly model is assessed using barotropic and baroclinic models. In the barotropic case, both a stationary and a time-dependent model has been formulated and constructed, whereas only the stationary, linear case is considered in the baroclinic case. Results from the barotropic model indicate that a relation between the stationary solution and the time-averaged non-linear solution exists. The stationary linear baroclinic solution can therefore be considered with some confidence. The linear baroclinic anomaly model poses a formidable mathematical problem because it is necessary to solve a gigantic linear system to obtain the solution. A new method to find solution of large linear system, based on a projection on the Krylov subspace is shown to be successful when applied to the linearized baroclinic anomaly model. The scheme consists of projecting the original linear system on the Krylov subspace, thereby reducing the dimensionality of the matrix to be inverted to obtain the solution. With an appropriate setting of the damping parameters, the iterative Krylov method reaches a solution even using a Krylov subspace ten times smaller than the original space of the problem. This generality allows the treatment of the important problem of linear waves in the atmosphere. A larger class (nonzonally symmetric) of basic states can now be treated for the baroclinic primitive equations. These problem leads to large unsymmetrical linear systems of order 10000 and more which can now be successfully tackled by the Krylov method. The (R7) linear anomaly model is used to investigate extensively the linear response to equatorial and mid-latitude prescribed heating. The results indicate that the solution is deeply affected by the presence of the stationary waves in the basic state. The instability of the asymmetric flows, first pointed out by Simmons et al. (1983), is active also in the baroclinic case. However, the presence of baroclinic processes modifies the

  9. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway''s critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author''s treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  10. Generalized outcome-based strategy classification: comparing deterministic and probabilistic choice models.

    Science.gov (United States)

    Hilbig, Benjamin E; Moshagen, Morten

    2014-12-01

    Model comparisons are a vital tool for disentangling which of several strategies a decision maker may have used--that is, which cognitive processes may have governed observable choice behavior. However, previous methodological approaches have been limited to models (i.e., decision strategies) with deterministic choice rules. As such, psychologically plausible choice models--such as evidence-accumulation and connectionist models--that entail probabilistic choice predictions could not be considered appropriately. To overcome this limitation, we propose a generalization of Bröder and Schiffer's (Journal of Behavioral Decision Making, 19, 361-380, 2003) choice-based classification method, relying on (1) parametric order constraints in the multinomial processing tree framework to implement probabilistic models and (2) minimum description length for model comparison. The advantages of the generalized approach are demonstrated through recovery simulations and an experiment. In explaining previous methods and our generalization, we maintain a nontechnical focus--so as to provide a practical guide for comparing both deterministic and probabilistic choice models.

  11. Modelling debris flows down general channels

    Directory of Open Access Journals (Sweden)

    S. P. Pudasaini

    2005-01-01

    Full Text Available This paper is an extension of the single-phase cohesionless dry granular avalanche model over curved and twisted channels proposed by Pudasaini and Hutter (2003. It is a generalisation of the Savage and Hutter (1989, 1991 equations based on simple channel topography to a two-phase fluid-solid mixture of debris material. Important terms emerging from the correct treatment of the kinematic and dynamic boundary condition, and the variable basal topography are systematically taken into account. For vanishing fluid contribution and torsion-free channel topography our new model equations exactly degenerate to the previous Savage-Hutter model equations while such a degeneration was not possible by the Iverson and Denlinger (2001 model, which, in fact, also aimed to extend the Savage and Hutter model. The model equations of this paper have been rigorously derived; they include the effects of the curvature and torsion of the topography, generally for arbitrarily curved and twisted channels of variable channel width. The equations are put into a standard conservative form of partial differential equations. From these one can easily infer the importance and influence of the pore-fluid-pressure distribution in debris flow dynamics. The solid-phase is modelled by applying a Coulomb dry friction law whereas the fluid phase is assumed to be an incompressible Newtonian fluid. Input parameters of the equations are the internal and bed friction angles of the solid particles, the viscosity and volume fraction of the fluid, the total mixture density and the pore pressure distribution of the fluid at the bed. Given the bed topography and initial geometry and the initial velocity profile of the debris mixture, the model equations are able to describe the dynamics of the depth profile and bed parallel depth-averaged velocity distribution from the initial position to the final deposit. A shock capturing, total variation diminishing numerical scheme is implemented to

  12. Ecosystem Model Performance at Wetlands: Results from the North American Carbon Program Site Synthesis

    Science.gov (United States)

    Sulman, B. N.; Desai, A. R.; Schroeder, N. M.; NACP Site Synthesis Participants

    2011-12-01

    Northern peatlands contain a significant fraction of the global carbon pool, and their responses to hydrological change are likely to be important factors in future carbon cycle-climate feedbacks. Global-scale carbon cycle modeling studies typically use general ecosystem models with coarse spatial resolution, often without peatland-specific processes. Here, seven ecosystem models were used to simulate CO2 fluxes at three field sites in Canada and the northern United States, including two nutrient-rich fens and one nutrient-poor, sphagnum-dominated bog, from 2002-2006. Flux residuals (simulated - observed) were positively correlated with measured water table for both gross ecosystem productivity (GEP) and ecosystem respiration (ER) at the two fen sites for all models, and were positively correlated with water table at the bog site for the majority of models. Modeled diurnal cycles at fen sites agreed well with eddy covariance measurements overall. Eddy covariance GEP and ER were higher during dry periods than during wet periods, while model results predicted either the opposite relationship or no significant difference. At the bog site, eddy covariance GEP had no significant dependence on water table, while models predicted higher GEP during wet periods. All models significantly over-estimated GEP at the bog site, and all but one over-estimated ER at the bog site. Carbon cycle models in peatland-rich regions could be improved by incorporating better models or measurements of hydrology and by inhibiting GEP and ER rates under saturated conditions. Bogs and fens likely require distinct treatments in ecosystem models due to differences in nutrients, peat properties, and plant communities.

  13. General Equilibrium Models: Improving the Microeconomics Classroom

    Science.gov (United States)

    Nicholson, Walter; Westhoff, Frank

    2009-01-01

    General equilibrium models now play important roles in many fields of economics including tax policy, environmental regulation, international trade, and economic development. The intermediate microeconomics classroom has not kept pace with these trends, however. Microeconomics textbooks primarily focus on the insights that can be drawn from the…

  14. The Overestimation Phenomenon in a Skill-Based Gaming Context: The Case of March Madness Pools.

    Science.gov (United States)

    Kwak, Dae Hee

    2016-03-01

    Over 100 million people are estimated to take part in the NCAA Men's Basketball Tournament Championship bracket contests. However, relatively little is known about consumer behavior in skill-based gaming situations (e.g., sports betting). In two studies, we investigated the overestimation phenomenon in the "March Madness" context. In Study 1 (N = 81), we found that individuals who were allowed to make their own predictions were significantly more optimistic about their performance than individuals who did not make their own selections. In Study 2 (N = 197), all subjects participated in a mock competitive bracket pool. In line with the illusion of control theory, results showed that higher self-ratings of probability of winning significantly increased maximum willingness to wager but did not improve actual performance. Lastly, perceptions of high probability of winning significantly contributed to consumers' enjoyment and willingness to participate in a bracket pool in the future.

  15. A general diagnostic model applied to language testing data.

    Science.gov (United States)

    von Davier, Matthias

    2008-11-01

    Probabilistic models with one or more latent variables are designed to report on a corresponding number of skills or cognitive attributes. Multidimensional skill profiles offer additional information beyond what a single test score can provide, if the reported skills can be identified and distinguished reliably. Many recent approaches to skill profile models are limited to dichotomous data and have made use of computationally intensive estimation methods such as Markov chain Monte Carlo, since standard maximum likelihood (ML) estimation techniques were deemed infeasible. This paper presents a general diagnostic model (GDM) that can be estimated with standard ML techniques and applies to polytomous response variables as well as to skills with two or more proficiency levels. The paper uses one member of a larger class of diagnostic models, a compensatory diagnostic model for dichotomous and partial credit data. Many well-known models, such as univariate and multivariate versions of the Rasch model and the two-parameter logistic item response theory model, the generalized partial credit model, as well as a variety of skill profile models, are special cases of this GDM. In addition to an introduction to this model, the paper presents a parameter recovery study using simulated data and an application to real data from the field test for TOEFL Internet-based testing.

  16. A Generalized Nonlocal Calculus with Application to the Peridynamics Model for Solid Mechanics

    OpenAIRE

    Alali, Bacim; Liu, Kuo; Gunzburger, Max

    2014-01-01

    A nonlocal vector calculus was introduced in [2] that has proved useful for the analysis of the peridynamics model of nonlocal mechanics and nonlocal diffusion models. A generalization is developed that provides a more general setting for the nonlocal vector calculus that is independent of particular nonlocal models. It is shown that general nonlocal calculus operators are integral operators with specific integral kernels. General nonlocal calculus properties are developed, including nonlocal...

  17. Performance of five surface energy balance models for estimating daily evapotranspiration in high biomass sorghum

    Science.gov (United States)

    Wagle, Pradeep; Bhattarai, Nishan; Gowda, Prasanna H.; Kakani, Vijaya G.

    2017-06-01

    Robust evapotranspiration (ET) models are required to predict water usage in a variety of terrestrial ecosystems under different geographical and agrometeorological conditions. As a result, several remote sensing-based surface energy balance (SEB) models have been developed to estimate ET over large regions. However, comparison of the performance of several SEB models at the same site is limited. In addition, none of the SEB models have been evaluated for their ability to predict ET in rain-fed high biomass sorghum grown for biofuel production. In this paper, we evaluated the performance of five widely used single-source SEB models, namely Surface Energy Balance Algorithm for Land (SEBAL), Mapping ET with Internalized Calibration (METRIC), Surface Energy Balance System (SEBS), Simplified Surface Energy Balance Index (S-SEBI), and operational Simplified Surface Energy Balance (SSEBop), for estimating ET over a high biomass sorghum field during the 2012 and 2013 growing seasons. The predicted ET values were compared against eddy covariance (EC) measured ET (ETEC) for 19 cloud-free Landsat image. In general, S-SEBI, SEBAL, and SEBS performed reasonably well for the study period, while METRIC and SSEBop performed poorly. All SEB models substantially overestimated ET under extremely dry conditions as they underestimated sensible heat (H) and overestimated latent heat (LE) fluxes under dry conditions during the partitioning of available energy. METRIC, SEBAL, and SEBS overestimated LE regardless of wet or dry periods. Consequently, predicted seasonal cumulative ET by METRIC, SEBAL, and SEBS were higher than seasonal cumulative ETEC in both seasons. In contrast, S-SEBI and SSEBop substantially underestimated ET under too wet conditions, and predicted seasonal cumulative ET by S-SEBI and SSEBop were lower than seasonal cumulative ETEC in the relatively wetter 2013 growing season. Our results indicate the necessity of inclusion of soil moisture or plant water stress

  18. Critical Comments on the General Model of Instructional Communication

    Science.gov (United States)

    Walton, Justin D.

    2014-01-01

    This essay presents a critical commentary on McCroskey et al.'s (2004) general model of instructional communication. In particular, five points are examined which make explicit and problematize the meta-theoretical assumptions of the model. Comments call attention to the limitations of the model and argue for a broader approach to…

  19. An Object-oriented Knowledge Link Model for General Knowledge Management

    OpenAIRE

    Xiao-hong, CHEN; Bang-chuan, LAI

    2005-01-01

    The knowledge link is the basic on knowledge share and the indispensable part in knowledge standardization management. In this paper, a object-oriented knowledge link model is proposed for general knowledge management by using objectoriented representation based on knowledge levels system. In the model, knowledge link is divided into general knowledge link and integrated knowledge with corresponding link properties and methods. What’s more, its BNF syntax is described and designed.

  20. Modeling Answer Change Behavior: An Application of a Generalized Item Response Tree Model

    Science.gov (United States)

    Jeon, Minjeong; De Boeck, Paul; van der Linden, Wim

    2017-01-01

    We present a novel application of a generalized item response tree model to investigate test takers' answer change behavior. The model allows us to simultaneously model the observed patterns of the initial and final responses after an answer change as a function of a set of latent traits and item parameters. The proposed application is illustrated…

  1. Generalized structured component analysis a component-based approach to structural equation modeling

    CERN Document Server

    Hwang, Heungsun

    2014-01-01

    Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...

  2. The importance of accurate glacier albedo for estimates of surface mass balance on Vatnajökull: evaluating the surface energy budget in a regional climate model with automatic weather station observations

    Science.gov (United States)

    Steffensen Schmidt, Louise; Aðalgeirsdóttir, Guðfinna; Guðmundsson, Sverrir; Langen, Peter L.; Pálsson, Finnur; Mottram, Ruth; Gascoin, Simon; Björnsson, Helgi

    2017-07-01

    A simulation of the surface climate of Vatnajökull ice cap, Iceland, carried out with the regional climate model HIRHAM5 for the period 1980-2014, is used to estimate the evolution of the glacier surface mass balance (SMB). This simulation uses a new snow albedo parameterization that allows albedo to exponentially decay with time and is surface temperature dependent. The albedo scheme utilizes a new background map of the ice albedo created from observed MODIS data. The simulation is evaluated against observed daily values of weather parameters from five automatic weather stations (AWSs) from the period 2001-2014, as well as in situ SMB measurements from the period 1995-2014. The model agrees well with observations at the AWS sites, albeit with a general underestimation of the net radiation. This is due to an underestimation of the incoming radiation and a general overestimation of the albedo. The average modelled albedo is overestimated in the ablation zone, which we attribute to an overestimation of the thickness of the snow layer and not taking the surface darkening from dirt and volcanic ash deposition during dust storms and volcanic eruptions into account. A comparison with the specific summer, winter, and net mass balance for the whole of Vatnajökull (1995-2014) shows a good overall fit during the summer, with a small mass balance underestimation of 0.04 m w.e. on average, whereas the winter mass balance is overestimated by on average 0.5 m w.e. due to too large precipitation at the highest areas of the ice cap. A simple correction of the accumulation at the highest points of the glacier reduces this to 0.15 m w.e. Here, we use HIRHAM5 to simulate the evolution of the SMB of Vatnajökull for the period 1981-2014 and show that the model provides a reasonable representation of the SMB for this period. However, a major source of uncertainty in the representation of the SMB is the representation of the albedo, and processes currently not accounted for in RCMs

  3. The importance of accurate glacier albedo for estimates of surface mass balance on Vatnajökull: evaluating the surface energy budget in a regional climate model with automatic weather station observations

    Directory of Open Access Journals (Sweden)

    L. S. Schmidt

    2017-07-01

    Full Text Available A simulation of the surface climate of Vatnajökull ice cap, Iceland, carried out with the regional climate model HIRHAM5 for the period 1980–2014, is used to estimate the evolution of the glacier surface mass balance (SMB. This simulation uses a new snow albedo parameterization that allows albedo to exponentially decay with time and is surface temperature dependent. The albedo scheme utilizes a new background map of the ice albedo created from observed MODIS data. The simulation is evaluated against observed daily values of weather parameters from five automatic weather stations (AWSs from the period 2001–2014, as well as in situ SMB measurements from the period 1995–2014. The model agrees well with observations at the AWS sites, albeit with a general underestimation of the net radiation. This is due to an underestimation of the incoming radiation and a general overestimation of the albedo. The average modelled albedo is overestimated in the ablation zone, which we attribute to an overestimation of the thickness of the snow layer and not taking the surface darkening from dirt and volcanic ash deposition during dust storms and volcanic eruptions into account. A comparison with the specific summer, winter, and net mass balance for the whole of Vatnajökull (1995–2014 shows a good overall fit during the summer, with a small mass balance underestimation of 0.04 m w.e. on average, whereas the winter mass balance is overestimated by on average 0.5 m w.e. due to too large precipitation at the highest areas of the ice cap. A simple correction of the accumulation at the highest points of the glacier reduces this to 0.15 m w.e. Here, we use HIRHAM5 to simulate the evolution of the SMB of Vatnajökull for the period 1981–2014 and show that the model provides a reasonable representation of the SMB for this period. However, a major source of uncertainty in the representation of the SMB is the representation of the albedo, and processes

  4. Pricing Participating Products under a Generalized Jump-Diffusion Model

    Directory of Open Access Journals (Sweden)

    Tak Kuen Siu

    2008-01-01

    Full Text Available We propose a model for valuing participating life insurance products under a generalized jump-diffusion model with a Markov-switching compensator. It also nests a number of important and popular models in finance, including the classes of jump-diffusion models and Markovian regime-switching models. The Esscher transform is employed to determine an equivalent martingale measure. Simulation experiments are conducted to illustrate the practical implementation of the model and to highlight some features that can be obtained from our model.

  5. a Model Study of Small-Scale World Map Generalization

    Science.gov (United States)

    Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.

    2018-04-01

    With the globalization and rapid development every filed is taking an increasing interest in physical geography and human economics. There is a surging demand for small scale world map in large formats all over the world. Further study of automated mapping technology, especially the realization of small scale production on a large scale global map, is the key of the cartographic field need to solve. In light of this, this paper adopts the improved model (with the map and data separated) in the field of the mapmaking generalization, which can separate geographic data from mapping data from maps, mainly including cross-platform symbols and automatic map-making knowledge engine. With respect to the cross-platform symbol library, the symbol and the physical symbol in the geographic information are configured at all scale levels. With respect to automatic map-making knowledge engine consists 97 types, 1086 subtypes, 21845 basic algorithm and over 2500 relevant functional modules.In order to evaluate the accuracy and visual effect of our model towards topographic maps and thematic maps, we take the world map generalization in small scale as an example. After mapping generalization process, combining and simplifying the scattered islands make the map more explicit at 1 : 2.1 billion scale, and the map features more complete and accurate. Not only it enhance the map generalization of various scales significantly, but achieve the integration among map-makings of various scales, suggesting that this model provide a reference in cartographic generalization for various scales.

  6. A Duality Result for the Generalized Erlang Risk Model

    Directory of Open Access Journals (Sweden)

    Lanpeng Ji

    2014-11-01

    Full Text Available In this article, we consider the generalized Erlang risk model and its dual model. By using a conditional measure-preserving correspondence between the two models, we derive an identity for two interesting conditional probabilities. Applications to the discounted joint density of the surplus prior to ruin and the deficit at ruin are also discussed.

  7. Efficient probabilistic model checking on general purpose graphic processors

    NARCIS (Netherlands)

    Bosnacki, D.; Edelkamp, S.; Sulewski, D.; Pasareanu, C.S.

    2009-01-01

    We present algorithms for parallel probabilistic model checking on general purpose graphic processing units (GPGPUs). For this purpose we exploit the fact that some of the basic algorithms for probabilistic model checking rely on matrix vector multiplication. Since this kind of linear algebraic

  8. A Generalized Radiation Model for Human Mobility: Spatial Scale, Searching Direction and Trip Constraint.

    Directory of Open Access Journals (Sweden)

    Chaogui Kang

    Full Text Available We generalized the recently introduced "radiation model", as an analog to the generalization of the classic "gravity model", to consolidate its nature of universality for modeling diverse mobility systems. By imposing the appropriate scaling exponent λ, normalization factor κ and system constraints including searching direction and trip OD constraint, the generalized radiation model accurately captures real human movements in various scenarios and spatial scales, including two different countries and four different cities. Our analytical results also indicated that the generalized radiation model outperformed alternative mobility models in various empirical analyses.

  9. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  10. Linear and Generalized Linear Mixed Models and Their Applications

    CERN Document Server

    Jiang, Jiming

    2007-01-01

    This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested

  11. Over-estimation of glomerular filtration rate by single injection [51Cr]EDTA plasma clearance determination in patients with ascites

    DEFF Research Database (Denmark)

    Henriksen, Jens Henrik Sahl; Brøchner-Mortensen, J; Malchow-Møller, A

    1980-01-01

    The total plasma (Clt) and the renal plasma (Clr) clearances of [51Cr]EDTA were determined simultaneously in nine patients with ascites due to liver cirrhosis. Clt (mean 78 ml/min, range 34-115 ml/min) was significantly higher than Clr (mean 52 ml/min, range 13-96 ml/min, P ... fluid-plasma activity ratio of [51Cr]EDTA increased throughout the investigation period (5h). The results suggest that [51Cr]EDTA equilibrates slowly with the peritoneal space which indicates that Clt will over-estimate the glomerular filtration rate by approximately 20 ml/min in patients with ascites...

  12. Thurstonian models for sensory discrimination tests as generalized linear models

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    as a so-called generalized linear model. The underlying sensory difference 6 becomes directly a parameter of the statistical model and the estimate d' and it's standard error becomes the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard......Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data and the Thurstonian decision rule links the underlying sensory difference 6 to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed...

  13. PerMallows: An R Package for Mallows and Generalized Mallows Models

    Directory of Open Access Journals (Sweden)

    Ekhine Irurozki

    2016-08-01

    Full Text Available In this paper we present the R package PerMallows, which is a complete toolbox to work with permutations, distances and some of the most popular probability models for permutations: Mallows and the Generalized Mallows models. The Mallows model is an exponential location model, considered as analogous to the Gaussian distribution. It is based on the definition of a distance between permutations. The Generalized Mallows model is its best-known extension. The package includes functions for making inference, sampling and learning such distributions. The distances considered in PerMallows are Kendall's τ , Cayley, Hamming and Ulam.

  14. General classical solutions in the noncommutative CPN-1 model

    International Nuclear Information System (INIS)

    Foda, O.; Jack, I.; Jones, D.R.T.

    2002-01-01

    We give an explicit construction of general classical solutions for the noncommutative CP N-1 model in two dimensions, showing that they correspond to integer values for the action and topological charge. We also give explicit solutions for the Dirac equation in the background of these general solutions and show that the index theorem is satisfied

  15. Interest Rates with Long Memory: A Generalized Affine Term-Structure Model

    DEFF Research Database (Denmark)

    Osterrieder, Daniela

    .S. government bonds, we model the time series of the state vector by means of a co-fractional vector autoregressive model. The implication is that yields of all maturities exhibit nonstationary, yet mean-reverting, long-memory behavior of the order d ≈ 0.87. The long-run dynamics of the state vector are driven......We propose a model for the term structure of interest rates that is a generalization of the discrete-time, Gaussian, affine yield-curve model. Compared to standard affine models, our model allows for general linear dynamics in the vector of state variables. In an application to real yields of U...... forecasts that outperform several benchmark models, especially at long forecasting horizons....

  16. Merons in a generally covariant model with Gursey term

    International Nuclear Information System (INIS)

    Akdeniz, K.G.; Smailagic, A.

    1982-10-01

    We study meron solutions of the generally covariant and Weyl invariant fermionic model with Gursey term. We find that, due to the presence of this term, merons can exist even without the cosmological constant. This is a new feature compared to previously studied models. (author)

  17. A General Polygon-based Deformable Model for Object Recognition

    DEFF Research Database (Denmark)

    Jensen, Rune Fisker; Carstensen, Jens Michael

    1999-01-01

    We propose a general scheme for object localization and recognition based on a deformable model. The model combines shape and image properties by warping a arbitrary prototype intensity template according to the deformation in shape. The shape deformations are constrained by a probabilistic distr...

  18. On the characterization and software implementation of general protein lattice models.

    Directory of Open Access Journals (Sweden)

    Alessio Bechini

    Full Text Available models of proteins have been widely used as a practical means to computationally investigate general properties of the system. In lattice models any sterically feasible conformation is represented as a self-avoiding walk on a lattice, and residue types are limited in number. So far, only two- or three-dimensional lattices have been used. The inspection of the neighborhood of alpha carbons in the core of real proteins reveals that also lattices with higher coordination numbers, possibly in higher dimensional spaces, can be adopted. In this paper, a new general parametric lattice model for simplified protein conformations is proposed and investigated. It is shown how the supporting software can be consistently designed to let algorithms that operate on protein structures be implemented in a lattice-agnostic way. The necessary theoretical foundations are developed and organically presented, pinpointing the role of the concept of main directions in lattice-agnostic model handling. Subsequently, the model features across dimensions and lattice types are explored in tests performed on benchmark protein sequences, using a Python implementation. Simulations give insights on the use of square and triangular lattices in a range of dimensions. The trend of potential minimum for sequences of different lengths, varying the lattice dimension, is uncovered. Moreover, an extensive quantitative characterization of the usage of the so-called "move types" is reported for the first time. The proposed general framework for the development of lattice models is simple yet complete, and an object-oriented architecture can be proficiently employed for the supporting software, by designing ad-hoc classes. The proposed framework represents a new general viewpoint that potentially subsumes a number of solutions previously studied. The adoption of the described model pushes to look at protein structure issues from a more general and essential perspective, making

  19. Generalized model for Memristor-based Wien family oscillators

    KAUST Repository

    Talukdar, Abdul Hafiz Ibne; Radwan, Ahmed G.; Salama, Khaled N.

    2012-01-01

    In this paper, we report the unconventional characteristics of Memristor in Wien oscillators. Generalized mathematical models are developed to analyze four members of the Wien family using Memristors. Sustained oscillation is reported for all types

  20. Generalized model of island biodiversity

    Science.gov (United States)

    Kessler, David A.; Shnerb, Nadav M.

    2015-04-01

    The dynamics of a local community of competing species with weak immigration from a static regional pool is studied. Implementing the generalized competitive Lotka-Volterra model with demographic noise, a rich dynamics with four qualitatively distinct phases is unfolded. When the overall interspecies competition is weak, the island species recapitulate the mainland species. For higher values of the competition parameter, the system still admits an equilibrium community, but now some of the mainland species are absent on the island. Further increase in competition leads to an intermittent "disordered" phase, where the dynamics is controlled by invadable combinations of species and the turnover rate is governed by the migration. Finally, the strong competition phase is glasslike, dominated by uninvadable states and noise-induced transitions. Our model contains, as a special case, the celebrated neutral island theories of Wilson-MacArthur and Hubbell. Moreover, we show that slight deviations from perfect neutrality may lead to each of the phases, as the Hubbell point appears to be quadracritical.

  1. A General Framework for Portfolio Theory—Part I: Theory and Various Models

    Directory of Open Access Journals (Sweden)

    Stanislaus Maier-Paape

    2018-05-01

    Full Text Available Utility and risk are two often competing measurements on the investment success. We show that efficient trade-off between these two measurements for investment portfolios happens, in general, on a convex curve in the two-dimensional space of utility and risk. This is a rather general pattern. The modern portfolio theory of Markowitz (1959 and the capital market pricing model Sharpe (1964, are special cases of our general framework when the risk measure is taken to be the standard deviation and the utility function is the identity mapping. Using our general framework, we also recover and extend the results in Rockafellar et al. (2006, which were already an extension of the capital market pricing model to allow for the use of more general deviation measures. This generalized capital asset pricing model also applies to e.g., when an approximation of the maximum drawdown is considered as a risk measure. Furthermore, the consideration of a general utility function allows for going beyond the “additive” performance measure to a “multiplicative” one of cumulative returns by using the log utility. As a result, the growth optimal portfolio theory Lintner (1965 and the leverage space portfolio theory Vince (2009 can also be understood and enhanced under our general framework. Thus, this general framework allows a unification of several important existing portfolio theories and goes far beyond. For simplicity of presentation, we phrase all for a finite underlying probability space and a one period market model, but generalizations to more complex structures are straightforward.

  2. Post-cracking tensile behaviour of steel-fibre-reinforced roller-compacted-concrete for FE modelling and design purposes

    International Nuclear Information System (INIS)

    Jafarifar, N.; Pilakoutas, K.; Angelakopoulos, H.; Bennett, T.

    2017-01-01

    Fracture of steel-fibre-reinforced-concrete occurs mostly in the form of a smeared crack band undergoing progressive microcracking. For FE modelling and design purposes, this crack band could be characterised by a stress-strain (σ-ε) relationship. For industrially-produced steel fibres, existing methodologies such as RILEM TC 162-TDF (2003) propose empirical equations to predict a trilinear σ-ε relationship directly from bending test results. This paper evaluates the accuracy of these methodologies and their applicability for roller-compacted-concrete and concrete incorporating steel fibres recycled from post-consumer tyres. It is shown that the energy absorption capacity is generally overestimated by these methodologies, sometimes up to 60%, for both conventional and roller-compacted concrete. Tensile behaviour of fibre-reinforced-concrete is estimated in this paper by inverse analysis of bending test results, examining a variety of concrete mixes and steel fibres. A multilinear relationship is proposed which largely eliminates the overestimation problem and can lead to safer designs. [es

  3. Tilted Bianchi type I dust fluid cosmological model in general relativity

    Indian Academy of Sciences (India)

    Home; Journals; Pramana – Journal of Physics; Volume 58; Issue 3. Tilted Bianchi type I dust fluid cosmological model in general ... In this paper, we have investigated a tilted Bianchi type I cosmological model filled with dust of perfect fluid in general relativity. To get a determinate solution, we have assumed a condition  ...

  4. Generalized Linear Models in Vehicle Insurance

    Directory of Open Access Journals (Sweden)

    Silvie Kafková

    2014-01-01

    Full Text Available Actuaries in insurance companies try to find the best model for an estimation of insurance premium. It depends on many risk factors, e.g. the car characteristics and the profile of the driver. In this paper, an analysis of the portfolio of vehicle insurance data using a generalized linear model (GLM is performed. The main advantage of the approach presented in this article is that the GLMs are not limited by inflexible preconditions. Our aim is to predict the relation of annual claim frequency on given risk factors. Based on a large real-world sample of data from 57 410 vehicles, the present study proposed a classification analysis approach that addresses the selection of predictor variables. The models with different predictor variables are compared by analysis of deviance and Akaike information criterion (AIC. Based on this comparison, the model for the best estimate of annual claim frequency is chosen. All statistical calculations are computed in R environment, which contains stats package with the function for the estimation of parameters of GLM and the function for analysis of deviation.

  5. Evaluating the AS-level Internet models: beyond topological characteristics

    International Nuclear Information System (INIS)

    Fan Zheng-Ping

    2012-01-01

    A surge number of models has been proposed to model the Internet in the past decades. However, the issue on which models are better to model the Internet has still remained a problem. By analysing the evolving dynamics of the Internet, we suggest that at the autonomous system (AS) level, a suitable Internet model, should at least be heterogeneous and have a linearly growing mechanism. More importantly, we show that the roles of topological characteristics in evaluating and differentiating Internet models are apparently over-estimated from an engineering perspective. Also, we find that an assortative network is not necessarily more robust than a disassortative network and that a smaller average shortest path length does not necessarily mean a higher robustness, which is different from the previous observations. Our analytic results are helpful not only for the Internet, but also for other general complex networks. (interdisciplinary physics and related areas of science and technology)

  6. An Overview of Generalized Gamma Mittag–Leffler Model and Its Applications

    Directory of Open Access Journals (Sweden)

    Seema S. Nair

    2015-08-01

    Full Text Available Recently, probability models with thicker or thinner tails have gained more importance among statisticians and physicists because of their vast applications in random walks, Lévi flights, financial modeling, etc. In this connection, we introduce here a new family of generalized probability distributions associated with the Mittag–Leffler function. This family gives an extension to the generalized gamma family, opens up a vast area of potential applications and establishes connections to the topics of fractional calculus, nonextensive statistical mechanics, Tsallis statistics, superstatistics, the Mittag–Leffler stochastic process, the Lévi process and time series. Apart from examining the properties, the matrix-variate analogue and the connection to fractional calculus are also explained. By using the pathway model of Mathai, the model is further generalized. Connections to Mittag–Leffler distributions and corresponding autoregressive processes are also discussed.

  7. The General Aggression Model.

    Science.gov (United States)

    Allen, Johnie J; Anderson, Craig A; Bushman, Brad J

    2018-02-01

    The General Aggression Model (GAM) is a comprehensive, integrative, framework for understanding aggression. It considers the role of social, cognitive, personality, developmental, and biological factors on aggression. Proximate processes of GAM detail how person and situation factors influence cognitions, feelings, and arousal, which in turn affect appraisal and decision processes, which in turn influence aggressive or nonaggressive behavioral outcomes. Each cycle of the proximate processes serves as a learning trial that affects the development and accessibility of aggressive knowledge structures. Distal processes of GAM detail how biological and persistent environmental factors can influence personality through changes in knowledge structures. GAM has been applied to understand aggression in many contexts including media violence effects, domestic violence, intergroup violence, temperature effects, pain effects, and the effects of global climate change. Copyright © 2017 Elsevier Ltd. All rights reserved.

  8. Topics in conformal invariance and generalized sigma models

    International Nuclear Information System (INIS)

    Bernardo, L.M.; Lawrence Berkeley National Lab., CA

    1997-05-01

    This thesis consists of two different parts, having in common the fact that in both, conformal invariance plays a central role. In the first part, the author derives conditions for conformal invariance, in the large N limit, and for the existence of an infinite number of commuting classical conserved quantities, in the Generalized Thirring Model. The treatment uses the bosonized version of the model. Two different approaches are used to derive conditions for conformal invariance: the background field method and the Hamiltonian method based on an operator algebra, and the agreement between them is established. The author constructs two infinite sets of non-local conserved charges, by specifying either periodic or open boundary conditions, and he finds the Poisson Bracket algebra satisfied by them. A free field representation of the algebra satisfied by the relevant dynamical variables of the model is also presented, and the structure of the stress tensor in terms of free fields (and free currents) is studied in detail. In the second part, the author proposes a new approach for deriving the string field equations from a general sigma model on the world sheet. This approach leads to an equation which combines some of the attractive features of both the renormalization group method and the covariant beta function treatment of the massless excitations. It has the advantage of being covariant under a very general set of both local and non-local transformations in the field space. The author applies it to the tachyon, massless and first massive level, and shows that the resulting field equations reproduce the correct spectrum of a left-right symmetric closed bosonic string

  9. Comparison of two recent models for estimating actual evapotranspiration using only regularly recorded data

    Science.gov (United States)

    Ali, M. F.; Mawdsley, J. A.

    1987-09-01

    An advection-aridity model for estimating actual evapotranspiration ET is tested with over 700 days of lysimeter evapotranspiration and meteorological data from barley, turf and rye-grass from three sites in the U.K. The performance of the model is also compared with the API model . It is observed from the test that the advection-aridity model overestimates nonpotential ET and tends to underestimate potential ET, but when tested with potential and nonpotential data together, the tendencies appear to cancel each other. On a daily basis the performance level of this model is found to be of the same order as the API model: correlation coefficients were obtained between the model estimates and lysimeter data of 0.62 and 0.68 respectively. For periods greater than one day, generally the performance of the models are improved. Proposed by Mawdsley and Ali (1979)

  10. A Statistical Evaluation of Atmosphere-Ocean General Circulation Models: Complexity vs. Simplicity

    OpenAIRE

    Robert K. Kaufmann; David I. Stern

    2004-01-01

    The principal tools used to model future climate change are General Circulation Models which are deterministic high resolution bottom-up models of the global atmosphere-ocean system that require large amounts of supercomputer time to generate results. But are these models a cost-effective way of predicting future climate change at the global level? In this paper we use modern econometric techniques to evaluate the statistical adequacy of three general circulation models (GCMs) by testing thre...

  11. Analysis of dental caries using generalized linear and count regression models

    Directory of Open Access Journals (Sweden)

    Javali M. Phil

    2013-11-01

    Full Text Available Generalized linear models (GLM are generalization of linear regression models, which allow fitting regression models to response data in all the sciences especially medical and dental sciences that follow a general exponential family. These are flexible and widely used class of such models that can accommodate response variables. Count data are frequently characterized by overdispersion and excess zeros. Zero-inflated count models provide a parsimonious yet powerful way to model this type of situation. Such models assume that the data are a mixture of two separate data generation processes: one generates only zeros, and the other is either a Poisson or a negative binomial data-generating process. Zero inflated count regression models such as the zero-inflated Poisson (ZIP, zero-inflated negative binomial (ZINB regression models have been used to handle dental caries count data with many zeros. We present an evaluation framework to the suitability of applying the GLM, Poisson, NB, ZIP and ZINB to dental caries data set where the count data may exhibit evidence of many zeros and over-dispersion. Estimation of the model parameters using the method of maximum likelihood is provided. Based on the Vuong test statistic and the goodness of fit measure for dental caries data, the NB and ZINB regression models perform better than other count regression models.

  12. Generalized additive model of air pollution to daily mortality

    International Nuclear Information System (INIS)

    Kim, J.; Yang, H.E.

    2005-01-01

    The association of air pollution with daily mortality due to cardiovascular disease, respiratory disease, and old age (65 or older) in Seoul, Korea was investigated in 1999 using daily values of TSP, PM10, O 3 , SO 2 , NO 2 , and CO. Generalized additive Poisson models were applied to allow for the highly flexible fitting of daily trends in air pollution as well as nonlinear association with meteorological variables such as temperature, humidity, and wind speed. To estimate the effect of air pollution and weather on mortality, LOESS smoothing was used in generalized additive models. The findings suggest that air pollution levels affect significantly the daily mortality. (orig.)

  13. Specific and General Human Capital in an Endogenous Growth Model

    OpenAIRE

    Evangelia Vourvachaki; Vahagn Jerbashian; : Sergey Slobodyan

    2014-01-01

    In this article, we define specific (general) human capital in terms of the occupations whose use is spread in a limited (wide) set of industries. We analyze the growth impact of an economy's composition of specific and general human capital, in a model where education and research and development are costly and complementary activities. The model suggests that a declining share of specific human capital, as observed in the Czech Republic, can be associated with a lower rate of long-term grow...

  14. Optimisation of a parallel ocean general circulation model

    OpenAIRE

    M. I. Beare; D. P. Stevens

    1997-01-01

    International audience; This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by...

  15. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part II: Multi-layered cloud

    Energy Technology Data Exchange (ETDEWEB)

    Morrison, H; McCoy, R B; Klein, S A; Xie, S; Luo, Y; Avramov, A; Chen, M; Cole, J; Falk, M; Foster, M; Genio, A D; Harrington, J; Hoose, C; Khairoutdinov, M; Larson, V; Liu, X; McFarquhar, G; Poellot, M; Shipway, B; Shupe, M; Sud, Y; Turner, D; Veron, D; Walker, G; Wang, Z; Wolf, A; Xu, K; Yang, F; Zhang, G

    2008-02-27

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a deep, multi-layered, mixed-phase cloud system observed during the ARM Mixed-Phase Arctic Cloud Experiment. This cloud system was associated with strong surface turbulent sensible and latent heat fluxes as cold air flowed over the open Arctic Ocean, combined with a low pressure system that supplied moisture at mid-level. The simulations, performed by 13 single-column and 4 cloud-resolving models, generally overestimate the liquid water path and strongly underestimate the ice water path, although there is a large spread among the models. This finding is in contrast with results for the single-layer, low-level mixed-phase stratocumulus case in Part I of this study, as well as previous studies of shallow mixed-phase Arctic clouds, that showed an underprediction of liquid water path. The overestimate of liquid water path and underestimate of ice water path occur primarily when deeper mixed-phase clouds extending into the mid-troposphere were observed. These results suggest important differences in the ability of models to simulate Arctic mixed-phase clouds that are deep and multi-layered versus shallow and single-layered. In general, models with a more sophisticated, two-moment treatment of the cloud microphysics produce a somewhat smaller liquid water path that is closer to observations. The cloud-resolving models tend to produce a larger cloud fraction than the single-column models. The liquid water path and especially the cloud fraction have a large impact on the cloud radiative forcing at the surface, which is dominated by the longwave flux for this case.

  16. Modeling the brain morphology distribution in the general aging population

    Science.gov (United States)

    Huizinga, W.; Poot, D. H. J.; Roshchupkin, G.; Bron, E. E.; Ikram, M. A.; Vernooij, M. W.; Rueckert, D.; Niessen, W. J.; Klein, S.

    2016-03-01

    Both normal aging and neurodegenerative diseases such as Alzheimer's disease cause morphological changes of the brain. To better distinguish between normal and abnormal cases, it is necessary to model changes in brain morphology owing to normal aging. To this end, we developed a method for analyzing and visualizing these changes for the entire brain morphology distribution in the general aging population. The method is applied to 1000 subjects from a large population imaging study in the elderly, from which 900 were used to train the model and 100 were used for testing. The results of the 100 test subjects show that the model generalizes to subjects outside the model population. Smooth percentile curves showing the brain morphology changes as a function of age and spatiotemporal atlases derived from the model population are publicly available via an interactive web application at agingbrain.bigr.nl.

  17. Generalized Additive Models for Nowcasting Cloud Shading

    Czech Academy of Sciences Publication Activity Database

    Brabec, Marek; Paulescu, M.; Badescu, V.

    2014-01-01

    Roč. 101, March (2014), s. 272-282 ISSN 0038-092X R&D Projects: GA MŠk LD12009 Grant - others:European Cooperation in Science and Technology(XE) COST ES1002 Institutional support: RVO:67985807 Keywords : sunshine number * nowcasting * generalized additive model * Markov chain Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.469, year: 2014

  18. Dynamic generalized linear models for monitoring endemic diseases

    DEFF Research Database (Denmark)

    Lopes Antunes, Ana Carolina; Jensen, Dan; Hisham Beshara Halasa, Tariq

    2016-01-01

    The objective was to use a Dynamic Generalized Linear Model (DGLM) based on abinomial distribution with a linear trend, for monitoring the PRRS (Porcine Reproductive and Respiratory Syndrome sero-prevalence in Danish swine herds. The DGLM was described and its performance for monitoring control...... and eradication programmes based on changes in PRRS sero-prevalence was explored. Results showed a declining trend in PRRS sero-prevalence between 2007 and 2014 suggesting that Danish herds are slowly eradicating PRRS. The simulation study demonstrated the flexibility of DGLMs in adapting to changes intrends...... in sero-prevalence. Based on this, it was possible to detect variations in the growth model component. This study is a proof-of-concept, demonstrating the use of DGLMs for monitoring endemic diseases. In addition, the principles stated might be useful in general research on monitoring and surveillance...

  19. Application of Improved Radiation Modeling to General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Michael J Iacono

    2011-04-07

    This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.

  20. Generalized Continuum: from Voigt to the Modeling of Quasi-Brittle Materials

    Directory of Open Access Journals (Sweden)

    Jamile Salim Fuina

    2010-12-01

    Full Text Available This article discusses the use of the generalized continuum theories to incorporate the effects of the microstructure in the nonlinear finite element analysis of quasi-brittle materials and, thus, to solve mesh dependency problems. A description of the problem called numerically induced strain localization, often found in Finite Element Method material non-linear analysis, is presented. A brief historic about the Generalized Continuum Mechanics based models is presented, since the initial work of Voigt (1887 until the more recent studies. By analyzing these models, it is observed that the Cosserat and microstretch approaches are particular cases of a general formulation that describes the micromorphic continuum. After reporting attempts to incorporate the material microstructure in Classical Continuum Mechanics based models, the article shows the recent tendency of doing it according to assumptions of the Generalized Continuum Mechanics. Finally, it presents numerical results which enable to characterize this tendency as a promising way to solve the problem.

  1. Adaptation of a general circulation model to ocean dynamics

    Science.gov (United States)

    Turner, R. E.; Rees, T. H.; Woodbury, G. E.

    1976-01-01

    A primitive-variable general circulation model of the ocean was formulated in which fast external gravity waves are suppressed with rigid-lid surface constraint pressires which also provide a means for simulating the effects of large-scale free-surface topography. The surface pressure method is simpler to apply than the conventional stream function models, and the resulting model can be applied to both global ocean and limited region situations. Strengths and weaknesses of the model are also presented.

  2. Relative efficiency of joint-model and full-conditional-specification multiple imputation when conditional models are compatible: The general location model.

    Science.gov (United States)

    Seaman, Shaun R; Hughes, Rachael A

    2018-06-01

    Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to mispecification of that model when there is substantial missingness in the outcome variable.

  3. Working covariance model selection for generalized estimating equations.

    Science.gov (United States)

    Carey, Vincent J; Wang, You-Gan

    2011-11-20

    We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.

  4. On a generalized Dirac oscillator interaction for the nonrelativistic limit 3 D generalized SUSY model oscillator Hamiltonian of Celka and Hussin

    International Nuclear Information System (INIS)

    Jayaraman, Jambunatha; Lima Rodrigues, R. de

    1994-01-01

    In the context of the 3 D generalized SUSY model oscillator Hamiltonian of Celka and Hussin (CH), a generalized Dirac oscillator interaction is studied, that leads, in the non-relativistic limit considered for both signs of energy, to the CH's generalized 3 D SUSY oscillator. The relevance of this interaction to the CH's SUSY model and the SUSY breaking dependent on the Wigner parameter is brought out. (author). 6 refs

  5. Optimal Designs for the Generalized Partial Credit Model

    OpenAIRE

    Bürkner, Paul-Christian; Schwabe, Rainer; Holling, Heinz

    2018-01-01

    Analyzing ordinal data becomes increasingly important in psychology, especially in the context of item response theory. The generalized partial credit model (GPCM) is probably the most widely used ordinal model and finds application in many large scale educational assessment studies such as PISA. In the present paper, optimal test designs are investigated for estimating persons' abilities with the GPCM for calibrated tests when item parameters are known from previous studies. We will derive t...

  6. Stability analysis for a general age-dependent vaccination model

    International Nuclear Information System (INIS)

    El Doma, M.

    1995-05-01

    An SIR epidemic model of a general age-dependent vaccination model is investigated when the fertility, mortality and removal rates depends on age. We give threshold criteria of the existence of equilibriums and perform stability analysis. Furthermore a critical vaccination coverage that is sufficient to eradicate the disease is determined. (author). 12 refs

  7. Estimation of group means when adjusting for covariates in generalized linear models.

    Science.gov (United States)

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve the estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group for the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models could be seriously biased for the true group means. We propose a new method to estimate the group mean consistently with the corresponding variance estimation. Simulation showed the proposed method produces an unbiased estimator for the group means and provided the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.

  8. A Generalized Partial Credit Model: Application of an EM Algorithm.

    Science.gov (United States)

    Muraki, Eiji

    1992-01-01

    The partial credit model with a varying slope parameter is developed and called the generalized partial credit model (GPCM). Analysis results for simulated data by this and other polytomous item-response models demonstrate that the rating formulation of the GPCM is adaptable to the analysis of polytomous item responses. (SLD)

  9. Interacting holographic dark energy models: a general approach

    Science.gov (United States)

    Som, S.; Sil, A.

    2014-08-01

    Dark energy models inspired by the cosmological holographic principle are studied in homogeneous isotropic spacetime with a general choice for the dark energy density . Special choices of the parameters enable us to obtain three different holographic models, including the holographic Ricci dark energy (RDE) model. Effect of interaction between dark matter and dark energy on the dynamics of those models are investigated for different popular forms of interaction. It is found that crossing of phantom divide can be avoided in RDE models for β>0.5 irrespective of the presence of interaction. A choice of α=1 and β=2/3 leads to a varying Λ-like model introducing an IR cutoff length Λ -1/2. It is concluded that among the popular choices an interaction of the form Q∝ Hρ m suits the best in avoiding the coincidence problem in this model.

  10. General Potential-Current Model and Validation for Electrocoagulation

    International Nuclear Information System (INIS)

    Dubrawski, Kristian L.; Du, Codey; Mohseni, Madjid

    2014-01-01

    A model relating potential and current in continuous parallel plate iron electrocoagulation (EC) was developed for application in drinking water treatment. The general model can be applied to any EC parallel plate system relying only on geometric and tabulated input variables without the need of system-specific experimentally derived constants. For the theoretical model, the anode and cathode were vertically divided into n equipotential segments in a single pass, upflow, and adiabatic EC reactor. Potential and energy balances were simultaneously solved at each vertical segment, which included the contribution of ionic concentrations, solution temperature and conductivity, cathodic hydrogen flux, and gas/liquid ratio. We experimentally validated the numerical model with a vertical upflow EC reactor using a 24 cm height 99.99% pure iron anode divided into twelve 2 cm segments. Individual experimental currents from each segment were summed to determine total current, and compared with the theoretically derived value. Several key variables were studied to determine their impact on model accuracy: solute type, solute concentration, current density, flow rate, inter-electrode gap, and electrode surface condition. Model results were in good agreement with experimental values at cell potentials of 2-20 V (corresponding to a current density range of approximately 50-800 A/m 2 ), with mean relative deviation of 9% for low flow rate, narrow electrode gap, polished electrodes, and 150 mg/L NaCl. Highest deviation occurred with a large electrode gap, unpolished electrodes, and Na 2 SO 4 electrolyte, due to parasitic H 2 O oxidation and less than unity current efficiency. This is the first general model which can be applied to any parallel plate EC system for accurate electrochemical voltage or current prediction

  11. Optimal Physics Parameterization Scheme Combination of the Weather Research and Forecasting Model for Seasonal Precipitation Simulation over Ghana

    Directory of Open Access Journals (Sweden)

    Richard Yao Kuma Agyeman

    2017-01-01

    Full Text Available Seasonal predictions of precipitation, among others, are important to help mitigate the effects of drought and floods on agriculture, hydropower generation, disasters, and many more. This work seeks to obtain a suitable combination of physics schemes of the Weather Research and Forecasting (WRF model for seasonal precipitation simulation over Ghana. Using the ERA-Interim reanalysis as forcing data, simulation experiments spanning eight months (from April to November were performed for two different years: a dry year (2001 and a wet year (2008. A double nested approach was used with the outer domain at 50 km resolution covering West Africa and the inner domain covering Ghana at 10 km resolution. The results suggest that the WRF model generally overestimated the observed precipitation by a mean value between 3% and 64% for both years. Most of the scheme combinations overestimated (underestimated precipitation over coastal (northern zones of Ghana for both years but estimated precipitation reasonably well over forest and transitional zones. On the whole, the combination of WRF Single-Moment 6-Class Microphysics Scheme, Grell-Devenyi Ensemble Cumulus Scheme, and Asymmetric Convective Model Planetary Boundary Layer Scheme simulated the best temporal pattern and temporal variability with the least relative bias for both years and therefore is recommended for Ghana.

  12. A General Business Model for Marine Reserves

    Science.gov (United States)

    Sala, Enric; Costello, Christopher; Dougherty, Dawn; Heal, Geoffrey; Kelleher, Kieran; Murray, Jason H.; Rosenberg, Andrew A.; Sumaila, Rashid

    2013-01-01

    Marine reserves are an effective tool for protecting biodiversity locally, with potential economic benefits including enhancement of local fisheries, increased tourism, and maintenance of ecosystem services. However, fishing communities often fear short-term income losses associated with closures, and thus may oppose marine reserves. Here we review empirical data and develop bioeconomic models to show that the value of marine reserves (enhanced adjacent fishing + tourism) may often exceed the pre-reserve value, and that economic benefits can offset the costs in as little as five years. These results suggest the need for a new business model for creating and managing reserves, which could pay for themselves and turn a profit for stakeholder groups. Our model could be expanded to include ecosystem services and other benefits, and it provides a general framework to estimate costs and benefits of reserves and to develop such business models. PMID:23573192

  13. Comparison of body composition between fashion models and women in general

    OpenAIRE

    Park, Sunhee

    2017-01-01

    [Purpose] The present study compared the physical characteristics and body composition of professional fashion models and women in general, utilizing the skinfold test. [Methods] The research sample consisted of 90 professional fashion models presently active in Korea and 100 females in the general population, all selected through convenience sampling. Measurement was done following standardized methods and procedures set by the International Society for the Advancement of Kinanthropometry. B...

  14. The Generalized Quantum Episodic Memory Model.

    Science.gov (United States)

    Trueblood, Jennifer S; Hemmer, Pernille

    2017-11-01

    Recent evidence suggests that experienced events are often mapped to too many episodic states, including those that are logically or experimentally incompatible with one another. For example, episodic over-distribution patterns show that the probability of accepting an item under different mutually exclusive conditions violates the disjunction rule. A related example, called subadditivity, occurs when the probability of accepting an item under mutually exclusive and exhaustive instruction conditions sums to a number >1. Both the over-distribution effect and subadditivity have been widely observed in item and source-memory paradigms. These phenomena are difficult to explain using standard memory frameworks, such as signal-detection theory. A dual-trace model called the over-distribution (OD) model (Brainerd & Reyna, 2008) can explain the episodic over-distribution effect, but not subadditivity. Our goal is to develop a model that can explain both effects. In this paper, we propose the Generalized Quantum Episodic Memory (GQEM) model, which extends the Quantum Episodic Memory (QEM) model developed by Brainerd, Wang, and Reyna (2013). We test GQEM by comparing it to the OD model using data from a novel item-memory experiment and a previously published source-memory experiment (Kellen, Singmann, & Klauer, 2014) examining the over-distribution effect. Using the best-fit parameters from the over-distribution experiments, we conclude by showing that the GQEM model can also account for subadditivity. Overall these results add to a growing body of evidence suggesting that quantum probability theory is a valuable tool in modeling recognition memory. Copyright © 2016 Cognitive Science Society, Inc.

  15. The algebra of the general Markov model on phylogenetic trees and networks.

    Science.gov (United States)

    Sumner, J G; Holland, B R; Jarvis, P D

    2012-04-01

    It is known that the Kimura 3ST model of sequence evolution on phylogenetic trees can be extended quite naturally to arbitrary split systems. However, this extension relies heavily on mathematical peculiarities of the associated Hadamard transformation, and providing an analogous augmentation of the general Markov model has thus far been elusive. In this paper, we rectify this shortcoming by showing how to extend the general Markov model on trees to include incompatible edges; and even further to more general network models. This is achieved by exploring the algebra of the generators of the continuous-time Markov chain together with the “splitting” operator that generates the branching process on phylogenetic trees. For simplicity, we proceed by discussing the two state case and then show that our results are easily extended to more states with little complication. Intriguingly, upon restriction of the two state general Markov model to the parameter space of the binary symmetric model, our extension is indistinguishable from the Hadamard approach only on trees; as soon as any incompatible splits are introduced the two approaches give rise to differing probability distributions with disparate structure. Through exploration of a simple example, we give an argument that our extension to more general networks has desirable properties that the previous approaches do not share. In particular, our construction allows for convergent evolution of previously divergent lineages; a property that is of significant interest for biological applications.

  16. Comparison of nonstationary generalized logistic models based on Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    S. Kim

    2015-06-01

    Full Text Available Recently, the evidences of climate change have been observed in hydrologic data such as rainfall and flow data. The time-dependent characteristics of statistics in hydrologic data are widely defined as nonstationarity. Therefore, various nonstationary GEV and generalized Pareto models have been suggested for frequency analysis of nonstationary annual maximum and POT (peak-over-threshold data, respectively. However, the alternative models are required for nonstatinoary frequency analysis because of analyzing the complex characteristics of nonstationary data based on climate change. This study proposed the nonstationary generalized logistic model including time-dependent parameters. The parameters of proposed model are estimated using the method of maximum likelihood based on the Newton-Raphson method. In addition, the proposed model is compared by Monte Carlo simulation to investigate the characteristics of models and applicability.

  17. Development of Shear Capacity Prediction Model for FRP-RC Beam without Web Reinforcement

    Directory of Open Access Journals (Sweden)

    Md. Arman Chowdhury

    2016-01-01

    Full Text Available Available codes and models generally use partially modified shear design equation, developed earlier for steel reinforced concrete, for predicting the shear capacity of FRP-RC members. Consequently, calculated shear capacity shows under- or overestimation. Furthermore, in most models some affecting parameters of shear strength are overlooked. In this study, a new and simplified shear capacity prediction model is proposed considering all the parameters. A large database containing 157 experimental results of FRP-RC beams without shear reinforcement is assembled from the published literature. A parametric study is then performed to verify the accuracy of the proposed model. Again, a comprehensive review of 9 codes and 12 available models is done, published back from 1997 to date for comparison with the proposed model. Hence, it is observed that the proposed equation shows overall optimized performance compared to all the codes and models within the range of used experimental dataset.

  18. Explained variation and predictive accuracy in general parametric statistical models: the role of model misspecification

    DEFF Research Database (Denmark)

    Rosthøj, Susanne; Keiding, Niels

    2004-01-01

    When studying a regression model measures of explained variation are used to assess the degree to which the covariates determine the outcome of interest. Measures of predictive accuracy are used to assess the accuracy of the predictions based on the covariates and the regression model. We give a ...... a detailed and general introduction to the two measures and the estimation procedures. The framework we set up allows for a study of the effect of misspecification on the quantities estimated. We also introduce a generalization to survival analysis....

  19. [A competency model of rural general practitioners: theory construction and empirical study].

    Science.gov (United States)

    Yang, Xiu-Mu; Qi, Yu-Long; Shne, Zheng-Fu; Han, Bu-Xin; Meng, Bei

    2015-04-01

    To perform theory construction and empirical study of the competency model of rural general practitioners. Through literature study, job analysis, interviews, and expert team discussion, the questionnaire of rural general practitioners competency was constructed. A total of 1458 rural general practitioners were surveyed by the questionnaire in 6 central provinces. The common factors were constructed using the principal component method of exploratory factor analysis and confirmatory factor analysis. The influence of the competency characteristics on the working performance was analyzed using regression equation analysis. The Cronbach 's alpha coefficient of the questionnaire was 0.974. The model consisted of 9 dimensions and 59 items. The 9 competency dimensions included basic public health service ability, basic clinical skills, system analysis capability, information management capability, communication and cooperation ability, occupational moral ability, non-medical professional knowledge, personal traits and psychological adaptability. The rate of explained cumulative total variance was 76.855%. The model fitting index were Χ(2)/df 1.88, GFI=0.94, NFI=0.96, NNFI=0.98, PNFI=0.91, RMSEA=0.068, CFI=0.97, IFI=0.97, RFI=0.96, suggesting good model fitting. Regression analysis showed that the competency characteristics had a significant effect on job performance. The rural general practitioners competency model provides reference for rural doctor training, rural order directional cultivation of medical students, and competency performance management of the rural general practitioners.

  20. Davidson's generalization of the Fenyes-Nelson stochastic model of quantum mechanics

    International Nuclear Information System (INIS)

    Shucker, D.S.

    1980-01-01

    Davidson's generalization of the Fenyes-Nelson stochastic model of quantum mechanics is discussed. It is shown that this author's previous results concerning the Fenyes-Nelson process extend to the more general theory of Davidson. (orig.)

  1. Evaluation of climate model aerosol seasonal and spatial variability over Africa using AERONET

    Science.gov (United States)

    Horowitz, Hannah M.; Garland, Rebecca M.; Thatcher, Marcus; Landman, Willem A.; Dedekind, Zane; van der Merwe, Jacobus; Engelbrecht, Francois A.

    2017-11-01

    The sensitivity of climate models to the characterization of African aerosol particles is poorly understood. Africa is a major source of dust and biomass burning aerosols and this represents an important research gap in understanding the impact of aerosols on radiative forcing of the climate system. Here we evaluate the current representation of aerosol particles in the Conformal Cubic Atmospheric Model (CCAM) with ground-based remote retrievals across Africa, and additionally provide an analysis of observed aerosol optical depth at 550 nm (AOD550 nm) and Ångström exponent data from 34 Aerosol Robotic Network (AERONET) sites. Analysis of the 34 long-term AERONET sites confirms the importance of dust and biomass burning emissions to the seasonal cycle and magnitude of AOD550 nm across the continent and the transport of these emissions to regions outside of the continent. In general, CCAM captures the seasonality of the AERONET data across the continent. The magnitude of modeled and observed multiyear monthly average AOD550 nm overlap within ±1 standard deviation of each other for at least 7 months at all sites except the Réunion St Denis Island site (Réunion St. Denis). The timing of modeled peak AOD550 nm in southern Africa occurs 1 month prior to the observed peak, which does not align with the timing of maximum fire counts in the region. For the western and northern African sites, it is evident that CCAM currently overestimates dust in some regions while others (e.g., the Arabian Peninsula) are better characterized. This may be due to overestimated dust lifetime, or that the characterization of the soil for these areas needs to be updated with local information. The CCAM simulated AOD550 nm for the global domain is within the spread of previously published results from CMIP5 and AeroCom experiments for black carbon, organic carbon, and sulfate aerosols. The model's performance provides confidence for using the model to estimate large-scale regional impacts

  2. Evaluation of climate model aerosol seasonal and spatial variability over Africa using AERONET

    Directory of Open Access Journals (Sweden)

    H. M. Horowitz

    2017-11-01

    Full Text Available The sensitivity of climate models to the characterization of African aerosol particles is poorly understood. Africa is a major source of dust and biomass burning aerosols and this represents an important research gap in understanding the impact of aerosols on radiative forcing of the climate system. Here we evaluate the current representation of aerosol particles in the Conformal Cubic Atmospheric Model (CCAM with ground-based remote retrievals across Africa, and additionally provide an analysis of observed aerosol optical depth at 550 nm (AOD550 nm and Ångström exponent data from 34 Aerosol Robotic Network (AERONET sites. Analysis of the 34 long-term AERONET sites confirms the importance of dust and biomass burning emissions to the seasonal cycle and magnitude of AOD550 nm across the continent and the transport of these emissions to regions outside of the continent. In general, CCAM captures the seasonality of the AERONET data across the continent. The magnitude of modeled and observed multiyear monthly average AOD550 nm overlap within ±1 standard deviation of each other for at least 7 months at all sites except the Réunion St Denis Island site (Réunion St. Denis. The timing of modeled peak AOD550 nm in southern Africa occurs 1 month prior to the observed peak, which does not align with the timing of maximum fire counts in the region. For the western and northern African sites, it is evident that CCAM currently overestimates dust in some regions while others (e.g., the Arabian Peninsula are better characterized. This may be due to overestimated dust lifetime, or that the characterization of the soil for these areas needs to be updated with local information. The CCAM simulated AOD550 nm for the global domain is within the spread of previously published results from CMIP5 and AeroCom experiments for black carbon, organic carbon, and sulfate aerosols. The model's performance provides confidence for using the model to estimate

  3. Generalized Whittle-Matern random field as a model of correlated fluctuations

    International Nuclear Information System (INIS)

    Lim, S C; Teo, L P

    2009-01-01

    This paper considers a generalization of the Gaussian random field with covariance function of the Whittle-Matern family. Such a random field can be obtained as the solution to the fractional stochastic differential equation with two fractional orders. Asymptotic properties of the covariance functions belonging to this generalized Whittle-Matern family are studied, which are used to deduce the sample path properties of the random field. The Whittle-Matern field has been widely used in modeling geostatistical data such as sea beam data, wind speed, field temperature and soil data. In this paper we show that the generalized Whittle-Matern field provides a more flexible model for wind speed data

  4. Using video modeling for generalizing toy play in children with autism.

    Science.gov (United States)

    Paterson, Claire R; Arco, Lucius

    2007-09-01

    The present study examined effects of video modeling on generalized independent toy play of two boys with autism. Appropriate and repetitive verbal and motor play were measured, and intermeasure relationships were examined. Two single-participant experiments with multiple baselines and withdrawals across toy play were used. One boy was presented with three physically unrelated toys, whereas the other was presented with three related toys. Video modeling produced increases in appropriate play and decreases in repetitive play, but generalized play was observed only with the related toys. Generalization may have resulted from variables including the toys' common physical characteristics and natural reinforcing properties and the increased correspondence between verbal and motor play.

  5. Generalized semi-Markovian dividend discount model: risk and return

    OpenAIRE

    D'Amico, Guglielmo

    2016-01-01

    The article presents a general discrete time dividend valuation model when the dividend growth rate is a general continuous variable. The main assumption is that the dividend growth rate follows a discrete time semi-Markov chain with measurable space. The paper furnishes sufficient conditions that assure finiteness of fundamental prices and risks and new equations that describe the first and second order price-dividend ratios. Approximation methods to solve equations are provided and some new...

  6. Evaluation of the WRF-Urban Modeling System Coupled to Noah and Noah-MP Land Surface Models Over a Semiarid Urban Environment

    Science.gov (United States)

    Salamanca, Francisco; Zhang, Yizhou; Barlage, Michael; Chen, Fei; Mahalov, Alex; Miao, Shiguang

    2018-03-01

    We have augmented the existing capabilities of the integrated Weather Research and Forecasting (WRF)-urban modeling system by coupling three urban canopy models (UCMs) available in the WRF model with the new community Noah with multiparameterization options (Noah-MP) land surface model (LSM). The WRF-urban modeling system's performance has been evaluated by conducting six numerical experiments at high spatial resolution (1 km horizontal grid spacing) during a 15 day clear-sky summertime period for a semiarid urban environment. To assess the relative importance of representing urban surfaces, three different urban parameterizations are used with the Noah and Noah-MP LSMs, respectively, over the two major cities of Arizona: Phoenix and Tucson metropolitan areas. Our results demonstrate that Noah-MP reproduces somewhat better than Noah the daily evolution of surface skin temperature and near-surface air temperature (especially nighttime temperature) and wind speed. Concerning the urban areas, bulk urban parameterization overestimates nighttime 2 m air temperature compared to the single-layer and multilayer UCMs that reproduce more accurately the daily evolution of near-surface air temperature. Regarding near-surface wind speed, only the multilayer UCM was able to reproduce realistically the daily evolution of wind speed, although maximum winds were slightly overestimated, while both the single-layer and bulk urban parameterizations overestimated wind speed considerably. Based on these results, this paper demonstrates that the new community Noah-MP LSM coupled to an UCM is a promising physics-based predictive modeling tool for urban applications.

  7. Generalized model for Memristor-based Wien family oscillators

    KAUST Repository

    Talukdar, Abdul Hafiz Ibne

    2012-07-23

    In this paper, we report the unconventional characteristics of Memristor in Wien oscillators. Generalized mathematical models are developed to analyze four members of the Wien family using Memristors. Sustained oscillation is reported for all types though oscillating resistance and time dependent poles are present. We have also proposed an analytical model to estimate the desired amplitude of oscillation before the oscillation starts. These Memristor-based oscillation results, presented for the first time, are in good agreement with simulation results. © 2011 Elsevier Ltd.

  8. Verification and Validation of a Three-Dimensional Generalized Composite Material Model

    Science.gov (United States)

    Hoffarth, Canio; Harrington, Joseph; Rajan, Subramaniam D.; Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Blankenhorn, Gunther

    2015-01-01

    A general purpose orthotropic elasto-plastic computational constitutive material model has been developed to improve predictions of the response of composites subjected to high velocity impact. The three-dimensional orthotropic elasto-plastic composite material model is being implemented initially for solid elements in LS-DYNA as MAT213. In order to accurately represent the response of a composite, experimental stress-strain curves are utilized as input, allowing for a more general material model that can be used on a variety of composite applications. The theoretical details are discussed in a companion paper. This paper documents the implementation, verification and qualitative validation of the material model using the T800-F3900 fiber/resin composite material

  9. Nature of dynamical suppressions in the generalized Veneziano model

    International Nuclear Information System (INIS)

    Odorico, R.

    1976-05-01

    It is shown by explicit numerical calculations that of a class of coupling suppressions existing in the generalized Veneziano model, which have been recently used to interpret the psi data and other related phenomena, only a part can be attributed to the exponential growth with energy of the number of levels in the model. The remaining suppressions have a more direct dual origin

  10. Anisotropic cosmological models and generalized scalar tensor theory

    Indian Academy of Sciences (India)

    Abstract. In this paper generalized scalar tensor theory has been considered in the background of anisotropic cosmological models, namely, axially symmetric Bianchi-I, Bianchi-III and Kortowski–. Sachs space-time. For bulk viscous fluid, both exponential and power-law solutions have been stud- ied and some assumptions ...

  11. Anisotropic cosmological models and generalized scalar tensor theory

    Indian Academy of Sciences (India)

    In this paper generalized scalar tensor theory has been considered in the background of anisotropic cosmological models, namely, axially symmetric Bianchi-I, Bianchi-III and Kortowski–Sachs space-time. For bulk viscous fluid, both exponential and power-law solutions have been studied and some assumptions among the ...

  12. Overestimation of Albumin Measured by Bromocresol Green vs Bromocresol Purple Method: Influence of Acute-Phase Globulins.

    Science.gov (United States)

    Garcia Moreira, Vanessa; Beridze Vaktangova, Nana; Martinez Gago, Maria Dolores; Laborda Gonzalez, Belen; Garcia Alonso, Sara; Fernandez Rodriguez, Eloy

    2018-05-22

    Usually serum albumin is measured with dye-binding assay as bromocresol green (BCG) and bromocresol purple (BCP) methods. The aim of this paper was to examine the differences in albumin measurements between the Advia2400 BCG method (AlbBCG), Dimension RxL BCP (AlbBCP) and capillary zone electrophoresis (CZE). Albumin concentrations from 165 serum samples were analysed using AlbBCG, AlbBCP and CZE. CZE was employed to estimate different serum protein fractions. Influence of globulins on albumin concentration discrepancies between methods was estimated as well as the impact of the albumin method on aCa concentrations. Medcalc was employed for statistical analysis, setting a value of P albumin concentrations. AlbBCG were positively biased versus CZE (3.54 g/L). There was good agreement between CZE and ALbBCP (Albumin results from the BCP and BCG methods may result in unacceptable differences and clinical confusion, especially at lower albumin concentrations. Serum acute phase proteins contribute to overestimating the albumin concentration using AlbBCG.

  13. The generalized collective model

    International Nuclear Information System (INIS)

    Troltenier, D.

    1992-07-01

    In this thesis a new way of proceeding, basing on the method of the finite elements, for the solution of the collective Schroedinger equation in the framework of the Generalized Collective Model was presented. The numerically reachable accuracy was illustrated by the comparison to analytically known solutions by means of numerous examples. Furthermore the potential-energy surfaces of the 182-196 Hg, 242-248 Cm, and 242-246 Pu isotopes were determined by the fitting of the parameters of the Gneuss-Greiner potential to the experimental data. In the Hg isotopes a shape consistency of nearly spherical and oblate deformations is shown, while the Cm and Pu isotopes possess an essentially equal remaining prolate deformation. By means of the pseudo-symplectic model the potential-energy surfaces of 24 Mg, 190 Pt, and 238 U were microscopically calculated. Using a deformation-independent kinetic energy so the collective excitation spectra and the electrical properties (B(E2), B(E4) values, quadrupole moments) of these nuclei were calculated and compared with the experiment. Finally an analytic relation between the (g R -Z/A) value and the quadrupole moment was derived. The study of the experimental data of the 166-170 Er isotopes shows an in the framework of the measurement accuracy a sufficient agreement with this relation. Furthermore it is by this relation possible to determine the effective magnetic dipole moment parameter-freely. (orig./HSI) [de

  14. A generalized multivariate regression model for modelling ocean wave heights

    Science.gov (United States)

    Wang, X. L.; Feng, Y.; Swail, V. R.

    2012-04-01

    In this study, a generalized multivariate linear regression model is developed to represent the relationship between 6-hourly ocean significant wave heights (Hs) and the corresponding 6-hourly mean sea level pressure (MSLP) fields. The model is calibrated using the ERA-Interim reanalysis of Hs and MSLP fields for 1981-2000, and is validated using the ERA-Interim reanalysis for 2001-2010 and ERA40 reanalysis of Hs and MSLP for 1958-2001. The performance of the fitted model is evaluated in terms of Pierce skill score, frequency bias index, and correlation skill score. Being not normally distributed, wave heights are subjected to a data adaptive Box-Cox transformation before being used in the model fitting. Also, since 6-hourly data are being modelled, lag-1 autocorrelation must be and is accounted for. The models with and without Box-Cox transformation, and with and without accounting for autocorrelation, are inter-compared in terms of their prediction skills. The fitted MSLP-Hs relationship is then used to reconstruct historical wave height climate from the 6-hourly MSLP fields taken from the Twentieth Century Reanalysis (20CR, Compo et al. 2011), and to project possible future wave height climates using CMIP5 model simulations of MSLP fields. The reconstructed and projected wave heights, both seasonal means and maxima, are subject to a trend analysis that allows for non-linear (polynomial) trends.

  15. [The management of osteoarthritis by general practitioners in Germany: comparison of self-reported behaviour with international guidelines].

    Science.gov (United States)

    Rosemann, T; Joos, S; Szecsenyi, J

    2008-01-01

    In most countries, guidelines for the treatment of osteoarthritis (OA) are available. However, in Germany, no guideline for the primary care sector is available. The care provider of most patients is the general practitioner (GP). The aim of the study was to investigate the approaches in diagnosing and treating OA of German GPs and to assess adherence to international guidelines. Cross-sectional study using a structured questionnaire with a random sample of 144 GPs. Regarding diagnosis, the importance of X-rays was overestimated. Regarding treatment approaches, exercising and weight reduction were regarded as primary treatment targets. Pharmacological treatment approaches were somewhat guideline oriented, but conservative approaches such as physical therapy were overestimated as invasive treatments such as intra-articular injections were underestimated in its benefit. Establishing a guideline specifically for primary care and increasing guideline adherence could help to prevent the present overuse of X-rays and the high amount of referrals to orthopaedics, save costs and reduce inadequate treatments.

  16. Climate Simulations from Super-parameterized and Conventional General Circulation Models with a Third-order Turbulence Closure

    Science.gov (United States)

    Xu, Kuan-Man; Cheng, Anning

    2014-05-01

    A high-resolution cloud-resolving model (CRM) embedded in a general circulation model (GCM) is an attractive alternative for climate modeling because it replaces all traditional cloud parameterizations and explicitly simulates cloud physical processes in each grid column of the GCM. Such an approach is called "Multiscale Modeling Framework." MMF still needs to parameterize the subgrid-scale (SGS) processes associated with clouds and large turbulent eddies because circulations associated with planetary boundary layer (PBL) and in-cloud turbulence are unresolved by CRMs with horizontal grid sizes on the order of a few kilometers. A third-order turbulence closure (IPHOC) has been implemented in the CRM component of the super-parameterized Community Atmosphere Model (SPCAM). IPHOC is used to predict (or diagnose) fractional cloudiness and the variability of temperature and water vapor at scales that are not resolved on the CRM's grid. This model has produced promised results, especially for low-level cloud climatology, seasonal variations and diurnal variations (Cheng and Xu 2011, 2013a, b; Xu and Cheng 2013a, b). Because of the enormous computational cost of SPCAM-IPHOC, which is 400 times of a conventional CAM, we decided to bypass the CRM and implement the IPHOC directly to CAM version 5 (CAM5). IPHOC replaces the PBL/stratocumulus, shallow convection, and cloud macrophysics parameterizations in CAM5. Since there are large discrepancies in the spatial and temporal scales between CRM and CAM5, IPHOC used in CAM5 has to be modified from that used in SPCAM. In particular, we diagnose all second- and third-order moments except for the fluxes. These prognostic and diagnostic moments are used to select a double-Gaussian probability density function to describe the SGS variability. We also incorporate a diagnostic PBL height parameterization to represent the strong inversion above PBL. The goal of this study is to compare the simulation of the climatology from these three

  17. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina; Cantoni, Eva; Genton, Marc G.

    2012-01-01

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  18. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina

    2012-08-03

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  19. Dividend taxation in an infinite-horizon general equilibrium model

    OpenAIRE

    Pham, Ngoc-Sang

    2017-01-01

    We consider an infinite-horizon general equilibrium model with heterogeneous agents and financial market imperfections. We investigate the role of dividend taxation on economic growth and asset price. The optimal dividend taxation is also studied.

  20. Aspects of general linear modelling of migration.

    Science.gov (United States)

    Congdon, P

    1992-01-01

    "This paper investigates the application of general linear modelling principles to analysing migration flows between areas. Particular attention is paid to specifying the form of the regression and error components, and the nature of departures from Poisson randomness. Extensions to take account of spatial and temporal correlation are discussed as well as constrained estimation. The issue of specification bears on the testing of migration theories, and assessing the role migration plays in job and housing markets: the direction and significance of the effects of economic variates on migration depends on the specification of the statistical model. The application is in the context of migration in London and South East England in the 1970s and 1980s." excerpt

  1. General extrapolation model for an important chemical dose-rate effect

    International Nuclear Information System (INIS)

    Gillen, K.T.; Clough, R.L.

    1984-12-01

    In order to extrapolate material accelerated aging data, methodologies must be developed based on sufficient understanding of the processes leading to material degradation. One of the most important mechanisms leading to chemical dose-rate effects in polymers involves the breakdown of intermediate hydroperoxide species. A general model for this mechanism is derived based on the underlying chemical steps. The results lead to a general formalism for understanding dose rate and sequential aging effects when hydroperoxide breakdown is important. We apply the model to combined radiation/temperature aging data for a PVC material and show that this data is consistent with the model and that model extrapolations are in excellent agreement with 12-year real-time aging results from an actual nuclear plant. This model and other techniques discussed in this report can aid in the selection of appropriate accelerated aging methods and can also be used to compare and select materials for use in safety-related components. This will result in increased assurance that equipment qualification procedures are adequate

  2. The reliability of grazing rate estimates from dilution experiments: Have we over-estimated rates of organic carbon consumption by microzooplankton?

    Directory of Open Access Journals (Sweden)

    J. R. Dolan,

    2005-01-01

    Full Text Available According to a recent global analysis, microzooplankton grazing is surprisingly invariant, ranging only between 59 and 74% of phytoplankton primary production across systems differing in seasonality, trophic status, latitude, or salinity. Thus an important biological process in the world ocean, the daily consumption of recently fixed carbon, appears nearly constant. We believe this conclusion is an artefact because dilution experiments are 1 prone to providing over-estimates of grazing rates and 2 unlikely to furnish evidence of low grazing rates. In our view the overall average rate of microzooplankton grazing probably does not exceed 50% of primary production and may be even lower in oligotrophic systems.

  3. Generalized Modeling of the Human Lower Limb Assembly

    Science.gov (United States)

    Cofaru, Ioana; Huzu, Iulia

    2014-11-01

    The main reason for creating a generalized assembly of the main bones of the lower human member is to create the premises of realizing a biomechanic assisted study which could be used for the study of the high range of varieties of pathologies that exist at this level. Starting from 3D CAD models of the main bones of the lower human member, which were realized in previous researches, in this study a generalized assembly system was developed, system in which are highlighted both the situation of an healthy subject and the situation of the situation of a subject affected by axial deviations. In order to achieve these purpose reference systems were created, systems that are in accordance with the mechanical axes and the anatomic axes of the lower member, which were later generally assembled in a manner that provides an easy customization option

  4. A General Model for Thermal, Hydraulic and Electric Analysis of Superconducting Cables

    CERN Document Server

    Bottura, L; Rosso, C

    2000-01-01

    In this paper we describe a generic, multi-component and multi-channel model for the analysis of superconducting cables. The aim of the model is to treat in a general and consistent manner simultaneous thermal, electric and hydraulic transients in cables. The model is devised for most general situations, but reduces in limiting cases to most common approximations without loss of efficiency. We discuss here the governing equations, and we write them in a matrix form that is well adapted to numerical treatment. We finally demonstrate the model capability by comparison with published experimental data on current distribution in a two-strand cable.

  5. Multiple-event probability in general-relativistic quantum mechanics. II. A discrete model

    International Nuclear Information System (INIS)

    Mondragon, Mauricio; Perez, Alejandro; Rovelli, Carlo

    2007-01-01

    We introduce a simple quantum mechanical model in which time and space are discrete and periodic. These features avoid the complications related to continuous-spectrum operators and infinite-norm states. The model provides a tool for discussing the probabilistic interpretation of generally covariant quantum systems, without the confusion generated by spurious infinities. We use the model to illustrate the formalism of general-relativistic quantum mechanics, and to test the definition of multiple-event probability introduced in a companion paper [Phys. Rev. D 75, 084033 (2007)]. We consider a version of the model with unitary time evolution and a version without unitary time evolution

  6. Structural dynamic analysis with generalized damping models analysis

    CERN Document Server

    Adhikari , Sondipon

    2013-01-01

    Since Lord Rayleigh introduced the idea of viscous damping in his classic work ""The Theory of Sound"" in 1877, it has become standard practice to use this approach in dynamics, covering a wide range of applications from aerospace to civil engineering. However, in the majority of practical cases this approach is adopted more for mathematical convenience than for modeling the physics of vibration damping. Over the past decade, extensive research has been undertaken on more general ""non-viscous"" damping models and vibration of non-viscously damped systems. This book, along with a related book

  7. Testing a generalized cubic Galileon gravity model with the Coma Cluster

    Energy Technology Data Exchange (ETDEWEB)

    Terukina, Ayumu; Yamamoto, Kazuhiro; Okabe, Nobuhiro [Department of Physical Sciences, Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Matsushita, Kyoko; Sasaki, Toru, E-mail: telkina@theo.phys.sci.hiroshima-u.ac.jp, E-mail: kazuhiro@hiroshima-u.ac.jp, E-mail: okabe@hiroshima-u.ac.jp, E-mail: matusita@rs.kagu.tus.ac.jp, E-mail: j1213703@ed.tus.ac.jp [Department of Physics, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601 (Japan)

    2015-10-01

    We obtain a constraint on the parameters of a generalized cubic Galileon gravity model exhibiting the Vainshtein mechanism by using multi-wavelength observations of the Coma Cluster. The generalized cubic Galileon model is characterized by three parameters of the turning scale associated with the Vainshtein mechanism, and the amplitude of modifying a gravitational potential and a lensing potential. X-ray and Sunyaev-Zel'dovich (SZ) observations of the intra-cluster medium are sensitive to the gravitational potential, while the weak-lensing (WL) measurement is specified by the lensing potential. A joint fit of a complementary multi-wavelength dataset of X-ray, SZ and WL measurements enables us to simultaneously constrain these three parameters of the generalized cubic Galileon model for the first time. We also find a degeneracy between the cluster mass parameters and the gravitational modification parameters, which is influential in the limit of the weak screening of the fifth force.

  8. A NEW GENERAL 3DOF QUASI-STEADY AERODYNAMIC INSTABILITY MODEL

    DEFF Research Database (Denmark)

    Gjelstrup, Henrik; Larsen, Allan; Georgakis, Christos

    2008-01-01

    but can generally be applied for aerodynamic instability prediction for prismatic bluff bodies. The 3DOF, which make up the movement of the model, are the displacements in the XY-plane and the rotation around the bluff body’s rotational axis. The proposed model incorporates inertia coupling between...

  9. The microcomputer scientific software series 2: general linear model--regression.

    Science.gov (United States)

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  10. Particles and holes equivalence for generalized seniority and the interacting boson model

    International Nuclear Information System (INIS)

    Talmi, I.

    1982-01-01

    An apparent ambiguity was recently reported in coupling either pairs of identical fermions or hole pairs. This is explained here as due to a Hamiltonian whose lowest eigenstates do not have the structure prescribed by generalized seniority. It is shown that generalized seniority eigenstates can be equivalently constructed from correlated J = 0 and J = 2 pair states of either particles or holes. The interacting boson model parameters calculated can be unambiguously interpreted and then are of real interest to the shell model basis of interacting boson model

  11. A general relativistic hydrostatic model for a galaxy

    International Nuclear Information System (INIS)

    Hojman, R.; Pena, L.; Zamorano, N.

    1991-08-01

    The existence of huge amounts of mass laying at the center of some galaxies has been inferred by data gathered at different wavelengths. It seems reasonable then, to incorporate general relativity in the study of these objects. A general relativistic hydrostatic model for a galaxy is studied. We assume that the galaxy is dominated by the dark mass except at the nucleus, where the luminous matter prevails. It considers four different concentric spherically symmetric regions, properly matched and with a specific equation of state for each of them. It yields a slowly raising orbital velocity for a test particle moving in the background gravitational field of the dark matter region. In this sense we think of this model as representing a spiral galaxy. The dependence of the mass on the radius in cluster and field spiral galaxies published recently, can be used to fix the size of the inner luminous core. A vanishing pressure at the edge of the galaxy and the assumption of hydrostatic equilibrium everywhere generates a jump in the density and the orbital velocity at the shell enclosing the galaxy. This is a prediction of this model. The ratio between the size core and the shells introduced here are proportional to their densities. In this sense the model is scale invariant. It can be used to reproduce a galaxy or the central region of a galaxy. We have also compared our results with those obtained with the Newtonian isothermal sphere. The luminosity is not included in our model as an extra variable in the determination of the orbital velocity. (author). 29 refs, 10 figs

  12. Vector generalized linear and additive models with an implementation in R

    CERN Document Server

    Yee, Thomas W

    2015-01-01

    This book presents a statistical framework that expands generalized linear models (GLMs) for regression modelling. The framework shared in this book allows analyses based on many semi-traditional applied statistics models to be performed as a coherent whole. This is possible through the approximately half-a-dozen major classes of statistical models included in the book and the software infrastructure component, which makes the models easily operable.    The book’s methodology and accompanying software (the extensive VGAM R package) are directed at these limitations, and this is the first time the methodology and software are covered comprehensively in one volume. Since their advent in 1972, GLMs have unified important distributions under a single umbrella with enormous implications. The demands of practical data analysis, however, require a flexibility that GLMs do not have. Data-driven GLMs, in the form of generalized additive models (GAMs), are also largely confined to the exponential family. This book ...

  13. A nested Atlantic-Mediterranean Sea general circulation model for operational forecasting

    Directory of Open Access Journals (Sweden)

    P. Oddo

    2009-10-01

    Full Text Available A new numerical general circulation ocean model for the Mediterranean Sea has been implemented nested within an Atlantic general circulation model within the framework of the Marine Environment and Security for the European Area project (MERSEA, Desaubies, 2006. A 4-year twin experiment was carried out from January 2004 to December 2007 with two different models to evaluate the impact on the Mediterranean Sea circulation of open lateral boundary conditions in the Atlantic Ocean. One model considers a closed lateral boundary in a large Atlantic box and the other is nested in the same box in a global ocean circulation model. Impact was observed comparing the two simulations with independent observations: ARGO for temperature and salinity profiles and tide gauges and along-track satellite observations for the sea surface height. The improvement in the nested Atlantic-Mediterranean model with respect to the closed one is particularly evident in the salinity characteristics of the Modified Atlantic Water and in the Mediterranean sea level seasonal variability.

  14. Parameter identification in a generalized time-harmonic Rayleigh damping model for elastography.

    Directory of Open Access Journals (Sweden)

    Elijah E W Van Houten

    Full Text Available The identifiability of the two damping components of a Generalized Rayleigh Damping model is investigated through analysis of the continuum equilibrium equations as well as a simple spring-mass system. Generalized Rayleigh Damping provides a more diversified attenuation model than pure Viscoelasticity, with two parameters to describe attenuation effects and account for the complex damping behavior found in biological tissue. For heterogeneous Rayleigh Damped materials, there is no equivalent Viscoelastic system to describe the observed motions. For homogeneous systems, the inverse problem to determine the two Rayleigh Damping components is seen to be uniquely posed, in the sense that the inverse matrix for parameter identification is full rank, with certain conditions: when either multi-frequency data is available or when both shear and dilatational wave propagation is taken into account. For the multi-frequency case, the frequency dependency of the elastic parameters adds a level of complexity to the reconstruction problem that must be addressed for reasonable solutions. For the dilatational wave case, the accuracy of compressional wave measurement in fluid saturated soft tissues becomes an issue for qualitative parameter identification. These issues can be addressed with reasonable assumptions on the negligible damping levels of dilatational waves in soft tissue. In general, the parameters of a Generalized Rayleigh Damping model are identifiable for the elastography inverse problem, although with more complex conditions than the simpler Viscoelastic damping model. The value of this approach is the additional structural information provided by the Generalized Rayleigh Damping model, which can be linked to tissue composition as well as rheological interpretations.

  15. A study of modelling simplifications in ground vibration predictions for railway traffic at grade

    Science.gov (United States)

    Germonpré, M.; Degrande, G.; Lombaert, G.

    2017-10-01

    Accurate computational models are required to predict ground-borne vibration due to railway traffic. Such models generally require a substantial computational effort. Therefore, much research has focused on developing computationally efficient methods, by either exploiting the regularity of the problem geometry in the direction along the track or assuming a simplified track structure. This paper investigates the modelling errors caused by commonly made simplifications of the track geometry. A case study is presented investigating a ballasted track in an excavation. The soil underneath the ballast is stiffened by a lime treatment. First, periodic track models with different cross sections are analyzed, revealing that a prediction of the rail receptance only requires an accurate representation of the soil layering directly underneath the ballast. A much more detailed representation of the cross sectional geometry is required, however, to calculate vibration transfer from track to free field. Second, simplifications in the longitudinal track direction are investigated by comparing 2.5D and periodic track models. This comparison shows that the 2.5D model slightly overestimates the track stiffness, while the transfer functions between track and free field are well predicted. Using a 2.5D model to predict the response during a train passage leads to an overestimation of both train-track interaction forces and free field vibrations. A combined periodic/2.5D approach is therefore proposed in this paper. First, the dynamic axle loads are computed by solving the train-track interaction problem with a periodic model. Next, the vibration transfer to the free field is computed with a 2.5D model. This combined periodic/2.5D approach only introduces small modelling errors compared to an approach in which a periodic model is used in both steps, while significantly reducing the computational cost.

  16. Generalized symmetries and conserved quantities of the Lotka-Volterra model

    Science.gov (United States)

    Baumann, G.; Freyberger, M.

    1991-07-01

    We examine the generalized symmetries of the Lotka-Volterra model to find the parameter values at which one time-dependent integral of motion exists. In this case the integral can be read off from the symmetries themselves. We also demonstrate the connection to a Hamiltonian structure of the Lotka-Volterra model.

  17. Itinerant deaf educator and general educator perceptions of the D/HH push-in model.

    Science.gov (United States)

    Rabinsky, Rebecca J

    2013-01-01

    A qualitative case study using the deaf and hard of hearing (D/HH) push-in model was conducted on the perceptions of 3 itinerant deaf educators and 3 general educators working in 1 school district. Participants worked in pairs of 1 deaf educator and 1 general educator at 3 elementary schools. Open-ended research questions guided the study, which was concerned with teachers' perceptions of the model in general and with the model's advantages, disadvantages, and effectiveness. Data collected from observations, one-to-one interviews, and a focus group interview enabled the investigator to uncover 4 themes: Participants (a) had an overall positive experience, (b) viewed general education immersion as an advantage, (c) considered high noise levels a disadvantage, and (d) believed the effectiveness of the push-in model was dependent on several factors, in particular, the needs of the student and the nature of the general education classroom environment.

  18. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    Science.gov (United States)

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-12-01

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.

  19. Generalized Roe's numerical scheme for a two-fluid model

    International Nuclear Information System (INIS)

    Toumi, I.; Raymond, P.

    1993-01-01

    This paper is devoted to a mathematical and numerical study of a six equation two-fluid model. We will prove that the model is strictly hyperbolic due to the inclusion of the virtual mass force term in the phasic momentum equations. The two-fluid model is naturally written under a nonconservative form. To solve the nonlinear Riemann problem for this nonconservative hyperbolic system, a generalized Roe's approximate Riemann solver, is used, based on a linearization of the nonconservative terms. A Godunov type numerical scheme is built, using this approximate Riemann solver. 10 refs., 5 figs,

  20. Non-residential water demand model validated with extensive measurements and surveys

    NARCIS (Netherlands)

    Pieterse-Quirijns, I.; Blokker, E.J.M.; van der Blom, E.C.; Vreeburg, J.H.G.

    2013-01-01

    Existing Dutch guidelines for the design of the drinking water and hot water system of nonresidential buildings are based on outdated assumptions on peak water demand or on unfounded assumptions on hot water demand. They generally overestimate peak demand values required for the design of an

  1. Generalized Penner models and multicritical behavior

    International Nuclear Information System (INIS)

    Tan, C.

    1992-01-01

    In this paper, we are interested in the critical behavior of generalized Penner models at t∼-1+μ/N where the topological expansion for the free energy develops logarithmic singularities: Γ∼-(χ 0 μ 2 lnμ+χ 1 lnμ+...). We demonstrate that these criticalities can best be characterized by the fact that the large-N generating function becomes meromorphic with a single pole term of unit residue, F(z)→1/(z-a), where a is the location of the ''sink.'' For a one-band eigenvalue distribution, we identify multicritical potentials; we find that none of these can be associated with the c=1 string compactified at an integral multiple of the self-dual radius. We also give an exact solution to the Gaussian Penner model and explicitly demonstrate that, at criticality, this solution does not correspond to a c=1 string compactified at twice the self-dual radius

  2. General formulation of standard model the standard model is in need of new concepts

    International Nuclear Information System (INIS)

    Khodjaev, L.Sh.

    2001-01-01

    The phenomenological basis for formulation of the Standard Model has been reviewed. The Standard Model based on the fundamental postulates has been formulated. The concept of the fundamental symmetries has been introduced: To look for not fundamental particles but fundamental symmetries. By searching of more general theory it is natural to search first of all global symmetries and than to learn consequence connected with the localisation of this global symmetries like wise of the standard Model

  3. Attractive Hubbard model with disorder and the generalized Anderson theorem

    International Nuclear Information System (INIS)

    Kuchinskii, E. Z.; Kuleeva, N. A.; Sadovskii, M. V.

    2015-01-01

    Using the generalized DMFT+Σ approach, we study the influence of disorder on single-particle properties of the normal phase and the superconducting transition temperature in the attractive Hubbard model. A wide range of attractive potentials U is studied, from the weak coupling region, where both the instability of the normal phase and superconductivity are well described by the BCS model, to the strong-coupling region, where the superconducting transition is due to Bose-Einstein condensation (BEC) of compact Cooper pairs, formed at temperatures much higher than the superconducting transition temperature. We study two typical models of the conduction band with semi-elliptic and flat densities of states, respectively appropriate for three-dimensional and two-dimensional systems. For the semi-elliptic density of states, the disorder influence on all single-particle properties (e.g., density of states) is universal for an arbitrary strength of electronic correlations and disorder and is due to only the general disorder widening of the conduction band. In the case of a flat density of states, universality is absent in the general case, but still the disorder influence is mainly due to band widening, and the universal behavior is restored for large enough disorder. Using the combination of DMFT+Σ and Nozieres-Schmitt-Rink approximations, we study the disorder influence on the superconducting transition temperature T c for a range of characteristic values of U and disorder, including the BCS-BEC crossover region and the limit of strong-coupling. Disorder can either suppress T c (in the weak-coupling region) or significantly increase T c (in the strong-coupling region). However, in all cases, the generalized Anderson theorem is valid and all changes of the superconducting critical temperature are essentially due to only the general disorder widening of the conduction band

  4. Simplicial models for trace spaces II: General higher dimensional automata

    DEFF Research Database (Denmark)

    Raussen, Martin

    of directed paths with given end points in a pre-cubical complex as the nerve of a particular category. The paper generalizes the results from Raussen [19, 18] in which we had to assume that the HDA in question arises from a semaphore model. In particular, important for applications, it allows for models...

  5. Influence of an urban canopy model and PBL schemes on vertical mixing for air quality modeling over Greater Paris

    Science.gov (United States)

    Kim, Youngseob; Sartelet, Karine; Raut, Jean-Christophe; Chazette, Patrick

    2015-04-01

    Impacts of meteorological modeling in the planetary boundary layer (PBL) and urban canopy model (UCM) on the vertical mixing of pollutants are studied. Concentrations of gaseous chemical species, including ozone (O3) and nitrogen dioxide (NO2), and particulate matter over Paris and the near suburbs are simulated using the 3-dimensional chemistry-transport model Polair3D of the Polyphemus platform. Simulated concentrations of O3, NO2 and PM10/PM2.5 (particulate matter of aerodynamic diameter lower than 10 μm/2.5 μm, respectively) are first evaluated using ground measurements. Higher surface concentrations are obtained for PM10, PM2.5 and NO2 with the MYNN PBL scheme than the YSU PBL scheme because of lower PBL heights in the MYNN scheme. Differences between simulations using different PBL schemes are lower than differences between simulations with and without the UCM and the Corine land-use over urban areas. Regarding the root mean square error, the simulations using the UCM and the Corine land-use tend to perform better than the simulations without it. At urban stations, the PM10 and PM2.5 concentrations are over-estimated and the over-estimation is reduced using the UCM and the Corine land-use. The ability of the model to reproduce vertical mixing is evaluated using NO2 measurement data at the upper air observation station of the Eiffel Tower, and measurement data at a ground station near the Eiffel Tower. Although NO2 is under-estimated in all simulations, vertical mixing is greatly improved when using the UCM and the Corine land-use. Comparisons of the modeled PM10 vertical distributions to distributions deduced from surface and mobile lidar measurements are performed. The use of the UCM and the Corine land-use is crucial to accurately model PM10 concentrations during nighttime in the center of Paris. In the nocturnal stable boundary layer, PM10 is relatively well modeled, although it is over-estimated on 24 May and under-estimated on 25 May. However, PM10 is

  6. A general graphical user interface for automatic reliability modeling

    Science.gov (United States)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have field texts, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.

  7. Border Collision Bifurcations in a Generalized Model of Population Dynamics

    Directory of Open Access Journals (Sweden)

    Lilia M. Ladino

    2016-01-01

    Full Text Available We analyze the dynamics of a generalized discrete time population model of a two-stage species with recruitment and capture. This generalization, which is inspired by other approaches and real data that one can find in literature, consists in considering no restriction for the value of the two key parameters appearing in the model, that is, the natural death rate and the mortality rate due to fishing activity. In the more general case the feasibility of the system has been preserved by posing opportune formulas for the piecewise map defining the model. The resulting two-dimensional nonlinear map is not smooth, though continuous, as its definition changes as any border is crossed in the phase plane. Hence, techniques from the mathematical theory of piecewise smooth dynamical systems must be applied to show that, due to the existence of borders, abrupt changes in the dynamic behavior of population sizes and multistability emerge. The main novelty of the present contribution with respect to the previous ones is that, while using real data, richer dynamics are produced, such as fluctuations and multistability. Such new evidences are of great interest in biology since new strategies to preserve the survival of the species can be suggested.

  8. Response of an ocean general circulation model to wind and ...

    Indian Academy of Sciences (India)

    The stretched-coordinate ocean general circulation model has been designed to study the observed variability due to wind and thermodynamic forcings. The model domain extends from 60°N to 60°S and cyclically continuous in the longitudinal direction. The horizontal resolution is 5° × 5° and 9 discrete vertical levels.

  9. A generalized model for compact stars

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, Abdul [Bodai High School (H.S.), Department of Physics, Kolkata, West Bengal (India); Ray, Saibal [Government College of Engineering and Ceramic Technology, Department of Physics, Kolkata, West Bengal (India); Rahaman, Farook [Jadavpur University, Department of Mathematics, Kolkata, West Bengal (India)

    2016-05-15

    By virtue of the maximum entropy principle, we get an Euler-Lagrange equation which is a highly nonlinear differential equation containing the mass function and its derivatives. Solving the equation by a homotopy perturbation method we derive a generalized expression for the mass which is a polynomial function of the radial distance. Using the mass function we find a partially stable configuration and its characteristics. We show that different physical features of the known compact stars, viz. Her X-1, RX J 1856-37, SAX J (SS1), SAX J (SS2), and PSR J 1614-2230, can be explained by the present model. (orig.)

  10. A proposed general model of information behaviour.

    Directory of Open Access Journals (Sweden)

    2003-01-01

    Full Text Available Presents a critical description of Wilson's (1996 global model of information behaviour and proposes major modification on the basis of research into information behaviour of managers, conducted in Poland. The theoretical analysis and research results suggest that Wilson's model has certain imperfections, both in its conceptual content, and in graphical presentation. The model, for example, cannot be used to describe managers' information behaviour, since managers basically are not the end users of external from organization or computerized information services, and they acquire information mainly through various intermediaries. Therefore, the model cannot be considered as a general model, applicable to every category of information users. The proposed new model encompasses the main concepts of Wilson's model, such as: person-in-context, three categories of intervening variables (individual, social and environmental, activating mechanisms, cyclic character of information behaviours, and the adoption of a multidisciplinary approach to explain them. However, the new model introduces several changes. They include: 1. identification of 'context' with the intervening variables; 2. immersion of the chain of information behaviour in the 'context', to indicate that the context variables influence behaviour at all stages of the process (identification of needs, looking for information, processing and using it; 3. stress is put on the fact that the activating mechanisms also can occur at all stages of the information acquisition process; 4. introduction of two basic strategies of looking for information: personally and/or using various intermediaries.

  11. [Can overestimating one's own capacities of action lead to fall? A study on the perception of affordance in the elderly].

    Science.gov (United States)

    Luyat, Marion; Domino, Delphine; Noël, Myriam

    2008-12-01

    Falls are frequent in the elderly and account for medical complications and loss of autonomy. Affordance, a concept proposed by Gibson, can help to understand a possible cause of falls. An affordance is defined as a potentiality of action offered by the environment in relation with both the properties of this environment and the properties of the organism. Most of our daily activities reflect a perfect adjustment between the perception of these potentialities of action and our actual action abilities. In other words, we correctly perceive affordances. However, in the elderly, postural abilities are reduced and equilibration is more unstable. Thus, some falls could result from a misperception of the affordances of posturability. The aim of our study was to test the hypothesis that cognitive overestimation of real postural abilities in the elderly may cause falls. There would be a gap between what the old subjects believe to be able to do and what they actually can do. Fifteen young adults (mean age = 24 years) and fifteen older adults (mean age = 72 years) had to judge if they were able to stand upright on an inclined surface. The exploration of the inclined surface was made in two conditions: visually and also by haptics (without vision with a cane). In a second part, we measured their real postural stance on the inclined surface. The results show that the perceptual judgments were not different among old and young people. However, as expected, the old subjects had lower postural boundaries than the younger. They could stand on lower inclinations of the surface. These results show an involution of the perception of the affordances in aging. They support the hypothesis of a cognitive overestimation of action abilities in the elderly, possibly due to a difficulty to actualize the new limits for action.

  12. On the general procedure for modelling complex ecological systems

    International Nuclear Information System (INIS)

    He Shanyu.

    1987-12-01

    In this paper, the principle of a general procedure for modelling complex ecological systems, i.e. the Adaptive Superposition Procedure (ASP) is shortly stated. The result of application of ASP in a national project for ecological regionalization is also described. (author). 3 refs

  13. A Generalized Dynamic Model of Geared System: Establishment and Application

    Directory of Open Access Journals (Sweden)

    Hui Liu

    2011-12-01

    Full Text Available In order to make the dynamic characteristic simulation of the ordinary and planetary gears drive more accurate and more efficient , a generalized dynamic model of geared system is established including internal and external mesh gears in this paper. It is used to build a mathematical model, which achieves the auto judgment of the gear mesh state. We do not need to concern about active or passive gears any more, and the complicated power flow analysis can be avoided. With the numerical integration computation, the axis orbits diagram and dynamic gear mesh force characteristic are acquired and the results show that the dynamic response of translational displacement is greater when contacting line direction change is considered, and with the quickly change of direction of contacting line, the amplitude of mesh force would be increased, which easily causes the damage to the gear tooth. Moreover, compared with ordinary gear, dynamic responses of planetary gear would be affected greater by the gear backlash. Simulation results show the effectiveness of the generalized dynamic model and the mathematical model.

  14. A report on workshops: General circulation model study of climate- chemistry interaction

    International Nuclear Information System (INIS)

    Wei-Chyung, Wang; Isaksen, I.S.A.

    1993-01-01

    This report summarizes the discussion on General Circulation Model Study of Climate-Chemistry Interaction from two workshops, the first held 19--21 August 1992 at Oslo, Norway and the second 26--27 May 1993 at Albany, New York, USA. The workshops are the IAMAP activities under the Trace Constituent Working Group. The main objective of the two workshops was to recommend specific general circulation model (GCM) studies of the ozone distribution and the climatic effect of its changes. The workshops also discussed the climatic implications of increasing sulfate aerosols because of its importance to regional climate. The workshops were organized into four working groups: observation of atmospheric O 3 ; modeling of atmospheric chemical composition; modeling of sulfate aerosols; and aspects of climate modeling

  15. On-line validation of linear process models using generalized likelihood ratios

    International Nuclear Information System (INIS)

    Tylee, J.L.

    1981-12-01

    A real-time method for testing the validity of linear models of nonlinear processes is described and evaluated. Using generalized likelihood ratios, the model dynamics are continually monitored to see if the process has moved far enough away from the nominal linear model operating point to justify generation of a new linear model. The method is demonstrated using a seventh-order model of a natural circulation steam generator

  16. Evaluation of daily maximum and minimum 2-m temperatures as simulated with the Regional Climate Model COSMO-CLM over Africa

    Directory of Open Access Journals (Sweden)

    Stefan Krähenmann

    2013-07-01

    Full Text Available The representation of the diurnal 2-m temperature cycle is challenging because of the many processes involved, particularly land-atmosphere interactions. This study examines the ability of the regional climate model COSMO-CLM (version 4.8 to capture the statistics of daily maximum and minimum 2-m temperatures (Tmin/Tmax over Africa. The simulations are carried out at two different horizontal grid-spacings (0.22° and 0.44°, and are driven by ECMWF ERA-Interim reanalyses as near-perfect lateral boundary conditions. As evaluation reference, a high-resolution gridded dataset of daily maximum and minimum temperatures (Tmin/Tmax for Africa (covering the period 2008–2010 is created using the regression-kriging-regression-kriging (RKRK algorithm. RKRK applies, among other predictors, the remotely sensed predictors land surface temperature and cloud cover to compensate for the missing information about the temperature pattern due to the low station density over Africa. This dataset allows the evaluation of temperature characteristics like the frequencies of Tmin/Tmax, the diurnal temperature range, and the 90th percentile of Tmax. Although the large-scale patterns of temperature are reproduced well, COSMO-CLM shows significant under- and overestimation of temperature at regional scales. The hemispheric summers are generally too warm and the day-to-day temperature variability is overestimated over northern and southern extra-tropical Africa. The average diurnal temperature range is underestimated by about 2°C across arid areas, yet overestimated by around 2°C over the African tropics. An evaluation based on frequency distributions shows good model performance for simulated Tmin (the simulated frequency distributions capture more than 80% of the observed ones, but less well performance for Tmax (capture below 70%. Further, over wide parts of Africa a too large fraction of daily Tmax values exceeds the observed 90th percentile of Tmax, particularly

  17. Evaluation of daily maximum and minimum 2-m temperatures as simulated with the regional climate model COSMO-CLM over Africa

    Energy Technology Data Exchange (ETDEWEB)

    Kraehenmann, Stefan; Kothe, Steffen; Ahrens, Bodo [Frankfurt Univ. (Germany). Inst. for Atmospheric and Environmental Sciences; Panitz, Hans-Juergen [Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen (Germany)

    2013-10-15

    The representation of the diurnal 2-m temperature cycle is challenging because of the many processes involved, particularly land-atmosphere interactions. This study examines the ability of the regional climate model COSMO-CLM (version 4.8) to capture the statistics of daily maximum and minimum 2-m temperatures (Tmin/Tmax) over Africa. The simulations are carried out at two different horizontal grid-spacings (0.22 and 0.44 ), and are driven by ECMWF ERA-Interim reanalyses as near-perfect lateral boundary conditions. As evaluation reference, a high-resolution gridded dataset of daily maximum and minimum temperatures (Tmin/Tmax) for Africa (covering the period 2008-2010) is created using the regression-kriging-regression-kriging (RKRK) algorithm. RKRK applies, among other predictors, the remotely sensed predictors land surface temperature and cloud cover to compensate for the missing information about the temperature pattern due to the low station density over Africa. This dataset allows the evaluation of temperature characteristics like the frequencies of Tmin/Tmax, the diurnal temperature range, and the 90{sup th} percentile of Tmax. Although the large-scale patterns of temperature are reproduced well, COSMO-CLM shows significant under- and overestimation of temperature at regional scales. The hemispheric summers are generally too warm and the day-to-day temperature variability is overestimated over northern and southern extra-tropical Africa. The average diurnal temperature range is underestimated by about 2 C across arid areas, yet overestimated by around 2 C over the African tropics. An evaluation based on frequency distributions shows good model performance for simulated Tmin (the simulated frequency distributions capture more than 80% of the observed ones), but less well performance for Tmax (capture below 70%). Further, over wide parts of Africa a too large fraction of daily Tmax values exceeds the observed 90{sup th} percentile of Tmax, particularly across

  18. A General Attribute and Rule Based Role-Based Access Control Model

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Growing numbers of users and many access control policies which involve many different resource attributes in service-oriented environments bring various problems in protecting resource. This paper analyzes the relationships of resource attributes to user attributes in all policies, and propose a general attribute and rule based role-based access control(GAR-RBAC) model to meet the security needs. The model can dynamically assign users to roles via rules to meet the need of growing numbers of users. These rules use different attribute expression and permission as a part of authorization constraints, and are defined by analyzing relations of resource attributes to user attributes in many access policies that are defined by the enterprise. The model is a general access control model, and can support many access control policies, and also can be used to wider application for service. The paper also describes how to use the GAR-RBAC model in Web service environments.

  19. Exploring the squeezed three-point galaxy correlation function with generalized halo occupation distribution models

    Science.gov (United States)

    Yuan, Sihan; Eisenstein, Daniel J.; Garrison, Lehman H.

    2018-04-01

    We present the GeneRalized ANd Differentiable Halo Occupation Distribution (GRAND-HOD) routine that generalizes the standard 5 parameter halo occupation distribution model (HOD) with various halo-scale physics and assembly bias. We describe the methodology of 4 different generalizations: satellite distribution generalization, velocity bias, closest approach distance generalization, and assembly bias. We showcase the signatures of these generalizations in the 2-point correlation function (2PCF) and the squeezed 3-point correlation function (squeezed 3PCF). We identify generalized HOD prescriptions that are nearly degenerate in the projected 2PCF and demonstrate that these degeneracies are broken in the redshift-space anisotropic 2PCF and the squeezed 3PCF. We also discuss the possibility of identifying degeneracies in the anisotropic 2PCF and further demonstrate the extra constraining power of the squeezed 3PCF on galaxy-halo connection models. We find that within our current HOD framework, the anisotropic 2PCF can predict the squeezed 3PCF better than its statistical error. This implies that a discordant squeezed 3PCF measurement could falsify the particular HOD model space. Alternatively, it is possible that further generalizations of the HOD model would open opportunities for the squeezed 3PCF to provide novel parameter measurements. The GRAND-HOD Python package is publicly available at https://github.com/SandyYuan/GRAND-HOD.

  20. Vacuum Expectation Value Profiles of the Bulk Scalar Field in the Generalized Randall-Sundrum Model

    International Nuclear Information System (INIS)

    Moazzen, M.; Tofighi, A.; Farokhtabar, A.

    2015-01-01

    In the generalized Randall-Sundrum warped brane-world model the cosmological constant induced on the visible brane can be positive or negative. In this paper we investigate profiles of vacuum expectation value of the bulk scalar field under general Dirichlet and Neumann boundary conditions in the generalized warped brane-world model. We show that the VEV profiles generally depend on the value of the brane cosmological constant. We find that the VEV profiles of the bulk scalar field for a visible brane with negative cosmological constant and positive tension are quite distinct from those of Randall-Sundrum model. In addition we show that the VEV profiles for a visible brane with large positive cosmological constant are also different from those of the Randall-Sundrum model. We also verify that Goldberger and Wise mechanism can work under nonzero Dirichlet boundary conditions in the generalized Randall-Sundrum model.

  1. Modelling road accident blackspots data with the discrete generalized Pareto distribution.

    Science.gov (United States)

    Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María

    2014-10-01

    This study shows how road traffic networks events, in particular road accidents on blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities on Spanish blackspots in the period 2003-2007, from Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and particular case of the previous model). For that, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis and rth-order moments; applied two estimation methods of their parameters: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: Chi-square test and discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that those probabilistic models can be useful to describe the road accident blackspots datasets analyzed. Copyright © 2014 Elsevier Ltd. All rights reserved.

  2. Self-organization of critical behavior in controlled general queueing models

    International Nuclear Information System (INIS)

    Blanchard, Ph.; Hongler, M.-O.

    2004-01-01

    We consider general queueing models of the (G/G/1) type with service times controlled by the busy period. For feedback control mechanisms driving the system to very high traffic load, it is shown the busy period probability density exhibits a generic -((3)/(2)) power law which is a typical mean field behavior of SOC models

  3. General sets of coherent states and the Jaynes-Cummings model

    International Nuclear Information System (INIS)

    Daoud, M.; Hussin, V.

    2002-01-01

    General sets of coherent states are constructed for quantum systems admitting a nondegenerate infinite discrete energy spectrum. They are eigenstates of an annihilation operator and satisfy the usual properties of standard coherent states. The application of such a construction to the quantum optics Jaynes-Cummings model leads to a new understanding of the properties of this model. (author)

  4. Self-organization of critical behavior in controlled general queueing models

    Science.gov (United States)

    Blanchard, Ph.; Hongler, M.-O.

    2004-03-01

    We consider general queueing models of the (G/G/1) type with service times controlled by the busy period. For feedback control mechanisms driving the system to very high traffic load, it is shown the busy period probability density exhibits a generic - {3}/{2} power law which is a typical mean field behavior of SOC models.

  5. Generalized Jaynes-Cummings model as a quantum search algorithm

    International Nuclear Information System (INIS)

    Romanelli, A.

    2009-01-01

    We propose a continuous time quantum search algorithm using a generalization of the Jaynes-Cummings model. In this model the states of the atom are the elements among which the algorithm realizes the search, exciting resonances between the initial and the searched states. This algorithm behaves like Grover's algorithm; the optimal search time is proportional to the square root of the size of the search set and the probability to find the searched state oscillates periodically in time. In this frame, it is possible to reinterpret the usual Jaynes-Cummings model as a trivial case of the quantum search algorithm.

  6. Doctor-patient relationships in general practice--a different model.

    Science.gov (United States)

    Kushner, T

    1981-09-01

    Philosophical concerns cannot be excluded from even a cursory examination of the physician-patient relationship. Two possible alternatives for determining what this relationship entails are the teleological (outcome) approach vs the deontological (process) one. Traditionally, this relationship has been structured around the 'clinical model' which views the physician-patient relationship in teleological terms. Data on the actual content of general medical practice indicate the advisability of reassessing this relationship, and suggest that the 'clinical model' may be too limiting, and that a more appropriate basis for the physician-patient relationship is one described in this paper as the 'relational model'.

  7. General circulation model study of atmospheric carbon monoxide

    International Nuclear Information System (INIS)

    Pinto, J.P.; Yung, Y.L.; Rind, D.; Russell, G.L.; Lerner, J.A.; Hansen, J.E.; Hameed, S.

    1983-01-01

    The carbon monoxide cycle is studied by incorporating the known and hypothetical sources and sinks in a tracer model that uses the winds generated by a general circulation model. Photochemical production and loss terms, which depend on OH radical concentrations, are calculated in an interactive fashion. The computed global distribution and seasonal variations of CO are compared with observations to obtain constraints on the distribution and magnitude of the sources and sinks of CO, and on the tropospheric abundance of OH. The simplest model that accounts for available observations requires a low latitude plant source of about 1.3 x 10 15 g yr -1 , in addition to sources from incomplete combustion of fossil fuels and oxidation of methane. The globally averaged OH concentration calculated in the model is 7 x 10 5 cm -3 . Models that calculate globally averaged OH concentrations much lower than our nominal value are not consistent with the observed variability of CO. Such models are also inconsistent with measurements of CO isotopic abundances, which imply the existence of plant sources

  8. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  9. The epistemological status of general circulation models

    Science.gov (United States)

    Loehle, Craig

    2018-03-01

    Forecasts of both likely anthropogenic effects on climate and consequent effects on nature and society are based on large, complex software tools called general circulation models (GCMs). Forecasts generated by GCMs have been used extensively in policy decisions related to climate change. However, the relation between underlying physical theories and results produced by GCMs is unclear. In the case of GCMs, many discretizations and approximations are made, and simulating Earth system processes is far from simple and currently leads to some results with unknown energy balance implications. Statistical testing of GCM forecasts for degree of agreement with data would facilitate assessment of fitness for use. If model results need to be put on an anomaly basis due to model bias, then both visual and quantitative measures of model fit depend strongly on the reference period used for normalization, making testing problematic. Epistemology is here applied to problems of statistical inference during testing, the relationship between the underlying physics and the models, the epistemic meaning of ensemble statistics, problems of spatial and temporal scale, the existence or not of an unforced null for climate fluctuations, the meaning of existing uncertainty estimates, and other issues. Rigorous reasoning entails carefully quantifying levels of uncertainty.

  10. A housing stock model of non-heating end-use energy in England verified by aggregate energy use data

    International Nuclear Information System (INIS)

    Lorimer, Stephen

    2012-01-01

    This paper proposes a housing stock model of non-heating end-use energy for England that can be verified using aggregate energy use data available for small areas. These end-uses, commonly referred to as appliances and lighting, are a rapidly increasing part of residential energy demand. This paper proposes a model that can be verified using aggregated data of electricity meters in small areas and census data on housing. Secondly, any differences that open up between major collections of housing could potentially be resolved by using data from frequently updated expenditure surveys. For the year 2008, the model overestimated domestic non-heating energy use at the national scale by 1.5%. This model was then used on the residential sector with various area classifications, which found that rural and suburban areas were generally underestimated by up to 3.3% and urban areas overestimated by up to 5.2% with the notable exception of “professional city life” classifications. The model proposed in this paper has the potential to be a verifiable and adaptable model for non-heating end-use energy in households in England for the future. - Highlights: ► Housing stock energy model was developed for end-uses outside of heating for UK context. ► This entailed changes to the building energy model that serves as the bottom of the stock model. ► The model is adaptable to reflect rapid changes in consumption between major housing surveys. ► Verification was done against aggregated consumption data and for the first time uses a measured size of the housing stock. ► The verification process revealed spatial variations in consumption patterns for future research.

  11. Self-dual configurations in Abelian Higgs models with k-generalized gauge field dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Casana, R.; Cavalcante, A. [Departamento de Física, Universidade Federal do Maranhão,65080-805, São Luís, Maranhão (Brazil); Hora, E. da [Departamento de Física, Universidade Federal do Maranhão,65080-805, São Luís, Maranhão (Brazil); Coordenadoria Interdisciplinar de Ciência e Tecnologia, Universidade Federal do Maranhão,65080-805, São Luís, Maranhão (Brazil)

    2016-12-14

    We have shown the existence of self-dual solutions in new Maxwell-Higgs scenarios where the gauge field possesses a k-generalized dynamic, i.e., the kinetic term of gauge field is a highly nonlinear function of F{sub μν}F{sup μν}. We have implemented our proposal by means of a k-generalized model displaying the spontaneous symmetry breaking phenomenon. We implement consistently the Bogomol’nyi-Prasad-Sommerfield formalism providing highly nonlinear self-dual equations whose solutions are electrically neutral possessing total energy proportional to the magnetic flux. Among the infinite set of possible configurations, we have found families of k-generalized models whose self-dual equations have a form mathematically similar to the ones arising in the Maxwell-Higgs or Chern-Simons-Higgs models. Furthermore, we have verified that our proposal also supports infinite twinlike models with |ϕ|{sup 4}-potential or |ϕ|{sup 6}-potential. With the aim to show explicitly that the BPS equations are able to provide well-behaved configurations, we have considered a test model in order to study axially symmetric vortices. By depending of the self-dual potential, we have shown that the k-generalized model is able to produce solutions that for long distances have a exponential decay (as Abrikosov-Nielsen-Olesen vortices) or have a power-law decay (characterizing delocalized vortices). In all cases, we observe that the generalization modifies the vortex core size, the magnetic field amplitude and the bosonic masses but the total energy remains proportional to the quantized magnetic flux.

  12. Testing for constant nonparametric effects in general semiparametric regression models with interactions

    KAUST Repository

    Wei, Jiawei

    2011-07-01

    We consider the problem of testing for a constant nonparametric effect in a general semi-parametric regression model when there is the potential for interaction between the parametrically and nonparametrically modeled variables. The work was originally motivated by a unique testing problem in genetic epidemiology (Chatterjee, et al., 2006) that involved a typical generalized linear model but with an additional term reminiscent of the Tukey one-degree-of-freedom formulation, and their interest was in testing for main effects of the genetic variables, while gaining statistical power by allowing for a possible interaction between genes and the environment. Later work (Maity, et al., 2009) involved the possibility of modeling the environmental variable nonparametrically, but they focused on whether there was a parametric main effect for the genetic variables. In this paper, we consider the complementary problem, where the interest is in testing for the main effect of the nonparametrically modeled environmental variable. We derive a generalized likelihood ratio test for this hypothesis, show how to implement it, and provide evidence that our method can improve statistical power when compared to standard partially linear models with main effects only. We use the method for the primary purpose of analyzing data from a case-control study of colorectal adenoma.

  13. A General Accelerated Degradation Model Based on the Wiener Process.

    Science.gov (United States)

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-12-06

    Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.

  14. A General Accelerated Degradation Model Based on the Wiener Process

    Directory of Open Access Journals (Sweden)

    Le Liu

    2016-12-01

    Full Text Available Accelerated degradation testing (ADT is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.

  15. Estimating and Forecasting Generalized Fractional Long Memory Stochastic Volatility Models

    Directory of Open Access Journals (Sweden)

    Shelton Peiris

    2017-12-01

    Full Text Available This paper considers a flexible class of time series models generated by Gegenbauer polynomials incorporating the long memory in stochastic volatility (SV components in order to develop the General Long Memory SV (GLMSV model. We examine the corresponding statistical properties of this model, discuss the spectral likelihood estimation and investigate the finite sample properties via Monte Carlo experiments. We provide empirical evidence by applying the GLMSV model to three exchange rate return series and conjecture that the results of out-of-sample forecasts adequately confirm the use of GLMSV model in certain financial applications.

  16. Python tools for rapid development, calibration, and analysis of generalized groundwater-flow models

    Science.gov (United States)

    Starn, J. J.; Belitz, K.

    2014-12-01

    National-scale water-quality data sets for the United States have been available for several decades; however, groundwater models to interpret these data are available for only a small percentage of the country. Generalized models may be adequate to explain and project groundwater-quality trends at the national scale by using regional scale models (defined as watersheds at or between the HUC-6 and HUC-8 levels). Coast-to-coast data such as the National Hydrologic Dataset Plus (NHD+) make it possible to extract the basic building blocks for a model anywhere in the country. IPython notebooks have been developed to automate the creation of generalized groundwater-flow models from the NHD+. The notebook format allows rapid testing of methods for model creation, calibration, and analysis. Capabilities within the Python ecosystem greatly speed up the development and testing of algorithms. GeoPandas is used for very efficient geospatial processing. Raster processing includes the Geospatial Data Abstraction Library and image processing tools. Model creation is made possible through Flopy, a versatile input and output writer for several MODFLOW-based flow and transport model codes. Interpolation, integration, and map plotting included in the standard Python tool stack also are used, making the notebook a comprehensive platform within on to build and evaluate general models. Models with alternative boundary conditions, number of layers, and cell spacing can be tested against one another and evaluated by using water-quality data. Novel calibration criteria were developed by comparing modeled heads to land-surface and surface-water elevations. Information, such as predicted age distributions, can be extracted from general models and tested for its ability to explain water-quality trends. Groundwater ages then can be correlated with horizontal and vertical hydrologic position, a relation that can be used for statistical assessment of likely groundwater-quality conditions

  17. Generalized anxiety disorder in urban China: Prevalence, awareness, and disease burden.

    Science.gov (United States)

    Yu, Wei; Singh, Shikha Satendra; Calhoun, Shawna; Zhang, Hui; Zhao, Xiahong; Yang, Fengchi

    2018-07-01

    Limited published research has quantified the Generalized Anxiety Disorder (GAD) prevalence and its burden in China. This study aimed to fill in the knowledge gap and to evaluate the burden of GAD among adults in urban China. This study utilized existing data from the China National Health and Wellness Survey (NHWS) 2012-2013. Prevalence of self-reported diagnosed and undiagnosed GAD was estimated. Diagnosed and undiagnosed GAD respondents were compared with non-anxious respondents in terms of health-related quality of life (HRQoL), resource utilization, and work productivity and activity impairment using multivariate generalized linear models. A multivariate logistic model assessed the risk factors for GAD. The prevalence of undiagnosed/diagnosed GAD was 5.3% in urban China with only 0.5% of GAD respondents reporting a diagnosis. Compared with non-anxious respondents, both diagnosed and undiagnosed GAD respondents had significantly lower HRQoL, more work productivity and activity impairment, and greater healthcare resource utilization in the past six months. Age, gender, marital status, income level, insurance status, smoking, drinking and exercise behaviors, and comorbidity burdens were significantly associated with GAD. This was a patient-reported study; data are therefore subject to recall bias. The survey was limited to respondents in urban China; therefore, these results focused on urban China and may be under- or over-estimating GAD prevalence in China. Causal inferences cannot be made given the cross-sectional nature of the study. GAD may be substantially under-diagnosed in urban China. More healthcare resources should be invested to alleviate the burden of GAD. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Midlatitude Forcing Mechanisms for Glacier Mass Balance Investigated Using General Circulation Models

    NARCIS (Netherlands)

    Reichert, B.K.; Bengtsson, L.; Oerlemans, J.

    2001-01-01

    A process-oriented modeling approach is applied in order to simulate glacier mass balance for individual glaciers using statistically downscaled general circulation models (GCMs). Glacier-specific seasonal sensitivity characteristics based on a mass balance model of intermediate complexity are used

  19. Generalized transport model for phase transition with memory

    International Nuclear Information System (INIS)

    Chen, Chi; Ciucci, Francesco

    2013-01-01

    A general model for phenomenological transport in phase transition is derived, which extends Jäckle and Frisch model of phase transition with memory and the Cahn–Hilliard model. In addition to including interfacial energy to account for the presence of interfaces, we introduce viscosity and relaxation contributions, which result from incorporating memory effect into the driving potential. Our simulation results show that even without interfacial energy term, the viscous term can lead to transient diffuse interfaces. From the phase transition induced hysteresis, we discover different energy dissipation mechanism for the interfacial energy and the viscosity effect. In addition, by combining viscosity and interfacial energy, we find that if the former dominates, then the concentration difference across the phase boundary is reduced; conversely, if the interfacial energy is greater then this difference is enlarged.

  20. Generalized isothermal models with strange equation of state

    Indian Academy of Sciences (India)

    intention to study the Einstein–Maxwell system with a linear equation of state with ... It is our intention to model the interior of a dense realistic star with a general ... The definition m(r) = 1. 2. ∫ r. 0 ω2ρ(ω)dω. (14) represents the mass contained within a radius r which is a useful physical quantity. The mass function (14) has ...

  1. General Separations Area (GSA) Groundwater Flow Model Update: Hydrostratigraphic Data

    Energy Technology Data Exchange (ETDEWEB)

    Bagwell, L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Bennett, P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Flach, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-02-21

    This document describes the assembly, selection, and interpretation of hydrostratigraphic data for input to an updated groundwater flow model for the General Separations Area (GSA; Figure 1) at the Department of Energy’s (DOE) Savannah River Site (SRS). This report is one of several discrete but interrelated tasks that support development of an updated groundwater model (Bagwell and Flach, 2016).

  2. Evaluation of a seven-year air quality simulation using the Weather Research and Forecasting (WRF)/Community Multiscale Air Quality (CMAQ) models in the eastern United States.

    Science.gov (United States)

    Zhang, Hongliang; Chen, Gang; Hu, Jianlin; Chen, Shu-Hua; Wiedinmyer, Christine; Kleeman, Michael; Ying, Qi

    2014-03-01

    The performance of the Weather Research and Forecasting (WRF)/Community Multi-scale Air Quality (CMAQ) system in the eastern United States is analyzed based on results from a seven-year modeling study with a 4-km spatial resolution. For 2-m temperature, the monthly averaged mean bias (MB) and gross error (GE) values are generally within the recommended performance criteria, although temperature is over-predicted with MB values up to 2K. Water vapor at 2-m is well-predicted but significant biases (>2 g kg(-1)) were observed in wintertime. Predictions for wind speed are satisfactory but biased towards over-prediction with 0nitrate and sulfate concentrations are also well reproduced. The other unresolved PM2.5 components (OTHER) are significantly overestimated by more than a factor of two. No conclusive explanations can be made regarding the possible cause of this universal overestimation, which warrants a follow-up study to better understand this problem. Copyright © 2013 Elsevier B.V. All rights reserved.

  3. General classical solutions in the noncommutative CP{sup N-1} model

    Energy Technology Data Exchange (ETDEWEB)

    Foda, O.; Jack, I.; Jones, D.R.T

    2002-10-31

    We give an explicit construction of general classical solutions for the noncommutative CP{sup N-1} model in two dimensions, showing that they correspond to integer values for the action and topological charge. We also give explicit solutions for the Dirac equation in the background of these general solutions and show that the index theorem is satisfied.

  4. EVALUATING PREDICTIVE ERRORS OF A COMPLEX ENVIRONMENTAL MODEL USING A GENERAL LINEAR MODEL AND LEAST SQUARE MEANS

    Science.gov (United States)

    A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...

  5. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    Science.gov (United States)

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.

  6. Optimisation of a parallel ocean general circulation model

    Science.gov (United States)

    Beare, M. I.; Stevens, D. P.

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  7. Symplectic models for general insertion devices

    International Nuclear Information System (INIS)

    Wu, Y.; Forest, E.; Robin, D. S.; Nishimura, H.; Wolski, A.; Litvinenko, V. N.

    2001-01-01

    A variety of insertion devices (IDs), wigglers and undulators, linearly or elliptically polarized,are widely used as high brightness radiation sources at the modern light source rings. Long and high-field wigglers have also been proposed as the main source of radiation damping at next generation damping rings. As a result, it becomes increasingly important to understand the impact of IDs on the charged particle dynamics in the storage ring. In this paper, we report our recent development of a general explicit symplectic model for IDs with the paraxial ray approximation. High-order explicit symplectic integrators are developed to study real-world insertion devices with a number of wiggler harmonics and arbitrary polarizations

  8. Bayesian prediction of spatial count data using generalized linear mixed models

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge

    2002-01-01

    Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, ...

  9. General classical solutions of the complex Grassmannian and CP sub(N-1) sigma models

    International Nuclear Information System (INIS)

    Sasaki, Ryu.

    1983-05-01

    General classical solutions are constructed for the complex Grassmannian non-linear sigma models in two euclidean dimensions in terms of holomorphic functions. The Grassmannian sigma models are a simple generalization of the well known CP sup(N-1) model in two dimensions and they share various interesting properties; existence of (anti-) instantons, an infinite number of conserved quantities and complete integrability. (author)

  10. Improving Modeling of Extreme Events using Generalized Extreme Value Distribution or Generalized Pareto Distribution with Mixing Unconditional Disturbances

    OpenAIRE

    Suarez, R

    2001-01-01

    In this paper an alternative non-parametric historical simulation approach, the Mixing Unconditional Disturbances model with constant volatility, where price paths are generated by reshuffling disturbances for S&P 500 Index returns over the period 1950 - 1998, is used to estimate a Generalized Extreme Value Distribution and a Generalized Pareto Distribution. An ordinary back-testing for period 1999 - 2008 was made to verify this technique, providing higher accuracy returns level under upper ...

  11. Geometric mean IELT and premature ejaculation: appropriate statistics to avoid overestimation of treatment efficacy.

    Science.gov (United States)

    Waldinger, Marcel D; Zwinderman, Aeilko H; Olivier, Berend; Schweitzer, Dave H

    2008-02-01

    The intravaginal ejaculation latency time (IELT) behaves in a skewed manner and needs the appropriate statistics for correct interpretation of treatment results. To explain the rightful use of geometrical mean IELT values and the fold increase of the geometric mean IELT because of the positively skewed IELT distribution. Linking theoretical arguments to the outcome of several selective serotonin reuptake inhibitor and modern antidepressant study results. Geometric mean IELT and fold increase of geometrical mean IELT. Log-transforming each separate IELT measurement of each individual man is the basis for the calculation of the geometric mean IELT. A drug-induced positively skewed IELT distribution necessitates the calculation of the geometric mean IELTs at baseline and during drug treatment. In a positively skewed IELT distribution, the use of the "arithmetic" mean IELT risks an overestimation of the drug-induced ejaculation delay as the mean IELT is always higher than the geometric mean IELT. Strong ejaculation-delaying drugs give rise to a strong positively skewed IELT distribution, whereas weak ejaculation-delaying drugs give rise to (much) less skewed IELT distributions. Ejaculation delay is expressed in fold increase of the geometric mean IELT. Drug-induced ejaculatory performance discloses a positively skewed IELT distribution, requiring the use of the geometric mean IELT and the fold increase of the geometric mean IELT.

  12. Plane symmetric cosmological micro model in modified theory of Einstein’s general relativity

    Directory of Open Access Journals (Sweden)

    Panigrahi U.K.

    2003-01-01

    Full Text Available In this paper, we have investigated an anisotropic homogeneous plane symmetric cosmological micro-model in the presence of massless scalar field in modified theory of Einstein's general relativity. Some interesting physical and geometrical aspects of the model together with singularity in the model are discussed. Further, it is shown that this theory is valid and leads to Ein­stein's theory as the coupling parameter λ →>• 0 in micro (i.e. quantum level in general.

  13. Warm intermediate inflationary Universe model in the presence of a generalized Chaplygin gas

    Energy Technology Data Exchange (ETDEWEB)

    Herrera, Ramon [Pontificia Universidad Catolica de Valparaiso, Instituto de Fisica, Valparaiso (Chile); Videla, Nelson [Universidad de Chile, Departamento de Fisica, FCFM, Santiago (Chile); Olivares, Marco [Universidad Diego Portales, Facultad de Ingenieria, Santiago (Chile)

    2016-01-15

    A warm intermediate inflationary model in the context of generalized Chaplygin gas is investigated. We study this model in the weak and strong dissipative regimes, considering a generalized form of the dissipative coefficient Γ = Γ(T,φ), and we describe the inflationary dynamics in the slow-roll approximation. We find constraints on the parameters in our model considering the Planck 2015 data, together with the condition for warm inflation T > H, and the conditions for the weak and strong dissipative regimes. (orig.)

  14. Measuring and Examining General Self-Efficacy among Community College Students: A Structural Equation Modeling Approach

    Science.gov (United States)

    Chen, Yu; Starobin, Soko S.

    2018-01-01

    This study examined a psychosocial mechanism of how general self-efficacy interacts with other key factors and influences degree aspiration for students enrolled in an urban diverse community college. Using general self-efficacy scales, the authors hypothesized the General Self-efficacy model for Community College students (the GSE-CC model). A…

  15. A general modeling framework for describing spatially structured population dynamics

    Science.gov (United States)

    Sample, Christine; Fryxell, John; Bieri, Joanna; Federico, Paula; Earl, Julia; Wiederholt, Ruscena; Mattsson, Brady; Flockhart, Tyler; Nicol, Sam; Diffendorfer, James E.; Thogmartin, Wayne E.; Erickson, Richard A.; Norris, D. Ryan

    2017-01-01

    Variation in movement across time and space fundamentally shapes the abundance and distribution of populations. Although a variety of approaches model structured population dynamics, they are limited to specific types of spatially structured populations and lack a unifying framework. Here, we propose a unified network-based framework sufficiently novel in its flexibility to capture a wide variety of spatiotemporal processes including metapopulations and a range of migratory patterns. It can accommodate different kinds of age structures, forms of population growth, dispersal, nomadism and migration, and alternative life-history strategies. Our objective was to link three general elements common to all spatially structured populations (space, time and movement) under a single mathematical framework. To do this, we adopt a network modeling approach. The spatial structure of a population is represented by a weighted and directed network. Each node and each edge has a set of attributes which vary through time. The dynamics of our network-based population is modeled with discrete time steps. Using both theoretical and real-world examples, we show how common elements recur across species with disparate movement strategies and how they can be combined under a unified mathematical framework. We illustrate how metapopulations, various migratory patterns, and nomadism can be represented with this modeling approach. We also apply our network-based framework to four organisms spanning a wide range of life histories, movement patterns, and carrying capacities. General computer code to implement our framework is provided, which can be applied to almost any spatially structured population. This framework contributes to our theoretical understanding of population dynamics and has practical management applications, including understanding the impact of perturbations on population size, distribution, and movement patterns. By working within a common framework, there is less chance

  16. Stability of a general delayed virus dynamics model with humoral immunity and cellular infection

    Science.gov (United States)

    Elaiw, A. M.; Raezah, A. A.; Alofi, A. S.

    2017-06-01

    In this paper, we investigate the dynamical behavior of a general nonlinear model for virus dynamics with virus-target and infected-target incidences. The model incorporates humoral immune response and distributed time delays. The model is a four dimensional system of delay differential equations where the production and removal rates of the virus and cells are given by general nonlinear functions. We derive the basic reproduction parameter R˜0 G and the humoral immune response activation number R˜1 G and establish a set of conditions on the general functions which are sufficient to determine the global dynamics of the models. We use suitable Lyapunov functionals and apply LaSalle's invariance principle to prove the global asymptotic stability of the all equilibria of the model. We confirm the theoretical results by numerical simulations.

  17. The asymmetric effects of El Niño and La Niña on the East Asian winter monsoon and their simulation by CMIP5 atmospheric models

    Science.gov (United States)

    Guo, Zhun; Zhou, Tianjun; Wu, Bo

    2017-02-01

    El Niño-Southern Oscillation (ENSO) events significantly affect the year-by-year variations of the East Asian winter monsoon (EAWM). However, the effect of La Niña events on the EAWM is not a mirror image of that of El Niño events. Although the EAWM becomes generally weaker during El Niño events and stronger during La Niña winters, the enhanced precipitation over the southeastern China and warmer surface air temperature along the East Asian coastline during El Niño years are more significant. These asymmetric effects are caused by the asymmetric longitudinal positions of the western North Pacific (WNP) anticyclone during El Niño events and the WNP cyclone during La Niña events; specifically, the center of the WNP cyclone during La Niña events is westward-shifted relative to its El Niño counterpart. This central-position shift results from the longitudinal shift of remote El Niño and La Niña anomalous heating, and asymmetry in the amplitude of local sea surface temperature anomalies over the WNP. However, such asymmetric effects of ENSO on the EAWM are barely reproduced by the atmospheric models of Phase 5 of the Coupled Model Intercomparison Project (CMIP5), although the spatial patterns of anomalous circulations are reasonably reproduced. The major limitation of the CMIP5 models is an overestimation of the anomalous WNP anticyclone/cyclone, which leads to stronger EAWM rainfall responses. The overestimated latent heat flux anomalies near the South China Sea and the northern WNP might be a key factor behind the overestimated anomalous circulations.

  18. Reshocks, rarefactions, and the generalized Layzer model for hydrodynamic instabilities

    International Nuclear Information System (INIS)

    Mikaelian, K.O.

    2008-01-01

    We report numerical simulations and analytic modeling of shock tube experiments on Rayleigh-Taylor and Richtmyer-Meshkov instabilities. We examine single interfaces of the type A/B where the incident shock is initiated in A and the transmitted shock proceeds into B. Examples are He/air and air/He. In addition, we study finite-thickness or double-interface A/B/A configurations like air/SF 6 /air gas-curtain experiments. We first consider conventional shock tubes that have a 'fixed' boundary: A solid endwall which reflects the transmitted shock and reshocks the interface(s). Then we focus on new experiments with a 'free' boundary--a membrane disrupted mechanically or by the transmitted shock, sending back a rarefaction towards the interface(s). Complex acceleration histories are achieved, relevant for Inertial Confinement Fusion implosions. We compare our simulation results with a generalized Layzer model for two fluids with time-dependent densities, and derive a new freeze-out condition whereby accelerating and compressive forces cancel each other out. Except for the recently reported failures of the Layzer model, the generalized Layzer model and hydrocode simulations for reshocks and rarefactions agree well with each other, and remain to be verified experimentally

  19. A general scheme for training and optimization of the Grenander deformable template model

    DEFF Research Database (Denmark)

    Fisker, Rune; Schultz, Nette; Duta, N.

    2000-01-01

    parameters, a very fast general initialization algorithm and an adaptive likelihood model based on local means. The model parameters are trained by a combination of a 2D shape learning algorithm and a maximum likelihood based criteria. The fast initialization algorithm is based on a search approach using...... for applying the general deformable template model proposed by (Grenander et al., 1991) to a new problem with minimal manual interaction, beside supplying a training set, which can be done by a non-expert user. The main contributions compared to previous work are a supervised learning scheme for the model...

  20. General dosimetry model for internal contamination with radioisotopes

    International Nuclear Information System (INIS)

    Nino, L.

    1989-01-01

    Radiation dose by inner contamination with radioisotopes is not measured directly but evaluated by the application of mathematical models of fixation and elimination, taken into account biological activity of each organ with respect to the incorporated material. Models proposed by ICRP for the respiratory and gastrointestinal tracts (30) seems that they should not be applied independently because of the evident correlation between them. In this paper both models are integrated in a more general one with neither modification nor limitation of the starting models. It has been applied to some patients in the Instituto Nacional de Cancerologia, who received some I-131 dose via oral and results are quite similar to dose experimentally obtained via urine spectrograms. Based on this results the method was formalized and applied to professional exposed personnel of the medical staff at the same Institute; due to high doses found in some of the urine samples, probable I-131 air contamination could be supposed

  1. A General Model for Testing Mediation and Moderation Effects

    Science.gov (United States)

    MacKinnon, David P.

    2010-01-01

    This paper describes methods for testing mediation and moderation effects in a dataset, both together and separately. Investigations of this kind are especially valuable in prevention research to obtain information on the process by which a program achieves its effects and whether the program is effective for subgroups of individuals. A general model that simultaneously estimates mediation and moderation effects is presented, and the utility of combining the effects into a single model is described. Possible effects of interest in the model are explained, as are statistical methods to assess these effects. The methods are further illustrated in a hypothetical prevention program example. PMID:19003535

  2. A general phenomenological model for work function

    Science.gov (United States)

    Brodie, I.; Chou, S. H.; Yuan, H.

    2014-07-01

    A general phenomenological model is presented for obtaining the zero Kelvin work function of any crystal facet of metals and semiconductors, both clean and covered with a monolayer of electropositive atoms. It utilizes the known physical structure of the crystal and the Fermi energy of the two-dimensional electron gas assumed to form on the surface. A key parameter is the number of electrons donated to the surface electron gas per surface lattice site or adsorbed atom, which is taken to be an integer. Initially this is found by trial and later justified by examining the state of the valence electrons of the relevant atoms. In the case of adsorbed monolayers of electropositive atoms a satisfactory justification could not always be found, particularly for cesium, but a trial value always predicted work functions close to the experimental values. The model can also predict the variation of work function with temperature for clean crystal facets. The model is applied to various crystal faces of tungsten, aluminium, silver, and select metal oxides, and most demonstrate good fits compared to available experimental values.

  3. Water tracers in the general circulation model ECHAM

    International Nuclear Information System (INIS)

    Hoffmann, G.; Heimann, M.

    1993-01-01

    We have installed a water tracer model into the ECHAM General Circulation Model (GCM) parameterizing all fractionation processes of the stable water isotopes ( 1 H 2 18 O and 1 H 2 H 16 O). A five year simulation was performed under present day conditions. We focus on the applicability of such a water tracer model to obtain information about the quality of the hydrological cycle of the GCM. The analysis of the simulated 1 H 2 18 O composition of the precipitation indicates too weak fractionated precipitation over the Antarctic and Greenland ice sheets and too strong fractionated precipitation over large areas of the tropical and subtropical land masses. We can show that these deficiencies are connected with problems of model quantities such as the precipitation and the resolution of the orography. The linear relationship between temperature and the δ 18 O value, i.e. the Dansgaard slope, is reproduced quite well in the model. The slope is slightly too flat and the strong correlation between temperature and δ 18 O vanishes at very low temperatures compared to the observations. (orig.)

  4. A model for a career in a specialty of general surgery: One surgeon's opinion.

    Science.gov (United States)

    Ko, Bona; McHenry, Christopher R

    2018-01-01

    The integration of general and endocrine surgery was studied as a potential career model for fellowship trained general surgeons. Case logs collected from 1991-2016 and academic milestones were examined for a single general surgeon with a focused interest in endocrine surgery. Operations were categorized using CPT codes and the 2017 ACGME "Major Case Categories" and there frequencies were determined. 10,324 operations were performed on 8209 patients. 412.9 ± 84.9 operations were performed yearly including 279.3 ± 42.7 general and 133.7 ± 65.5 endocrine operations. A high-volume endocrine surgery practice and a rank of tenured professor were achieved by years 11 and 13, respectively. At year 25, the frequency of endocrine operations exceeded general surgery operations. Maintaining a foundation in broad-based general surgery with a specialty focus is a sustainable career model. Residents and fellows can use the model to help plan their careers with realistic expectations. Copyright © 2017. Published by Elsevier Inc.

  5. Comparative studies of the ITU-T prediction model for radiofrequency radiation emission and real time measurements at some selected mobile base transceiver stations in Accra, Ghana

    International Nuclear Information System (INIS)

    Obeng, S. O

    2014-07-01

    Recent developments in the electronics industry have led to the widespread use of radiofrequency (RF) devices in various areas including telecommunications. The increasing numbers of mobile base station (BTS) as well as their proximity to residential areas have been accompanied by public health concerns due to the radiation exposure. The main objective of this research was to compare and modify the ITU- T predictive model for radiofrequency radiation emission for BTS with measured data at some selected cell sites in Accra, Ghana. Theoretical and experimental assessment of radiofrequency exposures due to mobile base station antennas have been analysed. The maximum and minimum average power density measured from individual base station in the town was 1. 86µW/m2 and 0.00961µW/m2 respectively. The ITU-T Predictive model power density ranged between 6.40mW/m 2 and 0.344W/m 2 . Results obtained showed a variation between measured power density levels and the ITU-T predictive model. The ITU-T model power density levels decrease with increase in radial distance while real time measurements do not due to fluctuations during measurement. The ITU-T model overestimated the power density levels by a factor l0 5 as compared to real time measurements. The ITU-T model was modified to reduce the level of overestimation. The result showed that radiation intensity varies from one base station to another even at the same distance. Occupational exposure quotient ranged between 5.43E-10 and 1.89E-08 whilst general public exposure quotient ranged between 2.72E-09 and 9.44E-08. From the results, it shows that the RF exposure levels in Accra from these mobile phone base station antennas are below the permitted RF exposure limit to the general public recommended by the International Commission on Non-Ionizing Radiation Protection. (au)

  6. Two point function for a simple general relativistic quantum model

    OpenAIRE

    Colosi, Daniele

    2007-01-01

    We study the quantum theory of a simple general relativistic quantum model of two coupled harmonic oscillators and compute the two-point function following a proposal first introduced in the context of loop quantum gravity.

  7. Transferring and generalizing deep-learning-based neural encoding models across subjects.

    Science.gov (United States)

    Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming

    2018-08-01

    Recent studies have shown the value of using deep learning models for mapping and characterizing how the brain represents and organizes information for natural vision. However, modeling the relationship between deep learning models and the brain (or encoding models), requires measuring cortical responses to large and diverse sets of natural visual stimuli from single subjects. This requirement limits prior studies to few subjects, making it difficult to generalize findings across subjects or for a population. In this study, we developed new methods to transfer and generalize encoding models across subjects. To train encoding models specific to a target subject, the models trained for other subjects were used as the prior models and were refined efficiently using Bayesian inference with a limited amount of data from the target subject. To train encoding models for a population, the models were progressively trained and updated with incremental data from different subjects. For the proof of principle, we applied these methods to functional magnetic resonance imaging (fMRI) data from three subjects watching tens of hours of naturalistic videos, while a deep residual neural network driven by image recognition was used to model visual cortical processing. Results demonstrate that the methods developed herein provide an efficient and effective strategy to establish both subject-specific and population-wide predictive models of cortical representations of high-dimensional and hierarchical visual features. Copyright © 2018 Elsevier Inc. All rights reserved.

  8. Generalized memory associativity in a network model for the neuroses

    Science.gov (United States)

    Wedemann, Roseli S.; Donangelo, Raul; de Carvalho, Luís A. V.

    2009-03-01

    We review concepts introduced in earlier work, where a neural network mechanism describes some mental processes in neurotic pathology and psychoanalytic working-through, as associative memory functioning, according to the findings of Freud. We developed a complex network model, where modules corresponding to sensorial and symbolic memories interact, representing unconscious and conscious mental processes. The model illustrates Freud's idea that consciousness is related to symbolic and linguistic memory activity in the brain. We have introduced a generalization of the Boltzmann machine to model memory associativity. Model behavior is illustrated with simulations and some of its properties are analyzed with methods from statistical mechanics.

  9. Generalized fish life-cycle poplulation model and computer program

    International Nuclear Information System (INIS)

    DeAngelis, D.L.; Van Winkle, W.; Christensen, S.W.; Blum, S.R.; Kirk, B.L.; Rust, B.W.; Ross, C.

    1978-03-01

    A generalized fish life-cycle population model and computer program have been prepared to evaluate the long-term effect of changes in mortality in age class 0. The general question concerns what happens to a fishery when density-independent sources of mortality are introduced that act on age class 0, particularly entrainment and impingement at power plants. This paper discusses the model formulation and computer program, including sample results. The population model consists of a system of difference equations involving age-dependent fecundity and survival. The fecundity for each age class is assumed to be a function of both the fraction of females sexually mature and the weight of females as they enter each age class. Natural mortality for age classes 1 and older is assumed to be independent of population size. Fishing mortality is assumed to vary with the number and weight of fish available to the fishery. Age class 0 is divided into six life stages. The probability of survival for age class 0 is estimated considering both density-independent mortality (natural and power plant) and density-dependent mortality for each life stage. Two types of density-dependent mortality are included. These are cannibalism of each life stage by older age classes and intra-life-stage competition

  10. Transcriptional responses of zebrafish to complex metal mixtures in laboratory studies overestimates the responses observed with environmental water.

    Science.gov (United States)

    Pradhan, Ajay; Ivarsson, Per; Ragnvaldsson, Daniel; Berg, Håkan; Jass, Jana; Olsson, Per-Erik

    2017-04-15

    Metals released into the environment continue to be of concern for human health. However, risk assessment of metal exposure is often based on total metal levels and usually does not take bioavailability data, metal speciation or matrix effects into consideration. The continued development of biological endpoint analyses are therefore of high importance for improved eco-toxicological risk analyses. While there is an on-going debate concerning synergistic or additive effects of low-level mixed exposures there is little environmental data confirming the observations obtained from laboratory experiments. In the present study we utilized qRT-PCR analysis to identify key metal response genes to develop a method for biomonitoring and risk-assessment of metal pollution. The gene expression patterns were determined for juvenile zebrafish exposed to waters from sites down-stream of a closed mining operation. Genes representing different physiological processes including stress response, inflammation, apoptosis, drug metabolism, ion channels and receptors, and genotoxicity were analyzed. The gene expression patterns of zebrafish exposed to laboratory prepared metal mixes were compared to the patterns obtained with fish exposed to the environmental samples with the same metal composition and concentrations. Exposure to environmental samples resulted in fewer alterations in gene expression compared to laboratory mixes. A biotic ligand model (BLM) was used to approximate the bioavailability of the metals in the environmental setting. However, the BLM results were not in agreement with the experimental data, suggesting that the BLM may be overestimating the risk in the environment. The present study therefore supports the inclusion of site-specific biological analyses to complement the present chemical based assays used for environmental risk-assessment. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. On the relation between cost and service models for general inventory systems

    NARCIS (Netherlands)

    Houtum, van G.J.J.A.N.; Zijm, W.H.M.

    2000-01-01

    In this paper, we present a systematic overview of possible relations between cost and service models for fairly general single- and multi-stage inventory systems. In particular, we relate various types of penalty costs in pure cost models to equivalent types of service measures in service models.

  12. A general science-based framework for dynamical spatio-temporal models

    Science.gov (United States)

    Wikle, C.K.; Hooten, M.B.

    2010-01-01

    Spatio-temporal statistical models are increasingly being used across a wide variety of scientific disciplines to describe and predict spatially-explicit processes that evolve over time. Correspondingly, in recent years there has been a significant amount of research on new statistical methodology for such models. Although descriptive models that approach the problem from the second-order (covariance) perspective are important, and innovative work is being done in this regard, many real-world processes are dynamic, and it can be more efficient in some cases to characterize the associated spatio-temporal dependence by the use of dynamical models. The chief challenge with the specification of such dynamical models has been related to the curse of dimensionality. Even in fairly simple linear, first-order Markovian, Gaussian error settings, statistical models are often over parameterized. Hierarchical models have proven invaluable in their ability to deal to some extent with this issue by allowing dependency among groups of parameters. In addition, this framework has allowed for the specification of science based parameterizations (and associated prior distributions) in which classes of deterministic dynamical models (e. g., partial differential equations (PDEs), integro-difference equations (IDEs), matrix models, and agent-based models) are used to guide specific parameterizations. Most of the focus for the application of such models in statistics has been in the linear case. The problems mentioned above with linear dynamic models are compounded in the case of nonlinear models. In this sense, the need for coherent and sensible model parameterizations is not only helpful, it is essential. Here, we present an overview of a framework for incorporating scientific information to motivate dynamical spatio-temporal models. First, we illustrate the methodology with the linear case. We then develop a general nonlinear spatio-temporal framework that we call general quadratic

  13. Zeros of the partition function for some generalized Ising models

    International Nuclear Information System (INIS)

    Dunlop, F.

    1981-01-01

    The author considers generalized Ising Models with two and four body interactions in a complex external field h such that Re h>=mod(Im h) + C, where C is an explicit function of the interaction parameters. The partition function Z(h) is then shown to satisfy mod(Z(h))>=Z(c), so that the pressure is analytic in h inside the given region. The method is applied to specific examples: the gauge invariant Ising Model, and the Widom Rowlinson model on the lattice. (Auth.)

  14. A Graphical User Interface to Generalized Linear Models in MATLAB

    Directory of Open Access Journals (Sweden)

    Peter Dunn

    1999-07-01

    Full Text Available Generalized linear models unite a wide variety of statistical models in a common theoretical framework. This paper discusses GLMLAB-software that enables such models to be fitted in the popular mathematical package MATLAB. It provides a graphical user interface to the powerful MATLAB computational engine to produce a program that is easy to use but with many features, including offsets, prior weights and user-defined distributions and link functions. MATLAB's graphical capacities are also utilized in providing a number of simple residual diagnostic plots.

  15. Testing for constant nonparametric effects in general semiparametric regression models with interactions

    KAUST Repository

    Wei, Jiawei; Carroll, Raymond J.; Maity, Arnab

    2011-01-01

    We consider the problem of testing for a constant nonparametric effect in a general semi-parametric regression model when there is the potential for interaction between the parametrically and nonparametrically modeled variables. The work

  16. Consensus-based training and assessment model for general surgery.

    Science.gov (United States)

    Szasz, P; Louridas, M; de Montbrun, S; Harris, K A; Grantcharov, T P

    2016-05-01

    Surgical education is becoming competency-based with the implementation of in-training milestones. Training guidelines should reflect these changes and determine the specific procedures for such milestone assessments. This study aimed to develop a consensus view regarding operative procedures and tasks considered appropriate for junior and senior trainees, and the procedures that can be used as technical milestone assessments for trainee progression in general surgery. A Delphi process was followed where questionnaires were distributed to all 17 Canadian general surgery programme directors. Items were ranked on a 5-point Likert scale, with consensus defined as Cronbach's α of at least 0·70. Items rated 4 or above on the 5-point Likert scale by 80 per cent of the programme directors were included in the models. Two Delphi rounds were completed, with 14 programme directors taking part in round one and 11 in round two. The overall consensus was high (Cronbach's α = 0·98). The training model included 101 unique procedures and tasks, 24 specific to junior trainees, 68 specific to senior trainees, and nine appropriate to all. The assessment model included four procedures. A system of operative procedures and tasks for junior- and senior-level trainees has been developed along with an assessment model for trainee progression. These can be used as milestones in competency-based assessments. © 2016 BJS Society Ltd Published by John Wiley & Sons Ltd.

  17. Preliminary evaluation of the Community Multiscale Air Quality model for 2002 over the Southeastern United States.

    Science.gov (United States)

    Morris, Ralph E; McNally, Dennis E; Tesche, Thomas W; Tonnesen, Gail; Boylan, James W; Brewer, Patricia

    2005-11-01

    The Visibility Improvement State and Tribal Association of the Southeast (VISTAS) is one of five Regional Planning Organizations that is charged with the management of haze, visibility, and other regional air quality issues in the United States. The VISTAS Phase I work effort modeled three episodes (January 2002, July 1999, and July 2001) to identify the optimal model configuration(s) to be used for the 2002 annual modeling in Phase II. Using model configurations recommended in the Phase I analysis, 2002 annual meteorological (Mesoscale Meterological Model [MM5]), emissions (Sparse Matrix Operator Kernal Emissions [SMOKE]), and air quality (Community Multiscale Air Quality [CMAQ]) simulations were performed on a 36-km grid covering the continental United States and a 12-km grid covering the Eastern United States. Model estimates were then compared against observations. This paper presents the results of the preliminary CMAQ model performance evaluation for the initial 2002 annual base case simulation. Model performance is presented for the Eastern United States using speciated fine particle concentration and wet deposition measurements from several monitoring networks. Initial results indicate fairly good performance for sulfate with fractional bias values generally within +/-20%. Nitrate is overestimated in the winter by approximately +50% and underestimated in the summer by more than -100%. Organic carbon exhibits a large summer underestimation bias of approximately -100% with much improved performance seen in the winter with a bias near zero. Performance for elemental carbon is reasonable with fractional bias values within +/- 40%. Other fine particulate (soil) and coarse particular matter exhibit large (80-150%) overestimation in the winter but improved performance in the summer. The preliminary 2002 CMAQ runs identified several areas of enhancements to improve model performance, including revised temporal allocation factors for ammonia emissions to improve

  18. A generalized business cycle model with delays in gross product and capital stock

    International Nuclear Information System (INIS)

    Hattaf, Khalid; Riad, Driss; Yousfi, Noura

    2017-01-01

    Highlights: • A generalized business cycle model is proposed and rigorously analyzed. • Well-posedness of the model and local stability of the economic equilibrium are investigated. • Direction of the Hopf bifurcation and stability of the bifurcating periodic solutions are determined. • A special case and some numerical simulations are presented. - Abstract: In this work, we propose a delayed business cycle model with general investment function. The time delays are introduced into gross product and capital stock, respectively. We first prove that the model is mathematically and economically well posed. In addition, the stability of the economic equilibrium and the existence of Hopf bifurcation are investigated. Our main results show that both time delays can cause the macro-economic system to fluctuate and the economic equilibrium to lose or gain its stability. Moreover, the direction of the Hopf bifurcation and the stability of the bifurcating periodic solutions are determined by means of the normal form method and center manifold theory. Furthermore, the models and results presented in many previous studies are improved and generalized.

  19. Singular solitons of generalized Camassa-Holm models

    International Nuclear Information System (INIS)

    Tian Lixin; Sun Lu

    2007-01-01

    Two generalizations of the Camassa-Holm system associated with the singular analysis are proposed for Painleve integrability properties and the extensions of already known analytic solitons. A remarkable feature of the physical model is that it has peakon solution which has peak form. An alternative WTC test which allowed the identifying of such models directly if formulated in terms of inserting a formed ansatz into these models. For the two models have Painleve property, Painleve-Baecklund systems can be constructed through the expansion of solitons about the singularity manifold. By the implementations of Maple, plentiful new type solitonic structures and some kink waves, which are affected by the variation of energy, are explored. If the energy is infinite in finite time, there will be a collapse in soliton systems by direct numerical simulations. Particularly, there are two collapses coexisting in our regular solitons, which occurred around its central regions. Simulation shows that in the bottom of periodic waves arises the non-zero parts of compactons and anti-compactons. We also get floating solitary waves whose amplitude is infinite. In contrary to which a finite-amplitude blow-up soliton is obtained. Periodic blow-ups are found too. Special kinks which have periodic cuspons are derived

  20. Transmittivity and wavefunctions in one-dimensional generalized Aubry models

    International Nuclear Information System (INIS)

    Basu, C.; Mookerjee, A.; Sen, A.K.; Thakur, P.K.

    1990-07-01

    We use the vector recursion method of Haydock to obtain the transmittance of a class of generalized Aubry models in one-dimension. We also study the phase change of the wavefunctions as they travel through the chain and also the behaviour of the conductance with changes in size. (author). 10 refs, 9 figs

  1. Implementation of perturbed-chain statistical associating fluid theory (PC-SAFT), generalized (G)SAFT+cubic, and cubic-plus-association (CPA) for modeling thermophysical properties of selected 1-alkyl-3-methylimidazolium ionic liquids in a wide pressure range.

    Science.gov (United States)

    Polishuk, Ilya

    2013-03-14

    This study is the first comparative investigation of predicting the isochoric and the isobaric heat capacities, the isothermal and the isentropic compressibilities, the isobaric thermal expansibilities, the thermal pressure coefficients, and the sound velocities of ionic liquids by statistical associating fluid theory (SAFT) equation of state (EoS) models and cubic-plus-association (CPA). It is demonstrated that, taking into account the high uncertainty of the literature data (excluding sound velocities), the generalized for heavy compounds version of SAFT+Cubic (GSAFT+Cubic) appears as a robust estimator of the auxiliary thermodynamic properties under consideration. In the case of the ionic liquids the performance of PC-SAFT seems to be less accurate in comparison to ordinary compounds. In particular, PC-SAFT substantially overestimates heat capacities and underestimates the temperature and pressure dependencies of sound velocities and compressibilities. An undesired phenomenon of predicting high fictitious critical temperatures of ionic liquids by PC-SAFT should be noticed as well. CPA is the less accurate estimator of the liquid phase properties, but it is advantageous in modeling vapor pressures and vaporization enthalpies of ionic liquids. At the same time, the preliminary results indicate that the inaccuracies in predicting the deep vacuum vapor pressures of ionic liquids do not influence modeling of phase equilibria in their mixtures at much higher pressures.

  2. Qualification of a Plant Disease Simulation Model: Performance of the LATEBLIGHT Model Across a Broad Range of Environments.

    Science.gov (United States)

    Andrade-Piedra, Jorge L; Forbes, Gregory A; Shtienberg, Dani; Grünwald, Niklaus J; Chacón, María G; Taipe, Marco V; Hijmans, Robert J; Fry, William E

    2005-12-01

    ABSTRACT The concept of model qualification, i.e., discovering the domain over which a validated model may be properly used, was illustrated with LATEBLIGHT, a mathematical model that simulates the effect of weather, host growth and resistance, and fungicide use on asexual development and growth of Phytophthora infestans on potato foliage. Late blight epidemics from Ecuador, Mexico, Israel, and the United States involving 13 potato cultivars (32 epidemics in total) were compared with model predictions using graphical and statistical tests. Fungicides were not applied in any of the epidemics. For the simulations, a host resistance level was assigned to each cultivar based on general categories reported by local investigators. For eight cultivars, the model predictions fit the observed data. For four cultivars, the model predictions overestimated disease, likely due to inaccurate estimates of host resistance. Model predictions were inconsistent for one cultivar and for one location. It was concluded that the domain of applicability of LATEBLIGHT can be extended from the range of conditions in Peru for which it has been previously validated to those observed in this study. A sensitivity analysis showed that, within the range of values observed empirically, LATEBLIGHT is more sensitive to changes in variables related to initial inoculum and to weather than to changes in variables relating to host resistance.

  3. Efficient semiparametric estimation in generalized partially linear additive models for longitudinal/clustered data

    KAUST Repository

    Cheng, Guang

    2014-02-01

    We consider efficient estimation of the Euclidean parameters in a generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model in consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical processes tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.

  4. A stratiform cloud parameterization for General Circulation Models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in General Circulation Models (GCMs) is widely recognized as a major limitation in the application of these models to predictions of global climate change. The purpose of this project is to develop a paxameterization for stratiform clouds in GCMs that expresses stratiform clouds in terms of bulk microphysical properties and their subgrid variability. In this parameterization, precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species

  5. Optimisation of a parallel ocean general circulation model

    Directory of Open Access Journals (Sweden)

    M. I. Beare

    1997-10-01

    Full Text Available This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  6. Optimisation of a parallel ocean general circulation model

    Directory of Open Access Journals (Sweden)

    M. I. Beare

    Full Text Available This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  7. Toward a General Research Process for Using Dubin's Theory Building Model

    Science.gov (United States)

    Holton, Elwood F.; Lowe, Janis S.

    2007-01-01

    Dubin developed a widely used methodology for theory building, which describes the components of the theory building process. Unfortunately, he does not define a research process for implementing his theory building model. This article proposes a seven-step general research process for implementing Dubin's theory building model. An example of a…

  8. General Form of Model-Free Control Law and Convergence Analyzing

    Directory of Open Access Journals (Sweden)

    Xiuying Li

    2012-01-01

    Full Text Available The general form of model-free control law is introduced, and its convergence is analyzed. Firstly, the necessity to improve the basic form of model free control law is explained, and the functional combination method as the approach of improvement is presented. Then, a series of sufficient conditions of convergence are given. The analysis denotes that these conditions can be satisfied easily in the engineering practice.

  9. A general-purpose process modelling framework for marine energy systems

    International Nuclear Information System (INIS)

    Dimopoulos, George G.; Georgopoulou, Chariklia A.; Stefanatos, Iason C.; Zymaris, Alexandros S.; Kakalis, Nikolaos M.P.

    2014-01-01

    Highlights: • Process modelling techniques applied in marine engineering. • Systems engineering approaches to manage the complexity of modern ship machinery. • General purpose modelling framework called COSSMOS. • Mathematical modelling of conservation equations and related chemical – transport phenomena. • Generic library of ship machinery component models. - Abstract: High fuel prices, environmental regulations and current shipping market conditions impose ships to operate in a more efficient and greener way. These drivers lead to the introduction of new technologies, fuels, and operations, increasing the complexity of modern ship energy systems. As a means to manage this complexity, in this paper we present the introduction of systems engineering methodologies in marine engineering via the development of a general-purpose process modelling framework for ships named as DNV COSSMOS. Shifting the focus from components – the standard approach in shipping- to systems, widens the space for optimal design and operation solutions. The associated computer implementation of COSSMOS is a platform that models, simulates and optimises integrated marine energy systems with respect to energy efficiency, emissions, safety/reliability and costs, under both steady-state and dynamic conditions. DNV COSSMOS can be used in assessment and optimisation of design and operation problems in existing vessels, new builds as well as new technologies. The main features and our modelling approach are presented and key capabilities are illustrated via two studies on the thermo-economic design and operation optimisation of a combined cycle system for large bulk carriers, and the transient operation simulation of an electric marine propulsion system

  10. Generalized Calogero-Sutherland systems from many-matrix models

    International Nuclear Information System (INIS)

    Polychronakos, Alexios P.

    1999-01-01

    We construct generalizations of the Calogero-Sutherland-Moser system by appropriately reducing a model involving many unitary matrices. The resulting systems consist of particles on the circle with internal degrees of freedom, coupled through modifications of the inverse-square potential. The coupling involves SU(M) non-invariant (anti) ferromagnetic interactions of the internal degrees of freedom. The systems are shown to be integrable and the spectrum and wavefunctions of the quantum version are derived

  11. Ocean bio-geophysical modeling using mixed layer-isopycnal general circulation model coupled with photosynthesis process

    Digital Repository Service at National Institute of Oceanography (India)

    Nakamoto, S.; Saito, H.; Muneyama, K.; Sato, T.; PrasannaKumar, S.; Kumar, A.; Frouin, R.

    -chemical system that supports steady carbon circulation in geological time scale in the world ocean using Mixed Layer-Isopycnal ocean General Circulation model with remotely sensed Coastal Zone Color Scanner (CZCS) chlorophyll pigment concentration....

  12. Generalized math model for simulation of high-altitude balloon systems

    Science.gov (United States)

    Nigro, N. J.; Elkouh, A. F.; Hinton, D. E.; Yang, J. K.

    1985-01-01

    Balloon systems have proved to be a cost-effective means for conducting research experiments (e.g., infrared astronomy) in the earth's atmosphere. The purpose of this paper is to present a generalized mathematical model that can be used to simulate the motion of these systems once they have attained float altitude. The resulting form of the model is such that the pendulation and spin motions of the system are uncoupled and can be analyzed independently. The model is evaluated by comparing the simulation results with data obtained from an actual balloon system flown by NASA.

  13. Validation of lower tropospheric carbon monoxide inferred from MOZART model simulation over India

    Science.gov (United States)

    Yarragunta, Y.; Srivastava, S.; Mitra, D.

    2017-02-01

    In the present study, MOZART-4 (Model for Ozone and Related chemical Tracers-Version-4) simulation has been made from 2003 to 2007 and compared with satellite and in-situ observations with a specific focus on Indian subcontinent to illustrate the capabilities of MOZART-4 model. The model simulated CO have been compared with latest version (version-6) of MOPITT (Measurement Of Pollution In The Troposphere) carbon monoxide (CO) retrievals at 900, 800 and 700 hPa. Model reproduces major features present in satellite observations. However model significantly overestimates CO over the entire Indian region at 900 hPa and moderately overestimates at 800 hPa and 700 hPa. The frequency distribution of all simulated data points with respect to MOZART error shows maximum in the error range of 10-20% at all pressure levels. Over total Indian landmass, the percentage of gridded CO data that are being overestimated in the range of 0-30% at 900 hPa, 800 hPa and 700 hPa are 58%, 62% and 66% respectively. The study reflects very good correlation between two datasets over Central India (CI) and Southern India (SI). The coefficient of determination (r2) is found to be 0.68-0.78 and 0.70-0.78 over the CI and SI respectively. The weak correlation is evident over Northern India (NI) with r2 values of 0.1-0.3. Over Eastern India (EI), Good correlation at 800 hPa (r2 = 0.72) and 700 hPa (r2 = 0.66) whereas moderately weak correlation at 900 hPa (r2 = 0.48) has been observed. In contrast, Over Western India (WI), strong correlation is evident at 900 hPa (r2 = 0.64) and moderately weak association is found to be present at 800 hPa and 700 hPa. Model fairly reproduces seasonal cycle of CO in the lower troposphere over most of the Indian regions. However, during June to December, model shows overestimation over NI. The magnitude of overestimation is increasing linearly from 900 hPa to 700 hPa level. During April-June months, model results are coinciding with observed CO concentrations over SI

  14. Thermospheric tides simulated by the national center for atmospheric research thermosphere-ionosphere general circulation model at equinox

    International Nuclear Information System (INIS)

    Fesen, C.G.; Roble, R.G.; Ridley, E.C.

    1993-01-01

    The authors use the National Center for Atmospheric Research (NCAR) thermosphere/ionosphere general circulation model (TIGCM) to model tides and dynamics in the thermosphere. This model incorporates the latest advances in the thermosphere general circulation model. Model results emphasized the 70 degree W longitude region to overlap a series of incoherent radar scatter installations. Data and the model are available on data bases. The results of this theoretical modeling are compared with available data, and with prediction of more empirical models. In general there is broad agreement within the comparisons

  15. Reshocks, rarefactions, and the generalized Layzer model for hydrodynamic instabilities

    Energy Technology Data Exchange (ETDEWEB)

    Mikaelian, K O

    2008-06-10

    We report numerical simulations and analytic modeling of shock tube experiments on Rayleigh-Taylor and Richtmyer-Meshkov instabilities. We examine single interfaces of the type A/B where the incident shock is initiated in A and the transmitted shock proceeds into B. Examples are He/air and air/He. In addition, we study finite-thickness or double-interface A/B/A configurations like air/SF{sub 6}/air gas-curtain experiments. We first consider conventional shock tubes that have a 'fixed' boundary: A solid endwall which reflects the transmitted shock and reshocks the interface(s). Then we focus on new experiments with a 'free' boundary--a membrane disrupted mechanically or by the transmitted shock, sending back a rarefaction towards the interface(s). Complex acceleration histories are achieved, relevant for Inertial Confinement Fusion implosions. We compare our simulation results with a generalized Layzer model for two fluids with time-dependent densities, and derive a new freeze-out condition whereby accelerating and compressive forces cancel each other out. Except for the recently reported failures of the Layzer model, the generalized Layzer model and hydrocode simulations for reshocks and rarefactions agree well with each other, and remain to be verified experimentally.

  16. Development of an inorganic and organic aerosol model (CHIMERE 2017β v1.0): seasonal and spatial evaluation over Europe

    Science.gov (United States)

    Couvidat, Florian; Bessagnet, Bertrand; Garcia-Vivanco, Marta; Real, Elsa; Menut, Laurent; Colette, Augustin

    2018-01-01

    A new aerosol module was developed and integrated in the air quality model CHIMERE. Developments include the use of the Model of Emissions and Gases and Aerosols from Nature (MEGAN) 2.1 for biogenic emissions, the implementation of the inorganic thermodynamic model ISORROPIA 2.1, revision of wet deposition processes and of the algorithms of condensation/evaporation and coagulation and the implementation of the secondary organic aerosol (SOA) mechanism H2O and the thermodynamic model SOAP. Concentrations of particles over Europe were simulated by the model for the year 2013. Model concentrations were compared to the European Monitoring and Evaluation Programme (EMEP) observations and other observations available in the EBAS database to evaluate the performance of the model. Performances were determined for several components of particles (sea salt, sulfate, ammonium, nitrate, organic aerosol) with a seasonal and regional analysis of results. The model gives satisfactory performance in general. For sea salt, the model succeeds in reproducing the seasonal evolution of concentrations for western and central Europe. For sulfate, except for an overestimation of sulfate in northern Europe, modeled concentrations are close to observations and the model succeeds in reproducing the seasonal evolution of concentrations. For organic aerosol, the model reproduces with satisfactory results concentrations for stations with strong modeled biogenic SOA concentrations. However, the model strongly overestimates ammonium nitrate concentrations during late autumn (possibly due to problems in the temporal evolution of emissions) and strongly underestimates summer organic aerosol concentrations over most of the stations (especially in the northern half of Europe). This underestimation could be due to a lack of anthropogenic SOA or biogenic emissions in northern Europe. A list of recommended tests and developments to improve the model is also given.

  17. Simulation of the Low-Level-Jet by general circulation models

    Energy Technology Data Exchange (ETDEWEB)

    Ghan, S.J. [Pacific Northwest National Lab., Richland, WA (United States)

    1996-04-01

    To what degree is the low-level jet climatology and it`s impact on clouds and precipitation being captured by current general circulation models? It is hypothesised that a need for a pramaterization exists. This paper describes this parameterization need.

  18. A General Nonlinear Fluid Model for Reacting Plasma-Neutral Mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Meier, E T; Shumlak, U

    2012-04-06

    A generalized, computationally tractable fluid model for capturing the effects of neutral particles in plasmas is derived. The model derivation begins with Boltzmann equations for singly charged ions, electrons, and a single neutral species. Electron-impact ionization, radiative recombination, and resonant charge exchange reactions are included. Moments of the reaction collision terms are detailed. Moments of the Boltzmann equations for electron, ion, and neutral species are combined to yield a two-component plasma-neutral fluid model. Separate density, momentum, and energy equations, each including reaction transfer terms, are produced for the plasma and neutral equations. The required closures for the plasma-neutral model are discussed.

  19. Assessment of the NeQuick-2 and IRI-Plas 2017 models using global and long-term GNSS measurements

    Science.gov (United States)

    Okoh, Daniel; Onwuneme, Sylvester; Seemala, Gopi; Jin, Shuanggen; Rabiu, Babatunde; Nava, Bruno; Uwamahoro, Jean

    2018-05-01

    The global ionospheric models NeQuick and IRI-Plas have been widely used. However, their uncertainties are not clear at global scale and long term. In this paper, a climatologic assessment of the NeQuick and IRI-Plas models is investigated at a global scale from global navigation satellite system (GNSS) observations. GNSS observations from 36 globally distributed locations were used to evaluate performances of both NeQuick-2 and IRI-Plas 2017 models from January 2006 to July 2017, covering more than the 11-year period of a solar cycle. An hourly interval of diurnal profiles computed on monthly basis was used to measure deviations of the model estimations from corresponding GNSS VTEC observations. Results show that both models are fairly accurate in trends with the GNSS measurements. The NeQuick predictions were generally better than the IRI-Plas predictions in most of the stations and the times. The mean annual prediction errors for the IRI-Plas model typically varied from about 3 TECU at the high latitude stations to about 12 TECU at the low latitude stations, while for the NeQuick the values are respectively about 2-7 TECU. Out of a total 4497 months in which GNSS data were available for all the stations put together for the entire period covered in this work, the NeQuick model was observed to perform better in about 83% of the months while the IRI-Plas performed better in about 17% of the months. The IRI-Plas generally performed better than the NeQuick at certain locations (e.g. DAV1, KERG, and ADIS). For both models, the most of the deviations were witnessed during local daytimes and during seasons that receive maximum solar radiation for various locations. In particular, the IRI-Plas model predictions were improved during periods of increased solar activity at the low latitude stations. The IRI-Plas model overestimates the GNSS VTEC values, except during high solar activity years at some high latitude stations. The NeQuick underestimates the TEC values during

  20. Advances in the physics modelling of CANDU liquid injection shutdown systems

    International Nuclear Information System (INIS)

    Smith, H.J.; Robinson, R.; Guertin, C.

    1993-01-01

    The physics modelling of liquid poison injection shutdown systems in CANDU reactors accounts for the major phenomena taking place by combining the effects of both moderator hydraulics and neutronics. This paper describes the advances in the physics modelling of liquid poison injection shutdown systems (LISS), discusses some of the effects of the more realistic modelling, and briefly describes the automation methodology. Modifications to the LISS methodology have improved the realism of the physics modelling, showing that the previous methodology significantly overestimated energy deposition during the simulation of a loss of coolant transient in Bruce A, by overestimating the reactivity transient. Furthermore, the automation of the modelling process has reduced the time needed to carry put LISS evaluations to the same level as required for shutoff-rod evaluations, while at the same time minimizing the amount of input, and providing a method for tracing all files used, thus adding a level of quality assurance to the calculation. 5 refs., 11 figs

  1. General mirror pairs for gauged linear sigma models

    Energy Technology Data Exchange (ETDEWEB)

    Aspinwall, Paul S.; Plesser, M. Ronen [Departments of Mathematics and Physics, Duke University,Box 90320, Durham, NC 27708-0320 (United States)

    2015-11-05

    We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.

  2. General mirror pairs for gauged linear sigma models

    International Nuclear Information System (INIS)

    Aspinwall, Paul S.; Plesser, M. Ronen

    2015-01-01

    We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.

  3. A generalized development model for testing GPS user equipment

    Science.gov (United States)

    Hemesath, N.

    1978-01-01

    The generalized development model (GDM) program, which was intended to establish how well GPS user equipment can perform under a combination of jamming and dynamics, is described. The systems design and the characteristics of the GDM are discussed. The performance aspects of the GDM are listed and the application of the GDM to civil aviation is examined.

  4. A general circulation model (GCM) parameterization of Pinatubo aerosols

    Energy Technology Data Exchange (ETDEWEB)

    Lacis, A.A.; Carlson, B.E.; Mishchenko, M.I. [NASA Goddard Institute for Space Studies, New York, NY (United States)

    1996-04-01

    The June 1991 volcanic eruption of Mt. Pinatubo is the largest and best documented global climate forcing experiment in recorded history. The time development and geographical dispersion of the aerosol has been closely monitored and sampled. Based on preliminary estimates of the Pinatubo aerosol loading, general circulation model predictions of the impact on global climate have been made.

  5. A general evolving model for growing bipartite networks

    International Nuclear Information System (INIS)

    Tian, Lixin; He, Yinghuan; Liu, Haijun; Du, Ruijin

    2012-01-01

    In this Letter, we propose and study an inner evolving bipartite network model. Significantly, we prove that the degree distribution of two different kinds of nodes both obey power-law form with adjustable exponents. Furthermore, the joint degree distribution of any two nodes for bipartite networks model is calculated analytically by the mean-field method. The result displays that such bipartite networks are nearly uncorrelated networks, which is different from one-mode networks. Numerical simulations and empirical results are given to verify the theoretical results. -- Highlights: ► We proposed a general evolving bipartite network model which was based on priority connection, reconnection and breaking edges. ► We prove that the degree distribution of two different kinds of nodes both obey power-law form with adjustable exponents. ► The joint degree distribution of any two nodes for bipartite networks model is calculated analytically by the mean-field method. ► The result displays that such bipartite networks are nearly uncorrelated networks, which is different from one-mode networks.

  6. Study of the properties of general relativistic Kink model (GRK)

    International Nuclear Information System (INIS)

    Oliveira, L.C.S. de.

    1980-01-01

    The stability of the general relativistic Kink model (GRK) is studied. It is shown that the model is stable at least against radial perturbations. Furthermore, the Dirac field in the background of the geometry generated by the GRK is studied. It is verified that the GRK localizes the Dirac field, around the region of largest curvature. The physical interpretation of this system (the Dirac field in the GRK background) is discussed. (Author) [pt

  7. Determining Rheological Parameters of Generalized Yield-Power-Law Fluid Model

    Directory of Open Access Journals (Sweden)

    Stryczek Stanislaw

    2004-09-01

    Full Text Available The principles of determining rheological parameters of drilling muds described by a generalized yield-power-law are presented in the paper. Functions between tangent stresses and shear rate are given. The conditions of laboratory measurements of rheological parameters of generalized yield-power-law fluids are described and necessary mathematical relations for rheological model parameters given. With the block diagrams, the methodics of numerical solution of these relations has been presented. Rheological parameters of an exemplary drilling mud have been calculated with the use of this numerical program.

  8. A General Framework for Portfolio Theory. Part I: theory and various models

    OpenAIRE

    Maier-Paape, Stanislaus; Zhu, Qiji Jim

    2017-01-01

    Utility and risk are two often competing measurements on the investment success. We show that efficient trade-off between these two measurements for investment portfolios happens, in general, on a convex curve in the two dimensional space of utility and risk. This is a rather general pattern. The modern portfolio theory of Markowitz [H. Markowitz, Portfolio Selection, 1959] and its natural generalization, the capital market pricing model, [W. F. Sharpe, Mutual fund performance , 1966] are spe...

  9. On the asymptotic ergodic capacity of FSO links with generalized pointing error model

    KAUST Repository

    Al-Quwaiee, Hessa

    2015-09-11

    Free-space optical (FSO) communication systems are negatively affected by two physical phenomenon, namely, scintillation due to atmospheric turbulence and pointing errors. To quantize the effect of these two factors on FSO system performance, we need an effective mathematical model for them. Scintillations are typically modeled by the log-normal and Gamma-Gamma distributions for weak and strong turbulence conditions, respectively. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive the asymptotic ergodic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. © 2015 IEEE.

  10. MODELING OF INNOVATION EDUCATIONAL ENVIRONMENT OF GENERAL EDUCATIONAL INSTITUTION: THE SCIENTIFIC APPROACHES

    OpenAIRE

    Anzhelika D. Tsymbalaru

    2010-01-01

    In the paper the scientific approaches to modeling of innovation educational environment of a general educational institution – system (analysis of object, process and result of modeling as system objects), activity (organizational and psychological structure) and synergetic (aspects and principles).

  11. Generalized kinetic model of reduction of molecular oxidant by metal containing redox

    International Nuclear Information System (INIS)

    Kravchenko, T.A.

    1986-01-01

    Present work is devoted to kinetics of reduction of molecular oxidant by metal containing redox. Constructed generalized kinetic model of redox process in the system solid redox - reagent solution allows to perform the general theoretical approach to research and to obtain new results on kinetics and mechanism of interaction of redox with oxidants.

  12. Teaching Generalized Imitation Skills to a Preschooler with Autism Using Video Modeling

    Science.gov (United States)

    Kleeberger, Vickie; Mirenda, Pat

    2010-01-01

    This study examined the effectiveness of video modeling to teach a preschooler with autism to imitate previously mastered and not mastered actions during song and toy play activities. A general case approach was used to examine the instructional universe of preschool songs and select exemplars that were most likely to facilitate generalization.…

  13. A general method for the inclusion of radiation chemistry in astrochemical models.

    Science.gov (United States)

    Shingledecker, Christopher N; Herbst, Eric

    2018-02-21

    In this paper, we propose a general formalism that allows for the estimation of radiolysis decomposition pathways and rate coefficients suitable for use in astrochemical models, with a focus on solid phase chemistry. Such a theory can help increase the connection between laboratory astrophysics experiments and astrochemical models by providing a means for modelers to incorporate radiation chemistry into chemical networks. The general method proposed here is targeted particularly at the majority of species now included in chemical networks for which little radiochemical data exist; however, the method can also be used as a starting point for considering better studied species. We here apply our theory to the irradiation of H 2 O ice and compare the results with previous experimental data.

  14. A stratiform cloud parameterization for general circulation models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in general circulation models (GCMs) is widely recognized as a major limitation in applying these models to predictions of global climate change. The purpose of this project is to develop in GCMs a stratiform cloud parameterization that expresses clouds in terms of bulk microphysical properties and their subgrid variability. Various clouds variables and their interactions are summarized. Precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species

  15. Flexible building stock modelling with array-programming

    DEFF Research Database (Denmark)

    Brøgger, Morten; Wittchen, Kim Bjarne

    2017-01-01

    Many building stock models employ archetype-buildings in order to capture the essential characteristics of a diverse building stock. However, these models often require multiple archetypes, which make them inflexible. This paper proposes an array-programming based model, which calculates the heat...... tend to overestimate potential energy-savings, if we do not consider these discrepancies. The proposed model makes it possible to compute and visualize potential energy-savings in a flexible and transparent way....

  16. An applied general equilibrium model for Dutch agribusiness policy analysis

    NARCIS (Netherlands)

    Peerlings, J.

    1993-01-01

    The purpose of this thesis was to develop a basic static applied general equilibrium (AGE) model to analyse the effects of agricultural policy changes on Dutch agribusiness. In particular the effects on inter-industry transactions, factor demand, income, and trade are of

  17. On Regularity Criteria for the Two-Dimensional Generalized Liquid Crystal Model

    Directory of Open Access Journals (Sweden)

    Yanan Wang

    2014-01-01

    Full Text Available We establish the regularity criteria for the two-dimensional generalized liquid crystal model. It turns out that the global existence results satisfy our regularity criteria naturally.

  18. A general equilibrium model of ecosystem services in a river basin

    Science.gov (United States)

    Travis Warziniack

    2014-01-01

    This study builds a general equilibrium model of ecosystem services, with sectors of the economy competing for use of the environment. The model recognizes that production processes in the real world require a combination of natural and human inputs, and understanding the value of these inputs and their competing uses is necessary when considering policies of resource...

  19. Parametrically Guided Generalized Additive Models with Application to Mergers and Acquisitions Data.

    Science.gov (United States)

    Fan, Jianqing; Maity, Arnab; Wang, Yihui; Wu, Yichao

    2013-01-01

    Generalized nonparametric additive models present a flexible way to evaluate the effects of several covariates on a general outcome of interest via a link function. In this modeling framework, one assumes that the effect of each of the covariates is nonparametric and additive. However, in practice, often there is prior information available about the shape of the regression functions, possibly from pilot studies or exploratory analysis. In this paper, we consider such situations and propose an estimation procedure where the prior information is used as a parametric guide to fit the additive model. Specifically, we first posit a parametric family for each of the regression functions using the prior information (parametric guides). After removing these parametric trends, we then estimate the remainder of the nonparametric functions using a nonparametric generalized additive model, and form the final estimates by adding back the parametric trend. We investigate the asymptotic properties of the estimates and show that when a good guide is chosen, the asymptotic variance of the estimates can be reduced significantly while keeping the asymptotic variance same as the unguided estimator. We observe the performance of our method via a simulation study and demonstrate our method by applying to a real data set on mergers and acquisitions.

  20. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

    The Ethiopian economy and population is strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons from 1985-2005 with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models for dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.

  1. Development of an inorganic and organic aerosol model (CHIMERE 2017β v1.0: seasonal and spatial evaluation over Europe

    Directory of Open Access Journals (Sweden)

    F. Couvidat

    2018-01-01

    Full Text Available A new aerosol module was developed and integrated in the air quality model CHIMERE. Developments include the use of the Model of Emissions and Gases and Aerosols from Nature (MEGAN 2.1 for biogenic emissions, the implementation of the inorganic thermodynamic model ISORROPIA 2.1, revision of wet deposition processes and of the algorithms of condensation/evaporation and coagulation and the implementation of the secondary organic aerosol (SOA mechanism H2O and the thermodynamic model SOAP. Concentrations of particles over Europe were simulated by the model for the year 2013. Model concentrations were compared to the European Monitoring and Evaluation Programme (EMEP observations and other observations available in the EBAS database to evaluate the performance of the model. Performances were determined for several components of particles (sea salt, sulfate, ammonium, nitrate, organic aerosol with a seasonal and regional analysis of results. The model gives satisfactory performance in general. For sea salt, the model succeeds in reproducing the seasonal evolution of concentrations for western and central Europe. For sulfate, except for an overestimation of sulfate in northern Europe, modeled concentrations are close to observations and the model succeeds in reproducing the seasonal evolution of concentrations. For organic aerosol, the model reproduces with satisfactory results concentrations for stations with strong modeled biogenic SOA concentrations. However, the model strongly overestimates ammonium nitrate concentrations during late autumn (possibly due to problems in the temporal evolution of emissions and strongly underestimates summer organic aerosol concentrations over most of the stations (especially in the northern half of Europe. This underestimation could be due to a lack of anthropogenic SOA or biogenic emissions in northern Europe. A list of recommended tests and developments to improve the model is also given.

  2. From linear to generalized linear mixed models: A case study in repeated measures

    Science.gov (United States)

    Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...

  3. The Michigan Titan Thermospheric General Circulation Model (TTGCM)

    Science.gov (United States)

    Bell, J. M.; Bougher, S. W.; de Lahaye, V.; Waite, J. H.

    2005-12-01

    The Cassini flybys of Titan since late October, 2004 have provided data critical to better understanding its chemical and thermal structures. With this in mind, a 3-D TGCM of Titan's atmosphere from 600km to the exobase (~1450km) has been developed. This paper presents the first results from the partially operational code. Currently, the TTGCM includes static background chemistry (Lebonnois et al 2001, Vervack et al 2004) coupled with thermal conduction routines. The thermosphere remains dominated by solar EUV forcing and HCN rotational cooling, which is calculated by a full line-by-line radiative transfer routine along the lines of Yelle (1991) and Mueller-Wodarg (2000, 2002). In addition, an approximate treatment of magnetospheric heating is explored. This paper illustrates the model's capabilities as well as some initial results from the Titan Thermospheric General Circulation model that will be compared with both the Cassini INMS data and the model of Mueller-Wodarg (2000,2002).

  4. General relativity cosmological models without the big bang

    International Nuclear Information System (INIS)

    Rosen, N.

    1985-01-01

    Attention is given to the so-called standard model of the universe in the framework of the general theory of relativity. This model is taken to be homogeneous and isotropic and filled with an ideal fluid characterized by a density and a pressure. Taking into consideration, however, the assumption that the universe began in a singular state, it is found hard to understand why the universe is so nearly homogeneous and isotropic at present for a singularity represents a breakdown of physical laws, and the initial singularity cannot, therefore, predetermine the subsequent symmetries of the universe. The present investigation has the objective to find a way of avoiding this initial singularity, i.e., to look for a cosmological model without the big bang. The idea is proposed that there exists a limiting density of matter of the order of magnitude of the Planck density, and that this was the density of matter at the moment at which the universe began to expand

  5. Pharmaceutical industry and trade liberalization using computable general equilibrium model.

    Science.gov (United States)

    Barouni, M; Ghaderi, H; Banouei, Aa

    2012-01-01

    Computable general equilibrium models are known as a powerful instrument in economic analyses and widely have been used in order to evaluate trade liberalization effects. The purpose of this study was to provide the impacts of trade openness on pharmaceutical industry using CGE model. Using a computable general equilibrium model in this study, the effects of decrease in tariffs as a symbol of trade liberalization on key variables of Iranian pharmaceutical products were studied. Simulation was performed via two scenarios in this study. The first scenario was the effect of decrease in tariffs of pharmaceutical products as 10, 30, 50, and 100 on key drug variables, and the second was the effect of decrease in other sectors except pharmaceutical products on vital and economic variables of pharmaceutical products. The required data were obtained and the model parameters were calibrated according to the social accounting matrix of Iran in 2006. The results associated with simulation demonstrated that the first scenario has increased import, export, drug supply to markets and household consumption, while import, export, supply of product to market, and household consumption of pharmaceutical products would averagely decrease in the second scenario. Ultimately, society welfare would improve in all scenarios. We presents and synthesizes the CGE model which could be used to analyze trade liberalization policy issue in developing countries (like Iran), and thus provides information that policymakers can use to improve the pharmacy economics.

  6. Description of identical particles via gauged matrix models: a generalization of the Calogero-Sutherland system

    International Nuclear Information System (INIS)

    Park, Jeong-Hyuck

    2003-01-01

    We elaborate the idea that the matrix models equipped with the gauge symmetry provide a natural framework to describe identical particles. After demonstrating the general prescription, we study an exactly solvable harmonic oscillator type gauged matrix model. The model gives a generalization of the Calogero-Sutherland system where the strength of the inverse square potential is not fixed but dynamical bounded by below

  7. Fractal diffusion equations: Microscopic models with anomalous diffusion and its generalizations

    International Nuclear Information System (INIS)

    Arkhincheev, V.E.

    2001-04-01

    To describe the ''anomalous'' diffusion the generalized diffusion equations of fractal order are deduced from microscopic models with anomalous diffusion as Comb model and Levy flights. It is shown that two types of equations are possible: with fractional temporal and fractional spatial derivatives. The solutions of these equations are obtained and the physical sense of these fractional equations is discussed. The relation between diffusion and conductivity is studied and the well-known Einstein relation is generalized for the anomalous diffusion case. It is shown that for Levy flight diffusion the Ohm's law is not applied and the current depends on electric field in a nonlinear way due to the anomalous character of Levy flights. The results of numerical simulations, which confirmed this conclusion, are also presented. (author)

  8. A WRF/Chem sensitivity study using ensemble modelling for a high ozone episode in Slovenia and the Northern Adriatic area

    Science.gov (United States)

    Žabkar, Rahela; Koračin, Darko; Rakovec, Jože

    2013-10-01

    A high ozone (O3) concentrations episode during a heat wave event in the Northeastern Mediterranean was investigated using the WRF/Chem model. To understand the major model uncertainties and errors as well as the impacts of model inputs on the model accuracy, an ensemble modelling experiment was conducted. The 51-member ensemble was designed by varying model physics parameterization options (PBL schemes with different surface layer and land-surface modules, and radiation schemes); chemical initial and boundary conditions; anthropogenic and biogenic emission inputs; and model domain setup and resolution. The main impacts of the geographical and emission characteristics of three distinct regions (suburban Mediterranean, continental urban, and continental rural) on the model accuracy and O3 predictions were investigated. In spite of the large ensemble set size, the model generally failed to simulate the extremes; however, as expected from probabilistic forecasting the ensemble spread improved results with respect to extremes compared to the reference run. Noticeable model nighttime overestimations at the Mediterranean and some urban and rural sites can be explained by too strong simulated winds, which reduce the impact of dry deposition and O3 titration in the near surface layers during the nighttime. Another possible explanation could be inaccuracies in the chemical mechanisms, which are suggested also by model insensitivity to variations in the nitrogen oxides (NOx) and volatile organic compounds (VOC) emissions. Major impact factors for underestimations of the daytime O3 maxima at the Mediterranean and some rural sites include overestimation of the PBL depths, a lack of information on forest fires, too strong surface winds, and also possible inaccuracies in biogenic emissions. This numerical experiment with the ensemble runs also provided guidance on an optimum model setup and input data.

  9. General Voltage Feedback Circuit Model in the Two-Dimensional Networked Resistive Sensor Array

    Directory of Open Access Journals (Sweden)

    JianFeng Wu

    2015-01-01

    Full Text Available To analyze the feature of the two-dimensional networked resistive sensor array, we firstly proposed a general model of voltage feedback circuits (VFCs such as the voltage feedback non-scanned-electrode circuit, the voltage feedback non-scanned-sampling-electrode circuit, and the voltage feedback non-scanned-sampling-electrode circuit. By analyzing the general model, we then gave a general mathematical expression of the effective equivalent resistor of the element being tested in VFCs. Finally, we evaluated the features of VFCs with simulation and test experiment. The results show that the expression is applicable to analyze the VFCs’ performance of parameters such as the multiplexers’ switch resistors, the nonscanned elements, and array size.

  10. Seasonal changes in the atmospheric heat balance simulated by the GISS general circulation model

    Science.gov (United States)

    Stone, P. H.; Chow, S.; Helfand, H. M.; Quirk, W. J.; Somerville, R. C. J.

    1975-01-01

    Tests of the ability of numerical general circulation models to simulate the atmosphere have focussed so far on simulations of the January climatology. These models generally present boundary conditions such as sea surface temperature, but this does not prevent testing their ability to simulate seasonal changes in atmospheric processes that accompany presented seasonal changes in boundary conditions. Experiments to simulate changes in the zonally averaged heat balance are discussed since many simplified models of climatic processes are based solely on this balance.

  11. General Business Model Patterns for Local Energy Management Concepts

    International Nuclear Information System (INIS)

    Facchinetti, Emanuele; Sulzer, Sabine

    2016-01-01

    The transition toward a more sustainable global energy system, significantly relying on renewable energies and decentralized energy systems, requires a deep reorganization of the energy sector. The way how energy services are generated, delivered, and traded is expected to be very different in the coming years. Business model innovation is recognized as a key driver for the successful implementation of the energy turnaround. This work contributes to this topic by introducing a heuristic methodology easing the identification of general business model patterns best suited for Local Energy Management concepts such as Energy Hubs. A conceptual framework characterizing the Local Energy Management business model solution space is developed. Three reference business model patterns providing orientation across the defined solution space are identified, analyzed, and compared. Through a market review, a number of successfully implemented innovative business models have been analyzed and allocated within the defined solution space. The outcomes of this work offer to potential stakeholders a starting point and guidelines for the business model innovation process, as well as insights for policy makers on challenges and opportunities related to Local Energy Management concepts.

  12. General Business Model Patterns for Local Energy Management Concepts

    Energy Technology Data Exchange (ETDEWEB)

    Facchinetti, Emanuele, E-mail: emanuele.facchinetti@hslu.ch; Sulzer, Sabine [Lucerne Competence Center for Energy Research, Lucerne University of Applied Science and Arts, Horw (Switzerland)

    2016-03-03

    The transition toward a more sustainable global energy system, significantly relying on renewable energies and decentralized energy systems, requires a deep reorganization of the energy sector. The way how energy services are generated, delivered, and traded is expected to be very different in the coming years. Business model innovation is recognized as a key driver for the successful implementation of the energy turnaround. This work contributes to this topic by introducing a heuristic methodology easing the identification of general business model patterns best suited for Local Energy Management concepts such as Energy Hubs. A conceptual framework characterizing the Local Energy Management business model solution space is developed. Three reference business model patterns providing orientation across the defined solution space are identified, analyzed, and compared. Through a market review, a number of successfully implemented innovative business models have been analyzed and allocated within the defined solution space. The outcomes of this work offer to potential stakeholders a starting point and guidelines for the business model innovation process, as well as insights for policy makers on challenges and opportunities related to Local Energy Management concepts.

  13. An EM Algorithm for Double-Pareto-Lognormal Generalized Linear Model Applied to Heavy-Tailed Insurance Claims

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2017-11-01

    Full Text Available Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN in Reed and Jorgensen (2004, we develop an EM algorithm for the heavy-tailed Double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper the associated generalized linear model has the location parameter equal to a linear predictor which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind and lognorma distributions.

  14. Some five-dimensional Bianchi type-iii string cosmological models in general relativity

    International Nuclear Information System (INIS)

    Samanta, G.C.; Biswal, S.K.; Mohanty, G.; Rameswarpatna, Bhubaneswar

    2011-01-01

    In this paper we have constructed some five-dimensional Bianchi type-III cosmological models in general relativity when source of gravitational field is a massive string. We obtained different classes of solutions by considering different functional forms of metric potentials. It is also observed that one of the models is not physically acceptable and the other models possess big-bang singularity. The physical and kinematical behaviors of the models are discussed

  15. A guide to developing resource selection functions from telemetry data using generalized estimating equations and generalized linear mixed models

    Directory of Open Access Journals (Sweden)

    Nicola Koper

    2012-03-01

    Full Text Available Resource selection functions (RSF are often developed using satellite (ARGOS or Global Positioning System (GPS telemetry datasets, which provide a large amount of highly correlated data. We discuss and compare the use of generalized linear mixed-effects models (GLMM and generalized estimating equations (GEE for using this type of data to develop RSFs. GLMMs directly model differences among caribou, while GEEs depend on an adjustment of the standard error to compensate for correlation of data points within individuals. Empirical standard errors, rather than model-based standard errors, must be used with either GLMMs or GEEs when developing RSFs. There are several important differences between these approaches; in particular, GLMMs are best for producing parameter estimates that predict how management might influence individuals, while GEEs are best for predicting how management might influence populations. As the interpretation, value, and statistical significance of both types of parameter estimates differ, it is important that users select the appropriate analytical method. We also outline the use of k-fold cross validation to assess fit of these models. Both GLMMs and GEEs hold promise for developing RSFs as long as they are used appropriately.

  16. Comparison between Duncan and Chang’s EB Model and the Generalized Plasticity Model in the Analysis of a High Earth-Rockfill Dam

    Directory of Open Access Journals (Sweden)

    Weixin Dong

    2013-01-01

    Full Text Available Nonlinear elastic model and elastoplastic model are two main kinds of constitutive models of soil, which are widely used in the numerical analyses of soil structure. In this study, Duncan and Chang's EB model and the generalized plasticity model proposed by Pastor, Zienkiewicz, and Chan was discussed and applied to describe the stress-strain relationship of rockfill materials. The two models were validated using the results of triaxial shear tests under different confining pressures. The comparisons between the fittings of models and test data showed that the modified generalized plasticity model is capable of simulating the mechanical behaviours of rockfill materials. The modified generalized plasticity model was implemented into a finite element code to carry out static analyses of a high earth-rockfill dam in China. Nonlinear elastic analyses were also performed with Duncan and Chang's EB model in the same program framework. The comparisons of FEM results and in situ monitoring data showed that the modified PZ-III model can give a better description of deformation of the earth-rockfill dam than Duncan and Chang’s EB model.

  17. Generalized Beer-Lambert model for near-infrared light propagation in thick biological tissues

    Science.gov (United States)

    Bhatt, Manish; Ayyalasomayajula, Kalyan R.; Yalavarthy, Phaneendra K.

    2016-07-01

    The attenuation of near-infrared (NIR) light intensity as it propagates in a turbid medium like biological tissue is described by modified the Beer-Lambert law (MBLL). The MBLL is generally used to quantify the changes in tissue chromophore concentrations for NIR spectroscopic data analysis. Even though MBLL is effective in terms of providing qualitative comparison, it suffers from its applicability across tissue types and tissue dimensions. In this work, we introduce Lambert-W function-based modeling for light propagation in biological tissues, which is a generalized version of the Beer-Lambert model. The proposed modeling provides parametrization of tissue properties, which includes two attenuation coefficients μ0 and η. We validated our model against the Monte Carlo simulation, which is the gold standard for modeling NIR light propagation in biological tissue. We included numerous human and animal tissues to validate the proposed empirical model, including an inhomogeneous adult human head model. The proposed model, which has a closed form (analytical), is first of its kind in providing accurate modeling of NIR light propagation in biological tissues.

  18. The generalized hedgehog and the projected chiral soliton model

    International Nuclear Information System (INIS)

    Fiolhais, M.; Kernforschungsanlage Juelich G.m.b.H.; Goeke, K.; Bochum Univ.; Gruemmer, F.; Urbano, J.N.

    1988-01-01

    The linear chiral soliton model with quark fields and elementary pion and sigma fields is solved in order to describe static properties of the nucleon and the delta resonance. To this end a Fock state of the system is constructed which consists of three valence quarks in a 1s orbit with a generalized hedgehog spin-flavour configuration cosηvertical strokeu↓> - sin ηvertical stroked↑>. Coherent states are used to provide a quantum description for the mesonic parts of the total wave function. The corresponding classical pion field also exhibits a generalized hedgehog structure. Various nucleon properties are calculated. These include proton and neutron charge raii, and the mangnetic moment of the proton for which experiment is obtained. (orig./HSI)

  19. A Dirichlet process mixture of generalized Dirichlet distributions for proportional data modeling.

    Science.gov (United States)

    Bouguila, Nizar; Ziou, Djemel

    2010-01-01

    In this paper, we propose a clustering algorithm based on both Dirichlet processes and generalized Dirichlet distribution which has been shown to be very flexible for proportional data modeling. Our approach can be viewed as an extension of the finite generalized Dirichlet mixture model to the infinite case. The extension is based on nonparametric Bayesian analysis. This clustering algorithm does not require the specification of the number of mixture components to be given in advance and estimates it in a principled manner. Our approach is Bayesian and relies on the estimation of the posterior distribution of clusterings using Gibbs sampler. Through some applications involving real-data classification and image databases categorization using visual words, we show that clustering via infinite mixture models offers a more powerful and robust performance than classic finite mixtures.

  20. Standard duplex criteria overestimate the degree of stenosis after eversion carotid endarterectomy.

    Science.gov (United States)

    Benzing, Travis; Wilhoit, Cameron; Wright, Sharee; McCann, P Aaron; Lessner, Susan; Brothers, Thomas E

    2015-06-01

    The eversion technique for carotid endarterectomy (eCEA) offers an alternative to longitudinal arteriotomy and patch closure (pCEA) for open carotid revascularization. In some reports, eCEA has been associated with a higher rate of >50% restenosis of the internal carotid when it is defined as peak systolic velocity (PSV) >125 cm/s by duplex imaging. Because the conformation of the carotid bifurcation may differ after eCEA compared with native carotid arteries, it was hypothesized that standard duplex criteria might not accurately reflect the presence of restenosis after eCEA. In a case-control study, the outcomes of all patients undergoing carotid endarterectomy by one surgeon during the last 10 years were analyzed retrospectively, with a primary end point of PSV >125 cm/s. Duplex flow velocities were compared with luminal diameter measurements for any carotid computed tomography arteriography or magnetic resonance angiography study obtained within 2 months of duplex imaging, with the degree of stenosis calculated by the methodology used in the North American Symptomatic Carotid Endarterectomy Trial (NASCET) and the European Carotid Surgery Trial (ECST) as well as cross-sectional area (CSA) reduction. Simulations were generated and analyzed by computational model simulations of the eCEA and pCEA arteries. Eversion and longitudinal arteriotomy with patch techniques were used in 118 and 177 carotid arteries, respectively. Duplex follow-up was available in 90 eCEA arteries at a median of 16 (range, 2-136) months and in 150 pCEA arteries at a median of 41 (range, 3-115) months postoperatively. PSV >125 cm/s was present at some time during follow-up in 31% of eCEA and pCEA carotid arteries, each, and in the most recent duplex examination in 7% after eCEA and 21% after pCEA (P = .003), with no eCEA and two pCEA arteries occluding completely during follow-up (P = .29). In 19 carotid arteries with PSV >125 cm/s after angle correction (median, 160 cm/s; interquartile range

  1. Performance evaluation of Maxwell and Cercignani-Lampis gas-wall interaction models in the modeling of thermally driven rarefied gas transport

    KAUST Repository

    Liang, Tengfei

    2013-07-16

    A systematic study on the performance of two empirical gas-wall interaction models, the Maxwell model and the Cercignani-Lampis (CL) model, in the entire Knudsen range is conducted. The models are evaluated by examining the accuracy of key macroscopic quantities such as temperature, density, and pressure, in three benchmark thermal problems, namely the Fourier thermal problem, the Knudsen force problem, and the thermal transpiration problem. The reference solutions are obtained from a validated hybrid DSMC-MD algorithm developed in-house. It has been found that while both models predict temperature and density reasonably well in the Fourier thermal problem, the pressure profile obtained from Maxwell model exhibits a trend that opposes that from the reference solution. As a consequence, the Maxwell model is unable to predict the orientation change of the Knudsen force acting on a cold cylinder embedded in a hot cylindrical enclosure at a certain Knudsen number. In the simulation of the thermal transpiration coefficient, although all three models overestimate the coefficient, the coefficient obtained from CL model is the closest to the reference solution. The Maxwell model performs the worst. The cause of the overestimated coefficient is investigated and its link to the overly constrained correlation between the tangential momentum accommodation coefficient and the tangential energy accommodation coefficient inherent in the models is pointed out. Directions for further improvement of models are suggested.

  2. General informatics teaching with B-Learning teaching model

    Directory of Open Access Journals (Sweden)

    Nguyen The Dung

    2018-03-01

    Full Text Available Blended learning (B-learning, a combination of face-to-face teaching and E-learning-supported-teaching in an online course, and Information and Communication Technology (ICT tools have been studied in recent years. In addition, the use of this teaching model is effective in teaching and learning conditions in which some certain subjects are appropriate for the specific teaching context. As it has been a matter of concern of the universities in Vietnam today, deep studies related to this topic is crucial to be conducted. In this article, the process of developing online courses and organizing teaching for the General Informatics subject for first-year students at the Hue University of Education with B-learning teaching model will be presented. The combination of 60% face-to-face and 40% online learning.

  3. Galilean generalized Robertson-Walker spacetimes: A new family of Galilean geometrical models

    Science.gov (United States)

    de la Fuente, Daniel; Rubio, Rafael M.

    2018-02-01

    We introduce a new family of Galilean spacetimes, the Galilean generalized Robertson-Walker spacetimes. This new family is relevant in the context of a generalized Newton-Cartan theory. We study its geometrical structure and analyse the completeness of its inextensible free falling observers. This sort of spacetimes constitutes the local geometric model of a much wider family of spacetimes admitting certain conformal symmetry. Moreover, we find some sufficient geometric conditions which guarantee a global splitting of a Galilean spacetime as a Galilean generalized Robertson-Walker spacetime.

  4. Development and validation of models for bubble coalescence and breakup

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Yiaxiang

    2013-10-08

    A generalized model for bubble coalescence and breakup has been developed, which is based on a comprehensive survey of existing theories and models. One important feature of the model is that all important mechanisms leading to bubble coalescence and breakup in a turbulent gas-liquid flow are considered. The new model is tested extensively in a 1D Test Solver and a 3D CFD code ANSYS CFX for the case of vertical gas-liquid pipe flow under adiabatic conditions, respectively. Two kinds of extensions of the standard multi-fluid model, i.e. the discrete population model and the inhomogeneous MUSIG (multiple-size group) model, are available in the two solvers, respectively. These extensions with suitable closure models such as those for coalescence and breakup are able to predict the evolution of bubble size distribution in dispersed flows and to overcome the mono-dispersed flow limitation of the standard multi-fluid model. For the validation of the model the high quality database of the TOPFLOW L12 experiments for air-water flow in a vertical pipe was employed. A wide range of test points, which cover the bubbly flow, turbulent-churn flow as well as the transition regime, is involved in the simulations. The comparison between the simulated results such as bubble size distribution, gas velocity and volume fraction and the measured ones indicates a generally good agreement for all selected test points. As the superficial gas velocity increases, bubble size distribution evolves via coalescence dominant regimes first, then breakup-dominant regimes and finally turns into a bimodal distribution. The tendency of the evolution is well reproduced by the model. However, the tendency is almost always overestimated, i.e. too much coalescence in the coalescence dominant case while too much breakup in breakup dominant ones. The reason of this problem is discussed by studying the contribution of each coalescence and breakup mechanism at different test points. The redistribution of the

  5. Development and validation of models for bubble coalescence and breakup

    International Nuclear Information System (INIS)

    Liao, Yiaxiang

    2013-01-01

    A generalized model for bubble coalescence and breakup has been developed, which is based on a comprehensive survey of existing theories and models. One important feature of the model is that all important mechanisms leading to bubble coalescence and breakup in a turbulent gas-liquid flow are considered. The new model is tested extensively in a 1D Test Solver and a 3D CFD code ANSYS CFX for the case of vertical gas-liquid pipe flow under adiabatic conditions, respectively. Two kinds of extensions of the standard multi-fluid model, i.e. the discrete population model and the inhomogeneous MUSIG (multiple-size group) model, are available in the two solvers, respectively. These extensions with suitable closure models such as those for coalescence and breakup are able to predict the evolution of bubble size distribution in dispersed flows and to overcome the mono-dispersed flow limitation of the standard multi-fluid model. For the validation of the model the high quality database of the TOPFLOW L12 experiments for air-water flow in a vertical pipe was employed. A wide range of test points, which cover the bubbly flow, turbulent-churn flow as well as the transition regime, is involved in the simulations. The comparison between the simulated results such as bubble size distribution, gas velocity and volume fraction and the measured ones indicates a generally good agreement for all selected test points. As the superficial gas velocity increases, bubble size distribution evolves via coalescence dominant regimes first, then breakup-dominant regimes and finally turns into a bimodal distribution. The tendency of the evolution is well reproduced by the model. However, the tendency is almost always overestimated, i.e. too much coalescence in the coalescence dominant case while too much breakup in breakup dominant ones. The reason of this problem is discussed by studying the contribution of each coalescence and breakup mechanism at different test points. The redistribution of the

  6. Efficient semiparametric estimation in generalized partially linear additive models for longitudinal/clustered data

    KAUST Repository

    Cheng, Guang; Zhou, Lan; Huang, Jianhua Z.

    2014-01-01

    We consider efficient estimation of the Euclidean parameters in a generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based

  7. Comparative performance of diabetes-specific and general population-based cardiovascular risk assessment models in people with diabetes mellitus.

    Science.gov (United States)

    Echouffo-Tcheugui, J-B; Kengne, A P

    2013-10-01

    Multivariable models for estimating cardiovascular disease (CVD) risk in people with diabetes comprise general population-based models and those from diabetic cohorts. Whether one set of models should receive preference is unclear. We evaluated the evidence on direct comparisons of the performance of general population vs diabetes-specific CVD risk models in people with diabetes. MEDLINE and EMBASE databases were searched up to March 2013. Two reviewers independently identified studies that compared the performance of general CVD models vs diabetes-specific ones in the same group of people with diabetes. Independent, dual data extraction on study design, risk models, outcomes; and measures of performance was conducted. Eleven articles reporting on 22 pair wise comparisons of a diabetes-specific model (UKPDS, ADVANCE and DCS risk models) to a general population model (three variants of the Framingham model, Prospective Cardiovascular Münster [PROCAM] score, CardioRisk Manager [CRM], Joint British Societies Coronary Risk Chart [JBSRC], Progetto Cuore algorithm and the CHD-Riskard algorithm) were eligible. Absolute differences in C-statistic of diabetes-specific vs general population-based models varied from -0.13 to 0.09. Comparisons for other performance measures were unusual. Outcomes definitions were congruent with those applied during model development. In 14 comparisons, the UKPDS, ADVANCE or DCS diabetes-specific models were superior to the general population CVD risk models. Authors reported better C-statistic for models they developed. The limited existing evidence suggests a possible discriminatory advantage of diabetes-specific over general population-based models for CVD risk stratification in diabetes. More robust head-to-head comparisons are needed to confirm this trend and strengthen recommendations. Copyright © 2013 Elsevier Masson SAS. All rights reserved.

  8. Energy spectra of odd nuclei in the generalized model

    Directory of Open Access Journals (Sweden)

    I. O. Korzh

    2015-04-01

    Full Text Available Based on the generalized nuclear model, energy spectra of the odd nuclei of such elements as 25Mg, 41K, and 65Cu are determined, and the structure of wave functions of these nuclei in the excited and normal states is studied. High quality in determining the energy spectra is possible due to the accurate calculations of all elements of the energy matrix. It is demonstrated that the structure of the wave functions so determined provides the possibility to more accurately select the nuclear model and the method for calculating the nucleon cross-sections of the inelastic scattering of nucleons by odd nuclei.

  9. Passive tracers in a general circulation model of the Southern Ocean

    Directory of Open Access Journals (Sweden)

    I. G. Stevens

    Full Text Available Passive tracers are used in an off-line version of the United Kingdom Fine Resolution Antarctic Model (FRAM to highlight features of the circulation and provide information on the inter-ocean exchange of water masses. The use of passive tracers allows a picture to be built up of the deep circulation which is not readily apparent from examination of the velocity or density fields. Comparison of observations with FRAM results gives good agreement for many features of the Southern Ocean circulation. Tracer distributions are consistent with the concept of a global "conveyor belt" with a return path via the Agulhas retroflection region for the replenishment of North Atlantic Deep Water.

    Key words. Oceanography: general (numerical modeling; water masses · Oceanography: physical (general circulation

  10. Why overestimate or underestimate chronic kidney disease when correct estimation is possible?

    Science.gov (United States)

    De Broe, Marc E; Gharbi, Mohamed Benghanem; Zamd, Mohamed; Elseviers, Monique

    2017-04-01

    There is no doubt that the introduction of the Kidney Disease: Improving Global Outcomes (KDIGO) guidelines 14 years ago, and their subsequent updates, have substantially contributed to the early detection of different stages of chronic kidney disease (CKD). Several recent studies from different parts of the world mention a CKD prevalence of 8-13%. However, some editorials and reviews have begun to describe the weaknesses of a substantial number of studies. Maremar (maladies rénales chroniques au Maroc) is a recently published prevalence study of CKD, hypertension, diabetes and obesity in a randomized, representative and high response rate (85%) sample of the adult population of Morocco that strictly applied the KDIGO guidelines. When adjusted to the actual adult population of Morocco (2015), a rather low prevalence of CKD (2.9%) was found. Several reasons for this low prevalence were identified; the tagine-like population pyramid of the Maremar population was a factor, but even more important were the confirmation of proteinuria found at first screening and the proof of chronicity of decreased estimated glomerular filtration rate (eGFR), eliminating false positive results. In addition, it was found that when an arbitrary single threshold of eGFR (55 years of age), particularly in those without proteinuria, haematuria or hypertension. It also resulted in a significant 'underdiagnosis' (false negatives) in younger individuals with an eGFR >60 mL/min/1.73 m2 and below the third percentile of their age-/gender-category. The use of the third percentile eGFR level as a cut-off, based on age-gender-specific reference values of eGFR, allows the detection of these false positives and negatives. There is an urgent need for additional quality studies of the prevalence of CKD using the recent KDIGO guidelines in the correct way, to avoid overestimation of the true disease state of CKD by ≥50% with potentially dramatic consequences. © The Author 2017. Published by Oxford

  11. Limb Symmetry Indexes Can Overestimate Knee Function After Anterior Cruciate Ligament Injury.

    Science.gov (United States)

    Wellsandt, Elizabeth; Failla, Mathew J; Snyder-Mackler, Lynn

    2017-05-01

    Study Design Prospective cohort. Background The high risk of second anterior cruciate ligament (ACL) injuries after return to sport highlights the importance of return-to-sport decision making. Objective return-to-sport criteria frequently use limb symmetry indexes (LSIs) to quantify quadriceps strength and hop scores. Whether using the uninvolved limb in LSIs is optimal is unknown. Objectives To evaluate the uninvolved limb as a reference standard for LSIs utilized in return-to-sport testing and its relationship with second ACL injury rates. Methods Seventy athletes completed quadriceps strength and 4 single-leg hop tests before anterior cruciate ligament reconstruction (ACLR) and 6 months after ACLR. Limb symmetry indexes for each test compared involved-limb measures at 6 months to uninvolved-limb measures at 6 months. Estimated preinjury capacity (EPIC) levels for each test compared involved-limb measures at 6 months to uninvolved-limb measures before ACLR. Second ACL injuries were tracked for a minimum follow-up of 2 years after ACLR. Results Forty (57.1%) patients achieved 90% LSIs for quadriceps strength and all hop tests. Only 20 (28.6%) patients met 90% EPIC levels (comparing the involved limb at 6 months after ACLR to the uninvolved limb before ACLR) for quadriceps strength and all hop tests. Twenty-four (34.3%) patients who achieved 90% LSIs for all measures 6 months after ACLR did not achieve 90% EPIC levels for all measures. Estimated preinjury capacity levels were more sensitive than LSIs in predicting second ACL injuries (LSIs, 0.273; 95% confidence interval [CI]: 0.010, 0.566 and EPIC, 0.818; 95% CI: 0.523, 0.949). Conclusion Limb symmetry indexes frequently overestimate knee function after ACLR and may be related to second ACL injury risk. These findings raise concern about whether the variable ACL return-to-sport criteria utilized in current clinical practice are stringent enough to achieve safe and successful return to sport. Level of Evidence

  12. Regional disaster impact analysis: comparing Input-Output and Computable General Equilibrium models

    NARCIS (Netherlands)

    Koks, E.E.; Carrera, L.; Jonkeren, O.; Aerts, J.C.J.H.; Husby, T.G.; Thissen, M.; Standardi, G.; Mysiak, J.

    2016-01-01

    A variety of models have been applied to assess the economic losses of disasters, of which the most common ones are input-output (IO) and computable general equilibrium (CGE) models. In addition, an increasing number of scholars have developed hybrid approaches: one that combines both or either of

  13. A generalization of the bond fluctuation model to viscoelastic environments

    International Nuclear Information System (INIS)

    Fritsch, Christian C

    2014-01-01

    A lattice-based simulation method for polymer diffusion in a viscoelastic medium is presented. This method combines the eight-site bond fluctuation model with an algorithm for the simulation of fractional Brownian motion on the lattice. The method applies to unentangled self-avoiding chains and is probed for anomalous diffusion exponents α between 0.7 and 1.0. The simulation results are in very good agreement with the predictions of the generalized Rouse model of a self-avoiding chain polymer in a viscoelastic medium. (paper)

  14. Analog quantum simulation of generalized Dicke models in trapped ions

    Science.gov (United States)

    Aedo, Ibai; Lamata, Lucas

    2018-04-01

    We propose the analog quantum simulation of generalized Dicke models in trapped ions. By combining bicromatic laser interactions on multiple ions we can generate all regimes of light-matter coupling in these models, where here the light mode is mimicked by a motional mode. We present numerical simulations of the three-qubit Dicke model both in the weak field (WF) regime, where the Jaynes-Cummings behavior arises, and the ultrastrong coupling (USC) regime, where a rotating-wave approximation cannot be considered. We also simulate the two-qubit biased Dicke model in the WF and USC regimes and the two-qubit anisotropic Dicke model in the USC regime and the deep-strong coupling regime. The agreement between the mathematical models and the ion system convinces us that these quantum simulations can be implemented in the laboratory with current or near-future technology. This formalism establishes an avenue for the quantum simulation of many-spin Dicke models in trapped ions.

  15. Confidence Intervals for Assessing Heterogeneity in Generalized Linear Mixed Models

    Science.gov (United States)

    Wagler, Amy E.

    2014-01-01

    Generalized linear mixed models are frequently applied to data with clustered categorical outcomes. The effect of clustering on the response is often difficult to practically assess partly because it is reported on a scale on which comparisons with regression parameters are difficult to make. This article proposes confidence intervals for…

  16. Generalized Hermite polynomials in superspace as eigenfunctions of the supersymmetric rational CMS model

    CERN Document Server

    Desrosiers, P; Mathieu, P; Desrosiers, Patrick; Lapointe, Luc; Mathieu, Pierre

    2003-01-01

    We present two constructions of the orthogonal eigenfunctions of the supersymmetric extension of the rational Calogero-Moser-Sutherland model with harmonic confinement. These eigenfunctions are the superspace extension of the generalized Hermite (or Hi-Jack) polynomials. The conserved quantities of the rational supersymmetric model are first related to their trigonometric relatives through a similarity transformation. This leads to a simple expression for the generalized Hermite superpolynomials as a differential operator acting on the corresponding Jack superpolynomials. The second construction relies on the action of the Hamiltonian on the supermonomial basis. This translates into determinantal expressions for the Hamiltonian's eigenfunctions. As an aside, the maximal superintegrability of the supersymmetric rational Calogero-Moser-Sutherland model is demonstrated.

  17. Dynamical reduction models with general gaussian noises

    International Nuclear Information System (INIS)

    Bassi, Angelo; Ghirardi, GianCarlo

    2002-02-01

    We consider the effect of replacing in stochastic differential equations leading to the dynamical collapse of the statevector, white noise stochastic processes with non white ones. We prove that such a modification can be consistently performed without altering the most interesting features of the previous models. One of the reasons to discuss this matter derives from the desire of being allowed to deal with physical stochastic fields, such as the gravitational one, which cannot give rise to white noises. From our point of view the most relevant motivation for the approach we propose here derives from the fact that in relativistic models the occurrence of white noises is the main responsible for the appearance of untractable divergences. Therefore, one can hope that resorting to non white noises one can overcome such a difficulty. We investigate stochastic equations with non white noises, we discuss their reduction properties and their physical implications. Our analysis has a precise interest not only for the above mentioned subject but also for the general study of dissipative systems and decoherence. (author)

  18. Dynamical reduction models with general Gaussian noises

    International Nuclear Information System (INIS)

    Bassi, Angelo; Ghirardi, GianCarlo

    2002-01-01

    We consider the effect of replacing in stochastic differential equations leading to the dynamical collapse of the state vector, white-noise stochastic processes with nonwhite ones. We prove that such a modification can be consistently performed without altering the most interesting features of the previous models. One of the reasons to discuss this matter derives from the desire of being allowed to deal with physical stochastic fields, such as the gravitational one, which cannot give rise to white noises. From our point of view, the most relevant motivation for the approach we propose here derives from the fact that in relativistic models intractable divergences appear as a consequence of the white nature of the noises. Therefore, one can hope that resorting to nonwhite noises, one can overcome such a difficulty. We investigate stochastic equations with nonwhite noises, we discuss their reduction properties and their physical implications. Our analysis has a precise interest not only for the above-mentioned subject but also for the general study of dissipative systems and decoherence

  19. Comparison of three-dimensional ocean general circulation models on a benchmark problem

    International Nuclear Information System (INIS)

    Chartier, M.

    1990-12-01

    A french and an american Ocean General Circulation Models for deep-sea disposal of radioactive wastes are compared on a benchmark test problem. Both models are three-dimensional. They solve the hydrostatic primitive equations of the ocean with two different finite difference techniques. Results show that the dynamics simulated by both models are consistent. Several methods for the running of a model from a known state are tested in the French model: the diagnostic method, the prognostic method, the acceleration of convergence and the robust-diagnostic method

  20. Vector models and generalized SYK models

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Cheng [Department of Physics, Brown University,Providence RI 02912 (United States)

    2017-05-23

    We consider the relation between SYK-like models and vector models by studying a toy model where a tensor field is coupled with a vector field. By integrating out the tensor field, the toy model reduces to the Gross-Neveu model in 1 dimension. On the other hand, a certain perturbation can be turned on and the toy model flows to an SYK-like model at low energy. A chaotic-nonchaotic phase transition occurs as the sign of the perturbation is altered. We further study similar models that possess chaos and enhanced reparameterization symmetries.

  1. On the Generalization of the Timoshenko Beam Model Based on the Micropolar Linear Theory: Static Case

    Directory of Open Access Journals (Sweden)

    Andrea Nobili

    2015-01-01

    Full Text Available Three generalizations of the Timoshenko beam model according to the linear theory of micropolar elasticity or its special cases, that is, the couple stress theory or the modified couple stress theory, recently developed in the literature, are investigated and compared. The analysis is carried out in a variational setting, making use of Hamilton’s principle. It is shown that both the Timoshenko and the (possibly modified couple stress models are based on a microstructural kinematics which is governed by kinosthenic (ignorable terms in the Lagrangian. Despite their difference, all models bring in a beam-plane theory only one microstructural material parameter. Besides, the micropolar model formally reduces to the couple stress model upon introducing the proper constraint on the microstructure kinematics, although the material parameter is generally different. Line loading on the microstructure results in a nonconservative force potential. Finally, the Hamiltonian form of the micropolar beam model is derived and the canonical equations are presented along with their general solution. The latter exhibits a general oscillatory pattern for the microstructure rotation and stress, whose behavior matches the numerical findings.

  2. On the sub-model errors of a generalized one-way coupling scheme for linking models at different scales

    Science.gov (United States)

    Zeng, Jicai; Zha, Yuanyuan; Zhang, Yonggen; Shi, Liangsheng; Zhu, Yan; Yang, Jinzhong

    2017-11-01

    Multi-scale modeling of the localized groundwater flow problems in a large-scale aquifer has been extensively investigated under the context of cost-benefit controversy. An alternative is to couple the parent and child models with different spatial and temporal scales, which may result in non-trivial sub-model errors in the local areas of interest. Basically, such errors in the child models originate from the deficiency in the coupling methods, as well as from the inadequacy in the spatial and temporal discretizations of the parent and child models. In this study, we investigate the sub-model errors within a generalized one-way coupling scheme given its numerical stability and efficiency, which enables more flexibility in choosing sub-models. To couple the models at different scales, the head solution at parent scale is delivered downward onto the child boundary nodes by means of the spatial and temporal head interpolation approaches. The efficiency of the coupling model is improved either by refining the grid or time step size in the parent and child models, or by carefully locating the sub-model boundary nodes. The temporal truncation errors in the sub-models can be significantly reduced by the adaptive local time-stepping scheme. The generalized one-way coupling scheme is promising to handle the multi-scale groundwater flow problems with complex stresses and heterogeneity.

  3. A Standardized Generalized Dimensionality Discrepancy Measure and a Standardized Model-Based Covariance for Dimensionality Assessment for Multidimensional Models

    Science.gov (United States)

    Levy, Roy; Xu, Yuning; Yel, Nedim; Svetina, Dubravka

    2015-01-01

    The standardized generalized dimensionality discrepancy measure and the standardized model-based covariance are introduced as tools to critique dimensionality assumptions in multidimensional item response models. These tools are grounded in a covariance theory perspective and associated connections between dimensionality and local independence.…

  4. Multi-variable evaluation of hydrological model predictions for a headwater basin in the Canadian Rocky Mountains

    Directory of Open Access Journals (Sweden)

    X. Fang

    2013-04-01

    Full Text Available One of the purposes of the Cold Regions Hydrological Modelling platform (CRHM is to diagnose inadequacies in the understanding of the hydrological cycle and its simulation. A physically based hydrological model including a full suite of snow and cold regions hydrology processes as well as warm season, hillslope and groundwater hydrology was developed in CRHM for application in the Marmot Creek Research Basin (~ 9.4 km2, located in the Front Ranges of the Canadian Rocky Mountains. Parameters were selected from digital elevation model, forest, soil, and geological maps, and from the results of many cold regions hydrology studies in the region and elsewhere. Non-calibrated simulations were conducted for six hydrological years during the period 2005–2011 and were compared with detailed field observations of several hydrological cycle components. The results showed good model performance for snow accumulation and snowmelt compared to the field observations for four seasons during the period 2007–2011, with a small bias and normalised root mean square difference (NRMSD ranging from 40 to 42% for the subalpine conifer forests and from 31 to 67% for the alpine tundra and treeline larch forest environments. Overestimation or underestimation of the peak SWE ranged from 1.6 to 29%. Simulations matched well with the observed unfrozen moisture fluctuation in the top soil layer at a lodgepole pine site during the period 2006–2011, with a NRMSD ranging from 17 to 39%, but with consistent overestimation of 7 to 34%. Evaluations of seasonal streamflow during the period 2006–2011 revealed that the model generally predicted well compared to observations at the basin scale, with a NRMSD of 60% and small model bias (1%, while at the sub-basin scale NRMSDs were larger, ranging from 72 to 76%, though overestimation or underestimation for the cumulative seasonal discharge was within 29%. Timing of discharge was better predicted at the Marmot Creek basin outlet

  5. About the Properties of a Modified Generalized Beverton-Holt Equation in Ecology Models

    Directory of Open Access Journals (Sweden)

    M. De La Sen

    2008-01-01

    Full Text Available This paper is devoted to the study of a generalized modified version of the well-known Beverton-Holt equation in ecology. The proposed model describes the population evolution of some species in a certain habitat driven by six parametrical sequences, namely, the intrinsic growth rate (associated with the reproduction capability, the degree of sympathy of the species with the habitat (described by a so-called environment carrying capacity, a penalty term to deal with overpopulation levels, the harvesting (fishing or hunting regulatory quota, or related to use of pesticides when fighting damaging plagues, and the independent consumption which basically quantifies predation. The independent consumption is considered as a part of a more general additive disturbance which also potentially includes another extra additive disturbance term which might be attributed to net migration from or to the habitat or modeling measuring errors. Both potential contributions are included for generalization purposes in the proposed modified generalized Beverton-Holt equation. The properties of stability and boundedness of the solution sequences, equilibrium points of the stationary model, and the existence of oscillatory solution sequences are investigated. A numerical example for a population of aphids is investigated with the theoretical tools developed in the paper.

  6. Decadal predictions of Southern Ocean sea ice : testing different initialization methods with an Earth-system Model of Intermediate Complexity

    Science.gov (United States)

    Zunz, Violette; Goosse, Hugues; Dubinkina, Svetlana

    2013-04-01

    The sea ice extent in the Southern Ocean has increased since 1979 but the causes of this expansion have not been firmly identified. In particular, the contribution of internal variability and external forcing to this positive trend has not been fully established. In this region, the lack of observations and the overestimation of internal variability of the sea ice by contemporary General Circulation Models (GCMs) make it difficult to understand the behaviour of the sea ice. Nevertheless, if its evolution is governed by the internal variability of the system and if this internal variability is in some way predictable, a suitable initialization method should lead to simulations results that better fit the reality. Current GCMs decadal predictions are generally initialized through a nudging towards some observed fields. This relatively simple method does not seem to be appropriated to the initialization of sea ice in the Southern Ocean. The present study aims at identifying an initialization method that could improve the quality of the predictions of Southern Ocean sea ice at decadal timescales. We use LOVECLIM, an Earth-system Model of Intermediate Complexity that allows us to perform, within a reasonable computational time, the large amount of simulations required to test systematically different initialization procedures. These involve three data assimilation methods: a nudging, a particle filter and an efficient particle filter. In a first step, simulations are performed in an idealized framework, i.e. data from a reference simulation of LOVECLIM are used instead of observations, herein after called pseudo-observations. In this configuration, the internal variability of the model obviously agrees with the one of the pseudo-observations. This allows us to get rid of the issues related to the overestimation of the internal variability by models compared to the observed one. This way, we can work out a suitable methodology to assess the efficiency of the

  7. Nuclear inertia for fission in a generalized cranking model

    International Nuclear Information System (INIS)

    Kunz, J.; Nix, J.R.

    1984-01-01

    A time dependent formalism which is appropriate for β vibrations and fission is developed for a generalized cranking model. The formalism leads to additional terms in the density matrix which affect the nuclear inertia. The case of a harmonic oscillator potential is used to demonstrate the contribution of the pairing gap term on the β vibrational inertia for Pu 240. The inertia remains finite and close to the limiting irrotational value

  8. Generalized Skyrme model with the loosely bound potential

    Science.gov (United States)

    Gudnason, Sven Bjarke; Zhang, Baiyang; Ma, Nana

    2016-12-01

    We study a generalization of the loosely bound Skyrme model which consists of the Skyrme model with a sixth-order derivative term—motivated by its fluidlike properties—and the second-order loosely bound potential—motivated by lowering the classical binding energies of higher-charged Skyrmions. We use the rational map approximation for the Skyrmion of topological charge B =4 , calculate the binding energy of the latter, and estimate the systematic error in using this approximation. In the parameter space that we can explore within the rational map approximation, we find classical binding energies as low as 1.8%, and once taking into account the contribution from spin-isospin quantization, we obtain binding energies as low as 5.3%. We also calculate the contribution from the sixth-order derivative term to the electric charge density and axial coupling.

  9. Convex Relaxations for a Generalized Chan-Vese Model

    KAUST Repository

    Bae, Egil

    2013-01-01

    We revisit the Chan-Vese model of image segmentation with a focus on the encoding with several integer-valued labeling functions. We relate several representations with varying amount of complexity and demonstrate the connection to recent relaxations for product sets and to dual maxflow-based formulations. For some special cases, it can be shown that it is possible to guarantee binary minimizers. While this is not true in general, we show how to derive a convex approximation of the combinatorial problem for more than 4 phases. We also provide a method to avoid overcounting of boundaries in the original Chan-Vese model without departing from the efficient product-set representation. Finally, we derive an algorithm to solve the associated discretized problem, and demonstrate that it allows to obtain good approximations for the segmentation problem with various number of regions. © 2013 Springer-Verlag.

  10. Dynamical generalization of a solvable family of two-electron model atoms with general interparticle repulsion

    International Nuclear Information System (INIS)

    Niehaus, T A; Suhai, S; March, N H

    2008-01-01

    Holas, Howard and March (2003 Phys. Lett. A 310 451) have obtained analytic solutions for ground-state properties of a whole family of two-electron spin-compensated harmonically confined model atoms whose different members are characterized by a specific interparticle potential energy u(r 12 ). Here, we make a start on the dynamic generalization of the harmonic external potential, the motivation being the serious criticism levelled recently against the foundations of time-dependent density-functional theory (e.g., Schirmer and Dreuw 2007 Phys. Rev. A 75 022513). In this context, we derive a simplified expression for the time-dependent electron density for arbitrary interparticle interaction, which is fully determined by a one-dimensional non-interacting Hamiltonian. Moreover, a closed solution for the momentum space density in the Moshinsky model is obtained

  11. A generalized model for coincidence counting

    International Nuclear Information System (INIS)

    Lu, Ming-Shih; Teichmann, T.

    1992-01-01

    The aim of this paper is to provide a description of the multiplicative processes associated with coincidence counting techniques, for example in the NDA of plutonium bearing materials. The model elucidates both the physical processes and the underlying mathematical formalism in a relatively simple but comprehensive way. In particular, it includes the effect of absorption by impurities or poisons, as well as that of neutron leakage on a parallel basis to the treatment of induced fission itself. The work thus parallels and generalizes the methods of Boehnel of Hage and Cifarelli, and more recently of Yanjushkin. This paper introduces the concept of a dual probability generating function to account for both the basic physical multiplication phenomena, as well as the detection phenomena. The underlying approach extends the idea of a simple probability generating function, due to De Moivre. The basic mathematical background may be found, for example, in Feller 1966

  12. Impact of an improved shortwave radiation scheme in the MAECHAM5 General Circulation Model

    Directory of Open Access Journals (Sweden)

    J. J. Morcrette

    2007-05-01

    Full Text Available In order to improve the representation of ozone absorption in the stratosphere of the MAECHAM5 general circulation model, the spectral resolution of the shortwave radiation parameterization used in the model has been increased from 4 to 6 bands. Two 20-years simulations with the general circulation model have been performed, one with the standard and the other with the newly introduced parameterization respectively, to evaluate the temperature and dynamical changes arising from the two different representations of the shortwave radiative transfer. In the simulation with the increased spectral resolution in the radiation parameterization, a significant warming of almost the entire model domain is reported. At the summer stratopause the temperature increase is about 6 K and alleviates the cold bias present in the model when the standard radiation scheme is used. These general circulation model results are consistent both with previous validation of the radiation scheme and with the offline clear-sky comparison performed in the current work with a discrete ordinate 4 stream scattering line by line radiative transfer model. The offline validation shows a substantial reduction of the daily averaged shortwave heating rate bias (1–2 K/day cooling that occurs for the standard radiation parameterization in the upper stratosphere, present under a range of atmospheric conditions. Therefore, the 6 band shortwave radiation parameterization is considered to be better suited for the representation of the ozone absorption in the stratosphere than the 4 band parameterization. Concerning the dynamical response in the general circulation model, it is found that the reported warming at the summer stratopause induces stronger zonal mean zonal winds in the middle atmosphere. These stronger zonal mean zonal winds thereafter appear to produce a dynamical feedback that results in a dynamical warming (cooling of the polar winter (summer mesosphere, caused by an

  13. MGF Approach to the Analysis of Generalized Two-Ray Fading Models

    KAUST Repository

    Rao, Milind; Lopez-Martinez, F. Javier; Alouini, Mohamed-Slim; Goldsmith, Andrea

    2015-01-01

    We analyze a class of Generalized Two-Ray (GTR) fading channels that consist of two line of sight (LOS) components with random phase plus a diffuse component. We derive a closedform expression for the moment generating function (MGF) of the signal-to-noise ratio (SNR) for this model, which greatly simplifies its analysis. This expression arises from the observation that the GTR fading model can be expressed in terms of a conditional underlying Rician distribution. We illustrate the approach to derive simple expressions for statistics and performance metrics of interest such as the amount of fading, the level crossing rate, the symbol error rate, and the ergodic capacity in GTR fading channels. We also show that the effect of considering a more general distribution for the phase difference between the LOS components has an impact on the average SNR.

  14. Bits or Shots in Combat? The Generalized Deitchman Model of Guerrilla Warfare

    OpenAIRE

    Kress, Moshe; MacKay, Niall J.

    2013-01-01

    Operations Research Letters, accepted. We generalize Deitchman's guerrilla warfare model to account for trade-off between intelligence ('bits') and firepower ('shots'). Intelligent targeting leads to aimed fire, absence of intelligence leads to unaimed fire, dependent on targets' density. We propose a new Lanchester-type model that mixes aimed with unaimed fire, the balance between these being determined by quality of information. We derive the model's conserved quantity, and use it ...

  15. A generalized linear-quadratic model incorporating reciprocal time pattern of radiation damage repair

    International Nuclear Information System (INIS)

    Huang, Zhibin; Mayr, Nina A.; Lo, Simon S.; Wang, Jian Z.; Jia Guang; Yuh, William T. C.; Johnke, Roberta

    2012-01-01

    Purpose: It has been conventionally assumed that the repair rate for sublethal damage (SLD) remains constant during the entire radiation course. However, increasing evidence from animal studies suggest that this may not the case. Rather, it appears that the repair rate for radiation-induced SLD slows down with increasing time. Such a slowdown in repair would suggest that the exponential repair pattern would not necessarily accurately predict repair process. As a result, the purpose of this study was to investigate a new generalized linear-quadratic (LQ) model incorporating a repair pattern with reciprocal time. The new formulas were tested with published experimental data. Methods: The LQ model has been widely used in radiation therapy, and the parameter G in the surviving fraction represents the repair process of sublethal damage with T r as the repair half-time. When a reciprocal pattern of repair process was adopted, a closed form of G was derived analytically for arbitrary radiation schemes. The published animal data adopted to test the reciprocal formulas. Results: A generalized LQ model to describe the repair process in a reciprocal pattern was obtained. Subsequently, formulas for special cases were derived from this general form. The reciprocal model showed a better fit to the animal data than the exponential model, particularly for the ED50 data (reduced χ 2 min of 2.0 vs 4.3, p = 0.11 vs 0.006), with the following gLQ parameters: α/β = 2.6-4.8 Gy, T r = 3.2-3.9 h for rat feet skin, and α/β = 0.9 Gy, T r = 1.1 h for rat spinal cord. Conclusions: These results of repair process following a reciprocal time suggest that the generalized LQ model incorporating the reciprocal time of sublethal damage repair shows a better fit than the exponential repair model. These formulas can be used to analyze the experimental and clinical data, where a slowing-down repair process appears during the course of radiation therapy.

  16. Generalized Efficient Inference on Factor Models with Long-Range Dependence

    DEFF Research Database (Denmark)

    Ergemen, Yunus Emre

    . Short-memory dynamics are allowed in the common factor structure and possibly heteroskedastic error term. In the estimation, a generalized version of the principal components (PC) approach is proposed to achieve efficiency. Asymptotics for efficient common factor and factor loading as well as long......A dynamic factor model is considered that contains stochastic time trends allowing for stationary and nonstationary long-range dependence. The model nests standard I(0) and I(1) behaviour smoothly in common factors and residuals, removing the necessity of a priori unit-root and stationarity testing...

  17. Modeling extreme PM10 concentration in Malaysia using generalized extreme value distribution

    Science.gov (United States)

    Hasan, Husna; Mansor, Nadiah; Salleh, Nur Hanim Mohd

    2015-05-01

    Extreme PM10 concentration from the Air Pollutant Index (API) at thirteen monitoring stations in Malaysia is modeled using the Generalized Extreme Value (GEV) distribution. The data is blocked into monthly selection period. The Mann-Kendall (MK) test suggests a non-stationary model so two models are considered for the stations with trend. The likelihood ratio test is used to determine the best fitted model and the result shows that only two stations favor the non-stationary model (Model 2) while the other eleven stations favor stationary model (Model 1). The return level of PM10 concentration that is expected to exceed the maximum once within a selected period is obtained.

  18. General basin modeling for site suitability. Draft report 1. Baseline data

    International Nuclear Information System (INIS)

    1979-04-01

    This report summarizes work completed by Golder Associates under Task 2 - Site Suitability specifically for modeling of fluid flow and mass transport in sedimentary basins containing thick shale or salt. It also describes ongoing and future work on the above topic. The purpose of the study is to develop a general model for a nuclear waste repository situated in a deep sedimentary basin environment. The model will be used in conjunction with Golder's fluid flow and mass transport codes to study specific aspects of nuclide transport by groundwater flow

  19. MODEL OF BRAZILIAN URBANIZATION: GENERAL NOTES

    Directory of Open Access Journals (Sweden)

    Leandro da Silva Guimarães

    2016-07-01

    Full Text Available The full text format seeks to analyze the social inequality in Brazil through the spatial process of that inequality in this sense it analyzes, scratching the edges of what is known of the Brazilian urbanization model and how this same model produced gentrification cities and exclusive. So search the text discuss the country’s urban exclusion through consolidation of what is conventionally called peripheral areas, or more generally, of peripheries. The text on screen is the result of research carried out at the Federal Fluminense University in Masters level. In this study, we tried to understand the genesis of an urban housing development located in São Gonçalo, Rio de Janeiro called Jardim Catarina. Understand what the problem space partner who originated it. In this sense, his analysis becomes consubstantial to understand the social and spatial inequalities in Brazil, as well as the role of the state as planning manager socio-spatial planning and principal agent in the solution of such problems. It is expected that with the realization of a study of greater amounts, from which this article is just a micro work can contribute subsidies that contribute to the arrangement and crystallization of public policies that give account of social inequalities and serve to leverage a country more fair and equitable cities.

  20. Modeling ultrashort electromagnetic pulses with a generalized Kadomtsev-Petviashvili equation

    Science.gov (United States)

    Hofstrand, A.; Moloney, J. V.

    2018-03-01

    In this paper we derive a properly scaled model for the nonlinear propagation of intense, ultrashort, mid-infrared electromagnetic pulses (10-100 femtoseconds) through an arbitrary dispersive medium. The derivation results in a generalized Kadomtsev-Petviashvili (gKP) equation. In contrast to envelope-based models such as the Nonlinear Schrödinger (NLS) equation, the gKP equation describes the dynamics of the field's actual carrier wave. It is important to resolve these dynamics when modeling ultrashort pulses. We proceed by giving an original proof of sufficient conditions on the initial pulse for a singularity to form in the field after a finite propagation distance. The model is then numerically simulated in 2D using a spectral-solver with initial data and physical parameters highlighting our theoretical results.