WorldWideScience

Sample records for models generally underestimate

  1. Evidence for link between modelled trends in Antarctic sea ice and underestimated westerly wind changes.

    Science.gov (United States)

    Purich, Ariaan; Cai, Wenju; England, Matthew H; Cowan, Tim

    2016-02-04

    Despite global warming, total Antarctic sea ice coverage increased over 1979-2013. However, the majority of Coupled Model Intercomparison Project phase 5 models simulate a decline. Mechanisms causing this discrepancy have so far remained elusive. Here we show that weaker trends in the intensification of the Southern Hemisphere westerly wind jet simulated by the models may contribute to this disparity. During austral summer, a strengthened jet leads to increased upwelling of cooler subsurface water and strengthened equatorward transport, conducive to increased sea ice. As the majority of models underestimate summer jet trends, this cooling process is underestimated compared with observations and is insufficient to offset warming in the models. Through the sea ice-albedo feedback, models produce a high-latitude surface ocean warming and sea ice decline, contrasting the observed net cooling and sea ice increase. A realistic simulation of observed wind changes may be crucial for reproducing the recent observed sea ice increase.

  2. Low modeled ozone production suggests underestimation of precursor emissions (especially NOx) in Europe

    Directory of Open Access Journals (Sweden)

    E. Oikonomakis

    2018-02-01

    High surface ozone concentrations, which usually occur when photochemical ozone production takes place, pose a great risk to human health and vegetation. Air quality models are often used by policy makers as tools for the development of ozone mitigation strategies. However, the modeled ozone production is often not evaluated, or not evaluated sufficiently, in many ozone modeling studies. The focus of this work is to evaluate the modeled ozone production in Europe indirectly, with the use of the ozone–temperature correlation for the summer of 2010, and to analyze its sensitivity to precursor emissions and meteorology by using the regional air quality model, the Comprehensive Air Quality Model with Extensions (CAMx). The results show that the model significantly underestimates the observed high afternoon surface ozone mixing ratios (≥ 60 ppb) by 10–20 ppb and overestimates the lower ones (< 40 ppb) by 5–15 ppb, resulting in a misleading good agreement with the observations for average ozone. The model also underestimates the ozone–temperature regression slope by about a factor of 2 for most of the measurement stations. To investigate the impact of emissions, four scenarios were tested: (i) increased volatile organic compound (VOC) emissions by a factor of 1.5 and 2 for the anthropogenic and biogenic VOC emissions, respectively, (ii) increased nitrogen oxide (NOx) emissions by a factor of 2, (iii) a combination of the first two scenarios and (iv) increased traffic-only NOx emissions by a factor of 4. For southern, eastern, and central (except the Benelux area) Europe, doubling NOx emissions seems to be the most efficient scenario to reduce the underestimation of the observed high ozone mixing ratios without significant degradation of the model performance for the lower ozone mixing ratios. The model performance for ozone–temperature correlation is also better when NOx emissions are doubled. In the Benelux area, however, the third scenario (where both NOx and VOC emissions are increased) leads to a better model performance.

  3. Quality of life and time to death: have the health gains of preventive interventions been underestimated?

    Science.gov (United States)

    Gheorghe, Maria; Brouwer, Werner B F; van Baal, Pieter H M

    2015-04-01

    This article explores the implications of the relation between quality of life (QoL) and time to death (TTD) for economic evaluations of preventive interventions. By using health survey data on QoL for the general Dutch population linked to the mortality registry, we quantify the magnitude of this relationship. For addressing specific features of the nonstandard QoL distribution such as boundness, skewness, and heteroscedasticity, we modeled QoL using a generalized additive model for location, scale, and shape (GAMLSS) with a β inflated outcome distribution. Our empirical results indicate that QoL decreases when approaching death, suggesting that there is a strong relationship between TTD and QoL. Predictions of different regression models revealed that ignoring this relationship results in an underestimation of the quality-adjusted life year (QALY) gains for preventive interventions. The underestimation ranged between 3% and 7% and depended on age, the number of years gained from the intervention, and the discount rate used. © The Author(s) 2014.
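The size of the effect described above can be illustrated with a back-of-the-envelope discounted-QALY calculation. This is not the record's GAMLSS model; the QoL weights, years gained, and discount rate below are assumptions chosen only to show how ignoring the time-to-death (TTD) relationship biases the gain downward.

```python
# Illustrative sketch: discounted QALY gain from an intervention that adds
# life-years, comparing a flat quality-of-life (QoL) assumption with
# TTD-adjusted QoL. All numeric values are assumptions for illustration.

def qaly_gain(qol_per_year, discount_rate):
    """Discounted sum of QoL weights over the gained life-years."""
    return sum(q / (1 + discount_rate) ** t for t, q in enumerate(qol_per_year))

years_gained = 5
flat_qol = [0.80] * years_gained                 # ignores time-to-death effect
# With death postponed, the gained years are lived further from death and so
# carry higher QoL weights than the flat average assumes (assumed values).
ttd_adjusted_qol = [0.84, 0.84, 0.83, 0.81, 0.76]

flat = qaly_gain(flat_qol, 0.03)
adjusted = qaly_gain(ttd_adjusted_qol, 0.03)
print(f"flat: {flat:.3f} QALY, TTD-adjusted: {adjusted:.3f} QALY")
```

With these assumed weights the flat estimate comes out below the TTD-adjusted one, i.e. the gain is underestimated when the QoL-TTD relationship is ignored, in the direction the record reports.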

  4. Some sources of the underestimation of evaluated cross section uncertainties

    International Nuclear Information System (INIS)

    Badikov, S.A.; Gai, E.V.

    2003-01-01

    The problem of the underestimation of evaluated cross-section uncertainties is addressed. Two basic sources of the underestimation of evaluated cross-section uncertainties are considered: (a) inconsistency between declared and observable experimental uncertainties, and (b) inadequacy between applied statistical models and processed experimental data. Both sources of the underestimation are mainly a consequence of uncertainties unrecognized by experimenters. A 'constant shift' model is proposed for taking unrecognized experimental uncertainties into account. The model is applied to a statistical analysis of ²³⁸U(n,f)/²³⁵U(n,f) reaction cross-section ratio measurements. It is demonstrated that multiplication by √χ² as an instrument for correcting underestimated evaluated cross-section uncertainties fails in the case of correlated measurements. It is also shown that arbitrary assignment of uncertainties and correlations in a simple least-squares fit of two correlated measurements of an unknown mean leads to physically incorrect evaluated results. (author)
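The final claim in this record can be reproduced numerically. The sketch below, with assumed measurement values and an assumed correlation, fits a common mean to two correlated measurements by generalized least squares; the evaluated mean falls below both measurements, the hallmark of a physically incorrect result from an arbitrarily assigned correlation.

```python
import numpy as np

def gls_common_mean(y, cov):
    """Generalized least-squares estimate of a common mean and its variance."""
    w = np.linalg.solve(cov, np.ones_like(y))   # V^-1 * 1
    var = 1.0 / w.sum()
    return var * (w @ y), var

# Two measurements of the same quantity (assumed values and uncertainties).
y = np.array([1.0, 1.5])
sig = np.array([0.1, 0.3])
rho = 0.5                      # assumed (arbitrary) correlation coefficient
cov = np.array([[sig[0] ** 2, rho * sig[0] * sig[1]],
                [rho * sig[0] * sig[1], sig[1] ** 2]])

mu, var = gls_common_mean(y, cov)
print(f"GLS mean = {mu:.4f}")  # falls BELOW both measurements
```

Uncorrelated measurements would give a weighted average strictly between 1.0 and 1.5; the assumed correlation drags the estimate outside that range, which is why uniformly inflating uncertainties by √χ² cannot repair a correlated-data evaluation.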

  5. Modeling microelectrode biosensors: free-flow calibration can substantially underestimate tissue concentrations.

    Science.gov (United States)

    Newton, Adam J H; Wall, Mark J; Richardson, Magnus J E

    2017-03-01

    Microelectrode amperometric biosensors are widely used to measure concentrations of analytes in solution and tissue including acetylcholine, adenosine, glucose, and glutamate. A great deal of experimental and modeling effort has been directed at quantifying the response of the biosensors themselves; however, the influence that the macroscopic tissue environment has on biosensor response has not been subjected to the same level of scrutiny. Here we identify an important issue in the way microelectrode biosensors are calibrated that is likely to have led to underestimations of analyte tissue concentrations. Concentration in tissue is typically determined by comparing the biosensor signal to that measured in free-flow calibration conditions. In a free-flow environment the concentration of the analyte at the outer surface of the biosensor can be considered constant. However, in tissue the analyte reaches the biosensor surface by diffusion through the extracellular space. Because the enzymes in the biosensor break down the analyte, a density gradient is set up resulting in a significantly lower concentration of analyte near the biosensor surface. This effect is compounded by the diminished volume fraction (porosity) and reduction in the diffusion coefficient due to obstructions (tortuosity) in tissue. We demonstrate this effect through modeling and experimentally verify our predictions in diffusive environments. NEW & NOTEWORTHY Microelectrode biosensors are typically calibrated in a free-flow environment where the concentrations at the biosensor surface are constant. However, when in tissue, the analyte reaches the biosensor via diffusion and so analyte breakdown by the biosensor results in a concentration gradient and consequently a lower concentration around the biosensor. This effect means that naive free-flow calibration will underestimate tissue concentration. We develop mathematical models to better quantify the discrepancy between the calibration and tissue
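The mechanism described above can be sketched with a one-dimensional steady-state balance: in free flow the surface concentration equals the bulk value, but in tissue, diffusive supply through a slab of thickness L must match first-order consumption at the sensor surface, D_eff·(C0 − Cs)/L = k·Cs, giving Cs = C0 / (1 + k·L/D_eff). This is a simplified stand-in for the record's models, and every parameter value below is an assumption for illustration.

```python
# Steady-state sketch of the calibration discrepancy: the biosensor consumes
# the analyte it measures, so the concentration at its surface (Cs) sits
# below the bulk tissue concentration (C0). Tortuosity further reduces the
# effective diffusion coefficient in tissue. All values are assumed.

def surface_concentration_uM(c_bulk_uM, k_mps, length_m, d_free_m2ps,
                             tortuosity=1.6):
    d_eff = d_free_m2ps / tortuosity ** 2   # diffusion slowed by tissue
    return c_bulk_uM / (1.0 + k_mps * length_m / d_eff)

c_bulk = 10.0   # µM analyte in the bulk extracellular space (assumed)
c_surface = surface_concentration_uM(c_bulk, k_mps=1e-5, length_m=50e-6,
                                     d_free_m2ps=7.6e-10)
print(f"sensor surface sees {c_surface:.1f} µM of a {c_bulk:.1f} µM bulk")
```

With these assumed numbers the sensor sees well under half the true tissue concentration, so a free-flow calibration (where Cs ≈ C0) would underestimate the tissue value accordingly.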

  6. Climatology of the HOPE-G global ocean general circulation model - Sea ice general circulation model

    Energy Technology Data Exchange (ETDEWEB)

    Legutke, S. [Deutsches Klimarechenzentrum (DKRZ), Hamburg (Germany); Maier-Reimer, E. [Max-Planck-Institut fuer Meteorologie, Hamburg (Germany)

    1999-12-01

    The HOPE-G global ocean general circulation model (OGCM) climatology, obtained in a long-term forced integration, is described. HOPE-G is a primitive-equation z-level ocean model which contains a dynamic-thermodynamic sea-ice model. It is formulated on a 2.8° grid with increased resolution in low latitudes in order to better resolve equatorial dynamics. The vertical resolution is 20 layers. The purpose of the integration was both to investigate the model's ability to reproduce the observed general circulation of the world ocean and to obtain an initial state for coupled atmosphere-ocean-sea-ice climate simulations. The model was driven with daily mean data of a 15-year integration of the atmospheric general circulation model ECHAM4, the atmospheric component in later coupled runs. Thereby, a maximum of the flux variability that is expected to appear in coupled simulations is already included in the ocean spin-up experiment described here. The model was run for more than 2000 years until a quasi-steady state was achieved. It reproduces the major current systems and the main features of the so-called conveyor belt circulation. The observed distribution of water masses is reproduced reasonably well, although with a saline bias in the intermediate water masses and a warm bias in the deep and bottom water of the Atlantic and Indian Oceans. The model underestimates the meridional transport of heat in the Atlantic Ocean. The simulated heat transport in the other basins, though, is in good agreement with observations. (orig.)

  7. Nuclear power plant cost underestimation: mechanisms and corrections

    International Nuclear Information System (INIS)

    Meyer, M.B.

    1984-01-01

    Criticisms of inaccurate nuclear power plant cost estimates have commonly focused upon what factors have caused actual costs to increase, and not upon the engineering cost estimate methodology itself. This article describes two major sources of cost underestimation and suggests corrections for each which can be applied while retaining the traditional engineering methodology in general.

  8. Low modeled ozone production suggests underestimation of precursor emissions (especially NOx) in Europe

    Science.gov (United States)

    Oikonomakis, Emmanouil; Aksoyoglu, Sebnem; Ciarelli, Giancarlo; Baltensperger, Urs; Prévôt, André Stephan Henry

    2018-02-01

    High surface ozone concentrations, which usually occur when photochemical ozone production takes place, pose a great risk to human health and vegetation. Air quality models are often used by policy makers as tools for the development of ozone mitigation strategies. However, the modeled ozone production is often not evaluated, or not evaluated sufficiently, in many ozone modeling studies. The focus of this work is to evaluate the modeled ozone production in Europe indirectly, with the use of the ozone-temperature correlation for the summer of 2010, and to analyze its sensitivity to precursor emissions and meteorology by using the regional air quality model, the Comprehensive Air Quality Model with Extensions (CAMx). The results show that the model significantly underestimates the observed high afternoon surface ozone mixing ratios (≥ 60 ppb) by 10-20 ppb and overestimates the lower ones (< 40 ppb) by 5-15 ppb, resulting in a misleading good agreement with the observations for average ozone. The model also underestimates the ozone-temperature regression slope by about a factor of 2 for most of the measurement stations. To investigate the impact of emissions, four scenarios were tested: (i) increased volatile organic compound (VOC) emissions by a factor of 1.5 and 2 for the anthropogenic and biogenic VOC emissions, respectively, (ii) increased nitrogen oxide (NOx) emissions by a factor of 2, (iii) a combination of the first two scenarios and (iv) increased traffic-only NOx emissions by a factor of 4. For southern, eastern, and central (except the Benelux area) Europe, doubling NOx emissions seems to be the most efficient scenario to reduce the underestimation of the observed high ozone mixing ratios without significant degradation of the model performance for the lower ozone mixing ratios. The model performance for ozone-temperature correlation is also better when NOx emissions are doubled. In the Benelux area, however, the third scenario (where both NOx and VOC emissions are increased) leads to a better model performance. Although increasing only the traffic NOx emissions by a factor of 4 gave very similar results to the doubling of all NOx emissions, the first scenario is more consistent with the uncertainties reported by other studies than the latter, suggesting that high uncertainties in NOx emissions might originate mainly from the road-transport sector rather than from other sectors. The impact of meteorology was examined with three sensitivity tests: (i) increased surface temperature by 4 °C, (ii) reduced wind speed by 50 % and (iii) doubled wind speed. The first two scenarios led to a consistent increase in all surface ozone mixing ratios, thus improving the model performance for the high ozone values but significantly degrading it for the low ozone values, while the third scenario had exactly the opposite effect.

  9. Global models underestimate large decadal declining and rising water storage trends relative to GRACE satellite data

    Science.gov (United States)

    Scanlon, Bridget R.; Zhang, Zizhan; Save, Himanshu; Sun, Alexander Y.; van Beek, Ludovicus P. H.; Wiese, David N.; Reedy, Robert C.; Longuevergne, Laurent; Döll, Petra; Bierkens, Marc F. P.

    2018-01-01

    Assessing reliability of global models is critical because of increasing reliance on these models to address past and projected future climate and human stresses on global water resources. Here, we evaluate model reliability based on a comprehensive comparison of decadal trends (2002–2014) in land water storage from seven global models (WGHM, PCR-GLOBWB, GLDAS NOAH, MOSAIC, VIC, CLM, and CLSM) to trends from three Gravity Recovery and Climate Experiment (GRACE) satellite solutions in 186 river basins (∼60% of global land area). Medians of modeled basin water storage trends greatly underestimate GRACE-derived large decreasing (≤−0.5 km3/y) and increasing (≥0.5 km3/y) trends. Decreasing trends from GRACE are mostly related to human use (irrigation) and climate variations, whereas increasing trends reflect climate variations. For example, in the Amazon, GRACE estimates a large increasing trend of ∼43 km3/y, whereas most models estimate decreasing trends (−71 to 11 km3/y). Land water storage trends, summed over all basins, are positive for GRACE (∼71–82 km3/y) but negative for models (−450 to −12 km3/y), contributing opposing trends to global mean sea level change. Impacts of climate forcing on decadal land water storage trends exceed those of modeled human intervention by about a factor of 2. The model-GRACE comparison highlights potential areas of future model development, particularly simulated water storage. The inability of models to capture large decadal water storage trends based on GRACE indicates that model projections of climate and human-induced water storage changes may be underestimated. PMID:29358394
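The quantity being compared throughout this record is a decadal linear trend in basin water storage. A minimal sketch of how such a trend (km³/yr) is fitted by least squares to an annual storage-anomaly series over 2002-2014 is below; the series is synthetic with an assumed true slope of 3 km³/yr, not GRACE or model data.

```python
import numpy as np

# Fit a linear decadal trend to a synthetic basin water-storage series.
rng = np.random.default_rng(1)
years = np.arange(2002, 2015)                       # 2002-2014 inclusive
storage_km3 = 3.0 * (years - 2002) + rng.normal(0.0, 2.0, years.size)

trend_km3_per_yr = np.polyfit(years, storage_km3, 1)[0]
print(f"fitted trend: {trend_km3_per_yr:.1f} km^3/yr (true slope 3.0)")
```

Summing such fitted trends over the 186 basins, with sign retained, is what yields the opposing GRACE-positive versus model-negative totals the record reports.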

  10. Academic self-concept, learning motivation, and test anxiety of the underestimated student.

    Science.gov (United States)

    Urhahne, Detlef; Chao, Sheng-Han; Florineth, Maria Luise; Luttenberger, Silke; Paechter, Manuela

    2011-03-01

    BACKGROUND. Teachers' judgments of student performance on a standardized achievement test often result in an overestimation of students' abilities. In the majority of cases, a larger group of overestimated students and a smaller group of underestimated students are formed by these judgments. AIMS. In this research study, the consequences of the underestimation of students' mathematical performance potential were examined. SAMPLE. Two hundred and thirty-five fourth grade students and their fourteen mathematics teachers took part in the investigation. METHOD. Students worked on a standardized mathematics achievement test and completed a self-description questionnaire about motivation and affect. Teachers estimated each individual student's potential with regard to mathematics test performance as well as students' expectancy for success, level of aspiration, academic self-concept, learning motivation, and test anxiety. The differences between teachers' judgments on students' test performance and students' actual performance were used to build groups of underestimated and overestimated students. RESULTS. Underestimated students displayed equal levels of test performance, learning motivation, and level of aspiration in comparison with overestimated students, but had lower expectancy for success, lower academic self-concept, and experienced more test anxiety. Teachers expected that underestimated students would receive lower grades on the next mathematics test, believed that students were satisfied with lower grades, and assumed that the students had weaker learning motivation than their overestimated classmates. CONCLUSION. Teachers' judgment error was not confined to test performance but generalized to motivational and affective traits of the students. © 2010 The British Psychological Society.

  11. On the intra-seasonal variability within the extratropics in the ECHAM3 general circulation model

    International Nuclear Information System (INIS)

    May, W.

    1994-01-01

    First, we consider the GCM's capability to reproduce the midlatitude variability on intra-seasonal time scales by a comparison with observational data (ECMWF analyses). Second, we assess the possible influence of sea surface temperatures (SSTs) on the intra-seasonal variability by comparing estimates obtained from different simulations performed with ECHAM3 with varying and fixed SSTs as boundary forcing. The intra-seasonal variability as simulated by ECHAM3 is underestimated over most of the Northern Hemisphere. While the contributions of the high-frequency transient fluctuations are reasonably well captured by the model, ECHAM3 fails to reproduce the observed level of low-frequency intra-seasonal variability. This is mainly due to the model's underestimation of the variability caused by the ultra-long planetary waves in the Northern Hemisphere midlatitudes. In the Southern Hemisphere midlatitudes, on the other hand, the intra-seasonal variability as simulated by ECHAM3 is generally underestimated in the area north of about 50°S, but overestimated at higher latitudes. This is the case for the contributions of the high-frequency and the low-frequency transient fluctuations as well. Further, the model indicates a strong tendency toward zonal symmetry, in particular with respect to the high-frequency transient fluctuations. While the two sets of simulations with varying and fixed sea surface temperatures as boundary forcing reveal only small regional differences in the Southern Hemisphere, there is a strong response in the Northern Hemisphere. The contributions of the high-frequency transient fluctuations to the intra-seasonal variability are generally stronger in the simulations with fixed SSTs. Further, the Pacific storm track is shifted slightly poleward in this set of simulations. For the low-frequency intra-seasonal variability the model shows a strong, but regional, response to the interannual variations of the SST. (orig.)

  12. Underestimated effect sizes in GWAS: fundamental limitations of single SNP analysis for dichotomous phenotypes.

    Directory of Open Access Journals (Sweden)

    Sven Stringer

    Complex diseases are often highly heritable. However, for many complex traits only a small proportion of the heritability can be explained by observed genetic variants in traditional genome-wide association (GWA) studies. Moreover, for some of those traits few significant SNPs have been identified. Single-SNP association methods test for association at a single SNP, ignoring the effect of other SNPs. We show, using a simple multi-locus odds model of complex disease, that moderate to large effect sizes of causal variants may be estimated as relatively small effect sizes in single-SNP association testing. This underestimation effect is most severe for diseases influenced by numerous risk variants. We relate the underestimation effect to the concept of non-collapsibility found in the statistics literature. As described, continuous phenotypes generated with linear genetic models are not affected by this underestimation effect. Since many GWA studies apply single-SNP analysis to dichotomous phenotypes, previously reported results potentially underestimate true effect sizes, thereby impeding identification of true effect SNPs. Therefore, when a multi-locus model of disease risk is assumed, a multi-SNP analysis may be more appropriate.
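The non-collapsibility effect can be reproduced in a few lines. The simulation below uses assumed parameters (20 independent binary risk loci, a conditional log-odds ratio of 0.5 each, an assumed baseline): disease status follows a multi-locus logistic model, yet the single-SNP odds ratio computed from the marginal 2×2 table of one locus comes out noticeably smaller than the true conditional odds ratio exp(0.5) ≈ 1.65.

```python
import numpy as np

# Simulate a multi-locus logistic disease model, then analyze one locus
# marginally (single-SNP style) and compare odds ratios.
rng = np.random.default_rng(0)
n, m = 200_000, 20            # individuals, independent binary risk loci
beta = 0.5                    # conditional log-odds ratio per locus (assumed)
g = rng.integers(0, 2, size=(n, m))
p_disease = 1.0 / (1.0 + np.exp(-(-7.0 + beta * g.sum(axis=1))))
disease = rng.random(n) < p_disease

# Single-SNP analysis of locus 0: odds ratio from the marginal 2x2 table.
carrier = g[:, 0] == 1
a = (disease & carrier).sum();  b = (~disease & carrier).sum()
c = (disease & ~carrier).sum(); d = (~disease & ~carrier).sum()
marginal_or = (a * d) / (b * c)
print(f"conditional OR {np.exp(beta):.2f} vs marginal OR {marginal_or:.2f}")
```

The attenuation grows with the number of risk loci contributing unmodeled heterogeneity to the linear predictor, matching the record's point that underestimation is most severe for highly polygenic diseases.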

  13. Chronic rhinosinusitis in Europe - an underestimated disease. A GA(2) LEN study

    DEFF Research Database (Denmark)

    Hastan, D; Fokkens, W J; Bachert, C

    2011-01-01

    Hastan D, Fokkens WJ, Bachert C, et al. Chronic rhinosinusitis in Europe - an underestimated disease. A GA(2) LEN study. Allergy 2011; 66: 1216-1223. Abstract: Background: Chronic rhinosinusitis (CRS) is a common health problem, with significant medical costs and impact on general health. Even so, prevalence...

  14. The Underestimation of Isoprene in Houston during the Texas 2013 DISCOVER-AQ Campaign

    Science.gov (United States)

    Choi, Y.; Diao, L.; Czader, B.; Li, X.; Estes, M. J.

    2014-12-01

    This study applies principal component analysis to aircraft data from the Texas 2013 DISCOVER-AQ (Deriving Information on Surface Conditions from Column and Vertically Resolved Observations Relevant to Air Quality) field campaign to characterize isoprene sources over Houston during September 2013. The biogenic isoprene signature appears in the third principal component and anthropogenic signals in the following two. Evaluations of the Community Multiscale Air Quality (CMAQ) model simulations of isoprene with airborne measurements are more accurate for suburban areas than for industrial areas. This study also compares model outputs to eight surface automated gas chromatograph (Auto-GC) measurements near the Houston ship channel industrial area during the nighttime and shows that modeled anthropogenic isoprene is underestimated by a factor of 10.60. This study employs a new simulation with a modified anthropogenic emissions inventory (constrained using the ratios of observed versus simulated values) that yields closer isoprene predictions at night, with a reduction in the mean bias of 56.93%, implying that isoprene emissions from the 2008 National Emission Inventory are underestimated in the city of Houston and that other climate models or chemistry and transport models using the same emissions inventory might also underestimate isoprene in other Houston-like areas in the United States.

  15. Underestimation of Project Costs

    Science.gov (United States)

    Jones, Harry W.

    2015-01-01

    Large projects almost always exceed their budgets. Estimating cost is difficult and estimated costs are usually too low. Three different reasons are suggested: bad luck, overoptimism, and deliberate underestimation. Project management can usually point to project difficulty and complexity, technical uncertainty, stakeholder conflicts, scope changes, unforeseen events, and other not really unpredictable bad luck. Project planning is usually over-optimistic, so the likelihood and impact of bad luck is systematically underestimated. Project plans reflect optimism and hope for success in a supposedly unique new effort rather than rational expectations based on historical data. Past project problems are claimed to be irrelevant because "This time it's different." Some bad luck is inevitable and reasonable optimism is understandable, but deliberate deception must be condemned. In a competitive environment, project planners and advocates often deliberately underestimate costs to help gain project approval and funding. Project benefits, cost savings, and probability of success are exaggerated and key risks ignored. Project advocates have incentives to distort information and conceal difficulties from project approvers. One naively suggested cure is more openness, honesty, and group adherence to shared overall goals. A more realistic alternative is threatening overrun projects with cancellation. Neither approach seems to solve the problem. A better method to avoid the delusions of over-optimism and the deceptions of biased advocacy is to base the project cost estimate on the actual costs of a large group of similar projects. Over-optimism and deception can continue beyond the planning phase and into project execution. Hard milestones based on verified tests and demonstrations can provide a reality check.
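The remedy proposed above, basing the estimate on the actual costs of similar past projects, can be sketched as a reference-class adjustment: rescale the bottom-up engineering estimate by the distribution of historical overrun ratios (actual cost / estimated cost). The ratio history and estimate below are assumed values for illustration only.

```python
# Reference-class sketch: adjust an engineering estimate by historical
# overrun ratios from comparable projects. All numbers are assumed.
past_overrun_ratios = [1.4, 2.1, 1.1, 1.8, 1.3, 2.6, 1.5]
engineering_estimate_musd = 100.0   # bottom-up estimate, $M

ratios = sorted(past_overrun_ratios)
median_ratio = ratios[len(ratios) // 2]   # typical historical overrun
conservative_ratio = ratios[-2]           # near-worst-case budget point

print(f"median-adjusted estimate: {engineering_estimate_musd * median_ratio:.0f} $M")
print(f"conservative budget:      {engineering_estimate_musd * conservative_ratio:.0f} $M")
```

The design choice is the point of the record: the adjustment comes from outcomes of the reference class, not from the advocates' own optimism about the new project.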

  16. Earth System Models Underestimate Soil Carbon Diagnostic Times in Dry and Cold Regions.

    Science.gov (United States)

    Jing, W.; Xia, J.; Zhou, X.; Huang, K.; Huang, Y.; Jian, Z.; Jiang, L.; Xu, X.; Liang, J.; Wang, Y. P.; Luo, Y.

    2017-12-01

    Soils contain the largest organic carbon (C) reservoir at the Earth's surface and strongly modulate the terrestrial feedback to climate change. Large uncertainty exists in current Earth system models (ESMs) in simulating soil organic C (SOC) dynamics, calling for a systematic diagnosis of their performance based on observations. Here, we built a global database of SOC diagnostic times (i.e., turnover times; τsoil) measured at 320 sites with four different approaches. We found that the estimated τsoil was comparable among the approaches of 14C dating, 13C shifts due to vegetation change, and the ratio of stock over flux, but was shortest in laboratory incubation studies. The state-of-the-art ESMs underestimated τsoil in most biomes, even by >10-fold and >5-fold in cold and dry regions, respectively. Moreover, we identified clear negative dependences of τsoil on temperature and precipitation in both the observational and modeling results. Compared with the Community Land Model (version 4), the incorporation of a soil vertical profile (CLM4.5) could substantially extend the τsoil of SOC. Our findings suggest the accuracy of the climate-C cycle feedback in current ESMs could be enhanced by an improved understanding of SOC dynamics under limited hydrothermal conditions.
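Of the four approaches named above, the stock-over-flux one admits a one-line worked example: at steady state, the diagnostic (turnover) time is the standing SOC stock divided by the outgoing carbon flux. The values below are assumed illustrations; underestimating τsoil, as the ESMs do in cold and dry regions, is equivalent to cycling this carbon too fast.

```python
# Stock-over-flux diagnostic time: tau = stock / flux. Assumed toy values.
def turnover_time_years(soc_stock_kg_m2, c_flux_kg_m2_yr):
    return soc_stock_kg_m2 / c_flux_kg_m2_yr

cold_dry = turnover_time_years(40.0, 0.2)    # large stock, slow respiration
warm_moist = turnover_time_years(10.0, 0.8)  # smaller stock, fast cycling
print(cold_dry, warm_moist)   # diagnostic times in years
```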

  17. Underestimating belief in climate change

    Science.gov (United States)

    Jost, John T.

    2018-03-01

    People are influenced by second-order beliefs — beliefs about the beliefs of others. New research finds that citizens in the US and China systematically underestimate popular support for taking action to curb climate change. Fortunately, they seem willing and able to correct their misperceptions.

  18. Terrestrial biosphere models underestimate photosynthetic capacity and CO2 assimilation in the Arctic.

    Science.gov (United States)

    Rogers, Alistair; Serbin, Shawn P; Ely, Kim S; Sloan, Victoria L; Wullschleger, Stan D

    2017-12-01

    Terrestrial biosphere models (TBMs) are highly sensitive to the model representation of photosynthesis, in particular the parameters maximum carboxylation rate and maximum electron transport rate at 25°C (Vc,max.25 and Jmax.25, respectively). Many TBMs do not include representation of Arctic plants, and those that do rely on understanding and parameterization from temperate species. We measured photosynthetic CO2 response curves and leaf nitrogen (N) content in species representing the dominant vascular plant functional types found on the coastal tundra near Barrow, Alaska. The activation energies associated with the temperature response functions of Vc,max and Jmax were 17% lower than commonly used values. When scaled to 25°C, Vc,max.25 and Jmax.25 were two- to five-fold higher than the values used to parameterize current TBMs. This high photosynthetic capacity was attributable to a high leaf N content and the high fraction of N invested in Rubisco. Leaf-level modeling demonstrated that current parameterization of TBMs resulted in a two-fold underestimation of the capacity for leaf-level CO2 assimilation in Arctic vegetation. This study highlights the poor representation of Arctic photosynthesis in TBMs, and provides the critical data necessary to improve our ability to project the response of the Arctic to global environmental change. No claim to original US Government works. New Phytologist © 2017 New Phytologist Trust.
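The "scaled to 25°C" step above is usually an Arrhenius normalization, v25 = v(T) / exp(Ha·(Tk − 298.15) / (298.15·R·Tk)), so the assumed activation energy Ha directly changes the reported Vc,max.25. The sketch below uses an assumed field measurement and an assumed commonly used Ha to show that a 17% lower activation energy yields a smaller upscaled value from the same cold-leaf measurement.

```python
import math

R = 8.314  # universal gas constant, J mol^-1 K^-1

def scale_to_25(v_at_t, t_leaf_c, ha_j_mol):
    """Arrhenius-normalize a photosynthetic parameter to 25 degC."""
    tk = t_leaf_c + 273.15
    f = math.exp(ha_j_mol * (tk - 298.15) / (298.15 * R * tk))
    return v_at_t / f

v_measured = 60.0        # µmol m-2 s-1 at 15 degC (assumed measurement)
ha_default = 65_330.0    # J mol-1, an often-used activation energy (assumed)
ha_lower = ha_default * 0.83   # ~17 % lower, as reported for Arctic species

v25_default = scale_to_25(v_measured, 15.0, ha_default)
v25_arctic = scale_to_25(v_measured, 15.0, ha_lower)
print(f"Vc,max.25: {v25_default:.0f} (default Ha) vs {v25_arctic:.0f} (lower Ha)")
```

Both scaled values exceed the 15°C measurement (photosynthetic capacity rises with temperature below 25°C), and the lower Arctic activation energy gives the smaller 25°C value, illustrating why the temperature-response parameters matter as much as the measurements themselves.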

  19. Non-differential underestimation may cause a threshold effect of exposure to appear as a dose-response relationship

    NARCIS (Netherlands)

    Verkerk, P. H.; Buitendijk, S. E.

    1992-01-01

    It is generally believed that non-differential misclassification will lead to a bias toward the null value. However, using one graphical and one numerical example, we show that in situations where underestimation rather than overestimation is the problem, non-differential misclassification may cause a threshold effect of exposure to appear as a dose-response relationship.
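A numerical sketch in the spirit of this record (with assumed numbers, not the authors' example): true risk has a sharp threshold at exposure category 3, but if half of the subjects under-report their exposure by one category, independently of outcome, the observed risks climb gradually across categories, mimicking a dose-response relationship.

```python
# True threshold effect + non-differential underestimation of exposure
# -> apparent dose-response gradient. All numbers are assumed.
true_risk = {0: 0.05, 1: 0.05, 2: 0.05, 3: 0.20, 4: 0.20}  # threshold at 3
n_per_cat = 1000
under_report_frac = 0.5   # half report one category too low

cases = {c: 0.0 for c in true_risk}
totals = {c: 0.0 for c in true_risk}
for true_cat, risk in true_risk.items():
    reported_low = max(true_cat - 1, 0)
    for reported, frac in ((reported_low, under_report_frac),
                           (true_cat, 1 - under_report_frac)):
        totals[reported] += frac * n_per_cat
        cases[reported] += frac * n_per_cat * risk

observed = {c: round(cases[c] / totals[c], 3) for c in sorted(totals)}
print(observed)   # risk rises smoothly instead of jumping at the threshold
```

Category 2 now shows an intermediate risk because it mixes correctly classified low-risk subjects with high-risk under-reporters from category 3, which is exactly how a step function is smeared into an apparent gradient.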

  20. Body Size Estimation from Early to Middle Childhood: Stability of Underestimation, BMI, and Gender Effects

    Directory of Open Access Journals (Sweden)

    Silje Steinsbekk

    2017-11-01

    Individuals who are overweight are more likely to underestimate their body size than those who are normal weight, and overweight underestimators are less likely to engage in weight loss efforts. Underestimation of body size might represent a barrier to prevention and treatment of overweight; thus, insight into how underestimation of body size develops and tracks through the childhood years is needed. The aim of the present study was therefore to examine stability in children's underestimation of body size, exploring predictors of underestimation over time. The prospective path from underestimation to BMI was also tested. In a Norwegian cohort of 6-year-olds, followed up at ages 8 and 10 (analysis sample: n = 793), body size estimation was captured by the Children's Body Image Scale, and height and weight were measured and BMI calculated. Overall, children were more likely to underestimate than overestimate their body size. Individual stability in underestimation was modest, but significant. Higher BMI predicted future underestimation, even when previous underestimation was adjusted for, but there was no evidence for the opposite direction of influence. Boys were more likely than girls to underestimate their body size at ages 8 and 10 (age 8: 38.0% vs. 24.1%; age 10: 57.9% vs. 30.8%) and showed a steeper increase in underestimation with age compared to girls. In conclusion, the majority of 6-, 8-, and 10-year-olds correctly estimate their body size (prevalence ranging from 40 to 70% depending on age and gender), although a substantial portion perceived themselves to be thinner than they actually were. Higher BMI forecasted future underestimation, but underestimation did not increase the risk for excessive weight gain in middle childhood.

  1. The Perception of Time Is Underestimated in Adolescents With Anorexia Nervosa.

    Science.gov (United States)

    Vicario, Carmelo M; Felmingham, Kim

    2018-01-01

    Research has revealed reduced temporal discounting (i.e., increased capacity to delay reward) and altered interoceptive awareness in anorexia nervosa (AN). In line with the research linking temporal underestimation with a reduced tendency to devalue a reward and reduced interoceptive awareness, we tested the hypothesis that time duration might be underestimated in AN. Our findings revealed that patients with AN displayed lower timing accuracy in the form of timing underestimation compared with controls. These results were not predicted by clinical, demographic factors, attention, and working memory performance of the participants. The evidence of a temporal underestimation bias in AN might be clinically relevant to explain their abnormal motivation in pursuing a long-term restrictive diet, in line with the evidence that increasing the subjective temporal proximity of remote future goals can boost motivation and the actual behavior to reach them.

  2. The Perception of Time Is Underestimated in Adolescents With Anorexia Nervosa

    Directory of Open Access Journals (Sweden)

    Carmelo M. Vicario

    2018-04-01

    Full Text Available Research has revealed reduced temporal discounting (i.e., increased capacity to delay reward) and altered interoceptive awareness in anorexia nervosa (AN). In line with the research linking temporal underestimation with a reduced tendency to devalue a reward and reduced interoceptive awareness, we tested the hypothesis that time duration might be underestimated in AN. Our findings revealed that patients with AN displayed lower timing accuracy in the form of timing underestimation compared with controls. These results were not predicted by clinical, demographic factors, attention, and working memory performance of the participants. The evidence of a temporal underestimation bias in AN might be clinically relevant to explain their abnormal motivation in pursuing a long-term restrictive diet, in line with the evidence that increasing the subjective temporal proximity of remote future goals can boost motivation and the actual behavior to reach them.

  3. Underestimation of soil carbon stocks by Yasso07, Q, and CENTURY models in boreal forest linked to overlooking site fertility

    Science.gov (United States)

    Ťupek, Boris; Ortiz, Carina; Hashimoto, Shoji; Stendahl, Johan; Dahlgren, Jonas; Karltun, Erik; Lehtonen, Aleksi

    2016-04-01

    The soil organic carbon (SOC) stock changes estimated by most process-based soil carbon models (e.g., Yasso07, Q, and CENTURY), needed for reporting changes in soil carbon amounts to the United Nations Framework Convention on Climate Change (UNFCCC) and for mitigating anthropogenic CO2 emissions through soil carbon management, can be biased if, in a large mosaic of environments, the models are missing a key factor driving SOC sequestration. To our knowledge, soil nutrient status as a missing driver of these models was not tested in previous studies, although it is known that the models fail to reconstruct spatial variation and that soil nutrient status drives ecosystem carbon use efficiency and soil carbon sequestration. We evaluated SOC stock estimates of the Yasso07, Q, and CENTURY process-based models against field data from the Swedish Forest Soil National Inventories (3230 samples), organized by a recursive partitioning method (RPART) into distinct soil groups with underlying SOC stock development linked to physicochemical conditions. These models worked for most soils with approximately average SOC stocks, but could not reproduce the higher measured SOC stocks in our application. The Yasso07 and Q models, which used only climate and litterfall input data and ignored soil properties, agreed with two-thirds of the measurements. However, in comparison with measurements grouped along the gradient of soil nutrient status, we found that the models underestimated SOC stocks for Swedish boreal forest soils with higher site fertility. Accounting for soil texture (clay, silt, and sand content) and structure (bulk density) in the CENTURY model showed no improvement in carbon stock estimates, as CENTURY deviated in a similar manner. We highlight the mechanisms by which the models deviate from the measurements and ways of considering soil nutrient status in further model development. Our analysis suggested that the models indeed lack other predominant drivers of SOC stabilization.

  4. Underestimation of risk due to exposure misclassification

    DEFF Research Database (Denmark)

    Grandjean, Philippe; Budtz-Jørgensen, Esben; Keiding, Niels

    2004-01-01

    Exposure misclassification constitutes a major obstacle when developing dose-response relationships for risk assessment. A non-differential error results in underestimation of the risk. If the degree of misclassification is known, adjustment may be achieved by sensitivity analysis. The purpose...

  5. Application of blocking diagnosis methods to general circulation models. Part II: model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Barriopedro, D.; Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Lisbon (Portugal); Garcia-Herrera, R.; Gonzalez-Rouco, J.F. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain)

    2010-12-15

    A previously defined automatic method is applied to reanalysis and present-day (1950-1989) forced simulations of the ECHO-G model in order to assess its performance in reproducing atmospheric blocking in the Northern Hemisphere. Unlike previous methodologies, critical parameters and thresholds to estimate blocking occurrence in the model are not calibrated against an observed reference, but objectively derived from the simulated climatology. The choice of model-dependent parameters allows for an objective definition of blocking and corrects for some intrinsic model bias, the difference between model and observed thresholds providing a measure of systematic errors in the model. The model reasonably captures the main blocking features (location, amplitude, annual cycle and persistence) found in observations, but reveals a relative southward shift of Eurasian blocks and an overall underestimation of blocking activity, especially over the Euro-Atlantic sector. Blocking underestimation mostly arises from the model's inability to generate long persistent blocks with the observed frequency. This error is mainly attributed to a bias in the basic state. The bias pattern consists of excessive zonal winds over the Euro-Atlantic sector and a southward shift at the exit zone of the jet stream extending into the Eurasian continent, which are more prominent in the cold and warm seasons and account for much of the Euro-Atlantic and Eurasian blocking errors, respectively. It is shown that other widely used blocking indices or empirical observational thresholds may not give a proper account of the lack of realism in the model as compared with the proposed method. This suggests that, in addition to blocking changes that could be ascribed to natural variability processes or climate change signals in the simulated climate, attention should be paid to significant departures in the diagnosis of phenomena that can also arise from an inappropriate adaptation of detection methods to the climate of the model.

  6. Linear-quadratic model underestimates sparing effect of small doses per fraction in rat spinal cord

    International Nuclear Information System (INIS)

    Shun Wong, C.; Toronto University; Minkin, S.; Hill, R.P.; Toronto University

    1993-01-01

    The application of the linear-quadratic (LQ) model to describe iso-effective fractionation schedules for dose fraction sizes less than 2 Gy has been controversial. Experiments are described in which the effect of daily fractionated irradiation given with a wide range of fraction sizes was assessed in the rat cervical spinal cord. The first group of rats was given doses in 1, 2, 4, 8 and 40 fractions/day. The second group received 3 initial 'top-up' doses of 9 Gy given once daily, representing 3/4 tolerance, followed by doses in 1, 2, 10, 20, 30 and 40 fractions/day. The fractionated portion of the irradiation schedule therefore constituted only the final quarter of the tolerance dose. The endpoint of the experiments was paralysis of the forelimbs secondary to white matter necrosis. Direct analysis of data from experiments with full-course fractionation up to 40 fractions/day (25.0-1.98 Gy/fraction) indicated consistency with the LQ model, yielding an α/β value of 2.41 Gy. Analysis of data from experiments in which the 3 'top-up' doses were followed by up to 10 fractions (10.0-1.64 Gy/fraction) gave an α/β value of 3.41 Gy. However, data from 'top-up' experiments with 20, 30 and 40 fractions (1.60-0.55 Gy/fraction) were inconsistent with the LQ model and gave a very small α/β of 0.48 Gy. It is concluded that the LQ model based on data from large doses/fraction underestimates the sparing effect of small doses/fraction, provided sufficient time is allowed between fractions for repair of sublethal damage. (author). 28 refs., 5 figs., 1 tab
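
    For reference, the LQ isoeffect framework tested in this abstract can be stated compactly (standard radiobiology formulation, not reproduced from the paper itself):

```latex
% Effect of n fractions of size d (total dose D = nd):
E = n\left(\alpha d + \beta d^{2}\right)
% Iso-effective schedules hold the biologically effective dose constant:
\mathrm{BED} = \frac{E}{\alpha} = D\left(1 + \frac{d}{\alpha/\beta}\right)
```

    A small fitted α/β (such as the 0.48 Gy reported for the smallest fractions) makes the d/(α/β) term change rapidly with fraction size, i.e., it implies a much stronger sparing effect of small doses per fraction than the α/β of 2-3 Gy fitted at larger fraction sizes would predict.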

  7. Commonly used reference values underestimate oxygen uptake in healthy, 50-year-old Swedish women.

    Science.gov (United States)

    Genberg, M; Andrén, B; Lind, L; Hedenström, H; Malinovschi, A

    2018-01-01

    Cardiopulmonary exercise testing (CPET) is the gold standard among clinical exercise tests. It combines a conventional stress test with measurement of oxygen uptake (VO2) and CO2 production. No validated Swedish reference values exist, and reference values in women are generally understudied. Moreover, the importance of the achieved respiratory exchange ratio (RER) and the significance of breathing reserve (BR) at peak exercise in healthy individuals are poorly understood. We compared VO2 at maximal load (peakVO2) and at the anaerobic threshold (VO2@AT) in healthy Swedish individuals with commonly used reference values, taking gender into account. Further, we analysed maximal workload and peakVO2 with regard to peak RER and BR. In all, 181 healthy, 50-year-old individuals (91 women) performed CPET. PeakVO2 was best predicted using Jones et al. (100.5%), while SHIP reference values underestimated peakVO2 most: 112.5%. Furthermore, underestimation of peakVO2 in women was found for all studied reference values. Maximal workload and peakVO2 did not differ significantly between participants who did and did not reach RER > 1.1 (2328.7 versus 2176.7 ml min-1, P = 0.11). Lower BR (≤30%) was related to significantly higher peakVO2. In conclusion, commonly used reference values underestimated oxygen uptake in women. No evidence for demanding RER > 1.1 in healthy individuals was found. A lowered BR is probably a normal response to higher workloads in healthy individuals. © 2016 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.

  8. Underestimation of boreal soil carbon stocks by mathematical soil carbon models linked to soil nutrient status

    Science.gov (United States)

    Ťupek, Boris; Ortiz, Carina A.; Hashimoto, Shoji; Stendahl, Johan; Dahlgren, Jonas; Karltun, Erik; Lehtonen, Aleksi

    2016-08-01

    Inaccurate estimation of the largest terrestrial carbon pool, the soil organic carbon (SOC) stock, is the major source of uncertainty in simulating the feedback of climate warming on ecosystem-atmosphere carbon dioxide exchange by process-based ecosystem and soil carbon models. Although the models need to simplify complex environmental processes of soil carbon sequestration, in a large mosaic of environments a missing key driver could lead to a modeling bias in predictions of SOC stock change. We aimed to evaluate SOC stock estimates of process-based models (Yasso07, Q, and CENTURY soil sub-model v4) against a massive Swedish forest soil inventory data set (3230 samples) organized by a recursive partitioning method into distinct soil groups with underlying SOC stock development linked to physicochemical conditions. For two-thirds of measurements all models predicted accurate SOC stock levels regardless of the detail of input data, e.g., whether they ignored or included soil properties. However, in fertile sites with high N deposition, high cation exchange capacity, or moderately increased soil water content, the Yasso07 and Q models underestimated SOC stocks. In comparison to Yasso07 and Q, accounting for site-specific soil characteristics (e.g., clay content and topsoil mineral N) by CENTURY improved SOC stock estimates for sites with high clay content, but not for sites with high N deposition. Our analysis suggested that the soils with poorly predicted SOC stocks, as characterized by high nutrient status and well-sorted parent material, indeed have had other predominant drivers of SOC stabilization lacking in the models, presumably mycorrhizal organic uptake and organo-mineral stabilization processes. Our results imply that the role of soil nutrient status as a regulator of organic matter mineralization has to be re-evaluated, since correct SOC stocks are decisive for predicting future SOC change and soil CO2 efflux.

  9. The underestimated potential of solar energy to mitigate climate change

    Science.gov (United States)

    Creutzig, Felix; Agoston, Peter; Goldschmidt, Jan Christoph; Luderer, Gunnar; Nemet, Gregory; Pietzcker, Robert C.

    2017-09-01

    The Intergovernmental Panel on Climate Change's fifth assessment report emphasizes the importance of bioenergy and carbon capture and storage for achieving climate goals, but it does not identify solar energy as a strategically important technology option. That is surprising given the strong growth, large resource, and low environmental footprint of photovoltaics (PV). Here we explore how models have consistently underestimated PV deployment and identify the reasons for underlying bias in models. Our analysis reveals that rapid technological learning and technology-specific policy support were crucial to PV deployment in the past, but that future success will depend on adequate financing instruments and the management of system integration. We propose that with coordinated advances in multiple components of the energy system, PV could supply 30-50% of electricity in competitive markets.

  10. Guiding exploration in conformational feature space with Lipschitz underestimation for ab-initio protein structure prediction.

    Science.gov (United States)

    Hao, Xiaohu; Zhang, Guijun; Zhou, Xiaogen

    2018-04-01

    Computing conformations, which is essential for associating structural and functional information with gene sequences, is challenging due to the high dimensionality and rugged energy surface of the protein conformational space. Consequently, the dimension of the protein conformational space should be reduced to a proper level, and an effective exploration algorithm should be proposed. In this paper, a plug-in method for guiding exploration in conformational feature space with Lipschitz underestimation (LUE) for ab-initio protein structure prediction is proposed. The conformational space is first converted into an ultrafast shape recognition (USR) feature space. Based on the USR feature space, the conformational space can be further converted into an underestimation space according to Lipschitz estimation theory for guiding exploration. As a consequence of using the underestimation model, the tight lower-bound estimate can be used to guide exploration, invalid sampling areas can be eliminated in advance, and the number of energy function evaluations can be reduced. The proposed method provides a novel technique for solving the exploration problem of protein conformational space. LUE is applied to the differential evolution (DE) algorithm and to the Metropolis Monte Carlo (MMC) algorithm available in Rosetta; when LUE is applied to DE and MMC, candidate conformations are screened by the underestimation method prior to energy calculation and selection. Further, LUE is compared with DE and MMC by testing on 15 small-to-medium structurally diverse proteins. Test results show that near-native protein structures with higher accuracy can be obtained more rapidly and efficiently with the use of LUE. Copyright © 2018 Elsevier Ltd. All rights reserved.
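
    The screening idea above can be sketched in a few lines. This is an illustrative reconstruction under stated assumptions (the Lipschitz constant `L`, the feature vectors, and all function names are hypothetical), not the authors' implementation:

```python
import numpy as np

def lipschitz_lower_bound(x, samples, energies, L):
    """Tight lower bound on the energy at feature point x, built from
    already-evaluated samples via |f(x) - f(x_i)| <= L * ||x - x_i||."""
    dists = np.linalg.norm(samples - x, axis=1)
    return np.max(energies - L * dists)

def worth_evaluating(x, samples, energies, L, best_energy):
    # Skip the expensive energy evaluation when even the optimistic
    # lower bound cannot improve on the best conformation found so far.
    return lipschitz_lower_bound(x, samples, energies, L) < best_energy
```

    In a DE or MMC loop, candidates failing `worth_evaluating` would be discarded before the energy call, which is how the number of energy function evaluations is reduced.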

  11. Tritium: an underestimated health risk- 'ACROnic du nucleaire' nr 85, June 2009

    International Nuclear Information System (INIS)

    Barbey, Pierre

    2009-06-01

    After having indicated how tritium released into the environment (in the form of tritiated water or tritiated gas) is absorbed by living species, the author describes the different biological effects of ionizing radiation and the risk associated with tritium. He evokes how the radiation protection system is designed with respect to standards, and outlines how the risk related to tritium is underestimated by different existing models and standards. The author discusses the consequences of tritium transmutation and of the isotopic effect.

  12. Confounding environmental colour and distribution shape leads to underestimation of population extinction risk.

    Science.gov (United States)

    Fowler, Mike S; Ruokolainen, Lasse

    2013-01-01

    The colour of environmental variability influences the size of population fluctuations when filtered through density dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical feedback) models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample Skewness and Kurtosis and decreasing mean Kurtosis of these series alter the frequency distribution shape of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in the distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risks in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations. We must let
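
    As a concrete illustration of the AR(1) method discussed above (a minimal sketch under standard assumptions, not the authors' code), a unit-variance AR(1) series with autocorrelation parameter kappa can be generated as:

```python
import numpy as np

def ar1_series(n, kappa, rng):
    """AR(1) noise: x_t = kappa*x_{t-1} + sqrt(1 - kappa^2)*eps_t.
    kappa > 0 gives red (slowly varying) noise, kappa < 0 blue,
    kappa = 0 white; the sqrt term keeps the variance at 1."""
    eps = rng.standard_normal(n)
    x = np.empty(n)
    x[0] = eps[0]
    for t in range(1, n):
        x[t] = kappa * x[t - 1] + np.sqrt(1.0 - kappa ** 2) * eps[t]
    return x
```

    The distribution-shape effects described in the abstract concern the sample skewness and kurtosis of such series at finite time-scales, which is what the spectral-mimicry control (reordering a normally distributed series to match a target spectrum) holds fixed.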

  13. High-risk lesions diagnosed at MRI-guided vacuum-assisted breast biopsy: can underestimation be predicted?

    Energy Technology Data Exchange (ETDEWEB)

    Crystal, Pavel [Mount Sinai Hospital, University Health Network, Division of Breast Imaging, Toronto, ON (Canada); Mount Sinai Hospital, Toronto, ON (Canada); Sadaf, Arifa; Bukhanov, Karina; Helbich, Thomas H. [Mount Sinai Hospital, University Health Network, Division of Breast Imaging, Toronto, ON (Canada); McCready, David [Princess Margaret Hospital, Department of Surgical Oncology, Toronto, ON (Canada); O'Malley, Frances [Mount Sinai Hospital, Department of Pathology, Laboratory Medicine, Toronto, ON (Canada)

    2011-03-15

    To evaluate the frequency of diagnosis of high-risk lesions at MRI-guided vacuum-assisted breast biopsy (MRgVABB) and to determine whether underestimation may be predicted. Retrospective review of the medical records of 161 patients who underwent MRgVABB was performed. The underestimation rate was defined as an upgrade of a high-risk lesion at MRgVABB to malignancy at surgery. Clinical data, MRI features of the biopsied lesions, and histological diagnosis of cases with and those without underestimation were compared. Of 161 MRgVABB, histology revealed 31 (19%) high-risk lesions. Of 26 excised high-risk lesions, 13 (50%) were upgraded to malignancy. The underestimation rates of lobular neoplasia, atypical apocrine metaplasia, atypical ductal hyperplasia, and flat epithelial atypia were 50% (4/8), 100% (5/5), 50% (3/6) and 50% (1/2) respectively. There was no underestimation in the cases of benign papilloma without atypia (0/3), and radial scar (0/2). No statistically significant differences (p > 0.1) between the cases with and those without underestimation were seen in patient age, indications for breast MRI, size of lesion on MRI, morphological and kinetic features of biopsied lesions. Imaging and clinical features cannot be used reliably to predict underestimation at MRgVABB. All high-risk lesions diagnosed at MRgVABB require surgical excision. (orig.)

  14. A Large Underestimate of Formic Acid from Tropical Fires: Constraints from Space-Borne Measurements.

    Science.gov (United States)

    Chaliyakunnel, S; Millet, D B; Wells, K C; Cady-Pereira, K E; Shephard, M W

    2016-06-07

    Formic acid (HCOOH) is one of the most abundant carboxylic acids and a dominant source of atmospheric acidity. Recent work indicates a major gap in the HCOOH budget, with atmospheric concentrations much larger than expected from known sources. Here, we employ recent space-based observations from the Tropospheric Emission Spectrometer with the GEOS-Chem atmospheric model to better quantify the HCOOH source from biomass burning, and assess whether fire emissions can help close the large budget gap for this species. The space-based data reveal a severe model HCOOH underestimate most prominent over tropical burning regions, suggesting a major missing source of organic acids from fires. We develop an approach for inferring the fractional fire contribution to ambient HCOOH and find, based on measurements over Africa, that pyrogenic HCOOH:CO enhancement ratios are much higher than expected from direct emissions alone, revealing substantial secondary organic acid production in fire plumes. Current models strongly underestimate (by 10 ± 5 times) the total primary and secondary HCOOH source from African fires. If a 10-fold bias were to extend to fires in other regions, biomass burning could produce 14 Tg/a of HCOOH in the tropics or 16 Tg/a worldwide. However, even such an increase would only represent 15-20% of the total required HCOOH source, implying the existence of other larger missing sources.
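
    The enhancement-ratio approach referred to above can be illustrated with a toy calculation (hypothetical mixing ratios and function names; the study derives these quantities from satellite retrievals):

```python
def enhancement_ratio(x_plume, x_bg, co_plume, co_bg):
    """Normalized excess ratio dX/dCO: the enhancement of species X over
    background per unit enhancement of the co-emitted tracer CO."""
    return (x_plume - x_bg) / (co_plume - co_bg)

def fire_fraction(x_ambient, co_ambient, co_bg, er_fire):
    """Fraction of ambient X attributable to fires, inferred from the
    local CO enhancement and a pyrogenic X:CO enhancement ratio."""
    return er_fire * (co_ambient - co_bg) / x_ambient
```

    An HCOOH:CO enhancement ratio well above the directly emitted ratio, as found over Africa, is the signature of secondary organic acid production within the plume.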

  15. Calorie Underestimation When Buying High-Calorie Beverages in Fast-Food Contexts.

    Science.gov (United States)

    Franckle, Rebecca L; Block, Jason P; Roberto, Christina A

    2016-07-01

    We asked 1877 adults and 1178 adolescents visiting 89 fast-food restaurants in New England in 2010 and 2011 to estimate calories purchased. Calorie underestimation was greater among those purchasing a high-calorie beverage than among those who did not (adults: 324 ±698 vs 102 ±591 calories; adolescents: 360 ±602 vs 198 ±509 calories). This difference remained significant for adults but not adolescents after adjusting for total calories purchased. Purchasing high-calorie beverages may uniquely contribute to calorie underestimation among adults.

  16. Satellite methods underestimate indirect climate forcing by aerosols

    Science.gov (United States)

    Penner, Joyce E.; Xu, Li; Wang, Minghuai

    2011-01-01

    Satellite-based estimates of the aerosol indirect effect (AIE) are consistently smaller than the estimates from global aerosol models, and, partly as a result of these differences, the assessment of this climate forcing includes large uncertainties. Satellite estimates typically use the present-day (PD) relationship between observed cloud drop number concentrations (Nc) and aerosol optical depths (AODs) to determine the preindustrial (PI) values of Nc. These values are then used to determine the PD and PI cloud albedos and, thus, the effect of anthropogenic aerosols on top of the atmosphere radiative fluxes. Here, we use a model with realistic aerosol and cloud processes to show that empirical relationships for ln(Nc) versus ln(AOD) derived from PD results do not represent the atmospheric perturbation caused by the addition of anthropogenic aerosols to the preindustrial atmosphere. As a result, the model estimates based on satellite methods of the AIE are between a factor of 3 to more than a factor of 6 smaller than model estimates based on actual PD and PI values for Nc. Using ln(Nc) versus ln(AI) (Aerosol Index, or the optical depth times angstrom exponent) to estimate preindustrial values for Nc provides estimates for Nc and forcing that are closer to the values predicted by the model. Nevertheless, the AIE using ln(Nc) versus ln(AI) may be substantially incorrect on a regional basis and may underestimate or overestimate the global average forcing by 25 to 35%. PMID:21808047
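
    The satellite method described above reduces to a log-log regression; a minimal sketch with synthetic data (function names assumed, not from the paper):

```python
import numpy as np

def fit_lnln(aod, nc):
    """Least-squares fit of ln(Nc) = a + b*ln(AOD). Satellite AIE methods
    apply the present-day slope b to preindustrial AOD to infer PI Nc."""
    b, a = np.polyfit(np.log(aod), np.log(nc), 1)
    return a, b

def infer_nc(aod, a, b):
    # Invert the fitted relationship to predict drop number concentration.
    return np.exp(a + b * np.log(aod))
```

    The paper's point is that a slope fitted only to present-day data does not represent the perturbation between the preindustrial and present-day atmospheres, so the inferred PI Nc, and hence the forcing, is biased.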

  17. Stress underestimation and mental health literacy of depression in Japanese workers: A cross-sectional study.

    Science.gov (United States)

    Nakamura-Taira, Nanako; Izawa, Shuhei; Yamada, Kosuke Chris

    2018-04-01

    Appropriately estimating stress levels in daily life is important for motivating people to undertake stress-management behaviors or seek out information on stress management and mental health. People who exhibit high stress underestimation might not be interested in information on mental health, and would therefore have less knowledge of it. We investigated the association between stress underestimation tendency and mental health literacy of depression (i.e., knowledge of the recognition, prognosis, and usefulness of resources of depression) in Japanese workers. We cross-sectionally surveyed 3718 Japanese workers using a web-based questionnaire on stress underestimation, mental health literacy of depression (vignettes on people with depression), and covariates (age, education, depressive symptoms, income, and worksite size). After adjusting for covariates, high stress underestimation was associated with greater odds of not recognizing depression (i.e., choosing anything other than depression). Furthermore, these individuals had greater odds of expecting the case to improve without treatment and not selecting useful sources of support (e.g. talk over with friends/family, see a psychiatrist, take medication, see a counselor) compared to those with moderate stress underestimation. These relationships were all stronger among males than among females. Stress underestimation was related to poorer mental health literacy of depression. Copyright © 2018 Elsevier B.V. All rights reserved.

  18. Parental and Child Factors Associated with Under-Estimation of Children with Excess Weight in Spain.

    Science.gov (United States)

    de Ruiter, Ingrid; Olmedo-Requena, Rocío; Jiménez-Moleón, José Juan

    2017-11-01

    Objective: Understanding obesity misperception and associated factors can improve strategies to increase obesity identification and intervention. We investigate underestimation of child excess weight with a broader perspective, incorporating perceptions, views, and psychosocial aspects associated with obesity. Methods: This study used cross-sectional data from the Spanish National Health Survey in 2011-2012 for children aged 2-14 years who were overweight or obese. Percentages of parental misperceived excess weight were calculated. Crude and adjusted analyses were performed for both child and parental factors, analyzing associations with underestimation. Results: Two- to five-year-olds had the highest prevalence of misperceived overweight or obesity, around 90%. In the 10-14-year-old age group, approximately 63% of overweight teens were misperceived as normal weight, as were 35.7% and 40% of obese males and females. Child gender did not affect underestimation, whereas a younger age did. Aspects of child social and mental health were associated with underestimation, as was short sleep duration. Exercise, weekend TV and videogames, and food habits had no effect on underestimation. Fathers were more likely to misperceive their child's weight status; however, parents' age had no effect. Smokers and parents with excess weight were less likely to misperceive their child's weight status. Parents being on a diet also decreased the odds of underestimation. Conclusions for practice: This study identifies some characteristics of both parents and children which are associated with underestimation of child excess weight. These characteristics can be used for consideration in primary care, prevention strategies, and further research.

  19. The role of underestimating body size for self-esteem and self-efficacy among grade five children in Canada.

    Science.gov (United States)

    Maximova, Katerina; Khan, Mohammad K A; Austin, S Bryn; Kirk, Sara F L; Veugelers, Paul J

    2015-10-01

    Underestimating body size hinders healthy behavior modification needed to prevent obesity. However, initiatives to improve body size misperceptions may have detrimental consequences on self-esteem and self-efficacy. Using sex-specific multiple mixed-effect logistic regression models, we examined the association of underestimating versus accurate body size perceptions with self-esteem and self-efficacy in a provincially representative sample of 5075 grade five school children. Body size perceptions were defined as the standardized difference between the body mass index (BMI, from measured height and weight) and self-perceived body size (Stunkard body rating scale). Self-esteem and self-efficacy for physical activity and healthy eating were self-reported. Most of overweight boys and girls (91% and 83%); and most of obese boys and girls (93% and 90%) underestimated body size. Underestimating weight was associated with greater self-efficacy for physical activity and healthy eating among normal-weight children (odds ratio: 1.9 and 1.6 for boys, 1.5 and 1.4 for girls) and greater self-esteem among overweight and obese children (odds ratio: 2.0 and 6.2 for boys, 2.0 and 3.4 for girls). Results highlight the importance of developing optimal intervention strategies as part of targeted obesity prevention efforts that de-emphasize the focus on body weight, while improving body size perceptions. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Tick-borne encephalitis (TBE): an underestimated risk…still: report of the 14th annual meeting of the International Scientific Working Group on Tick-Borne Encephalitis (ISW-TBE).

    Science.gov (United States)

    Kunze, Ursula

    2012-06-01

    Today, the risk of getting tick-borne encephalitis (TBE) is still underestimated in many parts of Europe and worldwide. Therefore, the 14th meeting of the International Scientific Working Group on Tick-Borne Encephalitis (ISW-TBE) - a group of neurologists, general practitioners, clinicians, travel physicians, virologists, pediatricians, and epidemiologists - was held under the title "Tick-borne encephalitis: an underestimated risk…still". Among the discussed issues were: TBE, an underestimated risk in children, a case report in two Dutch travelers, the very emotional report of a tick victim, an overview of the epidemiological situation, investigations to detect new TBE cases in Italy, TBE virus (TBEV) strains circulation in Northern Europe, TBE Program of the European Centre for Disease Prevention and Control (ECDC), efforts to increase the TBE vaccination rate in the Czech Republic, positioning statement of the World Health Organization (WHO), and TBE in dogs. To answer the question raised above: Yes, the risk of getting TBE is underestimated in children and adults, because awareness is still too low. It is still underestimated in several areas of Europe, where, for a lack of human cases, TBEV is thought to be absent. It is underestimated in travelers, because they still do not know enough about the risk, and diagnostic awareness in non-endemic countries is still low. Copyright © 2012. Published by Elsevier GmbH. All rights reserved.

  1. Predictive equations underestimate resting energy expenditure in female adolescents with phenylketonuria

    Science.gov (United States)

    Quirk, Meghan E.; Schmotzer, Brian J.; Singh, Rani H.

    2010-01-01

    Resting energy expenditure (REE) is often used to estimate total energy needs. The Schofield equation based on weight and height has been reported to underestimate REE in female children with phenylketonuria (PKU). The objective of this observational, cross-sectional study was to evaluate the agreement of measured REE with predicted REE for female adolescents with PKU. A total of 36 females (aged 11.5-18.7 years) with PKU attending Emory University’s Metabolic Camp (June 2002 – June 2008) underwent indirect calorimetry. Measured REE was compared to six predictive equations using paired Student’s t-tests, regression-based analysis, and assessment of clinical accuracy. The differences between measured and predicted REE were modeled against clinical parameters to determine if a relationship existed. All six selected equations significantly underpredicted measured REE (P < 0.005). The Schofield equation based on weight had the greatest level of agreement, with the lowest mean prediction bias (144 kcal) and the highest concordance correlation coefficient (0.626). However, the Schofield equation based on weight lacked clinical accuracy, predicting measured REE within ±10% in only 14 of 36 participants. Clinical parameters were not associated with bias for any of the equations. Predictive equations underestimated measured REE in this group of female adolescents with PKU. Currently, there is no accurate and precise alternative to indirect calorimetry in this population. PMID:20497783
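
    As context for the weight-based Schofield prediction discussed above, a sketch of the calculation. The coefficients are the commonly tabulated ones for females aged 10-18 years, included here as an assumption to be verified against the original Schofield tables:

```python
def schofield_ree_female_10_18(weight_kg):
    """Weight-based Schofield REE prediction (kcal/day) for females aged
    10-18 years; 13.384*W + 692.6 as commonly tabulated (assumption)."""
    return 13.384 * weight_kg + 692.6

def clinically_accurate(measured_kcal, predicted_kcal, tol=0.10):
    # The abstract's clinical-accuracy criterion: within +/-10% of measured.
    return abs(predicted_kcal - measured_kcal) <= tol * measured_kcal
```

    Under this criterion, a predicted value must fall within ±10% of the indirect-calorimetry measurement to count as clinically accurate, which only 14 of 36 predictions in the study did.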

  2. Underestimation of Severity of Previous Whiplash Injuries

    Science.gov (United States)

    Naqui, SZH; Lovell, SJ; Lovell, ME

    2008-01-01

    INTRODUCTION We noted a report that more significant symptoms may be expressed after a second whiplash injury through a suggested cumulative effect, including degeneration. We wondered if patients were underestimating the severity of their earlier injury. PATIENTS AND METHODS We studied recent medicolegal reports to assess subjects with a second whiplash injury. They had been asked whether their earlier injury was worse, the same, or lesser in severity. RESULTS From the study cohort, 101 patients (87%) felt that they had fully recovered from their first injury and 15 (13%) had not. Seventy-six subjects considered their first injury of lesser severity, 24 worse and 16 the same. Of the 24 who felt the violence of their first accident was worse, only 8 had worse symptoms, and 16 felt their symptoms were mainly the same as or less than their symptoms from their second injury. Statistical analysis of the data revealed that the proportion of those claiming a difference who said the previous injury was lesser was 76% (95% CI 66–84%). The observed proportion with a lesser injury was considerably higher than the 50% anticipated. CONCLUSIONS We feel that subjects may underestimate the severity of an earlier injury and associated symptoms. Reasons for this may include secondary gain rather than any proposed cumulative effect. PMID:18201501

  3. Individuals underestimate moderate and vigorous intensity physical activity.

    Directory of Open Access Journals (Sweden)

    Karissa L Canning

    Full Text Available BACKGROUND: It is unclear whether the common physical activity (PA) intensity descriptors used in PA guidelines worldwide align with the associated percent heart rate maximum (%HRmax) method used for prescribing relative PA intensities consistently between sexes, ethnicities, age categories and across body mass index (BMI) classifications. OBJECTIVES: The objectives of this study were to determine whether individuals properly select light, moderate and vigorous intensity PA using the intensity descriptions in PA guidelines and to determine if there are differences in estimation across sex, ethnicity, age and BMI classifications. METHODS: 129 adults were instructed to walk/jog at a "light," "moderate" and "vigorous effort" in a randomized order. The PA intensities were categorized as being below, at or above the following %HRmax ranges: 50-63% for light, 64-76% for moderate and 77-93% for vigorous effort. RESULTS: On average, people correctly estimated light effort as 51.5±8.3%HRmax but underestimated moderate effort as 58.7±10.7%HRmax and vigorous effort as 69.9±11.9%HRmax. Participants walked at a light intensity (57.4±10.5%HRmax) when asked to walk at a pace that provided health benefits, wherein 52% of participants walked at a light effort pace, 19% walked at a moderate effort and 5% walked at a vigorous effort pace. These results did not differ by sex, ethnicity or BMI class. However, younger adults underestimated moderate and vigorous intensity more so than middle-aged adults (P < 0.05). CONCLUSION: When the common PA guideline descriptors were aligned with the associated %HRmax ranges, the majority of participants underestimated the intensity of PA that is needed to obtain health benefits. Thus, new subjective descriptions for moderate and vigorous intensity may be warranted to aid individuals in correctly interpreting PA intensities.
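
    The banding logic in the abstract (light 50–63%, moderate 64–76%, vigorous 77–93% of HRmax) can be sketched as a small classifier; the function and variable names are illustrative, not from the paper:

    ```python
    # Sketch of the study's %HRmax banding. A self-selected pace is scored
    # as "below", "at", or "above" the target intensity band.
    # Band boundaries are taken from the abstract; everything else is invented.

    BANDS = {"light": (50.0, 63.0), "moderate": (64.0, 76.0), "vigorous": (77.0, 93.0)}

    def classify(pct_hrmax, target):
        """Return whether a measured %HRmax is below/at/above the target band."""
        lo, hi = BANDS[target]
        if pct_hrmax < lo:
            return "below"
        if pct_hrmax > hi:
            return "above"
        return "at"

    # The study's mean self-selected efforts, scored against their bands:
    light_result = classify(51.5, "light")        # light effort chosen correctly
    moderate_result = classify(58.7, "moderate")  # moderate effort underestimated
    vigorous_result = classify(69.9, "vigorous")  # vigorous effort underestimated
    ```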

  4. Development and evaluation of a prediction model for underestimated invasive breast cancer in women with ductal carcinoma in situ at stereotactic large core needle biopsy.

    Directory of Open Access Journals (Sweden)

    Suzanne C E Diepstraten

    Full Text Available BACKGROUND: We aimed to develop a multivariable model for prediction of underestimated invasiveness in women with ductal carcinoma in situ at stereotactic large core needle biopsy, that can be used to select patients for sentinel node biopsy at primary surgery. METHODS: From the literature, we selected potential preoperative predictors of underestimated invasive breast cancer. Data of patients with nonpalpable breast lesions who were diagnosed with ductal carcinoma in situ at stereotactic large core needle biopsy, drawn from the prospective COBRA (Core Biopsy after RAdiological localization) and COBRA2000 cohort studies, were used to fit the multivariable model and assess its overall performance, discrimination, and calibration. RESULTS: 348 women with large core needle biopsy-proven ductal carcinoma in situ were available for analysis. In 100 (28.7%) patients invasive carcinoma was found at subsequent surgery. Nine predictors were included in the model. In the multivariable analysis, the predictors with the strongest association were lesion size (OR 1.12 per cm, 95% CI 0.98-1.28), number of cores retrieved at biopsy (OR per core 0.87, 95% CI 0.75-1.01), presence of lobular cancerization (OR 5.29, 95% CI 1.25-26.77), and microinvasion (OR 3.75, 95% CI 1.42-9.87). The overall performance of the multivariable model was poor, with an explained variation of 9% (Nagelkerke's R²), mediocre discrimination with an area under the receiver operating characteristic curve of 0.66 (95% confidence interval 0.58-0.73), and fairly good calibration. CONCLUSION: The evaluation of our multivariable prediction model in a large, clinically representative study population proves that routine clinical and pathological variables are not suitable to select patients with large core needle biopsy-proven ductal carcinoma in situ for sentinel node biopsy during primary surgery.

  5. Confounding environmental colour and distribution shape leads to underestimation of population extinction risk.

    Directory of Open Access Journals (Sweden)

    Mike S Fowler

    Full Text Available The colour of environmental variability influences the size of population fluctuations when filtered through density dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments, and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical) feedback models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample Skewness and Kurtosis and decreasing mean Kurtosis of these series alter the frequency distribution shape of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow-growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in the distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risks in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations.
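
    A standard AR(1) colour generator of the kind the abstract refers to can be sketched as follows (a textbook formulation, not the authors' code). The parameter k reddens the series for k > 0, blues it for k < 0, and recovers white noise at k = 0; the sqrt(1 - k²) factor keeps the stationary variance equal to that of the driving white noise:

    ```python
    # Minimal AR(1) coloured-noise sketch: x[t] = k*x[t-1] + sqrt(1-k^2)*e[t],
    # with e[t] ~ N(0, 1). k > 0 gives a "red" (positively autocorrelated)
    # series, k < 0 a "blue" one, k = 0 white noise.
    import math
    import random

    def ar1_series(n, k, seed=42):
        rng = random.Random(seed)
        scale = math.sqrt(1.0 - k * k)
        x = [rng.gauss(0.0, 1.0)]
        for _ in range(n - 1):
            x.append(k * x[-1] + scale * rng.gauss(0.0, 1.0))
        return x

    red = ar1_series(10_000, 0.7)   # positively autocorrelated ("red") series
    ```

    For long series the sample lag-1 autocorrelation approaches k, which is one simple way to check such a generator.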

  6. Did the Stern Review underestimate US and global climate damages?

    International Nuclear Information System (INIS)

    Ackerman, Frank; Stanton, Elizabeth A.; Hope, Chris; Alberth, Stephane

    2009-01-01

    The Stern Review received widespread attention for its innovative approach to the economics of climate change when it appeared in 2006, and generated controversies that have continued to this day. One key controversy concerns the magnitude of the expected impacts of climate change. Stern's estimates, based on results from the PAGE2002 model, appeared substantially greater than those produced by many other models, leading several critics to suggest that Stern had inflated his damage figures. We reached the opposite conclusion in a recent application of PAGE2002 in a study of the costs to the US economy of inaction on climate change. This article describes our revisions to the PAGE estimates, and explains our conclusion that the model runs used in the Stern Review may well underestimate US and global damages. Stern's estimates from PAGE2002 implied that mean business-as-usual damages in 2100 would represent just 0.4 percent of GDP for the United States and 2.2 percent of GDP for the world. Our revisions and reinterpretation of the PAGE model imply that climate damages in 2100 could reach 2.6 percent of GDP for the United States and 10.8 percent for the world.

  7. Stress Underestimation and Mental Health Outcomes in Male Japanese Workers: a 1-Year Prospective Study.

    Science.gov (United States)

    Izawa, Shuhei; Nakamura-Taira, Nanako; Yamada, Kosuke Chris

    2016-12-01

    Being appropriately aware of the extent of stress experienced in daily life is essential in motivating stress management behaviours. Excessive stress underestimation obstructs this process, which is expected to exert adverse effects on health. We prospectively examined associations between stress underestimation and mental health outcomes in Japanese workers. Web-based surveys were conducted twice with an interval of 1 year on 2359 Japanese male workers. Participants were asked to complete survey items concerning stress underestimation, depressive symptoms, sickness absence, and antidepressant use. Multiple logistic regression analysis revealed that high baseline levels of 'overgeneralization of stress' and 'insensitivity to stress' were significantly associated with new-onset depressive symptoms (OR = 2.66 [95% CI, 1.54-4.59]). These findings suggest that stress underestimation, including stress insensitivity and the overgeneralization of stress, could exert adverse effects on mental health.

  8. Poverty Underestimation in Rural India- A Critique

    OpenAIRE

    Sivakumar, Marimuthu; Sarvalingam, A

    2010-01-01

    Whenever the Planning Commission of India releases poverty data, the data are criticised by experts and economists. The main criticism is the underestimation of poverty, especially in rural India, by the Planning Commission. This paper focuses on that criticism and compares the Indian Planning Commission's 2004-05 rural poverty data with India's 2400 kcal poverty norm, the World Bank's US $1.08 poverty concept and the Asian Development Bank's US $1.35 poverty concept.

  9. Modeling the Dynamics of the Atmospheric Boundary Layer Over the Antarctic Plateau With a General Circulation Model

    Science.gov (United States)

    Vignon, Etienne; Hourdin, Frédéric; Genthon, Christophe; Van de Wiel, Bas J. H.; Gallée, Hubert; Madeleine, Jean-Baptiste; Beaumet, Julien

    2018-01-01

    Observations reveal extremely stable boundary layers (SBLs) over the Antarctic Plateau and sharp regime transitions between weakly and very stable conditions. Representing such features is a challenge for climate models. This study assesses the modeling of the dynamics of the boundary layer over the Antarctic Plateau in the LMDZ general circulation model. It uses 1-year simulations with a stretched grid over Dome C. The model is nudged with reanalyses outside of the Dome C region such that simulations can be directly compared to in situ observations. We underline the critical role of the downward longwave radiation for modeling the surface temperature. LMDZ reasonably represents the near-surface seasonal profiles of wind and temperature, but strong temperature inversions are degraded by enhanced turbulent mixing formulations. Unlike ERA-Interim reanalyses, LMDZ reproduces two SBL regimes and the regime transition, with a sudden increase in the near-surface inversion with decreasing wind speed. The sharpness of the transition depends on the stability function used for calculating the surface drag coefficient. Moreover, using a refined vertical grid leads to a better reversed "S-shaped" relationship between the inversion and the wind. Sudden warming events associated with synoptic advections of warm and moist air are also well reproduced. Near-surface supersaturation with respect to ice is not allowed in LMDZ, but the impact on the SBL structure is moderate. Finally, climate simulations with the free model show that the recommended configuration leads to stronger inversions and winds over the ice sheet. However, the near-surface wind remains underestimated over the slopes of East Antarctica.

  10. Social cure, what social cure? The propensity to underestimate the importance of social factors for health.

    Science.gov (United States)

    Haslam, S Alexander; McMahon, Charlotte; Cruwys, Tegan; Haslam, Catherine; Jetten, Jolanda; Steffens, Niklas K

    2018-02-01

    Recent meta-analytic research indicates that social support and social integration are highly protective against mortality, and that their importance is comparable to, or exceeds, that of many established behavioural risks such as smoking, high alcohol consumption, lack of exercise, and obesity that are the traditional focus of medical research (Holt-Lunstad et al., 2010). The present study examines perceptions of the contribution of these various factors to life expectancy within the community at large. American and British community respondents (N = 502) completed an on-line survey assessing the perceived importance of social and behavioural risk factors for mortality. As hypothesized, while respondents' perceptions of the importance of established behavioural risks were positively and highly correlated with their actual importance, social factors were seen to be far less important for health than they actually are. As a result, overall, there was a small but significant negative correlation between the perceived benefits and the actual benefits of different social and behavioural factors. Men, younger participants, and participants with a lower level of education were more likely to underestimate the importance of social factors for health. There was also evidence that underestimation was predicted by a cluster of ideological factors, the most significant of which was respondents' respect for prevailing convention and authorities as captured by Right-Wing Authoritarianism. Findings suggest that while people generally underestimate the importance of social factors for health, this also varies as a function of demographic and ideological factors. They point to a range of challenges confronting those who seek to promote greater awareness of the importance of social factors for health. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. BMI may underestimate the socioeconomic gradient in true obesity

    NARCIS (Netherlands)

    van den Berg, G.; van Eijsden, M.; Vrijkotte, T. G. M.; Gemke, R. J. B. J.

    2013-01-01

    Body mass index (BMI) does not make a distinction between fat mass and lean mass. In children, high fat mass appears to be associated with low maternal education, as well as low lean mass because maternal education is associated with physical activity. Therefore, BMI might underestimate true obesity

  12. Introduction to generalized linear models

    CERN Document Server

    Dobson, Annette J

    2008-01-01

    Introduction Background Scope Notation Distributions Related to the Normal Distribution Quadratic Forms Estimation Model Fitting Introduction Examples Some Principles of Statistical Modeling Notation and Coding for Explanatory Variables Exponential Family and Generalized Linear Models Introduction Exponential Family of Distributions Properties of Distributions in the Exponential Family Generalized Linear Models Examples Estimation Introduction Example: Failure Times for Pressure Vessels Maximum Likelihood Estimation Poisson Regression Example Inference Introduction Sampling Distribution for Score Statistics Taylor Series Approximations Sampling Distribution for MLEs Log-Likelihood Ratio Statistic Sampling Distribution for the Deviance Hypothesis Testing Normal Linear Models Introduction Basic Results Multiple Linear Regression Analysis of Variance Analysis of Covariance General Linear Models Binary Variables and Logistic Regression Probability Distributions ...
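
    As a concrete instance of the material the book covers, here is a hedged sketch (not from the book) of maximum-likelihood fitting for one simple GLM: logistic regression with a single predictor, estimated by Newton-Raphson on the Bernoulli log-likelihood. The toy data are invented:

    ```python
    # Newton-Raphson for single-predictor logistic regression (a GLM with
    # Bernoulli response and canonical logit link). Illustrative sketch only.
    import math

    def fit_logistic(xs, ys, iters=25):
        b0 = b1 = 0.0
        for _ in range(iters):
            g0 = g1 = h00 = h01 = h11 = 0.0
            for x, y in zip(xs, ys):
                p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
                g0 += y - p            # score for intercept
                g1 += (y - p) * x      # score for slope
                w = p * (1.0 - p)      # Fisher weights
                h00 += w
                h01 += w * x
                h11 += w * x * x
            det = h00 * h11 - h01 * h01
            # beta <- beta + (Fisher information)^{-1} * score
            b0 += (h11 * g0 - h01 * g1) / det
            b1 += (h00 * g1 - h01 * g0) / det
        return b0, b1

    # Toy data: outcome becomes more likely as x grows (no perfect separation)
    b0, b1 = fit_logistic([0, 1, 2, 3, 4, 5], [0, 0, 1, 0, 1, 1])
    ```

    The same IRLS/Newton structure generalizes to any exponential-family response with a canonical link, which is the unifying point of the GLM framework.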

  13. California Wintertime Precipitation in Regional and Global Climate Models

    Energy Technology Data Exchange (ETDEWEB)

    Caldwell, P M

    2009-04-27

    In this paper, wintertime precipitation from a variety of observational datasets, regional climate models (RCMs), and general circulation models (GCMs) is averaged over the state of California (CA) and compared. Several averaging methodologies are considered and all are found to give similar values when model grid spacing is less than 3°. This suggests that CA is a reasonable size for regional intercomparisons using modern GCMs. Results show that reanalysis-forced RCMs tend to significantly overpredict CA precipitation. This appears to be due mainly to overprediction of extreme events; RCM precipitation frequency is generally underpredicted. Overprediction is also reflected in wintertime precipitation variability, which tends to be too high for RCMs on both daily and interannual scales. Wintertime precipitation in most (but not all) GCMs is underestimated. This is in contrast to previous studies based on global blended gauge/satellite observations which are shown here to underestimate precipitation relative to higher-resolution gauge-only datasets. Several GCMs provide reasonable daily precipitation distributions, a trait which doesn't seem tied to model resolution. GCM daily and interannual variability is generally underpredicted.

  14. A nonlinear generalization of the Savitzky-Golay filter and the quantitative analysis of saccades.

    Science.gov (United States)

    Dai, Weiwei; Selesnick, Ivan; Rizzo, John-Ross; Rucker, Janet; Hudson, Todd

    2017-08-01

    The Savitzky-Golay (SG) filter is widely used to smooth and differentiate time series, especially biomedical data. However, time series that exhibit abrupt departures from their typical trends, such as sharp waves or steps, which are of physiological interest, tend to be oversmoothed by the SG filter. Hence, the SG filter tends to systematically underestimate physiological parameters in certain situations. This article proposes a generalization of the SG filter to more accurately track abrupt deviations in time series, leading to more accurate parameter estimates (e.g., peak velocity of saccadic eye movements). The proposed filtering methodology models a time series as the sum of two component time series: a low-frequency time series for which the conventional SG filter is well suited, and a second time series that exhibits instantaneous deviations (e.g., sharp waves, steps, or more generally, discontinuities in a higher order derivative). The generalized SG filter is then applied to the quantitative analysis of saccadic eye movements. It is demonstrated that (a) the conventional SG filter underestimates the peak velocity of saccades, especially those of small amplitude, and (b) the generalized SG filter estimates peak saccadic velocity more accurately than the conventional filter.
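
    The conventional SG filter that the paper takes as its baseline is easy to sketch; below is a window-5 quadratic smoother using the classical closed-form coefficients (-3, 12, 17, 12, -3)/35. The sparsity-based generalization itself is not reproduced here. The example shows how the smoother attenuates an isolated spike, which is the kind of systematic underestimation of sharp features the paper addresses:

    ```python
    # Conventional Savitzky-Golay smoothing, window 5, polynomial order 2.
    # Classical coefficients; endpoints are passed through unchanged for brevity.

    COEF = (-3.0, 12.0, 17.0, 12.0, -3.0)

    def sg_smooth(x):
        y = list(x)
        for i in range(2, len(x) - 2):
            y[i] = sum(c * x[i + j - 2] for j, c in enumerate(COEF)) / 35.0
        return y

    spike = [0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0]
    smoothed = sg_smooth(spike)   # the unit spike is attenuated to 17/35
    ```

    Constants and linear ramps pass through exactly (the filter reproduces polynomials up to the fit order), while sharp peaks are flattened, which is why the peak velocity of small saccades is underestimated.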

  15. Testing the generalized partial credit model

    OpenAIRE

    Glas, Cornelis A.W.

    1996-01-01

    The partial credit model (PCM) (G.N. Masters, 1982) can be viewed as a generalization of the Rasch model for dichotomous items to the case of polytomous items. In many cases, the PCM is too restrictive to fit the data. Several generalizations of the PCM have been proposed. In this paper, a generalization of the PCM (GPCM), a further generalization of the one-parameter logistic model, is discussed. The model is defined and the conditional maximum likelihood procedure for the method is describe...

  16. Generalized, Linear, and Mixed Models

    CERN Document Server

    McCulloch, Charles E; Neuhaus, John M

    2011-01-01

    An accessible and self-contained introduction to statistical models-now in a modernized new editionGeneralized, Linear, and Mixed Models, Second Edition provides an up-to-date treatment of the essential techniques for developing and applying a wide variety of statistical models. The book presents thorough and unified coverage of the theory behind generalized, linear, and mixed models and highlights their similarities and differences in various construction, application, and computational aspects.A clear introduction to the basic ideas of fixed effects models, random effects models, and mixed m

  17. Uterine radiation dose from open sources: The potential for underestimation

    International Nuclear Information System (INIS)

    Cox, P.H.; Klijn, J.G.M.; Pillay, M.; Bontebal, M.; Schoenfeld, D.H.W.

    1990-01-01

    Recent observations on the biodistribution of a therapeutic dose of sodium iodide I 131 in a patient with an unsuspected early pregnancy led us to suspect that current dose estimates with respect to uterine exposure (ARSAC 1988) may seriously underestimate the actual exposure of the developing foetus. (orig.)

  18. NRC Information No. 90-21: Potential failure of motor-operated butterfly valves to operate because valve seat friction was underestimated

    International Nuclear Information System (INIS)

    Rossi, C.E.

    1992-01-01

    In October 1988, at Catawba Nuclear Station Unit 1, a motor-operated butterfly valve in the service water system failed to open under high differential pressure conditions. The licensee concluded that the valve manufacturer, BIF/General Signal Corporation, had underestimated the degree to which the material used in the valve seat would harden with age (the responsibility for these valves has been transferred to Paul-Munroe Enertech). This underestimation of the age hardening had led the manufacturer to assume valve seat friction forces that were less than the actual friction forces in the installed valve. To overcome the larger-than-anticipated friction forces, the licensee's engineering staff recommended the open torque switch for 56 butterfly valves be reset to the maximum allowable value. The systems in which these valves are located include the component cooling water system, service water system, and various ventilation systems. By July 26, 1989, the torque switch adjustments were completed at Catawba Units 1 and 2. After reviewing the final settings, the licensee's engineering staff determined that the actuators for three butterfly valves in the component cooling water system might not be able to overcome the friction forces resulting from maximum seat hardening. On December 13, 1989, the licensee determined that the failure of these BIF/General Signal motor-operated valves (MOVs) could cause a loss of cooling water to residual heat removal system heat exchangers. To resolve the concern regarding the operability of these BIF/General Signal valves, a torque switch bypass was installed on two of the actuators to allow full motor capability during opening

  19. Radiographic Underestimation of In Vivo Cup Coverage Provided by Total Hip Arthroplasty for Dysplasia.

    Science.gov (United States)

    Nie, Yong; Wang, HaoYang; Huang, ZeYu; Shen, Bin; Kraus, Virginia Byers; Zhou, Zongke

    2018-01-01

    The accuracy of using 2-dimensional anteroposterior pelvic radiography to assess acetabular cup coverage among patients with developmental dysplasia of the hip after total hip arthroplasty (THA) remains unclear in retrospective clinical studies. A group of 20 patients with developmental dysplasia of the hip (20 hips) underwent cementless THA. During surgery but after acetabular reconstruction, bone wax was pressed onto the uncovered surface of the acetabular cup. A surface model of the bone wax was generated with 3-dimensional scanning. The percentage of the acetabular cup that was covered by intact host acetabular bone in vivo was calculated with modeling software. Acetabular cup coverage also was determined from a postoperative supine anteroposterior pelvic radiograph. The height of the hip center (distance from the center of the femoral head perpendicular to the inter-teardrop line) also was determined from radiographs. Radiographic cup coverage was a mean of 6.93% (SD, 2.47%) lower than in vivo cup coverage for these 20 patients with developmental dysplasia of the hip and was strongly correlated with in vivo cup coverage (Pearson r=0.761). The size of the cup (P=.001), but not the position of the hip center (high vs normal), was significantly associated with the difference between radiographic and in vivo cup coverage. Two-dimensional radiographically determined cup coverage conservatively reflects in vivo cup coverage and remains an important index (taking 7% underestimation errors and the effect of greater underestimation of larger cup size into account) for assessing the stability of the cup and monitoring for adequate ingrowth of bone. [Orthopedics. 2018; 41(1):e46-e51.]. Copyright 2017, SLACK Incorporated.

  20. Mucus: An Underestimated Gut Target for Environmental Pollutants and Food Additives.

    Science.gov (United States)

    Gillois, Kévin; Lévêque, Mathilde; Théodorou, Vassilia; Robert, Hervé; Mercier-Bonin, Muriel

    2018-06-15

    Synthetic chemicals (environmental pollutants, food additives) are widely used for many industrial purposes and consumer-related applications, which implies, through manufactured products, diet, and environment, a repeated exposure of the general population with growing concern regarding health disorders. The gastrointestinal tract is the first physical and biological barrier against these compounds, and thus their first target. Mounting evidence indicates that the gut microbiota represents a major player in the toxicity of environmental pollutants and food additives; however, little is known on the toxicological relevance of the mucus/pollutant interplay, even though mucus is increasingly recognized as essential in gut homeostasis. Here, we aimed at describing how environmental pollutants (heavy metals, pesticides, and other persistent organic pollutants) and food additives (emulsifiers, nanomaterials) might interact with mucus and mucus-related microbial species; that is, "mucophilic" bacteria such as mucus degraders. This review highlights that intestinal mucus, either directly or through its crosstalk with the gut microbiota, is a key, yet underestimated gut player that must be considered for better risk assessment and management of environmental pollution.

  1. Completeness and underestimation of cancer mortality rate in Iran: a report from Fars Province in southern Iran.

    Science.gov (United States)

    Marzban, Maryam; Haghdoost, Ali-Akbar; Dortaj, Eshagh; Bahrampour, Abbas; Zendehdel, Kazem

    2015-03-01

    The incidence and mortality rates of cancer are increasing worldwide, particularly in the developing countries. Valid data are needed for measuring the cancer burden and making appropriate decisions toward cancer control. We evaluated the completeness of death registry with regard to cancer death in Fars Province, I. R. of Iran. We used data from three sources in Fars Province, including the national death registry (source 1), the follow-up data from the pathology-based cancer registry (source 2) and hospital based records (source 3) during 2004 - 2006. We used the capture-recapture method and estimated underestimation and the true age standardized mortality rate (ASMR) for cancer. We used log-linear (LL) modeling for statistical analysis. We observed 1941, 480, and 355 cancer deaths in sources 1, 2 and 3, respectively. After data linkage, we estimated that mortality registry had about 40% underestimation for cancer death. After adjustment for this underestimation rate, the ASMR of cancer in the Fars Province for all cancer types increased from 44.8 per 100,000 (95% CI: 42.8 - 46.7) to 76.3 per 100,000 (95% CI: 73.3 - 78.9), accounting for 3309 (95% CI: 3151 - 3293) cancer deaths annually. The mortality rate of cancer is considerably higher than the rates reported by the routine registry in Iran. Improvement in the validity and completeness of the mortality registry is needed to estimate the true mortality rate caused by cancer in Iran.
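
    The study fitted three-source log-linear capture-recapture models; as a simpler illustration of the underlying idea, here is a hedged two-source sketch using Chapman's variant of the Lincoln-Petersen estimator. The list sizes echo sources 1 and 2 from the abstract, but the overlap count m is invented:

    ```python
    # Two-source capture-recapture sketch (Chapman's nearly unbiased variant
    # of Lincoln-Petersen). NOT the study's three-source log-linear method;
    # the overlap m below is a made-up number for illustration.

    def chapman_estimate(n_a, n_b, m):
        """Estimated total cases given two list sizes and their overlap m."""
        return (n_a + 1) * (n_b + 1) / (m + 1) - 1

    n_a, n_b = 1941, 480    # cancer deaths on lists 1 and 2 (from the abstract)
    m = 290                 # deaths appearing on both lists (hypothetical)

    total = chapman_estimate(n_a, n_b, m)
    completeness = n_a / total   # estimated completeness of the death registry
    ```

    The intuition: if the lists were independent, the fraction of list B that also appears on list A estimates list A's completeness, so a small overlap implies many deaths missed by both sources.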

  2. Misery Has More Company Than People Think: Underestimating the Prevalence of Others' Negative Emotions

    Science.gov (United States)

    Jordan, Alexander H.; Monin, Benoît; Dweck, Carol S.; Lovett, Benjamin J.; John, Oliver P.; Gross, James J.

    2014-01-01

    Four studies document underestimations of the prevalence of others' negative emotions, and suggest causes and correlates of these erroneous perceptions. In Study 1A, participants reported that their negative emotions were more private or hidden than their positive emotions; in Study 1B, participants underestimated the peer prevalence of common negative, but not positive, experiences described in Study 1A. In Study 2, people underestimated negative emotions and overestimated positive emotions even for well-known peers, and this effect was partially mediated by the degree to which those peers reported suppression of negative (vs. positive) emotions. Study 3 showed that lower estimations of the prevalence of negative emotional experiences predicted greater loneliness and rumination and lower life satisfaction, and that higher estimations for positive emotional experiences predicted lower life satisfaction. Taken together, these studies suggest that people may think they are more alone in their emotional difficulties than they really are. PMID:21177878

  3. Consumer underestimation of sodium in fast food restaurant meals: Results from a cross-sectional observational study.

    Science.gov (United States)

    Moran, Alyssa J; Ramirez, Maricelle; Block, Jason P

    2017-06-01

    Restaurants are key venues for reducing sodium intake in the U.S. but little is known about consumer perceptions of sodium in restaurant foods. This study quantifies the difference between estimated and actual sodium content of restaurant meals and examines predictors of underestimation in adult and adolescent diners at fast food restaurants. In 2013 and 2014, meal receipts and questionnaires were collected from adults and adolescents dining at six restaurant chains in four New England cities. The sample included 993 adults surveyed during 229 dinnertime visits to 44 restaurants and 794 adolescents surveyed during 298 visits to 49 restaurants after school or at lunchtime. Diners were asked to estimate the amount of sodium (mg) in the meal they had just purchased. Sodium estimates were compared with actual sodium in the meal, calculated by matching all items that the respondent purchased for personal consumption to sodium information on chain restaurant websites. Mean (SD) actual sodium (mg) content of meals was 1292 (970) for adults and 1128 (891) for adolescents. One-quarter of diners (176 (23%) adults, 155 (25%) adolescents) were unable or unwilling to provide estimates of the sodium content of their meals. Of those who provided estimates, 90% of adults and 88% of adolescents underestimated sodium in their meals, with adults underestimating sodium by a mean (SD) of 1013 mg (1,055) and adolescents underestimating by 876 mg (1,021). Respondents underestimated sodium content more for meals with greater sodium content. Education about sodium at point-of-purchase, such as provision of sodium information on restaurant menu boards, may help correct consumer underestimation, particularly for meals of high sodium content. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Intercomparison of the seasonal cycle of tropical surface stress in 17 AMIP atmospheric general circulation models

    Energy Technology Data Exchange (ETDEWEB)

    Saji, N.H.; Goswami, B.N. [Indian Inst. of Sci., Bangalore (India). Centre for Atmos. and Oceanic Sci.]

    1997-08-01

    The mean state of the tropical atmosphere is important as the nature of the coupling between the ocean and the atmosphere depends nonlinearly on the basic state of the coupled system. The simulation of the annual cycle of the tropical surface wind stress by 17 atmospheric general circulation models (AGCMs) is examined and intercompared. The models considered were part of the atmospheric model intercomparison project (AMIP) and were integrated with observed sea surface temperature (SST) for the decade 1979-1988. Several measures have been devised to intercompare the performance of the 17 models on global tropical as well as regional scales. Within the limits of observational uncertainties, the models under examination simulate realistic tropical area-averaged zonal and meridional annual mean stresses. This is a noteworthy improvement over older-generation low-resolution models, which were noted for simulating surface stresses considerably weaker than the observations. The models also simulate realistic magnitudes of the spatial distribution of the annual mean surface stress field and reproduce its observed spatial pattern realistically. Similar features are observed in the simulations of the annual variance field. The models perform well over almost all tropical regions apart from a few. Of these, the simulations over the Somali region are interesting: the models underestimate the annual mean zonal and meridional stresses there, and there is also wide variance among the different models in simulating these quantities. 44 refs.

  5. Generalized complex geometry, generalized branes and the Hitchin sigma model

    International Nuclear Information System (INIS)

    Zucchini, Roberto

    2005-01-01

    Hitchin's generalized complex geometry has been shown to be relevant in compactifications of superstring theory with fluxes and is expected to lead to a deeper understanding of mirror symmetry. Gualtieri's notion of a generalized complex submanifold seems to be a natural candidate for the description of branes in this context. Recently, we introduced a Batalin-Vilkovisky field-theoretic realization of generalized complex geometry, the Hitchin sigma model, extending the well-known Poisson sigma model. In this paper, exploiting Gualtieri's formalism, we incorporate branes into the model. A detailed study of the boundary conditions obeyed by the world-sheet fields is provided. Finally, it is found that, when branes are present, the classical Batalin-Vilkovisky cohomology contains an extra sector that is related non-trivially to a novel cohomology associated with the branes as generalized complex submanifolds. (author)

  6. A general consumer-resource population model

    Science.gov (United States)

    Lafferty, Kevin D.; DeLeo, Giulio; Briggs, Cheryl J.; Dobson, Andrew P.; Gross, Thilo; Kuris, Armand M.

    2015-01-01

    Food-web dynamics arise from predator-prey, parasite-host, and herbivore-plant interactions. Models for such interactions include up to three consumer activity states (questing, attacking, consuming) and up to four resource response states (susceptible, exposed, ingested, resistant). Articulating these states into a general model allows for dissecting, comparing, and deriving consumer-resource models. We specify this general model for 11 generic consumer strategies that group mathematically into predators, parasites, and micropredators and then derive conditions for consumer success, including a universal saturating functional response. We further show how to use this framework to create simple models with a common mathematical lineage and transparent assumptions. Underlying assumptions, missing elements, and composite parameters are revealed when classic consumer-resource models are derived from the general model.
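The "universal saturating functional response" this record mentions reduces, in its simplest special case, to the familiar Holling type II form, which arises when a consumer splits its time between searching ("questing") and handling ("consuming") resources. A minimal sketch; the parameter values are assumptions for illustration, not taken from the paper:

```python
# Holling type II saturating functional response:
# per-capita intake rate f(R) = a*R / (1 + a*h*R),
# with attack rate a and handling time h (illustrative values).

def holling_type_ii(R, a=0.5, h=2.0):
    """Per-capita intake rate at resource density R."""
    return a * R / (1.0 + a * h * R)

if __name__ == "__main__":
    for R in (1.0, 10.0, 100.0, 1000.0):
        print(R, holling_type_ii(R))
    # Intake saturates toward the ceiling 1/h = 0.5 as R grows,
    # because handling time, not search, becomes the bottleneck.
```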

  7. A Bayesian model to correct underestimated 3-D wind speeds from sonic anemometers increases turbulent components of the surface energy balance

    Science.gov (United States)

    John M. Frank; William J. Massman; Brent E. Ewers

    2016-01-01

    Sonic anemometers are the principal instruments in micrometeorological studies of turbulence and ecosystem fluxes. Common designs underestimate vertical wind measurements because they lack a correction for transducer shadowing, with no consensus on a suitable correction. We reanalyze a subset of data collected during field experiments in 2011 and 2013 featuring two or...

  8. CMIP5 land surface models systematically underestimate inter-annual variability of net ecosystem exchange in semi-arid southwestern North America.

    Science.gov (United States)

    MacBean, N.; Scott, R. L.; Biederman, J. A.; Vuichard, N.; Hudson, A.; Barnes, M.; Fox, A. M.; Smith, W. K.; Peylin, P. P.; Maignan, F.; Moore, D. J.

    2017-12-01

    Recent studies based on analysis of atmospheric CO2 inversions, satellite data and terrestrial biosphere model simulations have suggested that semi-arid ecosystems play a dominant role in the interannual variability and long-term trend in the global carbon sink. These studies have largely cited the response of vegetation activity to changing moisture availability as the primary mechanism of variability. However, some land surface models (LSMs) used in these studies have performed poorly in comparison to satellite-based observations of vegetation dynamics in semi-arid regions. Further analysis is therefore needed to ensure semi-arid carbon cycle processes are well represented in global scale LSMs before we can fully establish their contribution to the global carbon cycle. In this study, we evaluated annual net ecosystem exchange (NEE) simulated by CMIP5 land surface models using observations from 20 Ameriflux sites across semi-arid southwestern North America. We found that CMIP5 models systematically underestimate the magnitude and sign of NEE inter-annual variability; therefore, the true role of semi-arid regions in the global carbon cycle may be even more important than previously thought. To diagnose the factors responsible for this bias, we used the ORCHIDEE LSM to test different climate forcing data, prescribed vegetation fractions and model structures. Climate and prescribed vegetation do contribute to uncertainty in annual NEE simulations, but the bias is primarily caused by incorrect timing and magnitude of peak gross carbon fluxes. Modifications to the hydrology scheme improved simulations of soil moisture in comparison to data. This in turn improved the seasonal cycle of carbon uptake due to a more realistic limitation on photosynthesis during water stress. However, the peak fluxes are still too low, and phenology is poorly represented for desert shrubs and grasses. We provide suggestions on model developments needed to tackle these issues in the future.

  9. Multivariate generalized linear mixed models using R

    CERN Document Server

    Berridge, Damon Mark

    2011-01-01

    Multivariate Generalized Linear Mixed Models Using R presents robust and methodologically sound models for analyzing large and complex data sets, enabling readers to answer increasingly complex research questions. The book applies the principles of modeling to longitudinal data from panel and related studies via the Sabre software package in R. A Unified Framework for a Broad Class of Models The authors first discuss members of the family of generalized linear models, gradually adding complexity to the modeling framework by incorporating random effects. After reviewing the generalized linear model notation, they illustrate a range of random effects models, including three-level, multivariate, endpoint, event history, and state dependence models. They estimate the multivariate generalized linear mixed models (MGLMMs) using either standard or adaptive Gaussian quadrature. The authors also compare two-level fixed and random effects linear models. The appendices contain additional information on quadrature, model...
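The "standard Gaussian quadrature" this book uses to estimate mixed models can be illustrated compactly. The sketch below (Python rather than the book's R/Sabre, with made-up data) approximates the marginal likelihood of one cluster in a random-intercept logistic model by integrating the random effect out with Gauss-Hermite quadrature:

```python
import numpy as np

# Marginal likelihood of one cluster's binary responses y, with linear
# predictor eta_fixed + b, where b ~ N(0, sigma^2) is integrated out via
# Gauss-Hermite quadrature. Data and parameter values are invented.

def cluster_marginal_lik(y, eta_fixed, sigma, n_nodes=20):
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    b = np.sqrt(2.0) * sigma * nodes          # change of variables for N(0, sigma^2)
    lik = 0.0
    for bi, wi in zip(b, weights):
        p = 1.0 / (1.0 + np.exp(-(eta_fixed + bi)))
        lik += wi * np.prod(np.where(y == 1, p, 1.0 - p))
    return lik / np.sqrt(np.pi)               # normalizing constant of the rule

y = np.array([1, 0, 1, 1])
print(cluster_marginal_lik(y, eta_fixed=0.3, sigma=1.0))
```

Adaptive quadrature, also mentioned in the record, re-centres and re-scales the nodes per cluster around the mode of the integrand to get the same accuracy with fewer nodes.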

  10. Childhood leukaemia and low-level radiation - are we underestimating the risk?

    International Nuclear Information System (INIS)

    Wakeford, R.

    1996-01-01

    The Seascale childhood leukaemia 'cluster' can be interpreted as indicating that the risk of childhood leukaemia arising from low-level exposure to ionising radiation has been underestimated. Indeed, several variants of such an interpretation have been advanced. These include exposure to particular radionuclides, an underestimation of the radiation risk coefficient for childhood leukaemia, and the existence of a previously unrecognized risk of childhood leukaemia from the preconceptional irradiation of fathers. However, the scientific assessment of epidemiological associations is a complex matter, and such associations must be interpreted with caution. It would now seem most likely that the Seascale 'cluster' does not represent an unanticipated effect of the exposure to ionising radiation, but rather the effect of unusual population mixing generated by the Sellafield site which has produced an increase in the infection-based risk of childhood leukaemia. This episode in the history of epidemiological research provides a timely reminder of the need for great care in the interpretation of novel statistical associations. (author)

  11. Underestimated Rate of Status Epilepticus according to the Traditional Definition of Status Epilepticus.

    Science.gov (United States)

    Ong, Cheung-Ter; Wong, Yi-Sin; Sung, Sheng-Feng; Wu, Chi-Shun; Hsu, Yung-Chu; Su, Yu-Hsiang; Hung, Ling-Chien

    2015-01-01

    Status epilepticus (SE) is an important neurological emergency. Early diagnosis could improve outcomes. Traditionally, SE is defined as seizures lasting at least 30 min or repeated seizures over 30 min without recovery of consciousness. Some specialists argued that the duration of seizures qualifying as SE should be shorter, and an operational definition of SE was suggested. It is unclear whether physicians follow the operational definition. The objective of this study was to investigate whether the incidence of SE was underestimated and to quantify the underestimation rate. This retrospective study evaluates the difference in the diagnosis of SE between the operational and traditional definitions of SE. Between July 1, 2012, and June 30, 2014, patients discharged with ICD-9 codes for epilepsy (345.X) in Chia-Yi Christian Hospital were included in the study. A seizure lasting at least 30 min, or repeated seizures over 30 min without recovery of consciousness, was considered SE according to the traditional definition of SE (TDSE). A seizure lasting between 5 and 30 min was considered SE according to the operational definition of SE (ODSE); it was defined as underestimated status epilepticus (UESE). During the 2-year period, there were 256 episodes of seizures requiring hospital admission. Among the 256 episodes, 99 episodes lasted longer than 5 min, out of which 61 (61.6%) episodes persisted over 30 min (TDSE) and 38 (38.4%) episodes continued between 5 and 30 min (UESE). In the 38 episodes of seizure lasting 5 to 30 min, only one episode was previously discharged as SE (ICD-9-CM 345.3). Conclusion: We underestimated 37.4% of SE. Continuing education regarding the diagnosis and treatment of epilepsy is important for physicians.

  12. Systematic underestimation of the age of samples with saturating exponential behaviour and inhomogeneous dose distribution

    International Nuclear Information System (INIS)

    Brennan, B.J.

    2000-01-01

    In luminescence and ESR studies, a systematic underestimate of the (average) equivalent dose, and thus also the age, of a sample can occur when there is significant variation of the natural dose within the sample and some regions approach saturation. This is demonstrated explicitly for a material that exhibits single-saturating-exponential growth of signal with dose. The result is valid for any geometry (e.g. a plane layer, spherical grain, etc.) and some illustrative cases are modelled, with the age bias exceeding 10% in extreme cases. If the dose distribution within the sample can be modelled accurately, it is possible to correct for the bias in the estimates of equivalent dose and age. While quantifying the effect would be more difficult, similar systematic biases in dose and age estimates are likely in other situations more complex than the one modelled.
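The mechanism is Jensen's inequality: the saturating-exponential growth curve is concave, so inverting the spatially averaged signal returns less than the average dose. A numerical sketch with invented parameter values (not those of the paper):

```python
import numpy as np

# Single-saturating-exponential growth: S(D) = S_max * (1 - exp(-D/D0)).
# With an inhomogeneous dose field, averaging the signal and then inverting
# the growth curve underestimates the true mean dose. D0, S_max, and the
# dose range are illustrative assumptions.

D0, S_max = 100.0, 1.0
doses = np.linspace(50.0, 250.0, 101)        # dose varies across the sample

signal = S_max * (1.0 - np.exp(-doses / D0))
mean_signal = signal.mean()

# Invert the growth curve at the averaged signal to get the equivalent dose:
D_equivalent = -D0 * np.log(1.0 - mean_signal / S_max)

print(doses.mean(), D_equivalent)            # equivalent dose < mean dose
```

Because the bias depends only on the curvature of the growth function and the dose spread, modelling the within-sample dose distribution (as the record suggests) is enough to correct it.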

  13. Is dream recall underestimated by retrospective measures and enhanced by keeping a logbook? A review.

    Science.gov (United States)

    Aspy, Denholm J; Delfabbro, Paul; Proeve, Michael

    2015-05-01

    There are two methods commonly used to measure dream recall in the home setting. The retrospective method involves asking participants to estimate their dream recall in response to a single question, and the logbook method involves keeping a daily record of one's dream recall. Until recently, the implicit assumption has been that these measures are largely equivalent. However, this is challenged by the tendency for retrospective measures to yield significantly lower dream recall rates than logbooks. A common explanation for this is that retrospective measures underestimate dream recall. Another is that keeping a logbook enhances it. If retrospective measures underestimate dream recall and logbooks enhance it, then both are unlikely to reflect typical dream recall rates and may be confounded with variables associated with the underestimation and enhancement effects. To date, this issue has received insufficient attention. The present review addresses this gap in the literature. Copyright © 2015 Elsevier Inc. All rights reserved.

  14. Micro Data and General Equilibrium Models

    DEFF Research Database (Denmark)

    Browning, Martin; Hansen, Lars Peter; Heckman, James J.

    1999-01-01

    Dynamic general equilibrium models are required to evaluate policies applied at the national level. To use these models to make quantitative forecasts requires knowledge of an extensive array of parameter values for the economy at large. This essay describes the parameters required for different...... economic models, assesses the discordance between the macromodels used in policy evaluation and the microeconomic models used to generate the empirical evidence. For concreteness, we focus on two general equilibrium models: the stochastic growth model extended to include some forms of heterogeneity...

  15. Fishermen's underestimation of risk

    DEFF Research Database (Denmark)

    Knudsen, Fabienne; Grøn, Sisse

    2009-01-01

    Background: In order to understand the effect of footwear and flooring on slips, trips and falls, the 1st author visited 4 fishing boats. An important spinoff of the study was to get an in situ insight into the way fishermen perceive risk. Objectives: The presentation will analyse fishermen's risk perception, its causes and consequences. Methods: The first author participated in 3 voyages at sea on fishing vessels (from 1 to 10 days each and from 2 to 4 crewmembers) where interviews and participant observation were undertaken. A 4th fishing boat was visited... The fishermen tended to stress the positive potential of risk. This can be explained by several, interrelated factors such as the nature of fishing, itself a risk-based enterprise; a life-form promoting independency and identification with the enterprise's pecuniary priorities; and working conditions upholding a feeling...

  16. Glauber model and its generalizations

    International Nuclear Information System (INIS)

    Bialkowski, G.

    The physical aspects of the Glauber model are studied: the potential model, profile function, and Feynman diagram approaches. Different generalizations of the Glauber model are discussed, particularly higher- and lower-energy processes and large angles.

  17. The generalized circular model

    NARCIS (Netherlands)

    Webers, H.M.

    1995-01-01

    In this paper we present a generalization of the circular model. In this model there are two concentric circular markets, which enables us to study two types of markets simultaneously. There are switching costs involved for moving from one circle to the other circle, which can also be thought of as

  18. Sap flow is Underestimated by Thermal Dissipation Sensors due to Alterations of Wood Anatomy

    Science.gov (United States)

    Marañón-Jiménez, S.; Wiedemann, A.; van den Bulcke, J.; Cuntz, M.; Rebmann, C.; Steppe, K.

    2014-12-01

    The thermal dissipation technique (TD) is one of the most commonly adopted methods for sap flow measurements. However, underestimations of up to 60% of the tree transpiration have been reported with this technique, although the causes are not known with certainty. The insertion of TD sensors within the stems causes damage to the wood tissue and subsequent healing reactions, changing wood anatomy and likely the sap flow path. However, the anatomical changes in response to the insertion of sap flow sensors and their effects on the measured flow have not yet been assessed. In this study, we investigate the alteration of vessel anatomy in wounds formed around TD sensors. Our main objectives were to elucidate the anatomical causes of sap flow underestimation for ring-porous and diffuse-porous species, and to relate these changes to sap flow underestimations. Successive sets of TD probes were installed in the early, mid and late growing season in Fagus sylvatica (diffuse-porous) and Quercus petraea (ring-porous) trees. The trees were logged after the growing season and additional sets of sensors were installed in the logged stems, with presumably no healing reaction. The wood tissue surrounding each sensor was then excised and analysed by X-ray computed microtomography (X-ray micro CT). This technique allowed the quantification of vessel anatomical characteristics and the reconstruction of the 3-D internal microstructure of the xylem vessels, so that the extension and shape of the altered area could be determined. Gels and tyloses clogged the conductive vessels around the sensors in both beech and oak. The extension of the affected area was larger for beech, although these anatomical changes led to similar sap flow underestimations in both species. The larger vessel size in oak may explain this result and, therefore, the larger sap flow underestimation per area of affected conductive tissue. The wound healing reaction likely occurred within the first weeks after sensor installation, which

  19. FIA's volume-to-biomass conversion method (CRM) generally underestimates biomass in comparison to published equations

    Science.gov (United States)

    David. C. Chojnacky

    2012-01-01

    An update of the Jenkins et al. (2003) biomass estimation equations for North American tree species resulted in 35 generalized equations developed from published equations. These 35 equations, which predict aboveground biomass of individual species grouped according to a taxa classification (based on genus or family and sometimes specific gravity), generally predicted...

  20. Testing the generalized partial credit model

    NARCIS (Netherlands)

    Glas, Cornelis A.W.

    1996-01-01

    The partial credit model (PCM) (G.N. Masters, 1982) can be viewed as a generalization of the Rasch model for dichotomous items to the case of polytomous items. In many cases, the PCM is too restrictive to fit the data. Several generalizations of the PCM have been proposed. In this paper, a

  1. Generalized Nonlinear Yule Models

    OpenAIRE

    Lansky, Petr; Polito, Federico; Sacerdote, Laura

    2016-01-01

    With the aim of considering models with persistent memory we propose a fractional nonlinear modification of the classical Yule model often studied in the context of macroevolution. Here the model is analyzed and interpreted in the framework of the development of networks such as the World Wide Web. Nonlinearity is introduced by replacing the linear birth process governing the growth of the in-links of each specific webpage with a fractional nonlinear birth process with completely general birth...

  2. A generalized additive regression model for survival times

    DEFF Research Database (Denmark)

    Scheike, Thomas H.

    2001-01-01

    Additive Aalen model; counting process; disability model; illness-death model; generalized additive models; multiple time-scales; non-parametric estimation; survival data; varying-coefficient models.

  3. Is dream recall underestimated by retrospective measures and enhanced by keeping a logbook? An empirical investigation.

    Science.gov (United States)

    Aspy, Denholm J

    2016-05-01

    In a recent review, Aspy, Delfabbro, and Proeve (2015) highlighted the tendency for retrospective measures of dream recall to yield substantially lower recall rates than logbook measures, a phenomenon they termed the retrospective-logbook disparity. One explanation for this phenomenon is that retrospective measures underestimate true dream recall. Another explanation is that keeping a logbook tends to enhance dream recall. The present study provides a thorough empirical investigation into the retrospective-logbook disparity using a range of retrospective and logbook measures and three different types of logbook. Retrospective-logbook disparities were correlated with a range of variables theoretically related to the retrospective underestimation effect, and retrospective-logbook disparities were greater among participants that reported improved dream recall during the logbook period. These findings indicate that dream recall is underestimated by retrospective measures and enhanced by keeping a logbook. Recommendations for the use of retrospective and logbook measures of dream recall are provided. Copyright © 2016 Elsevier Inc. All rights reserved.

  4. Longitudinal Biases in the Seychelles Dome Simulated by 34 Ocean-Atmosphere Coupled General Circulation Models

    Science.gov (United States)

    Nagura, M.; Sasaki, W.; Tozuka, T.; Luo, J.; Behera, S. K.; Yamagata, T.

    2012-12-01

    The upwelling dome of the southern tropical Indian Ocean is examined by using simulated results from 34 ocean-atmosphere coupled general circulation models (CGCMs), including those from phase 5 of the Coupled Model Intercomparison Project (CMIP5). Among the current set of 34 CGCMs, 12 models erroneously produce the upwelling dome in the eastern half of the basin, while the observed Seychelles Dome is located in the southwestern tropical Indian Ocean (Figure 1). The annual mean Ekman pumping velocity is almost zero in the southern off-equatorial region in these models. This is in contrast with the observations, which show Ekman upwelling as the cause of the Seychelles Dome. In the models that produce the dome in the eastern basin, easterly biases are prominent along the equator in boreal summer and fall; these cause shallow thermocline biases along the Java and Sumatra coasts via Kelvin wave dynamics and result in a spurious upwelling dome there. In addition, these models tend to overestimate (underestimate) the magnitude of the annual (semiannual) cycle of thermocline depth variability in the dome region, which is another consequence of the easterly wind biases in boreal summer-fall. Compared to the CMIP3 models (Yokoi et al. 2009), the CMIP5 models are even worse in simulating the dome longitudes and the magnitudes of the annual and semiannual cycles of thermocline depth variability in the dome region. Considering the increasing need to understand regional impacts of climate modes, these results may give serious caveats to the interpretation of model results and help in further model developments. Figure 1: The longitudes of the shallowest annual-mean D20 in 5°S-12°S. The open and filled circles are for the observations and the CGCMs, respectively.

  5. Simulation of global sulfate distribution and the influence of effective cloud drop radii with a coupled photochemistry-sulfur cycle model

    NARCIS (Netherlands)

    Roelofs, G.J.; Lelieveld, J.; Ganzeveld, L.N.

    1998-01-01

    A sulfur cycle model is coupled to a global chemistry-climate model. The simulated surface sulfate concentrations are generally within a factor of 2 of observed concentrations, and display a realistic seasonality for most background locations. However, the model tends to underestimate sulfate and

  6. Generalized Nonlinear Yule Models

    Science.gov (United States)

    Lansky, Petr; Polito, Federico; Sacerdote, Laura

    2016-11-01

    With the aim of considering models related to random graphs growth exhibiting persistent memory, we propose a fractional nonlinear modification of the classical Yule model often studied in the context of macroevolution. Here the model is analyzed and interpreted in the framework of the development of networks such as the World Wide Web. Nonlinearity is introduced by replacing the linear birth process governing the growth of the in-links of each specific webpage with a fractional nonlinear birth process with completely general birth rates. Among the main results we derive the explicit distribution of the number of in-links of a webpage chosen uniformly at random recognizing the contribution to the asymptotics and the finite time correction. The mean value of the latter distribution is also calculated explicitly in the most general case. Furthermore, in order to show the usefulness of our results, we particularize them in the case of specific birth rates giving rise to a saturating behaviour, a property that is often observed in nature. The further specialization to the non-fractional case allows us to extend the Yule model accounting for a nonlinear growth.
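The building block this record modifies, a pure birth process with state-dependent rates, is easy to simulate. The sketch below (an illustration with assumed rate functions, not the authors' fractional process) uses the Gillespie algorithm to contrast linear Yule growth of a page's in-links with a saturating-rate variant of the kind mentioned at the end of the abstract:

```python
import random

# Pure birth process for the in-link count of one webpage, simulated with
# the Gillespie algorithm: interarrival times are exponential with a
# state-dependent rate lambda(k). The rate functions below are illustrative
# assumptions, not the paper's.

def simulate_birth_process(rate, t_end, k0=0, seed=1):
    """Return the state (in-link count) reached by time t_end."""
    rng = random.Random(seed)
    t, k = 0.0, k0
    while True:
        t += rng.expovariate(rate(k))   # waiting time to the next in-link
        if t > t_end:
            return k
        k += 1

linear = simulate_birth_process(lambda k: 1.0 + k, t_end=5.0)
saturating = simulate_birth_process(lambda k: (1.0 + k) / (1.0 + 0.2 * k), t_end=5.0)
print(linear, saturating)
```

With the linear rate the count grows roughly like e^t (the classical Yule behaviour), whereas the saturating rate caps the growth speed, the property the authors single out as often observed in nature.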

  7. The General Education Collaboration Model: A Model for Successful Mainstreaming.

    Science.gov (United States)

    Simpson, Richard L.; Myles, Brenda Smith

    1990-01-01

    The General Education Collaboration Model is designed to support general educators teaching mainstreamed disabled students, through collaboration with special educators. The model is based on flexible departmentalization, program ownership, identification and development of supportive attitudes, student assessment as a measure of program…

  8. A new General Lorentz Transformation model

    International Nuclear Information System (INIS)

    Novakovic, Branko; Novakovic, Alen; Novakovic, Dario

    2000-01-01

    A new general structure of Lorentz Transformations, in the form of the General Lorentz Transformation model (GLT-model), has been derived. This structure includes both Lorentz-Einstein and Galilean Transformations as its particular (special) realizations. Since the free parameters of the GLT-model have been identified in a gravitational field, the GLT-model can be employed both in Special and General Relativity. Consequently, the possibilities of a unification of Einstein's Special and General Theories of Relativity, as well as a unification of electromagnetic and gravitational fields, are opened. If the GLT-model is correct, then there exist four new observation phenomena (a length and time neutrality, and a length dilation and a time contraction). Besides, the well-known phenomena (a length contraction and a time dilation) are also constituents of the GLT-model. It means that there is a symmetry in the GLT-model, where the center of this symmetry is represented by a length and a time neutrality. A time and a length neutrality in a gravitational field can be realized if the velocity of a moving system is equal to the free-fall velocity. A time and a length neutrality include an observation of a particle mass neutrality. Special consideration has been devoted to the correlation between the GLT-model and a limitation on particle velocities in order to investigate the possibility of a travel time reduction. It is found that an observation of a particle speed faster than c=299 792 458 m/s is possible in a gravitational field, if certain conditions are fulfilled.

  9. The General Aggression Model

    NARCIS (Netherlands)

    Allen, Johnie J.; Anderson, Craig A.; Bushman, Brad J.

    The General Aggression Model (GAM) is a comprehensive, integrative, framework for understanding aggression. It considers the role of social, cognitive, personality, developmental, and biological factors on aggression. Proximate processes of GAM detail how person and situation factors influence

  10. Generalized bi-additive modelling for categorical data

    NARCIS (Netherlands)

    P.J.F. Groenen (Patrick); A.J. Koning (Alex)

    2004-01-01

    Generalized linear modelling (GLM) is a versatile technique, which may be viewed as a generalization of well-known techniques such as least squares regression, analysis of variance, loglinear modelling, and logistic regression. In many applications, low-order interaction (such as

  11. Generalized latent variable modeling multilevel, longitudinal, and structural equation models

    CERN Document Server

    Skrondal, Anders; Rabe-Hesketh, Sophia

    2004-01-01

    This book unifies and extends latent variable models, including multilevel or generalized linear mixed models, longitudinal or panel models, item response or factor models, latent class or finite mixture models, and structural equation models.

  12. On the Use of Generalized Volume Scattering Models for the Improvement of General Polarimetric Model-Based Decomposition

    Directory of Open Access Journals (Sweden)

    Qinghua Xie

    2017-01-01

    Full Text Available Recently, a general polarimetric model-based decomposition framework was proposed by Chen et al., which addresses several well-known limitations in previous decomposition methods and implements a simultaneous full-parameter inversion by using complete polarimetric information. However, it only employs four typical models to characterize the volume scattering component, which limits the parameter inversion performance. To overcome this issue, this paper presents two general polarimetric model-based decomposition methods by incorporating the generalized volume scattering model (GVSM) or the simplified adaptive volume scattering model (SAVSM), proposed by Antropov et al. and Huang et al., respectively, into the general decomposition framework proposed by Chen et al. By doing so, the final volume coherency matrix structure is selected from a wide range of volume scattering models within a continuous interval according to the data itself, without adding unknowns. Moreover, the new approaches rely on one nonlinear optimization stage instead of four as in the previous method proposed by Chen et al. In addition, the parameter inversion procedure adopts the modified algorithm proposed by Xie et al., which leads to higher accuracy and more physically reliable output parameters. A number of Monte Carlo simulations of polarimetric synthetic aperture radar (PolSAR) data are carried out and show that the proposed method with GVSM yields an overall improvement in the final accuracy of estimated parameters and outperforms both the version using SAVSM and the original approach. In addition, C-band Radarsat-2 and L-band AIRSAR fully polarimetric images over the San Francisco region are also used for testing purposes. A detailed comparison and analysis of decomposition results over different land-cover types are conducted. According to this study, the use of general decomposition models leads to a more accurate quantitative retrieval of target parameters. However, there

  13. Quantification of Underestimation of Physical Activity During Cycling to School When Using Accelerometry

    DEFF Research Database (Denmark)

    Tarp, Jakob; Andersen, Lars B; Østergaard, Lars

    2015-01-01

    Background: Cycling to and from school is an important source of physical activity (PA) in youth, but it is not captured by the dominant objective method to quantify PA. The aim of this study was to quantify the underestimation of objectively assessed PA caused by cycling when using accelerometry....... Methods: Participants were 20 children aged 11-14 years from a randomized controlled trial performed in 2011. Physical activity was assessed by accelerometry with the addition of heart rate monitoring during cycling to school. Global positioning system (GPS) was used to identify periods of cycling...... to school. Results: Mean (95% CI) minutes of moderate-to-vigorous physical activity (MVPA) during round-trip commutes was 10.8 (7.1-16.6). Each kilometre of cycling meant an underestimation of 9314 (95% CI: 7719-11238) counts and 2.7 (95% CI: 2.1-3.5) minutes of MVPA. Adjusting for cycling to school...
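
    The per-kilometre correction factors reported above lend themselves to a simple adjustment of accelerometer-derived MVPA. A minimal sketch (the function name and the scenario are illustrative, not from the study):

```python
# Illustrative use of the per-kilometre correction factor reported above
# (2.7 min MVPA per km cycled, 95% CI 2.1-3.5). The function and the
# example values are hypothetical, not part of the study's methods.

def corrected_mvpa(accel_mvpa_min: float, km_cycled: float) -> float:
    """Add back the MVPA minutes missed by the accelerometer during cycling."""
    MVPA_MIN_PER_KM = 2.7  # point estimate from the study
    return accel_mvpa_min + MVPA_MIN_PER_KM * km_cycled

# A child with 30 accelerometer-measured MVPA minutes who cycled 4 km
# in total would be credited with an extra 10.8 minutes.
print(corrected_mvpa(30.0, 4.0))  # → 40.8
```

    The same pattern applies to raw counts (9314 counts per km) if count totals rather than MVPA minutes are the outcome of interest.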

  14. Simple implementation of general dark energy models

    International Nuclear Information System (INIS)

    Bloomfield, Jolyon K.; Pearson, Jonathan A.

    2014-01-01

    We present a formalism for the numerical implementation of general theories of dark energy, combining the computational simplicity of the equation of state for perturbations approach with the generality of the effective field theory approach. An effective fluid description is employed, based on a general action describing single-scalar field models. The formalism is developed from first principles, and constructed keeping the goal of a simple implementation into CAMB in mind. Benefits of this approach include its straightforward implementation, the generality of the underlying theory, the fact that the evolved variables are physical quantities, and that model-independent phenomenological descriptions may be straightforwardly investigated. We hope this formulation will provide a powerful tool for the comparison of theoretical models of dark energy with observational data

  15. Dual-energy X-ray absorptiometry underestimates in vivo lumbar spine bone mineral density in overweight rats.

    Science.gov (United States)

    Cherif, Rim; Vico, Laurence; Laroche, Norbert; Sakly, Mohsen; Attia, Nebil; Lavet, Cedric

    2018-01-01

    Dual-energy X-ray absorptiometry (DXA) is currently the most widely used technique for measuring areal bone mineral density (BMD). However, several studies have shown inaccuracy, with either overestimation or underestimation of DXA BMD measurements in the case of overweight or obese individuals. We have designed an overweight rat model based on junk food to compare the effect of obesity on in vivo and ex vivo BMD and bone mineral content measurements. Thirty-eight 6-month-old male rats were given a chow diet (n = 13) or a high fat and sucrose diet (n = 25), with the calorie amount being kept the same in the two groups, for 19 weeks. L1 BMD, L1 bone mineral content, amount of abdominal fat, and amount of abdominal lean were obtained from in vivo DXA scan. Ex vivo L1 BMD was also measured. A difference between in vivo and ex vivo DXA BMD measurements (P < 0.05) was found and was correlated with body weight, perirenal fat, abdominal fat, and abdominal lean. Multiple linear regression analysis shows that body weight, abdominal fat, and abdominal lean were independently related to ex vivo BMD. DXA underestimated lumbar in vivo BMD in overweight rats, and this measurement error is related to body weight and abdominal fat. Therefore, caution must be used when one is interpreting BMD among overweight and obese individuals.

  16. Consequences of neurologic lesions assessed by Barthel Index after Botox® injection may be underestimated

    Directory of Open Access Journals (Sweden)

    Dionyssiotis Y

    2012-10-01

    Full Text Available Y Dionyssiotis,1,2 D Kiourtidis,3 A Karvouni,3 A Kaliontzoglou,3 I Kliafas3 1Medical Department, Rehabilitation Center Amyntaio, General Hospital of Florina, Amyntaio, Florina; 2Physical Medicine and Rehabilitation Department, Rhodes General Hospital, Rhodes, Dodecanese; 3Neurologic Department, Rhodes General Hospital, Rhodes, Dodecanese, Greece. Purpose: The aim of this study was to investigate whether the consequences of neurologic lesions are underestimated when the Barthel Index (BI) is used to assess the clinical outcome of botulinum toxin injection. Patients and methods: The records for all in- and outpatients with various neurologic lesions (stroke, multiple sclerosis, spinal cord injury, traumatic brain injury, and so forth) who had been referred to the authors’ departments and who had received botulinum toxin type A (Botox®) for spasticity within a 4-year period (2008–2011) were examined retrospectively. BI data were collected and analyzed. Results: The BI score was found to have increased in follow-up assessments (P = 0.048). No correlation was found between the degree of spasticity and the BI score. Conclusion: The specific injection of Botox in patients with neurologic lesions was not strongly correlated with a significant functional outcome according to the BI. The results of this study suggest that clinicians need to look at other measurement scales for the assessment of significant outcomes of Botox in the rehabilitation process after neurologic lesions. Keywords: botulinum toxin type A, spasticity, stroke, multiple sclerosis

  17. Actuarial statistics with generalized linear mixed models

    NARCIS (Netherlands)

    Antonio, K.; Beirlant, J.

    2007-01-01

    Over the last decade the use of generalized linear models (GLMs) in actuarial statistics has received a lot of attention, starting from the actuarial illustrations in the standard text by McCullagh and Nelder [McCullagh, P., Nelder, J.A., 1989. Generalized linear models. In: Monographs on Statistics

  18. The Surface Energy Balance at Local and Regional Scales-A Comparison of General Circulation Model Results with Observations.

    Science.gov (United States)

    Garratt, J. R.; Krummel, P. B.; Kowalczyk, E. A.

    1993-06-01

    Aspects of the mean monthly energy balance at continental surfaces are examined by appeal to the results of general circulation model (GCM) simulations, climatological maps of surface fluxes, and direct observations. Emphasis is placed on net radiation and evaporation for (i) five continental regions (each approximately 20°×150°) within Africa, Australia, Eurasia, South America, and the United States; (ii) a number of continental sites in both hemispheres. Both the mean monthly values of the local and regional fluxes and the mean monthly diurnal cycles of the local fluxes are described. Mostly, GCMs tend to overestimate the mean monthly levels of net radiation by about 15%-20% on an annual basis, for observed annual values in the range 50 to 100 W m⁻². This is probably the result of several deficiencies, including (i) continental surface albedos being undervalued in a number of the models, resulting in overestimates of the net shortwave flux at the surface (though this deficiency is steadily being addressed by modelers); (ii) incoming shortwave fluxes being overestimated due to uncertainties in cloud schemes and clear-sky absorption; (iii) land-surface temperatures being underestimated, resulting in an underestimate of the outgoing longwave flux. In contrast, and even allowing for the poor observational base for evaporation, there is no obvious overall bias in mean monthly levels of evaporation determined in GCMs, with one or two exceptions. Rather, and far more so than with net radiation, there is a wide range in values of evaporation for all regions investigated. For continental regions and at times of the year of low to moderate rainfall, there is a tendency for the simulated evaporation to be closely related to the precipitation; this is not surprising. In contrast, for regions where there is sufficient or excessive rainfall, the evaporation tends to follow the behavior of the net radiation. Again, this is not surprising given the close relation between

  19. A Generalized QMRA Beta-Poisson Dose-Response Model.

    Science.gov (United States)

    Xie, Gang; Roiko, Anne; Stratton, Helen; Lemckert, Charles; Dunn, Peter K; Mengersen, Kerrie

    2016-10-01

    Quantitative microbial risk assessment (QMRA) is widely accepted for characterizing the microbial risks associated with food, water, and wastewater. Single-hit dose-response models are the most commonly used dose-response models in QMRA. Denoting PI(d) as the probability of infection at a given mean dose d, a three-parameter generalized QMRA beta-Poisson dose-response model, PI(d|α,β,r*), is proposed in which the minimum number of organisms required for causing infection, K_min, is not fixed, but a random variable following a geometric distribution with parameter 0 < r* ≤ 1. The beta-Poisson model, PI(d|α,β), is a special case of the generalized model with K_min = 1 (which implies r* = 1). The generalized beta-Poisson model is based on a conceptual model with greater detail in the dose-response mechanism. Since a maximum likelihood solution is not easily available, a likelihood-free approximate Bayesian computation (ABC) algorithm is employed for parameter estimation. By fitting the generalized model to four experimental data sets from the literature, this study reveals that the posterior median r* estimates produced fall short of meeting the required condition of r* = 1 for the single-hit assumption. However, three out of four data sets fitted by the generalized models could not achieve an improvement in goodness of fit. These combined results imply that, at least in some cases, a single-hit assumption for characterizing the dose-response process may not be appropriate, but that the more complex models may be difficult to support, especially if the sample size is small. The three-parameter generalized model provides a possibility to investigate the mechanism of a dose-response process in greater detail than is possible under a single-hit model. © 2016 Society for Risk Analysis.
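
    For context, the familiar single-hit approximate beta-Poisson curve, the special case of the generalized model referred to above, can be sketched in a few lines (parameter values are illustrative, not fitted to any of the four data sets):

```python
# Sketch of the classical approximate beta-Poisson dose-response curve,
# PI(d) = 1 - (1 + d/beta)^(-alpha), i.e. the K_min = 1 (r* = 1) special
# case of the generalized model. Alpha and beta values are illustrative.

def beta_poisson(d: float, alpha: float, beta: float) -> float:
    """Probability of infection at mean dose d."""
    return 1.0 - (1.0 + d / beta) ** (-alpha)

# Infection probability rises monotonically with dose and stays in (0, 1).
low = beta_poisson(10.0, alpha=0.2, beta=50.0)
high = beta_poisson(1000.0, alpha=0.2, beta=50.0)
assert 0.0 < low < high < 1.0
print(round(low, 3), round(high, 3))
```

    The generalized three-parameter form adds the geometric K_min mechanism on top of this curve; fitting it requires the ABC machinery described in the abstract rather than closed-form likelihoods.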

  20. Bayesian Subset Modeling for High-Dimensional Generalized Linear Models

    KAUST Repository

    Liang, Faming

    2013-06-01

    This article presents a new prior setting for high-dimensional generalized linear models, which leads to a Bayesian subset regression (BSR) with the maximum a posteriori model approximately equivalent to the minimum extended Bayesian information criterion model. The consistency of the resulting posterior is established under mild conditions. Further, a variable screening procedure is proposed based on the marginal inclusion probability, which shares the same properties of sure screening and consistency with the existing sure independence screening (SIS) and iterative sure independence screening (ISIS) procedures. However, since the proposed procedure makes use of joint information from all predictors, it generally outperforms SIS and ISIS in real applications. This article also makes extensive comparisons of BSR with the popular penalized likelihood methods, including Lasso, elastic net, SIS, and ISIS. The numerical results indicate that BSR can generally outperform the penalized likelihood methods. The models selected by BSR tend to be sparser and, more importantly, of higher prediction ability. In addition, the performance of the penalized likelihood methods tends to deteriorate as the number of predictors increases, while this is not significant for BSR. Supplementary materials for this article are available online. © 2013 American Statistical Association.

  1. Impact bias or underestimation? Outcome specifications predict the direction of affective forecasting errors.

    Science.gov (United States)

    Buechel, Eva C; Zhang, Jiao; Morewedge, Carey K

    2017-05-01

    Affective forecasts are used to anticipate the hedonic impact of future events and decide which events to pursue or avoid. We propose that because affective forecasters are more sensitive to outcome specifications of events than experiencers, the outcome specification values of an event, such as its duration, magnitude, probability, and psychological distance, can be used to predict the direction of affective forecasting errors: whether affective forecasters will overestimate or underestimate its hedonic impact. When specifications are positively correlated with the hedonic impact of an event, forecasters will overestimate the extent to which high specification values will intensify and low specification values will discount its impact. When outcome specifications are negatively correlated with its hedonic impact, forecasters will overestimate the extent to which low specification values will intensify and high specification values will discount its impact. These affective forecasting errors compound additively when multiple specifications are aligned in their impact: In Experiment 1, affective forecasters underestimated the hedonic impact of winning a smaller prize that they expected to win, and they overestimated the hedonic impact of winning a larger prize that they did not expect to win. In Experiment 2, affective forecasters underestimated the hedonic impact of a short unpleasant video about a temporally distant event, and they overestimated the hedonic impact of a long unpleasant video about a temporally near event. Experiments 3A and 3B showed that differences in the affect-richness of forecasted and experienced events underlie these differences in sensitivity to outcome specifications, therefore accounting for both the impact bias and its reversal. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  2. Aerosol modelling and validation during ESCOMPTE 2001

    Science.gov (United States)

    Cousin, F.; Liousse, C.; Cachier, H.; Bessagnet, B.; Guillaume, B.; Rosset, R.

    The ESCOMPTE 2001 programme (Atmospheric Research. 69(3-4) (2004) 241) has resulted in an exhaustive set of dynamical, radiative, gas and aerosol observations (surface and aircraft measurements). A previous paper (Atmospheric Research. (2004) in press) dealt with dynamics and gas-phase chemistry. The present paper is an extension to aerosol formation, transport and evolution. To account for important loadings of primary and secondary aerosols and their transformation processes in the ESCOMPTE domain, the ORISAM aerosol module (Atmospheric Environment. 35 (2001) 4751) was implemented on-line in the air-quality Meso-NH-C model. Additional developments have been introduced in the ORganic and Inorganic Spectral Aerosol Module (ORISAM) to improve the comparison between simulations and experimental surface and aircraft field data. This paper discusses this comparison for a simulation performed during one selected day, 24 June 2001, during the Intensive Observation Period IOP2b. Our work relies on BC and OCp emission inventories specifically developed for ESCOMPTE. This study confirms the need for a fine-resolution aerosol inventory with spectral chemical speciation. BC levels are satisfactorily reproduced, thus validating our emission inventory and its processing through Meso-NH-C. However, comparisons for reactive species generally denote an underestimation of concentrations. Organic aerosol levels are rather well simulated, though with a trend to underestimation in the afternoon. Inorganic aerosol species are underestimated for several reasons, some of which have been identified. For sulphates, primary emissions were introduced. Improvement was also obtained for modelled nitrate and ammonium levels after introducing heterogeneous chemistry. However, the lack of terrigenous particle modelling is probably a major cause of the nitrate and ammonium underestimations. Particle numbers and size distributions are well reproduced, but only in the submicrometer range.

  3. A Generalized Deduction of the Ideal-Solution Model

    Science.gov (United States)

    Leo, Teresa J.; Perez-del-Notario, Pedro; Raso, Miguel A.

    2006-01-01

    A new general procedure for deriving the Gibbs energy of mixing is developed through general thermodynamic considerations, and the ideal-solution model is obtained as a special particular case of the general one. The deduction of the Gibbs energy of mixing for the ideal-solution model is a rational one and viewed suitable for advanced students who…

  4. Prediction of Periodontitis Occurrence: Influence of Classification and Sociodemographic and General Health Information

    DEFF Research Database (Denmark)

    Manzolli Leite, Fabio Renato; Peres, Karen Glazer; Do, Loc Giang

    2017-01-01

    BACKGROUND: Prediction of periodontitis development is challenging. Use of oral health-related data alone, especially in a young population, might underestimate disease risk. This study investigates accuracy of oral, systemic, and socioeconomic data on estimating periodontitis development...... in a population-based prospective cohort. METHODS: General health history and sociodemographic information were collected throughout the life-course of individuals. Oral examinations were performed at ages 24 and 31 years in the Pelotas 1982 birth cohort. Periodontitis at age 31 years according to six...... classifications was used as the gold standard to compute area under the receiver operating characteristic curve (AUC). Multivariable binomial regression models were used to evaluate the effects of oral health, general health, and socioeconomic characteristics on accuracy of periodontitis development prediction...

  5. Topics in the generalized vector dominance model

    International Nuclear Information System (INIS)

    Chavin, S.

    1976-01-01

    Two topics are covered in the generalized vector dominance model. In the first topic a model is constructed for dilepton production in hadron-hadron interactions based on the idea of generalized vector dominance. It is argued that in the high mass region the generalized vector-dominance model and the Drell-Yan parton model are alternative descriptions of the same underlying physics. In the low mass regions the models differ; the vector-dominance approach predicts a greater production of dileptons. It is found that the high mass vector mesons which are the hallmark of the generalized vector-dominance model make little contribution to the large yield of leptons observed in the transverse-momentum range 1 < p⊥ < 6 GeV. The recently measured hadronic parameters lead one to believe that detailed fits to the data are possible under the model. As expected, a simple model illustrates the extreme sensitivity of the large-p⊥ lepton yield to the large-transverse-momentum tail of vector-meson production. The second topic is an attempt to explain the mysterious phenomenon of photon shadowing in nuclei utilizing the contribution of the longitudinally polarized photon. It is argued that if the scalar photon anti-shadows, it could compensate for the transverse photon, which is presumed to shadow. It is found in a very simple model that the scalar photon could indeed anti-shadow. The principal feature of the model is a cancellation of amplitudes. The scheme is consistent with scalar photon-nucleon data as well. The idea is tested with two simple GVDM models, and it is found that the anti-shadowing contribution of the scalar photon is not sufficient to compensate for the contribution of the transverse photon. It is thus doubtful that the scalar photon makes a significant contribution to the total photon-nuclear cross section

  6. A Generalized Random Regret Minimization Model

    NARCIS (Netherlands)

    Chorus, C.G.

    2013-01-01

    This paper presents, discusses and tests a generalized Random Regret Minimization (G-RRM) model. The G-RRM model is created by replacing a fixed constant in the attribute-specific regret functions of the RRM model, by a regret-weight variable. Depending on the value of the regret-weights, the G-RRM

  7. The DINA model as a constrained general diagnostic model: Two variants of a model equivalency.

    Science.gov (United States)

    von Davier, Matthias

    2014-02-01

    The 'deterministic-input noisy-AND' (DINA) model is one of the more frequently applied diagnostic classification models for binary observed responses and binary latent variables. The purpose of this paper is to show that the model is equivalent to a special case of a more general compensatory family of diagnostic models. Two equivalencies are presented. Both project the original DINA skill space and design Q-matrix using mappings into a transformed skill space as well as a transformed Q-matrix space. Both variants of the equivalency produce a compensatory model that is mathematically equivalent to the (conjunctive) DINA model. This equivalency holds for all DINA models with any type of Q-matrix, not only for trivial (simple-structure) cases. The two versions of the equivalency presented in this paper are not implied by the recently suggested log-linear cognitive diagnosis model or the generalized DINA approach. The equivalencies presented here exist independent of these recently derived models since they solely require a linear - compensatory - general diagnostic model without any skill interaction terms. Whenever it can be shown that one model can be viewed as a special case of another more general one, conclusions derived from any particular model-based estimates are drawn into question. It is widely known that multidimensional models can often be specified in multiple ways while the model-based probabilities of observed variables stay the same. This paper goes beyond this type of equivalency by showing that a conjunctive diagnostic classification model can be expressed as a constrained special case of a general compensatory diagnostic modelling framework. © 2013 The British Psychological Society.
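
    The conjunctive DINA item response function discussed above can be sketched in a few lines (the slip/guess notation is the standard one for this model; the function name and example values are illustrative):

```python
# Sketch of the DINA item response function. A respondent with binary
# skill vector alpha "has" an item (eta = 1) only if they master every
# skill the item's Q-matrix row requires; otherwise the response is a
# guess. Slip and guess values here are illustrative.

def dina_prob(alpha, q_row, slip, guess):
    """P(correct) = 1 - slip if all required skills are mastered, else guess."""
    eta = all(a >= q for a, q in zip(alpha, q_row))
    return (1.0 - slip) if eta else guess

# Mastering both required skills yields 1 - slip; missing one drops to guess.
print(dina_prob([1, 1], [1, 1], slip=0.1, guess=0.2))  # → 0.9
print(dina_prob([1, 0], [1, 1], slip=0.1, guess=0.2))  # → 0.2
```

    The paper's equivalency result says this conjunctive rule can be reproduced exactly by a compensatory general diagnostic model after remapping the skill space and Q-matrix.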

  8. Exposure limits: the underestimation of absorbed cell phone radiation, especially in children.

    Science.gov (United States)

    Gandhi, Om P; Morgan, L Lloyd; de Salles, Alvaro Augusto; Han, Yueh-Ying; Herberman, Ronald B; Davis, Devra Lee

    2012-03-01

    The existing cell phone certification process uses a plastic model of the head called the Specific Anthropomorphic Mannequin (SAM), representing the top 10% of U.S. military recruits in 1989 and greatly underestimating the Specific Absorption Rate (SAR) for typical mobile phone users, especially children. A superior computer simulation certification process has been approved by the Federal Communications Commission (FCC) but is not employed to certify cell phones. In the United States, the FCC determines maximum allowed exposures. Many countries, especially European Union members, use the "guidelines" of the International Commission on Non-Ionizing Radiation Protection (ICNIRP), a nongovernmental agency. A head smaller than SAM absorbs a relatively higher SAR from radiofrequency (RF) exposure. Also, SAM uses a fluid having the average electrical properties of the head, which cannot indicate differential absorption of specific brain tissue, nor absorption in children or smaller adults. The SAR for a 10-year-old is up to 153% higher than the SAR for the SAM model. When electrical properties are considered, a child's head can absorb over two times more than an adult's, and the skull's bone marrow up to ten times more. Therefore, a new certification process is needed that incorporates different modes of use, head sizes, and tissue properties. Anatomically based models should be employed in revising safety standards for these ubiquitous modern devices, and standards should be set by accountable, independent groups.

  9. Log-normal frailty models fitted as Poisson generalized linear mixed models.

    Science.gov (United States)

    Hirsch, Katharina; Wienke, Andreas; Kuss, Oliver

    2016-12-01

    The equivalence of a survival model with a piecewise constant baseline hazard function and a Poisson regression model has been known since decades. As shown in recent studies, this equivalence carries over to clustered survival data: A frailty model with a log-normal frailty term can be interpreted and estimated as a generalized linear mixed model with a binary response, a Poisson likelihood, and a specific offset. Proceeding this way, statistical theory and software for generalized linear mixed models are readily available for fitting frailty models. This gain in flexibility comes at the small price of (1) having to fix the number of pieces for the baseline hazard in advance and (2) having to "explode" the data set by the number of pieces. In this paper we extend the simulations of former studies by using a more realistic baseline hazard (Gompertz) and by comparing the model under consideration with competing models. Furthermore, the SAS macro %PCFrailty is introduced to apply the Poisson generalized linear mixed approach to frailty models. The simulations show good results for the shared frailty model. Our new %PCFrailty macro provides proper estimates, especially in case of 4 events per piece. The suggested Poisson generalized linear mixed approach for log-normal frailty models based on the %PCFrailty macro provides several advantages in the analysis of clustered survival data with respect to more flexible modelling of fixed and random effects, exact (in the sense of non-approximate) maximum likelihood estimation, and standard errors and different types of confidence intervals for all variance parameters. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
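
    Step (2) above, "exploding" the data set by the number of pieces of the baseline hazard, can be sketched as follows (the cut points and record layout are assumptions for illustration, not the %PCFrailty implementation):

```python
# Sketch of the piecewise-exponential data expansion that lets a frailty
# model be fitted as a Poisson GLMM: each subject contributes one record
# per hazard piece they survive into, with the exposure time in that piece
# as offset and an event indicator as the Poisson response.

def explode(time, event, cuts):
    """Return (piece index, exposure time in piece, event indicator) rows."""
    rows = []
    start = 0.0
    for j, end in enumerate(cuts):
        if time <= start:
            break  # subject already left observation before this piece
        exposure = min(time, end) - start
        died_here = int(event == 1 and time <= end)
        rows.append((j, exposure, died_here))
        start = end
    return rows

# A subject failing at t = 3.5 with pieces [0,2), [2,5), [5,10):
print(explode(3.5, 1, [2.0, 5.0, 10.0]))  # → [(0, 2.0, 0), (1, 1.5, 1)]
```

    In the resulting long-format data, log(exposure) enters as the offset and the cluster-level random intercept plays the role of the log-normal frailty.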

  10. A Note on the Identifiability of Generalized Linear Mixed Models

    DEFF Research Database (Denmark)

    Labouriau, Rodrigo

    2014-01-01

    I present here a simple proof that, under general regularity conditions, the standard parametrization of the generalized linear mixed model is identifiable. The proof is based on the assumptions of generalized linear mixed models on the first and second order moments and some general mild regularity...... conditions, and, therefore, is extensible to quasi-likelihood based generalized linear models. In particular, binomial and Poisson mixed models with dispersion parameter are identifiable when equipped with the standard parametrization...

  11. Quantifying the underestimation of relative risks from genome-wide association studies.

    Directory of Open Access Journals (Sweden)

    Chris Spencer

    2011-03-01

    Full Text Available Genome-wide association studies (GWAS have identified hundreds of associated loci across many common diseases. Most risk variants identified by GWAS will merely be tags for as-yet-unknown causal variants. It is therefore possible that identification of the causal variant, by fine mapping, will identify alleles with larger effects on genetic risk than those currently estimated from GWAS replication studies. We show that under plausible assumptions, whilst the majority of the per-allele relative risks (RR) estimated from GWAS data will be close to the true risk at the causal variant, some could be considerable underestimates. For example, for an estimated RR in the range 1.2-1.3, there is approximately a 38% chance that the true RR exceeds 1.4 and a 10% chance that it is over 2. We show how these probabilities can vary depending on the true effects associated with low-frequency variants and on the minor allele frequency (MAF) of the most associated SNP. We investigate the consequences of the underestimation of effect sizes for predictions of an individual's disease risk and interpret our results for the design of fine mapping experiments. Although these effects mean that the amount of heritability explained by known GWAS loci is expected to be larger than current projections, this increase is likely to explain a relatively small amount of the so-called "missing" heritability.

  12. Multiple phase transitions in the generalized Curie-Weiss model

    International Nuclear Information System (INIS)

    Eisele, T.; Ellis, R.S.

    1988-01-01

    The generalized Curie-Weiss model is an extension of the classical Curie-Weiss model in which the quadratic interaction function of the mean spin value is replaced by a more general interaction function. It is shown that the generalized Curie-Weiss model can have a sequence of phase transitions at different critical temperatures. Both first-order and second-order phase transitions can occur, and explicit criteria for the two types are given. Three examples of generalized Curie-Weiss models are worked out in detail, including one example with infinitely many phase transitions. A number of results are derived using large-deviation techniques
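
    As a hedged illustration of the construction described above (notation assumed, following the standard large-deviation treatment rather than the paper itself), the generalized model replaces the classical quadratic interaction F(m) = m²/2 in the free-energy functional by a general function F:

```latex
% Free-energy functional for the (generalized) Curie-Weiss model,
% with I the Cramér entropy of a single +/-1 spin. Classical model:
% F(m) = m^2/2. Notation is assumed, not taken from the paper.
f_\beta(m) = \beta\, F(m) - I(m), \qquad
I(m) = \frac{1+m}{2}\ln\frac{1+m}{2} + \frac{1-m}{2}\ln\frac{1-m}{2} + \ln 2,
\qquad m \in [-1, 1].
```

    The equilibrium mean-spin values are the global maximizers of f_β (for the classical F this recovers m = tanh(βm)); a phase transition occurs at each inverse temperature β where the set of maximizers changes, first-order if the maximizer jumps and second-order if it bifurcates continuously.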

  13. Generalization of the quark rearrangement model

    International Nuclear Information System (INIS)

    Fields, T.; Chen, C.K.

    1976-01-01

    An extension and generalization of the quark rearrangement model of baryon annihilation is described which can be applied to all annihilation reactions and which incorporates some of the features of the highly successful quark parton model. Some p anti-p interactions are discussed

  14. A didactical structural model – linking analysis of teaching and analysis of educational media

    DEFF Research Database (Denmark)

    Graf, Stefan Ting

    1. Gap between general didactics and textbook/media research There seems to be a gap between general didactics (theory of teaching) and research in textbooks or educational media in general, at least in the Nordic and German-speaking countries. General didactics and its models seem to underestimate...... related questions (e.g. readability) without establishing a link to what is useful for the teacher’s tasks both on the level of preparation, practice and reflection, i.e. without an explicit theory of teaching. 2. Media in general didactics I will discuss the status of media in some current models...... of reflection in general didactics (Hiim/Hippe, Meyer, Klafki) and present a reconstruction of a didactical model of structure (Strukturmodel), whose cornerstones are ‘intentional content’, ‘media/expression’ and ‘teaching method/activity’. The inclusion of media/expression in the model resumes a seemingly

  15. Predictors of underestimation of malignancy after image-guided core needle biopsy diagnosis of flat epithelial atypia or atypical ductal hyperplasia.

    Science.gov (United States)

    Yu, Chi-Chang; Ueng, Shir-Hwa; Cheung, Yun-Chung; Shen, Shih-Che; Kuo, Wen-Lin; Tsai, Hsiu-Pei; Lo, Yung-Feng; Chen, Shin-Cheh

    2015-01-01

    Flat epithelial atypia (FEA) and atypical ductal hyperplasia (ADH) are precursors of breast malignancy. Management of FEA or ADH after image-guided core needle biopsy (CNB) remains controversial. The aim of this study was to evaluate malignancy underestimation rates after FEA or ADH diagnosis using image-guided CNB and to identify clinical characteristics and imaging features associated with malignancy, as well as identify cases with low underestimation rates that may be treatable by observation only. We retrospectively reviewed 2,875 consecutive image-guided CNBs recorded in an electronic database from January 2010 to December 2011 and identified 128 (4.5%) FEA and 83 (2.9%) ADH diagnoses (211 total cases). Of these, 64 (30.3%) were echo-guided CNB procedures and 147 (69.7%) mammography-guided CNBs. Twenty patients (9.5%) were upgraded to malignancy. Multivariate analysis indicated that age (OR = 1.123 per 1-year increase, p = 0.002), mass-type lesion with calcifications (OR = 8.213, p = 0.006), and ADH in CNB specimens (OR = 8.071, p = 0.003) were independent predictors of underestimation. In univariate analysis of echo-guided CNB (n = 64), mass with calcifications had the highest underestimation rate (p < 0.001). Multivariate analysis of 147 mammography-guided CNBs revealed that age (OR = 1.122 per 1-year increase, p = 0.040) and calcification distribution were significant independent predictors of underestimation. No FEA case in which complete calcification retrieval was recorded after CNB was upgraded to malignancy. Older age at diagnosis on image-guided CNB was a predictor of malignancy underestimation. Mass with calcifications was more likely to be associated with malignancy, and in cases presenting as calcifications only, segmental distribution or linear shapes were significantly associated with upgrading.
Excision after FEA or ADH diagnosis by image-guided CNB is warranted except for FEA diagnosed using mammography-guided CNB with complete calcification
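
    As an aside on interpreting the reported odds ratios: a per-year OR from a logistic model compounds multiplicatively, so the age effect above can be scaled to larger age gaps. A minimal sketch (the OR of 1.123 is taken from the abstract; the ten-year gap is an illustrative choice):

    ```python
    # A per-unit odds ratio from a logistic model compounds multiplicatively:
    # OR = 1.123 per one-year increase in age implies OR = 1.123**10 across a
    # ten-year age difference (on the usual multiplicative logistic scale).
    import math

    or_per_year = 1.123                 # reported OR per 1-year increase
    beta = math.log(or_per_year)        # the underlying regression coefficient
    or_ten_years = math.exp(10 * beta)  # identical to or_per_year ** 10

    print(round(or_ten_years, 2))
    ```

    An OR of 1.123 per year thus corresponds to roughly a threefold increase in the odds of upgrade across a ten-year age difference.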

  16. A generalized logarithmic image processing model based on the gigavision sensor model.

    Science.gov (United States)

    Deng, Guang

    2012-03-01

    The logarithmic image processing (LIP) model is a mathematical theory providing generalized linear operations for image processing. The gigavision sensor (GVS) is a new imaging device that can be described by a statistical model. In this paper, by studying these two seemingly unrelated models, we develop a generalized LIP (GLIP) model. With the LIP model being its special case, the GLIP model not only provides new insights into the LIP model but also defines new image representations and operations for solving general image processing problems that are not necessarily related to the GVS. A new parametric LIP model is also developed. To illustrate the application of the new scalar multiplication operation, we propose an energy-preserving algorithm for tone mapping, which is a necessary step in image dehazing. By comparing with results using two state-of-the-art algorithms, we show that the new scalar multiplication operation is an effective tool for tone mapping.
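
    For readers unfamiliar with the LIP model that the GLIP generalizes, the classical operations have a simple closed form. A minimal sketch of the standard (textbook) LIP addition and scalar multiplication on the grey-level bound M — not the paper's generalized operations:

    ```python
    # Classical LIP (logarithmic image processing) operations, of which the
    # paper's GLIP model is a generalization. Grey values live in [0, M).
    M = 256.0

    def lip_add(f, g):
        """LIP addition: f (+) g = f + g - f*g/M."""
        return f + g - f * g / M

    def lip_scalar_mul(lam, f):
        """LIP scalar multiplication: lam (x) f = M - M*(1 - f/M)**lam.
        This is the kind of operation the paper applies to tone mapping."""
        return M - M * (1.0 - f / M) ** lam

    # lam = 1 leaves a pixel unchanged; adding a pixel to itself matches
    # scalar multiplication by 2, as the vector-space structure requires.
    pixel = 128.0
    assert abs(lip_scalar_mul(1.0, pixel) - pixel) < 1e-9
    assert abs(lip_add(pixel, pixel) - lip_scalar_mul(2.0, pixel)) < 1e-9
    ```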

  17. Automated Volumetric Mammographic Breast Density Measurements May Underestimate Percent Breast Density for High-density Breasts

    NARCIS (Netherlands)

    Rahbar, K.; Gubern Merida, A.; Patrie, J.T.; Harvey, J.A.

    2017-01-01

    RATIONALE AND OBJECTIVES: The purpose of this study was to evaluate discrepancy in breast composition measurements obtained from mammograms using two commercially available software methods for systematic trends in overestimation or underestimation compared to magnetic resonance-derived

  18. Inferring Perspective Versus Getting Perspective: Underestimating the Value of Being in Another Person's Shoes.

    Science.gov (United States)

    Zhou, Haotian; Majka, Elizabeth A; Epley, Nicholas

    2017-04-01

    People use at least two strategies to solve the challenge of understanding another person's mind: inferring that person's perspective by reading his or her behavior (theorization) and getting that person's perspective by experiencing his or her situation (simulation). The five experiments reported here demonstrate a strong tendency for people to underestimate the value of simulation. Predictors estimated a stranger's emotional reactions toward 50 pictures. They could either infer the stranger's perspective by reading his or her facial expressions or simulate the stranger's perspective by watching the pictures he or she viewed. Predictors were substantially more accurate when they got perspective through simulation, but overestimated the accuracy they had achieved by inferring perspective. Predictors' miscalibrated confidence stemmed from overestimating the information revealed through facial expressions and underestimating the similarity in people's reactions to a given situation. People seem to underappreciate a useful strategy for understanding the minds of others, even after they gain firsthand experience with both strategies.

  19. The DART general equilibrium model: A technical description

    OpenAIRE

    Springer, Katrin

    1998-01-01

    This paper provides a technical description of the Dynamic Applied Regional Trade (DART) General Equilibrium Model. The DART model is a recursive dynamic, multi-region, multi-sector computable general equilibrium model. All regions are fully specified and linked by bilateral trade flows. The DART model can be used to project economic activities, energy use and trade flows for each of the specified regions to simulate various trade policy as well as environmental policy scenarios, and to analy...

  20. The atmospheric chemistry general circulation model ECHAM5/MESSy1: consistent simulation of ozone from the surface to the mesosphere

    Directory of Open Access Journals (Sweden)

    P. Jöckel

    2006-01-01

    The new Modular Earth Submodel System (MESSy) describes atmospheric chemistry and meteorological processes in a modular framework, following strict coding standards. It has been coupled to the ECHAM5 general circulation model, which has been slightly modified for this purpose. A 90-layer model setup extending up to 0.01 hPa was used at spectral T42 resolution to simulate the lower and middle atmosphere. With this high vertical resolution the model simulates the Quasi-Biennial Oscillation. The model meteorology has been tested to check the influence of the changes to ECHAM5 and of the radiation interactions with the new representation of atmospheric composition. In the simulations presented here a Newtonian relaxation technique was applied in the tropospheric part of the domain to weakly nudge the model towards the analysed meteorology during the period 1998–2005. This allows an efficient and direct evaluation with satellite and in situ data. It is shown that the tropospheric wave forcing of the stratosphere in the model suffices to reproduce major stratospheric warming events, leading e.g. to the vortex split over Antarctica in 2002. Characteristic features such as dehydration and denitrification caused by the sedimentation of polar stratospheric cloud particles, and ozone depletion during winter and spring, are simulated well, although ozone loss in the lower polar stratosphere is slightly underestimated. The model realistically simulates stratosphere-troposphere exchange processes, as indicated by comparisons with satellite and in situ measurements. The evaluation of tropospheric chemistry presented here focuses on the distributions of ozone, hydroxyl radicals, carbon monoxide and reactive nitrogen compounds. In spite of minor shortcomings, mostly related to the relatively coarse T42 resolution and the neglect of inter-annual changes in biomass burning emissions, the main characteristics of the trace gas distributions are generally reproduced well. The MESSy

  1. A generalized model via random walks for information filtering

    Energy Technology Data Exchange (ETDEWEB)

    Ren, Zhuo-Ming, E-mail: zhuomingren@gmail.com [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Kong, Yixiu [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland); Shang, Ming-Sheng, E-mail: msshang@cigit.ac.cn [Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, ChongQing, 400714 (China); Zhang, Yi-Cheng [Department of Physics, University of Fribourg, Chemin du Musée 3, CH-1700, Fribourg (Switzerland)

    2016-08-06

    There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. By taking degree information into account, the proposed generalized model can deduce collaborative filtering, the interdisciplinary physics approaches, and even numerous extensions of them. Furthermore, we analyze the generalized model with single and hybrid degree information on the process of the random walk in bipartite networks, and propose a possible strategy of using hybrid degree information for objects of differing popularity to achieve promising precision of the recommendation. - Highlights: • We propose a generalized recommendation model employing random walk dynamics. • The proposed model with single and hybrid degree information is analyzed. • A strategy with hybrid degree information improves the precision of the recommendation.
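
    The random-walk mechanism on a user-object bipartite network that underlies such models can be illustrated with a two-step mass-diffusion (ProbS-style) sketch. The data and the exact spreading rule below are illustrative assumptions, not the paper's generalized formulation:

    ```python
    # Two-step random walk on a toy user-object bipartite network: resources
    # placed on a target user's collected objects diffuse to users and back
    # to objects; uncollected objects are ranked by the returned resource.
    adj = {
        "u1": {"o1", "o2"},
        "u2": {"o2", "o3"},
        "u3": {"o1", "o3", "o4"},
    }
    objects = sorted({o for objs in adj.values() for o in objs})
    obj_degree = {o: sum(o in objs for objs in adj.values()) for o in objects}

    def recommend(target):
        # Step 1: each collected object splits its unit resource equally
        # among the users who collected it.
        user_res = {u: sum(1.0 / obj_degree[o] for o in adj[target] & adj[u])
                    for u in adj}
        # Step 2: each user redistributes its resource evenly over its objects.
        scores = {o: 0.0 for o in objects}
        for u, r in user_res.items():
            for o in adj[u]:
                scores[o] += r / len(adj[u])
        # Rank the objects the target has not yet collected.
        return sorted((o for o in objects if o not in adj[target]),
                      key=lambda o: -scores[o])

    print(recommend("u1"))  # -> ['o3', 'o4']
    ```

    Weighting the two steps by different powers of the user and object degrees recovers the "hybrid degree information" idea mentioned in the abstract.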

  2. A generalized model via random walks for information filtering

    International Nuclear Information System (INIS)

    Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2016-01-01

    There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. By taking degree information into account, the proposed generalized model can deduce collaborative filtering, the interdisciplinary physics approaches, and even numerous extensions of them. Furthermore, we analyze the generalized model with single and hybrid degree information on the process of the random walk in bipartite networks, and propose a possible strategy of using hybrid degree information for objects of differing popularity to achieve promising precision of the recommendation. - Highlights: • We propose a generalized recommendation model employing random walk dynamics. • The proposed model with single and hybrid degree information is analyzed. • A strategy with hybrid degree information improves the precision of the recommendation.

  3. Generalized Ordinary Differential Equation Models.

    Science.gov (United States)

    Miao, Hongyu; Wu, Hulin; Xue, Hongqi

    2014-10-01

    Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method.

  4. Kalman Filter for Generalized 2-D Roesser Models

    Institute of Scientific and Technical Information of China (English)

    SHENG Mei; ZOU Yun

    2007-01-01

    The design problem of the state filter for generalized stochastic 2-D Roesser models, which arises when both the state and the measurement are simultaneously subject to interference from white noise, is discussed. The well-known Kalman filter design is extended to generalized 2-D Roesser models. Based on the method of "scanning line by line", the filtering problem of generalized 2-D Roesser models with mode-energy reconstruction is solved. The formula of the optimal filter, which minimizes the variance of the estimation error of the state vectors, is derived. The validity of the designed filter is verified through calculation steps and illustrative examples.
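
    For orientation, the classical scalar Kalman recursion that is being extended here looks as follows. This is a standard 1-D sketch with illustrative noise variances, not the 2-D Roesser-model filter derived in the paper:

    ```python
    # Classical scalar Kalman filter: estimate a (near-)constant state from
    # noisy measurements by alternating predict and update steps.
    def kalman_1d(zs, x0=0.0, p0=1.0, q=1e-3, r=0.5):
        """zs: measurements; q: process-noise variance; r: measurement-noise
        variance. Returns the sequence of state estimates."""
        x, p = x0, p0
        estimates = []
        for z in zs:
            p = p + q                 # predict: variance grows by process noise
            k = p / (p + r)           # Kalman gain
            x = x + k * (z - x)       # update with the measurement innovation
            p = (1.0 - k) * p         # variance shrinks after the update
            estimates.append(x)
        return estimates

    est = kalman_1d([1.2, 0.8, 1.1, 0.9, 1.0])
    ```

    The 2-D extension in the paper replaces the scalar recursion with one that scans the Roesser-model state plane line by line.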

  5. A Model Fit Statistic for Generalized Partial Credit Model

    Science.gov (United States)

    Liang, Tie; Wells, Craig S.

    2009-01-01

    Investigating the fit of a parametric model is an important part of the measurement process when implementing item response theory (IRT), but research examining it is limited. A general nonparametric approach for detecting model misfit, introduced by J. Douglas and A. S. Cohen (2001), has exhibited promising results for the two-parameter logistic…

  6. Lesion stiffness measured by shear-wave elastography: Preoperative predictor of the histologic underestimation of US-guided core needle breast biopsy.

    Science.gov (United States)

    Park, Ah Young; Son, Eun Ju; Kim, Jeong-Ah; Han, Kyunghwa; Youk, Ji Hyun

    2015-12-01

    To determine whether lesion stiffness measured by shear-wave elastography (SWE) can be used to predict the histologic underestimation of ultrasound (US)-guided 14-gauge core needle biopsy (CNB) for breast masses. This retrospective study enrolled 99 breast masses from 93 patients, including 40 high-risk lesions and 59 cases of ductal carcinoma in situ (DCIS), which were diagnosed by US-guided 14-gauge CNB. SWE was performed for all breast masses to measure quantitative elasticity values before US-guided CNB. To identify the preoperative factors associated with histologic underestimation, patients' age, symptoms, lesion size, B-mode US findings, and quantitative SWE parameters were compared according to the histologic upgrade after surgery using the chi-square test, Fisher's exact test, or the independent t-test. The independent factors for predicting histologic upgrade were evaluated using multivariate logistic regression analysis. The underestimation rate was 28.3% (28/99) overall, 25.0% (10/40) in high-risk lesions, and 30.5% (18/59) in DCIS. All elasticity values of the upgrade group were significantly higher than those of the non-upgrade group. Breast lesion stiffness quantitatively measured by SWE could be helpful in predicting the underestimation of malignancy in US-guided 14-gauge CNB. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  7. Disclosing bias in bisulfite assay: MethPrimers underestimate high DNA methylation.

    Directory of Open Access Journals (Sweden)

    Andrea Fuso

    Discordant results obtained in bisulfite assays using MethPrimers (PCR primers designed using the MethPrimer software, or assuming that non-CpG cytosines are not methylated) versus primers insensitive to cytosine methylation led us to hypothesize a technical bias. We therefore used the two kinds of primers to study different experimental models and methylation statuses. We demonstrated that MethPrimers negatively select hypermethylated DNA sequences in the PCR step of the bisulfite assay, resulting in underestimation of CpG methylation and masking of non-CpG methylation, failing to evidence differential methylation statuses. We also describe the characteristics of "Methylation-Insensitive Primers" (MIPs), which have degenerate bases (G/A) to cope with the uncertain C/U conversion. As CpG and non-CpG DNA methylation patterns are largely variable depending on the species, developmental stage, tissue and cell type, a variable extent of the bias is expected. The more the methylome is methylated, the greater the extent of the bias, with a prevalent effect of non-CpG methylation. These findings suggest a revision of several DNA methylation patterns documented so far and also point out the necessity of applying unbiased analyses to the increasing number of epigenomic studies.

  8. Disk Masses around Solar-mass Stars are Underestimated by CO Observations

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Mo; Evans II, Neal J. [Astronomy Department, University of Texas, 2515 Speedway, Stop C1400, Austin, TX 78712 (United States); Dodson-Robinson, Sarah E. [University of Delaware, Department of Physics and Astronomy, 217 Sharp Lab, Newark, DE 19716 (United States); Willacy, Karen; Turner, Neal J. [Mail Stop 169-506, Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States)

    2017-05-20

    Gas in protostellar disks provides the raw material for giant planet formation and controls the dynamics of the planetesimal-building dust grains. Accurate gas mass measurements help map the observed properties of planet-forming disks onto the formation environments of known exoplanets. Rare isotopologues of carbon monoxide (CO) have been used as gas mass tracers for disks in the Lupus star-forming region, with an assumed interstellar CO/H2 abundance ratio. Unfortunately, observations of T-Tauri disks show that the CO abundance is not interstellar, a finding reproduced by models that show CO abundance decreasing both with distance from the star and as a function of time. Here, we present radiative transfer simulations that assess the accuracy of CO-based disk mass measurements. We find that the combination of CO chemical depletion in the outer disk and optically thick emission from the inner disk leads observers to underestimate gas mass by more than an order of magnitude if they use the standard assumptions of an interstellar CO/H2 ratio and optically thin emission. Furthermore, CO abundance changes on million-year timescales, introducing an age/mass degeneracy into observations. To reach factor-of-a-few accuracy for CO-based disk mass measurements, we suggest that observers and modelers adopt the following strategies: (1) select low-J transitions; (2) observe multiple CO isotopologues and use either intensity ratios or normalized line profiles to diagnose CO chemical depletion; and (3) use spatially resolved observations to measure the CO-abundance distribution.

  9. A general model for membrane-based separation processes

    DEFF Research Database (Denmark)

    Soni, Vipasha; Abildskov, Jens; Jonsson, Gunnar Eigil

    2009-01-01

    behaviour will play an important role. In this paper, modelling of membrane-based processes for the separation of gas and liquid mixtures is considered. Two general models, one for membrane-based liquid separation processes (with phase change) and another for membrane-based gas separation, are presented. The separation processes covered are: membrane-based gas separation processes, pervaporation and various types of membrane distillation processes. The specific model for each type of membrane-based process is generated from the two general models by applying the specific system descriptions and the corresponding...

  10. Testing Parametric versus Semiparametric Modelling in Generalized Linear Models

    NARCIS (Netherlands)

    Härdle, W.K.; Mammen, E.; Müller, M.D.

    1996-01-01

    We consider a generalized partially linear model E(Y|X,T) = G{X'b + m(T)}, where G is a known function, b is an unknown parameter vector, and m is an unknown function. The paper introduces a test statistic which allows one to decide between a parametric and a semiparametric model: (i) m is linear, i.e.

  11. Calibration and validation of a general infiltration model

    Science.gov (United States)

    Mishra, Surendra Kumar; Ranjan Kumar, Shashi; Singh, Vijay P.

    1999-08-01

    A general infiltration model proposed by Singh and Yu (1990) was calibrated and validated using a split sampling approach for 191 sets of infiltration data observed in the states of Minnesota and Georgia in the USA. Of the five model parameters, fc (the final infiltration rate), So (the available storage space) and exponent n were found to be more predictable than the other two parameters: m (exponent) and a (proportionality factor). A critical examination of the general model revealed that it is related to the Soil Conservation Service (1956) curve number (SCS-CN) method and its parameter So is equivalent to the potential maximum retention of the SCS-CN method and is, in turn, found to be a function of soil sorptivity and hydraulic conductivity. The general model was found to describe infiltration rate with time varying curve number.
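
    Since the general model's parameter So was shown to be equivalent to the potential maximum retention of the SCS-CN method, the latter is easy to sketch. The rainfall depth and curve number below are illustrative values, and the standard initial-abstraction ratio of 0.2 is assumed:

    ```python
    # The SCS-CN rainfall-runoff relation the general infiltration model was
    # shown to relate to: potential maximum retention S (the counterpart of
    # the model's parameter So) and direct runoff Q, all depths in mm.
    def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
        s = 25400.0 / cn - 254.0     # potential maximum retention (mm)
        ia = ia_ratio * s            # initial abstraction before runoff starts
        if p_mm <= ia:
            return 0.0               # all rainfall abstracted, no runoff
        return (p_mm - ia) ** 2 / (p_mm - ia + s)

    q = scs_cn_runoff(p_mm=80.0, cn=75)  # ~27 mm of runoff from 80 mm of rain
    ```

    A higher curve number (less permeable soil) shrinks S, which plays the same role as a smaller available storage space So in the general model.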

  12. Partially Observed Mixtures of IRT Models: An Extension of the Generalized Partial-Credit Model

    Science.gov (United States)

    Von Davier, Matthias; Yamamoto, Kentaro

    2004-01-01

    The generalized partial-credit model (GPCM) is used frequently in educational testing and in large-scale assessments for analyzing polytomous data. Special cases of the generalized partial-credit model are the partial-credit model--or Rasch model for ordinal data--and the two parameter logistic (2PL) model. This article extends the GPCM to the…

  13. Cosmological models in general relativity

    Indian Academy of Sciences (India)

    Cosmological models in general relativity. B B PAUL. Department of Physics, Nowgong College, Nagaon, Assam, India. MS received 4 October 2002; revised 6 March 2003; accepted 21 May 2003. Abstract. LRS Bianchi type-I space-time filled with perfect fluid is considered here with the deceleration parameter as variable.

  14. Generalizations of the noisy-or model

    Czech Academy of Sciences Publication Activity Database

    Vomlel, Jiří

    2015-01-01

    Roč. 51, č. 3 (2015), s. 508-524 ISSN 0023-5954 R&D Projects: GA ČR GA13-20012S Institutional support: RVO:67985556 Keywords : Bayesian networks * noisy-or model * classification * generalized linear models Subject RIV: JD - Computer Applications, Robotics Impact factor: 0.628, year: 2015 http://library.utia.cas.cz/separaty/2015/MTR/vomlel-0447357.pdf

  15. Generalized Born Models of Macromolecular Solvation Effects

    Science.gov (United States)

    Bashford, Donald; Case, David A.

    2000-10-01

    It would often be useful in computer simulations to use a simple description of solvation effects, instead of explicitly representing the individual solvent molecules. Continuum dielectric models often work well in describing the thermodynamic aspects of aqueous solvation, and approximations to such models that avoid the need to solve the Poisson equation are attractive because of their computational efficiency. Here we give an overview of one such approximation, the generalized Born model, which is simple and fast enough to be used for molecular dynamics simulations of proteins and nucleic acids. We discuss its strengths and weaknesses, both for its fidelity to the underlying continuum model and for its ability to replace explicit consideration of solvent molecules in macromolecular simulations. We focus particularly on versions of the generalized Born model that have a pair-wise analytical form, and therefore fit most naturally into conventional molecular mechanics calculations.
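
    The pair-wise analytical form referred to above is commonly written, in the Still-style formulation (shown here as the standard textbook expression rather than the exact variant discussed in the overview):

    ```latex
    \Delta G_{\mathrm{pol}} \approx
      -\frac{1}{2}\left(\frac{1}{\epsilon_{\mathrm{in}}}
      - \frac{1}{\epsilon_{\mathrm{out}}}\right)
      \sum_{i,j}\frac{q_i q_j}{f_{\mathrm{GB}}(r_{ij})},
    \qquad
    f_{\mathrm{GB}}(r_{ij})
      = \sqrt{r_{ij}^2 + R_i R_j
        \exp\!\left(-\frac{r_{ij}^2}{4 R_i R_j}\right)}
    ```

    where the q_i are atomic partial charges, r_ij are interatomic distances, and the R_i are effective Born radii; the analytical, pair-wise f_GB is what makes the model cheap enough for molecular dynamics.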

  16. Infrared problems in two-dimensional generalized σ-models

    International Nuclear Information System (INIS)

    Curci, G.; Paffuti, G.

    1989-01-01

    We study the correlations of the energy-momentum tensor for classically conformally invariant generalized σ-models in the Wilson operator-product-expansion approach. We find that these correlations are, in general, infrared divergent. The absence of infrared divergences is obtained, as one can expect, for σ-models on a group manifold or for σ-models with a string-like interpretation. Moreover, the infrared divergences spoil the naive scaling arguments used by Zamolodchikov in the demonstration of the C-theorem. (orig.)

  17. Generalized Landau-Lifshitz models on the interval

    International Nuclear Information System (INIS)

    Doikou, Anastasia; Karaiskos, Nikos

    2011-01-01

    We study the classical generalized gl(n) Landau-Lifshitz (L-L) model with special boundary conditions that preserve integrability. We explicitly derive the first non-trivial local integral of motion, which corresponds to the boundary Hamiltonian for the sl(2) L-L model. Novel expressions for the modified Lax pairs associated to the integrals of motion are also extracted. The relevant equations of motion with the corresponding boundary conditions are determined. Dynamical integrable boundary conditions are also examined in this spirit. Then the generalized isotropic and anisotropic gl(n) Landau-Lifshitz models are considered, and novel expressions for the boundary Hamiltonians and the relevant equations of motion and boundary conditions are derived.

  18. A QCD Model Using Generalized Yang-Mills Theory

    International Nuclear Information System (INIS)

    Wang Dianfu; Song Heshan; Kou Lina

    2007-01-01

    Generalized Yang-Mills theory has a covariant derivative, which contains both vector and scalar gauge bosons. Based on this theory, we construct a strong interaction model by using the group U(4). By using this U(4) generalized Yang-Mills model, we also obtain a gauge potential solution, which can be used to explain the asymptotic behavior and color confinement.

  19. A generalized model via random walks for information filtering

    Science.gov (United States)

    Ren, Zhuo-Ming; Kong, Yixiu; Shang, Ming-Sheng; Zhang, Yi-Cheng

    2016-08-01

    There could exist a simple general mechanism lurking beneath collaborative filtering and the interdisciplinary physics approaches that have been successfully applied to online E-commerce platforms. Motivated by this idea, we propose a generalized model employing the dynamics of the random walk in bipartite networks. By taking degree information into account, the proposed generalized model can deduce collaborative filtering, the interdisciplinary physics approaches, and even numerous extensions of them. Furthermore, we analyze the generalized model with single and hybrid degree information on the process of the random walk in bipartite networks, and propose a possible strategy of using hybrid degree information for objects of differing popularity to achieve promising precision of the recommendation.

  20. Multivariate statistical modelling based on generalized linear models

    CERN Document Server

    Fahrmeir, Ludwig

    1994-01-01

    This book is concerned with the use of generalized linear models for univariate and multivariate regression analysis. Its emphasis is to provide a detailed introductory survey of the subject based on the analysis of real data drawn from a variety of subjects including the biological sciences, economics, and the social sciences. Where possible, technical details and proofs are deferred to an appendix in order to provide an accessible account for non-experts. Topics covered include: models for multi-categorical responses, model checking, time series and longitudinal data, random effects models, and state-space models. Throughout, the authors have taken great pains to discuss the underlying theoretical ideas in ways that relate well to the data at hand. As a result, numerous researchers whose work relies on the use of these models will find this an invaluable account to have on their desks. "The basic aim of the authors is to bring together and review a large part of recent advances in statistical modelling of m...

  1. The Canadian Centre for Climate Modelling and Analysis global coupled model and its climate

    Energy Technology Data Exchange (ETDEWEB)

    Flato, G.M.; Boer, G.J.; Lee, W.G.; McFarlane, N.A.; Ramsden, D.; Reader, M.C. [Canadian Centre for Climate Modelling and Analysis, Victoria, BC (Canada); Weaver, A.J. [School of Earth and Ocean Sciences, University of Victoria, BC (Canada)

    2000-06-01

    A global, three-dimensional climate model, developed by coupling the CCCma second-generation atmospheric general circulation model (GCM2) to a version of the GFDL modular ocean model (MOM1), forms the basis for extended simulations of past, current and projected future climate. The spin-up and coupling procedures are described, as is the resulting climate based on a 200 year model simulation with constant atmospheric composition and external forcing. The simulated climate is systematically compared to available observations in terms of mean climate quantities and their spatial patterns, temporal variability, and regional behavior. Such comparison demonstrates a generally successful reproduction of the broad features of mean climate quantities, albeit with local discrepancies. Variability is generally well-simulated over land, but somewhat underestimated in the tropical ocean and the extratropical storm-track regions. The modelled climate state shows only small trends, indicating a reasonable level of balance at the surface, which is achieved in part by the use of heat and freshwater flux adjustments. The control simulation provides a basis against which to compare simulated climate change due to historical and projected greenhouse gas and aerosol forcing as described in companion publications. (orig.)

  2. EOP MIT General Circulation Model (MITgcm)

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — This data contains a regional implementation of the Massachusetts Institute of Technology general circulation model (MITgcm) at a 1-km spatial resolution for the...

  3. A Systematic Evaluation of Ultrasound-based Fetal Weight Estimation Models on Indian Population

    Directory of Open Access Journals (Sweden)

    Sujitkumar S. Hiwale

    2017-12-01

    Conclusion: We found that the existing fetal weight estimation models have high systematic and random errors on the Indian population, with a general tendency to overestimate fetal weight in the LBW (low birth weight) category and underestimate it in the HBW (high birth weight) category. We also observed that these models have a limited ability to predict babies at risk of either low or high birth weight. It is recommended that clinicians consider all these factors when interpreting the estimated weight given by the existing models.

  4. Panoramic radiographs underestimate extensions of the anterior loop and mandibular incisive canal

    International Nuclear Information System (INIS)

    De Brito, Ana Caroline Ramos; Nejaim, Yuri; De Freitas, Deborah Queiroz; De Oliveira Santos, Christiano

    2016-01-01

    The purpose of this study was to detect the anterior loop of the mental nerve and the mandibular incisive canal in panoramic radiographs (PAN) and cone-beam computed tomography (CBCT) images, as well as to determine the anterior/mesial extension of these structures in panoramic and cross-sectional reconstructions using PAN and CBCT images. Images (both PAN and CBCT) from 90 patients were evaluated by 2 independent observers. Detection of the anterior loop and the incisive canal were compared between PAN and CBCT. The anterior/mesial extension of these structures was compared between PAN and both cross-sectional and panoramic CBCT reconstructions. In CBCT, the anterior loop and the incisive canal were observed in 7.7% and 24.4% of the hemimandibles, respectively. In PAN, the anterior loop and the incisive canal were detected in 15% and 5.5% of cases, respectively. PAN presented more difficulties in the visualization of structures. The anterior/mesial extensions ranged from 0.0 mm to 19.0 mm on CBCT. PAN underestimated the measurements by approximately 2.0 mm. CBCT appears to be a more reliable imaging modality than PAN for preoperative workups of the anterior mandible. Individual variations in the anterior/mesial extensions of the anterior loop of the mental nerve and the mandibular incisive canal mean that it is not prudent to rely on a general safe zone for implant placement or bone surgery in the interforaminal region.

  5. Panoramic radiographs underestimate extensions of the anterior loop and mandibular incisive canal

    Energy Technology Data Exchange (ETDEWEB)

    De Brito, Ana Caroline Ramos; Nejaim, Yuri; De Freitas, Deborah Queiroz [Dept. of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas, Sao Paulo (Brazil); De Oliveira Santos, Christiano [Dept. of Stomatology, Public Oral Health and Forensic Dentistry, School of Dentistry of Ribeirao Preto, University of Sao Paulo, Sao Paulo (Brazil)

    2016-09-15

    The purpose of this study was to detect the anterior loop of the mental nerve and the mandibular incisive canal in panoramic radiographs (PAN) and cone-beam computed tomography (CBCT) images, as well as to determine the anterior/mesial extension of these structures in panoramic and cross-sectional reconstructions using PAN and CBCT images. Images (both PAN and CBCT) from 90 patients were evaluated by 2 independent observers. Detection of the anterior loop and the incisive canal were compared between PAN and CBCT. The anterior/mesial extension of these structures was compared between PAN and both cross-sectional and panoramic CBCT reconstructions. In CBCT, the anterior loop and the incisive canal were observed in 7.7% and 24.4% of the hemimandibles, respectively. In PAN, the anterior loop and the incisive canal were detected in 15% and 5.5% of cases, respectively. PAN presented more difficulties in the visualization of structures. The anterior/mesial extensions ranged from 0.0 mm to 19.0 mm on CBCT. PAN underestimated the measurements by approximately 2.0 mm. CBCT appears to be a more reliable imaging modality than PAN for preoperative workups of the anterior mandible. Individual variations in the anterior/mesial extensions of the anterior loop of the mental nerve and the mandibular incisive canal mean that it is not prudent to rely on a general safe zone for implant placement or bone surgery in the interforaminal region.

  6. A Proposal for Generalization of 3D Models

    Science.gov (United States)

    Uyar, A.; Ulugtekin, N. N.

    2017-11-01

    In recent years, 3D models have been created of many cities around the world. Most 3D city models have been introduced as purely graphic or geometric models, while their semantic and topographic aspects have been neglected. In order to use 3D city models beyond a single task, generalization is necessary. CityGML is an open data model and XML-based format for the storage and exchange of virtual 3D city models. Level of Detail (LoD), an important concept for 3D modelling, can be defined as the degree of abstraction at which real-world objects are represented. The paper first describes some requirements of 3D model generalization, then presents problems and approaches that have been developed in recent years, and concludes with a summary and an outlook on open problems and future work.

  7. Crash data modeling with a generalized estimator.

    Science.gov (United States)

    Ye, Zhirui; Xu, Yueru; Lord, Dominique

    2018-05-11

    The investigation of relationships between traffic crashes and relevant factors is important in traffic safety management. Various methods have been developed for modeling crash data. In real-world scenarios, crash data often display the characteristics of over-dispersion. However, on occasions, some crash datasets have exhibited under-dispersion, especially in cases where the data are conditioned upon the mean. The commonly used models (such as the Poisson and the NB regression models) have limitations in coping with various degrees of dispersion. In light of this, a generalized event count (GEC) model, which can be generally used to handle over-, equi-, and under-dispersed data, is proposed in this study. This model was first applied to case studies using data from Toronto, characterized by over-dispersion, and then to crash data from railway-highway crossings in Korea, characterized by under-dispersion. The results from the GEC model were compared with those from the negative binomial and the hyper-Poisson models. The case studies show that the proposed model provides good performance for crash data characterized by over- and under-dispersion. Moreover, the proposed model simplifies the modeling process and the prediction of crash data. Copyright © 2018 Elsevier Ltd. All rights reserved.
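    As a minimal illustration of the dispersion property the GEC model is designed to handle (this is not the GEC estimator itself), the variance-to-mean ratio of a count sample indicates over- or under-dispersion; the count data below are hypothetical:

    ```python
    from statistics import mean, variance

    def dispersion_index(counts):
        """Variance-to-mean ratio: >1 over-dispersed, <1 under-dispersed,
        and approximately 1 equi-dispersed (consistent with a Poisson model)."""
        return variance(counts) / mean(counts)

    # Hypothetical crash counts per site: heavy-tailed, hence over-dispersed
    over = [0, 0, 1, 0, 2, 9, 0, 1, 12, 0]
    # Narrowly clustered counts: under-dispersed
    under = [3, 4, 3, 4, 3, 4, 3, 4, 3, 4]

    print(dispersion_index(over))   # > 1
    print(dispersion_index(under))  # < 1
    ```

    A value far from 1 in either direction is what motivates moving beyond the plain Poisson model.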

  8. Underestimated risks of recurrent long-range ash dispersal from northern Pacific Arc volcanoes.

    Science.gov (United States)

    Bourne, A J; Abbott, P M; Albert, P G; Cook, E; Pearce, N J G; Ponomareva, V; Svensson, A; Davies, S M

    2016-07-21

    Widespread ash dispersal poses a significant natural hazard to society, particularly in relation to disruption to aviation. Assessing the extent of the threat of far-travelled ash clouds on flight paths is substantially hindered by an incomplete volcanic history and an underestimation of the potential reach of distant eruptive centres. The risk of extensive ash clouds to aviation is thus poorly quantified. New evidence is presented of explosive Late Pleistocene eruptions in the Pacific Arc, currently undocumented in the proximal geological record, which dispersed ash up to 8000 km from source. Twelve microscopic ash deposits or cryptotephra, invisible to the naked eye, discovered within Greenland ice-cores, and ranging in age between 11.1 and 83.7 ka b2k, are compositionally matched to northern Pacific Arc sources including Japan, Kamchatka, Cascades and Alaska. Only two cryptotephra deposits are correlated to known high-magnitude eruptions (Towada-H, Japan, ca 15 ka BP and Mount St Helens Set M, ca 28 ka BP). For the remaining 10 deposits, there is no evidence of age- and compositionally-equivalent eruptive events in regional volcanic stratigraphies. This highlights the inherent problem of under-reporting eruptions and the dangers of underestimating the long-term risk of widespread ash dispersal for trans-Pacific and trans-Atlantic flight routes.

  9. Generalized versus non-generalized neural network model for multi-lead inflow forecasting at Aswan High Dam

    Directory of Open Access Journals (Sweden)

    A. El-Shafie

    2011-03-01

    Artificial neural networks (ANN) have been found efficient, particularly in problems where characteristics of the processes are stochastic and difficult to describe using explicit mathematical models. However, time series prediction based on ANN algorithms is fundamentally difficult and faces several problems. One of the major shortcomings is the search for the optimal input pattern in order to enhance the forecasting capabilities for the output. The second challenge is the over-fitting problem during the training procedure, which occurs when the ANN loses its generalization ability. In this research, autocorrelation and cross-correlation analyses are suggested as a method for searching for the optimal input pattern. On the other hand, two generalized methods, namely the Regularized Neural Network (RNN) and Ensemble Neural Network (ENN) models, are developed to overcome the drawbacks of classical ANN models. Using a Generalized Neural Network (GNN) helped avoid over-fitting of training data, which was observed as a limitation of classical ANN models. Real inflow data collected over the last 130 years at Lake Nasser were used to train, test and validate the proposed model. Results show that the proposed GNN model outperforms non-generalized neural network and conventional auto-regressive models and can provide accurate inflow forecasting.
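    The ensemble idea can be sketched very simply: average the predictions of several independently trained members, which reduces the variance component of error. This is a generic illustration with invented toy "models", not the paper's ENN architecture:

    ```python
    def ensemble_predict(models, x):
        """Average the predictions of several models; averaging damps the
        member-specific errors, one way ensemble methods curb over-fitting.
        'models' are any callables mapping x -> float."""
        preds = [m(x) for m in models]
        return sum(preds) / len(preds)

    # Three toy members that each over/under-shoot the true function y = 2x
    members = [lambda x: 2.0 * x + 0.3,
               lambda x: 2.0 * x - 0.3,
               lambda x: 2.0 * x]
    print(ensemble_predict(members, 5.0))  # 10.0: member errors cancel
    ```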

  10. Generalized heat-transport equations: parabolic and hyperbolic models

    Science.gov (United States)

    Rogolino, Patrizia; Kovács, Robert; Ván, Peter; Cimmelli, Vito Antonio

    2018-03-01

    We derive two different generalized heat-transport equations: the most general one, of the first order in time and second order in space, encompasses some well-known heat equations and describes the hyperbolic regime in the absence of nonlocal effects. Another, less general, of the second order in time and fourth order in space, is able to describe hyperbolic heat conduction also in the presence of nonlocal effects. We investigate the thermodynamic compatibility of both models by applying some generalizations of the classical Liu and Coleman-Noll procedures. In both cases, constitutive equations for the entropy and for the entropy flux are obtained. For the second model, we consider a heat-transport equation which includes nonlocal terms and study the resulting set of balance laws, proving that the corresponding thermal perturbations propagate with finite speed.
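    For orientation, the parabolic and hyperbolic regimes the authors generalize can be contrasted with textbook transport laws; the forms below (Fourier, Maxwell-Cattaneo, and a Guyer-Krumhansl-type nonlocal extension) are standard illustrations, not the paper's exact equations:

    ```latex
    % Fourier law (parabolic heat conduction, infinite propagation speed):
    \mathbf{q} = -\lambda \nabla T
    % Maxwell-Cattaneo (hyperbolic, finite propagation speed, no nonlocal terms):
    \tau \,\partial_t \mathbf{q} + \mathbf{q} = -\lambda \nabla T
    % Guyer-Krumhansl-type (adds nonlocal terms in the heat flux):
    \tau \,\partial_t \mathbf{q} + \mathbf{q} = -\lambda \nabla T
      + \kappa^2 \left( \Delta \mathbf{q} + 2 \nabla (\nabla \cdot \mathbf{q}) \right)
    ```

    Coupling each flux law with the energy balance yields, respectively, a parabolic equation, a damped wave (telegraph-type) equation, and a hyperbolic equation with nonlocal corrections.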

  11. X-ray computed microtomography characterizes the wound effect that causes sap flow underestimation by thermal dissipation sensors.

    Science.gov (United States)

    Marañón-Jiménez, S; Van den Bulcke, J; Piayda, A; Van Acker, J; Cuntz, M; Rebmann, C; Steppe, K

    2018-02-01

    Insertion of thermal dissipation (TD) sap flow sensors in living tree stems causes damage of the wood tissue, as is the case with other invasive methods. The subsequent wound formation is one of the main causes of underestimation of tree water-use measured by TD sensors. However, the specific alterations in wood anatomy in response to inserted sensors have not yet been characterized, and the linked dysfunctions in xylem conductance and sensor accuracy are still unknown. In this study, we investigate the anatomical mechanisms prompting sap flow underestimation and the dynamic process of wound formation. Successive sets of TD sensors were installed in the early, mid and end stage of the growing season in diffuse- and ring-porous trees, Fagus sylvatica (Linnaeus) and Quercus petraea ((Mattuschka) Lieblein), respectively. The trees were cut in autumn and additional sensors were installed in the cut stem segments as controls without wound formation. The wounded area and volume surrounding each sensor was then visually determined by X-ray computed microtomography (X-ray microCT). This technique allowed the characterization of vessel anatomical transformations such as tyloses formation, their spatial distribution and quantification of reduction in conductive area. MicroCT scans showed considerable formation of tyloses that reduced the conductive area of vessels surrounding the inserted TD probes, thus causing an underestimation in sap flux density (SFD) in both beech and oak. Discolored wood tissue was ellipsoidal, larger in the radial plane, more extensive in beech than in oak, and also for sensors installed for longer times. However, the severity of anatomical transformations did not always follow this pattern. Increased wound size with time, for example, did not result in larger SFD underestimation. This information helps us to better understand the mechanisms involved in wound effects with TD sensors and allows the provision of practical recommendations to reduce

  12. The general dynamic model

    DEFF Research Database (Denmark)

    Borregaard, Michael K.; Matthews, Thomas J.; Whittaker, Robert James

    2016-01-01

    Aim: Island biogeography focuses on understanding the processes that underlie a set of well-described patterns on islands, but it lacks a unified theoretical framework for integrating these processes. The recently proposed general dynamic model (GDM) of oceanic island biogeography offers a step...... towards this goal. Here, we present an analysis of causality within the GDM and investigate its potential for the further development of island biogeographical theory. Further, we extend the GDM to include subduction-based island arcs and continental fragment islands. Location: A conceptual analysis...... of evolutionary processes in simulations derived from the mechanistic assumptions of the GDM corresponded broadly to those initially suggested, with the exception of trends in extinction rates. Expanding the model to incorporate different scenarios of island ontogeny and isolation revealed a sensitivity...

  13. Generalized Path Analysis and Generalized Simultaneous Equations Model for Recursive Systems with Responses of Mixed Types

    Science.gov (United States)

    Tsai, Tien-Lung; Shau, Wen-Yi; Hu, Fu-Chang

    2006-01-01

    This article generalizes linear path analysis (PA) and simultaneous equations models (SiEM) to deal with mixed responses of different types in a recursive or triangular system. An efficient instrumental variable (IV) method for estimating the structural coefficients of a 2-equation partially recursive generalized path analysis (GPA) model and…

  14. Reliability assessment of competing risks with generalized mixed shock models

    International Nuclear Information System (INIS)

    Rafiee, Koosha; Feng, Qianmei; Coit, David W.

    2017-01-01

    This paper investigates reliability modeling for systems subject to dependent competing risks considering the impact from a new generalized mixed shock model. Two dependent competing risks are soft failure due to a degradation process, and hard failure due to random shocks. The shock process contains fatal shocks that can cause hard failure instantaneously, and nonfatal shocks that impact the system in three different ways: 1) damaging the unit by immediately increasing the degradation level, 2) speeding up the deterioration by accelerating the degradation rate, and 3) weakening the unit strength by reducing the hard failure threshold. While the first impact from nonfatal shocks comes from each individual shock, the other two impacts are realized when the condition for a new generalized mixed shock model is satisfied. Unlike most existing mixed shock models that consider a combination of two shock patterns, our new generalized mixed shock model includes three classic shock patterns. According to the proposed generalized mixed shock model, the degradation rate and the hard failure threshold can simultaneously shift multiple times, whenever the condition for one of these three shock patterns is satisfied. An example using micro-electro-mechanical systems devices illustrates the effectiveness of the proposed approach with sensitivity analysis. - Highlights: • A rich reliability model for systems subject to dependent failures is proposed. • The degradation rate and the hard failure threshold can shift simultaneously. • The shift is triggered by a new generalized mixed shock model. • The shift can occur multiple times under the generalized mixed shock model.
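    The competing-risks setup can be sketched with a toy Monte Carlo simulation. This is a simplified illustration with invented parameters (shock rate, fatal-shock probability, degradation rate, jump size, soft-failure threshold), not the paper's model; it keeps only the damage-jump impact of nonfatal shocks and omits the rate-acceleration and threshold-reduction effects:

    ```python
    import random

    def survives(t_max, shock_rate=0.5, p_fatal=0.05, deg_rate=1.0,
                 shock_jump=2.0, soft_threshold=30.0, rng=None):
        """One sample path of a simplified degradation-plus-shocks model.
        Soft failure: continuous degradation plus shock-induced jumps exceeds
        soft_threshold. Hard failure: a fatal shock arrives. All parameter
        values are invented for illustration."""
        rng = rng or random.Random()
        t, damage = 0.0, 0.0
        while True:
            dt = rng.expovariate(shock_rate)        # waiting time to next shock
            if t + dt >= t_max:                     # no further shocks in horizon
                return damage + deg_rate * (t_max - t) < soft_threshold
            t += dt
            damage += deg_rate * dt                 # gradual degradation
            if rng.random() < p_fatal:              # fatal shock -> hard failure
                return False
            damage += shock_jump                    # nonfatal shock: damage jump
            if damage >= soft_threshold:            # soft failure
                return False

    def reliability(t_max, n=5000, seed=42):
        """Monte Carlo estimate of P(system survives to t_max)."""
        rng = random.Random(seed)
        return sum(survives(t_max, rng=rng) for _ in range(n)) / n

    print(reliability(2.0))    # high: few shocks, little degradation
    print(reliability(20.0))   # low: degradation and shocks accumulate
    ```

    Extending the sketch toward the paper's generalized mixed shock model would mean letting `deg_rate` increase and `soft_threshold` decrease whenever one of the three shock-pattern conditions is met.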

  15. Foundations of linear and generalized linear models

    CERN Document Server

    Agresti, Alan

    2015-01-01

    A valuable overview of the most important ideas and results in statistical analysis Written by a highly-experienced author, Foundations of Linear and Generalized Linear Models is a clear and comprehensive guide to the key concepts and results of linear statistical models. The book presents a broad, in-depth overview of the most commonly used statistical models by discussing the theory underlying the models, R software applications, and examples with crafted models to elucidate key ideas and promote practical model building. The book begins by illustrating the fundamentals of linear models,

  16. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Science.gov (United States)

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  17. Republic of Georgia estimates for prevalence of drug use: Randomized response techniques suggest under-estimation.

    Science.gov (United States)

    Kirtadze, Irma; Otiashvili, David; Tabatadze, Mzia; Vardanashvili, Irina; Sturua, Lela; Zabransky, Tomas; Anthony, James C

    2018-06-01

    Validity of responses in surveys is an important research concern, especially in emerging market economies where surveys in the general population are a novelty, and the level of social control is traditionally higher. The Randomized Response Technique (RRT) can be used as a check on response validity when the study aim is to estimate population prevalence of drug experiences and other socially sensitive and/or illegal behaviors. To apply RRT and to study potential under-reporting of drug use in a nation-scale, population-based general population survey of alcohol and other drug use. For this first-ever household survey on addictive substances for the Country of Georgia, we used the multi-stage probability sampling of 18-to-64-year-old household residents of 111 urban and 49 rural areas. During the interviewer-administered assessments, RRT involved pairing of sensitive and non-sensitive questions about drug experiences. Based upon the standard household self-report survey estimate, an estimated 17.3% [95% confidence interval, CI: 15.5%, 19.1%] of Georgian household residents have tried cannabis. The corresponding RRT estimate was 29.9% [95% CI: 24.9%, 34.9%]. The RRT estimates for other drugs such as heroin also were larger than the standard self-report estimates. We remain unsure about what is the "true" value for prevalence of using illegal psychotropic drugs in the Republic of Georgia study population. Our RRT results suggest that standard non-RRT approaches might produce 'under-estimates' or at best, highly conservative, lower-end estimates. Copyright © 2018 Elsevier B.V. All rights reserved.
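    A minimal sketch of how an RRT design is inverted to a prevalence estimate, using Warner's classic randomized response model (the survey described above paired sensitive and non-sensitive questions, which differs in detail; p = 0.7 here is an illustrative randomization probability):

    ```python
    def warner_estimate(yes_fraction, p=0.7):
        """Invert Warner's randomized response design. Each respondent answers
        the sensitive question with probability p and its complement with 1-p,
        so the observed 'yes' rate is lam = p*pi + (1-p)*(1-pi); solving for
        the prevalence pi gives the estimator below."""
        if abs(2 * p - 1) < 1e-12:
            raise ValueError("p must differ from 0.5")
        return (yes_fraction - (1 - p)) / (2 * p - 1)

    # If the true prevalence is pi = 0.30, the expected observed yes-rate is:
    p = 0.7
    lam = p * 0.30 + (1 - p) * (1 - 0.30)   # = 0.42
    print(warner_estimate(lam, p))           # recovers ~0.30
    ```

    The randomization shields individual answers, so respondents can admit sensitive behavior without the interviewer learning their true response, which is why RRT estimates can exceed direct self-report estimates.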

  18. College Students' Underestimation of Blood Alcohol Concentration from Hypothetical Consumption of Supersized Alcopops: Results from a Cluster-Randomized Classroom Study.

    Science.gov (United States)

    Rossheim, Matthew E; Thombs, Dennis L; Krall, Jenna R; Jernigan, David H

    2018-05-30

    Supersized alcopops are a class of single-serving beverages popular among underage drinkers. These products contain large quantities of alcohol. This study examines the extent to which young adults recognize how intoxicated they would become from consuming these products. The study sample included 309 undergraduates who had consumed alcohol within the past year. Thirty-two sections of a college English course were randomized to 1 of 2 survey conditions, based on hypothetical consumption of supersized alcopops or beer of comparable liquid volume. Students were provided an empty can of 1 of the 2 beverages to help them answer the survey questions. Equation-calculated blood alcohol concentrations (BACs), based on body weight and sex, were compared to the students' self-estimated BACs for consuming 1, 2, and 3 cans of the beverage provided to them. In adjusted regression models, students randomized to the supersized alcopop group greatly underestimated their BAC, whereas students randomized to the beer group overestimated it. The supersized alcopop group underestimated their BAC by 0.04 (95% confidence interval [CI]: 0.034, 0.053), 0.09 (95% CI: 0.067, 0.107), and 0.13 g/dl (95% CI: 0.097, 0.163) compared to the beer group. When asked how much alcohol they could consume before it would be unsafe to drive, students in the supersized alcopop group had 7 times the odds of estimating consumption that would generate a calculated BAC of at least 0.08 g/dl, compared to those making estimates based on beer consumption (95% CI: 3.734, 13.025). Students underestimated the intoxication they would experience from consuming supersized alcopops. Revised product warning labels are urgently needed to clearly identify the number of standard drinks contained in a supersized alcopop can. Moreover, regulations are needed to limit alcohol content of single-serving products. Copyright © 2018 by the Research Society on Alcoholism.
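    Equation-calculated BACs of this kind are commonly approximated with the Widmark formula. The sketch below uses illustrative values (14 g of ethanol per US standard drink, a 70 kg male, common Widmark r factors) and ignores elimination over time, so it is a rough upper-bound estimate rather than the study's exact calculation:

    ```python
    def widmark_bac(alcohol_grams, weight_kg, sex="m"):
        """Estimated peak blood alcohol concentration in g/dL via the Widmark
        formula BAC = A / (W * r) * 100, with A in grams, W in grams of body
        weight, and r the Widmark distribution factor (~0.68 for men, ~0.55
        for women). Elimination over time is ignored."""
        r = 0.68 if sex == "m" else 0.55
        return alcohol_grams / (weight_kg * 1000 * r) * 100

    # One US standard drink = 14 g ethanol. A supersized alcopop (~24 oz at
    # 8-12% ABV) can hold roughly 4-5.5 standard drinks; a 24 oz beer (~5% ABV)
    # holds about 2. These drink counts are illustrative assumptions.
    print(widmark_bac(14 * 4.5, 70, "m"))  # alcopop-sized dose: well over 0.08
    print(widmark_bac(14 * 2.0, 70, "m"))  # comparable-volume beer: under 0.08
    ```

    The order-of-magnitude gap between the two results mirrors the misperception the study documents: equal can volumes imply very different BACs.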

  19. Generalized algebra-valued models of set theory

    NARCIS (Netherlands)

    Löwe, B.; Tarafder, S.

    2015-01-01

    We generalize the construction of lattice-valued models of set theory due to Takeuti, Titani, Kozawa and Ozawa to a wider class of algebras and show that this yields a model of a paraconsistent logic that validates all axioms of the negation-free fragment of Zermelo-Fraenkel set theory.

  20. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of nonlinear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  1. SU-F-T-132: Variable RBE Models Predict Possible Underestimation of Vaginal Dose for Anal Cancer Patients Treated Using Single-Field Proton Treatments

    Energy Technology Data Exchange (ETDEWEB)

    McNamara, A; Underwood, T; Wo, J; Paganetti, H [Massachusetts General Hospital & Harvard Medical School, Boston, MA (United States)

    2016-06-15

    Purpose: Anal cancer patients treated using a posterior proton beam may be at risk of vaginal wall injury due to the increased linear energy transfer (LET) and relative biological effectiveness (RBE) at the beam distal edge. We investigate the vaginal dose received. Methods: Five patients treated for anal cancer with proton pencil beam scanning were considered, all treated to a prescription dose of 54 Gy(RBE) over 28–30 fractions. Dose and LET distributions were calculated using the Monte Carlo simulation toolkit TOPAS. In addition to the standard assumption of a fixed RBE of 1.1, variable RBE was considered via the application of published models. Dose volume histograms (DVHs) were extracted for the planning treatment volume (PTV) and vagina, the latter being used to calculate the vaginal normal tissue complication probability (NTCP). Results: Compared to the assumption of a fixed RBE of 1.1, the variable RBE model predicts a dose increase of approximately 3.3 ± 1.7 Gy at the end of beam range. NTCP parameters for the vagina are incomplete in the current literature; however, inferring value ranges from the existing data, we use D₅₀ = 50 Gy and LKB model parameters a=1–2 and m=0.2–0.4. We estimate the NTCP for the vagina to be 37–48% and 42–47% for the fixed and variable RBE cases, respectively. Additionally, a difference in the dose distribution was observed between the analytical calculation and Monte Carlo methods. We find that the target dose is overestimated on average by approximately 1–2%. Conclusion: For patients treated with posterior beams, the vaginal wall may coincide with the distal end of the proton beam and may receive a substantial increase in dose if variable RBE models are applied compared to using the current clinical standard of RBE equal to 1.1. This could potentially lead to underestimating toxicities when treating with protons.
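    A minimal sketch of the Lyman-Kutcher-Burman (LKB) NTCP calculation referenced in the abstract, assuming a probit response in generalized equivalent uniform dose (gEUD). D₅₀ = 50 Gy and m = 0.3 follow the illustrative ranges quoted above, the gEUD volume-reduction step (parameter a) is assumed already done, and the input doses are hypothetical:

    ```python
    from math import erf, sqrt

    def lkb_ntcp(geud, d50=50.0, m=0.3):
        """LKB NTCP: the standard normal CDF evaluated at
        t = (gEUD - D50) / (m * D50)."""
        t = (geud - d50) / (m * d50)
        return 0.5 * (1.0 + erf(t / sqrt(2.0)))

    base = lkb_ntcp(48.0)           # hypothetical vaginal gEUD, fixed RBE 1.1
    shifted = lkb_ntcp(48.0 + 3.3)  # with the ~3.3 Gy distal-edge RBE increase
    print(base, shifted)            # shifted > base
    ```

    Because the probit curve is steep near D₅₀, even a few Gy of extra distal-edge dose visibly raises the computed complication probability, which is the clinical concern raised in the conclusion.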

  2. Multivariate covariance generalized linear models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Jørgensen, Bent

    2016-01-01

    We propose a general framework for non-normal multivariate data analysis called multivariate covariance generalized linear models, designed to handle multivariate response variables, along with a wide range of temporal and spatial correlation structures defined in terms of a covariance link function combined with a matrix linear predictor involving known matrices. The models are fitted by using an efficient Newton scoring algorithm based on quasi-likelihood and Pearson estimating functions, using only second-moment assumptions. This provides a unified approach to a wide variety of types of response variables and covariance structures, including multivariate extensions... The method is motivated by three data examples that are not easily handled by existing methods. The first example concerns multivariate count data, the second involves response variables of mixed types, combined with repeated...

  3. General regression and representation model for classification.

    Directory of Open Access Journals (Sweden)

    Jianjun Qian

    Recently, the regularized coding-based classification methods (e.g. SRC and CRC) have shown great potential for pattern classification. However, most existing coding methods assume that the representation residuals are uncorrelated. In real-world applications, this assumption does not hold. In this paper, we take account of the correlations of the representation residuals and develop a general regression and representation model (GRR) for classification. GRR not only has the advantages of CRC, but also makes full use of the prior information (e.g. the correlations between representation residuals and representation coefficients) and the specific information (the weight matrix of image pixels) to enhance the classification performance. GRR uses the generalized Tikhonov regularization and K Nearest Neighbors to learn the prior information from the training data. Meanwhile, the specific information is obtained by using an iterative algorithm to update the feature (or image pixel) weights of the test sample. With the proposed model as a platform, we design two classifiers: the basic general regression and representation classifier (B-GRR) and the robust general regression and representation classifier (R-GRR). The experimental results demonstrate the performance advantages of the proposed methods over state-of-the-art algorithms.
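    To illustrate the Tikhonov regularization ingredient that GRR builds on (this is a sketch of ordinary ridge regression for a single feature, not the GRR model itself):

    ```python
    def ridge_1d(x, y, lam):
        """Tikhonov (ridge) solution for a single-feature linear model
        y ≈ w*x:  w = Σ x_i y_i / (Σ x_i² + λ).  λ = 0 recovers ordinary
        least squares; larger λ shrinks w toward 0, trading a little bias
        for stability against noise and ill-conditioning."""
        sxy = sum(xi * yi for xi, yi in zip(x, y))
        sxx = sum(xi * xi for xi in x)
        return sxy / (sxx + lam)

    x = [1.0, 2.0, 3.0]
    y = [2.0, 4.0, 6.0]          # exact slope 2
    print(ridge_1d(x, y, 0.0))   # 2.0 (unregularized fit)
    print(ridge_1d(x, y, 1.0))   # shrunk below 2.0
    ```

    The "generalized" Tikhonov used by GRR replaces λI with a learned matrix encoding prior correlations, but the shrinkage mechanism is the same.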

  4. Modelling uncertainty with generalized credal sets: application to conjunction and decision

    Science.gov (United States)

    Bronevich, Andrey G.; Rozenberg, Igor N.

    2018-01-01

    To model conflict, non-specificity and contradiction in information, upper and lower generalized credal sets are introduced. Any upper generalized credal set is a convex subset of plausibility measures interpreted as lower probabilities whose bodies of evidence consist of singletons and a certain event. Analogously, contradiction is modelled in the theory of evidence by a belief function that is greater than zero at the empty set. Based on generalized credal sets, we extend the conjunctive rule for contradictory sources of information, introduce constructions like natural extension in the theory of imprecise probabilities and show that the model of generalized credal sets coincides with the model of imprecise probabilities if the profile of a generalized credal set consists of probability measures. We show how the introduced model can be applied to decision problems.

  5. Generalized continua as models for classical and advanced materials

    CERN Document Server

    Forest, Samuel

    2016-01-01

    This volume is devoted to a topical subject that is the focus of various research groups worldwide. It contains contributions describing material behavior on different scales, new existence and uniqueness theorems, and the formulation of constitutive equations for advanced materials. The main emphasis of the contributions is on the following items - Modelling and simulation of natural and artificial materials with significant microstructure, - Generalized continua as a result of multi-scale models, - Multi-field actions on materials resulting in generalized material models, - Theories including higher gradients, and - Comparison with discrete modelling approaches.

  6. Parameter Estimation for a Computable General Equilibrium Model

    DEFF Research Database (Denmark)

    Arndt, Channing; Robinson, Sherman; Tarp, Finn

    2002-01-01

    We introduce a maximum entropy approach to parameter estimation for computable general equilibrium (CGE) models. The approach applies information theory to estimating a system of non-linear simultaneous equations. It has a number of advantages. First, it imposes all general equilibrium constraints...

  7. Generalized waste package containment model

    International Nuclear Information System (INIS)

    Liebetrau, A.M.; Apted, M.J.

    1985-02-01

    The US Department of Energy (DOE) is developing a performance assessment strategy to demonstrate compliance with standards and technical requirements of the Environmental Protection Agency (EPA) and the Nuclear Regulatory Commission (NRC) for the permanent disposal of high-level nuclear wastes in geologic repositories. One aspect of this strategy is the development of a unified performance model of the entire geologic repository system. Details of a generalized waste package containment (WPC) model and its relationship with other components of an overall repository model are presented in this paper. The WPC model provides stochastically determined estimates of the distributions of times-to-failure of the barriers of a waste package by various corrosion mechanisms and degradation processes. The model consists of a series of modules which employ various combinations of stochastic (probabilistic) and mechanistic process models, and which are individually designed to reflect the current state of knowledge. The WPC model is designed not only to take account of various site-specific conditions and processes, but also to deal with a wide range of site, repository, and waste package configurations. 11 refs., 3 figs., 2 tabs

  8. Geometrical efficiency in computerized tomography: generalized model

    International Nuclear Information System (INIS)

    Costa, P.R.; Robilotta, C.C.

    1992-01-01

    A simplified model for producing sensitivity and exposure profiles in computerized tomographic systems was recently developed, allowing the behaviour of the profiles at the rotation centre of the system to be predicted. The generalization of this model to an arbitrary point of the image plane is described, allowing the geometrical efficiency to be evaluated. (C.G.C.)

  9. Generalized formal model of Big Data

    OpenAIRE

    Shakhovska, N.; Veres, O.; Hirnyak, M.

    2016-01-01

    This article examines the basic characteristics of Big Data technologies and analyzes existing definitions of the term "big data". It proposes and describes the elements of a generalized formal model of big data, analyzes the peculiarities of applying the proposed model components, and describes the fundamental differences between Big Data technology and business analytics. Big Data is supported by the distributed file system Google File System ...

  10. Adaptive Inference on General Graphical Models

    OpenAIRE

    Acar, Umut A.; Ihler, Alexander T.; Mettu, Ramgopal; Sumer, Ozgur

    2012-01-01

    Many algorithms and applications involve repeatedly solving variations of the same inference problem; for example we may want to introduce new evidence to the model or perform updates to conditional dependencies. The goal of adaptive inference is to take advantage of what is preserved in the model and perform inference more rapidly than from scratch. In this paper, we describe techniques for adaptive inference on general graphs that support marginal computation and updates to the conditional ...

  11. Higher dimensional generalizations of the SYK model

    Energy Technology Data Exchange (ETDEWEB)

    Berkooz, Micha [Department of Particle Physics and Astrophysics, Weizmann Institute of Science,Rehovot 7610001 (Israel); Narayan, Prithvi [International Centre for Theoretical Sciences, Hesaraghatta,Bengaluru North, 560 089 (India); Rozali, Moshe [Department of Physics and Astronomy, University of British Columbia,Vancouver, BC V6T 1Z1 (Canada); Simón, Joan [School of Mathematics and Maxwell Institute for Mathematical Sciences, University of Edinburgh,King’s Buildings, Edinburgh EH9 3FD (United Kingdom)

    2017-01-31

    We discuss a 1+1 dimensional generalization of the Sachdev-Ye-Kitaev model. The model contains N Majorana fermions at each lattice site with a nearest-neighbour hopping term. The SYK random interaction is restricted to low momentum fermions of definite chirality within each lattice site. This gives rise to an ordinary 1+1 field theory above some energy scale and a low energy SYK-like behavior. We exhibit a class of low-pass filters which give rise to a rich variety of hyperscaling behaviour in the IR. We also discuss another set of generalizations which describes probing an SYK system with an external fermion, together with the new scaling behavior they exhibit in the IR.

  12. How and why DNA barcodes underestimate the diversity of microbial eukaryotes.

    Directory of Open Access Journals (Sweden)

    Gwenael Piganeau

    BACKGROUND: Because many picoplanktonic eukaryotic species cannot currently be maintained in culture, direct sequencing of PCR-amplified 18S ribosomal gene DNA fragments from filtered sea-water has been successfully used to investigate the astounding diversity of these organisms. The recognition of many novel planktonic organisms is thus based solely on their 18S rDNA sequence. However, a species delimited by its 18S rDNA sequence might contain many cryptic species, which are highly differentiated in their protein coding sequences. PRINCIPAL FINDINGS: Here, we investigate the issue of species identification from one gene to the whole genome sequence. Using 52 whole genome DNA sequences, we estimated the global genetic divergence in protein coding genes between organisms from different lineages and compared this to their ribosomal gene sequence divergences. We show that this relationship between proteome divergence and 18S divergence is lineage dependent. Unicellular lineages have especially low 18S divergences relative to their protein sequence divergences, suggesting that 18S ribosomal genes are too conservative to assess planktonic eukaryotic diversity. We provide an explanation for this lineage dependency, which suggests that most species with large effective population sizes will show far less divergence in 18S than protein coding sequences. CONCLUSIONS: There is therefore a trade-off between using genes that are easy to amplify in all species, but which by their nature are highly conserved and underestimate the true number of species, and using genes that give a better description of the number of species, but which are more difficult to amplify. We have shown that this trade-off differs between unicellular and multicellular organisms as a likely consequence of differences in effective population sizes. We anticipate that biodiversity of microbial eukaryotic species is underestimated and that numerous "cryptic species" will become

  13. Learning general phonological rules from distributional information: a computational model.

    Science.gov (United States)

    Calamaro, Shira; Jarosz, Gaja

    2015-04-01

    Phonological rules create alternations in the phonetic realizations of related words. These rules must be learned by infants in order to identify the phonological inventory, the morphological structure, and the lexicon of a language. Recent work proposes a computational model for the learning of one kind of phonological alternation, allophony (Peperkamp, Le Calvez, Nadal, & Dupoux, 2006). This paper extends the model to account for learning of a broader set of phonological alternations and the formalization of these alternations as general rules. In Experiment 1, we apply the original model to new data in Dutch and demonstrate its limitations in learning nonallophonic rules. In Experiment 2, we extend the model to allow it to learn general rules for alternations that apply to a class of segments. In Experiment 3, the model is further extended to allow for generalization by context; we argue that this generalization must be constrained by linguistic principles. Copyright © 2014 Cognitive Science Society, Inc.

  14. Can CFMIP2 models reproduce the leading modes of cloud vertical structure in the CALIPSO-GOCCP observations?

    Science.gov (United States)

    Wang, Fang; Yang, Song

    2018-02-01

    Using principal component (PC) analysis, three leading modes of cloud vertical structure (CVS) are revealed by the GCM-Oriented CALIPSO Cloud Product (GOCCP), i.e. the tropical high, subtropical anticyclonic and extratropical cyclonic cloud modes (THCM, SACM and ECCM, respectively). THCM mainly reflects the contrast between tropical high clouds and clouds in middle/high latitudes. SACM is closely associated with middle-high clouds in tropical convective cores, few-cloud regimes in subtropical anticyclones and stratocumulus over subtropical eastern oceans. ECCM mainly corresponds to clouds along extratropical cyclonic regions. Models of phase 2 of the Cloud Feedback Model Intercomparison Project (CFMIP2) reproduce the THCM well, but SACM and ECCM are generally poorly simulated compared to GOCCP. Standardized PCs corresponding to CVS modes are generally captured, whereas original PCs (OPCs) are consistently underestimated (overestimated) for THCM (SACM and ECCM) by CFMIP2 models. The effects of CVS modes on relative cloud radiative forcing (RSCRF/RLCRF) (RSCRF being calculated at the surface while RLCRF at the top of the atmosphere) are studied with a principal component regression method. Results show that CFMIP2 models tend to overestimate (underestimate or simulate with the opposite sign) the RSCRF/RLCRF radiative effects (REs) of ECCM (THCM and SACM) per unit global mean OPC compared to observations. These RE biases may be attributed to two factors: one is the underestimation (overestimation) of low/middle clouds (high clouds), i.e. stronger (weaker) REs per unit of low/middle (high) cloud, in the simulated global mean cloud profiles; the other is eigenvector biases in the CVS modes (especially for SACM and ECCM). It is suggested that much more attention should be paid to the improvement of CVS, especially cloud parameterization associated with particular physical processes (e.g. 
downwelling regimes with the Hadley circulation, extratropical storm tracks and others), which
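    The PC analysis underlying these CVS modes can be sketched with a plain SVD of anomaly profiles. The function name and array shapes below are illustrative assumptions, not taken from the study:

    ```python
    import numpy as np

    def leading_cvs_modes(profiles, n_modes=3):
        """Leading modes of cloud vertical structure from a set of profiles.

        profiles : (n_samples, n_levels) array of cloud-fraction profiles.
        The modes are the leading right-singular vectors of the anomaly
        matrix; projecting the anomalies onto them gives the (original,
        unstandardized) principal components.
        """
        anomalies = profiles - profiles.mean(axis=0)      # remove mean profile
        U, s, Vt = np.linalg.svd(anomalies, full_matrices=False)
        modes = Vt[:n_modes]                              # (n_modes, n_levels)
        pcs = anomalies @ modes.T                         # (n_samples, n_modes)
        explained = s[:n_modes] ** 2 / np.sum(s ** 2)     # variance fractions
        return modes, pcs, explained
    ```

    Standardizing each PC by its standard deviation would give the "standardized PCs" the abstract distinguishes from the OPCs.
    
    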

  15. Automation of electroweak NLO corrections in general models

    Energy Technology Data Exchange (ETDEWEB)

    Lang, Jean-Nicolas [Universitaet Wuerzburg (Germany)

    2016-07-01

    I discuss the automated generation of scattering amplitudes in general quantum field theories at next-to-leading order in perturbation theory. The work is based on Recola, a highly efficient one-loop amplitude generator for the Standard Model, which I have extended so that it can deal with general quantum field theories. Internally, Recola computes off-shell currents, and for new models new rules for off-shell currents emerge which are derived from the Feynman rules. My work relies on the UFO format, which can be obtained from a suitable model builder, e.g. FeynRules. I have developed tools to derive the necessary counterterm structures and to perform the renormalization within Recola in an automated way. I describe the procedure using the example of the two-Higgs-doublet model.

  16. Comparison of body composition between fashion models and women in general.

    Science.gov (United States)

    Park, Sunhee

    2017-12-31

    The present study compared the physical characteristics and body composition of professional fashion models and women in general, utilizing the skinfold test. The research sample consisted of 90 professional fashion models presently active in Korea and 100 females from the general population, all selected through convenience sampling. Measurement followed the standardized methods and procedures set by the International Society for the Advancement of Kinanthropometry. Body density (mg/mm) and body fat (%) were measured at the biceps, triceps, subscapular, and suprailiac sites. The results showed that the biceps, triceps, subscapular, and suprailiac skinfolds of professional fashion models were significantly thinner than those of women in general, that body density and body fat of the fashion models were significantly lower, and that their stature was significantly greater. These results suggest the differences are partly attributable to the taller stature of fashion models relative to women in general. Moreover, there is an effort on the part of fashion models to lose weight in order to maintain a thin body and a low weight for occupational reasons. ©2017 The Korean Society for Exercise Nutrition

  17. Generalized Linear Models with Applications in Engineering and the Sciences

    CERN Document Server

    Myers, Raymond H; Vining, G Geoffrey; Robinson, Timothy J

    2012-01-01

    Praise for the First Edition "The obvious enthusiasm of Myers, Montgomery, and Vining and their reliance on their many examples as a major focus of their pedagogy make Generalized Linear Models a joy to read. Every statistician working in any area of applied science should buy it and experience the excitement of these new approaches to familiar activities."-Technometrics Generalized Linear Models: With Applications in Engineering and the Sciences, Second Edition continues to provide a clear introduction to the theoretical foundations and key applications of generalized linear models (GLMs). Ma

  18. Double generalized linear compound poisson models to insurance claims data

    DEFF Research Database (Denmark)

    Andersen, Daniel Arnfeldt; Bonat, Wagner Hugo

    2017-01-01

    This paper describes the specification, estimation and comparison of double generalized linear compound Poisson models based on the likelihood paradigm. The models are motivated by insurance applications, where the distribution of the response variable is composed by a degenerate distribution...... implementation and illustrate the application of double generalized linear compound Poisson models using a data set about car insurances....

  19. Modeling age-specific mortality for countries with generalized HIV epidemics.

    Directory of Open Access Journals (Sweden)

    David J Sharrow

    Full Text Available In a given population the age pattern of mortality is an important determinant of total number of deaths, age structure, and through effects on age structure, the number of births and thereby growth. Good mortality models exist for most populations except those experiencing generalized HIV epidemics and some developing country populations. The large number of deaths concentrated at very young and adult ages in HIV-affected populations produce a unique 'humped' age pattern of mortality that is not reproduced by any existing mortality models. Both burden of disease reporting and population projection methods require age-specific mortality rates to estimate numbers of deaths and produce plausible age structures. For countries with generalized HIV epidemics these estimates should take into account the future trajectory of HIV prevalence and its effects on age-specific mortality. In this paper we present a parsimonious model of age-specific mortality for countries with generalized HIV/AIDS epidemics.The model represents a vector of age-specific mortality rates as the weighted sum of three independent age-varying components. We derive the age-varying components from a Singular Value Decomposition of the matrix of age-specific mortality rate schedules. The weights are modeled as a function of HIV prevalence and one of three possible sets of inputs: life expectancy at birth, a measure of child mortality, or child mortality with a measure of adult mortality. We calibrate the model with 320 five-year life tables for each sex from the World Population Prospects 2010 revision that come from the 40 countries of the world that have and are experiencing a generalized HIV epidemic. Cross validation shows that the model is able to outperform several existing model life table systems.We present a flexible, parsimonious model of age-specific mortality for countries with generalized HIV epidemics. Combined with the outputs of existing epidemiological and
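    The model's core construction, a mortality schedule expressed as a weighted sum of three age-varying components derived from a Singular Value Decomposition, can be sketched as follows. Function names and array shapes are hypothetical; the authors provide their own implementation:

    ```python
    import numpy as np

    def fit_components(log_mx, n_comp=3):
        """Derive age-varying components from a matrix of mortality schedules.

        log_mx : (n_schedules, n_ages) array of log age-specific mortality
        rates, one row per life table. Returns the first n_comp
        right-singular vectors, i.e. the independent age components.
        """
        U, s, Vt = np.linalg.svd(log_mx, full_matrices=False)
        return Vt[:n_comp]

    def predict_schedule(weights, components):
        """A mortality schedule as the weighted sum of the age components.

        In the paper the weights are in turn modeled as functions of HIV
        prevalence plus summary mortality inputs; here they are given directly.
        """
        return np.asarray(weights) @ components
    ```
    
    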

  20. Membrane models and generalized Z2 gauge theories

    International Nuclear Information System (INIS)

    Lowe, M.J.; Wallace, D.J.

    1980-01-01

    We consider models of (d-n)-dimensional membranes fluctuating in a d-dimensional space under the action of surface tension. We investigate the renormalization properties of these models perturbatively and in a 1/n expansion. The potential relationships of these models to generalized Z2 gauge theories are indicated. (orig.)

  1. Drastic underestimation of amphipod biodiversity in the endangered Irano-Anatolian and Caucasus biodiversity hotspots.

    Science.gov (United States)

    Katouzian, Ahmad-Reza; Sari, Alireza; Macher, Jan N; Weiss, Martina; Saboori, Alireza; Leese, Florian; Weigand, Alexander M

    2016-03-01

    Biodiversity hotspots are centers of biological diversity and particularly threatened by anthropogenic activities. Their true magnitude of species diversity and endemism, however, is still largely unknown as species diversity is traditionally assessed using morphological descriptions only, thereby ignoring cryptic species. This directly limits evidence-based monitoring and management strategies. Here we used molecular species delimitation methods to quantify cryptic diversity of the montane amphipods in the Irano-Anatolian and Caucasus biodiversity hotspots. Amphipods are ecosystem engineers in rivers and lakes. Species diversity was assessed by analysing two genetic markers (mitochondrial COI and nuclear 28S rDNA), compared with morphological assignments. Our results unambiguously demonstrate that species diversity and endemism is dramatically underestimated, with 42 genetically identified freshwater species in only five reported morphospecies. Over 90% of the newly recovered species cluster inside Gammarus komareki and G. lacustris; 69% of the recovered species comprise narrow range endemics. Amphipod biodiversity is drastically underestimated for the studied regions. Thus, the risk of biodiversity loss is significantly greater than currently inferred as most endangered species remain unrecognized and/or are only found locally. Integrative application of genetic assessments in monitoring programs will help to understand the true magnitude of biodiversity and accurately evaluate its threat status.
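    Molecular species delimitation of this kind is often approximated by grouping sequences whose pairwise genetic distance falls below a threshold (3% is a common COI heuristic; the study itself applies more sophisticated delimitation methods). A minimal single-linkage sketch, with illustrative names:

    ```python
    def delimit_species(dist, threshold=0.03):
        """Single-linkage clustering of a pairwise genetic distance matrix.

        dist : square symmetric matrix of pairwise distances (fractions).
        Sequences connected by any chain of distances below the threshold
        are merged into one putative species (a crude MOTU delimitation).
        Returns one cluster label per sequence.
        """
        n = len(dist)
        parent = list(range(n))

        def find(i):
            # union-find with path halving
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        for i in range(n):
            for j in range(i + 1, n):
                if dist[i][j] < threshold:
                    parent[find(i)] = find(j)
        return [find(i) for i in range(n)]
    ```

    The number of distinct labels is then the estimated species count, which is how cryptic diversity hidden inside a single morphospecies becomes visible.
    
    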

  2. Modeling Mediterranean Ocean climate of the Last Glacial Maximum

    Directory of Open Access Journals (Sweden)

    U. Mikolajewicz

    2011-03-01

    Full Text Available A regional ocean general circulation model of the Mediterranean is used to study the climate of the Last Glacial Maximum. The atmospheric forcing for these simulations has been derived from simulations with an atmospheric general circulation model, which in turn was forced with surface conditions from a coarse resolution earth system model. The model is successful in reproducing the general patterns of reconstructed sea surface temperature anomalies with the strongest cooling in summer in the northwestern Mediterranean and weak cooling in the Levantine, although the model underestimates the extent of the summer cooling in the western Mediterranean. However, there is a strong vertical gradient associated with this pattern of summer cooling, which makes the comparison with reconstructions complicated. The exchange with the Atlantic is decreased to roughly one half of its present value, which can be explained by the shallower Strait of Gibraltar as a consequence of lower global sea level. This reduced exchange causes a strong increase of salinity in the Mediterranean in spite of reduced net evaporation.

  3. A General Model for Estimating Macroevolutionary Landscapes.

    Science.gov (United States)

    Boucher, Florian C; Démery, Vincent; Conti, Elena; Harmon, Luke J; Uyeda, Josef

    2018-03-01

    The evolution of quantitative characters over long timescales is often studied using stochastic diffusion models. The current toolbox available to students of macroevolution is however limited to two main models: Brownian motion and the Ornstein-Uhlenbeck process, plus some of their extensions. Here, we present a very general model for inferring the dynamics of quantitative characters evolving under both random diffusion and deterministic forces of any possible shape and strength, which can accommodate interesting evolutionary scenarios like directional trends, disruptive selection, or macroevolutionary landscapes with multiple peaks. This model is based on a general partial differential equation widely used in statistical mechanics: the Fokker-Planck equation, also known in population genetics as the Kolmogorov forward equation. We thus call the model FPK, for Fokker-Planck-Kolmogorov. We first explain how this model can be used to describe macroevolutionary landscapes over which quantitative traits evolve and, more importantly, we detail how it can be fitted to empirical data. Using simulations, we show that the model has good behavior both in terms of discrimination from alternative models and in terms of parameter inference. We provide R code to fit the model to empirical data using either maximum-likelihood or Bayesian estimation, and illustrate the use of this code with two empirical examples of body mass evolution in mammals. FPK should greatly expand the set of macroevolutionary scenarios that can be studied since it opens the way to estimating macroevolutionary landscapes of any conceivable shape. [Adaptation; bounds; diffusion; FPK model; macroevolution; maximum-likelihood estimation; MCMC methods; phylogenetic comparative data; selection.].
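    The diffusion that the FPK model describes has a stochastic counterpart, dX = -V'(X) dt + σ dW, whose stationary density is proportional to exp(-2V(x)/σ²); simulating it shows how a landscape with multiple peaks shapes trait evolution. This is an independent Euler-Maruyama sketch, not the authors' R code:

    ```python
    import math
    import random

    def simulate_fpk(v_prime, x0=0.0, sigma=0.5, dt=0.01, n_steps=10000, seed=1):
        """Euler-Maruyama simulation of dX = -V'(X) dt + sigma dW.

        v_prime : derivative of the macroevolutionary potential V(x);
        wells of V correspond to peaks of the stationary trait density.
        Returns the simulated trait trajectory as a list of length n_steps+1.
        """
        rng = random.Random(seed)
        x = x0
        path = [x]
        for _ in range(n_steps):
            x += -v_prime(x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
            path.append(x)
        return path
    ```

    With a double-well potential V(x) = (x² - 1)², for example, the trajectory settles around one of the two peaks at x = ±1, a simple case of disruptive selection.
    
    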

  4. A multi-resolution assessment of the Community Multiscale Air Quality (CMAQ) model v4.7 wet deposition estimates for 2002–2006

    Directory of Open Access Journals (Sweden)

    K. W. Appel

    2011-05-01

    Full Text Available This paper examines the operational performance of the Community Multiscale Air Quality (CMAQ) model simulations for 2002–2006 using both 36-km and 12-km horizontal grid spacing, with a primary focus on the performance of the CMAQ model in predicting wet deposition of sulfate (SO4=), ammonium (NH4+) and nitrate (NO3−). Performance of the wet deposition estimates from the model is determined by comparing CMAQ-predicted concentrations to concentrations measured by the National Acid Deposition Program (NADP), specifically the National Trends Network (NTN). For SO4= wet deposition, the CMAQ model estimates were generally comparable between the 36-km and 12-km simulations for the eastern US, with the 12-km simulation giving slightly higher estimates of SO4= wet deposition than the 36-km simulation on average. The result is a slightly larger normalized mean bias (NMB) for the 12-km simulation; however, both simulations had annual biases of less than ±15 % for each of the five years. The model-estimated SO4= wet deposition values improved when they were adjusted to account for biases in the model-estimated precipitation. The CMAQ model underestimates NH4+ wet deposition over the eastern US, with a slightly larger underestimation in the 36-km simulation. The largest underestimations occur in the winter and spring periods, while the summer and fall have slightly smaller underestimations of NH4+ wet deposition. The underestimation in NH4+ wet deposition is likely due in part to the poor temporal and spatial representation of ammonia (NH3) emissions, particularly those emissions associated with fertilizer applications and NH3 bi-directional exchange. The model performance for estimates of NO3− wet deposition
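    The normalized mean bias used to score these wet deposition estimates has the standard definition sketched below; a negative NMB indicates model underestimation:

    ```python
    def normalized_mean_bias(model, obs):
        """NMB = sum(model - obs) / sum(obs), over paired model/observation
        values. Often reported as a percentage (multiply by 100)."""
        return sum(m - o for m, o in zip(model, obs)) / sum(obs)
    ```
    
    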

  5. Dynamical CP violation of the generalized Yang-Mills model

    International Nuclear Information System (INIS)

    Wang Dianfu; Chang Xiaojing; Sun Xiaoyu

    2011-01-01

    Starting from the generalized Yang-Mills model, which contains, besides the vector part Vμ, also a scalar part S and a pseudoscalar part P, it is shown, in terms of the Nambu-Jona-Lasinio (NJL) mechanism, that CP violation can be realized dynamically. The combination of the generalized Yang-Mills model and the NJL mechanism provides a new way to explain CP violation. (authors)

  6. Generalized Reduced Order Model Generation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — M4 Engineering proposes to develop a generalized reduced order model generation method. This method will allow for creation of reduced order aeroservoelastic state...

  7. Underestimating the Toxicological Challenges Associated with the Use of Herbal Medicinal Products in Developing Countries

    Directory of Open Access Journals (Sweden)

    Vidushi S. Neergheen-Bhujun

    2013-01-01

    Full Text Available Various reports suggest a high contemporaneous prevalence of herb-drug use in both developed and developing countries. The World Health Organisation indicates that 80% of the Asian and African populations rely on traditional medicine as the primary method for their health care needs. Since time immemorial and despite the beneficial and traditional roles of herbs in different communities, the toxicity and herb-drug interactions that emanate from this practice have led to severe adverse effects and fatalities. As a result of the perception that herbal medicinal products have low risk, consumers usually disregard any association between their use and any adverse reactions hence leading to underreporting of adverse reactions. This is particularly common in developing countries and has led to a paucity of scientific data regarding the toxicity and interactions of locally used traditional herbal medicine. Other factors like general lack of compositional and toxicological information of herbs and poor quality of adverse reaction case reports present hurdles which are highly underestimated by the population in the developing world. This review paper addresses these toxicological challenges and calls for natural health product regulations as well as for protocols and guidance documents on safety and toxicity testing of herbal medicinal products.

  8. Underestimating the toxicological challenges associated with the use of herbal medicinal products in developing countries.

    Science.gov (United States)

    Neergheen-Bhujun, Vidushi S

    2013-01-01

    Various reports suggest a high contemporaneous prevalence of herb-drug use in both developed and developing countries. The World Health Organisation indicates that 80% of the Asian and African populations rely on traditional medicine as the primary method for their health care needs. Since time immemorial and despite the beneficial and traditional roles of herbs in different communities, the toxicity and herb-drug interactions that emanate from this practice have led to severe adverse effects and fatalities. As a result of the perception that herbal medicinal products have low risk, consumers usually disregard any association between their use and any adverse reactions hence leading to underreporting of adverse reactions. This is particularly common in developing countries and has led to a paucity of scientific data regarding the toxicity and interactions of locally used traditional herbal medicine. Other factors like general lack of compositional and toxicological information of herbs and poor quality of adverse reaction case reports present hurdles which are highly underestimated by the population in the developing world. This review paper addresses these toxicological challenges and calls for natural health product regulations as well as for protocols and guidance documents on safety and toxicity testing of herbal medicinal products.

  9. Anisotropic charged generalized polytropic models

    Science.gov (United States)

    Nasim, A.; Azam, M.

    2018-06-01

    In this paper, we find some new anisotropic charged models admitting a generalized polytropic equation of state with spherical symmetry. An analytic solution of the Einstein-Maxwell field equations is obtained through the transformation introduced by Durgapal and Banerji (Phys. Rev. D 27:328, 1983). The physical viability of the solutions corresponding to the polytropic index η = 1/2, 2/3, 1, 2 is analyzed graphically. For this, we plot physical quantities such as the radial and tangential pressure, anisotropy, and speed of sound, which demonstrate that these models satisfy all the physical conditions required for a relativistic star. Further, it is noted that previous results for anisotropic charged matter with linear, quadratic and polytropic equations of state can be retrieved.
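    For orientation, a generalized polytropic equation of state combines a linear term with a classical polytropic term. The notation below is one common convention and is assumed here, not taken from the paper:

    ```latex
    % Generalized polytropic equation of state (radial pressure):
    % a linear term plus a polytropic term with index \eta.
    P_r = \alpha \rho + \kappa \rho^{\,1 + \frac{1}{\eta}},
    \qquad \eta = \tfrac{1}{2},\ \tfrac{2}{3},\ 1,\ 2
    ```

    Setting α = 0 recovers the pure polytrope, while κ = 0 gives the linear equation of state, which is why the earlier linear and polytropic results can be retrieved as special cases.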

  10. Generalized model of the microwave auditory effect

    International Nuclear Information System (INIS)

    Yitzhak, N M; Ruppin, R; Hareuveny, R

    2009-01-01

    A generalized theoretical model for evaluating the amplitudes of the sound waves generated in a spherical head model, which is irradiated by microwave pulses, is developed. The thermoelastic equation of motion is solved for a spherically symmetric heating pattern of arbitrary form. For previously treated heating patterns that are peaked at the sphere centre, the results reduce to those presented before. The generalized model is applied to the case in which the microwave absorption is concentrated near the sphere surface. It is found that, for equal average specific absorption rates, the sound intensity generated by a surface localized heating pattern is comparable to that generated by a heating pattern that is peaked at the centre. The dependence of the induced sound pressure on the shape of the microwave pulse is explored. Another theoretical extension, to the case of repeated pulses, is developed and applied to the interpretation of existing experimental data on the dependence of the human hearing effect threshold on the pulse repetition frequency.

  11. Generalized Empirical Likelihood-Based Focused Information Criterion and Model Averaging

    Directory of Open Access Journals (Sweden)

    Naoya Sueishi

    2013-07-01

    Full Text Available This paper develops model selection and averaging methods for moment restriction models. We first propose a focused information criterion based on the generalized empirical likelihood estimator. We address the issue of selecting an optimal model, rather than a correct model, for estimating a specific parameter of interest. Then, this study investigates a generalized empirical likelihood-based model averaging estimator that minimizes the asymptotic mean squared error. A simulation study suggests that our averaging estimator can be a useful alternative to existing post-selection estimators.

  12. Assessment of an extended version of the Jenkinson-Collison classification on CMIP5 models over Europe

    Science.gov (United States)

    Otero, Noelia; Sillmann, Jana; Butler, Tim

    2018-03-01

    A gridded, geographically extended weather type classification has been developed based on the Jenkinson-Collison (JC) classification system and used to evaluate the representation of weather types over Europe in a suite of climate model simulations. To this aim, a set of models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5) is compared with the circulation from two reanalysis products. Furthermore, we examine seasonal changes between simulated frequencies of weather types under present and future climate conditions. The models are in reasonably good agreement with the reanalyses, but some discrepancies occur: cyclonic days are overestimated over North and underestimated over South Europe, while anticyclonic situations are overestimated over South and underestimated over North Europe. Low Flow conditions are generally underestimated, especially in summer over South Europe, and Westerly conditions are generally overestimated. The projected frequencies of weather types in the late twenty-first century suggest an increase of Anticyclonic days over South Europe in all seasons except summer, while Westerly days increase over North and Central Europe, particularly in winter. We find significant changes in the frequency of Low Flow conditions and the Easterly type, which become more frequent during the warmer seasons over Southeast and Southwest Europe, respectively. Our results indicate that in winter the Westerly type has significant impacts on positive anomalies of maximum and minimum temperature over most of Europe. Except in winter, the warmer temperatures are linked to Easterly, Anticyclonic and Low Flow conditions, especially over the Mediterranean area. Furthermore, we show that changes in the frequency of weather types represent a minor contribution to the total change in European temperatures, which is mainly driven by changes in the temperature anomalies associated with the weather types themselves.

  13. Models of clinical reasoning with a focus on general practice: A critical review.

    Science.gov (United States)

    Yazdani, Shahram; Hosseinzadeh, Mohammad; Hosseini, Fakhrolsadat

    2017-10-01

    Diagnosis lies at the heart of general practice. Every day general practitioners (GPs) visit patients with a wide variety of complaints and concerns, with often minor but sometimes serious symptoms. General practice has many features which differentiate it from the specialty care setting, but during the last four decades little attention was paid to clinical reasoning in general practice. Therefore, we aimed to critically review clinical reasoning models with a focus on clinical reasoning in general practice or the clinical reasoning of general practitioners, to find out to what extent the existing models explain clinical reasoning specifically in primary care, and also to identify the gaps in the models for use in primary care settings. A systematic search for models of clinical reasoning was performed. For more precision, we excluded studies that focused on neurobiological aspects of reasoning, on reasoning in disciplines other than medicine, or on decision making or decision analysis regarding treatment or management plans. All the articles and documents were first scanned to see whether they included important relevant content or any models. The selected studies which described a model of clinical reasoning in general practitioners or with a focus on general practice were then reviewed, and appraisals or critiques of these models by other authors were included. The reviewed documents on the models were synthesized. Six models of clinical reasoning were identified: the hypothetico-deductive model, pattern recognition, a dual-process diagnostic reasoning model, a pathway for clinical reasoning, an integrative model of clinical reasoning, and a model of diagnostic reasoning strategies in primary care. Only one model had specifically focused on general practitioners' reasoning. A model of clinical reasoning that includes the specific features of general practice is needed to better help general practitioners with the difficulties of clinical reasoning in this setting.

  14. Models of clinical reasoning with a focus on general practice: a critical review

    Directory of Open Access Journals (Sweden)

    SHAHRAM YAZDANI

    2017-10-01

    Full Text Available Introduction: Diagnosis lies at the heart of general practice. Every day general practitioners (GPs) visit patients with a wide variety of complaints and concerns, with often minor but sometimes serious symptoms. General practice has many features which differentiate it from the specialty care setting, but during the last four decades little attention was paid to clinical reasoning in general practice. Therefore, we aimed to critically review clinical reasoning models with a focus on clinical reasoning in general practice or the clinical reasoning of general practitioners, to find out to what extent the existing models explain clinical reasoning specifically in primary care, and also to identify the gaps in the models for use in primary care settings. Methods: A systematic search for models of clinical reasoning was performed. For more precision, we excluded studies that focused on neurobiological aspects of reasoning, on reasoning in disciplines other than medicine, or on decision making or decision analysis regarding treatment or management plans. All the articles and documents were first scanned to see whether they included important relevant content or any models. The selected studies which described a model of clinical reasoning in general practitioners or with a focus on general practice were then reviewed, and appraisals or critiques of these models by other authors were included. The reviewed documents on the models were synthesized. Results: Six models of clinical reasoning were identified: the hypothetico-deductive model, pattern recognition, a dual-process diagnostic reasoning model, a pathway for clinical reasoning, an integrative model of clinical reasoning, and a model of diagnostic reasoning strategies in primary care. Only one model had specifically focused on general practitioners' reasoning. 
Conclusion: A model of clinical reasoning that includes specific features of general practice to better help general practitioners with the difficulties

  15. Presentation of the EURODELTA III intercomparison exercise - evaluation of the chemistry transport models' performance on criteria pollutants and joint analysis with meteorology

    Science.gov (United States)

    Bessagnet, Bertrand; Pirovano, Guido; Mircea, Mihaela; Cuvelier, Cornelius; Aulinger, Armin; Calori, Giuseppe; Ciarelli, Giancarlo; Manders, Astrid; Stern, Rainer; Tsyro, Svetlana; García Vivanco, Marta; Thunis, Philippe; Pay, Maria-Teresa; Colette, Augustin; Couvidat, Florian; Meleux, Frédérik; Rouïl, Laurence; Ung, Anthony; Aksoyoglu, Sebnem; María Baldasano, José; Bieser, Johannes; Briganti, Gino; Cappelletti, Andrea; D'Isidoro, Massimo; Finardi, Sandro; Kranenburg, Richard; Silibello, Camillo; Carnevale, Claudio; Aas, Wenche; Dupont, Jean-Charles; Fagerli, Hilde; Gonzalez, Lucia; Menut, Laurent; Prévôt, André S. H.; Roberts, Pete; White, Les

    2016-10-01

    The EURODELTA III exercise has facilitated a comprehensive intercomparison and evaluation of chemistry transport model performance. Participating models performed calculations for four 1-month periods in different seasons in the years 2006 to 2009, allowing the influence of different meteorological conditions on model performance to be evaluated. The exercise was performed with strict requirements for the input data, with few exceptions. As a consequence, most of the differences in the outputs can be attributed to differences in the model formulations of chemical and physical processes. The models were evaluated mainly for background rural stations in Europe. The performance was assessed in terms of bias, root mean square error and correlation with respect to the concentrations of air pollutants (NO2, O3, SO2, PM10 and PM2.5), as well as key meteorological variables. Though most meteorological parameters were prescribed, some variables like the planetary boundary layer (PBL) height and the vertical diffusion coefficient were derived in the model preprocessors and can partly explain the spread in model results. In general, the daytime PBL height is underestimated by all models. The largest variability of predicted PBL height is observed over the ocean and seas. For ozone, this study shows the importance of proper boundary conditions for accurate model calculations, and hence for the simulated regime of gas and particle chemistry. The models show similar and quite good performance for nitrogen dioxide, whereas they struggle to accurately reproduce measured sulfur dioxide concentrations (for which the agreement with observations is the poorest). In general, the models provide a close-to-observations map of particulate matter (PM2.5 and PM10) concentrations over Europe, with correlations in the range 0.4-0.7 and a systematic underestimation reaching -10 µg m-3 for PM10. The highest concentrations are much more underestimated, particularly in wintertime. 
Further evaluation of

  16. Effects of Host-rock Fracturing on Elastic-deformation Source Models of Volcano Deflation.

    Science.gov (United States)

    Holohan, Eoghan P; Sudhaus, Henriette; Walter, Thomas R; Schöpfer, Martin P J; Walsh, John J

    2017-09-08

    Volcanoes commonly inflate or deflate during episodes of unrest or eruption. Continuum mechanics models that assume linear elastic deformation of the Earth's crust are routinely used to invert the observed ground motions. The source(s) of deformation in such models are generally interpreted in terms of magma bodies or pathways, and thus form a basis for hazard assessment and mitigation. Using discontinuum mechanics models, we show how host-rock fracturing (i.e. non-elastic deformation) during drainage of a magma body can progressively change the shape and depth of an elastic-deformation source. We argue that this effect explains the marked spatio-temporal changes in source model attributes inferred for the March-April 2007 eruption of Piton de la Fournaise volcano, La Reunion. We find that pronounced deflation-related host-rock fracturing can: (1) yield inclined source model geometries for a horizontal magma body; (2) cause significant upward migration of an elastic-deformation source, leading to underestimation of the true magma body depth and potentially to a misinterpretation of ascending magma; and (3) at least partly explain underestimation by elastic-deformation sources of changes in sub-surface magma volume.

  17. Generalized entropy formalism and a new holographic dark energy model

    Science.gov (United States)

    Sayahian Jahromi, A.; Moosavi, S. A.; Moradpour, H.; Morais Graça, J. P.; Lobo, I. P.; Salako, I. G.; Jawad, A.

    2018-05-01

    Recently, the Rényi and Tsallis generalized entropies have been used extensively to study various cosmological and gravitational setups. Here, using a special type of generalized entropy that generalizes both the Rényi and Tsallis entropies, together with the holographic principle, we build a new model for holographic dark energy. Thereafter, considering a flat FRW universe filled by a pressureless component and the newly obtained dark energy model, the evolution of the cosmos is investigated, showing satisfactory results and behavior. In our model, the Hubble horizon plays the role of the IR cutoff, and there is no mutual interaction between the components of the cosmos. Our results indicate that the generalized entropy formalism may open a new window to become more familiar with the nature of spacetime and its properties.

  18. Underestimation of weight and its associated factors in overweight and obese university students from 21 low, middle and emerging economy countries.

    Science.gov (United States)

    Peltzer, Karl; Pengpid, Supa

    2015-01-01

    Awareness of overweight status is an important factor in weight control and may have more impact on one's decision to lose weight than objective weight status. The purpose of this study was to assess the prevalence of underestimation of overweight/obesity and its associated factors among university students from 21 low, middle and emerging economy countries. In a cross-sectional survey, the total sample included 15,068 undergraduate university students (mean age 20.8, SD=2.8, age range of 16-30 years) from 21 countries. Anthropometric measurements and a self-administered questionnaire were used to collect data. The prevalence of weight underestimation (self-rating as normal or underweight) among overweight or obese university students was 33.3% (41% in men and 25.1% in women); among overweight students, 39% felt they had normal weight or were underweight, and among obese students 67% did not rate themselves as obese or very overweight. In multivariate logistic regression analysis, being male, poor subjective health status, lack of overweight health risk awareness, lack of importance attached to losing weight, not trying and not dieting to lose weight, and eating breakfast regularly were associated with underestimation of weight in overweight and obese university students. The study found a high prevalence of underestimation of overweight/obesity among university students. Several of the factors identified can be utilized in health promotion programmes, including diet and weight management behaviours, to address inaccurate weight perceptions in the design of weight-control interventions, in particular for men. Copyright © 2014 Asian Oceanian Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.

  19. A multi-model evaluation of aerosols over South Asia: common problems and possible causes

    Science.gov (United States)

    Pan, X.; Chin, M.; Gautam, R.; Bian, H.; Kim, D.; Colarco, P. R.; Diehl, T. L.; Takemura, T.; Pozzoli, L.; Tsigaridis, K.; Bauer, S.; Bellouin, N.

    2015-05-01

    Atmospheric pollution over South Asia attracts special attention due to its effects on regional climate, the water cycle and human health. These effects are potentially growing owing to rising trends of anthropogenic aerosol emissions. In this study, the spatio-temporal aerosol distributions over South Asia from seven global aerosol models are evaluated against aerosol retrievals from NASA satellite sensors and ground-based measurements for the period 2000-2007. Overall, substantial underestimations of aerosol loading over South Asia are found systematically in most model simulations. Averaged over the entire region, the annual mean aerosol optical depth (AOD) is underestimated by 15 to 44% across models compared to MISR (Multi-angle Imaging SpectroRadiometer), which is the lowest bound among the various satellite AOD retrievals (from MISR, SeaWiFS (Sea-Viewing Wide Field-of-View Sensor), and MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua and Terra). In particular, during the post-monsoon and wintertime periods (i.e., October-January), when agricultural waste burning and anthropogenic emissions dominate, models fail to capture AOD and aerosol absorption optical depth (AAOD) over the Indo-Gangetic Plain (IGP) compared to ground-based Aerosol Robotic Network (AERONET) sunphotometer measurements. The underestimations of aerosol loading in the models generally occur in the lower troposphere (below 2 km), based on comparisons of aerosol extinction profiles calculated by the models with those from Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) data. Furthermore, surface concentrations of all aerosol components (sulfate, nitrate, organic aerosol (OA) and black carbon (BC)) from the models are found to be much lower than in situ measurements in winter. Several possible causes for these common problems of underestimating aerosols in models during the post-monsoon and wintertime periods are identified: the aerosol hygroscopic growth and formation of

  20. A Generalized Yang-Mills Model and Dynamical Breaking of Gauge Symmetry

    International Nuclear Information System (INIS)

    Wang Dianfu; Song Heshan

    2005-01-01

    A generalized Yang-Mills model, which contains, besides the vector part V_μ, also a scalar part S, is constructed, and the dynamical breaking of gauge symmetry in the model is also discussed. It is shown, in terms of the Nambu-Jona-Lasinio (NJL) mechanism, that the gauge symmetry breaking can be realized dynamically in the generalized Yang-Mills model. The combination of the generalized Yang-Mills model and the NJL mechanism provides a way to overcome the difficulties related to the Higgs field and the Higgs mechanism in the usual spontaneous symmetry breaking theory.

  1. Practical likelihood analysis for spatial generalized linear mixed models

    DEFF Research Database (Denmark)

    Bonat, W. H.; Ribeiro, Paulo Justiniano

    2016-01-01

    We investigate an algorithm for maximum likelihood estimation of spatial generalized linear mixed models based on the Laplace approximation. We compare our algorithm with a set of alternative approaches for two datasets from the literature. The Rhizoctonia root rot and the Rongelap datasets are, respectively, examples of binomial and count data modeled by spatial generalized linear mixed models. Our results show that the Laplace approximation provides estimates similar to Markov chain Monte Carlo likelihood, Monte Carlo expectation maximization, and the modified Laplace approximation. Some advantages of the Laplace approximation include the computation of the maximized log-likelihood value, which can be used for model selection and tests, and the possibility of obtaining realistic confidence intervals for model parameters based on profile likelihoods. The Laplace approximation also avoids the tuning...
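    As a sketch of the technique this abstract describes, the toy example below applies the Laplace approximation to the marginal likelihood of a one-cluster random-intercept Poisson model. This is a deliberately minimal stand-in for the spatial GLMMs of the paper; the data, parameter values and function names are illustrative, not the authors' code.

```python
import numpy as np
from math import lgamma

def joint_logpdf(y, beta0, sigma2, b):
    """log p(y, b) for y_i ~ Poisson(exp(beta0 + b)), b ~ N(0, sigma2)."""
    eta = beta0 + b
    loglik = np.sum(y * eta - np.exp(eta) - np.array([lgamma(v + 1.0) for v in y]))
    logprior = -0.5 * np.log(2.0 * np.pi * sigma2) - b ** 2 / (2.0 * sigma2)
    return loglik + logprior

def laplace_marginal_loglik(y, beta0, sigma2, n_iter=50):
    """Laplace approximation to log p(y) = log of the integral of p(y, b) db:
    expand log p(y, b) to second order around its mode b_hat."""
    b, n = 0.0, len(y)
    for _ in range(n_iter):              # Newton iterations; the target is concave
        mu = np.exp(beta0 + b)
        grad = np.sum(y) - n * mu - b / sigma2
        hess = -n * mu - 1.0 / sigma2
        b -= grad / hess
    hess = -n * np.exp(beta0 + b) - 1.0 / sigma2
    return joint_logpdf(y, beta0, sigma2, b) + 0.5 * np.log(2.0 * np.pi) - 0.5 * np.log(-hess)

y = np.array([3, 5, 4, 6, 2])
ll = laplace_marginal_loglik(y, beta0=1.0, sigma2=0.5)
```

    For this one-dimensional random effect the approximation can be checked directly against numerical integration; in the spatial setting the same expansion is applied to a high-dimensional latent Gaussian field.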

  2. A generalized statistical model for the size distribution of wealth

    International Nuclear Information System (INIS)

    Clementi, F; Gallegati, M; Kaniadakis, G

    2012-01-01

    In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and find excellent agreement with the data, superior to any other model known in the literature. (paper)

  3. A generalized statistical model for the size distribution of wealth

    Science.gov (United States)

    Clementi, F.; Gallegati, M.; Kaniadakis, G.

    2012-12-01

    In a recent paper in this journal (Clementi et al 2009 J. Stat. Mech. P02037), we proposed a new, physically motivated, distribution function for modeling individual incomes, having its roots in the framework of the κ-generalized statistical mechanics. The performance of the κ-generalized distribution was checked against real data on personal income for the United States in 2003. In this paper we extend our previous model so as to be able to account for the distribution of wealth. Probabilistic functions and inequality measures of this generalized model for wealth distribution are obtained in closed form. In order to check the validity of the proposed model, we analyze the US household wealth distributions from 1984 to 2009 and find excellent agreement with the data, superior to any other model known in the literature.
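    A minimal sketch of the κ-generalized survival function underlying this family of models, built from the Kaniadakis κ-exponential. The parameter values below are illustrative, not fitted to the US wealth data.

```python
import numpy as np

def exp_kappa(x, kappa):
    """Kaniadakis kappa-exponential; reduces to exp(x) as kappa -> 0."""
    x = np.asarray(x, dtype=float)
    if kappa == 0.0:
        return np.exp(x)
    return (np.sqrt(1.0 + kappa ** 2 * x ** 2) + kappa * x) ** (1.0 / kappa)

def survival(x, alpha, beta, kappa):
    """P(X > x) for the kappa-generalized distribution: exp_kappa(-beta * x**alpha).
    For kappa = 0 this is a Weibull tail; for kappa > 0 the tail decays as a
    power law, which is what makes the family suitable for wealth data."""
    return exp_kappa(-beta * np.asarray(x, dtype=float) ** alpha, kappa)

x = np.linspace(0.0, 10.0, 200)
s = survival(x, alpha=2.0, beta=0.5, kappa=0.7)
```

    The survival function starts at 1, decreases monotonically, and interpolates between exponential-like behavior near the origin and a Pareto-like tail.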

  4. Generalized Tavis-Cummings models and quantum networks

    Science.gov (United States)

    Gorokhov, A. V.

    2018-04-01

    The properties of quantum networks based on generalized Tavis-Cummings models are theoretically investigated. We have calculated the information transfer success rate from one node to another in a simple model of a quantum network realized with two-level atoms placed in cavities and interacting with an external laser field and cavity photons. The dynamical group method for the Hamiltonian and the corresponding coherent-state technique were used to investigate the temporal dynamics of the two-node model.

  5. Echocardiography underestimates stroke volume and aortic valve area: implications for patients with small-area low-gradient aortic stenosis.

    Science.gov (United States)

    Chin, Calvin W L; Khaw, Hwan J; Luo, Elton; Tan, Shuwei; White, Audrey C; Newby, David E; Dweck, Marc R

    2014-09-01

    Discordance between a small aortic valve area (AVA) and a low mean pressure gradient (MPG) in aortic stenosis (AS) may be attributed to underestimation of the left ventricular outflow tract area (LVOTarea) and stroke volume, alongside inconsistencies in recommended thresholds. One hundred thirty-three patients with mild to severe AS and 33 control individuals underwent comprehensive echocardiography and cardiovascular magnetic resonance imaging (MRI). Stroke volume and LVOTarea were calculated using echocardiography and MRI, and the effects on AVA estimation were assessed. The relationship between AVA and MPG measurements was then modelled with nonlinear regression and consistent thresholds for these parameters calculated. Finally, the effect of these modified AVA measurements and novel thresholds on the number of patients with small-area low-gradient AS was investigated. Compared with MRI, echocardiography underestimated LVOTarea (n = 40; -0.7 cm(2); 95% confidence interval [CI], -2.6 to 1.3), stroke volumes (-6.5 mL/m(2); 95% CI, -28.9 to 16.0) and, consequently, AVA (-0.23 cm(2); 95% CI, -1.01 to 0.59). Moreover, an AVA of 1.0 cm(2) corresponded to an MPG of 24 mm Hg based on echocardiographic measurements and 37 mm Hg after correction with MRI-derived stroke volumes. Based on conventional measures, 56 patients had discordant small-area low-gradient AS. Using MRI-derived stroke volumes and the revised thresholds, a 48% reduction in discordance was observed (n = 29). Echocardiography underestimated LVOTarea, stroke volume, and therefore AVA, compared with MRI. The thresholds based on current guidelines were also inconsistent. In combination, these factors explain > 40% of patients with discordant small-area low-gradient AS. Copyright © 2014 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.
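    The underestimation of AVA follows arithmetically from the standard continuity equation used in echocardiography (stroke volume = LVOTarea × LVOT velocity-time integral; AVA = stroke volume / aortic-valve VTI), so any bias in LVOTarea propagates linearly into AVA. A sketch with hypothetical numbers, not values from the study:

```python
def stroke_volume(lvot_area_cm2, vti_lvot_cm):
    """Continuity-equation stroke volume (mL): LVOT area x LVOT velocity-time integral."""
    return lvot_area_cm2 * vti_lvot_cm

def aortic_valve_area(lvot_area_cm2, vti_lvot_cm, vti_av_cm):
    """AVA (cm^2) from the continuity equation: stroke volume / aortic-valve VTI."""
    return stroke_volume(lvot_area_cm2, vti_lvot_cm) / vti_av_cm

# Hypothetical patient: MRI-derived vs echo-underestimated LVOT area.
ava_mri = aortic_valve_area(4.0, 20.0, 80.0)    # reference AVA
ava_echo = aortic_valve_area(3.3, 20.0, 80.0)   # AVA shrinks in proportion to LVOT area
```

    With the same VTIs, a 17.5% underestimation of LVOT area yields exactly a 17.5% underestimation of AVA, which is how a truly moderate valve can be misclassified as small-area AS.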

  6. On the Bengtsson-Frauendorf cranked-quasiparticle model

    International Nuclear Information System (INIS)

    Pal, K.F.; Nagarajan, M.A.; Rowley, N.

    1989-01-01

    The cranked-quasiparticle model of Bengtsson and Frauendorf (non-self-consistent HFB) is compared with some exact calculations of particles moving in a cranked, deformed mean field but interacting via rotationally invariant two-body forces. In order to make the exact calculations manageable, a single shell is used but despite this small basis the quasiparticle model is shown to have a high degree of success. The usual choice of pair gap is discussed and shown to be good. The general structures of band crossings in the exact calculations are well reproduced and some crossing frequencies are given quantitatively though the odd-particle systems require blocking. Interaction strengths are not well reproduced though some qualitative features, e.g. oscillations, are obtained. These interactions are generally underestimated, an effect which causes the HFB yrast band to behave less collectively than it should. (orig.)

  7. Presentation of the EURODELTA III intercomparison exercise – evaluation of the chemistry transport models' performance on criteria pollutants and joint analysis with meteorology

    Directory of Open Access Journals (Sweden)

    B. Bessagnet

    2016-10-01

    Full Text Available The EURODELTA III exercise has facilitated a comprehensive intercomparison and evaluation of chemistry transport model performances. Participating models performed calculations for four 1-month periods in different seasons in the years 2006 to 2009, allowing the influence of different meteorological conditions on model performances to be evaluated. The exercise was performed with strict requirements for the input data, with few exceptions. As a consequence, most of the differences in the outputs can be attributed to differences in the model formulations of chemical and physical processes. The models were evaluated mainly for background rural stations in Europe. The performance was assessed in terms of bias, root mean square error and correlation with respect to the concentrations of air pollutants (NO2, O3, SO2, PM10 and PM2.5), as well as key meteorological variables. Though most of the meteorological parameters were prescribed, some variables like the planetary boundary layer (PBL) height and the vertical diffusion coefficient were derived in the model preprocessors and can partly explain the spread in model results. In general, the daytime PBL height is underestimated by all models. The largest variability of the predicted PBL is observed over the ocean and seas. For ozone, this study shows the importance of proper boundary conditions for accurate model calculations, and hence for the regime of gas and particle chemistry. The models show similar and quite good performance for nitrogen dioxide, whereas they struggle to accurately reproduce measured sulfur dioxide concentrations (for which the agreement with observations is the poorest). In general, the models provide a close-to-observations map of particulate matter (PM2.5 and PM10) concentrations over Europe, with correlations in the range 0.4–0.7 and a systematic underestimation reaching −10 µg m−3 for PM10. The highest concentrations are much more underestimated, particularly in

  8. General Friction Model Extended by the Effect of Strain Hardening

    DEFF Research Database (Denmark)

    Nielsen, Chris V.; Martins, Paulo A.F.; Bay, Niels

    2016-01-01

    An extension to the general friction model proposed by Wanheim and Bay [1] to include the effect of strain hardening is proposed. The friction model relates the friction stress to the fraction of real contact area by a friction factor under steady-state sliding. The original model for the real contact area as a function of the normalized contact pressure is based on slip-line analysis and hence on the assumption of rigid-ideally plastic material behavior. In the present work, a general finite element model is established to, firstly, reproduce the original model under the assumption of rigid...

  9. A General Microscopic Traffic Model Yielding Dissipative Shocks

    DEFF Research Database (Denmark)

    Gaididei, Yuri Borisovich; Caputo, Jean Guy; Christiansen, Peter Leth

    2018-01-01

    We consider a general microscopic traffic model with a delay. An algebraic traffic function reduces the equation to the Aw-Rascle microscopic model, while a sigmoid function gives the standard “follow-the-leader” model. For zero delay we prove that the homogeneous solution is globally stable...

  10. What Models and Satellites Tell Us (and Don't Tell Us) About Arctic Sea Ice Melt Season Length

    Science.gov (United States)

    Ahlert, A.; Jahn, A.

    2017-12-01

    Melt season length—the difference between the sea ice melt onset date and the sea ice freeze onset date—plays an important role in the radiation balance of the Arctic and the predictability of the sea ice cover. However, there are multiple possible definitions for sea ice melt and freeze onset in climate models, and none of them exactly correspond to the remote sensing definition. Using the CESM Large Ensemble model simulations, we show how this mismatch between model and remote sensing definitions of melt and freeze onset limits the utility of melt season remote sensing data for bias detection in models. It also opens up new questions about the precise physical meaning of the melt season remote sensing data. Despite these challenges, we find that the increase in melt season length in the CESM is not as large as that derived from remote sensing data, even when we account for internal variability and different definitions. At the same time, we find that the CESM ensemble members that have the largest trend in sea ice extent over the period 1979-2014 also have the largest melt season trend, driven primarily by the trend towards later freeze onsets. This might be an indication that an underestimation of the melt season length trend is one factor contributing to the generally underestimated sea ice loss within the CESM, and potentially climate models in general.

  11. Are We Underestimating Microplastic Contamination in Aquatic Environments?

    Science.gov (United States)

    Conkle, Jeremy L.; Báez Del Valle, Christian D.; Turner, Jeffrey W.

    2018-01-01

    Plastic debris, specifically microplastic in the aquatic environment, is an escalating environmental crisis. Efforts at national scales to reduce or ban microplastics in personal care products are starting to pay off, but this will not affect those materials already in the environment or those that result from unregulated products and materials. To better inform future microplastic research and mitigation efforts, this study (1) evaluates methods currently used to quantify microplastics in the environment and (2) characterizes the concentration and size distribution of microplastics in a variety of products. In this study, 50 published aquatic surveys were reviewed, and they demonstrated that most (~80%) only account for plastics ≥ 300 μm in diameter. In addition, we surveyed 770 personal care products to determine the occurrence, concentration and size distribution of polyethylene microbeads. Particle concentrations ranged from 1.9 to 71.9 mg g-1 of product, or 1649 to 31,266 particles g-1 of product. The large majority (>95%) of particles in the products surveyed were smaller than the 300 μm minimum diameter, indicating that previous environmental surveys could be underestimating microplastic contamination. To account for smaller particles as well as microfibers from synthetic textiles, we strongly recommend that future surveys consider methods that capture materials < 300 μm in diameter.

  12. A generalized conditional heteroscedastic model for temperature downscaling

    Science.gov (United States)

    Modarres, R.; Ouarda, T. B. M. J.

    2014-11-01

    This study describes a method for deriving the time-varying second-order moment, or heteroscedasticity, of local daily temperature and its association with large-scale Canadian Coupled General Circulation Model predictors. This is carried out by applying a multivariate generalized autoregressive conditional heteroscedasticity (MGARCH) approach to construct the conditional variance-covariance structure between General Circulation Model (GCM) predictors and maximum and minimum temperature time series during 1980-2000. Two MGARCH specifications, namely diagonal VECH and dynamic conditional correlation (DCC), are applied, and 25 GCM predictors were selected for bivariate temperature heteroscedastic modeling. It is observed that the conditional covariance between predictors and temperature is not very strong and mostly depends on the interaction between the random processes governing the temporal variation of predictors and predictands. The DCC model reveals a time-varying conditional correlation between GCM predictors and temperature time series. No remarkable increasing or decreasing change is observed in the correlation coefficients between GCM predictors and observed temperature during 1980-2000, while weak winter-summer seasonality is clear for both conditional covariance and correlation. Furthermore, the Kwiatkowski-Phillips-Schmidt-Shin (KPSS) stationarity test and the Brock-Dechert-Scheinkman (BDS) nonlinearity test showed that the GCM predictors, temperature and their conditional correlation time series are nonlinear but stationary during 1980-2000. However, the degree of nonlinearity of the temperature time series is higher than that of most of the GCM predictors.
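    A minimal sketch of the DCC(1,1) correlation recursion the abstract refers to, run on synthetic standardized residuals; the a and b parameters and the data are illustrative, not the fitted values from the study:

```python
import numpy as np

def dcc_correlations(eps, a=0.05, b=0.93):
    """DCC(1,1) recursion on standardized residuals eps (T x 2):
        Q_t = (1 - a - b) * S + a * e_{t-1} e_{t-1}' + b * Q_{t-1}
        R_t = D_t^{-1} Q_t D_t^{-1},  with D_t = diag(sqrt(diag(Q_t))).
    Returns the time-varying conditional correlation rho_t = R_t[0, 1]."""
    S = np.cov(eps.T)            # unconditional covariance as the long-run target
    Q = S.copy()
    rho = np.empty(len(eps))
    for t, e in enumerate(eps):
        d = np.sqrt(np.diag(Q))
        rho[t] = Q[0, 1] / (d[0] * d[1])
        Q = (1.0 - a - b) * S + a * np.outer(e, e) + b * Q
    return rho

rng = np.random.default_rng(0)
eps = rng.standard_normal((500, 2))    # stand-in for GARCH-standardized residuals
rho = dcc_correlations(eps)
```

    Because each Q_t is a positive-weighted combination of positive semi-definite matrices, the implied correlation series always stays within [-1, 1] while varying over time.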

  13. Improvement of PM10 prediction in East Asia using inverse modeling

    Science.gov (United States)

    Koo, Youn-Seo; Choi, Dae-Ryun; Kwon, Hi-Yong; Jang, Young-Kee; Han, Jin-Seok

    2015-04-01

    Aerosols from anthropogenic emissions in industrialized regions of China, as well as dust emissions from southern Mongolia and northern China that are transported along the prevailing northwesterly winds, have a large influence on air quality in Korea. The emission inventory for the East Asia region is an important factor in chemical transport modeling (CTM) for PM10 (particulate matter less than 10 µm in aerodynamic diameter) forecasts and air quality management in Korea. Most previous studies showed that predictions of PM10 mass concentration by CTMs were underestimated when compared with observational data. To reduce the discrepancies between observations and CTM predictions, an inverse Bayesian approach with the Comprehensive Air-quality Model with extensions (CAMx) as the forward model was applied to obtain optimized a posteriori PM10 emissions in East Asia. The predicted PM10 concentrations with a priori emissions were first compared with observations at monitoring sites in China and Korea for January and August 2008. The comparison showed that PM10 concentrations with a priori PM10 emissions for anthropogenic and dust sources were generally under-predicted. The result from the inverse modeling indicated that anthropogenic PM10 emissions in the industrialized and urbanized areas of China were underestimated, while dust emissions from desert and barren soil in southern Mongolia and northern China were overestimated. A priori PM10 emissions from northeastern China regions including Shenyang, Changchun, and Harbin were underestimated by about 300% (i.e., the ratio of a posteriori to a priori PM10 emissions was a factor of about 3). Predictions of PM10 concentrations with the a posteriori emissions showed better agreement with the observations, implying that the inverse modeling minimized the discrepancies in the model predictions by improving the PM10 emissions estimates for East Asia.
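    The inverse step can be illustrated with a toy conjugate Bayesian update for a single scalar emission scaling factor, under the assumption that modeled concentrations respond linearly to emissions. All numbers, priors and names below are hypothetical; the study's actual CAMx-based inversion is far more elaborate.

```python
import numpy as np

def posterior_scale(c_model, c_obs, prior_mean=1.0, prior_var=0.25, obs_var=4.0):
    """Conjugate Gaussian posterior for a scalar emission scaling factor s in the
    linear forward model c_obs = s * c_model + noise, with noise ~ N(0, obs_var)
    and prior s ~ N(prior_mean, prior_var)."""
    precision = 1.0 / prior_var + np.sum(c_model ** 2) / obs_var
    mean = (prior_mean / prior_var + np.sum(c_model * c_obs) / obs_var) / precision
    return mean, 1.0 / precision

c_model = np.array([20.0, 35.0, 50.0, 40.0])                # a priori modeled PM10, ug/m3
c_obs = 3.0 * c_model + np.array([1.0, -2.0, 0.5, -1.5])    # observations roughly 3x higher
s_mean, s_var = posterior_scale(c_model, c_obs)
```

    With observations about three times the model values, the posterior mean for s lands near 3 (the "underestimated by about 300%" situation in the abstract), shrunk slightly toward the prior of 1, and the posterior variance quantifies the remaining uncertainty.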

  14. Analysis of Error Propagation Within Hierarchical Air Combat Models

    Science.gov (United States)

    2016-06-01

    ...values alone are propagated through layers of combat models, the final results will likely be biased, and risk underestimated. An air-to-air engagement... ANALYSIS OF ERROR PROPAGATION WITHIN HIERARCHICAL AIR COMBAT MODELS, by Salih Ilaslan, June 2016. Thesis Advisor: Thomas W. Lucas; Second Reader: Jeffrey

  15. The generalized spherical model of ferromagnetic films

    International Nuclear Information System (INIS)

    Costache, G.

    1977-12-01

    The D → ∞ limit of the D-vectorial model of a ferromagnetic film with free surfaces is solved exactly. The mathematical mechanism responsible for the onset of a phase transition in the system is a generalized sticking phenomenon. It is shown that the temperature at which the sticking appears, i.e. the transition temperature of the model, increases monotonically with the number of layers of the film, contrary to what happens in the spherical model with an overall constraint. Certain correlation inequalities of Griffiths type are shown to hold. (author)

  16. Simulation modelling in agriculture: General considerations. | R.I. ...

    African Journals Online (AJOL)

    A computer simulation model is a detailed working hypothesis about a given system. The computer does all the necessary arithmetic when the hypothesis is invoked to predict the future behaviour of the simulated system under given conditions. A general pragmatic approach to model building is discussed; techniques are ...

  17. Assessing the ability of isotope-enabled General Circulation Models to simulate the variability of Iceland water vapor isotopic composition

    Science.gov (United States)

    Erla Sveinbjornsdottir, Arny; Steen-Larsen, Hans Christian; Jonsson, Thorsteinn; Ritter, Francois; Riser, Camilla; Masson-Delmotte, Valerie; Bonne, Jean Louis; Dahl-Jensen, Dorthe

    2014-05-01

    During the fall of 2010 we installed an autonomous water vapor spectroscopy laser (Los Gatos Research analyzer) in a lighthouse on the southwest coast of Iceland (63.83°N, 21.47°W). Despite significant initial problems with volcanic ash, high winds, and attacks by seagulls, the system has been continuously operational since the end of 2011 with limited downtime. The system automatically performs a calibration every 2 hours, yielding the accuracy and precision needed to analyze the second-order parameter, d-excess, in the water vapor. We find a strong linear relationship between d-excess and local relative humidity (RH) when normalized to SST. The observed slope of approximately -45 o/oo/% is similar to the theoretical predictions of Merlivat and Jouzel [1979] for a smooth surface, but the calculated intercept is significantly lower than predicted. Despite this good linear agreement with theoretical calculations, mismatches arise between our data and the seasonal cycle of water vapour isotopic composition simulated by the LMDZiso GCM nudged to large-scale winds from atmospheric analyses. The GCM is able to capture neither the seasonal variations in local RH nor those in d-excess. Based on daily data, the ability of LMDZiso to resolve day-to-day variability is measured by the strength of the correlation coefficient between observations and model outputs. This correlation coefficient reaches ~0.8 for surface absolute humidity but decreases to ~0.6 for δD and ~0.45 for d-excess. Moreover, the magnitude of day-to-day humidity variations is also underestimated by LMDZiso, which can explain the underestimated magnitude of isotopic depletion. Finally, the simulated and observed d-excess vs. RH relationships have similar slopes. We conclude that the underestimation of d-excess variability may partly arise from the poor performance of the humidity simulations.

  18. Non-linear general instability of ring-stiffened conical shells under external hydrostatic pressure

    International Nuclear Information System (INIS)

    Ross, C T F; Kubelt, C; McLaughlin, I; Etheridge, A; Turner, K; Paraskevaides, D; Little, A P F

    2011-01-01

    The paper presents the experimental results for 15 ring-stiffened circular steel conical shells, which failed by non-linear general instability. The results of these investigations were compared with various theoretical analyses, including an ANSYS eigenvalue buckling analysis and a second ANSYS analysis that used a step-by-step method until collapse, in which both material and geometrical nonlinearity were considered. The investigation also involved an analysis using BS5500 (PD 5500), together with the method of Ross of the University of Portsmouth. The ANSYS eigenvalue buckling analysis tended to overestimate the predicted buckling pressures, whereas the ANSYS nonlinear results compared favourably with the experimental results. The PD 5500 analysis was very time consuming and tended to grossly underestimate the experimental buckling pressures and, in some cases, to overestimate them. In contrast to PD 5500 and ANSYS, the design charts of Ross of the University of Portsmouth were the easiest of all these methods to use and generally only slightly underestimated the experimental collapse pressures. The ANSYS analyses gave some excellent graphical displays.

  19. Does the surface property of a disposable applanation tonometer account for its underestimation of intraocular pressure when compared with the Goldmann tonometer?

    Science.gov (United States)

    Osborne, Sarah F; Williams, Rachel; Batterbury, Mark; Wong, David

    2007-04-01

    Disposable tonometers are increasingly being adopted, partly because of concerns over the transmission of variant Creutzfeldt-Jakob disease and partly for convenience. Recently, we found that one such tonometer (Tonojet by Luneau Ophthalmologie, France) underestimated the intraocular pressure (IOP). We hypothesized that this underestimation was caused by a difference in the surface properties of the tonometers. A tensiometer was used to measure the suction force resulting from interfacial tension between a solution of lignocaine and fluorescein and the tonometers. The results showed that the suction force was significantly greater for the Goldmann than for the Tonojet, but the magnitude of this force was too small to account for the difference in IOP measurements. The Tonojet was less hydrophilic than the Goldmann, and the contact angle of the fluid was therefore greater. For a given tear film, less hydrophilic tonometers will tend to have thicker mires, and this may lead to underestimation of the IOP. When such disposable tonometers are used, it is recommended that care be taken to reject readings from thick mires.

  20. Exploiting Soil Moisture, Precipitation, and Streamflow Observations to Evaluate Soil Moisture/Runoff Coupling in Land Surface Models

    Science.gov (United States)

    Crow, W. T.; Chen, F.; Reichle, R. H.; Xia, Y.; Liu, Q.

    2018-05-01

    Accurate partitioning of precipitation into infiltration and runoff is a fundamental objective of land surface models tasked with characterizing the surface water and energy balance. Temporal variability in this partitioning is due, in part, to changes in prestorm soil moisture, which determine soil infiltration capacity and unsaturated storage. Utilizing the National Aeronautics and Space Administration Soil Moisture Active Passive Level-4 soil moisture product in combination with streamflow and precipitation observations, we demonstrate that land surface models (LSMs) generally underestimate the strength of the positive rank correlation between prestorm soil moisture and event runoff coefficients (i.e., the fraction of rainfall accumulation volume converted into stormflow runoff during a storm event). Underestimation is largest for LSMs employing an infiltration-excess approach for stormflow runoff generation. More accurate coupling strength is found in LSMs that explicitly represent subsurface stormflow or saturation-excess runoff generation processes.
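The diagnostic described in this abstract, a rank correlation between prestorm soil moisture and event runoff coefficients, can be sketched in a few lines. This is a minimal illustration on made-up event values, not the SMAP/streamflow analysis itself:

```python
# Sketch: rank correlation between prestorm soil moisture and event
# runoff coefficients (event runoff volume / event rainfall volume).
# The six "events" below are illustrative numbers, not observations.

def ranks(values):
    """Return 1-based ranks (assumes no ties, for simplicity)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman rank correlation via the classic 1 - 6*sum(d^2)/(n(n^2-1))."""
    n = len(x)
    d2 = sum((rx - ry) ** 2 for rx, ry in zip(ranks(x), ranks(y)))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

prestorm_sm = [0.12, 0.31, 0.22, 0.40, 0.18, 0.35]   # m^3/m^3
runoff_coef = [0.04, 0.25, 0.12, 0.38, 0.08, 0.22]   # dimensionless

rho = spearman(prestorm_sm, runoff_coef)
print(f"rank correlation: {rho:.3f}")
```

Here wetter prestorm soils mostly coincide with larger runoff coefficients, so the rank correlation is strongly positive; an LSM that "underestimates coupling strength" in the paper's sense would produce a noticeably flatter ranking relationship than the observations.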

  1. Fermions as generalized Ising models

    Directory of Open Access Journals (Sweden)

    C. Wetterich

    2017-04-01

    Full Text Available We establish a general map between Grassmann functionals for fermions and probability or weight distributions for Ising spins. The equivalence between the two formulations is based on identical transfer matrices and expectation values of products of observables. The map preserves locality properties and can be realized for arbitrary dimensions. We present a simple example where a quantum field theory for free massless Dirac fermions in two-dimensional Minkowski space is represented by an asymmetric Ising model on a Euclidean square lattice.
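The transfer-matrix equivalence at the heart of this construction can be checked on the simplest possible case, the one-dimensional Ising chain, where the partition function computed from the 2x2 transfer matrix matches a brute-force sum over spin configurations. This is the textbook warm-up, not the fermionic map itself:

```python
import math
from itertools import product

# 1D Ising chain, N sites, periodic boundaries, coupling K (in units of 1/kT).
# Transfer matrix T[s, s'] = exp(K * s * s') for s, s' in {+1, -1};
# its eigenvalues are 2*cosh(K) and 2*sinh(K), so Z = l1**N + l2**N.

def partition_transfer(K, N):
    l1, l2 = 2 * math.cosh(K), 2 * math.sinh(K)
    return l1 ** N + l2 ** N

def partition_bruteforce(K, N):
    Z = 0.0
    for spins in product((+1, -1), repeat=N):
        energy = sum(spins[i] * spins[(i + 1) % N] for i in range(N))
        Z += math.exp(K * energy)
    return Z

K, N = 0.7, 8
print(partition_transfer(K, N), partition_bruteforce(K, N))
```

The two numbers agree to machine precision, which is exactly the sense in which "identical transfer matrices" guarantee identical expectation values in the paper's more general fermion-Ising correspondence.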

  2. International Competition and Inequality: A Generalized Ricardian Model

    OpenAIRE

    Adolfo Figueroa

    2014-01-01

    Why does the gap in real wage rates persist between the First World and the Third World after so many years of increasing globalization? The standard neoclassical trade model predicts that real wage rates will be equalized with international trade, whereas the standard Ricardian trade model does not. Facts are thus consistent with the Ricardian model. However, this model leaves undetermined income distribution. The objective of this paper is to fill this gap by developing a generalized Ricard...

  3. On a Generalized Squared Gaussian Diffusion Model for Option Valuation

    Directory of Open Access Journals (Sweden)

    Edeki S.O.

    2017-01-01

    Full Text Available In financial mathematics, option pricing models are vital tools whose usefulness cannot be overemphasized. Modern approaches and modelling of financial derivatives are therefore required in option pricing and valuation settings. In this paper, we derive via the application of Ito lemma, a pricing model referred to as Generalized Squared Gaussian Diffusion Model (GSGDM for option pricing and valuation. Same approach can be considered via Stratonovich stochastic dynamics. We also show that the classical Black-Scholes, and the square root constant elasticity of variance models are special cases of the GSGDM. In addition, general solution of the GSGDM is obtained using modified variational iterative method (MVIM.
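As a concrete anchor for the nesting claim, the classical Black-Scholes call price, which the abstract identifies as a special case of the GSGDM, has the standard closed form below. This is the textbook formula, not the authors' MVIM solution:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, r, sigma, T):
    """Black-Scholes European call price (the classical special case)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

price = bs_call(S=100, K=100, r=0.05, sigma=0.2, T=1.0)
print(f"{price:.4f}")  # classic benchmark case, ~10.45
```

Recovering this benchmark value is a useful sanity check for any generalized diffusion model that claims Black-Scholes as a limiting case.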

  4. Underestimated effects of sediments on enhanced startup performance of biofilm systems for polluted source water pretreatment.

    Science.gov (United States)

    Lv, Zheng-Hui; Wang, Jing; Yang, Guang-Feng; Feng, Li-Juan; Mu, Jun; Zhu, Liang; Xu, Xiang-Yang

    2018-02-01

    In order to evaluate the enhancement mechanisms behind improved startup performance in biofilm systems for polluted source water pretreatment, three lab-scale reactors with elastic stereo media (ESM) were operated under different enhanced sediment and hydraulic agitation conditions. Interestingly, the results revealed previously underestimated or overlooked effects of sediment on the enhancement of pollutant removal performance and the enrichment of functional bacteria in biofilm systems. The maximum NH4+-N removal rate of 0.35 mg L-1 h-1 under the sediment-enhanced condition was 2.19 times that of the control reactor. Sediment contributed 42.0-56.5% of NH4+-N removal and 15.4-41.2% of total nitrogen removal in the different reactors under different operation conditions. Enhanced hydraulic agitation with sediment further improved the operation performance and the accumulation of functional bacteria. Generally, Proteobacteria (48.9-52.1%), Bacteroidetes (18.9-20.8%) and Actinobacteria (15.7-18.5%) were dominant in both sediment and ESM biofilm at the phylum level. The potentially functional bacteria found in sediment and ESM biofilm samples, with some functional bacteria present only in sediment samples (e.g., the genera Bacillus and Lactococcus of the phylum Firmicutes), may commonly contribute to the removal of nitrogen and organics.

  5. Generalized network modeling of capillary-dominated two-phase flow.

    Science.gov (United States)

    Raeini, Ali Q; Bijeljic, Branko; Blunt, Martin J

    2018-02-01

    We present a generalized network model for simulating capillary-dominated two-phase flow through porous media at the pore scale. Three-dimensional images of the pore space are discretized using a generalized network-described in a companion paper [A. Q. Raeini, B. Bijeljic, and M. J. Blunt, Phys. Rev. E 96, 013312 (2017), 10.1103/PhysRevE.96.013312]-which comprises pores that are divided into smaller elements called half-throats and subsequently into corners. Half-throats define the connectivity of the network at the coarsest level, connecting each pore to half-throats of its neighboring pores from their narrower ends, while corners define the connectivity of pore crevices. The corners are discretized at different levels for accurate calculation of entry pressures, fluid volumes, and flow conductivities that are obtained using direct simulation of flow on the underlying image. This paper discusses the two-phase flow model that is used to compute the averaged flow properties of the generalized network, including relative permeability and capillary pressure. We validate the model using direct finite-volume two-phase flow simulations on synthetic geometries, and then present a comparison of the model predictions with a conventional pore-network model and experimental measurements of relative permeability in the literature.

  6. Generalized network modeling of capillary-dominated two-phase flow

    Science.gov (United States)

    Raeini, Ali Q.; Bijeljic, Branko; Blunt, Martin J.

    2018-02-01

    We present a generalized network model for simulating capillary-dominated two-phase flow through porous media at the pore scale. Three-dimensional images of the pore space are discretized using a generalized network—described in a companion paper [A. Q. Raeini, B. Bijeljic, and M. J. Blunt, Phys. Rev. E 96, 013312 (2017), 10.1103/PhysRevE.96.013312]—which comprises pores that are divided into smaller elements called half-throats and subsequently into corners. Half-throats define the connectivity of the network at the coarsest level, connecting each pore to half-throats of its neighboring pores from their narrower ends, while corners define the connectivity of pore crevices. The corners are discretized at different levels for accurate calculation of entry pressures, fluid volumes, and flow conductivities that are obtained using direct simulation of flow on the underlying image. This paper discusses the two-phase flow model that is used to compute the averaged flow properties of the generalized network, including relative permeability and capillary pressure. We validate the model using direct finite-volume two-phase flow simulations on synthetic geometries, and then present a comparison of the model predictions with a conventional pore-network model and experimental measurements of relative permeability in the literature.

  7. Generalized modeling of the fractional-order memcapacitor and its character analysis

    Science.gov (United States)

    Guo, Zhang; Si, Gangquan; Diao, Lijie; Jia, Lixin; Zhang, Yanbin

    2018-06-01

    Memcapacitor is a new type of memory device generalized from the memristor. This paper proposes a generalized fractional-order memcapacitor model by introducing fractional calculus into the model. The generalized formulas are studied, and two fractional-order parameters α and β are introduced, where α mostly affects the fractional calculus value of the charge q within the generalized Ohm's law, and β generalizes the state equation, which simulates the physical mechanism of a memcapacitor, into the fractional sense. The model reduces to the conventional memcapacitor for α = 1, β = 0 and to the conventional memristor for α = 0, β = 1. The numerical behavior of the fractional-order memcapacitor is then studied, and the characteristics and output behaviors of the fractional-order memcapacitor driven by a sinusoidal charge are derived. The analysis shows that there are four basic v-q and v-i curve patterns when the fractional orders α and β are each equal to 0 or 1; all v-q and v-i curves of the other fractional-order models are transition curves between these four basic patterns.
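Fractional-order circuit models of this kind are commonly discretized with the Grünwald-Letnikov (GL) approximation. The sketch below is an illustrative numerical scheme, not the authors' exact formulation; it shows the recursively generated GL weights and how they collapse to the identity at order 0 and to a backward difference at order 1, mirroring the model's reduction to integer-order limits:

```python
import math

# Grünwald-Letnikov fractional derivative of order `alpha` on a uniform
# grid with step h: D^alpha f(t_n) ~ h**(-alpha) * sum_k w_k * f(t_{n-k}),
# with weights w_0 = 1 and w_k = w_{k-1} * (1 - (alpha + 1) / k).

def gl_weights(alpha, n):
    w = [1.0]
    for k in range(1, n):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_derivative(f_vals, alpha, h):
    w = gl_weights(alpha, len(f_vals))
    out = []
    for n in range(len(f_vals)):
        acc = sum(w[k] * f_vals[n - k] for k in range(n + 1))
        out.append(acc / h ** alpha)
    return out

h = 0.01
t = [i * h for i in range(200)]
f = [math.sin(x) for x in t]

d0 = gl_derivative(f, 0.0, h)   # order 0: weights become [1, 0, 0, ...] -> the signal itself
d1 = gl_derivative(f, 1.0, h)   # order 1: weights [1, -1, 0, ...] -> backward difference ~ cos(t)
```

Intermediate orders 0 < alpha < 1 interpolate smoothly between these two limits, which is the numerical counterpart of the "transition curves" described in the abstract.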

  8. Current definition and a generalized federbush model

    International Nuclear Information System (INIS)

    Singh, L.P.S.; Hagen, C.R.

    1978-01-01

    The Federbush model is studied, with particular attention being given to the definition of currents. Inasmuch as there is no a priori restriction of local gauge invariance, the currents in the interacting case can be defined more generally than in Q.E.D. It is found that two arbitrary parameters are thereby introduced into the theory. Lowest order perturbation calculations for the current correlation functions and the Fermion propagators indicate that the theory admits a whole class of solutions dependent upon these parameters with the closed solution of Federbush emerging as a special case. The theory is shown to be locally covariant, and a conserved energy--momentum tensor is displayed. One finds in addition that the generators of gauge transformations for the fields are conserved. Finally it is shown that the general theory yields the Federbush solution if suitable Thirring model type counterterms are added

  9. Is hyperthyroidism underestimated in pregnancy and misdiagnosed as hyperemesis gravidarum?

    Science.gov (United States)

    Luetic, Ana Tikvica; Miskovic, Berivoj

    2010-10-01

    Thyroid changes are considered to be normal events that occur as part of the large maternal multiorgan adjustment to pregnancy. However, hyperthyroidism occurs in pregnancy with a clinical presentation similar to hyperemesis gravidarum (HG) and to pregnancy itself. Moreover, 10% of women with HG will continue to have symptoms throughout the pregnancy, suggesting that the underlying cause might not be the first-trimester elevation of human chorionic gonadotropin. The variable frequency of both hyperthyroidism and HG worldwide might reflect confusion in the inclusion criteria for both diagnoses, compounded by the alteration of thyroid hormone levels seen in normal pregnancy. The increased prevalence of hyperthyroidism in the female population, without the expected rise in gestational hyperthyroidism, encouraged us to form the hypothesis that hyperthyroidism could be underestimated in normal pregnancy and even misdiagnosed as HG. This hypothesis, if confirmed, might have beneficial clinical implications, such as better detection of hyperthyroidism in pregnancy and application of therapy when needed, with a reduction of maternal and fetal consequences. Copyright 2010 Elsevier Ltd. All rights reserved.

  10. Retrofitting Non-Cognitive-Diagnostic Reading Assessment under the Generalized DINA Model Framework

    Science.gov (United States)

    Chen, Huilin; Chen, Jinsong

    2016-01-01

    Cognitive diagnosis models (CDMs) are psychometric models developed mainly to assess examinees' specific strengths and weaknesses in a set of skills or attributes within a domain. By adopting the Generalized-DINA model framework, the recently developed general modeling framework, we attempted to retrofit the PISA reading assessments, a…

  11. Tilted Bianchi type I dust fluid cosmological model in general relativity

    Indian Academy of Sciences (India)

    Tilted Bianchi type I dust fluid cosmological model in general relativity ... In this paper, we have investigated a tilted Bianchi type I cosmological model filled with dust of perfect fluid in general relativity. ... Pramana – Journal of Physics | News ...

  12. Combining a popularity-productivity stochastic block model with a discriminative-content model for general structure detection.

    Science.gov (United States)

    Chai, Bian-fang; Yu, Jian; Jia, Cai-Yan; Yang, Tian-bao; Jiang, Ya-wen

    2013-07-01

    Latent community discovery that combines links and contents of a text-associated network has drawn more attention with the advance of social media. Most of the previous studies aim at detecting densely connected communities and are not able to identify general structures, e.g., bipartite structure. Several variants based on the stochastic block model are more flexible for exploring general structures by introducing link probabilities between communities. However, these variants cannot identify the degree distributions of real networks due to a lack of modeling of the differences among nodes, and they are not suitable for discovering communities in text-associated networks because they ignore the contents of nodes. In this paper, we propose a popularity-productivity stochastic block (PPSB) model by introducing two random variables, popularity and productivity, to model the differences among nodes in receiving links and producing links, respectively. This model has the flexibility of existing stochastic block models in discovering general community structures and inherits the richness of previous models that also exploit popularity and productivity in modeling the real scale-free networks with power law degree distributions. To incorporate the contents in text-associated networks, we propose a combined model which combines the PPSB model with a discriminative model that models the community memberships of nodes by their contents. We then develop expectation-maximization (EM) algorithms to infer the parameters in the two models. Experiments on synthetic and real networks have demonstrated that the proposed models can yield better performances than previous models, especially on networks with general structures.

  13. Maximally Generalized Yang-Mills Model and Dynamical Breaking of Gauge Symmetry

    International Nuclear Information System (INIS)

    Wang Dianfu; Song Heshan

    2006-01-01

    A maximally generalized Yang-Mills model, which contains, besides the vector part V μ , also an axial-vector part A μ , a scalar part S, a pseudoscalar part P, and a tensor part T μν , is constructed and the dynamical breaking of gauge symmetry in the model is also discussed. It is shown, in terms of the Nambu-Jona-Lasinio mechanism, that the gauge symmetry breaking can be realized dynamically in the maximally generalized Yang-Mills model. The combination of the maximally generalized Yang-Mills model and the NJL mechanism provides a way to overcome the difficulties related to the Higgs field and the Higgs mechanism in the usual spontaneous symmetry breaking theory.

  14. Anomaly General Circulation Models.

    Science.gov (United States)

    Navarra, Antonio

    The feasibility of the anomaly model is assessed using barotropic and baroclinic models. In the barotropic case, both a stationary and a time-dependent model have been formulated and constructed, whereas only the stationary, linear case is considered in the baroclinic case. Results from the barotropic model indicate that a relation exists between the stationary solution and the time-averaged non-linear solution. The stationary linear baroclinic solution can therefore be considered with some confidence. The linear baroclinic anomaly model poses a formidable mathematical problem because a gigantic linear system must be solved to obtain the solution. A new method for finding solutions of large linear systems, based on a projection onto the Krylov subspace, is shown to be successful when applied to the linearized baroclinic anomaly model. The scheme consists of projecting the original linear system onto the Krylov subspace, thereby reducing the dimensionality of the matrix to be inverted to obtain the solution. With an appropriate setting of the damping parameters, the iterative Krylov method reaches a solution even using a Krylov subspace ten times smaller than the original space of the problem. This generality allows the treatment of the important problem of linear waves in the atmosphere. A larger class of (non-zonally symmetric) basic states can now be treated for the baroclinic primitive equations. These problems lead to large unsymmetrical linear systems of order 10,000 and more, which can now be successfully tackled by the Krylov method. The (R7) linear anomaly model is used to investigate extensively the linear response to equatorial and mid-latitude prescribed heating. The results indicate that the solution is deeply affected by the presence of the stationary waves in the basic state. The instability of the asymmetric flows, first pointed out by Simmons et al. (1983), is active also in the baroclinic case. However, the presence of baroclinic processes modifies the

  15. Risk assessment of oil price from static and dynamic modelling approaches

    DEFF Research Database (Denmark)

    Mi, Zhi-Fu; Wei, Yi-Ming; Tang, Bao-Jun

    2017-01-01

    ) and GARCH model on the basis of generalized error distribution (GED). The results show that EVT is a powerful approach to capture the risk in the oil markets. On the contrary, the traditional variance–covariance (VC) and Monte Carlo (MC) approaches tend to overestimate risk when the confidence level is 95%, but underestimate risk at the confidence level of 99%. The VaR of WTI returns is larger than that of Brent returns at identical confidence levels. Moreover, the GED-GARCH model can estimate the downside dynamic VaR accurately for WTI and Brent oil returns....
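The over/underestimation pattern reported here can be illustrated with a minimal sketch contrasting historical-simulation VaR with a normal (variance-covariance-style) VaR on synthetic fat-tailed returns. This is an illustration of the qualitative effect only; the paper's GED-GARCH and EVT estimators are considerably more involved:

```python
import random
import statistics

random.seed(0)

# Synthetic daily returns with fat tails: a mixture of a calm regime
# (sd = 1%) and an occasional volatile regime (sd = 4%).
returns = [random.gauss(0.0, 0.01 if random.random() < 0.9 else 0.04)
           for _ in range(5000)]

def var_historical(rets, level):
    """Historical-simulation VaR: empirical quantile of the loss distribution."""
    losses = sorted(-r for r in rets)
    return losses[int(level * len(losses))]

def var_normal(rets, z):
    """Variance-covariance-style VaR under a normal assumption (z = quantile)."""
    mu, sd = statistics.mean(rets), statistics.pstdev(rets)
    return -(mu - z * sd)

for level, z in ((0.95, 1.645), (0.99, 2.326)):
    print(level, round(var_historical(returns, level), 4),
          round(var_normal(returns, z), 4))
```

On such a mixture the normal VaR overstates the 95% loss quantile and understates the 99% one, the same qualitative pattern the abstract reports for the VC and MC approaches.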

  16. Extending the linear model with R generalized linear, mixed effects and nonparametric regression models

    CERN Document Server

    Faraway, Julian J

    2005-01-01

    Linear models are central to the practice of statistics and form the foundation of a vast range of statistical methodologies. Julian J. Faraway's critically acclaimed Linear Models with R examined regression and analysis of variance, demonstrated the different methods available, and showed in which situations each one applies. Following in those footsteps, Extending the Linear Model with R surveys the techniques that grow from the regression model, presenting three extensions to that framework: generalized linear models (GLMs), mixed effect models, and nonparametric regression models. The author's treatment is thoroughly modern and covers topics that include GLM diagnostics, generalized linear mixed models, trees, and even the use of neural networks in statistics. To demonstrate the interplay of theory and practice, throughout the book the author weaves the use of the R software environment to analyze the data of real examples, providing all of the R commands necessary to reproduce the analyses. All of the ...

  17. Generalized outcome-based strategy classification: comparing deterministic and probabilistic choice models.

    Science.gov (United States)

    Hilbig, Benjamin E; Moshagen, Morten

    2014-12-01

    Model comparisons are a vital tool for disentangling which of several strategies a decision maker may have used--that is, which cognitive processes may have governed observable choice behavior. However, previous methodological approaches have been limited to models (i.e., decision strategies) with deterministic choice rules. As such, psychologically plausible choice models--such as evidence-accumulation and connectionist models--that entail probabilistic choice predictions could not be considered appropriately. To overcome this limitation, we propose a generalization of Bröder and Schiffer's (Journal of Behavioral Decision Making, 19, 361-380, 2003) choice-based classification method, relying on (1) parametric order constraints in the multinomial processing tree framework to implement probabilistic models and (2) minimum description length for model comparison. The advantages of the generalized approach are demonstrated through recovery simulations and an experiment. In explaining previous methods and our generalization, we maintain a nontechnical focus--so as to provide a practical guide for comparing both deterministic and probabilistic choice models.

  18. Modelling debris flows down general channels

    Directory of Open Access Journals (Sweden)

    S. P. Pudasaini

    2005-01-01

    Full Text Available This paper is an extension of the single-phase cohesionless dry granular avalanche model over curved and twisted channels proposed by Pudasaini and Hutter (2003. It is a generalisation of the Savage and Hutter (1989, 1991 equations based on simple channel topography to a two-phase fluid-solid mixture of debris material. Important terms emerging from the correct treatment of the kinematic and dynamic boundary condition, and the variable basal topography are systematically taken into account. For vanishing fluid contribution and torsion-free channel topography our new model equations exactly degenerate to the previous Savage-Hutter model equations while such a degeneration was not possible by the Iverson and Denlinger (2001 model, which, in fact, also aimed to extend the Savage and Hutter model. The model equations of this paper have been rigorously derived; they include the effects of the curvature and torsion of the topography, generally for arbitrarily curved and twisted channels of variable channel width. The equations are put into a standard conservative form of partial differential equations. From these one can easily infer the importance and influence of the pore-fluid-pressure distribution in debris flow dynamics. The solid-phase is modelled by applying a Coulomb dry friction law whereas the fluid phase is assumed to be an incompressible Newtonian fluid. Input parameters of the equations are the internal and bed friction angles of the solid particles, the viscosity and volume fraction of the fluid, the total mixture density and the pore pressure distribution of the fluid at the bed. Given the bed topography and initial geometry and the initial velocity profile of the debris mixture, the model equations are able to describe the dynamics of the depth profile and bed parallel depth-averaged velocity distribution from the initial position to the final deposit. A shock capturing, total variation diminishing numerical scheme is implemented to

  19. General Equilibrium Models: Improving the Microeconomics Classroom

    Science.gov (United States)

    Nicholson, Walter; Westhoff, Frank

    2009-01-01

    General equilibrium models now play important roles in many fields of economics including tax policy, environmental regulation, international trade, and economic development. The intermediate microeconomics classroom has not kept pace with these trends, however. Microeconomics textbooks primarily focus on the insights that can be drawn from the…

  20. A general diagnostic model applied to language testing data.

    Science.gov (United States)

    von Davier, Matthias

    2008-11-01

    Probabilistic models with one or more latent variables are designed to report on a corresponding number of skills or cognitive attributes. Multidimensional skill profiles offer additional information beyond what a single test score can provide, if the reported skills can be identified and distinguished reliably. Many recent approaches to skill profile models are limited to dichotomous data and have made use of computationally intensive estimation methods such as Markov chain Monte Carlo, since standard maximum likelihood (ML) estimation techniques were deemed infeasible. This paper presents a general diagnostic model (GDM) that can be estimated with standard ML techniques and applies to polytomous response variables as well as to skills with two or more proficiency levels. The paper uses one member of a larger class of diagnostic models, a compensatory diagnostic model for dichotomous and partial credit data. Many well-known models, such as univariate and multivariate versions of the Rasch model and the two-parameter logistic item response theory model, the generalized partial credit model, as well as a variety of skill profile models, are special cases of this GDM. In addition to an introduction to this model, the paper presents a parameter recovery study using simulated data and an application to real data from the field test for TOEFL Internet-based testing.
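As a reference point for the nesting claims in this abstract, the simplest special case mentioned, the dichotomous Rasch model, makes the probability of a correct response depend only on the difference between ability θ and item difficulty b; the two-parameter logistic (2PL) model adds a discrimination slope. A sketch of these nested special cases, not the full GDM:

```python
import math

def rasch_p(theta, b):
    """Rasch model: P(correct | theta, b) = logistic(theta - b)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def two_pl_p(theta, a, b):
    """2PL item response model; the Rasch model is the special case a = 1."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

print(rasch_p(0.0, 0.0))  # ability equal to difficulty -> probability 0.5
```

The GDM generalizes this structure to multidimensional skill profiles and polytomous responses while keeping the same logistic building block, which is why these models fall out as special cases.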

  1. A Generalized Nonlocal Calculus with Application to the Peridynamics Model for Solid Mechanics

    OpenAIRE

    Alali, Bacim; Liu, Kuo; Gunzburger, Max

    2014-01-01

    A nonlocal vector calculus was introduced in [2] that has proved useful for the analysis of the peridynamics model of nonlocal mechanics and nonlocal diffusion models. A generalization is developed that provides a more general setting for the nonlocal vector calculus that is independent of particular nonlocal models. It is shown that general nonlocal calculus operators are integral operators with specific integral kernels. General nonlocal calculus properties are developed, including nonlocal...

  2. Critical Comments on the General Model of Instructional Communication

    Science.gov (United States)

    Walton, Justin D.

    2014-01-01

    This essay presents a critical commentary on McCroskey et al.'s (2004) general model of instructional communication. In particular, five points are examined which make explicit and problematize the meta-theoretical assumptions of the model. Comments call attention to the limitations of the model and argue for a broader approach to…

  3. An Object-oriented Knowledge Link Model for General Knowledge Management

    OpenAIRE

    Xiao-hong, CHEN; Bang-chuan, LAI

    2005-01-01

    The knowledge link is the basis of knowledge sharing and an indispensable part of knowledge standardization management. In this paper, an object-oriented knowledge link model is proposed for general knowledge management, using object-oriented representation based on a system of knowledge levels. In the model, knowledge links are divided into general knowledge links and integrated knowledge links, with corresponding link properties and methods. In addition, the model's BNF syntax is described and designed.

  4. Modeling Answer Change Behavior: An Application of a Generalized Item Response Tree Model

    Science.gov (United States)

    Jeon, Minjeong; De Boeck, Paul; van der Linden, Wim

    2017-01-01

    We present a novel application of a generalized item response tree model to investigate test takers' answer change behavior. The model allows us to simultaneously model the observed patterns of the initial and final responses after an answer change as a function of a set of latent traits and item parameters. The proposed application is illustrated…

  5. Generalized structured component analysis a component-based approach to structural equation modeling

    CERN Document Server

    Hwang, Heungsun

    2014-01-01

    Winner of the 2015 Sugiyama Meiko Award (Publication Award) of the Behaviormetric Society of Japan. Developed by the authors, generalized structured component analysis is an alternative to two longstanding approaches to structural equation modeling: covariance structure analysis and partial least squares path modeling. Generalized structured component analysis allows researchers to evaluate the adequacy of a model as a whole, compare a model to alternative specifications, and conduct complex analyses in a straightforward manner. Generalized Structured Component Analysis: A Component-Based Approach to Structural Equation Modeling provides a detailed account of this novel statistical methodology and its various extensions. The authors present the theoretical underpinnings of generalized structured component analysis and demonstrate how it can be applied to various empirical examples. The book enables quantitative methodologists, applied researchers, and practitioners to grasp the basic concepts behind this new a...

  6. Pricing Participating Products under a Generalized Jump-Diffusion Model

    Directory of Open Access Journals (Sweden)

    Tak Kuen Siu

    2008-01-01

    Full Text Available We propose a model for valuing participating life insurance products under a generalized jump-diffusion model with a Markov-switching compensator. It also nests a number of important and popular models in finance, including the classes of jump-diffusion models and Markovian regime-switching models. The Esscher transform is employed to determine an equivalent martingale measure. Simulation experiments are conducted to illustrate the practical implementation of the model and to highlight some features that can be obtained from our model.
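The jump-diffusion class that this generalized model nests can be illustrated by simulating a Merton-style jump-diffusion path under a single fixed regime. This is a hedged sketch of the nested special case only; the paper's model adds Markov regime switching and Esscher-transform pricing on top of this dynamic:

```python
import math
import random

def simulate_jump_diffusion(s0, mu, sigma, lam, jump_mu, jump_sigma,
                            T=1.0, steps=252, rng=None):
    """Euler simulation of a Merton-style jump-diffusion for the asset price:
    log-returns combine a Gaussian diffusion with Poisson-arriving jumps."""
    rng = rng or random.Random(42)
    dt = T / steps
    s = s0
    for _ in range(steps):
        diffusion = ((mu - 0.5 * sigma ** 2) * dt
                     + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        jump = 0.0
        if rng.random() < lam * dt:          # at most one jump per step
            jump = rng.gauss(jump_mu, jump_sigma)
        s *= math.exp(diffusion + jump)
    return s

# With zero volatility and zero jump intensity, the path reduces to
# deterministic growth s0 * exp(mu * T) -- a basic sanity check.
s_det = simulate_jump_diffusion(100, 0.05, 0.0, 0.0, 0.0, 0.0)
print(round(s_det, 4))
```

A Markov-switching version would draw (mu, sigma, lam) from a small set of regimes governed by a Markov chain, which is the "Markov-switching compensator" ingredient the abstract describes.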

  7. Efficient trawl avoidance by mesopelagic fishes causes large underestimation of their biomass

    KAUST Repository

    Kaartvedt, Stein

    2012-06-07

    Mesopelagic fishes occur in all the world’s oceans, but their abundance and consequently their ecological significance remains uncertain. The current global estimate based on net sampling prior to 1980 suggests a global abundance of one gigatonne (10^9 t) wet weight. Here we report novel evidence of efficient avoidance of such sampling by the most common myctophid fish in the Northern Atlantic, i.e. Benthosema glaciale. We reason that similar avoidance of nets may explain consistently higher acoustic abundance estimates of mesopelagic fish from different parts of the world’s oceans. It appears that mesopelagic fish abundance may be underestimated by one order of magnitude, suggesting that the role of mesopelagic fish in the oceans might need to be revised.

  8. a Model Study of Small-Scale World Map Generalization

    Science.gov (United States)

    Cheng, Y.; Yin, Y.; Li, C. M.; Wu, W.; Guo, P. P.; Ma, X. L.; Hu, F. M.

    2018-04-01

    With globalization and rapid development, every field is taking an increasing interest in physical geography and human economics, and there is a surging demand worldwide for small-scale world maps in large formats. Further study of automated mapping technology, and especially the realization of small-scale production of large-format global maps, is a key problem that the cartographic field needs to solve. In light of this, this paper adopts an improved model for map generalization in which the map and the data are separated, so that geographic data can be separated from mapping data; its main components are a cross-platform symbol library and an automatic map-making knowledge engine. In the cross-platform symbol library, the symbols and the physical symbols in the geographic information are configured at all scale levels. The automatic map-making knowledge engine consists of 97 types, 1086 subtypes, 21,845 basic algorithms and over 2500 related functional modules. In order to evaluate the accuracy and visual effect of our model for topographic and thematic maps, we take world map generalization at small scale as an example. After the map generalization process, combining and simplifying the scattered islands makes the map more explicit at the 1:2.1 billion scale, and the map features are more complete and accurate. The model not only enhances map generalization at various scales significantly, but also achieves integration among map production at various scales, suggesting that it provides a reference for cartographic generalization at various scales.

  9. Single-Column Modeling of Convection During the CINDY2011/DYNAMO Field Campaign With the CNRM Climate Model Version 6

    Science.gov (United States)

    Abdel-Lathif, Ahmat Younous; Roehrig, Romain; Beau, Isabelle; Douville, Hervé

    2018-03-01

    A single-column model (SCM) approach is used to assess the ability of the CNRM climate model (CNRM-CM) version 6 to represent the properties of the apparent heat source (Q1) and moisture sink (Q2) observed during the 3-month CINDY2011/DYNAMO field campaign over its Northern Sounding Array (NSA). The performance of the CNRM SCM is evaluated in a constrained configuration in which the latent and sensible surface heat fluxes are prescribed, because, when forced by observed sea surface temperature, the model is strongly limited by an underestimate of the surface fluxes, most probably related to the SCM forcing itself. The model exhibits a significant cold bias in the upper troposphere, near 200 hPa, and strong wet biases close to the surface and above 700 hPa. The analysis of the Q1 and Q2 profile distributions emphasizes the properties of the convective parameterization of the CNRM-CM physics. The distribution of the Q2 profile is particularly challenging. The model strongly underestimates the frequency of occurrence of deep moistening profiles, which likely involves a misrepresentation of shallow and congestus convection. Finally, a statistical approach is used to objectively define atmospheric regimes and construct a typical convection life cycle. A composite analysis shows that the CNRM SCM captures the general transition from bottom-heavy to mid-heavy to top-heavy convective heating. Some model errors are shown to be related to the stratiform regimes. The moistening observed during the shallow and congestus convection regimes also requires further improvements of the CNRM-CM physics.

  10. A Duality Result for the Generalized Erlang Risk Model

    Directory of Open Access Journals (Sweden)

    Lanpeng Ji

    2014-11-01

    Full Text Available In this article, we consider the generalized Erlang risk model and its dual model. By using a conditional measure-preserving correspondence between the two models, we derive an identity for two interesting conditional probabilities. Applications to the discounted joint density of the surplus prior to ruin and the deficit at ruin are also discussed.

  11. Efficient probabilistic model checking on general purpose graphic processors

    NARCIS (Netherlands)

    Bosnacki, D.; Edelkamp, S.; Sulewski, D.; Pasareanu, C.S.

    2009-01-01

    We present algorithms for parallel probabilistic model checking on general purpose graphic processing units (GPGPUs). For this purpose we exploit the fact that some of the basic algorithms for probabilistic model checking rely on matrix vector multiplication. Since this kind of linear algebraic

  12. A Generalized Radiation Model for Human Mobility: Spatial Scale, Searching Direction and Trip Constraint.

    Directory of Open Access Journals (Sweden)

    Chaogui Kang

    Full Text Available We generalized the recently introduced "radiation model", as an analog to the generalization of the classic "gravity model", to consolidate its nature of universality for modeling diverse mobility systems. By imposing the appropriate scaling exponent λ, normalization factor κ and system constraints including searching direction and trip OD constraint, the generalized radiation model accurately captures real human movements in various scenarios and spatial scales, including two different countries and four different cities. Our analytical results also indicated that the generalized radiation model outperformed alternative mobility models in various empirical analyses.
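
    The baseline radiation model that this record generalizes has a simple parameter-free closed form: the expected flux from zone i to zone j is T_ij = T_i · m_i n_j / ((m_i + s_ij)(m_i + n_j + s_ij)). A minimal Python sketch (the function name and the zone populations below are illustrative, not from the paper):

```python
def radiation_flux(T_i, m_i, n_j, s_ij):
    """Expected trips from zone i to zone j under the baseline radiation model.

    T_i  -- total trips leaving origin zone i
    m_i  -- population of origin zone i
    n_j  -- population of destination zone j
    s_ij -- population inside the circle centred on i with radius d(i, j),
            excluding m_i and n_j
    """
    return T_i * (m_i * n_j) / ((m_i + s_ij) * (m_i + n_j + s_ij))

# With no intervening population the flux depends only on the two zones:
print(radiation_flux(30, 100, 50, 0))  # 10.0
```

    The generalized model of the paper additionally introduces a scaling exponent λ, a normalization factor κ, and search-direction and OD constraints, which this baseline sketch omits.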

  13. Generalized linear models with random effects unified analysis via H-likelihood

    CERN Document Server

    Lee, Youngjo; Pawitan, Yudi

    2006-01-01

    Since their introduction in 1972, generalized linear models (GLMs) have proven useful in the generalization of classical normal models. Presenting methods for fitting GLMs with random effects to data, Generalized Linear Models with Random Effects: Unified Analysis via H-likelihood explores a wide range of applications, including combining information over trials (meta-analysis), analysis of frailty models for survival data, genetic epidemiology, and analysis of spatial and temporal models with correlated errors.Written by pioneering authorities in the field, this reference provides an introduction to various theories and examines likelihood inference and GLMs. The authors show how to extend the class of GLMs while retaining as much simplicity as possible. By maximizing and deriving other quantities from h-likelihood, they also demonstrate how to use a single algorithm for all members of the class, resulting in a faster algorithm as compared to existing alternatives. Complementing theory with examples, many of...

  14. Linear and Generalized Linear Mixed Models and Their Applications

    CERN Document Server

    Jiang, Jiming

    2007-01-01

    This book covers two major classes of mixed effects models, linear mixed models and generalized linear mixed models, and it presents an up-to-date account of theory and methods in analysis of these models as well as their applications in various fields. The book offers a systematic approach to inference about non-Gaussian linear mixed models. Furthermore, it has included recently developed methods, such as mixed model diagnostics, mixed model selection, and jackknife method in the context of mixed models. The book is aimed at students, researchers and other practitioners who are interested

  15. Thurstonian models for sensory discrimination tests as generalized linear models

    DEFF Research Database (Denmark)

    Brockhoff, Per B.; Christensen, Rune Haubo Bojesen

    2010-01-01

    Sensory discrimination tests such as the triangle, duo-trio, 2-AFC and 3-AFC tests produce binary data, and the Thurstonian decision rule links the underlying sensory difference δ to the observed number of correct responses. In this paper it is shown how each of these four situations can be viewed as a so-called generalized linear model. The underlying sensory difference δ becomes directly a parameter of the statistical model, and the estimate d' and its standard error become the "usual" output of the statistical analysis. The d' for the monadic A-NOT A method is shown to appear as a standard...
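
    For one of the four protocols mentioned, the 2-AFC test, the Thurstonian link is pc = Φ(d'/√2), so d' follows from the observed proportion correct by inverting the normal CDF. A minimal sketch (it does not reproduce the paper's GLM machinery or standard errors):

```python
from statistics import NormalDist

def dprime_2afc(p_correct):
    """Invert the 2-AFC psychometric function pc = Phi(d'/sqrt(2))
    to recover the Thurstonian sensory difference d'."""
    return 2 ** 0.5 * NormalDist().inv_cdf(p_correct)
```

    At chance performance (pc = 0.5) the recovered d' is zero; higher proportions correct map monotonically to larger sensory differences.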

  16. PerMallows: An R Package for Mallows and Generalized Mallows Models

    Directory of Open Access Journals (Sweden)

    Ekhine Irurozki

    2016-08-01

    Full Text Available In this paper we present the R package PerMallows, which is a complete toolbox to work with permutations, distances and some of the most popular probability models for permutations: Mallows and the Generalized Mallows models. The Mallows model is an exponential location model, considered as analogous to the Gaussian distribution. It is based on the definition of a distance between permutations. The Generalized Mallows model is its best-known extension. The package includes functions for making inference, sampling and learning such distributions. The distances considered in PerMallows are Kendall's τ, Cayley, Hamming and Ulam.
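
    PerMallows itself is an R package; as a language-neutral illustration of the model it implements, the following Python sketch (function names are ours, not the package API) enumerates the exact Mallows distribution under Kendall's τ for tiny n:

```python
from itertools import permutations
from math import exp

def kendall_tau(p, q):
    """Number of pairwise discordances between permutations p and q."""
    pos = {v: i for i, v in enumerate(q)}
    r = [pos[v] for v in p]
    n = len(r)
    return sum(1 for i in range(n) for j in range(i + 1, n) if r[i] > r[j])

def mallows_pmf(theta, center):
    """Exact Mallows probabilities over S_n by enumeration (small n only):
    P(s) proportional to exp(-theta * d_kendall(s, center))."""
    items = tuple(center)
    weights = {s: exp(-theta * kendall_tau(s, items))
               for s in permutations(items)}
    Z = sum(weights.values())
    return {s: w / Z for s, w in weights.items()}
```

    For larger n, enumeration is infeasible and one uses the closed-form normalization and sampling routines that packages such as PerMallows provide.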

  17. General classical solutions in the noncommutative CP(N-1) model

    International Nuclear Information System (INIS)

    Foda, O.; Jack, I.; Jones, D.R.T.

    2002-01-01

    We give an explicit construction of general classical solutions for the noncommutative CP(N-1) model in two dimensions, showing that they correspond to integer values for the action and topological charge. We also give explicit solutions for the Dirac equation in the background of these general solutions and show that the index theorem is satisfied

  18. Interest Rates with Long Memory: A Generalized Affine Term-Structure Model

    DEFF Research Database (Denmark)

    Osterrieder, Daniela

    We propose a model for the term structure of interest rates that is a generalization of the discrete-time, Gaussian, affine yield-curve model. Compared to standard affine models, our model allows for general linear dynamics in the vector of state variables. In an application to real yields of U.S. government bonds, we model the time series of the state vector by means of a co-fractional vector autoregressive model. The implication is that yields of all maturities exhibit nonstationary, yet mean-reverting, long-memory behavior of the order d ≈ 0.87. The long-run dynamics of the state vector are driven ... forecasts that outperform several benchmark models, especially at long forecasting horizons.

  19. Merons in a generally covariant model with Gursey term

    International Nuclear Information System (INIS)

    Akdeniz, K.G.; Smailagic, A.

    1982-10-01

    We study meron solutions of the generally covariant and Weyl invariant fermionic model with Gursey term. We find that, due to the presence of this term, merons can exist even without the cosmological constant. This is a new feature compared to previously studied models. (author)

  20. A General Polygon-based Deformable Model for Object Recognition

    DEFF Research Database (Denmark)

    Jensen, Rune Fisker; Carstensen, Jens Michael

    1999-01-01

    We propose a general scheme for object localization and recognition based on a deformable model. The model combines shape and image properties by warping an arbitrary prototype intensity template according to the deformation in shape. The shape deformations are constrained by a probabilistic distr...

  1. On the characterization and software implementation of general protein lattice models.

    Directory of Open Access Journals (Sweden)

    Alessio Bechini

    Full Text Available Lattice models of proteins have been widely used as a practical means to computationally investigate general properties of the system. In lattice models any sterically feasible conformation is represented as a self-avoiding walk on a lattice, and residue types are limited in number. So far, only two- or three-dimensional lattices have been used. The inspection of the neighborhood of alpha carbons in the core of real proteins reveals that also lattices with higher coordination numbers, possibly in higher dimensional spaces, can be adopted. In this paper, a new general parametric lattice model for simplified protein conformations is proposed and investigated. It is shown how the supporting software can be consistently designed to let algorithms that operate on protein structures be implemented in a lattice-agnostic way. The necessary theoretical foundations are developed and organically presented, pinpointing the role of the concept of main directions in lattice-agnostic model handling. Subsequently, the model features across dimensions and lattice types are explored in tests performed on benchmark protein sequences, using a Python implementation. Simulations give insights on the use of square and triangular lattices in a range of dimensions. The trend of potential minimum for sequences of different lengths, varying the lattice dimension, is uncovered. Moreover, an extensive quantitative characterization of the usage of the so-called "move types" is reported for the first time. The proposed general framework for the development of lattice models is simple yet complete, and an object-oriented architecture can be proficiently employed for the supporting software, by designing ad-hoc classes. The proposed framework represents a new general viewpoint that potentially subsumes a number of solutions previously studied. The adoption of the described model pushes to look at protein structure issues from a more general and essential perspective, making

  2. Generalized model for Memristor-based Wien family oscillators

    KAUST Repository

    Talukdar, Abdul Hafiz Ibne; Radwan, Ahmed G.; Salama, Khaled N.

    2012-01-01

    In this paper, we report the unconventional characteristics of Memristor in Wien oscillators. Generalized mathematical models are developed to analyze four members of the Wien family using Memristors. Sustained oscillation is reported for all types

  3. Generalized model of island biodiversity

    Science.gov (United States)

    Kessler, David A.; Shnerb, Nadav M.

    2015-04-01

    The dynamics of a local community of competing species with weak immigration from a static regional pool is studied. Implementing the generalized competitive Lotka-Volterra model with demographic noise, a rich dynamics with four qualitatively distinct phases is unfolded. When the overall interspecies competition is weak, the island species recapitulate the mainland species. For higher values of the competition parameter, the system still admits an equilibrium community, but now some of the mainland species are absent on the island. Further increase in competition leads to an intermittent "disordered" phase, where the dynamics is controlled by invadable combinations of species and the turnover rate is governed by the migration. Finally, the strong competition phase is glasslike, dominated by uninvadable states and noise-induced transitions. Our model contains, as a special case, the celebrated neutral island theories of Wilson-MacArthur and Hubbell. Moreover, we show that slight deviations from perfect neutrality may lead to each of the phases, as the Hubbell point appears to be quadracritical.
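
    The generalized competitive Lotka-Volterra dynamics at the heart of this record can be sketched deterministically. The following Python toy (parameter values are illustrative; the demographic noise that drives the paper's disordered and glassy phases is deliberately omitted) integrates a symmetric community with uniform competition strength c and weak immigration mu by Euler steps:

```python
def lv_step(x, c, mu, dt):
    """One Euler step of competitive Lotka-Volterra dynamics with
    weak immigration mu (demographic noise omitted in this sketch)."""
    total = sum(x)
    return [max(0.0, xi + dt * (xi * (1.0 - xi - c * (total - xi)) + mu))
            for xi in x]

def simulate(n_species, c, mu, dt=0.01, steps=20000):
    """Integrate the community from small initial abundances."""
    x = [0.01] * n_species
    for _ in range(steps):
        x = lv_step(x, c, mu, dt)
    return x
```

    In this noiseless limit, weak competition (small c) leaves all immigrating species coexisting near their single-species carrying capacity, corresponding to the paper's first phase; the intermittent and glassy phases require the stochastic terms not modeled here.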

  4. A General Framework for Portfolio Theory—Part I: Theory and Various Models

    Directory of Open Access Journals (Sweden)

    Stanislaus Maier-Paape

    2018-05-01

    Full Text Available Utility and risk are two often-competing measures of investment success. We show that the efficient trade-off between these two measures for investment portfolios happens, in general, on a convex curve in the two-dimensional space of utility and risk. This is a rather general pattern. The modern portfolio theory of Markowitz (1959) and the capital market pricing model of Sharpe (1964) are special cases of our general framework in which the risk measure is taken to be the standard deviation and the utility function is the identity mapping. Using our general framework, we also recover and extend the results of Rockafellar et al. (2006), which were already an extension of the capital market pricing model allowing for the use of more general deviation measures. This generalized capital asset pricing model also applies when, e.g., an approximation of the maximum drawdown is considered as a risk measure. Furthermore, the consideration of a general utility function allows for going beyond the “additive” performance measure to a “multiplicative” one of cumulative returns by using the log utility. As a result, the growth optimal portfolio theory of Lintner (1965) and the leverage space portfolio theory of Vince (2009) can also be understood and enhanced under our general framework. Thus, this general framework allows a unification of several important existing portfolio theories and goes far beyond. For simplicity of presentation, we phrase everything for a finite underlying probability space and a one-period market model, but generalizations to more complex structures are straightforward.

  5. Terrestrial pesticide exposure of amphibians: an underestimated cause of global decline?

    Science.gov (United States)

    Brühl, Carsten A; Schmidt, Thomas; Pieper, Silvia; Alscher, Annika

    2013-01-01

    Amphibians, a class of animals in global decline, are present in agricultural landscapes characterized by agrochemical inputs. The effects of pesticides on terrestrial life stages of amphibians such as juvenile and adult frogs, toads and newts are little understood, and a specific risk assessment for pesticide exposure, mandatory for other vertebrate groups, is currently not conducted. We studied the effects of seven pesticide products on juvenile European common frogs (Rana temporaria) in an agricultural overspray scenario. Mortality ranged from 100% after one hour to 40% after seven days at the recommended label rates of currently registered products. The demonstrated toxicity is alarming, and a large-scale negative effect of terrestrial pesticide exposure on amphibian populations seems likely. Terrestrial pesticide exposure might be underestimated as a driver of amphibian decline, calling for more attention in conservation efforts; the risk assessment procedures currently in place do not protect this vanishing animal group.

  6. Tilted Bianchi type I dust fluid cosmological model in general relativity

    Indian Academy of Sciences (India)

    Pramana – Journal of Physics, Volume 58, Issue 3. In this paper, we have investigated a tilted Bianchi type I cosmological model filled with dust of perfect fluid in general relativity. To get a determinate solution, we have assumed a condition  ...

  7. Generalized Linear Models in Vehicle Insurance

    Directory of Open Access Journals (Sweden)

    Silvie Kafková

    2014-01-01

    Full Text Available Actuaries in insurance companies try to find the best model for the estimation of insurance premiums. It depends on many risk factors, e.g. the car characteristics and the profile of the driver. In this paper, an analysis of a portfolio of vehicle insurance data using a generalized linear model (GLM) is performed. The main advantage of the approach presented in this article is that GLMs are not limited by inflexible preconditions. Our aim is to predict the dependence of annual claim frequency on given risk factors. Based on a large real-world sample of data from 57 410 vehicles, the present study proposed a classification analysis approach that addresses the selection of predictor variables. The models with different predictor variables are compared by analysis of deviance and the Akaike information criterion (AIC). Based on this comparison, the model giving the best estimate of annual claim frequency is chosen. All statistical calculations are computed in the R environment, which contains the stats package with the function for the estimation of GLM parameters and the function for analysis of deviance.
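
    The record's fit is done in R with the stats package; to show what such a fit actually solves, here is a hedged pure-Python Newton (IRLS) solver for a two-parameter Poisson log-linear model, log E[y] = b0 + b1·x, on synthetic data (a toy, not the study's 57 410-vehicle analysis):

```python
from math import exp

def poisson_irls(x, y, iters=50):
    """Fit log(E[y]) = b0 + b1*x by Newton-Raphson (equivalent to IRLS),
    as R's glm(..., family = poisson) does for the full model."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        mu = [exp(b0 + b1 * xi) for xi in x]
        # Score vector of the Poisson log-likelihood
        g0 = sum(yi - mi for yi, mi in zip(y, mu))
        g1 = sum((yi - mi) * xi for yi, mi, xi in zip(y, mu, x))
        # Fisher information (2x2), solved in closed form
        a = sum(mu)
        b = sum(mi * xi for mi, xi in zip(mu, x))
        d = sum(mi * xi * xi for mi, xi in zip(mu, x))
        det = a * d - b * b
        b0 += (d * g0 - b * g1) / det
        b1 += (-b * g0 + a * g1) / det
    return b0, b1
```

    With data whose means lie exactly on the model, the solver recovers the generating coefficients; real claim-frequency data would additionally need an exposure offset and categorical risk factors.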

  8. An Overview of Generalized Gamma Mittag–Leffler Model and Its Applications

    Directory of Open Access Journals (Sweden)

    Seema S. Nair

    2015-08-01

    Full Text Available Recently, probability models with thicker or thinner tails have gained more importance among statisticians and physicists because of their vast applications in random walks, Lévy flights, financial modeling, etc. In this connection, we introduce here a new family of generalized probability distributions associated with the Mittag–Leffler function. This family gives an extension to the generalized gamma family, opens up a vast area of potential applications and establishes connections to the topics of fractional calculus, nonextensive statistical mechanics, Tsallis statistics, superstatistics, the Mittag–Leffler stochastic process, the Lévy process and time series. Apart from examining the properties, the matrix-variate analogue and the connection to fractional calculus are also explained. By using the pathway model of Mathai, the model is further generalized. Connections to Mittag–Leffler distributions and corresponding autoregressive processes are also discussed.
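
    The special function underlying this family is the Mittag–Leffler function, E_{α,β}(z) = Σ_{k≥0} z^k / Γ(αk + β). A truncated-series sketch in Python (the truncation length is a pragmatic choice; math.gamma overflows for large arguments, so this is only suitable for moderate α and |z|):

```python
from math import gamma

def mittag_leffler(z, alpha, beta=1.0, terms=40):
    """Truncated series for the two-parameter Mittag-Leffler function
    E_{alpha,beta}(z) = sum_k z^k / Gamma(alpha*k + beta)."""
    return sum(z ** k / gamma(alpha * k + beta) for k in range(terms))
```

    Two classical special cases make good sanity checks: E_{1,1}(z) = e^z and E_{2,1}(z) = cosh(√z).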

  9. The General Aggression Model.

    Science.gov (United States)

    Allen, Johnie J; Anderson, Craig A; Bushman, Brad J

    2018-02-01

    The General Aggression Model (GAM) is a comprehensive, integrative framework for understanding aggression. It considers the role of social, cognitive, personality, developmental, and biological factors on aggression. Proximate processes of GAM detail how person and situation factors influence cognitions, feelings, and arousal, which in turn affect appraisal and decision processes, which in turn influence aggressive or nonaggressive behavioral outcomes. Each cycle of the proximate processes serves as a learning trial that affects the development and accessibility of aggressive knowledge structures. Distal processes of GAM detail how biological and persistent environmental factors can influence personality through changes in knowledge structures. GAM has been applied to understand aggression in many contexts including media violence effects, domestic violence, intergroup violence, temperature effects, pain effects, and the effects of global climate change. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Topics in conformal invariance and generalized sigma models

    International Nuclear Information System (INIS)

    Bernardo, L.M.; Lawrence Berkeley National Lab., CA

    1997-05-01

    This thesis consists of two different parts, having in common the fact that in both, conformal invariance plays a central role. In the first part, the author derives conditions for conformal invariance, in the large N limit, and for the existence of an infinite number of commuting classical conserved quantities, in the Generalized Thirring Model. The treatment uses the bosonized version of the model. Two different approaches are used to derive conditions for conformal invariance: the background field method and the Hamiltonian method based on an operator algebra, and the agreement between them is established. The author constructs two infinite sets of non-local conserved charges, by specifying either periodic or open boundary conditions, and he finds the Poisson Bracket algebra satisfied by them. A free field representation of the algebra satisfied by the relevant dynamical variables of the model is also presented, and the structure of the stress tensor in terms of free fields (and free currents) is studied in detail. In the second part, the author proposes a new approach for deriving the string field equations from a general sigma model on the world sheet. This approach leads to an equation which combines some of the attractive features of both the renormalization group method and the covariant beta function treatment of the massless excitations. It has the advantage of being covariant under a very general set of both local and non-local transformations in the field space. The author applies it to the tachyon, massless and first massive level, and shows that the resulting field equations reproduce the correct spectrum of a left-right symmetric closed bosonic string

  11. A Statistical Evaluation of Atmosphere-Ocean General Circulation Models: Complexity vs. Simplicity

    OpenAIRE

    Robert K. Kaufmann; David I. Stern

    2004-01-01

    The principal tools used to model future climate change are General Circulation Models which are deterministic high resolution bottom-up models of the global atmosphere-ocean system that require large amounts of supercomputer time to generate results. But are these models a cost-effective way of predicting future climate change at the global level? In this paper we use modern econometric techniques to evaluate the statistical adequacy of three general circulation models (GCMs) by testing thre...

  12. Analysis of dental caries using generalized linear and count regression models

    Directory of Open Access Journals (Sweden)

    Javali M. Phil

    2013-11-01

    Full Text Available Generalized linear models (GLMs) are generalizations of linear regression models that allow fitting regression models to response data, in all the sciences and especially the medical and dental sciences, that follow a general exponential family. These are a flexible and widely used class of models that can accommodate various response variables. Count data are frequently characterized by overdispersion and excess zeros. Zero-inflated count models provide a parsimonious yet powerful way to model this type of situation. Such models assume that the data are a mixture of two separate data-generation processes: one generates only zeros, and the other is either a Poisson or a negative binomial data-generating process. Zero-inflated count regression models such as the zero-inflated Poisson (ZIP) and zero-inflated negative binomial (ZINB) regression models have been used to handle dental caries count data with many zeros. We present an evaluation framework for the suitability of applying the GLM, Poisson, NB, ZIP and ZINB models to a dental caries data set where the count data may exhibit evidence of many zeros and over-dispersion. Estimation of the model parameters using the method of maximum likelihood is provided. Based on the Vuong test statistic and the goodness-of-fit measure for the dental caries data, the NB and ZINB regression models perform better than the other count regression models.
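
    The ZIP mixture described above has a simple probability mass function: a point mass at zero with probability π plus a Poisson(λ) component with probability 1 − π. A minimal Python sketch of the pmf (the regression versions in the paper additionally link λ and π to covariates, which is omitted here):

```python
from math import exp, factorial

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson pmf:
    P(0) = pi + (1-pi)*e^{-lam};  P(k) = (1-pi)*e^{-lam} lam^k / k!  for k>0."""
    poisson = exp(-lam) * lam ** k / factorial(k)
    return pi * (k == 0) + (1.0 - pi) * poisson
```

    The inflation parameter π directly raises the zero probability above what a Poisson with the same λ can produce, which is why ZIP fits caries counts with many zeros better than a plain Poisson.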

  13. Generalized additive model of air pollution to daily mortality

    International Nuclear Information System (INIS)

    Kim, J.; Yang, H.E.

    2005-01-01

    The association of air pollution with daily mortality due to cardiovascular disease, respiratory disease, and old age (65 or older) in Seoul, Korea was investigated in 1999 using daily values of TSP, PM10, O3, SO2, NO2, and CO. Generalized additive Poisson models were applied to allow for highly flexible fitting of daily trends in air pollution as well as nonlinear associations with meteorological variables such as temperature, humidity, and wind speed. To estimate the effect of air pollution and weather on mortality, LOESS smoothing was used in the generalized additive models. The findings suggest that air pollution levels significantly affect daily mortality. (orig.)

  14. Whole-word response scoring underestimates functional spelling ability for some individuals with global agraphia

    Directory of Open Access Journals (Sweden)

    Andrew Tesla Demarco

    2015-05-01

    These data suggest that conventional whole-word scoring may significantly underestimate functional spelling performance. Because by-letter scoring boosted pre-treatment scores to the same extent as post-treatment scores, the magnitude of treatment gains was no greater than estimates from conventional whole-word scoring. Nonetheless, the surprisingly large disparity between conventional whole-word scoring and by-letter scoring suggests that by-letter scoring methods may warrant further investigation. Because by-letter analyses may hold interest to others, we plan to make the software tool used in this study available on-line for use to researchers and clinicians at large.
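
    The contrast between the two scoring schemes can be made concrete. The functions below are hypothetical illustrations (position-by-position letter matching is one plausible by-letter rule; the study's exact scoring rule and software may differ):

```python
def whole_word_score(responses, targets):
    """Conventional scoring: a response earns credit only if the
    whole word is spelled correctly."""
    return sum(r == t for r, t in zip(responses, targets)) / len(targets)

def by_letter_score(responses, targets):
    """By-letter scoring: fraction of target letters produced in the
    correct position, credited even when the whole word is wrong."""
    correct = total = 0
    for r, t in zip(responses, targets):
        total += len(t)
        correct += sum(1 for i, ch in enumerate(t) if i < len(r) and r[i] == ch)
    return correct / total
```

    For a response set like ["cat", "dpg"] against targets ["cat", "dog"], whole-word scoring credits 50% while by-letter scoring credits 5 of 6 letters, mirroring the disparity the abstract reports.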

  15. Specific and General Human Capital in an Endogenous Growth Model

    OpenAIRE

    Evangelia Vourvachaki; Vahagn Jerbashian; : Sergey Slobodyan

    2014-01-01

    In this article, we define specific (general) human capital in terms of the occupations whose use is spread in a limited (wide) set of industries. We analyze the growth impact of an economy's composition of specific and general human capital, in a model where education and research and development are costly and complementary activities. The model suggests that a declining share of specific human capital, as observed in the Czech Republic, can be associated with a lower rate of long-term grow...

  16. Optimisation of a parallel ocean general circulation model

    OpenAIRE

    M. I. Beare; D. P. Stevens

    1997-01-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by...

  17. Modeling the brain morphology distribution in the general aging population

    Science.gov (United States)

    Huizinga, W.; Poot, D. H. J.; Roshchupkin, G.; Bron, E. E.; Ikram, M. A.; Vernooij, M. W.; Rueckert, D.; Niessen, W. J.; Klein, S.

    2016-03-01

    Both normal aging and neurodegenerative diseases such as Alzheimer's disease cause morphological changes of the brain. To better distinguish between normal and abnormal cases, it is necessary to model changes in brain morphology owing to normal aging. To this end, we developed a method for analyzing and visualizing these changes for the entire brain morphology distribution in the general aging population. The method is applied to 1000 subjects from a large population imaging study in the elderly, from which 900 were used to train the model and 100 were used for testing. The results of the 100 test subjects show that the model generalizes to subjects outside the model population. Smooth percentile curves showing the brain morphology changes as a function of age and spatiotemporal atlases derived from the model population are publicly available via an interactive web application at agingbrain.bigr.nl.

  18. Generalized Additive Models for Nowcasting Cloud Shading

    Czech Academy of Sciences Publication Activity Database

    Brabec, Marek; Paulescu, M.; Badescu, V.

    2014-01-01

    Roč. 101, March (2014), s. 272-282 ISSN 0038-092X R&D Projects: GA MŠk LD12009 Grant - others:European Cooperation in Science and Technology(XE) COST ES1002 Institutional support: RVO:67985807 Keywords : sunshine number * nowcasting * generalized additive model * Markov chain Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 3.469, year: 2014

  19. Dynamic generalized linear models for monitoring endemic diseases

    DEFF Research Database (Denmark)

    Lopes Antunes, Ana Carolina; Jensen, Dan; Hisham Beshara Halasa, Tariq

    2016-01-01

    The objective was to use a Dynamic Generalized Linear Model (DGLM), based on a binomial distribution with a linear trend, for monitoring the PRRS (Porcine Reproductive and Respiratory Syndrome) sero-prevalence in Danish swine herds. The DGLM was described and its performance for monitoring control and eradication programmes based on changes in PRRS sero-prevalence was explored. Results showed a declining trend in PRRS sero-prevalence between 2007 and 2014, suggesting that Danish herds are slowly eradicating PRRS. The simulation study demonstrated the flexibility of DGLMs in adapting to changes in trends in sero-prevalence. Based on this, it was possible to detect variations in the growth model component. This study is a proof-of-concept, demonstrating the use of DGLMs for monitoring endemic diseases. In addition, the principles stated might be useful in general research on monitoring and surveillance...
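
    The paper's DGLM is binomial with a linear trend component; as a much-simplified Gaussian stand-in (not the authors' method, and without the binomial observation model or discounting), a local-linear-trend Kalman filter run directly on observed sero-prevalence proportions illustrates how a trend state is tracked over time:

```python
def trend_filter(ys, q=1e-4, r=1e-2):
    """Kalman filter with a local linear trend (level + slope) state.
    q: state noise variance, r: observation noise variance (illustrative).
    Returns the filtered (level, trend) estimate at each time step."""
    level, trend = ys[0], 0.0
    p00, p01, p11 = 1.0, 0.0, 1.0  # state covariance entries
    estimates = []
    for y in ys:
        # Predict: level advances by the trend; trend is a random walk.
        level += trend
        p00, p01, p11 = p00 + 2 * p01 + p11 + q, p01 + p11, p11 + q
        # Update with the new observation of the level.
        s = p00 + r
        k0, k1 = p00 / s, p01 / s
        e = y - level
        level += k0 * e
        trend += k1 * e
        p00, p01, p11 = (1 - k0) * p00, (1 - k0) * p01, p11 - k1 * p01
        estimates.append((level, trend))
    return estimates
```

    Fed a steadily declining prevalence series, the filtered trend component turns negative and the level tracks the data, which is the qualitative behavior the DGLM exploits for detecting changes in sero-prevalence trends.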

  20. Performance evaluation of WAVEWATCH III model in the Persian Gulf using different wind resources

    Science.gov (United States)

    Kazeminezhad, Mohammad Hossein; Siadatmousavi, Seyed Mostafa

    2017-07-01

    The third-generation wave model, WAVEWATCH III, was employed to simulate bulk wave parameters in the Persian Gulf using three different wind sources: ERA-Interim, CCMP, and GFS-Analysis. Different formulations for whitecapping term and the energy transfer from wind to wave were used, namely the Tolman and Chalikov (J Phys Oceanogr 26:497-518, 1996), WAM cycle 4 (BJA and WAM4), and Ardhuin et al. (J Phys Oceanogr 40(9):1917-1941, 2010) (TEST405 and TEST451 parameterizations) source term packages. The obtained results from numerical simulations were compared to altimeter-derived significant wave heights and measured wave parameters at two stations in the northern part of the Persian Gulf through statistical indicators and the Taylor diagram. Comparison of the bulk wave parameters with measured values showed underestimation of wave height using all wind sources. However, the performance of the model was best when GFS-Analysis wind data were used. In general, when wind veering from southeast to northwest occurred, and wind speed was high during the rotation, the model underestimation of wave height was severe. Except for the Tolman and Chalikov (J Phys Oceanogr 26:497-518, 1996) source term package, which severely underestimated the bulk wave parameters during stormy condition, the performances of other formulations were practically similar. However, in terms of statistics, the Ardhuin et al. (J Phys Oceanogr 40(9):1917-1941, 2010) source terms with TEST405 parameterization were the most successful formulation in the Persian Gulf when compared to in situ and altimeter-derived observations.

  1. Application of Improved Radiation Modeling to General Circulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Michael J Iacono

    2011-04-07

    This research has accomplished its primary objectives of developing accurate and efficient radiation codes, validating them with measurements and higher resolution models, and providing these advancements to the global modeling community to enhance the treatment of cloud and radiative processes in weather and climate prediction models. A critical component of this research has been the development of the longwave and shortwave broadband radiative transfer code for general circulation model (GCM) applications, RRTMG, which is based on the single-column reference code, RRTM, also developed at AER. RRTMG is a rigorously tested radiation model that retains a considerable level of accuracy relative to higher resolution models and measurements despite the performance enhancements that have made it possible to apply this radiation code successfully to global dynamical models. This model includes the radiative effects of all significant atmospheric gases, and it treats the absorption and scattering from liquid and ice clouds and aerosols. RRTMG also includes a statistical technique for representing small-scale cloud variability, such as cloud fraction and the vertical overlap of clouds, which has been shown to improve cloud radiative forcing in global models. This development approach has provided a direct link from observations to the enhanced radiative transfer provided by RRTMG for application to GCMs. Recent comparison of existing climate model radiation codes with high resolution models has documented the improved radiative forcing capability provided by RRTMG, especially at the surface, relative to other GCM radiation models. Due to its high accuracy, its connection to observations, and its computational efficiency, RRTMG has been implemented operationally in many national and international dynamical models to provide validated radiative transfer for improving weather forecasts and enhancing the prediction of global climate change.

  2. Evaluation of high intensity precipitation from 16 Regional climate models over a meso-scale catchment in the Midlands Regions of England

    Science.gov (United States)

    Wetterhall, F.; He, Y.; Cloke, H.; Pappenberger, F.; Freer, J.; Wilson, M.; McGregor, G.

    2009-04-01

    Local flooding events are often triggered by high-intensity rainfall events, and it is important that these can be correctly modelled by Regional Climate Models (RCMs) if the results are to be used in climate impact assessment. In this study, daily precipitation from 16 RCMs was compared with observations over a meso-scale catchment in the Midlands Region of England. The RCM data were provided by the European research project ENSEMBLES and the precipitation observations by the UK Met Office. The RCMs were all driven by reanalysis data from the ERA40 dataset over the period 1961-2000. The ENSEMBLES data are on a spatial scale of 25 x 25 km and were disaggregated onto a 5 x 5 km grid over the catchment and compared with interpolated observational data of the same resolution. Mean precipitation was generally underestimated by the ENSEMBLES data, and the maximum and persistence of high-intensity rainfall were underestimated even more. The inter-annual variability was not fully captured by the RCMs, and there was a systematic underestimation of precipitation during the autumn months. The spatial pattern of the modelled precipitation was too smooth in comparison with the observed data, especially at the high altitudes in the western part of the catchment where the heaviest precipitation usually occurs. The RCM outputs cannot reproduce the current high-intensity precipitation events that are needed to adequately model extreme flood events. The results highlight the discrepancy between climate model output and the high-intensity precipitation input needed for hydrological impact modelling.

  3. Generalized Continuum: from Voigt to the Modeling of Quasi-Brittle Materials

    Directory of Open Access Journals (Sweden)

    Jamile Salim Fuina

    2010-12-01

    This article discusses the use of generalized continuum theories to incorporate the effects of the microstructure in the nonlinear finite element analysis of quasi-brittle materials and, thus, to solve mesh dependency problems. A description of the problem called numerically induced strain localization, often found in Finite Element Method material non-linear analysis, is presented. A brief history of models based on Generalized Continuum Mechanics is given, from the initial work of Voigt (1887) to more recent studies. By analyzing these models, it is observed that the Cosserat and microstretch approaches are particular cases of a general formulation that describes the micromorphic continuum. After reporting attempts to incorporate the material microstructure in models based on Classical Continuum Mechanics, the article shows the recent tendency of doing so according to the assumptions of Generalized Continuum Mechanics. Finally, it presents numerical results which make it possible to characterize this tendency as a promising way to solve the problem.

  4. Adaptation of a general circulation model to ocean dynamics

    Science.gov (United States)

    Turner, R. E.; Rees, T. H.; Woodbury, G. E.

    1976-01-01

    A primitive-variable general circulation model of the ocean was formulated in which fast external gravity waves are suppressed with rigid-lid surface constraint pressures, which also provide a means for simulating the effects of large-scale free-surface topography. The surface pressure method is simpler to apply than conventional stream function models, and the resulting model can be applied to both global ocean and limited-region situations. Strengths and weaknesses of the model are also presented.

  5. Relative efficiency of joint-model and full-conditional-specification multiple imputation when conditional models are compatible: The general location model.

    Science.gov (United States)

    Seaman, Shaun R; Hughes, Rachael A

    2018-06-01

    Estimating the parameters of a regression model of interest is complicated by missing data on the variables in that model. Multiple imputation is commonly used to handle these missing data. Joint model multiple imputation and full-conditional specification multiple imputation are known to yield imputed data with the same asymptotic distribution when the conditional models of full-conditional specification are compatible with that joint model. We show that this asymptotic equivalence of imputation distributions does not imply that joint model multiple imputation and full-conditional specification multiple imputation will also yield asymptotically equally efficient inference about the parameters of the model of interest, nor that they will be equally robust to misspecification of the joint model. When the conditional models used by full-conditional specification multiple imputation are linear, logistic and multinomial regressions, these are compatible with a restricted general location joint model. We show that multiple imputation using the restricted general location joint model can be substantially more asymptotically efficient than full-conditional specification multiple imputation, but this typically requires very strong associations between variables. When associations are weaker, the efficiency gain is small. Moreover, full-conditional specification multiple imputation is shown to be potentially much more robust than joint model multiple imputation using the restricted general location model to misspecification of that model when there is substantial missingness in the outcome variable.

  6. Retention of tritium in reference persons: a metabolic model. Derivation of parameters and application of the model to the general public and to workers

    International Nuclear Information System (INIS)

    Galeriu, D; Melintescu, A

    2010-01-01

    Tritium (³H) is a radioactive isotope of hydrogen that is ubiquitous in environmental and biological systems. Following debate on the human health risk from exposure to tritium, there have been claims that the current biokinetic model recommended by the International Commission on Radiological Protection (ICRP) may underestimate tritium doses. A new generic model for tritium in mammals, based on energy metabolism and body composition, together with all its input data, has been described in a recent paper and successfully tested for farm and laboratory mammals. That model considers only dietary intake of tritium and was extended to humans. This paper presents the latest development of the human model with explicit consideration of brain energy metabolism. Model testing with human experimental data on organically bound tritium (OBT) in urine after tritiated water (HTO) or OBT intakes is presented. Predicted absorbed doses show a moderate increase for OBT intakes compared with doses recommended by the ICRP. Infants have higher tritium retention, a factor of 2 longer than the ICRP estimate. The highest tritium concentration is in adipose tissue, which has a very low radiobiological sensitivity. The ranges of uncertainty in retention and doses are investigated. The advantage of the new model is its ability to be applied to the interpretation of bioassay data.

  7. Retention of tritium in reference persons: a metabolic model. Derivation of parameters and application of the model to the general public and to workers

    Energy Technology Data Exchange (ETDEWEB)

    Galeriu, D; Melintescu, A, E-mail: galdan@ifin.nipne.r, E-mail: dangaler@yahoo.co [' Horia Hulubei' National Institute for Physics and Nuclear Engineering, Department of Life and Environmental Physics, 407 Atomistilor Street, Bucharest-Magurele, POB MG-6, RO-077125 (Romania)

    2010-09-15

    Tritium (³H) is a radioactive isotope of hydrogen that is ubiquitous in environmental and biological systems. Following debate on the human health risk from exposure to tritium, there have been claims that the current biokinetic model recommended by the International Commission on Radiological Protection (ICRP) may underestimate tritium doses. A new generic model for tritium in mammals, based on energy metabolism and body composition, together with all its input data, has been described in a recent paper and successfully tested for farm and laboratory mammals. That model considers only dietary intake of tritium and was extended to humans. This paper presents the latest development of the human model with explicit consideration of brain energy metabolism. Model testing with human experimental data on organically bound tritium (OBT) in urine after tritiated water (HTO) or OBT intakes is presented. Predicted absorbed doses show a moderate increase for OBT intakes compared with doses recommended by the ICRP. Infants have higher tritium retention, a factor of 2 longer than the ICRP estimate. The highest tritium concentration is in adipose tissue, which has a very low radiobiological sensitivity. The ranges of uncertainty in retention and doses are investigated. The advantage of the new model is its ability to be applied to the interpretation of bioassay data.

  8. Longitudinal beta-binomial modeling using GEE for overdispersed binomial data.

    Science.gov (United States)

    Wu, Hongqian; Zhang, Ying; Long, Jeffrey D

    2017-03-15

    Longitudinal binomial data are frequently generated from multiple questionnaires and assessments in various scientific settings, and these data are often overdispersed. The standard generalized linear mixed effects model may severely underestimate the standard errors of estimated regression parameters in such cases and hence potentially bias the statistical inference. In this paper, we propose a longitudinal beta-binomial model for overdispersed binomial data and estimate the regression parameters under a probit model using the generalized estimating equation method. A hybrid algorithm combining Fisher scoring and the method of moments is implemented for the computation. Extensive simulation studies are conducted to justify the validity of the proposed method. Finally, the proposed method is applied to analyze functional impairment in subjects who are at risk of Huntington disease, using data from a multisite observational study of prodromal Huntington disease. Copyright © 2016 John Wiley & Sons, Ltd.
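
    As a toy illustration of the overdispersion that motivates the beta-binomial model (not the authors' GEE estimator), the sketch below simulates clustered binomial data with beta-distributed success probabilities and recovers the intra-cluster correlation from the beta-binomial variance inflation factor 1 + (n-1)ρ. All parameters are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 500, 20                       # clusters, trials per cluster
a, b = 2.0, 3.0                      # beta parameters -> mean 0.4, rho = 1/(a+b+1)
p = rng.beta(a, b, size=m)           # cluster-specific success probabilities
y = rng.binomial(n, p)               # beta-binomial counts

p_hat = y.mean() / n
binom_var = n * p_hat * (1 - p_hat)  # variance if the data were plain binomial
obs_var = y.var(ddof=1)              # observed (overdispersed) variance
# Beta-binomial variance: n*p*(1-p) * (1 + (n-1)*rho), so invert for rho
rho_hat = (obs_var / binom_var - 1) / (n - 1)
```

    A naive binomial fit would use `binom_var` and hence understate standard errors by roughly the square root of the inflation factor.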

  9. Working covariance model selection for generalized estimating equations.

    Science.gov (United States)

    Carey, Vincent J; Wang, You-Gan

    2011-11-20

    We investigate methods for data-based selection of working covariance models in the analysis of correlated data with generalized estimating equations. We study two selection criteria: Gaussian pseudolikelihood and a geodesic distance based on discrepancy between model-sensitive and model-robust regression parameter covariance estimators. The Gaussian pseudolikelihood is found in simulation to be reasonably sensitive for several response distributions and noncanonical mean-variance relations for longitudinal data. Application is also made to a clinical dataset. Assessment of adequacy of both correlation and variance models for longitudinal data should be routine in applications, and we describe open-source software supporting this practice. Copyright © 2011 John Wiley & Sons, Ltd.
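
    The Gaussian pseudolikelihood criterion described above can be sketched as scoring per-cluster residual vectors under candidate working correlation matrices and keeping the structure with the higher value. This minimal numpy version uses synthetic residuals and unit variances; it is an illustration of the idea, not the authors' software.

```python
import numpy as np

def gaussian_pseudologlik(resid, R):
    """Sum of multivariate-normal log-densities of per-cluster residual
    vectors (rows of resid) under working correlation R, unit variances."""
    k = R.shape[0]
    Rinv = np.linalg.inv(R)
    _, logdet = np.linalg.slogdet(R)
    quad = np.einsum("ij,jk,ik->i", resid, Rinv, resid)   # quadratic form per cluster
    return -0.5 * (resid.shape[0] * (k * np.log(2 * np.pi) + logdet) + quad.sum())

rng = np.random.default_rng(3)
k, n = 4, 300                                 # cluster size, number of clusters
rho_true = 0.5
R_true = (1 - rho_true) * np.eye(k) + rho_true
L = np.linalg.cholesky(R_true)
resid = rng.standard_normal((n, k)) @ L.T     # exchangeable-correlated residuals

ll_ind = gaussian_pseudologlik(resid, np.eye(k))                  # independence
ll_exch = gaussian_pseudologlik(resid, (1 - 0.5) * np.eye(k) + 0.5)  # exchangeable
```

    With truly exchangeable residuals, the exchangeable working model attains the higher pseudolikelihood and would be selected.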

  10. Accuracy and bias of ICT self-efficacy: an empirical study into students' over- and underestimation of their ICT competences

    NARCIS (Netherlands)

    Aesaert, K.; Voogt, J.; Kuiper, E.; van Braak, J.

    2017-01-01

    Most studies on the assessment of ICT competences use measures of ICT self-efficacy. These studies are often accused that they suffer from self-reported bias, i.e. students can over- and/or underestimate their ICT competences. As such, taking bias and accuracy of ICT self-efficacy into account,

  11. On a generalized Dirac oscillator interaction for the nonrelativistic limit 3D generalized SUSY model oscillator Hamiltonian of Celka and Hussin

    International Nuclear Information System (INIS)

    Jayaraman, Jambunatha; Lima Rodrigues, R. de

    1994-01-01

    In the context of the 3D generalized SUSY model oscillator Hamiltonian of Celka and Hussin (CH), a generalized Dirac oscillator interaction is studied that leads, in the non-relativistic limit considered for both signs of energy, to CH's generalized 3D SUSY oscillator. The relevance of this interaction to CH's SUSY model and to the SUSY breaking dependent on the Wigner parameter is brought out. (author). 6 refs

  12. Optimal Designs for the Generalized Partial Credit Model

    OpenAIRE

    Bürkner, Paul-Christian; Schwabe, Rainer; Holling, Heinz

    2018-01-01

    Analyzing ordinal data becomes increasingly important in psychology, especially in the context of item response theory. The generalized partial credit model (GPCM) is probably the most widely used ordinal model and finds application in many large scale educational assessment studies such as PISA. In the present paper, optimal test designs are investigated for estimating persons' abilities with the GPCM for calibrated tests when item parameters are known from previous studies. We will derive t...

  13. Stability analysis for a general age-dependent vaccination model

    International Nuclear Information System (INIS)

    El Doma, M.

    1995-05-01

    An SIR epidemic model with a general age-dependent vaccination strategy is investigated when the fertility, mortality and removal rates depend on age. We give threshold criteria for the existence of equilibria and perform a stability analysis. Furthermore, a critical vaccination coverage that is sufficient to eradicate the disease is determined. (author). 12 refs
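
    The paper's threshold is age-structured, but in the simplest age-independent SIR special case the critical vaccination coverage reduces to the familiar expression 1 - 1/R0, which the sketch below implements. This is a textbook simplification, not the paper's age-dependent criterion.

```python
def critical_coverage(r0: float) -> float:
    """Minimum vaccinated fraction p so that the effective reproduction
    number r0 * (1 - p) drops below 1 (age-independent SIR case)."""
    return 0.0 if r0 <= 1 else 1 - 1 / r0

print(critical_coverage(4.0))   # a disease with R0 = 4 needs 75% coverage
```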

  14. Estimation of group means when adjusting for covariates in generalized linear models.

    Science.gov (United States)

    Qu, Yongming; Luo, Junxiang

    2015-01-01

    Generalized linear models are commonly used to analyze categorical data such as binary, count, and ordinal outcomes. Adjusting for important prognostic factors or baseline covariates in generalized linear models may improve estimation efficiency. The model-based mean for a treatment group produced by most software packages estimates the response at the mean covariate, not the mean response for this treatment group in the studied population. Although this is not an issue for linear models, the model-based group mean estimates in generalized linear models can be seriously biased for the true group means. We propose a new method to estimate the group means consistently, with a corresponding variance estimator. Simulation showed that the proposed method produces an unbiased estimator for the group means and provides the correct coverage probability. The proposed method was applied to analyze hypoglycemia data from clinical trials in diabetes. Copyright © 2014 John Wiley & Sons, Ltd.
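
    The bias described above is essentially Jensen's inequality applied to the nonlinear inverse link: the response at the mean covariate differs from the mean response. A small simulated logistic example (hypothetical coefficients, not the paper's data) makes it concrete:

```python
import numpy as np

def expit(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(2)
x = rng.normal(0.0, 2.0, size=10_000)   # baseline covariate for one group
beta0, beta1 = -1.0, 1.0                # hypothetical fitted coefficients

# "Model-based" group mean: response evaluated at the mean covariate
mean_at_mean_cov = expit(beta0 + beta1 * x.mean())
# Population-average group mean: average of per-subject responses
true_group_mean = expit(beta0 + beta1 * x).mean()
```

    With a symmetric covariate below the logistic midpoint, the curvature of the inverse link makes the at-mean-covariate estimate noticeably smaller than the population-average mean.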

  15. A Generalized Partial Credit Model: Application of an EM Algorithm.

    Science.gov (United States)

    Muraki, Eiji

    1992-01-01

    The partial credit model with a varying slope parameter is developed and called the generalized partial credit model (GPCM). Analysis results for simulated data by this and other polytomous item-response models demonstrate that the rating formulation of the GPCM is adaptable to the analysis of polytomous item responses. (SLD)
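
    Under the GPCM, the probability of responding in category h is a ratio of exponentiated cumulative sums of slope-weighted step terms a(θ - b_v). A minimal sketch with hypothetical item parameters (the formula is the standard one; the numbers are made up):

```python
import numpy as np

def gpcm_probs(theta, a, b):
    """Category probabilities for one GPCM item.
    theta: ability; a: slope; b: step parameters b_1..b_m.
    Returns probabilities for categories 0..m."""
    steps = a * (theta - np.asarray(b))
    z = np.concatenate([[0.0], np.cumsum(steps)])  # cumulative numerator exponents
    z -= z.max()                                   # numerical stability
    p = np.exp(z)
    return p / p.sum()

p = gpcm_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.5])
```

    For this item the middle categories dominate near θ = 0.5, as expected from step parameters straddling that ability level.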

  16. Interacting holographic dark energy models: a general approach

    Science.gov (United States)

    Som, S.; Sil, A.

    2014-08-01

    Dark energy models inspired by the cosmological holographic principle are studied in homogeneous isotropic spacetime with a general choice for the dark energy density. Special choices of the parameters enable us to obtain three different holographic models, including the holographic Ricci dark energy (RDE) model. The effect of interaction between dark matter and dark energy on the dynamics of those models is investigated for different popular forms of the interaction. It is found that crossing of the phantom divide can be avoided in RDE models for β > 0.5 irrespective of the presence of interaction. A choice of α = 1 and β = 2/3 leads to a varying Λ-like model introducing an IR cutoff length Λ^(-1/2). It is concluded that among the popular choices an interaction of the form Q ∝ Hρ_m suits best for avoiding the coincidence problem in this model.

  17. General Potential-Current Model and Validation for Electrocoagulation

    International Nuclear Information System (INIS)

    Dubrawski, Kristian L.; Du, Codey; Mohseni, Madjid

    2014-01-01

    A model relating potential and current in continuous parallel plate iron electrocoagulation (EC) was developed for application in drinking water treatment. The general model can be applied to any EC parallel plate system relying only on geometric and tabulated input variables, without the need for system-specific experimentally derived constants. For the theoretical model, the anode and cathode were vertically divided into n equipotential segments in a single pass, upflow, and adiabatic EC reactor. Potential and energy balances were simultaneously solved at each vertical segment, which included the contribution of ionic concentrations, solution temperature and conductivity, cathodic hydrogen flux, and gas/liquid ratio. We experimentally validated the numerical model with a vertical upflow EC reactor using a 24 cm height 99.99% pure iron anode divided into twelve 2 cm segments. Individual experimental currents from each segment were summed to determine total current and compared with the theoretically derived value. Several key variables were studied to determine their impact on model accuracy: solute type, solute concentration, current density, flow rate, inter-electrode gap, and electrode surface condition. Model results were in good agreement with experimental values at cell potentials of 2-20 V (corresponding to a current density range of approximately 50-800 A/m²), with a mean relative deviation of 9% for low flow rate, narrow electrode gap, polished electrodes, and 150 mg/L NaCl. The highest deviation occurred with a large electrode gap, unpolished electrodes, and Na₂SO₄ electrolyte, due to parasitic H₂O oxidation and a less-than-unity current efficiency. This is the first general model which can be applied to any parallel plate EC system for accurate electrochemical voltage or current prediction.

  18. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part I: Single layer cloud

    Energy Technology Data Exchange (ETDEWEB)

    Klein, Stephen A.; McCoy, Renata B.; Morrison, Hugh; Ackerman, Andrew S.; Avramov, Alexander; de Boer, Gijs; Chen, Mingxuan; Cole, Jason N.S.; Del Genio, Anthony D.; Falk, Michael; Foster, Michael J.; Fridlind, Ann; Golaz, Jean-Christophe; Hashino, Tempei; Harrington, Jerry Y.; Hoose, Corinna; Khairoutdinov, Marat F.; Larson, Vincent E.; Liu, Xiaohong; Luo, Yali; McFarquhar, Greg M.; Menon, Surabi; Neggers, Roel A. J.; Park, Sungsu; Poellot, Michael R.; Schmidt, Jerome M.; Sednev, Igor; Shipway, Ben J.; Shupe, Matthew D.; Spangenberg, Douglas A.; Sud, Yogesh C.; Turner, David D.; Veron, Dana E.; von Salzen, Knut; Walker, Gregory K.; Wang, Zhien; Wolf, Audrey B.; Xie, Shaocheng; Xu, Kuan-Man; Yang, Fanglin; Zhang, Gong

    2009-02-02

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a cold-air outbreak mixed-phase stratocumulus cloud observed during the Atmospheric Radiation Measurement (ARM) program's Mixed-Phase Arctic Cloud Experiment. The observed cloud occurred in a well-mixed boundary layer with a cloud top temperature of -15 °C. The observed average liquid water path of around 160 g m⁻² was about two-thirds of the adiabatic value and much greater than the average mass of ice crystal precipitation, which, when integrated from the surface to cloud top, was around 15 g m⁻². The simulations were performed by seventeen single-column models (SCMs) and nine cloud-resolving models (CRMs). While the simulated ice water path is generally consistent with the observed values, the median SCM and CRM liquid water path is a factor of three smaller than observed. Results from a sensitivity study in which models removed ice microphysics suggest that in many models the interaction between liquid- and ice-phase microphysics is responsible for the large model underestimate of liquid water path. Despite this general underestimate, the simulated liquid and ice water paths of several models are consistent with the observed values. Furthermore, there is evidence that models with more sophisticated microphysics simulate liquid and ice water paths that are in better agreement with the observed values, although considerable scatter is also present. Although no single factor guarantees a good simulation, these results emphasize the need for improvement in the model representation of mixed-phase microphysics.

  19. Is oral cancer incidence among patients with oral lichen planus/oral lichenoid lesions underestimated?

    Science.gov (United States)

    Gonzalez-Moles, M A; Gil-Montoya, J A; Ruiz-Avila, I; Bravo, M

    2017-02-01

    Oral lichen planus (OLP) and oral lichenoid lesions (OLL) are considered potentially malignant disorders with a cancer incidence of around 1% of cases, although this estimation is controversial. The aim of this study was to analyze the cancer incidence in a case series of patients with OLP and OLL and to explore clinicopathological aspects that may cause underestimation of the cancer incidence in these diseases. A retrospective study was conducted of 102 patients diagnosed with OLP (n = 21, 20.58%) or OLL (n = 81) between January 2006 and January 2016. Patients were informed of the risk of malignant transformation and followed up annually. The number of sessions programmed for each patient was compared with the number actually attended. Follow-up was classified as complete (100% attendance), good (75-99%), moderate (25-74%), or poor (<25% attendance) compliance. Cancer developed in four patients (3.9%), three males and one female. One of these developed three carcinomas, which were diagnosed at follow-up visits (two in the lower gingiva, one in the floor of the mouth); one had OLL and the other three had OLP. The carcinoma developed in mucosal areas with no OLP or OLL involvement in three of these patients, while OLP and cancer were diagnosed simultaneously in the fourth. Of the six carcinomas diagnosed, five (83.3%) were T1 and one (16.7%) T2. None were N+, and all patients remain alive and disease-free. The cancer incidence in OLP and OLL appears to be underestimated due to the strict exclusion criteria usually imposed. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  20. A General Business Model for Marine Reserves

    Science.gov (United States)

    Sala, Enric; Costello, Christopher; Dougherty, Dawn; Heal, Geoffrey; Kelleher, Kieran; Murray, Jason H.; Rosenberg, Andrew A.; Sumaila, Rashid

    2013-01-01

    Marine reserves are an effective tool for protecting biodiversity locally, with potential economic benefits including enhancement of local fisheries, increased tourism, and maintenance of ecosystem services. However, fishing communities often fear short-term income losses associated with closures, and thus may oppose marine reserves. Here we review empirical data and develop bioeconomic models to show that the value of marine reserves (enhanced adjacent fishing + tourism) may often exceed the pre-reserve value, and that economic benefits can offset the costs in as little as five years. These results suggest the need for a new business model for creating and managing reserves, which could pay for themselves and turn a profit for stakeholder groups. Our model could be expanded to include ecosystem services and other benefits, and it provides a general framework to estimate costs and benefits of reserves and to develop such business models. PMID:23573192

  1. Can we determine what controls the spatio-temporal distribution of d-excess and 17O-excess in precipitation using the LMDZ general circulation model?

    Directory of Open Access Journals (Sweden)

    C. Risi

    2013-09-01

    Combined measurements of the H₂¹⁸O and HDO isotopic ratios in precipitation, leading to the second-order parameter d-excess, have provided additional constraints on past climates compared to the H₂¹⁸O isotopic ratio alone. More recently, measurements of H₂¹⁷O have led to another second-order parameter: ¹⁷O-excess. Recent studies suggest that ¹⁷O-excess in polar ice may provide information on evaporative conditions at the moisture source. However, the processes controlling the spatio-temporal distribution of ¹⁷O-excess are still far from fully understood. We use the isotopic general circulation model (GCM) LMDZ to better understand what controls d-excess and ¹⁷O-excess in precipitation at present day (PD) and during the last glacial maximum (LGM). The simulation of d-excess and ¹⁷O-excess is evaluated against measurements in meteoric water, water vapor and polar ice cores. A set of sensitivity tests and diagnostics is used to quantify the relative effects of evaporative conditions (sea surface temperature and relative humidity), Rayleigh distillation, mixing between vapors from different origins, precipitation re-evaporation and supersaturation during condensation at low temperature. In LMDZ, simulations suggest that in the tropics convective processes and rain re-evaporation are important controls on precipitation d-excess and ¹⁷O-excess. At higher latitudes, the effects of distillation, mixing between vapors from different origins and supersaturation are the most important controls. For example, the lower d-excess and ¹⁷O-excess simulated at LGM are mainly due to the supersaturation effect. The effect of supersaturation is, however, very sensitive to a parameter whose tuning would require more measurements and laboratory experiments. Evaporative conditions had previously been suggested to be key controlling factors of d-excess and ¹⁷O-excess, but LMDZ underestimates their role. More generally, some shortcomings in the simulation of ¹⁷O

  2. Comparison of body composition between fashion models and women in general

    OpenAIRE

    Park, Sunhee

    2017-01-01

    [Purpose] The present study compared the physical characteristics and body composition of professional fashion models and women in general, utilizing the skinfold test. [Methods] The research sample consisted of 90 professional fashion models presently active in Korea and 100 females in the general population, all selected through convenience sampling. Measurement was done following standardized methods and procedures set by the International Society for the Advancement of Kinanthropometry. B...

  3. The Generalized Quantum Episodic Memory Model.

    Science.gov (United States)

    Trueblood, Jennifer S; Hemmer, Pernille

    2017-11-01

    Recent evidence suggests that experienced events are often mapped to too many episodic states, including those that are logically or experimentally incompatible with one another. For example, episodic over-distribution patterns show that the probability of accepting an item under different mutually exclusive conditions violates the disjunction rule. A related example, called subadditivity, occurs when the probability of accepting an item under mutually exclusive and exhaustive instruction conditions sums to a number >1. Both the over-distribution effect and subadditivity have been widely observed in item and source-memory paradigms. These phenomena are difficult to explain using standard memory frameworks, such as signal-detection theory. A dual-trace model called the over-distribution (OD) model (Brainerd & Reyna, 2008) can explain the episodic over-distribution effect, but not subadditivity. Our goal is to develop a model that can explain both effects. In this paper, we propose the Generalized Quantum Episodic Memory (GQEM) model, which extends the Quantum Episodic Memory (QEM) model developed by Brainerd, Wang, and Reyna (2013). We test GQEM by comparing it to the OD model using data from a novel item-memory experiment and a previously published source-memory experiment (Kellen, Singmann, & Klauer, 2014) examining the over-distribution effect. Using the best-fit parameters from the over-distribution experiments, we conclude by showing that the GQEM model can also account for subadditivity. Overall these results add to a growing body of evidence suggesting that quantum probability theory is a valuable tool in modeling recognition memory. Copyright © 2016 Cognitive Science Society, Inc.

  4. The algebra of the general Markov model on phylogenetic trees and networks.

    Science.gov (United States)

    Sumner, J G; Holland, B R; Jarvis, P D

    2012-04-01

    It is known that the Kimura 3ST model of sequence evolution on phylogenetic trees can be extended quite naturally to arbitrary split systems. However, this extension relies heavily on mathematical peculiarities of the associated Hadamard transformation, and providing an analogous augmentation of the general Markov model has thus far been elusive. In this paper, we rectify this shortcoming by showing how to extend the general Markov model on trees to include incompatible edges; and even further to more general network models. This is achieved by exploring the algebra of the generators of the continuous-time Markov chain together with the “splitting” operator that generates the branching process on phylogenetic trees. For simplicity, we proceed by discussing the two state case and then show that our results are easily extended to more states with little complication. Intriguingly, upon restriction of the two state general Markov model to the parameter space of the binary symmetric model, our extension is indistinguishable from the Hadamard approach only on trees; as soon as any incompatible splits are introduced the two approaches give rise to differing probability distributions with disparate structure. Through exploration of a simple example, we give an argument that our extension to more general networks has desirable properties that the previous approaches do not share. In particular, our construction allows for convergent evolution of previously divergent lineages; a property that is of significant interest for biological applications.
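
    For concreteness, the two-state general Markov model on the simplest structure (a root joining two leaves) assigns leaf-pattern probabilities by summing over the unobserved root state. The sketch below uses made-up root and transition parameters; the paper's algebraic extension to splits and networks builds on this basic construction.

```python
import numpy as np

pi = np.array([0.6, 0.4])                 # root state distribution
M1 = np.array([[0.9, 0.1], [0.2, 0.8]])   # root -> leaf 1 transition matrix
M2 = np.array([[0.85, 0.15], [0.3, 0.7]]) # root -> leaf 2 transition matrix

# Joint leaf-pattern probabilities:
#   P[x1, x2] = sum_r pi[r] * M1[r, x1] * M2[r, x2]
P = np.einsum("r,ri,rj->ij", pi, M1, M2)
```

    Because `pi` sums to one and each transition matrix is row-stochastic, `P` is a valid joint distribution over the four leaf patterns.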

  5. Comparison of nonstationary generalized logistic models based on Monte Carlo simulation

    Directory of Open Access Journals (Sweden)

    S. Kim

    2015-06-01

    Full Text Available Recently, evidence of climate change has been observed in hydrologic data such as rainfall and flow data. The time-dependent characteristics of statistics in hydrologic data are widely defined as nonstationarity. Therefore, various nonstationary GEV and generalized Pareto models have been suggested for frequency analysis of nonstationary annual maximum and POT (peak-over-threshold) data, respectively. However, alternative models are required for nonstationary frequency analysis to capture the complex characteristics of nonstationary data under climate change. This study proposes a nonstationary generalized logistic model including time-dependent parameters. The parameters of the proposed model are estimated using the method of maximum likelihood based on the Newton-Raphson method. In addition, the proposed model is compared with existing models by Monte Carlo simulation to investigate the characteristics of the models and their applicability.
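The time-dependent-parameter idea in this record can be sketched in code. The following is a minimal illustration, not the authors' model or code: it fits a logistic distribution (the zero-shape special case of the generalized logistic family) with a linear trend in the location parameter by maximum likelihood. A general-purpose simplex optimizer stands in for the Newton-Raphson scheme named in the abstract, and all data and parameter values are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic annual-maximum series with a linear trend in the location parameter.
rng = np.random.default_rng(42)
t = np.arange(200)
true_mu0, true_mu1, true_scale = 100.0, 0.8, 10.0
x = rng.logistic(loc=true_mu0 + true_mu1 * t, scale=true_scale)

def neg_log_lik(theta):
    """Negative log-likelihood of a logistic model with location mu0 + mu1*t."""
    mu0, mu1, log_s = theta
    s = np.exp(log_s)                       # parameterize log-scale so s > 0
    z = (x - (mu0 + mu1 * t)) / s
    # logistic log-density: -z - 2*log(1 + exp(-z)) - log(s)
    return np.sum(z + 2.0 * np.logaddexp(0.0, -z) + np.log(s))

res = minimize(neg_log_lik,
               x0=[x.mean(), 0.0, np.log(x.std())],
               method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-8, "fatol": 1e-8})
mu0_hat, mu1_hat = res.x[0], res.x[1]
scale_hat = np.exp(res.x[2])
```

With a nonstationary fit in hand, the time-varying quantiles (e.g. the "100-year" level for a given year) follow directly from the fitted location trend.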

  6. Volatility forecasting with the wavelet transformation algorithm GARCH model: Evidence from African stock markets

    Directory of Open Access Journals (Sweden)

    Mohd Tahir Ismail

    2016-06-01

    Full Text Available The daily returns of four African countries' stock market indices for the period January 2, 2000, to December 31, 2014, were employed to compare the GARCH(1,1) model and a newly proposed Maximal Overlap Discrete Wavelet Transform (MODWT-GARCH(1,1)) model. The results showed that although both models fit the returns data well, the forecast produced by the GARCH(1,1) model underestimates the observed returns, whereas the newly proposed MODWT-GARCH(1,1) model generates an accurate forecast of the observed returns. The results generally showed that the newly proposed MODWT-GARCH(1,1) model best fits the returns series for these African countries. Hence the proposed MODWT-GARCH should be applied in other contexts to further verify its validity.
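For readers unfamiliar with the base model being compared here, the GARCH(1,1) conditional-variance recursion can be simulated in a few lines. This is a generic sketch with illustrative parameter values, not the paper's fitted model, and the wavelet (MODWT) stage is omitted entirely:

```python
import numpy as np

# GARCH(1,1): sigma2[t] = omega + alpha*eps[t-1]^2 + beta*sigma2[t-1]
rng = np.random.default_rng(0)
omega, alpha, beta = 0.05, 0.08, 0.90   # illustrative; alpha + beta < 1 for stationarity
n = 1000
eps = np.empty(n)
sigma2 = np.empty(n)
sigma2[0] = omega / (1.0 - alpha - beta)   # unconditional variance
eps[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, n):
    sigma2[t] = omega + alpha * eps[t - 1] ** 2 + beta * sigma2[t - 1]
    eps[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# One-step-ahead conditional-variance forecast from the last observation
forecast = omega + alpha * eps[-1] ** 2 + beta * sigma2[-1]
```

In the MODWT variant the returns series would first be decomposed into wavelet scales, with the recursion applied to the reconstructed components.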

  7. Underestimating nearby nature: affective forecasting errors obscure the happy path to sustainability.

    Science.gov (United States)

    Nisbet, Elizabeth K; Zelenski, John M

    2011-09-01

    Modern lifestyles disconnect people from nature, and this may have adverse consequences for the well-being of both humans and the environment. In two experiments, we found that although outdoor walks in nearby nature made participants much happier than indoor walks did, participants made affective forecasting errors, such that they systematically underestimated nature's hedonic benefit. The pleasant moods experienced on outdoor nature walks facilitated a subjective sense of connection with nature, a construct strongly linked with concern for the environment and environmentally sustainable behavior. To the extent that affective forecasts determine choices, our findings suggest that people fail to maximize their time in nearby nature and thus miss opportunities to increase their happiness and relatedness to nature. Our findings suggest a happy path to sustainability, whereby contact with nature fosters individual happiness and environmentally responsible behavior.

  8. Explained variation and predictive accuracy in general parametric statistical models: the role of model misspecification

    DEFF Research Database (Denmark)

    Rosthøj, Susanne; Keiding, Niels

    2004-01-01

    When studying a regression model measures of explained variation are used to assess the degree to which the covariates determine the outcome of interest. Measures of predictive accuracy are used to assess the accuracy of the predictions based on the covariates and the regression model. We give a detailed and general introduction to the two measures and the estimation procedures. The framework we set up allows for a study of the effect of misspecification on the quantities estimated. We also introduce a generalization to survival analysis.

  9. Tundra water budget and implications of precipitation underestimation.

    Science.gov (United States)

    Liljedahl, Anna K; Hinzman, Larry D; Kane, Douglas L; Oechel, Walter C; Tweedie, Craig E; Zona, Donatella

    2017-08-01

    Difficulties in obtaining accurate precipitation measurements have limited meaningful hydrologic assessment for over a century due to performance challenges of conventional snowfall and rainfall gauges in windy environments. Here, we compare snowfall observations and bias adjusted snowfall to end-of-winter snow accumulation measurements on the ground for 16 years (1999-2014) and assess the implication of precipitation underestimation on the water balance for a low-gradient tundra wetland near Utqiagvik (formerly Barrow), Alaska (2007-2009). In agreement with other studies, and not accounting for sublimation, conventional snowfall gauges captured 23-56% of end-of-winter snow accumulation. Once snowfall and rainfall are bias adjusted, long-term annual precipitation estimates more than double (from 123 to 274 mm), highlighting the risk of studies using conventional or unadjusted precipitation that dramatically under-represent water balance components. Applying conventional precipitation information to the water balance analysis produced consistent storage deficits (79 to 152 mm) that were all larger than the largest actual deficit (75 mm), which was observed in the unusually low rainfall summer of 2007. Year-to-year variability in adjusted rainfall (±33 mm) was larger than evapotranspiration (±13 mm). Measured interannual variability in partitioning of snow into runoff (29% in 2008 to 68% in 2009) in years with similar end-of-winter snow accumulation (180 and 164 mm, respectively) highlights the importance of the previous summer's rainfall (25 and 60 mm, respectively) on spring runoff production. Incorrect representation of precipitation can therefore have major implications for Arctic water budget descriptions that in turn can alter estimates of carbon and energy fluxes.
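The budget arithmetic behind this record's storage-deficit argument is simple enough to state explicitly. The sketch below uses the annual precipitation totals quoted in the abstract (123 mm unadjusted vs. 274 mm bias-adjusted); the evapotranspiration and runoff values are invented for illustration only:

```python
# Water balance: change in storage dS = P - ET - R
p_gauge = 123.0      # mm/yr, conventional (unadjusted) precipitation, from the abstract
p_adjusted = 274.0   # mm/yr, bias-adjusted precipitation, from the abstract
et = 180.0           # mm/yr, hypothetical evapotranspiration
runoff = 90.0        # mm/yr, hypothetical runoff

ds_unadjusted = p_gauge - et - runoff     # large spurious storage deficit
ds_adjusted = p_adjusted - et - runoff    # near balance
```

With plausible ET and runoff, unadjusted precipitation forces a deficit on the order of 100 mm that the adjusted budget does not show, which is the pattern the study reports.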

  10. [A competency model of rural general practitioners: theory construction and empirical study].

    Science.gov (United States)

    Yang, Xiu-Mu; Qi, Yu-Long; Shne, Zheng-Fu; Han, Bu-Xin; Meng, Bei

    2015-04-01

    To perform theory construction and empirical study of the competency model of rural general practitioners. Through literature study, job analysis, interviews, and expert team discussion, the questionnaire of rural general practitioner competency was constructed. A total of 1458 rural general practitioners were surveyed with the questionnaire in 6 central provinces. The common factors were constructed using the principal component method of exploratory factor analysis and confirmatory factor analysis. The influence of the competency characteristics on job performance was analyzed using regression analysis. The Cronbach's alpha coefficient of the questionnaire was 0.974. The model consisted of 9 dimensions and 59 items. The 9 competency dimensions included basic public health service ability, basic clinical skills, system analysis capability, information management capability, communication and cooperation ability, occupational moral ability, non-medical professional knowledge, personal traits and psychological adaptability. The explained cumulative total variance was 76.855%. The model fit indices were χ²/df = 1.88, GFI = 0.94, NFI = 0.96, NNFI = 0.98, PNFI = 0.91, RMSEA = 0.068, CFI = 0.97, IFI = 0.97, RFI = 0.96, suggesting good model fit. Regression analysis showed that the competency characteristics had a significant effect on job performance. The rural general practitioner competency model provides a reference for rural doctor training, order-oriented cultivation of rural medical students, and competency-based performance management of rural general practitioners.

  11. Davidson's generalization of the Fenyes-Nelson stochastic model of quantum mechanics

    International Nuclear Information System (INIS)

    Shucker, D.S.

    1980-01-01

    Davidson's generalization of the Fenyes-Nelson stochastic model of quantum mechanics is discussed. It is shown that this author's previous results concerning the Fenyes-Nelson process extend to the more general theory of Davidson. (orig.)

  12. Compiling models into real-time systems

    International Nuclear Information System (INIS)

    Dormoy, J.L.; Cherriaux, F.; Ancelin, J.

    1992-08-01

    This paper presents an architecture for building real-time systems from models, and model-compiling techniques. This has been applied for building a real-time model-based monitoring system for nuclear plants, called KSE, which is currently being used in two plants in France. We describe how we used various artificial intelligence techniques for building it: a model-based approach, a logical model of its operation, a declarative implementation of these models, and original knowledge-compiling techniques for automatically generating the real-time expert system from those models. Some of those techniques have just been borrowed from the literature, but we had to modify or invent other techniques which simply did not exist. We also discuss two important problems, which are often underestimated in the artificial intelligence literature: size, and errors. Our architecture, which could be used in other applications, combines the advantages of the model-based approach with the efficiency requirements of real-time applications, while, in general, model-based approaches present serious drawbacks on this point.

  14. Generalized Whittle-Matern random field as a model of correlated fluctuations

    International Nuclear Information System (INIS)

    Lim, S C; Teo, L P

    2009-01-01

    This paper considers a generalization of the Gaussian random field with covariance function of the Whittle-Matern family. Such a random field can be obtained as the solution to the fractional stochastic differential equation with two fractional orders. Asymptotic properties of the covariance functions belonging to this generalized Whittle-Matern family are studied, which are used to deduce the sample path properties of the random field. The Whittle-Matern field has been widely used in modeling geostatistical data such as sea beam data, wind speed, field temperature and soil data. In this paper we show that the generalized Whittle-Matern field provides a more flexible model for wind speed data
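The (non-generalized) Whittle-Matern covariance underlying this record can be written down directly. The sketch below uses one common parameterization, C(r) = σ² · 2^(1-ν)/Γ(ν) · (r/ℓ)^ν K_ν(r/ℓ); at ν = 1/2 it reduces to the exponential covariance exp(-r/ℓ), which gives a convenient sanity check. The function name and parameter choices are ours, not from the paper:

```python
import numpy as np
from scipy.special import gamma, kv  # kv: modified Bessel function of the second kind

def matern_cov(r, sigma2=1.0, ell=1.0, nu=0.5):
    """Whittle-Matern covariance C(r) = sigma2 * 2^(1-nu)/Gamma(nu) * u^nu * K_nu(u),
    with u = r/ell. nu=0.5 recovers the exponential model sigma2*exp(-r/ell)."""
    r = np.asarray(r, dtype=float)
    c = np.full_like(r, sigma2)          # C(0) = sigma2 (the Bessel form is singular at 0)
    pos = r > 0
    u = r[pos] / ell
    c[pos] = sigma2 * (2.0 ** (1.0 - nu) / gamma(nu)) * u ** nu * kv(nu, u)
    return c
```

Larger ν gives smoother sample paths; the "generalized" field of the paper introduces a second fractional order on top of this family.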

  15. Using video modeling for generalizing toy play in children with autism.

    Science.gov (United States)

    Paterson, Claire R; Arco, Lucius

    2007-09-01

    The present study examined effects of video modeling on generalized independent toy play of two boys with autism. Appropriate and repetitive verbal and motor play were measured, and intermeasure relationships were examined. Two single-participant experiments with multiple baselines and withdrawals across toy play were used. One boy was presented with three physically unrelated toys, whereas the other was presented with three related toys. Video modeling produced increases in appropriate play and decreases in repetitive play, but generalized play was observed only with the related toys. Generalization may have resulted from variables including the toys' common physical characteristics and natural reinforcing properties and the increased correspondence between verbal and motor play.

  16. Generalized semi-Markovian dividend discount model: risk and return

    OpenAIRE

    D'Amico, Guglielmo

    2016-01-01

    The article presents a general discrete time dividend valuation model when the dividend growth rate is a general continuous variable. The main assumption is that the dividend growth rate follows a discrete time semi-Markov chain with measurable space. The paper furnishes sufficient conditions that assure finiteness of fundamental prices and risks and new equations that describe the first and second order price-dividend ratios. Approximation methods to solve equations are provided and some new...

  17. On concurvity in nonlinear and nonparametric regression models

    Directory of Open Access Journals (Sweden)

    Sonia Amodio

    2014-12-01

    Full Text Available When data are affected by multicollinearity in the linear regression framework, concurvity will be present when fitting a generalized additive model (GAM). The term concurvity describes nonlinear dependencies among the predictor variables. Just as collinearity results in inflated variance of the estimated regression coefficients in the linear regression model, the presence of concurvity leads to instability of the estimated coefficients in GAMs. Even though the backfitting algorithm always converges to a solution, in the case of concurvity the final solution of the backfitting procedure in fitting a GAM is influenced by the starting functions. While exact concurvity is highly unlikely, approximate concurvity, the analogue of multicollinearity, is of practical concern as it can lead to upwardly biased estimates of the parameters and to underestimation of their standard errors, increasing the risk of committing type I error. We compare the existing approaches to detect concurvity, pointing out their advantages and drawbacks, using simulated and real data sets. As a result, this paper provides a general criterion to detect concurvity in nonlinear and nonparametric regression models.
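One simple way to quantify approximate concurvity, analogous to the R² behind a variance inflation factor, is to regress each predictor on a basis spanning the other smooth terms. The toy sketch below (our illustration, not a method from the paper) builds one predictor as a nonlinear function of another and shows that its R² against that smooth-term basis is near 1, while an unrelated predictor scores near 0:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x1 = rng.uniform(0.0, 1.0, n)
x2 = x1 ** 2 + 0.05 * rng.standard_normal(n)  # nonlinear function of x1 plus noise
x3 = rng.uniform(0.0, 1.0, n)                 # unrelated predictor

def r_squared(y, X):
    """R^2 of y regressed on the columns of X (intercept included)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

basis = np.column_stack([x1, x1 ** 2])   # stand-in for a smooth-term basis of x1
concurvity_x2 = r_squared(x2, basis)     # near 1: x2 is concurve with f(x1)
concurvity_x3 = r_squared(x3, basis)     # near 0: no concurvity
```

In a real GAM the basis would be the fitted smooth of each term rather than a hand-picked polynomial, but the diagnostic logic is the same.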

  18. Are the impacts of land use on warming underestimated in climate policy?

    Science.gov (United States)

    Mahowald, Natalie M.; Ward, Daniel S.; Doney, Scott C.; Hess, Peter G.; Randerson, James T.

    2017-09-01

    While carbon dioxide emissions from energy use must be the primary target of climate change mitigation efforts, land use and land cover change (LULCC) also represent an important source of climate forcing. In this study we compute time series of global surface temperature change separately for LULCC and non-LULCC sources (primarily fossil fuel burning), and show that because of the extra warming associated with the co-emission of methane and nitrous oxide with LULCC carbon dioxide emissions, and a co-emission of cooling aerosols with non-LULCC emissions of carbon dioxide, the linear relationship between cumulative carbon dioxide emissions and temperature has a two-fold higher slope for LULCC than for non-LULCC activities. Moreover, projections used in the Intergovernmental Panel on Climate Change (IPCC) for the rate of tropical land conversion in the future are relatively low compared to contemporary observations, suggesting that the future projections of land conversion used in the IPCC may underestimate potential impacts of LULCC. By including a ‘business as usual’ future LULCC scenario for tropical deforestation, we find that even if all non-LULCC emissions are switched off in 2015, it is likely that 1.5 °C of warming relative to the preindustrial era will occur by 2100. Thus, policies to reduce LULCC emissions must remain a high priority if we are to achieve the low to medium temperature change targets proposed as a part of the Paris Agreement. Future studies using integrated assessment models and other climate simulations should include more realistic deforestation rates and the integration of policy that would reduce LULCC emissions.

  19. Generalized model for Memristor-based Wien family oscillators

    KAUST Repository

    Talukdar, Abdul Hafiz Ibne

    2012-07-23

    In this paper, we report the unconventional characteristics of Memristor in Wien oscillators. Generalized mathematical models are developed to analyze four members of the Wien family using Memristors. Sustained oscillation is reported for all types though oscillating resistance and time dependent poles are present. We have also proposed an analytical model to estimate the desired amplitude of oscillation before the oscillation starts. These Memristor-based oscillation results, presented for the first time, are in good agreement with simulation results. © 2011 Elsevier Ltd.

  20. Verification and Validation of a Three-Dimensional Generalized Composite Material Model

    Science.gov (United States)

    Hoffarth, Canio; Harrington, Joseph; Rajan, Subramaniam D.; Goldberg, Robert K.; Carney, Kelly S.; DuBois, Paul; Blankenhorn, Gunther

    2015-01-01

    A general purpose orthotropic elasto-plastic computational constitutive material model has been developed to improve predictions of the response of composites subjected to high velocity impact. The three-dimensional orthotropic elasto-plastic composite material model is being implemented initially for solid elements in LS-DYNA as MAT213. In order to accurately represent the response of a composite, experimental stress-strain curves are utilized as input, allowing for a more general material model that can be used on a variety of composite applications. The theoretical details are discussed in a companion paper. This paper documents the implementation, verification and qualitative validation of the material model using the T800-F3900 fiber/resin composite material

  1. Nature of dynamical suppressions in the generalized Veneziano model

    International Nuclear Information System (INIS)

    Odorico, R.

    1976-05-01

    It is shown by explicit numerical calculation that, of a class of coupling suppressions existing in the generalized Veneziano model which have recently been used to interpret the psi data and other related phenomena, only a part can be attributed to the exponential growth with energy of the number of levels in the model. The remaining suppressions have a more direct dual origin.

  2. Language issues, an underestimated danger in major hazard control?

    Science.gov (United States)

    Lindhout, Paul; Ale, Ben J M

    2009-12-15

    Language issues are problems with communication via speech, signs, gestures or their written equivalents. They may result from poor reading and writing skills, a mix of foreign languages and other circumstances. Language issues are not picked up as a safety risk on the shop floor by current safety management systems. These safety risks need to be identified, acknowledged, quantified and prioritized in order to allow risk reducing measures to be taken. This study investigates the nature of language-issue-related danger in the literature, by experiment and by a survey among the Seveso II companies in the Netherlands. Based on human error frequencies, and on the contents of accident investigation reports, the risks associated with language issues were ranked. Accident investigation method causal factor categories were found not to be sufficiently representative for the type and magnitude of these risks. Readability of safety related documents used by the companies was investigated and found to be poor in many cases. Interviews among regulators and a survey among Seveso II companies were used to identify the gap between the language-issue-related dangers found in the literature and current best practices. This study demonstrates by means of triangulation with different investigative methods that language-issue-related risks are indeed underestimated. A recommended course of action in order to arrive at appropriate measures is presented.

  3. Language issues, an underestimated danger in major hazard control?

    Energy Technology Data Exchange (ETDEWEB)

    Lindhout, Paul, E-mail: plindhout@minszw.nl [Ministry of Social Affairs and Employment, AI-MHC, Anna van Hannoverstraat 4, P.O. Box 90801, 2509 LV The Hague (Netherlands); Ale, Ben J.M. [Delft University of Technology, TBM-Safety Science Group, Jaffalaan 5, 2628 BX Delft (Netherlands)

    2009-12-15

    Language issues are problems with communication via speech, signs, gestures or their written equivalents. They may result from poor reading and writing skills, a mix of foreign languages and other circumstances. Language issues are not picked up as a safety risk on the shop floor by current safety management systems. These safety risks need to be identified, acknowledged, quantified and prioritised in order to allow risk reducing measures to be taken. This study investigates the nature of language-issue-related danger in the literature, by experiment and by a survey among the Seveso II companies in the Netherlands. Based on human error frequencies, and on the contents of accident investigation reports, the risks associated with language issues were ranked. Accident investigation method causal factor categories were found not to be sufficiently representative for the type and magnitude of these risks. Readability of safety related documents used by the companies was investigated and found to be poor in many cases. Interviews among regulators and a survey among Seveso II companies were used to identify the gap between the language-issue-related dangers found in the literature and current best practices. This study demonstrates by means of triangulation with different investigative methods that language-issue-related risks are indeed underestimated. A recommended course of action in order to arrive at appropriate measures is presented.

  4. Language issues, an underestimated danger in major hazard control?

    International Nuclear Information System (INIS)

    Lindhout, Paul; Ale, Ben J.M.

    2009-01-01

    Language issues are problems with communication via speech, signs, gestures or their written equivalents. They may result from poor reading and writing skills, a mix of foreign languages and other circumstances. Language issues are not picked up as a safety risk on the shop floor by current safety management systems. These safety risks need to be identified, acknowledged, quantified and prioritised in order to allow risk reducing measures to be taken. This study investigates the nature of language-issue-related danger in the literature, by experiment and by a survey among the Seveso II companies in the Netherlands. Based on human error frequencies, and on the contents of accident investigation reports, the risks associated with language issues were ranked. Accident investigation method causal factor categories were found not to be sufficiently representative for the type and magnitude of these risks. Readability of safety related documents used by the companies was investigated and found to be poor in many cases. Interviews among regulators and a survey among Seveso II companies were used to identify the gap between the language-issue-related dangers found in the literature and current best practices. This study demonstrates by means of triangulation with different investigative methods that language-issue-related risks are indeed underestimated. A recommended course of action in order to arrive at appropriate measures is presented.

  5. Anisotropic cosmological models and generalized scalar tensor theory

    Indian Academy of Sciences (India)

    Abstract. In this paper generalized scalar tensor theory has been considered in the background of anisotropic cosmological models, namely, axially symmetric Bianchi-I, Bianchi-III and Kantowski–Sachs space-time. For bulk viscous fluid, both exponential and power-law solutions have been studied and some assumptions ...

  6. Anisotropic cosmological models and generalized scalar tensor theory

    Indian Academy of Sciences (India)

    In this paper generalized scalar tensor theory has been considered in the background of anisotropic cosmological models, namely, axially symmetric Bianchi-I, Bianchi-III and Kantowski–Sachs space-time. For bulk viscous fluid, both exponential and power-law solutions have been studied and some assumptions among the ...

  7. Parenting practices, parents' underestimation of daughters' risks, and alcohol and sexual behaviors of urban girls.

    Science.gov (United States)

    O'Donnell, Lydia; Stueve, Ann; Duran, Richard; Myint-U, Athi; Agronick, Gail; San Doval, Alexi; Wilson-Simmons, Renée

    2008-05-01

    In urban economically distressed communities, high rates of early sexual initiation combined with alcohol use place adolescent girls at risk for myriad negative health consequences. This article reports on the extent to which parents of young teens underestimate both the risks their daughters are exposed to and the considerable influence that they have over their children's decisions and behaviors. Surveys were conducted with more than 700 sixth-grade girls and their parents, recruited from seven New York City schools serving low-income families. Bivariate and multivariate analyses examined relationships among parents' practices and perceptions of daughters' risks, girls' reports of parenting, and outcomes of girls' alcohol use, media and peer conduct, and heterosexual romantic and social behaviors that typically precede sexual intercourse. Although only four parents thought that their daughters had used alcohol, 22% of the daughters reported drinking in the past year. Approximately 5% of parents thought that daughters had hugged and kissed a boy for a long time or had "hung out" with older boys, whereas 38% of girls reported these behaviors. Parents' underestimation of risk was correlated with lower reports of positive parenting practices by daughters. In multivariate analyses, girls' reports of parental oversight, rules, and disapproval of risk are associated with all three behavioral outcomes. Adult reports of parenting practices are associated with girls' conduct and heterosexual behaviors, but not with their alcohol use. Creating greater awareness of the early onset of risk behaviors among urban adolescent girls is important for fostering positive parenting practices, which in turn may help parents to support their daughters' healthier choices.

  8. Processes influencing model-data mismatch in drought-stressed, fire-disturbed eddy flux sites

    Science.gov (United States)

    Mitchell, Stephen; Beven, Keith; Freer, Jim; Law, Beverly

    2011-06-01

    Semiarid forests are very sensitive to climatic change and among the most difficult ecosystems to accurately model. We tested the performance of the Biome-BGC model against eddy flux data taken from young (years 2004-2008), mature (years 2002-2008), and old-growth (year 2000) ponderosa pine stands at Metolius, Oregon, and subsequently examined several potential causes for model-data mismatch. We used the Generalized Likelihood Uncertainty Estimation methodology, which involved 500,000 model runs for each stand (1,500,000 total). Each simulation was run with randomly generated parameter values from a uniform distribution based on published parameter ranges, resulting in modeled estimates of net ecosystem CO2 exchange (NEE) that were compared to measured eddy flux data. Simulations for the young stand exhibited the highest level of performance, though they overestimated ecosystem C accumulation (-NEE) 99% of the time. Among the simulations for the mature and old-growth stands, 100% and 99% of the simulations underestimated ecosystem C accumulation. One obvious area of model-data mismatch is soil moisture, which was overestimated by the model in the young and old-growth stands yet underestimated in the mature stand. However, modeled estimates of soil water content and associated water deficits did not appear to be the primary cause of model-data mismatch; our analysis indicated that gross primary production can be accurately modeled even if soil moisture content is not. Instead, difficulties in adequately modeling ecosystem respiration, mainly autotrophic respiration, appeared to be the fundamental cause of model-data mismatch.
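The Generalized Likelihood Uncertainty Estimation (GLUE) procedure used in this record follows a simple recipe: sample parameter sets from prior ranges, score each model run against observations with an informal likelihood, and retain the "behavioural" sets. The toy sketch below applies that recipe to an invented exponential-decay model rather than Biome-BGC; all ranges, data, and the likelihood choice are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
t = np.linspace(0.0, 10.0, 50)
# Synthetic "observations" from a known toy model plus noise
obs = 2.0 * np.exp(-0.3 * t) + 0.05 * rng.standard_normal(t.size)

# GLUE step 1: sample parameter sets uniformly from prior ranges
n_runs = 5000
a = rng.uniform(0.5, 4.0, n_runs)
k = rng.uniform(0.05, 1.0, n_runs)

# GLUE step 2: run the model for every parameter set and score it
pred = a[:, None] * np.exp(-k[:, None] * t)          # shape (n_runs, 50)
sse = ((pred - obs) ** 2).sum(axis=1)
likelihood = np.exp(-sse / sse.min())                # one informal likelihood choice

# GLUE step 3: keep "behavioural" parameter sets above a threshold
behavioural = likelihood > 0.1 * likelihood.max()
a_best, k_best = a[sse.argmin()], k[sse.argmin()]
```

Prediction bounds then come from the likelihood-weighted spread of the behavioural runs, which is how the study turns 500,000 Biome-BGC runs per stand into an assessment of model-data mismatch.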

  9. The generalized collective model

    International Nuclear Information System (INIS)

    Troltenier, D.

    1992-07-01

    In this thesis a new approach, based on the finite-element method, to the solution of the collective Schroedinger equation in the framework of the Generalized Collective Model is presented. The numerically attainable accuracy is illustrated by comparison with analytically known solutions in numerous examples. Furthermore, the potential-energy surfaces of the 182-196 Hg, 242-248 Cm, and 242-246 Pu isotopes are determined by fitting the parameters of the Gneuss-Greiner potential to the experimental data. The Hg isotopes exhibit a coexistence of nearly spherical and oblate deformations, while the Cm and Pu isotopes possess an essentially constant prolate deformation. By means of the pseudo-symplectic model the potential-energy surfaces of 24 Mg, 190 Pt, and 238 U are calculated microscopically. Using a deformation-independent kinetic energy, the collective excitation spectra and the electric properties (B(E2) and B(E4) values, quadrupole moments) of these nuclei are calculated and compared with experiment. Finally, an analytic relation between the (g_R - Z/A) value and the quadrupole moment is derived. The experimental data for the 166-170 Er isotopes agree with this relation within the measurement accuracy. This relation also makes it possible to determine the effective magnetic dipole moment without free parameters. (orig./HSI) [de

  10. A generalized multivariate regression model for modelling ocean wave heights

    Science.gov (United States)

    Wang, X. L.; Feng, Y.; Swail, V. R.

    2012-04-01

    In this study, a generalized multivariate linear regression model is developed to represent the relationship between 6-hourly ocean significant wave heights (Hs) and the corresponding 6-hourly mean sea level pressure (MSLP) fields. The model is calibrated using the ERA-Interim reanalysis of Hs and MSLP fields for 1981-2000, and is validated using the ERA-Interim reanalysis for 2001-2010 and the ERA40 reanalysis of Hs and MSLP for 1958-2001. The performance of the fitted model is evaluated in terms of the Peirce skill score, frequency bias index, and correlation skill score. Because wave heights are not normally distributed, they are subjected to a data-adaptive Box-Cox transformation before being used in the model fitting. Also, since 6-hourly data are being modelled, lag-1 autocorrelation must be, and is, accounted for. The models with and without the Box-Cox transformation, and with and without accounting for autocorrelation, are inter-compared in terms of their prediction skills. The fitted MSLP-Hs relationship is then used to reconstruct the historical wave height climate from the 6-hourly MSLP fields taken from the Twentieth Century Reanalysis (20CR, Compo et al. 2011), and to project possible future wave height climates using CMIP5 model simulations of MSLP fields. The reconstructed and projected wave heights, both seasonal means and maxima, are subjected to a trend analysis that allows for non-linear (polynomial) trends.
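The two preprocessing steps this record names, a data-adaptive Box-Cox transformation and a lag-1 autocorrelation check, are both one-liners in scipy/numpy. The sketch below uses synthetic positively skewed data in place of real Hs; it is an illustration of the steps, not the authors' pipeline:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
hs = rng.gamma(shape=2.0, scale=1.5, size=2000)   # skewed, Hs-like positive data

# Data-adaptive Box-Cox: lambda is chosen by maximum likelihood when not supplied
hs_t, lam = stats.boxcox(hs)

# Lag-1 autocorrelation of the transformed series
r1 = np.corrcoef(hs_t[:-1], hs_t[1:])[0, 1]

# The transform should pull the skewness toward zero
skew_before, skew_after = stats.skew(hs), stats.skew(hs_t)
```

With real 6-hourly Hs the lag-1 coefficient would be large and would enter the regression model; here the synthetic data are independent, so r1 is near zero.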

  11. Disguised Distress in Children and Adolescents "Flying under the Radar": Why Psychological Problems Are Underestimated and How Schools Must Respond

    Science.gov (United States)

    Flett, Gordon L.; Hewitt, Paul L.

    2013-01-01

    It is now recognized that there is a very high prevalence of psychological disorders among children and adolescents and relatively few receive psychological treatment. In the current article, we present the argument that levels of distress and dysfunction among young people are substantially underestimated and the prevalence of psychological…

  12. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina; Cantoni, Eva; Genton, Marc G.

    2012-01-01

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  13. A Non-Gaussian Spatial Generalized Linear Latent Variable Model

    KAUST Repository

    Irincheeva, Irina

    2012-08-03

    We consider a spatial generalized linear latent variable model with and without normality distributional assumption on the latent variables. When the latent variables are assumed to be multivariate normal, we apply a Laplace approximation. To relax the assumption of marginal normality in favor of a mixture of normals, we construct a multivariate density with Gaussian spatial dependence and given multivariate margins. We use the pairwise likelihood to estimate the corresponding spatial generalized linear latent variable model. The properties of the resulting estimators are explored by simulations. In the analysis of an air pollution data set the proposed methodology uncovers weather conditions to be a more important source of variability than air pollution in explaining all the causes of non-accidental mortality excluding accidents. © 2012 International Biometric Society.

  14. Maximum rates of climate change are systematically underestimated in the geological record.

    Science.gov (United States)

    Kemp, David B; Eichenseer, Kilian; Kiessling, Wolfgang

    2015-11-10

    Recently observed rates of environmental change are typically much higher than those inferred for the geological past. At the same time, the magnitudes of ancient changes were often substantially greater than those established in recent history. The most pertinent disparity, however, between recent and geological rates is the timespan over which the rates are measured, which typically differ by several orders of magnitude. Here we show that rates of marked temperature changes inferred from proxy data in Earth history scale with measurement timespan as an approximate power law across nearly six orders of magnitude (10(2) to >10(7) years). This scaling reveals how climate signals measured in the geological record alias transient variability, even during the most pronounced climatic perturbations of the Phanerozoic. Our findings indicate that the true attainable pace of climate change on timescales of greatest societal relevance is underestimated in geological archives.
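    The power-law dependence of apparent rate on measurement timespan can be illustrated with a toy calculation: a fixed net temperature excursion divided by ever-longer timespans gives an apparent rate that falls off with slope -1 in log-log space. (The empirical exponent reported for proxy data is shallower, since larger changes tend to occur over longer spans; the numbers below are invented for illustration.)

```python
import numpy as np

# A fixed-magnitude excursion measured over longer timespans yields an
# apparent rate that scales as a power law of the measurement timespan.
spans = np.logspace(2, 7, 30)   # measurement timespans, years (10^2 to 10^7)
excursion = 5.0                 # net temperature change, degrees C
rates = excursion / spans       # apparent rate = net change / timespan

# Slope of log10(rate) vs log10(timespan): exactly -1 in this toy case
slope, intercept = np.polyfit(np.log10(spans), np.log10(rates), 1)
```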

  15. Dividend taxation in an infinite-horizon general equilibrium model

    OpenAIRE

    Pham, Ngoc-Sang

    2017-01-01

    We consider an infinite-horizon general equilibrium model with heterogeneous agents and financial market imperfections. We investigate the role of dividend taxation on economic growth and asset price. The optimal dividend taxation is also studied.

  16. Aspects of general linear modelling of migration.

    Science.gov (United States)

    Congdon, P

    1992-01-01

    "This paper investigates the application of general linear modelling principles to analysing migration flows between areas. Particular attention is paid to specifying the form of the regression and error components, and the nature of departures from Poisson randomness. Extensions to take account of spatial and temporal correlation are discussed as well as constrained estimation. The issue of specification bears on the testing of migration theories, and assessing the role migration plays in job and housing markets: the direction and significance of the effects of economic variates on migration depends on the specification of the statistical model. The application is in the context of migration in London and South East England in the 1970s and 1980s." excerpt

  17. General extrapolation model for an important chemical dose-rate effect

    International Nuclear Information System (INIS)

    Gillen, K.T.; Clough, R.L.

    1984-12-01

    In order to extrapolate material accelerated aging data, methodologies must be developed based on sufficient understanding of the processes leading to material degradation. One of the most important mechanisms leading to chemical dose-rate effects in polymers involves the breakdown of intermediate hydroperoxide species. A general model for this mechanism is derived based on the underlying chemical steps. The results lead to a general formalism for understanding dose rate and sequential aging effects when hydroperoxide breakdown is important. We apply the model to combined radiation/temperature aging data for a PVC material and show that this data is consistent with the model and that model extrapolations are in excellent agreement with 12-year real-time aging results from an actual nuclear plant. This model and other techniques discussed in this report can aid in the selection of appropriate accelerated aging methods and can also be used to compare and select materials for use in safety-related components. This will result in increased assurance that equipment qualification procedures are adequate

  18. Examining Pedestrian Injury Severity Using Alternative Disaggregate Models

    DEFF Research Database (Denmark)

    Abay, Kibrom Araya

    2013-01-01

    This paper investigates the injury severity of pedestrians considering detailed road user characteristics and alternative model specifications using high-quality Danish road accident data. Such a detailed and alternative modeling approach helps to assess the sensitivity of empirical inferences...... to the choice of these models. The empirical analysis reveals that detailed road user characteristics such as the crime history of drivers and the momentary activities of road users at the time of the accident provide interesting insights in the injury severity analysis. Likewise, the alternative analytical...... specification of the models reveals that some of the conventionally employed fixed-parameters injury severity models could underestimate the effect of some important behavioral attributes of the accidents. For instance, the standard ordered logit model underestimated the marginal effects of some...
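    The ordered logit model mentioned in this record computes category probabilities from cutpoints on a latent scale, and its marginal effects follow from the logistic density. A minimal sketch (the coefficient and cutpoints are invented for illustration, not taken from the paper):

```python
import numpy as np

def logistic_cdf(z):
    """Numerically stable logistic CDF."""
    z = np.asarray(z, dtype=float)
    out = np.empty_like(z)
    pos = z >= 0
    out[pos] = 1.0 / (1.0 + np.exp(-z[pos]))
    ez = np.exp(z[~pos])
    out[~pos] = ez / (1.0 + ez)
    return out

def ordered_logit_probs(x_beta, cuts):
    """P(y = j) = F(k_j - x'b) - F(k_{j-1} - x'b), with k_0 = -inf, k_J = +inf."""
    k = np.concatenate(([-np.inf], cuts, [np.inf]))
    return np.diff(logistic_cdf(k - x_beta))

def marginal_effect(x_beta, cuts, beta_k):
    """dP(y = j)/dx_k = [f(k_{j-1} - x'b) - f(k_j - x'b)] * beta_k."""
    k = np.concatenate(([-np.inf], cuts, [np.inf]))
    F = logistic_cdf(k - x_beta)
    f = F * (1.0 - F)  # logistic density expressed via its CDF
    return (f[:-1] - f[1:]) * beta_k

cuts = np.array([-1.0, 0.0, 1.5])   # invented cutpoints, four severity levels
p = ordered_logit_probs(0.5, cuts)  # probabilities at linear index x'b = 0.5
me = marginal_effect(0.5, cuts, 0.8)
```

    The marginal effects across categories always sum to zero: raising the latent index only shifts probability mass between severity levels.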

  19. Generalized Modeling of the Human Lower Limb Assembly

    Science.gov (United States)

    Cofaru, Ioana; Huzu, Iulia

    2014-11-01

    The main reason for creating a generalized assembly of the main bones of the lower human limb is to create the premises for a computer-assisted biomechanical study that could be used to investigate the wide range of pathologies that exist at this level. Starting from 3D CAD models of the main bones of the lower human limb, which were created in previous research, in this study a generalized assembly system was developed in which both the situation of a healthy subject and that of a subject affected by axial deviations are highlighted. To achieve this purpose, reference systems were created in accordance with the mechanical and anatomical axes of the lower limb, and these were later assembled in a general manner that provides an easy customization option

  20. A General Model for Thermal, Hydraulic and Electric Analysis of Superconducting Cables

    CERN Document Server

    Bottura, L; Rosso, C

    2000-01-01

    In this paper we describe a generic, multi-component and multi-channel model for the analysis of superconducting cables. The aim of the model is to treat in a general and consistent manner simultaneous thermal, electric and hydraulic transients in cables. The model is devised for most general situations, but reduces in limiting cases to most common approximations without loss of efficiency. We discuss here the governing equations, and we write them in a matrix form that is well adapted to numerical treatment. We finally demonstrate the model capability by comparison with published experimental data on current distribution in a two-strand cable.

  1. Multiple-event probability in general-relativistic quantum mechanics. II. A discrete model

    International Nuclear Information System (INIS)

    Mondragon, Mauricio; Perez, Alejandro; Rovelli, Carlo

    2007-01-01

    We introduce a simple quantum mechanical model in which time and space are discrete and periodic. These features avoid the complications related to continuous-spectrum operators and infinite-norm states. The model provides a tool for discussing the probabilistic interpretation of generally covariant quantum systems, without the confusion generated by spurious infinities. We use the model to illustrate the formalism of general-relativistic quantum mechanics, and to test the definition of multiple-event probability introduced in a companion paper [Phys. Rev. D 75, 084033 (2007)]. We consider a version of the model with unitary time evolution and a version without unitary time evolution

  2. Statistical modeling of the Internet traffic dynamics: To which extent do we need long-term correlations?

    Science.gov (United States)

    Markelov, Oleg; Nguyen Duc, Viet; Bogachev, Mikhail

    2017-11-01

    Recently we have suggested a universal superstatistical model of user access patterns and aggregated network traffic. The model takes into account the irregular character of end user access patterns on the web via the non-exponential distributions of the local access rates, but neglects the long-term correlations between these rates. While the model is accurate for quasi-stationary traffic records, its performance under highly variable and especially non-stationary access dynamics remains questionable. In this paper, using an example of the traffic patterns from a highly loaded network cluster hosting the website of the 1998 FIFA World Cup, we suggest a generalization of the previously suggested superstatistical model by introducing long-term correlations between access rates. Using queueing system simulations, we show explicitly that this generalization is essential for modeling network nodes with highly non-stationary access patterns, where neglecting long-term correlations leads to the underestimation of the empirical average sojourn time by several decades under high throughput utilization.
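    The sojourn times whose underestimation is at issue can be measured in a toy single-server FIFO simulation via the Lindley recursion. Here heavy-tailed (Pareto) service times stand in for the irregular access patterns the model describes; all distributions and parameters are chosen for illustration only and are not taken from the paper.

```python
import numpy as np

def sojourn_times(interarrivals, services):
    """Single-server FIFO queue via the Lindley recursion:
    wait[n] = max(0, wait[n-1] + services[n-1] - interarrivals[n])."""
    wait = np.zeros(len(services))
    for n in range(1, len(services)):
        wait[n] = max(0.0, wait[n - 1] + services[n - 1] - interarrivals[n])
    return wait + services  # sojourn = waiting time + service time

rng = np.random.default_rng(1)
n = 20_000
arr = rng.exponential(1.0, n)         # Poisson arrivals, rate 1
svc_exp = rng.exponential(0.8, n)     # exponential service, utilization ~0.8
svc_heavy = rng.pareto(2.5, n) * 1.2  # Pareto (Lomax) service, same mean 0.8

s_exp = sojourn_times(arr, svc_exp)
s_heavy = sojourn_times(arr, svc_heavy)
```

    At equal utilization, the heavier-tailed service distribution inflates the average sojourn time, which is why ignoring the shape (and correlation) of access rates leads to underestimates under high load.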

  3. Structural dynamic analysis with generalized damping models analysis

    CERN Document Server

    Adhikari , Sondipon

    2013-01-01

    Since Lord Rayleigh introduced the idea of viscous damping in his classic work "The Theory of Sound" in 1877, it has become standard practice to use this approach in dynamics, covering a wide range of applications from aerospace to civil engineering. However, in the majority of practical cases this approach is adopted more for mathematical convenience than for modeling the physics of vibration damping. Over the past decade, extensive research has been undertaken on more general "non-viscous" damping models and vibration of non-viscously damped systems. This book, along with a related book

  4. Testing a generalized cubic Galileon gravity model with the Coma Cluster

    Energy Technology Data Exchange (ETDEWEB)

    Terukina, Ayumu; Yamamoto, Kazuhiro; Okabe, Nobuhiro [Department of Physical Sciences, Hiroshima University, 1-3-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Matsushita, Kyoko; Sasaki, Toru, E-mail: telkina@theo.phys.sci.hiroshima-u.ac.jp, E-mail: kazuhiro@hiroshima-u.ac.jp, E-mail: okabe@hiroshima-u.ac.jp, E-mail: matusita@rs.kagu.tus.ac.jp, E-mail: j1213703@ed.tus.ac.jp [Department of Physics, Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601 (Japan)

    2015-10-01

    We obtain a constraint on the parameters of a generalized cubic Galileon gravity model exhibiting the Vainshtein mechanism by using multi-wavelength observations of the Coma Cluster. The generalized cubic Galileon model is characterized by three parameters of the turning scale associated with the Vainshtein mechanism, and the amplitude of modifying a gravitational potential and a lensing potential. X-ray and Sunyaev-Zel'dovich (SZ) observations of the intra-cluster medium are sensitive to the gravitational potential, while the weak-lensing (WL) measurement is specified by the lensing potential. A joint fit of a complementary multi-wavelength dataset of X-ray, SZ and WL measurements enables us to simultaneously constrain these three parameters of the generalized cubic Galileon model for the first time. We also find a degeneracy between the cluster mass parameters and the gravitational modification parameters, which is influential in the limit of the weak screening of the fifth force.

  5. A NEW GENERAL 3DOF QUASI-STEADY AERODYNAMIC INSTABILITY MODEL

    DEFF Research Database (Denmark)

    Gjelstrup, Henrik; Larsen, Allan; Georgakis, Christos

    2008-01-01

    but can generally be applied for aerodynamic instability prediction for prismatic bluff bodies. The 3DOF, which make up the movement of the model, are the displacements in the XY-plane and the rotation around the bluff body’s rotational axis. The proposed model incorporates inertia coupling between...

  6. The microcomputer scientific software series 2: general linear model--regression.

    Science.gov (United States)

    Harold M. Rauscher

    1983-01-01

    The general linear model regression (GLMR) program provides the microcomputer user with a sophisticated regression analysis capability. The output provides a regression ANOVA table, estimators of the regression model coefficients, their confidence intervals, confidence intervals around the predicted Y-values, residuals for plotting, a check for multicollinearity, a...

  7. Particles and holes equivalence for generalized seniority and the interacting boson model

    International Nuclear Information System (INIS)

    Talmi, I.

    1982-01-01

    An apparent ambiguity was recently reported in coupling either pairs of identical fermions or hole pairs. This is explained here as due to a Hamiltonian whose lowest eigenstates do not have the structure prescribed by generalized seniority. It is shown that generalized seniority eigenstates can be equivalently constructed from correlated J = 0 and J = 2 pair states of either particles or holes. The interacting boson model parameters calculated can be unambiguously interpreted and then are of real interest to the shell model basis of interacting boson model

  8. A general relativistic hydrostatic model for a galaxy

    International Nuclear Information System (INIS)

    Hojman, R.; Pena, L.; Zamorano, N.

    1991-08-01

    The existence of huge amounts of mass lying at the center of some galaxies has been inferred from data gathered at different wavelengths. It seems reasonable, then, to incorporate general relativity in the study of these objects. A general relativistic hydrostatic model for a galaxy is studied. We assume that the galaxy is dominated by the dark mass except at the nucleus, where the luminous matter prevails. The model considers four different concentric spherically symmetric regions, properly matched and with a specific equation of state for each of them. It yields a slowly rising orbital velocity for a test particle moving in the background gravitational field of the dark matter region. In this sense we think of this model as representing a spiral galaxy. The dependence of the mass on the radius in cluster and field spiral galaxies published recently can be used to fix the size of the inner luminous core. A vanishing pressure at the edge of the galaxy and the assumption of hydrostatic equilibrium everywhere generate a jump in the density and the orbital velocity at the shell enclosing the galaxy. This is a prediction of this model. The ratios between the sizes of the core and the shells introduced here are proportional to their densities. In this sense the model is scale invariant. It can be used to reproduce a galaxy or the central region of a galaxy. We have also compared our results with those obtained with the Newtonian isothermal sphere. The luminosity is not included in our model as an extra variable in the determination of the orbital velocity. (author). 29 refs, 10 figs

  9. Vector generalized linear and additive models with an implementation in R

    CERN Document Server

    Yee, Thomas W

    2015-01-01

    This book presents a statistical framework that expands generalized linear models (GLMs) for regression modelling. The framework shared in this book allows analyses based on many semi-traditional applied statistics models to be performed as a coherent whole. This is possible through the approximately half-a-dozen major classes of statistical models included in the book and the software infrastructure component, which makes the models easily operable.    The book’s methodology and accompanying software (the extensive VGAM R package) are directed at these limitations, and this is the first time the methodology and software are covered comprehensively in one volume. Since their advent in 1972, GLMs have unified important distributions under a single umbrella with enormous implications. The demands of practical data analysis, however, require a flexibility that GLMs do not have. Data-driven GLMs, in the form of generalized additive models (GAMs), are also largely confined to the exponential family. This book ...

  10. A nested Atlantic-Mediterranean Sea general circulation model for operational forecasting

    Directory of Open Access Journals (Sweden)

    P. Oddo

    2009-10-01

    A new numerical general circulation ocean model for the Mediterranean Sea has been implemented nested within an Atlantic general circulation model within the framework of the Marine Environment and Security for the European Area project (MERSEA; Desaubies, 2006). A 4-year twin experiment was carried out from January 2004 to December 2007 with two different models to evaluate the impact on the Mediterranean Sea circulation of open lateral boundary conditions in the Atlantic Ocean. One model considers a closed lateral boundary in a large Atlantic box and the other is nested in the same box in a global ocean circulation model. The impact was assessed by comparing the two simulations with independent observations: ARGO for temperature and salinity profiles and tide gauges and along-track satellite observations for the sea surface height. The improvement in the nested Atlantic-Mediterranean model with respect to the closed one is particularly evident in the salinity characteristics of the Modified Atlantic Water and in the Mediterranean sea level seasonal variability.

  11. Parameter identification in a generalized time-harmonic Rayleigh damping model for elastography.

    Directory of Open Access Journals (Sweden)

    Elijah E W Van Houten

    The identifiability of the two damping components of a Generalized Rayleigh Damping model is investigated through analysis of the continuum equilibrium equations as well as a simple spring-mass system. Generalized Rayleigh Damping provides a more diversified attenuation model than pure Viscoelasticity, with two parameters to describe attenuation effects and account for the complex damping behavior found in biological tissue. For heterogeneous Rayleigh Damped materials, there is no equivalent Viscoelastic system to describe the observed motions. For homogeneous systems, the inverse problem to determine the two Rayleigh Damping components is seen to be uniquely posed, in the sense that the inverse matrix for parameter identification is full rank, with certain conditions: when either multi-frequency data is available or when both shear and dilatational wave propagation is taken into account. For the multi-frequency case, the frequency dependency of the elastic parameters adds a level of complexity to the reconstruction problem that must be addressed for reasonable solutions. For the dilatational wave case, the accuracy of compressional wave measurement in fluid saturated soft tissues becomes an issue for qualitative parameter identification. These issues can be addressed with reasonable assumptions on the negligible damping levels of dilatational waves in soft tissue. In general, the parameters of a Generalized Rayleigh Damping model are identifiable for the elastography inverse problem, although with more complex conditions than the simpler Viscoelastic damping model. The value of this approach is the additional structural information provided by the Generalized Rayleigh Damping model, which can be linked to tissue composition as well as rheological interpretations.
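    The two damping components discussed here are the classical Rayleigh pair, a mass-proportional and a stiffness-proportional term, C = alpha*M + beta*K. A minimal numerical sketch on an invented two-DOF spring-mass system (not the paper's elastography formulation) shows how the two parameters map onto modal damping ratios:

```python
import numpy as np

# Invented two-DOF spring-mass chain; not the paper's continuum system.
M = np.diag([1.0, 1.0])                   # mass matrix
K = np.array([[2.0, -1.0], [-1.0, 2.0]])  # stiffness matrix
alpha, beta = 0.1, 0.02                   # the two Rayleigh parameters

C = alpha * M + beta * K                  # Generalized Rayleigh damping matrix

# Undamped natural frequencies from K v = w^2 M v (M is the identity here,
# so M^-1 K stays symmetric and eigvalsh applies)
w2 = np.linalg.eigvalsh(np.linalg.solve(M, K))
omega = np.sqrt(w2)

# Classical modal damping ratios implied by the two parameters:
# the alpha term damps low modes, the beta term damps high modes
zeta = 0.5 * (alpha / omega + beta * omega)
```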

  12. Generalized symmetries and conserved quantities of the Lotka-Volterra model

    Science.gov (United States)

    Baumann, G.; Freyberger, M.

    1991-07-01

    We examine the generalized symmetries of the Lotka-Volterra model to find the parameter values at which one time-dependent integral of motion exists. In this case the integral can be read off from the symmetries themselves. We also demonstrate the connection to a Hamiltonian structure of the Lotka-Volterra model.

  13. Itinerant deaf educator and general educator perceptions of the D/HH push-in model.

    Science.gov (United States)

    Rabinsky, Rebecca J

    2013-01-01

    A qualitative case study using the deaf and hard of hearing (D/HH) push-in model was conducted on the perceptions of 3 itinerant deaf educators and 3 general educators working in 1 school district. Participants worked in pairs of 1 deaf educator and 1 general educator at 3 elementary schools. Open-ended research questions guided the study, which was concerned with teachers' perceptions of the model in general and with the model's advantages, disadvantages, and effectiveness. Data collected from observations, one-to-one interviews, and a focus group interview enabled the investigator to uncover 4 themes: Participants (a) had an overall positive experience, (b) viewed general education immersion as an advantage, (c) considered high noise levels a disadvantage, and (d) believed the effectiveness of the push-in model was dependent on several factors, in particular, the needs of the student and the nature of the general education classroom environment.

  14. Doubly robust estimation of generalized partial linear models for longitudinal data with dropouts.

    Science.gov (United States)

    Lin, Huiming; Fu, Bo; Qin, Guoyou; Zhu, Zhongyi

    2017-12-01

    We develop a doubly robust estimation of generalized partial linear models for longitudinal data with dropouts. Our method extends the highly efficient aggregate unbiased estimating function approach proposed in Qu et al. (2010) to a doubly robust one in the sense that under missing at random (MAR), our estimator is consistent when either the linear conditional mean condition is satisfied or a model for the dropout process is correctly specified. We begin with a generalized linear model for the marginal mean, and then move forward to a generalized partial linear model, allowing for nonparametric covariate effect by using the regression spline smoothing approximation. We establish the asymptotic theory for the proposed method and use simulation studies to compare its finite sample performance with that of Qu's method, the complete-case generalized estimating equation (GEE) and the inverse-probability weighted GEE. The proposed method is finally illustrated using data from a longitudinal cohort study. © 2017, The International Biometric Society.

  15. Generalized Roe's numerical scheme for a two-fluid model

    International Nuclear Information System (INIS)

    Toumi, I.; Raymond, P.

    1993-01-01

    This paper is devoted to a mathematical and numerical study of a six-equation two-fluid model. We will prove that the model is strictly hyperbolic due to the inclusion of the virtual mass force term in the phasic momentum equations. The two-fluid model is naturally written in a nonconservative form. To solve the nonlinear Riemann problem for this nonconservative hyperbolic system, a generalized Roe's approximate Riemann solver is used, based on a linearization of the nonconservative terms. A Godunov-type numerical scheme is built using this approximate Riemann solver. 10 refs., 5 figs.
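    The Roe linearization underlying such a solver is easiest to see for a scalar conservation law. The following toy sketch (Burgers' equation, not the six-equation two-fluid system of the paper) shows the Roe-averaged wave speed and the resulting upwinded interface flux:

```python
import numpy as np

def roe_flux(uL, uR, f, fprime):
    """Interface flux from Roe's linearization for a scalar conservation law:
    a = (f(uR) - f(uL)) / (uR - uL) is the Roe-averaged wave speed."""
    if np.isclose(uL, uR):
        a = fprime(uL)  # consistency limit: the flux reduces to f(u)
    else:
        a = (f(uR) - f(uL)) / (uR - uL)
    return 0.5 * (f(uL) + f(uR)) - 0.5 * abs(a) * (uR - uL)

burgers = lambda u: 0.5 * u * u  # toy flux function
dburgers = lambda u: u

# Right-moving shock: the scheme upwinds to the left state's flux f(uL)
flux = roe_flux(1.0, 0.0, burgers, dburgers)
```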

  16. Generalized Penner models and multicritical behavior

    International Nuclear Information System (INIS)

    Tan, C.

    1992-01-01

    In this paper, we are interested in the critical behavior of generalized Penner models at t ∼ -1 + μ/N, where the topological expansion for the free energy develops logarithmic singularities: Γ ∼ -(χ₀μ²ln μ + χ₁ln μ + ...). We demonstrate that these criticalities can best be characterized by the fact that the large-N generating function becomes meromorphic with a single pole term of unit residue, F(z) → 1/(z-a), where a is the location of the "sink." For a one-band eigenvalue distribution, we identify multicritical potentials; we find that none of these can be associated with the c=1 string compactified at an integral multiple of the self-dual radius. We also give an exact solution to the Gaussian Penner model and explicitly demonstrate that, at criticality, this solution does not correspond to a c=1 string compactified at twice the self-dual radius

  17. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively biased than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
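    Coverage of an interval estimate, the property this study urges researchers to examine, can be checked by simulation: draw many samples, build the nominal-95% interval each time, and count how often it contains the truth. The sketch below (invented parameters, a simple mean rather than a factor correlation) uses the normal critical value 1.96 at a small sample size, which produces the kind of mild undercoverage the abstract warns about:

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps, true_mu = 20, 2000, 0.0

# Count how often the nominal-95% normal-theory interval covers the truth.
hits = 0
for _ in range(reps):
    x = rng.normal(true_mu, 1.0, n)
    half = 1.96 * x.std(ddof=1) / np.sqrt(n)  # z critical value, sample sd
    if abs(x.mean() - true_mu) <= half:
        hits += 1
coverage = hits / reps  # lands a bit below the nominal 0.95 at this small n
```

    Using the t critical value (about 2.093 for 19 degrees of freedom) instead of 1.96 restores the nominal coverage; the gap is exactly the small-sample effect at issue.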

  18. General formulation of standard model the standard model is in need of new concepts

    International Nuclear Information System (INIS)

    Khodjaev, L.Sh.

    2001-01-01

    The phenomenological basis for the formulation of the Standard Model is reviewed, and the Standard Model is formulated from its fundamental postulates. The concept of fundamental symmetries is introduced: one should look not for fundamental particles but for fundamental symmetries. In searching for a more general theory, it is natural to look first for global symmetries and then to study the consequences of localizing these global symmetries, as is done in the Standard Model

  19. Attractive Hubbard model with disorder and the generalized Anderson theorem

    International Nuclear Information System (INIS)

    Kuchinskii, E. Z.; Kuleeva, N. A.; Sadovskii, M. V.

    2015-01-01

    Using the generalized DMFT+Σ approach, we study the influence of disorder on single-particle properties of the normal phase and the superconducting transition temperature in the attractive Hubbard model. A wide range of attractive potentials U is studied, from the weak coupling region, where both the instability of the normal phase and superconductivity are well described by the BCS model, to the strong-coupling region, where the superconducting transition is due to Bose-Einstein condensation (BEC) of compact Cooper pairs, formed at temperatures much higher than the superconducting transition temperature. We study two typical models of the conduction band with semi-elliptic and flat densities of states, respectively appropriate for three-dimensional and two-dimensional systems. For the semi-elliptic density of states, the disorder influence on all single-particle properties (e.g., density of states) is universal for an arbitrary strength of electronic correlations and disorder and is due to only the general disorder widening of the conduction band. In the case of a flat density of states, universality is absent in the general case, but still the disorder influence is mainly due to band widening, and the universal behavior is restored for large enough disorder. Using the combination of DMFT+Σ and Nozieres-Schmitt-Rink approximations, we study the disorder influence on the superconducting transition temperature T c for a range of characteristic values of U and disorder, including the BCS-BEC crossover region and the limit of strong-coupling. Disorder can either suppress T c (in the weak-coupling region) or significantly increase T c (in the strong-coupling region). However, in all cases, the generalized Anderson theorem is valid and all changes of the superconducting critical temperature are essentially due to only the general disorder widening of the conduction band

  20. Does verbatim sentence recall underestimate the language competence of near-native speakers?

    Directory of Open Access Journals (Sweden)

    Judith Schweppe

    2015-02-01

    Verbatim sentence recall is widely used to test the language competence of native and non-native speakers, since it involves comprehension and production of connected speech. However, we assume that, to maintain surface information, sentence recall relies particularly on attentional resources, which differentially affects native and non-native speakers. Since language processing is less automatized even in near-natives than in native speakers, processing a sentence in a foreign language while also retaining its surface form may result in a cognitive overload. We contrasted the sentence recall performance of German native speakers with that of highly proficient non-natives. Non-natives recalled the sentences significantly more poorly than the natives, but performed equally well on a cloze test. This implies that sentence recall underestimates the language competence of good non-native speakers in mixed groups with native speakers. The findings also suggest that theories of sentence recall need to consider both its linguistic and its attentional aspects.

  1. Simplicial models for trace spaces II: General higher dimensional automata

    DEFF Research Database (Denmark)

    Raussen, Martin

    of directed paths with given end points in a pre-cubical complex as the nerve of a particular category. The paper generalizes the results from Raussen [19, 18] in which we had to assume that the HDA in question arises from a semaphore model. In particular, important for applications, it allows for models...

  2. Look before You Leap: Underestimating Chinese Student History, Chinese University Setting and Chinese University Steering in Sino-British HE Joint Ventures?

    Science.gov (United States)

    Dow, Ewan G.

    2010-01-01

    This article makes the case--in three parts--that many Anglo-Chinese university collaborations (joint ventures) to date have seriously underestimated Chinese (student) history, the Chinese university setting and Chinese national governmental steering as part of the process of "glocalisation". Recent turbulence in this particular HE…

  3. A general graphical user interface for automatic reliability modeling

    Science.gov (United States)

    Liceaga, Carlos A.; Siewiorek, Daniel P.

    1991-01-01

    Reported here is a general Graphical User Interface (GUI) for automatic reliability modeling of Processor Memory Switch (PMS) structures using a Markov model. This GUI is based on a hierarchy of windows. One window has graphical editing capabilities for specifying the system's communication structure, hierarchy, reconfiguration capabilities, and requirements. Other windows have text fields, popup menus, and buttons for specifying parameters and selecting actions. An example application of the GUI is given.

  4. A Novel Method Using Abstract Convex Underestimation in Ab-Initio Protein Structure Prediction for Guiding Search in Conformational Feature Space.

    Science.gov (United States)

    Hao, Xiao-Hu; Zhang, Gui-Jun; Zhou, Xiao-Gen; Yu, Xu-Feng

    2016-01-01

    To address the problem of searching protein conformational space in ab-initio protein structure prediction, a novel method using abstract convex underestimation (ACUE), based on the framework of an evolutionary algorithm, was proposed. Computing such conformations, essential to associate structural and functional information with gene sequences, is challenging due to the high dimensionality and rugged energy surface of the protein conformational space. As a consequence, the dimension of the conformational space should be reduced to a proper level. In this paper, the original high-dimensional conformational space was converted into a feature space of considerably reduced dimension by a feature extraction technique, and the underestimate space was constructed according to abstract convex theory. The entropy effect caused by searching in the high-dimensional conformational space could thus be avoided through this conversion. Tight lower-bound estimate information was obtained to guide the search direction, and invalid search areas, in which the global optimal solution is not located, could be eliminated in advance. Moreover, instead of expensively calculating the energy of conformations in the original conformational space, the estimate value is employed to judge whether a conformation is worth exploring, reducing the evaluation time and thereby making the computational cost lower and the search process more efficient. Additionally, fragment assembly and the Monte Carlo method are combined to generate a series of metastable conformations by sampling in the conformational space. The proposed method provides a novel technique for the search problem of protein conformational space. Twenty small-to-medium structurally diverse proteins were tested, and the proposed ACUE method was compared with It Fix, HEA, Rosetta and the developed method LEDE without underestimate information. Test results show that the ACUE method can more rapidly and more

  5. Border Collision Bifurcations in a Generalized Model of Population Dynamics

    Directory of Open Access Journals (Sweden)

    Lilia M. Ladino

    2016-01-01

    Full Text Available We analyze the dynamics of a generalized discrete-time population model of a two-stage species with recruitment and capture. This generalization, which is inspired by other approaches and real data found in the literature, consists in placing no restriction on the values of the two key parameters appearing in the model, that is, the natural death rate and the mortality rate due to fishing activity. In the more general case the feasibility of the system has been preserved by posing suitable formulas for the piecewise map defining the model. The resulting two-dimensional nonlinear map is not smooth, though continuous, as its definition changes whenever a border is crossed in the phase plane. Hence, techniques from the mathematical theory of piecewise smooth dynamical systems must be applied to show that, due to the existence of borders, abrupt changes in the dynamic behavior of population sizes and multistability emerge. The main novelty of the present contribution with respect to previous ones is that, while using real data, richer dynamics are produced, such as fluctuations and multistability. Such new evidence is of great interest in biology, since new strategies to preserve the survival of the species can be suggested.

  6. Response of an ocean general circulation model to wind and ...

    Indian Academy of Sciences (India)

    The stretched-coordinate ocean general circulation model has been designed to study the observed variability due to wind and thermodynamic forcings. The model domain extends from 60°N to 60°S and is cyclically continuous in the longitudinal direction. The horizontal resolution is 5° × 5°, with 9 discrete vertical levels.

  7. A generalized model for compact stars

    Energy Technology Data Exchange (ETDEWEB)

    Aziz, Abdul [Bodai High School (H.S.), Department of Physics, Kolkata, West Bengal (India); Ray, Saibal [Government College of Engineering and Ceramic Technology, Department of Physics, Kolkata, West Bengal (India); Rahaman, Farook [Jadavpur University, Department of Mathematics, Kolkata, West Bengal (India)

    2016-05-15

    By virtue of the maximum entropy principle, we get an Euler-Lagrange equation which is a highly nonlinear differential equation containing the mass function and its derivatives. Solving the equation by a homotopy perturbation method we derive a generalized expression for the mass which is a polynomial function of the radial distance. Using the mass function we find a partially stable configuration and its characteristics. We show that different physical features of the known compact stars, viz. Her X-1, RX J 1856-37, SAX J (SS1), SAX J (SS2), and PSR J 1614-2230, can be explained by the present model. (orig.)

  8. A proposed general model of information behaviour.

    Directory of Open Access Journals (Sweden)

    2003-01-01

    Full Text Available Presents a critical description of Wilson's (1996) global model of information behaviour and proposes major modifications on the basis of research into the information behaviour of managers, conducted in Poland. The theoretical analysis and research results suggest that Wilson's model has certain imperfections, both in its conceptual content and in its graphical presentation. The model, for example, cannot be used to describe managers' information behaviour, since managers basically are not the end users of information services external to the organization or of computerized information services, and they acquire information mainly through various intermediaries. Therefore, the model cannot be considered a general model, applicable to every category of information users. The proposed new model encompasses the main concepts of Wilson's model, such as: person-in-context, three categories of intervening variables (individual, social and environmental), activating mechanisms, the cyclic character of information behaviours, and the adoption of a multidisciplinary approach to explain them. However, the new model introduces several changes. They include: 1. identification of 'context' with the intervening variables; 2. immersion of the chain of information behaviour in the 'context', to indicate that the context variables influence behaviour at all stages of the process (identification of needs, looking for information, processing and using it); 3. stress on the fact that the activating mechanisms can also occur at all stages of the information acquisition process; 4. introduction of two basic strategies of looking for information: personally and/or using various intermediaries.

  9. On the general procedure for modelling complex ecological systems

    International Nuclear Information System (INIS)

    He Shanyu.

    1987-12-01

    In this paper, the principle of a general procedure for modelling complex ecological systems, i.e. the Adaptive Superposition Procedure (ASP), is briefly stated. The result of applying ASP in a national project for ecological regionalization is also described. (author). 3 refs

  10. A Generalized Dynamic Model of Geared System: Establishment and Application

    Directory of Open Access Journals (Sweden)

    Hui Liu

    2011-12-01

    Full Text Available In order to make the dynamic characteristic simulation of ordinary and planetary gear drives more accurate and more efficient, a generalized dynamic model of a geared system, including internal and external mesh gears, is established in this paper. It is used to build a mathematical model that achieves automatic judgment of the gear mesh state, so that we no longer need to distinguish active from passive gears, and complicated power flow analysis can be avoided. With numerical integration, the axis orbit diagrams and dynamic gear mesh force characteristics are acquired, and the results show that the dynamic response of translational displacement is greater when the change of the contact line direction is considered; with a quick change of the contact line direction, the amplitude of the mesh force increases, which easily causes damage to the gear teeth. Moreover, compared with ordinary gears, the dynamic responses of planetary gears are affected more strongly by the gear backlash. Simulation results show the effectiveness of the generalized dynamic model and the mathematical model.

  11. A report on workshops: General circulation model study of climate- chemistry interaction

    International Nuclear Information System (INIS)

    Wei-Chyung, Wang; Isaksen, I.S.A.

    1993-01-01

    This report summarizes the discussion on the General Circulation Model Study of Climate-Chemistry Interaction from two workshops, the first held 19--21 August 1992 at Oslo, Norway and the second 26--27 May 1993 at Albany, New York, USA. The workshops are IAMAP activities under the Trace Constituent Working Group. The main objective of the two workshops was to recommend specific general circulation model (GCM) studies of the ozone distribution and the climatic effect of its changes. The workshops also discussed the climatic implications of increasing sulfate aerosols because of their importance to regional climate. The workshops were organized into four working groups: observation of atmospheric O3; modeling of atmospheric chemical composition; modeling of sulfate aerosols; and aspects of climate modeling.

  12. On-line validation of linear process models using generalized likelihood ratios

    International Nuclear Information System (INIS)

    Tylee, J.L.

    1981-12-01

    A real-time method for testing the validity of linear models of nonlinear processes is described and evaluated. Using generalized likelihood ratios, the model dynamics are continually monitored to see if the process has moved far enough away from the nominal linear model operating point to justify generation of a new linear model. The method is demonstrated using a seventh-order model of a natural circulation steam generator.

  13. Comparison of two recent models for estimating actual evapotranspiration using only regularly recorded data

    Science.gov (United States)

    Ali, M. F.; Mawdsley, J. A.

    1987-09-01

    An advection-aridity model for estimating actual evapotranspiration ET is tested with over 700 days of lysimeter evapotranspiration and meteorological data from barley, turf and rye-grass at three sites in the U.K. The performance of the model is also compared with that of the API model proposed by Mawdsley and Ali (1979). The test shows that the advection-aridity model overestimates nonpotential ET and tends to underestimate potential ET, but when tested with potential and nonpotential data together, the two tendencies appear to cancel each other. On a daily basis the performance of this model is of the same order as that of the API model: correlation coefficients of 0.62 and 0.68, respectively, were obtained between model estimates and lysimeter data. For periods greater than one day, the performance of both models generally improves.

  14. A General Attribute and Rule Based Role-Based Access Control Model

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Growing numbers of users, and the many access control policies that involve different resource attributes in service-oriented environments, bring various problems in protecting resources. This paper analyzes the relationships of resource attributes to user attributes in all policies, and proposes a general attribute and rule based role-based access control (GAR-RBAC) model to meet these security needs. The model can dynamically assign users to roles via rules, accommodating growing numbers of users. These rules use different attribute expressions and permissions as part of the authorization constraints, and are defined by analyzing the relations of resource attributes to user attributes in the many access policies defined by the enterprise. GAR-RBAC is a general access control model: it can support many access control policies and can be applied more widely to services. The paper also describes how to use the GAR-RBAC model in Web service environments.

  15. Exploring the squeezed three-point galaxy correlation function with generalized halo occupation distribution models

    Science.gov (United States)

    Yuan, Sihan; Eisenstein, Daniel J.; Garrison, Lehman H.

    2018-04-01

    We present the GeneRalized ANd Differentiable Halo Occupation Distribution (GRAND-HOD) routine that generalizes the standard 5 parameter halo occupation distribution model (HOD) with various halo-scale physics and assembly bias. We describe the methodology of 4 different generalizations: satellite distribution generalization, velocity bias, closest approach distance generalization, and assembly bias. We showcase the signatures of these generalizations in the 2-point correlation function (2PCF) and the squeezed 3-point correlation function (squeezed 3PCF). We identify generalized HOD prescriptions that are nearly degenerate in the projected 2PCF and demonstrate that these degeneracies are broken in the redshift-space anisotropic 2PCF and the squeezed 3PCF. We also discuss the possibility of identifying degeneracies in the anisotropic 2PCF and further demonstrate the extra constraining power of the squeezed 3PCF on galaxy-halo connection models. We find that within our current HOD framework, the anisotropic 2PCF can predict the squeezed 3PCF better than its statistical error. This implies that a discordant squeezed 3PCF measurement could falsify the particular HOD model space. Alternatively, it is possible that further generalizations of the HOD model would open opportunities for the squeezed 3PCF to provide novel parameter measurements. The GRAND-HOD Python package is publicly available at https://github.com/SandyYuan/GRAND-HOD.
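As a concrete reference point, the standard 5-parameter HOD that GRAND-HOD generalizes is commonly written in the Zheng et al. style. The sketch below uses illustrative parameter values (not taken from the paper) to show the mean central and satellite occupations as functions of halo mass:

```python
import math

# Standard 5-parameter HOD in the common Zheng et al.-style form.
# Parameter values are illustrative placeholders, not from the paper.
LOG_MMIN, SIGMA_LOGM = 12.0, 0.3         # central cutoff mass and its width
LOG_M0, LOG_M1, ALPHA = 12.5, 13.5, 1.0  # satellite cutoff, scale, slope

def n_central(mass):
    """Mean number of central galaxies in a halo of mass `mass` [Msun/h]."""
    return 0.5 * (1.0 + math.erf((math.log10(mass) - LOG_MMIN) / SIGMA_LOGM))

def n_satellite(mass):
    """Mean number of satellites; zero below the cutoff mass 10**LOG_M0."""
    excess = max(mass - 10**LOG_M0, 0.0)
    return n_central(mass) * (excess / 10**LOG_M1) ** ALPHA

for mass in (1e12, 1e13, 1e14):
    print(f"{mass:.0e}: centrals {n_central(mass):.3f}, satellites {n_satellite(mass):.3f}")
```

GRAND-HOD's generalizations (satellite distribution, velocity bias, closest approach, assembly bias) add parameters on top of this baseline occupation function.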

  16. Use of Paired Simple and Complex Models to Reduce Predictive Bias and Quantify Uncertainty

    DEFF Research Database (Denmark)

    Doherty, John; Christensen, Steen

    2011-01-01

    -constrained uncertainty analysis. Unfortunately, however, many system and process details on which uncertainty may depend are, by design, omitted from simple models. This can lead to underestimation of the uncertainty associated with many predictions of management interest. The present paper proposes a methodology...... of these details born of the necessity for model outputs to replicate observations of historical system behavior. In contrast, the rapid run times and general numerical reliability of simple models often promulgates good calibration and ready implementation of sophisticated methods of calibration...... that attempts to overcome the problems associated with complex models on the one hand and simple models on the other hand, while allowing access to the benefits each of them offers. It provides a theoretical analysis of the simplification process from a subspace point of view, this yielding insights...

  17. Vacuum Expectation Value Profiles of the Bulk Scalar Field in the Generalized Randall-Sundrum Model

    International Nuclear Information System (INIS)

    Moazzen, M.; Tofighi, A.; Farokhtabar, A.

    2015-01-01

    In the generalized Randall-Sundrum warped brane-world model the cosmological constant induced on the visible brane can be positive or negative. In this paper we investigate profiles of the vacuum expectation value (VEV) of the bulk scalar field under general Dirichlet and Neumann boundary conditions in the generalized warped brane-world model. We show that the VEV profiles generally depend on the value of the brane cosmological constant. We find that the VEV profiles of the bulk scalar field for a visible brane with negative cosmological constant and positive tension are quite distinct from those of the Randall-Sundrum model. In addition we show that the VEV profiles for a visible brane with a large positive cosmological constant also differ from those of the Randall-Sundrum model. We also verify that the Goldberger-Wise mechanism can work under nonzero Dirichlet boundary conditions in the generalized Randall-Sundrum model.

  18. On the treatment of airline travelers in mathematical models.

    Directory of Open Access Journals (Sweden)

    Michael A Johansson

    Full Text Available The global spread of infectious diseases is facilitated by the ability of infected humans to travel thousands of miles in short time spans, rapidly transporting pathogens to distant locations. Mathematical models of the actual and potential spread of specific pathogens can assist public health planning in the case of such an event. Models should generally be parsimonious, but must consider all potentially important components of the system to the greatest extent possible. We demonstrate and discuss important assumptions relative to the parameterization and structural treatment of airline travel in mathematical models. Among other findings, we show that the most common structural treatment of travelers leads to underestimation of the speed of spread and that connecting travel is critical to a realistic spread pattern. Models involving travelers can be improved significantly by relatively simple structural changes but also may require further attention to details of parameterization.

  19. Modelling road accident blackspots data with the discrete generalized Pareto distribution.

    Science.gov (United States)

    Prieto, Faustino; Gómez-Déniz, Emilio; Sarabia, José María

    2014-10-01

    This study shows how road traffic network events, in particular road accidents at blackspots, can be modelled with simple probabilistic distributions. We considered the number of crashes and the number of fatalities at Spanish blackspots in the period 2003-2007, from the Spanish General Directorate of Traffic (DGT). We modelled those datasets, respectively, with the discrete generalized Pareto distribution (a discrete parametric model with three parameters) and with the discrete Lomax distribution (a discrete parametric model with two parameters, and a particular case of the previous model). To that end, we analyzed the basic properties of both parametric models: cumulative distribution, survival, probability mass, quantile and hazard functions, genesis and rth-order moments; applied two estimation methods for their parameters: the μ and (μ+1) frequency method and the maximum likelihood method; used two goodness-of-fit tests: the chi-square test and the discrete Kolmogorov-Smirnov test based on bootstrap resampling; and compared them with the classical negative binomial distribution in terms of absolute probabilities and in models including covariates. We found that these probabilistic models can be useful to describe the road accident blackspot datasets analyzed. Copyright © 2014 Elsevier Ltd. All rights reserved.
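As an illustrative sketch (the paper's exact parameterization may differ), a discrete heavy-tailed model of this family can be obtained by discretizing the continuous Lomax survival function S(x) = (1 + x/σ)^(−α), taking P(X = k) = S(k) − S(k+1) for k = 0, 1, 2, ...:

```python
# Discrete Lomax pmf via discretization of the continuous survival function.
# Parameter values (alpha, sigma) are illustrative, not fitted to DGT data.

def survival(k, alpha, sigma):
    """Continuous Lomax survival function evaluated at integer k."""
    return (1.0 + k / sigma) ** (-alpha)

def pmf(k, alpha, sigma):
    """P(X = k) = S(k) - S(k+1): probability mass at count k."""
    return survival(k, alpha, sigma) - survival(k + 1, alpha, sigma)

alpha, sigma = 1.5, 2.0
probs = [pmf(k, alpha, sigma) for k in range(1000)]
print(sum(probs))  # telescopes to 1 - S(1000), so very close to 1
```

The telescoping sum makes normalization automatic, and the power-law survival function gives the heavy right tail that crash-count data at blackspots typically exhibit.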

  20. Self-organization of critical behavior in controlled general queueing models

    International Nuclear Information System (INIS)

    Blanchard, Ph.; Hongler, M.-O.

    2004-01-01

    We consider general queueing models of the (G/G/1) type with service times controlled by the busy period. For feedback control mechanisms driving the system to very high traffic load, it is shown that the busy period probability density exhibits a generic -3/2 power law, which is a typical mean-field behavior of SOC models.
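The -3/2 exponent is the same one that governs first-return times of a symmetric random walk, the textbook mean-field picture of a queue at critical load. As an illustrative check (not taken from the paper), the exact first-return probabilities reproduce the exponent:

```python
from math import comb, log

# Exact first-return probability of a symmetric random walk:
#   P(first return to 0 at step 2n) = C(2n, n) / ((2n - 1) * 4**n)
def first_return(n):
    return comb(2 * n, n) / ((2 * n - 1) * 4**n)

# The log-log slope of P against n approaches -3/2 for large n,
# matching the generic busy-period power law.
slope = (log(first_return(1000)) - log(first_return(100))) / (log(1000) - log(100))
print(round(slope, 3))  # close to -1.5
```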

  1. General sets of coherent states and the Jaynes-Cummings model

    International Nuclear Information System (INIS)

    Daoud, M.; Hussin, V.

    2002-01-01

    General sets of coherent states are constructed for quantum systems admitting a nondegenerate infinite discrete energy spectrum. They are eigenstates of an annihilation operator and satisfy the usual properties of standard coherent states. The application of such a construction to the quantum optics Jaynes-Cummings model leads to a new understanding of the properties of this model. (author)

  2. Self-organization of critical behavior in controlled general queueing models

    Science.gov (United States)

    Blanchard, Ph.; Hongler, M.-O.

    2004-03-01

    We consider general queueing models of the (G/G/1) type with service times controlled by the busy period. For feedback control mechanisms driving the system to very high traffic load, it is shown that the busy period probability density exhibits a generic -3/2 power law, which is a typical mean field behavior of SOC models.

  3. Field Measurements Indicate Unexpected, Serious Underestimation of Mussel Heart Rates and Thermal Tolerance by Laboratory Studies.

    Directory of Open Access Journals (Sweden)

    Morgana Tagliarolo

    Full Text Available Attempts to predict the response of species to long-term environmental change are generally based on extrapolations from laboratory experiments that inevitably simplify the complex interacting effects that occur in the field. We recorded heart rates of two genetic lineages of the brown mussel Perna perna over a full tidal cycle in-situ at two different sites in order to evaluate the cardiac responses of the two genetic lineages present on the South African coast to temperature and the immersion/emersion cycle. "Robomussel" temperature loggers were used to monitor thermal conditions at the two sites over one year. Comparison with live animals showed that robomussels provided a good estimate of mussel body temperatures. A significant difference in estimated body temperatures was observed between the sites and the results showed that, under natural conditions, temperatures regularly approach or exceed the thermal limits of P. perna identified in the laboratory. The two P. perna lineages showed similar tidal and diel patterns of heart rate, with higher cardiac activity during daytime immersion and minimal values during daytime emersion. Comparison of the heart rates measured in the field with data previously measured in the laboratory indicates that laboratory results seriously underestimate heart rate activity, by as much as 75%, especially during immersion. Unexpectedly, field estimates of body temperatures indicated an ability to tolerate temperatures considered lethal on the basis of laboratory measurements. This suggests that the interaction of abiotic conditions in the field does not necessarily raise vulnerability to high temperatures.

  4. Large proportions of overweight and obese children, as well as their parents, underestimate children's weight status across Europe. The ENERGY (EuropeaN Energy balance Research to prevent excessive weight Gain among Youth) project.

    Science.gov (United States)

    Manios, Yannis; Moschonis, George; Karatzi, Kalliopi; Androutsos, Odysseas; Chinapaw, Mai; Moreno, Luis A; Bere, Elling; Molnar, Denes; Jan, Natasha; Dössegger, Alain; De Bourdeaudhuij, Ilse; Singh, Amika; Brug, Johannes

    2015-08-01

    To investigate the magnitude and country-specific differences in underestimation of children's weight status by children and their parents in Europe and to further explore its associations with family characteristics and sociodemographic factors. Children's weight and height were objectively measured. Parental anthropometric and sociodemographic data were self-reported. Children and their parents were asked to comment on children's weight status based on five-point Likert-type scales, ranging from 'I am much too thin' to 'I am much too fat' (children) and 'My child's weight is way too little' to 'My child's weight is way too much' (parents). These data were combined with children's actual weight status, in order to assess underestimation of children's weight status by children themselves and by their parents, respectively. Chi-square tests and multilevel logistic regression analyses were conducted to examine the aims of the current study. Eight European countries participating in the ENERGY (EuropeaN Energy balance Research to prevent excessive weight Gain among Youth) project. A school-based survey among 6113 children aged 10-12 years and their parents. In the total sample, 42·9 % of overweight/obese children and 27·6 % of parents of overweight/obese children underestimated their and their children's weight status, respectively. A higher likelihood for this underestimation of weight status by children and their parents was observed in Eastern and Southern compared with Central/Northern countries. Overweight or obese parents (OR=1·81; 95 % CI 1·39, 2·35 and OR=1·78, 95 % CI 1·22, 2·60), parents of boys (OR=1·32; 95 % CI 1·05, 1·67) and children from overweight/obese (OR=1·60; 95 % CI 1·29, 1·98 and OR=1·76; 95 % CI 1·29, 2·41) or unemployed parents (OR=1·53; 95 % CI 1·22, 1·92) were more likely to underestimate children's weight status. Children of overweight or obese parents, those from Eastern and Southern Europe, boys, younger children and

  5. Generalized Jaynes-Cummings model as a quantum search algorithm

    International Nuclear Information System (INIS)

    Romanelli, A.

    2009-01-01

    We propose a continuous time quantum search algorithm using a generalization of the Jaynes-Cummings model. In this model the states of the atom are the elements among which the algorithm realizes the search, exciting resonances between the initial and the searched states. This algorithm behaves like Grover's algorithm; the optimal search time is proportional to the square root of the size of the search set and the probability to find the searched state oscillates periodically in time. In this frame, it is possible to reinterpret the usual Jaynes-Cummings model as a trivial case of the quantum search algorithm.

  6. Doctor-patient relationships in general practice--a different model.

    Science.gov (United States)

    Kushner, T

    1981-09-01

    Philosophical concerns cannot be excluded from even a cursory examination of the physician-patient relationship. Two possible alternatives for determining what this relationship entails are the teleological (outcome) approach vs the deontological (process) one. Traditionally, this relationship has been structured around the 'clinical model' which views the physician-patient relationship in teleological terms. Data on the actual content of general medical practice indicate the advisability of reassessing this relationship, and suggest that the 'clinical model' may be too limiting, and that a more appropriate basis for the physician-patient relationship is one described in this paper as the 'relational model'.

  7. General circulation model study of atmospheric carbon monoxide

    International Nuclear Information System (INIS)

    Pinto, J.P.; Yung, Y.L.; Rind, D.; Russell, G.L.; Lerner, J.A.; Hansen, J.E.; Hameed, S.

    1983-01-01

    The carbon monoxide cycle is studied by incorporating the known and hypothetical sources and sinks in a tracer model that uses the winds generated by a general circulation model. Photochemical production and loss terms, which depend on OH radical concentrations, are calculated in an interactive fashion. The computed global distribution and seasonal variations of CO are compared with observations to obtain constraints on the distribution and magnitude of the sources and sinks of CO, and on the tropospheric abundance of OH. The simplest model that accounts for available observations requires a low latitude plant source of about 1.3 x 10^15 g yr^-1, in addition to sources from incomplete combustion of fossil fuels and oxidation of methane. The globally averaged OH concentration calculated in the model is 7 x 10^5 cm^-3. Models that calculate globally averaged OH concentrations much lower than our nominal value are not consistent with the observed variability of CO. Such models are also inconsistent with measurements of CO isotopic abundances, which imply the existence of plant sources.

  8. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...
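The computational heart of GLAM fitting is that a tensor-product design matrix never needs to be materialized. A minimal numpy sketch (with arbitrary illustrative sizes) of the underlying identity (X2 ⊗ X1)·vec(Θ) = vec(X1 Θ X2ᵀ):

```python
import numpy as np

# For a tensor-product design matrix X = X2 ⊗ X1, the product X·vec(Θ)
# equals vec(X1 Θ X2ᵀ), so the full Kronecker product need never be
# formed or stored. Sizes here are illustrative.
rng = np.random.default_rng(0)
X1 = rng.standard_normal((50, 4))    # marginal design matrix, dimension 1
X2 = rng.standard_normal((60, 5))    # marginal design matrix, dimension 2
theta = rng.standard_normal((4, 5))  # coefficient array

slow = np.kron(X2, X1) @ theta.flatten(order="F")  # builds a 3000 x 20 matrix
fast = (X1 @ theta @ X2.T).flatten(order="F")      # uses only the marginals
print(np.allclose(slow, fast))  # True
```

For d-dimensional arrays the same rotation trick applies one marginal matrix at a time, which is what makes design-matrix-free fitting of large GLAMs feasible at all.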

  9. The Chemistry of Atmosphere-Forest Exchange (CAFE Model – Part 2: Application to BEARPEX-2007 observations

    Directory of Open Access Journals (Sweden)

    G. M. Wolfe

    2011-02-01

    Full Text Available In a companion paper, we introduced the Chemistry of Atmosphere-Forest Exchange (CAFE model, a vertically-resolved 1-D chemical transport model designed to probe the details of near-surface reactive gas exchange. Here, we apply CAFE to noontime observations from the 2007 Biosphere Effects on Aerosols and Photochemistry Experiment (BEARPEX-2007. In this work we evaluate the CAFE modeling approach, demonstrate the significance of in-canopy chemistry for forest-atmosphere exchange and identify key shortcomings in the current understanding of intra-canopy processes.

    CAFE generally reproduces BEARPEX-2007 observations but requires an enhanced radical recycling mechanism to overcome a factor of 6 underestimate of hydroxyl (OH) concentrations observed during a warm (~29 °C) period. Modeled fluxes of acyl peroxy nitrates (APN) are quite sensitive to gradients in chemical production and loss, demonstrating that chemistry may perturb forest-atmosphere exchange even when the chemical timescale is long relative to the canopy mixing timescale. The model underestimates peroxy acetyl nitrate (PAN) fluxes by 50% and the exchange velocity by nearly a factor of three under warmer conditions, suggesting that near-surface APN sinks are underestimated relative to the sources. Nitric acid typically dominates gross dry N deposition at this site, though other reactive nitrogen (NOy) species can comprise up to 28% of the N deposition budget under cooler conditions. Upward NO2 fluxes cause the net above-canopy NOy flux to be ~30% lower than the gross depositional flux. CAFE under-predicts ozone fluxes and exchange velocities by ~20%. Large uncertainty in the parameterization of cuticular and ground deposition precludes conclusive attribution of non-stomatal fluxes to chemistry or surface uptake. Model-measurement comparisons of vertical concentration gradients for several emitted species suggest that the lower canopy airspace may be

  10. The epistemological status of general circulation models

    Science.gov (United States)

    Loehle, Craig

    2018-03-01

    Forecasts of both likely anthropogenic effects on climate and consequent effects on nature and society are based on large, complex software tools called general circulation models (GCMs). Forecasts generated by GCMs have been used extensively in policy decisions related to climate change. However, the relation between underlying physical theories and results produced by GCMs is unclear. In the case of GCMs, many discretizations and approximations are made, and simulating Earth system processes is far from simple and currently leads to some results with unknown energy balance implications. Statistical testing of GCM forecasts for degree of agreement with data would facilitate assessment of fitness for use. If model results need to be put on an anomaly basis due to model bias, then both visual and quantitative measures of model fit depend strongly on the reference period used for normalization, making testing problematic. Epistemology is here applied to problems of statistical inference during testing, the relationship between the underlying physics and the models, the epistemic meaning of ensemble statistics, problems of spatial and temporal scale, the existence or not of an unforced null for climate fluctuations, the meaning of existing uncertainty estimates, and other issues. Rigorous reasoning entails carefully quantifying levels of uncertainty.

  11. The use of Chernobyl fallout to test model predictions of the transfer of radioiodine from air to vegetation to milk

    International Nuclear Information System (INIS)

    Hoffman, F.O.; Amaral, E.

    1989-01-01

    Comparison of observed values with model predictions indicates a tendency for the models to overpredict the air-vegetation-milk transfer of Chernobyl I-131 by one to two orders of magnitude. Detailed analysis of the data indicated that, in general, most overpredictions were accounted for by the portion of the air-pasture-cow-milk pathway dealing with the transfer from air to pasture vegetation rather than the transfer from vegetation to milk. A partial analysis using available data to infer site-specific conditions and parameter values indicates that differences between model predictions and observations can be explained by: 1) overestimation of the fraction of the total amount of I-131 in air that was present as molecular vapour, 2) overestimation of wet and dry deposition of elemental and organic iodine and particulate aerosols, 3) overestimation of initial vegetation interception of material deposited during severe thunderstorms, 4) underestimation of the rates of weathering and growth dilution of material deposited on vegetation during periods of spring growth, 5) underestimation of the amount of uncontaminated feed consumed by dairy cows, and 6) overestimation of the diet-to-milk transfer coefficient for I-131. (orig./HP)

  12. Self-dual configurations in Abelian Higgs models with k-generalized gauge field dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Casana, R.; Cavalcante, A. [Departamento de Física, Universidade Federal do Maranhão,65080-805, São Luís, Maranhão (Brazil); Hora, E. da [Departamento de Física, Universidade Federal do Maranhão,65080-805, São Luís, Maranhão (Brazil); Coordenadoria Interdisciplinar de Ciência e Tecnologia, Universidade Federal do Maranhão,65080-805, São Luís, Maranhão (Brazil)

    2016-12-14

    We have shown the existence of self-dual solutions in new Maxwell-Higgs scenarios where the gauge field possesses k-generalized dynamics, i.e., the kinetic term of the gauge field is a highly nonlinear function of F{sub μν}F{sup μν}. We have implemented our proposal by means of a k-generalized model displaying the spontaneous symmetry breaking phenomenon. We implement consistently the Bogomol’nyi-Prasad-Sommerfield formalism, providing highly nonlinear self-dual equations whose solutions are electrically neutral and possess total energy proportional to the magnetic flux. Among the infinite set of possible configurations, we have found families of k-generalized models whose self-dual equations have a form mathematically similar to the ones arising in the Maxwell-Higgs or Chern-Simons-Higgs models. Furthermore, we have verified that our proposal also supports infinite twinlike models with |ϕ|{sup 4}-potential or |ϕ|{sup 6}-potential. With the aim of showing explicitly that the BPS equations are able to provide well-behaved configurations, we have considered a test model in order to study axially symmetric vortices. Depending on the self-dual potential, we have shown that the k-generalized model is able to produce solutions that for long distances have an exponential decay (as Abrikosov-Nielsen-Olesen vortices) or a power-law decay (characterizing delocalized vortices). In all cases, we observe that the generalization modifies the vortex core size, the magnetic field amplitude and the bosonic masses, but the total energy remains proportional to the quantized magnetic flux.

  13. Underestimation of nuclear fuel burnup – theory, demonstration and solution in numerical models

    Directory of Open Access Journals (Sweden)

    Gajda Paweł

    2016-01-01

    Full Text Available Monte Carlo methodology provides a reference statistical solution of neutron transport criticality problems of nuclear systems. Estimated reaction rates can be applied as an input to the Bateman equations that govern the isotopic evolution of reactor materials. Because the statistical solution of the Boltzmann equation is computationally expensive, it is in practice applied to time steps of limited length. In this paper we show that a simple staircase step model leads to underprediction of numerical fuel burnup (Fissions per Initial Metal Atom – FIMA. Theoretical considerations indicate that this error is inversely proportional to the length of the time step and originates from the variation of heating per source neutron. The bias can be diminished by application of a predictor-corrector step model. A set of burnup simulations with various step lengths and coupling schemes has been performed. SERPENT code version 1.17 has been applied to the model of a typical fuel assembly from a Pressurized Water Reactor. In the reference case FIMA reaches 6.24%, which is equivalent to about 60 GWD/tHM of industrial burnup. Discrepancies of up to 1% have been observed depending on the time step model, and theoretical predictions are consistent with numerical results. Conclusions presented in this paper are important for research and development concerning the nuclear fuel cycle, also in the context of Gen4 systems.
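
    The staircase-versus-predictor-corrector effect described in this record can be illustrated on a toy scalar depletion equation. This is a hedged sketch only: the paper's results come from SERPENT 1.17 coupled to full Bateman solves on a PWR assembly, whereas the decay constant, time span, and step count below are invented for illustration. The staircase scheme holds the beginning-of-step rate constant (explicit Euler), while the predictor-corrector scheme averages beginning- and end-of-step rates (Heun's method), reducing the per-step bias.

```python
import math

def staircase(n0, lam, t_end, steps):
    # Staircase coupling: hold the beginning-of-step reaction rate constant
    # over the whole step (explicit Euler on dN/dt = -lam * N).
    dt = t_end / steps
    n = n0
    for _ in range(steps):
        n += dt * (-lam * n)
    return n

def predictor_corrector(n0, lam, t_end, steps):
    # Predictor-corrector coupling: average the beginning-of-step rate and
    # the rate evaluated at the predicted end-of-step state (Heun's method).
    dt = t_end / steps
    n = n0
    for _ in range(steps):
        f0 = -lam * n                  # predictor rate
        n_pred = n + dt * f0           # predicted end-of-step composition
        f1 = -lam * n_pred             # corrector rate
        n += dt * 0.5 * (f0 + f1)
    return n

exact = 1.0 * math.exp(-2.0)           # analytic solution N0 * exp(-lam * t)
err_stair = abs(staircase(1.0, 1.0, 2.0, 8) - exact)
err_pc = abs(predictor_corrector(1.0, 1.0, 2.0, 8) - exact)
# Burnup = 1 - N/N0, so the step-model bias in N is a bias in burnup too;
# the predictor-corrector error is roughly an order of magnitude smaller here.
assert err_pc < err_stair
```

The same qualitative ordering (predictor-corrector error well below staircase error at equal step length) is what the record reports for the full assembly calculations.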

  14. Testing for constant nonparametric effects in general semiparametric regression models with interactions

    KAUST Repository

    Wei, Jiawei

    2011-07-01

    We consider the problem of testing for a constant nonparametric effect in a general semi-parametric regression model when there is the potential for interaction between the parametrically and nonparametrically modeled variables. The work was originally motivated by a unique testing problem in genetic epidemiology (Chatterjee et al., 2006) that involved a typical generalized linear model but with an additional term reminiscent of the Tukey one-degree-of-freedom formulation, and their interest was in testing for main effects of the genetic variables, while gaining statistical power by allowing for a possible interaction between genes and the environment. Later work (Maity et al., 2009) involved the possibility of modeling the environmental variable nonparametrically, but they focused on whether there was a parametric main effect for the genetic variables. In this paper, we consider the complementary problem, where the interest is in testing for the main effect of the nonparametrically modeled environmental variable. We derive a generalized likelihood ratio test for this hypothesis, show how to implement it, and provide evidence that our method can improve statistical power when compared to standard partially linear models with main effects only. We use the method for the primary purpose of analyzing data from a case-control study of colorectal adenoma.

  15. A General Accelerated Degradation Model Based on the Wiener Process.

    Science.gov (United States)

    Liu, Le; Li, Xiaoyang; Sun, Fuqiang; Wang, Ning

    2016-12-06

    Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.

  16. A General Accelerated Degradation Model Based on the Wiener Process

    Directory of Open Access Journals (Sweden)

    Le Liu

    2016-12-01

    Full Text Available Accelerated degradation testing (ADT) is an efficient tool to conduct material service reliability and safety evaluations by analyzing performance degradation data. Traditional stochastic process models are mainly for linear or linearization degradation paths. However, those methods are not applicable for the situations where the degradation processes cannot be linearized. Hence, in this paper, a general ADT model based on the Wiener process is proposed to solve the problem for accelerated degradation data analysis. The general model can consider the unit-to-unit variation and temporal variation of the degradation process, and is suitable for both linear and nonlinear ADT analyses with single or multiple acceleration variables. The statistical inference is given to estimate the unknown parameters in both constant stress and step stress ADT. The simulation example and two real applications demonstrate that the proposed method can yield reliable lifetime evaluation results compared with the existing linear and time-scale transformation Wiener processes in both linear and nonlinear ADT analyses.
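
    A minimal Monte Carlo sketch of the time-transformed Wiener degradation process that underlies models of this kind: X(t) = μΛ(t) + σB(Λ(t)) with a nonlinear time scale Λ(t) = t^b. All parameter values below are invented for illustration, and the paper's actual contribution (inference with acceleration variables and step-stress plans) is not reproduced here; the sketch only shows that simulated paths track the nonlinear mean trend E[X(t)] = μ t^b.

```python
import random, math

def simulate_path(mu, sigma, b, t_grid, rng):
    # One degradation path X(t) = mu * Lambda(t) + sigma * B(Lambda(t)),
    # with the nonlinear time-scale transformation Lambda(t) = t**b.
    x, lam_prev = 0.0, 0.0
    path = []
    for t in t_grid:
        lam = t ** b
        dlam = lam - lam_prev          # operating-time increment
        x += mu * dlam + sigma * math.sqrt(dlam) * rng.gauss(0.0, 1.0)
        lam_prev = lam
        path.append(x)
    return path

rng = random.Random(42)
t_grid = [0.1 * k for k in range(1, 51)]   # observation times up to t = 5
mu, sigma, b = 2.0, 0.3, 1.5               # hypothetical drift/diffusion/shape
n_units = 400                               # tested units (unit-to-unit average)
finals = [simulate_path(mu, sigma, b, t_grid, rng)[-1] for _ in range(n_units)]
mean_final = sum(finals) / n_units
# E[X(5)] = mu * 5**1.5, roughly 22.36; the Monte Carlo mean should be close.
assert abs(mean_final - mu * 5 ** 1.5) < 0.5
```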

  17. Estimating and Forecasting Generalized Fractional Long Memory Stochastic Volatility Models

    Directory of Open Access Journals (Sweden)

    Shelton Peiris

    2017-12-01

    Full Text Available This paper considers a flexible class of time series models generated by Gegenbauer polynomials incorporating the long memory in stochastic volatility (SV) components in order to develop the General Long Memory SV (GLMSV) model. We examine the corresponding statistical properties of this model, discuss the spectral likelihood estimation and investigate the finite sample properties via Monte Carlo experiments. We provide empirical evidence by applying the GLMSV model to three exchange rate return series and conclude that the results of out-of-sample forecasts adequately confirm the use of the GLMSV model in certain financial applications.

  18. Informing a hydrological model of the Ogooué with multi-mission remote sensing data

    DEFF Research Database (Denmark)

    Kittel, Cecile Marie Margaretha; Nielsen, Karina; Tøttrup, C.

    2018-01-01

    Remote sensing provides a unique opportunity to inform and constrain a hydrological model and to increase its value as a decision-support tool. In this study, we applied a multi-mission approach to force, calibrate and validate a hydrological model of the ungauged Ogooué river basin in Africa with publicly available and free remote sensing observations. We used a rainfall–runoff model based on the Budyko framework coupled with a Muskingum routing approach. We parametrized the model using the Shuttle Radar Topography Mission digital elevation model (SRTM DEM) and forced it using precipitation from ... The model also captures overall total water storage change patterns, although the amplitude of storage change is generally underestimated. By combining hydrological modeling with multi-mission remote sensing from 10 different satellite missions, we obtain new information on an otherwise unstudied basin ...

  19. Python tools for rapid development, calibration, and analysis of generalized groundwater-flow models

    Science.gov (United States)

    Starn, J. J.; Belitz, K.

    2014-12-01

    National-scale water-quality data sets for the United States have been available for several decades; however, groundwater models to interpret these data are available for only a small percentage of the country. Generalized models may be adequate to explain and project groundwater-quality trends at the national scale by using regional scale models (defined as watersheds at or between the HUC-6 and HUC-8 levels). Coast-to-coast data such as the National Hydrologic Dataset Plus (NHD+) make it possible to extract the basic building blocks for a model anywhere in the country. IPython notebooks have been developed to automate the creation of generalized groundwater-flow models from the NHD+. The notebook format allows rapid testing of methods for model creation, calibration, and analysis. Capabilities within the Python ecosystem greatly speed up the development and testing of algorithms. GeoPandas is used for very efficient geospatial processing. Raster processing includes the Geospatial Data Abstraction Library and image processing tools. Model creation is made possible through Flopy, a versatile input and output writer for several MODFLOW-based flow and transport model codes. Interpolation, integration, and map plotting included in the standard Python tool stack are also used, making the notebook a comprehensive platform within which to build and evaluate general models. Models with alternative boundary conditions, number of layers, and cell spacing can be tested against one another and evaluated by using water-quality data. Novel calibration criteria were developed by comparing modeled heads to land-surface and surface-water elevations. Information, such as predicted age distributions, can be extracted from general models and tested for its ability to explain water-quality trends. Groundwater ages then can be correlated with horizontal and vertical hydrologic position, a relation that can be used for statistical assessment of likely groundwater-quality conditions.
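
    The notebooks described in this record drive MODFLOW through Flopy; as a self-contained stand-in, the sketch below solves the simplest possible generalized flow problem (1-D steady state, homogeneous conductivity, fixed boundary heads) by Gauss-Seidel iteration. Grid size and boundary heads are invented illustration values, not anything from the record.

```python
def solve_heads(n_cells, h_left, h_right, tol=1e-10, max_iter=100_000):
    # Steady-state 1-D groundwater flow with homogeneous conductivity reduces
    # to Laplace's equation h'' = 0; each interior head is the average of its
    # neighbors. Gauss-Seidel sweeps until the largest update is below tol.
    h = [0.5 * (h_left + h_right)] * n_cells
    h[0], h[-1] = h_left, h_right      # Dirichlet (fixed-head) boundaries
    for _ in range(max_iter):
        delta = 0.0
        for i in range(1, n_cells - 1):
            new = 0.5 * (h[i - 1] + h[i + 1])
            delta = max(delta, abs(new - h[i]))
            h[i] = new
        if delta < tol:
            break
    return h

heads = solve_heads(11, 100.0, 90.0)
# Steady uniform flow gives a linear head profile: 100, 99, ..., 90.
assert abs(heads[5] - 95.0) < 1e-6
```

A real regional model adds layers, heterogeneous properties, and recharge, which is exactly what the Flopy-based notebooks automate; this block only shows the governing idea behind a single model cell.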

  20. Pain begets pain: When marathon runners are not in pain anymore, they underestimate their memory of marathon pain: A mediation analysis

    NARCIS (Netherlands)

    Babel, P.; Bajcar, E.A.; Smieja, M.; Adamczyk, W.; Swider, K.J.; Kicman, P.; Lisinska, N.

    2018-01-01

    Background: A previous study has shown that memory of pain induced by running a marathon might be underestimated. However, little is known about the factors that might influence such a memory distortion during pain recall. The aim of the study was to investigate the memory of pain induced by running

  1. Climate Simulations from Super-parameterized and Conventional General Circulation Models with a Third-order Turbulence Closure

    Science.gov (United States)

    Xu, Kuan-Man; Cheng, Anning

    2014-05-01

    A high-resolution cloud-resolving model (CRM) embedded in a general circulation model (GCM) is an attractive alternative for climate modeling because it replaces all traditional cloud parameterizations and explicitly simulates cloud physical processes in each grid column of the GCM. Such an approach is called "Multiscale Modeling Framework." MMF still needs to parameterize the subgrid-scale (SGS) processes associated with clouds and large turbulent eddies because circulations associated with planetary boundary layer (PBL) and in-cloud turbulence are unresolved by CRMs with horizontal grid sizes on the order of a few kilometers. A third-order turbulence closure (IPHOC) has been implemented in the CRM component of the super-parameterized Community Atmosphere Model (SPCAM). IPHOC is used to predict (or diagnose) fractional cloudiness and the variability of temperature and water vapor at scales that are not resolved on the CRM's grid. This model has produced promising results, especially for low-level cloud climatology, seasonal variations and diurnal variations (Cheng and Xu 2011, 2013a, b; Xu and Cheng 2013a, b). Because of the enormous computational cost of SPCAM-IPHOC, which is 400 times that of a conventional CAM, we decided to bypass the CRM and implement IPHOC directly in CAM version 5 (CAM5). IPHOC replaces the PBL/stratocumulus, shallow convection, and cloud macrophysics parameterizations in CAM5. Since there are large discrepancies in the spatial and temporal scales between the CRM and CAM5, the IPHOC used in CAM5 has to be modified from that used in SPCAM. In particular, we diagnose all second- and third-order moments except for the fluxes. These prognostic and diagnostic moments are used to select a double-Gaussian probability density function to describe the SGS variability. We also incorporate a diagnostic PBL height parameterization to represent the strong inversion above the PBL. The goal of this study is to compare the simulation of the climatology from these three

  2. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part II: Multi-layered cloud

    Energy Technology Data Exchange (ETDEWEB)

    Morrison, H; McCoy, R B; Klein, S A; Xie, S; Luo, Y; Avramov, A; Chen, M; Cole, J; Falk, M; Foster, M; Genio, A D; Harrington, J; Hoose, C; Khairoutdinov, M; Larson, V; Liu, X; McFarquhar, G; Poellot, M; Shipway, B; Shupe, M; Sud, Y; Turner, D; Veron, D; Walker, G; Wang, Z; Wolf, A; Xu, K; Yang, F; Zhang, G

    2008-02-27

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a deep, multi-layered, mixed-phase cloud system observed during the ARM Mixed-Phase Arctic Cloud Experiment. This cloud system was associated with strong surface turbulent sensible and latent heat fluxes as cold air flowed over the open Arctic Ocean, combined with a low pressure system that supplied moisture at mid-level. The simulations, performed by 13 single-column and 4 cloud-resolving models, generally overestimate the liquid water path and strongly underestimate the ice water path, although there is a large spread among the models. This finding is in contrast with results for the single-layer, low-level mixed-phase stratocumulus case in Part I of this study, as well as previous studies of shallow mixed-phase Arctic clouds, that showed an underprediction of liquid water path. The overestimate of liquid water path and underestimate of ice water path occur primarily when deeper mixed-phase clouds extending into the mid-troposphere were observed. These results suggest important differences in the ability of models to simulate Arctic mixed-phase clouds that are deep and multi-layered versus shallow and single-layered. In general, models with a more sophisticated, two-moment treatment of the cloud microphysics produce a somewhat smaller liquid water path that is closer to observations. The cloud-resolving models tend to produce a larger cloud fraction than the single-column models. The liquid water path and especially the cloud fraction have a large impact on the cloud radiative forcing at the surface, which is dominated by the longwave flux for this case.

  3. Midlatitude Forcing Mechanisms for Glacier Mass Balance Investigated Using General Circulation Models

    NARCIS (Netherlands)

    Reichert, B.K.; Bengtsson, L.; Oerlemans, J.

    2001-01-01

    A process-oriented modeling approach is applied in order to simulate glacier mass balance for individual glaciers using statistically downscaled general circulation models (GCMs). Glacier-specific seasonal sensitivity characteristics based on a mass balance model of intermediate complexity are used

  4. Generalized transport model for phase transition with memory

    International Nuclear Information System (INIS)

    Chen, Chi; Ciucci, Francesco

    2013-01-01

    A general model for phenomenological transport in phase transition is derived, which extends the Jäckle and Frisch model of phase transition with memory and the Cahn–Hilliard model. In addition to including interfacial energy to account for the presence of interfaces, we introduce viscosity and relaxation contributions, which result from incorporating the memory effect into the driving potential. Our simulation results show that even without the interfacial energy term, the viscous term can lead to transient diffuse interfaces. From the phase-transition-induced hysteresis, we discover different energy dissipation mechanisms for the interfacial energy and the viscosity effect. In addition, by combining viscosity and interfacial energy, we find that if the former dominates, the concentration difference across the phase boundary is reduced; conversely, if the interfacial energy is greater, this difference is enlarged.

  5. Underestimated Halogen Bonds Forming with Protein Backbone in Protein Data Bank.

    Science.gov (United States)

    Zhang, Qian; Xu, Zhijian; Shi, Jiye; Zhu, Weiliang

    2017-07-24

    Halogen bonds (XBs) are attracting increasing attention in biological systems. Protein Data Bank (PDB) archives experimentally determined XBs in biological macromolecules. However, no software for structure refinement in X-ray crystallography takes into account XBs, which might result in the weakening or even vanishing of experimentally determined XBs in PDB. In our previous study, we showed that side-chain XBs forming with protein side chains are underestimated in PDB on the basis of the phenomenon that the proportion of side-chain XBs to overall XBs decreases as structural resolution becomes lower and lower. However, whether the dominant backbone XBs forming with protein backbone are overlooked is still a mystery. Here, with the help of the ratio (RF) of the observed XBs' frequency of occurrence to their frequency expected at random, we demonstrated that backbone XBs are largely overlooked in PDB, too. Furthermore, three cases were discovered possessing backbone XBs in high resolution structures while losing the XBs in low resolution structures. In the last two cases, even at 1.80 Å resolution, the backbone XBs were lost, manifesting the urgent need to consider XBs in the refinement process during X-ray crystallography study.

  6. Generalized isothermal models with strange equation of state

    Indian Academy of Sciences (India)

    intention to study the Einstein–Maxwell system with a linear equation of state with ... It is our intention to model the interior of a dense realistic star with a general ... The definition m(r) = (1/2) ∫_0^r ω² ρ(ω) dω (14) represents the mass contained within a radius r, which is a useful physical quantity. The mass function (14) has ...

  7. General Separations Area (GSA) Groundwater Flow Model Update: Hydrostratigraphic Data

    Energy Technology Data Exchange (ETDEWEB)

    Bagwell, L. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Bennett, P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL); Flach, G. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2017-02-21

    This document describes the assembly, selection, and interpretation of hydrostratigraphic data for input to an updated groundwater flow model for the General Separations Area (GSA; Figure 1) at the Department of Energy’s (DOE) Savannah River Site (SRS). This report is one of several discrete but interrelated tasks that support development of an updated groundwater model (Bagwell and Flach, 2016).

  8. General classical solutions in the noncommutative CP{sup N-1} model

    Energy Technology Data Exchange (ETDEWEB)

    Foda, O.; Jack, I.; Jones, D.R.T

    2002-10-31

    We give an explicit construction of general classical solutions for the noncommutative CP{sup N-1} model in two dimensions, showing that they correspond to integer values for the action and topological charge. We also give explicit solutions for the Dirac equation in the background of these general solutions and show that the index theorem is satisfied.

  9. EVALUATING PREDICTIVE ERRORS OF A COMPLEX ENVIRONMENTAL MODEL USING A GENERAL LINEAR MODEL AND LEAST SQUARE MEANS

    Science.gov (United States)

    A General Linear Model (GLM) was used to evaluate the deviation of predicted values from expected values for a complex environmental model. For this demonstration, we used the default level interface of the Regional Mercury Cycling Model (R-MCM) to simulate epilimnetic total mer...

  10. Application of the Hyper-Poisson Generalized Linear Model for Analyzing Motor Vehicle Crashes.

    Science.gov (United States)

    Khazraee, S Hadi; Sáez-Castillo, Antonio Jose; Geedipally, Srinivas Reddy; Lord, Dominique

    2015-05-01

    The hyper-Poisson distribution can handle both over- and underdispersion, and its generalized linear model formulation allows the dispersion of the distribution to be observation-specific and dependent on model covariates. This study's objective is to examine the potential applicability of a newly proposed generalized linear model framework for the hyper-Poisson distribution in analyzing motor vehicle crash count data. The hyper-Poisson generalized linear model was first fitted to intersection crash data from Toronto, characterized by overdispersion, and then to crash data from railway-highway crossings in Korea, characterized by underdispersion. The results of this study are promising. When fitted to the Toronto data set, the goodness-of-fit measures indicated that the hyper-Poisson model with a variable dispersion parameter provided a statistical fit as good as the traditional negative binomial model. The hyper-Poisson model was also successful in handling the underdispersed data from Korea; the model performed as well as the gamma probability model and the Conway-Maxwell-Poisson model previously developed for the same data set. The advantages of the hyper-Poisson model studied in this article are noteworthy. Unlike the negative binomial model, which has difficulties in handling underdispersed data, the hyper-Poisson model can handle both over- and underdispersed crash data. Although not a major issue for the Conway-Maxwell-Poisson model, the effect of each variable on the expected mean of crashes is easily interpretable in the case of this new model. © 2014 Society for Risk Analysis.
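
    The over/underdispersion behaviour described in this record can be checked numerically from the hyper-Poisson probability mass function, p_k ∝ λ^k / (γ)_k with normalizing constant 1F1(1; γ; λ), where (γ)_k is the rising factorial. This sketches only the distribution, not the article's regression framework, and the λ and γ values below are invented: γ > 1 yields variance above the mean (overdispersion), γ < 1 below it, and γ = 1 recovers the Poisson.

```python
def hyper_poisson_pmf(lam, gamma, kmax=150):
    # Terms lam**k / (gamma)_k built recursively: t_{k+1} = t_k * lam/(gamma+k).
    # Their sum is the (truncated) confluent hypergeometric normalizer
    # 1F1(1; gamma; lam); with gamma = 1 the terms are lam**k / k! (Poisson).
    terms, t = [], 1.0
    for k in range(kmax + 1):
        terms.append(t)
        t *= lam / (gamma + k)
    z = sum(terms)
    return [u / z for u in terms]

def dispersion_ratio(lam, gamma):
    # Variance-to-mean ratio of the (truncated) hyper-Poisson distribution.
    p = hyper_poisson_pmf(lam, gamma)
    mean = sum(k * pk for k, pk in enumerate(p))
    var = sum((k - mean) ** 2 * pk for k, pk in enumerate(p))
    return var / mean

assert dispersion_ratio(3.0, 5.0) > 1.0                 # gamma > 1: overdispersed
assert dispersion_ratio(3.0, 0.3) < 1.0                 # gamma < 1: underdispersed
assert abs(dispersion_ratio(3.0, 1.0) - 1.0) < 1e-9     # gamma = 1: Poisson
```

In the GLM formulation the record describes, γ is additionally allowed to depend on covariates, so each observation can sit on either side of the Poisson benchmark.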

  11. Optimisation of a parallel ocean general circulation model

    Science.gov (United States)

    Beare, M. I.; Stevens, D. P.

    1997-10-01

    This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  12. Symplectic models for general insertion devices

    International Nuclear Information System (INIS)

    Wu, Y.; Forest, E.; Robin, D. S.; Nishimura, H.; Wolski, A.; Litvinenko, V. N.

    2001-01-01

    A variety of insertion devices (IDs), wigglers and undulators, linearly or elliptically polarized, are widely used as high brightness radiation sources at the modern light source rings. Long and high-field wigglers have also been proposed as the main source of radiation damping at next generation damping rings. As a result, it becomes increasingly important to understand the impact of IDs on the charged particle dynamics in the storage ring. In this paper, we report our recent development of a general explicit symplectic model for IDs with the paraxial ray approximation. High-order explicit symplectic integrators are developed to study real-world insertion devices with a number of wiggler harmonics and arbitrary polarizations
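
    As a hedged illustration of why explicit symplectic integrators matter for long-term tracking (on a plain harmonic oscillator, not the wiggler Hamiltonian of this record), second-order leapfrog keeps the energy error bounded over thousands of periods, while a non-symplectic explicit Euler step makes the energy grow without bound. Step sizes and step counts are invented.

```python
def leapfrog(q, p, dt, steps):
    # Kick-drift-kick leapfrog: a 2nd-order explicit symplectic map for
    # H = (p**2 + q**2) / 2, whose force is simply -q.
    for _ in range(steps):
        p -= 0.5 * dt * q          # half kick
        q += dt * p                # drift
        p -= 0.5 * dt * q          # half kick
    return q, p

def euler(q, p, dt, steps):
    # Non-symplectic explicit Euler: multiplies the energy by (1 + dt**2)
    # every step, so the orbit spirals outward.
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q
    return q, p

def energy(q, p):
    return 0.5 * (q * q + p * p)

e0 = energy(1.0, 0.0)
q, p = leapfrog(1.0, 0.0, dt=0.1, steps=100_000)   # ~1600 oscillation periods
assert abs(energy(q, p) - e0) < 5e-3               # bounded energy error

qe, pe = euler(1.0, 0.0, dt=0.1, steps=2000)
assert energy(qe, pe) > 10.0                       # secular energy growth
```

The integrators in the record are higher-order compositions of maps like this one, built for the actual ID field harmonics, but the bounded-error property is the same design motivation.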

  13. Bayesian prediction of spatial count data using generalized linear mixed models

    DEFF Research Database (Denmark)

    Christensen, Ole Fredslund; Waagepetersen, Rasmus Plenge

    2002-01-01

    Spatial weed count data are modeled and predicted using a generalized linear mixed model combined with a Bayesian approach and Markov chain Monte Carlo. Informative priors for a data set with sparse sampling are elicited using a previously collected data set with extensive sampling. Furthermore, ...
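
    A minimal stand-in for the approach in this record: random-walk Metropolis on a Poisson log-link intercept with an informative Gaussian prior, as if elicited from an earlier, extensively sampled data set. The counts, prior constants, and tuning values below are invented, and the record's actual model is a spatial GLMM with random effects; this sketch only shows the MCMC mechanics on the simplest possible count model.

```python
import math, random

def log_post(theta, ys, mu0, sd0):
    # Log posterior (up to a constant): Poisson likelihood with rate
    # exp(theta) plus an informative N(mu0, sd0**2) prior on theta.
    lam = math.exp(theta)
    loglik = sum(y * theta - lam for y in ys)      # log(y!) terms dropped
    logprior = -0.5 * ((theta - mu0) / sd0) ** 2
    return loglik + logprior

def metropolis(ys, mu0, sd0, n_iter, step, rng):
    theta, draws = mu0, []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, step)        # random-walk proposal
        if math.log(rng.random()) < log_post(prop, ys, mu0, sd0) - log_post(theta, ys, mu0, sd0):
            theta = prop                           # accept
        draws.append(theta)
    return draws

ys = [3, 2, 4, 1, 3, 2, 5, 2, 3, 3]               # sparse count sample (invented)
draws = metropolis(ys, mu0=1.0, sd0=0.5, n_iter=5000, step=0.3,
                   rng=random.Random(1))
post_mean = sum(draws[1000:]) / len(draws[1000:])  # discard burn-in
# With 10 observations summing to 28 and this prior, the posterior for theta
# concentrates near log of the mean count, roughly 1.0.
assert 0.7 < post_mean < 1.35
```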

  14. The Evaluation of Bivariate Mixed Models in Meta-analyses of Diagnostic Accuracy Studies with SAS, Stata and R.

    Science.gov (United States)

    Vogelgesang, Felicitas; Schlattmann, Peter; Dewey, Marc

    2018-05-01

    Meta-analyses require a thoroughly planned procedure to obtain unbiased overall estimates. From a statistical point of view, not only model selection but also model implementation in the software affects the results. The present simulation study investigates the accuracy of different implementations of general and generalized bivariate mixed models in SAS (using proc mixed, proc glimmix and proc nlmixed), Stata (using gllamm, xtmelogit and midas) and R (using reitsma from package mada and glmer from package lme4). Both models incorporate the relationship between sensitivity and specificity - the two outcomes of interest in meta-analyses of diagnostic accuracy studies - utilizing random effects. Model performance is compared in nine meta-analytic scenarios reflecting the combination of three sizes for meta-analyses (89, 30 and 10 studies) with three pairs of sensitivity/specificity values (97%/87%; 85%/75%; 90%/93%). The evaluation of accuracy in terms of bias, standard error and mean squared error reveals that all implementations of the generalized bivariate model calculate sensitivity and specificity estimates with deviations of less than two percentage points. In contrast, proc mixed, which together with reitsma implements the general bivariate mixed model proposed by Reitsma, shows convergence problems. The random effect parameters are in general underestimated. This study shows that flexibility and simplicity of model specification together with convergence robustness should influence implementation recommendations, as the accuracy in terms of bias was acceptable in all implementations using the generalized approach. Schattauer GmbH.

  15. General classical solutions of the complex Grassmannian and CP sub(N-1) sigma models

    International Nuclear Information System (INIS)

    Sasaki, Ryu.

    1983-05-01

    General classical solutions are constructed for the complex Grassmannian non-linear sigma models in two euclidean dimensions in terms of holomorphic functions. The Grassmannian sigma models are a simple generalization of the well known CP sup(N-1) model in two dimensions and they share various interesting properties; existence of (anti-) instantons, an infinite number of conserved quantities and complete integrability. (author)

  16. Improving Modeling of Extreme Events using Generalized Extreme Value Distribution or Generalized Pareto Distribution with Mixing Unconditional Disturbances

    OpenAIRE

    Suarez, R

    2001-01-01

    In this paper an alternative non-parametric historical simulation approach, the Mixing Unconditional Disturbances model with constant volatility, where price paths are generated by reshuffling disturbances for S&P 500 Index returns over the period 1950-1998, is used to estimate a Generalized Extreme Value Distribution and a Generalized Pareto Distribution. An ordinary back-testing for the period 1999-2008 was performed to verify this technique, providing higher accuracy returns level under upper ...

  17. Multi-model evaluation of short-lived pollutant distributions over east Asia during summer 2008

    Science.gov (United States)

    Quennehen, B.; Raut, J.-C.; Law, K. S.; Daskalakis, N.; Ancellet, G.; Clerbaux, C.; Kim, S.-W.; Lund, M. T.; Myhre, G.; Olivié, D. J. L.; Safieddine, S.; Skeie, R. B.; Thomas, J. L.; Tsyro, S.; Bazureau, A.; Bellouin, N.; Hu, M.; Kanakidou, M.; Klimont, Z.; Kupiainen, K.; Myriokefalitakis, S.; Quaas, J.; Rumbold, S. T.; Schulz, M.; Cherian, R.; Shimizu, A.; Wang, J.; Yoon, S.-C.; Zhu, T.

    2016-08-01

    is too weak to explain the differences between the models. Our results rather point to an overestimation of SO2 emissions, in particular, close to the surface in Chinese urban areas. However, we also identify a clear underestimation of aerosol concentrations over northern India, suggesting that the rapid recent growth of emissions in India, as well as their spatial extension, is underestimated in emission inventories. Model deficiencies in the representation of pollution accumulation due to the Indian monsoon may also be playing a role. Comparison with vertical aerosol lidar measurements highlights a general underestimation of scattering aerosols in the boundary layer associated with overestimation in the free troposphere, pointing to modelled aerosol lifetimes that are too long. This is likely linked to too strong vertical transport and/or insufficient deposition efficiency during transport or export from the boundary layer, rather than chemical processing (in the case of sulphate aerosols). Underestimation of sulphate in the boundary layer implies potentially large errors in simulated aerosol-cloud interactions, via impacts on boundary-layer clouds. This evaluation has important implications for accurate assessment of air pollutants on regional air quality and global climate based on global model calculations. Ideally, models should be run at higher resolution over source regions to better simulate urban-rural pollutant gradients and/or chemical regimes, and also to better resolve pollutant processing and loss by wet deposition as well as vertical transport. Discrepancies in vertical distributions require further quantification and improvement since these are a key factor in the determination of radiative forcing from short-lived pollutants.

  18. Multi-model evaluation of short-lived pollutant distributions over east Asia during summer 2008

    Directory of Open Access Journals (Sweden)

    B. Quennehen

    2016-08-01

    mitigation in Beijing is too weak to explain the differences between the models. Our results rather point to an overestimation of SO2 emissions, in particular, close to the surface in Chinese urban areas. However, we also identify a clear underestimation of aerosol concentrations over northern India, suggesting that the rapid recent growth of emissions in India, as well as their spatial extension, is underestimated in emission inventories. Model deficiencies in the representation of pollution accumulation due to the Indian monsoon may also be playing a role. Comparison with vertical aerosol lidar measurements highlights a general underestimation of scattering aerosols in the boundary layer associated with overestimation in the free troposphere, pointing to modelled aerosol lifetimes that are too long. This is likely linked to too strong vertical transport and/or insufficient deposition efficiency during transport or export from the boundary layer, rather than chemical processing (in the case of sulphate aerosols). Underestimation of sulphate in the boundary layer implies potentially large errors in simulated aerosol–cloud interactions, via impacts on boundary-layer clouds. This evaluation has important implications for accurate assessment of air pollutants on regional air quality and global climate based on global model calculations. Ideally, models should be run at higher resolution over source regions to better simulate urban–rural pollutant gradients and/or chemical regimes, and also to better resolve pollutant processing and loss by wet deposition as well as vertical transport. Discrepancies in vertical distributions require further quantification and improvement since these are a key factor in the determination of radiative forcing from short-lived pollutants.

  19. Plane symmetric cosmological micro model in modified theory of Einstein’s general relativity

    Directory of Open Access Journals (Sweden)

    Panigrahi U.K.

    2003-01-01

    Full Text Available In this paper, we have investigated an anisotropic homogeneous plane symmetric cosmological micro-model in the presence of a massless scalar field in a modified theory of Einstein's general relativity. Some interesting physical and geometrical aspects of the model together with singularity in the model are discussed. Further, it is shown that this theory is valid and leads to Einstein's theory as the coupling parameter λ → 0 in the micro (i.e. quantum) level in general.

  20. Warm intermediate inflationary Universe model in the presence of a generalized Chaplygin gas

    Energy Technology Data Exchange (ETDEWEB)

    Herrera, Ramon [Pontificia Universidad Catolica de Valparaiso, Instituto de Fisica, Valparaiso (Chile); Videla, Nelson [Universidad de Chile, Departamento de Fisica, FCFM, Santiago (Chile); Olivares, Marco [Universidad Diego Portales, Facultad de Ingenieria, Santiago (Chile)

    2016-01-15

    A warm intermediate inflationary model in the context of generalized Chaplygin gas is investigated. We study this model in the weak and strong dissipative regimes, considering a generalized form of the dissipative coefficient Γ = Γ(T,φ), and we describe the inflationary dynamics in the slow-roll approximation. We find constraints on the parameters in our model considering the Planck 2015 data, together with the condition for warm inflation T > H, and the conditions for the weak and strong dissipative regimes. (orig.)

  1. Measuring and Examining General Self-Efficacy among Community College Students: A Structural Equation Modeling Approach

    Science.gov (United States)

    Chen, Yu; Starobin, Soko S.

    2018-01-01

    This study examined a psychosocial mechanism of how general self-efficacy interacts with other key factors and influences degree aspiration for students enrolled in an urban diverse community college. Using general self-efficacy scales, the authors hypothesized the General Self-efficacy model for Community College students (the GSE-CC model). A…

  2. A general modeling framework for describing spatially structured population dynamics

    Science.gov (United States)

    Sample, Christine; Fryxell, John; Bieri, Joanna; Federico, Paula; Earl, Julia; Wiederholt, Ruscena; Mattsson, Brady; Flockhart, Tyler; Nicol, Sam; Diffendorfer, James E.; Thogmartin, Wayne E.; Erickson, Richard A.; Norris, D. Ryan

    2017-01-01

    Variation in movement across time and space fundamentally shapes the abundance and distribution of populations. Although a variety of approaches model structured population dynamics, they are limited to specific types of spatially structured populations and lack a unifying framework. Here, we propose a unified network-based framework sufficiently novel in its flexibility to capture a wide variety of spatiotemporal processes including metapopulations and a range of migratory patterns. It can accommodate different kinds of age structures, forms of population growth, dispersal, nomadism and migration, and alternative life-history strategies. Our objective was to link three general elements common to all spatially structured populations (space, time and movement) under a single mathematical framework. To do this, we adopt a network modeling approach. The spatial structure of a population is represented by a weighted and directed network. Each node and each edge has a set of attributes which vary through time. The dynamics of our network-based population is modeled with discrete time steps. Using both theoretical and real-world examples, we show how common elements recur across species with disparate movement strategies and how they can be combined under a unified mathematical framework. We illustrate how metapopulations, various migratory patterns, and nomadism can be represented with this modeling approach. We also apply our network-based framework to four organisms spanning a wide range of life histories, movement patterns, and carrying capacities. General computer code to implement our framework is provided, which can be applied to almost any spatially structured population. This framework contributes to our theoretical understanding of population dynamics and has practical management applications, including understanding the impact of perturbations on population size, distribution, and movement patterns. By working within a common framework, there is less chance
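
    A hedged sketch of the network idea follows: a hypothetical three-node population in which node attributes (here just a growth rate) and weighted directed edges (movement proportions) drive discrete-time dynamics. The growth rates and movement matrix are invented for illustration; real applications of such a framework would attach richer, time-varying attributes to nodes and edges.

```python
import numpy as np

# Hypothetical 3-node network: per-step growth rate at each node, and a
# column-stochastic movement matrix M[i, j] = proportion moving j -> i.
r = np.array([1.10, 0.95, 1.05])
M = np.array([[0.8, 0.1, 0.0],
              [0.2, 0.8, 0.3],
              [0.0, 0.1, 0.7]])  # columns sum to 1 (no mortality in transit)

def step(n):
    """One discrete time step: local growth at each node, then movement
    along the directed, weighted edges."""
    return M @ (r * n)

n = np.array([100.0, 50.0, 25.0])  # initial abundance at each node
for _ in range(20):
    n = step(n)
print(n.round(1), round(float(n.sum()), 1))
```

    Metapopulations, migratory circuits and nomadism then differ only in the structure of M and in how node/edge attributes change through time, which is the unifying point of the framework.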

  3. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part I: Single layer cloud

    Energy Technology Data Exchange (ETDEWEB)

    Klein, S A; McCoy, R B; Morrison, H; Ackerman, A; Avramov, A; deBoer, G; Chen, M; Cole, J; DelGenio, A; Golaz, J; Hashino, T; Harrington, J; Hoose, C; Khairoutdinov, M; Larson, V; Liu, X; Luo, Y; McFarquhar, G; Menon, S; Neggers, R; Park, S; Poellot, M; von Salzen, K; Schmidt, J; Sednev, I; Shipway, B; Shupe, M; Spangenberg, D; Sud, Y; Turner, D; Veron, D; Falk, M; Foster, M; Fridlind, A; Walker, G; Wang, Z; Wolf, A; Xie, S; Xu, K; Yang, F; Zhang, G

    2008-02-27

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a cold-air outbreak mixed-phase stratocumulus cloud observed during the Atmospheric Radiation Measurement (ARM) program's Mixed-Phase Arctic Cloud Experiment. The observed cloud occurred in a well-mixed boundary layer with a cloud top temperature of -15 °C. The observed liquid water path of around 160 g m^-2 was about two-thirds of the adiabatic value and much greater than the mass of ice crystal precipitation, which when integrated from the surface to cloud top was around 15 g m^-2. The simulations were performed by seventeen single-column models (SCMs) and nine cloud-resolving models (CRMs). While the simulated ice water path is generally consistent with the observed values, the median SCM and CRM liquid water path is a factor of three smaller than observed. Results from a sensitivity study in which models removed ice microphysics indicate that in many models the interaction between liquid- and ice-phase microphysics is responsible for the large model underestimate of liquid water path. Despite this general underestimate, the simulated liquid and ice water paths of several models are consistent with the observed values. Furthermore, there is some evidence that models with more sophisticated microphysics simulate liquid and ice water paths that are in better agreement with the observed values, although considerable scatter is also present. Although no single factor guarantees a good simulation, these results emphasize the need for improvement in the model representation of mixed-phase microphysics. This case study, which has been well observed from both aircraft and ground-based remote sensors, could be a benchmark for model simulations of mixed-phase clouds.

  4. Stability of a general delayed virus dynamics model with humoral immunity and cellular infection

    Science.gov (United States)

    Elaiw, A. M.; Raezah, A. A.; Alofi, A. S.

    2017-06-01

    In this paper, we investigate the dynamical behavior of a general nonlinear model for virus dynamics with virus-target and infected-target incidences. The model incorporates humoral immune response and distributed time delays. The model is a four-dimensional system of delay differential equations where the production and removal rates of the virus and cells are given by general nonlinear functions. We derive the basic reproduction parameter R̃_0^G and the humoral immune response activation number R̃_1^G and establish a set of conditions on the general functions which are sufficient to determine the global dynamics of the model. We use suitable Lyapunov functionals and apply LaSalle's invariance principle to prove the global asymptotic stability of all equilibria of the model. We confirm the theoretical results by numerical simulations.
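
    For illustration, here is a minimal sketch of the basic reproduction number for the classic bilinear special case of such a model (target cells, infected cells, free virus, with both virus-to-cell and cell-to-cell incidence). All parameter values are hypothetical; the paper's reproduction parameter is defined for general nonlinear incidence functions, of which this is only the simplest instance.

```python
# Bilinear special case: target cells T, infected cells I, free virus V;
# beta1 = virus-to-cell infection rate, beta2 = cell-to-cell infection rate.
lam, d = 10.0, 0.1        # target-cell production and per-capita death
beta1, beta2 = 2e-3, 1e-4
a, k, c = 0.5, 20.0, 3.0  # infected-cell death, virus production, virus clearance

T0 = lam / d              # infection-free equilibrium level of target cells
# secondary infections per infected cell at T = T0: the free-virus route
# (beta1 * k / c) plus direct cell-to-cell transmission (beta2)
R0 = T0 * (beta1 * k / c + beta2) / a
print(round(R0, 3))
```

    When R0 < 1 the infection-free equilibrium is globally stable; when R0 > 1 (as with these invented values) the infection persists, with the antibody response then governed by the second threshold parameter.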

  5. Reshocks, rarefactions, and the generalized Layzer model for hydrodynamic instabilities

    International Nuclear Information System (INIS)

    Mikaelian, K.O.

    2008-01-01

    We report numerical simulations and analytic modeling of shock tube experiments on Rayleigh-Taylor and Richtmyer-Meshkov instabilities. We examine single interfaces of the type A/B where the incident shock is initiated in A and the transmitted shock proceeds into B. Examples are He/air and air/He. In addition, we study finite-thickness or double-interface A/B/A configurations like air/SF6/air gas-curtain experiments. We first consider conventional shock tubes that have a 'fixed' boundary: a solid endwall which reflects the transmitted shock and reshocks the interface(s). Then we focus on new experiments with a 'free' boundary--a membrane disrupted mechanically or by the transmitted shock, sending back a rarefaction towards the interface(s). Complex acceleration histories are achieved, relevant for Inertial Confinement Fusion implosions. We compare our simulation results with a generalized Layzer model for two fluids with time-dependent densities, and derive a new freeze-out condition whereby accelerating and compressive forces cancel each other out. Except for the recently reported failures of the Layzer model, the generalized Layzer model and hydrocode simulations for reshocks and rarefactions agree well with each other, and remain to be verified experimentally.

  6. A general scheme for training and optimization of the Grenander deformable template model

    DEFF Research Database (Denmark)

    Fisker, Rune; Schultz, Nette; Duta, N.

    2000-01-01

    ... for applying the general deformable template model proposed by (Grenander et al., 1991) to a new problem with minimal manual interaction, beside supplying a training set, which can be done by a non-expert user. The main contributions compared to previous work are a supervised learning scheme for the model parameters, a very fast general initialization algorithm and an adaptive likelihood model based on local means. The model parameters are trained by a combination of a 2D shape learning algorithm and a maximum likelihood based criteria. The fast initialization algorithm is based on a search approach using ...

  7. Performance of the SUBSTOR-potato model across contrasting growing conditions

    DEFF Research Database (Denmark)

    Raymundo, Rubí; Asseng, Senthold; Prassad, Rishi

    2017-01-01

    Tuber yields were generally well simulated with the SUBSTOR-potato model across a wide ... 4% for tuber fresh weight. Cultivars ‘Desiree’ and ‘Atlantic’ were grown in experiments across the globe and well simulated using consistent cultivar parameters. However, the model underestimated the impact of elevated atmospheric CO2 concentrations and poorly simulated high temperature effects on crop growth. Other simulated crop variables, including leaf area, stem weight, crop N, and soil water, differed frequently from measurements; some of these variables had significant large measurement errors. The SUBSTOR-potato model was shown to be suitable to simulate tuber growth and yields over a wide range ... and cultivars, N fertilizer application, water supply, sowing dates, soil types, temperature environments, and atmospheric CO2 concentrations, and included open top chamber and Free-Air-CO2-Enrichment (FACE) experiments.

  8. Flat epithelial atypia and atypical ductal hyperplasia: carcinoma underestimation rate.

    Science.gov (United States)

    Ingegnoli, Anna; d'Aloia, Cecilia; Frattaruolo, Antonia; Pallavera, Lara; Martella, Eugenia; Crisi, Girolamo; Zompatori, Maurizio

    2010-01-01

    This study was carried out to determine the underestimation rate of carcinoma upon surgical biopsy after a diagnosis of flat epithelial atypia or atypical ductal hyperplasia at 11-gauge vacuum-assisted breast biopsy. A retrospective review was conducted of 476 vacuum-assisted breast biopsies performed from May 2005 to January 2007, and a total of 70 cases of atypia were identified. Fifty cases (71%) were categorized as pure atypical ductal hyperplasia, 18 (26%) as pure flat epithelial atypia and two (3%) as concomitant flat epithelial atypia and atypical ductal hyperplasia. Each group was compared with the subsequent open surgical specimens. Surgical biopsy was performed in 44 patients with atypical ductal hyperplasia, 15 patients with flat epithelial atypia, and two patients with flat epithelial atypia and atypical ductal hyperplasia. Five cases of atypical ductal hyperplasia were upgraded to ductal carcinoma in situ, three cases of flat epithelial atypia yielded one ductal carcinoma in situ and two cases of invasive ductal carcinoma, and one case of flat epithelial atypia/atypical ductal hyperplasia had invasive ductal carcinoma. The overall rate of malignancy was 16% for atypical ductal hyperplasia (including flat epithelial atypia/atypical ductal hyperplasia patients) and 20% for flat epithelial atypia. The presence of flat epithelial atypia and atypical ductal hyperplasia at biopsy requires careful consideration, and surgical excision should be suggested.

  9. General dosimetry model for internal contamination with radioisotopes

    International Nuclear Information System (INIS)

    Nino, L.

    1989-01-01

    Radiation dose from internal contamination with radioisotopes is not measured directly but evaluated by applying mathematical models of fixation and elimination, taking into account the biological activity of each organ with respect to the incorporated material. The models proposed by ICRP for the respiratory and gastrointestinal tracts (ICRP Publication 30) seemingly should not be applied independently because of the evident correlation between them. In this paper both models are integrated into a more general one, with neither modification nor limitation of the starting models. It has been applied to some patients at the Instituto Nacional de Cancerologia who received an oral dose of I-131, and the results are quite similar to doses obtained experimentally from urine spectrograms. Based on these results the method was formalized and applied to occupationally exposed personnel of the medical staff at the same institute; the high doses found in some of the urine samples suggest probable airborne I-131 contamination.

  10. A General Model for Testing Mediation and Moderation Effects

    Science.gov (United States)

    MacKinnon, David P.

    2010-01-01

    This paper describes methods for testing mediation and moderation effects in a dataset, both together and separately. Investigations of this kind are especially valuable in prevention research to obtain information on the process by which a program achieves its effects and whether the program is effective for subgroups of individuals. A general model that simultaneously estimates mediation and moderation effects is presented, and the utility of combining the effects into a single model is described. Possible effects of interest in the model are explained, as are statistical methods to assess these effects. The methods are further illustrated in a hypothetical prevention program example. PMID:19003535
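
    The combined model can be sketched with ordinary least squares: one regression for the mediator path, and one outcome regression that includes both the mediator and a moderation (interaction) term, with the mediated effect estimated as the product of paths. This is an illustrative sketch with simulated data and invented effect sizes, not the paper's exact specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.normal(size=n)                    # program exposure
z = rng.normal(size=n)                    # hypothetical moderator
m = 0.5 * x + rng.normal(size=n)          # mediator model: a-path = 0.5
y = 0.3 * x + 0.4 * m + 0.2 * x * z + rng.normal(size=n)  # b = 0.4, c' = 0.3

def ols(X, y):
    """Least-squares coefficients for design matrix X."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
a = ols(np.column_stack([ones, x]), m)[1]            # X -> M path
coef = ols(np.column_stack([ones, x, m, x * z]), y)  # outcome model
b, moderation = coef[2], coef[3]
indirect = a * b                                     # mediated (indirect) effect
print(round(a, 2), round(b, 2), round(indirect, 2), round(moderation, 2))
```

    In practice one would add standard errors (e.g. bootstrap intervals for the product a*b), which is where the statistical methods discussed in the paper come in.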

  11. A Model for Nitrogen Chemistry in Oxy-Fuel Combustion of Pulverized Coal

    DEFF Research Database (Denmark)

    Hashemi, Hamid; Hansen, Stine; Toftegaard, Maja Bøg

    2011-01-01

    In this work, a model for the nitrogen chemistry in the oxy-fuel combustion of pulverized coal has been developed. The model is a chemical reaction engineering type of model with a detailed reaction mechanism for the gas-phase chemistry, together with a simplified description of the mixing of flows, heating and devolatilization of particles, and gas–solid reactions. The model is validated by comparison with entrained flow reactor results from the present work and from the literature on pulverized coal combustion in O2/CO2 and air, covering the effects of fuel, mixing conditions, temperature, stoichiometry, and inlet NO level. In general, the model provides a satisfactory description of NO formation in air and oxy-fuel combustion of coal, but under some conditions, it underestimates the impact on NO of replacing N2 with CO2. According to the model, differences in the NO yield between the oxy...

  12. A general phenomenological model for work function

    Science.gov (United States)

    Brodie, I.; Chou, S. H.; Yuan, H.

    2014-07-01

    A general phenomenological model is presented for obtaining the zero Kelvin work function of any crystal facet of metals and semiconductors, both clean and covered with a monolayer of electropositive atoms. It utilizes the known physical structure of the crystal and the Fermi energy of the two-dimensional electron gas assumed to form on the surface. A key parameter is the number of electrons donated to the surface electron gas per surface lattice site or adsorbed atom, which is taken to be an integer. Initially this is found by trial and later justified by examining the state of the valence electrons of the relevant atoms. In the case of adsorbed monolayers of electropositive atoms a satisfactory justification could not always be found, particularly for cesium, but a trial value always predicted work functions close to the experimental values. The model can also predict the variation of work function with temperature for clean crystal facets. The model is applied to various crystal faces of tungsten, aluminium, silver, and select metal oxides, and most demonstrate good fits compared to available experimental values.
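
    The Fermi energy of the surface two-dimensional electron gas invoked by the model is E_F = πħ²n/m for a spin-degenerate gas of surface density n. A minimal sketch of that arithmetic follows; the square-lattice spacing and the one-donated-electron-per-site density are assumed illustrative values, not numbers taken from the paper.

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J s
m_e = 9.1093837015e-31   # electron mass, kg
eV = 1.602176634e-19     # J per eV

a = 3.0e-10              # assumed square-lattice spacing (m), hypothetical
n = 1.0 / a**2           # one donated electron per lattice site -> density (m^-2)

# spin-degenerate 2D electron gas: E_F = pi * hbar^2 * n / m
E_F = math.pi * hbar**2 * n / m_e / eV
print(round(E_F, 2), "eV")
```

    The result is of the same electron-volt order as metallic work functions, which is why the integer electron count per lattice site is such a sensitive parameter in the model.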

  13. Water tracers in the general circulation model ECHAM

    International Nuclear Information System (INIS)

    Hoffmann, G.; Heimann, M.

    1993-01-01

    We have installed a water tracer model into the ECHAM General Circulation Model (GCM), parameterizing all fractionation processes of the stable water isotopes (¹H₂¹⁸O and ¹H²H¹⁶O). A five year simulation was performed under present day conditions. We focus on the applicability of such a water tracer model to obtain information about the quality of the hydrological cycle of the GCM. The analysis of the simulated ¹H₂¹⁸O composition of the precipitation indicates too weakly fractionated precipitation over the Antarctic and Greenland ice sheets and too strongly fractionated precipitation over large areas of the tropical and subtropical land masses. We can show that these deficiencies are connected with problems in model quantities such as the precipitation and the resolution of the orography. The linear relationship between temperature and the δ¹⁸O value, i.e. the Dansgaard slope, is reproduced quite well in the model. The slope is slightly too flat, and the strong correlation between temperature and δ¹⁸O vanishes at very low temperatures compared to the observations. (orig.)

  14. Are WISC IQ scores in children with mathematical learning disabilities underestimated? The influence of a specialized intervention on test performance.

    Science.gov (United States)

    Lambert, Katharina; Spinath, Birgit

    2018-01-01

    Intelligence measures play a pivotal role in the diagnosis of mathematical learning disabilities (MLD). Probably as a result of math-related material in IQ tests, children with MLD often display reduced IQ scores. However, it remains unclear whether the effects of math remediation extend to IQ scores. The present study investigated the impact of a special remediation program compared to a control group receiving private tutoring (PT) on the WISC IQ scores of children with MLD. We included N=45 MLD children (7-12 years) in a study with a pre- and post-test control group design. Children received remediation for two years on average. The analyses revealed significantly greater improvements in the experimental group on the Full-Scale IQ, and the Verbal Comprehension, Perceptual Reasoning, and Working Memory indices, but not Processing Speed, compared to the PT group. Children in the experimental group showed an average WISC IQ gain of more than ten points. Results indicate that the WISC IQ scores of MLD children might be underestimated and that an effective math intervention can improve WISC IQ test performance. Taking limitations into account, we discuss the use of IQ measures more generally for defining MLD in research and practice. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. A model for a career in a specialty of general surgery: One surgeon's opinion.

    Science.gov (United States)

    Ko, Bona; McHenry, Christopher R

    2018-01-01

    The integration of general and endocrine surgery was studied as a potential career model for fellowship-trained general surgeons. Case logs collected from 1991-2016 and academic milestones were examined for a single general surgeon with a focused interest in endocrine surgery. Operations were categorized using CPT codes and the 2017 ACGME "Major Case Categories", and their frequencies were determined. 10,324 operations were performed on 8209 patients. 412.9 ± 84.9 operations were performed yearly including 279.3 ± 42.7 general and 133.7 ± 65.5 endocrine operations. A high-volume endocrine surgery practice and a rank of tenured professor were achieved by years 11 and 13, respectively. At year 25, the frequency of endocrine operations exceeded general surgery operations. Maintaining a foundation in broad-based general surgery with a specialty focus is a sustainable career model. Residents and fellows can use the model to help plan their careers with realistic expectations. Copyright © 2017. Published by Elsevier Inc.

  16. Two point function for a simple general relativistic quantum model

    OpenAIRE

    Colosi, Daniele

    2007-01-01

    We study the quantum theory of a simple general relativistic quantum model of two coupled harmonic oscillators and compute the two-point function following a proposal first introduced in the context of loop quantum gravity.

  17. Transferring and generalizing deep-learning-based neural encoding models across subjects.

    Science.gov (United States)

    Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming

    2018-08-01

    Recent studies have shown the value of using deep learning models for mapping and characterizing how the brain represents and organizes information for natural vision. However, modeling the relationship between deep learning models and the brain (or encoding models), requires measuring cortical responses to large and diverse sets of natural visual stimuli from single subjects. This requirement limits prior studies to few subjects, making it difficult to generalize findings across subjects or for a population. In this study, we developed new methods to transfer and generalize encoding models across subjects. To train encoding models specific to a target subject, the models trained for other subjects were used as the prior models and were refined efficiently using Bayesian inference with a limited amount of data from the target subject. To train encoding models for a population, the models were progressively trained and updated with incremental data from different subjects. For the proof of principle, we applied these methods to functional magnetic resonance imaging (fMRI) data from three subjects watching tens of hours of naturalistic videos, while a deep residual neural network driven by image recognition was used to model visual cortical processing. Results demonstrate that the methods developed herein provide an efficient and effective strategy to establish both subject-specific and population-wide predictive models of cortical representations of high-dimensional and hierarchical visual features. Copyright © 2018 Elsevier Inc. All rights reserved.
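
    The Bayesian refinement idea (using models trained on other subjects as the prior for a data-poor target subject) can be sketched as ridge regression shrunk toward prior weights. This is a simplified linear stand-in for the paper's deep-network encoding models; the dimensions, noise levels, and regularization strengths are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_src, n_tgt = 50, 2000, 100          # feature dim, source/target sample sizes

w_true_src = rng.normal(size=d)
w_true_tgt = w_true_src + 0.1 * rng.normal(size=d)  # similar subjects, similar weights

def ridge_to_prior(X, y, w_prior, lam):
    """argmin ||y - Xw||^2 + lam * ||w - w_prior||^2 (MAP with Gaussian prior)."""
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y + lam * w_prior)

# 'source subject': plenty of data -> good prior weights
Xs = rng.normal(size=(n_src, d)); ys = Xs @ w_true_src + rng.normal(size=n_src)
w_prior = ridge_to_prior(Xs, ys, np.zeros(d), 1.0)

# 'target subject': little data; shrink toward the source model vs. toward zero
Xt = rng.normal(size=(n_tgt, d)); yt = Xt @ w_true_tgt + rng.normal(size=n_tgt)
w_transfer = ridge_to_prior(Xt, yt, w_prior, 50.0)
w_scratch = ridge_to_prior(Xt, yt, np.zeros(d), 50.0)

err_t = np.linalg.norm(w_transfer - w_true_tgt)
err_s = np.linalg.norm(w_scratch - w_true_tgt)
print(round(float(err_t), 2), round(float(err_s), 2))
```

    Shrinking toward an informative prior rather than toward zero is the essence of transferring an encoding model: with scarce target data, the prior carries most of the information, and the limited data only refine it.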

  18. Generalized memory associativity in a network model for the neuroses

    Science.gov (United States)

    Wedemann, Roseli S.; Donangelo, Raul; de Carvalho, Luís A. V.

    2009-03-01

    We review concepts introduced in earlier work, where a neural network mechanism describes some mental processes in neurotic pathology and psychoanalytic working-through, as associative memory functioning, according to the findings of Freud. We developed a complex network model, where modules corresponding to sensorial and symbolic memories interact, representing unconscious and conscious mental processes. The model illustrates Freud's idea that consciousness is related to symbolic and linguistic memory activity in the brain. We have introduced a generalization of the Boltzmann machine to model memory associativity. Model behavior is illustrated with simulations and some of its properties are analyzed with methods from statistical mechanics.

  19. Generalized fish life-cycle population model and computer program

    International Nuclear Information System (INIS)

    DeAngelis, D.L.; Van Winkle, W.; Christensen, S.W.; Blum, S.R.; Kirk, B.L.; Rust, B.W.; Ross, C.

    1978-03-01

    A generalized fish life-cycle population model and computer program have been prepared to evaluate the long-term effect of changes in mortality in age class 0. The general question concerns what happens to a fishery when density-independent sources of mortality are introduced that act on age class 0, particularly entrainment and impingement at power plants. This paper discusses the model formulation and computer program, including sample results. The population model consists of a system of difference equations involving age-dependent fecundity and survival. The fecundity for each age class is assumed to be a function of both the fraction of females sexually mature and the weight of females as they enter each age class. Natural mortality for age classes 1 and older is assumed to be independent of population size. Fishing mortality is assumed to vary with the number and weight of fish available to the fishery. Age class 0 is divided into six life stages. The probability of survival for age class 0 is estimated considering both density-independent mortality (natural and power plant) and density-dependent mortality for each life stage. Two types of density-dependent mortality are included. These are cannibalism of each life stage by older age classes and intra-life-stage competition
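
    A minimal sketch of such a system of difference equations follows: an age-structured stock with age-dependent fecundity and survival, Beverton-Holt density dependence standing in for the density-dependent age-0 survival, and an extra density-independent mortality term representing power-plant losses. All parameter values are hypothetical, and the real program's six age-0 life stages and cannibalism terms are collapsed into a single recruitment step.

```python
import numpy as np

fecundity = np.array([0.0, 0.0, 50.0, 100.0, 150.0])  # eggs per female by age
survival = np.array([0.5, 0.6, 0.7, 0.7])             # annual survival to ages 1..4
alpha, beta = 0.1, 1e-4                               # Beverton-Holt parameters

def project(n0, p_plant=0.0, years=200):
    n = n0.copy()
    for _ in range(years):
        eggs = (fecundity * n).sum() / 2              # half the stock is female
        recruits = alpha * eggs / (1 + beta * eggs)   # density-dependent age-0 survival
        recruits *= (1 - p_plant)                     # density-independent plant loss
        n[1:] = survival * n[:-1]                     # ageing with natural survival
        n[0] = recruits
    return n

base = project(np.full(5, 100.0))
impacted = project(np.full(5, 100.0), p_plant=0.3)    # e.g. 30% age-0 entrainment loss
print(round(float(base.sum()), 1), round(float(impacted.sum()), 1))
```

    Because the density dependence is compensatory, the added mortality depresses the long-term equilibrium rather than simply scaling abundance down by 30%, which is exactly the kind of long-term effect the model was built to evaluate.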

  20. On the relation between cost and service models for general inventory systems

    NARCIS (Netherlands)

    Houtum, van G.J.J.A.N.; Zijm, W.H.M.

    2000-01-01

    In this paper, we present a systematic overview of possible relations between cost and service models for fairly general single- and multi-stage inventory systems. In particular, we relate various types of penalty costs in pure cost models to equivalent types of service measures in service models.

  1. A general science-based framework for dynamical spatio-temporal models

    Science.gov (United States)

    Wikle, C.K.; Hooten, M.B.

    2010-01-01

    Spatio-temporal statistical models are increasingly being used across a wide variety of scientific disciplines to describe and predict spatially explicit processes that evolve over time. Correspondingly, in recent years there has been a significant amount of research on new statistical methodology for such models. Although descriptive models that approach the problem from the second-order (covariance) perspective are important, and innovative work is being done in this regard, many real-world processes are dynamic, and it can be more efficient in some cases to characterize the associated spatio-temporal dependence by the use of dynamical models. The chief challenge with the specification of such dynamical models has been related to the curse of dimensionality. Even in fairly simple linear, first-order Markovian, Gaussian error settings, statistical models are often overparameterized. Hierarchical models have proven invaluable in their ability to deal to some extent with this issue by allowing dependency among groups of parameters. In addition, this framework has allowed for the specification of science-based parameterizations (and associated prior distributions) in which classes of deterministic dynamical models (e.g., partial differential equations (PDEs), integro-difference equations (IDEs), matrix models, and agent-based models) are used to guide specific parameterizations. Most of the focus for the application of such models in statistics has been in the linear case. The problems mentioned above with linear dynamic models are compounded in the case of nonlinear models. In this sense, the need for coherent and sensible model parameterizations is not only helpful, it is essential. Here, we present an overview of a framework for incorporating scientific information to motivate dynamical spatio-temporal models. First, we illustrate the methodology with the linear case. We then develop a general nonlinear spatio-temporal framework that we call general quadratic

  2. Zeros of the partition function for some generalized Ising models

    International Nuclear Information System (INIS)

    Dunlop, F.

    1981-01-01

    The author considers generalized Ising models with two- and four-body interactions in a complex external field h such that Re h ≥ |Im h| + C, where C is an explicit function of the interaction parameters. The partition function Z(h) is then shown to satisfy |Z(h)| ≥ Z(C), so that the pressure is analytic in h inside the given region. The method is applied to specific examples: the gauge-invariant Ising model, and the Widom-Rowlinson model on the lattice. (Auth.)
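
In standard notation (a compact restatement, not taken verbatim from the source), the analyticity criterion reads:

```latex
\operatorname{Re} h \;\ge\; \lvert \operatorname{Im} h \rvert + C
\quad\Longrightarrow\quad
\lvert Z(h) \rvert \;\ge\; Z(C) \;>\; 0 .
```

Since Z(h) is nonvanishing throughout this region, the pressure, proportional to log Z(h), is analytic there, in the spirit of Lee-Yang-type zero-free-region arguments.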

  3. A Graphical User Interface to Generalized Linear Models in MATLAB

    Directory of Open Access Journals (Sweden)

    Peter Dunn

    1999-07-01

    Full Text Available Generalized linear models unite a wide variety of statistical models in a common theoretical framework. This paper discusses GLMLAB-software that enables such models to be fitted in the popular mathematical package MATLAB. It provides a graphical user interface to the powerful MATLAB computational engine to produce a program that is easy to use but with many features, including offsets, prior weights and user-defined distributions and link functions. MATLAB's graphical capacities are also utilized in providing a number of simple residual diagnostic plots.
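
GLMLAB itself is MATLAB software; purely as an illustration of the fitting algorithm such packages implement, here is a minimal sketch in Python of iteratively reweighted least squares (IRLS) for a Poisson GLM with log link. The data and coefficients are invented for the demonstration:

```python
import numpy as np

def fit_poisson_glm(X, y, iters=25):
    """Poisson GLM with log link, fitted by iteratively reweighted
    least squares (IRLS), the standard GLM fitting algorithm."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        eta = X @ beta
        mu = np.exp(eta)                 # inverse link
        z = eta + (y - mu) / mu          # working response
        W = mu                           # Poisson variance-function weights
        XtW = X.T * W                    # scale each observation by its weight
        beta = np.linalg.solve(XtW @ X, XtW @ z)
    return beta

# hypothetical data generated with known coefficients [0.5, 0.3]
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 5000)
X = np.column_stack([np.ones_like(x), x])
y = rng.poisson(np.exp(X @ np.array([0.5, 0.3])))
beta_hat = fit_poisson_glm(X, y)
```

Other distributions and links fit the same loop by swapping the inverse link, working response, and weights, which is exactly the extensibility a GLM package exposes through user-defined distributions and link functions.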

  4. Testing for constant nonparametric effects in general semiparametric regression models with interactions

    KAUST Repository

    Wei, Jiawei; Carroll, Raymond J.; Maity, Arnab

    2011-01-01

    We consider the problem of testing for a constant nonparametric effect in a general semi-parametric regression model when there is the potential for interaction between the parametrically and nonparametrically modeled variables. The work

  5. Consensus-based training and assessment model for general surgery.

    Science.gov (United States)

    Szasz, P; Louridas, M; de Montbrun, S; Harris, K A; Grantcharov, T P

    2016-05-01

    Surgical education is becoming competency-based with the implementation of in-training milestones. Training guidelines should reflect these changes and determine the specific procedures for such milestone assessments. This study aimed to develop a consensus view regarding operative procedures and tasks considered appropriate for junior and senior trainees, and the procedures that can be used as technical milestone assessments for trainee progression in general surgery. A Delphi process was followed where questionnaires were distributed to all 17 Canadian general surgery programme directors. Items were ranked on a 5-point Likert scale, with consensus defined as Cronbach's α of at least 0·70. Items rated 4 or above on the 5-point Likert scale by 80 per cent of the programme directors were included in the models. Two Delphi rounds were completed, with 14 programme directors taking part in round one and 11 in round two. The overall consensus was high (Cronbach's α = 0·98). The training model included 101 unique procedures and tasks, 24 specific to junior trainees, 68 specific to senior trainees, and nine appropriate to all. The assessment model included four procedures. A system of operative procedures and tasks for junior- and senior-level trainees has been developed along with an assessment model for trainee progression. These can be used as milestones in competency-based assessments. © 2016 BJS Society Ltd Published by John Wiley & Sons Ltd.
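
The two quantitative rules in this Delphi design (consensus measured by Cronbach's α, and inclusion of items rated 4 or above by 80 per cent of respondents) can be sketched as follows. The rating matrix here is invented for illustration:

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores."""
    r = np.asarray(ratings, dtype=float)
    k = r.shape[1]                            # number of items
    item_var = r.var(axis=0, ddof=1).sum()    # sum of per-item variances
    total_var = r.sum(axis=1).var(ddof=1)     # variance of respondent totals
    return k / (k - 1) * (1.0 - item_var / total_var)

def consensus_items(ratings, cutoff=4, frac=0.80):
    """Items rated >= cutoff (on the 5-point scale) by >= frac of respondents."""
    share = (np.asarray(ratings) >= cutoff).mean(axis=0)
    return share >= frac

# invented ratings: 4 programme directors x 3 candidate procedures
ratings = [[5, 5, 2], [4, 5, 1], [5, 4, 2], [4, 4, 1]]
alpha = cronbach_alpha(ratings)
included = consensus_items(ratings)
```

In this toy matrix the first two procedures clear the 80 per cent threshold and the third does not; the study applied the same rules to 17 programme directors and the full item pool.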

  6. A generalized business cycle model with delays in gross product and capital stock

    International Nuclear Information System (INIS)

    Hattaf, Khalid; Riad, Driss; Yousfi, Noura

    2017-01-01

    Highlights: • A generalized business cycle model is proposed and rigorously analyzed. • Well-posedness of the model and local stability of the economic equilibrium are investigated. • Direction of the Hopf bifurcation and stability of the bifurcating periodic solutions are determined. • A special case and some numerical simulations are presented. - Abstract: In this work, we propose a delayed business cycle model with general investment function. The time delays are introduced into gross product and capital stock, respectively. We first prove that the model is mathematically and economically well posed. In addition, the stability of the economic equilibrium and the existence of Hopf bifurcation are investigated. Our main results show that both time delays can cause the macro-economic system to fluctuate and the economic equilibrium to lose or gain its stability. Moreover, the direction of the Hopf bifurcation and the stability of the bifurcating periodic solutions are determined by means of the normal form method and center manifold theory. Furthermore, the models and results presented in many previous studies are improved and generalized.

  7. Singular solitons of generalized Camassa-Holm models

    International Nuclear Information System (INIS)

    Tian Lixin; Sun Lu

    2007-01-01

    Two generalizations of the Camassa-Holm system associated with the singular analysis are proposed, to examine Painleve integrability properties and to extend already known analytic solitons. A remarkable feature of the physical model is that it admits peakon solutions, which have a peaked form. An alternative WTC test allows such models to be identified directly by inserting a suitably formed ansatz into them. For the two models possessing the Painleve property, Painleve-Baecklund systems can be constructed through the expansion of solitons about the singularity manifold. Through implementations in Maple, plentiful new types of solitonic structures and some kink waves, which are affected by the variation of energy, are explored. If the energy becomes infinite in finite time, direct numerical simulations show a collapse in the soliton systems. In particular, two collapses coexist in our regular solitons, occurring around their central regions. Simulations show that non-zero parts of compactons and anti-compactons arise at the bottom of periodic waves. We also obtain floating solitary waves of infinite amplitude and, in contrast, a finite-amplitude blow-up soliton. Periodic blow-ups are found as well. Special kinks with periodic cuspons are derived.

  8. Transmittivity and wavefunctions in one-dimensional generalized Aubry models

    International Nuclear Information System (INIS)

    Basu, C.; Mookerjee, A.; Sen, A.K.; Thakur, P.K.

    1990-07-01

    We use the vector recursion method of Haydock to obtain the transmittance of a class of generalized Aubry models in one dimension. We also study the phase change of the wavefunctions as they travel through the chain, as well as the behaviour of the conductance with changes in system size. (author). 10 refs, 9 figs
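
The paper uses Haydock's vector recursion method; a more common textbook route to the transmittance of a one-dimensional chain, shown here purely as an illustrative alternative, is the transfer-matrix method for a tight-binding model. An Aubry-type quasiperiodic potential would enter through the on-site energies, e.g. eps[n] = lam * cos(2*pi*sigma*n); the parameters below are hypothetical:

```python
import numpy as np

def transmittance(eps, E=0.0, t=1.0):
    """Transmission coefficient of a 1D tight-binding chain with on-site
    energies `eps`, attached to perfect leads (requires |E| < 2t), computed
    by multiplying transfer matrices and matching to lead plane waves."""
    N = len(eps)
    M = np.eye(2, dtype=complex)
    for e in eps:
        M = np.array([[(E - e) / t, -1.0], [1.0, 0.0]], dtype=complex) @ M
    k = np.arccos(E / (2.0 * t))              # lead wavevector
    # Solve for transmitted amplitude tau and reflection amplitude r from
    # (psi_{N+1}, psi_N) = M @ (psi_1, psi_0) with plane waves in the leads.
    A = np.array([
        [-np.exp(1j * k * (N + 1)), M[0, 0] * np.exp(-1j * k) + M[0, 1]],
        [-np.exp(1j * k * N),       M[1, 0] * np.exp(-1j * k) + M[1, 1]],
    ])
    b = -np.array([M[0, 0] * np.exp(1j * k) + M[0, 1],
                   M[1, 0] * np.exp(1j * k) + M[1, 1]])
    tau, r = np.linalg.solve(A, b)
    return abs(tau) ** 2
```

A clean chain (all on-site energies zero) transmits perfectly, while any impurity reduces the transmission below one, which is a quick sanity check on the matching.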

  9. Evaluation of cloud-resolving model simulations of midlatitude cirrus with ARM and A-train observations

    Science.gov (United States)

    Muhlbauer, A.; Ackerman, T. P.; Lawson, R. P.; Xie, S.; Zhang, Y.

    2015-07-01

    Cirrus clouds are ubiquitous in the upper troposphere and still constitute one of the largest uncertainties in climate predictions. This paper evaluates cloud-resolving model (CRM) and cloud system-resolving model (CSRM) simulations of a midlatitude cirrus case with comprehensive observations collected under the auspices of the Atmospheric Radiation Measurements (ARM) program and with spaceborne observations from the National Aeronautics and Space Administration A-train satellites. The CRM simulations are driven with periodic boundary conditions and ARM forcing data, whereas the CSRM simulations are driven by the ERA-Interim product. Vertical profiles of temperature, relative humidity, and wind speeds are reasonably well simulated by the CSRM and CRM, but there are remaining biases in the temperature, wind speeds, and relative humidity, which can be mitigated through nudging the model simulations toward the observed radiosonde profiles. Simulated vertical velocities are underestimated in all simulations except in the CRM simulations with grid spacings of 500 m or finer, which suggests that turbulent vertical air motions in cirrus clouds need to be parameterized in general circulation models and in CSRM simulations with horizontal grid spacings on the order of 1 km. The simulated ice water content and ice number concentrations agree with the observations in the CSRM but are underestimated in the CRM simulations. The underestimation of ice number concentrations is consistent with the overestimation of radar reflectivity in the CRM simulations and suggests that the model produces too many large ice particles especially toward the cloud base. Simulated cloud profiles are rather insensitive to perturbations in the initial conditions or the dimensionality of the model domain, but the treatment of the forcing data has a considerable effect on the outcome of the model simulations. 
Despite considerable progress in observations and microphysical parameterizations, simulating

  10. Efficient semiparametric estimation in generalized partially linear additive models for longitudinal/clustered data

    KAUST Repository

    Cheng, Guang

    2014-02-01

    We consider efficient estimation of the Euclidean parameters in generalized partially linear additive models for longitudinal/clustered data when multiple covariates need to be modeled nonparametrically, and propose an estimation procedure based on a spline approximation of the nonparametric part of the model and the generalized estimating equations (GEE). Although the model under consideration is natural and useful in many practical applications, the literature on this model is very limited because of challenges in dealing with dependent data for nonparametric additive models. We show that the proposed estimators are consistent and asymptotically normal even if the covariance structure is misspecified. An explicit consistent estimate of the asymptotic variance is also provided. Moreover, we derive the semiparametric efficiency score and information bound under general moment conditions. By showing that our estimators achieve the semiparametric information bound, we effectively establish their efficiency in a stronger sense than what is typically considered for GEE. The derivation of our asymptotic results relies heavily on the empirical process tools that we develop for the longitudinal/clustered data. Numerical results are used to illustrate the finite sample performance of the proposed estimators. © 2014 ISI/BS.

  11. A stratiform cloud parameterization for General Circulation Models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

    The crude treatment of clouds in General Circulation Models (GCMs) is widely recognized as a major limitation in the application of these models to predictions of global climate change. The purpose of this project is to develop a parameterization for stratiform clouds in GCMs that expresses stratiform clouds in terms of bulk microphysical properties and their subgrid variability. In this parameterization, precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species

  12. Optimisation of a parallel ocean general circulation model

    Directory of Open Access Journals (Sweden)

    M. I. Beare

    1997-10-01

    Full Text Available This paper presents the development of a general-purpose parallel ocean circulation model, for use on a wide range of computer platforms, from traditional scalar machines to workstation clusters and massively parallel processors. Parallelism is provided, as a modular option, via high-level message-passing routines, thus hiding the technical intricacies from the user. An initial implementation highlights that the parallel efficiency of the model is adversely affected by a number of factors, for which optimisations are discussed and implemented. The resulting ocean code is portable and, in particular, allows science to be achieved on local workstations that could otherwise only be undertaken on state-of-the-art supercomputers.

  14. Toward a General Research Process for Using Dubin's Theory Building Model

    Science.gov (United States)

    Holton, Elwood F.; Lowe, Janis S.

    2007-01-01

    Dubin developed a widely used methodology for theory building, which describes the components of the theory building process. Unfortunately, he does not define a research process for implementing his theory building model. This article proposes a seven-step general research process for implementing Dubin's theory building model. An example of a…

  15. A general method dealing with correlations in uncertainty propagation in fault trees

    International Nuclear Information System (INIS)

    Qin Zhang

    1989-01-01

    This paper deals with the correlations among the failure probabilities (frequencies) of not only the identical basic events but also other basic events in a fault tree. It presents a general and simple method to include these correlations in uncertainty propagation. Two examples illustrate this method and show that neglecting these correlations results in large underestimation of the top event failure probability (frequency). One is the failure of the primary pump in a chemical reactor cooling system, the other example is an accident to a road transport truck carrying toxic waste. (author)
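
The underestimation effect can be illustrated with a toy Monte Carlo, not the paper's examples: for an AND gate over two identical components whose failure probability is uncertain, treating the two uncertainties as independent drops the covariance term E[p²] − E[p]², and so underestimates the mean top-event probability. All numbers below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Uncertain failure probability of a basic event (lognormal, illustrative numbers).
p1 = rng.lognormal(mean=np.log(1e-3), sigma=1.0, size=n)

# AND gate with two identical components:
# fully correlated uncertainties -> both events share the same draw of p
top_correlated = np.mean(p1 * p1)

# correlation neglected -> independent draw for the second event
p2 = rng.lognormal(mean=np.log(1e-3), sigma=1.0, size=n)
top_independent = np.mean(p1 * p2)

# Analytically E[p^2] = E[p]^2 + Var(p), so for this lognormal the
# ratio is about exp(sigma^2), i.e. roughly a factor of e here.
underestimation = top_correlated / top_independent
```

The wider the uncertainty distribution, the larger the factor by which the independence assumption underestimates the top-event probability.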

  16. General Form of Model-Free Control Law and Convergence Analyzing

    Directory of Open Access Journals (Sweden)

    Xiuying Li

    2012-01-01

    Full Text Available The general form of the model-free control law is introduced, and its convergence is analyzed. First, the need to improve the basic form of the model-free control law is explained, and the functional combination method is presented as the approach to this improvement. Then, a series of sufficient conditions for convergence is given. The analysis shows that these conditions are easily satisfied in engineering practice.

  17. A general-purpose process modelling framework for marine energy systems

    International Nuclear Information System (INIS)

    Dimopoulos, George G.; Georgopoulou, Chariklia A.; Stefanatos, Iason C.; Zymaris, Alexandros S.; Kakalis, Nikolaos M.P.

    2014-01-01

    Highlights: • Process modelling techniques applied in marine engineering. • Systems engineering approaches to manage the complexity of modern ship machinery. • General purpose modelling framework called COSSMOS. • Mathematical modelling of conservation equations and related chemical – transport phenomena. • Generic library of ship machinery component models. - Abstract: High fuel prices, environmental regulations and current shipping market conditions require ships to operate more efficiently and in a greener way. These drivers lead to the introduction of new technologies, fuels, and operations, increasing the complexity of modern ship energy systems. As a means to manage this complexity, in this paper we present the introduction of systems engineering methodologies in marine engineering via the development of a general-purpose process modelling framework for ships named DNV COSSMOS. Shifting the focus from components (the standard approach in shipping) to systems widens the space for optimal design and operation solutions. The associated computer implementation of COSSMOS is a platform that models, simulates and optimises integrated marine energy systems with respect to energy efficiency, emissions, safety/reliability and costs, under both steady-state and dynamic conditions. DNV COSSMOS can be used in assessment and optimisation of design and operation problems in existing vessels, new builds as well as new technologies. The main features and our modelling approach are presented and key capabilities are illustrated via two studies on the thermo-economic design and operation optimisation of a combined cycle system for large bulk carriers, and the transient operation simulation of an electric marine propulsion system.

  18. Generalized Calogero-Sutherland systems from many-matrix models

    International Nuclear Information System (INIS)

    Polychronakos, Alexios P.

    1999-01-01

    We construct generalizations of the Calogero-Sutherland-Moser system by appropriately reducing a model involving many unitary matrices. The resulting systems consist of particles on the circle with internal degrees of freedom, coupled through modifications of the inverse-square potential. The coupling involves SU(M) non-invariant (anti) ferromagnetic interactions of the internal degrees of freedom. The systems are shown to be integrable and the spectrum and wavefunctions of the quantum version are derived

  19. Ocean bio-geophysical modeling using mixed layer-isopycnal general circulation model coupled with photosynthesis process

    Digital Repository Service at National Institute of Oceanography (India)

    Nakamoto, S.; Saito, H.; Muneyama, K.; Sato, T.; PrasannaKumar, S.; Kumar, A.; Frouin, R.

    -chemical system that supports steady carbon circulation in geological time scale in the world ocean using Mixed Layer-Isopycnal ocean General Circulation model with remotely sensed Coastal Zone Color Scanner (CZCS) chlorophyll pigment concentration....

  20. Generalized math model for simulation of high-altitude balloon systems

    Science.gov (United States)

    Nigro, N. J.; Elkouh, A. F.; Hinton, D. E.; Yang, J. K.

    1985-01-01

    Balloon systems have proved to be a cost-effective means for conducting research experiments (e.g., infrared astronomy) in the earth's atmosphere. The purpose of this paper is to present a generalized mathematical model that can be used to simulate the motion of these systems once they have attained float altitude. The resulting form of the model is such that the pendulation and spin motions of the system are uncoupled and can be analyzed independently. The model is evaluated by comparing the simulation results with data obtained from an actual balloon system flown by NASA.

  1. Reliability of Monte Carlo simulations in modeling neutron yields from a shielded fission source

    Energy Technology Data Exchange (ETDEWEB)

    McArthur, Matthew S., E-mail: matthew.s.mcarthur@gmail.com; Rees, Lawrence B., E-mail: Lawrence_Rees@byu.edu; Czirr, J. Bart, E-mail: czirr@juno.com

    2016-08-11

    Using the combination of a neutron-sensitive {sup 6}Li glass scintillator detector with a neutron-insensitive {sup 7}Li glass scintillator detector, we are able to make an accurate measurement of the capture rate of fission neutrons on {sup 6}Li. We used this detector with a {sup 252}Cf neutron source to measure the effects of both non-borated polyethylene and 5% borated polyethylene shielding on detection rates over a range of shielding thicknesses. Both of these measurements were compared with MCNP calculations to determine how well the calculations reproduced the measurements. When the source is highly shielded, the number of interactions experienced by each neutron prior to arriving at the detector is large, so it is important to compare Monte Carlo modeling with actual experimental measurements. MCNP reproduces the data fairly well, but it does generally underestimate detector efficiency both with and without polyethylene shielding. For non-borated polyethylene it underestimates the measured value by an average of 8%. This increases to an average of 11% for borated polyethylene.

  2. Thermospheric tides simulated by the national center for atmospheric research thermosphere-ionosphere general circulation model at equinox

    International Nuclear Information System (INIS)

    Fesen, C.G.; Roble, R.G.; Ridley, E.C.

    1993-01-01

    The authors use the National Center for Atmospheric Research (NCAR) thermosphere/ionosphere general circulation model (TIGCM) to model tides and dynamics in the thermosphere. This model incorporates the latest advances in thermosphere general circulation modelling. Model results emphasize the 70 degree W longitude region, which overlaps a series of incoherent scatter radar installations. Data and the model are available in databases. The results of this theoretical modeling are compared with available data and with predictions of more empirical models. In general, there is broad agreement among the comparisons

  3. Reshocks, rarefactions, and the generalized Layzer model for hydrodynamic instabilities

    Energy Technology Data Exchange (ETDEWEB)

    Mikaelian, K O

    2008-06-10

    We report numerical simulations and analytic modeling of shock tube experiments on Rayleigh-Taylor and Richtmyer-Meshkov instabilities. We examine single interfaces of the type A/B where the incident shock is initiated in A and the transmitted shock proceeds into B. Examples are He/air and air/He. In addition, we study finite-thickness or double-interface A/B/A configurations like air/SF{sub 6}/air gas-curtain experiments. We first consider conventional shock tubes that have a 'fixed' boundary: A solid endwall which reflects the transmitted shock and reshocks the interface(s). Then we focus on new experiments with a 'free' boundary--a membrane disrupted mechanically or by the transmitted shock, sending back a rarefaction towards the interface(s). Complex acceleration histories are achieved, relevant for Inertial Confinement Fusion implosions. We compare our simulation results with a generalized Layzer model for two fluids with time-dependent densities, and derive a new freeze-out condition whereby accelerating and compressive forces cancel each other out. Except for the recently reported failures of the Layzer model, the generalized Layzer model and hydrocode simulations for reshocks and rarefactions agree well with each other, and remain to be verified experimentally.

  4. Simulation of the Low-Level-Jet by general circulation models

    Energy Technology Data Exchange (ETDEWEB)

    Ghan, S.J. [Pacific Northwest National Lab., Richland, WA (United States)

    1996-04-01

    To what degree are the low-level jet climatology and its impact on clouds and precipitation being captured by current general circulation models? It is hypothesised that a parameterization is needed. This paper describes this parameterization need.

  5. A General Nonlinear Fluid Model for Reacting Plasma-Neutral Mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Meier, E T; Shumlak, U

    2012-04-06

    A generalized, computationally tractable fluid model for capturing the effects of neutral particles in plasmas is derived. The model derivation begins with Boltzmann equations for singly charged ions, electrons, and a single neutral species. Electron-impact ionization, radiative recombination, and resonant charge exchange reactions are included. Moments of the reaction collision terms are detailed. Moments of the Boltzmann equations for electron, ion, and neutral species are combined to yield a two-component plasma-neutral fluid model. Separate density, momentum, and energy equations, each including reaction transfer terms, are produced for the plasma and neutral equations. The required closures for the plasma-neutral model are discussed.

  6. General mirror pairs for gauged linear sigma models

    Energy Technology Data Exchange (ETDEWEB)

    Aspinwall, Paul S.; Plesser, M. Ronen [Departments of Mathematics and Physics, Duke University,Box 90320, Durham, NC 27708-0320 (United States)

    2015-11-05

    We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.

  7. General mirror pairs for gauged linear sigma models

    International Nuclear Information System (INIS)

    Aspinwall, Paul S.; Plesser, M. Ronen

    2015-01-01

    We carefully analyze the conditions for an abelian gauged linear σ-model to exhibit nontrivial IR behavior described by a nonsingular superconformal field theory determining a superstring vacuum. This is done without reference to a geometric phase, by associating singular behavior to a noncompact space of (semi-)classical vacua. We find that models determined by reflexive combinatorial data are nonsingular for generic values of their parameters. This condition has the pleasant feature that the mirror of a nonsingular gauged linear σ-model is another such model, but it is clearly too strong and we provide an example of a non-reflexive mirror pair. We discuss a weaker condition inspired by considering extremal transitions, which is also mirror symmetric and which we conjecture to be sufficient. We apply these ideas to extremal transitions and to understanding the way in which both Berglund-Hübsch mirror symmetry and the Vafa-Witten mirror orbifold with discrete torsion can be seen as special cases of the general combinatorial duality of gauged linear σ-models. In the former case we encounter an example showing that our weaker condition is still not necessary.

  8. A generalized development model for testing GPS user equipment

    Science.gov (United States)

    Hemesath, N.

    1978-01-01

    The generalized development model (GDM) program, which was intended to establish how well GPS user equipment can perform under a combination of jamming and dynamics, is described. The systems design and the characteristics of the GDM are discussed. The performance aspects of the GDM are listed and the application of the GDM to civil aviation is examined.

  9. A general circulation model (GCM) parameterization of Pinatubo aerosols

    Energy Technology Data Exchange (ETDEWEB)

    Lacis, A.A.; Carlson, B.E.; Mishchenko, M.I. [NASA Goddard Institute for Space Studies, New York, NY (United States)

    1996-04-01

    The June 1991 volcanic eruption of Mt. Pinatubo is the largest and best documented global climate forcing experiment in recorded history. The time development and geographical dispersion of the aerosol has been closely monitored and sampled. Based on preliminary estimates of the Pinatubo aerosol loading, general circulation model predictions of the impact on global climate have been made.

  10. A general evolving model for growing bipartite networks

    International Nuclear Information System (INIS)

    Tian, Lixin; He, Yinghuan; Liu, Haijun; Du, Ruijin

    2012-01-01

    In this Letter, we propose and study an inner evolving bipartite network model. Significantly, we prove that the degree distribution of two different kinds of nodes both obey power-law form with adjustable exponents. Furthermore, the joint degree distribution of any two nodes for bipartite networks model is calculated analytically by the mean-field method. The result displays that such bipartite networks are nearly uncorrelated networks, which is different from one-mode networks. Numerical simulations and empirical results are given to verify the theoretical results. -- Highlights: ► We proposed a general evolving bipartite network model which was based on priority connection, reconnection and breaking edges. ► We prove that the degree distribution of two different kinds of nodes both obey power-law form with adjustable exponents. ► The joint degree distribution of any two nodes for bipartite networks model is calculated analytically by the mean-field method. ► The result displays that such bipartite networks are nearly uncorrelated networks, which is different from one-mode networks.
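
A toy sketch of this class of growing bipartite models (not the authors' exact evolution rules) is easy to write down: each step adds one node to each side, wired to m nodes on the other side chosen preferentially by degree. A useful invariant is that the two sides' degree sums stay equal, since every edge contributes one degree to each side:

```python
import numpy as np

rng = np.random.default_rng(0)

def grow_bipartite(steps=2000, m=2):
    """Toy growing bipartite network: each step adds one node to each side,
    wired to m nodes on the other side chosen preferentially by degree."""
    deg_u, deg_v = [m, m], [m, m]          # small seed with equal degree sums
    for _ in range(steps):
        # new u-node attaches preferentially to the v side
        pv = np.array(deg_v, float) / sum(deg_v)
        for t in rng.choice(len(deg_v), size=m, replace=False, p=pv):
            deg_v[t] += 1
        deg_u.append(m)
        # new v-node attaches preferentially to the u side
        pu = np.array(deg_u, float) / sum(deg_u)
        for t in rng.choice(len(deg_u), size=m, replace=False, p=pu):
            deg_u[t] += 1
        deg_v.append(m)
    return deg_u, deg_v

deg_u, deg_v = grow_bipartite()
```

Under preferential attachment the early nodes accumulate degree far beyond m, producing the heavy-tailed (power-law-like) degree distributions the Letter analyzes by mean-field methods.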

  11. Study of the properties of general relativistic Kink model (GRK)

    International Nuclear Information System (INIS)

    Oliveira, L.C.S. de.

    1980-01-01

The stability of the general relativistic Kink model (GRK) is studied. It is shown that the model is stable at least against radial perturbations. Furthermore, the Dirac field in the background of the geometry generated by the GRK is studied. It is verified that the GRK localizes the Dirac field around the region of largest curvature. The physical interpretation of this system (the Dirac field in the GRK background) is discussed. (Author)

  12. Determining Rheological Parameters of Generalized Yield-Power-Law Fluid Model

    Directory of Open Access Journals (Sweden)

    Stryczek Stanislaw

    2004-09-01

Full Text Available The principles of determining the rheological parameters of drilling muds described by a generalized yield-power-law model are presented in the paper. The relations between shear stress and shear rate are given. The conditions for laboratory measurement of the rheological parameters of generalized yield-power-law fluids are described, and the mathematical relations needed to obtain the model parameters are given. The method for solving these relations numerically is presented with block diagrams, and the rheological parameters of an example drilling mud are calculated with the resulting numerical program.
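As a concrete illustration of the fitting step, the yield-power-law (Herschel-Bulkley) relation tau = tau_y + k * gamma_dot**n can be fitted to viscometer readings by nonlinear least squares. The sketch below uses synthetic data and assumed parameter values; it is not the paper's numerical program.

```python
import numpy as np
from scipy.optimize import curve_fit

def herschel_bulkley(gamma_dot, tau_y, k, n):
    """Yield-power-law model: shear stress = yield stress + k * (shear rate)**n."""
    return tau_y + k * gamma_dot ** n

# Synthetic viscometer readings (shear rate in 1/s, stress in Pa); the "true"
# parameters below are assumed for the demonstration, not taken from the paper.
rng = np.random.default_rng(0)
gamma_dot = np.array([5.1, 10.2, 170.0, 340.0, 511.0, 1022.0])
true_params = (8.0, 0.9, 0.6)                       # tau_y [Pa], k [Pa*s^n], n [-]
tau = herschel_bulkley(gamma_dot, *true_params) + rng.normal(0, 0.2, gamma_dot.size)

# Nonlinear least squares; bounds keep the parameters physically meaningful.
params, _ = curve_fit(herschel_bulkley, gamma_dot, tau,
                      p0=(1.0, 1.0, 1.0), bounds=(0, [50.0, 10.0, 1.5]))
tau_y, k, n = params
print(f"tau_y = {tau_y:.2f} Pa, k = {k:.3f} Pa*s^n, n = {n:.3f}")
```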

  13. A General Framework for Portfolio Theory. Part I: theory and various models

    OpenAIRE

    Maier-Paape, Stanislaus; Zhu, Qiji Jim

    2017-01-01

Utility and risk are two often competing measurements of investment success. We show that an efficient trade-off between these two measurements for investment portfolios happens, in general, on a convex curve in the two-dimensional space of utility and risk. This is a rather general pattern. The modern portfolio theory of Markowitz [H. Markowitz, Portfolio Selection, 1959] and its natural generalization, the capital market pricing model [W. F. Sharpe, Mutual fund performance, 1966], are spe...

  14. On the asymptotic ergodic capacity of FSO links with generalized pointing error model

    KAUST Repository

    Al-Quwaiee, Hessa

    2015-09-11

Free-space optical (FSO) communication systems are negatively affected by two physical phenomena, namely, scintillation due to atmospheric turbulence and pointing errors. To quantify the effect of these two factors on FSO system performance, we need an effective mathematical model for them. Scintillation is typically modeled by the log-normal and Gamma-Gamma distributions for weak and strong turbulence conditions, respectively. In this paper, we propose and study a generalized pointing error model based on the Beckmann distribution. We then derive the asymptotic ergodic capacity of FSO systems under the joint impact of turbulence and generalized pointing error impairments. © 2015 IEEE.

  15. MODELING OF INNOVATION EDUCATIONAL ENVIRONMENT OF GENERAL EDUCATIONAL INSTITUTION: THE SCIENTIFIC APPROACHES

    OpenAIRE

    Anzhelika D. Tsymbalaru

    2010-01-01

In the paper, scientific approaches to modeling the innovation educational environment of a general educational institution are considered: the system approach (analysis of the object, process, and result of modeling as system objects), the activity approach (organizational and psychological structure), and the synergetic approach (aspects and principles).

  16. Generalized kinetic model of reduction of molecular oxidant by metal containing redox

    International Nuclear Information System (INIS)

    Kravchenko, T.A.

    1986-01-01

The present work is devoted to the kinetics of the reduction of molecular oxidants by metal-containing redox materials. The generalized kinetic model constructed for the redox process in the system solid redox material - reagent solution enables a general theoretical approach to its study and yields new results on the kinetics and mechanism of the interaction of redox materials with oxidants.

  17. Teaching Generalized Imitation Skills to a Preschooler with Autism Using Video Modeling

    Science.gov (United States)

    Kleeberger, Vickie; Mirenda, Pat

    2010-01-01

    This study examined the effectiveness of video modeling to teach a preschooler with autism to imitate previously mastered and not mastered actions during song and toy play activities. A general case approach was used to examine the instructional universe of preschool songs and select exemplars that were most likely to facilitate generalization.…

  18. A general method for the inclusion of radiation chemistry in astrochemical models.

    Science.gov (United States)

    Shingledecker, Christopher N; Herbst, Eric

    2018-02-21

In this paper, we propose a general formalism that allows for the estimation of radiolysis decomposition pathways and rate coefficients suitable for use in astrochemical models, with a focus on solid phase chemistry. Such a theory can help increase the connection between laboratory astrophysics experiments and astrochemical models by providing a means for modelers to incorporate radiation chemistry into chemical networks. The general method proposed here is targeted particularly at the majority of species now included in chemical networks for which little radiochemical data exist; however, the method can also be used as a starting point for considering better studied species. We here apply our theory to the irradiation of H2O ice and compare the results with previous experimental data.

  19. A stratiform cloud parameterization for general circulation models

    International Nuclear Information System (INIS)

    Ghan, S.J.; Leung, L.R.; Chuang, C.C.; Penner, J.E.; McCaa, J.

    1994-01-01

The crude treatment of clouds in general circulation models (GCMs) is widely recognized as a major limitation in applying these models to predictions of global climate change. The purpose of this project is to develop for GCMs a stratiform cloud parameterization that expresses clouds in terms of bulk microphysical properties and their subgrid variability. The various cloud variables and their interactions are summarized. Precipitating cloud species are distinguished from non-precipitating species, and the liquid phase is distinguished from the ice phase. The size of the non-precipitating cloud particles (which influences both the cloud radiative properties and the conversion of non-precipitating cloud species to precipitating species) is determined by predicting both the mass and number concentrations of each species.

  20. An applied general equilibrium model for Dutch agribusiness policy analysis

    NARCIS (Netherlands)

    Peerlings, J.

    1993-01-01

    The purpose of this thesis was to develop a basic static applied general equilibrium (AGE) model to analyse the effects of agricultural policy changes on Dutch agribusiness. In particular the effects on inter-industry transactions, factor demand, income, and trade are of

  1. On Regularity Criteria for the Two-Dimensional Generalized Liquid Crystal Model

    Directory of Open Access Journals (Sweden)

    Yanan Wang

    2014-01-01

    Full Text Available We establish the regularity criteria for the two-dimensional generalized liquid crystal model. It turns out that the global existence results satisfy our regularity criteria naturally.

  2. A general equilibrium model of ecosystem services in a river basin

    Science.gov (United States)

    Travis Warziniack

    2014-01-01

    This study builds a general equilibrium model of ecosystem services, with sectors of the economy competing for use of the environment. The model recognizes that production processes in the real world require a combination of natural and human inputs, and understanding the value of these inputs and their competing uses is necessary when considering policies of resource...

  3. Parametrically Guided Generalized Additive Models with Application to Mergers and Acquisitions Data.

    Science.gov (United States)

    Fan, Jianqing; Maity, Arnab; Wang, Yihui; Wu, Yichao

    2013-01-01

Generalized nonparametric additive models present a flexible way to evaluate the effects of several covariates on a general outcome of interest via a link function. In this modeling framework, one assumes that the effect of each covariate is nonparametric and additive. In practice, however, there is often prior information available about the shape of the regression functions, possibly from pilot studies or exploratory analysis. In this paper, we consider such situations and propose an estimation procedure in which the prior information is used as a parametric guide to fit the additive model. Specifically, we first posit a parametric family for each of the regression functions using the prior information (parametric guides). After removing these parametric trends, we estimate the remainder of the nonparametric functions using a nonparametric generalized additive model, and form the final estimates by adding back the parametric trends. We investigate the asymptotic properties of the estimates and show that when a good guide is chosen, the asymptotic variance of the estimates can be reduced significantly while the asymptotic bias remains the same as that of the unguided estimator. We assess the performance of our method via a simulation study and demonstrate it by applying it to a real data set on mergers and acquisitions.
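The guide-then-smooth procedure can be sketched for a single covariate: fit a parametric guide, smooth the residuals nonparametrically, and add the trend back. The example below is a simplified one-dimensional illustration with a polynomial guide and a smoothing spline standing in for the generalized additive fit; all data and tuning values are synthetic, not the paper's.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-2, 2, 200))
f_true = np.sin(1.5 * x) + 0.5 * x                # "unknown" regression function
y = f_true + rng.normal(0, 0.3, x.size)

# Step 1: parametric guide from "prior" knowledge -- here a cubic polynomial.
guide_coef = np.polyfit(x, y, 3)
guide = np.polyval(guide_coef, x)

# Step 2: nonparametric fit of what the guide misses (smoothing spline on the
# residuals; s is set near n * noise_variance, a common heuristic).
resid_fit = UnivariateSpline(x, y - guide, s=x.size * 0.3 ** 2)

# Step 3: final estimate = parametric trend + nonparametric correction.
f_hat = guide + resid_fit(x)
rmse = float(np.sqrt(np.mean((f_hat - f_true) ** 2)))
print(f"RMSE of guided fit: {rmse:.3f}")
```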

  4. Seasonal predictability of Kiremt rainfall in coupled general circulation models

    Science.gov (United States)

    Gleixner, Stephanie; Keenlyside, Noel S.; Demissie, Teferi D.; Counillon, François; Wang, Yiguo; Viste, Ellen

    2017-11-01

The Ethiopian economy and population are strongly dependent on rainfall. Operational seasonal predictions for the main rainy season (Kiremt, June-September) are based on statistical approaches with Pacific sea surface temperatures (SST) as the main predictor. Here we analyse dynamical predictions from 11 coupled general circulation models for the Kiremt seasons of 1985-2005, with the forecasts starting from the beginning of May. We find skillful predictions from three of the 11 models, but no model beats a simple linear prediction model based on the predicted Niño3.4 indices. The skill of the individual models in dynamically predicting Kiremt rainfall depends on the strength of the teleconnection between Kiremt rainfall and concurrent Pacific SST in the models. Models that do not simulate this teleconnection fail to capture the observed relationship between Kiremt rainfall and the large-scale Walker circulation.
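The benchmark mentioned in the abstract, a simple linear prediction of Kiremt rainfall from the Niño3.4 index, amounts to ordinary least-squares regression. A minimal sketch with synthetic data follows (the negative slope and noise level are assumed for illustration, not the paper's values); a real skill assessment would use cross-validated rather than in-sample correlation.

```python
import numpy as np

rng = np.random.default_rng(2)
years = 21                                    # 1985-2005, one value per season
nino34 = rng.normal(0, 1, years)              # predicted Nino3.4 index (synthetic)
rainfall = -0.6 * nino34 + rng.normal(0, 0.8, years)  # assumed inverse teleconnection

# Benchmark: ordinary least-squares regression of rainfall on Nino3.4.
slope, intercept = np.polyfit(nino34, rainfall, 1)
pred = slope * nino34 + intercept
skill = float(np.corrcoef(pred, rainfall)[0, 1])  # in-sample anomaly correlation
print(f"slope = {slope:.2f}, skill r = {skill:.2f}")
```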

  5. Comparison of Decadal Water Storage Trends from Global Hydrological Models and GRACE Satellite Data

    Science.gov (United States)

    Scanlon, B. R.; Zhang, Z. Z.; Save, H.; Sun, A. Y.; Mueller Schmied, H.; Van Beek, L. P.; Wiese, D. N.; Wada, Y.; Long, D.; Reedy, R. C.; Doll, P. M.; Longuevergne, L.

    2017-12-01

Global hydrology is increasingly being evaluated using models; however, the reliability of these global models is not well known. In this study we compared decadal trends (2002-2014) in land water storage from 7 global models (WGHM, PCR-GLOBWB, and GLDAS: NOAH, MOSAIC, VIC, CLM, and CLSM) to storage trends from new GRACE satellite mascon solutions (CSR-M and JPL-M). The analysis was conducted over 186 river basins, representing about 60% of the global land area. Modeled total water storage trends agree with GRACE-derived trends that are within ±0.5 km3/yr but greatly underestimate large declining and rising trends outside this range. Large declining trends are found mostly in intensively irrigated basins and in some basins in northern latitudes. Rising trends are found in basins with little or no irrigation and are generally related to increasing trends in precipitation. The largest decline is found in the Ganges (-12 km3/yr) and the largest rise in the Amazon (43 km3/yr). Differences between models and GRACE are greatest in large basins (>0.5 × 10^6 km2), mostly in humid regions. There is very little agreement in storage trends between the models and GRACE, or among the models themselves, with values of r2 mostly below 0.5, suggesting that basins store water over decadal timescales in a way that is underrepresented by the models. The storage capacity in the modeled soil and groundwater compartments may be insufficient to accommodate the range in water storage variations shown by GRACE data. The inability of the models to capture the large storage trends indicates that model projections of climate- and human-induced changes in water storage may be mostly underestimated. Future GRACE and model studies should try to reduce the various sources of uncertainty in water storage trends and should consider expanding the modeled storage capacity of the soil profiles and their interaction with groundwater.

  6. From linear to generalized linear mixed models: A case study in repeated measures

    Science.gov (United States)

    Compared to traditional linear mixed models, generalized linear mixed models (GLMMs) can offer better correspondence between response variables and explanatory models, yielding more efficient estimates and tests in the analysis of data from designed experiments. Using proportion data from a designed...

  7. The Michigan Titan Thermospheric General Circulation Model (TTGCM)

    Science.gov (United States)

    Bell, J. M.; Bougher, S. W.; de Lahaye, V.; Waite, J. H.

    2005-12-01

The Cassini flybys of Titan since late October 2004 have provided data critical to better understanding its chemical and thermal structure. With this in mind, a 3-D TGCM of Titan's atmosphere from 600 km to the exobase (~1450 km) has been developed. This paper presents the first results from the partially operational code. Currently, the TTGCM includes static background chemistry (Lebonnois et al. 2001; Vervack et al. 2004) coupled with thermal conduction routines. The thermosphere remains dominated by solar EUV forcing and HCN rotational cooling, which is calculated by a full line-by-line radiative transfer routine along the lines of Yelle (1991) and Mueller-Wodarg (2000, 2002). In addition, an approximate treatment of magnetospheric heating is explored. This paper illustrates the model's capabilities as well as some initial results from the Titan Thermospheric General Circulation Model, which will be compared with both the Cassini INMS data and the model of Mueller-Wodarg (2000, 2002).

  8. General relativity cosmological models without the big bang

    International Nuclear Information System (INIS)

    Rosen, N.

    1985-01-01

Attention is given to the so-called standard model of the universe in the framework of the general theory of relativity. This model is taken to be homogeneous and isotropic and filled with an ideal fluid characterized by a density and a pressure. Under the assumption that the universe began in a singular state, however, it is hard to understand why the universe is so nearly homogeneous and isotropic at present, for a singularity represents a breakdown of physical laws, and the initial singularity cannot therefore predetermine the subsequent symmetries of the universe. The objective of the present investigation is to find a way of avoiding this initial singularity, i.e., to look for a cosmological model without the big bang. The idea is proposed that there exists a limiting density of matter, of the order of magnitude of the Planck density, and that this was the density of matter at the moment at which the universe began to expand.

  9. Pharmaceutical industry and trade liberalization using computable general equilibrium model.

    Science.gov (United States)

    Barouni, M; Ghaderi, H; Banouei, Aa

    2012-01-01

Computable general equilibrium (CGE) models are known as a powerful instrument in economic analyses and have been widely used to evaluate the effects of trade liberalization. The purpose of this study was to assess the impacts of trade openness on the pharmaceutical industry using a CGE model. Using such a model, the effects of decreases in tariffs, as a symbol of trade liberalization, on key variables of Iranian pharmaceutical products were studied. Simulation was performed via two scenarios. The first scenario was the effect of decreases in tariffs on pharmaceutical products of 10, 30, 50, and 100 percent on key drug variables, and the second was the effect of decreases in tariffs in all sectors except pharmaceutical products on vital economic variables of pharmaceutical products. The required data were obtained and the model parameters were calibrated according to the social accounting matrix of Iran in 2006. The simulation results demonstrated that the first scenario increased the import, export, market supply, and household consumption of pharmaceutical products, while these variables would on average decrease in the second scenario. Ultimately, societal welfare would improve in all scenarios. We present and synthesize a CGE model that can be used to analyze trade liberalization policy issues in developing countries (like Iran), thus providing information that policymakers can use to improve pharmacy economics.

  10. Description of identical particles via gauged matrix models: a generalization of the Calogero-Sutherland system

    International Nuclear Information System (INIS)

    Park, Jeong-Hyuck

    2003-01-01

We elaborate the idea that matrix models equipped with gauge symmetry provide a natural framework for describing identical particles. After demonstrating the general prescription, we study an exactly solvable gauged matrix model of harmonic oscillator type. The model gives a generalization of the Calogero-Sutherland system in which the strength of the inverse-square potential is not fixed but dynamical, bounded from below.

  11. Fractal diffusion equations: Microscopic models with anomalous diffusion and its generalizations

    International Nuclear Information System (INIS)

    Arkhincheev, V.E.

    2001-04-01

To describe "anomalous" diffusion, generalized diffusion equations of fractional order are deduced from microscopic models with anomalous diffusion, such as the comb model and Levy flights. It is shown that two types of equations are possible: with fractional temporal derivatives and with fractional spatial derivatives. The solutions of these equations are obtained and the physical meaning of the fractional equations is discussed. The relation between diffusion and conductivity is studied, and the well-known Einstein relation is generalized to the anomalous diffusion case. It is shown that for Levy-flight diffusion Ohm's law does not apply: the current depends on the electric field in a nonlinear way due to the anomalous character of Levy flights. The results of numerical simulations, which confirm this conclusion, are also presented. (author)
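In the notation commonly used for such models (the symbols here are generic, not necessarily the author's), the two types of fractional diffusion equations can be written as:

```latex
% Fractional-time equation (subdiffusion, e.g. on a comb structure):
\frac{\partial^{\beta} n(x,t)}{\partial t^{\beta}}
  = D\,\frac{\partial^{2} n(x,t)}{\partial x^{2}},
\qquad 0 < \beta < 1,
\qquad \text{giving } \langle x^{2}(t)\rangle \propto t^{\beta}.

% Fractional-space equation (Levy flights with stable index \alpha):
\frac{\partial n(x,t)}{\partial t}
  = D\,\frac{\partial^{\alpha} n(x,t)}{\partial |x|^{\alpha}},
\qquad 0 < \alpha < 2.
```

The first form produces slower-than-linear growth of the mean-square displacement (subdiffusion); the second produces heavy-tailed jump statistics, which is what breaks the linear current-field (Ohm's law) relation discussed in the abstract.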

  12. Observed Screen (Air) and GCM Surface/Screen Temperatures: Implications for Outgoing Longwave Fluxes at the Surface.

    Science.gov (United States)

    Garratt, J. R.

    1995-05-01

There is direct evidence that the excess net radiation calculated in general circulation models at continental surfaces [of about 11-17 W m-2 (20%-27%) on an annual basis] is due not only to overestimates in annual incoming shortwave fluxes [of 9-18 W m-2 (6%-9%)] but also to underestimates in outgoing longwave fluxes. The bias in the outgoing longwave flux is deduced from a comparison of screen-air temperature observations, available as a global climatology of mean monthly values, with model-calculated surface and screen-air temperatures. An underestimate in the screen temperature computed in general circulation models over continents, of about 3 K on an annual basis, implies an underestimate in the outgoing longwave flux, averaged over the six models under study, of 11-15 W m-2 (3%-4%). For a set of 22 inland stations studied previously, the residual bias on an annual basis (the residual is the net radiation minus incoming shortwave plus outgoing longwave) varies between 18 and 23 W m-2 for the models considered. Additional biases in one or both of the reflected shortwave and incoming longwave components cannot be ruled out.
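The longwave-flux implication of a roughly 3 K screen-temperature bias can be checked by linearizing the Stefan-Boltzmann law, dF ≈ 4·eps·sigma·T³·dT. A minimal sketch, with T ≈ 288 K assumed as a representative screen temperature (the emissivity values are illustrative):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def longwave_bias(t_kelvin, dt, emissivity=1.0):
    """First-order change in outgoing longwave flux for a temperature bias dt:
    dF ~= 4 * eps * sigma * T**3 * dt (linearized Stefan-Boltzmann law)."""
    return 4.0 * emissivity * SIGMA * t_kelvin ** 3 * dt

# 3 K cold bias at ~288 K, for unit and slightly reduced surface emissivity.
b_full = longwave_bias(288.0, 3.0)
b_090 = longwave_bias(288.0, 3.0, emissivity=0.9)
print(f"dF = {b_full:.1f} W m-2 (eps=1.0), {b_090:.1f} W m-2 (eps=0.9)")
```

Both values land in the mid-teens of W m-2, consistent with the 11-15 W m-2 range quoted in the abstract.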

  13. General Voltage Feedback Circuit Model in the Two-Dimensional Networked Resistive Sensor Array

    Directory of Open Access Journals (Sweden)

    JianFeng Wu

    2015-01-01

Full Text Available To analyze the features of the two-dimensional networked resistive sensor array, we first propose a general model of voltage feedback circuits (VFCs), such as the voltage feedback non-scanned-electrode circuit and the voltage feedback non-scanned-sampling-electrode circuit. By analyzing the general model, we then give a general mathematical expression for the effective equivalent resistance of the element being tested in VFCs. Finally, we evaluate the features of VFCs by simulation and test experiments. The results show that the expression is applicable for analyzing the performance of VFCs with respect to parameters such as the multiplexers' switch resistances, the non-scanned elements, and the array size.

  14. Seasonal changes in the atmospheric heat balance simulated by the GISS general circulation model

    Science.gov (United States)

    Stone, P. H.; Chow, S.; Helfand, H. M.; Quirk, W. J.; Somerville, R. C. J.

    1975-01-01

Tests of the ability of numerical general circulation models to simulate the atmosphere have so far focused on simulations of the January climatology. These models generally prescribe boundary conditions such as sea surface temperature, but this does not prevent testing their ability to simulate the seasonal changes in atmospheric processes that accompany prescribed seasonal changes in boundary conditions. Experiments simulating changes in the zonally averaged heat balance are discussed, since many simplified models of climatic processes are based solely on this balance.

  15. General Business Model Patterns for Local Energy Management Concepts

    International Nuclear Information System (INIS)

    Facchinetti, Emanuele; Sulzer, Sabine

    2016-01-01

The transition toward a more sustainable global energy system, relying significantly on renewable energies and decentralized energy systems, requires a deep reorganization of the energy sector. The way energy services are generated, delivered, and traded is expected to be very different in the coming years. Business model innovation is recognized as a key driver for the successful implementation of the energy turnaround. This work contributes to this topic by introducing a heuristic methodology that eases the identification of the general business model patterns best suited to Local Energy Management concepts such as Energy Hubs. A conceptual framework characterizing the Local Energy Management business model solution space is developed. Three reference business model patterns providing orientation across the defined solution space are identified, analyzed, and compared. Through a market review, a number of successfully implemented innovative business models are analyzed and allocated within the defined solution space. The outcomes of this work offer potential stakeholders a starting point and guidelines for the business model innovation process, as well as insights for policy makers on challenges and opportunities related to Local Energy Management concepts.

  16. General Business Model Patterns for Local Energy Management Concepts

    Energy Technology Data Exchange (ETDEWEB)

    Facchinetti, Emanuele, E-mail: emanuele.facchinetti@hslu.ch; Sulzer, Sabine [Lucerne Competence Center for Energy Research, Lucerne University of Applied Science and Arts, Horw (Switzerland)

    2016-03-03

The transition toward a more sustainable global energy system, relying significantly on renewable energies and decentralized energy systems, requires a deep reorganization of the energy sector. The way energy services are generated, delivered, and traded is expected to be very different in the coming years. Business model innovation is recognized as a key driver for the successful implementation of the energy turnaround. This work contributes to this topic by introducing a heuristic methodology that eases the identification of the general business model patterns best suited to Local Energy Management concepts such as Energy Hubs. A conceptual framework characterizing the Local Energy Management business model solution space is developed. Three reference business model patterns providing orientation across the defined solution space are identified, analyzed, and compared. Through a market review, a number of successfully implemented innovative business models are analyzed and allocated within the defined solution space. The outcomes of this work offer potential stakeholders a starting point and guidelines for the business model innovation process, as well as insights for policy makers on challenges and opportunities related to Local Energy Management concepts.

  17. An EM Algorithm for Double-Pareto-Lognormal Generalized Linear Model Applied to Heavy-Tailed Insurance Claims

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2017-11-01

Full Text Available Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double-Pareto lognormal (DPLN) distribution in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper the associated generalized linear model has its location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.

  18. Some five-dimensional Bianchi type-iii string cosmological models in general relativity

    International Nuclear Information System (INIS)

    Samanta, G.C.; Biswal, S.K.; Mohanty, G.; Rameswarpatna, Bhubaneswar

    2011-01-01

In this paper we construct some five-dimensional Bianchi type-III cosmological models in general relativity in which the source of the gravitational field is a massive string. We obtain different classes of solutions by considering different functional forms of the metric potentials. It is observed that one of the models is not physically acceptable and that the other models possess a big-bang singularity. The physical and kinematical behaviour of the models is discussed.

  19. A guide to developing resource selection functions from telemetry data using generalized estimating equations and generalized linear mixed models

    Directory of Open Access Journals (Sweden)

    Nicola Koper

    2012-03-01

Full Text Available Resource selection functions (RSFs) are often developed using satellite (ARGOS) or Global Positioning System (GPS) telemetry datasets, which provide a large amount of highly correlated data. We discuss and compare the use of generalized linear mixed models (GLMMs) and generalized estimating equations (GEEs) for developing RSFs from this type of data. GLMMs directly model differences among individuals (caribou, in our case study), while GEEs depend on an adjustment of the standard error to compensate for the correlation of data points within individuals. Empirical standard errors, rather than model-based standard errors, must be used with either GLMMs or GEEs when developing RSFs. There are several important differences between these approaches; in particular, GLMMs are best for producing parameter estimates that predict how management might influence individuals, while GEEs are best for predicting how management might influence populations. As the interpretation, value, and statistical significance of the two types of parameter estimates differ, it is important that users select the appropriate analytical method. We also outline the use of k-fold cross-validation to assess the fit of these models. Both GLMMs and GEEs hold promise for developing RSFs as long as they are used appropriately.

  20. Comparison between Duncan and Chang’s EB Model and the Generalized Plasticity Model in the Analysis of a High Earth-Rockfill Dam

    Directory of Open Access Journals (Sweden)

    Weixin Dong

    2013-01-01

Full Text Available Nonlinear elastic models and elastoplastic models are two main kinds of constitutive models of soil, widely used in the numerical analysis of soil structures. In this study, Duncan and Chang's EB model and the generalized plasticity model proposed by Pastor, Zienkiewicz, and Chan are discussed and applied to describe the stress-strain relationship of rockfill materials. The two models were validated using the results of triaxial shear tests under different confining pressures. The comparisons between the model fittings and the test data showed that the modified generalized plasticity model is capable of simulating the mechanical behaviour of rockfill materials. The modified generalized plasticity model was implemented into a finite element code to carry out static analyses of a high earth-rockfill dam in China. Nonlinear elastic analyses were also performed with Duncan and Chang's EB model in the same program framework. The comparisons of the FEM results and in situ monitoring data showed that the modified PZ-III model gives a better description of the deformation of the earth-rockfill dam than Duncan and Chang's EB model.