WorldWideScience

Sample records for model systematically underestimates

  1. Systematic underestimation of the age of samples with saturating exponential behaviour and inhomogeneous dose distribution

    International Nuclear Information System (INIS)

    Brennan, B.J.

    2000-01-01

    In luminescence and ESR studies, a systematic underestimate of the (average) equivalent dose, and thus also of the age, of a sample can occur when there is significant variation of the natural dose within the sample and some regions approach saturation. This is demonstrated explicitly for a material that exhibits single-saturating-exponential growth of signal with dose. The result is valid for any geometry (e.g. a plane layer, spherical grain, etc.) and some illustrative cases are modelled, with the age bias exceeding 10% in extreme cases. If the dose distribution within the sample can be modelled accurately, it is possible to correct for the bias in the estimates of equivalent dose and age. While quantifying the effect would be more difficult, similar systematic biases in dose and age estimates are likely in other situations more complex than the one modelled here.
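
The mechanism described above follows from the concavity of the growth curve: averaging the signal from regions with different doses and inverting the mean signal yields less than the true mean dose (Jensen's inequality). A minimal sketch with hypothetical doses and saturation parameters (not values from the paper):

```python
import math

def signal(dose, s_max=1.0, d0=100.0):
    """Single-saturating-exponential growth of signal with dose."""
    return s_max * (1.0 - math.exp(-dose / d0))

def dose_from_signal(s, s_max=1.0, d0=100.0):
    """Invert the growth curve to recover an equivalent dose."""
    return -d0 * math.log(1.0 - s / s_max)

# Inhomogeneous natural dose within the sample: two regions, one near saturation.
doses = [50.0, 250.0]                      # Gy, hypothetical
true_mean_dose = sum(doses) / len(doses)   # 150 Gy

# Measuring the aggregate signal and inverting it underestimates the mean dose,
# because the growth curve is concave (Jensen's inequality).
mean_signal = sum(signal(d) for d in doses) / len(doses)
apparent_dose = dose_from_signal(mean_signal)   # well below 150 Gy
```

Here the apparent equivalent dose comes out near 107 Gy, a bias of roughly 29% relative to the true 150 Gy mean, illustrating how the effect grows as part of the sample approaches saturation.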

  2. Underestimating belief in climate change

    Science.gov (United States)

    Jost, John T.

    2018-03-01

    People are influenced by second-order beliefs — beliefs about the beliefs of others. New research finds that citizens in the US and China systematically underestimate popular support for taking action to curb climate change. Fortunately, they seem willing and able to correct their misperceptions.

  3. CMIP5 land surface models systematically underestimate inter-annual variability of net ecosystem exchange in semi-arid southwestern North America.

    Science.gov (United States)

    MacBean, N.; Scott, R. L.; Biederman, J. A.; Vuichard, N.; Hudson, A.; Barnes, M.; Fox, A. M.; Smith, W. K.; Peylin, P. P.; Maignan, F.; Moore, D. J.

    2017-12-01

    Recent studies based on analysis of atmospheric CO2 inversions, satellite data and terrestrial biosphere model simulations have suggested that semi-arid ecosystems play a dominant role in the interannual variability and long-term trend in the global carbon sink. These studies have largely cited the response of vegetation activity to changing moisture availability as the primary mechanism of variability. However, some land surface models (LSMs) used in these studies have performed poorly in comparison to satellite-based observations of vegetation dynamics in semi-arid regions. Further analysis is therefore needed to ensure semi-arid carbon cycle processes are well represented in global scale LSMs before we can fully establish their contribution to the global carbon cycle. In this study, we evaluated annual net ecosystem exchange (NEE) simulated by CMIP5 land surface models using observations from 20 Ameriflux sites across semi-arid southwestern North America. We found that CMIP5 models systematically underestimate the magnitude and sign of NEE inter-annual variability; therefore, the true role of semi-arid regions in the global carbon cycle may be even more important than previously thought. To diagnose the factors responsible for this bias, we used the ORCHIDEE LSM to test different climate forcing data, prescribed vegetation fractions and model structures. Climate and prescribed vegetation do contribute to uncertainty in annual NEE simulations, but the bias is primarily caused by incorrect timing and magnitude of peak gross carbon fluxes. Modifications to the hydrology scheme improved simulations of soil moisture in comparison to data. This in turn improved the seasonal cycle of carbon uptake due to a more realistic limitation on photosynthesis during water stress. However, the peak fluxes are still too low, and phenology is poorly represented for desert shrubs and grasses. 
We provide suggestions on model developments needed to tackle these issues in the future.

  4. Earth System Models Underestimate Soil Carbon Diagnostic Times in Dry and Cold Regions.

    Science.gov (United States)

    Jing, W.; Xia, J.; Zhou, X.; Huang, K.; Huang, Y.; Jian, Z.; Jiang, L.; Xu, X.; Liang, J.; Wang, Y. P.; Luo, Y.

    2017-12-01

    Soils contain the largest organic carbon (C) reservoir at the Earth's surface and strongly modulate the terrestrial feedback to climate change. Large uncertainty exists in current Earth system models (ESMs) in simulating soil organic C (SOC) dynamics, calling for a systematic diagnosis of their performance based on observations. Here, we built a global database of SOC diagnostic time (i.e., turnover time; τsoil) measured at 320 sites with four different approaches. We found that the estimated τsoil was comparable among the 14C dating, 13C vegetation-change shift, and stock-over-flux approaches, but was shortest in laboratory incubation studies. The state-of-the-art ESMs underestimated τsoil in most biomes, by more than 10-fold in cold regions and more than 5-fold in dry regions. Moreover, we identified a clear negative dependence of τsoil on temperature and precipitation in both the observational and modeling results. Compared with the Community Land Model version 4 (CLM4), the incorporation of a vertical soil profile (CLM4.5) could substantially extend the simulated τsoil. Our findings suggest that the accuracy of the climate-C cycle feedback in current ESMs could be enhanced by an improved understanding of SOC dynamics under limited hydrothermal conditions.
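
Of the four approaches above, stock-over-flux is the simplest to state: at steady state, the diagnostic (turnover) time is the SOC stock divided by the outgoing C flux. A sketch with hypothetical values (not taken from the database):

```python
# Diagnostic (turnover) time from the stock-over-flux approach:
# tau = SOC stock / heterotrophic respiration flux, assuming steady state.
soc_stock_kg_m2 = 12.0        # hypothetical SOC stock, kg C m^-2
resp_flux_kg_m2_yr = 0.4      # hypothetical heterotrophic respiration, kg C m^-2 yr^-1

tau_soil_years = soc_stock_kg_m2 / resp_flux_kg_m2_yr   # 30.0 years
```

A >10-fold model underestimate of τsoil in a cold region would correspond, in this framing, to a model turning over the same stock with roughly ten times the outgoing flux (or a tenth of the stock).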

  5. Underestimation of Project Costs

    Science.gov (United States)

    Jones, Harry W.

    2015-01-01

    Large projects almost always exceed their budgets. Estimating cost is difficult and estimated costs are usually too low. Three different reasons are suggested: bad luck, over-optimism, and deliberate underestimation. Project management can usually point to project difficulty and complexity, technical uncertainty, stakeholder conflicts, scope changes, unforeseen events, and other not really unpredictable bad luck. Project planning is usually over-optimistic, so the likelihood and impact of bad luck are systematically underestimated. Project plans reflect optimism and hope for success in a supposedly unique new effort rather than rational expectations based on historical data. Past project problems are claimed to be irrelevant because "This time it's different." Some bad luck is inevitable and reasonable optimism is understandable, but deliberate deception must be condemned. In a competitive environment, project planners and advocates often deliberately underestimate costs to help gain project approval and funding. Project benefits, cost savings, and probability of success are exaggerated and key risks ignored. Project advocates have incentives to distort information and conceal difficulties from project approvers. One naively suggested cure is more openness, honesty, and group adherence to shared overall goals. A more realistic alternative is threatening overrun projects with cancellation. Neither approach seems to solve the problem. A better method to avoid the delusions of over-optimism and the deceptions of biased advocacy is to base the project cost estimate on the actual costs of a large group of similar projects. Over-optimism and deception can continue beyond the planning phase and into project execution. Hard milestones based on verified tests and demonstrations can provide a reality check.
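
The reference-class approach suggested above can be sketched as follows: rather than trusting the bottom-up estimate, scale it by a conservative percentile of actual-to-estimated cost ratios from comparable past projects. The ratios and percentile choice here are illustrative only:

```python
from statistics import quantiles

# Reference-class forecasting sketch: adjust a project's own estimate using
# the distribution of actual/estimated cost ratios from similar past projects.
# The ratios below are hypothetical.
historical_overrun_ratios = [1.1, 1.3, 1.4, 1.6, 1.8, 2.0, 2.4, 3.0]

own_estimate = 100.0  # project's bottom-up cost estimate

# Budget to the 80th percentile of the reference class rather than the plan;
# quantiles(n=10) returns the nine decile cut points, index 7 is the 80th.
p80 = quantiles(historical_overrun_ratios, n=10)[7]
budget = own_estimate * p80   # substantially above the bottom-up estimate
```

The percentile acts as a contingency policy: a higher percentile trades a larger budget for a smaller chance of overrun, grounded in historical data rather than in the advocates' optimism.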

  6. Method for activity measurement in large packages of radioactive wastes. Is the overall activity stored inside a final repository systematically under-estimated?

    International Nuclear Information System (INIS)

    Rottner, B.

    2005-01-01

    The activity of a rad waste package is usually evaluated from gamma spectrometry measurements or dose rates emitted by the package, associated with transfer functions. These functions are calculated assuming that both activity and mass distributions are homogeneous. The proposed method, OPROF-STAT (patented) evaluates the error arising from this homogeneous assumption. This error has a systematic part, leading to an over or underestimation of the overall activity in a family of similar waste packages, and a stochastic part, whose mean effect on the overall activity of the family is null. The method consists in building a virtual family of packages, by numeric simulation of the filling of each package of the family. The simulated filling has a stochastic part, so that the mass and activity distributions inside a package are different from one package to another. The virtual packages are wholly known, which is not the case for the real family, and it is then possible to compute the result of a measurement, and the associated error, for each package of the virtual family. A way to fit and demonstrate the representativeness of the virtual is described. The main trends and parameters modifying the error are explored: a systematic underestimation of the activity in a large family of rad waste packages is possible. (author)

  7. A Systematic Evaluation of Ultrasound-based Fetal Weight Estimation Models on Indian Population

    Directory of Open Access Journals (Sweden)

    Sujitkumar S. Hiwale

    2017-12-01

    Conclusion: We found that the existing fetal weight estimation models have high systematic and random errors in the Indian population, with a general tendency to overestimate fetal weight in the LBW category and underestimate it in the HBW category. We also observed that these models have a limited ability to predict babies at risk of either low or high birth weight. It is recommended that clinicians consider all these factors when interpreting estimated weights given by the existing models.

  8. Evidence for link between modelled trends in Antarctic sea ice and underestimated westerly wind changes.

    Science.gov (United States)

    Purich, Ariaan; Cai, Wenju; England, Matthew H; Cowan, Tim

    2016-02-04

    Despite global warming, total Antarctic sea ice coverage increased over 1979-2013. However, the majority of Coupled Model Intercomparison Project phase 5 models simulate a decline. Mechanisms causing this discrepancy have so far remained elusive. Here we show that weaker trends in the intensification of the Southern Hemisphere westerly wind jet simulated by the models may contribute to this disparity. During austral summer, a strengthened jet leads to increased upwelling of cooler subsurface water and strengthened equatorward transport, conducive to increased sea ice. As the majority of models underestimate summer jet trends, this cooling process is underestimated compared with observations and is insufficient to offset warming in the models. Through the sea ice-albedo feedback, models produce a high-latitude surface ocean warming and sea ice decline, contrasting the observed net cooling and sea ice increase. A realistic simulation of observed wind changes may be crucial for reproducing the recent observed sea ice increase.

  9. Low modeled ozone production suggests underestimation of precursor emissions (especially NOx) in Europe

    Directory of Open Access Journals (Sweden)

    E. Oikonomakis

    2018-02-01

    High surface ozone concentrations, which usually occur when photochemical ozone production takes place, pose a great risk to human health and vegetation. Air quality models are often used by policy makers as tools for the development of ozone mitigation strategies. However, the modeled ozone production is often insufficiently evaluated in ozone modeling studies. The focus of this work is to evaluate the modeled ozone production in Europe indirectly, with the use of the ozone–temperature correlation for the summer of 2010, and to analyze its sensitivity to precursor emissions and meteorology by using a regional air quality model, the Comprehensive Air Quality Model with Extensions (CAMx). The results show that the model significantly underestimates the observed high afternoon surface ozone mixing ratios (≥ 60 ppb) by 10–20 ppb and overestimates the lower ones (< 40 ppb) by 5–15 ppb, resulting in a misleadingly good agreement with the observations for average ozone. The model also underestimates the ozone–temperature regression slope by about a factor of 2 for most of the measurement stations. To investigate the impact of emissions, four scenarios were tested: (i) increased volatile organic compound (VOC) emissions by a factor of 1.5 and 2 for the anthropogenic and biogenic VOC emissions, respectively, (ii) increased nitrogen oxide (NOx) emissions by a factor of 2, (iii) a combination of the first two scenarios, and (iv) increased traffic-only NOx emissions by a factor of 4. For southern, eastern, and central (except the Benelux area) Europe, doubling NOx emissions seems to be the most efficient scenario for reducing the underestimation of the observed high ozone mixing ratios without significant degradation of the model performance for the lower ozone mixing ratios. The model performance for ozone–temperature correlation is also better when NOx emissions are doubled. In the Benelux area, however, the third scenario (where both NOx and VOC emissions are increased) leads to a better model performance.
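
The ozone–temperature regression slope used above as an indirect evaluation metric is an ordinary least-squares slope of afternoon ozone against temperature. A minimal sketch with hypothetical station data (values are illustrative, not from the study):

```python
# Ozone-temperature regression slope (ppb per degree C), the metric the study
# reports models underestimating by roughly a factor of 2.
# The data below are hypothetical afternoon values at one station.
temps = [22.0, 24.0, 26.0, 28.0, 30.0, 32.0]   # deg C
ozone = [40.0, 45.0, 49.0, 55.0, 60.0, 66.0]   # ppb, observed

def ols_slope(x, y):
    """Ordinary least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    return sxy / sxx

slope = ols_slope(temps, ozone)   # ~2.6 ppb per deg C for these data
```

A model reproducing only half this slope would signal too-weak photochemical ozone production, which is exactly the diagnostic the authors exploit.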

  10. Some sources of the underestimation of evaluated cross section uncertainties

    International Nuclear Information System (INIS)

    Badikov, S.A.; Gai, E.V.

    2003-01-01

    The problem of the underestimation of evaluated cross-section uncertainties is addressed. Two basic sources of this underestimation are considered: (a) inconsistency between declared and observable experimental uncertainties, and (b) inadequacy between the applied statistical models and the processed experimental data. Both sources are mainly a consequence of uncertainties unrecognized by the experimenters. A 'constant shift' model is proposed for taking unrecognized experimental uncertainties into account. The model is applied to a statistical analysis of 238U(n,f)/235U(n,f) reaction cross-section ratio measurements. It is demonstrated that multiplication by √(χ²) as an instrument for correcting underestimated evaluated cross-section uncertainties fails in the case of correlated measurements. It is shown that arbitrary assignment of uncertainties and correlations in a simple least-squares fit of two correlated measurements of an unknown mean leads to physically incorrect evaluated results. (author)

  11. Automated Volumetric Mammographic Breast Density Measurements May Underestimate Percent Breast Density for High-density Breasts

    NARCIS (Netherlands)

    Rahbar, K.; Gubern Merida, A.; Patrie, J.T.; Harvey, J.A.

    2017-01-01

    RATIONALE AND OBJECTIVES: The purpose of this study was to evaluate discrepancy in breast composition measurements obtained from mammograms using two commercially available software methods for systematic trends in overestimation or underestimation compared to magnetic resonance-derived

  12. Modeling microelectrode biosensors: free-flow calibration can substantially underestimate tissue concentrations.

    Science.gov (United States)

    Newton, Adam J H; Wall, Mark J; Richardson, Magnus J E

    2017-03-01

    Microelectrode amperometric biosensors are widely used to measure concentrations of analytes in solution and tissue including acetylcholine, adenosine, glucose, and glutamate. A great deal of experimental and modeling effort has been directed at quantifying the response of the biosensors themselves; however, the influence that the macroscopic tissue environment has on biosensor response has not been subjected to the same level of scrutiny. Here we identify an important issue in the way microelectrode biosensors are calibrated that is likely to have led to underestimations of analyte tissue concentrations. Concentration in tissue is typically determined by comparing the biosensor signal to that measured in free-flow calibration conditions. In a free-flow environment the concentration of the analyte at the outer surface of the biosensor can be considered constant. However, in tissue the analyte reaches the biosensor surface by diffusion through the extracellular space. Because the enzymes in the biosensor break down the analyte, a density gradient is set up resulting in a significantly lower concentration of analyte near the biosensor surface. This effect is compounded by the diminished volume fraction (porosity) and reduction in the diffusion coefficient due to obstructions (tortuosity) in tissue. We demonstrate this effect through modeling and experimentally verify our predictions in diffusive environments. NEW & NOTEWORTHY Microelectrode biosensors are typically calibrated in a free-flow environment where the concentrations at the biosensor surface are constant. However, when in tissue, the analyte reaches the biosensor via diffusion and so analyte breakdown by the biosensor results in a concentration gradient and consequently a lower concentration around the biosensor. This effect means that naive free-flow calibration will underestimate tissue concentration. We develop mathematical models to better quantify the discrepancy between the calibration and tissue
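
The calibration discrepancy described above can be illustrated with a minimal steady-state diffusion sketch. All parameters are illustrative, not taken from the paper; the point is only that a consuming boundary depresses the surface concentration below the bulk value that free-flow calibration assumes:

```python
# Steady-state 1D diffusion toward a consuming biosensor surface (sketch).
# In free flow the surface concentration equals the bulk value; in tissue,
# enzymatic consumption by the sensor sets up a gradient, so the surface
# concentration falls below the bulk and free-flow calibration underestimates.
n = 51                 # grid points across the diffusion layer
c_bulk = 1.0           # analyte concentration far from the sensor (arbitrary units)
d_eff = 0.5            # effective diffusion coefficient (reduced by tortuosity)
k_surface = 2.0        # first-order consumption rate at the sensor surface
dx = 1.0 / (n - 1)

c = [c_bulk] * n
for _ in range(20000):                       # Jacobi relaxation to steady state
    new = c[:]
    # Robin boundary at x=0: diffusive flux in balances enzymatic consumption.
    new[0] = c[1] / (1.0 + k_surface * dx / d_eff)
    for i in range(1, n - 1):
        new[i] = 0.5 * (c[i - 1] + c[i + 1])
    c = new                                   # x=1 is held at c_bulk

surface_conc = c[0]   # converges to 0.2 here: far below c_bulk = 1.0
```

For these parameters the sensor sees only 20% of the bulk concentration, so a naive free-flow calibration would underestimate the tissue concentration five-fold; stronger consumption or slower effective diffusion widens the gap.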

  13. Low modeled ozone production suggests underestimation of precursor emissions (especially NOx) in Europe

    Science.gov (United States)

    Oikonomakis, Emmanouil; Aksoyoglu, Sebnem; Ciarelli, Giancarlo; Baltensperger, Urs; Prévôt, André Stephan Henry

    2018-02-01

    High surface ozone concentrations, which usually occur when photochemical ozone production takes place, pose a great risk to human health and vegetation. Air quality models are often used by policy makers as tools for the development of ozone mitigation strategies. However, the modeled ozone production is often insufficiently evaluated in ozone modeling studies. The focus of this work is to evaluate the modeled ozone production in Europe indirectly, with the use of the ozone-temperature correlation for the summer of 2010, and to analyze its sensitivity to precursor emissions and meteorology by using a regional air quality model, the Comprehensive Air Quality Model with Extensions (CAMx). The results show that the model significantly underestimates the observed high afternoon surface ozone mixing ratios (≥ 60 ppb) by 10-20 ppb and overestimates the lower ones (< 40 ppb) by 5-15 ppb, resulting in a misleadingly good agreement with the observations for average ozone. The model also underestimates the ozone-temperature regression slope by about a factor of 2 for most of the measurement stations. To investigate the impact of emissions, four scenarios were tested: (i) increased volatile organic compound (VOC) emissions by a factor of 1.5 and 2 for the anthropogenic and biogenic VOC emissions, respectively, (ii) increased nitrogen oxide (NOx) emissions by a factor of 2, (iii) a combination of the first two scenarios, and (iv) increased traffic-only NOx emissions by a factor of 4. For southern, eastern, and central (except the Benelux area) Europe, doubling NOx emissions seems to be the most efficient scenario for reducing the underestimation of the observed high ozone mixing ratios without significant degradation of the model performance for the lower ozone mixing ratios. The model performance for ozone-temperature correlation is also better when NOx emissions are doubled. In the Benelux area, however, the third scenario (where both NOx and VOC emissions are increased) leads to a better model performance. Although increasing only the traffic NOx emissions by a factor of 4 gave very similar results to the doubling of all NOx emissions, the former is more consistent with the uncertainties reported by other studies than the latter, suggesting that high uncertainties in NOx emissions might originate mainly from the road-transport sector rather than from other sectors. The impact of meteorology was examined with three sensitivity tests: (i) increased surface temperature by 4 °C, (ii) reduced wind speed by 50 %, and (iii) doubled wind speed. The first two scenarios led to a consistent increase in all surface ozone mixing ratios, thus improving the model performance for the high ozone values but significantly degrading it for the low ozone values, while the third scenario had exactly the opposite effect.

  14. Global models underestimate large decadal declining and rising water storage trends relative to GRACE satellite data

    Science.gov (United States)

    Scanlon, Bridget R.; Zhang, Zizhan; Save, Himanshu; Sun, Alexander Y.; van Beek, Ludovicus P. H.; Wiese, David N.; Reedy, Robert C.; Longuevergne, Laurent; Döll, Petra; Bierkens, Marc F. P.

    2018-01-01

    Assessing reliability of global models is critical because of increasing reliance on these models to address past and projected future climate and human stresses on global water resources. Here, we evaluate model reliability based on a comprehensive comparison of decadal trends (2002–2014) in land water storage from seven global models (WGHM, PCR-GLOBWB, GLDAS NOAH, MOSAIC, VIC, CLM, and CLSM) to trends from three Gravity Recovery and Climate Experiment (GRACE) satellite solutions in 186 river basins (∼60% of global land area). Medians of modeled basin water storage trends greatly underestimate GRACE-derived large decreasing (≤−0.5 km3/y) and increasing (≥0.5 km3/y) trends. Decreasing trends from GRACE are mostly related to human use (irrigation) and climate variations, whereas increasing trends reflect climate variations. For example, in the Amazon, GRACE estimates a large increasing trend of ∼43 km3/y, whereas most models estimate decreasing trends (−71 to 11 km3/y). Land water storage trends, summed over all basins, are positive for GRACE (∼71–82 km3/y) but negative for models (−450 to −12 km3/y), contributing opposing trends to global mean sea level change. Impacts of climate forcing on decadal land water storage trends exceed those of modeled human intervention by about a factor of 2. The model-GRACE comparison highlights potential areas of future model development, particularly simulated water storage. The inability of models to capture large decadal water storage trends based on GRACE indicates that model projections of climate and human-induced water storage changes may be underestimated. PMID:29358394

  15. How systematic age underestimation can impede understanding of fish population dynamics: Lessons learned from a Lake Superior cisco stock

    Science.gov (United States)

    Yule, D.L.; Stockwell, J.D.; Black, J.A.; Cullis, K.I.; Cholwek, G.A.; Myers, J.T.

    2008-01-01

    Systematic underestimation of fish age can impede understanding of recruitment variability and adaptive strategies (like longevity) and can bias estimates of survivorship. We suspected that previous estimates of annual survival (S; range = 0.20-0.44) for Lake Superior ciscoes Coregonus artedi developed from scale ages were biased low. To test this hypothesis, we estimated the total instantaneous mortality rate of adult ciscoes from the Thunder Bay, Ontario, stock by use of cohort-based catch curves developed from commercial gill-net catches and otolith-aged fish. Mean S based on otolith ages was greater for adult females (0.80) than for adult males (0.75), but these differences were not significant. Applying the results of a study of agreement between scale and otolith ages, we modeled a scale age for each otolith-aged fish to reconstruct catch curves. Using modeled scale ages, estimates of S (0.42 for females, 0.36 for males) were comparable with those reported in past studies. We conducted a November 2005 acoustic and midwater trawl survey to estimate the abundance of ciscoes when the fish were being harvested for roe. Estimated exploitation rates were 0.085 for females and 0.025 for males, and the instantaneous rates of fishing mortality were 0.089 for females and 0.025 for males. The instantaneous rates of natural mortality were 0.131 and 0.265 for females and males, respectively. Using otolith ages, we found that strong year-classes at large during November 2005 were caught in high numbers as age-1 fish in previous annual bottom trawl surveys, whereas weak or absent year-classes were not. For decades, large-scale fisheries on the Great Lakes were allowed to operate because ciscoes were assumed to be short lived and to have regular recruitment. We postulate that the collapse of these fisheries was linked in part to a misunderstanding of cisco biology driven by scale-ageing error. © Copyright by the American Fisheries Society 2008.
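
The cohort-based catch-curve method mentioned above estimates total instantaneous mortality Z as the negative slope of ln(catch-at-age) against age, with annual survival S = exp(-Z). A sketch with hypothetical catch-at-age data (not the study's gill-net catches):

```python
import math

# Catch curve: regress ln(catch-at-age) on age over the descending limb;
# total instantaneous mortality Z is the negative slope, annual survival
# S = exp(-Z). The catches below are hypothetical, declining ~geometrically.
ages =    [4,   5,   6,   7,   8,   9]
catches = [800, 640, 512, 410, 328, 262]

log_c = [math.log(cv) for cv in catches]
n = len(ages)
ma, ml = sum(ages) / n, sum(log_c) / n
slope = (sum((a - ma) * (l - ml) for a, l in zip(ages, log_c))
         / sum((a - ma) ** 2 for a in ages))

z = -slope            # total instantaneous mortality
s = math.exp(-z)      # annual survival, ~0.8 for these catches
```

If ageing error compresses the age axis (as scale ages did for these ciscoes), the catch curve steepens artificially, inflating Z and deflating S, which is exactly the bias the study documents (S of about 0.4 from scale ages versus 0.8 from otolith ages).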

  16. Sampling in Atypical Endometrial Hyperplasia: Which Method Results in the Lowest Underestimation of Endometrial Cancer? A Systematic Review and Meta-analysis.

    Science.gov (United States)

    Bourdel, Nicolas; Chauvet, Pauline; Tognazza, Enrica; Pereira, Bruno; Botchorishvili, Revaz; Canis, Michel

    2016-01-01

    Our objective was to identify the most accurate method of endometrial sampling for the diagnosis of complex atypical hyperplasia (CAH), and the related risk of underestimation of endometrial cancer. We conducted a systematic literature search in PubMed and EMBASE (January 1999-September 2013) to identify all registered articles on this subject. Studies were selected with a 2-step method. First, titles and abstracts were analyzed by 2 reviewers, and 69 relevant articles were selected for full reading. Then, the full articles were evaluated to determine whether full inclusion criteria were met. We selected 27 studies, taking into consideration the comparison between histology of endometrial hyperplasia obtained by diagnostic tests of interest (uterine curettage, hysteroscopically guided biopsy, or hysteroscopic endometrial resection) and subsequent results of hysterectomy. Analysis of the studies reviewed focused on 1106 patients with a preoperative diagnosis of atypical endometrial hyperplasia. The mean risk of finding endometrial cancer at hysterectomy after atypical endometrial hyperplasia diagnosed by uterine curettage was 32.7% (95% confidence interval [CI], 26.2-39.9), with a risk of 45.3% (95% CI, 32.8-58.5) after hysteroscopically guided biopsy and 5.8% (95% CI, 0.8-31.7) after hysteroscopic resection. In total, the risk of underestimation of endometrial cancer reaches a very high rate in patients with CAH using the classic method of evaluation (i.e., uterine curettage or hysteroscopically guided biopsy). This rate of underdiagnosed endometrial cancer leads to the risk of inappropriate surgical procedures (31.7% of tubal conservation in the data available and no abdominal exploration in 24.6% of the cases). Hysteroscopic resection seems to reduce the risk of underdiagnosed endometrial cancer. Copyright © 2016 AAGL. Published by Elsevier Inc. All rights reserved.

  17. Underestimated effect sizes in GWAS: fundamental limitations of single SNP analysis for dichotomous phenotypes.

    Directory of Open Access Journals (Sweden)

    Sven Stringer

    Complex diseases are often highly heritable. However, for many complex traits only a small proportion of the heritability can be explained by the genetic variants observed in traditional genome-wide association (GWA) studies. Moreover, for some of those traits few significant SNPs have been identified. Single-SNP association methods test for association at a single SNP, ignoring the effect of other SNPs. Using a simple multi-locus odds model of complex disease, we show that moderate to large effect sizes of causal variants may be estimated as relatively small effect sizes in single-SNP association testing. This underestimation effect is most severe for diseases influenced by numerous risk variants. We relate the underestimation effect to the concept of non-collapsibility found in the statistics literature. As described, continuous phenotypes generated with linear genetic models are not affected by this underestimation effect. Since many GWA studies apply single-SNP analysis to dichotomous phenotypes, previously reported results potentially underestimate true effect sizes, thereby impeding identification of true effect SNPs. Therefore, when a multi-locus model of disease risk is assumed, a multi-SNP analysis may be more appropriate.
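
The non-collapsibility effect the abstract refers to can be demonstrated directly: under a multi-locus logistic model, the marginal (single-SNP) odds ratio is attenuated relative to the conditional one. A deterministic sketch with illustrative effect sizes (not the authors' simulation settings):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Conditional (true) model: logit P(disease) = b0 + b*x + sum(b_i * g_i),
# a focal SNP x with conditional OR = 2 plus 10 unlinked background risk SNPs.
# All effect sizes and frequencies are illustrative.
b0, b = -3.0, math.log(2.0)
b_bg, n_bg = math.log(1.5), 10
geno_probs = {0: 0.25, 1: 0.5, 2: 0.25}     # HWE, allele frequency 0.5

# Exact distribution of the summed background genotype score (convolution).
dist = {0: 1.0}
for _ in range(n_bg):
    new = {}
    for sc, p in dist.items():
        for g, q in geno_probs.items():
            new[sc + g] = new.get(sc + g, 0.0) + p * q
    dist = new

def marginal_risk(x):
    """Disease risk at focal genotype x, averaged over background genotypes."""
    return sum(p * sigmoid(b0 + b * x + b_bg * sc) for sc, p in dist.items())

p0, p1 = marginal_risk(0), marginal_risk(1)
marginal_or = (p1 / (1 - p1)) / (p0 / (1 - p0))
# marginal_or comes out below 2: the single-SNP (marginal) odds ratio is
# attenuated relative to the conditional odds ratio of 2 (non-collapsibility),
# even though the background SNPs are independent of the focal SNP.
```

Adding more background risk variants, or increasing their effects, shrinks the marginal odds ratio further, which is the "most severe for diseases influenced by numerous risk variants" pattern the abstract describes.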

  18. The Underestimation of Isoprene in Houston during the Texas 2013 DISCOVER-AQ Campaign

    Science.gov (United States)

    Choi, Y.; Diao, L.; Czader, B.; Li, X.; Estes, M. J.

    2014-12-01

    This study applies principal component analysis to aircraft data from the Texas 2013 DISCOVER-AQ (Deriving Information on Surface Conditions from Column and Vertically Resolved Observations Relevant to Air Quality) field campaign to characterize isoprene sources over Houston during September 2013. The biogenic isoprene signature appears in the third principal component and anthropogenic signals in the following two. Evaluations of the Community Multiscale Air Quality (CMAQ) model simulations of isoprene with airborne measurements are more accurate for suburban areas than for industrial areas. This study also compares model outputs to eight surface automated gas chromatograph (Auto-GC) measurements near the Houston ship channel industrial area during the nighttime and shows that modeled anthropogenic isoprene is underestimated by a factor of 10.60. This study employs a new simulation with a modified anthropogenic emissions inventory (constrained using the ratios of observed versus simulated values) that yields closer isoprene predictions at night, with a reduction in the mean bias of 56.93%, implying that isoprene emissions from the 2008 National Emission Inventory are underestimated in the city of Houston and that other climate models or chemistry and transport models using the same emissions inventory might also underestimate isoprene in other Houston-like areas in the United States.

  19. Maximum rates of climate change are systematically underestimated in the geological record.

    Science.gov (United States)

    Kemp, David B; Eichenseer, Kilian; Kiessling, Wolfgang

    2015-11-10

    Recently observed rates of environmental change are typically much higher than those inferred for the geological past. At the same time, the magnitudes of ancient changes were often substantially greater than those established in recent history. The most pertinent disparity, however, between recent and geological rates is the timespan over which the rates are measured, which typically differ by several orders of magnitude. Here we show that rates of marked temperature changes inferred from proxy data in Earth history scale with measurement timespan as an approximate power law across nearly six orders of magnitude (10² to >10⁷ years). This scaling reveals how climate signals measured in the geological record alias transient variability, even during the most pronounced climatic perturbations of the Phanerozoic. Our findings indicate that the true attainable pace of climate change on timescales of greatest societal relevance is underestimated in geological archives.
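
The timespan scaling described above can be reproduced with a toy model: for a noisy (here, random-walk) temperature history, apparent rates |ΔT|/Δt computed over longer windows fall off approximately as a power law of the window length. This sketch is illustrative only and is not the authors' proxy analysis:

```python
import math
import random

random.seed(1)

# Toy demonstration of rate-timespan scaling: build a random-walk "temperature"
# history, then measure the apparent rate |dT|/dt over windows of increasing
# length. For a random walk the mean apparent rate scales as span**(-1/2),
# so long (geological) timespans alias fast transient change into low rates.
temps = [0.0]
for _ in range(2 ** 14):
    temps.append(temps[-1] + random.gauss(0.0, 1.0))

spans = [2 ** k for k in range(2, 12)]
log_span, log_rate = [], []
for span in spans:
    # average apparent rate over all non-overlapping windows of this span
    rates = [abs(temps[i + span] - temps[i]) / span
             for i in range(0, len(temps) - span, span)]
    log_span.append(math.log(span))
    log_rate.append(math.log(sum(rates) / len(rates)))

# log-log regression slope: close to -0.5 for a random walk
n = len(spans)
mx, my = sum(log_span) / n, sum(log_rate) / n
scaling_slope = (sum((x - mx) * (y - my) for x, y in zip(log_span, log_rate))
                 / sum((x - mx) ** 2 for x in log_span))
```

The fitted exponent near -0.5 is the random-walk case of the power-law dependence the paper reports; any history with unresolved variability shows some negative exponent, which is why geological rates cannot be compared with recent ones without correcting for timespan.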

  20. Terrestrial biosphere models underestimate photosynthetic capacity and CO2 assimilation in the Arctic.

    Science.gov (United States)

    Rogers, Alistair; Serbin, Shawn P; Ely, Kim S; Sloan, Victoria L; Wullschleger, Stan D

    2017-12-01

    Terrestrial biosphere models (TBMs) are highly sensitive to the model representation of photosynthesis, in particular the parameters maximum carboxylation rate and maximum electron transport rate at 25°C (Vc,max.25 and Jmax.25, respectively). Many TBMs do not include representation of Arctic plants, and those that do rely on understanding and parameterization from temperate species. We measured photosynthetic CO2 response curves and leaf nitrogen (N) content in species representing the dominant vascular plant functional types found on the coastal tundra near Barrow, Alaska. The activation energies associated with the temperature response functions of Vc,max and Jmax were 17% lower than commonly used values. When scaled to 25°C, Vc,max.25 and Jmax.25 were two- to five-fold higher than the values used to parameterize current TBMs. This high photosynthetic capacity was attributable to a high leaf N content and the high fraction of N invested in Rubisco. Leaf-level modeling demonstrated that the current parameterization of TBMs results in a two-fold underestimation of the capacity for leaf-level CO2 assimilation in Arctic vegetation. This study highlights the poor representation of Arctic photosynthesis in TBMs, and provides the critical data necessary to improve our ability to project the response of the Arctic to global environmental change. No claim to original US Government works. New Phytologist © 2017 New Phytologist Trust.

  1. Underestimating nearby nature: affective forecasting errors obscure the happy path to sustainability.

    Science.gov (United States)

    Nisbet, Elizabeth K; Zelenski, John M

    2011-09-01

    Modern lifestyles disconnect people from nature, and this may have adverse consequences for the well-being of both humans and the environment. In two experiments, we found that although outdoor walks in nearby nature made participants much happier than indoor walks did, participants made affective forecasting errors, such that they systematically underestimated nature's hedonic benefit. The pleasant moods experienced on outdoor nature walks facilitated a subjective sense of connection with nature, a construct strongly linked with concern for the environment and environmentally sustainable behavior. To the extent that affective forecasts determine choices, our findings suggest that people fail to maximize their time in nearby nature and thus miss opportunities to increase their happiness and relatedness to nature. Our findings suggest a happy path to sustainability, whereby contact with nature fosters individual happiness and environmentally responsible behavior.

  2. Body Size Estimation from Early to Middle Childhood: Stability of Underestimation, BMI, and Gender Effects

    Directory of Open Access Journals (Sweden)

    Silje Steinsbekk

    2017-11-01

    Full Text Available Individuals who are overweight are more likely to underestimate their body size than those who are normal weight, and overweight underestimators are less likely to engage in weight loss efforts. Underestimation of body size might represent a barrier to prevention and treatment of overweight; thus insight into how underestimation of body size develops and tracks through the childhood years is needed. The aim of the present study was therefore to examine stability in children's underestimation of body size, exploring predictors of underestimation over time. The prospective path from underestimation to BMI was also tested. In a Norwegian cohort of 6-year-olds, followed up at ages 8 and 10 (analysis sample: n = 793), body size estimation was captured by the Children's Body Image Scale; height and weight were measured and BMI calculated. Overall, children were more likely to underestimate than overestimate their body size. Individual stability in underestimation was modest, but significant. Higher BMI predicted future underestimation, even when previous underestimation was adjusted for, but there was no evidence for the opposite direction of influence. Boys were more likely than girls to underestimate their body size at ages 8 and 10 (age 8: 38.0% vs. 24.1%; age 10: 57.9% vs. 30.8%) and showed a steeper increase in underestimation with age compared to girls. In conclusion, the majority of 6-, 8-, and 10-year-olds correctly estimated their body size (prevalence ranging from 40 to 70%, depending on age and gender), although a substantial portion perceived themselves to be thinner than they actually were. Higher BMI forecasted future underestimation, but underestimation did not increase the risk for excessive weight gain in middle childhood.

  3. The Perception of Time Is Underestimated in Adolescents With Anorexia Nervosa.

    Science.gov (United States)

    Vicario, Carmelo M; Felmingham, Kim

    2018-01-01

    Research has revealed reduced temporal discounting (i.e., increased capacity to delay reward) and altered interoceptive awareness in anorexia nervosa (AN). In line with research linking temporal underestimation with a reduced tendency to devalue a reward and with reduced interoceptive awareness, we tested the hypothesis that time duration might be underestimated in AN. Our findings revealed that patients with AN displayed lower timing accuracy, in the form of timing underestimation, compared with controls. These results were not predicted by clinical or demographic factors, attention, or working memory performance of the participants. The evidence of a temporal underestimation bias in AN might be clinically relevant to explaining their abnormal motivation in pursuing a long-term restrictive diet, in line with evidence that increasing the subjective temporal proximity of remote future goals can boost motivation and the actual behavior to reach them.

  5. Underestimation of risk due to exposure misclassification

    DEFF Research Database (Denmark)

    Grandjean, Philippe; Budtz-Jørgensen, Esben; Keiding, Niels

    2004-01-01

    Exposure misclassification constitutes a major obstacle when developing dose-response relationships for risk assessment. A non-differential error results in underestimation of the risk. If the degree of misclassification is known, adjustment may be achieved by sensitivity analysis. The purpose...

  6. Quality of life and time to death: have the health gains of preventive interventions been underestimated?

    Science.gov (United States)

    Gheorghe, Maria; Brouwer, Werner B F; van Baal, Pieter H M

    2015-04-01

    This article explores the implications of the relation between quality of life (QoL) and time to death (TTD) for economic evaluations of preventive interventions. By using health survey data on QoL for the general Dutch population linked to the mortality registry, we quantify the magnitude of this relationship. To address specific features of the nonstandard QoL distribution, such as boundedness, skewness, and heteroscedasticity, we modeled QoL using a generalized additive model for location, scale, and shape (GAMLSS) with a β-inflated outcome distribution. Our empirical results indicate that QoL decreases when approaching death, suggesting that there is a strong relationship between TTD and QoL. Predictions of different regression models revealed that ignoring this relationship results in an underestimation of the quality-adjusted life year (QALY) gains from preventive interventions. The underestimation ranged between 3% and 7% and depended on age, the number of years gained from the intervention, and the discount rate used. © The Author(s) 2014.

  7. Linear-quadratic model underestimates sparing effect of small doses per fraction in rat spinal cord

    International Nuclear Information System (INIS)

    Shun Wong, C.; Toronto University; Minkin, S.; Hill, R.P.; Toronto University

    1993-01-01

    The application of the linear-quadratic (LQ) model to describe iso-effective fractionation schedules for dose fraction sizes less than 2 Gy has been controversial. Experiments are described in which the effect of daily fractionated irradiation given with a wide range of fraction sizes was assessed in rat cervical spinal cord. The first group of rats was given doses in 1, 2, 4, 8 and 40 fractions/day. The second group received 3 initial 'top-up' doses of 9 Gy given once daily, representing 3/4 tolerance, followed by doses in 1, 2, 10, 20, 30 and 40 fractions/day. The fractionated portion of the irradiation schedule therefore constituted only the final quarter of the tolerance dose. The endpoint of the experiments was paralysis of the forelimbs secondary to white matter necrosis. Direct analysis of data from experiments with full-course fractionation up to 40 fractions/day (25.0-1.98 Gy/fraction) indicated consistency with the LQ model, yielding an α/β value of 2.41 Gy. Analysis of data from experiments in which the 3 'top-up' doses were followed by up to 10 fractions (10.0-1.64 Gy/fraction) gave an α/β value of 3.41 Gy. However, data from 'top-up' experiments with 20, 30 and 40 fractions (1.60-0.55 Gy/fraction) were inconsistent with the LQ model and gave a very small α/β of 0.48 Gy. It is concluded that the LQ model based on data from large doses/fraction underestimates the sparing effect of small doses/fraction, provided sufficient time is allowed between each fraction for repair of sublethal damage. (author). 28 refs., 5 figs., 1 tab
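
    The sparing effect at issue follows directly from the LQ isoeffect relation: for effect E = n(αd + βd²), two schedules with fraction sizes d and d_ref are isoeffective when their total doses satisfy D = D_ref·(d_ref + α/β)/(d + α/β). A sketch using the α/β values from the abstract and a hypothetical 50 Gy reference tolerance at 2 Gy/fraction (the 50 Gy figure is illustrative, not from the experiment):

```python
def isoeffective_dose(d, d_ref=2.0, D_ref=50.0, alpha_beta=2.41):
    """Total dose at fraction size d (Gy) that matches the effect of D_ref
    delivered at d_ref Gy/fraction, under the linear-quadratic model."""
    return D_ref * (d_ref + alpha_beta) / (d + alpha_beta)

# At 0.55 Gy/fraction, the small-fraction fit (alpha/beta = 0.48 Gy) predicts far
# more sparing (a higher tolerated total dose) than the large-fraction fit (2.41 Gy),
# which is why the large-fraction LQ fit underestimates the sparing effect.
d_small = 0.55
tol_large_fraction_fit = isoeffective_dose(d_small, alpha_beta=2.41)
tol_small_fraction_fit = isoeffective_dose(d_small, alpha_beta=0.48)
```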

  8. Underestimation of boreal soil carbon stocks by mathematical soil carbon models linked to soil nutrient status

    Science.gov (United States)

    Ťupek, Boris; Ortiz, Carina A.; Hashimoto, Shoji; Stendahl, Johan; Dahlgren, Jonas; Karltun, Erik; Lehtonen, Aleksi

    2016-08-01

    Inaccurate estimates of the largest terrestrial carbon pool, the soil organic carbon (SOC) stock, are the major source of uncertainty in simulating the feedback of climate warming on ecosystem-atmosphere carbon dioxide exchange by process-based ecosystem and soil carbon models. Although the models need to simplify complex environmental processes of soil carbon sequestration, in a large mosaic of environments a missing key driver could lead to a modeling bias in predictions of SOC stock change. We aimed to evaluate SOC stock estimates of process-based models (Yasso07, Q, and CENTURY soil sub-model v4) against a massive Swedish forest soil inventory data set (3230 samples) organized by a recursive partitioning method into distinct soil groups with underlying SOC stock development linked to physicochemical conditions. For two-thirds of measurements all models predicted accurate SOC stock levels regardless of the detail of input data, e.g., whether they ignored or included soil properties. However, in fertile sites with high N deposition, high cation exchange capacity, or moderately increased soil water content, the Yasso07 and Q models underestimated SOC stocks. In comparison to Yasso07 and Q, accounting for site-specific soil characteristics (e.g., clay content and topsoil mineral N) by CENTURY improved SOC stock estimates for sites with high clay content, but not for sites with high N deposition. Our analysis suggested that the soils with poorly predicted SOC stocks, as characterized by high nutrient status and well-sorted parent material, indeed have had other predominant drivers of SOC stabilization lacking in the models, presumably mycorrhizal organic uptake and organo-mineral stabilization processes. Our results imply that the role of soil nutrient status as a regulator of organic matter mineralization has to be re-evaluated, since correct SOC stocks are decisive for predicting future SOC change and soil CO2 efflux.

  9. The underestimated potential of solar energy to mitigate climate change

    Science.gov (United States)

    Creutzig, Felix; Agoston, Peter; Goldschmidt, Jan Christoph; Luderer, Gunnar; Nemet, Gregory; Pietzcker, Robert C.

    2017-09-01

    The Intergovernmental Panel on Climate Change's fifth assessment report emphasizes the importance of bioenergy and carbon capture and storage for achieving climate goals, but it does not identify solar energy as a strategically important technology option. That is surprising given the strong growth, large resource, and low environmental footprint of photovoltaics (PV). Here we explore how models have consistently underestimated PV deployment and identify the reasons for underlying bias in models. Our analysis reveals that rapid technological learning and technology-specific policy support were crucial to PV deployment in the past, but that future success will depend on adequate financing instruments and the management of system integration. We propose that with coordinated advances in multiple components of the energy system, PV could supply 30-50% of electricity in competitive markets.

  10. Guiding exploration in conformational feature space with Lipschitz underestimation for ab-initio protein structure prediction.

    Science.gov (United States)

    Hao, Xiaohu; Zhang, Guijun; Zhou, Xiaogen

    2018-04-01

    Computing conformations, which is essential for associating structural and functional information with gene sequences, is challenging due to the high dimensionality and rugged energy surface of the protein conformational space. Consequently, the dimension of the protein conformational space should be reduced to a proper level, and an effective exploring algorithm should be proposed. In this paper, a plug-in method for guiding exploration in conformational feature space with Lipschitz underestimation (LUE) for ab-initio protein structure prediction is proposed. The conformational space is first converted into an ultrafast shape recognition (USR) feature space. Based on the USR feature space, the conformational space can be further converted into an underestimation space according to Lipschitz estimation theory for guiding exploration. As a consequence of using the underestimation model, the tight lower-bound estimate information can be used for exploration guidance, invalid sampling areas can be eliminated in advance, and the number of energy function evaluations can be reduced. The proposed method provides a novel technique to solve the exploration problem of protein conformational space. LUE is applied to the differential evolution (DE) algorithm and to the Metropolis Monte Carlo (MMC) algorithm available in Rosetta; when LUE is applied to DE and MMC, candidate conformations are screened by the underestimation method prior to energy calculation and selection. Further, LUE is compared with DE and MMC by testing on 15 small-to-medium structurally diverse proteins. Test results show that near-native protein structures with higher accuracy can be obtained more rapidly and efficiently with the use of LUE. Copyright © 2018 Elsevier Ltd. All rights reserved.
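
    The core idea of a Lipschitz underestimator is that, given already-evaluated samples (x_i, f(x_i)) and a Lipschitz constant L, f(x) ≥ max_i [f(x_i) − L·||x − x_i||] everywhere; any candidate whose lower bound already exceeds the best energy found cannot improve on it and can be discarded before the expensive energy evaluation. A generic sketch of that screening step (not the authors' implementation; the USR featurization is omitted):

```python
import numpy as np

def lipschitz_lower_bound(x, xs, fs, L):
    """Tight lower bound on f(x) implied by samples (xs, fs) and Lipschitz constant L."""
    return np.max(fs - L * np.linalg.norm(xs - x, axis=1))

def should_evaluate(x, xs, fs, L, best_energy):
    """Evaluate the energy only if the underestimate leaves room for improvement."""
    return lipschitz_lower_bound(x, xs, fs, L) < best_energy

# Toy check with f(x) = |x - 1|, which has Lipschitz constant 1:
xs = np.array([[0.0], [1.0], [2.0]])
fs = np.abs(xs[:, 0] - 1.0)
bound = lipschitz_lower_bound(np.array([0.5]), xs, fs, L=1.0)  # here equals f(0.5) = 0.5
```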

  11. Academic self-concept, learning motivation, and test anxiety of the underestimated student.

    Science.gov (United States)

    Urhahne, Detlef; Chao, Sheng-Han; Florineth, Maria Luise; Luttenberger, Silke; Paechter, Manuela

    2011-03-01

    BACKGROUND. Teachers' judgments of student performance on a standardized achievement test often result in an overestimation of students' abilities. In the majority of cases, a larger group of overestimated students and a smaller group of underestimated students are formed by these judgments. AIMS. In this research study, the consequences of the underestimation of students' mathematical performance potential were examined. SAMPLE. Two hundred and thirty-five fourth grade students and their fourteen mathematics teachers took part in the investigation. METHOD. Students worked on a standardized mathematics achievement test and completed a self-description questionnaire about motivation and affect. Teachers estimated each individual student's potential with regard to mathematics test performance as well as students' expectancy for success, level of aspiration, academic self-concept, learning motivation, and test anxiety. The differences between teachers' judgments on students' test performance and students' actual performance were used to build groups of underestimated and overestimated students. RESULTS. Underestimated students displayed equal levels of test performance, learning motivation, and level of aspiration in comparison with overestimated students, but had lower expectancy for success, lower academic self-concept, and experienced more test anxiety. Teachers expected that underestimated students would receive lower grades on the next mathematics test, believed that students were satisfied with lower grades, and assumed that the students have weaker learning motivation than their overestimated classmates. CONCLUSION. Teachers' judgment error was not confined to test performance but generalized to motivational and affective traits of the students. © 2010 The British Psychological Society.

  12. Tritium: an underestimated health risk- 'ACROnic du nucleaire' nr 85, June 2009

    International Nuclear Information System (INIS)

    Barbey, Pierre

    2009-06-01

    After having indicated how tritium released in the environment (under the form of tritiated water or gas) is absorbed by living species, the author describes the different biological effects of ionizing radiations and the risk associated with tritium. He evokes how the radiation protection system is designed with respect to standards, and outlines how the risk related to tritium is underestimated by different existing models and standards. The author discusses the consequences of tritium transmutation and of the isotopic effect

  13. Confounding environmental colour and distribution shape leads to underestimation of population extinction risk.

    Science.gov (United States)

    Fowler, Mike S; Ruokolainen, Lasse

    2013-01-01

    The colour of environmental variability influences the size of population fluctuations when filtered through density dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical feedback) models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample skewness and kurtosis and decreasing mean kurtosis of these series alter the frequency distribution shape of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in the distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risks in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations. We must let
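
    The AR(1) environmental series referred to above are typically generated as x[t] = κ·x[t−1] + σ√(1−κ²)·ε[t], where κ sets the colour (κ > 0 red, κ < 0 blue, κ = 0 white). A sketch of one of the effects the study isolates: autocorrelation leaves fewer effectively independent points per series, so sample distribution-shape statistics such as skewness scatter much more widely across realizations of red noise than of white noise (illustrative parameters, not the study's simulation design):

```python
import numpy as np

def ar1_series(kappa, n, rng):
    """AR(1) noise x[t] = kappa*x[t-1] + sqrt(1-kappa^2)*eps[t], unit variance."""
    x = np.empty(n)
    x[0] = rng.standard_normal()
    scale = np.sqrt(1.0 - kappa**2)  # keeps the stationary variance at 1 for any kappa
    for t in range(1, n):
        x[t] = kappa * x[t - 1] + scale * rng.standard_normal()
    return x

def sample_skewness(x):
    z = (x - x.mean()) / x.std()
    return float(np.mean(z**3))

rng = np.random.default_rng(0)
white_skews = [sample_skewness(ar1_series(0.0, 1000, rng)) for _ in range(300)]
red_skews = [sample_skewness(ar1_series(0.9, 1000, rng)) for _ in range(300)]
# Red series: far more variable sample skewness across realizations than white series.
```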

  14. High-risk lesions diagnosed at MRI-guided vacuum-assisted breast biopsy: can underestimation be predicted?

    Energy Technology Data Exchange (ETDEWEB)

    Crystal, Pavel [Mount Sinai Hospital, University Health Network, Division of Breast Imaging, Toronto, ON (Canada); Mount Sinai Hospital, Toronto, ON (Canada); Sadaf, Arifa; Bukhanov, Karina; Helbich, Thomas H. [Mount Sinai Hospital, University Health Network, Division of Breast Imaging, Toronto, ON (Canada); McCready, David [Princess Margaret Hospital, Department of Surgical Oncology, Toronto, ON (Canada); O' Malley, Frances [Mount Sinai Hospital, Department of Pathology, Laboratory Medicine, Toronto, ON (Canada)

    2011-03-15

    To evaluate the frequency of diagnosis of high-risk lesions at MRI-guided vacuum-assisted breast biopsy (MRgVABB) and to determine whether underestimation may be predicted. A retrospective review of the medical records of 161 patients who underwent MRgVABB was performed. The underestimation rate was defined as an upgrade of a high-risk lesion at MRgVABB to malignancy at surgery. Clinical data, MRI features of the biopsied lesions, and histological diagnoses of cases with and those without underestimation were compared. Of 161 MRgVABB procedures, histology revealed 31 (19%) high-risk lesions. Of 26 excised high-risk lesions, 13 (50%) were upgraded to malignancy. The underestimation rates of lobular neoplasia, atypical apocrine metaplasia, atypical ductal hyperplasia, and flat epithelial atypia were 50% (4/8), 100% (5/5), 50% (3/6) and 50% (1/2) respectively. There was no underestimation in the cases of benign papilloma without atypia (0/3) and radial scar (0/2). No statistically significant differences (p > 0.1) between the cases with and those without underestimation were seen in patient age, indications for breast MRI, size of lesion on MRI, or morphological and kinetic features of biopsied lesions. Imaging and clinical features cannot be used reliably to predict underestimation at MRgVABB. All high-risk lesions diagnosed at MRgVABB require surgical excision. (orig.)

  15. Under-diagnosis of rickettsial disease in clinical practice: A systematic review

    NARCIS (Netherlands)

    van Eekeren, Louise E.; de Vries, Sophia G.; Wagenaar, Jiri F. P.; Spijker, René; Grobusch, Martin P.; Goorhuis, Abraham

    2018-01-01

    Rickettsial diseases present as acute febrile illnesses, sometimes with inoculation eschars. We performed a systematic review of studies published between 1997 and 2017 to assess the underestimation of non-eschar rickettsial disease (NERD) relative to eschar rickettsial disease (ERD), as a cause of

  16. A Large Underestimate of Formic Acid from Tropical Fires: Constraints from Space-Borne Measurements.

    Science.gov (United States)

    Chaliyakunnel, S; Millet, D B; Wells, K C; Cady-Pereira, K E; Shephard, M W

    2016-06-07

    Formic acid (HCOOH) is one of the most abundant carboxylic acids and a dominant source of atmospheric acidity. Recent work indicates a major gap in the HCOOH budget, with atmospheric concentrations much larger than expected from known sources. Here, we employ recent space-based observations from the Tropospheric Emission Spectrometer with the GEOS-Chem atmospheric model to better quantify the HCOOH source from biomass burning, and assess whether fire emissions can help close the large budget gap for this species. The space-based data reveal a severe model HCOOH underestimate most prominent over tropical burning regions, suggesting a major missing source of organic acids from fires. We develop an approach for inferring the fractional fire contribution to ambient HCOOH and find, based on measurements over Africa, that pyrogenic HCOOH:CO enhancement ratios are much higher than expected from direct emissions alone, revealing substantial secondary organic acid production in fire plumes. Current models strongly underestimate (by 10 ± 5 times) the total primary and secondary HCOOH source from African fires. If a 10-fold bias were to extend to fires in other regions, biomass burning could produce 14 Tg/a of HCOOH in the tropics or 16 Tg/a worldwide. However, even such an increase would only represent 15-20% of the total required HCOOH source, implying the existence of other larger missing sources.
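
    The pyrogenic enhancement ratio used above is the normalized excess mixing ratio: the plume-minus-background increase in HCOOH divided by the corresponding increase in CO. A minimal sketch (the mixing ratios are illustrative numbers, not values from the paper):

```python
def enhancement_ratio(x_plume, x_background, co_plume, co_background):
    """Normalized excess mixing ratio of species X relative to CO in a fire plume."""
    return (x_plume - x_background) / (co_plume - co_background)

# Illustrative mixing ratios in ppb: HCOOH rises by 8 ppb while CO rises by 400 ppb.
er = enhancement_ratio(x_plume=10.0, x_background=2.0,
                       co_plume=500.0, co_background=100.0)
```

    An observed ratio well above the value implied by direct-emission inventories signals secondary organic acid production within the plume, which is the paper's inference.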

  17. Calorie Underestimation When Buying High-Calorie Beverages in Fast-Food Contexts.

    Science.gov (United States)

    Franckle, Rebecca L; Block, Jason P; Roberto, Christina A

    2016-07-01

    We asked 1877 adults and 1178 adolescents visiting 89 fast-food restaurants in New England in 2010 and 2011 to estimate calories purchased. Calorie underestimation was greater among those purchasing a high-calorie beverage than among those who did not (adults: 324 ± 698 vs. 102 ± 591 calories; adolescents: 360 ± 602 vs. 198 ± 509 calories). This difference remained significant for adults but not adolescents after adjusting for total calories purchased. Purchasing high-calorie beverages may uniquely contribute to calorie underestimation among adults.

  18. Satellite methods underestimate indirect climate forcing by aerosols

    Science.gov (United States)

    Penner, Joyce E.; Xu, Li; Wang, Minghuai

    2011-01-01

    Satellite-based estimates of the aerosol indirect effect (AIE) are consistently smaller than the estimates from global aerosol models, and, partly as a result of these differences, the assessment of this climate forcing includes large uncertainties. Satellite estimates typically use the present-day (PD) relationship between observed cloud drop number concentrations (Nc) and aerosol optical depths (AODs) to determine the preindustrial (PI) values of Nc. These values are then used to determine the PD and PI cloud albedos and, thus, the effect of anthropogenic aerosols on top-of-the-atmosphere radiative fluxes. Here, we use a model with realistic aerosol and cloud processes to show that empirical relationships for ln(Nc) versus ln(AOD) derived from PD results do not represent the atmospheric perturbation caused by the addition of anthropogenic aerosols to the preindustrial atmosphere. As a result, the model estimates based on satellite methods of the AIE are between a factor of 3 to more than a factor of 6 smaller than model estimates based on actual PD and PI values for Nc. Using ln(Nc) versus ln(AI) (Aerosol Index, or the optical depth times the Ångström exponent) to estimate preindustrial values for Nc provides estimates for Nc and forcing that are closer to the values predicted by the model. Nevertheless, the AIE using ln(Nc) versus ln(AI) may be substantially incorrect on a regional basis and may underestimate or overestimate the global average forcing by 25 to 35%. PMID:21808047

  19. Stress underestimation and mental health literacy of depression in Japanese workers: A cross-sectional study.

    Science.gov (United States)

    Nakamura-Taira, Nanako; Izawa, Shuhei; Yamada, Kosuke Chris

    2018-04-01

    Appropriately estimating stress levels in daily life is important for motivating people to undertake stress-management behaviors or seek out information on stress management and mental health. People who exhibit high stress underestimation might not be interested in information on mental health, and would therefore have less knowledge of it. We investigated the association between stress underestimation tendency and mental health literacy of depression (i.e., knowledge of the recognition, prognosis, and usefulness of resources of depression) in Japanese workers. We cross-sectionally surveyed 3718 Japanese workers using a web-based questionnaire on stress underestimation, mental health literacy of depression (vignettes on people with depression), and covariates (age, education, depressive symptoms, income, and worksite size). After adjusting for covariates, high stress underestimation was associated with greater odds of not recognizing depression (i.e., choosing anything other than depression). Furthermore, these individuals had greater odds of expecting the case to improve without treatment and not selecting useful sources of support (e.g. talk over with friends/family, see a psychiatrist, take medication, see a counselor) compared to those with moderate stress underestimation. These relationships were all stronger among males than among females. Stress underestimation was related to poorer mental health literacy of depression. Copyright © 2018 Elsevier B.V. All rights reserved.

  20. Parental and Child Factors Associated with Under-Estimation of Children with Excess Weight in Spain.

    Science.gov (United States)

    de Ruiter, Ingrid; Olmedo-Requena, Rocío; Jiménez-Moleón, José Juan

    2017-11-01

    Objective Understanding obesity misperception and associated factors can improve strategies to increase obesity identification and intervention. We investigate underestimation of child excess weight with a broader perspective, incorporating perceptions, views, and psychosocial aspects associated with obesity. Methods This study used cross-sectional data from the Spanish National Health Survey in 2011-2012 for children aged 2-14 years who are overweight or obese. Percentages of parental misperceived excess weight were calculated. Crude and adjusted analyses were performed for both child and parental factors, analyzing associations with underestimation. Results Two- to five-year-olds had the highest prevalence of misperceived overweight or obesity, at around 90%. In the 10-14-year-old age group, approximately 63% of overweight teens were misperceived as normal weight, as were 35.7% and 40% of obese males and females, respectively. Child gender did not affect underestimation, whereas a younger age did. Aspects of child social and mental health were associated with underestimation, as was short sleep duration. Exercise, weekend TV and videogames, and food habits had no effect on underestimation. Fathers were more likely to misperceive their child's weight status; however, parents' age had no effect. Smokers and parents with excess weight were less likely to misperceive their child's weight status. Parents being on a diet also decreased the odds of underestimation. Conclusions for practice This study identifies some characteristics of both parents and children which are associated with underestimation of child excess weight. These characteristics can be used for consideration in primary care, in prevention strategies, and for further research.

  1. The role of underestimating body size for self-esteem and self-efficacy among grade five children in Canada.

    Science.gov (United States)

    Maximova, Katerina; Khan, Mohammad K A; Austin, S Bryn; Kirk, Sara F L; Veugelers, Paul J

    2015-10-01

    Underestimating body size hinders the healthy behavior modification needed to prevent obesity. However, initiatives to improve body size misperceptions may have detrimental consequences for self-esteem and self-efficacy. Using sex-specific multiple mixed-effect logistic regression models, we examined the association of underestimating versus accurate body size perceptions with self-esteem and self-efficacy in a provincially representative sample of 5075 grade five school children. Body size perceptions were defined as the standardized difference between the body mass index (BMI, from measured height and weight) and self-perceived body size (Stunkard body rating scale). Self-esteem and self-efficacy for physical activity and healthy eating were self-reported. Most overweight boys and girls (91% and 83%) and most obese boys and girls (93% and 90%) underestimated their body size. Underestimating weight was associated with greater self-efficacy for physical activity and healthy eating among normal-weight children (odds ratio: 1.9 and 1.6 for boys, 1.5 and 1.4 for girls) and greater self-esteem among overweight and obese children (odds ratio: 2.0 and 6.2 for boys, 2.0 and 3.4 for girls). Results highlight the importance of developing optimal intervention strategies as part of targeted obesity prevention efforts that de-emphasize the focus on body weight, while improving body size perceptions. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Predictive equations underestimate resting energy expenditure in female adolescents with phenylketonuria

    Science.gov (United States)

    Quirk, Meghan E.; Schmotzer, Brian J.; Singh, Rani H.

    2010-01-01

    Resting energy expenditure (REE) is often used to estimate total energy needs. The Schofield equation based on weight and height has been reported to underestimate REE in female children with phenylketonuria (PKU). The objective of this observational, cross-sectional study was to evaluate the agreement of measured REE with predicted REE for female adolescents with PKU. A total of 36 females (aged 11.5-18.7 years) with PKU attending Emory University's Metabolic Camp (June 2002 - June 2008) underwent indirect calorimetry. Measured REE was compared to six predictive equations using paired Student's t-tests, regression-based analysis, and assessment of clinical accuracy. The differences between measured and predicted REE were modeled against clinical parameters to determine whether a relationship existed. All six selected equations significantly underpredicted measured REE (P < 0.005). The Schofield equation based on weight had the greatest level of agreement, with the lowest mean prediction bias (144 kcal) and highest concordance correlation coefficient (0.626). However, the Schofield equation based on weight lacked clinical accuracy, predicting measured REE within ±10% in only 14 of 36 participants. Clinical parameters were not associated with bias for any of the equations. Predictive equations underestimated measured REE in this group of female adolescents with PKU. Currently, there is no accurate and precise alternative to indirect calorimetry in this population. PMID:20497783
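
    The agreement statistics reported above follow standard formulas: mean prediction bias is the average of (measured − predicted), and Lin's concordance correlation coefficient is CCC = 2·s_xy / (s_x² + s_y² + (x̄ − ȳ)²), which penalizes both scatter and systematic offset. A sketch with made-up REE values (not the study data):

```python
import numpy as np

def concordance_ccc(measured, predicted):
    """Lin's concordance correlation coefficient between two paired series."""
    x, y = np.asarray(measured, float), np.asarray(predicted, float)
    sxy = np.mean((x - x.mean()) * (y - y.mean()))
    return 2.0 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

measured = np.array([1500.0, 1600.0, 1700.0, 1800.0])  # hypothetical REE, kcal/day
predicted = measured - 144.0                           # an equation biased low by 144 kcal
bias = float(np.mean(measured - predicted))            # mean prediction bias = 144.0
ccc = concordance_ccc(measured, predicted)             # < 1 purely from the offset
```

    Even a perfectly correlated prediction scores CCC < 1 when it carries a constant underprediction, which is why both the bias and the CCC are reported.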

  3. Underestimation of Severity of Previous Whiplash Injuries

    Science.gov (United States)

    Naqui, SZH; Lovell, SJ; Lovell, ME

    2008-01-01

    INTRODUCTION We noted a report that more significant symptoms may be expressed after second whiplash injuries by a suggested cumulative effect, including degeneration. We wondered if patients were underestimating the severity of their earlier injury. PATIENTS AND METHODS We studied recent medicolegal reports, to assess subjects with a second whiplash injury. They had been asked whether their earlier injury was worse, the same or lesser in severity. RESULTS From the study cohort, 101 patients (87%) felt that they had fully recovered from their first injury and 15 (13%) had not. Seventy-six subjects considered their first injury of lesser severity, 24 worse and 16 the same. Of the 24 that felt the violence of their first accident was worse, only 8 had worse symptoms, and 16 felt their symptoms were mainly the same or less than their symptoms from their second injury. Statistical analysis of the data revealed that the proportion of those claiming a difference who said the previous injury was lesser was 76% (95% CI 66–84%). The observed proportion with a lesser injury was considerably higher than the 50% anticipated. CONCLUSIONS We feel that subjects may underestimate the severity of an earlier injury and associated symptoms. Reasons for this may include secondary gain rather than any proposed cumulative effect. PMID:18201501

  4. Individuals underestimate moderate and vigorous intensity physical activity.

    Directory of Open Access Journals (Sweden)

    Karissa L Canning

    Full Text Available BACKGROUND: It is unclear whether the common physical activity (PA) intensity descriptors used in PA guidelines worldwide align with the associated percent heart rate maximum (%HRmax) method used for prescribing relative PA intensities consistently between sexes, ethnicities, age categories and across body mass index (BMI) classifications. OBJECTIVES: The objectives of this study were to determine whether individuals properly select light, moderate and vigorous intensity PA using the intensity descriptions in PA guidelines and to determine if there are differences in estimation across sex, ethnicity, age and BMI classifications. METHODS: 129 adults were instructed to walk/jog at a "light," "moderate" and "vigorous effort" in a randomized order. The PA intensities were categorized as being below, at or above the following %HRmax ranges: 50-63% for light, 64-76% for moderate and 77-93% for vigorous effort. RESULTS: On average, people correctly estimated light effort as 51.5±8.3%HRmax but underestimated moderate effort as 58.7±10.7%HRmax and vigorous effort as 69.9±11.9%HRmax. Participants walked at a light intensity (57.4±10.5%HRmax) when asked to walk at a pace that provided health benefits, wherein 52% of participants walked at a light effort pace, 19% walked at a moderate effort and 5% walked at a vigorous effort pace. These results did not differ by sex, ethnicity or BMI class. However, younger adults underestimated moderate and vigorous intensity more so than middle-aged adults (P<0.05). CONCLUSION: When the common PA guideline descriptors were aligned with the associated %HRmax ranges, the majority of participants underestimated the intensity of PA that is needed to obtain health benefits. Thus, new subjective descriptions for moderate and vigorous intensity may be warranted to aid individuals in correctly interpreting PA intensities.
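The %HRmax banding used above can be sketched directly. The ranges come from the abstract; the helper names are ours:

```python
# Sketch: label a selected effort as below/at/above its target %HRmax band
# (50-63% light, 64-76% moderate, 77-93% vigorous, per the abstract).

BANDS = {"light": (50.0, 63.0), "moderate": (64.0, 76.0), "vigorous": (77.0, 93.0)}

def classify(pct_hrmax, intensity):
    """Return 'below', 'at' or 'above' the target %HRmax band."""
    lo, hi = BANDS[intensity]
    if pct_hrmax < lo:
        return "below"
    if pct_hrmax > hi:
        return "above"
    return "at"

# Mean values reported in the abstract:
light_label = classify(51.5, "light")        # light correctly estimated
moderate_label = classify(58.7, "moderate")  # moderate underestimated
vigorous_label = classify(69.9, "vigorous")  # vigorous underestimated
```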

  5. Nuclear power plant cost underestimation: mechanisms and corrections

    International Nuclear Information System (INIS)

    Meyer, M.B.

    1984-01-01

    Criticisms of inaccurate nuclear power plant cost estimates have commonly focused upon what factors have caused actual costs to increase and not upon the engineering cost estimate methodology itself. This article describes two major sources of cost underestimation and suggests corrections for each which can be applied while retaining the traditional engineering methodology in general.

  6. Development and evaluation of a prediction model for underestimated invasive breast cancer in women with ductal carcinoma in situ at stereotactic large core needle biopsy.

    Directory of Open Access Journals (Sweden)

    Suzanne C E Diepstraten

    Full Text Available BACKGROUND: We aimed to develop a multivariable model for prediction of underestimated invasiveness in women with ductal carcinoma in situ at stereotactic large core needle biopsy, that can be used to select patients for sentinel node biopsy at primary surgery. METHODS: From the literature, we selected potential preoperative predictors of underestimated invasive breast cancer. Data of patients with nonpalpable breast lesions who were diagnosed with ductal carcinoma in situ at stereotactic large core needle biopsy, drawn from the prospective COBRA (Core Biopsy after RAdiological localization) and COBRA2000 cohort studies, were used to fit the multivariable model and assess its overall performance, discrimination, and calibration. RESULTS: 348 women with large core needle biopsy-proven ductal carcinoma in situ were available for analysis. In 100 (28.7%) patients invasive carcinoma was found at subsequent surgery. Nine predictors were included in the model. In the multivariable analysis, the predictors with the strongest association were lesion size (OR 1.12 per cm, 95% CI 0.98-1.28), number of cores retrieved at biopsy (OR per core 0.87, 95% CI 0.75-1.01), presence of lobular cancerization (OR 5.29, 95% CI 1.25-26.77), and microinvasion (OR 3.75, 95% CI 1.42-9.87). The overall performance of the multivariable model was poor, with an explained variation of 9% (Nagelkerke's R²), mediocre discrimination with an area under the receiver operating characteristic curve of 0.66 (95% confidence interval 0.58-0.73), and fairly good calibration. CONCLUSION: The evaluation of our multivariable prediction model in a large, clinically representative study population proves that routine clinical and pathological variables are not suitable to select patients with large core needle biopsy-proven ductal carcinoma in situ for sentinel node biopsy during primary surgery.

  7. Spatially unresolved SED fitting can underestimate galaxy masses: a solution to the missing mass problem

    Science.gov (United States)

    Sorba, Robert; Sawicki, Marcin

    2018-05-01

    We perform spatially resolved, pixel-by-pixel Spectral Energy Distribution (SED) fitting on galaxies up to z ~ 2.5 in the Hubble eXtreme Deep Field (XDF). Comparing stellar mass estimates from spatially resolved and spatially unresolved photometry, we find that unresolved masses can be systematically underestimated by factors of up to 5. The ratio of the unresolved to resolved mass measurement depends on the galaxy's specific star formation rate (sSFR): at low sSFRs the bias is small, but above sSFR ~ 10^-9.5 yr^-1 the discrepancy increases rapidly, such that galaxies with sSFRs ~ 10^-8 yr^-1 have unresolved mass estimates of only one-half to one-fifth of the resolved value. This result indicates that stellar masses estimated from spatially unresolved data sets need to be systematically corrected, in some cases by large amounts, and we provide an analytic prescription for applying this correction. We show that correcting stellar mass measurements for this bias changes the normalization and slope of the star-forming main sequence and reduces its intrinsic width; most dramatically, correcting for the mass bias increases the stellar mass density of the Universe at high redshift and can resolve the long-standing discrepancy between the directly measured cosmic SFR density at z ≳ 1 and that inferred from stellar mass densities (`the missing mass problem').

  8. Confounding environmental colour and distribution shape leads to underestimation of population extinction risk.

    Directory of Open Access Journals (Sweden)

    Mike S Fowler

    Full Text Available The colour of environmental variability influences the size of population fluctuations when filtered through density dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments, and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical) feedback models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample Skewness and Kurtosis and decreasing mean Kurtosis of these series alter the frequency distribution shape of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in the distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risks in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations.
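An AR(1) coloured environmental series of the kind discussed above can be sketched as follows; autocorrelation k > 0 gives red noise, k < 0 blue, k = 0 white. This is an illustration only (the study's spectral-mimicry control of distribution shape is not implemented here), and the function names are ours:

```python
# Sketch: generate AR(1) coloured noise x[t] = k*x[t-1] + sqrt(1-k^2)*e[t]
# and verify its colour via the lag-1 sample autocorrelation.
import random

def ar1_series(n, k, sd=1.0, seed=42):
    """AR(1) series with autocorrelation k and stationary variance sd^2."""
    rng = random.Random(seed)
    x = [rng.gauss(0, sd)]
    scale = (1 - k * k) ** 0.5
    for _ in range(n - 1):
        x.append(k * x[-1] + scale * rng.gauss(0, sd))
    return x

def lag1_autocorr(x):
    n = len(x)
    mean = sum(x) / n
    num = sum((x[t] - mean) * (x[t + 1] - mean) for t in range(n - 1))
    den = sum((v - mean) ** 2 for v in x)
    return num / den

red = ar1_series(5000, 0.7)    # slow, reddened fluctuations
blue = ar1_series(5000, -0.7)  # rapid, blued fluctuations
```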

  9. Did the Stern Review underestimate US and global climate damages?

    International Nuclear Information System (INIS)

    Ackerman, Frank; Stanton, Elizabeth A.; Hope, Chris; Alberth, Stephane

    2009-01-01

    The Stern Review received widespread attention for its innovative approach to the economics of climate change when it appeared in 2006, and generated controversies that have continued to this day. One key controversy concerns the magnitude of the expected impacts of climate change. Stern's estimates, based on results from the PAGE2002 model, sounded substantially greater than those produced by many other models, leading several critics to suggest that Stern had inflated his damage figures. We reached the opposite conclusion in a recent application of PAGE2002 in a study of the costs to the US economy of inaction on climate change. This article describes our revisions to the PAGE estimates, and explains our conclusion that the model runs used in the Stern Review may well underestimate US and global damages. Stern's estimates from PAGE2002 implied that mean business-as-usual damages in 2100 would represent just 0.4 percent of GDP for the United States and 2.2 percent of GDP for the world. Our revisions and reinterpretation of the PAGE model imply that climate damages in 2100 could reach 2.6 percent of GDP for the United States and 10.8 percent for the world.

  10. Correction of systematic bias in ultrasound dating in studies of small-for-gestational-age birth: an example from the Iowa Health in Pregnancy Study.

    Science.gov (United States)

    Harland, Karisa K; Saftlas, Audrey F; Wallis, Anne B; Yankowitz, Jerome; Triche, Elizabeth W; Zimmerman, M Bridget

    2012-09-01

    The authors examined whether early ultrasound dating (≤20 weeks) of gestational age (GA) in small-for-gestational-age (SGA) fetuses may underestimate gestational duration and therefore the incidence of SGA birth. Within a population-based case-control study (May 2002-June 2005) of Iowa SGA births and preterm deliveries identified from birth records (n = 2,709), the authors illustrate a novel methodological approach with which to assess and correct for systematic underestimation of GA by early ultrasound in women with suspected SGA fetuses. After restricting the analysis to subjects with first-trimester prenatal care, a nonmissing date of the last menstrual period (LMP), and early ultrasound (n = 1,135), SGA subjects' ultrasound GA was 5.5 days less than their LMP GA, on average. Multivariable linear regression was conducted to determine the extent to which ultrasound GA predicted LMP dating and to correct for systematic misclassification that results after applying standard guidelines to adjudicate differences in these measures. In the unadjusted model, SGA subjects required a correction of +1.5 weeks to the ultrasound estimate. With adjustment for maternal age, smoking, and first-trimester vaginal bleeding, standard guidelines for adjudicating differences in ultrasound and LMP dating underestimated SGA birth by 12.9% and overestimated preterm delivery by 8.7%. This methodological approach can be applied by researchers using different study populations in similar research contexts.
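The core of the regression-based correction described above can be sketched as an ordinary least squares fit of LMP-dated gestational age on early-ultrasound gestational age, whose fitted line then corrects new ultrasound estimates. The data are synthetic and the study's actual model also adjusted for maternal covariates:

```python
# Sketch: fit LMP GA = a + b * ultrasound GA among suspected-SGA pregnancies,
# then apply the fitted line to correct a new ultrasound estimate.

def fit_line(x, y):
    """Ordinary least squares for y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Synthetic pairs (ultrasound GA, LMP GA) in weeks; ultrasound reads low.
us_ga = [18.0, 19.0, 20.0, 17.0, 16.0]
lmp_ga = [18.8, 19.8, 20.8, 17.8, 16.8]

a, b = fit_line(us_ga, lmp_ga)
corrected = a + b * 19.5  # corrected GA for a new ultrasound estimate of 19.5 wk
```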

  11. Stress Underestimation and Mental Health Outcomes in Male Japanese Workers: a 1-Year Prospective Study.

    Science.gov (United States)

    Izawa, Shuhei; Nakamura-Taira, Nanako; Yamada, Kosuke Chris

    2016-12-01

    Being appropriately aware of the extent of stress experienced in daily life is essential in motivating stress management behaviours. Excessive stress underestimation obstructs this process, which is expected to exert adverse effects on health. We prospectively examined associations between stress underestimation and mental health outcomes in Japanese workers. Web-based surveys were conducted twice with an interval of 1 year on 2359 Japanese male workers. Participants were asked to complete survey items concerning stress underestimation, depressive symptoms, sickness absence, and antidepressant use. Multiple logistic regression analysis revealed that high baseline levels of 'overgeneralization of stress' and 'insensitivity to stress' were significantly associated with new-onset depressive symptoms (OR = 2.66 [95 % CI, 1.54-4.59], p < 0.05). These findings suggest that stress underestimation, including stress insensitivity and the overgeneralization of stress, could exert adverse effects on mental health.

  12. Poverty Underestimation in Rural India- A Critique

    OpenAIRE

    Sivakumar, Marimuthu; Sarvalingam, A

    2010-01-01

    Whenever the Planning Commission of India releases poverty data, the data are criticised by experts and economists. The main criticism is the underestimation of poverty, especially in rural India, by the Planning Commission. This paper focuses on that criticism and compares the Indian Planning Commission's 2004-05 rural poverty data with India's 2400 kcal poverty norm, the World Bank's US $1.08 poverty concept and the Asian Development Bank's US $1.35 poverty concept.

  13. BMI may underestimate the socioeconomic gradient in true obesity

    NARCIS (Netherlands)

    van den Berg, G.; van Eijsden, M.; Vrijkotte, T. G. M.; Gemke, R. J. B. J.

    2013-01-01

    Body mass index (BMI) does not make a distinction between fat mass and lean mass. In children, high fat mass appears to be associated with low maternal education, as does low lean mass, because maternal education is associated with physical activity. Therefore, BMI might underestimate true obesity.

  14. ANALYSIS AND CORRECTION OF SYSTEMATIC HEIGHT MODEL ERRORS

    Directory of Open Access Journals (Sweden)

    K. Jacobsen

    2016-06-01

    Full Text Available The geometry of digital height models (DHM) determined with optical satellite stereo combinations depends upon the image orientation, influenced by the satellite camera, the system calibration and attitude registration. As standard these days the image orientation is available in form of rational polynomial coefficients (RPC). Usually a bias correction of the RPC based on ground control points is required. In most cases the bias correction requires an affine transformation, sometimes only shifts, in image or object space. For some satellites and some cases, as caused by small base length, such an image orientation does not lead to the possible accuracy of height models. As reported e.g. by Yong-hua et al. 2015 and Zhang et al. 2015, especially the Chinese stereo satellite ZiYuan-3 (ZY-3) has a limited calibration accuracy and just an attitude recording of 4 Hz which may not be satisfying. Zhang et al. 2015 tried to improve the attitude based on the color sensor bands of ZY-3, but the color images are not always available, as also detailed satellite orientation information. There is a tendency of systematic deformation at a Pléiades tri-stereo combination with small base length. The small base length enlarges small systematic errors to object space. But also in some other satellite stereo combinations systematic height model errors have been detected. The largest influence is the not satisfying leveling of height models, but also low frequency height deformations can be seen. A tilt of the DHM by theory can be eliminated by ground control points (GCP), but often the GCP accuracy and distribution is not optimal, not allowing a correct leveling of the height model. In addition a model deformation at GCP locations may lead to not optimal DHM leveling. Supported by reference height models better accuracy has been reached. As reference height model the Shuttle Radar Topography Mission (SRTM) digital surface model (DSM) or the new AW3D30 DSM, based on ALOS
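The levelling correction discussed above, removing a tilt of the DHM relative to reference heights, can be sketched as a least-squares plane fit to the height differences at check points. The points below are synthetic and the plane model (dz = a + b·x + c·y) is the simplest possible trend; real workflows fit against many GCPs or a reference DSM such as SRTM:

```python
# Sketch: fit a tilt plane dz = a + b*x + c*y to DHM-minus-reference height
# differences by least squares (normal equations, Gauss-Jordan solve), so the
# tilt can be subtracted from the DHM.

def fit_plane(pts):
    """Least-squares coefficients (a, b, c) for dz = a + b*x + c*y."""
    S = [[0.0] * 3 for _ in range(3)]  # normal matrix A^T A
    t = [0.0] * 3                      # right-hand side A^T dz
    for x, y, dz in pts:
        row = (1.0, x, y)
        for i in range(3):
            t[i] += row[i] * dz
            for j in range(3):
                S[i][j] += row[i] * row[j]
    for i in range(3):                 # Gauss-Jordan elimination
        piv = S[i][i]
        for j in range(3):
            S[i][j] /= piv
        t[i] /= piv
        for r in range(3):
            if r != i:
                f = S[r][i]
                for j in range(3):
                    S[r][j] -= f * S[i][j]
                t[r] -= f * t[i]
    return t

# Synthetic check points on a pure tilt: dz = 0.5 + 0.01*x - 0.02*y (metres)
pts = [(x, y, 0.5 + 0.01 * x - 0.02 * y)
       for x in (0, 100, 200) for y in (0, 100, 200)]
a, b, c = fit_plane(pts)
```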

  15. Systematic Digital Forensic Investigation Model

    OpenAIRE


    2011-01-01

    Law practitioners are in an uninterrupted battle with criminals in the application of digital/computer technologies, and require the development of a proper methodology to systematically search digital devices for significant evidence. Computer fraud and digital crimes are growing day by day and unfortunately less than two percent of the reported cases result in conviction. This paper explores the development of the digital forensics process model, compares digital forensic methodologies, and fina...

  16. Simple way to avoid underestimating uncertainties of the evaluated values for sets of consistent data: a proposal for an improvement of the evaluations

    International Nuclear Information System (INIS)

    Chechev, V.P.

    2001-01-01

    To avoid underestimating the uncertainty of the evaluated values for sets of consistent data the following rule is proposed: if the smallest of the input measurement uncertainties (σmin) is more than the uncertainty obtained from statistical data processing, then σmin should be used as the final uncertainty of the evaluated value. This rule is justified by the fact that almost any measurement is indirect and the total uncertainty of any precise measurement includes mainly the systematic error of the measurement method. Exceptions can be made only for measured data obtained by essentially different methods (for example, half-life measurements by calorimetry and by specific activity determination).
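The proposed rule can be sketched directly: take the inverse-variance weighted mean, but floor its statistical uncertainty at the smallest input uncertainty. The values and the function name are illustrative:

```python
# Sketch: weighted-mean evaluation whose reported uncertainty is never
# smaller than the smallest input uncertainty (the rule proposed above).

def evaluate(values, sigmas):
    w = [1.0 / s ** 2 for s in sigmas]
    mean = sum(wi * v for wi, v in zip(w, values)) / sum(w)
    stat_unc = (1.0 / sum(w)) ** 0.5       # purely statistical uncertainty
    return mean, max(stat_unc, min(sigmas))  # floor at sigma_min

# Three consistent measurements of the same quantity (arbitrary units):
mean, unc = evaluate([10.02, 10.05, 10.03], [0.04, 0.05, 0.06])
```

Here the statistical uncertainty of the weighted mean (about 0.028) is smaller than the best single measurement's 0.04, so 0.04 is reported instead.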

  17. Forecasting the mortality rates using Lee-Carter model and Heligman-Pollard model

    Science.gov (United States)

    Ibrahim, R. I.; Ngataman, N.; Abrisam, W. N. A. Wan Mohd

    2017-09-01

    Improvement in life expectancies has driven further declines in mortality. The sustained reduction in mortality rates and its systematic underestimation has been attracting the significant interest of researchers in recent years because of its potential impact on population size and structure, social security systems, and (from an actuarial perspective) the life insurance and pensions industry worldwide. Among all forecasting methods, the Lee-Carter model has been widely accepted by the actuarial community and the Heligman-Pollard model has been widely used by researchers in modelling and forecasting future mortality. Therefore, this paper focuses only on the Lee-Carter and Heligman-Pollard models. The main objective of this paper is to investigate how accurately these two models perform using Malaysian data. Since these models involve nonlinear equations that are difficult to solve explicitly, the Matrix Laboratory Version 8.0 (MATLAB 8.0) software is used to estimate the parameters of the models. The Autoregressive Integrated Moving Average (ARIMA) procedure is applied to acquire the forecasted parameters for both models, and the forecasted mortality rates are obtained using all the values of the forecasted parameters. To investigate the accuracy of the estimation, the forecasted results are compared against actual mortality rates. The results indicate that both models provide better results for the male population. However, for the elderly female population, the Heligman-Pollard model seems to underestimate the mortality rates while the Lee-Carter model seems to overestimate them.
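The Lee-Carter structure (log m(x,t) = a_x + b_x·k_t) can be sketched with the common first-order approximation instead of a full SVD: a_x as the row mean of log rates, k_t as the column sums of the centred matrix, and b_x by least squares. This is a heavy simplification of the paper's MATLAB/ARIMA workflow, with a synthetic mortality surface:

```python
# Sketch: first-order Lee-Carter estimation on a synthetic log-mortality
# matrix log_m[age][year]. Not the paper's estimation code.

def lee_carter(log_m):
    ages, years = len(log_m), len(log_m[0])
    a = [sum(row) / years for row in log_m]                       # age pattern
    centred = [[log_m[x][t] - a[x] for t in range(years)] for x in range(ages)]
    k = [sum(centred[x][t] for x in range(ages)) for t in range(years)]  # time index
    kk = sum(kt * kt for kt in k)
    b = [sum(centred[x][t] * k[t] for t in range(years)) / kk     # age sensitivity
         for x in range(ages)]
    return a, b, k

# Synthetic surface with known parameters (sum(b) = 1, mean(k) = 0):
true_a = [-6.0, -5.0, -4.0]
true_b = [0.5, 0.3, 0.2]
true_k = [2.0, 1.0, 0.0, -1.0, -2.0]
log_m = [[true_a[x] + true_b[x] * true_k[t] for t in range(5)] for x in range(3)]
a, b, k = lee_carter(log_m)
```

On this noise-free surface the approximation recovers the generating parameters exactly, which is a useful sanity check before applying it to real rates.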

  18. Uterine radiation dose from open sources: The potential for underestimation

    International Nuclear Information System (INIS)

    Cox, P.H.; Klijn, J.G.M.; Pillay, M.; Bontebal, M.; Schoenfeld, D.H.W.

    1990-01-01

    Recent observations on the biodistribution of a therapeutic dose of sodium iodide I 131 in a patient with an unsuspected early pregnancy led us to suspect that current dose estimates with respect to uterine exposure (ARSAC 1988) may seriously underestimate the actual exposure of the developing foetus. (orig.)

  19. A SYSTEMATIC STUDY OF SOFTWARE QUALITY MODELS

    OpenAIRE

    Dr.Vilas. M. Thakare; Ashwin B. Tomar

    2011-01-01

    This paper aims to provide a basis for software quality model research, through a systematic study of papers. It identifies nearly seventy software quality research papers from journals and classifies them by research topic, estimation approach, study context and data set. The paper's results, combined with other knowledge, provide support for recommendations in future software quality model research: to increase the area of search for relevant studies, carefully select the papers within a set ...

  20. Radiographic Underestimation of In Vivo Cup Coverage Provided by Total Hip Arthroplasty for Dysplasia.

    Science.gov (United States)

    Nie, Yong; Wang, HaoYang; Huang, ZeYu; Shen, Bin; Kraus, Virginia Byers; Zhou, Zongke

    2018-01-01

    The accuracy of using 2-dimensional anteroposterior pelvic radiography to assess acetabular cup coverage among patients with developmental dysplasia of the hip after total hip arthroplasty (THA) remains unclear in retrospective clinical studies. A group of 20 patients with developmental dysplasia of the hip (20 hips) underwent cementless THA. During surgery but after acetabular reconstruction, bone wax was pressed onto the uncovered surface of the acetabular cup. A surface model of the bone wax was generated with 3-dimensional scanning. The percentage of the acetabular cup that was covered by intact host acetabular bone in vivo was calculated with modeling software. Acetabular cup coverage also was determined from a postoperative supine anteroposterior pelvic radiograph. The height of the hip center (distance from the center of the femoral head perpendicular to the inter-teardrop line) also was determined from radiographs. Radiographic cup coverage was a mean of 6.93% (SD, 2.47%) lower than in vivo cup coverage for these 20 patients with developmental dysplasia of the hip (P<.001) and was strongly correlated with in vivo cup coverage (Pearson r=0.761, P<.001). The size of the acetabular cup (P=.001) but not the position of the hip center (high vs normal) was significantly associated with the difference between radiographic and in vivo cup coverage. Two-dimensional radiographically determined cup coverage conservatively reflects in vivo cup coverage and remains an important index (taking 7% underestimation errors and the effect of greater underestimation with larger cup size into account) for assessing the stability of the cup and monitoring for adequate ingrowth of bone. [Orthopedics. 2018; 41(1):e46-e51.]. Copyright 2017, SLACK Incorporated.

  1. Completeness and underestimation of cancer mortality rate in Iran: a report from Fars Province in southern Iran.

    Science.gov (United States)

    Marzban, Maryam; Haghdoost, Ali-Akbar; Dortaj, Eshagh; Bahrampour, Abbas; Zendehdel, Kazem

    2015-03-01

    The incidence and mortality rates of cancer are increasing worldwide, particularly in the developing countries. Valid data are needed for measuring the cancer burden and making appropriate decisions toward cancer control. We evaluated the completeness of death registry with regard to cancer death in Fars Province, I. R. of Iran. We used data from three sources in Fars Province, including the national death registry (source 1), the follow-up data from the pathology-based cancer registry (source 2) and hospital based records (source 3) during 2004 - 2006. We used the capture-recapture method and estimated underestimation and the true age standardized mortality rate (ASMR) for cancer. We used log-linear (LL) modeling for statistical analysis. We observed 1941, 480, and 355 cancer deaths in sources 1, 2 and 3, respectively. After data linkage, we estimated that mortality registry had about 40% underestimation for cancer death. After adjustment for this underestimation rate, the ASMR of cancer in the Fars Province for all cancer types increased from 44.8 per 100,000 (95% CI: 42.8 - 46.7) to 76.3 per 100,000 (95% CI: 73.3 - 78.9), accounting for 3309 (95% CI: 3151 - 3293) cancer deaths annually. The mortality rate of cancer is considerably higher than the rates reported by the routine registry in Iran. Improvement in the validity and completeness of the mortality registry is needed to estimate the true mortality rate caused by cancer in Iran.
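The capture-recapture idea behind the completeness estimate above can be sketched in its simplest two-source (Chandra Sekar-Deming / Lincoln-Petersen) form. The study itself linked three sources with log-linear models; here the source-1 and source-2 totals are taken from the abstract, but the overlap count `m` is invented for illustration:

```python
# Sketch: two-source capture-recapture estimate of the true number of cancer
# deaths and the completeness of the death registry. Overlap m is hypothetical.

def chandrasekar_deming(n1, n2, m):
    """n1, n2: deaths captured by each source; m: deaths captured by both."""
    total = n1 * n2 / m           # estimated true number of deaths
    completeness = n1 / total     # fraction of deaths source 1 captured
    return total, completeness

# Source 1 (death registry) = 1941, source 2 (cancer registry follow-up) = 480
# per the abstract; overlap m = 380 is an assumed value for the example.
total, completeness = chandrasekar_deming(1941, 480, 380)
missed = total - 1941             # deaths the registry never recorded
```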

  2. Forest loss maps from regional satellite monitoring systematically underestimate deforestation in two rapidly changing parts of the Amazon

    Science.gov (United States)

    Milodowski, D. T.; Mitchard, E. T. A.; Williams, M.

    2017-09-01

    Accurate, consistent reporting of changing forest area, stratified by forest type, is required for all countries under their commitments to the Paris Agreement (UNFCCC 2015 Adoption of the Paris Agreement (Paris: UNFCCC)). Such change reporting may directly impact on payments through comparisons to national Reference (Emissions) Levels under the Reducing Emissions from Deforestation and forest Degradation (REDD+) framework. The emergence of global, satellite-based forest monitoring systems, including Global Forest Watch (GFW) and FORMA, have great potential in aiding this endeavour. However, the accuracy of these systems has been questioned and their uncertainties are poorly constrained, both in terms of the spatial extent of forest loss and timing of change. Here, using annual time series of 5 m optical imagery at two sites in the Brazilian Amazon, we demonstrate that GFW more accurately detects forest loss than the coarser-resolution FORMA or Brazil’s national-level PRODES product, though all underestimate the rate of loss. We conclude GFW provides robust indicators of forest loss, at least for larger-scale forest change, but under-predicts losses driven by small-scale disturbances (< 2 ha), even though these are much larger than its minimum mapping unit (0.09 ha).

  3. Misery Has More Company Than People Think: Underestimating the Prevalence of Others’ Negative Emotions

    Science.gov (United States)

    Jordan, Alexander H.; Monin, Benoît; Dweck, Carol S.; Lovett, Benjamin J.; John, Oliver P.; Gross, James J.

    2014-01-01

    Four studies document underestimations of the prevalence of others’ negative emotions, and suggest causes and correlates of these erroneous perceptions. In Study 1A, participants reported that their negative emotions were more private or hidden than their positive emotions; in Study 1B, participants underestimated the peer prevalence of common negative, but not positive, experiences described in Study 1A. In Study 2, people underestimated negative emotions and overestimated positive emotions even for well-known peers, and this effect was partially mediated by the degree to which those peers reported suppression of negative (vs. positive) emotions. Study 3 showed that lower estimations of the prevalence of negative emotional experiences predicted greater loneliness and rumination and lower life satisfaction, and that higher estimations for positive emotional experiences predicted lower life satisfaction. Taken together, these studies suggest that people may think they are more alone in their emotional difficulties than they really are. PMID:21177878

  4. Consumer underestimation of sodium in fast food restaurant meals: Results from a cross-sectional observational study.

    Science.gov (United States)

    Moran, Alyssa J; Ramirez, Maricelle; Block, Jason P

    2017-06-01

    Restaurants are key venues for reducing sodium intake in the U.S. but little is known about consumer perceptions of sodium in restaurant foods. This study quantifies the difference between estimated and actual sodium content of restaurant meals and examines predictors of underestimation in adult and adolescent diners at fast food restaurants. In 2013 and 2014, meal receipts and questionnaires were collected from adults and adolescents dining at six restaurant chains in four New England cities. The sample included 993 adults surveyed during 229 dinnertime visits to 44 restaurants and 794 adolescents surveyed during 298 visits to 49 restaurants after school or at lunchtime. Diners were asked to estimate the amount of sodium (mg) in the meal they had just purchased. Sodium estimates were compared with actual sodium in the meal, calculated by matching all items that the respondent purchased for personal consumption to sodium information on chain restaurant websites. Mean (SD) actual sodium (mg) content of meals was 1292 (970) for adults and 1128 (891) for adolescents. One-quarter of diners (176 (23%) adults, 155 (25%) adolescents) were unable or unwilling to provide estimates of the sodium content of their meals. Of those who provided estimates, 90% of adults and 88% of adolescents underestimated sodium in their meals, with adults underestimating sodium by a mean (SD) of 1013 mg (1,055) and adolescents underestimating by 876 mg (1,021). Respondents underestimated sodium content more for meals with greater sodium content. Education about sodium at point-of-purchase, such as provision of sodium information on restaurant menu boards, may help correct consumer underestimation, particularly for meals of high sodium content. Copyright © 2017 Elsevier Ltd. All rights reserved.
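The receipt-versus-estimate comparison above reduces to simple per-meal arithmetic, sketched here with invented values (the survey matched purchased items to chain websites; these numbers are not its data):

```python
# Sketch: per-diner sodium estimation gap = actual meal sodium (summed from
# menu data for purchased items) minus the diner's estimate. Positive gap
# means the diner underestimated. All values illustrative.

meals = [
    {"estimate_mg": 400, "items_mg": [950, 310, 120]},
    {"estimate_mg": 800, "items_mg": [700, 450]},
    {"estimate_mg": 2000, "items_mg": [1200, 300]},
]

gaps = [sum(m["items_mg"]) - m["estimate_mg"] for m in meals]
share_underestimating = sum(g > 0 for g in gaps) / len(gaps)
mean_underestimate = sum(g for g in gaps if g > 0) / sum(g > 0 for g in gaps)
```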

  5. Systematic identification of crystallization kinetics within a generic modelling framework

    DEFF Research Database (Denmark)

    Abdul Samad, Noor Asma Fazli Bin; Meisler, Kresten Troelstrup; Gernaey, Krist

    2012-01-01

    A systematic development of constitutive models within a generic modelling framework has been developed for use in design, analysis and simulation of crystallization operations. The framework contains a tool for model identification connected with a generic crystallizer modelling tool-box, a tool...

  6. Systematic modelling and simulation of refrigeration systems

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1998-01-01

    The task of developing a simulation model of a refrigeration system can be very difficult and time consuming. In order for this process to be effective, a systematic method for developing the system model is required. This method should aim at guiding the developer to clarify the purpose of the simulation, to select appropriate component models and to set up the equations in a well-arranged way. In this paper the outline of such a method is proposed and examples showing the use of this method for simulation of refrigeration systems are given.

  7. Revisiting typhoid fever surveillance in low and middle income countries: lessons from systematic literature review of population-based longitudinal studies.

    Science.gov (United States)

    Mogasale, Vittal; Mogasale, Vijayalaxmi V; Ramani, Enusa; Lee, Jung Seok; Park, Ju Yeon; Lee, Kang Sung; Wierzba, Thomas F

    2016-01-29

    Because the control of typhoid fever is an important public health concern in low- and middle-income countries, improving typhoid surveillance will help in planning and implementing typhoid control activities such as deployment of new-generation Vi conjugate typhoid vaccines. We conducted a systematic literature review of longitudinal population-based blood culture-confirmed typhoid fever studies from low- and middle-income countries published from 1st January 1990 to 31st December 2013. We quantitatively summarized typhoid fever incidence rates and qualitatively reviewed study methodology that could have influenced rate estimates. We used a meta-analysis approach based on a random-effects model to summarize the hospitalization rates. Twenty-two papers presented longitudinal population-based and blood culture-confirmed typhoid fever incidence estimates from 20 distinct sites in low- and middle-income countries. Both the reported incidence and hospitalization rates and the study methodology were heterogeneous across the sites. We elucidated how the incidence rates were underestimated in published studies, summarized six categories of underestimation biases observed in these studies, and presented potential solutions. Published longitudinal typhoid fever studies in low- and middle-income countries are geographically clustered, and the methodology employed has a potential for underestimation. Future studies should account for these limitations.
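    Random-effects pooling of heterogeneous site-level rates, as used here for the hospitalization rates, is commonly done with the DerSimonian-Laird estimator. A minimal sketch of that generic estimator (not the authors' code; inputs would typically be log-transformed rates and their within-study variances, and the example numbers are invented):

```python
import math

def dersimonian_laird(effects, variances):
    """Pool study-level estimates (e.g. log hospitalization rates)
    with a DerSimonian-Laird random-effects model."""
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q and the between-study variance tau^2
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)
    # random-effects weights fold in the between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, tau2

# Invented site-level effects and variances, purely illustrative.
pooled, se, tau2 = dersimonian_laird([0.2, 1.2, 0.7], [0.05, 0.05, 0.05])
```

    When the site estimates disagree more than their within-study variances allow, tau^2 becomes positive and the pooled standard error widens relative to a fixed-effect analysis, which is the point of the random-effects choice for heterogeneous sites.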

  8. A Bayesian model to correct underestimated 3-D wind speeds from sonic anemometers increases turbulent components of the surface energy balance

    Science.gov (United States)

    John M. Frank; William J. Massman; Brent E. Ewers

    2016-01-01

    Sonic anemometers are the principal instruments in micrometeorological studies of turbulence and ecosystem fluxes. Common designs underestimate vertical wind measurements because they lack a correction for transducer shadowing, with no consensus on a suitable correction. We reanalyze a subset of data collected during field experiments in 2011 and 2013 featuring two or...

  9. Childhood leukaemia and low-level radiation - are we underestimating the risk?

    International Nuclear Information System (INIS)

    Wakeford, R.

    1996-01-01

    The Seascale childhood leukaemia 'cluster' can be interpreted as indicating that the risk of childhood leukaemia arising from low-level exposure to ionising radiation has been underestimated. Indeed, several variants of such an interpretation have been advanced. These include exposure to particular radionuclides, an underestimation of the radiation risk coefficient for childhood leukaemia, and the existence of a previously unrecognized risk of childhood leukaemia from the preconceptional irradiation of fathers. However, the scientific assessment of epidemiological associations is a complex matter, and such associations must be interpreted with caution. It would now seem most likely that the Seascale 'cluster' does not represent an unanticipated effect of the exposure to ionising radiation, but rather the effect of unusual population mixing generated by the Sellafield site which has produced an increase in the infection-based risk of childhood leukaemia. This episode in the history of epidemiological research provides a timely reminder of the need for great care in the interpretation of novel statistical associations. (author)

  10. Effects of waveform model systematics on the interpretation of GW150914

    Science.gov (United States)

    Abbott, B. P.; Abbott, R.; Abbott, T. D.; Abernathy, M. R.; Acernese, F.; Ackley, K.; Adams, C.; Adams, T.; Addesso, P.; Adhikari, R. X.; Adya, V. B.; Affeldt, C.; Agathos, M.; Agatsuma, K.; Aggarwal, N.; Aguiar, O. D.; Aiello, L.; Ain, A.; Ajith, P.; Allen, B.; Allocca, A.; Altin, P. A.; Ananyeva, A.; Anderson, S. B.; Anderson, W. G.; Appert, S.; Arai, K.; Araya, M. C.; Areeda, J. S.; Arnaud, N.; Arun, K. G.; Ascenzi, S.; Ashton, G.; Ast, M.; Aston, S. M.; Astone, P.; Aufmuth, P.; Aulbert, C.; Avila-Alvarez, A.; Babak, S.; Bacon, P.; Bader, M. K. M.; Baker, P. T.; Baldaccini, F.; Ballardin, G.; Ballmer, S. W.; Barayoga, J. C.; E Barclay, S.; Barish, B. C.; Barker, D.; Barone, F.; Barr, B.; Barsotti, L.; Barsuglia, M.; Barta, D.; Bartlett, J.; Bartos, I.; Bassiri, R.; Basti, A.; Batch, J. C.; Baune, C.; Bavigadda, V.; Bazzan, M.; Beer, C.; Bejger, M.; Belahcene, I.; Belgin, M.; Bell, A. S.; Berger, B. K.; Bergmann, G.; Berry, C. P. L.; Bersanetti, D.; Bertolini, A.; Betzwieser, J.; Bhagwat, S.; Bhandare, R.; Bilenko, I. A.; Billingsley, G.; Billman, C. R.; Birch, J.; Birney, R.; Birnholtz, O.; Biscans, S.; Bisht, A.; Bitossi, M.; Biwer, C.; Bizouard, M. A.; Blackburn, J. K.; Blackman, J.; Blair, C. D.; Blair, D. G.; Blair, R. M.; Bloemen, S.; Bock, O.; Boer, M.; Bogaert, G.; Bohe, A.; Bondu, F.; Bonnand, R.; Boom, B. A.; Bork, R.; Boschi, V.; Bose, S.; Bouffanais, Y.; Bozzi, A.; Bradaschia, C.; Brady, P. R.; Braginsky, V. B.; Branchesi, M.; E Brau, J.; Briant, T.; Brillet, A.; Brinkmann, M.; Brisson, V.; Brockill, P.; E Broida, J.; Brooks, A. F.; Brown, D. A.; Brown, D. D.; Brown, N. M.; Brunett, S.; Buchanan, C. C.; Buikema, A.; Bulik, T.; Bulten, H. J.; Buonanno, A.; Buskulic, D.; Buy, C.; Byer, R. L.; Cabero, M.; Cadonati, L.; Cagnoli, G.; Cahillane, C.; Calderón Bustillo, J.; Callister, T. A.; Calloni, E.; Camp, J. B.; Cannon, K. C.; Cao, H.; Cao, J.; Capano, C. 
D.; Capocasa, E.; Carbognani, F.; Caride, S.; Casanueva Diaz, J.; Casentini, C.; Caudill, S.; Cavaglià, M.; Cavalier, F.; Cavalieri, R.; Cella, G.; Cepeda, C. B.; Cerboni Baiardi, L.; Cerretani, G.; Cesarini, E.; Chamberlin, S. J.; Chan, M.; Chao, S.; Charlton, P.; Chassande-Mottin, E.; Cheeseboro, B. D.; Chen, H. Y.; Chen, Y.; Cheng, H.-P.; Chincarini, A.; Chiummo, A.; Chmiel, T.; Cho, H. S.; Cho, M.; Chow, J. H.; Christensen, N.; Chu, Q.; Chua, A. J. K.; Chua, S.; Chung, S.; Ciani, G.; Clara, F.; Clark, J. A.; Cleva, F.; Cocchieri, C.; Coccia, E.; Cohadon, P.-F.; Colla, A.; Collette, C. G.; Cominsky, L.; Constancio, M., Jr.; Conti, L.; Cooper, S. J.; Corbitt, T. R.; Cornish, N.; Corsi, A.; Cortese, S.; Costa, C. A.; Coughlin, M. W.; Coughlin, S. B.; Coulon, J.-P.; Countryman, S. T.; Couvares, P.; Covas, P. B.; E Cowan, E.; Coward, D. M.; Cowart, M. J.; Coyne, D. C.; Coyne, R.; E Creighton, J. D.; Creighton, T. D.; Cripe, J.; Crowder, S. G.; Cullen, T. J.; Cumming, A.; Cunningham, L.; Cuoco, E.; Dal Canton, T.; Danilishin, S. L.; D'Antonio, S.; Danzmann, K.; Dasgupta, A.; Da Silva Costa, C. F.; Dattilo, V.; Dave, I.; Davier, M.; Davies, G. S.; Davis, D.; Daw, E. J.; Day, B.; Day, R.; De, S.; DeBra, D.; Debreczeni, G.; Degallaix, J.; De Laurentis, M.; Deléglise, S.; Del Pozzo, W.; Denker, T.; Dent, T.; Dergachev, V.; De Rosa, R.; DeRosa, R. T.; DeSalvo, R.; Devenson, J.; Devine, R. C.; Dhurandhar, S.; Díaz, M. C.; Di Fiore, L.; Di Giovanni, M.; Di Girolamo, T.; Di Lieto, A.; Di Pace, S.; Di Palma, I.; Di Virgilio, A.; Doctor, Z.; Dolique, V.; Donovan, F.; Dooley, K. L.; Doravari, S.; Dorrington, I.; Douglas, R.; Dovale Álvarez, M.; Downes, T. P.; Drago, M.; Drever, R. W. P.; Driggers, J. C.; Du, Z.; Ducrot, M.; E Dwyer, S.; Edo, T. B.; Edwards, M. C.; Effler, A.; Eggenstein, H.-B.; Ehrens, P.; Eichholz, J.; Eikenberry, S. S.; Eisenstein, R. A.; Essick, R. C.; Etienne, Z.; Etzel, T.; Evans, M.; Evans, T. 
M.; Everett, R.; Factourovich, M.; Fafone, V.; Fair, H.; Fairhurst, S.; Fan, X.; Farinon, S.; Farr, B.; Farr, W. M.; Fauchon-Jones, E. J.; Favata, M.; Fays, M.; Fehrmann, H.; Fejer, M. M.; Fernández Galiana, A.; Ferrante, I.; Ferreira, E. C.; Ferrini, F.; Fidecaro, F.; Fiori, I.; Fiorucci, D.; Fisher, R. P.; Flaminio, R.; Fletcher, M.; Fong, H.; Forsyth, S. S.; Fournier, J.-D.; Frasca, S.; Frasconi, F.; Frei, Z.; Freise, A.; Frey, R.; Frey, V.; Fries, E. M.; Fritschel, P.; Frolov, V. V.; Fulda, P.; Fyffe, M.; Gabbard, H.; Gadre, B. U.; Gaebel, S. M.; Gair, J. R.; Gammaitoni, L.; Gaonkar, S. G.; Garufi, F.; Gaur, G.; Gayathri, V.; Gehrels, N.; Gemme, G.; Genin, E.; Gennai, A.; George, J.; Gergely, L.; Germain, V.; Ghonge, S.; Ghosh, Abhirup; Ghosh, Archisman; Ghosh, S.; Giaime, J. A.; Giardina, K. D.; Giazotto, A.; Gill, K.; Glaefke, A.; Goetz, E.; Goetz, R.; Gondan, L.; González, G.; Gonzalez Castro, J. M.; Gopakumar, A.; Gorodetsky, M. L.; E Gossan, S.; Gosselin, M.; Gouaty, R.; Grado, A.; Graef, C.; Granata, M.; Grant, A.; Gras, S.; Gray, C.; Greco, G.; Green, A. C.; Groot, P.; Grote, H.; Grunewald, S.; Guidi, G. M.; Guo, X.; Gupta, A.; Gupta, M. K.; E Gushwa, K.; Gustafson, E. K.; Gustafson, R.; Hacker, J. J.; Hall, B. R.; Hall, E. D.; Hammond, G.; Haney, M.; Hanke, M. M.; Hanks, J.; Hanna, C.; Hannam, M. D.; Hanson, J.; Hardwick, T.; Harms, J.; Harry, G. M.; Harry, I. W.; Hart, M. J.; Hartman, M. T.; Haster, C.-J.; Haughian, K.; Healy, J.; Heidmann, A.; Heintze, M. C.; Heitmann, H.; Hello, P.; Hemming, G.; Hendry, M.; Heng, I. S.; Hennig, J.; Henry, J.; Heptonstall, A. W.; Heurs, M.; Hild, S.; Hoak, D.; Hofman, D.; Holt, K.; E Holz, D.; Hopkins, P.; Hough, J.; Houston, E. A.; Howell, E. J.; Hu, Y. M.; Huerta, E. A.; Huet, D.; Hughey, B.; Husa, S.; Huttner, S. H.; Huynh-Dinh, T.; Indik, N.; Ingram, D. R.; Inta, R.; Isa, H. N.; Isac, J.-M.; Isi, M.; Isogai, T.; Iyer, B. 
R.; Izumi, K.; Jacqmin, T.; Jani, K.; Jaranowski, P.; Jawahar, S.; Jiménez-Forteza, F.; Johnson, W. W.; Jones, D. I.; Jones, R.; Jonker, R. J. G.; Ju, L.; Junker, J.; Kalaghatgi, C. V.; Kalogera, V.; Kandhasamy, S.; Kang, G.; Kanner, J. B.; Karki, S.; Karvinen, K. S.; Kasprzack, M.; Katsavounidis, E.; Katzman, W.; Kaufer, S.; Kaur, T.; Kawabe, K.; Kéfélian, F.; Keitel, D.; Kelley, D. B.; Kennedy, R.; Key, J. S.; Khalili, F. Y.; Khan, I.; Khan, S.; Khan, Z.; Khazanov, E. A.; Kijbunchoo, N.; Kim, Chunglee; Kim, J. C.; Kim, Whansun; Kim, W.; Kim, Y.-M.; Kimbrell, S. J.; King, E. J.; King, P. J.; Kirchhoff, R.; Kissel, J. S.; Klein, B.; Kleybolte, L.; Klimenko, S.; Koch, P.; Koehlenbeck, S. M.; Koley, S.; Kondrashov, V.; Kontos, A.; Korobko, M.; Korth, W. Z.; Kowalska, I.; Kozak, D. B.; Krämer, C.; Kringel, V.; Krishnan, B.; Królak, A.; Kuehn, G.; Kumar, P.; Kumar, R.; Kuo, L.; Kutynia, A.; Lackey, B. D.; Landry, M.; Lang, R. N.; Lange, J.; Lantz, B.; Lanza, R. K.; Lartaux-Vollard, A.; Lasky, P. D.; Laxen, M.; Lazzarini, A.; Lazzaro, C.; Leaci, P.; Leavey, S.; Lebigot, E. O.; Lee, C. H.; Lee, H. K.; Lee, H. M.; Lee, K.; Lehmann, J.; Lenon, A.; Leonardi, M.; Leong, J. R.; Leroy, N.; Letendre, N.; Levin, Y.; Li, T. G. F.; Libson, A.; Littenberg, T. B.; Liu, J.; Lockerbie, N. A.; Lombardi, A. L.; London, L. T.; E Lord, J.; Lorenzini, M.; Loriette, V.; Lormand, M.; Losurdo, G.; Lough, J. D.; Lovelace, G.; Lück, H.; Lundgren, A. P.; Lynch, R.; Ma, Y.; Macfoy, S.; Machenschalk, B.; MacInnis, M.; Macleod, D. M.; Magaña-Sandoval, F.; Majorana, E.; Maksimovic, I.; Malvezzi, V.; Man, N.; Mandic, V.; Mangano, V.; Mansell, G. L.; Manske, M.; Mantovani, M.; Marchesoni, F.; Marion, F.; Márka, S.; Márka, Z.; Markosyan, A. S.; Maros, E.; Martelli, F.; Martellini, L.; Martin, I. W.; Martynov, D. V.; Mason, K.; Masserot, A.; Massinger, T. 
J.; Masso-Reid, M.; Mastrogiovanni, S.; Matichard, F.; Matone, L.; Mavalvala, N.; Mazumder, N.; McCarthy, R.; E McClelland, D.; McCormick, S.; McGrath, C.; McGuire, S. C.; McIntyre, G.; McIver, J.; McManus, D. J.; McRae, T.; McWilliams, S. T.; Meacher, D.; Meadors, G. D.; Meidam, J.; Melatos, A.; Mendell, G.; Mendoza-Gandara, D.; Mercer, R. A.; Merilh, E. L.; Merzougui, M.; Meshkov, S.; Messenger, C.; Messick, C.; Metzdorff, R.; Meyers, P. M.; Mezzani, F.; Miao, H.; Michel, C.; Middleton, H.; E Mikhailov, E.; Milano, L.; Miller, A. L.; Miller, A.; Miller, B. B.; Miller, J.; Millhouse, M.; Minenkov, Y.; Ming, J.; Mirshekari, S.; Mishra, C.; Mitra, S.; Mitrofanov, V. P.; Mitselmakher, G.; Mittleman, R.; Moggi, A.; Mohan, M.; Mohapatra, S. R. P.; Montani, M.; Moore, B. C.; Moore, C. J.; Moraru, D.; Moreno, G.; Morriss, S. R.; Mours, B.; Mow-Lowry, C. M.; Mueller, G.; Muir, A. W.; Mukherjee, Arunava; Mukherjee, D.; Mukherjee, S.; Mukund, N.; Mullavey, A.; Munch, J.; Muniz, E. A. M.; Murray, P. G.; Mytidis, A.; Napier, K.; Nardecchia, I.; Naticchioni, L.; Nelemans, G.; Nelson, T. J. N.; Neri, M.; Nery, M.; Neunzert, A.; Newport, J. M.; Newton, G.; Nguyen, T. T.; Nielsen, A. B.; Nissanke, S.; Nitz, A.; Noack, A.; Nocera, F.; Nolting, D.; Normandin, M. E. N.; Nuttall, L. K.; Oberling, J.; Ochsner, E.; Oelker, E.; Ogin, G. H.; Oh, J. J.; Oh, S. H.; Ohme, F.; Oliver, M.; Oppermann, P.; Oram, Richard J.; O'Reilly, B.; O'Shaughnessy, R.; Ottaway, D. J.; Overmier, H.; Owen, B. J.; E Pace, A.; Page, J.; Pai, A.; Pai, S. A.; Palamos, J. R.; Palashov, O.; Palomba, C.; Pal-Singh, A.; Pan, H.; Pankow, C.; Pannarale, F.; Pant, B. C.; Paoletti, F.; Paoli, A.; Papa, M. A.; Paris, H. R.; Parker, W.; Pascucci, D.; Pasqualetti, A.; Passaquieti, R.; Passuello, D.; Patricelli, B.; Pearlstone, B. L.; Pedraza, M.; Pedurand, R.; Pekowsky, L.; Pele, A.; Penn, S.; Perez, C. J.; Perreca, A.; Perri, L. M.; Pfeiffer, H. P.; Phelps, M.; Piccinni, O. 
J.; Pichot, M.; Piergiovanni, F.; Pierro, V.; Pillant, G.; Pinard, L.; Pinto, I. M.; Pitkin, M.; Poe, M.; Poggiani, R.; Popolizio, P.; Post, A.; Powell, J.; Prasad, J.; Pratt, J. W. W.; Predoi, V.; Prestegard, T.; Prijatelj, M.; Principe, M.; Privitera, S.; Prodi, G. A.; Prokhorov, L. G.; Puncken, O.; Punturo, M.; Puppo, P.; Pürrer, M.; Qi, H.; Qin, J.; Qiu, S.; Quetschke, V.; Quintero, E. A.; Quitzow-James, R.; Raab, F. J.; Rabeling, D. S.; Radkins, H.; Raffai, P.; Raja, S.; Rajan, C.; Rakhmanov, M.; Rapagnani, P.; Raymond, V.; Razzano, M.; Re, V.; Read, J.; Regimbau, T.; Rei, L.; Reid, S.; Reitze, D. H.; Rew, H.; Reyes, S. D.; Rhoades, E.; Ricci, F.; Riles, K.; Rizzo, M.; Robertson, N. A.; Robie, R.; Robinet, F.; Rocchi, A.; Rolland, L.; Rollins, J. G.; Roma, V. J.; Romano, J. D.; Romano, R.; Romie, J. H.; Rosińska, D.; Rowan, S.; Rüdiger, A.; Ruggi, P.; Ryan, K.; Sachdev, S.; Sadecki, T.; Sadeghian, L.; Sakellariadou, M.; Salconi, L.; Saleem, M.; Salemi, F.; Samajdar, A.; Sammut, L.; Sampson, L. M.; Sanchez, E. J.; Sandberg, V.; Sanders, J. R.; Sassolas, B.; Sathyaprakash, B. S.; Saulson, P. R.; Sauter, O.; Savage, R. L.; Sawadsky, A.; Schale, P.; Scheuer, J.; Schmidt, E.; Schmidt, J.; Schmidt, P.; Schnabel, R.; Schofield, R. M. S.; Schönbeck, A.; Schreiber, E.; Schuette, D.; Schutz, B. F.; Schwalbe, S. G.; Scott, J.; Scott, S. M.; Sellers, D.; Sengupta, A. S.; Sentenac, D.; Sequino, V.; Sergeev, A.; Setyawati, Y.; Shaddock, D. A.; Shaffer, T. J.; Shahriar, M. S.; Shapiro, B.; Shawhan, P.; Sheperd, A.; Shoemaker, D. H.; Shoemaker, D. M.; Siellez, K.; Siemens, X.; Sieniawska, M.; Sigg, D.; Silva, A. D.; Singer, A.; Singer, L. P.; Singh, A.; Singh, R.; Singhal, A.; Sintes, A. M.; Slagmolen, B. J. J.; Smith, B.; Smith, J. R.; E Smith, R. J.; Son, E. J.; Sorazu, B.; Sorrentino, F.; Souradeep, T.; Spencer, A. P.; Srivastava, A. K.; Staley, A.; Steinke, M.; Steinlechner, J.; Steinlechner, S.; Steinmeyer, D.; Stephens, B. C.; Stevenson, S. P.; Stone, R.; Strain, K. 
A.; Straniero, N.; Stratta, G.; E Strigin, S.; Sturani, R.; Stuver, A. L.; Summerscales, T. Z.; Sun, L.; Sunil, S.; Sutton, P. J.; Swinkels, B. L.; Szczepańczyk, M. J.; Tacca, M.; Talukder, D.; Tanner, D. B.; Tápai, M.; Taracchini, A.; Taylor, R.; Theeg, T.; Thomas, E. G.; Thomas, M.; Thomas, P.; Thorne, K. A.; Thrane, E.; Tippens, T.; Tiwari, S.; Tiwari, V.; Tokmakov, K. V.; Toland, K.; Tomlinson, C.; Tonelli, M.; Tornasi, Z.; Torrie, C. I.; Töyrä, D.; Travasso, F.; Traylor, G.; Trifirò, D.; Trinastic, J.; Tringali, M. C.; Trozzo, L.; Tse, M.; Tso, R.; Turconi, M.; Tuyenbayev, D.; Ugolini, D.; Unnikrishnan, C. S.; Urban, A. L.; Usman, S. A.; Vahlbruch, H.; Vajente, G.; Valdes, G.; van Bakel, N.; van Beuzekom, M.; van den Brand, J. F. J.; Van Den Broeck, C.; Vander-Hyde, D. C.; van der Schaaf, L.; van Heijningen, J. V.; van Veggel, A. A.; Vardaro, M.; Varma, V.; Vass, S.; Vasúth, M.; Vecchio, A.; Vedovato, G.; Veitch, J.; Veitch, P. J.; Venkateswara, K.; Venugopalan, G.; Verkindt, D.; Vetrano, F.; Viceré, A.; Viets, A. D.; Vinciguerra, S.; Vine, D. J.; Vinet, J.-Y.; Vitale, S.; Vo, T.; Vocca, H.; Vorvick, C.; Voss, D. V.; Vousden, W. D.; Vyatchanin, S. P.; Wade, A. R.; E Wade, L.; Wade, M.; Walker, M.; Wallace, L.; Walsh, S.; Wang, G.; Wang, H.; Wang, M.; Wang, Y.; Ward, R. L.; Warner, J.; Was, M.; Watchi, J.; Weaver, B.; Wei, L.-W.; Weinert, M.; Weinstein, A. J.; Weiss, R.; Wen, L.; Weßels, P.; Westphal, T.; Wette, K.; Whelan, J. T.; Whiting, B. F.; Whittle, C.; Williams, D.; Williams, R. D.; Williamson, A. R.; Willis, J. L.; Willke, B.; Wimmer, M. H.; Winkler, W.; Wipf, C. C.; Wittel, H.; Woan, G.; Woehler, J.; Worden, J.; Wright, J. L.; Wu, D. S.; Wu, G.; Yam, W.; Yamamoto, H.; Yancey, C. C.; Yap, M. J.; Yu, Hang; Yu, Haocun; Yvert, M.; Zadrożny, A.; Zangrando, L.; Zanolin, M.; Zendri, J.-P.; Zevin, M.; Zhang, L.; Zhang, M.; Zhang, T.; Zhang, Y.; Zhao, C.; Zhou, M.; Zhou, Z.; Zhu, S. J.; Zhu, X. 
J.; E Zucker, M.; Zweizig, J.; LIGO Scientific Collaboration; Virgo Collaboration; Boyle, M.; Chu, T.; Hemberger, D.; Hinder, I.; E Kidder, L.; Ossokine, S.; Scheel, M.; Szilagyi, B.; Teukolsky, S.; Vano Vinuales, A.

    2017-05-01

    Parameter estimates of GW150914 were obtained using Bayesian inference, based on three semi-analytic waveform models for binary black hole coalescences. These waveform models differ from each other in their treatment of black hole spins, and all three models make some simplifying assumptions, notably to neglect sub-dominant waveform harmonic modes and orbital eccentricity. Furthermore, while the models are calibrated to agree with waveforms obtained by full numerical solutions of Einstein's equations, any such calibration is accurate only to some non-zero tolerance and is limited by the accuracy of the underlying phenomenology, availability, quality, and parameter-space coverage of numerical simulations. This paper complements the original analyses of GW150914 with an investigation of the effects of possible systematic errors in the waveform models on estimates of its source parameters. To test for systematic errors we repeat the original Bayesian analysis on mock signals from numerical simulations of a series of binary configurations with parameters similar to those found for GW150914. Overall, we find no evidence for a systematic bias relative to the statistical error of the original parameter recovery of GW150914 due to modeling approximations or modeling inaccuracies. However, parameter biases are found to occur for some configurations disfavored by the data of GW150914: for binaries inclined edge-on to the detector over a small range of choices of polarization angles, and also for eccentricities greater than ~0.05. For signals with higher signal-to-noise ratio than GW150914, or in other regions of the binary parameter space (lower masses, larger mass ratios, or higher spins), we expect that systematic errors in current waveform models may impact gravitational-wave measurements, making more accurate models desirable for future observations.

  11. Underestimated Rate of Status Epilepticus according to the Traditional Definition of Status Epilepticus.

    Science.gov (United States)

    Ong, Cheung-Ter; Wong, Yi-Sin; Sung, Sheng-Feng; Wu, Chi-Shun; Hsu, Yung-Chu; Su, Yu-Hsiang; Hung, Ling-Chien

    2015-01-01

    Status epilepticus (SE) is an important neurological emergency. Early diagnosis could improve outcomes. Traditionally, SE is defined as seizures lasting at least 30 min or repeated seizures over 30 min without recovery of consciousness. Some specialists have argued that the duration of seizures qualifying as SE should be shorter, and an operational definition of SE was suggested. It is unclear whether physicians follow the operational definition. The objective of this study was to investigate whether the incidence of SE was underestimated, and to quantify the underestimation rate. This retrospective study evaluates the difference in diagnosis of SE between the operational and traditional definitions of status epilepticus. Between July 1, 2012, and June 30, 2014, patients discharged with ICD-9 codes for epilepsy (345.X) in Chia-Yi Christian Hospital were included in the study. A seizure lasting at least 30 min, or repeated seizures over 30 min without recovery of consciousness, was considered SE according to the traditional definition of SE (TDSE). A seizure lasting between 5 and 30 min was considered SE according to the operational definition of SE (ODSE); such an episode was defined as underestimated status epilepticus (UESE). During the 2-year period, there were 256 episodes of seizures requiring hospital admission. Among the 256 episodes, 99 lasted longer than 5 min, of which 61 (61.6%) persisted over 30 min (TDSE) and 38 (38.4%) continued for between 5 and 30 min (UESE). Of the 38 episodes lasting 5 to 30 min, only one had previously been discharged as SE (ICD-9-CM 345.3). Conclusion: 37.4% of SE episodes were underestimated. Continuing education regarding the diagnosis and treatment of epilepsy is important for physicians.
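    The reported percentages follow directly from the counts in the abstract; a quick arithmetic check (treating the one correctly coded episode as excluded from the underestimated share):

```python
total_over_5min = 99   # seizure episodes lasting at least 5 min
tdse = 61              # lasting over 30 min (traditional definition)
uese = 38              # lasting 5-30 min (operational definition only)
previously_coded = 1   # UESE episodes already discharged as SE

assert tdse + uese == total_over_5min
uese_share = round(100 * uese / total_over_5min, 1)            # 38.4%
underestimation_rate = round(
    100 * (uese - previously_coded) / total_over_5min, 1)      # 37.4%
```

    This reproduces both figures in the abstract: 38.4% of prolonged episodes met only the operational definition, and 37.4% went uncoded as SE.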

  12. Thermal sensation models: a systematic comparison.

    Science.gov (United States)

    Koelblen, B; Psikuta, A; Bogdan, A; Annaheim, S; Rossi, R M

    2017-05-01

    Thermal sensation models, capable of predicting human perception of thermal surroundings, are commonly used to assess given indoor conditions. These models differ in many aspects, such as the number and type of input conditions, the range of conditions in which the models can be applied, and the complexity of their equations. Moreover, the models are associated with various thermal sensation scales. In this study, a systematic comparison of seven existing thermal sensation models has been performed for exposures covering various air temperatures, clothing thermal insulation values, and metabolic rates, after a careful investigation of the models' range of applicability. Thermo-physiological data needed as input for some of the models were obtained from a mathematical model of human physiological responses. The comparison showed differences between the models' predictions for the analyzed conditions, mostly larger than typical intersubject differences in votes. It can therefore be concluded that the choice of model strongly influences the assessment of indoor spaces. The issue of comparing different thermal sensation scales is also discussed. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. A Systematic Identification Method for Thermodynamic Property Modelling

    DEFF Research Database (Denmark)

    Ana Perederic, Olivia; Cunico, Larissa; Sarup, Bent

    2017-01-01

    In this work, a systematic identification method for thermodynamic property modelling is proposed. The aim of the method is to improve the quality of phase equilibria prediction by group-contribution-based property prediction models. The method is applied to lipid systems where the Original UNIFAC...... model is used. Estimating the interaction parameters from VLE data alone with the proposed method yielded better phase equilibria predictions for both VLE and SLE. The results were validated and compared with the original model performance...

  14. FROM ATOMISTIC TO SYSTEMATIC COARSE-GRAINED MODELS FOR MOLECULAR SYSTEMS

    KAUST Repository

    Harmandaris, Vagelis; Kalligiannaki, Evangelia; Katsoulakis, Markos; Plechac, Petr

    2017-01-01

    The development of systematic (rigorous) coarse-grained mesoscopic models for complex molecular systems is an intense research area. Here we first give an overview of methods for obtaining optimal parametrized coarse-grained models, starting from

  15. Is dream recall underestimated by retrospective measures and enhanced by keeping a logbook? A review.

    Science.gov (United States)

    Aspy, Denholm J; Delfabbro, Paul; Proeve, Michael

    2015-05-01

    There are two methods commonly used to measure dream recall in the home setting. The retrospective method involves asking participants to estimate their dream recall in response to a single question, and the logbook method involves keeping a daily record of one's dream recall. Until recently, the implicit assumption has been that these measures are largely equivalent. However, this is challenged by the tendency for retrospective measures to yield significantly lower dream recall rates than logbooks. A common explanation for this is that retrospective measures underestimate dream recall. Another is that keeping a logbook enhances it. If retrospective measures underestimate dream recall and logbooks enhance it, both are unlikely to reflect typical dream recall rates and may be confounded with variables associated with the underestimation and enhancement effects. To date, this issue has received insufficient attention. The present review addresses this gap in the literature. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Breakdown of hot-spot model in determining convective amplification in large homogeneous systems

    International Nuclear Information System (INIS)

    Mounaix, Philippe; Divol, Laurent

    2004-01-01

    Convective amplification in large homogeneous systems is studied, both analytically and numerically, in the case of a linear diffraction-free stochastic amplifier. Overall amplification does not result from successive amplifications in small-scale, high-intensity hot spots, but from a single amplification in a delocalized mode of the driver field spreading over the whole interaction length. For this model, the hot-spot approach is found to systematically underestimate the gain factor by more than 50%

  17. An Accurate Fire-Spread Algorithm in the Weather Research and Forecasting Model Using the Level-Set Method

    Science.gov (United States)

    Muñoz-Esparza, Domingo; Kosović, Branko; Jiménez, Pedro A.; Coen, Janice L.

    2018-04-01

    The level-set method is typically used to track and propagate the fire perimeter in wildland fire models. Herein, a high-order level-set method using a fifth-order WENO scheme for the discretization of spatial derivatives and third-order explicit Runge-Kutta temporal integration is implemented within the Weather Research and Forecasting model wildland fire physics package, WRF-Fire. The algorithm includes the solution of an additional partial differential equation for level-set reinitialization. The accuracy of the fire-front shape and rate of spread in uncoupled simulations is systematically analyzed. It is demonstrated that the common implementation used by level-set-based wildfire models yields rate-of-spread errors in the range 10-35% for typical grid sizes (Δ = 12.5-100 m) and considerably underestimates fire area. Moreover, the amplitude of fire-front gradients in the presence of explicitly resolved turbulence features is systematically underestimated. In contrast, the new WRF-Fire algorithm results in rate-of-spread errors that are lower than 1% and that become nearly grid independent. Also, the underestimation of fire area at the sharp transition between the fire front and the lateral flanks is found to be reduced by a factor of ≈7. A hybrid-order level-set method with locally reduced artificial viscosity is proposed, which substantially alleviates the computational cost associated with high-order discretizations while preserving accuracy. Simulations of the Last Chance wildfire demonstrate additional benefits of high-order accurate level-set algorithms when dealing with complex fuel heterogeneities, enabling propagation across narrow fuel gaps and more accurate fire backing over the lee side of no-fuel clusters.
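    The level-set formulation behind this discussion advances the fire front by solving φ_t + R|∇φ| = 0, where R is the rate of spread. A minimal 1-D sketch using a first-order Godunov upwind scheme (the kind of low-order baseline the paper improves on with WENO/Runge-Kutta; grid spacing, spread rate, and duration are illustrative values, not WRF-Fire settings):

```python
import numpy as np

def propagate_front(phi, rate_of_spread, dx, dt, steps):
    """Advance a 1-D level set phi under phi_t + R*|grad phi| = 0,
    first-order Godunov upwind for R > 0."""
    phi = phi.copy()
    for _ in range(steps):
        # one-sided differences; boundary cells get a zero difference
        dminus = np.diff(phi, prepend=phi[0]) / dx
        dplus = np.diff(phi, append=phi[-1]) / dx
        # Godunov |grad phi| for an outward-moving front (R > 0)
        grad = np.sqrt(np.maximum(np.maximum(dminus, 0.0) ** 2,
                                  np.minimum(dplus, 0.0) ** 2))
        phi -= dt * rate_of_spread * grad
    return phi

x = np.linspace(0.0, 1000.0, 201)   # 5 m grid
phi0 = x - 100.0                     # signed distance: front starts at x = 100 m
phi = propagate_front(phi0, rate_of_spread=0.5, dx=5.0, dt=1.0, steps=600)
front = x[np.argmin(np.abs(phi))]    # zero level set: ~100 + 0.5 * 600 = 400 m
```

    For a straight front and uniform spread rate this scheme is essentially exact; the rate-of-spread and fire-area errors the paper quantifies arise on curved, heterogeneous fronts, where high-order discretizations pay off.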

  18. Fishermen's underestimation of risk

    DEFF Research Database (Denmark)

    Knudsen, Fabienne; Grøn, Sisse

    2009-01-01

    Fishermen's underestimation of risk. Background: In order to understand the effect of footwear and flooring on slips, trips and falls, the first author visited 4 fishing boats. An important spinoff of the study was to get an in situ insight into the way fishermen perceive risk. Objectives: The presentation will analyse fishermen's risk perception, its causes and consequences. Methods: The first author participated in 3 voyages at sea on fishing vessels (from 1 to 10 days each and from 2 to 4 crewmembers) where interviews and participant observation were undertaken. A 4th fishing boat was visited... ...to stress the positive potential of risk. This can be explained by several interrelated factors, such as the nature of fishing, itself a risk-based enterprise; a life-form promoting independency and identification with the enterprise's pecuniary priorities; and working conditions upholding a feeling......

  19. Bayesian Network Models in Cyber Security: A Systematic Review

    OpenAIRE

    Chockalingam, S.; Pieters, W.; Herdeiro Teixeira, A.M.; van Gelder, P.H.A.J.M.; Lipmaa, Helger; Mitrokotsa, Aikaterini; Matulevicius, Raimundas

    2017-01-01

    Bayesian Networks (BNs) are an increasingly popular modelling technique in cyber security, especially due to their capability to overcome data limitations. This is also reflected in the growth of BN model development in cyber security. However, a comprehensive comparison and analysis of these models is missing. In this paper, we conduct a systematic review of the scientific literature and identify 17 standard BN models in cyber security. We analyse these models based on 9 different criteri...

  20. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

    The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines, including the social, political, natural, and physical sciences, and engineering. There is, however, a wide range of definitions which give rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  1. Underestimation of soil carbon stocks by Yasso07, Q, and CENTURY models in boreal forest linked to overlooking site fertility

    Science.gov (United States)

    Ťupek, Boris; Ortiz, Carina; Hashimoto, Shoji; Stendahl, Johan; Dahlgren, Jonas; Karltun, Erik; Lehtonen, Aleksi

    2016-04-01

    The soil organic carbon (SOC) stock changes estimated by most process-based soil carbon models (e.g. Yasso07, Q and CENTURY), needed for reporting changes in soil carbon amounts to the United Nations Framework Convention on Climate Change (UNFCCC) and for mitigation of anthropogenic CO2 emissions by soil carbon management, can be biased if, across a large mosaic of environments, the models are missing a key factor driving SOC sequestration. To our knowledge, soil nutrient status as a missing driver of these models was not tested in previous studies, even though it is known that models fail to reconstruct the spatial variation of SOC and that soil nutrient status drives ecosystem carbon use efficiency and soil carbon sequestration. We evaluated SOC stock estimates of the Yasso07, Q and CENTURY process-based models against field data from the Swedish Forest Soil National Inventories (3230 samples), organized by a recursive partitioning method (RPART) into distinct soil groups whose underlying SOC stock development is linked to physicochemical conditions. These models worked for most soils with approximately average SOC stocks, but could not reproduce the higher measured SOC stocks in our application. The Yasso07 and Q models, which used only climate and litterfall input data and ignored soil properties, generally agreed with two thirds of the measurements. However, when compared with measurements grouped along the gradient of soil nutrient status, we found that the models underestimated SOC stocks for the Swedish boreal forest soils with higher site fertility. Accounting for soil texture (clay, silt, and sand content) and structure (bulk density) in the CENTURY model showed no improvement in carbon stock estimates, as CENTURY deviated in a similar manner. We highlighted the mechanisms by which the models deviate from the measurements and ways of considering soil nutrient status in further model development. Our analysis suggested that the models indeed lack other predominant drivers of SOC stabilization
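    Recursive partitioning (RPART) groups the inventory plots by repeatedly choosing, at each node, the predictor threshold that most reduces within-group variance of the response. A minimal sketch of that single-split step (a pure-Python stand-in for the R routine, not the authors' code; the fertility and SOC numbers are invented):

```python
def best_split(x, y):
    """One step of CART/RPART regression partitioning: find the threshold
    on predictor x that minimizes the summed squared error around the
    per-group mean of y."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)

    pairs = sorted(zip(x, y))
    best_thr, best_cost = None, float("inf")
    for i in range(1, len(pairs)):
        if pairs[i - 1][0] == pairs[i][0]:
            continue  # no valid split between equal predictor values
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2.0
        left = [v for k, v in pairs if k <= thr]
        right = [v for k, v in pairs if k > thr]
        cost = sse(left) + sse(right)
        if cost < best_cost:
            best_thr, best_cost = thr, cost
    return best_thr

# Invented fertility index vs. SOC stock: the split recovers the
# low-fertility / high-fertility grouping.
fertility = [1, 2, 3, 10, 11, 12]
soc_stock = [5, 6, 5, 20, 21, 19]
threshold = best_split(fertility, soc_stock)   # 6.5
```

    Applying this recursively to the left and right partitions yields the tree of soil groups against which the model estimates were compared.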

  2. Sap flow is Underestimated by Thermal Dissipation Sensors due to Alterations of Wood Anatomy

    Science.gov (United States)

    Marañón-Jiménez, S.; Wiedemann, A.; van den Bulcke, J.; Cuntz, M.; Rebmann, C.; Steppe, K.

    2014-12-01

    The thermal dissipation (TD) technique is one of the most commonly adopted methods for sap flow measurements. However, underestimations of up to 60% of tree transpiration have been reported with this technique, and the causes are not known with certainty. The insertion of TD sensors within the stem damages the wood tissue and triggers subsequent healing reactions, changing wood anatomy and likely the sap flow path. However, the anatomical changes in response to the insertion of sap flow sensors, and their effects on the measured flow, have not yet been assessed. In this study, we investigated the alteration of vessel anatomy in wounds formed around TD sensors. Our main objectives were to elucidate the anatomical causes of sap flow underestimation for ring-porous and diffuse-porous species and to relate these changes to sap flow underestimations. Successive sets of TD probes were installed early, mid and late in the growing season in Fagus sylvatica (diffuse-porous) and Quercus petraea (ring-porous) trees. The trees were felled after the growing season, and additional sets of sensors were installed in the felled stems, where presumably no healing reaction occurs. The wood tissue surrounding each sensor was then excised and analysed by X-ray computed microtomography (X-ray micro CT). This technique allowed the quantification of vessel anatomical characteristics and the reconstruction of the 3-D internal microstructure of the xylem vessels, so that the extension and shape of the altered area could be determined. Gels and tyloses clogged the conductive vessels around the sensors in both beech and oak. The extension of the affected area was larger for beech, although the anatomical changes led to similar sap flow underestimations in both species. The larger vessel size in oak may explain this result, implying a larger sap flow underestimation per area of affected conductive tissue. The wound healing reaction likely occurred within the first weeks after sensor installation, which

  3. Underestimation of Microearthquake Size by the Magnitude Scale of the Japan Meteorological Agency: Influence on Earthquake Statistics

    Science.gov (United States)

    Uchide, Takahiko; Imanishi, Kazutoshi

    2018-01-01

    Magnitude scales based on the amplitude of seismic waves, including the Japan Meteorological Agency magnitude scale (Mj), are commonly used in routine processing. The moment magnitude scale (Mw), however, is more physics-based and is able to evaluate any type and size of earthquake. This paper addresses the relation between Mj and Mw for microearthquakes. The relative moment magnitudes among earthquakes are well constrained by multiple spectral ratio analyses. The results for events in the Fukushima Hamadori and northern Ibaraki prefecture areas of Japan imply that Mj is significantly and systematically smaller than Mw for microearthquakes. The Mj-Mw curve has slopes of 1/2 and 1 for small and large values of Mj, respectively; for example, Mj = 1.0 corresponds to Mw = 2.0. A simple numerical simulation implies that this is due to anelastic attenuation and recording with a finite sampling interval. The underestimation affects earthquake statistics. The completeness magnitude, Mc, below which the magnitude-frequency distribution deviates from the Gutenberg-Richter law, is effectively lower for Mw than for Mj once the systematic difference between Mj and Mw is taken into account. The b values of the Gutenberg-Richter law are larger for Mw than for Mj. As the b values for Mj and Mw are well correlated, qualitative arguments based on b values are not affected. However, while the estimated b values for Mj are below 1.5, those for Mw often exceed 1.5. This may affect the physical interpretation of the seismicity.
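
    The reported piecewise scaling can be sketched as a toy conversion function. The two slopes and the Mj = 1.0 ↔ Mw = 2.0 anchor come from the abstract; the crossover magnitude below is a hypothetical placeholder, and the real Mj-Mw curve bends smoothly rather than at a single knot.

    ```python
    def mw_from_mj(mj, crossover=3.0):
        """Toy piecewise Mj -> Mw conversion. The slopes dMj/dMw = 1/2
        (small events) and 1 (large events) follow the abstract; the
        crossover value is a hypothetical placeholder, not a result of
        the study."""
        if mj < crossover:
            # Small events: anchored so that Mj = 1.0 maps to Mw = 2.0.
            return 2.0 * mj
        # Large events: unit slope, joined continuously at the crossover.
        return mj + crossover
    ```

    Under these assumptions, a magnitude-1 microearthquake corresponds to Mw 2.0, matching the example quoted in the abstract.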

  4. Hospitality and Tourism Online Review Research: A Systematic Analysis and Heuristic-Systematic Model

    Directory of Open Access Journals (Sweden)

    Sunyoung Hlee

    2018-04-01

    With the tremendous growth and potential of online consumer reviews, online reviews of hospitality and tourism now play a significant role in consumer attitudes and buying behavior. This study reviewed and analyzed hospitality- and tourism-related articles published in academic journals, using a systematic approach to analyze 55 research articles published between January 2008 and December 2017. The study presents a brief synthesis of the research by investigating content-related characteristics of hospitality and tourism online reviews (HTORs) in different market segments. Two research questions were addressed. Building upon our literature analysis, we used the heuristic-systematic model (HSM) to summarize and classify the characteristics affecting consumer perception in previous HTOR studies. We believe that the framework helps researchers to identify research topics in the extended HTOR literature and to point out possible directions for future studies.

  5. Dynamic Measurement Modeling: Using Nonlinear Growth Models to Estimate Student Learning Capacity

    Science.gov (United States)

    Dumas, Denis G.; McNeish, Daniel M.

    2017-01-01

    Single-timepoint educational measurement practices are capable of assessing student ability at the time of testing but are not designed to be informative of student capacity for developing in any particular academic domain, despite commonly being used in such a manner. For this reason, such measurement practice systematically underestimates the…

  6. Black carbon in the Arctic: the underestimated role of gas flaring and residential combustion emissions

    Directory of Open Access Journals (Sweden)

    A. Stohl

    2013-09-01

    annual mean Arctic BC surface concentrations due to residential combustion by 68% when using daily emissions. A large part (93%) of this systematic increase can be captured also when using monthly emissions; the increase is compensated by a decreased BC burden at lower latitudes. In a comparison with BC measurements at six Arctic stations, we find that using daily-varying residential combustion emissions and introducing gas flaring emissions leads to large improvements in the simulated Arctic BC, both in terms of mean concentration levels and simulated seasonality. Case studies based on BC and carbon monoxide (CO) measurements from the Zeppelin observatory appear to confirm flaring as an important BC source that can produce pollution plumes in the Arctic with a high BC/CO enhancement ratio, as expected for this source type. BC measurements taken during a research ship cruise in the White, Barents and Kara seas, north of the region with strong flaring emissions, reveal very high concentrations of the order of 200–400 ng m−3. The model underestimates these concentrations substantially, which indicates that the flaring emissions (and probably also other emissions in northern Siberia) are rather under- than overestimated in our emission data set. Our results suggest that the opposite model biases in simulated BC concentrations in the Arctic and in the mid-latitudes are caused not by vertical transport that is too strong or scavenging rates that are too low, as suggested in a recent review article (Bond et al., Bounding the role of black carbon in the climate system: a scientific assessment, J. Geophys. Res., 2013), but by missing emission sources and the lacking time resolution of the emission data.

  7. Systematic experimental based modeling of a rotary piezoelectric ultrasonic motor

    DEFF Research Database (Denmark)

    Mojallali, Hamed; Amini, Rouzbeh; Izadi-Zamanabadi, Roozbeh

    2007-01-01

    In this paper, a new method for equivalent circuit modeling of a traveling wave ultrasonic motor is presented. The free stator of the motor is modeled by an equivalent circuit containing complex circuit elements. A systematic approach for identifying the elements of the equivalent circuit is suggested.

  8. Chronic rhinosinusitis in Europe - an underestimated disease. A GA(2) LEN study

    DEFF Research Database (Denmark)

    Hastan, D; Fokkens, W J; Bachert, C

    2011-01-01

    , Zuberbier T, Jarvis D, Burney P. Chronic rhinosinusitis in Europe - an underestimated disease. A GA(2) LEN study. Allergy 2011; 66: 1216-1223. ABSTRACT: Background:  Chronic rhinosinusitis (CRS) is a common health problem, with significant medical costs and impact on general health. Even so, prevalence...

  9. Is dream recall underestimated by retrospective measures and enhanced by keeping a logbook? An empirical investigation.

    Science.gov (United States)

    Aspy, Denholm J

    2016-05-01

    In a recent review, Aspy, Delfabbro, and Proeve (2015) highlighted the tendency for retrospective measures of dream recall to yield substantially lower recall rates than logbook measures, a phenomenon they termed the retrospective-logbook disparity. One explanation for this phenomenon is that retrospective measures underestimate true dream recall. Another is that keeping a logbook tends to enhance dream recall. The present study provides a thorough empirical investigation of the retrospective-logbook disparity using a range of retrospective and logbook measures and three different types of logbook. Retrospective-logbook disparities were correlated with a range of variables theoretically related to the retrospective underestimation effect, and retrospective-logbook disparities were greater among participants who reported improved dream recall during the logbook period. These findings indicate that dream recall is underestimated by retrospective measures and enhanced by keeping a logbook. Recommendations for the use of retrospective and logbook measures of dream recall are provided. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Simulation Modelling in Healthcare: An Umbrella Review of Systematic Literature Reviews.

    Science.gov (United States)

    Salleh, Syed; Thokala, Praveen; Brennan, Alan; Hughes, Ruby; Booth, Andrew

    2017-09-01

    Numerous studies examine simulation modelling in healthcare. These studies present a bewildering array of simulation techniques and applications, making it challenging to characterise the literature. The aim of this paper is to provide an overview of the level of activity of simulation modelling in healthcare and its key themes. We performed an umbrella review of systematic literature reviews of simulation modelling in healthcare. Searches were conducted of academic databases (JSTOR, Scopus, PubMed, IEEE, SAGE, ACM, Wiley Online Library, ScienceDirect) and grey literature sources, enhanced by citation searches. Articles were included if they performed a systematic review of simulation modelling techniques in healthcare. After quality assessment of all included articles, data were extracted on the number of studies included in each review, types of applications, techniques used for simulation modelling, data sources and simulation software. The search strategy yielded a total of 117 potential articles. Following sifting, 37 heterogeneous reviews were included. Most reviews achieved a moderate quality rating on a modified AMSTAR (A Measurement Tool to Assess Systematic Reviews) checklist. All of the review articles described the types of applications used for simulation modelling; 15 reviews described techniques used for simulation modelling; three reviews described data sources used for simulation modelling; and six reviews described software used for simulation modelling. The remaining reviews either did not report or did not provide enough detail for the data to be extracted. Simulation modelling techniques have been used for a wide range of applications in healthcare, with a variety of software tools and data sources. The number of reviews published in recent years suggests an increased interest in simulation modelling in healthcare.

  11. Quantification of Underestimation of Physical Activity During Cycling to School When Using Accelerometry

    DEFF Research Database (Denmark)

    Tarp, Jakob; Andersen, Lars B; Østergaard, Lars

    2015-01-01

    Background: Cycling to and from school is an important source of physical activity (PA) in youth, but it is not captured by the dominant objective method used to quantify PA. The aim of this study was to quantify the underestimation of objectively assessed PA caused by cycling when using accelerometry. Methods: Participants were 20 children aged 11-14 years from a randomized controlled trial performed in 2011. Physical activity was assessed by accelerometry, with the addition of heart rate monitoring during cycling to school. A global positioning system (GPS) was used to identify periods of cycling to school. Results: Mean (95% CI) minutes of moderate-to-vigorous physical activity (MVPA) during round-trip commutes was 10.8 (7.1-16.6). Each kilometre of cycling meant an underestimation of 9314 (95% CI: 7719-11238) counts and 2.7 (95% CI: 2.1-3.5) minutes of MVPA. Adjusting for cycling to school
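
    The per-kilometre figures above imply a simple additive correction once GPS has identified the cycling distance. The helper below is an illustrative sketch using the reported point estimates; the function name and interface are invented for this example.

    ```python
    COUNTS_PER_KM = 9314    # reported point estimate (95% CI: 7719-11238)
    MVPA_MIN_PER_KM = 2.7   # reported point estimate (95% CI: 2.1-3.5)

    def adjust_for_cycling(counts, mvpa_min, cycling_km):
        """Add back the accelerometer counts and MVPA minutes missed
        during GPS-identified cycling (illustrative sketch only)."""
        return (counts + COUNTS_PER_KM * cycling_km,
                mvpa_min + MVPA_MIN_PER_KM * cycling_km)
    ```

    For example, a 2 km round trip would add roughly 18 600 counts and 5.4 minutes of MVPA to the accelerometer-only totals.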

  12. Dual-energy X-ray absorptiometry underestimates in vivo lumbar spine bone mineral density in overweight rats.

    Science.gov (United States)

    Cherif, Rim; Vico, Laurence; Laroche, Norbert; Sakly, Mohsen; Attia, Nebil; Lavet, Cedric

    2018-01-01

    Dual-energy X-ray absorptiometry (DXA) is currently the most widely used technique for measuring areal bone mineral density (BMD). However, several studies have shown inaccuracy, with either overestimation or underestimation of DXA BMD measurements in overweight or obese individuals. We designed an overweight rat model based on junk food to compare the effect of obesity on in vivo and ex vivo BMD and bone mineral content measurements. Thirty-eight 6-month-old male rats were given a chow diet (n = 13) or a high-fat, high-sucrose diet (n = 25), with calorie intake kept the same in the two groups, for 19 weeks. L1 BMD, L1 bone mineral content, amount of abdominal fat, and amount of abdominal lean tissue were obtained from an in vivo DXA scan. Ex vivo L1 BMD was also measured. A significant difference between in vivo and ex vivo DXA BMD measurements was found and was related to body weight, perirenal fat, abdominal fat, and abdominal lean tissue. Multiple linear regression analysis showed that body weight, abdominal fat, and abdominal lean tissue were independently related to ex vivo BMD. DXA underestimated lumbar in vivo BMD in overweight rats, and this measurement error is related to body weight and abdominal fat. Therefore, caution must be used when interpreting BMD in overweight and obese individuals.

  13. A systematic fault tree analysis based on multi-level flow modeling

    International Nuclear Information System (INIS)

    Gofuku, Akio; Ohara, Ai

    2010-01-01

    Fault tree analysis (FTA) is widely applied for the safety evaluation of large-scale, mission-critical systems. However, because the effectiveness of FTA strongly depends on the skill of the analysts, problems arise with (1) education and training, (2) unreliable quality, (3) the need for expert knowledge, and (4) updating FTA results after reconstruction of a target system. To eliminate these problems, many techniques that systematize FTA activities using computer technologies have been proposed. However, these techniques use only structural information about a target system and do not use functional information, which is one of the important properties of an artifact. The principle of FTA is to comprehensively trace cause-effect relations from a top undesirable effect back to anomalous causes. This tracing is similar to the causality estimation technique that the authors previously proposed to find plausible counteractions to prevent or mitigate undesirable plant behaviour, based on models built with a functional modelling technique, Multilevel Flow Modeling (MFM). The authors have extended this systematic technique to construct fault trees (FTs). This paper presents an algorithm for the systematic construction of FTs based on MFM models and demonstrates the applicability of the extended technique with the FT construction results for a cooling plant for nitric acid. (author)
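
    The backward tracing of cause-effect relations that underlies FT construction can be illustrated with a toy sketch. The influence graph below is invented for this example and stands in for the relations an MFM model would supply; it is not the authors' algorithm.

    ```python
    # Hypothetical cause-effect relations (a stand-in for those an MFM
    # model of the plant would provide).
    CAUSES = {
        "loss_of_cooling": ["pump_stopped", "valve_closed"],
        "pump_stopped": ["power_loss", "pump_failure"],
        "valve_closed": ["operator_error"],
    }

    def build_fault_tree(event, causes=CAUSES):
        """Expand an undesirable event into (event, subtrees) by tracing
        cause-effect relations backwards; events without listed causes
        become basic events (leaves of the fault tree)."""
        return (event,
                [build_fault_tree(c, causes) for c in causes.get(event, [])])
    ```

    Calling `build_fault_tree("loss_of_cooling")` yields a nested tuple whose leaves are the basic events, mirroring how comprehensive backward tracing produces a fault tree from a top undesirable effect.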

  14. Application of blocking diagnosis methods to general circulation models. Part II: model simulations

    Energy Technology Data Exchange (ETDEWEB)

    Barriopedro, D.; Trigo, R.M. [Universidade de Lisboa, CGUL-IDL, Faculdade de Ciencias, Lisbon (Portugal); Garcia-Herrera, R.; Gonzalez-Rouco, J.F. [Universidad Complutense de Madrid, Departamento de Fisica de la Tierra II, Facultad de C.C. Fisicas, Madrid (Spain)

    2010-12-15

    A previously defined automatic method is applied to reanalysis and present-day (1950-1989) forced simulations of the ECHO-G model in order to assess its performance in reproducing atmospheric blocking in the Northern Hemisphere. Unlike previous methodologies, critical parameters and thresholds to estimate blocking occurrence in the model are not calibrated against an observed reference, but objectively derived from the simulated climatology. The choice of model-dependent parameters allows for an objective definition of blocking and corrects for some intrinsic model bias, the difference between model and observed thresholds providing a measure of systematic errors in the model. The model captures reasonably well the main blocking features (location, amplitude, annual cycle and persistence) found in observations, but reveals a relative southward shift of Eurasian blocks and an overall underestimation of blocking activity, especially over the Euro-Atlantic sector. The blocking underestimation mostly arises from the model's inability to generate long persistent blocks with the observed frequency. This error is mainly attributed to a bias in the basic state. The bias pattern consists of excessive zonal winds over the Euro-Atlantic sector and a southward shift at the exit zone of the jet stream extending into the Eurasian continent, which are more prominent in the cold and warm seasons and account for much of the Euro-Atlantic and Eurasian blocking errors, respectively. It is shown that other widely used blocking indices or empirical observational thresholds may not give a proper account of the lack of realism in the model as compared with the proposed method.
This suggests that in addition to blocking changes that could be ascribed to natural variability processes or climate change signals in the simulated climate, attention should be paid to significant departures in the diagnosis of phenomena that can also arise from an inappropriate adaptation of detection methods to the climate of the

  15. Absorbing systematic effects to obtain a better background model in a search for new physics

    International Nuclear Information System (INIS)

    Caron, S; Horner, S; Sundermann, J E; Cowan, G; Gross, E

    2009-01-01

    This paper presents a novel approach to estimate the Standard Model backgrounds based on modifying Monte Carlo predictions within their systematic uncertainties. The improved background model is obtained by altering the original predictions with successively more complex correction functions in signal-free control selections. Statistical tests indicate when sufficient compatibility with data is reached. In this way, systematic effects are absorbed into the new background model. The same correction is then applied on the Monte Carlo prediction in the signal region. Comparing this method to other background estimation techniques shows improvements with respect to statistical and systematic uncertainties. The proposed method can also be applied in other fields beyond high energy physics.
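
    A minimal sketch of the idea, under simplifying assumptions: fit polynomial corrections of increasing complexity to the data/MC ratio in a signal-free control region, stop at the first one that is statistically compatible with the data (here via a crude reduced chi-square cut rather than the paper's full statistical tests), and apply that correction to the signal-region prediction. All names and thresholds are illustrative.

    ```python
    import numpy as np

    def fit_correction(x, data, mc, max_degree=4, max_red_chi2=1.5):
        """Fit successively more complex polynomial corrections to the
        data/MC ratio in a signal-free control region; return the first
        one compatible with the data (crude reduced chi-square cut)."""
        ratio = data / mc
        ratio_err = np.sqrt(data) / mc        # assume Poisson errors on data
        for degree in range(max_degree + 1):
            coeffs = np.polyfit(x, ratio, degree, w=1.0 / ratio_err)
            resid = (ratio - np.polyval(coeffs, x)) / ratio_err
            ndof = len(x) - (degree + 1)
            if ndof > 0 and np.sum(resid ** 2) / ndof < max_red_chi2:
                return coeffs                  # sufficient compatibility
        return coeffs                          # fall back to most complex fit

    def apply_correction(coeffs, x_signal, mc_signal):
        """Apply the control-region correction to the signal-region MC."""
        return mc_signal * np.polyval(coeffs, x_signal)
    ```

    If the MC mismodels a smooth trend, a low-degree correction absorbs it in the control region, and the same polynomial then rescales the signal-region prediction; the systematic effect is thereby absorbed into the background model.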

  16. Systematic model building with flavor symmetries

    Energy Technology Data Exchange (ETDEWEB)

    Plentinger, Florian

    2009-12-19

    The observation of neutrino masses and lepton mixing has highlighted the incompleteness of the Standard Model of particle physics. In conjunction with this discovery, new questions arise: why are the neutrino masses so small, what form does their mass hierarchy take, why is the mixing in the quark and lepton sectors so different, and what is the structure of the Higgs sector? In order to address these issues and to predict future experimental results, different approaches are considered. One particularly interesting possibility is Grand Unified Theories (GUTs) such as SU(5) or SO(10). GUTs are vertical symmetries, since they unify the SM particles into multiplets and usually predict new particles, which can naturally explain the smallness of the neutrino masses via the seesaw mechanism. On the other hand, horizontal symmetries, i.e., flavor symmetries, acting on the generation space of the SM particles, are also promising. They can serve as an explanation for the quark and lepton mass hierarchies as well as for the different mixings in the quark and lepton sectors. In addition, flavor symmetries are significantly involved in the Higgs sector and predict certain forms of mass matrices. This high predictivity makes GUTs and flavor symmetries interesting for both theorists and experimentalists. These extensions of the SM can also be combined with theories such as supersymmetry or extra dimensions. In addition, they usually have implications for the observed matter-antimatter asymmetry of the universe or can provide a dark matter candidate. In general, they also predict the lepton flavor violating rare decays μ → eγ, τ → μγ, and τ → eγ, which are strongly bounded by experiments but might be observed in the future. In this thesis, we combine all of these approaches, i.e., GUTs, the seesaw mechanism and flavor symmetries. Moreover, our aim is to develop and perform a systematic model building approach with flavor symmetries and to search for phenomenological

  18. A multi-model evaluation of aerosols over South Asia: common problems and possible causes

    Science.gov (United States)

    Pan, X.; Chin, M.; Gautam, R.; Bian, H.; Kim, D.; Colarco, P. R.; Diehl, T. L.; Takemura, T.; Pozzoli, L.; Tsigaridis, K.; Bauer, S.; Bellouin, N.

    2015-05-01

    Atmospheric pollution over South Asia attracts special attention due to its effects on regional climate, the water cycle and human health. These effects are potentially growing owing to rising trends in anthropogenic aerosol emissions. In this study, the spatio-temporal aerosol distributions over South Asia from seven global aerosol models are evaluated against aerosol retrievals from NASA satellite sensors and ground-based measurements for the period 2000-2007. Overall, substantial underestimations of aerosol loading over South Asia are found systematically in most model simulations. Averaged over the entire region, the annual mean aerosol optical depth (AOD) is underestimated by 15 to 44% across models compared to MISR (Multi-angle Imaging SpectroRadiometer), which is the lowest bound among the various satellite AOD retrievals (from MISR, SeaWiFS (Sea-Viewing Wide Field-of-View Sensor), and MODIS (Moderate Resolution Imaging Spectroradiometer) Aqua and Terra). In particular, during the post-monsoon and wintertime periods (i.e., October-January), when agricultural waste burning and anthropogenic emissions dominate, models fail to capture AOD and aerosol absorption optical depth (AAOD) over the Indo-Gangetic Plain (IGP) compared to ground-based Aerosol Robotic Network (AERONET) sunphotometer measurements. The underestimations of aerosol loading in models generally occur in the lower troposphere (below 2 km), based on comparisons of aerosol extinction profiles calculated by the models with those from Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) data. Furthermore, surface concentrations of all aerosol components (sulfate, nitrate, organic aerosol (OA) and black carbon (BC)) from the models are found to be much lower than in situ measurements in winter. Several possible causes for these common problems of underestimating aerosols in models during the post-monsoon and wintertime periods are identified: the aerosol hygroscopic growth and formation of

  19. Impact bias or underestimation? Outcome specifications predict the direction of affective forecasting errors.

    Science.gov (United States)

    Buechel, Eva C; Zhang, Jiao; Morewedge, Carey K

    2017-05-01

    Affective forecasts are used to anticipate the hedonic impact of future events and decide which events to pursue or avoid. We propose that because affective forecasters are more sensitive to outcome specifications of events than experiencers, the outcome specification values of an event, such as its duration, magnitude, probability, and psychological distance, can be used to predict the direction of affective forecasting errors: whether affective forecasters will overestimate or underestimate its hedonic impact. When specifications are positively correlated with the hedonic impact of an event, forecasters will overestimate the extent to which high specification values will intensify and low specification values will discount its impact. When outcome specifications are negatively correlated with its hedonic impact, forecasters will overestimate the extent to which low specification values will intensify and high specification values will discount its impact. These affective forecasting errors compound additively when multiple specifications are aligned in their impact: In Experiment 1, affective forecasters underestimated the hedonic impact of winning a smaller prize that they expected to win, and they overestimated the hedonic impact of winning a larger prize that they did not expect to win. In Experiment 2, affective forecasters underestimated the hedonic impact of a short unpleasant video about a temporally distant event, and they overestimated the hedonic impact of a long unpleasant video about a temporally near event. Experiments 3A and 3B showed that differences in the affect-richness of forecasted and experienced events underlie these differences in sensitivity to outcome specifications, therefore accounting for both the impact bias and its reversal. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  20. Information Processing and Risk Perception: An Adaptation of the Heuristic-Systematic Model.

    Science.gov (United States)

    Trumbo, Craig W.

    2002-01-01

    Describes the heuristic-systematic information-processing model and risk perception--the two major conceptual areas of the analysis. Discusses the proposed model, describing the context of the data collections (public health communication involving cancer epidemiology) and providing the results of a set of three replications using the proposed model.…

  1. Simulation models in population breast cancer screening: A systematic review.

    Science.gov (United States)

    Koleva-Kolarova, Rositsa G; Zhan, Zhuozhao; Greuter, Marcel J W; Feenstra, Talitha L; De Bock, Geertruida H

    2015-08-01

    The aim of this review was to critically evaluate published simulation models for breast cancer screening of the general population and provide a direction for future modeling. A systematic literature search was performed to identify simulation models with more than one application. A framework for qualitative assessment was developed, incorporating model type; input parameters; modeling approach; transparency of input data sources/assumptions; sensitivity analyses and risk of bias; validation; and outcomes. Predicted mortality reduction (MR) and cost-effectiveness (CE) were compared to estimates from meta-analyses of randomized controlled trials (RCTs) and acceptability thresholds. Seven original simulation models were distinguished, all sharing common input parameters. The modeling approach was based on tumor progression (except in one model), with internal and cross validation of the resulting models, but without any external validation. Differences in lead times for invasive or non-invasive tumors, and the option for cancers not to progress, were not explicitly modeled. The models tended to overestimate the MR due to screening (11-24%) compared with the 10% (95% CI: -2 to 21%) MR from optimal RCTs. Potential harms due to regular breast cancer screening have been reported only recently. Most scenarios resulted in acceptable cost-effectiveness estimates given current thresholds. The selected models have been repeatedly applied in various settings to inform decision making, and the critical analysis revealed a high risk of bias in their outcomes. Given the importance of the models, there is a need for externally validated models which use systematic evidence for input data, to allow for a more critical evaluation of breast cancer screening. Copyright © 2015 Elsevier Ltd. All rights reserved.

  2. Exposure limits: the underestimation of absorbed cell phone radiation, especially in children.

    Science.gov (United States)

    Gandhi, Om P; Morgan, L Lloyd; de Salles, Alvaro Augusto; Han, Yueh-Ying; Herberman, Ronald B; Davis, Devra Lee

    2012-03-01

    The existing cell phone certification process uses a plastic model of the head called the Specific Anthropomorphic Mannequin (SAM), representing the top 10% of U.S. military recruits in 1989, which greatly underestimates the Specific Absorption Rate (SAR) for typical mobile phone users, especially children. A superior computer simulation certification process has been approved by the Federal Communications Commission (FCC) but is not employed to certify cell phones. In the United States, the FCC determines maximum allowed exposures. Many countries, especially European Union members, use the "guidelines" of the International Commission on Non-Ionizing Radiation Protection (ICNIRP), a non-governmental organization. A head smaller than SAM absorbs a relatively higher SAR from radiofrequency (RF) exposure. Also, SAM uses a fluid having the average electrical properties of the head, which cannot capture the differential absorption of specific brain tissues, nor absorption in children or smaller adults. The SAR for a 10-year-old is up to 153% higher than the SAR for the SAM model. When electrical properties are considered, absorption in a child's head can be over two times greater than in an adult's, and absorption by the skull's bone marrow can be ten times greater than in adults. Therefore, a new certification process is needed that incorporates different modes of use, head sizes, and tissue properties. Anatomically based models should be employed in revising safety standards for these ubiquitous modern devices, and standards should be set by accountable, independent groups.

  3. Understanding in vivo modelling of depression in non-human animals: a systematic review protocol

    DEFF Research Database (Denmark)

    Bannach-Brown, Alexandra; Liao, Jing; Wegener, Gregers

    2016-01-01

    The aim of this study is to systematically collect all published preclinical non-human animal literature on depression to provide an unbiased overview of existing knowledge. A systematic search will be carried out in PubMed and Embase. Studies will be included if they use non-human animal experimental model(s) to induce or mimic a depressive-like phenotype. Data that will be extracted include the model or method of induction; species and gender of the animals used; the behavioural, anatomical, electrophysiological, neurochemical or genetic outcome measure(s) used; and risk of bias … meta-analysis of the preclinical studies modelling depression-like behaviours and phenotypes in animals.

  4. Quantifying the underestimation of relative risks from genome-wide association studies.

    Directory of Open Access Journals (Sweden)

    Chris Spencer

    2011-03-01

    Full Text Available Genome-wide association studies (GWAS) have identified hundreds of associated loci across many common diseases. Most risk variants identified by GWAS will merely be tags for as-yet-unknown causal variants. It is therefore possible that identification of the causal variant, by fine mapping, will identify alleles with larger effects on genetic risk than those currently estimated from GWAS replication studies. We show that under plausible assumptions, whilst the majority of the per-allele relative risks (RR) estimated from GWAS data will be close to the true risk at the causal variant, some could be considerable underestimates. For example, for an estimated RR in the range 1.2-1.3, there is approximately a 38% chance that it exceeds 1.4 and a 10% chance that it is over 2. We show how these probabilities can vary depending on the true effects associated with low-frequency variants and on the minor allele frequency (MAF) of the most associated SNP. We investigate the consequences of the underestimation of effect sizes for predictions of an individual's disease risk and interpret our results for the design of fine mapping experiments. Although these effects mean that the amount of heritability explained by known GWAS loci is expected to be larger than current projections, this increase is likely to explain a relatively small amount of the so-called "missing" heritability.
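The core mechanism, attenuation of the estimated effect when the genotyped SNP only tags the causal variant, can be demonstrated with a Monte Carlo sketch. All parameters (allele frequency, true RR, tag correlation, baseline risk) are invented for illustration; this is not the paper's model:

```python
# Monte Carlo sketch of effect-size attenuation at a tag SNP: a causal allele
# with per-allele relative risk RR is imperfectly tagged (allele correlation r),
# and the allelic odds ratio estimated at the tag is systematically closer to 1.
import numpy as np

rng = np.random.default_rng(0)
n, f, rr, r = 400_000, 0.2, 1.5, 0.7   # individuals, allele freq, true RR, LD

def haplotypes(size):
    causal = rng.random(size) < f
    # Copy the causal allele with probability r, else draw independently:
    # this gives corr(causal, tag) = r when the allele frequencies are equal.
    tag = np.where(rng.random(size) < r, causal, rng.random(size) < f)
    return causal, tag

c1, t1 = haplotypes(n)
c2, t2 = haplotypes(n)
g_causal, g_tag = c1.astype(int) + c2, t1.astype(int) + t2

p0 = 0.02                                    # baseline (rare) disease risk
disease = rng.random(n) < p0 * rr ** g_causal  # multiplicative per-allele risk

def allelic_or(geno):
    a_case, a_ctrl = geno[disease].sum(), geno[~disease].sum()
    n_case, n_ctrl = 2 * disease.sum(), 2 * (~disease).sum()
    return (a_case / (n_case - a_case)) / (a_ctrl / (n_ctrl - a_ctrl))

or_causal, or_tag = allelic_or(g_causal), allelic_or(g_tag)
print(round(or_causal, 2), round(or_tag, 2))  # tag OR is attenuated toward 1
```

With equal allele frequencies the log odds ratio at the tag shrinks by roughly the factor r, so an OR of 1.5 at the causal variant appears as roughly 1.5^0.7 ≈ 1.33 at the tag, which is the underestimation the abstract quantifies.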

  5. Commonly used reference values underestimate oxygen uptake in healthy, 50-year-old Swedish women.

    Science.gov (United States)

    Genberg, M; Andrén, B; Lind, L; Hedenström, H; Malinovschi, A

    2018-01-01

    Cardiopulmonary exercise testing (CPET) is the gold standard among clinical exercise tests. It combines a conventional stress test with measurement of oxygen uptake (VO2) and CO2 production. No validated Swedish reference values exist, and reference values in women are generally understudied. Moreover, the importance of the achieved respiratory exchange ratio (RER) and the significance of the breathing reserve (BR) at peak exercise in healthy individuals are poorly understood. We compared VO2 at maximal load (peakVO2) and at the anaerobic threshold (VO2@AT) in healthy Swedish individuals with commonly used reference values, taking gender into account. Further, we analysed maximal workload and peakVO2 with regard to peak RER and BR. In all, 181 healthy, 50-year-old individuals (91 women) performed CPET. PeakVO2 was best predicted using Jones et al. (100·5%), while SHIP reference values underestimated peakVO2 most (112·5%). Furthermore, underestimation of peakVO2 in women was found for all studied reference values. PeakVO2 did not differ significantly between participants with peak RER above versus below 1·1 (2328·7 versus 2176·7 ml min-1, P = 0·11). A lower BR (≤30%) was related to significantly higher peakVO2. In conclusion, commonly used reference values underestimated oxygen uptake in women. No evidence for demanding RER > 1·1 in healthy individuals was found. A lowered BR is probably a normal response to higher workloads in healthy individuals. © 2016 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.

  6. Asteroseismic modelling of solar-type stars: internal systematics from input physics and surface correction methods

    Science.gov (United States)

    Nsamba, B.; Campante, T. L.; Monteiro, M. J. P. F. G.; Cunha, M. S.; Rendle, B. M.; Reese, D. R.; Verma, K.

    2018-04-01

    Asteroseismic forward modelling techniques are being used to determine fundamental properties (e.g. mass, radius, and age) of solar-type stars. The need to take into account all possible sources of error is of paramount importance for a robust determination of stellar properties. We present a study of 34 solar-type stars for which high signal-to-noise asteroseismic data are available from multi-year Kepler photometry. We explore the internal systematics in the stellar properties, that is, those associated with the uncertainty in the input physics used to construct the stellar models. In particular, we explore the systematics arising from: (i) the inclusion of the diffusion of helium and heavy elements; and (ii) the uncertainty in the solar metallicity mixture. We also assess the systematics arising from (iii) the different surface correction methods used in optimisation/fitting procedures. The systematics arising from comparing results of models with and without diffusion are found to be 0.5%, 0.8%, 2.1%, and 16% in mean density, radius, mass, and age, respectively. The internal systematics in age are significantly larger than the statistical uncertainties. We find the internal systematics resulting from the uncertainty in the solar metallicity mixture to be 0.7% in mean density, 0.5% in radius, 1.4% in mass, and 6.7% in age. The surface correction methods by Sonoi et al. and Ball & Gizon's two-term correction produce the lowest internal systematics among the different correction methods, namely ∼1%, ∼1%, ∼2%, and ∼8% in mean density, radius, mass, and age, respectively. Stellar masses obtained using the surface correction methods by Kjeldsen et al. and Ball & Gizon's one-term correction are systematically higher than those obtained using frequency ratios.
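For readers unfamiliar with the surface corrections being compared, the Ball & Gizon two-term form models the model-minus-observed frequency offset as δν = [a₋₁(ν/ν_ac)⁻¹ + a₃(ν/ν_ac)³]/ℐ, with ℐ the normalised mode inertia, and the two coefficients are obtained by linear least squares. The sketch below fits that form to synthetic data; the frequencies, inertias and coefficients are all invented for illustration:

```python
# Least-squares fit of a Ball & Gizon style two-term surface correction,
# delta_nu = (a_m1 * (nu/nu_ac)**-1 + a_3 * (nu/nu_ac)**3) / inertia.
# All inputs are synthetic; real fits use observed minus model frequencies.
import numpy as np

nu = np.linspace(1500.0, 3500.0, 20)          # mode frequencies (microHz)
nu_ac = 5000.0                                # acoustic cut-off frequency
inertia = 1.0 + 0.5 * (nu / nu.max()) ** 2    # made-up normalised mode inertias

a_true = np.array([-1.0, -4.0])               # [a_-1, a_3], arbitrary units
x = nu / nu_ac
design = np.column_stack([x ** -1, x ** 3]) / inertia[:, None]
delta_nu = design @ a_true                    # noise-free synthetic offsets

# Because the model is linear in the coefficients, lstsq recovers them exactly
# on noise-free data.
a_fit, *_ = np.linalg.lstsq(design, delta_nu, rcond=None)
print(a_fit)
```

The one-term variant keeps only the cubic term; the systematics quoted in the abstract come from repeating the full stellar fit with each such prescription.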

  7. Complete Systematic Error Model of SSR for Sensor Registration in ATC Surveillance Networks.

    Science.gov (United States)

    Jarama, Ángel J; López-Araquistain, Jaime; Miguel, Gonzalo de; Besada, Juan A

    2017-09-21

    In this paper, a complete and rigorous mathematical model for secondary surveillance radar systematic errors (biases) is developed. The model takes into account the physical effects systematically affecting the measurement processes. The azimuth biases are calculated from the physical error of the antenna calibration and the errors of the angle determination device. The distance bias is calculated from the delay of the signal produced by the refractive index of the atmosphere and from clock errors, while the altitude bias is calculated taking into account the atmospheric conditions (pressure and temperature). It will be shown, using simulated and real data, that adapting a classical bias estimation process to use the complete parametrized model results in improved accuracy in the bias estimation.
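To make the idea of "bias estimation" concrete, here is a deliberately simplified sketch (not the paper's parametrized model): with a constant-offset error model, range and azimuth biases are recovered by least squares from noisy plots of targets whose reference positions are known, which for constant offsets reduces to the mean residual. All numbers are invented:

```python
# Toy systematic-error estimation: recover constant range and azimuth offsets
# from noisy radar measurements of targets with known reference positions.
import numpy as np

rng = np.random.default_rng(1)
n = 500
true_range = rng.uniform(10e3, 200e3, n)        # metres
true_az = rng.uniform(0.0, 2 * np.pi, n)        # radians

range_bias, az_bias = 120.0, np.deg2rad(0.15)   # assumed systematic errors
meas_range = true_range + range_bias + rng.normal(0, 30.0, n)
meas_az = true_az + az_bias + rng.normal(0, np.deg2rad(0.05), n)

# For a constant-offset model the least-squares bias estimate is simply the
# mean residual against the reference trajectory; random noise averages out.
est_range_bias = np.mean(meas_range - true_range)
est_az_bias = np.mean(meas_az - true_az)
print(est_range_bias, np.rad2deg(est_az_bias))
```

The paper's contribution is precisely that the biases are *not* simple constants: they depend on atmosphere, clock and antenna physics, so the regression uses the complete parametrized model instead of a per-sensor offset.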

  8. 'When measurements mean action' decision models for portal image review to eliminate systematic set-up errors

    International Nuclear Information System (INIS)

    Wratten, C.R.; Denham, J.W.; O'Brien, P.; Hamilton, C.S.; Kron, T.; London Regional Cancer Centre, London, Ontario

    2004-01-01

    The aim of the present paper is to evaluate how the use of decision models in the review of portal images can eliminate systematic set-up errors during conformal therapy. Sixteen patients undergoing four-field irradiation of prostate cancer had daily portal images obtained during the first two treatment weeks and weekly thereafter. The magnitude of random and systematic variations was calculated by comparison of the portal images with the reference simulator images using the two-dimensional decision model embodied in the Hotelling's evaluation process (HEP). Random day-to-day set-up variation was small in this group of patients. Systematic errors were, however, common. In 15 of 16 patients, one or more errors of >2 mm were diagnosed at some stage during treatment. Sixteen of the 23 errors were between 2 and 4 mm. Although there were examples of oversensitivity of the HEP in three cases, and one instance of undersensitivity, the HEP proved highly sensitive to the small (2-4 mm) systematic errors that must be eliminated during high-precision radiotherapy. The HEP has proven valuable in diagnosing very small systematic errors. Using one-dimensional decision models, HEP can eliminate the majority of systematic errors during the first 2 treatment weeks. Copyright (2004) Blackwell Science Pty Ltd

  9. The application of the heuristic-systematic processing model to treatment decision making about prostate cancer.

    Science.gov (United States)

    Steginga, Suzanne K; Occhipinti, Stefano

    2004-01-01

    The study investigated the utility of the Heuristic-Systematic Processing Model as a framework for the investigation of patient decision making. A total of 111 men recently diagnosed with localized prostate cancer were assessed using Verbal Protocol Analysis and self-report measures. Study variables included men's use of nonsystematic and systematic information processing, desire for involvement in decision making, and the individual differences of health locus of control, tolerance of ambiguity, and decision-related uncertainty. Most men (68%) preferred that decision making be shared equally between them and their doctor. Men's use of the expert opinion heuristic was related to men's verbal reports of decisional uncertainty and having a positive orientation to their doctor and medical care; a desire for greater involvement in decision making was predicted by a high internal locus of health control. Trends were observed for systematic information processing to increase when the heuristic strategy used was negatively affect laden and when men were uncertain about the probabilities for cure and side effects. There was a trend for decreased systematic processing when the expert opinion heuristic was used. Findings were consistent with the Heuristic-Systematic Processing Model and suggest that this model has utility for future research in applied decision making about health.

  10. Asymmetries of poverty: why global burden of disease valuations underestimate the burden of neglected tropical diseases.

    Directory of Open Access Journals (Sweden)

    Charles H King

    2008-03-01

    Full Text Available The disability-adjusted life year (DALY) initially appeared attractive as a health metric in the Global Burden of Disease (GBD) program, as it purports to be a comprehensive health assessment that encompasses premature mortality, morbidity, impairment, and disability. It was originally thought that the DALY would be useful in policy settings, reflecting normative valuations as a standardized unit of ill health. However, the design of the DALY and its use in policy estimates contain inherent flaws that result in systematic undervaluation of the importance of chronic diseases, such as many of the neglected tropical diseases (NTDs), in world health. The conceptual design of the DALY comes out of a perspective largely focused on individual risk rather than the ecology of disease, thus failing to acknowledge the implications of context on the burden of disease for the poor. It is nonrepresentative of the impact of poverty on disability, which results in the significant underestimation of disability weights for chronic diseases such as the NTDs. Finally, the application of the DALY in policy estimates does not account for the nonlinear effects of poverty in the cost-utility analysis of disease control, effectively discounting the utility of comprehensively treating NTDs. The present DALY framework needs to be substantially revised if the GBD is to become a valid and useful system for determining health priorities.
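The arithmetic behind the argument is the basic DALY decomposition, DALY = YLL + YLD, where YLL is years of life lost to premature mortality and YLD is years lived with disability weighted by a disability weight. A sketch with entirely hypothetical numbers shows how strongly the burden estimate for a chronic, rarely fatal disease depends on that weight:

```python
# DALY = YLL + YLD: years of life lost (deaths x years lost per death) plus
# years lived with disability (cases x disability weight x duration).
# The figures below are hypothetical, chosen to illustrate the sensitivity.
def daly(deaths, years_lost_per_death, cases, disability_weight, duration_years):
    yll = deaths * years_lost_per_death
    yld = cases * disability_weight * duration_years
    return yll + yld

# Same hypothetical chronic infection, two candidate disability weights:
low_dw = daly(deaths=1_000, years_lost_per_death=30,
              cases=500_000, disability_weight=0.02, duration_years=10)
high_dw = daly(deaths=1_000, years_lost_per_death=30,
               cases=500_000, disability_weight=0.10, duration_years=10)
print(low_dw, high_dw)  # 130000 vs 530000 DALYs
```

Because mortality contributes little for many NTDs, an underestimated disability weight (0.02 instead of 0.10 here) shrinks the computed burden roughly fourfold, which is the mechanism of undervaluation the abstract describes.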

  11. Simulating Various Terrestrial and Uav LIDAR Scanning Configurations for Understory Forest Structure Modelling

    Science.gov (United States)

    Hämmerle, M.; Lukač, N.; Chen, K.-C.; Koma, Zs.; Wang, C.-K.; Anders, K.; Höfle, B.

    2017-09-01

    Information about the 3D structure of understory vegetation is of high relevance in forestry research and management (e.g., for complete biomass estimations). However, it has hardly been investigated systematically with state-of-the-art methods such as static terrestrial laser scanning (TLS) or laser scanning from unmanned aerial vehicle platforms (ULS). A prominent challenge for scanning forests is posed by occlusion, calling for proper TLS scan position or ULS flight line configurations in order to achieve an accurate representation of understory vegetation. The aim of our study is to examine the effect of TLS and ULS scanning strategies on (1) the height of individual understory trees and (2) understory canopy height raster models. We simulate full-waveform TLS and ULS point clouds of a virtual forest plot captured from various combinations of up to 12 TLS scan positions or 3 ULS flight lines. The accuracy of the respective datasets is evaluated against reference values given by the virtually scanned 3D triangle mesh tree models. TLS tree height underestimations range up to 1.84 m (15.30% of tree height) for single TLS scan positions, but combining three scan positions reduces the underestimation to a maximum of 0.31 m (2.41%). Combining ULS flight lines also results in improved tree height representation, with a maximum underestimation of 0.24 m (2.15%). The presented simulation approach offers a complementary source of information for efficient planning of field campaigns aiming at understory vegetation modelling.
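The reason merging scan positions reduces height underestimation can be shown with a toy occlusion model (much cruder than the paper's full-waveform simulator): each scan sees a tree's points only up to some occluded height fraction, the height estimate is the maximum observed z, and the union of scans can only raise that maximum. All fractions and sizes below are invented:

```python
# Toy occlusion model: each scan position captures points only up to a
# visible height fraction; height = max observed z, so merging scans can
# only reduce the underestimation relative to the true tree height.
import numpy as np

rng = np.random.default_rng(2)
tree_height = 20.0
points_z = rng.uniform(0.0, tree_height, 5000)   # synthetic stem/crown points

def scan(visible_fraction):
    """Points a single scan position captures, truncated by occlusion."""
    return points_z[points_z <= visible_fraction * tree_height]

scans = [scan(f) for f in (0.85, 0.90, 0.97)]    # three scan positions
single_estimates = [s.max() for s in scans]
combined_estimate = np.concatenate(scans).max()
print(single_estimates, combined_estimate)
# The combined estimate equals the best single-scan estimate or better,
# mirroring the reduced underestimation reported for merged scan positions.
```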

  13. Comparison of two stochastic techniques for reliable urban runoff prediction by modeling systematic errors

    DEFF Research Database (Denmark)

    Del Giudice, Dario; Löwe, Roland; Madsen, Henrik

    2015-01-01

    In urban rainfall-runoff, commonly applied statistical techniques for uncertainty quantification mostly ignore systematic output errors originating from simplified models and erroneous inputs. Consequently, the resulting predictive uncertainty is often unreliable. Our objective is to present two approaches which use stochastic processes to describe systematic deviations and to discuss their advantages and drawbacks for urban drainage modeling. The two methodologies are an external bias description (EBD) and an internal noise description (IND, also known as stochastic gray-box modeling). They emerge from different fields and have not yet been compared in environmental modeling. To compare the two approaches, we develop a unifying terminology, evaluate them theoretically, and apply them to conceptual rainfall-runoff modeling in the same drainage system. Our results show that both approaches can…
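A minimal sketch of the external-bias idea, with invented parameters and a stand-in simulator rather than a real runoff model: the observed output is treated as the deterministic model plus an autocorrelated bias process plus white measurement noise, and the bias parameters are identified from the residuals.

```python
# EBD-style sketch: observed = deterministic model + AR(1) systematic bias
# + white measurement noise. Parameters are invented for illustration.
import numpy as np

rng = np.random.default_rng(3)
n, phi, sigma_b, sigma_e = 5000, 0.9, 0.5, 0.1

model_output = np.sin(np.linspace(0, 20 * np.pi, n))    # stand-in simulator
bias = np.zeros(n)
for t in range(1, n):                                   # AR(1) systematic error
    bias[t] = phi * bias[t - 1] + rng.normal(0, sigma_b)
observed = model_output + bias + rng.normal(0, sigma_e, n)

# Residuals against the deterministic model are dominated by the bias term;
# their lag-1 autocorrelation approximately recovers the AR coefficient.
resid = observed - model_output
phi_hat = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(round(phi_hat, 2))
```

Ignoring the autocorrelated term, as the abstract notes is common practice, would treat `resid` as white noise and understate the predictive uncertainty; the IND alternative instead injects the stochastic term inside the model states.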

  14. Predictors of underestimation of malignancy after image-guided core needle biopsy diagnosis of flat epithelial atypia or atypical ductal hyperplasia.

    Science.gov (United States)

    Yu, Chi-Chang; Ueng, Shir-Hwa; Cheung, Yun-Chung; Shen, Shih-Che; Kuo, Wen-Lin; Tsai, Hsiu-Pei; Lo, Yung-Feng; Chen, Shin-Cheh

    2015-01-01

    Flat epithelial atypia (FEA) and atypical ductal hyperplasia (ADH) are precursors of breast malignancy. Management of FEA or ADH after image-guided core needle biopsy (CNB) remains controversial. The aim of this study was to evaluate malignancy underestimation rates after FEA or ADH diagnosis using image-guided CNB, to identify clinical characteristics and imaging features associated with malignancy, and to identify cases with low underestimation rates that may be treatable by observation only. We retrospectively reviewed 2,875 consecutive image-guided CNBs recorded in an electronic database from January 2010 to December 2011 and identified 128 (4.5%) FEA and 83 (2.9%) ADH diagnoses (211 total cases). Of these, 64 (30.3%) were echo-guided CNB procedures and 147 (69.7%) mammography-guided CNBs. Twenty patients (9.5%) were upgraded to malignancy. Multivariate analysis indicated that age (OR = 1.123 per 1-year increase, p = 0.002), a mass-type lesion with calcifications (OR = 8.213, p = 0.006), and ADH in CNB specimens (OR = 8.071, p = 0.003) were independent predictors of underestimation. In univariate analysis of echo-guided CNB (n = 64), mass with calcifications had the highest underestimation rate (p < 0.001). Multivariate analysis of the 147 mammography-guided CNBs revealed that age (OR = 1.122 per 1-year increase, p = 0.040) and calcification distribution were significant independent predictors of underestimation. No FEA case in which complete calcification retrieval was recorded after CNB was upgraded to malignancy. Older age at diagnosis on image-guided CNB was a predictor of malignancy underestimation. A mass with calcifications was more likely to be associated with malignancy, and in cases presenting as calcifications only, segmental distribution or linear shapes were significantly associated with upgrading.
    Excision after FEA or ADH diagnosis by image-guided CNB is warranted except for FEA diagnosed using mammography-guided CNB with complete calcification retrieval.

  15. A systematic approach for development of a PWR cladding corrosion model

    International Nuclear Information System (INIS)

    Quecedo, M.; Serna, J.J.; Weiner, R.A.; Kersting, P.J.

    2001-01-01

    A new model for the in-reactor corrosion of Improved (low-tin) Zircaloy-4 cladding irradiated in commercial pressurized water reactors (PWRs) is described. The model is based on an extensive database of PWR fuel cladding corrosion data from fuel irradiated in commercial reactors, with a range of fuel duty and coolant chemistry control strategies which bracket current PWR fuel management practices. The fuel thermal duty with these current fuel management practices is characterized by a significant amount of sub-cooled nucleate boiling (SNB) during the fuel's residence in-core, and the cladding corrosion model is very sensitive to the coolant heat transfer models used to calculate the coolant temperature at the oxide surface. The systematic approach to developing the new corrosion model therefore began with a review and evaluation of several alternative models for the forced convection and SNB coolant heat transfer. The heat transfer literature is not sufficient to determine which of these heat transfer models is most appropriate for PWR fuel rod operating conditions, and the selection of the coolant heat transfer model used in the new cladding corrosion model has been coupled with a statistical analysis of the in-reactor corrosion enhancement factors and their impact on obtaining the best fit to the cladding corrosion data. The in-reactor corrosion enhancement factors considered in this statistical analysis are based on a review of the current literature for PWR cladding corrosion phenomenology and models. Fuel operating condition factors which this literature review indicated could have a significant effect on the cladding corrosion performance were also evaluated in detail in developing the corrosion model. An iterative least squares fitting procedure was used to obtain the model coefficients and select the coolant heat transfer models and in-reactor corrosion enhancement factors. This statistical procedure was completed with an exhaustive analysis of the model

  16. Presentation of the EURODELTA III intercomparison exercise - evaluation of the chemistry transport models' performance on criteria pollutants and joint analysis with meteorology

    Science.gov (United States)

    Bessagnet, Bertrand; Pirovano, Guido; Mircea, Mihaela; Cuvelier, Cornelius; Aulinger, Armin; Calori, Giuseppe; Ciarelli, Giancarlo; Manders, Astrid; Stern, Rainer; Tsyro, Svetlana; García Vivanco, Marta; Thunis, Philippe; Pay, Maria-Teresa; Colette, Augustin; Couvidat, Florian; Meleux, Frédérik; Rouïl, Laurence; Ung, Anthony; Aksoyoglu, Sebnem; María Baldasano, José; Bieser, Johannes; Briganti, Gino; Cappelletti, Andrea; D'Isidoro, Massimo; Finardi, Sandro; Kranenburg, Richard; Silibello, Camillo; Carnevale, Claudio; Aas, Wenche; Dupont, Jean-Charles; Fagerli, Hilde; Gonzalez, Lucia; Menut, Laurent; Prévôt, André S. H.; Roberts, Pete; White, Les

    2016-10-01

    The EURODELTA III exercise has facilitated a comprehensive intercomparison and evaluation of chemistry transport model performances. Participating models performed calculations for four 1-month periods in different seasons in the years 2006 to 2009, allowing the influence of different meteorological conditions on model performances to be evaluated. The exercise was performed with strict requirements for the input data, with few exceptions. As a consequence, most of the differences in the outputs can be attributed to differences in the model formulations of chemical and physical processes. The models were evaluated mainly for background rural stations in Europe. The performance was assessed in terms of bias, root mean square error and correlation with respect to the concentrations of air pollutants (NO2, O3, SO2, PM10 and PM2.5), as well as key meteorological variables. Though most meteorological parameters were prescribed, some variables, like the planetary boundary layer (PBL) height and the vertical diffusion coefficient, were derived in the model preprocessors and can partly explain the spread in model results. In general, the daytime PBL height is underestimated by all models. The largest variability of predicted PBL height is observed over the ocean and seas. For ozone, this study shows the importance of proper boundary conditions for accurate model calculations, and hence for the regime of the gas and particle chemistry. The models show similar and quite good performance for nitrogen dioxide, whereas they struggle to accurately reproduce measured sulfur dioxide concentrations (for which the agreement with observations is the poorest). In general, the models provide a close-to-observations map of particulate matter (PM2.5 and PM10) concentrations over Europe, with correlations in the range 0.4-0.7 and a systematic underestimation reaching -10 µg m-3 for PM10. The highest concentrations are much more underestimated, particularly in wintertime.
    Further evaluation of …
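The three headline metrics used in the evaluation, bias, root mean square error and Pearson correlation, take only a few lines to compute. The PM10 series below are fabricated solely to exercise the functions and mimic the reported systematic underestimation:

```python
# Bias, RMSE and Pearson correlation of modelled vs. observed concentrations.
import numpy as np

def bias(model, obs):
    return float(np.mean(model - obs))

def rmse(model, obs):
    return float(np.sqrt(np.mean((model - obs) ** 2)))

def correlation(model, obs):
    return float(np.corrcoef(model, obs)[0, 1])

obs = np.array([25.0, 40.0, 55.0, 30.0, 70.0])   # "observed" PM10, ug/m3
mod = np.array([18.0, 30.0, 42.0, 25.0, 50.0])   # "modelled", underestimating

print(bias(mod, obs), rmse(mod, obs), round(correlation(mod, obs), 3))
```

Note the pattern the toy numbers reproduce: a model can correlate well with observations (here the shapes track closely) while still carrying a large negative bias, exactly the PM10 behaviour described above.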

  17. Inferring Perspective Versus Getting Perspective: Underestimating the Value of Being in Another Person's Shoes.

    Science.gov (United States)

    Zhou, Haotian; Majka, Elizabeth A; Epley, Nicholas

    2017-04-01

    People use at least two strategies to solve the challenge of understanding another person's mind: inferring that person's perspective by reading his or her behavior (theorization) and getting that person's perspective by experiencing his or her situation (simulation). The five experiments reported here demonstrate a strong tendency for people to underestimate the value of simulation. Predictors estimated a stranger's emotional reactions toward 50 pictures. They could either infer the stranger's perspective by reading his or her facial expressions or simulate the stranger's perspective by watching the pictures he or she viewed. Predictors were substantially more accurate when they got perspective through simulation, but overestimated the accuracy they had achieved by inferring perspective. Predictors' miscalibrated confidence stemmed from overestimating the information revealed through facial expressions and underestimating the similarity in people's reactions to a given situation. People seem to underappreciate a useful strategy for understanding the minds of others, even after they gain firsthand experience with both strategies.

  18. Methodologies for measuring travelers' risk perception of infectious diseases: A systematic review.

    Science.gov (United States)

    Sridhar, Shruti; Régner, Isabelle; Brouqui, Philippe; Gautret, Philippe

    2016-01-01

    Numerous studies in the past have stressed the importance of travelers' psychology and perception in the implementation of preventive measures. The aim of this systematic review was to identify the methodologies used in studies reporting on travelers' risk perception of infectious diseases. A systematic search for relevant literature was conducted according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. There were 39 studies identified. In 35 of 39 studies, the methodology used was that of a knowledge, attitude and practice (KAP) survey based on questionnaires. One study used a combination of questionnaires and a visual psychometric measuring instrument called the 'pictorial representation of illness and self-measurement" or PRISM. One study used a self-representation model (SRM) method. Two studies measured psychosocial factors. Valuable information was obtained from KAP surveys showing an overall lack of knowledge among travelers about the most frequent travel-associated infections and associated preventive measures. This methodological approach however, is mainly descriptive, addressing knowledge, attitudes, and practices separately and lacking an examination of the interrelationships between these three components. Another limitation of the KAP method is underestimating psychosocial variables that have proved influential in health related behaviors, including perceived benefits and costs of preventive measures, perceived social pressure, perceived personal control, unrealistic optimism and risk propensity. Future risk perception studies in travel medicine should consider psychosocial variables with inferential and multivariate statistical analyses. The use of implicit measurements of attitudes could also provide new insights in the field of travelers' risk perception of travel-associated infectious diseases. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Using logic model methods in systematic review synthesis: describing complex pathways in referral management interventions.

    Science.gov (United States)

    Baxter, Susan K; Blank, Lindsay; Woods, Helen Buckley; Payne, Nick; Rimmer, Melanie; Goyder, Elizabeth

    2014-05-10

    There is increasing interest in innovative methods to carry out systematic reviews of complex interventions. Theory-based approaches, such as logic models, have been suggested as a means of providing additional insights beyond that obtained via conventional review methods. This paper reports the use of an innovative method which combines systematic review processes with logic model techniques to synthesise a broad range of literature. The potential value of the model produced was explored with stakeholders. The review identified 295 papers that met the inclusion criteria. The papers consisted of 141 intervention studies and 154 non-intervention quantitative and qualitative articles. A logic model was systematically built from these studies. The model outlines interventions, short term outcomes, moderating and mediating factors and long term demand management outcomes and impacts. Interventions were grouped into typologies of practitioner education, process change, system change, and patient intervention. Short-term outcomes identified that may result from these interventions were changed physician or patient knowledge, beliefs or attitudes and also interventions related to changed doctor-patient interaction. A range of factors which may influence whether these outcomes lead to long term change were detailed. Demand management outcomes and intended impacts included content of referral, rate of referral, and doctor or patient satisfaction. The logic model details evidence and assumptions underpinning the complex pathway from interventions to demand management impact. The method offers a useful addition to systematic review methodologies. PROSPERO registration number: CRD42013004037.

  2. Evaluation of high intensity precipitation from 16 Regional climate models over a meso-scale catchment in the Midlands Regions of England

    Science.gov (United States)

    Wetterhall, F.; He, Y.; Cloke, H.; Pappenberger, F.; Freer, J.; Wilson, M.; McGregor, G.

    2009-04-01

    Local flooding events are often triggered by high-intensity rainfall events, and it is important that these can be correctly modelled by Regional Climate Models (RCMs) if the results are to be used in climate impact assessment. In this study, daily precipitation from 16 RCMs was compared with observations over a meso-scale catchment in the Midlands Region of England. The RCM data were provided by the European research project ENSEMBLES and the precipitation observations by the UK MetOffice. The RCMs were all driven by reanalysis data from the ERA40 dataset over the time period 1961-2000. The ENSEMBLES data are on a spatial scale of 25 x 25 km and were disaggregated onto a 5 x 5 km grid over the catchment and compared with interpolated observational data of the same resolution. Mean precipitation was generally underestimated by the ENSEMBLES data, and the maxima and persistence of high-intensity rainfall were underestimated even more. The inter-annual variability was not fully captured by the RCMs, and there was a systematic underestimation of precipitation during the autumn months. The spatial pattern of the modelled precipitation was too smooth in comparison with the observed data, especially at the high altitudes in the western part of the catchment, where the heaviest precipitation usually occurs. The RCM outputs cannot reproduce the current high-intensity precipitation events that must be captured to model extreme flood events adequately. The results highlight the discrepancy between climate model output and the high-intensity precipitation input needed for hydrological impact modelling.

  3. Lesion stiffness measured by shear-wave elastography: Preoperative predictor of the histologic underestimation of US-guided core needle breast biopsy.

    Science.gov (United States)

    Park, Ah Young; Son, Eun Ju; Kim, Jeong-Ah; Han, Kyunghwa; Youk, Ji Hyun

    2015-12-01

    To determine whether lesion stiffness measured by shear-wave elastography (SWE) can be used to predict the histologic underestimation of ultrasound (US)-guided 14-gauge core needle biopsy (CNB) for breast masses. This retrospective study enrolled 99 breast masses from 93 patients, including 40 high-risk lesions and 59 cases of ductal carcinoma in situ (DCIS), which were diagnosed by US-guided 14-gauge CNB. SWE was performed for all breast masses to measure quantitative elasticity values before US-guided CNB. To identify the preoperative factors associated with histologic underestimation, patients' age, symptoms, lesion size, B-mode US findings, and quantitative SWE parameters were compared according to histologic upgrade after surgery using the chi-square test, Fisher's exact test, or independent t-test. The independent factors for predicting histologic upgrade were evaluated using multivariate logistic regression analysis. The underestimation rate was 28.3% (28/99) overall, 25.0% (10/40) in high-risk lesions, and 30.5% (18/59) in DCIS. All elasticity values of the upgrade group were significantly higher than those of the non-upgrade group. Breast lesion stiffness quantitatively measured by SWE could be helpful for predicting the underestimation of malignancy in US-guided 14-gauge CNB. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  4. Decoding β-decay systematics: A global statistical model for β- half-lives

    International Nuclear Information System (INIS)

    Costiris, N. J.; Mavrommatis, E.; Gernoth, K. A.; Clark, J. W.

    2009-01-01

    Statistical modeling of nuclear data provides a novel approach to nuclear systematics complementary to established theoretical and phenomenological approaches based on quantum theory. Continuing previous studies in which global statistical modeling is pursued within the general framework of machine learning theory, we implement advances in training algorithms designed to improve generalization, in application to the problem of reproducing and predicting the half-lives of nuclear ground states that decay 100% by the β- mode. More specifically, fully connected, multilayer feed-forward artificial neural network models are developed using the Levenberg-Marquardt optimization algorithm together with Bayesian regularization and cross-validation. The predictive performance of models emerging from extensive computer experiments is compared with that of traditional microscopic and phenomenological models as well as with the performance of other learning systems, including earlier neural network models and the support vector machines recently applied to the same problem. In discussing the results, emphasis is placed on predictions for nuclei that are far from the stability line, and especially those involved in r-process nucleosynthesis. It is found that the new statistical models can match or even surpass the predictive performance of conventional models for β-decay systematics and accordingly should provide a valuable additional tool for exploring the expanding nuclear landscape.
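
The record above describes regularized feed-forward neural network regression for half-lives. A minimal illustrative sketch of the idea, assuming synthetic (Z, N) → log10 half-life data and plain gradient descent with an L2 penalty standing in for Bayesian regularization (the authors' actual Levenberg-Marquardt setup is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: (Z, N) pairs and a fabricated log10(T1/2) target
# with a crude N-Z trend; NOT real nuclear data.
X = rng.uniform(20, 80, size=(200, 2))
y = 0.05 * (X[:, 1] - X[:, 0]) + rng.normal(0, 0.1, 200)
Xs = (X - X.mean(axis=0)) / X.std(axis=0)  # standardize inputs

# One hidden layer with tanh activation.
W1 = rng.normal(0, 0.5, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr, lam = 0.05, 1e-4  # learning rate, L2 regularization strength

def predict(Xin):
    return (np.tanh(Xin @ W1 + b1) @ W2 + b2).ravel()

mse_start = float(np.mean((predict(Xs) - y) ** 2))
for _ in range(2000):
    h = np.tanh(Xs @ W1 + b1)                     # hidden activations
    err = (h @ W2 + b2).ravel() - y               # residuals
    dh = (err[:, None] @ W2.T) * (1 - h ** 2)     # backprop through tanh
    W2 -= lr * (h.T @ err[:, None] / len(y) + lam * W2)
    b2 -= lr * err.mean(keepdims=True)
    W1 -= lr * (Xs.T @ dh / len(y) + lam * W1)
    b1 -= lr * dh.mean(axis=0)

mse_end = float(np.mean((predict(Xs) - y) ** 2))
```

The L2 penalty shrinks the weights toward zero, which is the simplest stand-in for the generalization-improving regularization the record emphasizes.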

  5. Disclosing bias in bisulfite assay: MethPrimers underestimate high DNA methylation.

    Directory of Open Access Journals (Sweden)

    Andrea Fuso

    Full Text Available Discordant results obtained in bisulfite assays using MethPrimers (PCR primers designed using the MethPrimer software, or assuming that non-CpG cytosines are not methylated) versus primers insensitive to cytosine methylation led us to hypothesize a technical bias. We therefore used the two kinds of primers to study different experimental models and methylation statuses. We demonstrated that MethPrimers negatively select hypermethylated DNA sequences in the PCR step of the bisulfite assay, resulting in CpG methylation underestimation and non-CpG methylation masking, failing to reveal differential methylation statuses. We also describe the characteristics of "Methylation-Insensitive Primers" (MIPs), which have degenerate bases (G/A) to cope with the uncertain C/U conversion. As CpG and non-CpG DNA methylation patterns are largely variable depending on the species, developmental stage, tissue and cell type, a variable extent of the bias is expected. The more the methylome is methylated, the greater the extent of the bias, with a prevalent effect of non-CpG methylation. These findings suggest a revision of several DNA methylation patterns documented so far and also point out the necessity of applying unbiased analyses to the increasing number of epigenomic studies.

  6. Disk Masses around Solar-mass Stars are Underestimated by CO Observations

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Mo; Evans II, Neal J. [Astronomy Department, University of Texas, 2515 Speedway, Stop C1400, Austin, TX 78712 (United States); Dodson-Robinson, Sarah E. [University of Delaware, Department of Physics and Astronomy, 217 Sharp Lab, Newark, DE 19716 (United States); Willacy, Karen; Turner, Neal J. [Mail Stop 169-506, Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States)

    2017-05-20

    Gas in protostellar disks provides the raw material for giant planet formation and controls the dynamics of the planetesimal-building dust grains. Accurate gas mass measurements help map the observed properties of planet-forming disks onto the formation environments of known exoplanets. Rare isotopologues of carbon monoxide (CO) have been used as gas mass tracers for disks in the Lupus star-forming region, with an assumed interstellar CO/H{sub 2} abundance ratio. Unfortunately, observations of T-Tauri disks show that CO abundance is not interstellar, a finding reproduced by models that show CO abundance decreasing both with distance from the star and as a function of time. Here, we present radiative transfer simulations that assess the accuracy of CO-based disk mass measurements. We find that the combination of CO chemical depletion in the outer disk and optically thick emission from the inner disk leads observers to underestimate gas mass by more than an order of magnitude if they use the standard assumptions of interstellar CO/H{sub 2} ratio and optically thin emission. Furthermore, CO abundance changes on million-year timescales, introducing an age/mass degeneracy into observations. To reach a factor of a few accuracy for CO-based disk mass measurements, we suggest that observers and modelers adopt the following strategies: (1) select low-J transitions; (2) observe multiple CO isotopologues and use either intensity ratios or normalized line profiles to diagnose CO chemical depletion; and (3) use spatially resolved observations to measure the CO-abundance distribution.

  7. FROM ATOMISTIC TO SYSTEMATIC COARSE-GRAINED MODELS FOR MOLECULAR SYSTEMS

    KAUST Repository

    Harmandaris, Vagelis

    2017-10-03

    The development of systematic (rigorous) coarse-grained mesoscopic models for complex molecular systems is an intense research area. Here we first give an overview of methods for obtaining optimal parametrized coarse-grained models, starting from detailed atomistic representation for high dimensional molecular systems. Different methods are described based on (a) structural properties (inverse Boltzmann approaches), (b) forces (force matching), and (c) path-space information (relative entropy). Next, we present a detailed investigation concerning the application of these methods in systems under equilibrium and non-equilibrium conditions. Finally, we present results from the application of these methods to model molecular systems.

  8. e-Government Maturity Model Based on Systematic Review and Meta-Ethnography Approach

    Directory of Open Access Journals (Sweden)

    Darmawan Napitupulu

    2016-11-01

    Full Text Available Maturity models based on e-Government portals have been developed by a number of researchers, both individually and institutionally, but they remain scattered across various journal and conference articles and differ from one another in both stages and features. The aim of this research is to integrate a number of existing maturity models in order to build a generic maturity model for e-Government portals. The method used in this study is a Systematic Review with a meta-ethnography qualitative approach. Meta-ethnography, which is part of the Systematic Review method, is a technique for integrating data to obtain theories and concepts with a new, deeper and more thorough level of understanding. The result is a maturity model for e-Government portals that consists of 7 (seven) stages, namely web presence, interaction, transaction, vertical integration, horizontal integration, full integration, and open participation. These seven stages are synthesized from 111 key concepts drawn from 25 studies of e-Government portal maturity models. The resulting maturity model is more comprehensive and generic because it integrates the models (best practices) that exist today.

  9. Experimental model for architectural systematization and its basic thermal performance. Part 1. Research on architectural systematization of energy conversion devices; Kenchiku system ka model no gaiyo to kihon seino ni tsuite. 1. Energy henkan no kenchiku system ka ni kansuru kenkyu

    Energy Technology Data Exchange (ETDEWEB)

    Sunaga, N; Ito, N; Kimura, G; Fukao, S; Shimizu, T; Tsunoda, M; Muro, K [Tokyo Metropolitan University, Tokyo (Japan)

    1996-10-27

    The outline of a model for architectural systematization of natural energy conversion and experimental results on its basic thermal performance in winter are described. The model has a floor space of about 20 m{sup 2}. Foam polystyrene 100 mm and 200 mm thick was used as heat insulation for the outer walls. The model has a solar battery and an air conditioner and uses red brick as a heat reservoir. Experiments were made on the seven modes obtained by combining three elements (heating, heat storage, and a night insulated door). The results showed that the model for architectural systematization has high heat insulation and airtightness and can be used as an energy element or an evaluation model for architectural systematization. In this model, the power consumption of the air conditioner in winter can be fully supplied by the power generated by the solar battery alone. As an architectural element, the combination of a heat reservoir and a night insulated door can remarkably reduce heating energy consumption and greatly improve the indoor thermal environment. 1 ref., 6 figs., 3 tabs.

  10. Modelling the regional climate and isotopic composition of Svalbard precipitation using REMOiso

    DEFF Research Database (Denmark)

    Divine, D.V.; Sjolte, Jesper; Isaksson, E.

    2011-01-01

    Simulations of a regional (approx. 50 km resolution) circulation model REMOiso with embedded stable water isotope module covering the period 1958-2001 are compared with the two instrumental climate and four isotope series (d18O) from western Svalbard. We examine the data from ice cores drilled...... than summer. The simulated and measured Holtedahlfonna d18O series agree reasonably well, whereas no significant correlation has been observed between the modelled and measured Lomonosovfonna ice core isotopic series. It is shown that sporadic nature as well as variability in the amount inherent...... in reproducing the local climate. The model successfully captures the climate variations on the daily to multidecadal times scales although it tends to systematically underestimate the winter SAT. Analysis suggests that REMOiso performs better at simulating isotope compositions of precipitation in the winter...

  11. Insights on the impact of systematic model errors on data assimilation performance in changing catchments

    Science.gov (United States)

    Pathiraja, S.; Anghileri, D.; Burlando, P.; Sharma, A.; Marshall, L.; Moradkhani, H.

    2018-03-01

    The global prevalence of rapid and extensive land use change necessitates hydrologic modelling methodologies capable of handling non-stationarity. This is particularly true in the context of Hydrologic Forecasting using Data Assimilation. Data Assimilation has been shown to dramatically improve forecast skill in hydrologic and meteorological applications, although such improvements are conditional on using bias-free observations and model simulations. A hydrologic model calibrated to a particular set of land cover conditions has the potential to produce biased simulations when the catchment is disturbed. This paper sheds new light on the impacts of bias or systematic errors in hydrologic data assimilation, in the context of forecasting in catchments with changing land surface conditions and a model calibrated to pre-change conditions. We posit that in such cases, the impact of systematic model errors on assimilation or forecast quality is dependent on the inherent prediction uncertainty that persists even in pre-change conditions. Through experiments on a range of catchments, we develop a conceptual relationship between total prediction uncertainty and the impacts of land cover changes on the hydrologic regime to demonstrate how forecast quality is affected when using state estimation Data Assimilation with no modifications to account for land cover changes. This work shows that systematic model errors as a result of changing or changed catchment conditions do not always necessitate adjustments to the modelling or assimilation methodology, for instance through re-calibration of the hydrologic model, time varying model parameters or revised offline/online bias estimation.

  12. Underestimated risks of recurrent long-range ash dispersal from northern Pacific Arc volcanoes.

    Science.gov (United States)

    Bourne, A J; Abbott, P M; Albert, P G; Cook, E; Pearce, N J G; Ponomareva, V; Svensson, A; Davies, S M

    2016-07-21

    Widespread ash dispersal poses a significant natural hazard to society, particularly in relation to disruption to aviation. Assessing the extent of the threat of far-travelled ash clouds on flight paths is substantially hindered by an incomplete volcanic history and an underestimation of the potential reach of distant eruptive centres. The risk of extensive ash clouds to aviation is thus poorly quantified. New evidence is presented of explosive Late Pleistocene eruptions in the Pacific Arc, currently undocumented in the proximal geological record, which dispersed ash up to 8000 km from source. Twelve microscopic ash deposits or cryptotephra, invisible to the naked eye, discovered within Greenland ice-cores, and ranging in age between 11.1 and 83.7 ka b2k, are compositionally matched to northern Pacific Arc sources including Japan, Kamchatka, Cascades and Alaska. Only two cryptotephra deposits are correlated to known high-magnitude eruptions (Towada-H, Japan, ca 15 ka BP and Mount St Helens Set M, ca 28 ka BP). For the remaining 10 deposits, there is no evidence of age- and compositionally-equivalent eruptive events in regional volcanic stratigraphies. This highlights the inherent problem of under-reporting eruptions and the dangers of underestimating the long-term risk of widespread ash dispersal for trans-Pacific and trans-Atlantic flight routes.

  13. [The effectiveness of continuing care models in patients with chronic diseases: a systematic review].

    Science.gov (United States)

    Chen, Hsiao-Mei; Han, Tung-Chen; Chen, Ching-Min

    2014-04-01

    Population aging has caused significant rises in the prevalence of chronic diseases and the utilization of healthcare services in Taiwan. The current healthcare delivery system is fragmented. Integrating medical services may increase the quality of healthcare, enhance patient and patient family satisfaction with healthcare services, and better contain healthcare costs. This article introduces two continuing care models: discharge planning and case management. Further, the effectiveness and essential components of these two models are analyzed using a systematic review method. Articles included in this systematic review were all original articles on discharge-planning or case-management interventions published between February 1999 and March 2013 in any of 6 electronic databases (Medline, PubMed, Cinahl Plus with full Text, ProQuest, Cochrane Library, CEPS and Center for Chinese Studies electronic databases). Of the 70 articles retrieved, only 7 were randomized controlled trial studies. Three types of continuity-of-care models were identified: discharge planning, case management, and a hybrid of these two. All three models used logical and systematic processes to conduct assessment, planning, implementation, coordination, follow-up, and evaluation activities. Both the discharge planning model and the case management model were positively associated with improved self-care knowledge, reduced length of stay, decreased medical costs, and better quality of life. This study cross-referenced all reviewed articles in terms of target clients, content, intervention schedules, measurements, and outcome indicators. Study results may be referenced in future implementations of continuity-care models and may provide a reference for future research.

  14. Carbon and energy fluxes in cropland ecosystems: a model-data comparison

    Science.gov (United States)

    Lokupitiya, E.; Denning, A. Scott; Schaefer, K.; Ricciuto, D.; Anderson, R.; Arain, M. A.; Baker, I.; Barr, A. G.; Chen, G.; Chen, J.M.; Ciais, P.; Cook, D.R.; Dietze, M.C.; El Maayar, M.; Fischer, M.; Grant, R.; Hollinger, D.; Izaurralde, C.; Jain, A.; Kucharik, C.J.; Li, Z.; Liu, S.; Li, L.; Matamala, R.; Peylin, P.; Price, D.; Running, S. W.; Sahoo, A.; Sprintsin, M.; Suyker, A.E.; Tian, H.; Tonitto, Christina; Torn, M.S.; Verbeeck, Hans; Verma, S.B.; Xue, Y.

    2016-01-01

    Croplands are highly productive ecosystems that contribute to land–atmosphere exchange of carbon, energy, and water during their short growing seasons. We evaluated and compared net ecosystem exchange (NEE), latent heat flux (LE), and sensible heat flux (H) simulated by a suite of ecosystem models at five agricultural eddy covariance flux tower sites in the central United States as part of the North American Carbon Program Site Synthesis project. Most of the models overestimated H and underestimated LE during the growing season, leading to overall higher Bowen ratios compared to the observations. Most models systematically underpredicted NEE, especially at rain-fed sites. Certain crop-specific models that were developed considering the high productivity and associated physiological changes in specific crops better predicted the NEE and LE at both rain-fed and irrigated sites. Models with specific parameterization for different crops better simulated the inter-annual variability of NEE for maize-soybean rotation compared to those models with a single generic crop type. Stratification according to basic model formulation and phenological methodology did not explain significant variation in model performance across these sites and crops. The underprediction of NEE and LE and overprediction of H by most of the models suggests that models developed and parameterized for natural ecosystems cannot accurately predict the more robust physiology of highly bred and intensively managed crop ecosystems. When coupled in Earth System Models, it is likely that the excessive physiological stress simulated in many land surface component models leads to overestimation of temperature and atmospheric boundary layer depth, and underestimation of humidity and CO2 seasonal uptake over agricultural regions.

  15. Carbon and energy fluxes in cropland ecosystems: a model-data comparison

    Energy Technology Data Exchange (ETDEWEB)

    Lokupitiya, E.; Denning, A. S.; Schaefer, K.; Ricciuto, D.; Anderson, R.; Arain, M. A.; Baker, I.; Barr, A. G.; Chen, G.; Chen, J. M.; Ciais, P.; Cook, D. R.; Dietze, M.; El Maayar, M.; Fischer, M.; Grant, R.; Hollinger, D.; Izaurralde, C.; Jain, A.; Kucharik, C.; Li, Z.; Liu, S.; Li, L.; Matamala, R.; Peylin, P.; Price, D.; Running, S. W.; Sahoo, A.; Sprintsin, M.; Suyker, A. E.; Tian, H.; Tonitto, C.; Torn, M.; Verbeeck, Hans; Verma, S. B.; Xue, Y.

    2016-06-03

    Croplands are highly productive ecosystems that contribute to land–atmosphere exchange of carbon, energy, and water during their short growing seasons. We evaluated and compared net ecosystem exchange (NEE), latent heat flux (LE), and sensible heat flux (H) simulated by a suite of ecosystem models at five agricultural eddy covariance flux tower sites in the central United States as part of the North American Carbon Program Site Synthesis project. Most of the models overestimated H and underestimated LE during the growing season, leading to overall higher Bowen ratios compared to the observations. Most models systematically underpredicted NEE, especially at rain-fed sites. Certain crop-specific models that were developed considering the high productivity and associated physiological changes in specific crops better predicted the NEE and LE at both rain-fed and irrigated sites. Models with specific parameterization for different crops better simulated the inter-annual variability of NEE for maize-soybean rotation compared to those models with a single generic crop type. Stratification according to basic model formulation and phenological methodology did not explain significant variation in model performance across these sites and crops. The underprediction of NEE and LE and overprediction of H by most of the models suggests that models developed and parameterized for natural ecosystems cannot accurately predict the more robust physiology of highly bred and intensively managed crop ecosystems. When coupled in Earth System Models, it is likely that the excessive physiological stress simulated in many land surface component models leads to overestimation of temperature and atmospheric boundary layer depth, and underestimation of humidity and CO2 seasonal uptake over agricultural regions.
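
The Bowen ratio invoked in the record above is simply the ratio of sensible to latent heat flux, so the reported biases (H too high, LE too low) inflate it from both directions. A trivial illustration, with made-up flux values rather than site data:

```python
def bowen_ratio(H, LE):
    """Bowen ratio from sensible (H) and latent (LE) heat flux, in W m-2."""
    return H / LE

# Hypothetical midday fluxes over an irrigated crop (illustrative numbers):
observed = bowen_ratio(H=120.0, LE=300.0)  # 0.4
# A model that overestimates H and underestimates LE, as the record reports:
modeled = bowen_ratio(H=180.0, LE=240.0)   # 0.75
```

Both errors push the modeled ratio in the same direction, which is why the Bowen ratio is a sensitive diagnostic of the energy-partitioning bias described.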

  16. Agent-based modeling of noncommunicable diseases: a systematic review.

    Science.gov (United States)

    Nianogo, Roch A; Arah, Onyebuchi A

    2015-03-01

    We reviewed the use of agent-based modeling (ABM), a systems science method, in understanding noncommunicable diseases (NCDs) and their public health risk factors. We systematically reviewed studies in PubMed, ScienceDirect, and Web of Science published from January 2003 to July 2014. We retrieved 22 relevant articles; each had an observational or interventional design. Physical activity and diet were the most-studied outcomes. Often, single agent types were modeled, and the environment was usually irrelevant to the studied outcome. Predictive validation and sensitivity analyses were most used to validate models. Although increasingly used to study NCDs, ABM remains underutilized and, where used, is suboptimally reported in public health studies. Its use in studying NCDs will benefit from clarified best practices and improved rigor to establish its usefulness and facilitate replication, interpretation, and application.

  17. X-ray computed microtomography characterizes the wound effect that causes sap flow underestimation by thermal dissipation sensors.

    Science.gov (United States)

    Marañón-Jiménez, S; Van den Bulcke, J; Piayda, A; Van Acker, J; Cuntz, M; Rebmann, C; Steppe, K

    2018-02-01

    Insertion of thermal dissipation (TD) sap flow sensors in living tree stems causes damage of the wood tissue, as is the case with other invasive methods. The subsequent wound formation is one of the main causes of underestimation of tree water-use measured by TD sensors. However, the specific alterations in wood anatomy in response to inserted sensors have not yet been characterized, and the linked dysfunctions in xylem conductance and sensor accuracy are still unknown. In this study, we investigate the anatomical mechanisms prompting sap flow underestimation and the dynamic process of wound formation. Successive sets of TD sensors were installed in the early, mid and end stage of the growing season in diffuse- and ring-porous trees, Fagus sylvatica (Linnaeus) and Quercus petraea ((Mattuschka) Lieblein), respectively. The trees were cut in autumn and additional sensors were installed in the cut stem segments as controls without wound formation. The wounded area and volume surrounding each sensor was then visually determined by X-ray computed microtomography (X-ray microCT). This technique allowed the characterization of vessel anatomical transformations such as tyloses formation, their spatial distribution and quantification of reduction in conductive area. MicroCT scans showed considerable formation of tyloses that reduced the conductive area of vessels surrounding the inserted TD probes, thus causing an underestimation in sap flux density (SFD) in both beech and oak. Discolored wood tissue was ellipsoidal, larger in the radial plane, more extensive in beech than in oak, and also for sensors installed for longer times. However, the severity of anatomical transformations did not always follow this pattern. Increased wound size with time, for example, did not result in larger SFD underestimation. This information helps us to better understand the mechanisms involved in wound effects with TD sensors and allows the provision of practical recommendations to reduce

  18. A Systematic Review of Evidence for the Clubhouse Model of Psychosocial Rehabilitation

    OpenAIRE

    McKay, Colleen; Nugent, Katie L.; Johnsen, Matthew; Eaton, William W.; Lidz, Charles W.

    2016-01-01

    The Clubhouse Model has been in existence for over sixty-five years; however, a review that synthesizes the literature on the model is needed. The current study makes use of the existing research to conduct a systematic review of articles providing a comprehensive understanding of what is known about the Clubhouse Model, to identify the best evidence available, as well as areas that would benefit from further study. Findings are summarized and evidence is classified by outcome domains. Fifty-...

  19. Modelling the transuranic contamination in soils by using a generic model and systematic sampling

    International Nuclear Information System (INIS)

    Breitenecker, Katharina; Brandl, Alexander; Bock, Helmut; Villa, Mario

    2008-01-01

    Full text: In the course of decommissioning the former ASTRA Research Reactor, the Seibersdorf site is to be surveyed for possible contamination by radioactive materials, including transuranium elements. To limit the costs of systematic sampling and time-consuming laboratory analyses, a mathematical model was established that describes the migration of transuranium elements and includes the local topography of the area where deposition has occurred. The project basis is to find a mathematical function that determines the contamination by modelling the pathways of transuranium elements. The modelling approach chosen is cellular automata (CA). For this purpose, a hypothetical activity of transuranium elements is released on the ground in the centre of a simulated area. Under the assumption that migration of these elements only takes place by diffusion, transport and sorption, their equations are modelled in the CA model by a simple discretization of the problem at hand. To include local topography, most of the simulated area consists of a green corridor, where migration proceeds quite slowly; streets, where the migrational behaviour is different, and migration velocities in ditches are also modelled. The migration of three plutonium isotopes (238Pu, 239+240Pu, 241Pu), the migration of one americium isotope (241Am), the radioactive decay of 241Pu via 241Am to 237Np, and the radioactive decay of 238Pu to 234U were considered in this model. Due to the special modelling approach of CA, the physical requirement of conservation of the amount of substance is always fulfilled. The entire system was implemented in MATLAB. Systematic sampling on a featured test site, followed by detailed laboratory analyses, was done to compare the underlying CA model to real data. For this purpose a nuclide vector with 241Am as the reference nuclide was established. As long as the initial parameters (e.g.
    meteorological data) are well known, the model describes the
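
The CA update described in this record (diffusion of a deposited radionuclide plus radioactive decay on a grid) can be sketched as below. This is an illustrative Python reimplementation with arbitrary parameters, a uniform grid and periodic boundaries, not the project's MATLAB code; note that the CA's conservation of the amount of substance (up to decay) holds by construction:

```python
import numpy as np

n, steps = 51, 200
D, dt, dx = 0.1, 1.0, 1.0   # diffusivity, time step, cell size (illustrative units)
half_life = 432.2           # e.g. Am-241 half-life in years (illustrative)
decay = np.log(2) / half_life

C = np.zeros((n, n))
C[n // 2, n // 2] = 1.0     # unit activity released at the centre of the area
total0 = C.sum()

for _ in range(steps):
    # 5-point-stencil exchange with the four neighbours (explicit scheme);
    # stability requires D*dt/dx**2 <= 0.25 in 2-D. np.roll gives periodic edges.
    lap = (np.roll(C, 1, 0) + np.roll(C, -1, 0) +
           np.roll(C, 1, 1) + np.roll(C, -1, 1) - 4 * C) / dx ** 2
    C = C + dt * D * lap
    C *= np.exp(-decay * dt)  # first-order radioactive decay each step

# Total activity equals the initial release times the decay factor:
expected_total = total0 * np.exp(-decay * dt * steps)
```

A real application would add the record's advection (transport), sorption and topography-dependent cell rules on top of this diffusion-decay core.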

  20. Bioenergetics modeling of percid fishes: Chapter 14

    Science.gov (United States)

    Madenjian, Charles P.; Kestemont, Patrick; Dabrowski, Konrad; Summerfelt, Robert C.

    2015-01-01

    A bioenergetics model for a percid fish represents a quantitative description of the fish’s energy budget. Bioenergetics modeling can be used to identify the important factors determining growth of percids in lakes, rivers, or seas. For example, bioenergetics modeling applied to yellow perch (Perca flavescens) in the western and central basins of Lake Erie revealed that the slower growth in the western basin was attributable to limitations in suitably sized prey in western Lake Erie, rather than differences in water temperature between the two basins. Bioenergetics modeling can also be applied to a percid population to estimate the amount of food being annually consumed by the percid population. For example, bioenergetics modeling applied to the walleye (Sander vitreus) population in Lake Erie has provided fishery managers valuable insights into changes in the population’s predatory demand over time. In addition, bioenergetics modeling has been used to quantify the effect of the difference in growth between the sexes on contaminant accumulation in walleye. Field and laboratory evaluations of percid bioenergetics model performance have documented a systematic bias, such that the models overestimate consumption at low feeding rates but underestimate consumption at high feeding rates. However, more recent studies have shown that this systematic bias was due, at least in part, to an error in the energy budget balancing algorithm used in the computer software. Future research work is needed to more thoroughly assess the field and laboratory performance of percid bioenergetics models and to quantify differences in activity and standard metabolic rate between the sexes of mature percids.
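
The energy budget that a bioenergetics model balances is commonly written C = G + R + F + U (consumption = growth + respiration + egestion + excretion). A minimal sketch of solving this budget for consumption, assuming the hypothetical simplification that egestion and excretion are fixed fractions of consumption (the coefficients are illustrative, not percid-specific values):

```python
def consumption_from_growth(growth_j, respiration_j,
                            egestion_frac=0.15, excretion_frac=0.10):
    """Solve C = G + R + F + U for C, with F = egestion_frac * C
    and U = excretion_frac * C (all quantities in joules)."""
    # C - (egestion_frac + excretion_frac) * C = G + R
    return (growth_j + respiration_j) / (1.0 - egestion_frac - excretion_frac)

# Hypothetical daily budget: 500 J of growth, 1000 J of respiration.
C = consumption_from_growth(growth_j=500.0, respiration_j=1000.0)
```

Running the budget in this direction (observed growth to estimated consumption) is how bioenergetics models yield the population-level predatory-demand estimates the record describes.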

  1. College Students' Underestimation of Blood Alcohol Concentration from Hypothetical Consumption of Supersized Alcopops: Results from a Cluster-Randomized Classroom Study.

    Science.gov (United States)

    Rossheim, Matthew E; Thombs, Dennis L; Krall, Jenna R; Jernigan, David H

    2018-05-30

    Supersized alcopops are a class of single-serving beverages popular among underage drinkers. These products contain large quantities of alcohol. This study examines the extent to which young adults recognize how intoxicated they would become from consuming these products. The study sample included 309 undergraduates who had consumed alcohol within the past year. Thirty-two sections of a college English course were randomized to 1 of 2 survey conditions, based on hypothetical consumption of supersized alcopops or beer of comparable liquid volume. Students were provided an empty can of 1 of the 2 beverages to help them answer the survey questions. Equation-calculated blood alcohol concentrations (BACs), based on body weight and sex, were compared to the students' self-estimated BACs for consuming 1, 2, and 3 cans of the beverage provided to them. In adjusted regression models, students randomized to the supersized alcopop group greatly underestimated their BAC, whereas students randomized to the beer group overestimated it. The supersized alcopop group underestimated their BAC by 0.04 (95% confidence interval [CI]: 0.034, 0.053), 0.09 (95% CI: 0.067, 0.107), and 0.13 g/dl (95% CI: 0.097, 0.163) compared to the beer group. When asked how much alcohol they could consume before it would be unsafe to drive, students in the supersized alcopop group had 7 times the odds of estimating consumption that would generate a calculated BAC of at least 0.08 g/dl, compared to those making estimates based on beer consumption (95% CI: 3.734, 13.025). Students underestimated the intoxication they would experience from consuming supersized alcopops. Revised product warning labels are urgently needed to clearly identify the number of standard drinks contained in a supersized alcopop can. Moreover, regulations are needed to limit the alcohol content of single-serving products. Copyright © 2018 by the Research Society on Alcoholism.
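
    The scale of the gap the study measures can be reproduced with the classical Widmark calculation. The can sizes, alcohol-by-volume figures and Widmark constants below are typical textbook values, not the exact products or equation used in the study:

    ```python
    ETHANOL_DENSITY = 0.789  # g/mL

    def grams_of_alcohol(volume_ml, abv):
        """Mass of ethanol in a drink of given volume and alcohol by volume."""
        return volume_ml * abv * ETHANOL_DENSITY

    def widmark_bac(alcohol_g, weight_kg, r):
        """Peak BAC in g/dL by the Widmark equation; r is the body-water
        distribution ratio (roughly 0.68 for men, 0.55 for women)."""
        return alcohol_g / (r * weight_kg * 1000.0) * 100.0

    # hypothetical 70 kg male drinker
    alcopop = grams_of_alcohol(695.0, 0.12)   # one ~23.5 oz can at 12% ABV
    beer = grams_of_alcohol(710.0, 0.042)     # two 12 oz beers, comparable volume
    bac_alcopop = widmark_bac(alcopop, 70.0, 0.68)
    bac_beer = widmark_bac(beer, 70.0, 0.68)
    ```

    Under these assumptions a single supersized can already yields a calculated peak BAC well above the 0.08 g/dl driving threshold for this hypothetical drinker, which is the kind of discrepancy the surveyed students failed to anticipate.
    
    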

  2. Equation-free analysis of agent-based models and systematic parameter determination

    Science.gov (United States)

    Thomas, Spencer A.; Lloyd, David J. B.; Skeldon, Anne C.

    2016-12-01

    Agent-based models (ABMs) are increasingly used in social science, economics, mathematics, biology and computer science to describe time-dependent systems in circumstances where a description in terms of equations is difficult. Yet few tools are currently available for the systematic analysis of ABM behaviour. Numerical continuation and bifurcation analysis is a well-established tool for the study of deterministic systems. Recently, equation-free (EF) methods have been developed to extend numerical continuation techniques to systems where the dynamics are described at a microscopic scale and continuation of a macroscopic property of the system is considered. To date, the practical use of EF methods has been limited by: (1) the overhead of application-specific implementation; (2) the laborious configuration of problem-specific parameters; and (3) large ensemble sizes (potentially) leading to computationally restrictive run-times. In this paper we address these issues with our tool for the EF continuation of stochastic systems, which includes algorithms to systematically configure problem-specific parameters and enhance robustness to noise. Our tool is generic and can be applied to any 'black-box' simulator and determines the essential EF parameters prior to EF analysis. Robustness is significantly improved using our convergence-constraint with a corrector-repeat (C3R) method. This algorithm automatically detects outliers based on the dynamics of the underlying system, enabling both an order-of-magnitude reduction in ensemble size and continuation of systems at much higher levels of noise than classical approaches. We demonstrate our method with application to several ABMs, revealing parameter dependence, bifurcation and stability analysis of these complex systems and giving a deep understanding of the dynamical behaviour of the models in a way that is not otherwise easily obtainable.
In each case we demonstrate our systematic parameter determination stage for
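
    The lift-evolve-restrict loop at the heart of equation-free analysis can be sketched against a toy 'black-box' simulator. The noisy logistic micro-model, ensemble size and damping factor below are arbitrary stand-ins for illustration, not the tool described in this record:

    ```python
    import random

    random.seed(0)

    def micro_step(x, r=2.0, dt=0.01, sigma=0.02):
        """Hypothetical black-box micro simulator: noisy logistic growth."""
        return x + dt * r * x * (1.0 - x) + sigma * dt ** 0.5 * random.gauss(0.0, 1.0)

    def coarse_map(macro_x, ensemble=200, steps=50):
        """Equation-free coarse time stepper: lift the macro state to an
        ensemble of micro copies, evolve each copy with the micro simulator,
        then restrict back to the macro level by averaging."""
        copies = [macro_x] * ensemble
        for _ in range(steps):
            copies = [micro_step(x) for x in copies]
        return sum(copies) / ensemble

    # damped fixed-point iteration on the coarse map locates the macro steady state
    x = 0.2
    for _ in range(100):
        x = 0.5 * x + 0.5 * coarse_map(x)
    ```

    In a full EF continuation this coarse map would feed a Newton or pseudo-arclength solver and be repeated across parameter values; the ensemble averaging step is where the outlier detection and ensemble-size reduction discussed above would apply.
    
    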

  3. A Systematic Review of Agent-Based Modelling and Simulation Applications in the Higher Education Domain

    Science.gov (United States)

    Gu, X.; Blackmore, K. L.

    2015-01-01

    This paper presents the results of a systematic review of agent-based modelling and simulation (ABMS) applications in the higher education (HE) domain. Agent-based modelling is a "bottom-up" modelling paradigm in which system-level behaviour (macro) is modelled through the behaviour of individual local-level agent interactions (micro).…

  4. Drought Persistence Errors in Global Climate Models

    Science.gov (United States)

    Moon, H.; Gudmundsson, L.; Seneviratne, S. I.

    2018-04-01

    The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM simulations to observation-based data sets. To do so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates of drought persistence, where a dry status is defined as a negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at the global scale. Subsequently, we analyzed the degree to which (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors, using an analysis of variance approach. The results show that at the monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At the annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
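
    The dry-to-dry transition estimate used here is straightforward to compute from a precipitation series. In the sketch below the anomaly baseline defaults to the series mean, a simplification of whatever climatology the study actually uses:

    ```python
    import numpy as np

    def drought_persistence(precip, baseline=None):
        """Dry-to-dry transition probability: of all time steps with a negative
        precipitation anomaly, the fraction whose successor is also dry."""
        baseline = precip.mean() if baseline is None else baseline
        dry = precip - baseline < 0            # dry status per time step
        n_dry = dry[:-1].sum()
        return (dry[:-1] & dry[1:]).sum() / n_dry if n_dry else float("nan")
    ```

    Comparing this statistic between a GCM run and an observational series gives the persistence bias the study analyses; a model value below the observed one is the systematic underestimation reported above.
    
    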

  5. The Canadian Centre for Climate Modelling and Analysis global coupled model and its climate

    Energy Technology Data Exchange (ETDEWEB)

    Flato, G.M.; Boer, G.J.; Lee, W.G.; McFarlane, N.A.; Ramsden, D.; Reader, M.C. [Canadian Centre for Climate Modelling and Analysis, Victoria, BC (Canada); Weaver, A.J. [School of Earth and Ocean Sciences, University of Victoria, BC (Canada)

    2000-06-01

    A global, three-dimensional climate model, developed by coupling the CCCma second-generation atmospheric general circulation model (GCM2) to a version of the GFDL modular ocean model (MOM1), forms the basis for extended simulations of past, current and projected future climate. The spin-up and coupling procedures are described, as is the resulting climate based on a 200 year model simulation with constant atmospheric composition and external forcing. The simulated climate is systematically compared to available observations in terms of mean climate quantities and their spatial patterns, temporal variability, and regional behavior. Such comparison demonstrates a generally successful reproduction of the broad features of mean climate quantities, albeit with local discrepancies. Variability is generally well-simulated over land, but somewhat underestimated in the tropical ocean and the extratropical storm-track regions. The modelled climate state shows only small trends, indicating a reasonable level of balance at the surface, which is achieved in part by the use of heat and freshwater flux adjustments. The control simulation provides a basis against which to compare simulated climate change due to historical and projected greenhouse gas and aerosol forcing as described in companion publications. (orig.)

  6. SU-F-T-132: Variable RBE Models Predict Possible Underestimation of Vaginal Dose for Anal Cancer Patients Treated Using Single-Field Proton Treatments

    Energy Technology Data Exchange (ETDEWEB)

    McNamara, A; Underwood, T; Wo, J; Paganetti, H [Massachusetts General Hospital & Harvard Medical School, Boston, MA (United States)

    2016-06-15

    Purpose: Anal cancer patients treated using a posterior proton beam may be at risk of vaginal wall injury due to the increased linear energy transfer (LET) and relative biological effectiveness (RBE) at the beam distal edge. We investigate the vaginal dose received. Methods: Five patients treated for anal cancer with proton pencil beam scanning were considered, all treated to a prescription dose of 54 Gy(RBE) over 28–30 fractions. Dose and LET distributions were calculated using the Monte Carlo simulation toolkit TOPAS. In addition to the standard assumption of a fixed RBE of 1.1, variable RBE was considered via the application of published models. Dose volume histograms (DVHs) were extracted for the planning treatment volume (PTV) and vagina, the latter being used to calculate the vaginal normal tissue complication probability (NTCP). Results: Compared to the assumption of a fixed RBE of 1.1, the variable RBE model predicts a dose increase of approximately 3.3 ± 1.7 Gy at the end of the beam range. NTCP parameters for the vagina are incomplete in the current literature; however, inferring value ranges from the existing data, we use D50 = 50 Gy and Lyman-Kutcher-Burman (LKB) model parameters a=1–2 and m=0.2–0.4. We estimate the NTCP for the vagina to be 37–48% and 42–47% for the fixed and variable RBE cases, respectively. Additionally, a difference in the dose distribution was observed between the analytical calculation and Monte Carlo methods. We find that the target dose is overestimated on average by approximately 1–2%. Conclusion: For patients treated with posterior beams, the vaginal wall may coincide with the distal end of the proton beam and may receive a substantial increase in dose if variable RBE models are applied compared to using the current clinical standard of RBE equal to 1.1. This could potentially lead to underestimating toxicities when treating with protons.
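
    The NTCP estimate described here follows the standard LKB formulation. The two-bin DVH in the usage example is a hypothetical stand-in; D50 = 50 Gy and the a, m values are taken from the ranges quoted in the abstract:

    ```python
    import math

    def geud(dvh, a):
        """Generalised equivalent uniform dose from a DVH given as
        (dose in Gy, fractional volume) pairs."""
        return sum(v * d ** a for d, v in dvh) ** (1.0 / a)

    def lkb_ntcp(dvh, d50=50.0, m=0.3, a=1.5):
        """Lyman-Kutcher-Burman NTCP: standard-normal CDF of the gEUD
        deviation from D50, scaled by m * D50."""
        t = (geud(dvh, a) - d50) / (m * d50)
        return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

    # hypothetical vaginal DVH: 60% of the volume at 52 Gy, 40% at 40 Gy
    ntcp = lkb_ntcp([(52.0, 0.6), (40.0, 0.4)], d50=50.0, m=0.3, a=1.5)
    ```

    Applying variable-RBE dose distributions simply shifts the DVH fed to this function upward at the distal edge, which is how the fixed-RBE and variable-RBE NTCP ranges above are compared.
    
    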

  7. Comparing interval estimates for small sample ordinal CFA models.

    Science.gov (United States)

    Natesan, Prathiba

    2015-01-01

    Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. 
The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
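
    Coverage and bias of an interval estimator can be checked directly by simulation. The sketch below does this for the textbook Fisher-z interval for a correlation, as a stand-in for the RML/AGLS and Bayesian intervals studied here:

    ```python
    import numpy as np

    def coverage_fisher_ci(rho=0.5, n=20, reps=2000, seed=1):
        """Monte Carlo coverage check: the fraction of nominal-95% Fisher-z
        confidence intervals for a correlation that contain the true value.
        Values well below 0.95 would indicate undercoverage."""
        rng = np.random.default_rng(seed)
        cov = [[1.0, rho], [rho, 1.0]]
        hits = 0
        for _ in range(reps):
            x = rng.multivariate_normal([0.0, 0.0], cov, size=n)
            r = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
            z, se = np.arctanh(r), 1.0 / np.sqrt(n - 3)
            lo, hi = np.tanh(z - 1.96 * se), np.tanh(z + 1.96 * se)
            hits += lo <= rho <= hi
        return hits / reps

    cov95 = coverage_fisher_ci()
    ```

    The same loop, with the estimator swapped out, is how coverage and the direction of interval bias (counting intervals that fall entirely above or below the true value) are assessed in studies like this one.
    
    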

  8. Systematizing Web Search through a Meta-Cognitive, Systems-Based, Information Structuring Model (McSIS)

    Science.gov (United States)

    Abuhamdieh, Ayman H.; Harder, Joseph T.

    2015-01-01

    This paper proposes a meta-cognitive, systems-based, information structuring model (McSIS) to systematize online information search behavior, based on a literature review of information-seeking models. The General Systems Theory's (GST) propositions serve as its framework. Factors influencing information-seekers, such as the individual learning…

  9. Business model framework applications in health care: A systematic review.

    Science.gov (United States)

    Fredriksson, Jens Jacob; Mazzocato, Pamela; Muhammed, Rafiq; Savage, Carl

    2017-11-01

    It has proven to be a challenge for health care organizations to achieve the Triple Aim. In the business literature, business model frameworks have been used to understand how organizations are aligned to achieve their goals. We conducted a systematic literature review with an explanatory synthesis approach to understand how business model frameworks have been applied in health care. We found a large increase in applications of business model frameworks during the last decade. E-health was the most common context of application. We identified six applications of business model frameworks: business model description, financial assessment, classification based on pre-defined typologies, business model analysis, development, and evaluation. Our synthesis suggests that the choice of business model framework and constituent elements should be informed by the intent and context of application. We see a need for harmonization in the choice of elements in order to increase generalizability, simplify application, and help organizations realize the Triple Aim.

  10. Tick-borne encephalitis (TBE): an underestimated risk…still: report of the 14th annual meeting of the International Scientific Working Group on Tick-Borne Encephalitis (ISW-TBE).

    Science.gov (United States)

    Kunze, Ursula

    2012-06-01

    Today, the risk of getting tick-borne encephalitis (TBE) is still underestimated in many parts of Europe and worldwide. Therefore, the 14th meeting of the International Scientific Working Group on Tick-Borne Encephalitis (ISW-TBE) - a group of neurologists, general practitioners, clinicians, travel physicians, virologists, pediatricians, and epidemiologists - was held under the title "Tick-borne encephalitis: an underestimated risk…still". Among the discussed issues were: TBE, an underestimated risk in children, a case report in two Dutch travelers, the very emotional report of a tick victim, an overview of the epidemiological situation, investigations to detect new TBE cases in Italy, TBE virus (TBEV) strains circulation in Northern Europe, TBE Program of the European Centre for Disease Prevention and Control (ECDC), efforts to increase the TBE vaccination rate in the Czech Republic, positioning statement of the World Health Organization (WHO), and TBE in dogs. To answer the question raised above: Yes, the risk of getting TBE is underestimated in children and adults, because awareness is still too low. It is still underestimated in several areas of Europe, where, for a lack of human cases, TBEV is thought to be absent. It is underestimated in travelers, because they still do not know enough about the risk, and diagnostic awareness in non-endemic countries is still low. Copyright © 2012. Published by Elsevier GmbH. All rights reserved.

  11. How and why DNA barcodes underestimate the diversity of microbial eukaryotes.

    Directory of Open Access Journals (Sweden)

    Gwenael Piganeau

    Full Text Available BACKGROUND: Because many picoplanktonic eukaryotic species cannot currently be maintained in culture, direct sequencing of PCR-amplified 18S ribosomal gene DNA fragments from filtered sea-water has been successfully used to investigate the astounding diversity of these organisms. The recognition of many novel planktonic organisms is thus based solely on their 18S rDNA sequence. However, a species delimited by its 18S rDNA sequence might contain many cryptic species, which are highly differentiated in their protein coding sequences. PRINCIPAL FINDINGS: Here, we investigate the issue of species identification from one gene to the whole genome sequence. Using 52 whole genome DNA sequences, we estimated the global genetic divergence in protein coding genes between organisms from different lineages and compared this to their ribosomal gene sequence divergences. We show that this relationship between proteome divergence and 18S divergence is lineage dependent. Unicellular lineages have especially low 18S divergences relative to their protein sequence divergences, suggesting that 18S ribosomal genes are too conservative to assess planktonic eukaryotic diversity. We provide an explanation for this lineage dependency, which suggests that most species with large effective population sizes will show far less divergence in 18S than protein coding sequences. CONCLUSIONS: There is therefore a trade-off between using genes that are easy to amplify in all species, but which by their nature are highly conserved and underestimate the true number of species, and using genes that give a better description of the number of species, but which are more difficult to amplify. We have shown that this trade-off differs between unicellular and multicellular organisms as a likely consequence of differences in effective population sizes. We anticipate that biodiversity of microbial eukaryotic species is underestimated and that numerous "cryptic species" will become

  12. Systematic comparison of model polymer nanocomposite mechanics.

    Science.gov (United States)

    Xiao, Senbo; Peter, Christine; Kremer, Kurt

    2016-09-13

    Polymer nanocomposites render a range of outstanding materials, from natural products such as silk, sea shells and bones to synthesized nanoclay- or carbon-nanotube-reinforced polymer systems. In contrast to the fast-expanding interest in this type of material, the fundamental mechanisms of their mixing, phase behavior and reinforcement, especially for higher nanoparticle content as relevant for bio-inorganic composites, are still not fully understood. Although polymer nanocomposites exhibit diverse morphologies, qualitatively their mechanical properties are believed to be governed by a few parameters, namely their internal polymer network topology, nanoparticle volume fraction, particle surface properties and so on. Relating material mechanics to such elementary parameters is the purpose of this work. By taking a coarse-grained molecular modeling approach, we study a range of different polymer nanocomposites. We vary polymer-nanoparticle connectivity, surface geometry and volume fraction to systematically study rheological/mechanical properties. Our models cover different materials and reproduce key characteristics of real nanocomposites, such as phase separation and mechanical reinforcement. The results shed light on establishing elementary structure-property-function relationships of polymer nanocomposites.

  13. A trans-Amazonian screening of mtDNA reveals deep intraspecific divergence in forest birds and suggests a vast underestimation of species diversity.

    Directory of Open Access Journals (Sweden)

    Borja Milá

    Full Text Available The Amazonian avifauna remains severely understudied relative to that of the temperate zone, and its species richness is thought to be underestimated by current taxonomy. Recent molecular systematic studies using mtDNA sequence reveal that traditionally accepted species-level taxa often conceal genetically divergent subspecific lineages found to represent new species upon close taxonomic scrutiny, suggesting that intraspecific mtDNA variation could be useful in species discovery. Surveys of mtDNA variation in Holarctic species have revealed patterns of variation that are largely congruent with species boundaries. However, little information exists on intraspecific divergence in most Amazonian species. Here we screen intraspecific mtDNA genetic variation in 41 Amazonian forest understory species belonging to 36 genera and 17 families in 6 orders, using 758 individual samples from Ecuador and French Guiana. For 13 of these species, we also analyzed trans-Andean populations from the Ecuadorian Chocó. A consistent pattern of deep intraspecific divergence among trans-Amazonian haplogroups was found for 33 of the 41 taxa, and genetic differentiation and genetic diversity among them was highly variable, suggesting a complex range of evolutionary histories. Mean sequence divergence within families was the same as that found in North American birds (13%), yet mean intraspecific divergence in Neotropical species was an order of magnitude larger (2.13% vs. 0.23%), with mean distance between intraspecific lineages reaching 3.56%. We found no clear relationship between genetic distances and differentiation in plumage color. Our results identify numerous genetically and phenotypically divergent lineages which may result in new species-level designations upon closer taxonomic scrutiny and thorough sampling, although lineages in the tropical region could be older than those in the temperate zone without necessarily representing separate species. In

  14. An Approach to Remove the Systematic Bias from the Storm Surge forecasts in the Venice Lagoon

    Science.gov (United States)

    Canestrelli, A.

    2017-12-01

    In this work a novel approach is proposed for removing the systematic bias from the storm surge forecast computed by a two-dimensional shallow-water model. The model covers both the Adriatic and Mediterranean seas and provides the forecast at the entrance of the Venice Lagoon. The wind drag coefficient at the water-air interface is treated as a calibration parameter, with a different value for each range of wind velocities and wind directions. This sums up to a total of 16-64 parameters to be calibrated, depending on the chosen resolution. The best set of parameters is determined by means of an optimization procedure, which minimizes the RMS error between measured and modeled water levels in Venice for the period 2011-2015. It is shown that a bias is present, in that the peaks of wind velocity provided by the weather forecast are largely underestimated, and that the calibration procedure removes this bias. When the calibrated model is used to reproduce events not included in the calibration dataset, the forecast error is strongly reduced, confirming the quality of the procedure. The proposed approach is not site-specific and could be applied to different situations, such as storm surges caused by intense hurricanes.
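
    The per-bin drag calibration can be sketched as a coordinate-descent grid search. The quadratic toy surge model, the wind bins and the candidate multipliers below are hypothetical placeholders for the 2-D shallow-water model and its 16-64 calibrated parameters:

    ```python
    import numpy as np

    def rmse(a, b):
        return float(np.sqrt(np.mean((a - b) ** 2)))

    def calibrate_drag(wind, observed, surge_model, bins, candidates, sweeps=3):
        """Pick, for each wind-speed bin, the drag multiplier that minimises
        the RMS error between modelled and observed surge, by greedy
        coordinate descent over the bins."""
        which = np.digitize(wind, bins)          # bin index per observation
        coeff = np.ones(len(bins) + 1)
        for _ in range(sweeps):
            for b in range(len(coeff)):
                def err(c):
                    trial = coeff.copy()
                    trial[b] = c
                    return rmse(surge_model(wind, trial[which]), observed)
                coeff[b] = min(candidates, key=err)
        return coeff

    # toy quadratic-drag surge model and synthetic "observations"
    toy = lambda w, cd: 1e-3 * cd * w ** 2
    wind = np.array([5.0, 8.0, 12.0, 20.0])
    true = np.array([0.5, 1.5])      # forecast winds above 10 m/s need a larger multiplier
    obs = toy(wind, true[np.digitize(wind, [10.0])])
    fitted = calibrate_drag(wind, obs, toy, bins=[10.0],
                            candidates=np.arange(0.25, 2.01, 0.25))
    ```

    Inflating the drag coefficient in the high-wind bins is exactly how the procedure compensates for forecast winds whose peaks are underestimated.
    
    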

  15. Social cure, what social cure? The propensity to underestimate the importance of social factors for health.

    Science.gov (United States)

    Haslam, S Alexander; McMahon, Charlotte; Cruwys, Tegan; Haslam, Catherine; Jetten, Jolanda; Steffens, Niklas K

    2018-02-01

    Recent meta-analytic research indicates that social support and social integration are highly protective against mortality, and that their importance is comparable to, or exceeds, that of many established behavioural risks such as smoking, high alcohol consumption, lack of exercise, and obesity that are the traditional focus of medical research (Holt-Lunstad et al., 2010). The present study examines perceptions of the contribution of these various factors to life expectancy within the community at large. American and British community respondents (N = 502) completed an on-line survey assessing the perceived importance of social and behavioural risk factors for mortality. As hypothesized, while respondents' perceptions of the importance of established behavioural risks were positively and highly correlated with their actual importance, social factors were seen to be far less important for health than they actually are. As a result, overall, there was a small but significant negative correlation between the perceived benefits and the actual benefits of different social and behavioural factors. Men, younger participants, and participants with a lower level of education were more likely to underestimate the importance of social factors for health. There was also evidence that underestimation was predicted by a cluster of ideological factors, the most significant of which was respondents' respect for prevailing convention and authorities as captured by Right-Wing Authoritarianism. Findings suggest that while people generally underestimate the importance of social factors for health, this also varies as a function of demographic and ideological factors. They point to a range of challenges confronting those who seek to promote greater awareness of the importance of social factors for health. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Measuring and modelling the effects of systematic non-adherence to mass drug administration

    Directory of Open Access Journals (Sweden)

    Louise Dyson

    2017-03-01

    Full Text Available It is well understood that the success or failure of a mass drug administration campaign critically depends on the level of coverage achieved. To that end, coverage levels are often closely scrutinised during campaigns, and the response to underperforming campaigns is to attempt to improve coverage. Modelling work has indicated, however, that the quality of the coverage achieved may also have a significant impact on the outcome. If the rounds of treatment tend to miss the same people every time, this can have a serious detrimental effect on the campaign outcome. We begin by reviewing the current modelling descriptions of this effect and introduce a new modelling framework that can be used to simulate a given level of systematic non-adherence. We formalise the likelihood that people may miss several rounds of treatment using the correlation in the attendance of different rounds. Using two very simplified models of the infection of helminths and non-helminths, respectively, we demonstrate that the modelling description used and the correlation included between treatment rounds can have a profound effect on the time to elimination of disease in a population. It is therefore clear that more detailed coverage data are required to accurately predict the time to disease elimination. We review published coverage data in which individuals are asked how many previous rounds they have attended, and show how this information may be used to assess the level of systematic non-adherence. We note that while the coverages in the data found range from 40.5% to 95.5%, the correlations found lie in a fairly narrow range (between 0.2806 and 0.5351). This indicates that the level of systematic non-adherence may be similar even in data from different years, countries, diseases and administered drugs.
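
    One common way to encode "missing the same people every round" is a beta-Bernoulli model, sketched below with coverage and correlation values inside the ranges reported in the reviewed data. This illustrates the general idea, not the specific framework introduced in the paper:

    ```python
    import numpy as np

    def simulate_attendance(coverage, rho, people=10000, rounds=5, seed=0):
        """Beta-Bernoulli model of systematic non-adherence: each person gets a
        fixed attendance propensity p_i ~ Beta(a, b); each round is then an
        independent Bernoulli(p_i) draw.  Choosing a + b = 1/rho - 1 makes the
        between-round attendance correlation equal rho while the expected
        coverage stays `coverage`."""
        rng = np.random.default_rng(seed)
        s = 1.0 / rho - 1.0
        p = rng.beta(coverage * s, (1.0 - coverage) * s, size=people)
        return rng.random((people, rounds)) < p[:, None]

    att = simulate_attendance(coverage=0.7, rho=0.4)
    never = (~att).all(axis=1).mean()   # fraction missing every round
    ```

    With independent rounds the never-treated fraction would be (1 - 0.7)^5, about 0.2%; the correlated model leaves a far larger group untreated at the same headline coverage, which is why coverage quality matters for the time to elimination.
    
    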

  17. Systematically too low values of the cranking model collective inertia parameters

    International Nuclear Information System (INIS)

    Dudek, I.; Dudek, W.; Lukasiak-Ruchowska, E.; Skalski, I.

    1980-01-01

    Deformed Nilsson and Woods-Saxon potentials were employed to generate the single-particle states used for calculating the inertia tensor (cranking model with monopole pairing) and the collective energy surfaces (Strutinsky method). The deformation was parametrized in terms of quadrupole and hexadecapole degrees of freedom. The classical energy expression obtained from the inertia tensor and energy surfaces was quantized and the resulting stationary Schroedinger equation was solved with an approximate method. The energies of the second I^π = 0+ collective levels were calculated for the rare-earth and actinide nuclei and compared with the experimental data. The vibrational level energies agree with the experimental ones much better for spherical nuclei for both single-particle potentials; for deformed nuclei the calculated energies overestimate the experimental ones by roughly a factor of two. It is argued that coupling of the axially symmetric quadrupole degrees of freedom to non-axial and hexadecapole ones does not affect the conclusion that the mass parameter values are systematically too low. An alternative explanation of the systematic deviations of the second 0+ level energies could be a systematically too high stiffness of the energy surfaces obtained with the Strutinsky method. (orig.)
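
    The direction of the discrepancy follows from the standard harmonic relation between the vibrational energy, the stiffness C and the mass parameter B (stated here for orientation, not taken from the record):

    ```latex
    E_{0^{+}_{2}} \simeq \hbar\omega = \hbar\sqrt{\frac{C}{B}},
    \qquad\text{so}\qquad
    \frac{E_{\text{calc}}}{E_{\text{exp}}} \approx 2
    \;\Longleftrightarrow\;
    \frac{B_{\text{calc}}}{B_{\text{true}}} \approx \tfrac{1}{4}
    \quad (\text{at fixed } C).
    ```

    Since E scales as B^{-1/2} at fixed C (and as C^{1/2} at fixed B), level energies that come out too high by a factor of two are consistent either with cranking mass parameters that are too small by roughly a factor of four or with Strutinsky energy surfaces that are too stiff by the same factor, which is exactly the alternative raised in the abstract.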

  18. Drastic underestimation of amphipod biodiversity in the endangered Irano-Anatolian and Caucasus biodiversity hotspots.

    Science.gov (United States)

    Katouzian, Ahmad-Reza; Sari, Alireza; Macher, Jan N; Weiss, Martina; Saboori, Alireza; Leese, Florian; Weigand, Alexander M

    2016-03-01

    Biodiversity hotspots are centers of biological diversity and particularly threatened by anthropogenic activities. Their true magnitude of species diversity and endemism, however, is still largely unknown as species diversity is traditionally assessed using morphological descriptions only, thereby ignoring cryptic species. This directly limits evidence-based monitoring and management strategies. Here we used molecular species delimitation methods to quantify cryptic diversity of the montane amphipods in the Irano-Anatolian and Caucasus biodiversity hotspots. Amphipods are ecosystem engineers in rivers and lakes. Species diversity was assessed by analysing two genetic markers (mitochondrial COI and nuclear 28S rDNA), compared with morphological assignments. Our results unambiguously demonstrate that species diversity and endemism is dramatically underestimated, with 42 genetically identified freshwater species in only five reported morphospecies. Over 90% of the newly recovered species cluster inside Gammarus komareki and G. lacustris; 69% of the recovered species comprise narrow range endemics. Amphipod biodiversity is drastically underestimated for the studied regions. Thus, the risk of biodiversity loss is significantly greater than currently inferred as most endangered species remain unrecognized and/or are only found locally. Integrative application of genetic assessments in monitoring programs will help to understand the true magnitude of biodiversity and accurately evaluate its threat status.

  19. Clinical information modeling processes for semantic interoperability of electronic health records: systematic review and inductive analysis.

    Science.gov (United States)

    Moreno-Conde, Alberto; Moner, David; Cruz, Wellington Dimas da; Santos, Marcelo R; Maldonado, José Alberto; Robles, Montserrat; Kalra, Dipak

    2015-07-01

    This systematic review aims to identify and compare the existing processes and methodologies that have been published in the literature for defining clinical information models (CIMs) that support the semantic interoperability of electronic health record (EHR) systems. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology, the authors reviewed papers published between 2000 and 2013 that covered the semantic interoperability of EHRs, found by searching the PubMed, IEEE Xplore, and ScienceDirect databases. Additionally, after selection of a final group of articles, an inductive content analysis was done to summarize the steps and methodologies followed in order to build the CIMs described in those articles. Three hundred and seventy-eight articles were screened and thirty-six were selected for full review. The articles selected for full review were analyzed to extract relevant information for the analysis and characterized according to the steps the authors had followed for clinical information modeling. Most of the reviewed papers lack a detailed description of the modeling methodologies used to create CIMs. A representative example is the lack of description related to the definition of terminology bindings and the publication of the generated models. However, this systematic review confirms that most clinical information modeling activities follow very similar steps for the definition of CIMs. Having a robust and shared methodology could improve their correctness, reliability, and quality. Independently of implementation technologies and standards, it is possible to find common patterns in methods for developing CIMs, suggesting the viability of defining a unified good-practice methodology to be used by any clinical information modeler. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. 
For Permissions, please email: journals.permissions@oup.com.

  20. Economic Evaluations of Multicomponent Disease Management Programs with Markov Models: A Systematic Review.

    Science.gov (United States)

    Kirsch, Florian

    2016-12-01

    Disease management programs (DMPs) for chronic diseases are being increasingly implemented worldwide. This review presents a systematic overview of the economic effects of DMPs evaluated with Markov models: the quality of the models is assessed, the method by which the DMP intervention is incorporated into the model is examined, and the differences in the structure and data used in the models are considered. A literature search was conducted; the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement was followed to ensure systematic selection of the articles. Study characteristics, e.g. results, the intensity of the DMP and usual care, model design, time horizon, discount rates, utility measures, and cost of illness, were extracted from the reviewed studies. Model quality was assessed by two researchers with two different appraisals: one proposed by Philips et al. (Good practice guidelines for decision-analytic modelling in health technology assessment: a review and consolidation of quality assessment. Pharmacoeconomics 2006;24:355-71) and the other proposed by Caro et al. (Questionnaire to assess relevance and credibility of modeling studies for informing health care decision making: an ISPOR-AMCP-NPC Good Practice Task Force report. Value Health 2014;17:174-82). A total of 16 studies (9 on chronic heart disease, 2 on asthma, and 5 on diabetes) met the inclusion criteria. Five studies reported cost savings and 11 studies reported additional costs. In the quality assessment, the overall score of the models ranged from 39% to 65% on the first appraisal and from 34% to 52% on the second. Eleven models integrated effectiveness derived from a clinical trial or a meta-analysis of complete DMPs, and only five models combined intervention effects from different sources into a DMP. The main limitations of the models are poor reporting practice and the variation in the selection of input parameters.
Eleven of the 14 studies reported cost-effectiveness results of less than $30,000 per quality-adjusted life-year and
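The Markov cohort models surveyed above share a common skeleton: a cohort is propagated through health states, accumulating discounted costs and quality-adjusted life-years (QALYs), and an incremental cost-effectiveness ratio (ICER) is computed between the DMP and usual care. A minimal sketch of that skeleton, with entirely hypothetical states, transition probabilities, costs, and utilities (none taken from the reviewed studies):

```python
def run_cohort(trans, cost, utility, cycles=10, discount=0.03):
    """Propagate a cohort through a 3-state Markov chain, returning
    discounted cost and QALYs per person over the time horizon."""
    state = [1.0, 0.0, 0.0]  # everyone starts in state 0 ('stable')
    total_cost = total_qaly = 0.0
    for t in range(cycles):
        d = 1.0 / (1.0 + discount) ** t  # annual discount factor
        total_cost += d * sum(s * c for s, c in zip(state, cost))
        total_qaly += d * sum(s * u for s, u in zip(state, utility))
        # one cycle of transitions: state_j = sum_i state_i * P(i -> j)
        state = [sum(state[i] * trans[i][j] for i in range(3)) for j in range(3)]
    return total_cost, total_qaly

# Hypothetical states: 0 = stable, 1 = complication, 2 = dead (absorbing).
# The DMP is assumed to lower complication and death probabilities at the
# price of a higher per-cycle programme cost.
usual_care = [[0.85, 0.10, 0.05], [0.00, 0.90, 0.10], [0.0, 0.0, 1.0]]
dmp        = [[0.90, 0.06, 0.04], [0.00, 0.92, 0.08], [0.0, 0.0, 1.0]]

cost_uc, qaly_uc = run_cohort(usual_care, cost=[500, 5000, 0], utility=[0.85, 0.60, 0.0])
cost_dmp, qaly_dmp = run_cohort(dmp, cost=[1000, 5000, 0], utility=[0.85, 0.60, 0.0])

icer = (cost_dmp - cost_uc) / (qaly_dmp - qaly_uc)
print(f"incremental cost per QALY gained: {icer:,.0f}")
```

Published models differ mainly in state granularity, cycle length, and how the DMP effect enters the transition probabilities, which is exactly the heterogeneity the review documents.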

  1. Non-differential underestimation may cause a threshold effect of exposure to appear as a dose-response relationship

    NARCIS (Netherlands)

    Verkerk, P. H.; Buitendijk, S. E.

    1992-01-01

    It is generally believed that non-differential misclassification will lead to a bias toward the null-value. However, using one graphical and one numerical example, we show that in situations where underestimation rather than overestimation is the problem, non-differential misclassification may lead to a threshold effect of exposure appearing as a dose-response relationship.

  2. Underestimation of weight and its associated factors in overweight and obese university students from 21 low, middle and emerging economy countries.

    Science.gov (United States)

    Peltzer, Karl; Pengpid, Supa

    2015-01-01

    Awareness of overweight status is an important factor in weight control and may have more impact on one's decision to lose weight than objective weight status. The purpose of this study was to assess the prevalence of underestimation of overweight/obesity and its associated factors among university students from 21 low, middle and emerging economy countries. In a cross-sectional survey, the total sample included 15,068 undergraduate university students (mean age 20.8, SD=2.8, age range 16-30 years) from 21 countries. Anthropometric measurements and a self-administered questionnaire were used to collect data. The prevalence of weight underestimation (perceiving oneself as normal weight or underweight) among overweight or obese university students was 33.3% (41% in men and 25.1% in women); among overweight students, 39% felt they were of normal weight or underweight, and among obese students, 67% did not rate themselves as obese or very overweight. In multivariate logistic regression analysis, being male, poor subjective health status, lack of overweight health risk awareness, lack of importance attached to losing weight, not trying and not dieting to lose weight, and eating breakfast regularly were associated with underestimation of weight in overweight and obese university students. The study found a high prevalence of underestimation of overweight/obesity among university students. Several of the factors identified can be utilized in health promotion programmes, including diet and weight management behaviours, that target inaccurate weight perceptions in the design of weight control interventions, in particular for men.

  3. THE SYSTEMATICS OF STRONG LENS MODELING QUANTIFIED: THE EFFECTS OF CONSTRAINT SELECTION AND REDSHIFT INFORMATION ON MAGNIFICATION, MASS, AND MULTIPLE IMAGE PREDICTABILITY

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, Traci L.; Sharon, Keren, E-mail: tljohn@umich.edu [University of Michigan, Department of Astronomy, 1085 South University Avenue, Ann Arbor, MI 48109-1107 (United States)

    2016-11-20

    Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.

  4. Systematic model development for partial nitrification of landfill leachate in a SBR

    DEFF Research Database (Denmark)

    Ganigue, R.; Volcke, E.I.P.; Puig, S.

    2010-01-01

    Following a systematic procedure, the model was successfully constructed, calibrated and validated using data from short-term (one cycle) operation of the PN-SBR. The evaluation of the model revealed a good fit to the main physical-chemical measurements (ammonium, nitrite, nitrate and inorganic carbon, among others), confirmed by statistical tests. Good model fits were also obtained for pH, despite a slight bias in pH prediction, probably caused by the high salinity of the leachate. Future work will address the model-based evaluation of the interaction of different factors (aeration, stripping, pH, inhibitions, among others) and their impact on the process performance.

  5. A new algorithm for reducing the workload of experts in performing systematic reviews.

    Science.gov (United States)

    Matwin, Stan; Kouznetsov, Alexandre; Inkpen, Diana; Frunza, Oana; O'Blenis, Peter

    2010-01-01

    To determine whether a factorized version of the complement naïve Bayes (FCNB) classifier can reduce the time spent by experts reviewing journal articles for inclusion in systematic reviews of drug class efficacy for disease treatment. The proposed classifier was evaluated on a test collection built from 15 systematic drug class reviews used in previous work. The FCNB classifier was constructed to classify each article as containing high-quality, drug class-specific evidence or not. Weight engineering (WE) techniques were added to reduce underestimation for Medical Subject Headings (MeSH)-based and Publication Type (PubType)-based features. Cross-validation experiments were performed to evaluate the classifier's parameters and performance. Work saved over sampling (WSS) at no less than a 95% recall was used as the main measure of performance. The minimum workload reduction for a systematic review for one topic, achieved with a FCNB/WE classifier, was 8.5%; the maximum was 62.2% and the average over the 15 topics was 33.5%. This is 15.0% higher than the average workload reduction obtained using a voting perceptron-based automated citation classification system. The FCNB/WE classifier is simple, easy to implement, and produces significantly better results in reducing the workload than previously achieved. The results support it being a useful algorithm for machine-learning-based automation of systematic reviews of drug class efficacy for disease treatment.
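The headline metric here, work saved over sampling (WSS), measures how much screening effort a classifier saves relative to reading a random sample that achieves the same recall. A small sketch of the standard formula; the labels below are invented toy data, not drawn from the review collection:

```python
def wss(y_true, y_pred):
    """Work saved over sampling: (TN + FN) / N - (1 - recall).
    y_true: 1 if the article truly contains relevant evidence.
    y_pred: 1 if the classifier flags the article for human review."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    recall = tp / (tp + fn)
    return (tn + fn) / len(y_true) - (1 - recall)

# Toy collection: 100 abstracts, 10 truly relevant. The classifier flags
# 30 abstracts and catches all 10 relevant ones (recall = 1.0), so the
# reviewers skip the other 70: a work saving of 0.70.
y_true = [1] * 10 + [0] * 90
y_pred = [1] * 10 + [1] * 20 + [0] * 70
print(wss(y_true, y_pred))  # 0.7
```

In the paper the metric is reported at a fixed 95% recall (WSS@95), i.e. the classifier threshold is tuned until recall reaches 0.95 before the saving is computed.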

  6. Illustrating the benefit of using hourly monitoring data on secondary inorganic aerosol and its precursors for model evaluation

    Directory of Open Access Journals (Sweden)

    M. Schaap

    2011-11-01

    Secondary inorganic aerosol, most notably ammonium nitrate and ammonium sulphate, is an important contributor to ambient particulate mass and provides a means for long-range transport of acidifying components. The modelling of the formation and fate of these components is challenging. In particular, the formation of the semi-volatile ammonium nitrate is strongly dependent on ambient conditions and the precursor concentrations. For the first time, an hourly artefact-free data set from the MARGA instrument is available for the period of a full year (1 August 2007 to 1 August 2008) at Cabauw, the Netherlands. This data set is used to verify the results of the LOTOS-EUROS model. The comparison showed that the model underestimates the SIA levels. Closer inspection revealed that baseline values appear well estimated for ammonium and sulphate and that the underestimation predominantly takes place at the peak concentrations. For nitrate the variability towards high concentrations is much better captured; however, a systematic relative underestimation was found. The model is able to reproduce many features of the intra-day variability observed for SIA. Although the model captures the seasonal and average diurnal variation of the SIA components, the modelled variability for the nitrate precursor gas nitric acid is much too large. It was found that the thermodynamic equilibrium module produces a too stable ammonium nitrate in winter and during night-time in summer, whereas during the daytime in summer it is too unstable. We recommend improving the model by verifying the equilibrium module, including coarse-mode nitrate, and addressing the processes concerning SIA formation, combined with a detailed analysis of the data set at hand. The benefit of the hourly data with both particulate and gas phase concentrations is illustrated, and a continuation of these measurements may prove to be very useful in future model evaluation and improvement studies.

  7. Echocardiography underestimates stroke volume and aortic valve area: implications for patients with small-area low-gradient aortic stenosis.

    Science.gov (United States)

    Chin, Calvin W L; Khaw, Hwan J; Luo, Elton; Tan, Shuwei; White, Audrey C; Newby, David E; Dweck, Marc R

    2014-09-01

    Discordance between small aortic valve area (AVA) and low mean pressure gradient (MPG) is common in aortic stenosis (AS) and may in part reflect underestimation of left ventricular outflow tract area (LVOTarea) and stroke volume, alongside inconsistencies in recommended thresholds. One hundred thirty-three patients with mild to severe AS and 33 control individuals underwent comprehensive echocardiography and cardiovascular magnetic resonance imaging (MRI). Stroke volume and LVOTarea were calculated using echocardiography and MRI, and the effects on AVA estimation were assessed. The relationship between AVA and MPG measurements was then modelled with nonlinear regression and consistent thresholds for these parameters calculated. Finally, the effect of these modified AVA measurements and novel thresholds on the number of patients with small-area low-gradient AS was investigated. Compared with MRI, echocardiography underestimated LVOTarea (n = 40; -0.7 cm²; 95% confidence interval [CI], -2.6 to 1.3), stroke volume (-6.5 mL/m²; 95% CI, -28.9 to 16.0) and, consequently, AVA (-0.23 cm²; 95% CI, -1.01 to 0.59). Moreover, an AVA of 1.0 cm² corresponded to an MPG of 24 mm Hg based on echocardiographic measurements and 37 mm Hg after correction with MRI-derived stroke volumes. Based on conventional measures, 56 patients had discordant small-area low-gradient AS. Using MRI-derived stroke volumes and the revised thresholds, a 48% reduction in discordance was observed (n = 29). Echocardiography underestimated LVOTarea, stroke volume, and therefore AVA, compared with MRI. The thresholds based on current guidelines were also inconsistent. In combination, these factors explain >40% of patients with discordant small-area low-gradient AS.
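The AVA values discussed above are obtained from the continuity equation, where the LVOT area enters directly, so any error in the measured LVOT diameter is squared into the area and passed straight through to AVA. A sketch with hypothetical measurements (the numbers are illustrative, not taken from the study):

```python
import math

def aortic_valve_area(lvot_diameter, vti_lvot, vti_av):
    """Continuity equation: AVA = A_LVOT * VTI_LVOT / VTI_AV,
    with A_LVOT = pi * (d/2)**2 assuming a circular outflow tract.
    Units: diameter in cm, velocity-time integrals in cm -> AVA in cm^2."""
    a_lvot = math.pi * (lvot_diameter / 2.0) ** 2
    return a_lvot * vti_lvot / vti_av

# A 5% underestimate of the LVOT diameter (2.0 -> 1.9 cm) becomes a
# ~10% underestimate of AVA, because area scales with diameter squared.
ava_mri = aortic_valve_area(2.0, 20.0, 60.0)
ava_echo = aortic_valve_area(1.9, 20.0, 60.0)
print(f"{ava_mri:.2f} cm2 vs {ava_echo:.2f} cm2")
```

This quadratic sensitivity is one reason a modest imaging bias can push a patient below the 1.0 cm² severity threshold while the measured pressure gradient stays low.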

  8. Intervention Strategies Based on Information-Motivation-Behavioral Skills Model for Health Behavior Change: A Systematic Review

    OpenAIRE

    Chang, Sun Ju; Choi, Suyoung; Kim, Se-An; Song, Misoon

    2014-01-01

    Purpose: This study systematically reviewed research on behavioral interventions based on the information-motivation-behavioral skills (IMB) model to investigate specific intervention strategies that focus on information, motivation, and behavioral skills and to evaluate their effectiveness for people with chronic diseases. Methods: A systematic review was conducted in accordance with the guidelines of both the National Evidence-based Healthcare Collaborating Agency and Im and Chang. A lit...

  9. Convolution method and CTV-to-PTV margins for finite fractions and small systematic errors

    International Nuclear Information System (INIS)

    Gordon, J J; Siebers, J V

    2007-01-01

    The van Herk margin formula (VHMF) relies on the accuracy of the convolution method (CM) to determine clinical target volume (CTV) to planning target volume (PTV) margins. This work (1) evaluates the accuracy of the CM and VHMF as a function of the number of fractions N and other parameters, and (2) proposes an alternative margin algorithm which ensures target coverage for a wider range of parameter values. Dose coverage was evaluated for a spherical target with uniform margin, using the same simplified dose model and CTV coverage criterion as were used in development of the VHMF. Systematic and random setup errors were assumed to be normally distributed with standard deviations Σ and σ. For clinically relevant combinations of σ, Σ and N, margins were determined by requiring that 90% of treatment course simulations have a CTV minimum dose greater than or equal to the static PTV minimum dose. Simulation results were compared with the VHMF and the alternative margin algorithm. The CM and VHMF were found to be accurate for parameter values satisfying the approximate criterion σ[1 − γN/25] ≲ 0.2; outside this range they could be inaccurate, because they failed to account for the non-negligible dose variability associated with random setup errors. These criteria are applicable when σ ≳ σ_P, where σ_P = 0.32 cm is the standard deviation of the normal dose penumbra. (Qualitative behaviour of the CM and VHMF will remain the same, though the criteria might vary if σ_P takes values other than 0.32 cm.) When σ << σ_P, dose variability due to random setup errors becomes negligible, and the CM and VHMF are valid regardless of the values of Σ and N. When σ ≳ σ_P, consistent with the above criteria, it was found that the VHMF can underestimate margins for large σ, small Σ and small N. A potential consequence of this underestimate is that the CTV minimum dose can fall below its planned value in more than the prescribed 10% of treatments. The proposed alternative margin algorithm provides better margin
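For reference, the VHMF under discussion is usually written M = 2.5Σ + 1.64(σ′ − σ_P) with σ′ = √(σ² + σ_P²), which reduces to the familiar "2.5Σ + 0.7σ" rule of thumb for σ_P = 3.2 mm. A sketch of the formula (the example inputs are arbitrary):

```python
import math

SIGMA_P = 0.32  # cm; standard deviation of the dose penumbra used above

def vhmf_margin(big_sigma, sigma, sigma_p=SIGMA_P):
    """van Herk margin M = 2.5*Sigma + 1.64*(sigma' - sigma_p), where
    sigma' = sqrt(sigma**2 + sigma_p**2). Systematic errors (Sigma) get
    the larger coefficient because they shift the cumulative dose for
    the whole treatment course rather than merely blurring it."""
    sigma_prime = math.sqrt(sigma**2 + sigma_p**2)
    return 2.5 * big_sigma + 1.64 * (sigma_prime - sigma_p)

# Example: Sigma = sigma = 0.3 cm gives a margin just under 1 cm.
print(f"{vhmf_margin(0.3, 0.3):.2f} cm")
```

Note that the formula contains no dependence on the number of fractions N, which is precisely the limitation the paper probes for small N.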

  10. Are We Underestimating Microplastic Contamination in Aquatic Environments?

    Science.gov (United States)

    Conkle, Jeremy L.; Báez Del Valle, Christian D.; Turner, Jeffrey W.

    2018-01-01

    Plastic debris, specifically microplastic in the aquatic environment, is an escalating environmental crisis. Efforts at national scales to reduce or ban microplastics in personal care products are starting to pay off, but this will not affect those materials already in the environment or those that result from unregulated products and materials. To better inform future microplastic research and mitigation efforts, this study (1) evaluates methods currently used to quantify microplastics in the environment and (2) characterizes the concentration and size distribution of microplastics in a variety of products. In this study, 50 published aquatic surveys were reviewed, and they demonstrated that most (~80%) only account for plastics ≥ 300 μm in diameter. In addition, we surveyed 770 personal care products to determine the occurrence, concentration and size distribution of polyethylene microbeads. Particle concentrations ranged from 1.9 to 71.9 mg g⁻¹ of product or 1,649 to 31,266 particles g⁻¹ of product. The large majority (>95%) of particles in the products surveyed were below the 300 μm minimum diameter, indicating that previous environmental surveys could be underestimating microplastic contamination. To account for smaller particles as well as microfibers from synthetic textiles, we strongly recommend that future surveys consider methods that capture materials <300 μm in diameter.
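The sampling bias described above is simple arithmetic: any particle smaller than the mesh or sieve cutoff never enters the count. A toy illustration with an invented size distribution (not data from the study):

```python
# Hypothetical particle diameters (um) in a water sample; a survey whose
# smallest mesh is 300 um only ever sees the particles at or above that
# cutoff, so its concentration estimate is biased low.
particle_diameters_um = [50, 80, 120, 150, 200, 250, 280, 310, 400, 550]

threshold_um = 300
captured = [d for d in particle_diameters_um if d >= threshold_um]
missed_fraction = 1 - len(captured) / len(particle_diameters_um)
print(f"fraction of particles below the {threshold_um} um cutoff: {missed_fraction:.0%}")
```

With a product-like distribution where >95% of beads sit below 300 μm, the same arithmetic implies the surveyed methods would miss nearly all of them.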

  11. Analysis of Error Propagation Within Hierarchical Air Combat Models

    Science.gov (United States)

    2016-06-01

    …values alone are propagated through layers of combat models, the final results will likely be biased, and risk underestimated. An air-to-air engagement… (Analysis of Error Propagation Within Hierarchical Air Combat Models, by Salih Ilaslan, June 2016; Thesis Advisor: Thomas W. Lucas; Second Reader: Jeffrey …)

  12. Systematic problems with using dark matter simulations to model stellar halos

    International Nuclear Information System (INIS)

    Bailin, Jeremy; Bell, Eric F.; Valluri, Monica; Stinson, Greg S.; Debattista, Victor P.; Couchman, H. M. P.; Wadsley, James

    2014-01-01

    The limits of available computing power have forced models for the structure of stellar halos to adopt one or both of the following simplifying assumptions: (1) stellar mass can be 'painted' onto dark matter (DM) particles in progenitor satellites; (2) pure DM simulations that do not form a luminous galaxy can be used. We estimate the magnitude of the systematic errors introduced by these assumptions using a controlled set of stellar halo models where we independently vary whether we look at star particles or painted DM particles, and whether we use a simulation in which a baryonic disk galaxy forms or a matching pure DM simulation that does not form a baryonic disk. We find that the 'painting' simplification reduces the halo concentration and internal structure, predominantly because painted DM particles have different kinematics from star particles even when both are buried deep in the potential well of the satellite. The simplification of using pure DM simulations reduces the concentration further, but increases the internal structure, and results in a more prolate stellar halo. These differences can be a factor of 1.5-7 in concentration (as measured by the half-mass radius) and 2-7 in internal density structure. Given this level of systematic uncertainty, one should be wary of overinterpreting differences between observations and the current generation of stellar halo models based on DM-only simulations when such differences are less than an order of magnitude.

  13. Does the surface property of a disposable applanation tonometer account for its underestimation of intraocular pressure when compared with the Goldmann tonometer?

    Science.gov (United States)

    Osborne, Sarah F; Williams, Rachel; Batterbury, Mark; Wong, David

    2007-04-01

    Disposable tonometers are increasingly being adopted partly because of concerns over the transmission of variant Creutzfeldt-Jakob disease and partly for convenience. Recently, we have found one such tonometer (Tonojet by Luneau Ophthalmologie, France) underestimated the intraocular pressure (IOP). We hypothesized that this underestimation was caused by a difference in the surface property of the tonometers. A tensiometer was used to measure the suction force resulting from interfacial tension between a solution of lignocaine and fluorescein and the tonometers. The results showed that the suction force was significantly greater for the Goldmann compared to the Tonojet. The magnitude of this force was too small to account for the difference in IOP measurements. The Tonojet was less hydrophilic than the Goldmann, and the contact angle of the fluid was therefore greater. For a given tear film, less hydrophilic tonometers will tend to have thicker mires, and this may lead to underestimation of the IOP. When such disposable tonometers are used, it is recommended care should be taken to reject readings from thick mires.

  14. SEMI-EMPIRICAL WHITE DWARF INITIAL-FINAL MASS RELATIONSHIPS: A THOROUGH ANALYSIS OF SYSTEMATIC UNCERTAINTIES DUE TO STELLAR EVOLUTION MODELS

    International Nuclear Information System (INIS)

    Salaris, Maurizio; Serenelli, Aldo; Weiss, Achim; Miller Bertolami, Marcelo

    2009-01-01

    Using the most recent results about white dwarfs (WDs) in ten open clusters, we revisit semiempirical estimates of the initial-final mass relation (IFMR) in star clusters, with emphasis on the use of stellar evolution models. We discuss the influence of these models on each step of the derivation. One intention of our work is to use consistent sets of calculations both for the isochrones and the WD cooling tracks. The second one is to derive the range of systematic errors arising from stellar evolution theory. This is achieved by using different sources for the stellar models and by varying physical assumptions and input data. We find that systematic errors, including the determination of the cluster age, are dominating the initial mass values, while observational uncertainties influence the final mass primarily. After having determined the systematic errors, the initial-final mass relation allows us finally to draw conclusions about the physics of the stellar models, in particular about convective overshooting.

  15. Scaling analysis in modeling transport and reaction processes a systematic approach to model building and the art of approximation

    CERN Document Server

    Krantz, William B

    2007-01-01

    This book is unique as the first effort to expound on the subject of systematic scaling analysis. Not written for a specific discipline, the book targets any reader interested in transport phenomena and reaction processes. The book is logically divided into chapters on the use of systematic scaling analysis in fluid dynamics, heat transfer, mass transfer, and reaction processes. An integrating chapter is included that considers more complex problems involving combined transport phenomena. Each chapter includes several problems that are explained in considerable detail. These are followed by several worked examples for which the general outline for the scaling is given. Each chapter also includes many practice problems. This book is based on recognizing the value of systematic scaling analysis as a pedagogical method for teaching transport and reaction processes and as a research tool for developing and solving models and in designing experiments. Thus, the book can serve as both a textbook and a reference book.

  16. Calibration of a biome-biogeochemical cycles model for modeling the net primary production of teak forests through inverse modeling of remotely sensed data

    Science.gov (United States)

    Imvitthaya, Chomchid; Honda, Kiyoshi; Lertlum, Surat; Tangtham, Nipon

    2011-01-01

    In this paper, we present the results of net primary production (NPP) modeling of teak (Tectona grandis L.f.), an important species in tropical deciduous forests. The biome-biogeochemical cycles (Biome-BGC) model was calibrated to estimate NPP through an inverse modeling approach. A genetic algorithm (GA) was linked with Biome-BGC to determine the optimal ecophysiological model parameters. Biome-BGC was calibrated by adjusting the ecophysiological parameters to fit the simulated leaf area index (LAI) to the satellite LAI (SPOT-Vegetation), and the best fitness confirmed the high accuracy of the ecophysiological parameters generated by the GA. The modeled NPP, using the optimized parameters from the GA as input data, was evaluated against daily NPP derived from the MODIS satellite and against annual field data in northern Thailand. The results showed that NPP estimates obtained using the optimized ecophysiological parameters were more accurate than those obtained using default literature parameterization. This improvement occurred mainly because the optimized parameters reduced the systematic underestimation in the model. These Biome-BGC results can be effectively applied to teak forests in tropical areas. The study proposes a more effective method of using a GA to determine ecophysiological parameters at the site level and represents a first step toward the analysis of the carbon budget of teak plantations at the regional scale.
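Since Biome-BGC itself is not callable here, the GA-based inverse calibration can be sketched against a toy stand-in: the GA searches for the two "ecophysiological" parameters that make a simulated LAI curve match a pseudo-satellite LAI series. Everything below (the stand-in model, parameter ranges, GA settings) is illustrative, not the paper's configuration:

```python
import random

random.seed(42)

# Toy stand-in for Biome-BGC: LAI as a simple function of two parameters
# (purely illustrative; the real model is a process-based simulator).
def simulate_lai(params, doy):
    growth, senescence = params
    x = doy / 365.0
    return max(0.0, growth * x - senescence * x ** 2)

# Pseudo-satellite LAI generated from a known "true" parameter pair.
observed = [(d, simulate_lai((6.0, 3.0), d)) for d in range(30, 360, 30)]

def fitness(params):
    # Negative sum of squared LAI misfits: higher is better.
    return -sum((simulate_lai(params, d) - lai) ** 2 for d, lai in observed)

def genetic_algorithm(pop_size=40, generations=60):
    pop = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            # averaging crossover plus small Gaussian mutation
            children.append(tuple((x + y) / 2 + random.gauss(0, 0.1)
                                  for x, y in zip(a, b)))
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_algorithm()
print(best)  # typically close to the synthetic truth (6.0, 3.0)
```

In the paper's setup the fitness is the agreement between Biome-BGC LAI and SPOT-Vegetation LAI, and the optimized parameters are then fed back into the model to produce NPP.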

  17. Is hyperthyroidism underestimated in pregnancy and misdiagnosed as hyperemesis gravidarum?

    Science.gov (United States)

    Luetic, Ana Tikvica; Miskovic, Berivoj

    2010-10-01

    Thyroid changes are considered to be normal events that occur as part of a large maternal multiorgan adjustment to pregnancy. However, hyperthyroidism occurs in pregnancy with a clinical presentation similar to hyperemesis gravidarum (HG) and to pregnancy itself. Moreover, 10% of women with HG will continue to have symptoms throughout the pregnancy, suggesting that the underlying cause might not be the elevation of human chorionic gonadotropin in the first trimester. The variable frequency of both hyperthyroidism and HG worldwide might reflect confusion over the inclusion criteria for both diagnoses, compounded by the alteration of thyroid hormone levels observed in normal pregnancy. The increased number of hyperthyroidism cases in the female population, without the expected rise in gestational hyperthyroidism, led us to hypothesize that hyperthyroidism could be underestimated in normal pregnancy and even misdiagnosed as HG. This hypothesis, if confirmed, might have beneficial clinical implications, such as better detection of hyperthyroidism in pregnancy and application of therapy when needed, with a reduction of maternal or fetal consequences.

  18. Background model systematics for the Fermi GeV excess

    Energy Technology Data Exchange (ETDEWEB)

    Calore, Francesca; Cholis, Ilias; Weniger, Christoph

    2015-03-01

    The possible gamma-ray excess in the inner Galaxy and the Galactic center (GC) suggested by Fermi-LAT observations has triggered a large number of studies. It has been interpreted as a variety of different phenomena such as a signal from WIMP dark matter annihilation, gamma-ray emission from a population of millisecond pulsars, or emission from cosmic rays injected in a sequence of burst-like events or continuously at the GC. We present the first comprehensive study of model systematics coming from the Galactic diffuse emission in the inner part of our Galaxy and their impact on the inferred properties of the excess emission at Galactic latitudes 2° < |b| < 20° and 300 MeV to 500 GeV. We study both theoretical and empirical model systematics, which we deduce from a large range of Galactic diffuse emission models and a principal component analysis of residuals in numerous test regions along the Galactic plane. We show that the hypothesis of an extended spherical excess emission with a uniform energy spectrum is compatible with the Fermi-LAT data in our region of interest at 95% CL. Assuming that this excess is the extended counterpart of the one seen in the inner few degrees of the Galaxy, we derive a lower limit of 10.0° (95% CL) on its extension away from the GC. We show that, in light of the large correlated uncertainties that affect the subtraction of the Galactic diffuse emission in the relevant regions, the energy spectrum of the excess is equally compatible with both a simple broken power law with break energy E_break = 2.1 ± 0.2 GeV, and with spectra predicted by the self-annihilation of dark matter, implying in the case of b b̄ final states a dark matter mass of m_χ = 49 (+6.4/−5.4) GeV.
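The broken power-law spectrum referred to above can be written down directly; only the break energy (2.1 GeV) comes from the abstract, while the normalisation and the two spectral indices below are illustrative placeholders:

```python
def broken_power_law(energy_gev, n0=1.0, e_break=2.1, alpha_low=1.4, alpha_high=2.6):
    """dN/dE for a broken power law: index alpha_low below the break
    energy, alpha_high above it, continuous at E = e_break. The indices
    used here are placeholders, not fitted values from the paper."""
    alpha = alpha_low if energy_gev < e_break else alpha_high
    return n0 * (energy_gev / e_break) ** (-alpha)

# With indices straddling 2, the spectral energy distribution E^2 dN/dE
# peaks near the break, mirroring the GeV-scale peak of the excess.
for e in (0.5, 1.0, 2.1, 5.0, 10.0):
    print(f"E = {e:5.1f} GeV  ->  dN/dE = {broken_power_law(e):.3g}")
```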

  19. Impact of rotavirus vaccination on hospitalisations in Belgium: comparing model predictions with observed data.

    Directory of Open Access Journals (Sweden)

    Baudouin Standaert

    BACKGROUND: Published economic assessments of rotavirus vaccination typically use modelling, mainly static Markov cohort models with birth cohorts followed up to the age of 5 years. Rotavirus vaccination has now been available for several years in some countries, and data have been collected to evaluate the real-world impact of vaccination on rotavirus hospitalisations. This study compared the economic impact of vaccination between model estimates and observed data on disease-specific hospitalisation reductions in a country for which both modelled and observed datasets exist (Belgium). METHODS: A previously published Markov cohort model estimated the impact of rotavirus vaccination on the number of rotavirus hospitalisations in children aged <5 years in Belgium using vaccine efficacy data from clinical development trials. Data on the number of rotavirus-positive gastroenteritis hospitalisations in children aged <5 years between 1 June 2004 and 31 May 2006 (pre-vaccination study period) or 1 June 2007 to 31 May 2010 (post-vaccination study period) were analysed from nine hospitals in Belgium and compared with the modelled estimates. RESULTS: The model predicted a smaller decrease in hospitalisations over time, mainly explained by two factors. First, the observed data indicated indirect vaccine protection in children too old or too young for vaccination. This herd effect is difficult to capture in static Markov cohort models and therefore was not included in the model. Second, the model included a 'waning' effect, i.e. reduced vaccine effectiveness over time. The observed data suggested this waning effect did not occur during that period, and so the model systematically underestimated vaccine effectiveness during the first 4 years after vaccine implementation. CONCLUSIONS: Model predictions underestimated the direct medical economic value of rotavirus vaccination during the first 4 years of vaccination by approximately 10% when assessing
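The "waning" discrepancy has a simple arithmetic core: a model that decays vaccine effectiveness each year averts fewer hospitalisations than one that holds it constant, so if no waning actually occurred, the model under-predicts the benefit. A toy illustration (all numbers hypothetical, not Belgian data):

```python
# Compare cumulative averted hospitalisations over 4 years under a
# constant-effectiveness assumption versus an assumed annual waning.
baseline_hospitalisations = 1000   # per year, pre-vaccination (hypothetical)
initial_effectiveness = 0.90
waning_per_year = 0.07             # assumed absolute decline per year

def averted(years, waning):
    total = 0.0
    for t in range(years):
        effectiveness = max(0.0, initial_effectiveness - waning * t)
        total += baseline_hospitalisations * effectiveness
    return total

no_waning = averted(4, 0.0)
with_waning = averted(4, waning_per_year)
print(no_waning, with_waning)  # the waning model predicts a smaller benefit
```

If the observed data match the constant-effectiveness curve, the gap between the two totals is the model's systematic underestimate of the vaccination programme's value.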

  20. The Psychology Department Model Advisement Procedure: A Comprehensive, Systematic Approach to Career Development Advisement

    Science.gov (United States)

    Howell-Carter, Marya; Nieman-Gonder, Jennifer; Pellegrino, Jennifer; Catapano, Brittani; Hutzel, Kimberly

    2016-01-01

    The MAP (Model Advisement Procedure) is a comprehensive, systematic approach to developmental student advisement. The MAP was implemented to improve advisement consistency, improve student preparation for internships/senior projects, increase career exploration, reduce career uncertainty, and, ultimately, improve student satisfaction with the…

  1. Systematic simulations of modified gravity: chameleon models

    Energy Technology Data Exchange (ETDEWEB)

    Brax, Philippe [Institut de Physique Theorique, CEA, IPhT, CNRS, URA 2306, F-91191Gif/Yvette Cedex (France); Davis, Anne-Christine [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Road, Cambridge CB3 0WA (United Kingdom); Li, Baojiu [Institute for Computational Cosmology, Department of Physics, Durham University, Durham DH1 3LE (United Kingdom); Winther, Hans A. [Institute of Theoretical Astrophysics, University of Oslo, 0315 Oslo (Norway); Zhao, Gong-Bo, E-mail: philippe.brax@cea.fr, E-mail: a.c.davis@damtp.cam.ac.uk, E-mail: baojiu.li@durham.ac.uk, E-mail: h.a.winther@astro.uio.no, E-mail: gong-bo.zhao@port.ac.uk [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth PO1 3FX (United Kingdom)

    2013-04-01

In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 hMpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.

  2. Systematic simulations of modified gravity: chameleon models

    International Nuclear Information System (INIS)

    Brax, Philippe; Davis, Anne-Christine; Li, Baojiu; Winther, Hans A.; Zhao, Gong-Bo

    2013-01-01

In this work we systematically study the linear and nonlinear structure formation in chameleon theories of modified gravity, using a generic parameterisation which describes a large class of models using only 4 parameters. For this we have modified the N-body simulation code ecosmog to perform a total of 65 simulations for different models and parameter values, including the default ΛCDM. These simulations enable us to explore a significant portion of the parameter space. We have studied the effects of modified gravity on the matter power spectrum and mass function, and found a rich and interesting phenomenology where the difference with the ΛCDM paradigm cannot be reproduced by a linear analysis even on scales as large as k ∼ 0.05 hMpc⁻¹, since the latter incorrectly assumes that the modification of gravity depends only on the background matter density. Our results show that the chameleon screening mechanism is significantly more efficient than other mechanisms such as the dilaton and symmetron, especially in high-density regions and at early times, and can serve as a guidance to determine the parts of the chameleon parameter space which are cosmologically interesting and thus merit further studies in the future.

  3. Design of roundness measurement model with multi-systematic error for cylindrical components with large radius.

    Science.gov (United States)

    Sun, Chuanzhi; Wang, Lei; Tan, Jiubin; Zhao, Bo; Tang, Yangchao

    2016-02-01

    The paper designs a roundness measurement model with multi-systematic error, which takes eccentricity, probe offset, radius of tip head of probe, and tilt error into account for roundness measurement of cylindrical components. The effects of the systematic errors and radius of components are analysed in the roundness measurement. The proposed method is built on the instrument with a high precision rotating spindle. The effectiveness of the proposed method is verified by experiment with the standard cylindrical component, which is measured on a roundness measuring machine. Compared to the traditional limacon measurement model, the accuracy of roundness measurement can be increased by about 2.2 μm using the proposed roundness measurement model for the object with a large radius of around 37 mm. The proposed method can improve the accuracy of roundness measurement and can be used for error separation, calibration, and comparison, especially for cylindrical components with a large radius.
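For context, the limacon model referred to here rests on a first-order expansion in the eccentricity-to-radius ratio (generic notation, not the paper's symbols). The exact probe reading for a circle of radius R measured with spindle eccentricity e at phase α is

```latex
\rho(\theta) \;=\; e\cos(\theta-\alpha) + \sqrt{R^{2} - e^{2}\sin^{2}(\theta-\alpha)}
\;\approx\; R + e\cos(\theta-\alpha) - \frac{e^{2}}{2R}\sin^{2}(\theta-\alpha),
```

and the limacon model keeps only the first two terms plus the roundness deviation Δr(θ). A multi-systematic-error model of the kind proposed additionally carries terms for probe offset, tip radius, and tilt, which is why it can outperform the plain limacon form on large-radius components.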

  4. An evaluation of a model for the systematic documentation of hospital based health promotion activities: results from a multicentre study

    DEFF Research Database (Denmark)

    Tønnesen, Hanne; Christensen, Mette E; Groene, Oliver

    2007-01-01

The first step of handling health promotion (HP) in Diagnosis Related Groups (DRGs) is a systematic documentation and registration of the activities in the medical records. So far the possibility and tradition for systematic registration of clinical HP activities in the medical records and in patient administrative systems have been sparse. Therefore, the activities are mostly invisible in the registers of hospital services as well as in budgets and balances. A simple model has been described to structure the registration of the HP procedures performed by the clinical staff. The model consists of two parts; the first part includes motivational counselling (7 codes) and the second part comprehends intervention, rehabilitation and after treatment (8 codes). The objective was to evaluate in an international study the usefulness, applicability and sufficiency of a simple model for the systematic...

  5. A Systematic Approach to Determining the Identifiability of Multistage Carcinogenesis Models.

    Science.gov (United States)

    Brouwer, Andrew F; Meza, Rafael; Eisenberg, Marisa C

    2017-07-01

    Multistage clonal expansion (MSCE) models of carcinogenesis are continuous-time Markov process models often used to relate cancer incidence to biological mechanism. Identifiability analysis determines what model parameter combinations can, theoretically, be estimated from given data. We use a systematic approach, based on differential algebra methods traditionally used for deterministic ordinary differential equation (ODE) models, to determine identifiable combinations for a generalized subclass of MSCE models with any number of preinitation stages and one clonal expansion. Additionally, we determine the identifiable combinations of the generalized MSCE model with up to four clonal expansion stages, and conjecture the results for any number of clonal expansion stages. The results improve upon previous work in a number of ways and provide a framework to find the identifiable combinations for further variations on the MSCE models. Finally, our approach, which takes advantage of the Kolmogorov backward equations for the probability generating functions of the Markov process, demonstrates that identifiability methods used in engineering and mathematics for systems of ODEs can be applied to continuous-time Markov processes. © 2016 Society for Risk Analysis.

  6. Multi-model evaluation of short-lived pollutant distributions over east Asia during summer 2008

    Science.gov (United States)

    Quennehen, B.; Raut, J.-C.; Law, K. S.; Daskalakis, N.; Ancellet, G.; Clerbaux, C.; Kim, S.-W.; Lund, M. T.; Myhre, G.; Olivié, D. J. L.; Safieddine, S.; Skeie, R. B.; Thomas, J. L.; Tsyro, S.; Bazureau, A.; Bellouin, N.; Hu, M.; Kanakidou, M.; Klimont, Z.; Kupiainen, K.; Myriokefalitakis, S.; Quaas, J.; Rumbold, S. T.; Schulz, M.; Cherian, R.; Shimizu, A.; Wang, J.; Yoon, S.-C.; Zhu, T.

    2016-08-01

The ability of seven state-of-the-art chemistry-aerosol models to reproduce distributions of tropospheric ozone and its precursors, as well as aerosols over eastern Asia in summer 2008, is evaluated. The study focuses on the performance of models used to assess impacts of pollutants on climate and air quality as part of the EU ECLIPSE project. Models, run using the same ECLIPSE emissions, are compared over different spatial scales to in situ surface, vertical profiles and satellite data. Several rather clear biases are found between model results and observations, including overestimation of ozone at rural locations downwind of the main emission regions in China, as well as downwind over the Pacific. Several models produce too much ozone over polluted regions, which is then transported downwind. Analysis points to different factors related to the ability of models to simulate VOC-limited regimes over polluted regions and NOx limited regimes downwind. This may also be linked to biases compared to satellite NO2, indicating overestimation of NO2 over and to the north of the northern China Plain emission region. On the other hand, model NO2 is too low to the south and west of this region and over South Korea/Japan. Overestimation of ozone is linked to systematic underestimation of CO particularly at rural sites and downwind of the main Chinese emission regions. This is likely to be due to enhanced destruction of CO by OH. Overestimation of Asian ozone and its transport downwind implies that radiative forcing from this source may be overestimated. Model-observation discrepancies over Beijing do not appear to be due to emission controls linked to the Olympic Games in summer 2008. With regard to aerosols, most models reproduce the satellite-derived AOD patterns over eastern China. Our study nevertheless reveals an overestimation of ECLIPSE model mean surface BC and sulphate aerosols in urban China in summer 2008. The effect of the short-term emission mitigation in Beijing

  7. Multi-model evaluation of short-lived pollutant distributions over east Asia during summer 2008

    Directory of Open Access Journals (Sweden)

    B. Quennehen

    2016-08-01

Full Text Available The ability of seven state-of-the-art chemistry–aerosol models to reproduce distributions of tropospheric ozone and its precursors, as well as aerosols over eastern Asia in summer 2008, is evaluated. The study focuses on the performance of models used to assess impacts of pollutants on climate and air quality as part of the EU ECLIPSE project. Models, run using the same ECLIPSE emissions, are compared over different spatial scales to in situ surface, vertical profiles and satellite data. Several rather clear biases are found between model results and observations, including overestimation of ozone at rural locations downwind of the main emission regions in China, as well as downwind over the Pacific. Several models produce too much ozone over polluted regions, which is then transported downwind. Analysis points to different factors related to the ability of models to simulate VOC-limited regimes over polluted regions and NOx limited regimes downwind. This may also be linked to biases compared to satellite NO2, indicating overestimation of NO2 over and to the north of the northern China Plain emission region. On the other hand, model NO2 is too low to the south and west of this region and over South Korea/Japan. Overestimation of ozone is linked to systematic underestimation of CO particularly at rural sites and downwind of the main Chinese emission regions. This is likely to be due to enhanced destruction of CO by OH. Overestimation of Asian ozone and its transport downwind implies that radiative forcing from this source may be overestimated. Model-observation discrepancies over Beijing do not appear to be due to emission controls linked to the Olympic Games in summer 2008. With regard to aerosols, most models reproduce the satellite-derived AOD patterns over eastern China. Our study nevertheless reveals an overestimation of ECLIPSE model mean surface BC and sulphate aerosols in urban China in summer 2008. The effect of the short-term emission

  8. A Systematic Literature Review of Agile Maturity Model Research

    Directory of Open Access Journals (Sweden)

    Vaughan Henriques

    2017-02-01

Full Text Available Background/Aim/Purpose: A commonly implemented software process improvement framework is the capability maturity model integrated (CMMI. Existing literature indicates higher levels of CMMI maturity could result in a loss of agility due to its organizational focus. To maintain agility, research has focussed attention on agile maturity models. The objective of this paper is to find the common research themes and conclusions in agile maturity model research. Methodology: This research adopts a systematic approach to agile maturity model research, using Google Scholar, Science Direct, and IEEE Xplore as sources. In total 531 articles were initially found matching the search criteria, which was filtered to 39 articles by applying specific exclusion criteria. Contribution: The article highlights the trends in agile maturity model research, specifically bringing to light the lack of research providing validation of such models. Findings: Two major themes emerge, being the coexistence of agile and CMMI and the development of agile principle based maturity models. The research trend indicates an increase in agile maturity model articles, particularly in the latter half of the last decade, with concentrations of research coinciding with version updates of CMMI. While there is general consensus around higher CMMI maturity levels being incompatible with true agility, there is evidence of the two coexisting when agile is introduced into already highly matured environments. Future Research: Future research direction for this topic should include how to attain higher levels of CMMI maturity using only agile methods, how governance is addressed in agile environments, and whether existing agile maturity models relate to improved project success.

  9. Efficient trawl avoidance by mesopelagic fishes causes large underestimation of their biomass

    KAUST Repository

    Kaartvedt, Stein

    2012-06-07

Mesopelagic fishes occur in all the world’s oceans, but their abundance and consequently their ecological significance remains uncertain. The current global estimate based on net sampling prior to 1980 suggests a global abundance of one gigatonne (10⁹ t) wet weight. Here we report novel evidence of efficient avoidance of such sampling by the most common myctophid fish in the Northern Atlantic, i.e. Benthosema glaciale. We reason that similar avoidance of nets may explain consistently higher acoustic abundance estimates of mesopelagic fish from different parts of the world’s oceans. It appears that mesopelagic fish abundance may be underestimated by one order of magnitude, suggesting that the role of mesopelagic fish in the oceans might need to be revised.

  10. Hydrocarbon Fuel Thermal Performance Modeling based on Systematic Measurement and Comprehensive Chromatographic Analysis

    Science.gov (United States)

    2016-07-31

Distribution unlimited. ... vital importance for hydrocarbon-fueled propulsion systems: fuel thermal performance as indicated by physical and chemical effects of cooling passage ... analysis. The selection and acquisition of a set of chemically diverse fuels is pivotal for a successful outcome since test method validation and

  11. Acute Myocardial Infarction Readmission Risk Prediction Models: A Systematic Review of Model Performance.

    Science.gov (United States)

    Smith, Lauren N; Makam, Anil N; Darden, Douglas; Mayo, Helen; Das, Sandeep R; Halm, Ethan A; Nguyen, Oanh Kieu

    2018-01-01

Hospitals are subject to federal financial penalties for excessive 30-day hospital readmissions for acute myocardial infarction (AMI). Prospectively identifying patients hospitalized with AMI at high risk for readmission could help prevent 30-day readmissions by enabling targeted interventions. However, the performance of AMI-specific readmission risk prediction models is unknown. We systematically searched the published literature through March 2017 for studies of risk prediction models for 30-day hospital readmission among adults with AMI. We identified 11 studies of 18 unique risk prediction models across diverse settings primarily in the United States, of which 16 models were specific to AMI. The median overall observed all-cause 30-day readmission rate across studies was 16.3% (range, 10.6%-21.0%). Six models were based on administrative data; 4 on electronic health record data; 3 on clinical hospital data; and 5 on cardiac registry data. Models included 7 to 37 predictors, of which demographics, comorbidities, and utilization metrics were the most frequently included domains. Most models, including the Centers for Medicare and Medicaid Services AMI administrative model, had modest discrimination (median C statistic, 0.65; range, 0.53-0.79). Of the 16 reported AMI-specific models, only 8 models were assessed in a validation cohort, limiting generalizability. Observed risk-stratified readmission rates ranged from 3.0% among the lowest-risk individuals to 43.0% among the highest-risk individuals, suggesting good risk stratification across all models. Current AMI-specific readmission risk prediction models have modest predictive ability and uncertain generalizability given methodological limitations. No existing models provide actionable information in real time to enable early identification and risk-stratification of patients with AMI before hospital discharge, a functionality needed to optimize the potential effectiveness of readmission reduction interventions.
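The C statistics quoted in this record have a concrete interpretation that is easy to compute directly: for a binary outcome, the C statistic is the probability that a randomly chosen readmitted patient receives a higher predicted risk than a randomly chosen non-readmitted one. A minimal sketch, using made-up risks and outcomes rather than data from any of the reviewed models:

```python
# Concordance index (C statistic) for binary outcomes, computed by brute-force
# pairwise comparison. Ties in predicted risk receive half credit, the usual
# convention. The example inputs are invented for illustration.
def c_statistic(risks, outcomes):
    pairs = concordant = 0.0
    for ri, yi in zip(risks, outcomes):
        for rj, yj in zip(risks, outcomes):
            if yi == 1 and yj == 0:          # one readmitted, one not
                pairs += 1
                if ri > rj:
                    concordant += 1
                elif ri == rj:
                    concordant += 0.5
    return concordant / pairs

# C = 0.5 is no better than chance; the review's median of 0.65 therefore
# indicates only modest discrimination.
print(c_statistic([0.9, 0.8, 0.3, 0.2], [1, 0, 1, 0]))
```

The same quantity is what libraries such as scikit-learn report as ROC AUC; the quadratic loop above is only meant to make the definition transparent.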

  12. Systematic Multi‐Scale Model Development Strategy for the Fragrance Spraying Process and Transport

    DEFF Research Database (Denmark)

    Heitzig, M.; Rong, Y.; Gregson, C.

    2012-01-01

    The fast and efficient development and application of reliable models with appropriate degree of detail to predict the behavior of fragrance aerosols are challenging problems of high interest to the related industries. A generic modeling template for the systematic derivation of specific fragrance......‐aided modeling framework, which is structured based on workflows for different general modeling tasks. The benefits of the fragrance spraying template are highlighted by a case study related to the derivation of a fragrance aerosol model that is able to reflect measured dynamic droplet size distribution profiles...... aerosol models is proposed. The main benefits of the fragrance spraying template are the speed‐up of the model development/derivation process, the increase in model quality, and the provision of structured domain knowledge where needed. The fragrance spraying template is integrated in a generic computer...

  13. Terrestrial pesticide exposure of amphibians: an underestimated cause of global decline?

    Science.gov (United States)

    Brühl, Carsten A; Schmidt, Thomas; Pieper, Silvia; Alscher, Annika

    2013-01-01

Amphibians, a class of animals in global decline, are present in agricultural landscapes characterized by agrochemical inputs. Effects of pesticides on terrestrial life stages of amphibians such as juvenile and adult frogs, toads and newts are little understood, and a specific risk assessment for pesticide exposure, mandatory for other vertebrate groups, is currently not conducted. We studied the effects of seven pesticide products on juvenile European common frogs (Rana temporaria) in an agricultural overspray scenario. Mortality ranged from 100% after one hour to 40% after seven days at the recommended label rate of currently registered products. The demonstrated toxicity is alarming and a large-scale negative effect of terrestrial pesticide exposure on amphibian populations seems likely. Terrestrial pesticide exposure might be an underestimated driver of their decline, calling for more attention in conservation efforts; the risk assessment procedures currently in place do not protect this vanishing animal group.

  14. The global distribution of fatal pesticide self-poisoning: systematic review

    DEFF Research Database (Denmark)

    Gunnell, David; Eddleston, Michael; Phillips, Michael R

    2007-01-01

    BACKGROUND: Evidence is accumulating that pesticide self-poisoning is one of the most commonly used methods of suicide worldwide, but the magnitude of the problem and the global distribution of these deaths is unknown. METHODS: We have systematically reviewed the worldwide literature to estimate......-poisoning worldwide each year, accounting for 30% (range 27% to 37%) of suicides globally. Official data from India probably underestimate the incidence of suicides; applying evidence-based corrections to India's official data, our estimate for world suicides using pesticides increases to 371,594 (range 347......, not the quantity used, that influences the likelihood they will be used in acts of fatal self-harm. CONCLUSION: Pesticide self-poisoning accounts for about one-third of the world's suicides. Epidemiological and toxicological data suggest that many of these deaths might be prevented if (a) the use of pesticides...

  15. Eating disorders among fashion models: a systematic review of the literature.

    Science.gov (United States)

    Zancu, Simona Alexandra; Enea, Violeta

    2017-09-01

In the light of recent concerns regarding the eating disorders among fashion models and professional regulations of fashion model occupation, an examination of the scientific evidence on this issue is necessary. The article reviews findings on the prevalence of eating disorders and body image concerns among professional fashion models. A systematic literature search was conducted using ProQUEST, EBSCO, PsycINFO, SCOPUS, and Gale Cengage electronic databases. Very few studies of fashion models and eating disorders were published between 1980 and 2015, with seven articles included in this review. Overall, results of these studies do not indicate a higher prevalence of eating disorders among fashion models compared to non-models. Fashion models have a positive body image and generally do not report more dysfunctional eating behaviors than controls. However, fashion models are on average slightly underweight with significantly lower BMI than controls, and give higher importance to appearance and thin body shape, and thus have a higher prevalence of partial-syndrome eating disorders than controls. Despite public concerns, research on eating disorders among professional fashion models is extremely scarce and results cannot be generalized to all models. The existing research fails to clarify the matter of eating disorders among fashion models and given the small number of studies, further research is needed.

  16. The air forces on a systematic series of biplane and triplane cellule models

    Science.gov (United States)

    Munk, Max M

    1927-01-01

The air forces on a systematic series of biplane and triplane cellule models are the subject of this report. The tests consist of the determination of the lift, drag, and moment of each individual airfoil in each cellule, mostly with the same wing section. The magnitude of the gap and of the stagger is systematically varied; not, however, the decalage, which is zero throughout the tests. Certain check tests with a second wing section make the tests more complete and the conclusions more convincing. The results give evidence that the present army and navy specifications for the relative lifts of biplanes are good. They furnish material for improving such specifications for the relative lifts of triplanes. A larger number of factors can now be prescribed to take care of different cases.

  17. Precision of systematic and random sampling in clustered populations: habitat patches and aggregating organisms.

    Science.gov (United States)

    McGarvey, Richard; Burch, Paul; Matthews, Janet M

    2016-01-01

    Natural populations of plants and animals spatially cluster because (1) suitable habitat is patchy, and (2) within suitable habitat, individuals aggregate further into clusters of higher density. We compare the precision of random and systematic field sampling survey designs under these two processes of species clustering. Second, we evaluate the performance of 13 estimators for the variance of the sample mean from a systematic survey. Replicated simulated surveys, as counts from 100 transects, allocated either randomly or systematically within the study region, were used to estimate population density in six spatial point populations including habitat patches and Matérn circular clustered aggregations of organisms, together and in combination. The standard one-start aligned systematic survey design, a uniform 10 x 10 grid of transects, was much more precise. Variances of the 10 000 replicated systematic survey mean densities were one-third to one-fifth of those from randomly allocated transects, implying transect sample sizes giving equivalent precision by random survey would need to be three to five times larger. Organisms being restricted to patches of habitat was alone sufficient to yield this precision advantage for the systematic design. But this improved precision for systematic sampling in clustered populations is underestimated by standard variance estimators used to compute confidence intervals. True variance for the survey sample mean was computed from the variance of 10 000 simulated survey mean estimates. Testing 10 published and three newly proposed variance estimators, the two variance estimators (v) that corrected for inter-transect correlation (ν₈ and ν(W)) were the most accurate and also the most precise in clustered populations. These greatly outperformed the two "post-stratification" variance estimators (ν₂ and ν₃) that are now more commonly applied in systematic surveys. Similar variance estimator performance rankings were found with
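The precision advantage described in this record is easy to reproduce in a toy simulation. The patch layout, densities, and sample sizes below are invented for illustration and are not the paper's six test populations; the comparison of random versus one-start aligned systematic transect placement is the point.

```python
import random

# Compare the precision of the survey mean under random vs. systematic
# transect placement when density is patchy (organisms confined to habitat
# patches). All parameters are illustrative.
random.seed(1)
N = 1000                                   # candidate transect positions
n = 100                                    # transects per survey
step = N // n
# habitat patches: density 5 inside a patch, 0 outside
density = [5.0 if (i % 137) < 60 else 0.0 for i in range(N)]

def survey_mean(positions):
    return sum(density[p] for p in positions) / len(positions)

rand_means, sys_means = [], []
for _ in range(2000):
    rand_means.append(survey_mean(random.sample(range(N), n)))
    start = random.randrange(step)         # one-start aligned systematic grid
    sys_means.append(survey_mean(range(start, N, step)))

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# The systematic design spreads transects evenly across patches, so its
# survey-to-survey variance is typically several times smaller.
print(var(rand_means), var(sys_means))
```

Because every systematic sample covers each patch at nearly the same rate, the between-survey variance collapses, which is exactly the three- to five-fold precision gain the abstract reports; a naive variance estimator applied to one systematic sample would not see this reduction.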

  18. Whole-word response scoring underestimates functional spelling ability for some individuals with global agraphia

    Directory of Open Access Journals (Sweden)

    Andrew Tesla Demarco

    2015-05-01

These data suggest that conventional whole-word scoring may significantly underestimate functional spelling performance. Because by-letter scoring boosted pre-treatment scores to the same extent as post-treatment scores, the magnitude of treatment gains was no greater than estimates from conventional whole-word scoring. Nonetheless, the surprisingly large disparity between conventional whole-word scoring and by-letter scoring suggests that by-letter scoring methods may warrant further investigation. Because by-letter analyses may be of interest to others, we plan to make the software tool used in this study available online to researchers and clinicians at large.
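The contrast between the two scoring schemes can be sketched as follows. The difflib-based similarity is an assumed stand-in for the study's actual by-letter metric, and the words are invented examples; the point is only that partial credit for nearly correct spellings can diverge sharply from all-or-nothing scoring.

```python
import difflib

# Whole-word scoring credits only exact matches; by-letter scoring credits
# partially correct spellings. Here by-letter credit is approximated with
# difflib's sequence-similarity ratio (an assumption, not the study's method).
def whole_word_score(responses, targets):
    return sum(r == t for r, t in zip(responses, targets)) / len(targets)

def by_letter_score(responses, targets):
    # mean similarity of each response to its target (1.0 = exact match)
    return sum(difflib.SequenceMatcher(None, r, t).ratio()
               for r, t in zip(responses, targets)) / len(targets)

targets = ["yacht", "circuit", "dough"]
responses = ["yot", "circit", "dough"]     # two near-misses, one correct
print(whole_word_score(responses, targets))  # exact matches only: 1 of 3
print(by_letter_score(responses, targets))   # substantially higher
```

On these three items, whole-word scoring reports one-third correct while the by-letter measure credits the near-misses heavily, mirroring the disparity the abstract describes.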

  19. Modeling natural wetlands: A new global framework built on wetland observations

    Science.gov (United States)

    Matthews, E.; Romanski, J.; Olefeldt, D.

    2015-12-01

    Natural wetlands are the world's largest methane (CH4) source, and their distribution and CH4 fluxes are sensitive to interannual and longer-term climate variations. Wetland distributions used in wetland-CH4 models diverge widely, and these geographic differences contribute substantially to large variations in magnitude, seasonality and distribution of modeled methane fluxes. Modeling wetland type and distribution—closely tied to simulating CH4 emissions—is a high priority, particularly for studies of wetlands and CH4 dynamics under past and future climates. Methane-wetland models either prescribe or simulate methane-producing areas (aka wetlands) and both approaches result in predictable over- and under-estimates. 1) Monthly satellite-derived inundation data include flooded areas that are not wetlands (e.g., lakes, reservoirs, and rivers), and do not identify non-flooded wetlands. 2) Models simulating methane-producing areas overwhelmingly rely on modeled soil moisture, systematically over-estimating total global area, with regional over- and under-estimates, while schemes to model soil-moisture typically cannot account for positive water tables (i.e., flooding). Interestingly, while these distinct hydrological approaches to identify wetlands are complementary, merging them does not provide critical data needed to model wetlands for methane studies. We present a new integrated framework for modeling wetlands, and ultimately their methane emissions, that exploits the extensive body of data and information on wetlands. The foundation of the approach is an existing global gridded data set comprising all and only wetlands, including vegetation information. This data set is augmented with data inter alia on climate, inundation dynamics, soil type and soil carbon, permafrost, active-layer depth, growth form, and species composition. We investigate this enhanced wetland data set to identify which variables best explain occurrence and characteristics of observed

  20. Systematic review of model-based cervical screening evaluations.

    Science.gov (United States)

    Mendes, Diana; Bains, Iren; Vanni, Tazio; Jit, Mark

    2015-05-01

Optimising population-based cervical screening policies is becoming more complex due to the expanding range of screening technologies available and the interplay with vaccine-induced changes in epidemiology. Mathematical models are increasingly being applied to assess the impact of cervical cancer screening strategies. We systematically reviewed MEDLINE®, Embase, Web of Science®, EconLit, Health Economic Evaluation Database, and The Cochrane Library databases in order to identify the mathematical models of human papillomavirus (HPV) infection and cervical cancer progression used to assess the effectiveness and/or cost-effectiveness of cervical cancer screening strategies. Key model features and conclusions relevant to decision-making were extracted. We found 153 articles meeting our eligibility criteria published up to May 2013. Most studies (72/153) evaluated the introduction of a new screening technology, with particular focus on the comparison of HPV DNA testing and cytology (n = 58). Twenty-eight of forty of these analyses supported HPV DNA primary screening implementation. A few studies analysed more recent technologies - rapid HPV DNA testing (n = 3), HPV DNA self-sampling (n = 4), and genotyping (n = 1) - and were also supportive of their introduction. However, no study was found on emerging molecular markers and their potential utility in future screening programmes. Most evaluations (113/153) were based on models simulating aggregate groups of women at risk of cervical cancer over time without accounting for HPV infection transmission. Calibration to country-specific outcome data is becoming more common, but has not yet become standard practice. Models of cervical screening are increasingly used, and allow extrapolation of trial data to project the population-level health and economic impact of different screening policies. However, post-vaccination analyses have rarely incorporated transmission dynamics. Model calibration to country

  1. Systematic Review of Health Economic Impact Evaluations of Risk Prediction Models : Stop Developing, Start Evaluating

    NARCIS (Netherlands)

    van Giessen, Anoukh; Peters, Jaime; Wilcher, Britni; Hyde, Chris; Moons, Carl; de Wit, Ardine; Koffijberg, Erik

    2017-01-01

    Background: Although health economic evaluations (HEEs) are increasingly common for therapeutic interventions, they appear to be rare for the use of risk prediction models (PMs). Objectives: To evaluate the current state of HEEs of PMs by performing a comprehensive systematic review. Methods: Four

  2. Dual-use tools and systematics-aware analysis workflows in the ATLAS Run-II analysis model

    CERN Document Server

    FARRELL, Steven; The ATLAS collaboration

    2015-01-01

    The ATLAS analysis model has been overhauled for the upcoming run of data collection in 2015 at 13 TeV. One key component of this upgrade was the Event Data Model (EDM), which now allows for greater flexibility in the choice of analysis software framework and provides powerful new features that can be exploited by analysis software tools. A second key component of the upgrade is the introduction of a dual-use tool technology, which provides abstract interfaces for analysis software tools to run in either the Athena framework or a ROOT-based framework. The tool interfaces, including a new interface for handling systematic uncertainties, have been standardized for the development of improved analysis workflows and consolidation of high-level analysis tools. This presentation will cover the details of the dual-use tool functionality, the systematics interface, and how these features fit into a centrally supported analysis environment.

  3. Dual-use tools and systematics-aware analysis workflows in the ATLAS Run-2 analysis model

    CERN Document Server

    FARRELL, Steven; The ATLAS collaboration; Calafiura, Paolo; Delsart, Pierre-Antoine; Elsing, Markus; Koeneke, Karsten; Krasznahorkay, Attila; Krumnack, Nils; Lancon, Eric; Lavrijsen, Wim; Laycock, Paul; Lei, Xiaowen; Strandberg, Sara Kristina; Verkerke, Wouter; Vivarelli, Iacopo; Woudstra, Martin

    2015-01-01

    The ATLAS analysis model has been overhauled for the upcoming run of data collection in 2015 at 13 TeV. One key component of this upgrade was the Event Data Model (EDM), which now allows for greater flexibility in the choice of analysis software framework and provides powerful new features that can be exploited by analysis software tools. A second key component of the upgrade is the introduction of a dual-use tool technology, which provides abstract interfaces for analysis software tools to run in either the Athena framework or a ROOT-based framework. The tool interfaces, including a new interface for handling systematic uncertainties, have been standardized for the development of improved analysis workflows and consolidation of high-level analysis tools. This paper will cover the details of the dual-use tool functionality, the systematics interface, and how these features fit into a centrally supported analysis environment.

  4. Systematic integration of experimental data and models in systems biology.

    Science.gov (United States)

    Li, Peter; Dada, Joseph O; Jameson, Daniel; Spasic, Irena; Swainston, Neil; Carroll, Kathleen; Dunn, Warwick; Khan, Farid; Malys, Naglis; Messiha, Hanan L; Simeonidis, Evangelos; Weichart, Dieter; Winder, Catherine; Wishart, Jill; Broomhead, David S; Goble, Carole A; Gaskell, Simon J; Kell, Douglas B; Westerhoff, Hans V; Mendes, Pedro; Paton, Norman W

    2010-11-29

    The behaviour of biological systems can be deduced from their mathematical models. However, multiple sources of data in diverse forms are required in the construction of a model in order to define its components, their biochemical reactions, and the corresponding parameters. Automating the assembly and use of systems biology models depends on data integration processes involving the interoperation of data and analytical resources. Taverna workflows have been developed for the automated assembly of quantitative parameterised metabolic networks in the Systems Biology Markup Language (SBML). An SBML model is built in a systematic fashion by the workflows, which start with the construction of a qualitative network using data from a MIRIAM-compliant genome-scale model of yeast metabolism. This is followed by parameterisation of the SBML model with experimental data from two repositories, the SABIO-RK enzyme kinetics database and a database of quantitative experimental results. The models are then calibrated and simulated in workflows that call out to COPASIWS, the web service interface to the COPASI software application for analysing biochemical networks. These systems biology workflows were evaluated for their ability to construct a parameterised model of yeast glycolysis. Distributed information about metabolic reactions that have been described to MIRIAM standards enables the automated assembly of quantitative systems biology models of metabolic networks based on user-defined criteria. Such data integration processes can be implemented as Taverna workflows to provide a rapid overview of the components and their relationships within a biochemical system.
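    The staged assembly described above (qualitative network first, then parameterisation from kinetics repositories, then calibration and simulation) can be sketched as a plain data-structure pipeline. This is an illustrative sketch only: the reaction names, kinetics entries, and dictionary layout are invented stand-ins for the actual Taverna/SBML workflows.

```python
# Illustrative sketch of a staged model-assembly pipeline:
# (1) build a qualitative reaction network, (2) parameterise each reaction
# from a kinetics "repository", (3) report what still needs curation.
# All names and values are hypothetical, not taken from the yeast model.

def build_qualitative_network(reactions):
    """Stage 1: structure only, no kinetics yet."""
    return {name: {"substrates": s, "products": p, "kinetics": None}
            for name, (s, p) in reactions.items()}

def parameterise(model, kinetics_db):
    """Stage 2: attach rate laws/parameters where the repository has data."""
    for name, entry in model.items():
        entry["kinetics"] = kinetics_db.get(name)  # None if no data found
    return model

def unparameterised(model):
    """Reactions still lacking kinetic data (to be curated manually)."""
    return sorted(n for n, e in model.items() if e["kinetics"] is None)

# Hypothetical glycolysis fragment.
reactions = {
    "HXK": (["glucose", "ATP"], ["G6P", "ADP"]),
    "PGI": (["G6P"], ["F6P"]),
}
kinetics_db = {"HXK": {"law": "michaelis-menten", "Km": 0.1, "Vmax": 2.5}}

model = parameterise(build_qualitative_network(reactions), kinetics_db)
print(unparameterised(model))  # -> ['PGI']
```

    The point of the staging is that the structural and kinetic data come from different resources, so missing kinetics can be detected and filled in without rebuilding the network.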

  5. Accuracy and bias of ICT self-efficacy: an empirical study into students' over- and underestimation of their ICT competences

    NARCIS (Netherlands)

    Aesaert, K.; Voogt, J.; Kuiper, E.; van Braak, J.

    2017-01-01

    Most studies on the assessment of ICT competences use measures of ICT self-efficacy. These studies are often criticized for suffering from self-report bias, i.e. students can over- and/or underestimate their ICT competences. As such, taking bias and accuracy of ICT self-efficacy into account,

  6. A comprehensive model for executing knowledge management audits in organizations: a systematic review.

    Science.gov (United States)

    Shahmoradi, Leila; Ahmadi, Maryam; Sadoughi, Farahnaz; Piri, Zakieh; Gohari, Mahmood Reza

    2015-01-01

    A knowledge management audit (KMA) is the first phase in knowledge management implementation. Incomplete or incomprehensive execution of the KMA has caused many knowledge management programs to fail. A study was undertaken to investigate how KMAs are performed systematically in organizations and present a comprehensive model for performing KMAs based on a systematic review. Studies were identified by searching electronic databases such as Emerald, LISA, and the Cochrane library and e-journals such as the Oxford Journal and hand searching of printed journals, theses, and books in the Tehran University of Medical Sciences digital library. The sources used in this study consisted of studies available through the digital library of the Tehran University of Medical Sciences that were published between 2000 and 2013, including both Persian- and English-language sources, as well as articles explaining the steps involved in performing a KMA. A comprehensive model for KMAs is presented in this study. To successfully execute a KMA, it is necessary to perform the appropriate preliminary activities in relation to the knowledge management infrastructure, determine the knowledge management situation, and analyze and use the available data on this situation.

  7. Maintaining Sexual Desire in Long-Term Relationships: A Systematic Review and Conceptual Model.

    Science.gov (United States)

    Mark, Kristen P; Lasslo, Julie A

    The most universally experienced sexual response is sexual desire. Though research on this topic has increased in recent years, low and high desire are still problematized in clinical settings and the broader culture. However, despite knowledge that sexual desire ebbs and flows both within and between individuals, and that problems with sexual desire are strongly linked to problems with relationships, there is a critical gap in understanding the factors that contribute to maintaining sexual desire in the context of relationships. This article offers a systematic review of the literature to provide researchers, educators, clinicians, and the broader public with an overview and a conceptual model of nonclinical sexual desire in long-term relationships. First, we systematically identified peer-reviewed, English-language articles that focused on the maintenance of sexual desire in the context of nonclinical romantic relationships. Second, we reviewed a total of 64 articles that met inclusion criteria and synthesized them into factors using a socioecological framework categorized as individual, interpersonal, and societal in nature. These findings are used to build a conceptual model of maintaining sexual desire in long-term relationships. Finally, we discuss the limitations of the existing research and suggest clear directions for future research.

  8. Developing and Optimising the Use of Logic Models in Systematic Reviews: Exploring Practice and Good Practice in the Use of Programme Theory in Reviews.

    Science.gov (United States)

    Kneale, Dylan; Thomas, James; Harris, Katherine

    2015-01-01

    Logic models are becoming an increasingly common feature of systematic reviews, as is the use of programme theory more generally in systematic reviewing. Logic models offer a framework to help reviewers 'think' conceptually at various points during the review, and can be a useful tool in defining study inclusion and exclusion criteria, guiding the search strategy, identifying relevant outcomes, identifying mediating and moderating factors, and communicating review findings. In this paper we critique the use of logic models in systematic reviews and protocols drawn from two databases representing reviews of health interventions and international development interventions. Programme theory featured in only a minority of the included reviews and protocols. Despite drawing on different disciplinary traditions, reviews and protocols from both sources shared several limitations in their use of logic models and theories of change, which were almost always used solely to depict pictorially the way in which the intervention worked. Logic models and theories of change were consequently rarely used to communicate the findings of the review. Logic models have the potential to be an integral aid throughout the systematic reviewing process. The absence of good practice around their use and development may be one reason for the apparent limited utility of logic models in many existing systematic reviews. These concerns are addressed in the second half of this paper, where we offer a set of principles for the use of logic models and an example of how we constructed a logic model for a review of school-based asthma interventions.

  9. Simulation modeling for stratified breast cancer screening - a systematic review of cost and quality of life assumptions.

    Science.gov (United States)

    Arnold, Matthias

    2017-12-02

    The economic evaluation of stratified breast cancer screening is gaining momentum, but also produces very diverse results. Systematic reviews have so far focused on modeling techniques and epidemiologic assumptions, while cost and utility parameters have received little attention. This systematic review assesses simulation models for stratified breast cancer screening based on their cost and utility parameters in each phase of breast cancer screening and care. A literature review was conducted to compare economic evaluations with simulation models of personalized breast cancer screening. Study quality was assessed using reporting guidelines. Cost and utility inputs were extracted, standardized and structured using a care delivery framework. Studies were then clustered according to their study aim, and parameters were compared within the clusters. Eighteen studies were identified within three study clusters. Reporting quality was very diverse in all three clusters. Only two studies in cluster 1, four studies in cluster 2 and one study in cluster 3 scored high in the quality appraisal. In addition to the quality appraisal, this review assessed whether the simulation models integrated all relevant phases of care, whether the utility parameters were consistent and methodologically sound, and whether the cost parameters used for screening, diagnostic work-up and treatment were compatible and consistent. Of the 18 studies, only three did not show signs of potential bias. This systematic review shows that a closer look at the cost and utility parameters can help to identify potential bias. Future simulation models should integrate all relevant phases of care, use methodologically sound utility parameters and avoid inconsistent cost parameters.

  10. A systematic hub loads model of a horizontal wind turbine

    International Nuclear Information System (INIS)

    Kazacoks, Romans; Jamieson, Peter

    2014-01-01

    The offshore wind turbine industry has focused on increasing the capacity of a single unit by up-scaling its machines. There is, however, a lack of systematic studies on how loads vary with the properties and scale of a wind turbine. The purpose of this paper is to study how modifications of blade properties such as mass, stiffness and dimensions influence blade root moments and lifetime damage equivalent loads (DELs) of the rotor blades, in order to produce fatigue-load trends in the blade root moments as a function of the applied modifications. A linear trend of lifetime DELs with the applied blade modifications was found, provided that the natural frequency of the original (reference) model was preserved, since the control system was tuned to that specific frequency. For larger modifications of the wind turbine, the controller would need retuning.

  11. Limits of Risk Predictability in a Cascading Alternating Renewal Process Model.

    Science.gov (United States)

    Lin, Xin; Moussawi, Alaa; Korniss, Gyorgy; Bakdash, Jonathan Z; Szymanski, Boleslaw K

    2017-07-27

    Most risk analysis models systematically underestimate the probability and impact of catastrophic events (e.g., economic crises, natural disasters, and terrorism) by not taking into account the interconnectivity and interdependence of risks. To address this weakness, we propose the Cascading Alternating Renewal Process (CARP) to forecast interconnected global risks. However, assessments of the model's prediction precision are limited by a lack of sufficient ground truth data. Here, we establish prediction precision as a function of input data size by using alternative long ground truth data generated by simulations of the CARP model with known parameters. We illustrate the approach on a model of fires in artificial cities assembled from basic city blocks with diverse housing. The results confirm that parameter recovery variance exhibits power-law decay as a function of the length of available ground truth data. Using CARP, we also demonstrate estimation on a disparate dataset that likewise contains dependencies: real-world prediction precision for a global risk model based on the World Economic Forum Global Risk Report. We conclude that the CARP model is an efficient method for predicting catastrophic cascading events, with potential applications to emerging local and global interconnected risks.
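    The power-law decay of parameter-recovery variance with ground-truth length can be illustrated on the simplest alternating-renewal ingredient: recovering an exponential event rate from samples of increasing size. The process, rate, and sample sizes below are assumptions for illustration, not the CARP fire model.

```python
import math
import random

def recovery_variance(rate, n, trials=400, seed=0):
    """Variance of the maximum-likelihood estimate of an exponential rate
    across repeated synthetic ground-truth samples of length n."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(trials):
        sample = [rng.expovariate(rate) for _ in range(n)]
        estimates.append(n / sum(sample))  # MLE: 1 / sample mean
    mean = sum(estimates) / trials
    return sum((e - mean) ** 2 for e in estimates) / trials

# Variance vs. data length on a log-log scale: slope near -1 (power-law decay,
# matching the asymptotic Var ~ rate^2 / n of the MLE).
sizes = [50, 100, 200, 400, 800]
logn = [math.log(n) for n in sizes]
logv = [math.log(recovery_variance(2.0, n)) for n in sizes]
k = len(sizes)
slope = ((k * sum(x * y for x, y in zip(logn, logv)) - sum(logn) * sum(logv))
         / (k * sum(x * x for x in logn) - sum(logn) ** 2))
print(round(slope, 2))  # close to -1
```

    The same experiment, run on simulated cascades instead of i.i.d. exponentials, is what lets prediction precision be quoted as a function of ground-truth length even when real data are short.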

  12. Is oral cancer incidence among patients with oral lichen planus/oral lichenoid lesions underestimated?

    Science.gov (United States)

    Gonzalez-Moles, M A; Gil-Montoya, J A; Ruiz-Avila, I; Bravo, M

    2017-02-01

    Oral lichen planus (OLP) and oral lichenoid lesions (OLL) are considered potentially malignant disorders with a cancer incidence of around 1% of cases, although this estimation is controversial. The aim of this study was to analyze the cancer incidence in a case series of patients with OLP and OLL and to explore clinicopathological aspects that may cause underestimation of the cancer incidence in these diseases. A retrospective study was conducted of 102 patients diagnosed with OLP (n = 21, 20.58%) or OLL (n = 81) between January 2006 and January 2016. Patients were informed of the risk of malignization and followed up annually. The number of sessions programmed for each patient was compared with the number actually attended. Follow-up was classified as complete (100% attendance), good (75-99%), moderate (25-74%), or poor (<25% attendance) compliance. Cancer developed in four patients (3.9%), three males and one female. One of these developed three carcinomas, which were diagnosed at follow-up visits (two in the lower gingiva, one in the floor of the mouth); one had OLL and the other three had OLP. The carcinoma developed in mucosal areas with no OLP or OLL involvement in three of these patients, while OLP and cancer were diagnosed simultaneously in the fourth. Of the six carcinomas diagnosed, five (83.3%) were T1 and one (16.7%) T2. None were N+, and all patients remain alive and disease-free. The cancer incidence in OLP and OLL appears to be underestimated due to the strict exclusion criteria usually imposed. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. A systematic approach to obtain validated Partial Least Square models for predicting lipoprotein subclasses from serum NMR spectra

    NARCIS (Netherlands)

    Mihaleva, V.V.; van Schalkwijk, D.B.; de Graaf, A.A.; van Duynhoven, J.; van Dorsten, F.A.; Vervoort, J.; Smilde, A.; Westerhuis, J.A.; Jacobs, D.M.

    2014-01-01

    A systematic approach is described for building validated PLS models that predict cholesterol and triglyceride concentrations in lipoprotein subclasses in fasting serum from a normolipidemic, healthy population. The PLS models were built on diffusion-edited 1H NMR spectra and calibrated on

  14. A systematic approach to obtain validated partial least square models for predicting lipoprotein subclasses from serum NMR spectra

    NARCIS (Netherlands)

    Mihaleva, V.V.; Schalkwijk, van D.B.; Graaf, de A.A.; Duynhoven, van J.P.M.; Dorsten, van F.A.; Vervoort, J.J.M.; Smilde, A.K.; Westerhuis, J.A.; Jacobs, D.M.

    2014-01-01

    A systematic approach is described for building validated PLS models that predict cholesterol and triglyceride concentrations in lipoprotein subclasses in fasting serum from a normolipidemic, healthy population. The PLS models were built on diffusion-edited (1)H NMR spectra and calibrated on

  15. A systematic approach to obtain validated partial least square models for predicting lipoprotein subclasses from serum nmr spectra

    NARCIS (Netherlands)

    Mihaleva, V.V.; Schalkwijk, D.B. van; Graaf, A.A. de; Duynhoven, J. van; Dorsten, F.A. van; Vervoort, J.; Smilde, A.; Westerhuis, J.A.; Jacobs, D.M.

    2014-01-01

    A systematic approach is described for building validated PLS models that predict cholesterol and triglyceride concentrations in lipoprotein subclasses in fasting serum from a normolipidemic, healthy population. The PLS models were built on diffusion-edited 1H NMR spectra and calibrated on
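    The core fitting step behind the three records above can be illustrated with a textbook one-component PLS1 (NIPALS) fragment on mean-centred data. The toy "spectra" below are synthetic, and the code is a hypothetical minimal sketch, not the validated, cross-calibrated pipeline of the paper.

```python
def pls1_one_component(X, y):
    """Fit a one-component PLS1 regression: X is a list of rows, y a list.
    Returns a prediction function for new rows."""
    n, p = len(X), len(X[0])
    # Mean-centre predictors and response.
    xm = [sum(row[j] for row in X) / n for j in range(p)]
    ym = sum(y) / n
    Xc = [[row[j] - xm[j] for j in range(p)] for row in X]
    yc = [v - ym for v in y]
    # Weight vector w proportional to X^T y, normalised to unit length.
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # Scores t = Xc w and inner regression coefficient b = y^T t / t^T t.
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]
    b = sum(yc[i] * t[i] for i in range(n)) / sum(ti * ti for ti in t)

    def predict(row):
        score = sum((row[j] - xm[j]) * w[j] for j in range(p))
        return ym + b * score
    return predict

# Toy two-channel "spectra" whose response follows one latent direction.
X = [[1.0, 2.0], [2.0, 4.1], [3.0, 5.9], [4.0, 8.2]]
y = [3.0, 6.1, 8.9, 12.2]
predict = pls1_one_component(X, y)
print(round(predict([2.5, 5.0]), 1))  # -> 7.5
```

    A validated model in the papers' sense would wrap this fit in calibration/test splits and repeat it per lipoprotein subclass; the fragment only shows the latent-variable regression itself.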

  16. Error Analysis of Satellite Precipitation-Driven Modeling of Flood Events in Complex Alpine Terrain

    Directory of Open Access Journals (Sweden)

    Yiwen Mei

    2016-03-01

    Full Text Available The error in satellite precipitation-driven complex terrain flood simulations is characterized in this study for eight different global satellite products and 128 flood events over the Eastern Italian Alps. The flood events are grouped according to two flood types: rain floods and flash floods. The satellite precipitation products and runoff simulations are evaluated based on systematic and random error metrics applied to the matched event pairs and basin-scale event properties (i.e., rainfall and runoff cumulative depth and time series shape). Overall, the error characteristics exhibit dependency on the flood type. Generally, the timing of the event precipitation mass center and the dispersion of the time series derived from satellite precipitation exhibit good agreement with the reference; the cumulative depth is mostly underestimated. The study shows a dampening effect in both the systematic and random error components of the satellite-driven hydrograph relative to the satellite-retrieved hyetograph. The systematic error in the shape of the time series shows a significant dampening effect. The random error dampening effect is less pronounced for the flash flood events and for the rain flood events with a high runoff coefficient. This event-based analysis of satellite precipitation error propagation in flood modeling sheds light on the application of satellite precipitation in mountain flood hydrology.
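    One common way to split error into the systematic and random components used in evaluations like this is the bias / centred-RMSE decomposition, in which RMSE² = bias² + CRMSE². The event depths below are invented, and this metric choice is an assumption rather than the paper's exact formulation.

```python
def error_components(sim, obs):
    """Split simulation error into systematic (mean bias) and random
    (centred RMSE) parts, satisfying rmse**2 == bias**2 + crmse**2."""
    n = len(sim)
    errs = [s - o for s, o in zip(sim, obs)]
    bias = sum(errs) / n                                      # systematic part
    crmse = (sum((e - bias) ** 2 for e in errs) / n) ** 0.5   # random part
    rmse = (sum(e * e for e in errs) / n) ** 0.5
    return bias, crmse, rmse

# Invented satellite-vs-reference event rainfall depths (mm).
sat = [12.0, 30.0, 22.0, 41.0]
ref = [15.0, 36.0, 25.0, 50.0]
bias, crmse, rmse = error_components(sat, ref)
print(round(bias, 2), round(crmse, 2))  # -> -5.25 2.49
```

    The negative bias corresponds to the cumulative-depth underestimation reported above; the centred term isolates the scatter that remains once that offset is removed.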

  17. Maturity Models in Supply Chain Sustainability: A Systematic Literature Review

    Directory of Open Access Journals (Sweden)

    Elisabete Correia

    2017-01-01

    Full Text Available A systematic literature review of supply chain maturity models with sustainability concerns is presented. The objective is to give insights into methodological issues related to maturity models, namely the research objectives; the research methods used to develop, validate and test them; the scope; and the main characteristics associated with their design. The literature review was performed based on journal articles and conference papers from 2000 to 2015 using the SCOPUS, Emerald Insight, EBSCO and Web of Science databases. Most of the analysed papers have as their main objective the development of maturity models and their validation. The case study is the methodology most widely used by researchers to develop and validate maturity models. From the sustainability perspective, the scope of the analysed maturity models is the Triple Bottom Line (TBL) and the environmental dimension, focusing on a specific process (eco-design and new product development) and lacking a broad supply chain perspective. The dominant characteristics associated with the design of the maturity models are maturity grids and a continuous representation. In addition, the results do not reveal a trend toward a specific number of maturity levels. The comprehensive review, analysis, and synthesis of the maturity model literature represent an important contribution to the organization of this research area, making it possible to clarify some of the confusion that exists about the concepts, approaches and components of maturity models in sustainability. Various aspects associated with maturity models (i.e., research objectives, research methods, scope and design characteristics) are explored to contribute to the evolution and significance of this multidimensional area.

  18. Statistical Methods for the Qualitative Assessment of Dynamic Models with Time Delay (R Package qualV

    Directory of Open Access Journals (Sweden)

    Stefanie Jachner

    2007-06-01

    Full Text Available Results of ecological models differ, to some extent, more from measured data than from empirical knowledge. Existing techniques for validation based on quantitative assessments sometimes cause an underestimation of the performance of models due to time shifts, accelerations and delays, or systematic differences between measurement and simulation. However, for the application of such models it is often more important to reproduce essential patterns than seemingly exact numerical values. This paper presents techniques to identify patterns and numerical methods to measure the consistency of patterns between observations and model results. An orthogonal set of deviance measures for absolute, relative and ordinal scales was compiled to provide information about the type of difference. Furthermore, two different approaches accounting for time shifts are presented. The first transforms the time axis to take time delays and speed differences into account. The second uses known qualitative criteria to divide time series into interval units in accordance with their main features. The methods differ in their basic concepts and in the form of the resulting criteria. Both approaches and the deviance measures discussed are implemented in an R package. All methods are demonstrated by means of water quality measurements and simulation data. The proposed quality criteria make it possible to recognize systematic differences and time shifts between time series and to draw conclusions about the quantitative and qualitative similarity of patterns.
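    The simplest version of the time-shift idea above, scoring agreement after allowing a bounded shift of the simulated series, is a best-lag error search. This is a sketch of the concept only, not the qualV implementation (which uses richer time transformations and deviance measures).

```python
def rmse(a, b):
    return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5

def best_lag_rmse(obs, sim, max_lag=3):
    """RMSE minimised over integer time shifts of the simulation, so a
    correctly shaped but delayed simulation is not over-penalised.
    Negative lag means the simulation must be advanced to match."""
    best = (0, rmse(obs, sim))
    for lag in range(1, max_lag + 1):
        for k in (lag, -lag):
            if k > 0:
                o, s = obs[k:], sim[:-k]
            else:
                o, s = obs[:k], sim[-k:]
            score = rmse(o, s)
            if score < best[1]:
                best = (k, score)
    return best  # (lag applied to the simulation, RMSE at that lag)

obs = [0, 1, 4, 9, 4, 1, 0, 0]
sim = [0, 0, 1, 4, 9, 4, 1, 0]   # same pattern, delayed by one step
print(best_lag_rmse(obs, sim))   # -> (-1, 0.0): shifted, the match is exact
```

    At lag zero the peak misalignment produces a large error even though the pattern is reproduced perfectly; allowing the shift recovers the qualitative agreement the paper argues matters most.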

  19. Tundra water budget and implications of precipitation underestimation.

    Science.gov (United States)

    Liljedahl, Anna K; Hinzman, Larry D; Kane, Douglas L; Oechel, Walter C; Tweedie, Craig E; Zona, Donatella

    2017-08-01

    Difficulties in obtaining accurate precipitation measurements have limited meaningful hydrologic assessment for over a century due to performance challenges of conventional snowfall and rainfall gauges in windy environments. Here, we compare snowfall observations and bias adjusted snowfall to end-of-winter snow accumulation measurements on the ground for 16 years (1999-2014) and assess the implication of precipitation underestimation on the water balance for a low-gradient tundra wetland near Utqiagvik (formerly Barrow), Alaska (2007-2009). In agreement with other studies, and not accounting for sublimation, conventional snowfall gauges captured 23-56% of end-of-winter snow accumulation. Once snowfall and rainfall are bias adjusted, long-term annual precipitation estimates more than double (from 123 to 274 mm), highlighting the risk of studies using conventional or unadjusted precipitation that dramatically under-represent water balance components. Applying conventional precipitation information to the water balance analysis produced consistent storage deficits (79 to 152 mm) that were all larger than the largest actual deficit (75 mm), which was observed in the unusually low rainfall summer of 2007. Year-to-year variability in adjusted rainfall (±33 mm) was larger than evapotranspiration (±13 mm). Measured interannual variability in partitioning of snow into runoff (29% in 2008 to 68% in 2009) in years with similar end-of-winter snow accumulation (180 and 164 mm, respectively) highlights the importance of the previous summer's rainfall (25 and 60 mm, respectively) on spring runoff production. Incorrect representation of precipitation can therefore have major implications for Arctic water budget descriptions that in turn can alter estimates of carbon and energy fluxes.
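    The bias adjustment underlying these numbers is, at its simplest, division of gauge totals by a catch efficiency, after which the water balance can close. The catch ratio and the evapotranspiration and runoff totals below are invented round numbers for illustration, not the study's wind-dependent transfer functions; only the 123 mm gauge total echoes the abstract.

```python
def adjust_precip(gauge_mm, catch_ratio):
    """Undo gauge undercatch: true precipitation ~ measured / catch efficiency."""
    if not 0.0 < catch_ratio <= 1.0:
        raise ValueError("catch ratio must be in (0, 1]")
    return gauge_mm / catch_ratio

def storage_change(precip, evapotranspiration, runoff):
    """Simple annual water balance: dS = P - ET - R (all in mm)."""
    return precip - evapotranspiration - runoff

# Invented closure example: a gauge catching ~45% of true precipitation.
gauge_p, et, runoff = 123.0, 180.0, 90.0
raw = storage_change(gauge_p, et, runoff)                       # spurious deficit
adj = storage_change(adjust_precip(gauge_p, 0.45), et, runoff)  # near balance
print(round(raw, 1), round(adj, 1))  # -> -147.0 3.3
```

    With unadjusted precipitation the budget shows a large, physically implausible storage deficit, exactly the artefact the study attributes to gauge undercatch.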

  20. Are the impacts of land use on warming underestimated in climate policy?

    Science.gov (United States)

    Mahowald, Natalie M.; Ward, Daniel S.; Doney, Scott C.; Hess, Peter G.; Randerson, James T.

    2017-09-01

    While carbon dioxide emissions from energy use must be the primary target of climate change mitigation efforts, land use and land cover change (LULCC) also represent an important source of climate forcing. In this study we compute time series of global surface temperature change separately for LULCC and non-LULCC sources (primarily fossil fuel burning), and show that because of the extra warming associated with the co-emission of methane and nitrous oxide with LULCC carbon dioxide emissions, and a co-emission of cooling aerosols with non-LULCC emissions of carbon dioxide, the linear relationship between cumulative carbon dioxide emissions and temperature has a two-fold higher slope for LULCC than for non-LULCC activities. Moreover, projections used in the Intergovernmental Panel on Climate Change (IPCC) for the rate of tropical land conversion in the future are relatively low compared to contemporary observations, suggesting that the future projections of land conversion used in the IPCC may underestimate potential impacts of LULCC. By including a ‘business as usual’ future LULCC scenario for tropical deforestation, we find that even if all non-LULCC emissions are switched off in 2015, it is likely that 1.5 °C of warming relative to the preindustrial era will occur by 2100. Thus, policies to reduce LULCC emissions must remain a high priority if we are to achieve the low to medium temperature change targets proposed as a part of the Paris Agreement. Future studies using integrated assessment models and other climate simulations should include more realistic deforestation rates and the integration of policy that would reduce LULCC emissions.

  1. Rapidity distributions of hadrons in the HydHSD hybrid model

    Energy Technology Data Exchange (ETDEWEB)

    Khvorostukhin, A. S., E-mail: hvorost@theor.jinr.ru; Toneev, V. D. [Joint Institute for Nuclear Research (Russian Federation)

    2017-03-15

    A multistage hybrid model intended for describing heavy-ion interactions in the energy region of the NICA collider under construction in Dubna is proposed. The model combines the initial, fast interaction stage, described by the model of hadron string dynamics (HSD), with the subsequent evolution that the expanding system formed at the first stage undergoes at the second stage, treated on the basis of ideal hydrodynamics; after the completion of the second stage, the particles involved may still undergo rescattering (third interaction stage). The model admits three freeze-out scenarios: isochronous, isothermal, and isoenergetic. Generally, the HydHSD hybrid model developed in the present study provides fairly good agreement with available experimental data on proton rapidity spectra. It is shown that, within this hybrid model, the two-humped structure of proton rapidity distributions can be obtained either by increasing the freeze-out temperature and energy density or by a later transition to the hydrodynamic stage. Although the proposed hybrid model reproduces the rapidity spectra of protons, it is unable to describe the rapidity distributions of pions, systematically underestimating their yield. It is necessary to refine the model by including viscosity effects at the hydrodynamic stage of the evolution of the system and by considering the third interaction stage in more detail.

  2. An extended systematic mapping study about the scalability of i* Models

    Directory of Open Access Journals (Sweden)

    Paulo Lima

    2016-12-01

    Full Text Available i* models have been used for requirements specification in many domains, such as healthcare, telecommunication, and air traffic control. Managing the scalability and the complexity of such models is an important challenge in Requirements Engineering (RE). Scalability is also one of the most intractable issues in the design of visual notations in general: a well-known problem with visual representations is that they do not scale well. This issue has led us to investigate scalability in i* models and its variants by means of a systematic mapping study. This paper is an extended version of a previous paper on the scalability of i*, extended with papers indicated by specialists. Moreover, we also discuss the challenges and open issues regarding the scalability of i* models and its variants. A total of 126 papers were analyzed in order to understand how the RE community perceives scalability and which proposals have considered this topic. We found that scalability issues are indeed perceived as relevant and that further work is still required, even though many potential solutions have already been proposed. This study can be a starting point for researchers aiming to further advance the treatment of scalability in i* models.

  3. Underestimating Costs in Public Works Projects

    DEFF Research Database (Denmark)

    Flyvbjerg, Bent; Holm, Mette K. Skamris; Buhl, Søren L.

    2002-01-01

    This article presents results from the first statistically significant study of cost escalation in transportation infrastructure projects. Based on a sample of 258 transportation infrastructure projects worth $90 billion (U.S.), it is found with overwhelming statistical significance that the cost estimates used to decide whether important infrastructure should be built are highly and systematically misleading. The result is continuous cost escalation of billions of dollars. The sample used in the study is the largest of its kind, allowing for the first time statistically valid conclusions regarding cost escalation in such projects. Those who value honest numbers should not trust the cost estimates and cost-benefit analyses produced by project promoters and their analysts. Independent estimates and analyses are needed, as are institutional checks and balances to curb deception.

  4. Cost Underestimation in Public Works Projects

    DEFF Research Database (Denmark)

    Flyvbjerg, Bent; Holm, Mette K. Skamris; Buhl, Søren L.

    This article presents results from the first statistically significant study of cost escalation in transportation infrastructure projects. Based on a sample of 258 transportation infrastructure projects worth $90 billion (U.S.), it is found with overwhelming statistical significance that the cost estimates used to decide whether important infrastructure should be built are highly and systematically misleading. The result is continuous cost escalation of billions of dollars. The sample used in the study is the largest of its kind, allowing for the first time statistically valid conclusions. Those who value honest numbers should not trust the cost estimates and cost-benefit analyses produced by project promoters and their analysts. Independent estimates and analyses are needed, as are institutional checks and balances to curb deception.

  5. Language issues, an underestimated danger in major hazard control?

    Science.gov (United States)

    Lindhout, Paul; Ale, Ben J M

    2009-12-15

    Language issues are problems with communication via speech, signs, gestures or their written equivalents. They may result from poor reading and writing skills, a mix of foreign languages and other circumstances. Language issues are not picked up as a safety risk on the shop floor by current safety management systems. These safety risks need to be identified, acknowledged, quantified and prioritized in order to allow risk-reducing measures to be taken. This study investigates the nature of language-issue-related danger in the literature, by experiment and by a survey among the Seveso II companies in the Netherlands. Based on human error frequencies, and on the contents of accident investigation reports, the risks associated with language issues were ranked. Accident investigation method causal factor categories were found not to be sufficiently representative for the type and magnitude of these risks. Readability of safety-related documents used by the companies was investigated and found to be poor in many cases. Interviews among regulators and a survey among Seveso II companies were used to identify the gap between the language-issue-related dangers found in the literature and current best practices. This study demonstrates by means of triangulation with different investigative methods that language-issue-related risks are indeed underestimated. A recommended course of action in order to arrive at appropriate measures is presented.
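Document readability of the kind examined in this study can be scored with simple indices. As an illustration, a minimal sketch of the LIX readability index; the study's actual readability instrument is not named in this abstract, so using LIX here is an assumption:

```python
def lix(text):
    """LIX readability index: (words per sentence) + 100 * (share of words
    longer than 6 characters). Higher scores mean harder text."""
    sentences = max(1, sum(text.count(c) for c in ".!?"))  # crude sentence count
    words = text.split()
    long_words = sum(1 for w in words if len(w.strip(".,!?;:")) > 6)
    return len(words) / sentences + 100.0 * long_words / len(words)

# A hypothetical safety-document sentence pair scores in the "very hard" range:
sample = ("Employees must wear the prescribed personal protective equipment. "
          "Report hazards immediately.")
print(round(lix(sample), 1))
```

Scores above roughly 55-60 are conventionally read as "very difficult", which is the kind of threshold a readability audit of safety documents would apply.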

  6. Language issues, an underestimated danger in major hazard control?

    Energy Technology Data Exchange (ETDEWEB)

    Lindhout, Paul, E-mail: plindhout@minszw.nl [Ministry of Social Affairs and Employment, AI-MHC, Anna van Hannoverstraat 4, P.O. Box 90801, 2509 LV The Hague (Netherlands); Ale, Ben J.M. [Delft University of Technology, TBM-Safety Science Group, Jaffalaan 5, 2628 BX Delft (Netherlands)

    2009-12-15

    Language issues are problems with communication via speech, signs, gestures or their written equivalents. They may result from poor reading and writing skills, a mix of foreign languages and other circumstances. Language issues are not picked up as a safety risk on the shop floor by current safety management systems. These safety risks need to be identified, acknowledged, quantified and prioritised in order to allow risk-reducing measures to be taken. This study investigates the nature of language-issue-related danger in the literature, by experiment and by a survey among the Seveso II companies in the Netherlands. Based on human error frequencies, and on the contents of accident investigation reports, the risks associated with language issues were ranked. Accident investigation method causal factor categories were found not to be sufficiently representative for the type and magnitude of these risks. Readability of safety-related documents used by the companies was investigated and found to be poor in many cases. Interviews among regulators and a survey among Seveso II companies were used to identify the gap between the language-issue-related dangers found in the literature and current best practices. This study demonstrates by means of triangulation with different investigative methods that language-issue-related risks are indeed underestimated. A recommended course of action in order to arrive at appropriate measures is presented.

  7. Language issues, an underestimated danger in major hazard control?

    International Nuclear Information System (INIS)

    Lindhout, Paul; Ale, Ben J.M.

    2009-01-01

    Language issues are problems with communication via speech, signs, gestures or their written equivalents. They may result from poor reading and writing skills, a mix of foreign languages and other circumstances. Language issues are not picked up as a safety risk on the shop floor by current safety management systems. These safety risks need to be identified, acknowledged, quantified and prioritised in order to allow risk-reducing measures to be taken. This study investigates the nature of language-issue-related danger in the literature, by experiment and by a survey among the Seveso II companies in the Netherlands. Based on human error frequencies, and on the contents of accident investigation reports, the risks associated with language issues were ranked. Accident investigation method causal factor categories were found not to be sufficiently representative for the type and magnitude of these risks. Readability of safety-related documents used by the companies was investigated and found to be poor in many cases. Interviews among regulators and a survey among Seveso II companies were used to identify the gap between the language-issue-related dangers found in the literature and current best practices. This study demonstrates by means of triangulation with different investigative methods that language-issue-related risks are indeed underestimated. A recommended course of action in order to arrive at appropriate measures is presented.

  8. Systematic review of prognostic models in traumatic brain injury

    Directory of Open Access Journals (Sweden)

    Roberts Ian

    2006-11-01

    Full Text Available Abstract Background: Traumatic brain injury (TBI) is a leading cause of death and disability worldwide. The ability to accurately predict patient outcome after TBI has an important role in clinical practice and research. Prognostic models are statistical models that combine two or more items of patient data to predict clinical outcome. They may improve predictions in TBI patients. Multiple prognostic models for TBI have accumulated over decades, but none of them is widely used in clinical practice. The objective of this systematic review is to critically assess existing prognostic models for TBI. Methods: Studies that combine at least two variables to predict any outcome in patients with TBI were searched in PUBMED and EMBASE. Two reviewers independently examined titles and abstracts and assessed whether each met the pre-defined inclusion criteria. Results: A total of 53 reports including 102 models were identified. Almost half (47%) were derived from adult patients. Three quarters of the models included fewer than 500 patients. Most of the models (93%) were from high-income-country populations. Logistic regression was the most common analytical strategy used to derive models (47%). In relation to the quality of the derivation models (n = 66), only 15% reported less than 10% loss to follow-up, 68% did not justify the rationale for including the predictors, 11% conducted an external validation, and only 19% of the logistic models presented the results in a clinically user-friendly way. Conclusion: Prognostic models are frequently published, but they are developed from small samples of patients, their methodological quality is poor, and they are rarely validated on external populations. Furthermore, they are not clinically practical, as they are not presented to physicians in a user-friendly way.
Finally, because only a few are developed using populations from low- and middle-income countries, where most trauma occurs, the generalizability to these settings is limited.
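Most of the reviewed models are logistic regressions; presenting them as a predicted probability rather than a coefficient table is what the review means by "clinically user-friendly". A toy sketch of such a model, with invented predictors and coefficients that are not taken from any reviewed study:

```python
import math

def predict_mortality(age, gcs_motor, pupils_reactive):
    """Toy logistic prognostic model: P(death) from three TBI predictors.
    All coefficients are hypothetical, for illustration only."""
    # linear predictor: intercept plus weighted predictors
    lp = -2.0 + 0.04 * age - 0.45 * gcs_motor + (-0.9 if pupils_reactive else 0.0)
    return 1.0 / (1.0 + math.exp(-lp))  # inverse-logit gives a probability

# A user-friendly presentation reports the probability, not the coefficients:
p = predict_mortality(age=60, gcs_motor=4, pupils_reactive=True)
print(f"Predicted probability of death: {p:.1%}")
```

The same model could also be presented as a paper score chart, which is the other "user-friendly" format the review alludes to.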

  9. Parenting practices, parents' underestimation of daughters' risks, and alcohol and sexual behaviors of urban girls.

    Science.gov (United States)

    O'Donnell, Lydia; Stueve, Ann; Duran, Richard; Myint-U, Athi; Agronick, Gail; San Doval, Alexi; Wilson-Simmons, Renée

    2008-05-01

    In urban economically distressed communities, high rates of early sexual initiation combined with alcohol use place adolescent girls at risk for myriad negative health consequences. This article reports on the extent to which parents of young teens underestimate both the risks their daughters are exposed to and the considerable influence that they have over their children's decisions and behaviors. Surveys were conducted with more than 700 sixth-grade girls and their parents, recruited from seven New York City schools serving low-income families. Bivariate and multivariate analyses examined relationships among parents' practices and perceptions of daughters' risks, girls' reports of parenting, and outcomes of girls' alcohol use, media and peer conduct, and heterosexual romantic and social behaviors that typically precede sexual intercourse. Although only four parents thought that their daughters had used alcohol, 22% of the daughters reported drinking in the past year. Approximately 5% of parents thought that daughters had hugged and kissed a boy for a long time or had "hung out" with older boys, whereas 38% of girls reported these behaviors. Parents' underestimation of risk was correlated with lower reports of positive parenting practices by daughters. In multivariate analyses, girls' reports of parental oversight, rules, and disapproval of risk are associated with all three behavioral outcomes. Adult reports of parenting practices are associated with girls' conduct and heterosexual behaviors, but not with their alcohol use. Creating greater awareness of the early onset of risk behaviors among urban adolescent girls is important for fostering positive parenting practices, which in turn may help parents to support their daughters' healthier choices.

  10. Testing flow diversion in animal models: a systematic review.

    Science.gov (United States)

    Fahed, Robert; Raymond, Jean; Ducroux, Célina; Gentric, Jean-Christophe; Salazkin, Igor; Ziegler, Daniela; Gevry, Guylaine; Darsaut, Tim E

    2016-04-01

    Flow diversion (FD) is increasingly used to treat intracranial aneurysms. We sought to systematically review published studies to assess the quality of reporting and summarize the results of FD in various animal models. Databases were searched to retrieve all animal studies on FD from 2000 to 2015. Extracted data included species and aneurysm models, aneurysm and neck dimensions, type of flow diverter, occlusion rates, and complications. Articles were evaluated using a checklist derived from the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines. Forty-two articles reporting the results of FD in nine different aneurysm models were included. The rabbit elastase-induced aneurysm model was the most commonly used, with 3-month occlusion rates of 73.5% (95% CI [61.9-82.6%]). FD of surgical sidewall aneurysms, constructed in rabbits or canines, resulted in high occlusion rates (100% [65.5-100%]). FD resulted in modest occlusion rates (15.4% [8.9-25.1%]) when tested in six complex canine aneurysm models designed to reproduce more difficult clinical contexts (large necks, bifurcation, or fusiform aneurysms). Adverse events, including branch occlusion, were rarely reported. There were no hemorrhagic complications. Articles complied with 20.8 ± 3.9 of 41 ARRIVE items; only a small number used randomization (3/42 articles [7.1%]) or a control group (13/42 articles [30.9%]). Preclinical studies on FD have shown various results. Occlusion of elastase-induced aneurysms was common after FD. The model is not challenging but is standardized in many laboratories. Failures of FD can be reproduced in less standardized but more challenging surgical canine constructions. The quality of reporting could be improved.
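The occlusion rates in this review are reported as proportions with 95% confidence intervals. As an illustration, a Wilson score interval for one of the reported fractions (13/42 articles with a control group); the review does not state which CI method it used, so Wilson is an assumption here:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# e.g. 13 of 42 articles used a control group:
lo, hi = wilson_ci(13, 42)
print(f"30.9% (95% CI {lo:.1%}-{hi:.1%})")
```

Unlike the simpler Wald interval, the Wilson interval stays inside [0, 1] even for extreme proportions such as the 100% sidewall-aneurysm occlusion rate quoted above.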

  11. Systematics of the level density parameters

    International Nuclear Information System (INIS)

    Ignatyuk, A.V.; Istekov, K.K.; Smirenkin, G.N.

    1977-01-01

    The excitation energy dependence of nuclear level density is phenomenologically systematized in terms of the Fermi gas model. The analysis has been conducted in the atomic mass number range A ≥ 150, where collective effects are most pronounced. The density parameter a(U) is obtained using data on neutron resonances. Three systematics of a(U) are considered for describing the energy spectra of nuclear states: the plain Fermi gas model (1); the model including contributions from collective rotational and vibrational modes (2); and the model additionally accounting for pair correlations (3). It is shown that at excitation energies close to the neutron binding energy all three systematics of a(U) yield practically the same level densities. At high energies only systematics (2) and (3) are valid, and at energies lower than the neutron binding energy only the last systematics is adequate.
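The Fermi-gas level density underlying these systematics has a standard textbook closed form, ρ(U) = (√π / (12·a^(1/4)·U^(5/4)))·exp(2√(aU)). A sketch of the plain formula, without the collective enhancement or pairing corrections the paper adds; the parameter value below is illustrative:

```python
import math

def fermi_gas_level_density(U, a):
    """Plain Fermi-gas level density rho(U) in 1/MeV:
    rho = sqrt(pi) / (12 * a^(1/4) * U^(5/4)) * exp(2*sqrt(a*U)),
    with U the excitation energy (MeV) and a the level density parameter (1/MeV).
    No collective or pairing corrections are included."""
    prefactor = math.sqrt(math.pi) / (12.0 * a**0.25 * U**1.25)
    return prefactor * math.exp(2.0 * math.sqrt(a * U))

# The density rises steeply with excitation energy (a = 18 MeV^-1 is illustrative):
for U in (4.0, 6.0, 8.0):
    print(U, f"{fermi_gas_level_density(U, a=18.0):.3e}")
```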

  12. Disguised Distress in Children and Adolescents "Flying under the Radar": Why Psychological Problems Are Underestimated and How Schools Must Respond

    Science.gov (United States)

    Flett, Gordon L.; Hewitt, Paul L.

    2013-01-01

    It is now recognized that there is a very high prevalence of psychological disorders among children and adolescents and relatively few receive psychological treatment. In the current article, we present the argument that levels of distress and dysfunction among young people are substantially underestimated and the prevalence of psychological…

  13. Cost-Effectiveness of HBV and HCV Screening Strategies – A Systematic Review of Existing Modelling Techniques

    Science.gov (United States)

    Geue, Claudia; Wu, Olivia; Xin, Yiqiao; Heggie, Robert; Hutchinson, Sharon; Martin, Natasha K.; Fenwick, Elisabeth; Goldberg, David

    2015-01-01

    Introduction: Studies evaluating the cost-effectiveness of screening for Hepatitis B Virus (HBV) and Hepatitis C Virus (HCV) are generally heterogeneous in terms of risk groups, settings, screening intervention, outcomes and the economic modelling framework. It is therefore difficult to compare cost-effectiveness results between studies. This systematic review aims to summarise and critically assess existing economic models for HBV and HCV in order to identify the main methodological differences in modelling approaches. Methods: A structured search strategy was developed and a systematic review carried out. A critical assessment of the decision-analytic models was carried out according to the guidelines and framework developed for assessment of decision-analytic models in Health Technology Assessment of health care interventions. Results: The overall approach to analysing the cost-effectiveness of screening strategies was found to be broadly consistent for HBV and HCV. However, modelling parameters and related structure differed between models, producing different results. More recent publications performed better against a performance matrix evaluating model components and methodology. Conclusion: When assessing screening strategies for HBV and HCV infection, the focus should be on more recent studies, which applied the latest treatment regimes and test methods and had better and more complete data on which to base their models. In addition to parameter selection and associated assumptions, careful consideration of dynamic versus static modelling is recommended. Future research may want to focus on these methodological issues. In addition, the ability to evaluate screening strategies for multiple infectious diseases (e.g. HCV and HIV at the same time) might prove important for decision makers. PMID:26689908
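Whatever their structure, cost-effectiveness models of the kind reviewed here ultimately report an incremental cost-effectiveness ratio (ICER): extra cost per extra unit of health effect (typically a QALY) of one strategy over another. A minimal sketch with invented costs and effect estimates:

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost-effectiveness ratio:
    (incremental cost) / (incremental effect), e.g. cost per QALY gained."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical screening strategy versus no screening (costs in currency
# units per person, effects in QALYs per person):
ratio = icer(cost_new=12_000, effect_new=8.3, cost_old=9_000, effect_old=8.1)
print(f"ICER = {ratio:.0f} per QALY gained")
```

The ICER is then compared against a willingness-to-pay threshold to decide whether the screening strategy counts as cost-effective.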

  14. A systematic review of predictive models for asthma development in children.

    Science.gov (United States)

    Luo, Gang; Nkoy, Flory L; Stone, Bryan L; Schmick, Darell; Johnson, Michael D

    2015-11-28

    Asthma is the most common pediatric chronic disease, affecting 9.6% of American children. Delay in asthma diagnosis is prevalent, resulting in suboptimal asthma management. To help avoid delay in asthma diagnosis and advance asthma prevention research, researchers have proposed various models to predict asthma development in children. This paper reviews these models. A systematic review was conducted through searching in PubMed, EMBASE, CINAHL, Scopus, the Cochrane Library, the ACM Digital Library, IEEE Xplore, and OpenGrey up to June 3, 2015. The literature on predictive models for asthma development in children was retrieved, with search results limited to human subjects and children (birth to 18 years). Two independent reviewers screened the literature, performed data extraction, and assessed article quality. The literature search returned 13,101 references in total. After manual review, 32 of these references were determined to be relevant and are discussed in the paper. We identify several limitations of existing predictive models for asthma development in children, and provide preliminary thoughts on how to address these limitations. Existing predictive models for asthma development in children have inadequate accuracy. Efforts to improve these models' performance are needed, but are limited by the lack of a gold standard for asthma development in children.
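The "inadequate accuracy" verdict on such models is usually reached via sensitivity, specificity, and positive predictive value computed from a confusion matrix. A sketch with a hypothetical confusion matrix (the counts below are invented, not from any reviewed model):

```python
def accuracy_metrics(tp, fp, fn, tn):
    """Standard evaluation metrics for a binary predictive model,
    from true/false positives and negatives."""
    sensitivity = tp / (tp + fn)   # share of true asthma cases flagged
    specificity = tn / (tn + fp)   # share of non-cases correctly cleared
    ppv = tp / (tp + fp)           # share of flagged children who develop asthma
    return sensitivity, specificity, ppv

# Hypothetical results of an asthma-prediction rule on 200 children:
sens, spec, ppv = accuracy_metrics(tp=40, fp=30, fn=10, tn=120)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} PPV={ppv:.2f}")
```

A low PPV despite decent sensitivity and specificity is a common pattern when the outcome is rare, which is one reason reported accuracies for these models disappoint.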

  15. Kinematic modeling of the Milky Way using the RAVE and GCS stellar surveys

    Energy Technology Data Exchange (ETDEWEB)

    Sharma, S.; Bland-Hawthorn, J. [Sydney Institute for Astronomy, School of Physics, University of Sydney, Sydney, NSW 2006 (Australia); Binney, J. [Rudolf Peierls Center for Theoretical Physics, University of Oxford, 1 Keble Road, Oxford OX1 3NP (United Kingdom); Freeman, K. C. [RSAA Australian National University, Mount Stromlo Observatory, Cotter Road, Weston Creek, Canberra, ACT 72611 (Australia); Steinmetz, M.; Williams, M. E. K. [Leibniz Institut für Astrophysik Potsdam (AIP), An der Sterwarte 16, D-14482 Potsdam (Germany); Boeche, C.; Grebel, E. K. [Astronomisches Rechen-Institut, Zentrum für Astronomie der Universität Heidelberg, D-69120 Heidelberg (Germany); Bienaymé, O.; Siebert, A. [Observatoire astronomique de Strasbourg, Université de Strasbourg, CNRS, UMR 7550, F-67000 Strasbourg (France); Gibson, B. K. [Jeremiah Horrocks Institute for Astrophysics and Super-computing, University of Central Lancashire, Preston PR1 2HE (United Kingdom); Gilmore, G. F.; Kordopatis, G. [Institute of Astronomy, University of Cambridge, Madingley Road, Cambridge CB3 0HA (United Kingdom); Helmi, A. [Kapteyn Astronomical Institute, University of Groningen, Postbus 800, 9700 AV Groningen (Netherlands); Munari, U. [INAF-Astronomical Observatory of Padova, I-36012 Asiago (VI) (Italy); Navarro, J. F. [University of Victoria, P.O. Box 3055, Station CSC, Victoria, BC V8W 3P6 (Canada); Parker, Q. A.; Reid, W. A. [Department of Physics and Astronomy, Macquarie University, Sydney, NSW 2109 (Australia); Seabroke, G. M. [Mullard Space Science Laboratory, University College London, Holmbury St Mary, Dorking RH5 6NT (United Kingdom); Watson, F. [Australian Astronomical Observatory, P.O. Box 296, Epping, NSW 1710 (Australia); and others

    2014-09-20

    We investigate the kinematic parameters of the Milky Way disk using the Radial Velocity Experiment (RAVE) and Geneva-Copenhagen Survey (GCS) stellar surveys. We do this by fitting a kinematic model to the data and taking the selection function of the data into account. For stars in the GCS we use all phase-space coordinates, but for RAVE stars we use only (ℓ, b, v_los). Using the Markov Chain Monte Carlo technique, we investigate the full posterior distributions of the parameters given the data. We investigate the age-velocity dispersion relation for the three kinematic components (σ_R, σ_φ, σ_z), the radial dependence of the velocity dispersions, the solar peculiar motion (U_☉, V_☉, W_☉), the circular speed Θ_0 at the Sun, and the fall of mean azimuthal motion with height above the midplane. We confirm that the Besançon-style Gaussian model accurately fits the GCS data but fails to match the details of the more spatially extended RAVE survey. In particular, the Shu distribution function (DF) handles noncircular orbits more accurately and provides a better fit to the kinematic data. The Gaussian DF not only fits the data poorly but systematically underestimates the fall of velocity dispersion with radius. The radial scale length of the velocity dispersion profile of the thick disk was found to be smaller than that of the thin disk. We find that correlations exist between a number of parameters, which highlights the importance of doing joint fits. The large size of the RAVE survey allows us to get precise values for most parameters. However, large systematic uncertainties remain, especially in V_☉ and Θ_0. We find that, for an extended sample of stars, Θ_0 is underestimated by as much as 10% if the vertical dependence of the mean azimuthal motion is neglected. Using a simple model for vertical dependence of kinematics, we find that it is possible to match the Sgr A* proper motion without
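The age-velocity dispersion relation investigated here is commonly parameterized as a power law in stellar age. A sketch with illustrative normalization and exponent; these values are conventional ballpark figures for the thin disk, not the paper's fitted parameters:

```python
def velocity_dispersion(tau_gyr, sigma10=40.0, beta=0.35):
    """Power-law age-velocity dispersion relation:
    sigma(tau) = sigma10 * (tau / 10 Gyr)^beta, in km/s.
    sigma10 and beta here are illustrative, not fitted values."""
    return sigma10 * (tau_gyr / 10.0) ** beta

# Dispersion grows with age: young populations are kinematically cold,
# old populations are hot:
for tau in (1.0, 5.0, 10.0):
    print(f"age {tau:4.1f} Gyr -> sigma = {velocity_dispersion(tau):.1f} km/s")
```

In the paper's framework a relation of this kind is fitted separately for each component (σ_R, σ_φ, σ_z), with the selection function folded in.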

  16. Examining Pedestrian Injury Severity Using Alternative Disaggregate Models

    DEFF Research Database (Denmark)

    Abay, Kibrom Araya

    2013-01-01

    This paper investigates the injury severity of pedestrians considering detailed road user characteristics and alternative model specifications, using high-quality Danish road accident data. Such a detailed and alternative modeling approach helps to assess the sensitivity of empirical inferences to the choice of these models. The empirical analysis reveals that detailed road user characteristics, such as the crime history of drivers and the momentary activities of road users at the time of the accident, provide interesting insights in the injury severity analysis. Likewise, the alternative analytical specification of the models reveals that some of the conventionally employed fixed-parameter injury severity models could underestimate the effect of some important behavioral attributes of the accidents. For instance, the standard ordered logit model underestimated the marginal effects of some...
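The ordered logit referred to above maps a latent severity score into ordered outcome categories, and marginal effects measure how a covariate shifts the category probabilities. A minimal sketch with hypothetical cutpoints and a hypothetical coefficient (none of these values come from the paper):

```python
import math

def ordered_logit_probs(x_beta, cuts=(-1.0, 1.0)):
    """Ordered logit with three injury-severity categories.
    Cutpoints (and the coefficient used below) are hypothetical."""
    def cdf(v):  # logistic CDF
        return 1.0 / (1.0 + math.exp(-v))
    p_slight = cdf(cuts[0] - x_beta)
    p_severe = cdf(cuts[1] - x_beta) - p_slight
    p_fatal = 1.0 - p_slight - p_severe
    return p_slight, p_severe, p_fatal

# Numerical marginal effect of a covariate (coefficient 0.6) on P(fatal),
# via a central finite difference:
beta, x, eps = 0.6, 1.0, 1e-6
me = (ordered_logit_probs(beta * (x + eps))[2]
      - ordered_logit_probs(beta * (x - eps))[2]) / (2 * eps)
print(f"marginal effect on P(fatal): {me:.4f}")
```

Allowing the coefficient to vary across accidents (a random-parameter specification) is the kind of "alternative specification" that changes these marginal effects in the paper's analysis.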

  17. Mucus: An Underestimated Gut Target for Environmental Pollutants and Food Additives.

    Science.gov (United States)

    Gillois, Kévin; Lévêque, Mathilde; Théodorou, Vassilia; Robert, Hervé; Mercier-Bonin, Muriel

    2018-06-15

    Synthetic chemicals (environmental pollutants, food additives) are widely used for many industrial purposes and consumer-related applications, which implies, through manufactured products, diet, and environment, a repeated exposure of the general population with growing concern regarding health disorders. The gastrointestinal tract is the first physical and biological barrier against these compounds, and thus their first target. Mounting evidence indicates that the gut microbiota represents a major player in the toxicity of environmental pollutants and food additives; however, little is known on the toxicological relevance of the mucus/pollutant interplay, even though mucus is increasingly recognized as essential in gut homeostasis. Here, we aimed at describing how environmental pollutants (heavy metals, pesticides, and other persistent organic pollutants) and food additives (emulsifiers, nanomaterials) might interact with mucus and mucus-related microbial species; that is, “mucophilic” bacteria such as mucus degraders. This review highlights that intestinal mucus, either directly or through its crosstalk with the gut microbiota, is a key, yet underestimated gut player that must be considered for better risk assessment and management of environmental pollution.

  18. Factors Models of Scrum Adoption in the Software Development Process: A Systematic Literature Review

    Directory of Open Access Journals (Sweden)

    Marilyn Sihuay

    2018-05-01

    Full Text Available (Background) The adoption of Agile Software Development (ASD), in particular Scrum, has grown significantly since its introduction in 2001. However, in Lima, many ASD implementations have not been suitable (incomplete or inconsistent), thus losing benefits obtainable by this approach, and the critical success factors in this context are unknown. (Objective) To analyze factor models used in the evaluation of the adoption of ASD, as these factor models can contribute to explaining the success or failure of these adoptions. (Method) In this study we used a systematic literature review. (Result) Ten models have been identified; their similarities and differences are presented. (Conclusion) Each model identified considers different factors; however, some factors are shared by five of these models, such as team member attributes, engaging the customer, customer collaboration, experience, and work environment.

  19. Estimating the burden of pneumococcal pneumonia among adults: a systematic review and meta-analysis of diagnostic techniques.

    Directory of Open Access Journals (Sweden)

    Maria A Said

    Full Text Available Pneumococcal pneumonia causes significant morbidity and mortality among adults. Given the limitations of diagnostic tests for non-bacteremic pneumococcal pneumonia, most studies report the incidence of bacteremic or invasive pneumococcal disease (IPD) and thus grossly underestimate the pneumococcal pneumonia burden. We aimed to develop a conceptual and quantitative strategy to estimate the non-bacteremic disease burden among adults with community-acquired pneumonia (CAP) using systematic study methods and the availability of a urine antigen assay. We performed a systematic literature review of studies providing information on the relative yield of various diagnostic assays (the BinaxNOW® S. pneumoniae urine antigen test (UAT) with blood and/or sputum culture) in diagnosing pneumococcal pneumonia. We estimated the proportion of pneumococcal pneumonia that is bacteremic, the proportion of CAP attributable to pneumococcus, and the additional contribution of the Binax UAT beyond conventional diagnostic techniques, using random-effects meta-analytic methods and bootstrapping. We included 35 studies in the analysis, predominantly from developed countries. The estimated proportion of pneumococcal pneumonia that is bacteremic was 24.8% (95% CI: 21.3%, 28.9%). The estimated proportion of CAP attributable to pneumococcus was 27.3% (95% CI: 23.9%, 31.1%). The Binax UAT diagnosed an additional 11.4% (95% CI: 9.6%, 13.6%) of CAP beyond conventional techniques. We were limited by the fact that not all patients underwent all diagnostic tests and by the sensitivity and specificity of the diagnostic tests themselves. We address these resulting biases and provide a range of plausible values in order to estimate the burden of pneumococcal pneumonia among adults. Estimating the adult burden of pneumococcal disease from bacteremic pneumococcal pneumonia data alone significantly underestimates the true burden of disease in adults.
For every case of bacteremic pneumococcal pneumonia
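Given the review's pooled estimate that only 24.8% of pneumococcal pneumonia is bacteremic, the implied scale-up from observed bacteremic cases to total cases is a one-line calculation:

```python
def total_pneumococcal_cases(bacteremic_cases, bacteremic_fraction=0.248):
    """Scale observed bacteremic cases up to total pneumococcal pneumonia,
    using the review's pooled estimate that 24.8% of cases are bacteremic."""
    return bacteremic_cases / bacteremic_fraction

# 1,000 observed bacteremic cases imply roughly 4,000 total cases, i.e. about
# three non-bacteremic cases for every bacteremic one:
print(round(total_pneumococcal_cases(1000)))
```

Propagating the CI bounds (21.3% and 28.9%) through the same division gives the plausible range the authors refer to.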

  20. Systematic Analysis of Quantitative Logic Model Ensembles Predicts Drug Combination Effects on Cell Signaling Networks

    Science.gov (United States)

    2016-08-27

    Supplementary information accompanies this paper on the CPT: Pharmacometrics & Systems Pharmacology website (http://www.wileyonlinelibrary.com/psp4).

  1. Antibiotic efficacy in patients with a moderate probability of acute rhinosinusitis: a systematic review.

    Science.gov (United States)

    Burgstaller, Jakob M; Steurer, Johann; Holzmann, David; Geiges, Gabriel; Soyka, Michael B

    2016-05-01

    The aim of this systematic review was to synthesize the results of original studies assessing antibiotic efficacy at different time points after initiating treatment in patients with a moderate probability of acute bacterial rhinosinusitis. We searched the Cochrane Library for systematic reviews on the efficacy of antibiotic treatment in patients with acute rhinosinusitis (ARS). Only randomized controlled trials (RCTs) that compared treatment with any antibiotic against placebo were included. The synthesis of the results of six RCTs showed a benefit of antibiotic treatment compared to placebo in the rate of improvement 3 days [pooled odds ratio (OR) 2.78 (95% confidence interval (CI) 1.39-5.58)] and 7 days [OR 2.29 (95% CI 1.19-4.41)] after initiation in patients with symptoms and signs of ARS lasting for 7 or more days. After 10 days [pooled OR 1.36 (95% CI 0.66-2.90)], improvement rates did not differ significantly between patients treated with or without antibiotics. Compared to placebo, antibiotic treatment relieves symptoms in a significantly higher proportion of patients within the first days of treatment. Reporting an overall average treatment efficacy may underestimate treatment benefits in patients with a self-limiting illness.
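The pooled odds ratios quoted above come from combining per-trial ORs on the log scale. A simplified fixed-effect, inverse-variance sketch, recovering each standard error from the trial's 95% CI width; the review's actual pooling method may differ, and the trial numbers below are invented:

```python
import math

def pooled_or(ors_with_ci):
    """Fixed-effect inverse-variance pooling of odds ratios given
    (OR, lower 95% bound, upper 95% bound) per trial."""
    num = den = 0.0
    for or_, lo, hi in ors_with_ci:
        log_or = math.log(or_)
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)  # SE from CI width
        weight = 1.0 / se**2                             # inverse-variance weight
        num += weight * log_or
        den += weight
    return math.exp(num / den)

# Two hypothetical trial ORs with their 95% CIs:
print(round(pooled_or([(2.5, 1.2, 5.2), (2.1, 1.0, 4.4)]), 2))
```

A random-effects version would widen each weight by an estimate of between-trial variance; with heterogeneous trials that typically widens the pooled CI.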

  2. Systematic Assessment Through Mathematical Model For Sustainability Reporting In Malaysia Context

    Science.gov (United States)

    Lanang, Wan Nurul Syahirah Wan; Turan, Faiz Mohd; Johan, Kartina

    2017-08-01

    Sustainability assessment has been studied and is increasingly recognized as a powerful and valuable tool to measure the sustainability performance of a company or industry. Many tools for sustainable development already exist and various initiatives address them, though most focus on the environmental, economic and social aspects. Using the Green Project Management (GPM) P5 concept, which suggests that firms need to engage not only in the 3P principle of planet, profit and people responsible behaviours but also to include product and process in their practices, this study introduces a new mathematical model for assessing the level of sustainability practice in a company. Based on multiple case studies, involving in-depth interviews with senior directors, feedback from experts and previous engineering reports, a systematic approach is taken with the aim of developing the collected feedback into a new mathematical model. The methodology of this research comprises several phases: it starts with the analysis of the parameters and criteria selection according to the Malaysian industry context, moves on to data analysis involving regression, and finally applies a normalisation process to determine the outcome of the research. This study is expected to provide a clear guideline for any company or organization to assimilate sustainability assessment into their development stage, so that a better understanding of sustainability assessment can be aligned and the process approach integrated into a systematic approach for sustainability assessment.
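The regression-and-normalisation pipeline described above can be illustrated with a minimal weighted scoring sketch. The five criteria follow the GPM P5 dimensions named in the abstract, while the values, ranges and equal weights below are invented placeholders:

```python
def sustainability_score(raw_scores, weights):
    """Min-max normalise each criterion to [0, 1], then take a weighted
    average. Criteria values, ranges and weights are hypothetical."""
    normed = {}
    for name, (value, worst, best) in raw_scores.items():
        normed[name] = (value - worst) / (best - worst)  # min-max normalisation
    total = sum(weights[name] * normed[name] for name in normed)
    return total / sum(weights.values())  # weighted average in [0, 1]

raw = {  # criterion: (measured value, worst case, best case)
    "planet": (60, 0, 100), "people": (75, 0, 100), "profit": (50, 0, 100),
    "product": (80, 0, 100), "process": (40, 0, 100),
}
weights = {name: 1.0 for name in raw}  # equal weights for illustration
print(f"{sustainability_score(raw, weights):.2f}")
```

In the study's setup, the regression phase would supply the weights rather than assuming them equal.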

  3. Peak Vertical Ground Reaction Force during Two-Leg Landing: A Systematic Review and Mathematical Modeling

    Directory of Open Access Journals (Sweden)

    Wenxin Niu

    2014-01-01

    Full Text Available Objectives. (1) To systematically review peak vertical ground reaction force (PvGRF) during two-leg drop landing from specific drop heights (DH), (2) to construct a mathematical model describing correlations between PvGRF and DH, and (3) to analyze the effects of some factors on the pooled PvGRF regardless of DH. Methods. A computerized bibliographical search was conducted to extract PvGRF data on a single foot when participants landed with both feet from various DHs. An innovative mathematical model was constructed to analyze the effects of gender, landing type, shoes, ankle stabilizers, surface stiffness and sampling frequency on PvGRF based on the pooled data. Results. Pooled PvGRF and DH data from 26 articles showed that a square root function fits their relationship well. An experimental validation was also done on the regression equation for the medium sampling frequency. The PvGRF was not significantly affected by surface stiffness, but was significantly higher in men than women, in platform than suspended landing, in the barefoot than the shod condition, and with ankle stabilizers than the control condition, and was higher at higher than lower sampling frequencies. Conclusions. PvGRF and the square root of DH showed a linear relationship. The mathematical modeling method combined with systematic review is helpful for analyzing the influencing factors during landing movement without considering DH.
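The review's key result (PvGRF linear in the square root of DH) can be sketched as a simple regression; the drop heights and coefficients below are hypothetical illustrations, not the pooled values from the 26 articles.

```python
import numpy as np

# Hypothetical drop heights (m) and peak vertical GRF (in body weights, BW);
# values are illustrative, not the pooled data from the review.
dh = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.8])
pvgrf = 1.0 + 2.5 * np.sqrt(dh)          # synthetic: PvGRF = 2.5*sqrt(DH) + 1.0

# The review's model is linear in sqrt(DH), so fit PvGRF = a*sqrt(DH) + b.
a, b = np.polyfit(np.sqrt(dh), pvgrf, 1)

def predict_pvgrf(drop_height_m: float) -> float:
    """Predict peak vertical GRF (in BW) for a two-leg drop landing."""
    return a * np.sqrt(drop_height_m) + b

print(round(a, 3), round(b, 3))          # recovers the synthetic coefficients
```

Fitting against sqrt(DH) rather than DH itself is what makes the model linear in the regression sense while still capturing the concave growth of impact force with height.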

  4. Modelling of the spallation reaction: analysis and testing of nuclear models

    International Nuclear Information System (INIS)

    Toccoli, C.

    2000-01-01

    The spallation reaction is considered as a two-step process. First comes a very quick stage (10^-22 to 10^-29 s) corresponding to the individual interaction between the incident projectile and the nucleons; this interaction is followed by a series of nucleon-nucleon collisions (intranuclear cascade) during which fast particles are emitted and the nucleus is left in a strongly excited state. Second comes a slower stage (10^-18 to 10^-19 s) during which the nucleus is expected to de-excite completely. This de-excitation proceeds by evaporation of light particles (n, p, d, t, 3He, 4He) and/or fission and/or fragmentation. The HETC code has been designed to simulate spallation reactions; the simulation is based on this two-step process and on several models of the intranuclear cascade (Bertini model, Cugnon model, Helder Duarte model), while the evaporation model relies on the statistical theory of Weisskopf-Ewing. The purpose of this work is to evaluate the ability of the HETC code to predict experimental results. A methodology for comparing relevant experimental data with calculated results is presented, and a preliminary estimation of the systematic error of the HETC code is proposed. The main problem of cascade models originates in the difficulty of simulating inelastic nucleon-nucleon collisions: the emission of pions is over-estimated and the corresponding differential spectra are badly reproduced. The inaccuracy of the cascade models has a great impact on the determination of the excitation of the nucleus at the end of the first step and, indirectly, on the distribution of final residual nuclei. The test of the evaporation model has shown that the emission of high-energy light particles is under-estimated. (A.C.)

  5. Adaptive radiotherapy with an average anatomy model: Evaluation and quantification of residual deformations in head and neck cancer patients

    International Nuclear Information System (INIS)

    Kranen, Simon van; Mencarelli, Angelo; Beek, Suzanne van; Rasch, Coen; Herk, Marcel van; Sonke, Jan-Jakob

    2013-01-01

    Background and purpose: To develop and validate an adaptive intervention strategy for radiotherapy of head-and-neck cancer that accounts for systematic deformations by modifying the planning CT (pCT) to the average misalignments in daily cone beam CT (CBCT) measured with deformable registration (DR). Methods and materials: Daily CBCT scans (808 scans) for 25 patients were retrospectively registered to the pCT with B-spline DR. The average deformation vector field was used to deform the pCT for adaptive intervention. Two strategies were simulated: single intervention after 10 fractions, and weekly intervention with an average deformation vector field from the previous week. The model was geometrically validated with the residual misalignment of anatomical landmarks both on bony anatomy (BA; automatically generated) and soft tissue (ST; manually identified). Results: Systematic deformations were 2.5/3.4 mm vector length (BA/ST). Single intervention reduced deformations to 1.5/2.7 mm (BA/ST). Weekly intervention resulted in 1.0/2.2 mm (BA/ST) and accounted better for progressive changes. Fifteen patients had average systematic deformations >2 mm (BA): reductions were 1.1/1.9 mm (single/weekly BA). ST improvements were underestimated due to observer and registration variability. Conclusions: Adaptive intervention with a pCT modified to the average anatomy during treatment successfully reduces systematic deformations. The improved accuracy could possibly be exploited in margin reduction and/or dose escalation
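The averaging step behind this adaptive strategy can be sketched as follows; the array shapes and values are illustrative toys, not clinical registration output.

```python
import numpy as np

# Each daily CBCT-to-pCT registration yields a deformation vector field (DVF)
# of per-voxel displacements; the systematic deformation is their mean.
n_fractions, grid = 10, (4, 4, 4)        # toy grid standing in for a CT volume
rng = np.random.default_rng(0)
daily_dvfs = rng.normal(loc=1.0, scale=0.5, size=(n_fractions, *grid, 3))

mean_dvf = daily_dvfs.mean(axis=0)       # average displacement per voxel (mm)

# Mean vector length of the systematic deformation, as reported in the abstract.
systematic_mm = np.linalg.norm(mean_dvf, axis=-1).mean()
print(round(float(systematic_mm), 2))
```

In the actual workflow this mean field would be applied to deform the pCT, so that random day-to-day deformations average out while the systematic component is corrected.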

  6. Does verbatim sentence recall underestimate the language competence of near-native speakers?

    Directory of Open Access Journals (Sweden)

    Judith eSchweppe

    2015-02-01

    Full Text Available Verbatim sentence recall is widely used to test the language competence of native and non-native speakers since it involves comprehension and production of connected speech. However, we assume that, to maintain surface information, sentence recall relies particularly on attentional resources, which differentially affects native and non-native speakers. Since language processing is less automatized even in near-native speakers than in native speakers, processing a sentence in a foreign language while also retaining its surface form may result in cognitive overload. We contrasted the sentence recall performance of German native speakers with that of highly proficient non-natives. Non-natives recalled the sentences significantly more poorly than the natives, but performed equally well on a cloze test. This implies that sentence recall underestimates the language competence of good non-native speakers in mixed groups with native speakers. The findings also suggest that theories of sentence recall need to consider both its linguistic and its attentional aspects.

  7. Look before You Leap: Underestimating Chinese Student History, Chinese University Setting and Chinese University Steering in Sino-British HE Joint Ventures?

    Science.gov (United States)

    Dow, Ewan G.

    2010-01-01

    This article makes the case--in three parts--that many Anglo-Chinese university collaborations (joint ventures) to date have seriously underestimated Chinese (student) history, the Chinese university setting and Chinese national governmental steering as part of the process of "glocalisation". Recent turbulence in this particular HE…

  8. Clinical characteristics and outcomes of traditional Chinese medicine-induced liver injury: a systematic review.

    Science.gov (United States)

    Wang, Ran; Qi, Xingshun; Yoshida, Eric M; Méndez-Sánchez, Nahum; Teschke, Rolf; Sun, Mingyu; Liu, Xu; Su, Chunping; Deng, Jiao; Deng, Han; Hou, Feifei; Guo, Xiaozhong

    2018-04-01

    Traditional Chinese medicine (TCM) is becoming increasingly popular and related adverse events are often ignored or underestimated. This systematic review aimed to evaluate the clinical characteristics and outcomes of TCM-induced liver injury (TCM-ILI) and to estimate the proportion of TCM-ILI among all drug-induced liver injuries (DILI). China National Knowledge Infrastructure, Wanfang, VIP, PubMed, and Embase databases were searched. Demographic, clinical, and survival data were extracted and pooled. Factors associated with worse outcomes were calculated. For the proportion meta-analyses, the data were pooled using a random-effects model. Overall, 21,027 articles were retrieved, of which 625 were finally included. There was a predominance of female and older patients. The proportion of liver transplantation was 2.18% (7/321). The mortality was 4.67% (15/321). Male sex, higher aspartate aminotransferase and direct bilirubin, and lower albumin were significantly associated with an increased risk of death/liver transplantation in TCM-ILI patients. The proportion of TCM-ILI among all DILI was 25.71%, and it gradually increased over the years. Our work summarises current knowledge regarding the clinical presentation, disease course, and prognosis of TCM-ILI. TCM can result in hepatotoxicity, and can even cause death or necessitate life-saving liver transplantation. Governmental regulation of TCM products should be strictly established.

  9. [Skilled communication as "intervention" : Models for systematic communication in the healthcare system].

    Science.gov (United States)

    Weinert, M; Mayer, H; Zojer, E

    2015-02-01

    Specific communication training is currently not integrated into anesthesiology curricula. At the same time, communication is an important key factor when working with colleagues, in the physician-patient relationship, during management of emergencies and in avoiding or reducing the legal consequences of adverse medical events. Therefore, focused attention should be brought to this area. In other high-risk industries, specific communication training has long been standard, and in medicine there is an approach to teach and train these soft skills by simulation. Systematic communication training, however, is rarely an established component of specialist training. It is impossible not to communicate: nonverbal signals, such as gestures, facial expression, posture and tone, play an important part. Miscommunication, however, is common and leads to unproductive behavior, and its cause is not always obvious. This article provides an overview of the communication models of Shannon, Watzlawick et al. and Schulz von Thun et al. and describes their limitations. The "Process Communication Model®" (PCM) is also introduced, with examples of how this tool can be used to look at the communication process from a systematic point of view. People have different psychological needs. Not taking care of these needs results in individual stress behavior, which can be graded into first, second and third degrees of severity (driver behavior, mask behavior and desperation). These behavior patterns become exposed in predictable sequences. On the basis of this model, successful communication can be established while unproductive behavior that occurs during stress can be dealt with appropriately. Because of the importance of communication in all areas of medical care, opportunities exist to focus research on the influence of targeted communication on patient outcome, complications and management of emergencies.

  10. Modelling the transmission of healthcare associated infections: a systematic review

    Science.gov (United States)

    2013-01-01

    Background Dynamic transmission models are increasingly being used to improve our understanding of the epidemiology of healthcare-associated infections (HCAI). However, there has been no recent comprehensive review of this emerging field. This paper summarises how mathematical models have informed the field of HCAI and how methods have developed over time. Methods MEDLINE, EMBASE, Scopus, CINAHL plus and Global Health databases were systematically searched for dynamic mathematical models of HCAI transmission and/or the dynamics of antimicrobial resistance in healthcare settings. Results In total, 96 papers met the eligibility criteria. The main research themes considered were evaluation of infection control effectiveness (64%), variability in transmission routes (7%), the impact of movement patterns between healthcare institutes (5%), the development of antimicrobial resistance (3%), and strain competitiveness or co-colonisation with different strains (3%). Methicillin-resistant Staphylococcus aureus was the most commonly modelled HCAI (34%), followed by vancomycin-resistant enterococci (16%). Other common HCAIs, e.g. Clostridium difficile, were rarely investigated (3%). Very few models have been published on HCAI from low- or middle-income countries. The first HCAI models looked at antimicrobial resistance in hospital settings using compartmental deterministic approaches. Stochastic models (which include the role of chance in the transmission process) are becoming increasingly common. Model calibration (inference of unknown parameters by fitting models to data) and sensitivity analysis are comparatively uncommon, occurring in 35% and 36% of studies respectively, but their application is increasing. Only 5% of models compared their predictions to external data. Conclusions Transmission models have been used to understand complex systems and to predict the impact of control policies. 
Methods have generally improved, with an increased use of stochastic models, and
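A minimal example of the compartmental deterministic approach the review describes is an SIS (susceptible/colonised) ward model; the parameter values below are illustrative only.

```python
# Deterministic SIS ward model integrated with simple Euler steps.
# Patients are either susceptible or colonised; colonised patients
# decolonise at rate gamma and transmit at rate beta (illustrative values).
beta, gamma = 0.4, 0.1      # transmission and decolonisation rates (per day)
n_patients = 20.0
i = 1.0                     # initially colonised patients
dt, days = 0.01, 60

for _ in range(int(days / dt)):
    s = n_patients - i
    new_col = beta * s * i / n_patients   # mass-action transmission term
    i += dt * (new_col - gamma * i)

prevalence = i / n_patients
print(round(prevalence, 3))   # approaches the endemic level 1 - gamma/beta
```

A stochastic version of the same model would replace the deterministic increments with random draws (e.g. binomial numbers of colonisation and decolonisation events per step), which matters in small wards where chance extinction is possible.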

  11. Veterans' informal caregivers in the "sandwich generation": a systematic review toward a resilience model.

    Science.gov (United States)

    Smith-Osborne, Alexa; Felderhoff, Brandi

    2014-01-01

    Social work theory advanced the formulation of the construct of the sandwich generation to apply to the emerging generational cohort of caregivers, most often middle-aged women, who were caring for maturing children and aging parents simultaneously. This systematic review extends that focus by synthesizing the literature on sandwich generation caregivers for the general aging population with dementia and for veterans with dementia and polytrauma. It develops potential protective mechanisms based on empirical literature to support an intervention resilience model for social work practitioners. This theoretical model addresses adaptive coping of sandwich-generation families facing ongoing challenges related to caregiving demands.

  12. A Novel Method Using Abstract Convex Underestimation in Ab-Initio Protein Structure Prediction for Guiding Search in Conformational Feature Space.

    Science.gov (United States)

    Hao, Xiao-Hu; Zhang, Gui-Jun; Zhou, Xiao-Gen; Yu, Xu-Feng

    2016-01-01

    To address the problem of searching the protein conformational space in ab-initio protein structure prediction, a novel method using abstract convex underestimation (ACUE), built on the framework of an evolutionary algorithm, was proposed. Computing such conformations, essential for associating structural and functional information with gene sequences, is challenging due to the high dimensionality and rugged energy surface of the protein conformational space. As a consequence, the dimension of the conformational space should be reduced to a proper level. In this paper, the high-dimensional original conformational space was converted into a feature space whose dimension is considerably reduced by a feature extraction technique, and the underestimate space could then be constructed according to abstract convex theory. The entropy effect caused by searching in the high-dimensional conformational space could thus be avoided through this conversion. Tight lower-bound estimate information was obtained to guide the searching direction, and invalid searching areas in which the global optimal solution is not located could be eliminated in advance. Moreover, instead of expensively calculating the energy of conformations in the original conformational space, the estimate value is employed to judge whether a conformation is worth exploring, reducing the evaluation time and thereby making the computational cost lower and the searching process more efficient. Additionally, fragment assembly and the Monte Carlo method are combined to generate a series of metastable conformations by sampling in the conformational space. The proposed method provides a novel technique to solve the searching problem of protein conformational space. Twenty small-to-medium structurally diverse proteins were tested, and the proposed ACUE method was compared with It Fix, HEA, Rosetta and the developed method LEDE without underestimate information. Test results show that the ACUE method can more rapidly and more
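The core pruning idea, using a cheap lower bound to avoid expensive evaluations that cannot beat the current best, can be sketched generically; the functions below are illustrative stand-ins, not the actual ACUE energy model or underestimate construction.

```python
# Illustrative sketch of lower-bound-guided search (not the actual ACUE code):
# a cheap underestimate g(x) <= f(x) lets us skip expensive energy evaluations
# for candidates that cannot improve on the current best conformation.

def expensive_energy(x: float) -> float:      # stand-in for an energy function
    return (x - 2.0) ** 2 + 1.0

def lower_bound(x: float) -> float:           # underestimate: g(x) <= f(x)
    return (x - 2.0) ** 2                     # here, f minus a known offset

def search(candidates):
    best_x, best_f, evaluations = None, float("inf"), 0
    for x in candidates:
        if lower_bound(x) >= best_f:          # pruned: cannot beat the best
            continue
        f = expensive_energy(x)               # only now pay the full cost
        evaluations += 1
        if f < best_f:
            best_x, best_f = x, f
    return best_x, best_f, evaluations

x, f, n = search([0.0, 1.0, 2.0, 3.0, 4.0, 2.1])
print(x, f, n)   # -> 2.0 1.0 4: two of the six candidates were pruned
```

The saving grows with the cost of the true objective: in protein structure prediction the pruned evaluations are full energy calculations, which dominate the runtime.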

  13. Large-Scale Features of Pliocene Climate: Results from the Pliocene Model Intercomparison Project

    Science.gov (United States)

    Haywood, A. M.; Hill, D.J.; Dolan, A. M.; Otto-Bliesner, B. L.; Bragg, F.; Chan, W.-L.; Chandler, M. A.; Contoux, C.; Dowsett, H. J.; Jost, A.; hide

    2013-01-01

    Climate and environments of the mid-Pliocene warm period (3.264 to 3.025 Ma) have been extensively studied. Whilst numerical models have shed light on the nature of climate at the time, uncertainties in their predictions have not been systematically examined. The Pliocene Model Intercomparison Project quantifies uncertainties in model outputs through a coordinated multi-model and model-data intercomparison. Whilst commonalities in model outputs for the Pliocene are clearly evident, we show substantial variation in the sensitivity of models to the implementation of Pliocene boundary conditions. Models appear able to reproduce many regional changes in temperature reconstructed from geological proxies. However, data-model comparison highlights that models potentially underestimate polar amplification. To assert this conclusion with greater confidence, limitations in the time-averaged proxy data currently available must be addressed. Furthermore, sensitivity tests exploring the known unknowns in modelling Pliocene climate specifically relevant to the high latitudes are essential (e.g. palaeogeography, gateways, orbital forcing and trace gases). Estimates of longer-term sensitivity to CO2 (also known as Earth System Sensitivity; ESS) support previous work suggesting that ESS is greater than Climate Sensitivity (CS), and suggest that the ratio of ESS to CS is between 1 and 2, with a "best" estimate of 1.5.

  14. Transition between process models (BPMN) and service models (WS-BPEL and other standards): A systematic review

    Directory of Open Access Journals (Sweden)

    Marko Jurišić

    2011-12-01

    Full Text Available BPMN and BPEL have become de facto standards for the modeling of business processes and the implementation of business processes via Web services. There is a quintessential problem of discrepancy between these two approaches, as they are applied in different phases of the lifecycle and their fundamental concepts are different: BPMN is a graph-based language, while BPEL is basically a block-based programming language. This paper shows the basic concepts and gives an overview of research and ideas which emerged during the last two years, presents the state of the art, and outlines possible future research directions. A systematic literature review was performed, and a critical review was given regarding the potential of the proposed solutions.

  15. Systematic analysis of fly models with multiple drivers reveals different effects of ataxin-1 and huntingtin in neuron subtype-specific expression.

    Directory of Open Access Journals (Sweden)

    Risa Shiraishi

    Full Text Available The fruit fly, Drosophila melanogaster, is a commonly used model organism for neurodegenerative diseases. Its major advantages include a short lifespan and its susceptibility to manipulation using sophisticated genetic techniques. Here, we report the systematic comparison of fly models of two polyglutamine (polyQ diseases. We induced expression of the normal and mutant forms of full-length Ataxin-1 and Huntingtin exon 1 in cholinergic, dopaminergic, and motor neurons, and glial cells using cell type-specific drivers. We systematically analyzed their effects based on multiple phenotypes: eclosion rate, lifespan, motor performance, and circadian rhythms of spontaneous activity. This systematic assay system enabled us to quantitatively evaluate and compare the functional disabilities of different genotypes. The results suggest different effects of Ataxin-1 and Huntingtin on specific types of neural cells during development and in adulthood. In addition, we confirmed the therapeutic effects of LiCl and butyrate using representative models. These results support the usefulness of this assay system for screening candidate chemical compounds that modify the pathologies of polyQ diseases.

  16. Systematic model for lean product development implementation in an automotive related company

    Directory of Open Access Journals (Sweden)

    Daniel Osezua Aikhuele

    2017-07-01

    Full Text Available Lean product development is a major innovative business strategy that employs sets of practices to achieve an efficient, innovative and sustainable product development. Despite the many benefits and high hopes placed in the lean strategy, many companies are still struggling, and unable either to achieve or to sustain substantial positive results with their lean implementation efforts. As a first step towards addressing this issue, this paper proposes a systematic model that considers the administrative and implementation limitations of lean thinking practices in the product development process. The model, based on the integration of fuzzy Shannon's entropy and the Modified Technique for Order Preference by Similarity to the Ideal Solution (M-TOPSIS), assesses lean product development practices implementation with respect to different criteria, including management and leadership, financial capabilities, skills and expertise, and organizational culture, and provides a guide or roadmap for product development managers on the lean implementation route.
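The two building blocks named in the abstract, Shannon's entropy weighting and TOPSIS ranking, can be sketched in their standard crisp forms (the paper uses a fuzzy variant); the decision matrix below is hypothetical, with rows as alternatives and columns as benefit criteria.

```python
import numpy as np

# Hypothetical scores: 3 alternatives x 3 benefit criteria
# (e.g. leadership, financial capability, skills), purely illustrative.
X = np.array([[7.0, 6.0, 8.0],
              [5.0, 9.0, 6.0],
              [8.0, 7.0, 5.0]])

# Shannon's entropy weighting: criteria with more dispersion get more weight.
P = X / X.sum(axis=0)                              # column-normalised proportions
k = 1.0 / np.log(X.shape[0])
entropy = -k * (P * np.log(P)).sum(axis=0)         # entropy per criterion, in [0, 1]
weights = (1 - entropy) / (1 - entropy).sum()

# TOPSIS: rank by closeness to the ideal and distance from the anti-ideal.
R = X / np.linalg.norm(X, axis=0)                  # vector-normalised matrix
V = R * weights
ideal, anti = V.max(axis=0), V.min(axis=0)         # all criteria treated as benefit
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)                # closeness coefficient, in [0, 1]

ranking = np.argsort(-closeness)                   # best alternative first
print(ranking, np.round(closeness, 3))
```

The fuzzy variant in the paper replaces the crisp scores with fuzzy numbers from expert judgements, but the weighting-then-ranking pipeline is the same.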

  17. Experimental validation of depletion calculations with VESTA 2.1.5 using JEFF-3.2

    Directory of Open Access Journals (Sweden)

    Haeck Wim

    2017-01-01

    Full Text Available The removal of decay heat is a significant safety concern in nuclear engineering, both for the operation of a nuclear reactor in normal and accidental conditions and for intermediate- and long-term waste storage facilities. The correct evaluation of the decay heat produced by an irradiated material requires first of all the calculation of the composition of the irradiated material by depletion codes such as VESTA 2.1, currently under development at IRSN in France. A set of PWR assembly decay heat measurements performed by the Swedish Central Interim Storage Facility (CLAB), located in Oskarshamn (Sweden), has been calculated using different nuclear data libraries: ENDF/B-VII.0, JEFF-3.1, JEFF-3.2 and JEFF-3.3T1. Using these nuclear data libraries, VESTA 2.1 calculates the assembly decay heat within 4% of the measured decay heat for almost all cases. On average, the ENDF/B-VII.0 calculated decay heat values appear to give a systematic underestimation of only 0.5%. When using the JEFF-3.1 library, this results in a systematic underestimation of about 2%. By switching to the JEFF-3.2 library, this systematic underestimation is improved slightly (to about 1.5%). The changes made in the JEFF-3.3T1 beta library appear to be overcorrecting, as the systematic underestimation is transformed into a systematic overestimation of about 1.5%.
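The calculation-over-experiment (C/E) comparison described above can be sketched as follows; the decay heat values are hypothetical stand-ins for the CLAB measurements, chosen to reproduce a roughly 0.5% mean underestimation like the ENDF/B-VII.0 case.

```python
# Hypothetical assembly decay heat (W): measured vs. one library's VESTA results.
measured   = [600.0, 450.0, 800.0, 520.0]
calculated = [597.0, 447.8, 796.0, 517.4]

ce = [c / e for c, e in zip(calculated, measured)]   # C/E ratio per assembly
mean_bias_pct = (sum(ce) / len(ce) - 1.0) * 100.0    # < 0 means underestimation

assert all(abs(r - 1.0) < 0.04 for r in ce)          # all cases within 4%
print(f"mean bias: {mean_bias_pct:+.2f}%")
```

Averaging the C/E ratios separates the systematic library bias (the mean) from the case-to-case scatter, which is how the per-library underestimation figures above are obtained.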

  18. Large proportions of overweight and obese children, as well as their parents, underestimate children's weight status across Europe. The ENERGY (EuropeaN Energy balance Research to prevent excessive weight Gain among Youth) project.

    Science.gov (United States)

    Manios, Yannis; Moschonis, George; Karatzi, Kalliopi; Androutsos, Odysseas; Chinapaw, Mai; Moreno, Luis A; Bere, Elling; Molnar, Denes; Jan, Natasha; Dössegger, Alain; De Bourdeaudhuij, Ilse; Singh, Amika; Brug, Johannes

    2015-08-01

    To investigate the magnitude and country-specific differences in underestimation of children's weight status by children and their parents in Europe and to further explore its associations with family characteristics and sociodemographic factors. Children's weight and height were objectively measured. Parental anthropometric and sociodemographic data were self-reported. Children and their parents were asked to comment on children's weight status based on five-point Likert-type scales, ranging from 'I am much too thin' to 'I am much too fat' (children) and 'My child's weight is way too little' to 'My child's weight is way too much' (parents). These data were combined with children's actual weight status, in order to assess underestimation of children's weight status by children themselves and by their parents, respectively. Chi-square tests and multilevel logistic regression analyses were conducted to examine the aims of the current study. Eight European countries participating in the ENERGY (EuropeaN Energy balance Research to prevent excessive weight Gain among Youth) project. A school-based survey among 6113 children aged 10-12 years and their parents. In the total sample, 42·9 % of overweight/obese children and 27·6 % of parents of overweight/obese children underestimated their and their children's weight status, respectively. A higher likelihood for this underestimation of weight status by children and their parents was observed in Eastern and Southern compared with Central/Northern countries. Overweight or obese parents (OR=1·81; 95 % CI 1·39, 2·35 and OR=1·78, 95 % CI 1·22, 2·60), parents of boys (OR=1·32; 95 % CI 1·05, 1·67) and children from overweight/obese (OR=1·60; 95 % CI 1·29, 1·98 and OR=1·76; 95 % CI 1·29, 2·41) or unemployed parents (OR=1·53; 95 % CI 1·22, 1·92) were more likely to underestimate children's weight status. Children of overweight or obese parents, those from Eastern and Southern Europe, boys, younger children and

  19. Reconstructing extreme AMOC events through nudging of the ocean surface: a perfect model approach

    Science.gov (United States)

    Ortega, Pablo; Guilyardi, Eric; Swingedouw, Didier; Mignot, Juliette; Nguyen, Sébastien

    2017-11-01

    While the Atlantic Meridional Overturning Circulation (AMOC) is thought to be a crucial component of the North Atlantic climate, past changes in its strength are challenging to quantify, and only limited information is available. In this study, we use a perfect model approach with the IPSL-CM5A-LR model to assess the performance of several surface nudging techniques in reconstructing the variability of the AMOC. Special attention is given to the reproducibility of an extreme positive AMOC peak from a preindustrial control simulation. Nudging includes standard relaxation techniques towards the sea surface temperature and salinity anomalies of this target control simulation, and/or the prescription of the wind-stress fields. Surface nudging approaches using standard fixed restoring terms succeed in reproducing most of the target AMOC variability, including the timing of the extreme event, but systematically underestimate its amplitude. A detailed analysis of the AMOC variability mechanisms reveals that the underestimation of the extreme AMOC maximum comes from a deficit in the formation of the dense water masses in the main convection region, located south of Iceland in the model. This issue is largely corrected after introducing a novel surface nudging approach, which uses a varying restoring coefficient that is proportional to the simulated mixed layer depth, which, in essence, keeps the restoring time scale constant. This new technique substantially improves water mass transformation in the regions of convection, and in particular, the formation of the densest waters, which are key for the representation of the AMOC extreme. It is therefore a promising strategy that may help to better constrain the AMOC variability and other ocean features in the models. As this restoring technique only uses surface data, for which better and longer observations are available, it opens up opportunities for improved reconstructions of the AMOC over the last few decades.
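The varying-coefficient nudging idea can be illustrated with a back-of-the-envelope sketch: if the restoring coefficient scales with mixed layer depth, the relaxation time scale of the nudging term becomes independent of that depth. Symbols and values below are illustrative, not the IPSL-CM5A-LR implementation.

```python
# With a fixed restoring coefficient, the effective relaxation time scale of
# SST grows with mixed layer depth h; scaling the coefficient with h keeps the
# time scale constant, as in the novel nudging approach described above.
RHO, CP = 1025.0, 3990.0          # seawater density (kg/m3) and heat capacity
TAU = 30.0 * 86400.0              # target restoring time scale: 30 days, in s

def restoring_flux(sst, sst_target, mld):
    """Heat flux (W/m2) nudging SST toward the target; coefficient ~ MLD."""
    coeff = RHO * CP * mld / TAU  # W/m2 per K, proportional to mixed layer depth
    return -coeff * (sst - sst_target)

def sst_tendency(sst, sst_target, mld):
    """dSST/dt from the nudging term alone: independent of MLD by design."""
    return restoring_flux(sst, sst_target, mld) / (RHO * CP * mld)

# A shallow summer and a deep winter mixed layer relax at the same rate:
shallow = sst_tendency(10.0, 9.0, mld=30.0)
deep    = sst_tendency(10.0, 9.0, mld=700.0)
print(shallow, deep)              # both equal -(1 K)/TAU
```

With a fixed coefficient instead, the deep-convection regions (large h) would be nudged far more weakly per unit heat content, which is exactly the deficit in dense water formation that the abstract attributes to the standard scheme.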

  20. Ecological validity of cost-effectiveness models of universal HPV vaccination: A systematic literature review.

    Science.gov (United States)

    Favato, Giampiero; Easton, Tania; Vecchiato, Riccardo; Noikokyris, Emmanouil

    2017-05-09

    The protective (herd) effect of the selective vaccination of pubertal girls against human papillomavirus (HPV) implies a high probability that one of the two partners involved in intercourse is immunised, hence preventing the other from acquiring this sexually transmitted infection. The dynamic transmission models used to inform immunisation policy should include consideration of sexual behaviours and population mixing in order to demonstrate ecological validity, whereby the scenarios modelled remain faithful to the real-life social and cultural context. The primary aim of this review is to test the ecological validity of the universal HPV vaccination cost-effectiveness modelling available in the published literature. The research protocol related to this systematic review has been registered in the International Prospective Register of Systematic Reviews (PROSPERO: CRD42016034145). Eight published economic evaluations were reviewed. None of the studies showed due consideration of the complexities of human sexual behaviour and the impact these may have on the transmission of HPV. Our findings indicate that all the included models might be affected by different degrees of ecological bias, which implies an inability to reflect the natural demographic and behavioural trends in their outcomes and, consequently, to accurately inform public healthcare policy. In particular, ecological bias has the effect of over-estimating the preference-based outcomes of selective immunisation. A relatively small (15-20%) over-estimation of quality-adjusted life years (QALYs) gained with selective immunisation programmes could induce a significant error in the estimate of cost-effectiveness of universal immunisation, by inflating its incremental cost-effectiveness ratio (ICER) beyond the acceptability threshold. The results modelled here demonstrate the limitations of the cost-effectiveness studies for HPV vaccination, and highlight the concern that public healthcare policy might have been

  1. Underestimation of nuclear fuel burnup – theory, demonstration and solution in numerical models

    Directory of Open Access Journals (Sweden)

    Gajda Paweł

    2016-01-01

    Full Text Available Monte Carlo methodology provides the reference statistical solution of neutron transport criticality problems of nuclear systems. Estimated reaction rates can be applied as an input to the Bateman equations that govern the isotopic evolution of reactor materials. Because the statistical solution of the Boltzmann equation is computationally expensive, it is in practice applied to time steps of limited length. In this paper we show that the simple staircase step model leads to underprediction of numerical fuel burnup (Fissions per Initial Metal Atom - FIMA. Theoretical considerations indicate that this error is inversely proportional to the length of the time step and originates from the variation of heating per source neutron. The bias can be diminished by application of a predictor-corrector step model. A set of burnup simulations with various step lengths and coupling schemes has been performed. SERPENT code version 1.17 has been applied to the model of a typical fuel assembly from a Pressurized Water Reactor. In the reference case FIMA reaches 6.24%, which is equivalent to about 60 GWD/tHM of industrial burnup. Discrepancies of up to 1% have been observed depending on the time step model, and the theoretical predictions are consistent with the numerical results. The conclusions presented in this paper are important for research and development concerning the nuclear fuel cycle, also in the context of Gen4 systems.
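The difference between the staircase and predictor-corrector step models can be illustrated on a toy single-nuclide depletion problem; this is a generic Heun-type corrector sketch, not the SERPENT coupling scheme.

```python
import math

# Deplete dN/dt = -lam*N over one long step. The staircase model freezes the
# beginning-of-step reaction rate; the predictor-corrector advances with the
# mean of the beginning- and end-of-step rates, tracking the exact exponential
# solution much more closely. Values are illustrative.
lam, dt, n0 = 0.1, 5.0, 1.0

exact = n0 * math.exp(-lam * dt)

# Staircase: one explicit step with the rate frozen at the step beginning.
staircase = n0 * (1.0 - lam * dt)

# Predictor-corrector: predict the end-of-step state, then advance with the
# average of the beginning- and end-of-step rates (Heun's method).
predictor = n0 * (1.0 - lam * dt)
corrector = n0 - 0.5 * dt * (lam * n0 + lam * predictor)

print(exact, staircase, corrector)
```

In a real burnup coupling the rate itself (flux and spectrum) is recomputed by the transport solve at the predictor state, which is why the corrector step pays roughly one extra transport calculation per step in exchange for a much smaller step-length bias.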

  2. A systematic literature review of open source software quality assessment models.

    Science.gov (United States)

    Adewumi, Adewole; Misra, Sanjay; Omoregbe, Nicholas; Crawford, Broderick; Soto, Ricardo

    2016-01-01

    Many open source software (OSS) quality assessment models are proposed and available in the literature. However, there is little or no adoption of these models in practice. In order to guide the formulation of newer models so that they can be accepted by practitioners, there is a need for clear discrimination among the existing models based on their specific properties. The aim of this study is therefore to perform a systematic literature review investigating the properties of the existing OSS quality assessment models by classifying them with respect to their quality characteristics, the methodology they use for assessment, and their domain of application, so as to guide the formulation and development of newer models. Searches in IEEE Xplore, ACM, Science Direct, Springer and Google Search were performed to retrieve all relevant primary studies. Journal and conference papers between 2003 and 2015 were considered, since the first known OSS quality model emerged in 2003. A total of 19 OSS quality assessment model papers were selected; to select these models we developed assessment criteria to evaluate the quality of the existing studies. Quality assessment models are classified into five categories based on the quality characteristics they possess, namely: single-attribute, rounded category, community-only attribute, non-community attribute, and non-quality-in-use models. Our study shows that software selection based on hierarchical structures is the most popular selection method in the existing OSS quality assessment models. Furthermore, we found that the majority (47%) of the existing models do not specify any domain of application. In conclusion, our study is a valuable contribution to the community: it will help quality assessment model developers formulate newer models and practitioners (software evaluators) select suitable OSS from among alternatives.

  3. Pain begets pain: When marathon runners are not in pain anymore, they underestimate their memory of marathon pain: A mediation analysis

    NARCIS (Netherlands)

    Babel, P.; Bajcar, E.A.; Smieja, M.; Adamczyk, W.; Swider, K.J.; Kicman, P.; Lisinska, N.

    2018-01-01

    Background: A previous study has shown that memory of pain induced by running a marathon might be underestimated. However, little is known about the factors that might influence such a memory distortion during pain recall. The aim of the study was to investigate the memory of pain induced by running

  4. Underestimated Halogen Bonds Forming with Protein Backbone in Protein Data Bank.

    Science.gov (United States)

    Zhang, Qian; Xu, Zhijian; Shi, Jiye; Zhu, Weiliang

    2017-07-24

    Halogen bonds (XBs) are attracting increasing attention in biological systems. The Protein Data Bank (PDB) archives experimentally determined XBs in biological macromolecules. However, no software for structure refinement in X-ray crystallography takes XBs into account, which might result in the weakening or even vanishing of experimentally determined XBs in the PDB. In our previous study, we showed that XBs forming with protein side chains are underestimated in the PDB, on the basis of the observation that the proportion of side-chain XBs to overall XBs decreases as structural resolution becomes lower. However, whether the dominant XBs forming with the protein backbone are also overlooked has remained an open question. Here, with the help of the ratio (R_F) of the observed XBs' frequency of occurrence to their frequency expected at random, we demonstrate that backbone XBs are largely overlooked in the PDB, too. Furthermore, three cases were discovered that possess backbone XBs in high resolution structures while losing them in low resolution structures. In the last two cases, the backbone XBs were lost even at 1.80 Å resolution, demonstrating the urgent need to consider XBs in the refinement process during X-ray crystallography studies.

  5. Evaluation models and criteria of the quality of hospital websites: a systematic review study

    OpenAIRE

    Jeddi, Fatemeh Rangraz; Gilasi, Hamidreza; Khademi, Sahar

    2017-01-01

    Introduction Hospital websites are important tools in establishing communication and exchanging information between patients and staff, and thus should enjoy an acceptable level of quality. The aim of this study was to identify proper models and criteria to evaluate the quality of hospital websites. Methods This research was a systematic review study. The international databases such as Science Direct, Google Scholar, PubMed, Proquest, Ovid, Elsevier, Springer, and EBSCO together with regiona...

  6. Implementing learning organization components in Ardabil Regional Water Company based on Marquardt systematic model

    OpenAIRE

    Shahram Mirzaie Daryani; Azadeh Zirak

    2015-01-01

    The main purpose of this study was to survey the implementation of learning organization characteristics based on the Marquardt systematic model in Ardabil Regional Water Company. Two hundred and four staff (164 employees and 40 authorities) participated in the study. For data collection, the Marquardt questionnaire, whose validity and reliability had been confirmed, was used. The results of the data analysis showed that learning organization characteristics were used more than average level in som...

  7. Statistical errors in Monte Carlo estimates of systematic errors

    Energy Technology Data Exchange (ETDEWEB)

    Roe, Byron P. [Department of Physics, University of Michigan, Ann Arbor, MI 48109 (United States)]. E-mail: byronroe@umich.edu

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with the systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while in the reverse case the unisim method was better. Exact formulas, and formulas for the simple toy models, are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².

  8. Statistical errors in Monte Carlo estimates of systematic errors

    International Nuclear Information System (INIS)

    Roe, Byron P.

    2007-01-01

    For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with the systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to an MC run. In the multisim method (see ), each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while in the reverse case the unisim method was better. Exact formulas, and formulas for the simple toy models, are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one. This reduces the number of events required by a factor of k².
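    A minimal numerical sketch of the two methods described above, under the simplifying assumption of a purely linear response: the unisim estimate sums squared one-at-a-time shifts, while the multisim estimate takes the variance over runs in which all parameters are drawn at random. All slopes and noise levels are invented.

```python
import random
import statistics

random.seed(1)

K = 5                                  # number of systematic parameters
slopes = [0.4, 0.2, 0.1, 0.3, 0.25]    # true 1-sigma effect of each (invented)
mc_noise = 0.05                        # statistical error of a single MC run

def mc_run(params):
    # One Monte Carlo run: linear systematic response plus statistical noise.
    return sum(s * p for s, p in zip(slopes, params)) + random.gauss(0.0, mc_noise)

nominal = mc_run([0.0] * K)

# Unisim: one extra run per parameter, varied by +1 sigma; add shifts in quadrature.
unisim_var = 0.0
for i in range(K):
    params = [0.0] * K
    params[i] = 1.0
    unisim_var += (mc_run(params) - nominal) ** 2

# Multisim: every run draws all parameters from their (normal) distributions;
# the spread of the results estimates the total systematic variance.
results = [mc_run([random.gauss(0.0, 1.0) for _ in range(K)]) for _ in range(500)]
multisim_var = statistics.pvariance(results)

true_var = sum(s * s for s in slopes)  # exact answer for this linear toy
```

    Both estimates recover the true systematic variance up to the MC statistical noise; the note's point is which one does so with fewer events in a given regime.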

  9. A systematic composite service design modeling method using graph-based theory.

    Science.gov (United States)

    Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh

    2015-01-01

    The composite service design modeling is an essential process of the service-oriented software development life cycle, where the candidate services, composite services, operations and their dependencies are required to be identified and specified before their design. However, a systematic service-oriented design modeling method for composite services is still in its infancy, as most of the existing approaches provide the modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling the concept of service-oriented design to increase the reusability and decrease the complexity of the system while keeping the service composition considerations in mind. Furthermore, the ComSDM method provides a mathematical representation of the components of service-oriented design using graph-based theory to facilitate design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems along with enterprise software development, it is implemented in the case study of a smart home. The results of the case study not only check the applicability of ComSDM, but can also be used to validate its complexity and reusability. This also guides future research towards design quality measurement, such as using the ComSDM method to measure the quality of composite service design in service-oriented software systems.

  10. Few promising multivariable prognostic models exist for recovery of people with non-specific neck pain in musculoskeletal primary care: A systematic review

    NARCIS (Netherlands)

    R.W. Wingbermühle (Roel); E. van Trijffel (Emiel); Nelissen, P.M. (Paul M.); B.W. Koes (Bart); A.P. Verhagen (Arianne)

    2017-01-01

    Question: Which multivariable prognostic model(s) for recovery in people with neck pain can be used in primary care? Design: Systematic review of studies evaluating multivariable prognostic models. Participants: People with non-specific neck pain presenting at primary care.

  11. Systematic biases in human heading estimation.

    Directory of Open Access Journals (Sweden)

    Luigi F Cuturi

    Full Text Available Heading estimation is vital to everyday navigation and locomotion. Despite extensive behavioral and physiological research on both visual and vestibular heading estimation over more than two decades, the accuracy of heading estimation has not yet been systematically evaluated. Therefore, human visual and vestibular heading estimation was assessed in the horizontal plane using a motion platform and stereo visual display. Heading angle was overestimated during forward movements and underestimated during backward movements in response to both visual and vestibular stimuli, indicating an overall multimodal bias toward lateral directions. Lateral biases are consistent with the overrepresentation of lateral preferred directions observed in neural populations that carry visual and vestibular heading information, including MSTd and otolith afferent populations. Due to this overrepresentation, population vector decoding yields patterns of bias remarkably similar to those observed behaviorally. Lateral biases are inconsistent with standard Bayesian accounts, which predict that estimates should be biased toward the most common straight-forward heading direction. Nevertheless, lateral biases may be functionally relevant. They effectively constitute a perceptual scale expansion around straight ahead which could allow for more precise estimation and provide a high-gain feedback signal to facilitate maintenance of straight-forward heading during everyday navigation and locomotion.

  12. Systematic review on cashew nut allergy.

    Science.gov (United States)

    van der Valk, J P M; Dubois, A E J; Gerth van Wijk, R; Wichers, H J; de Jong, N W

    2014-06-01

    Recent studies on cashew nut allergy suggest that the prevalence of cashew nut allergy is increasing. Cashew nut consumption by allergic patients can cause severe reactions, including anaphylaxis. This review summarizes current knowledge on cashew nut allergy to facilitate timely clinical recognition and to promote awareness of this emerging food allergy amongst clinicians. The goal of this study is to present a systematic review focused on the clinical aspects of allergy to cashew nut, including the characteristics of cashew nut, the prevalence, allergenic components, cross-reactivity, diagnosis and management of cashew nut allergy. The literature search yielded 255 articles, of which 40 met our selection criteria and were considered to be relevant for this review. The 40 articles included one prospective study, six retrospective studies and seven case reports. The remaining 26 papers were not directly related to cashew nut allergy. The literature suggests that the prevalence of cashew nut allergy is increasing, although the level of evidence for this is low. A minimal amount of cashew nut allergen may cause a severe allergic reaction, suggesting high potency comparable with other tree nuts and peanuts. Cashew nut allergy is clearly an important and underestimated healthcare problem, especially in children. © 2014 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  13. Toward a systematic exploration of nano-bio interactions

    Energy Technology Data Exchange (ETDEWEB)

    Bai, Xue; Liu, Fang; Liu, Yin; Li, Cong; Wang, Shenqing [School of Chemistry and Chemical Engineering, Shandong University, Jinan (China); Zhou, Hongyu [School of Environmental Science and Technology, Shandong University, Jinan (China); Wang, Wenyi; Zhu, Hao [Department of Chemistry, Rutgers University, Camden, NJ (United States); The Rutgers Center for Computational and Integrative Biology, Rutgers University, Camden, NJ (United States); Winkler, David A., E-mail: d.winkler@latrobe.edu.au [CSIRO Manufacturing, Bag 10, Clayton South MDC 3169 (Australia); Monash Institute of Pharmaceutical Sciences, 392 Royal Parade, Parkville 3052 (Australia); Latrobe Institute for Molecular Science, Bundoora 3083 (Australia); School of Chemical and Physical Sciences, Flinders University, Bedford Park 5042 (Australia); Yan, Bing, E-mail: drbingyan@yahoo.com [School of Chemistry and Chemical Engineering, Shandong University, Jinan (China); School of Environmental Science and Technology, Shandong University, Jinan (China)

    2017-05-15

    Many studies of nanomaterials make non-systematic alterations of nanoparticle physicochemical properties. Given the immense size of the property space for nanomaterials, such approaches are not very useful in elucidating fundamental relationships between inherent physicochemical properties of these materials and their interactions with, and effects on, biological systems. Data driven artificial intelligence methods such as machine learning algorithms have proven highly effective in generating models with good predictivity and some degree of interpretability. They can provide a viable method of reducing or eliminating animal testing. However, careful experimental design with the modelling of the results in mind is a proven and efficient way of exploring large materials spaces. This approach, coupled with high speed automated experimental synthesis and characterization technologies now appearing, is the fastest route to developing models that regulatory bodies may find useful. We advocate greatly increased focus on systematic modification of physicochemical properties of nanoparticles combined with comprehensive biological evaluation and computational analysis. This is essential to obtain better mechanistic understanding of nano-bio interactions, and to derive quantitatively predictive and robust models for the properties of nanomaterials that have useful domains of applicability. - Highlights: • Nanomaterials studies make non-systematic alterations to nanoparticle properties. • Vast nanomaterials property spaces require systematic studies of nano-bio interactions. • Experimental design and modelling are efficient ways of exploring materials spaces. • We advocate systematic modification and computational analysis to probe nano-bio interactions.

  14. Toward a systematic exploration of nano-bio interactions

    International Nuclear Information System (INIS)

    Bai, Xue; Liu, Fang; Liu, Yin; Li, Cong; Wang, Shenqing; Zhou, Hongyu; Wang, Wenyi; Zhu, Hao; Winkler, David A.; Yan, Bing

    2017-01-01

    Many studies of nanomaterials make non-systematic alterations of nanoparticle physicochemical properties. Given the immense size of the property space for nanomaterials, such approaches are not very useful in elucidating fundamental relationships between inherent physicochemical properties of these materials and their interactions with, and effects on, biological systems. Data driven artificial intelligence methods such as machine learning algorithms have proven highly effective in generating models with good predictivity and some degree of interpretability. They can provide a viable method of reducing or eliminating animal testing. However, careful experimental design with the modelling of the results in mind is a proven and efficient way of exploring large materials spaces. This approach, coupled with high speed automated experimental synthesis and characterization technologies now appearing, is the fastest route to developing models that regulatory bodies may find useful. We advocate greatly increased focus on systematic modification of physicochemical properties of nanoparticles combined with comprehensive biological evaluation and computational analysis. This is essential to obtain better mechanistic understanding of nano-bio interactions, and to derive quantitatively predictive and robust models for the properties of nanomaterials that have useful domains of applicability. - Highlights: • Nanomaterials studies make non-systematic alterations to nanoparticle properties. • Vast nanomaterials property spaces require systematic studies of nano-bio interactions. • Experimental design and modelling are efficient ways of exploring materials spaces. • We advocate systematic modification and computational analysis to probe nano-bio interactions.

  15. Flat epithelial atypia and atypical ductal hyperplasia: carcinoma underestimation rate.

    Science.gov (United States)

    Ingegnoli, Anna; d'Aloia, Cecilia; Frattaruolo, Antonia; Pallavera, Lara; Martella, Eugenia; Crisi, Girolamo; Zompatori, Maurizio

    2010-01-01

    This study was carried out to determine the underestimation rate of carcinoma upon surgical biopsy after a diagnosis of flat epithelial atypia or atypical ductal hyperplasia at 11-gauge vacuum-assisted breast biopsy. A retrospective review was conducted of 476 vacuum-assisted breast biopsies performed from May 2005 to January 2007, and a total of 70 cases of atypia were identified. Fifty cases (71%) were categorized as pure atypical ductal hyperplasia, 18 (26%) as pure flat epithelial atypia and two (3%) as concomitant flat epithelial atypia and atypical ductal hyperplasia. Each group was compared with the subsequent open surgical specimens. Surgical biopsy was performed in 44 patients with atypical ductal hyperplasia, 15 patients with flat epithelial atypia, and two patients with flat epithelial atypia and atypical ductal hyperplasia. Five cases of atypical ductal hyperplasia were upgraded to ductal carcinoma in situ; three cases of flat epithelial atypia yielded one ductal carcinoma in situ and two invasive ductal carcinomas; and one case of flat epithelial atypia/atypical ductal hyperplasia had invasive ductal carcinoma. The overall rate of malignancy was 16% for atypical ductal hyperplasia (including flat epithelial atypia/atypical ductal hyperplasia patients) and 20% for flat epithelial atypia. The presence of flat epithelial atypia or atypical ductal hyperplasia at biopsy requires careful consideration, and surgical excision should be suggested.

  16. CAUSES: Diagnosis of the Summertime Warm Bias in CMIP5 Climate Models at the ARM Southern Great Plains Site

    Science.gov (United States)

    Zhang, Chengzhu; Xie, Shaocheng; Klein, Stephen A.; Ma, Hsi-yen; Tang, Shuaiqi; Van Weverberg, Kwinten; Morcrette, Cyril J.; Petch, Jon

    2018-03-01

    All the weather and climate models participating in the Clouds Above the United States and Errors at the Surface (CAUSES) project show a summertime surface air temperature (T2m) warm bias in the region of the central United States. To understand the warm bias in long-term climate simulations, we assess the Atmospheric Model Intercomparison Project simulations from the Coupled Model Intercomparison Project Phase 5, with long-term observations mainly from the Atmospheric Radiation Measurement program Southern Great Plains site. Quantities related to the surface energy and water budget and the large-scale circulation are analyzed to identify possible factors and plausible links involved in the warm bias. The systematic warm-season bias is characterized by an overestimation of T2m and an underestimation of surface humidity, precipitation, and precipitable water. Accompanying the warm bias is an overestimation of absorbed solar radiation at the surface, which is due to a combination of insufficient cloud reflection, clear-sky shortwave absorption by water vapor, and an underestimation of surface albedo. The bias in cloud is shown to contribute most to the radiation bias. The surface-layer soil moisture impacts T2m through its control on evaporative fraction. The error in evaporative fraction is another important contributor to T2m. Similar sources of error are found in hindcasts from other CAUSES studies. In Atmospheric Model Intercomparison Project simulations, biases in the meridional wind velocity associated with the low-level jet and in the 500 hPa vertical velocity may also relate to the T2m bias through their control on the surface energy and water budget.

  17. Convergence and Divergence in a Multi-Model Ensemble of Terrestrial Ecosystem Models in North America

    Science.gov (United States)

    Dungan, J. L.; Wang, W.; Hashimoto, H.; Michaelis, A.; Milesi, C.; Ichii, K.; Nemani, R. R.

    2009-12-01

    In support of NACP, we are conducting an ensemble modeling exercise using the Terrestrial Observation and Prediction System (TOPS) to evaluate uncertainties among ecosystem models, satellite datasets, and in-situ measurements. The models used in the experiment include public-domain versions of Biome-BGC, LPJ, TOPS-BGC, and CASA, driven by a consistent set of climate fields for North America at 8km resolution and daily/monthly time steps over the period of 1982-2006. The reference datasets include MODIS Gross Primary Production (GPP) and Net Primary Production (NPP) products, Fluxnet measurements, and other observational data. The simulation results and the reference datasets are consistently processed and systematically compared in the climate (temperature-precipitation) space; in particular, an alternative to the Taylor diagram is developed to facilitate model-data intercomparisons in multi-dimensional space. The key findings of this study indicate that: the simulated GPP/NPP fluxes are in general agreement with observations over forests, but are biased low (underestimated) over non-forest types; large uncertainties of biomass and soil carbon stocks are found among the models (and reference datasets), often induced by seemingly “small” differences in model parameters and implementation details; the simulated Net Ecosystem Production (NEP) mainly responds to non-respiratory disturbances (e.g. fire) in the models and therefore is difficult to compare with flux data; and the seasonality and interannual variability of NEP varies significantly among models and reference datasets. These findings highlight the problem inherent in relying on only one modeling approach to map surface carbon fluxes and emphasize the pressing necessity of expanded and enhanced monitoring systems to narrow critical structural and parametrical uncertainties among ecosystem models.

  18. A systematic review of models to predict recruitment to multicentre clinical trials

    Directory of Open Access Journals (Sweden)

    Cook Andrew

    2010-07-01

    Full Text Available Abstract Background: Less than one third of publicly funded trials manage to recruit according to their original plan, often resulting in requests for additional funding and/or time extensions. The aim was to identify models which might be useful to a major public funder of randomised controlled trials when estimating the likely time required to recruit trial participants. The requirements of a useful model were identified as: usability, a basis in experience, the ability to reflect time trends, accounting for centre recruitment, and contribution to a commissioning decision. Methods: A systematic review of English-language articles using MEDLINE and EMBASE. Search terms included: randomised controlled trial, patient, accrual, predict, enrol, models, statistical; Bayes Theorem; Decision Theory; Monte Carlo Method; and Poisson. Only studies discussing prediction of recruitment to trials using a modelling approach were included. Information was extracted from articles by one author, and checked by a second, using a pre-defined form. Results: Out of 326 identified abstracts, only 8 met all the inclusion criteria. These 8 studies discuss five major classes of model: the unconditional model, the conditional model, the Poisson model, Bayesian models, and Monte Carlo simulation of Markov models. None of these meets all the pre-identified needs of the funder. Conclusions: To meet the needs of a number of research programmes, a new model is required as a matter of importance. Any model chosen should be validated against both retrospective and prospective data, to ensure the predictions it gives are superior to those currently used.
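    As a concrete, hypothetical example of the Poisson/conditional class of models reviewed above, the sketch below estimates a trial's completion time by fitting a single recruitment rate to the accrual observed so far and simulating the waiting times for the remaining participants; real models would add per-centre rates, time trends, and prior distributions. All numbers are invented.

```python
import random

random.seed(7)

def predict_completion(recruited, elapsed_days, target, n_sims=2000):
    # Fit a single Poisson recruitment rate to the accrual observed so far,
    # then simulate the remaining recruitment as a sum of exponential
    # inter-arrival times and report the median completion day.
    rate = recruited / elapsed_days            # patients per day, all centres
    remaining = target - recruited
    finish_times = []
    for _ in range(n_sims):
        t = 0.0
        for _ in range(remaining):
            t += random.expovariate(rate)      # waiting time to next recruit
        finish_times.append(t)
    finish_times.sort()
    return elapsed_days + finish_times[n_sims // 2]

# 120 patients recruited in the first 200 days, target of 300:
est = predict_completion(recruited=120, elapsed_days=200, target=300)
```

    With these inputs the fitted rate is 0.6 patients/day, so the median completion estimate falls near day 500; the simulated distribution also yields the spread around that estimate, which is what a funder would need for contingency planning.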

  19. Worst case prediction of additives migration from polystyrene for food safety purposes: a model update.

    Science.gov (United States)

    Martínez-López, Brais; Gontard, Nathalie; Peyron, Stéphane

    2018-03-01

    A reliable prediction of the migration levels of plastic additives into food requires a robust estimation of diffusivity. Predictive modelling of diffusivity as recommended by the EU Commission is carried out using a semi-empirical equation that relies on two polymer-dependent parameters. These parameters were determined for the polymers most used by the packaging industry (LLDPE, HDPE, PP, PET, PS, HIPS) from the diffusivity data available at that time. In the specific case of general purpose polystyrene, the diffusivity data published since then show that use of the equation with the original parameters results in systematic underestimation of diffusivity. The goal of this study was therefore to propose an update of the aforementioned parameters for PS on the basis of up-to-date diffusivity data, so that the equation can be used for a reasoned overestimation of diffusivity.
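    The semi-empirical equation in question is of the Piringer type, in which the worst-case diffusion coefficient depends on the migrant's molecular weight, the temperature, and two polymer-specific parameters (here called `a_p` and `tau`). The sketch below illustrates that functional form only; the PS parameter values used are assumptions for illustration, not the updated values proposed by the study.

```python
import math

def worst_case_diffusivity(m_w, temp_k, a_p, tau):
    # Piringer-type semi-empirical worst-case diffusion coefficient (cm^2/s)
    # for a migrant of molecular weight m_w (g/mol) at temperature temp_k (K),
    # with the two polymer-dependent parameters a_p and tau.
    a_p_prime = a_p - tau / temp_k
    exponent = (a_p_prime
                - 0.1351 * m_w ** (2.0 / 3.0)
                + 0.003 * m_w
                - 10454.0 / temp_k)
    return 1.0e4 * math.exp(exponent)

# Assumed illustrative parameters (NOT the study's updated values):
# a 400 g/mol migrant in general purpose PS at 40 degrees C.
d = worst_case_diffusivity(m_w=400.0, temp_k=313.15, a_p=-1.0, tau=0.0)
```

    Because the parameters sit inside the exponent, a modest shift in `a_p` rescales the predicted diffusivity by orders of magnitude, which is why updating the PS parameters matters for worst-case migration estimates.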

  20. Conceptual Model for Systematic Construction Waste Management

    OpenAIRE

    Abd Rahim Mohd Hilmi Izwan; Kasim Narimah

    2017-01-01

    Development of the construction industry generates construction waste, which contributes to environmental issues. Weak compliance with construction waste management, especially on construction sites, has also contributed to the large volumes of waste sent to landfills and illegal dumping areas. This signals that construction projects need systematic construction waste management. To date, comprehensive criteria for construction waste management, particularly for const...

  1. Systematics of constant roll inflation

    Science.gov (United States)

    Anguelova, Lilia; Suranyi, Peter; Wijewardhana, L. C. R.

    2018-02-01

    We study constant roll inflation systematically. This is a regime in which the slow roll approximation can be violated. It has long been thought that this approximation is necessary for agreement with observations. However, recently it was understood that there can be inflationary models with a constant, and not necessarily small, rate of roll that are both stable and compatible with the observational constraint n_s ≈ 1. We investigate systematically the condition for such a constant-roll regime. In the process, we find a whole new class of inflationary models, in addition to the known solutions. We show that the new models are stable under scalar perturbations. Finally, we find a part of their parameter space, in which they produce a nearly scale-invariant scalar power spectrum, as needed for observational viability.
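    For reference, the defining condition of the constant-roll regime discussed above is usually written (sign conventions vary between papers) as a fixed ratio between the second and first time derivatives of the inflaton field:

```latex
\frac{\ddot{\phi}}{H\dot{\phi}} = \beta, \qquad \beta = \mathrm{const},
```

    so that the slow-roll approximation is recovered in the limit $\beta \to 0$, while $\beta = -3$ corresponds to the ultra-slow-roll case of a flat potential, where $\ddot{\phi} + 3H\dot{\phi} = 0$.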

  2. Effects of waveform model systematics on the interpretation of GW150914

    OpenAIRE

    Abbott, B P; Abbott, R; Abbott, T D; Abernathy, M R; Acernese, F; Ackley, K; Adams, C; Adams, T; Addesso, P; Adhikari, R X; Adya, V B; Affeldt, C; Agathos, M; Agatsuma, K; Aggarwal, N

    2017-01-01

    PAPER\\ud Effects of waveform model systematics on the interpretation of GW150914\\ud B P Abbott1, R Abbott1, T D Abbott2, M R Abernathy3, F Acernese4,5, K Ackley6, C Adams7, T Adams8, P Addesso9,144, R X Adhikari1, V B Adya10, C Affeldt10, M Agathos11, K Agatsuma11, N Aggarwal12, O D Aguiar13, L Aiello14,15, A Ain16, P Ajith17, B Allen10,18,19, A Allocca20,21, P A Altin22, A Ananyeva1, S B Anderson1, W G Anderson18, S Appert1, K Arai1, M C Araya1, J S Areeda23, N Arnaud24, K G Arun25, S Ascenz...

  3. Aerosol modelling and validation during ESCOMPTE 2001

    Science.gov (United States)

    Cousin, F.; Liousse, C.; Cachier, H.; Bessagnet, B.; Guillaume, B.; Rosset, R.

    The ESCOMPTE 2001 programme (Atmospheric Research. 69(3-4) (2004) 241) has resulted in an exhaustive set of dynamical, radiative, gas and aerosol observations (surface and aircraft measurements). A previous paper (Atmospheric Research. (2004) in press) dealt with dynamics and gas-phase chemistry. The present paper is an extension to aerosol formation, transport and evolution. To account for the important loadings of primary and secondary aerosols and their transformation processes in the ESCOMPTE domain, the ORISAM aerosol module (Atmospheric Environment. 35 (2001) 4751) was implemented on-line in the air-quality Meso-NH-C model. Additional developments have been introduced into the ORganic and Inorganic Spectral Aerosol Module (ORISAM) to improve the comparison between simulations and experimental surface and aircraft field data. This paper discusses this comparison for a simulation performed for one selected day, 24 June 2001, during the Intensive Observation Period IOP2b. Our work relies on BC and OCp emission inventories specifically developed for ESCOMPTE. This study confirms the need for a fine-resolution aerosol inventory with spectral chemical speciation. BC levels are satisfactorily reproduced, thus validating our emission inventory and its processing through Meso-NH-C. However, comparisons for reactive species generally show an underestimation of concentrations. Organic aerosol levels are rather well simulated, though with a trend toward underestimation in the afternoon. Inorganic aerosol species are underestimated for several reasons, some of which have been identified. For sulphates, primary emissions were introduced. Improvements were also obtained for modelled nitrate and ammonium levels after introducing heterogeneous chemistry. However, the absence of modelling of terrigenous particles is probably a major cause of the nitrate and ammonium underestimations. Particle numbers and size distributions are well reproduced, but only in the submicrometer range. Our work points out

  4. Increasing the Accuracy of Mapping Urban Forest Carbon Density by Combining Spatial Modeling and Spectral Unmixing Analysis

    Directory of Open Access Journals (Sweden)

    Hua Sun

    2015-11-01

Full Text Available Accurately mapping urban vegetation carbon density is challenging because of complex landscapes and mixed pixels. In this study, a novel methodology was proposed that combines a linear spectral unmixing analysis (LSUA) with a linear stepwise regression (LSR), a logistic model-based stepwise regression (LMSR) and k-Nearest Neighbors (kNN), to map the forest carbon density of Shenzhen City, China, using Landsat 8 imagery and sample plot data collected in 2014. The independent variables that statistically significantly improved the fit of a model to the data and reduced the sum of squared errors were first selected from a total of 284 spectral variables derived from the image bands. The vegetation fraction from LSUA was then added as an independent variable. The results obtained using cross-validation showed that: (1) compared to the methods without the vegetation information, adding the vegetation fraction increased the accuracy of mapping carbon density by 1%–9.3%; (2) as the observed values increased, the LSR and kNN residuals showed overestimates and underestimates for the smaller and larger observations, respectively, while LMSR reduced these systematic over- and underestimations; (3) LSR resulted in illogically negative and unreasonably large estimates, while kNN produced the greatest values of root mean square error (RMSE). The results indicate that combining the spatial modeling method LMSR and the spectral unmixing analysis LSUA, coupled with Landsat imagery, is most promising for increasing the accuracy of urban forest carbon density maps. In addition, this method has considerable potential for accurate, rapid and nondestructive prediction of urban and peri-urban forest carbon stocks with an acceptable level of error and low cost.
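The unmixing step described above can be sketched in a few lines: each mixed pixel is solved as a least-squares combination of class endmember spectra, and the recovered vegetation fraction then joins the spectral predictors in the carbon-density regression. The endmember spectra and pixel values below are hypothetical illustrations, not the paper's calibrated data.

```python
import numpy as np

def unmix_fractions(pixel, endmembers):
    """Linear spectral unmixing: solve pixel ~= endmembers @ f by least
    squares, then clip negatives and renormalise so fractions sum to 1.
    `endmembers` has one column per land-cover class."""
    f, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    f = np.clip(f, 0.0, None)
    return f / f.sum()

rng = np.random.default_rng(0)
# Hypothetical endmember spectra for vegetation, soil, impervious (6 bands).
E = rng.uniform(0.05, 0.6, size=(6, 3))
true_f = np.array([0.7, 0.2, 0.1])            # mostly vegetated pixel
pixel = E @ true_f + rng.normal(0, 1e-3, 6)   # mixed pixel plus noise

f = unmix_fractions(pixel, E)
veg_fraction = f[0]

# veg_fraction would then be appended to the selected spectral variables
# as one more column of the regression design matrix.
print(f, veg_fraction)
```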

  5. Modeling systematic errors: polychromatic sources of Beer-Lambert deviations in HPLC/UV and nonchromatographic spectrophotometric assays.

    Science.gov (United States)

    Galli, C

    2001-07-01

It is well established that the use of polychromatic radiation in spectrophotometric assays leads to excursions from the Beer-Lambert limit. This Note models the resulting systematic error as a function of assay spectral width, slope of the molecular extinction coefficient, and analyte concentration. The theoretical calculations are compared with recent experimental results; a parameter is introduced which can be used to estimate the magnitude of the systematic error in both chromatographic and nonchromatographic spectrophotometric assays. It is important to realize that the polychromatic radiation employed in common laboratory equipment can yield assay errors up to approximately 4%, even at absorbance levels generally considered 'safe' (i.e. absorbance < 1). Thus careful consideration of instrumental spectral width, analyte concentration, and slope of the molecular extinction coefficient is required to ensure robust analytical methods.
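The deviation described here is easy to reproduce numerically: when the detector averages transmitted intensity over a finite bandpass before the logarithm is taken, the apparent absorbance falls below the Beer-Lambert prediction, and the bias grows with spectral width, extinction-coefficient slope and concentration. The band shape and slope below are illustrative assumptions, not the Note's fitted parameters.

```python
import numpy as np

def apparent_absorbance(conc, eps, path=1.0):
    """Absorbance seen by a detector that sums transmitted intensity over
    the whole bandpass before taking the log (flat source spectrum assumed).
    `eps` samples the extinction coefficient across the band."""
    transmitted = np.mean(10.0 ** (-eps * conc * path))
    return -np.log10(transmitted)

# Hypothetical 10 nm bandpass with a linearly sloped extinction coefficient.
wl = np.linspace(495.0, 505.0, 101)
eps = 1000.0 + 50.0 * (wl - 500.0)           # L mol^-1 cm^-1

c = 1e-3                                     # mol/L
ideal = 1000.0 * c                           # Beer-Lambert at band centre
measured = apparent_absorbance(c, eps)
error_pct = 100.0 * (ideal - measured) / ideal
print(measured, error_pct)                   # systematic low bias of a few %
```

With a constant `eps` (true monochromatic case) the function reproduces the Beer-Lambert value exactly, so the entire bias comes from the slope across the band.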

  6. Implementing learning organization components in Ardabil Regional Water Company based on Marquardt systematic model

    Directory of Open Access Journals (Sweden)

    Shahram Mirzaie Daryani

    2015-09-01

Full Text Available The main purpose of this study was to survey the implementation of learning organization characteristics, based on Marquardt's systematic model, in the Ardabil Regional Water Company. Two hundred and four staff (164 employees and 40 authorities) participated in the study. Data were collected with the Marquardt questionnaire, whose validity and reliability had been confirmed. Analysis of the data showed that learning organization characteristics were applied at an above-average level in some subsystems of the Marquardt model, and that there was a significant difference between the current position and the excellent position with respect to the application of learning organization characteristics. The results of this study can be used to improve the work processes of organizations and institutions.

  7. Rehabilitation service models for people with physical and/or mental disability living in low- and middle-income countries: A systematic review.

    Science.gov (United States)

    Furlan, Andréa D; Irvin, Emma; Munhall, Claire; Giraldo-Prieto, Mario; Fullerton, Laura; McMaster, Robert; Danak, Shivang; Costante, Alicia; Pitzul, Kristen B; Bhide, Rohit P; Marchenko, Stanislav; Mahood, Quenby; David, Judy A; Flannery, John F; Bayley, Mark

    2018-04-03

To compare models of rehabilitation services for people with mental and/or physical disability in order to determine optimal models for therapy and interventions in low- to middle-income countries. CINAHL, EMBASE, MEDLINE, CENTRAL, PsycINFO, Business Source Premier, HINARI, CEBHA and PubMed. Systematic reviews, randomized controlled trials and observational studies comparing two or more models of rehabilitation care, in any language. Data extraction: standardized forms were used. Methodological quality was assessed using AMSTAR and quality of evidence was assessed using GRADE. Twenty-four systematic reviews, which included 578 studies and 202,307 participants, were selected. In addition, four primary studies were included to fill gaps in the systematic reviews. The studies were conducted in a variety of countries. Moderate- to high-quality evidence supports the following models of rehabilitation services: psychological intervention in primary care settings for people with major depression; admission to an inpatient, multidisciplinary, specialized rehabilitation unit for those with recent onset of a severe disabling condition; and outpatient rehabilitation with multidisciplinary care in the community, hospital or home for less severe conditions. However, a model of rehabilitation service that includes early discharge is not recommended for elderly patients with severe stroke, chronic obstructive pulmonary disease, hip fracture or total joint replacement. Models of rehabilitation care in inpatient, multidisciplinary and specialized rehabilitation units are recommended for the treatment of severe conditions with recent onset, as they reduce mortality and the need for institutionalized care, especially among elderly patients, stroke patients, or those with chronic back pain. Results are expected to be generalizable to brain/spinal cord injury and complex fractures.

  8. Dynamic epidemiological models for dengue transmission: a systematic review of structural approaches.

    Directory of Open Access Journals (Sweden)

    Mathieu Andraud

Full Text Available Dengue is a vector-borne disease recognized as the major arboviral disease, with four immunologically distinct dengue serotypes coexisting in many endemic areas. Several mathematical models have been developed to understand the transmission dynamics of dengue, including the role of cross-reactive antibodies against the four different dengue serotypes. We aimed to review deterministic models of dengue transmission in order to summarize the evolution of insights for, and provided by, such models, and to identify important characteristics for future model development. We identified relevant publications using PubMed and ISI Web of Knowledge, focusing on mathematical deterministic models of dengue transmission. Model assumptions were systematically extracted from each reviewed model structure and linked with their underlying epidemiological concepts. After defining common terms in vector-borne disease modelling, we categorised forty-two published models of interest into single-serotype and multi-serotype models. The multi-serotype models assumed either vector-host or direct host-to-host transmission (ignoring the vector component). For each approach, we discussed the underlying structural and parameter assumptions, threshold behaviour and the projected impact of interventions. In view of the expected availability of dengue vaccines, modelling approaches will increasingly focus on the effectiveness and cost-effectiveness of vaccination options. For this purpose, the level of representation of the vector and host populations seems pivotal. Since vector-host transmission models would be required for projections of combined vaccination and vector control interventions, we advocate their use as most relevant to advise health policy in the future. The limited understanding of the factors which influence dengue transmission, as well as limited data availability, remain important concerns when applying dengue models to real-world decision problems.
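A minimal single-serotype vector-host model of the kind reviewed here can be sketched with forward-Euler integration: hosts follow S-I-R dynamics, vectors S-I dynamics with population turnover replacing recovery. All parameter values are illustrative assumptions, not estimates taken from the reviewed models.

```python
def vector_host_sir(beta_hv=0.4, beta_vh=0.4, gamma=0.15, mu=0.1,
                    days=400.0, dt=0.1):
    """Single-serotype vector-host transmission sketch. All compartments
    are population fractions; integration is plain forward Euler."""
    sh, ih, rh = 0.999, 0.001, 0.0   # host: susceptible/infected/recovered
    sv, iv = 1.0, 0.0                # vector: susceptible/infected
    for _ in range(int(days / dt)):
        new_h = beta_hv * sh * iv    # infectious bites on susceptible hosts
        new_v = beta_vh * sv * ih    # vectors infected while biting hosts
        dsh, dih, drh = -new_h, new_h - gamma * ih, gamma * ih
        dsv = mu - new_v - mu * sv   # vector births replace deaths
        div = new_v - mu * iv        # infected vectors die at rate mu
        sh, ih, rh = sh + dt * dsh, ih + dt * dih, rh + dt * drh
        sv, iv = sv + dt * dsv, iv + dt * div
    return sh, ih, rh, iv

sh, ih, rh, iv = vector_host_sir()
print(rh)   # final attack rate among hosts
```

With these illustrative rates the basic reproduction number beta_hv*beta_vh/(gamma*mu) exceeds one, so a seeded epidemic takes off; the host compartments sum to one throughout because their derivatives cancel exactly.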

  9. Neural systems language: a formal modeling language for the systematic description, unambiguous communication, and automated digital curation of neural connectivity.

    Science.gov (United States)

    Brown, Ramsay A; Swanson, Larry W

    2013-09-01

Systematic description and the unambiguous communication of findings and models remain among the unresolved fundamental challenges in systems neuroscience. No common descriptive frameworks exist to systematically describe the connective architecture of the nervous system, even at the grossest level of observation. Furthermore, the accelerating volume of novel data generated on neural connectivity outpaces the rate at which these data are curated into neuroinformatics databases to digitally synthesize systems-level insights from disjointed reports and observations. To help address these challenges, we propose the Neural Systems Language (NSyL). NSyL is a modeling language to be used by investigators to systematically encode and communicate reports of neural connectivity from neuroanatomy and brain imaging. NSyL engenders systematic description and communication of connectivity irrespective of the animal taxon described, the experimental or observational technique implemented, or the nomenclature referenced. As a language, NSyL is internally consistent, concise, and comprehensible to both humans and computers. NSyL is a promising development for systematizing the representation of neural architecture, effectively managing the increasing volume of data on neural connectivity, and streamlining systems neuroscience research. Here we present similar precedent systems, how NSyL extends existing frameworks, and the reasoning behind NSyL's development. We explore NSyL's potential for balancing robustness and consistency in representation by encoding previously reported assertions of connectivity from the literature as examples. Finally, we propose and discuss the implications of a framework for how NSyL will be digitally implemented in the future to streamline curation of experimental results and bridge the gaps among anatomists, imagers, and neuroinformatics databases. Copyright © 2013 Wiley Periodicals, Inc.

  10. Systematics of β and γ parameters of O(6)-like nuclei in the interacting boson model

    International Nuclear Information System (INIS)

    Wang Baolin

    1997-01-01

By comparing quadrupole moments between the interacting boson model (IBM) and the collective model, a simple calculation of the triaxial deformation parameters β and γ in O(6)-like nuclei is presented, based on the intrinsic frame of the IBM. The systematics of β and γ are studied. Realistic cases are calculated for the even-even Xe, Ba and Ce isotopes, and smooth dependences of the strength ratios θ_3/κ and the effective charges e_2 on the proton and neutron boson numbers N_π and N_ν are found.

  11. A systematic study of multiple minerals precipitation modelling in wastewater treatment.

    Science.gov (United States)

    Kazadi Mbamba, Christian; Tait, Stephan; Flores-Alsina, Xavier; Batstone, Damien J

    2015-11-15

Mineral solids precipitation is important in wastewater treatment. However, approaches to minerals precipitation modelling are varied, often empirical, and mostly focused on single precipitate classes. A common approach, applicable to multi-species precipitates, is needed for integration into existing wastewater treatment models. The present study systematically tested a semi-mechanistic modelling approach, using various experimental platforms with multiple minerals precipitation. Experiments included dynamic titration with addition of sodium hydroxide to synthetic wastewater, and aeration to progressively increase pH and induce precipitation in real piggery digestate and sewage sludge digestate. The model approach consisted of an equilibrium part for aqueous phase reactions and a kinetic part for minerals precipitation. The model was fitted to dissolved calcium, magnesium, total inorganic carbon and phosphate. Results indicated that precipitation was dominated by the mineral struvite, forming together with varied and minor amounts of calcium phosphate and calcium carbonate. The model approach was noted to have the advantage of requiring a minimal number of fitted parameters, so the model was readily identifiable. Kinetic rate coefficients, which were statistically fitted, were generally in the range 0.35-11.6 h^-1 with relative confidence intervals of 10-80%. Confidence regions for the kinetic rate coefficients were often asymmetric, with model-data residuals increasing more gradually with larger coefficient values. This suggests that a large kinetic coefficient could be used when actual measured data is lacking for a particular precipitate-matrix combination. Correlation between the kinetic rate coefficients of different minerals was low, indicating that parameter values for individual minerals could be independently fitted (keeping all other model parameters constant).
Implementation was therefore relatively flexible, and would be readily expandable to include other
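The kinetic half of such a model can be illustrated for a single 1:1 mineral: the precipitation rate is driven by the saturation ratio S = IAP/Ksp and switches off at equilibrium. The rate law, constants and concentrations below are simplified illustrations (no activity corrections, no pH coupling), not the fitted parameters reported above.

```python
def precipitate(a0, b0, ksp, k=1e-4, hours=48.0, dt=0.001):
    """Kinetic precipitation of a 1:1 mineral AB: rate = k*(S - 1) while
    supersaturated (S = a*b/Ksp > 1), zero otherwise; both dissolved
    species are consumed stoichiometrically. k is in mol L^-1 h^-1."""
    a, b = a0, b0
    for _ in range(int(hours / dt)):
        s = a * b / ksp
        rate = k * max(s - 1.0, 0.0)   # no dissolution branch in this sketch
        a -= dt * rate
        b -= dt * rate
    return a, b

# Supersaturated start (S = 4); for equal species the equilibrium
# concentration is sqrt(Ksp) = 1e-3 mol/L.
a, b = precipitate(2e-3, 2e-3, ksp=1e-6)
print(a, b, a * b / 1e-6)   # concentrations relax towards S = 1
```

In a full multi-mineral model each mineral would carry its own rate term of this form, consuming its own ions, coupled to the equilibrium speciation step at each time step.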

  12. Characteristics of Indigenous primary health care service delivery models: a systematic scoping review.

    Science.gov (United States)

    Harfield, Stephen G; Davy, Carol; McArthur, Alexa; Munn, Zachary; Brown, Alex; Brown, Ngiare

    2018-01-25

Indigenous populations have poorer health outcomes compared to their non-Indigenous counterparts. The evolution of Indigenous primary health care services arose from mainstream health services being unable to adequately meet the needs of Indigenous communities, and from Indigenous peoples often being excluded and marginalised by mainstream health services. Part of the solution has been to establish Indigenous-specific primary health care services, run for and managed by Indigenous peoples. There are a number of reasons why Indigenous primary health care services are more likely than mainstream services to improve the health of Indigenous communities. Their success is partly due to the fact that they often provide comprehensive programs that incorporate treatment and management, prevention and health promotion, as well as addressing the social determinants of health. However, there are gaps in the evidence base, including the characteristics that contribute to the success of Indigenous primary health care services in providing comprehensive primary health care. This systematic scoping review aims to identify the characteristics of Indigenous primary health care service delivery models. It was led by an Aboriginal researcher, using the Joanna Briggs Institute Scoping Review Methodology. All published peer-reviewed and grey literature indexed in the PubMed, EBSCO CINAHL, Embase, Informit, Mednar, and Trove databases from September 1978 to May 2015 was reviewed for inclusion. Studies were included if they described the characteristics of service delivery models implemented within an Indigenous primary health care service. Sixty-two studies met the inclusion criteria. Data were extracted and then thematically analysed to identify the characteristics of Indigenous PHC service delivery models. Culture was the most prominent characteristic, underpinning all of the other seven characteristics identified: accessible health services, community

  13. Systematic Assessment of Neutron and Gamma Backgrounds Relevant to Operational Modeling and Detection Technology Implementation

    Energy Technology Data Exchange (ETDEWEB)

    Archer, Daniel E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Hornback, Donald Eric [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Johnson, Jeffrey O. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nicholson, Andrew D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Patton, Bruce W. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Peplow, Douglas E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Miller, Thomas Martin [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Ayaz-Maierhafer, Birsen [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-01-01

This report summarizes the findings of a two-year effort to systematically assess neutron and gamma backgrounds relevant to operational modeling and detection technology implementation. The first year focused on reviewing the origins of background sources and their impact on measured rates in operational scenarios of interest. The second year focused on assessing detector and algorithm performance as they pertain to operational requirements against the various background sources and background levels.

  14. Modelling of the spallation reaction: analysis and testing of nuclear models; Simulation de la spallation: analyse et test des modeles nucleaires

    Energy Technology Data Exchange (ETDEWEB)

    Toccoli, C

    2000-04-03

The spallation reaction is considered as a two-step process. The first, very fast stage (10⁻²², 10⁻²⁹ s) corresponds to the individual interaction between the incident projectile and nucleons; this interaction is followed by a series of nucleon-nucleon collisions (intranuclear cascade) during which fast particles are emitted and the nucleus is left in a highly excited state. The second, slower stage (10⁻¹⁸, 10⁻¹⁹ s) is the one during which the nucleus de-excites completely. This de-excitation proceeds by evaporation of light particles (n, p, d, t, ³He, ⁴He) and/or fission and/or fragmentation. The HETC code has been designed to simulate spallation reactions; the simulation is based on this two-step process and on several intranuclear cascade models (the Bertini, Cugnon and Helder Duarte models), while the evaporation model relies on the Weisskopf-Ewing statistical theory. The purpose of this work is to evaluate the ability of the HETC code to predict experimental results. A methodology for comparing relevant experimental data with calculated results is presented, and a preliminary estimate of the systematic error of the HETC code is proposed. The main problem of cascade models originates in the difficulty of simulating inelastic nucleon-nucleon collisions: the emission of pions is over-estimated and the corresponding differential spectra are badly reproduced. This inaccuracy of the cascade models strongly affects the determination of the excitation state of the nucleus at the end of the first step and, indirectly, the distribution of final residual nuclei. Testing of the evaporation model has shown that the emission of high-energy light particles is under-estimated. (A.C.)

  15. Evaluation of surface air temperature and urban effects in Japan simulated by non-hydrostatic regional climate model

    Science.gov (United States)

    Murata, A.; Sasaki, H.; Hanafusa, M.; Kurihara, K.

    2012-12-01

We evaluated the performance of a well-developed nonhydrostatic regional climate model (NHRCM) with a spatial resolution of 5 km with respect to temperature in the present-day climate of Japan, and estimated urban heat island (UHI) intensity by comparing the model results and observations. The magnitudes of the root mean square error (RMSE) and systematic error (bias) for the annual averages of daily mean (Ta), maximum (Tx), and minimum (Tn) temperatures are within 1.5 K, demonstrating that the temperatures of the present-day climate are reproduced well by NHRCM. These small errors indicate that temperature variability produced by local-scale phenomena is well represented by a model with such a high spatial resolution. It is also found that the magnitudes of RMSE and bias in the annually-averaged Tx are relatively large compared with those in Ta and Tn. The horizontal distributions of the error, defined as the difference between simulated and observed temperatures (simulated minus observed), show negative errors in the annually-averaged Tn in three major metropolitan areas: Tokyo, Osaka, and Nagoya. These negative errors in urban areas contribute to the cold bias in the annually-averaged Tx. The relation between the underestimation of temperature and the degree of urbanization is therefore examined quantitatively using National Land Numerical Information provided by the Ministry of Land, Infrastructure, Transport, and Tourism. The annually-averaged Ta, Tx, and Tn are all underestimated in areas where the degree of urbanization is relatively high. The underestimations in these areas are attributed to the treatment of urban areas in NHRCM, in which the effects of urbanization, such as waste heat and artificial structures, are not included. In contrast, in rural areas, the simulated Tx is underestimated and Tn is overestimated although the errors in Ta are small, indicating that the simulated diurnal temperature range is underestimated. The reason for the relatively large
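The two error metrics used throughout this evaluation are straightforward to compute from paired model-observation series; a small sketch with hypothetical station temperatures (not the NHRCM data):

```python
import numpy as np

def rmse_and_bias(sim, obs):
    """bias = mean(simulated - observed), i.e. the systematic error;
    RMSE = root mean square of the same paired differences."""
    d = np.asarray(sim, dtype=float) - np.asarray(obs, dtype=float)
    return float(np.sqrt(np.mean(d ** 2))), float(np.mean(d))

# Hypothetical annual-mean temperatures (K) at four stations.
sim = [285.1, 288.4, 290.0, 283.7]
obs = [285.6, 288.0, 291.2, 284.1]
rmse, bias = rmse_and_bias(sim, obs)
print(rmse, bias)   # RMSE is always >= |bias|
```

A negative bias, as in this toy case, corresponds to the cold bias discussed above (simulated minus observed below zero).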

  16. Evaluation of Simulation Models that Estimate the Effect of Dietary Strategies on Nutritional Intake: A Systematic Review.

    Science.gov (United States)

    Grieger, Jessica A; Johnson, Brittany J; Wycherley, Thomas P; Golley, Rebecca K

    2017-05-01

Background: Dietary simulation modeling can predict dietary strategies that may improve nutritional or health outcomes. Objectives: The study aims were to undertake a systematic review of simulation studies that model dietary strategies aiming to improve nutritional intake, body weight, and related chronic disease, and to assess the methodologic and reporting quality of these models. Methods: The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines guided the search strategy, with studies located through electronic searches [Cochrane Library, Ovid (MEDLINE and Embase), EBSCOhost (CINAHL), and Scopus]. Study findings were described, and dietary modeling methodology and reporting quality were critiqued using a set of quality criteria adapted for dietary modeling from general modeling guidelines. Results: Forty-five studies were included and categorized as modeling moderation, substitution, reformulation, or promotion dietary strategies. Moderation and reformulation strategies targeted individual nutrients or foods to theoretically improve one particular nutrient or health outcome, estimating small to modest improvements. Substituting unhealthy foods with healthier choices was estimated to be effective across a range of nutrients, including an estimated reduction in intake of saturated fatty acids, sodium, and added sugar. Promotion of fruits and vegetables predicted marginal changes in intake. Overall, the quality of the studies was moderate to high, with certain features of the quality criteria consistently reported. Conclusions: Based on the results of the reviewed dietary simulation modeling studies, targeting a variety of foods rather than individual foods or nutrients theoretically appears most effective in estimating improvements in nutritional intake, particularly in reducing intake of nutrients commonly consumed in excess. A combination of strategies could theoretically be used to deliver the best improvement in outcomes.
Study quality was moderate to
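A substitution strategy of the kind found most effective here can be sketched as a simple accounting model: shift part of the daily intake of one food to another and recompute the nutrient totals. Food names and per-gram nutrient contents below are hypothetical.

```python
def substitute(diet, source, target, fraction, foods):
    """Move `fraction` of the daily intake (g) of `source` to `target`
    and return the resulting change in total daily nutrient intake."""
    moved = diet[source] * fraction
    new_diet = dict(diet)
    new_diet[source] -= moved
    new_diet[target] = new_diet.get(target, 0.0) + moved

    def totals(d):
        nutrients = next(iter(foods.values())).keys()
        return {n: sum(g * foods[f][n] for f, g in d.items())
                for n in nutrients}

    before, after = totals(diet), totals(new_diet)
    return {n: after[n] - before[n] for n in before}

# Hypothetical per-gram contents: saturated fat (g) and sodium (mg).
foods = {"processed_meat": {"satfat": 0.05,  "sodium_mg": 10.0},
         "legumes":        {"satfat": 0.002, "sodium_mg": 0.5}}
diet = {"processed_meat": 100.0, "legumes": 50.0}   # g/day

delta = substitute(diet, "processed_meat", "legumes",
                   fraction=0.5, foods=foods)
print(delta)   # the substitution reduces both saturated fat and sodium
```

Because one substitution moves mass between whole foods, it shifts several nutrients at once, which is why such strategies were estimated to be effective across a range of nutrients.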

  17. Background modelling of diffraction data in the presence of ice rings

    Directory of Open Access Journals (Sweden)

    James M. Parkhurst

    2017-09-01

Full Text Available An algorithm is presented for modelling the background for each Bragg reflection in a series of X-ray diffraction images containing Debye–Scherrer diffraction from ice in the sample. The method uses a global background model generated from the complete X-ray diffraction data set. Fitting of this model to the background pixels is then performed for each reflection independently. The algorithm uses a static background model that does not vary over the course of the scan. The greatest improvement can be expected for data where ice rings are present throughout the data set and the local background shape on the scale of a spot on the detector does not exhibit large time-dependent variation. However, the algorithm has been applied to data sets whose background showed large pixel variations (variance/mean > 2) and has been shown to improve the results of processing for these data sets. It is shown that the use of a simple flat-background model, as in traditional integration programs, causes systematic bias in the background determination at ice-ring resolutions, resulting in an overestimation of reflection intensities at the peaks of the ice rings and an underestimation of reflection intensities on either side of the ice ring. The new global background-model algorithm presented here corrects for this bias, resulting in a noticeable improvement in R factors following refinement.
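The flat-background bias described above is easy to demonstrate in one dimension: integrate a synthetic reflection with a mean-of-surroundings background estimate when a Gaussian "ice ring" sits under, or next to, the peak. Profile shapes and intensities are illustrative assumptions, not data from the paper.

```python
import numpy as np

def integrate_flat(obs, peak, bg_region):
    """Traditional flat-background integration: summed peak counts minus
    the mean background pixel value times the number of peak pixels."""
    flat = obs[bg_region].mean()
    return obs[peak].sum() - flat * len(peak)

x = np.arange(100)
ring = 100.0 + 80.0 * np.exp(-0.5 * ((x - 50.0) / 3.0) ** 2)  # ice-ring bump

TRUE_I = 500.0

def measure(center):
    obs = ring.copy()
    peak = np.arange(center - 2, center + 3)   # 5-pixel reflection
    obs[peak] += TRUE_I / len(peak)
    window = np.arange(center - 10, center + 11)
    bg_region = np.setdiff1d(window, peak)     # surrounding pixels only
    return integrate_flat(obs, peak, bg_region)

on_ring = measure(50)       # reflection sitting on the ring peak
beside_ring = measure(44)   # reflection on the flank of the ring
print(on_ring, beside_ring)
```

On the ring the surrounding pixels underestimate the true background, inflating the integrated intensity; beside the ring the high ring pixels inflate the background estimate and the intensity is underestimated, matching the bias pattern described above.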

  18. Dispersion of traffic exhausts emitted from a stationary line source versus individual moving cars – a numerical comparison

    Directory of Open Access Journals (Sweden)

    Günter Gross

    2016-09-01

Full Text Available A three-dimensional microscale model was used to study the effects of moving vehicles on air pollution in the close vicinity of a road. The numerical results are compared with general findings from wind tunnel experiments and field observations. The model proved suitable for capturing the main flow characteristics within an urban street canyon, in particular the modifications caused by running traffic. A comparison of the results for a stationary line source approach and for multiple single moving sources reveals significant differences. For a street in flat terrain, the near-road concentrations are underestimated by up to a factor of two if the emissions are approximated by a stationary line source. This underestimation decreases with increasing distance and becomes negligible 30–50 m away from the road. For an urban canyon situation, the line source assumption is a conservative approximation for the concentrations on the lee side of the street, while on the opposite pavement and wall a systematic underestimation was found. The effects of different traffic situations have also been studied and discussed.

  19. A Hamiltonian viewpoint in the modeling of switching power converters : A systematic modeling procedure of a large class of switching power converters using the Hamiltonian approach

    NARCIS (Netherlands)

    Escobar, Gerardo; Schaft, Arjan J. van der; Ortega, Romeo

    1999-01-01

    In this paper we show how, using the Hamiltonian formalism, we can systematically derive mathematical models that describe the behaviour of a large class of switching power converters, including the "Boost", "Buck", "Buck-Boost", "Čuk" and "Flyback" converters. We follow the approach earlier
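For the "Boost" converter, the averaged model obtained from such a Hamiltonian derivation can be sketched directly: with x = [inductor flux, capacitor charge] and H(x) = x1^2/(2L) + x2^2/(2C), the dynamics take the form dx/dt = (J(d) - R) dH/dx + g*E, where the duty ratio d enters the skew-symmetric interconnection matrix J and R collects the dissipation. Component values and the duty ratio below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def boost_pch(d, E=12.0, L=1e-3, C=1e-4, Rload=10.0, T=0.05, dt=1e-6):
    """Averaged boost converter in port-Hamiltonian form, integrated with
    forward Euler. dH/dx = [inductor current, capacitor voltage]."""
    x = np.zeros(2)                                # [flux, charge]
    g = np.array([1.0, 0.0])                       # source feeds the inductor
    J = np.array([[0.0, -(1.0 - d)],               # skew-symmetric, duty-
                  [1.0 - d, 0.0]])                 # ratio-modulated coupling
    R = np.array([[0.0, 0.0],
                  [0.0, 1.0 / Rload]])             # load dissipation
    for _ in range(int(T / dt)):
        dH = np.array([x[0] / L, x[1] / C])        # [i_L, v_C]
        x = x + dt * ((J - R) @ dH + g * E)
    return x[1] / C                                # output voltage

v_out = boost_pch(d=0.5)
print(v_out)   # ideal steady state is E / (1 - d) = 24 V
```

The skew-symmetry of J makes the energy balance explicit: only R and the source port change H, which is the structural property the Hamiltonian derivation exploits.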

  20. A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).

    Science.gov (United States)

    Kuang, Qifan; Wang, MinQi; Li, Rong; Dong, YongCheng; Li, Yizhou; Li, Menglong

    2014-01-01

Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance for constructing more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods for predicting ADRs, by implementing and evaluating additional algorithms that have previously been used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to some extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that the final formulas of these algorithms could all be cast in linear form. Based on this finding, we propose a new algorithm, called the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided for selecting optimal features and algorithms.
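The role of the Jaccard coefficient in such profile-based prediction can be sketched with a simplified similarity-weighted vote; this is a hedged stand-in for, not the paper's exact formulation of, the general weighted profile method, and the drug names and ADR profiles are hypothetical.

```python
def jaccard(a, b):
    """Jaccard coefficient between two ADR sets."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def weighted_profile_score(drug, adr, profiles):
    """Score a candidate drug-ADR pair as the Jaccard-similarity-weighted
    fraction of the other drugs whose known profile lists that ADR."""
    sims = {d: jaccard(profiles[drug], s)
            for d, s in profiles.items() if d != drug}
    den = sum(sims.values())
    num = sum(w for d, w in sims.items() if adr in profiles[d])
    return num / den if den else 0.0

# Hypothetical known ADR profiles.
profiles = {"drugA": {"nausea", "headache"},
            "drugB": {"nausea", "headache", "rash"},
            "drugC": {"dizziness"}}

rash_score = weighted_profile_score("drugA", "rash", profiles)
dizzy_score = weighted_profile_score("drugA", "dizziness", profiles)
print(rash_score, dizzy_score)
```

Here drugA scores highly for "rash" because its only similar neighbour (drugB) lists it, while "dizziness" scores zero because the only drug listing it shares no ADRs with drugA.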

  1. Statistical errors in Monte Carlo estimates of systematic errors

    Science.gov (United States)

    Roe, Byron P.

    2007-01-01

For estimating the effects of a number of systematic errors on a data sample, one can generate Monte Carlo (MC) runs with systematic parameters varied and examine the change in the desired observed result. Two methods are often used. In the unisim method, the systematic parameters are varied one at a time by one standard deviation, each parameter corresponding to a MC run. In the multisim method, each MC run has all of the parameters varied; the amount of variation is chosen from the expected distribution of each systematic parameter, usually assumed to be a normal distribution. The variance of the overall systematic error determination is derived for each of the two methods and comparisons are made between them. If one focuses not on the error in the prediction of an individual systematic error, but on the overall error due to all systematic errors in the error matrix element in data bin m, the number of events needed is strongly reduced because of the averaging effect over all of the errors. For the simple models presented here, the multisim method was far better if the statistical error in the MC samples was larger than an individual systematic error, while in the reverse case the unisim method was better. Exact formulas, and formulas for the simple toy models, are presented so that realistic calculations can be made. The calculations in the present note are valid if the errors are in a linear region. If that region extends sufficiently far, one can have the unisims or multisims correspond to k standard deviations instead of one; this reduces the number of events required by a factor of k². The specific terms unisim and multisim were coined by Peter Meyers and Steve Brice, respectively, for the MiniBooNE experiment. However, the concepts have been developed over time and have been in general use for some time.
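For a linear one-bin toy model, the agreement of the two approaches is easy to check numerically: the quadrature sum of the unisim shifts and the variance over many multisims estimate the same total systematic variance. The shift sizes below are illustrative, not from any experiment.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear model: the predicted count in one data bin shifts by delta[i]
# per one-sigma change in systematic parameter i (deltas are illustrative).
delta = np.array([3.0, -2.0, 1.5, 0.5])
nominal = 100.0

def predict(theta):                 # theta measured in units of sigma
    return nominal + theta @ delta

# Unisim: one MC run per parameter, each shifted by +1 sigma.
shifts = np.array([predict(e) - nominal for e in np.eye(len(delta))])
var_unisim = np.sum(shifts ** 2)    # quadrature sum of individual shifts

# Multisim: every run varies all parameters, drawn from N(0, 1) priors.
thetas = rng.standard_normal((100_000, len(delta)))
var_multisim = (nominal + thetas @ delta).var()

print(var_unisim, var_multisim)     # both estimate the same total variance
```

In this idealized (noise-free, linear) setting the two agree up to multisim sampling fluctuations; the trade-offs discussed in the paper arise once MC statistical error is added to each run.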

  2. A comprehensive, consistent and systematic mathematical model of PEM fuel cells

    International Nuclear Information System (INIS)

    Baschuk, J.J.; Li Xianguo

    2009-01-01

This paper presents a comprehensive, consistent and systematic mathematical model for PEM fuel cells that can be used as a general formulation for the simulation and analysis of PEM fuel cells. As an illustration, the model is applied to an isothermal, steady state, two-dimensional PEM fuel cell. Water is assumed to be present either in the gas phase or as a liquid in the pores of the polymer electrolyte. The model includes the transport of gas in the gas flow channels, electrode backing and catalyst layers; the transport of water and hydronium in the polymer electrolyte of the catalyst and polymer electrolyte layers; and the transport of electrical current in the solid phase. Water and ion transport in the polymer electrolyte was modeled using the generalized Stefan-Maxwell equations, based on non-equilibrium thermodynamics. Model simulations show that the bulk, convective gas velocity facilitates hydrogen transport from the gas flow channels to the anode catalyst layers, but inhibits oxygen transport. While some of the water required by the anode is supplied by the water produced in the cathode, the majority must be supplied by the anode gas phase, making operation with fully humidified reactants necessary. The length of the gas flow channel has a significant effect on the current production of the PEM fuel cell, with a longer channel having a lower performance relative to a shorter one. This lower performance is caused by a greater variation in water content along the longer channel.

  3. Systematic reviews in bioethics: types, challenges, and value.

    Science.gov (United States)

    McDougall, Rosalind

    2014-02-01

    There has recently been interest in applying the techniques of systematic review to bioethics literature. In this paper, I identify the three models of systematic review proposed to date in bioethics: systematic reviews of empirical bioethics research, systematic reviews of normative bioethics literature, and systematic reviews of reasons. I argue that all three types yield information useful to scholarship in bioethics, yet they also face significant challenges, particularly in relation to terminology and time. Drawing on my recent experience conducting a systematic review, I suggest that complete comprehensiveness may not always be an appropriate goal of a literature review in bioethics, depending on the research question. In some cases, all the relevant ideas may be captured without capturing all the relevant literature. I conclude that systematic reviews in bioethics have an important role to play alongside the traditional broad-brush approach to reviewing literature in bioethics.

  4. Quality of systematic reviews in pediatric oncology--a systematic review.

    Science.gov (United States)

    Lundh, Andreas; Knijnenburg, Sebastiaan L; Jørgensen, Anders W; van Dalen, Elvira C; Kremer, Leontien C M

    2009-12-01

    To ensure evidence-based decision making in pediatric oncology, systematic reviews are necessary. The objective of our study was to evaluate the methodological quality of all currently existing systematic reviews in pediatric oncology. We identified eligible systematic reviews through a systematic search of the literature. Data on clinical and methodological characteristics of the included systematic reviews were extracted. The methodological quality of the included systematic reviews was assessed using the overview quality assessment questionnaire, a validated 10-item quality assessment tool. We compared the methodological quality of systematic reviews published in regular journals with that of Cochrane systematic reviews. We included 117 systematic reviews: 99 published in regular journals and 18 Cochrane systematic reviews. The average methodological quality was low on all ten items, but the quality of Cochrane systematic reviews was significantly higher than that of systematic reviews published in regular journals. On a 1-7 scale, the median overall quality score for all systematic reviews was 2 (range 1-7), with a score of 1 (range 1-7) for systematic reviews in regular journals compared to 6 (range 3-7) for Cochrane systematic reviews. Many of the included reviews had methodological flaws leading to a high risk of bias. While Cochrane systematic reviews were of higher methodological quality than systematic reviews in regular journals, some of them also had methodological problems. Therefore, the methodology of each individual systematic review should be scrutinized before accepting its results.

  5. Investigating the role of chemical and physical processes on organic aerosol modelling with CAMx in the Po Valley during a winter episode

    Science.gov (United States)

    Meroni, A.; Pirovano, G.; Gilardoni, S.; Lonati, G.; Colombi, C.; Gianelle, V.; Paglione, M.; Poluzzi, V.; Riva, G. M.; Toppetti, A.

    2017-12-01

    Traditional aerosol mechanisms underestimate the observed organic aerosol concentration, especially due to the lack of information on secondary organic aerosol (SOA) formation and processing. In this study we evaluate the chemical and transport model CAMx during a one-month period in winter (February 2013) over a 5 km resolution domain covering the whole Po Valley (Northern Italy). This work aims at investigating the effects of chemical and physical atmospheric processing on modelling results and, in particular, at evaluating the CAMx sensitivity to organic aerosol (OA) modelling schemes: we compare the recent 1.5D-VBS algorithm (CAMx-VBS) with the traditional Odum 2-product model (CAMx-SOAP). Additionally, a thorough diagnostic analysis of the reproduction of meteorology, precursors and aerosol components was intended to point out the strengths and weaknesses of the modelling system and guide its improvement. Firstly, we evaluate model performance for criteria PM concentrations. PM10 concentration was underestimated by CAMx-SOAP and even more so by CAMx-VBS, with the latter showing a bias ranging between -4.7 and -7.1 μg m^-3. PM2.5 model performance was somewhat better than for PM10, showing a mean bias ranging between -0.5 μg m^-3 at rural sites and -5.5 μg m^-3 at urban and suburban sites. CAMx performance for OA was clearly worse than for the other PM compounds (negative bias ranging between -40% and -75%). The comparison of model results with OA sources (identified by PMF analysis) shows that the VBS scheme underestimates freshly emitted organic aerosol while SOAP overestimates it. The VBS scheme correctly reproduces biomass burning (BBOA) contributions to primary OA (POA) concentrations. In contrast, VBS slightly underestimates the contribution from fossil-fuel combustion (HOA), indicating that POA emissions related to road transport are either underestimated or assigned to higher volatility classes. The VBS scheme under-predicts SOA too, but to a lesser extent.

  6. Acute Systemic Complications of Convulsive Status Epilepticus-A Systematic Review.

    Science.gov (United States)

    Sutter, Raoul; Dittrich, Tolga; Semmlack, Saskia; Rüegg, Stephan; Marsch, Stephan; Kaplan, Peter W

    2018-01-01

    Status epilepticus is a neurologic emergency with high morbidity and mortality requiring neurointensive care and treatment of systemic complications. This systematic review compiles the current literature on acute systemic complications of generalized convulsive status epilepticus in adults and their immediate clinical impact along with recommendations for optimal neurointensive care. We searched PubMed, Medline, Embase, and the Cochrane library for articles published between 1960 and 2016 and reporting on systemic complications of convulsive status epilepticus. All identified studies were screened for eligibility by two independent reviewers. Key data were extracted using standardized data collection forms. Thirty-two of 3,046 screened articles were included. Acute manifestations and complications reported in association with generalized convulsive status epilepticus can affect all organ systems fueling complex cascades and multiple organ interactions. Most reported complications result from generalized excessive muscle contractions that increase body temperature and serum potassium levels and may interfere with proper and coordinated function of respiratory muscles followed by hypoxia and respiratory acidosis. Increased plasma catecholamines can cause a decay of skeletal muscle cells and cardiac function, including stress cardiomyopathy. Systemic complications are often underestimated or misinterpreted as they may mimic underlying causes of generalized convulsive status epilepticus or treatment-related adverse events. Management of generalized convulsive status epilepticus should center on the administration of antiseizure drugs, treatment of the underlying causes, and the attendant systemic consequences to prevent secondary seizure-related injuries. Heightened awareness, systematic clinical assessment, and diagnostic workup and management based on the proposed algorithm are advocated as they are keys to optimal outcome.

  7. A Systematic Review of Cost-Effectiveness Models in Type 1 Diabetes Mellitus.

    Science.gov (United States)

    Henriksson, Martin; Jindal, Ramandeep; Sternhufvud, Catarina; Bergenheim, Klas; Sörstadius, Elisabeth; Willis, Michael

    2016-06-01

    Critiques of cost-effectiveness modelling in type 1 diabetes mellitus (T1DM) are scarce and are often undertaken in combination with type 2 diabetes mellitus (T2DM) models. However, T1DM is a separate disease, and it is therefore important to appraise modelling methods in T1DM. This review identified published economic models in T1DM and provided an overview of the characteristics and capabilities of available models, thus enabling a discussion of best-practice modelling approaches in T1DM. A systematic review of Embase(®), MEDLINE(®), MEDLINE(®) In-Process, and NHS EED was conducted to identify available models in T1DM. Key conferences and health technology assessment (HTA) websites were also reviewed. The characteristics of each model (e.g. model structure, simulation method, handling of uncertainty, incorporation of treatment effect, data for risk equations, and validation procedures, based on information in the primary publication) were extracted, with a focus on model capabilities. We identified 13 unique models. Overall, the included studies varied greatly in scope as well as in the quality and quantity of information reported, but six of the models (Archimedes, CDM [Core Diabetes Model], CRC DES [Cardiff Research Consortium Discrete Event Simulation], DCCT [Diabetes Control and Complications Trial], Sheffield, and EAGLE [Economic Assessment of Glycaemic control and Long-term Effects of diabetes]) were the most rigorous and thoroughly reported. Most models were Markov based, and cohort and microsimulation methods were equally common. All of the more comprehensive models employed microsimulation methods. Model structure varied widely, with the more holistic models providing a comprehensive approach to microvascular and macrovascular events, as well as including adverse events. The majority of studies reported a lifetime horizon, used a payer perspective, and had the capability for sensitivity analysis. Several models have been developed that provide useful

  8. Systematic Methodology for Reproducible Optimizing Batch Operation

    DEFF Research Database (Denmark)

    Bonné, Dennis; Jørgensen, Sten Bay

    2006-01-01

    This contribution presents a systematic methodology for rapid acquisition of discrete-time state space model representations of batch processes based on their historical operation data. These state space models are parsimoniously parameterized as a set of local, interdependent models. The present...

  9. Associated factors to the maternal perception of child body weight: a systematic review

    Directory of Open Access Journals (Sweden)

    Perla Trejo-Ortíz

    2017-01-01

    Objective: To conduct a systematic review of the literature on maternal perception of child weight and the factors associated with it. Materials and methods: SciELO, PubMed, LILACS and Redalyc were searched for articles published between 2009 and 2016. The final sample comprised twenty-five articles. Results: From 21.8% to 98.2% of mothers underestimate the weight of their child. Underestimation has been associated with the child's body mass index (BMI), sex, age, birth weight and quantity of food ingested, as well as with maternal race, BMI, age, income and education. Furthermore, the perception of child weight has been found to be associated with the presence of childhood obesity, with parents' actions and problems in managing the child's weight, and with dietary control. Conclusions: It is necessary to continue studying maternal perception of the child's weight and to develop intervention proposals aimed at reducing this problem.

  10. Markov modeling and discrete event simulation in health care: a systematic comparison.

    Science.gov (United States)

    Standfield, Lachlan; Comans, Tracy; Scuffham, Paul

    2014-04-01

    The aim of this study was to assess if the use of Markov modeling (MM) or discrete event simulation (DES) for cost-effectiveness analysis (CEA) may alter healthcare resource allocation decisions. A systematic literature search and review of empirical and non-empirical studies comparing MM and DES techniques used in the CEA of healthcare technologies was conducted. Twenty-two pertinent publications were identified. Two publications compared MM and DES models empirically, one presented a conceptual DES and MM, two described a DES consensus guideline, and seventeen drew comparisons between MM and DES through the authors' experience. The primary advantages described for DES over MM were the ability to model queuing for limited resources, capture individual patient histories, accommodate complexity and uncertainty, represent time flexibly, model competing risks, and accommodate multiple events simultaneously. The disadvantages of DES over MM were the potential for model overspecification, increased data requirements, specialized expensive software, and increased model development, validation, and computational time. Where individual patient history is an important driver of future events an individual patient simulation technique like DES may be preferred over MM. Where supply shortages, subsequent queuing, and diversion of patients through other pathways in the healthcare system are likely to be drivers of cost-effectiveness, DES modeling methods may provide decision makers with more accurate information on which to base resource allocation decisions. Where these are not major features of the cost-effectiveness question, MM remains an efficient, easily validated, parsimonious, and accurate method of determining the cost-effectiveness of new healthcare interventions.
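    The contrast is easiest to see against a minimal Markov cohort model, the structure that DES generalizes by tracking individual patients instead of state-occupancy fractions (the states, transition probabilities and utilities below are invented for illustration):

```python
import numpy as np

# Minimal 3-state Markov cohort model (Well, Sick, Dead).
# All numbers are illustrative, not taken from the review.
P = np.array([
    [0.90, 0.08, 0.02],   # transitions from Well
    [0.00, 0.85, 0.15],   # transitions from Sick
    [0.00, 0.00, 1.00],   # Dead is absorbing
])
utility = np.array([1.0, 0.6, 0.0])   # QALY weight per state per cycle

cohort = np.array([1.0, 0.0, 0.0])    # the whole cohort starts Well
total_qalys = 0.0
for _ in range(50):                   # 50 yearly cycles
    total_qalys += cohort @ utility   # accrue quality-adjusted time
    cohort = cohort @ P               # advance the state-occupancy vector

print(round(total_qalys, 2))
```

A DES of the same problem would sample each patient's event times individually, which is what makes patient history and queuing for limited resources representable at the cost of more data and computation.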

  11. Reduction of the nitro group during sample preparation may cause underestimation of the nitration level in 3-nitrotyrosine immunoblotting

    DEFF Research Database (Denmark)

    Söderling, Ann-Sofi; Hultman, Lena; Delbro, Dick

    2007-01-01

    We noted differences in the antibody response to 3-nitrotyrosine (NO(2)Tyr) in fixed and non-fixed tissues, and therefore studied potential problems associated with non-fixed tissues in Western blot analyses. Three different monoclonal anti-nitrotyrosine antibodies in Western blot analysis of inf...... is not detected by anti-NO(2)Tyr antibodies. Western blot analysis may therefore underestimate the level of tissue nitration, and factors causing a reduction of NO(2)Tyr during sample preparation might conceal the actual nitration of proteins....

  12. Quality of systematic reviews in pediatric oncology--a systematic review

    DEFF Research Database (Denmark)

    Lundh, Andreas; Knijnenburg, Sebastiaan L; Jørgensen, Anders W

    2009-01-01

    BACKGROUND: To ensure evidence-based decision making in pediatric oncology systematic reviews are necessary. The objective of our study was to evaluate the methodological quality of all currently existing systematic reviews in pediatric oncology. METHODS: We identified eligible systematic reviews...... through a systematic search of the literature. Data on clinical and methodological characteristics of the included systematic reviews were extracted. The methodological quality of the included systematic reviews was assessed using the overview quality assessment questionnaire, a validated 10-item quality...... assessment tool. We compared the methodological quality of systematic reviews published in regular journals with that of Cochrane systematic reviews. RESULTS: We included 117 systematic reviews, 99 systematic reviews published in regular journals and 18 Cochrane systematic reviews. The average methodological...

  13. A systematic review of breast cancer incidence risk prediction models with meta-analysis of their performance.

    Science.gov (United States)

    Meads, Catherine; Ahmed, Ikhlaaq; Riley, Richard D

    2012-04-01

    A risk prediction model is a statistical tool for estimating the probability that a currently healthy individual with specific risk factors will develop a condition in the future such as breast cancer. Reliably accurate prediction models can inform future disease burdens, health policies and individual decisions. Breast cancer prediction models containing modifiable risk factors, such as alcohol consumption, BMI or weight, condom use, exogenous hormone use and physical activity, are of particular interest to women who might be considering how to reduce their risk of breast cancer and clinicians developing health policies to reduce population incidence rates. We performed a systematic review to identify and evaluate the performance of prediction models for breast cancer that contain modifiable factors. A protocol was developed and a sensitive search in databases including MEDLINE and EMBASE was conducted in June 2010. Extensive use was made of reference lists. Included were any articles proposing or validating a breast cancer prediction model in a general female population, with no language restrictions. Duplicate data extraction and quality assessment were conducted. Results were summarised qualitatively, and where possible meta-analysis of model performance statistics was undertaken. The systematic review found 17 breast cancer models, each containing a different but often overlapping set of modifiable and other risk factors, combined with an estimated baseline risk that was also often different. Quality of reporting was generally poor, with characteristics of included participants and fitted model results often missing. Only four models received independent validation in external data, most notably the 'Gail 2' model with 12 validations. None of the models demonstrated consistently outstanding ability to accurately discriminate between those who did and those who did not develop breast cancer. For example, random-effects meta-analyses of the performance of the

  14. Models and Theories of Health Education and Health Promotion in Physical Activity Interventions for Women: a Systematic Review

    Directory of Open Access Journals (Sweden)

    Seyed Mohammad Mehdi Hazavehei

    2014-09-01

    Introduction: The present study, a systematic review, investigated and analyzed interventions based on models and theories of health education and promotion in the field of physical activity in women. Materials and Methods: Three electronic databases, including Springer, Biomed Central and Science Direct, were searched systematically. Only quantitative, interventional, English-language studies that used at least one model or theory of health education and health promotion were selected. Finally, 13 studies published from 2000 to 2013 met the inclusion criteria and were reviewed. Results: Of the 13 studies reviewed, 10 measured levels of physical activity before and after the intervention; in nine of these, physical activity increased in the intervention group compared to the control group. Studies were conducted in different health promotion settings, including health care centers, community settings and workplaces. The most widely used model was the Transtheoretical Model, applied in eight of the investigations. Conclusion: To increase the efficacy of interventions, a stronger focus on physical activity and on the duration of interventions is suggested. For interventions based on the Transtheoretical Model, measuring changes in physical activity habits in experimental and control groups is suggested, in order to prepare a complementary scale for assessing efficacy. According to the results, no study had focused on changes in institutional policies, on general health, or on environmental changes related to physical activity.

  15. Modeling Systematic Change in Stopover Duration Does Not Improve Bias in Trends Estimated from Migration Counts.

    Directory of Open Access Journals (Sweden)

    Tara L Crewe

    The use of counts of unmarked migrating animals to monitor long term population trends assumes independence of daily counts and a constant rate of detection. However, migratory stopovers often last days or weeks, violating the assumption of count independence. Further, a systematic change in stopover duration will result in a change in the probability of detecting individuals once, but also in the probability of detecting individuals on more than one sampling occasion. We tested how variation in stopover duration influenced accuracy and precision of population trends by simulating migration count data with known constant rate of population change and by allowing daily probability of survival (an index of stopover duration) to remain constant, or to vary randomly, cyclically, or increase linearly over time by various levels. Using simulated datasets with a systematic increase in stopover duration, we also tested whether any resulting bias in population trend could be reduced by modeling the underlying source of variation in detection, or by subsampling data to every three or five days to reduce the incidence of recounting. Mean bias in population trend did not differ significantly from zero when stopover duration remained constant or varied randomly over time, but bias and the detection of false trends increased significantly with a systematic increase in stopover duration. Importantly, an increase in stopover duration over time resulted in a compounding effect on counts due to the increased probability of detection and of recounting on subsequent sampling occasions. Under this scenario, bias in population trend could not be modeled using a covariate for stopover duration alone. 
Rather, to improve inference drawn about long term population change using counts of unmarked migrants, analyses must include a covariate for stopover duration, as well as incorporate sampling modifications (e.g., subsampling to reduce the probability that individuals will
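    The recounting effect can be reproduced in a few lines with a toy model (a deliberate simplification of the paper's simulations) in which each yearly count total is the true population multiplied by the number of stopover days per bird; a lengthening stopover then masks, and can even reverse the sign of, a genuine decline:

```python
import numpy as np

def estimated_trend(stopover_by_year, n0=1000.0, survival=0.98, years=20):
    """Log-linear trend fitted to yearly migration-count totals, where
    each individual is recounted once per day of its stopover (toy model)."""
    counts = [n0 * survival**y * stopover_by_year[y] for y in range(years)]
    return np.polyfit(np.arange(years), np.log(counts), 1)[0]

true_trend = np.log(0.98)                                  # real 2 %/yr decline
constant = estimated_trend([3.0] * 20)                     # constant 3-day stopover
lengthening = estimated_trend(np.linspace(3.0, 6.0, 20))   # stopover doubles

print(round(true_trend, 4), round(constant, 4), round(lengthening, 4))
```

With a constant stopover the fitted trend recovers the true decline exactly; with a systematically lengthening stopover the fitted trend is biased upward, illustrating the false positive trends described above.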

  16. Ecotoxicological potential of the biocides terbutryn, octhilinone and methylisothiazolinone: Underestimated risk from biocidal pathways?

    Science.gov (United States)

    Kresmann, Simon; Arokia, Arokia Hansel Rajan; Koch, Christoph; Sures, Bernd

    2018-06-01

    The use of biocides by industry, agriculture and households has increased throughout the last two decades. Many new applications of known substances have enriched the variety of biocidal pollution sources for the aquatic environment. While agriculture was the major source for a long time, leaching from building facades and the preservation of personal care and cleaning products were identified as new sources in the last few years. With the different usage forms of biocidal products, the complexity of legislative regulation has increased as well. The requirements for risk assessment differ from one law to another, and the potential risk of substances under different regulations might be underestimated. EC50 and predicted no-effect concentration (PNEC) values obtained from testing with different species remain the core of environmental risk assessment, but ecotoxicological data are limited or lacking for many biocides. In this study, terbutryn, octhilinone and methylisothiazolinone, biocides widely used in facade coatings and household products, were tested with the Daphnia magna acute immobilisation assay, the neutral red uptake assay and the ethoxyresorufin-O-deethylase (EROD) assay, performed with rainbow trout liver (RTL-W1) cells. Further, the MTT assay with the Chinese hamster ovarian cell line CHO-9 was used as a mammalian model. Octhilinone induced the strongest effects, with an EC50 of 156 μg/l in the D. magna assay, while terbutryn showed the weakest effects (8390 μg/l) and methylisothiazolinone gave 513 μg/l. All other assays showed higher EC50 values and thus only weak effects. EROD assays did not show any effects. With additional literature and database records, PNEC values were calculated: terbutryn reached 0.003 μg/l, octhilinone 0.05 μg/l and methylisothiazolinone 0.5 μg/l. Potential ecotoxicological risks of these biocides are discussed, considering environmental concentrations. Copyright © 2017 Elsevier B.V. All rights reserved.
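    As a sketch of how PNECs follow from such EC50 data, one conventional screening approach divides the lowest acute EC50 by an assessment factor of 1000 (a REACH-style default for base-set acute data). The resulting values intentionally differ from the paper's PNECs, which fold in additional literature and database records:

```python
# Illustrative screening-level PNEC derivation from acute EC50 data using a
# conventional assessment factor (AF) of 1000 for short-term results.
# EC50s are taken from the abstract; the AF choice is an assumption here.
ec50_ug_per_l = {
    "terbutryn": 8390.0,
    "octhilinone": 156.0,
    "methylisothiazolinone": 513.0,
}
AF = 1000.0

pnec = {substance: ec50 / AF for substance, ec50 in ec50_ug_per_l.items()}
for substance, value in pnec.items():
    print(f"{substance}: PNEC ~ {value:.3f} ug/l")
```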

  17. A systematic review of innovative diabetes care models in low-and middle-income countries (LMICs).

    Science.gov (United States)

    Esterson, Yonah B; Carey, Michelle; Piette, John D; Thomas, Nihal; Hawkins, Meredith

    2014-02-01

    Over 70% of the world's patients with diabetes reside in low-and middle-income countries (LMICs), where adequate infrastructure and resources for diabetes care are often lacking. Therefore, academic institutions, health care organizations, and governments from Western nations and LMICs have worked together to develop a variety of effective diabetes care models for resource-poor settings. A focused search of PubMed was conducted with the goal of identifying reports that addressed the implementation of diabetes care models or initiatives to improve clinical and/or biochemical outcomes in patients with diabetes mellitus. A total of 15 published manuscripts comprising nine diabetes care models in 16 locations in sub-Saharan Africa, Latin America, and Asia identified by the above approach were systematically reviewed. The reviewed models shared a number of principles including collaboration, education, standardization, resource optimization, and technological innovation. The most comprehensive models used a number of these principles, which contributed to their success. Reviewing the principles shared by these successful programs may help guide the development of effective future models for diabetes care in low-income settings.

  18. Decision-model estimation of the age-specific disability weight for schistosomiasis japonica: a systematic review of the literature.

    OpenAIRE

    Julia L Finkelstein; Mark D Schleinitz; Hélène Carabin; Stephen T McGarvey

    2008-01-01

    Schistosomiasis is among the most prevalent parasitic infections worldwide. However, current Global Burden of Disease (GBD) disability-adjusted life year estimates indicate that its population-level impact is negligible. Recent studies suggest that GBD methodologies may significantly underestimate the burden of parasitic diseases, including schistosomiasis. Furthermore, strain-specific disability weights have not been established for schistosomiasis, and the magnitude of human disease burden ...

  19. A systematic investigation of computation models for predicting Adverse Drug Reactions (ADRs).

    Directory of Open Access Journals (Sweden)

    Qifan Kuang

    Early and accurate identification of adverse drug reactions (ADRs) is critically important for drug development and clinical safety. Computer-aided prediction of ADRs has attracted increasing attention in recent years, and many computational models have been proposed. However, because of the lack of systematic analysis and comparison of the different computational models, there remain limitations in designing more effective algorithms and selecting more useful features. There is therefore an urgent need to review and analyze previous computational models to obtain general conclusions that can provide useful guidance for constructing more effective computational models to predict ADRs. In the current study, the main work is to compare and analyze the performance of existing computational methods for predicting ADRs, by implementing and evaluating additional algorithms that were earlier used for predicting drug targets. Our results indicated that topological and intrinsic features were complementary to an extent and that the Jaccard coefficient had an important and general effect on the prediction of drug-ADR associations. By comparing the structure of each algorithm, we found that their final formulas could all be converted to linear form; based on this finding, we propose a new algorithm called the general weighted profile method, which yielded the best overall performance among the algorithms investigated in this paper. Several meaningful conclusions and useful findings regarding the prediction of ADRs are provided to guide the selection of optimal features and algorithms.
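    The role of the Jaccard coefficient can be illustrated with a toy profile-similarity scorer for drug-ADR associations (the drug names, ADR profiles and scoring rule below are invented for illustration; this is not the paper's general weighted profile method):

```python
# Toy Jaccard-based scoring: a candidate drug inherits evidence for an ADR
# from drugs with similar known ADR profiles. All data are made up.
def jaccard(a: set, b: set) -> float:
    """Jaccard coefficient |A ∩ B| / |A ∪ B| of two ADR profiles."""
    return len(a & b) / len(a | b) if a | b else 0.0

known_adrs = {
    "drugA": {"nausea", "headache", "rash"},
    "drugB": {"nausea", "headache"},
    "drugC": {"dizziness"},
}

def score(candidate_profile: set, adr: str) -> float:
    """Similarity-weighted vote of the drugs known to cause `adr`."""
    return sum(
        jaccard(candidate_profile, profile)
        for profile in known_adrs.values()
        if adr in profile
    )

print(score({"nausea", "rash"}, "headache"))
```

Higher scores flag ADRs common among profile-similar drugs, which is the intuition behind similarity-based drug-ADR prediction.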

  20. Dynamic temperature dependence patterns in future energy demand models in the context of climate change

    International Nuclear Information System (INIS)

    Hekkenberg, M.; Moll, H.C.; Uiterkamp, A.J.M. Schoot

    2009-01-01

    Energy demand depends on outdoor temperature in a 'U'-shaped fashion. Various studies have used this temperature dependence to investigate the effects of climate change on energy demand. Such studies contain implicit or explicit assumptions describing expected socio-economic changes that may affect future energy demand. This paper critically analyzes these implicit or explicit assumptions and their possible effect on the studies' outcomes. First, we analyze the interaction between the socio-economic structure and the temperature dependence pattern (TDP) of energy demand. We find that socio-economic changes may alter the TDP in various ways. Next, we investigate how current studies manage these dynamics in socio-economic structure. We find that many studies systematically misrepresent the possible effect of socio-economic changes on the TDP of energy demand. Finally, we assess the consequences of these misrepresentations in an energy demand model based on temperature dependence and climate scenarios. Our model results indicate that expected socio-economic dynamics generally lead to an underestimation of future energy demand in models that misrepresent such dynamics. We conclude that future energy demand models should improve the incorporation of socio-economic dynamics. We propose dynamically modeling several key parameters and using direct meteorological data instead of degree days. (author)
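    The 'U'-shaped temperature dependence is conventionally captured with heating and cooling degree days. A minimal static version (all coefficients assumed for illustration) looks like this; holding the base temperatures and slopes fixed over decades is exactly the kind of assumption the paper argues biases future demand estimates:

```python
# U-shaped daily energy demand: heating demand below one base temperature,
# cooling demand above another. All coefficients are illustrative.
def daily_demand(temp_c, base_heat=15.0, base_cool=22.0,
                 k_heat=2.0, k_cool=3.0, base_load=50.0):
    hdd = max(0.0, base_heat - temp_c)   # heating degrees for the day
    cdd = max(0.0, temp_c - base_cool)   # cooling degrees for the day
    return base_load + k_heat * hdd + k_cool * cdd

# Demand rises at both cold and hot extremes, flat in between.
print([daily_demand(t) for t in (0, 18, 30)])
```

A dynamic model in the paper's spirit would let parameters such as `k_cool` (e.g. growing air-conditioning penetration) change over the scenario horizon rather than stay fixed.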

  1. Population-level impact, herd immunity, and elimination after human papillomavirus vaccination: a systematic review and meta-analysis of predictions from transmission-dynamic models.

    NARCIS (Netherlands)

    Brisson, Marc; Bénard, Élodie; Drolet, Mélanie; Bogaards, Johannes A; Baussano, Iacopo; Vänskä, Simopekka; Jit, Mark; Boily, Marie-Claude; Smith, Megan A; Berkhof, Johannes; Canfell, Karen; Chesson, Harrell W; Burger, Emily A; Choi, Yoon H; De Blasio, Birgitte Freiesleben; De Vlas, Sake J; Guzzetta, Giorgio; Hontelez, Jan A C; Horn, Johannes; Jepsen, Martin R; Kim, Jane J; Lazzarato, Fulvio; Matthijsse, Suzette M; Mikolajczyk, Rafael; Pavelyev, Andrew; Pillsbury, Matthew; Shafer, Leigh Anne; Tully, Stephen P; Turner, Hugo C; Usher, Cara; Walsh, Cathal

    2016-01-01

    Modelling studies have been widely used to inform human papillomavirus (HPV) vaccination policy decisions; however, many models exist and it is not known whether they produce consistent predictions of population-level effectiveness and herd effects. We did a systematic review and meta-analysis of

  2. Composition effects on chemical durability and viscosity of nuclear waste glasses - systematic studies and structural thermodynamic models

    International Nuclear Information System (INIS)

    Feng, X.

    1988-01-01

    Two of the primary criteria for the acceptability of nuclear waste glasses are their durability, i.e. chemical resistance to aqueous attack for 10^4 to 10^5 years, and processability, which requires their viscosity at the desired melt temperature to be sufficiently low. Chapter 3 presents the results of systematic composition variation studies around the preliminary reference glass composition WV205 and an atomistic interpretation of the effects of individual oxides. Chapter 4 is concerned with modifications of the Jantzen-Plodinec hydration model which takes into account formation of complex aluminosilicate compounds in the glass. Chapter 5 is devoted to the development and validation of the structural-thermodynamic model for both durability and viscosity. This model assumes the strength of bonds between atoms to be the controlling factor in the composition dependence of these glass properties. The binding strengths are derived from the known heats of formation and the structural roles of constituent oxides. Since the coordination state of various oxides in the glass is temperature dependent and cation size has opposite effects on the two properties, the correlation between melt viscosity and rate of corrosion at low temperature is not simply linear. Chapter 6 surveys the effects of aqueous phase composition on the leach behavior of glasses. These studies provide a comprehensive view of the effects of both glass composition and leachant composition on leaching. The models developed correlate both durability and viscosity with glass composition. A major implication is that these findings can be used in the systematic optimization of the properties of complex oxide glasses.

  3. Origin of elemental carbon in snow from western Siberia and northwestern European Russia during winter-spring 2014, 2015 and 2016

    Science.gov (United States)

    Evangeliou, Nikolaos; Shevchenko, Vladimir P.; Espen Yttri, Karl; Eckhardt, Sabine; Sollum, Espen; Pokrovsky, Oleg S.; Kobelev, Vasily O.; Korobov, Vladimir B.; Lobanov, Andrey A.; Starodymova, Dina P.; Vorobiev, Sergey N.; Thompson, Rona L.; Stohl, Andreas

    2018-01-01

    Short-lived climate forcers have been proven important both for the climate and human health. In particular, black carbon (BC) is an important climate forcer both as an aerosol and when deposited on snow and ice surfaces because of its strong light absorption. This paper presents measurements of elemental carbon (EC; a measurement-based definition of BC) in snow collected from western Siberia and northwestern European Russia during 2014, 2015 and 2016. The Russian Arctic is of great interest to the scientific community due to the large uncertainty of emission sources there. We have determined the major contributing sources of BC in snow in western Siberia and northwestern European Russia using a Lagrangian atmospheric transport model. For the first time, we use a recently developed feature that calculates deposition in backward (so-called retroplume) simulations, allowing estimation of the specific locations of sources that contribute to the deposited mass. EC concentrations in snow from western Siberia and northwestern European Russia were highly variable depending on the sampling location. Modelled BC and measured EC were moderately correlated (R = 0.53-0.83) and a systematic region-specific model underestimation was found. The model underestimated observations by 42 % (RMSE = 49 ng g^-1) in 2014, 48 % (RMSE = 37 ng g^-1) in 2015 and 27 % (RMSE = 43 ng g^-1) in 2016. For EC sampled in northwestern European Russia the underestimation by the model was smaller (fractional bias, FB > -100 %). In this region, the major sources were transportation activities and domestic combustion in Finland. When sampling shifted to western Siberia, the model underestimation was more significant (FB < -100 %). There, the sources included emissions from gas flaring as a major contributor to snow BC. The accuracy of the model calculations was also evaluated using two independent datasets of BC measurements in snow covering the entire Arctic. The model underestimated BC concentrations in
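    The skill scores quoted above (RMSE and fractional bias) can be reproduced with a short sketch. The fractional-bias formula FB = 2(M − O)/(M + O), expressed in percent, is the conventional definition in model evaluation and is an assumption here, since the abstract does not spell it out; the EC values are invented for illustration.

    ```python
    import numpy as np

    def rmse(model, obs):
        """Root-mean-square error between model and observations."""
        model, obs = np.asarray(model, float), np.asarray(obs, float)
        return np.sqrt(np.mean((model - obs) ** 2))

    def fractional_bias(model, obs):
        """Fractional bias in percent: FB = 2*(mean(M) - mean(O)) / (mean(M) + mean(O)).
        Negative values indicate model underestimation; FB reaches -100 %
        when the model mean is one third of the observed mean."""
        m, o = np.mean(model), np.mean(obs)
        return 200.0 * (m - o) / (m + o)

    # Hypothetical EC-in-snow values (ng/g): model low relative to observations
    obs = np.array([120.0, 80.0, 60.0, 150.0])
    mod = np.array([60.0, 50.0, 30.0, 90.0])
    print(round(rmse(mod, obs), 1), round(fractional_bias(mod, obs), 1))
    ```

    Because FB is bounded at ±200 %, a value below -100 % (as found for western Siberia) already signals severe underestimation rather than a mere scale offset.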

  4. Descriptors used to define running-related musculoskeletal injury: a systematic review.

    Science.gov (United States)

    Yamato, Tiê Parma; Saragiotto, Bruno Tirotti; Hespanhol Junior, Luiz Carlos; Yeung, Simon S; Lopes, Alexandre Dias

    2015-05-01

    Systematic review. To systematically review the descriptors used to define running-related musculoskeletal injury and to analyze the implications of different definitions on the results of studies. Studies have developed their own definitions of running-related musculoskeletal injuries based on different criteria. This may affect the rates of injury, which can be overestimated or underestimated due to the lack of a standard definition. Searches were conducted in the Embase, PubMed, CINAHL, SPORTDiscus, LILACS, and SciELO databases, without limits on date of publication and language. Only articles that reported a definition of running-related injury were included. The definitions were classified according to 3 domains and subcategories: (1) presence of physical complaint (symptom, body system involved, region), (2) interruption of training or competition (primary sports involved, extent of injury, extent of limitation, interruption, period of injury), and (3) need for medical assistance. Spearman rank correlation was performed to evaluate the correlation between the completeness of definitions and the rates of injury reported in the studies. A total of 48 articles were included. Most studies described more than half of the subcategories, but with no standardization between the terms used within each category, showing that there is no consensus for a definition. The injury rates ranged between 3% and 85%, and tended to increase with less specific definitions. The descriptors commonly used by researchers to define a running-related injury vary between studies and may affect the rates of injuries. The lack of a standardized definition hinders comparison between studies and rates of injuries.
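    The Spearman rank correlation used in the review above — between the completeness of a study's injury definition and the injury rate it reported — is just Pearson correlation computed on ranks. A minimal sketch with invented data (the completeness scores and rates below are hypothetical, not values from the review):

    ```python
    import numpy as np

    def spearman(x, y):
        """Spearman rank correlation: Pearson correlation of the rank vectors.
        (Simple double-argsort ranking; assumes no ties.)"""
        rx = np.argsort(np.argsort(x)).astype(float)
        ry = np.argsort(np.argsort(y)).astype(float)
        return np.corrcoef(rx, ry)[0, 1]

    # Hypothetical data: completeness score of each study's definition (0-10)
    # versus the running-injury rate (%) that study reported.
    completeness = np.array([9, 7, 8, 4, 3, 5, 2, 6])
    injury_rate = np.array([12, 20, 15, 45, 60, 38, 85, 30])
    print(round(spearman(completeness, injury_rate), 2))  # → -1.0 in this toy data
    ```

    A strongly negative coefficient would match the review's observation that less specific definitions tend to come with higher reported injury rates.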

  5. Policy-Relevant Systematic Reviews to Strengthen Health Systems: Models and Mechanisms to Support Their Production

    Science.gov (United States)

    Oliver, Sandra; Dickson, Kelly

    2016-01-01

    Support for producing systematic reviews about health systems is less well developed than for those about clinical practice. From interviewing policy makers and systematic reviewers we identified institutional mechanisms which bring systematic reviews and policy priorities closer by harnessing organisational and individual motivations, emphasising…

  6. Global CO2 flux inversions from remote-sensing data with systematic errors using hierarchical statistical models

    Science.gov (United States)

    Zammit-Mangion, Andrew; Stavert, Ann; Rigby, Matthew; Ganesan, Anita; Rayner, Peter; Cressie, Noel

    2017-04-01

    The Orbiting Carbon Observatory-2 (OCO-2) satellite was launched on 2 July 2014, and it has been a source of atmospheric CO2 data since September 2014. The OCO-2 dataset contains a number of variables, but the one of most interest for flux inversion has been the column-averaged dry-air mole fraction (in units of ppm). These global level-2 data offer the possibility of inferring CO2 fluxes at Earth's surface and tracking those fluxes over time. However, as well as having a component of random error, the OCO-2 data have a component of systematic error that is dependent on the instrument's mode, namely land nadir, land glint, and ocean glint. Our statistical approach to CO2-flux inversion starts with constructing a statistical model for the random and systematic errors with parameters that can be estimated from the OCO-2 data and possibly in situ sources from flasks, towers, and the Total Column Carbon Observing Network (TCCON). Dimension reduction of the flux field is achieved through the use of physical basis functions, while temporal evolution of the flux is captured by modelling the basis-function coefficients as a vector autoregressive process. For computational efficiency, flux inversion uses only three months of sensitivities of mole fraction to changes in flux, computed using MOZART; any residual variation is captured through the modelling of a stochastic process that varies smoothly as a function of latitude. The second stage of our statistical approach is to simulate from the posterior distribution of the basis-function coefficients and all unknown parameters given the data using a fully Bayesian Markov chain Monte Carlo (MCMC) algorithm. Estimates and posterior variances of the flux field can then be obtained straightforwardly from this distribution. Our statistical approach is different than others, as it simultaneously makes inference (and quantifies uncertainty) on both the error components' parameters and the CO2 fluxes. 
We compare it to more classical
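    The temporal model described above for the basis-function coefficients — a vector autoregressive process — can be sketched on synthetic data. The dimension, transition matrix and noise scale below are illustrative assumptions, not values from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    k = 3                    # number of flux basis functions (illustrative)
    A = 0.8 * np.eye(k)      # VAR(1) transition matrix: persistent coefficients
    q = 0.1                  # innovation standard deviation

    # Simulate alpha_t = A @ alpha_{t-1} + eps_t over T monthly steps.
    T = 24
    alpha = np.zeros((T, k))
    for t in range(1, T):
        alpha[t] = A @ alpha[t - 1] + rng.normal(0.0, q, size=k)

    # The flux field at time t would then be a weighted sum of spatial basis
    # functions: flux_t(s) = sum_j alpha[t, j] * phi_j(s).
    print(alpha.shape)
    ```

    With eigenvalues of A inside the unit circle the coefficient process is stationary, which is what lets the posterior over fluxes borrow strength across months.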

  7. Towards a Conceptual Framework of Sustainable Business Model Innovation in the Agri-Food Sector: A Systematic Literature Review

    Directory of Open Access Journals (Sweden)

    Henrik Barth

    2017-09-01

    This paper aims to increase our understanding of sustainable business model innovation in the agri-food sector in terms of its theoretical and practical approaches for sustainability and their degree of complexity and maturity. The paper is based on a systematic literature review of 570 journal articles on business models and business model innovation published between 1990 and 2014. Of these articles, only 21 have business model innovation as their main focus. The review shows that research interest in the agri-food sector has increased in these years. The paper proposes a conceptual framework for sustainable business model innovation in the agri-food sector that can be used to meet the challenges encountered in taking a sustainability perspective.

  8. A systematic review of taeniasis, cysticercosis and trichinellosis in Vietnam.

    Science.gov (United States)

    Ng-Nguyen, Dinh; Stevenson, Mark A; Traub, Rebecca J

    2017-03-21

    Taeniasis, cysticercosis and trichinellosis have been ranked as the most important food-borne parasites of humans in terms of public health, socioeconomic and trade impact. Despite this, information on these food-borne zoonoses in Vietnam is scarce and fragmented, and many local reports remain inaccessible to the international research community. This study aims to conduct comprehensive literature searches to report on the incidence and estimate the true prevalence of taeniasis in humans and T. solium cysticercosis in humans and pigs in Vietnam utilizing Bayesian models, and to report the incidence and distribution of trichinellosis. A Bayesian approach was used to estimate the true prevalence of taeniasis and cysticercosis based on published diagnostic test characteristics used in each published cross-sectional survey. The utilization of coproscopic-based examination of Taenia eggs in stool, although highly specific for genus-level detection, has poor sensitivity and led to an underestimation of the prevalence of human taeniasis. Similarly, post-mortem-based surveys of T. solium cysticercosis in pigs also led to the underestimation of prevalence of porcine cysticercosis. On the other hand, the low specificity of immunodiagnostic methods, in particular Ab-ELISA, led to a likely overestimation of T. solium cysticercosis in humans. Due to the use of imperfect diagnostic tests combined with poor descriptions of sampling methods, our ability to draw solid conclusions from these data is limited. We estimate that the true prevalence of taeniasis and T. solium cysticercosis in rural 'hotspots' is as high as 13% for each in humans. Taeniasis and T. solium cysticercosis occur in 60 of the 63 provinces of Vietnam. Most of the information relating to the distribution and prevalence of porcine cysticercosis is limited to commercial abattoir surveys. In Vietnam, Taenia asiatica appears to be confined to the north where it occurs sympatrically with T. solium and
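    A standard way to recover true prevalence from an imperfect test, in the spirit of the Bayesian adjustment described above, is the Rogan-Gladen estimator; the sensitivity, specificity and apparent-prevalence values below are illustrative, not figures from the study.

    ```python
    def rogan_gladen(apparent_prev, sensitivity, specificity):
        """True prevalence from apparent prevalence given test Se and Sp:
        p_true = (p_app + Sp - 1) / (Se + Sp - 1), clipped to [0, 1]."""
        p = (apparent_prev + specificity - 1.0) / (sensitivity + specificity - 1.0)
        return min(max(p, 0.0), 1.0)

    # A highly specific but low-sensitivity coproscopic test makes taeniasis
    # look rarer than it is: a 4% apparent prevalence adjusts upward.
    print(rogan_gladen(0.04, sensitivity=0.35, specificity=0.99))
    ```

    The same correction runs in the other direction for a low-specificity test such as the Ab-ELISA mentioned above, where apparent prevalence overstates the true value.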

  9. An underestimated role of precipitation frequency in regulating summer soil moisture

    International Nuclear Information System (INIS)

    Wu Chaoyang; Chen, Jing M; Pumpanen, Jukka; Cescatti, Alessandro; Marcolla, Barbara; Blanken, Peter D; Ardö, Jonas; Tang, Yanhong; Magliulo, Vincenzo; Georgiadis, Teodoro; Soegaard, Henrik; Cook, David R; Harding, Richard J

    2012-01-01

    Soil-moisture-induced droughts are expected to become more frequent under future global climate change. Precipitation has previously been assumed to be mainly responsible for variability in summer soil moisture. However, little is known about the impacts of precipitation frequency on summer soil moisture, either interannually or spatially. To better understand the temporal and spatial drivers of summer drought, 415 site-years of measurements from 75 flux sites worldwide were used to analyze the temporal and spatial relationships between summer soil water content (SWC) and precipitation frequencies at various temporal scales, i.e., computed from half-hourly, 3, 6, 12 and 24 h measurements. Summer precipitation was found to be an indicator of interannual SWC variability, with r of 0.49 (p < 0.001) for the overall dataset. However, interannual variability in summer SWC was also significantly correlated with the five precipitation frequencies, and the sub-daily precipitation frequencies seemed to explain the interannual SWC variability better than total precipitation. Spatially, all these precipitation frequencies were better indicators of summer SWC than precipitation totals, but these better performances were only observed in non-forest ecosystems. Our results demonstrate that precipitation frequency may play an important role in regulating both interannual and spatial variations of summer SWC, which has probably been overlooked or underestimated. However, the spatial interpretation should carefully consider other factors, such as the plant functional types and soil characteristics of diverse ecoregions. (letter)

  10. Vast underestimation of Madagascar's biodiversity evidenced by an integrative amphibian inventory.

    Science.gov (United States)

    Vieites, David R; Wollenberg, Katharina C; Andreone, Franco; Köhler, Jörn; Glaw, Frank; Vences, Miguel

    2009-05-19

    Amphibians are in decline worldwide. However, their patterns of diversity, especially in the tropics, are not well understood, mainly because of incomplete information on taxonomy and distribution. We assess morphological, bioacoustic, and genetic variation of Madagascar's amphibians, one of the first near-complete taxon samplings from a biodiversity hotspot. Based on DNA sequences of 2,850 specimens sampled from over 170 localities, our analyses reveal an extreme proportion of amphibian diversity, projecting an almost 2-fold increase in species numbers from the currently described 244 species to a minimum of 373 and up to 465. This diversity is widespread geographically and across most major phylogenetic lineages except in a few previously well-studied genera, and is not restricted to morphologically cryptic clades. We classify the genealogical lineages in confirmed and unconfirmed candidate species or deeply divergent conspecific lineages based on concordance of genetic divergences with other characters. This integrative approach may be widely applicable to improve estimates of organismal diversity. Our results suggest that in Madagascar the spatial pattern of amphibian richness and endemism must be revisited, and current habitat destruction may be affecting more species than previously thought, in amphibians as well as in other animal groups. This case study suggests that worldwide tropical amphibian diversity is probably underestimated at an unprecedented level and stresses the need for integrated taxonomic surveys as a basis for prioritizing conservation efforts within biodiversity hotspots.

  11. A systematic comparison of recurrent event models for application to composite endpoints.

    Science.gov (United States)

    Ozga, Ann-Kathrin; Kieser, Meinhard; Rauch, Geraldine

    2018-01-04

    Many clinical trials focus on the comparison of the treatment effect between two or more groups concerning a rarely occurring event. In this situation, showing a relevant effect with an acceptable power requires the observation of a large number of patients over a long period of time. For feasibility reasons, it is therefore often considered to include several event types of interest, non-fatal or fatal, and to combine them within a composite endpoint. Commonly, a composite endpoint is analyzed with standard survival analysis techniques by assessing the time to the first occurring event. This approach neglects that an individual may experience more than one event, which leads to a loss of information. As an alternative, composite endpoints could be analyzed by models for recurrent events. There exist a number of such models, e.g. regression models based on count data, or Cox-based models such as the approaches of Andersen and Gill; Prentice, Williams and Peterson; or Wei, Lin and Weissfeld. Although some of the methods have already been compared in the literature, there exists no systematic investigation of the special requirements of composite endpoints. Within this work, a simulation-based comparison of recurrent event models applied to composite endpoints is provided for different realistic clinical trial scenarios. We demonstrate that the Andersen-Gill model and the Prentice-Williams-Peterson models show similar results under various data scenarios, whereas the Wei-Lin-Weissfeld model delivers effect estimators that can deviate considerably under commonly met data scenarios. Based on the conducted simulation study, this paper helps to understand the pros and cons of the investigated methods in the context of composite endpoints and therefore provides recommendations for an adequate statistical analysis strategy and a meaningful interpretation of results.
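    One of the model families named above — regression on event counts — can be illustrated with a minimal two-group rate-ratio estimate on simulated recurrent events. The rates, sample size and seed are invented for the sketch, not taken from the paper's simulation study:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Simulate recurrent-event counts over equal follow-up: each patient's
    # number of (non-fatal) events is Poisson with a group-specific rate.
    n = 500
    control = rng.poisson(lam=2.0, size=n)   # ~2 events per patient
    treated = rng.poisson(lam=1.5, size=n)   # treatment lowers the event rate

    # With equal follow-up, the Poisson-regression treatment effect reduces
    # to the log ratio of mean counts — using ALL events, not just the first.
    log_rate_ratio = np.log(treated.mean() / control.mean())
    print(round(float(np.exp(log_rate_ratio)), 2))  # estimated rate ratio (true value 0.75)
    ```

    A time-to-first-event analysis of the same data would discard every event after the first, which is exactly the information loss the abstract describes.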

  12. TAxonomy of Self-reported Sedentary behaviour Tools (TASST) framework for development, comparison and evaluation of self-report tools: content analysis and systematic review.

    Science.gov (United States)

    Dall, P M; Coulter, E H; Fitzsimons, C F; Skelton, D A; Chastin, Sfm

    2017-04-08

    Sedentary behaviour (SB) has distinct deleterious health outcomes, yet there is no consensus on best practice for measurement. This study aimed to identify the optimal self-report tool for population surveillance of SB, using a systematic framework. A framework, TAxonomy of Self-reported Sedentary behaviour Tools (TASST), consisting of four domains (type of assessment, recall period, temporal unit and assessment period), was developed based on a systematic inventory of existing tools. The inventory was achieved through a systematic review of studies reporting SB and tracing back to the original description. A systematic review of the accuracy and sensitivity to change of these tools was then mapped against TASST domains. Systematic searches were conducted via EBSCO, reference lists and expert opinion. The inventory included tools measuring SB in adults that could be self-completed at one sitting, and excluded tools measuring SB in specific populations or contexts. The systematic review included studies reporting on the accuracy against an objective measure of SB and/or sensitivity to change of a tool in the inventory. The systematic review initially identified 32 distinct tools (141 questions), which were used to develop the TASST framework. Twenty-two studies evaluated accuracy and/or sensitivity to change, representing only eight taxa. Assessing SB as a sum of behaviours and using a previous-day recall were the most promising features of existing tools. Accuracy was poor for all existing tools, with underestimation and overestimation of SB. There was a lack of evidence about sensitivity to change. Despite the limited evidence, mapping existing SB tools onto the TASST framework has enabled informed recommendations to be made about the most promising features for a surveillance tool and identified aspects on which future research and development of SB surveillance tools should focus. International prospective register of systematic reviews (PROSPERO)/CRD42014009851

  13. Modelling the fate of persistent organic pollutants in Europe: parameterisation of a gridded distribution model

    International Nuclear Information System (INIS)

    Prevedouros, Konstantinos; MacLeod, Matthew; Jones, Kevin C.; Sweetman, Andrew J.

    2004-01-01

    A regionally segmented multimedia fate model for the European continent is described together with an illustrative steady-state case study examining the fate of γ-HCH (lindane) based on 1998 emission data. The study builds on the regionally segmented BETR North America model structure and describes the regional segmentation and parameterisation for Europe. The European continent is described by a 5 deg. x 5 deg. grid, leading to 50 regions together with four perimetric boxes representing regions buffering the European environment. Each zone comprises seven compartments including; upper and lower atmosphere, soil, vegetation, fresh water and sediment and coastal water. Inter-regions flows of air and water are described, exploiting information originating from GIS databases and other georeferenced data. The model is primarily designed to describe the fate of Persistent Organic Pollutants (POPs) within the European environment by examining chemical partitioning and degradation in each region, and inter-region transport either under steady-state conditions or fully dynamically. A test case scenario is presented which examines the fate of estimated spatially resolved atmospheric emissions of lindane throughout Europe within the lower atmosphere and surface soil compartments. In accordance with the predominant wind direction in Europe, the model predicts high concentrations close to the major sources as well as towards Central and Northeast regions. Elevated soil concentrations in Scandinavian soils provide further evidence of the potential of increased scavenging by forests and subsequent accumulation by organic-rich terrestrial surfaces. Initial model predictions have revealed a factor of 5-10 underestimation of lindane concentrations in the atmosphere. This is explained by an underestimation of source strength and/or an underestimation of European background levels. 
The model presented can further be used to predict deposition fluxes and chemical inventories, and it

  14. A systematic review and qualitative analysis to inform the development of a new emergency department-based geriatric case management model.

    Science.gov (United States)

    Sinha, Samir K; Bessman, Edward S; Flomenbaum, Neal; Leff, Bruce

    2011-06-01

    We inform the future development of a new geriatric emergency management practice model. We perform a systematic review of the existing evidence for emergency department (ED)-based case management models designed to improve the health, social, and health service utilization outcomes for noninstitutionalized older patients within the context of an index ED visit. This was a systematic review of English-language articles indexed in MEDLINE and CINAHL (1966 to 2010), describing ED-based case management models for older adults. Bibliographies of the retrieved articles were reviewed to identify additional references. A systematic qualitative case study analytic approach was used to identify the core operational components and outcome measures of the described clinical interventions. The authors of the included studies were also invited to verify our interpretations of their work. The determined patterns of component adherence were then used to postulate the relative importance and effect of the presence or absence of a particular component in influencing the overall effectiveness of their respective interventions. Eighteen of 352 studies (reported in 20 articles) met study criteria. Qualitative analyses identified 28 outcome measures and 8 distinct model characteristic components that included having an evidence-based practice model, nursing clinical involvement or leadership, high-risk screening processes, focused geriatric assessments, the initiation of care and disposition planning in the ED, interprofessional and capacity-building work practices, post-ED discharge follow-up with patients, and evaluation and monitoring processes. Of the 15 positive study results, 6 had all 8 characteristic components and 9 were found to be lacking at least 1 component. Two studies with positive results lacked 2 characteristic components and none lacked more than 2 components. Of the 3 studies with negative results demonstrating no positive effects based on any outcome tested, one

  15. Computational Modeling of Cultural Dimensions in Adversary Organizations

    Science.gov (United States)

    2010-01-01

    [Fragment: the record text is extraction residue from the report itself — Chapter 5, "Adversary Modeling Applications"; Section 5.1, "Modeling Uncertainty in Adversary Behavior: Attacks in..."; plus scenario-table rows on coalition force deployment to Indonesia and a unilateral Thai NEO.]

  16. Intercomparison between CMIP5 model and MODIS satellite-retrieved data of aerosol optical depth, cloud fraction, and cloud-aerosol interactions

    Science.gov (United States)

    Sockol, Alyssa; Small Griswold, Jennifer D.

    2017-08-01

    Aerosols are a critical component of the Earth's atmosphere and can affect the climate of the Earth through their interactions with solar radiation and clouds. Cloud fraction (CF) and aerosol optical depth (AOD) at 550 nm from the Moderate Resolution Imaging Spectroradiometer (MODIS) are used with analogous cloud and aerosol properties from Historical Phase 5 of the Coupled Model Intercomparison Project (CMIP5) model runs that explicitly include anthropogenic aerosols and parameterized cloud-aerosol interactions. The models underestimate AOD by approximately 15% and underestimate CF by approximately 10% overall on a global scale. A regional analysis is then used to evaluate model performance in two regions with known biomass burning activity and absorbing aerosol (South America (SAM) and South Africa (SAF)). In SAM, the models overestimate AOD by 4.8% and underestimate CF by 14%. In SAF, the models underestimate AOD by 35% and overestimate CF by 13.4%. Average annual cycles show that the monthly timing of AOD peaks closely match satellite data in both SAM and SAF for all except the Community Atmosphere Model 5 and Geophysical Fluid Dynamics Laboratory (GFDL) models. Monthly timing of CF peaks closely match for all models (except GFDL) for SAM and SAF. Sorting monthly averaged 2° × 2.5° model or MODIS CF as a function of AOD does not result in the previously observed "boomerang"-shaped CF versus AOD relationship characteristic of regions with absorbing aerosols from biomass burning. Cloud-aerosol interactions, as observed using daily (or higher) temporal resolution data, are not reproducible at the spatial or temporal resolution provided by the CMIP5 models.

  17. Are we under-estimating the association between autism symptoms?: The importance of considering simultaneous selection when using samples of individuals who meet diagnostic criteria for an autism spectrum disorder.

    Science.gov (United States)

    Murray, Aja Louise; McKenzie, Karen; Kuenssberg, Renate; O'Donnell, Michael

    2014-11-01

    The magnitude of symptom inter-correlations in diagnosed individuals has contributed to the evidence that autism spectrum disorders (ASD) is a fractionable disorder. Such correlations may substantially under-estimate the population correlations among symptoms due to simultaneous selection on the areas of deficit required for diagnosis. Using statistical simulations of this selection mechanism, we provide estimates of the extent of this bias, given different levels of population correlation between symptoms. We then use real data to compare domain inter-correlations in the Autism Spectrum Quotient, in those with ASD versus a combined ASD and non-ASD sample. Results from both studies indicate that samples restricted to individuals with a diagnosis of ASD potentially substantially under-estimate the magnitude of association between features of ASD.
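    The selection mechanism the authors simulate can be reproduced in miniature: draw two correlated symptom dimensions, keep only "diagnosed" cases exceeding a cut-off on both, and the within-sample correlation drops well below the population value. The population correlation, cut-off and sample size are all illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    rho = 0.6                                # population symptom correlation
    cov = [[1.0, rho], [rho, 1.0]]
    x = rng.multivariate_normal([0.0, 0.0], cov, size=200_000)

    # "Diagnosis" requires a clinically significant deficit on BOTH dimensions
    diagnosed = x[(x[:, 0] > 1.0) & (x[:, 1] > 1.0)]

    r_population = np.corrcoef(x[:, 0], x[:, 1])[0, 1]
    r_selected = np.corrcoef(diagnosed[:, 0], diagnosed[:, 1])[0, 1]
    print(round(r_population, 2), round(r_selected, 2))
    ```

    The attenuation in the selected subsample is exactly why symptom inter-correlations estimated only from diagnosed individuals can understate the population association, as the abstract argues.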

  18. Systematic discrepancies in Monte Carlo predictions of k-ratios emitted from thin films on substrates

    International Nuclear Information System (INIS)

    Statham, P; Llovet, X; Duncumb, P

    2012-01-01

    We have assessed the reliability of different Monte Carlo simulation programmes using the two available Bastin-Heijligers databases of thin-film measurements by EPMA. The MC simulation programmes tested include Curgenven-Duncumb MSMC, NISTMonte, Casino and PENELOPE. Plots of the ratio of calculated to measured k-ratios ('k_calc/k_meas') against various parameters reveal error trends that are not apparent in simple error histograms. The results indicate that the MC programmes perform quite differently on the same dataset. However, they appear to show a similar pronounced trend with a 'hockey stick' shape in the 'k_calc/k_meas versus k_meas' plots. The most sophisticated programme, PENELOPE, gives the closest correspondence with experiment but still shows a tendency to underestimate experimental k-ratios by 10% for films that are thin compared to the electron range. We have investigated potential causes for this systematic behaviour and extended the study to data not collected by Bastin and Heijligers.

  19. THE TEXTBOOK AS A PRODUCT OF SCHOOL GEOGRAPHY: underestimated work?

    Directory of Open Access Journals (Sweden)

    José Eustáquio de Sene

    2014-01-01

    This article addresses the textbook as a specific cultural production of school disciplines, taking as its reference the theoretical debate that opposed the conceptions of "didactic transposition" (CHEVALLARD, 1997) and "school culture" (CHERVEL, 1990). Based on this debate, characteristic of the curriculum field, the article aims to understand why, historically, the textbook has been underestimated and even considered a "less important work" within the limits of the academy (BITTENCOURT, 2004). The examples used are always drawn from the Geography discipline — both school and academic, as well as the relations between these two fields — bearing in mind its "multiplicity of paradigms" (LESTEGÁS, 2002). The analysis also takes into account the historic process of institutionalization of academic Geography based on "Layton's stages" (GOODSON, 2005).

  20. The selection, optimization, and compensation model in the work context : A systematic review and meta-analysis of two decades of research

    NARCIS (Netherlands)

    Moghimi, Darya; Zacher, Hannes; Scheibe, Susanne; Van Yperen, Nico W.

    Over the past two decades, the selection, optimization, and compensation (SOC) model has been applied in the work context to investigate antecedents and outcomes of employees’ use of action regulation strategies. We systematically review, meta-analyze, and critically discuss the literature on SOC

  1. Systematic review of statistically-derived models of immunological response in HIV-infected adults on antiretroviral therapy in Sub-Saharan Africa.

    Science.gov (United States)

    Sempa, Joseph B; Ujeneza, Eva L; Nieuwoudt, Martin

    2017-01-01

    In Sub-Saharan African (SSA) resource-limited settings, Cluster of Differentiation 4 (CD4) counts continue to be used for clinical decision making in antiretroviral therapy (ART). Here, HIV-infected people often remain with low CD4 counts, so continued immunological monitoring is necessary. Due to varying statistical modeling methods, comparing immune response to ART across different cohorts is difficult. We systematically review such models and detail the similarities, differences and problems. 'Preferred Reporting Items for Systematic Review and Meta-Analyses' guidelines were used. Only studies of immune response after ART initiation in adults from SSA were included. Data was extracted from each study and tabulated. Outcomes were categorized into 3 groups: 'slope', 'survival', and 'asymptote' models. Wordclouds were drawn wherein the frequency of variables occurring in the reviewed models is indicated by their size and color. 69 covariates were identified in the final models of 35 studies. Effect sizes of covariates were not directly quantitatively comparable in view of the combination of differing variables and scale transformation methods across models. Wordclouds enabled the identification of qualitative and semi-quantitative covariate sets for each outcome category. Comparison across categories identified sex, baseline age, baseline log viral load, baseline CD4, ART initiation regimen and ART duration as a minimal consensus set. Most models were different with respect to covariates included, variable transformations and scales, model assumptions, modelling strategies and reporting methods, even for the same outcomes. To enable comparison across cohorts, statistical models would benefit from the application of more uniform modelling techniques. Historic efforts have produced results that are anecdotal to individual cohorts only. This study was able to define 'prior' knowledge in the Bayesian sense. Such information has value for prospective modelling efforts.
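    The wordcloud construction described above boils down to tallying how often each covariate appears across the reviewed models. A minimal version of that tally, with invented covariate lists standing in for the extracted data:

    ```python
    from collections import Counter

    # Hypothetical covariate sets extracted from three reviewed models
    models = [
        ["sex", "baseline_age", "baseline_cd4", "art_duration"],
        ["sex", "baseline_cd4", "baseline_log_viral_load", "art_regimen"],
        ["baseline_age", "baseline_cd4", "art_duration", "sex"],
    ]

    # The frequency of each covariate across models drives its wordcloud size
    freq = Counter(cov for model in models for cov in model)
    print(freq.most_common(2))  # → [('sex', 3), ('baseline_cd4', 3)]
    ```

    Covariates that appear in (nearly) every model, like the consensus set the review identifies, would dominate the resulting wordcloud.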

  2. Gastroesophageal reflux disease vs. Panayiotopoulos syndrome: an underestimated misdiagnosis in pediatric age?

    Science.gov (United States)

    Parisi, Pasquale; Pacchiarotti, Claudia; Ferretti, Alessandro; Bianchi, Simona; Paolino, Maria Chiara; Barreto, Mario; Principessa, Luigi; Villa, Maria Pia

    2014-12-01

    Autonomic signs and symptoms may be of epileptic or nonepileptic origin, and the differential diagnosis depends on a number of factors, including the nature of the autonomic manifestations themselves, the occurrence of other nonictal autonomic signs/symptoms, and the age of the patient. Here, we describe twelve children (aged from ten months to six years at symptom onset) with Panayiotopoulos syndrome misdiagnosed as gastroesophageal reflux disease. Gastroesophageal reflux disease and Panayiotopoulos syndrome may represent an underestimated diagnostic challenge. When the signs/symptoms occur mainly during sleep, a sleep EEG or, if available, a polysomnographic evaluation may be the most useful investigation for differentiating autonomic epileptic from nonepileptic disorders. Early detection can reduce both the high morbidity related to mismanagement and the high costs to the national health service related to incorrect diagnostic and therapeutic approaches. To decide whether antiseizure therapy is required, one should take into account both the frequency and severity of epileptic seizures and the tendency to have potentially lethal autonomic cardiorespiratory involvement. In conclusion, we emphasize the need for a differential diagnosis between gastroesophageal reflux disease and Panayiotopoulos syndrome in patients with an "unusual" late-onset picture of GERD and acid therapy-resistant gastroesophageal reflux, especially if associated with other autonomic signs and symptoms. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. A permutation test to analyse systematic bias and random measurement errors of medical devices via boosting location and scale models.

    Science.gov (United States)

    Mayr, Andreas; Schmid, Matthias; Pfahlberg, Annette; Uter, Wolfgang; Gefeller, Olaf

    2017-06-01

    Measurement errors of medico-technical devices can be separated into systematic bias and random error. We propose a new method to address both simultaneously via generalized additive models for location, scale and shape (GAMLSS) in combination with permutation tests. More precisely, we extend a recently proposed boosting algorithm for GAMLSS to provide a test procedure to analyse potential device effects on the measurements. We carried out a large-scale simulation study to provide empirical evidence that our method is able to identify possible sources of systematic bias as well as random error under different conditions. Finally, we apply our approach to compare measurements of skin pigmentation from two different devices in an epidemiological study.
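    The location/scale decomposition this record describes can be mimicked, far more crudely than with boosted GAMLSS, by two plain permutation tests: one on a location statistic (systematic bias) and one on a scale statistic (random error). The device readings below are simulated, purely illustrative data, not measurements from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def perm_pvalue(x, y, stat, n_perm=2000, rng=rng):
    """Shuffle group labels to build the null distribution of `stat`."""
    observed = stat(x, y)
    pooled = np.concatenate([x, y])
    n = len(x)
    hits = sum(
        abs(stat(*np.split(rng.permutation(pooled), [n]))) >= abs(observed)
        for _ in range(n_perm)
    )
    return observed, (hits + 1) / (n_perm + 1)

# Hypothetical readings: device B has both a systematic offset (location)
# and a larger random error (scale) than device A.
a = rng.normal(50.0, 2.0, size=80)
b = rng.normal(53.0, 4.0, size=80)

# Location statistic -> systematic bias
bias, p_bias = perm_pvalue(a, b, lambda x, y: x.mean() - y.mean())

# Scale statistic on mean-centred values -> random measurement error
ca, cb = a - a.mean(), b - b.mean()
lsr, p_scale = perm_pvalue(ca, cb, lambda x, y: np.log(x.std() / y.std()))
```

    Centring each group before the scale test keeps the location difference from contaminating the permutation null of the spread statistic, a simpler analogue of modelling location and scale simultaneously.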

  4. Prevalence of sexual abuse among children with conduct disorder: a systematic review.

    Science.gov (United States)

    Maniglio, Roberto

    2014-09-01

    Many clinicians and researchers have speculated that child sexual abuse and conduct disorder co-occur frequently, yet no systematic reviews of the literature have specifically addressed both conditions. To estimate the prevalence of sexual abuse among children with conduct disorder, the pertinent literature was systematically reviewed. Ten databases were searched, supplemented with a hand search of reference lists from retrieved papers. Blind assessments of study eligibility and quality were conducted by two independent researchers. Disagreements were resolved by consensus. Twenty-three studies that met minimum quality criteria sufficient to ensure objectivity and not invalidate results, including 7,256 participants with either conduct disorder or child sexual abuse, were examined. The prevalence of child sexual abuse among participants with conduct disorder was 27%; however, this figure might be underestimated due to selection, sampling, and recall biases; poor assessment methods; and narrow definitions of abuse in the included studies. Participants with conduct disorder, compared with healthy individuals, reported higher rates of child sexual abuse. However, compared with other psychiatric populations, they reported similar or lower rates. There was also some evidence suggesting that children with conduct disorder might be more likely to report child physical abuse. Female participants with conduct disorder were significantly more likely than males to report child sexual abuse. Youths with conduct disorder are at risk of being (or having been) sexually abused, although this risk seems to be neither more specific to nor stronger for these individuals than for people with other psychiatric disorders.

  5. Stress testing hydrologic models using bottom-up climate change assessment

    Science.gov (United States)

    Stephens, C.; Johnson, F.; Marshall, L. A.

    2017-12-01

    Bottom-up climate change assessment is a promising approach for understanding the vulnerability of a system to potential future changes. The technique has been utilised successfully in risk-based assessments of future flood severity and infrastructure vulnerability. We find that it is also an ideal tool for assessing hydrologic model performance in a changing climate. In this study, we applied bottom-up climate change assessment to compare the performance of two different hydrologic models (an event-based and a continuous model) under increasingly severe climate change scenarios. This allowed us to diagnose likely sources of future prediction error in the two models. The climate change scenarios were based on projections for southern Australia, which indicate drier average conditions with increased extreme rainfall intensities. We found that the key weakness in using the event-based model to simulate drier future scenarios was the model's inability to dynamically account for changing antecedent conditions. This led to increased variability in model performance relative to the continuous model, which automatically accounts for the wetness of a catchment through dynamic simulation of water storages. When considering more intense future rainfall events, representation of antecedent conditions became less important than assumptions around (non)linearity in catchment response. The linear continuous model we applied may underestimate flood risk in a future climate with greater extreme rainfall intensity. In contrast with the recommendations of previous studies, this indicates that continuous simulation is not necessarily the key to robust flood modelling under climate change. By applying bottom-up climate change assessment, we were able to understand systematic changes in relative model performance under changing conditions and deduce likely sources of prediction error in the two models.

  6. An evaluation of a model for the systematic documentation of hospital based health promotion activities: results from a multicentre study

    Directory of Open Access Journals (Sweden)

    Morris Denise

    2007-09-01

    Full Text Available Abstract Background The first step in handling health promotion (HP) in Diagnosis Related Groups (DRGs) is systematic documentation and registration of the activities in the medical records. So far, the possibility and tradition for systematic registration of clinical HP activities in the medical records and in patient administrative systems have been sparse. Therefore, the activities are mostly invisible in the registers of hospital services as well as in budgets and balances. A simple model has been described to structure the registration of the HP procedures performed by the clinical staff. The model consists of two parts; the first part includes motivational counselling (7 codes) and the second part comprehends intervention, rehabilitation and after treatment (8 codes). The objective was to evaluate, in an international study, the usefulness, applicability and sufficiency of a simple model for the systematic registration of clinical HP procedures in daily practice. Methods The multicentre project was carried out in 19 departments/hospitals in 6 countries in a clinical setup. The study consisted of three parts in accordance with the objectives. A: Individual test. 20 consecutive medical records from each participating department/hospital were coded by the coding specialists at the local department/hospital exclusively (n = 5,529 of 5,700 possible tests in total). B: Common test. 14 standardized medical records were coded by all the specialists from 17 departments/hospitals, who returned 3,046 of 3,570 tests. C: Specialist evaluation. The specialists from the 19 departments/hospitals evaluated whether the codes were useful, applicable and sufficient for registration in their own department/hospital (239 of 285). Results A: In 97 to 100% of the local patient pathways the specialists were able to evaluate whether there was documentation of HP activities in the medical record to be coded. 
B: Inter-rater reliability in the use of the codes was 93% (57 to 100%) and 71% (31

  7. Extensive and systematic rewiring of histone post-translational modifications in cancer model systems.

    Science.gov (United States)

    Noberini, Roberta; Osti, Daniela; Miccolo, Claudia; Richichi, Cristina; Lupia, Michela; Corleone, Giacomo; Hong, Sung-Pil; Colombo, Piergiuseppe; Pollo, Bianca; Fornasari, Lorenzo; Pruneri, Giancarlo; Magnani, Luca; Cavallaro, Ugo; Chiocca, Susanna; Minucci, Saverio; Pelicci, Giuliana; Bonaldi, Tiziana

    2018-05-04

    Histone post-translational modifications (PTMs) generate a complex combinatorial code that regulates gene expression and nuclear functions, and whose deregulation has been documented in different types of cancers. Therefore, the availability of relevant culture models that can be manipulated and that retain the epigenetic features of the tissue of origin is absolutely crucial for studying the epigenetic mechanisms underlying cancer and testing epigenetic drugs. In this study, we took advantage of quantitative mass spectrometry to comprehensively profile histone PTMs in patient tumor tissues, primary cultures and cell lines from three representative tumor models, breast cancer, glioblastoma and ovarian cancer, revealing an extensive and systematic rewiring of histone marks in cell culture conditions, which includes a decrease of H3K27me2/me3, H3K79me1/me2 and H3K9ac/K14ac, and an increase of H3K36me1/me2. While some changes occur in short-term primary cultures, most of them are instead time-dependent and appear only in long-term cultures. Remarkably, such changes mostly revert in cell line- and primary cell-derived in vivo xenograft models. Taken together, these results support the use of xenografts as the most representative models of in vivo epigenetic processes, suggesting caution when using cultured cells, in particular cell lines and long-term primary cultures, for epigenetic investigations.

  8. Study on the systematic approach of Markov modeling for dependability analysis of complex fault-tolerant features with voting logics

    International Nuclear Information System (INIS)

    Son, Kwang Seop; Kim, Dong Hoon; Kim, Chang Hwoi; Kang, Hyun Gook

    2016-01-01

    The Markov analysis is a technique for modeling system state transitions and calculating the probability of reaching various system states. While it is a proper tool for modeling complex system designs involving timing, sequencing, repair, redundancy, and fault tolerance, as the complexity or size of the system increases, so does the number of states of interest, leading to difficulty in constructing and solving the Markov model. This paper introduces a systematic approach of Markov modeling to analyze the dependability of a complex fault-tolerant system. This method is based on the decomposition of the system into independent subsystem sets, and the system-level failure rate and the unavailability rate for the decomposed subsystems. A Markov model for the target system is easily constructed using the system-level failure and unavailability rates for the subsystems, which can be treated separately. This approach can decrease the number of states to consider simultaneously in the target system by building Markov models of the independent subsystems stage by stage, and results in an exact solution for the Markov model of the whole target system. To apply this method we construct a Markov model for the reactor protection system found in nuclear power plants, a system configured with four identical channels and various fault-tolerant architectures. The results show that the proposed method in this study treats the complex architecture of the system in an efficient manner using the merits of the Markov model, such as a time dependent analysis and a sequential process analysis. - Highlights: • Systematic approach of Markov modeling for system dependability analysis is proposed based on the independent subsystem set, its failure rate and unavailability rate. • As an application example, we construct the Markov model for the digital reactor protection system configured with four identical and independent channels, and various fault-tolerant architectures. • The
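    The decompose-then-combine idea can be sketched numerically: solve the steady state of a two-state (up/down) Markov model for one repairable channel, then combine four independent channels through a voting logic. The failure/repair rates and the 2-out-of-4 logic below are illustrative assumptions, not figures from the paper.

```python
from math import comb

# Hypothetical per-channel rates (per hour): failure rate lam, repair rate mu.
lam, mu = 1e-4, 1e-2

# Steady state of the two-state Markov chain {up, down}:
# balance equation A*lam = U*mu with A + U = 1.
U = lam / (lam + mu)   # channel unavailability
A = 1 - U              # channel availability

def koon_unavailability(k, n, u):
    """Unavailability of a k-out-of-n system of independent,
    identical channels, each unavailable with probability u."""
    a = 1 - u
    # The system fails when fewer than k channels are available.
    return sum(comb(n, i) * a**i * u**(n - i) for i in range(k))

U_sys = koon_unavailability(2, 4, U)
print(f"channel U = {U:.3e}, 2-out-of-4 system U = {U_sys:.3e}")
```

    Solving each channel's small Markov model separately and combining the results through the voting logic avoids enumerating every joint state of the four channels, which is the state-reduction benefit the paper describes.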

  9. Underestimation of glucose turnover measured with [6-3H]- and [6,6-2H2]- but not [6-14C]glucose during hyperinsulinemia in humans

    International Nuclear Information System (INIS)

    McMahon, M.M.; Schwenk, W.F.; Haymond, M.W.; Rizza, R.A.

    1989-01-01

    Recent studies indicate that hydrogen-labeled glucose tracers underestimate glucose turnover in humans under conditions of high flux. The cause of this underestimation is unknown. To determine whether the error is time-, pool-, model-, or insulin-dependent, glucose turnover was measured simultaneously with [6-3H]-, [6,6-2H2]-, and [6-14C]glucose during a 7-h infusion of either insulin (1 mU.kg-1.min-1) or saline. During the insulin infusion, steady-state glucose turnover measured with both [6-3H]glucose (8.0 +/- 0.5 mg.kg-1.min-1) and [6,6-2H2]glucose (7.6 +/- 0.5 mg.kg-1.min-1) was lower (P less than .01) than either the glucose infusion rate required to maintain euglycemia (9.8 +/- 0.7 mg.kg-1.min-1) or glucose turnover determined with [6-14C]glucose and corrected for Cori cycle activity (9.8 +/- 0.7 mg.kg-1.min-1). Consequently negative glucose production rates (P less than .01) were obtained with either [6-3H]- or [6,6-2H2]- but not [6-14C]glucose. The difference between turnover estimated with [6-3H]glucose and actual glucose disposal (or 14C glucose flux) did not decrease with time and was not dependent on duration of isotope infusion. During saline infusion, estimates of glucose turnover were similar regardless of the glucose tracer used. High-performance liquid chromatography of the radioactive glucose tracer and plasma revealed the presence of a tritiated nonglucose contaminant. Although the contaminant represented only 1.5% of the radioactivity in the [6-3H]glucose infusate, its clearance was 10-fold less (P less than .001) than that of [6-3H]glucose. This resulted in accumulation in plasma, with the contaminant accounting for 16.6 +/- 2.09 and 10.8 +/- 0.9% of what customarily is assumed to be plasma glucose radioactivity during the insulin or saline infusion, respectively (P less than .01)
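    The size of the reported bias is consistent with a back-of-envelope steady-state isotope-dilution calculation: if a contaminant contributes roughly 16.6% of what is counted as plasma glucose radioactivity, specific activity is overestimated and turnover (Ra = F/SA) is underestimated by the same fraction. The infusion rate below is an arbitrary placeholder; the 9.8 mg/kg/min disposal and 16.6% contamination figures come from the abstract.

```python
# Steady-state isotope dilution: turnover Ra = F / SA, where F is the
# tracer infusion rate and SA the plasma specific activity (dpm per mg).
F = 1.0e5                 # tracer infusion rate, dpm/kg/min (placeholder)
true_Ra = 9.8             # mg/kg/min, actual glucose disposal (abstract)
contaminant_frac = 0.166  # share of counted radioactivity from contaminant

SA_true = F / true_Ra
# The contaminant inflates the radioactivity attributed to glucose:
SA_measured = SA_true / (1 - contaminant_frac)

Ra_apparent = F / SA_measured   # = true_Ra * (1 - contaminant_frac)
print(f"apparent turnover = {Ra_apparent:.1f} mg/kg/min (true {true_Ra})")
```

    The result, about 8.2 mg/kg/min, is close to the ~8.0 mg/kg/min measured with [6-3H]glucose, supporting contaminant accumulation as the dominant source of the underestimate.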

  10. Dissemination bias in systematic reviews of animal research: a systematic review.

    Directory of Open Access Journals (Sweden)

    Katharina F Mueller

    Full Text Available Systematic reviews of preclinical studies, in vivo animal experiments in particular, can influence clinical research and thus even clinical care. Dissemination bias, the selective dissemination of positive or significant results, is one of the major threats to validity in systematic reviews, in the realm of animal studies as well. We conducted a systematic review to determine the number of systematic reviews of animal studies published to date, to investigate their methodological features, especially with respect to assessment of dissemination bias, and to investigate how preclinical systematic reviews are cited in clinical research. Eligible studies for this systematic review were systematic reviews that summarize in vivo animal experiments whose results could be interpreted as applicable to clinical care. We systematically searched Ovid Medline, Embase, ToxNet, and ScienceDirect from 1st January 2009 to 9th January 2013 for eligible systematic reviews without language restrictions. Furthermore, we included articles from two previous systematic reviews by Peters et al. and Korevaar et al. The literature search and screening process resulted in 512 included full-text articles. We found an increasing number of published preclinical systematic reviews over time. The methodological quality of preclinical systematic reviews was low. The majority of preclinical systematic reviews did not assess the methodological quality of the included studies (71%), nor did they assess heterogeneity (81%) or dissemination bias (87%). Statistics quantifying the citation of systematic reviews of animal studies in clinical research showed that clinical studies referred to the preclinical research mainly to justify their study or a future study (76%). Preclinical systematic reviews may have an influence on clinical research, but their methodological quality frequently remains low. Therefore, systematic reviews of animal research should be critically appraised before

  11. Decision-model estimation of the age-specific disability weight for schistosomiasis japonica: a systematic review of the literature.

    Directory of Open Access Journals (Sweden)

    Julia L Finkelstein

    2008-03-01

    Full Text Available Schistosomiasis is among the most prevalent parasitic infections worldwide. However, current Global Burden of Disease (GBD) disability-adjusted life year estimates indicate that its population-level impact is negligible. Recent studies suggest that GBD methodologies may significantly underestimate the burden of parasitic diseases, including schistosomiasis. Furthermore, strain-specific disability weights have not been established for schistosomiasis, and the magnitude of human disease burden due to Schistosoma japonicum remains controversial. We used a decision model to quantify an alternative disability weight estimate of the burden of human disease due to S. japonicum. We reviewed S. japonicum morbidity data, and constructed decision trees for all infected persons and for two age-specific strata, <15 y and ≥15 y. We conducted stochastic and probabilistic sensitivity analyses for each model. Infection with S. japonicum was associated with an average disability weight of 0.132, with an age-specific disability weight of 0.098 for those <15 y and a separate estimate for those ≥15 y. Re-estimated disability weights were seven to 46 times greater than current GBD measures; no simulations produced disability weight estimates lower than 0.009. Nutritional morbidities had the greatest contribution to the S. japonicum disability weight in the <15 y model, whereas major organ pathologies were the most critical variables in the older age group. GBD disability weights for schistosomiasis urgently need to be revised, and species-specific disability weights should be established. Even a marginal increase in current estimates would result in a substantial rise in the estimated global burden of schistosomiasis, and have considerable implications for public health prioritization and resource allocation for schistosomiasis research, monitoring, and control.
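    At its core, the decision-model calculation behind such estimates reduces to a probability-weighted average of state-specific disability weights over the branches of the tree. The branch probabilities and weights below are purely illustrative assumptions, not the study's inputs.

```python
# Hypothetical decision-tree branches for an infected person:
# (probability of morbidity state, disability weight of that state).
branches = [
    (0.40, 0.00),   # infected, asymptomatic
    (0.35, 0.05),   # mild nutritional morbidity
    (0.20, 0.20),   # moderate morbidity
    (0.05, 0.50),   # major organ pathology
]

# Branch probabilities must sum to 1 for a well-formed tree.
assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9

# Expected disability weight = sum of probability * weight over branches.
expected_dw = sum(p * dw for p, dw in branches)
print(f"expected disability weight = {expected_dw:.4f}")
```

    Stochastic sensitivity analysis, as in the study, then amounts to resampling the branch probabilities and weights from their uncertainty ranges and recomputing this expectation.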

  12. Do Individuals Perceive Income Tax Rates Correctly?

    Science.gov (United States)

    Gideon, Michael

    2017-01-01

    This article uses data from survey questions fielded on the 2011 wave of the Cognitive Economics Study to uncover systematic errors in perceptions of income tax rates. First, when asked about the marginal tax rates (MTRs) for households in the top tax bracket, respondents underestimate the top MTR on wages and salary income, overestimate the MTR on dividend income, and therefore significantly underestimate the currently tax-advantaged status of dividend income. Second, when analyzing the relationship between respondents' self-reported average tax rates (ATRs) and MTRs, many people do not understand the progressive nature of the federal income tax system. Third, when comparing self-reported tax rates with those computed from self-reported income, respondents systematically overestimate their ATR, while reported MTRs are accurate at the mean; the responses are consistent with underestimation of tax schedule progressivity.
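    The ATR/MTR confusion the article documents is easy to see in a toy progressive schedule: the marginal rate applies only to the last dollar earned, so the average rate sits well below it. The brackets below are hypothetical, not any actual tax code.

```python
# Hypothetical progressive schedule: (upper bound of bracket, marginal rate).
BRACKETS = [(10_000, 0.10), (40_000, 0.20), (float("inf"), 0.35)]

def tax_due(income):
    """Tax owed under the bracket schedule: each rate applies only to
    the slice of income falling inside its bracket."""
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS:
        if income > lower:
            tax += (min(income, upper) - lower) * rate
        lower = upper
    return tax

income = 60_000
atr = tax_due(income) / income               # average tax rate
mtr = tax_due(income + 1) - tax_due(income)  # marginal rate on the next dollar
print(f"ATR = {atr:.3f}, MTR = {mtr:.2f}")
```

    Here the ATR is about 23% while the MTR is 35%; a respondent who reports the marginal rate as their average rate overstates their ATR, exactly the pattern the survey data show.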

  13. Pharmacological interventions for daytime sleepiness and sleep disorders in Parkinson's disease: Systematic review and meta-analysis.

    Science.gov (United States)

    Rodrigues, Tiago Martins; Castro Caldas, Ana; Ferreira, Joaquim J

    2016-06-01

    Daytime sleepiness and sleep disorders are frequently reported in Parkinson's disease (PD). However, their impact on quality of life has been underestimated and few clinical trials have been performed. We aimed to assess the efficacy and safety of pharmacological interventions for daytime sleepiness and sleep disorders in PD. We performed a systematic review of randomized controlled trials comparing any pharmacological intervention with no intervention or placebo for the treatment of daytime sleepiness and sleep problems in PD patients. Ten studies (n = 338 patients) were included. Four trials addressed interventions for excessive daytime sleepiness. Meta-analysis of the three trials evaluating modafinil showed a significant reduction in sleepiness, as assessed by the Epworth Sleepiness Scale (ESS) (-2.24 points, 95% CI -3.90 to -0.57). The remaining trials addressed insomnia and REM sleep Behaviour Disorder (RBD). Single-study results suggest that doxepin and YXQN granules might be efficacious and pergolide may be deleterious for insomnia, and that rivastigmine may be used to treat RBD in PD patients. However, there is insufficient evidence to support or refute the efficacy of any of these interventions. No relevant side effects were reported. Whilst providing recommendations, this systematic review depicts the lack of a body of evidence regarding the treatment of sleep disorders in PD patients; hence, further studies are warranted. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Systematic Trading: Calibration Advances through Machine Learning

    OpenAIRE

    Alvarez Teleña, S.

    2015-01-01

    Systematic trading in finance uses computer models to define trade goals, risk controls and rules that can execute trade orders in a methodical way. This thesis investigates how performance in systematic trading can be crucially enhanced by both i) persistently reducing the bid-offer spread quoted by the trader through optimized and realistically backtested strategies and ii) improving the out-of-sample robustness of the strategy selected through the injection of theory into the typically dat...

  15. Health literacy and public health: A systematic review and integration of definitions and models

    LENUS (Irish Health Repository)

    Sorensen, Kristine

    2012-01-25

    Abstract Background Health literacy concerns the knowledge and competences of persons to meet the complex demands of health in modern society. Although its importance is increasingly recognised, there is no consensus about the definition of health literacy or about its conceptual dimensions, which limits the possibilities for measurement and comparison. The aim of the study is to review definitions and models on health literacy to develop an integrated definition and conceptual model capturing the most comprehensive evidence-based dimensions of health literacy. Methods A systematic literature review was performed to identify definitions and conceptual frameworks of health literacy. A content analysis of the definitions and conceptual frameworks was carried out to identify the central dimensions of health literacy and develop an integrated model. Results The review resulted in 17 definitions of health literacy and 12 conceptual models. Based on the content analysis, an integrative conceptual model was developed containing 12 dimensions referring to the knowledge, motivation and competencies of accessing, understanding, appraising and applying health-related information within the healthcare, disease prevention and health promotion setting, respectively. Conclusions Based upon this review, a model is proposed integrating medical and public health views of health literacy. The model can serve as a basis for developing health literacy enhancing interventions and provide a conceptual basis for the development and validation of measurement tools, capturing the different dimensions of health literacy within the healthcare, disease prevention and health promotion settings.

  16. A critical comparison of systematic calibration protocols for activated sludge models: a SWOT analysis.

    Science.gov (United States)

    Sin, Gürkan; Van Hulle, Stijn W H; De Pauw, Dirk J W; van Griensven, Ann; Vanrolleghem, Peter A

    2005-07-01

    Modelling activated sludge systems has gained increasing momentum since the introduction of activated sludge models (ASMs) in 1987. Application of dynamic models to full-scale systems essentially requires a calibration of the chosen ASM to the case under study. Numerous full-scale model applications have been performed so far, mostly based on ad hoc approaches and expert knowledge. Further, each modelling study has followed a different calibration approach: e.g. different influent wastewater characterization methods, different kinetic parameter estimation methods, different selection of parameters to be calibrated, different priorities within the calibration steps, etc. In short, there was no standard approach to performing the calibration study, which makes it difficult, if not impossible, to (1) compare different calibrations of ASMs with each other and (2) perform internal quality checks for each calibration study. To address these concerns, systematic calibration protocols have recently been proposed to bring guidance to the modeling of activated sludge systems and in particular to the calibration of full-scale models. In this contribution four existing calibration approaches (BIOMATH, HSG, STOWA and WERF) are critically discussed using a SWOT (Strengths, Weaknesses, Opportunities, Threats) analysis. It is also assessed in what way these approaches can be further developed in view of further improving the quality of ASM calibration. In this respect, the potential of automating some steps of the calibration procedure by use of mathematical algorithms is highlighted.

  17. Mothers' perceptions about the nutritional status of their overweight children: a systematic review

    Directory of Open Access Journals (Sweden)

    Caliandra Francescatto

    2014-07-01

    Full Text Available OBJECTIVE: This systematic review aims to explore and describe the studies that have as a primary outcome the identification of mothers' perception of the nutritional status of their children. SOURCES: The PubMed, Embase, LILACS, and SciELO databases were searched, regardless of language or publication date. The terms used for the search, with their variants, were: Nutritional Status, Perception, Mother, Maternal, Parents, Parental. SUMMARY OF THE FINDINGS: After screening of 167 articles, 41 were selected for full-text reading, of which 17 were included in the review; these involved the evaluation of the perception of mothers of the nutritional status of 57,700 children and adolescents. The methodological quality of the studies ranged from low to excellent. The proportion of mothers who inaccurately perceived the nutritional status of their children was high, and underestimation was most common for children with overweight or obesity. CONCLUSION: Despite the increasing prevalence of obesity in the pediatric age group, mothers have difficulty in accurately perceiving the nutritional status of their children, which may compromise referral to treatment programs.

  18. Interagency collaboration models for people with mental ill health in contact with the police: a systematic scoping review.

    Science.gov (United States)

    Parker, Adwoa; Scantlebury, Arabella; Booth, Alison; MacBryde, Jillian Catherine; Scott, William J; Wright, Kath; McDaid, Catriona

    2018-03-27

    To identify existing evidence on interagency collaboration between law enforcement, emergency services, statutory services and third sector agencies regarding people with mental ill health. Systematic scoping review. Scoping reviews map particular research areas to identify research gaps. ASSIA, CENTRAL, the Cochrane Library databases, Criminal Justice Abstracts, ERIC, Embase, MEDLINE, PsycINFO, PROSPERO and Social Care Online and Social Sciences Citation Index were searched up to 2017, as were grey literature and hand searches. Eligible articles were empirical evaluations or descriptions of models of interagency collaboration between the police and other agencies. Screening and data extraction were undertaken independently by two researchers. Arksey's framework was used to collate and map included studies. One hundred and twenty-five studies were included. The majority of articles were of descriptions of models (28%), mixed methods evaluations of models (18%) and single service evaluations (14%). The most frequently reported outcomes (52%) were 'organisational or service level outcomes' (eg, arrest rates). Most articles (53%) focused on adults with mental ill health, whereas others focused on adult offenders with mental ill health (17.4%). Thirteen models of interagency collaboration were described, each involving between 2 and 13 agencies. Frequently reported models were 'prearrest diversion' of people with mental ill health (34%), 'coresponse' involving joint response by police officers paired with mental health professionals (28.6%) and 'jail diversion' following arrest (23.8%). We identified 13 different interagency collaboration models catering for a range of mental health-related interactions. All but one of these models involved the police and mental health services or professionals. 
Several models have sufficient literature to warrant full systematic reviews of their effectiveness, whereas others need robust evaluation, by randomised controlled trial where

  19. Interagency collaboration models for people with mental ill health in contact with the police: a systematic scoping review

    Science.gov (United States)

    Scantlebury, Arabella; Booth, Alison; MacBryde, Jillian Catherine; Scott, William J; Wright, Kath

    2018-01-01

    Objective To identify existing evidence on interagency collaboration between law enforcement, emergency services, statutory services and third sector agencies regarding people with mental ill health. Design Systematic scoping review. Scoping reviews map particular research areas to identify research gaps. Data sources and eligibility ASSIA, CENTRAL, the Cochrane Library databases, Criminal Justice Abstracts, ERIC, Embase, MEDLINE, PsycINFO, PROSPERO and Social Care Online and Social Sciences Citation Index were searched up to 2017, as were grey literature and hand searches. Eligible articles were empirical evaluations or descriptions of models of interagency collaboration between the police and other agencies. Study appraisal and synthesis Screening and data extraction were undertaken independently by two researchers. Arksey’s framework was used to collate and map included studies. Results One hundred and twenty-five studies were included. The majority of articles were of descriptions of models (28%), mixed methods evaluations of models (18%) and single service evaluations (14%). The most frequently reported outcomes (52%) were ‘organisational or service level outcomes’ (eg, arrest rates). Most articles (53%) focused on adults with mental ill health, whereas others focused on adult offenders with mental ill health (17.4%). Thirteen models of interagency collaboration were described, each involving between 2 and 13 agencies. Frequently reported models were ‘prearrest diversion’ of people with mental ill health (34%), ‘coresponse’ involving joint response by police officers paired with mental health professionals (28.6%) and ‘jail diversion’ following arrest (23.8%). Conclusions We identified 13 different interagency collaboration models catering for a range of mental health-related interactions. All but one of these models involved the police and mental health services or professionals. Several models have sufficient literature to warrant full

  20. Tissue Engineering in Animal Models for Urinary Diversion: A Systematic Review

    Science.gov (United States)

    Sloff, Marije; de Vries, Rob; Geutjes, Paul; IntHout, Joanna; Ritskes-Hoitinga, Merel

    2014-01-01

    Tissue engineering and regenerative medicine (TERM) approaches may provide alternatives for gastrointestinal tissue in urinary diversion. To proceed to clinically translatable studies, TERM alternatives need to be evaluated in (large) controlled and standardized animal studies. Here, we investigated all evidence for the efficacy of tissue-engineered constructs in animal models for urinary diversion. Studies investigating this subject were identified through a systematic search of three databases (PubMed, Embase and Web of Science). From each study, animal characteristics, study characteristics and experimental outcomes for meta-analyses were tabulated. Furthermore, the reporting of items vital for study replication was assessed. The retrieved studies (8 in total) showed extreme heterogeneity in study design, including animal models, biomaterials and type of urinary diversion. All studies were feasibility studies, indicating the novelty of this field. None of the studies included appropriate control groups, i.e. a comparison with the classical treatment using gastrointestinal tissue. The meta-analysis showed a trend towards successful experimentation in larger animals, although no specific animal species could be identified as the most suitable model. Larger animals appear to allow a better translation to the human situation with respect to anatomy and surgical approaches. It was unclear whether the use of cells benefits the formation of a neo-urinary conduit. The reporting of the methodology and data according to standardized guidelines was insufficient and should be improved to increase the value of such publications. In conclusion, animal models in the field of TERM for urinary diversion have probably been chosen for reasons other than their predictive value. Controlled and comparative long-term animal studies with adequate methodological reporting are needed to proceed to clinically translatable studies. This will aid in good quality research with the reduction in

  1. Random variables in forest policy: A systematic sensitivity analysis using CGE models

    International Nuclear Information System (INIS)

    Alavalapati, J.R.R.

    1999-01-01

    Computable general equilibrium (CGE) models are extensively used to simulate the economic impacts of forest policies. Parameter values used in these models often play a central role in their outcomes. Since econometric studies and best guesses are the main sources of these parameters, some randomness exists about their 'true' values. Failure to incorporate this randomness into the models may limit the degree of confidence in the validity of the results. In this study, we conduct a systematic sensitivity analysis (SSA) to assess the economic impacts of: (1) a 1% increase in tax on Canadian lumber and wood products exports to the United States (US), and (2) a 1% decrease in technical change in the lumber and wood products and pulp and paper sectors of the US and Canada. We achieve this task by using an aggregated version of the global trade model developed by Hertel (1997) and the automated SSA procedure developed by Arndt and Pearson (1996). The estimated means and standard deviations suggest that certain impacts are more likely than others. For example, an increase in the export tax is likely to cause a decrease in Canadian income, while an increase in US income is unlikely. On the other hand, a decrease in US welfare is likely, while an increase in Canadian welfare is unlikely, in response to an increase in the tax. It is likely that income and welfare both fall in Canada and the US in response to a decrease in technical change in the lumber and wood products and pulp and paper sectors. 21 refs, 1 fig, 5 tabs
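The SSA described in this record propagates parameter randomness through the model and summarizes each outcome by its mean and standard deviation. As a minimal sketch of that idea only: plain Monte Carlo stands in for Arndt and Pearson's Gaussian-quadrature procedure, and the one-line "model" and its triangular parameter distribution are purely hypothetical, not the GTAP model or its parameters.

```python
# Hedged sketch of systematic sensitivity analysis (SSA): draw the uncertain
# parameter from an assumed distribution, run the (toy) model, and report the
# mean and standard deviation of the outcome. Everything here is illustrative.
import random
import statistics

def toy_income_change(elasticity, tax_increase=0.01):
    """Hypothetical response of income (percent) to a small export-tax rise."""
    return -elasticity * tax_increase * 100

random.seed(42)
draws = [toy_income_change(random.triangular(0.5, 2.0, 1.2))
         for _ in range(10_000)]
mean = statistics.mean(draws)
std = statistics.stdev(draws)
# By Chebyshev's inequality an outcome more than k standard deviations from
# the mean has probability at most 1/k**2, so a mean several std below zero
# marks a decline as "likely" in the sense used in the abstract.
print(f"mean={mean:.3f}%  std={std:.3f}%")
```

The real SSA procedure replaces the random draws with quadrature points so far fewer model solves are needed, but the summary statistics are interpreted the same way.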

  2. The necessity of clinical application of tibial reduction for detection of underestimated posterolateral rotatory instability in combined posterior cruciate ligament and posterolateral corner deficient knee.

    Science.gov (United States)

    Lee, Han-Jun; Park, Yong-Beom; Ko, Young-Bong; Kim, Seong-Hwan; Kwon, Hyeok-Bin; Yu, Dong-Seok; Jung, Young-Bok

    2015-10-01

    The purpose of this study was to evaluate the usefulness of tibial reduction during the dial test for clinical detection of underestimated posterolateral rotatory instability (PLRI) in the combined posterior cruciate ligament (PCL)-posterolateral corner (PLC) deficient knee, in terms of external rotation laxity and clinical outcomes. Twenty-one patients who were classified as grade I PLRI using the dial test with the tibia subluxated, but as grade II with the tibia reduced, were evaluated retrospectively. The mean follow-up was 39.3 months (range 24-61 months). Each patient was evaluated on the following variables: posterior translation and varus laxity on radiographs, KT-1000 arthrometer, dial test (reduced and subluxated positions), International Knee Documentation Committee score, Orthopädische Arbeitsgruppe Knie scoring system and Tegner activity scale. There were significant improvements in posterior tibial translation (8.6 ± 2.0 to 2.1 ± 1.0 mm). Application of reduction of the posteriorly subluxated tibia during the dial test was essential for appropriate treatment of underestimated PLRI in the combined PCL-PLC deficient knee. Retrospective case series, Level IV.

  3. Systematic Applications of Metabolomics in Metabolic Engineering

    Directory of Open Access Journals (Sweden)

    Robert A. Dromms

    2012-12-01

    The goals of metabolic engineering are well served by the biological information provided by metabolomics: information on how the cell is currently using its biochemical resources is perhaps one of the best ways to inform strategies to engineer a cell to produce a target compound. Using the analysis of extracellular or intracellular levels of the target compound (or a few closely related molecules) to drive metabolic engineering is quite common. However, there is surprisingly little systematic use of metabolomics datasets, which simultaneously measure hundreds of metabolites rather than just a few, for that same purpose. Here, we review the most common systematic approaches to integrating metabolite data with metabolic engineering, with emphasis on existing efforts to use whole-metabolome datasets. We then review some of the most common approaches for computational modeling of cell-wide metabolism, including constraint-based models, and discuss current computational approaches that explicitly use metabolomics data. We conclude with a discussion of the broader potential of computational approaches that systematically use metabolomics data to drive metabolic engineering.

  4. Effect of health belief model and health promotion model on breast cancer early diagnosis behavior: a systematic review.

    Science.gov (United States)

    Ersin, Fatma; Bahar, Zuhal

    2011-01-01

    Breast cancer is an important public health problem because it is both common and potentially fatal. The objective of this systematic review is to describe the effects of interventions performed by nurses using the Health Belief Model (HBM) and the Health Promotion Model (HPM) on breast cancer early diagnosis behaviors and on the components of the two models. The review was conducted in line with the 2009 guidance of the Centre for Reviews and Dissemination (CRD) at the University of York. The PUBMED, OVID, EBSCO and COCHRANE databases were searched, yielding 678 studies in total (PUBMED: 236, OVID: 162, EBSCO: 175, COCHRANE: 105). The abstracts and full texts of these 678 studies were evaluated against the inclusion and exclusion criteria, and 9 studies were found to meet them. Sample sizes ranged from 94 to 1655. The studies found that education grounded in these theories was effective in promoting breast cancer early diagnosis behaviors. When the literature is examined, it is apparent that nurse-conducted experimental studies that compare HBM and HPM concepts before and after an intervention and show the effect of these concepts on education are limited in number. Randomized controlled studies that compare HBM and HPM concepts before and after an intervention would be useful for evaluating the effectiveness of such interventions.

  5. Accuracy of national key performance indicator reporting from two Aboriginal medical services: potential to underestimate the performance of primary health care.

    Science.gov (United States)

    2017-05-09

    Objective The aim of the present study was to assess the accuracy of extracting national key performance indicator (nKPI) data for the Online Community Health Reporting Environment for Health Services (OCHREStreams) program using the Pen Computer Systems (Leichhardt, NSW, Australia) Clinical Audit Tool (CAT) from Communicare (Telstra Health Communicare Systems, Perth, WA, Australia), a commonly used patient information management system (PIMS) in Aboriginal primary care. Methods Two Aboriginal Community-Controlled Health Services (ACCHSs) were recruited to the present study. A sample of regular clients aged ≥55 years from each ACCHS was selected and a subset of 13 nKPIs was examined. A manual case note audit of the nKPI subset within Communicare was undertaken by a clinician at each participating ACCHS and acted as a 'gold standard' comparator for three query methods: (1) internal Communicare nKPI reports; (2) PenCS CAT nKPI manual filtering (a third-party data-extraction tool); and (3) nKPI data submitted to the Improvement Foundation qiConnect portal. Results No errors were found in nKPI data extraction from Communicare using the CAT and subsequent submission to the qiConnect portal. However, the Communicare internal nKPI report included deceased clients and past patients, and we can be very confident that deceased clients and past patients are also included in the qiConnect portal data. This resulted in inflation of client denominators and an underestimation of health service performance, particularly for nKPIs recording activity in the past 6 months. Several minor errors were also detected in Communicare internal nKPI reports. Conclusions CAT accurately extracts a subset of nKPI data from Communicare. However, given the widespread use of Communicare in ACCHSs, the inclusion of deceased clients and past patients in the OCHREStreams nKPI data program is likely to have resulted in systematic under-reporting of health service performance nationally. 

  6. The JOINT model of nurse absenteeism and turnover: a systematic review.

    Science.gov (United States)

    Daouk-Öyry, Lina; Anouze, Abdel-Latef; Otaki, Farah; Dumit, Nuhad Yazbik; Osman, Ibrahim

    2014-01-01

    Absenteeism and turnover among healthcare workers have a significant impact on overall healthcare system performance. The literature captures variables from different levels of measurement and analysis as being associated with attendance behavior among nurses. Yet, it remains unclear how variables from different contextual levels interact to impact nurses' attendance behaviors. The purpose of this review is to develop an integrative multilevel framework that optimizes our understanding of absenteeism and turnover among nurses in hospital settings. We therefore systematically examined English-language studies retrieved from two major databases, PubMed and CINAHL Plus, published between January 2007 and January 2013 (inclusive). Our review led to the identification of 7619 articles, of which 41 matched the inclusion criteria. The analysis yielded a total of 91 antecedent variables and 12 outcome variables for turnover, and 29 antecedent variables and 9 outcome variables for absenteeism. The manifested variables were analyzed using content analysis and grouped into 11 categories, and further into five main factors: Job, Organization, Individual, National and inTerpersonal (JOINT). Thus, we propose the JOINT multilevel conceptual model for investigating absenteeism and turnover among nurses. The JOINT model can be adapted by researchers to fit their hypothesized multilevel relationships. It can also be used by nursing managers as a lens for holistically managing nurses' attendance behaviors. Copyright © 2013 Elsevier Ltd. All rights reserved.

  7. A systematic methodology to extend the applicability of a bioconversion model for the simulation of various co-digestion scenarios

    DEFF Research Database (Denmark)

    Kovalovszki, Adam; Alvarado-Morales, Merlin; Fotidis, Ioannis

    2017-01-01

    Detailed simulation of anaerobic digestion (AD) requires complex mathematical models and the optimization of numerous model parameters. By performing a systematic methodology and identifying parameters with the highest impact on process variables in a well-established AD model, its applicability...... was extended to various co-digestion scenarios. More specifically, the application of the step-by-step methodology led to the estimation of a general and reduced set of parameters, for the simulation of scenarios where either manure or wastewater were co-digested with different organic substrates. Validation...... experimental data quite well, indicating that it offers a reliable reference point for future simulations of anaerobic co-digestion scenarios....

  8. Evaluating clinical librarian services: a systematic review.

    Science.gov (United States)

    Brettle, Alison; Maden-Jenkins, Michelle; Anderson, Lucy; McNally, Rosalind; Pratchett, Tracey; Tancock, Jenny; Thornton, Debra; Webb, Anne

    2011-03-01

      Previous systematic reviews have indicated limited evidence and poor quality evaluations of clinical librarian (CL) services. Rigorous evaluations should demonstrate the value of CL services, but guidance is needed before this can be achieved.   To undertake a systematic review which examines models of CL services, quality, methods and perspectives of clinical librarian service evaluations.   Systematic review methodology and synthesis of evidence, undertaken collaboratively by a group of 8 librarians to develop research and critical appraisal skills.   There are four clear models of clinical library service provision. Clinical librarians are effective in saving health professionals time, providing relevant, useful information and high quality services. Clinical librarians have a positive effect on clinical decision making by contributing to better informed decisions, diagnosis and choice of drug or therapy. The quality of CL studies is improving, but more work is needed on reducing bias and providing evidence of specific impacts on patient care. The Critical Incident Technique as part of a mixed method approach appears to offer a useful approach to demonstrating impact.   This systematic review provides practical guidance regarding the evaluation of CL services. It also provides updated evidence regarding the effectiveness and impact of CL services. The approach used was successful in developing research and critical appraisal skills in a group of librarians. © 2010 The authors. Health Information and Libraries Journal © 2010 Health Libraries Group.

  9. NRC Information No. 90-21: Potential failure of motor-operated butterfly valves to operate because valve seat friction was underestimated

    International Nuclear Information System (INIS)

    Rossi, C.E.

    1992-01-01

    In October 1988, at Catawba Nuclear Station Unit 1, a motor-operated butterfly valve in the service water system failed to open under high differential pressure conditions. The licensee concluded that the valve manufacturer, BIF/General Signal Corporation, had underestimated the degree to which the material used in the valve seat would harden with age (responsibility for these valves has since been transferred to Paul-Munroe Enertech). This underestimation of the age hardening had led the manufacturer to assume valve seat friction forces that were less than the actual friction forces in the installed valve. To overcome the larger-than-anticipated friction forces, the licensee's engineering staff recommended that the open torque switches for 56 butterfly valves be reset to the maximum allowable value. The systems in which these valves are located include the component cooling water system, the service water system, and various ventilation systems. By July 26, 1989, the torque switch adjustments were completed at Catawba Units 1 and 2. After reviewing the final settings, the licensee's engineering staff determined that the actuators for three butterfly valves in the component cooling water system might not be able to overcome the friction forces resulting from maximum seat hardening. On December 13, 1989, the licensee determined that the failure of these BIF/General Signal motor-operated valves (MOVs) could cause a loss of cooling water to the residual heat removal system heat exchangers. To resolve the concern regarding the operability of these BIF/General Signal valves, a torque switch bypass was installed on two of the actuators to allow full motor capability during opening.

  10. A Regional Model for Malaria Vector Developmental Habitats Evaluated Using Explicit, Pond-Resolving Surface Hydrology Simulations.

    Directory of Open Access Journals (Sweden)

    Ernest Ohene Asare

    Dynamical malaria models can relate precipitation to the availability of vector breeding sites using simple models of surface hydrology. Here, a revised scheme is developed for the VECTRI malaria model, which is evaluated alongside the default scheme using a two-year simulation by HYDREMATS, a 10 metre resolution, village-scale model that explicitly simulates individual ponds. Despite the simplicity of the two VECTRI surface hydrology parametrization schemes, they can reproduce the sub-seasonal evolution of fractional water coverage. Calibration of the model parameters is required to simulate the mean pond fraction correctly. The default VECTRI model tended to overestimate water fraction in periods subject to light rainfall events and underestimate it during periods of intense rainfall. This systematic error was improved in the revised scheme by including a parametrization for surface run-off, such that light rainfall below the initial abstraction threshold does not contribute to ponds. After calibration of the pond model, the VECTRI model was able to simulate vector densities that compared well to the detailed agent-based model contained in HYDREMATS without further parameter adjustment. Substituting local rain-gauge data with satellite-retrieved precipitation gave a reasonable approximation, raising the prospects for regional malaria simulations even in data-sparse regions. However, further improvements could be made if a method can be derived to calibrate the key hydrology parameters of the pond model in each grid cell location, possibly also incorporating slope and soil texture.

  11. A Regional Model for Malaria Vector Developmental Habitats Evaluated Using Explicit, Pond-Resolving Surface Hydrology Simulations.

    Science.gov (United States)

    Asare, Ernest Ohene; Tompkins, Adrian Mark; Bomblies, Arne

    2016-01-01

    Dynamical malaria models can relate precipitation to the availability of vector breeding sites using simple models of surface hydrology. Here, a revised scheme is developed for the VECTRI malaria model, which is evaluated alongside the default scheme using a two-year simulation by HYDREMATS, a 10 metre resolution, village-scale model that explicitly simulates individual ponds. Despite the simplicity of the two VECTRI surface hydrology parametrization schemes, they can reproduce the sub-seasonal evolution of fractional water coverage. Calibration of the model parameters is required to simulate the mean pond fraction correctly. The default VECTRI model tended to overestimate water fraction in periods subject to light rainfall events and underestimate it during periods of intense rainfall. This systematic error was improved in the revised scheme by including a parametrization for surface run-off, such that light rainfall below the initial abstraction threshold does not contribute to ponds. After calibration of the pond model, the VECTRI model was able to simulate vector densities that compared well to the detailed agent-based model contained in HYDREMATS without further parameter adjustment. Substituting local rain-gauge data with satellite-retrieved precipitation gave a reasonable approximation, raising the prospects for regional malaria simulations even in data-sparse regions. However, further improvements could be made if a method can be derived to calibrate the key hydrology parameters of the pond model in each grid cell location, possibly also incorporating slope and soil texture.
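The run-off fix described in the two VECTRI records above can be illustrated with a toy bucket model: rainfall only feeds ponds once it exceeds an initial-abstraction threshold, so persistent light rain no longer inflates water coverage. This is a hedged sketch under assumed parameter names and values (ia_mm, loss_mm, k_fill, w_max), not VECTRI's actual scheme.

```python
# Toy pond-fraction model illustrating the initial-abstraction idea from the
# revised scheme. All parameter names and values are illustrative assumptions.

def pond_fraction(rain_mm, ia_mm=5.0, loss_mm=3.0, k_fill=0.01, w_max=0.3):
    """Daily fractional water coverage for a daily rainfall series (mm)."""
    w = 0.0
    series = []
    for r in rain_mm:
        runoff = max(r - ia_mm, 0.0)         # light rain lost to abstraction
        w = min(w + k_fill * runoff, w_max)  # run-off fills ponds, capped
        w = max(w - k_fill * loss_mm, 0.0)   # evaporation and infiltration
        series.append(w)
    return series

light = pond_fraction([4.0] * 30)               # persistent light rain
heavy = pond_fraction([0.0] * 25 + [60.0] * 5)  # a few intense events
# A scheme without the threshold would slowly fill ponds under light rain;
# with the threshold, the light-rain series leaves them dry while the
# intense-rain series fills them.
```

The contrast between `light` and `heavy` reproduces the qualitative behaviour the abstract attributes to the revised scheme.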

  12. Monod kinetics rather than a first-order degradation model explains atrazine fate in soil mini-columns: Implications for pesticide fate modelling

    International Nuclear Information System (INIS)

    Cheyns, K.; Mertens, J.; Diels, J.; Smolders, E.; Springael, D.

    2010-01-01

    Pesticide transport models commonly assume first-order pesticide degradation kinetics for describing reactive transport in soil. This assumption was assessed in mini-column studies with associated batch degradation tests. Soil mini-columns were irrigated with atrazine in two intermittent steps of about 30 days, separated by 161 days of artificial rain water application. Atrazine concentration in the effluent peaked at the influent concentration after initial break-through but then sharply decreased while influx was sustained, suggesting a degradation lag phase. The same pattern was displayed in the second step, but the peak height and the percentage of atrazine recovered in the effluent were lower. A Monod model with biomass decay was successfully calibrated to these data. The model was successfully evaluated against batch degradation data and mini-column experiments at a lower flow rate. The study suggested that first-order degradation models may underestimate the risk of pesticide leaching if the pesticide degradation potential needs amplification during degradation. The population dynamics of the pesticide-degrading population should be taken into account when predicting pesticide fate, to avoid underestimating pesticide break-through towards groundwater.
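The contrast this record draws, first-order kinetics versus Monod kinetics with a growing degrader population, can be sketched numerically. The parameter values below are illustrative assumptions, not the calibrated atrazine values: a small initial biomass produces the degradation lag phase that a first-order model cannot reproduce.

```python
# Hedged sketch: first-order vs Monod-with-biomass-decay degradation of a
# substrate C, integrated with explicit Euler. Parameters are illustrative.

def first_order(c0, k, dt, steps):
    """dC/dt = -k*C."""
    c = c0
    out = [c]
    for _ in range(steps):
        c -= k * c * dt
        out.append(c)
    return out

def monod(c0, x0, mu_max, ks, y, b, dt, steps):
    """dC/dt = -(mu_max/Y)*X*C/(Ks+C);  dX/dt = mu_max*X*C/(Ks+C) - b*X."""
    c, x = c0, x0
    out = [c]
    for _ in range(steps):
        growth = mu_max * x * c / (ks + c)   # biomass growth rate
        c = max(c - (growth / y) * dt, 0.0)  # substrate consumed
        x += (growth - b * x) * dt           # growth minus decay
        out.append(c)
    return out

fo = first_order(c0=10.0, k=0.1, dt=0.1, steps=600)
mo = monod(c0=10.0, x0=0.01, mu_max=0.5, ks=2.0, y=0.5, b=0.05, dt=0.1, steps=600)
# With a small initial biomass the Monod curve barely moves at first (the lag
# phase), then degradation accelerates as the population grows; a first-order
# model declines at a fixed fractional rate from the start.
```

This is the qualitative mechanism behind the abstract's conclusion: if degradation potential must be amplified first, a first-order fit to late-time data overstates early degradation and understates leaching risk.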

  13. Panoramic radiographs underestimate extensions of the anterior loop and mandibular incisive canal

    International Nuclear Information System (INIS)

    De Brito, Ana Caroline Ramos; Nejaim, Yuri; De Freitas, Deborah Queiroz; De Oliveira Santos, Christiano

    2016-01-01

    The purpose of this study was to detect the anterior loop of the mental nerve and the mandibular incisive canal in panoramic radiographs (PAN) and cone-beam computed tomography (CBCT) images, as well as to determine the anterior/mesial extension of these structures in panoramic and cross-sectional reconstructions using PAN and CBCT images. Images (both PAN and CBCT) from 90 patients were evaluated by 2 independent observers. Detection of the anterior loop and the incisive canal was compared between PAN and CBCT. The anterior/mesial extension of these structures was compared between PAN and both cross-sectional and panoramic CBCT reconstructions. In CBCT, the anterior loop and the incisive canal were observed in 7.7% and 24.4% of the hemimandibles, respectively. In PAN, the anterior loop and the incisive canal were detected in 15% and 5.5% of cases, respectively. PAN presented more difficulties in the visualization of structures. The anterior/mesial extensions ranged from 0.0 mm to 19.0 mm on CBCT. PAN underestimated the measurements by approximately 2.0 mm. CBCT appears to be a more reliable imaging modality than PAN for preoperative workups of the anterior mandible. Individual variations in the anterior/mesial extensions of the anterior loop of the mental nerve and the mandibular incisive canal mean that it is not prudent to rely on a general safe zone for implant placement or bone surgery in the interforaminal region.

  14. Panoramic radiographs underestimate extensions of the anterior loop and mandibular incisive canal

    Energy Technology Data Exchange (ETDEWEB)

    De Brito, Ana Caroline Ramos; Nejaim, Yuri; De Freitas, Deborah Queiroz [Dept. of Oral Diagnosis, Division of Oral Radiology, Piracicaba Dental School, University of Campinas, Sao Paulo (Brazil); De Oliveira Santos, Christiano [Dept. of Stomatology, Public Oral Health and Forensic Dentistry, School of Dentistry of Ribeirao Preto, University of Sao Paulo, Sao Paulo (Brazil)

    2016-09-15

    The purpose of this study was to detect the anterior loop of the mental nerve and the mandibular incisive canal in panoramic radiographs (PAN) and cone-beam computed tomography (CBCT) images, as well as to determine the anterior/mesial extension of these structures in panoramic and cross-sectional reconstructions using PAN and CBCT images. Images (both PAN and CBCT) from 90 patients were evaluated by 2 independent observers. Detection of the anterior loop and the incisive canal was compared between PAN and CBCT. The anterior/mesial extension of these structures was compared between PAN and both cross-sectional and panoramic CBCT reconstructions. In CBCT, the anterior loop and the incisive canal were observed in 7.7% and 24.4% of the hemimandibles, respectively. In PAN, the anterior loop and the incisive canal were detected in 15% and 5.5% of cases, respectively. PAN presented more difficulties in the visualization of structures. The anterior/mesial extensions ranged from 0.0 mm to 19.0 mm on CBCT. PAN underestimated the measurements by approximately 2.0 mm. CBCT appears to be a more reliable imaging modality than PAN for preoperative workups of the anterior mandible. Individual variations in the anterior/mesial extensions of the anterior loop of the mental nerve and the mandibular incisive canal mean that it is not prudent to rely on a general safe zone for implant placement or bone surgery in the interforaminal region.

  15. Reanalysis data underestimate significant changes in growing season weather in Kazakhstan

    Energy Technology Data Exchange (ETDEWEB)

    Wright, C K; Henebry, G M [Geographic Information Science Center of Excellence (GIScCE), South Dakota State University, Brookings, SD (United States); De Beurs, K M [Department of Geography, Virginia Polytechnic Institute and State University, Blacksburg, VA (United States); Akhmadieva, Z K [Kazakhstan Scientific Research Institute of Ecology and Climate, Ministry of Environment Protection of the Republic of Kazakhstan, Astana (Kazakhstan); Groisman, P Y, E-mail: Geoffrey.Henebry@sdstate.ed [National Climatic Data Center, University Corporation for Atmospheric Research, Asheville, NC (United States)

    2009-10-15

    We present time series analyses of recently compiled climate station data which allowed us to assess contemporary trends in growing season weather across Kazakhstan as drivers of a significant decline in growing season normalized difference vegetation index (NDVI) recently observed by satellite remote sensing across much of Central Asia. We used a robust nonparametric time series analysis method, the seasonal Kendall trend test, to analyze georeferenced time series of accumulated growing season precipitation (APPT) and accumulated growing degree-days (AGDD). Over the period 2000-2006 we found geographically extensive, statistically significant (p<0.05) decreasing trends in APPT and increasing trends in AGDD. The temperature trends were especially apparent during the warm season and coincided with precipitation decreases in northwest Kazakhstan, indicating that pervasive drought conditions and higher temperature excursions were the likely drivers of the NDVI declines observed in Kazakhstan over the same period. We also compared the APPT and AGDD trends at individual stations with results from trend analysis of gridded monthly precipitation data from the Global Precipitation Climatology Centre (GPCC) Full Data Reanalysis v4 and gridded daily near-surface air temperature from the National Centers for Environmental Prediction Reanalysis v2 (NCEP R2). We found substantial deviation between the station and the reanalysis trends, suggesting that the GPCC and NCEP data substantially underestimate the geographic extent of recent drought in Kazakhstan. Although gridded climate products offer many advantages in ease of use and complete coverage, our findings for Kazakhstan should serve as a caveat against uncritical use of GPCC and NCEP reanalysis data and demonstrate the importance of compiling and standardizing daily climate data from data-sparse regions like Central Asia.

  16. Reanalysis data underestimate significant changes in growing season weather in Kazakhstan

    International Nuclear Information System (INIS)

    Wright, C K; Henebry, G M; De Beurs, K M; Akhmadieva, Z K; Groisman, P Y

    2009-01-01

    We present time series analyses of recently compiled climate station data which allowed us to assess contemporary trends in growing season weather across Kazakhstan as drivers of a significant decline in growing season normalized difference vegetation index (NDVI) recently observed by satellite remote sensing across much of Central Asia. We used a robust nonparametric time series analysis method, the seasonal Kendall trend test, to analyze georeferenced time series of accumulated growing season precipitation (APPT) and accumulated growing degree-days (AGDD). Over the period 2000-2006 we found geographically extensive, statistically significant (p<0.05) decreasing trends in APPT and increasing trends in AGDD. The temperature trends were especially apparent during the warm season and coincided with precipitation decreases in northwest Kazakhstan, indicating that pervasive drought conditions and higher temperature excursions were the likely drivers of the NDVI declines observed in Kazakhstan over the same period. We also compared the APPT and AGDD trends at individual stations with results from trend analysis of gridded monthly precipitation data from the Global Precipitation Climatology Centre (GPCC) Full Data Reanalysis v4 and gridded daily near-surface air temperature from the National Centers for Environmental Prediction Reanalysis v2 (NCEP R2). We found substantial deviation between the station and the reanalysis trends, suggesting that the GPCC and NCEP data substantially underestimate the geographic extent of recent drought in Kazakhstan. Although gridded climate products offer many advantages in ease of use and complete coverage, our findings for Kazakhstan should serve as a caveat against uncritical use of GPCC and NCEP reanalysis data and demonstrate the importance of compiling and standardizing daily climate data from data-sparse regions like Central Asia.
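Both Kazakhstan records rely on the seasonal Kendall trend test. Its core idea is simple: compute the Mann-Kendall S statistic within each season separately, then sum, so the seasonal cycle cannot masquerade as a trend. A minimal sketch of that idea follows; it ignores the variance estimate and tie corrections used for significance testing, and the synthetic record is illustrative.

```python
# Hedged sketch of the seasonal Kendall trend statistic (no tie correction,
# no variance/p-value computation; illustrative only).

def mann_kendall_s(x):
    """Sum of signs of all pairwise later-minus-earlier differences."""
    s = 0
    for i in range(len(x) - 1):
        for j in range(i + 1, len(x)):
            d = x[j] - x[i]
            s += (d > 0) - (d < 0)
    return s

def seasonal_kendall_s(series, n_seasons):
    """series is ordered year-major: year 0 seasons 0..n-1, year 1 ..."""
    return sum(mann_kendall_s(series[season::n_seasons])
               for season in range(n_seasons))

# Synthetic warming record: 5 years x 4 seasons with a strong seasonal cycle
# plus a +0.5 per-year trend in every season.
record = [t + 0.5 * yr for yr in range(5) for t in (10, 20, 15, 5)]
print(seasonal_kendall_s(record, n_seasons=4))  # → 40 (positive: increasing)
```

Each season's subseries is strictly increasing here, so every season contributes its maximum S of 10 pairs, while a plain Mann-Kendall test on the raw series would be dominated by the seasonal swings.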

  17. Women's maternity care needs and related service models in rural areas: A comprehensive systematic review of qualitative evidence.

    Science.gov (United States)

    Hoang, Ha; Le, Quynh; Ogden, Kathryn

    2014-12-01

    Understanding rural women's maternity care needs and the service models available to them is significant for the development of effective policies and the sustainability of rural communities. Nevertheless, no systematic review of studies addressing these needs has been conducted. To synthesise the best available evidence on women's needs in maternity care and existing service models in rural areas, a literature search of ten electronic databases, digital theses, and the reference lists of relevant studies was conducted, applying inclusion/exclusion criteria. Selected papers were assessed using standardised critical appraisal instruments from JBI-QARI. Data extracted from these studies were synthesised using thematic synthesis. Twelve studies met the inclusion criteria, and three main themes and several sub-themes were identified. A comprehensive set of rural women's maternity care expectations was reported in this review, including needs for safety (7 studies), continuity of care (6), quality of care (6) and informed choice (4). In addition, challenges in accessing maternity services also emerged from the literature, such as access (6), the risk of travelling (9) and the associated cost of travel (9). Four models of maternity care were examined in the literature: medically led care (5), GP-led care (4), midwifery-led care (7) and home birth (6). The systematic review demonstrates the importance of including well-conducted qualitative studies in informing the development of evidence-based policies that address women's maternity care needs and inform service models. Synthesising the findings from qualitative studies offers important insight for informing effective public health policy. Copyright © 2014 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.

  18. Premodelling of the importance of the location of the upstream hydraulic boundary of a regional flow model of the Laxemar-Simpevarp area. Site descriptive modelling SDM-Site Laxemar

    International Nuclear Information System (INIS)

    Holmen, Johan G.

    2008-03-01

    The location of the westernmost hydraulic boundary of a regional groundwater flow model representing the Laxemar investigation area is important because the regional flow of groundwater is primarily from the west towards the sea (as given by the regional topography). If the westernmost boundary condition of a regional flow model is located too close to the investigation area, the model may underestimate the magnitude of the regional groundwater flow at the investigation area and overestimate the breakthrough times of flow paths from the repository area, etc. Groundwater flows have been calculated by use of two mathematical (numerical) models: a very large groundwater flow model, much larger than the regional flow model used in the Laxemar site description version 1.2, and a smaller flow model of a size comparable to the regional model used in the site description. The models are identical except for their horizontal extent; the large model extends much further to the west than the small model. The westernmost lateral boundary of the small model is a topographic water divide approx. 7 km from the central parts of the Laxemar investigation area, and the westernmost lateral boundary of the large model is a topographic water divide approx. 40 km from the central parts of the investigation area. In both models the lateral boundaries are defined as no-flow boundaries. The objective of the study is to calculate and compare the groundwater flow properties at a tentative repository area at Laxemar by use of the large flow model and the small flow model. The comparisons include the following three parameters: - Length of flow paths from the tentative repository area. - Advective breakthrough time for flow paths from the tentative repository area. - Magnitude of flow at the tentative repository area. The comparisons demonstrated the following considering the median values of the obtained distributions of flow paths

  19. A systematic narrative review of consumer-directed care for older people: implications for model development.

    Science.gov (United States)

    Ottmann, Goetz; Allen, Jacqui; Feldman, Peter

    2013-11-01

    Consumer-directed care is increasingly becoming a mainstream option in community-based aged care. However, a systematic review describing how the current evaluation research translates into practice has not been published to date. This review aimed to systematically establish an evidence base of user preferences for, and satisfaction with, services associated with consumer-directed care programmes for older people. Twelve databases were searched, including MedLine, BioMed Central, Cinahl, Expanded Academic ASAP, PsychInfo, ProQuest, Age Line, Science Direct, Social Citation Index, Sociological Abstracts, Web of Science and the Cochrane Library. Google Scholar and Google were also searched. Eligible studies were those reporting on choice, user preferences and service satisfaction outcomes regarding a programme or model of home-based care in the United States or United Kingdom. This systematic narrative review retrieved literature published from January 1992 to August 2011. A total of 277 references were identified; of these, 17 met the selection criteria and were reviewed. Findings indicate that older people report varying preferences for consumer-directed care, with some demonstrating limited interest. Clients and carers reported good service satisfaction. However, research comparing user preferences across countries, or investigating how ecological factors shape user preferences, has received limited attention. Policy-makers and practitioners need to carefully consider the diverse contexts, needs and preferences of older adults in adopting consumer-directed care approaches in community aged care. The review calls for the development of consumer-directed care programmes offering a broad range of options that allow for personalisation and greater control over services without necessarily transferring administrative responsibilities to service users. Review findings suggest that consumer-directed care approaches have the potential to empower older

  20. Focusing on fast food restaurants alone underestimates the relationship between neighborhood deprivation and exposure to fast food in a large rural area

    OpenAIRE

    Sharkey, Joseph R; Johnson, Cassandra M; Dean, Wesley R; Horel, Scott A

    2011-01-01

    Background: Individuals and families are relying more on food prepared outside the home as a source for at-home and away-from-home consumption. Restricting the estimation of fast-food access to fast-food restaurants alone may underestimate potential spatial access to fast food. Methods: The study used data from the 2006 Brazos Valley Food Environment Project (BVFEP) and the 2000 U.S. Census Summary File 3 for six rural counties in the Texas Brazos Valley region. BVFEP ground-truthed da...

  1. Anxiety in the context of cancer: A systematic review and development of an integrated model.

    Science.gov (United States)

    Curran, Leah; Sharpe, Louise; Butow, Phyllis

    2017-08-01

    Anxiety is common in the context of cancer, but there are few theoretical models that apply to people with cancer across the trajectory of their illness. The aims of this review are to identify existing theories and to propose an integrated model of cancer-related anxiety. Using a systematic literature search of Medline, Premedline and PsycINFO databases, we identified nine theoretical models of anxiety in the context of cancer. We reviewed these for psychological concepts that fell under five themes: pre-existing schema, the inherent nature of cancer, cognitive factors, coping responses and contextual factors. From these themes, we integrated concepts from different models to develop a theoretical framework to explain the development and maintenance of anxiety in the context of cancer. The resulting model suggests that pre-existing schema, past experiences of cancer, an intolerance of uncertainty and meta-cognitive beliefs about worry interact with the inherent nature of cancer to produce overwhelming distress. The distress activates cognitive processes characterized by vigilance, worry and rumination. Attempts to cope by re-establishing control, and a pattern of vigilance to cancer-related cues and/or avoidance reinforce anxiety, in the context of a range of systemic factors that can either buffer against or worsen the anxiety. Copyright © 2017. Published by Elsevier Ltd.

  2. Decision-model estimation of the age-specific disability weight for schistosomiasis japonica: a systematic review of the literature.

    Science.gov (United States)

    Finkelstein, Julia L; Schleinitz, Mark D; Carabin, Hélène; McGarvey, Stephen T

    2008-03-05

    Schistosomiasis is among the most prevalent parasitic infections worldwide. However, current Global Burden of Disease (GBD) disability-adjusted life year estimates indicate that its population-level impact is negligible. Recent studies suggest that GBD methodologies may significantly underestimate the burden of parasitic diseases, including schistosomiasis. Furthermore, strain-specific disability weights have not been established for schistosomiasis, and the magnitude of human disease burden due to Schistosoma japonicum remains controversial. We used a decision model to quantify an alternative disability weight estimate of the burden of human disease due to S. japonicum. We reviewed S. japonicum morbidity data, and constructed decision trees for all infected persons and for two age-specific strata (<15 y and ≥15 y). We conducted stochastic and probabilistic sensitivity analyses for each model. Infection with S. japonicum was associated with an average disability weight of 0.132; the age-specific disability weight was 0.098 for those <15 y. Re-estimated disability weights were seven to 46 times greater than current GBD measures; no simulations produced disability weight estimates lower than 0.009. Nutritional morbidities had the greatest contribution to the S. japonicum disability weight in the <15 y model. Disability weights for schistosomiasis urgently need to be revised, and species-specific disability weights should be established. Even a marginal increase in current estimates would result in a substantial rise in the estimated global burden of schistosomiasis, and have considerable implications for public health prioritization and resource allocation for schistosomiasis research, monitoring, and control.
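
    The decision-model approach described here reduces to a probability-weighted average of branch disability weights over a tree of mutually exclusive morbidity outcomes. A minimal sketch follows; the branch names, probabilities and per-branch weights are invented placeholders for illustration, not values from the study.

```python
# Hypothetical morbidity branches for one infected person:
# branch name -> (probability of branch, branch disability weight).
# All numbers below are illustrative placeholders, not study values.
branches = {
    "anemia":             (0.40, 0.09),
    "undernutrition":     (0.30, 0.15),
    "hepatic_disease":    (0.05, 0.30),
    "no_overt_morbidity": (0.25, 0.00),
}

def expected_disability_weight(branches):
    """Probability-weighted average disability weight across the
    mutually exclusive branches of a decision tree."""
    total_p = sum(p for p, _ in branches.values())
    assert abs(total_p - 1.0) < 1e-9, "branch probabilities must sum to 1"
    return sum(p * w for p, w in branches.values())

dw = expected_disability_weight(branches)
```

    A stochastic sensitivity analysis of the kind the paper describes would then re-draw the branch probabilities and weights from assumed distributions and recompute this expectation many times.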

  3. Calibrating a surface mass-balance model for Austfonna ice cap, Svalbard

    Science.gov (United States)

    Schuler, Thomas Vikhamar; Loe, Even; Taurisano, Andrea; Eiken, Trond; Hagen, Jon Ove; Kohler, Jack

    2007-10-01

    Austfonna (8120 km2) is by far the largest ice mass in the Svalbard archipelago. There is considerable uncertainty about its current state of balance and its possible response to climate change. Over the 2004/05 period, we collected continuous meteorological data series from the ice cap, performed mass-balance measurements using a network of stakes distributed across the ice cap and mapped the distribution of snow accumulation using ground-penetrating radar along several profile lines. These data are used to drive and test a model of the surface mass balance. The spatial accumulation pattern was derived from the snow depth profiles using regression techniques, and ablation was calculated using a temperature-index approach. Model parameters were calibrated using the available field data. Parameter calibration was complicated by the fact that different parameter combinations yield equally acceptable matches to the stake data while the resulting calculated net mass balance differs considerably. Testing model results against multiple criteria is an efficient method to cope with non-uniqueness. In doing so, a range of different data and observations was compared to several different aspects of the model results. We find a systematic underestimation of net balance for parameter combinations that predict observed ice ablation, which suggests that refreezing processes play an important role. To represent these effects in the model, a simple PMAX approach was included in its formulation. Used as a diagnostic tool, the model suggests that the surface mass balance for the period 29 April 2004 to 23 April 2005 was negative (-318 mm w.e.).
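
    The ablation and refreezing components described above can be sketched as a degree-day model with a PMAX-style cap on refreezing. The degree-day factor and pmax fraction below are illustrative guesses, not the calibrated Austfonna values.

```python
def degree_day_melt(temps_c, ddf=4.5):
    """Daily melt (mm w.e.) from a temperature-index model: melt is
    proportional to positive degree-days. The degree-day factor ddf
    (mm w.e. per positive degree-day) is an illustrative guess."""
    return [ddf * max(t, 0.0) for t in temps_c]

def net_balance(accumulation_mm, temps_c, ddf=4.5, pmax=0.6):
    """Annual net balance (mm w.e.) with a simple PMAX-style refreezing
    cap: up to a fraction pmax of the accumulation can refreeze before
    melt contributes to runoff."""
    melt = sum(degree_day_melt(temps_c, ddf))
    refrozen = min(melt, pmax * accumulation_mm)
    runoff = melt - refrozen
    return accumulation_mm - runoff
```

    Ignoring the refreezing term (pmax = 0) makes the modelled balance more negative for the same parameters, which is consistent with the systematic underestimation of net balance the authors report.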

  4. It is time to abandon "expected bladder capacity." Systematic review and new models for children's normal maximum voided volumes.

    Science.gov (United States)

    Martínez-García, Roberto; Ubeda-Sansano, Maria Isabel; Díez-Domingo, Javier; Pérez-Hoyos, Santiago; Gil-Salom, Manuel

    2014-09-01

    There is agreement to use simple formulae (expected bladder capacity and other age-based linear formulae) as the bladder capacity benchmark, but the real normal child's bladder capacity is unknown. The aims were to offer a systematic review of children's normal bladder capacity, to measure children's normal maximum voided volumes (MVVs), to construct models of MVVs and to compare them with the usual formulae. Computerized, manual and grey literature were reviewed until February 2013. In an epidemiological, observational, transversal, multicenter study, a consecutive sample of healthy children aged 5-14 years, attending Primary Care centres with no urologic abnormality, was selected. Participants filled in a 3-day frequency-volume chart. Variables were the MVVs (24-hr maximum, nocturnal and daytime maximum voided volumes); diuresis and its daytime and nighttime fractions; body-measure data; and gender. The consecutive-steps method was used in a multivariate regression model. Twelve articles fulfilled the systematic review's criteria. Five hundred and fourteen cases were analysed. Three models, one for each of the MVVs, were built; all of them were better adjusted to exponential equations. Diuresis (not age) was the most significant factor, and there was poor agreement between the MVVs and the usual formulae. Nocturnal and daytime maximum voided volumes depend on several factors, are different, and should be used with different meanings in the clinical setting. Diuresis is the main factor for bladder capacity. This is the first model for benchmarking normal MVVs with diuresis as its main factor. Current formulae are not suitable for clinical use. © 2013 Wiley Periodicals, Inc.

  5. Sensitivity Analysis and Parameter Estimation for a Reactive Transport Model of Uranium Bioremediation

    Science.gov (United States)

    Meyer, P. D.; Yabusaki, S.; Curtis, G. P.; Ye, M.; Fang, Y.

    2011-12-01

    A three-dimensional, variably-saturated flow and multicomponent biogeochemical reactive transport model of uranium bioremediation was used to generate synthetic data. The 3-D model was based on a field experiment at the U.S. Dept. of Energy Rifle Integrated Field Research Challenge site that used acetate biostimulation of indigenous metal-reducing bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. A key assumption in past modeling studies at this site was that a comprehensive reaction network could be developed largely through one-dimensional modeling. Sensitivity analyses and parameter estimation were completed for a 1-D reactive transport model abstracted from the 3-D model to test this assumption, to identify parameters with the greatest potential to contribute to model predictive uncertainty, and to evaluate model structure and data limitations. Results showed that sensitivities of key biogeochemical concentrations varied in space and time, that model nonlinearities and/or parameter interactions have a significant impact on calculated sensitivities, and that the complexity of the model's representation of processes affecting Fe(II) in the system may make it difficult to correctly attribute observed Fe(II) behavior to modeled processes. Non-uniformity of the 3-D simulated groundwater flux and averaging of the 3-D synthetic data for use as calibration targets in the 1-D modeling resulted in systematic errors in the 1-D model parameter estimates and outputs. This occurred despite using the same reaction network for 1-D modeling as used in the data-generating 3-D model. Predictive uncertainty of the 1-D model appeared to be significantly underestimated by linear parameter uncertainty estimates.
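
    The averaging bias can be illustrated with a toy example unrelated to the Rifle reaction network: calibrating a single-velocity 1-D model against concentrations averaged over streamtubes with nonuniform flux recovers a biased rate constant, by Jensen's inequality. All values below are invented for illustration.

```python
import math

# Three "streamtubes" carry a solute that decays at the same true
# first-order rate k, but at different velocities (nonuniform flux).
k_true = 0.5                  # 1/day, true rate constant
L = 10.0                      # m, travel distance
velocities = [0.5, 1.0, 2.0]  # m/day

# Analytic outlet concentration in each tube: C/C0 = exp(-k * L / v).
c_out = [math.exp(-k_true * L / v) for v in velocities]

# Standard abstraction step: average the 3-D-like outputs, then
# calibrate a 1-D model that uses a single mean velocity.
c_avg = sum(c_out) / len(c_out)
v_mean = sum(velocities) / len(velocities)
k_est = -math.log(c_avg) * v_mean / L   # fitted 1-D rate constant

# Because exp(-kL/v) is convex in 1/v, the averaged data look less
# reacted than the mean tube, so k_est systematically underestimates
# k_true even though the "reaction network" is identical.
```

    With these numbers k_est comes out around 0.41/day against a true 0.5/day, a purely structural error of the kind the abstract attributes to averaging the 3-D synthetic data.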

  6. The Reporting Quality of Systematic Reviews and Meta-Analyses in Industrial and Organizational Psychology: A Systematic Review.

    Science.gov (United States)

    Schalken, Naomi; Rietbergen, Charlotte

    2017-01-01

    Objective: The goal of this systematic review was to examine the reporting quality of the method sections of quantitative systematic reviews and meta-analyses published from 2009 to 2016 in the field of industrial and organizational psychology, with the help of the Meta-Analysis Reporting Standards (MARS), and to update previous research such as the studies of Aytug et al. (2012) and Dieckmann et al. (2009). Methods: A systematic search for quantitative systematic reviews and meta-analyses was conducted in the top 10 journals in the field of industrial and organizational psychology between January 2009 and April 2016. Data were extracted on study characteristics and on the items of the method section of MARS. A cross-classified multilevel model was analyzed to test whether publication year and journal impact factor (JIF) were associated with the reporting quality scores of articles. Results: Compliance with MARS in the method section was generally inadequate in the random sample of 120 articles, and variation existed in the reporting of items. There were no significant effects of publication year or JIF on the reporting quality scores of articles. Conclusions: The reporting quality of the method sections of systematic reviews and meta-analyses was still insufficient; we therefore recommend that researchers improve the reporting in their articles by using reporting standards such as MARS.

  7. Systematic model calculations of the hyperfine structure in light and heavy ions

    CERN Document Server

    Tomaselli, M; Nörtershäuser, W; Ewald, G; Sánchez, R; Fritzsche, S; Karshenboim, S G

    2003-01-01

    Systematic model calculations are performed for the magnetization distributions and the hyperfine structure (HFS) of light and heavy ions with masses close to A ≈ 6, 208 and 235 to test the interplay of nuclear and atomic structure. A high-precision measurement of lithium isotope shifts (IS) for a suitable transition, combined with an accurate theoretical evaluation of the mass-shift contribution in the respective transition, can be used to determine the root-mean-square (rms) nuclear-charge radius of Li isotopes, particularly of the halo nucleus ¹¹Li. An experiment of this type is currently underway at GSI in Darmstadt and ISOLDE at CERN. However, the field-shift contributions between the different isotopes can be evaluated using the results obtained for the charge radii, thus casting, with knowledge of the ratio of the HFS constants to the magnetic moments, new light on the IS theory. For heavy charged ions the calculated n-body magnetization distributions reproduce the HFS of hydrogen-like ions well if QED...

  8. California Wintertime Precipitation in Regional and Global Climate Models

    Energy Technology Data Exchange (ETDEWEB)

    Caldwell, P M

    2009-04-27

    In this paper, wintertime precipitation from a variety of observational datasets, regional climate models (RCMs), and general circulation models (GCMs) is averaged over the state of California (CA) and compared. Several averaging methodologies are considered and all are found to give similar values when model grid spacing is less than 3°. This suggests that CA is a reasonable size for regional intercomparisons using modern GCMs. Results show that reanalysis-forced RCMs tend to significantly overpredict CA precipitation. This appears to be due mainly to overprediction of extreme events; RCM precipitation frequency is generally underpredicted. Overprediction is also reflected in wintertime precipitation variability, which tends to be too high for RCMs on both daily and interannual scales. Wintertime precipitation in most (but not all) GCMs is underestimated. This is in contrast to previous studies based on global blended gauge/satellite observations, which are shown here to underestimate precipitation relative to higher-resolution gauge-only datasets. Several GCMs provide reasonable daily precipitation distributions, a trait which doesn't seem tied to model resolution. GCM daily and interannual variability is generally underpredicted.

  9. Assessment of the aerosol optics component of the coupled WRF-CMAQ model using CARES field campaign data and a single column model

    Science.gov (United States)

    Gan, Chuen Meei; Binkowski, Francis; Pleim, Jonathan; Xing, Jia; Wong, David; Mathur, Rohit; Gilliam, Robert

    2015-08-01

    The Carbonaceous Aerosols and Radiative Effects Study (CARES), a field campaign held in central California in June 2010, provides a unique opportunity to assess the aerosol optics modeling component of the two-way coupled Weather Research and Forecasting (WRF) - Community Multiscale Air Quality (CMAQ) model. This campaign included comprehensive measurements of aerosol composition and optical properties at two ground sites and aloft from instrumentation on board two aircraft. A single column model (SCM) was developed to evaluate the accuracy and consistency of the coupled model using both observational and model information. Two cases (June 14 and 24, 2010) are examined in this study. The results show that although the coupled WRF-CMAQ model underestimated aerosol extinction relative to these measurements, when measured concentrations and characteristics of ambient aerosols were used as input to constrain the SCM calculations, the estimated extinction profiles agreed well with aircraft observations. One possible cause of the WRF-CMAQ extinction errors is that the simulated sea salt (SS) in the accumulation mode is very low in both cases, while the observations indicate a considerable amount of SS. Also, a significant amount of organic carbon (OC) is present in the measurements; however, in the current WRF-CMAQ model all OC is considered insoluble, whereas most secondary organic aerosol is water soluble. In addition, the model does not consider external mixing and hygroscopic effects of water-soluble OC, which can impact the extinction calculations. In conclusion, the constrained SCM results indicate that the scattering portion of the aerosol optics calculations is working well, although the absorption calculation could not be effectively evaluated. However, a few factors such as greatly underestimated accumulation-mode SS, misrepresentation of water-soluble OC, and incomplete mixing state representation in the full coupled model

  10. Systematic sampling with errors in sample locations

    DEFF Research Database (Denmark)

    Ziegel, Johanna; Baddeley, Adrian; Dorph-Petersen, Karl-Anton

    2010-01-01

    Systematic sampling of points in continuous space is widely used in microscopy and spatial surveys. Classical theory provides asymptotic expressions for the variance of estimators based on systematic sampling as the grid spacing decreases. However, the classical theory assumes that the sample grid is exactly periodic; real physical sampling procedures may introduce errors in the placement of the sample points. This paper studies the effect of errors in sample positioning on the variance of estimators in the case of one-dimensional systematic sampling. First we sketch a general approach to variance analysis using point process methods. We then analyze three different models for the error process, calculate exact expressions for the variances, and derive asymptotic variances. Errors in the placement of sample points can lead to substantial inflation of the variance, dampening of zitterbewegung...
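
    The variance inflation can be demonstrated with a small Monte Carlo sketch (one-dimensional, Gaussian placement errors; the integrand, grid size and error scale are illustrative choices, not the paper's models): jittering the locations of an otherwise periodic sample grid inflates the variance of the integral estimator.

```python
import math
import random

def systematic_estimate(f, n, jitter_sd=0.0, rng=random):
    """Estimate the integral of f over [0, 1] by systematic sampling
    with a uniform random start; each sample point is perturbed by an
    independent Gaussian placement error with s.d. jitter_sd."""
    h = 1.0 / n
    u = rng.random() * h
    total = 0.0
    for i in range(n):
        x = u + i * h + rng.gauss(0.0, jitter_sd)
        total += f(x % 1.0)   # wrap around the unit interval
    return h * total

def mc_variance(f, n, jitter_sd, reps=2000, seed=1):
    """Monte Carlo variance of the systematic estimator over many
    independent random starts (and placement errors)."""
    rng = random.Random(seed)
    ests = [systematic_estimate(f, n, jitter_sd, rng) for _ in range(reps)]
    m = sum(ests) / reps
    return sum((e - m) ** 2 for e in ests) / (reps - 1)

f = lambda x: math.sin(2 * math.pi * x) ** 2   # smooth periodic integrand
var_exact = mc_variance(f, 20, 0.0)    # perfectly periodic grid
var_jitter = mc_variance(f, 20, 0.01)  # small placement errors
```

    For this integrand the exact grid integrates essentially perfectly (the grid resolves the period), so even a 1% placement error raises the estimator variance by many orders of magnitude, illustrating the inflation the abstract describes.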

  11. Modeling air pollutant emissions from Indian auto-rickshaws: Model development and implications for fleet emission rate estimates

    Science.gov (United States)

    Grieshop, Andrew P.; Boland, Daniel; Reynolds, Conor C. O.; Gouge, Brian; Apte, Joshua S.; Rogak, Steven N.; Kandlikar, Milind

    2012-04-01

    Chassis dynamometer tests were conducted on 40 Indian auto-rickshaws with 3 different fuel-engine combinations operating on the Indian Drive Cycle (IDC). Second-by-second (1 Hz) data were collected and used to develop velocity-acceleration look-up table models for fuel consumption and emissions of CO2, CO, total hydrocarbons (THC), oxides of nitrogen (NOx) and fine particulate matter (PM2.5) for each fuel-engine combination. Models were constructed from group-average vehicle activity and emissions data in order to represent the performance of a 'typical' vehicle. The models accurately estimated full-cycle emissions for most species, though pollutants with more variable emission rates (e.g., PM2.5) were associated with larger errors. Vehicle emissions data showed large variability for single vehicles ('intra-vehicle variability') and within the test group ('inter-vehicle variability'), complicating the development of a single model to represent a vehicle population. To evaluate the impact of this variability, sensitivity analyses were conducted using vehicle activity data other than the IDC as model input. Inter-vehicle variability dominated the uncertainty in vehicle emission modeling. 'Leave-one-out' analyses indicated that the model outputs were relatively insensitive to the specific sample of vehicles and that the vehicle samples were likely a reasonable representation of the Delhi fleet. Intra-vehicle variability in emissions was also substantial, though it had a relatively minor impact on model performance. The models were used to assess whether the IDC, used for emission factor development in India, accurately represents emissions from on-road driving. Modeling based on Global Positioning System (GPS) activity data from real-world auto-rickshaws suggests that, relative to on-road vehicles in Delhi, the IDC systematically underestimates fuel use and emissions; real-world auto-rickshaws consume 15% more fuel and emit 49% more THC and 16% more PM2.5.
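
    A velocity-acceleration look-up table model of this kind can be sketched as follows. The bin widths, units and record format here are assumptions for illustration, not the paper's specification: 1 Hz records are binned by (velocity, acceleration), the mean emission rate is stored per bin, and a drive cycle is scored by summing the binned rates second by second.

```python
from collections import defaultdict

def build_lookup(records, v_bin=5.0, a_bin=0.5):
    """Build a look-up table from 1 Hz records of
    (velocity_kmh, accel_ms2, emission_g_per_s): mean emission rate
    per (velocity, acceleration) bin. Bin widths are illustrative."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for v, a, e in records:
        key = (int(v // v_bin), int(a // a_bin))
        sums[key] += e
        counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}

def predict_cycle(table, drive_cycle, v_bin=5.0, a_bin=0.5):
    """Total emissions (g) over a 1 Hz drive cycle of (velocity,
    acceleration) pairs. Assumes the cycle only visits bins that were
    populated during training; a production model would need a
    fallback for empty bins."""
    return sum(table[(int(v // v_bin), int(a // a_bin))]
               for v, a in drive_cycle)
```

    Scoring a GPS-derived real-world cycle and the IDC with the same table is then a direct way to compare the two activity patterns, which is the comparison the abstract reports.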

  12. The aerosol distribution in Europe derived with the Community Multiscale Air Quality (CMAQ) model: comparison to near-surface in situ and sunphotometer measurements

    Directory of Open Access Journals (Sweden)

    V. Matthias

    2008-09-01

    The aerosol distribution in Europe was simulated with the Community Multiscale Air Quality (CMAQ) model system version 4.5 for the years 2000 and 2001. The results were compared with daily averages of PM10 measurements taken in the framework of EMEP and with aerosol optical depth (AOD) values measured within AERONET. The modelled total aerosol mass is typically about 30–60% lower than the corresponding measurements. However, a comparison of the chemical composition of the aerosol revealed considerably better agreement between the modelled and the measured aerosol components for ammonium, nitrate and sulfate, which are on average only 15–20% underestimated. Slightly worse agreement was determined for sea salt, which was available at only two sites. The largest discrepancies result from the aerosol mass that was not chemically specified by the measurements. The agreement between measurements and model is better in winter than in summer. The modelled organic aerosol mass is higher in summer than in winter, but it is significantly underestimated by the model. This could be one of the main reasons for the discrepancies between measurements and model results; the other is that primary coarse particles are underestimated in the emissions. The probability distribution function of the PM10 measurements follows a log-normal distribution at most sites. The model is only able to reproduce this distribution function at non-coastal low-altitude stations. The AOD derived from the model results is 20–70% lower than the values observed within AERONET. This is mainly attributed to the missing aerosol mass in the model. The day-to-day variability of the AOD and the log-normal distribution functions are quite well reproduced by the model. The seasonality, on the other hand, is underestimated by the model results, because better agreement is achieved in winter.

  13. Systematics of nuclear mass and level density formulas

    Energy Technology Data Exchange (ETDEWEB)

    Nakamura, Hisashi [Fuji Electric Co. Ltd., Kawasaki, Kanagawa (Japan)

    1998-03-01

    The phenomenological models of nuclear mass and level density are closely related to each other; the nuclear ground- and excited-state properties are described using parameter systematics of the mass and level density formulas. The main aim of this work is to provide, in an analytical framework, improved energy-dependent shell, pairing and deformation corrections, generalized to the collective enhancement factors, which offer a systematic prescription over a great number of nuclear reaction cross sections. The new formulas are shown to be in close agreement not only with the empirical nuclear mass data but also with the measured slow-neutron resonance spacings and with the experimental systematics observed in the excitation-energy dependent properties. (author)

  14. Quality of systematic reviews in pediatric oncology - A systematic review

    NARCIS (Netherlands)

    Lundh, Andreas; Knijnenburg, Sebastiaan L.; Jørgensen, Anders W.; van Dalen, Elvira C.; Kremer, Leontien C. M.

    2009-01-01

    Background: To ensure evidence-based decision making in pediatric oncology systematic reviews are necessary. The objective of our study was to evaluate the methodological quality of all currently existing systematic reviews in pediatric oncology. Methods: We identified eligible systematic reviews

  15. Stillbirth With Group B Streptococcus Disease Worldwide: Systematic Review and Meta-analyses.

    Science.gov (United States)

    Seale, Anna C; Blencowe, Hannah; Bianchi-Jassir, Fiorella; Embleton, Nicholas; Bassat, Quique; Ordi, Jaume; Menéndez, Clara; Cutland, Clare; Briner, Carmen; Berkley, James A; Lawn, Joy E; Baker, Carol J; Bartlett, Linda; Gravett, Michael G; Heath, Paul T; Ip, Margaret; Le Doare, Kirsty; Rubens, Craig E; Saha, Samir K; Schrag, Stephanie; Meulen, Ajoke Sobanjo-Ter; Vekemans, Johan; Madhi, Shabir A

    2017-11-06

    There are an estimated 2.6 million stillbirths each year, many of which are due to infections, especially in low- and middle-income contexts. This paper, the eighth in a series on the burden of group B streptococcal (GBS) disease, aims to estimate the percentage of stillbirths associated with GBS disease. We conducted systematic literature reviews (PubMed/Medline, Embase, Literatura Latino-Americana e do Caribe em Ciências da Saúde, World Health Organization Library Information System, and Scopus) and sought unpublished data from investigator groups. Studies were included if they reported original data on stillbirths (predominantly ≥28 weeks' gestation or ≥1000 g, with GBS isolated from a sterile site) as a percentage of total stillbirths. We did meta-analyses to derive pooled estimates of the percentage of GBS-associated stillbirths, regionally and worldwide for recent datasets. We included 14 studies from any period, 5 with recent data (after 2000). There were no data from Asia. We estimated that 1% (95% confidence interval [CI], 0-2%) of all stillbirths in developed countries and 4% (95% CI, 2%-6%) in Africa were associated with GBS. GBS is likely an important cause of stillbirth, especially in Africa. However, data are limited in terms of geographic spread, with no data from Asia, and cases worldwide are probably underestimated due to incomplete case ascertainment. More data, using standardized, systematic methods, are critical, particularly from low- and middle-income contexts where the highest burden of stillbirths occurs. These data are essential to inform interventions, such as maternal GBS vaccination. © The Author 2017. Published by Oxford University Press for the Infectious Diseases Society of America.

  16. Climate change effects on extreme flows of water supply area in Istanbul: utility of regional climate models and downscaling method.

    Science.gov (United States)

    Kara, Fatih; Yucel, Ismail

    2015-09-01

    This study investigates the climate change impact on changes in mean and extreme flows under current and future climate conditions in the Omerli Basin of Istanbul, Turkey. Output from 15 regional climate models from the EU-ENSEMBLES project and a downscaling method based on local implications from geophysical variables were used for the comparative analyses. An automated calibration algorithm was used to optimize the parameters of the Hydrologiska Byråns Vattenbalansavdelning (HBV) model for the study catchment using observed daily temperature and precipitation. The calibrated HBV model was implemented to simulate daily flows using precipitation and temperature data from climate models, with and without the downscaling method, for the reference (1960-1990) and scenario (2071-2100) periods. Flood indices were derived from daily flows, and their changes throughout the four seasons and the year were evaluated by comparing their values derived from simulations corresponding to the current and future climate. All climate models strongly underestimate precipitation, while downscaling improves this underestimation, particularly for extreme events. Depending on the precipitation input from climate models with and without downscaling, the HBV model also significantly underestimates daily mean and extreme flows through all seasons; however, this underestimation is markedly improved for all seasons, especially spring and winter, through the use of downscaled inputs. Changes in extreme flows from the reference to the future period increased for winter and spring and decreased for fall and summer, and these changes were more significant with downscaled inputs. With respect to the current climate, higher flow magnitudes for given return periods will be experienced in the future; hence, in the planning of the Omerli reservoir, effective storage and water use should be sustained.

  17. Systematic Assessment for University Sexuality Programming.

    Science.gov (United States)

    Westefeld, John S.; Winkelpleck, Judy M.

    1982-01-01

    Suggests systematic empirical assessment is needed to plan university sexuality programing. Proposes the traditional approach of asking about students' attitudes, knowledge, and behavior is useful for developing specific programing content. Presents an assessment model emphasizing assessment of students' desires for sexuality programing in terms…

  18. Accounting for the decrease of photosystem photochemical efficiency with increasing irradiance to estimate quantum yield of leaf photosynthesis.

    Science.gov (United States)

    Yin, Xinyou; Belay, Daniel W; van der Putten, Peter E L; Struik, Paul C

    2014-12-01

    Maximum quantum yield for leaf CO2 assimilation under limiting light conditions (Φ CO2LL) is commonly estimated as the slope of the linear regression of net photosynthetic rate against absorbed irradiance over a range of low-irradiance conditions. Methodological errors associated with this estimation have often been attributed either to light absorptance by non-photosynthetic pigments or to some data points being beyond the linear range of the irradiance response, both causing an underestimation of Φ CO2LL. We demonstrate here that a decrease in photosystem (PS) photochemical efficiency with increasing irradiance, even at very low levels, is another source of error that causes a systematic underestimation of Φ CO2LL. A model method accounting for this error was developed, and was used to estimate Φ CO2LL from simultaneous measurements of gas exchange and chlorophyll fluorescence on leaves using various combinations of species, CO2, O2, or leaf temperature levels. The conventional linear regression method underestimated Φ CO2LL by ca. 10-15%. Differences in the estimated Φ CO2LL among measurement conditions were generally accounted for by different levels of photorespiration as described by the Farquhar-von Caemmerer-Berry model. However, our data revealed that the temperature dependence of PSII photochemical efficiency under low light was an additional factor that should be accounted for in the model.
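    The systematic bias described above can be reproduced with a toy simulation: if photochemical efficiency declines linearly with irradiance, an ordinary least-squares line through the low-light points recovers a slope below the true zero-light quantum yield. All parameter values here are illustrative assumptions, not the study's data:

```python
# Toy light-response: the true quantum yield phi0 applies only at zero light;
# efficiency declines linearly with irradiance even at these low levels.
phi0 = 0.08    # assumed true quantum yield (mol CO2 / mol photons)
k = 1.5e-3     # assumed fractional efficiency decline per umol m-2 s-1
Rd = 1.0       # assumed dark respiration (umol m-2 s-1)

irradiances = [0, 20, 40, 60, 80, 100]                  # absorbed light levels
A = [phi0 * (1 - k * I) * I - Rd for I in irradiances]  # net assimilation

# Ordinary least-squares slope of A against I (the conventional estimate)
n = len(irradiances)
mI = sum(irradiances) / n
mA = sum(A) / n
slope = sum((I - mI) * (a - mA) for I, a in zip(irradiances, A)) / \
        sum((I - mI) ** 2 for I in irradiances)

print(f"true quantum yield: {phi0:.4f}, regression estimate: {slope:.4f}")
```

    With these assumed numbers the fitted slope comes out about 15% below phi0, in the range the abstract reports; the bias scales with how fast efficiency declines over the fitted irradiance window.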

  19. Systematic Equation Formulation

    DEFF Research Database (Denmark)

    Lindberg, Erik

    2007-01-01

    A tutorial giving a very simple introduction to the set-up of the equations used as a model for an electrical/electronic circuit. The aim is to find a method which is as simple and general as possible with respect to implementation in a computer program. The “Modified Nodal Approach”, MNA, and the “Controlled Source Approach”, CSA, for systematic equation formulation are investigated. It is suggested that the kernel of the PSpice program based on MNA is reprogrammed.

  20. A Conceptual Model of Agile Software Development in a Safety-Critical Context: A Systematic Literature Review

    DEFF Research Database (Denmark)

    Tordrup Heeager, Lise; Nielsen, Peter Axel

    2018-01-01

    … processes or agile processes that are purportedly faster and promise to lead to better products. Objective: To identify the issues and disputes in agile development of safety-critical software and the key qualities as found in the extant research literature. Method: We conducted a systematic literature review as an interpretive study following a research design to search, assess, extract, group, and understand the results of the found studies. Results: There are key issues and propositions that we elicit from the literature and combine into a conceptual model for understanding the foundational challenges of agile software development of safety-critical systems. The conceptual model consists of four problematic practice areas and five relationships, which we find to be even more important than the problematic areas. From this review, we suggest that there are important research gaps that need…

  1. [Taxonomic theory for non-classical systematics].

    Science.gov (United States)

    Pavlinov, I Ia

    2012-01-01

    Outlined briefly are basic principles of construing general taxonomic theory for biological systematics considered in the context of non-classical scientific paradigm. The necessity of such kind of theory is substantiated, and some key points of its elaboration are exposed: its interpretation as a framework concept for the partial taxonomic theories in various schools of systematics; elaboration of idea of cognitive situation including three interrelated components, namely subject, object, and epistemic ones; its construing as a content-wisely interpreted quasi-axiomatics, with strong structuring of its conceptual space including demarcation between axioms and inferring rules; its construing as a "conceptual pyramid" of concepts of various levels of generality; inclusion of a basic model into definition of the taxonomic system (classification) regulating its content. Two problems are indicated as fundamental: definition of taxonomic diversity as a subject domain for the systematics as a whole; definition of onto-epistemological status of taxonomic system (classification) in general and of taxa in particular.

  2. Leaders' experiences and perceptions implementing activity-based funding and pay-for-performance hospital funding models: A systematic review.

    Science.gov (United States)

    Baxter, Pamela E; Hewko, Sarah J; Pfaff, Kathryn A; Cleghorn, Laura; Cunningham, Barbara J; Elston, Dawn; Cummings, Greta G

    2015-08-01

    Providing cost-effective, accessible, high quality patient care is a challenge to governments and health care delivery systems across the globe. In response to this challenge, two types of hospital funding models have been widely implemented: (1) activity-based funding (ABF) and (2) pay-for-performance (P4P). Although health care leaders play a critical role in the implementation of these funding models, to date their perspectives have not been systematically examined. The purpose of this systematic review was to gain a better understanding of the experiences of health care leaders implementing hospital funding reforms within Organisation for Economic Cooperation and Development countries. We searched literature from 1982 to 2013 using: Medline, EMBASE, CINAHL, Academic Search Complete, Academic Search Elite, and Business Source Complete. Two independent reviewers screened titles, abstracts and full texts using predefined criteria. We included 2 mixed methods and 12 qualitative studies. Thematic analysis was used in synthesizing results. Five common themes and multiple subthemes emerged. Themes include: pre-requisites for success, perceived benefits, barriers/challenges, unintended consequences, and leader recommendations. Irrespective of which type of hospital funding reform was implemented, health care leaders described a complex process requiring the following: organizational commitment; adequate infrastructure; human, financial and information technology resources; change champions and a personal commitment to quality care. Crown Copyright © 2015. Published by Elsevier Ireland Ltd. All rights reserved.

  3. Assessing harmful effects in systematic reviews

    Directory of Open Access Journals (Sweden)

    Woolacott Nerys F

    2004-07-01

    Full Text Available Abstract Background Balanced decisions about health care interventions require reliable evidence on harms as well as benefits. Most systematic reviews focus on efficacy and randomised trials, for which the methodology is well established. Methods to systematically review harmful effects are less well developed and there are few sources of guidance for researchers. We present our own recent experience of conducting systematic reviews of harmful effects and make suggestions for future practice and further research. Methods We described and compared the methods used in three systematic reviews. Our evaluation focused on the review question, study designs and quality assessment. Results One review question focused on providing information on specific harmful effects to furnish an economic model, the other two addressed much broader questions. All three reviews included randomised and observational data, although each defined the inclusion criteria differently. Standard methods were used to assess study quality. Various practical problems were encountered in applying the study design inclusion criteria and assessing quality, mainly because of poor study design, inadequate reporting and the limitations of existing tools. All three reviews generated a large volume of work that did not yield much useful information for health care decision makers. The key areas for improvement we identified were focusing the review question and developing methods for quality assessment of studies of harmful effects. Conclusions Systematic reviews of harmful effects are more likely to yield information pertinent to clinical decision-making if they address a focused question. This will enable clear decisions to be made about the type of research to include in the review. The methodology for assessing the quality of harmful effects data in systematic reviews requires further development.

  4. Monte Carlo dose calculations and radiobiological modelling: analysis of the effect of the statistical noise of the dose distribution on the probability of tumour control

    International Nuclear Information System (INIS)

    Buffa, Francesca M.

    2000-01-01

    The aim of this work is to investigate the influence of the statistical fluctuations of Monte Carlo (MC) dose distributions on the dose volume histograms (DVHs) and radiobiological models, in particular the Poisson model for tumour control probability (tcp). The MC matrix is characterized by a mean dose in each scoring voxel, d, and a statistical error on the mean dose, σ_d; whilst the quantities d and σ_d depend on many statistical and physical parameters, here we consider only their dependence on the phantom voxel size and the number of histories from the radiation source. Dose distributions from high-energy photon beams have been analysed. It has been found that the DVH broadens when increasing the statistical noise of the dose distribution, and the tcp calculation systematically underestimates the real tumour control value, defined here as the value of tumour control when the statistical error of the dose distribution tends to zero. When increasing the number of energy deposition events, either by increasing the voxel dimensions or increasing the number of histories from the source, the DVH broadening decreases and tcp converges to the 'correct' value. It is shown that the underestimation of the tcp due to the noise in the dose distribution depends on the degree of heterogeneity of the radiobiological parameters over the population; in particular this error decreases with increasing biological heterogeneity, whereas it becomes significant in the hypothesis of a radiosensitivity assay for single patients, or for subgroups of patients. It has been found, for example, that when the voxel dimension is changed from a cube with sides of 0.5 cm to a cube with sides of 0.25 cm (with a fixed number of 10^8 histories from the source), the systematic error in the tcp calculation is about 75% in the homogeneous hypothesis, and it decreases to a minimum value of about 15% in a case of high radiobiological heterogeneity. The possibility of using the error on the
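    The direction of the bias follows from the convexity of cell survival in dose: zero-mean noise on the voxel doses raises the expected number of surviving clonogens and therefore lowers the Poisson tcp. A minimal sketch of this effect (radiosensitivity, clonogen number, and noise level are assumed values, not taken from the study):

```python
import math
import random

random.seed(0)

alpha = 0.35                # assumed radiosensitivity (1/Gy)
clonogens_per_voxel = 1e7   # assumed clonogen number per voxel
dose = [60.0] * 100         # uniform dose (Gy) over 100 scoring voxels

def tcp(d):
    """Poisson tcp: exp(-expected number of surviving clonogens)."""
    surviving = sum(clonogens_per_voxel * math.exp(-alpha * di) for di in d)
    return math.exp(-surviving)

tcp_exact = tcp(dose)       # the limit where the statistical error is zero

# Mean tcp over noisy realisations of the same mean dose distribution
sigma = 2.0                 # assumed statistical error per voxel (Gy)
tcp_noisy = sum(
    tcp([di + random.gauss(0.0, sigma) for di in dose]) for _ in range(2000)
) / 2000

print(f"noise-free tcp: {tcp_exact:.3f}  noisy-mean tcp: {tcp_noisy:.3f}")
```

    Because exp(-alpha*d) is convex, averaging it over noisy doses always exceeds evaluating it at the mean dose (Jensen's inequality), so the noisy tcp estimate sits below the noise-free value, mirroring the systematic underestimation the abstract reports.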

  5. Religion and Spirituality's Influences on HIV Syndemics Among MSM: A Systematic Review and Conceptual Model.

    Science.gov (United States)

    Lassiter, Jonathan M; Parsons, Jeffrey T

    2016-02-01

    This paper presents a systematic review of the quantitative HIV research that assessed the relationships between religion, spirituality, HIV syndemics, and individual HIV syndemics-related health conditions (e.g. depression, substance abuse, HIV risk) among men who have sex with men (MSM) in the United States. No quantitative studies were found that assessed the relationships between HIV syndemics, religion, and spirituality. Nine studies, with 13 statistical analyses, were found that examined the relationships between individual HIV syndemics-related health conditions, religion, and spirituality. Among the 13 analyses, religion and spirituality were found to have mixed relationships with HIV syndemics-related health conditions (6 nonsignificant associations; 5 negative associations; 2 positive associations). Given the overall lack of inclusion of religion and spirituality in HIV syndemics research, a conceptual model that hypothesizes the potential interactions of religion and spirituality with HIV syndemics-related health conditions is presented. The implications of the model for MSM's health are outlined.

  6. Effects of Photobiomodulation Therapy on Oxidative Stress in Muscle Injury Animal Models: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Solange Almeida dos Santos

    2017-01-01

    Full Text Available This systematic review was performed to identify the role of photobiomodulation therapy in experimental muscle injury models linked to induced oxidative stress. EMBASE, PubMed, and CINAHL were searched for studies published from January 2006 to January 2016 in the areas of laser and oxidative stress. Any animal model using photobiomodulation therapy to modulate oxidative stress was included in the analysis. Eight studies were selected from 68 original articles targeted on laser irradiation and oxidative stress. Articles were critically assessed by two independent raters with a structured tool for rating the research quality. Although the small number of studies limits conclusions, the current literature indicates that photobiomodulation therapy can be an effective short-term approach to reduce oxidative stress markers (e.g., thiobarbituric acid-reactive substances) and to increase antioxidant substances (e.g., catalase, glutathione peroxidase, and superoxide dismutase). However, there is nonuniformity in the terminology used to describe the parameters and dose for low-level laser treatment.

  7. Discounting the duration of bolus exposure in impedance testing underestimates acid reflux.

    Science.gov (United States)

    Vikneswaran, Namasivayam; Murray, Joseph A

    2016-06-08

    Combined impedance-pH testing (MII) allows for detection of reflux episodes regardless of pH. However, impedance-based diagnosis of reflux may not routinely account for the duration of the reflux episode. We hypothesize that impedance testing may be less sensitive than pH testing in detecting acid reflux off therapy as a result of discounting duration of exposure. Baseline characteristics and reflux parameters of MII studies performed off anti-secretory medications were analyzed. Studies on acid suppressive medication and those with recording times less than 20 h or low baseline impedance were excluded. A total of 73 consecutive MII studies were analyzed, of which 31 had elevated acid exposure while 16 were abnormal by impedance criteria. MII testing off therapy was more likely to be abnormal by pH criteria (percent time pH < 4) than by impedance criteria for reflux [42 vs 22 % (p = 0.02)]. Acid exposure (percent time pH < 4) was abnormal more often than the number of … acid reflux episodes [42 vs 34 % (p …)]. The median acid clearance time (pH-detected) was significantly longer than the median bolus clearance time (impedance-detected) in the total recording [98.7 s vs 12.6 s (p …)], and the ratio of the median acid clearance time (pH-detected) to the median bolus clearance time (impedance-detected) was significantly higher in the recumbent position compared to the upright position [11. vs 5.3 (p = 0.01)]. Ambulatory impedance testing underestimates acid reflux compared to esophageal acid exposure by discounting the prolonged period of mucosal contact with each acid reflux episode, particularly in the recumbent position.

  8. Exploration of the Drosophila buzzatii transposable element content suggests underestimation of repeats in Drosophila genomes.

    Science.gov (United States)

    Rius, Nuria; Guillén, Yolanda; Delprat, Alejandra; Kapusta, Aurélie; Feschotte, Cédric; Ruiz, Alfredo

    2016-05-10

    Many new Drosophila genomes have been sequenced in recent years using new-generation sequencing platforms and assembly methods. Transposable elements (TEs), being repetitive sequences, are often misassembled, especially in genomes sequenced with short reads. Consequently, the mobile fraction of many of the new genomes has not been analyzed in detail or compared with that of other genomes sequenced with different methods, which could shed light on genome and TE evolution. Here we compare the TE content of three genomes: D. buzzatii st-1, j-19, and D. mojavensis. We have sequenced a new D. buzzatii genome (j-19) that complements the D. buzzatii reference genome (st-1) already published, and compared their TE contents with that of D. mojavensis. We found an underestimation of TE sequences in Drosophila genus NGS-genomes when compared to Sanger-genomes. To be able to compare genomes sequenced with different technologies, we developed a coverage-based method and applied it to the D. buzzatii st-1 and j-19 genomes. Between 10.85 and 11.16 % of the D. buzzatii st-1 genome is made up of TEs, between 7 and 7.5 % of the D. buzzatii j-19 genome, while TEs represent 15.35 % of the D. mojavensis genome. Helitrons are the most abundant order in the three genomes. TEs in D. buzzatii are less abundant than in D. mojavensis, as expected from the positive correlation between genome size and TE content. However, TEs alone do not explain the genome size difference. TEs accumulate in the dot chromosomes and proximal regions of D. buzzatii and D. mojavensis chromosomes. We also report a significantly higher TE density in the D. buzzatii and D. mojavensis X chromosomes, which is not expected under the current models. Our easy-to-use correction method allowed us to identify recently active families in D. buzzatii st-1 belonging to the LTR-retrotransposon superfamily Gypsy.
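    A coverage-based correction of the kind mentioned can be sketched as follows: repeat copies collapsed by the assembler carry excess read depth, so scaling each family's assembled length by its depth relative to the genome-wide background approximates the repeat content actually present. The family names and numbers below are hypothetical, and this shows only the general principle, not the authors' exact procedure:

```python
# Hypothetical per-family numbers; a TE family collapsed in the assembly
# shows excess read depth over its assembled copies.
genome_wide_depth = 30.0      # assumed median read depth across the assembly

te_families = {
    # family: (assembled length in bp, median read depth over that family)
    "Gypsy-like": (150_000, 75.0),    # collapsed repeats -> excess depth
    "Helitron-like": (400_000, 33.0),
}

def corrected_length(assembled_bp, family_depth, background_depth):
    """Scale assembled length by the depth ratio to approximate the
    amount of TE sequence actually present in the sequenced genome."""
    return assembled_bp * family_depth / background_depth

for fam, (bp, depth) in te_families.items():
    print(fam, round(corrected_length(bp, depth, genome_wide_depth)))
```

    A depth ratio well above 1 flags a family as underrepresented in the assembly, which is how short-read genomes can come to report less TE content than Sanger-era assemblies of comparable genomes.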

  9. Refinement of Modeled Aqueous-Phase Sulfate Production via the Fe- and Mn-Catalyzed Oxidation Pathway

    Directory of Open Access Journals (Sweden)

    Syuichi Itahashi

    2018-04-01

    Full Text Available We refined the aqueous-phase sulfate (SO42−) production in the state-of-the-art Community Multiscale Air Quality (CMAQ) model during the Japanese model inter-comparison project, known as Japan’s Study for Reference Air Quality Modeling (J-STREAM). In Japan, SO42− is the major component of PM2.5, and CMAQ reproduces the observed seasonal variation of SO42− with the summer maxima and winter minima. However, CMAQ underestimates the concentration during winter over Japan. Based on a review of the current modeling system, we identified a possible reason as being the inadequate aqueous-phase SO42− production by Fe- and Mn-catalyzed O2 oxidation. This is because these trace metals are not properly included in the Asian emission inventories. Fe and Mn observations over Japan showed that the model concentrations based on the latest Japanese emission inventory were substantially underestimated. Thus, we conducted sensitivity simulations in which the modeled Fe and Mn concentrations were adjusted to the observed levels, the Fe and Mn solubilities were increased, and the oxidation rate constant was revised. Adjusting the concentrations increased the SO42− concentration during winter, as did increasing the solubilities and revising the rate constant to consider pH dependencies. Statistical analysis showed that these sensitivity simulations improved model performance. The approach adopted in this study can partly resolve the underestimation of the SO42− concentration during winter. From our findings, we demonstrated the importance of developing and evaluating trace metal emission inventories in Asia.

  10. Scale interactions on diurnal to seasonal timescales and their relevance to model systematic errors

    Directory of Open Access Journals (Sweden)

    G. Yang

    2003-06-01

    Full Text Available Examples of current research into systematic errors in climate models are used to demonstrate the importance of scale interactions on diurnal, intraseasonal and seasonal timescales for the mean and variability of the tropical climate system. It has enabled some conclusions to be drawn about possible processes that may need to be represented, and some recommendations to be made regarding model improvements. It has been shown that the Maritime Continent heat source is a major driver of the global circulation, yet is poorly represented in GCMs. A new climatology of the diurnal cycle has been used to provide compelling evidence of important land-sea breeze and gravity wave effects, which may play a crucial role in the heat and moisture budget of this key region for the tropical and global circulation. The role of the diurnal cycle has also been emphasized for intraseasonal variability associated with the Madden-Julian Oscillation (MJO). It is suggested that the diurnal cycle in Sea Surface Temperature (SST) during the suppressed phase of the MJO leads to a triggering of cumulus congestus clouds, which serve to moisten the free troposphere and hence precondition the atmosphere for the next active phase. It has further been shown that coupling between the ocean and atmosphere on intraseasonal timescales leads to a more realistic simulation of the MJO. These results stress the need for models to be able to simulate, firstly, the observed tri-modal distribution of convection, and secondly, the coupling between the ocean and atmosphere on diurnal to intraseasonal timescales. It is argued, however, that the current representation of the ocean mixed layer in coupled models is not adequate to represent the complex structure of the observed mixed layer, in particular the formation of salinity barrier layers which can potentially provide much stronger local coupling between the atmosphere and ocean on diurnal to intraseasonal timescales.

  11. Interventions directed at eating habits and physical activity using the Transtheoretical Model: a systematic review.

    Science.gov (United States)

    Carvalho de Menezes, Mariana; Bedeschi, Lydiane Bragunci; Santos, Luana Caroline Dos; Lopes, Aline Cristine Souza

    2016-09-20

    The multi-behavioral Transtheoretical Model (TTM) addresses multiple behaviors and it is a promising strategy to control multifactorial morbidities, such as chronic diseases. The results obtained using the TTM are positive, but are not consistently methodical. The aim of this study was to systematically review the effectiveness of the Transtheoretical Model in multi-behavioral interventions for changing eating habits and levels of physical activity. A search on PubMed and SciELO databases was performed with inclusion criteria set for intervention studies before 2016 using the Transtheoretical Model for more than one behavior, including eating habits and/or engaging in physical activity. Eighteen studies were identified; there was a predominance of randomized clinical trials, studies conducted in the United States, and the use of the Internet and/or telephone. The selected studies were aimed at changing eating behaviors; five of the studies did not address physical activity. The main results were reduction of fat consumption, an increase in the consumption of fruit and vegetables, and increases in physical activity, which are progressions in the stages of change and weight loss identified by the Transtheoretical Model. However, the studies showed methodological weaknesses, including high participant loss and the omission of information about randomization and blinding.

  12. Systematic Desensitization as Training in Self-Control

    Science.gov (United States)

    Goldfried, Marvin R.

    1971-01-01

    A description of a mediational model to explain the effectiveness of desensitization and a discussion of the available corroborative research findings for this alternative explanation are given. Also, specific procedural modifications for systematic desensitization are suggested. (Author)

  13. Chronic kidney disease, spirituality and religiosity: a systematic overview with the list of eligible studies

    Directory of Open Access Journals (Sweden)

    Nicola Luigi Bragazzi

    2013-08-01

    Full Text Available Chronic Kidney Disease (CKD) has a tremendous psychological burden, which is sometimes overlooked or underestimated in daily clinical routine practice, since in the health care process physicians prefer to focus on the objective aspects of the pathology. In this contribution, we make a systematic overview of the relationship between spirituality/religiosity and CKD, an emerging theme which has only recently raised interest from the scientific community despite its importance. We investigate different variables, axes and categories (from quality of life to patient satisfaction, treatment adherence and therapeutic alliance, clinical parameters, as well as overall survival), and the coping strategies adopted by the patient. Moreover, we underline the principal clinically relevant implications (like the possibility of psycho-therapeutic interventions based on the spiritual and religious attitudes of the patient) and we discuss the main gaps, methodological barriers and difficulties in the field, fostering and advocating further research and clinical studies. This last aspect, together with the quality assessment of the studies, will be further explored in the second part of the study.

  14. ARAMIS a regional air quality model for air pollution management: evaluation and validation

    Energy Technology Data Exchange (ETDEWEB)

    Soler, M.R.; Gamez, P.; Olid, M.

    2015-07-01

    The aim of this research was to better understand the dynamics of air pollutants and to forecast the air quality over regional areas in order to develop emission abatement strategies for air pollution and adverse health effects. To accomplish this objective, we developed and applied a high resolution Eulerian system named ARAMIS (A Regional Air Quality Modelling Integrated System) over the north-east of Spain (Catalonia), where several pollutants exceed threshold values for the protection of human health. The results indicate that the model reproduced reasonably well observed concentrations, as statistical values fell within Environmental Protection Agency (EPA) recommendations and European (EU) regulations. Nevertheless, some hourly O3 exceedances in summer and hourly peaks of NO2 in winter were underestimated. Concerning PM10 concentrations less accurate model levels were obtained with a moderate trend towards underestimation during the day. (Author)

  16. An investigation into the performance of four cloud droplet activation parameterisations

    Directory of Open Access Journals (Sweden)

    E. Simpson

    2014-07-01

    the "median diameter" is small (between 5 and 250 nm) in a single lognormal mode, the fraction of activated drops is underestimated by the parameterisations. Secondly, it is found that in dual-mode cases there is a systematic tendency towards underestimation of the fraction of activated drops, which is due to the methods used by the parameterisations to approximate the sink of water vapour.
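    For a single lognormal mode, the activated number fraction is simply the tail of the size distribution above the critical activation diameter, which makes the sensitivity to a small median diameter easy to see. The mode parameters and critical diameter below are illustrative assumptions, not values from the study:

```python
import math

def activated_fraction(d_median_nm, sigma_g, d_crit_nm):
    """Number fraction of a lognormal mode with diameter > d_crit."""
    z = math.log(d_crit_nm / d_median_nm) / (math.sqrt(2) * math.log(sigma_g))
    return 0.5 * math.erfc(z)

# Hypothetical mode: geometric standard deviation 1.8, critical diameter 120 nm
for d_med in (50, 100, 200):
    f = activated_fraction(d_med, sigma_g=1.8, d_crit_nm=120)
    print(f"median {d_med} nm -> activated fraction {f:.2f}")
```

    When the median diameter sits well below the critical diameter, the activated fraction lives far out in the tail of the distribution, so even small errors in a parameterisation's effective critical diameter translate into large relative errors in droplet number.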

  17. External Validity and Model Validity: A Conceptual Approach for Systematic Review Methodology

    Directory of Open Access Journals (Sweden)

    Raheleh Khorsan

    2014-01-01

    Full Text Available Background. Evidence rankings do not consider equally internal (IV), external (EV), and model validity (MV) for clinical studies, including complementary and alternative medicine/integrative health care (CAM/IHC) research. This paper describes this model and offers an EV assessment tool (EVAT©) for weighing studies according to EV and MV in addition to IV. Methods. An abbreviated systematic review methodology was employed to search, assemble, and evaluate the literature that has been published on EV/MV criteria. Standard databases were searched for keywords relating to EV, MV, and bias-scoring from inception to Jan 2013. Tools identified and concepts described were pooled to assemble a robust tool for evaluating these quality criteria. Results. This study assembled a streamlined, objective tool for evaluating the quality of EV/MV in research that is more sensitive to CAM/IHC research. Conclusion. Improved reporting on EV can provide information that will help guide policy makers, public health researchers, and other scientists in the selection, development, and improvement of research-tested interventions. Overall, clinical studies with high EV have the potential to provide the most useful information about “real-world” consequences of health interventions. It is hoped that this novel tool, which considers IV, EV, and MV on an equal footing, will better guide clinical decision making.

  18. PCR diagnostics underestimate the prevalence of avian malaria (Plasmodium relictum) in experimentally-infected passerines

    Science.gov (United States)

    Jarvi, Susan I.; Schultz, Jeffrey J.; Atkinson, Carter T.

    2002-01-01

    Several polymerase chain reaction (PCR)-based methods have recently been developed for diagnosing malarial infections in both birds and reptiles, but a critical evaluation of their sensitivity in experimentally-infected hosts has not been done. This study compares the sensitivity of several PCR-based methods for diagnosing avian malaria (Plasmodium relictum) in captive Hawaiian honeycreepers using microscopy and a recently developed immunoblotting technique. Sequential blood samples were collected over periods of up to 4.4 yr after experimental infection and rechallenge to determine both the duration and detectability of chronic infections. Two new nested PCR approaches for detecting circulating parasites based on P. relictum 18S rRNA genes and the thrombospondin-related anonymous protein (TRAP) gene are described. The blood smear and the PCR tests were less sensitive than serological methods for detecting chronic malarial infections. Individually, none of the diagnostic methods was 100% accurate in detecting subpatent infections, although serological methods were significantly more sensitive (97%) than either nested PCR (61–84%) or microscopy (27%). Circulating parasites in chronically infected birds either disappear completely from circulation or drop to intensities below detectability by nested PCR. Thus, the use of PCR as a sole means of detection of circulating parasites may significantly underestimate true prevalence.
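    The effect of imperfect test sensitivity on prevalence estimates can be sketched directly: with perfect specificity, the apparent prevalence is just the true prevalence multiplied by the sensitivity. The true-prevalence value below is hypothetical; the sensitivities are the ones reported above (taking 70% as a mid-range value for the 61–84% nested PCR band):

```python
def apparent_prevalence(true_prev, sensitivity, specificity=1.0):
    """Expected fraction of test-positive individuals given test performance."""
    return true_prev * sensitivity + (1 - true_prev) * (1 - specificity)

true_prev = 0.80  # hypothetical true prevalence of chronic infection
for method, se in [("serology", 0.97), ("nested PCR", 0.70), ("microscopy", 0.27)]:
    print(f"{method}: apparent prevalence {apparent_prevalence(true_prev, se):.2f}")
```

    Under these assumptions, microscopy would report roughly a quarter of the true prevalence and PCR alone a little over half, which is why the abstract cautions against using PCR as the sole means of detection.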

  19. On wake modeling, wind-farm gradients and AEP predictions at the Anholt wind farm

    DEFF Research Database (Denmark)

    Pena Diaz, Alfredo; Hansen, Kurt Schaldemose; Ott, Søren

    2017-01-01

    of the mesoscale simulations and supervisory control and data acquisition (SCADA), we show that for westerly flow in particular, there is a clear horizontal wind-speed gradient over the wind farm. We also use the mesoscale simulations to derive the undisturbed inflow conditions that are coupled with three commonly....... When looking at westerly flow wake cases, where the impact of the horizontal wind-speed gradient on the power of the undisturbed turbines is largest, the wake models agree with the SCADA fairly well; when looking at a southerly flow case, where the wake losses are highest, the wake models tend...... to underestimate the wake loss. With the mesoscale-wake model setup, we are also able to estimate the capacity factor of the wind farm rather well when compared to that derived from the SCADA. Finally, we estimate the uncertainty of the wake models by bootstrapping the SCADA. The models tend to underestimate...

  20. Conducting systematic reviews of economic evaluations.

    Science.gov (United States)

    Gomersall, Judith Streak; Jadotte, Yuri Tertilus; Xue, Yifan; Lockwood, Suzi; Riddle, Dru; Preda, Alin

    2015-09-01

    In 2012, a working group was established to review and enhance the Joanna Briggs Institute (JBI) guidance for conducting systematic review of evidence from economic evaluations addressing a question(s) about health intervention cost-effectiveness. The objective is to present the outcomes of the working group. The group conducted three activities to inform the new guidance: review of literature on the utility/futility of systematic reviews of economic evaluations and consideration of its implications for updating the existing methodology; assessment of the critical appraisal tool in the existing guidance against criteria that promotes validity in economic evaluation research and two other commonly used tools; and a workshop. The debate in the literature on the limitations/value of systematic review of economic evidence cautions that systematic reviews of economic evaluation evidence are unlikely to generate one size fits all answers to questions about the cost-effectiveness of interventions and their comparators. Informed by this finding, the working group adjusted the framing of the objectives definition in the existing JBI methodology. The shift is away from defining the objective as to determine one cost-effectiveness measure toward summarizing study estimates of cost-effectiveness and informed by consideration of the included study characteristics (patient, setting, intervention component, etc.), identifying conditions conducive to lowering costs and maximizing health benefits. The existing critical appraisal tool was included in the new guidance. The new guidance includes the recommendation that a tool designed specifically for the purpose of appraising model-based studies be used together with the generic appraisal tool for economic evaluations assessment to evaluate model-based evaluations. The guidance produced by the group offers reviewers guidance for each step of the systematic review process, which are the same steps followed in JBI reviews of other

  1. Individual, premigration and postsettlement factors, and academic achievement in adolescents from refugee backgrounds: A systematic review and model.

    Science.gov (United States)

    Wong, Charissa W S; Schweitzer, Robert D

    2017-01-01

    We have limited understanding of the precursors of academic achievement in resettled adolescents from refugee backgrounds. To date, no clear model has been developed to conceptualise the academic trajectories of adolescents from refugee backgrounds at postsettlement. The current review had two aims. First, to propose an integrated adaptive model to conceptualise the impact of individual, premigration, and postsettlement factors on academic achievement at postsettlement; and second, to critically examine the literature on factors that predict academic achievement in adolescents from refugee backgrounds in relation to the proposed model and highlight issues deserving future exploration. Following the protocol of a systematic literature review, 13 studies were identified for full-text review. Gender, ethnicity, English proficiency, psychological distress, premigration trauma, premigration loss, postsettlement social support, and postsettlement school connectedness, were found to predict academic achievement in adolescents from refugee backgrounds.

  2. Validity of Dietary Assessment in Athletes: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Louise Capling

    2017-12-01

    Full Text Available Dietary assessment methods that are recognized as appropriate for the general population are usually applied in a similar manner to athletes, despite the knowledge that sport-specific factors can complicate assessment and impact accuracy in unique ways. As dietary assessment methods are used extensively within the field of sports nutrition, there is concern that the validity of these methodologies has not undergone more rigorous evaluation in this unique population sub-group. The purpose of this systematic review was to compare two or more methods of dietary assessment, including dietary intake measured against biomarkers or reference measures of energy expenditure, in athletes. Six electronic databases were searched for English-language, full-text articles published from January 1980 until June 2016. The search strategy combined the following keywords: diet, nutrition assessment, athlete, and validity; where the reported outcomes included, but were not limited to: energy intake, macro- and/or micronutrient intake, food intake, nutritional adequacy, diet quality, or nutritional status. Meta-analysis was performed on studies with sufficient methodological similarity, with between-group standardized mean differences (or effect sizes) and 95% confidence intervals (CI) being calculated. Of the 1624 studies identified, 18 were eligible for inclusion. Studies comparing self-reported energy intake (EI) to energy expenditure assessed via doubly labelled water were grouped for comparison (n = 11) and demonstrated that mean EI was under-estimated by 19% (−2793 ± 1134 kJ/day). Meta-analysis revealed a large pooled effect size of −1.006 (95% CI: −1.3 to −0.7; p < 0.001). The remaining studies (n = 7) compared a new dietary tool or instrument to a reference method(s) (e.g., food record, 24-h dietary recall, biomarker) as part of a validation study. This systematic review revealed there are limited robust studies evaluating dietary assessment methods in athletes. Existing
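    The pooled effect size reported above is a between-group standardized mean difference. As a sketch of the standard calculation (the review's exact pooling model is not reproduced here, and the groups below are hypothetical), Cohen's d divides the difference in group means by the pooled standard deviation:

    ```python
    import math

    def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
        """Between-group standardized mean difference (Cohen's d),
        computed with the pooled standard deviation."""
        pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
        return (mean1 - mean2) / pooled_sd

    # Hypothetical groups: means 40 vs 50, equal SDs of 5, n = 30 per group.
    d = cohens_d(40, 50, 5, 5, 30, 30)  # → -2.0
    ```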

  3. Computer-aided modeling framework – a generic modeling template

    DEFF Research Database (Denmark)

    Fedorova, Marina; Sin, Gürkan; Gani, Rafiqul

    and test models systematically, efficiently and reliably. In this way, development of products and processes can be made faster, cheaper and more efficient. In this contribution, as part of the framework, a generic modeling template for the systematic derivation of problem specific models is presented....... The application of the modeling template is highlighted with a case study related to the modeling of a catalytic membrane reactor coupling dehydrogenation of ethylbenzene with hydrogenation of nitrobenzene...

  4. An integrative model of patient-centeredness - a systematic review and concept analysis.

    Directory of Open Access Journals (Sweden)

    Isabelle Scholl

    Full Text Available Existing models of patient-centeredness reveal a lack of conceptual clarity. This results in a heterogeneous use of the term, unclear measurement dimensions, inconsistent results regarding the effectiveness of patient-centered interventions, and finally in difficulties in implementing patient-centered care. The aim of this systematic review was to identify the different dimensions of patient-centeredness described in the literature and to propose an integrative model of patient-centeredness based on these results. A protocol-driven search in five databases was combined with a comprehensive secondary search strategy. All articles that include a definition of patient-centeredness were eligible for inclusion in the review and subject to subsequent content analysis. Two researchers independently first screened titles and abstracts, then assessed full texts for eligibility. In each article the given definition of patient-centeredness was coded independently by two researchers. We discussed codes within the research team and condensed them into an integrative model of patient-centeredness. 4707 records were identified through primary and secondary search, of which 706 were retained after screening of titles and abstracts. 417 articles (59%) contained a definition of patient-centeredness and were coded. 15 dimensions of patient-centeredness were identified: essential characteristics of the clinician, clinician-patient relationship, clinician-patient communication, patient as unique person, biopsychosocial perspective, patient information, patient involvement in care, involvement of family and friends, patient empowerment, physical support, emotional support, integration of medical and non-medical care, teamwork and teambuilding, access to care, and coordination and continuity of care. In the resulting integrative model the dimensions were mapped onto different levels of care. The proposed integrative model of patient-centeredness allows different stakeholders to speak

  5. 3S - Systematic, systemic, and systems biology and toxicology.

    Science.gov (United States)

    Smirnova, Lena; Kleinstreuer, Nicole; Corvi, Raffaella; Levchenko, Andre; Fitzpatrick, Suzanne C; Hartung, Thomas

    2018-01-01

    A biological system is more than the sum of its parts - it accomplishes many functions via synergy. Deconstructing the system down to the molecular mechanism level necessitates the complement of reconstructing functions on all levels, i.e., in our conceptualization of biology and its perturbations, in our experimental models and in computer modelling. Toxicology contains the somewhat arbitrary subclass "systemic toxicities"; however, there is no relevant toxic insult or general disease that is not systemic. At least inflammation and repair are involved, which require coordinated signaling mechanisms across the organism. However, the more body components are involved, the greater the challenge of recapitulating such toxicities using non-animal models. Here, the shortcomings of current systemic testing and the development of alternative approaches are summarized. We argue that we need a systematic approach to integrating existing knowledge, as exemplified by systematic reviews and other evidence-based approaches. Such knowledge can guide us in modelling these systems using bioengineering and virtual computer models, i.e., via systems biology or systems toxicology approaches. Experimental multi-organ-on-chip and microphysiological systems (MPS) provide a more physiological view of the organism, facilitating more comprehensive coverage of systemic toxicities, i.e., perturbations at the organism level, without using substitute organisms (animals). The next challenge is to establish disease models, i.e., micropathophysiological systems (MPPS), to expand their utility to encompass biomedicine. Combining computational and experimental systems approaches and the challenges of validating them are discussed. The suggested 3S approach promises to leverage 21st century technology and systematic thinking to achieve a paradigm change in studying systemic effects.

  6. Real-time PCR Machine System Modeling and a Systematic Approach for the Robust Design of a Real-time PCR-on-a-Chip System

    Directory of Open Access Journals (Sweden)

    Da-Sheng Lee

    2010-01-01

    Full Text Available Chip-based DNA quantification systems are widespread, and used in many point-of-care applications. However, instruments for such applications may not be maintained or calibrated regularly. Since machine reliability is a key issue for normal operation, this study presents a system model of the real-time Polymerase Chain Reaction (PCR) machine to analyze the instrument design through numerical experiments. Based on model analysis, a systematic approach was developed to lower the variation of DNA quantification and achieve a robust design for a real-time PCR-on-a-chip system. Accelerated life testing was adopted to evaluate the reliability of the chip prototype. According to the life test plan, this proposed real-time PCR-on-a-chip system was simulated to work continuously for over three years with similar reproducibility in DNA quantification. This not only shows the robustness of the lab-on-a-chip system, but also verifies the effectiveness of our systematic method for achieving a robust design.

  7. A new Markov-chain-related statistical approach for modelling synthetic wind power time series

    International Nuclear Information System (INIS)

    Pesch, T; Hake, J F; Schröders, S; Allelein, H J

    2015-01-01

    The integration of rising shares of volatile wind power in the generation mix is a major challenge for the future energy system. To address the uncertainties involved in wind power generation, models analysing and simulating the stochastic nature of this energy source are becoming increasingly important. One statistical approach that has been frequently used in the literature is the Markov chain approach. Recently, the method was identified as being of limited use for generating wind time series with time steps shorter than 15–40 min as it is not capable of reproducing the autocorrelation characteristics accurately. This paper presents a new Markov-chain-related statistical approach that is capable of solving this problem by introducing a variable second lag. Furthermore, additional features are presented that allow for the further adjustment of the generated synthetic time series. The influences of the model parameter settings are examined by meaningful parameter variations. The suitability of the approach is demonstrated by an application analysis with the example of the wind feed-in in Germany. It shows that—in contrast to conventional Markov chain approaches—the generated synthetic time series do not systematically underestimate the required storage capacity to balance wind power fluctuation. (paper)
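    For context, the conventional first-order Markov chain approach that the paper extends works as sketched below (the binning scheme and function names are illustrative; the paper's variable-second-lag extension is not shown):

    ```python
    import numpy as np

    def fit_transition_matrix(series, n_states):
        """Estimate a first-order Markov transition matrix from a series
        discretized into equally spaced states."""
        edges = np.linspace(series.min(), series.max(), n_states + 1)[1:-1]
        states = np.digitize(series, edges)
        counts = np.zeros((n_states, n_states))
        for a, b in zip(states[:-1], states[1:]):
            counts[a, b] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        # Normalize each row to probabilities; unvisited states keep zero rows.
        return np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)

    def simulate_chain(P, n_steps, rng, start=0):
        """Generate a synthetic state sequence by repeatedly sampling the chain."""
        path = [start]
        for _ in range(n_steps - 1):
            path.append(rng.choice(len(P), p=P[path[-1]]))
        return np.array(path)

    rng = np.random.default_rng(42)
    observed = rng.random(1000)                # stand-in for a measured wind series
    P = fit_transition_matrix(observed, 10)
    synthetic = simulate_chain(P, 8760, rng)   # one synthetic "year" of hourly states
    ```

    A chain like this reproduces the one-step transition statistics by construction, but, as the paper notes, it fails to capture longer-range autocorrelation at short time steps, which motivates the variable second lag.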

  8. Are We Under-Estimating the Association between Autism Symptoms?: The Importance of Considering Simultaneous Selection When Using Samples of Individuals Who Meet Diagnostic Criteria for an Autism Spectrum Disorder

    Science.gov (United States)

    Murray, Aja Louise; McKenzie, Karen; Kuenssberg, Renate; O'Donnell, Michael

    2014-01-01

    The magnitude of symptom inter-correlations in diagnosed individuals has contributed to the evidence that autism spectrum disorders (ASD) is a fractionable disorder. Such correlations may substantially under-estimate the population correlations among symptoms due to simultaneous selection on the areas of deficit required for diagnosis. Using…

  9. External validation of multivariable prediction models: a systematic review of methodological conduct and reporting

    Science.gov (United States)

    2014-01-01

    Background Before considering whether to use a multivariable (diagnostic or prognostic) prediction model, it is essential that its performance be evaluated in data that were not used to develop the model (referred to as external validation). We critically appraised the methodological conduct and reporting of external validation studies of multivariable prediction models. Methods We conducted a systematic review of articles describing some form of external validation of one or more multivariable prediction models indexed in PubMed core clinical journals published in 2010. Study data were extracted in duplicate on design, sample size, handling of missing data, reference to the original study developing the prediction models and predictive performance measures. Results 11,826 articles were identified and 78 were included for full review, which described the evaluation of 120 prediction models in participant data that were not used to develop the model. Thirty-three articles described both the development of a prediction model and an evaluation of its performance on a separate dataset, and 45 articles described only the evaluation of an existing published prediction model on another dataset. Fifty-seven percent of the prediction models were presented and evaluated as simplified scoring systems. Sixteen percent of articles failed to report the number of outcome events in the validation datasets. Fifty-four percent of studies made no explicit mention of missing data. Sixty-seven percent did not report evaluating model calibration whilst most studies evaluated model discrimination. It was often unclear whether the reported performance measures were for the full regression model or for the simplified models. Conclusions The vast majority of studies describing some form of external validation of a multivariable prediction model were poorly reported with key details frequently not presented. The validation studies were characterised by poor design, inappropriate handling
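    Discrimination, the measure most validation studies did report, can be computed directly from its pairwise definition. A minimal sketch (illustrative only, not taken from the reviewed studies):

    ```python
    import numpy as np

    def c_statistic(y_true, y_prob):
        """Discrimination: the probability that a randomly chosen event
        receives a higher predicted risk than a randomly chosen non-event
        (equivalent to the area under the ROC curve); ties count one half."""
        y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
        pos = y_prob[y_true == 1]
        neg = y_prob[y_true == 0]
        wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
        return wins / (len(pos) * len(neg))
    ```

    Calibration, by contrast, asks whether predicted risks match observed event rates, which is why reporting discrimination alone gives an incomplete picture of model performance.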

  10. Using a systematic conceptual model for a process evaluation of a middle school obesity risk-reduction nutrition curriculum intervention: choice, control & change.

    Science.gov (United States)

    Lee, Heewon; Contento, Isobel R; Koch, Pamela

    2013-03-01

    To use and review a conceptual model of process evaluation and to examine the implementation of a nutrition education curriculum, Choice, Control & Change, designed to promote dietary and physical activity behaviors that reduce obesity risk. A process evaluation study based on a systematic conceptual model. Five middle schools in New York City. Five hundred sixty-two students in 20 classes and their science teachers (n = 8). Based on the model, teacher professional development, teacher implementation, and student reception were evaluated. Also measured were teacher characteristics, teachers' curriculum evaluation, and satisfaction with teaching the curriculum. Descriptive statistics and Spearman ρ correlation for quantitative analysis and content analysis for qualitative data were used. Mean score of the teacher professional development evaluation was 4.75 on a 5-point scale. Average teacher implementation rate was 73%, and the student reception rate was 69%. Ongoing teacher support was highly valued by teachers. Teacher satisfaction with teaching the curriculum was highly correlated with student satisfaction (P < .05). Teacher perception of amount of student work was negatively correlated with implementation and with student satisfaction (P < .05). Use of a systematic conceptual model and comprehensive process measures improves understanding of the implementation process and helps educators to better implement interventions as designed. Copyright © 2013 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.

  11. [Information system for supporting the Nursing Care Systematization].

    Science.gov (United States)

    Malucelli, Andreia; Otemaier, Kelly Rafaela; Bonnet, Marcel; Cubas, Marcia Regina; Garcia, Telma Ribeiro

    2010-01-01

    The importance, relevance, and necessity of implementing the Nursing Care Systematization in the different environments of professional practice are unquestionable. Taking this as a principle, the motivation emerged for the development of an information system to support the Nursing Care Systematization, based on the Nursing Process steps and Human Needs, using the language of diagnoses, nursing interventions, and outcomes for the documentation of professional practice. This paper describes the methodological steps and results of the information system development - requirements elicitation, modeling, object-relational mapping, implementation, and system validation.

  12. Influence of an urban canopy model and PBL schemes on vertical mixing for air quality modeling over Greater Paris

    Science.gov (United States)

    Kim, Youngseob; Sartelet, Karine; Raut, Jean-Christophe; Chazette, Patrick

    2015-04-01

    Impacts of meteorological modeling in the planetary boundary layer (PBL) and urban canopy model (UCM) on the vertical mixing of pollutants are studied. Concentrations of gaseous chemical species, including ozone (O3) and nitrogen dioxide (NO2), and particulate matter over Paris and the near suburbs are simulated using the 3-dimensional chemistry-transport model Polair3D of the Polyphemus platform. Simulated concentrations of O3, NO2 and PM10/PM2.5 (particulate matter of aerodynamic diameter lower than 10 μm/2.5 μm, respectively) are first evaluated using ground measurements. Higher surface concentrations are obtained for PM10, PM2.5 and NO2 with the MYNN PBL scheme than with the YSU PBL scheme because of lower PBL heights in the MYNN scheme. Differences between simulations using different PBL schemes are lower than differences between simulations with and without the UCM and the Corine land-use over urban areas. Regarding the root mean square error, the simulations using the UCM and the Corine land-use tend to perform better than the simulations without them. At urban stations, the PM10 and PM2.5 concentrations are over-estimated and the over-estimation is reduced using the UCM and the Corine land-use. The ability of the model to reproduce vertical mixing is evaluated using NO2 measurement data at the upper air observation station of the Eiffel Tower, and measurement data at a ground station near the Eiffel Tower. Although NO2 is under-estimated in all simulations, vertical mixing is greatly improved when using the UCM and the Corine land-use. Comparisons of the modeled PM10 vertical distributions to distributions deduced from surface and mobile lidar measurements are performed. The use of the UCM and the Corine land-use is crucial to accurately model PM10 concentrations during nighttime in the center of Paris. In the nocturnal stable boundary layer, PM10 is relatively well modeled, although it is over-estimated on 24 May and under-estimated on 25 May. However, PM10 is

  13. Body fat in children measured by DXA, air-displacement plethysmography, TBW and multicomponent models: a systematic review.

    Science.gov (United States)

    Zanini, Roberta de Vargas; Santos, Iná S; Chrestani, Maria Aurora D; Gigante, Denise Petrucci

    2015-07-01

    To conduct a systematic literature review to identify studies that used indirect methods to assess body fat in healthy children. A systematic review was conducted according to the PRISMA guidelines. We conducted a search in the MEDLINE/PubMed, SciELO and Google Scholar databases. Studies in healthy children aged 0-9 years were eligible for inclusion. Studies were kept or excluded from the review according to eligibility criteria defined a priori. Two independent reviewers conducted all steps in the study selection. Initially, 11,246 articles were retrieved, with 3,593 duplicates. After applying the eligibility criteria, 22 articles were selected for review. The methodology of each study was analyzed by each reviewer individually. The indirect methods used to assess body fat in children included dual-energy X-ray absorptiometry (DXA) (14 articles), air-displacement plethysmography (five articles), multicomponent models (two articles), and total body water (one article). Most studies reported absolute (in kilograms) or relative (percentage) body fat measures. Only seven studies reported the fat mass index (FMI) (kg/m²). DXA was the indirect method most frequently used to assess body fat in healthy children. FMI was seldom reported.
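    The fat mass index mentioned above normalizes absolute fat mass for body size in the same way BMI normalizes total mass. A minimal sketch with hypothetical values:

    ```python
    def fat_mass_index(fat_mass_kg, height_m):
        """Fat mass index (FMI) in kg/m²: fat mass divided by height squared."""
        return fat_mass_kg / height_m ** 2

    # Hypothetical child: 8 kg of fat mass at 1.25 m tall → FMI of 5.12 kg/m².
    fmi = fat_mass_index(8.0, 1.25)
    ```

    Unlike percentage body fat, FMI does not depend on fat-free mass, which is one reason reviews recommend reporting it alongside absolute and relative measures.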

  14. Efficacy and safety of regenerative cell therapy for pulmonary arterial hypertension in animal models: a preclinical systematic review protocol.

    Science.gov (United States)

    Suen, Colin M; Zhai, Alex; Lalu, Manoj M; Welsh, Christopher; Levac, Brendan M; Fergusson, Dean; McIntyre, Lauralyn; Stewart, Duncan J

    2016-05-25

    Pulmonary arterial hypertension (PAH) is a rare disease (15 cases per million) that is characterized by widespread loss of the pulmonary microcirculation and elevated pulmonary vascular resistance leading to pathological right ventricular remodeling and ultimately right heart failure. Regenerative cell therapies (i.e., therapies involving cells with stem or progenitor-like properties) could potentially restore the effective lung microcirculation and provide a curative therapy for PAH. Preclinical evidence suggests that regenerative cell therapy using endothelial progenitor cells or mesenchymal stem cells may be beneficial in the treatment of PAH. These findings have led to the completion of a small number of human clinical trials, albeit with modest effect compared to animal studies. The objective of this systematic review is to compare the efficacy and safety of regenerative cell therapies in preclinical models of PAH as well as assess study quality to inform future clinical studies. We will include preclinical studies of PAH in which a regenerative cell type was administered and outcomes compared to a disease control. The primary outcome will be pulmonary hemodynamics as assessed by measurement of right ventricular systolic pressure and/or mean pulmonary arterial pressure. Secondary outcomes will include mortality, survival, right ventricular remodeling, pulmonary vascular resistance, cardiac output, cardiac index, pulmonary acceleration time, tricuspid annular systolic excursion, and right ventricular wall thickness. Electronic searches of MEDLINE and EMBASE databases will be constructed and reviewed by the Peer Review of Electronic Search Strategies (PRESS) process. Search results will be screened independently in duplicate. Data from eligible studies will be extracted, pooled, and analyzed using random effects models. Risk of bias will be assessed using the SYstematic Review Centre for Laboratory animal Experimentation (SYRCLE) risk of bias tool, and

  15. Does the Rayleigh equation apply to evaluate field isotope data in contaminant hydrogeology?

    Science.gov (United States)

    Abe, Yumiko; Hunkeler, Daniel

    2006-03-01

    Stable isotope data have been increasingly used to assess in situ biodegradation of organic contaminants in groundwater. The data are usually evaluated using the Rayleigh equation to assess whether isotope data follow a Rayleigh trend, to calculate the extent of contaminant biodegradation, or to estimate first-order rate constants. However, the Rayleigh equation was developed for homogeneous systems, while in the subsurface, contaminants can migrate at different velocities due to physical heterogeneity. This paper presents a method to quantify the systematic effect that is introduced by applying the Rayleigh equation to field isotope data. For this purpose, the travel time distribution between source and sampling point is characterized by an analytical solution to the advection-dispersion equation. The systematic effect was evaluated as a function of the magnitude of physical heterogeneity, geometry of the contaminant plume, and degree of biodegradation. Results revealed that the systematic effect always leads to an underestimation of the actual values of isotope enrichment factors, the extent of biodegradation, or first-order rate constants, especially in the dispersion-dominant region representing a higher degree of physical heterogeneity. A substantial systematic effect occurs especially for the quantification of first-order rate constants (up to 50% underestimation of the actual rate), while it is relatively small for quantification of the extent of biodegradation (< 5% underestimation of the actual degree of biodegradation). The magnitude of the systematic effect is in the same range as the uncertainty arising from the analytical data, the isotope enrichment factor, and the average travel time.
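    The Rayleigh-based estimate of the extent of biodegradation that the paper shows to be biased in heterogeneous systems can be sketched as follows (the values are hypothetical, and the commonly used permil approximation is applied):

    ```python
    import math

    def extent_of_biodegradation(delta, delta0, epsilon):
        """Extent of biodegradation B = 1 - f from the Rayleigh equation,
        using the common approximation f = exp((delta - delta0) / epsilon),
        with delta values and the enrichment factor epsilon in permil."""
        remaining_fraction = math.exp((delta - delta0) / epsilon)
        return 1.0 - remaining_fraction

    # Hypothetical field sample: measured delta = -25 permil, source
    # delta0 = -28 permil, enrichment factor epsilon = -5 permil
    # → roughly 45% of the contaminant degraded.
    B = extent_of_biodegradation(-25.0, -28.0, -5.0)
    ```

    As the abstract explains, applying this homogeneous-system formula to field data from a physically heterogeneous aquifer systematically underestimates B, and underestimates first-order rate constants even more strongly.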

  16. Real-time PCR Machine System Modeling and a Systematic Approach for the Robust Design of a Real-time PCR-on-a-Chip System

    OpenAIRE

    Lee, Da-Sheng

    2010-01-01

    Chip-based DNA quantification systems are widespread, and used in many point-of-care applications. However, instruments for such applications may not be maintained or calibrated regularly. Since machine reliability is a key issue for normal operation, this study presents a system model of the real-time Polymerase Chain Reaction (PCR) machine to analyze the instrument design through numerical experiments. Based on model analysis, a systematic approach was developed to lower the variation of DN...

  17. A multi-resolution assessment of the Community Multiscale Air Quality (CMAQ) model v4.7 wet deposition estimates for 2002–2006

    Directory of Open Access Journals (Sweden)

    K. W. Appel

    2011-05-01

    Full Text Available This paper examines the operational performance of the Community Multiscale Air Quality (CMAQ) model simulations for 2002–2006 using both 36-km and 12-km horizontal grid spacing, with a primary focus on the performance of the CMAQ model in predicting wet deposition of sulfate (SO4=), ammonium (NH4+) and nitrate (NO3−). Performance of the wet deposition estimates from the model is determined by comparing CMAQ predicted concentrations to concentrations measured by the National Acid Deposition Program (NADP), specifically the National Trends Network (NTN). For SO4= wet deposition, the CMAQ model estimates were generally comparable between the 36-km and 12-km simulations for the eastern US, with the 12-km simulation giving slightly higher estimates of SO4= wet deposition than the 36-km simulation on average. The result is a slightly larger normalized mean bias (NMB) for the 12-km simulation; however, both simulations had annual biases that were less than ±15 % for each of the five years. The model estimated SO4= wet deposition values improved when they were adjusted to account for biases in the model estimated precipitation. The CMAQ model underestimates NH4+ wet deposition over the eastern US, with a slightly larger underestimation in the 36-km simulation. The largest underestimations occur in the winter and spring periods, while the summer and fall have slightly smaller underestimations of NH4+ wet deposition. The underestimation in NH4+ wet deposition is likely due in part to the poor temporal and spatial representation of ammonia (NH3) emissions, particularly those emissions associated with fertilizer applications and NH3 bi-directional exchange. The model performance for estimates of NO3− wet deposition are
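    The normalized mean bias used above to judge the wet deposition estimates is a standard air-quality evaluation metric. A minimal sketch with hypothetical paired values:

    ```python
    def normalized_mean_bias(model, obs):
        """Normalized mean bias: NMB = sum(model - obs) / sum(obs).
        Negative values indicate the model underestimates the observations."""
        return sum(m - o for m, o in zip(model, obs)) / sum(obs)

    # Hypothetical pairs in which the model runs 10% high overall.
    nmb = normalized_mean_bias([11.0, 22.0], [10.0, 20.0])  # → 0.1
    ```

    Because NMB normalizes by the observed total rather than averaging per-site ratios, a few high-deposition sites can dominate the statistic; this is a deliberate design choice for deposition budgets.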

  18. Systematic review

    DEFF Research Database (Denmark)

    Enggaard, Helle

    Title: Systematic review - a method to promote nursing students' skills in Evidence Based Practice. Background: The Department of Nursing educates students to practice Evidence Based Practice (EBP), where clinical decisions are based on the best available evidence, patient preference, clinical experience...... and resources available. In order to incorporate evidence in clinical decisions, nursing students need to learn how to transfer knowledge in order to utilize evidence in clinical decisions. The method of systematic review can be one approach to achieving this in nursing education. Method: As an associate lecturer...... I have taken a Comprehensive Systematic Review Training course provided by the Centre for Clinical Guidelines in Denmark and the Joanna Briggs Institute (JBI) and practiced developing a systematic review on how patients with ischemic heart disease experience peer support. This insight and experience...

  19. Disassembly for remanufacturing: A systematic literature review, new model development and future research needs

    Directory of Open Access Journals (Sweden)

    Anjar Priyono

    2016-11-01

    Purpose: Disassembly is an important process that distinguishes remanufacturing from conventional manufacturing. It is a unique process that has become a focus of investigation for many scholars. Yet most scholars investigate disassembly from a technical and operational standpoint that lacks a strategic perspective. This paper attempts to fill this gap by looking at disassembly from a strategic perspective, considering organisational characteristics, process choices and product attributes. More specifically, this paper has three objectives: first, to gain an understanding of what has been done, and what needs to be done, in the field of disassembly in remanufacturing; second, to conduct a systematic literature review to identify the factors affecting disassembly for remanufacturing; third, to propose a new model of disassembly for remanufacturing and to provide avenues for future research. Design/methodology/approach: This study used a systematic literature review method. A series of steps was undertaken during the review. The study started by determining the purpose of the study, selecting appropriate keywords, and narrowing the selected papers using a number of criteria. A deeper analysis was carried out on the final papers that met the criteria for this review. Findings: There are two main findings of this study. First, a list of factors affecting disassembly in remanufacturing is identified. The factors can be categorised into three groups: organisational factors, process choices and product attributes. Second, using the factors that have been identified, a new model of the disassembly process for remanufacturing is developed. Current studies only consider disassembly as a physical activity to break down products into components. In the new model, disassembly is viewed as a process that converts inputs into outputs through a series of steps. Research limitations/implications: The opportunities for future research include: the need to

  2. Health economic analyses in medical nutrition: a systematic literature review

    Directory of Open Access Journals (Sweden)

    Walzer S

    2014-03-01

    Stefan Walzer,1,2 Daniel Droeschel,1,3 Mark Nuijten,4 Hélène Chevrou-Séverac5 1MArS Market Access and Pricing Strategy GmbH, Weil am Rhein, Germany; 2State University Baden-Wuerttemberg, Loerrach, Germany; 3Riedlingen University, SRH FernHochschule, Riedlingen, Germany; 4Ars Accessus Medica BV, Amsterdam, the Netherlands; 5Nestlé Health Science, Vevey, Switzerland. Background: Medical nutrition is a specific nutrition category either covering specific dietary needs and/or nutrient deficiencies in patients or feeding patients unable to eat normally. Medical nutrition is regulated by specific bills in Europe and in the US, with specific legislation and guidelines, and is provided to patients with special nutritional needs and indications for nutrition support. Therefore, medical nutrition products are delivered by medical prescription and supervised by health care professionals. Although these products have existed for more than 2 decades, health economic evidence for medical nutrition interventions is scarce. This research assesses the current published health economic evidence for medical nutrition by performing a systematic literature review related to health economic analysis of medical nutrition. Methods: A systematic literature search was conducted using standard literature databases, including PubMed, the Health Technology Assessment Database, and the National Health Service Economic Evaluation Database. Additionally, a free web-based search was conducted using the same search terms as in the systematic database search. The clinical background and basis of the analysis, the health economic design, and the results were extracted from the papers finally selected. The Drummond checklist was used to validate the quality of health economic modeling studies, and the AMSTAR (A Measurement Tool to Assess Systematic Reviews) checklist was used for published systematic reviews. Results: Fifty-three papers were identified and obtained via PubMed, or directly

  3. Improvement of PM10 prediction in East Asia using inverse modeling

    Science.gov (United States)

    Koo, Youn-Seo; Choi, Dae-Ryun; Kwon, Hi-Yong; Jang, Young-Kee; Han, Jin-Seok

    2015-04-01

    Aerosols from anthropogenic emissions in the industrialized regions of China, as well as dust emissions from southern Mongolia and northern China that are transported along the prevailing northwesterly winds, have a large influence on air quality in Korea. The emission inventory for the East Asia region is an important factor in chemical transport modeling (CTM) for PM10 (particulate matter less than 10 μm in aerodynamic diameter) forecasts and air quality management in Korea. Most previous studies showed that predictions of PM10 mass concentration by the CTM were underestimated when compared with observational data. In order to close the gap between observations and CTM predictions, an inverse Bayesian approach with the Comprehensive Air quality Model with extensions (CAMx) as the forward model was applied to obtain optimized a posteriori PM10 emissions in East Asia. The PM10 concentrations predicted with the a priori emissions were first compared with observations at monitoring sites in China and Korea for January and August 2008. The comparison showed that PM10 concentrations with the a priori PM10 emissions for anthropogenic and dust sources were generally under-predicted. The result from the inverse modeling indicated that anthropogenic PM10 emissions in the industrialized and urbanized areas of China were underestimated, while dust emissions from desert and barren soil in southern Mongolia and northern China were overestimated. A priori PM10 emissions from northeastern China regions including Shenyang, Changchun, and Harbin were underestimated by a factor of about three (i.e., the ratio of a posteriori to a priori PM10 emissions was about 3). The predictions of PM10 concentrations with the a posteriori emissions showed better agreement with the observations, implying that the inverse modeling minimized the discrepancies in the model predictions by improving PM10 emissions in East Asia.
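The core of such an inversion can be illustrated with the simplest possible case: a single emission scaling factor updated by a Gaussian Bayesian step, assuming observed concentrations respond linearly to the emission scaling. This is only a sketch of the idea; the function name, prior, and numbers are invented and do not come from the CAMx study:

```python
def posterior_scaling(pred, obs, prior=1.0, sigma_prior=1.0, sigma_obs=10.0):
    """Gaussian Bayesian update of one emission scaling factor beta,
    assuming obs ~ beta * pred (a linear forward model).
    Returns the a posteriori / a priori emission ratio."""
    precision = 1.0 / sigma_prior**2 + sum(p * p for p in pred) / sigma_obs**2
    numerator = (prior / sigma_prior**2
                 + sum(p * o for p, o in zip(pred, obs)) / sigma_obs**2)
    return numerator / precision

# Hypothetical PM10 site means (ug/m3): the model under-predicts by roughly 3x
pred = [20.0, 35.0, 50.0]
obs = [62.0, 101.0, 148.0]
beta = posterior_scaling(pred, obs)
```

A ratio near 3, as here, corresponds to the "underestimated by a factor of about three" result reported for the northeastern China regions.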

  4. An interoceptive model of bulimia nervosa: A neurobiological systematic review.

    Science.gov (United States)

    Klabunde, Megan; Collado, Danielle; Bohon, Cara

    2017-11-01

    The objective of our study was to examine the neurobiological support for an interoceptive sensory processing model of bulimia nervosa (BN). To do so, we conducted a systematic review of interoceptive sensory processing in BN, following the PRISMA guidelines. We searched the PsychInfo, Pubmed, and Web of Knowledge databases to identify biological and behavioral studies that examine interoceptive detection in BN. After screening 390 articles for inclusion and conducting a quality assessment of the articles that met inclusion criteria, we reviewed 41 articles. We found that global interoceptive sensory processing deficits may be present in BN. Specifically, there is evidence of abnormal brain function, structure and connectivity in the interoceptive neural network, in addition to gastric and pain processing disturbances. These results suggest that there may be a neurobiological basis for global interoceptive sensory processing deficits in BN that remain after recovery. Data from taste and heartbeat detection studies were inconclusive; some studies suggest interoceptive disturbances in these sensory domains. Discrepancies in findings appear to be due to methodological differences. In conclusion, interoceptive sensory processing deficits may directly contribute to and explain a variety of symptoms present in those with BN. Further examination of interoceptive sensory processing deficits could inform the development of treatments for those with BN. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Photogrammetry experiments with a model eye.

    Science.gov (United States)

    Rosenthal, A R; Falconer, D G; Pieper, I

    1980-01-01

    Digital photogrammetry was performed on stereophotographs of the optic nerve head of a modified Zeiss model eye in which optic cups of varying depths could be simulated. Experiments were undertaken to determine the impact of both photographic and ocular variables on the photogrammetric measurements of cup depth. The photogrammetric procedure tolerates refocusing, repositioning, and realignment as well as small variations in the geometric position of the camera. Progressive underestimation of cup depth was observed with increasing myopia, while progressive overestimation was noted with increasing hyperopia. High cylindrical errors at axis 90 degrees led to significant errors in cup depth estimates, while high cylindrical errors at axis 180 degrees did not materially affect the accuracy of the analysis. Finally, cup depths were seriously underestimated when the pupil diameter was less than 5.0 mm. PMID: 7448139

  6. Focusing on fast food restaurants alone underestimates the relationship between neighborhood deprivation and exposure to fast food in a large rural area.

    Science.gov (United States)

    Sharkey, Joseph R; Johnson, Cassandra M; Dean, Wesley R; Horel, Scott A

    2011-01-25

    Individuals and families are relying more on food prepared outside the home as a source for at-home and away-from-home consumption. Restricting the estimation of fast-food access to fast-food restaurants alone may underestimate potential spatial access to fast food. The study used data from the 2006 Brazos Valley Food Environment Project (BVFEP) and the 2000 U.S. Census Summary File 3 for six rural counties in the Texas Brazos Valley region. BVFEP ground-truthed data included identification and geocoding of all fast-food restaurants, convenience stores, supermarkets, and grocery stores in study area and on-site assessment of the availability and variety of fast-food lunch/dinner entrées and side dishes. Network distance was calculated from the population-weighted centroid of each census block group to all retail locations that marketed fast food (n = 205 fast-food opportunities). Spatial access to fast-food opportunities (FFO) was significantly better than to traditional fast-food restaurants (FFR). The median distance to the nearest FFO was 2.7 miles, compared with 4.5 miles to the nearest FFR. Residents of high deprivation neighborhoods had better spatial access to a variety of healthier fast-food entrée and side dish options than residents of low deprivation neighborhoods. Our analyses revealed that identifying fast-food restaurants as the sole source of fast-food entrées and side dishes underestimated neighborhood exposure to fast food, in terms of both neighborhood proximity and coverage. Potential interventions must consider all retail opportunities for fast food, and not just traditional FFR.

  7. Security Issues in the Android Cross-Layer Architecture

    OpenAIRE

    Armando, Alessandro; Merlo, Alessio; Verderame, Luca

    2012-01-01

    The security of Android has been recently challenged by the discovery of a number of vulnerabilities involving different layers of the Android stack. We argue that such vulnerabilities are largely related to the interplay among layers composing the Android stack. Thus, we also argue that such interplay has been underestimated from a security point-of-view and a systematic analysis of the Android interplay has not been carried out yet. To this aim, in this paper we provide a simple model of th...

  8. Modeling of FREYA fast critical experiments with the Serpent Monte Carlo code

    International Nuclear Information System (INIS)

    Fridman, E.; Kochetkov, A.; Krása, A.

    2017-01-01

    Highlights: • FREYA – the EURATOM project executed to support fast lead-based reactor systems. • Critical experiments in the VENUS-F facility during the FREYA project. • Characterization of the critical VENUS-F cores with Serpent. • Comparison of the numerical Serpent results to the experimental data. - Abstract: The FP7 EURATOM project FREYA has been executed between 2011 and 2016 with the aim of supporting the design of fast lead-cooled reactor systems such as MYRRHA and ALFRED. During the project, a number of critical experiments were conducted in the VENUS-F facility located at SCK·CEN, Mol, Belgium. The Monte Carlo code Serpent was one of the codes applied for the characterization of the critical VENUS-F cores. Four critical configurations were modeled with Serpent, namely the reference critical core, the clean MYRRHA mock-up, the full MYRRHA mock-up, and the critical core with the ALFRED island. This paper briefly presents the VENUS-F facility, provides a detailed description of the aforementioned critical VENUS-F cores, and compares the numerical results calculated by Serpent to the available experimental data. The compared parameters include keff, point kinetics parameters, fission rate ratios of important actinides to that of U235 (spectral indices), axial and radial distribution of fission rates, and lead void reactivity effect. The reported results show generally good agreement between the calculated and experimental values. Nevertheless, the paper also reveals some noteworthy issues requiring further attention. This includes the systematic overprediction of reactivity and systematic underestimation of the U238 to U235 fission rate ratio.

  9. Reconstructing and modelling 71 years of forest growth in a Canadian boreal landscape : a test of the CBM-CFS3 carbon accounting model

    Energy Technology Data Exchange (ETDEWEB)

    Bernier, P.Y.; Guindon, L. [Canadian Forest Service, Quebec, PQ (Canada). Laurentian Forestry Centre; Kurz, W.A.; Stinson, G. [Canadian Forest Service, Victoria, BC (Canada). Pacific Forestry Centre

    2010-01-15

    Modelled estimates have suggested that Canada's managed forests are now shifting from being carbon sinks to becoming carbon sources. This study evaluated the Canadian Forest Sector carbon budget model (CBM-CFS3). A reconstructed dataset of forest growth and disturbances, encompassing a 62 km² landscape and spanning a 71-year period, was used to demonstrate that the CBM-CFS3 simulations underestimated realized net biomass accrual by 10 per cent in undisturbed stands, and may also underestimate biomass accrual in disturbed stands. Results from the model were compared with mechanistic model predictions, flux-tower measurements of ecosystem carbon exchanges, and long-term observations of changes in biomass. The errors were attributed to the initial 1928 operational forest photointerpretation and inventory procedures used to determine merchantable volume and biomass. Regionally parameterized yield curves may also be contributing to errors. Results of the study suggested that long-term trends in climate or atmospheric composition may not have contributed to the bias. A similar exercise conducted in a Pacific coastal forest demonstrated a small relative impact on total carbon from forest management activities in the absence of natural disturbances. 30 refs., 1 tab., 8 figs.

  10. Modeling the height of young forests regenerating from recent disturbances in Mississippi using Landsat and ICESat data

    Science.gov (United States)

    Li, Ainong; Huang, Chengquan; Sun, Guoqing; Shi, Hua; Toney, Chris; Zhu, Zhiliang; Rollins, Matthew G.; Goward, Samuel N.; Masek, Jeffery G.

    2011-01-01

    disturbances in current LTSS–VCT products and difficulty in deriving reliable forest height measurements using GLAS samples when terrain relief was present within their footprints. In addition, a systematic underestimation of about 5 m by the developed model was also observed, half of which could be explained by forest growth that occurred between field measurement year and model target year. The remaining difference suggests that tree height measurements derived using waveform lidar data could be significantly underestimated, especially for young pine forests. Options for improving the height modeling approach developed in this study were discussed.

  12. Systematic vacuum study of the ITER model cryopump by test particle Monte Carlo simulation

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Xueli; Haas, Horst; Day, Christian [Institute for Technical Physics, Karlsruhe Institute of Technology, P.O. Box 3640, 76021 Karlsruhe (Germany)

    2011-07-01

    The primary pumping systems on the ITER torus are based on eight tailor-made cryogenic pumps, because no standard commercial vacuum pump can meet the ITER working criteria. This kind of cryopump can provide high pumping speed, especially for light gases, by cryosorption on activated charcoal at 4.5 K. In this paper we present systematic Monte Carlo simulation results for the model pump at a reduced scale, obtained with ProVac3D, a new Test Particle Monte Carlo simulation program developed by KIT. The simulation model includes the most important mechanical structures, such as the sixteen cryogenic panels working at 4.5 K, the 80 K radiation shield envelope with baffles, the pump housing, the inlet valve and the TIMO (Test facility for the ITER Model Pump) test facility. Three typical gas species, i.e., deuterium, protium and helium, are simulated. The pumping characteristics have been obtained. The result is in good agreement with the experimental data up to a gas throughput of 1000 sccm, which marks the limit of free molecular flow. This means that ProVac3D is a useful tool in the design of the prototype cryopump of ITER. In addition, the capture factors at different critical positions are calculated. They can be used as important input parameters for a follow-up Direct Simulation Monte Carlo (DSMC) simulation at higher gas throughput.
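The abstract does not show the algorithm, but the essence of a test-particle Monte Carlo calculation in the free-molecular regime is straightforward to sketch. Below is a minimal, self-contained example that estimates the classic transmission probability of a cylindrical duct (the Clausing factor), assuming diffuse (cosine-law) wall re-emission. It illustrates the TPMC technique only; it is not the ProVac3D code and uses a bare tube, not the ITER pump geometry:

```python
import math
import random

def clausing_transmission(L_over_R, n=20000, seed=1):
    """TPMC estimate of the free-molecular transmission probability of a
    cylinder of radius R = 1 and length L = L_over_R (the Clausing factor)."""
    rng = random.Random(seed)
    R, L = 1.0, float(L_over_R)
    passed = 0
    for _ in range(n):
        # launch uniformly over the inlet disc, cosine law about +z
        r = R * math.sqrt(rng.random())
        a = 2.0 * math.pi * rng.random()
        x, y, z = r * math.cos(a), r * math.sin(a), 0.0
        nx, ny, nz = 0.0, 0.0, 1.0  # local surface normal
        while True:
            dx, dy, dz = _cosine_dir(rng, nx, ny, nz)
            # distance to the cylinder wall: |(x,y) + t*(dx,dy)| = R
            aq = dx * dx + dy * dy
            bq = 2.0 * (x * dx + y * dy)
            cq = x * x + y * y - R * R
            disc = max(bq * bq - 4.0 * aq * cq, 0.0)
            t_wall = (-bq + math.sqrt(disc)) / (2.0 * aq) if aq > 1e-12 else math.inf
            # distance to either end plane
            t_end = (L - z) / dz if dz > 0 else (-z / dz if dz < 0 else math.inf)
            if t_end <= t_wall:
                if dz > 0:
                    passed += 1  # escapes through the far end
                break
            # diffuse re-emission from the wall, normal pointing inward
            x, y, z = x + t_wall * dx, y + t_wall * dy, z + t_wall * dz
            nx, ny, nz = -x / R, -y / R, 0.0
    return passed / n

def _cosine_dir(rng, nx, ny, nz):
    """Sample a unit direction with a cosine distribution about normal n."""
    ct = math.sqrt(rng.random())            # cos(theta), pdf ~ cos(theta)
    st = math.sqrt(1.0 - ct * ct)
    phi = 2.0 * math.pi * rng.random()
    # build an orthonormal tangent basis (t1, t2) around n
    t1 = (-ny, nx, 0.0) if abs(nz) < 0.9 else (0.0, -nz, ny)
    norm = math.sqrt(sum(c * c for c in t1))
    t1 = tuple(c / norm for c in t1)
    t2 = (ny * t1[2] - nz * t1[1], nz * t1[0] - nx * t1[2], nx * t1[1] - ny * t1[0])
    return tuple(ct * n_ + st * (math.cos(phi) * a_ + math.sin(phi) * b_)
                 for n_, a_, b_ in zip((nx, ny, nz), t1, t2))
```

For a tube with L/R = 1 the tabulated Clausing factor is about 0.672, which the sketch reproduces to within Monte Carlo noise; capture factors on panels in a real pump model are accumulated the same way, by tallying where each test particle ends its flight.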

  13. Scaling up depot medroxyprogesterone acetate (DMPA): a systematic literature review illustrating the AIDED model.

    Science.gov (United States)

    Curry, Leslie; Taylor, Lauren; Pallas, Sarah Wood; Cherlin, Emily; Pérez-Escamilla, Rafael; Bradley, Elizabeth H

    2013-08-02

    Use of depot medroxyprogesterone acetate (DMPA), often known by the brand name Depo-Provera, has increased globally, particularly in multiple low- and middle-income countries (LMICs). As a reproductive health technology that has scaled up in diverse contexts, DMPA is an exemplar product innovation with which to illustrate the utility of the AIDED model for scaling up family health innovations. We conducted a systematic review of the enabling factors and barriers to scaling up DMPA use in LMICs. We searched 11 electronic databases for academic literature published through January 2013 (n = 284 articles), and grey literature from major health organizations. We applied exclusion criteria to identify relevant articles from peer-reviewed (n = 10) and grey literature (n = 9), extracting data on scale up of DMPA in 13 countries. We then mapped the resulting factors to the five AIDED model components: ASSESS, INNOVATE, DEVELOP, ENGAGE, and DEVOLVE. The final sample of sources included studies representing variation in geographies and methodologies. We identified 15 enabling factors and 10 barriers to dissemination, diffusion, scale up, and/or sustainability of DMPA use. The greatest number of factors were mapped to the ASSESS, DEVELOP, and ENGAGE components. Findings offer early empirical support for the AIDED model, and provide insights into scale up of DMPA that may be relevant for other family planning product innovations.

  14. Few promising multivariable prognostic models exist for recovery of people with non-specific neck pain in musculoskeletal primary care: a systematic review.

    Science.gov (United States)

    Wingbermühle, Roel W; van Trijffel, Emiel; Nelissen, Paul M; Koes, Bart; Verhagen, Arianne P

    2018-01-01

    Which multivariable prognostic model(s) for recovery in people with neck pain can be used in primary care? Systematic review of studies evaluating multivariable prognostic models. People with non-specific neck pain presenting at primary care. Baseline characteristics of the participants. Recovery measured as pain reduction, reduced disability, or perceived recovery at short-term and long-term follow-up. Fifty-three publications were included, of which 46 were derivation studies, four were validation studies, and three concerned combined studies. The derivation studies presented 99 multivariate models, all of which were at high risk of bias. Three externally validated models generated usable models in low risk of bias studies. One predicted recovery in non-specific neck pain, while two concerned participants with whiplash-associated disorders (WAD). Discriminative ability of the non-specific neck pain model was area under the curve (AUC) 0.65 (95% CI 0.59 to 0.71). For the first WAD model, discriminative ability was AUC 0.85 (95% CI 0.79 to 0.91). For the second WAD model, specificity was 99% (95% CI 93 to 100) and sensitivity was 44% (95% CI 23 to 65) for prediction of non-recovery, and 86% (95% CI 73 to 94) and 55% (95% CI 41 to 69) for prediction of recovery, respectively. Initial Neck Disability Index scores and age were identified as consistent prognostic factors in these three models. Three externally validated models were found to be usable and to have low risk of bias, of which two showed acceptable discriminative properties for predicting recovery in people with neck pain. These three models need further validation and evaluation of their clinical impact before their broad clinical use can be advocated. PROSPERO CRD42016042204. [Wingbermühle RW, van Trijffel E, Nelissen PM, Koes B, Verhagen AP (2018) Few promising multivariable prognostic models exist for recovery of people with non-specific neck pain in musculoskeletal primary care: a systematic review
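The discrimination statistics reported above (AUC, sensitivity, specificity) can be reproduced from raw predictions. A small illustration with made-up data, not the reviewed neck pain cohorts:

```python
def sens_spec(y_true, y_pred):
    """Sensitivity and specificity for binary outcomes (1 = recovered)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    return tp / (tp + fn), tn / (tn + fp)

def auc(scores_pos, scores_neg):
    """Area under the ROC curve via the Mann-Whitney pairwise comparison:
    the probability that a recovered case scores above a non-recovered one."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))
```

An AUC of 0.65, as for the non-specific neck pain model, means the model ranks a randomly chosen recovered patient above a non-recovered one only 65% of the time, which is why the review calls for further validation.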

  15. Evaluation of sea-surface photosynthetically available radiation algorithms under various sky conditions and solar elevations.

    Science.gov (United States)

    Somayajula, Srikanth Ayyala; Devred, Emmanuel; Bélanger, Simon; Antoine, David; Vellucci, V; Babin, Marcel

    2018-04-20

    systematic overestimation, and one method showed a systematic underestimation of daily PAR, with relative RMSDs as large as 50% under all sky conditions. Under partially clear to overcast conditions, all the methods underestimated PAR. Model uncertainties depend predominantly on which cloud products were used.

  16. Conceptualising paediatric health disparities: a metanarrative systematic review and unified conceptual framework.

    Science.gov (United States)

    Ridgeway, Jennifer L; Wang, Zhen; Finney Rutten, Lila J; van Ryn, Michelle; Griffin, Joan M; Murad, M Hassan; Asiedu, Gladys B; Egginton, Jason S; Beebe, Timothy J

    2017-08-04

    There exists a paucity of work in the development and testing of theoretical models specific to childhood health disparities, even though such disparities have been linked to the prevalence of adult health disparities, including high rates of chronic disease. We conducted a systematic review and thematic analysis of existing models of health disparities specific to children to inform the development of a unified conceptual framework. We systematically reviewed articles reporting theoretical or explanatory models of disparities on a range of outcomes related to child health. We searched Ovid Medline In-Process & Other Non-Indexed Citations, Ovid MEDLINE, Ovid Embase, Ovid Cochrane Central Register of Controlled Trials, Ovid Cochrane Database of Systematic Reviews, and Scopus (database inception to 9 July 2015). A metanarrative approach guided the analysis process. A total of 48 studies presenting 48 models were included. This systematic review found multiple models but no consensus on one approach. However, we did discover a fair amount of overlap, such that the 48 models reviewed converged into the unified conceptual framework. The majority of models included factors in three domains: individual characteristics and behaviours (88%), healthcare providers and systems (63%), and environment/community (56%). Only 38% of models included factors in the health and public policies domain. A disease-agnostic unified conceptual framework may inform integration of existing knowledge of child health disparities and guide future research. This multilevel framework can focus attention among clinical, basic and social science research on the relationships between policy, social factors, health systems and the physical environment that impact children's health outcomes. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  17. Comparing the reported burn conditions for different severity burns in porcine models: a systematic review.

    Science.gov (United States)

    Andrews, Christine J; Cuttle, Leila

    2017-12-01

    There are many porcine burn models that create burns using different materials (e.g. metal, water) and different burn conditions (e.g. temperature and duration of exposure). This review aims to determine whether a pooled analysis of these studies can provide insight into the burn materials and conditions required to create burns of a specific severity. A systematic review of 42 porcine burn studies describing the depth of burn injury with histological evaluation is presented. Inclusion criteria included thermal burns, burns created with a novel method or material, histological evaluation within 7 days post-burn and method for depth of injury assessment specified. Conditions causing deep dermal scald burns compared to contact burns of equivalent severity were disparate, with lower temperatures and shorter durations reported for scald burns (83°C for 14 seconds) compared to contact burns (111°C for 23 seconds). A valuable archive of the different mechanisms and materials used for porcine burn models is presented to aid design and optimisation of future models. Significantly, this review demonstrates the effect of the mechanism of injury on burn severity and that caution is recommended when burn conditions established by porcine contact burn models are used by regulators to guide scald burn prevention strategies. © 2017 Medicalhelplines.com Inc and John Wiley & Sons Ltd.

  18. Cross Deployment Networking and Systematic Performance Analysis of Underwater Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Zhengxian Wei

    2017-07-01

    Full Text Available Underwater wireless sensor networks (UWSNs) have become a new hot research area. However, due to work dynamics and the harsh ocean environment, obtaining a UWSN with the best systematic performance while deploying as few sensor nodes as possible and setting up self-adaptive networking is an urgent problem that needs to be solved. Consequently, sensor deployment, networking, and performance calculation of UWSNs are challenging issues, and the study in this paper centers on this topic, putting forward three relevant methods and models. Firstly, the normal body-centered cubic lattice is improved to a cross body-centered cubic lattice (CBCL), and a deployment process and topology generation method are built. Then, most importantly, a cross deployment networking method (CDNM) for UWSNs suitable for the underwater environment is proposed. Furthermore, a systematic quad-performance calculation model (SQPCM) is proposed from an integrated perspective, in which the systematic performance of a UWSN includes coverage, connectivity, durability and rapid-reactivity. Besides, measurement models are established based on the relationship between systematic performance and influencing parameters. Finally, the influencing parameters are divided into three types, namely constraint parameters, device performance and networking parameters. Based on these, a networking parameters adjustment method (NPAM) for optimized systematic performance of UWSNs is presented. The simulation results demonstrate that the approach proposed in this paper is feasible and efficient in networking and performance calculation of UWSNs.
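
    The coverage component of such a performance model can be illustrated with a small Monte Carlo sketch. The region size, lattice spacing, and sensing radius below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed deployment: a 100 m cube monitored by nodes on a
# body-centered-cubic-style lattice (corner nodes plus cell centers).
region, step, r_sense = 100.0, 40.0, 20.0

grid = np.arange(0.0, region + 1.0, step)            # corner-node coordinates
corners = np.array(np.meshgrid(grid, grid, grid)).reshape(3, -1).T
cgrid = np.arange(step / 2.0, region, step)          # cell-center coordinates
centers = np.array(np.meshgrid(cgrid, cgrid, cgrid)).reshape(3, -1).T
nodes = np.vstack([corners, centers])

# Monte Carlo coverage estimate: the fraction of random points lying
# within sensing range of at least one node.
pts = rng.uniform(0.0, region, size=(20000, 3))
d2 = ((pts[:, None, :] - nodes[None, :, :]) ** 2).sum(axis=-1)
coverage = (d2.min(axis=1) <= r_sense ** 2).mean()
print(f"{len(nodes)} nodes, estimated coverage {coverage:.2f}")
```

    The same sampling loop extends to the other three metrics once connectivity and lifetime models are plugged in.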

  19. Marginalizing Instrument Systematics in HST WFC3 Transit Light Curves

    Science.gov (United States)

    Wakeford, H. R.; Sing, D. K.; Evans, T.; Deming, D.; Mandell, A.

    2016-03-01

    Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) infrared observations at 1.1-1.7 μm probe primarily the H2O absorption band at 1.4 μm, and have provided low-resolution transmission spectra for a wide range of exoplanets. We present the application of marginalization based on Gibson to analyze exoplanet transit light curves obtained from HST WFC3 and to better determine important transit parameters such as Rp/R*, which are important for accurate detections of H2O. We approximate the evidence, often referred to as the marginal likelihood, for a grid of systematic models using the Akaike Information Criterion. We then calculate the evidence-based weight assigned to each systematic model and use the information from all tested models to calculate the final marginalized transit parameters for both the band-integrated and spectroscopic light curves to construct the transmission spectrum. We find that a majority of the highest-weight models contain a correction for a linear trend in time as well as corrections related to HST orbital phase. We additionally test the dependence on the shift in spectral wavelength position over the course of the observations and find that spectroscopic wavelength shifts δλ(λ) best describe the associated systematic in the spectroscopic light curves for most targets, while fast-scan-rate observations of bright targets require an additional level of processing to produce a robust transmission spectrum. The use of marginalization allows for transparent interpretation and understanding of the instrument and of the impact of each systematic, evaluated statistically for each data set, expanding the ability to make true and comprehensive comparisons between exoplanet atmospheres.
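
    The evidence approximation and weighting scheme described above can be sketched in a few lines; the per-model log-likelihoods, parameter counts, and transit depths below are hypothetical placeholders, not values from the paper:

```python
import numpy as np

# Hypothetical fit results for three systematic models: maximized
# log-likelihood, number of free parameters, and best-fit Rp/R* with error.
log_like  = np.array([152.3, 150.1, 148.7])
n_params  = np.array([5, 7, 4])
depth     = np.array([0.1210, 0.1198, 0.1225])
depth_err = np.array([0.0008, 0.0011, 0.0009])

# AIC as an approximation to the (log) evidence of each systematic model.
aic = 2 * n_params - 2 * log_like

# Evidence-based weight for each model, normalized to sum to one.
delta = aic - aic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()

# Marginalized parameter: weighted mean over all tested models, with an
# uncertainty that folds in the scatter between models.
depth_marg = np.sum(w * depth)
var_marg = np.sum(w * depth_err**2) + np.sum(w * (depth - depth_marg)**2)
print(depth_marg, np.sqrt(var_marg))
```

    The weighted mean stays close to the best model's value while the between-model scatter term inflates the error bar when the models disagree.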

  20. Food Classification Systems Based on Food Processing: Significance and Implications for Policies and Actions: A Systematic Literature Review and Assessment.

    Science.gov (United States)

    Moubarac, Jean-Claude; Parra, Diana C; Cannon, Geoffrey; Monteiro, Carlos A

    2014-06-01

    This paper is the first to make a systematic review and assessment of the literature that attempts methodically to incorporate food processing into classification of diets. The review identified 1276 papers, of which 110 were screened and 21 studied, derived from five classification systems. This paper analyses and assesses the five systems, one of which has been devised and developed by a research team that includes co-authors of this paper. The quality of the five systems is assessed and scored according to how specific, coherent, clear, comprehensive and workable they are. Their relevance to food, nutrition and health, and their use in various settings, is described. The paper shows that the significance of industrial food processing in shaping global food systems and supplies and thus dietary patterns worldwide, and its role in the pandemic of overweight and obesity, remains overlooked and underestimated. Once food processing is systematically incorporated into food classifications, they will be more useful in assessing and monitoring dietary patterns. Food classification systems that emphasize industrial food processing, and that define and distinguish relevant different types of processing, will improve understanding of how to prevent and control overweight, obesity and related chronic non-communicable diseases, and also malnutrition. They will also be a firmer basis for rational policies and effective actions designed to protect and improve public health at all levels from global to local.

  1. Application of Vine Copulas to Credit Portfolio Risk Modeling

    Directory of Open Access Journals (Sweden)

    Marco Geidosch

    2016-06-01

    Full Text Available In this paper, we demonstrate the superiority of vine copulas over conventional copulas when modeling the dependence structure of a credit portfolio. We show statistical and economic implications of replacing conventional copulas by vine copulas for a subportfolio of the Euro Stoxx 50 and the S&P 500 companies, respectively. Our study includes D-vines and R-vines where the bivariate building blocks are chosen from the Gaussian, the t and the Clayton family. Our findings are: (i) the conventional Gauss copula is deficient in modeling the dependence structure of a credit portfolio, and economic capital is seriously underestimated; (ii) D-vine structures offer a better statistical fit to the data than classical copulas, but underestimate economic capital compared to R-vines; (iii) when mixing different copula families in an R-vine structure, the best statistical fit to the data can be achieved, which corresponds to the most reliable estimate for economic capital.
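
    As a much-simplified illustration of how the copula assumption feeds into an economic-capital figure, the sketch below simulates portfolio losses under a plain one-factor Gaussian copula (the baseline the paper criticizes, not a vine copula) and reads off a 99.9% loss quantile; portfolio size, default probability, and correlation are assumed values:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Assumed portfolio: 50 obligors, 2% default probability, unit exposure,
# 30% asset correlation -- illustrative numbers only.
n_obligors, pd_, rho, n_sims = 50, 0.02, 0.3, 50_000

# One-factor Gaussian copula: X_i = sqrt(rho)*Z + sqrt(1-rho)*eps_i.
z = rng.standard_normal((n_sims, 1))
eps = rng.standard_normal((n_sims, n_obligors))
x = np.sqrt(rho) * z + np.sqrt(1 - rho) * eps

# An obligor defaults when its latent variable falls below the PD threshold.
losses = (x < norm.ppf(pd_)).sum(axis=1)

# Economic capital as the 99.9% loss quantile minus the expected loss.
var_999 = np.quantile(losses, 0.999)
econ_capital = var_999 - losses.mean()
print(var_999, econ_capital)
```

    Replacing the latent Gaussian dependence with a vine construction changes the tail of `losses`, which is exactly where the paper finds the Gauss copula understates risk.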

  2. Precision tests of the standard model, the Higgs, and new physics

    Indian Academy of Sciences (India)

    We present a concise review of the status of the standard model and of the ... simply arise from an underestimation of the theoretical error in the QCD analysis needed ..... The reason is that there are both conceptual problems and phenomeno-.

  3. Evaluation and improvement of the Community Land Model (CLM4) in Oregon forests

    Directory of Open Access Journals (Sweden)

    T. W. Hudiburg

    2013-01-01

    Full Text Available Ecosystem process models are important tools for determining the interactive effects of global change and disturbance on forest carbon dynamics. Here we evaluated and improved terrestrial carbon cycling simulated by the Community Land Model (CLM4), the land model portion of the Community Earth System Model (CESM1.0.4). Our analysis was conducted primarily in Oregon forests using FLUXNET and forest inventory data for the period 2001–2006. We go beyond prior modeling studies in the region by incorporating regional variation in physiological parameters from >100 independent field sites in the region. We also compare spatial patterns of simulated forest carbon stocks and net primary production (NPP) at 15 km resolution using data collected from federal forest inventory plots (FIA) from >3000 plots in the study region. Finally, we evaluate simulated gross primary production (GPP) with FLUXNET eddy covariance tower data at wet and dry sites in the region. We improved model estimates by making modifications to CLM4 to allow physiological parameters (e.g., foliage carbon to nitrogen ratios and specific leaf area), mortality rate, biological nitrogen fixation, and wood allocation to vary spatially by plant functional type (PFT) within an ecoregion based on field plot data in the region. Prior to modifications, default parameters resulted in underestimation of stem biomass in all forested ecoregions except the Blue Mountains, and annual NPP was both over- and underestimated. After modifications, model estimates of mean NPP fell within the observed range of uncertainty in all ecoregions (two-sided P value = 0.8), and the underestimation of stem biomass was reduced. This was an improvement from the default configuration by 50% for stem biomass and 30% for NPP. At the tower sites, modeled monthly GPP fell within the observed range of uncertainty at both sites for the majority of the year; however, summer GPP was underestimated at the Metolius semi

  4. Modeling of novel diagnostic strategies for active tuberculosis - a systematic review: current practices and recommendations.

    Directory of Open Access Journals (Sweden)

    Alice Zwerling

    Full Text Available The field of diagnostics for active tuberculosis (TB) is rapidly developing. TB diagnostic modeling can help to inform policy makers and support complicated decisions on diagnostic strategy, with important budgetary implications. Demand for TB diagnostic modeling is likely to increase, and an evaluation of current practice is important. We aimed to systematically review all studies employing mathematical modeling to evaluate the cost-effectiveness or epidemiological impact of novel diagnostic strategies for active TB. Pubmed, personal libraries and reference lists were searched to identify eligible papers. We extracted data on a wide variety of model structures, parameter choices, sensitivity analyses and study conclusions, which were discussed during a meeting of content experts. From 5619 records a total of 36 papers were included in the analysis. Sixteen papers included population impact/transmission modeling, 5 were health systems models, and 24 included estimates of cost-effectiveness. Transmission and health systems models included specific structure to explore the importance of the diagnostic pathway (n = 4), key determinants of diagnostic delay (n = 5), operational context (n = 5), and the pre-diagnostic infectious period (n = 1). The majority of models implemented sensitivity analysis, although only 18 studies described multi-way sensitivity analysis of more than 2 parameters simultaneously. Among the models used to make cost-effectiveness estimates, the most frequently studied diagnostic assays included Xpert MTB/RIF (n = 7) and alternative nucleic acid amplification tests (NAATs) (n = 4). Most (n = 16) of the cost-effectiveness models compared new assays to an existing baseline and generated an incremental cost-effectiveness ratio (ICER). Although models have addressed a small number of important issues, many decisions regarding implementation of TB diagnostics are being made without the full benefits of insight from mathematical
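
    The ICER mentioned above is a simple ratio of incremental cost to incremental effect; a minimal sketch with hypothetical costs and effects (not figures from any reviewed study):

```python
# Hypothetical costs and effects for a baseline diagnostic strategy and a
# new assay; the numbers are illustrative placeholders only.
cost_baseline, effect_baseline = 120.0, 6.10   # cost per patient, QALYs
cost_new, effect_new = 310.0, 6.35

# Incremental cost-effectiveness ratio: extra cost per extra unit of effect.
icer = (cost_new - cost_baseline) / (effect_new - effect_baseline)
print(icer)   # cost per QALY gained
```

    A decision maker then compares this ratio against a willingness-to-pay threshold.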

  5. Religion and Spirituality’s Influences on HIV Syndemics Among MSM: A Systematic Review and Conceptual Model

    Science.gov (United States)

    Parsons, Jeffrey T.

    2015-01-01

    This paper presents a systematic review of the quantitative HIV research that assessed the relationships between religion, spirituality, HIV syndemics, and individual HIV syndemics-related health conditions (e.g. depression, substance abuse, HIV risk) among men who have sex with men (MSM) in the United States. No quantitative studies were found that assessed the relationships between HIV syndemics, religion, and spirituality. Nine studies, with 13 statistical analyses, were found that examined the relationships between individual HIV syndemics-related health conditions, religion, and spirituality. Among the 13 analyses, religion and spirituality were found to have mixed relationships with HIV syndemics-related health conditions (6 nonsignificant associations; 5 negative associations; 2 positive associations). Given the overall lack of inclusion of religion and spirituality in HIV syndemics research, a conceptual model that hypothesizes the potential interactions of religion and spirituality with HIV syndemics-related health conditions is presented. The implications of the model for MSM’s health are outlined. PMID:26319130

  6. Nitrous oxide emissions from cropland: a procedure for calibrating the DayCent biogeochemical model using inverse modelling

    Science.gov (United States)

    Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.

    2013-01-01

    DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional “trial and error” approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residual differences between measured and modelled outputs by up to 67 %. For the calibration period, simulation with the default model parameter values underestimated mean daily N2O flux by 98 %. After parameter estimation, the model underestimated the mean daily fluxes by 35 %. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20 % relative to the default simulation. The sensitivity analysis performed provides important insights into the model structure and guidance for model improvement.
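
    The calibration loop described above, minimizing a weighted sum of squared residuals between measured and modelled fluxes, can be sketched with a toy flux model. The exponential temperature response and all parameter values below are illustrative stand-ins, not the DayCent equations or PEST itself:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for a flux model: N2O flux as a simple function of soil
# temperature with two tunable parameters.
def flux_model(params, temp):
    a, b = params
    return a * np.exp(b * temp)

# Synthetic "observations": the true parameters plus measurement noise.
rng = np.random.default_rng(1)
temp = np.linspace(5.0, 30.0, 40)
obs = flux_model([0.5, 0.08], temp) + rng.normal(0.0, 0.1, temp.size)

def weighted_residuals(params):
    # Weight each residual by the inverse of the observation error (0.1).
    return (obs - flux_model(params, temp)) / 0.1

default_params = np.array([1.0, 0.05])      # uncalibrated "default" values
fit = least_squares(weighted_residuals, default_params)

phi_default = np.sum(weighted_residuals(default_params) ** 2)
phi_calibrated = np.sum(weighted_residuals(fit.x) ** 2)
print(f"objective reduced by {100 * (1 - phi_calibrated / phi_default):.0f}%")
```

    PEST wraps exactly this kind of objective around an external model executable, adding regularization and sensitivity output.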

  7. Planck 2013 results. III. LFI systematic uncertainties

    CERN Document Server

    Aghanim, N; Arnaud, M; Ashdown, M; Atrio-Barandela, F; Aumont, J; Baccigalupi, C; Banday, A J; Barreiro, R B; Battaner, E; Benabed, K; Benoît, A; Benoit-Lévy, A; Bernard, J -P; Bersanelli, M; Bielewicz, P; Bobin, J; Bock, J J; Bonaldi, A; Bonavera, L; Bond, J R; Borrill, J; Bouchet, F R; Bridges, M; Bucher, M; Burigana, C; Butler, R C; Cardoso, J -F; Catalano, A; Chamballu, A; Chiang, L -Y; Christensen, P R; Church, S; Colombi, S; Colombo, L P L; Crill, B P; Cruz, M; Curto, A; Cuttaia, F; Danese, L; Davies, R D; Davis, R J; de Bernardis, P; de Rosa, A; de Zotti, G; Delabrouille, J; Dick, J; Dickinson, C; Diego, J M; Dole, H; Donzelli, S; Doré, O; Douspis, M; Dupac, X; Efstathiou, G; Enßlin, T A; Eriksen, H K; Finelli, F; Forni, O; Frailis, M; Franceschi, E; Gaier, T C; Galeotta, S; Ganga, K; Giard, M; Giraud-Héraud, Y; Gjerløw, E; González-Nuevo, J; Górski, K M; Gratton, S; Gregorio, A; Gruppuso, A; Hansen, F K; Hanson, D; Harrison, D; Henrot-Versillé, S; Hernández-Monteagudo, C; Herranz, D; Hildebrandt, S R; Hivon, E; Hobson, M; Holmes, W A; Hornstrup, A; Hovest, W; Huffenberger, K M; Jaffe, T R; Jaffe, A H; Jewell, J; Jones, W C; Juvela, M; Kangaslahti, P; Keihänen, E; Keskitalo, R; Kiiveri, K; Kisner, T S; Knoche, J; Knox, L; Kunz, M; Kurki-Suonio, H; Lagache, G; Lähteenmäki, A; Lamarre, J -M; Lasenby, A; Laureijs, R J; Lawrence, C R; Leahy, J P; Leonardi, R; Lesgourgues, J; Liguori, M; Lilje, P B; Lindholm, V; Linden-Vørnle, M; López-Caniego, M; Lubin, P M; Macías-Pérez, J F; Maino, D; Mandolesi, N; Maris, M; Marshall, D J; Martin, P G; Martínez-González, E; Masi, S; Matarrese, S; Matthai, F; Mazzotta, P; Meinhold, P R; Melchiorri, A; Mendes, L; Mennella, A; Migliaccio, M; Mitra, S; Moneti, A; Montier, L; Morgante, G; Mortlock, D; Moss, A; Munshi, D; Naselsky, P; Natoli, P; Netterfield, C B; Nørgaard-Nielsen, H U; Novikov, D; Novikov, I; O'Dwyer, I J; Osborne, S; Paci, F; Pagano, L; Paladini, R; Paoletti, D; Partridge, B; Pasian, F; Patanchon, G; Pearson, 
D; Peel, M; Perdereau, O; Perotto, L; Perrotta, F; Pierpaoli, E; Pietrobon, D; Plaszczynski, S; Platania, P; Pointecouteau, E; Polenta, G; Ponthieu, N; Popa, L; Poutanen, T; Pratt, G W; Prézeau, G; Prunet, S; Puget, J -L; Rachen, J P; Rebolo, R; Reinecke, M; Remazeilles, M; Ricciardi, S; Riller, T; Rocha, G; Rosset, C; Rossetti, M; Roudier, G; Rubiño-Martín, J A; Rusholme, B; Sandri, M; Santos, D; Scott, D; Seiffert, M D; Shellard, E P S; Spencer, L D; Starck, J -L; Stolyarov, V; Stompor, R; Sureau, F; Sutton, D; Suur-Uski, A -S; Sygnet, J -F; Tauber, J A; Tavagnacco, D; Terenzi, L; Toffolatti, L; Tomasi, M; Tristram, M; Tucci, M; Tuovinen, J; Türler, M; Umana, G; Valenziano, L; Valiviita, J; Van Tent, B; Varis, J; Vielva, P; Villa, F; Vittorio, N; Wade, L A; Wandelt, B D; Watson, R; Wilkinson, A; Yvon, D; Zacchei, A; Zonca, A

    2014-01-01

    We present the current estimate of instrumental and systematic effect uncertainties for the Planck Low Frequency Instrument relevant to the first release of the Planck cosmological results. We give an overview of the main effects and of the tools and methods applied to assess residuals in maps and power spectra. We also present an overall budget of known systematic effect uncertainties, which are dominated by sidelobe straylight pick-up and imperfect calibration. However, even these two effects are at least two orders of magnitude weaker than the cosmic microwave background (CMB) fluctuations as measured in terms of the angular temperature power spectrum. A residual signal above the noise level is present in the multipole range $\ell<20$, most notably at 30 GHz, and is likely caused by residual Galactic straylight contamination. Current analysis aims to further reduce the level of spurious signals in the data and to improve the systematic effects modelling, in particular with respect to straylight and calibra...

  8. Systematic review automation technologies

    Science.gov (United States)

    2014-01-01

    Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise and timeliness are often quoted as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed for realizing automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time. PMID:25005128

  9. The Diagnostic Efficacy of Cone-beam Computed Tomography in Endodontics: A Systematic Review and Analysis by a Hierarchical Model of Efficacy.

    Science.gov (United States)

    Rosen, Eyal; Taschieri, Silvio; Del Fabbro, Massimo; Beitlitum, Ilan; Tsesis, Igor

    2015-07-01

    The aim of this study was to evaluate the diagnostic efficacy of cone-beam computed tomographic (CBCT) imaging in endodontics based on a systematic search and analysis of the literature using an efficacy model. A systematic search of the literature was performed to identify studies evaluating the use of CBCT imaging in endodontics. The identified studies were subjected to strict inclusion criteria followed by an analysis using a hierarchical model of efficacy (model) designed for appraisal of the literature on the levels of efficacy of a diagnostic imaging modality. Initially, 485 possibly relevant articles were identified. After title and abstract screening and a full-text evaluation, 58 articles (12%) that met the inclusion criteria were analyzed and allocated to levels of efficacy. Most eligible articles (n = 52, 90%) evaluated technical characteristics or the accuracy of CBCT imaging, which was defined in this model as low levels of efficacy. Only 6 articles (10%) proclaimed to evaluate the efficacy of CBCT imaging to support the practitioner's decision making; treatment planning; and, ultimately, the treatment outcome, which was defined as higher levels of efficacy. The expected ultimate benefit of CBCT imaging to the endodontic patient as evaluated by its level of diagnostic efficacy is unclear and is mainly limited to its technical and diagnostic accuracy efficacies. Even for these low levels of efficacy, current knowledge is limited. Therefore, a cautious and rational approach is advised when considering CBCT imaging for endodontic purposes. Copyright © 2015 American Association of Endodontists. Published by Elsevier Inc. All rights reserved.

  10. Soil gas and radon entry into a simple test structure: Comparison of experimental and modelling results

    DEFF Research Database (Denmark)

    Andersen, C.E.; Søgaard-Hansen, J.; Majborn, B.

    1994-01-01

    A radon test structure has been established at a field site at Risø National Laboratory. Measurements have been made of soil gas entry rates, pressure couplings and radon depletion. The experimental results have been compared with results obtained from measured soil parameters and a two-dimensional steady-state numerical model of Darcy flow and combined diffusive and advective transport of radon. For most probe locations, the calculated values of the pressure couplings and the radon depletion agree well with the measured values, thus verifying important elements of the Darcy flow approximation, and the ability of the model to treat combined diffusive and advective transport of radon. However, the model gives an underestimation of the soil gas entry rate. Even if it is assumed that the soil has a permeability equal to the highest of the measured values, the model underestimates the soil gas entry rate.
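
    The advective part of such a transport model rests on Darcy's law; a minimal one-dimensional sketch with illustrative soil and pressure values (not the measured parameters of the test structure):

```python
# Darcy's law for soil-gas flow in one dimension: q = -(k/mu) * dp/dx.
# The permeability, viscosity, and pressure gradient below are illustrative
# assumptions only.
k = 1e-11            # soil permeability, m^2
mu = 1.8e-5          # dynamic viscosity of soil gas (air), Pa*s
dp_dx = -5.0 / 1.0   # a 5 Pa depressurization across 1 m of soil, Pa/m

q = -(k / mu) * dp_dx   # Darcy flux (volume flow per unit area), m/s
print(q)
```

    Multiplying this flux by the effective entry area gives the soil gas entry rate the model is compared against.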

  11. Systematic study of α half-lives of superheavy nuclei

    Science.gov (United States)

    Budaca, A. I.; Silisteanu, I.

    2014-03-01

    Two different descriptions of the α-decay process, namely the shell model rate theory and a phenomenological description, are employed to investigate the α-decay properties of SHN. These descriptions are briefly presented and illustrated by their results. Special attention is given to the shell structure and resonance scattering effects that govern the existence and decay of these nuclei. A first systematics of α-decay properties of SHN was performed by studying the half-life vs. energy correlations in terms of atomic number and mass number. This systematics shows that transitions between even-even nuclei are favored, while all other transitions, involving odd nucleons, are hindered. The accuracy of experimental and calculated α half-lives is illustrated by the systematics of these results.
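
    Half-life vs. energy systematics of this kind are often summarized by Viola-Seaborg-type formulas; the sketch below uses commonly quoted coefficients for even-even emitters, which are an assumption here rather than values fitted in this work:

```python
import math

# Viola-Seaborg-type systematics for even-even alpha emitters:
#   log10(T_1/2 / s) = (a*Z + b) / sqrt(Q_alpha) + c*Z + d
# Coefficients below are the commonly quoted Viola-Seaborg values
# (an assumption for illustration, not fitted in the paper).
A_, B_, C_, D_ = 1.66175, -8.5166, -0.20228, -33.9069

def log10_half_life(Z, q_alpha_mev):
    return (A_ * Z + B_) / math.sqrt(q_alpha_mev) + C_ * Z + D_

# Example: a hypothetical Z = 114 superheavy nucleus with Q_alpha = 10 MeV,
# which comes out with a half-life of order seconds.
print(log10_half_life(114, 10.0))
```

    The strong Q-value dependence in the first term is why small shell-structure shifts in Q_alpha move predicted half-lives by orders of magnitude.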

  12. Mathematical models used to inform study design or surveillance systems in infectious diseases: a systematic review.

    Science.gov (United States)

    Herzog, Sereina A; Blaizot, Stéphanie; Hens, Niel

    2017-12-18

    Mathematical models offer the possibility to investigate the infectious disease dynamics over time and may help in informing design of studies. A systematic review was performed in order to determine to what extent mathematical models have been incorporated into the process of planning studies and hence inform study design for infectious diseases transmitted between humans and/or animals. We searched Ovid Medline and two trial registry platforms (Cochrane, WHO) using search terms related to infection, mathematical model, and study design from the earliest dates to October 2016. Eligible publications and registered trials included mathematical models (compartmental, individual-based, or Markov) which were described and used to inform the design of infectious disease studies. We extracted information about the investigated infection, population, model characteristics, and study design. We identified 28 unique publications but no registered trials. Focusing on compartmental and individual-based models we found 12 observational/surveillance studies and 11 clinical trials. Infections studied were equally animal and human infectious diseases for the observational/surveillance studies, while all but one between humans for clinical trials. The mathematical models were used to inform, amongst other things, the required sample size (n = 16), the statistical power (n = 9), the frequency at which samples should be taken (n = 6), and from whom (n = 6). Despite the fact that mathematical models have been advocated to be used at the planning stage of studies or surveillance systems, they are used scarcely. With only one exception, the publications described theoretical studies, hence, not being utilised in real studies.
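
    Where a model's predicted effect feeds into study design, the step from predicted proportions to a required sample size is a standard calculation; a sketch using the normal approximation for comparing two proportions (the example inputs are hypothetical, not from any reviewed study):

```python
import math
from scipy.stats import norm

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-arm sample size to detect p1 vs p2 (normal approximation)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# e.g. a transmission model predicts the intervention lowers incidence
# from 50% to 40%; the design question is how many participants per arm.
print(sample_size_two_proportions(0.5, 0.4))
```

    A compartmental or individual-based model typically supplies `p1` and `p2` (and the sampling frequency), which is the role the reviewed publications describe.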

  13. Primary care models for treating opioid use disorders: What actually works? A systematic review.

    Directory of Open Access Journals (Sweden)

    Pooja Lagisetty

    Full Text Available Primary care-based models for Medication-Assisted Treatment (MAT) have been shown to reduce mortality for Opioid Use Disorder (OUD) and have equivalent efficacy to MAT in specialty substance treatment facilities. The objective of this study is to systematically analyze current evidence-based, primary care OUD MAT interventions and identify program structures and processes associated with improved patient outcomes in order to guide future policy and implementation in primary care settings. PubMed, EMBASE, CINAHL, and PsychInfo. We included randomized controlled or quasi-experimental trials and observational studies evaluating OUD treatment in primary care settings treating adult patient populations, and assessed structural domains using an established systems engineering framework. We included 35 interventions (10 RCTs and 25 quasi-experimental interventions) that all tested MAT, buprenorphine or methadone, in primary care settings across 8 countries. Most included interventions used joint multi-disciplinary (specialty addiction services combined with primary care) and coordinated care by physician and non-physician provider delivery models to provide MAT. Despite large variability in reported patient outcomes, processes, and tasks/tools used, similar key design factors arose among successful programs, including integrated clinical teams with support staff who were often advanced practice clinicians (nurses and pharmacists) as clinical care managers, incorporating patient "agreements," and using home inductions to make treatment more convenient for patients and providers. The findings suggest that multidisciplinary and coordinated care delivery models are an effective strategy to implement OUD treatment and increase MAT access in primary care, but research directly comparing specific structures and processes of care models is still needed.

  14. Evaluation of radar-derived precipitation estimates using runoff simulation : report for the NFR Energy Norway funded project 'Utilisation of weather radar data in atmospheric and hydrological models'

    Energy Technology Data Exchange (ETDEWEB)

    Abdella, Yisak; Engeland, Kolbjoern; Lepioufle, Jean-Marie

    2012-11-01

    This report presents the results from the project 'Utilisation of weather radar data in atmospheric and hydrological models' funded by NFR and Energy Norway. Three precipitation products (radar-derived, interpolated, and a combination of the two) were generated as input for hydrological models. All three products were evaluated by comparing the simulated and observed runoff at catchments. In order to expose any bias in the precipitation inputs, no precipitation correction factors were applied. Three criteria were used to measure the performance: Nash, correlation coefficient, and bias. The results show that the simulations with the combined precipitation input give the best performance. We also see that the radar-derived precipitation estimates give reasonable runoff simulations even without region-specific parameters for the Z-R relationship. All three products resulted in an underestimation of the estimated runoff, revealing a systematic bias in measurements (e.g. catch deficit, orographic effects, Z-R relationships) that can be improved. There is significant potential for using radar-derived precipitation in runoff simulation, especially in catchments without precipitation gauges inside.
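
    The three criteria named above (Nash, correlation coefficient, and bias) are straightforward to compute; a sketch with made-up observed and simulated runoff series, where the simulation is deliberately low to mimic the underestimation reported:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is perfect, 0 is no better than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def rel_bias(obs, sim):
    """Relative volume bias: negative values indicate underestimated runoff."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return (sim.sum() - obs.sum()) / obs.sum()

# Made-up runoff series (e.g. daily discharge in m^3/s); the simulation
# tracks the shape well but runs systematically low.
obs = np.array([1.0, 2.0, 4.0, 3.0, 2.0])
sim = np.array([0.9, 1.8, 3.5, 2.8, 1.9])
print(nash_sutcliffe(obs, sim), np.corrcoef(obs, sim)[0, 1], rel_bias(obs, sim))
```

    A high Nash score and correlation alongside a negative bias is exactly the signature described for the uncorrected precipitation inputs.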

  15. Health economic analyses in medical nutrition: a systematic literature review.

    Science.gov (United States)

    Walzer, Stefan; Droeschel, Daniel; Nuijten, Mark; Chevrou-Séverac, Hélène

    2014-01-01

    Medical nutrition is a specific nutrition category either covering specific dietary needs and/or nutrient deficiency in patients or feeding patients unable to eat normally. Medical nutrition is regulated by a specific bill in Europe and in the US, with specific legislation and guidelines, and is provided to patients with special nutritional needs and indications for nutrition support. Therefore, medical nutrition products are delivered by medical prescription and supervised by health care professionals. Although these products have existed for more than 2 decades, health economic evidence of medical nutrition interventions is scarce. This research assesses the current published health economic evidence for medical nutrition by performing a systematic literature review related to health economic analysis of medical nutrition. A systematic literature search was done using standard literature databases, including PubMed, the Health Technology Assessment Database, and the National Health Service Economic Evaluation Database. Additionally, a free web-based search was conducted using the same search terms utilized in the systematic database search. The clinical background and basis of the analysis, health economic design, and results were extracted from the papers finally selected. The Drummond checklist was used to validate the quality of health economic modeling studies and the AMSTAR (A Measurement Tool to Assess Systematic Reviews) checklist was used for published systematic reviews. Fifty-three papers were identified and obtained via PubMed, or directly via journal webpages for further assessment. Thirty-two papers were finally included in a thorough data extraction procedure, including those identified by a "gray literature search" utilizing the Google search engine and cross-reference searches. Results regarding content of the studies showed that malnutrition was the underlying clinical condition in most cases (32%). In addition, gastrointestinal disorders (eg

  16. Life Cycle Evolution and Systematics of Campanulariid Hydrozoans

    Science.gov (United States)

    2004-09-01

    longissima - a true cosmopolite? Cornelius (1975) synonymized Obelia longissima, as well as many other nominal Obelia species, into Obelia dichotoma. Based...biodiversity may be underestimated in the Campanulariidae and other hydrozoans, although some true cosmopolites may exist. This is in contrast to the most recent

  17. The East Asian Atmospheric Water Cycle and Monsoon Circulation in the Met Office Unified Model

    Science.gov (United States)

    Rodríguez, José M.; Milton, Sean F.; Marzin, Charline

    2017-10-01

    In this study the low-level monsoon circulation and observed sources of moisture responsible for the maintenance and seasonal evolution of the East Asian monsoon are examined, studying the detailed water budget components. These observational estimates are contrasted with the Met Office Unified Model (MetUM) climate simulation performance in capturing the circulation and water cycle at a variety of model horizontal resolutions and in fully coupled ocean-atmosphere simulations. We study the role of large-scale circulation in determining the hydrological cycle by analyzing key systematic errors in the model simulations. MetUM climate simulations exhibit robust circulation errors, including a weakening of the summer west Pacific Subtropical High, which leads to an underestimation of the southwesterly monsoon flow over the region. Precipitation and implied diabatic heating biases in the South Asian monsoon and Maritime Continent region are shown, via nudging sensitivity experiments, to have an impact on the East Asian monsoon circulation. By inference, the improvement of these tropical biases with increased model horizontal resolution is hypothesized to be a factor in improvements seen over East Asia with increased resolution. Results from the annual cycle of the hydrological budget components in five domains show a good agreement between MetUM simulations and ERA-Interim reanalysis in northern and Tibetan domains. In simulations, the contribution from moisture convergence is larger than in reanalysis, and they display less precipitation recycling over land. The errors are closely linked to monsoon circulation biases.

  18. Quantifying remarks to the question of uncertainties of the 'general dose assessment fundamentals'

    International Nuclear Information System (INIS)

    Brenk, H.D.; Vogt, K.J.

    1982-12-01

    Dose prediction models are always subject to uncertainties due to a number of factors, including deficiencies in the model structure and uncertainties in the model input parameter values. In lieu of validation experiments, the evaluation of these uncertainties is restricted to scientific judgement. Several attempts have been made in the literature to evaluate the uncertainties of current dose assessment models resulting from uncertainties in the model input parameter values using stochastic approaches. Less attention, however, has been paid to potential sources of systematic over- and underestimation of the predicted doses due to deficiencies in the model structure. The present study addresses this aspect with regard to dose assessment models currently used for regulatory purposes. The influence of a number of basic simplifications and conservative assumptions has been investigated. Our systematic approach is exemplified by a comparison of doses evaluated on the basis of the regulatory guide model and a more realistic model, respectively. This is done for three critical exposure pathways. As a result of this comparison, it can be concluded that the currently used regulatory-type models include significant safety factors, resulting in a systematic overprediction of dose to man of up to two orders of magnitude. For this reason there are some indications that these models usually more than compensate for the bulk of the stochastic uncertainties caused by the variability of the input parameter values. (orig.)

  19. Effect of Face-to-face Education, Problem-based Learning, and Goldstein Systematic Training Model on Quality of Life and Fatigue among Caregivers of Patients with Diabetes.

    Science.gov (United States)

    Masoudi, Reza; Soleimani, Mohammad Ali; Yaghoobzadeh, Ameneh; Baraz, Shahram; Hakim, Ashrafalsadat; Chan, Yiong H

    2017-01-01

    Education is a fundamental component for patients with diabetes to achieve good glycemic control. In addition, selecting the appropriate method of education is one of the most effective factors in the quality of life. The present study aimed to evaluate the effect of face-to-face education, problem-based learning, and the Goldstein systematic training model on the quality of life (QOL) and fatigue among caregivers of patients with diabetes. This randomized clinical trial was conducted in Hajar Hospital (Shahrekord, Iran) in 2012. The study subjects consisted of 105 family caregivers of patients with diabetes. The participants were randomly assigned to three intervention groups (35 caregivers in each group). For each group, 5-h training sessions were held separately. QOL and fatigue were evaluated immediately before and after the intervention, and after 1, 2, 3, and 4 months of intervention. There was a significant increase in QOL for all three groups. Both the problem-based learning and the Goldstein methods showed desirable QOL improvement over time. The desired educational intervention for fatigue reduction during the 4-month post-intervention period was the Goldstein method. A significant reduction in fatigue was observed in all three groups after the intervention. Both problem-based learning and the Goldstein systematic training model improve the QOL of caregivers of patients with diabetes. In addition, the Goldstein systematic training model had the greatest effect on the reduction of fatigue within 4 months of the intervention.

  20. A hybrid model for combining case-control and cohort studies in systematic reviews of diagnostic tests

    Science.gov (United States)

    Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao

    2014-01-01

    Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy focus only on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population-averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite-likelihood-based inference does not suffer from computational problems and maintains high relative efficiency. In addition, it is more robust to model misspecifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179

  1. Core Professionalism Education in Surgery: A Systematic Review

    Directory of Open Access Journals (Sweden)

    Akile Sarıoğlu Büke

    2018-03-01

    Full Text Available Background: Professionalism education is one of the major elements of surgical residency education. Aims: To evaluate the studies on core professionalism education programs in surgical professionalism education. Study Design: Systematic review. Methods: This systematic literature review was performed to analyze core professionalism programs for surgical residency education published in English with at least three of the following features: program developmental model/instructional design method, aims and competencies, methods of teaching, methods of assessment, and program evaluation model or method. A total of 27083 articles were retrieved using EBSCOHOST, PubMed, Science Direct, Web of Science, and manual search. Results: Eight articles met the selection criteria. The instructional design method was presented in only one article, which described the Analysis, Design, Development, Implementation, and Evaluation model. Six articles were based on the Accreditation Council for Graduate Medical Education criterion, although there was significant variability in content. The most common teaching method was role modeling with scenario- and case-based learning. A wide range of assessment methods for evaluating professionalism education were reported. The Kirkpatrick model was reported in one article as a method for program evaluation. Conclusion: It is suggested that for a core surgical professionalism education program, developmental/instructional design model, aims and competencies, content, teaching methods, assessment methods, and program evaluation methods/models should be well defined, and the content should be comparable.

  2. Core Professionalism Education in Surgery: A Systematic Review.

    Science.gov (United States)

    Sarıoğlu Büke, Akile; Karabilgin Öztürkçü, Özlem Sürel; Yılmaz, Yusuf; Sayek, İskender

    2018-03-15

    Professionalism education is one of the major elements of surgical residency education. To evaluate the studies on core professionalism education programs in surgical professionalism education. Systematic review. This systematic literature review was performed to analyze core professionalism programs for surgical residency education published in English with at least three of the following features: program developmental model/instructional design method, aims and competencies, methods of teaching, methods of assessment, and program evaluation model or method. A total of 27083 articles were retrieved using EBSCOHOST, PubMed, Science Direct, Web of Science, and manual search. Eight articles met the selection criteria. The instructional design method was presented in only one article, which described the Analysis, Design, Development, Implementation, and Evaluation model. Six articles were based on the Accreditation Council for Graduate Medical Education criterion, although there was significant variability in content. The most common teaching method was role modeling with scenario- and case-based learning. A wide range of assessment methods for evaluating professionalism education were reported. The Kirkpatrick model was reported in one article as a method for program evaluation. It is suggested that for a core surgical professionalism education program, developmental/instructional design model, aims and competencies, content, teaching methods, assessment methods, and program evaluation methods/models should be well defined, and the content should be comparable.

  3. Quasi-random Monte Carlo application in CGE systematic sensitivity analysis

    NARCIS (Netherlands)

    Chatzivasileiadis, T.

    2017-01-01

    The uncertainty and robustness of Computable General Equilibrium models can be assessed by conducting a Systematic Sensitivity Analysis. Different methods have been used in the literature for SSA of CGE models such as Gaussian Quadrature and Monte Carlo methods. This paper explores the use of

  4. Anti-tumor effects of metformin in animal models of hepatocellular carcinoma: a systematic review and meta-analysis.

    Directory of Open Access Journals (Sweden)

    Juan Li

    Full Text Available Several studies have reported that metformin can reduce the risk of hepatocellular carcinoma (HCC) in diabetes patients. However, the direct anti-HCC effects of metformin have hardly been studied in patients, but have been extensively investigated in animal models of HCC. We therefore performed a systematic review and meta-analysis of animal studies evaluating the effects of metformin on HCC. We collected the relevant studies by searching EMBASE, Medline (OvidSP), Web of Science, Scopus, PubMed Publisher, and Google Scholar. Studies were included according to the following inclusion criteria: HCC, animal study, and metformin intervention. Study quality was assessed using SYRCLE's risk of bias tool. A meta-analysis was performed for the outcome measures: tumor growth (tumor volume, weight, and size), tumor number, and incidence. The search resulted in 573 references, of which 13 could be included in the review and 12 in the meta-analysis. The study characteristics of the included studies varied considerably. Two studies used rats, while the others used mice. Only one study used female animals, nine used male, and three studies did not mention the gender of the animals in their experiments. The quality of the included studies was low to moderate based on the assessment of their risk of bias. The meta-analysis showed that metformin significantly inhibited the growth of HCC tumors (SMD -2.20 [-2.96, -1.43]; n=16), but no significant effect on the number of tumors (SMD -1.05 [-2.13, 0.03]; n=5) or the incidence of HCC was observed (RR 0.62 [0.33, 1.16]; n=6). To investigate the potential sources of the significant heterogeneity found in the outcome of tumor growth (I2=81%), subgroup analyses of scales of growth measures and of types of animal models used were performed. Metformin appears to have a direct anti-HCC effect in animal models. Despite the intrinsic limitations of animal studies, this systematic review could provide an important reference for future

  5. Dose-rate dependent stochastic effects in radiation cell-survival models

    International Nuclear Information System (INIS)

    Sachs, R.K.; Hlatky, L.R.

    1990-01-01

    When cells are subjected to ionizing radiation the specific energy rate (microscopic analog of dose-rate) varies from cell to cell. Within one cell, this rate fluctuates during the course of time; a crossing of a sensitive cellular site by a high energy charged particle produces many ionizations almost simultaneously, but during the interval between events no ionizations occur. In any cell-survival model one can incorporate the effect of such fluctuations without changing the basic biological assumptions. Using stochastic differential equations and Monte Carlo methods to take into account stochastic effects, we calculated the dose-survival relationships in a number of current cell survival models. Some of the models assume quadratic misrepair; others assume saturable repair enzyme systems. It was found that a significant effect of random fluctuations is to decrease the theoretically predicted amount of dose-rate sparing. In the limit of low dose-rates, neglecting the stochastic nature of specific energy rates often leads to qualitatively misleading results by overestimating the surviving fraction drastically. In the opposite limit of acute irradiation, analyzing the fluctuations in rates merely amounts to analyzing fluctuations in total specific energy via the usual microdosimetric specific energy distribution function, and neglecting fluctuations usually underestimates the surviving fraction. The Monte Carlo methods interpolate systematically between the low dose-rate and high dose-rate limits. As in other approaches, the slope of the survival curve at low dose-rates is virtually independent of dose and equals the initial slope of the survival curve for acute radiation. (orig.)
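
    The acute-irradiation effect noted above (neglecting fluctuations in specific energy usually underestimates the surviving fraction) can be illustrated with a toy Monte Carlo sketch. Everything below is an illustrative assumption rather than the paper's actual model: a linear-quadratic survival curve, made-up parameter values, and a gamma distribution for the cell-to-cell specific energy.

    ```python
    import numpy as np

    def survival(z, alpha=0.2, beta=0.02):
        """Linear-quadratic surviving fraction for specific energy z (Gy)."""
        return np.exp(-alpha * z - beta * z**2)

    rng = np.random.default_rng(0)
    mean_dose = 5.0  # Gy, population-mean specific energy

    # Acute irradiation: specific energy varies from cell to cell.
    # A gamma distribution is an illustrative choice for that spread.
    z = rng.gamma(shape=4.0, scale=mean_dose / 4.0, size=100_000)

    s_fluct = survival(z).mean()   # average survival over the cell population
    s_naive = survival(mean_dose)  # survival evaluated at the mean dose only

    # With alpha**2 >= 2*beta, survival(z) is convex for z >= 0, so by
    # Jensen's inequality the naive estimate falls below the true average:
    # neglecting fluctuations underestimates the surviving fraction.
    print(s_naive, s_fluct)
    ```

    With these parameters the population average exceeds the mean-dose estimate by a clearly visible margin; the size of the gap depends entirely on the assumed variance of the specific energy.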

  6. Underestimating the Toxicological Challenges Associated with the Use of Herbal Medicinal Products in Developing Countries

    Directory of Open Access Journals (Sweden)

    Vidushi S. Neergheen-Bhujun

    2013-01-01

    Full Text Available Various reports suggest a high contemporaneous prevalence of herb-drug use in both developed and developing countries. The World Health Organisation indicates that 80% of the Asian and African populations rely on traditional medicine as the primary method for their health care needs. Since time immemorial and despite the beneficial and traditional roles of herbs in different communities, the toxicity and herb-drug interactions that emanate from this practice have led to severe adverse effects and fatalities. As a result of the perception that herbal medicinal products have low risk, consumers usually disregard any association between their use and any adverse reactions hence leading to underreporting of adverse reactions. This is particularly common in developing countries and has led to a paucity of scientific data regarding the toxicity and interactions of locally used traditional herbal medicine. Other factors like general lack of compositional and toxicological information of herbs and poor quality of adverse reaction case reports present hurdles which are highly underestimated by the population in the developing world. This review paper addresses these toxicological challenges and calls for natural health product regulations as well as for protocols and guidance documents on safety and toxicity testing of herbal medicinal products.

  7. Underestimating the toxicological challenges associated with the use of herbal medicinal products in developing countries.

    Science.gov (United States)

    Neergheen-Bhujun, Vidushi S

    2013-01-01

    Various reports suggest a high contemporaneous prevalence of herb-drug use in both developed and developing countries. The World Health Organisation indicates that 80% of the Asian and African populations rely on traditional medicine as the primary method for their health care needs. Since time immemorial and despite the beneficial and traditional roles of herbs in different communities, the toxicity and herb-drug interactions that emanate from this practice have led to severe adverse effects and fatalities. As a result of the perception that herbal medicinal products have low risk, consumers usually disregard any association between their use and any adverse reactions hence leading to underreporting of adverse reactions. This is particularly common in developing countries and has led to a paucity of scientific data regarding the toxicity and interactions of locally used traditional herbal medicine. Other factors like general lack of compositional and toxicological information of herbs and poor quality of adverse reaction case reports present hurdles which are highly underestimated by the population in the developing world. This review paper addresses these toxicological challenges and calls for natural health product regulations as well as for protocols and guidance documents on safety and toxicity testing of herbal medicinal products.

  8. On the climate model simulation of Indian monsoon low pressure systems and the effect of remote disturbances and systematic biases

    Science.gov (United States)

    Levine, Richard C.; Martin, Gill M.

    2018-06-01

    Monsoon low pressure systems (LPS) are synoptic-scale systems forming over the Indian monsoon trough region, contributing substantially to seasonal mean summer monsoon rainfall there. Many current global climate models (GCMs), including the Met Office Unified Model (MetUM), show deficient rainfall in this region, much of which has previously been attributed to remote systematic biases such as excessive equatorial Indian Ocean (EIO) convection, while also substantially under-representing LPS and associated rainfall as they travel westwards across India. Here the sources and sensitivities of LPS to local, remote and short-timescale forcing are examined, in order to understand the poor representation in GCMs. An LPS tracking method is presented using TRACK feature tracking software for comparison between re-analysis data-sets, MetUM GCM and regional climate model (RCM) simulations. RCM simulations, at similar horizontal resolution to the GCM and forced with re-analysis data at the lateral boundaries, are carried out with different domains to examine the effects of remote biases. The results suggest that remote biases contribute significantly to the poor simulation of LPS in the GCM. As these remote systematic biases are common amongst many current GCMs, it is likely that GCMs are intrinsically capable of representing LPS, even at relatively low resolution. The main problem areas are time-mean excessive EIO convection and poor representation of precursor disturbances transmitted from the Western Pacific. The important contribution of the latter is established using RCM simulations forced by climatological 6-hourly lateral boundary conditions, which also highlight the role of LPS in moving rainfall from steep orography towards Central India.

  9. Focusing on fast food restaurants alone underestimates the relationship between neighborhood deprivation and exposure to fast food in a large rural area

    Directory of Open Access Journals (Sweden)

    Dean Wesley R

    2011-01-01

    Full Text Available Abstract Background Individuals and families are relying more on food prepared outside the home as a source for at-home and away-from-home consumption. Restricting the estimation of fast-food access to fast-food restaurants alone may underestimate potential spatial access to fast food. Methods The study used data from the 2006 Brazos Valley Food Environment Project (BVFEP) and the 2000 U.S. Census Summary File 3 for six rural counties in the Texas Brazos Valley region. BVFEP ground-truthed data included identification and geocoding of all fast-food restaurants, convenience stores, supermarkets, and grocery stores in the study area and on-site assessment of the availability and variety of fast-food lunch/dinner entrées and side dishes. Network distance was calculated from the population-weighted centroid of each census block group to all retail locations that marketed fast food (n = 205 fast-food opportunities). Results Spatial access to fast-food opportunities (FFO) was significantly better than to traditional fast-food restaurants (FFR). The median distance to the nearest FFO was 2.7 miles, compared with 4.5 miles to the nearest FFR. Residents of high deprivation neighborhoods had better spatial access to a variety of healthier fast-food entrée and side dish options than residents of low deprivation neighborhoods. Conclusions Our analyses revealed that identifying fast-food restaurants as the sole source of fast-food entrées and side dishes underestimated neighborhood exposure to fast food, in terms of both neighborhood proximity and coverage. Potential interventions must consider all retail opportunities for fast food, and not just traditional FFR.

  10. Toward a Two-Dimensional Model of Social Cognition in Clinical Neuropsychology: A Systematic Review of Factor Structure Studies.

    Science.gov (United States)

    Etchepare, Aurore; Prouteau, Antoinette

    2018-04-01

    Social cognition has received growing interest in many conditions in recent years. However, this construct still suffers from a considerable lack of consensus, especially regarding the dimensions to be studied and the resulting methodology of clinical assessment. Our review aims to clarify the distinctiveness of the dimensions of social cognition. Based on Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statements, a systematic review was conducted to explore the factor structure of social cognition in the adult general and clinical populations. The initial search provided 441 articles published between January 1982 and March 2017. Eleven studies were included, all conducted in psychiatric populations and/or healthy participants. Most studies were in favor of a two-factor solution. Four studies drew a distinction between low-level (e.g., facial emotion/prosody recognition) and high-level (e.g., theory of mind) information processing. Four others reported a distinction between affective (e.g., facial emotion/prosody recognition) and cognitive (e.g., false beliefs) information processing. Interestingly, attributional style was frequently reported as an additional separate factor of social cognition. Results of factor analyses add further support for the relevance of models differentiating level of information processing (low- vs. high-level) from nature of processed information (affective vs. cognitive). These results add to a significant body of empirical evidence from developmental, clinical research and neuroimaging studies. We argue the relevance of integrating low- versus high-level processing with affective and cognitive processing in a two-dimensional model of social cognition that would be useful for future research and clinical practice. (JINS, 2018, 24, 391-404).

  11. How cold was Europe at the Last Glacial Maximum? A synthesis of the progress achieved since the first PMIP model-data comparison

    Directory of Open Access Journals (Sweden)

    G. Ramstein

    2007-06-01

    Full Text Available The Last Glacial Maximum has been one of the first foci of the Paleoclimate Modelling Intercomparison Project (PMIP). During its first phase, the results of 17 atmosphere general circulation models were compared to paleoclimate reconstructions. One of the largest discrepancies in the simulations was the systematic underestimation, by at least 10°C, of the winter cooling over Europe and the Mediterranean region observed in the pollen-based reconstructions. In this paper, we investigate the progress achieved to reduce this inconsistency through a large modelling effort and improved temperature reconstructions. We show that increased model spatial resolution does not significantly increase the simulated LGM winter cooling. Further, neither the inclusion of a vegetation cover compatible with the LGM climate, nor the interactions with the oceans simulated by the atmosphere-ocean general circulation models run in the second phase of PMIP result in a better agreement between models and data. Accounting for changes in interannual variability in the interpretation of the pollen data does not result in a reduction of the reconstructed cooling. The largest recent improvement in the model-data comparison has instead arisen from a new climate reconstruction based on inverse vegetation modelling, which explicitly accounts for the CO2 decrease at LGM and which substantially reduces the LGM winter cooling reconstructed from pollen assemblages. As a result, the simulated and observed LGM winter cooling over Western Europe and the Mediterranean area are now in much better agreement.

  12. Three Phase Power Imbalance Decomposition into Systematic Imbalance and Random Imbalance

    DEFF Research Database (Denmark)

    Kong, Wangwei; Ma, Kang; Wu, Qiuwei

    2017-01-01

    Uneven load allocations and random load behaviors are two major causes of three-phase power imbalance. The former mainly cause systematic imbalance, which can be addressed by low-cost phase swapping; the latter contribute to random imbalance, which requires relatively costly demand...... minimum phase, or both. Then, this paper proposes a new method to decompose three-phase power series into a systematic imbalance component and a random imbalance component as the closed-form solutions of quadratic optimization models that minimize random imbalance. A degree of power imbalance...... is calculated based on the systematic imbalance component to guide phase swapping. Case studies demonstrate that 72.8% of 782 low voltage substations have systematic imbalance components. The degree of power imbalance results reveal the maximum need for phase swapping and the random imbalance components reveal...
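
    The decomposition idea can be sketched in a simple least-squares form. This is our own hedged reading of the excerpt, not the paper's actual quadratic optimization models: treat each phase's deviation from the cross-phase average as a constant per-phase offset (the systematic imbalance) plus a zero-mean residual (the random imbalance); the constant offset is then the closed-form ordinary-least-squares solution.

    ```python
    import numpy as np

    def decompose_imbalance(p):
        """Split a (3, T) three-phase power series into a systematic component
        (constant per-phase offset) and a random component (zero-mean residual),
        measured around the cross-phase average at each time step."""
        p = np.asarray(p, dtype=float)
        common = p.mean(axis=0)              # cross-phase average per time step
        deviation = p - common               # per-phase imbalance over time
        systematic = deviation.mean(axis=1)  # constant offset: the OLS solution
                                             # minimizing the squared residual
        random_part = deviation - systematic[:, None]
        return systematic, random_part

    # Toy series: phase A systematically overloaded, phase C underloaded,
    # plus random load behaviour on every phase.
    rng = np.random.default_rng(1)
    T = 200
    base = 10.0 + np.sin(np.linspace(0, 4 * np.pi, T))
    p = np.stack([base + 2.0, base + 0.5, base - 2.5]) + rng.normal(0, 0.3, (3, T))

    sys_imb, rand_imb = decompose_imbalance(p)
    print(sys_imb)  # roughly the injected offsets [2.0, 0.5, -2.5]
    ```

    By construction the systematic offsets sum to zero across phases, so a degree-of-imbalance measure (as in the abstract) could be read directly off `sys_imb`, while `rand_imb` captures what phase swapping cannot fix.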

  13. A Systematic Mapping Study of Software Architectures for Cloud Based Systems

    DEFF Research Database (Denmark)

    Chauhan, Muhammad Aufeef; Babar, Muhammad Ali

    2014-01-01

    Context: Cloud computing has gained significant attention from researchers and practitioners. This emerging paradigm is being used to provide solutions in multiple domains without huge upfront investment because of its on-demand resource-provisioning model. However, the information about how software...... of this study is to systematically identify and analyze the currently published research on the topics related to software architectures for cloud-based systems in order to identify architecture solutions for achieving quality requirements. Method: We decided to carry out a systematic mapping study to find...... as much peer-reviewed literature on the topics related to software architectures for cloud-based systems as possible. This study has been carried out by following the guidelines for conducting systematic literature reviews and systematic mapping studies as reported in the literature. Based on our paper

  14. Methods of abdominal wall expansion for repair of incisional herniae: a systematic review.

    Science.gov (United States)

    Alam, N N; Narang, S K; Pathak, S; Daniels, I R; Smart, N J

    2016-04-01

    To systematically review the available literature regarding methods for abdominal wall expansion and compare the outcome of primary fascial closure rates. A systematic search of the Pubmed and Embase databases was conducted using the search terms "Abdominal wall hernia", "ventral hernia", "midline hernia", "Botulinum toxin", "botox", "dysport", "progressive preoperative pneumoperitoneum", and "tissue expanders". Study quality was assessed using the Methodological Index for Non-Randomised Studies. 21 of the 105 studies identified met the inclusion criteria. Progressive preoperative pneumoperitoneum (PPP) was performed in 269 patients across 15 studies, with primary fascial closure being achieved in 226 (84%). 16 patients had a recurrence (7.2%) and the complication rate was 12%, with 2 reported mortalities. There were 4 studies with 14 patients in total undergoing abdominal wall expansion using tissue expanders, with a fascial closure rate of 92.9% (n = 13). A recurrence rate of 10.0% (n = 1) was reported, with 1 complication and no mortalities. Follow up ranged from 3 to 36 months across the studies. There were 2 studies reporting the use of botulinum toxin with 29 patients in total. A primary fascial closure rate of 100% (n = 29) was demonstrated, although a combination of techniques including component separation and Rives-Stoppa repair were used. There were no reported complications related to the use of Botulinum Toxin. However, the short-term follow up in many cases and the lack of routine radiological assessment for recurrence suggests that the recurrence rate has been underestimated. PPP, tissue expanders and Botulinum toxin are safe and feasible methods for abdominal wall expansion prior to incisional hernia repair. In combination with existing techniques for repair, these methods may help provide the crucial extra tissue mobility required to achieve primary closure.

  15. Gambler Risk Perception: A Mental Model and Grounded Theory Analysis.

    Science.gov (United States)

    Spurrier, Michael; Blaszczynski, Alexander; Rhodes, Paul

    2015-09-01

    Few studies have investigated how gamblers perceive risk or the role of risk perception in disordered gambling. The purpose of the current study therefore was to obtain data on lay gamblers' beliefs on these variables and their effects on decision-making, behaviour, and disordered gambling aetiology. Fifteen regular lay gamblers (non-problem/low risk, moderate risk and problem gamblers) completed a semi-structured interview following mental models and grounded theory methodologies. Gambler interview data was compared to an expert 'map' of risk-perception, to identify comparative gaps or differences associated with harmful or safe gambling. Systematic overlapping processes of data gathering and analysis were used to iteratively extend, saturate, test for exception, and verify concepts and themes emerging from the data. The preliminary findings suggested that gambler accounts supported the presence of expert conceptual constructs, and to some degree the role of risk perception in protecting against or increasing vulnerability to harm and disordered gambling. Gambler accounts of causality, meaning, motivation, and strategy were highly idiosyncratic, and often contained content inconsistent with measures of disordered gambling. Disordered gambling appears heavily influenced by relative underestimation of risk and overvaluation of gambling, based on explicit and implicit analysis, and deliberate, innate, contextual, and learned processing evaluations and biases.

  16. Modelling the distributions and spatial coincidence of bluetongue vectors Culicoides imicola and the Culicoides obsoletus group throughout the Iberian peninsula.

    Science.gov (United States)

    Calvete, C; Estrada, R; Miranda, M A; Borrás, D; Calvo, J H; Lucientes, J

    2008-06-01

    Data obtained by a Spanish national surveillance programme in 2005 were used to develop climatic models for predictions of the distribution of the bluetongue virus (BTV) vectors Culicoides imicola Kieffer (Diptera: Ceratopogonidae) and the Culicoides obsoletus group Meigen throughout the Iberian peninsula. Models were generated using logistic regression to predict the probability of species occurrence at an 8-km spatial resolution. Predictor variables included the annual mean values and seasonalities of a remotely sensed normalized difference vegetation index (NDVI), a sun index, interpolated precipitation and temperature. Using an information-theoretic paradigm based on Akaike's criterion, a set of best models accounting for 95% of model selection certainty were selected and used to generate an average predictive model for each vector. The predictive performances (i.e. the discrimination capacity and calibration) of the average models were evaluated by both internal and external validation. External validation was achieved by comparing average model predictions with surveillance programme data obtained in 2004 and 2006. The discriminatory capacity of both models was found to be reasonably high. The estimated areas under the receiver operating characteristic (ROC) curve (AUC) were 0.78 and 0.70 for the C. imicola and C. obsoletus group models, respectively, in external validation, and 0.81 and 0.75, respectively, in internal validation. The predictions of both models were in close agreement with the observed distribution patterns of both vectors. Both models, however, showed a systematic bias in their predicted probability of occurrence: observed occurrence was systematically overestimated for C. imicola and underestimated for the C. obsoletus group. Average models were used to determine the areas of spatial coincidence of the two vectors. Although their spatial distributions were highly complementary, areas of spatial coincidence were identified, mainly in
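    The AIC-based model averaging described in the abstract can be sketched as follows; the AIC scores and per-model occurrence probabilities below are hypothetical placeholders, not values from the study:

```python
import math

def akaike_weights(aics):
    """Turn candidate-model AIC scores into Akaike weights; candidate
    models are then averaged in proportion to their weight (the study
    kept the set accounting for 95% of cumulative model-selection
    certainty)."""
    best = min(aics)
    rel = [math.exp(-0.5 * (a - best)) for a in aics]
    total = sum(rel)
    return [r / total for r in rel]

# Hypothetical AICs for three candidate logistic occurrence models
w = akaike_weights([210.4, 211.9, 215.0])

# Model-averaged probability of vector occurrence at one 8-km cell:
p_models = [0.62, 0.55, 0.48]  # hypothetical per-model predictions
p_avg = sum(wi * pi for wi, pi in zip(w, p_models))
```

Lower-AIC models dominate the average: the weights decay exponentially with the AIC difference from the best model.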

  17. Evaluation of cloud resolving model simulations of midlatitude cirrus with ARM and A-Train observations

    Science.gov (United States)

    Muehlbauer, A. D.; Ackerman, T. P.; Lawson, P.; Xie, S.; Zhang, Y.

    2015-12-01

    This paper evaluates cloud resolving model (CRM) and cloud system-resolving model (CSRM) simulations of a midlatitude cirrus case with comprehensive observations collected under the auspices of the Atmospheric Radiation Measurement (ARM) program and with spaceborne observations from the National Aeronautics and Space Administration (NASA) A-Train satellites. Vertical profiles of temperature, relative humidity and wind speeds are reasonably well simulated by the CSRM and CRM but there are remaining biases in the temperature, wind speeds and relative humidity, which can be mitigated through nudging the model simulations toward the observed radiosonde profiles. Simulated vertical velocities are underestimated in all simulations except in the CRM simulations with grid spacings of 500 m or finer, which suggests that turbulent vertical air motions in cirrus clouds need to be parameterized in GCMs and in CSRM simulations with horizontal grid spacings on the order of 1 km. The simulated ice water content and ice number concentrations agree with the observations in the CSRM but are underestimated in the CRM simulations. The underestimation of ice number concentrations is consistent with the overestimation of radar reflectivity in the CRM simulations and suggests that the model produces too many large ice particles, especially toward cloud base. Simulated cloud profiles are rather insensitive to perturbations in the initial conditions or the dimensionality of the model domain, but the treatment of the forcing data has a considerable effect on the outcome of the model simulations. Despite considerable progress in observations and microphysical parameterizations, simulating the microphysical, macrophysical and radiative properties of cirrus remains challenging. Comparing model simulations with observations from multiple instruments and observational platforms is important for revealing model deficiencies and for providing rigorous benchmarks. However, there still is considerable

  18. The systematization of practice

    DEFF Research Database (Denmark)

    Vidal, Rene Victor Valqui

    2002-01-01

    This paper presents an approach for the systematization of practical experiences within Operations Research, Community work, Systems Sciences, Action Research, etc. A case study is presented: the systematization of the activities of a development center in Denmark.

  19. Screening strategies for atrial fibrillation: a systematic review and cost-effectiveness analysis.

    Science.gov (United States)

    Welton, Nicky J; McAleenan, Alexandra; Thom, Howard Hz; Davies, Philippa; Hollingworth, Will; Higgins, Julian Pt; Okoli, George; Sterne, Jonathan Ac; Feder, Gene; Eaton, Diane; Hingorani, Aroon; Fawsitt, Christopher; Lobban, Trudie; Bryden, Peter; Richards, Alison; Sofat, Reecha

    2017-05-01

    Atrial fibrillation (AF) is a common cardiac arrhythmia that increases the risk of thromboembolic events. Anticoagulation therapy to prevent AF-related stroke has been shown to be cost-effective. A national screening programme for AF may prevent AF-related events, but would involve a substantial investment of NHS resources. To conduct a systematic review of the diagnostic test accuracy (DTA) of screening tests for AF, update a systematic review of comparative studies evaluating screening strategies for AF, develop an economic model to compare the cost-effectiveness of different screening strategies and review observational studies of AF screening to provide inputs to the model. Systematic review, meta-analysis and cost-effectiveness analysis. Primary care. Adults. Screening strategies, defined by screening test, age at initial and final screens, screening interval and format of screening {systematic opportunistic screening [individuals offered screening if they consult with their general practitioner (GP)] or systematic population screening (when all eligible individuals are invited to screening)}. Sensitivity, specificity and diagnostic odds ratios; the odds ratio of detecting new AF cases compared with no screening; and the mean incremental net benefit compared with no screening. Two reviewers screened the search results, extracted data and assessed the risk of bias. A DTA meta-analysis was performed, and a decision tree and Markov model were used to evaluate the cost-effectiveness of the screening strategies. Diagnostic test accuracy depended on the screening test and how it was interpreted. In general, the screening tests identified in our review had high sensitivity (> 0.9). Systematic population and systematic opportunistic screening strategies were found to be similarly effective, with an estimated 170 individuals needed to be screened to detect one additional AF case compared with no screening. Systematic opportunistic screening was more likely to be cost
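    A minimal sketch of the incremental net benefit summary that such an economic model reports; the willingness-to-pay threshold and the per-person QALY and cost differences below are illustrative assumptions, not figures from the review:

```python
WTP = 20_000  # willingness-to-pay per QALY in GBP; a commonly used NHS threshold

def incremental_net_benefit(d_qalys, d_cost, wtp=WTP):
    """Mean incremental net (monetary) benefit of a screening strategy
    versus no screening: positive values favour screening."""
    return wtp * d_qalys - d_cost

# Illustrative per-person differences for one hypothetical screening strategy
inb = incremental_net_benefit(d_qalys=0.003, d_cost=30.0)
```

At a fixed threshold, the strategy with the highest mean incremental net benefit is the one a cost-effectiveness analysis would prefer.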

  20. Reliability analysis - systematic approach based on limited data

    International Nuclear Information System (INIS)

    Bourne, A.J.

    1975-11-01

    The initial approaches required for reliability analysis are outlined. These approaches highlight the system boundaries, examine the conditions under which the system is required to operate, and define the overall performance requirements. The discussion is illustrated by a simple example of an automatic protective system for a nuclear reactor. It is then shown how the initial approach leads to a method of defining the system, establishing performance parameters of interest and determining the general form of reliability models to be used. The overall system model and the availability of reliability data at the system level are next examined. An iterative process is then described whereby the reliability model and data requirements are systematically refined at progressively lower hierarchic levels of the system. At each stage, the approach is illustrated with examples from the protective system previously described. The main advantages of the approach put forward are the systematic process of analysis, the concentration of assessment effort in the critical areas and the maximum use of limited reliability data. (author)
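    One concrete instance of the kind of reliability model refined at lower hierarchic levels of such a system is a k-out-of-n calculation for redundant channels; the per-channel success probability below is an illustrative assumption, not a value from the report:

```python
from math import comb

def k_out_of_n_reliability(n, k, p):
    """Probability that at least k of n identical, independent channels
    operate on demand, e.g. 2-out-of-3 trip logic in a reactor
    protective system."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Illustrative: 2-out-of-3 voting with 99% per-channel success probability
r = k_out_of_n_reliability(3, 2, 0.99)
```

The same function covers a simple series system (k = n) and full redundancy (k = 1), which is why voting logic of this form appears throughout protective-system assessments.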