WorldWideScience

Sample records for large model underestimate

  1. Global models underestimate large decadal declining and rising water storage trends relative to GRACE satellite data

    Science.gov (United States)

    Scanlon, Bridget R.; Zhang, Zizhan; Save, Himanshu; Sun, Alexander Y.; van Beek, Ludovicus P. H.; Wiese, David N.; Reedy, Robert C.; Longuevergne, Laurent; Döll, Petra; Bierkens, Marc F. P.

    2018-01-01

    Assessing reliability of global models is critical because of increasing reliance on these models to address past and projected future climate and human stresses on global water resources. Here, we evaluate model reliability based on a comprehensive comparison of decadal trends (2002–2014) in land water storage from seven global models (WGHM, PCR-GLOBWB, GLDAS NOAH, MOSAIC, VIC, CLM, and CLSM) to trends from three Gravity Recovery and Climate Experiment (GRACE) satellite solutions in 186 river basins (∼60% of global land area). Medians of modeled basin water storage trends greatly underestimate GRACE-derived large decreasing (≤−0.5 km3/y) and increasing (≥0.5 km3/y) trends. Decreasing trends from GRACE are mostly related to human use (irrigation) and climate variations, whereas increasing trends reflect climate variations. For example, in the Amazon, GRACE estimates a large increasing trend of ∼43 km3/y, whereas most models estimate decreasing trends (−71 to 11 km3/y). Land water storage trends, summed over all basins, are positive for GRACE (∼71–82 km3/y) but negative for models (−450 to −12 km3/y), contributing opposing trends to global mean sea level change. Impacts of climate forcing on decadal land water storage trends exceed those of modeled human intervention by about a factor of 2. The model-GRACE comparison highlights potential areas of future model development, particularly simulated water storage. The inability of models to capture large decadal water storage trends based on GRACE indicates that model projections of climate and human-induced water storage changes may be underestimated. PMID:29358394
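The decadal trends compared above are linear slopes fitted to monthly land-water-storage anomaly series in each basin. A minimal sketch of that calculation, using invented values rather than GRACE or model output:

```python
# Illustrative sketch: fit a linear trend (km^3/yr) to monthly water
# storage anomalies, as done per basin in model-GRACE comparisons.
# All numbers are synthetic, for demonstration only.

def storage_trend(months, anomalies_km3):
    """Ordinary least-squares slope in km^3 per year."""
    n = len(months)
    t = [m / 12.0 for m in months]          # convert month index to years
    mean_t = sum(t) / n
    mean_y = sum(anomalies_km3) / n
    num = sum((ti - mean_t) * (yi - mean_y) for ti, yi in zip(t, anomalies_km3))
    den = sum((ti - mean_t) ** 2 for ti in t)
    return num / den

# A synthetic basin gaining ~43 km^3/yr (cf. the Amazon figure above)
months = list(range(12))
series = [43.0 * m / 12.0 for m in months]
print(round(storage_trend(months, series), 1))  # 43.0
```

Basin trends obtained this way can then be summed over basins or summarized by their median, as in the comparison described in the abstract.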

  2. Development and evaluation of a prediction model for underestimated invasive breast cancer in women with ductal carcinoma in situ at stereotactic large core needle biopsy.

    Directory of Open Access Journals (Sweden)

    Suzanne C E Diepstraten

    BACKGROUND: We aimed to develop a multivariable model for prediction of underestimated invasiveness in women with ductal carcinoma in situ at stereotactic large core needle biopsy that can be used to select patients for sentinel node biopsy at primary surgery. METHODS: From the literature, we selected potential preoperative predictors of underestimated invasive breast cancer. Data of patients with nonpalpable breast lesions who were diagnosed with ductal carcinoma in situ at stereotactic large core needle biopsy, drawn from the prospective COBRA (Core Biopsy after RAdiological localization) and COBRA2000 cohort studies, were used to fit the multivariable model and assess its overall performance, discrimination, and calibration. RESULTS: 348 women with large core needle biopsy-proven ductal carcinoma in situ were available for analysis. In 100 (28.7%) patients invasive carcinoma was found at subsequent surgery. Nine predictors were included in the model. In the multivariable analysis, the predictors with the strongest association were lesion size (OR 1.12 per cm, 95% CI 0.98-1.28), number of cores retrieved at biopsy (OR per core 0.87, 95% CI 0.75-1.01), presence of lobular cancerization (OR 5.29, 95% CI 1.25-26.77), and microinvasion (OR 3.75, 95% CI 1.42-9.87). The overall performance of the multivariable model was poor, with an explained variation of 9% (Nagelkerke's R²), mediocre discrimination with an area under the receiver operating characteristic curve of 0.66 (95% confidence interval 0.58-0.73), and fairly good calibration. CONCLUSION: The evaluation of our multivariable prediction model in a large, clinically representative study population shows that routine clinical and pathological variables are not suitable to select patients with large core needle biopsy-proven ductal carcinoma in situ for sentinel node biopsy during primary surgery.
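The multivariable model above combines predictors on the log-odds scale and is scored by its c-statistic (AUC). The sketch below is not the published model: the intercept is invented and the coefficients merely echo the odds ratios quoted in the abstract, to show the mechanics.

```python
import math

# Illustrative sketch (hypothetical intercept and coefficients, NOT the
# published COBRA model): combine preoperative predictors via log-odds,
# then score discrimination with the AUC (c-statistic).

def predict_invasion_prob(size_cm, n_cores, lobular_canc, microinvasion):
    # log-odds = intercept + sum(log(OR_i) * x_i); ORs loosely echo the
    # abstract (1.12/cm, 0.87/core, 5.29, 3.75); intercept is invented.
    logit = (-1.5 + math.log(1.12) * size_cm + math.log(0.87) * n_cores
             + math.log(5.29) * lobular_canc + math.log(3.75) * microinvasion)
    return 1.0 / (1.0 + math.exp(-logit))

def auc(scores, labels):
    """Probability that a random positive case outranks a random negative."""
    pos = [s for s, l in zip(scores, labels) if l == 1]
    neg = [s for s, l in zip(scores, labels) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.66, as reported above, means a randomly chosen invasive case gets a higher predicted probability than a randomly chosen pure-DCIS case only 66% of the time, which is why the authors judge discrimination mediocre.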

  3. A Large Underestimate of Formic Acid from Tropical Fires: Constraints from Space-Borne Measurements.

    Science.gov (United States)

    Chaliyakunnel, S; Millet, D B; Wells, K C; Cady-Pereira, K E; Shephard, M W

    2016-06-07

    Formic acid (HCOOH) is one of the most abundant carboxylic acids and a dominant source of atmospheric acidity. Recent work indicates a major gap in the HCOOH budget, with atmospheric concentrations much larger than expected from known sources. Here, we employ recent space-based observations from the Tropospheric Emission Spectrometer with the GEOS-Chem atmospheric model to better quantify the HCOOH source from biomass burning, and assess whether fire emissions can help close the large budget gap for this species. The space-based data reveal a severe model HCOOH underestimate most prominent over tropical burning regions, suggesting a major missing source of organic acids from fires. We develop an approach for inferring the fractional fire contribution to ambient HCOOH and find, based on measurements over Africa, that pyrogenic HCOOH:CO enhancement ratios are much higher than expected from direct emissions alone, revealing substantial secondary organic acid production in fire plumes. Current models strongly underestimate (by 10 ± 5 times) the total primary and secondary HCOOH source from African fires. If a 10-fold bias were to extend to fires in other regions, biomass burning could produce 14 Tg/a of HCOOH in the tropics or 16 Tg/a worldwide. However, even such an increase would only represent 15-20% of the total required HCOOH source, implying the existence of other larger missing sources.
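The pyrogenic enhancement ratio used in the analysis above is the excess of HCOOH over its background divided by the excess of CO in the same plume. A minimal sketch with invented mixing ratios:

```python
# Illustrative sketch of an enhancement ratio: excess analyte over
# background, normalized by the CO excess in the same fire plume.
# Mixing ratios below (ppb) are invented for demonstration.

def enhancement_ratio(x_plume, x_bg, co_plume, co_bg):
    """Plume enhancement of species X relative to CO (unitless)."""
    return (x_plume - x_bg) / (co_plume - co_bg)

print(round(enhancement_ratio(3.2, 0.4, 220.0, 80.0), 3))  # 0.02
```

A measured HCOOH:CO ratio well above the value implied by direct emission factors, as found over Africa, is the signature of secondary production in the plume.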

  4. Underestimation of Project Costs

    Science.gov (United States)

    Jones, Harry W.

    2015-01-01

    Large projects almost always exceed their budgets. Estimating cost is difficult and estimated costs are usually too low. Three different reasons are suggested: bad luck, overoptimism, and deliberate underestimation. Project management can usually point to project difficulty and complexity, technical uncertainty, stakeholder conflicts, scope changes, unforeseen events, and other not really unpredictable bad luck. Project planning is usually over-optimistic, so the likelihood and impact of bad luck is systematically underestimated. Project plans reflect optimism and hope for success in a supposedly unique new effort rather than rational expectations based on historical data. Past project problems are claimed to be irrelevant because "This time it's different." Some bad luck is inevitable and reasonable optimism is understandable, but deliberate deception must be condemned. In a competitive environment, project planners and advocates often deliberately underestimate costs to help gain project approval and funding. Project benefits, cost savings, and probability of success are exaggerated and key risks ignored. Project advocates have incentives to distort information and conceal difficulties from project approvers. One naively suggested cure is more openness, honesty, and group adherence to shared overall goals. A more realistic alternative is threatening overrun projects with cancellation. Neither approach seems to solve the problem. A better method to avoid the delusions of over-optimism and the deceptions of biased advocacy is to base the project cost estimate on the actual costs of a large group of similar projects. Over-optimism and deception can continue beyond the planning phase and into project execution. Hard milestones based on verified tests and demonstrations can provide a reality check.
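The "actual costs of a large group of similar projects" remedy above is reference-class forecasting: scale a bottom-up estimate by the historical distribution of actual-to-estimated cost ratios. A sketch with invented figures:

```python
# Illustrative sketch of reference-class forecasting: budget a new
# project at a chosen percentile of the overrun-ratio distribution
# (actual cost / estimated cost) of similar past projects.
# All figures are invented for demonstration.

def reference_class_estimate(base_estimate, past_overrun_ratios, percentile=0.8):
    """Budget at the given percentile of historical overrun ratios."""
    ratios = sorted(past_overrun_ratios)
    idx = min(len(ratios) - 1, int(percentile * len(ratios)))
    return base_estimate * ratios[idx]

past = [1.0, 1.1, 1.2, 1.3, 1.4, 1.6, 1.8, 2.0, 2.5, 3.0]
print(reference_class_estimate(100.0, past))  # 250.0 at the 80th percentile
```

Because the historical ratios already contain everyone else's "bad luck" and optimism, this estimate is immune to the advocacy biases of the project's own planners.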

  5. Underestimated effect sizes in GWAS: fundamental limitations of single SNP analysis for dichotomous phenotypes.

    Directory of Open Access Journals (Sweden)

    Sven Stringer

    Complex diseases are often highly heritable. However, for many complex traits only a small proportion of the heritability can be explained by observed genetic variants in traditional genome-wide association (GWA) studies. Moreover, for some of those traits few significant SNPs have been identified. Single SNP association methods test for association at a single SNP, ignoring the effect of other SNPs. We show using a simple multi-locus odds model of complex disease that moderate to large effect sizes of causal variants may be estimated as relatively small effect sizes in single SNP association testing. This underestimation effect is most severe for diseases influenced by numerous risk variants. We relate the underestimation effect to the concept of non-collapsibility found in the statistics literature. As described, continuous phenotypes generated with linear genetic models are not affected by this underestimation effect. Since many GWA studies apply single SNP analysis to dichotomous phenotypes, previously reported results potentially underestimate true effect sizes, thereby impeding identification of true effect SNPs. Therefore, when a multi-locus model of disease risk is assumed, a multi SNP analysis may be more appropriate.
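The non-collapsibility effect invoked above can be reproduced in a few lines: even when a second risk variant is independent of the first (so there is no confounding), marginalizing over it attenuates the per-SNP odds ratio below its conditional value. A sketch under an assumed two-locus logistic model with invented coefficients:

```python
import math

# Illustrative sketch of non-collapsibility: ignoring a second,
# independent risk variant shrinks the marginal odds ratio of the
# first variant below its conditional (true per-locus) value.
# The logistic model and coefficients are invented for demonstration.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def marginal_or(b0, b, freq2=0.5):
    """Odds ratio for SNP1 after averaging over an independent SNP2."""
    def p(x1):
        # mixture over carrier status of SNP2 (carrier frequency freq2)
        return (1 - freq2) * sigmoid(b0 + b * x1) + freq2 * sigmoid(b0 + b * x1 + b)
    def odds(q):
        return q / (1 - q)
    return odds(p(1)) / odds(p(0))

conditional_or = math.exp(1.5)           # true per-locus OR, about 4.48
print(round(marginal_or(-1.0, 1.5), 2))  # noticeably smaller than 4.48
```

With many risk loci marginalized out instead of one, the attenuation compounds, which is why the abstract flags diseases influenced by numerous variants as the worst case.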

  6. Evidence for link between modelled trends in Antarctic sea ice and underestimated westerly wind changes.

    Science.gov (United States)

    Purich, Ariaan; Cai, Wenju; England, Matthew H; Cowan, Tim

    2016-02-04

    Despite global warming, total Antarctic sea ice coverage increased over 1979-2013. However, the majority of Coupled Model Intercomparison Project phase 5 models simulate a decline. Mechanisms causing this discrepancy have so far remained elusive. Here we show that weaker trends in the intensification of the Southern Hemisphere westerly wind jet simulated by the models may contribute to this disparity. During austral summer, a strengthened jet leads to increased upwelling of cooler subsurface water and strengthened equatorward transport, conducive to increased sea ice. As the majority of models underestimate summer jet trends, this cooling process is underestimated compared with observations and is insufficient to offset warming in the models. Through the sea ice-albedo feedback, models produce a high-latitude surface ocean warming and sea ice decline, contrasting the observed net cooling and sea ice increase. A realistic simulation of observed wind changes may be crucial for reproducing the recent observed sea ice increase.

  7. Low modeled ozone production suggests underestimation of precursor emissions (especially NOx in Europe

    Directory of Open Access Journals (Sweden)

    E. Oikonomakis

    2018-02-01

    High surface ozone concentrations, which usually occur when photochemical ozone production takes place, pose a great risk to human health and vegetation. Air quality models are often used by policy makers as tools for the development of ozone mitigation strategies. However, the modeled ozone production is often insufficiently evaluated in ozone modeling studies. The focus of this work is to evaluate the modeled ozone production in Europe indirectly, with the use of the ozone–temperature correlation for the summer of 2010, and to analyze its sensitivity to precursor emissions and meteorology by using a regional air quality model, the Comprehensive Air Quality Model with Extensions (CAMx). The results show that the model significantly underestimates the observed high afternoon surface ozone mixing ratios (≥ 60 ppb) by 10–20 ppb and overestimates the lower ones (< 40 ppb) by 5–15 ppb, resulting in a misleading good agreement with the observations for average ozone. The model also underestimates the ozone–temperature regression slope by about a factor of 2 for most of the measurement stations. To investigate the impact of emissions, four scenarios were tested: (i) increased volatile organic compound (VOC) emissions by a factor of 1.5 and 2 for the anthropogenic and biogenic VOC emissions, respectively, (ii) increased nitrogen oxide (NOx) emissions by a factor of 2, (iii) a combination of the first two scenarios, and (iv) increased traffic-only NOx emissions by a factor of 4. For southern, eastern, and central (except the Benelux area) Europe, doubling NOx emissions seems to be the most efficient scenario to reduce the underestimation of the observed high ozone mixing ratios without significant degradation of the model performance for the lower ozone mixing ratios. The model performance for ozone–temperature correlation is also better when NOx emissions are doubled. In the Benelux area, however, the third scenario (where both NOx and VOC emissions are increased) leads to a better model performance.

  8. Earth System Models Underestimate Soil Carbon Diagnostic Times in Dry and Cold Regions.

    Science.gov (United States)

    Jing, W.; Xia, J.; Zhou, X.; Huang, K.; Huang, Y.; Jian, Z.; Jiang, L.; Xu, X.; Liang, J.; Wang, Y. P.; Luo, Y.

    2017-12-01

    Soils contain the largest organic carbon (C) reservoir at the Earth's surface and strongly modulate the terrestrial feedback to climate change. Large uncertainty exists in current Earth system models (ESMs) in simulating soil organic C (SOC) dynamics, calling for a systematic diagnosis of their performance based on observations. Here, we built a global database of SOC diagnostic time (i.e., turnover time; τsoil) measured at 320 sites with four different approaches. We found that the estimated τsoil was comparable among the approaches of ¹⁴C dating () (median with 25 and 75 percentiles), ¹³C shifts due to vegetation change (), and the ratio of stock over flux (), but was shortest in laboratory incubation studies (). The state-of-the-art ESMs underestimated τsoil in most biomes, by more than 10-fold and 5-fold in cold and dry regions, respectively. Moreover, we identified clear negative dependences of τsoil on temperature and precipitation in both the observational and modeling results. Compared with the Community Land Model (version 4), the incorporation of a vertical soil profile (CLM4.5) could substantially extend the τsoil of SOC. Our findings suggest that the accuracy of the climate-C cycle feedback in current ESMs could be enhanced by an improved understanding of SOC dynamics under limited hydrothermal conditions.
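The stock-over-flux diagnostic mentioned above is the simplest of the four approaches: turnover time is the standing SOC stock divided by the steady-state C efflux. A sketch with invented values:

```python
# Illustrative sketch of the stock-over-flux diagnostic: soil C
# turnover time is the standing SOC stock divided by the steady-state
# efflux (heterotrophic respiration). Values below are invented.

def turnover_time_years(soc_stock_kgC_m2, efflux_kgC_m2_yr):
    """Turnover (diagnostic) time in years, assuming steady state."""
    return soc_stock_kgC_m2 / efflux_kgC_m2_yr

# A cold-region profile: large stock, slow efflux -> long turnover time
print(turnover_time_years(40.0, 0.2))   # 200.0 years
# A warm, moist site: faster cycling -> short turnover time
print(turnover_time_years(10.0, 0.5))   # 20.0 years
```

An ESM that respires the same flux from a much smaller simulated stock will report a turnover time several-fold too short, which is the bias pattern the abstract describes for cold and dry regions.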

  9. Some sources of the underestimation of evaluated cross section uncertainties

    International Nuclear Information System (INIS)

    Badikov, S.A.; Gai, E.V.

    2003-01-01

    The problem of the underestimation of evaluated cross-section uncertainties is addressed. Two basic sources of the underestimation of evaluated cross-section uncertainties are considered: (a) inconsistency between declared and observable experimental uncertainties, and (b) inadequacy between applied statistical models and the processed experimental data. Both sources of the underestimation are mainly a consequence of the existence of uncertainties unrecognized by experimenters. A 'constant shift' model is proposed for taking unrecognized experimental uncertainties into account. The model is applied to a statistical analysis of the ²³⁸U(n,f)/²³⁵U(n,f) reaction cross-section ratio measurements. It is demonstrated that multiplication by √χ² as an instrument for correcting underestimated evaluated cross-section uncertainties fails in the case of correlated measurements. It is shown that arbitrary assignment of uncertainties and correlations in a simple least-squares fit of two correlated measurements of unknown mean leads to physically incorrect evaluated results. (author)

  10. Modeling microelectrode biosensors: free-flow calibration can substantially underestimate tissue concentrations.

    Science.gov (United States)

    Newton, Adam J H; Wall, Mark J; Richardson, Magnus J E

    2017-03-01

    Microelectrode amperometric biosensors are widely used to measure concentrations of analytes in solution and tissue including acetylcholine, adenosine, glucose, and glutamate. A great deal of experimental and modeling effort has been directed at quantifying the response of the biosensors themselves; however, the influence that the macroscopic tissue environment has on biosensor response has not been subjected to the same level of scrutiny. Here we identify an important issue in the way microelectrode biosensors are calibrated that is likely to have led to underestimations of analyte tissue concentrations. Concentration in tissue is typically determined by comparing the biosensor signal to that measured in free-flow calibration conditions. In a free-flow environment the concentration of the analyte at the outer surface of the biosensor can be considered constant. However, in tissue the analyte reaches the biosensor surface by diffusion through the extracellular space. Because the enzymes in the biosensor break down the analyte, a density gradient is set up resulting in a significantly lower concentration of analyte near the biosensor surface. This effect is compounded by the diminished volume fraction (porosity) and reduction in the diffusion coefficient due to obstructions (tortuosity) in tissue. We demonstrate this effect through modeling and experimentally verify our predictions in diffusive environments. NEW & NOTEWORTHY Microelectrode biosensors are typically calibrated in a free-flow environment where the concentrations at the biosensor surface are constant. However, when in tissue, the analyte reaches the biosensor via diffusion and so analyte breakdown by the biosensor results in a concentration gradient and consequently a lower concentration around the biosensor. This effect means that naive free-flow calibration will underestimate tissue concentration. We develop mathematical models to better quantify the discrepancy between the calibration and tissue
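The calibration bias described above follows from steady-state diffusion to a consuming probe: analyte breakdown at the sensor depresses the surface concentration below the bulk value, and tortuosity slows resupply. A sketch under an assumed idealization (spherical sensor, hindered diffusion, invented parameter values; not the authors' model):

```python
import math

# Illustrative sketch: for a spherical sensor of radius a consuming
# analyte at a total rate J in a purely diffusive medium, steady-state
# diffusion gives a depressed surface concentration:
#     C_surf = C_bulk - J / (4*pi*D_eff*a),  with  D_eff = D / lambda^2
# where lambda is the tissue tortuosity. A free-flow calibration assumes
# C_surf == C_bulk, so it reads low in tissue. Geometry and all
# parameter values below are invented; units are nominal SI.

def surface_concentration(c_bulk, consumption_rate, radius, D_free, tortuosity=1.6):
    """Steady-state analyte concentration at a spherical sensor surface."""
    D_eff = D_free / tortuosity ** 2      # hindered diffusion in tissue
    return c_bulk - consumption_rate / (4 * math.pi * D_eff * radius)

c_tissue = 10.0                            # true bulk concentration
c_seen = surface_concentration(10.0, 1e-13, 25e-6, 7.6e-10)
print(c_seen < c_tissue)                   # True: the sensor reads low in tissue
```

In free flow the depression term vanishes (advection keeps the surface at bulk concentration), so dividing a tissue signal by a free-flow calibration factor systematically underestimates the tissue concentration, as the abstract argues.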

  11. Low modeled ozone production suggests underestimation of precursor emissions (especially NOx) in Europe

    Science.gov (United States)

    Oikonomakis, Emmanouil; Aksoyoglu, Sebnem; Ciarelli, Giancarlo; Baltensperger, Urs; Prévôt, André Stephan Henry

    2018-02-01

    High surface ozone concentrations, which usually occur when photochemical ozone production takes place, pose a great risk to human health and vegetation. Air quality models are often used by policy makers as tools for the development of ozone mitigation strategies. However, the modeled ozone production is often insufficiently evaluated in ozone modeling studies. The focus of this work is to evaluate the modeled ozone production in Europe indirectly, with the use of the ozone-temperature correlation for the summer of 2010, and to analyze its sensitivity to precursor emissions and meteorology by using the regional air quality model, the Comprehensive Air Quality Model with Extensions (CAMx). The results show that the model significantly underestimates the observed high afternoon surface ozone mixing ratios (≥ 60 ppb) by 10-20 ppb and overestimates the lower ones (< 40 ppb) by 5-15 ppb. Doubling NOx emissions reduced the underestimation of the observed high ozone mixing ratios without significant degradation of the model performance for the lower ozone mixing ratios. The model performance for ozone-temperature correlation is also better when NOx emissions are doubled. In the Benelux area, however, the third scenario (where both NOx and VOC emissions are increased) leads to a better model performance. Although increasing only the traffic NOx emissions by a factor of 4 gave very similar results to the doubling of all NOx emissions, the first scenario is more consistent with the uncertainties reported by other studies than the latter, suggesting that high uncertainties in NOx emissions might originate mainly from the road-transport sector rather than from other sectors. The impact of meteorology was examined with three sensitivity tests: (i) increased surface temperature by 4 °C, (ii) reduced wind speed by 50 % and (iii) doubled wind speed. The first two scenarios led to a consistent increase in all surface ozone mixing ratios, thus improving the model performance for the high ozone values but significantly degrading it for the low ozone values, while the third scenario had exactly the

  12. The underestimated potential of solar energy to mitigate climate change

    Science.gov (United States)

    Creutzig, Felix; Agoston, Peter; Goldschmidt, Jan Christoph; Luderer, Gunnar; Nemet, Gregory; Pietzcker, Robert C.

    2017-09-01

    The Intergovernmental Panel on Climate Change's fifth assessment report emphasizes the importance of bioenergy and carbon capture and storage for achieving climate goals, but it does not identify solar energy as a strategically important technology option. That is surprising given the strong growth, large resource, and low environmental footprint of photovoltaics (PV). Here we explore how models have consistently underestimated PV deployment and identify the reasons for underlying bias in models. Our analysis reveals that rapid technological learning and technology-specific policy support were crucial to PV deployment in the past, but that future success will depend on adequate financing instruments and the management of system integration. We propose that with coordinated advances in multiple components of the energy system, PV could supply 30-50% of electricity in competitive markets.

  13. Efficient trawl avoidance by mesopelagic fishes causes large underestimation of their biomass

    KAUST Repository

    Kaartvedt, Stein

    2012-06-07

    Mesopelagic fishes occur in all the world’s oceans, but their abundance and consequently their ecological significance remains uncertain. The current global estimate based on net sampling prior to 1980 suggests a global abundance of one gigatonne (10⁹ t) wet weight. Here we report novel evidence of efficient avoidance of such sampling by the most common myctophid fish in the Northern Atlantic, i.e. Benthosema glaciale. We reason that similar avoidance of nets may explain consistently higher acoustic abundance estimates of mesopelagic fish from different parts of the world’s oceans. It appears that mesopelagic fish abundance may be underestimated by one order of magnitude, suggesting that the role of mesopelagic fish in the oceans might need to be revised.

  14. Linear-quadratic model underestimates sparing effect of small doses per fraction in rat spinal cord

    International Nuclear Information System (INIS)

    Shun Wong, C.; Toronto University; Minkin, S.; Hill, R.P.; Toronto University

    1993-01-01

    The application of the linear-quadratic (LQ) model to describe iso-effective fractionation schedules for dose fraction sizes less than 2 Gy has been controversial. Experiments are described in which the effect of daily fractionated irradiation given with a wide range of fraction sizes was assessed in rat cervical spinal cord. The first group of rats was given doses in 1, 2, 4, 8 and 40 fractions/day. The second group received 3 initial 'top-up' doses of 9 Gy given once daily, representing 3/4 tolerance, followed by doses in 1, 2, 10, 20, 30 and 40 fractions/day. The fractionated portion of the irradiation schedule therefore constituted only the final quarter of the tolerance dose. The endpoint of the experiments was paralysis of the forelimbs secondary to white matter necrosis. Direct analysis of data from experiments with full-course fractionation up to 40 fractions/day (25.0-1.98 Gy/fraction) indicated consistency with the LQ model, yielding an α/β value of 2.41 Gy. Analysis of data from experiments in which the 3 'top-up' doses were followed by up to 10 fractions (10.0-1.64 Gy/fraction) gave an α/β value of 3.41 Gy. However, data from 'top-up' experiments with 20, 30 and 40 fractions (1.60-0.55 Gy/fraction) were inconsistent with the LQ model and gave a very small α/β of 0.48 Gy. It is concluded that the LQ model based on data from large doses/fraction underestimates the sparing effect of small doses/fraction, provided sufficient time is allowed between fractions for repair of sublethal damage. (author). 28 refs., 5 figs., 1 tab
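The iso-effect relation being tested above follows from the LQ model: the effect of n fractions of size d is E = n(αd + βd²), so two schedules are iso-effective when D₂ = D₁(d₁ + α/β)/(d₂ + α/β). A sketch with illustrative doses (the α/β of 2.41 Gy is the value fitted in the abstract):

```python
# Illustrative sketch of the linear-quadratic (LQ) iso-effect relation.
# Effect of n fractions of size d: E = n*(alpha*d + beta*d^2), so two
# iso-effective schedules satisfy D2 = D1*(d1 + a_b)/(d2 + a_b), where
# a_b is the alpha/beta ratio in Gy. Doses below are illustrative.

def isoeffective_dose(total_dose_1, d1, d2, alpha_beta):
    """Total dose at fraction size d2 matching the effect of (total_dose_1, d1)."""
    return total_dose_1 * (d1 + alpha_beta) / (d2 + alpha_beta)

# With a low alpha/beta (late-responding tissue such as spinal cord),
# shrinking the fraction size from 2 Gy to 0.55 Gy strongly raises the
# iso-effective total dose, i.e. small fractions are sparing.
print(round(isoeffective_dose(50.0, 2.0, 0.55, 2.41), 1))  # 74.5
```

The abstract's finding is that at the smallest fraction sizes the real cord tolerates even more dose than this relation predicts, i.e. the LQ extrapolation from large fractions understates the sparing.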

  15. Breakdown of hot-spot model in determining convective amplification in large homogeneous systems

    International Nuclear Information System (INIS)

    Mounaix, Philippe; Divol, Laurent

    2004-01-01

    Convective amplification in large homogeneous systems is studied, both analytically and numerically, in the case of a linear diffraction-free stochastic amplifier. Overall amplification does not result from successive amplifications in small scale high intensity hot spots, but from a single amplification in a delocalized mode of the driver field spreading over the whole interaction length. For this model, the hot-spot approach is found to systematically underestimate the gain factor by more than 50%

  16. Underestimation of boreal soil carbon stocks by mathematical soil carbon models linked to soil nutrient status

    Science.gov (United States)

    Ťupek, Boris; Ortiz, Carina A.; Hashimoto, Shoji; Stendahl, Johan; Dahlgren, Jonas; Karltun, Erik; Lehtonen, Aleksi

    2016-08-01

    Inaccurate estimate of the largest terrestrial carbon pool, the soil organic carbon (SOC) stock, is the major source of uncertainty in simulating the feedback of climate warming on ecosystem-atmosphere carbon dioxide exchange by process-based ecosystem and soil carbon models. Although the models need to simplify complex environmental processes of soil carbon sequestration, in a large mosaic of environments a missing key driver could lead to a modeling bias in predictions of SOC stock change. We aimed to evaluate SOC stock estimates of process-based models (Yasso07, Q, and CENTURY soil sub-model v4) against a massive Swedish forest soil inventory data set (3230 samples) organized by a recursive partitioning method into distinct soil groups with underlying SOC stock development linked to physicochemical conditions. For two-thirds of measurements all models predicted accurate SOC stock levels regardless of the detail of input data, e.g., whether they ignored or included soil properties. However, in fertile sites with high N deposition, high cation exchange capacity, or moderately increased soil water content, the Yasso07 and Q models underestimated SOC stocks. In comparison to Yasso07 and Q, accounting for site-specific soil characteristics (e.g., clay content and topsoil mineral N) by CENTURY improved SOC stock estimates for sites with high clay content, but not for sites with high N deposition. Our analysis suggested that the soils with poorly predicted SOC stocks, as characterized by high nutrient status and well-sorted parent material, indeed have had other predominant drivers of SOC stabilization lacking in the models, presumably mycorrhizal organic uptake and organo-mineral stabilization processes. Our results imply that the role of soil nutrient status as a regulator of organic matter mineralization has to be re-evaluated, since correct SOC stocks are decisive for predicting future SOC change and soil CO2 efflux.

  17. Method for activity measurement in large packages of radioactive wastes. Is the overall activity stored inside a final repository systematically under-estimated?

    International Nuclear Information System (INIS)

    Rottner, B.

    2005-01-01

    The activity of a rad waste package is usually evaluated from gamma spectrometry measurements or dose rates emitted by the package, associated with transfer functions. These functions are calculated assuming that both the activity and mass distributions are homogeneous. The proposed method, OPROF-STAT (patented), evaluates the error arising from this homogeneity assumption. This error has a systematic part, leading to an over- or underestimation of the overall activity in a family of similar waste packages, and a stochastic part, whose mean effect on the overall activity of the family is null. The method consists in building a virtual family of packages by numeric simulation of the filling of each package of the family. The simulated filling has a stochastic part, so that the mass and activity distributions inside a package differ from one package to another. The virtual packages are wholly known, which is not the case for the real family, and it is then possible to compute the result of a measurement, and the associated error, for each package of the virtual family. A way to fit and demonstrate the representativeness of the virtual family is described. The main trends and parameters modifying the error are explored: a systematic underestimation of the activity in a large family of rad waste packages is possible. (author)

  18. The Underestimation of Isoprene in Houston during the Texas 2013 DISCOVER-AQ Campaign

    Science.gov (United States)

    Choi, Y.; Diao, L.; Czader, B.; Li, X.; Estes, M. J.

    2014-12-01

    This study applies principal component analysis to aircraft data from the Texas 2013 DISCOVER-AQ (Deriving Information on Surface Conditions from Column and Vertically Resolved Observations Relevant to Air Quality) field campaign to characterize isoprene sources over Houston during September 2013. The biogenic isoprene signature appears in the third principal component and anthropogenic signals in the following two. Evaluations of the Community Multiscale Air Quality (CMAQ) model simulations of isoprene with airborne measurements are more accurate for suburban areas than for industrial areas. This study also compares model outputs to eight surface automated gas chromatograph (Auto-GC) measurements near the Houston ship channel industrial area during the nighttime and shows that modeled anthropogenic isoprene is underestimated by a factor of 10.60. This study employs a new simulation with a modified anthropogenic emissions inventory (constrained using the ratios of observed values versus simulated ones) that yields closer isoprene predictions at night, with a reduction in the mean bias by 56.93%, implying that isoprene emissions in the 2008 National Emission Inventory are underestimated in the city of Houston, and that other climate or chemistry and transport models using the same emissions inventory might also underestimate isoprene in other Houston-like areas in the United States.

  19. Satellite methods underestimate indirect climate forcing by aerosols

    Science.gov (United States)

    Penner, Joyce E.; Xu, Li; Wang, Minghuai

    2011-01-01

    Satellite-based estimates of the aerosol indirect effect (AIE) are consistently smaller than the estimates from global aerosol models, and, partly as a result of these differences, the assessment of this climate forcing includes large uncertainties. Satellite estimates typically use the present-day (PD) relationship between observed cloud drop number concentrations (Nc) and aerosol optical depths (AODs) to determine the preindustrial (PI) values of Nc. These values are then used to determine the PD and PI cloud albedos and, thus, the effect of anthropogenic aerosols on top-of-the-atmosphere radiative fluxes. Here, we use a model with realistic aerosol and cloud processes to show that empirical relationships for ln(Nc) versus ln(AOD) derived from PD results do not represent the atmospheric perturbation caused by the addition of anthropogenic aerosols to the preindustrial atmosphere. As a result, the model estimates based on satellite methods of the AIE are between a factor of 3 and more than a factor of 6 smaller than model estimates based on actual PD and PI values for Nc. Using ln(Nc) versus ln(AI) (Aerosol Index, or the optical depth times the Ångström exponent) to estimate preindustrial values for Nc provides estimates for Nc and forcing that are closer to the values predicted by the model. Nevertheless, the AIE using ln(Nc) versus ln(AI) may be substantially incorrect on a regional basis and may underestimate or overestimate the global average forcing by 25 to 35%. PMID:21808047
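The satellite approach critiqued above amounts to a log-log regression: fit ln(Nc) against ln(AOD) in present-day data, then extrapolate to an assumed preindustrial AOD to infer preindustrial Nc. A sketch of the mechanics with invented data (the paper's point is that this extrapolation misrepresents the true PD-to-PI perturbation, not that the regression itself is hard):

```python
import math

# Illustrative sketch of the satellite method: regress ln(Nc) on
# ln(AOD) in present-day data, then extrapolate to a preindustrial AOD
# to infer preindustrial Nc. All data values are invented.

def fit_loglog(aod, nc):
    """Least-squares slope and intercept of ln(nc) versus ln(aod)."""
    x = [math.log(a) for a in aod]
    y = [math.log(n) for n in nc]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

def predict_nc(aod_value, slope, intercept):
    """Extrapolated drop number concentration at a given AOD."""
    return math.exp(intercept + slope * math.log(aod_value))

slope, intercept = fit_loglog([0.1, 0.2, 0.4], [50.0, 70.0, 98.0])
pi_nc = predict_nc(0.05, slope, intercept)  # extrapolated preindustrial Nc
```

Because the PD slope conflates aerosol effects with meteorological covariation, the extrapolated PI value of Nc, and hence the inferred albedo change, can differ several-fold from the perturbation a model obtains by actually removing anthropogenic aerosols.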

  20. Is dream recall underestimated by retrospective measures and enhanced by keeping a logbook? A review.

    Science.gov (United States)

    Aspy, Denholm J; Delfabbro, Paul; Proeve, Michael

    2015-05-01

    There are two methods commonly used to measure dream recall in the home setting. The retrospective method involves asking participants to estimate their dream recall in response to a single question, and the logbook method involves keeping a daily record of one's dream recall. Until recently, the implicit assumption has been that these measures are largely equivalent. However, this is challenged by the tendency for retrospective measures to yield significantly lower dream recall rates than logbooks. A common explanation is that retrospective measures underestimate dream recall; another is that keeping a logbook enhances it. If retrospective measures underestimate dream recall and logbooks enhance it, then both are unlikely to reflect typical dream recall rates and may be confounded with variables associated with the underestimation and enhancement effects. To date, this issue has received insufficient attention. The present review addresses this gap in the literature. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Underestimating belief in climate change

    Science.gov (United States)

    Jost, John T.

    2018-03-01

    People are influenced by second-order beliefs — beliefs about the beliefs of others. New research finds that citizens in the US and China systematically underestimate popular support for taking action to curb climate change. Fortunately, they seem willing and able to correct their misperceptions.

  2. Diversity in the representation of large-scale circulation associated with ENSO-Indian summer monsoon teleconnections in CMIP5 models

    Science.gov (United States)

    Ramu, Dandi A.; Chowdary, Jasti S.; Ramakrishna, S. S. V. S.; Kumar, O. S. R. U. B.

    2018-04-01

    Realistic simulation of large-scale circulation patterns associated with the El Niño-Southern Oscillation (ENSO) is vital in coupled models in order to represent teleconnections to different regions of the globe. The diversity in representing large-scale circulation patterns associated with ENSO-Indian summer monsoon (ISM) teleconnections in 23 Coupled Model Intercomparison Project Phase 5 (CMIP5) models is examined. The CMIP5 models are classified into three groups based on the correlation between the Niño3.4 sea surface temperature (SST) index and ISM rainfall anomalies: group 1 (G1) models overestimate El Niño-ISM teleconnections, group 3 (G3) models underestimate them, and group 2 (G2) models represent them better. Results show that in G1 models, El Niño-induced Tropical Indian Ocean (TIO) SST anomalies are not well represented. Anomalous low-level anticyclonic circulation over the southeastern TIO and the western subtropical northwest Pacific (WSNP) cyclonic circulation are shifted too far west, to 60° E and 120° E, respectively. This bias in circulation patterns implies dry wind advection from the extratropics/midlatitudes to the Indian subcontinent. In addition, large-scale upper-level convergence together with lower-level divergence over the ISM region corresponding to El Niño is stronger in G1 models than in observations. Thus, an unrealistic shift in low-level circulation centers, corroborated by upper-level circulation changes, is responsible for the overestimation of ENSO-ISM teleconnections in G1 models. Warm Pacific SST anomalies associated with El Niño are shifted too far west in many G3 models, unlike in the observations. Furthermore, large-scale circulation anomalies over the Pacific and the ISM region are misrepresented during El Niño years in G3 models. Too strong upper-level convergence away from the Indian subcontinent and too weak WSNP cyclonic circulation are prominent in most of the G3 models in which ENSO-ISM teleconnections are

  3. Terrestrial biosphere models underestimate photosynthetic capacity and CO2 assimilation in the Arctic.

    Science.gov (United States)

    Rogers, Alistair; Serbin, Shawn P; Ely, Kim S; Sloan, Victoria L; Wullschleger, Stan D

    2017-12-01

    Terrestrial biosphere models (TBMs) are highly sensitive to model representation of photosynthesis, in particular the parameters maximum carboxylation rate and maximum electron transport rate at 25°C (Vc,max.25 and Jmax.25, respectively). Many TBMs do not include representation of Arctic plants, and those that do rely on understanding and parameterization from temperate species. We measured photosynthetic CO2 response curves and leaf nitrogen (N) content in species representing the dominant vascular plant functional types found on the coastal tundra near Barrow, Alaska. The activation energies associated with the temperature response functions of Vc,max and Jmax were 17% lower than commonly used values. When scaled to 25°C, Vc,max.25 and Jmax.25 were two- to five-fold higher than the values used to parameterize current TBMs. This high photosynthetic capacity was attributable to a high leaf N content and the high fraction of N invested in Rubisco. Leaf-level modeling demonstrated that current parameterization of TBMs resulted in a two-fold underestimation of the capacity for leaf-level CO2 assimilation in Arctic vegetation. This study highlights the poor representation of Arctic photosynthesis in TBMs, and provides the critical data necessary to improve our ability to project the response of the Arctic to global environmental change. No claim to original US Government works. New Phytologist © 2017 New Phytologist Trust.
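    Why the activation energy matters when reporting rates at 25°C can be illustrated with the standard Arrhenius scaling f(T)/f(25) = exp[Ha(T − T25)/(T25·R·T)]. The measured rate, measurement temperature, and the reference Ha below are hypothetical illustration values; only the 17% reduction is taken from the abstract.

```python
import math

R = 8.314  # gas constant, J mol-1 K-1

def scale_to_25(v_at_t, t_celsius, ha):
    """Scale a rate measured at t_celsius to 25°C using an Arrhenius
    temperature response with activation energy ha (J mol-1)."""
    tk = t_celsius + 273.15
    t25 = 298.15
    arrhenius = math.exp(ha * (tk - t25) / (t25 * R * tk))  # f(T)/f(25)
    return v_at_t / arrhenius

# Hypothetical numbers: a rate of 30 units measured at 10°C, scaled with
# a commonly used activation energy versus one 17% lower.
ha_common = 65330.0           # J mol-1, an assumed literature-style value
ha_arctic = ha_common * 0.83  # 17% lower, as reported in the abstract

v25_common = scale_to_25(30.0, 10.0, ha_common)
v25_arctic = scale_to_25(30.0, 10.0, ha_arctic)
```

    A lower activation energy means a flatter temperature response, so the same cold-temperature measurement implies a smaller value when scaled to 25°C; using the wrong Ha therefore biases the 25°C parameter that TBMs consume.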

  4. Coupled climate model simulations of Mediterranean winter cyclones and large-scale flow patterns

    Directory of Open Access Journals (Sweden)

    B. Ziv

    2013-03-01

    The study aims to evaluate the ability of global, coupled climate models to reproduce the synoptic regime of the Mediterranean Basin. The output of simulations of the 9 models included in the IPCC CMIP3 effort is compared to the NCEP-NCAR reanalysis data for the period 1961–1990. The study examined the spatial distribution of cyclone occurrence, the mean Mediterranean upper- and lower-level troughs, the inter-annual variation and trend in the occurrence of Mediterranean cyclones, and the main large-scale circulation patterns, represented by rotated EOFs of the 500 hPa field and sea level pressure. The models successfully reproduce the two maxima in cyclone density in the Mediterranean and their locations, the location of the average upper- and lower-level troughs, the relative inter-annual variation in cyclone occurrences, and the structure of the four leading large-scale EOFs. The main discrepancy is the models' underestimation of cyclone density in the Mediterranean, especially in its western part. The models' skill in reproducing the cyclone distribution is found to be correlated with their spatial resolution, especially in the vertical. The ongoing improvement in model spatial resolution suggests that their ability to reproduce Mediterranean cyclones will improve as well.

  5. CMIP5 land surface models systematically underestimate inter-annual variability of net ecosystem exchange in semi-arid southwestern North America.

    Science.gov (United States)

    MacBean, N.; Scott, R. L.; Biederman, J. A.; Vuichard, N.; Hudson, A.; Barnes, M.; Fox, A. M.; Smith, W. K.; Peylin, P. P.; Maignan, F.; Moore, D. J.

    2017-12-01

    Recent studies based on analysis of atmospheric CO2 inversions, satellite data and terrestrial biosphere model simulations have suggested that semi-arid ecosystems play a dominant role in the interannual variability and long-term trend in the global carbon sink. These studies have largely cited the response of vegetation activity to changing moisture availability as the primary mechanism of variability. However, some land surface models (LSMs) used in these studies have performed poorly in comparison to satellite-based observations of vegetation dynamics in semi-arid regions. Further analysis is therefore needed to ensure semi-arid carbon cycle processes are well represented in global scale LSMs before we can fully establish their contribution to the global carbon cycle. In this study, we evaluated annual net ecosystem exchange (NEE) simulated by CMIP5 land surface models using observations from 20 Ameriflux sites across semi-arid southwestern North America. We found that CMIP5 models systematically underestimate the magnitude and sign of NEE inter-annual variability; therefore, the true role of semi-arid regions in the global carbon cycle may be even more important than previously thought. To diagnose the factors responsible for this bias, we used the ORCHIDEE LSM to test different climate forcing data, prescribed vegetation fractions and model structures. Climate and prescribed vegetation do contribute to uncertainty in annual NEE simulations, but the bias is primarily caused by incorrect timing and magnitude of peak gross carbon fluxes. Modifications to the hydrology scheme improved simulations of soil moisture in comparison to data. This in turn improved the seasonal cycle of carbon uptake due to a more realistic limitation on photosynthesis during water stress. However, the peak fluxes are still too low, and phenology is poorly represented for desert shrubs and grasses. We provide suggestions on model developments needed to tackle these issues in the future.

  6. Body Size Estimation from Early to Middle Childhood: Stability of Underestimation, BMI, and Gender Effects

    Directory of Open Access Journals (Sweden)

    Silje Steinsbekk

    2017-11-01

    Individuals who are overweight are more likely to underestimate their body size than those who are normal weight, and overweight underestimators are less likely to engage in weight loss efforts. Underestimation of body size might represent a barrier to prevention and treatment of overweight; thus insight into how underestimation of body size develops and tracks through the childhood years is needed. The aim of the present study was therefore to examine stability in children’s underestimation of body size, exploring predictors of underestimation over time. The prospective path from underestimation to BMI was also tested. In a Norwegian cohort of 6 year olds, followed up at ages 8 and 10 (analysis sample: n = 793), body size estimation was captured by the Children’s Body Image Scale, height and weight were measured, and BMI calculated. Overall, children were more likely to underestimate than overestimate their body size. Individual stability in underestimation was modest, but significant. Higher BMI predicted future underestimation, even when previous underestimation was adjusted for, but there was no evidence for the opposite direction of influence. Boys were more likely than girls to underestimate their body size at ages 8 and 10 (age 8: 38.0% vs. 24.1%; age 10: 57.9% vs. 30.8%) and showed a steeper increase in underestimation with age compared to girls. In conclusion, the majority of 6-, 8-, and 10-year-olds correctly estimate their body size (prevalence ranging from 40 to 70% depending on age and gender), although a substantial portion perceived themselves to be thinner than they actually were. Higher BMI forecasted future underestimation, but underestimation did not increase the risk for excessive weight gain in middle childhood.

  7. The Perception of Time Is Underestimated in Adolescents With Anorexia Nervosa.

    Science.gov (United States)

    Vicario, Carmelo M; Felmingham, Kim

    2018-01-01

    Research has revealed reduced temporal discounting (i.e., increased capacity to delay reward) and altered interoceptive awareness in anorexia nervosa (AN). In line with research linking temporal underestimation with a reduced tendency to devalue a reward and with reduced interoceptive awareness, we tested the hypothesis that time duration might be underestimated in AN. Our findings revealed that patients with AN displayed lower timing accuracy in the form of timing underestimation compared with controls. These results were not predicted by clinical or demographic factors, attention, or working memory performance of the participants. The evidence of a temporal underestimation bias in AN might be clinically relevant to explaining their abnormal motivation in pursuing a long-term restrictive diet, in line with evidence that increasing the subjective temporal proximity of remote future goals can boost motivation and the actual behavior to reach them.

  8. The Perception of Time Is Underestimated in Adolescents With Anorexia Nervosa

    Directory of Open Access Journals (Sweden)

    Carmelo M. Vicario

    2018-04-01

    Research has revealed reduced temporal discounting (i.e., increased capacity to delay reward) and altered interoceptive awareness in anorexia nervosa (AN). In line with research linking temporal underestimation with a reduced tendency to devalue a reward and with reduced interoceptive awareness, we tested the hypothesis that time duration might be underestimated in AN. Our findings revealed that patients with AN displayed lower timing accuracy in the form of timing underestimation compared with controls. These results were not predicted by clinical or demographic factors, attention, or working memory performance of the participants. The evidence of a temporal underestimation bias in AN might be clinically relevant to explaining their abnormal motivation in pursuing a long-term restrictive diet, in line with evidence that increasing the subjective temporal proximity of remote future goals can boost motivation and the actual behavior to reach them.

  9. Large-scale movements in European badgers: has the tail of the movement kernel been underestimated?

    Science.gov (United States)

    Byrne, Andrew W; Quinn, John L; O'Keeffe, James J; Green, Stuart; Sleeman, D Paddy; Martin, S Wayne; Davenport, John

    2014-07-01

    Characterizing patterns of animal movement is a major aim in population ecology, and yet doing so at an appropriate spatial scale remains a major challenge. Estimating the frequency and distances of movements is of particular importance when species are implicated in the transmission of zoonotic diseases. European badgers (Meles meles) are classically viewed as exhibiting limited dispersal, and yet their movements bring them into conflict with farmers due to their potential to spread bovine tuberculosis in parts of their range. Considerable uncertainty surrounds the movement potential of badgers, and this may be related to the spatial scale of previous empirical studies. We conducted a large-scale mark-recapture study (755 km²; 2008-2012; 1935 capture events; 963 badgers) to investigate movement patterns in badgers, and undertook a comparative meta-analysis using published data from 15 European populations. The dispersal movement (>1 km) kernel followed an inverse power-law function, with a substantial 'tail' indicating the occurrence of rare long-distance dispersal attempts during the study period. The mean recorded distance from this distribution was 2.6 km, the 95th percentile was 7.3 km, and the longest recorded was 22.1 km. Dispersal frequency distributions were significantly different between genders; males dispersed more frequently than females, but females made proportionally more long-distance dispersal attempts than males. We used a subsampling approach to demonstrate that the appropriate minimum spatial scale to characterize badger movements in our study population was 80 km², substantially larger than many previous badger studies. Furthermore, the meta-analysis indicated a significant association between maximum movement distance and study area size, while controlling for population density. Maximum long-distance movements were often only recorded by chance beyond the boundaries of study areas. These findings suggest that the tail of the badger
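    Fitting the exponent of an inverse power-law movement kernel like the one described above is commonly done with the continuous maximum-likelihood estimator α̂ = 1 + n / Σ ln(xᵢ/xmin). The sketch below applies it to synthetic distances; the threshold and true exponent are invented, not the badger data.

```python
import numpy as np

# Hypothetical illustration: sample dispersal distances (km) from a
# continuous power law p(x) ~ x^-alpha for x >= xmin via inverse-CDF
# sampling, then recover the exponent by maximum likelihood.
rng = np.random.default_rng(2)
xmin = 1.0        # only movements > 1 km count as dispersal here
alpha_true = 2.5  # assumed kernel exponent

u = rng.uniform(size=2000)
distances = xmin * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

# Continuous MLE for the power-law exponent
alpha_hat = 1.0 + len(distances) / np.sum(np.log(distances / xmin))
```

    The heavy tail is the practical point: even with α ≈ 2.5, a sample of a few thousand movements contains occasional distances far beyond the mean, which is why small study areas truncate the observed kernel.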

  10. Whole-word response scoring underestimates functional spelling ability for some individuals with global agraphia

    Directory of Open Access Journals (Sweden)

    Andrew Tesla Demarco

    2015-05-01

    These data suggest that conventional whole-word scoring may significantly underestimate functional spelling performance. Because by-letter scoring boosted pre-treatment scores to the same extent as post-treatment scores, the magnitude of treatment gains was no greater than estimates from conventional whole-word scoring. Nonetheless, the surprisingly large disparity between conventional whole-word scoring and by-letter scoring suggests that by-letter scoring methods may warrant further investigation. Because by-letter analyses may hold interest to others, we plan to make the software tool used in this study available online to researchers and clinicians at large.

  11. Underestimation of risk due to exposure misclassification

    DEFF Research Database (Denmark)

    Grandjean, Philippe; Budtz-Jørgensen, Esben; Keiding, Niels

    2004-01-01

    Exposure misclassification constitutes a major obstacle when developing dose-response relationships for risk assessment. A non-differential error results in underestimation of the risk. If the degree of misclassification is known, adjustment may be achieved by sensitivity analysis. The purpose...

  12. Underestimation of soil carbon stocks by Yasso07, Q, and CENTURY models in boreal forest linked to overlooking site fertility

    Science.gov (United States)

    Ťupek, Boris; Ortiz, Carina; Hashimoto, Shoji; Stendahl, Johan; Dahlgren, Jonas; Karltun, Erik; Lehtonen, Aleksi

    2016-04-01

    The soil organic carbon (SOC) stock changes estimated by most process-based soil carbon models (e.g. Yasso07, Q, and CENTURY), needed for reporting changes in soil carbon amounts to the United Nations Framework Convention on Climate Change (UNFCCC) and for mitigating anthropogenic CO2 emissions through soil carbon management, can be biased if, across a large mosaic of environments, the models miss a key factor driving SOC sequestration. To our knowledge, soil nutrient status has not been tested as a missing driver of these models in previous studies, although it is known that the models fail to reconstruct spatial variation and that soil nutrient status drives ecosystem carbon use efficiency and soil carbon sequestration. We evaluated SOC stock estimates of the Yasso07, Q, and CENTURY process-based models against field data from the Swedish Forest Soil National Inventory (3230 samples), organized by the recursive partitioning method (RPART) into distinct soil groups whose SOC stock development is linked to physicochemical conditions. The models worked for most soils with approximately average SOC stocks, but could not reproduce the higher measured SOC stocks in our application. The Yasso07 and Q models, which used only climate and litterfall input data and ignored soil properties, generally agreed with two thirds of the measurements. However, when measurements were grouped along the gradient of soil nutrient status, we found that the models underestimated SOC stocks for Swedish boreal forest soils with higher site fertility. Accounting for soil texture (clay, silt, and sand content) and structure (bulk density) in the CENTURY model brought no improvement in carbon stock estimates, as CENTURY deviated in a similar manner. We highlight the mechanisms by which the models deviate from the measurements and ways of considering soil nutrient status in further model development. Our analysis suggests that the models indeed lack other predominant drivers of SOC stabilization

  13. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    Directory of Open Access Journals (Sweden)

    A. F. Van Loon

    2012-11-01

    underestimation of wet-to-dry-season droughts and snow-related droughts. Furthermore, almost no composite droughts were simulated for slowly responding areas, while many multi-year drought events were expected in these systems.

    We conclude that most drought propagation processes are reasonably well reproduced by the ensemble mean of large-scale models in contrasting catchments in Europe. Challenges, however, remain in catchments with cold and semi-arid climates and catchments with large storage in aquifers or lakes. This leads to a high uncertainty in hydrological drought simulation at large scales. Improvement of drought simulation in large-scale models should focus on a better representation of hydrological processes that are important for drought development, such as evapotranspiration, snow accumulation and melt, and especially storage. Besides the more explicit inclusion of storage in large-scale models, the parametrisation of storage processes also requires attention, for example through a global-scale dataset on aquifer characteristics, improved large-scale datasets on other land characteristics (e.g. soils, land cover), and calibration/evaluation of the models against observations of storage (e.g. in snow, groundwater).

  14. Quality of life and time to death: have the health gains of preventive interventions been underestimated?

    Science.gov (United States)

    Gheorghe, Maria; Brouwer, Werner B F; van Baal, Pieter H M

    2015-04-01

    This article explores the implications of the relationship between quality of life (QoL) and time to death (TTD) for economic evaluations of preventive interventions. Using health survey data on QoL for the general Dutch population linked to the mortality registry, we quantify the magnitude of this relationship. To address specific features of the nonstandard QoL distribution, such as boundedness, skewness, and heteroscedasticity, we modeled QoL using a generalized additive model for location, scale, and shape (GAMLSS) with a β-inflated outcome distribution. Our empirical results indicate that QoL decreases when approaching death, suggesting a strong relationship between TTD and QoL. Predictions of different regression models revealed that ignoring this relationship results in an underestimation of the quality-adjusted life year (QALY) gains from preventive interventions. The underestimation ranged between 3% and 7% and depended on age, the number of years gained from the intervention, and the discount rate used. © The Author(s) 2014.
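    The mechanism can be sketched with a stylized (not GAMLSS-based) QoL profile that declines over the final years of life: extending life also lifts QoL in the years that are no longer close to death, a gain that a constant-QoL comparator misses. All numbers below are invented for illustration.

```python
# Stylized illustration of TTD-dependent quality of life: QoL is 0.85
# far from death and declines linearly over the last five years.

def qol_ttd(years_to_death):
    """Assumed QoL as a function of remaining years of life."""
    return 0.85 - 0.07 * max(0, 5 - years_to_death)

def discounted_qalys(life_years, discount=0.03):
    total = 0.0
    for t in range(life_years):
        ttd = life_years - t  # years remaining at the start of year t
        total += qol_ttd(ttd) / (1 + discount) ** t
    return total

base, extended = 10, 12  # a preventive intervention adds two life years

# QALY gain when QoL depends on time to death
gain_ttd = discounted_qalys(extended) - discounted_qalys(base)

# Constant-QoL comparator: the baseline's average QoL applied flat to
# the added years only (no uplift of the formerly near-death years)
avg_q = sum(qol_ttd(base - t) for t in range(base)) / base
gain_flat = sum(avg_q / (1.03 ** t) for t in range(base, extended))
```

    In this toy setup the constant-QoL gain is roughly 10% below the TTD-aware gain, the same direction as the 3-7% underestimation reported in the abstract.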

  15. Simulated pre-industrial climate in Bergen Climate Model (version 2): model description and large-scale circulation features

    Directory of Open Access Journals (Sweden)

    O. H. Otterå

    2009-11-01

    The Bergen Climate Model (BCM) is a fully coupled atmosphere-ocean-sea-ice model that provides state-of-the-art computer simulations of the Earth's past, present, and future climate. Here, a pre-industrial multi-century simulation with an updated version of BCM is described and compared to observational data. The model is run without any form of flux adjustment and is stable for several centuries. The simulated climate reproduces the general large-scale circulation in the atmosphere reasonably well, except for a positive bias in the high-latitude sea level pressure distribution. Also, by introducing an updated turbulence scheme in the atmosphere model, a persistent cold bias has been eliminated. For the ocean part, the model drifts in sea surface temperatures and salinities are considerably reduced compared to earlier versions of BCM; improved conservation properties in the ocean model have contributed to this. Furthermore, by choosing a reference pressure at 2000 m and including thermobaric effects in the ocean model, a more realistic meridional overturning circulation is simulated in the Atlantic Ocean. The simulated sea-ice extent in the Northern Hemisphere is in general agreement with observational data, except for summer, where the extent is somewhat underestimated. In the Southern Hemisphere, large negative biases are found in the simulated sea-ice extent. This is partly related to problems with the mixed layer parametrization, causing the mixed layer in the Southern Ocean to be too deep, which in turn makes it hard to maintain a realistic sea-ice cover there. However, despite some problematic issues, the pre-industrial control simulation presented here should still be appropriate for climate change studies requiring multi-century simulations.

  16. Disclosing bias in bisulfite assay: MethPrimers underestimate high DNA methylation.

    Directory of Open Access Journals (Sweden)

    Andrea Fuso

    Discordant results obtained in bisulfite assays using MethPrimers (PCR primers designed using the MethPrimer software, or assuming that non-CpG cytosines are not methylated) versus primers insensitive to cytosine methylation led us to hypothesize a technical bias. We therefore used the two kinds of primers to study different experimental models and methylation statuses. We demonstrated that MethPrimers negatively select hypermethylated DNA sequences in the PCR step of the bisulfite assay, resulting in CpG methylation underestimation and non-CpG methylation masking, and failing to evidence differential methylation statuses. We also describe the characteristics of "Methylation-Insensitive Primers" (MIPs), having degenerate bases (G/A) to cope with the uncertain C/U conversion. As CpG and non-CpG DNA methylation patterns are largely variable depending on the species, developmental stage, tissue, and cell type, a variable extent of the bias is expected: the more the methylome is methylated, the greater the extent of the bias, with a prevalent effect of non-CpG methylation. These findings suggest a revision of several DNA methylation patterns documented so far and also point out the necessity of applying unbiased analyses to the increasing number of epigenomic studies.

  17. Guiding exploration in conformational feature space with Lipschitz underestimation for ab-initio protein structure prediction.

    Science.gov (United States)

    Hao, Xiaohu; Zhang, Guijun; Zhou, Xiaogen

    2018-04-01

    Computing conformations, which is essential to associate structural and functional information with gene sequences, is challenging due to the high dimensionality and rugged energy surface of the protein conformational space. Consequently, the dimension of the protein conformational space should be reduced to a proper level, and an effective exploration algorithm should be proposed. In this paper, a plug-in method for guiding exploration in conformational feature space with Lipschitz underestimation (LUE) for ab-initio protein structure prediction is proposed. The conformational space is first converted into an ultrafast shape recognition (USR) feature space. Based on the USR feature space, the conformational space can be further converted into an underestimation space, according to Lipschitz estimation theory, for guiding exploration. As a consequence of using the underestimation model, the tight lower-bound estimate can guide exploration, invalid sampling areas can be eliminated in advance, and the number of energy function evaluations can be reduced. The proposed method provides a novel technique for exploring the protein conformational space. LUE is applied to the differential evolution (DE) algorithm and to the Metropolis Monte Carlo (MMC) algorithm available in Rosetta; when LUE is applied to DE and MMC, candidate conformations are screened by the underestimation method prior to energy calculation and selection. Further, LUE is compared with DE and MMC by testing on 15 small-to-medium structurally diverse proteins. Test results show that near-native protein structures with higher accuracy can be obtained more rapidly and efficiently with the use of LUE. Copyright © 2018 Elsevier Ltd. All rights reserved.
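    The underestimation idea can be sketched in one dimension: once a few points have been scored, L(x) = max_i [f(x_i) − K·|x − x_i|] is a valid lower bound on the objective wherever K bounds the objective's slope, so candidates whose bound already exceeds the best score found can be skipped before any expensive evaluation. The energy function and constant K below are invented stand-ins, not Rosetta's score function.

```python
import math

def energy(x):  # stand-in for an expensive scoring function
    return math.sin(3 * x) + 0.5 * x * x

K = 5.0         # assumed Lipschitz bound on |energy'| over the search range
evaluated = []  # (x, energy(x)) pairs seen so far

def lower_bound(x):
    """Tightest Lipschitz lower bound on energy(x) from evaluated points:
    max over samples of energy(xi) - K*|x - xi|."""
    return max(fx - K * abs(x - xi) for xi, fx in evaluated)

# Seed with a few evaluations
for x0 in (-1.0, 0.0, 1.0):
    evaluated.append((x0, energy(x0)))

best = min(fx for _, fx in evaluated)

def screen(x):
    """A candidate is worth evaluating only if its lower bound
    could still beat the best energy found so far."""
    return lower_bound(x) < best
```

    Each rejected candidate saves one call to the expensive scoring function, which is the point of the pruning step described in the abstract.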

  18. Academic self-concept, learning motivation, and test anxiety of the underestimated student.

    Science.gov (United States)

    Urhahne, Detlef; Chao, Sheng-Han; Florineth, Maria Luise; Luttenberger, Silke; Paechter, Manuela

    2011-03-01

    BACKGROUND. Teachers' judgments of student performance on a standardized achievement test often result in an overestimation of students' abilities. In the majority of cases, a larger group of overestimated students and a smaller group of underestimated students are formed by these judgments. AIMS. In this research study, the consequences of the underestimation of students' mathematical performance potential were examined. SAMPLE. Two hundred and thirty-five fourth-grade students and their fourteen mathematics teachers took part in the investigation. METHOD. Students worked on a standardized mathematics achievement test and completed a self-description questionnaire about motivation and affect. Teachers estimated each individual student's potential with regard to mathematics test performance as well as students' expectancy for success, level of aspiration, academic self-concept, learning motivation, and test anxiety. The differences between teachers' judgments of students' test performance and students' actual performance were used to build groups of underestimated and overestimated students. RESULTS. Underestimated students displayed equal levels of test performance, learning motivation, and level of aspiration in comparison with overestimated students, but had lower expectancy for success, lower academic self-concept, and experienced more test anxiety. Teachers expected that underestimated students would receive lower grades on the next mathematics test, believed that these students were satisfied with lower grades, and assumed that they had weaker learning motivation than their overestimated classmates. CONCLUSION. Teachers' judgment error was not confined to test performance but generalized to motivational and affective traits of the students. © 2010 The British Psychological Society.

  19. Tritium: an underestimated health risk - 'ACROnic du nucleaire' nr 85, June 2009

    International Nuclear Information System (INIS)

    Barbey, Pierre

    2009-06-01

    After having indicated how tritium released into the environment (in the form of tritiated water or gas) is absorbed by living species, the author describes the different biological effects of ionizing radiation and the risk associated with tritium. He discusses how the radiation protection system is designed with respect to standards, and outlines how the risk related to tritium is underestimated by different existing models and standards. The author also examines the consequences of tritium transmutation and of the isotopic effect.

  20. Confounding environmental colour and distribution shape leads to underestimation of population extinction risk.

    Science.gov (United States)

    Fowler, Mike S; Ruokolainen, Lasse

    2013-01-01

    The colour of environmental variability influences the size of population fluctuations when filtered through density dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical feedback) models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample Skewness and Kurtosis and decreasing mean Kurtosis of these series alter the frequency distribution shape of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in the distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risks in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations. We must let
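    The AR(1) colouring scheme the abstract refers to is typically written x_t = k·x_{t−1} + √(1−k²)·ε_t, with k > 0 giving red, k = 0 white, and k < 0 blue series; the variance-scaling term keeps the stationary variance equal to that of the white innovations. The sketch below generates such series and recovers the colour from the lag-1 autocorrelation (parameter values are illustrative).

```python
import numpy as np

rng = np.random.default_rng(3)

def ar1_series(k, n, rng):
    """Coloured noise via the AR(1) recursion with unit stationary variance."""
    e = rng.normal(size=n)
    x = np.empty(n)
    x[0] = e[0]
    for t in range(1, n):
        x[t] = k * x[t - 1] + np.sqrt(1 - k * k) * e[t]
    return x

red = ar1_series(0.7, 50_000, rng)    # slow fluctuations dominate
blue = ar1_series(-0.7, 50_000, rng)  # rapid fluctuations dominate

def lag1(x):
    """Sample lag-1 autocorrelation, which estimates the colour parameter k."""
    return np.corrcoef(x[:-1], x[1:])[0, 1]
```

    Note that with Gaussian innovations each x_t here is itself exactly Gaussian; the distribution-shape effects discussed in the abstract concern the sample skewness and kurtosis of finite realizations, whose variability grows as the series becomes more strongly coloured.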

  1. High-risk lesions diagnosed at MRI-guided vacuum-assisted breast biopsy: can underestimation be predicted?

    Energy Technology Data Exchange (ETDEWEB)

    Crystal, Pavel [Mount Sinai Hospital, University Health Network, Division of Breast Imaging, Toronto, ON (Canada); Mount Sinai Hospital, Toronto, ON (Canada); Sadaf, Arifa; Bukhanov, Karina; Helbich, Thomas H. [Mount Sinai Hospital, University Health Network, Division of Breast Imaging, Toronto, ON (Canada); McCready, David [Princess Margaret Hospital, Department of Surgical Oncology, Toronto, ON (Canada); O' Malley, Frances [Mount Sinai Hospital, Department of Pathology, Laboratory Medicine, Toronto, ON (Canada)

    2011-03-15

    To evaluate the frequency of diagnosis of high-risk lesions at MRI-guided vacuum-assisted breast biopsy (MRgVABB) and to determine whether underestimation may be predicted. A retrospective review of the medical records of 161 patients who underwent MRgVABB was performed. The underestimation rate was defined as an upgrade of a high-risk lesion at MRgVABB to malignancy at surgery. Clinical data, MRI features of the biopsied lesions, and histological diagnoses of cases with and without underestimation were compared. Of the 161 MRgVABB procedures, histology revealed 31 (19%) high-risk lesions. Of 26 excised high-risk lesions, 13 (50%) were upgraded to malignancy. The underestimation rates of lobular neoplasia, atypical apocrine metaplasia, atypical ductal hyperplasia, and flat epithelial atypia were 50% (4/8), 100% (5/5), 50% (3/6) and 50% (1/2), respectively. There was no underestimation in the cases of benign papilloma without atypia (0/3) or radial scar (0/2). No statistically significant differences (p > 0.1) between cases with and without underestimation were seen in patient age, indications for breast MRI, lesion size on MRI, or the morphological and kinetic features of the biopsied lesions. Imaging and clinical features cannot be used reliably to predict underestimation at MRgVABB. All high-risk lesions diagnosed at MRgVABB require surgical excision. (orig.)

  2. Calorie Underestimation When Buying High-Calorie Beverages in Fast-Food Contexts.

    Science.gov (United States)

    Franckle, Rebecca L; Block, Jason P; Roberto, Christina A

    2016-07-01

    We asked 1877 adults and 1178 adolescents visiting 89 fast-food restaurants in New England in 2010 and 2011 to estimate calories purchased. Calorie underestimation was greater among those purchasing a high-calorie beverage than among those who did not (adults: 324 ± 698 vs 102 ± 591 calories; adolescents: 360 ± 602 vs 198 ± 509 calories). This difference remained significant for adults but not adolescents after adjusting for total calories purchased. Purchasing high-calorie beverages may uniquely contribute to calorie underestimation among adults.

  3. Stress underestimation and mental health literacy of depression in Japanese workers: A cross-sectional study.

    Science.gov (United States)

    Nakamura-Taira, Nanako; Izawa, Shuhei; Yamada, Kosuke Chris

    2018-04-01

    Appropriately estimating stress levels in daily life is important for motivating people to undertake stress-management behaviors or seek out information on stress management and mental health. People who exhibit high stress underestimation might not be interested in information on mental health, and would therefore have less knowledge of it. We investigated the association between stress underestimation tendency and mental health literacy of depression (i.e., knowledge of the recognition, prognosis, and usefulness of resources for depression) in Japanese workers. We cross-sectionally surveyed 3718 Japanese workers using a web-based questionnaire on stress underestimation, mental health literacy of depression (vignettes on people with depression), and covariates (age, education, depressive symptoms, income, and worksite size). After adjusting for covariates, high stress underestimation was associated with greater odds of not recognizing depression (i.e., choosing anything other than depression). Furthermore, these individuals had greater odds of expecting the case to improve without treatment and of not selecting useful sources of support (e.g., talking it over with friends/family, seeing a psychiatrist, taking medication, or seeing a counselor) compared to those with moderate stress underestimation. These relationships were all stronger among males than among females. Stress underestimation was related to poorer mental health literacy of depression. Copyright © 2018 Elsevier B.V. All rights reserved.

  4. Parental and Child Factors Associated with Under-Estimation of Children with Excess Weight in Spain.

    Science.gov (United States)

    de Ruiter, Ingrid; Olmedo-Requena, Rocío; Jiménez-Moleón, José Juan

    2017-11-01

    Objective Understanding obesity misperception and associated factors can improve strategies to increase obesity identification and intervention. We investigate underestimation of child excess weight with a broader perspective, incorporating perceptions, views, and psychosocial aspects associated with obesity. Methods This study used cross-sectional data from the Spanish National Health Survey in 2011-2012 for children aged 2-14 years who were overweight or obese. Percentages of parentally misperceived excess weight were calculated. Crude and adjusted analyses were performed for both child and parental factors, analyzing associations with underestimation. Results Children aged 2-5 years had the highest prevalence of misperceived overweight or obesity, at around 90%. In the 10-14-year-old age group, approximately 63% of overweight teens were misperceived as normal weight, as were 35.7% and 40% of obese males and females, respectively. Child gender did not affect underestimation, whereas a younger age did. Aspects of child social and mental health were associated with underestimation, as was short sleep duration. Exercise, weekend TV and videogames, and food habits had no effect on underestimation. Fathers were more likely to misperceive their child's weight status; however, parents' age had no effect. Smokers and parents with excess weight were less likely to misperceive their child's weight status. Parents being on a diet also decreased the odds of underestimation. Conclusions for practice This study identifies some characteristics of both parents and children which are associated with underestimation of child excess weight. These characteristics can be considered in primary care, in prevention strategies, and in further research.

  5. Unaware of a large leiomyoma: A case report with respect to unusual symptoms of large leiomyomas

    Directory of Open Access Journals (Sweden)

    Barış Mülayim

    2015-12-01

    Conclusion: Patients might have no symptoms or might be unaware of the presence of a large uterine leiomyoma, as in our case; however, large leiomyomas have various unusual symptoms in addition to the common ones. These symptoms should not be disregarded or underestimated.

  6. The role of underestimating body size for self-esteem and self-efficacy among grade five children in Canada.

    Science.gov (United States)

    Maximova, Katerina; Khan, Mohammad K A; Austin, S Bryn; Kirk, Sara F L; Veugelers, Paul J

    2015-10-01

    Underestimating body size hinders the healthy behavior modification needed to prevent obesity. However, initiatives to improve body size misperceptions may have detrimental consequences on self-esteem and self-efficacy. Using sex-specific multiple mixed-effect logistic regression models, we examined the association of underestimating versus accurate body size perceptions with self-esteem and self-efficacy in a provincially representative sample of 5075 grade five school children. Body size perceptions were defined as the standardized difference between the body mass index (BMI, from measured height and weight) and self-perceived body size (Stunkard body rating scale). Self-esteem and self-efficacy for physical activity and healthy eating were self-reported. Most overweight boys and girls (91% and 83%, respectively) and most obese boys and girls (93% and 90%) underestimated their body size. Underestimating weight was associated with greater self-efficacy for physical activity and healthy eating among normal-weight children (odds ratio: 1.9 and 1.6 for boys, 1.5 and 1.4 for girls) and greater self-esteem among overweight and obese children (odds ratio: 2.0 and 6.2 for boys, 2.0 and 3.4 for girls). Results highlight the importance of developing optimal intervention strategies as part of targeted obesity prevention efforts that de-emphasize the focus on body weight, while improving body size perceptions. Copyright © 2015 Elsevier Inc. All rights reserved.

  7. Predictive equations underestimate resting energy expenditure in female adolescents with phenylketonuria

    Science.gov (United States)

    Quirk, Meghan E.; Schmotzer, Brian J.; Singh, Rani H.

    2010-01-01

    Resting energy expenditure (REE) is often used to estimate total energy needs. The Schofield equation based on weight and height has been reported to underestimate REE in female children with phenylketonuria (PKU). The objective of this observational, cross-sectional study was to evaluate the agreement of measured REE with predicted REE for female adolescents with PKU. A total of 36 females (aged 11.5-18.7 years) with PKU attending Emory University’s Metabolic Camp (June 2002 – June 2008) underwent indirect calorimetry. Measured REE was compared to six predictive equations using paired Student’s t-tests, regression-based analysis, and assessment of clinical accuracy. The differences between measured and predicted REE were modeled against clinical parameters to determine whether a relationship existed. All six selected equations significantly underpredicted measured REE (P < 0.005). The Schofield equation based on weight had the greatest level of agreement, with the lowest mean prediction bias (144 kcal) and highest concordance correlation coefficient (0.626). However, the Schofield equation based on weight lacked clinical accuracy, predicting measured REE within ±10% in only 14 of 36 participants. Clinical parameters were not associated with bias for any of the equations. Predictive equations underestimated measured REE in this group of female adolescents with PKU. Currently, there is no accurate and precise alternative to indirect calorimetry in this population. PMID:20497783
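
The agreement statistics in the abstract (mean prediction bias, ±10% clinical accuracy) reduce to a short calculation. This is a minimal sketch; the paired measured/predicted REE values below are invented for illustration, not the study's data.

```python
# Sketch: agreement between measured and predicted REE (kcal/day).
# The values are illustrative, not the study's data.

def ree_agreement(measured, predicted, tol=0.10):
    """Mean prediction bias (measured - predicted) and the fraction of
    subjects whose prediction falls within +/- tol of the measured REE."""
    pairs = list(zip(measured, predicted))
    bias = sum(m - p for m, p in pairs) / len(pairs)
    within = sum(1 for m, p in pairs if abs(p - m) <= tol * m)
    return bias, within / len(pairs)

measured  = [1500, 1620, 1480, 1700, 1550]
predicted = [1380, 1500, 1450, 1520, 1400]  # hypothetical equation output
bias, accuracy = ree_agreement(measured, predicted)
print(round(bias), accuracy)  # positive bias => the equation under-predicts
```

A positive mean bias with low within-±10% coverage is exactly the pattern the abstract reports for the Schofield weight equation (144 kcal bias, 14/36 within ±10%).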

  8. Underestimation of Severity of Previous Whiplash Injuries

    Science.gov (United States)

    Naqui, SZH; Lovell, SJ; Lovell, ME

    2008-01-01

    INTRODUCTION We noted a report that more significant symptoms may be expressed after second whiplash injuries by a suggested cumulative effect, including degeneration. We wondered if patients were underestimating the severity of their earlier injury. PATIENTS AND METHODS We studied recent medicolegal reports, to assess subjects with a second whiplash injury. They had been asked whether their earlier injury was worse, the same or lesser in severity. RESULTS From the study cohort, 101 patients (87%) felt that they had fully recovered from their first injury and 15 (13%) had not. Seventy-six subjects considered their first injury of lesser severity, 24 worse and 16 the same. Of the 24 that felt the violence of their first accident was worse, only 8 had worse symptoms, and 16 felt their symptoms were mainly the same or less than their symptoms from their second injury. Statistical analysis of the data revealed that the proportion of those claiming a difference who said the previous injury was lesser was 76% (95% CI 66–84%). The observed proportion with a lesser injury was considerably higher than the 50% anticipated. CONCLUSIONS We feel that subjects may underestimate the severity of an earlier injury and associated symptoms. Reasons for this may include secondary gain rather than any proposed cumulative effect. PMID:18201501

  9. Individuals underestimate moderate and vigorous intensity physical activity.

    Directory of Open Access Journals (Sweden)

    Karissa L Canning

    Full Text Available BACKGROUND: It is unclear whether the common physical activity (PA) intensity descriptors used in PA guidelines worldwide align with the associated percent heart rate maximum (%HRmax) method used for prescribing relative PA intensities consistently between sexes, ethnicities, age categories and across body mass index (BMI) classifications. OBJECTIVES: The objectives of this study were to determine whether individuals properly select light, moderate and vigorous intensity PA using the intensity descriptions in PA guidelines and to determine whether there are differences in estimation across sex, ethnicity, age and BMI classifications. METHODS: 129 adults were instructed to walk/jog at a "light," "moderate" and "vigorous effort" in a randomized order. The PA intensities were categorized as being below, at or above the following %HRmax ranges: 50-63% for light, 64-76% for moderate and 77-93% for vigorous effort. RESULTS: On average, people correctly estimated light effort as 51.5±8.3%HRmax but underestimated moderate effort as 58.7±10.7%HRmax and vigorous effort as 69.9±11.9%HRmax. Participants walked at a light intensity (57.4±10.5%HRmax) when asked to walk at a pace that provided health benefits, wherein 52% of participants walked at a light effort pace, 19% walked at a moderate effort and 5% walked at a vigorous effort pace. These results did not differ by sex, ethnicity or BMI class. However, younger adults underestimated moderate and vigorous intensity more so than middle-aged adults (P<0.05). CONCLUSION: When the common PA guideline descriptors were aligned with the associated %HRmax ranges, the majority of participants underestimated the intensity of PA that is needed to obtain health benefits. Thus, new subjective descriptions for moderate and vigorous intensity may be warranted to aid individuals in correctly interpreting PA intensities.
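
The %HRmax bands quoted above imply a simple classification rule. A minimal sketch, using the band edges from the abstract (50-63% light, 64-76% moderate, 77-93% vigorous); the function name and band labels are ours, not the authors':

```python
# Sketch: map a measured %HRmax value to the study's target intensity bands.
# Band edges (50-63 light, 64-76 moderate, 77-93 vigorous) are from the abstract.

def intensity_band(pct_hrmax):
    """Classify a %HRmax value against the study's target ranges."""
    if pct_hrmax < 50:
        return "below light"
    if pct_hrmax <= 63:
        return "light"
    if pct_hrmax <= 76:
        return "moderate"
    if pct_hrmax <= 93:
        return "vigorous"
    return "above vigorous"

# Mean efforts reported in the abstract: intended moderate and vigorous
# walks landed one band lower, i.e. intensity was underestimated.
print(intensity_band(58.7))  # intended "moderate" effort
print(intensity_band(69.9))  # intended "vigorous" effort
```

Running the reported means through the rule shows the underestimation directly: the intended moderate effort (58.7%HRmax) falls in the light band and the intended vigorous effort (69.9%HRmax) in the moderate band.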

  10. Nuclear power plant cost underestimation: mechanisms and corrections

    International Nuclear Information System (INIS)

    Meyer, M.B.

    1984-01-01

    Criticisms of inaccurate nuclear power plant cost estimates have commonly focused upon what factors have caused actual costs to increase, and not upon the engineering cost-estimate methodology itself. This article describes two major sources of cost underestimation and suggests corrections for each which can be applied while retaining the traditional engineering methodology in general.

  11. Drastic underestimation of amphipod biodiversity in the endangered Irano-Anatolian and Caucasus biodiversity hotspots.

    Science.gov (United States)

    Katouzian, Ahmad-Reza; Sari, Alireza; Macher, Jan N; Weiss, Martina; Saboori, Alireza; Leese, Florian; Weigand, Alexander M

    2016-03-01

    Biodiversity hotspots are centers of biological diversity and particularly threatened by anthropogenic activities. Their true magnitude of species diversity and endemism, however, is still largely unknown as species diversity is traditionally assessed using morphological descriptions only, thereby ignoring cryptic species. This directly limits evidence-based monitoring and management strategies. Here we used molecular species delimitation methods to quantify cryptic diversity of the montane amphipods in the Irano-Anatolian and Caucasus biodiversity hotspots. Amphipods are ecosystem engineers in rivers and lakes. Species diversity was assessed by analysing two genetic markers (mitochondrial COI and nuclear 28S rDNA), compared with morphological assignments. Our results unambiguously demonstrate that species diversity and endemism is dramatically underestimated, with 42 genetically identified freshwater species in only five reported morphospecies. Over 90% of the newly recovered species cluster inside Gammarus komareki and G. lacustris; 69% of the recovered species comprise narrow range endemics. Amphipod biodiversity is drastically underestimated for the studied regions. Thus, the risk of biodiversity loss is significantly greater than currently inferred as most endangered species remain unrecognized and/or are only found locally. Integrative application of genetic assessments in monitoring programs will help to understand the true magnitude of biodiversity and accurately evaluate its threat status.

  12. Confounding environmental colour and distribution shape leads to underestimation of population extinction risk.

    Directory of Open Access Journals (Sweden)

    Mike S Fowler

    Full Text Available The colour of environmental variability influences the size of population fluctuations when filtered through density dependent dynamics, driving extinction risk through dynamical resonance. Slow fluctuations (low frequencies) dominate in red environments, rapid fluctuations (high frequencies) in blue environments and white environments are purely random (no frequencies dominate). Two methods are commonly employed to generate the coloured spatial and/or temporal stochastic (environmental) series used in combination with population (dynamical feedback) models: autoregressive [AR(1)] and sinusoidal (1/f) models. We show that changing environmental colour from white to red with 1/f models, and from white to red or blue with AR(1) models, generates coloured environmental series that are not normally distributed at finite time-scales, potentially confounding comparison with normally distributed white noise models. Increasing variability of sample Skewness and Kurtosis and decreasing mean Kurtosis of these series alter the frequency distribution shape of the realised values of the coloured stochastic processes. These changes in distribution shape alter patterns in the probability of single and series of extreme conditions. We show that the reduced extinction risk for undercompensating (slow growing) populations in red environments previously predicted with traditional 1/f methods is an artefact of changes in the distribution shapes of the environmental series. This is demonstrated by comparison with coloured series controlled to be normally distributed using spectral mimicry. Changes in the distribution shape that arise using traditional methods lead to underestimation of extinction risk in normally distributed, red 1/f environments. AR(1) methods also underestimate extinction risks in traditionally generated red environments. This work synthesises previous results and provides further insight into the processes driving extinction risk in model populations.
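
The AR(1) construction discussed above can be sketched in a few lines, together with the sample skewness and kurtosis that the authors track. This is a minimal stdlib-only illustration; the sqrt(1 - kappa^2) scaling is one common convention for keeping the stationary variance at 1, not necessarily the authors' exact implementation.

```python
# Sketch: generate an AR(1) (reddened) environmental series and inspect the
# sample skewness and kurtosis of a short realisation. Stdlib only.
import math
import random

def ar1_series(n, kappa, rng):
    """AR(1): x[t] = kappa*x[t-1] + e[t]; 0 < kappa < 1 reddens the spectrum.
    Scaling the innovations by sqrt(1 - kappa^2) keeps the variance at 1."""
    x, out = 0.0, []
    for _ in range(n):
        x = kappa * x + math.sqrt(1 - kappa**2) * rng.gauss(0, 1)
        out.append(x)
    return out

def skew_kurt(xs):
    """Sample skewness and (non-excess) kurtosis; a Gaussian has (0, ~3)."""
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((v - m)**2 for v in xs) / n
    m3 = sum((v - m)**3 for v in xs) / n
    m4 = sum((v - m)**4 for v in xs) / n
    return m3 / m2**1.5, m4 / m2**2

rng = random.Random(42)
red = ar1_series(100, 0.9, rng)  # a short, strongly reddened series
print(skew_kurt(red))
```

Repeating this over many short realisations shows the point of the abstract: the spread of sample skewness and kurtosis grows as kappa approaches 1, so short red series can depart markedly from a normal distribution even though the innovations are Gaussian.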

  13. Did the Stern Review underestimate US and global climate damages?

    International Nuclear Information System (INIS)

    Ackerman, Frank; Stanton, Elizabeth A.; Hope, Chris; Alberth, Stephane

    2009-01-01

    The Stern Review received widespread attention for its innovative approach to the economics of climate change when it appeared in 2006, and generated controversies that have continued to this day. One key controversy concerns the magnitude of the expected impacts of climate change. Stern's estimates, based on results from the PAGE2002 model, seemed substantially greater than those produced by many other models, leading several critics to suggest that Stern had inflated his damage figures. We reached the opposite conclusion in a recent application of PAGE2002 in a study of the costs to the US economy of inaction on climate change. This article describes our revisions to the PAGE estimates, and explains our conclusion that the model runs used in the Stern Review may well underestimate US and global damages. Stern's estimates from PAGE2002 implied that mean business-as-usual damages in 2100 would represent just 0.4 percent of GDP for the United States and 2.2 percent of GDP for the world. Our revisions and reinterpretation of the PAGE model imply that climate damages in 2100 could reach 2.6 percent of GDP for the United States and 10.8 percent for the world.

  14. Is hyperthyroidism underestimated in pregnancy and misdiagnosed as hyperemesis gravidarum?

    Science.gov (United States)

    Luetic, Ana Tikvica; Miskovic, Berivoj

    2010-10-01

    Thyroid changes are considered to be normal events that occur as part of the large maternal multiorgan adjustment to pregnancy. However, hyperthyroidism occurs in pregnancy with a clinical presentation similar to hyperemesis gravidarum (HG) and to pregnancy itself. Moreover, 10% of women with HG will continue to have symptoms throughout the pregnancy, suggesting that the underlying cause might not be the elevation of human chorionic gonadotropin in the first trimester. The variable frequency of both hyperthyroidism and HG worldwide might reflect confusion over the inclusion criteria for both diagnoses, compounded by the alteration of thyroid hormone levels in normal pregnancy. The increased incidence of hyperthyroidism among women without the expected rise in gestational hyperthyroidism encouraged us to form the hypothesis that hyperthyroidism could be underestimated in normal pregnancy and even misdiagnosed as HG. This hypothesis, if confirmed, might have beneficial clinical implications, such as better detection of hyperthyroidism in pregnancy and the application of therapy when needed, with a reduction of maternal and fetal consequences. Copyright 2010 Elsevier Ltd. All rights reserved.

  15. Large proportions of overweight and obese children, as well as their parents, underestimate children's weight status across Europe. The ENERGY (EuropeaN Energy balance Research to prevent excessive weight Gain among Youth) project.

    Science.gov (United States)

    Manios, Yannis; Moschonis, George; Karatzi, Kalliopi; Androutsos, Odysseas; Chinapaw, Mai; Moreno, Luis A; Bere, Elling; Molnar, Denes; Jan, Natasha; Dössegger, Alain; De Bourdeaudhuij, Ilse; Singh, Amika; Brug, Johannes

    2015-08-01

    To investigate the magnitude and country-specific differences in underestimation of children's weight status by children and their parents in Europe and to further explore its associations with family characteristics and sociodemographic factors. Children's weight and height were objectively measured. Parental anthropometric and sociodemographic data were self-reported. Children and their parents were asked to comment on children's weight status based on five-point Likert-type scales, ranging from 'I am much too thin' to 'I am much too fat' (children) and 'My child's weight is way too little' to 'My child's weight is way too much' (parents). These data were combined with children's actual weight status, in order to assess underestimation of children's weight status by children themselves and by their parents, respectively. Chi-square tests and multilevel logistic regression analyses were conducted to examine the aims of the current study. Eight European countries participating in the ENERGY (EuropeaN Energy balance Research to prevent excessive weight Gain among Youth) project. A school-based survey among 6113 children aged 10-12 years and their parents. In the total sample, 42·9 % of overweight/obese children and 27·6 % of parents of overweight/obese children underestimated their and their children's weight status, respectively. A higher likelihood for this underestimation of weight status by children and their parents was observed in Eastern and Southern compared with Central/Northern countries. Overweight or obese parents (OR=1·81; 95 % CI 1·39, 2·35 and OR=1·78, 95 % CI 1·22, 2·60), parents of boys (OR=1·32; 95 % CI 1·05, 1·67) and children from overweight/obese (OR=1·60; 95 % CI 1·29, 1·98 and OR=1·76; 95 % CI 1·29, 2·41) or unemployed parents (OR=1·53; 95 % CI 1·22, 1·92) were more likely to underestimate children's weight status. Children of overweight or obese parents, those from Eastern and Southern Europe, boys, younger children and
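
The odds ratios with 95% confidence intervals quoted above (e.g. OR = 1.81 for overweight or obese parents) can be reproduced in form, though not in value, from a 2x2 table of counts. A sketch using the standard Woolf log-method interval; the counts below are invented for illustration, not the ENERGY data.

```python
# Sketch: odds ratio and Woolf 95% CI from a 2x2 table of counts.
# The counts are illustrative, not the ENERGY study's data.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a/b = exposed cases/non-cases, c/d = unexposed cases/non-cases."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# e.g. 60/40 underestimating vs not among "exposed" parents,
#      45/55 among the comparison group.
or_, lo, hi = odds_ratio_ci(60, 40, 45, 55)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```

An interval that excludes 1 (as here, and as for the ORs in the abstract) indicates an association at the 5% level.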

  16. Stress Underestimation and Mental Health Outcomes in Male Japanese Workers: a 1-Year Prospective Study.

    Science.gov (United States)

    Izawa, Shuhei; Nakamura-Taira, Nanako; Yamada, Kosuke Chris

    2016-12-01

    Being appropriately aware of the extent of stress experienced in daily life is essential in motivating stress management behaviours. Excessive stress underestimation obstructs this process, which is expected to exert adverse effects on health. We prospectively examined associations between stress underestimation and mental health outcomes in Japanese workers. Web-based surveys were conducted twice with an interval of 1 year on 2359 Japanese male workers. Participants were asked to complete survey items concerning stress underestimation, depressive symptoms, sickness absence, and antidepressant use. Multiple logistic regression analysis revealed that high baseline levels of 'overgeneralization of stress' and 'insensitivity to stress' were significantly associated with new-onset depressive symptoms (OR = 2.66 [95% CI, 1.54-4.59]). These results suggest that stress underestimation, including stress insensitivity and the overgeneralization of stress, could exert adverse effects on mental health.

  17. Poverty Underestimation in Rural India- A Critique

    OpenAIRE

    Sivakumar, Marimuthu; Sarvalingam, A

    2010-01-01

    Whenever the Planning Commission of India releases poverty data, the data are criticised by experts and economists. The main criticism is the underestimation of poverty, especially in rural India, by the Planning Commission. This paper focuses on that criticism and compares the Indian Planning Commission’s 2004-05 rural poverty data with India’s 2400 kcal poverty norms, the World Bank’s US $1.08 poverty concept and the Asian Development Bank’s US $1.35 poverty concept.

  18. Focusing on fast food restaurants alone underestimates the relationship between neighborhood deprivation and exposure to fast food in a large rural area

    OpenAIRE

    Sharkey, Joseph R; Johnson, Cassandra M; Dean, Wesley R; Horel, Scott A

    2011-01-01

    Background Individuals and families are relying more on food prepared outside the home as a source for at-home and away-from-home consumption. Restricting the estimation of fast-food access to fast-food restaurants alone may underestimate potential spatial access to fast food. Methods The study used data from the 2006 Brazos Valley Food Environment Project (BVFEP) and the 2000 U.S. Census Summary File 3 for six rural counties in the Texas Brazos Valley region. BVFEP ground-truthed da...

  19. BMI may underestimate the socioeconomic gradient in true obesity

    NARCIS (Netherlands)

    van den Berg, G.; van Eijsden, M.; Vrijkotte, T. G. M.; Gemke, R. J. B. J.

    2013-01-01

    Body mass index (BMI) does not make a distinction between fat mass and lean mass. In children, high fat mass appears to be associated with low maternal education, as well as low lean mass because maternal education is associated with physical activity. Therefore, BMI might underestimate true obesity

  20. Terrestrial pesticide exposure of amphibians: an underestimated cause of global decline?

    Science.gov (United States)

    Brühl, Carsten A; Schmidt, Thomas; Pieper, Silvia; Alscher, Annika

    2013-01-01

    Amphibians, a class of animals in global decline, are present in agricultural landscapes characterized by agrochemical inputs. The effects of pesticides on terrestrial life stages of amphibians such as juvenile and adult frogs, toads and newts are little understood, and a specific risk assessment for pesticide exposure, mandatory for other vertebrate groups, is currently not conducted. We studied the effects of seven pesticide products on juvenile European common frogs (Rana temporaria) in an agricultural overspray scenario. Mortality ranged from 100% after one hour to 40% after seven days at the recommended label rate of currently registered products. The demonstrated toxicity is alarming, and a large-scale negative effect of terrestrial pesticide exposure on amphibian populations seems likely. Terrestrial pesticide exposure might be underestimated as a driver of their decline, calling for more attention in conservation efforts; the risk assessment procedures in place do not protect this vanishing animal group.

  1. A Poisson regression approach to model monthly hail occurrence in Northern Switzerland using large-scale environmental variables

    Science.gov (United States)

    Madonna, Erica; Ginsbourger, David; Martius, Olivia

    2018-05-01

    In Switzerland, hail regularly causes substantial damage to agriculture, cars and infrastructure; however, little is known about its long-term variability. To study this variability, the monthly number of days with hail in northern Switzerland is modeled in a regression framework using large-scale predictors derived from the ERA-Interim reanalysis. The model is developed and verified using radar-based hail observations for the extended summer season (April-September) in the period 2002-2014. The seasonality of hail is explicitly modeled with a categorical predictor (month), and monthly anomalies of several large-scale predictors are used to capture the year-to-year variability. Several regression models are applied and their performance tested with respect to standard scores and cross-validation. The chosen model includes four predictors: the monthly anomaly of the two-meter temperature, the monthly anomaly of the logarithm of the convective available potential energy (CAPE), the monthly anomaly of the wind shear, and the month. This model captures the intra-annual variability well and slightly underestimates the inter-annual variability. The regression model is applied to the reanalysis data back to 1980. The resulting hail-day time series shows an increase in the number of hail days per month, which is (in the model) related to an increase in temperature and CAPE. The trend corresponds to approximately 0.5 days per month per decade. The results of the regression model have been compared to two independent data sets. All data sets agree on the sign of the trend, but the trend is weaker in the other data sets.
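
The regression framework above is a Poisson model with a log link. A minimal sketch of that model family, fitted by Newton's method on synthetic data; the single standardized predictor standing in for a monthly large-scale anomaly (e.g. CAPE) is our simplification of the study's four-predictor model.

```python
# Sketch: Poisson regression (log link) fitted by Newton's method, the model
# family used for the monthly hail-day counts. One synthetic predictor
# stands in for a large-scale anomaly; stdlib only.
import math
import random

def fit_poisson(xs, ys, iters=25):
    """Fit log(lambda) = b0 + b1*x by Newton's method (2x2 solve)."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            mu = math.exp(b0 + b1 * x)
            g0 += y - mu                    # gradient of the log-likelihood
            g1 += (y - mu) * x
            h00 += mu                       # (negative) Hessian entries
            h01 += mu * x
            h11 += mu * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (-h01 * g0 + h00 * g1) / det
    return b0, b1

rng = random.Random(7)

def rpois(lam):
    """Knuth's Poisson sampler."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

xs = [rng.gauss(0, 1) for _ in range(500)]          # predictor anomaly
ys = [rpois(math.exp(1.0 + 0.4 * x)) for x in xs]   # true b0=1.0, b1=0.4
b0, b1 = fit_poisson(xs, ys)
print(round(b0, 2), round(b1, 2))  # estimates should land near (1.0, 0.4)
```

In practice one would add the categorical month predictor and the remaining anomalies, exactly as the study does; a GLM library (e.g. statsmodels) replaces the hand-rolled Newton step.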

  2. Are We Underestimating Microplastic Contamination in Aquatic Environments?

    Science.gov (United States)

    Conkle, Jeremy L.; Báez Del Valle, Christian D.; Turner, Jeffrey W.

    2018-01-01

    Plastic debris, specifically microplastic in the aquatic environment, is an escalating environmental crisis. Efforts at national scales to reduce or ban microplastics in personal care products are starting to pay off, but this will not affect those materials already in the environment or those that result from unregulated products and materials. To better inform future microplastic research and mitigation efforts, this study (1) evaluates methods currently used to quantify microplastics in the environment and (2) characterizes the concentration and size distribution of microplastics in a variety of products. In this study, 50 published aquatic surveys were reviewed; most (~80%) only account for plastics ≥ 300 μm in diameter. In addition, we surveyed 770 personal care products to determine the occurrence, concentration and size distribution of polyethylene microbeads. Particle concentrations ranged from 1.9 to 71.9 mg g-1 of product, or 1649 to 31,266 particles g-1 of product. The large majority (> 95%) of particles in the products surveyed were smaller than the 300 μm minimum diameter, indicating that previous environmental surveys could be underestimating microplastic contamination. To account for smaller particles as well as microfibers from synthetic textiles, we strongly recommend that future surveys consider methods that capture materials < 300 μm in diameter.
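
The survey's central point, that a ≥ 300 μm sampling cutoff misses most particles, reduces to a simple fraction. A sketch; the particle diameters below are invented to mimic a small-skewed distribution like the one reported for the products surveyed.

```python
# Sketch: share of particles that a >= 300 um sampling cutoff would miss.
# The diameters (um) are illustrative, skewed small like the surveyed products.

def missed_fraction(diameters_um, cutoff_um=300):
    """Fraction of particles below the survey cutoff (i.e. not counted)."""
    small = sum(1 for d in diameters_um if d < cutoff_um)
    return small / len(diameters_um)

particles = [40, 60, 75, 90, 120, 150, 180, 210, 250, 280, 310, 420]
print(missed_fraction(particles))  # 10 of 12 particles fall below 300 um
```

With a size distribution like the products' (> 95% below 300 μm), this fraction approaches 1, which is why the authors argue earlier surveys underestimate contamination.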

  3. An Ensemble Three-Dimensional Constrained Variational Analysis Method to Derive Large-Scale Forcing Data for Single-Column Models

    Science.gov (United States)

    Tang, Shuaiqi

    Atmospheric vertical velocities and advective tendencies are essential as large-scale forcing data to drive single-column models (SCM), cloud-resolving models (CRM) and large-eddy simulations (LES). They cannot be directly measured or easily calculated with great accuracy from field measurements. In the Atmospheric Radiation Measurement (ARM) program, a constrained variational algorithm (1DCVA) has been used to derive large-scale forcing data over a sounding network domain with the aid of flux measurements at the surface and top of the atmosphere (TOA). We extend the 1DCVA algorithm into three dimensions (3DCVA), along with other improvements, to calculate gridded large-scale forcing data. We also introduce an ensemble framework using different background data, error covariance matrices and constraint variables to quantify the uncertainties of the large-scale forcing data. The results of the sensitivity study show that the derived forcing data and SCM-simulated clouds are more sensitive to the background data than to the error covariance matrices and constraint variables, while horizontal moisture advection has relatively large sensitivities to the precipitation, the dominant constraint variable. Using a mid-latitude cyclone case study on March 3rd, 2000 at the ARM Southern Great Plains (SGP) site, we investigate the spatial distribution of diabatic heating sources (Q1) and moisture sinks (Q2), and show that they are consistent with the satellite clouds and the intuitive structure of the mid-latitude cyclone. We also evaluate the Q1 and Q2 in analysis/reanalysis, finding that the regional analyses/reanalyses all tend to underestimate the sub-grid-scale upward transport of moist static energy in the lower troposphere. With the uncertainties from large-scale forcing data and observations specified, we compare SCM results and observations and find that the models have large biases in cloud properties which could not be fully explained by the uncertainty from the large-scale forcing data.

  4. Social cure, what social cure? The propensity to underestimate the importance of social factors for health.

    Science.gov (United States)

    Haslam, S Alexander; McMahon, Charlotte; Cruwys, Tegan; Haslam, Catherine; Jetten, Jolanda; Steffens, Niklas K

    2018-02-01

    Recent meta-analytic research indicates that social support and social integration are highly protective against mortality, and that their importance is comparable to, or exceeds, that of many established behavioural risks such as smoking, high alcohol consumption, lack of exercise, and obesity that are the traditional focus of medical research (Holt-Lunstad et al., 2010). The present study examines perceptions of the contribution of these various factors to life expectancy within the community at large. American and British community respondents (N = 502) completed an on-line survey assessing the perceived importance of social and behavioural risk factors for mortality. As hypothesized, while respondents' perceptions of the importance of established behavioural risks were positively and highly correlated with their actual importance, social factors were seen as far less important for health than they actually are. As a result, overall, there was a small but significant negative correlation between the perceived benefits and the actual benefits of different social and behavioural factors. Men, younger participants, and participants with a lower level of education were more likely to underestimate the importance of social factors for health. There was also evidence that underestimation was predicted by a cluster of ideological factors, the most significant of which was respondents' respect for prevailing convention and authorities as captured by Right-Wing Authoritarianism. Findings suggest that while people generally underestimate the importance of social factors for health, this also varies as a function of demographic and ideological factors. They point to a range of challenges confronting those who seek to promote greater awareness of the importance of social factors for health. Copyright © 2017 Elsevier Ltd. All rights reserved.

  5. Premodelling of the importance of the location of the upstream hydraulic boundary of a regional flow model of the Laxemar-Simpevarp area. Site descriptive modelling SDM-Site Laxemar

    International Nuclear Information System (INIS)

    Holmen, Johan G.

    2008-03-01

    The location of the westernmost hydraulic boundary of a regional groundwater flow model representing the Laxemar investigation area is of importance as the regional flow of groundwater is primarily from the west towards the sea (as given by the regional topography). If the westernmost boundary condition of a regional flow model is located too close to the investigation area, the regional flow model may underestimate the magnitude of the regional groundwater flow (at the investigation area), as well as overestimate breakthrough times of flow paths from the repository area, etc. Groundwater flows have been calculated by use of two mathematical (numerical) models: A very large groundwater flow model, much larger than the regional flow model used in the Laxemar site description version 1.2, and a smaller flow model that is of a comparable size to the regional model used in the site description. The models are identical except for the different horizontal extensions of the models; the large model extends much further to the west than the small model. The westernmost lateral boundary of the small model is a topographic water divide approx. 7 km from the central parts of the Laxemar investigation area, and the westernmost lateral boundary of the large model is a topographic water divide approx. 40 km from the central parts of the Laxemar investigation area. In the models the lateral boundaries are defined as no-flow boundaries. The objective of the study is to calculate and compare the groundwater flow properties at a tentative repository area at Laxemar, using the large flow model and the small flow model. The comparisons include the following three parameters: - Length of flow paths from the tentative repository area. - Advective breakthrough time for flow paths from the tentative repository area. - Magnitude of flow at the tentative repository area. The comparisons demonstrated the following considering the median values of the obtained distributions of flow paths

  6. Assessment of Large Transport Infrastructure Projects: The CBA-DK Model

    DEFF Research Database (Denmark)

    Salling, Kim Bang; Banister, David

    2009-01-01

    use of both deterministic and stochastic based information. Decision support as illustrated in this paper aims to provide assistance in the development and ultimately the choice of action, while accounting for the uncertainties surrounding transport appraisal schemes. The modelling framework......This paper presents a newly developed decision support model to assess transport infrastructure projects: CBA-DK. The model combines use of conventional cost–benefit analysis to produce aggregated single point estimates, with quantitative risk analysis using Monte Carlo simulation to produce...... interval results. The embedded uncertainties within traditional CBA such as ex-ante based investment costs and travel time savings are of particular concern. The paper investigates these two impacts in terms of the Optimism Bias principle which is used to take account of the underestimation of construction...

  7. How and why DNA barcodes underestimate the diversity of microbial eukaryotes.

    Directory of Open Access Journals (Sweden)

    Gwenael Piganeau

    Full Text Available BACKGROUND: Because many picoplanktonic eukaryotic species cannot currently be maintained in culture, direct sequencing of PCR-amplified 18S ribosomal gene DNA fragments from filtered sea-water has been successfully used to investigate the astounding diversity of these organisms. The recognition of many novel planktonic organisms is thus based solely on their 18S rDNA sequence. However, a species delimited by its 18S rDNA sequence might contain many cryptic species, which are highly differentiated in their protein coding sequences. PRINCIPAL FINDINGS: Here, we investigate the issue of species identification from one gene to the whole genome sequence. Using 52 whole genome DNA sequences, we estimated the global genetic divergence in protein coding genes between organisms from different lineages and compared this to their ribosomal gene sequence divergences. We show that this relationship between proteome divergence and 18S divergence is lineage dependent. Unicellular lineages have especially low 18S divergences relative to their protein sequence divergences, suggesting that 18S ribosomal genes are too conservative to assess planktonic eukaryotic diversity. We provide an explanation for this lineage dependency, which suggests that most species with large effective population sizes will show far less divergence in 18S than protein coding sequences. CONCLUSIONS: There is therefore a trade-off between using genes that are easy to amplify in all species, but which by their nature are highly conserved and underestimate the true number of species, and using genes that give a better description of the number of species, but which are more difficult to amplify. We have shown that this trade-off differs between unicellular and multicellular organisms as a likely consequence of differences in effective population sizes. We anticipate that biodiversity of microbial eukaryotic species is underestimated and that numerous "cryptic species" will become

  8. College Students' Underestimation of Blood Alcohol Concentration from Hypothetical Consumption of Supersized Alcopops: Results from a Cluster-Randomized Classroom Study.

    Science.gov (United States)

    Rossheim, Matthew E; Thombs, Dennis L; Krall, Jenna R; Jernigan, David H

    2018-05-30

    Supersized alcopops are a class of single-serving beverages popular among underage drinkers. These products contain large quantities of alcohol. This study examines the extent to which young adults recognize how intoxicated they would become from consuming these products. The study sample included 309 undergraduates who had consumed alcohol within the past year. Thirty-two sections of a college English course were randomized to 1 of 2 survey conditions, based on hypothetical consumption of supersized alcopops or beer of comparable liquid volume. Students were provided an empty can of 1 of the 2 beverages to help them answer the survey questions. Equation-calculated blood alcohol concentrations (BACs), based on body weight and sex, were compared to the students' self-estimated BACs for consuming 1, 2, and 3 cans of the beverage provided to them. In adjusted regression models, students randomized to the supersized alcopop group greatly underestimated their BAC, whereas students randomized to the beer group overestimated it. The supersized alcopop group underestimated their BAC by 0.04 (95% confidence interval [CI]: 0.034, 0.053), 0.09 (95% CI: 0.067, 0.107), and 0.13 g/dl (95% CI: 0.097, 0.163) compared to the beer group. When asked how much alcohol they could consume before it would be unsafe to drive, students in the supersized alcopop group had 7 times the odds of estimating consumption that would generate a calculated BAC of at least 0.08 g/dl, compared to those making estimates based on beer consumption (95% CI: 3.734, 13.025). Students underestimated the intoxication they would experience from consuming supersized alcopops. Revised product warning labels are urgently needed to clearly identify the number of standard drinks contained in a supersized alcopop can. Moreover, regulations are needed to limit the alcohol content of single-serving products. Copyright © 2018 by the Research Society on Alcoholism.
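    The "equation-calculated BACs" in this study are typically obtained with a Widmark-style formula. The sketch below is a hypothetical illustration of that kind of calculation, not the authors' exact model; the distribution ratios, the elimination rate, and the ~67 g ethanol content assumed for one supersized alcopop can are all illustrative assumptions.

```python
def estimate_bac(alcohol_grams, weight_kg, sex, hours=0.0):
    """Peak BAC (g/dL) via the classic Widmark formula.

    r: Widmark distribution ratio (assumed 0.68 for males, 0.55 for females).
    An assumed elimination rate of 0.015 g/dL per hour is subtracted for
    elapsed drinking time.
    """
    r = 0.68 if sex == "male" else 0.55
    bac = alcohol_grams / (weight_kg * 1000 * r) * 100  # fraction -> g/dL
    return max(0.0, bac - 0.015 * hours)

# A 24-oz can at ~12% ABV holds roughly 67 g of ethanol
# (24 oz ~ 710 mL; 710 * 0.12 * 0.789 ~ 67 g), i.e. about 4.7 standard drinks.
bac = estimate_bac(67.0, 70.0, "male")
```

    With these assumed values, a 70 kg male finishing a single can would reach a peak BAC of roughly 0.14 g/dL, well above the 0.08 g/dl threshold the study references.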

  9. Focusing on fast food restaurants alone underestimates the relationship between neighborhood deprivation and exposure to fast food in a large rural area.

    Science.gov (United States)

    Sharkey, Joseph R; Johnson, Cassandra M; Dean, Wesley R; Horel, Scott A

    2011-01-25

    Individuals and families are relying more on food prepared outside the home as a source for at-home and away-from-home consumption. Restricting the estimation of fast-food access to fast-food restaurants alone may underestimate potential spatial access to fast food. The study used data from the 2006 Brazos Valley Food Environment Project (BVFEP) and the 2000 U.S. Census Summary File 3 for six rural counties in the Texas Brazos Valley region. BVFEP ground-truthed data included identification and geocoding of all fast-food restaurants, convenience stores, supermarkets, and grocery stores in the study area and on-site assessment of the availability and variety of fast-food lunch/dinner entrées and side dishes. Network distance was calculated from the population-weighted centroid of each census block group to all retail locations that marketed fast food (n = 205 fast-food opportunities). Spatial access to fast-food opportunities (FFO) was significantly better than to traditional fast-food restaurants (FFR). The median distance to the nearest FFO was 2.7 miles, compared with 4.5 miles to the nearest FFR. Residents of high deprivation neighborhoods had better spatial access to a variety of healthier fast-food entrée and side dish options than residents of low deprivation neighborhoods. Our analyses revealed that identifying fast-food restaurants as the sole source of fast-food entrées and side dishes underestimated neighborhood exposure to fast food, in terms of both neighborhood proximity and coverage. Potential interventions must consider all retail opportunities for fast food, and not just traditional FFR.

  10. Uterine radiation dose from open sources: The potential for underestimation

    International Nuclear Information System (INIS)

    Cox, P.H.; Klijn, J.G.M.; Pillay, M.; Bontebal, M.; Schoenfeld, D.H.W.

    1990-01-01

    Recent observations on the biodistribution of a therapeutic dose of sodium iodide I 131 in a patient with an unsuspected early pregnancy led us to suspect that current dose estimates with respect to uterine exposure (ARSAC 1988) may seriously underestimate the actual exposure of the developing foetus. (orig.)

  11. Combining satellite radar altimetry, SAR surface soil moisture and GRACE total storage changes for hydrological model calibration in a large poorly gauged catchment

    DEFF Research Database (Denmark)

    Milzow, Christian; Krogh, Pernille Engelbredt; Bauer-Gottwein, Peter

    2011-01-01

    The availability of data is a major challenge for hydrological modelling in large parts of the world. Remote sensing data can be exploited to improve models of ungauged or poorly gauged catchments. In this study we combine three datasets for calibration of a rainfall-runoff model of the poorly...... gauged Okavango catchment in Southern Africa: (i) surface soil moisture (SSM) estimates derived from radar measurements onboard the Envisat satellite; (ii) radar altimetry measurements by Envisat providing river stages in the tributaries of the Okavango catchment, down to a minimum river width of about...... one hundred meters; and (iii) temporal changes of the Earth's gravity field recorded by the Gravity Recovery and Climate Experiment (GRACE) caused by total water storage changes in the catchment. The SSM data are shown to be helpful in identifying periods with over- or underestimation

  12. Focusing on fast food restaurants alone underestimates the relationship between neighborhood deprivation and exposure to fast food in a large rural area

    Directory of Open Access Journals (Sweden)

    Dean Wesley R

    2011-01-01

    Full Text Available Abstract Background Individuals and families are relying more on food prepared outside the home as a source for at-home and away-from-home consumption. Restricting the estimation of fast-food access to fast-food restaurants alone may underestimate potential spatial access to fast food. Methods The study used data from the 2006 Brazos Valley Food Environment Project (BVFEP) and the 2000 U.S. Census Summary File 3 for six rural counties in the Texas Brazos Valley region. BVFEP ground-truthed data included identification and geocoding of all fast-food restaurants, convenience stores, supermarkets, and grocery stores in the study area and on-site assessment of the availability and variety of fast-food lunch/dinner entrées and side dishes. Network distance was calculated from the population-weighted centroid of each census block group to all retail locations that marketed fast food (n = 205 fast-food opportunities). Results Spatial access to fast-food opportunities (FFO) was significantly better than to traditional fast-food restaurants (FFR). The median distance to the nearest FFO was 2.7 miles, compared with 4.5 miles to the nearest FFR. Residents of high deprivation neighborhoods had better spatial access to a variety of healthier fast-food entrée and side dish options than residents of low deprivation neighborhoods. Conclusions Our analyses revealed that identifying fast-food restaurants as the sole source of fast-food entrées and side dishes underestimated neighborhood exposure to fast food, in terms of both neighborhood proximity and coverage. Potential interventions must consider all retail opportunities for fast food, and not just traditional FFR.

  13. Radiographic Underestimation of In Vivo Cup Coverage Provided by Total Hip Arthroplasty for Dysplasia.

    Science.gov (United States)

    Nie, Yong; Wang, HaoYang; Huang, ZeYu; Shen, Bin; Kraus, Virginia Byers; Zhou, Zongke

    2018-01-01

    The accuracy of using 2-dimensional anteroposterior pelvic radiography to assess acetabular cup coverage among patients with developmental dysplasia of the hip after total hip arthroplasty (THA) remains unclear in retrospective clinical studies. A group of 20 patients with developmental dysplasia of the hip (20 hips) underwent cementless THA. During surgery but after acetabular reconstruction, bone wax was pressed onto the uncovered surface of the acetabular cup. A surface model of the bone wax was generated with 3-dimensional scanning. The percentage of the acetabular cup that was covered by intact host acetabular bone in vivo was calculated with modeling software. Acetabular cup coverage also was determined from a postoperative supine anteroposterior pelvic radiograph. The height of the hip center (distance from the center of the femoral head perpendicular to the inter-teardrop line) also was determined from radiographs. Radiographic cup coverage was a mean of 6.93% (SD, 2.47%) lower than in vivo cup coverage for these 20 patients with developmental dysplasia of the hip (P<.001). The underestimation was positively correlated with in vivo cup coverage (Pearson r=0.761, P<.001). The size of the cup (P=.001) but not the position of the hip center (high vs normal) was significantly associated with the difference between radiographic and in vivo cup coverage. Two-dimensional radiographically determined cup coverage conservatively reflects in vivo cup coverage and remains an important index (taking 7% underestimation errors and the effect of greater underestimation of larger cup size into account) for assessing the stability of the cup and monitoring for adequate ingrowth of bone. [Orthopedics. 2018; 41(1):e46-e51.]. Copyright 2017, SLACK Incorporated.

  14. Completeness and underestimation of cancer mortality rate in Iran: a report from Fars Province in southern Iran.

    Science.gov (United States)

    Marzban, Maryam; Haghdoost, Ali-Akbar; Dortaj, Eshagh; Bahrampour, Abbas; Zendehdel, Kazem

    2015-03-01

    The incidence and mortality rates of cancer are increasing worldwide, particularly in the developing countries. Valid data are needed for measuring the cancer burden and making appropriate decisions toward cancer control. We evaluated the completeness of death registry with regard to cancer death in Fars Province, I. R. of Iran. We used data from three sources in Fars Province, including the national death registry (source 1), the follow-up data from the pathology-based cancer registry (source 2) and hospital based records (source 3) during 2004 - 2006. We used the capture-recapture method and estimated underestimation and the true age standardized mortality rate (ASMR) for cancer. We used log-linear (LL) modeling for statistical analysis. We observed 1941, 480, and 355 cancer deaths in sources 1, 2 and 3, respectively. After data linkage, we estimated that mortality registry had about 40% underestimation for cancer death. After adjustment for this underestimation rate, the ASMR of cancer in the Fars Province for all cancer types increased from 44.8 per 100,000 (95% CI: 42.8 - 46.7) to 76.3 per 100,000 (95% CI: 73.3 - 78.9), accounting for 3309 (95% CI: 3151 - 3293) cancer deaths annually. The mortality rate of cancer is considerably higher than the rates reported by the routine registry in Iran. Improvement in the validity and completeness of the mortality registry is needed to estimate the true mortality rate caused by cancer in Iran.
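    The capture-recapture logic behind this completeness estimate can be illustrated with the simplest two-source case. The study itself links three sources with log-linear models; the sketch below instead uses Chapman's two-source estimator, and the overlap count `m` is an assumed figure chosen only for illustration (the source counts 1941 and 480 are from the abstract).

```python
def chapman_estimate(n1, n2, m):
    """Chapman's nearly unbiased two-source capture-recapture estimator.

    n1, n2: cases found by each source; m: cases found by both sources.
    Returns the estimated true number of cases in the population.
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Illustrative numbers only: registry counts from the abstract, with an
# assumed overlap of 290 deaths appearing in both sources.
n_true = chapman_estimate(1941, 480, m=290)
completeness = 1941 / n_true  # share of deaths captured by source 1
```

    With this assumed overlap, the death registry captures about 60% of the estimated cancer deaths, i.e. roughly the 40% underestimation the study reports; the actual three-source log-linear analysis refines this by modeling dependence between sources.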

  15. Misery Has More Company Than People Think: Underestimating the Prevalence of Others’ Negative Emotions

    Science.gov (United States)

    Jordan, Alexander H.; Monin, Benoît; Dweck, Carol S.; Lovett, Benjamin J.; John, Oliver P.; Gross, James J.

    2014-01-01

    Four studies document underestimations of the prevalence of others’ negative emotions, and suggest causes and correlates of these erroneous perceptions. In Study 1A, participants reported that their negative emotions were more private or hidden than their positive emotions; in Study 1B, participants underestimated the peer prevalence of common negative, but not positive, experiences described in Study 1A. In Study 2, people underestimated negative emotions and overestimated positive emotions even for well-known peers, and this effect was partially mediated by the degree to which those peers reported suppression of negative (vs. positive) emotions. Study 3 showed that lower estimations of the prevalence of negative emotional experiences predicted greater loneliness and rumination and lower life satisfaction, and that higher estimations for positive emotional experiences predicted lower life satisfaction. Taken together, these studies suggest that people may think they are more alone in their emotional difficulties than they really are. PMID:21177878

  16. Consumer underestimation of sodium in fast food restaurant meals: Results from a cross-sectional observational study.

    Science.gov (United States)

    Moran, Alyssa J; Ramirez, Maricelle; Block, Jason P

    2017-06-01

    Restaurants are key venues for reducing sodium intake in the U.S. but little is known about consumer perceptions of sodium in restaurant foods. This study quantifies the difference between estimated and actual sodium content of restaurant meals and examines predictors of underestimation in adult and adolescent diners at fast food restaurants. In 2013 and 2014, meal receipts and questionnaires were collected from adults and adolescents dining at six restaurant chains in four New England cities. The sample included 993 adults surveyed during 229 dinnertime visits to 44 restaurants and 794 adolescents surveyed during 298 visits to 49 restaurants after school or at lunchtime. Diners were asked to estimate the amount of sodium (mg) in the meal they had just purchased. Sodium estimates were compared with actual sodium in the meal, calculated by matching all items that the respondent purchased for personal consumption to sodium information on chain restaurant websites. Mean (SD) actual sodium (mg) content of meals was 1292 (970) for adults and 1128 (891) for adolescents. One-quarter of diners (176 (23%) adults, 155 (25%) adolescents) were unable or unwilling to provide estimates of the sodium content of their meals. Of those who provided estimates, 90% of adults and 88% of adolescents underestimated sodium in their meals, with adults underestimating sodium by a mean (SD) of 1013 mg (1,055) and adolescents underestimating by 876 mg (1,021). Respondents underestimated sodium content more for meals with greater sodium content. Education about sodium at point-of-purchase, such as provision of sodium information on restaurant menu boards, may help correct consumer underestimation, particularly for meals of high sodium content. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Modeling Student Motivation and Students’ Ability Estimates From a Large-Scale Assessment of Mathematics

    Directory of Open Access Journals (Sweden)

    Carlos Zerpa

    2011-09-01

    Full Text Available When large-scale assessments (LSA) do not hold personal stakes for students, students may not put forth their best effort. Low-effort examinee behaviors (e.g., guessing, omitting items) result in an underestimate of examinee abilities, which is a concern when using results of LSA to inform educational policy and planning. The purpose of this study was to explore the relationship between examinee motivation as defined by expectancy-value theory, student effort, and examinee mathematics abilities. A principal components analysis was used to examine the data from Grade 9 students (n = 43,562) who responded to a self-report questionnaire on their attitudes and practices related to mathematics. The results suggested a two-component model where the components were interpreted as task-values in mathematics and student effort. Next, a hierarchical linear model was implemented to examine the relationship between examinee component scores and their estimated ability on a LSA. The results of this study provide evidence that motivation, as defined by the expectancy-value theory, and student effort partially explain student ability estimates and may have implications for the information that is transferred to testing organizations, school boards, and teachers when assessing students' Grade 9 mathematics learning.

  18. A Bayesian model to correct underestimated 3-D wind speeds from sonic anemometers increases turbulent components of the surface energy balance

    Science.gov (United States)

    John M. Frank; William J. Massman; Brent E. Ewers

    2016-01-01

    Sonic anemometers are the principal instruments in micrometeorological studies of turbulence and ecosystem fluxes. Common designs underestimate vertical wind measurements because they lack a correction for transducer shadowing, with no consensus on a suitable correction. We reanalyze a subset of data collected during field experiments in 2011 and 2013 featuring two or...

  19. Development of fine-resolution analyses and expanded large-scale forcing properties: 2. Scale awareness and application to single-column model experiments

    Science.gov (United States)

    Feng, Sha; Li, Zhijin; Liu, Yangang; Lin, Wuyin; Zhang, Minghua; Toto, Tami; Vogelmann, Andrew M.; Endo, Satoshi

    2015-01-01

    Fine-resolution three-dimensional fields have been produced using the Community Gridpoint Statistical Interpolation (GSI) data assimilation system for the U.S. Department of Energy's Atmospheric Radiation Measurement Program (ARM) Southern Great Plains region. The GSI system is implemented in a multiscale data assimilation framework using the Weather Research and Forecasting model at a cloud-resolving resolution of 2 km. From the fine-resolution three-dimensional fields, large-scale forcing is derived explicitly at grid-scale resolution; a subgrid-scale dynamic component is derived separately, representing subgrid-scale horizontal dynamic processes. Analyses show that the subgrid-scale dynamic component is often a major component of the large-scale forcing for grid scales larger than 200 km. The single-column model (SCM) of the Community Atmospheric Model version 5 is used to examine the impact of the grid-scale and subgrid-scale dynamic components on simulated precipitation and cloud fields associated with a mesoscale convective system. It is found that grid size impacts simulated precipitation, resulting in an overestimation for grid scales of about 200 km but an underestimation for smaller grids. The subgrid-scale dynamic component has an appreciable impact on the simulations, suggesting that grid-scale and subgrid-scale dynamic components should both be considered in the interpretation of SCM simulations.

  20. Childhood leukaemia and low-level radiation - are we underestimating the risk?

    International Nuclear Information System (INIS)

    Wakeford, R.

    1996-01-01

    The Seascale childhood leukaemia 'cluster' can be interpreted as indicating that the risk of childhood leukaemia arising from low-level exposure to ionising radiation has been underestimated. Indeed, several variants of such an interpretation have been advanced. These include exposure to particular radionuclides, an underestimation of the radiation risk coefficient for childhood leukaemia, and the existence of a previously unrecognized risk of childhood leukaemia from the preconceptional irradiation of fathers. However, the scientific assessment of epidemiological associations is a complex matter, and such associations must be interpreted with caution. It would now seem most likely that the Seascale 'cluster' does not represent an unanticipated effect of the exposure to ionising radiation, but rather the effect of unusual population mixing generated by the Sellafield site, which has produced an increase in the infection-based risk of childhood leukaemia. This episode in the history of epidemiological research provides a timely reminder of the need for great care in the interpretation of novel statistical associations. (author)

  1. Underestimated Rate of Status Epilepticus according to the Traditional Definition of Status Epilepticus.

    Science.gov (United States)

    Ong, Cheung-Ter; Wong, Yi-Sin; Sung, Sheng-Feng; Wu, Chi-Shun; Hsu, Yung-Chu; Su, Yu-Hsiang; Hung, Ling-Chien

    2015-01-01

    Status epilepticus (SE) is an important neurological emergency. Early diagnosis could improve outcomes. Traditionally, SE is defined as seizures lasting at least 30 min or repeated seizures over 30 min without recovery of consciousness. Some specialists argued that the duration of seizures qualifying as SE should be shorter, and an operational definition of SE was suggested. It is unclear whether physicians follow the operational definition. The objective of this study was to investigate whether the incidence of SE was underestimated and to quantify the underestimation rate. This retrospective study evaluates the difference in the diagnosis of SE between the operational and traditional definitions of status epilepticus. Between July 1, 2012, and June 30, 2014, patients discharged with ICD-9 codes for epilepsy (345.X) from Chia-Yi Christian Hospital were included in the study. A seizure lasting at least 30 min, or repeated seizures over 30 min without recovery of consciousness, was considered SE according to the traditional definition of SE (TDSE). A seizure lasting between 5 and 30 min was considered SE according to the operational definition of SE (ODSE); this was defined as underestimated status epilepticus (UESE). During the 2-year period, there were 256 episodes of seizures requiring hospital admission. Among the 256 episodes, 99 lasted longer than 5 min, of which 61 (61.6%) persisted over 30 min (TDSE) and 38 (38.4%) continued for between 5 and 30 min (UESE). Of the 38 episodes of seizure lasting 5 to 30 minutes, only one episode had previously been discharged as SE (ICD-9-CM 345.3). In conclusion, we underestimated 37.4% of SE. Continuing education regarding the diagnosis and treatment of epilepsy is important for physicians.

  2. Systematic underestimation of the age of samples with saturating exponential behaviour and inhomogeneous dose distribution

    International Nuclear Information System (INIS)

    Brennan, B.J.

    2000-01-01

    In luminescence and ESR studies, a systematic underestimate of the (average) equivalent dose, and thus also the age, of a sample can occur when there is significant variation of the natural dose within the sample and some regions approach saturation. This is demonstrated explicitly for a material that exhibits single-saturating-exponential growth of signal with dose. The result is valid for any geometry (e.g. a plain layer, spherical grain, etc.) and some illustrative cases are modelled, with the age bias exceeding 10% in extreme cases. If the dose distribution within the sample can be modelled accurately, it is possible to correct for the bias in the estimates of equivalent dose and age. While quantifying the effect would be more difficult, similar systematic biases in dose and age estimates are likely in other situations more complex than the one modelled
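    The mechanism behind this bias can be sketched numerically: because single-saturating-exponential growth is concave, averaging the signal over an inhomogeneously dosed sample and then inverting the growth curve yields less than the true mean dose (Jensen's inequality). The two-region dose field and the characteristic dose D0 below are illustrative choices, not values from the paper.

```python
import math

def signal(dose, d0=100.0):
    """Single-saturating-exponential growth S(D) = 1 - exp(-D/D0), S_max = 1."""
    return 1.0 - math.exp(-dose / d0)

def dose_from_signal(s, d0=100.0):
    """Exact inverse of signal(): the dose that a homogeneous sample would need."""
    return -d0 * math.log(1.0 - s)

# Two regions of the sample receive different doses; the true mean dose is 100.
doses = [50.0, 150.0]
mean_signal = sum(signal(d) for d in doses) / len(doses)
apparent_dose = dose_from_signal(mean_signal)  # what the averaged signal implies
bias = 1.0 - apparent_dose / 100.0             # fractional underestimate
```

    In this illustrative case the apparent dose comes out near 88 (arbitrary units) against a true mean of 100, a roughly 12% underestimate, consistent in magnitude with the >10% age bias quoted above for extreme cases.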

  3. Fishermen´s underestimation of risk

    DEFF Research Database (Denmark)

    Knudsen, Fabienne; Grøn, Sisse

    2009-01-01

    to stress the positive potential of risk. This can be explained by several, interrelated factors such as the nature of fishing, itself a risk-based enterprise; a life-form promoting independence and identification with the enterprise's pecuniary priorities; working conditions upholding a feeling......  Fishermen's underestimation of risk   Background: In order to understand the effect of footwear and flooring on slips, trips and falls, the 1st author visited 4 fishing boats.  An important spinoff of the study was to get an in situ insight into the way fishermen perceive risk.   Objectives: The presentation will analyse fishermen's risk perception, its causes and consequences.   Methods: The first author participated in 3 voyages at sea on fishing vessels (from 1 to 10 days each and from 2 to 4 crewmembers) where interviews and participant observation were undertaken. A 4th fishing boat was visited

  4. Sap flow is Underestimated by Thermal Dissipation Sensors due to Alterations of Wood Anatomy

    Science.gov (United States)

    Marañón-Jiménez, S.; Wiedemann, A.; van den Bulcke, J.; Cuntz, M.; Rebmann, C.; Steppe, K.

    2014-12-01

    The thermal dissipation technique (TD) is one of the most commonly adopted methods for sap flow measurements. However, underestimations of up to 60% of the tree transpiration have been reported with this technique, although the causes are not known with certainty. The insertion of TD sensors within the stem damages the wood tissue and triggers subsequent healing reactions, changing wood anatomy and likely the sap flow path. However, the anatomical changes in response to the insertion of sap flow sensors, and their effects on the measured flow, have not yet been assessed. In this study, we investigate the alteration of vessel anatomy in wounds formed around TD sensors. Our main objectives were to elucidate the anatomical causes of sap flow underestimation for ring-porous and diffuse-porous species, and to relate these changes to sap flow underestimations. Successive sets of TD probes were installed early, mid and late in the growing season in Fagus sylvatica (diffuse-porous) and Quercus petraea (ring-porous) trees. The trees were logged after the growing season, and additional sets of sensors were installed in the logged stems, with presumably no healing reaction. The wood tissue surrounding each sensor was then excised and analysed by X-ray computed microtomography (X-ray micro CT). This technique allowed the quantification of vessel anatomical characteristics and the reconstruction of the 3-D internal microstructure of the xylem vessels, so that the extension and shape of the altered area could be determined. Gels and tyloses clogged the conductive vessels around the sensors in both beech and oak. The extension of the affected area was larger for beech, although the anatomical changes led to similar sap flow underestimations in both species. The larger vessel size in oak may explain this result and, therefore, the larger sap flow underestimation per area of affected conductive tissue. The wound healing reaction likely occurred within the first weeks after sensor installation, which…

  5. Underestimated Halogen Bonds Forming with Protein Backbone in Protein Data Bank.

    Science.gov (United States)

    Zhang, Qian; Xu, Zhijian; Shi, Jiye; Zhu, Weiliang

    2017-07-24

    Halogen bonds (XBs) are attracting increasing attention in biological systems. The Protein Data Bank (PDB) archives experimentally determined XBs in biological macromolecules. However, no software for structure refinement in X-ray crystallography takes XBs into account, which might result in the weakening or even vanishing of experimentally determined XBs in the PDB. In our previous study, we showed that XBs forming with protein side chains are underestimated in the PDB, on the basis of the observation that the proportion of side-chain XBs to overall XBs decreases as structural resolution becomes lower. However, whether the dominant backbone XBs forming with the protein backbone are also overlooked remained an open question. Here, with the help of the ratio (RF) of the observed XBs' frequency of occurrence to their frequency expected at random, we demonstrate that backbone XBs are largely overlooked in the PDB, too. Furthermore, three cases were found that possess backbone XBs in high-resolution structures but lose them in low-resolution structures. In the last two cases, the backbone XBs were lost even at 1.80 Å resolution, underscoring the urgent need to consider XBs in the refinement process during X-ray crystallography studies.

  6. Managing large-scale models: DBS

    International Nuclear Information System (INIS)

    1981-05-01

    A set of fundamental management tools for developing and operating a large-scale model and data base system is presented. Experience in operating and developing such a system shows that the only reasonable way to gain strong management control is to implement appropriate controls and procedures. Chapter I discusses the purpose of the book. Chapter II classifies a broad range of generic management problems into three groups: documentation, operations, and maintenance. First, system problems are identified; then solutions for gaining management control are discussed. Chapters III, IV, and V present practical methods for dealing with these problems. These methods were developed for managing SEAS but have general application to large-scale models and data bases.

  7. Chronic rhinosinusitis in Europe - an underestimated disease. A GA(2) LEN study

    DEFF Research Database (Denmark)

    Hastan, D; Fokkens, W J; Bachert, C

    2011-01-01

    …Zuberbier T, Jarvis D, Burney P. Chronic rhinosinusitis in Europe - an underestimated disease. A GA(2)LEN study. Allergy 2011; 66: 1216-1223. ABSTRACT: Background: Chronic rhinosinusitis (CRS) is a common health problem, with significant medical costs and impact on general health. Even so, prevalence…

  8. Is dream recall underestimated by retrospective measures and enhanced by keeping a logbook? An empirical investigation.

    Science.gov (United States)

    Aspy, Denholm J

    2016-05-01

    In a recent review, Aspy, Delfabbro, and Proeve (2015) highlighted the tendency for retrospective measures of dream recall to yield substantially lower recall rates than logbook measures, a phenomenon they termed the retrospective-logbook disparity. One explanation for this phenomenon is that retrospective measures underestimate true dream recall. Another explanation is that keeping a logbook tends to enhance dream recall. The present study provides a thorough empirical investigation into the retrospective-logbook disparity using a range of retrospective and logbook measures and three different types of logbook. Retrospective-logbook disparities were correlated with a range of variables theoretically related to the retrospective underestimation effect, and retrospective-logbook disparities were greater among participants who reported improved dream recall during the logbook period. These findings indicate that dream recall is underestimated by retrospective measures and enhanced by keeping a logbook. Recommendations for the use of retrospective and logbook measures of dream recall are provided. Copyright © 2016 Elsevier Inc. All rights reserved.

  9. The "Grey Zone" cold air outbreak global model intercomparison: A cross evaluation using large-eddy simulations

    Science.gov (United States)

    Tomassini, Lorenzo; Field, Paul R.; Honnert, Rachel; Malardel, Sylvie; McTaggart-Cowan, Ron; Saitou, Kei; Noda, Akira T.; Seifert, Axel

    2017-03-01

    A stratocumulus-to-cumulus transition as observed in a cold air outbreak over the North Atlantic Ocean is compared in global climate and numerical weather prediction models and a large-eddy simulation model as part of the Working Group on Numerical Experimentation "Grey Zone" project. The focus of the project is to investigate to what degree current convection and boundary layer parameterizations behave in a scale-adaptive manner in situations where the model resolution approaches the scale of convection. Global model simulations were performed at a wide range of resolutions, with convective parameterizations turned on and off. The models successfully simulate the transition between the observed boundary layer structures, from a well-mixed stratocumulus to a deeper, partly decoupled cumulus boundary layer. There are indications that surface fluxes are generally underestimated. The amounts of cloud liquid water and cloud ice, and likely precipitation, are under-predicted, suggesting deficiencies in the strength of vertical mixing in shear-dominated boundary layers. Regulation by precipitation and by mixed-phase cloud microphysical processes also plays an important role in this case. With convection parameterizations switched on, the profiles of atmospheric liquid water and cloud ice are essentially resolution-insensitive. This, however, does not imply that the convection parameterizations are scale-aware: even at the highest resolutions considered here, simulations with convective parameterizations do not converge toward the results of convection-off experiments. Convection and boundary layer parameterizations strongly interact, suggesting the need for a unified treatment of convective and turbulent mixing when addressing scale-adaptivity.

  10. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  11. Efficient querying of large process model repositories

    NARCIS (Netherlands)

    Jin, Tao; Wang, Jianmin; La Rosa, M.; Hofstede, ter A.H.M.; Wen, Lijie

    2013-01-01

    Recent years have seen an increased uptake of business process management technology in industries. This has resulted in organizations trying to manage large collections of business process models. One of the challenges facing these organizations concerns the retrieval of models from large business process model repositories.

  12. Large-eddy simulation of heavy particle dispersion in wall-bounded turbulent flows

    Energy Technology Data Exchange (ETDEWEB)

    Salvetti, M.V. [DICI, University of Pisa, I-56122 Pisa (Italy)

    2015-03-10

    Capabilities and accuracy issues in Lagrangian tracking of heavy particles in velocity fields obtained from large-eddy simulations (LES) of wall-bounded turbulent flows are reviewed. In particular, it is shown that, if no subgrid scale (SGS) model is added to the particle motion equations, particle preferential concentration and near-wall accumulation are significantly underestimated. Results obtained with SGS modeling for the particle motion equations based on approximate deconvolution are briefly recalled. Then, the error purely due to filtering in particle tracking in LES flow fields is singled out and analyzed. The statistical properties of filtering errors are characterized in turbulent channel flow both from an Eulerian and a Lagrangian viewpoint. Implications for stochastic SGS modeling in particle motion equations are briefly outlined.

  13. Quantification of Underestimation of Physical Activity During Cycling to School When Using Accelerometry

    DEFF Research Database (Denmark)

    Tarp, Jakob; Andersen, Lars B; Østergaard, Lars

    2015-01-01

    Background: Cycling to and from school is an important source of physical activity (PA) in youth, but it is not captured by the dominant objective method to quantify PA. The aim of this study was to quantify the underestimation of objectively assessed PA caused by cycling when using accelerometry. Methods: Participants were 20 children aged 11-14 years from a randomized controlled trial performed in 2011. Physical activity was assessed by accelerometry with the addition of heart rate monitoring during cycling to school. A global positioning system (GPS) was used to identify periods of cycling to school. Results: Mean (95% CI) minutes of moderate-to-vigorous physical activity (MVPA) during round-trip commutes was 10.8 (7.1-16.6). Each kilometre of cycling meant an underestimation of 9314 (95% CI: 7719-11238) counts and 2.7 (95% CI: 2.1-3.5) minutes of MVPA. Adjusting for cycling to school…

  14. Dual-energy X-ray absorptiometry underestimates in vivo lumbar spine bone mineral density in overweight rats.

    Science.gov (United States)

    Cherif, Rim; Vico, Laurence; Laroche, Norbert; Sakly, Mohsen; Attia, Nebil; Lavet, Cedric

    2018-01-01

    Dual-energy X-ray absorptiometry (DXA) is currently the most widely used technique for measuring areal bone mineral density (BMD). However, several studies have shown inaccuracy, with either overestimation or underestimation of DXA BMD measurements in overweight or obese individuals. We designed an overweight rat model based on junk food to compare the effect of obesity on in vivo and ex vivo BMD and bone mineral content measurements. Thirty-eight 6-month-old male rats were given a chow diet (n = 13) or a high-fat, high-sucrose diet (n = 25), with the calorie amount kept the same in the two groups, for 19 weeks. L1 BMD, L1 bone mineral content, amount of abdominal fat, and amount of abdominal lean were obtained from an in vivo DXA scan. Ex vivo L1 BMD was also measured. A difference between in vivo and ex vivo DXA BMD measurements was found and was related to body weight, perirenal fat, abdominal fat, and abdominal lean. Multiple linear regression analysis showed that body weight, abdominal fat, and abdominal lean were independently related to ex vivo BMD. DXA underestimated lumbar in vivo BMD in overweight rats, and this measurement error is related to body weight and abdominal fat. Therefore, caution must be used when interpreting BMD among overweight and obese individuals.

  15. Hydrological regulation drives regime shifts: evidence from paleolimnology and ecosystem modeling of a large shallow Chinese lake.

    Science.gov (United States)

    Kong, Xiangzhen; He, Qishuang; Yang, Bin; He, Wei; Xu, Fuliu; Janssen, Annette B G; Kuiper, Jan J; van Gerven, Luuk P A; Qin, Ning; Jiang, Yujiao; Liu, Wenxiu; Yang, Chen; Bai, Zelin; Zhang, Min; Kong, Fanxiang; Janse, Jan H; Mooij, Wolf M

    2017-02-01

    Quantitative evidence of sudden shifts in ecological structure and function in large shallow lakes is rare, even though they provide essential benefits to society. Such 'regime shifts' can be driven by human activities which degrade ecological stability including water level control (WLC) and nutrient loading. Interactions between WLC and nutrient loading on the long-term dynamics of shallow lake ecosystems are, however, often overlooked and largely underestimated, which has hampered the effectiveness of lake management. Here, we focus on a large shallow lake (Lake Chaohu) located in one of the most densely populated areas in China, the lower Yangtze River floodplain, which has undergone both WLC and increasing nutrient loading over the last several decades. We applied a novel methodology that combines consistent evidence from both paleolimnological records and ecosystem modeling to overcome the hurdle of data insufficiency and to unravel the drivers and underlying mechanisms in ecosystem dynamics. We identified the occurrence of two regime shifts: one in 1963, characterized by the abrupt disappearance of submerged vegetation, and another around 1980, with strong algal blooms being observed thereafter. Using model scenarios, we further disentangled the roles of WLC and nutrient loading, showing that the 1963 shift was predominantly triggered by WLC, whereas the shift ca. 1980 was attributed to aggravated nutrient loading. Our analysis also shows interactions between these two stressors. Compared to the dynamics driven by nutrient loading alone, WLC reduced the critical P loading and resulted in earlier disappearance of submerged vegetation and emergence of algal blooms by approximately 26 and 10 years, respectively. 
Overall, our study reveals the significant role of hydrological regulation in driving shallow lake ecosystem dynamics, and it highlights the urgency of using multi-objective management criteria that include ecological sustainability perspectives when…

  16. Foreshock occurrence before large earthquakes

    Science.gov (United States)

    Reasenberg, P.A.

    1999-01-01

    Rates of foreshock occurrence involving shallow M ≥ 6 and M ≥ 7 mainshocks and M ≥ 5 foreshocks were measured in two worldwide catalogs over ~20-year intervals. The overall rates observed are similar to ones measured in previous worldwide and regional studies when they are normalized for the ranges of magnitude difference they each span. The observed worldwide rates were compared to a generic model of earthquake clustering based on patterns of small and moderate aftershocks in California. The aftershock model was extended to the case of moderate foreshocks preceding large mainshocks. Overall, the observed worldwide foreshock rates exceed the extended California generic model by a factor of ~2. Significant differences in foreshock rate were found among subsets of earthquakes defined by their focal mechanism and tectonic region, with the rate before thrust events higher and the rate before strike-slip events lower than the worldwide average. Among the thrust events, a large majority, composed of events located in shallow subduction zones, had a high foreshock rate, while a minority, located in continental thrust belts, had a low rate. These differences may explain why previous surveys have found low foreshock rates among thrust events in California (especially southern California), while the worldwide observations suggest the opposite: California, lacking an active subduction zone in most of its territory, and including a region of mountain-building thrusts in the south, reflects the low rate apparently typical for continental thrusts, while the worldwide observations, dominated by shallow subduction zone events, are foreshock-rich. If this is so, then the California generic model may significantly underestimate the conditional probability for a very large (M ≥ 8) earthquake following a potential (M ≥ 7) foreshock in Cascadia. The magnitude differences among the identified foreshock-mainshock pairs in the Harvard catalog are consistent with a uniform…

  17. Impact bias or underestimation? Outcome specifications predict the direction of affective forecasting errors.

    Science.gov (United States)

    Buechel, Eva C; Zhang, Jiao; Morewedge, Carey K

    2017-05-01

    Affective forecasts are used to anticipate the hedonic impact of future events and decide which events to pursue or avoid. We propose that because affective forecasters are more sensitive to outcome specifications of events than experiencers, the outcome specification values of an event, such as its duration, magnitude, probability, and psychological distance, can be used to predict the direction of affective forecasting errors: whether affective forecasters will overestimate or underestimate its hedonic impact. When specifications are positively correlated with the hedonic impact of an event, forecasters will overestimate the extent to which high specification values will intensify and low specification values will discount its impact. When outcome specifications are negatively correlated with its hedonic impact, forecasters will overestimate the extent to which low specification values will intensify and high specification values will discount its impact. These affective forecasting errors compound additively when multiple specifications are aligned in their impact: In Experiment 1, affective forecasters underestimated the hedonic impact of winning a smaller prize that they expected to win, and they overestimated the hedonic impact of winning a larger prize that they did not expect to win. In Experiment 2, affective forecasters underestimated the hedonic impact of a short unpleasant video about a temporally distant event, and they overestimated the hedonic impact of a long unpleasant video about a temporally near event. Experiments 3A and 3B showed that differences in the affect-richness of forecasted and experienced events underlie these differences in sensitivity to outcome specifications, therefore accounting for both the impact bias and its reversal. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  18. Constituent models and large transverse momentum reactions

    International Nuclear Information System (INIS)

    Brodsky, S.J.

    1975-01-01

    The discussion of constituent models and large transverse momentum reactions includes the structure of hard scattering models, dimensional counting rules for large transverse momentum reactions, dimensional counting and exclusive processes, the deuteron form factor, applications to inclusive reactions, predictions for meson and photon beams, the charge-cubed test for the e±p → e±γX asymmetry, the quasi-elastic peak in inclusive hadronic reactions, correlations, and the multiplicity bump at large transverse momentum. Also covered are the partition method for bound state calculations, proofs of dimensional counting, minimal neutralization and quark-quark scattering, the development of the constituent interchange model, and the A dependence of high transverse momentum reactions.

  19. Large proportions of overweight and obese children, as well as their parents, underestimate children's weight status across Europe. The ENERGY (EuropeaN Energy balance Research to prevent excessive weight Gain among Youth) project

    NARCIS (Netherlands)

    Manios, Y.; Moschonis, G.; Karatzi, K.; Androutsos, O.; Chin A Paw, M.J.M.; Moreno, L.A.; Bere, E.; Molnar, D.; Jan, N.; Dossegger, A.; de Bourdeaudhuij, I.; Singh, A.S.; Brug, J.

    2015-01-01

    Objective: To investigate the magnitude and country-specific differences in underestimation of children's weight status by children and their parents in Europe and to further explore its associations with family characteristics and sociodemographic factors. Design: Children's weight and height were…

  20. Shell model in large spaces and statistical spectroscopy

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1996-01-01

    For many nuclear structure problems of current interest it is essential to deal with the shell model in large spaces. Three different approaches are now in use for this. The first two are: (i) the conventional shell model diagonalization approach, taking into account advances in computer technology; (ii) the shell model Monte Carlo method. A brief overview of these two methods is given. Large-space shell model studies raise fundamental questions regarding the information content of the shell model spectrum of complex nuclei. This led to the third approach: statistical spectroscopy methods. The principles of statistical spectroscopy have their basis in nuclear quantum chaos, and they are described in some detail, substantiated by large-scale shell model calculations. (author)

  1. Exposure limits: the underestimation of absorbed cell phone radiation, especially in children.

    Science.gov (United States)

    Gandhi, Om P; Morgan, L Lloyd; de Salles, Alvaro Augusto; Han, Yueh-Ying; Herberman, Ronald B; Davis, Devra Lee

    2012-03-01

    The existing cell phone certification process uses a plastic model of the head called the Specific Anthropomorphic Mannequin (SAM), representing the top 10% of U.S. military recruits in 1989 and greatly underestimating the Specific Absorption Rate (SAR) for typical mobile phone users, especially children. A superior computer simulation certification process has been approved by the Federal Communications Commission (FCC) but is not employed to certify cell phones. In the United States, the FCC determines maximum allowed exposures. Many countries, especially European Union members, use the "guidelines" of the International Commission on Non-Ionizing Radiation Protection (ICNIRP), a nongovernmental agency. Radiofrequency (RF) exposure to a head smaller than SAM will produce a relatively higher SAR. Also, SAM uses a fluid having the average electrical properties of the head, which cannot indicate differential absorption by specific brain tissue, nor absorption in children or smaller adults. The SAR for a 10-year-old is up to 153% higher than the SAR for the SAM model. When electrical properties are considered, absorption in a child's head can be over two times greater, and absorption in the skull's bone marrow can be ten times greater than in adults. Therefore, a new certification process is needed that incorporates different modes of use, head sizes, and tissue properties. Anatomically based models should be employed in revising safety standards for these ubiquitous modern devices, and standards should be set by accountable, independent groups.

  2. Quantifying the underestimation of relative risks from genome-wide association studies.

    Directory of Open Access Journals (Sweden)

    Chris Spencer

    2011-03-01

    Genome-wide association studies (GWAS) have identified hundreds of associated loci across many common diseases. Most risk variants identified by GWAS will merely be tags for as-yet-unknown causal variants. It is therefore possible that identification of the causal variant, by fine mapping, will identify alleles with larger effects on genetic risk than those currently estimated from GWAS replication studies. We show that, under plausible assumptions, whilst the majority of the per-allele relative risks (RR) estimated from GWAS data will be close to the true risk at the causal variant, some could be considerable underestimates. For example, for an estimated RR in the range 1.2-1.3, there is approximately a 38% chance that it exceeds 1.4 and a 10% chance that it is over 2. We show how these probabilities can vary depending on the true effects associated with low-frequency variants and on the minor allele frequency (MAF) of the most associated SNP. We investigate the consequences of the underestimation of effect sizes for predictions of an individual's disease risk and interpret our results for the design of fine-mapping experiments. Although these effects mean that the amount of heritability explained by known GWAS loci is expected to be larger than current projections, this increase is likely to explain a relatively small amount of the so-called "missing" heritability.
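
    The tagging effect this abstract describes can be illustrated with a toy calculation: when the genotyped SNP is only a tag in imperfect LD (r² < 1) with the causal variant, the per-allele RR observed at the tag is attenuated toward 1. The sketch below is a deliberately simplified haploid two-locus model with assumed allele frequencies, not the paper's statistical framework:

```python
import math

def rr_at_tag(rr_causal, f_causal, f_tag, r):
    """Per-allele relative risk observed at a tag SNP whose allele has
    correlation r with the causal allele, in a haploid toy model."""
    # Haplotype frequencies implied by the allele frequencies and r:
    # cov(causal, tag) = r * sqrt(f_c(1-f_c) * f_t(1-f_t))
    cov = r * math.sqrt(f_causal * (1 - f_causal) * f_tag * (1 - f_tag))
    p11 = f_causal * f_tag + cov   # carries both risk alleles
    p01 = f_tag - p11              # tag allele only
    p10 = f_causal - p11           # causal allele only
    p00 = 1 - p11 - p01 - p10     # neither allele
    # Disease risk conditional on tag status (baseline risk cancels).
    risk_tag1 = (p11 * rr_causal + p01) / (p11 + p01)
    risk_tag0 = (p10 * rr_causal + p00) / (p10 + p00)
    return risk_tag1 / risk_tag0

# A causal RR of 1.5 seen through a tag with r^2 = 0.64 (r = 0.8).
observed = rr_at_tag(rr_causal=1.5, f_causal=0.2, f_tag=0.2, r=0.8)
print(observed)
```

    With these assumed frequencies the causal RR of 1.5 appears at the tag as roughly 1.39, illustrating why fine mapping of the causal variant can reveal larger effects than the GWAS estimate at the tag.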

  3. Large-scale modelling of neuronal systems

    International Nuclear Information System (INIS)

    Castellani, G.; Verondini, E.; Giampieri, E.; Bersani, F.; Remondini, D.; Milanesi, L.; Zironi, I.

    2009-01-01

    The brain is, without any doubt, the most complex system of the human body. Its complexity is also due to the extremely high number of neurons, as well as the huge number of synapses connecting them. Each neuron is capable of performing complex tasks, like learning and memorizing a large class of patterns. The simulation of large neuronal systems is challenging for both technological and computational reasons, and can open new perspectives for the comprehension of brain functioning. A well-known and widely accepted model of bidirectional synaptic plasticity, the BCM model, is stated by a differential equation approach based on bistability and selectivity properties. We have modified the BCM model, extending it from a single-neuron to a whole-network model. This new model is capable of generating interesting network topologies starting from a small number of local parameters describing the interaction between incoming and outgoing links from each neuron. We have characterized this model in terms of complex network theory, showing how this learning rule can support network generation.

  4. Commonly used reference values underestimate oxygen uptake in healthy, 50-year-old Swedish women.

    Science.gov (United States)

    Genberg, M; Andrén, B; Lind, L; Hedenström, H; Malinovschi, A

    2018-01-01

    Cardiopulmonary exercise testing (CPET) is the gold standard among clinical exercise tests. It combines a conventional stress test with measurement of oxygen uptake (VO2) and CO2 production. No validated Swedish reference values exist, and reference values in women are generally understudied. Moreover, the importance of the achieved respiratory exchange ratio (RER) and the significance of breathing reserve (BR) at peak exercise in healthy individuals are poorly understood. We compared VO2 at maximal load (peakVO2) and at the anaerobic threshold (VO2@AT) in healthy Swedish individuals with commonly used reference values, taking gender into account. Further, we analysed maximal workload and peakVO2 with regard to peak RER and BR. In all, 181 healthy, 50-year-old individuals (91 women) performed CPET. PeakVO2 was best predicted by Jones et al. (100·5%), while the SHIP reference values underestimated peakVO2 most (112·5%). Furthermore, underestimation of peakVO2 in women was found for all studied reference values. PeakVO2 did not differ significantly between individuals who did and did not reach a peak RER > 1·1 (2 328·7 versus 2 176·7 ml min-1, P = 0·11). A lower BR (≤30%) was related to significantly higher peakVO2. In conclusion, commonly used reference values underestimated oxygen uptake in women. No evidence for demanding an RER > 1·1 in healthy individuals was found. A lowered BR is probably a normal response to higher workloads in healthy individuals. © 2016 Scandinavian Society of Clinical Physiology and Nuclear Medicine. Published by John Wiley & Sons Ltd.

  5. Using Agent Base Models to Optimize Large Scale Network for Large System Inventories

    Science.gov (United States)

    Shameldin, Ramez Ahmed; Bowling, Shannon R.

    2010-01-01

    The aim of this paper is to use Agent Base Models (ABM) to optimize large-scale network handling capabilities for large system inventories and to implement strategies for the purpose of reducing capital expenses. The models used in this paper either use computational algorithms or procedure implementations developed in Matlab to simulate agent-based models in a principal programming language and mathematical theory; these run on clusters, which provide the high computational performance needed to execute the program in parallel. In both cases, a model is defined as a compilation of a set of structures and processes assumed to underlie the behavior of a network system.

  6. Large-Scale Features of Pliocene Climate: Results from the Pliocene Model Intercomparison Project

    Science.gov (United States)

    Haywood, A. M.; Hill, D.J.; Dolan, A. M.; Otto-Bliesner, B. L.; Bragg, F.; Chan, W.-L.; Chandler, M. A.; Contoux, C.; Dowsett, H. J.; Jost, A.; hide

    2013-01-01

    Climate and environments of the mid-Pliocene warm period (3.264 to 3.025 Ma) have been extensively studied. Whilst numerical models have shed light on the nature of climate at the time, uncertainties in their predictions have not been systematically examined. The Pliocene Model Intercomparison Project quantifies uncertainties in model outputs through a coordinated multi-model and multi-mode data intercomparison. Whilst commonalities in model outputs for the Pliocene are clearly evident, we show substantial variation in the sensitivity of models to the implementation of Pliocene boundary conditions. Models appear able to reproduce many regional changes in temperature reconstructed from geological proxies. However, data-model comparison highlights that models potentially underestimate polar amplification. To assert this conclusion with greater confidence, limitations in the time-averaged proxy data currently available must be addressed. Furthermore, sensitivity tests exploring the known unknowns in modelling Pliocene climate specifically relevant to the high latitudes are essential (e.g. palaeogeography, gateways, orbital forcing and trace gases). Estimates of longer-term sensitivity to CO2 (also known as Earth System Sensitivity; ESS) support previous work suggesting that ESS is greater than Climate Sensitivity (CS), and suggest that the ratio of ESS to CS is between 1 and 2, with a "best" estimate of 1.5.

  7. Disinformative data in large-scale hydrological modelling

    Directory of Open Access Journals (Sweden)

    A. Kauffeldt

    2013-07-01

    Full Text Available Large-scale hydrological modelling has become an important tool for the study of global and regional water resources, climate impacts, and water-resources management. However, modelling efforts over large spatial domains are fraught with problems of data scarcity, uncertainties and inconsistencies between model forcing and evaluation data. Model-independent methods to screen and analyse data for such problems are needed. This study aimed at identifying data inconsistencies in global datasets using a pre-modelling analysis, inconsistencies that can be disinformative for subsequent modelling. The consistency between (i basin areas for different hydrographic datasets, and (ii between climate data (precipitation and potential evaporation and discharge data, was examined in terms of how well basin areas were represented in the flow networks and the possibility of water-balance closure. It was found that (i most basins could be well represented in both gridded basin delineations and polygon-based ones, but some basins exhibited large area discrepancies between flow-network datasets and archived basin areas, (ii basins exhibiting too-high runoff coefficients were abundant in areas where precipitation data were likely affected by snow undercatch, and (iii the occurrence of basins exhibiting losses exceeding the potential-evaporation limit was strongly dependent on the potential-evaporation data, both in terms of numbers and geographical distribution. Some inconsistencies may be resolved by considering sub-grid variability in climate data, surface-dependent potential-evaporation estimates, etc., but further studies are needed to determine the reasons for the inconsistencies found. Our results emphasise the need for pre-modelling data analysis to identify dataset inconsistencies as an important first step in any large-scale study. 
Applying data-screening methods before modelling should also increase our chances of drawing robust conclusions from subsequent modelling.
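The water-balance screening rules described in this abstract can be sketched as a simple pre-modelling check. The thresholds follow the two inconsistency classes named above (runoff coefficient above 1, and losses exceeding the potential-evaporation limit); the basin numbers are synthetic illustrations, not the study's data.

```python
def screen_basin(p_mm, q_mm, pet_mm):
    """Return water-balance inconsistency flags for one basin.

    p_mm, q_mm, pet_mm: long-term mean precipitation, discharge and
    potential evaporation, all expressed in mm/yr over the basin area.
    """
    flags = []
    runoff_coeff = q_mm / p_mm
    if runoff_coeff > 1.0:
        # More discharge than precipitation: physically impossible at
        # steady state, often a sign of precipitation undercatch (e.g. snow)
        flags.append("runoff coefficient > 1")
    if (p_mm - q_mm) > pet_mm:
        # Apparent evaporative losses exceed the potential-evaporation limit
        flags.append("losses exceed PET limit")
    return flags

# Hypothetical basins: (P, Q, PET) in mm/yr
basins = {
    "snow-affected": (400.0, 450.0, 500.0),
    "arid":          (900.0, 100.0, 700.0),
    "consistent":    (800.0, 300.0, 900.0),
}
for name, (p, q, pet) in basins.items():
    print(name, screen_basin(p, q, pet))
```

A basin returning an empty list passes both checks; flagged basins would be candidates for exclusion or closer inspection before calibration.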

  8. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  9. Long-Term Calculations with Large Air Pollution Models

    DEFF Research Database (Denmark)

    Ambelas Skjøth, C.; Bastrup-Birk, A.; Brandt, J.

    1999-01-01

Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  10. Predictors of underestimation of malignancy after image-guided core needle biopsy diagnosis of flat epithelial atypia or atypical ductal hyperplasia.

    Science.gov (United States)

    Yu, Chi-Chang; Ueng, Shir-Hwa; Cheung, Yun-Chung; Shen, Shih-Che; Kuo, Wen-Lin; Tsai, Hsiu-Pei; Lo, Yung-Feng; Chen, Shin-Cheh

    2015-01-01

Flat epithelial atypia (FEA) and atypical ductal hyperplasia (ADH) are precursors of breast malignancy. Management of FEA or ADH after image-guided core needle biopsy (CNB) remains controversial. The aim of this study was to evaluate malignancy underestimation rates after FEA or ADH diagnosis using image-guided CNB and to identify clinical characteristics and imaging features associated with malignancy as well as identify cases with low underestimation rates that may be treatable by observation only. We retrospectively reviewed 2,875 consecutive image-guided CNBs recorded in an electronic database from January 2010 to December 2011 and identified 128 (4.5%) FEA and 83 (2.9%) ADH diagnoses (211 total cases). Of these, 64 (30.3%) were echo-guided CNB procedures and 147 (69.7%) mammography-guided CNBs. Twenty patients (9.5%) were upgraded to malignancy. Multivariate analysis indicated that age (OR = 1.123, p = 0.002, increase of 1 year), mass-type lesion with calcifications (OR = 8.213, p = 0.006), and ADH in CNB specimens (OR = 8.071, p = 0.003) were independent predictors of underestimation. In univariate analysis of echo-guided CNB (n = 64), mass with calcifications had the highest underestimation rate (p < 0.001). Multivariate analysis of 147 mammography-guided CNBs revealed that age (OR = 1.122, p = 0.040, increase of 1 year) and calcification distribution were significant independent predictors of underestimation. No FEA case in which complete calcification retrieval was recorded after CNB was upgraded to malignancy. Older age at diagnosis on image-guided CNB was a predictor of malignancy underestimation. Mass with calcifications was more likely to be associated with malignancy, and in cases presenting as calcifications only, segmental distribution or linear shapes were significantly associated with upgrading. 
Excision after FEA or ADH diagnosis by image-guided CNB is warranted, except for FEA diagnosed using mammography-guided CNB with complete calcification retrieval.

  11. Automated Volumetric Mammographic Breast Density Measurements May Underestimate Percent Breast Density for High-density Breasts

    NARCIS (Netherlands)

    Rahbar, K.; Gubern Merida, A.; Patrie, J.T.; Harvey, J.A.

    2017-01-01

    RATIONALE AND OBJECTIVES: The purpose of this study was to evaluate discrepancy in breast composition measurements obtained from mammograms using two commercially available software methods for systematic trends in overestimation or underestimation compared to magnetic resonance-derived

  12. Inferring Perspective Versus Getting Perspective: Underestimating the Value of Being in Another Person's Shoes.

    Science.gov (United States)

    Zhou, Haotian; Majka, Elizabeth A; Epley, Nicholas

    2017-04-01

    People use at least two strategies to solve the challenge of understanding another person's mind: inferring that person's perspective by reading his or her behavior (theorization) and getting that person's perspective by experiencing his or her situation (simulation). The five experiments reported here demonstrate a strong tendency for people to underestimate the value of simulation. Predictors estimated a stranger's emotional reactions toward 50 pictures. They could either infer the stranger's perspective by reading his or her facial expressions or simulate the stranger's perspective by watching the pictures he or she viewed. Predictors were substantially more accurate when they got perspective through simulation, but overestimated the accuracy they had achieved by inferring perspective. Predictors' miscalibrated confidence stemmed from overestimating the information revealed through facial expressions and underestimating the similarity in people's reactions to a given situation. People seem to underappreciate a useful strategy for understanding the minds of others, even after they gain firsthand experience with both strategies.

  13. Large-scale multimedia modeling applications

    International Nuclear Information System (INIS)

    Droppo, J.G. Jr.; Buck, J.W.; Whelan, G.; Strenge, D.L.; Castleton, K.J.; Gelston, G.M.

    1995-08-01

    Over the past decade, the US Department of Energy (DOE) and other agencies have faced increasing scrutiny for a wide range of environmental issues related to past and current practices. A number of large-scale applications have been undertaken that required analysis of large numbers of potential environmental issues over a wide range of environmental conditions and contaminants. Several of these applications, referred to here as large-scale applications, have addressed long-term public health risks using a holistic approach for assessing impacts from potential waterborne and airborne transport pathways. Multimedia models such as the Multimedia Environmental Pollutant Assessment System (MEPAS) were designed for use in such applications. MEPAS integrates radioactive and hazardous contaminants impact computations for major exposure routes via air, surface water, ground water, and overland flow transport. A number of large-scale applications of MEPAS have been conducted to assess various endpoints for environmental and human health impacts. These applications are described in terms of lessons learned in the development of an effective approach for large-scale applications

  14. Spatial occupancy models for large data sets

    Science.gov (United States)

    Johnson, Devin S.; Conn, Paul B.; Hooten, Mevin B.; Ray, Justina C.; Pond, Bruce A.

    2013-01-01

Since its development, occupancy modeling has become a popular and useful tool for ecologists wishing to learn about the dynamics of species occurrence over time and space. Such models require presence–absence data to be collected at spatially indexed survey units. However, only recently have researchers recognized the need to correct for spatially induced overdispersion by explicitly accounting for spatial autocorrelation in occupancy probability. Previous efforts to incorporate such autocorrelation have largely focused on logit-normal formulations for occupancy, with spatial autocorrelation induced by a random effect within a hierarchical modeling framework. Although useful, computational time generally limits such an approach to relatively small data sets, and there are often problems with algorithm instability, yielding unsatisfactory results. Further, recent research has revealed a hidden form of multicollinearity in such applications, which may lead to parameter bias if not explicitly addressed. Combining several techniques, we present a unifying hierarchical spatial occupancy model specification that is particularly effective over large spatial extents. This approach employs a probit mixture framework for occupancy and can easily accommodate a reduced-dimensional spatial process to resolve issues with multicollinearity and spatial confounding while improving algorithm convergence. Using open-source software, we demonstrate this new model specification using a case study involving occupancy of caribou (Rangifer tarandus) over a set of 1080 survey units spanning a large contiguous region (108 000 km2) in northern Ontario, Canada. Overall, the combination of a more efficient specification and open-source software allows for a facile and stable implementation of spatial occupancy models for large data sets.

  15. Large Mammalian Animal Models of Heart Disease

    Directory of Open Access Journals (Sweden)

    Paula Camacho

    2016-10-01

Full Text Available Due to the biological complexity of the cardiovascular system, animal models are an urgent pre-clinical need for advancing our knowledge of cardiovascular disease and for exploring new drugs to repair the damaged heart. Ideally, a model system should be inexpensive, easily manipulated, reproducible, a biological representative of human disease, and ethically sound. Although a larger animal model is more expensive and difficult to manipulate, its genetic, structural, functional, and even disease similarities to humans make it an ideal model to first consider. This review presents the commonly used large animals (dog, sheep, pig, and non-human primates); the less commonly used large animals (cows, horses) are excluded. The review attempts to introduce unique points for each species regarding its biological properties, degree of susceptibility to certain types of heart disease, and methods of inducing those conditions. For example, dogs rarely develop myocardial infarction, whereas dilated cardiomyopathy develops quite often. Based on the similarity of each species to the human, model selection may first consider non-human primates, then pig, sheep, and dog, but it also depends on other factors, for example, purpose, funding, ethics, and policy. We hope this review can serve as a basic outline of large animal models for cardiovascular researchers and clinicians.

  16. Evaluation of trends in high temperature extremes in north-western Europe in regional climate models

    International Nuclear Information System (INIS)

    Min, E; Hazeleger, W; Van Oldenborgh, G J; Sterl, A

    2013-01-01

Projections of future changes in weather extremes on the regional and local scale depend on a realistic representation of trends in extremes in regional climate models (RCMs). We have tested this assumption for moderate high temperature extremes (the annual maximum of the daily maximum 2 m temperature, T_ann.max). Linear trends in T_ann.max from historical runs of 14 RCMs driven by atmospheric reanalysis data are compared with trends in gridded station data. The ensemble of RCMs significantly underestimates the observed trends over most of the north-western European land surface. Individual models do not fare much better, with even the best performing models underestimating observed trends over large areas. We argue that the inability of RCMs to reproduce observed trends is probably not due to errors in large-scale circulation. There is also no significant correlation between the RCM T_ann.max trends and trends in radiation or Bowen ratio. We conclude that care should be taken when using RCM data for adaptation decisions. (letter)
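The trend comparison described in this abstract reduces to fitting a linear trend to each annual-maximum series by ordinary least squares, for both RCM output and station data. A minimal sketch, using synthetic series (not the study's data) in which the model trend is deliberately weaker than the observed one:

```python
def linear_trend(years, values):
    """Ordinary least-squares slope, in units of `values` per year."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

years = list(range(1980, 2010))
# Synthetic "observed" annual maxima with a 0.05 K/yr trend
obs = [30.0 + 0.05 * (y - 1980) for y in years]
# Synthetic "RCM" annual maxima with a weaker 0.02 K/yr trend
rcm = [30.0 + 0.02 * (y - 1980) for y in years]
print(linear_trend(years, obs), linear_trend(years, rcm))
```

Comparing the two slopes over many grid points, as the study does, exposes systematic underestimation by the model ensemble.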

  17. Large scale model testing

    International Nuclear Information System (INIS)

    Brumovsky, M.; Filip, R.; Polachova, H.; Stepanek, S.

    1989-01-01

    Fracture mechanics and fatigue calculations for WWER reactor pressure vessels were checked by large scale model testing performed using large testing machine ZZ 8000 (with a maximum load of 80 MN) at the SKODA WORKS. The results are described from testing the material resistance to fracture (non-ductile). The testing included the base materials and welded joints. The rated specimen thickness was 150 mm with defects of a depth between 15 and 100 mm. The results are also presented of nozzles of 850 mm inner diameter in a scale of 1:3; static, cyclic, and dynamic tests were performed without and with surface defects (15, 30 and 45 mm deep). During cyclic tests the crack growth rate in the elastic-plastic region was also determined. (author). 6 figs., 2 tabs., 5 refs

  18. Lesion stiffness measured by shear-wave elastography: Preoperative predictor of the histologic underestimation of US-guided core needle breast biopsy.

    Science.gov (United States)

    Park, Ah Young; Son, Eun Ju; Kim, Jeong-Ah; Han, Kyunghwa; Youk, Ji Hyun

    2015-12-01

To determine whether lesion stiffness measured by shear-wave elastography (SWE) can be used to predict the histologic underestimation of ultrasound (US)-guided 14-gauge core needle biopsy (CNB) for breast masses. This retrospective study enrolled 99 breast masses from 93 patients, including 40 high-risk lesions and 59 ductal carcinoma in situ (DCIS), which were diagnosed by US-guided 14-gauge CNB. SWE was performed for all breast masses to measure quantitative elasticity values before US-guided CNB. To identify the preoperative factors associated with histologic underestimation, patients' age, symptoms, lesion size, B-mode US findings, and quantitative SWE parameters were compared according to the histologic upgrade after surgery using the chi-square test, Fisher's exact test, or independent t-test. The independent factors for predicting histologic upgrade were evaluated using multivariate logistic regression analysis. The underestimation rate was 28.3% (28/99) in total, 25.0% (10/40) in high-risk lesions, and 30.5% (18/59) in DCIS. All elasticity values of the upgrade group were significantly higher than those of the non-upgrade group. Breast lesion stiffness quantitatively measured by SWE could be helpful to predict the underestimation of malignancy in US-guided 14-gauge CNB. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  19. Models for large superconducting toroidal magnet systems

    International Nuclear Information System (INIS)

    Arendt, F.; Brechna, H.; Erb, J.; Komarek, P.; Krauth, H.; Maurer, W.

    1976-01-01

Prior to the design of large GJ toroidal magnet systems it is appropriate to procure small-scale models, which can simulate their pertinent properties and allow investigation of the relevant phenomena. The important feature of the model is to show under which circumstances the system performance can be extrapolated to large magnets. Based on parameters such as the maximum magnetic field, the current density, and the maximum tolerable magneto-mechanical stresses, a simple method of designing model magnets is presented. It is shown how pertinent design parameters change when the toroidal dimensions are altered. In addition, some conductor cost estimates are given based on reactor power output and wall loading.

  20. Rediscovery of an old article reporting that the area around the epicenter in Hiroshima was heavily contaminated with residual radiation, indicating that exposure doses of A-bomb survivors were largely underestimated

    International Nuclear Information System (INIS)

    Sutou, Shizuyo

    2017-01-01

The A-bomb blast released a huge amount of energy: thermal radiation (35%), blast energy (50%), and nuclear radiation (15%). Of the 15%, 5% was initial radiation released within 30 s and 10% was residual radiation, the majority of which was fallout. Exposure doses of hibakusha (A-bomb survivors) were estimated solely on the basis of the initial radiation. The effects of the residual radiation on hibakusha have been considered controversial; some groups assert that the residual radiation was negligible, but others refute that assertion. I recently discovered a six-decade-old article written in Japanese by a medical doctor, Gensaku Obo, from Hiroshima City. This article clearly indicates that the area around the epicenter in Hiroshima was heavily contaminated with residual radiation. It reports that non-hibakusha who entered Hiroshima soon after the blast suffered from severe acute radiation sickness, including burns, external injuries, fever, diarrhea, skin bleeding, sore throat and loss of hair, as if they were real hibakusha. This means that (i) some of those who entered Hiroshima in the early days after the blast could be regarded as indirect hibakusha; (ii) 'in-the-city-control' people in the Life Span Study (LSS) must have been irradiated more or less by residual radiation and could not function properly as the negative control; (iii) exposure doses of hibakusha were largely underestimated; and (iv) cancer risk in the LSS was largely overestimated. Obo's article is very important for understanding the health effects of A-bombs, so its essence has been translated from Japanese to English with the permission of the publisher.

  1. Exactly soluble models for surface partition of large clusters

    International Nuclear Information System (INIS)

    Bugaev, K.A.; Bugaev, K.A.; Elliott, J.B.

    2007-01-01

The surface partition of large clusters is studied analytically within the framework of the 'Hills and Dales Model'. Three formulations are solved exactly by using the Laplace-Fourier transformation method. In the limit of small-amplitude deformations, the 'Hills and Dales Model' gives the upper and lower bounds for the surface entropy coefficient of large clusters. The surface entropy coefficients found are compared with those of large clusters within the 2- and 3-dimensional Ising models

  2. Disk Masses around Solar-mass Stars are Underestimated by CO Observations

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Mo; Evans II, Neal J. [Astronomy Department, University of Texas, 2515 Speedway, Stop C1400, Austin, TX 78712 (United States); Dodson-Robinson, Sarah E. [University of Delaware, Department of Physics and Astronomy, 217 Sharp Lab, Newark, DE 19716 (United States); Willacy, Karen; Turner, Neal J. [Mail Stop 169-506, Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States)

    2017-05-20

Gas in protostellar disks provides the raw material for giant planet formation and controls the dynamics of the planetesimal-building dust grains. Accurate gas mass measurements help map the observed properties of planet-forming disks onto the formation environments of known exoplanets. Rare isotopologues of carbon monoxide (CO) have been used as gas mass tracers for disks in the Lupus star-forming region, with an assumed interstellar CO/H2 abundance ratio. Unfortunately, observations of T-Tauri disks show that CO abundance is not interstellar, a finding reproduced by models that show CO abundance decreasing both with distance from the star and as a function of time. Here, we present radiative transfer simulations that assess the accuracy of CO-based disk mass measurements. We find that the combination of CO chemical depletion in the outer disk and optically thick emission from the inner disk leads observers to underestimate gas mass by more than an order of magnitude if they use the standard assumptions of interstellar CO/H2 ratio and optically thin emission. Furthermore, CO abundance changes on million-year timescales, introducing an age/mass degeneracy into observations. To reach a factor of a few accuracy for CO-based disk mass measurements, we suggest that observers and modelers adopt the following strategies: (1) select low-J transitions; (2) observe multiple CO isotopologues and use either intensity ratios or normalized line profiles to diagnose CO chemical depletion; and (3) use spatially resolved observations to measure the CO-abundance distribution.
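The core of the bias described here is arithmetic: if the true CO/H2 abundance is depleted below the interstellar value but an observer converts the CO measurement with the interstellar ratio, the inferred gas column comes out low by the ratio of the two abundances. A back-of-envelope sketch with illustrative numbers (the column density and depleted abundance below are assumptions, not values from the paper):

```python
X_ISM = 1e-4   # canonical interstellar CO/H2 abundance ratio

def h2_column(n_co, x_co):
    """H2 column density implied by a CO column and a CO/H2 abundance ratio."""
    return n_co / x_co

n_co = 1e17    # observed optically thin CO column (cm^-2), illustrative
x_true = 1e-6  # chemically depleted abundance in the outer disk, assumed

# Observer assumes the interstellar ratio; nature uses the depleted one.
underestimate = h2_column(n_co, x_true) / h2_column(n_co, X_ISM)
print(underestimate)
```

With these numbers the true column is 100 times the inferred one, consistent with the abstract's "more than an order of magnitude" when depletion is strong; optical-depth effects in the inner disk compound the error further.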

  3. WE-G-204-02: Utility of a Channelized Hotelling Model Observer Over a Large Range of Angiographic Exposure Levels

    International Nuclear Information System (INIS)

    Fetterly, K; Favazza, C

    2015-01-01

    Purpose: Mathematical model observers provide a figure of merit that simultaneously considers a test object and the contrast, noise, and spatial resolution properties of an imaging system. The purpose of this work was to investigate the utility of a channelized Hotelling model observer (CHO) to assess system performance over a large range of angiographic exposure conditions. Methods: A 4 mm diameter disk shaped, iodine contrast test object was placed on a 20 cm thick Lucite phantom and 1204 image frames were acquired using fixed x-ray beam quality and for several detector target dose (DTD) values in the range 6 to 240 nGy. The CHO was implemented in the spatial domain utilizing 96 Gabor functions as channels. Detectability index (DI) estimates were calculated using the “resubstitution” and “holdout” methods to train the CHO. Also, DI values calculated using discrete subsets of the data were used to estimate a minimally biased DI as might be expected from an infinitely large dataset. The relationship between DI, independently measured CNR, and changes in results expected assuming a quantum limited detector were assessed over the DTD range. Results: CNR measurements demonstrated that the angiography system is not quantum limited due to relatively increasing contamination from electronic noise that reduces CNR for low DTD. Direct comparison of DI versus CNR indicates that the CHO relatively overestimates DI for low DTD and/or underestimates DI values for high DTD. The relative magnitude of the apparent bias error in the DI values was ∼20% over the 40x DTD range investigated. Conclusion: For the angiography system investigated, the CHO can provide a minimally biased figure of merit if implemented over a restricted exposure range. However, bias leads to overestimates of DI for low exposures. This work emphasizes the need to verify CHO model performance during real-world application

  4. WE-G-204-02: Utility of a Channelized Hotelling Model Observer Over a Large Range of Angiographic Exposure Levels

    Energy Technology Data Exchange (ETDEWEB)

    Fetterly, K; Favazza, C [Mayo Clinic, Rochester, MN (United States)

    2015-06-15

    Purpose: Mathematical model observers provide a figure of merit that simultaneously considers a test object and the contrast, noise, and spatial resolution properties of an imaging system. The purpose of this work was to investigate the utility of a channelized Hotelling model observer (CHO) to assess system performance over a large range of angiographic exposure conditions. Methods: A 4 mm diameter disk shaped, iodine contrast test object was placed on a 20 cm thick Lucite phantom and 1204 image frames were acquired using fixed x-ray beam quality and for several detector target dose (DTD) values in the range 6 to 240 nGy. The CHO was implemented in the spatial domain utilizing 96 Gabor functions as channels. Detectability index (DI) estimates were calculated using the “resubstitution” and “holdout” methods to train the CHO. Also, DI values calculated using discrete subsets of the data were used to estimate a minimally biased DI as might be expected from an infinitely large dataset. The relationship between DI, independently measured CNR, and changes in results expected assuming a quantum limited detector were assessed over the DTD range. Results: CNR measurements demonstrated that the angiography system is not quantum limited due to relatively increasing contamination from electronic noise that reduces CNR for low DTD. Direct comparison of DI versus CNR indicates that the CHO relatively overestimates DI for low DTD and/or underestimates DI values for high DTD. The relative magnitude of the apparent bias error in the DI values was ∼20% over the 40x DTD range investigated. Conclusion: For the angiography system investigated, the CHO can provide a minimally biased figure of merit if implemented over a restricted exposure range. However, bias leads to overestimates of DI for low exposures. This work emphasizes the need to verify CHO model performance during real-world application.
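The detectability index named in this abstract comes from the Hotelling formula d' = sqrt(Δv^T S^{-1} Δv) applied to channel outputs. A toy sketch with synthetic 16x16 images, a small random channel set standing in for the 96 Gabor channels, and a disk-shaped signal standing in for the iodine test object (all sizes and amplitudes here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

n_side = 16
npix = n_side * n_side
n_img = 500

# Disk-shaped signal in the image centre (stand-in for the contrast disk)
xx, yy = np.meshgrid(np.arange(n_side), np.arange(n_side))
signal = (((xx - 8) ** 2 + (yy - 8) ** 2) < 9).astype(float).ravel() * 0.5

# Signal-absent and signal-present image ensembles with unit white noise
absent = rng.normal(0.0, 1.0, (n_img, npix))
present = rng.normal(0.0, 1.0, (n_img, npix)) + signal

# Random channels stand in for the Gabor channel set of the study
channels = rng.normal(0.0, 1.0, (npix, 8))

va = absent @ channels    # channel outputs, signal absent
vp = present @ channels   # channel outputs, signal present

# Hotelling detectability in channel space: d' = sqrt(dv^T S^-1 dv)
dv = vp.mean(axis=0) - va.mean(axis=0)
S = 0.5 * (np.cov(va, rowvar=False) + np.cov(vp, rowvar=False))
dprime = float(np.sqrt(dv @ np.linalg.solve(S, dv)))
print(dprime)
```

Training and testing the observer on the same images, as this single-ensemble sketch does, corresponds to the "resubstitution" estimate mentioned above and is optimistically biased; the "holdout" estimate splits the ensemble instead.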

  5. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.; Douglas, Craig C.

    2010-01-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology to simulate this phenomenon uses high-resolution dynamic models made of numerical weather prediction (NWP) models

  6. How systematic age underestimation can impede understanding of fish population dynamics: Lessons learned from a Lake Superior cisco stock

    Science.gov (United States)

    Yule, D.L.; Stockwell, J.D.; Black, J.A.; Cullis, K.I.; Cholwek, G.A.; Myers, J.T.

    2008-01-01

Systematic underestimation of fish age can impede understanding of recruitment variability and adaptive strategies (like longevity) and can bias estimates of survivorship. We suspected that previous estimates of annual survival (S; range = 0.20-0.44) for Lake Superior ciscoes Coregonus artedi developed from scale ages were biased low. To test this hypothesis, we estimated the total instantaneous mortality rate of adult ciscoes from the Thunder Bay, Ontario, stock by use of cohort-based catch curves developed from commercial gill-net catches and otolith-aged fish. Mean S based on otolith ages was greater for adult females (0.80) than for adult males (0.75), but these differences were not significant. Applying the results of a study of agreement between scale and otolith ages, we modeled a scale age for each otolith-aged fish to reconstruct catch curves. Using modeled scale ages, estimates of S (0.42 for females, 0.36 for males) were comparable with those reported in past studies. We conducted a November 2005 acoustic and midwater trawl survey to estimate the abundance of ciscoes when the fish were being harvested for roe. Estimated exploitation rates were 0.085 for females and 0.025 for males, and the instantaneous rates of fishing mortality were 0.089 for females and 0.025 for males. The instantaneous rates of natural mortality were 0.131 and 0.265 for females and males, respectively. Using otolith ages, we found that strong year-classes at large during November 2005 were caught in high numbers as age-1 fish in previous annual bottom trawl surveys, whereas weak or absent year-classes were not. For decades, large-scale fisheries on the Great Lakes were allowed to operate because ciscoes were assumed to be short lived and to have regular recruitment. We postulate that the collapse of these fisheries was linked in part to a misunderstanding of cisco biology driven by scale-ageing error. © Copyright by the American Fisheries Society 2008.
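The catch-curve method used in this study can be sketched in a few lines: regress ln(catch-at-age) on age over the descending limb of the age distribution; the slope is -Z (total instantaneous mortality), and annual survival is S = exp(-Z). The catches below are synthetic, constructed with a true S of 0.8 (close to the otolith-based female estimate), not the Thunder Bay data.

```python
import math

def catch_curve_survival(ages, catches):
    """Annual survival S = exp(-Z), with Z from an OLS fit of ln(catch) on age."""
    logs = [math.log(c) for c in catches]
    n = len(ages)
    ma = sum(ages) / n
    ml = sum(logs) / n
    slope = sum((a - ma) * (l - ml) for a, l in zip(ages, logs)) / \
            sum((a - ma) ** 2 for a in ages)
    z = -slope               # total instantaneous mortality
    return math.exp(-z)      # annual survival

# Synthetic descending limb with true annual survival 0.8
ages = list(range(5, 15))
catches = [1000.0 * 0.8 ** (a - 5) for a in ages]
print(round(catch_curve_survival(ages, catches), 3))  # → 0.8
```

The study's key point follows directly: if ages are systematically compressed (as with scale ages), the descending limb looks steeper, Z is inflated, and S is biased low.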

  7. Large-scale modeling of rain fields from a rain cell deterministic model

    Science.gov (United States)

Féral, Laurent; Sauvageot, Henri; Castanet, Laurent; Lemorton, Joël; Cornet, Frédéric; Leconte, Katia

    2006-04-01

    A methodology to simulate two-dimensional rain rate fields at large scale (1000 × 1000 km2, the scale of a satellite telecommunication beam or a terrestrial fixed broadband wireless access network) is proposed. It relies on a rain rate field cellular decomposition. At small scale (˜20 × 20 km2), the rain field is split up into its macroscopic components, the rain cells, described by the Hybrid Cell (HYCELL) cellular model. At midscale (˜150 × 150 km2), the rain field results from the conglomeration of rain cells modeled by HYCELL. To account for the rain cell spatial distribution at midscale, the latter is modeled by a doubly aggregative isotropic random walk, the optimal parameterization of which is derived from radar observations at midscale. The extension of the simulation area from the midscale to the large scale (1000 × 1000 km2) requires the modeling of the weather frontal area. The latter is first modeled by a Gaussian field with anisotropic covariance function. The Gaussian field is then turned into a binary field, giving the large-scale locations over which it is raining. This transformation requires the definition of the rain occupation rate over large-scale areas. Its probability distribution is determined from observations by the French operational radar network ARAMIS. The coupling with the rain field modeling at midscale is immediate whenever the large-scale field is split up into midscale subareas. The rain field thus generated accounts for the local CDF at each point, defining a structure spatially correlated at small scale, midscale, and large scale. It is then suggested that this approach be used by system designers to evaluate diversity gain, terrestrial path attenuation, or slant path attenuation for different azimuth and elevation angle directions.
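The large-scale step described above, turning a correlated Gaussian field into a binary "raining / not raining" map with a prescribed occupation rate, can be sketched as follows. Moving-average smoothing stands in for the paper's anisotropic covariance model, and the 20% occupation rate is an illustrative assumption rather than a value from the ARAMIS statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

def correlated_gaussian(n, half_width):
    """White noise smoothed with a square moving-average kernel (toroidal),
    giving a spatially correlated Gaussian field."""
    white = rng.normal(0.0, 1.0, (n, n))
    out = np.zeros_like(white)
    for dx in range(-half_width, half_width + 1):
        for dy in range(-half_width, half_width + 1):
            out += np.roll(np.roll(white, dx, axis=0), dy, axis=1)
    return out

def binary_rain_field(field, occupation_rate):
    """Threshold the field so the top `occupation_rate` fraction is 'raining'."""
    threshold = np.quantile(field, 1.0 - occupation_rate)
    return field > threshold

field = correlated_gaussian(64, 3)
rain = binary_rain_field(field, 0.20)
print(rain.mean())  # close to 0.20
```

In the paper's scheme, each "raining" midscale subarea of the binary field would then be populated with HYCELL rain cells placed by the aggregative random walk.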

  8. Active Exploration of Large 3D Model Repositories.

    Science.gov (United States)

    Gao, Lin; Cao, Yan-Pei; Lai, Yu-Kun; Huang, Hao-Zhi; Kobbelt, Leif; Hu, Shi-Min

    2015-12-01

    With broader availability of large-scale 3D model repositories, the need for efficient and effective exploration becomes more and more urgent. Existing model retrieval techniques do not scale well with the size of the database since often a large number of very similar objects are returned for a query, and the possibilities to refine the search are quite limited. We propose an interactive approach where the user feeds an active learning procedure by labeling either entire models or parts of them as "like" or "dislike" such that the system can automatically update an active set of recommended models. To provide an intuitive user interface, candidate models are presented based on their estimated relevance for the current query. From the methodological point of view, our main contribution is to exploit not only the similarity between a query and the database models but also the similarities among the database models themselves. We achieve this by an offline pre-processing stage, where global and local shape descriptors are computed for each model and a sparse distance metric is derived that can be evaluated efficiently even for very large databases. We demonstrate the effectiveness of our method by interactively exploring a repository containing over 100 K models.

  9. Comparison of Decadal Water Storage Trends from Global Hydrological Models and GRACE Satellite Data

    Science.gov (United States)

    Scanlon, B. R.; Zhang, Z. Z.; Save, H.; Sun, A. Y.; Mueller Schmied, H.; Van Beek, L. P.; Wiese, D. N.; Wada, Y.; Long, D.; Reedy, R. C.; Doll, P. M.; Longuevergne, L.

    2017-12-01

Global hydrology is increasingly being evaluated using models; however, the reliability of these global models is not well known. In this study we compared decadal trends (2002-2014) in land water storage from 7 global models (WGHM, PCR-GLOBWB, and GLDAS: NOAH, MOSAIC, VIC, CLM, and CLSM) to storage trends from new GRACE satellite mascon solutions (CSR-M and JPL-M). The analysis was conducted over 186 river basins, representing about 60% of the global land area. Modeled total water storage trends agree with GRACE-derived trends that are within ±0.5 km3/yr but greatly underestimate large declining and rising trends outside this range. Large declining trends are found mostly in intensively irrigated basins and in some basins in northern latitudes. Rising trends are found in basins with little or no irrigation and are generally related to increasing trends in precipitation. The largest decline is found in the Ganges (-12 km3/yr) and the largest rise in the Amazon (43 km3/yr). Differences between models and GRACE are greatest in large basins (>0.5x106 km2), mostly in humid regions. There is very little agreement in storage trends between models and GRACE, or among the models themselves (r2 values are mostly low). The GRACE data indicate that basins store water over decadal timescales in a way that is underrepresented by the models. The storage capacity in the modeled soil and groundwater compartments may be insufficient to accommodate the range in water storage variations shown by GRACE data. The inability of the models to capture the large storage trends indicates that model projections of climate and human-induced changes in water storage may be mostly underestimated. Future GRACE and model studies should try to reduce the various sources of uncertainty in water storage trends and should consider expanding the modeled storage capacity of the soil profiles and their interaction with groundwater.

  10. Combining satellite radar altimetry, SAR surface soil moisture and GRACE total storage changes for hydrological model calibration in a large poorly gauged catchment

    Directory of Open Access Journals (Sweden)

    C. Milzow

    2011-06-01

    Full Text Available The availability of data is a major challenge for hydrological modelling in large parts of the world. Remote sensing data can be exploited to improve models of ungauged or poorly gauged catchments. In this study we combine three datasets for calibration of a rainfall-runoff model of the poorly gauged Okavango catchment in Southern Africa: (i) surface soil moisture (SSM) estimates derived from radar measurements onboard the Envisat satellite; (ii) radar altimetry measurements by Envisat providing river stages in the tributaries of the Okavango catchment, down to a minimum river width of about one hundred meters; and (iii) temporal changes of the Earth's gravity field recorded by the Gravity Recovery and Climate Experiment (GRACE), caused by total water storage changes in the catchment. The SSM data are shown to be helpful in identifying periods in which the precipitation input is over- or underestimated. The accuracy of the radar altimetry data is validated on gauged subbasins of the catchment, and altimetry data of an ungauged subbasin are used for model calibration. The radar altimetry data are important to condition model parameters related to channel morphology, such as Manning's roughness. GRACE data are used to validate the model and to condition model parameters related to various storage compartments in the hydrological model (e.g. soil, groundwater, bank storage, etc.). As precipitation input the FEWS-Net RFE, TRMM 3B42 and ECMWF ERA-Interim datasets are considered and compared.

  11. On spinfoam models in large spin regime

    International Nuclear Information System (INIS)

    Han, Muxin

    2014-01-01

    We study the semiclassical behavior of the Lorentzian Engle–Pereira–Rovelli–Livine (EPRL) spinfoam model by taking into account the sum over spins in the large spin regime. We employ the method of stationary phase analysis with parameters, together with the so-called almost-analytic machinery, in order to find the asymptotic behavior of the contributions from all possible large spin configurations in the spinfoam model. The spins contributing to the sum are written as J_f = λj_f, where λ is a large parameter, resulting in an asymptotic expansion via stationary phase approximation. The analysis shows that, at least for simplicial Lorentzian geometries (as spinfoam critical configurations), they contribute the leading-order approximation of the spinfoam amplitude only when their deficit angles satisfy γΘ_f ≤ λ^(−1/2) mod 4πZ. Our analysis results in a curvature expansion of the semiclassical low energy effective action from the spinfoam model, where the UV modifications of Einstein gravity appear as subleading high-curvature corrections. (paper)

  12. Underestimated risks of recurrent long-range ash dispersal from northern Pacific Arc volcanoes.

    Science.gov (United States)

    Bourne, A J; Abbott, P M; Albert, P G; Cook, E; Pearce, N J G; Ponomareva, V; Svensson, A; Davies, S M

    2016-07-21

    Widespread ash dispersal poses a significant natural hazard to society, particularly in relation to disruption to aviation. Assessing the extent of the threat of far-travelled ash clouds on flight paths is substantially hindered by an incomplete volcanic history and an underestimation of the potential reach of distant eruptive centres. The risk of extensive ash clouds to aviation is thus poorly quantified. New evidence is presented of explosive Late Pleistocene eruptions in the Pacific Arc, currently undocumented in the proximal geological record, which dispersed ash up to 8000 km from source. Twelve microscopic ash deposits or cryptotephra, invisible to the naked eye, discovered within Greenland ice-cores, and ranging in age between 11.1 and 83.7 ka b2k, are compositionally matched to northern Pacific Arc sources including Japan, Kamchatka, Cascades and Alaska. Only two cryptotephra deposits are correlated to known high-magnitude eruptions (Towada-H, Japan, ca 15 ka BP and Mount St Helens Set M, ca 28 ka BP). For the remaining 10 deposits, there is no evidence of age- and compositionally-equivalent eruptive events in regional volcanic stratigraphies. This highlights the inherent problem of under-reporting eruptions and the dangers of underestimating the long-term risk of widespread ash dispersal for trans-Pacific and trans-Atlantic flight routes.

  13. Rediscovery of an old article reporting that the area around the epicenter in Hiroshima was heavily contaminated with residual radiation, indicating that exposure doses of A-bomb survivors were largely underestimated.

    Science.gov (United States)

    Sutou, Shizuyo

    2017-09-01

    The A-bomb blast released a huge amount of energy: thermal radiation (35%), blast energy (50%), and nuclear radiation (15%). Of the 15%, 5% was initial radiation released within 30 s and 10% was residual radiation, the majority of which was fallout. Exposure doses of hibakusha (A-bomb survivors) were estimated solely on the basis of the initial radiation. The effects of the residual radiation on hibakusha have been considered controversial; some groups assert that the residual radiation was negligible, but others refute that assertion. I recently discovered a six-decade-old article written in Japanese by a medical doctor, Gensaku Obo, from Hiroshima City. This article clearly indicates that the area around the epicenter in Hiroshima was heavily contaminated with residual radiation. It reports that non-hibakusha who entered Hiroshima soon after the blast suffered from severe acute radiation sickness, including burns, external injuries, fever, diarrhea, skin bleeding, sore throat and loss of hair, as if they were real hibakusha. This means that (i) some of those who entered Hiroshima in the early days after the blast could be regarded as indirect hibakusha; (ii) 'in-the-city-control' people in the Life Span Study (LSS) must have been irradiated more or less from residual radiation and could not function properly as the negative control; (iii) exposure doses of hibakusha were largely underestimated; and (iv) cancer risk in the LSS was largely overestimated. Obo's article is very important for understanding the health effects of A-bombs, so its essence has been translated from Japanese into English with the permission of the publisher. © The Author 2017. Published by Oxford University Press on behalf of The Japan Radiation Research Society and Japanese Society for Radiation Oncology.

  14. Research on large-scale wind farm modeling

    Science.gov (United States)

    Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng

    2017-01-01

    Due to the intermittent and fluctuating nature of wind energy, a large-scale wind farm connected to the grid affects the power system quite differently from a traditional power plant. It is therefore necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid, as well as the transient characteristics of the wind turbines when the grid is at fault; this in turn requires an effective wind turbine generator (WTG) model. As the doubly-fed VSCF wind turbine is currently the mainstream wind turbine type, this article first reviews the research progress on doubly-fed VSCF wind turbines and then describes the detailed process of building the model. It then surveys common wind farm modeling methods and points out the problems encountered. Because wide-area measurement systems (WAMS) are now widely used in power systems, online parameter identification of wind farm models based on the output characteristics of the wind farm has become possible; the article focuses on interpreting this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.

  15. Modelling and control of large cryogenic refrigerator

    International Nuclear Information System (INIS)

    Bonne, Francois

    2014-01-01

    This manuscript is concerned with both the modeling and the derivation of control schemes for large cryogenic refrigerators, in particular those subjected to highly variable pulsed heat loads. A model of each object that normally composes a large cryo-refrigerator is proposed, together with the methodology for gathering object models into the model of a subsystem. The manuscript also shows how to obtain a linear equivalent model of the subsystem. Based on the derived models, advanced control schemes are proposed: a linear quadratic controller for warm compression stations working with both two- and three-pressure states, and a constrained predictive controller for the cold box. The particularity of these control schemes is that they fit the computing and data storage capabilities of the Programmable Logic Controllers (PLCs) that are widely used in industry. The open-loop model prediction capability is assessed using experimental data. The developed control schemes are validated in simulation and experimentally on the 400W1.8K SBT cryogenic test facility and on the CERN LHC warm compression station. (author) [fr]

  16. X-ray computed microtomography characterizes the wound effect that causes sap flow underestimation by thermal dissipation sensors.

    Science.gov (United States)

    Marañón-Jiménez, S; Van den Bulcke, J; Piayda, A; Van Acker, J; Cuntz, M; Rebmann, C; Steppe, K

    2018-02-01

    Insertion of thermal dissipation (TD) sap flow sensors in living tree stems causes damage of the wood tissue, as is the case with other invasive methods. The subsequent wound formation is one of the main causes of underestimation of tree water-use measured by TD sensors. However, the specific alterations in wood anatomy in response to inserted sensors have not yet been characterized, and the linked dysfunctions in xylem conductance and sensor accuracy are still unknown. In this study, we investigate the anatomical mechanisms prompting sap flow underestimation and the dynamic process of wound formation. Successive sets of TD sensors were installed in the early, mid and end stage of the growing season in diffuse- and ring-porous trees, Fagus sylvatica (Linnaeus) and Quercus petraea ((Mattuschka) Lieblein), respectively. The trees were cut in autumn and additional sensors were installed in the cut stem segments as controls without wound formation. The wounded area and volume surrounding each sensor was then visually determined by X-ray computed microtomography (X-ray microCT). This technique allowed the characterization of vessel anatomical transformations such as tyloses formation, their spatial distribution and quantification of reduction in conductive area. MicroCT scans showed considerable formation of tyloses that reduced the conductive area of vessels surrounding the inserted TD probes, thus causing an underestimation in sap flux density (SFD) in both beech and oak. Discolored wood tissue was ellipsoidal, larger in the radial plane, more extensive in beech than in oak, and also for sensors installed for longer times. However, the severity of anatomical transformations did not always follow this pattern. Increased wound size with time, for example, did not result in larger SFD underestimation. This information helps us to better understand the mechanisms involved in wound effects with TD sensors and allows the provision of practical recommendations to reduce

  17. Modeling containment of large wildfires using generalized linear mixed-model analysis

    Science.gov (United States)

    Mark Finney; Isaac C. Grenfell; Charles W. McHugh

    2009-01-01

    Billions of dollars are spent annually in the United States to contain large wildland fires, but the factors contributing to suppression success remain poorly understood. We used a regression model (generalized linear mixed-model) to model containment probability of individual fires, assuming that containment was a repeated-measures problem (fixed effect) and...

  18. Model of large pool fires

    Energy Technology Data Exchange (ETDEWEB)

    Fay, J.A. [Department of Mechanical Engineering, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)]. E-mail: jfay@mit.edu

    2006-08-21

    A two zone entrainment model of pool fires is proposed to depict the fluid flow and flame properties of the fire. Consisting of combustion and plume zones, it provides a consistent scheme for developing non-dimensional scaling parameters for correlating and extrapolating pool fire visible flame length, flame tilt, surface emissive power, and fuel evaporation rate. The model is extended to include grey gas thermal radiation from soot particles in the flame zone, accounting for emission and absorption in both optically thin and thick regions. A model of convective heat transfer from the combustion zone to the liquid fuel pool, and from a water substrate to cryogenic fuel pools spreading on water, provides evaporation rates for both adiabatic and non-adiabatic fires. The model is tested against field measurements of large scale pool fires, principally of LNG, and is generally in agreement with experimental values of all variables.

  19. Model of large pool fires

    International Nuclear Information System (INIS)

    Fay, J.A.

    2006-01-01

    A two zone entrainment model of pool fires is proposed to depict the fluid flow and flame properties of the fire. Consisting of combustion and plume zones, it provides a consistent scheme for developing non-dimensional scaling parameters for correlating and extrapolating pool fire visible flame length, flame tilt, surface emissive power, and fuel evaporation rate. The model is extended to include grey gas thermal radiation from soot particles in the flame zone, accounting for emission and absorption in both optically thin and thick regions. A model of convective heat transfer from the combustion zone to the liquid fuel pool, and from a water substrate to cryogenic fuel pools spreading on water, provides evaporation rates for both adiabatic and non-adiabatic fires. The model is tested against field measurements of large scale pool fires, principally of LNG, and is generally in agreement with experimental values of all variables

  20. Comparison Between Overtopping Discharge in Small and Large Scale Models

    DEFF Research Database (Denmark)

    Helgason, Einar; Burcharth, Hans F.

    2006-01-01

    The present paper presents overtopping measurements from small scale model tests performed at the Hydraulic & Coastal Engineering Laboratory, Aalborg University, Denmark and large scale model tests performed at the Large Wave Channel, Hannover, Germany. Comparison between results obtained from...... small and large scale model tests show no clear evidence of scale effects for overtopping above a threshold value. In the large scale model no overtopping was measured for wave heights below Hs = 0.5m as the water sank into the voids between the stones on the crest. For low overtopping scale effects...

  1. Large Animal Stroke Models vs. Rodent Stroke Models, Pros and Cons, and Combination?

    Science.gov (United States)

    Cai, Bin; Wang, Ning

    2016-01-01

    Stroke is a leading cause of serious long-term disability worldwide and the second leading cause of death in many countries. Long-time attempts to salvage dying neurons via various neuroprotective agents have failed in stroke translational research, owing in part to the huge gap between animal stroke models and stroke patients, which also suggests that rodent models have limited predictive value and that alternate large animal models are likely to become important in future translational research. The genetic background, physiological characteristics, behavioral characteristics, and brain structure of large animals, especially nonhuman primates, are analogous to humans, and resemble humans in stroke. Moreover, relatively new regional imaging techniques, measurements of regional cerebral blood flow, and sophisticated physiological monitoring can be more easily performed on the same animal at multiple time points. As a result, we can use large animal stroke models to decrease the gap and promote translation of basic science stroke research. At the same time, we should not neglect the disadvantages of the large animal stroke model such as the significant expense and ethical considerations, which can be overcome by rodent models. Rodents should be selected as stroke models for initial testing and primates or cats are desirable as a second species, which was recommended by the Stroke Therapy Academic Industry Roundtable (STAIR) group in 2009.

  2. Constituent rearrangement model and large transverse momentum reactions

    International Nuclear Information System (INIS)

    Igarashi, Yuji; Imachi, Masahiro; Matsuoka, Takeo; Otsuki, Shoichiro; Sawada, Shoji.

    1978-01-01

    In this chapter, two models based on the constituent rearrangement picture for large p sub( t) phenomena are summarized. One is the quark-junction model, and the other is the correlating quark rearrangement model. Counting rules of the models apply to both two-body reactions and hadron productions. (author)

  3. A Full-Maxwell Approach for Large-Angle Polar Wander of Viscoelastic Bodies

    Science.gov (United States)

    Hu, H.; van der Wal, W.; Vermeersen, L. L. A.

    2017-12-01

    For large-angle long-term true polar wander (TPW) there are currently two types of nonlinear methods which give approximated solutions: those assuming that the rotational axis coincides with the axis of maximum moment of inertia (MoI), which simplifies the Liouville equation, and those based on the quasi-fluid approximation, which approximates the Love number. Recent studies show that both can have a significant bias for certain models. Therefore, we still lack an (semi)analytical method which can give exact solutions for large-angle TPW for a model based on Maxwell rheology. This paper provides a method which analytically solves the MoI equation and adopts an extended iterative procedure introduced in Hu et al. (2017) to obtain a time-dependent solution. The new method can be used to simulate the effect of a remnant bulge or models in different hydrostatic states. We show the effect of the viscosity of the lithosphere on long-term, large-angle TPW. We also simulate models without hydrostatic equilibrium and show that the choice of the initial stress-free shape for the elastic (or highly viscous) lithosphere of a given model is as important as its thickness for obtaining a correct TPW behavior. The initial shape of the lithosphere can be an alternative explanation to mantle convection for the difference between the observed and model predicted flattening. Finally, it is concluded that based on the quasi-fluid approximation, TPW speed on Earth and Mars is underestimated, while the speed of the rotational axis approaching the end position on Venus is overestimated.

  4. Large-signal modeling method for power FETs and diodes

    Energy Technology Data Exchange (ETDEWEB)

    Sun Lu; Wang Jiali; Wang Shan; Li Xuezheng; Shi Hui; Wang Na; Guo Shengping, E-mail: sunlu_1019@126.co [School of Electromechanical Engineering, Xidian University, Xi' an 710071 (China)

    2009-06-01

    Under a large signal drive level, a frequency domain black box model of the nonlinear scattering function is introduced for power FETs and diodes. A time domain measurement system and a calibration method based on a digital oscilloscope are designed to extract the nonlinear scattering function of semiconductor devices. The extracted models reflect the real electrical performance of semiconductor devices and provide a new large-signal model for the design of microwave semiconductor circuits.

  5. Large-signal modeling method for power FETs and diodes

    International Nuclear Information System (INIS)

    Sun Lu; Wang Jiali; Wang Shan; Li Xuezheng; Shi Hui; Wang Na; Guo Shengping

    2009-01-01

    Under a large signal drive level, a frequency domain black box model of the nonlinear scattering function is introduced for power FETs and diodes. A time domain measurement system and a calibration method based on a digital oscilloscope are designed to extract the nonlinear scattering function of semiconductor devices. The extracted models reflect the real electrical performance of semiconductor devices and provide a new large-signal model for the design of microwave semiconductor circuits.

  6. SU-F-T-132: Variable RBE Models Predict Possible Underestimation of Vaginal Dose for Anal Cancer Patients Treated Using Single-Field Proton Treatments

    Energy Technology Data Exchange (ETDEWEB)

    McNamara, A; Underwood, T; Wo, J; Paganetti, H [Massachusetts General Hospital & Harvard Medical School, Boston, MA (United States)

    2016-06-15

    Purpose: Anal cancer patients treated using a posterior proton beam may be at risk of vaginal wall injury due to the increased linear energy transfer (LET) and relative biological effectiveness (RBE) at the beam distal edge. We investigate the vaginal dose received. Methods: Five patients treated for anal cancer with proton pencil beam scanning were considered, all treated to a prescription dose of 54 Gy(RBE) over 28–30 fractions. Dose and LET distributions were calculated using the Monte Carlo simulation toolkit TOPAS. In addition to the standard assumption of a fixed RBE of 1.1, variable RBE was considered via the application of published models. Dose volume histograms (DVHs) were extracted for the planning treatment volume (PTV) and vagina, the latter being used to calculate the vaginal normal tissue complication probability (NTCP). Results: Compared to the assumption of a fixed RBE of 1.1, the variable RBE model predicts a dose increase of approximately 3.3 ± 1.7 Gy at the end of beam range. NTCP parameters for the vagina are incomplete in the current literature; however, inferring value ranges from the existing data, we use D50 = 50 Gy and LKB model parameters a=1–2 and m=0.2–0.4. We estimate the NTCP for the vagina to be 37–48% and 42–47% for the fixed and variable RBE cases, respectively. Additionally, a difference in the dose distribution was observed between the analytical calculation and Monte Carlo methods. We find that the target dose is overestimated on average by approximately 1–2%. Conclusion: For patients treated with posterior beams, the vaginal wall may coincide with the distal end of the proton beam and may receive a substantial increase in dose if variable RBE models are applied compared to using the current clinical standard of RBE equal to 1.1. This could potentially lead to underestimating toxicities when treating with protons.
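    The LKB model mentioned above can be written as NTCP = Φ((gEUD − D50)/(m·D50)), with gEUD computed from the DVH. A minimal sketch using the parameter values quoted in the abstract (D50 = 50 Gy, m = 0.2–0.4) follows; the two-bin DVH and the volume-effect exponent convention are assumptions for illustration, not the authors' data.

    ```python
    import math

    def geud(dose_gy, vol_fracs, a):
        """Generalized EUD from a differential DVH (exponent convention assumed)."""
        return sum(v * d ** a for d, v in zip(dose_gy, vol_fracs)) ** (1.0 / a)

    def lkb_ntcp(eud_gy, d50_gy=50.0, m=0.3):
        """LKB complication probability: Phi((EUD - D50) / (m * D50))."""
        t = (eud_gy - d50_gy) / (m * d50_gy)
        return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

    # Hypothetical two-bin vaginal DVH: half the volume at 55 Gy, half at 45 Gy
    eud = geud([55.0, 45.0], [0.5, 0.5], a=2.0)
    print(round(lkb_ntcp(eud), 2))  # -> 0.51
    ```

    With EUD near D50, the NTCP sits near 50%, consistent with the 37–48% range the abstract reports; the dose boost at the distal edge shifts the EUD and hence the NTCP upward.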

  7. Large scale stochastic spatio-temporal modelling with PCRaster

    NARCIS (Netherlands)

    Karssenberg, D.J.; Drost, N.; Schmitz, O.; Jong, K. de; Bierkens, M.F.P.

    2013-01-01

    PCRaster is a software framework for building spatio-temporal models of land surface processes (http://www.pcraster.eu). Building blocks of models are spatial operations on raster maps, including a large suite of operations for water and sediment routing. These operations are available to model

  8. Black carbon in the Arctic: the underestimated role of gas flaring and residential combustion emissions

    Directory of Open Access Journals (Sweden)

    A. Stohl

    2013-09-01

    annual mean Arctic BC surface concentrations due to residential combustion by 68% when using daily emissions. A large part (93%) of this systematic increase can be captured also when using monthly emissions; the increase is compensated by a decreased BC burden at lower latitudes. In a comparison with BC measurements at six Arctic stations, we find that using daily-varying residential combustion emissions and introducing gas flaring emissions leads to large improvements of the simulated Arctic BC, both in terms of mean concentration levels and simulated seasonality. Case studies based on BC and carbon monoxide (CO) measurements from the Zeppelin observatory appear to confirm flaring as an important BC source that can produce pollution plumes in the Arctic with a high BC / CO enhancement ratio, as expected for this source type. BC measurements taken during a research ship cruise in the White, Barents and Kara seas north of the region with strong flaring emissions reveal very high concentrations of the order of 200–400 ng m−3. The model underestimates these concentrations substantially, which indicates that the flaring emissions (and probably also other emissions in northern Siberia) are rather under- than overestimated in our emission data set. Our results suggest that it may not be "vertical transport that is too strong or scavenging rates that are too low" and "opposite biases in these processes" in the Arctic and elsewhere in current aerosol models, as suggested in a recent review article (Bond et al., Bounding the role of black carbon in the climate system: a scientific assessment, J. Geophys. Res., 2013), but missing emission sources and lacking time resolution of the emission data that are causing opposite model biases in simulated BC concentrations in the Arctic and in the mid-latitudes.

  9. Tick-borne encephalitis (TBE): an underestimated risk…still: report of the 14th annual meeting of the International Scientific Working Group on Tick-Borne Encephalitis (ISW-TBE).

    Science.gov (United States)

    Kunze, Ursula

    2012-06-01

    Today, the risk of getting tick-borne encephalitis (TBE) is still underestimated in many parts of Europe and worldwide. Therefore, the 14th meeting of the International Scientific Working Group on Tick-Borne Encephalitis (ISW-TBE) - a group of neurologists, general practitioners, clinicians, travel physicians, virologists, pediatricians, and epidemiologists - was held under the title "Tick-borne encephalitis: an underestimated risk…still". Among the discussed issues were: TBE, an underestimated risk in children, a case report in two Dutch travelers, the very emotional report of a tick victim, an overview of the epidemiological situation, investigations to detect new TBE cases in Italy, TBE virus (TBEV) strains circulation in Northern Europe, TBE Program of the European Centre for Disease Prevention and Control (ECDC), efforts to increase the TBE vaccination rate in the Czech Republic, positioning statement of the World Health Organization (WHO), and TBE in dogs. To answer the question raised above: Yes, the risk of getting TBE is underestimated in children and adults, because awareness is still too low. It is still underestimated in several areas of Europe, where, for a lack of human cases, TBEV is thought to be absent. It is underestimated in travelers, because they still do not know enough about the risk, and diagnostic awareness in non-endemic countries is still low. Copyright © 2012. Published by Elsevier GmbH. All rights reserved.

  10. Nuclear spectroscopy in large shell model spaces: recent advances

    International Nuclear Information System (INIS)

    Kota, V.K.B.

    1995-01-01

    Three different approaches are now available for carrying out nuclear spectroscopy studies in large shell model spaces and they are: (i) the conventional shell model diagonalization approach but taking into account new advances in computer technology; (ii) the recently introduced Monte Carlo method for the shell model; (iii) the spectral averaging theory, based on central limit theorems, in indefinitely large shell model spaces. The various principles, recent applications and possibilities of these three methods are described and the similarity between the Monte Carlo method and the spectral averaging theory is emphasized. (author). 28 refs., 1 fig., 5 tabs

  11. Long-term flow forecasts based on climate and hydrologic modeling: Uruguay River basin

    Science.gov (United States)

    Tucci, Carlos Eduardo Morelli; Clarke, Robin Thomas; Collischonn, Walter; da Silva Dias, Pedro Leite; de Oliveira, Gilvan Sampaio

    2003-07-01

    This paper describes a procedure for predicting seasonal flow in the Rio Uruguay drainage basin (area 75,000 km2, lying in Brazilian territory), using sequences of future daily rainfall given by the global climate model (GCM) of the Brazilian agency for climate prediction (Centro de Previsão de Tempo e Clima, or CPTEC). Sequences of future daily rainfall given by this model were used as input to a rainfall-runoff model appropriate for large drainage basins. Forecasts of flow in the Rio Uruguay were made for the period 1995-2001 of the full record, which began in 1940. Analysis showed that GCM forecasts underestimated rainfall over almost all the basin, particularly in winter, although interannual variability in regional rainfall was reproduced relatively well. A statistical procedure was used to correct for the underestimation of rainfall. When the corrected rainfall sequences were transformed to flow by the hydrologic model, forecasts of flow in the Rio Uruguay basin were better than forecasts based on historic mean or median flows by 37% for monthly flows and by 54% for 3-monthly flows.
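    The statistical correction the abstract refers to could take many forms; one common, minimal approach is a per-calendar-month multiplicative bias correction, sketched below under the assumption (hypothetical here) that daily GCM rainfall is scaled by the ratio of observed to modeled monthly climatology.

    ```python
    # Illustrative sketch of a monthly multiplicative bias correction for GCM
    # rainfall; the actual procedure used by the authors may differ.

    def monthly_scale_factors(obs_monthly_mean, gcm_monthly_mean):
        """One observed/modeled rainfall ratio per calendar month (12 values)."""
        return [o / g if g > 0 else 1.0
                for o, g in zip(obs_monthly_mean, gcm_monthly_mean)]

    def bias_correct(daily_rain_mm, calendar_months, factors):
        """Scale each daily GCM rainfall value by its calendar-month factor."""
        return [r * factors[m - 1] for r, m in zip(daily_rain_mm, calendar_months)]

    # Toy winter case where the GCM underestimates rainfall by 20%
    factors = monthly_scale_factors([100.0] * 12, [80.0] * 12)
    print(bias_correct([8.0, 0.0, 4.0], [7, 7, 7], factors))  # -> [10.0, 0.0, 5.0]
    ```

    The corrected daily series would then drive the rainfall-runoff model, as in the forecasting chain the abstract describes.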

  12. Modeling Temporal Behavior in Large Networks: A Dynamic Mixed-Membership Model

    Energy Technology Data Exchange (ETDEWEB)

    Rossi, R; Gallagher, B; Neville, J; Henderson, K

    2011-11-11

    Given a large time-evolving network, how can we model and characterize the temporal behaviors of individual nodes (and network states)? How can we model the behavioral transition patterns of nodes? We propose a temporal behavior model that captures the 'roles' of nodes in the graph and how they evolve over time. The proposed dynamic behavioral mixed-membership model (DBMM) is scalable, fully automatic (no user-defined parameters), non-parametric/data-driven (no specific functional form or parameterization), interpretable (identifies explainable patterns), and flexible (applicable to dynamic and streaming networks). Moreover, the interpretable behavioral roles are generalizable, computationally efficient, and natively supports attributes. We applied our model for (a) identifying patterns and trends of nodes and network states based on the temporal behavior, (b) predicting future structural changes, and (c) detecting unusual temporal behavior transitions. We use eight large real-world datasets from different time-evolving settings (dynamic and streaming). In particular, we model the evolving mixed-memberships and the corresponding behavioral transitions of Twitter, Facebook, IP-Traces, Email (University), Internet AS, Enron, Reality, and IMDB. The experiments demonstrate the scalability, flexibility, and effectiveness of our model for identifying interesting patterns, detecting unusual structural transitions, and predicting the future structural changes of the network and individual nodes.
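    The behavioral-transition idea above can be conveyed with a toy maximum-likelihood estimate of a role-to-role transition matrix from hard role labels at consecutive snapshots. The DBMM itself works with mixed memberships, so this sketch (invented here) only illustrates the flavor of the transition component, not the model's actual estimator.

    ```python
    # Toy sketch: estimate P(role j at t+1 | role i at t) from labeled snapshots.
    # Role labels and sequences are invented for illustration.

    def role_transition_matrix(sequences, n_roles):
        """Row-stochastic matrix P[i][j] ~ P(role j at t+1 | role i at t)."""
        counts = [[0] * n_roles for _ in range(n_roles)]
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):  # consecutive snapshots
                counts[a][b] += 1
        matrix = []
        for row in counts:
            total = sum(row)
            matrix.append([c / total for c in row] if total
                          else [1.0 / n_roles] * n_roles)  # uniform if unseen
        return matrix

    # Two nodes observed over four snapshots, roles labeled 0/1
    print(role_transition_matrix([[0, 1, 1, 0], [0, 0, 1, 1]], 2))
    ```

    Unusual transitions, in this picture, are those assigned low probability by the estimated matrix, which is the intuition behind the anomaly-detection application listed above.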

  13. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    the potential of the method proposed and the possibility to use individual-based GPS units for travel surveys in real-life large-scale multi-modal networks. Congestion is known to highly influence the way we act in the transportation network (and organise our lives), because of longer travel times..., but the reliability of the travel time also has a large impact on our travel choices. Consequently, in order to improve the realism of transport models, correct understanding and representation of two values that are related to the value of time (VoT) are essential: (i) the value of congestion (VoC), as the Vo... of observations of actual behaviour to obtain estimates of the (monetary) value of different travel time components, thereby increasing the behavioural realism of large-scale models. The generation of choice sets is a vital component in route choice models. This is, however, not a straight-forward task in real...

  14. Large-scale hydrology in Europe : observed patterns and model performance

    Energy Technology Data Exchange (ETDEWEB)

    Gudmundsson, Lukas

    2011-06-15

    In a changing climate, terrestrial water storages are of great interest as water availability impacts key aspects of ecosystem functioning. Thus, a better understanding of the variations of wet and dry periods will contribute to fully grasping processes of the earth system such as nutrient cycling and vegetation dynamics. Currently, river runoff from small, nearly natural catchments is one of the few variables of the terrestrial water balance that is regularly monitored with detailed spatial and temporal coverage on large scales. River runoff, therefore, provides a foundation to approach European hydrology with respect to observed patterns on large scales, and with regard to the ability of models to capture these. The analysis of observed river flow from small catchments focused on the identification and description of spatial patterns of simultaneous temporal variations of runoff. These are dominated by large-scale variations of climatic variables but also altered by catchment processes. It was shown that time series of annual low, mean and high flows follow the same atmospheric drivers. The observation that high flows are more closely coupled to large-scale atmospheric drivers than low flows indicates the increasing influence of catchment properties on runoff under dry conditions. Further, it was shown that the low-frequency variability of European runoff is dominated by two opposing centres of simultaneous variations, such that dry years in the north are accompanied by wet years in the south. Large-scale hydrological models are simplified representations of our current perception of the terrestrial water balance on large scales. Quantification of the models' strengths and weaknesses is a prerequisite for a reliable interpretation of simulation results. Model evaluations may also enable the detection of shortcomings in model assumptions and thus a refinement of the current perception of hydrological systems. The ability of a multi-model ensemble of nine large

  15. Homogenization of Large-Scale Movement Models in Ecology

    Science.gov (United States)

    Garlick, M.J.; Powell, J.A.; Hooten, M.B.; McFarlane, L.R.

    2011-01-01

    A difficulty in using diffusion models to predict large-scale animal population dispersal is that individuals move differently based on local information (as opposed to gradients) in differing habitat types. This can be accommodated by using ecological diffusion. However, real environments are often spatially complex, limiting application of a direct approach. Homogenization for partial differential equations has long been applied to Fickian diffusion (in which average individual movement is organized along gradients of habitat and population density). We derive a homogenization procedure for ecological diffusion and apply it to a simple model for chronic wasting disease in mule deer. Homogenization allows us to determine the impact of small-scale (10-100 m) habitat variability on large-scale (10-100 km) movement. The procedure generates asymptotic equations for solutions on the large scale with parameters defined by small-scale variation. The simplicity of this homogenization procedure is striking when compared to the multi-dimensional homogenization procedure for Fickian diffusion, and the method will be equally straightforward for more complex models. © 2010 Society for Mathematical Biology.
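    The distinction the abstract draws can be written out compactly. This is a sketch of the standard forms of the two operators, with $\mu(x)$ the habitat-dependent motility; the averaged coefficient $\bar{\mu}$ is indicated only schematically (a harmonic-type mean over the small-scale variable), not quoted from the paper:

```latex
% Fickian diffusion: movement organized along gradients
\frac{\partial u}{\partial t} = \nabla \cdot \left( \mu(x)\, \nabla u \right)
% Ecological diffusion: motility \mu(x) sits inside the Laplacian
\frac{\partial u}{\partial t} = \nabla^{2} \left( \mu(x)\, u \right)
% Homogenized large-scale equation, with an effective coefficient
% determined by small-scale variation (e.g., a harmonic-type mean):
\frac{\partial c}{\partial t} = \bar{\mu}\, \nabla^{2} c ,
\qquad
\bar{\mu} = \left( \frac{1}{|\Omega|} \int_{\Omega} \frac{dy}{\mu(y)} \right)^{-1}
```

    The key point is that moving $\mu(x)$ inside the Laplacian changes which average of the small-scale motility governs large-scale spread.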

  16. ABOUT MODELING COMPLEX ASSEMBLIES IN SOLIDWORKS – LARGE AXIAL BEARING

    Directory of Open Access Journals (Sweden)

    Cătălin IANCU

    2017-12-01

    Full Text Available This paper presents the modeling strategy used in SOLIDWORKS for modeling special items such as a large axial bearing, and the steps to be taken in order to obtain a better design. The paper presents the features used for modeling parts, and then the steps that must be taken to obtain the 3D model of a large axial bearing used in bucket-wheel equipment for charcoal moving.

  17. Non-differential underestimation may cause a threshold effect of exposure to appear as a dose-response relationship

    NARCIS (Netherlands)

    Verkerk, P. H.; Buitendijk, S. E.

    1992-01-01

    It is generally believed that non-differential misclassification will lead to a bias toward the null value. However, using one graphical and one numerical example, we show that in situations where underestimation, rather than overestimation, is the problem, non-differential misclassification may lead to

  18. A multi-resolution assessment of the Community Multiscale Air Quality (CMAQ) model v4.7 wet deposition estimates for 2002–2006

    Directory of Open Access Journals (Sweden)

    K. W. Appel

    2011-05-01

    mixed throughout the year, with the model largely underestimating NO3 wet deposition in the spring and summer in the eastern US, while the model has a relatively small bias in the fall and winter. Model estimates of NO3 wet deposition tend to be slightly lower for the 36-km simulation as compared to the 12-km simulation, particularly in the spring. The underestimation of NO3 wet deposition in the spring and summer is due in part to a lack of lightning-generated NO emissions in the upper troposphere, which can be a large source of NO in the spring and summer when lightning activity is high. CMAQ model simulations that include production of NO from lightning show a significant improvement in the NO3 wet deposition estimates in the eastern US in the summer. Overall, performance for the 36-km and 12-km CMAQ model simulations is similar for the eastern US, while for the western US the performance of the 36-km simulation is generally not as good as either eastern US simulation, which is not entirely unexpected given the complex topography of the western US.

  19. Underestimation of weight and its associated factors in overweight and obese university students from 21 low, middle and emerging economy countries.

    Science.gov (United States)

    Peltzer, Karl; Pengpid, Supa

    2015-01-01

    Awareness of overweight status is an important factor in weight control and may have more impact on one's decision to lose weight than objective weight status. The purpose of this study was to assess the prevalence of underestimation of overweight/obesity and its associated factors among university students from 21 low, middle and emerging economy countries. In a cross-sectional survey the total sample included 15,068 undergraduate university students (mean age 20.8, SD=2.8, age range of 16-30 years) from 21 countries. Anthropometric measurements and a self-administered questionnaire were used to collect data. The prevalence of weight underestimation (rating oneself as normal or underweight) among overweight or obese university students was 33.3% (41% in men and 25.1% in women); among overweight students, 39% felt they were of normal weight or underweight, and among obese students 67% did not rate themselves as obese or very overweight. In multivariate logistic regression analysis, being male, poor subjective health status, lack of overweight health risk awareness, lack of importance attached to losing weight, not trying and not dieting to lose weight, and eating breakfast regularly were associated with underestimation of weight in overweight and obese university students. The study found a high prevalence of underestimation of overweight/obesity among university students. Several of the factors identified can be utilized in health promotion programmes, including diet and weight management behaviours that address inaccurate weight perceptions in the design of weight control interventions, in particular for men. Copyright © 2014 Asian Oceanian Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.

  20. Echocardiography underestimates stroke volume and aortic valve area: implications for patients with small-area low-gradient aortic stenosis.

    Science.gov (United States)

    Chin, Calvin W L; Khaw, Hwan J; Luo, Elton; Tan, Shuwei; White, Audrey C; Newby, David E; Dweck, Marc R

    2014-09-01

    Discordance between a small aortic valve area (AVA) and a low mean pressure gradient (MPG) may arise in part from echocardiographic underestimation of the left ventricular outflow tract area (LVOTarea) and stroke volume, alongside inconsistencies in recommended thresholds. One hundred thirty-three patients with mild to severe AS and 33 control individuals underwent comprehensive echocardiography and cardiovascular magnetic resonance imaging (MRI). Stroke volume and LVOTarea were calculated using echocardiography and MRI, and the effects on AVA estimation were assessed. The relationship between AVA and MPG measurements was then modelled with nonlinear regression and consistent thresholds for these parameters calculated. Finally, the effect of these modified AVA measurements and novel thresholds on the number of patients with small-area low-gradient AS was investigated. Compared with MRI, echocardiography underestimated LVOTarea (n = 40; -0.7 cm(2); 95% confidence interval [CI], -2.6 to 1.3), stroke volumes (-6.5 mL/m(2); 95% CI, -28.9 to 16.0) and consequently, AVA (-0.23 cm(2); 95% CI, -1.01 to 0.59). Moreover, an AVA of 1.0 cm(2) corresponded to an MPG of 24 mm Hg based on echocardiographic measurements and 37 mm Hg after correction with MRI-derived stroke volumes. Based on conventional measures, 56 patients had discordant small-area low-gradient AS. Using MRI-derived stroke volumes and the revised thresholds, a 48% reduction in discordance was observed (n = 29). Echocardiography underestimated LVOTarea, stroke volume, and therefore AVA, compared with MRI. The thresholds based on current guidelines were also inconsistent. In combination, these factors explain > 40% of patients with discordant small-area low-gradient AS. Copyright © 2014 Canadian Cardiovascular Society. Published by Elsevier Inc. All rights reserved.

  1. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  2. Citation analysis may severely underestimate the impact of clinical research as compared to basic research.

    Science.gov (United States)

    van Eck, Nees Jan; Waltman, Ludo; van Raan, Anthony F J; Klautz, Robert J M; Peul, Wilco C

    2013-01-01

    Citation analysis has become an important tool for research performance assessment in the medical sciences. However, different areas of medical research may have considerably different citation practices, even within the same medical field. Because of this, it is unclear to what extent citation-based bibliometric indicators allow for valid comparisons between research units active in different areas of medical research. A visualization methodology is introduced that reveals differences in citation practices between medical research areas. The methodology extracts terms from the titles and abstracts of a large collection of publications and uses these terms to visualize the structure of a medical field and to indicate how research areas within this field differ from each other in their average citation impact. Visualizations are provided for 32 medical fields, defined based on journal subject categories in the Web of Science database. The analysis focuses on three fields: Cardiac & cardiovascular systems, Clinical neurology, and Surgery. In each of these fields, there turn out to be large differences in citation practices between research areas. Low-impact research areas tend to focus on clinical intervention research, while high-impact research areas are often more oriented toward basic and diagnostic research. Popular bibliometric indicators, such as the h-index and the impact factor, do not correct for differences in citation practices between medical fields. These indicators therefore cannot be used to make accurate between-field comparisons. More sophisticated bibliometric indicators do correct for field differences but still fail to take into account within-field heterogeneity in citation practices. As a consequence, the citation impact of clinical intervention research may be substantially underestimated in comparison with basic and diagnostic research.
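    The normalization issue can be made concrete with a toy indicator: each paper's citation count divided by the mean of its (field, year) group. This is a generic sketch, not the paper's methodology, and the records are invented; note that such field-level normalization corrects between fields but, exactly as the abstract argues, still ignores heterogeneity within a field.

```python
from collections import defaultdict

def field_normalized_scores(pubs):
    """Divide each publication's citation count by the mean citation count
    of its (field, year) group -- a simple mean-normalized indicator."""
    groups = defaultdict(list)
    for p in pubs:
        groups[(p["field"], p["year"])].append(p["citations"])
    means = {k: sum(v) / len(v) for k, v in groups.items()}
    return [p["citations"] / means[(p["field"], p["year"])] for p in pubs]

# hypothetical records: clinical papers cited less than basic-research papers
pubs = [
    {"field": "surgery-clinical", "year": 2010, "citations": 2},
    {"field": "surgery-clinical", "year": 2010, "citations": 4},
    {"field": "surgery-basic", "year": 2010, "citations": 30},
]
scores = field_normalized_scores(pubs)
```

    If "surgery-clinical" and "surgery-basic" were lumped into one field, the clinical papers' scores would drop sharply, which is the within-field distortion the paper describes.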

  3. Modeling, Analysis, and Optimization Issues for Large Space Structures

    Science.gov (United States)

    Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)

    1983-01-01

    Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.

  4. An interactive display system for large-scale 3D models

    Science.gov (United States)

    Liu, Zijian; Sun, Kun; Tao, Wenbing; Liu, Liman

    2018-04-01

    With the improvement of 3D reconstruction theory and the rapid development of computer hardware technology, reconstructed 3D models are growing in scale and complexity. Models with tens of thousands of 3D points or triangular meshes are common in practical applications. Due to storage and computing power limitations, it is difficult to achieve real-time display of and interaction with large-scale 3D models in common 3D display software such as MeshLab. In this paper, we propose a display system for large-scale 3D scene models. We construct the LOD (Levels of Detail) model of the reconstructed 3D scene in advance, and then use an out-of-core, view-dependent multi-resolution rendering scheme to realize real-time display of the large-scale 3D model. With the proposed method, our display system is able to render in real time while roaming in the reconstructed scene, and 3D camera poses can also be displayed. Furthermore, memory consumption can be significantly decreased via an internal and external memory exchange mechanism, so that it is possible to display a large-scale reconstructed scene with millions of 3D points or triangular meshes on a regular PC with only 4GB RAM.
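    A common ingredient of view-dependent multi-resolution schemes like the one described is choosing, per node of the LOD hierarchy, the coarsest level whose geometric error projects below a pixel tolerance on screen. A minimal sketch follows; the function name, parameters, and default values are illustrative, not taken from the paper.

```python
import math

def select_lod(distance, geometric_errors, screen_height_px=1080,
               fov_y=1.0, tolerance_px=2.0):
    """Return the index of the coarsest LOD whose geometric error, projected
    to screen space at the given viewing distance, is at most tolerance_px.
    geometric_errors is ordered coarse -> fine (decreasing error, in world units)."""
    for lod, err in enumerate(geometric_errors):
        # perspective projection of a world-space error onto the screen
        projected_px = err * screen_height_px / (2.0 * distance * math.tan(fov_y / 2.0))
        if projected_px <= tolerance_px:
            return lod
    return len(geometric_errors) - 1  # fall back to the finest level

# nearby geometry needs a finer level than distant geometry
near_level = select_lod(100.0, [1.0, 0.1, 0.01])
far_level = select_lod(1000.0, [1.0, 0.1, 0.01])
```

    Coupled with an out-of-core cache, only the levels actually selected for the current viewpoint need to reside in memory, which is what keeps such systems within a small RAM budget.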

  5. Metallogenic model for continental volcanic-type rich and large uranium deposits

    International Nuclear Information System (INIS)

    Chen Guihua

    1998-01-01

    A metallogenic model for continental volcanic-type rich and large/super-large uranium deposits has been established on the basis of an analysis of the occurrence features and ore-forming mechanisms of some continental volcanic-type rich and large/super-large uranium deposits in the world. The model proposes that uranium-enriched granite or granitic basement is the foundation, premetallogenic polycyclic and multistage volcanic eruptions are prerequisites, an intense tectonic-extensional environment is the key to ore formation, and a relatively enclosed geologic setting is the reliable preservation condition of the deposit. Using the model, the author explains the occurrence regularities of some rich and large/super-large uranium deposits such as the Strelichof uranium deposit in Russia, the Dornot uranium deposit in Mongolia, the Olympic Dam Cu-U-Au-REE deposit in Australia, and uranium deposit No.460 and the Zhoujiashan uranium deposit in China, and then compares the above deposits with a large poor uranium deposit, No.661, as well

  6. Modeling and simulation of large HVDC systems

    Energy Technology Data Exchange (ETDEWEB)

    Jin, H.; Sood, V.K.

    1993-01-01

    This paper addresses the complexity and amount of work involved in preparing simulation data and implementing various converter control schemes, as well as the excessive simulation time involved in modelling and simulating large HVDC systems. The Power Electronic Circuit Analysis program (PECAN) is used to address these problems, and a large HVDC system with two dc links is simulated using PECAN. A benchmark HVDC system is studied to compare the simulation results with those from other packages. The simulation time and results are provided in the paper.

  7. Modeling sediment yield in small catchments at event scale: Model comparison, development and evaluation

    Science.gov (United States)

    Tan, Z.; Leung, L. R.; Li, H. Y.; Tesfa, T. K.

    2017-12-01

    Sediment yield (SY) has significant impacts on river biogeochemistry and aquatic ecosystems, but it is rarely represented in Earth System Models (ESMs). Existing SY models focus on estimating SY from large river basins or individual catchments, so it is not clear how well they simulate SY in ESMs at larger spatial scales and globally. In this study, we compare the strengths and weaknesses of eight well-known SY models in simulating annual mean SY at about 400 small catchments ranging in size from 0.22 to 200 km2 in the US, Canada and Puerto Rico. In addition, we also investigate the performance of these models in simulating event-scale SY at six catchments in the US using high-quality hydrological inputs. The model comparison shows that none of the models can reproduce the SY at large spatial scales, but the Morgan model performs better than the others despite its simplicity. In all model simulations, large underestimates occur in catchments with very high SY. A possible pathway to reduce the discrepancies is to incorporate sediment detachment by landsliding, which is currently not included in the models being evaluated. We propose a new SY model that is based on the Morgan model but includes a landsliding soil detachment scheme that is being developed. Along with the results of the model comparison and evaluation, preliminary findings from the revised Morgan model will be presented.

  8. Surgical repair of large cyclodialysis clefts.

    Science.gov (United States)

    Gross, Jacob B; Davis, Garvin H; Bell, Nicholas P; Feldman, Robert M; Blieden, Lauren S

    2017-05-11

    To describe a new surgical technique to effectively close large (>180 degrees) cyclodialysis clefts. Our method involves the use of procedures commonly associated with repair of retinal detachment and complex cataract extraction: phacoemulsification with placement of a capsular tension ring followed by pars plana vitrectomy and gas tamponade with light cryotherapy. We also used anterior segment optical coherence tomography (OCT) as a noninvasive mechanism to determine the extent of the clefts and compared those results with ultrasound biomicroscopy (UBM) and gonioscopy. This technique was used to repair large cyclodialysis clefts in 4 eyes. All 4 eyes had resolution of hypotony and improvement of visual acuity. One patient had an intraocular pressure spike requiring further surgical intervention. Anterior segment OCT imaging in all 4 patients showed a more extensive cleft than UBM or gonioscopy. This technique is effective in repairing large cyclodialysis clefts. Anterior segment OCT more accurately predicted the extent of each cleft, while UBM and gonioscopy both underestimated the size of the cleft.

  9. Analysis of Error Propagation Within Hierarchical Air Combat Models

    Science.gov (United States)

    2016-06-01

    values alone are propagated through layers of combat models, the final results will likely be biased, and risk underestimated. An air-to-air engagement... ANALYSIS OF ERROR PROPAGATION WITHIN HIERARCHICAL AIR COMBAT MODELS, by Salih Ilaslan, June 2016. Thesis Advisor: Thomas W. Lucas; Second Reader: Jeffrey

  10. Does the surface property of a disposable applanation tonometer account for its underestimation of intraocular pressure when compared with the Goldmann tonometer?

    Science.gov (United States)

    Osborne, Sarah F; Williams, Rachel; Batterbury, Mark; Wong, David

    2007-04-01

    Disposable tonometers are increasingly being adopted partly because of concerns over the transmission of variant Creutzfeldt-Jakob disease and partly for convenience. Recently, we have found one such tonometer (Tonojet by Luneau Ophthalmologie, France) underestimated the intraocular pressure (IOP). We hypothesized that this underestimation was caused by a difference in the surface property of the tonometers. A tensiometer was used to measure the suction force resulting from interfacial tension between a solution of lignocaine and fluorescein and the tonometers. The results showed that the suction force was significantly greater for the Goldmann compared to the Tonojet. The magnitude of this force was too small to account for the difference in IOP measurements. The Tonojet was less hydrophilic than the Goldmann, and the contact angle of the fluid was therefore greater. For a given tear film, less hydrophilic tonometers will tend to have thicker mires, and this may lead to underestimation of the IOP. When such disposable tonometers are used, it is recommended care should be taken to reject readings from thick mires.

  11. Hydrogen combustion modelling in large-scale geometries

    International Nuclear Information System (INIS)

    Studer, E.; Beccantini, A.; Kudriakov, S.; Velikorodny, A.

    2014-01-01

    Hydrogen risk mitigation based on catalytic recombiners cannot exclude the formation of flammable clouds during the course of a severe accident in a Nuclear Power Plant. The consequences of combustion processes have to be assessed based on existing knowledge and the state of the art in CFD combustion modelling. The Fukushima accidents have also revealed the need to take hydrogen explosion phenomena into account in risk management. Combustion modelling in large-scale geometries thus remains one of the open severe accident safety issues. At present there exists no combustion model which can accurately describe a combustion process inside a geometrical configuration typical of the Nuclear Power Plant (NPP) environment. Therefore the major attention in model development has to be paid to the adaptation of existing approaches, or the creation of new ones, capable of reliably predicting the possibility of flame acceleration in geometries of that type. A set of experiments performed previously in the RUT facility and the Heiss Dampf Reactor (HDR) facility is used as a validation database for the development of a three-dimensional gas dynamic model for the simulation of hydrogen-air-steam combustion in large-scale geometries. The combustion regimes include slow deflagration, fast deflagration, and detonation. Modelling is based on the Reactive Discrete Equation Method (RDEM), where the flame is represented as an interface separating reactants and combustion products. The transport of the progress variable is governed by different flame surface wrinkling factors. The results of numerical simulation are presented together with comparisons, critical discussions and conclusions. (authors)

  12. Modeling of 3D Aluminum Polycrystals during Large Deformations

    International Nuclear Information System (INIS)

    Maniatty, Antoinette M.; Littlewood, David J.; Lu Jing; Pyle, Devin

    2007-01-01

    An approach for generating, meshing, and modeling 3D polycrystals, with a focus on aluminum alloys, subjected to large deformation processes is presented. A Potts-type model is used to generate statistically representative grain structures with periodicity to allow scale-linking. The grain structures are compared to experimentally observed grain structures to validate that they are representative. A procedure for generating a geometric model from the voxel data is developed, allowing for adaptive meshing of the generated grain structure. Material behavior is governed by an appropriate crystal elasto-viscoplastic constitutive model. The elastic-viscoplastic model is implemented in a three-dimensional, finite deformation, mixed finite element program. In order to handle the large-scale problems of interest, a parallel implementation is utilized. A multiscale procedure is used to link larger-scale models of deformation processes to the polycrystal model, where periodic boundary conditions on the fluctuation field are enforced. Finite-element models of 3D polycrystal grain structures are presented along with observations made from these simulations.

  13. An accurate and simple large signal model of HEMT

    DEFF Research Database (Denmark)

    Liu, Qing

    1989-01-01

    A large-signal model of discrete HEMTs (high-electron-mobility transistors) has been developed. It is simple and suitable for SPICE simulation of hybrid digital ICs. The model parameters are extracted by using computer programs and data provided by the manufacturer. Based on this model, a hybrid...

  14. Improvement of PM10 prediction in East Asia using inverse modeling

    Science.gov (United States)

    Koo, Youn-Seo; Choi, Dae-Ryun; Kwon, Hi-Yong; Jang, Young-Kee; Han, Jin-Seok

    2015-04-01

    Aerosols from anthropogenic emissions in industrialized regions of China, as well as dust emissions from southern Mongolia and northern China that are transported along the prevailing northwesterly winds, have a large influence on air quality in Korea. The emission inventory in the East Asia region is an important factor in chemical transport modeling (CTM) for PM10 (particulate matter less than 10 μm in aerodynamic diameter) forecasts and air quality management in Korea. Most previous studies showed that predictions of PM10 mass concentration by CTMs were underestimated when compared with observational data. In order to close the gap between observations and CTM predictions, an inverse Bayesian approach with the Comprehensive Air quality Model with extensions (CAMx) as the forward model was applied to obtain optimized a posteriori PM10 emissions in East Asia. The predicted PM10 concentrations with a priori emissions were first compared with observations at monitoring sites in China and Korea for January and August 2008. The comparison showed that PM10 concentrations with a priori PM10 emissions for anthropogenic and dust sources were generally under-predicted. The result from the inverse modeling indicated that anthropogenic PM10 emissions in the industrialized and urbanized areas of China were underestimated, while dust emissions from desert and barren soil in southern Mongolia and northern China were overestimated. A priori PM10 emissions from northeastern China regions including Shenyang, Changchun, and Harbin were underestimated by about 300% (i.e., the ratio of a posteriori to a priori PM10 emissions was a factor of about 3). The predictions of PM10 concentrations with a posteriori emissions showed better agreement with the observations, implying that the inverse modeling minimized the discrepancies in the model predictions by improving PM10 emissions in East Asia.
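    The Bayesian update of emission scale factors can be sketched as a regularized least-squares problem. This is a toy stand-in for the paper's CAMx-based inversion: the sensitivity matrix, uncertainty values, and names are all illustrative assumptions, and a real inversion would use forward-model runs rather than an identity matrix.

```python
import numpy as np

def scale_emissions(S, obs, prior_scale, sigma_obs=1.0, sigma_prior=0.5):
    """Gaussian-Bayesian (Tikhonov-regularized) update of emission scale
    factors x, minimizing ||S x - obs||^2 / sigma_obs^2
    + ||x - prior_scale||^2 / sigma_prior^2. S[i, j] is the modeled
    concentration at receptor i per unit emission scale of source region j."""
    n = S.shape[1]
    A = S.T @ S / sigma_obs**2 + np.eye(n) / sigma_prior**2
    b = S.T @ obs / sigma_obs**2 + prior_scale / sigma_prior**2
    return np.linalg.solve(A, b)

# toy setup: two source regions, each seen by one receptor;
# region 0 is underestimated threefold in the a priori inventory
S = np.eye(2)
obs = np.array([3.0, 1.0])
prior = np.array([1.0, 1.0])
posterior = scale_emissions(S, obs, prior, sigma_obs=0.1, sigma_prior=0.5)
```

    With tight observation uncertainty, the posterior scale for region 0 moves close to the factor-of-3 suggested by the observations while staying slightly below it, pulled back by the prior; region 1, already consistent, is left unchanged.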

  15. Aero-Acoustic Modelling using Large Eddy Simulation

    International Nuclear Information System (INIS)

    Shen, W Z; Soerensen, J N

    2007-01-01

    The splitting technique for aero-acoustic computations is extended to simulate three-dimensional flow and acoustic waves from airfoils. The aero-acoustic model is coupled to a sub-grid-scale turbulence model for Large-Eddy Simulations. In the first test case, the model is applied to compute laminar flow past a NACA 0015 airfoil at a Reynolds number of 800, a Mach number of 0.2 and an angle of attack of 20 deg. The model is then applied to compute turbulent flow past a NACA 0015 airfoil at a Reynolds number of 100 000, a Mach number of 0.2 and an angle of attack of 20 deg. The predicted noise spectrum is compared to experimental data

  16. What Models and Satellites Tell Us (and Don't Tell Us) About Arctic Sea Ice Melt Season Length

    Science.gov (United States)

    Ahlert, A.; Jahn, A.

    2017-12-01

    Melt season length—the difference between the sea ice melt onset date and the sea ice freeze onset date—plays an important role in the radiation balance of the Arctic and the predictability of the sea ice cover. However, there are multiple possible definitions for sea ice melt and freeze onset in climate models, and none of them exactly correspond to the remote sensing definition. Using the CESM Large Ensemble model simulations, we show how this mismatch between model and remote sensing definitions of melt and freeze onset limits the utility of melt season remote sensing data for bias detection in models. It also opens up new questions about the precise physical meaning of the melt season remote sensing data. Despite these challenges, we find that the increase in melt season length in the CESM is not as large as that derived from remote sensing data, even when we account for internal variability and different definitions. At the same time, we find that the CESM ensemble members that have the largest trend in sea ice extent over the period 1979-2014 also have the largest melt season trend, driven primarily by the trend towards later freeze onsets. This might be an indication that an underestimation of the melt season length trend is one factor contributing to the generally underestimated sea ice loss within the CESM, and potentially climate models in general.

  17. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well large-scale models simulate the propagation from meteorological to hydrological

  18. Effect of modelling slum populations on influenza spread in Delhi

    Science.gov (United States)

    Chen, Jiangzhuo; Chu, Shuyu; Chungbaek, Youngyun; Khan, Maleq; Kuhlman, Christopher; Marathe, Achla; Mortveit, Henning; Vullikanti, Anil; Xie, Dawen

    2016-01-01

    Objectives This research studies the impact of influenza epidemic in the slum and non-slum areas of Delhi, the National Capital Territory of India, by taking proper account of slum demographics and residents’ activities, using a highly resolved social contact network of the 13.8 million residents of Delhi. Methods An SEIR model is used to simulate the spread of influenza on two different synthetic social contact networks of Delhi, one where slums and non-slums are treated the same in terms of their demographics and daily sets of activities and the other, where slum and non-slum regions have different attributes. Results Differences between the epidemic outcomes on the two networks are large. Time-to-peak infection is overestimated by several weeks, and the cumulative infection rate and peak infection rate are underestimated by 10–50%, when slum attributes are ignored. Conclusions Slum populations have a significant effect on influenza transmission in urban areas. Improper specification of slums in large urban regions results in underestimation of infections in the entire population and hence will lead to misguided interventions by policy planners. PMID:27687898
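    A compartmental SEIR model of the kind referenced above can be sketched in a few lines. This deterministic, well-mixed version is only a stand-in for the paper's network-based simulation on 13.8 million synthetic residents, and all parameter values are illustrative.

```python
def simulate_seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    """Deterministic SEIR model on population fractions, integrated with
    forward Euler. beta: transmission rate, sigma: 1/latent period,
    gamma: 1/infectious period (all per day)."""
    s, e, i, r = s0, e0, i0, r0
    history = [(s, e, i, r)]
    for _ in range(round(days / dt)):
        new_exposed = beta * s * i * dt       # S -> E
        new_infectious = sigma * e * dt       # E -> I
        new_recovered = gamma * i * dt        # I -> R
        s -= new_exposed
        e += new_exposed - new_infectious
        i += new_infectious - new_recovered
        r += new_recovered
        history.append((s, e, i, r))
    return history

# an epidemic with basic reproduction number beta/gamma = 2.5
history = simulate_seir(beta=0.5, sigma=0.25, gamma=0.2,
                        s0=0.999, e0=0.001, i0=0.0, r0=0.0, days=200)
s_end, e_end, i_end, r_end = history[-1]
```

    The paper's point is precisely that this well-mixed simplification misses contact-network structure: treating slum and non-slum populations as having the same mixing underestimates cumulative and peak infections.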

  19. Verifying large SDL-specifications using model checking

    NARCIS (Netherlands)

    Sidorova, N.; Steffen, M.; Reed, R.; Reed, J.

    2001-01-01

    In this paper we propose a methodology for model-checking based verification of large SDL specifications. The methodology is illustrated by a case study of an industrial medium-access protocol for wireless ATM. To cope with the state space explosion, the verification exploits the layered and modular

  20. Extending SME to Handle Large-Scale Cognitive Modeling.

    Science.gov (United States)

    Forbus, Kenneth D; Ferguson, Ronald W; Lovett, Andrew; Gentner, Dedre

    2017-07-01

    Analogy and similarity are central phenomena in human cognition, involved in processes ranging from visual perception to conceptual change. To capture this centrality requires that a model of comparison must be able to integrate with other processes and handle the size and complexity of the representations required by the tasks being modeled. This paper describes extensions to Structure-Mapping Engine (SME) since its inception in 1986 that have increased its scope of operation. We first review the basic SME algorithm, describe psychological evidence for SME as a process model, and summarize its role in simulating similarity-based retrieval and generalization. Then we describe five techniques now incorporated into the SME that have enabled it to tackle large-scale modeling tasks: (a) Greedy merging rapidly constructs one or more best interpretations of a match in polynomial time: O(n² log(n)); (b) Incremental operation enables mappings to be extended as new information is retrieved or derived about the base or target, to model situations where information in a task is updated over time; (c) Ubiquitous predicates model the varying degrees to which items may suggest alignment; (d) Structural evaluation of analogical inferences models aspects of plausibility judgments; (e) Match filters enable large-scale task models to communicate constraints to SME to influence the mapping process. We illustrate via examples from published studies how these enable it to capture a broader range of psychological phenomena than before. Copyright © 2016 Cognitive Science Society, Inc.

  1. Characteristics of vertical velocity in marine stratocumulus: comparison of large eddy simulations with observations

    International Nuclear Information System (INIS)

    Guo Huan; Liu Yangang; Daum, Peter H; Senum, Gunnar I; Tao, W-K

    2008-01-01

    We simulated a marine stratus deck sampled during the Marine Stratus/Stratocumulus Experiment (MASE) with a three-dimensional large eddy simulation (LES) model at different model resolutions. Various characteristics of the vertical velocity from the model simulations were evaluated against those derived from the corresponding aircraft in situ observations, focusing on standard deviation, skewness, kurtosis, probability density function (PDF), power spectrum, and structure function. Our results show that although the LES model captures reasonably well the lower-order moments (e.g., horizontal averages and standard deviations), it fails to simulate many aspects of the higher-order moments, such as kurtosis, especially near cloud base and cloud top. Further investigations of the PDFs, power spectra, and structure functions reveal that compared to the observations, the model generally underestimates relatively strong variations on small scales. The results also suggest that increasing the model resolutions improves the agreements between the model results and the observations in virtually all of the properties that we examined. Furthermore, the results indicate that a vertical grid size <10 m is necessary for accurately simulating even the standard-deviation profile, posing new challenges to computer resources.
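    The evaluated statistics (standard deviation, skewness, kurtosis) are standard sample moments of the vertical velocity; a generic sketch of how they are computed, not the authors' analysis code:

```python
import math

def moments(xs):
    """Mean, standard deviation, skewness, and kurtosis of a sample.
    Kurtosis is the raw fourth standardized moment (3.0 for a Gaussian)."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs) / n
    sd = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in xs) / (n * sd ** 3)
    kurt = sum((x - mean) ** 4 for x in xs) / (n * sd ** 4)
    return mean, sd, skew, kurt
```

    High kurtosis near cloud base and cloud top, as reported here, indicates intermittent strong updrafts/downdrafts that a coarse LES grid smooths away.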

  2. Spatially unresolved SED fitting can underestimate galaxy masses: a solution to the missing mass problem

    Science.gov (United States)

    Sorba, Robert; Sawicki, Marcin

    2018-05-01

    We perform spatially resolved, pixel-by-pixel Spectral Energy Distribution (SED) fitting on galaxies up to z ∼ 2.5 in the Hubble eXtreme Deep Field (XDF). Comparing stellar mass estimates from spatially resolved and spatially unresolved photometry we find that unresolved masses can be systematically underestimated by factors of up to 5. The ratio of the unresolved to resolved mass measurement depends on the galaxy's specific star formation rate (sSFR): at low sSFRs the bias is small, but above sSFR ∼ 10^-9.5 yr^-1 the discrepancy increases rapidly such that galaxies with sSFRs ∼ 10^-8 yr^-1 have unresolved mass estimates of only one-half to one-fifth of the resolved value. This result indicates that stellar masses estimated from spatially unresolved data sets need to be systematically corrected, in some cases by large amounts, and we provide an analytic prescription for applying this correction. We show that correcting stellar mass measurements for this bias changes the normalization and slope of the star-forming main sequence and reduces its intrinsic width; most dramatically, correcting for the mass bias increases the stellar mass density of the Universe at high redshift and can resolve the long-standing discrepancy between the directly measured cosmic SFR density at z ≳ 1 and that inferred from stellar mass densities (`the missing mass problem').

  3. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam's Window.

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E

    2016-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam's window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods.
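    In DMA the window is reselected dynamically at each point in time; the static selection rule behind Occam's window can nonetheless be sketched in a few lines. The log-score inputs below are hypothetical placeholders for log marginal likelihoods or log predictive scores:

```python
import math

def occams_window(log_scores, c=20.0):
    """Indices of models whose posterior probability is within a factor c
    of the best model (the Occam's window selection rule)."""
    cutoff = max(log_scores) - math.log(c)
    return [k for k, s in enumerate(log_scores) if s >= cutoff]

def model_weights(log_scores):
    """Normalized posterior model probabilities from log scores
    (computed stably by subtracting the maximum before exponentiating)."""
    m = max(log_scores)
    ws = [math.exp(s - m) for s in log_scores]
    total = sum(ws)
    return [w / total for w in ws]
```

    Averaging forecasts with `model_weights` over only the models retained by `occams_window` is what keeps the effective model space small enough for sequential updating.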

  4. Validation of the Two-Layer Model for Correcting Clear Sky Reflectance Near Clouds

    Science.gov (United States)

    Wen, Guoyong; Marshak, Alexander; Evans, K. Frank; Vamal, Tamas

    2014-01-01

    A two-layer model was developed in our earlier studies to estimate the clear-sky reflectance enhancement near clouds. This simple model accounts for the radiative interaction between boundary layer clouds and the molecular layer above, the major contribution to the reflectance enhancement near clouds at short wavelengths. We use LES/SHDOM-simulated 3D radiation fields to validate the two-layer model for the reflectance enhancement at 0.47 micrometers. We find: (a) the simple model captures the viewing angle dependence of the reflectance enhancement near cloud, suggesting the physics of the model is correct; and (b) the magnitude of the two-layer modeled enhancement agrees reasonably well with the "truth", with some expected underestimation. We further extend the model to include cloud-surface interaction using the Poisson model for broken clouds. We find that including cloud-surface interaction improves the correction, though it can introduce some overcorrection for large cloud albedo, large cloud optical depth, large cloud fraction, and large cloud aspect ratio. This overcorrection can be reduced by excluding scenes (10 km x 10 km) with large cloud fraction, for which the Poisson model is not designed. Further research is underway to account for the contribution of cloud-aerosol radiative interaction to the enhancement.

  5. Challenges of Modeling Flood Risk at Large Scales

    Science.gov (United States)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    Flood risk management is a major concern for many nations and for the insurance sector in places where this peril is insured. A prerequisite for risk management, whether in the public sector or in the private sector, is an accurate estimation of the risk. Mitigation measures and traditional flood management techniques are most successful when the problem is viewed at a large regional scale such that all inter-dependencies in a river network are well understood. From an insurance perspective, the jury is still out on whether flood is an insurable peril. However, with advances in modeling techniques and computer power it is possible to develop models that allow proper risk quantification at a scale suitable for a viable insurance market for the flood peril. In order to serve the insurance market, a model has to be event-simulation based and has to provide financial risk estimates that form the basis for risk pricing, risk transfer and risk management at all levels of the insurance industry at large. In short, for a collection of properties, henceforth referred to as a portfolio, the critical output of the model is an annual probability distribution of economic losses from a single flood occurrence (flood event) or from an aggregation of all events in any given year. In this paper, the challenges of developing such a model are discussed in the context of Great Britain, for which a model has been developed. The model comprises several physically motivated components so that the primary attributes of the phenomenon are accounted for. The first component, the rainfall generator, simulates a continuous series of rainfall events in space and time over thousands of years; these are physically realistic while maintaining the statistical properties of rainfall at all locations over the model domain. A physically based runoff generation module feeds all the rivers in Great Britain, whose total length of stream links amounts to about 60,000 km. A dynamical flow routing

  6. The costs of electricity systems with a high share of fluctuating renewables. A stochastic investment and dispatch optimization model for Europe

    International Nuclear Information System (INIS)

    Nagl, Stephan; Fuersch, Michaela; Lindenberger, Dietmar

    2012-01-01

    Renewable energies are meant to produce a large share of the future electricity demand. However, the availability of wind and solar power depends on local weather conditions and therefore weather characteristics must be considered when optimizing the future electricity mix. In this article we analyze the impact of the stochastic availability of wind and solar energy on the cost-minimal power plant mix and the related total system costs. To determine optimal conventional, renewable and storage capacities for different shares of renewables, we apply a stochastic investment and dispatch optimization model to the European electricity market. The model considers stochastic feed-in structures and full load hours of wind and solar technologies and different correlations between regions and technologies. Key findings include that deterministic investment and dispatch models overestimate the contribution of fluctuating renewables and underestimate total system costs. Furthermore, solar technologies are - relative to wind turbines - underestimated when neglecting negative correlations between wind speeds and solar radiation.

  7. Large deflection of viscoelastic beams using fractional derivative model

    International Nuclear Information System (INIS)

    Bahranini, Seyed Masoud Sotoodeh; Eghtesad, Mohammad; Ghavanloo, Esmaeal; Farid, Mehrdad

    2013-01-01

    This paper deals with large deflection of viscoelastic beams using a fractional derivative model. For this purpose, a nonlinear finite element formulation of viscoelastic beams in conjunction with the fractional derivative constitutive equations has been developed. The four-parameter fractional derivative model has been used to describe the constitutive equations. The deflected configuration for a uniform beam with different boundary conditions and loads is presented. The effect of the order of the fractional derivative on the large deflection of the cantilever viscoelastic beam is investigated after 10, 100, and 1000 hours. The main contribution of this paper is a finite element implementation for nonlinear analysis of the viscoelastic fractional model using the storage of both strain and stress histories. The validity of the present analysis is confirmed by comparing the results with those found in the literature.
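    Fractional derivative constitutive models are commonly discretized with the Grünwald-Letnikov expansion, which is what makes storing the full strain history necessary. A generic sketch of that discretization (not the authors' four-parameter finite element implementation):

```python
import math

def gl_fractional_derivative(f, t, alpha, h=1e-3):
    """Grünwald-Letnikov approximation of the order-alpha derivative of f at t:
    D^a f(t) ~ h^-a * sum_k (-1)^k C(alpha, k) f(t - k h),
    i.e., a weighted sum over the whole history of f."""
    n = int(t / h)
    coeff = 1.0          # (-1)^0 C(alpha, 0)
    total = 0.0
    for k in range(n + 1):
        total += coeff * f(t - k * h)
        coeff *= (k - alpha) / (k + 1)   # recurrence for (-1)^k C(alpha, k)
    return total / h ** alpha

# Half-derivative of f(t) = t at t = 1: exact value is 1 / Gamma(1.5)
approx = gl_fractional_derivative(lambda t: t, 1.0, 0.5)
```

    The recurrence avoids computing binomial coefficients directly, but the O(n) history sum per evaluation is the cost the paper's strain/stress-history storage addresses.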

  8. Misspecified poisson regression models for large-scale registry data: inference for 'large n and small p'.

    Science.gov (United States)

    Grøn, Randi; Gerds, Thomas A; Andersen, Per K

    2016-03-30

    Poisson regression is an important tool in register-based epidemiology where it is used to study the association between exposure variables and event rates. In this paper, we will discuss the situation with 'large n and small p', where n is the sample size and p is the number of available covariates. Specifically, we are concerned with modeling options when there are time-varying covariates that can have time-varying effects. One problem is that tests of the proportional hazards assumption, of no interactions between exposure and other observed variables, or of other modeling assumptions have large power due to the large sample size and will often indicate statistical significance even for numerically small deviations that are unimportant for the subject matter. Another problem is that information on important confounders may be unavailable. In practice, this situation may lead to simple working models that are then likely misspecified. To support and improve conclusions drawn from such models, we discuss methods for sensitivity analysis, for estimation of average exposure effects using aggregated data, and a semi-parametric bootstrap method to obtain robust standard errors. The methods are illustrated using data from the Danish national registries investigating the diabetes incidence for individuals treated with antipsychotics compared with the general unexposed population. Copyright © 2015 John Wiley & Sons, Ltd.
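    Estimation of average exposure effects from aggregated data, as discussed here, reduces in the simplest case to an incidence rate ratio with a Wald confidence interval. A minimal sketch with hypothetical counts, not the registry data of the study:

```python
import math

def rate_ratio(events_exposed, pyears_exposed, events_ref, pyears_ref):
    """Incidence rate ratio (exposed vs. reference) with a Wald 95% CI,
    computed from aggregated event counts and person-years at risk."""
    rr = (events_exposed / pyears_exposed) / (events_ref / pyears_ref)
    se_log = math.sqrt(1 / events_exposed + 1 / events_ref)
    lo = math.exp(math.log(rr) - 1.96 * se_log)
    hi = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lo, hi)

# Hypothetical: 30 events in 1000 person-years vs. 20 in 2000 person-years
rr, ci = rate_ratio(30, 1000, 20, 2000)
```

    With registry-scale n, such intervals become very narrow, which is exactly why the paper emphasizes sensitivity analysis and robust (bootstrap) standard errors over naive model-based ones.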

  9. Evaluation of cloud resolving model simulations of midlatitude cirrus with ARM and A-Train observations

    Science.gov (United States)

    Muehlbauer, A. D.; Ackerman, T. P.; Lawson, P.; Xie, S.; Zhang, Y.

    2015-12-01

    This paper evaluates cloud resolving model (CRM) and cloud system-resolving model (CSRM) simulations of a midlatitude cirrus case with comprehensive observations collected under the auspices of the Atmospheric Radiation Measurements (ARM) program and with spaceborne observations from the National Aeronautics and Space Administration (NASA) A-train satellites. Vertical profiles of temperature, relative humidity and wind speeds are reasonably well simulated by the CSRM and CRM but there are remaining biases in the temperature, wind speeds and relative humidity, which can be mitigated through nudging the model simulations toward the observed radiosonde profiles. Simulated vertical velocities are underestimated in all simulations except in the CRM simulations with grid spacings of 500 m or finer, which suggests that turbulent vertical air motions in cirrus clouds need to be parameterized in GCMs and in CSRM simulations with horizontal grid spacings on the order of 1 km. The simulated ice water content and ice number concentrations agree with the observations in the CSRM but are underestimated in the CRM simulations. The underestimation of ice number concentrations is consistent with the overestimation of radar reflectivity in the CRM simulations and suggests that the model produces too many large ice particles especially toward cloud base. Simulated cloud profiles are rather insensitive to perturbations in the initial conditions or the dimensionality of the model domain but the treatment of the forcing data has a considerable effect on the outcome of the model simulations. Despite considerable progress in observations and microphysical parameterizations, simulating the microphysical, macrophysical and radiative properties of cirrus remains challenging. Comparing model simulations with observations from multiple instruments and observational platforms is important for revealing model deficiencies and for providing rigorous benchmarks. However, there still is considerable

  10. Topic modeling for cluster analysis of large biological and medical datasets.

    Science.gov (United States)

    Zhao, Weizhong; Zou, Wen; Chen, James J

    2014-01-01

    The big data moniker is nowhere better deserved than to describe the ever-increasing prodigiousness and complexity of biological and medical datasets. New methods are needed to generate and test hypotheses, foster biological interpretation, and build validated predictors. Although multivariate techniques such as cluster analysis may allow researchers to identify groups, or clusters, of related variables, the accuracies and effectiveness of traditional clustering methods diminish for large and hyper dimensional datasets. Topic modeling is an active research field in machine learning and has been mainly used as an analytical tool to structure large textual corpora for data mining. Its ability to reduce high dimensionality to a small number of latent variables makes it suitable as a means for clustering or overcoming clustering difficulties in large biological and medical datasets. In this study, three topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, are proposed and tested on the cluster analysis of three large datasets: Salmonella pulsed-field gel electrophoresis (PFGE) dataset, lung cancer dataset, and breast cancer dataset, which represent various types of large biological or medical datasets. All three various methods are shown to improve the efficacy/effectiveness of clustering results on the three datasets in comparison to traditional methods. A preferable cluster analysis method emerged for each of the three datasets on the basis of replicating known biological truths. Topic modeling could be advantageously applied to the large datasets of biological or medical research. The three proposed topic model-derived clustering methods, highest probable topic assignment, feature selection and feature extraction, yield clustering improvements for the three different data types. Clusters more efficaciously represent truthful groupings and subgroupings in the data than traditional methods, suggesting
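    The first of the three proposed methods, highest probable topic assignment, amounts to labeling each sample with its argmax topic once a topic model has produced a document-topic probability matrix. A minimal sketch with a hypothetical matrix, not the study's pipeline:

```python
def assign_clusters(doc_topic):
    """Cluster each sample by its highest-probability topic
    (the 'highest probable topic assignment' strategy).
    doc_topic: rows are samples, columns are topic probabilities."""
    return [max(range(len(row)), key=row.__getitem__) for row in doc_topic]

# Hypothetical output of a topic model for four samples over three topics
labels = assign_clusters([
    [0.7, 0.2, 0.1],
    [0.1, 0.8, 0.1],
    [0.6, 0.3, 0.1],
    [0.2, 0.2, 0.6],
])
```

    The other two strategies (feature selection and feature extraction) instead feed the topic probabilities into a conventional clustering algorithm, using the low-dimensional topic space rather than the raw features.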

  11. Protein homology model refinement by large-scale energy optimization.

    Science.gov (United States)

    Park, Hahnbeom; Ovchinnikov, Sergey; Kim, David E; DiMaio, Frank; Baker, David

    2018-03-20

    Proteins fold to their lowest free-energy structures, and hence the most straightforward way to increase the accuracy of a partially incorrect protein structure model is to search for the lowest-energy nearby structure. This direct approach has met with little success for two reasons: first, energy function inaccuracies can lead to false energy minima, resulting in model degradation rather than improvement; and second, even with an accurate energy function, the search problem is formidable because the energy only drops considerably in the immediate vicinity of the global minimum, and there are a very large number of degrees of freedom. Here we describe a large-scale energy optimization-based refinement method that incorporates advances in both search and energy function accuracy that can substantially improve the accuracy of low-resolution homology models. The method refined low-resolution homology models into correct folds for 50 of 84 diverse protein families and generated improved models in recent blind structure prediction experiments. Analyses of the basis for these improvements reveal contributions from both the improvements in conformational sampling techniques and the energy function.

  12. Modelling of pollen dispersion in the atmosphere: evaluation with a continuous 1β+1δ lidar

    Science.gov (United States)

    Sicard, Michaël; Izquierdo, Rebeca; Jorba, Oriol; Alarcón, Marta; Belmonte, Jordina; Comerón, Adolfo; De Linares, Concepción; Baldasano, José Maria

    2018-04-01

    Pollen allergenicity plays an important role in human health and wellness. It is thus of great public interest to increase our knowledge of pollen grain behavior in the atmosphere (source, emission, processes involved during transport, etc.) at fine temporal and spatial scales. First simulations with the Barcelona Supercomputing Center NMMB/BSC-CTM model of Platanus and Pinus dispersion in the atmosphere were performed during a 5-day pollination event observed in Barcelona, Spain, between 27 and 31 March 2015. The simulations are compared to vertical profiles measured with the continuous Barcelona Micro Pulse Lidar system. First results show that the vertical distribution is well reproduced by the model in shape, but not in intensity: the model largely underestimates in the afternoon. Guidelines are proposed to improve the modeling of airborne pollen dispersion by numerical prediction models.

  13. Mathematical modeling of large floating roof reservoir temperature arena

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2018-03-01

    Full Text Available The current study simplifies the relevant components of a large floating roof tank and models its three-dimensional temperature field. The heat transfer involves exchange within the hot fluid in the oil tank, between the hot fluid and the tank wall, and between the tank wall and the external environment. A mathematical model of heat transfer and oil flow in the tank is used to simulate the temperature field of the stored oil. The oil temperature field of the large floating roof tank is obtained by numerical simulation; the dynamics of the central temperature over time are plotted, and the axial and radial temperature distributions in the tank are analyzed. The location of the low-temperature region in the tank is determined from the simulated temperature distribution. Finally, the calculated results are compared with field test data, and the agreement with the experimental results validates the calculation.

  14. The Hamburg large scale geostrophic ocean general circulation model. Cycle 1

    International Nuclear Information System (INIS)

    Maier-Reimer, E.; Mikolajewicz, U.

    1992-02-01

    The rationale for the Large Scale Geostrophic ocean circulation model (LSG-OGCM) is based on the observations that for a large scale ocean circulation model designed for climate studies, the relevant characteristic spatial scales are large compared with the internal Rossby radius throughout most of the ocean, while the characteristic time scales are large compared with the periods of gravity modes and barotropic Rossby wave modes. In the present version of the model, the fast modes have been filtered out by a conventional technique of integrating the full primitive equations, including all terms except the nonlinear advection of momentum, by an implicit time integration method. The free surface is also treated prognostically, without invoking a rigid lid approximation. The numerical scheme is unconditionally stable and has the additional advantage that it can be applied uniformly to the entire globe, including the equatorial and coastal current regions. (orig.)

  15. Modelling and measurements of wakes in large wind farms

    DEFF Research Database (Denmark)

    Barthelmie, Rebecca Jane; Rathmann, Ole; Frandsen, Sten Tronæs

    2007-01-01

    The paper presents research conducted in the Flow workpackage of the EU funded UPWIND project which focuses on improving models of flow within and downwind of large wind farms in complex terrain and offshore. The main activity is modelling the behaviour of wind turbine wakes in order to improve...

  16. Subgrid-scale models for large-eddy simulation of rotating turbulent channel flows

    Science.gov (United States)

    Silvis, Maurits H.; Bae, Hyunji Jane; Trias, F. Xavier; Abkar, Mahdi; Moin, Parviz; Verstappen, Roel

    2017-11-01

    We aim to design subgrid-scale models for large-eddy simulation of rotating turbulent flows. Rotating turbulent flows form a challenging test case for large-eddy simulation due to the presence of the Coriolis force. The Coriolis force conserves the total kinetic energy while transporting it from small to large scales of motion, leading to the formation of large-scale anisotropic flow structures. The Coriolis force may also cause partial flow laminarization and the occurrence of turbulent bursts. Many subgrid-scale models for large-eddy simulation are, however, primarily designed to parametrize the dissipative nature of turbulent flows, ignoring the specific characteristics of transport processes. We, therefore, propose a new subgrid-scale model that, in addition to the usual dissipative eddy viscosity term, contains a nondissipative nonlinear model term designed to capture transport processes, such as those due to rotation. We show that the addition of this nonlinear model term leads to improved predictions of the energy spectra of rotating homogeneous isotropic turbulence as well as of the Reynolds stress anisotropy in spanwise-rotating plane-channel flows. This work is financed by the Netherlands Organisation for Scientific Research (NWO) under Project Number 613.001.212.

  17. Large transverse momentum processes in a non-scaling parton model

    International Nuclear Information System (INIS)

    Stirling, W.J.

    1977-01-01

    The production of large transverse momentum mesons in hadronic collisions by the quark fusion mechanism is discussed in a parton model which gives logarithmic corrections to Bjorken scaling. It is found that the moments of the large transverse momentum structure function exhibit a simple scale breaking behaviour similar to the behaviour of the Drell-Yan and deep inelastic structure functions of the model. An estimate of corresponding experimental consequences is made and the extent to which analogous results can be expected in an asymptotically free gauge theory is discussed. A simple set of rules is presented for incorporating the logarithmic corrections to scaling into all covariant parton model calculations. (Auth.)

  18. Cyclone Activity in the Arctic From an Ensemble of Regional Climate Models (Arctic CORDEX)

    Science.gov (United States)

    Akperov, Mirseid; Rinke, Annette; Mokhov, Igor I.; Matthes, Heidrun; Semenov, Vladimir A.; Adakudlu, Muralidhar; Cassano, John; Christensen, Jens H.; Dembitskaya, Mariya A.; Dethloff, Klaus; Fettweis, Xavier; Glisan, Justin; Gutjahr, Oliver; Heinemann, Günther; Koenigk, Torben; Koldunov, Nikolay V.; Laprise, René; Mottram, Ruth; Nikiéma, Oumarou; Scinocca, John F.; Sein, Dmitry; Sobolowski, Stefan; Winger, Katja; Zhang, Wenxin

    2018-03-01

    The ability of state-of-the-art regional climate models to simulate cyclone activity in the Arctic is assessed based on an ensemble of 13 simulations from 11 models from the Arctic-CORDEX initiative. Some models employ large-scale spectral nudging techniques. Cyclone characteristics simulated by the ensemble are compared with the results forced by four reanalyses (ERA-Interim, National Centers for Environmental Prediction-Climate Forecast System Reanalysis, National Aeronautics and Space Administration-Modern-Era Retrospective analysis for Research and Applications Version 2, and Japan Meteorological Agency-Japanese 55-year reanalysis) in winter and summer for 1981-2010 period. In addition, we compare cyclone statistics between ERA-Interim and the Arctic System Reanalysis reanalyses for 2000-2010. Biases in cyclone frequency, intensity, and size over the Arctic are also quantified. Variations in cyclone frequency across the models are partly attributed to the differences in cyclone frequency over land. The variations across the models are largest for small and shallow cyclones for both seasons. A connection between biases in the zonal wind at 200 hPa and cyclone characteristics is found for both seasons. Most models underestimate zonal wind speed in both seasons, which likely leads to underestimation of cyclone mean depth and deep cyclone frequency in the Arctic. In general, the regional climate models are able to represent the spatial distribution of cyclone characteristics in the Arctic but models that employ large-scale spectral nudging show a better agreement with ERA-Interim reanalysis than the rest of the models. Trends also exhibit the benefits of nudging. Models with spectral nudging are able to reproduce the cyclone trends, whereas most of the nonnudged models fail to do so. However, the cyclone characteristics and trends are sensitive to the choice of nudged variables.

  19. A stochastic large deformation model for computational anatomy

    DEFF Research Database (Denmark)

    Arnaudon, Alexis; Holm, Darryl D.; Pai, Akshay Sadananda Uppinakudru

    2017-01-01

    In the study of shapes of human organs using computational anatomy, variations are found to arise from inter-subject anatomical differences, disease-specific effects, and measurement noise. This paper introduces a stochastic model for incorporating random variations into the Large Deformation...

  20. Modeling the impact of large-scale energy conversion systems on global climate

    International Nuclear Information System (INIS)

    Williams, J.

    There are three energy options which could satisfy a projected energy requirement of about 30 TW and these are the solar, nuclear and (to a lesser extent) coal options. Climate models can be used to assess the impact of large-scale deployment of these options. The impact of waste heat has been assessed using energy balance models and general circulation models (GCMs). Results suggest that the impacts are significant when the heat input is very high, and studies of more realistic scenarios are required. Energy balance models, radiative-convective models and a GCM have been used to study the impact of doubling the atmospheric CO2 concentration. State-of-the-art models estimate a surface temperature increase of 1.5-3.0 °C with large amplification near the poles, but much uncertainty remains. Very few model studies have been made of the impact of particles on global climate; more information on the characteristics of particle input is required. The impact of large-scale deployment of solar energy conversion systems has received little attention, but model studies suggest that large-scale changes in surface characteristics associated with such systems (surface heat balance, roughness and hydrological characteristics and ocean surface temperature) could have significant global climatic effects. (Auth.)

  1. Bayesian hierarchical model for large-scale covariance matrix estimation.

    Science.gov (United States)

    Zhu, Dongxiao; Hero, Alfred O

    2007-12-01

    Many bioinformatics problems implicitly depend on estimating large-scale covariance matrix. The traditional approaches tend to give rise to high variance and low accuracy due to "overfitting." We cast the large-scale covariance matrix estimation problem into the Bayesian hierarchical model framework, and introduce dependency between covariance parameters. We demonstrate the advantages of our approaches over the traditional approaches using simulations and OMICS data analysis.
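    The abstract does not spell out the hierarchical estimator, but a closely related and widely used way to combat covariance "overfitting" is shrinkage of the sample covariance toward a structured target. The sketch below is illustrative only (a simple diagonal-target shrinkage, not the authors' Bayesian hierarchical model):

```python
def shrunk_covariance(samples, lam=0.2):
    """Shrink the sample covariance toward a diagonal target with the average
    variance on the diagonal: (1 - lam) * S + lam * avg_var * I.
    samples: list of equal-length observation vectors; lam in [0, 1]."""
    n, p = len(samples), len(samples[0])
    means = [sum(x[j] for x in samples) / n for j in range(p)]
    s = [[sum((x[i] - means[i]) * (x[j] - means[j]) for x in samples) / (n - 1)
          for j in range(p)] for i in range(p)]
    avg_var = sum(s[i][i] for i in range(p)) / p
    return [[(1 - lam) * s[i][j] + (lam * avg_var if i == j else 0.0)
             for j in range(p)] for i in range(p)]
```

    Pulling entries toward a simple target reduces estimator variance at the price of some bias, the same bias-variance trade-off that the hierarchical prior over covariance parameters formalizes.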

  2. Modelling and measurements of wakes in large wind farms

    International Nuclear Information System (INIS)

    Barthelmie, R J; Rathmann, O; Frandsen, S T; Hansen, K S; Politis, E; Prospathopoulos, J; Rados, K; Cabezon, D; Schlez, W; Phillips, J; Neubert, A; Schepers, J G; Pijl, S P van der

    2007-01-01

    The paper presents research conducted in the Flow workpackage of the EU funded UPWIND project which focuses on improving models of flow within and downwind of large wind farms in complex terrain and offshore. The main activity is modelling the behaviour of wind turbine wakes in order to improve power output predictions

  3. Asymmetries of poverty: why global burden of disease valuations underestimate the burden of neglected tropical diseases.

    Directory of Open Access Journals (Sweden)

    Charles H King

    2008-03-01

    Full Text Available The disability-adjusted life year (DALY initially appeared attractive as a health metric in the Global Burden of Disease (GBD program, as it purports to be a comprehensive health assessment that encompassed premature mortality, morbidity, impairment, and disability. It was originally thought that the DALY would be useful in policy settings, reflecting normative valuations as a standardized unit of ill health. However, the design of the DALY and its use in policy estimates contain inherent flaws that result in systematic undervaluation of the importance of chronic diseases, such as many of the neglected tropical diseases (NTDs, in world health. The conceptual design of the DALY comes out of a perspective largely focused on the individual risk rather than the ecology of disease, thus failing to acknowledge the implications of context on the burden of disease for the poor. It is nonrepresentative of the impact of poverty on disability, which results in the significant underestimation of disability weights for chronic diseases such as the NTDs. Finally, the application of the DALY in policy estimates does not account for the nonlinear effects of poverty in the cost-utility analysis of disease control, effectively discounting the utility of comprehensively treating NTDs. The present DALY framework needs to be substantially revised if the GBD is to become a valid and useful system for determining health priorities.

  4. Accuracy and bias of ICT self-efficacy: an empirical study into students' over- and underestimation of their ICT competences

    NARCIS (Netherlands)

    Aesaert, K.; Voogt, J.; Kuiper, E.; van Braak, J.

    2017-01-01

    Most studies on the assessment of ICT competences use measures of ICT self-efficacy. These studies are often criticized for suffering from self-report bias, i.e. students can over- and/or underestimate their ICT competences. As such, taking bias and accuracy of ICT self-efficacy into account,

  5. Large-eddy simulation of the temporal mixing layer using the Clark model

    NARCIS (Netherlands)

    Vreman, A.W.; Geurts, B.J.; Kuerten, J.G.M.

    1996-01-01

    The Clark model for the turbulent stress tensor in large-eddy simulation is investigated from a theoretical and computational point of view. In order to be applicable to compressible turbulent flows, the Clark model has been reformulated. Actual large-eddy simulation of a weakly compressible,

  6. Global Bedload Flux Modeling and Analysis in Large Rivers

    Science.gov (United States)

    Islam, M. T.; Cohen, S.; Syvitski, J. P.

    2017-12-01

    Proper sediment transport quantification has long been an area of interest for both scientists and engineers in the fields of geomorphology and the management of rivers and coastal waters. Bedload flux is important for monitoring water quality and for sustainable development of coastal and marine bioservices. Bedload measurements, especially for large rivers, are extremely scarce across time, and many rivers have never been monitored. The scarcity of bedload measurements is particularly acute in developing countries, where changes in sediment yields are high. The paucity of bedload measurements is the result of 1) the nature of the problem (large spatial and temporal uncertainties), and 2) field costs, including the time-consuming nature of the measurement procedures (repeated bedform migration tracking, bedload samplers). Here we present a first-of-its-kind methodology for calculating bedload in large global rivers (basins >1,000 km). Evaluation of model skill is based on 113 bedload measurements. The model predictions are compared with an empirical model developed from the observational dataset in an attempt to evaluate the differences between a physically-based numerical model and a lumped relationship between bedload flux and fluvial and basin parameters (e.g., discharge, drainage area, lithology). The initial success of the study opens up various applications in global fluvial geomorphology (e.g., the relationship between suspended sediment (wash load) and bedload). Simulated results with known uncertainties offer a new research product as a valuable resource for the whole scientific community.
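The record does not state the form of the lumped empirical relationship; as an illustration of what such a relation typically looks like, the sketch below fits a generic power-law bedload rating, Qb = a·Q^b, by least squares in log space. The power-law form and the synthetic numbers are assumptions for illustration, not the study's actual model:

```python
import math

# Fit a power-law bedload rating Qb = a * Q**b by ordinary least
# squares on log-transformed data: log Qb = log a + b * log Q.
def fit_power_law(discharge, bedload):
    xs = [math.log(q) for q in discharge]
    ys = [math.log(qb) for qb in bedload]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic observations generated from Qb = 0.002 * Q**1.5 (no noise),
# purely to show that the fitting step recovers the coefficients.
q_obs = [100.0, 300.0, 1000.0, 5000.0]
qb_obs = [0.002 * q ** 1.5 for q in q_obs]
a, b = fit_power_law(q_obs, qb_obs)
```

A physically-based model replaces this single lumped curve with explicit hydraulics, which is exactly the comparison the abstract describes.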

  7. A large deformation viscoelastic model for double-network hydrogels

    Science.gov (United States)

    Mao, Yunwei; Lin, Shaoting; Zhao, Xuanhe; Anand, Lallit

    2017-03-01

    We present a large deformation viscoelasticity model for recently synthesized double-network hydrogels which consist of a covalently-crosslinked polyacrylamide network with long chains, and an ionically-crosslinked alginate network with short chains. Such double-network gels are highly stretchable and at the same time tough, because when stretched the crosslinks in the ionically-crosslinked alginate network rupture, resulting in distributed internal microdamage that dissipates a substantial amount of energy, while the configurational entropy of the covalently-crosslinked polyacrylamide network allows the gel to return to its original configuration after deformation. In addition to the large hysteresis during loading and unloading, these double-network hydrogels also exhibit a substantial rate-sensitive response during loading, but exhibit almost no rate-sensitivity during unloading. These features of large hysteresis and asymmetric rate-sensitivity are quite different from the response of conventional hydrogels. We limit our attention to modeling the complex viscoelastic response of such hydrogels under isothermal conditions. Our model is restricted in the sense that we have limited our attention to conditions under which one might neglect any diffusion of the water in the hydrogel - as might occur when the gel has a uniform initial value of the concentration of water, and the mobility of the water molecules in the gel is low relative to the time scale of the mechanical deformation. We also do not attempt to model the final fracture of such double-network hydrogels.

  8. Multiresolution comparison of precipitation datasets for large-scale models

    Science.gov (United States)

    Chun, K. P.; Sapriza Azuri, G.; Davison, B.; DeBeer, C. M.; Wheater, H. S.

    2014-12-01

    Gridded precipitation datasets are crucial for driving the large-scale models used in weather forecasting and climate research. However, the quality of precipitation products is usually validated individually. Comparing gridded precipitation products along with ground observations provides another avenue for investigating how precipitation uncertainty affects the performance of large-scale models. In this study, using data from a set of precipitation gauges over British Columbia and Alberta, we evaluate several widely used North American gridded products including the Canadian Gridded Precipitation Anomalies (CANGRD), the National Center for Environmental Prediction (NCEP) reanalysis, the Water and Global Change (WATCH) project, the thin plate spline smoothing algorithms (ANUSPLIN) and the Canadian Precipitation Analysis (CaPA). Based on verification criteria for various temporal and spatial scales, the results provide an assessment of possible applications for the various precipitation datasets. For long-term climate variation studies (~100 years), CANGRD, NCEP, WATCH and ANUSPLIN have different comparative advantages in terms of their resolution and accuracy. For synoptic and mesoscale precipitation patterns, CaPA provides appealing spatial coherence. In addition to the products comparison, various downscaling methods are also surveyed to explore new verification and bias-reduction methods for improving gridded precipitation outputs for large-scale models.
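The verification criteria themselves are not listed in the record; as a hedged sketch of how a gridded product can be scored against gauge observations at several temporal scales, the code below aggregates a daily series into coarser blocks and computes mean bias and Pearson correlation at each scale. The aggregation windows and the synthetic data are assumptions for illustration only:

```python
import random

# Score a "product" series against "gauge" observations at several
# temporal aggregation scales: sum daily values into blocks, then
# compute mean bias and Pearson correlation per scale.

def aggregate(series, window):
    return [sum(series[i:i + window])
            for i in range(0, len(series) - window + 1, window)]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def verify(product, gauge, windows=(1, 7, 30)):
    out = {}
    for w in windows:
        p, g = aggregate(product, w), aggregate(gauge, w)
        bias = sum(pi - gi for pi, gi in zip(p, g)) / len(p)
        out[w] = {"bias": bias, "corr": pearson(p, g)}
    return out

random.seed(0)
gauge = [random.uniform(0.0, 20.0) for _ in range(90)]  # synthetic daily obs
product = [v * 1.1 for v in gauge]                      # product with +10% wet bias
scores = verify(product, gauge)
```

A product can rank well at one scale and poorly at another, which is why the abstract's multiresolution comparison matters.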

  9. Use of models in large-area forest surveys: comparing model-assisted, model-based and hybrid estimation

    Science.gov (United States)

    Goran Stahl; Svetlana Saarela; Sebastian Schnell; Soren Holm; Johannes Breidenbach; Sean P. Healey; Paul L. Patterson; Steen Magnussen; Erik Naesset; Ronald E. McRoberts; Timothy G. Gregoire

    2016-01-01

    This paper focuses on the use of models for increasing the precision of estimators in large-area forest surveys. It is motivated by the increasing availability of remotely sensed data, which facilitates the development of models predicting the variables of interest in forest surveys. We present, review and compare three different estimation frameworks where...

  10. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight about the performance of the computers involved when used to solve large-scale scientific...... problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216...

  11. Is oral cancer incidence among patients with oral lichen planus/oral lichenoid lesions underestimated?

    Science.gov (United States)

    Gonzalez-Moles, M A; Gil-Montoya, J A; Ruiz-Avila, I; Bravo, M

    2017-02-01

    Oral lichen planus (OLP) and oral lichenoid lesions (OLL) are considered potentially malignant disorders with a cancer incidence of around 1% of cases, although this estimation is controversial. The aim of this study was to analyze the cancer incidence in a case series of patients with OLP and OLL and to explore clinicopathological aspects that may cause underestimation of the cancer incidence in these diseases. A retrospective study was conducted of 102 patients diagnosed with OLP (n = 21, 20.58%) or OLL (n = 81) between January 2006 and January 2016. Patients were informed of the risk of malignization and followed up annually. The number of sessions programmed for each patient was compared with the number actually attended. Follow-up was classified as complete (100% attendance), good (75-99%), moderate (25-74%), or poor (<25% attendance) compliance. Cancer developed in four patients (3.9%), three males and one female. One of these developed three carcinomas, which were diagnosed at the follow-up visit (two in the lower gingiva, one in the floor of the mouth); one had OLL and the other three had OLP. The carcinoma developed in mucosal areas with no OLP or OLL involvement in three of these patients, while OLP and cancer were diagnosed simultaneously in the fourth. Of the six carcinomas diagnosed, five (83.3%) were T1 and one (16.7%) T2. None were N+, and all patients remain alive and disease-free. The cancer incidence in OLP and OLL appears to be underestimated due to the strict exclusion criteria usually imposed. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  12. Modelling hydrologic and hydrodynamic processes in basins with large semi-arid wetlands

    Science.gov (United States)

    Fleischmann, Ayan; Siqueira, Vinícius; Paris, Adrien; Collischonn, Walter; Paiva, Rodrigo; Pontes, Paulo; Crétaux, Jean-François; Bergé-Nguyen, Muriel; Biancamaria, Sylvain; Gosset, Marielle; Calmant, Stephane; Tanimoun, Bachir

    2018-06-01

    Hydrological and hydrodynamic models are core tools for simulation of large basins and complex river systems associated to wetlands. Recent studies have pointed towards the importance of online coupling strategies, representing feedbacks between floodplain inundation and vertical hydrology. Especially across semi-arid regions, soil-floodplain interactions can be strong. In this study, we included a two-way coupling scheme in a large scale hydrological-hydrodynamic model (MGB) and tested different model structures, in order to assess which processes are important to be simulated in large semi-arid wetlands and how these processes interact with water budget components. To demonstrate benefits from this coupling over a validation case, the model was applied to the Upper Niger River basin encompassing the Niger Inner Delta, a vast semi-arid wetland in the Sahel Desert. Simulation was carried out from 1999 to 2014 with daily TMPA 3B42 precipitation as forcing, using both in-situ and remotely sensed data for calibration and validation. Model outputs were in good agreement with discharge and water levels at stations both upstream and downstream of the Inner Delta (Nash-Sutcliffe Efficiency (NSE) >0.6 for most gauges), as well as for flooded areas within the Delta region (NSE = 0.6; r = 0.85). Model estimates of annual water losses across the Delta varied between 20.1 and 30.6 km3/yr, while annual evapotranspiration ranged between 760 mm/yr and 1130 mm/yr. Evaluation of model structure indicated that representation of both floodplain channels hydrodynamics (storage, bifurcations, lateral connections) and vertical hydrological processes (floodplain water infiltration into soil column; evapotranspiration from soil and vegetation and evaporation of open water) are necessary to correctly simulate flood wave attenuation and evapotranspiration along the basin. Two-way coupled models are necessary to better understand processes in large semi-arid wetlands. Finally, such coupled
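The record describes, but does not specify, the two-way coupling between floodplain inundation and vertical hydrology; the toy water-balance step below illustrates the idea with two linked storages, where floodplain water infiltrates into the soil column and both stores lose water to evaporation. All parameter values are invented for illustration and are not from the MGB model:

```python
# One explicit daily step of a toy two-way coupled water balance:
# a floodplain store feeds the soil column by infiltration (the
# feedback a runoff-only model would miss), and both stores lose
# water to open-water evaporation / evapotranspiration.

def step(flood, soil, soil_cap, infil_rate=0.2, open_evap=2.0, et=1.5):
    """Advance storages (mm) by one day; returns new (flood, soil)."""
    infiltration = min(infil_rate * flood, soil_cap - soil)
    flood = max(flood - infiltration - open_evap, 0.0)
    soil = min(soil + infiltration, soil_cap) - et
    return flood, max(soil, 0.0)

flood, soil = 100.0, 50.0
for _ in range(10):
    flood, soil = step(flood, soil, soil_cap=120.0)
```

Even this crude sketch shows the qualitative behavior the abstract tests for: the flood wave is attenuated (floodplain storage drains into the soil) while the soil store sustains evapotranspiration between floods.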

  13. Underestimating nearby nature: affective forecasting errors obscure the happy path to sustainability.

    Science.gov (United States)

    Nisbet, Elizabeth K; Zelenski, John M

    2011-09-01

    Modern lifestyles disconnect people from nature, and this may have adverse consequences for the well-being of both humans and the environment. In two experiments, we found that although outdoor walks in nearby nature made participants much happier than indoor walks did, participants made affective forecasting errors, such that they systematically underestimated nature's hedonic benefit. The pleasant moods experienced on outdoor nature walks facilitated a subjective sense of connection with nature, a construct strongly linked with concern for the environment and environmentally sustainable behavior. To the extent that affective forecasts determine choices, our findings suggest that people fail to maximize their time in nearby nature and thus miss opportunities to increase their happiness and relatedness to nature. Our findings suggest a happy path to sustainability, whereby contact with nature fosters individual happiness and environmentally responsible behavior.

  14. Large-eddy simulation of atmospheric flow over complex terrain

    Energy Technology Data Exchange (ETDEWEB)

    Bechmann, A.

    2006-11-15

    performed. Speed-up and turbulence intensities show good agreement with measurements, except 400 m downstream of the hill summit, where the speed-up is underestimated. Flow over a cube in a thick turbulent boundary layer is the final test case. The turbulence model's ability to capture the physics of the large separated region downstream of the cube is demonstrated. The turbulence model is, however, shown to have trouble with very large values of roughness. (au)

  15. Dynamic Model Averaging in Large Model Spaces Using Dynamic Occam’s Window*

    Science.gov (United States)

    Onorante, Luca; Raftery, Adrian E.

    2015-01-01

    Bayesian model averaging has become a widely used approach to accounting for uncertainty about the structural form of the model generating the data. When data arrive sequentially and the generating model can change over time, Dynamic Model Averaging (DMA) extends model averaging to deal with this situation. Often in macroeconomics, however, many candidate explanatory variables are available and the number of possible models becomes too large for DMA to be applied in its original form. We propose a new method for this situation which allows us to perform DMA without considering the whole model space, but using a subset of models and dynamically optimizing the choice of models at each point in time. This yields a dynamic form of Occam’s window. We evaluate the method in the context of the problem of nowcasting GDP in the Euro area. We find that its forecasting performance compares well with that of other methods. PMID:26917859
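The record outlines the method at a high level; the sketch below shows the core recursion in a hedged form: model probabilities are flattened by a forgetting factor, updated by each model's one-step predictive likelihood, and the active set is then pruned to a dynamic Occam's window. The forgetting factor `alpha`, the window cutoff `c`, and the toy likelihoods are assumptions, not the paper's calibrated values:

```python
# One step of Dynamic Model Averaging with a dynamic Occam's window:
# 1) "forget": raise prior weights to the power alpha (< 1) so old
#    evidence decays; 2) multiply by each model's predictive likelihood
#    for the new observation; 3) renormalize; 4) keep only the models
#    whose weight is within a factor c of the best model's weight.

def dma_step(weights, likelihoods, alpha=0.95, c=0.001):
    raw = {m: (w ** alpha) * likelihoods[m] for m, w in weights.items()}
    total = sum(raw.values())
    post = {m: v / total for m, v in raw.items()}
    cutoff = c * max(post.values())           # dynamic Occam's window
    kept = {m: v for m, v in post.items() if v >= cutoff}
    norm = sum(kept.values())
    return {m: v / norm for m, v in kept.items()}

weights = {"m1": 0.25, "m2": 0.25, "m3": 0.25, "m4": 0.25}
liks = {"m1": 0.9, "m2": 0.5, "m3": 0.1, "m4": 0.0001}
weights = dma_step(weights, liks)
```

Pruning to the window is what makes very large model spaces tractable: only the models currently supported by the data are carried to the next step, and the set can grow or shrink as regimes change.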

  16. Research on Francis Turbine Modeling for Large Disturbance Hydropower Station Transient Process Simulation

    Directory of Open Access Journals (Sweden)

    Guangtao Zhang

    2015-01-01

    Full Text Available In the field of hydropower station transient process simulation (HSTPS, the characteristic graph-based iterative hydroturbine model (CGIHM has been widely used when large disturbance hydroturbine modeling is involved. However, with this model, iteration must be used to calculate speed and pressure, and slow convergence or no convergence may be encountered for reasons such as a special characteristic graph profile, an inappropriate iterative algorithm, or an inappropriate interpolation algorithm. Other conventional large disturbance hydroturbine models also have disadvantages and are difficult to use widely in HSTPS. Therefore, to obtain an accurate simulation result, a simple method for hydroturbine modeling is proposed. In this method, both the initial operating point and the transfer coefficients of the linear hydroturbine model keep changing during simulation. Hence, it can reflect the nonlinearity of the hydroturbine and be used for Francis turbine simulation under large disturbance conditions. To validate the proposed method, both large disturbance and small disturbance simulations of a single hydrounit supplying a resistive, isolated load were conducted. The simulation results were shown to be consistent with those of field tests. Consequently, the proposed method is an attractive option for HSTPS involving Francis turbine modeling under large disturbance conditions.

  17. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). The multiple Boolean viewshed analysis and global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, the extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large scale models are an appropriate data source for visibility analyses on the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
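A Boolean viewshed reduces to a line-of-sight test: a cell is visible when its vertical angle from the observer exceeds every angle encountered earlier along the ray. The sketch below applies this along a single terrain profile, a simplification of the full grid case; the cell size and heights are made-up numbers:

```python
# Boolean line-of-sight along one terrain profile: a cell is visible
# iff the tangent of its elevation angle from the observer exceeds the
# maximum tangent over all closer cells (which would otherwise block it).

def profile_viewshed(heights, observer_idx, observer_h=1.8, cell=10.0):
    eye = heights[observer_idx] + observer_h
    visible = [False] * len(heights)
    visible[observer_idx] = True
    max_tan = float("-inf")
    for i in range(observer_idx + 1, len(heights)):
        dist = (i - observer_idx) * cell
        tan = (heights[i] - eye) / dist     # tangent of vertical angle
        if tan > max_tan:
            visible[i] = True               # rises above the horizon so far
            max_tan = tan
    return visible

# Observer at index 0; the 130 m hill at index 3 hides the lower ground
# behind it, but the taller 160 m ridge at the end is visible again.
terrain = [100.0, 102.0, 105.0, 130.0, 104.0, 103.0, 160.0]
vis = profile_viewshed(terrain, 0)
```

The extended viewsheds the abstract mentions keep quantities this Boolean version discards, e.g. the angle margin `tan - max_tan`, which measures how far a target rises above the local horizon.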

  18. Modeling and Forecasting Large Realized Covariance Matrices and Portfolio Choice

    NARCIS (Netherlands)

    Callot, Laurent A.F.; Kock, Anders B.; Medeiros, Marcelo C.

    2017-01-01

    We consider modeling and forecasting large realized covariance matrices by penalized vector autoregressive models. We consider Lasso-type estimators to reduce the dimensionality and provide strong theoretical guarantees on the forecast capability of our procedure. We show that we can forecast

  19. Tundra water budget and implications of precipitation underestimation.

    Science.gov (United States)

    Liljedahl, Anna K; Hinzman, Larry D; Kane, Douglas L; Oechel, Walter C; Tweedie, Craig E; Zona, Donatella

    2017-08-01

    Difficulties in obtaining accurate precipitation measurements have limited meaningful hydrologic assessment for over a century due to performance challenges of conventional snowfall and rainfall gauges in windy environments. Here, we compare snowfall observations and bias adjusted snowfall to end-of-winter snow accumulation measurements on the ground for 16 years (1999-2014) and assess the implication of precipitation underestimation on the water balance for a low-gradient tundra wetland near Utqiagvik (formerly Barrow), Alaska (2007-2009). In agreement with other studies, and not accounting for sublimation, conventional snowfall gauges captured 23-56% of end-of-winter snow accumulation. Once snowfall and rainfall are bias adjusted, long-term annual precipitation estimates more than double (from 123 to 274 mm), highlighting the risk of studies using conventional or unadjusted precipitation that dramatically under-represent water balance components. Applying conventional precipitation information to the water balance analysis produced consistent storage deficits (79 to 152 mm) that were all larger than the largest actual deficit (75 mm), which was observed in the unusually low rainfall summer of 2007. Year-to-year variability in adjusted rainfall (±33 mm) was larger than evapotranspiration (±13 mm). Measured interannual variability in partitioning of snow into runoff (29% in 2008 to 68% in 2009) in years with similar end-of-winter snow accumulation (180 and 164 mm, respectively) highlights the importance of the previous summer's rainfall (25 and 60 mm, respectively) on spring runoff production. Incorrect representation of precipitation can therefore have major implications for Arctic water budget descriptions that in turn can alter estimates of carbon and energy fluxes.
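The record quantifies undercatch (gauges capturing 23-56% of end-of-winter snow accumulation) but does not show the adjustment step; the sketch below applies a generic catch-ratio correction, adjusted = measured / CR, with a hypothetical wind-dependent catch ratio. The coefficients are invented for illustration and are not the transfer function used in the study:

```python
# Generic gauge-undercatch adjustment: true precipitation is estimated
# by dividing the measured amount by a catch ratio CR in (0, 1] that
# decreases with wind speed. The linear CR model below is a made-up
# stand-in for published, gauge-specific transfer functions.

def catch_ratio(wind_ms):
    """Hypothetical catch ratio: 1.0 in calm air, floored at 0.25."""
    return max(1.0 - 0.08 * wind_ms, 0.25)

def adjust(measured_mm, wind_ms):
    return measured_mm / catch_ratio(wind_ms)

# At 6 m/s this toy gauge catches only 52% of snowfall, so a measured
# 52 mm adjusts to roughly 100 mm of true precipitation.
adjusted = adjust(52.0, 6.0)
```

Because the correction divides by a fraction, even modest undercatch compounds into the large annual differences the abstract reports (123 mm unadjusted vs. 274 mm adjusted).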

  20. Substantial Underestimation of Post-harvest Burning Emissions in East China as Seen by Multi-species Space Observations

    Science.gov (United States)

    Stavrakou, T.; Muller, J. F.; Bauwens, M.; De Smedt, I.; Lerot, C.; Van Roozendael, M.

    2015-12-01

    Crop residue burning is an important contributor to global biomass burning. In the North China Plain, one of the largest and most densely populated plains in the world, post-harvest crop burning is a common agricultural management practice, allowing for land clearing from residual straw and preparation for the subsequent crop cultivation. The most extensive crop fires occur in the North China Plain in June, after the winter wheat comes to maturity, and have been blamed for spikes in air pollution leading to serious health problems. Estimating harvest season burning emissions is therefore of primary importance to assess air quality and define the best policies for its improvement in this sensitive region. Bottom-up approaches, either based on crop production and emission factors, or on satellite burned area and fire radiative power products, have been adopted so far; however, these methods depend crucially, among other assumptions, on the ability of satellites to detect small fires, and could lead to underestimation of the actual emissions. The flux inversion of atmospheric observations is an alternative, independent approach for inferring the emissions from crop fires. Satellite column observations of formaldehyde (HCHO) exhibit a strong peak over the North China Plain in June, resulting from enhanced pyrogenic emissions of a large suite of volatile organic compounds (VOCs), precursors of HCHO. We use vertical columns of formaldehyde retrieved from the OMI instrument between 2005 and 2012 as constraints in an adjoint inversion scheme built on the IMAGESv2 CTM, and perform the optimization of biogenic, pyrogenic, and anthropogenic emission parameters at the model resolution. We investigate the interannual variability of the top-down source, quantify its importance for the atmospheric composition on the regional scale, and explore its uncertainties. The OMI-based crop burning source is compared with the corresponding anthropogenic flux in the North China Plain, and is evaluated against HCHO

  1. Are the impacts of land use on warming underestimated in climate policy?

    Science.gov (United States)

    Mahowald, Natalie M.; Ward, Daniel S.; Doney, Scott C.; Hess, Peter G.; Randerson, James T.

    2017-09-01

    While carbon dioxide emissions from energy use must be the primary target of climate change mitigation efforts, land use and land cover change (LULCC) also represent an important source of climate forcing. In this study we compute time series of global surface temperature change separately for LULCC and non-LULCC sources (primarily fossil fuel burning), and show that because of the extra warming associated with the co-emission of methane and nitrous oxide with LULCC carbon dioxide emissions, and a co-emission of cooling aerosols with non-LULCC emissions of carbon dioxide, the linear relationship between cumulative carbon dioxide emissions and temperature has a two-fold higher slope for LULCC than for non-LULCC activities. Moreover, projections used in the Intergovernmental Panel on Climate Change (IPCC) for the rate of tropical land conversion in the future are relatively low compared to contemporary observations, suggesting that the future projections of land conversion used in the IPCC may underestimate potential impacts of LULCC. By including a ‘business as usual’ future LULCC scenario for tropical deforestation, we find that even if all non-LULCC emissions are switched off in 2015, it is likely that 1.5 °C of warming relative to the preindustrial era will occur by 2100. Thus, policies to reduce LULCC emissions must remain a high priority if we are to achieve the low to medium temperature change targets proposed as a part of the Paris Agreement. Future studies using integrated assessment models and other climate simulations should include more realistic deforestation rates and the integration of policy that would reduce LULCC emissions.

  2. Large animal models for vaccine development and testing.

    Science.gov (United States)

    Gerdts, Volker; Wilson, Heather L; Meurens, Francois; van Drunen Littel-van den Hurk, Sylvia; Wilson, Don; Walker, Stewart; Wheler, Colette; Townsend, Hugh; Potter, Andrew A

    2015-01-01

    The development of human vaccines continues to rely on the use of animals for research. Regulatory authorities require novel vaccine candidates to undergo preclinical assessment in animal models before being permitted to enter the clinical phase in human subjects. Substantial progress has been made in recent years in reducing and replacing the number of animals used for preclinical vaccine research through the use of bioinformatics and computational biology to design new vaccine candidates. However, the ultimate goal of a new vaccine is to instruct the immune system to elicit an effective immune response against the pathogen of interest, and no alternatives to live animal use currently exist for evaluation of this response. Studies identifying the mechanisms of immune protection; determining the optimal route and formulation of vaccines; establishing the duration and onset of immunity, as well as the safety and efficacy of new vaccines, must be performed in a living system. Importantly, no single animal model provides all the information required for advancing a new vaccine through the preclinical stage, and research over the last two decades has highlighted that large animals more accurately predict vaccine outcome in humans than do other models. Here we review the advantages and disadvantages of large animal models for human vaccine development and demonstrate that much of the success in bringing a new vaccine to market depends on choosing the most appropriate animal model for preclinical testing. © The Author 2015. Published by Oxford University Press on behalf of the Institute for Laboratory Animal Research. All rights reserved. For permissions, please email: journals.permissions@oup.com.

  3. Hydraulic experiment on formation mechanism of tsunami deposit and verification of sediment transport model for tsunamis

    Science.gov (United States)

    Yamamoto, A.; Takahashi, T.; Harada, K.; Sakuraba, M.; Nojima, K.

    2017-12-01

    An underestimation of the 2011 Tohoku tsunami caused serious damage in coastal areas. Reconsidering tsunami estimation requires knowledge of paleo-tsunamis. The historical records of giant tsunamis are limited, because they occurred infrequently. Tsunami deposits may include many tsunami records and are expected to help analyze paleo-tsunamis. However, present research on tsunami deposits is not able to estimate the tsunami source and its magnitude. Furthermore, numerical models of tsunamis and their sediment transport are also important. Takahashi et al. (1999) proposed a model of movable bed conditions due to tsunamis, although it has some issues. Improvement of the model needs basic data on sediment transport and deposition. This study investigated the formation mechanism of tsunami deposits by hydraulic experiment using a two-dimensional water channel with a slope. In a fixed bed condition experiment, velocity, water level and suspended load concentration were measured at many points. In a movable bed condition, the effects of sand grains and bore waves on the deposit were examined. Yamamoto et al. (2016) showed that the deposition range varied with sand grain size. In addition, it is revealed that the range fluctuated with the number of waves and the wave period. The measurements of velocity and water level showed that the flow was clearly different near the shoreline and in the run-up area. Large velocities from the return flow affected the amount of sand deposited near the shoreline. When a cutoff wall was installed on the slope, the amount of sand deposited repeatedly increased and decreased. In particular, sand deposits increased where velocity decreased. Takahashi et al. (1999) applied the proposed model to Kesennuma Bay when the 1960 Chilean tsunami arrived, although the amount of sand transported was underestimated. The cause of the underestimation is inferred to be that the velocity in this model was underestimated. A relationship between velocity and sediment transport has to be studied in detail, but

  4. Language issues, an underestimated danger in major hazard control?

    Science.gov (United States)

    Lindhout, Paul; Ale, Ben J M

    2009-12-15

    Language issues are problems with communication via speech, signs, gestures or their written equivalents. They may result from poor reading and writing skills, a mix of foreign languages and other circumstances. Language issues are not picked up as a safety risk on the shop floor by current safety management systems. These safety risks need to be identified, acknowledged, quantified and prioritized in order to allow risk-reducing measures to be taken. This study investigates the nature of language-issue-related danger in the literature, by experiment and by a survey among the Seveso II companies in the Netherlands. Based on human error frequencies, and on the contents of accident investigation reports, the risks associated with language issues were ranked. Accident investigation method causal factor categories were found not to be sufficiently representative of the type and magnitude of these risks. The readability of safety-related documents used by the companies was investigated and found to be poor in many cases. Interviews among regulators and a survey among Seveso II companies were used to identify the gap between the language-issue-related dangers found in the literature and current best practices. This study demonstrates by means of triangulation with different investigative methods that language-issue-related risks are indeed underestimated. A recommended course of action in order to arrive at appropriate measures is presented.

  5. Language issues, an underestimated danger in major hazard control?

    Energy Technology Data Exchange (ETDEWEB)

    Lindhout, Paul, E-mail: plindhout@minszw.nl [Ministry of Social Affairs and Employment, AI-MHC, Anna van Hannoverstraat 4, P.O. Box 90801, 2509 LV The Hague (Netherlands); Ale, Ben J.M. [Delft University of Technology, TBM-Safety Science Group, Jaffalaan 5, 2628 BX Delft (Netherlands)

    2009-12-15

    Language issues are problems with communication via speech, signs, gestures or their written equivalents. They may result from poor reading and writing skills, a mix of foreign languages and other circumstances. Language issues are not picked up as a safety risk on the shop floor by current safety management systems. These safety risks need to be identified, acknowledged, quantified and prioritised in order to allow risk-reducing measures to be taken. This study investigates the nature of the danger related to language issues in the literature, by experiment and by a survey among the Seveso II companies in the Netherlands. Based on human error frequencies, and on the contents of accident investigation reports, the risks associated with language issues were ranked. Accident investigation method causal factor categories were found not to be sufficiently representative of the type and magnitude of these risks. The readability of safety-related documents used by the companies was investigated and found to be poor in many cases. Interviews among regulators and a survey among Seveso II companies were used to identify the gap between the language-related dangers found in the literature and current best practices. This study demonstrates, by means of triangulation with different investigative methods, that risks related to language issues are indeed underestimated. A recommended course of action to arrive at appropriate measures is presented.

  6. Language issues, an underestimated danger in major hazard control?

    International Nuclear Information System (INIS)

    Lindhout, Paul; Ale, Ben J.M.

    2009-01-01

    Language issues are problems with communication via speech, signs, gestures or their written equivalents. They may result from poor reading and writing skills, a mix of foreign languages and other circumstances. Language issues are not picked up as a safety risk on the shop floor by current safety management systems. These safety risks need to be identified, acknowledged, quantified and prioritised in order to allow risk-reducing measures to be taken. This study investigates the nature of the danger related to language issues in the literature, by experiment and by a survey among the Seveso II companies in the Netherlands. Based on human error frequencies, and on the contents of accident investigation reports, the risks associated with language issues were ranked. Accident investigation method causal factor categories were found not to be sufficiently representative of the type and magnitude of these risks. The readability of safety-related documents used by the companies was investigated and found to be poor in many cases. Interviews among regulators and a survey among Seveso II companies were used to identify the gap between the language-related dangers found in the literature and current best practices. This study demonstrates, by means of triangulation with different investigative methods, that risks related to language issues are indeed underestimated. A recommended course of action to arrive at appropriate measures is presented.

  7. Parenting practices, parents' underestimation of daughters' risks, and alcohol and sexual behaviors of urban girls.

    Science.gov (United States)

    O'Donnell, Lydia; Stueve, Ann; Duran, Richard; Myint-U, Athi; Agronick, Gail; San Doval, Alexi; Wilson-Simmons, Renée

    2008-05-01

    In urban economically distressed communities, high rates of early sexual initiation combined with alcohol use place adolescent girls at risk for myriad negative health consequences. This article reports on the extent to which parents of young teens underestimate both the risks their daughters are exposed to and the considerable influence that they have over their children's decisions and behaviors. Surveys were conducted with more than 700 sixth-grade girls and their parents, recruited from seven New York City schools serving low-income families. Bivariate and multivariate analyses examined relationships among parents' practices and perceptions of daughters' risks, girls' reports of parenting, and outcomes of girls' alcohol use, media and peer conduct, and heterosexual romantic and social behaviors that typically precede sexual intercourse. Although only four parents thought that their daughters had used alcohol, 22% of the daughters reported drinking in the past year. Approximately 5% of parents thought that daughters had hugged and kissed a boy for a long time or had "hung out" with older boys, whereas 38% of girls reported these behaviors. Parents' underestimation of risk was correlated with lower reports of positive parenting practices by daughters. In multivariate analyses, girls' reports of parental oversight, rules, and disapproval of risk are associated with all three behavioral outcomes. Adult reports of parenting practices are associated with girls' conduct and heterosexual behaviors, but not with their alcohol use. Creating greater awareness of the early onset of risk behaviors among urban adolescent girls is important for fostering positive parenting practices, which in turn may help parents to support their daughters' healthier choices.

  8. Wind and Photovoltaic Large-Scale Regional Models for hourly production evaluation

    DEFF Research Database (Denmark)

    Marinelli, Mattia; Maule, Petr; Hahmann, Andrea N.

    2015-01-01

    This work presents two large-scale regional models used for the evaluation of normalized power output from wind turbines and photovoltaic power plants on a European regional scale. The models give an estimate of renewable production on a regional scale with 1 h resolution, starting from a mesoscale ... of the transmission system, especially regarding the cross-border power flows. The tuning of these regional models is done using historical meteorological data acquired on a per-country basis and using publicly available data of installed capacity.

  9. Modeling and control of a large nuclear reactor. A three-time-scale approach

    Energy Technology Data Exchange (ETDEWEB)

    Shimjith, S.R. [Indian Institute of Technology Bombay, Mumbai (India); Bhabha Atomic Research Centre, Mumbai (India); Tiwari, A.P. [Bhabha Atomic Research Centre, Mumbai (India); Bandyopadhyay, B. [Indian Institute of Technology Bombay, Mumbai (India). IDP in Systems and Control Engineering

    2013-07-01

    Recent research on Modeling and Control of a Large Nuclear Reactor. Presents a three-time-scale approach. Written by leading experts in the field. Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property, with emphasis on three-time-scale systems.

  10. Precise MRI-based stereotaxic surgery in large animal models

    DEFF Research Database (Denmark)

    Glud, Andreas Nørgaard; Bech, Johannes; Tvilling, Laura

    BACKGROUND: Stereotaxic neurosurgery in large animals is used widely in different sophisticated models, where precision is becoming more crucial as desired anatomical target regions are becoming smaller. Individually calculated coordinates are necessary in large animal models with cortical and subcortical anatomical differences. NEW METHOD: We present a convenient method to make an MRI-visible skull fiducial for 3D MRI-based stereotaxic procedures in larger experimental animals. Plastic screws were filled with either copper-sulphate solution or MRI-visible paste from a commercially available cranial head marker. The screw fiducials were inserted in the animal skulls and T1-weighted MRI was performed, allowing identification of the inserted skull marker. RESULTS: Both types of fiducial markers were clearly visible on the MRIs, allowing high precision in the stereotaxic space. COMPARISON...

  11. Large degeneracy of excited hadrons and quark models

    International Nuclear Information System (INIS)

    Bicudo, P.

    2007-01-01

    The pattern of a large approximate degeneracy of the excited hadron spectra (larger than the chiral restoration degeneracy) is present in the recent experimental report of Bugg. Here we try to model this degeneracy with state of the art quark models. We review how the Coulomb Gauge chiral invariant and confining Bethe-Salpeter equation simplifies in the case of very excited quark-antiquark mesons, including angular or radial excitations, to a Salpeter equation with an ultrarelativistic kinetic energy with the spin-independent part of the potential. The resulting meson spectrum is solved, and the excited chiral restoration is recovered, for all mesons with J>0. Applying the ultrarelativistic simplification to a linear equal-time potential, linear Regge trajectories are obtained, for both angular and radial excitations. The spectrum is also compared with the semiclassical Bohr-Sommerfeld quantization relation. However, the excited angular and radial spectra do not coincide exactly. We then search, with the classical Bertrand theorem, for central potentials producing always classical closed orbits with the ultrarelativistic kinetic energy. We find that no such potential exists, and this implies that no exact larger degeneracy can be obtained in our equal-time framework, with a single principal quantum number comparable to the nonrelativistic Coulomb or harmonic oscillator potentials. Nevertheless we find it plausible that the large experimental approximate degeneracy will be modeled in the future by quark models beyond the present state of the art

  12. Large carbon dioxide fluxes from headwater boreal and sub-boreal streams.

    Science.gov (United States)

    Venkiteswaran, Jason J; Schiff, Sherry L; Wallin, Marcus B

    2014-01-01

    Half of the world's forest is in boreal and sub-boreal ecozones, containing large carbon stores and fluxes. Carbon lost from headwater streams in these forests is underestimated. We apply a simple stable carbon isotope idea for quantifying the CO2 loss from these small streams; it is based only on in-stream samples and integrates over a significant distance upstream. We demonstrate that conventional methods of determining CO2 loss from streams necessarily underestimate the CO2 loss with results from two catchments. Dissolved carbon export from headwater catchments is similar to CO2 loss from stream surfaces. Most of the CO2 originating in high CO2 groundwaters has been lost before typical in-stream sampling occurs. In the Harp Lake catchment in Canada, headwater streams account for 10% of catchment net CO2 uptake. In the Krycklan catchment in Sweden, this more than doubles the CO2 loss from the catchment. Thus, even when corrected for aquatic CO2 loss measured by conventional methods, boreal and sub-boreal forest carbon budgets currently overestimate carbon sequestration on the landscape.

  13. Disguised Distress in Children and Adolescents "Flying under the Radar": Why Psychological Problems Are Underestimated and How Schools Must Respond

    Science.gov (United States)

    Flett, Gordon L.; Hewitt, Paul L.

    2013-01-01

    It is now recognized that there is a very high prevalence of psychological disorders among children and adolescents and relatively few receive psychological treatment. In the current article, we present the argument that levels of distress and dysfunction among young people are substantially underestimated and the prevalence of psychological…

  14. Large-scale building energy efficiency retrofit: Concept, model and control

    International Nuclear Information System (INIS)

    Wu, Zhou; Wang, Bo; Xia, Xiaohua

    2016-01-01

    BEER (Building energy efficiency retrofit) projects are initiated in many nations and regions over the world. Existing studies of BEER focus on modeling and planning based on one building and one year period of retrofitting, which cannot be applied to certain large BEER projects with multiple buildings and multi-year retrofit. In this paper, the large-scale BEER problem is defined in a general TBT (time-building-technology) framework, which fits essential requirements of real-world projects. The large-scale BEER is newly studied in the control approach rather than the optimization approach commonly used before. Optimal control is proposed to design optimal retrofitting strategy in terms of maximal energy savings and maximal NPV (net present value). The designed strategy is dynamically changing on dimensions of time, building and technology. The TBT framework and the optimal control approach are verified in a large BEER project, and results indicate that promising performance of energy and cost savings can be achieved in the general TBT framework. - Highlights: • Energy efficiency retrofit of many buildings is studied. • A TBT (time-building-technology) framework is proposed. • The control system of the large-scale BEER is modeled. • The optimal retrofitting strategy is obtained.

  15. The cost of simplifying air travel when modeling disease spread.

    Directory of Open Access Journals (Sweden)

    Justin Lessler

    Full Text Available BACKGROUND: Air travel plays a key role in the spread of many pathogens. Modeling the long-distance spread of infectious disease in these cases requires an air travel model. Highly detailed air transportation models can be overdetermined and computationally problematic. We compared the predictions of a simplified air transport model with those of a model of all routes and assessed the impact of differences on models of infectious disease. METHODOLOGY/PRINCIPAL FINDINGS: Using U.S. ticket data from 2007, we compared a simplified "pipe" model, in which individuals flow in and out of the air transport system based on the number of arrivals and departures from a given airport, to a fully saturated model where all routes are modeled individually. We also compared the pipe model to a "gravity" model where the probability of travel is scaled by physical distance; the gravity model did not differ significantly from the pipe model. The pipe model roughly approximated actual air travel, but tended to overestimate the number of trips between small airports and underestimate travel between major east and west coast airports. For most routes, the maximum number of false (or missed) introductions of disease is small (<1 per day), but for a few routes this rate is greatly underestimated by the pipe model. CONCLUSIONS/SIGNIFICANCE: If our interest is in large-scale regional and national effects of disease, the simplified pipe model may be adequate. If we are interested in specific effects of interventions on particular air routes or the time for the disease to reach a particular location, a more complex point-to-point model will be more accurate. For many problems a hybrid model that independently models some frequently traveled routes may be the best choice. Regardless of the model used, the effect of simplifications and sensitivity to errors in parameter estimation should be analyzed.
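
    The "pipe" model described above can be sketched in a few lines: travellers exit the system in proportion to each airport's departures and are assigned a destination in proportion to each airport's share of total arrivals, with no route-level data at all. The airport names and traffic figures below are invented for illustration and are not the study's ticket data:

    ```python
    import numpy as np

    airports = ["A", "B", "C"]
    departures = np.array([1000.0, 300.0, 200.0])  # daily departing travellers
    arrivals = np.array([900.0, 400.0, 200.0])     # daily arriving travellers

    # Probability that any departing traveller lands at airport j,
    # regardless of origin -- the defining simplification of the pipe model.
    p_dest = arrivals / arrivals.sum()

    # Expected daily flow matrix (row = origin, column = destination).
    flow = np.outer(departures, p_dest)
    np.fill_diagonal(flow, 0.0)  # ignore trivial same-airport "trips"

    for i, origin in enumerate(airports):
        for j, dest in enumerate(airports):
            if i != j:
                print(f"{origin} -> {dest}: {flow[i, j]:6.1f} travellers/day")
    ```

    A point-to-point model would instead estimate each of the six route flows independently, which is where the two approaches can diverge for specific routes.
    
    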

  16. Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail

    with a complex conversion route. Computational fluid dynamics is used to model transport phenomena in large reactors, capturing tank profiles and delays due to plug flows. This work publishes for the first time demonstration-scale real data for validation, showing that the model library is suitable...

  17. Maximum rates of climate change are systematically underestimated in the geological record.

    Science.gov (United States)

    Kemp, David B; Eichenseer, Kilian; Kiessling, Wolfgang

    2015-11-10

    Recently observed rates of environmental change are typically much higher than those inferred for the geological past. At the same time, the magnitudes of ancient changes were often substantially greater than those established in recent history. The most pertinent disparity, however, between recent and geological rates is the timespan over which the rates are measured, which typically differ by several orders of magnitude. Here we show that rates of marked temperature changes inferred from proxy data in Earth history scale with measurement timespan as an approximate power law across nearly six orders of magnitude (10² to >10⁷ years). This scaling reveals how climate signals measured in the geological record alias transient variability, even during the most pronounced climatic perturbations of the Phanerozoic. Our findings indicate that the true attainable pace of climate change on timescales of greatest societal relevance is underestimated in geological archives.
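
    The timespan effect described above can be reproduced with a toy endpoint calculation: a record whose only persistent signal is a tiny secular trend, overlain by a large transient oscillation, yields apparent rates that collapse as the measurement window grows. All numbers below are invented for illustration, not the paper's proxy data:

    ```python
    import math

    TREND = 1e-5               # deg C per year, long-term secular signal
    AMP, PERIOD = 2.0, 1_000.0  # deg C and years: short-lived variability

    def temp(t):
        """Toy temperature history: slow trend plus transient oscillation."""
        return TREND * t + AMP * math.sin(2 * math.pi * t / PERIOD)

    def apparent_rate(span):
        """Rate inferred from endpoints only, as a sparse proxy record does."""
        return abs(temp(span) - temp(0.0)) / span

    for span in (250.0, 10_000.0, 1_000_000.0):
        print(f"timespan {span:>11.0f} yr -> {apparent_rate(span):.2e} deg C/yr")

    # Short windows catch the transient excursion (~8e-3 deg C/yr here);
    # long windows alias it out and recover only the trend (~1e-5 deg C/yr),
    # so the apparent rate falls systematically with measurement timespan.
    ```
    
    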

  18. Preliminary evaluation of the Community Multiscale Air Quality model for 2002 over the Southeastern United States.

    Science.gov (United States)

    Morris, Ralph E; McNally, Dennis E; Tesche, Thomas W; Tonnesen, Gail; Boylan, James W; Brewer, Patricia

    2005-11-01

    The Visibility Improvement State and Tribal Association of the Southeast (VISTAS) is one of five Regional Planning Organizations that is charged with the management of haze, visibility, and other regional air quality issues in the United States. The VISTAS Phase I work effort modeled three episodes (January 2002, July 1999, and July 2001) to identify the optimal model configuration(s) to be used for the 2002 annual modeling in Phase II. Using model configurations recommended in the Phase I analysis, 2002 annual meteorological (Mesoscale Meteorological Model [MM5]), emissions (Sparse Matrix Operator Kernel Emissions [SMOKE]), and air quality (Community Multiscale Air Quality [CMAQ]) simulations were performed on a 36-km grid covering the continental United States and a 12-km grid covering the Eastern United States. Model estimates were then compared against observations. This paper presents the results of the preliminary CMAQ model performance evaluation for the initial 2002 annual base case simulation. Model performance is presented for the Eastern United States using speciated fine particle concentration and wet deposition measurements from several monitoring networks. Initial results indicate fairly good performance for sulfate, with fractional bias values generally within +/-20%. Nitrate is overestimated in the winter by approximately +50% and underestimated in the summer by more than -100%. Organic carbon exhibits a large summer underestimation bias of approximately -100%, with much improved performance seen in the winter with a bias near zero. Performance for elemental carbon is reasonable, with fractional bias values within +/-40%. Other fine particulate (soil) and coarse particulate matter exhibit large (80-150%) overestimation in the winter but improved performance in the summer. The preliminary 2002 CMAQ runs identified several areas of enhancements to improve model performance, including revised temporal allocation factors for ammonia emissions to improve

  19. Underestimation of urinary albumin to creatinine ratio in morbidly obese subjects due to high urinary creatinine excretion.

    Science.gov (United States)

    Guidone, Caterina; Gniuli, Donatella; Castagneto-Gissey, Lidia; Leccesi, Laura; Arrighi, Eugenio; Iaconelli, Amerigo; Mingrone, Geltrude

    2012-04-01

    Albuminuria, a chronic kidney and/or cardiovascular disease biomarker, is currently measured as the albumin-to-creatinine ratio (ACR). We hypothesize that in severely obese individuals ACR might be abnormally low in spite of relatively high levels of urinary albumin, due to increased creatininuria. One hundred eighty-four subjects were divided into tertiles based on their BMI. Fat-free mass (FFM) and fat mass were assessed by DEXA; 24-h creatinine and albumin excretion, ACR, lipid profile and blood pressure were measured. Twenty-four-hour creatinine highly correlated (R = 0.75) with FFM. Since both creatininuria and albuminuria increased with BMI, with the increase in creatininuria predominating in subjects with BMI > 35, their ratio (the ACR) did not change significantly from that of subjects in the lower BMI tertile. ACR correlated only with systolic blood pressure, while both albuminuria and creatininuria correlated (P = 0.01) with the absolute 10-year CHD risk. In subjects with BMI > 35, 100 mg of albumin excreted in urine increased the CHD risk by 2%. The albumin-to-creatinine ratio is underestimated in severely obese individuals as a consequence of the large creatininuria, which is proportional to the increased FFM. Therefore, at least in this population, 24-h albuminuria should be more reliable than ACR. Copyright © 2011 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.
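
    The mechanism the authors describe is simple arithmetic: the same 24-h albumin excretion produces a lower ACR when the denominator (24-h creatinine excretion) is high. A sketch with hypothetical values, not the study's data:

    ```python
    # Two subjects with identical 24-h albumin excretion; the second has
    # doubled creatinine excretion, as with much larger fat-free mass
    # (all figures invented for illustration).
    albumin_mg_per_day = 25.0

    lean_creatinine_g = 1.2    # typical 24-h urinary creatinine
    obese_creatinine_g = 2.4   # elevated with greater fat-free mass

    acr_lean = albumin_mg_per_day / lean_creatinine_g    # mg/g
    acr_obese = albumin_mg_per_day / obese_creatinine_g  # mg/g

    print(f"ACR lean:  {acr_lean:.1f} mg/g")   # 20.8 mg/g
    print(f"ACR obese: {acr_obese:.1f} mg/g")  # 10.4 mg/g
    # Identical albuminuria, but the obese subject's ACR is halved --
    # which is why the authors argue 24-h albuminuria is more reliable
    # than ACR in this population.
    ```
    
    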

  20. Examining Pedestrian Injury Severity Using Alternative Disaggregate Models

    DEFF Research Database (Denmark)

    Abay, Kibrom Araya

    2013-01-01

    This paper investigates the injury severity of pedestrians considering detailed road user characteristics and alternative model specifications, using high-quality Danish road accident data. Such a detailed and alternative modeling approach helps to assess the sensitivity of empirical inferences to the choice of these models. The empirical analysis reveals that detailed road user characteristics, such as the crime history of drivers and the momentary activities of road users at the time of the accident, provide interesting insight in the injury severity analysis. Likewise, the alternative analytical specification of the models reveals that some of the conventionally employed fixed-parameter injury severity models could underestimate the effect of some important behavioral attributes of the accidents. For instance, the standard ordered logit model underestimated the marginal effects of some...

  1. Comparison of hard scattering models for particle production at large transverse momentum. 2

    International Nuclear Information System (INIS)

    Schiller, A.; Ilgenfritz, E.M.; Kripfganz, J.; Moehring, H.J.; Ranft, G.; Ranft, J.

    1977-01-01

    Single-particle distributions of π⁺ and π⁻ at large transverse momentum are analysed using various hard collision models: qq → qq, qq̄ → MM̄, qM → qM. The transverse momentum dependence at θ_cm = 90° is well described in all models except qq̄ → MM̄. This model has problems with the ratios (pp → π⁺ + X)/(π± p → π⁰ + X). Presently available data on rapidity distributions of pions in π⁻p and pp̄ collisions are at rather low transverse momentum (however large x⊥ = 2p⊥/√s), where it is not obvious that hard collision models should dominate. The data, in particular the π⁻/π⁺ asymmetry, are well described by all models except qM → Mq (CIM). At large values of transverse momentum significant differences between the models are predicted. (author)

  2. An improved large signal model of InP HEMTs

    Science.gov (United States)

    Li, Tianhao; Li, Wenjun; Liu, Jun

    2018-05-01

    An improved large signal model for InP HEMTs is proposed in this paper. The channel current and charge model equations are constructed based on the Angelov model equations. The equations for both the channel current and gate charge models are continuous and differentiable to high order, and the proposed gate charge model satisfies charge conservation. To capture the strong leakage-induced barrier reduction effect of InP HEMTs, the Angelov current model equations are improved, so that the channel current model can fit the DC performance of the devices. A 2 × 25 μm × 70 nm InP HEMT device is used to demonstrate the extraction and validation of the model, which predicts the DC I–V, C–V and bias-dependent S-parameters accurately. Project supported by the National Natural Science Foundation of China (No. 61331006).
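
    The Angelov-type current equation that this model builds on has a standard tanh-based form that is smooth and infinitely differentiable, which is what the abstract's continuity requirement refers to. A minimal sketch of that baseline form with invented parameter values (IPK, VPK, P1, ALPHA and LAMB here are illustrative placeholders, not the paper's extracted values):

    ```python
    import math

    IPK = 0.050   # A, drain current at peak transconductance (illustrative)
    VPK = -0.15   # V, gate voltage at peak transconductance (illustrative)
    P1 = 2.0      # 1/V, first-order fitting coefficient (illustrative)
    ALPHA = 2.5   # 1/V, saturation-voltage parameter (illustrative)
    LAMB = 0.05   # 1/V, channel-length modulation (illustrative)

    def ids(vgs, vds):
        """Baseline Angelov-form drain current: smooth in both Vgs and Vds."""
        psi = P1 * (vgs - VPK)
        return IPK * (1 + math.tanh(psi)) * (1 + LAMB * vds) * math.tanh(ALPHA * vds)

    print(f"Ids at Vgs = 0 V, Vds = 1 V: {ids(0.0, 1.0) * 1e3:.2f} mA")
    ```

    The paper's improvement modifies these current equations to capture the strong barrier-reduction effect in InP HEMTs; the sketch above shows only the smooth baseline shape being modified.
    
    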

  3. Mucus: An Underestimated Gut Target for Environmental Pollutants and Food Additives.

    Science.gov (United States)

    Gillois, Kévin; Lévêque, Mathilde; Théodorou, Vassilia; Robert, Hervé; Mercier-Bonin, Muriel

    2018-06-15

    Synthetic chemicals (environmental pollutants, food additives) are widely used for many industrial purposes and consumer-related applications, which implies, through manufactured products, diet, and environment, a repeated exposure of the general population with growing concern regarding health disorders. The gastrointestinal tract is the first physical and biological barrier against these compounds, and thus their first target. Mounting evidence indicates that the gut microbiota represents a major player in the toxicity of environmental pollutants and food additives; however, little is known on the toxicological relevance of the mucus/pollutant interplay, even though mucus is increasingly recognized as essential in gut homeostasis. Here, we aimed at describing how environmental pollutants (heavy metals, pesticides, and other persistent organic pollutants) and food additives (emulsifiers, nanomaterials) might interact with mucus and mucus-related microbial species; that is, “mucophilic” bacteria such as mucus degraders. This review highlights that intestinal mucus, either directly or through its crosstalk with the gut microbiota, is a key, yet underestimated gut player that must be considered for better risk assessment and management of environmental pollution.

  4. Total mesophilic counts underestimate in many cases the contamination levels of psychrotrophic lactic acid bacteria (LAB) in chilled-stored food products at the end of their shelf-life.

    Science.gov (United States)

    Pothakos, Vasileios; Samapundo, Simbarashe; Devlieghere, Frank

    2012-12-01

    The major objective of this study was to determine the role of psychrotrophic lactic acid bacteria (LAB) in spoilage-associated phenomena at the end of the shelf-life of 86 various packaged (air, vacuum, modified-atmosphere) chilled-stored retail food products. The current microbiological standards, which are largely based on total viable mesophilic counts, lack the discriminatory capacity to detect psychrotrophic LAB. A comparison between the total viable counts on plates incubated at 30 °C (representing the mesophiles) and at 22 °C (indicating the psychrotrophs) for 86 food samples covering a wide range - ready-to-eat vegetable salads, fresh raw meat, cooked meat products and composite food - showed that a consistent underestimation of the microbial load occurs when the total aerobic mesophilic counts are used as a shelf-life parameter. In 38% of the samples, the psychrotrophic counts had significantly higher values (+0.5-3 log CFU/g) than the corresponding total aerobic mesophilic counts. A total of 154 lactic acid bacteria that were unable to proliferate at 30 °C were isolated. In addition, a further 43 with poor recovery at this temperature were also isolated. This study highlights the potential fallacy of the total aerobic mesophilic count as a reference shelf-life parameter for chilled food products, as it can often underestimate the contamination levels at the end of the shelf-life. Copyright © 2012 Elsevier Ltd. All rights reserved.
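
    Because plate counts are reported on a logarithmic scale, the 0.5-3 log CFU/g gaps reported above imply very large linear undercounts. A generic conversion, with illustrative gap values rather than the study's individual measurements:

    ```python
    # Convert a gap on the log10 CFU/g scale into the linear factor by
    # which the mesophilic count understates the psychrotrophic load.
    def undercount_factor(log_gap):
        return 10.0 ** log_gap

    for gap in (0.5, 1.0, 2.0, 3.0):
        print(f"{gap:.1f} log CFU/g gap -> count low by a factor of "
              f"{undercount_factor(gap):,.0f}")
    # A 0.5-log gap is already a ~3x undercount; a 3-log gap means the
    # mesophilic plate count misses 99.9% of the population.
    ```
    
    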

  5. Detonation and fragmentation modeling for the description of large scale vapor explosions

    International Nuclear Information System (INIS)

    Buerger, M.; Carachalios, C.; Unger, H.

    1985-01-01

    The thermal detonation modeling of large-scale vapor explosions is shown to be indispensable for realistic safety evaluations. A steady-state as well as a transient detonation model have been developed, including detailed descriptions of the dynamics as well as the fragmentation processes inside a detonation wave. Strong restrictions for large-scale vapor explosions are obtained from this modeling, and they indicate that the reactor pressure vessel would withstand even explosions with unrealistically high masses of corium involved. The modeling is supported by comparisons with a detonation experiment and - concerning its key part - hydrodynamic fragmentation experiments. (orig.) [de

  6. The Cauchy problem for a model of immiscible gas flow with large data

    Energy Technology Data Exchange (ETDEWEB)

    Sande, Hilde

    2008-12-15

    The thesis consists of an introduction and two papers: 1. The solution of the Cauchy problem with large data for a model of a mixture of gases. 2. Front tracking for a model of immiscible gas flow with large data. (AG) refs, figs

  7. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    Long-term energy market models can be used to examine investments in production technologies; however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate the analyses quantitatively, a framework based on an iterative interaction between the equilibrium model and a separate risk-adjustment module was constructed. To illustrate the features of the proposed modelling approach, we examined how uncertainty in demand and variable costs affects the optimal choice...

  8. Air quality models and unusually large ozone increases: Identifying model failures, understanding environmental causes, and improving modeled chemistry

    Science.gov (United States)

    Couzo, Evan A.

    Several factors combine to make ozone (O3) pollution in Houston, Texas, unique when compared to other metropolitan areas. These include complex meteorology, intense clustering of industrial activity, and significant precursor emissions from the heavily urbanized eight-county area. Decades of air pollution research have borne out two different causes, or conceptual models, of O3 formation. One conceptual model describes a gradual region-wide increase in O3 concentrations "typical" of many large U.S. cities. The other conceptual model links episodic emissions of volatile organic compounds to spatially limited plumes of high O3, which lead to large hourly increases that have exceeded 100 parts per billion (ppb) per hour. These large hourly increases are known to lead to violations of the federal O3 standard and impact Houston's status as a non-attainment area. There is a need to further understand and characterize the causes of peak O3 levels in Houston and simulate them correctly so that environmental regulators can find the most cost-effective pollution controls. This work provides a detailed understanding of unusually large O3 increases in the natural and modeled environments. First, we probe regulatory model simulations and assess their ability to reproduce the observed phenomenon. As configured for the purpose of demonstrating future attainment of the O3 standard, the model fails to predict the spatially limited O3 plumes observed in Houston. Second, we combine ambient meteorological and pollutant measurement data to identify the most likely geographic origins and preconditions of the concentrated O3 plumes. We find evidence that the O3 plumes are the result of photochemical activity accelerated by industrial emissions. And, third, we implement changes to the modeled chemistry to add missing formation mechanisms of nitrous acid, which is an important radical precursor. Radicals control the chemical reactivity of atmospheric systems, and perturbations to

  9. The sheep as a large osteoporotic model for orthopaedic research in humans

    DEFF Research Database (Denmark)

    Cheng, L.; Ding, Ming; Li, Z.

    2008-01-01

    Although small animals such as rodents are very popular for osteoporosis models, large animal models are necessary for research on human osteoporotic diseases. Sheep osteoporosis models are becoming more important because of their unique advantages for osteoporosis research. Sheep are docile...... in nature and large in size, which facilitates obtaining blood samples, urine samples and bone tissue samples for different biochemical tests and histological tests, and surgical manipulation and instrument examinations. Their physiology is similar to humans. To induce osteoporosis, OVX and calcium...... intake restriction and glucocorticoid application are the most effective methods for the sheep osteoporosis model. The sheep osteoporosis model is an ideal animal model for studying various medicines reacting to osteoporosis and other treatment methods such as prosthetic replacement reacting to osteoporotic......

  10. Towards a 'standard model' of large scale structure formation

    International Nuclear Information System (INIS)

    Shafi, Q.

    1994-01-01

    We explore constraints on inflationary models employing data on large scale structure, mainly from COBE temperature anisotropies and IRAS-selected galaxy surveys. In models where the tensor contribution to the COBE signal is negligible, we find that the spectral index of density fluctuations n must exceed 0.7. Furthermore the COBE signal cannot be dominated by the tensor component, implying n > 0.85 in such models. The data favors cold plus hot dark matter models with n equal or close to unity and Ω_HDM ∼ 0.2 - 0.35. Realistic grand unified theories, including supersymmetric versions, which produce inflation with these properties are presented. (author). 46 refs, 8 figs

  11. A dynamic globalization model for large eddy simulation of complex turbulent flow

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Hae Cheon; Park, No Ma; Kim, Jin Seok [Seoul National Univ., Seoul (Korea, Republic of)]

    2005-07-01

    A dynamic subgrid-scale model is proposed for large eddy simulation of turbulent flows in complex geometry. The eddy viscosity model by Vreman [Phys. Fluids, 16, 3670 (2004)] is considered as a base model. A priori tests with the original Vreman model show that it predicts the correct profile of subgrid-scale dissipation in turbulent channel flow but the optimal model coefficient is far from universal. Dynamic procedures of determining the model coefficient are proposed based on the 'global equilibrium' between the subgrid-scale dissipation and viscous dissipation. An important feature of the proposed procedures is that the model coefficient determined is globally constant in space but varies only in time. Large eddy simulations with the present dynamic model are conducted for forced isotropic turbulence, turbulent channel flow and flow over a sphere, showing excellent agreements with previous results.
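    The scheme's key feature — a model coefficient that is constant in space but re-evaluated in time from a global dissipation balance — can be sketched as follows. This is a toy illustration on a random periodic field: the |S|-based eddy-viscosity kernel and the viscous-dissipation target used here are simple stand-ins, not the Vreman kernel or the paper's exact balance.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy 3D periodic "velocity" field sampled on a coarse grid.
    nx = 16
    u = rng.standard_normal((3, nx, nx, nx))

    def grad(field):
        """Central differences on a periodic, unit-spaced grid."""
        return np.stack([(np.roll(field, -1, axis=ax) - np.roll(field, 1, axis=ax)) / 2.0
                         for ax in range(3)])

    g = np.stack([grad(u[i]) for i in range(3)])    # g[i, j] = du_i/dx_j
    s = 0.5 * (g + g.transpose(1, 0, 2, 3, 4))      # strain-rate tensor
    s_mag2 = (s ** 2).sum(axis=(0, 1))              # |S|^2 at every point

    nu = 1e-3       # molecular viscosity (assumed)
    delta2 = 1.0    # filter width squared (assumed)

    # Global-equilibrium closure: choose one space-constant coefficient
    # C(t) so that the volume-averaged modelled SGS dissipation
    # <C * delta^2 * |S|^3> balances a prescribed target; here the
    # resolved viscous dissipation 2*nu*<|S|^2> serves as that target.
    target = 2.0 * nu * s_mag2.mean()
    c_global = target / (delta2 * (s_mag2 ** 1.5)).mean()

    # The resulting eddy viscosity still varies in space through |S|,
    # but the coefficient in front of it does not.
    nu_sgs = c_global * delta2 * np.sqrt(s_mag2)
    ```

    At each time step of an actual simulation, only the two volume averages would be recomputed, which is what makes the coefficient a function of time alone.
    
    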

  12. Oxygen Distributions-Evaluation of Computational Methods, Using a Stochastic Model for Large Tumour Vasculature, to Elucidate the Importance of Considering a Complete Vascular Network.

    Directory of Open Access Journals (Sweden)

    Jakob H Lagerlöf

    Full Text Available To develop a general model that utilises a stochastic method to generate a vessel tree based on experimental data, and an associated irregular, macroscopic tumour. These will be used to evaluate two different methods for computing oxygen distribution. A vessel tree structure, and an associated tumour of 127 cm3, were generated using a stochastic method and Bresenham's line algorithm, developing trees on two different scales and fusing them together. The vessel dimensions were adjusted through convolution and thresholding, and each vessel voxel was assigned an oxygen value. Diffusion and consumption were modelled using a Green's function approach together with Michaelis-Menten kinetics. The computations were performed using a combined tree method (CTM) and an individual tree method (ITM). Five tumour sub-sections were compared to evaluate the methods. The oxygen distributions of the same tissue samples, using different methods of computation, were considerably less similar (root mean square deviation, RMSD ≈ 0.02) than the distributions of different samples using CTM (0.001 < RMSD < 0.01). The deviations of ITM from CTM increase with lower oxygen values, resulting in ITM severely underestimating the level of hypoxia in the tumour. Kolmogorov-Smirnov (KS) tests showed that millimetre-scale samples may not represent the whole. The stochastic model managed to capture the heterogeneous nature of hypoxic fractions and, even though the simplified computation did not considerably alter the oxygen distribution, it leads to an evident underestimation of tumour hypoxia, and thereby radioresistance. For a trustworthy computation of tumour oxygenation, the interaction between adjacent microvessel trees must not be neglected, which is why evaluation should be made using high resolution and the CTM, applied to the entire tumour.
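    The diffusion-consumption step described above can be illustrated with a one-dimensional toy version: an explicit finite-difference relaxation between two vessel voxels, rather than the paper's Green's-function approach, with all parameter values assumed for illustration.

    ```python
    import numpy as np

    def mm_consumption(p_o2, q_max=15.0, k_m=2.5):
        """Michaelis-Menten oxygen consumption rate; q_max and k_m are
        illustrative placeholders, not the paper's fitted parameters."""
        return q_max * p_o2 / (k_m + p_o2)

    def relax_profile(n=51, p_vessel=40.0, d_coeff=2.0, dx=1.0,
                      dt=1e-3, steps=20000):
        """Explicit relaxation of diffusion + consumption on a 1D strip
        of tissue between two vessel voxels at fixed oxygen tension."""
        p = np.full(n, p_vessel)
        for _ in range(steps):
            lap = (p[:-2] - 2.0 * p[1:-1] + p[2:]) / dx**2
            p[1:-1] += dt * (d_coeff * lap - mm_consumption(p[1:-1]))
            p[0] = p[-1] = p_vessel      # vessels act as fixed boundaries
            p = np.clip(p, 0.0, None)    # oxygen tension cannot go negative
        return p

    profile = relax_profile()
    ```

    Even this crude sketch reproduces the qualitative behaviour the paper relies on: tissue far from any vessel relaxes toward hypoxic oxygen levels, so neglecting contributions from adjacent vessel trees biases the result toward deeper hypoxia.
    
    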

  13. Characteristics of the large corporation-based, bureaucratic model among oecd countries - an foi model analysis

    Directory of Open Access Journals (Sweden)

    Bartha Zoltán

    2014-03-01

    Full Text Available Deciding on the development path of the economy has been a delicate question in economic policy, not least because of the trade-off effects which immediately worsen certain economic indicators as steps are taken to improve others. The aim of the paper is to present a framework that helps decide on such policy dilemmas. This framework is based on an analysis conducted among OECD countries with the FOI model (focusing on future, outside and inside potentials. Several development models can be deduced by this method, out of which only the large corporation-based, bureaucratic model is discussed in detail. The large corporation-based, bureaucratic model implies a development strategy focused on the creation of domestic safe havens. Based on country studies, it is concluded that well-performing safe havens require the active participation of the state. We find that, in countries adhering to this model, business competitiveness is sustained through intensive public support, and an active role taken by the government in education, research and development, in detecting and exploiting special market niches, and in encouraging sectorial cooperation.

  14. Computational Modeling of Large Wildfires: A Roadmap

    KAUST Repository

    Coen, Janice L.

    2010-08-01

    Wildland fire behavior, particularly that of large, uncontrolled wildfires, has not been well understood or predicted. Our methodology simulates this phenomenon using high-resolution dynamic models in which numerical weather prediction (NWP) models are coupled to fire behavior models. NWP models are capable of modeling very high resolution (< 100 m) atmospheric flows. The wildland fire component is based upon semi-empirical formulas for fireline rate of spread, post-frontal heat release, and a canopy fire. The fire behavior is coupled to the atmospheric model such that low level winds drive the spread of the surface fire, which in turn releases sensible heat, latent heat, and smoke fluxes into the lower atmosphere, feeding back to affect the winds directing the fire. These coupled dynamic models capture the rapid spread downwind, flank runs up canyons, bifurcations of the fire into two heads, and rough agreement in area, shape, and direction of spread at periods for which fire location data is available. Yet, intriguing computational science questions arise in applying such models in a predictive manner, including physical processes that span a vast range of scales, processes such as spotting that cannot be modeled deterministically, estimating the consequences of uncertainty, the efforts to steer simulations with field data ("data assimilation"), lingering issues with short term forecasting of weather that may show skill only on the order of a few hours, and the difficulty of gathering pertinent data for verification and initialization in a dangerous environment. © 2010 IEEE.

  15. Does verbatim sentence recall underestimate the language competence of near-native speakers?

    Directory of Open Access Journals (Sweden)

    Judith eSchweppe

    2015-02-01

    Full Text Available Verbatim sentence recall is widely used to test the language competence of native and non-native speakers, since it involves comprehension and production of connected speech. However, we assume that, to maintain surface information, sentence recall relies particularly on attentional resources, which differentially affects native and non-native speakers. Since language processing is less automatized even in near-natives than in native speakers, processing a sentence in a foreign language while also retaining its surface form may result in cognitive overload. We contrasted the sentence recall performance of German native speakers with that of highly proficient non-natives. Non-natives recalled the sentences significantly more poorly than the natives, but performed equally well on a cloze test. This implies that sentence recall underestimates the language competence of good non-native speakers in mixed groups with native speakers. The findings also suggest that theories of sentence recall need to consider both its linguistic and its attentional aspects.

  16. Look before You Leap: Underestimating Chinese Student History, Chinese University Setting and Chinese University Steering in Sino-British HE Joint Ventures?

    Science.gov (United States)

    Dow, Ewan G.

    2010-01-01

    This article makes the case--in three parts--that many Anglo-Chinese university collaborations (joint ventures) to date have seriously underestimated Chinese (student) history, the Chinese university setting and Chinese national governmental steering as part of the process of "glocalisation". Recent turbulence in this particular HE…

  17. Modeling and Control of a Large Nuclear Reactor A Three-Time-Scale Approach

    CERN Document Server

    Shimjith, S R; Bandyopadhyay, B

    2013-01-01

    Control analysis and design of large nuclear reactors requires a suitable mathematical model representing the steady state and dynamic behavior of the reactor with reasonable accuracy. This task is, however, quite challenging because of several complex dynamic phenomena existing in a reactor. Quite often, the models developed would be of prohibitively large order, non-linear and of complex structure not readily amenable for control studies. Moreover, the existence of simultaneously occurring dynamic variations at different speeds makes the mathematical model susceptible to numerical ill-conditioning, inhibiting direct application of standard control techniques. This monograph introduces a technique for mathematical modeling of large nuclear reactors in the framework of multi-point kinetics, to obtain a comparatively smaller order model in standard state space form thus overcoming these difficulties. It further brings in innovative methods for controller design for systems exhibiting multi-time-scale property,...

  18. Software engineering the mixed model for genome-wide association studies on large samples.

    Science.gov (United States)

    Zhang, Zhiwu; Buckler, Edward S; Casstevens, Terry M; Bradbury, Peter J

    2009-11-01

    Mixed models improve the ability to detect phenotype-genotype associations in the presence of population stratification and multiple levels of relatedness in genome-wide association studies (GWAS), but for large data sets the resource consumption becomes impractical. At the same time, the sample size and number of markers used for GWAS is increasing dramatically, resulting in greater statistical power to detect those associations. The use of mixed models with increasingly large data sets depends on the availability of software for analyzing those models. While multiple software packages implement the mixed model method, no single package provides the best combination of fast computation, ability to handle large samples, flexible modeling and ease of use. Key elements of association analysis with mixed models are reviewed, including modeling phenotype-genotype associations using mixed models, population stratification, kinship and its estimation, variance component estimation, use of best linear unbiased predictors or residuals in place of raw phenotype, improving efficiency and software-user interaction. The available software packages are evaluated, and suggestions made for future software development.
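    The core computation reviewed above — testing each marker by generalized least squares against a kinship-based covariance — can be sketched in a few lines. This is a toy simulation, not any of the reviewed packages: the variance components are fixed by hand for brevity, whereas real software estimates them (e.g. by REML), and all sizes and effect values are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Simulated genotypes for n individuals at m markers (0/1/2 allele counts).
    n, m = 200, 500
    geno = rng.integers(0, 3, size=(n, m)).astype(float)

    # Kinship estimated from centered genotypes (a VanRaden-style sketch).
    z = geno - geno.mean(axis=0)
    kinship = z @ z.T / m

    # Phenotype: one causal marker (index 0) plus a polygenic background
    # drawn with covariance proportional to the kinship matrix.
    causal = 0
    y = (0.8 * geno[:, causal]
         + rng.multivariate_normal(np.zeros(n), 0.5 * kinship)
         + rng.standard_normal(n))

    # Mixed-model association: generalized least squares against
    # V = sg2*K + se2*I, with variance components fixed here by hand.
    sg2, se2 = 0.5, 1.0
    v_inv = np.linalg.inv(sg2 * kinship + se2 * np.eye(n))

    def gls_wald(marker):
        x = np.column_stack([np.ones(n), geno[:, marker]])
        xtvx = x.T @ v_inv @ x
        beta = np.linalg.solve(xtvx, x.T @ v_inv @ y)
        se = np.sqrt(np.linalg.inv(xtvx)[1, 1])
        return beta[1] / se          # Wald statistic for the marker effect

    stats = np.array([abs(gls_wald(j)) for j in range(m)])
    ```

    The resource problem the review discusses is visible even here: V is n × n, so naive inversion scales cubically with sample size, which is why efficient implementations reuse decompositions of K across markers.
    
    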

  19. A refined regional modeling approach for the Corn Belt - Experiences and recommendations for large-scale integrated modeling

    Science.gov (United States)

    Panagopoulos, Yiannis; Gassman, Philip W.; Jha, Manoj K.; Kling, Catherine L.; Campbell, Todd; Srinivasan, Raghavan; White, Michael; Arnold, Jeffrey G.

    2015-05-01

    Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region is comprised of two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or 'subwatersheds' as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the scale intensive data requirements, and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, as well as to present strengths and weaknesses of integrated modeling at such a large scale along with how it can be improved on the basis of the current modeling structure and results. The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential

  20. A Novel Method Using Abstract Convex Underestimation in Ab-Initio Protein Structure Prediction for Guiding Search in Conformational Feature Space.

    Science.gov (United States)

    Hao, Xiao-Hu; Zhang, Gui-Jun; Zhou, Xiao-Gen; Yu, Xu-Feng

    2016-01-01

    To address the searching problem of protein conformational space in ab-initio protein structure prediction, a novel method using abstract convex underestimation (ACUE) based on the framework of evolutionary algorithm was proposed. Computing such conformations, essential to associate structural and functional information with gene sequences, is challenging due to the high-dimensionality and rugged energy surface of the protein conformational space. As a consequence, the dimension of protein conformational space should be reduced to a proper level. In this paper, the high-dimensionality original conformational space was converted into feature space whose dimension is considerably reduced by feature extraction technique. And, the underestimate space could be constructed according to abstract convex theory. Thus, the entropy effect caused by searching in the high-dimensionality conformational space could be avoided through such conversion. The tight lower bound estimate information was obtained to guide the searching direction, and the invalid searching area in which the global optimal solution is not located could be eliminated in advance. Moreover, instead of expensively calculating the energy of conformations in the original conformational space, the estimate value is employed to judge if the conformation is worth exploring to reduce the evaluation time, thereby making computational cost lower and the searching process more efficient. Additionally, fragment assembly and the Monte Carlo method are combined to generate a series of metastable conformations by sampling in the conformational space. The proposed method provides a novel technique to solve the searching problem of protein conformational space. Twenty small-to-medium structurally diverse proteins were tested, and the proposed ACUE method was compared with It Fix, HEA, Rosetta and the developed method LEDE without underestimate information. Test results show that the ACUE method can more rapidly and more

  1. Large p_T pion production and clustered parton model

    Energy Technology Data Exchange (ETDEWEB)

    Kanki, T [Osaka Univ., Toyonaka (Japan). Coll. of General Education

    1977-05-01

    Recent experimental results on the large p_T inclusive π0 productions by pp and πp collisions are interpreted by the parton model in which the constituent quarks are defined to be the clusters of the quark-partons and gluons.

  2. Large-scale inverse model analyses employing fast randomized data reduction

    Science.gov (United States)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10⁷ or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
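    The sketching step — multiplying the data by a short, fat random matrix so that the reduced system approximately preserves the solution — can be illustrated on a toy linear least-squares problem. This is a generic Gaussian sketch, not the RGA/MADS implementation; the problem sizes and sketch dimension are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy linear inverse problem d = G m + noise, with many observations.
    n_obs, n_par = 10_000, 50
    G = rng.standard_normal((n_obs, n_par))
    m_true = rng.standard_normal(n_par)
    d = G @ m_true + 0.01 * rng.standard_normal(n_obs)

    # A Gaussian sketching matrix S compresses the observations from
    # n_obs rows down to k rows while approximately preserving the
    # least-squares solution (k is a tunable sketch size).
    k = 400
    S = rng.standard_normal((k, n_obs)) / np.sqrt(k)

    m_full, *_ = np.linalg.lstsq(G, d, rcond=None)
    m_sketch, *_ = np.linalg.lstsq(S @ G, S @ d, rcond=None)

    # Relative distance between the sketched and the full solution.
    err = np.linalg.norm(m_sketch - m_full) / np.linalg.norm(m_full)
    ```

    The sketched system has k rows instead of n_obs, so subsequent solver costs scale with the sketch size rather than the raw data size, which mirrors the paper's claim that costs scale with information content.
    
    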

  3. Photorealistic large-scale urban city model reconstruction.

    Science.gov (United States)

    Poullis, Charalambos; You, Suya

    2009-01-01

    The rapid and efficient creation of virtual environments has become a crucial part of virtual reality applications. In particular, civil and defense applications often require and employ detailed models of operations areas for training, simulations of different scenarios, planning for natural or man-made events, monitoring, surveillance, games, and films. A realistic representation of the large-scale environments is therefore imperative for the success of such applications since it increases the immersive experience of its users and helps reduce the difference between physical and virtual reality. However, the task of creating such large-scale virtual environments still remains a time-consuming and manual work. In this work, we propose a novel method for the rapid reconstruction of photorealistic large-scale virtual environments. First, a novel, extendible, parameterized geometric primitive is presented for the automatic building identification and reconstruction of building structures. In addition, buildings with complex roofs containing complex linear and nonlinear surfaces are reconstructed interactively using a linear polygonal and a nonlinear primitive, respectively. Second, we present a rendering pipeline for the composition of photorealistic textures, which unlike existing techniques, can recover missing or occluded texture information by integrating multiple information captured from different optical sensors (ground, aerial, and satellite).

  4. Finite element modelling for fatigue stress analysis of large suspension bridges

    Science.gov (United States)

    Chan, Tommy H. T.; Guo, L.; Li, Z. X.

    2003-03-01

    Fatigue is an important failure mode for large suspension bridges under traffic loadings. However, large suspension bridges have so many attributes that it is difficult to analyze their fatigue damage using experimental measurement methods. Numerical simulation is a feasible method of studying such fatigue damage. In British standards, the finite element method is recommended as a rigorous method for steel bridge fatigue analysis. This paper aims at developing a finite element (FE) model of a large suspension steel bridge for fatigue stress analysis. As a case study, a FE model of the Tsing Ma Bridge is presented. The verification of the model is carried out with the help of the measured bridge modal characteristics and the online data measured by the structural health monitoring system installed on the bridge. The results show that the constructed FE model is efficient for bridge dynamic analysis. Global structural analyses using the developed FE model are presented to determine the components of the nominal stress generated by railway loadings and some typical highway loadings. The critical locations in the bridge main span are also identified with the numerical results of the global FE stress analysis. Local stress analysis of a typical weld connection is carried out to obtain the hot-spot stresses in the region. These results provide a basis for evaluating fatigue damage and predicting the remaining life of the bridge.

  5. A trans-Amazonian screening of mtDNA reveals deep intraspecific divergence in forest birds and suggests a vast underestimation of species diversity.

    Directory of Open Access Journals (Sweden)

    Borja Milá

    Full Text Available The Amazonian avifauna remains severely understudied relative to that of the temperate zone, and its species richness is thought to be underestimated by current taxonomy. Recent molecular systematic studies using mtDNA sequence reveal that traditionally accepted species-level taxa often conceal genetically divergent subspecific lineages found to represent new species upon close taxonomic scrutiny, suggesting that intraspecific mtDNA variation could be useful in species discovery. Surveys of mtDNA variation in Holarctic species have revealed patterns of variation that are largely congruent with species boundaries. However, little information exists on intraspecific divergence in most Amazonian species. Here we screen intraspecific mtDNA genetic variation in 41 Amazonian forest understory species belonging to 36 genera and 17 families in 6 orders, using 758 individual samples from Ecuador and French Guiana. For 13 of these species, we also analyzed trans-Andean populations from the Ecuadorian Chocó. A consistent pattern of deep intraspecific divergence among trans-Amazonian haplogroups was found for 33 of the 41 taxa, and genetic differentiation and genetic diversity among them was highly variable, suggesting a complex range of evolutionary histories. Mean sequence divergence within families was the same as that found in North American birds (13%), yet mean intraspecific divergence in Neotropical species was an order of magnitude larger (2.13% vs. 0.23%), with mean distance between intraspecific lineages reaching 3.56%. We found no clear relationship between genetic distances and differentiation in plumage color. Our results identify numerous genetically and phenotypically divergent lineages which may result in new species-level designations upon closer taxonomic scrutiny and thorough sampling, although lineages in the tropical region could be older than those in the temperate zone without necessarily representing separate species. In

  6. Penalized Estimation in Large-Scale Generalized Linear Array Models

    DEFF Research Database (Denmark)

    Lund, Adam; Vincent, Martin; Hansen, Niels Richard

    2017-01-01

    Large-scale generalized linear array models (GLAMs) can be challenging to fit. Computation and storage of its tensor product design matrix can be impossible due to time and memory constraints, and previously considered design matrix free algorithms do not scale well with the dimension...

  7. Comparison of radiation parametrizations within the HARMONIE-AROME NWP model

    Science.gov (United States)

    Rontu, Laura; Lindfors, Anders V.

    2018-05-01

    Downwelling shortwave radiation at the surface (SWDS, global solar radiation flux), given by three different parametrization schemes, was compared to observations in HARMONIE-AROME numerical weather prediction (NWP) model experiments over Finland in spring 2017. Simulated fluxes agreed well with each other and with the observations in clear-sky cases. In cloudy-sky conditions, all schemes tended to underestimate SWDS at the daily level compared to the measurements. Large local and temporal differences between the model results and observations were seen, related to the variations and uncertainty of the predicted cloud properties. The results suggest that an NWP model could benefit from using different radiative transfer parametrizations to generate perturbations for fine-resolution ensemble prediction systems. In addition, we recommend using global radiation observations for the standard validation of NWP models.

  8. Investigation on the integral output power model of a large-scale wind farm

    Institute of Scientific and Technical Information of China (English)

    BAO Nengsheng; MA Xiuqian; NI Weidou

    2007-01-01

    The integral output power model of a large-scale wind farm is needed when estimating the wind farm's output over a period of time in the future. The actual wind speed power model and calculation method of a wind farm made up of many wind turbine units are discussed. After analyzing the incoming wind flow characteristics and their energy distributions, and after considering the multi-effects among the wind turbine units and certain assumptions, the incoming wind flow model of multi-units is built. The calculation algorithms and steps of the integral output power model of a large-scale wind farm are provided. Finally, an actual power output of the wind farm is calculated and analyzed using practical wind speed measurement data. The characteristics of a large-scale wind farm are also discussed.

  9. The Large Office Environment - Measurement and Modeling of the Wideband Radio Channel

    DEFF Research Database (Denmark)

    Andersen, Jørgen Bach; Nielsen, Jesper Ødum; Bauch, Gerhard

    2006-01-01

    In a future 4G or WLAN wideband application we can imagine multiple users in a large office environment consisting of a single room with partitions. Up to now, indoor radio channel measurement and modelling has mainly concentrated on scenarios with several office rooms and corridors. We present...... here measurements at 5.8 GHz for 100 MHz bandwidth and a novel modelling approach for the wideband radio channel in a large office room environment. An acoustic-like reverberation theory is proposed that allows one to specify a tapped delay line model just from the room dimensions and an average...... calculated from the measurements. The proposed model can likely also be applied to indoor hot spot scenarios....

  10. A Model for Teaching Large Classes: Facilitating a "Small Class Feel"

    Science.gov (United States)

    Lynch, Rosealie P.; Pappas, Eric

    2017-01-01

    This paper presents a model for teaching large classes that facilitates a "small class feel" to counteract the distance, anonymity, and formality that often characterize large lecture-style courses in higher education. One author (E. P.) has been teaching a 300-student general education critical thinking course for ten years, and the…

  11. Investigating compound flooding in an estuary using hydrodynamic modelling: a case study from the Shoalhaven River, Australia

    Science.gov (United States)

    Kumbier, Kristian; Carvalho, Rafael C.; Vafeidis, Athanasios T.; Woodroffe, Colin D.

    2018-02-01

    Many previous modelling studies have considered storm-tide and riverine flooding independently, even though joint-probability analysis highlighted significant dependence between extreme rainfall and extreme storm surges in estuarine environments. This study investigates compound flooding by quantifying horizontal and vertical differences in coastal flood risk estimates resulting from a separation of storm-tide and riverine flooding processes. We used an open-source version of the Delft3D model to simulate flood extent and inundation depth due to a storm event that occurred in June 2016 in the Shoalhaven Estuary, south-eastern Australia. Time series of observed water levels and discharge measurements are used to force model boundaries, whereas observational data such as satellite imagery, aerial photographs, tidal gauges and water level logger measurements are used to validate modelling results. The comparison of simulation results including and excluding riverine discharge demonstrated large differences in modelled flood extents and inundation depths. A flood risk assessment accounting only for storm-tide flooding would have underestimated the flood extent of the June 2016 storm event by 30 % (20.5 km2). Furthermore, inundation depths would have been underestimated on average by 0.34 m and by up to 1.5 m locally. We recommend considering storm-tide and riverine flooding processes jointly in estuaries with large catchment areas, which are known to have a quick response time to extreme rainfall. In addition, comparison of different boundary set-ups at the intermittent entrance in Shoalhaven Heads indicated that a permanent opening, in order to reduce exposure to riverine flooding, would increase tidal range and exposure to both storm-tide flooding and wave action.

  12. Underestimation of nuclear fuel burnup – theory, demonstration and solution in numerical models

    Directory of Open Access Journals (Sweden)

    Gajda Paweł

    2016-01-01

    Full Text Available Monte Carlo methodology provides a reference statistical solution of neutron transport criticality problems of nuclear systems. Estimated reaction rates can be applied as an input to the Bateman equations that govern the isotopic evolution of reactor materials. Because the statistical solution of the Boltzmann equation is computationally expensive, it is in practice applied to time steps of limited length. In this paper we show that the simple staircase step model leads to underprediction of numerical fuel burnup (Fissions per Initial Metal Atom, FIMA). Theoretical considerations indicate that this error is inversely proportional to the length of the time step and originates from the variation of heating per source neutron. The bias can be diminished by application of a predictor-corrector step model. A set of burnup simulations with various step lengths and coupling schemes has been performed. SERPENT code version 1.17 has been applied to the model of a typical fuel assembly from a Pressurized Water Reactor. In the reference case FIMA reaches 6.24%, which is equivalent to about 60 GWD/tHM of industrial burnup. Discrepancies up to 1% have been observed depending on the time step model, and the theoretical predictions are consistent with the numerical results. The conclusions presented in this paper are important for research and development concerning the nuclear fuel cycle, also in the context of Gen4 systems.
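    The staircase vs. predictor-corrector distinction can be illustrated with a single-nuclide toy depletion problem. The rate law below is an assumed stand-in for the spectral/flux feedback the abstract describes (the effective depletion rate rises as the nuclide burns out); it is not the SERPENT coupling itself.

    ```python
    import numpy as np

    def k_of(n, k0=0.1, a=2.0, n0=1.0):
        """Effective depletion rate that increases as the nuclide burns
        out, mimicking a flux rise per unit power (illustrative only)."""
        return k0 * (1.0 + a * (1.0 - n / n0))

    def staircase(n0=1.0, dt=1.0, steps=10):
        """Rate frozen at the start of each step (staircase model)."""
        n = n0
        for _ in range(steps):
            n *= np.exp(-k_of(n) * dt)
        return n

    def predictor_corrector(n0=1.0, dt=1.0, steps=10):
        """Rate re-evaluated at the predicted end-of-step state and
        averaged with the start-of-step rate."""
        n = n0
        for _ in range(steps):
            k1 = k_of(n)
            n_pred = n * np.exp(-k1 * dt)   # predictor
            k2 = k_of(n_pred)               # corrector rate
            n *= np.exp(-0.5 * (k1 + k2) * dt)
        return n

    def reference(n0=1.0, dt=1.0, steps=10, refine=1000):
        """Near-exact solution via very fine staircase stepping."""
        return staircase(n0, dt / refine, steps * refine)

    burn_stair = 1.0 - staircase()
    burn_pc    = 1.0 - predictor_corrector()
    burn_ref   = 1.0 - reference()
    ```

    Because the frozen start-of-step rate is always lower than the true rate later in the step, the staircase burnup comes out below the reference, reproducing the underprediction the paper analyzes, while the predictor-corrector result stays much closer.
    
    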

  13. Large-Signal DG-MOSFET Modelling for RFID Rectification

    Directory of Open Access Journals (Sweden)

    R. Rodríguez

    2016-01-01

    This paper analyses the capability of undoped DG-MOSFETs to operate as rectifiers for RFIDs and Wireless Power Transmission (WPT) at microwave frequencies. For this purpose, a large-signal compact model has been developed and implemented in Verilog-A. The model has been numerically validated with a device simulator (Sentaurus). It is found that the number of stages needed to achieve optimal rectifier performance is lower than that required with conventional MOSFETs. In addition, the DC output voltage can be increased by using appropriate mid-gap metals, such as TiN, for the gate. The minor impact of short channel effects (SCEs) on rectification is also pointed out.

  14. Bilevel Traffic Evacuation Model and Algorithm Design for Large-Scale Activities

    Directory of Open Access Journals (Sweden)

    Danwen Bao

    2017-01-01

    This paper establishes a bilevel planning model with one master and multiple slaves to solve traffic evacuation problems. The minimum evacuation network saturation and the shortest evacuation time are used as the objective functions for the upper- and lower-level models, respectively. The optimality conditions of this model are also analyzed. An improved particle swarm optimization (PSO) method is proposed that introduces an electromagnetism-like mechanism to solve the bilevel model and enhance its convergence efficiency. A case study is carried out using the Nanjing Olympic Sports Center. The results indicate that, for large-scale activities, the average evacuation time of the classic model is shorter but the road saturation distribution is more uneven, so the overall evacuation efficiency of the network is not high. For induced emergencies, the evacuation time of the bilevel planning model is shortened. When the audience arrival rate increases from 50% to 100%, the evacuation time is shortened by 22% to 35%, indicating that the optimization effect of the bilevel planning model is stronger than that of the classic model. Therefore, the model and algorithm presented in this paper can provide a theoretical basis for traffic-induced evacuation decision making at large-scale activities.
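
As a minimal sketch of the kind of lower-level solver described above, a plain PSO update on a toy objective is shown below. The paper's electromagnetism-like mechanism and the bilevel master/slave structure are not reproduced here, and all parameter values are generic defaults of ours:

```python
import random

def pso(objective, dim, n_particles=20, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    """Plain particle swarm optimisation: inertia w plus attraction to the
    personal best (c1) and the global best (c2)."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pbest_f = [objective(x) for x in xs]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            f = objective(xs[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = xs[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = xs[i][:], f
    return gbest, gbest_f

# Toy stand-in for the lower-level "shortest evacuation time" objective.
best_x, best_f = pso(lambda x: sum(v * v for v in x), dim=3)
```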

  15. Parameterization of a Hydrological Model for a Large, Ungauged Urban Catchment

    Directory of Open Access Journals (Sweden)

    Gerald Krebs

    2016-10-01

    Urbanization leads to the replacement of natural areas by impervious surfaces and affects the catchment hydrological cycle, with adverse environmental impacts. Low impact development (LID) tools that mimic the hydrological processes of natural areas have been developed and applied to mitigate these impacts. Hydrological simulations are one way to evaluate LID performance, but the associated small-scale processes require a highly spatially distributed and explicit modeling approach. However, detailed data for model development are often not available for large urban areas, hampering model parameterization. In this paper we propose a methodology to parameterize a hydrological model for a large, ungauged urban area while maintaining a detailed surface discretization for direct parameter manipulation in LID simulation and a firm reliance on available data for model conceptualization. Catchment delineation was based on a high-resolution digital elevation model (DEM) and model parameterization relied on a novel model regionalization approach. The impact of automated delineation and model regionalization on simulation results was evaluated for three monitored study catchments (5.87–12.59 ha). The simulated runoff peak was most sensitive to accurate catchment discretization and calibration, while both the runoff volume and the fit of the hydrograph were less affected.

  16. Pain begets pain: When marathon runners are not in pain anymore, they underestimate their memory of marathon pain: A mediation analysis

    NARCIS (Netherlands)

    Babel, P.; Bajcar, E.A.; Smieja, M.; Adamczyk, W.; Swider, K.J.; Kicman, P.; Lisinska, N.

    2018-01-01

    Background: A previous study has shown that memory of pain induced by running a marathon might be underestimated. However, little is known about the factors that might influence such a memory distortion during pain recall. The aim of the study was to investigate the memory of pain induced by running

  17. Halo modelling in chameleon theories

    Energy Technology Data Exchange (ETDEWEB)

    Lombriser, Lucas; Koyama, Kazuya [Institute of Cosmology and Gravitation, University of Portsmouth, Dennis Sciama Building, Burnaby Road, Portsmouth, PO1 3FX (United Kingdom); Li, Baojiu, E-mail: lucas.lombriser@port.ac.uk, E-mail: kazuya.koyama@port.ac.uk, E-mail: baojiu.li@durham.ac.uk [Institute for Computational Cosmology, Ogden Centre for Fundamental Physics, Department of Physics, University of Durham, Science Laboratories, South Road, Durham, DH1 3LE (United Kingdom)

    2014-03-01

    We analyse modelling techniques for the large-scale structure formed in scalar-tensor theories of constant Brans-Dicke parameter which match the concordance model background expansion history and produce a chameleon suppression of the gravitational modification in high-density regions. Thereby, we use a mass and environment dependent chameleon spherical collapse model, the Sheth-Tormen halo mass function and linear halo bias, the Navarro-Frenk-White halo density profile, and the halo model. Furthermore, using the spherical collapse model, we extrapolate a chameleon mass-concentration scaling relation from a ΛCDM prescription calibrated to N-body simulations. We also provide constraints on the model parameters to ensure viability on local scales. We test our description of the halo mass function and nonlinear matter power spectrum against the respective observables extracted from large-volume and high-resolution N-body simulations in the limiting case of f(R) gravity, corresponding to a vanishing Brans-Dicke parameter. We find good agreement between the two; the halo model provides a good qualitative description of the shape of the relative enhancement of the f(R) matter power spectrum with respect to ΛCDM caused by the extra attractive gravitational force but fails to recover the correct amplitude. Introducing an effective linear power spectrum in the computation of the two-halo term to account for an underestimation of the chameleon suppression at intermediate scales in our approach, we accurately reproduce the measurements from the N-body simulations.
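
For reference, the Sheth-Tormen multiplicity function used in such halo-model descriptions has the standard form below, with peak height \(\nu = \delta_c / \sigma(M)\); the parameter values quoted are the commonly used calibrations, not necessarily those adopted in this work:

```latex
\nu f(\nu) \;=\; A \,\sqrt{\frac{2 a \nu^{2}}{\pi}}
\left[ 1 + \left( a \nu^{2} \right)^{-p} \right]
\exp\!\left( -\frac{a \nu^{2}}{2} \right),
\qquad a \simeq 0.707,\quad p \simeq 0.3,\quad A \simeq 0.322 .
```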

  18. Halo modelling in chameleon theories

    International Nuclear Information System (INIS)

    Lombriser, Lucas; Koyama, Kazuya; Li, Baojiu

    2014-01-01

    We analyse modelling techniques for the large-scale structure formed in scalar-tensor theories of constant Brans-Dicke parameter which match the concordance model background expansion history and produce a chameleon suppression of the gravitational modification in high-density regions. Thereby, we use a mass and environment dependent chameleon spherical collapse model, the Sheth-Tormen halo mass function and linear halo bias, the Navarro-Frenk-White halo density profile, and the halo model. Furthermore, using the spherical collapse model, we extrapolate a chameleon mass-concentration scaling relation from a ΛCDM prescription calibrated to N-body simulations. We also provide constraints on the model parameters to ensure viability on local scales. We test our description of the halo mass function and nonlinear matter power spectrum against the respective observables extracted from large-volume and high-resolution N-body simulations in the limiting case of f(R) gravity, corresponding to a vanishing Brans-Dicke parameter. We find good agreement between the two; the halo model provides a good qualitative description of the shape of the relative enhancement of the f(R) matter power spectrum with respect to ΛCDM caused by the extra attractive gravitational force but fails to recover the correct amplitude. Introducing an effective linear power spectrum in the computation of the two-halo term to account for an underestimation of the chameleon suppression at intermediate scales in our approach, we accurately reproduce the measurements from the N-body simulations.

  19. Parallel runs of a large air pollution model on a grid of Sun computers

    DEFF Research Database (Denmark)

    Alexandrov, V.N.; Owczarz, W.; Thomsen, Per Grove

    2004-01-01

    Large-scale air pollution models can successfully be used in different environmental studies. These models are described mathematically by systems of partial differential equations. Splitting procedures, followed by discretization of the spatial derivatives, lead to several large systems...

  20. Groundwater Flow and Thermal Modeling to Support a Preferred Conceptual Model for the Large Hydraulic Gradient North of Yucca Mountain

    International Nuclear Information System (INIS)

    McGraw, D.; Oberlander, P.

    2007-01-01

    The purpose of this study is to report the results of a preliminary modeling framework to investigate the causes of the large hydraulic gradient north of Yucca Mountain. This study builds on the Saturated Zone Site-Scale Flow and Transport Model (referenced herein as the Site-scale model (Zyvoloski, 2004a)), which is a three-dimensional saturated zone model of the Yucca Mountain area. Groundwater flow was simulated under natural conditions. The model framework and grid design describe the geologic layering, and the calibration parameters describe the hydrogeology. The Site-scale model is calibrated to hydraulic heads, fluid temperature, and groundwater flowpaths. One area of interest in the Site-scale model represents the large hydraulic gradient north of Yucca Mountain. Nearby water levels suggest over 200 meters of hydraulic head difference in less than 1,000 meters horizontal distance. Given the geologic conceptual models defined by various hydrogeologic reports (Faunt, 2000, 2001; Zyvoloski, 2004b), no definitive explanation has been found for the cause of the large hydraulic gradient. Luckey et al. (1996) present several possible explanations for the large hydraulic gradient: (1) the gradient is simply the result of flow through the upper volcanic confining unit, which is nearly 300 meters thick near the large gradient; (2) the gradient represents a semi-perched system in which flow in the upper and lower aquifers is predominantly horizontal, whereas flow in the upper confining unit would be predominantly vertical; (3) the gradient represents a drain down a buried fault from the volcanic aquifers to the lower Carbonate Aquifer; (4) the gradient represents a spillway in which a fault marks the effective northern limit of the lower volcanic aquifer; (5) the large gradient results from the presence at depth of the Eleana Formation, a part of the Paleozoic upper confining unit, which overlies the lower Carbonate Aquifer in much of the Death Valley region. The

  1. Characterization of the Sahelian-Sudan rainfall based on observations and regional climate models

    Science.gov (United States)

    Salih, Abubakr A. M.; Elagib, Nadir Ahmed; Tjernström, Michael; Zhang, Qiong

    2018-04-01

    The African Sahel region is known to be highly vulnerable to climate variability and change. We analyze rainfall in Sahelian Sudan in terms of the distribution of rain-days and amounts, and examine whether regional climate models can capture these rainfall features. Three regional models, namely the Regional Model (REMO), the Rossby Center Atmospheric Model (RCA) and the Regional Climate Model (RegCM4), are evaluated against gridded observations (Climate Research Unit, Tropical Rainfall Measuring Mission, and ERA-Interim reanalysis) and rain-gauge data from six arid and semi-arid weather stations across Sahelian Sudan over the period 1989 to 2008. Most of the observed rain-days are characterized by weak (0.1-1.0 mm/day) to moderate (> 1.0-10.0 mm/day) rainfall, with average frequencies of 18.5% and 48.0% of the total annual rain-days, respectively. Although very strong rainfall events (> 30.0 mm/day) occur rarely, they account for a large fraction of the total annual rainfall (28-42% across the stations). The performance of the models varies both spatially and temporally. RegCM4 most closely reproduces the observed annual rainfall cycle, especially for the more arid locations, but all three models fail to capture the strong rainfall events and hence underestimate their contribution to the total annual number of rain-days and rainfall amount. However, excessive moderate rainfall compensates for this underestimation in the models in an annual average sense. The present study uncovers some of the models' limitations in skillfully reproducing the observed climate over dry regions; it will aid model users in recognizing the uncertainties in the model output and will help the climate and hydrological modeling communities improve their models.

  2. Two Proposals for determination of large reactivity of reactor

    International Nuclear Information System (INIS)

    Kaneko, Yoshihiko; Nagao, Yoshiharu; Yamane, Tsuyoshi; Takeuchi, Mituo

    1999-01-01

    Two proposals for the determination of large reactivity in reactors are presented: one for large positive reactivity, the other for large negative reactivity. Existing experimental methods for determining large positive reactivity, the fuel addition method and the neutron absorption substitution method, were analyzed. It is found that both methods can be affected by substantial systematic errors of up to ∼20% when the excess multiplication factor approaches ∼20%Δk. To cope with this difficulty, a revised method is proposed. The revised method evaluates the potential excess multiplication factor as the consecutive increments of the effective multiplication factor in a virtual core, which are converted from those in an actual core by multiplying by a conversion factor f. The conversion factor f is, in principle, to be obtained by calculation. Numerical experiments were performed on a slab reactor using a one-group diffusion model. The rod drop experimental method is widely used for determining large negative reactivity values. The decay of the neutron density following the initiation of rod insertion is slowed according to the insertion speed. It is shown, by analysis based on one-point reactor kinetics, that in such a case the integral counting method used hitherto tends to significantly underestimate the absolute values of negative reactivity, even if the insertion time is in the range of 1-2 s. For the High Temperature Engineering Test Reactor (HTTR), the insertion time will be lengthened to 4-6 s. To overcome this difficulty, the delayed integral counting method is proposed, in which the integration of neutron counts starts after the rod drop has been completed and the counts before that point are evaluated by calculation using one-point reactor kinetics. This is because the influence of the insertion time on the decay of the neutron
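
The rod-drop bias described above can be illustrated with a toy one-delayed-group point-kinetics integration: a slow insertion inflates the integrated neutron counts relative to an ideal instantaneous drop, so a plain integral-counting estimate infers too small a |ρ|. This is our own minimal sketch; all parameter values are illustrative, not HTTR data:

```python
# One-delayed-group point kinetics, explicit Euler integration.
# n = neutron density, c = delayed-neutron precursor density.

def integrated_counts(rho_final, t_insert, t_end=30.0, dt=2e-4,
                      beta=0.0065, lam=0.08, Lam=1e-4):
    n = 1.0
    c = beta * n / (lam * Lam)          # precursor equilibrium at n = 1
    t, integral = 0.0, 0.0
    while t < t_end:
        # reactivity ramps linearly during insertion, then stays constant
        frac = 1.0 if t_insert == 0.0 else min(1.0, t / t_insert)
        rho = rho_final * frac
        dn = ((rho - beta) / Lam) * n + lam * c
        dc = (beta / Lam) * n - lam * c
        n += dn * dt
        c += dc * dt
        integral += n * dt              # detector counts ~ integral of n(t)
        t += dt
    return integral

counts_instant = integrated_counts(-0.02, t_insert=0.0)  # idealised drop
counts_slow = integrated_counts(-0.02, t_insert=5.0)     # slow 5 s insertion

# counts_slow exceeds counts_instant: the extra counts accumulated while
# the rod is still moving are what bias the inferred |rho| low.
```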

  3. The Chemistry of Atmosphere-Forest Exchange (CAFE Model – Part 2: Application to BEARPEX-2007 observations

    Directory of Open Access Journals (Sweden)

    G. M. Wolfe

    2011-02-01

    In a companion paper, we introduced the Chemistry of Atmosphere-Forest Exchange (CAFE) model, a vertically resolved 1-D chemical transport model designed to probe the details of near-surface reactive gas exchange. Here, we apply CAFE to noontime observations from the 2007 Biosphere Effects on Aerosols and Photochemistry Experiment (BEARPEX-2007). In this work we evaluate the CAFE modeling approach, demonstrate the significance of in-canopy chemistry for forest-atmosphere exchange and identify key shortcomings in the current understanding of intra-canopy processes.

    CAFE generally reproduces the BEARPEX-2007 observations but requires an enhanced radical recycling mechanism to overcome a factor-of-6 underestimate of hydroxyl (OH) concentrations observed during a warm (~29 °C) period. Modeled fluxes of acyl peroxy nitrates (APN) are quite sensitive to gradients in chemical production and loss, demonstrating that chemistry may perturb forest-atmosphere exchange even when the chemical timescale is long relative to the canopy mixing timescale. The model underestimates peroxy acetyl nitrate (PAN) fluxes by 50% and the exchange velocity by nearly a factor of three under warmer conditions, suggesting that near-surface APN sinks are underestimated relative to the sources. Nitric acid typically dominates gross dry N deposition at this site, though other reactive nitrogen (NOy) species can comprise up to 28% of the N deposition budget under cooler conditions. Upward NO2 fluxes cause the net above-canopy NOy flux to be ~30% lower than the gross depositional flux. CAFE under-predicts ozone fluxes and exchange velocities by ~20%. Large uncertainty in the parameterization of cuticular and ground deposition precludes conclusive attribution of non-stomatal fluxes to chemistry or surface uptake. Model-measurement comparisons of vertical concentration gradients for several emitted species suggest that the lower canopy airspace may be

  4. First results on material identification and imaging with a large-volume muon tomography prototype

    Energy Technology Data Exchange (ETDEWEB)

    Pesente, S. [INFN Sezione di Padova, via Marzolo 8, 35131 Padova (Italy); Vanini, S. [University of Padova and INFN Sezione di Padova, via Marzolo 8, 35131 Padova (Italy)], E-mail: sara.vanini@pd.infn.it; Benettoni, M. [INFN Sezione di Padova, via Marzolo 8, 35131 Padova (Italy); Bonomi, G. [University of Brescia, via Branze 38, 25123 Brescia and INFN Sezione di Pavia, via Bassi 6, 27100 Pavia (Italy); Calvini, P. [University of Genova and INFN Sezione di Genova, via Dodecaneso 33, 16146 Genova (Italy); Checchia, P.; Conti, E.; Gonella, F.; Nebbia, G. [INFN Sezione di Padova, via Marzolo 8, 35131 Padova (Italy); Squarcia, S. [University of Genova and INFN Sezione di Genova, via Dodecaneso 33, 16146 Genova (Italy); Viesti, G. [University of Padova and INFN Sezione di Padova, via Marzolo 8, 35131 Padova (Italy); Zenoni, A. [University of Brescia, via Branze 38, 25123 Brescia and INFN Sezione di Pavia, via Bassi 6, 27100 Pavia (Italy); Zumerle, G. [University of Padova and INFN Sezione di Padova, via Marzolo 8, 35131 Padova (Italy)

    2009-06-11

    The muon tomography technique, based on the Multiple Coulomb Scattering of cosmic ray muons, has been proposed recently as a tool to perform non-destructive assays of large-volume objects without any radiation hazard. In this paper we discuss experimental results obtained with a scanning system prototype, assembled using two large-area CMS Muon Barrel drift chambers. The capability of the apparatus to produce 3D images of objects and to classify them according to their density is presented. We show that the absorption of low-momentum muons in the scanned objects produces an underestimate of their scattering density, making the discrimination of materials heavier than lead more difficult.

  5. The three-point function as a probe of models for large-scale structure

    International Nuclear Information System (INIS)

    Frieman, J.A.; Gaztanaga, E.

    1993-01-01

    The authors analyze the consequences of models of structure formation for higher-order (n-point) galaxy correlation functions in the mildly non-linear regime. Several variations of the standard Ω = 1 cold dark matter model with scale-invariant primordial perturbations have recently been introduced to obtain more power on large scales, R_p ∼ 20 h⁻¹ Mpc, e.g., low-matter-density (non-zero cosmological constant) models, 'tilted' primordial spectra, and scenarios with a mixture of cold and hot dark matter. They also include models with an effective scale-dependent bias, such as the cooperative galaxy formation scenario of Bower, et al. The authors show that higher-order (n-point) galaxy correlation functions can provide a useful test of such models and can discriminate between models with true large-scale power in the density field and those where the galaxy power arises from scale-dependent bias: a bias with rapid scale-dependence leads to a dramatic decrease of the hierarchical amplitudes Q_J at large scales, r ≳ R_p. Current observational constraints on the three-point amplitudes Q_3 and S_3 can place limits on the bias parameter(s) and appear to disfavor, but not yet rule out, the hypothesis that scale-dependent bias is responsible for the extra power observed on large scales.
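
The hierarchical three-point amplitude referred to here is defined, as usual, from the connected three-point function ζ and the two-point function ξ, while S_3 is the corresponding skewness ratio of the smoothed density field:

```latex
Q_{3} \;=\; \frac{\zeta(r_{12}, r_{23}, r_{31})}
{\xi(r_{12})\,\xi(r_{23}) + \xi(r_{23})\,\xi(r_{31}) + \xi(r_{31})\,\xi(r_{12})},
\qquad
S_{3} \;=\; \frac{\langle \delta^{3} \rangle}{\langle \delta^{2} \rangle^{2}} .
```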

  6. Lumped hydrological models as an Occam's razor for runoff modeling in large Russian Arctic basins

    OpenAIRE

    Ayzel Georgy

    2018-01-01

    This study aims to investigate the ability of three lumped hydrological models to predict the daily runoff of large-scale Arctic basins for the modern period (1979-2014) in the case of substantial data scarcity. All models were driven only by a meteorological forcing reanalysis dataset, without any additional information about the landscape, soil or vegetation cover properties of the studied basins. We found limitations of model parameter calibration in ungauged basins using global optimization alg...

  7. Estimation and Inference for Very Large Linear Mixed Effects Models

    OpenAIRE

    Gao, K.; Owen, A. B.

    2016-01-01

    Linear mixed models with large imbalanced crossed random effects structures pose severe computational problems for maximum likelihood estimation and for Bayesian analysis. The costs can grow as fast as $N^{3/2}$ when there are $N$ observations. Such problems arise in any setting where the underlying factors satisfy a many-to-many relationship (instead of a nested one), and in electronic commerce applications $N$ can be quite large. Methods that do not account for the correlation structure can...

  8. Large deformation analysis of adhesive by Eulerian method with new material model

    International Nuclear Information System (INIS)

    Maeda, K; Nishiguchi, K; Iwamoto, T; Okazawa, S

    2010-01-01

    A material model describing the large deformation of a pressure-sensitive adhesive (PSA) is presented. The relationship between stress and strain in a PSA includes viscoelasticity and rubber-elasticity. We therefore propose a material model describing viscoelasticity and rubber-elasticity, and extend it to the rate form for three-dimensional finite element analysis. After proposing the material model for the PSA, we formulate an Eulerian method to simulate large deformation behavior. In the Eulerian calculation, the Piecewise Linear Interface Calculation (PLIC) method is employed to capture the material surface. Using the PLIC method, we can impose dynamic and kinematic boundary conditions on the captured material surface. Two representative computational examples are calculated to check the validity of the present methods.

  9. The large-scale peculiar velocity field in flat models of the universe

    International Nuclear Information System (INIS)

    Vittorio, N.; Turner, M.S.

    1986-10-01

    The inflationary Universe scenario predicts a flat Universe and both adiabatic and isocurvature primordial density perturbations with the Zel'dovich spectrum. The two simplest realizations, models dominated by hot or cold dark matter, seem to be in conflict with observations. Here, flat models with two components of mass density are examined, where one component is smoothly distributed, and the large-scale (≥ 10 h⁻¹ Mpc) peculiar velocity field for these models is considered. For the smooth component, relativistic particles, a relic cosmological term, and light strings are considered. At present the observational situation is unsettled, but in principle the large-scale peculiar velocity field is a very powerful discriminator between these different models. 61 refs

  10. Particle production at large transverse momentum and hard collision models

    International Nuclear Information System (INIS)

    Ranft, G.; Ranft, J.

    1977-04-01

    The majority of the presently available experimental data is consistent with hard scattering models. Therefore the hard scattering model seems to be well established. There is good evidence for jets in large transverse momentum reactions as predicted by these models. The overall picture is however not yet well enough understood. We mention only the empirical hard scattering cross section introduced in most of the models, the lack of a deep theoretical understanding of the interplay between quark confinement and jet production, and the fact that we are not yet able to discriminate conclusively between the many proposed hard scattering models. The status of different hard collision models discussed in this paper is summarized. (author)

  11. Optimizing Prediction Using Bayesian Model Averaging: Examples Using Large-Scale Educational Assessments.

    Science.gov (United States)

    Kaplan, David; Lee, Chansoon

    2018-01-01

    This article provides a review of Bayesian model averaging as a means of optimizing the predictive performance of common statistical models applied to large-scale educational assessments. The Bayesian framework recognizes that in addition to parameter uncertainty, there is uncertainty in the choice of models themselves. A Bayesian approach to addressing the problem of model uncertainty is the method of Bayesian model averaging. Bayesian model averaging searches the space of possible models for a set of submodels that satisfy certain scientific principles and then averages the coefficients across these submodels weighted by each model's posterior model probability (PMP). Using the weighted coefficients for prediction has been shown to yield optimal predictive performance according to certain scoring rules. We demonstrate the utility of Bayesian model averaging for prediction in education research with three examples: Bayesian regression analysis, Bayesian logistic regression, and a recently developed approach for Bayesian structural equation modeling. In each case, the model-averaged estimates are shown to yield better prediction of the outcome of interest than any submodel based on predictive coverage and the log-score rule. Implications for the design of large-scale assessments when the goal is optimal prediction in a policy context are discussed.
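
The averaging step described above can be sketched numerically: coefficients of the candidate submodels are combined with posterior-model-probability weights, and the averaged coefficients are used for prediction. The submodels, coefficients and PMPs below are invented for illustration; real PMPs would come from the model search:

```python
# Sketch of Bayesian model averaging for prediction (illustrative data).
# Submodel A uses predictors x1 and x2; submodel B drops x2 (coefficient 0).
coefs = {
    "A": {"intercept": 1.0, "x1": 2.0, "x2": 0.5},
    "B": {"intercept": 1.2, "x1": 2.4, "x2": 0.0},
}
pmp = {"A": 0.7, "B": 0.3}   # posterior model probabilities (sum to 1)

# Model-averaged coefficients: PMP-weighted sum across submodels.
avg = {name: sum(pmp[m] * coefs[m][name] for m in coefs)
       for name in ("intercept", "x1", "x2")}

def predict(x1, x2, b=avg):
    """Prediction from the model-averaged coefficients."""
    return b["intercept"] + b["x1"] * x1 + b["x2"] * x2
```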

  12. Large urban fire environment: trends and model city predictions

    International Nuclear Information System (INIS)

    Larson, D.A.; Small, R.D.

    1983-01-01

    The urban fire environment that would result from a megaton-yield nuclear weapon burst is considered. The dependence of temperatures and velocities on fire size, burning intensity, turbulence, and radiation is explored, and specific calculations for three model urban areas are presented. In all cases, high velocity fire winds are predicted. The model-city results show the influence of building density and urban sprawl on the fire environment. Additional calculations consider large-area fires with the burning intensity reduced in a blast-damaged urban center

  13. Flat epithelial atypia and atypical ductal hyperplasia: carcinoma underestimation rate.

    Science.gov (United States)

    Ingegnoli, Anna; d'Aloia, Cecilia; Frattaruolo, Antonia; Pallavera, Lara; Martella, Eugenia; Crisi, Girolamo; Zompatori, Maurizio

    2010-01-01

    This study was carried out to determine the rate of underestimation of carcinoma at surgical biopsy after a diagnosis of flat epithelial atypia (FEA) or atypical ductal hyperplasia (ADH) at 11-gauge vacuum-assisted breast biopsy. A retrospective review was conducted of 476 vacuum-assisted breast biopsies performed from May 2005 to January 2007, and a total of 70 cases of atypia were identified. Fifty cases (71%) were categorized as pure ADH, 18 (26%) as pure FEA and two (3%) as concomitant FEA and ADH. Each group was compared with the subsequent open surgical specimens. Surgical biopsy was performed in 44 patients with ADH, 15 patients with FEA, and two patients with FEA and ADH. Five cases of ADH were upgraded to ductal carcinoma in situ; three cases of FEA yielded one ductal carcinoma in situ and two cases of invasive ductal carcinoma; and one case of FEA/ADH had invasive ductal carcinoma. The overall rate of malignancy was 16% for ADH (including FEA/ADH patients) and 20% for FEA. The presence of FEA or ADH at biopsy requires careful consideration, and surgical excision should be suggested.

  14. Multi-model evaluation of short-lived pollutant distributions over east Asia during summer 2008

    Science.gov (United States)

    Quennehen, B.; Raut, J.-C.; Law, K. S.; Daskalakis, N.; Ancellet, G.; Clerbaux, C.; Kim, S.-W.; Lund, M. T.; Myhre, G.; Olivié, D. J. L.; Safieddine, S.; Skeie, R. B.; Thomas, J. L.; Tsyro, S.; Bazureau, A.; Bellouin, N.; Hu, M.; Kanakidou, M.; Klimont, Z.; Kupiainen, K.; Myriokefalitakis, S.; Quaas, J.; Rumbold, S. T.; Schulz, M.; Cherian, R.; Shimizu, A.; Wang, J.; Yoon, S.-C.; Zhu, T.

    2016-08-01

    is too weak to explain the differences between the models. Our results rather point to an overestimation of SO2 emissions, in particular, close to the surface in Chinese urban areas. However, we also identify a clear underestimation of aerosol concentrations over northern India, suggesting that the rapid recent growth of emissions in India, as well as their spatial extension, is underestimated in emission inventories. Model deficiencies in the representation of pollution accumulation due to the Indian monsoon may also be playing a role. Comparison with vertical aerosol lidar measurements highlights a general underestimation of scattering aerosols in the boundary layer associated with overestimation in the free troposphere, pointing to modelled aerosol lifetimes that are too long. This is likely linked to too strong vertical transport and/or insufficient deposition efficiency during transport or export from the boundary layer, rather than chemical processing (in the case of sulphate aerosols). Underestimation of sulphate in the boundary layer implies potentially large errors in simulated aerosol-cloud interactions, via impacts on boundary-layer clouds. This evaluation has important implications for accurate assessment of air pollutants on regional air quality and global climate based on global model calculations. Ideally, models should be run at higher resolution over source regions to better simulate urban-rural pollutant gradients and/or chemical regimes, and also to better resolve pollutant processing and loss by wet deposition as well as vertical transport. Discrepancies in vertical distributions require further quantification and improvement since these are a key factor in the determination of radiative forcing from short-lived pollutants.

  15. Multi-model evaluation of short-lived pollutant distributions over east Asia during summer 2008

    Directory of Open Access Journals (Sweden)

    B. Quennehen

    2016-08-01

    mitigation in Beijing is too weak to explain the differences between the models. Our results rather point to an overestimation of SO2 emissions, in particular, close to the surface in Chinese urban areas. However, we also identify a clear underestimation of aerosol concentrations over northern India, suggesting that the rapid recent growth of emissions in India, as well as their spatial extension, is underestimated in emission inventories. Model deficiencies in the representation of pollution accumulation due to the Indian monsoon may also be playing a role. Comparison with vertical aerosol lidar measurements highlights a general underestimation of scattering aerosols in the boundary layer associated with overestimation in the free troposphere, pointing to modelled aerosol lifetimes that are too long. This is likely linked to too strong vertical transport and/or insufficient deposition efficiency during transport or export from the boundary layer, rather than chemical processing (in the case of sulphate aerosols). Underestimation of sulphate in the boundary layer implies potentially large errors in simulated aerosol–cloud interactions, via impacts on boundary-layer clouds. This evaluation has important implications for accurate assessment of air pollutants on regional air quality and global climate based on global model calculations. Ideally, models should be run at higher resolution over source regions to better simulate urban–rural pollutant gradients and/or chemical regimes, and also to better resolve pollutant processing and loss by wet deposition as well as vertical transport. Discrepancies in vertical distributions require further quantification and improvement since these are a key factor in the determination of radiative forcing from short-lived pollutants.

  16. REQUIREMENTS FOR SYSTEMS DEVELOPMENT LIFE CYCLE MODELS FOR LARGE-SCALE DEFENSE SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kadir Alpaslan DEMIR

    2015-10-01

    Full Text Available Large-scale defense system projects are strategic for maintaining and increasing the national defense capability. Therefore, governments spend billions of dollars on the acquisition and development of large-scale defense systems. The scale of defense systems is always increasing and the costs to build them are skyrocketing. Today, defense systems are software intensive and they are either a system of systems or a part of one. Historically, the project performances observed in the development of these systems have been significantly poor when compared to other types of projects. It is obvious that the currently used systems development life cycle models are insufficient to address today’s challenges of building these systems. Using a systems development life cycle model that is specifically designed for large-scale defense system developments and is effective in dealing with today’s and near-future challenges will help to improve project performances. The first step in the development of a large-scale defense systems development life cycle model is the identification of requirements for such a model. This paper contributes to the body of literature in the field by providing a set of requirements for systems development life cycle models for large-scale defense systems. Furthermore, a research agenda is proposed.

  17. Numerically modelling the large scale coronal magnetic field

    Science.gov (United States)

    Panja, Mayukh; Nandi, Dibyendu

    2016-07-01

    The solar corona spews out vast amounts of magnetized plasma into the heliosphere, which has a direct impact on the Earth's magnetosphere. Thus it is important that we develop an understanding of the dynamics of the solar corona. With our present technology it has not been possible to generate 3D magnetic maps of the solar corona; this warrants the use of numerical simulations to study the coronal magnetic field. A very popular method of doing this is to extrapolate the photospheric magnetic field using NLFF or PFSS codes. However, the extrapolations at different time intervals are completely independent of each other and do not capture the temporal evolution of magnetic fields. On the other hand, full MHD simulations of the global coronal field, apart from being computationally very expensive, would be physically less transparent owing to the large number of free parameters that are typically used in such codes. This brings us to the magnetofrictional model, which is relatively simpler and computationally more economical. We have developed a magnetofrictional model in 3D spherical polar coordinates to study the large-scale global coronal field. Here we present studies of changing connectivities between active regions in response to photospheric motions.

  18. Comment on "Polarized window for left-right symmetry and a right-handed neutrino at the Large Hadron-Electron Collider"

    Science.gov (United States)

    Queiroz, Farinaldo S.

    2016-06-01

    Reference [1] (S. Mondal and S. K. Rai, Phys. Rev. D 93, 011702 (2016)) recently argued that the projected Large Hadron Electron Collider (LHeC) presents a unique opportunity to discover a left-right symmetry, since the LHeC has availability for polarized electrons. In particular, the authors apply some basic pT cuts on the jets and claim that the on-shell production of right-handed neutrinos at the LHeC, which violates lepton number by two units, has practically no standard model background and, therefore, that the right-handed nature of the WR interactions intrinsic to left-right symmetric models can be confirmed by using colliding beams consisting of an 80% polarized electron and a 7 TeV proton. In this Comment, we show that their findings, as presented, have vastly underestimated the SM background, which prevents a left-right symmetry signal from being seen at the LHeC.

  19. Illustrating the benefit of using hourly monitoring data on secondary inorganic aerosol and its precursors for model evaluation

    Directory of Open Access Journals (Sweden)

    M. Schaap

    2011-11-01

    Full Text Available Secondary inorganic aerosol, most notably ammonium nitrate and ammonium sulphate, is an important contributor to ambient particulate mass and provides a means for long-range transport of acidifying components. The modelling of the formation and fate of these components is challenging. Especially, the formation of the semi-volatile ammonium nitrate is strongly dependent on ambient conditions and the precursor concentrations. For the first time, an hourly artefact-free data set from the MARGA instrument is available for the period of a full year (1 August 2007 to 1 August 2008) at Cabauw, the Netherlands. This data set is used to verify the results of the LOTOS-EUROS model. The comparison showed that the model underestimates the SIA levels. Closer inspection revealed that baseline values appear well estimated for ammonium and sulphate and that the underestimation predominantly takes place at the peak concentrations. For nitrate, the variability towards high concentrations is much better captured; however, a systematic relative underestimation was found. The model is able to reproduce many features of the intra-day variability observed for SIA. Although the model captures the seasonal and average diurnal variation of the SIA components, the modelled variability for the nitrate precursor gas nitric acid is much too large. It was found that the thermodynamic equilibrium module produces a too stable ammonium nitrate in winter and during night time in summer, whereas during the daytime in summer it is too unstable. We recommend improving the model by verifying the equilibrium module, including coarse-mode nitrate, and addressing the processes concerning SIA formation, combined with a detailed analysis of the data set at hand. The benefit of the hourly data with both particulate and gas-phase concentrations is illustrated, and a continuation of these measurements may prove to be very useful in future model evaluation and improvement studies.

  20. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part II: Multi-layered cloud

    Energy Technology Data Exchange (ETDEWEB)

    Morrison, H; McCoy, R B; Klein, S A; Xie, S; Luo, Y; Avramov, A; Chen, M; Cole, J; Falk, M; Foster, M; Genio, A D; Harrington, J; Hoose, C; Khairoutdinov, M; Larson, V; Liu, X; McFarquhar, G; Poellot, M; Shipway, B; Shupe, M; Sud, Y; Turner, D; Veron, D; Walker, G; Wang, Z; Wolf, A; Xu, K; Yang, F; Zhang, G

    2008-02-27

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a deep, multi-layered, mixed-phase cloud system observed during the ARM Mixed-Phase Arctic Cloud Experiment. This cloud system was associated with strong surface turbulent sensible and latent heat fluxes as cold air flowed over the open Arctic Ocean, combined with a low pressure system that supplied moisture at mid-level. The simulations, performed by 13 single-column and 4 cloud-resolving models, generally overestimate the liquid water path and strongly underestimate the ice water path, although there is a large spread among the models. This finding is in contrast with results for the single-layer, low-level mixed-phase stratocumulus case in Part I of this study, as well as previous studies of shallow mixed-phase Arctic clouds, which showed an underprediction of liquid water path. The overestimate of liquid water path and underestimate of ice water path occurred primarily when deeper mixed-phase clouds extending into the mid-troposphere were observed. These results suggest important differences in the ability of models to simulate Arctic mixed-phase clouds that are deep and multi-layered versus shallow and single-layered. In general, models with a more sophisticated, two-moment treatment of the cloud microphysics produce a somewhat smaller liquid water path that is closer to observations. The cloud-resolving models tend to produce a larger cloud fraction than the single-column models. The liquid water path and especially the cloud fraction have a large impact on the cloud radiative forcing at the surface, which is dominated by the longwave flux for this case.

  1. Cardiac regeneration using pluripotent stem cells—Progression to large animal models

    Directory of Open Access Journals (Sweden)

    James J.H. Chong

    2014-11-01

    Full Text Available Pluripotent stem cells (PSCs have indisputable cardiomyogenic potential and therefore have been intensively investigated as a potential cardiac regenerative therapy. Current directed differentiation protocols are able to produce high yields of cardiomyocytes from PSCs and studies in small animal models of cardiovascular disease have proven sustained engraftment and functional efficacy. Therefore, the time is ripe for cardiac regenerative therapies using PSC derivatives to be tested in large animal models that more closely resemble the hearts of humans. In this review, we discuss the results of our recent study using human embryonic stem cell derived cardiomyocytes (hESC-CM in a non-human primate model of ischemic cardiac injury. Large scale remuscularization, electromechanical coupling and short-term arrhythmias demonstrated by our hESC-CM grafts are discussed in the context of other studies using adult stem cells for cardiac regeneration.

  2. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    The fields of sensitivity and uncertainty analysis have traditionally been dominated by statistical techniques when large-scale modeling codes are being analyzed. These methods are able to estimate sensitivities, generate response surfaces, and estimate response probability distributions given the input parameter probability distributions. Because the statistical methods are computationally costly, they are usually applied only to problems with relatively small parameter sets. Deterministic methods, on the other hand, are very efficient and can handle large data sets, but generally require simpler models because of the considerable programming effort required for their implementation. The first part of this paper reports on the development and availability of two systems, GRESS and ADGEN, that make use of computer calculus compilers to automate the implementation of deterministic sensitivity analysis capability into existing computer models. This automation removes the traditional limitation of deterministic sensitivity methods. The second part of the paper describes a deterministic uncertainty analysis method (DUA) that uses derivative information as a basis to propagate parameter probability distributions to obtain result probability distributions. The methods described are applicable to low-level radioactive waste disposal system performance assessment
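    The derivative-based propagation idea behind a method like DUA can be sketched as a first-order (delta-method) estimate: the variance of a response is approximated from its derivatives with respect to independent uncertain inputs. The response function and numbers below are hypothetical stand-ins for the adjoint-generated derivatives of a large code, not material from GRESS/ADGEN:

```python
import numpy as np

def first_order_sigma(grad, sigma_x):
    """Delta method for independent inputs: var(y) ~ sum_i (dy/dx_i)^2 * var(x_i)."""
    grad = np.asarray(grad, dtype=float)
    sigma_x = np.asarray(sigma_x, dtype=float)
    return float(np.sqrt(np.sum((grad * sigma_x) ** 2)))

# Hypothetical response y = x0**2 + 3*x1 with analytic derivatives.
def grad_f(x):
    return np.array([2.0 * x[0], 3.0])

x_nominal = np.array([1.0, 2.0])     # nominal parameter values (illustrative)
sigma_x = np.array([0.1, 0.2])       # input standard deviations (illustrative)
sigma_y = first_order_sigma(grad_f(x_nominal), sigma_x)
```

The appeal over sampling is that one derivative evaluation at the nominal point replaces many full model runs, at the cost of linearizing the response.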

  3. Hybrid Reynolds-Averaged/Large Eddy Simulation of a Cavity Flameholder; Assessment of Modeling Sensitivities

    Science.gov (United States)

    Baurle, R. A.

    2015-01-01

    Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations, which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit algebraic stress model.
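    The autocorrelation diagnostic used to judge statistical convergence can be sketched as follows. The signals are synthetic, and cutting the integral at the first zero crossing is one common (but not the only) convention for the integral timescale:

```python
import numpy as np

def autocorrelation(x):
    """Normalized sample autocorrelation of a signal (lag 0 equals 1)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    acf = np.correlate(x, x, mode="full")[n - 1:]   # lags 0..n-1
    return acf / acf[0]

def integral_timescale(x, dt):
    """Sum the autocorrelation up to its first zero crossing."""
    rho = autocorrelation(x)
    cut = int(np.argmax(rho <= 0.0)) if np.any(rho <= 0.0) else len(rho)
    return dt * float(np.sum(rho[:cut]))

# Synthetic check: a correlated AR(1) signal should show a longer
# integral timescale than white noise sampled at the same rate.
rng = np.random.default_rng(1)
noise = rng.standard_normal(4000)
ar1 = np.empty_like(noise)
ar1[0] = noise[0]
for i in range(1, len(noise)):
    ar1[i] = 0.9 * ar1[i - 1] + noise[i]
```

In practice the averaging window of a scale-resolving run is deemed adequate when it spans many such integral timescales of the quantity being averaged.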

  4. Large signal S-parameters: modeling and radiation effects in microwave power transistors

    International Nuclear Information System (INIS)

    Graham, E.D. Jr.; Chaffin, R.J.; Gwyn, C.W.

    1973-01-01

    Microwave power transistors are usually characterized by measuring the source and load impedances, efficiency, and power output at a specified frequency and bias condition in a tuned circuit. These measurements provide limited data for circuit design and yield essentially no information concerning broadbanding possibilities. Recently, a method using large signal S-parameters has been developed which provides a rapid and repeatable means for measuring microwave power transistor parameters. These large signal S-parameters have been successfully used to design rf power amplifiers. Attempts at modeling rf power transistors have in the past been restricted to a modified Ebers-Moll procedure with numerous adjustable model parameters. The modified Ebers-Moll model is further complicated by inclusion of package parasitics. In the present paper an exact one-dimensional device analysis code has been used to model the performance of the transistor chip. This code has been integrated into the SCEPTRE circuit analysis code such that chip, package and circuit performance can be coupled together in the analysis. Using this computational tool, rf transistor performance has been examined with particular attention given to the theoretical validity of large-signal S-parameters and the effects of nuclear radiation on device parameters. (auth)

  5. Critical behavior in some D = 1 large-N matrix models

    International Nuclear Information System (INIS)

    Das, S.R.; Dhar, A.; Sengupta, A.M.; Wadia, D.R.

    1990-01-01

    The authors study the critical behavior in D = 1 large-N matrix models. The authors also look at the subleading terms in susceptibility in order to find out the dimensions of some of the operators in the theory

  6. Large-n limit of the Heisenberg model: The decorated lattice and the disordered chain

    International Nuclear Information System (INIS)

    Khoruzhenko, B.A.; Pastur, L.A.; Shcherbina, M.V.

    1989-01-01

    The critical temperature of the generalized spherical model (large-component limit of the classical Heisenberg model) on a cubic lattice, whose every bond is decorated by L spins, is found. When L → ∞, the asymptotics of the temperature is T_c ∼ aL^(-1). The reduction of the number of spherical constraints for the model is found to be fairly large. The free energy of the one-dimensional generalized spherical model with random nearest neighbor interaction is calculated

  7. Large-N limit of the two-Hermitian-matrix model by the hidden BRST method

    International Nuclear Information System (INIS)

    Alfaro, J.

    1993-01-01

    This paper discusses the large-N limit of the two-Hermitian-matrix model in zero dimensions, using the hidden Becchi-Rouet-Stora-Tyutin method. A system of integral equations previously found is solved, showing that it contained the exact solution of the model in leading order of large N

  8. A semiparametric graphical modelling approach for large-scale equity selection.

    Science.gov (United States)

    Liu, Han; Mulvey, John; Zhao, Tianqi

    2016-01-01

    We propose a new stock selection strategy that exploits rebalancing returns and improves portfolio performance. To effectively harvest rebalancing gains, we apply ideas from elliptical-copula graphical modelling and stability inference to select stocks that are as independent as possible. The proposed elliptical-copula graphical model has a latent Gaussian representation; its structure can be effectively inferred using the regularized rank-based estimators. The resulting algorithm is computationally efficient and scales to large data-sets. To show the efficacy of the proposed method, we apply it to conduct equity selection based on a 16-year health care stock data-set and a large 34-year stock data-set. Empirical tests show that the proposed method is superior to alternative strategies including a principal component analysis-based approach and the classical Markowitz strategy based on the traditional buy-and-hold assumption.
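    The two ingredients named above, a rank-based latent-correlation estimate for an elliptical copula and a selection of stocks that are as independent as possible, can be sketched as follows. The greedy selection rule is an illustrative simplification of the paper's stability-inference procedure, and the function names and synthetic data are hypothetical:

```python
import numpy as np
from scipy.stats import kendalltau

def latent_correlation(returns):
    """Rank-based estimate of the latent Gaussian correlation of an
    elliptical copula: R_ij = sin(pi/2 * Kendall's tau_ij)."""
    p = returns.shape[1]
    R = np.eye(p)
    for i in range(p):
        for j in range(i + 1, p):
            tau, _ = kendalltau(returns[:, i], returns[:, j])
            R[i, j] = R[j, i] = np.sin(0.5 * np.pi * tau)
    return R

def select_independent(R, k):
    """Greedy pick of k assets that are as mutually independent as possible:
    start from the least-correlated pair, then repeatedly add the asset whose
    worst-case |latent correlation| with the current set is smallest."""
    A = np.abs(R) + np.eye(R.shape[0])   # inflate the diagonal so it is never picked
    i, j = np.unravel_index(np.argmin(A), A.shape)
    chosen = [int(i), int(j)]
    while len(chosen) < k:
        rest = [c for c in range(R.shape[0]) if c not in chosen]
        chosen.append(min(rest, key=lambda c: A[c, chosen].max()))
    return chosen

# Synthetic demo: two nearly identical return series plus one independent series.
rng = np.random.default_rng(0)
x = rng.standard_normal(300)
returns = np.column_stack([x, x + 0.01 * rng.standard_normal(300),
                           rng.standard_normal(300)])
R = latent_correlation(returns)
picks = select_independent(R, 2)
```

The sine transform of Kendall's tau is the standard consistent estimator of the latent correlation for elliptical copulas, which is what lets a rank statistic stand in for a moment-based correlation on heavy-tailed returns.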

  9. Performance of the SUBSTOR-potato model across contrasting growing conditions

    DEFF Research Database (Denmark)

    Raymundo, Rubí; Asseng, Senthold; Prassad, Rishi

    2017-01-01

    and cultivars, N fertilizer application, water supply, sowing dates, soil types, temperature environments, and atmospheric CO2 concentrations, and included open top chamber and Free-Air-CO2-Enrichment (FACE) experiments. Tuber yields were generally well simulated with the SUBSTOR-potato model across a wide … 4% for tuber fresh weight. Cultivars ‘Desiree’ and ‘Atlantic’ were grown in experiments across the globe and well simulated using consistent cultivar parameters. However, the model underestimated the impact of elevated atmospheric CO2 concentrations and poorly simulated high temperature effects on crop growth. … Other simulated crop variables, including leaf area, stem weight, crop N, and soil water, differed frequently from measurements; some of these variables had significantly large measurement errors. The SUBSTOR-potato model was shown to be suitable to simulate tuber growth and yields over a wide range …

  10. Findings and Challenges in Fine-Resolution Large-Scale Hydrological Modeling

    Science.gov (United States)

    Her, Y. G.

    2017-12-01

    Fine-resolution large-scale (FL) modeling can provide the overall picture of the hydrological cycle and transport while taking into account unique local conditions in the simulation. It can also help develop water resources management plans consistent across spatial scales by describing the spatial consequences of decisions and hydrological events extensively. FL modeling is expected to be common in the near future as global-scale remotely sensed data are emerging, and computing resources have been advanced rapidly. There are several spatially distributed models available for hydrological analyses. Some of them rely on numerical methods such as finite difference/element methods (FDM/FEM), which require excessive computing resources (implicit scheme) to manipulate large matrices or small simulation time intervals (explicit scheme) to maintain the stability of the solution, to describe two-dimensional overland processes. Others make unrealistic assumptions such as constant overland flow velocity to reduce the computational loads of the simulation. Thus, simulation efficiency often comes at the expense of precision and reliability in FL modeling. Here, we introduce a new FL continuous hydrological model and its application to four watersheds in different landscapes and sizes from 3.5 km2 to 2,800 km2 at the spatial resolution of 30 m on an hourly basis. The model provided acceptable accuracy statistics in reproducing hydrological observations made in the watersheds. The modeling outputs including the maps of simulated travel time, runoff depth, soil water content, and groundwater recharge, were animated, visualizing the dynamics of hydrological processes occurring in the watersheds during and between storm events. Findings and challenges were discussed in the context of modeling efficiency, accuracy, and reproducibility, which we found can be improved by employing advanced computing techniques and hydrological understandings, by using remotely sensed hydrological
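    The stability constraint that forces explicit schemes into small simulation time intervals is the Courant–Friedrichs–Lewy (CFL) condition; a minimal sketch, with a hypothetical overland-flow velocity, of how the 30 m resolution mentioned above caps the explicit time step:

```python
def cfl_max_timestep(dx, velocity, courant=1.0):
    """Largest stable time step for an explicit advection scheme:
    dt <= C * dx / v, with Courant number C <= 1."""
    return courant * dx / velocity

# At 30 m resolution, a hypothetical 0.5 m/s overland-flow velocity
# limits the explicit step to about one minute.
dt = cfl_max_timestep(dx=30.0, velocity=0.5)
```

Halving the grid spacing therefore halves the admissible time step as well, which is why fine-resolution explicit simulations grow expensive much faster than the cell count alone suggests.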

  11. Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations

    Science.gov (United States)

    Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara

    2018-05-01

    Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  12. Large-scale tropospheric transport in the Chemistry–Climate Model Initiative (CCMI) simulations

    Directory of Open Access Journals (Sweden)

    C. Orbe

    2018-05-01

    Full Text Available Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry–Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.

  13. Aerosol modelling and validation during ESCOMPTE 2001

    Science.gov (United States)

    Cousin, F.; Liousse, C.; Cachier, H.; Bessagnet, B.; Guillaume, B.; Rosset, R.

    The ESCOMPTE 2001 programme (Atmospheric Research 69(3–4) (2004) 241) has resulted in an exhaustive set of dynamical, radiative, gas and aerosol observations (surface and aircraft measurements). A previous paper (Atmospheric Research (2004), in press) has dealt with dynamics and gas-phase chemistry. The present paper is an extension to aerosol formation, transport and evolution. To account for important loadings of primary and secondary aerosols and their transformation processes in the ESCOMPTE domain, the ORganic and Inorganic Spectral Aerosol Module ORISAM (Atmospheric Environment 35 (2001) 4751) was implemented on-line in the air-quality Meso-NH-C model. Additional developments have been introduced in ORISAM to improve the comparison between simulations and experimental surface and aircraft field data. This paper discusses this comparison for a simulation performed during one selected day, 24 June 2001, during the Intensive Observation Period IOP2b. Our work relies on BC and OCp emission inventories specifically developed for ESCOMPTE. This study confirms the need for a fine-resolution aerosol inventory with spectral chemical speciation. BC levels are satisfactorily reproduced, thus validating our emission inventory and its processing through Meso-NH-C. However, comparisons for reactive species generally denote an underestimation of concentrations. Organic aerosol levels are rather well simulated, though with a trend to underestimation in the afternoon. Inorganic aerosol species are underestimated for several reasons, some of which have been identified. For sulphates, primary emissions were introduced. Improvement was also obtained for modelled nitrate and ammonium levels after introducing heterogeneous chemistry. However, the absence of modelled terrigenous particles is probably a major cause of the nitrate and ammonium underestimations. Particle numbers and size distributions are well reproduced, but only in the submicrometer range.

  14. Evaluation of cloud-resolving model simulations of midlatitude cirrus with ARM and A-train observations

    Science.gov (United States)

    Muhlbauer, A.; Ackerman, T. P.; Lawson, R. P.; Xie, S.; Zhang, Y.

    2015-07-01

    Cirrus clouds are ubiquitous in the upper troposphere and still constitute one of the largest uncertainties in climate predictions. This paper evaluates cloud-resolving model (CRM) and cloud system-resolving model (CSRM) simulations of a midlatitude cirrus case with comprehensive observations collected under the auspices of the Atmospheric Radiation Measurements (ARM) program and with spaceborne observations from the National Aeronautics and Space Administration A-train satellites. The CRM simulations are driven with periodic boundary conditions and ARM forcing data, whereas the CSRM simulations are driven by the ERA-Interim product. Vertical profiles of temperature, relative humidity, and wind speeds are reasonably well simulated by the CSRM and CRM, but there are remaining biases in the temperature, wind speeds, and relative humidity, which can be mitigated through nudging the model simulations toward the observed radiosonde profiles. Simulated vertical velocities are underestimated in all simulations except in the CRM simulations with grid spacings of 500 m or finer, which suggests that turbulent vertical air motions in cirrus clouds need to be parameterized in general circulation models and in CSRM simulations with horizontal grid spacings on the order of 1 km. The simulated ice water content and ice number concentrations agree with the observations in the CSRM but are underestimated in the CRM simulations. The underestimation of ice number concentrations is consistent with the overestimation of radar reflectivity in the CRM simulations and suggests that the model produces too many large ice particles especially toward the cloud base. Simulated cloud profiles are rather insensitive to perturbations in the initial conditions or the dimensionality of the model domain, but the treatment of the forcing data has a considerable effect on the outcome of the model simulations. 

  15. Simplest simulation model for three-dimensional xenon oscillations in large PWRs

    International Nuclear Information System (INIS)

    Shimazu, Yoichiro

    2004-01-01

    Xenon oscillations in large PWRs are well understood and no operational problems remain. However, in order to suppress the oscillations effectively, an optimal control strategy is preferable. Generally speaking, such an optimality search based on modern control theory requires a large volume of transient core analyses. For example, three-dimensional core calculations are inevitable for the analysis of radial oscillations. From this point of view, a very simple 3-D model is proposed, based on a reactor model of only four points. Since in actual reactor operation the magnitude of xenon oscillations should be limited from the viewpoint of safety, the model further assumes that the neutron leakage is small or even constant. It can explicitly use reactor parameters such as reactivity coefficients and control rod worth directly. The model is so simplified that it can predict oscillation behavior in a very short calculation time, even on a PC, yet the prediction results are good. The validity of the model in comparison with measured data and its applications are discussed. (author)
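    Any such point model rests on the standard iodine–xenon balance equations. A minimal sketch follows; the nuclear data are approximate U-235 values, and the flux, macroscopic fission cross-section, and step size are illustrative, not parameters from the paper:

```python
# Approximate U-235 fission-product data: I-135/Xe-135 yields,
# decay constants [1/s], and Xe-135 absorption cross-section [cm^2].
GAMMA_I, GAMMA_X = 0.0639, 0.00237
LAM_I, LAM_X = 2.87e-5, 2.09e-5
SIGMA_X = 2.6e-18

def equilibrium(phi, sigma_f=0.05):
    """Closed-form equilibrium I-135 and Xe-135 concentrations at constant flux."""
    i_eq = GAMMA_I * sigma_f * phi / LAM_I
    x_eq = (GAMMA_I + GAMMA_X) * sigma_f * phi / (LAM_X + SIGMA_X * phi)
    return i_eq, x_eq

def step(i, x, phi, sigma_f=0.05, dt=60.0):
    """One forward-Euler step of the point iodine-xenon balance:
       dI/dt = gI*Sf*phi - lI*I
       dX/dt = gX*Sf*phi + lI*I - lX*X - sX*phi*X"""
    di = GAMMA_I * sigma_f * phi - LAM_I * i
    dx = GAMMA_X * sigma_f * phi + LAM_I * i - LAM_X * x - SIGMA_X * phi * x
    return i + dt * di, x + dt * dx

# Demo: at equilibrium the state is stationary; dropping the flux to zero
# makes xenon rise at first (the classic post-shutdown xenon peak).
phi0 = 3e13
i_eq, x_eq = equilibrium(phi0)
i1, x1 = step(i_eq, x_eq, phi0)
_, x_shut = step(i_eq, x_eq, 0.0)
```

A multi-point spatial model couples several such nodes through neutron leakage terms, which is the part the abstract argues can be taken as small or constant.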

  16. A turbulence model for large interfaces in high Reynolds two-phase CFD

    International Nuclear Information System (INIS)

    Coste, P.; Laviéville, J.

    2015-01-01

    Highlights: • Two-phase CFD commonly involves interfaces much larger than the computational cells. • A two-phase turbulence model is developed to better take them into account. • It solves k–epsilon transport equations in each phase. • The special treatments and transfer terms at large interfaces are described. • Validation cases are presented. - Abstract: A model for two-phase (six-equation) CFD modelling of turbulence is presented, for the regions of the flow where the liquid–gas interface takes place on length scales which are much larger than the typical computational cell size. In the other regions of the flow, the liquid or gas volume fractions range from 0 to 1. Heat and mass transfer, compressibility of the fluids, are included in the system, which is used at high Reynolds numbers in large scale industrial calculations. In this context, a model based on k and ε transport equations in each phase was chosen. The paper describes the model, with a focus on the large interfaces, which require special treatments and transfer terms between the phases, including some approaches inspired from wall functions. The validation of the model is based on high Reynolds number experiments with turbulent quantities measurements of a liquid jet impinging a free surface and an air water stratified flow. A steam–water stratified condensing flow experiment is also used for an indirect validation in the case of heat and mass transfer

  17. Direct and large eddy simulations of a bottom Ekman layer under an external stratification

    Energy Technology Data Exchange (ETDEWEB)

    Taylor, John R. [Department of Mechanical and Aerospace Engineering, University of California, San Diego La Jolla, CA 92093 (United States); Sarkar, Sutanu [Department of Mechanical and Aerospace Engineering, University of California, San Diego La Jolla, CA 92093 (United States)], E-mail: sarkar@ucsd.edu

    2008-06-15

    A steady Ekman layer with a thermally stratified outer flow and an adiabatic boundary condition at the lower wall is studied using direct numerical simulation (DNS) and large eddy simulation (LES). An initially linear temperature profile is mixed by turbulence near the wall, and a stable thermocline forms above the mixed layer. The thickness of the mixed layer is reduced by the outer layer stratification. Observations from the DNS are used to evaluate the performance of the LES model and to examine the resolution requirements. A resolved LES and a near-wall model LES (NWM-LES) both compare reasonably well with the DNS when the thermal field is treated as a passive scalar. When buoyancy effects are included, the LES mean velocity and temperature profiles also agree well with the DNS. However, the NWM-LES does not sufficiently account for the overturning scales responsible for entrainment at the top of the mixed layer. As a result, the turbulent heat flux and the rate of change of the mixed layer temperature are significantly underestimated in the NWM-LES. In order to accurately simulate the boundary layer growth, the motions responsible for entrainment must either be resolved or more accurately represented in improved subgrid-scale models.

  19. Two-group modeling of interfacial area transport in large diameter channels

    Energy Technology Data Exchange (ETDEWEB)

    Schlegel, J.P., E-mail: schlegelj@mst.edu [Department of Mining and Nuclear Engineering, Missouri University of Science and Technology, 301 W 14th St., Rolla, MO 65409 (United States); Hibiki, T.; Ishii, M. [School of Nuclear Engineering, Purdue University, 400 Central Dr., West Lafayette, IN 47907 (United States)

    2015-11-15

    Highlights: • Implemented updated constitutive models and benchmarking method for IATE in large pipes. • New model and method with new data improved the overall IATE prediction for large pipes. • Not all conditions are well predicted, showing that further development is still required. - Abstract: A comparison of the existing two-group interfacial area transport equation source and sink terms for large diameter channels with recently collected interfacial area concentration measurements (Schlegel et al., 2012, 2014. Int. J. Heat Fluid Flow 47, 42) has indicated that the model does not perform well in predicting interfacial area transport outside the range of flow conditions used in the original benchmarking effort. In order to reduce the error in the prediction of interfacial area concentration by the interfacial area transport equation, several constitutive relations have been updated, including the turbulence model and the relative velocity correlation. The transport equation utilizing these updated models has been modified by updating the inter-group transfer and Group 2 coalescence and disintegration kernels using an expanded range of experimental conditions extending to pipe sizes of 0.304 m [12 in.], gas velocities of up to nearly 11 m/s [36.1 ft/s], and liquid velocities of up to 2 m/s [6.56 ft/s], as well as conditions with both bubbly flow and cap-bubbly flow injection (Schlegel et al., 2012, 2014). The modifications to the transport equation have resulted in a decrease in the RMS error for void fraction from 17.32% to 12.3% and for interfacial area concentration from 21.26% to 19.6%. The combined RMS error, for both void fraction and interfacial area concentration, is below 15% for most of the experiments used in the comparison, a distinct improvement over the previous version of the model.
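
    The abstract quotes RMS errors only as percentages; assuming the common relative-RMS convention, the metric can be computed as:

```python
import numpy as np

def pct_rms_error(predicted, measured):
    """RMS of the relative prediction error, in percent (an assumed
    convention; the abstract does not spell out the exact definition)."""
    predicted = np.asarray(predicted, dtype=float)
    measured = np.asarray(measured, dtype=float)
    return 100.0 * np.sqrt(np.mean(((predicted - measured) / measured) ** 2))

# e.g. two hypothetical void-fraction predictions against measurements:
# pct_rms_error([0.45, 0.52], [0.50, 0.50]) is about 7.6%
```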

  20. Evaluation of surface air temperature and urban effects in Japan simulated by non-hydrostatic regional climate model

    Science.gov (United States)

    Murata, A.; Sasaki, H.; Hanafusa, M.; Kurihara, K.

    2012-12-01

    We evaluated the performance of a well-developed nonhydrostatic regional climate model (NHRCM) with a spatial resolution of 5 km with respect to temperature in the present-day climate of Japan, and estimated urban heat island (UHI) intensity by comparing the model results and observations. The magnitudes of the root mean square error (RMSE) and systematic error (bias) for the annual averages of daily mean (Ta), maximum (Tx), and minimum (Tn) temperatures are within 1.5 K, demonstrating that the temperatures of the present-day climate are reproduced well by NHRCM. These small errors indicate that temperature variability produced by local-scale phenomena is represented well by a model with a higher spatial resolution. It is also found that the magnitudes of the RMSE and bias in the annually-averaged Tx are relatively large compared with those in Ta and Tn. The horizontal distributions of the error, defined as the difference between simulated and observed temperatures (simulated minus observed), show negative errors in the annually-averaged Tn in three major metropolitan areas: Tokyo, Osaka, and Nagoya. These negative errors in urban areas contribute to the cold bias in the annually-averaged Tx. The relation between the underestimation of temperature and the degree of urbanization is therefore examined quantitatively using the National Land Numerical Information provided by the Ministry of Land, Infrastructure, Transport, and Tourism. The annually-averaged Ta, Tx, and Tn are all underestimated in areas where the degree of urbanization is relatively high. The underestimation in these areas is attributed to the treatment of urban areas in NHRCM, in which the effects of urbanization, such as waste heat and artificial structures, are not included. In contrast, in rural areas, the simulated Tx is underestimated and Tn is overestimated although the errors in Ta are small. This indicates that the simulated diurnal temperature range is underestimated. The reason for the relatively large

  1. Using radar altimetry to update a large-scale hydrological model of the Brahmaputra river basin

    DEFF Research Database (Denmark)

    Finsen, F.; Milzow, Christian; Smith, R.

    2014-01-01

    Measurements of river and lake water levels from space-borne radar altimeters (past missions include ERS, Envisat, Jason, Topex) are useful for calibration and validation of large-scale hydrological models in poorly gauged river basins. Altimetry data availability over the downstream reaches...... of the Brahmaputra is excellent (17 high-quality virtual stations from ERS-2, 6 from Topex and 10 from Envisat are available for the Brahmaputra). In this study, altimetry data are used to update a large-scale Budyko-type hydrological model of the Brahmaputra river basin in real time. Altimetry measurements...... improved model performance considerably. The Nash-Sutcliffe model efficiency increased from 0.77 to 0.83. Real-time river basin modelling using radar altimetry has the potential to improve the predictive capability of large-scale hydrological models elsewhere on the planet....
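
    The quoted improvement from 0.77 to 0.83 refers to the Nash-Sutcliffe efficiency, which compares the model's squared errors to the variance of the observations:

```python
import numpy as np

def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / sum of squares about the
    observed mean. 1.0 is a perfect fit; 0.0 means the model is no
    better than predicting the observed mean."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    sse = np.sum((observed - simulated) ** 2)
    variance = np.sum((observed - observed.mean()) ** 2)
    return 1.0 - sse / variance
```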

  2. Mixed-signal instrumentation for large-signal device characterization and modelling

    NARCIS (Netherlands)

    Marchetti, M.

    2013-01-01

    This thesis concentrates on the development of advanced large-signal measurement and characterization tools to support technology development, model extraction and validation, and power amplifier (PA) designs that address the newly introduced third and fourth generation (3G and 4G) wideband

  3. Large-deflection statics analysis of active cardiac catheters through co-rotational modelling.

    Science.gov (United States)

    Peng Qi; Chen Qiu; Mehndiratta, Aadarsh; I-Ming Chen; Haoyong Yu

    2016-08-01

    This paper presents a co-rotational concept for large-deflection formulation of cardiac catheters. Using this approach, the catheter is first discretized with a number of equal-length beam elements and nodes, and the rigid-body motions of an individual beam element are separated from its deformations. This makes it adequate to model arbitrarily large deflections of a catheter with linear elastic analysis at the local element level. A novel design of an active cardiac catheter, 9 Fr in diameter, is proposed at the beginning of the paper; it is based on contra-rotating double helix patterns and improves on previous prototypes. The modelling section is followed by MATLAB simulations of the various deflections obtained when different types of loads are applied to the catheter, which demonstrates the feasibility of the presented modelling approach. To the best knowledge of the authors, this is the first work to utilize this methodology for large-deflection static analysis of a catheter, which will enable more accurate control of robot-assisted cardiac catheterization procedures. Future work will include further experimental validation.

  4. Should we build more large dams? The actual costs of hydropower megaproject development

    International Nuclear Information System (INIS)

    Ansar, Atif; Flyvbjerg, Bent; Budzier, Alexander; Lunn, Daniel

    2014-01-01

    A brisk building boom of hydropower mega-dams is underway from China to Brazil. Whether benefits of new dams will outweigh costs remains unresolved despite contentious debates. We investigate this question with the “outside view” or “reference class forecasting” based on literature on decision-making under uncertainty in psychology. We find overwhelming evidence that budgets are systematically biased below actual costs of large hydropower dams—excluding inflation, substantial debt servicing, environmental, and social costs. Using the largest and most reliable reference data of its kind and multilevel statistical techniques applied to large dams for the first time, we were successful in fitting parsimonious models to predict cost and schedule overruns. The outside view suggests that in most countries large hydropower dams will be too costly in absolute terms and take too long to build to deliver a positive risk-adjusted return unless suitable risk management measures outlined in this paper can be affordably provided. Policymakers, particularly in developing countries, are advised to prefer agile energy alternatives that can be built over shorter time horizons to energy megaprojects. - Highlights: • We investigate ex post outcomes of schedule and cost estimates of hydropower dams. • We use the “outside view” based on Kahneman and Tversky's research in psychology. • Estimates are systematically and severely biased below actual values. • Projects that take longer have greater cost overruns; bigger projects take longer. • Uplift required to de-bias systematic cost underestimation for large dams is +99%
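
    Reference-class forecasting as used here amounts to reading a budget uplift off the empirical distribution of past cost overruns. A toy sketch with made-up actual/estimated cost ratios (the paper's +99% uplift comes from its real reference-class data, not from these numbers):

```python
import numpy as np

# Made-up actual/estimated cost ratios standing in for a reference class of
# past dam projects -- illustrative only.
ratios = np.array([1.1, 1.3, 1.5, 1.9, 2.4, 0.9, 1.2, 2.0, 1.6, 3.1])

def required_uplift(ratios, acceptable_overrun_risk):
    """Budget uplift such that the chance the actual cost still exceeds
    the uplifted budget equals `acceptable_overrun_risk`."""
    return float(np.quantile(ratios, 1.0 - acceptable_overrun_risk)) - 1.0

# Accepting a 50% chance of overrun needs a 55% uplift on these toy numbers;
# lowering the acceptable risk pushes the uplift higher.
```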

  5. A model-based eco-routing strategy for electric vehicles in large urban networks

    OpenAIRE

    De Nunzio , Giovanni; Thibault , Laurent; Sciarretta , Antonio

    2016-01-01

    A novel eco-routing navigation strategy and an energy consumption modeling approach for electric vehicles are presented in this work. Speed fluctuations and the road network infrastructure have a large impact on vehicular energy consumption. Neglecting these effects may lead to large errors in eco-routing navigation, which could trivially select the route with the lowest average speed. We propose an energy consumption model that considers both accelerations and impact of the ...

  6. Effects of uncertainty in model predictions of individual tree volume on large area volume estimates

    Science.gov (United States)

    Ronald E. McRoberts; James A. Westfall

    2014-01-01

    Forest inventory estimates of tree volume for large areas are typically calculated by adding model predictions of volumes for individual trees. However, the uncertainty in the model predictions is generally ignored with the result that the precision of the large area volume estimates is overestimated. The primary study objective was to estimate the effects of model...

  7. Perturbation theory instead of large scale shell model calculations

    International Nuclear Information System (INIS)

    Feldmeier, H.; Mankos, P.

    1977-01-01

    Results of large-scale shell model calculations for (sd)-shell nuclei are compared with perturbation theory, which provides an excellent approximation when the SU(3) basis is used as a starting point. The results indicate that a perturbation-theory treatment in an SU(3) basis including 2ħω excitations should be preferable to a full diagonalization within the (sd)-shell. (orig.) [de

  8. Reliability of Monte Carlo simulations in modeling neutron yields from a shielded fission source

    Energy Technology Data Exchange (ETDEWEB)

    McArthur, Matthew S., E-mail: matthew.s.mcarthur@gmail.com; Rees, Lawrence B., E-mail: Lawrence_Rees@byu.edu; Czirr, J. Bart, E-mail: czirr@juno.com

    2016-08-11

    Using the combination of a neutron-sensitive {sup 6}Li glass scintillator detector with a neutron-insensitive {sup 7}Li glass scintillator detector, we are able to make an accurate measurement of the capture rate of fission neutrons on {sup 6}Li. We used this detector with a {sup 252}Cf neutron source to measure the effects of both non-borated polyethylene and 5% borated polyethylene shielding on detection rates over a range of shielding thicknesses. Both of these measurements were compared with MCNP calculations to determine how well the calculations reproduced the measurements. When the source is highly shielded, the number of interactions experienced by each neutron prior to arriving at the detector is large, so it is important to compare Monte Carlo modeling with actual experimental measurements. MCNP reproduces the data fairly well, but it does generally underestimate detector efficiency both with and without polyethylene shielding. For non-borated polyethylene it underestimates the measured value by an average of 8%. This increases to an average of 11% for borated polyethylene.

  9. Underestimation of Microearthquake Size by the Magnitude Scale of the Japan Meteorological Agency: Influence on Earthquake Statistics

    Science.gov (United States)

    Uchide, Takahiko; Imanishi, Kazutoshi

    2018-01-01

    Magnitude scales based on the amplitude of seismic waves, including the Japan Meteorological Agency magnitude scale (Mj), are commonly used in routine processing. The moment magnitude scale (Mw), however, is more physics-based and is able to evaluate any type and size of earthquake. This paper addresses the relation between Mj and Mw for microearthquakes. The relative moment magnitudes among earthquakes are well constrained by multiple spectral ratio analyses. The results for events in the Fukushima Hamadori and northern Ibaraki prefecture areas of Japan imply that Mj is significantly and systematically smaller than Mw for microearthquakes. The Mj-Mw curve has slopes of 1/2 and 1 for small and large values of Mj, respectively; for example, Mj = 1.0 corresponds to Mw = 2.0. A simple numerical simulation implies that this is due to anelastic attenuation and recording with a finite sampling interval. The underestimation affects earthquake statistics. The completeness magnitude Mc, below which the magnitude-frequency distribution deviates from the Gutenberg-Richter law, is effectively lower for Mw than for Mj once the systematic difference between the two scales is taken into account. The b values of the Gutenberg-Richter law are larger for Mw than for Mj. As the b values for Mj and Mw are well correlated, qualitative arguments using b values are not affected. However, while the estimated b values for Mj are below 1.5, those for Mw often exceed 1.5, which may affect the physical interpretation of the seismicity.
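
    The quoted slopes pin down the small-magnitude branch (Mw ≈ 2·Mj, consistent with Mj = 1.0 corresponding to Mw = 2.0); where the curve bends over to slope 1 is not stated in the abstract, so the crossover in this sketch is a made-up illustration:

```python
def mw_from_mj(mj, crossover=3.0):
    """Piecewise-linear sketch of the Mj -> Mw relation described above:
    dMj/dMw = 1/2 below an assumed crossover magnitude and 1 above it,
    pinned to Mj = 1.0 <-> Mw = 2.0. The crossover value is illustrative,
    not a fitted parameter from the paper."""
    if mj <= crossover:
        return 2.0 * mj            # small events: Mj underestimates Mw
    return mj + crossover          # large events: slope 1 (constant offset)
```

    One consequence visible even in this sketch: a magnitude-frequency slope measured against Mj on the small-event branch is stretched relative to the same slope measured against Mw, which is why the b values differ between the two scales.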

  10. Expected Utility and Entropy-Based Decision-Making Model for Large Consumers in the Smart Grid

    Directory of Open Access Journals (Sweden)

    Bingtuan Gao

    2015-09-01

    In the smart grid, large consumers can procure electric energy from various power sources to meet their load demands. To maximize its profit, each large consumer needs to decide on an energy procurement strategy under risks such as price fluctuations in the spot market and power quality issues. In this paper, an electric energy procurement decision-making model is studied for large consumers who can obtain their electric energy from the spot market, from generation companies under bilateral contracts, from the options market, and from self-production facilities in the smart grid. Considering the effect of unqualified electric energy, the profit model of large consumers is formulated. Expected utility and entropy are employed to measure the risks from price fluctuations and power quality. Consequently, an expected utility and entropy decision-making model is presented, which helps large consumers minimize the expected cost of electricity procurement while properly limiting its volatility. Finally, a case study verifies the feasibility and effectiveness of the proposed model.
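
    In outline, such a criterion combines an expected-utility term with an entropy term measuring outcome dispersion. A minimal sketch over discrete profit scenarios, using an exponential utility and weights chosen purely for illustration (the paper's exact formulation and calibration differ):

```python
import math

def score(profits, probs, risk_aversion=0.01, entropy_weight=0.1):
    """Expected utility of profit minus an entropy penalty on the outcome
    distribution. Utility form, risk aversion, and entropy weight are all
    illustrative assumptions."""
    eu = sum(p * (1.0 - math.exp(-risk_aversion * x))
             for p, x in zip(probs, profits))
    entropy = -sum(p * math.log(p) for p in probs if p > 0)  # Shannon entropy
    return eu - entropy_weight * entropy

# A certain profit beats a mean-preserving 50/50 gamble under this criterion:
certain = score([100.0], [1.0])
gamble = score([0.0, 200.0], [0.5, 0.5])
```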

  11. Comparing potential recharge estimates from three Land Surface Models across the Western US

    Science.gov (United States)

    NIRAULA, REWATI; MEIXNER, THOMAS; AJAMI, HOORI; RODELL, MATTHEW; GOCHIS, DAVID; CASTRO, CHRISTOPHER L.

    2018-01-01

    Groundwater is a major source of water in the western US. However, recharge estimates are limited in this region due to the complexity of recharge processes and the challenge of direct observation. Land Surface Models (LSMs) could be a valuable tool for estimating current recharge and projecting changes under future climate change. In this study, simulations from three LSMs (Noah, Mosaic and VIC) obtained from the North American Land Data Assimilation System (NLDAS-2) are used to estimate potential recharge in the western US. Modeled recharge was compared with published recharge estimates for several aquifers in the region. Annual recharge-to-precipitation ratios across the study basins varied from 0.01–15% for Mosaic, 3.2–42% for Noah, and 6.7–31.8% for VIC simulations. Mosaic consistently underestimates recharge across all basins. Noah captures recharge reasonably well in wetter basins but overestimates it in drier basins. VIC slightly overestimates recharge in drier basins and slightly underestimates it in wetter basins. While the average annual recharge values vary among the models, the models were consistent in identifying high- and low-recharge areas in the region, and they agree that recharge occurs predominantly in spring across the region. Overall, our results highlight that LSMs have the potential to capture the spatial and temporal patterns as well as the seasonality of recharge at large scales. Therefore, LSMs (specifically VIC and Noah) can be used as a tool for estimating future recharge rates in data-limited regions. PMID:29618845

  12. Modeling organic aerosols during MILAGRO: importance of biogenic secondary organic aerosols

    Directory of Open Access Journals (Sweden)

    A. Hodzic

    2009-09-01

    Full Text Available The meso-scale chemistry-transport model CHIMERE is used to assess our understanding of major sources and formation processes leading to a fairly large amount of organic aerosols – OA, including primary OA (POA and secondary OA (SOA – observed in Mexico City during the MILAGRO field project (March 2006. Chemical analyses of submicron aerosols from aerosol mass spectrometers (AMS indicate that organic particles found in the Mexico City basin contain a large fraction of oxygenated organic species (OOA which have strong correspondence with SOA, and that their production actively continues downwind of the city. The SOA formation is modeled here by the one-step oxidation of anthropogenic (i.e. aromatics, alkanes, biogenic (i.e. monoterpenes and isoprene, and biomass-burning SOA precursors and their partitioning into both organic and aqueous phases. Conservative assumptions are made for uncertain parameters to maximize the amount of SOA produced by the model. The near-surface model evaluation shows that predicted OA correlates reasonably well with measurements during the campaign, however it remains a factor of 2 lower than the measured total OA. Fairly good agreement is found between predicted and observed POA within the city suggesting that anthropogenic and biomass burning emissions are reasonably captured. Consistent with previous studies in Mexico City, large discrepancies are encountered for SOA, with a factor of 2–10 model underestimate. When only anthropogenic SOA precursors were considered, the model was able to reproduce within a factor of two the sharp increase in OOA concentrations during the late morning at both urban and near-urban locations but the discrepancy increases rapidly later in the day, consistent with previous results, and is especially obvious when the column-integrated SOA mass is considered instead of the surface concentration. The increase in the missing SOA mass in the afternoon coincides with the sharp drop in POA

  13. Virtualizing ancient Rome: 3D acquisition and modeling of a large plaster-of-Paris model of imperial Rome

    Science.gov (United States)

    Guidi, Gabriele; Frischer, Bernard; De Simone, Monica; Cioci, Andrea; Spinetti, Alessandro; Carosso, Luca; Micoli, Laura L.; Russo, Michele; Grasso, Tommaso

    2005-01-01

    Computer modeling through digital range images has been used for many applications, including 3D modeling of objects belonging to our cultural heritage. The scales involved range from small objects (e.g. pottery), to middle-sized works of art (statues, architectural decorations), up to very large structures (architectural and archaeological monuments). For any of these applications, suitable sensors and methodologies have been explored by different authors. The object to be modeled within this project is the "Plastico di Roma antica," a large plaster-of-Paris model of imperial Rome (16x17 meters) created in the last century. Its overall size therefore demands an acquisition approach typical of large structures, but it is also characterized by extremely tiny details typical of small objects (houses are a few centimeters high; their doors, windows, etc. are smaller than 1 centimeter). This paper gives an account of the procedures followed for resolving this "contradiction" and describes how a huge 3D model was acquired and generated using a special metrology Laser Radar. The procedures for reorienting the huge point clouds obtained after each acquisition phase into a single reference system, thanks to the measurement of fixed redundant references, are described. The data set was split into smaller sub-areas of 2 x 2 meters each for purposes of mesh editing. This subdivision was necessary owing to the huge number of points in each individual scan (50-60 million). The final merge of the edited parts made it possible to create a single mesh. All these processes were carried out with software specifically designed for this project, since no commercial package could be found that was suitable for managing such a large number of points. Preliminary models are presented. Finally, the significance of the project is discussed in terms of the overall project known as "Rome Reborn," of which the present acquisition is an important component.

  14. Material model for non-linear finite element analyses of large concrete structures

    NARCIS (Netherlands)

    Engen, Morten; Hendriks, M.A.N.; Øverli, Jan Arve; Åldstedt, Erik; Beushausen, H.

    2016-01-01

    A fully triaxial material model for concrete was implemented in a commercial finite element code. The only required input parameter was the cylinder compressive strength. The material model was suitable for non-linear finite element analyses of large concrete structures. The importance of including

  15. The pig as a large animal model for influenza a virus infection

    DEFF Research Database (Denmark)

    Skovgaard, Kerstin; Brogaard, Louise; Larsen, Lars Erik

    It is increasingly realized that large animal models like the pig are exceptionally human-like and serve as excellent models of disease and inflammation. Pigs are fully susceptible to human influenza and share many similarities with humans regarding lung physiology and innate immune cell

  16. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) Cad-drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO-classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  17. Large-scale hydrological modelling in the semi-arid north-east of Brazil

    Science.gov (United States)

    Güntner, Andreas

    2002-07-01

    for generating rainfall time series of higher temporal resolution. All model parameters of Wasa can be derived from physiographic information on the study area; thus, model calibration is in principle not required. Applications of Wasa to historical time series generally result in good model performance when the simulated river discharge and reservoir storage volumes are compared with observed data for river basins of various sizes. The mean water balance as well as the high interannual and intra-annual variability are reasonably represented by the model. Limitations of the modelling concept are seen most markedly for sub-basins with a runoff component from deep groundwater bodies, whose dynamics cannot be satisfactorily represented without calibration. Further results of the model applications are: (1) Lateral processes of redistribution of runoff and soil moisture at the hillslope scale, in particular reinfiltration of surface runoff, lead to markedly smaller discharge volumes at the basin scale than the simple sum of runoff from the individual sub-areas. Thus, these processes must also be captured in large-scale models. The varying relevance of these processes under different conditions is demonstrated by a larger percentage decrease of discharge volumes in dry years compared with wet years. (2) Precipitation characteristics have a major impact on the hydrological response of semi-arid environments. In particular, rainfall intensities underestimated in the rainfall input due to the coarse temporal resolution of the model and due to interpolation effects, and the consequently underestimated runoff volumes, have to be compensated for in the model. A scaling factor in the infiltration module or the use of disaggregated hourly rainfall data shows good results in this respect. The simulation results of Wasa are characterized by large uncertainties. These are, on the one hand, due to uncertainties of the model structure to adequately represent the relevant

  18. Large Scale Skill in Regional Climate Modeling and the Lateral Boundary Condition Scheme

    Science.gov (United States)

    Veljović, K.; Rajković, B.; Mesinger, F.

    2009-04-01

    Several points are made concerning the somewhat controversial issue of regional climate modeling: should a regional climate model (RCM) be expected to maintain the large-scale skill of the driver global model that supplies its lateral boundary condition (LBC)? Given that this is normally desired, can it do so without help from the fairly popular large-scale nudging? Specifically, without such nudging, will the RCM kinetic energy necessarily decrease with time compared to that of the driver model or analysis data, as suggested by a study using the Regional Atmospheric Modeling System (RAMS)? Finally, can the lateral boundary condition scheme make a difference: is the almost universally used but somewhat costly relaxation scheme necessary for desirable RCM performance? Experiments are made to explore these questions by running the Eta model in two versions differing in the lateral boundary scheme used. One of these schemes is the traditional relaxation scheme; the other is the Eta model scheme, in which information is used at the outermost boundary only, and not all variables are prescribed at the outflow boundary. Forecast lateral boundary conditions are used, and results are verified against the analyses. Thus, the skill of the two RCM forecasts can be, and is, compared not only against each other but also against that of the driver global forecast. A novel verification method is used in the manner of customary precipitation verification: the forecast spatial wind speed distribution is verified against analyses by calculating bias-adjusted equitable threat scores and bias scores for wind speeds greater than chosen thresholds. In this way, focusing on a high wind speed value in the upper troposphere, we suggest that verification of large-scale features can be done in a manner that may be more physically meaningful than verification via spectral decomposition, a standard RCM verification method. The results we have at this point are somewhat
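
    The wind-speed verification described above reduces each threshold to a 2x2 contingency table of hits, misses, and false alarms. The standard (unadjusted) equitable threat score and bias score read as follows; the bias-adjusted ETS variant the authors use additionally corrects the hit count for the forecast bias:

```python
def ets_and_bias(hits, misses, false_alarms, total):
    """Equitable threat score and bias score for a yes/no event, e.g. wind
    speed above a chosen threshold at each grid point. `hits_random` is the
    number of hits expected by chance, which the ETS subtracts out."""
    hits_random = (hits + misses) * (hits + false_alarms) / total
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    bias = (hits + false_alarms) / (hits + misses)
    return ets, bias
```

    A perfect forecast gives ETS = 1 and bias = 1; a forecast no better than chance gives ETS = 0.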

  19. Model Experiments for the Determination of Airflow in Large Spaces

    DEFF Research Database (Denmark)

    Nielsen, Peter V.

    Model experiments are one of the methods used for the determination of airflow in large spaces. This paper will discuss the formation of the governing dimensionless numbers. It is shown that experiments with a reduced scale often will necessitate a fully developed turbulence level of the flow....... Details of the flow from supply openings are very important for the determination of room air distribution. It is in some cases possible to make a simplified supply opening for the model experiment....

  20. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare mainly due to a lack of hydro-geological data which are usually only available in

  2. Numerical Modeling of Large-Scale Rocky Coastline Evolution

    Science.gov (United States)

    Limber, P.; Murray, A. B.; Littlewood, R.; Valvo, L.

    2008-12-01

    Seventy-five percent of the world's ocean coastline is rocky. On large scales (i.e. greater than a kilometer), many intertwined processes drive rocky coastline evolution, including coastal erosion and sediment transport, tectonics, antecedent topography, and variations in sea cliff lithology. In areas such as California, an additional aspect of rocky coastline evolution involves submarine canyons that cut across the continental shelf and extend into the nearshore zone. These types of canyons intercept alongshore sediment transport and flush sand to abyssal depths during periodic turbidity currents, thereby delineating coastal sediment transport pathways and affecting shoreline evolution over large spatial and time scales. How tectonic, sediment transport, and canyon processes interact with inherited topographic and lithologic settings to shape rocky coastlines remains an unanswered, and largely unexplored, question. We will present numerical model results of rocky coastline evolution that starts with an immature fractal coastline. The initial shape is modified by headland erosion, wave-driven alongshore sediment transport, and submarine canyon placement. Our previous model results have shown that, as expected, an initial sediment-free irregularly shaped rocky coastline with homogeneous lithology will undergo smoothing in response to wave attack; headlands erode and mobile sediment is swept into bays, forming isolated pocket beaches. As this diffusive process continues, pocket beaches coalesce, and a continuous sediment transport pathway results. However, when a randomly placed submarine canyon is introduced to the system as a sediment sink, the end results are wholly different: sediment cover is reduced, which in turn increases weathering and erosion rates and causes the entire shoreline to move landward more rapidly. The canyon's alongshore position also affects coastline morphology. When placed offshore of a headland, the submarine canyon captures local sediment
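The diffusive smoothing described above can be sketched as a simple 1-D diffusion of shoreline position; this is a toy illustration of the concept, not the authors' model:

```python
import numpy as np

def diffuse_shoreline(y, kappa, dt, steps):
    """Explicit diffusion of shoreline position y(x): headlands erode, bays fill.
    Periodic boundaries; stable for kappa*dt <= 0.5 with grid spacing dx = 1."""
    y = y.astype(float).copy()
    for _ in range(steps):
        y += kappa * dt * (np.roll(y, 1) - 2 * y + np.roll(y, -1))
    return y

# An irregular initial coastline: long-wavelength shape plus short-wavelength bumps
x = np.arange(128)
y0 = np.sin(2 * np.pi * x / 128) + 0.3 * np.sin(2 * np.pi * 7 * x / 128)
y = diffuse_shoreline(y0, kappa=0.4, dt=1.0, steps=200)
# Short-wavelength roughness decays fastest, smoothing the coastline.
```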

  3. Regional climate modeling over the Maritime Continent: Assessment of RegCM3-BATS1e and RegCM3-IBIS

    Science.gov (United States)

    Gianotti, R. L.; Zhang, D.; Eltahir, E. A.

    2010-12-01

    Despite its importance to global rainfall and circulation processes, the Maritime Continent remains a region that is poorly simulated by climate models. Relatively few studies have been undertaken using a model with fine enough resolution to capture the small-scale spatial heterogeneity of this region and associated land-atmosphere interactions. These studies have shown that even regional climate models (RCMs) struggle to reproduce the climate of this region, particularly the diurnal cycle of rainfall. This study builds on previous work by undertaking a more thorough evaluation of RCM performance in simulating the timing and intensity of rainfall over the Maritime Continent, with identification of major sources of error. An assessment was conducted of the Regional Climate Model Version 3 (RegCM3) used in a coupled system with two land surface schemes: Biosphere Atmosphere Transfer System Version 1e (BATS1e) and Integrated Biosphere Simulator (IBIS). The model’s performance in simulating precipitation was evaluated against the 3-hourly TRMM 3B42 product, with some validation provided of this TRMM product against ground station meteorological data. It is found that the model suffers from three major errors in the rainfall histogram: underestimation of the frequency of dry periods, overestimation of the frequency of low intensity rainfall, and underestimation of the frequency of high intensity rainfall. Additionally, the model shows error in the timing of the diurnal rainfall peak, particularly over land surfaces. These four errors were largely insensitive to the choice of boundary conditions, convective parameterization scheme or land surface scheme. The presence of a wet or dry bias in the simulated volumes of rainfall was, however, dependent on the choice of convection scheme and boundary conditions. This study also showed that the coupled model system has significant error in overestimation of latent heat flux and evapotranspiration from the land surface, and
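A minimal sketch of the kind of rainfall-histogram comparison described above, splitting a rainfall series into dry, light, and heavy fractions; the thresholds and data are illustrative, not the TRMM 3B42 values:

```python
import numpy as np

def rainfall_frequencies(rain, dry_thresh=0.1, heavy_thresh=10.0):
    """Fraction of intervals that are dry, light, or heavy -- the three
    histogram regions where the abstract reports model error.
    Thresholds (mm per 3-hourly interval) are illustrative."""
    rain = np.asarray(rain, dtype=float)
    dry = np.mean(rain < dry_thresh)
    heavy = np.mean(rain >= heavy_thresh)
    light = 1.0 - dry - heavy
    return dry, light, heavy

# Comparing these triples between model output and observations exposes
# over/underestimation of each part of the histogram.
model_freqs = rainfall_frequencies([0.0, 0.3, 2.0, 0.0, 12.5, 0.05])
```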

  4. Natural and drought scenarios in an east central Amazon forest: Fidelity of the Community Land Model 3.5 with three biogeochemical models

    Science.gov (United States)

    Sakaguchi, Koichi; Zeng, Xubin; Christoffersen, Bradley J.; Restrepo-Coupe, Natalia; Saleska, Scott R.; Brando, Paulo M.

    2011-03-01

    Recent development of general circulation models involves biogeochemical cycles: flows of carbon and other chemical species that circulate through the Earth system. Such models are valuable tools for future projections of climate, but still bear large uncertainties in the model simulations. One of the regions with especially high uncertainty is the Amazon forest where large-scale dieback associated with the changing climate is predicted by several models. In order to better understand the capability and weakness of global-scale land-biogeochemical models in simulating a tropical ecosystem under the present day as well as significantly drier climates, we analyzed the off-line simulations for an east central Amazon forest by the Community Land Model version 3.5 of the National Center for Atmospheric Research and its three independent biogeochemical submodels (CASA', CN, and DGVM). Intense field measurements carried out under Large Scale Biosphere-Atmosphere Experiment in Amazonia, including forest response to drought from a throughfall exclusion experiment, are utilized to evaluate the whole spectrum of biogeophysical and biogeochemical aspects of the models. Our analysis shows reasonable correspondence in momentum and energy turbulent fluxes, but it highlights three processes that are not in agreement with observations: (1) inconsistent seasonality in carbon fluxes, (2) biased biomass size and allocation, and (3) overestimation of vegetation stress to short-term drought but underestimation of biomass loss from long-term drought. Without resolving these issues the modeled feedbacks from the biosphere in future climate projections would be questionable. We suggest possible directions for model improvements and also emphasize the necessity of more studies using a variety of in situ data for both driving and evaluating land-biogeochemical models.

  5. Modeling of reservoir operation in UNH global hydrological model

    Science.gov (United States)

    Shiklomanov, Alexander; Prusevich, Alexander; Frolking, Steve; Glidden, Stanley; Lammers, Richard; Wisser, Dominik

    2015-04-01

    reservoirs designed for hydropower generation, water supply and flood control. Less reliable results were observed for Africa and dry areas of Asia and America. There are several possible causes of large uncertainties in discharge simulations for these areas including: accuracy of observational data, model underestimation of extensive water use and greater uncertainties of used climatic data in these regions due to sparser observational network. In general the applied approach for streamflow routing through reservoirs and large natural lakes has significantly improved simulated discharge estimates.
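A minimal water-balance sketch of streamflow routing through a reservoir, the kind of scheme this record describes; the release rule, capacity, and numbers are illustrative, not the UNH model's operation scheme:

```python
def route_reservoir(inflows, capacity, release_target, storage0=0.0):
    """Simple water-balance routing: store inflow, release toward a target,
    and spill whatever exceeds capacity. Units: volume per time step."""
    storage = storage0
    outflows = []
    for q_in in inflows:
        storage += q_in
        release = min(storage, release_target)
        storage -= release
        spill = max(0.0, storage - capacity)
        storage -= spill
        outflows.append(release + spill)
    return outflows

# The reservoir attenuates the inflow peak and sustains flow afterwards:
print(route_reservoir([10, 50, 5, 0], capacity=30, release_target=15))
# -> [10.0, 20.0, 15.0, 15.0]
```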

  6. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
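GRESS and ADGEN are computer-calculus (automatic differentiation) systems; the underlying idea can be sketched with a minimal forward-mode dual-number class and a toy model response (this is an illustration of the technique, not the PRESTO-II code):

```python
class Dual:
    """Minimal forward-mode automatic differentiation:
    each number carries a value and a derivative, propagated exactly."""
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.value * other.deriv + self.deriv * other.value)
    __rmul__ = __mul__

def model(k):
    # Toy model response: R(k) = 3*k^2 + 2*k
    return 3 * k * k + 2 * k

x = Dual(2.0, 1.0)          # seed the derivative dk/dk = 1
r = model(x)
print(r.value, r.deriv)     # R(2) = 16, dR/dk = 6k + 2 = 14 at k = 2
```

The sensitivity emerges from a single model evaluation, which is the appeal over repeated perturbation runs.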

  7. Modeling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    AUTHOR|(SzGeCERN)756497; The ATLAS collaboration; Garcia Garcia, Pedro Javier; Vandelli, Wainer; Froening, Holger

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  8. Modelling Resource Utilization of a Large Data Acquisition System

    CERN Document Server

    Santos, Alejandro; The ATLAS collaboration

    2017-01-01

    The ATLAS 'Phase-II' upgrade, scheduled to start in 2024, will significantly change the requirements under which the data-acquisition system operates. The input data rate, currently fixed around 150 GB/s, is anticipated to reach 5 TB/s. In order to deal with the challenging conditions, and exploit the capabilities of newer technologies, a number of architectural changes are under consideration. Of particular interest is a new component, known as the Storage Handler, which will provide a large buffer area decoupling real-time data taking from event filtering. Dynamic operational models of the upgraded system can be used to identify the required resources and to select optimal techniques. In order to achieve a robust and dependable model, the current data-acquisition architecture has been used as a test case. This makes it possible to verify and calibrate the model against real operation data. Such a model can then be evolved toward the future ATLAS Phase-II architecture. In this paper we introduce the current ...

  9. Large-scale groundwater modeling using global datasets: A test case for the Rhine-Meuse basin

    NARCIS (Netherlands)

    Sutanudjaja, E.H.; Beek, L.P.H. van; Jong, S.M. de; Geer, F.C. van; Bierkens, M.F.P.

    2011-01-01

    Large-scale groundwater models involving aquifers and basins of multiple countries are still rare due to a lack of hydrogeological data which are usually only available in developed countries. In this study, we propose a novel approach to construct large-scale groundwater models by using global

  10. Large animal and primate models of spinal cord injury for the testing of novel therapies.

    Science.gov (United States)

    Kwon, Brian K; Streijger, Femke; Hill, Caitlin E; Anderson, Aileen J; Bacon, Mark; Beattie, Michael S; Blesch, Armin; Bradbury, Elizabeth J; Brown, Arthur; Bresnahan, Jacqueline C; Case, Casey C; Colburn, Raymond W; David, Samuel; Fawcett, James W; Ferguson, Adam R; Fischer, Itzhak; Floyd, Candace L; Gensel, John C; Houle, John D; Jakeman, Lyn B; Jeffery, Nick D; Jones, Linda Ann Truett; Kleitman, Naomi; Kocsis, Jeffery; Lu, Paul; Magnuson, David S K; Marsala, Martin; Moore, Simon W; Mothe, Andrea J; Oudega, Martin; Plant, Giles W; Rabchevsky, Alexander Sasha; Schwab, Jan M; Silver, Jerry; Steward, Oswald; Xu, Xiao-Ming; Guest, James D; Tetzlaff, Wolfram

    2015-07-01

    Large animal and primate models of spinal cord injury (SCI) are being increasingly utilized for the testing of novel therapies. While these represent intermediary animal species between rodents and humans and offer the opportunity to pose unique research questions prior to clinical trials, the role that such large animal and primate models should play in the translational pipeline is unclear. In this initiative we engaged members of the SCI research community in a questionnaire and round-table focus group discussion around the use of such models. Forty-one SCI researchers from academia, industry, and granting agencies were asked to complete a questionnaire about their opinion regarding the use of large animal and primate models in the context of testing novel therapeutics. The questions centered around how large animal and primate models of SCI would be best utilized in the spectrum of preclinical testing, and how much testing in rodent models was warranted before employing these models. Further questions were posed at a focus group meeting attended by the respondents. The group generally felt that large animal and primate models of SCI serve a potentially useful role in the translational pipeline for novel therapies, and that the rational use of these models would depend on the type of therapy and specific research question being addressed. While testing within these models should not be mandatory, the detection of beneficial effects using these models lends additional support for translating a therapy to humans. These models provide an opportunity to evaluate and refine surgical procedures prior to use in humans, and to assess safety and bio-distribution in a spinal cord more similar in size and anatomy to that of humans. Our results reveal that while many feel that these models are valuable in the testing of novel therapies, important questions remain unanswered about how they should be used and how data derived from them should be interpreted.

  11. Investigating the role of chemical and physical processes on organic aerosol modelling with CAMx in the Po Valley during a winter episode

    Science.gov (United States)

    Meroni, A.; Pirovano, G.; Gilardoni, S.; Lonati, G.; Colombi, C.; Gianelle, V.; Paglione, M.; Poluzzi, V.; Riva, G. M.; Toppetti, A.

    2017-12-01

    Traditional aerosol mechanisms underestimate the observed organic aerosol concentration, especially due to the lack of information on secondary organic aerosol (SOA) formation and processing. In this study we evaluate the chemical transport model CAMx during a one-month period in winter (February 2013) over a 5 km resolution domain covering the whole Po valley (Northern Italy). This work aims at investigating the effects of chemical and physical atmospheric processing on modelling results and, in particular, at evaluating the CAMx sensitivity to organic aerosol (OA) modelling schemes: we compare the recent 1.5D-VBS algorithm (CAMx-VBS) with the traditional Odum 2-product model (CAMx-SOAP). Additionally, a thorough diagnostic analysis of the reproduction of meteorology, precursors, and aerosol components was intended to point out strengths and weaknesses of the modelling system and address its improvement. Firstly, we evaluate model performance for criteria PM concentrations. PM10 concentration was underestimated by CAMx-SOAP and even more so by CAMx-VBS, with the latter showing a bias ranging between -4.7 and -7.1 μg m⁻³. PM2.5 model performance was somewhat better than for PM10, showing a mean bias ranging between -0.5 μg m⁻³ at rural sites and -5.5 μg m⁻³ at urban and suburban sites. CAMx performance for OA was clearly worse than for the other PM compounds (negative bias ranging between -40% and -75%). The comparison of model results with OA sources (identified by PMF analysis) shows that the VBS scheme underestimates freshly emitted organic aerosol while SOAP overestimates it. The VBS scheme correctly reproduces biomass burning (BBOA) contributions to primary OA concentrations (POA). In contrast, VBS slightly underestimates the contribution from fossil-fuel combustion (HOA), indicating that POA emissions related to road transport are either underestimated or associated with higher volatility classes. The VBS scheme under-predicts the SOA too, but to a lesser

  12. REIONIZATION ON LARGE SCALES. I. A PARAMETRIC MODEL CONSTRUCTED FROM RADIATION-HYDRODYNAMIC SIMULATIONS

    International Nuclear Information System (INIS)

    Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.

    2013-01-01

    We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in a L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions
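A scale-dependent linear bias of the kind described can be applied as a Fourier-space filter on a density field. The sketch below uses an illustrative bias shape and amplitude (the paper fits its own three-parameter form to the simulations):

```python
import numpy as np

def biased_field(density, bias_of_k, box_size):
    """Filter a periodic density field with a scale-dependent linear bias b(k),
    yielding a correlated field (here standing in for the reionization-redshift
    fluctuation field)."""
    n = density.shape[0]
    delta_k = np.fft.fftn(density)
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2)
    return np.real(np.fft.ifftn(bias_of_k(kmag) * delta_k))

# Illustrative bias shape; the functional form and parameters are made up here.
bias = lambda k: 1.0 / (1.0 + k / 0.9) ** 0.2

rng = np.random.default_rng(0)
delta = rng.normal(size=(32, 32, 32))
dz = biased_field(delta, bias, box_size=100.0)
```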

  13. Application of simplified models to CO2 migration and immobilization in large-scale geological systems

    KAUST Repository

    Gasda, Sarah E.

    2012-07-01

    Long-term stabilization of injected carbon dioxide (CO2) is an essential component of risk management for geological carbon sequestration operations. However, migration and trapping phenomena are inherently complex, involving processes that act over multiple spatial and temporal scales. One example involves centimeter-scale density instabilities in the dissolved CO2 region leading to large-scale convective mixing that can be a significant driver for CO2 dissolution. Another example is the potentially important effect of capillary forces, in addition to buoyancy and viscous forces, on the evolution of mobile CO2. Local capillary effects lead to a capillary transition zone, or capillary fringe, where both fluids are present in the mobile state. This small-scale effect may have a significant impact on large-scale plume migration as well as long-term residual and dissolution trapping. Computational models that can capture both large and small-scale effects are essential to predict the role of these processes on the long-term storage security of CO2 sequestration operations. Conventional modeling tools are unable to resolve sufficiently all of these relevant processes when modeling CO2 migration in large-scale geological systems. Herein, we present a vertically-integrated approach to CO2 modeling that employs upscaled representations of these subgrid processes. We apply the model to the Johansen formation, a prospective site for sequestration of Norwegian CO2 emissions, and explore the sensitivity of CO2 migration and trapping to subscale physics. Model results show the relative importance of different physical processes in large-scale simulations. The ability of models such as this to capture the relevant physical processes at large spatial and temporal scales is important for prediction and analysis of CO2 storage sites. © 2012 Elsevier Ltd.

  14. Regional climate model sensitivity to domain size

    Energy Technology Data Exchange (ETDEWEB)

    Leduc, Martin [Universite du Quebec a Montreal, Canadian Regional Climate Modelling and Diagnostics (CRCMD) Network, ESCER Centre, Montreal (Canada); UQAM/Ouranos, Montreal, QC (Canada); Laprise, Rene [Universite du Quebec a Montreal, Canadian Regional Climate Modelling and Diagnostics (CRCMD) Network, ESCER Centre, Montreal (Canada)

    2009-05-15

    Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large scale nudging is applied. The issue of domain size is studied here by using the "perfect model" approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are hence used to drive a set of four simulations (LBs for Little Brothers), with the same model, but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) when the domain gets smaller. The extraction of the small-scale features by using a spectral filter allows detecting important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 × 100 grid points). The permanent "spatial spin-up" corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere. (orig.)
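The big-brother degradation step, i.e. low-pass filtering the BB fields to emulate coarse-resolution driving data, might look like the following sketch; the cutoff, grid, and field are illustrative:

```python
import numpy as np

def low_pass(field, keep_fraction):
    """Keep only the lowest wavenumbers of a periodic 2-D field, emulating
    coarse-resolution nesting data in a big-brother experiment."""
    n = field.shape[0]
    fk = np.fft.fft2(field)
    k = np.abs(np.fft.fftfreq(n, d=1.0 / n))   # integer wavenumbers
    kx, ky = np.meshgrid(k, k, indexing="ij")
    mask = np.maximum(kx, ky) <= keep_fraction * (n // 2)
    return np.real(np.fft.ifft2(fk * mask))

rng = np.random.default_rng(1)
bb = rng.normal(size=(64, 64))             # stand-in for a BB field
fbb = low_pass(bb, keep_fraction=0.25)     # retains only the larger scales
```

Driving the same model with `fbb` on smaller domains, then comparing against `bb`, is the essence of the perfect-model test.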

  15. Development of a transverse mixing model for large scale impulsion phenomenon in tight lattice

    International Nuclear Information System (INIS)

    Liu, Xiaojing; Ren, Shuo; Cheng, Xu

    2017-01-01

    Highlights: • The experimental data of Krauss are used to validate the feasibility of the CFD simulation method. • CFD simulations are performed to simulate the large-scale impulsion phenomenon for a tight-lattice bundle. • A mixing model simulating the large-scale impulsion phenomenon is proposed based on fitting of the CFD results. • The newly developed mixing model has been added to the subchannel code. - Abstract: Tight lattices are widely adopted in innovative reactor fuel bundle designs since they can increase the conversion ratio and improve the heat transfer between fuel bundles and coolant. It has been noticed that a large-scale impulsion of cross-velocity exists in the gap region, which plays an important role in the transverse mixing flow and heat transfer. Although many experiments and numerical simulations have been carried out to study the impulsion of velocity, a model describing the wave length, amplitude, and frequency of the mixing coefficient is still missing. This work takes advantage of the CFD method to simulate the experiment of Krauss and to compare experimental data with simulation results, in order to demonstrate the feasibility of the simulation method and turbulence model. Then, based on this verified method and model, several simulations are performed with different Reynolds numbers and different pitch-to-diameter ratios. By fitting the CFD results, a mixing model simulating the large-scale impulsion phenomenon is proposed and adopted in the current subchannel code. The new mixing model is applied to some fuel assembly analyses by subchannel calculation; it can be noticed that the newly developed mixing model reduces the hot channel factor and contributes to a uniform distribution of outlet temperature.

  16. Solving large linear systems in an implicit thermohaline ocean model

    NARCIS (Netherlands)

    de Niet, Arie Christiaan

    2007-01-01

    The climate on earth is largely determined by the global ocean circulation. Hence it is important to predict how the flow will react to perturbation by for example melting icecaps. To answer questions about the stability of the global ocean flow, a computer model has been developed that is able to

  17. Monte Carlo technique for very large ising models

    Science.gov (United States)

    Kalle, C.; Winkelmann, V.

    1982-08-01

    Rebbi's multispin coding technique is improved and applied to the kinetic Ising model with size 600×600×600. We give the central part of our computer program (for a CDC Cyber 76), which will be helpful also in a simulation of smaller systems, and describe the other tricks necessary to go to large lattices. The magnetization M at T = 1.4 T_c is found to decay asymptotically as exp(-t/2.90) if t is measured in Monte Carlo steps per spin, and M(t = 0) = 1 initially.
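Multispin coding packs many spins into one machine word to update them in parallel; the plain (unoptimized) Metropolis dynamics it accelerates can be sketched as follows, here on a small 2-D lattice rather than the paper's 600³ system:

```python
import numpy as np

def metropolis_sweep(spins, beta, rng):
    """One Monte Carlo step per spin on a periodic 2-D Ising lattice
    (the paper uses 3-D and multispin coding; this is the plain version)."""
    n = spins.shape[0]
    for _ in range(spins.size):
        i, j = rng.integers(0, n, size=2)
        nb = (spins[(i + 1) % n, j] + spins[(i - 1) % n, j]
              + spins[i, (j + 1) % n] + spins[i, (j - 1) % n])
        dE = 2.0 * spins[i, j] * nb
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            spins[i, j] *= -1

rng = np.random.default_rng(2)
spins = np.ones((16, 16), dtype=int)   # M(t=0) = 1, as in the paper
beta = 1.0 / (1.4 * 2.269)             # T = 1.4 T_c (2-D T_c ~ 2.269 J/k_B)
for step in range(5):
    metropolis_sweep(spins, beta, rng)
magnetization = spins.mean()
```

Above T_c the magnetization relaxes toward zero; measuring that decay per Monte Carlo step per spin is how the paper extracts its relaxation time.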

  18. A coordination model for ultra-large scale systems of systems

    Directory of Open Access Journals (Sweden)

    Manuela L. Bujorianu

    2013-11-01

    Ultra-large multi-agent systems are becoming increasingly popular due to the quick decay of individual production costs and the potential for speeding up the solving of complex problems. Examples include nano-robots, systems of nano-satellites for dangerous meteorite detection, and cultures of stem cells for organ regeneration or nerve repair. The topics associated with these systems are usually dealt with within the theories of intelligent swarms or biologically inspired computation systems. Stochastic models play an important role and they are based on various formulations of statistical mechanics. In these cases, the main assumption is that the swarm elements have a simple behaviour and that some average properties can be deduced for the entire swarm. In contrast, complex systems in areas like aeronautics are formed by elements with sophisticated behaviour, which may even be autonomous. In situations like this, a new approach to swarm coordination is necessary. We present a stochastic model where the swarm elements are communicating autonomous systems, the coordination is separated from the component autonomous activity, and the entire swarm can be abstracted away as a piecewise deterministic Markov process, which constitutes one of the most popular models in stochastic control. Keywords: ultra large multi-agent systems, system of systems, autonomous systems, stochastic hybrid systems.

  19. A Multi-Resolution Spatial Model for Large Datasets Based on the Skew-t Distribution

    KAUST Repository

    Tagle, Felipe

    2017-12-06

    Large, non-Gaussian spatial datasets pose a considerable modeling challenge, as the dependence structure implied by the model needs to be captured at different scales while retaining feasible inference. Skew-normal and skew-t distributions have only recently begun to appear in the spatial statistics literature, without much consideration, however, for the ability to capture dependence at multiple resolutions and simultaneously achieve feasible inference for increasingly large datasets. This article presents the first multi-resolution spatial model inspired by the skew-t distribution, where a large-scale effect follows a multivariate normal distribution and the fine-scale effects follow a multivariate skew-normal distribution. The resulting marginal distribution for each region is skew-t, thereby allowing for greater flexibility in capturing skewness and heavy tails characterizing many environmental datasets. Likelihood-based inference is performed using a Monte Carlo EM algorithm. The model is applied as a stochastic generator of daily wind speeds over Saudi Arabia.
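Skew-normal variates of the kind used for the fine-scale effects can be sampled via Azzalini's stochastic representation; a sketch with an illustrative shape parameter, not the fitted model:

```python
import numpy as np

def sample_skew_normal(alpha, size, rng):
    """Azzalini's stochastic representation of the skew-normal:
    Z = delta*|U0| + sqrt(1 - delta^2)*U1, with delta = alpha/sqrt(1 + alpha^2).
    Dividing Z by sqrt(chi2_nu / nu) would yield a skew-t variate."""
    delta = alpha / np.sqrt(1.0 + alpha**2)
    u0 = np.abs(rng.standard_normal(size))
    u1 = rng.standard_normal(size)
    return delta * u0 + np.sqrt(1.0 - delta**2) * u1

rng = np.random.default_rng(3)
z = sample_skew_normal(alpha=4.0, size=100_000, rng=rng)
# Theoretical mean of the skew-normal: delta * sqrt(2/pi)
```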

  20. Exploiting multi-scale parallelism for large scale numerical modelling of laser wakefield accelerators

    International Nuclear Information System (INIS)

    Fonseca, R A; Vieira, J; Silva, L O; Fiuza, F; Davidson, A; Tsung, F S; Mori, W B

    2013-01-01

    A new generation of laser wakefield accelerators (LWFA), supported by the extreme accelerating fields generated in the interaction of PW-class lasers and underdense targets, promises the production of high quality electron beams in short distances for multiple applications. Achieving this goal will rely heavily on numerical modelling to further understand the underlying physics and identify optimal regimes, but large scale modelling of these scenarios is computationally heavy and requires the efficient use of state-of-the-art petascale supercomputing systems. We discuss the main difficulties involved in running these simulations and the new developments implemented in the OSIRIS framework to address these issues, ranging from multi-dimensional dynamic load balancing and hybrid distributed/shared memory parallelism to the vectorization of the PIC algorithm. We present the results of the OASCR Joule Metric program on the issue of large scale modelling of LWFA, demonstrating speedups of over 1 order of magnitude on the same hardware. Finally, scalability to over ~10⁶ cores and sustained performance over ~2 PFlops are demonstrated, opening the way for large scale modelling of LWFA scenarios. (paper)

  1. Fast and accurate focusing analysis of large photon sieve using pinhole ring diffraction model.

    Science.gov (United States)

    Liu, Tao; Zhang, Xin; Wang, Lingjie; Wu, Yanxiong; Zhang, Jizhen; Qu, Hemeng

    2015-06-10

    In this paper, we developed a pinhole ring diffraction model for the focusing analysis of a large photon sieve. Instead of analyzing individual pinholes, we discuss the focusing of all of the pinholes in a single ring. An explicit equation for the diffracted field of an individual pinhole ring is proposed. We investigated the validity range of this generalized model and analytically described the sufficient conditions for its validity. A practical example and investigation reveal the high accuracy of the pinhole ring diffraction model. This simulation method can be used for fast and accurate focusing analysis of a large photon sieve.
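A photon sieve places its pinholes on the rings of a Fresnel zone pattern; as background to the ring-by-ring analysis, the standard zone-boundary radii can be computed from r_n² = nλf + (nλ/2)². This is textbook zone-plate geometry, not the paper's diffraction model, and the wavelength and focal length below are illustrative:

```python
import numpy as np

def zone_radii(wavelength, focal_length, n_zones):
    """Fresnel zone boundary radii r_n with r_n^2 = n*lam*f + (n*lam/2)^2;
    photon-sieve pinholes are centered on the transparent zones."""
    n = np.arange(1, n_zones + 1)
    return np.sqrt(n * wavelength * focal_length + (n * wavelength / 2) ** 2)

# HeNe wavelength, 1 m focal length (illustrative values)
r = zone_radii(wavelength=632.8e-9, focal_length=1.0, n_zones=5)
```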

  2. Forecasting the mortality rates using Lee-Carter model and Heligman-Pollard model

    Science.gov (United States)

    Ibrahim, R. I.; Ngataman, N.; Abrisam, W. N. A. Wan Mohd

    2017-09-01

    Improvement in life expectancies has driven further declines in mortality. The sustained reduction in mortality rates and its systematic underestimation have been attracting significant interest from researchers in recent years because of their potential impact on population size and structure, social security systems, and (from an actuarial perspective) the life insurance and pensions industry worldwide. Among all forecasting methods, the Lee-Carter model has been widely accepted by the actuarial community, and the Heligman-Pollard model has been widely used by researchers in modelling and forecasting future mortality. Therefore, this paper focuses only on the Lee-Carter and Heligman-Pollard models. The main objective of this paper is to investigate how accurately these two models perform using Malaysian data. Since these models involve nonlinear equations that are difficult to solve explicitly, the Matrix Laboratory Version 8.0 (MATLAB 8.0) software is used to estimate the parameters of the models. An Autoregressive Integrated Moving Average (ARIMA) procedure is applied to obtain the forecasted parameters for both models, and the forecasted mortality rates are obtained using all the values of the forecasted parameters. To investigate the accuracy of the estimation, the forecasted results are compared against actual mortality data. The results indicate that both models perform better for the male population. However, for the elderly female population, the Heligman-Pollard model seems to underestimate the mortality rates while the Lee-Carter model seems to overestimate them.
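The Lee-Carter model ln m(x,t) = a_x + b_x k_t is classically fitted by a singular value decomposition of the centered log-mortality surface. A sketch on synthetic data follows (the paper uses MATLAB and Malaysian data; this is an illustration of the fitting step, not their code):

```python
import numpy as np

def fit_lee_carter(log_m):
    """Fit ln m(x,t) = a_x + b_x * k_t by SVD, with the usual normalization
    sum(b_x) = 1 and sum(k_t) = 0. log_m has shape (ages, years)."""
    a = log_m.mean(axis=1)                     # a_x: average log-mortality by age
    U, s, Vt = np.linalg.svd(log_m - a[:, None], full_matrices=False)
    b = U[:, 0] / U[:, 0].sum()                # age pattern of mortality change
    k = s[0] * Vt[0] * U[:, 0].sum()           # period mortality index
    return a, b, k

# Synthetic mortality surface with a known linear decline in k_t
ages, years = 10, 30
a_true = np.linspace(-6.0, -2.0, ages)
b_true = np.full(ages, 1.0 / ages)
k_true = np.linspace(15.0, -15.0, years)
log_m = a_true[:, None] + b_true[:, None] * k_true[None, :]

a, b, k = fit_lee_carter(log_m)
```

Forecasting then reduces to extrapolating the one-dimensional index k_t, e.g. with an ARIMA model as in the paper.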

  3. Reduction of the nitro group during sample preparation may cause underestimation of the nitration level in 3-nitrotyrosine immunoblotting

    DEFF Research Database (Denmark)

    Söderling, Ann-Sofi; Hultman, Lena; Delbro, Dick

    2007-01-01

    We noted differences in the antibody response to 3-nitrotyrosine (NO(2)Tyr) in fixed and non-fixed tissues, and therefore studied potential problems associated with non-fixed tissues in Western blot analyses. Three different monoclonal anti-nitrotyrosine antibodies in Western blot analysis of inf...... is not detected by anti-NO(2)Tyr antibodies. Western blot analysis may therefore underestimate the level of tissue nitration, and factors causing a reduction of NO(2)Tyr during sample preparation might conceal the actual nitration of proteins....

  4. Effective models of new physics at the Large Hadron Collider

    International Nuclear Information System (INIS)

    Llodra-Perez, J.

    2011-07-01

    With the start of the Large Hadron Collider runs in 2010, particle physicists will soon be able to better understand electroweak symmetry breaking, and may answer many experimental and theoretical open questions raised by the Standard Model. Building on this favorable situation, we first present in this thesis a highly model-independent parametrization to characterize the effects of new physics on the production and decay mechanisms of the Higgs boson. This tool can be used directly in the data analyses of CMS and ATLAS, the two large general-purpose LHC experiments, and will help to exclude or validate new theories beyond the Standard Model. In a second, model-building approach, we consider a scenario of new physics in which the Standard Model fields propagate in a flat six-dimensional space whose two extra spatial dimensions are compactified on a real projective plane. This orbifold is the unique six-dimensional geometry that possesses chiral fermions and a natural dark matter candidate. The scalar photon, the lightest particle of the first Kaluza-Klein tier, is stabilized by a symmetry relic of the six-dimensional Lorentz invariance. Using current constraints from cosmological observations and our first analytical calculation, we derive a characteristic mass range of a few hundred GeV for the Kaluza-Klein scalar photon. The new states of our universal extra-dimension model are therefore light enough to be produced with clear signatures at the Large Hadron Collider. We then use a more sophisticated analysis of the particle mass spectrum and couplings, including one-loop radiative corrections, to establish our first predictions and constraints on the expected LHC phenomenology. (author)

  5. Application of Logic Models in a Large Scientific Research Program

    Science.gov (United States)

    O'Keefe, Christine M.; Head, Richard J.

    2011-01-01

    It is the purpose of this article to discuss the development and application of a logic model in the context of a large scientific research program within the Commonwealth Scientific and Industrial Research Organisation (CSIRO). CSIRO is Australia's national science agency and is a publicly funded part of Australia's innovation system. It conducts…

  6. Water and salt balance modelling to predict the effects of land-use changes in forested catchments. 3. The large catchment model

    Science.gov (United States)

    Sivapalan, Murugesu; Viney, Neil R.; Jeevaraj, Charles G.

    1996-03-01

    This paper presents an application of a long-term, large catchment-scale water balance model developed to predict the effects of forest clearing in the south-west of Western Australia. The conceptual model simulates the basic daily water balance fluxes in forested catchments before and after clearing. The large catchment is divided into a number of sub-catchments (1-5 km2 in area), which are taken as the fundamental building blocks of the large catchment model. The responses of the individual sub-catchments to rainfall and pan evaporation are conceptualized in terms of three inter-dependent subsurface stores A, B and F, which are considered to represent the moisture states of the sub-catchments. Details of the sub-catchment-scale water balance model were presented in Part 1 of this series of papers. The response of any sub-catchment is a function of its local moisture state, as measured by the local values of the stores. The variations of the initial values of the stores among the sub-catchments are described in the large catchment model through simple, linear equations involving a number of similarity indices representing topography, mean annual rainfall and level of forest clearing. The model is applied to the Conjurunup catchment, a medium-sized (39.6 km2) catchment in the south-west of Western Australia. The catchment has been heterogeneously (in space and time) cleared for bauxite mining and subsequently rehabilitated. For this application, the catchment is divided into 11 sub-catchments. The model parameters are estimated by calibration, comparing observed and predicted runoff values over an 18-year period for the large catchment and two of the sub-catchments. Excellent fits are obtained.

  7. Large-scale ligand-based predictive modelling using support vector machines.

    Science.gov (United States)

    Alvarsson, Jonathan; Lampa, Samuel; Schaal, Wesley; Andersson, Claes; Wikberg, Jarl E S; Spjuth, Ola

    2016-01-01

    The increasing size of datasets in drug discovery makes it challenging to build robust and accurate predictive models within a reasonable amount of time. In order to investigate the effect of dataset sizes on predictive performance and modelling time, ligand-based regression models were trained on open datasets of varying sizes of up to 1.2 million chemical structures. For modelling, two implementations of support vector machines (SVM) were used. Chemical structures were described by the signatures molecular descriptor. Results showed that for the larger datasets, the LIBLINEAR SVM implementation performed on par with the well-established libsvm with a radial basis function kernel, but with dramatically less time for model building even on modest computer resources. Using a non-linear kernel proved to be infeasible for large data sizes, even with substantial computational resources on a computer cluster. To deploy the resulting models, we extended the Bioclipse decision support framework to support models from LIBLINEAR and made our models of logD and solubility available from within Bioclipse.
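The scaling gap between linear and kernel SVMs described above comes down to the cost per update. A Pegasos-style stochastic subgradient sketch (illustrative only, not the LIBLINEAR algorithm itself) makes the per-sample cost, independent of dataset size, visible:

```python
import numpy as np

def linear_svm_sgd(X, y, lam=0.01, epochs=20, seed=0):
    """Pegasos-style stochastic subgradient training of a linear SVM.

    Each update costs O(n_features) regardless of dataset size, which is
    why linear solvers handle millions of structures while kernel SVMs
    (quadratic or worse in the number of samples) become infeasible.
    Labels y must take values in {-1, +1}.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)                 # decaying step size
            margin = y[i] * (X[i] @ w)
            w *= 1.0 - eta * lam                  # shrink (regulariser)
            if margin < 1.0:                      # hinge loss violated
                w += eta * y[i] * X[i]
    return w

# two well-separated point clouds; the learned hyperplane separates them
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w = linear_svm_sgd(X, y)
accuracy = float((np.sign(X @ w) == y).mean())
```

Production solvers such as LIBLINEAR use more refined coordinate-descent methods, but share this linear-in-samples cost profile.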

  8. ARMA modelling of neutron stochastic processes with large measurement noise

    International Nuclear Information System (INIS)

    Zavaljevski, N.; Kostic, Lj.; Pesic, M.

    1994-01-01

    An autoregressive moving average (ARMA) model of the neutron fluctuations with large measurement noise is derived from Langevin stochastic equations and validated using time series data obtained during prompt neutron decay constant measurements at the zero power reactor RB in Vinca. Model parameters are estimated using the maximum likelihood (ML) off-line algorithm and an adaptive pole estimation algorithm based on the recursive prediction error method (RPE). The results show that subcriticality can be determined from real data with high measurement noise using a much shorter statistical sample than in standard methods. (author)
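As a simplified illustration of the idea (not the authors' ML/RPE estimators), the coefficient of a first-order autoregressive model can be recovered from a noisy series by least squares, and a decay constant then follows from the sampling interval:

```python
import numpy as np

def fit_ar1(x):
    """Least-squares estimate of phi in x[t] = phi * x[t-1] + e[t]."""
    return float(x[:-1] @ x[1:]) / float(x[:-1] @ x[:-1])

# simulate a fluctuation series with a known coefficient and recover it
rng = np.random.default_rng(0)
phi_true, n, dt = 0.8, 20000, 1e-3        # dt: sampling interval (s), assumed
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi_true * x[t - 1] + rng.normal()
phi_hat = fit_ar1(x)
alpha_hat = -np.log(phi_hat) / dt          # implied decay constant (1/s)
```

Higher-order ARMA fits with an explicit measurement-noise term, as in the abstract, follow the same principle but require ML or recursive prediction error methods rather than a closed-form regression.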

  9. CO emission and export from Asia: an analysis combining complementary satellite measurements (MOPITT, SCIAMACHY and ACE-FTS) with global modeling

    Directory of Open Access Journals (Sweden)

    P. F. Bernath

    2008-09-01

    This study presents the complementary picture of pollution outflow provided by several satellite observations of carbon monoxide (CO) based on different observation techniques, illustrated by an analysis of the Asian outflow during the spring of 2005 through comparisons with simulations by the LMDz-INCA global chemistry transport model. The CO observations come from the MOPITT and SCIAMACHY nadir sounders, which provide vertically integrated information with excellent horizontal sampling, and from the ACE-FTS solar occultation instrument, which has limited spatial coverage but allows the retrieval of vertical profiles. Combining observations from MOPITT (mainly sensitive to the free troposphere) and SCIAMACHY (sensitive to the full column) allows a qualitative evaluation of boundary layer CO. The model tends to underestimate this residual compared to the observations, suggesting underestimated emissions, especially in eastern Asia; however, a better understanding of the consistency and possible biases between the MOPITT and SCIAMACHY CO is necessary for a quantitative evaluation. Underestimated emissions, and possibly too little lofting and underestimated chemical production in the model, lead to an underestimate of the export to the free troposphere, as highlighted by comparisons with MOPITT and ACE-FTS. Both instruments observe large trans-Pacific transport extending from ~20° N to ~60° N, with high upper-tropospheric CO observed by ACE-FTS above the eastern Pacific (values of up to 300 ppbv around 50° N at 500 hPa and up to ~200 ppbv around 30° N at 300 hPa). The low vertical and horizontal resolutions of the global model do not allow the simulation of the strong enhancements in the observed plumes. However, the transport patterns are well captured, and are mainly attributed to export from eastern Asia, with increasing contributions from South Asia and Indonesia towards the tropics. Additional measurements of C2

  10. Large-scale groundwater modeling using global datasets: a test case for the Rhine-Meuse basin

    Directory of Open Access Journals (Sweden)

    E. H. Sutanudjaja

    2011-09-01

    The current generation of large-scale hydrological models does not include a groundwater flow component. Large-scale groundwater models, involving aquifers and basins of multiple countries, are still rare, mainly due to a lack of hydro-geological data, which are usually only available in developed countries. In this study, we propose a novel approach to constructing large-scale groundwater models using global datasets that are readily available. As the test-bed, we use the combined Rhine-Meuse basin, for which groundwater head data are available to verify the model output. We start by building a distributed land surface model (30 arc-second resolution) to estimate groundwater recharge and river discharge. Subsequently, a MODFLOW transient groundwater model is built and forced by the recharge and surface water levels calculated by the land surface model. Results are promising despite the fact that we still use an offline procedure to couple the land surface and MODFLOW groundwater models (i.e. the simulations of the two models are performed separately). The simulated river discharges compare well to the observations. Moreover, based on our sensitivity analysis, in which we run several groundwater model scenarios with various hydro-geological parameter settings, we observe that the model can reproduce the observed groundwater head time series reasonably well. However, there are still some limitations in the current approach, specifically because the offline-coupling technique simplifies the dynamic feedbacks between surface water levels and groundwater heads, and between soil moisture states and groundwater heads. The current sensitivity analysis also ignores the uncertainty of the land surface model output. Despite these limitations, we argue that the results show promise for large-scale groundwater modeling practices, including in data-poor environments and at the global scale.

  11. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    International Nuclear Information System (INIS)

    Zhai, Jianliang; Zhang, Tusheng

    2017-01-01

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  12. Large Deviations for Stochastic Models of Two-Dimensional Second Grade Fluids

    Energy Technology Data Exchange (ETDEWEB)

    Zhai, Jianliang, E-mail: zhaijl@ustc.edu.cn [University of Science and Technology of China, School of Mathematical Sciences (China); Zhang, Tusheng, E-mail: Tusheng.Zhang@manchester.ac.uk [University of Manchester, School of Mathematics (United Kingdom)

    2017-06-15

    In this paper, we establish a large deviation principle for stochastic models of incompressible second grade fluids. The weak convergence method introduced by Budhiraja and Dupuis (Probab Math Statist 20:39–61, 2000) plays an important role.

  13. Ecotoxicological potential of the biocides terbutryn, octhilinone and methylisothiazolinone: Underestimated risk from biocidal pathways?

    Science.gov (United States)

    Kresmann, Simon; Arokia, Arokia Hansel Rajan; Koch, Christoph; Sures, Bernd

    2018-06-01

    The use of biocides by industry, agriculture and households has increased over the last two decades. Many new applications of known substances have enriched the variety of biocidal pollution sources for the aquatic environment. While agriculture was the major source for a long time, leaching from building facades and the preservation of personal care and cleaning products have been identified as new sources in recent years. With the different usage forms of biocidal products, the complexity of legislative regulation has increased as well. The requirements for risk assessment differ from one law to another, and the potential risk of substances under different regulations might be underestimated. EC50 and predicted no-effect concentration (PNEC) values gained from testing with different species remain the core of environmental risk assessment, but ecotoxicological data are limited or lacking for many biocides. In this study, terbutryn, octhilinone and methylisothiazolinone, biocides widely used in facade coatings and household products, were tested with the Daphnia magna acute immobilisation assay, the neutral red uptake assay and the ethoxyresorufin-O-deethylase (EROD) assay, performed with rainbow trout liver (RTL-W1) cells. Further, the MTT assay with the Chinese hamster ovarian cell line CHO-9 was used as a mammalian model. Octhilinone induced the strongest effects, with an EC50 of 156 μg/l in the D. magna assay, while terbutryn showed the weakest effects with 8390 μg/l and methylisothiazolinone 513 μg/l, respectively. All other assays showed higher EC50 values and thus only weak effects; the EROD assays did not show any effects. With additional literature and database records, PNEC values were calculated: 0.003 μg/l for terbutryn, 0.05 μg/l for octhilinone and 0.5 μg/l for methylisothiazolinone. Potential ecotoxicological risks of these biocides are discussed in light of environmental concentrations. Copyright © 2017 Elsevier B.V. All rights reserved.
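PNEC derivation of the kind mentioned above conventionally divides the lowest reliable effect concentration by an assessment factor; a sketch with hypothetical inputs (the study's own PNEC values also draw on literature and database records, so these numbers are illustrative only):

```python
def pnec_ug_per_l(effect_values_ug_per_l, assessment_factor=1000):
    """Predicted no-effect concentration from the lowest effect value.

    An assessment factor of 1000 is the conventional choice when only
    short-term EC50 data for three trophic levels are available; the
    inputs below are hypothetical, not the study's dataset.
    """
    return min(effect_values_ug_per_l) / assessment_factor

# hypothetical acute EC50 values for algae, Daphnia and fish (ug/l)
value = pnec_ug_per_l([480.0, 156.0, 950.0])   # lowest is 156.0
```

Smaller assessment factors apply when chronic no-observed-effect concentrations are available, which is one reason PNEC values differ between regulatory frameworks.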

  14. Regional modeling of large wildfires under current and potential future climates in Colorado and Wyoming, USA

    Science.gov (United States)

    West, Amanda; Kumar, Sunil; Jarnevich, Catherine S.

    2016-01-01

    Regional analysis of large wildfire potential under climate change scenarios is crucial to understanding the areas most at risk in the future, yet wildfire models are not often developed and tested at this spatial scale. We fit three historical climate suitability models for large wildfires (i.e. ≥ 400 ha) in Colorado and Wyoming using topography and decadal climate averages corresponding to wildfire occurrence at the same temporal scale. The historical models classified points of known large wildfire occurrence with high accuracies. Using a novel approach in wildfire modeling, we applied the historical models to independent climate and wildfire datasets; the resulting sensitivities were 0.75, 0.81, and 0.83 for Maxent, Generalized Linear, and Multivariate Adaptive Regression Splines models, respectively. We projected the historical models into future climate space using data from 15 global circulation models and two representative concentration pathway scenarios. Maps from these geospatial analyses can be used to evaluate the changing spatial distribution of climate suitability for large wildfires in these states. April relative humidity was the most important covariate in all models, providing insight into the climate space of large wildfires in this region. These methods incorporate monthly and seasonal climate averages at a spatial resolution relevant to land management (i.e. 1 km2) and provide a tool that can be modified for other regions of North America, or adapted for other parts of the world.

  15. Two methods for estimating limits to large-scale wind power generation.

    Science.gov (United States)

    Miller, Lee M; Brunsell, Nathaniel A; Mechem, David B; Gans, Fabian; Monaghan, Andrew J; Vautard, Robert; Keith, David W; Kleidon, Axel

    2015-09-08

    Wind turbines remove kinetic energy from the atmospheric flow, which reduces wind speeds and limits the generation rates of large wind farms. These interactions can be approximated using a vertical kinetic energy (VKE) flux method, which predicts that the maximum power generation potential is 26% of the instantaneous downward transport of kinetic energy in the preturbine climatology. We compare the energy flux method to the Weather Research and Forecasting (WRF) regional atmospheric model equipped with a wind turbine parameterization over a 10^5 km2 region in the central United States. The WRF simulations yield a maximum generation of 1.1 We⋅m^-2, whereas the VKE method reproduces the time series while underestimating the maximum generation rate by about 50%. Because VKE derives the generation limit from the preturbine climatology, potential changes in the vertical kinetic energy flux from the free atmosphere are not considered. Such changes are important at night, when WRF estimates are about twice the VKE value because wind turbines interact with the decoupled nocturnal low-level jet in this region. Daytime estimates agree to within 20% because the wind turbines induce comparatively small changes to the downward kinetic energy flux. This combination of downward transport limits and wind speed reductions explains why large-scale wind power generation in windy regions is limited to about 1 We⋅m^-2, with VKE capturing this combination in a comparatively simple way.
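The quoted numbers are internally consistent under the VKE rule: the generation ceiling is a fixed 26% of the pre-turbine downward kinetic energy flux. A back-of-envelope check (the flux value below is inferred from the quoted ceiling, not taken from the paper):

```python
def vke_generation_limit(ke_flux_w_m2, fraction=0.26):
    """Maximum areal power generation under the VKE method: a fixed
    fraction (26%) of the instantaneous downward kinetic energy flux
    of the pre-turbine flow."""
    return fraction * ke_flux_w_m2

# a downward KE flux near 4.2 W/m^2 gives the ~1.1 We/m^2 ceiling
# discussed above (4.2 is back-calculated, not a measured value)
limit = vke_generation_limit(4.2)
```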

  16. A phase transition between small- and large-field models of inflation

    International Nuclear Information System (INIS)

    Itzhaki, Nissan; Kovetz, Ely D

    2009-01-01

    We show that models of inflection point inflation exhibit a phase transition from a region in parameter space where they are of large-field type to a region where they are of small-field type. The phase transition is between a universal behavior, with respect to the initial condition, at the large-field region and non-universal behavior at the small-field region. The order parameter is the number of e-foldings. We find integer critical exponents at the transition between the two phases.

  17. Estimation of rates-across-sites distributions in phylogenetic substitution models.

    Science.gov (United States)

    Susko, Edward; Field, Chris; Blouin, Christian; Roger, Andrew J

    2003-10-01

    Previous work has shown that it is often essential to account for the variation in rates at different sites in phylogenetic models in order to avoid phylogenetic artifacts such as long branch attraction. In most current models, the gamma distribution is used for the rates-across-sites distributions and is implemented as an equal-probability discrete gamma. In this article, we introduce discrete distribution estimates with large numbers of equally spaced rate categories allowing us to investigate the appropriateness of the gamma model. With large numbers of rate categories, these discrete estimates are flexible enough to approximate the shape of almost any distribution. Likelihood ratio statistical tests and a nonparametric bootstrap confidence-bound estimation procedure based on the discrete estimates are presented that can be used to test the fit of a parametric family. We applied the methodology to several different protein data sets, and found that although the gamma model often provides a good parametric model for this type of data, rate estimates from an equal-probability discrete gamma model with a small number of categories will tend to underestimate the largest rates. In cases when the gamma model assumption is in doubt, rate estimates coming from the discrete rate distribution estimate with a large number of rate categories provide a robust alternative to gamma estimates. An alternative implementation of the gamma distribution is proposed that, for equal numbers of rate categories, is computationally more efficient during optimization than the standard gamma implementation and can provide more accurate estimates of site rates.
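The equal-probability discrete gamma discussed in this article can be sketched by taking a representative quantile from each of K equal-probability slices of the distribution. The following uses the common median-based variant (an illustrative sketch, assuming SciPy is available; it is not the authors' implementation):

```python
import numpy as np
from scipy.stats import gamma

def discrete_gamma_rates(alpha, n_categories):
    """Equal-probability discrete gamma rate categories.

    Takes the median of each of the K equal-probability slices of a
    Gamma(alpha, scale=1/alpha) distribution (mean 1), then rescales
    so the category rates average exactly 1.
    """
    q = (2 * np.arange(n_categories) + 1) / (2 * n_categories)
    rates = gamma.ppf(q, a=alpha, scale=1.0 / alpha)
    return rates * n_categories / rates.sum()

# with few categories the top rate is capped; many categories resolve
# the tail, illustrating why a coarse discretisation tends to
# underestimate the largest site rates
rates_4 = discrete_gamma_rates(0.5, 4)
rates_40 = discrete_gamma_rates(0.5, 40)
```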

  18. Searches for phenomena beyond the Standard Model at the Large ...

    Indian Academy of Sciences (India)

    Supersymmetry searches at the LHC thus focus on the channel with large missing transverse momentum and jets of high transverse momentum. No excess above the expected SM background is observed and limits are set on supersymmetric models. Figures 1 and 2 show the limits from ATLAS [11] and CMS [12]. In addition to setting limits ...

  19. Review of Dynamic Modeling and Simulation of Large Scale Belt Conveyor System

    Science.gov (United States)

    He, Qing; Li, Hong

    Belt conveyors are among the most important devices for transporting bulk solid material over long distances. Dynamic analysis is key to deciding whether a design is technically rational, safe and reliable in operation, and economically feasible. It is therefore important to study dynamic properties in order to improve efficiency and productivity and to guarantee safe, reliable and stable running of the conveyor. The dynamic research on, and applications of, large-scale belt conveyors are discussed, and the main research topics and the state of the art of dynamic research on belt conveyors are analyzed. Future work will focus on dynamic analysis, modeling and simulation of the main components and the whole system, and on nonlinear modeling, simulation and vibration analysis of large-scale conveyor systems.

  20. A 40-year accumulation dataset for Adelie Land, Antarctica and its application for model validation

    Energy Technology Data Exchange (ETDEWEB)

    Agosta, Cecile; Favier, Vincent [UJF-Grenoble 1 / CNRS, Laboratoire de Glaciologie et de Geophysique de l'Environnement UMR 5183, Saint Martin d'Heres (France); Genthon, Christophe; Gallee, Hubert; Krinner, Gerhard [CNRS / UJF-Grenoble 1, Laboratoire de Glaciologie et de Geophysique de l'Environnement UMR 5183, Saint Martin d'Heres (France); Lenaerts, Jan T.M.; Broeke, Michiel R. van den [Utrecht University, Institute for Marine and Atmospheric Research Utrecht (Netherlands)

    2012-01-15

    The GLACIOCLIM-SAMBA (GS) Antarctic accumulation monitoring network, which extends from the coast of Adelie Land to the Antarctic plateau, has been surveyed annually since 2004. The network includes a 156-km stake-line from the coast inland, along which accumulation shows high spatial and interannual variability with a mean value of 362 mm water equivalent a^-1. In this paper, this accumulation is compared with older accumulation reports from between 1971 and 1991. The mean, the annual standard deviation and the km-scale spatial pattern of accumulation are very similar in the older and more recent data, and the data do not reveal any significant accumulation trend over the last 40 years. The ECMWF analysis-based forecasts (ERA-40 and ERA-Interim), a stretched-grid global general circulation model (LMDZ4) and three regional circulation models (PMM5, MAR and RACMO2), all with high resolution over Antarctica (27-125 km), were tested against the GS reports. All except MAR qualitatively reproduced the meso-scale spatial pattern of the annual-mean accumulation; MAR significantly underestimated mean accumulation, while LMDZ4 and RACMO2 overestimated it. ERA-40 and the regional models that use ERA-40 as lateral boundary condition qualitatively reproduced the chronology of interannual variability but underestimated the magnitude of interannual variations. Two widely used climatologies for Antarctic accumulation agreed well with the mean GS data, and the model-based climatology was also able to reproduce the observed spatial pattern. These data thus provide new stringent constraints on models and other large-scale evaluations of Antarctic accumulation. (orig.)

  1. DMPy: a Python package for automated mathematical model construction of large-scale metabolic systems.

    Science.gov (United States)

    Smith, Robert W; van Rosmalen, Rik P; Martins Dos Santos, Vitor A P; Fleck, Christian

    2018-06-19

    Models of metabolism are often used in biotechnology and pharmaceutical research to identify drug targets or increase the direct production of valuable compounds. Due to the complexity of large metabolic systems, a number of conclusions have been drawn using mathematical methods with simplifying assumptions. For example, constraint-based models assume that changes in internal concentrations occur much more quickly than alterations in cell physiology, so metabolite concentrations and reaction fluxes are fixed to constant values. This greatly reduces the mathematical complexity while providing a reasonably good description of the system in steady state. However, without a large number of constraints, many different flux sets can describe the optimum, and we obtain no information on how metabolite levels change dynamically. Thus, to accurately determine what is taking place within the cell, finer-quality data and more detailed models need to be constructed. In this paper we present a computational framework, DMPy, that uses a network scheme as input to automatically search for kinetic rates and produce a mathematical model that describes temporal changes of metabolite fluxes. The parameter search utilises several online databases to find measured reaction parameters. From this, we take advantage of previous modelling efforts, such as Parameter Balancing, to produce an initial mathematical model of a metabolic pathway. We analyse the effect of parameter uncertainty on model dynamics and test how recent flux-based model reduction techniques alter system properties. To our knowledge this is the first time such analysis has been performed on large models of metabolism. Our results highlight that good estimates of at least 80% of the reaction rates are required to accurately model metabolic systems. Furthermore, reducing the size of the model by grouping reactions together based on fluxes alters the resulting system dynamics. The presented pipeline automates the
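The steady-state assumption described above constrains fluxes to the null space of the stoichiometric matrix; a toy sketch (not DMPy's pipeline) shows why many flux sets can satisfy it:

```python
import numpy as np

def steady_state_flux_basis(S, tol=1e-10):
    """Basis for {v : S v = 0}: the fluxes that keep every internal
    metabolite concentration constant, computed via SVD."""
    _, s, Vt = np.linalg.svd(S)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T    # columns span the steady-state flux space

# toy linear pathway: v1 (-> A), v2 (A -> B), v3 (B -> C), v4 (C ->)
S = np.array([
    [1, -1,  0,  0],   # metabolite A
    [0,  1, -1,  0],   # metabolite B
    [0,  0,  1, -1],   # metabolite C
], dtype=float)
basis = steady_state_flux_basis(S)   # one free direction: equal fluxes
```

Here the steady-state space is one-dimensional (all four fluxes equal), but in genome-scale networks it has hundreds of dimensions, which is why additional constraints or kinetic detail are needed to pin down the actual flux state.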

  2. Pressure fluctuation prediction in pump mode using large eddy simulation and unsteady Reynolds-averaged Navier–Stokes in a pump–turbine

    Directory of Open Access Journals (Sweden)

    De-You Li

    2016-06-01

    For pump–turbines, most instabilities are coupled with high-level pressure fluctuations, which are harmful to the pump–turbines and even the whole units. In order to understand the causes of pressure fluctuations and reduce their amplitudes, proper numerical methods should be chosen to obtain accurate results. Large eddy simulation with the wall-adapting local eddy-viscosity model was chosen to predict the pressure fluctuations in the pump mode of a pump–turbine, and compared with unsteady Reynolds-averaged Navier–Stokes using the two-equation shear stress transport k–ω turbulence model. A partial-load operating point (0.91QBEP) under a 15-mm guide vane opening was selected for a comparison of performance and frequency characteristics between large eddy simulation and unsteady Reynolds-averaged Navier–Stokes, based on experimental validation. Good agreement indicates that large eddy simulation can be applied to the simulation of pump–turbines. A detailed comparison of the variation of the peak-to-peak value through the whole passage is then presented; both methods show that the highest-level pressure fluctuations occur in the vaneless space. In addition, the propagation of the amplitudes of the blade pass frequency and its second and third harmonics in the circumferential and flow directions was investigated. Although differences exist between large eddy simulation and unsteady Reynolds-averaged Navier–Stokes, the trend of variation in the different parts is almost the same. Based on the analysis, using the same mesh (8 million elements), large eddy simulation slightly underestimates the pressure characteristics and agrees better with the experiments, while unsteady Reynolds-averaged Navier–Stokes overestimates them.

  3. Improving rainfall representation for large-scale hydrological modelling of tropical mountain basins

    Science.gov (United States)

    Zulkafli, Zed; Buytaert, Wouter; Onof, Christian; Lavado, Waldo; Guyot, Jean-Loup

    2013-04-01

    Errors in the forcing data are sometimes overlooked in hydrological studies even when they may be the most important source of uncertainty. This particularly holds true in tropical countries with short historical records of rainfall monitoring and in remote areas with sparse rain gauge networks. In such instances, alternative data such as remotely sensed precipitation from the TRMM (Tropical Rainfall Measuring Mission) satellite have been used. These provide a good spatial representation of rainfall processes but are known from the literature to contain volumetric biases that may impair the results of hydrological modelling or, worse, be compensated for during model calibration. In this study, we analysed precipitation time series from the TMPA (TRMM Multi-satellite Precipitation Analysis, version 6) against measurements from over 300 gauges in the Andes and Amazon regions of Peru and Ecuador. We found moderately good monthly correlation between the pixel and gauge pairs but a severe underestimation of rainfall amounts and wet days. The discrepancy between the time series pairs is particularly visible over the east side of the Andes and may be attributed to localized, orographically driven high-intensity rainfall, which the satellite product may have limited skill at capturing due to technical and scale issues. This consequently results in a low bias in the simulated streamflow volumes further downstream. With the recently released TMPA version 7, the biases are reduced. This work further explores several approaches to merging the two sources of rainfall measurements, each with a different spatial and temporal support, with the objective of improving the representation of rainfall in hydrological simulations. The methods used are (1) mean bias correction and (2) data assimilation using Kalman filter Bayesian updating. The results are evaluated by means of (1) a comparison of runoff ratios (the ratio of the total runoff and the total precipitation over an
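The first merging approach named above, mean bias correction, can be sketched as a simple multiplicative rescaling (an assumption about the exact scheme used; the series below are invented):

```python
import numpy as np

def mean_bias_correct(satellite, gauge):
    """Rescale a satellite rainfall series so its long-term mean matches
    the co-located gauge mean (multiplicative mean bias correction)."""
    return satellite * (gauge.mean() / satellite.mean())

# a TMPA-like series underestimating gauge totals by 40% is restored
gauge = np.array([12.0, 0.0, 5.0, 30.0, 8.0])       # mm/day, invented
satellite = 0.6 * gauge
corrected = mean_bias_correct(satellite, gauge)
```

A single scaling factor corrects total volume but not the underestimated wet-day frequency, which is one motivation for the Kalman filter updating also named in the abstract.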

  4. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai; Sang, Huiyan; Huang, Jianhua Z.

    2014-01-01

    of dataset and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov

  5. Modeling the Hydrologic Effects of Large-Scale Green Infrastructure Projects with GIS

    Science.gov (United States)

    Bado, R. A.; Fekete, B. M.; Khanbilvardi, R.

    2015-12-01

    Impervious surfaces in urban areas generate excess runoff, which in turn causes flooding, combined sewer overflows, and degradation of adjacent surface waters. Municipal environmental protection agencies have shown a growing interest in mitigating these effects with 'green' infrastructure practices that partially restore the perviousness and water holding capacity of urban centers. Assessment of the performance of current and future green infrastructure projects is hindered by the lack of adequate hydrological modeling tools; conventional techniques fail to account for the complex flow pathways of urban environments, and detailed analyses are difficult to prepare for the very large domains in which green infrastructure projects are implemented. Currently, no standard toolset exists that can rapidly and conveniently predict runoff, consequent inundations, and sewer overflows at a city-wide scale. We demonstrate how streamlined modeling techniques can be used with open-source GIS software to efficiently model runoff in large urban catchments. Hydraulic parameters and flow paths through city blocks, roadways, and sewer drains are automatically generated from GIS layers, and ultimately urban flow simulations can be executed for a variety of rainfall conditions. With this methodology, users can understand the implications of large-scale land use changes and green/gray storm water retention systems on hydraulic loading, peak flow rates, and runoff volumes.

  6. Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency

    Science.gov (United States)

    Aikens, Kurt; Craft, Kyle; Redman, Andrew

    2015-11-01

    The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable for wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero-pressure-gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall model methodology and experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed, as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.

  7. Evaluation of climate-related carbon turnover processes in global vegetation models for boreal and temperate forests.

    Science.gov (United States)

    Thurner, Martin; Beer, Christian; Ciais, Philippe; Friend, Andrew D; Ito, Akihiko; Kleidon, Axel; Lomas, Mark R; Quegan, Shaun; Rademacher, Tim T; Schaphoff, Sibyll; Tum, Markus; Wiltshire, Andy; Carvalhais, Nuno

    2017-08-01

    Turnover concepts in state-of-the-art global vegetation models (GVMs) account for various processes, but are often highly simplified and may not include an adequate representation of the dominant processes that shape vegetation carbon turnover rates in real forest ecosystems at a large spatial scale. Here, we evaluate vegetation carbon turnover processes in GVMs participating in the Inter-Sectoral Impact Model Intercomparison Project (ISI-MIP, including HYBRID4, JeDi, JULES, LPJmL, ORCHIDEE, SDGVM, and VISIT) using estimates of vegetation carbon turnover rate (k) derived from a combination of remote-sensing-based products of biomass and net primary production (NPP). We find that current model limitations lead to considerable biases in the simulated biomass and in k (severe underestimations by all models except JeDi and VISIT compared to the observation-based average k), likely contributing to an underestimation of positive feedbacks of the northern forest carbon balance to climate change caused by changes in forest mortality. A need for improved turnover concepts related to frost damage, drought, and insect outbreaks to better reproduce observation-based spatial patterns in k is identified. As direct frost damage effects on mortality are usually not accounted for in these GVMs, simulated relationships between k and winter length in boreal forests are not consistent between different regions and are strongly biased compared to the observation-based relationships. Some models show a response of k to drought in temperate forests as a result of impacts of water availability on NPP, growth efficiency or carbon-balance-dependent mortality, as well as soil or litter moisture effects on leaf turnover or fire. However, further direct drought effects such as carbon starvation (only in HYBRID4) or hydraulic failure are usually not taken into account by the investigated GVMs. While they are considered dominant large-scale mortality agents, mortality mechanisms related to insects and

  8. Potential impacts of the Deepwater Horizon oil spill on large pelagic fishes

    Science.gov (United States)

    Frias-Torres, Sarrah; Bostater, Charles R., Jr.

    2011-11-01

    Biogeographical analyses provide insights on how the Deepwater Horizon oil spill impacted large pelagic fishes. We georeferenced historical ichthyoplankton surveys and published literature to map the spawning and larval areas of bluefin tuna, swordfish, blue marlin and whale shark sightings in the Gulf of Mexico with daily satellite-derived images detecting surface oil. The oil spill covered critical areas used by large pelagic fishes. Surface oil was detected in 100% of the northernmost whale shark sightings, in 32.8% of the bluefin tuna spawning area and 38% of the blue marlin larval area. No surface oil was detected in the swordfish spawning and larval area. Our study likely underestimates the extent of the oil spill, due to satellite sensors detecting only the upper euphotic zone and the use of dispersants altering crude oil density, but provides a previously unknown spatio-temporal analysis.

  9. A Model-Model and Data-Model Comparison for the Early Eocene Hydrological Cycle

    Science.gov (United States)

    Carmichael, Matthew J.; Lunt, Daniel J.; Huber, Matthew; Heinemann, Malte; Kiehl, Jeffrey; LeGrande, Allegra; Loptson, Claire A.; Roberts, Chris D.; Sagoo, Navjit; Shields, Christine

    2016-01-01

    A range of proxy observations have recently provided constraints on how Earth's hydrological cycle responded to early Eocene climatic changes. However, comparisons of proxy data to general circulation model (GCM) simulated hydrology are limited and inter-model variability remains poorly characterised. In this work, we undertake an intercomparison of GCM-derived precipitation and P - E distributions within the extended EoMIP ensemble (Eocene Modelling Intercomparison Project; Lunt et al., 2012), which includes previously published early Eocene simulations performed using five GCMs differing in boundary conditions, model structure, and precipitation-relevant parameterisation schemes. We show that an intensified hydrological cycle, manifested in enhanced global precipitation and evaporation rates, is simulated for all Eocene simulations relative to the preindustrial conditions. This is primarily due to elevated atmospheric paleo-CO2, resulting in elevated temperatures, although the effects of differences in paleogeography and ice sheets are also important in some models. For a given CO2 level, globally averaged precipitation rates vary widely between models, largely arising from different simulated surface air temperatures. Models with a similar global sensitivity of precipitation rate to temperature (dP/dT) display different regional precipitation responses for a given temperature change. Regions that are particularly sensitive to model choice include the South Pacific, tropical Africa, and the Peri-Tethys, which may represent targets for future proxy acquisition. A comparison of early and middle Eocene leaf-fossil-derived precipitation estimates with the GCM output illustrates that GCMs generally underestimate precipitation rates at high latitudes, although a possible seasonal bias of the proxies cannot be excluded. 
Models which warm these regions, either via elevated CO2 or by varying poorly constrained model parameter values, are most successful in simulating a
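The global sensitivity dP/dT discussed in this record is, in essence, a regression slope of globally averaged precipitation rate against surface air temperature across simulations. A minimal sketch with hypothetical ensemble values (not EoMIP output):

```python
import numpy as np

def precip_sensitivity(temps, precips):
    """Least-squares slope dP/dT (here mm/day per K) across an ensemble."""
    slope, _intercept = np.polyfit(temps, precips, 1)
    return slope

# Hypothetical ensemble: global-mean warming (K) vs precipitation (mm/day)
T = np.array([2.0, 4.0, 6.0, 8.0])
P = np.array([3.0, 3.1, 3.2, 3.3])
dPdT = precip_sensitivity(T, P)  # mm/day per K
```

As the abstract notes, two models can share this single global slope yet disagree strongly on where the extra precipitation falls.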

  10. Modelling of decay heat removal using large water pools

    International Nuclear Information System (INIS)

    Munther, R.; Raussi, P.; Kalli, H.

    1992-01-01

    The main task in investigating passive safety systems typical of ALWRs (Advanced Light Water Reactors) has been the review of decay heat removal systems. The reference system for the calculations was Hitachi's SBWR concept. The calculations of energy transfer to the suppression pool were made using two different fluid mechanics codes, namely FIDAP and PHOENICS. FIDAP is based on finite element methodology and PHOENICS uses finite differences. These codes were chosen in order to compare their modelling and calculating abilities. The thermal stratification behaviour and the natural circulation were modelled with several turbulent flow models. Energy transport to the suppression pool was also calculated for laminar flow conditions. These calculations required a large amount of computer resources, so the CRAY supercomputer of the state computing centre was used. The results of the calculations indicated that the capabilities of these codes for modelling the turbulent flow regime are limited. Output from these codes should be considered carefully, and whenever possible, experimentally determined parameters should be used as input to enhance code reliability. (orig.). (31 refs., 21 figs., 3 tabs.)

  11. Large tan β in gauge-mediated SUSY-breaking models

    International Nuclear Information System (INIS)

    Rattazzi, R.

    1997-01-01

    We explore some topics in the phenomenology of gauge-mediated SUSY-breaking scenarios having a large hierarchy of Higgs VEVs, v_U/v_D = tan β ≫ 1. Some motivation for this scenario is first presented. We then use a systematic, analytic expansion (including some threshold corrections) to calculate the μ-parameter needed for proper electroweak breaking and the radiative corrections to the B-parameter, which fortuitously cancel at leading order. If B = 0 at the messenger scale, then tan β is naturally large and calculable; we calculate it. We then confront this prediction with classical and quantum vacuum stability constraints arising from the Higgs-slepton potential, and indicate the preferred values of the top quark mass and messenger scale(s). The possibility of vacuum instability in a different direction yields an upper bound on the messenger mass scale complementary to the familiar bound from gravitino relic abundance. Next, we calculate the rate for b→sγ and show the possibility of large deviations (in the direction currently favored by experiment) from standard-model and small-tan β predictions. Finally, we discuss the implications of these findings and their applicability to future, broader and more detailed investigations. (orig.)

  12. How uncertainty in socio-economic variables affects large-scale transport model forecasts

    DEFF Research Database (Denmark)

    Manzo, Stefano; Nielsen, Otto Anker; Prato, Carlo Giacomo

    2015-01-01

    A strategic task assigned to large-scale transport models is to forecast the demand for transport over long periods of time in order to assess transport projects. However, by modelling complex systems, transport models carry an inherent uncertainty which increases over time. As a consequence, the longer the period forecasted, the less reliable the forecasted model output. Describing uncertainty propagation patterns over time is therefore important in order to provide complete information to the decision makers. Among the existing literature, only few studies analyze uncertainty propagation patterns over...
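The growth of forecast uncertainty with horizon described in this record can be sketched with a Monte Carlo experiment: uncertain socio-economic growth rates compound, so the relative spread of a demand forecast widens with the forecast year. The compound-growth model below is an illustrative stand-in, not the paper's transport model, and all parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def forecast_spread(base_demand, growth_mean, growth_sd, years, n=10_000):
    """Monte Carlo spread of a compound-growth demand forecast.

    Annual growth rates are drawn independently and compound, so the
    coefficient of variation (CV) of the forecast widens with horizon.
    Returns the CV of demand for each forecast year.
    """
    g = rng.normal(growth_mean, growth_sd, size=(n, years))
    paths = base_demand * np.cumprod(1.0 + g, axis=1)
    return paths.std(axis=0) / paths.mean(axis=0)

# Hypothetical: 2% mean annual growth, 1 percentage point uncertainty
cv = forecast_spread(1000.0, 0.02, 0.01, years=30)
```

The CV roughly follows a square-root-of-time growth here, which is the "uncertainty propagation pattern" a decision maker would want reported alongside the point forecast.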

  13. Evaluation of model-simulated source contributions to tropospheric ozone with aircraft observations in the factor-projected space

    Directory of Open Access Journals (Sweden)

    Y. Yoshida

    2008-03-01

    Trace gas measurements from the TOPSE and TRACE-P experiments and corresponding global GEOS-Chem model simulations are analyzed with the Positive Matrix Factorization (PMF) method for model evaluation purposes. Specifically, we evaluate the model-simulated contributions to O3 variability from stratospheric transport, intercontinental transport, and production from urban/industry and biomass burning/biogenic sources. We select a suite of relatively long-lived tracers, including 7 chemicals (O3, NOy, PAN, CO, C3H8, CH3Cl, and 7Be) and 1 dynamic tracer (potential temperature). The largest discrepancy is found in the stratospheric contribution to 7Be. The model underestimates this contribution by a factor of 2–3, corresponding well to a reduction of the 7Be source by the same magnitude in the default setup of the standard GEOS-Chem model. In contrast, we find that the simulated O3 contributions from stratospheric transport are in reasonable agreement with those derived from the measurements. However, the springtime increasing trend over North America derived from the measurements is largely underestimated in the model, indicating that the magnitude of the simulated stratospheric O3 source is reasonable but its temporal distribution needs improvement. The simulated O3 contributions from long-range transport and production from urban/industry and biomass burning/biogenic emissions are also in reasonable agreement with those derived from the measurements, although significant discrepancies are found for some regions.
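PMF decomposes a samples-by-species concentration matrix into non-negative factor contributions and factor profiles; PMF proper additionally down-weights each residual by its measurement uncertainty. A minimal unweighted sketch using Lee-Seung multiplicative updates (illustrative only, on synthetic "tracer" data, not the study's measurements):

```python
import numpy as np

def nmf(X, k, iters=2000, seed=0, eps=1e-9):
    """Minimal non-negative matrix factorization (multiplicative updates).

    Approximates X (samples x species) by W (factor contributions) times
    H (factor profiles), all entries non-negative. PMF proper also weights
    residuals by measurement uncertainty, which this sketch omits.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + eps
    H = rng.random((k, m)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy tracer matrix built from two known non-negative "source profiles"
rng = np.random.default_rng(1)
true_W = rng.random((50, 2))
true_H = np.array([[5.0, 1.0, 0.1, 2.0],   # e.g. a "stratospheric" profile
                   [0.2, 3.0, 4.0, 1.0]])  # e.g. an "urban/industry" profile
X = true_W @ true_H
W, H = nmf(X, k=2)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

Running the same factorization on observations and on model output sampled along the flight tracks, then comparing the recovered factors, is the model-evaluation idea behind the analysis above.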

  14. Analogue scale modelling of extensional tectonic processes using a large state-of-the-art centrifuge

    Science.gov (United States)

    Park, Heon-Joon; Lee, Changyeol

    2017-04-01

    Analogue scale modelling of extensional tectonic processes such as rifting and basin opening has been conducted numerous times. Among the controlling factors, the gravitational acceleration (g) on the scale models was regarded as a constant (Earth's gravity) in most analogue model studies, and only a few studies considered larger gravitational acceleration by using a centrifuge (an apparatus generating a large centrifugal force by rotating the model at high speed). Although analogue models using a centrifuge allow a large scale-down and accelerated deformation driven by density differences, such as salt diapirs, the possible model size is mostly limited to about 10 cm. A state-of-the-art centrifuge installed at the KOCED Geotechnical Centrifuge Testing Center, Korea Advanced Institute of Science and Technology (KAIST), allows a large scale-model surface area of up to 70 by 70 cm under a maximum capacity of 240 g-tons. Using this centrifuge, we will conduct analogue scale modelling of extensional tectonic processes such as the opening of a back-arc basin. Acknowledgement: This research was supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education (grant number 2014R1A6A3A04056405).
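The appeal of a centrifuge is that a model scaled down in length by a factor N and spun at N times Earth gravity preserves prototype self-weight stresses. A sketch of the standard scaling factors (textbook geotechnical relations, not values from this study):

```python
def centrifuge_scales(N):
    """Common centrifuge scaling factors (model/prototype) at N x Earth gravity.

    With lengths scaled by 1/N and gravity by N, the self-weight stress
    rho * (N*g) * (h/N) equals the prototype stress rho * g * h, so
    stress-dependent material behaviour is reproduced at model scale.
    """
    return {
        "length": 1.0 / N,
        "gravity": float(N),
        "stress": 1.0,                 # rho * (N*g) * (h/N) is unchanged
        "diffusion_time": 1.0 / N**2,  # consolidation/diffusion processes
    }

s = centrifuge_scales(100)  # a 1:100 model spun at 100 g
```

At 100 g, a 70 cm model box therefore represents a 70 km-wide prototype region at the length scale, which is what makes basin-opening experiments feasible.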

  15. Drought Persistence Errors in Global Climate Models

    Science.gov (United States)

    Moon, H.; Gudmundsson, L.; Seneviratne, S. I.

    2018-04-01

    The persistence of drought events largely determines the severity of socioeconomic and ecological impacts, but the capability of current global climate models (GCMs) to simulate such events is subject to large uncertainties. In this study, the representation of drought persistence in GCMs is assessed by comparing state-of-the-art GCM simulations to observation-based data sets. To do so, we consider dry-to-dry transition probabilities at monthly and annual scales as estimates of drought persistence, where a dry status is defined as a negative precipitation anomaly. Though there is a substantial spread in the drought persistence bias, most of the simulations show systematic underestimation of drought persistence at the global scale. Subsequently, we analyzed to which degree (i) inaccurate observations, (ii) differences among models, (iii) internal climate variability, and (iv) uncertainty of the employed statistical methods contribute to the spread in drought persistence errors using an analysis of variance approach. The results show that at the monthly scale, model uncertainty and observational uncertainty dominate, while the contribution from internal variability is small in most cases. At the annual scale, the spread of the drought persistence error is dominated by the statistical estimation error of drought persistence, indicating that the partitioning of the error is impaired by the limited number of considered time steps. These findings reveal systematic errors in the representation of drought persistence in current GCMs and suggest directions for further model improvement.
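The dry-to-dry transition probability used here as a persistence estimate can be computed directly from a precipitation series. A sketch with the dry threshold taken as the series mean (the study's anomaly baseline may differ):

```python
import numpy as np

def dry_to_dry_probability(precip, climatology=None):
    """Estimate drought persistence as P(dry at t | dry at t-1).

    A time step is 'dry' when its precipitation anomaly (value minus the
    series mean, or a supplied climatology) is negative.
    """
    ref = precip.mean() if climatology is None else climatology
    dry = precip < ref
    prev, curr = dry[:-1], dry[1:]
    n_dry_prev = prev.sum()
    if n_dry_prev == 0:
        return np.nan
    return (prev & curr).sum() / n_dry_prev

# Alternating wet/dry series: a dry month is never followed by a dry month
alternating = np.array([0.0, 2.0] * 12)
p_alt = dry_to_dry_probability(alternating)

# Blocky series: long dry spells imply high persistence
blocky = np.array([0.0] * 12 + [2.0] * 12)
p_blocky = dry_to_dry_probability(blocky)
```

A GCM that produces too-alternating precipitation would score like the first series and underestimate persistence even if its total rainfall were unbiased, which is exactly the kind of error the study diagnoses.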

  16. Improving CASINO performance for models with large number of electrons

    International Nuclear Information System (INIS)

    Anton, L.; Alfe, D.; Hood, R.Q.; Tanqueray, D.

    2009-01-01

    Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers which are straightforward to use on parallel computers. Nevertheless, some computations have reached the limit of available memory resources for models with more than 1000 electrons because of the need to store a large amount of electronic-orbital data. Besides that, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) distributed orbital data implemented with MPI or Unix inter-process communication tools, and (2) second-level parallelism for the configuration computation

  17. Linear velocity fields in non-Gaussian models for large-scale structure

    Science.gov (United States)

    Scherrer, Robert J.

    1992-01-01

    Linear velocity fields in two types of physically motivated non-Gaussian models are examined for large-scale structure: seed models, in which the density field is a convolution of a density profile with a distribution of points, and local non-Gaussian fields, derived from a local nonlinear transformation on a Gaussian field. The distribution of a single component of the velocity is derived for seed models with randomly distributed seeds, and these results are applied to the seeded hot dark matter model and the global texture model with cold dark matter. An expression for the distribution of a single component of the velocity in arbitrary local non-Gaussian models is given, and these results are applied to such fields with chi-squared and lognormal distributions. It is shown that all seed models with randomly distributed seeds and all local non-Gaussian models have single-component velocity distributions with positive kurtosis.
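The positive-kurtosis result can be illustrated numerically: applying a local nonlinear transformation (lognormal or chi-squared) to Gaussian draws yields distributions with positive excess kurtosis, unlike the Gaussian itself. This is a sketch with synthetic values, not the paper's velocity fields:

```python
import numpy as np

def excess_kurtosis(x):
    """Fourth standardized moment minus 3 (zero for a Gaussian)."""
    z = (x - x.mean()) / x.std()
    return (z**4).mean() - 3.0

rng = np.random.default_rng(42)
g = rng.standard_normal(200_000)   # Gaussian field values
lognormal_field = np.exp(g)        # local nonlinear transform: lognormal
chi2_field = g**2                  # local nonlinear transform: chi-squared

k_gauss = excess_kurtosis(g)            # near 0
k_logn = excess_kurtosis(lognormal_field)  # strongly positive
k_chi2 = excess_kurtosis(chi2_field)       # strongly positive
```

The paper's analytic result is stronger, covering the derived velocity distributions rather than the density fields themselves, but the heavy-tailed behaviour has the same origin.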

  18. Modeling natural wetlands: A new global framework built on wetland observations

    Science.gov (United States)

    Matthews, E.; Romanski, J.; Olefeldt, D.

    2015-12-01

    Natural wetlands are the world's largest methane (CH4) source, and their distribution and CH4 fluxes are sensitive to interannual and longer-term climate variations. Wetland distributions used in wetland-CH4 models diverge widely, and these geographic differences contribute substantially to large variations in magnitude, seasonality and distribution of modeled methane fluxes. Modeling wetland type and distribution—closely tied to simulating CH4 emissions—is a high priority, particularly for studies of wetlands and CH4 dynamics under past and future climates. Methane-wetland models either prescribe or simulate methane-producing areas (aka wetlands) and both approaches result in predictable over- and under-estimates. 1) Monthly satellite-derived inundation data include flooded areas that are not wetlands (e.g., lakes, reservoirs, and rivers), and do not identify non-flooded wetlands. 2) Models simulating methane-producing areas overwhelmingly rely on modeled soil moisture, systematically over-estimating total global area, with regional over- and under-estimates, while schemes to model soil-moisture typically cannot account for positive water tables (i.e., flooding). Interestingly, while these distinct hydrological approaches to identify wetlands are complementary, merging them does not provide critical data needed to model wetlands for methane studies. We present a new integrated framework for modeling wetlands, and ultimately their methane emissions, that exploits the extensive body of data and information on wetlands. The foundation of the approach is an existing global gridded data set comprising all and only wetlands, including vegetation information. This data set is augmented with data inter alia on climate, inundation dynamics, soil type and soil carbon, permafrost, active-layer depth, growth form, and species composition. We investigate this enhanced wetland data set to identify which variables best explain occurrence and characteristics of observed

  19. Evaluation of sub grid scale and local wall models in Large-eddy simulations of separated flow

    Directory of Open Access Journals (Sweden)

    Sam Ali Al

    2015-01-01

    The performance of Sub-Grid Scale models is studied by simulating separated flow over a wavy channel. The first- and second-order statistical moments of the resolved velocities, obtained from Large-Eddy Simulations at different mesh resolutions, are compared with Direct Numerical Simulation data. The effectiveness of modeling the wall stresses using a local log-law is then tested on a relatively coarse grid. The results exhibit good agreement between highly resolved Large-Eddy Simulations and Direct Numerical Simulation data regardless of the Sub-Grid Scale model. However, the agreement is less satisfactory on the relatively coarse grid without any wall model, and the differences between Sub-Grid Scale models become distinguishable. Using the local wall model recovered the basic flow topology and significantly reduced the differences between the coarse-mesh Large-Eddy Simulations and Direct Numerical Simulation data. The results show that the ability of the local wall model to predict the separation zone depends strongly on how it is implemented.

  20. Automatic Generation of Connectivity for Large-Scale Neuronal Network Models through Structural Plasticity.

    Science.gov (United States)

    Diaz-Pier, Sandra; Naveau, Mikaël; Butz-Ostendorf, Markus; Morrison, Abigail

    2016-01-01

    With the emergence of new high performance computation technology in the last decade, the simulation of large-scale neural networks which are able to reproduce the behavior and structure of the brain has finally become an achievable target of neuroscience. Due to the number of synaptic connections between neurons and the complexity of biological networks, most contemporary models have manually defined or static connectivity. However, it is expected that modeling the dynamic generation and deletion of the links among neurons, locally and between different regions of the brain, is crucial to unravel important mechanisms associated with learning, memory and healing. Moreover, for many neural circuits that could potentially be modeled, activity data is more readily and reliably available than connectivity data. Thus, a framework that enables networks to wire themselves on the basis of specified activity targets can be of great value in specifying network models where connectivity data is incomplete or has large error margins. To address these issues, in the present work we present an implementation of a model of structural plasticity in the neural network simulator NEST. In this model, synapses consist of two parts, a pre- and a post-synaptic element. Synapses are created and deleted during the execution of the simulation following local homeostatic rules until a mean level of electrical activity is reached in the network. We assess the scalability of the implementation in order to evaluate its potential usage in the self-generation of connectivity of large-scale networks. We show and discuss the results of simulations on simple two-population networks and more complex models of the cortical microcircuit involving 8 populations and 4 layers using the new framework.
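The homeostatic rule described above, creating synaptic elements when activity is below target and deleting them when above, can be caricatured in a few lines. This is an illustrative sketch, not the NEST implementation, and `grow_to_target` is a hypothetical name:

```python
import numpy as np

def grow_to_target(rates, elements, target, gain=0.1):
    """One homeostatic update of per-neuron synaptic-element counts.

    Neurons below their target activity grow elements (seeking connections),
    neurons above it delete elements; at the target, counts are stable.
    Counts are clipped at zero since element numbers cannot go negative.
    """
    return np.maximum(elements + gain * (target - rates), 0.0)

rates = np.array([2.0, 5.0, 8.0])       # firing rates (Hz) per neuron
elements = np.array([10.0, 10.0, 10.0])  # current synaptic elements
updated = grow_to_target(rates, elements, target=5.0)
```

In the full model, free pre- and post-synaptic elements are then paired into synapses, which is what lets activity targets alone drive network wiring.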

  1. Efficient trawl avoidance by mesopelagic fishes causes large underestimation of their biomass

    KAUST Repository

    Kaartvedt, Stein; Staby, A; Aksnes, Dag L.

    2012-01-01

    Mesopelagic fishes occur in all the world’s oceans, but their abundance and consequently their ecological significance remains uncertain. The current global estimate based on net sampling prior to 1980 suggests a global abundance of one gigatonne

  2. Pathology economic model tool: a novel approach to workflow and budget cost analysis in an anatomic pathology laboratory.

    Science.gov (United States)

    Muirhead, David; Aoun, Patricia; Powell, Michael; Juncker, Flemming; Mollerup, Jens

    2010-08-01

    The need for higher efficiency, maximum quality, and faster turnaround time is a continuous focus for anatomic pathology laboratories and drives changes in work scheduling, instrumentation, and management control systems. To determine the costs of generating routine, special, and immunohistochemical microscopic slides in a large, academic anatomic pathology laboratory using a top-down approach. The Pathology Economic Model Tool was used to analyze workflow processes at The Nebraska Medical Center's anatomic pathology laboratory. Data from the analysis were used to generate complete cost estimates, which included not only materials, consumables, and instrumentation but also specific labor and overhead components for each of the laboratory's subareas. The cost data generated by the Pathology Economic Model Tool were compared with the cost estimates generated using relative value units. Despite the use of automated systems for different processes, the workflow in the laboratory was found to be relatively labor intensive. The effect of labor and overhead on per-slide costs was significantly underestimated by traditional relative-value unit calculations when compared with the Pathology Economic Model Tool. Specific workflow defects with significant contributions to the cost per slide were identified. The cost of providing routine, special, and immunohistochemical slides may be significantly underestimated by traditional methods that rely on relative value units. Furthermore, a comprehensive analysis may identify specific workflow processes requiring improvement.
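The top-down idea, folding labor and overhead into the per-slide figure rather than pricing materials alone, reduces to simple arithmetic. All figures below are hypothetical, not the Nebraska laboratory's data:

```python
def cost_per_slide(materials, labor_hours, labor_rate, overhead, n_slides):
    """Top-down per-slide cost for one laboratory subarea and period.

    Traditional relative-value-unit (RVU) pricing often flattens the labor
    and overhead terms, understating the true per-slide cost.
    """
    total = materials + labor_hours * labor_rate + overhead
    return total / n_slides

# Hypothetical monthly figures for a histology subarea producing 2000 slides
full = cost_per_slide(materials=4000.0, labor_hours=320.0,
                      labor_rate=30.0, overhead=6400.0, n_slides=2000)
materials_only = cost_per_slide(4000.0, 0.0, 30.0, 0.0, 2000)
```

With these made-up numbers the complete cost is five times the materials-only figure, which is the direction of the underestimation the abstract reports for RVU-based estimates.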

  3. Large animals as potential models of human mental and behavioral disorders.

    Science.gov (United States)

    Danek, Michał; Danek, Janusz; Araszkiewicz, Aleksander

    2017-12-30

    Many animal models in different species have been developed for mental and behavioral disorders. This review presents large animals (dog, sheep, swine, horse) as potential models of these disorders. The article was based on research published in peer-reviewed journals; a literature search was carried out using the PubMed database. The issues are discussed in several problem groups in accordance with the WHO International Statistical Classification of Diseases and Related Health Problems, 10th Revision (ICD-10), in particular regarding: organic, including symptomatic, mental disorders (Alzheimer's disease and Huntington's disease, pernicious anemia and hepatic encephalopathy, epilepsy, Parkinson's disease, Creutzfeldt-Jakob disease); behavioral disorders due to psychoactive substance use (alcoholic intoxication, abuse of morphine); schizophrenia and other schizotypal disorders (puerperal psychosis); mood (affective) disorders (depressive episode); neurotic, stress-related and somatoform disorders (post-traumatic stress disorder, obsessive-compulsive disorder); behavioral syndromes associated with physiological disturbances and physical factors (anxiety disorders, anorexia nervosa, narcolepsy); mental retardation (Cohen syndrome, Down syndrome, Hunter syndrome); and behavioral and emotional disorders (attention deficit hyperactivity disorder). These data indicate many large-animal disorders that can serve as models for examining the above human mental and behavioral disorders.

  4. Modelling of large sodium fires: A coupled experimental and calculational approach

    International Nuclear Information System (INIS)

    Astegiano, J.C.; Balard, F.; Cartier, L.; De Pascale, C.; Forestier, A.; Merigot, C.; Roubin, P.; Tenchine, D.; Bakouta, N.

    1996-01-01

    The consequences of large sodium leaks in the secondary circuit of Super-Phenix have been studied mainly with the FEUMIX code, on the basis of sodium fire experiments. This paper presents the status of the coupled AIRBUS (water experiment) FEUMIX approach, under development in order to strengthen the extrapolation made in the Super-Phenix secondary circuit calculations for large leakage flows. FEUMIX is a point code based on the concept of a global interfacial area between sodium and air; mass and heat transfer through this global area are assumed to be similar. The global interfacial transfer coefficient Sih is therefore an important parameter of the model. Correlations for the interfacial area are extracted from a large number of sodium tests. For the studies of a hypothetical large sodium leak in the secondary circuit of Super-Phenix, flow rates of more than 1 t/s have been considered and an extrapolation was made from the existing results (maximum flow rate 225 kg/s). In order to strengthen the extrapolation, water tests were devised on the basis of a thermal-hydraulic similarity. The principle is to measure the interfacial area of a hot water jet in air, then to transpose the Sih to sodium without combustion, and to use this value in FEUMIX with combustion modelling. The AIRBUS test section is a parallelepipedic gastight tank, 106 m³ (5.7 × 3.7 × 5 m), internally insulated. The water jet is injected from a heated external auxiliary tank into the cell using a pressurized air tank and a dedicated valve. The main measurements performed during each test are the injected flow rate, air pressure, water temperature and gas temperature. A first series of tests was performed in order to qualify the methodology: typical FCA and IGNA sodium fire tests were reproduced in AIRBUS, and a comparison with the FEUMIX calculation using the Sih value deduced from the water experiments shows satisfactory agreement. 
A second series of tests for large flow rates, corresponding to a large sodium leak in the secondary circuit of Super

  5. Atmospheric Dust Modeling from Meso to Global Scales with the Online NMMB/BSC-Dust Model Part 2: Experimental Campaigns in Northern Africa

    Science.gov (United States)

    Haustein, K.; Perez, C.; Baldasano, J. M.; Jorba, O.; Basart, S.; Miller, R. L.; Janjic, Z.; Black, T.; Nickovic, S.; Todd, M. C.

    2012-01-01

    The new NMMB/BSC-Dust model is intended to provide short to medium-range weather and dust forecasts from regional to global scales. It is an online model in which the dust aerosol dynamics and physics are solved at each model time step. The companion paper (Perez et al., 2011) develops the dust model parameterizations and provides daily to annual evaluations of the model for its global and regional configurations. Modeled aerosol optical depth (AOD) was evaluated against AERONET Sun photometers over Northern Africa, Middle East and Europe with correlations around 0.6-0.7 on average without dust data assimilation. In this paper we analyze in detail the behavior of the model using data from the Saharan Mineral dUst experiment (SAMUM-1) in 2006 and the Bodele Dust Experiment (BoDEx) in 2005. AOD from satellites and Sun photometers, vertically resolved extinction coefficients from lidars and particle size distributions at the ground and in the troposphere are used, complemented by wind profile data and surface meteorological measurements. All simulations were performed at the regional scale for the Northern African domain at the expected operational horizontal resolution of 25 km. Model results for SAMUM-1 generally show good agreement with satellite data over the most active Saharan dust sources. The model reproduces the AOD from Sun photometers close to sources and after long-range transport, and the dust size spectra at different height levels. At this resolution, the model is not able to reproduce a large haboob that occurred during the campaign. Some deficiencies are found concerning the vertical dust distribution related to the representation of the mixing height in the atmospheric part of the model. For the BoDEx episode, we found the diurnal temperature cycle to be strongly dependent on soil moisture, which is underestimated in the NCEP analysis used for model initialization. The low level jet (LLJ) and the dust AOD over the Bodélé are well reproduced

  6. An underestimated role of precipitation frequency in regulating summer soil moisture

    International Nuclear Information System (INIS)

    Wu Chaoyang; Chen, Jing M; Pumpanen, Jukka; Cescatti, Alessandro; Marcolla, Barbara; Blanken, Peter D; Ardö, Jonas; Tang, Yanhong; Magliulo, Vincenzo; Georgiadis, Teodoro; Soegaard, Henrik; Cook, David R; Harding, Richard J

    2012-01-01

    Soil moisture-induced droughts are expected to become more frequent under future global climate change. Precipitation has previously been assumed to be mainly responsible for variability in summer soil moisture. However, little is known about the impacts of precipitation frequency on summer soil moisture, either interannually or spatially. To better understand the temporal and spatial drivers of summer drought, 415 site-years of measurements from 75 flux sites worldwide were used to analyze the temporal and spatial relationships between summer soil water content (SWC) and precipitation frequencies at various temporal scales, i.e., derived from half-hourly, 3, 6, 12 and 24 h measurements. Summer precipitation was found to be an indicator of interannual SWC variability with r of 0.49 (p < 0.001) for the overall dataset. However, interannual variability in summer SWC was also significantly correlated with the five precipitation frequencies, and the sub-daily precipitation frequencies seemed to explain the interannual SWC variability better than total precipitation. Spatially, all these precipitation frequencies were better indicators of summer SWC than precipitation totals, but these better performances were only observed in non-forest ecosystems. Our results demonstrate that precipitation frequency may play an important role in regulating both interannual and spatial variations of summer SWC, which has probably been overlooked or underestimated. However, the spatial interpretation should carefully consider other factors, such as the plant functional types and soil characteristics of diverse ecoregions. (letter)
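
    As a sketch of how such frequencies might be derived, the following computes the fraction of wet intervals at several aggregation windows from a half-hourly precipitation series. The wet threshold and the synthetic series are assumptions for illustration, not the paper's actual processing chain:

```python
import numpy as np

def precip_frequency(precip_halfhourly, window_hours, wet_threshold=0.1):
    """Fraction of aggregation windows whose precipitation total exceeds
    a wet threshold (mm). `precip_halfhourly` is a 1-D array of
    half-hourly totals; `window_hours` is the aggregation period."""
    steps = int(window_hours * 2)            # half-hourly steps per window
    n = len(precip_halfhourly) // steps * steps
    windows = precip_halfhourly[:n].reshape(-1, steps).sum(axis=1)
    return float(np.mean(windows > wet_threshold))

# Synthetic summer: one 2 mm event per day over 10 days of half-hourly data
series = np.zeros(10 * 48)
series[::48] = 2.0
freqs = {h: precip_frequency(series, h) for h in (0.5, 3, 6, 12, 24)}
```

    The same rainfall total yields very different frequencies at different windows, which is why the sub-daily frequencies can carry information that the precipitation total does not.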

  7. Vast underestimation of Madagascar's biodiversity evidenced by an integrative amphibian inventory.

    Science.gov (United States)

    Vieites, David R; Wollenberg, Katharina C; Andreone, Franco; Köhler, Jörn; Glaw, Frank; Vences, Miguel

    2009-05-19

    Amphibians are in decline worldwide. However, their patterns of diversity, especially in the tropics, are not well understood, mainly because of incomplete information on taxonomy and distribution. We assess morphological, bioacoustic, and genetic variation of Madagascar's amphibians, one of the first near-complete taxon samplings from a biodiversity hotspot. Based on DNA sequences of 2,850 specimens sampled from over 170 localities, our analyses reveal an extreme proportion of amphibian diversity, projecting an almost 2-fold increase in species numbers from the currently described 244 species to a minimum of 373 and up to 465. This diversity is widespread geographically and across most major phylogenetic lineages except in a few previously well-studied genera, and is not restricted to morphologically cryptic clades. We classify the genealogical lineages in confirmed and unconfirmed candidate species or deeply divergent conspecific lineages based on concordance of genetic divergences with other characters. This integrative approach may be widely applicable to improve estimates of organismal diversity. Our results suggest that in Madagascar the spatial pattern of amphibian richness and endemism must be revisited, and current habitat destruction may be affecting more species than previously thought, in amphibians as well as in other animal groups. This case study suggests that worldwide tropical amphibian diversity is probably underestimated at an unprecedented level and stresses the need for integrated taxonomic surveys as a basis for prioritizing conservation efforts within biodiversity hotspots.

  8. An Axiomatic Analysis Approach for Large-Scale Disaster-Tolerant Systems Modeling

    Directory of Open Access Journals (Sweden)

    Theodore W. Manikas

    2011-02-01

    Full Text Available Disaster tolerance in computing and communications systems refers to the ability to maintain a degree of functionality throughout the occurrence of a disaster. We accomplish the incorporation of disaster tolerance within a system by simulating various threats to the system operation and identifying areas for system redesign. Unfortunately, extremely large systems are not amenable to comprehensive simulation studies due to the large computational complexity requirements. To address this limitation, an axiomatic approach that decomposes a large-scale system into smaller subsystems is developed that allows the subsystems to be independently modeled. This approach is implemented using a data communications network system example. The results indicate that the decomposition approach produces simulation responses that are similar to the full system approach, but with greatly reduced simulation time.

  9. Flexible non-linear predictive models for large-scale wind turbine diagnostics

    DEFF Research Database (Denmark)

    Bach-Andersen, Martin; Rømer-Odgaard, Bo; Winther, Ole

    2017-01-01

    We demonstrate how flexible non-linear models can provide accurate and robust predictions on turbine component temperature sensor data using data-driven principles and only a minimum of system modeling. The merits of different model architectures are evaluated using data from a large set...... of turbines operating under diverse conditions. We then go on to test the predictive models in a diagnostic setting, where the output of the models are used to detect mechanical faults in rotor bearings. Using retrospective data from 22 actual rotor bearing failures, the fault detection performance...... of the models are quantified using a structured framework that provides the metrics required for evaluating the performance in a fleet wide monitoring setup. It is demonstrated that faults are identified with high accuracy up to 45 days before a warning from the hard-threshold warning system....

  10. Model checking methodology for large systems, faults and asynchronous behaviour. SARANA 2011 work report

    International Nuclear Information System (INIS)

    Lahtinen, J.; Launiainen, T.; Heljanko, K.; Ropponen, J.

    2012-01-01

    Digital instrumentation and control (I and C) systems are challenging to verify. They enable complicated control functions, and the state spaces of the models easily become too large for comprehensive verification through traditional methods. Model checking is a formal method that can be used for system verification. A number of efficient model checking systems are available that provide analysis tools to determine automatically whether a given state machine model satisfies the desired safety properties. This report reviews the work performed in the Safety Evaluation and Reliability Analysis of Nuclear Automation (SARANA) project in 2011 regarding model checking. We have developed new, more exact modelling methods that are able to capture the behaviour of a system more realistically. In particular, we have developed more detailed fault models depicting the hardware configuration of a system, and methodology to model function-block-based systems asynchronously. In order to improve the usability of our model checking methods, we have developed an algorithm for model checking large modular systems. The algorithm can be used to verify properties of a model that could otherwise not be verified in a straightforward manner. (orig.)

  11. Model checking methodology for large systems, faults and asynchronous behaviour. SARANA 2011 work report

    Energy Technology Data Exchange (ETDEWEB)

    Lahtinen, J. [VTT Technical Research Centre of Finland, Espoo (Finland); Launiainen, T.; Heljanko, K.; Ropponen, J. [Aalto Univ., Espoo (Finland). Dept. of Information and Computer Science

    2012-07-01

    Digital instrumentation and control (I and C) systems are challenging to verify. They enable complicated control functions, and the state spaces of the models easily become too large for comprehensive verification through traditional methods. Model checking is a formal method that can be used for system verification. A number of efficient model checking systems are available that provide analysis tools to determine automatically whether a given state machine model satisfies the desired safety properties. This report reviews the work performed in the Safety Evaluation and Reliability Analysis of Nuclear Automation (SARANA) project in 2011 regarding model checking. We have developed new, more exact modelling methods that are able to capture the behaviour of a system more realistically. In particular, we have developed more detailed fault models depicting the hardware configuration of a system, and methodology to model function-block-based systems asynchronously. In order to improve the usability of our model checking methods, we have developed an algorithm for model checking large modular systems. The algorithm can be used to verify properties of a model that could otherwise not be verified in a straightforward manner. (orig.)

  12. Modelling the fate of persistent organic pollutants in Europe: parameterisation of a gridded distribution model

    International Nuclear Information System (INIS)

    Prevedouros, Konstantinos; MacLeod, Matthew; Jones, Kevin C.; Sweetman, Andrew J.

    2004-01-01

    A regionally segmented multimedia fate model for the European continent is described together with an illustrative steady-state case study examining the fate of γ-HCH (lindane) based on 1998 emission data. The study builds on the regionally segmented BETR North America model structure and describes the regional segmentation and parameterisation for Europe. The European continent is described by a 5 deg. x 5 deg. grid, leading to 50 regions together with four perimetric boxes representing regions buffering the European environment. Each zone comprises seven compartments: upper and lower atmosphere, soil, vegetation, fresh water, sediment and coastal water. Inter-region flows of air and water are described, exploiting information originating from GIS databases and other georeferenced data. The model is primarily designed to describe the fate of Persistent Organic Pollutants (POPs) within the European environment by examining chemical partitioning and degradation in each region, and inter-region transport either under steady-state conditions or fully dynamically. A test case scenario is presented which examines the fate of estimated spatially resolved atmospheric emissions of lindane throughout Europe within the lower atmosphere and surface soil compartments. In accordance with the predominant wind direction in Europe, the model predicts high concentrations close to the major sources as well as towards Central and Northeast regions. Elevated soil concentrations in Scandinavian soils provide further evidence of the potential of increased scavenging by forests and subsequent accumulation by organic-rich terrestrial surfaces. Initial model predictions have revealed a factor of 5-10 underestimation of lindane concentrations in the atmosphere. This is explained by an underestimation of source strength and/or an underestimation of European background levels. 
The model presented can further be used to predict deposition fluxes and chemical inventories, and it
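
    The core computation in such a steady-state fate model, solving coupled regional mass balances, can be sketched in a minimal two-region, single-compartment form. All rate constants and emissions below are illustrative values, not the BETR-Europe parameterisation:

```python
import numpy as np

# Hypothetical 2-region, single-compartment steady-state mass balance.
# k_deg: first-order degradation rate (1/h); k_adv[i][j]: advective
# transfer rate from region i to region j (1/h); E: emission (kg/h).
k_deg = np.array([0.01, 0.02])
k_adv = np.array([[0.0, 0.005],
                  [0.003, 0.0]])
E = np.array([1.0, 0.0])

# Build the balance matrix A so that A @ m = E at steady state:
# losses (degradation + outflow) on the diagonal, inflows off-diagonal.
loss = np.diag(k_deg + k_adv.sum(axis=1))
gain = k_adv.T
A = loss - gain
m = np.linalg.solve(A, E)  # steady-state masses (kg)
```

    At steady state the total degradation flux balances the total emission, which provides a simple consistency check on the solution.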

  13. Absorption and scattering coefficient dependence of laser-Doppler flowmetry models for large tissue volumes

    International Nuclear Information System (INIS)

    Binzoni, T; Leung, T S; Ruefenacht, D; Delpy, D T

    2006-01-01

    Based on quasi-elastic scattering theory (and random walk on a lattice approach), a model of laser-Doppler flowmetry (LDF) has been derived which can be applied to measurements in large tissue volumes (e.g. when the interoptode distance is >30 mm). The model holds for a semi-infinite medium and takes into account the transport-corrected scattering coefficient and the absorption coefficient of the tissue, and the scattering coefficient of the red blood cells. The model holds for anisotropic scattering and for multiple scattering of the photons by the moving scatterers of finite size. In particular, it has also been possible to take into account the simultaneous presence of both Brownian and pure translational movements. An analytical and simplified version of the model has also been derived and its validity investigated, for the case of measurements in human skeletal muscle tissue. It is shown that at large optode spacing it is possible to use the simplified model, taking into account only a 'mean' light pathlength, to predict the blood flow related parameters. It is also demonstrated that the 'classical' blood volume parameter, derived from LDF instruments, may not represent the actual blood volume variations when the investigated tissue volume is large. The simplified model does not need knowledge of the tissue optical parameters and thus should allow the development of very simple and cost-effective LDF hardware

  14. Oligopolistic competition in wholesale electricity markets: Large-scale simulation and policy analysis using complementarity models

    Science.gov (United States)

    Helman, E. Udi

    This dissertation conducts research into the large-scale simulation of oligopolistic competition in wholesale electricity markets. The dissertation has two parts. Part I is an examination of the structure and properties of several spatial, or network, equilibrium models of oligopolistic electricity markets formulated as mixed linear complementarity problems (LCP). Part II is a large-scale application of such models to the electricity system that encompasses most of the United States east of the Rocky Mountains, the Eastern Interconnection. Part I consists of Chapters 1 to 6. The models developed in this part continue research into mixed LCP models of oligopolistic electricity markets initiated by Hobbs [67] and subsequently developed by Metzler [87] and Metzler, Hobbs and Pang [88]. Hobbs' central contribution is a network market model with Cournot competition in generation and a price-taking spatial arbitrage firm that eliminates spatial price discrimination by the Cournot firms. In one variant, the solution to this model is shown to be equivalent to the "no arbitrage" condition in a "pool" market, in which a Regional Transmission Operator optimizes spot sales such that the congestion price between two locations is exactly equivalent to the difference in the energy prices at those locations (commonly known as locational marginal pricing). Extensions to this model are presented in Chapters 5 and 6. One of these is a market model with a profit-maximizing arbitrage firm. This model is structured as a mathematical program with equilibrium constraints (MPEC), but due to the linearity of its constraints, can be solved as a mixed LCP. Part II consists of Chapters 7 to 12. The core of these chapters is a large-scale simulation of the U.S. Eastern Interconnection applying one of the Cournot competition with arbitrage models. 
This is the first oligopolistic equilibrium market model to encompass the full Eastern Interconnection with a realistic network representation (using

  15. Induction of continuous expanding infrarenal aortic aneurysms in a large porcine animal model

    DEFF Research Database (Denmark)

    Kloster, Brian Ozeraitis; Lund, Lars; Lindholt, Jes S.

    2015-01-01

    Background: A large animal model with a continuously expanding infrarenal aortic aneurysm gives access to a more realistic AAA model with anatomy and physiology similar to humans, and thus allows for new experimental research into the natural history and treatment options of the disease. Methods: 10 pigs...

  16. Fast sampling from a Hidden Markov Model posterior for large data

    DEFF Research Database (Denmark)

    Bonnevie, Rasmus; Hansen, Lars Kai

    2014-01-01

    Hidden Markov Models are of interest in a broad set of applications including modern data driven systems involving very large data sets. However, approximate inference methods based on Bayesian averaging are precluded in such applications as each sampling step requires a full sweep over the data...

  17. A large-scale multi-species spatial depletion model for overwintering waterfowl

    NARCIS (Netherlands)

    Baveco, J.M.; Kuipers, H.; Nolet, B.A.

    2011-01-01

    In this paper, we develop a model to evaluate the capacity of accommodation areas for overwintering waterfowl, at a large spatial scale. Each day geese are distributed over roosting sites. Based on the energy minimization principle, the birds daily decide which surrounding fields to exploit within

  18. Large-Signal Code TESLA: Improvements in the Implementation and in the Model

    National Research Council Canada - National Science Library

    Chernyavskiy, Igor A; Vlasov, Alexander N; Anderson, Jr., Thomas M; Cooke, Simon J; Levush, Baruch; Nguyen, Khanh T

    2006-01-01

    We describe the latest improvements made in the large-signal code TESLA, which include transformation of the code to a Fortran-90/95 version with dynamical memory allocation and extension of the model...

  19. Simple Model for Simulating Characteristics of River Flow Velocity in Large Scale

    Directory of Open Access Journals (Sweden)

    Husin Alatas

    2015-01-01

    Full Text Available We propose a simple computer-based phenomenological model to simulate the characteristics of river flow velocity at large scale. We use a Shuttle Radar Topography Mission-based digital elevation model in grid form to define the terrain of the catchment area. The model relies on the mass-momentum conservation law and a modified equation of motion of a falling body on an inclined plane. We assume an inelastic collision occurs at every junction of two river branches to describe the dynamics of the merged flow velocity.
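
    The inelastic-collision rule for the merged flow amounts to conserving momentum flux across the junction. A minimal sketch with discharge-weighted branch velocities (the function name and weighting are assumptions, not the authors' code):

```python
def merged_velocity(q1, v1, q2, v2):
    """Velocity after an inelastic merge of two river branches,
    conserving momentum flux. q1, q2 are branch mass (or discharge)
    rates; v1, v2 the branch velocities."""
    return (q1 * v1 + q2 * v2) / (q1 + q2)
```

    Equal discharges give the arithmetic mean of the branch velocities, while a dominant branch pulls the merged velocity toward its own.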

  20. Computational Modeling of Cultural Dimensions in Adversary Organizations

    Science.gov (United States)

    2010-01-01

    theatre of operations. [The remainder of this record is table-of-contents and table residue from the source document, referencing Chapter 5, Adversary Modeling Applications, and Section 5.1, Modeling Uncertainty in Adversary Behavior.]

  1. Regional climate model sensitivity to domain size

    Science.gov (United States)

    Leduc, Martin; Laprise, René

    2009-05-01

    Regional climate models are increasingly used to add small-scale features that are not present in their lateral boundary conditions (LBC). It is well known that the limited area over which a model is integrated must be large enough to allow the full development of small-scale features. On the other hand, integrations on very large domains have shown important departures from the driving data, unless large scale nudging is applied. The issue of domain size is studied here by using the “perfect model” approach. This method consists first of generating a high-resolution climatic simulation, nicknamed big brother (BB), over a large domain of integration. The next step is to degrade this dataset with a low-pass filter emulating the usual coarse-resolution LBC. The filtered nesting data (FBB) are hence used to drive a set of four simulations (LBs for Little Brothers), with the same model, but on progressively smaller domain sizes. The LB statistics for a climate sample of four winter months are compared with BB over a common region. The time average (stationary) and transient-eddy standard deviation patterns of the LB atmospheric fields generally improve in terms of spatial correlation with the reference (BB) when domain gets smaller. The extraction of the small-scale features by using a spectral filter allows detecting important underestimations of the transient-eddy variability in the vicinity of the inflow boundary, which can penalize the use of small domains (less than 100 × 100 grid points). The permanent “spatial spin-up” corresponds to the characteristic distance that the large-scale flow needs to travel before developing small-scale features. The spin-up distance tends to grow in size at higher levels in the atmosphere.
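
    The degradation step of the perfect-model approach, emulating coarse-resolution LBC with a low-pass filter, can be sketched with a simple spectral cutoff. The cutoff convention here is illustrative, not the exact Big-Brother filter:

```python
import numpy as np

def lowpass_2d(field, keep_fraction=0.25):
    """Emulate coarse-resolution driving data by zeroing Fourier modes
    above a cutoff; keep_fraction is the retained fraction of the
    Nyquist wavenumber in each direction."""
    F = np.fft.fft2(field)
    ny, nx = field.shape
    ky = np.fft.fftfreq(ny)           # cycles per grid point
    kx = np.fft.fftfreq(nx)
    mask = (np.abs(ky)[:, None] <= 0.5 * keep_fraction) & \
           (np.abs(kx)[None, :] <= 0.5 * keep_fraction)
    return np.real(np.fft.ifft2(F * mask))
```

    Driving a nested simulation with such filtered fields, then comparing its small scales against the unfiltered reference, isolates the effect of domain size from all other model differences.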

  2. Large Scale Computing for the Modelling of Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon

    organization of the brain in continuously increasing resolution. From these images, networks of structural and functional connectivity can be constructed. Bayesian stochastic block modelling provides a prominent data-driven approach for uncovering the latent organization, by clustering the networks into groups...... of neurons. Relying on Markov Chain Monte Carlo (MCMC) simulations as the workhorse in Bayesian inference however poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome...... these computational limitations and apply Bayesian stochastic block models for un-supervised data-driven clustering of whole-brain connectivity in full image resolution. We implement high-performance software that allows us to efficiently apply stochastic blockmodelling with MCMC sampling on large complex networks...

  3. Outbreaks associated to large open air festivals, including music festivals, 1980 to 2012.

    Science.gov (United States)

    Botelho-Nevers, E; Gautret, P

    2013-03-14

    In the minds of many, large scale open air festivals have become associated with spring and summer, attracting many people, and in the case of music festivals, thousands of music fans. These festivals share the usual health risks associated with large mass gatherings, including transmission of communicable diseases and risk of outbreaks. Large scale open air festivals have however specific characteristics, including outdoor settings, on-site housing and food supply and the generally young age of the participants. Outbreaks at large scale open air festivals have been caused by Cryptosporidium parvum, Campylobacter spp., Escherichia coli, Salmonella enterica, Shigella sonnei, Staphylococcus aureus, hepatitis A virus, influenza virus, measles virus, mumps virus and norovirus. Faecal-oral and respiratory transmissions of pathogens result from non-compliance with hygiene rules, inadequate sanitation and insufficient vaccination coverage. Sexual transmission of infectious diseases may also occur and is likely to be underestimated and underreported. Enhanced surveillance during and after festivals is essential. Preventive measures such as immunisations of participants and advice on-site and via social networks should be considered to reduce outbreaks at these large scale open air festivals.

  4. Intercomparison of model simulations of mixed-phase clouds observed during the ARM Mixed-Phase Arctic Cloud Experiment. Part I: Single layer cloud

    Energy Technology Data Exchange (ETDEWEB)

    Klein, Stephen A.; McCoy, Renata B.; Morrison, Hugh; Ackerman, Andrew S.; Avramov, Alexander; de Boer, Gijs; Chen, Mingxuan; Cole, Jason N.S.; Del Genio, Anthony D.; Falk, Michael; Foster, Michael J.; Fridlind, Ann; Golaz, Jean-Christophe; Hashino, Tempei; Harrington, Jerry Y.; Hoose, Corinna; Khairoutdinov, Marat F.; Larson, Vincent E.; Liu, Xiaohong; Luo, Yali; McFarquhar, Greg M.; Menon, Surabi; Neggers, Roel A. J.; Park, Sungsu; Poellot, Michael R.; Schmidt, Jerome M.; Sednev, Igor; Shipway, Ben J.; Shupe, Matthew D.; Spangenberg, Douglas A.; Sud, Yogesh C.; Turner, David D.; Veron, Dana E.; von Salzen, Knut; Walker, Gregory K.; Wang, Zhien; Wolf, Audrey B.; Xie, Shaocheng; Xu, Kuan-Man; Yang, Fanglin; Zhang, Gong

    2009-02-02

    Results are presented from an intercomparison of single-column and cloud-resolving model simulations of a cold-air outbreak mixed-phase stratocumulus cloud observed during the Atmospheric Radiation Measurement (ARM) program's Mixed-Phase Arctic Cloud Experiment. The observed cloud occurred in a well-mixed boundary layer with a cloud top temperature of -15 °C. The observed average liquid water path of around 160 g m⁻² was about two-thirds of the adiabatic value and much greater than the average mass of ice crystal precipitation, which when integrated from the surface to cloud top was around 15 g m⁻². The simulations were performed by seventeen single-column models (SCMs) and nine cloud-resolving models (CRMs). While the simulated ice water path is generally consistent with the observed values, the median SCM and CRM liquid water path is a factor of three smaller than observed. Results from a sensitivity study in which models removed ice microphysics suggest that in many models the interaction between liquid and ice-phase microphysics is responsible for the large model underestimate of liquid water path. Despite this general underestimate, the simulated liquid and ice water paths of several models are consistent with the observed values. Furthermore, there is evidence that models with more sophisticated microphysics simulate liquid and ice water paths that are in better agreement with the observed values, although considerable scatter is also present. Although no single factor guarantees a good simulation, these results emphasize the need for improvement in the model representation of mixed-phase microphysics.

  5. Atmospheric dust modeling from meso to global scales with the online NMMB/BSC-Dust model – Part 2: Experimental campaigns in Northern Africa

    Directory of Open Access Journals (Sweden)

    K. Haustein

    2012-03-01

    Full Text Available The new NMMB/BSC-Dust model is intended to provide short to medium-range weather and dust forecasts from regional to global scales. It is an online model in which the dust aerosol dynamics and physics are solved at each model time step. The companion paper (Pérez et al., 2011) develops the dust model parameterizations and provides daily to annual evaluations of the model for its global and regional configurations. Modeled aerosol optical depth (AOD) was evaluated against AERONET Sun photometers over Northern Africa, Middle East and Europe with correlations around 0.6–0.7 on average without dust data assimilation. In this paper we analyze in detail the behavior of the model using data from the Saharan Mineral dUst experiment (SAMUM-1) in 2006 and the Bodélé Dust Experiment (BoDEx) in 2005. AOD from satellites and Sun photometers, vertically resolved extinction coefficients from lidars and particle size distributions at the ground and in the troposphere are used, complemented by wind profile data and surface meteorological measurements. All simulations were performed at the regional scale for the Northern African domain at the expected operational horizontal resolution of 25 km. Model results for SAMUM-1 generally show good agreement with satellite data over the most active Saharan dust sources. The model reproduces the AOD from Sun photometers close to sources and after long-range transport, and the dust size spectra at different height levels. At this resolution, the model is not able to reproduce a large haboob that occurred during the campaign. Some deficiencies are found concerning the vertical dust distribution related to the representation of the mixing height in the atmospheric part of the model. For the BoDEx episode, we found the diurnal temperature cycle to be strongly dependent on the soil moisture, which is underestimated in the NCEP analysis used for model initialization. The low level jet (LLJ) and the dust AOD over the Bodélé are

  6. Phase-field-based lattice Boltzmann modeling of large-density-ratio two-phase flows

    Science.gov (United States)

    Liang, Hong; Xu, Jiangrong; Chen, Jiangxing; Wang, Huili; Chai, Zhenhua; Shi, Baochang

    2018-03-01

    In this paper, we present a simple and accurate lattice Boltzmann (LB) model for immiscible two-phase flows, which is able to deal with large density contrasts. This model utilizes two LB equations, one of which is used to solve the conservative Allen-Cahn equation, and the other is adopted to solve the incompressible Navier-Stokes equations. A forcing distribution function is elaborately designed in the LB equation for the Navier-Stokes equations, which makes the model much simpler than existing LB models. In addition, the proposed model can achieve superior numerical accuracy compared with previous Allen-Cahn type LB models. Several benchmark two-phase problems, including static droplet, layered Poiseuille flow, and spinodal decomposition are simulated to validate the present LB model. It is found that the present model can achieve relatively small spurious velocity in the LB community, and the obtained numerical results also show good agreement with the analytical solutions or some available results. Lastly, we use the present model to investigate the droplet impact on a thin liquid film with a large density ratio of 1000 and the Reynolds number ranging from 20 to 500. The fascinating phenomenon of droplet splashing is successfully reproduced by the present model, and the numerically predicted spreading radius is found to obey the power law reported in the literature.

  7. Background modelling of diffraction data in the presence of ice rings

    Directory of Open Access Journals (Sweden)

    James M. Parkhurst

    2017-09-01

    Full Text Available An algorithm for modelling the background for each Bragg reflection in a series of X-ray diffraction images containing Debye–Scherrer diffraction from ice in the sample is presented. The method involves the use of a global background model which is generated from the complete X-ray diffraction data set. Fitting of this model to the background pixels is then performed for each reflection independently. The algorithm uses a static background model that does not vary over the course of the scan. The greatest improvement can be expected for data where ice rings are present throughout the data set and the local background shape at the size of a spot on the detector does not exhibit large time-dependent variation. However, the algorithm has been applied to data sets whose background showed large pixel variations (variance/mean > 2) and has been shown to improve the results of processing for these data sets. It is shown that the use of a simple flat-background model as in traditional integration programs causes systematic bias in the background determination at ice-ring resolutions, resulting in an overestimation of reflection intensities at the peaks of the ice rings and an underestimation of reflection intensities either side of the ice ring. The new global background-model algorithm presented here corrects for this bias, resulting in a noticeable improvement in R factors following refinement.
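
    The contrast with a flat background can be illustrated by the simplest version of the idea: fit a single scale factor of the global background profile to each reflection's background pixels by least squares. This is a sketch of the concept only, not the published algorithm:

```python
import numpy as np

def fit_scaled_background(global_bg, pixels):
    """Scale a global background profile to one reflection's background
    pixels by least squares and return the fitted local background."""
    g = np.asarray(global_bg, float).ravel()
    p = np.asarray(pixels, float).ravel()
    scale = g.dot(p) / g.dot(g)   # closed-form 1-parameter least squares
    return scale * np.asarray(global_bg, float)
```

    Unlike a flat model, the scaled profile follows the sharp radial variation of an ice ring across the spot instead of averaging over it, which is what removes the systematic bias described above.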

  8. Finite Mixture Multilevel Multidimensional Ordinal IRT Models for Large Scale Cross-Cultural Research

    Science.gov (United States)

    de Jong, Martijn G.; Steenkamp, Jan-Benedict E. M.

    2010-01-01

    We present a class of finite mixture multilevel multidimensional ordinal IRT models for large scale cross-cultural research. Our model is proposed for confirmatory research settings. Our prior for item parameters is a mixture distribution to accommodate situations where different groups of countries have different measurement operations, while…

  9. Full-Scale Approximations of Spatio-Temporal Covariance Models for Large Datasets

    KAUST Repository

    Zhang, Bohai

    2014-01-01

    Various continuously-indexed spatio-temporal process models have been constructed to characterize spatio-temporal dependence structures, but the computational complexity for model fitting and predictions grows in a cubic order with the size of dataset and application of such models is not feasible for large datasets. This article extends the full-scale approximation (FSA) approach by Sang and Huang (2012) to the spatio-temporal context to reduce computational complexity. A reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is proposed to select knots automatically from a discrete set of spatio-temporal points. Our approach is applicable to nonseparable and nonstationary spatio-temporal covariance models. We illustrate the effectiveness of our method through simulation experiments and application to an ozone measurement dataset.

  10. RELAP5 choked flow model and application to a large scale flow test

    International Nuclear Information System (INIS)

    Ransom, V.H.; Trapp, J.A.

    1980-01-01

    The RELAP5 code was used to simulate a large scale choked flow test. The fluid system used in the test was modeled in RELAP5 using a uniform, but coarse, nodalization. The choked mass discharge rate was calculated using the RELAP5 choked flow model. The calculations were in good agreement with the test data, and the flow was calculated to be near thermal equilibrium.

  11. Cloud-enabled large-scale land surface model simulations with the NASA Land Information System

    Science.gov (United States)

    Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.

    2017-12-01

    Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA in LIS, meaningful simulations containing a large multi-model ensemble will be enabled and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists who are interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple run-time environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that previously took weeks or months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and

  12. Field theory of large amplitude collective motion. A schematic model

    International Nuclear Information System (INIS)

    Reinhardt, H.

    1978-01-01

    By using path integral methods the equation for large amplitude collective motion for a schematic two-level model is derived. The original fermion theory is reformulated in terms of a collective (Bose) field. The classical equation of motion for the collective field coincides with the time-dependent Hartree-Fock equation. Its classical solution is quantized by means of the field-theoretical generalization of the WKB method. (author)

  13. Laboratory astrophysics. Model experiments of astrophysics with large-scale lasers

    International Nuclear Information System (INIS)

    Takabe, Hideaki

    2012-01-01

    I would like to review the model experiments of astrophysics performed with the high-power, large-scale lasers constructed mainly for laser nuclear fusion research. The four research directions of this new field, named 'Laser Astrophysics', are described with four examples mainly promoted in our institute. The description is in magazine style so as to be easily understood by non-specialists. A new theory and its model experiment on the collisionless shock and particle acceleration observed in supernova remnants (SNRs) are explained in detail, and the results and coming research directions are clarified. In addition, the vacuum breakdown experiment to be realized with the near-future ultra-intense laser is also introduced. (author)

  14. Large-eddy simulation of ethanol spray combustion using a finite-rate combustion model

    Energy Technology Data Exchange (ETDEWEB)

    Li, K.; Zhou, L.X. [Tsinghua Univ., Beijing (China). Dept. of Engineering Mechanics; Chan, C.K. [Hong Kong Polytechnic Univ. (China). Dept. of Applied Mathematics

    2013-07-01

    Large-eddy simulation of spray combustion is developing rapidly, but the combustion models are seldom validated against detailed experimental data. In this paper, large-eddy simulation of ethanol-air spray combustion was performed using an Eulerian-Lagrangian approach, a subgrid-scale kinetic energy stress model, and a finite-rate combustion model. The simulation results are validated in detail against experiments. The statistically averaged temperature obtained by LES is in agreement with the experimental results in most regions. The instantaneous LES results show the coherent structures of the shear region near the high-temperature flame zone and the fuel vapor concentration map, indicating that the droplets are concentrated in this shear region. The droplet sizes are found to be in the range of 20-100 μm. The instantaneous temperature map shows the close interaction between the coherent structures and the combustion reaction.

  15. Large-x dependence of νW2 in the generalized vector-dominance model

    International Nuclear Information System (INIS)

    Argyres, E.N.; Lam, C.S.

    1977-01-01

    It is well known that the usual generalized vector-meson-dominance (GVMD) model gives too large a contribution to νW2 for large x. Various heuristic modifications, for example making use of the t_min effect, have been proposed in order to achieve a reduction of this contribution. In this paper we examine within the GVMD context whether such reductions can rigorously be achieved. This is done utilizing a potential as well as a relativistic eikonal model. We find that whereas a reduction equivalent to that of t_min can be arranged in vector-meson photoproduction, the same is not true for virtual-photon Compton scattering in such diagonal models. The reason for this difference is discussed in detail. Finally we show that the desired reduction can be obtained if nondiagonal vector-meson scattering terms are properly taken into account.

  16. Modelling large scale human activity in San Francisco

    Science.gov (United States)

    Gonzalez, Marta

    2010-03-01

    A diverse group of people with a wide variety of schedules, activities and travel needs composes our cities nowadays. This represents a big challenge for modeling travel behavior in urban environments; such models are of crucial interest for a wide variety of applications such as traffic forecasting, the spreading of viruses, or measuring human exposure to air pollutants. The traditional means of obtaining knowledge about travel behavior is limited to surveys on travel journeys. The information obtained is based on questionnaires that are usually costly to implement, with intrinsic limitations in covering large numbers of individuals and some problems of reliability. Using mobile phone data, we explore the basic characteristics of a model of human travel: the distribution of agents is proportional to the population density of a given region, and each agent has a characteristic trajectory size that contains information on the frequency of visits to different locations. Additionally, we use a complementary data set from smart subway fare cards, which provides the exact time at which each passenger enters or exits a subway station together with the station coordinates. This allows us to uncover the temporal aspects of mobility. Since we have the actual time and place of each individual's origin and destination, we can understand the temporal patterns at each visited location in further detail. Integrating the two data sets, we provide a dynamical model of human travel that incorporates the different aspects observed empirically.
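
    The two empirical ingredients named above, agents allocated in proportion to population density, each carrying a characteristic trajectory size, can be sketched as follows. All numbers are illustrative, and the log-normal radius distribution is an assumption, not the distribution fitted in the study.

```python
import numpy as np

rng = np.random.default_rng(7)

# Population density of a few illustrative zones (not San Francisco data).
density = np.array([1200.0, 300.0, 800.0, 100.0])
n_agents = 10_000

# Ingredient 1: agents are distributed in proportion to population density.
home_zone = rng.choice(len(density), size=n_agents, p=density / density.sum())

# Ingredient 2: each agent carries a characteristic trajectory size; mobile-phone
# studies typically report heavy-tailed radii, sketched here as log-normal (assumed).
radius_km = rng.lognormal(mean=1.0, sigma=0.8, size=n_agents)

# Empirical zone shares should track the density shares.
shares = np.bincount(home_zone, minlength=len(density)) / n_agents
```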

  17. Intercomparison between CMIP5 model and MODIS satellite-retrieved data of aerosol optical depth, cloud fraction, and cloud-aerosol interactions

    Science.gov (United States)

    Sockol, Alyssa; Small Griswold, Jennifer D.

    2017-08-01

    Aerosols are a critical component of the Earth's atmosphere and can affect the climate of the Earth through their interactions with solar radiation and clouds. Cloud fraction (CF) and aerosol optical depth (AOD) at 550 nm from the Moderate Resolution Imaging Spectroradiometer (MODIS) are used with analogous cloud and aerosol properties from Historical runs of the Coupled Model Intercomparison Project Phase 5 (CMIP5) models that explicitly include anthropogenic aerosols and parameterized cloud-aerosol interactions. The models underestimate AOD by approximately 15% and underestimate CF by approximately 10% overall on a global scale. A regional analysis is then used to evaluate model performance in two regions with known biomass burning activity and absorbing aerosol: South America (SAM) and South Africa (SAF). In SAM, the models overestimate AOD by 4.8% and underestimate CF by 14%. In SAF, the models underestimate AOD by 35% and overestimate CF by 13.4%. Average annual cycles show that the monthly timing of AOD peaks closely matches the satellite data in both SAM and SAF for all except the Community Atmosphere Model 5 and Geophysical Fluid Dynamics Laboratory (GFDL) models. The monthly timing of CF peaks closely matches for all models (except GFDL) for SAM and SAF. Sorting monthly averaged 2° × 2.5° model or MODIS CF as a function of AOD does not reproduce the previously observed "boomerang"-shaped CF versus AOD relationship characteristic of regions with absorbing aerosols from biomass burning. Cloud-aerosol interactions, as observed using daily (or higher) temporal resolution data, are not reproducible at the spatial or temporal resolution provided by the CMIP5 models.

  18. Mechanical test of the model coil wound with large conductor

    International Nuclear Information System (INIS)

    Hiue, Hisaaki; Sugimoto, Makoto; Nakajima, Hideo; Yasukawa, Yukio; Yoshida, Kiyoshi; Hasegawa, Mitsuru; Ito, Ikuo; Konno, Masayuki.

    1992-09-01

    High rigidity and strength of the winding pack are required to realize large superconducting magnets for fusion reactors. This paper describes mechanical tests concerning the rigidity of the winding pack. Samples were prepared to evaluate the adhesive strength between conductors and insulators. Epoxy and Bismaleimide-Triazine resin (BT resin) were used as the conductor insulation. Stainless steel (SS) 304 bars, whose surfaces were treated mechanically and chemically, were used as the model conductor. The model coil was wound with the model conductors, covered with insulation, and finished with ground insulation. A winding model combining 3 x 3 conductors was produced for measuring shearing rigidity. The sample was loaded with pure shearing force at LN2 temperature. The bending rigidity of a bar winding sample of 8 x 6 conductors was measured. These three-point bending tests were carried out at room temperature. The pancake winding sample was loaded with compressive forces to measure the compressive rigidity of the winding. (author)

  19. Instantons and Large N

    Science.gov (United States)

    Mariño, Marcos

    2015-09-01

    Preface; Part I. Instantons: 1. Instantons in quantum mechanics; 2. Unstable vacua in quantum field theory; 3. Large order behavior and Borel summability; 4. Non-perturbative aspects of Yang-Mills theories; 5. Instantons and fermions; Part II. Large N: 6. Sigma models at large N; 7. The 1/N expansion in QCD; 8. Matrix models and matrix quantum mechanics at large N; 9. Large N QCD in two dimensions; 10. Instantons at large N; Appendix A. Harmonic analysis on S3; Appendix B. Heat kernel and zeta functions; Appendix C. Effective action for large N sigma models; References; Author index; Subject index.

  20. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number.

    Science.gov (United States)

    Klewicki, J C; Chini, G P; Gibson, J F

    2017-03-13

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier-Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue 'Toward the development of high-fidelity models of wall turbulence at large Reynolds number'. © 2017 The Author(s).

  1. Prospectus: towards the development of high-fidelity models of wall turbulence at large Reynolds number

    Science.gov (United States)

    Klewicki, J. C.; Chini, G. P.; Gibson, J. F.

    2017-01-01

    Recent and on-going advances in mathematical methods and analysis techniques, coupled with the experimental and computational capacity to capture detailed flow structure at increasingly large Reynolds numbers, afford an unprecedented opportunity to develop realistic models of high Reynolds number turbulent wall-flow dynamics. A distinctive attribute of this new generation of models is their grounding in the Navier–Stokes equations. By adhering to this challenging constraint, high-fidelity models ultimately can be developed that not only predict flow properties at high Reynolds numbers, but that possess a mathematical structure that faithfully captures the underlying flow physics. These first-principles models are needed, for example, to reliably manipulate flow behaviours at extreme Reynolds numbers. This theme issue of Philosophical Transactions of the Royal Society A provides a selection of contributions from the community of researchers who are working towards the development of such models. Broadly speaking, the research topics represented herein report on dynamical structure, mechanisms and transport; scale interactions and self-similarity; model reductions that restrict nonlinear interactions; and modern asymptotic theories. In this prospectus, the challenges associated with modelling turbulent wall-flows at large Reynolds numbers are briefly outlined, and the connections between the contributing papers are highlighted. This article is part of the themed issue ‘Toward the development of high-fidelity models of wall turbulence at large Reynolds number’. PMID:28167585

  2. Low cycle fatigue strength of austenitic stainless steel under large strain regime

    International Nuclear Information System (INIS)

    Sakai, Michiya; Saito, Kiyoshi; Matsuura, Shinichi

    1998-01-01

    In order to establish realistic seismic safety of nuclear power plants, it is necessary to clarify the failure mode of each component and to prepare a damage evaluation method. The authors have proposed a damage evaluation method based on a fully numerical approach to evaluate low cycle fatigue (LCF) failure under seismic loadings. This method has been validated by comparison with dynamic failure tests of thin elbows, which are among the important components of the FBR primary piping system. However, since only limited LCF data exist, fatigue lives in the large strain regime have been extrapolated from available fatigue data. In this study, LCF tests have been conducted on austenitic stainless steel SUS304 over a large strain range from 2% to 10%. From the results, a regression LCF curve has been proposed to modify Wada's best-fit LCF curve in the large strain regime. The usage factors calculated by the authors' numerical approach using the proposed LCF curve are improved, correcting the underestimation of the fatigue damage. (author)

  3. The Oncopig Cancer Model: An Innovative Large Animal Translational Oncology Platform

    DEFF Research Database (Denmark)

    Schachtschneider, Kyle M.; Schwind, Regina M.; Newson, Jordan

    2017-01-01

    -the Oncopig Cancer Model (OCM)-as a next-generation large animal platform for the study of hematologic and solid tumor oncology. With mutations in key tumor suppressor and oncogenes, TP53R167H and KRASG12D , the OCM recapitulates transcriptional hallmarks of human disease while also exhibiting clinically...

  4. A mechanistic diagnosis of the simulation of soil CO2 efflux of the ACME Land Model

    Science.gov (United States)

    Liang, J.; Ricciuto, D. M.; Wang, G.; Gu, L.; Hanson, P. J.; Mayes, M. A.

    2017-12-01

    Accurate simulation of the CO2 efflux from soils (i.e., soil respiration) to the atmosphere is critical to project global biogeochemical cycles and the magnitude of climate change in Earth system models (ESMs). Currently, soil respiration simulated by ESMs still has large uncertainty. In this study, a mechanistic diagnosis of soil respiration in the Accelerated Climate Model for Energy (ACME) Land Model (ALM) was conducted using long-term observations at the Missouri Ozark AmeriFlux (MOFLUX) forest site in the central U.S. The results showed that the ALM default run significantly underestimated annual soil respiration and gross primary production (GPP), while incorrectly estimating soil water potential. Improved simulations of soil water potential with site-specific data significantly improved the modeled annual soil respiration, primarily because annual GPP was simultaneously improved. Therefore, simulations of soil water potential must be carefully calibrated in ESMs. Despite improved annual soil respiration, the ALM continued to underestimate soil respiration during peak growing seasons, and to overestimate soil respiration during non-peak growing seasons. Simulations involving increased GPP during peak growing seasons increased soil respiration, while neither improved plant phenology nor increased temperature sensitivity affected the simulation of soil respiration during non-peak growing seasons. One potential reason for the overestimation of soil respiration during non-peak growing seasons may be that the current model structure is substrate-limited, while microbial dormancy under stress may cause the system to become decomposer-limited. Further studies with more microbial data are required to provide an adequate representation of soil respiration and to understand the underlying reasons for inaccurate model simulations.

  5. Are we under-estimating the association between autism symptoms?: The importance of considering simultaneous selection when using samples of individuals who meet diagnostic criteria for an autism spectrum disorder.

    Science.gov (United States)

    Murray, Aja Louise; McKenzie, Karen; Kuenssberg, Renate; O'Donnell, Michael

    2014-11-01

    The magnitude of symptom inter-correlations in diagnosed individuals has contributed to the evidence that autism spectrum disorder (ASD) is a fractionable disorder. Such correlations may substantially under-estimate the population correlations among symptoms due to simultaneous selection on the areas of deficit required for diagnosis. Using statistical simulations of this selection mechanism, we provide estimates of the extent of this bias, given different levels of population correlation between symptoms. We then use real data to compare domain inter-correlations in the Autism Spectrum Quotient in those with ASD versus a combined ASD and non-ASD sample. Results from both studies indicate that samples restricted to individuals with a diagnosis of ASD potentially substantially under-estimate the magnitude of association between features of ASD.
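
    The selection mechanism is easy to reproduce by simulation. The sketch below is a generic illustration, not the authors' exact simulation design: the population correlation, cutoff, and bivariate-normal scores are all assumed. Retaining only cases that exceed a cutoff on both symptom domains strongly attenuates the observed correlation relative to the population value.

```python
import numpy as np

rng = np.random.default_rng(42)

rho = 0.5   # assumed population correlation between two symptom domains
n = 200_000
scores = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)

# Diagnosis requires marked features in BOTH domains (simultaneous selection),
# modelled here as both scores exceeding a cutoff of 1 SD.
cutoff = 1.0
diagnosed = scores[(scores[:, 0] > cutoff) & (scores[:, 1] > cutoff)]

r_population = np.corrcoef(scores[:, 0], scores[:, 1])[0, 1]
r_diagnosed = np.corrcoef(diagnosed[:, 0], diagnosed[:, 1])[0, 1]
# Range restriction in both variables attenuates the inter-correlation,
# so r_diagnosed comes out markedly smaller than r_population.
```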

  6. Solid-Liquid equilibrium of n-alkanes using the Chain Delta Lattice Parameter model

    DEFF Research Database (Denmark)

    Coutinho, João A.P.; Andersen, Simon Ivar; Stenby, Erling Halfdan

    1996-01-01

    The formation of a solid phase in liquid mixtures with large paraffinic molecules is a phenomenon of interest in the petroleum, pharmaceutical, and biotechnological industries among others. Efforts to model the solid-liquid equilibrium in these systems have been mainly empirical and with different...... degrees of success. An attempt to describe the equilibrium between the high temperature form of a paraffinic solid solution, commonly known as the rotator phase, and the liquid phase is performed. The Chain Delta Lattice Parameter model (CDLP) is developed, allowing a successful description of the solid-liquid...... equilibrium of n-alkanes ranging from n-C_20 to n-C_40. The model is further modified to achieve a more correct temperature dependence because it severely underestimates the excess enthalpy. It is shown that the ratio of excess enthalpy and entropy for n-alkane solid solutions, as happens for other solid...

  7. Verification of high-speed solar wind stream forecasts using operational solar wind models

    DEFF Research Database (Denmark)

    Reiss, Martin A.; Temmer, Manuela; Veronig, Astrid M.

    2016-01-01

    and the background solar wind conditions. We found that both solar wind models are capable of predicting the large-scale features of the observed solar wind speed (root-mean-square error, RMSE ≈100 km/s) but tend to either overestimate (ESWF) or underestimate (WSA) the number of high-speed solar wind streams (threat......High-speed solar wind streams emanating from coronal holes are frequently impinging on the Earth's magnetosphere causing recurrent, medium-level geomagnetic storm activity. Modeling high-speed solar wind streams is thus an essential element of successful space weather forecasting. Here we evaluate...... high-speed stream forecasts made by the empirical solar wind forecast (ESWF) and the semiempirical Wang-Sheeley-Arge (WSA) model based on the in situ plasma measurements from the Advanced Composition Explorer (ACE) spacecraft for the years 2011 to 2014. While the ESWF makes use of an empirical relation...
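
    The two kinds of score mentioned above, a speed error (RMSE) and an event-based threat score for high-speed streams, can be sketched as follows. The 500 km/s event threshold and the speed series are illustrative assumptions, not ACE data or the thresholds used in the study.

```python
import numpy as np

def rmse(pred, obs):
    """Root-mean-square error between predicted and observed speeds."""
    pred, obs = np.asarray(pred, float), np.asarray(obs, float)
    return float(np.sqrt(np.mean((pred - obs) ** 2)))

def threat_score(pred, obs, threshold=500.0):
    """Critical success index for high-speed-stream events (v >= threshold km/s)."""
    p = np.asarray(pred, float) >= threshold
    o = np.asarray(obs, float) >= threshold
    hits = int(np.sum(p & o))
    misses = int(np.sum(~p & o))
    false_alarms = int(np.sum(p & ~o))
    denom = hits + misses + false_alarms
    return hits / denom if denom else float("nan")

# Illustrative daily solar wind speeds in km/s (not ACE measurements).
obs = [350, 420, 610, 580, 390, 700, 450, 520]
pred = [380, 400, 550, 600, 420, 640, 480, 470]
```

    A model can have a low RMSE while still missing events, which is why event-based scores are reported alongside the speed error.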

  8. Modeling and analysis of large-eddy simulations of particle-laden turbulent boundary layer flows

    KAUST Repository

    Rahman, Mustafa M.

    2017-01-05

    We describe a framework for the large-eddy simulation of solid particles suspended and transported within an incompressible turbulent boundary layer (TBL). For the fluid phase, the large-eddy simulation (LES) of the incompressible turbulent boundary layer employs the stretched spiral vortex subgrid-scale model and a virtual wall model similar to the work of Cheng, Pullin & Samtaney (J. Fluid Mech., 2015). This LES model is virtually parameter free and involves no active filtering of the computed velocity field. Furthermore, a recycling method to generate turbulent inflow is implemented. For the particle phase, the direct quadrature method of moments (DQMOM) is chosen, in which the weights and abscissas of the quadrature approximation are tracked directly rather than the moments themselves. The numerical method in this framework is based on a fractional-step method with an energy-conservative fourth-order finite difference scheme on a staggered mesh. This code is parallelized based on the standard message passing interface (MPI) protocol and is designed for distributed-memory machines. It is proposed to utilize this framework to examine transport of particles in very large-scale simulations. The solver is validated using the well-known Taylor-Green vortex case. A large-scale sandstorm case is simulated, and the altitude variations of number density along with its fluctuations are quantified.
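
    The DQMOM idea of tracking weights and abscissas directly can be illustrated in miniature: any moment of the particle population is reconstructed from the tracked quadrature nodes. The two-node quadrature below is a generic illustration with assumed numbers, not the transport equations solved in the paper.

```python
import numpy as np

# DQMOM tracks the quadrature weights w_i and abscissas x_i of the particle-size
# distribution directly; any moment is then recovered as m_k = sum_i w_i * x_i**k.
weights = np.array([0.4, 0.6])       # illustrative two-node quadrature
abscissas = np.array([30.0, 70.0])   # particle diameters in micrometres (assumed)

def moments(w, x, kmax=3):
    """Reconstruct moments m_0 .. m_kmax from weights and abscissas."""
    return np.array([float(np.sum(w * x ** k)) for k in range(kmax + 1)])

m = moments(weights, abscissas)
d32 = m[3] / m[2]   # Sauter mean diameter from the reconstructed moments
```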

  9. THE TEXTBOOK AS A PRODUCT OF SCHOOL GEOGRAPHY: underestimated work?

    Directory of Open Access Journals (Sweden)

    José Eustáquio de Sene

    2014-01-01

    Full Text Available ABSTRACT: This article will address the textbook as a specific cultural production of school disciplines, having as reference the theoretical debate that opposed the conceptions of "didactic transposition" (CHEVALLARD, 1997) and "school culture" (CHERVEL, 1990). Based on this debate, characteristic of the curriculum field, this article aims to understand why, historically, the textbook has been underestimated and even considered a "less important work" within the limits of the academy (BITTENCOURT, 2004). The examples used will always be from the Geography discipline – both school and academic, as well as the relations between these two fields – having in mind its "multiplicity of paradigms" (LESTEGÁS, 2002). The analysis will also take into account the historic process of institutionalization of academic Geography based on "Layton's stages" (GOODSON, 2005).

  10. A 2D nonlinear multiring model for blood flow in large elastic arteries

    Science.gov (United States)

    Ghigo, Arthur R.; Fullana, Jose-Maria; Lagrée, Pierre-Yves

    2017-12-01

    In this paper, we propose a two-dimensional nonlinear "multiring" model to compute blood flow in axisymmetric elastic arteries. This model is designed to overcome the numerical difficulties of three-dimensional fluid-structure interaction simulations of blood flow without using the over-simplifications necessary to obtain one-dimensional blood flow models. This multiring model is derived by integrating over concentric rings of fluid the simplified long-wave Navier-Stokes equations coupled to an elastic model of the arterial wall. The resulting system of balance laws provides a unified framework in which both the motion of the fluid and the displacement of the wall are dealt with simultaneously. The mathematical structure of the multiring model allows us to use a finite volume method that guarantees the conservation of mass and the positivity of the numerical solution and can deal with nonlinear flows and large deformations of the arterial wall. We show that the finite volume numerical solution of the multiring model provides at a reasonable computational cost an asymptotically valid description of blood flow velocity profiles and other averaged quantities (wall shear stress, flow rate, ...) in large elastic and quasi-rigid arteries. In particular, we validate the multiring model against well-known solutions such as the Womersley or the Poiseuille solutions as well as against steady boundary layer solutions in quasi-rigid constricted and expanded tubes.
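
    A Poiseuille validation of this kind amounts to checking that summing the velocity over concentric rings recovers the Hagen-Poiseuille flow rate. The sketch below is a minimal illustration of that check with assumed parameters, not the paper's discretization or test cases.

```python
import numpy as np

# Illustrative parameters (not taken from the paper).
R = 0.01        # vessel radius in metres
dpdx = -100.0   # axial pressure gradient in Pa/m
mu = 4.0e-3     # blood-like dynamic viscosity in Pa*s

# Discretize the cross-section into concentric rings, in the spirit of a
# multiring scheme, and evaluate the analytic Poiseuille profile at ring centres:
# u(r) = -(dp/dx) * (R^2 - r^2) / (4 mu).
N = 1000
r_edges = np.linspace(0.0, R, N + 1)
r_mid = 0.5 * (r_edges[:-1] + r_edges[1:])
u = -dpdx * (R ** 2 - r_mid ** 2) / (4.0 * mu)

# Flow rate as a sum over ring areas, versus the exact Hagen-Poiseuille law.
ring_area = np.pi * (r_edges[1:] ** 2 - r_edges[:-1] ** 2)
Q_rings = float(np.sum(u * ring_area))
Q_exact = float(-np.pi * dpdx * R ** 4 / (8.0 * mu))
```

    The ring sum converges to the exact flow rate as the rings are refined, and the profile satisfies the classical Poiseuille property that the peak velocity is twice the cross-sectional mean.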

  11. Dynamic subgrid scale model of large eddy simulation of cross bundle flows

    International Nuclear Information System (INIS)

    Hassan, Y.A.; Barsamian, H.R.

    1996-01-01

    The dynamic subgrid scale closure model of Germano et al. (1991) is used in the large eddy simulation code GUST for incompressible isothermal flows. Tube bundle geometries of staggered and non-staggered arrays are considered in deep bundle simulations. The advantage of the dynamic subgrid scale model is the exclusion of an input model coefficient. The model coefficient is evaluated dynamically for each nodal location in the flow domain. Dynamic subgrid scale results are obtained in the form of power spectral densities and flow visualization of turbulent characteristics. Comparisons are performed among the dynamic subgrid scale model, the Smagorinsky eddy viscosity model (that is used as the base model for the dynamic subgrid scale model) and available experimental data. Spectral results of the dynamic subgrid scale model correlate better with the experimental data. Satisfactory turbulence characteristics are observed through flow visualization.

  12. QUAL-NET, a high temporal-resolution eutrophication model for large hydrographic networks

    Science.gov (United States)

    Minaudo, Camille; Curie, Florence; Jullian, Yann; Gassama, Nathalie; Moatar, Florentina

    2018-04-01

    To allow climate change impact assessment of water quality in river systems, the scientific community lacks efficient deterministic models able to simulate hydrological and biogeochemical processes in drainage networks at the regional scale, with high temporal resolution and water temperature explicitly determined. The model QUALity-NETwork (QUAL-NET) was developed and tested on the Middle Loire River Corridor, a sub-catchment of the Loire River in France, prone to eutrophication. Hourly variations computed efficiently by the model helped disentangle the complex interactions existing between hydrological and biological processes across different timescales. Phosphorus (P) availability was the most constraining factor for phytoplankton development in the Loire River, but simulating bacterial dynamics in QUAL-NET surprisingly evidenced large amounts of organic matter recycled within the water column through the microbial loop, which delivered significant fluxes of available P and enhanced phytoplankton growth. This explained why severe blooms still occur in the Loire River despite large P input reductions since 1990. QUAL-NET could be used to study past evolutions or predict future trajectories under climate change and land use scenarios.

  13. QUAL-NET, a high temporal-resolution eutrophication model for large hydrographic networks

    Directory of Open Access Journals (Sweden)

    C. Minaudo

    2018-04-01

    Full Text Available To allow climate change impact assessment of water quality in river systems, the scientific community lacks efficient deterministic models able to simulate hydrological and biogeochemical processes in drainage networks at the regional scale, with high temporal resolution and water temperature explicitly determined. The model QUALity-NETwork (QUAL-NET was developed and tested on the Middle Loire River Corridor, a sub-catchment of the Loire River in France, prone to eutrophication. Hourly variations computed efficiently by the model helped disentangle the complex interactions existing between hydrological and biological processes across different timescales. Phosphorus (P availability was the most constraining factor for phytoplankton development in the Loire River, but simulating bacterial dynamics in QUAL-NET surprisingly evidenced large amounts of organic matter recycled within the water column through the microbial loop, which delivered significant fluxes of available P and enhanced phytoplankton growth. This explained why severe blooms still occur in the Loire River despite large P input reductions since 1990. QUAL-NET could be used to study past evolutions or predict future trajectories under climate change and land use scenarios.

  14. Modifying a dynamic global vegetation model for simulating large spatial scale land surface water balance

    Science.gov (United States)

    Tang, G.; Bartlein, P. J.

    2012-01-01

    Water balance models of simple structure are easier to grasp and more clearly connect cause and effect than models of complex structure. Such models are essential for studying large spatial scale land surface water balance in the context of climate and land cover change, both natural and anthropogenic. This study aims to (i) develop a large spatial scale water balance model by modifying a dynamic global vegetation model (DGVM), and (ii) test the model's performance in simulating actual evapotranspiration (ET), soil moisture and surface runoff for the coterminous United States (US). Toward these ends, we first developed the "LPJ-Hydrology" (LH) model by incorporating satellite-based land covers into the Lund-Potsdam-Jena (LPJ) DGVM instead of simulating them dynamically. We then ran LH using historical (1982-2006) climate data and satellite-based land covers at 2.5 arc-min grid cells. The simulated ET, soil moisture and surface runoff were compared to existing sets of observed or simulated data for the US. The results indicated that LH captures well the variations of monthly actual ET, soil moisture and surface runoff (R2 = 0.61, 0.46 and 0.52, respectively) against observed values over the years 1982-2006. The modeled spatial patterns of annual ET and surface runoff are in accordance with previously published data. Compared to its predecessor, LH simulates monthly stream flow in winter and early spring better by incorporating effects of solar radiation on snowmelt. Overall, this study demonstrates the feasibility of incorporating satellite-based land covers into a DGVM for simulating large spatial scale land surface water balance. LH developed in this study should be a useful tool for studying effects of climate and land cover change on land surface hydrology at large spatial scales.
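
    The R2 values quoted above are coefficients of determination between simulated and observed monthly series. As a hedged illustration (not code from the study; the data here are synthetic), a minimal sketch of the computation:

```python
import numpy as np

def r_squared(obs, sim):
    """Coefficient of determination between observed and simulated series."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    ss_res = np.sum((obs - sim) ** 2)          # residual sum of squares
    ss_tot = np.sum((obs - obs.mean()) ** 2)   # total variance around the mean
    return 1.0 - ss_res / ss_tot

# Synthetic monthly ET series (mm/month) standing in for observations over
# 1982-2006, and a "model" that tracks them with some error.
rng = np.random.default_rng(0)
months = np.arange(300)  # 25 years of monthly values
obs = 60 + 40 * np.sin(2 * np.pi * months / 12) + rng.normal(0, 10, months.size)
sim = 0.9 * obs + rng.normal(0, 15, months.size)

print(round(r_squared(obs, sim), 2))
```

    A perfect match gives R2 = 1; values around 0.5-0.6, as reported for LH, mean the model explains roughly half of the observed monthly variance.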

  15. Gastroesophageal reflux disease vs. Panayiotopoulos syndrome: an underestimated misdiagnosis in pediatric age?

    Science.gov (United States)

    Parisi, Pasquale; Pacchiarotti, Claudia; Ferretti, Alessandro; Bianchi, Simona; Paolino, Maria Chiara; Barreto, Mario; Principessa, Luigi; Villa, Maria Pia

    2014-12-01

    Autonomic signs and symptoms could be of epileptic or nonepileptic origin, and the differential diagnosis depends on a number of factors which include the nature of the autonomic manifestations themselves, the occurrence of other nonictal autonomic signs/symptoms, and the age of the patient. Here, we describe twelve children (aged from ten months to six years at the onset of the symptoms) with Panayiotopoulos syndrome misdiagnosed as gastroesophageal reflux disease. Gastroesophageal reflux disease and Panayiotopoulos syndrome may represent an underestimated diagnostic challenge. When the signs/symptoms occur mainly during sleep, a sleep EEG or, if available, a polysomnographic evaluation may be the most useful investigation to make a differential diagnosis between autonomic epileptic and nonepileptic disorders. Early detection can reduce both the high morbidity related to mismanagement and the high costs to the national health service related to incorrect diagnostic and therapeutic approaches. To decide if antiseizure therapy is required, one should take into account both the frequency and severity of epileptic seizures and the tendency to have potentially lethal autonomic cardiorespiratory involvement. In conclusion, we would emphasize the need to make a differential diagnosis between gastroesophageal reflux disease and Panayiotopoulos syndrome in patients with "an unusual" late-onset picture of GERD and acid therapy-resistant gastroesophageal reflux, especially if associated with other autonomic symptoms and signs. Copyright © 2014 Elsevier Inc. All rights reserved.

  16. Nonlinear model and attitude dynamics of flexible spacecraft with large amplitude slosh

    Science.gov (United States)

    Deng, Mingle; Yue, Baozeng

    2017-04-01

    This paper focuses on the nonlinear modelling and attitude dynamics of spacecraft coupled with large amplitude liquid sloshing dynamics and flexible appendage vibration. The large amplitude fuel slosh dynamics is included by using an improved moving pulsating ball model. The moving pulsating ball model is an equivalent mechanical model that is capable of imitating the whole liquid reorientation process. A modification is introduced in the capillary force computation in order to more precisely estimate the settling location of liquid in a microgravity or zero-g environment. The flexible appendage is modelled as a three-dimensional Bernoulli-Euler beam, and the assumed modal method is employed to derive the nonlinear mechanical model for the overall coupled system of a liquid-filled spacecraft with an appendage. The attitude maneuver is implemented by the momentum transfer technique, and a feedback controller is designed. The simulation results show that liquid sloshing can always result in nutation behavior, but the effect of flexible deformation of the appendage depends on the amplitude and direction of the attitude maneuver performed by the spacecraft. Moreover, it is found that the liquid sloshing and the vibration of the flexible appendage are coupled with each other, and the coupling becomes more significant with more rapid motion of the spacecraft. This study reveals that the appendage's flexibility has influence on the liquid's location and settling time in microgravity. The presented nonlinear system model can provide an important reference for the overall design of modern spacecraft composed of a rigid platform, liquid-filled tank and flexible appendage.

  17. Design and modelling of innovative machinery systems for large ships

    DEFF Research Database (Denmark)

    Larsen, Ulrik

    Eighty percent of the growing global merchandise trade is transported by sea. The shipping industry is required to reduce the pollution and increase the energy efficiency of ships in the near future. There is a relatively large potential for approaching these requirements by implementing waste heat...... consisting of a two-zone combustion and NOx emission model, a double Wiebe heat release model, the Redlich-Kwong equation of state and the Woschni heat loss correlation. A novel methodology is presented and used to determine the optimum organic Rankine cycle process layout, working fluid and process......, are evaluated with regards to the fuel consumption and NOx emissions trade-off. The results of the calibration and validation of the engine model suggest that the main performance parameters can be predicted with adequate accuracies for the overall purpose. The results of the ORC and the Kalina cycle...
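
    The double Wiebe heat release model mentioned above superimposes two Wiebe mass-fraction-burned curves, one for the premixed and one for the diffusion phase of combustion. A minimal sketch, with all coefficients chosen for illustration rather than taken from the thesis:

```python
import math

def wiebe(theta, theta_s, dtheta, a=6.9, m=2.0):
    """Single Wiebe mass-fraction-burned curve; zero before start of combustion.

    theta: crank angle (deg), theta_s: start of combustion, dtheta: burn duration.
    """
    if theta < theta_s:
        return 0.0
    x = (theta - theta_s) / dtheta
    return 1.0 - math.exp(-a * x ** (m + 1))

def double_wiebe(theta, beta=0.4,
                 theta_s1=-5.0, dtheta1=15.0, m1=1.5,   # premixed part
                 theta_s2=0.0, dtheta2=45.0, m2=0.8):   # diffusion part
    """Weighted sum of two Wiebe curves (all parameters illustrative)."""
    return (beta * wiebe(theta, theta_s1, dtheta1, m=m1)
            + (1 - beta) * wiebe(theta, theta_s2, dtheta2, m=m2))

# Mass fraction burned rises from 0 to ~1 over the combustion event.
print(round(double_wiebe(90.0), 3))
```

    In a full engine model, the derivative of this curve scales the rate of heat release driving the in-cylinder pressure calculation.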

  18. Comparison of Langevin and Markov channel noise models for neuronal signal generation.

    Science.gov (United States)

    Sengupta, B; Laughlin, S B; Niven, J E

    2010-01-01

    The stochastic opening and closing of voltage-gated ion channels produce noise in neurons. The effect of this noise on neuronal performance has been modeled using either an approximate Langevin model, based on stochastic differential equations, or an exact model based on a Markov process description of channel gating. Yet whether the Langevin model accurately reproduces the channel noise produced by the Markov model remains unclear. Here we present a comparison between Langevin and Markov models of channel noise in neurons using single-compartment Hodgkin-Huxley models containing either Na+ and K+, or only K+, voltage-gated ion channels. The performance of the Langevin and Markov models was quantified over a range of stimulus statistics, membrane areas, and channel numbers. We find that in comparison to the Markov model, the Langevin model underestimates the noise contributed by voltage-gated ion channels, overestimating information rates for both spiking and nonspiking membranes. Even with increasing numbers of channels, the difference between the two models persists. This suggests that the Langevin model may not be suitable for accurately simulating channel noise in neurons, even in simulations with large numbers of ion channels.
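
    The two approaches can be sketched for a population of two-state channels. A minimal, hedged example (the rates, channel count, and step size are illustrative, not the paper's Hodgkin-Huxley setup) comparing a channel-by-channel Markov simulation with the corresponding Langevin SDE for the open fraction:

```python
import numpy as np

# Two-state channel (closed <-> open) with fixed voltage, rates in 1/ms.
alpha, beta = 0.1, 0.125
N = 1000            # number of channels
dt, steps = 0.01, 20000
rng = np.random.default_rng(1)

# Markov (exact): count stochastic open/close transitions each time step.
n_open = int(N * alpha / (alpha + beta))   # start at the stationary mean
markov = np.empty(steps)
for t in range(steps):
    opened = rng.binomial(N - n_open, alpha * dt)   # closed -> open
    closed = rng.binomial(n_open, beta * dt)        # open -> closed
    n_open += opened - closed
    markov[t] = n_open / N

# Langevin (approximate): SDE for the open fraction with matched noise term.
n = alpha / (alpha + beta)
langevin = np.empty(steps)
for t in range(steps):
    drift = alpha * (1 - n) - beta * n
    diff = np.sqrt(max(alpha * (1 - n) + beta * n, 0.0) / N)
    n += drift * dt + diff * np.sqrt(dt) * rng.normal()
    n = min(max(n, 0.0), 1.0)   # keep the fraction physical
    langevin[t] = n

print(round(markov.std(), 4), round(langevin.std(), 4))
```

    Both trajectories fluctuate around the stationary open fraction alpha/(alpha+beta); comparing their fluctuation statistics (and, in the paper, the resulting information rates) is the essence of the Langevin-vs-Markov test.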

  19. Attenuation Model Using the Large-N Array from the Source Physics Experiment

    Science.gov (United States)

    Atterholt, J.; Chen, T.; Snelson, C. M.; Mellors, R. J.

    2017-12-01

    The Source Physics Experiment (SPE) consists of a series of chemical explosions at the Nevada National Security Site. SPE seeks to better characterize the influence of subsurface heterogeneities on seismic wave propagation and energy dissipation from explosions. As a part of this experiment, SPE-5, a 5000 kg TNT equivalent chemical explosion, was detonated in 2016. During the SPE-5 experiment, a Large-N array of 996 geophones (half 3-component and half z-component) was deployed. This array covered an area that includes loosely consolidated alluvium (weak rock) and weathered granite (hard rock), and recorded the SPE-5 explosion as well as 53 weight drops. We use these Large-N recordings to develop an attenuation model of the area to better characterize how geologic structures influence source energy partitioning. We found a clear variation in seismic attenuation for different rock types: high attenuation (low Q) for alluvium and low attenuation (high Q) for granite. The attenuation structure correlates well with local geology, and will be incorporated into the large simulation effort of the SPE program to validate predictive models. (LA-UR-17-26382)
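
    The abstract does not state how Q was estimated; one standard approach, shown here purely as a hedged sketch with synthetic data, fits the exponential amplitude decay A(t) = A0 exp(-pi f t / Q) to recover Q by least squares on log-amplitudes:

```python
import numpy as np

def estimate_q(travel_times, amplitudes, freq):
    """Least-squares Q from ln A = ln A0 - (pi * f / Q) * t."""
    t = np.asarray(travel_times, float)
    lnA = np.log(np.asarray(amplitudes, float))
    slope, _ = np.polyfit(t, lnA, 1)    # slope = -pi * f / Q
    return -np.pi * freq / slope

# Synthetic geophone amplitudes at 10 Hz for a medium with true Q = 50
# (hard rock would show higher Q, i.e. slower amplitude decay).
f, q_true, a0 = 10.0, 50.0, 1.0
t = np.linspace(0.1, 1.0, 20)
amps = a0 * np.exp(-np.pi * f * t / q_true)
print(round(estimate_q(t, amps, f), 1))  # → 50.0
```

    With real Large-N data, geometrical spreading and site terms would have to be removed before the fit; this sketch isolates only the attenuation term.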

  20. Towards large scale stochastic rainfall models for flood risk assessment in trans-national basins

    Science.gov (United States)

    Serinaldi, F.; Kilsby, C. G.

    2012-04-01

    While extensive research has been devoted to rainfall-runoff modelling for risk assessment in small and medium size watersheds, less attention has been paid, so far, to large scale trans-national basins, where flood events have severe societal and economic impacts with magnitudes quantified in billions of Euros. As an example, in the April 2006 flood events along the Danube basin at least 10 people lost their lives and up to 30 000 people were displaced, with overall damages estimated at more than half a billion Euros. In this context, refined analytical methods are fundamental to improving the risk assessment and, in turn, the design of structural and non-structural measures of protection, such as hydraulic works and insurance/reinsurance policies. Since flood events are mainly driven by exceptional rainfall events, suitable characterization and modelling of the space-time properties of rainfall fields is a key issue in performing a reliable flood risk analysis based on alternative precipitation scenarios to be fed into a new generation of large scale rainfall-runoff models. Ultimately, this approach should be extended to a global flood risk model. However, as the need for rainfall models able to account for and simulate spatio-temporal properties of rainfall fields over large areas is rather new, the development of new rainfall simulation frameworks is a challenging task that requires overcoming the drawbacks of the existing modelling schemes (devised for smaller spatial scales) while keeping their desirable properties. In this study, we critically summarize the most widely used approaches for rainfall simulation. Focusing on stochastic approaches, we stress the importance of introducing suitable climate forcings in these simulation schemes in order to account for the physical coherence of rainfall fields over wide areas. Based on preliminary considerations, we suggest a modelling framework relying on Generalized Additive Models for Location, Scale and Shape (GAMLSS).

  1. Modeling and experiments of biomass combustion in a large-scale grate boiler

    DEFF Research Database (Denmark)

    Yin, Chungen; Rosendahl, Lasse; Kær, Søren Knudsen

    2007-01-01

    is inherently more difficult due to the complexity of the solid biomass fuel bed on the grate, the turbulent reacting flow in the combustion chamber and the intensive interaction between them. This paper presents the CFD validation efforts for a modern large-scale biomass-fired grate boiler. Modeling...... and experiments are both done for the grate boiler. The comparison between them shows an overall acceptable agreement in tendency. However at some measuring ports, big discrepancies between the modeling and the experiments are observed, mainly because the modeling-based boundary conditions (BCs) could differ...

  2. Structure of exotic nuclei by large-scale shell model calculations

    International Nuclear Information System (INIS)

    Utsuno, Yutaka; Otsuka, Takaharu; Mizusaki, Takahiro; Honma, Michio

    2006-01-01

    An extensive large-scale shell-model study is conducted for unstable nuclei around N = 20 and N = 28, aiming to investigate how the shell structure evolves from stable to unstable nuclei and affects the nuclear structure. The structure around N = 20 including the disappearance of the magic number is reproduced systematically, exemplified in the systematics of the electromagnetic moments in the Na isotope chain. As a key ingredient dominating the structure/shell evolution in the exotic nuclei from a general viewpoint, we pay attention to the tensor force. Including a proper strength of the tensor force in the effective interaction, we successfully reproduce the proton shell evolution ranging from N = 20 to 28 without any arbitrary modifications in the interaction and predict the ground state of 42Si to contain a large deformed component

  3. An overview of modeling methods for thermal mixing and stratification in large enclosures for reactor safety analysis

    Energy Technology Data Exchange (ETDEWEB)

    Haihua Zhao; Per F. Peterson

    2010-10-01

    Thermal mixing and stratification phenomena play major roles in the safety of reactor systems with large enclosures, such as containment safety in the current fleet of LWRs, long-term passive containment cooling in Gen III+ plants including AP-1000 and ESBWR, cold and hot pool mixing in pool-type sodium cooled fast reactor systems (SFR), and reactor cavity cooling system behavior in high temperature gas cooled reactors (HTGR). Depending on the fidelity requirement and computational resources, 0-D steady state models (heat transfer correlations), 0-D lumped-parameter transient models, 1-D physics-based coarse grain models, and 3-D CFD models are available. Current major system analysis codes either have no models or only 0-D models for thermal stratification and mixing, which can only give highly approximate results for simple cases. While 3-D CFD methods can be used to analyze simple configurations, these methods require very fine grid resolution to resolve thin substructures such as jets and wall boundaries. Due to prohibitive computational expenses for long transients in very large volumes, 3-D CFD simulations remain impractical for system analyses. For mixing in stably stratified large enclosures, UC Berkeley developed 1-D models based on Zuber's hierarchical two-tiered scaling analysis (HTTSA) method, in which the ambient fluid volume is represented by 1-D transient partial differential equations and substructures such as free or wall jets are modeled with 1-D integral models. This allows very large reductions in computational effort compared to 3-D CFD modeling. This paper presents an overview of important thermal mixing and stratification phenomena in large enclosures for different reactors, the major modeling methods with their advantages and limits, and potential paths to improve simulation capability and reduce analysis uncertainty in this area for advanced reactor system analysis tools.

  4. Validating modeled turbulent heat fluxes across large freshwater surfaces

    Science.gov (United States)

    Lofgren, B. M.; Fujisaki-Manome, A.; Gronewold, A.; Anderson, E. J.; Fitzpatrick, L.; Blanken, P.; Spence, C.; Lenters, J. D.; Xiao, C.; Charusambot, U.

    2017-12-01

    Turbulent fluxes of latent and sensible heat are important physical processes that influence the energy and water budgets of the Great Lakes. Validation and improvement of bulk flux algorithms to simulate these turbulent heat fluxes are critical for accurate prediction of hydrodynamics, water levels, weather, and climate over the region. Here we consider five heat flux algorithms from several model systems: the Finite-Volume Community Ocean Model (FVCOM), the Weather Research and Forecasting (WRF) model, and the Large Lake Thermodynamics Model, which are used in research and operational environments and concentrate on different aspects of the Great Lakes' physical system, but interface at the lake surface. The heat flux algorithms were isolated from each model and driven by meteorological data from over-lake stations in the Great Lakes Evaporation Network. The simulation results were compared with eddy covariance flux measurements at the same stations. All models show the capacity to capture the seasonal cycle of the turbulent heat fluxes. Overall, the Coupled Ocean Atmosphere Response Experiment (COARE) algorithm in FVCOM has the best agreement with eddy covariance measurements. Simulations with the other four algorithms are overall improved by updating the parameterization of the roughness length scales of temperature and humidity. Agreement between modelled and observed fluxes varied notably with the geographical locations of the stations. For example, at the Long Point station in Lake Erie, observed fluxes are likely influenced by the upwind land surface while the simulations do not take account of the land surface influence, and therefore the agreement is worse in general.
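
    Bulk flux algorithms of the kind compared above parameterize the fluxes as H = rho * cp * CH * U * (Ts - Ta) and LE = rho * Lv * CE * U * (qs - qa). A minimal sketch with constant, illustrative transfer coefficients (real schemes such as COARE compute CH and CE from atmospheric stability and roughness lengths):

```python
RHO_AIR = 1.2      # air density, kg/m^3
CP_AIR = 1004.0    # specific heat of air at constant pressure, J/(kg K)
LV = 2.5e6         # latent heat of vaporization, J/kg

def bulk_fluxes(wind, t_sfc, t_air, q_sfc, q_air, ch=1.3e-3, ce=1.3e-3):
    """Bulk aerodynamic sensible (H) and latent (LE) heat fluxes, W/m^2.

    ch/ce are illustrative neutral transfer coefficients; operational
    algorithms adjust them for stability, so treat these as placeholders.
    """
    h = RHO_AIR * CP_AIR * ch * wind * (t_sfc - t_air)       # sensible
    le = RHO_AIR * LV * ce * wind * (q_sfc - q_air)          # latent
    return h, le

# Cool autumn air over a still-warm lake: both fluxes directed upward.
h, le = bulk_fluxes(wind=8.0, t_sfc=12.0, t_air=5.0,
                    q_sfc=0.0087, q_air=0.004)
print(round(h, 1), round(le, 1))
```

    The strong lake-air temperature and humidity contrasts of late autumn are what drive the large evaporation events these algorithms must capture.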

  5. An improved mounting device for attaching intracranial probes in large animal models.

    Science.gov (United States)

    Dunster, Kimble R

    2015-12-01

    The rigid support of intracranial probes can be difficult when using animal models, as mounting devices suitable for the probes are either not available or are designed for human use and not suitable for animal skulls. A cheap and reliable mounting device for securing intracranial probes in large animal models is described. Using commonly available clinical consumables, a universal mounting device for securing intracranial probes to the skull of large animals was developed and tested. The simply made device holds a variety of probes from 500 μm to 1.3 mm in diameter to the skull. It was used to hold probes to the skulls of sheep for up to 18 h. No adhesives or cements were used. The described device provides a reliable method of securing probes to the skull of animals.

  6. Two-Dimensional Physical and CFD Modelling of Large Gas Bubble Behaviour in Bath Smelting Furnaces

    Directory of Open Access Journals (Sweden)

    Yuhua Pan

    2010-09-01

    Full Text Available The behaviour of large gas bubbles in a liquid bath and the mechanisms of splash generation due to gas bubble rupture in high-intensity bath smelting furnaces were investigated by means of physical and mathematical (CFD) modelling techniques. In the physical modelling work, a two-dimensional Perspex model of the pilot plant furnace at CSIRO Process Science and Engineering was established in the laboratory. An aqueous glycerol solution was used to simulate liquid slag. Air was injected via a submerged lance into the liquid bath and the bubble behaviour and the resultant splashing phenomena were observed and recorded with a high-speed video camera. In the mathematical modelling work, a two-dimensional CFD model was developed to simulate the free surface flows due to motion and deformation of large gas bubbles in the liquid bath and rupture of the bubbles at the bath free surface. It was concluded from these modelling investigations that the splashes generated in high-intensity bath smelting furnaces are mainly caused by the rupture of fast rising large gas bubbles. The acceleration of the bubbles into the preceding bubbles and the rupture of the coalescent bubbles at the bath surface contribute significantly to splash generation.

  7. Forcings and feedbacks on convection in the 2010 Pakistan flood: Modeling extreme precipitation with interactive large-scale ascent

    Science.gov (United States)

    Nie, Ji; Shaevitz, Daniel A.; Sobel, Adam H.

    2016-09-01

    Extratropical extreme precipitation events are usually associated with large-scale flow disturbances, strong ascent, and large latent heat release. The causal relationships between these factors are often not obvious, however, and the roles of different physical processes in producing the extreme precipitation event can be difficult to disentangle. Here we examine the large-scale forcings and convective heating feedback in the precipitation events that caused the 2010 Pakistan flood, within the Column Quasi-Geostrophic framework. A cloud-resolving model (CRM) is forced with large-scale forcings (other than large-scale vertical motion) computed from the quasi-geostrophic omega equation using input data from a reanalysis data set, and the large-scale vertical motion is diagnosed interactively with the simulated convection. Numerical results show that the positive feedback of convective heating to large-scale dynamics is essential in amplifying the precipitation intensity to the observed values. Orographic lifting is the most important dynamic forcing in both events, while differential potential vorticity advection also contributes to the triggering of the first event. Horizontal moisture advection modulates the extreme events mainly by setting the environmental humidity, which modulates the amplitude of the convection's response to the dynamic forcings. When the CRM is replaced by either a single-column model (SCM) with parameterized convection or a dry model with a reduced effective static stability, the model results show substantial discrepancies compared with reanalysis data. The reasons for these discrepancies are examined, and the implications for global models and theoretical models are discussed.

  8. Validation of CALMET/CALPUFF models simulations around a large power plant stack

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Garces, A.; Souto, J. A.; Rodriguez, A.; Saavedra, S.; Casares, J. J.

    2015-07-01

    The CALMET/CALPUFF modeling system is frequently used in the study of atmospheric processes and pollution, and several validation tests have been performed to date; nevertheless, most of them were based on experiments with a large compilation of surface and aloft meteorological measurements, rarely available. At the same time, the use of a large operational smokestack as a tracer/pollutant source is not usual. In this work, the CALMET meteorological diagnostic model is first nested to WRF meteorological prognostic model simulations (3x3 km2 horizontal resolution) over a complex terrain and coastal domain at NW Spain, covering 100x100 km2, with a coal-fired power plant emitting SO2. Simulations were performed during three different periods when SO2 hourly ground-level concentration (glc) peaks were observed. NCEP reanalyses were applied as initial and boundary conditions. The Yonsei University (YSU) PBL scheme was selected in the WRF model to provide the best input to three different CALMET horizontal resolutions, 1x1 km2, 0.5x0.5 km2, and 0.2x0.2 km2. The best results, very similar between them, were achieved using the last two resolutions; therefore, the 0.5x0.5 km2 resolution was selected to test different CALMET meteorological inputs, using several combinations of WRF outputs and/or surface and upper-air measurements available in the simulation domain. With respect to the models' aloft output, CALMET PBL depth estimates are very similar to PBL depth estimates using upper-air measurements (rawinsondes), and significantly better than WRF PBL depth results. Regarding the models' surface output, the available meteorological sites were divided into two groups, one to provide meteorological input to CALMET (when applied), and another for model validation. Comparing WRF and CALMET outputs against surface measurements (from the model-validation sites), the lowest RMSE was achieved using as CALMET input dataset WRF output combined with

  9. Validation of CALMET/CALPUFF models simulations around a large power plant stack

    Energy Technology Data Exchange (ETDEWEB)

    Hernandez-Garces, A.; Souto Rodriguez, J.A.; Saavedra, S.; Casares, J.J.

    2015-07-01

    The CALMET/CALPUFF modeling system is frequently used in the study of atmospheric processes and pollution, and several validation tests have been performed to date; nevertheless, most of them were based on experiments with a large compilation of surface and aloft meteorological measurements, rarely available. At the same time, the use of a large operational smokestack as a tracer/pollutant source is not usual. In this work, the CALMET meteorological diagnostic model is first nested to WRF meteorological prognostic model simulations (3x3 km2 horizontal resolution) over a complex terrain and coastal domain at NW Spain, covering 100x100 km2, with a coal-fired power plant emitting SO2. Simulations were performed during three different periods when SO2 hourly ground-level concentration (glc) peaks were observed. NCEP reanalyses were applied as initial and boundary conditions. The Yonsei University (YSU) PBL scheme was selected in the WRF model to provide the best input to three different CALMET horizontal resolutions, 1x1 km2, 0.5x0.5 km2, and 0.2x0.2 km2. The best results, very similar between them, were achieved using the last two resolutions; therefore, the 0.5x0.5 km2 resolution was selected to test different CALMET meteorological inputs, using several combinations of WRF outputs and/or surface and upper-air measurements available in the simulation domain. With respect to the models' aloft output, CALMET PBL depth estimates are very similar to PBL depth estimates using upper-air measurements (rawinsondes), and significantly better than WRF PBL depth results. Regarding the models' surface output, the available meteorological sites were divided into two groups, one to provide meteorological input to CALMET (when applied), and another for model validation. Comparing WRF and CALMET outputs against surface measurements (from the model-validation sites), the lowest RMSE was achieved using as CALMET input dataset WRF output combined with surface measurements (from sites for CALMET model

  10. Validation of CALMET/CALPUFF models simulations around a large power plant stack

    International Nuclear Information System (INIS)

    Hernandez-Garces, A.; Souto, J. A.; Rodriguez, A.; Saavedra, S.; Casares, J. J.

    2015-01-01

    The CALMET/CALPUFF modeling system is frequently used in the study of atmospheric processes and pollution, and several validation tests have been performed to date; nevertheless, most of them were based on experiments with a large compilation of surface and aloft meteorological measurements, rarely available. At the same time, the use of a large operational smokestack as a tracer/pollutant source is not usual. In this work, the CALMET meteorological diagnostic model is first nested to WRF meteorological prognostic model simulations (3x3 km2 horizontal resolution) over a complex terrain and coastal domain at NW Spain, covering 100x100 km2, with a coal-fired power plant emitting SO2. Simulations were performed during three different periods when SO2 hourly ground-level concentration (glc) peaks were observed. NCEP reanalyses were applied as initial and boundary conditions. The Yonsei University (YSU) PBL scheme was selected in the WRF model to provide the best input to three different CALMET horizontal resolutions, 1x1 km2, 0.5x0.5 km2, and 0.2x0.2 km2. The best results, very similar between them, were achieved using the last two resolutions; therefore, the 0.5x0.5 km2 resolution was selected to test different CALMET meteorological inputs, using several combinations of WRF outputs and/or surface and upper-air measurements available in the simulation domain. With respect to the models' aloft output, CALMET PBL depth estimates are very similar to PBL depth estimates using upper-air measurements (rawinsondes), and significantly better than WRF PBL depth results. Regarding the models' surface output, the available meteorological sites were divided into two groups, one to provide meteorological input to CALMET (when applied), and another for model validation. Comparing WRF and CALMET outputs against surface measurements (from the model-validation sites), the lowest RMSE was achieved using as CALMET input dataset WRF output combined with surface measurements (from sites for

  11. Large scale air pollution estimation method combining land use regression and chemical transport modeling in a geostatistical framework.

    Science.gov (United States)

    Akita, Yasuyuki; Baldasano, Jose M; Beelen, Rob; Cirach, Marta; de Hoogh, Kees; Hoek, Gerard; Nieuwenhuijsen, Mark; Serre, Marc L; de Nazelle, Audrey

    2014-04-15

    In recognition that intraurban exposure gradients may be as large as between-city variations, recent air pollution epidemiologic studies have become increasingly interested in capturing within-city exposure gradients. In addition, because of the rapidly accumulating health data, recent studies also need to handle large study populations distributed over large geographic domains. Even though several modeling approaches have been introduced, a consistent modeling framework capturing within-city exposure variability and applicable to large geographic domains is still missing. To address these needs, we proposed a modeling framework based on the Bayesian Maximum Entropy method that integrates monitoring data and outputs from existing air quality models based on Land Use Regression (LUR) and Chemical Transport Models (CTM). The framework was applied to estimate the yearly average NO2 concentrations over the region of Catalunya in Spain. By jointly accounting for the global scale variability in the concentration from the output of CTM and the intraurban scale variability through LUR model output, the proposed framework outperformed more conventional approaches.
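
    As a hedged stand-in for the Bayesian Maximum Entropy fusion step (the actual BME machinery is considerably more involved), the core idea of correcting a CTM/LUR-style base field with monitoring-station residuals can be sketched with simple inverse-distance weighting; all stations, values, and coordinates below are invented:

```python
import numpy as np

def idw_residual_blend(x_mon, y_mon, obs, base_at_mon, base_at_tgt,
                       x_tgt, y_tgt, power=2.0):
    """Correct a base (model) field with inverse-distance-weighted monitor
    residuals -- a simple analogue of geostatistical data fusion."""
    resid = np.asarray(obs, float) - np.asarray(base_at_mon, float)
    d = np.hypot(x_tgt - np.asarray(x_mon, float),
                 y_tgt - np.asarray(y_mon, float))
    w = 1.0 / np.maximum(d, 1e-9) ** power   # guard against zero distance
    return base_at_tgt + np.sum(w * resid) / np.sum(w)

# Three hypothetical NO2 monitors around a target cell; the base field
# underpredicts, so the blended estimate is pulled upward.
est = idw_residual_blend(
    x_mon=[0.0, 2.0, 0.0], y_mon=[0.0, 0.0, 2.0],
    obs=[30.0, 26.0, 24.0], base_at_mon=[26.0, 24.0, 21.0],
    base_at_tgt=22.0, x_tgt=1.0, y_tgt=1.0)
print(round(est, 2))  # → 25.0 (equidistant monitors, mean residual +3)
```

    BME additionally propagates the uncertainty of both the monitors and the model fields, which is what lets it outperform this kind of deterministic blend.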

  12. Underestimation of glucose turnover measured with [6-3H]- and [6,6-2H]- but not [6-14C]glucose during hyperinsulinemia in humans

    International Nuclear Information System (INIS)

    McMahon, M.M.; Schwenk, W.F.; Haymond, M.W.; Rizza, R.A.

    1989-01-01

    Recent studies indicate that hydrogen-labeled glucose tracers underestimate glucose turnover in humans under conditions of high flux. The cause of this underestimation is unknown. To determine whether the error is time-, pool-, model-, or insulin-dependent, glucose turnover was measured simultaneously with [6-3H]-, [6,6-2H2]-, and [6-14C]glucose during a 7-h infusion of either insulin (1 mU.kg-1.min-1) or saline. During the insulin infusion, steady-state glucose turnover measured with both [6-3H]glucose (8.0 +/- 0.5 mg.kg-1.min-1) and [6,6-2H2]glucose (7.6 +/- 0.5 mg.kg-1.min-1) was lower (P < .01) than either the glucose infusion rate required to maintain euglycemia (9.8 +/- 0.7 mg.kg-1.min-1) or glucose turnover determined with [6-14C]glucose and corrected for Cori cycle activity (9.8 +/- 0.7 mg.kg-1.min-1). Consequently, negative glucose production rates (P < .01) were obtained with either [6-3H]- or [6,6-2H2]- but not [6-14C]glucose. The difference between turnover estimated with [6-3H]glucose and actual glucose disposal (or 14C glucose flux) did not decrease with time and was not dependent on the duration of isotope infusion. During saline infusion, estimates of glucose turnover were similar regardless of the glucose tracer used. High-performance liquid chromatography of the radioactive glucose tracer and plasma revealed the presence of a tritiated nonglucose contaminant. Although the contaminant represented only 1.5% of the radioactivity in the [6-3H]glucose infusate, its clearance was 10-fold less (P < .001) than that of [6-3H]glucose. This resulted in accumulation in plasma, with the contaminant accounting for 16.6 +/- 2.09 and 10.8 +/- 0.9% of what customarily is assumed to be plasma glucose radioactivity during the insulin or saline infusion, respectively (P < .01).
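
    A back-of-envelope check (our illustration, not a calculation from the paper) shows how such a contaminant depresses the apparent turnover. At steady state, turnover Ra equals the tracer infusion rate divided by the plasma specific activity; a contaminant contributing fraction c of the counted plasma activity inflates the apparent specific activity, so the apparent Ra is (1 - c) times the true value:

```python
# Steady-state tracer dilution: Ra = F / SA. If a nonglucose contaminant
# contributes fraction c of the counted plasma "glucose" activity, the
# apparent SA is too high by 1/(1 - c), so apparent Ra = (1 - c) * true Ra.

true_ra = 9.8        # mg/kg/min, disposal from [6-14C]glucose (abstract)
c = 0.166            # contaminant fraction during insulin infusion (abstract)
apparent_ra = (1 - c) * true_ra
print(round(apparent_ra, 1))  # close to the 8.0 measured with [6-3H]glucose
```

    The predicted apparent turnover (~8.2 mg/kg/min) lands near the 8.0 actually measured with the tritiated tracer, consistent with the contaminant explaining most of the underestimation.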

  13. Large scale injection test (LASGIT) modelling

    International Nuclear Information System (INIS)

    Arnedo, D.; Olivella, S.; Alonso, E.E.

    2010-01-01

    Document available in extended abstract form only. With the objective of understanding gas flow processes through clay barriers in radioactive waste disposal schemes, the Lasgit in situ experiment was planned and is currently in progress. Modelling of the experiment will permit a better understanding of the observed responses, confirm hypotheses about mechanisms and processes, and inform the design of future experiments. The experiment and modelling activities are included in the FORGE project (FP7). The in situ large scale injection test Lasgit is currently being performed at the Aespoe Hard Rock Laboratory by SKB and BGS. A schematic layout of the test is shown. The deposition hole follows the KBS3 scheme. A copper canister is installed along the axis of the deposition hole, surrounded by blocks of highly compacted MX-80 bentonite. A concrete plug is placed at the top of the buffer. A metallic lid anchored to the surrounding host rock prevents vertical movement of the whole system during the gas injection stages (high gas injection pressures are expected to be reached). Hydration of the buffer material is achieved by injecting water through filter mats, two placed at the rock walls and two at the interfaces between bentonite blocks. Water is also injected through the 12 canister filters. Gas injection stages are performed by injecting gas into some of the canister injection filters. Since the water pressure and the stresses (swelling pressure development) will be high during gas injection, it is necessary to inject at high gas pressures. This implies mechanical couplings: gas penetrates once the gas entry pressure is exceeded and may produce deformations, which in turn lead to permeability increases. A 3D hydro-mechanical numerical model of the test using CODE-BRIGHT is presented. The domain considered for the modelling is shown. 
The materials considered in the simulation are the MX-80 bentonite blocks (cylinders and rings), the concrete plug
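
    The hydro-mechanical coupling described above, in which gas penetrates once the entry pressure is exceeded and the resulting deformation raises permeability, can be caricatured as a pressure-dependent permeability law. This is a generic sketch, not the constitutive model actually used in CODE-BRIGHT; the parameters k0 and beta are hypothetical:

    ```python
    # Generic pressure-dependent permeability sketch: below the gas entry
    # pressure the intrinsic permeability stays at its initial value k0;
    # above it, deformation opens flow paths and permeability grows with
    # the excess pressure (cubic-law-like aperture opening).
    def gas_permeability(p_gas_mpa, p_entry_mpa, k0_m2, beta_per_mpa):
        if p_gas_mpa <= p_entry_mpa:
            return k0_m2
        opening = 1.0 + beta_per_mpa * (p_gas_mpa - p_entry_mpa)
        return k0_m2 * opening ** 3

    k0 = 1e-21                                   # m2, order of magnitude for compacted bentonite
    print(gas_permeability(4.0, 5.0, k0, 0.5))   # below entry pressure: 1e-21
    print(gas_permeability(7.0, 5.0, k0, 0.5))   # 2 MPa excess: 8e-21
    ```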

  14. Monitoring carnivore populations at the landscape scale: occupancy modelling of tigers from sign surveys

    Science.gov (United States)

    Karanth, Kota Ullas; Gopalaswamy, Arjun M.; Kumar, Narayanarao Samba; Vaidyanathan, Srinivas; Nichols, James D.; MacKenzie, Darryl I.

    2011-01-01

    1. Assessing spatial distributions of threatened large carnivores at landscape scales poses formidable challenges because of their rarity and elusiveness. As a consequence of logistical constraints, investigators typically rely on sign surveys. Most survey methods, however, do not explicitly address the central problem of imperfect detection of animal signs in the field, leading to underestimates of true habitat occupancy and distribution. 2. We assessed habitat occupancy for a tiger Panthera tigris metapopulation across a c. 38 000-km2 landscape in India, employing a spatially replicated survey to explicitly address imperfect detections. Ecological predictions about tiger presence were confronted with sign detection data generated from occupancy sampling of 205 sites, each of 188 km2. 3. A recent occupancy model that considers Markovian dependency among sign detections on spatial replicates performed better than the standard occupancy model (ΔAIC = 184·9). The formulation of this model that fitted the data best showed that density of ungulate prey and levels of human disturbance were key determinants of local tiger presence. Model averaging resulted in a replicate-level sign detection probability of p̂ = 0·17 (0·17) and a tiger habitat occupancy estimate of ψ̂ = 0·665 (0·0857), or 14 076 (1814) km2 of potential habitat out of 21 167 km2. In contrast, a traditional presence-versus-absence approach underestimated occupancy by 47%. Maps of probabilities of local site occupancy clearly identified tiger source populations at higher densities and matched observed tiger density variations, suggesting their potential utility for population assessments at landscape scales.
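
    The headline figures can be reproduced from the reported estimates. A sketch, with illustrative function names; the cumulative-detection formula assumes independent replicates, which the better-fitting Markovian model relaxes:

    ```python
    # psi_hat is the modelled probability that a site is occupied after
    # correcting for imperfect sign detection; multiplying by the total
    # potential habitat gives the occupied-area estimate.
    def occupied_area_km2(psi_hat, total_km2):
        return psi_hat * total_km2

    # Chance of detecting sign at an occupied site at least once in n
    # replicates, assuming independent replicates with detection p.
    def cumulative_detection(p, n):
        return 1.0 - (1.0 - p) ** n

    print(round(occupied_area_km2(0.665, 21167)))  # 14076 km2, as reported
    print(cumulative_detection(0.17, 10))          # ~0.84: even well-surveyed
                                                   # occupied sites can yield
                                                   # no detections
    ```

    The gap between cumulative detection and 1 is why a raw presence-versus-absence tally, which scores every no-detection site as unoccupied, underestimates occupancy.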