WorldWideScience

Sample records for validate biogeochemical models

  1. What sea-ice biogeochemical modellers need from observers

    OpenAIRE

    Steiner, Nadja; Deal, Clara; Lannuzel, Delphine; Lavoie, Diane; Massonnet, François; Miller, Lisa A.; Moreau, Sebastien; Popova, Ekaterina; Stefels, Jacqueline; Tedesco, Letizia

    2016-01-01

    Numerical models can be a powerful tool for understanding the role biogeochemical processes play in local and global systems and how this role may be altered in a changing climate. With respect to sea-ice biogeochemical models, our knowledge is severely limited by our poor confidence in the numerical parameterisations representing those processes. Improving model parameterisations requires communication between observers and modellers to guide model development and improve the ...

  2. Wetland biogeochemical processes and simulation modeling

    Science.gov (United States)

    Bai, Junhong; Huang, Laibin; Gao, Haifeng; Jia, Jia; Wang, Xin

    2018-02-01

    As important landscapes with rich biodiversity and high productivity, wetlands provide numerous ecological services, including regulating global biogeochemical cycles, filtering pollutants from terrestrial runoff and atmospheric deposition, protecting and improving water quality, providing habitats for plants and animals, controlling floodwaters, and retaining surface water flow during dry periods (Reddy and DeLaune, 2008; Qin and Mitsch, 2009; Zhao et al., 2016). However, more than 50% of the world's wetlands have been altered, degraded or lost through a wide range of human activities in the past 150 years, and only a small percentage of the original wetlands remain around the world after over two centuries of intensive development and urbanization (O'Connell, 2003; Zhao et al., 2016).

  3. Incorporating nitrogen fixing cyanobacteria in the global biogeochemical model HAMOCC

    Science.gov (United States)

    Paulsen, Hanna; Ilyina, Tatiana; Six, Katharina

    2015-04-01

    Nitrogen fixation by marine diazotrophs plays a fundamental role in the oceanic nitrogen and carbon cycle as it provides a major source of 'new' nitrogen to the euphotic zone that supports biological carbon export and sequestration. Since most global biogeochemical models include nitrogen fixation only diagnostically, they are not able to capture its spatial pattern sufficiently. Here we present the incorporation of an explicit, dynamic representation of diazotrophic cyanobacteria and the corresponding nitrogen fixation in the global ocean biogeochemical model HAMOCC (Hamburg Ocean Carbon Cycle model), which is part of the Max Planck Institute for Meteorology Earth system model (MPI-ESM). The parameterization of diazotrophic growth is based on available knowledge about the cyanobacterium Trichodesmium spp., which is considered the most significant pelagic nitrogen fixer. Evaluation against observations shows that the model successfully reproduces the main spatial distribution of cyanobacteria and nitrogen fixation, covering large parts of the tropical and subtropical oceans. Besides the role of cyanobacteria in marine biogeochemical cycles, their capacity to form extensive surface blooms induces a number of bio-physical feedback mechanisms in the Earth system. The processes driving these interactions, which are related to the alteration of heat absorption, surface albedo and momentum input by wind, are incorporated in the biogeochemical and physical model of the MPI-ESM in order to investigate their impacts on a global scale. First preliminary results will be shown.

  4. Surrogate-Based Optimization of Biogeochemical Transport Models

    Science.gov (United States)

    Prieß, Malte; Slawig, Thomas

    2010-09-01

    First approaches towards a surrogate-based optimization method for a one-dimensional marine biogeochemical model of NPZD type are presented. The model, developed by Oschlies and Garcon [1], simulates the distribution of nitrogen, phytoplankton, zooplankton and detritus in a water column and is driven by ocean circulation data. A key issue is to minimize the misfit between the model output and given observational data. Our aim is to reduce the overall optimization cost by avoiding expensive function and derivative evaluations, using a surrogate model in place of the high-fidelity model in focus. This becomes particularly important for more complex three-dimensional models. We analyse a coarsening of the discretization of the model equations as one way to create such a surrogate. Here the numerical stability crucially depends upon the discrete step size in time and space and on the biochemical terms. We show that for given model parameters the level of grid coarsening can be chosen accordingly, yielding a stable and satisfactory surrogate. As one example of a surrogate-based optimization method we present results of the Aggressive Space Mapping technique (developed by John W. Bandler [2, 3]) applied to the optimization of this one-dimensional biogeochemical transport model.
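    The coarse-grid surrogate idea described above can be illustrated with a toy sketch (this is not the authors' NPZD code; the decay model, step counts and rates below are invented for illustration): an expensive "fine" simulation is replaced during optimization by the same model run with far fewer time steps.

```python
def simulate(decay_rate, n_steps, t_end=1.0, y0=1.0):
    """Explicit-Euler solution of dy/dt = -decay_rate * y, a stand-in for
    an expensive biogeochemical simulation.  n_steps sets the fidelity:
    many steps = 'fine' model, few steps = cheap coarse surrogate."""
    dt = t_end / n_steps
    y = y0
    for _ in range(n_steps):
        y += dt * (-decay_rate * y)
    return y

def misfit(decay_rate, target, n_steps):
    """Squared misfit between model output and an observation."""
    return (simulate(decay_rate, n_steps) - target) ** 2

# Synthetic "observation" produced by the fine model with rate 0.7.
target = simulate(0.7, n_steps=1000)

# Grid-search optimization on the cheap coarse surrogate (10 steps),
# versus the same search on the expensive fine model (1000 steps).
rates = [i / 100 for i in range(1, 200)]
best_coarse = min(rates, key=lambda r: misfit(r, target, n_steps=10))
best_fine = min(rates, key=lambda r: misfit(r, target, n_steps=1000))
# The coarse surrogate recovers the rate with a small discretization bias;
# space-mapping techniques correct exactly this kind of systematic offset.
```

    Each coarse evaluation here costs 100x less than a fine one, which is the motivation for surrogate-based optimization of three-dimensional models.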

  5. Modelling benthic biophysical drivers of ecosystem structure and biogeochemical response

    Science.gov (United States)

    Stephens, Nicholas; Bruggeman, Jorn; Lessin, Gennadi; Allen, Icarus

    2016-04-01

    The fate of carbon deposited at the sea floor is ultimately decided by biophysical drivers that control the efficiency of remineralisation and timescale of carbon burial in sediments. Specifically, these drivers include bioturbation through ingestion and movement, burrow-flushing and sediment reworking, which enhance vertical particulate transport and solute diffusion. Unfortunately, these processes are rarely satisfactorily resolved in models. To address this, a benthic model that explicitly describes the vertical position of biology (e.g., habitats) and biogeochemical processes is presented that includes biological functionality and biogeochemical response capturing changes in ecosystem structure, benthic-pelagic fluxes and biodiversity on inter-annual timescales. This is demonstrated by the model's ability to reproduce temporal variability in benthic infauna, vertical pore water nutrients and pelagic-benthic solute fluxes compared to in-situ data. A key advance is the replacement of bulk parameterisation of bioturbation by explicit description of the bio-physical processes responsible. This permits direct comparison with observations and determination of key parameters in experiments. Crucially, the model resolves the two-way interaction between sediment biogeochemistry and ecology, allowing exploration of the benthic response to changing environmental conditions, the importance of infaunal functional traits in shaping benthic ecological structure and the feedback the resulting bio-physical processes exert on pore water nutrient profiles. The model is actively being used to understand shelf sea carbon cycling, the response of the benthos to climatic change, food provision and other societal benefits.

  6. High resolution modelling of the biogeochemical processes in the eutrophic Loire River (France)

    Science.gov (United States)

    Minaudo, Camille; Moatar, Florentina; Curie, Florence; Gassama, Nathalie; Billen, Gilles

    2016-04-01

    A biogeochemical model was developed, coupling a physically based water temperature model (T-NET) with a semi-mechanistic biogeochemical model (RIVE, used in the ProSe and Riverstrahler models) in order to assess at a fine temporal and spatial resolution the biogeochemical processes in the eutrophic Middle Loire hydrosystem (≈10 000 km², 3361 river segments). The code itself allows parallelized computing, which greatly decreased the calculation time (5 hours for simulating 3 years hourly). We conducted a daily survey of nutrients, chlorophyll pigments, phytoplankton and physico-chemical variables during the period 2012-2014 at two sampling stations located in the Middle Loire. This database was used as both input data (upstream Loire boundary) and validation data for the model (basin outlet). Diffuse and point sources were assessed based on a land cover analysis and WWTP datasets. The results proved very sensitive to the coefficients governing the dynamics of suspended solids and of phosphorus (sorption/desorption processes) within the model, and some parameters needed to be estimated numerically. Both the Lagrangian point of view and flux budgets at the seasonal and event-based scale evidenced the biogeochemical functioning of the Loire River. Low discharge levels set up favorable physical conditions for phytoplankton growth (long water travel time, limited water depth, suspended particle sedimentation). Conversely, higher discharge levels strongly limited the phytoplankton biomass (dilution of the colony, washing-out, limited travel time, remobilization of suspended sediments increasing turbidity), and most biogeochemical species were essentially transferred downstream. When hydrological conditions remained favorable for phytoplankton development, P-availability was the critical factor. However, the model evidenced that most of the P in summer was recycled within the water body: on one hand it was assimilated by the algae biomass, and on the other hand it was

  7. The acclimative biogeochemical model of the southern North Sea

    Science.gov (United States)

    Kerimoglu, Onur; Hofmeister, Richard; Maerz, Joeran; Riethmüller, Rolf; Wirtz, Kai W.

    2017-10-01

    Ecosystem models often rely on heuristic descriptions of autotrophic growth that fail to reproduce various stationary and dynamic states of phytoplankton cellular composition observed in laboratory experiments. Here, we present the integration of an advanced phytoplankton growth model within a coupled three-dimensional physical-biogeochemical model and the application of the model system to the southern North Sea (SNS) defined on a relatively high resolution (˜ 1.5-4.5 km) curvilinear grid. The autotrophic growth model, recently introduced by Wirtz and Kerimoglu (2016), is based on a set of novel concepts for the allocation of internal resources and operation of cellular metabolism. The coupled model system consists of the General Estuarine Transport Model (GETM) as the hydrodynamical driver, a lower-trophic-level model and a simple sediment diagenesis model. We force the model system with realistic atmospheric and riverine fluxes, background turbidity caused by suspended particulate matter (SPM) and open ocean boundary conditions. For a simulation for the period 2000-2010, we show that the model system satisfactorily reproduces the physical and biogeochemical states of the system within the German Bight characterized by steep salinity, nutrient and chlorophyll (Chl) gradients, as inferred from comparisons against observational data from long-term monitoring stations, sparse in situ measurements, continuous transects, and satellites. The model also displays skill in capturing the formation of thin chlorophyll layers at the pycnocline, which is frequently observed within the stratified regions during summer. A sensitivity analysis reveals that the vertical distributions of phytoplankton concentrations estimated by the model can be qualitatively sensitive to the description of the light climate and the dependence of sinking rates on the internal nutrient reserves. A non-acclimative (fixed-physiology) version of the model predicted entirely different vertical profiles

  8. Development of interactive graphic user interfaces for modeling reaction-based biogeochemical processes in batch systems with BIOGEOCHEM

    Science.gov (United States)

    Chang, C.; Li, M.; Yeh, G.

    2010-12-01

    The BIOGEOCHEM numerical model (Yeh and Fang, 2002; Fang et al., 2003) was developed in FORTRAN for simulating reaction-based geochemical and biochemical processes with mixed equilibrium and kinetic reactions in batch systems. A complete suite of reactions, including aqueous complexation, adsorption/desorption, ion exchange, redox, precipitation/dissolution, acid-base reactions, and microbially mediated reactions, is embodied in this unique modeling tool. Any reaction can be treated as fast/equilibrium or slow/kinetic. An equilibrium reaction is modeled with an implicit finite rate governed by a mass action equilibrium equation or by a user-specified algebraic equation. A kinetic reaction is modeled with an explicit finite rate with an elementary rate, microbially mediated enzymatic kinetics, or a user-specified rate equation. No other existing model encompasses such a wide scope. To ease the input/output learning curve for the unique features of BIOGEOCHEM, an interactive graphic user interface was developed with Microsoft Visual Studio and .NET tools. Several robust, user-friendly features, such as pop-up help windows, typo warning messages, and on-screen input hints, were implemented. All input data can be viewed in real time and are automatically formatted to conform to the input file format of BIOGEOCHEM. A post-processor for graphic visualization of simulated results was also embedded for immediate demonstrations. By following the data input windows step by step, error-free BIOGEOCHEM input files can be created even by users with little prior experience in FORTRAN. With this user-friendly interface, the time and effort required to conduct simulations with BIOGEOCHEM can be greatly reduced.

  9. Deriving forest fire ignition risk with biogeochemical process modelling.

    Science.gov (United States)

    Eastaugh, C S; Hasenauer, H

    2014-05-01

    Climate impacts the growth of trees and also affects disturbance regimes such as wildfire frequency. The European Alps have warmed considerably over the past half-century, but incomplete records make it difficult to definitively link alpine wildfire to climate change. Complicating this is the influence of forest composition and fuel loading on fire ignition risk, which is not considered by purely meteorological risk indices. Biogeochemical forest growth models track several variables that may be used as proxies for fire ignition risk. This study assesses the usefulness of the ecophysiological model BIOME-BGC's 'soil water' and 'labile litter carbon' variables in predicting fire ignition. A brief application case examines historic fire occurrence trends over pre-defined regions of Austria from 1960 to 2008. Results show that summer fire ignition risk is largely a function of low soil moisture, while winter fire ignitions are linked to the mass of volatile litter and atmospheric dryness.

  10. Traceable components of terrestrial carbon storage capacity in biogeochemical models.

    Science.gov (United States)

    Xia, Jianyang; Luo, Yiqi; Wang, Ying-Ping; Hararuk, Oleksandra

    2013-07-01

    Biogeochemical models have been developed to account for more and more processes, making their complex structures difficult to understand and evaluate. Here, we introduce a framework to decompose a complex land model into traceable components based on mutually independent properties of modeled biogeochemical processes. The framework traces modeled ecosystem carbon storage capacity (Xss) to (i) a product of net primary productivity (NPP) and ecosystem residence time (τE). The latter τE can be further traced to (ii) baseline carbon residence times (τ'E), which are usually preset in a model according to vegetation characteristics and soil types, (iii) environmental scalars (ξ), including temperature and water scalars, and (iv) environmental forcings. We applied the framework to the Australian Community Atmosphere Biosphere Land Exchange (CABLE) model to help understand differences in modeled carbon processes among biomes and as influenced by nitrogen processes. With the climate forcings of 1990, modeled evergreen broadleaf forest had the highest NPP among the nine biomes and moderate residence times, leading to a relatively high carbon storage capacity (31.5 kg C m-2). Deciduous needleleaf forest had the longest residence time (163.3 years) and low NPP, leading to moderate carbon storage (18.3 kg C m-2). The longest τE in deciduous needleleaf forest was ascribed to its longest τ'E (43.6 years) and small ξ (0.14 on litter/soil carbon decay rates). Incorporation of nitrogen processes into the CABLE model decreased Xss in all biomes via reduced NPP (e.g., -12.1% in shrub land) or decreased τE or both. The decreases in τE resulted from nitrogen-induced changes in τ'E (e.g., -26.7% in C3 grassland) through carbon allocation among plant pools and transfers from plant to litter and soil pools. Our framework can be used to facilitate data-model comparisons and model intercomparisons via tracking a few traceable components for all terrestrial carbon
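    The traceability identity at the heart of this framework, Xss = NPP x τE, is simple enough to sketch. The composition of τE from a baseline residence time and a single environmental scalar below is a hypothetical simplification of the CABLE decomposition, and all numbers are invented for illustration (they are not the paper's biome values):

```python
def carbon_storage_capacity(npp, tau_e):
    """Framework identity: steady-state storage capacity X_ss = NPP * tau_E
    (kg C m^-2  =  kg C m^-2 yr^-1  *  yr)."""
    return npp * tau_e

def ecosystem_residence_time(tau_baseline, env_scalar):
    """Hypothetical composition of tau_E: an environmental scalar in (0, 1]
    multiplies decay rates, so a small scalar lengthens residence time.
    The actual CABLE decomposition is pool-by-pool and more detailed."""
    return tau_baseline / env_scalar

# Illustrative numbers only:
tau_e = ecosystem_residence_time(tau_baseline=20.0, env_scalar=0.5)   # years
x_ss = carbon_storage_capacity(npp=0.6, tau_e=tau_e)                  # kg C m^-2
```

    The point of the decomposition is that a difference in Xss between two models (or biomes) can be attributed unambiguously to NPP, baseline turnover, or environmental scaling.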

  11. Hyporheic flow and transport processes: mechanisms, models, and biogeochemical implications

    Science.gov (United States)

    Boano, Fulvio; Harvey, Judson W.; Marion, Andrea; Packman, Aaron I.; Revelli, Roberto; Ridolfi, Luca; Wörman, Anders

    2014-01-01

    Fifty years of hyporheic zone research have shown the important role played by the hyporheic zone as an interface between groundwater and surface waters. However, it is only in the last two decades that what began as an empirical science has become a mechanistic science devoted to modeling studies of the complex fluid dynamical and biogeochemical mechanisms occurring in the hyporheic zone. These efforts have led to the picture of surface-subsurface water interactions as regulators of the form and function of fluvial ecosystems. Rather than being isolated systems, surface water bodies continuously interact with the subsurface. Exploration of hyporheic zone processes has led to a new appreciation of their wide reaching consequences for water quality and stream ecology. Modern research aims toward a unified approach, in which processes occurring in the hyporheic zone are key elements for the appreciation, management, and restoration of the whole river environment. In this unifying context, this review summarizes results from modeling studies and field observations about flow and transport processes in the hyporheic zone and describes the theories proposed in hydrology and fluid dynamics developed to quantitatively model and predict the hyporheic transport of water, heat, and dissolved and suspended compounds from sediment grain scale up to the watershed scale. The implications of these processes for stream biogeochemistry and ecology are also discussed.

  12. Nitrous Oxide Emissions from Biofuel Crops and Parameterization in the EPIC Biogeochemical Model

    Science.gov (United States)

    This presentation describes year 1 field measurements of N2O fluxes and crop yields which are used to parameterize the EPIC biogeochemical model for the corresponding field site. Initial model simulations are also presented.

  13. A Comprehensive Plan for the Long-Term Calibration and Validation of Oceanic Biogeochemical Satellite Data

    Science.gov (United States)

    Hooker, Stanford B.; McClain, Charles R.; Mannino, Antonio

    2007-01-01

    The primary objective of this planning document is to establish a long-term capability for calibrating and validating oceanic biogeochemical satellite data. It is a pragmatic solution to a practical problem, based primarily on the lessons learned from prior satellite missions. All of the plan's elements are seen to be interdependent, so a horizontal organizational scheme is anticipated wherein the overall leadership comes from the NASA Ocean Biology and Biogeochemistry (OBB) Program Manager and the entire enterprise is split into two components of equal stature: calibration and validation, plus satellite data processing. The detailed elements of the activity are based on the basic tasks of the two main components plus the current objectives of the Carbon Cycle and Ecosystems Roadmap. The former is distinguished by an internal core set of responsibilities and the latter is facilitated through an external connecting-core ring of competed or contracted activities. The core elements for the calibration and validation component include a) publish protocols and performance metrics; b) verify uncertainty budgets; c) manage the development and evaluation of instrumentation; and d) coordinate international partnerships. The core elements for the satellite data processing component are e) process and reprocess multisensor data; f) acquire, distribute, and archive data products; and g) implement new data products. Both components have shared responsibilities for initializing and temporally monitoring satellite calibration. Connecting-core elements include (but are not restricted to) atmospheric correction and characterization, standards and traceability, instrument and analysis round robins, field campaigns and vicarious calibration sites, in situ databases, bio-optical algorithm (and product) validation, satellite characterization and vicarious calibration, and image processing software.
The plan also includes an accountability process, creating a Calibration and Validation Team (to help manage

  14. Biogeochemical Modeling of the Second Rise of Oxygen

    Science.gov (United States)

    Smith, M. L.; Catling, D.; Claire, M.; Zahnle, K.

    2014-03-01

    The rise of atmospheric oxygen set the tempo for the evolution of complex life on Earth. Oxygen levels are thought to have increased in two broad steps: one step occurred in the Archean ~2.45 Ga (the Great Oxidation Event or GOE), and another step occurred in the Neoproterozoic ~750-580 Ma (the Neoproterozoic Oxygenation Event or NOE). During the NOE, oxygen levels increased from ~1-10% of the present atmospheric level (PAL) (Holland, 2006), to ~15% PAL in the late Neoproterozoic, to ~100% PAL later in the Phanerozoic. Complex life requires O2, so this transition allowed complex life to evolve. We seek to understand what caused the NOE. To explore causes for the NOE, we build upon the biogeochemical model of Claire et al. (2006), which calculates the redox evolution of the atmosphere, ocean, biosphere, and crust in the Archean through to the early Proterozoic. In this model, the balance between oxygen-consuming and oxygen-producing fluxes evolves over time such that at ~2.4 Ga, the rapidly acting sources of oxygen outweigh the rapidly acting sinks. In other words, at ~2.4 Ga, the flux of oxygen from organic carbon burial exceeds the sinks of oxygen from reaction with reduced volcanic and metamorphic gases. The model is able to drive oxygen levels to 1-10% PAL in the Proterozoic; however, the evolving redox fluxes in the model cannot explain how oxygen levels pushed above 1-10% in the late Proterozoic. The authors suggest that perhaps another buffer, such as sulfur, is needed to describe Proterozoic and Phanerozoic redox evolution. Geologic proxies show that in the Proterozoic, up to 10% of the deep ocean may have been sulfidic. With this ocean chemistry, the global sulfur cycle would have worked differently than it does today. 
Because the sulfur and oxygen cycles interact, the oxygen concentration could have permanently changed due to an evolving sulfur cycle (in combination with evolving redox fluxes associated with other parts of the oxygen cycle and carbon

  15. Using NEON Data to Test and Refine Conceptual and Numerical Models of Soil Biogeochemical and Microbial Dynamics

    Science.gov (United States)

    Weintraub, S. R.; Stanish, L.; Ayers, E.

    2017-12-01

    Recent conceptual and numerical models have proposed new mechanisms that underpin key biogeochemical phenomena, including soil organic matter storage and ecosystem response to nitrogen deposition. These models seek to explicitly capture the ecological links among biota, especially microbes, and their physical and chemical environment to represent belowground pools and fluxes and how they respond to perturbation. While these models put forth exciting new concepts, their broad predictive abilities are unclear as some have been developed and tested against only small or regional datasets. The National Ecological Observatory Network (NEON) presents new opportunities to test and validate these models with multi-site data that span wide climatic, edaphic, and ecological gradients. NEON is measuring surface soil biogeochemical pools and fluxes along with diversity, abundance, and functional potential of soil microbiota at 47 sites distributed across the United States. This includes co-located measurements of soil carbon and nitrogen concentrations and stable isotopes, net nitrogen mineralization and nitrification rates, soil moisture, pH, microbial biomass, and community composition via 16S and ITS rRNA sequencing and shotgun metagenomic analyses. Early NEON data demonstrates that these wide edaphic and climatic gradients are related to changes in microbial community structure and functional potential, as well as element pools and process rates. Going forward, NEON's suite of standardized soil data has the potential to advance our understanding of soil communities and processes by allowing us to test the predictions of new soil biogeochemical frameworks and models. Here, we highlight several recently developed models that are ripe for this kind of data validation, and discuss key insights that may result. 
Further, we explore synergies with other networks, such as (i)LTER and (i)CZO, which may increase our ability to advance the frontiers of soil biogeochemical modeling.

  16. Modelling of transport and biogeochemical processes in pollution plumes: Literature review of model development

    DEFF Research Database (Denmark)

    Brun, A.; Engesgaard, Peter Knudegaard

    2002-01-01

    A literature survey shows how biogeochemical (coupled organic and inorganic reaction processes) transport models are based on considering the complete biodegradation process as either a single- or a two-step process. It is demonstrated that some two-step process models rely on the Partial Equilibrium Approach (PEA). The PEA assumes the organic degradation step, and not the electron acceptor consumption step, is rate limiting. This distinction is not possible in one-step process models, where consumption of both the electron donor and acceptor are treated kinetically. A three-dimensional, two-step PEA model is developed. The model allows for Monod kinetics and biomass growth, features usually included only in one-step process models. The biogeochemical part of the model is tested for a batch system with degradation of organic matter under the consumption of a sequence of electron acceptors...
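    A minimal sketch of the two-step PEA idea described above, assuming first-order DOC degradation, 1:1 stoichiometry, and an idealized redox ladder (all simplifications invented here; this is not the three-dimensional model from the paper):

```python
def pea_batch(doc0, acceptors, k=0.1, dt=0.1, t_end=50.0):
    """Two-step Partial Equilibrium Approach sketch for a batch system.
    Step 1 (kinetic, rate-limiting): first-order degradation of dissolved
    organic carbon (DOC).  Step 2 (equilibrium): the carbon degraded in
    each step is consumed instantaneously by the next available electron
    acceptor in the redox sequence (1:1 stoichiometry assumed)."""
    doc = doc0
    acceptors = list(acceptors)   # e.g. [O2, NO3, SO4] pools, same units as DOC
    t = 0.0
    while t < t_end:
        degraded = k * doc * dt                 # kinetic step
        doc -= degraded
        for i, pool in enumerate(acceptors):    # equilibrium step: redox ladder
            use = min(degraded, pool)
            acceptors[i] -= use
            degraded -= use
            if degraded <= 0.0:
                break
        t += dt
    return doc, acceptors

# O2 and NO3 pools are exhausted in sequence; SO4 absorbs the remainder.
doc_left, pools = pea_batch(doc0=10.0, acceptors=[2.0, 3.0, 20.0])
```

    In a one-step model, by contrast, each acceptor pool would need its own kinetic (e.g. Monod) rate law coupled to the donor equation.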

  17. Past and present of sediment and carbon biogeochemical cycling models

    Directory of Open Access Journals (Sweden)

    F. T. Mackenzie

    2004-01-01

    The global carbon cycle is part of the much more extensive sedimentary cycle that involves large masses of carbon in the Earth's inner and outer spheres. Studies of the carbon cycle generally followed a progression in knowledge of the natural biological, then chemical, and finally geological processes involved, culminating in a more or less integrated picture of the biogeochemical carbon cycle by the 1920s. However, knowledge of the ocean's carbon cycle behavior has only within the last few decades progressed to a stage where meaningful discussion of carbon processes on an annual to millennial time scale can take place. In geologically older and pre-industrial time, the ocean was generally a net source of CO2 emissions to the atmosphere owing to the mineralization of land-derived organic matter in addition to that produced in situ and to the process of CaCO3 precipitation. Due to rising atmospheric CO2 concentrations because of fossil fuel combustion and land use changes, the direction of the air-sea CO2 flux has reversed, leading to the ocean as a whole being a net sink of anthropogenic CO2. The present thickness of the surface ocean layer, where part of the anthropogenic CO2 emissions are stored, is estimated as of the order of a few hundred meters. The oceanic coastal zone net air-sea CO2 exchange flux has also probably changed during industrial time. Model projections indicate that in pre-industrial times, the coastal zone may have been net heterotrophic, releasing CO2 to the atmosphere from the imbalance between gross photosynthesis and total respiration. This, coupled with extensive CaCO3 precipitation in coastal zone environments, led to a net flux of CO2 out of the system. 
During industrial time the coastal zone ocean has tended to reverse its trophic status toward a non-steady state situation of net autotrophy, resulting in net uptake of anthropogenic CO2 and storage of carbon in the coastal ocean, despite the significant calcification

  18. Biogeochemical modelling vs. tree-ring data - comparison of forest ecosystem productivity estimates

    Science.gov (United States)

    Zorana Ostrogović Sever, Maša; Barcza, Zoltán; Hidy, Dóra; Paladinić, Elvis; Kern, Anikó; Marjanović, Hrvoje

    2017-04-01

    Forest ecosystems are sensitive to environmental changes as well as human-induced disturbances; therefore, process-based models with integrated management modules represent a valuable tool for estimating and forecasting forest ecosystem productivity under changing conditions. The biogeochemical model Biome-BGC simulates carbon, nitrogen and water fluxes, and is widely used for different terrestrial ecosystems. It has been modified and parameterised by many researchers in the past to meet specific local conditions. In this research, we used a recently published, improved version of the model, Biome-BGCMuSo (BBGCMuSo), with a multilayer soil module and an integrated management module. The aim of our research is to validate modelled forest ecosystem productivity (NPP) from the BBGCMuSo model against observed productivity estimated from an extensive dataset of tree rings. The research was conducted in two distinct forest complexes of managed Pedunculate oak in SE Europe (Croatia), namely the Pokupsko basin and the Spačva basin. First, we parameterized the BBGCMuSo model at a local level using eddy-covariance (EC) data from the Jastrebarsko EC site. The parameterized model was then used for the assessment of productivity on a larger scale. Results of the NPP assessment with BBGCMuSo are compared with NPP estimated from tree-ring data taken from trees on over 100 plots in both forest complexes. Keywords: Biome-BGCMuSo, forest productivity, model parameterization, NPP, Pedunculate oak

  19. Reactive transport modelling of biogeochemical processes and carbon isotope geochemistry inside a landfill leachate plume.

    NARCIS (Netherlands)

    van Breukelen, B.M.; Griffioen, J.; Roling, W.F.M.; van Verseveld, H.W.

    2004-01-01

    The biogeochemical processes governing leachate attenuation inside a landfill leachate plume (Banisveld, the Netherlands) were revealed and quantified using the 1D reactive transport model PHREEQC-2. Biodegradation of dissolved organic carbon (DOC) was simulated assuming first-order oxidation of two

  20. Nitrous oxide emissions from cropland: a procedure for calibrating the DayCent biogeochemical model using inverse modelling

    Science.gov (United States)

    Rafique, Rashad; Fienen, Michael N.; Parkin, Timothy B.; Anex, Robert P.

    2013-01-01

    DayCent is a biogeochemical model of intermediate complexity widely used to simulate greenhouse gases (GHG), soil organic carbon and nutrients in crop, grassland, forest and savannah ecosystems. Although this model has been applied to a wide range of ecosystems, it is still typically parameterized through a traditional "trial and error" approach and has not been calibrated using statistical inverse modelling (i.e. algorithmic parameter estimation). The aim of this study is to establish and demonstrate a procedure for calibration of DayCent to improve estimation of GHG emissions. We coupled DayCent with the parameter estimation (PEST) software for inverse modelling. The PEST software can be used for calibration through regularized inversion as well as model sensitivity and uncertainty analysis. The DayCent model was analysed and calibrated using N2O flux data collected over 2 years at the Iowa State University Agronomy and Agricultural Engineering Research Farms, Boone, IA. Crop year 2003 data were used for model calibration and 2004 data were used for validation. The optimization of DayCent model parameters using PEST significantly reduced model residuals relative to the default DayCent parameter values. Parameter estimation improved the model performance by reducing the sum of weighted squared residual differences between measured and modelled outputs by up to 67 %. For the calibration period, simulation with the default model parameter values underestimated mean daily N2O flux by 98 %. After parameter estimation, the model underestimated the mean daily fluxes by 35 %. During the validation period, the calibrated model reduced the sum of weighted squared residuals by 20 % relative to the default simulation. The sensitivity analysis performed provides important insights into the model structure and guidance for model improvement.
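    The PEST-style objective, a sum of weighted squared residuals, can be sketched with a deliberately trivial one-parameter "model". The driver data, observations, and closed-form estimate below are invented for illustration and bear no relation to DayCent's actual parameters:

```python
def flux_model(k, drivers):
    """Toy stand-in for a biogeochemical simulator: daily N2O flux as a
    single-parameter linear response to a moisture driver (illustrative)."""
    return [k * d for d in drivers]

def phi(k, drivers, obs, weights):
    """PEST-style objective: sum of weighted squared residuals."""
    sim = flux_model(k, drivers)
    return sum(w * (s - o) ** 2 for w, s, o in zip(weights, sim, obs))

drivers = [0.2, 0.5, 0.9, 0.4, 0.7]
obs     = [0.5, 1.3, 2.2, 1.0, 1.8]     # synthetic "measured" fluxes
weights = [1.0] * len(obs)

k_default = 1.0                         # default ("trial and error") value
# For this linear toy model the weighted least-squares estimate is closed-form;
# PEST finds it iteratively for a nonlinear simulator it treats as a black box.
k_opt = (sum(w * d * o for w, d, o in zip(weights, drivers, obs))
         / sum(w * d * d for w, d in zip(weights, drivers)))

reduction = 1.0 - (phi(k_opt, drivers, obs, weights)
                   / phi(k_default, drivers, obs, weights))
```

    The quantity `reduction` corresponds to the percentage decrease in the sum of weighted squared residuals reported in the abstract.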

  1. Error assessment of biogeochemical models by lower bound methods (NOMMA-1.0)

    Science.gov (United States)

    Sauerland, Volkmar; Löptien, Ulrike; Leonhard, Claudine; Oschlies, Andreas; Srivastav, Anand

    2018-03-01

    Biogeochemical models, capturing the major feedbacks of the pelagic ecosystem of the world ocean, are today often embedded into Earth system models which are increasingly used for decision making regarding climate policies. These models contain poorly constrained parameters (e.g., maximum phytoplankton growth rate), which are typically adjusted until the model shows reasonable behavior. Systematic approaches determine these parameters by minimizing the misfit between the model and observational data. In most common model approaches, however, the underlying functions mimicking the biogeochemical processes are nonlinear and non-convex. Thus, systematic optimization algorithms are likely to get trapped in local minima and might lead to non-optimal results. To judge the quality of an obtained parameter estimate, we propose determining a preferably large lower bound for the global optimum that is relatively easy to obtain and that will help to assess the quality of an optimum, generated by an optimization algorithm. Due to the unavoidable noise component in all observations, such a lower bound is typically larger than zero. We suggest deriving such lower bounds based on typical properties of biogeochemical models (e.g., a limited number of extremes and a bounded time derivative). We illustrate the applicability of the method with two real-world examples. The first example uses real-world observations of the Baltic Sea in a box model setup. The second example considers a three-dimensional coupled ocean circulation model in combination with satellite chlorophyll a.

  2. Error assessment of biogeochemical models by lower bound methods (NOMMA-1.0)

    Directory of Open Access Journals (Sweden)

    V. Sauerland

    2018-03-01

Biogeochemical models, capturing the major feedbacks of the pelagic ecosystem of the world ocean, are today often embedded into Earth system models which are increasingly used for decision making regarding climate policies. These models contain poorly constrained parameters (e.g., maximum phytoplankton growth rate), which are typically adjusted until the model shows reasonable behavior. Systematic approaches determine these parameters by minimizing the misfit between the model and observational data. In most common model approaches, however, the underlying functions mimicking the biogeochemical processes are nonlinear and non-convex. Thus, systematic optimization algorithms are likely to get trapped in local minima and might lead to non-optimal results. To judge the quality of an obtained parameter estimate, we propose determining a preferably large lower bound for the global optimum that is relatively easy to obtain and that will help to assess the quality of an optimum, generated by an optimization algorithm. Due to the unavoidable noise component in all observations, such a lower bound is typically larger than zero. We suggest deriving such lower bounds based on typical properties of biogeochemical models (e.g., a limited number of extremes and a bounded time derivative). We illustrate the applicability of the method with two real-world examples. The first example uses real-world observations of the Baltic Sea in a box model setup. The second example considers a three-dimensional coupled ocean circulation model in combination with satellite chlorophyll a.

  3. Observationally-based Metrics of Ocean Carbon and Biogeochemical Variables are Essential for Evaluating Earth System Model Projections

    Science.gov (United States)

    Russell, J. L.; Sarmiento, J. L.

    2017-12-01

    The Southern Ocean is central to the climate's response to increasing levels of atmospheric greenhouse gases as it ventilates a large fraction of the global ocean volume. Global coupled climate models and earth system models, however, vary widely in their simulations of the Southern Ocean and its role in, and response to, the ongoing anthropogenic forcing. Due to its complex water-mass structure and dynamics, Southern Ocean carbon and heat uptake depend on a combination of winds, eddies, mixing, buoyancy fluxes and topography. Understanding how the ocean carries heat and carbon into its interior and how the observed wind changes are affecting this uptake is essential to accurately projecting transient climate sensitivity. Observationally-based metrics are critical for discerning processes and mechanisms, and for validating and comparing climate models. As the community shifts toward Earth system models with explicit carbon simulations, more direct observations of important biogeochemical parameters, like those obtained from the biogeochemically-sensored floats that are part of the Southern Ocean Carbon and Climate Observations and Modeling project, are essential. One goal of future observing systems should be to create observationally-based benchmarks that will lead to reducing uncertainties in climate projections, and especially uncertainties related to oceanic heat and carbon uptake.

  4. Modeling biogeochemical reactive transport in a fracture zone

    International Nuclear Information System (INIS)

Molinero, Jorge; Samper, Javier; Yang, Chan Bing; Zhang, Guoxiang

    2005-01-01

A coupled model of groundwater flow, reactive solute transport and microbial processes for a fracture zone at the Äspö site in Sweden is presented. This is the model of the so-called Redox Zone Experiment, aimed at evaluating the effects of tunnel construction on the geochemical conditions prevailing in a fractured granite. It is found that a model accounting for microbially-mediated geochemical processes is able to reproduce the unexpected measured increasing trends of dissolved sulfate and bicarbonate. The model is also useful for testing hypotheses regarding the role of microbial processes and for evaluating the sensitivity of model results to changes in biochemical parameters.

  5. Biogeochemical modelling of dissolved oxygen in a changing ocean

    Science.gov (United States)

    Andrews, Oliver; Buitenhuis, Erik; Le Quéré, Corinne; Suntharalingam, Parvadha

    2017-08-01

    Secular decreases in dissolved oxygen concentration have been observed within the tropical oxygen minimum zones (OMZs) and at mid- to high latitudes over the last approximately 50 years. Earth system model projections indicate that a reduction in the oxygen inventory of the global ocean, termed ocean deoxygenation, is a likely consequence of on-going anthropogenic warming. Current models are, however, unable to consistently reproduce the observed trends and variability of recent decades, particularly within the established tropical OMZs. Here, we conduct a series of targeted hindcast model simulations using a state-of-the-art global ocean biogeochemistry model in order to explore and review biases in model distributions of oceanic oxygen. We show that the largest magnitude of uncertainty is entrained into ocean oxygen response patterns due to model parametrization of pCO2-sensitive C : N ratios in carbon fixation and imposed atmospheric forcing data. Inclusion of a pCO2-sensitive C : N ratio drives historical oxygen depletion within the ocean interior due to increased organic carbon export and subsequent remineralization. Atmospheric forcing is shown to influence simulated interannual variability in ocean oxygen, particularly due to differences in imposed variability of wind stress and heat fluxes. This article is part of the themed issue 'Ocean ventilation and deoxygenation in a warming world'.

  6. Physical/biogeochemical coupled model : impact of an offline vs online strategy

    Science.gov (United States)

    Hameau, Angélique; Perruche, Coralie; Bricaud, Clément; Gutknecht, Elodie; Reffray, Guillaume

    2014-05-01

Mercator-Ocean, the French ocean forecasting center, has been developing several operational forecasting systems and reanalyses of the physical and biogeochemical 3D ocean. Here we study the impact of an offline vs online strategy for coupling the physical (OPA) and biogeochemical (PISCES) modules included in the NEMO platform. For this purpose, we perform global one-year simulations at 1° resolution. The model was initialized with global climatologies; the spin-up involved 10 years of offline biogeochemical simulation forced by a climatology of ocean physics. The online mode consists of running the physical and biogeochemical models simultaneously, whereas in the offline mode the biogeochemical model is run alone, forced by time-averaged physical fields (1 day, 7 days, ...). The Mercator operational biogeochemical system currently uses the offline mode with a weekly physical forcing. A special treatment is applied to the vertical diffusivity coefficient (Kz): as it varies over several orders of magnitude, we compute the mean of the log10 of Kz. Moreover, a threshold value is applied to remove the highest values, which correspond to enhanced convection. To improve this system, two directions are explored. First, three physical forcing frequencies are compared to quantify errors due to the offline mode: 1 hour (online mode), 1 day and 1 week (offline modes). Second, sensitivity tests to the threshold value applied to Kz are performed. The simulations are evaluated by systematically comparing model fields to observations (Globcolour product and World Ocean Atlas 2005) at global and regional scales. We show first that offline simulations are in good agreement with the online simulation; as expected, the higher the physical forcing frequency, the closer the offline simulation is to the online solution. The threshold value on the vertical diffusivity coefficient controls the mixing strength within the mixed layer, and a value of 1 m2 s-1 appears to be a good compromise for approaching the online solution.
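The offline treatment of Kz described above can be sketched as follows; the series values, the daily window, and the order of operations (cap at the threshold, then average in log space) are illustrative assumptions:

```python
import numpy as np

def offline_kz_forcing(kz_series, threshold=1.0):
    """Average vertical diffusivity for offline biogeochemical forcing.

    Because Kz spans several orders of magnitude, the mean is taken of
    log10(Kz); values above `threshold` (m2/s), typical of enhanced
    convection, are capped first.
    """
    kz = np.minimum(np.asarray(kz_series, dtype=float), threshold)
    return 10.0 ** np.mean(np.log10(kz))

# Hypothetical hourly Kz values over part of a day (m2/s); the last value
# represents a convective event that the threshold removes.
hourly_kz = [1e-5, 1e-4, 1e-3, 10.0]
kz_forcing = offline_kz_forcing(hourly_kz)
```

Averaging log10(Kz) rather than Kz itself keeps a single convective spike from dominating the daily or weekly forcing value.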

  7. Possible impacts of global warming on tundra and boreal forest ecosystems - comparison of some biogeochemical models

    Energy Technology Data Exchange (ETDEWEB)

    Ploechl, M.; Cramer, W.

    1995-06-01

Global warming affects the magnitude of carbon, water and nitrogen fluxes between biosphere and atmosphere as well as the distribution of vegetation types. Biogeochemical models, global as well as patch models, can be used to estimate the differences between the mean values of annual net primary production (NPP) for the present and for future climate scenarios. Both approaches rely on the prescribed pattern of vegetation types. Structural, rule-based models can predict such patterns, provided that vegetation and climate are in equilibrium. The coupling of biogeochemical and structural models gives the opportunity to test the sensitivity of biogeochemical processes not only to climatic change but also to biome shifts. Whether the annual mean NPP of a vegetation type increases or decreases depends strongly on the assumptions about a CO2 fertilization effect and nitrogen cycling. Results from our coupled model show that, given that direct CO2 effects are uncertain, (i) average NPP of these northern biomes might decrease under global warming, but (ii) total NPP of the region would increase, due to the northward shift of the taiga biome. (orig.)

  8. Validation of HEDR models

    International Nuclear Information System (INIS)

    Napier, B.A.; Simpson, J.C.; Eslinger, P.W.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1994-05-01

The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computer models for estimating the possible radiation doses that individuals may have received from past Hanford Site operations. This document describes the validation of these models. In the HEDR Project, the model validation exercise consisted of comparing computational model estimates with limited historical field measurements and experimental measurements that are independent of those used to develop the models. The results of any one test do not mean that a model is valid. Rather, the collection of tests together provides a level of confidence that the HEDR models are valid.

  9. Model Validation Status Review

    International Nuclear Information System (INIS)

    E.L. Hardin

    2001-01-01

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M and O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  10. Model Validation Status Review

    Energy Technology Data Exchange (ETDEWEB)

    E.L. Hardin

    2001-11-28

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural and

  11. Assessing the utility of frequency dependent nudging for reducing biases in biogeochemical models

    Science.gov (United States)

    Lagman, Karl B.; Fennel, Katja; Thompson, Keith R.; Bianucci, Laura

    2014-09-01

Bias errors, resulting from inaccurate boundary and forcing conditions, incorrect model parameterization, etc., are a common problem in environmental models, including biogeochemical ocean models. While it is important to correct bias errors wherever possible, it is unlikely that any environmental model will ever be entirely free of such errors; hence, methods for bias reduction are necessary. A widely used technique for online bias reduction is nudging, where simulated fields are continuously forced toward observations or a climatology. Nudging is robust and easy to implement, but it suppresses high-frequency variability and introduces artificial phase shifts. As a solution to this problem, Thompson et al. (2006) introduced frequency dependent nudging, where nudging occurs only in prescribed frequency bands, typically centered on the mean and the annual cycle. They showed this method to be effective for eddy-resolving ocean circulation models. Here we add a stability term to the previous form of frequency dependent nudging, which makes the method more robust for non-linear biological models. We then assess the utility of frequency dependent nudging for biological models by applying the method first to a simple predator-prey model and then to a 1-D ocean biogeochemical model. In both cases we nudge only in two frequency bands centered on the mean and the annual cycle, and then assess how well the variability in higher frequency bands is recovered. We evaluate the effectiveness of frequency dependent nudging in comparison to conventional nudging and find significant improvements with the former.
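The band-limited idea can be sketched offline: project the model-observation misfit onto a constant plus one annual harmonic and nudge only that part, leaving higher frequencies untouched. This is a diagnostic sketch of my own construction (the online method of Thompson et al. uses recursive temporal filters, not a global least-squares fit):

```python
import numpy as np

def band_increment(model, obs, period=365.0, dt=1.0, gain=0.1):
    """Nudging increment restricted to the mean and annual-cycle bands."""
    t = np.arange(len(model)) * dt
    # Design matrix: mean, plus cosine and sine at the annual frequency.
    X = np.column_stack([np.ones_like(t),
                         np.cos(2.0 * np.pi * t / period),
                         np.sin(2.0 * np.pi * t / period)])
    misfit = np.asarray(obs, dtype=float) - np.asarray(model, dtype=float)
    coeffs, *_ = np.linalg.lstsq(X, misfit, rcond=None)
    return gain * (X @ coeffs)  # correction in the nudged bands only

# A constant bias lies entirely in the "mean" band, so the increment
# relaxes the model toward the observations everywhere.
increment = band_increment(np.zeros(400), np.full(400, 2.0))
```

A misfit oscillating at, say, a 10-day period projects onto neither basis function, so conventional high-frequency variability passes through uncorrected, which is the point of the method.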

  12. Validation of simulation models

    DEFF Research Database (Denmark)

    Rehman, Muniza; Pedersen, Stig Andur

    2012-01-01

    In philosophy of science, the interest for computational models and simulations has increased heavily during the past decades. Different positions regarding the validity of models have emerged but the views have not succeeded in capturing the diversity of validation methods. The wide variety...

  13. Implementation ambiguity: The fifth element long lost in uncertainty budgets for land biogeochemical modeling

    Science.gov (United States)

    Tang, J.; Riley, W. J.

    2015-12-01

    Previous studies have identified four major sources of predictive uncertainty in modeling land biogeochemical (BGC) processes: (1) imperfect initial conditions (e.g., assumption of preindustrial equilibrium); (2) imperfect boundary conditions (e.g., climate forcing data); (3) parameterization (type I equifinality); and (4) model structure (type II equifinality). As if that were not enough to cause substantial sleep loss in modelers, we propose here a fifth element of uncertainty that results from implementation ambiguity that occurs when the model's mathematical description is translated into computational code. We demonstrate the implementation ambiguity using the example of nitrogen down regulation, a necessary process in modeling carbon-climate feedbacks. We show that, depending on common land BGC model interpretations of the governing equations for mineral nitrogen, there are three different implementations of nitrogen down regulation. We coded these three implementations in the ACME land model (ALM), and explored how they lead to different preindustrial and contemporary land biogeochemical states and fluxes. We also show how this implementation ambiguity can lead to different carbon-climate feedback estimates across the RCP scenarios. We conclude by suggesting how to avoid such implementation ambiguity in ESM BGC models.
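The nitrogen down-regulation example can be made concrete with a toy allocation function; the three schemes below are generic readings of a supply-demand constraint, written by me for illustration, not the actual ALM implementations:

```python
def downregulate(supply, plant_demand, microbe_demand, scheme="proportional"):
    """Allocate a limited mineral N supply among competing demands.

    Three defensible readings of the same governing equation: scale all
    demands proportionally, satisfy plants first, or satisfy microbes first.
    Returns (plant_uptake, microbial_immobilization).
    """
    total = plant_demand + microbe_demand
    if total <= supply:                     # no limitation: demands are met
        return plant_demand, microbe_demand
    if scheme == "proportional":            # reading 1: uniform scaling
        f = supply / total
        return f * plant_demand, f * microbe_demand
    if scheme == "plants_first":            # reading 2: plants win
        plant = min(plant_demand, supply)
        return plant, min(microbe_demand, supply - plant)
    if scheme == "microbes_first":          # reading 3: microbes win
        microbe = min(microbe_demand, supply)
        return min(plant_demand, supply - microbe), microbe
    raise ValueError(f"unknown scheme: {scheme}")
```

With supply = 1.0, plant demand = 0.8 and microbial demand = 0.6, the three readings return different allocations even though each conserves the same limited pool, which is exactly the implementation ambiguity the abstract describes.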

  14. Multimillennium changes in dissolved oxygen under global warming: results from an AOGCM and offline ocean biogeochemical model

    Science.gov (United States)

    Yamamoto, A.; Abe-Ouchi, A.; Shigemitsu, M.; Oka, A.; Takahashi, K.; Ohgaito, R.; Yamanaka, Y.

    2016-12-01

Long-term oceanic oxygen change due to global warming is still unclear; most future projections (such as CMIP5) extend only to 2100. The few previous studies that project oxygen change over the next thousands of years use conceptual models, and they show a persistent global oxygen reduction of about 30 % over the next 2000 years, even after atmospheric carbon dioxide stops rising. Yet these models cannot adequately represent ocean circulation change, the key driver of oxygen change. Moreover, considering the serious effects oxygen reduction has on marine life and biogeochemical cycling, long-term oxygen change should be projected with more realistic models. We therefore used a coupled atmosphere-ocean general circulation model (AOGCM) and an offline ocean biogeochemical model to investigate realistic long-term changes in oceanic oxygen concentration and ocean circulation. We integrated these models for 2000 years under atmospheric CO2 doubling and quadrupling. After global oxygen reduction in the first 500 years, oxygen concentration in the deep ocean globally recovers and overshoots, despite surface oxygen decrease and a weaker Atlantic Meridional Overturning Circulation. Deep ocean convection in the Weddell Sea recovers and overshoots after its initial cessation. Enhanced deep convection and the associated Antarctic Bottom Water thus supply oxygen-rich surface waters to the deep ocean, resulting in global deep-ocean oxygenation. We conclude that the change in ocean circulation in the Southern Ocean potentially drives millennial-scale oxygenation in the deep ocean, contrary to previously reported long-term oxygen reduction and general expectation. In the presentation, we will discuss the mechanism of the response of deep ocean convection in the Weddell Sea and show the volume changes of hypoxic waters.

  15. Exploring the Influence of Topography on Belowground C Processes Using a Coupled Hydrologic-Biogeochemical Model

    Science.gov (United States)

    Shi, Y.; Davis, K. J.; Eissenstat, D. M.; Kaye, J. P.; Duffy, C.; Yu, X.; He, Y.

    2014-12-01

Belowground carbon processes are affected by soil moisture and soil temperature, but current biogeochemical models are 1-D and cannot resolve topographically driven hill-slope soil moisture patterns, nor can they simulate the nonlinear effects of soil moisture on carbon processes. Coupling spatially-distributed, physically-based hydrologic models with biogeochemical models may yield significant improvements in the representation of topographic influence on belowground C processes. We will couple the Flux-PIHM model to the Biome-BGC (BBGC) model. Flux-PIHM is a coupled physically-based land surface hydrologic model, which incorporates a land-surface scheme into the Penn State Integrated Hydrologic Model (PIHM). The land surface scheme is adapted from the Noah land surface model. Because PIHM is capable of simulating lateral water flow and deep groundwater, Flux-PIHM is able to represent the link between groundwater and the surface energy balance, as well as the land surface heterogeneities caused by topography. The coupled Flux-PIHM-BBGC model will be tested at the Susquehanna/Shale Hills critical zone observatory (SSHCZO). The abundant observations, including eddy covariance fluxes, soil moisture, groundwater level, sap flux, stream discharge, litterfall, leaf area index, above ground carbon stock, and soil carbon efflux, make SSHCZO an ideal test bed for the coupled model. In the coupled model, each Flux-PIHM model grid cell will be coupled to a BBGC cell. Flux-PIHM will provide BBGC with soil moisture and soil temperature information, while BBGC provides Flux-PIHM with leaf area index. Preliminary results show that when Biome-BGC is driven by the PIHM-simulated soil moisture pattern, the simulated soil carbon is clearly impacted by topography.

  16. HEDR model validation plan

    International Nuclear Information System (INIS)

    Napier, B.A.; Gilbert, R.O.; Simpson, J.C.; Ramsdell, J.V. Jr.; Thiede, M.E.; Walters, W.H.

    1993-06-01

The Hanford Environmental Dose Reconstruction (HEDR) Project has developed a set of computational "tools" for estimating the possible radiation dose that individuals may have received from past Hanford Site operations. This document describes the planned activities to "validate" these tools. In the sense of the HEDR Project, "validation" is a process carried out by comparing computational model predictions with field observations and experimental measurements that are independent of those used to develop the model.

  17. Groundwater Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. The use of validation
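A toy version of counting acceptable stochastic realizations against validation data can be sketched as below; the acceptance criterion is a stand-in of my own (the report's hierarchical approach uses five formal metrics and a decision tree):

```python
import numpy as np

def acceptable_fraction(realizations, validation, rel_tol=0.2):
    """Fraction of stochastic realizations conforming with validation data.

    A realization is 'acceptable' here if every predicted value falls
    within rel_tol (relative) of the corresponding validation observation.
    `realizations` has shape (n_realizations, n_observations).
    """
    real = np.asarray(realizations, dtype=float)
    obs = np.asarray(validation, dtype=float)
    within = np.all(np.abs(real - obs) <= rel_tol * np.abs(obs), axis=1)
    return float(np.mean(within))

# Hypothetical example: three realizations, two validation observations.
frac = acceptable_fraction([[10.0, 20.0], [11.0, 21.0], [15.0, 30.0]],
                           [10.0, 20.0])
```

Whether `frac` is large enough to declare the stochastic model acceptable is exactly the sufficiency question the validation process must answer.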

  18. How to `Elk-test' biogeochemical models in a data rich world? (Invited)

    Science.gov (United States)

    Reichstein, M.; Ciais, P.; Seneviratne, S. I.; Carvalhais, N.; Dalmonech, D.; Jung, M.; Luo, Y.; Mahecha, M. D.; Moffat, A. M.; Tomelleri, E.; Zaehle, S.

    2010-12-01

    Process-oriented biogeochemical models are a primary tool that has been used to project future states of climate and ecosystems in the earth system in response to anthropogenic and other forcing, and receive tremendous attention also in the context us the planned assessment report AR5 by the IPCC. However, model intercomparison and data-model comparison studies indicate large uncertainties regarding predictions of global interactions between atmosphere and biosphere. Rigorous scientific testing of these models is essential but very challenging, largely because neither it is technically and ethically possible to perform global earth-scale experiments, nor do we have replicate Earths for hypothesis testing. Hence, model evaluations have to rely on monitoring data such as ecological observation networks, global remote sensing or short-term and small-scale experiments. Here, we critically examine strategies of how model evaluations have been performed with a particular emphasis on terrestrial ecosystems. Often weak ‘validations’ are being presented which do not take advantage of all the relevant information in the observed data, but also apparent falsifications are made, that are hampered by a confusion of system processes with system behavior. We propose that a stronger integration of recent advances in pattern-oriented and system-oriented methodologies will lead to more satisfying earth system model evaluation and development, and show a few enlightening examples from terrestrial biogeochemical modeling and other disciplines. Moreover it is crucial to take advantage of the multidimensional nature of arising earth observation data sets which should be matched by models simultaneously, instead of relying on univariate simple comparisons. A new critical model evaluation is needed to improve future IPCC assessments in order to reduce uncertainties by distinguishing plausible simulation trajectories from fairy tales.

  19. A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling

    Science.gov (United States)

    Shapiro, B.; Jin, Q.

    2015-12-01

Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and COBRA Toolbox. We applied this method to acetotrophic methanogenesis by Methanosarcina barkeri and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and it predicted the observations of previous experiments well. In comparison, traditional dynamic-FBA methods constrain acetate uptake on the basis of enzyme kinetics and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
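A thermodynamically revised Monod rate of the general form used by Jin and Bethke multiplies the kinetic Monod factor by a thermodynamic potential factor F_T = 1 - exp((dG_redox + m*dG_ATP)/(chi*R*T)); the sketch below assumes that general form, and all parameter values are placeholders rather than the calibrated values for M. barkeri:

```python
import numpy as np

R = 8.314e-3  # gas constant, kJ/(mol K)

def respiration_rate(k_max, biomass, conc, K_half,
                     dG_redox, dG_atp=45.0, m_atp=1.0, chi=2.0, T=298.15):
    """Revised Monod rate with a thermodynamic potential factor.

    kinetic:  classic Monod saturation in the substrate concentration.
    thermo:   F_T approaches 1 far from equilibrium (dG_redox strongly
              negative) and 0 when the catabolic energy yield just covers
              ATP synthesis, shutting the reaction off.
    """
    kinetic = conc / (K_half + conc)
    thermo = 1.0 - np.exp((dG_redox + m_atp * dG_atp) / (chi * R * T))
    return k_max * biomass * kinetic * max(thermo, 0.0)

# Hypothetical case: substrate at its half-saturation constant, reaction
# far from equilibrium, so the rate is about half the maximum.
r_favorable = respiration_rate(1.0, 1.0, 1.0, 1.0, dG_redox=-100.0)
```

This is the kind of rate from which the substrate uptake constraint for FBA would then be derived, rather than assuming an uptake rate a priori.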

  20. Carbon sequestration by patch fertilization: A comprehensive assessment using coupled physical-ecological-biogeochemical models

    Energy Technology Data Exchange (ETDEWEB)

    Sarmiento, Jorge L. [Princeton Univ., NJ (United States); Gnanadesikan, Anand [Princeton Univ., NJ (United States); Gruber, Nicolas [Univ. of California, Los Angeles, CA (United States); Jin, Xin [Univ. of California, Los Angeles, CA (United States); Armstrong, Robert [State Univ. of New York (SUNY), Plattsburgh, NY (United States)

    2007-06-21

    This final report summarizes research undertaken collaboratively between Princeton University, the NOAA Geophysical Fluid Dynamics Laboratory on the Princeton University campus, the State University of New York at Stony Brook, and the University of California, Los Angeles between September 1, 2000, and November 30, 2006, to do fundamental research on ocean iron fertilization as a means to enhance the net oceanic uptake of CO2 from the atmosphere. The approach we proposed was to develop and apply a suite of coupled physical-ecological-biogeochemical models in order to (i) determine to what extent enhanced carbon fixation from iron fertilization will lead to an increase in the oceanic uptake of atmospheric CO2 and how long this carbon will remain sequestered (efficiency), and (ii) examine the changes in ocean ecology and natural biogeochemical cycles resulting from iron fertilization (consequences). The award was funded in two separate three-year installments: September 1, 2000 to November 30, 2003, for a project entitled “Ocean carbon sequestration by fertilization: An integrated biogeochemical assessment.” A final report was submitted for this at the end of 2003 and is included here as Appendix 1; and, December 1, 2003 to November 30, 2006, for a follow-on project under the same grant number entitled “Carbon sequestration by patch fertilization: A comprehensive assessment using coupled physical-ecological-biogeochemical models.” This report focuses primarily on the progress we made during the second period of funding subsequent to the work reported on in Appendix 1. When we began this project, we were thinking almost exclusively in terms of long-term fertilization over large regions of the ocean such as the Southern Ocean, with much of our focus being on how ocean circulation and biogeochemical cycling would interact to control the response to a given fertilization scenario. Our research on these types of scenarios, which was carried out largely during the

  1. Numerical modeling of watershed-scale radiocesium transport coupled with biogeochemical cycling in forests

    Science.gov (United States)

    Mori, K.; Tada, K.; Tawara, Y.; Tosaka, H.; Ohno, K.; Asami, M.; Kosaka, K.

    2015-12-01

    Since the Fukushima Dai-ichi Nuclear Power Plant (FDNPP) accident, intensive monitoring and modeling work on radionuclide transfer in the environment has been carried out. Although the cesium (Cs) concentration has been attenuating due to both its physical and environmental half-lives (i.e., wash-off by water and sediment), the attenuation rate clearly depends on the type of land use and land cover. In the Fukushima case, studying migration in forest land use is important for predicting the long-term behavior of Cs because most of the contaminated region is covered by forests. Atmospheric fallout is characterized by complicated behavior in the biogeochemical cycle of forests, which can be described by biotic/abiotic interactions between many components. In developing a conceptual and mathematical model of Cs transfer in the forest ecosystem, defining the dominant components and their interactions is a crucial issue (BIOMASS, 1997-2001). However, the modeling of fate and transport in the geosphere after Cs exits the forest ecosystem is often ignored. An integrated watershed model for simulating the spatiotemporal redistribution of Cs, covering the entire region from source to river mouth and from surface to subsurface, has recently been developed. Since the deposited Cs can migrate with water and sediment movement, the different species (i.e., dissolved and suspended) and their interactions are key issues in the modeling. However, the initial inventory used as the source term was simplified to be homogeneous and time-independent, and the biogeochemical cycle in forests was not explicitly considered. Consequently, it was difficult to evaluate the regionally inherent characteristics that differ according to land use, even if the model was well calibrated. In this study, we combine the respective advantages of forest-ecosystem and watershed modeling. This enables more realistic Cs deposition to be included and time series of the inventory to be forced over the land surface.
These processes are integrated
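    The physical and environmental half-lives mentioned above act as parallel first-order losses, so their decay rates add. A minimal sketch in Python; the 30.1-year physical half-life of Cs-137 is a known constant, but the 20-year environmental half-life below is purely illustrative:

```python
import math

def effective_half_life(t_phys, t_env):
    # Parallel first-order losses: rates add, so
    # 1/T_eff = 1/T_phys + 1/T_env.
    return 1.0 / (1.0 / t_phys + 1.0 / t_env)

def remaining_fraction(t, t_half):
    # Fraction of the initial Cs inventory left after t years.
    return math.exp(-math.log(2.0) * t / t_half)

# Cs-137 physical half-life ~30.1 y; an assumed (illustrative)
# environmental half-life of 20 y for a forested catchment.
t_eff = effective_half_life(30.1, 20.0)  # ~12 y
```

    Land-use-dependent attenuation rates, as described in the abstract, would enter through different assumed environmental half-lives per land-cover class.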

  2. Study of the seasonal cycle of the biogeochemical processes in the Ligurian Sea using a 1D interdisciplinary model

    NARCIS (Netherlands)

    Raick, C.; Delhez, E.J.M.; Soetaert, K.E.R.; Grégoire, M.

    2005-01-01

    A one-dimensional coupled physical–biogeochemical model has been built to study the pelagic food web of the Ligurian Sea (NW Mediterranean Sea). The physical model is the turbulent closure model (version 1D) developed at the GeoHydrodynamics and Environmental Laboratory (GHER) of the University of

  3. Biogeochemical Protocols and Diagnostics for the CMIP6 Ocean Model Intercomparison Project (OMIP)

    Science.gov (United States)

    Orr, James C.; Najjar, Raymond G.; Aumont, Olivier; Bopp, Laurent; Bullister, John L.; Danabasoglu, Gokhan; Doney, Scott C.; Dunne, John P.; Dutay, Jean-Claude; Graven, Heather; et al.

    2017-01-01

    The Ocean Model Intercomparison Project (OMIP) focuses on the physics and biogeochemistry of the ocean component of Earth system models participating in the sixth phase of the Coupled Model Intercomparison Project (CMIP6). OMIP aims to provide standard protocols and diagnostics for ocean models, while offering a forum to promote their common assessment and improvement. It also offers to compare solutions of the same ocean models when forced with reanalysis data (OMIP simulations) vs. when integrated within fully coupled Earth system models (CMIP6). Here we detail simulation protocols and diagnostics for OMIP's biogeochemical and inert chemical tracers. These passive-tracer simulations will be coupled to ocean circulation models, initialized with observational data or output from a model spin-up, and forced by repeating the 1948-2009 surface fluxes of heat, fresh water, and momentum. These so-called OMIP-BGC simulations include three inert chemical tracers (CFC-11, CFC-12, SF6) and biogeochemical tracers (e.g., dissolved inorganic carbon, carbon isotopes, alkalinity, nutrients, and oxygen). Modelers will use their preferred prognostic BGC model but should follow common guidelines for gas exchange and carbonate chemistry. Simulations include both natural and total carbon tracers. The required forced simulation (omip1) will be initialized with gridded observational climatologies. An optional forced simulation (omip1-spunup) will be initialized instead with BGC fields from a long model spin-up, preferably for 2000 years or more, and forced by repeating the same 62-year meteorological forcing. That optional run will also include abiotic tracers of total dissolved inorganic carbon and radiocarbon, CTabio and 14CTabio, to assess deep-ocean ventilation and distinguish the role of physics vs. biology. These simulations will be forced by observed atmospheric histories of the three inert gases and CO2 as well as carbon isotope ratios of CO2.
OMIP-BGC simulation

  4. Biogeochemical protocols and diagnostics for the CMIP6 Ocean Model Intercomparison Project (OMIP)

    Science.gov (United States)

    Orr, James C.; Najjar, Raymond G.; Aumont, Olivier; Bopp, Laurent; Bullister, John L.; Danabasoglu, Gokhan; Doney, Scott C.; Dunne, John P.; Dutay, Jean-Claude; Graven, Heather; Griffies, Stephen M.; John, Jasmin G.; Joos, Fortunat; Levin, Ingeborg; Lindsay, Keith; Matear, Richard J.; McKinley, Galen A.; Mouchet, Anne; Oschlies, Andreas; Romanou, Anastasia; Schlitzer, Reiner; Tagliabue, Alessandro; Tanhua, Toste; Yool, Andrew

    2017-06-01

    The Ocean Model Intercomparison Project (OMIP) focuses on the physics and biogeochemistry of the ocean component of Earth system models participating in the sixth phase of the Coupled Model Intercomparison Project (CMIP6). OMIP aims to provide standard protocols and diagnostics for ocean models, while offering a forum to promote their common assessment and improvement. It also offers to compare solutions of the same ocean models when forced with reanalysis data (OMIP simulations) vs. when integrated within fully coupled Earth system models (CMIP6). Here we detail simulation protocols and diagnostics for OMIP's biogeochemical and inert chemical tracers. These passive-tracer simulations will be coupled to ocean circulation models, initialized with observational data or output from a model spin-up, and forced by repeating the 1948-2009 surface fluxes of heat, fresh water, and momentum. These so-called OMIP-BGC simulations include three inert chemical tracers (CFC-11, CFC-12, SF6) and biogeochemical tracers (e.g., dissolved inorganic carbon, carbon isotopes, alkalinity, nutrients, and oxygen). Modelers will use their preferred prognostic BGC model but should follow common guidelines for gas exchange and carbonate chemistry. Simulations include both natural and total carbon tracers. The required forced simulation (omip1) will be initialized with gridded observational climatologies. An optional forced simulation (omip1-spunup) will be initialized instead with BGC fields from a long model spin-up, preferably for 2000 years or more, and forced by repeating the same 62-year meteorological forcing. That optional run will also include abiotic tracers of total dissolved inorganic carbon and radiocarbon, CTabio and 14CTabio, to assess deep-ocean ventilation and distinguish the role of physics vs. biology. These simulations will be forced by observed atmospheric histories of the three inert gases and CO2 as well as carbon isotope ratios of CO2. 
OMIP-BGC simulation protocols are
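    The common gas-exchange guideline referred to in this abstract is a wind-speed-squared transfer velocity with a Schmidt-number correction. The sketch below follows the widely used Wanninkhof-style formulation; the coefficient 0.251 and the Schmidt-number reference of 660 are the commonly cited values, assumed here rather than quoted from the protocol text:

```python
def gas_transfer_velocity(u10_sq, schmidt, a=0.251):
    # k (cm/h) = a * <u10^2> * (Sc/660)^-0.5: quadratic wind-speed
    # parameterization with Schmidt-number scaling.
    # u10_sq: mean squared 10 m wind speed (m^2 s^-2)
    # schmidt: Schmidt number of the gas at local temperature/salinity
    return a * u10_sq * (schmidt / 660.0) ** -0.5

# For CO2 in seawater at 20 degC, Sc ~ 660, so the correction is 1:
k_co2 = gas_transfer_velocity(49.0, 660.0)  # u10 ~ 7 m/s
```

    Gases with lower Schmidt numbers (more diffusive) exchange faster at the same wind speed, which is why a common Sc formulation matters when several tracers (CFCs, SF6, CO2) share one protocol.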

  5. Biogeochemical protocols and diagnostics for the CMIP6 Ocean Model Intercomparison Project (OMIP)

    Directory of Open Access Journals (Sweden)

    J. C. Orr

    2017-06-01

    Full Text Available The Ocean Model Intercomparison Project (OMIP) focuses on the physics and biogeochemistry of the ocean component of Earth system models participating in the sixth phase of the Coupled Model Intercomparison Project (CMIP6). OMIP aims to provide standard protocols and diagnostics for ocean models, while offering a forum to promote their common assessment and improvement. It also offers to compare solutions of the same ocean models when forced with reanalysis data (OMIP simulations) vs. when integrated within fully coupled Earth system models (CMIP6). Here we detail simulation protocols and diagnostics for OMIP's biogeochemical and inert chemical tracers. These passive-tracer simulations will be coupled to ocean circulation models, initialized with observational data or output from a model spin-up, and forced by repeating the 1948–2009 surface fluxes of heat, fresh water, and momentum. These so-called OMIP-BGC simulations include three inert chemical tracers (CFC-11, CFC-12, SF6) and biogeochemical tracers (e.g., dissolved inorganic carbon, carbon isotopes, alkalinity, nutrients, and oxygen). Modelers will use their preferred prognostic BGC model but should follow common guidelines for gas exchange and carbonate chemistry. Simulations include both natural and total carbon tracers. The required forced simulation (omip1) will be initialized with gridded observational climatologies. An optional forced simulation (omip1-spunup) will be initialized instead with BGC fields from a long model spin-up, preferably for 2000 years or more, and forced by repeating the same 62-year meteorological forcing. That optional run will also include abiotic tracers of total dissolved inorganic carbon and radiocarbon, CTabio and 14CTabio, to assess deep-ocean ventilation and distinguish the role of physics vs. biology. These simulations will be forced by observed atmospheric histories of the three inert gases and CO2 as well as carbon isotope ratios of CO2. OMIP-BGC simulation

  6. Modelling of transport and biogeochemical processes in pollution plumes: Vejen landfill, Denmark

    DEFF Research Database (Denmark)

    Brun, A.; Engesgaard, Peter Knudegaard; Christensen, Thomas Højlund

    2002-01-01

    A biogeochemical transport code is used to simulate leachate attenuation, biogeochemical processes, and development of redox zones in a pollution plume downstream of the Vejen landfill in Denmark. Calibration of the degradation parameters resulted in a good agreement with the observed distribution...

  7. MOPS-1.0: towards a model for the regulation of the global oceanic nitrogen budget by marine biogeochemical processes

    Directory of Open Access Journals (Sweden)

    I. Kriest

    2015-09-01

    Analysis of the model misfit with respect to observed biogeochemical tracer distributions and fluxes suggests a particle flux profile close to the one suggested by Martin et al. (1987). Simulated pelagic denitrification best agrees with the lower values between 59 and 84 Tg N yr−1 recently estimated by other authors.
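    The Martin et al. (1987) profile referenced above is the canonical power-law attenuation of sinking particle flux with depth. A minimal sketch; b = 0.858 is the exponent from the original open-ocean composite:

```python
def martin_flux(z, f_ref=1.0, z_ref=100.0, b=0.858):
    # F(z) = F(z_ref) * (z / z_ref)^(-b): particle flux decays as a
    # power law below the reference (export) depth z_ref (meters).
    return f_ref * (z / z_ref) ** -b

# Fraction of the 100 m export flux surviving to 1000 m depth:
frac_1000 = martin_flux(1000.0)  # ~0.14
```

    Fitting b against observed tracer distributions, as in the abstract, is what constrains how much organic matter reaches the deep ocean versus being remineralized in the upper water column.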

  8. A biogeochemical transport model to simulate the attenuation of chlorinated hydrocarbon contaminant fluxes across the groundwater-surface water interface

    DEFF Research Database (Denmark)

    Malaguerra, Flavio; Binning, Philip John; Albrechtsen, Hans-Jørgen

    2009-01-01

    and biogeochemical transformation model of the discharge of a TCE plume into a stream, and to determine which parameters most strongly affect pollutant discharge concentrations. Here biological kinetics and the interaction with the soil matrix are implemented in PHREEQC. The ability of PHREEQC to deal with a large...

  9. IIASA's climate-vegetation-biogeochemical cycle module as a part of an integrated model for climate change

    International Nuclear Information System (INIS)

    Ganopolski, A.V.; Jonas, M.; Krabec, J.; Olendrzynski, K.; Petoukhov, V.K.; Venevsky, S.V.

    1994-01-01

    The main objective of this study is the development of a hierarchy of coupled climate-biosphere models with a full description of the global biogeochemical cycles. These models are planned for use as the core of a set of integrated models of climate change, and they will incorporate the main elements of the Earth system (atmosphere, hydrosphere, pedosphere and biosphere) linked with each other (and eventually with the anthroposphere) through the fluxes of heat, momentum and water, and through the global biogeochemical cycles of carbon and nitrogen. This set of integrated models can be considered to fill the gap between highly simplified integrated models of climate change and very sophisticated and computationally expensive coupled models developed on the basis of general circulation models (GCMs). It is anticipated that this range of integrated models will be an effective tool for investigating the broad spectrum of problems connected with the coexistence of human society and the biosphere.

  10. Implications of a More Comprehensive Nitrogen Cycle in a Global Biogeochemical Ocean Model

    Science.gov (United States)

    Six, K. D.; Ilyina, T.

    2016-02-01

    Nitrogen plays a crucial role for nearly all living organisms in the Earth system. Changes in the marine nitrogen cycle not only alter the marine biota, but also have an impact on the marine carbon cycle and, in turn, on climate due to the close coupling of the carbon and nitrogen cycles. Understanding the processes and controls of the marine nitrogen cycle is therefore a prerequisite for reducing uncertainties in the prediction of future climate. Nevertheless, most ocean biogeochemical components of modern Earth system models have a rather simplistic representation of the marine N-cycle, mainly focusing on nitrate. Here we present results of the HAMburg Ocean Carbon Cycle model (HAMOCC), as part of the MPI-ESM, which was extended by a prognostic representation of ammonium and nitrite to resolve important processes of the marine N-cycle such as nitrification and anaerobic ammonium oxidation (anammox). Additionally, we updated the production of nitrous oxide, an important greenhouse gas, allowing for two sources: oxidation of ammonium (nitrification) and reduction of nitrite (nitrifier-denitrification) at low oxygen concentrations. Besides an extended model-data comparison, we use the model to discuss the following aspects of the N-cycle: (1) the contribution of anammox to the loss of fixed nitrogen, and (2) the production and emission of marine nitrous oxide.

  11. Variably Saturated Flow and Multicomponent Biogeochemical Reactive Transport Modeling of a Uranium Bioremediation Field Experiment

    International Nuclear Information System (INIS)

    Yabusaki, Steven B.; Fang, Yilin; Williams, Kenneth H.; Murray, Christopher J.; Ward, Anderson L.; Dayvault, Richard; Waichler, Scott R.; Newcomer, Darrell R.; Spane, Frank A.; Long, Philip E.

    2011-01-01

    Field experiments at a former uranium mill tailings site have identified the potential for stimulating indigenous bacteria to catalyze the conversion of aqueous uranium in the +6 oxidation state to immobile solid-associated uranium in the +4 oxidation state. This effectively removes uranium from solution, resulting in groundwater concentrations below actionable standards. Three-dimensional, coupled variably saturated flow and biogeochemical reactive transport modeling of a 2008 in situ uranium bioremediation field experiment is used to better understand the interplay of transport rates and biogeochemical reaction rates that determines the location and magnitude of key reaction products. A comprehensive reaction network, developed largely through previous 1-D modeling studies, was used to simulate the impacts on uranium behavior of pulsed acetate amendment, seasonal water table variation, and spatially variable physical (hydraulic conductivity, porosity) and geochemical (reactive surface area) material properties. A principal challenge is the mechanistic representation of biologically mediated terminal electron acceptor process (TEAP) reactions, whose products significantly alter geochemical controls on uranium mobility through increases in pH, alkalinity, exchangeable cations, and highly reactive reduction products. In general, these simulations of the 2008 Big Rusty acetate biostimulation field experiment in Rifle, Colorado confirmed previously identified behaviors, including (1) initial dominance by iron-reducing bacteria that concomitantly reduce aqueous U(VI), (2) sulfate-reducing bacteria that become dominant after ∼30 days and outcompete iron reducers for the acetate electron donor, (3) continuing iron-reducer activity and U(VI) bioreduction during dominantly sulfate-reducing conditions, and (4) lower apparent U(VI) removal from groundwater during dominantly sulfate-reducing conditions. New knowledge on simultaneously active metal and sulfate reducers has been

  12. Introducing mixotrophy into a biogeochemical model describing an eutrophied coastal ecosystem: The Southern North Sea

    Science.gov (United States)

    Ghyoot, Caroline; Lancelot, Christiane; Flynn, Kevin J.; Mitra, Aditee; Gypens, Nathalie

    2017-09-01

    Most biogeochemical/ecological models divide planktonic protists between phototrophs (phytoplankton) and heterotrophs (zooplankton). However, a large number of planktonic protists are able to combine several mechanisms of carbon and nutrient acquisition. Not representing these multiple mechanisms in biogeochemical/ecological models describing eutrophied coastal ecosystems can potentially lead to different conclusions regarding ecosystem functioning, especially regarding the success of harmful algae, which are often reported as mixotrophic. This modelling study investigates the implications for trophic dynamics of including 3 contrasting forms of mixotrophy, namely osmotrophy (using alkaline phosphatase activity, APA), non-constitutive mixotrophy (acquired phototrophy by microzooplankton) and also constitutive mixotrophy. The application is in the Southern North Sea, an ecosystem that faced, between 1985 and 2005, a significant increase in the nutrient supply N:P ratio (from 31 to 81 mol N:P). The comparison with a traditional model shows that, when the winter N:P ratio in the Southern North Sea is above 22 molN molP-1 (as occurred from the mid-1990s), APA allows a 3-32% increase in annual gross primary production (GPP). As a result of the higher GPP, annual sedimentation increases, as does bacterial production. By contrast, APA does not affect the export of matter to higher trophic levels because the increased GPP is mainly due to Phaeocystis colonies, which are not grazed by copepods. Under high irradiance, non-constitutive mixotrophy appreciably increases annual GPP, transfer to higher trophic levels, sedimentation, and nutrient remineralisation. In this ecosystem, non-constitutive mixotrophy is also observed to have an indirect stimulating effect on diatoms. Constitutive mixotrophy in nanoflagellates appears to have little influence on this ecosystem functioning. An important conclusion from this work is that contrasting forms of mixotrophy have different

  13. Parameter Sensitivity and Laboratory Benchmarking of a Biogeochemical Process Model for Enhanced Anaerobic Dechlorination

    Science.gov (United States)

    Kouznetsova, I.; Gerhard, J. I.; Mao, X.; Barry, D. A.; Robinson, C.; Brovelli, A.; Harkness, M.; Fisher, A.; Mack, E. E.; Payne, J. A.; Dworatzek, S.; Roberts, J.

    2008-12-01

    A detailed model to simulate trichloroethene (TCE) dechlorination in anaerobic groundwater systems has been developed and implemented through PHAST, a robust and flexible geochemical modeling platform. The approach is comprehensive but retains flexibility such that models of varying complexity can be used to simulate TCE biodegradation in the vicinity of nonaqueous phase liquid (NAPL) source zones. The complete model considers a full suite of biological (e.g., dechlorination, fermentation, sulfate and iron reduction, electron donor competition, toxic inhibition, pH inhibition), physical (e.g., flow and mass transfer) and geochemical processes (e.g., pH modulation, gas formation, mineral interactions). Example simulations with the model demonstrated that the feedback between biological, physical, and geochemical processes is critical. Successful simulation of a thirty-two-month column experiment with site soil, complex groundwater chemistry, and exhibiting both anaerobic dechlorination and endogenous respiration, provided confidence in the modeling approach. A comprehensive suite of batch simulations was then conducted to estimate the sensitivity of predicted TCE degradation to the 36 model input parameters. A local sensitivity analysis was first employed to rank the importance of parameters, revealing that 5 parameters consistently dominated model predictions across a range of performance metrics. A global sensitivity analysis was then performed to evaluate the influence of a variety of full parameter data sets available in the literature. The modeling study was performed as part of the SABRE (Source Area BioREmediation) project, a public/private consortium whose charter is to determine if enhanced anaerobic bioremediation can result in effective and quantifiable treatment of chlorinated solvent DNAPL source areas. The modelling conducted has provided valuable insight into the complex interactions between processes in the evolving biogeochemical systems
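    The local (one-at-a-time) ranking step described above can be sketched as follows. This is a generic illustration of the technique, not the actual SABRE/PHAST implementation: each parameter is perturbed about its base value and the normalized response is used to rank importance:

```python
def local_sensitivity(model, params, delta=0.01):
    # One-at-a-time local sensitivity: normalized (elasticity-style)
    # derivative S_i = (dY / Y) / (dp_i / p_i) at the base point.
    base = model(params)
    sens = {}
    for name, value in params.items():
        perturbed = dict(params)
        perturbed[name] = value * (1.0 + delta)
        sens[name] = ((model(perturbed) - base) / base) / delta
    return sens

# Hypothetical toy model: output depends strongly on k1, weakly on k2.
def toy(p):
    return p["k1"] ** 2 * p["k2"] ** 0.1

s = local_sensitivity(toy, {"k1": 2.0, "k2": 5.0})
```

    A global analysis, as the abstract notes, then varies all parameters jointly over literature ranges, since local rankings can change away from the base point.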

  14. CMS: Simulated Physical-Biogeochemical Data, SABGOM Model, Gulf of Mexico, 2005-2010

    Data.gov (United States)

    National Aeronautics and Space Administration — This dataset contains monthly mean ocean surface physical and biogeochemical data for the Gulf of Mexico simulated by the South Atlantic Bight and Gulf of Mexico...

  15. Mapping rice ecosystem dynamics and greenhouse gas emissions using multiscale imagery and biogeochemical models

    Science.gov (United States)

    Salas, W.; Torbick, N.

    2017-12-01

    Rice greenhouse gas (GHG) emissions in production hot spots have been mapped using multiscale satellite imagery and a process-based biogeochemical model. The multiscale Synthetic Aperture Radar (SAR) and optical imagery were co-processed and fed into a machine learning framework to map paddy attributes, which are tuned using field observations and surveys. Geospatial maps of rice extent, crop calendar, hydroperiod, and cropping intensity were then used to parameterize the DeNitrification-DeComposition (DNDC) model to estimate emissions. Results in the Red River Delta, for example, show total methane emissions of 345.4 million kg CH4-C, equivalent to 11.5 million tonnes CO2e (carbon dioxide equivalent). We further assessed the role of Alternate Wetting and Drying and its impact on GHG emissions and yield across production hot spots, with uncertainty estimates. The approach described in this research provides a framework for using SAR to derive maps of rice and landscape characteristics to drive process models like DNDC. These types of tools and approaches will support the next generation of Monitoring, Reporting, and Verification (MRV) to combat climate change and support ecosystem service markets.
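    The CH4-to-CO2e conversion behind the figures above can be checked directly: a flux reported as CH4-C scales to CH4 mass by 16/12 (molar masses of CH4 and C), and a 100-year global warming potential of 25 (the IPCC AR4 value, assumed here) reproduces the reported 11.5 million tonnes:

```python
def ch4_c_to_co2e(kg_ch4_c, gwp_ch4=25.0):
    # Convert a methane flux reported as CH4-C (kg of carbon) to kg CO2e:
    # scale C mass to CH4 mass by 16/12, then apply the 100-year GWP.
    kg_ch4 = kg_ch4_c * 16.0 / 12.0
    return kg_ch4 * gwp_ch4

# 345.4 million kg CH4-C -> ~11.5 million tonnes CO2e
tonnes_co2e = ch4_c_to_co2e(345.4e6) / 1000.0
```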

  16. A Hydrological Concept including Lateral Water Flow Compatible with the Biogeochemical Model ForSAFE

    Directory of Open Access Journals (Sweden)

    Giuliana Zanchi

    2016-03-01

    Full Text Available The study presents a hydrology concept developed to include lateral water flow in the biogeochemical model ForSAFE. The hydrology concept was evaluated against data collected at Svartberget in the Vindeln Research Forest in Northern Sweden. The results show that the new concept allows simulation of a saturated and an unsaturated zone in the soil as well as water flow that reaches the stream comparable to measurements. The most relevant differences compared to streamflow measurements are that the model simulates a higher base flow in winter and lower flow peaks after snowmelt. These differences are mainly caused by the assumptions made to regulate the percolation at the bottom of the simulated soil columns. The capability for simulating lateral flows and a saturated zone in ForSAFE can greatly improve the simulation of chemical exchange in the soil and export of elements from the soil to watercourses. Such a model can help improve the understanding of how environmental changes in the forest landscape will influence chemical loads to surface waters.

  17. Capturing optically important constituents and properties in a marine biogeochemical and ecosystem model

    Science.gov (United States)

    Dutkiewicz, S.; Hickman, A. E.; Jahn, O.; Gregg, W. W.; Mouw, C. B.; Follows, M. J.

    2015-07-01

    We present a numerical model of the ocean that couples a three-stream radiative transfer component with a marine biogeochemical-ecosystem component in a dynamic three-dimensional physical framework. The radiative transfer component resolves the penetration of spectral irradiance as it is absorbed and scattered within the water column. We explicitly include the effect of several optically important water constituents (different phytoplankton functional types; detrital particles; and coloured dissolved organic matter, CDOM). The model is evaluated against in situ-observed and satellite-derived products. In particular we compare to concurrently measured biogeochemical, ecosystem, and optical data along a meridional transect of the Atlantic Ocean. The simulation captures the patterns and magnitudes of these data, and estimates surface upwelling irradiance analogous to that observed by ocean colour satellite instruments. We find that incorporating the different optically important constituents explicitly and including spectral irradiance was crucial to capture the variability in the depth of the subsurface chlorophyll a (Chl a) maximum. We conduct a series of sensitivity experiments to demonstrate, globally, the relative importance of each of the water constituents, as well as the crucial feedbacks between the light field, the relative fitness of phytoplankton types, and the biogeochemistry of the ocean. CDOM has proportionally more importance at attenuating light at short wavelengths and in more productive waters, phytoplankton absorption is relatively more important at the subsurface Chl a maximum, and water molecules have the greatest contribution when concentrations of other constituents are low, such as in the oligotrophic gyres. Scattering had less effect on attenuation, but since it is important for the amount and type of upwelling irradiance, it is crucial for setting sea surface reflectance. 
Strikingly, sensitivity experiments in which absorption by any of the
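    The model above uses a full three-stream spectral radiative transfer scheme; the essence of why each constituent matters can be shown with a single-stream Beer-Lambert sketch, in which water, phytoplankton, and CDOM each add to the diffuse attenuation coefficient at a given wavelength. All coefficient values below are illustrative, not the model's:

```python
import math

def downwelling_irradiance(e0, z, a_w, a_chl, chl, a_cdom):
    # E(z) = E0 * exp(-(a_w + a_chl*Chl + a_cdom) * z): each optically
    # important constituent contributes to the attenuation coefficient k.
    # e0: surface irradiance; z: depth (m); a_*: absorption terms (1/m)
    k = a_w + a_chl * chl + a_cdom
    return e0 * math.exp(-k * z)

# Illustrative values at a blue wavelength: water, 1 mg/m^3 Chl, CDOM
e_10m = downwelling_irradiance(100.0, 10.0, 0.04, 0.02, 1.0, 0.01)
```

    Because a_chl and a_cdom are largest at short wavelengths, spectrally resolved attenuation, as the abstract stresses, is what sets the depth of the subsurface Chl a maximum.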

  18. Reconciling surface ocean productivity, export fluxes and sediment composition in a global biogeochemical ocean model

    Directory of Open Access Journals (Sweden)

    M. Gehlen

    2006-01-01

    Full Text Available This study focuses on an improved representation of the biological soft tissue pump in the global three-dimensional biogeochemical ocean model PISCES. We compare three parameterizations of particle dynamics: (1) the model standard version including two particle size classes, aggregation-disaggregation and prescribed sinking speed; (2) an aggregation-disaggregation model with a particle size spectrum and prognostic sinking speed; (3) a mineral ballast parameterization with no size classes, but prognostic sinking speed. In addition, the model includes a description of surface sediments and organic carbon early diagenesis. Model output is compared to data or data-based estimates of ocean productivity, pe-ratios, particle fluxes, surface sediment bulk composition and benthic O2 fluxes. Model results suggest that different processes control POC fluxes at different depths. In the wind-mixed layer, turbulent particle coagulation appears to be a key process in controlling pe-ratios. Parameterization (2) yields simulated pe-ratios that compare well to observations. Below the wind-mixed layer, POC fluxes are most sensitive to the intensity of zooplankton flux feeding, indicating the importance of zooplankton community composition. All model parameters being kept constant, the capability of the model to reproduce yearly mean POC fluxes below 2000 m and benthic oxygen demand does not, to first order, depend on the resolution of the particle size spectrum. Aggregate formation appears essential to initiate an intense biological pump. At great depth, the reported close-to-constant particle fluxes are most likely the result of the combined effect of aggregate formation and mineral ballasting.

  19. Improving National Capability in Biogeochemical Flux Modelling: the UK Environmental Virtual Observatory (EVOp)

    Science.gov (United States)

    Johnes, P.; Greene, S.; Freer, J. E.; Bloomfield, J.; Macleod, K.; Reaney, S. M.; Odoni, N. A.

    2012-12-01

    The best outcomes from watershed management arise where policy and mitigation efforts are underpinned by strong science evidence, but there are major resourcing problems associated with the scale of monitoring needed to effectively characterise the sources, rates and impacts of nutrient enrichment nationally. The challenge is to increase national capability in predictive modelling of nutrient flux to waters, securing an effective mechanism for transferring knowledge and management tools from data-rich to data-poor regions. The inadequacy of existing tools and approaches to address these challenges provided the motivation for the Environmental Virtual Observatory programme (EVOp), an innovation from the UK Natural Environment Research Council (NERC). EVOp is exploring the use of a cloud-based infrastructure in catchment science, developing an exemplar to explore N and P fluxes to inland and coastal waters in the UK from grid to catchment and national scale. EVOp is bringing together for the first time national data sets, models and uncertainty analysis in cloud computing environments to explore and benchmark current predictive capability for national-scale biogeochemical modelling. The objective is to develop national biogeochemical modelling capability, capitalising on extensive national investment in the development of science understanding and modelling tools to support integrated catchment management, and supporting knowledge transfer from data-rich to data-poor regions. The AERC export coefficient model (Johnes et al., 2007) has been adapted to function within the EVOp cloud environment, on a geoclimatic basis, using a range of high-resolution, geo-referenced digital datasets as an initial demonstration of the enhanced national capacity for N and P flux modelling using cloud computing infrastructure. Geoclimatic regions are landscape units displaying homogeneous or quasi-homogeneous functional behaviour in terms of process controls on N and P cycling
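    An export coefficient model of the kind adapted here sums, over land-use or source classes, the product of source extent, nutrient input, and an empirically derived export coefficient. A hypothetical sketch: the classes, areas, inputs, and coefficients below are invented for illustration and are not the AERC values:

```python
def nutrient_export(sources):
    # Export coefficient model: total nutrient flux to the watercourse is
    # the sum over source classes of area * input * export coefficient.
    # sources: list of (area_ha, input_kg_per_ha, coefficient) tuples.
    return sum(area * inp * coeff for area, inp, coeff in sources)

# Hypothetical catchment with three land-use classes:
load_kg = nutrient_export([
    (1000.0, 50.0, 0.20),  # arable
    (2000.0, 20.0, 0.10),  # grassland
    (500.0, 5.0, 0.05),    # woodland
])
```

    Transferring the model from data-rich to data-poor regions then amounts to reusing coefficients calibrated within the same geoclimatic class.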

  20. Glacial-interglacial variability in ocean oxygen and phosphorus in a global biogeochemical model

    Directory of Open Access Journals (Sweden)

    V Palastanga

    2013-02-01

    Full Text Available Increased transfer of particulate matter from continental shelves to the open ocean during glacials may have had a major impact on the biogeochemistry of the ocean. Here, we assess the response of the coupled oceanic cycles of oxygen, carbon, phosphorus, and iron to the input of particulate organic carbon and reactive phosphorus from shelves. We use a biogeochemical ocean model and specifically focus on the Last Glacial Maximum (LGM). When compared to an interglacial reference run, our glacial scenario with shelf input shows major increases in ocean productivity and phosphorus burial, while mean deep-water oxygen concentrations decline. There is a downward expansion of the oxygen minimum zones (OMZs) in the Atlantic and Indian Ocean, while the extension of the OMZ in the Pacific is slightly reduced. Oxygen concentrations below 2000 m also decline, but bottom waters do not become anoxic. The model simulations show that when shelf input of particulate organic matter and particulate reactive P is considered, low-oxygen areas in the glacial ocean expand, but concentrations are not low enough to generate wide-scale changes in sediment biogeochemistry and sedimentary phosphorus recycling. Increased reactive phosphorus burial in the open ocean during the LGM in the model is related to dust input, notably over the southwest Atlantic and northwest Pacific, whereas input of material from shelves explains higher burial fluxes in continental slope and rise regions. Our model results are in qualitative agreement with available data and reproduce the strong spatial differences in the response of phosphorus burial to glacial-interglacial change. They also highlight the need for additional sediment core records from all ocean basins to allow further insight into changes in phosphorus, carbon and oxygen dynamics in the ocean on glacial-interglacial timescales.

  1. PISCES-v2: an ocean biogeochemical model for carbon and ecosystem studies

    Directory of Open Access Journals (Sweden)

    O. Aumont

    2015-08-01

of marine ecosystems (phytoplankton, microzooplankton and mesozooplankton) and the biogeochemical cycles of carbon and of the main nutrients (P, N, Fe, and Si). The model is intended to be used for both regional and global configurations at high or low spatial resolutions as well as for short-term (seasonal, interannual) and long-term (climate change, paleoceanography) analyses. There are 24 prognostic variables (tracers), including two phytoplankton compartments (diatoms and nanophytoplankton), two zooplankton size classes (microzooplankton and mesozooplankton) and a description of the carbonate chemistry. Formulations in PISCES-v2 are based on a mixed Monod–quota formalism. On the one hand, the stoichiometry of C/N/P is fixed and the growth rate of phytoplankton is limited by the external availability of N, P and Si. On the other hand, the iron and silicon quotas are variable and the growth rate of phytoplankton is limited by the internal availability of Fe. Various parameterizations can be activated in PISCES-v2, setting, for instance, the complexity of iron chemistry or the description of particulate organic materials. So far, PISCES-v2 has been coupled to the Nucleus for European Modelling of the Ocean (NEMO) and Regional Ocean Modeling System (ROMS) systems. A full description of PISCES-v2 and of its optional functionalities is provided here. The results of a quasi-steady-state simulation are presented and evaluated against diverse observational and satellite-derived data. Finally, some of the new functionalities of PISCES-v2 are tested in a series of sensitivity experiments.
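The mixed Monod–quota formalism described above can be sketched as a growth rate limited by external Monod terms for N, P and Si and an internal, Droop-style quota term for Fe. All constants below are illustrative placeholders, not PISCES-v2 parameter values.

```python
def growth_rate(mu_max, no3, po4, si, fe_quota, fe_quota_min,
                k_no3=0.5, k_po4=0.05, k_si=2.0):
    """Phytoplankton growth under a mixed Monod-quota formalism (sketch).

    N, P and Si limitation follow external Monod terms; Fe limitation
    follows an internal quota (Droop-style) term. Half-saturation
    constants and the minimum quota are illustrative, not PISCES values.
    """
    # External nutrient limitation: most limiting Monod term wins.
    lim_monod = min(no3 / (no3 + k_no3),
                    po4 / (po4 + k_po4),
                    si / (si + k_si))
    # Internal iron limitation: Droop term on the cellular Fe quota.
    lim_quota = max(0.0, 1.0 - fe_quota_min / fe_quota)
    return mu_max * min(lim_monod, lim_quota)
```

Under replete external nutrients the iron quota term dominates, mirroring the split the abstract describes between external (N, P, Si) and internal (Fe) limitation.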

  2. Compilation of a global N2O emission inventory for tropical rainforest soils using a detailed biogeochemical model

    Energy Technology Data Exchange (ETDEWEB)

    Werner, C.

    2007-09-15

rainforests for the re-calibration of the ForestDNDC-tropica model using a multi-site, parallel Bayesian calibration approach. Extensive validation and sensitivity studies underlined the good agreement of the improved biogeochemical model with observed N2O fluxes. Based on a newly developed, detailed GIS database of tropical rainforests worldwide, the new model was then used to calculate a global N2O emission inventory. Daily N2O emissions for the years 1991-2000 were calculated. The results show striking spatial and temporal differences in N2O source strength. Based on the calculations in this study, the estimate of global N2O emissions from tropical rainforest soils was revised from the previous 1.2-3.6 Tg N yr-1 (based on a wide range of source areas considered) to 1.3 Tg N yr-1. As the accuracy of the model output depends on the quality of the data driving the model, an uncertainty assessment was performed to quantify the data-induced uncertainty in the presented N2O emission inventory.
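As a minimal stand-in for the multi-site, parallel Bayesian calibration mentioned above, a single-chain Metropolis sampler shows the basic mechanics; the likelihood function, starting point and step sizes here are generic placeholders, not the ForestDNDC-tropica setup.

```python
import math
import random

def metropolis_calibrate(log_likelihood, theta0, step, n_iter=5000, seed=0):
    """Single-chain Metropolis sampler (sketch of Bayesian calibration).

    The actual study used a multi-site, parallel scheme; this only
    illustrates the accept/reject core shared by such methods.
    """
    rng = random.Random(seed)
    theta = list(theta0)
    ll = log_likelihood(theta)
    chain = []
    for _ in range(n_iter):
        # Gaussian random-walk proposal around the current parameters.
        prop = [t + rng.gauss(0.0, s) for t, s in zip(theta, step)]
        ll_prop = log_likelihood(prop)
        # Accept with probability min(1, exp(ll_prop - ll)).
        if math.log(rng.random() + 1e-300) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain.append(list(theta))
    return chain
```

The retained chain approximates the posterior over model parameters; in a calibration like the one described, `log_likelihood` would compare simulated against observed N2O fluxes across sites.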

  3. Water, Energy, and Biogeochemical Model (WEBMOD), user’s manual, version 1

    Science.gov (United States)

    Webb, Richard M.T.; Parkhurst, David L.

    2017-02-08

    The Water, Energy, and Biogeochemical Model (WEBMOD) uses the framework of the U.S. Geological Survey (USGS) Modular Modeling System to simulate fluxes of water and solutes through watersheds. WEBMOD divides watersheds into model response units (MRU) where fluxes and reactions are simulated for the following eight hillslope reservoir types: canopy; snowpack; ponding on impervious surfaces; O-horizon; two reservoirs in the unsaturated zone, which represent preferential flow and matrix flow; and two reservoirs in the saturated zone, which also represent preferential flow and matrix flow. The reservoir representing ponding on impervious surfaces, currently not functional (2016), will be implemented once the model is applied to urban areas. MRUs discharge to one or more stream reservoirs that flow to the outlet of the watershed. Hydrologic fluxes in the watershed are simulated by modules derived from the USGS Precipitation Runoff Modeling System; the National Weather Service Hydro-17 snow model; and a topography-driven hydrologic model (TOPMODEL). Modifications to the standard TOPMODEL include the addition of heterogeneous vertical infiltration rates; irrigation; lateral and vertical preferential flows through the unsaturated zone; pipe flow draining the saturated zone; gains and losses to regional aquifer systems; and the option to simulate baseflow discharge by using an exponential, parabolic, or linear decrease in transmissivity. PHREEQC, an aqueous geochemical model, is incorporated to simulate chemical reactions as waters evaporate, mix, and react within the various reservoirs of the model. The reactions that can be specified for a reservoir include equilibrium reactions among water; minerals; surfaces; exchangers; and kinetic reactions such as kinetic mineral dissolution or precipitation, biologically mediated reactions, and radioactive decay. WEBMOD also simulates variations in the concentrations of the stable isotopes deuterium and oxygen-18 as a result of
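The three baseflow options named above (exponential, parabolic, or linear decrease in transmissivity) can be sketched as functions of the saturated-zone storage deficit. This is a minimal sketch of the standard TOPMODEL-style forms; parameter values are illustrative, not WEBMOD defaults.

```python
import math

def baseflow(storage_deficit, q0, m=0.03, s_max=0.5, profile="exponential"):
    """Baseflow discharge for three transmissivity-decrease profiles.

    q0 is the discharge at zero storage deficit; m and s_max are
    illustrative shape parameters, not WEBMOD defaults.
    """
    s = storage_deficit
    if profile == "exponential":
        # Classic TOPMODEL exponential transmissivity profile.
        return q0 * math.exp(-s / m)
    if profile == "parabolic":
        return q0 * max(0.0, 1.0 - s / s_max) ** 2
    if profile == "linear":
        return q0 * max(0.0, 1.0 - s / s_max)
    raise ValueError(f"unknown profile: {profile}")
```

All three profiles give q0 at zero deficit but decay differently as the catchment dries, which is what the user-selectable option controls.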

  4. Development of an advanced eco-hydrologic and biogeochemical coupling model aimed at clarifying the missing role of inland water in the global biogeochemical cycle

    Science.gov (United States)

    Nakayama, Tadanobu

    2017-04-01

Recent research has shown that inland water, including rivers, lakes, and groundwater, may play a role in carbon cycling, although its contribution has remained uncertain due to the limited amount of reliable data available. In this study, the author developed an advanced model coupling eco-hydrology and the biogeochemical cycle (National Integrated Catchment-based Eco-hydrology (NICE)-BGC). This new model incorporates the complex coupling of the hydrologic and carbon cycles in terrestrial-aquatic linkages and the interplay between inorganic and organic carbon during the whole process of carbon cycling. The model simulated both horizontal transports (export from land to inland water 2.01 ± 1.98 Pg C/yr, and transport to the ocean 1.13 ± 0.50 Pg C/yr) and vertical fluxes (degassing 0.79 ± 0.38 Pg C/yr, and sediment storage 0.20 ± 0.09 Pg C/yr) in major rivers in good agreement with previous research, improving on earlier estimates of carbon flux. The model results also showed that the global net land flux simulated by NICE-BGC (-1.05 ± 0.62 Pg C/yr) is a slightly smaller carbon sink than that of the revised Lund-Potsdam-Jena Wetland Hydrology and Methane model (-1.79 ± 0.64 Pg C/yr) and previous estimates (-2.8 to -1.4 Pg C/yr). This is attributable to the CO2 evasion and lateral carbon transport explicitly included in the model, and the result suggests that most previous studies have generally overestimated the accumulation of terrestrial carbon and underestimated the potential for lateral transport. The results further imply that the difference between inverse techniques and budget estimates can be explained to some extent by a net source from inland water. NICE-BGC should play an important role in re-evaluating the greenhouse gas budget of the biosphere, quantifying hot spots, and bridging the gap between top-down and bottom-up approaches to the global carbon budget.

  5. Assessment of the GHG Reduction Potential from Energy Crops Using a Combined LCA and Biogeochemical Process Models: A Review

    Directory of Open Access Journals (Sweden)

    Dong Jiang

    2014-01-01

Full Text Available The main purpose of developing biofuel is to reduce GHG (greenhouse gas) emissions, but the comprehensive environmental impact of such fuels is not clear. Life cycle analysis (LCA), as a comprehensive analysis method, has been widely used in bioenergy assessment studies. Great efforts have been directed toward establishing an efficient method for comprehensively estimating the GHG emission reduction potential from the large-scale cultivation of energy plants by combining LCA with ecosystem/biogeochemical process models. LCA presents a general framework for evaluating the energy consumption and GHG emissions from energy crop planting, yield acquisition, production, product use, and postprocessing. Meanwhile, ecosystem/biogeochemical process models are adopted to simulate the fluxes and storage of energy, water, carbon, and nitrogen in the soil-plant (energy crops)-soil continuum. Although clear progress has been made in recent years, some problems still exist in current studies and should be addressed. This paper reviews the state-of-the-art methods for estimating GHG emission reduction through developing energy crops and introduces in detail a new approach for assessing GHG emission reduction by combining LCA with biogeochemical process models. The main achievements of this study, along with the problems in current studies, are described and discussed.
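At its core, the combined LCA/process-model accounting reduces to a balance of inventory terms. This minimal sketch assumes all terms are already expressed in t CO2e and does not follow any specific LCA standard; the soil flux term is the piece the biogeochemical process model would supply.

```python
def ghg_reduction(fossil_displaced_tco2e, farming_emissions_tco2e,
                  soil_flux_tco2e, processing_tco2e):
    """Net life-cycle GHG balance of an energy crop (illustrative).

    A positive result means a net reduction relative to the fossil
    baseline. Soil C/N2O fluxes would come from the biogeochemical
    process model; the other terms from the LCA inventory.
    """
    life_cycle_emissions = (farming_emissions_tco2e
                            + soil_flux_tco2e
                            + processing_tco2e)
    return fossil_displaced_tco2e - life_cycle_emissions
```

The point of coupling the two methods is that `soil_flux_tco2e` becomes site- and management-specific instead of a fixed emission factor.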

  6. Reconstructing the Nd oceanic cycle using a coupled dynamical – biogeochemical model

    Directory of Open Access Journals (Sweden)

    T. Arsouze

    2009-12-01

Full Text Available The decoupled behaviour observed between the Nd isotopic composition (Nd IC, also referred to as εNd) and Nd concentration cycles has led to the notion of a "Nd paradox". While εNd behaves in a quasi-conservative way in the open ocean, leading to its broad use as a water-mass tracer, Nd concentration displays vertical profiles that increase with depth, together with a deep-water enrichment along the global thermohaline circulation. This non-conservative behaviour is typical of nutrients affected by scavenging in surface waters and remineralisation at depth. In addition, recent studies suggest that the only way to reconcile both the concentration and Nd IC oceanic budgets is to invoke a "Boundary Exchange" process (BE, defined as the co-occurrence of transfer of elements from the margin to the sea with removal of elements from the sea by boundary scavenging) as a source-sink term. However, these studies do not simulate the input/output fluxes of Nd to the ocean, and therefore lack crucial information, which limits our understanding of Nd decoupling. To investigate this paradox on a global scale, this study uses for the first time a fully prognostic coupled dynamical/biogeochemical model with an explicit representation of Nd sources and sinks to simulate the Nd oceanic cycle. Sources considered include dissolved river fluxes, atmospheric dust and margin sediment re-dissolution. The sink is scavenging by settling particles. This model simulates the global features of the Nd oceanic cycle well, producing a realistic distribution of Nd concentration (correct order of magnitude, increase with depth and along the conveyor belt; 65% of the simulated values fit within the ±10 pmol/kg envelope when compared to the data) and isotopic composition (inter-basin gradient, characterization of the main water masses; more than 70% of the simulated values fit within the ±3 εNd envelope when compared to the data), though a slight overestimation of

  7. HYDROBIOGEOCHEM: A coupled model of HYDROlogic transport and mixed BIOGEOCHEMical kinetic/equilibrium reactions in saturated-unsaturated media

    Energy Technology Data Exchange (ETDEWEB)

    Yeh, G.T.; Salvage, K.M. [Pennsylvania State Univ., University Park, PA (United States). Dept. of Civil and Environmental Engineering; Gwo, J.P. [Oak Ridge National Lab., TN (United States); Zachara, J.M.; Szecsody, J.E. [Pacific Northwest National Lab., Richland, WA (United States)

    1998-07-01

    The computer program HYDROBIOGEOCHEM is a coupled model of HYDROlogic transport and BIOGEOCHEMical kinetic and/or equilibrium reactions in saturated/unsaturated media. HYDROBIOGEOCHEM iteratively solves the two-dimensional transport equations and the ordinary differential and algebraic equations of mixed biogeochemical reactions. The transport equations are solved for all aqueous chemical components and kinetically controlled aqueous species. HYDROBIOGEOCHEM is designed for generic application to reactive transport problems affected by both microbiological and geochemical reactions in subsurface media. Input to the program includes the geometry of the system, the spatial distribution of finite elements and nodes, the properties of the media, the potential chemical and microbial reactions, and the initial and boundary conditions. Output includes the spatial distribution of chemical and microbial concentrations as a function of time and space, and the chemical speciation at user-specified nodes.

  8. Regional impacts of iron-light colimitation in a global biogeochemical model

    Science.gov (United States)

    Galbraith, E. D.; Gnanadesikan, A.; Dunne, J. P.; Hiscock, M. R.

    2010-03-01

Laboratory and field studies have revealed that iron has multiple roles in phytoplankton physiology, with particular importance for light-harvesting cellular machinery. However, although iron-limitation is explicitly included in numerous biogeochemical/ecosystem models, its implementation varies, and its effect on the efficiency of light harvesting is often ignored. Given the complexity of the ocean environment, it is difficult to predict the consequences of applying different iron limitation schemes. Here we explore the interaction of iron and nutrient cycles in an ocean general circulation model using a new, streamlined model of ocean biogeochemistry. Building on previously published parameterizations of photoadaptation and export production, the Biogeochemistry with Light Iron Nutrients and Gasses (BLING) model is constructed with only four explicit tracers but including macronutrient and micronutrient limitation, light limitation, and an implicit treatment of community structure. The structural simplicity of this computationally inexpensive model allows us to clearly isolate the global effect that iron availability has on maximum light-saturated photosynthesis rates vs. the effect iron has on photosynthetic efficiency. We find that the effect on light-saturated photosynthesis rates is dominant, negating the importance of photosynthetic efficiency in most regions, especially the cold waters of the Southern Ocean. The primary exceptions to this occur in iron-rich regions of the Northern Hemisphere, where high light-saturated photosynthesis rates allow photosynthetic efficiency to play a more important role. In other words, the ability to efficiently harvest photons has little effect in regions where light-saturated growth rates are low. Additionally, we speculate that the phytoplankton cells dominating iron-limited regions tend to have relatively high photosynthetic efficiency, due to reduced packaging effects. If this speculation is correct, it would imply that
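The distinction drawn here between iron's effect on light-saturated rates and on photosynthetic efficiency can be illustrated with a standard saturating photosynthesis-irradiance curve; the constants and the simple linear iron scaling are assumptions for illustration, not the BLING equations.

```python
import math

def photosynthesis(irradiance, fe_lim, pmax_ref=1.0, alpha_ref=0.05,
                   iron_affects_alpha=True):
    """Photosynthesis vs irradiance with two possible roles for iron.

    fe_lim in (0, 1] scales the light-saturated rate (pmax) and,
    optionally, the initial slope (photosynthetic efficiency, alpha).
    Constants are illustrative; the functional form is a generic
    saturating P-I curve, not the exact BLING formulation.
    """
    pmax = pmax_ref * fe_lim
    alpha = alpha_ref * (fe_lim if iron_affects_alpha else 1.0)
    return pmax * (1.0 - math.exp(-alpha * irradiance / pmax))
```

At saturating light the curve approaches `pmax` regardless of `alpha`, which mirrors the paper's finding that the efficiency effect matters little where the light-saturated rate already dominates; at low light, P is roughly `alpha * irradiance`, so there the efficiency effect shows.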

  9. Regional impacts of iron-light colimitation in a global biogeochemical model

    Directory of Open Access Journals (Sweden)

    E. D. Galbraith

    2010-03-01

Full Text Available Laboratory and field studies have revealed that iron has multiple roles in phytoplankton physiology, with particular importance for light-harvesting cellular machinery. However, although iron-limitation is explicitly included in numerous biogeochemical/ecosystem models, its implementation varies, and its effect on the efficiency of light harvesting is often ignored. Given the complexity of the ocean environment, it is difficult to predict the consequences of applying different iron limitation schemes. Here we explore the interaction of iron and nutrient cycles in an ocean general circulation model using a new, streamlined model of ocean biogeochemistry. Building on previously published parameterizations of photoadaptation and export production, the Biogeochemistry with Light Iron Nutrients and Gasses (BLING) model is constructed with only four explicit tracers but including macronutrient and micronutrient limitation, light limitation, and an implicit treatment of community structure. The structural simplicity of this computationally inexpensive model allows us to clearly isolate the global effect that iron availability has on maximum light-saturated photosynthesis rates vs. the effect iron has on photosynthetic efficiency. We find that the effect on light-saturated photosynthesis rates is dominant, negating the importance of photosynthetic efficiency in most regions, especially the cold waters of the Southern Ocean. The primary exceptions to this occur in iron-rich regions of the Northern Hemisphere, where high light-saturated photosynthesis rates allow photosynthetic efficiency to play a more important role. In other words, the ability to efficiently harvest photons has little effect in regions where light-saturated growth rates are low. Additionally, we speculate that the phytoplankton cells dominating iron-limited regions tend to have relatively high photosynthetic efficiency, due to reduced packaging effects. If this speculation is correct

  10. Constraining a complex biogeochemical model for CO2 and N2O emission simulations from various land uses by model-data fusion

    Science.gov (United States)

    Houska, Tobias; Kraus, David; Kiese, Ralf; Breuer, Lutz

    2017-07-01

This study presents the results of a combined measurement and modelling strategy to analyse N2O and CO2 emissions from adjacent arable land, forest and grassland sites in Hesse, Germany. The measured emissions reveal seasonal patterns and management effects, including fertilizer application, tillage, harvest and grazing. The measured annual N2O fluxes are 4.5, 0.4 and 0.1 kg N ha-1 a-1, and the CO2 fluxes are 20.0, 12.2 and 3.0 t C ha-1 a-1 for the arable land, grassland and forest sites, respectively. An innovative model-data fusion concept based on a multicriteria evaluation (soil moisture at different depths, yield, CO2 and N2O emissions) is used to rigorously test the LandscapeDNDC biogeochemical model. The model is run in a Latin-hypercube-based uncertainty analysis framework to constrain model parameter uncertainty and derive behavioural model runs. The results indicate that the model is generally capable of predicting trace gas emissions, as evaluated with RMSE as the objective function. The model shows a reasonable performance in simulating the ecosystem C and N balances. The model-data fusion concept helps to detect remaining model errors, such as missing (e.g. freeze-thaw cycling) or incomplete (e.g. respiration rates after harvest) model processes. It also helps identify missing model input sources (e.g. the uptake of N through shallow groundwater on grassland during the vegetation period) and uncertainty in the measured validation data (e.g. forest N2O emissions in winter months). Guidance is provided to improve the model structure and field measurements to further advance landscape-scale model predictions.
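The Latin-hypercube uncertainty framework with behavioural-run screening can be sketched as follows; `model`, the parameter bounds, and the RMSE threshold are generic placeholders rather than the LandscapeDNDC setup.

```python
import random

def latin_hypercube(n_samples, bounds, seed=0):
    """Latin hypercube sample over bounds, a dict of name -> (lo, hi)."""
    rng = random.Random(seed)
    samples = [{} for _ in range(n_samples)]
    for name, (lo, hi) in bounds.items():
        # One sample per stratum, strata shuffled independently per parameter.
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i, s in enumerate(strata):
            u = (s + rng.random()) / n_samples
            samples[i][name] = lo + u * (hi - lo)
    return samples

def behavioural_runs(model, observations, bounds, n_samples=1000,
                     rmse_threshold=1.0):
    """Keep parameter sets whose RMSE against observations is below a
    threshold (GLUE-style screening, as in the model-data fusion above)."""
    kept = []
    for params in latin_hypercube(n_samples, bounds):
        sim = model(params)
        rmse = (sum((s - o) ** 2 for s, o in zip(sim, observations))
                / len(observations)) ** 0.5
        if rmse < rmse_threshold:
            kept.append((params, rmse))
    return kept
```

The spread of the retained ("behavioural") parameter sets is what constrains parameter uncertainty in such a framework.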

  11. Optical Remote Sensing Algorithm Validation using High-Frequency Underway Biogeochemical Measurements in Three Large Global River Systems

    Science.gov (United States)

    Kuhn, C.; Richey, J. E.; Striegl, R. G.; Ward, N.; Sawakuchi, H. O.; Crawford, J.; Loken, L. C.; Stadler, P.; Dornblaser, M.; Butman, D. E.

    2017-12-01

More than 93% of the world's river-water volume occurs in basins impacted by large dams and about 43% of river water discharge is impacted by flow regulation. Human land use also alters nutrient and carbon cycling and the emission of carbon dioxide from inland reservoirs. Increased water residence times and warmer temperatures in reservoirs fundamentally alter the physical settings for biogeochemical processing in large rivers, yet river biogeochemistry for many large systems remains undersampled. Satellite remote sensing holds promise as a methodology for responsive regional and global water resources management. Decades of ocean optics research have laid the foundation for the use of remote sensing reflectance in optical wavelengths (400-700 nm) to produce satellite-derived, near-surface estimates of phytoplankton chlorophyll concentration. Significant improvements between successive generations of ocean color sensors have enabled the scientific community to document changes in global ocean productivity (NPP) and estimate ocean biomass with increasing accuracy. Despite large advances in ocean optics, the application of optical methods to inland waters has been limited to date due to their optical complexity and small spatial scale. To test this frontier, we present a study evaluating the accuracy and suitability of empirical inversion approaches for estimating chlorophyll-a, turbidity and temperature for the Amazon, Columbia and Mississippi rivers using satellite remote sensing. We demonstrate how riverine biogeochemical measurements collected at high frequencies from underway vessels can be used as in situ matchups to evaluate remotely sensed, near-surface temperature, turbidity and chlorophyll-a derived from the Landsat 8 (NASA) and Sentinel 2 (ESA) satellites. We investigate the use of remote sensing water reflectance to infer trophic status as well as tributary influences on the optical characteristics of the Amazon, Mississippi and Columbia rivers.
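An empirical inversion of the kind evaluated here can be as simple as a fitted band-ratio regression on remote sensing reflectance. The band choice and the coefficients `a` and `b` below are hypothetical; in a study like this they would be fit to the underway matchup data, not taken from any published algorithm.

```python
def chla_band_ratio(rrs_green, rrs_red, a=25.0, b=1.8):
    """Empirical band-ratio inversion for chlorophyll-a (illustrative).

    Turbid inland waters often use red/green (or NIR/red) reflectance
    ratios rather than the blue/green ratios of open-ocean algorithms.
    Coefficients a and b are hypothetical placeholders to be fit
    against in situ matchups.
    """
    return a * (rrs_red / rrs_green) ** b
```

A ratio of two bands partially cancels atmospheric and illumination effects, which is one reason band ratios remain the workhorse of empirical water-colour inversion.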

  12. Decoupling of arsenic and iron release from ferrihydrite suspension under reducing conditions: a biogeochemical model

    Directory of Open Access Journals (Sweden)

    Morin Guillaume

    2007-11-01

Full Text Available High levels of arsenic in groundwater and drinking water are a major health problem. Although the processes controlling the release of As are still not well known, the reductive dissolution of As-rich Fe oxyhydroxides has so far been a favorite hypothesis. Decoupling between arsenic and iron redox transformations has been experimentally demonstrated, but not quantitatively interpreted. Here, we report on incubation batch experiments run with As(V) sorbed on, or co-precipitated with, 2-line ferrihydrite. The biotic and abiotic processes of As release were investigated by using wet chemistry, X-ray diffraction, X-ray absorption and genomic techniques. The incubation experiments were carried out with a phosphate-rich growth medium and a community of Fe(III)-reducing bacteria under strictly anoxic conditions for two months. During the first month, the release of Fe(II) into the aqueous phase amounted to only 3% to 10% of the total initial solid Fe concentration, whilst the total aqueous As remained almost constant after an initial exchange with phosphate ions. During the second month, the aqueous Fe(II) concentration remained constant, or even decreased, whereas the total quantity of As released to the solution accounted for 14% to 45% of the total initial solid As concentration. At the end of the incubation, the aqueous-phase arsenic was present predominantly as As(III), whilst X-ray absorption spectroscopy indicated that more than 70% of the solid-phase arsenic was present as As(V). X-ray diffraction revealed vivianite, Fe(II)3(PO4)2·8H2O, in some of the experiments. A biogeochemical model was then developed to simulate these aqueous- and solid-phase results. The two main conclusions drawn from the model are that (1) As(V) is not reduced during the first incubation month with high Eh values, but rather re-adsorbed onto the ferrihydrite surface, and this state remains until arsenic reduction is energetically more favorable than iron reduction, and (2) the

  13. Quantifying the Variability of CH4 Emissions from Pan-Arctic Lakes with Lake Biogeochemical and Landscape Evolution Models

    Science.gov (United States)

    Tan, Z.; Zhuang, Q.

    2014-12-01

Recent studies in the arctic and subarctic show that CH4 emissions from pan-arctic lakes play a much more significant role in regional carbon cycling than previously estimated. Permafrost thawing due to pronounced warming at northern high latitudes affects lake morphology, changing lake CH4 emissions. Thermokarst can enlarge the extent of arctic lakes, exposing stable ancient carbon buried in the permafrost zone to degradation and changing a previously known carbon sink into a large carbon source. In some areas, the thawing of subarctic discontinuous and isolated permafrost can diminish thermokarst lakes. To date, few models have considered these important hydrological and biogeochemical processes to provide adequate estimates of CH4 emissions from these lakes. To fill this gap, we have developed a process-based, climate-sensitive lake biogeochemical model and a landscape evolution model, which have been applied to quantify the state and variability of CH4 emissions from this freshwater system. Site-level experiments show the models are capable of capturing the spatial and temporal variability of CH4 emissions from lakes across Siberia and Alaska. With the lake biogeochemical model alone, we estimate that the magnitude of CH4 emissions from lakes north of 60 ºN is currently 13.2 Tg yr-1, which is of the same order as CH4 emissions from northern high-latitude wetlands. The maximum increment is 11.8 Tg CH4 yr-1 by the end of the 21st century under the worst warming scenario. We expect the landscape evolution model to improve the existing estimates.

  14. Biogeochemical impact of a model western iron source in the Pacific Equatorial Undercurrent

    OpenAIRE

    Slemons, L.; Gorgues, T.; Aumont, Olivier; Menkès, Christophe; Murray, J. W.

    2009-01-01

    Trace element distributions in the source waters of the Pacific Equatorial Undercurrent (EUC) show the existence of elevated total acid-soluble iron concentrations. This region has been suggested to contribute enough bioavailable iron to regulate interannual and interglacial variability in biological productivity downstream in the high-nitrate low-chlorophyll upwelling zone of the eastern equatorial Pacific. We investigated the advection and first-order biogeochemical impact of an imposed, da...

  15. Natural and drought scenarios in an east central Amazon forest: Fidelity of the Community Land Model 3.5 with three biogeochemical models

    Science.gov (United States)

    Sakaguchi, Koichi; Zeng, Xubin; Christoffersen, Bradley J.; Restrepo-Coupe, Natalia; Saleska, Scott R.; Brando, Paulo M.

    2011-03-01

    Recent development of general circulation models involves biogeochemical cycles: flows of carbon and other chemical species that circulate through the Earth system. Such models are valuable tools for future projections of climate, but still bear large uncertainties in the model simulations. One of the regions with especially high uncertainty is the Amazon forest where large-scale dieback associated with the changing climate is predicted by several models. In order to better understand the capability and weakness of global-scale land-biogeochemical models in simulating a tropical ecosystem under the present day as well as significantly drier climates, we analyzed the off-line simulations for an east central Amazon forest by the Community Land Model version 3.5 of the National Center for Atmospheric Research and its three independent biogeochemical submodels (CASA', CN, and DGVM). Intense field measurements carried out under Large Scale Biosphere-Atmosphere Experiment in Amazonia, including forest response to drought from a throughfall exclusion experiment, are utilized to evaluate the whole spectrum of biogeophysical and biogeochemical aspects of the models. Our analysis shows reasonable correspondence in momentum and energy turbulent fluxes, but it highlights three processes that are not in agreement with observations: (1) inconsistent seasonality in carbon fluxes, (2) biased biomass size and allocation, and (3) overestimation of vegetation stress to short-term drought but underestimation of biomass loss from long-term drought. Without resolving these issues the modeled feedbacks from the biosphere in future climate projections would be questionable. We suggest possible directions for model improvements and also emphasize the necessity of more studies using a variety of in situ data for both driving and evaluating land-biogeochemical models.

  16. Validating Animal Models

    Directory of Open Access Journals (Sweden)

    Nina Atanasova

    2015-06-01

Full Text Available In this paper, I respond to the challenge raised against contemporary experimental neurobiology according to which the field is in a state of crisis because the multiple experimental protocols employed in different laboratories presumably preclude the validity of neurobiological knowledge. I provide an alternative account of experimentation in neurobiology which makes sense of its experimental practices. I argue that maintaining a multiplicity of experimental protocols and strengthening their reliability are well justified and that they foster rather than preclude the validity of neurobiological knowledge. Thus, their presence indicates thriving rather than crisis in experimental neurobiology.

  17. Impact of urban effluents on summer hypoxia in the highly turbid Gironde Estuary, applying a 3D model coupling hydrodynamics, sediment transport and biogeochemical processes

    Science.gov (United States)

    Lajaunie-Salla, Katixa; Wild-Allen, Karen; Sottolichio, Aldo; Thouvenin, Bénédicte; Litrico, Xavier; Abril, Gwenaël

    2017-10-01

    Estuaries are increasingly degraded due to coastal urban development and are prone to hypoxia problems. The macro-tidal Gironde Estuary is characterized by a highly concentrated turbidity maximum zone (TMZ). Field observations show that hypoxia occurs in summer in the TMZ at low river flow and a few days after the spring tide peak. In situ data highlight lower dissolved oxygen (DO) concentrations around the city of Bordeaux, located in the upper estuary. Interactions between multiple factors limit the understanding of the processes controlling the dynamics of hypoxia. A 3D biogeochemical model was developed, coupled with hydrodynamics and a sediment transport model, to assess the contribution of the TMZ and the impact of urban effluents through wastewater treatment plants (WWTPs) and sewage overflows (SOs) on hypoxia. Our model describes the transport of solutes and suspended material and the biogeochemical mechanisms impacting oxygen: primary production, degradation of all organic matter (i.e. including phytoplankton respiration, degradation of river and urban watershed matter), nitrification and gas exchange. The composition and the degradation rates of each variable were characterized by in situ measurements and experimental data from the study area. The DO model was validated against observations in Bordeaux City. The simulated DO concentrations show good agreement with field observations and satisfactorily reproduce the seasonal and neap-spring time scale variations around the city of Bordeaux. Simulations show a spatial and temporal correlation between the formation of summer hypoxia and the location of the TMZ, with minimum DO centered in the vicinity of Bordeaux. To understand the contribution of the urban watershed forcing, different simulations with the presence or absence of urban effluents were compared. Our results show that in summer, a reduction of POC from SO would increase the DO minimum in the vicinity of Bordeaux by 3% of saturation. Omitting

  18. Dynamic modeling of nitrogen losses in river networks unravels the coupled effects of hydrological and biogeochemical processes

    Science.gov (United States)

    Alexander, Richard B.; Böhlke, John Karl; Boyer, Elizabeth W.; David, Mark B.; Harvey, Judson W.; Mulholland, Patrick J.; Seitzinger, Sybil P.; Tobias, Craig R.; Tonitto, Christina; Wollheim, Wilfred M.

    2009-01-01

    The importance of lotic systems as sinks for nitrogen inputs is well recognized. A fraction of nitrogen in streamflow is removed to the atmosphere via denitrification with the remainder exported in streamflow as nitrogen loads. At the watershed scale, there is a keen interest in understanding the factors that control the fate of nitrogen throughout the stream channel network, with particular attention to the processes that deliver large nitrogen loads to sensitive coastal ecosystems. We use a dynamic stream transport model to assess biogeochemical (nitrate loadings, concentration, temperature) and hydrological (discharge, depth, velocity) effects on reach-scale denitrification and nitrate removal in the river networks of two watersheds having widely differing levels of nitrate enrichment but nearly identical discharges. Stream denitrification is estimated by regression as a nonlinear function of nitrate concentration, streamflow, and temperature, using more than 300 published measurements from a variety of US streams. These relations are used in the stream transport model to characterize nitrate dynamics related to denitrification at a monthly time scale in the stream reaches of the two watersheds. Results indicate that the nitrate removal efficiency of streams, as measured by the percentage of the stream nitrate flux removed via denitrification per unit length of channel, is appreciably reduced during months with high discharge and nitrate flux and increases during months of low-discharge and flux. Biogeochemical factors, including land use, nitrate inputs, and stream concentrations, are a major control on reach-scale denitrification, evidenced by the disproportionately lower nitrate removal efficiency in streams of the highly nitrate-enriched watershed as compared with that in similarly sized streams in the less nitrate-enriched watershed. 
Sensitivity analyses reveal that these important biogeochemical factors and physical hydrological factors contribute nearly
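
The nonlinear regression idea described above (denitrification rate as a function of nitrate concentration and temperature) can be sketched as follows. The power-law/exponential functional form, parameter values, and synthetic data are illustrative assumptions, not the study's fitted relations:

```python
import numpy as np
from scipy.optimize import curve_fit

def denit_rate(X, k, a, b):
    """Hypothetical nonlinear form: rate = k * C**a * exp(b*(T - 20)),
    with C = nitrate concentration and T = water temperature."""
    C, T = X
    return k * C**a * np.exp(b * (T - 20.0))

rng = np.random.default_rng(0)
C = rng.uniform(0.1, 10.0, 300)   # synthetic concentrations (mg N/L)
T = rng.uniform(5.0, 25.0, 300)   # synthetic temperatures (deg C)
# Synthetic "measurements": true parameters (0.5, 0.45, 0.07) plus noise.
rate = denit_rate((C, T), 0.5, 0.45, 0.07) * rng.lognormal(0.0, 0.05, 300)

# Fit the nonlinear relation to the measurement set.
popt, _ = curve_fit(denit_rate, (C, T), rate, p0=(1.0, 1.0, 0.0))
print(np.round(popt, 2))
```

With 300 noisy points, the fit recovers parameters close to the generating values; in the study, relations of this general kind feed the reach-scale transport model.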

  19. Comparison of Greenhouse Gas Emissions between Two Dairy Farm Systems (Conventional vs. Organic Management) in New Hampshire Using the Manure DNDC Biogeochemical Model

    Science.gov (United States)

    Dorich, C.; Contosta, A.; Li, C.; Brito, A.; Varner, R. K.

    2013-12-01

    Agriculture contributes 20 to 25 % of the total anthropogenic greenhouse gas (GHG) emissions globally. These agricultural emissions are primarily in the form of methane (CH4) and nitrous oxide (N2O), with these gases accounting for roughly 40 and 80 % of the total anthropogenic emissions of CH4 and N2O, respectively. Due to varied management and the complexities of agricultural ecosystems, it is difficult to estimate these CH4 and N2O emissions. The IPCC emission factors can be used to yield rough estimates of CH4 and N2O emissions, but they are often based on limited data. Accurate modeling validated by measurements is needed in order to identify potential mitigation areas, reduce GHG emissions from agriculture, and improve the sustainability of farming practices. The biogeochemical model Manure DNDC was validated using measurements from two dairy farms in New Hampshire, USA in order to quantify GHG emissions under different management systems. One organic and one conventional dairy farm operated by the University of New Hampshire's Agriculture Experiment Station were utilized as the study sites for validation of Manure DNDC. Compilation of management records started in 2011 to provide model inputs. Model results were then compared to field-collected samples of soil carbon and nitrogen, above-ground biomass, and GHG fluxes. Fluxes were measured at crop, animal, housing, and waste management sites on the farms in order to examine the entire farm ecosystem and test the validity of the model. Fluxes were measured by static flux chambers, with enteric fermentation measurements conducted by the SF6 tracer test as well as a new method called Greenfeeder. Our preliminary GHG flux analysis suggests higher emissions than predicted by IPCC emission factors and equations. Results suggest that emissions from manure management are a key concern at the conventional dairy farm, while bedded housing at the organic dairy produced large quantities of GHG.

  20. A 3D SPM model for biogeochemical modelling, with application to the northwest European continental shelf

    NARCIS (Netherlands)

    van der Molen, J.; Ruardij, P.; Greenwood, N.

    2017-01-01

    An SPM resuspension method was developed for use in 3D coupled hydrodynamics-biogeochemistry models to feed into simulations of the under-water light climate and primary production. The method uses a single mineral fine SPM component for computational efficiency, with a concentration-dependent

  1. Validating Dart Model

    Directory of Open Access Journals (Sweden)

    Mazur Jolanta

    2014-12-01

    Full Text Available The primary objective of the study was to quantitatively test the DART model which, despite being one of the most popular representations of the co-creation concept, has so far been studied almost solely with qualitative methods. To this end, the researchers developed a multiple-item measurement scale and employed it in interviews with managers. Statistical evidence for the adequacy of the model was obtained through CFA with AMOS software. The findings suggest that the DART model may not be an accurate representation of co-creation practices in companies. The data analysis made it evident that the building blocks of DART have too much conceptual overlap to form an effective framework for quantitative analysis. It also implied that the phenomenon of co-creation is so rich and multifaceted that it may be more adequately captured by a measurement model in which co-creation is conceived as a third-level factor with two layers of intermediate latent variables.

  2. Simulation of glacial ocean biogeochemical tracer and isotope distributions based on the PMIP3 suite of climate models

    Science.gov (United States)

    Khatiwala, Samar; Muglia, Juan; Kvale, Karin; Schmittner, Andreas

    2016-04-01

    In the present climate system, buoyancy-forced convection at high latitudes together with internal mixing results in a vigorous overturning circulation whose major component is North Atlantic Deep Water. One of the key questions of climate science is whether this "mode" of circulation persisted during glacial periods, and in particular at the Last Glacial Maximum (LGM; 21000 years before present). Resolving this question is both important for advancing our understanding of the climate system and a critical test of numerical models' ability to reliably simulate different climates. The observational evidence, based on interpreting geochemical tracers archived in sediments, is conflicting, as are simulations carried out with state-of-the-art climate models (e.g., as part of the PMIP3 suite), which, due to the computational cost involved, by and large do not include biogeochemical and isotope tracers that can be directly compared with proxy data. Here, we apply geochemical observations to evaluate the ability of several realisations of an ocean model driven by atmospheric forcing from the PMIP3 suite of climate models to simulate global ocean circulation during the LGM. This results in a wide range of circulation states that are then used to simulate biogeochemical tracer and isotope (13C, 14C and Pa/Th) distributions using an efficient, "offline" computational scheme known as the transport matrix method (TMM). One of the key advantages of this approach is the use of a uniform set of biogeochemical and isotope parameterizations across all the different circulations based on the PMIP3 models. We compare these simulated distributions to both modern observations and data from LGM ocean sediments to identify similarities and discrepancies between model and data. We find, for example, that when the ocean model is forced with wind stress from the PMIP3 models, the radiocarbon age of the deep ocean is systematically younger than in reconstructions. Changes in
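
The offline transport matrix idea can be illustrated with a toy example: once circulation is encoded as a matrix, stepping a tracer reduces to repeated matrix-vector products plus a biogeochemical source/sink term. The 4-box "ocean" below is made up for illustration, not a circulation-derived PMIP3 matrix:

```python
import numpy as np

# Toy 4-box ocean: A[i, j] is the fraction of tracer in box j that ends up
# in box i after one (e.g. monthly) transport step. Columns sum to 1, so
# pure transport conserves total tracer mass.
A = np.array([
    [0.90, 0.05, 0.00, 0.00],
    [0.10, 0.85, 0.05, 0.00],
    [0.00, 0.10, 0.90, 0.10],
    [0.00, 0.00, 0.05, 0.90],
])
q = np.array([0.01, 0.0, 0.0, -0.01])  # biogeochemical sources/sinks, net zero

c = np.array([1.0, 1.0, 1.0, 1.0])     # initial tracer distribution
for step in range(120):                # ten years of monthly "TMM" updates
    c = A @ c + q

print(np.round(c, 3))
```

In a real application A is a large sparse matrix extracted from the parent ocean model, which is what makes it cheap to run many tracers (13C, 14C, Pa/Th) under a uniform biogeochemical parameterization.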

  3. Validation through model testing

    International Nuclear Information System (INIS)

    1995-01-01

    Geoval-94 is the third Geoval symposium arranged jointly by the OECD/NEA and the Swedish Nuclear Power Inspectorate. Earlier symposia in this series took place in 1987 and 1990. In many countries, the ongoing programmes to site and construct deep geological repositories for high and intermediate level nuclear waste are close to realization. A number of studies demonstrate the potential barrier function of the geosphere, but also show that there are many unresolved issues. A key to these problems is the possibility of gaining knowledge by testing models against experiments and thereby increasing confidence in the models used for prediction. The sessions cover conclusions from the INTRAVAL project, experiences from integrated experimental programmes and underground research laboratories, as well as the integration between performance assessment and site characterisation. Technical issues ranging from waste and buffer interactions with the rock to radionuclide migration in different geological media are addressed. (J.S.)

  4. Perpetual Model Validation

    Science.gov (United States)

    2017-03-01

    25]. This inference process is carried out by a tool referred to as Hynger (Hybrid iNvariant GEneratoR), overviewed in Figure 4, which is a MATLAB ... initially on memory access patterns. A monitoring module will check, at runtime, that the observed memory access pattern matches the pattern the software is ... necessary. By using the developed approach, a model may be derived from initial tests or simulations, which will then be formally checked at runtime

  5. Application of a SEEK filter to a 1D biogeochemical model of the Ligurian Sea: Twin experiments and real data assimilation

    NARCIS (Netherlands)

    Raick, C.; Alvera-Azcarate, A.; Barth, A.; Brankart, J.-M.; Soetaert, K.E.R.; Grégoire, M.

    2007-01-01

    The Singular Evolutive Extended Kalman (SEEK) filter has been implemented to assimilate in-situ data in a 1D coupled physical-ecosystem model of the Ligurian Sea. The biogeochemical model describes the partly decoupled nitrogen and carbon cycles of the pelagic food web. The GHER hydrodynamic model

  6. Comparison of alternative spatial resolutions in the application of a spatially distributed biogeochemical model over complex terrain

    Science.gov (United States)

    Turner, D.P.; Dodson, R.; Marks, D.

    1996-01-01

    Spatially distributed biogeochemical models may be applied over grids at a range of spatial resolutions; however, evaluation of the potential errors and loss of information at relatively coarse resolutions is rare. In this study, a georeferenced database at 1-km spatial resolution was developed to initialize and drive a process-based model (Forest-BGC) of water and carbon balance over a gridded 54,976 km2 area covering two river basins in mountainous western Oregon. Corresponding data sets were also prepared at 10-km and 50-km spatial resolutions using commonly employed aggregation schemes. Estimates were made at each grid cell for climate variables including daily solar radiation, air temperature, humidity, and precipitation. The topographic structure, water holding capacity, vegetation type and leaf area index were likewise estimated for initial conditions. The daily time series for the climatic drivers was developed from interpolations of meteorological station data for the water year 1990 (1 October 1989-30 September 1990). Model outputs at the 1-km resolution showed good agreement with observed patterns in runoff and productivity. The ranges of model inputs at the 10-km and 50-km resolutions tended to contract because of the smoothed topography. Estimates for mean evapotranspiration and runoff were relatively insensitive to changing the spatial resolution of the grid, whereas estimates of mean annual net primary production varied by 11%. The designation of a vegetation type and leaf area at the 50-km resolution often subsumed significant heterogeneity in vegetation, and this factor accounted for much of the difference in the mean values for the carbon flux variables. Although area-wide means for model outputs were generally similar across resolutions, difference maps often revealed large areas of disagreement. Relatively high spatial resolution analyses of biogeochemical cycling are desirable from several perspectives and may be particularly important in the
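
The range contraction under coarser aggregation described above can be demonstrated with simple block averaging, a commonly employed aggregation scheme. The synthetic "topography" field below is an illustrative stand-in for the study's 1-km database:

```python
import numpy as np

rng = np.random.default_rng(42)
elev = rng.gamma(2.0, 400.0, size=(100, 100))  # synthetic 1-km grid values

def aggregate(grid, factor):
    """Block-average a square 2-D grid to a coarser resolution."""
    n = grid.shape[0] // factor
    return grid[:n * factor, :n * factor].reshape(n, factor, n, factor).mean(axis=(1, 3))

coarse10 = aggregate(elev, 10)   # "10-km" cells
coarse50 = aggregate(elev, 50)   # "50-km" cells

# Block averaging preserves the area-wide mean exactly, but the range of
# input values contracts as the grid is smoothed.
for g in (elev, coarse10, coarse50):
    print(round(float(g.mean()), 1), round(float(np.ptp(g)), 1))
```

This mirrors the paper's finding: area-wide means are nearly invariant across resolutions, while the spread of inputs (and hence local heterogeneity) shrinks at 10 km and 50 km.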

  7. Using a Hydrodynamic and Biogeochemical Model to Investigate the Effects of Nutrient Loading from a Wastewater Treatment Plant into Lake Michigan

    Science.gov (United States)

    Khazaei, B.; Bravo, H.; Bootsma, H.

    2017-12-01

    There is clear evidence that excessive nutrient loading, in particular phosphorus (P), into Lake Michigan has produced significant problems, such as algal blooms, hypoxia, and reduced water quality. Addressing those problems requires understanding the transport and fate of P in the lake. The dominance of mixing and dispersion processes in P transport has been demonstrated, yet recent research has shown the remarkable influence of dreissenid mussels and Cladophora on water clarity and the P budget. Since mussels and Cladophora tend to concentrate near the coastlines, nearshore-offshore P exchange is of great importance. In this research, a computer model was developed to simulate the P cycle by incorporating the biogeochemical processes relevant to the transport of P into a 3D high-resolution hydrodynamic model. The near-bottom biogeochemical model consists of three linked modules: Cladophora, mussel, and sediment storage modules. The model was applied to the Milwaukee Metropolitan Sewerage District South Shore Wastewater Treatment Plant, between June and October of 2013 and 2015, as a case study. The plant outfall introduces a point source of P into the study domain: the nearshore zone of Lake Michigan adjacent to Milwaukee County. The model was validated against field observations of water temperature, dissolved phosphorus (DP), particulate phosphorus (PP), Cladophora biomass, and P content. The model simulations showed reasonably good agreement with field measurements. Model results showed a) different temporal patterns in 2013 and 2015, b) a larger range of fluctuations in DP than in PP, and c) that the effects of mussels and Cladophora could explain the differences in patterns and ranges. PP concentrations showed more frequent spikes in 2013 due to resuspension events driven by the stronger winds that year. The model is being applied as a management tool to test scenarios of nutrient loading to determine effluent P limits for the

  8. A dynamic organic soil biogeochemical model for simulating the effects of wildfire on soil environmental conditions and carbon dynamics of black spruce forests

    Science.gov (United States)

    Shuhua Yi; A. David McGuire; Eric Kasischke; Jennifer Harden; Kristen Manies; Michelle Mack; Merritt. Turetsky

    2010-01-01

    Ecosystem models have not comprehensively considered how interactions among fire disturbance, soil environmental conditions, and biogeochemical processes affect ecosystem dynamics in boreal forest ecosystems. In this study, we implemented a dynamic organic soil structure in the Terrestrial Ecosystem Model (DOS-TEM) to investigate the effects of fire on soil temperature...

  9. A positive and multi-element conserving time stepping scheme for biogeochemical processes in marine ecosystem models

    Science.gov (United States)

    Radtke, H.; Burchard, H.

    2015-01-01

    In this paper, an unconditionally positive and multi-element conserving time stepping scheme for systems of non-linearly coupled ODEs is presented. Such systems of ODEs are used to describe biogeochemical transformation processes in marine ecosystem models. The numerical scheme is a positive-definite modification of the Runge-Kutta method; it can have arbitrarily high order of accuracy and does not require time step adaptation. If the scheme is combined with a modified Patankar-Runge-Kutta method from Burchard et al. (2003), it also gains the ability to solve a certain class of stiff numerical problems, although the accuracy is then restricted to second order. The performance of the new scheme on two test problems is shown.
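
The first-order building block of the modified Patankar approach cited above (Burchard et al. 2003) can be sketched for a production-destruction system: each interconversion flux is weighted by the new-to-old concentration ratio of the donor constituent, which makes the step unconditionally positive and mass conserving. The three-component "NPZ-like" rates below are invented for illustration:

```python
import numpy as np

def patankar_euler_step(c, dt, P):
    """One modified Patankar-Euler step for a production-destruction system.

    P[i, j] (i != j) is the rate at which constituent j is converted into
    constituent i, evaluated at the old state c. Weighting each flux by the
    donor's c_new/c_old ratio yields a linear system whose solution is
    positive and conserves sum(c) for any time step.
    """
    n = len(c)
    M = np.eye(n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            M[i, j] -= dt * P[i, j] / c[j]   # production of i from j
            M[j, j] += dt * P[i, j] / c[j]   # matching destruction of j
    return np.linalg.solve(M, c)

# Toy chain: nutrient -> phytoplankton -> zooplankton -> nutrient.
c = np.array([9.0, 1.0, 0.1])
dt = 5.0                                          # deliberately large step
for _ in range(50):
    P = np.zeros((3, 3))
    P[1, 0] = 0.5 * c[0] * c[1] / (c[0] + 1.0)    # uptake
    P[2, 1] = 0.3 * c[1] * c[2] / (c[1] + 1.0)    # grazing
    P[0, 2] = 0.1 * c[2]                          # remineralisation
    c = patankar_euler_step(c, dt, P)

print(np.round(c, 4), round(float(c.sum()), 4))
```

Because the columns of the system matrix sum to one, total mass is conserved exactly, and the matrix is a column-diagonally dominant M-matrix, so all concentrations remain positive even at this large time step. The paper's contribution generalizes this idea to higher-order Runge-Kutta schemes.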

  10. Uncertainty in Earth System Models: Benchmarks for Ocean Model Performance and Validation

    Science.gov (United States)

    Ogunro, O. O.; Elliott, S.; Collier, N.; Wingenter, O. W.; Deal, C.; Fu, W.; Hoffman, F. M.

    2017-12-01

    The mean ocean CO2 sink is a major component of the global carbon budget, with marine reservoirs holding about fifty times more carbon than the atmosphere. Phytoplankton play a significant role in the net carbon sink through photosynthesis and drawdown, such that about a quarter of anthropogenic CO2 emissions end up in the ocean. Biology greatly increases the efficiency of marine environments in CO2 uptake and ultimately reduces the impact of the persistent rise in atmospheric concentrations. However, a number of challenges remain in appropriate representation of marine biogeochemical processes in Earth System Models (ESM). These threaten to undermine the community effort to quantify seasonal to multidecadal variability in ocean uptake of atmospheric CO2. In a bid to improve analyses of marine contributions to climate-carbon cycle feedbacks, we have developed new analysis methods and biogeochemistry metrics as part of the International Ocean Model Benchmarking (IOMB) effort. Our intent is to meet the growing diagnostic and benchmarking needs of ocean biogeochemistry models. The resulting software package has been employed to validate DOE ocean biogeochemistry results by comparison with observational datasets. Several other international ocean models contributing results to the fifth phase of the Coupled Model Intercomparison Project (CMIP5) were analyzed simultaneously. Our comparisons suggest that the biogeochemical processes determining CO2 entry into the global ocean are not well represented in most ESMs. Polar regions continue to show notable biases in many critical biogeochemical and physical oceanographic variables. Some of these disparities could have first order impacts on the conversion of atmospheric CO2 to organic carbon. In addition, single forcing simulations show that the current ocean state can be partly explained by the uptake of anthropogenic emissions. 
Combined effects of two or more of these forcings on ocean biogeochemical cycles and ecosystems
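
Point-to-point comparison metrics of the kind used to benchmark model output against observational datasets can be sketched as below. This is a generic illustration, not the actual IOMB implementation, and it assumes model and observations are already co-located:

```python
import numpy as np

def benchmark_scores(model, obs):
    """Bias, RMSE and correlation of co-located model/observation arrays,
    the basic ingredients of a benchmarking scorecard."""
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    bias = float(np.mean(model - obs))
    rmse = float(np.sqrt(np.mean((model - obs) ** 2)))
    corr = float(np.corrcoef(model, obs)[0, 1])
    return {"bias": bias, "rmse": rmse, "corr": corr}

# Tiny example: a model that is linearly related to the observations but
# damps the extremes, so the bias vanishes while the RMSE does not.
scores = benchmark_scores([2.0, 3.0, 4.0], [1.0, 3.0, 5.0])
print(scores)
```

Real benchmarking packages add regridding, seasonal-cycle decomposition and region masks (e.g. for the polar biases noted above) on top of these basic statistics.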

  11. A high-resolution hydrodynamic-biogeochemical coupled model of the Gulf of Cadiz – Alboran Sea region.

    Directory of Open Access Journals (Sweden)

    D. M. MACIAS

    2014-12-01

    Full Text Available The southern Iberia regional seas comprise the Gulf of Cadiz and the Alboran Sea sub-basins connected by the narrow Strait of Gibraltar. The two basins are very different in their hydrological and biological characteristics but are also tightly connected to each other. Integrative studies of the whole regional oceanic system are scarce and difficult to perform due to the relatively large area to cover and the different relevant time scales of the main forcings in each sub-basin. Here we propose, for the first time, a fully coupled, 3D, hydrodynamic-biogeochemical model that covers both marine basins in a single domain (~2 km resolution) for a 20-year simulation (1989-2008). Model performance is assessed against available data in terms of spatial and temporal distributions of biological variables. In general, the proposed model is able to represent the climatological distributions of primary and secondary producers as well as the main seasonality of primary production in the different sub-regions of the analyzed basins. Potential causes of the observed mismatches between model and data are identified and some solutions are proposed for future model development. We conclude that most of these mismatches can be attributed to the missing tidal forcing in the current model configuration. This model is a first step towards a meaningful tool for studying past and future oceanographic conditions in this important marine region, which constitutes the only connection between the Mediterranean Sea and the open world ocean.

  12. Validation process of simulation model

    International Nuclear Information System (INIS)

    San Isidro, M. J.

    1998-01-01

    A methodology for the empirical validation of any detailed simulation model is presented. This kind of validation is always tied to an experimental case, and it has a residual sense because the conclusions are based on comparisons between simulated outputs and experimental measurements. The methodology guides the detection of failures in the simulation model, and it can also be used as a guide in the design of subsequent experiments. Three steps can be clearly differentiated: sensitivity analysis, which can be performed with DSA (differential sensitivity analysis) or MCSA (Monte-Carlo sensitivity analysis); searching for the optimal domains of the input parameters, for which a procedure based on Monte-Carlo methods and cluster techniques has been developed; and residual analysis, performed in both the time domain and the frequency domain using correlation analysis and spectral analysis. As an application of this methodology, the validation carried out on a thermal simulation model of buildings, Esp., is presented, studying the behavior of building components in a Test Cell of LECE of CIEMAT. (Author) 17 refs
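
The Monte-Carlo sensitivity analysis (MCSA) step can be sketched as follows: sample the uncertain inputs over their assumed ranges, run the model for each sample, and rank parameters by how strongly they correlate with the output. The two-parameter "thermal model" here is a toy stand-in, not the CIEMAT building model:

```python
import numpy as np

rng = np.random.default_rng(1)

def thermal_model(u_wall, g_window):
    """Stand-in for a detailed building simulation: heat demand as a simple
    function of two uncertain envelope parameters, plus measurement noise."""
    return 120.0 * u_wall + 45.0 * g_window + rng.normal(0.0, 0.5)

# Monte-Carlo sampling of the input domains (ranges are assumptions).
n = 2000
u = rng.uniform(0.2, 1.2, n)   # wall U-value
g = rng.uniform(0.3, 0.8, n)   # window solar factor
y = np.array([thermal_model(a, b) for a, b in zip(u, g)])

for name, x in (("u_wall", u), ("g_window", g)):
    print(name, round(float(np.corrcoef(x, y)[0, 1]), 2))
```

In the methodology above this ranking tells the experimenter which parameters dominate the residuals and therefore where the optimal input domains should be searched.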

  13. Verification and validation of models

    International Nuclear Information System (INIS)

    Herbert, A.W.; Hodgkinson, D.P.; Jackson, C.P.; Lever, D.A.; Robinson, P.C.

    1986-12-01

    The numerical accuracy of the computer models for groundwater flow and radionuclide transport that are to be used in repository safety assessment must be tested, and their ability to describe experimental data assessed: they must be verified and validated respectively. Also appropriate ways to use the codes in performance assessments, taking into account uncertainties in present data and future conditions, must be studied. These objectives are being met by participation in international exercises, by developing bench-mark problems, and by analysing experiments. In particular the project has funded participation in the HYDROCOIN project for groundwater flow models, the Natural Analogues Working Group, and the INTRAVAL project for geosphere models. (author)

  14. Modeling the fate of nitrogen on the catchment scale using a spatially explicit hydro-biogeochemical simulation system

    Science.gov (United States)

    Klatt, S.; Butterbach-Bahl, K.; Kiese, R.; Haas, E.; Kraus, D.; Molina-Herrera, S. W.; Kraft, P.

    2015-12-01

    The continuous growth of the human population demands an equally growing supply of fresh water and food. As a result, the land available for efficient agriculture is constantly diminishing, which forces farmers to cultivate inferior croplands and intensify agricultural practices, e.g., increase the use of synthetic fertilizers. This intensification of marginal areas in particular will cause a dangerous rise in nitrate discharge into open waters or even drinking water resources. Buffer strips have proved to be a valuable means of reducing the amount of nitrate lost by surface runoff or lateral subsurface transport. Current laws, however, promote rather static designs (i.e., fixed width and usage) even though a multitude of factors, e.g., soil type, slope, vegetation and the nearby agricultural management, determines their effectiveness. We propose a spatially explicit modeling approach for assessing the effects of those factors on nitrate discharge from arable lands, using the fully distributed hydrology model CMF coupled to the complex biogeochemical model LandscapeDNDC. Such a modeling scheme makes it possible to follow the displacement of dissolved nutrients in both the vertical and horizontal directions and serves to estimate both their uptake by the vegetated buffer strip and their loss to the environment. First results indicate a significant reduction of nitrate loss in the presence of a buffer strip (2.5 m). We show the effects induced by various buffer strip widths and plant covers on nitrate retention.

  15. Development of a 3D coupled physical-biogeochemical model for the Marseille coastal area (NW Mediterranean Sea): what complexity is required in the coastal zone?

    Directory of Open Access Journals (Sweden)

    Marion Fraysse

    Full Text Available Terrestrial inputs (natural and anthropogenic) from rivers, the atmosphere and physical processes strongly impact the functioning of coastal pelagic ecosystems. The objective of this study was to develop a tool for the examination of these impacts on the Marseille coastal area, which experiences inputs from the Rhone River and high rates of atmospheric deposition. Therefore, a new 3D coupled physical/biogeochemical model was developed. Two versions of the biogeochemical model were tested, one model considering only the carbon (C) and nitrogen (N) cycles and a second model that also considers the phosphorus (P) cycle. Realistic simulations were performed for a period of 5 years (2007-2011). The model accuracy assessment showed that both versions of the model were capable of capturing the seasonal changes and spatial characteristics of the ecosystem. The model also reproduced upwelling events and the intrusion of Rhone River water into the Bay of Marseille well. Those processes appeared to greatly impact this coastal oligotrophic area because they induced strong increases in chlorophyll-a concentrations in the surface layer. The model with the C, N and P cycles better reproduced the chlorophyll-a concentrations at the surface than did the model without the P cycle, especially for the Rhone River water. Nevertheless, the chlorophyll-a concentrations at depth were better represented by the model without the P cycle. Therefore, the complexity of the biogeochemical model introduced errors into the model results, but it also improved model results during specific events. Finally, this study suggested that in coastal oligotrophic areas, improvements in the description and quantification of the hydrodynamics and the terrestrial inputs should be preferred over increasing the complexity of the biogeochemical model.

  16. Development of a 3D coupled physical-biogeochemical model for the Marseille coastal area (NW Mediterranean Sea): what complexity is required in the coastal zone?

    Science.gov (United States)

    Fraysse, Marion; Pinazo, Christel; Faure, Vincent Martin; Fuchs, Rosalie; Lazzari, Paolo; Raimbault, Patrick; Pairaud, Ivane

    2013-01-01

    Terrestrial inputs (natural and anthropogenic) from rivers, the atmosphere and physical processes strongly impact the functioning of coastal pelagic ecosystems. The objective of this study was to develop a tool for the examination of these impacts on the Marseille coastal area, which experiences inputs from the Rhone River and high rates of atmospheric deposition. Therefore, a new 3D coupled physical/biogeochemical model was developed. Two versions of the biogeochemical model were tested, one model considering only the carbon (C) and nitrogen (N) cycles and a second model that also considers the phosphorus (P) cycle. Realistic simulations were performed for a period of 5 years (2007-2011). The model accuracy assessment showed that both versions of the model were capable of capturing the seasonal changes and spatial characteristics of the ecosystem. The model also reproduced upwelling events and the intrusion of Rhone River water into the Bay of Marseille well. Those processes appeared to greatly impact this coastal oligotrophic area because they induced strong increases in chlorophyll-a concentrations in the surface layer. The model with the C, N and P cycles better reproduced the chlorophyll-a concentrations at the surface than did the model without the P cycle, especially for the Rhone River water. Nevertheless, the chlorophyll-a concentrations at depth were better represented by the model without the P cycle. Therefore, the complexity of the biogeochemical model introduced errors into the model results, but it also improved model results during specific events. Finally, this study suggested that in coastal oligotrophic areas, improvements in the description and quantification of the hydrodynamics and the terrestrial inputs should be preferred over increasing the complexity of the biogeochemical model.

  17. Modeling Biogeochemical-Physical Interactions and Carbon Flux in the Sargasso Sea (Bermuda Atlantic Time-series Study site)

    Science.gov (United States)

    Signorini, Sergio R.; McClain, Charles R.; Christian, James R.

    2001-01-01

    An ecosystem-carbon cycle model is used to analyze the biogeochemical-physical interactions and carbon fluxes in the Bermuda Atlantic Time-series Study (BATS) site for the period 1992-1998. The model results compare well with observations (most variables are within 8% of observed values). The sea-air flux ranges from -0.32 to -0.50 mol C/sq m/yr, depending upon the gas transfer algorithm used. This estimate is within the range (-0.22 to -0.83 mol C/sq m/yr) of previously reported values, which indicates that the BATS region is a weak sink of atmospheric CO2. The overall carbon balance consists of atmospheric CO2 uptake of 0.3 mol C/sq m/yr, upward dissolved inorganic carbon (DIC) bottom flux of 1.1 mol C/sq m/yr, and carbon export of 1.4 mol C/sq m/yr via sedimentation. Upper ocean DIC levels increased between 1992 and 1996 at a rate of approximately 1.2 µmol/kg/yr, consistent with observations. However, this trend was reversed during 1997-1998 to -2.7 µmol/kg/yr in response to hydrographic changes imposed by the El Niño-La Niña transition, which were manifested in the Sargasso Sea by the warmest SST and lowest surface salinity of the period (1992-1998).
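
The dependence of the sea-air flux on the gas transfer algorithm can be illustrated with a quadratic wind-speed parameterization of the Wanninkhof (1992) type. The solubility value and the pCO2 numbers below are order-of-magnitude placeholders, not the BATS model's inputs:

```python
def co2_flux(u10, pco2_sea, pco2_air, sol=3.2e-5, sc=660.0):
    """Sea-air CO2 flux from a quadratic gas transfer velocity,
    k = 0.31 * u10**2 * (Sc/660)**-0.5 in cm/hr (Wanninkhof 1992).

    u10: wind speed at 10 m (m/s); pCO2 in uatm; sol: CO2 solubility in
    mol/m3/uatm (placeholder near a typical surface-ocean value).
    Negative flux = ocean uptake.
    """
    k_cm_hr = 0.31 * u10**2 * (sc / 660.0) ** -0.5
    k_m_yr = k_cm_hr * 0.01 * 24 * 365            # cm/hr -> m/yr
    return k_m_yr * sol * (pco2_sea - pco2_air)   # mol C / m2 / yr

# Mildly undersaturated surface water under moderate wind: a weak sink,
# of the same order as the BATS estimates quoted above.
print(round(co2_flux(u10=7.0, pco2_sea=360.0, pco2_air=370.0), 2))
```

Swapping the exponent or coefficient of the transfer velocity (i.e., choosing a different gas transfer algorithm) shifts the annual flux, which is why the abstract reports a range rather than a single value.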

  18. PEMFC modeling and experimental validation

    Energy Technology Data Exchange (ETDEWEB)

    Vargas, J.V.C. [Federal University of Parana (UFPR), Curitiba, PR (Brazil). Dept. of Mechanical Engineering], E-mail: jvargas@demec.ufpr.br; Ordonez, J.C.; Martins, L.S. [Florida State University, Tallahassee, FL (United States). Center for Advanced Power Systems], Emails: ordonez@caps.fsu.edu, martins@caps.fsu.edu

    2009-07-01

    In this paper, a simplified and comprehensive PEMFC mathematical model introduced in previous studies is experimentally validated. Numerical results are obtained for an existing set of commercial unit PEM fuel cells. The model accounts for pressure drops in the gas channels and for spatial temperature gradients in the flow direction, which are investigated by direct infrared imaging, showing that such gradients are present even at low-current operation and therefore should be considered by a PEMFC model, since large coolant flow rates are limited by the high pressure drops they induce in the cooling channels. The computed polarization and power curves are directly compared to the experimentally measured ones with good qualitative and quantitative agreement. The combination of accuracy and low computational time allows for the future utilization of the model as a reliable tool for PEMFC simulation, control, design and optimization purposes. (author)
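
The shape of the polarization and power curves being compared can be sketched with generic textbook loss terms (open-circuit voltage minus activation, ohmic and concentration losses). The constants below are illustrative, not the authors' validated model parameters:

```python
import numpy as np

def cell_voltage(i, e_oc=1.2, a=0.06, i0=0.04, r=2.45e-4, m=2.11e-5, n=8e-3):
    """Illustrative single-cell polarization curve.

    i: current density (mA/cm2); a*ln(i/i0): activation loss; r*i: ohmic
    loss; m*exp(n*i): concentration loss. All constants are generic
    textbook-style values.
    """
    return e_oc - a * np.log(i / i0) - r * i - m * np.exp(n * i)

i = np.linspace(1.0, 1000.0, 500)   # current density sweep, mA/cm2
v = cell_voltage(i)
p = i * v                           # power density, mW/cm2
print(round(float(v[0]), 2), round(float(p.max()), 1))
```

Voltage falls monotonically with current density while power peaks at intermediate current, which is the qualitative behavior a validated model must reproduce against measured curves.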

  19. Monitoring strategies and scale appropriate hydrologic and biogeochemical modelling for natural resource management

    DEFF Research Database (Denmark)

    Bende-Michl, Ulrike; Volk, Martin; Harmel, Daren

    2011-01-01

    This short communication presents recommendations for developing scale-appropriate monitoring and modelling strategies to assist decision making in natural resource management (NRM). The ideas presented here were discussed in the session (S5) ‘Monitoring strategies and scale...... and communication between researchers and model developers on the one side, and natural resource managers and model users on the other, to increase knowledge in: 1) the limitations and uncertainties of current monitoring and modelling strategies, 2) scale-dependent linkages between monitoring and modelling...

  20. Calibration of a biome-biogeochemical cycles model for modeling the net primary production of teak forests through inverse modeling of remotely sensed data

    Science.gov (United States)

    Imvitthaya, Chomchid; Honda, Kiyoshi; Lertlum, Surat; Tangtham, Nipon

    2011-01-01

    In this paper, we present the results of net primary production (NPP) modeling of teak (Tectona grandis Lin F.), an important species in tropical deciduous forests. The biome-biogeochemical cycles (Biome-BGC) model was calibrated to estimate NPP through an inverse modeling approach. A genetic algorithm (GA) was linked with Biome-BGC to determine the optimal ecophysiological model parameters. Biome-BGC was calibrated by adjusting the ecophysiological parameters to fit the simulated LAI to satellite LAI (SPOT-Vegetation), and the best fitness confirmed the high accuracy of the ecophysiological parameters generated by the GA. The modeled NPP, using the GA-optimized parameters as input data, was evaluated against daily NPP derived from the MODIS satellite and against annual field data in northern Thailand. The results showed that NPP estimates obtained using the optimized ecophysiological parameters were more accurate than those obtained using default literature parameterization. This improvement occurred mainly because the optimized parameters reduced systematic underestimation in the model. These Biome-BGC results can be effectively applied to teak forests in tropical areas. The study proposes a more effective method of using a GA to determine ecophysiological parameters at the site level and represents a first step toward the analysis of the carbon budget of teak plantations at the regional scale.
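
The GA-based inverse calibration loop can be sketched as follows: propose parameter sets, simulate LAI, score each set against the satellite LAI, and evolve the population toward the best fit. The logistic stand-in for Biome-BGC and the two "ecophysiological" parameters are hypothetical; a real application would call the full model:

```python
import numpy as np

rng = np.random.default_rng(7)

def sim_lai(params, t):
    """Stand-in for Biome-BGC: a logistic seasonal LAI curve controlled by
    two hypothetical 'ecophysiological' parameters (amplitude, rate)."""
    amp, rate = params
    return amp / (1.0 + np.exp(-rate * (t - 180.0)))

t = np.arange(0.0, 365.0, 10.0)
lai_obs = sim_lai((4.2, 0.05), t)   # plays the role of the satellite LAI

def fitness(p):
    return -np.mean((sim_lai(p, t) - lai_obs) ** 2)   # negative misfit

# Minimal GA: elitist selection, blend crossover, Gaussian mutation.
pop = np.column_stack([rng.uniform(1, 8, 40), rng.uniform(0.01, 0.2, 40)])
for gen in range(60):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-10:]]             # keep the 10 best
    parents = elite[rng.integers(0, 10, (30, 2))]     # random parent pairs
    children = parents.mean(axis=1)                   # blend crossover
    children += rng.normal(0, [0.1, 0.003], (30, 2))  # mutation
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print(np.round(best, 2))
```

The evolved parameters approach the values that generated the "satellite" curve, which is the sense in which the best fitness validates the GA-derived ecophysiological parameters before they are fed back into the NPP simulation.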

  1. Investigating the Role of Biogeochemical Processes in the Northern High Latitudes on Global Climate Feedbacks Using an Efficient Scalable Earth System Model

    Energy Technology Data Exchange (ETDEWEB)

    Jain, Atul K. [Univ. of Illinois, Urbana-Champaign, IL (United States)

    2016-09-14

    The overall objective of this DOE-funded project is to address scientific and computational challenges in climate modeling by expanding our understanding of the biogeophysical-biogeochemical processes and their interactions in the northern high latitudes (NHLs) using an earth system modeling (ESM) approach, and by adopting an adaptive parallel runtime system in an ESM to achieve efficient and scalable climate simulations through improved load-balancing algorithms.

  2. Quantification of sediment-water interactions in a polluted tropical river through biogeochemical modeling

    NARCIS (Netherlands)

    Trinh, A.D.; Meysman, F.; Rochelle-Newall, E.; Bonnet, M.P.

    2012-01-01

    Diagenetic modeling presents an interesting and robust way to understand sediment-water column processes. Here we present the application of such a model to the Day River in Northern Vietnam, a system that is subject to high levels of domestic wastewater inputs from the Hanoi metropolitan area.

  3. The Effects of Chlorophyll Assimilation on Carbon Fluxes in a Global Biogeochemical Model. [Technical Report Series on Global Modeling and Data Assimilation

    Science.gov (United States)

    Koster, Randal D. (Editor); Rousseaux, Cecile Severine; Gregg, Watson W.

    2014-01-01

    In this paper, we investigated whether the assimilation of remotely sensed chlorophyll data can improve estimates of air-sea carbon dioxide fluxes (FCO2). Using an established global biogeochemical model (the NASA Ocean Biogeochemical Model, NOBM) for the period 2003-2010, we found that the global FCO2 values produced in the free run and after assimilation were within 0.6 mol C m-2 yr-1 of the observations. The effect of satellite chlorophyll assimilation was assessed in 12 major oceanographic regions. The region with the highest bias was the North Atlantic, where the model underestimated the fluxes by 1.4 mol C m-2 yr-1, whereas all the other regions were within 1 mol C m-2 yr-1 of the data. The FCO2 values were not strongly affected by the assimilation, and the uncertainty in FCO2 was not decreased, despite the decrease in the uncertainty in chlorophyll concentration. Chlorophyll concentrations were within approximately 25% of the database in 7 out of the 12 regions, and assimilation improved the chlorophyll concentration in the regions with the highest bias by 10-20%. These results suggest that the assimilation of chlorophyll data does not considerably improve FCO2 estimates and that other components of the carbon cycle play a role whose assimilation could further improve our FCO2 estimates.

  4. PnET-BGC: Modeling Biogeochemical Processes in a Northern Hardwood Forest Ecosystem

    Data.gov (United States)

    National Aeronautics and Space Administration — This archived model product contains the directions, executables, and procedures for running PnET-BGC to recreate the results of: Gbondo-Tugbawa, S.S., C.T. Driscoll...

  5. PnET-BGC: Modeling Biogeochemical Processes in a Northern Hardwood Forest Ecosystem

    Data.gov (United States)

    National Aeronautics and Space Administration — ABSTRACT: This archived model product contains the directions, executables, and procedures for running PnET-BGC to recreate the results of: Gbondo-Tugbawa, S.S.,...

  6. Parameter sensitivity and identifiability for a biogeochemical model of hypoxia in the northern Gulf of Mexico

    Science.gov (United States)

    Local sensitivity analyses and identifiable parameter subsets were used to describe numerical constraints of a hypoxia model for bottom waters of the northern Gulf of Mexico. The sensitivity of state variables differed considerably with parameter changes, although most variables ...

  7. Modelling shelf-ocean exchange and its biogeochemical consequences in coastal upwelling systems

    DEFF Research Database (Denmark)

    Muchamad, Al Azhar

    margin bathymetry, and 3) what processes determine the observed variability of total organic carbon (TOC) content in shelf sediments underlying the upwelling system, with implications for the formation of petroleum source rocks. Here, a numerical ocean modeling approach is used in this thesis to explore...... processes and the development of anoxia/euxinia under present-day or past geological conditions. Thirdly and lastly, the processes controlling the distribution of total organic carbon (TOC) content in sediments across the continental margin are evaluated by applying the model to the Benguela upwelling system....... In the model, biological primary production and shelf bottom-water anoxia result in enhanced sedimentary TOC concentrations on the mid shelf and upper slope. The simulated TOC distributions indicate that bottom lateral transport only has a significant effect on increasing the deposition of organic carbon on the mid...

  8. Biogeochemical modeling of CO2 and CH4 production in anoxic Arctic soil microcosms

    Science.gov (United States)

    Tang, Guoping; Zheng, Jianqiu; Xu, Xiaofeng; Yang, Ziming; Graham, David E.; Gu, Baohua; Painter, Scott L.; Thornton, Peter E.

    2016-09-01

    Soil organic carbon turnover to CO2 and CH4 is sensitive to soil redox potential and pH conditions. However, land surface models do not consider redox and pH in the aqueous phase explicitly, thereby limiting their use for making predictions in anoxic environments. Using recent data from incubations of Arctic soils, we extend the Community Land Model with coupled carbon and nitrogen (CLM-CN) decomposition cascade to include simple organic substrate turnover, fermentation, Fe(III) reduction, and methanogenesis reactions, and assess the efficacy of various temperature and pH response functions. Incorporating the Windermere Humic Aqueous Model (WHAM) enables us to approximately describe the observed pH evolution without additional parameterization. Although Fe(III) reduction is normally assumed to compete with methanogenesis, the model predicts that Fe(III) reduction raises the pH from acidic to neutral, thereby reducing environmental stress to methanogens and accelerating methane production when substrates are not limiting. The equilibrium speciation predicts a substantial increase in CO2 solubility as pH increases, and taking into account CO2 adsorption to surface sites of metal oxides further decreases the predicted headspace gas-phase fraction at low pH. Without adequate representation of these speciation reactions, as well as the impacts of pH, temperature, and pressure, the CO2 production from closed microcosms can be substantially underestimated based on headspace CO2 measurements only. Our results demonstrate the efficacy of geochemical models for simulating soil biogeochemistry and provide predictive understanding and mechanistic representations that can be incorporated into land surface models to improve climate predictions.
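
    A minimal sketch of the kind of temperature and pH response functions the study assesses. These are generic rate-modifier forms (a Q10 temperature multiplier and a Gaussian pH optimum), not the specific CLM-CN or WHAM parameterizations; all parameter values here are invented for illustration.

```python
import math

def f_temperature(t_celsius, q10=2.0, t_ref=25.0):
    """Q10 response: rate multiplier relative to a reference temperature."""
    return q10 ** ((t_celsius - t_ref) / 10.0)

def f_ph(ph, ph_opt=7.0, sigma=1.5):
    """Gaussian pH response centered on an assumed optimum pH."""
    return math.exp(-((ph - ph_opt) ** 2) / (2.0 * sigma ** 2))

def decomposition_rate(k_base, t_celsius, ph):
    """Base turnover rate scaled by the two environmental modifiers."""
    return k_base * f_temperature(t_celsius) * f_ph(ph)
```

    With such modifiers, the model behavior the abstract describes follows naturally: as Fe(III) reduction raises the pH from acidic toward neutral, `f_ph` approaches 1 and the predicted methanogenesis rate accelerates.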

  9. Characterization of mixing errors in a coupled physical biogeochemical model of the North Atlantic: implications for nonlinear estimation using Gaussian anamorphosis

    Directory of Open Access Journals (Sweden)

    D. Béal

    2010-02-01

    In biogeochemical models coupled to ocean circulation models, vertical mixing is an important physical process that governs nutrient supply and plankton residence in the euphotic layer. However, vertical mixing is often poorly represented in numerical simulations because of approximate parameterizations of sub-grid-scale turbulence, wind forcing errors and other misrepresented processes such as restratification by mesoscale eddies. Sufficient knowledge of the nature and structure of these errors is necessary to implement appropriate data assimilation methods and to evaluate whether they can be controlled by a given observation system.

    In this paper, Monte Carlo simulations are conducted to study mixing errors induced by approximate wind forcings in a three-dimensional coupled physical-biogeochemical model of the North Atlantic with a 1/4° horizontal resolution. An ensemble forecast involving 200 members is performed during the 1998 spring bloom, by prescribing perturbations of the wind forcing to generate mixing errors. The biogeochemical response is shown to be rather complex because of nonlinearities and threshold effects in the coupled model. The response of the surface phytoplankton depends on the region of interest and is particularly sensitive to the local stratification. In addition, the statistical relationships computed between the various physical and biogeochemical variables reflect the signature of the non-Gaussian behaviour of the system. It is shown that significant information on the ecosystem can be retrieved from observations of chlorophyll concentration or sea surface temperature if a simple nonlinear change of variables (anamorphosis) is performed by mapping, separately and locally, the ensemble percentiles of the distributions of each state variable onto the Gaussian percentiles. The results of idealized observational updates (performed with perfect observations and neglecting horizontal correlations
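
    The anamorphosis step described above, mapping the ensemble percentiles of a state variable onto Gaussian percentiles separately at each location, can be sketched for a single grid point. This is an illustrative version using a simple plotting-position formula, not the authors' exact scheme.

```python
from statistics import NormalDist

def anamorphosis(ensemble):
    """Map each ensemble value to the standard-normal quantile of its
    empirical percentile (one state variable at one grid point, no ties)."""
    n = len(ensemble)
    order = sorted(range(n), key=lambda i: ensemble[i])
    nd = NormalDist()
    transformed = [0.0] * n
    for rank, i in enumerate(order):
        p = (rank + 0.5) / n            # plotting position in (0, 1)
        transformed[i] = nd.inv_cdf(p)  # corresponding Gaussian percentile
    return transformed
```

    A strongly skewed ensemble (typical of chlorophyll concentrations) is mapped to a symmetric, Gaussian-distributed set of values while preserving rank order, which is what makes the subsequent Gaussian analysis step better posed.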

  10. Mapping and modeling the biogeochemical cycling of turf grasses in the United States.

    Science.gov (United States)

    Milesi, Cristina; Running, Steven W; Elvidge, Christopher D; Dietz, John B; Tuttle, Benjamin T; Nemani, Ramakrishna R

    2005-09-01

    Turf grasses are ubiquitous in the urban landscape of the United States and are often associated with various types of environmental impacts, especially on water resources, yet there have been limited efforts to quantify their total surface area and ecosystem functioning, such as their total impact on the continental water budget and potential net ecosystem exchange (NEE). In this study, by relating turf grass area to an estimate of fractional impervious surface area, it was calculated that potentially 163,800 km2 (+/- 35,850 km2) of land are cultivated with turf grasses in the continental United States, an area three times larger than that of any irrigated crop. Using the Biome-BGC ecosystem process model, the growth of warm-season and cool-season turf grasses was modeled at a number of sites across the 48 conterminous states under different management scenarios, simulating potential carbon and water fluxes as if the entire turf surface were managed like a well-maintained lawn. The results indicate that well-watered and fertilized turf grasses act as a carbon sink. The potential NEE that could derive from the total surface potentially under turf (up to 17 Tg C/yr with the simulated scenarios) would require 695 to 900 liters of water per person per day, depending on the modeled irrigation practices, suggesting that outdoor water conservation practices such as xeriscaping and irrigation with recycled wastewater may need to be extended as many municipalities continue to face increasing pressures on freshwater.

  11. Evaluation of forest management practices through application of a biogeochemical model, PnET-BGC

    Science.gov (United States)

    Valipour, M.; Driscoll, C. T.; Johnson, C. E.; Campbell, J. L.; Fahey, T.; Zeng, T.

    2017-12-01

    Forest ecosystem response to logging disturbance varies significantly, depending on site conditions, species composition, land use history, and the method and frequency of harvesting. The long-term effects of forest cutting are less clear due to limited information on land use history and long-term time-series observations. The hydrochemical model PnET-BGC was modified and verified using field data from multiple experimentally harvested northern hardwood watersheds at the Hubbard Brook Experimental Forest (HBEF), New Hampshire, USA, including a commercial whole-tree harvest (Watershed 5), a devegetation experiment (Watershed 2; devegetation and herbicide treatment), and a commercial strip-cut (Watershed 4), to simulate the hydrology, biomass accumulation, and soil solution and stream water chemistry responses to clear-cutting. The verified model was used to investigate temporal changes in aboveground biomass accumulation and nutrient dynamics under three harvesting intensities (40%, 60%, 80%) over four rotation lengths (20, 40, 60, 80 years), with results compared against a no-harvesting scenario. The total ecosystem carbon pool (biomass, soil and litter) was reduced over harvesting events. The greatest decline occurred in litter, by 40%-70%, while the pool of carbon stored in aboveground biomass decreased by 30%-60% for the 80% cutting level at 40- and 20-year rotation lengths, respectively. The large pool of soil organic carbon remained relatively stable, with only minor declines over logging regimes. Stream water simulations demonstrated increased loss of major elements over cutting events. Ca2+ and NO3- were the elements most sensitive to leaching under frequent intensive logging. Accumulated leaching of Ca2+ and NO3- varied between 90-520 t Ca/ha and 40-420 t N/ha from conservative (80-year rotation and 40% cutting) to aggressive (20-year rotation and 80% cutting) cutting regimes, respectively. Moreover, a reduction in plant nutrient uptake over

  12. Verifying and Validating Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state of knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack of knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.
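
    The statistical sampling mentioned for propagating variability and randomness can be illustrated with a minimal Monte Carlo sketch; the toy model and the input distributions below are invented for illustration only.

```python
import random
import statistics

def model(load, stiffness):
    """Toy deterministic model: displacement = load / stiffness."""
    return load / stiffness

def propagate(n=10000, seed=1):
    """Propagate aleatoric input variability by random sampling and
    summarize the output distribution by its mean and standard deviation."""
    rng = random.Random(seed)
    outputs = [model(rng.gauss(100.0, 5.0), rng.gauss(20.0, 1.0))
               for _ in range(n)]
    return statistics.mean(outputs), statistics.stdev(outputs)
```

    The output spread quantifies how input randomness alone contributes to prediction uncertainty, separately from numerical and model-form uncertainty.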

  13. Functional Enzyme-Based Approach for Linking Microbial Community Functions with Biogeochemical Process Kinetics

    Energy Technology Data Exchange (ETDEWEB)

    Li, Minjing; Qian, Wei-jun [Pacific Northwest National Laboratory, Richland, Washington 99354, United States]; Gao, Yuqian [Pacific Northwest National Laboratory, Richland, Washington 99354, United States]; Shi, Liang; Liu, Chongxuan [Pacific Northwest National Laboratory, Richland, Washington 99354, United States]

    2017-09-28

    The kinetics of biogeochemical processes in natural and engineered environmental systems are typically described using Monod-type or modified Monod-type models. These models rely on biomass as a surrogate for the functional enzymes in the microbial community that catalyze biogeochemical reactions. A major challenge in applying such models is the difficulty of quantitatively measuring functional biomass to constrain and validate the models. On the other hand, omics-based approaches have been increasingly used to characterize microbial community structure, functions, and metabolites. Here we propose an enzyme-based model that can incorporate omics data to link microbial community functions with biogeochemical process kinetics. The model treats enzymes as time-variable catalysts for biogeochemical reactions and applies a biogeochemical reaction network to incorporate intermediate metabolites. The sequences of genes and proteins from metagenomes, as well as those from the UniProt database, were used for targeted enzyme quantification and to provide insights into the dynamic linkages among functional genes, enzymes, and metabolites that need to be incorporated in the model. The application of the model was demonstrated using denitrification as an example, by comparing model-simulated with measured functional enzymes, genes, and denitrification substrates and intermediates.
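
    A minimal sketch of the enzyme-based idea: a Monod-type rate in which the catalyst term is a measured functional-enzyme concentration rather than bulk biomass. The rate-law form is the standard Monod expression; all parameter values and the single-substrate setup are invented, not the paper's calibrated denitrification network.

```python
def monod_rate(substrate, enzyme, k_cat=2.0, k_half=0.5):
    """Monod-type rate with the functional enzyme pool as the catalyst
    (hypothetical k_cat and half-saturation constant)."""
    return k_cat * enzyme * substrate / (k_half + substrate)

def simulate_no3(no3_0, enzyme, dt=0.01, t_end=5.0):
    """Euler integration of NO3- consumption by a denitrifying enzyme pool."""
    no3, t, history = no3_0, 0.0, [no3_0]
    while t < t_end:
        no3 = max(no3 - dt * monod_rate(no3, enzyme), 0.0)
        t += dt
        history.append(no3)
    return history
```

    Because the enzyme term can be made time-variable from proteomics measurements, the same structure extends to each step of a reaction network with intermediates such as NO2- and N2O.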

  14. Modelling biogeochemical processes in sediments from the north-western Adriatic Sea: response to enhanced particulate organic carbon fluxes

    Science.gov (United States)

    Brigolin, Daniele; Rabouille, Christophe; Bombled, Bruno; Colla, Silvia; Vizzini, Salvatrice; Pastres, Roberto; Pranovi, Fabio

    2018-03-01

    This work presents the results of a study carried out in the north-western Adriatic Sea, combining two different types of biogeochemical models with field sampling efforts. A longline mussel farm was taken as a local source of perturbation to the natural particulate organic carbon (POC) downward flux. This flux was first quantified by means of a pelagic model of POC deposition coupled to sediment trap data, and its effects on sediment bioirrigation capacity and organic matter (OM) degradation pathways were investigated by constraining an early diagenesis model with original data collected in sediment porewater. The measurements were performed at stations located inside and outside the area affected by mussel farm deposition. Model-predicted POC fluxes showed marked spatial and temporal variability, mostly associated with the dynamics of the farming cycle. Sediment trap data at the two sampled stations (inside and outside of the mussel farm) showed an average POC background flux of 20.0-24.2 mmol C m-2 d-1. The difference in organic carbon (OC) fluxes between the two stations was in agreement with model results, ranging between 3.3 and 14.2 mmol C m-2 d-1, and was primarily associated with mussel physiological conditions. Although modest, these changes in POC fluxes induced visible effects on sediment biogeochemistry. Observed oxygen microprofiles showed a 50% decrease in oxygen penetration depth (from 2.3 to 1.4 mm), accompanied by an increase in the O2 influx at the station below the mussel farm, which was characterised by the higher POC flux (19-31 versus 10-12 mmol O2 m-2 d-1). Dissolved inorganic carbon (DIC) and NH4+ concentrations showed similar behaviour, with a more evident effect of bioirrigation underneath the farm. This was confirmed by constraining the early diagenesis model, whose calibration leads to an estimate of enhanced and shallower bioirrigation underneath the farm: bioirrigation rates of 40 yr-1 and an irrigation depth of 15 cm were

  15. Modeling the biogeochemical impact of atmospheric phosphate deposition from desert dust and combustion sources to the Mediterranean Sea

    Science.gov (United States)

    Richon, Camille; Dutay, Jean-Claude; Dulac, François; Wang, Rong; Balkanski, Yves

    2018-04-01

    Daily modeled fields of phosphate deposition over the Mediterranean from natural dust, anthropogenic combustion and wildfires were used to assess the effect of this external nutrient source on marine biogeochemistry. The ocean model used is a high-resolution (1/12°) regional coupled dynamical-biogeochemical model of the Mediterranean Sea (NEMO-MED12/PISCES). The phosphorus input fields are for 2005, the only year for which daily resolved deposition fields are available from the global atmospheric chemical transport model LMDz-INCA. Dust has traditionally been suggested to be the main atmospheric source of phosphorus, but the LMDz-INCA model suggests that combustion dominates over natural dust as an atmospheric source of phosphate (PO4, the bioavailable form of phosphorus in seawater) for the Mediterranean Sea. According to the atmospheric transport model, phosphate deposition from combustion (Pcomb) brings on average 40.5×10-6 mol PO4 m-2 yr-1 over the entire Mediterranean Sea for the year 2005 and is the primary source over the northern part (e.g., 101×10-6 mol PO4 m-2 yr-1 from combustion deposited in 2005 over the northern Adriatic, against 12.4×10-6 from dust). Lithogenic dust brings 17.2×10-6 mol PO4 m-2 yr-1 on average over the Mediterranean Sea in 2005 and is the primary source of atmospheric phosphate to the southern Mediterranean Basin in our simulations (e.g., 31.8×10-6 mol PO4 m-2 yr-1 from dust deposited in 2005 on average over the southern Ionian basin, against 12.4×10-6 from combustion). The evaluation of monthly averaged deposition flux variability of Pdust and Pcomb for the 1997-2012 period indicates that these conclusions may hold for other years. We examine the two atmospheric phosphate sources and their respective flux variability separately, and evaluate their impacts on marine surface biogeochemistry (phosphate concentration, chlorophyll a, primary production). The impacts of the different phosphate deposition sources on the biogeochemistry of the

  16. Biogeochemical Responses and Feedbacks to Climate Change: Synthetic Meta-Analyses Relevant to Earth System Models

    Energy Technology Data Exchange (ETDEWEB)

    van Gestel, Natasja; Jan van Groenigen, Kees; Osenberg, Craig; Dukes, Jeffrey; Dijkstra, Paul

    2018-03-20

    This project examined the sensitivity of carbon in land ecosystems to environmental change, focusing on carbon contained in soil and on the role of carbon-nitrogen interactions in regulating ecosystem carbon storage. The project used a combination of empirical measurements, mathematical models, and statistics to partition the effects of climate change on soil into processes that enhance soil carbon and processes through which it decomposes. By synthesizing results from experiments around the world, the work provided novel insight into ecological controls and responses across broad spatial and temporal scales. The project developed new approaches in meta-analysis, using principles of element mass balance and large datasets to derive metrics of ecosystem responses to environmental change. The project used meta-analysis to test how nutrients regulate responses of ecosystems to elevated CO2 and warming, in particular responses of nitrogen fixation, which is critical for regulating the long-term C balance.
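
    The meta-analytic synthesis described above typically combines per-study effect sizes such as the log response ratio using inverse-variance weights. The sketch below shows this core calculation; the fixed-effect weighting scheme is one common choice among several, and the study values in the usage example are invented.

```python
import math

def log_response_ratio(x_t, sd_t, n_t, x_c, sd_c, n_c):
    """Log response ratio ln(treatment mean / control mean) and its
    approximate sampling variance (delta-method formula)."""
    lrr = math.log(x_t / x_c)
    var = sd_t ** 2 / (n_t * x_t ** 2) + sd_c ** 2 / (n_c * x_c ** 2)
    return lrr, var

def combine(studies):
    """Fixed-effect weighted mean of (lrr, variance) pairs across studies."""
    weights = [1.0 / v for _, v in studies]
    return sum(w * l for w, (l, _) in zip(weights, studies)) / sum(weights)
```

    For example, two hypothetical warming studies with effects 0.2 and 0.4 and variances 0.01 and 0.04 combine to a weighted mean effect of 0.24, with the more precise study dominating.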

  17. Biogeochemical processes in a clay formation in situ experiment: Part F - Reactive transport modelling

    Energy Technology Data Exchange (ETDEWEB)

    Tournassat, Christophe, E-mail: c.tournassat@brgm.fr [BRGM, French Geological Survey, Orleans (France); Alt-Epping, Peter [Rock-Water Interaction Group, Institute of Geological Sciences, University of Bern (Switzerland); Gaucher, Eric C. [BRGM, French Geological Survey, Orleans (France); Gimmi, Thomas [Rock-Water Interaction Group, Institute of Geological Sciences, University of Bern (Switzerland)] [Laboratory for Waste Management, Paul Scherrer Institut, Villigen (Switzerland); Leupin, Olivier X. [NAGRA, CH-5430 Wettingen (Switzerland); Wersin, Paul [Gruner Ltd., CH-4020 Basel (Switzerland)

    2011-06-15

    Highlights: > Reactive transport modelling was used to simulate simultaneously solute transport, thermodynamic reactions, ion exchange and biodegradation during an in situ experiment in a clay-rock formation. > The Opalinus Clay formation has a high buffering capacity with respect to chemical perturbations caused by bacterial activity. > The buffering capacity is mainly attributed to the carbonate system and to the reactivity of clay surfaces (cation exchange, pH buffering). - Abstract: Reactive transport modelling was used to simulate solute transport, thermodynamic reactions, ion exchange and biodegradation in the Porewater Chemistry (PC) experiment at the Mont Terri Rock Laboratory. Simulations show that the most important chemical processes controlling the fluid composition within the borehole and the surrounding formation during the experiment are ion exchange, biodegradation and dissolution/precipitation reactions involving pyrite and carbonate minerals. In contrast, thermodynamic mineral dissolution/precipitation reactions involving aluminosilicate minerals have little impact on the fluid composition on the time scale of the experiment. With an accurate description of the initial chemical conditions in the formation, in combination with kinetic formulations describing the different stages of bacterial activity, it has been possible to reproduce the evolution of important system parameters, such as pH, redox potential, total organic C, dissolved inorganic C and SO4 concentration. Leaching of glycerol from the pH electrode may be the primary source of organic material that initiated the bacterial growth which caused the chemical perturbation in the borehole. Results from these simulations are consistent with data from the over-coring and demonstrate that the Opalinus Clay has a high buffering capacity with respect to chemical perturbations caused by bacterial activity. This buffering capacity can be attributed to the carbonate system as well as to the reactivity of

  18. Effects of Model Resolution and Ocean Mixing on Forced Ice-Ocean Physical and Biogeochemical Simulations Using Global and Regional System Models

    Science.gov (United States)

    Jin, Meibing; Deal, Clara; Maslowski, Wieslaw; Matrai, Patricia; Roberts, Andrew; Osinski, Robert; Lee, Younjoo J.; Frants, Marina; Elliott, Scott; Jeffery, Nicole; Hunke, Elizabeth; Wang, Shanlin

    2018-01-01

    The current coarse-resolution global Community Earth System Model (CESM) can reproduce major and large-scale patterns but is still missing some key biogeochemical features in the Arctic Ocean, e.g., low surface nutrients in the Canada Basin. We incorporated the CESM Version 1 ocean biogeochemical code into the Regional Arctic System Model (RASM) and coupled it with a sea-ice algal module to investigate model limitations. Four ice-ocean hindcast cases are compared with various observations: two on a global 1° (~40-60 km in the Arctic) grid, G1deg and G1deg-OLD, with/without new sea-ice processes incorporated; and two on RASM's 1/12° (~9 km) grid, R9km and R9km-NB, with/without a subgrid-scale brine rejection parameterization that improves ocean vertical mixing under sea ice. Higher resolution and new sea-ice processes contributed to lower model errors in sea-ice extent, ice thickness, and ice algae. In the Bering Sea shelf, only higher resolution contributed to lower model errors in salinity, nitrate (NO3), and chlorophyll-a (Chl-a). In the Arctic Basin, model errors in mixed layer depth (MLD) were reduced 36% by the brine rejection parameterization, 20% by new sea-ice processes, and 6% by higher resolution. The NO3 concentration biases were caused by both the MLD bias and coarse resolution, because of excessive horizontal mixing of high NO3 from the Chukchi Sea into the Canada Basin in the coarse-resolution models. R9km showed improvements over G1deg on NO3, but not on Chl-a, likely due to light limitation under snow and ice cover in the Arctic Basin.

  19. Evaluation of additional biogeochemical impacts on mitigation pathways in an energy system integrated assessment model.

    Science.gov (United States)

    Dessens, O.

    2017-12-01

    Within the last IPCC AR5, a large and systematic sensitivity study was conducted around the available technologies and the timing of policies applied in IAMs to achieve the 2°C target. However, the simple climate representations included in IAMs are generally tuned to the results of ensemble means, which may hide, within the ensemble-mean results, mitigation pathways that are challenging for the economy or for future technology scenarios. This work provides new insights into the sensitivity of the socio-economic response to different climate factors under a 2°C climate change target, in order to help guide future efforts to reduce uncertainty in climate mitigation decisions. The main objective is to understand how future global warming will affect the natural biogeochemical feedbacks on the climate system and what the consequences of these feedbacks could be for anthropogenic emission pathways, with a specific focus on the energy-economy system. It specifically focuses on three aspects of the climate representation affecting the energy system transformation and GHG emission pathways: 1) the impact of the climate sensitivity (or TCR); 2) the impact of warming on the radiative forcing (cloudiness, ...); 3) the impact of warming on the carbon cycle (carbon cycle feedback). We use the integrated assessment model TIAM-UCL to examine the mitigation pathways compatible with the 2°C target depending on assumptions regarding the three aspects of the climate representation introduced above. The key conclusions drawn from this study are that mitigation to 2°C is still possible under strong climate sensitivity (TCR), strong carbon cycle amplification or a positive radiative forcing feedback. However, this level of climate mitigation will require a significant transformation in the way we produce and consume energy. Carbon capture and sequestration for electricity generation, industry and biomass is part of the technology pool needed to achieve this

  20. Probabilistic Downscaling of Remote Sensing Data with Applications for Multi-Scale Biogeochemical Flux Modeling.

    Science.gov (United States)

    Stoy, Paul C; Quaife, Tristan

    2015-01-01

    Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca 2% from the estimate created using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes.
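
    A one-dimensional analogue of the 2DTR downscaling can be sketched as a regularized least-squares problem: each coarse observation is the mean of its fine pixels, and the multiplier gamma trades data fit against smoothness, exactly the role the abstract assigns to γ. This pure-Python sketch (matrix sizes kept tiny; the gamma value is invented) solves the normal equations (AᵀA + γDᵀD)x = Aᵀb directly.

```python
def solve_linear(mat, vec):
    """Tiny dense solver: Gaussian elimination with partial pivoting."""
    n = len(vec)
    a = [row[:] + [vec[i]] for i, row in enumerate(mat)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n + 1):
                a[r][c] -= f * a[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = a[r][n] - sum(a[r][c] * x[c] for c in range(r + 1, n))
        x[r] = s / a[r][r]
    return x

def tikhonov_downscale(coarse, n_fine, gamma=0.01):
    """Infer a smooth fine-scale field whose block means match the
    coarse observations: minimize ||A x - b||^2 + gamma ||D x||^2."""
    m = len(coarse) * n_fine
    # A: each coarse cell is the average of its n_fine subpixels
    A = [[(1.0 / n_fine) if j // n_fine == i else 0.0 for j in range(m)]
         for i in range(len(coarse))]
    # D: first-difference operator imposing smoothness across space
    D = [[(-1.0 if j == i else 1.0 if j == i + 1 else 0.0)
          for j in range(m)] for i in range(m - 1)]
    def mtm(M):  # M^T M
        cols = len(M[0])
        return [[sum(M[r][i] * M[r][j] for r in range(len(M)))
                 for j in range(cols)] for i in range(cols)]
    AtA, DtD = mtm(A), mtm(D)
    lhs = [[AtA[i][j] + gamma * DtD[i][j] for j in range(m)] for i in range(m)]
    rhs = [sum(A[r][i] * coarse[r] for r in range(len(coarse))) for i in range(m)]
    return solve_linear(lhs, rhs)
```

    For a small gamma the recovered fine field honors the coarse block means closely while smoothing the transition between blocks; increasing gamma smooths the field further at the expense of data fit, which mirrors the semivariogram-range behavior the study reports for γ.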

  1. CLM4-BeTR, a generic biogeochemical transport and reaction module for CLM4: model development, evaluation, and application

    Directory of Open Access Journals (Sweden)

    J. Y. Tang

    2013-01-01

    To improve regional and global biogeochemistry modeling and climate predictability, we have developed a generic reactive transport module for the land model CLM4, called CLM4-BeTR (Biogeochemical Transport and Reactions). CLM4-BeTR represents the transport, interactions, and biotic and abiotic transformations of an arbitrary number of tracers (aka chemical species) in an arbitrary number of phases (e.g., dissolved, gaseous, sorbed, aggregate). An operator splitting approach was employed and consistent boundary conditions were derived for each modeled sub-process. Aqueous tracer fluxes associated with hydrological processes such as surface run-on and run-off, belowground drainage, and ice-to-liquid conversion were also computed consistently with the bulk water fluxes calculated by the soil physics module in CLM4. The transport code was evaluated and found to be in good agreement with several analytical test cases using a time step of 30 min. The model was then applied at the Harvard Forest site with a representation of depth-dependent belowground biogeochemistry. The results indicated that, at this site, (1) CLM4-BeTR was able to simulate soil-surface CO2 effluxes and soil CO2 profiles accurately; (2) the transient surface CO2 effluxes calculated based on the tracer transport mechanism were in general not equal to the belowground CO2 production rates, with the magnitude of the difference being a function of averaging timescale and site conditions: differences were large (-20 to 20%) at hourly timescales, smaller (-5 to 5%) at daily timescales, and persisted to monthly timescales with a smaller magnitude (<4%); (3) losses of CO2 through processes other than surface gas efflux were less than 1% of the overall soil respiration; and (4) the contributions of root respiration and heterotrophic respiration have distinct temporal signals in surface CO2 effluxes and soil CO2 concentrations. The
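
    The operator-splitting approach mentioned above advances transport and reactions in separate sub-steps within each time step. Below is a minimal sketch with first-order upwind advection followed by first-order decay; the grid, velocity, and rate constant are invented, and this is an illustration of the splitting idea, not the CLM4-BeTR code.

```python
def step_advect(conc, velocity, dx, dt):
    """First-order upwind advection (velocity > 0, zero inflow at the
    upstream boundary); stable for velocity * dt / dx <= 1."""
    c = conc[:]
    for i in range(len(conc)):
        upstream = conc[i - 1] if i > 0 else 0.0
        c[i] = conc[i] - velocity * dt / dx * (conc[i] - upstream)
    return c

def step_react(conc, k_decay, dt):
    """First-order decay applied after the transport sub-step."""
    return [x * (1.0 - k_decay * dt) for x in conc]

def simulate(conc, velocity=0.5, dx=1.0, dt=0.5, k_decay=0.1, n_steps=10):
    """Operator splitting: advect, then react, each time step."""
    for _ in range(n_steps):
        conc = step_react(step_advect(conc, velocity, dx, dt), k_decay, dt)
    return conc
```

    Splitting lets each sub-process use its own numerics and boundary conditions, which is the property the abstract highlights, at the cost of a splitting error that shrinks with the time step.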

  2. Geochemistry Model Validation Report: External Accumulation Model

    International Nuclear Information System (INIS)

    Zarrabi, K.

    2001-01-01

The purpose of this Analysis and Modeling Report (AMR) is to validate the External Accumulation Model that predicts accumulation of fissile materials in fractures and lithophysae in the rock beneath a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. (Lithophysae are voids in the rock having concentric shells of finely crystalline alkali feldspar, quartz, and other materials that were formed due to entrapped gas that later escaped, DOE 1998, p. A-25.) The intended use of this model is to estimate the quantities of external accumulation of fissile material for use in external criticality risk assessments for different types of degrading WPs: U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The scope of the model validation is to (1) describe the model and the parameters used to develop the model, (2) provide rationale for selection of the parameters by comparisons with measured values, and (3) demonstrate that the parameters chosen are the most conservative selection for external criticality risk calculations. To demonstrate the applicability of the model, a Pu-ceramic WP is used as an example. The model begins with a source term from separately documented EQ6 calculations, where the source term is defined as the composition versus time of the water flowing out of a breached WP. Next, PHREEQC is used to simulate the transport and interaction of the source term with the resident water and fractured tuff below the repository. In these simulations the primary mechanism for accumulation is mixing of the high-pH, actinide-laden source term with resident water, thus lowering the pH values sufficiently for fissile minerals to become insoluble and precipitate. In the final section of the model, the outputs from PHREEQC are processed to produce mass of accumulation

  3. The roles of resuspension, diffusion and biogeochemical processes on oxygen dynamics offshore of the Rhône River, France: a numerical modeling study

    Science.gov (United States)

    Moriarty, Julia M.; Harris, Courtney K.; Fennel, Katja; Friedrichs, Marjorie A. M.; Xu, Kehui; Rabouille, Christophe

    2017-04-01

    Observations indicate that resuspension and associated fluxes of organic material and porewater between the seabed and overlying water can alter biogeochemical dynamics in some environments, but measuring the role of sediment processes on oxygen and nutrient dynamics is challenging. A modeling approach offers a means of quantifying these fluxes for a range of conditions, but models have typically relied on simplifying assumptions regarding seabed-water-column interactions. Thus, to evaluate the role of resuspension on biogeochemical dynamics, we developed a coupled hydrodynamic, sediment transport, and biogeochemical model (HydroBioSed) within the Regional Ocean Modeling System (ROMS). This coupled model accounts for processes including the storage of particulate organic matter (POM) and dissolved nutrients within the seabed; fluxes of this material between the seabed and the water column via erosion, deposition, and diffusion at the sediment-water interface; and biogeochemical reactions within the seabed. A one-dimensional version of HydroBioSed was then implemented for the Rhône subaqueous delta in France. To isolate the role of resuspension on biogeochemical dynamics, this model implementation was run for a 2-month period that included three resuspension events; also, the supply of organic matter, oxygen, and nutrients to the model was held constant in time. Consistent with time series observations from the Rhône Delta, model results showed that erosion increased the diffusive flux of oxygen into the seabed by increasing the vertical gradient of oxygen at the seabed-water interface. This enhanced supply of oxygen to the seabed, as well as resuspension-induced increases in ammonium availability in surficial sediments, allowed seabed oxygen consumption to increase via nitrification. This increase in nitrification compensated for the decrease in seabed oxygen consumption due to aerobic remineralization that occurred as organic matter was entrained into the water
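The mechanism the record highlights, erosion steepening the oxygen gradient at the sediment-water interface and so increasing the diffusive influx, can be illustrated with Fick's first law. The function and all values below are hypothetical, chosen only to mimic the magnitudes quoted in the abstract; this is not HydroBioSed code:

```python
# Fick's-first-law estimate of the diffusive O2 flux into the seabed.

def diffusive_flux(c_water, c_sed, dz, phi, d0):
    """Flux (mmol m-2 d-1), positive into the sediment.

    c_water, c_sed: O2 just above/below the interface (mmol m-3)
    dz: separation of the two points (m); phi: porosity
    d0: free-solution diffusivity (m2 d-1), crudely corrected by porosity
    """
    return phi * d0 * (c_water - c_sed) / dz

# Erosion thins the oxic layer, i.e. shrinks dz and steepens the gradient:
before = diffusive_flux(250.0, 0.0, dz=2.3e-3, phi=0.8, d0=1.5e-4)
after = diffusive_flux(250.0, 0.0, dz=1.4e-3, phi=0.8, d0=1.5e-4)
```

With these invented values the influx rises from roughly 13 to roughly 21 mmol O2 m−2 d−1, the same direction of change as the observed 10–12 versus 19–31 mmol O2 m−2 d−1.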

  4. Modelling biogeochemical processes in sediments from the north-western Adriatic Sea: response to enhanced particulate organic carbon fluxes

    Directory of Open Access Journals (Sweden)

    D. Brigolin

    2018-03-01

This work presents the results of a study carried out in the north-western Adriatic Sea, combining two different types of biogeochemical models with field sampling efforts. A longline mussel farm was taken as a local source of perturbation to the natural particulate organic carbon (POC) downward flux. This flux was first quantified by means of a pelagic model of POC deposition coupled to sediment trap data, and its effects on sediment bioirrigation capacity and organic matter (OM) degradation pathways were investigated by constraining an early diagenesis model with original data collected in sediment porewater. The measurements were performed at stations located inside and outside the area affected by mussel farm deposition. Model-predicted POC fluxes showed marked spatial and temporal variability, mostly associated with the dynamics of the farming cycle. Sediment trap data at the two sampled stations (inside and outside of the mussel farm) showed an average POC background flux of 20.0–24.2 mmol C m−2 d−1. The difference in organic carbon (OC) fluxes between the two stations was in agreement with model results, ranging between 3.3 and 14.2 mmol C m−2 d−1, and was primarily associated with mussel physiological conditions. Although limited, these changes in POC fluxes induced visible effects on sediment biogeochemistry. Observed oxygen microprofiles presented a 50 % decrease in oxygen penetration depth (from 2.3 to 1.4 mm), accompanied by an increase in the O2 influx (19–31 versus 10–12 mmol O2 m−2 d−1) at the station below the mussel farm, characterised by the higher POC flux. Dissolved inorganic carbon (DIC) and NH4+ concentrations showed similar behaviour, with a more evident effect of bioirrigation underneath the farm. This was confirmed by constraining the early diagenesis model, whose calibration leads to an estimation of enhanced and shallower bioirrigation underneath the farm.

  5. Tropical Pacific Climate, Carbon, and Ocean Biogeochemical Response to the Central American Seaway in a GFDL Earth System Model

    Science.gov (United States)

    Sentman, L. T.; Dunne, J. P.; Stouffer, R. J.; Krasting, J. P.; Wittenberg, A. T.; Toggweiler, J. R.; Broccoli, A. J.

    2017-12-01

To explore the tropical Pacific climate, carbon, and ocean biogeochemical response to the shoaling and closure of the Central American Seaway during the Pliocene (5.3-2.6 Ma), we performed a suite of sensitivity experiments using the Geophysical Fluid Dynamics Laboratory Earth System Model, GFDL-ESM2G, varying only the seaway widths and sill depths. These novel ESM simulations include near-final closure of the seaway with a very narrow, 1° grid cell wide opening. Net mass transport through the seaway into the Caribbean is 20.5-23.1 Sv with a deep seaway, but only 14.1 Sv for the wide/shallow seaway because of the inter-basin bi-directional horizontal mass transport. Seaway transport originates from the Antarctic Circumpolar Current in the Pacific and rejoins it in the South Atlantic, reducing the Indonesian Throughflow and transporting heat and salt southward into the South Atlantic, in contrast to present-day and previous seaway simulations. Tropical Pacific mean climate and interannual variability are sensitive to the seaway shoaling, with the largest response to the wider/deeper seaway. In the tropical Pacific, the top 300 m warms 0.4-0.8°C, the equatorial east-west sea surface temperature gradient increases, the north-south sea surface temperature asymmetry at 110°W decreases, the thermocline deepens 5-11 m, and the east-west thermocline gradient increases. In the Niño-3 region, ENSO amplitude increases, skewed toward more cold (La Niña) events; El Niño and La Niña develop earlier (~3 months); the annual cycle weakens and the semi-annual and interannual cycles strengthen from increased symmetry of the north-south sea surface temperature gradient; and atmospheric global teleconnections strengthen with the seaway. The increase in global ocean overturning with the seaway results in a younger average ocean ideal age, reduced dissolved inorganic carbon inventory and marine net primary productivity, and altered inter-basin patterns of surface sediment carbonate

  6. Simulating temporal variations of nitrogen losses in river networks with a dynamic transport model unravels the coupled effects of hydrological and biogeochemical processes

    Energy Technology Data Exchange (ETDEWEB)

    Mulholland, Patrick J [ORNL; Alexander, Richard [U.S. Geological Survey; Bohlke, John [U.S. Geological Survey; Boyer, Elizabeth [Pennsylvania State University; Harvey, Judson [U.S. Geological Survey; Seitzinger, Sybil [Rutgers University; Tobias, Craig [University of North Carolina, Wilmington; Tonitto, Christina [Cornell University; Wollheim, Wilfred [University of New Hampshire

    2009-01-01

The importance of lotic systems as sinks for nitrogen inputs is well recognized. A fraction of nitrogen in streamflow is removed to the atmosphere via denitrification, with the remainder exported in streamflow as nitrogen loads. At the watershed scale, there is a keen interest in understanding the factors that control the fate of nitrogen throughout the stream channel network, with particular attention to the processes that deliver large nitrogen loads to sensitive coastal ecosystems. We use a dynamic stream transport model to assess biogeochemical (nitrate loadings, concentration, temperature) and hydrological (discharge, depth, velocity) effects on reach-scale denitrification and nitrate removal in the river networks of two watersheds having widely differing levels of nitrate enrichment but nearly identical discharges. Stream denitrification is estimated by regression as a nonlinear function of nitrate concentration, streamflow, and temperature, using more than 300 published measurements from a variety of US streams. These relations are used in the stream transport model to characterize nitrate dynamics related to denitrification at a monthly time scale in the stream reaches of the two watersheds. Results indicate that the nitrate removal efficiency of streams, as measured by the percentage of the stream nitrate flux removed via denitrification per unit length of channel, is appreciably reduced during months with high discharge and nitrate flux and increases during months of low discharge and flux. Biogeochemical factors, including land use, nitrate inputs, and stream concentrations, are a major control on reach-scale denitrification, evidenced by the disproportionately lower nitrate removal efficiency in streams of the highly nitrate-enriched watershed as compared with that in similarly sized streams in the less nitrate-enriched watershed. Sensitivity analyses reveal that these important biogeochemical factors and physical hydrological factors contribute nearly
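A functional form of the kind described, denitrification as a nonlinear function of nitrate concentration, streamflow, and temperature, combined with first-order removal along a reach, can be sketched as follows. All coefficients are invented for illustration; they are not the fitted values from the regression of published stream measurements:

```python
import math

def denit_velocity(no3, q, temp, a=0.5, b=0.45, c=-0.3, q10=2.0):
    """Hypothetical denitrification uptake velocity vf (m d-1).

    no3: nitrate concentration (mg N L-1); q: discharge (m3 d-1);
    temp: water temperature (deg C). b < 1 encodes reduced removal
    efficiency at high concentration; c < 0 at high discharge.
    """
    return a * no3 ** (b - 1.0) * q ** c * q10 ** ((temp - 20.0) / 10.0)

def fraction_removed(no3, q, temp, width, length):
    """Fraction of the nitrate flux denitrified over one reach."""
    hl = q / (width * length)            # hydraulic load (m d-1)
    return 1.0 - math.exp(-denit_velocity(no3, q, temp) / hl)

frac_low_q = fraction_removed(0.1, 1000.0, 15.0, width=5.0, length=1000.0)
frac_high_q = fraction_removed(0.1, 10000.0, 15.0, width=5.0, length=1000.0)
```

With these made-up coefficients the reach removes over half its nitrate flux at low discharge but only a few percent at tenfold higher discharge, which is the qualitative pattern the study reports.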

  7. Validation of models with multivariate output

    International Nuclear Information System (INIS)

    Rebba, Ramesh; Mahadevan, Sankaran

    2006-01-01

This paper develops metrics for validating computational models with experimental data, considering uncertainties in both. A computational model may generate multiple response quantities and the validation experiment might yield corresponding measured values. Alternatively, a single response quantity may be predicted and observed at different spatial and temporal points. Model validation in such cases involves comparison of multiple correlated quantities. Multiple univariate comparisons may give conflicting inferences. Therefore, aggregate validation metrics are developed in this paper. Both classical and Bayesian hypothesis testing are investigated for this purpose, using multivariate analysis. Since commonly used statistical significance tests are based on normality assumptions, appropriate transformations are investigated in the case of non-normal data. The methodology is implemented to validate an empirical model for energy dissipation in lap joints under dynamic loading
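One standard aggregate metric for correlated multivariate comparisons of this kind is the Mahalanobis distance between the predicted and observed response vectors, which can then be referred to a chi-square distribution. A minimal two-response sketch (all numbers invented):

```python
# Squared Mahalanobis distance for two correlated response quantities.

def mahalanobis2(pred, obs, cov):
    """d2 = (pred - obs)^T cov^-1 (pred - obs) for a 2x2 covariance."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    dx = (pred[0] - obs[0], pred[1] - obs[1])
    return (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
            + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))

pred, obs = (1.2, 2.1), (1.0, 2.0)
cov = ((0.04, 0.01), (0.01, 0.09))   # covariance of the measurement errors
d2 = mahalanobis2(pred, obs, cov)
# Accept the model if d2 falls below the chi-square(2) critical value
# (5.99 at the 95% level).
```

Unlike two separate univariate z-tests, this single statistic accounts for the correlation between the two responses, which is how aggregate metrics avoid the conflicting inferences mentioned above.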

  8. Feature Extraction for Structural Dynamics Model Validation

    Energy Technology Data Exchange (ETDEWEB)

Farrar, Charles [Los Alamos National Laboratory; Nishio, Mayuko [Yokohama University; Hemez, Francois [Los Alamos National Laboratory; Stull, Chris [Los Alamos National Laboratory; Park, Gyuhae [Chonnam National University; Cornwell, Phil [Rose-Hulman Institute of Technology; Figueiredo, Eloi [Universidade Lusófona; Luscher, D. J. [Los Alamos National Laboratory; Worden, Keith [University of Sheffield

    2016-01-13

    As structural dynamics becomes increasingly non-modal, stochastic and nonlinear, finite element model-updating technology must adopt the broader notions of model validation and uncertainty quantification. For example, particular re-sampling procedures must be implemented to propagate uncertainty through a forward calculation, and non-modal features must be defined to analyze nonlinear data sets. The latter topic is the focus of this report, but first, some more general comments regarding the concept of model validation will be discussed.

  9. Towards an assessment of riverine dissolved organic carbon in surface waters of the western Arctic Ocean based on remote sensing and biogeochemical modeling

    Science.gov (United States)

    Le Fouest, Vincent; Matsuoka, Atsushi; Manizza, Manfredi; Shernetsky, Mona; Tremblay, Bruno; Babin, Marcel

    2018-03-01

Future climate warming of the Arctic could potentially enhance the load of terrigenous dissolved organic carbon (tDOC) of Arctic rivers due to increased carbon mobilization within watersheds. A greater flux of tDOC might impact the biogeochemical processes of the coastal Arctic Ocean (AO) and ultimately its capacity to absorb atmospheric CO2. In this study, we show that sea-surface tDOC concentrations simulated by a physical-biogeochemical coupled model in the Canadian Beaufort Sea for 2003-2011 compare favorably with estimates retrieved by satellite imagery. Our results suggest that, over spring-summer, tDOC of riverine origin contributes to 35 % of primary production and that an equivalent of ~10 % of tDOC is exported westwards with the potential of fueling the biological production of the eastern Alaskan nearshore waters. The combination of model and satellite data provides promising results to extend this work to the entire AO so as to quantify, in conjunction with in situ data, the expected changes in tDOC fluxes and their potential impact on the AO biogeochemistry at basin scale.

  10. Model Validation in Ontology Based Transformations

    Directory of Open Access Journals (Sweden)

    Jesús M. Almendros-Jiménez

    2012-10-01

Model Driven Engineering (MDE) is an emerging approach in software engineering. MDE emphasizes the construction of models from which the implementation is derived by applying model transformations. The Ontology Definition Meta-model (ODM) has been proposed as a profile for UML models of the Web Ontology Language (OWL). In this context, transformations of UML models can be mapped into ODM/OWL transformations. On the other hand, model validation is a crucial task in model transformation. Meta-modeling makes it possible to give a syntactic structure to source and target models. However, semantic requirements also have to be imposed on source and target models. A given transformation is sound when source and target models fulfill the syntactic and semantic requirements. In this paper, we present an approach for model validation in ODM-based transformations. Adopting a logic-programming-based transformational approach, we show how it is possible to transform and validate models. Properties to be validated range from structural and semantic requirements of models (pre- and post-conditions) to properties of the transformation (invariants). The approach has been applied to a well-known example of model transformation: the Entity-Relationship (ER) to Relational Model (RM) transformation.

  11. MEDUSA-2.0: an intermediate complexity biogeochemical model of the marine carbon cycle for climate change and ocean acidification studies

    Directory of Open Access Journals (Sweden)

    A. Yool

    2013-10-01

MEDUSA-1.0 (Model of Ecosystem Dynamics, nutrient Utilisation, Sequestration and Acidification) was developed as an "intermediate complexity" plankton ecosystem model to study the biogeochemical response, and especially that of the so-called "biological pump", to anthropogenically driven change in the World Ocean (Yool et al., 2011). The base currency in this model was nitrogen, from which fluxes of organic carbon, including export to the deep ocean, were calculated by invoking fixed C:N ratios in phytoplankton, zooplankton and detritus. However, due to anthropogenic activity, the atmospheric concentration of carbon dioxide (CO2) has significantly increased above its natural, inter-glacial background. As such, simulating and predicting the carbon cycle in the ocean in its entirety, including ventilation of CO2 with the atmosphere and the resulting impact of ocean acidification on marine ecosystems, requires that both organic and inorganic carbon be afforded a more complete representation in the model specification. Here, we introduce MEDUSA-2.0, an expanded successor model which includes additional state variables for dissolved inorganic carbon, alkalinity, dissolved oxygen and detritus carbon (permitting variable C:N in exported organic matter), as well as a simple benthic formulation and extended parameterizations of phytoplankton growth, calcification and detritus remineralisation. A full description of MEDUSA-2.0, including its additional functionality, is provided and a multi-decadal spin-up simulation (1860–2005) is performed. The biogeochemical performance of the model is evaluated using a diverse range of observational data, and MEDUSA-2.0 is assessed relative to comparable models using output from the Coupled Model Intercomparison Project (CMIP5).
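The change from MEDUSA-1.0's fixed C:N stoichiometry to MEDUSA-2.0's explicit detritus-carbon state can be illustrated in a toy calculation (Redfield 106:16 is assumed as the fixed ratio; all numbers and function names are invented, not MEDUSA code):

```python
# Carbon export implied by a nitrogen export flux under the two schemes.

REDFIELD_CN = 106.0 / 16.0   # mol C per mol N

def carbon_export_fixed(n_export):
    """MEDUSA-1.0 style: C export derived from N export by a fixed ratio."""
    return n_export * REDFIELD_CN

def carbon_export_variable(n_export, cn_detritus):
    """MEDUSA-2.0 style: detritus carbon is its own state variable, so the
    exported C:N ratio can drift away from Redfield."""
    return n_export * cn_detritus

c_fixed = carbon_export_fixed(1.0)         # mmol C m-2 d-1 per mmol N exported
c_var = carbon_export_variable(1.0, 8.0)   # carbon-rich detritus exports more C
```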

  12. A broad view of model validation

    International Nuclear Information System (INIS)

    Tsang, C.F.

    1989-10-01

The safety assessment of a nuclear waste repository requires the use of models. Such models need to be validated to ensure, as much as possible, that they are a good representation of the actual processes occurring in the real system. In this paper we attempt to take a broad view by reviewing step by step the modeling process and bringing out the need to validate every step of this process. This model validation includes not only comparison of modeling results with data from selected experiments, but also evaluation of procedures for the construction of conceptual models and calculational models as well as methodologies for studying data and parameter correlation. The need for advancing basic scientific knowledge in related fields, for multiple assessment groups, and for presenting our modeling efforts in open literature to public scrutiny is also emphasized. 16 refs

  13. A regional scale modeling framework combining biogeochemical model with life cycle and economic analysis for integrated assessment of cropping systems.

    Science.gov (United States)

    Tabatabaie, Seyed Mohammad Hossein; Bolte, John P; Murthy, Ganti S

    2018-06-01

The goal of this study was to integrate a crop model, DNDC (DeNitrification-DeComposition), with life cycle assessment (LCA) and economic analysis models using ENVISION, a GIS-based integrated platform. The integrated model enables LCA practitioners to conduct integrated economic analysis and LCA on a regional scale while capturing the variability of soil emissions due to variation in regional factors during production of crops and biofuel feedstocks. To evaluate the integrated model, the corn-soybean cropping system in Eagle Creek Watershed, Indiana was studied, and the integrated model was used first to model the soil emissions and then to conduct the LCA and economic analysis. The results showed that the variation in soil emissions due to variation in weather is high, causing some locations to be a carbon sink in some years and a source of CO2 in others. To test the model under different scenarios, two tillage scenarios were defined, 1) conventional tillage (CT) and 2) no tillage (NT), and analyzed with the model. The overall GHG emissions for the corn-soybean cropping system were simulated, and the results showed that the NT scenario resulted in lower soil GHG emissions than the CT scenario. Moreover, the global warming potential (GWP) of corn ethanol from well to pump varied between 57 and 92 g CO2-eq./MJ, with the GWP under the NT system lower than that of the CT system. The cost break-even point was calculated as $3612.5/ha in a two-year corn-soybean cropping system, and the results showed that under low and medium prices for corn and soybean most of the farms did not meet the break-even point. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Establishing model credibility involves more than validation

    International Nuclear Information System (INIS)

    Kirchner, T.

    1991-01-01

One widely used definition of validation is the quantitative testing of the performance of a model through comparison of model predictions to independent sets of observations from the system being simulated. The ability to show that the model predictions compare well with observations is often thought to be the most rigorous test that can be used to establish credibility for a model in the scientific community. However, such tests are only part of the process used to establish credibility, and in some cases may be either unnecessary or misleading. Naylor and Finger extended the concept of validation to include the establishment of validity for the postulates embodied in the model and the test of assumptions used to select postulates for the model. Validity of postulates is established through concurrence by experts in the field of study that the mathematical or conceptual model contains the structural components and mathematical relationships necessary to adequately represent the system with respect to the goals for the model. This extended definition of validation provides for consideration of the structure of the model, not just its performance, in establishing credibility. Evaluation of a simulation model should establish the correctness of the code and the efficacy of the model within its domain of applicability. (24 refs., 6 figs.)

  15. Base Flow Model Validation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The innovation is the systematic "building-block" validation of CFD/turbulence models employing a GUI driven CFD code (RPFM) and existing as well as new data sets to...

  16. Model validation: Correlation for updating

    Indian Academy of Sciences (India)

    In this paper, a review is presented of the various methods which ... to make a direct and objective comparison of specific dynamic properties, measured ..... stiffness matrix is available from the analytical model, is that of reducing or condensing.

  17. Validating EHR clinical models using ontology patterns.

    Science.gov (United States)

    Martínez-Costa, Catalina; Schulz, Stefan

    2017-12-01

Clinical models are artefacts that specify how information is structured in electronic health records (EHRs). However, the makeup of clinical models is not guided by any formal constraint beyond a semantically vague information model. We address this gap by advocating ontology design patterns as a mechanism that makes the semantics of clinical models explicit. This paper demonstrates how ontology design patterns can validate existing clinical models using SHACL. Based on the Clinical Information Modelling Initiative (CIMI), we show how ontology patterns detect both modeling and terminology binding errors in CIMI models. SHACL, a W3C constraint language for the validation of RDF graphs, builds on the concept of "Shape", a description of data in terms of expected cardinalities, datatypes and other restrictions. SHACL, as opposed to OWL, subscribes to the Closed World Assumption (CWA) and is therefore more suitable for the validation of clinical models. We demonstrated the feasibility of the approach by manually describing the correspondences between six CIMI clinical models represented in RDF and two SHACL ontology design patterns. Using a Java-based SHACL implementation, we found at least eleven modeling and binding errors within these CIMI models. This demonstrates the usefulness of ontology design patterns not only as a modeling tool but also as a tool for validation. Copyright © 2017 Elsevier Inc. All rights reserved.
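The core idea, a shape that states expected cardinalities and datatypes and is checked under the Closed World Assumption, can be mimicked in a few lines of plain code. This toy validator only illustrates the principle; it is not SHACL syntax, and the example shape is invented rather than taken from CIMI:

```python
# Toy closed-world "shape" check in the spirit of SHACL: a shape states
# expected cardinalities and datatypes, and any violation is reported.

def validate(record, shape):
    """Return a list of violation messages (an empty list means conformance)."""
    problems = []
    for prop, (min_c, max_c, dtype) in shape.items():
        values = record.get(prop, [])
        if not (min_c <= len(values) <= max_c):
            problems.append(f"{prop}: cardinality {len(values)} not in [{min_c}, {max_c}]")
        for v in values:
            if not isinstance(v, dtype):
                problems.append(f"{prop}: {v!r} is not {dtype.__name__}")
    return problems

# Hypothetical shape for a minimal "blood pressure" clinical model:
bp_shape = {
    "systolic": (1, 1, float),
    "diastolic": (1, 1, float),
    "body_site": (0, 1, str),
}
report = validate({"systolic": [120.0]}, bp_shape)   # diastolic is missing
```

Under OWL's open-world reading, the missing diastolic value would merely be unknown; the closed-world check above reports it as a violation, which is the behaviour wanted when validating clinical records.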

  18. Verification and validation for waste disposal models

    International Nuclear Information System (INIS)

    1987-07-01

A set of evaluation criteria has been developed to assess the suitability of current verification and validation techniques for waste disposal methods. A survey of current practices and techniques was undertaken and evaluated using these criteria, and the items most relevant to waste disposal models were identified. Recommendations regarding the most suitable verification and validation practices for nuclear waste disposal modelling software have been made.

  19. Tracer travel time and model validation

    International Nuclear Information System (INIS)

    Tsang, Chin-Fu.

    1988-01-01

The performance assessment of a nuclear waste repository demands much more than the safety evaluation of civil constructions such as dams, or the resource evaluation of a petroleum or geothermal reservoir. It involves the estimation of low-probability (low-concentration) radionuclide transport extrapolated thousands of years into the future. Thus models used to make these estimates need to be carefully validated. A number of recent efforts have been devoted to the study of this problem. Some general comments on model validation were given by Tsang. The present paper discusses some issues of validation in regards to radionuclide transport. 5 refs

  20. Validating the passenger traffic model for Copenhagen

    DEFF Research Database (Denmark)

    Overgård, Christian Hansen; VUK, Goran

    2006-01-01

    The paper presents a comprehensive validation procedure for the passenger traffic model for Copenhagen based on external data from the Danish national travel survey and traffic counts. The model was validated for the years 2000 to 2004, with 2004 being of particular interest because the Copenhagen...... matched the observed traffic better than those of the transit assignment model. With respect to the metro forecasts, the model over-predicts metro passenger flows by 10% to 50%. The wide range of findings from the project resulted in two actions. First, a project was started in January 2005 to upgrade...

  1. The Role of External Inputs and Internal Cycling in Shaping the Global Ocean Cobalt Distribution: Insights From the First Cobalt Biogeochemical Model

    Science.gov (United States)

    Tagliabue, Alessandro; Hawco, Nicholas J.; Bundy, Randelle M.; Landing, William M.; Milne, Angela; Morton, Peter L.; Saito, Mak A.

    2018-04-01

    Cobalt is an important micronutrient for ocean microbes as it is present in vitamin B12 and is a co-factor in various metalloenzymes that catalyze cellular processes. Moreover, when seawater availability of cobalt is compared to biological demands, cobalt emerges as being depleted in seawater, pointing to a potentially important limiting role. To properly account for the potential biological role for cobalt, there is therefore a need to understand the processes driving the biogeochemical cycling of cobalt and, in particular, the balance between external inputs and internal cycling. To do so, we developed the first cobalt model within a state-of-the-art three-dimensional global ocean biogeochemical model. Overall, our model does a good job in reproducing measurements with a correlation coefficient of >0.7 in the surface and >0.5 at depth. We find that continental margins are the dominant source of cobalt, with a crucial role played by supply under low bottom-water oxygen conditions. The basin-scale distribution of cobalt supplied from margins is facilitated by the activity of manganese-oxidizing bacteria being suppressed under low oxygen and low temperatures, which extends the residence time of cobalt. Overall, we find a residence time of 7 and 250 years in the upper 250 m and global ocean, respectively. Importantly, we find that the dominant internal resupply process switches from regeneration and recycling of particulate cobalt to dissolution of scavenged cobalt between the upper ocean and the ocean interior. Our model highlights key regions of the ocean where biological activity may be most sensitive to cobalt availability.

  2. BIOMOVS: an international model validation study

    International Nuclear Information System (INIS)

    Haegg, C.; Johansson, G.

    1988-01-01

    BIOMOVS (BIOspheric MOdel Validation Study) is an international study where models used for describing the distribution of radioactive and nonradioactive trace substances in terrestrial and aquatic environments are compared and tested. The main objectives of the study are to compare and test the accuracy of predictions between such models, explain differences in these predictions, recommend priorities for future research concerning the improvement of the accuracy of model predictions and act as a forum for the exchange of ideas, experience and information. (author)

  3. BIOMOVS: An international model validation study

    International Nuclear Information System (INIS)

    Haegg, C.; Johansson, G.

    1987-01-01

    BIOMOVS (BIOspheric MOdel Validation Study) is an international study where models used for describing the distribution of radioactive and nonradioactive trace substances in terrestrial and aquatic environments are compared and tested. The main objectives of the study are to compare and test the accuracy of predictions between such models, explain differences in these predictions, recommend priorities for future research concerning the improvement of the accuracy of model predictions and act as a forum for the exchange of ideas, experience and information. (orig.)

  4. Biogeochemical speciation of Fe in ocean water

    NARCIS (Netherlands)

    Hiemstra, T.; Riemsdijk, van W.H.

    2006-01-01

The biogeochemical speciation of Fe in seawater has been evaluated using the consistent Non-Ideal Competitive Adsorption model (NICA–Donnan model). Two types of data sets were used, i.e. Fe-hydroxide solubility data and competitive ligand equilibration/cathodic stripping voltammetry (CLE/CSV) Fe

  5. Model validation: a systemic and systematic approach

    International Nuclear Information System (INIS)

    Sheng, G.; Elzas, M.S.; Cronhjort, B.T.

    1993-01-01

The term 'validation' is used ubiquitously in association with the modelling activities of numerous disciplines, including the social, political, natural, and physical sciences, and engineering. There is, however, a wide range of definitions, which gives rise to very different interpretations of what activities the process involves. Analyses of results from the present large international effort in modelling radioactive waste disposal systems illustrate the urgent need to develop a common approach to model validation. Some possible explanations are offered to account for the present state of affairs. The methodology developed treats model validation and code verification in a systematic fashion. In fact, this approach may be regarded as a comprehensive framework to assess the adequacy of any simulation study. (author)

  6. Ground-water models: Validate or invalidate

    Science.gov (United States)

    Bredehoeft, J.D.; Konikow, Leonard F.

    1993-01-01

    The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science, Karl Popper, argued that scientific theory cannot be validated, only invalidated. Popper’s view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification are misleading, at best. These terms should be abandoned by the ground-water community.

  7. Model performance analysis and model validation in logistic regression

    Directory of Open Access Journals (Sweden)

    Rosa Arboretti Giancristofaro

    2007-10-01

    Full Text Available In this paper a new model validation procedure for a logistic regression model is presented. First, we present a brief review of different techniques of model validation. Next, we define a number of properties required for a model to be considered "good", and a number of quantitative performance measures. Lastly, we describe a methodology for assessing the performance of a given model using an example taken from a management study.
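
    The quantitative performance measures this record refers to can be illustrated with a minimal sketch (hypothetical data and function names, not the paper's procedure): given a fitted logistic model's predicted probabilities on a held-out validation set, compute accuracy, sensitivity, and specificity at a classification threshold.

```python
# Minimal sketch of quantitative performance measures for validating a
# binary (e.g. logistic) classifier; data and names are hypothetical.

def confusion_counts(y_true, p_pred, threshold=0.5):
    """Count TP/FP/TN/FN at a probability threshold."""
    tp = fp = tn = fn = 0
    for y, p in zip(y_true, p_pred):
        pred = 1 if p >= threshold else 0
        if pred == 1 and y == 1:
            tp += 1
        elif pred == 1 and y == 0:
            fp += 1
        elif pred == 0 and y == 0:
            tn += 1
        else:
            fn += 1
    return tp, fp, tn, fn

def performance_measures(y_true, p_pred, threshold=0.5):
    """Accuracy, sensitivity (recall on positives) and specificity."""
    tp, fp, tn, fn = confusion_counts(y_true, p_pred, threshold)
    n = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / n,
        "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,
        "specificity": tn / (tn + fp) if (tn + fp) else 0.0,
    }

if __name__ == "__main__":
    y = [1, 1, 0, 0, 1, 0]                 # hypothetical validation labels
    p = [0.9, 0.4, 0.2, 0.6, 0.8, 0.1]     # hypothetical predicted probabilities
    print(performance_measures(y, p))
```

    On a real validation exercise these measures would be computed on data not used for fitting, which is the point of the hold-out procedures reviewed in the paper.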

  8. A discussion on validation of hydrogeological models

    International Nuclear Information System (INIS)

    Carrera, J.; Mousavi, S.F.; Usunoff, E.J.; Sanchez-Vila, X.; Galarza, G.

    1993-01-01

    Groundwater flow and solute transport are often driven by heterogeneities that elude easy identification. It is also difficult to select and describe the physico-chemical processes controlling solute behaviour. As a result, definition of a conceptual model involves numerous assumptions both on the selection of processes and on the representation of their spatial variability. Validating a numerical model by comparing its predictions with actual measurements may not be sufficient for evaluating whether or not it provides a good representation of 'reality'. Predictions will be close to measurements, regardless of model validity, if these are taken from experiments that stress well-calibrated model modes. On the other hand, predictions will be far from measurements when model parameters are very uncertain, even if the model is indeed a very good representation of the real system. Hence, we contend that 'classical' validation of hydrogeological models is not possible. Rather, models should be viewed as theories about the real system. We propose to follow a rigorous modeling approach in which different sources of uncertainty are explicitly recognized. The application of one such approach is illustrated by modeling a laboratory uranium tracer test performed on fresh granite, which was used as Test Case 1b in INTRAVAL. (author)

  9. Structural system identification: Structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  10. Advanced training simulator models. Implementation and validation

    International Nuclear Information System (INIS)

    Borkowsky, Jeffrey; Judd, Jerry; Belblidia, Lotfi; O'farrell, David; Andersen, Peter

    2008-01-01

    Modern training simulators are required to replicate plant data for both thermal-hydraulic and neutronic response. Replication is required such that reactivity manipulation on the simulator properly trains the operator for reactivity manipulation at the plant. This paper discusses advanced models which perform this function in real time using the coupled code system THOR/S3R. This code system models all the fluid systems in detail using an advanced two-phase thermal-hydraulic model. The nuclear core is modeled using an advanced three-dimensional nodal method and cycle-specific nuclear data. These models are configured to run interactively from a graphical instructor station or hardware operation panels. The simulator models are theoretically rigorous and are expected to replicate the physics of the plant. However, to verify replication, the models must be independently assessed. Plant data is the preferred validation method, but plant data is often not available for many important training scenarios. In the absence of data, validation may be obtained by slower-than-real-time transient analysis. This analysis can be performed by coupling a safety analysis code and a core design code. Such a coupling exists between the codes RELAP5 and SIMULATE-3K (S3K). RELAP5/S3K is used to validate the real-time model for several postulated plant events. (author)

  11. Validation of a phytoremediation computer model

    Energy Technology Data Exchange (ETDEWEB)

    Corapcioglu, M Y; Sung, K; Rhykerd, R L; Munster, C; Drew, M [Texas A and M Univ., College Station, TX (United States)

    1999-01-01

    The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated with 10 mg kg-1 TNT, PBB and chrysene.

  12. Mapping pan-Arctic CH4 emissions using an adjoint method by integrating process-based wetland and lake biogeochemical models and atmospheric CH4 concentrations

    Science.gov (United States)

    Tan, Z.; Zhuang, Q.; Henze, D. K.; Frankenberg, C.; Dlugokencky, E. J.; Sweeney, C.; Turner, A. J.

    2015-12-01

    Understanding CH4 emissions from wetlands and lakes is critical for the estimation of the Arctic carbon balance under rapidly warming climatic conditions. To date, our knowledge about these two CH4 sources is built almost solely on the upscaling of discontinuous measurements in limited areas to the whole region. Many studies have indicated that the controls of CH4 emissions from wetlands and lakes, including soil moisture, lake morphology and substrate content and quality, are notoriously heterogeneous, so the accuracy of those simple estimates could be questionable. Here we apply a high-spatial-resolution atmospheric inverse model (nested-grid GEOS-Chem Adjoint) over the Arctic, integrating SCIAMACHY and NOAA/ESRL CH4 measurements to constrain the CH4 emissions estimated with process-based wetland and lake biogeochemical models. Our modeling experiments using different wetland CH4 emission schemes and satellite and surface measurements show that the total amount of CH4 emitted from the Arctic wetlands is well constrained, but the spatial distribution of CH4 emissions is sensitive to priors. For CH4 emissions from lakes, our high-resolution inversion shows that the models overestimate CH4 emissions in Alaskan coastal lowlands and East Siberian lowlands. Our study also indicates that the precision and coverage of measurements need to be improved to achieve more accurate high-resolution estimates.

  13. A methodology for PSA model validation

    International Nuclear Information System (INIS)

    Unwin, S.D.

    1995-09-01

    This document reports Phase 2 of work undertaken by Science Applications International Corporation (SAIC) in support of the Atomic Energy Control Board's Probabilistic Safety Assessment (PSA) review. A methodology is presented for the systematic review and evaluation of a PSA model. These methods are intended to support consideration of the following question: To within the scope and depth of modeling resolution of a PSA study, is the resultant model a complete and accurate representation of the subject plant? This question was identified as a key PSA validation issue in SAIC's Phase 1 project. The validation methods are based on a model transformation process devised to enhance the transparency of the modeling assumptions. Through conversion to a 'success-oriented' framework, a closer correspondence to plant design and operational specifications is achieved. This can both enhance the scrutability of the model by plant personnel, and provide an alternative perspective on the model that may assist in the identification of deficiencies. The model transformation process is defined and applied to fault trees documented in the Darlington Probabilistic Safety Evaluation. A tentative real-time process is outlined for implementation and documentation of a PSA review based on the proposed methods. (author). 11 refs., 9 tabs.

  14. Paleoclimate validation of a numerical climate model

    International Nuclear Information System (INIS)

    Schelling, F.J.; Church, H.W.; Zak, B.D.; Thompson, S.L.

    1994-01-01

    An analysis planned to validate regional climate model results for a past climate state at Yucca Mountain, Nevada, against paleoclimate evidence for the period is described. This analysis, which will use the GENESIS model of global climate nested with the RegCM2 regional climate model, is part of a larger study for DOE's Yucca Mountain Site Characterization Project that is evaluating the impacts of long-term future climate change on performance of the potential high-level nuclear waste repository at Yucca Mountain. The planned analysis and anticipated results are presented.

  15. Validation of the STAFF-5 computer model

    International Nuclear Information System (INIS)

    Fletcher, J.F.; Fields, S.R.

    1981-04-01

    STAFF-5 is a dynamic heat-transfer-fluid-flow stress model designed for computerized prediction of the temperature-stress performance of spent LWR fuel assemblies under storage/disposal conditions. Validation of the temperature-calculating abilities of this model was performed by comparing temperature calculations under specified conditions to experimental data from the Engine Maintenance and Disassembly (EMAD) Fuel Temperature Test Facility and to calculations performed by Battelle Pacific Northwest Laboratory (PNL) using the HYDRA-1 model. The comparisons confirmed the ability of STAFF-5 to calculate representative fuel temperatures over a considerable range of conditions, as a first step in the evaluation and prediction of fuel temperature-stress performance.

  16. SPR Hydrostatic Column Model Verification and Validation.

    Energy Technology Data Exchange (ETDEWEB)

    Bettin, Giorgia [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lord, David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rudeen, David Keith [Gram, Inc. Albuquerque, NM (United States)

    2015-10-01

    A Hydrostatic Column Model (HCM) was developed to help differentiate between normal "tight" well behavior and small-leak behavior under nitrogen for testing the pressure integrity of crude oil storage wells at the U.S. Strategic Petroleum Reserve. This effort was motivated by steady, yet distinct, pressure behavior of a series of Big Hill caverns that have been placed under nitrogen for an extended period of time. This report describes the HCM model, its functional requirements, the model structure, and the verification and validation process. Different modes of operation are also described, which illustrate how the software can be used to model extended nitrogen monitoring and Mechanical Integrity Tests by predicting wellhead pressures along with nitrogen interface movements. Model verification has shown that the program runs correctly and is implemented as intended. The cavern BH101 long-term nitrogen test was used to validate the model, which showed very good agreement with measured data. This supports the claim that the model is, in fact, capturing the relevant physical phenomena and can be used to make accurate predictions of both wellhead pressure and interface movements.
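
    The hydrostatic principle underlying such a column model can be sketched as follows (a simplified illustration with hypothetical fluid densities and column heights, not the actual SNL HCM code): the pressure at the base of a layered fluid column is the surface pressure plus the sum of rho * g * h over the layers.

```python
# Sketch of a hydrostatic column calculation; densities and heights below
# are hypothetical, chosen only to illustrate the arithmetic.

G = 9.80665  # standard gravity, m/s^2

def column_base_pressure(surface_pressure_pa, layers):
    """Pressure at the base of a layered fluid column.

    layers: list of (density_kg_m3, height_m), ordered top to bottom.
    """
    p = surface_pressure_pa
    for rho, h in layers:
        p += rho * G * h  # hydrostatic contribution of this layer
    return p

if __name__ == "__main__":
    # Hypothetical column: 300 m of nitrogen gas over 700 m of crude oil.
    layers = [(50.0, 300.0), (850.0, 700.0)]
    p_base = column_base_pressure(101_325.0, layers)
    print(f"{p_base / 1e6:.2f} MPa")
```

    In a leak-detection setting, the interesting quantity is the inverse problem: given a measured wellhead pressure and known fluid densities, infer where the fluid interfaces sit and whether they are moving.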

  17. The ocean response to volcanic iron fertilisation after the eruption of Kasatochi volcano: a regional-scale biogeochemical ocean model study

    Directory of Open Access Journals (Sweden)

    A. Lindenthal

    2013-06-01

    Full Text Available In high-nutrient–low-chlorophyll regions, phytoplankton growth is limited by the availability of water-soluble iron. The eruption of Kasatochi volcano in August 2008 led to ash deposition into the iron-limited NE Pacific Ocean. Volcanic ash released iron upon contact with seawater and generated a massive phytoplankton bloom. Here we investigate this event with a one-dimensional ocean biogeochemical column model to illuminate the ocean response to iron fertilisation by volcanic ash. The results indicate that the added iron triggered a phytoplankton bloom in the summer of 2008. Associated with this bloom, macronutrient concentrations such as nitrate and silicate decline and zooplankton biomass is enhanced in the ocean mixed layer. The simulated development of the drawdown of carbon dioxide and the increase of pH in surface seawater is in good agreement with available observations. Sensitivity studies with different supply dates of iron to the ocean emphasise the favourable oceanic conditions in the NE Pacific for generating massive phytoplankton blooms, in particular during July and August in comparison to other months. By varying the amount of volcanic ash and associated bio-available iron supplied to the ocean, the model results demonstrate that the NE Pacific Ocean has a higher, but still limited, capability to consume CO2 after iron fertilisation than that observed after the volcanic eruption of Kasatochi.

  18. Data-model integration to interpret connectivity between biogeochemical cycling, and vegetation phenology and productivity in mountainous ecosystems under changing hydrologic regimes

    Science.gov (United States)

    Brodie, E.; Arora, B.; Beller, H. R.; Bill, M.; Bouskill, N.; Chakraborty, R.; Conrad, M. E.; Dafflon, B.; Enquist, B. J.; Falco, N.; Henderson, A.; Karaoz, U.; Polussa, A.; Sorensen, P.; Steltzer, H.; Wainwright, H. M.; Wang, S.; Williams, K. H.; Wilmer, C.; Wu, Y.

    2017-12-01

    In mountainous systems, snow-melt is associated with a large pulse of nutrients that originates from under-snow microbial mineralization of organic matter and microbial biomass turnover. Vegetation phenology in these systems is regulated by environmental cues such as air temperature ranges and photoperiod, such that, under typical conditions, vegetation greening and nutrient uptake occur in sync with microbial biomass turnover and nutrient release, closing nutrient cycles and enhancing nutrient retention. However, early snow-melt has been observed with increasing frequency in the mountainous west and is hypothesized to disrupt coupled plant-microbial behavior, potentially resulting in a temporal discontinuity between microbial nutrient release and vegetation greening. As part of the Watershed Function Scientific Focus Area (SFA) at Berkeley Lab we are quantifying below-ground biogeochemistry and above-ground phenology and vegetation chemistry and their relationships to hydrologic events at a lower montane hillslope in the East River catchment, Crested Butte, CO. This presentation will focus on data-model integration to interpret connectivity between biogeochemical cycling of nitrogen and vegetation nitrogen demand. Initial model results suggest that early snow-melt will result in an earlier accumulation and leaching loss of nitrate from the upper soil depths but that vegetation productivity may not decline as traits such as greater rooting depth and resource allocation to stems are favored.

  19. Natural analogues and radionuclide transport model validation

    International Nuclear Information System (INIS)

    Lever, D.A.

    1987-08-01

    In this paper, some possible roles for natural analogues are discussed from the point of view of those involved with the development of mathematical models for radionuclide transport and with the use of these models in repository safety assessments. The characteristic features of a safety assessment are outlined in order to address the questions of where natural analogues can be used to improve our understanding of the processes involved and where they can assist in validating the models that are used. Natural analogues have the potential to provide useful information about some critical processes, especially long-term chemical processes and migration rates. There is likely to be considerable uncertainty and ambiguity associated with the interpretation of natural analogues, and thus it is their general features which should be emphasized, and models with appropriate levels of sophistication should be used. Experience gained in modelling the Koongarra uranium deposit in northern Australia is drawn upon. (author)

  20. Predictive validation of an influenza spread model.

    Directory of Open Access Journals (Sweden)

    Ayaz Hyder

    Full Text Available BACKGROUND: Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. METHODS AND FINDINGS: We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998-1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, which depended on the method of forecasting (static or dynamic). CONCLUSIONS: Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve

  1. Predictive Validation of an Influenza Spread Model

    Science.gov (United States)

    Hyder, Ayaz; Buckeridge, David L.; Leung, Brian

    2013-01-01

    Background Modeling plays a critical role in mitigating impacts of seasonal influenza epidemics. Complex simulation models are currently at the forefront of evaluating optimal mitigation strategies at multiple scales and levels of organization. Given their evaluative role, these models remain limited in their ability to predict and forecast future epidemics, leading some researchers and public-health practitioners to question their usefulness. The objective of this study is to evaluate the predictive ability of an existing complex simulation model of influenza spread. Methods and Findings We used extensive data on past epidemics to demonstrate the process of predictive validation. This involved generalizing an individual-based model for influenza spread and fitting it to laboratory-confirmed influenza infection data from a single observed epidemic (1998–1999). Next, we used the fitted model and modified two of its parameters based on data on real-world perturbations (vaccination coverage by age group and strain type). Simulating epidemics under these changes allowed us to estimate the deviation/error between the expected epidemic curve under perturbation and observed epidemics taking place from 1999 to 2006. Our model was able to forecast absolute intensity and epidemic peak week several weeks earlier with reasonable reliability, which depended on the method of forecasting (static or dynamic). Conclusions Good predictive ability of influenza epidemics is critical for implementing mitigation strategies in an effective and timely manner. Through the process of predictive validation applied to a current complex simulation model of influenza spread, we provided users of the model (e.g. public-health officials and policy-makers) with quantitative metrics and practical recommendations on mitigating impacts of seasonal influenza epidemics. This methodology may be applied to other models of communicable infectious diseases to test and potentially improve their predictive
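
    The deviation/error step of predictive validation described in this record can be sketched generically (hypothetical weekly case counts, not the study's data or model): compare a simulated epidemic curve under perturbed parameters against an observed curve, via an error measure and the predicted peak week.

```python
# Generic sketch of comparing simulated vs observed epidemic curves.
# The weekly counts below are hypothetical, for illustration only.
import math

def rmse(predicted, observed):
    """Root-mean-square error between two weekly case-count curves."""
    if len(predicted) != len(observed):
        raise ValueError("curves must cover the same weeks")
    return math.sqrt(
        sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)
    )

def peak_week(curve):
    """Index (week) of maximum epidemic intensity."""
    return max(range(len(curve)), key=lambda i: curve[i])

if __name__ == "__main__":
    simulated = [2, 5, 12, 30, 55, 40, 18, 6]
    observed = [1, 6, 15, 28, 60, 35, 20, 5]
    print("RMSE:", round(rmse(simulated, observed), 2))
    print("peak week (sim, obs):", peak_week(simulated), peak_week(observed))
```

    A validation exercise like the one in the record would repeat this comparison across several seasons, checking both overall intensity error and how far in advance the peak week is forecast correctly.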

  2. External validation of EPIWIN biodegradation models.

    Science.gov (United States)

    Posthumus, R; Traas, T P; Peijnenburg, W J G M; Hulzebos, E M

    2005-01-01

    The BIOWIN biodegradation models were evaluated for their suitability for regulatory purposes. BIOWIN includes the linear and non-linear BIODEG and MITI models for estimating the probability of rapid aerobic biodegradation, and an expert survey model for primary and ultimate biodegradation estimation. Experimental biodegradation data for 110 newly notified substances were compared with the estimations of the different models. The models were applied separately and in combinations to determine which model(s) showed the best performance. The results of this study were compared with the results of other validation studies and other biodegradation models. The BIOWIN models predict not-readily biodegradable substances with high accuracy, in contrast to ready biodegradability. In view of the high environmental concern over persistent chemicals, and the large number of not-readily biodegradable chemicals compared to readily biodegradable ones, a model is preferred that gives a minimum of false positives without a correspondingly high percentage of false negatives. A combination of the BIOWIN models (BIOWIN2 or BIOWIN6) showed the highest predictive value for not-readily biodegradable substances. However, the highest score for overall predictivity with the lowest percentage of false predictions was achieved by applying BIOWIN3 (pass level 2.75) and BIOWIN6.
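
    The conservative model-combination logic evaluated in this record can be sketched generically (boolean stand-ins for the model outputs, which in reality are probabilities and ratings; this is not the EPIWIN/BIOWIN software): classify a substance as readily biodegradable only when both models agree, which minimises false positives for ready biodegradability, i.e. persistent chemicals wrongly cleared.

```python
# Generic sketch of AND-combining two screening models conservatively.
# Substance names and boolean calls are hypothetical placeholders.

def combined_ready(pred_a: bool, pred_b: bool) -> bool:
    """Call 'readily biodegradable' only if BOTH models say so."""
    return pred_a and pred_b

def screen(substances):
    """substances: dict name -> (model_a_ready, model_b_ready)."""
    return {name: combined_ready(a, b) for name, (a, b) in substances.items()}

if __name__ == "__main__":
    calls = {
        "substance_1": (True, True),    # both agree: ready
        "substance_2": (True, False),   # disagreement: treated as not ready
        "substance_3": (False, False),  # both agree: not ready
    }
    print(screen(calls))
```

    The trade-off is exactly the one the study measures: requiring agreement lowers false positives but raises the chance of calling a genuinely degradable substance persistent, so the preferred combination depends on which error is more costly.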

  3. A biogeochemical model of Lake Pusiano (North Italy) and its use in the predictability of phytoplankton blooms: first preliminary results

    Directory of Open Access Journals (Sweden)

    Alessandro OGGIONI

    2006-02-01

    Full Text Available This study reports the first preliminary results of the DYRESM-CAEDYM model application to a mid-size sub-alpine lake (Lake Pusiano, North Italy). The in-lake modelling is part of a more general project called the Pusiano Integrated Lake/Catchment project (PILE), whose final goal is to understand the hydrological and trophic relationships between lake and catchment, supporting the restoration plan of the lake through field data analysis and numerical models. DYRESM is a 1D-3D hydrodynamics model for predicting the vertical profile of temperature, salinity and density. CAEDYM is a multi-component ecological model, used here as a process-based phytoplankton-zooplankton model, which includes algorithms to simulate the nutrient cycles within the water column as well as the air-water gas exchanges and the water-sediment fluxes. The first results of the hydrodynamics simulations underline the capability of the model to accurately simulate the surface temperature seasonal trend and the thermal gradient, whereas, during summer stratification, the model underestimates the bottom temperature by around 2 °C. The ecological model describes the epilimnetic reactive phosphorus (PO4) depletion (due to phytoplankton uptake) and the increase in PO4 concentrations in the deepest layers of the lake (due to mineralization processes and sediment release). In terms of phytoplankton dynamics the model accounts for the Planktothrix rubescens dominance during the whole season, whereas it seems to underestimate the peak in primary production related to both simulated algal groups (P. rubescens and the rest of the species aggregated in a single class). The future aims of the project are to complete the model parameterization and to connect the in-lake and catchment modelling in order to gain an integrated view of the lake-catchment ecosystem, as well as to develop a three-dimensional model of the lake.

  4. Modeling the nitrogen fluxes in the Black Sea using a 3D coupled hydrodynamical-biogeochemical model: transport versus biogeochemical processes, exchanges across the shelf break and comparison of the shelf and deep sea ecodynamics

    Directory of Open Access Journals (Sweden)

    M. Grégoire

    2004-01-01

    Full Text Available A 6-compartment biogeochemical model of nitrogen cycling and plankton productivity has been coupled with a 3D general circulation model in an enclosed environment (the Black Sea) so as to quantify and compare, on a seasonal and annual scale, the typical internal biogeochemical functioning of the shelf and of the deep sea, as well as to estimate the nitrogen and water exchanges at the shelf break. Model results indicate that the annual nitrogen net export to the deep sea roughly corresponds to the annual load of nitrogen discharged by the rivers on the shelf. The model-estimated vertically integrated gross annual primary production is 130 g C m-2 yr-1 for the whole basin, 220 g C m-2 yr-1 for the shelf and 40 g C m-2 yr-1 for the central basin. In agreement with sediment trap observations, model results indicate a rapid and efficient recycling of particulate organic matter in the sub-oxic portion of the water column (60-80 m) of the open sea. More than 95% of the PON produced in the euphotic layer is recycled in the upper 100 m of the water column, 87% in the upper 80 m and 67% in the euphotic layer. The model estimates the annual export of POC towards the anoxic layer at 4 × 10^10 mol yr-1. This POC is definitely lost to the system and represents 2% of the annual primary production of the open sea.

  5. Biogeochemical Modeling of In Situ U(VI) Reduction and Immobilization with Emulsified Vegetable Oil as the Electron Donor at a Field Site in Oak Ridge, Tennessee

    Science.gov (United States)

    Tang, G.; Parker, J.; Wu, W.; Schadt, C. W.; Watson, D. B.; Brooks, S. C.; ORIFRC Team

    2011-12-01

    A comprehensive biogeochemical model was developed to quantitatively describe the coupled hydrologic, geochemical and microbiological processes that occurred following injection of emulsified vegetable oil (EVO) as the electron donor to immobilize U(VI) at the Oak Ridge Integrated Field Research Challenge (ORIFRC) site in Tennessee. The model couples the degradation of EVO, the production and oxidation of long-chain fatty acids (LCFA), glycerol, hydrogen and acetate, the reduction of nitrate, manganese, ferric iron, sulfate and uranium, and methanogenesis with the growth of multiple microbial groups. The model describes the evolution of geochemistry and microbial populations not only in the aqueous phase, as typically observed, but also in the mineral phase, and therefore enables us to evaluate the applicability of rates from the literature for field-scale assessment, estimate the retention and degradation rates of EVO and LCFA, and assess the influence of the coupled processes on the fate and transport of U(VI). Our results suggested that syntrophic bacteria or metal reducers might catalyze LCFA oxidation in the downstream locations when sulfate was consumed, and that competition between methanogens and others for electron donors and the slow growth of methanogens might contribute to the sustained reducing condition. Among the large number of hydrologic, geochemical and microbiological parameter values, the initial biomass, the interactions (e.g., inhibition) of the microbial functional groups, and the rate and extent of Mn and Fe oxide reduction appear to be the major sources of uncertainty. Our model provides a platform to conduct numerical experiments to study these interactions, and could be useful for further iterative experimental and modeling investigations into the bioreductive immobilization of radionuclide and metal contaminants in the subsurface.

  6. Integrated water system simulation by considering hydrological and biogeochemical processes: model development, with parameter sensitivity and autocalibration

    Science.gov (United States)

    Zhang, Y. Y.; Shao, Q. X.; Ye, A. Z.; Xing, H. T.; Xia, J.

    2016-02-01

    Integrated water system modeling is a feasible approach to understanding severe water crises in the world and promoting the implementation of integrated river basin management. In this study, a classic hydrological model (the time variant gain model: TVGM) was extended to an integrated water system model by coupling multiple water-related processes in hydrology, biogeochemistry, water quality, and ecology, and considering the interference of human activities. A parameter analysis tool, which included sensitivity analysis, autocalibration and model performance evaluation, was developed to improve modeling efficiency. To demonstrate the model performance, the Shaying River catchment, which is the largest highly regulated and heavily polluted tributary of the Huai River basin in China, was selected as the case study area. The model performance was evaluated on the key water-related components including runoff, water quality, diffuse pollution load (or nonpoint sources) and crop yield. Results showed that our proposed model simulated most components reasonably well. The simulated daily runoff at most regulated and less-regulated stations matched well with the observations. The average correlation coefficient and Nash-Sutcliffe efficiency were 0.85 and 0.70, respectively. Both the simulated low and high flows at most stations were improved when dam regulation was considered. The daily ammonium-nitrogen (NH4-N) concentration was also well captured, with an average correlation coefficient of 0.67. Furthermore, the diffuse-source load of NH4-N and the corn yield were reasonably simulated at the administrative region scale. This integrated water system model is expected to improve simulation performance with extension to more model functionalities, and to provide a scientific basis for implementation in integrated river basin management.
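
    The two goodness-of-fit measures quoted in this record, the Pearson correlation coefficient and the Nash-Sutcliffe efficiency, can be computed as follows (generic formulas with hypothetical runoff values, not the study's data): NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2), where 1 is a perfect fit and values below 0 mean the model is worse than predicting the observed mean.

```python
# Generic goodness-of-fit measures for simulated vs observed series.
# The runoff values below are hypothetical, for illustration only.
import math

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is perfect, <0 worse than the mean."""
    mean_obs = sum(observed) / len(observed)
    num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    den = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - num / den

def pearson_r(x, y):
    """Pearson correlation coefficient between two series."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(
        sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y)
    )
    return num / den

if __name__ == "__main__":
    obs = [3.0, 5.0, 9.0, 12.0, 7.0, 4.0]   # hypothetical daily runoff
    sim = [2.5, 5.5, 8.0, 11.0, 8.0, 4.5]
    print("r =", round(pearson_r(obs, sim), 3), "NSE =", round(nse(obs, sim), 3))
```

    Reporting both measures is common practice because correlation rewards matching the shape of the hydrograph while NSE also penalises bias in magnitude.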

  7. Estimating the potential of energy saving and carbon emission mitigation of cassava-based fuel ethanol using life cycle assessment coupled with a biogeochemical process model

    Science.gov (United States)

    Jiang, Dong; Hao, Mengmeng; Fu, Jingying; Tian, Guangjin; Ding, Fangyu

    2017-09-01

    Global warming and the increasing concentration of atmospheric greenhouse gases (GHG) have prompted considerable interest in the potential role of energy plant biomass. Cassava-based fuel ethanol is one of the most important bioenergy sources and has attracted much attention in both developed and developing countries. However, the development of cassava-based fuel ethanol still faces many uncertainties, including raw material supply, net energy potential, and carbon emission mitigation potential. Thus, an accurate estimation of these factors is urgently needed. This study provides an approach to estimate the energy saving and carbon emission mitigation potentials of cassava-based fuel ethanol through LCA (life cycle assessment) coupled with a biogeochemical process model, the GEPIC (GIS-based environmental policy integrated climate) model. The results indicate that the total potential cassava yield on marginal land in China is 52.51 million t; the energy ratio varies from 0.07 to 1.44, and the net energy surplus of cassava-based fuel ethanol in China is 92,920.58 million MJ. The total carbon emission mitigation from cassava-based fuel ethanol in China is 4593.89 million kgC. Guangxi, Guangdong, and Fujian are identified as target regions for large-scale development of the cassava-based fuel ethanol industry. These results provide an operational approach and fundamental data for scientific research and energy planning.
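The energy ratio and net energy surplus reported in this record are simple life-cycle bookkeeping quantities. A minimal sketch, using invented totals rather than the study's data:

```python
def energy_ratio(output_mj, input_mj):
    """Life-cycle energy ratio: energy delivered by the fuel divided by the
    energy invested over the whole production chain (> 1 means net gain)."""
    return output_mj / input_mj

def net_energy_surplus(output_mj, input_mj):
    """Energy delivered minus energy invested, in the same units (MJ)."""
    return output_mj - input_mj

# Invented life-cycle totals (MJ), not the study's data
delivered, invested = 144_000.0, 100_000.0
ratio = energy_ratio(delivered, invested)          # matches the upper end of the reported range
surplus = net_energy_surplus(delivered, invested)  # positive: a net energy gain
```

A ratio below 1, as at the low end of the reported 0.07-1.44 range, means producing the ethanol consumes more energy than the fuel delivers.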

  8. Validation of a phytoremediation computer model

    International Nuclear Information System (INIS)

    Corapcioglu, M.Y.; Sung, K.; Rhykerd, R.L.; Munster, C.; Drew, M.

    1999-01-01

    The use of plants to stimulate remediation of contaminated soil is an effective, low-cost cleanup method which can be applied to many different sites. A phytoremediation computer model has been developed to simulate how recalcitrant hydrocarbons interact with plant roots in unsaturated soil. A study was conducted to provide data to validate and calibrate the model. During the study, lysimeters were constructed and filled with soil contaminated with 10 mg kg⁻¹ TNT, PBB and chrysene. Vegetated and unvegetated treatments were conducted in triplicate to obtain data regarding contaminant concentrations in the soil, plant roots, root distribution, microbial activity, plant water use and soil moisture. When given the parameters of time and depth, the model successfully predicted contaminant concentrations under actual field conditions. Other model parameters are currently being evaluated. 15 refs., 2 figs

  9. Towards policy relevant environmental modeling: contextual validity and pragmatic models

    Science.gov (United States)

    Miles, Scott B.

    2000-01-01

    "What makes for a good model?" In various forms, this is a question that, undoubtedly, many people, businesses, and institutions ponder with regard to their particular domain of modeling. One particular domain that is wrestling with this question is the multidisciplinary field of environmental modeling. Examples of environmental models range from models of contaminated ground water flow to the economic impact of natural disasters, such as earthquakes. One of the distinguishing claims of the field is the relevancy of environmental modeling to policy and environment-related decision-making in general. A pervasive view by both scientists and decision-makers is that a "good" model is one that is an accurate predictor. Thus, determining whether a model is "accurate" or "correct" is done by comparing model output to empirical observations. The expected outcome of this process, usually referred to as "validation" or "ground truthing," is a stamp on the model in question of "valid" or "not valid" that serves to indicate whether or not the model will be reliable before it is put into service in a decision-making context. In this paper, I begin by elaborating on the prevailing view of model validation and why this view must change. Drawing from concepts coming out of the studies of science and technology, I go on to propose a contextual view of validity that can overcome the problems associated with "ground truthing" models as an indicator of model goodness. The problem of how we talk about and determine model validity has much to do with how we perceive the utility of environmental models. In the remainder of the paper, I argue that we should adopt ideas of pragmatism in judging what makes for a good model and, in turn, developing good models. From such a perspective of model goodness, good environmental models should facilitate communication, convey—not bury or "eliminate"—uncertainties, and, thus, afford the active building of consensus decisions, instead

  10. Concepts of Model Verification and Validation

    International Nuclear Information System (INIS)

    Thacker, B.H.; Doebling, S.W.; Hemez, F.M.; Anderson, M.C.; Pepin, J.E.; Rodriguez, E.A.

    2004-01-01

    Model verification and validation (V&V) is an enabling methodology for the development of computational models that can be used to make engineering predictions with quantified confidence. Model V&V procedures are needed by government and industry to reduce the time, cost, and risk associated with full-scale testing of products, materials, and weapon systems. Quantifying the confidence and predictive accuracy of model calculations provides the decision-maker with the information necessary for making high-consequence decisions. The development of guidelines and procedures for conducting a model V&V program are currently being defined by a broad spectrum of researchers. This report reviews the concepts involved in such a program. Model V&V is a current topic of great interest to both government and industry. In response to a ban on the production of new strategic weapons and nuclear testing, the Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship Program (SSP). An objective of the SSP is to maintain a high level of confidence in the safety, reliability, and performance of the existing nuclear weapons stockpile in the absence of nuclear testing. This objective has challenged the national laboratories to develop high-confidence tools and methods that can be used to provide credible models needed for stockpile certification via numerical simulation. There has been a significant increase in activity recently to define V&V methods and procedures. The U.S. Department of Defense (DoD) Modeling and Simulation Office (DMSO) is working to develop fundamental concepts and terminology for V&V applied to high-level systems such as ballistic missile defense and battle management simulations. The American Society of Mechanical Engineers (ASME) has recently formed a Standards Committee for the development of V&V procedures for computational solid mechanics models. The Defense Nuclear Facilities Safety Board (DNFSB) has been a proponent of model

  11. Potential environmental impact of tidal energy extraction in the Pentland Firth at large spatial scales: results of a biogeochemical model

    Science.gov (United States)

    van der Molen, Johan; Ruardij, Piet; Greenwood, Naomi

    2016-05-01

    A model study was carried out of the potential large-scale (> 100 km) effects of marine renewable tidal energy generation in the Pentland Firth, using the 3-D hydrodynamics-biogeochemistry model GETM-ERSEM-BFM. A realistic 800 MW scenario and a high-impact scenario with massive expansion of tidal energy extraction to 8 GW were considered. The realistic 800 MW scenario suggested minor effects on the tides, and undetectable effects on the biogeochemistry. The massive-expansion 8 GW scenario suggested effects would be observed hundreds of kilometres away, with changes of up to 10% in tidal and ecosystem variables, in particular in a broad area in the vicinity of the Wash. There, waters became less turbid, and primary production increased with associated increases in faunal ecosystem variables. Moreover, a one-off increase in carbon storage in the sea bed was detected. Although these first results suggest positive environmental effects, further investigation is recommended of (i) the residual circulation in the vicinity of the Pentland Firth and effects on larval dispersal using a higher-resolution model and (ii) ecosystem effects with (future) state-of-the-art models if energy extraction substantially beyond 1 GW is planned.

  12. The influence of winter convection on primary production: A parameterisation using a hydrostatic three-dimensional biogeochemical model

    DEFF Research Database (Denmark)

    Grosse, Fabian; Lindemann, Christian; Pätch, Johannes

    2014-01-01

    organic carbon. The carbon export during late winter/early spring significantly exceeded the export of the reference run. Furthermore, a non-hydrostatic convection model was used to evaluate the major assumption of the presented parameterisation which implies the matching of the mixed layer depth...

  13. Where in the Marsh is the Water (and When)?: Measuring and modeling salt marsh hydrology for ecological and biogeochemical applications

    Science.gov (United States)

    Salt marsh hydrology presents many difficulties from a measurement and modeling standpoint: the bi-directional flows of tidal waters, variable water densities due to mixing of fresh and salt water, significant influences from vegetation, and complex stream morphologies. Because o...

  14. A validated physical model of greenhouse climate

    International Nuclear Information System (INIS)

    Bot, G.P.A.

    1989-01-01

    In the greenhouse model the instantaneous environmental crop growth factors are calculated as output, together with the physical behaviour of the crop. The boundary conditions for this model are the outside weather conditions; other inputs are the physical characteristics of the crop, of the greenhouse and of the control system. The greenhouse model is based on the energy, water vapour and CO2 balances of the crop-greenhouse system. While the emphasis is on the dynamic behaviour of the greenhouse for implementation in continuous optimization, the state variables temperature, water vapour pressure and carbon dioxide concentration in the relevant greenhouse parts (crop, air, soil and cover) are calculated from the balances over these parts. To do this in a proper way, the physical exchange processes between the system parts have to be quantified first. Therefore the greenhouse model is constructed from submodels describing these processes: a. Radiation transmission model for the modification of the outside to the inside global radiation. b. Ventilation model to describe the ventilation exchange between greenhouse and outside air. c. The description of the exchange of energy and mass between the crop and the greenhouse air. d. Calculation of the thermal radiation exchange between the various greenhouse parts. e. Quantification of the convective exchange processes between the greenhouse air and respectively the cover, the heating pipes and the soil surface and between the cover and the outside air. f. Determination of the heat conduction in the soil. The various submodels are validated first and then the complete greenhouse model is verified.
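As an illustration of the balance-equation approach this record describes, a single lumped energy balance for greenhouse air can be stepped forward in time. The loss coefficient and heat capacity below are invented placeholders, far cruder than Bot's submodels a-f:

```python
def greenhouse_air_step(t_air, t_out, q_solar, dt, ua=300.0, c_air=50_000.0):
    """One explicit Euler step of a lumped greenhouse air energy balance:
    C * dT/dt = Q_solar - UA * (T_air - T_out).
    ua: overall heat-loss coefficient (W/K), c_air: lumped heat capacity (J/K);
    both values are hypothetical, chosen only for illustration."""
    return t_air + dt * (q_solar - ua * (t_air - t_out)) / c_air

# Night-time cooling: no solar gain, inside warmer than outside
t = 18.0
for _ in range(60):                       # one hour in 60 s steps
    t = greenhouse_air_step(t, t_out=5.0, q_solar=0.0, dt=60.0)
# the air temperature relaxes towards the outside temperature
```

A full model like the one described here replaces this single balance with coupled balances for crop, air, soil and cover, plus radiation, ventilation and conduction submodels.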

  15. Biogeochemical Reactive Transport Model of the Redox Zone Experiment of the Äspö Hard Rock Laboratory in Sweden

    International Nuclear Information System (INIS)

    Molinero-Huguet, Jorge; Samper-Calvete, F. Javier; Zhang, Guoxiang; Yang, Changbing

    2004-01-01

    Underground facilities are being operated by several countries around the world for performing research and demonstration of the safety of deep radioactive waste repositories. The Äspö Hard Rock Laboratory is one such facility launched and operated by the Swedish Nuclear Fuel and Waste Management Company where various in situ experiments have been performed in fractured granites. One such experiment is the redox zone experiment, which aimed at evaluating the effects of the construction of an access tunnel on the hydrochemical conditions of a fracture zone. Dilution of the initially saline groundwater by fresh recharge water is the dominant process controlling the hydrochemical evolution of most chemical species, except for bicarbonate and sulfate, which unexpectedly increase with time. We present a numerical model of water flow, reactive transport, and microbial processes for the redox zone experiment. This model provides a plausible quantitatively based explanation for the unexpected evolution of bicarbonate and sulfate, reproduces the breakthrough curves of other reactive species, and is consistent with previous hydrogeological and solute transport models

  16. Validated predictive modelling of the environmental resistome.

    Science.gov (United States)

    Amos, Gregory C A; Gozzard, Emma; Carter, Charlotte E; Mead, Andrew; Bowes, Mike J; Hawkey, Peter M; Zhang, Lihong; Singer, Andrew C; Gaze, William H; Wellington, Elizabeth M H

    2015-06-01

    Multi-drug-resistant bacteria pose a significant threat to public health. The role of the environment in the overall rise in antibiotic-resistant infections and risk to humans is largely unknown. This study aimed to evaluate drivers of antibiotic-resistance levels across the River Thames catchment, model key biotic, spatial and chemical variables and produce predictive models for future risk assessment. Sediment samples from 13 sites across the River Thames basin were taken at four time points across 2011 and 2012. Samples were analysed for class 1 integron prevalence and enumeration of third-generation cephalosporin-resistant bacteria. Class 1 integron prevalence was validated as a molecular marker of antibiotic resistance; levels of resistance showed significant geospatial and temporal variation. The main explanatory variables of resistance levels at each sample site were the number, proximity, size and type of surrounding wastewater-treatment plants. Model 1 revealed treatment plants accounted for 49.5% of the variance in resistance levels. Other contributing factors were extent of different surrounding land cover types (for example, Neutral Grassland), temporal patterns and prior rainfall; when modelling all variables the resulting model (Model 2) could explain 82.9% of variations in resistance levels in the whole catchment. Chemical analyses correlated with key indicators of treatment plant effluent and a model (Model 3) was generated based on water quality parameters (contaminant and macro- and micro-nutrient levels). Model 2 was beta tested on independent sites and explained over 78% of the variation in integron prevalence showing a significant predictive ability. We believe all models in this study are highly useful tools for informing and prioritising mitigation strategies to reduce the environmental resistome.
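The "percentage of variance explained" figures quoted for Models 1-3 in this record correspond to the coefficient of determination. A generic sketch with made-up data, not the study's variables:

```python
import numpy as np

def explained_variance(y, y_pred):
    """Coefficient of determination R^2: fraction of the variance in y
    captured by the model's predictions."""
    y, y_pred = np.asarray(y, float), np.asarray(y_pred, float)
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Invented data: resistance marker level vs. one explanatory variable
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.11, 0.19, 0.32, 0.38, 0.52])
slope, intercept = np.polyfit(x, y, 1)             # simple linear model
r2 = explained_variance(y, slope * x + intercept)  # fraction of variance explained
```

Testing a fitted model on independent sites, as done for Model 2, guards against the explained-variance figure being inflated by overfitting to the calibration data.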

  17. Estimation of p,p'-DDT degradation in soil by modeling and constraining hydrological and biogeochemical controls.

    Science.gov (United States)

    Sanka, Ondrej; Kalina, Jiri; Lin, Yan; Deutscher, Jan; Futter, Martyn; Butterfield, Dan; Melymuk, Lisa; Brabec, Karel; Nizzetto, Luca

    2018-08-01

    Despite not being used for decades in most countries, DDT remains ubiquitous in soils due to its persistence and intense past usage. Because of this it is still a pollutant of high global concern. Assessing long term dissipation of DDT from this reservoir is fundamental to understand future environmental and human exposure. Despite a large research effort, key properties controlling fate in soil (in particular, the degradation half-life (τ_soil)) are far from being fully quantified. This paper describes a case study in a large central European catchment where hundreds of measurements of p,p'-DDT concentrations in air, soil, river water and sediment are available for the last two decades. The goal was to deliver an integrated estimation of τ_soil by constraining a state-of-the-art hydrobiogeochemical-multimedia fate model of the catchment against the full body of empirical data available for this area. The INCA-Contaminants model was used for this scope. Good predictive performance against an (external) dataset of water and sediment concentrations was achieved with partitioning properties taken from the literature and τ_soil estimates obtained from forcing the model against empirical historical data of p,p'-DDT in the catchment multicompartments. This approach allowed estimation of p,p'-DDT degradation in soil after taking adequate consideration of losses due to runoff and volatilization. Estimated τ_soil ranged over 3000-3800 days. Degradation was the most important loss process, accounting on a yearly basis for more than 90% of the total dissipation. The total dissipation flux from the catchment soils was one order of magnitude higher than the total current atmospheric input estimated from atmospheric concentrations, suggesting that the bulk of p,p'-DDT currently being remobilized or lost is essentially that accumulated over two decades ago. Copyright © 2018 Elsevier Ltd. All rights reserved.
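A degradation half-life such as τ_soil translates into a first-order rate constant via k = ln 2 / τ. The sketch below is generic first-order kinetics, not the INCA-Contaminants implementation, and shows what a τ_soil of roughly 3000 days implies:

```python
import math

def decay_constant(half_life_days):
    """First-order rate constant k = ln(2) / half-life."""
    return math.log(2.0) / half_life_days

def fraction_remaining(t_days, half_life_days):
    """Fraction of the initial soil burden left after t_days,
    assuming simple first-order loss with no fresh input."""
    return math.exp(-decay_constant(half_life_days) * t_days)

one_half_life = fraction_remaining(3000.0, 3000.0)   # half the burden remains
two_decades = fraction_remaining(7300.0, 3000.0)     # well under a fifth remains
```

With a ~3000-day half-life, most of a two-decade-old burden has degraded, consistent with the record's point that current losses draw on the legacy reservoir rather than current inputs.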

  18. Validity of information security policy models

    Directory of Open Access Journals (Sweden)

    Joshua Onome Imoniana

    Full Text Available Validity is concerned with establishing evidence for the use of a method with a particular population. Thus, when we address the application of security policy models, we are concerned with the implementation of a certain policy, taking into consideration the standards required, through attribution of scores to every item in the research instrument. In today's globalized economic scenarios, the implementation of an information security policy in an information technology environment is a condition sine qua non for the strategic management process of any organization. On this topic, various studies present evidence that the responsibility for maintaining a policy rests primarily with the Chief Security Officer, who strives to keep technologies up to date in order to meet all-inclusive business continuity planning policies. For such a policy to be effective, it has to be fully embraced by the Chief Executive Officer. This study was developed with the purpose of validating specific theoretical models, whose designs were based on a literature review, by sampling 10 of the automobile industries located in the ABC region of Metropolitan São Paulo City. This sampling was based on the representativeness of these industries, particularly with regard to each one's implementation of information technology in the region. The study concludes by presenting evidence of the discriminant validity of four key dimensions of security policy: Physical Security, Logical Access Security, Administrative Security, and Legal & Environmental Security. Analysis of Cronbach's alpha for these security items attests not only that the capacity of these industries to implement security policies is indisputable, but also that the items involved correlate homogeneously with each other.

  19. Polarographic validation of chemical speciation models

    International Nuclear Information System (INIS)

    Duffield, J.R.; Jarratt, J.A.

    2001-01-01

    It is well established that the chemical speciation of an element in a given matrix, or system of matrices, is of fundamental importance in controlling the transport behaviour of the element. Therefore, to accurately understand and predict the transport of elements and compounds in the environment it is a requirement that both the identities and concentrations of trace element physico-chemical forms can be ascertained. These twin requirements present the analytical scientist with considerable challenges given the labile equilibria, the range of time scales (from nanoseconds to years) and the range of concentrations (ultra-trace to macro) that may be involved. As a result of this analytical variability, chemical equilibrium modelling has become recognised as an important predictive tool in chemical speciation analysis. However, this technique requires firm underpinning by the use of complementary experimental techniques for the validation of the predictions made. The work reported here has been undertaken with the primary aim of investigating possible methodologies that can be used for the validation of chemical speciation models. However, in approaching this aim, direct chemical speciation analyses have been made in their own right. Results will be reported and analysed for the iron(II)/iron(III)-citrate proton system (pH 2 to 10; total [Fe] = 3 mmol dm⁻³; total [citrate³⁻] = 10 mmol dm⁻³) in which equilibrium constants have been determined using glass electrode potentiometry, speciation is predicted using the PHREEQE computer code, and validation of predictions is achieved by determination of iron complexation and redox state with associated concentrations. (authors)

  20. Mechanistic site-based emulation of a global ocean biogeochemical model (MEDUSA 1.0) for parametric analysis and calibration: an application of the Marine Model Optimization Testbed (MarMOT 1.1)

    Directory of Open Access Journals (Sweden)

    J. C. P. Hemmings

    2015-03-01

    Full Text Available Biogeochemical ocean circulation models used to investigate the role of plankton ecosystems in global change rely on adjustable parameters to capture the dominant biogeochemical dynamics of a complex biological system. In principle, optimal parameter values can be estimated by fitting models to observational data, including satellite ocean colour products such as chlorophyll that achieve good spatial and temporal coverage of the surface ocean. However, comprehensive parametric analyses require large ensemble experiments that are computationally infeasible with global 3-D simulations. Site-based simulations provide an efficient alternative but can only be used to make reliable inferences about global model performance if robust quantitative descriptions of their relationships with the corresponding 3-D simulations can be established. The feasibility of establishing such a relationship is investigated for an intermediate complexity biogeochemistry model (MEDUSA) coupled with a widely used global ocean model (NEMO). A site-based mechanistic emulator is constructed for surface chlorophyll output from this target model as a function of model parameters. The emulator comprises an array of 1-D simulators and a statistical quantification of the uncertainty in their predictions. The unknown parameter-dependent biogeochemical environment, in terms of initial tracer concentrations and lateral flux information required by the simulators, is a significant source of uncertainty. It is approximated by a mean environment derived from a small ensemble of 3-D simulations representing variability of the target model behaviour over the parameter space of interest. The performance of two alternative uncertainty quantification schemes is examined: a direct method based on comparisons between simulator output and a sample of known target model "truths" and an indirect method that is only partially reliant on knowledge of the target model output.
In general, chlorophyll

  1. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

    Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Therefore, these methods can only predict the most probable faulty sensors, which are subject to the initial probabilities defined for the failures. The method developed in this work is based on a model-based approach and provides the faulty sensors (if any), which can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundancy relations (ARRs).

  2. Assessment model validity document FARF31

    International Nuclear Information System (INIS)

    Elert, Mark; Gylling, Björn; Lindgren, Maria

    2004-08-01

    The prime goal of model validation is to build confidence in the model concept and that the model is fit for its intended purpose. In other words: Does the model predict transport in fractured rock adequately to be used in repository performance assessments? Are the results reasonable for the type of modelling tasks the model is designed for? Commonly, in performance assessments a large number of realisations of flow and transport is made to cover the associated uncertainties. Thus, the flow and transport including radioactive chain decay are preferably calculated in the same model framework. A rather sophisticated concept is necessary to be able to model flow and radionuclide transport in the near field and far field of a deep repository, also including radioactive chain decay. In order to avoid excessively long computational times there is a need for well-based simplifications. For this reason, the far field code FARF31 is made relatively simple, and calculates transport by using averaged entities to represent the most important processes. FARF31 has been shown to be suitable for the performance assessments within the SKB studies, e.g. SR 97. Among the advantages are that it is a fast, simple and robust code, which enables handling of many realisations with wide spread in parameters in combination with chain decay of radionuclides. Being a component in the model chain PROPER, it is easy to assign statistical distributions to the input parameters. Due to the formulation of the advection-dispersion equation in FARF31 it is possible to perform the groundwater flow calculations separately. The basis for the modelling is a stream tube, i.e. a volume of rock including fractures with flowing water, with the walls of the imaginary stream tube defined by streamlines. The transport within the stream tube is described using a dual porosity continuum approach, where it is assumed that rock can be divided into two distinct domains with different types of porosity.
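FARF31 couples stream-tube transport with radioactive chain decay; the decay part alone reduces to a linear chain of first-order ODEs. A toy explicit-Euler step is sketched below (codes like FARF31 use more sophisticated schemes; the nuclides and decay constants here are invented), assuming each nuclide i decays into nuclide i+1:

```python
import numpy as np

def chain_decay_step(n, lam, dt):
    """Advance the inventories n of a linear decay chain by one time step dt.
    lam[i] is the decay constant of nuclide i; its decay feeds nuclide i+1.
    Explicit Euler sketch: dt must be small relative to 1/max(lam)."""
    n, lam = np.asarray(n, float), np.asarray(lam, float)
    dndt = -lam * n                    # loss of each nuclide to decay
    dndt[1:] += lam[:-1] * n[:-1]      # gain from the parent's decay
    return n + dt * dndt

# Parent -> daughter -> stable end member (invented decay constants, per year)
n = np.array([1.0, 0.0, 0.0])
lam = np.array([1e-3, 5e-4, 0.0])
for _ in range(1000):
    n = chain_decay_step(n, lam, dt=1.0)
# with a stable end member, total inventory is conserved
```

In the full code this source/sink term sits inside the advection-dispersion equation for the stream tube, with matrix diffusion exchanging mass between the two porosity domains.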

  3. Modelling and validation of electromechanical shock absorbers

    Science.gov (United States)

    Tonoli, Andrea; Amati, Nicola; Girardello Detoni, Joaquim; Galluzzi, Renato; Gasparin, Enrico

    2013-08-01

    Electromechanical vehicle suspension systems represent a promising substitute to conventional hydraulic solutions. However, the design of electromechanical devices that are able to supply high damping forces without exceeding geometric dimension and mass constraints is a difficult task. All these challenges meet in off-road vehicle suspension systems, where the power density of the dampers is a crucial parameter. In this context, the present paper outlines a particular shock absorber configuration where a suitable electric machine and a transmission mechanism are utilised to meet off-road vehicle requirements. A dynamic model is used to represent the device. Subsequently, experimental tests are performed on an actual prototype to verify the functionality of the damper and validate the proposed model.

  4. Genome-Enabled Modeling of Biogeochemical Processes Predicts Metabolic Dependencies that Connect the Relative Fitness of Microbial Functional Guilds

    Science.gov (United States)

    Brodie, E.; King, E.; Molins, S.; Karaoz, U.; Steefel, C. I.; Banfield, J. F.; Beller, H. R.; Anantharaman, K.; Ligocki, T. J.; Trebotich, D.

    2015-12-01

    Pore-scale processes mediated by microorganisms underlie a range of critical ecosystem services, regulating carbon stability, nutrient flux, and the purification of water. Advances in cultivation-independent approaches now provide us with the ability to reconstruct thousands of genomes from microbial populations from which functional roles may be assigned. With this capability to reveal microbial metabolic potential, the next step is to put these microbes back where they belong to interact with their natural environment, i.e. the pore scale. At this scale, microorganisms communicate, cooperate and compete across their fitness landscapes with communities emerging that feedback on the physical and chemical properties of their environment, ultimately altering the fitness landscape and selecting for new microbial communities with new properties and so on. We have developed a trait-based model of microbial activity that simulates coupled functional guilds that are parameterized with unique combinations of traits that govern fitness under dynamic conditions. Using a reactive transport framework, we simulate the thermodynamics of coupled electron donor-acceptor reactions to predict energy available for cellular maintenance, respiration, biomass development, and enzyme production. From metagenomics, we directly estimate some trait values related to growth and identify the linkage of key traits associated with respiration and fermentation, macromolecule depolymerizing enzymes, and other key functions such as nitrogen fixation. Our simulations were carried out to explore abiotic controls on community emergence such as seasonally fluctuating water table regimes across floodplain organic matter hotspots. Simulations and metagenomic/metatranscriptomic observations highlighted the many dependencies connecting the relative fitness of functional guilds and the importance of chemolithoautotrophic lifestyles. 
Using an X-Ray microCT-derived soil microaggregate physical model combined

  5. A hybrid ensemble-OI Kalman filter for efficient data assimilation into a 3-D biogeochemical model of the Mediterranean

    KAUST Repository

    Tsiaras, Kostas P.

    2017-04-20

    A hybrid ensemble data assimilation scheme (HYBRID), combining a flow-dependent with a static background covariance, was developed and implemented for assimilating satellite (SeaWiFS) Chl-a data into a marine ecosystem model of the Mediterranean. The performance of HYBRID was assessed against a model free-run, the ensemble-based singular evolutive interpolated Kalman (SEIK) filter and its variant with static covariance (SFEK), with regard to the assimilated variable (Chl-a) and non-assimilated variables (dissolved inorganic nutrients). HYBRID was found more efficient than both SEIK and SFEK, reducing the Chl-a error by more than 40% in most areas, as compared to the free-run. Data assimilation had a positive overall impact on nutrients, except for a deterioration of the nitrate simulation by SEIK in the most productive area (Adriatic). This was related to SEIK's pronounced update in this area and the phytoplankton limitation on phosphate, which led to a build-up of excess nitrates. SEIK was found more efficient in productive and variable areas, where its ensemble exhibited important spread. SFEK had an effect mostly on Chl-a, performing better than SEIK in less dynamic areas, adequately described by the dominant modes of its static covariance. HYBRID performed well in all areas, due to its “blended” covariance. Its flow-dependent component appears to track changes in the system dynamics, while its static covariance helps maintain sufficient spread in the forecast. HYBRID sensitivity experiments showed that an increased contribution from the flow-dependent covariance results in a deterioration of nitrates, similar to SEIK, while the improvement of HYBRID with increasing flow-dependent ensemble size quickly levels off.

  6. A hybrid ensemble-OI Kalman filter for efficient data assimilation into a 3-D biogeochemical model of the Mediterranean

    Science.gov (United States)

    Tsiaras, Kostas P.; Hoteit, Ibrahim; Kalaroni, Sofia; Petihakis, George; Triantafyllou, George

    2017-06-01

    A hybrid ensemble data assimilation scheme (HYBRID), combining a flow-dependent with a static background covariance, was developed and implemented for assimilating satellite (SeaWiFS) Chl-a data into a marine ecosystem model of the Mediterranean. The performance of HYBRID was assessed against a model free-run, the ensemble-based singular evolutive interpolated Kalman (SEIK) filter and its variant with static covariance (SFEK), with regard to the assimilated variable (Chl-a) and non-assimilated variables (dissolved inorganic nutrients). HYBRID was found to be more efficient than both SEIK and SFEK, reducing the Chl-a error by more than 40% in most areas compared to the free-run. Data assimilation had a positive overall impact on nutrients, except for a deterioration of the nitrate simulation by SEIK in the most productive area (the Adriatic). This was related to SEIK's pronounced update in this area and to phytoplankton limitation by phosphate, which led to a build-up of excess nitrates. SEIK was more efficient in productive and variable areas, where its ensemble exhibited substantial spread. SFEK affected mostly Chl-a, performing better than SEIK in less dynamic areas that are adequately described by the dominant modes of its static covariance. HYBRID performed well in all areas, owing to its "blended" covariance: its flow-dependent component appears to track changes in the system dynamics, while its static covariance helps maintain sufficient spread in the forecast. HYBRID sensitivity experiments showed that an increased contribution from the flow-dependent covariance results in a deterioration of nitrates, similar to SEIK, while the improvement of HYBRID with increasing flow-dependent ensemble size quickly levels off.

  7. A hybrid ensemble-OI Kalman filter for efficient data assimilation into a 3-D biogeochemical model of the Mediterranean

    KAUST Repository

    Tsiaras, Kostas P.; Hoteit, Ibrahim; Kalaroni, Sofia; Petihakis, George; Triantafyllou, George

    2017-01-01

    A hybrid ensemble data assimilation scheme (HYBRID), combining a flow-dependent with a static background covariance, was developed and implemented for assimilating satellite (SeaWiFS) Chl-a data into a marine ecosystem model of the Mediterranean. The performance of HYBRID was assessed against a model free-run, the ensemble-based singular evolutive interpolated Kalman (SEIK) filter and its variant with static covariance (SFEK), with regard to the assimilated variable (Chl-a) and non-assimilated variables (dissolved inorganic nutrients). HYBRID was found to be more efficient than both SEIK and SFEK, reducing the Chl-a error by more than 40% in most areas compared to the free-run. Data assimilation had a positive overall impact on nutrients, except for a deterioration of the nitrate simulation by SEIK in the most productive area (the Adriatic). This was related to SEIK's pronounced update in this area and to phytoplankton limitation by phosphate, which led to a build-up of excess nitrates. SEIK was more efficient in productive and variable areas, where its ensemble exhibited substantial spread. SFEK affected mostly Chl-a, performing better than SEIK in less dynamic areas that are adequately described by the dominant modes of its static covariance. HYBRID performed well in all areas, owing to its “blended” covariance: its flow-dependent component appears to track changes in the system dynamics, while its static covariance helps maintain sufficient spread in the forecast. HYBRID sensitivity experiments showed that an increased contribution from the flow-dependent covariance results in a deterioration of nitrates, similar to SEIK, while the improvement of HYBRID with increasing flow-dependent ensemble size quickly levels off.

  8. Atmospheric corrosion: statistical validation of models

    International Nuclear Information System (INIS)

    Diaz, V.; Martinez-Luaces, V.; Guineo-Cobs, G.

    2003-01-01

    In this paper we discuss two different methods for the validation of regression models, applied to corrosion data: one based on the correlation coefficient and the other on the statistical test of lack of fit. Both methods are used here to analyse the fit of the bilogarithmic model used to predict corrosion of very-low-carbon steel substrates in rural and urban-industrial atmospheres in Uruguay. Results for parameters A and n of the bilogarithmic model are reported here. For this purpose, all repeated values were used instead of the usual average values. Modelling is carried out using experimental data corresponding to steel substrates under the same initial meteorological conditions (in fact, they were placed in the rack at the same time). Results of the correlation coefficient are compared with the lack-of-fit test at two significance levels (α=0.01 and α=0.05). Unexpected differences between them are explained and, finally, it is possible to conclude that, at least in the studied atmospheres, the bilogarithmic model does not properly fit the experimental data. (Author) 18 refs
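The lack-of-fit test exploits exactly the replicate measurements that the authors retain: residual error around the fitted line is split into pure error (replicate scatter) and lack-of-fit. A self-contained sketch with invented corrosion data, fitting the bilogarithmic model as log C = log A + n log t:

```python
import math

def lack_of_fit_F(data):
    """Fit log C = log A + n log t by ordinary least squares and compute
    the lack-of-fit F statistic using replicate measurements at each
    exposure time. Data below are invented for illustration."""
    pts = [(math.log(t), math.log(c)) for t, cs in data for c in cs]
    N = len(pts)
    mx = sum(x for x, _ in pts) / N
    my = sum(y for _, y in pts) / N
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    n_exp = sxy / sxx                        # exponent n
    logA = my - n_exp * mx                   # intercept log A
    # Pure-error SS: replicate scatter around each time's own mean
    sse_pe = sum(sum((math.log(c) - sum(map(math.log, cs)) / len(cs)) ** 2
                     for c in cs) for _, cs in data)
    # Total residual SS around the fitted line; lack-of-fit is the excess
    sse = sum((y - (logA + n_exp * x)) ** 2 for x, y in pts)
    m = len(data)                            # number of distinct times
    F = ((sse - sse_pe) / (m - 2)) / (sse_pe / (N - m))
    return F, math.exp(logA), n_exp

# Hypothetical data: exposure time (months) -> replicate mass losses
data = [(1, [2.1, 2.0]), (3, [4.2, 4.4]), (6, [6.1, 5.9]), (12, [9.0, 9.3])]
F, A, n = lack_of_fit_F(data)
```

The computed F is then compared with the F(m−2, N−m) critical value at the chosen significance level; a large F says the bilogarithmic line misfits by more than replicate noise can explain.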

  9. Impact of vegetation and ecosystems on chlorine(-36) cycling and its modeling: from simplified approaches towards more complex biogeochemical tools

    Science.gov (United States)

    Thiry, Yves; Redon, Paul-Olivier; Gustafsson, Malin; Marang, Laura; Bastviken, David

    2013-04-01

    Chlorine is very soluble, and at a global scale chloride (Cl-) is the dominant form. Because of its high mobility, chlorine is usually perceived as a good conservative tracer in hydrological studies and, by analogy, as little reactive in the biosphere. Since 36Cl can be considered to have the same behaviour as stable Cl, a good knowledge of the chlorine distribution between compartments of terrestrial ecosystems is sufficient to calibrate a specific activity model, which supposes rapid dilution of 36Cl within the large pool of stable Cl and isotopic equilibrium between compartments. By assuming 36Cl redistribution similar to that of stable Cl at steady state, specific activity models are interesting simplified tools for regulatory purposes in environmental safety assessment, especially in the case of potential long-term chronic contamination of the agricultural food chain (IAEA, 2010). In many other more complex scenarios (accidental acute release, intermediate time frames, contrasted natural ecosystems), new information and tools are necessary for improving (radio-)ecological realism, which entails a non-conservative behaviour of chlorine. Indeed, the observed dynamics of chlorine in terrestrial ecosystems are far from a simple equilibrium, notably because of natural processes of soil organic matter (SOM) chlorination, mainly occurring in surface soils (Öberg, 1998) and mediated to a large extent by microbial activities (Bastviken et al., 2007). Our recent studies have strengthened the view that an organic cycle for chlorine should now be recognized, in addition to its inorganic cycle. Major results showed that: organochlorine (Clorg) formation occurs in all types of soils and ecosystems (culture, pasture, forest), leading to an average Clorg fraction of about 80% of the total Cl pool in soil (Redon et al., 2012); and that chlorination in more organic soils over time leads to a larger Clorg pool and, in turn, to a possibly high internal supply of inorganic chlorine (Clin) upon dechlorination.
(Gustafsson et
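Under its isotopic-equilibrium assumption, the specific activity model described above reduces to a single proportionality: each compartment's 36Cl activity is its stable-Cl pool times one shared specific activity. A minimal sketch with invented pool sizes (the ~80% organic share mirrors the soil partitioning reported in the abstract):

```python
def cl36_activity(cl_pools_g, specific_activity_bq_per_g_cl):
    """Specific activity model sketch: assuming rapid dilution of 36Cl in
    the stable-Cl pool and isotopic equilibrium between compartments, each
    compartment's 36Cl activity (Bq) is its stable-Cl mass (g) times one
    shared specific activity (Bq per g Cl). Values below are invented."""
    return {name: mass * specific_activity_bq_per_g_cl
            for name, mass in cl_pools_g.items()}

# Invented stable-Cl inventories (g) for a soil-plant system; the organic
# pool is ~80% of total soil Cl, as in the partitioning reported above.
pools = {"soil_inorganic": 20.0, "soil_organic": 80.0, "plant": 5.0}
activities = cl36_activity(pools, specific_activity_bq_per_g_cl=0.5)
```

The non-conservative chlorination/dechlorination dynamics discussed in the abstract are precisely what this one-parameter picture cannot capture, which motivates the more complex biogeochemical tools.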

  10. Modeling biogeochemical processes in sediments from the Rhône River prodelta area (NW Mediterranean Sea)

    Directory of Open Access Journals (Sweden)

    L. Pastor

    2011-05-01

    In situ oxygen microprofiles, sediment organic carbon content, and pore-water concentrations of nitrate, ammonium, iron, manganese, and sulfides obtained in sediments from the Rhône River prodelta and its adjacent continental shelf were used to constrain a numerical diagenetic model. Results showed that (1) the organic matter from the Rhône River is composed of a fraction of fresh material associated with high first-order degradation rate constants (11–33 yr−1); (2) the burial efficiency (burial/input ratio) in the Rhône prodelta (within 3 km of the river outlet) can be up to 80%, and decreases to ~20% on the adjacent continental shelf 10–15 km further offshore; (3) there is a large contribution of anoxic processes to total mineralization in sediments near the river mouth, certainly due to large inputs of fresh organic material combined with high sedimentation rates; (4) diagenetic by-products originally produced during anoxic organic matter mineralization are almost entirely precipitated (>97%) and buried in the sediment, which leads to (5) a low contribution of the re-oxidation of reduced products to total oxygen consumption. Consequently, total carbon mineralization rates based on oxygen consumption rates and Redfield stoichiometry can be largely underestimated in such river-dominated ocean margin (RiOMar) environments.
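The quoted first-order rate constants translate into very short lifetimes for the fresh organic fraction; a quick check (plain exponential decay, a sketch rather than the full diagenetic model):

```python
import math

def remaining_fraction(k_per_yr, t_yr):
    """First-order organic-matter decay, G(t) = G0 * exp(-k t). With rate
    constants of 11-33 yr^-1, as in the abstract, the fresh fraction is
    consumed within weeks to months. Numbers below are illustrative."""
    return math.exp(-k_per_yr * t_yr)

# With the low end, k = 11 yr^-1: half-life in days and the fraction of
# fresh material left after one month of burial.
half_life_days = math.log(2) / 11 * 365.0
frac_after_1_month = remaining_fraction(11, 1 / 12)
```

With k = 11 yr⁻¹ the half-life is about three weeks, which is why burial efficiency hinges so strongly on sedimentation rate close to the river mouth.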

  11. SDG and qualitative trend based model multiple scale validation

    Science.gov (United States)

    Gao, Dong; Xu, Xin; Yin, Jianjin; Zhang, Hongyu; Zhang, Beike

    2017-09-01

    Verification, Validation and Accreditation (VV&A) is a key technology of simulation and modelling. Traditional model validation methods suffer from weak completeness, operate at a single scale, and depend on human experience. A multiple-scale validation method based on the SDG (Signed Directed Graph) and qualitative trends is therefore proposed. First, the SDG model is built and qualitative trends are added to it. Complete testing scenarios are then produced by positive inference. The multiple-scale validation is carried out by comparing the testing scenarios with the outputs of the simulation model at different scales. Finally, the method's effectiveness is demonstrated by validating a reactor model.
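The positive-inference step can be illustrated with a minimal signed-directed-graph propagation: nodes are process variables, edges carry +1/−1 signs, and a fault's qualitative deviation is propagated forward to produce a testing scenario. The reactor-flavoured variable names below are invented, not from the paper:

```python
def propagate(edges, start, deviation):
    """Propagate a qualitative deviation (+1 = high, -1 = low) from the
    start node through a signed directed graph, multiplying by each edge
    sign ("positive inference" sketch). Returns the qualitative state of
    every reachable node."""
    state = {start: deviation}
    frontier = [start]
    while frontier:
        node = frontier.pop()
        for (src, dst), sign in edges.items():
            if src == node and dst not in state:
                state[dst] = state[src] * sign
                frontier.append(dst)
    return state

# Toy reactor SDG: coolant flow down -> temperature up -> pressure up
edges = {("flow", "temp"): -1, ("temp", "pressure"): +1}
scenario = propagate(edges, "flow", -1)
```

Each such scenario (here: flow low, temperature high, pressure high) can then be compared against simulation-model outputs at the chosen scales.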

  12. Quantification of terrestrial ecosystem carbon dynamics in the conterminous United States combining a process-based biogeochemical model and MODIS and AmeriFlux data

    Directory of Open Access Journals (Sweden)

    M. Chen

    2011-09-01

    Satellite remote sensing provides continuous temporal and spatial information on terrestrial ecosystems. Using these remote sensing data together with eddy flux measurements and biogeochemical models, such as the Terrestrial Ecosystem Model (TEM), should provide a more adequate quantification of the carbon dynamics of terrestrial ecosystems. Here we use Moderate Resolution Imaging Spectroradiometer (MODIS) Enhanced Vegetation Index (EVI), Land Surface Water Index (LSWI) and carbon flux data from AmeriFlux to conduct such a study. We first modify the gross primary production (GPP) modeling in TEM by incorporating EVI and LSWI to account for the effects of changes in canopy photosynthetic capacity, phenology and water stress. Second, we parameterize and verify the new version of TEM with eddy flux data. We then apply the model to the conterminous United States over the period 2000–2005 at a 0.05° × 0.05° spatial resolution. We find that the new version of TEM improves over the previous version and generally captures the expected temporal and spatial patterns of regional carbon dynamics. We estimate that regional GPP is between 7.02 and 7.78 Pg C yr−1, net primary production (NPP) ranges from 3.81 to 4.38 Pg C yr−1, and net ecosystem production (NEP) varies within 0.08–0.73 Pg C yr−1 over the period 2000–2005 for the conterminous United States. The uncertainty due to parameterization is 0.34, 0.65 and 0.18 Pg C yr−1 for the regional estimates of GPP, NPP and NEP, respectively. The effects of extreme climate and disturbances, such as the severe drought in 2002 and the destructive Hurricane Katrina in 2005, were captured by the model. Our study provides a new, independent and more adequate measure of carbon fluxes for the conterminous United States, which will benefit studies of carbon-climate feedback and facilitate policy-making on carbon management and climate.
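Coupling EVI and LSWI into GPP typically follows a light-use-efficiency form. The sketch below is in the spirit of EVI/LSWI-driven formulations (e.g. the Vegetation Photosynthesis Model); the exact functional forms and parameter values used in the modified TEM may differ, and everything here is illustrative:

```python
def gpp_lue(par, evi, lswi, lswi_max=0.6, eps0=0.05, t_scalar=1.0):
    """Light-use-efficiency GPP sketch:
        GPP = eps0 * Tscalar * Wscalar * EVI * PAR,
    with the water-stress scalar Wscalar = (1 + LSWI) / (1 + LSWI_max).
    eps0, lswi_max and the inputs are invented, not the paper's values."""
    w_scalar = (1.0 + lswi) / (1.0 + lswi_max)
    return eps0 * t_scalar * w_scalar * evi * par

# Same canopy (EVI) and light (PAR), wet vs. water-stressed conditions
gpp_wet = gpp_lue(par=40.0, evi=0.5, lswi=0.6)   # no water stress
gpp_dry = gpp_lue(par=40.0, evi=0.5, lswi=0.1)   # water-stressed
```

The EVI factor carries the canopy photosynthetic capacity and phenology signal, while LSWI depresses GPP under water stress, which is the qualitative behaviour the abstract describes.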

  13. Unit testing, model validation, and biological simulation.

    Science.gov (United States)

    Sarma, Gopal P; Jacobs, Travis W; Watts, Mark D; Ghayoomie, S Vahid; Larson, Stephen D; Gerkin, Richard C

    2016-01-01

    The growth of the software industry has gone hand in hand with the development of tools and cultural practices for ensuring the reliability of complex pieces of software. These tools and practices are now acknowledged to be essential to the management of modern software. As computational models and methods have become increasingly common in the biological sciences, it is important to examine how these practices can accelerate biological software development and improve research quality. In this article, we give a focused case study of our experience with the practices of unit testing and test-driven development in OpenWorm, an open-science project aimed at modeling Caenorhabditis elegans. We identify and discuss the challenges of incorporating test-driven development into a heterogeneous, data-driven project, as well as the role of model validation tests, a category of tests unique to software which expresses scientific models.
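The distinction between a conventional unit test and a model validation test can be made concrete with a small unittest-style sketch. The model component and the "measured" tolerance below are invented for illustration, not OpenWorm code:

```python
import unittest

def membrane_time_constant(r_m, c_m):
    """Toy model component: tau = R_m * C_m (hypothetical example)."""
    return r_m * c_m

class TestModel(unittest.TestCase):
    def test_units_compose(self):
        # Conventional unit test: is the software built right?
        self.assertAlmostEqual(membrane_time_constant(2.0, 3.0), 6.0)

    def test_tau_within_observed_range(self):
        # Model validation test: does the model output fall inside an
        # experimentally observed range? (range invented here)
        tau = membrane_time_constant(r_m=1.2e8, c_m=1.0e-10)  # ohm, farad
        self.assertTrue(5e-3 < tau < 30e-3)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestModel)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The first test can never fail unless the code is broken; the second can fail even for correct code, signalling that the *model* disagrees with observation, which is the category of test the article singles out.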

  14. Skill assessment of the coupled physical-biogeochemical operational Mediterranean Forecasting System

    Science.gov (United States)

    Cossarini, Gianpiero; Clementi, Emanuela; Salon, Stefano; Grandi, Alessandro; Bolzon, Giorgio; Solidoro, Cosimo

    2016-04-01

    The Mediterranean Monitoring and Forecasting Centre (Med-MFC) is one of the regional production centres of the European Marine Environment Monitoring Service (CMEMS-Copernicus). Med-MFC operationally manages a suite of numerical model systems (3DVAR-NEMO-WW3 and 3DVAR-OGSTM-BFM) that provides gridded datasets of physical and biogeochemical variables for the Mediterranean marine environment with a horizontal resolution of about 6.5 km. At the present stage, the operational Med-MFC produces ten-day forecasts: daily for physical parameters and bi-weekly for biogeochemical variables. The validation of the coupled model system and the estimation of the accuracy of model products are key issues in ensuring reliable information for users and downstream services. Product quality activities at Med-MFC consist of two levels of validation and skill analysis procedures. Pre-operational qualification activities focus on testing the improvement in quality of a new release of the model system and rely on past simulations and historical data. Near-real-time (NRT) validation activities then aim at the routine, on-line skill assessment of the model forecast and rely on the NRT available observations. The Med-MFC validation framework uses both independent data (i.e. Bio-Argo float data; in-situ mooring and vessel data of oxygen, nutrients and chlorophyll; moored buoys, tide-gauges and ADCP data of temperature, salinity, sea level and velocity) and semi-independent data (i.e. data already used for assimilation, such as satellite chlorophyll, satellite SLA and SST, and in situ vertical profiles of temperature and salinity from XBT, Argo and gliders). We give evidence that different variables (e.g. CMEMS products) can be validated at different levels (i.e. at the forecast level or at the level of model consistency) and at different spatial and temporal scales.
The fundamental physical parameters temperature, salinity and sea level are routinely validated on a daily, weekly and quarterly basis
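In practice, NRT skill assessment boils down to computing a few statistics on matched model-observation pairs. A minimal sketch with invented numbers (not Med-MFC data):

```python
import math

def skill_scores(model, obs):
    """Basic forecast skill metrics against matched observations:
    bias, RMSE, and Pearson correlation."""
    n = len(model)
    bias = sum(m - o for m, o in zip(model, obs)) / n
    rmse = math.sqrt(sum((m - o) ** 2 for m, o in zip(model, obs)) / n)
    mm, mo = sum(model) / n, sum(obs) / n
    cov = sum((m - mm) * (o - mo) for m, o in zip(model, obs))
    corr = cov / math.sqrt(sum((m - mm) ** 2 for m in model)
                           * sum((o - mo) ** 2 for o in obs))
    return bias, rmse, corr

# Hypothetical surface chlorophyll (mg m^-3): forecast vs. float matchups
fc = [0.12, 0.10, 0.08, 0.09, 0.11]
ob = [0.10, 0.11, 0.07, 0.10, 0.12]
bias, rmse, corr = skill_scores(fc, ob)
```

The same statistics can be aggregated per variable, per sub-basin and per averaging window (daily, weekly, quarterly), which is how validation "at different spatial and temporal scales" is typically reported.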

  15. MAAP4 model and validation status

    International Nuclear Information System (INIS)

    Plys, M.G.; Paik, C.Y.; Henry, R.E.; Wu, Chunder; Suh, K.Y.; Sung Jin Lee; McCartney, M.A.; Wang, Zhe

    1993-01-01

    The MAAP 4 code for integrated severe accident analysis is intended to be used for Level 1 and Level 2 probabilistic safety assessment and severe accident management evaluations for current and advanced light water reactors. MAAP 4 can be used to determine which accidents lead to fuel damage and which are successfully terminated before or after fuel damage (a Level 1 application). It can also be used to determine which sequences result in fission product release to the environment and to provide the time history of such releases (a Level 2 application). The MAAP 4 thermal-hydraulic and fission product models and their validation are discussed here. This code is the newest version of MAAP offered by the Electric Power Research Institute (EPRI) and contains substantial mechanistic improvements over its predecessor, MAAP 3.0B

  16. The Southern Ocean biogeochemical divide.

    Science.gov (United States)

    Marinov, I; Gnanadesikan, A; Toggweiler, J R; Sarmiento, J L

    2006-06-22

    Modelling studies have demonstrated that the nutrient and carbon cycles in the Southern Ocean play a central role in setting the air-sea balance of CO2 and global biological production. Box model studies first pointed out that an increase in nutrient utilization in the high latitudes results in a strong decrease in the atmospheric carbon dioxide partial pressure (pCO2). This early research led to two important ideas: high latitude regions are more important in determining atmospheric pCO2 than low latitudes, despite their much smaller area, and nutrient utilization and atmospheric pCO2 are tightly linked. Subsequent general circulation model simulations show that the Southern Ocean is the most important high latitude region in controlling pre-industrial atmospheric CO2 because it serves as a lid to a larger volume of the deep ocean. Other studies point out the crucial role of the Southern Ocean in the uptake and storage of anthropogenic carbon dioxide and in controlling global biological production. Here we probe the system to determine whether certain regions of the Southern Ocean are more critical than others for air-sea CO2 balance and the biological export production, by increasing surface nutrient drawdown in an ocean general circulation model. We demonstrate that atmospheric CO2 and global biological export production are controlled by different regions of the Southern Ocean. The air-sea balance of carbon dioxide is controlled mainly by the biological pump and circulation in the Antarctic deep-water formation region, whereas global export production is controlled mainly by the biological pump and circulation in the Subantarctic intermediate and mode water formation region. The existence of this biogeochemical divide separating the Antarctic from the Subantarctic suggests that it may be possible for climate change or human intervention to modify one of these without greatly altering the other.
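The tight nutrient-utilization/pCO2 link from the early box-model work can be caricatured with a deliberately crude toy relation; the endpoint pCO2 values below are invented for illustration, not model output:

```python
def atm_pco2(utilization, pco2_no_pump=420.0, pco2_full_pump=160.0):
    """Toy illustration of the high-latitude nutrient-utilization/pCO2
    link: linearly interpolate atmospheric pCO2 (µatm) between a 'dead'
    biological pump (utilization = 0) and a perfectly efficient one
    (utilization = 1). Endpoint values are invented assumptions."""
    return pco2_no_pump - utilization * (pco2_no_pump - pco2_full_pump)

# Utilization here stands for 1 - preformed/total nutrient in sinking
# high-latitude surface water.
pco2_low_use = atm_pco2(utilization=0.2)
pco2_high_use = atm_pco2(utilization=0.8)
```

Real GCM experiments, as in the abstract, go further by asking *where* the drawdown happens; the toy relation above has no geography at all, which is exactly the distinction between the Antarctic and Subantarctic regimes that the paper draws.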

  17. Validation of A Global Hydrological Model

    Science.gov (United States)

    Doell, P.; Lehner, B.; Kaspar, F.; Vassolo, S.

    due to the precipitation measurement errors. Even though the explicit modeling of wetlands and lakes leads to a much improved modeling of both the vertical water balance and the lateral transport of water, not enough information is included in WGHM to accurately capture the hydrology of these water bodies. Certainly, the reliability of model results is highest at the locations at which WGHM was calibrated. The validation indicates that reliability for cells inside calibrated basins is satisfactory if the basin is relatively homogeneous. Analyses of the few available stations outside of calibrated basins indicate a reasonably high model reliability, particularly in humid regions.

  18. Developing a model for validation and prediction of bank customer ...

    African Journals Online (AJOL)

    Credit risk is the most important risk faced by banks. The main approaches banks use to reduce credit risk are correct validation using the final status and the parameters of the validation model. High levels of bank reserves and lost or outstanding bank facilities indicate the lack of appropriate validation models in the banking network.

  19. A proposed best practice model validation framework for banks

    Directory of Open Access Journals (Sweden)

    Pieter J. (Riaan) de Jongh

    2017-06-01

    Background: With the increasing use of complex quantitative models in applications throughout the financial world, model risk has become a major concern. The credit crisis of 2008–2009 provoked added concern about the use of models in finance. Measuring and managing model risk has subsequently come under scrutiny from regulators, supervisors, banks and other financial institutions. Regulatory guidance indicates that meticulous monitoring of all phases of model development and implementation is required to mitigate this risk. Considerable resources must be mobilised for this purpose. The exercise must embrace model development, assembly, implementation, validation and effective governance. Setting: Model validation practices are generally patchy, disparate and sometimes contradictory, and although the Basel Accord and some regulatory authorities have attempted to establish guiding principles, no definite set of global standards exists. Aim: Assessing the available literature for the best validation practices. Methods: This comprehensive literature study provided a background to the complexities of effective model management and focussed on model validation as a component of model risk management. Results: We propose a coherent ‘best practice’ framework for model validation. Scorecard tools are also presented to evaluate whether the proposed best practice model validation framework has been adequately assembled and implemented. Conclusion: The proposed best practice model validation framework is designed to assist firms in the construction of an effective, robust and fully compliant model validation programme and comprises three principal elements: model validation governance, policy and process.

  20. Aerosol modelling and validation during ESCOMPTE 2001

    Science.gov (United States)

    Cousin, F.; Liousse, C.; Cachier, H.; Bessagnet, B.; Guillaume, B.; Rosset, R.

    The ESCOMPTE 2001 programme (Atmospheric Research 69(3-4) (2004) 241) has resulted in an exhaustive set of dynamical, radiative, gas and aerosol observations (surface and aircraft measurements). A previous paper (Atmospheric Research (2004), in press) dealt with dynamics and gas-phase chemistry. The present paper is an extension to aerosol formation, transport and evolution. To account for the important loadings of primary and secondary aerosols and their transformation processes in the ESCOMPTE domain, the ORISAM aerosol module (Atmospheric Environment 35 (2001) 4751) was implemented on-line in the air-quality Meso-NH-C model. Additional developments have been introduced in the ORganic and Inorganic Spectral Aerosol Module (ORISAM) to improve the comparison between simulations and experimental surface and aircraft field data. This paper discusses this comparison for a simulation performed for one selected day, 24 June 2001, during the Intensive Observation Period IOP2b. Our work relies on BC and OCp emission inventories specifically developed for ESCOMPTE. This study confirms the need for a fine-resolution aerosol inventory with spectral chemical speciation. BC levels are satisfactorily reproduced, thus validating our emission inventory and its processing through Meso-NH-C. However, comparisons for reactive species generally show an underestimation of concentrations. Organic aerosol levels are rather well simulated, though with a trend towards underestimation in the afternoon. Inorganic aerosol species are underestimated for several reasons, some of which have been identified. For sulphates, primary emissions were introduced. Improvement was also obtained for modelled nitrate and ammonium levels after introducing heterogeneous chemistry. However, the absence of modelled terrigenous particles is probably a major cause of the nitrate and ammonium underestimations. Particle numbers and size distributions are well reproduced, but only in the submicrometre range.
Our work points out

  1. Validating agent based models through virtual worlds.

    Energy Technology Data Exchange (ETDEWEB)

    Lakkaraju, Kiran; Whetzel, Jonathan H.; Lee, Jina; Bier, Asmeret Brooke; Cardona-Rivera, Rogelio E.; Bernstein, Jeremy Ray Rhythm

    2014-01-01

    As the US continues its vigilance against distributed, embedded threats, understanding the political and social structure of these groups becomes paramount for predicting and disrupting their attacks. Agent-based models (ABMs) serve as a powerful tool to study these groups. While the popularity of social network tools (e.g., Facebook, Twitter) has provided extensive communication data, there is a lack of fine-grained behavioral data with which to inform and validate existing ABMs. Virtual worlds, in particular massively multiplayer online games (MMOGs), where large numbers of people interact within a complex environment for long periods of time, provide an alternative source of data. These environments provide a rich social environment where players engage in a variety of activities observed between real-world groups: collaborating and/or competing with other groups, conducting battles for scarce resources, and trading in a market economy. Strategies employed by player groups surprisingly reflect those seen in present-day conflicts, where players use diplomacy or espionage as their means of accomplishing their goals. In this project, we propose to address the need for fine-grained behavioral data by acquiring and analyzing game data from a commercial MMOG, referred to within this report as Game X. The goals of this research were: (1) devising toolsets for analyzing virtual world data to better inform the rules that govern a social ABM and (2) exploring how virtual worlds could serve as a source of data to validate ABMs established for analogous real-world phenomena. During this research, we studied certain patterns of group behavior to complement social modeling efforts where a significant lack of detailed examples of observed phenomena exists.
This report outlines our work examining group behaviors that underlie what we have termed the Expression-To-Action (E2A) problem: determining the changes in social contact that lead individuals/groups to engage in a particular behavior

  2. Geochemistry Model Validation Report: Material Degradation and Release Model

    Energy Technology Data Exchange (ETDEWEB)

    H. Stockman

    2001-09-28

    The purpose of this Analysis and Modeling Report (AMR) is to validate the Material Degradation and Release (MDR) model that predicts degradation and release of radionuclides from a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. This AMR is prepared according to "Technical Work Plan for: Waste Package Design Description for LA" (Ref. 17). The intended use of the MDR model is to estimate the long-term geochemical behavior of waste packages (WPs) containing U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The model is intended to predict (1) the extent to which criticality control material, such as gadolinium (Gd), will remain in the WP after corrosion of the initial WP, (2) the extent to which fissile Pu and uranium (U) will be carried out of the degraded WP by infiltrating water, and (3) the chemical composition and amounts of minerals and other solids left in the WP. The results of the model are intended for use in criticality calculations. The scope of the model validation report is to (1) describe the MDR model, and (2) compare the modeling results with experimental studies. A test case based on a degrading Pu-ceramic WP is provided to help explain the model. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance. This document has been prepared according to AP-3.10Q, "Analyses and Models" (Ref. 2), and prepared in accordance with the technical work plan (Ref. 17).

  3. Geochemistry Model Validation Report: Material Degradation and Release Model

    International Nuclear Information System (INIS)

    Stockman, H.

    2001-01-01

    The purpose of this Analysis and Modeling Report (AMR) is to validate the Material Degradation and Release (MDR) model that predicts degradation and release of radionuclides from a degrading waste package (WP) in the potential monitored geologic repository at Yucca Mountain. This AMR is prepared according to "Technical Work Plan for: Waste Package Design Description for LA" (Ref. 17). The intended use of the MDR model is to estimate the long-term geochemical behavior of waste packages (WPs) containing U.S. Department of Energy (DOE) Spent Nuclear Fuel (SNF) codisposed with High Level Waste (HLW) glass, commercial SNF, and Immobilized Plutonium Ceramic (Pu-ceramic) codisposed with HLW glass. The model is intended to predict (1) the extent to which criticality control material, such as gadolinium (Gd), will remain in the WP after corrosion of the initial WP, (2) the extent to which fissile Pu and uranium (U) will be carried out of the degraded WP by infiltrating water, and (3) the chemical composition and amounts of minerals and other solids left in the WP. The results of the model are intended for use in criticality calculations. The scope of the model validation report is to (1) describe the MDR model, and (2) compare the modeling results with experimental studies. A test case based on a degrading Pu-ceramic WP is provided to help explain the model. This model does not directly feed the assessment of system performance. The output from this model is used by several other models, such as the configuration generator, criticality, and criticality consequence models, prior to the evaluation of system performance. This document has been prepared according to AP-3.10Q, "Analyses and Models" (Ref. 2), and prepared in accordance with the technical work plan (Ref. 17)

  4. Statistical validation of normal tissue complication probability models

    NARCIS (Netherlands)

    Xu, Cheng-Jian; van der Schaaf, Arjen; van t Veld, Aart; Langendijk, Johannes A.; Schilstra, Cornelis

    2012-01-01

    PURPOSE: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. METHODS AND MATERIALS: A penalized regression method, LASSO (least absolute shrinkage
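A permutation test in this spirit can be sketched compactly: shuffle the outcome labels to build a null distribution for a separation statistic and compare it with the observed value. The scores and outcomes below are synthetic, and the statistic (a simple mean-score difference between complication and no-complication groups) is an illustrative stand-in, not the paper's exact choice:

```python
import random

def permutation_test(scores, labels, n_perm=2000, seed=1):
    """Permutation test sketch for an NTCP-style model: does the model
    score separate complication (1) from no-complication (0) cases
    better than chance? Returns the observed statistic and a one-sided
    permutation p-value."""
    def stat(lbls):
        g1 = [s for s, l in zip(scores, lbls) if l == 1]
        g0 = [s for s, l in zip(scores, lbls) if l == 0]
        return sum(g1) / len(g1) - sum(g0) / len(g0)

    observed = stat(labels)
    rng = random.Random(seed)
    shuffled = list(labels)
    exceed = 0
    for _ in range(n_perm):
        rng.shuffle(shuffled)          # break any score-outcome link
        if stat(shuffled) >= observed:
            exceed += 1
    return observed, (exceed + 1) / (n_perm + 1)

# Synthetic NTCP scores: complication cases tend to score higher
scores = [0.8, 0.7, 0.9, 0.6, 0.2, 0.3, 0.1, 0.4]
labels = [1, 1, 1, 1, 0, 0, 0, 0]
observed, p_value = permutation_test(scores, labels)
```

A small p-value says the model's apparent discrimination is unlikely to arise from chance labeling alone, which is the kind of evidence a validation exercise is after.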

  5. Validation of the community radiative transfer model

    International Nuclear Information System (INIS)

    Ding Shouguo; Yang Ping; Weng Fuzhong; Liu Quanhua; Han Yong; Delst, Paul van; Li Jun; Baum, Bryan

    2011-01-01

    To validate the Community Radiative Transfer Model (CRTM) developed by the U.S. Joint Center for Satellite Data Assimilation (JCSDA), the discrete ordinate radiative transfer (DISORT) model and the line-by-line radiative transfer model (LBLRTM) are combined in order to provide a reference benchmark. Compared with the benchmark, the CRTM appears quite accurate for both clear sky and ice cloud radiance simulations with RMS errors below 0.2 K, except for clouds with small ice particles. In a computer CPU run time comparison, the CRTM is faster than DISORT by approximately two orders of magnitude. Using the operational MODIS cloud products and the European Center for Medium-range Weather Forecasting (ECMWF) atmospheric profiles as an input, the CRTM is employed to simulate the Atmospheric Infrared Sounder (AIRS) radiances. The CRTM simulations are shown to be in reasonably close agreement with the AIRS measurements (the discrepancies are within 2 K in terms of brightness temperature difference). Furthermore, the impact of uncertainties in the input cloud properties and atmospheric profiles on the CRTM simulations has been assessed. The CRTM-based brightness temperatures (BTs) at the top of the atmosphere (TOA), for both thin (τ 30) clouds, are highly sensitive to uncertainties in atmospheric temperature and cloud top pressure. However, for an optically thick cloud, the CRTM-based BTs are not sensitive to the uncertainties of cloud optical thickness, effective particle size, and atmospheric humidity profiles. On the contrary, the uncertainties of the CRTM-based TOA BTs resulting from effective particle size and optical thickness are not negligible in an optically thin cloud.

  6. Development of a Conservative Model Validation Approach for Reliable Analysis

    Science.gov (United States)

    2015-01-01

    CIE 2015, August 2-5, 2015, Boston, Massachusetts, USA, paper DETC2015-46982 [draft]: Development of a Conservative Model Validation Approach for Reliable Analysis. ...obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the... ...In Section 3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account...

  7. Validation of ecological state space models using the Laplace approximation

    DEFF Research Database (Denmark)

    Thygesen, Uffe Høgsbro; Albertsen, Christoffer Moesgaard; Berg, Casper Willestofte

    2017-01-01

    Many statistical models in ecology follow the state space paradigm. For such models, the important step of model validation rarely receives as much attention as estimation or hypothesis testing, perhaps due to lack of available algorithms and software. Model validation is often based on a naive...... for estimation in general mixed effects models. Implementing one-step predictions in the R package Template Model Builder, we demonstrate that it is possible to perform model validation with little effort, even if the ecological model is multivariate, has non-linear dynamics, and whether observations...... useful directions in which the model could be improved....
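The one-step prediction approach to state-space model validation described in this record can be illustrated outside Template Model Builder. The following is a hypothetical linear-Gaussian sketch (not the authors' R/TMB code): standardized one-step prediction errors (innovations) from a Kalman filter should behave like iid N(0, 1) draws when the model is correctly specified.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear-Gaussian state space: x_t = a*x_{t-1} + w_t,  y_t = x_t + v_t
a, q, r = 0.8, 0.5, 0.2   # AR coefficient, process variance, observation variance
n = 2000
x = np.zeros(n)
y = np.zeros(n)
for t in range(1, n):
    x[t] = a * x[t - 1] + rng.normal(0, np.sqrt(q))
    y[t] = x[t] + rng.normal(0, np.sqrt(r))

# Kalman filter producing standardized one-step prediction residuals
m, P = 0.0, 1.0
resid = []
for t in range(1, n):
    mp, Pp = a * m, a * a * P + q            # predict
    S = Pp + r                               # innovation variance
    resid.append((y[t] - mp) / np.sqrt(S))   # standardized innovation
    K = Pp / S                               # update
    m = mp + K * (y[t] - mp)
    P = (1 - K) * Pp

resid = np.array(resid)
# For a correctly specified model these are ~ iid N(0, 1); departures
# (bias, over/under-dispersion, autocorrelation) flag model misfit.
print(resid.mean(), resid.std())
```

Checking these residuals for normality and independence is the validation step the abstract argues is often skipped.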

  8. Biogeochemical Controls on Technetium Mobility in FRC Sediments

    International Nuclear Information System (INIS)

    Lloyd, J.R.; McBeth, J.M.; Livens, F.R.; Bryan, N.D.; Ellis, B.; Sharma, H.; Burke, I.T.; Morris, K.

    2004-01-01

    Technetium-99 is a priority pollutant at numerous DOE sites, due to its long half-life (2.1 × 10^5 years), high mobility as Tc(VII) in oxic waters, and bioavailability as a sulfate analog. 99Tc is far less mobile under anaerobic conditions, forming insoluble Tc(IV) precipitates. As anaerobic microorganisms can reduce soluble Tc(VII) to insoluble Tc(IV), microbial metabolism may have the potential to treat sediments and waters contaminated with Tc. Baseline studies of fundamental mechanisms of Tc(VII) bioreduction and precipitation (reviewed by Lloyd et al., 2002) have generally used pure cultures of metal-reducing bacteria, in order to develop conceptual models for the biogeochemical cycling of Tc. There is, however, comparatively little known about interactions of metal-reducing bacteria with environmentally relevant trace concentrations of Tc, against a more complex biogeochemical background provided by mixed microbial communities in the subsurface. The objective of this new NABIR project is to probe the site-specific biogeochemical conditions that control the mobility of Tc at the FRC (Oak Ridge, TN). This information is required for the rational design of in situ bioremediation strategies for technetium-contaminated subsurface environments. We will use a combination of geochemical, mineralogical, microbiological and spectroscopic techniques to determine the solubility and phase associations of Tc in FRC sediments, and characterize the underpinning biogeochemical controls. A key strength of this project is that many of the techniques we are using have already been optimized by our research team, who are also studying the biogeochemical controls on Tc mobility in marine and freshwater sediments in the UK in a NERC-funded companion study.
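The half-life quoted above implies that radioactive decay alone removes essentially no 99Tc on remediation time scales, which is why immobilization strategies matter. A quick check with the standard decay law (the 1000-year horizon is an illustrative choice):

```python
import math

HALF_LIFE = 2.1e5  # 99Tc half-life in years, as quoted in the abstract

def fraction_remaining(t_years):
    """Fraction of the 99Tc inventory remaining after t years of decay."""
    return math.exp(-math.log(2) * t_years / HALF_LIFE)

# Sanity checks against the definition of half-life:
print(fraction_remaining(HALF_LIFE))      # 0.5
print(fraction_remaining(2 * HALF_LIFE))  # 0.25

# Over a 1000-year institutional-control horizon, >99.6% of the
# inventory is still present:
print(fraction_remaining(1000))
```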

  9. Validation of models in an imaging infrared simulation

    CSIR Research Space (South Africa)

    Willers, C

    2007-10-01

    Full Text Available ...three processes for transforming the information between the entities: Reality/Problem Entity, Conceptual Model, and Computerized Model; Model Validation, Model Verification, and Model Qualification; Computer Implementation, Analysis and Modelling, and Simulation and Experimentation. "Substantiation that a... ...C. Refsgaard, Modelling Guidelines - terminology and guiding principles, Advances in Water Resources, Vol. 27, No. 1, January 2004, pp. 71-82, Elsevier. [5] N. Oreskes et al., Verification, Validation, and Confirmation of Numerical Models in the Earth Sciences, Science, Vol. 263, Number...

  10. Model Validation Using Coordinate Distance with Performance Sensitivity

    Directory of Open Access Journals (Sweden)

    Jiann-Shiun Lew

    2008-01-01

    Full Text Available This paper presents an innovative approach to model validation for a structure with significant parameter variations. Model uncertainty of the structural dynamics is quantified with the use of a singular value decomposition technique to extract the principal components of parameter change, and an interval model is generated to represent the system with parameter uncertainty. The coordinate vector, corresponding to the identified principal directions, of the validation system is computed. The coordinate distance between the validation system and the identified interval model is used as a metric for model validation. A beam structure with an attached subsystem, which has significant parameter uncertainty, is used to demonstrate the proposed approach.
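The SVD-based extraction of principal directions of parameter change and the coordinate-distance metric described above can be sketched roughly as follows (hypothetical parameter data and a simple 2-sigma interval rule; not the author's implementation):

```python
import numpy as np

rng = np.random.default_rng(1)

# Rows: identified parameter vectors from repeated tests of the structure;
# columns: individual model parameters (hypothetical data with a few
# dominant directions of variation).
samples = rng.normal(size=(50, 4)) @ np.diag([3.0, 1.0, 0.3, 0.1])
mean = samples.mean(axis=0)

# SVD of the centered data extracts the principal directions of
# parameter change (rows of Vt, ordered by singular value).
U, s, Vt = np.linalg.svd(samples - mean, full_matrices=False)

# Coordinate vector of a new "validation" system in those directions:
validation = rng.normal(size=4)
coords = Vt @ (validation - mean)

# Interval model: per-direction bounds from the observed spread; the
# coordinate distance outside the bounds serves as a validation metric.
bounds = 2.0 * s / np.sqrt(len(samples) - 1)   # ~2-sigma per direction
distance = np.linalg.norm(np.maximum(np.abs(coords) - bounds, 0.0))
print(distance)  # 0.0 means the validation system lies inside the interval model
```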

  11. Projecting the long-term biogeochemical impacts of a diverse agroforestry system in the Midwest

    Science.gov (United States)

    Wolz, K. J.; DeLucia, E. H.; Paul, R. F.

    2014-12-01

    Annual, monoculture cropping systems have become the standard agricultural model in the Midwestern US. Unintended consequences of these systems include surface and groundwater pollution, greenhouse gas emissions, loss of biodiversity, and soil erosion. Diverse agroforestry (DA) systems dominated by fruit and nut trees/shrubs have been proposed as an agricultural model for the Midwestern US that can restore ecosystem services while simultaneously providing economically viable and industrially relevant staple food crops. A DA system including six species of fruit and nut crops was established on long-time conventional agricultural land at the University of Illinois at Urbana-Champaign in 2012, with the conventional corn-soybean rotation (CSR) as a control. Initial field measurements of the nitrogen and water cycles during the first two years of transition have indicated a significant decrease in N losses and modification of the seasonal evapotranspiration (ET) pattern. While these early results suggest that the land use transition from CSR to DA can have positive biogeochemical consequences, models must be utilized to make long-term biogeochemical projections in agroforestry systems. Initial field measurements of plant phenology, net N2O flux, nitrate leaching, soil respiration, and soil moisture were used to parameterize the DA system within the DayCENT biogeochemical model as the "savanna" ecosystem type. The model was validated with an independent subset of field measurements and then run to project biogeochemical cycling in the DA system for 25 years past establishment. Model results show that N losses via N2O emission or nitrate leaching reach a minimum within the first 5 years and then maintain this tight cycle into the future. While early ET field measurements revealed similar magnitudes between the DA and CSR systems, modeled ET continued to increase for the DA system throughout the projected time since the trees would continue to grow larger. 
These modeling

  12. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Whitmore, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kaffine, Leah [National Renewable Energy Lab. (NREL), Golden, CO (United States); Blair, Nate [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dobos, Aron P. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed tilt cases and below 8% for all one axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one axis tracking issues are discussed in detail.
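The annualized prediction error metric used in this report reduces to comparing annual totals of modeled and measured generation. A sketch with hypothetical monthly data (not NREL's field data):

```python
import numpy as np

# Hypothetical monthly energy production: SAM-modeled vs. measured (kWh)
modeled = np.array([ 9.8, 11.2, 14.0, 15.5, 17.1, 17.8,
                    18.0, 16.9, 14.6, 12.3, 10.1,  9.2]) * 1e3
measured = np.array([10.1, 11.0, 13.5, 15.9, 17.4, 17.2,
                     18.3, 16.4, 15.0, 12.0, 10.4,  9.5]) * 1e3

# Annualized prediction error: difference of annual totals as a percentage
# of the measured total. Monthly over- and under-predictions partly cancel.
error_pct = 100 * (modeled.sum() - measured.sum()) / measured.sum()
print(round(error_pct, 2))
```

The report's "below 3% for all fixed tilt cases" refers to this kind of annual-total comparison, which is why seasonal effects such as snow cover show up as compensating monthly biases rather than large annual errors.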

  13. Test-driven verification/validation of model transformations

    Institute of Scientific and Technical Information of China (English)

    László LENGYEL; Hassan CHARAF

    2015-01-01

    Why is it important to verify/validate model transformations? The motivation is to improve the quality of the transformations, and therefore the quality of the generated software artifacts. Verified/validated model transformations make it possible to ensure certain properties of the generated software artifacts. In this way, verification/validation methods can guarantee different requirements stated by the actual domain against the generated/modified/optimized software products. For example, a verified/validated model transformation can ensure the preservation of certain properties during the model-to-model transformation. This paper emphasizes the necessity of methods that make model transformations verified/validated, discusses the different scenarios of model transformation verification and validation, and introduces the principles of a novel test-driven method for verifying/validating model transformations. We provide a solution that makes it possible to automatically generate test input models for model transformations. Furthermore, we collect and discuss the actual open issues in the field of verification/validation of model transformations.

  14. A comprehensive model for piezoceramic actuators: modelling, validation and application

    International Nuclear Information System (INIS)

    Quant, Mario; Elizalde, Hugo; Flores, Abiud; Ramírez, Ricardo; Orta, Pedro; Song, Gangbing

    2009-01-01

    This paper presents a comprehensive model for piezoceramic actuators (PAs), which accounts for hysteresis, non-linear electric field and dynamic effects. The hysteresis model is based on the widely used general Maxwell slip model, while an enhanced electro-mechanical non-linear model replaces the linear constitutive equations commonly used. Further on, a linear second order model compensates for the frequency response of the actuator. Each individual model is fully characterized from experimental data yielded by a specific PA, then incorporated into a comprehensive 'direct' model able to determine the output strain based on the applied input voltage, fully compensating the aforementioned effects, where the term 'direct' represents an electrical-to-mechanical operating path. The 'direct' model was implemented in a Matlab/Simulink environment and successfully validated via experimental results, exhibiting higher accuracy and simplicity than many published models. This simplicity would allow a straightforward inclusion of other behaviour such as creep, ageing, material non-linearity, etc, if such parameters are important for a particular application. Based on the same formulation, two other models are also presented: the first is an 'alternate' model intended to operate within a force-controlled scheme (instead of a displacement/position control), thus able to capture the complex mechanical interactions occurring between a PA and its host structure. The second development is an 'inverse' model, able to operate within an open-loop control scheme, that is, yielding a 'linearized' PA behaviour. The performance of the developed models is demonstrated via a numerical sample case simulated in Matlab/Simulink, consisting of a PA coupled to a simple mechanical system, aimed at shifting the natural frequency of the latter
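The general Maxwell slip construction mentioned above represents hysteresis as parallel spring/slider elements, each of which slips once its spring force reaches a threshold. A minimal sketch (element parameters are illustrative, not identified from a real piezoceramic actuator):

```python
import numpy as np

class MaxwellSlip:
    """Generalized Maxwell slip hysteresis: parallel spring/slider elements."""

    def __init__(self, stiffness, thresholds):
        self.k = np.asarray(stiffness, float)   # spring stiffnesses
        self.f = np.asarray(thresholds, float)  # slip force thresholds
        self.p = np.zeros_like(self.k)          # slider positions (state)

    def step(self, x):
        """Advance to input displacement x; return total element force."""
        force = self.k * (x - self.p)
        # Elements whose spring force exceeds the threshold slip, so that
        # their force saturates at +/- the threshold:
        over = np.abs(force) > self.f
        self.p[over] = x - np.sign(force[over]) * self.f[over] / self.k[over]
        return (self.k * (x - self.p)).sum()

model = MaxwellSlip(stiffness=[1.0, 2.0, 4.0], thresholds=[0.5, 1.0, 1.5])
xs = np.concatenate([np.linspace(0, 2, 50), np.linspace(2, -2, 100)])
ys = [model.step(x) for x in xs]
# Loading to x=2 saturates all elements (+3.0 total); the return path to
# x=-2 traces a different branch, ending fully saturated at -3.0:
print(ys[49], ys[-1])
```

Rate-independent branching of this kind is exactly the hysteresis behaviour the comprehensive 'direct' model compensates for.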

  15. Some considerations for validation of repository performance assessment models

    International Nuclear Information System (INIS)

    Eisenberg, N.

    1991-01-01

    Validation is an important aspect of the regulatory uses of performance assessment. A substantial body of literature exists indicating the manner in which validation of models is usually pursued. Because performance models for a nuclear waste repository cannot be tested over the long time periods for which the model must make predictions, the usual avenue for model validation is precluded. Further impediments to model validation include a lack of fundamental scientific theory to describe important aspects of repository performance and an inability to easily deduce the complex, intricate structures characteristic of a natural system. A successful strategy for validation must attempt to resolve these difficulties in a direct fashion. Although some procedural aspects will be important, the main reliance of validation should be on scientific substance and logical rigor. The level of validation needed will be mandated, in part, by the uses to which these models are put, rather than by the ideal of validation of a scientific theory. Because of the importance of the validation of performance assessment models, the NRC staff has engaged in a program of research and international cooperation to seek progress in this important area. 2 figs., 16 refs

  16. Statistical Validation of Normal Tissue Complication Probability Models

    Energy Technology Data Exchange (ETDEWEB)

    Xu Chengjian, E-mail: c.j.xu@umcg.nl [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schaaf, Arjen van der; Veld, Aart A. van' t; Langendijk, Johannes A. [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Schilstra, Cornelis [Department of Radiation Oncology, University of Groningen, University Medical Center Groningen, Groningen (Netherlands); Radiotherapy Institute Friesland, Leeuwarden (Netherlands)

    2012-09-01

    Purpose: To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. Methods and Materials: A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Results: Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Conclusion: Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use.
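The permutation-testing step recommended in this abstract can be sketched with a rank-based AUC and label shuffling (synthetic scores and outcomes; not the authors' LASSO pipeline): the observed discrimination is compared against a null distribution obtained by permuting outcomes.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data: model-predicted scores and observed complications,
# with outcome probability increasing in the score.
n = 200
score = rng.uniform(size=n)
outcome = (rng.uniform(size=n) < 0.2 + 0.6 * score).astype(int)

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    m = len(scores)
    order = np.argsort(scores)
    ranks = np.empty(m)
    ranks[order] = np.arange(1, m + 1)
    pos = labels == 1
    n1 = pos.sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * (m - n1))

observed = auc(score, outcome)

# Permutation test: shuffling outcomes breaks any score-outcome link,
# giving the null distribution of AUC for this sample size and prevalence.
null = np.array([auc(score, rng.permutation(outcome)) for _ in range(2000)])
p_value = (np.sum(null >= observed) + 1) / (2000 + 1)
print(observed, p_value)
```

A small p-value indicates that the model's discrimination exceeds what chance association alone would produce, which is the significance statement the abstract says permutation testing provides.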

  17. Statistical validation of normal tissue complication probability models.

    Science.gov (United States)

    Xu, Cheng-Jian; van der Schaaf, Arjen; Van't Veld, Aart A; Langendijk, Johannes A; Schilstra, Cornelis

    2012-09-01

    To investigate the applicability and value of double cross-validation and permutation tests as established statistical approaches in the validation of normal tissue complication probability (NTCP) models. A penalized regression method, LASSO (least absolute shrinkage and selection operator), was used to build NTCP models for xerostomia after radiation therapy treatment of head-and-neck cancer. Model assessment was based on the likelihood function and the area under the receiver operating characteristic curve. Repeated double cross-validation showed the uncertainty and instability of the NTCP models and indicated that the statistical significance of model performance can be obtained by permutation testing. Repeated double cross-validation and permutation tests are recommended to validate NTCP models before clinical use. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Validation of mentorship model for newly qualified professional ...

    African Journals Online (AJOL)

    Newly qualified professional nurses (NQPNs) allocated to community health care services require a validated model to practice independently. Validation was done to adapt the model and to assess whether it is understood and can be implemented by NQPNs and mentors employed in community health care services.

  19. Validation and Adaptation of Router and Switch Models

    NARCIS (Netherlands)

    Boltjes, B.; Fernandez Diaz, I.; Kock, B.A.; Langeveld, R.J.G.M.; Schoenmaker, G.

    2003-01-01

    This paper describes validating OPNET models of key devices for the next generation IP-based tactical network of the Royal Netherlands Army (RNLA). The task of TNO-FEL is to provide insight into the scalability and performance of future deployed networks. Because validated models of key Cisco equipment

  20. Stream biogeochemical resilience in the age of Anthropocene

    Science.gov (United States)

    Dong, H.; Creed, I. F.

    2017-12-01

    Recent evidence indicates that biogeochemical cycles are being pushed beyond the tolerance limits of the earth system in the age of the Anthropocene placing terrestrial and aquatic ecosystems at risk. Here, we explored the question: Is there empirical evidence of global atmospheric changes driving losses in stream biogeochemical resilience towards a new normal? Stream biogeochemical resilience is the process of returning to equilibrium conditions after a disturbance and can be measured using three metrics: reactivity (the highest initial response after a disturbance), return rate (the rate of return to equilibrium condition after reactive changes), and variance of the stationary distribution (the signal to noise ratio). Multivariate autoregressive models were used to derive the three metrics for streams along a disturbance gradient - from natural systems where global drivers would dominate, to relatively managed or modified systems where global and local drivers would interact. We observed a loss of biogeochemical resilience in all streams. The key biogeochemical constituent(s) that may be driving loss of biogeochemical resilience were identified from the time series of the stream biogeochemical constituents. Non-stationary trends (detected by Mann-Kendall analysis) and stationary cycles (revealed through Morlet wavelet analysis) were removed, and the standard deviation (SD) of the remaining residuals were analyzed to determine if there was an increase in SD over time that would indicate a pending shift towards a new normal. We observed that nitrate-N and total phosphorus showed behaviours indicative of a pending shift in natural and managed forest systems, but not in agricultural systems. This study provides empirical support that stream ecosystems are showing signs of exceeding planetary boundary tolerance levels and shifting towards a "new normal" in response to global changes, which can be exacerbated by local management activities. Future work will consider
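The three resilience metrics named in this abstract can be illustrated in the univariate special case of the multivariate autoregressive models used here (an AR(1) sketch with simulated data; the multivariate definitions based on the full coefficient matrix are analogous):

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate a stream constituent as an AR(1) process: x_t = b*x_{t-1} + e_t
b_true, sigma = 0.7, 1.0
n = 5000
e = rng.normal(0, sigma, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = b_true * x[t - 1] + e[t]

# Estimate the autoregressive coefficient by least squares
b = np.dot(x[:-1], x[1:]) / np.dot(x[:-1], x[:-1])

reactivity = abs(b)                     # initial amplification of a disturbance
return_rate = -np.log(abs(b))           # larger = faster return to equilibrium
stationary_var = e.var() / (1 - b**2)   # variance of the stationary distribution

print(round(b, 2), round(return_rate, 2), round(stationary_var, 2))
```

A loss of resilience would appear as the coefficient drifting toward 1: reactivity rises, the return rate falls, and the stationary variance inflates, which is the signature the study looks for before a shift to a "new normal".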

  1. Linking Chaotic Advection with Subsurface Biogeochemical Processes

    Science.gov (United States)

    Mays, D. C.; Freedman, V. L.; White, S. K.; Fang, Y.; Neupauer, R.

    2017-12-01

    This work investigates the extent to which groundwater flow kinematics drive subsurface biogeochemical processes. In terms of groundwater flow kinematics, we consider chaotic advection, whose essential ingredient is stretching and folding of plumes. Chaotic advection is appealing within the context of groundwater remediation because it has been shown to optimize plume spreading in the laminar flows characteristic of aquifers. In terms of subsurface biogeochemical processes, we consider an existing model for microbially-mediated reduction of relatively mobile uranium(VI) to relatively immobile uranium(IV) following injection of acetate into a floodplain aquifer beneath a former uranium mill in Rifle, Colorado. This model has been implemented in the reactive transport code eSTOMP, the massively parallel version of STOMP (Subsurface Transport Over Multiple Phases). This presentation will report preliminary numerical simulations in which the hydraulic boundary conditions in the eSTOMP model are manipulated to simulate chaotic advection resulting from engineered injection and extraction of water through a manifold of wells surrounding the plume of injected acetate. This approach provides an avenue to simulate the impact of chaotic advection within the existing framework of the eSTOMP code.

  2. Cost model validation: a technical and cultural approach

    Science.gov (United States)

    Hihn, J.; Rosenberg, L.; Roust, K.; Warfield, K.

    2001-01-01

    This paper summarizes how JPL's parametric mission cost model (PMCM) has been validated using both formal statistical methods and a variety of peer and management reviews in order to establish organizational acceptance of the cost model estimates.

  3. Simulation of anthropogenic CO2 uptake in the CCSM3.1 ocean circulation-biogeochemical model: comparison with data-based estimates

    Directory of Open Access Journals (Sweden)

    S. Khatiwala

    2012-04-01

    Full Text Available The global ocean has taken up a large fraction of the CO2 released by human activities since the industrial revolution. Quantifying the oceanic anthropogenic carbon (Cant) inventory and its variability is important for predicting the future global carbon cycle. The detailed comparison of data-based and model-based estimates is essential for the validation and continued improvement of our prediction capabilities. So far, three global estimates of oceanic Cant inventory that are "data-based" and independent of global ocean circulation models have been produced: one based on the ΔC* method, and two that are based on constraining surface-to-interior transport of tracers, the TTD method and a maximum entropy inversion method (GF). The GF method, in particular, is capable of reconstructing the history of the Cant inventory through the industrial era. In the present study we use forward model simulations of the Community Climate System Model (CCSM3.1) to estimate the Cant inventory and compare the results with the data-based estimates. We also use the simulations to test several assumptions of the GF method, including the assumption of constant climate and circulation, which is common to all the data-based estimates. Though the integrated estimates of global Cant inventories are consistent with each other, the regional estimates show discrepancies of up to 50%. The CCSM3 model underestimates the total Cant inventory, in part due to weak mixing and ventilation in the North Atlantic and Southern Ocean. Analyses of different simulation results suggest that key assumptions about ocean circulation and air-sea disequilibrium in the GF method are generally valid on the global scale, but may introduce errors in Cant estimates on regional scales. The GF method should also be used with caution when predicting future oceanic anthropogenic carbon uptake.

  4. Validation of Embedded System Verification Models

    NARCIS (Netherlands)

    Marincic, J.; Mader, Angelika H.; Wieringa, Roelf J.

    The result of a model-based requirements verification shows that the model of a system satisfies (or not) formalised system requirements. The verification result is correct only if the model represents the system adequately. No matter what modelling technique we use, what precedes the model

  5. IVIM: modeling, experimental validation and application to animal models

    International Nuclear Information System (INIS)

    Fournet, Gabrielle

    2016-01-01

    This PhD thesis is centered on the study of the IVIM ('Intravoxel Incoherent Motion') MRI sequence. This sequence allows for the study of the blood microvasculature such as the capillaries, arterioles and venules. To be sensitive only to moving groups of spins, diffusion gradients are added before and after the 180° pulse of a spin echo (SE) sequence. The signal component corresponding to spins diffusing in the tissue can be separated from the one related to spins travelling in the blood vessels, which is called the IVIM signal. These two components are weighted by f_IVIM, which represents the volume fraction of blood inside the tissue. The IVIM signal is usually modelled by a mono-exponential (ME) function and characterized by a pseudo-diffusion coefficient, D*. We propose instead a bi-exponential IVIM model consisting of a slow pool, characterized by F_slow and D*_slow, corresponding to the capillaries as in the ME model, and a fast pool, characterized by F_fast and D*_fast, related to larger vessels such as medium-size arterioles and venules. This model was validated experimentally and more information was retrieved by comparing the experimental signals to a dictionary of simulated IVIM signals. The influence of the pulse sequence, the repetition time and the diffusion encoding time was also studied. Finally, the IVIM sequence was applied to the study of an animal model of Alzheimer's disease. (author) [fr
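The bi-exponential IVIM signal model described above can be written down directly. A sketch with illustrative parameter values (the b-value range and coefficients are typical orders of magnitude, not the thesis's fitted values):

```python
import numpy as np

def ivim_signal(b, f_ivim, f_fast, d_tissue, d_slow, d_fast):
    """Bi-exponential IVIM model sketch: tissue diffusion plus a slow
    (capillary) and a fast (larger-vessel) pseudo-diffusion pool."""
    perfusion = f_ivim * ((1 - f_fast) * np.exp(-b * d_slow)
                          + f_fast * np.exp(-b * d_fast))
    return (1 - f_ivim) * np.exp(-b * d_tissue) + perfusion

b = np.array([0, 10, 50, 100, 300, 600, 1000])  # diffusion weightings, s/mm^2
signal = ivim_signal(b, f_ivim=0.1, f_fast=0.3,
                     d_tissue=1e-3, d_slow=1e-2, d_fast=0.3)
print(signal)  # normalized: starts at 1.0 and decreases monotonically with b
```

The fast pool decays away at very low b-values, which is why sampling several small b-values is needed to separate it from the slow (capillary) pool.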

  6. The concept of validation of numerical models for consequence analysis

    International Nuclear Information System (INIS)

    Borg, Audun; Paulsen Husted, Bjarne; Njå, Ove

    2014-01-01

    Numerical models such as computational fluid dynamics (CFD) models are increasingly used in life safety studies and other types of analyses to calculate the effects of fire and explosions. The validity of these models is usually established by benchmark testing. This is done to quantitatively measure the agreement between the predictions provided by the model and the real world represented by observations in experiments. This approach assumes that all variables in the real world relevant for the specific study are adequately measured in the experiments and in the predictions made by the model. In this paper the various definitions of validation for CFD models used for hazard prediction are investigated to assess their implication for consequence analysis in a design phase. In other words, how is uncertainty in the prediction of future events reflected in the validation process? The sources of uncertainty are viewed from the perspective of the safety engineer. An example of the use of a CFD model is included to illustrate the assumptions the analyst must make and how these affect the prediction made by the model. The assessments presented in this paper are based on a review of standards and best practice guides for CFD modeling and the documentation from two existing CFD programs. Our main thrust has been to assess how validation work is performed and communicated in practice. We conclude that the concept of validation adopted for numerical models is adequate in terms of model performance. However, it does not address the main sources of uncertainty from the perspective of the safety engineer. Uncertainty in the input quantities describing future events, which are determined by the model user, outweighs the inaccuracies in the model as reported in validation studies. - Highlights: • Examine the basic concept of validation applied to models for consequence analysis. • Review standards and guides for validation of numerical models. • Comparison of the validation

  7. Models for Validation of Prior Learning (VPL)

    DEFF Research Database (Denmark)

    Ehlers, Søren

    The national policies for the education/training of adults are in the 21st century highly influenced by proposals which are formulated and promoted by the European Union (EU) as well as other transnational players, and this shift in policy making has consequences. One is that ideas which in the past...... would have been categorized as utopian can become realpolitik. Validation of Prior Learning (VPL) was in Europe mainly regarded as utopian, while universities in the United States of America (USA) were developing ways to award credits to students who came with experience from working life....

  8. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    Directory of Open Access Journals (Sweden)

    Daan Nieboer

    Full Text Available External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from the development to the external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: (1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and (2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.
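The role of case-mix heterogeneity, summarized by the standard deviation of the linear predictor, can be reproduced in a small simulation (hypothetical settings in the spirit of the abstract's second scenario; not the authors' code): with identical coefficients, a narrower case-mix alone lowers the c-statistic at validation.

```python
import numpy as np

rng = np.random.default_rng(4)

def c_stat(lp, y):
    """c-statistic (AUC) of linear predictors via the rank-sum identity."""
    m = len(lp)
    order = np.argsort(lp)
    ranks = np.empty(m)
    ranks[order] = np.arange(1, m + 1)
    pos = y == 1
    n1 = pos.sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * (m - n1))

def simulate(case_mix_sd, n=4000, beta=1.0):
    """Outcomes from a logistic model with fixed coefficients; only the
    spread of the linear predictor (the case-mix) differs by setting."""
    x = rng.normal(0.0, case_mix_sd, n)
    y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-beta * x))).astype(int)
    return x, y

x_dev, y_dev = simulate(case_mix_sd=1.5)   # heterogeneous development set
x_val, y_val = simulate(case_mix_sd=0.8)   # narrower validation set
c_dev, c_val = c_stat(x_dev, y_dev), c_stat(x_val, y_val)

# The c-statistic drops at validation purely because of the narrower
# case-mix, not because the regression coefficients are wrong:
print(round(c_dev, 3), round(c_val, 3))
```

Comparing the standard deviations of the linear predictors in both sets, as the benchmark framework does, correctly attributes the drop to case-mix rather than to miscalibrated coefficients.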

  9. Ecotoxicological, ecophysiological, and biogeochemical fundamentals of risk assessment

    International Nuclear Information System (INIS)

    Bashkin, V.N.; Kozlov, M.Ya.; Evstafjeva, E.V.

    1993-01-01

    Risk assessment (RA) influenced by different factors in radionuclide-polluted regions is carried out by determining the biogeochemical structure of a region. Consequently, ecological-biogeochemical regionalization together with ecotoxicological and ecophysiological monitoring of human population health are important approaches to RA. These criteria should be combined with LCA of various industrial and agricultural products. Such fundamentals and approaches are needed for areas where traditional pollutants (heavy metals, pesticides, fertilizers, POPs, etc.) are sharply compounded by radioactive pollution. For RA of these combined pollutants, methods for assessing human adaptability to a polluted environment have been developed. These techniques include biogeochemical, ecotoxicological, and ecophysiological analyses of risk factors as well as quantitative analysis of uncertainties using expert-modeling systems. Furthermore, modern statistical methods are used for quantitative assessment of human adaptability to radioactive and nonradioactive pollutants. The results obtained in Chernobyl regions show the acceptability of these methods for risk assessment

  10. Experimental Validation of Flow Force Models for Fast Switching Valves

    DEFF Research Database (Denmark)

    Bender, Niels Christian; Pedersen, Henrik Clemmensen; Nørgård, Christian

    2017-01-01

This paper comprises a detailed study of the forces acting on a Fast Switching Valve (FSV) plunger. The objective is to investigate to what extent different models are valid for design purposes. These models depend on the geometry of the moving plunger and the properties of the surrounding fluid. Experimental data are used to compare and validate the different models, where an effort is directed towards capturing the fluid squeeze effect just before material-on-material contact. The test data are compared with simulation data relying solely on analytic formulations, and the general dynamics of the plunger is validated.

  11. Validation of elk resource selection models with spatially independent data

    Science.gov (United States)

    Priscilla K. Coe; Bruce K. Johnson; Michael J. Wisdom; John G. Cook; Marty Vavra; Ryan M. Nielson

    2011-01-01

    Knowledge of how landscape features affect wildlife resource use is essential for informed management. Resource selection functions often are used to make and validate predictions about landscape use; however, resource selection functions are rarely validated with data from landscapes independent of those from which the models were built. This problem has severely...

  12. A Practical Approach to Validating a PD Model

    NARCIS (Netherlands)

    Medema, L.; Koning, de R.; Lensink, B.W.

    2009-01-01

    The capital adequacy framework Basel II aims to promote the adoption of stronger risk management practices by the banking industry. The implementation makes validation of credit risk models more important. Lenders therefore need a validation methodology to convince their supervisors that their

  13. A practical approach to validating a PD model

    NARCIS (Netherlands)

    Medema, Lydian; Koning, Ruud H.; Lensink, Robert; Medema, M.

    The capital adequacy framework Basel II aims to promote the adoption of stronger risk management practices by the banking industry. The implementation makes validation of credit risk models more important. Lenders therefore need a validation methodology to convince their supervisors that their

  14. Improving Coastal Ocean Color Validation Capabilities through Application of Inherent Optical Properties (IOPs)

    Science.gov (United States)

    Mannino, Antonio

    2008-01-01

Understanding how the different components of seawater alter the path of incident sunlight through scattering and absorption is essential to using remotely sensed ocean color observations effectively. This is particularly apropos in coastal waters where the different optically significant components (phytoplankton, detrital material, inorganic minerals, etc.) vary widely in concentration, often independently from one another. Inherent Optical Properties (IOPs) form the link between these biogeochemical constituents and the Apparent Optical Properties (AOPs); understanding this interrelationship is at the heart of successfully carrying out inversions of satellite-measured radiance to biogeochemical properties. While sufficient covariation of seawater constituents in case I waters typically allows empirical algorithms connecting AOPs and biogeochemical parameters to behave well, these empirical algorithms normally do not hold for case II regimes (Carder et al. 2003). Validation in the context of ocean color remote sensing refers to in-situ measurements used to verify or characterize algorithm products or any assumption used as input to an algorithm. In this project, validation capabilities are considered those measurement capabilities, techniques, methods, models, etc. that allow effective validation. Enhancing current validation capabilities by incorporating state-of-the-art IOP measurements and optical models is the purpose of this work. Involved in this pursuit is improving core IOP measurement capabilities (spectral, angular, spatio-temporal resolutions), improving our understanding of the behavior of analytical AOP-IOP approximations in complex coastal waters, and improving the spatial and temporal resolution of biogeochemical data for validation by applying biogeochemical-IOP inversion models so that these parameters can be computed from real-time IOP sensors with high sampling rates. Research cruises supported by this project provide for collection and

  15. Amendment to Validated dynamic flow model

    DEFF Research Database (Denmark)

    Knudsen, Torben

    2011-01-01

The purpose of WP2 is to establish flow models relating the wind speed at turbines in a farm. Until now, active control of power reference has not been included in these models, as only data from standard operation have been available. In this report, the first data series with power reference excitation is analysed for a turbine in undisturbed flow. For this data set, both the multiplicative model and, in particular, the simple first-order transfer function model can predict the downwind wind speed from the upwind wind speed and loading.

  16. EPIC Forest LAI Dataset: LAI estimates generated from the USDA Environmental Policy Impact Climate (EPIC) model (a widely used, field-scale, biogeochemical model) on four forest complexes spanning three physiographic provinces in VA and NC.

    Data.gov (United States)

    U.S. Environmental Protection Agency — This data depicts calculated and validated LAI estimates generated from the USDA Environmental Policy Impact Climate (EPIC) model (a widely used, field-scale,...

  17. A validated physical model of greenhouse climate.

    NARCIS (Netherlands)

    Bot, G.P.A.

    1989-01-01

In the greenhouse model the instantaneous environmental crop growth factors are calculated as output, together with the physical behaviour of the crop. The boundary conditions for this model are the outside weather conditions; other inputs are the physical characteristics of the crop, of the

  18. Numerical simulation of in-situ chemical oxidation (ISCO) and biodegradation of petroleum hydrocarbons using a coupled model for bio-geochemical reactive transport

    Science.gov (United States)

    Marin, I. S.; Molson, J. W.

    2013-05-01

Petroleum hydrocarbons (PHCs) are a major source of groundwater contamination and a well-known worldwide problem. They form a complex mixture of hundreds of organic compounds (including BTEX: benzene, toluene, ethylbenzene and xylenes), many of which are toxic and persistent in the subsurface and capable of creating a serious risk to human health. Several remediation technologies can be used to clean up PHC contamination. In-situ chemical oxidation (ISCO) and intrinsic bioremediation (IBR) are two promising techniques that can be applied in this case. However, the interaction of these processes with the background aquifer geochemistry and the design of an efficient treatment present a challenge. Here we show the development and application of BIONAPL/Phreeqc, a modeling tool capable of simulating groundwater flow and contaminant transport with coupled biological and geochemical processes in porous or fractured porous media. BIONAPL/Phreeqc is based on the well-tested BIONAPL/3D model, using a powerful finite element simulation engine capable of simulating non-aqueous phase liquid (NAPL) dissolution and density-dependent advective-dispersive transport, and solving the geochemical and kinetic processes with the library Phreeqc. To validate the model, we compared BIONAPL/Phreeqc with results from the literature for different biodegradation processes and different geometries, with good agreement. We then used the model to simulate the behavior of sodium persulfate (Na2S2O8) as an oxidant for BTEX degradation, coupled with sequential biodegradation in a 2D case, and to evaluate the effect of inorganic geochemical reactions. The results show the advantages of a treatment-train remediation scheme based on ISCO and IBR. The numerical performance and stability of the integrated BIONAPL/Phreeqc model were also verified.

  19. Statistical Validation of Engineering and Scientific Models: Background

    International Nuclear Information System (INIS)

    Hills, Richard G.; Trucano, Timothy G.

    1999-01-01

A tutorial is presented discussing the basic issues associated with propagation of uncertainty analysis and statistical validation of engineering and scientific models. The propagation of uncertainty tutorial illustrates the use of the sensitivity method and the Monte Carlo method to evaluate the uncertainty in predictions for linear and nonlinear models. Four example applications are presented: a linear model, a model for the behavior of a damped spring-mass system, a transient thermal conduction model, and a nonlinear transient convective-diffusive model based on Burgers' equation. Correlated and uncorrelated model input parameters are considered. The model validation tutorial builds on the material presented in the propagation of uncertainty tutorial and uses the damped spring-mass system as the example application. The validation tutorial illustrates several concepts associated with the application of statistical inference to test model predictions against experimental observations. Several validation methods are presented, including error-band based, multivariate, sum of squares of residuals, and optimization methods. After completion of the tutorial, a survey of statistical model validation literature is presented and recommendations for future work are made.
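The Monte Carlo propagation-of-uncertainty step described in the tutorial can be sketched as follows, using a toy linear model with invented parameters (not one of the tutorial's four applications): sample the uncertain inputs, push each sample through the model, and summarize the spread of the predictions.

```python
import random
import statistics

def propagate_mc(model, param_means, param_sds, n=20000, seed=1):
    """Monte Carlo propagation of input-parameter uncertainty:
    draw parameter samples, evaluate the model for each draw,
    and return the mean and standard deviation of the predictions."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    outputs = []
    for _ in range(n):
        params = [rng.gauss(m, s) for m, s in zip(param_means, param_sds)]
        outputs.append(model(params))
    return statistics.mean(outputs), statistics.stdev(outputs)

# Toy linear model y = a*x + b at x = 2, with uncorrelated uncertain a and b.
mean_y, sd_y = propagate_mc(lambda p: p[0] * 2.0 + p[1], [1.0, 0.5], [0.1, 0.05])
# Analytic result for comparison: mean 2.5, sd = sqrt((2*0.1)**2 + 0.05**2) ~ 0.206
```

For a linear model the Monte Carlo estimate can be checked against the analytic (sensitivity-method) result, which is what makes this a useful warm-up before the nonlinear applications.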

  20. Validity of microgravity simulation models on earth

    DEFF Research Database (Denmark)

    Regnard, J; Heer, M; Drummer, C

    2001-01-01

Many studies have used water immersion and head-down bed rest as experimental models to simulate responses to microgravity. However, some data collected during space missions are at variance or in contrast with observations collected from experimental models. These discrepancies could reflect incomplete knowledge of the characteristics inherent to each model. During water immersion, the hydrostatic pressure lowers the peripheral vascular capacity and causes increased thoracic blood volume and high vascular perfusion. In turn, these changes lead to high urinary flow, low vasomotor tone, and a high...

  1. Verification and Validation of Tropospheric Model/Database

    National Research Council Canada - National Science Library

Junho, Choi

    1998-01-01

    A verification and validation of tropospheric models and databases has been performed based on ray tracing algorithm, statistical analysis, test on real time system operation, and other technical evaluation process...

  2. Base Flow Model Validation, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — The program focuses on turbulence modeling enhancements for predicting high-speed rocket base flows. A key component of the effort is the collection of high-fidelity...

  3. Validating predictions from climate envelope models.

    Directory of Open Access Journals (Sweden)

    James I Watling

Full Text Available Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species' distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967-1971 (t1) and evaluated using occurrence data from 1998-2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.

  4. Validating predictions from climate envelope models

    Science.gov (United States)

    Watling, J.; Bucklin, D.; Speroterra, C.; Brandt, L.; Cabal, C.; Romañach, Stephanie S.; Mazzotti, Frank J.

    2013-01-01

    Climate envelope models are a potentially important conservation tool, but their ability to accurately forecast species’ distributional shifts using independent survey data has not been fully evaluated. We created climate envelope models for 12 species of North American breeding birds previously shown to have experienced poleward range shifts. For each species, we evaluated three different approaches to climate envelope modeling that differed in the way they treated climate-induced range expansion and contraction, using random forests and maximum entropy modeling algorithms. All models were calibrated using occurrence data from 1967–1971 (t1) and evaluated using occurrence data from 1998–2002 (t2). Model sensitivity (the ability to correctly classify species presences) was greater using the maximum entropy algorithm than the random forest algorithm. Although sensitivity did not differ significantly among approaches, for many species, sensitivity was maximized using a hybrid approach that assumed range expansion, but not contraction, in t2. Species for which the hybrid approach resulted in the greatest improvement in sensitivity have been reported from more land cover types than species for which there was little difference in sensitivity between hybrid and dynamic approaches, suggesting that habitat generalists may be buffered somewhat against climate-induced range contractions. Specificity (the ability to correctly classify species absences) was maximized using the random forest algorithm and was lowest using the hybrid approach. Overall, our results suggest cautious optimism for the use of climate envelope models to forecast range shifts, but also underscore the importance of considering non-climate drivers of species range limits. The use of alternative climate envelope models that make different assumptions about range expansion and contraction is a new and potentially useful way to help inform our understanding of climate change effects on species.
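The sensitivity and specificity used to evaluate these models reduce to simple confusion-matrix ratios over presence/absence records. A minimal sketch with hypothetical data (not the study's datasets):

```python
def sensitivity_specificity(predicted, observed):
    """Sensitivity: fraction of observed presences correctly predicted present.
    Specificity: fraction of observed absences correctly predicted absent."""
    tp = sum(1 for p, o in zip(predicted, observed) if p and o)
    tn = sum(1 for p, o in zip(predicted, observed) if not p and not o)
    presences = sum(1 for o in observed if o)
    absences = len(observed) - presences
    return tp / presences, tn / absences

# Hypothetical presence (1) / absence (0) records at five survey sites.
sens, spec = sensitivity_specificity([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
```

The trade-off the abstract reports (the hybrid approach raising sensitivity while lowering specificity) is exactly a shift between these two ratios.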

  5. Preliminary validation of a Monte Carlo model for IMRT fields

    International Nuclear Information System (INIS)

    Wright, Tracy; Lye, Jessica; Mohammadi, Mohammad

    2011-01-01

Full text: A Monte Carlo model of an Elekta linac, validated for medium to large (10-30 cm) symmetric fields, has been investigated for small, irregular and asymmetric fields suitable for IMRT treatments. The model has been validated with field segments using radiochromic film in solid water. The modelled positions of the multileaf collimator (MLC) leaves have been validated using EBT film. In the model, electrons with a narrow energy spectrum are incident on the target and all components of the linac head are included. The MLC is modelled using the EGSnrc MLCE component module. For the validation, a number of single complex IMRT segments with dimensions of approximately 1-8 cm were delivered to film in solid water (see Fig. 1). The same segments were modelled using EGSnrc by adjusting the MLC leaf positions in the model validated for 10 cm symmetric fields. Dose distributions along the centre of each MLC leaf as determined by both methods were compared. A picket fence test was also performed to confirm the MLC leaf positions. 95% of the points in the modelled dose distribution along the leaf axis agree with the film measurement to within 1%/1 mm for dose difference and distance to agreement. Areas of most deviation occur in the penumbra region. A system has been developed to calculate the MLC leaf positions in the model for any planned field size.

  6. Validation of the simulator neutronics model

    International Nuclear Information System (INIS)

    Gregory, M.V.

    1984-01-01

The neutronics model in the SRP reactor training simulator computes the variation with time of the neutron population in the reactor core. The power output of a reactor is directly proportional to the neutron population, thus in a very real sense the neutronics model determines the response of the simulator. The geometrical complexity of the reactor control system in SRP reactors requires the neutronics model to provide a detailed, 3D representation of the reactor core. Existing simulator technology does not allow such a detailed representation to run in real time in a minicomputer environment, thus an entirely different approach to the problem was required. A prompt jump method has been developed in answer to this need.

  7. Context discovery using attenuated Bloom codes: model description and validation

    NARCIS (Netherlands)

    Liu, F.; Heijenk, Geert

    A novel approach to performing context discovery in ad-hoc networks based on the use of attenuated Bloom filters is proposed in this report. In order to investigate the performance of this approach, a model has been developed. This document describes the model and its validation. The model has been

  8. Traffic modelling validation of advanced driver assistance systems

    NARCIS (Netherlands)

    Tongeren, R. van; Gietelink, O.J.; Schutter, B. de; Verhaegen, M.

    2007-01-01

    This paper presents a microscopic traffic model for the validation of advanced driver assistance systems. This model describes single-lane traffic and is calibrated with data from a field operational test. To illustrate the use of the model, a Monte Carlo simulation of single-lane traffic scenarios

  9. Application of parameters space analysis tools for empirical model validation

    Energy Technology Data Exchange (ETDEWEB)

    Paloma del Barrio, E. [LEPT-ENSAM UMR 8508, Talence (France); Guyon, G. [Electricite de France, Moret-sur-Loing (France)

    2004-01-01

    A new methodology for empirical model validation has been proposed in the framework of the Task 22 (Building Energy Analysis Tools) of the International Energy Agency. It involves two main steps: checking model validity and diagnosis. Both steps, as well as the underlying methods, have been presented in the first part of the paper. In this part, they are applied for testing modelling hypothesis in the framework of the thermal analysis of an actual building. Sensitivity analysis tools have been first used to identify the parts of the model that can be really tested on the available data. A preliminary diagnosis is then supplied by principal components analysis. Useful information for model behaviour improvement has been finally obtained by optimisation techniques. This example of application shows how model parameters space analysis is a powerful tool for empirical validation. In particular, diagnosis possibilities are largely increased in comparison with residuals analysis techniques. (author)

  10. Quantitative system validation in model driven design

    DEFF Research Database (Denmark)

    Hermanns, Hilger; Larsen, Kim Guldstrand; Raskin, Jean-Francois

    2010-01-01

    The European STREP project Quasimodo1 develops theory, techniques and tool components for handling quantitative constraints in model-driven development of real-time embedded systems, covering in particular real-time, hybrid and stochastic aspects. This tutorial highlights the advances made, focus...

  11. Carbon sequestration by patch fertilization: A comprehensive assessment using coupled physical-ecological-biogeochemical models: FINAL REPORT of grant Grant No. DE-FG02-04ER63726

    Energy Technology Data Exchange (ETDEWEB)

    Sarmiento, Jorge L; Gnanadesikan, Anand; Gruber, Nicolas

    2007-06-21

This final report summarizes research undertaken collaboratively between Princeton University, the NOAA Geophysical Fluid Dynamics Laboratory on the Princeton University campus, the State University of New York at Stony Brook, and the University of California, Los Angeles between September 1, 2000, and November 30, 2006, to do fundamental research on ocean iron fertilization as a means to enhance the net oceanic uptake of CO2 from the atmosphere. The approach we proposed was to develop and apply a suite of coupled physical-ecological-biogeochemical models in order to (i) determine to what extent enhanced carbon fixation from iron fertilization will lead to an increase in the oceanic uptake of atmospheric CO2 and how long this carbon will remain sequestered (efficiency), and (ii) examine the changes in ocean ecology and natural biogeochemical cycles resulting from iron fertilization (consequences). The award was funded in two separate three-year installments: • September 1, 2000 to November 30, 2003, for a project entitled “Ocean carbon sequestration by fertilization: An integrated biogeochemical assessment.” A final report was submitted for this at the end of 2003 and is included here as Appendix 1. • December 1, 2003 to November 30, 2006, for a follow-on project under the same grant number entitled “Carbon sequestration by patch fertilization: A comprehensive assessment using coupled physical-ecological-biogeochemical models.” This report focuses primarily on the progress we made during the second period of funding subsequent to the work reported on in Appendix 1. When we began this project, we were thinking almost exclusively in terms of long-term fertilization over large regions of the ocean such as the Southern Ocean, with much of our focus being on how ocean circulation and biogeochemical cycling would interact to control the response to a given fertilization scenario. Our research on these types of scenarios, which was carried out largely during

  12. Total maximum allocated load calculation of nitrogen pollutants by linking a 3D biogeochemical-hydrodynamic model with a programming model in Bohai Sea

    Science.gov (United States)

    Dai, Aiquan; Li, Keqiang; Ding, Dongsheng; Li, Yan; Liang, Shengkang; Li, Yanbin; Su, Ying; Wang, Xiulin

    2015-12-01

The equal percent removal (EPR) method, in which the pollutant reduction ratio is set the same in all administrative regions, failed to satisfy the requirement for water quality improvement in the Bohai Sea imposed by the developed Coastal Pollution Total Load Control Management. The total maximum allocated load (TMAL) of nitrogen pollutants in the sea-sink source regions (SSRs) around the Bohai Rim, which is the maximum pollutant load of every outlet under the limitation of water quality criteria, was estimated by the optimization-simulation method (OSM) combined with a loop approximation calculation. In OSM, water quality is simulated using a water quality model and the pollutant load is calculated with a programming model. The effect of changes in pollutant loads on TMAL was discussed. Results showed that the TMAL of nitrogen pollutants in 34 SSRs was 1.49×10⁵ ton/year. The highest TMAL was observed in summer and the lowest in winter. TMAL was also higher in the Bohai Strait and central Bohai Sea and lower in the inner areas of Liaodong Bay, Bohai Bay and Laizhou Bay. In the loop approximation calculation, the TMAL obtained was considered satisfactory for the water quality criteria, as the fluctuation of the concentration response matrix with pollutant loads was eliminated. Results of a numerical experiment further showed that water quality improvement was faster and more evident under the TMAL input than under the EPR method.
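The EPR idea of a single uniform reduction ratio can be sketched under the linear concentration-response assumption c_i = sum_j A[i][j]·x_j used in such allocation schemes. All names, the matrix, and the numbers below are invented for illustration; the actual OSM solves an optimization (programming) model rather than this one-line scaling:

```python
def epr_feasible_scale(response_matrix, loads, criteria):
    """Largest uniform reduction factor r (the EPR idea: every region is
    cut by the same ratio) such that the scaled loads r*loads satisfy
    every water-quality criterion, assuming the linear response
    c_i = sum_j A[i][j] * x_j."""
    r = 1.0
    for row, limit in zip(response_matrix, criteria):
        response = sum(a * x for a, x in zip(row, loads))
        if response > limit:  # criterion violated at the full load
            r = min(r, limit / response)
    return r

# Two outlets, two monitoring sites (all numbers invented).
r = epr_feasible_scale([[1.0, 0.0], [0.0, 2.0]], [10.0, 10.0], [5.0, 10.0])
```

Because one violated site forces the same cut everywhere, EPR can over-reduce loads at outlets that barely affect that site, which is the inefficiency TMAL allocation is meant to remove.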

  13. Ensuring the Validity of the Micro Foundation in DSGE Models

    DEFF Research Database (Denmark)

    Andreasen, Martin Møller

The presence of i) stochastic trends, ii) deterministic trends, and/or iii) stochastic volatility in DSGE models may imply that the agents' objective functions attain infinite values. We say that such models do not have a valid micro foundation. The paper derives sufficient conditions which ensure that the objective functions of the households and the firms are finite even when various trends and stochastic volatility are included in a standard DSGE model. Based on these conditions we test the validity of the micro foundation in six DSGE models from the literature. The models of Justiniano & Primiceri (American Economic Review, forthcoming) and Fernández-Villaverde & Rubio-Ramírez (Review of Economic Studies, 2007) do not satisfy these sufficient conditions, or any other known set of conditions ensuring finite values for the objective functions. Thus, the validity of the micro foundation in these models is not ensured.

  14. Evaluation of the transport matrix method for simulation of ocean biogeochemical tracers

    Science.gov (United States)

    Kvale, Karin F.; Khatiwala, Samar; Dietze, Heiner; Kriest, Iris; Oschlies, Andreas

    2017-06-01

Conventional integration of Earth system and ocean models can accrue considerable computational expenses, particularly for marine biogeochemical applications. Offline numerical schemes in which only the biogeochemical tracers are time stepped and transported using a pre-computed circulation field can substantially reduce the burden and are thus an attractive alternative. One such scheme is the transport matrix method (TMM), which represents tracer transport as a sequence of sparse matrix-vector products that can be performed efficiently on distributed-memory computers. While the TMM has been used for a variety of geochemical and biogeochemical studies, to date the resulting solutions have not been comprehensively assessed against their online counterparts. Here, we present a detailed comparison of the two. It is based on simulations of the state-of-the-art biogeochemical sub-model embedded within the widely used coarse-resolution University of Victoria Earth System Climate Model (UVic ESCM). The default, non-linear advection scheme was first replaced with a linear, third-order upwind-biased advection scheme to satisfy the linearity requirement of the TMM. Transport matrices were extracted from an equilibrium run of the physical model and subsequently used to integrate the biogeochemical model offline to equilibrium. The identical biogeochemical model was also run online. Our simulations show that offline integration introduces some bias to biogeochemical quantities through the omission of the polar filtering used in UVic ESCM and in the offline application of time-dependent forcing fields, with high latitudes showing the largest differences with respect to the online model. Differences in other regions and in the seasonality of nutrients and phytoplankton distributions are found to be relatively minor, giving confidence that the TMM is a reliable tool for offline integration of complex biogeochemical models. Moreover, while UVic ESCM is a serial code, the TMM can be run efficiently on distributed-memory parallel computers.
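The sequence of sparse matrix-vector products at the heart of the TMM can be sketched as follows. This is a toy two-box illustration with invented names, not the actual TMM code: one offline step applies the pre-computed transport matrix to the tracer vector and then adds the biogeochemical source/sink term.

```python
def spmv(rows, x):
    """Sparse matrix-vector product; `rows` maps row index -> {column: value}."""
    return [sum(v * x[j] for j, v in rows.get(i, {}).items())
            for i in range(len(x))]

def offline_step(transport_matrix, tracer, source, dt):
    """One offline TMM-style step: transport the tracer with the
    pre-computed (sparse) matrix, then add the biogeochemical tendency."""
    transported = spmv(transport_matrix, tracer)
    return [c + dt * s for c, s in zip(transported, source)]

# Two-box example: box 0 keeps its tracer, box 1 mixes equally with box 0.
A = {0: {0: 1.0}, 1: {0: 0.5, 1: 0.5}}
new_tracer = offline_step(A, [2.0, 4.0], [0.0, 1.0], 1.0)
```

Because the matrix is fixed, only these matvecs and the local biogeochemistry must be evaluated each step, which is where the offline scheme saves its cost; the linearity requirement mentioned in the abstract is exactly what makes the matvec representation valid.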

  15. Functional Validation of Heteromeric Kainate Receptor Models.

    Science.gov (United States)

    Paramo, Teresa; Brown, Patricia M G E; Musgaard, Maria; Bowie, Derek; Biggin, Philip C

    2017-11-21

Kainate receptors require the presence of external ions for gating. Most work thus far has been performed on homomeric GluK2 but, in vivo, kainate receptors are likely heterotetramers. Agonists bind to the ligand-binding domain (LBD) which is arranged as a dimer of dimers as exemplified in homomeric structures, but no high-resolution structure currently exists of heteromeric kainate receptors. In a full-length heterotetramer, the LBDs could potentially be arranged either as a GluK2 homomer alongside a GluK5 homomer or as two GluK2/K5 heterodimers. We have constructed models of the LBD dimers based on the GluK2 LBD crystal structures and investigated their stability with molecular dynamics simulations. We have then used the models to make predictions about the functional behavior of the full-length GluK2/K5 receptor, which we confirmed via electrophysiological recordings. A key prediction and observation is that lithium ions bind to the dimer interface of GluK2/K5 heteromers and slow their desensitization.

  16. Validation of limited sampling models (LSM) for estimating AUC in therapeutic drug monitoring - is a separate validation group required?

    NARCIS (Netherlands)

    Proost, J. H.

Objective: Limited sampling models (LSM) for estimating AUC in therapeutic drug monitoring are usually validated in a separate group of patients, according to published guidelines. The aim of this study is to evaluate the validation of LSM by comparing independent validation with cross-validation.

  17. Validation of nuclear models used in space radiation shielding applications

    International Nuclear Information System (INIS)

    Norman, Ryan B.; Blattnig, Steve R.

    2013-01-01

    A program of verification and validation has been undertaken to assess the applicability of models to space radiation shielding applications and to track progress as these models are developed over time. In this work, simple validation metrics applicable to testing both model accuracy and consistency with experimental data are developed. The developed metrics treat experimental measurement uncertainty as an interval and are therefore applicable to cases in which epistemic uncertainty dominates the experimental data. To demonstrate the applicability of the metrics, nuclear physics models used by NASA for space radiation shielding applications are compared to an experimental database consisting of over 3600 experimental cross sections. A cumulative uncertainty metric is applied to the question of overall model accuracy, while a metric based on the median uncertainty is used to analyze the models from the perspective of model development by examining subsets of the model parameter space.
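The interval treatment of experimental uncertainty described here can be sketched directly: a prediction incurs no penalty inside the measurement interval, and the per-point misses are summarized both cumulatively (overall accuracy) and by the median (typical behavior). The names and data below are invented for illustration, not NASA's actual metrics code:

```python
import statistics

def interval_miss(prediction, measurement, uncertainty):
    """Distance from a model prediction to the experimental uncertainty
    interval [measurement - uncertainty, measurement + uncertainty];
    zero when the prediction falls inside the interval."""
    lo, hi = measurement - uncertainty, measurement + uncertainty
    return max(lo - prediction, prediction - hi, 0.0)

def summary_metrics(predictions, measurements, uncertainties):
    """Cumulative metric (overall accuracy) and median metric (typical case)."""
    misses = [interval_miss(p, m, u)
              for p, m, u in zip(predictions, measurements, uncertainties)]
    return sum(misses), statistics.median(misses)

# Three hypothetical cross sections: predictions vs. measured value +/- uncertainty.
total, typical = summary_metrics([1.0, 2.5, 5.0], [1.1, 2.0, 4.0], [0.2, 0.3, 0.5])
```

Treating the uncertainty as an interval rather than a distribution is what makes the approach suitable when epistemic uncertainty dominates the experimental data, as the abstract notes.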

  18. Diel biogeochemical processes in terrestrial waters

    Science.gov (United States)

    Nimick, David A.; Gammons, Christopher H.

    2011-01-01

    scientists typically do not encompass the wide diversity and range of processes that produce diel cycles, and (3) the logistics of making field measurements for 24-h periods has limited recognition and understanding of these important cycles. Thus, the topical session brought together hydrologists, biologists, geochemists, and ecologists to discuss field studies, laboratory experiments, theoretical modeling, and measurement techniques related to diel cycling. Hopefully with the cross-disciplinary synergy developed at the session as well as by this special issue, a more comprehensive understanding of the interrelationships between the diel processes will be developed. Needless to say, understanding diel processes is critical for regulatory agencies and the greater scientific community. And perhaps more importantly, expanded knowledge of biogeochemical cycling may lead to better predictions of how aquatic ecosystems might react to changing conditions of contaminant loading, eutrophication, climate change, drought, industrialization, development, and other variables.

  19. Composing, Analyzing and Validating Software Models

    Science.gov (United States)

    Sheldon, Frederick T.

    1998-10-01

This research has been conducted at the Computational Sciences Division of the Information Sciences Directorate at Ames Research Center (Automated Software Engineering Group). The principal work this summer has been to review and refine the agenda that was carried forward from last summer. Formal specifications provide good support for designing a functionally correct system, but they are weak at incorporating non-functional performance requirements (such as reliability). Techniques which utilize stochastic Petri nets (SPNs) are good for evaluating the performance and reliability of a system, but they may be too abstract and cumbersome from the standpoint of specifying and evaluating functional behavior. Therefore, one major objective of this research is to provide an integrated approach to assist the user in specifying both functionality (qualitative: mutual exclusion and synchronization) and performance requirements (quantitative: reliability and execution deadlines). In this way, the merits of a powerful modeling technique for performability analysis (using SPNs) can be combined with a well-defined formal specification language. In doing so, we can come closer to providing a formal approach to designing a functionally correct system that meets reliability and performance goals.

  20. Model validation and calibration based on component functions of model output

    International Nuclear Information System (INIS)

    Wu, Danqing; Lu, Zhenzhou; Wang, Yanping; Cheng, Lei

    2015-01-01

The target in this work is to validate the component functions of model output between physical observation and computational model with the area metric. Based on the theory of high dimensional model representations (HDMR) of independent input variables, conditional expectations are component functions of model output, and these conditional expectations reflect partial information of the model output. Therefore, validation of the conditional expectations reveals the discrepancy between the partial information of the computational model output and that of the observations. A calibration of the conditional expectations is then carried out to reduce the value of the model validation metric. After that, the model validation metric of the model output is recalculated with the calibrated model parameters, and the result shows that reducing the discrepancy in the conditional expectations helps decrease the difference in model output. Finally, several examples are employed to demonstrate the rationality and necessity of the methodology in the case of both single and multiple validation sites. - Highlights: • A validation metric of conditional expectations of model output is proposed. • HDMR explains the relationship between conditional expectations and model output. • An improved approach of parameter calibration updates the computational models. • Validation and calibration are applied at single and multiple sites. • Validation and calibration show superiority over existing methods
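As a rough illustration of the area metric named above, one common formulation measures the area between the empirical CDFs of model output and observations. The sketch below is a generic version of that idea under stated assumptions (the `area_metric` helper and sample data are hypothetical, and the HDMR decomposition itself is not reproduced):

```python
import numpy as np

def area_metric(model_samples, obs_samples):
    """Area between the empirical CDFs of model output and observations
    (an L1-type distance often used as a validation metric)."""
    m = np.sort(np.asarray(model_samples, dtype=float))
    o = np.sort(np.asarray(obs_samples, dtype=float))
    grid = np.union1d(m, o)  # pooled breakpoints of both step functions
    F_m = np.searchsorted(m, grid, side="right") / m.size
    F_o = np.searchsorted(o, grid, side="right") / o.size
    # Both ECDFs are right-continuous steps, constant on each interval
    # [grid[i], grid[i+1]), so the enclosed area is an exact finite sum.
    return float(np.sum(np.abs(F_m[:-1] - F_o[:-1]) * np.diff(grid)))

# A constant offset between model and data shows up directly as the metric:
print(area_metric([1.0, 2.0, 3.0], [1.5, 2.5, 3.5]))  # 0.5
```

A zero value means the two empirical distributions coincide; larger values quantify the mismatch in the same units as the model output.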

  1. Predicting the ungauged basin: Model validation and realism assessment

    Directory of Open Access Journals (Sweden)

Tim van Emmerik

    2015-10-01

Full Text Available The hydrological decade on Predictions in Ungauged Basins (PUB) led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to a limited amount of published studies on genuinely ungauged basins, model validation and realism assessment of model outcome have not been discussed to a great extent. With this paper we aim to contribute to the discussion on how one can determine the value and validity of a hydrological model developed for an ungauged basin. As in many cases no local, or even regional, data are available, alternative methods should be applied. Using a PUB case study in a genuinely ungauged basin in southern Cambodia, we give several examples of how one can use different types of soft data to improve model design, calibrate and validate the model, and assess the realism of the model output. A rainfall-runoff model was coupled to an irrigation reservoir, allowing the use of additional and unconventional data. The model was mainly forced with remote sensing data, and local knowledge was used to constrain the parameters. Model realism assessment was done using data from surveys. This resulted in a successful reconstruction of the reservoir dynamics, and revealed the different hydrological characteristics of the two topographical classes. This paper does not present a generic approach that can be transferred to other ungauged catchments, but it aims to show how clever model design and alternative data acquisition can result in a valuable hydrological model for an ungauged catchment.

  2. Making Validated Educational Models Central in Preschool Standards.

    Science.gov (United States)

    Schweinhart, Lawrence J.

    This paper presents some ideas to preschool educators and policy makers about how to make validated educational models central in standards for preschool education and care programs that are available to all 3- and 4-year-olds. Defining an educational model as a coherent body of program practices, curriculum content, program and child, and teacher…

  3. Validation of ASTEC core degradation and containment models

    International Nuclear Information System (INIS)

    Kruse, Philipp; Brähler, Thimo; Koch, Marco K.

    2014-01-01

In a German-funded project, Ruhr-Universitaet Bochum performed validation of in-vessel and containment models of the integral code ASTEC V2, jointly developed by IRSN (France) and GRS (Germany). In this paper selected results of this validation are presented. In the in-vessel part, the main point of interest was the validation of the code capability concerning cladding oxidation and hydrogen generation. The ASTEC calculations of the experiments QUENCH-03 and QUENCH-11 show satisfactory results, despite some necessary adjustments in the input deck. Furthermore, the oxidation models based on the Cathcart–Pawel and Urbanic–Heidrick correlations are not suitable for higher temperatures, while the ASTEC model BEST-FIT, based on the Prater–Courtright approach at high temperature, gives sufficiently reliable results. One part of the containment model validation was the assessment of three hydrogen combustion models of ASTEC against the experiment BMC Ix9. The simulation results of these models differ from each other, and therefore the quality of the simulations depends on the characteristics of each model. Accordingly, the CPA FRONT model, which requires the simplest input parameters, provides the best agreement with the experimental data

  4. Validation of a multi-objective, predictive urban traffic model

    NARCIS (Netherlands)

    Wilmink, I.R.; Haak, P. van den; Woldeab, Z.; Vreeswijk, J.

    2013-01-01

    This paper describes the results of the verification and validation of the ecoStrategic Model, which was developed, implemented and tested in the eCoMove project. The model uses real-time and historical traffic information to determine the current, predicted and desired state of traffic in a

  5. Predicting the ungauged basin : Model validation and realism assessment

    NARCIS (Netherlands)

    Van Emmerik, T.H.M.; Mulder, G.; Eilander, D.; Piet, M.; Savenije, H.H.G.

    2015-01-01

    The hydrological decade on Predictions in Ungauged Basins (PUB) led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to a limited amount of published studies on genuinely ungauged basins, model validation and realism assessment of

  6. Validating a Technology Enhanced Student-Centered Learning Model

    Science.gov (United States)

    Kang, Myunghee; Hahn, Jungsun; Chung, Warren

    2015-01-01

    The Technology Enhanced Student Centered Learning (TESCL) Model in this study presents the core factors that ensure the quality of learning in a technology-supported environment. Although the model was conceptually constructed using a student-centered learning framework and drawing upon previous studies, it should be validated through real-world…

  7. Validation of Power Requirement Model for Active Loudspeakers

    DEFF Research Database (Denmark)

    Schneider, Henrik; Madsen, Anders Normann; Bjerregaard, Ruben

    2015-01-01

    . There are however many advantages that could be harvested from such knowledge like size, cost and efficiency improvements. In this paper a recently proposed power requirement model for active loudspeakers is experimentally validated and the model is expanded to include the closed and vented type enclosures...

  8. Predicting the ungauged basin: model validation and realism assessment

    NARCIS (Netherlands)

    van Emmerik, Tim; Mulder, Gert; Eilander, Dirk; Piet, Marijn; Savenije, Hubert

    2015-01-01

    The hydrological decade on Predictions in Ungauged Basins (PUB) led to many new insights in model development, calibration strategies, data acquisition and uncertainty analysis. Due to a limited amount of published studies on genuinely ungauged basins, model validation and realism assessment of

  9. Validation and comparison of dispersion models of RTARC DSS

    International Nuclear Information System (INIS)

    Duran, J.; Pospisil, M.

    2004-01-01

RTARC DSS (Real Time Accident Release Consequences - Decision Support System) is a computer code developed at VUJE Trnava, Inc. (Stubna, M. et al, 1993). The code calculations include atmospheric transport and diffusion, dose assessment, evaluation and displaying of the affected zones, evaluation of the early health effects, concentration and dose rate time dependence at the selected sites, etc. The simulation of protective measures (sheltering, iodine administration) is included. The aim of this paper is to present the process of validation of the RTARC dispersion models. RTARC includes models for calculating releases over very short distances (Monte Carlo method - MEMOC), short distances (Gaussian straight-line model) and long distances (puff trajectory model - PTM). Validation of the code RTARC was performed using the results of the comparisons and experiments summarized in Table 1 (experiment or comparison - distance - model): wind tunnel experiments (Universitaet der Bundeswehr, Muenchen) - area of the NPP - Monte Carlo method; INEL (Idaho National Engineering Laboratory) multi-tracer atmospheric experiment - short/medium and long distances - Gaussian model and PTM; Model Validation Kit - short distances - Gaussian model; STEP II.b 'Realistic Case Studies' - long distances - PTM; ENSEMBLE comparison - long distances - PTM (orig.)
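The short-distance Gaussian straight-line model class mentioned in the abstract can be illustrated with the textbook ground-reflected plume formula. The dispersion-coefficient growth laws below are illustrative placeholders, not RTARC's actual parameterisation:

```python
import math

def gaussian_plume(Q, u, x, y, z, H, a=0.08, b=0.06):
    """Ground-reflected Gaussian straight-line plume concentration (Bq/m^3).

    Q: release rate (Bq/s), u: wind speed (m/s), (x, y, z): receptor
    position (m) with x downwind, H: effective release height (m).
    The sigma_y/sigma_z growth laws are simple linear placeholders.
    """
    sy, sz = a * x, b * x                         # dispersion widths at x
    c = Q / (2.0 * math.pi * u * sy * sz)
    c *= math.exp(-y**2 / (2 * sy**2))            # crosswind spread
    c *= (math.exp(-(z - H)**2 / (2 * sz**2)) +
          math.exp(-(z + H)**2 / (2 * sz**2)))    # ground-reflection term
    return c

# Centerline ground-level concentration 1 km downwind of a 50 m stack
print(gaussian_plume(Q=1e9, u=5.0, x=1000.0, y=0.0, z=0.0, H=50.0))
```

Validating such a model against tracer experiments, as described for RTARC, amounts to comparing these predicted concentrations with sampler measurements at known downwind positions.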

  10. Model Validation and Verification of Data Mining from the ...

    African Journals Online (AJOL)

    Michael Horsfall

    In this paper, we seek to present a hybrid method for Model Validation and Verification of Data Mining from the ... This model generally states the numerical value of knowledge .... procedures found in the field of software engineering should be ...

  11. Validation of heat transfer models for gap cooling

    International Nuclear Information System (INIS)

    Okano, Yukimitsu; Nagae, Takashi; Murase, Michio

    2004-01-01

For severe accident assessment of a light water reactor, models of heat transfer in a narrow annular gap between overheated core debris and a reactor pressure vessel are important for evaluating vessel integrity and accident management. The authors developed and improved the models of heat transfer. However, validation was not sufficient for applicability of the gap heat flux correlation to the debris cooling in the vessel lower head and applicability of the local boiling heat flux correlations to the high-pressure conditions. Therefore, in this paper, we evaluated the validity of the heat transfer models and correlations by analyses for ALPHA and LAVA experiments where molten aluminum oxide (Al₂O₃) at about 2700 K was poured into the high-pressure water pool in a small-scale simulated vessel lower head. In the heating process of the vessel wall, the calculated heating rate and peak temperature agreed well with the measured values, and the validity of the heat transfer models and gap heat flux correlation was confirmed. In the cooling process of the vessel wall, the calculated cooling rate was compared with the measured value, and the validity of the nucleate boiling heat flux correlation was confirmed. The peak temperatures of the vessel wall in ALPHA and LAVA experiments were lower than the temperature at the minimum heat flux point between film boiling and transition boiling, so the minimum heat flux correlation could not be validated. (author)

  12. Developing rural palliative care: validating a conceptual model.

    Science.gov (United States)

    Kelley, Mary Lou; Williams, Allison; DeMiglio, Lily; Mettam, Hilary

    2011-01-01

    The purpose of this research was to validate a conceptual model for developing palliative care in rural communities. This model articulates how local rural healthcare providers develop palliative care services according to four sequential phases. The model has roots in concepts of community capacity development, evolves from collaborative, generalist rural practice, and utilizes existing health services infrastructure. It addresses how rural providers manage challenges, specifically those related to: lack of resources, minimal community understanding of palliative care, health professionals' resistance, the bureaucracy of the health system, and the obstacles of providing services in rural environments. Seven semi-structured focus groups were conducted with interdisciplinary health providers in 7 rural communities in two Canadian provinces. Using a constant comparative analysis approach, focus group data were analyzed by examining participants' statements in relation to the model and comparing emerging themes in the development of rural palliative care to the elements of the model. The data validated the conceptual model as the model was able to theoretically predict and explain the experiences of the 7 rural communities that participated in the study. New emerging themes from the data elaborated existing elements in the model and informed the requirement for minor revisions. The model was validated and slightly revised, as suggested by the data. The model was confirmed as being a useful theoretical tool for conceptualizing the development of rural palliative care that is applicable in diverse rural communities.

  13. Importance of Computer Model Validation in Pyroprocessing Technology Development

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Y. E.; Li, Hui; Yim, M. S. [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2014-05-15

In this research, we developed a plan for experimental validation of one of the computer models developed for ER process modeling, i.e., the ERAD code. Several candidate surrogate materials were selected for the experiment considering their chemical and physical properties. Molten salt-based pyroprocessing technology is being examined internationally as an alternative to aqueous technology for treating spent nuclear fuel. The central process in pyroprocessing is electrorefining (ER), which separates uranium from the transuranic elements and fission products present in spent nuclear fuel. ER is a widely used process in the minerals industry to purify impure metals. Studies of ER using actual spent nuclear fuel materials are problematic for both technical and political reasons. Therefore, the initial effort for ER process optimization is made by using computer models. A number of models have been developed for this purpose. But as validation of these models is incomplete and oftentimes problematic, the simulation results from these models are inherently uncertain.

  14. The turbulent viscosity models and their experimental validation; Les modeles de viscosite turbulente et leur validation experimentale

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

This workshop on turbulent viscosity models and their experimental validation was organized by the 'convection' section of the French society of thermal engineers. Of the 9 papers presented during this workshop, 8 deal with the modeling of turbulent flows inside combustion chambers, turbo-machineries or other energy-related applications, and have been selected for ETDE. (J.S.)

  15. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples, taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern, which show how to use it: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer-scale models. Copyright © 2010 Elsevier Inc. All rights reserved.
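The proposed test, which scores the approximation error against the original model's own stochastic variability, might look like this in outline. Both models below are hypothetical stand-ins, not the authors' epidemic models:

```python
import random
import statistics

def stochastic_model(p, rng):
    """Stand-in for the original stochastic individual-based model:
    a noisy logistic-type outcome (purely illustrative)."""
    return p * (1 - p) + rng.gauss(0.0, 0.02)

def approximation(p):
    """Stand-in analytical approximation of the model's mean behaviour."""
    return p * (1 - p)

def validate(param_grid, n_runs=200, seed=1):
    """For each parameter set, express the approximation error in units
    of the original model's stochastic standard deviation."""
    rng = random.Random(seed)
    scores = {}
    for p in param_grid:
        runs = [stochastic_model(p, rng) for _ in range(n_runs)]
        mu, sd = statistics.mean(runs), statistics.stdev(runs)
        scores[p] = abs(approximation(p) - mu) / sd  # z-like error score
    return scores

scores = validate([0.1, 0.3, 0.5, 0.7, 0.9])
print(all(z < 2.0 for z in scores.values()))
```

A score well below 1 across the parameter range means the approximation error is hidden inside the original model's own noise; a score persistently above a few standard deviations would invalidate the approximation for those parameters.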

  16. Field validation of the contaminant transport model, FEMA

    International Nuclear Information System (INIS)

    Wong, K.-F.V.

    1986-01-01

    The work describes the validation with field data of a finite element model of material transport through aquifers (FEMA). Field data from the Idaho Chemical Processing Plant, Idaho, USA and from the 58th Street landfill in Miami, Florida, USA are used. In both cases the model was first calibrated and then integrated over a span of eight years to check on the predictive capability of the model. Both predictive runs gave results that matched well with available data. (author)
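The calibrate-then-predict procedure described (calibration on early data, then a multi-year predictive run checked against later data) can be sketched with a toy model. The exponential-decay model and the data below are hypothetical and unrelated to the actual FEMA finite element code:

```python
# Split-sample validation sketch: fit a simple exponential-decay
# concentration model on a calibration window, then run it forward
# and compare with held-out observations. All data are hypothetical.
import math

years = list(range(8))
obs = [100.0, 82.0, 67.0, 55.0, 45.0, 37.0, 30.0, 25.0]  # invented series

# Calibrate the decay rate k on the first 4 years (least squares in log space)
cal_t, cal_c = years[:4], obs[:4]
n = len(cal_t)
logs = [math.log(c) for c in cal_c]
num = n * sum(t * L for t, L in zip(cal_t, logs)) - sum(cal_t) * sum(logs)
den = n * sum(t * t for t in cal_t) - sum(cal_t) ** 2
k = -num / den                                   # fitted decay rate (1/yr)
c0 = math.exp((sum(logs) + k * sum(cal_t)) / n)  # fitted initial value

# Predictive run over the full span, checked against the held-out years
pred = [c0 * math.exp(-k * t) for t in years]
err = max(abs(p - o) / o for p, o in zip(pred[4:], obs[4:]))
print(round(k, 3), round(err, 3))
```

The point of the split is that the held-out comparison tests *prediction*, not merely the ability to reproduce the data used for calibration.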

  17. Results from the Savannah River Laboratory model validation workshop

    International Nuclear Information System (INIS)

    Pepper, D.W.

    1981-01-01

    To evaluate existing and newly developed air pollution models used in DOE-funded laboratories, the Savannah River Laboratory sponsored a model validation workshop. The workshop used Kr-85 measurements and meteorology data obtained at SRL during 1975 to 1977. Individual laboratories used models to calculate daily, weekly, monthly or annual test periods. Cumulative integrated air concentrations were reported at each grid point and at each of the eight sampler locations

  18. Contact Modelling in Resistance Welding, Part II: Experimental Validation

    DEFF Research Database (Denmark)

    Song, Quanfeng; Zhang, Wenqi; Bay, Niels

    2006-01-01

Contact algorithms in resistance welding presented in the previous paper are experimentally validated in the present paper. In order to verify the mechanical contact algorithm, two types of experiments, i.e. sandwich upsetting of circular, cylindrical specimens and compression tests of discs with a solid ring projection towards a flat ring, are carried out at room temperature. The complete algorithm, involving not only the mechanical model but also the thermal and electrical models, is validated by projection welding experiments. The experimental results are in satisfactory agreement...

  19. Calibration and Validation of a One-Dimensional Complex Marine Biogeochemical Fluxes Model in Different Areas of the Northern Adriatic Shelf.

    Science.gov (United States)

    Vichi, M.; Oddo, P.; Zavatarelli, M.; Coluccelli, A.; Coppini, G.; Celio, M.; Fonda Umani, S.; Pinardi, N.

In this contribution we show results from numerical simulations carried out with a complex biogeochemical fluxes model coupled with a one-dimensional high-resolution hydrodynamical model and implemented at three different locations of the northern Adriatic shelf. One location is directly affected by the Po river influence, one has more open-sea characteristics, and one, located in the Gulf of Trieste, has an intermediate behavior; emphasis is put on the comparison with observations and on the functioning of the northern Adriatic ecosystem in the three areas. The work has been performed in a climatological context and has to be considered as preliminary to the development of three-dimensional numerical simulations. Biogeochemical model parameterizations have been improved with a detailed description of bacterial substrate utilization associated with the quality of the dissolved organic matter (DOM) in order to improve model skill in capturing the observed DOM dynamics in the basin. The coupled model has been calibrated and validated at the three locations by means of climatological datasets. Results show satisfactory model behavior in simulating local seasonal dynamics within the limits of the available boundary conditions and the one-dimensional implementation. Comparisons with available in situ measurements of primary and bacterial production and bacterial abundances have been performed at all locations. Model-simulated rates and bacterial dynamics are of the same order of magnitude as observations and show a qualitatively correct time evolution. The importance of temperature as a factor controlling bacterial efficiency is investigated with sensitivity experiments on the model parameterizations. The different model behavior and pelagic ecosystem structure developed by the model at the three locations can be attributed to the local hydrodynamical features and interactions with external inputs of nutrients.
The onset of the winter/spring bloom in the climatological

  20. Validation of spectral gas radiation models under oxyfuel conditions

    Energy Technology Data Exchange (ETDEWEB)

    Becher, Johann Valentin

    2013-05-15

Combustion of hydrocarbon fuels with pure oxygen results in a different flue gas composition than combustion with air. Standard computational-fluid-dynamics (CFD) spectral gas radiation models for air combustion are therefore out of their validity range in oxyfuel combustion. This thesis provides a common spectral basis for the validation of new spectral models. A literature review about fundamental gas radiation theory, spectral modeling and experimental methods provides the reader with a basic understanding of the topic. In the first results section, this thesis validates detailed spectral models with high-resolution spectral measurements in a gas cell with the aim of recommending one model as the best benchmark model. In the second results section, spectral measurements from a turbulent natural gas flame - as an example for a technical combustion process - are compared to simulated spectra based on measured gas atmospheres. The third results section compares simplified spectral models to the benchmark model recommended in the first results section and gives a ranking of the proposed models based on their accuracy. A concluding section gives recommendations for the selection and further development of simplified spectral radiation models. Gas cell transmissivity spectra in the spectral range of 2.4-5.4 µm of water vapor and carbon dioxide in the temperature range from 727 °C to 1500 °C and at different concentrations were compared in the first results section at a nominal resolution of 32 cm⁻¹ to line-by-line models from different databases, two statistical-narrow-band models and the exponential-wide-band model. The two statistical-narrow-band models EM2C and RADCAL showed good agreement with a maximal band transmissivity deviation of 3%. The exponential-wide-band model showed a deviation of 6%. The new line-by-line database HITEMP2010 had the lowest band transmissivity deviation of 2.2% and was therefore recommended as a reference model for the
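The band-transmissivity comparison underlying these deviation figures can be sketched as follows. The spectra and helper functions below are hypothetical illustrations, not the thesis's actual measurements or models:

```python
import numpy as np

def band_transmissivity(nu, tau, band=32.0):
    """Average a high-resolution transmissivity spectrum over fixed-width
    wavenumber bands (cm^-1) before model comparison."""
    edges = np.arange(nu.min(), nu.max() + band, band)
    idx = np.digitize(nu, edges) - 1
    return np.array([tau[idx == i].mean() for i in range(len(edges) - 1)])

def max_band_deviation(nu, tau_ref, tau_model, band=32.0):
    """Maximum absolute band-transmissivity deviation between a reference
    spectrum (e.g. measured or line-by-line) and a candidate model."""
    return float(np.max(np.abs(band_transmissivity(nu, tau_ref, band)
                               - band_transmissivity(nu, tau_model, band))))

# Hypothetical spectra on a fine grid spanning roughly 2.4-5.4 µm
nu = np.linspace(1850.0, 4170.0, 5000)           # wavenumber (cm^-1)
tau_ref = 0.5 + 0.4 * np.sin(nu / 300.0) ** 2    # invented reference
tau_model = tau_ref + 0.02                       # model biased by 2 %
print(round(max_band_deviation(nu, tau_ref, tau_model), 3))  # 0.02
```

Degrading both spectra to the same nominal band resolution before differencing is what makes line-by-line, narrow-band and wide-band models comparable on a common basis.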

  1. Validation techniques of agent based modelling for geospatial simulations

    Directory of Open Access Journals (Sweden)

    M. Darvishi

    2014-10-01

Full Text Available One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI’s ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. But a key challenge of ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  2. Validation techniques of agent based modelling for geospatial simulations

    Science.gov (United States)

    Darvishi, M.; Ahmadi, G.

    2014-10-01

One of the most interesting aspects of modelling and simulation studies is describing real-world phenomena that have specific properties, especially those that occur at large scales and have dynamic and complex behaviours. Studying these phenomena in the laboratory is costly and in most cases impossible. Therefore, miniaturization of world phenomena in the framework of a model in order to simulate the real phenomena is a reasonable and scientific approach to understanding the world. Agent-based modelling and simulation (ABMS) is a new modelling method comprising multiple interacting agents. It has been used in different areas, for instance geographic information systems (GIS), biology, economics, social science and computer science. The emergence of ABM toolkits in GIS software libraries (e.g. ESRI's ArcGIS, OpenMap, GeoTools, etc.) for geospatial modelling is an indication of the growing interest of users in the special capabilities of ABMS. Since ABMS is inherently similar to human cognition, it can be built easily and is applicable to a wider range of applications than traditional simulation. But a key challenge of ABMS is the difficulty of validation and verification. Because of frequently emerging patterns, strong dynamics in the system and the complex nature of ABMS, it is hard to validate and verify ABMS by conventional validation methods. Therefore, an attempt to find appropriate validation techniques for ABM seems necessary. In this paper, after reviewing the principles and concepts of ABM and its applications, the validation techniques and challenges of ABM validation are discussed.

  3. Validation of Computer Models for Homeland Security Purposes

    International Nuclear Information System (INIS)

    Schweppe, John E.; Ely, James; Kouzes, Richard T.; McConn, Ronald J.; Pagh, Richard T.; Robinson, Sean M.; Siciliano, Edward R.; Borgardt, James D.; Bender, Sarah E.; Earnhart, Alison H.

    2005-01-01

    At Pacific Northwest National Laboratory, we are developing computer models of radiation portal monitors for screening vehicles and cargo. Detailed models of the radiation detection equipment, vehicles, cargo containers, cargos, and radioactive sources have been created. These are used to determine the optimal configuration of detectors and the best alarm algorithms for the detection of items of interest while minimizing nuisance alarms due to the presence of legitimate radioactive material in the commerce stream. Most of the modeling is done with the Monte Carlo code MCNP to describe the transport of gammas and neutrons from extended sources through large, irregularly shaped absorbers to large detectors. A fundamental prerequisite is the validation of the computational models against field measurements. We describe the first step of this validation process, the comparison of the models to measurements with bare static sources
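As a minimal illustration of the kind of alarm algorithm such models are used to tune, a k-sigma gross-count decision rule under Poisson counting statistics might look like the following. This is an assumed textbook rule, not PNNL's actual algorithm:

```python
import math

def ksigma_alarm(gross_counts, bkg_counts, bkg_time, count_time, k=4.0):
    """Simple k-sigma gross-count alarm decision: alarm when the counts in
    the occupancy window exceed the expected background by k standard
    deviations (Poisson statistics). Illustrative only."""
    bkg_rate = bkg_counts / bkg_time
    expected = bkg_rate * count_time
    threshold = expected + k * math.sqrt(expected)
    return gross_counts > threshold

# Background: 10 000 counts in 100 s; one-second vehicle occupancy window
print(ksigma_alarm(gross_counts=150, bkg_counts=10000,
                   bkg_time=100.0, count_time=1.0))  # True
```

Tuning `k` trades detection sensitivity against nuisance alarms from legitimate radioactive material in the commerce stream, which is exactly the optimization the simulated portal configurations support.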

  4. Thermal hydraulic model validation for HOR mixed core fuel management

    International Nuclear Information System (INIS)

    Gibcus, H.P.M.; Vries, J.W. de; Leege, P.F.A. de

    1997-01-01

A thermal-hydraulic core management model has been developed for the Hoger Onderwijsreactor (HOR), a 2 MW pool-type university research reactor. The model was adopted for safety analysis purposes in the framework of HEU/LEU core conversion studies. It is applied in the thermal-hydraulic computer code SHORT (Steady-state HOR Thermal-hydraulics), which is presently in use for designing core configurations and for in-core fuel management. An elaborate measurement program was performed to establish the core hydraulic characteristics for a variety of conditions. The hydraulic data were obtained with a dummy fuel element with special equipment allowing, among other things, direct measurement of the true core flow rate. Using these data the thermal-hydraulic model was validated experimentally. The model, experimental tests, and model validation are discussed. (author)

  5. Validation of the newborn larynx modeling with aerodynamical experimental data.

    Science.gov (United States)

    Nicollas, R; Giordano, J; Garrel, R; Medale, M; Caminat, P; Giovanni, A; Ouaknine, M; Triglia, J M

    2009-06-01

Many authors have studied modelling of the adult larynx, but the mechanisms of newborn voice production have very rarely been investigated. After validating a numerical model with acoustic data, studies were performed on larynges of human fetuses in order to validate this model with aerodynamical experiments. Anatomical measurements were performed and a simplified numerical model was built using Fluent® with the vocal folds in phonatory position. The results obtained are in good agreement with those obtained by laser Doppler velocimetry (LDV) and high-frame-rate particle image velocimetry (HFR-PIV) on an experimental bench with excised human fetus larynges. It appears that computing with first-cry physiological parameters leads to a model which is close to those obtained in experiments with real organs.

  6. Validation of the dynamic model for a pressurized water reactor

    International Nuclear Information System (INIS)

    Zwingelstein, Gilles.

    1979-01-01

Dynamic model validation is a necessary procedure to ensure that the developed empirical or physical models satisfactorily represent the dynamic behavior of the actual plant during normal or abnormal transients. For small transients, physical models which represent the isolated core, the isolated steam generator and the overall pressurized water reactor are described. Using data collected during the step power changes that occurred during the startup procedures, comparisons of simulated and actual transients are given at 30% and 100% of full power. The agreement between the transients derived from the model and those recorded on the plant indicates that the developed models are well suited for functional or control studies

  7. Development and validation of a mass casualty conceptual model.

    Science.gov (United States)

    Culley, Joan M; Effken, Judith A

    2010-03-01

    To develop and validate a conceptual model that provides a framework for the development and evaluation of information systems for mass casualty events. The model was designed based on extant literature and existing theoretical models. A purposeful sample of 18 experts validated the model. Open-ended questions, as well as a 7-point Likert scale, were used to measure expert consensus on the importance of each construct and its relationship in the model and the usefulness of the model to future research. Computer-mediated applications were used to facilitate a modified Delphi technique through which a panel of experts provided validation for the conceptual model. Rounds of questions continued until consensus was reached, as measured by an interquartile range (no more than 1 scale point for each item); stability (change in the distribution of responses less than 15% between rounds); and percent agreement (70% or greater) for indicator questions. Two rounds of the Delphi process were needed to satisfy the criteria for consensus or stability related to the constructs, relationships, and indicators in the model. The panel reached consensus or sufficient stability to retain all 10 constructs, 9 relationships, and 39 of 44 indicators. Experts viewed the model as useful (mean of 5.3 on a 7-point scale). Validation of the model provides the first step in understanding the context in which mass casualty events take place and identifying variables that impact outcomes of care. This study provides a foundation for understanding the complexity of mass casualty care, the roles that nurses play in mass casualty events, and factors that must be considered in designing and evaluating information-communication systems to support effective triage under these conditions.
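
The quantitative stopping criteria quoted above (interquartile range of no more than 1 scale point, less than a 15% shift in the response distribution between rounds, and at least 70% agreement) can be expressed as a short check. The helper below is an illustrative sketch, not the instrument used in the study; in particular, "agreement" is taken here as the fraction of ratings in the top two scale points, which is one common reading:

```python
import statistics

def delphi_consensus(prev_round, curr_round, scale_max=7):
    """Check the three stopping criteria for a modified Delphi round.
    Illustrative sketch only; thresholds follow the study's description."""
    # Criterion 1: interquartile range of no more than 1 scale point.
    q1, _, q3 = statistics.quantiles(curr_round, n=4)
    iqr_ok = (q3 - q1) <= 1
    # Criterion 2: less than a 15% shift in the response distribution
    # between rounds (measured as total variation distance).
    shift = 0.5 * sum(
        abs(prev_round.count(s) / len(prev_round)
            - curr_round.count(s) / len(curr_round))
        for s in range(1, scale_max + 1)
    )
    stable = shift < 0.15
    # Criterion 3: at least 70% agreement (top two scale points, assumed).
    agreement = sum(1 for s in curr_round if s >= scale_max - 1) / len(curr_round)
    return iqr_ok, stable, agreement >= 0.70
```

A round passes only when all three checks return true, mirroring the paper's requirement that consensus or stability be reached on every indicator.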

  8. Verification and Validation of FAARR Model and Data Envelopment Analysis Models for United States Army Recruiting

    National Research Council Canada - National Science Library

    Piskator, Gene

    1998-01-01

    ...) model and to develop a Data Envelopment Analysis (DEA) modeling strategy. First, the FAARR model was verified using a simulation of a known production function and validated using sensitivity analysis and ex-post forecasts...

  9. Continuous validation of ASTEC containment models and regression testing

    International Nuclear Information System (INIS)

    Nowack, Holger; Reinke, Nils; Sonnenkalb, Martin

    2014-01-01

    The focus of the ASTEC (Accident Source Term Evaluation Code) development at GRS is primarily on the containment module CPA (Containment Part of ASTEC), whose modelling is to a large extent based on the GRS containment code COCOSYS (COntainment COde SYStem). Validation is usually understood as the approval of the modelling capabilities through calculations of appropriate experiments by external users different from the code developers. During the development of ASTEC CPA, bugs and unintended side effects may occur, which leads to changes in the results of the initially conducted validation. Because a considerable number of developers are involved in the coding of ASTEC modules, validation of the code alone, even if executed repeatedly, is not sufficient. Therefore, a regression testing procedure has been implemented to ensure that the initially obtained validation results remain valid in succeeding code versions. Within the regression testing procedure, calculations of experiments and plant sequences are performed with the same input deck but with two different code versions. For every test case the up-to-date code version is compared to the preceding one on the basis of physical parameters deemed characteristic for the test case under consideration. For post-calculations of experiments, a comparison to experimental data is also carried out. Three validation cases from the regression testing procedure are presented in this paper. The very good post-calculation of the HDR E11.1 experiment shows the high quality of the thermal-hydraulics modelling in ASTEC CPA. Aerosol behaviour is validated against the BMC VANAM M3 experiment, and the results also show very good agreement with experimental data. Finally, iodine behaviour is checked in the validation test case of the THAI IOD-11 experiment. Within this test case, the comparison of the ASTEC versions V2.0r1 and V2.0r2 shows how an error was detected by the regression testing.
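
The regression-testing loop described above, running the same input deck through two code versions and comparing characteristic physical parameters, can be sketched as follows. The parameter names, values, and the 2% tolerance are illustrative, not ASTEC's actual test suite:

```python
def regression_check(ref_results, new_results, rel_tol=0.02):
    """Flag characteristic parameters whose relative deviation between two
    code versions exceeds a tolerance. Sketch of the regression-testing
    idea only; names and tolerance are hypothetical."""
    flagged = {}
    for key, ref in ref_results.items():
        new = new_results[key]
        rel_dev = abs(new - ref) / abs(ref) if ref != 0 else abs(new)
        if rel_dev > rel_tol:
            flagged[key] = rel_dev
    return flagged  # an empty dict means the versions agree within tolerance

# Same (hypothetical) input deck run through two code versions:
v_old = {"peak_pressure_bar": 2.45, "max_temp_K": 413.0, "iodine_airborne_kg": 0.012}
v_new = {"peak_pressure_bar": 2.46, "max_temp_K": 413.5, "iodine_airborne_kg": 0.019}
suspect = regression_check(v_old, v_new)  # only the iodine result deviates
```

A non-empty result signals exactly the kind of unintended side effect the procedure is designed to catch, as in the V2.0r1/V2.0r2 iodine comparison mentioned above.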

  10. Validation of computational model ALDERSON/EGSnrc for chest radiography

    International Nuclear Information System (INIS)

    Muniz, Bianca C.; Santos, André L. dos; Menezes, Claudio J.M.

    2017-01-01

    To perform dose studies in situations of exposure to radiation without exposing individuals, numerical dosimetry uses Computational Exposure Models (ECMs). Composed essentially of a radioactive source simulation algorithm, a voxel phantom representing the human anatomy, and a Monte Carlo code, ECMs must be validated to determine the reliability with which they represent the physical arrangement. The objective of this work is to validate the ALDERSON/EGSnrc ECM through comparisons between experimental measurements obtained with an ionization chamber and virtual simulations using the Monte Carlo method to determine the ratio of the input and output radiation doses. Preliminary results of these comparisons showed that the ECM reproduced the results of the experimental measurements performed with the physical phantom with a relative error of less than 10%, validating the use of this model for simulations of chest radiographs and estimates of radiation doses to tissues in the irradiated structures.

  11. A practical guide for operational validation of discrete simulation models

    Directory of Open Access Journals (Sweden)

    Fabiano Leal

    2011-04-01

    As the number of simulation experiments increases, the necessity for validation and verification of these models demands special attention on the part of simulation practitioners. The current scientific literature shows that descriptions of operational validation in many papers do not agree on the importance assigned to this process or on the techniques applied, whether subjective or objective. With the aim of orienting professionals, researchers and students in simulation, this article elaborates a practical guide through the compilation of statistical techniques for the operational validation of discrete simulation models. Finally, the guide's applicability was evaluated using two study objects representing two manufacturing cells, one from the automobile industry and the other from a Brazilian tech company. For each application, the guide identified distinct steps, due to the different aspects that characterize the analyzed distributions.

  12. Progress in Geant4 Electromagnetic Physics Modelling and Validation

    International Nuclear Information System (INIS)

    Apostolakis, J; Burkhardt, H; Ivanchenko, V N; Asai, M; Bagulya, A; Grichine, V; Brown, J M C; Chikuma, N; Cortes-Giraldo, M A; Elles, S; Jacquemier, J; Guatelli, S; Incerti, S; Kadri, O; Maire, M; Urban, L; Pandola, L; Sawkey, D; Toshito, T; Yamashita, T

    2015-01-01

    In this work we report on recent improvements in the electromagnetic (EM) physics models of Geant4 and new validations of EM physics. Improvements have been made in models of the photoelectric effect, Compton scattering, gamma conversion to electron and muon pairs, fluctuations of energy loss, multiple scattering, synchrotron radiation, and high energy positron annihilation. The results of these developments are included in the new Geant4 version 10.1 and in patches to previous versions 9.6 and 10.0 that are planned to be used for production for run-2 at LHC. The Geant4 validation suite for EM physics has been extended and new validation results are shown in this work. In particular, the effect of gamma-nuclear interactions on EM shower shape at LHC energies is discussed. (paper)

  13. Validation of statistical models for creep rupture by parametric analysis

    Energy Technology Data Exchange (ETDEWEB)

    Bolton, J., E-mail: john.bolton@uwclub.net [65, Fisher Ave., Rugby, Warks CV22 5HW (United Kingdom)

    2012-01-15

    Statistical analysis is an efficient method for the optimisation of any candidate mathematical model of creep rupture data, and for the comparative ranking of competing models. However, when a series of candidate models has been examined and the best of the series has been identified, there is no statistical criterion to determine whether a yet more accurate model might be devised. Hence there remains some uncertainty that the best of any series examined is sufficiently accurate to be considered reliable as a basis for extrapolation. This paper proposes that models should be validated primarily by parametric graphical comparison to rupture data and rupture gradient data. It proposes that no mathematical model should be considered reliable for extrapolation unless the visible divergence between model and data is so small as to leave no apparent scope for further reduction. This study is based on the data for a 12% Cr alloy steel used in BS PD6605:1998 to exemplify its recommended statistical analysis procedure. The models considered in this paper include a) a relatively simple model, b) the PD6605 recommended model and c) a more accurate model of somewhat greater complexity. - Highlights: • The paper discusses the validation of creep rupture models derived from statistical analysis. • It demonstrates that models can be satisfactorily validated by a visual-graphic comparison of models to data. • The method proposed utilises test data both as conventional rupture stress and as rupture stress gradient. • The approach is shown to be more reliable than a well-established and widely used method (BS PD6605).
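
The proposed criterion, comparing a candidate model to the data both as rupture stress and as rupture-stress gradient against log time, can be sketched numerically. The function below is purely illustrative; the linear model form and the data in the usage example are hypothetical, not the 12% Cr steel data set:

```python
import numpy as np

def rupture_divergence(time_h, stress_data, model):
    """Return the divergence between model and data in both rupture stress
    and its gradient with respect to log10(time): the two views the paper
    proposes for visual-graphic validation. `model` maps log10(time) to
    predicted rupture stress."""
    log_t = np.log10(time_h)
    stress_model = model(log_t)
    resid = stress_data - stress_model
    # Gradient comparison exposes curvature mismatches that the stress
    # residuals alone can hide.
    grad_resid = np.gradient(stress_data, log_t) - np.gradient(stress_model, log_t)
    return resid, grad_resid
```

Plotting both residual arrays against log time gives the parametric graphical comparison the paper recommends; a model is accepted only when neither plot shows visible scope for further reduction.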

  14. Multiphysics modelling and experimental validation of high concentration photovoltaic modules

    International Nuclear Information System (INIS)

    Theristis, Marios; Fernández, Eduardo F.; Sumner, Mike; O'Donovan, Tadhg S.

    2017-01-01

    Highlights: • A multiphysics modelling approach for concentrating photovoltaics was developed. • An experimental campaign was conducted to validate the models. • The experimental results were in good agreement with the models. • The multiphysics modelling allows the concentrator’s optimisation. - Abstract: High concentration photovoltaics, equipped with high efficiency multijunction solar cells, have great potential in achieving cost-effective and clean electricity generation at utility scale. Such systems are more complex compared to conventional photovoltaics because of the multiphysics effect that is present. Modelling the power output of such systems is therefore crucial for their further market penetration. Following this line, a multiphysics modelling procedure for high concentration photovoltaics is presented in this work. It combines an open source spectral model, a single diode electrical model and a three-dimensional finite element thermal model. In order to validate the models and the multiphysics modelling procedure against actual data, an outdoor experimental campaign was conducted in Albuquerque, New Mexico using a high concentration photovoltaic monomodule that is thoroughly described in terms of its geometry and materials. The experimental results were in good agreement (within 2.7%) with the predicted maximum power point. This multiphysics approach is relatively more complex when compared to empirical models, but besides the overall performance prediction it can also provide better understanding of the physics involved in the conversion of solar irradiance into electricity. It can therefore be used for the design and optimisation of high concentration photovoltaic modules.

  15. Improving Perovskite Solar Cells: Insights From a Validated Device Model

    NARCIS (Netherlands)

    Sherkar, Tejas S.; Momblona, Cristina; Gil-Escrig, Lidon; Bolink, Henk J.; Koster, L. Jan Anton

    2017-01-01

    To improve the efficiency of existing perovskite solar cells (PSCs), a detailed understanding of the underlying device physics during their operation is essential. Here, a device model has been developed and validated that describes the operation of PSCs and quantitatively explains the role of

  16. Model validation studies of solar systems, Phase III. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Lantz, L.J.; Winn, C.B.

    1978-12-01

    Results obtained from a validation study of the TRNSYS, SIMSHAC, and SOLCOST solar system simulation and design are presented. Also included are comparisons between the FCHART and SOLCOST solar system design programs and some changes that were made to the SOLCOST program. Finally, results obtained from the analysis of several solar radiation models are presented. Separate abstracts were prepared for ten papers.

  17. Validation of a tuber blight (Phytophthora infestans) prediction model

    Science.gov (United States)

    Potato tuber blight caused by Phytophthora infestans accounts for significant losses in storage. There is limited published quantitative data on predicting tuber blight. We validated a tuber blight prediction model developed in New York with cultivars Allegany, NY 101, and Katahdin using independent...

  18. Validating soil phosphorus routines in the SWAT model

    Science.gov (United States)

    Phosphorus transfer from agricultural soils to surface waters is an important environmental issue. Commonly used models like SWAT have not always been updated to reflect improved understanding of soil P transformations and transfer to runoff. Our objective was to validate the ability of the P routin...

  19. Temporal validation for landsat-based volume estimation model

    Science.gov (United States)

    Renaldo J. Arroyo; Emily B. Schultz; Thomas G. Matney; David L. Evans; Zhaofei Fan

    2015-01-01

    Satellite imagery can potentially reduce the costs and time associated with ground-based forest inventories; however, for satellite imagery to provide reliable forest inventory data, it must produce consistent results from one time period to the next. The objective of this study was to temporally validate a Landsat-based volume estimation model in a four county study...

  20. Multiphysics software and the challenge to validating physical models

    International Nuclear Information System (INIS)

    Luxat, J.C.

    2008-01-01

    This paper discusses multiphysics software and the validation of physical models in the nuclear industry. The major challenge is to convert a general-purpose software package into a robust application-specific solution. This requires greater knowledge of the underlying solution techniques and of the limitations of the packages. Good user interfaces and neat graphics do not compensate for any deficiencies.

  1. Technical Note: Calibration and validation of geophysical observation models

    NARCIS (Netherlands)

    Salama, M.S.; van der Velde, R.; van der Woerd, H.J.; Kromkamp, J.C.; Philippart, C.J.M.; Joseph, A.T.; O'Neill, P.E.; Lang, R.H.; Gish, T.; Werdell, P.J.; Su, Z.

    2012-01-01

    We present a method to calibrate and validate observational models that interrelate remotely sensed energy fluxes to geophysical variables of land and water surfaces. Coincident sets of remote sensing observation of visible and microwave radiations and geophysical data are assembled and subdivided

  2. Validating firn compaction model with remote sensing data

    DEFF Research Database (Denmark)

    Simonsen, S. B.; Stenseng, Lars; Sørensen, Louise Sandberg

    A comprehensive understanding of firn processes is of utmost importance, when estimating present and future changes of the Greenland Ice Sheet. Especially, when remote sensing altimetry is used to assess the state of ice sheets and their contribution to global sea level rise, firn compaction...... models have been shown to be a key component. Now, remote sensing data can also be used to validate the firn models. Radar penetrating the upper part of the firn column in the interior part of Greenland shows a clear layering. The observed layers from the radar data can be used as an in-situ validation...... correction relative to the changes in the elevation of the surface observed with remote sensing altimetry? What model time resolution is necessary to resolve the observed layering? What model refinements are necessary to give better estimates of the surface mass balance of the Greenland ice sheet from...

  3. Selection, calibration, and validation of models of tumor growth.

    Science.gov (United States)

    Lima, E A B F; Oden, J T; Hormuth, D A; Yankeelov, T E; Almeida, R C

    2016-11-01

    This paper presents general approaches for addressing some of the most important issues in predictive computational oncology concerned with developing classes of predictive models of tumor growth: first, the process of developing mathematical models of vascular tumors evolving in the complex, heterogeneous macroenvironment of living tissue; second, the selection of the most plausible models among these classes, given relevant observational data; third, the statistical calibration and validation of models in these classes; and finally, the prediction of key Quantities of Interest (QOIs) relevant to patient survival and the effect of various therapies. The most challenging aspect of this endeavor is that all of these issues often involve confounding uncertainties: in observational data, in model parameters, in model selection, and in the features targeted in the prediction. Our approach can be described as "model agnostic" in that no single model is advocated; rather, a general approach is presented that explores powerful mixture-theory representations of tissue behavior while accounting for a range of relevant biological factors, which leads to many potentially predictive models. Representative classes are then identified which provide a starting point for the implementation of the Occam Plausibility Algorithm (OPAL), which enables the modeler to select the most plausible models (for given data) and to determine whether a model is a valid tool for predicting tumor growth and morphology (in vivo). All of these approaches account for uncertainties in the model, the observational data, the model parameters, and the target QOI. We demonstrate these processes by comparing a list of models for tumor growth, including reaction-diffusion models, phase-field models, and models with and without mechanical deformation effects, for glioma growth measured in murine experiments. Examples are provided that exhibit quite acceptable predictions of tumor growth in laboratory

  4. Biogeochemical cycling in the Taiwan Strait

    Digital Repository Service at National Institute of Oceanography (India)

    Naik, H.; Chen, C-T.A.

    Based on repeat observations made during 2001-2003 along two transects in the Taiwan Strait this study aims at understanding factors controlling primary productivity with an emphasis on biogeochemical cycling of nitrogen, the major bio...

  5. Statistical methods for mechanistic model validation: Salt Repository Project

    International Nuclear Information System (INIS)

    Eggett, D.L.

    1988-07-01

    As part of the Department of Energy's Salt Repository Program, Pacific Northwest Laboratory (PNL) is studying the emplacement of nuclear waste containers in a salt repository. One objective of the SRP program is to develop an overall waste package component model which adequately describes such phenomena as container corrosion, waste form leaching, spent fuel degradation, etc., which are possible in the salt repository environment. The form of this model will be proposed, based on scientific principles and relevant salt repository conditions with supporting data. The model will be used to predict the future characteristics of the near field environment. This involves several different submodels such as the amount of time it takes a brine solution to contact a canister in the repository, how long it takes a canister to corrode and expose its contents to the brine, the leach rate of the contents of the canister, etc. These submodels are often tested in a laboratory and should be statistically validated (in this context, validate means to demonstrate that the model adequately describes the data) before they can be incorporated into the waste package component model. This report describes statistical methods for validating these models. 13 refs., 1 fig., 3 tabs

  6. Verification and Validation of Heat Transfer Model of AGREE Code

    Energy Technology Data Exchange (ETDEWEB)

    Tak, N. I. [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Seker, V.; Drzewiecki, T. J.; Downar, T. J. [Department of Nuclear Engineering and Radiological Sciences, Univ. of Michigan, Michigan (United States); Kelly, J. M. [US Nuclear Regulatory Commission, Washington (United States)

    2013-05-15

    The AGREE code was originally developed as a multiphysics simulation code to perform design and safety analysis of Pebble Bed Reactors (PBR). Currently, additional capability for the analysis of Prismatic Modular Reactor (PMR) cores is in progress. The newly implemented fluid model for a PMR core is based on a subchannel approach which has been widely used in the analyses of light water reactor (LWR) cores. A hexagonal fuel (or graphite) block is discretized into triangular prism nodes having effective conductivities. Then, a meso-scale heat transfer model is applied to the unit cell geometry of a prismatic fuel block. Both unit cell geometries of multi-hole and pin-in-hole types of prismatic fuel blocks are considered in AGREE. The main objective of this work is to verify and validate the heat transfer model newly implemented for a PMR core in the AGREE code. The measured data from the HENDEL experiment were used for the validation of the heat transfer model for a pin-in-hole fuel block. However, the HENDEL tests were limited to steady-state conditions of pin-in-hole fuel blocks, and no experimental data are available regarding heat transfer in multi-hole fuel blocks. Therefore, numerical benchmarks using conceptual problems are considered to verify the heat transfer model of AGREE for multi-hole fuel blocks as well as transient conditions. The CORONA and GAMMA+ codes were used to compare the numerical results. In this work, the verification and validation study was performed for the heat transfer model of the AGREE code using the HENDEL experiment and numerical benchmarks of selected conceptual problems. The results of the present work show that the heat transfer model of AGREE is accurate and reliable for prismatic fuel blocks. Further validation of AGREE is in progress for a whole-reactor problem using HTTR safety test data such as control rod withdrawal tests and loss-of-forced-convection tests.
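
The role of a conceptual verification benchmark, checking a numerical solution against a case whose answer is known exactly, can be illustrated with a minimal analogue: steady one-dimensional conduction in a slab with uniform heat generation q and both faces held at zero, for which the analytic solution is T(x) = q·x·(L − x)/(2k). This is not AGREE's heat transfer model, only a sketch of the verification pattern:

```python
import numpy as np

def slab_temperature_fd(n, L, k, q):
    """Central-difference solution of -k T'' = q on (0, L) with T = 0 at
    both faces, on n interior nodes (a toy verification benchmark)."""
    dx = L / (n + 1)
    # Tridiagonal system from -k (T[i-1] - 2 T[i] + T[i+1]) / dx^2 = q
    A = (np.diag(np.full(n, 2.0))
         + np.diag(np.full(n - 1, -1.0), 1)
         + np.diag(np.full(n - 1, -1.0), -1))
    b = np.full(n, q * dx**2 / k)
    return np.linalg.solve(A, b)

# Verification: the discrete solution should match T(x) = q x (L - x) / (2 k)
n, L, k, q = 49, 1.0, 2.0, 100.0
x = (L / (n + 1)) * np.arange(1, n + 1)
T_numeric = slab_temperature_fd(n, L, k, q)
T_exact = q * x * (L - x) / (2.0 * k)
```

Because the exact solution is quadratic, the second-order central difference reproduces it to solver precision, which is exactly the kind of known-answer check a code-to-code or conceptual benchmark provides.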

  7. Validation of the Colorado Retinopathy of Prematurity Screening Model.

    Science.gov (United States)

    McCourt, Emily A; Ying, Gui-Shuang; Lynch, Anne M; Palestine, Alan G; Wagner, Brandie D; Wymore, Erica; Tomlinson, Lauren A; Binenbaum, Gil

    2018-04-01

    The Colorado Retinopathy of Prematurity (CO-ROP) model uses birth weight, gestational age, and weight gain at the first month of life (WG-28) to predict risk of severe retinopathy of prematurity (ROP). In previous validation studies, the model performed very well, predicting virtually all cases of severe ROP and potentially reducing the number of infants who need ROP examinations, warranting validation in a larger, more diverse population. To validate the performance of the CO-ROP model in a large multicenter cohort. This study is a secondary analysis of data from the Postnatal Growth and Retinopathy of Prematurity (G-ROP) Study, a retrospective multicenter cohort study conducted in 29 hospitals in the United States and Canada between January 2006 and June 2012 of 6351 premature infants who received ROP examinations. Sensitivity and specificity for severe (early treatment of ROP [ETROP] type 1 or 2) ROP, and reduction in infants receiving examinations. The CO-ROP model was applied to the infants in the G-ROP data set with all 3 data points (infants would have received examinations if they met all 3 criteria: birth weight, large validation cohort. The model requires all 3 criteria to be met to signal a need for examinations, but some infants with a birth weight or gestational age above the thresholds developed severe ROP. Most of these infants who were not detected by the CO-ROP model had obvious deviation in expected weight trajectories or nonphysiologic weight gain. These findings suggest that the CO-ROP model needs to be revised before considering implementation into clinical practice.
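
Validation of a screening model of this kind comes down to sensitivity, specificity, and the fraction of examinations that could be avoided. The sketch below computes all three from per-infant outcomes; the counts in the usage example are made up, not the G-ROP cohort results:

```python
def screening_performance(outcomes):
    """Compute sensitivity, specificity, and fraction of examinations
    avoided from (model_flagged, had_severe_rop) pairs."""
    tp = sum(1 for flagged, severe in outcomes if flagged and severe)
    fn = sum(1 for flagged, severe in outcomes if not flagged and severe)
    tn = sum(1 for flagged, severe in outcomes if not flagged and not severe)
    fp = sum(1 for flagged, severe in outcomes if flagged and not severe)
    sensitivity = tp / (tp + fn)       # severe cases the model catches
    specificity = tn / (tn + fp)       # unaffected infants it screens out
    exams_avoided = (tn + fn) / len(outcomes)  # infants not flagged for exams
    return sensitivity, specificity, exams_avoided

# Hypothetical cohort: 100 severe cases, 900 infants without severe ROP.
cohort = ([(True, True)] * 98 + [(False, True)] * 2
          + [(True, False)] * 600 + [(False, False)] * 300)
sens, spec, avoided = screening_performance(cohort)
```

For screening, sensitivity is the criterion that must stay near 100%: each false negative is a missed case of severe ROP, which is why the study flags infants above the thresholds who nonetheless developed severe disease.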

  8. A process improvement model for software verification and validation

    Science.gov (United States)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and space station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  9. Global biogeochemical cycle of vanadium.

    Science.gov (United States)

    Schlesinger, William H; Klein, Emily M; Vengosh, Avner

    2017-12-26

    Synthesizing published data, we provide a quantitative summary of the global biogeochemical cycle of vanadium (V), including both human-derived and natural fluxes. Through mining of V ores (130 × 10^9 g V/y) and extraction and combustion of fossil fuels (600 × 10^9 g V/y), humans are the predominant force in the geochemical cycle of V at Earth's surface. Human emissions of V to the atmosphere are now likely to exceed background emissions by as much as a factor of 1.7, and, presumably, we have altered the deposition of V from the atmosphere by a similar amount. Excessive V in air and water has potential, but poorly documented, consequences for human health. Much of the atmospheric flux probably derives from emissions from the combustion of fossil fuels, but the magnitude of this flux depends on the type of fuel, with relatively low emissions from coal and higher contributions from heavy crude oils, tar sands bitumen, and petroleum coke. Increasing interest in petroleum derived from unconventional deposits is likely to lead to greater emissions of V to the atmosphere in the near future. Our analysis further suggests that the flux of V in rivers has been increased by about 15% by human activities. Overall, the budget of dissolved V in the oceans is remarkably well balanced, with inputs and outputs of about 40 × 10^9 g V/y to 50 × 10^9 g V/y, and a mean residence time for dissolved V in seawater of about 130,000 y with respect to inputs from rivers.
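
At steady state, mean residence time equals reservoir inventory divided by input flux, so the abstract's own numbers (inputs of roughly 45 × 10^9 g V/y at the midpoint of the quoted range, and a residence time of about 130,000 y) imply an ocean inventory of dissolved V near 6 × 10^15 g. A one-line check:

```python
def implied_inventory(input_flux_g_per_y, residence_time_y):
    """Steady-state reservoir inventory from residence time = inventory / flux."""
    return input_flux_g_per_y * residence_time_y

# Midpoint of the quoted 40-50e9 g V/y input flux, ~130,000 y residence time
ocean_v = implied_inventory(45e9, 1.3e5)  # ~5.9e15 g dissolved V
```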

  10. Biotic and Biogeochemical Feedbacks to Climate Change

    Science.gov (United States)

    Torn, M. S.; Harte, J.

    2002-12-01

    Feedbacks to paleoclimate change are evident in ice core records showing correlations of temperature with carbon dioxide, nitrous oxide, and methane. Such feedbacks may be explained by plant and microbial responses to climate change, and are likely to occur under impending climate warming, as evidenced by results of ecosystem climate manipulation experiments and biometeorological observations along ecological and climate gradients. Ecosystems exert considerable influence on climate, by controlling the energy and water balance of the land surface as well as being sinks and sources of greenhouse gases. This presentation will focus on biotic and biogeochemical climate feedbacks on decadal to century time scales, emphasizing carbon storage and energy exchange. In addition to the direct effects of climate on decomposition rates and of climate and CO2 on plant productivity, climate change can alter species composition; because plant species differ in their surface properties, productivity, phenology, and chemistry, climate-induced changes in plant species composition can exert a large influence on the magnitude and sign of climate feedbacks. We discuss the effects of plant species on ecosystem carbon storage that result from characteristic differences in plant biomass and lifetime, allocation to roots vs. leaves, litter quality, microclimate for decomposition and the ultimate stabilization of soil organic matter. We compare the effect of species transitions on transpiration, albedo, and other surface properties, with the effect of elevated CO2 and warming on single species' surface exchange. Global change models and experiments that investigate the effect of climate only on existing vegetation may miss the biggest impacts of climate change on biogeochemical cycling and feedbacks. Quantification of feedbacks will require understanding how species composition and long-term soil processes will change under global warming. Although no single approach, be it experimental

  11. Comparative calculations and validation studies with atmospheric dispersion models

    International Nuclear Information System (INIS)

    Paesler-Sauer, J.

    1986-11-01

    This report presents the results of an intercomparison of different mesoscale dispersion models and of measured data from tracer experiments. The model types taking part in the intercomparison are Gaussian-type, numerical Eulerian, and Lagrangian dispersion models. They are suited for calculating the atmospheric transport of radionuclides released from a nuclear installation. For the model intercomparison, artificial meteorological situations were defined and corresponding computational problems were formulated. For the purpose of model validation, real dispersion situations from tracer experiments were used as input data for model calculations; in these cases calculated and measured time-integrated concentrations close to the ground are compared. Finally, the models are evaluated with respect to their efficiency in solving the problems with the aid of objective methods. (orig./HP) [de]

  12. Improvement and Validation of Weld Residual Stress Modelling Procedure

    International Nuclear Information System (INIS)

    Zang, Weilin; Gunnars, Jens; Dong, Pingsha; Hong, Jeong K.

    2009-06-01

    The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, by taking advantage of the recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data is not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through-thickness stress distributions by validation against experimental measurements. Three austenitic stainless steel butt-weld cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available, then a mixed hardening model should be used.

  13. Validated TRNSYS Model for Solar Assisted Space Heating System

    International Nuclear Information System (INIS)

    Abdalla, Nedal

    2014-01-01

    The present study involves a validated TRNSYS model for a solar assisted space heating system as applied to a residential building in Jordan, using the new detailed radiation models of TRNSYS 17.1 and the geometric building model Trnsys3d for the Google SketchUp 3D drawing program. The annual heating load for a building (Solar House) located at the Royal Scientific Society (RSS) in Jordan is estimated under the climatological conditions of Amman. The aim of this paper is to compare the measured thermal performance of the Solar House with that modeled using TRNSYS. The results showed that the annual measured space heating load for the building was 6,188 kWh while the heating load for the modeled building was 6,391 kWh. Moreover, the measured solar fraction for the solar system was 50% while the modeled solar fraction was 55%. A comparison of modeled and measured data resulted in percentage mean absolute errors for solar energy for space heating, auxiliary heating and solar fraction of 13%, 7% and 10%, respectively. The validated model will be useful for long-term performance simulation under different weather and operating conditions. (author)
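    A percentage mean absolute error of the kind quoted above (13%, 7%, 10%) can be computed from paired measured/modeled series. A minimal sketch, assuming one common definition (mean absolute deviation relative to the mean measured value); the function name and the numbers below are illustrative, not the study's data:

```python
def pmae(measured, modeled):
    """Percentage mean absolute error: mean |measured - modeled|,
    expressed relative to the mean measured value."""
    errors = [abs(m, ) if False else abs(m - p) for m, p in zip(measured, modeled)]
    return 100.0 * (sum(errors) / len(errors)) / (sum(measured) / len(measured))

# Illustrative (invented) monthly space-heating loads in kWh:
measured = [1450, 1210, 980, 610, 210, 0, 0, 0, 90, 320, 680, 948]
modeled  = [1500, 1180, 1050, 590, 230, 0, 0, 0, 110, 300, 700, 931]
print(f"PMAE = {pmae(measured, modeled):.1f}%")
```

The paper does not spell out its exact error formula, so this should be read as one plausible convention rather than the authors' definition.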

  14. Improvement and Validation of Weld Residual Stress Modelling Procedure

    Energy Technology Data Exchange (ETDEWEB)

    Zang, Weilin; Gunnars, Jens (Inspecta Technology AB, Stockholm (Sweden)); Dong, Pingsha; Hong, Jeong K. (Center for Welded Structures Research, Battelle, Columbus, OH (United States))

    2009-06-15

    The objective of this work is to identify and evaluate improvements for the residual stress modelling procedure currently used in Sweden. There is a growing demand to eliminate any unnecessary conservatism involved in residual stress assumptions. The study was focused on the development and validation of an improved weld residual stress modelling procedure, by taking advantage of the recent advances in residual stress modelling and stress measurement techniques. The major changes applied in the new weld residual stress modelling procedure are: - Improved procedure for heat source calibration based on use of analytical solutions. - Use of an isotropic hardening model where mixed hardening data is not available. - Use of an annealing model for improved simulation of strain relaxation in re-heated material. The new modelling procedure is demonstrated to capture the main characteristics of the through thickness stress distributions by validation to experimental measurements. Three austenitic stainless steel butt-welds cases are analysed, covering a large range of pipe geometries. From the cases it is evident that there can be large differences between the residual stresses predicted using the new procedure, and the earlier procedure or handbook recommendations. Previously recommended profiles could give misleading fracture assessment results. The stress profiles according to the new procedure agree well with the measured data. If data is available then a mixed hardening model should be used

  15. Geographic and temporal validity of prediction models: Different approaches were useful to examine model performance

    NARCIS (Netherlands)

    P.C. Austin (Peter); D. van Klaveren (David); Y. Vergouwe (Yvonne); D. Nieboer (Daan); D.S. Lee (Douglas); E.W. Steyerberg (Ewout)

    2016-01-01

    textabstractObjective: Validation of clinical prediction models traditionally refers to the assessment of model performance in new patients. We studied different approaches to geographic and temporal validation in the setting of multicenter data from two time periods. Study Design and Setting: We

  16. Validation of an O-18 leaf water enrichment model

    Energy Technology Data Exchange (ETDEWEB)

    Jaeggi, M.; Saurer, M.; Siegwolf, R.

    2002-03-01

    The seasonal trend in δ¹⁸O of leaf organic matter (δ¹⁸Ool) in spruce needles of mature trees could be modelled for two years. The seasonality was mainly explained by the δ¹⁸O of top-soil water, whereas between-year differences were due to variation in air humidity. Application of a third year's data set improved the correlation between modelled and measured δ¹⁸Ool and thus validated our extended Dongmann model. (author)
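    Dongmann-type leaf-water models build on the Craig–Gordon steady-state enrichment, commonly written Δe = ε⁺ + εk + (Δv − εk)·ea/ei, which is where the air-humidity dependence mentioned above enters. A hedged arithmetic sketch with illustrative per-mil values (the extended Dongmann model of the record adds terms not shown here):

```python
def craig_gordon_enrichment(eps_eq, eps_k, delta_v, h):
    """Steady-state leaf-water 18O enrichment above source water (per mil).

    eps_eq  : equilibrium liquid-vapour fractionation (per mil)
    eps_k   : kinetic fractionation through stomata/boundary layer (per mil)
    delta_v : enrichment of ambient vapour relative to source water (per mil)
    h       : ea/ei, ambient-to-intercellular vapour pressure ratio
    """
    return eps_eq + eps_k + (delta_v - eps_k) * h

# Illustrative values only: drier air (smaller h) gives stronger enrichment.
print(craig_gordon_enrichment(eps_eq=9.0, eps_k=28.0, delta_v=-10.0, h=0.5))  # prints 18.0
```

All numeric inputs here are invented for illustration; real applications take ε⁺ from temperature-dependent equilibrium fractionation and εk from diffusion resistances.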

  17. Validation study of safety assessment model for radioactive waste disposal

    Energy Technology Data Exchange (ETDEWEB)

    Munakata, Masahiro; Takeda, Seiji; Kimura, Hideo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    2001-12-01

    The JAERI-AECL collaborative research program has been conducted to validate groundwater flow and radionuclide transport models for safety assessment. JAERI has developed a geostatistical model for radionuclide transport through heterogeneous geological media and verified it using the results of field tracer tests. The simulated tracer plumes reproduce the experimental tracer plumes favorably. A regional groundwater flow and transport model, using site-scale parameters obtained from the tracer tests, has been verified by comparing simulation results with observations of natural environmental tracers. (author)
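    As a toy illustration of the kind of transport solution that field-tracer comparisons like these rest on, the 1D advection-dispersion equation with continuous injection at x = 0 has the classical Ogata–Banks solution. A sketch with illustrative parameters (this is a textbook benchmark, not the geostatistical model of the record):

```python
import math

def ogata_banks(x, t, v, D, c0=1.0):
    """Relative concentration C(x, t) for 1D advection-dispersion with
    continuous injection of concentration c0 at x = 0 (Ogata-Banks).
    x: distance, t: time, v: seepage velocity, D: dispersion coefficient.
    Note: exp(v*x/D) can overflow for strongly advection-dominated cases;
    production codes switch to asymptotic forms there."""
    s = 2.0 * math.sqrt(D * t)
    return 0.5 * c0 * (math.erfc((x - v * t) / s)
                       + math.exp(v * x / D) * math.erfc((x + v * t) / s))

# Near x = v*t the breakthrough front is passing; far ahead of it, C ~ 0.
print(round(ogata_banks(x=1.0, t=1.0, v=1.0, D=1.0), 3))
```

A curve like this, fitted to measured breakthrough, is the simplest version of the plume comparison the abstract describes.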

  18. Numerical Validation of Chemical Compositional Model for Wettability Alteration Processes

    Science.gov (United States)

    Bekbauov, Bakhbergen; Berdyshev, Abdumauvlen; Baishemirov, Zharasbek; Bau, Domenico

    2017-12-01

    Chemical compositional simulation of enhanced oil recovery and surfactant enhanced aquifer remediation processes is a complex task that involves solving dozens of equations for all grid blocks representing a reservoir. In the present work, we perform a numerical validation of a newly developed mathematical formulation which satisfies the conservation laws of mass and energy and allows applying a sequential solution approach to solve the governing equations separately and implicitly. Through its application to a numerical experiment using a wettability alteration model, and through comparisons with numerical results from an existing chemical compositional model, the new model has proven to be practical, reliable and stable.

  19. Validation of Slosh Modeling Approach Using STAR-CCM+

    Science.gov (United States)

    Benson, David J.; Ng, Wanyi

    2018-01-01

    Without an adequate understanding of propellant slosh, the spacecraft attitude control system may be inadequate to control the spacecraft, or there may be an unexpected loss of science observation time due to longer slosh settling times. Computational fluid dynamics (CFD) is used to model propellant slosh. STAR-CCM+ is a commercially available CFD code. This paper seeks to validate the CFD modeling approach via a comparison between STAR-CCM+ liquid slosh modeling results and experimentally, empirically, and analytically derived results. The geometries examined are a bare right cylinder tank and a right cylinder with a single ring baffle.
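    For the bare right-cylinder case, linear potential-flow theory gives a closed-form first lateral slosh mode that CFD results are commonly checked against: ω₁² = (g·ξ₁/R)·tanh(ξ₁h/R), with ξ₁ ≈ 1.841 the first zero of J₁′. A sketch with invented dimensions (baffled tanks, as the abstract implies, have no such simple closed form):

```python
import math

XI_1 = 1.8412  # first zero of J1', derivative of the Bessel function J1

def first_slosh_mode_hz(radius, fill_height, g=9.81):
    """First lateral slosh frequency (Hz) of liquid in an upright bare
    cylindrical tank, from linear potential-flow theory."""
    omega_sq = (g * XI_1 / radius) * math.tanh(XI_1 * fill_height / radius)
    return math.sqrt(omega_sq) / (2.0 * math.pi)

# Deep fill (h >> R): frequency approaches sqrt(g * XI_1 / R) / (2*pi).
print(round(first_slosh_mode_hz(radius=1.0, fill_height=10.0), 3))
```

A CFD settling-time or frequency prediction that disagrees strongly with this analytical mode for the bare tank would flag a modeling problem before any baffle study.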

  20. Cross validation for the classical model of structured expert judgment

    International Nuclear Information System (INIS)

    Colson, Abigail R.; Cooke, Roger M.

    2017-01-01

    We update the 2008 TU Delft structured expert judgment database with data from 33 professionally contracted Classical Model studies conducted between 2006 and March 2015 to evaluate its performance relative to other expert aggregation models. We briefly review alternative mathematical aggregation schemes, including harmonic weighting, before focusing on linear pooling of expert judgments with equal weights and performance-based weights. Performance weighting outperforms equal weighting in all but 1 of the 33 studies in-sample. True out-of-sample validation is rarely possible for Classical Model studies, and cross validation techniques that split calibration questions into a training and test set are used instead. Performance weighting incurs an “out-of-sample penalty” and its statistical accuracy out-of-sample is lower than that of equal weighting. However, as a function of training set size, the statistical accuracy of performance-based combinations reaches 75% of the equal weight value when the training set includes 80% of calibration variables. At this point the training set is sufficiently powerful to resolve differences in individual expert performance. The information of performance-based combinations is double that of equal weighting when the training set is at least 50% of the set of calibration variables. Previous out-of-sample validation work used a Total Out-of-Sample Validity Index based on all splits of the calibration questions into training and test subsets, which is expensive to compute and includes small training sets of dubious value. As an alternative, we propose an Out-of-Sample Validity Index based on averaging the product of statistical accuracy and information over all training sets sized at 80% of the calibration set. Performance weighting outperforms equal weighting on this Out-of-Sample Validity Index in 26 of the 33 post-2006 studies; the probability of 26 or more successes on 33 trials if there were no difference between performance
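    Both aggregation schemes compared above are linear pools over the same expert distributions, differing only in the weight vector (uniform versus calibration-based performance weights). A minimal sketch of the pooling step itself; the weights shown are invented, not derived from the database's calibration scores:

```python
def linear_pool(expert_probs, weights=None):
    """Weighted linear opinion pool: combine each expert's probability
    vector over the same outcomes. Equal weighting is the default; the
    Classical Model would instead supply performance-based weights."""
    n = len(expert_probs)
    if weights is None:
        weights = [1.0] * n                       # equal weights
    total = sum(weights)
    weights = [w / total for w in weights]        # normalize to sum to 1
    k = len(expert_probs[0])
    return [sum(w * p[i] for w, p in zip(weights, expert_probs))
            for i in range(k)]

experts = [[0.8, 0.2], [0.4, 0.6]]
print(linear_pool(experts))                       # equal-weight combination
print(linear_pool(experts, weights=[3.0, 1.0]))   # performance-style weights
```

In the cross-validation scheme the abstract describes, the weights would be re-estimated on each training split of the calibration questions and the pooled distribution scored on the held-out test questions.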

  1. ADMS-AIRPORT: MODEL INTER-COMPARISONS AND MODEL VALIDATION

    OpenAIRE

    Carruthers, David; McHugh, Christine; Church, Stephanie; Jackson, Mark; Williams, Matt; Price, Catheryn; Lad, Chetan

    2008-01-01

    Abstract: The functionality of ADMS-Airport and details of its use in the Model Inter-comparison Study of the Project for the Sustainable Development of Heathrow Airport (PSDH) have previously been presented, Carruthers et al (2007). A distinguishing feature is the treatment of jet engine emissions as moving jet sources rather than averaging these emissions into volume sources as is the case in some other models. In this presentation two further studies are presented which each contribu...

  2. Approaches to Validation of Models for Low Gravity Fluid Behavior

    Science.gov (United States)

    Chato, David J.; Marchetta, Jeffery; Hochstein, John I.; Kassemi, Mohammad

    2005-01-01

    This paper details the authors' experiences with the validation of computer models to predict low gravity fluid behavior. It reviews the literature of low gravity fluid behavior as a starting point for developing a baseline set of test cases. It examines the authors' attempts to validate their models against these cases and the issues they encountered. The main issues seem to be that: most of the data are described by empirical correlations rather than fundamental relations; detailed measurements of the flow field have not been made; free surface shapes are observed, but through thick plastic cylinders, and are therefore subject to a great deal of optical distortion; and heat transfer process time constants are on the order of minutes to days, while the zero-gravity time available has been only seconds.

  3. Isotopes as validation tools for global climate models

    International Nuclear Information System (INIS)

    Henderson-Sellers, A.

    2001-01-01

    Global Climate Models (GCMs) are the predominant tool with which we predict the future climate. In order that people can have confidence in such predictions, GCMs require validation. As almost every available item of meteorological data has been exploited in the construction and tuning of GCMs to date, independent validation is very difficult. This paper explores the use of isotopes as a novel and fully independent means of evaluating GCMs. The focus is the Amazon Basin which has a long history of isotope collection and analysis and also of climate modelling: both having been reported for over thirty years. Careful consideration of the results of GCM simulations of Amazonian deforestation and climate change suggests that the recent stable isotope record is more consistent with the predicted effects of greenhouse warming, possibly combined with forest removal, than with GCM predictions of the effects of deforestation alone

  4. Modeling and Simulation Behavior Validation Methodology and Extension Model Validation for the Individual Soldier

    Science.gov (United States)

    2015-03-01

    domains. Major model functions include: • Ground combat: light and heavy forces. • Air mobile forces. • Future forces. • Fixed-wing and rotary-wing... Constraints: • Study must be completed no later than 31 December 2014. • Entity behavior limited to select COMBATXXI Mobility, Unmanned Aerial System... and SQL backend, as well as any open application programming interface (API). • Allows data transparency and data-driven navigation through the model

  5. Pharmacokinetic modeling of gentamicin in treatment of infective endocarditis: Model development and validation of existing models

    Science.gov (United States)

    van der Wijk, Lars; Proost, Johannes H.; Sinha, Bhanu; Touw, Daan J.

    2017-01-01

    Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. In order to maintain simplicity, we advise to use the existing intensive-care model in clinical practice to avoid
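    The MDPE and MDAPE figures quoted above are medians of percentage prediction errors (bias and precision, respectively). A minimal sketch, assuming the common convention PE = 100·(predicted − observed)/observed, which the abstract does not restate; the serum levels below are invented:

```python
import statistics

def mdpe_mdape(observed, predicted):
    """Median prediction error (bias) and median absolute prediction
    error (precision), both as percent of the observed value."""
    pe = [100.0 * (p - o) / o for o, p in zip(observed, predicted)]
    return statistics.median(pe), statistics.median([abs(x) for x in pe])

# Illustrative serum levels (mg/L), not study data:
obs  = [2.0, 8.0, 4.0, 1.0]
pred = [2.1, 7.6, 4.0, 1.1]
bias, precision = mdpe_mdape(obs, pred)
print(f"MDPE = {bias:.1f}%, MDAPE = {precision:.1f}%")
```

Medians are used rather than means so that a single badly predicted level does not dominate the validation verdict.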

  6. Pharmacokinetic modeling of gentamicin in treatment of infective endocarditis: Model development and validation of existing models.

    Directory of Open Access Journals (Sweden)

    Anna Gomes

    Full Text Available Gentamicin shows large variations in half-life and volume of distribution (Vd) within and between individuals. Thus, monitoring and accurately predicting serum levels are required to optimize effectiveness and minimize toxicity. Currently, two population pharmacokinetic models are applied for predicting gentamicin doses in adults. For endocarditis patients the optimal model is unknown. We aimed at: 1) creating an optimal model for endocarditis patients; and 2) assessing whether the endocarditis and existing models can accurately predict serum levels. We performed a retrospective observational two-cohort study: one cohort to parameterize the endocarditis model by iterative two-stage Bayesian analysis, and a second cohort to validate and compare all three models. The Akaike Information Criterion and the weighted sum of squares of the residuals divided by the degrees of freedom were used to select the endocarditis model. Median Prediction Error (MDPE) and Median Absolute Prediction Error (MDAPE) were used to test all models with the validation dataset. We built the endocarditis model based on data from the modeling cohort (65 patients) with a fixed 0.277 L/h/70kg metabolic clearance, 0.698 (±0.358) renal clearance as fraction of creatinine clearance, and Vd 0.312 (±0.076) L/kg corrected lean body mass. External validation with data from 14 validation cohort patients showed a similar predictive power of the endocarditis model (MDPE -1.77%, MDAPE 4.68%) as compared to the intensive-care (MDPE -1.33%, MDAPE 4.37%) and standard (MDPE -0.90%, MDAPE 4.82%) models. All models acceptably predicted pharmacokinetic parameters for gentamicin in endocarditis patients. However, these patients appear to have an increased Vd, similar to intensive care patients. Vd mainly determines the height of peak serum levels, which in turn correlate with bactericidal activity. In order to maintain simplicity, we advise to use the existing intensive-care model in clinical practice to

  7. In-Drift Microbial Communities Model Validation Calculations

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Jolley

    2001-09-24

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and in the laboratory, and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.

  8. In-Drift Microbial Communities Model Validation Calculation

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Jolley

    2001-10-31

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and in the laboratory, and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.

  9. IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS

    Energy Technology Data Exchange (ETDEWEB)

    D.M. Jolley

    2001-12-18

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and in the laboratory, and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.

  10. In-Drift Microbial Communities Model Validation Calculations

    International Nuclear Information System (INIS)

    Jolley, D.M.

    2001-01-01

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and in the laboratory, and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.

  11. IN-DRIFT MICROBIAL COMMUNITIES MODEL VALIDATION CALCULATIONS

    International Nuclear Information System (INIS)

    D.M. Jolley

    2001-01-01

    The objective and scope of this calculation is to create the appropriate parameter input for MING 1.0 (CSCI 30018 V1.0, CRWMS M&O 1998b) that will allow the testing of the results from the MING software code with both scientific measurements of microbial populations at the site and in the laboratory, and with natural analogs to the site. This set of calculations provides results that will be used in model validation for the ''In-Drift Microbial Communities'' model (CRWMS M&O 2000), which is part of the Engineered Barrier System Department (EBS) process modeling effort that eventually will feed future Total System Performance Assessment (TSPA) models. This calculation is being produced to replace MING model validation output that is affected by the supersession of DTN MO9909SPAMING1.003 by its replacement DTN MO0106SPAIDM01.034, so that the calculations currently found in the ''In-Drift Microbial Communities'' AMR (CRWMS M&O 2000) will be brought up to date. This set of calculations replaces the calculations contained in sections 6.7.2, 6.7.3 and Attachment I of CRWMS M&O (2000). As all of these calculations are created explicitly for model validation, the data qualification status of all inputs can be considered corroborative in accordance with AP-3.15Q. This work activity has been evaluated in accordance with the AP-2.21 procedure, ''Quality Determinations and Planning for Scientific, Engineering, and Regulatory Compliance Activities'', and is subject to QA controls (BSC 2001). The calculation is developed in accordance with the AP-3.12 procedure, Calculations, and prepared in accordance with the ''Technical Work Plan For EBS Department Modeling FY 01 Work Activities'' (BSC 2001), which includes controls for the management of electronic data.

  12. Monte Carlo Modelling of Mammograms : Development and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Spyrou, G; Panayiotakis, G [University of Patras, School of Medicine, Medical Physics Department, 265 00 Patras (Greece)]; Bakas, A [Technological Educational Institution of Athens, Department of Radiography, 122 10 Athens (Greece)]; Tzanakos, G [University of Athens, Department of Physics, Division of Nuclear and Particle Physics, 157 71 Athens (Greece)]

    1999-12-31

    A software package using Monte Carlo methods has been developed for the simulation of x-ray mammography. A simplified geometry of the mammographic apparatus has been considered along with the software phantom of compressed breast. This phantom may contain inhomogeneities of various compositions and sizes at any point. Using this model one can produce simulated mammograms. Results that demonstrate the validity of this simulation are presented. (authors) 16 refs, 4 figs
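    The core of any such photon-transport simulation is sampling free path lengths from the exponential attenuation law, s = −ln(U)/μ. A self-contained toy sketch (illustrative μ and thickness; the package described also handles scattering, phantom inhomogeneities and detector geometry, none of which is shown here):

```python
import math
import random

def transmitted_fraction(mu, thickness, n=100_000, seed=42):
    """Estimate the fraction of photons crossing a homogeneous slab
    without interacting, by sampling exponential free paths.
    mu: linear attenuation coefficient [1/cm]; thickness [cm]."""
    rng = random.Random(seed)
    crossed = 0
    for _ in range(n):
        path = -math.log(1.0 - rng.random()) / mu  # 1 - U avoids log(0)
        if path > thickness:
            crossed += 1
    return crossed / n

# Should agree with the Beer-Lambert law exp(-mu * d) to Monte Carlo accuracy:
mu, d = 0.5, 2.0
print(transmitted_fraction(mu, d), math.exp(-mu * d))
```

Agreement with the analytical attenuation law is exactly the kind of internal check used when validating a simulated mammogram against expectation.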

  13. Towards Model Validation and Verification with SAT Techniques

    OpenAIRE

    Gogolla, Martin

    2010-01-01

    After sketching how system development and the UML (Unified Modeling Language) and the OCL (Object Constraint Language) are related, validation and verification with the tool USE (UML-based Specification Environment) is demonstrated. As a more efficient alternative for verification tasks, two approaches using SAT-based techniques are put forward: First, a direct encoding of UML and OCL with Boolean variables and propositional formulas, and second, an encoding employing an...

  14. Trailing Edge Noise Model Validation and Application to Airfoil Optimization

    DEFF Research Database (Denmark)

    Bertagnolio, Franck; Aagaard Madsen, Helge; Bak, Christian

    2010-01-01

    The aim of this article is twofold. First, an existing trailing edge noise model is validated by comparing with airfoil surface pressure fluctuations and far field sound pressure levels measured in three different experiments. The agreement is satisfactory in one case but poor in two other cases...... across the boundary layer near the trailing edge and to a lesser extent by a smaller boundary layer displacement thickness. ©2010 American Society of Mechanical Engineers...

  15. Experimental Validation of a Permeability Model for Enrichment Membranes

    International Nuclear Information System (INIS)

    Orellano, Pablo; Brasnarof, Daniel; Florido Pablo

    2003-01-01

    An experimental loop with a real-scale diffuser, in a single enrichment-stage configuration, was operated with air at different process conditions in order to characterize the membrane permeability. Using these experimental data, an analytical model based on geometry and morphology was validated. It is concluded that a new set of independent measurements, i.e. enrichment, is necessary in order to fully characterize diffusers, because their internal parameters are not univocally determined from permeability data alone.

  16. Requirements Validation: Execution of UML Models with CPN Tools

    DEFF Research Database (Denmark)

    Machado, Ricardo J.; Lassen, Kristian Bisgaard; Oliveira, Sérgio

    2007-01-01

    Requirements validation is a critical task in any engineering project. The confrontation of stakeholders with static requirements models is not enough, since stakeholders with non-computer science education are not able to discover all the inter-dependencies between the elicited requirements. Eve...... requirements, where the system to be built must explicitly support the interaction between people within a pervasive cooperative workflow execution. A case study from a real project is used to illustrate the proposed approach....

  17. Monte Carlo Modelling of Mammograms : Development and Validation

    International Nuclear Information System (INIS)

    Spyrou, G.; Panayiotakis, G.; Bakas, A.; Tzanakos, G.

    1998-01-01

    A software package using Monte Carlo methods has been developed for the simulation of x-ray mammography. A simplified geometry of the mammographic apparatus has been considered along with the software phantom of compressed breast. This phantom may contain inhomogeneities of various compositions and sizes at any point. Using this model one can produce simulated mammograms. Results that demonstrate the validity of this simulation are presented. (authors)

  18. Cross-Validation of Aerobic Capacity Prediction Models in Adolescents.

    Science.gov (United States)

    Burns, Ryan Donald; Hannon, James C; Brusseau, Timothy A; Eisenman, Patricia A; Saint-Maurice, Pedro F; Welk, Greg J; Mahar, Matthew T

    2015-08-01

    Cardiorespiratory endurance is a component of health-related fitness. FITNESSGRAM recommends the Progressive Aerobic Cardiovascular Endurance Run (PACER) or One-mile Run/Walk (1MRW) to assess cardiorespiratory endurance by estimating VO2 Peak. No research has cross-validated prediction models from both PACER and 1MRW, including the New PACER Model and PACER-Mile Equivalent (PACER-MEQ), using current standards. The purpose of this study was to cross-validate prediction models from PACER and 1MRW against measured VO2 Peak in adolescents. Cardiorespiratory endurance data were collected on 90 adolescents aged 13-16 years (Mean = 14.7 ± 1.3 years; 32 girls, 52 boys) who completed the PACER and 1MRW in addition to a laboratory maximal treadmill test to measure VO2 Peak. Multiple correlations among various models with measured VO2 Peak were considered moderately strong (R = 0.74-0.78), and prediction error (RMSE) ranged from 5.95 ml·kg⁻¹·min⁻¹ to 8.27 ml·kg⁻¹·min⁻¹. Criterion-referenced agreement into FITNESSGRAM's Healthy Fitness Zones was considered fair-to-good among models (Kappa = 0.31-0.62; Agreement = 75.5-89.9%; F = 0.08-0.65). In conclusion, prediction models demonstrated moderately strong linear relationships with measured VO2 Peak, fair prediction error, and fair-to-good criterion-referenced agreement with measured VO2 Peak into FITNESSGRAM's Healthy Fitness Zones.
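    The criterion-referenced agreement reported here (Kappa alongside percent agreement) is Cohen's kappa for zone classification. A minimal sketch with invented labels (0 = not in the Healthy Fitness Zone, 1 = in the zone); the data are illustrative, not the study's:

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two categorical label sequences of equal length.
    Undefined (division by zero) when chance agreement equals 1, i.e.
    both raters use a single category for every case."""
    n = len(a)
    cats = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n            # observed agreement
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)  # chance agreement
    return (po - pe) / (1.0 - pe)

zones_measured  = [1, 1, 0, 1, 0, 0, 1, 0]   # from the lab treadmill test
zones_predicted = [1, 1, 0, 0, 0, 1, 1, 0]   # from a field-test model
print(cohens_kappa(zones_measured, zones_predicted))
```

Kappa corrects the raw percent agreement for the agreement expected by chance, which is why the two figures quoted in the abstract can diverge.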

  19. Recent validation studies for two NRPB environmental transfer models

    International Nuclear Information System (INIS)

    Brown, J.; Simmonds, J.R.

    1991-01-01

    The National Radiological Protection Board (NRPB) developed a dynamic model for the transfer of radionuclides through terrestrial food chains some years ago. This model, now called FARMLAND, predicts both instantaneous and time integrals of concentration of radionuclides in a variety of foods. The model can be used to assess the consequences of both accidental and routine releases of radioactivity to the environment; and results can be obtained as a function of time. A number of validation studies have been carried out on FARMLAND. In these the model predictions have been compared with a variety of sets of environmental measurement data. Some of these studies will be outlined in the paper. A model to predict external radiation exposure from radioactivity deposited on different surfaces in the environment has also been developed at NRPB. This model, called EXPURT (EXPosure from Urban Radionuclide Transfer), can be used to predict radiation doses as a function of time following deposition in a variety of environments, ranging from rural to inner-city areas. This paper outlines validation studies and future extensions to be carried out on EXPURT. (12 refs., 4 figs.)

  20. Methods for Geometric Data Validation of 3d City Models

    Science.gov (United States)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is, however, a different concept than it is for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data have a far wider range of aspects which influence their quality, and the idea of quality itself is application dependent. Thus, concepts for the definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point is correct geometry in accordance with ISO 19107: a valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges, no gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as they were developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon-level checks to validate the correctness of each polygon, i.e. closeness of the bounding linear ring and planarity. On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, along with self-intersections outside of defined corner points and edges

  1. Validation of the WATEQ4 geochemical model for uranium

    International Nuclear Information System (INIS)

    Krupka, K.M.; Jenne, E.A.; Deutsch, W.J.

    1983-09-01

    As part of the Geochemical Modeling and Nuclide/Rock/Groundwater Interactions Studies Program, a study was conducted to partially validate the WATEQ4 aqueous speciation-solubility geochemical model for uranium. The solubility controls determined with the WATEQ4 geochemical model were in excellent agreement with those laboratory studies in which the solids schoepite [UO2(OH)2·H2O], UO2(OH)2, and rutherfordine (UO2CO3) were identified as actual solubility controls for uranium. The results of modeling solution analyses from laboratory studies of uranyl phosphate solids, however, identified possible errors in the characterization of solids in the original solubility experiments. As part of this study, significant deficiencies in the WATEQ4 thermodynamic data base for uranium solutes and solids were corrected. Revisions included recalculation of selected uranium reactions. Additionally, thermodynamic data for the hydroxyl complexes of U(VI), including anionic U(VI) species, were evaluated (to the extent permitted by the available data). Vanadium reactions were also added to the thermodynamic data base because uranium-vanadium solids can exist in natural ground-water systems. This study is only a partial validation of the WATEQ4 geochemical model because the available laboratory solubility studies do not cover the range of solid phases, alkaline pH values, and concentrations of inorganic complexing ligands needed to evaluate the potential solubility of uranium in ground waters associated with various proposed nuclear waste repositories. Further validation of this or other geochemical models for uranium will require careful determinations of uraninite solubility over the pH range of 7 to 10 under highly reducing conditions and of uranyl hydroxide and phosphate solubilities over the pH range of 7 to 10 under oxygenated conditions.

  2. Validation of fracture flow models in the Stripa project

    International Nuclear Information System (INIS)

    Herbert, A.; Dershowitz, W.; Long, J.; Hodgkinson, D.

    1991-01-01

    One of the objectives of Phase III of the Stripa Project is to develop and evaluate approaches for the prediction of groundwater flow and nuclide transport in a specific unexplored volume of the Stripa granite and make a comparison with data from field measurements. During the first stage of the project, a prediction of inflow to the D-holes, an array of six parallel, closely spaced 100 m boreholes, was made based on data from six other boreholes. These data included fracture geometry, stress, single-borehole geophysical logging, crosshole and reflection radar and seismic tomograms, head monitoring and single-hole packer test measurements. Maps of fracture traces on the drift walls have also been made. The D-holes are located along a future Validation Drift which will be excavated. The water inflow to the D-holes has been measured in an experiment called the Simulated Drift Experiment. The paper reviews the Simulated Drift Experiment validation exercise. Following a discussion of the approach to validation, the characterization data and their preliminary interpretation are summarised and commented upon. This work has proved that it is feasible to carry through all of the complex and interconnected tasks associated with the gathering and interpretation of characterization data, the development and application of complex models, and the comparison with measured inflows. The exercise has provided detailed feedback to the experimental and theoretical work required for measurements and predictions of flow into the Validation Drift. Computer codes used: CHANGE, FRACMAN, MAFIC, NAPSAC and TRINET. 2 figs., 2 tabs., 19 refs

  3. Intercomparison and validation of operational coastal-scale models, the experience of the project MOMAR.

    Science.gov (United States)

    Brandini, C.; Coudray, S.; Taddei, S.; Fattorini, M.; Costanza, L.; Lapucci, C.; Poulain, P.; Gerin, R.; Ortolani, A.; Gozzini, B.

    2012-04-01

    The need for regional governments to implement operational systems for the sustainable management of coastal waters, in order to meet the requirements imposed by legislation (e.g. EU directives such as WFD, MSFD, BD and the relevant national legislation), often leads to the implementation of coastal measurement networks and to the construction of computational models that surround and describe parts of regional seas without falling into the classic definition of regional/coastal models. Although these operational models may be structured to cover parts of different oceanographic basins, they can have considerable advantages and highlight relevant issues, such as the role of narrow channels, straits and islands in coastal circulation, in both physical and biogeochemical processes, such as the exchanges of water masses among basins. Two models of this type were made in the context of the cross-border European project MOMAR: an operational model of the Tuscan Archipelago sea and one of the Corsica coastal waters, both located between the Tyrrhenian and the Algerian-Ligurian-Provençal basins. Although these two models were based on different computer codes (MARS3D and ROMS), they have several elements in common, such as a 400 m resolution, boundary conditions from the same "father" model, and an important area of overlap, the Corsica channel, which has a key role in the exchange of water masses between the two oceanographic basins. In this work we present the results of the comparison of these two ocean forecasting systems in response to different weather and oceanographic forcing. In particular, we discuss aspects related to the validation of the two systems, and a systematic comparison of the forecasts/hindcasts based on these hydrodynamic models, with respect both to operational models available at larger scale and to in-situ measurements made by fixed or mobile platforms. 
In this context we will also present the results of two oceanographic cruises in the

  4. Biogeochemical and isotopic gradients in a BTEX/PAH contaminant plume: Model-based interpretation of a high-resolution field data set

    DEFF Research Database (Denmark)

    Prommer, H.; Anneser, B.; Rolle, Massimo

    2009-01-01

    A high spatial resolution data set documenting carbon and sulfur isotope fractionation at a tar oil-contaminated, sulfate-reducing field site was analyzed with a reactive transport model. Within a comprehensive numerical model, the study links the distinctive observed isotope depth profiles with the degradation of various monoaromatic and polycyclic aromatic hydrocarbon compounds (BTEX/PAHs) under sulfate-reducing conditions. In the numerical model, microbial dynamics were simulated explicitly and isotope fractionation was directly linked to the differential microbial uptake of lighter and heavier carbon ... of toluene, which is the most rapidly degrading compound and the most important reductant at the site. The resulting depth profiles at the observation well show distinct differences between the small isotopic enrichment in the contaminant plume core and the much stronger enrichment of up to 3.3 parts per ...

  5. Image decomposition as a tool for validating stress analysis models

    Directory of Open Access Journals (Sweden)

    Mottershead J.

    2010-06-01

    Full Text Available It is good practice to validate analytical and numerical models used in stress analysis for engineering design by comparison with measurements obtained from real components either in-service or in the laboratory. In reality, this critical step is often neglected or reduced to placing a single strain gage at the predicted hot-spot of stress. Modern techniques of optical analysis allow full-field maps of displacement, strain and/or stress to be obtained from real components with relative ease and at modest cost. However, validations continue to be performed only at predicted and/or observed hot-spots, and most of this wealth of data is ignored. It is proposed that image decomposition methods, commonly employed in techniques such as fingerprinting and iris recognition, can be employed to validate stress analysis models by comparing all of the key features in the data from the experiment and the model. Image decomposition techniques such as Zernike moments and Fourier transforms have been used to decompose full-field strain distributions generated from optical techniques such as digital image correlation and thermoelastic stress analysis, as well as from analytical and numerical models, by treating the strain distributions as images. The result of the decomposition is 10¹ to 10² image descriptors instead of the 10⁵ or 10⁶ pixels in the original data. As a consequence, it is relatively easy to make a statistical comparison of the image descriptors from the experiment and from the analytical/numerical model and to provide a quantitative assessment of the stress analysis.
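
    The approach can be illustrated with a minimal sketch that treats two full-field strain maps as images, reduces each to a small vector of low-order 2D Fourier coefficients, and compares descriptors instead of raw pixels. The synthetic fields and the 4 x 4 descriptor truncation are assumptions for illustration; the Zernike-moment variant follows the same pattern.

```python
import numpy as np

def descriptors(field, k=4):
    """Return magnitudes of the k x k lowest-frequency 2D Fourier coefficients."""
    return np.abs(np.fft.fft2(field)[:k, :k]).ravel()

# Synthetic stand-ins for a measured (DIC) strain map and an FE prediction
x, y = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
experiment = np.sin(2 * np.pi * x) * np.cos(np.pi * y)
model = experiment + 0.01 * np.cos(2 * np.pi * (x + y))   # prediction with small error

d_exp, d_mod = descriptors(experiment), descriptors(model)

# Statistical comparison on ~10^1 descriptors rather than ~10^3-10^6 pixels
discrepancy = np.linalg.norm(d_exp - d_mod) / np.linalg.norm(d_exp)
print(f"{d_exp.size} descriptors, relative discrepancy {discrepancy:.3f}")
```

    A small relative discrepancy indicates the model reproduces all dominant features of the measured field, not just the hot-spot value.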

  6. Validating and Verifying Biomathematical Models of Human Fatigue

    Science.gov (United States)

    Martinez, Siera Brooke; Quintero, Luis Ortiz; Flynn-Evans, Erin

    2015-01-01

    Airline pilots experience acute and chronic sleep deprivation, sleep inertia, and circadian desynchrony due to the need to schedule flight operations around the clock. This sleep loss and circadian desynchrony give rise to cognitive impairments, reduced vigilance and inconsistent performance. Several biomathematical models, based principally on patterns observed in circadian rhythms and homeostatic drive, have been developed to predict a pilot's level of fatigue or alertness. These models allow the Federal Aviation Administration (FAA) and commercial airlines to make decisions about pilot capabilities and flight schedules. Although these models have been validated in a laboratory setting, they have not been thoroughly tested in operational environments where uncontrolled factors, such as environmental sleep disrupters, caffeine use and napping, may impact actual pilot alertness and performance. We will compare the predictions of three prominent biomathematical fatigue models (the McCauley Model, the Harvard Model, and the privately sold SAFTE-FAST Model) to actual measures of alertness and performance. We collected sleep logs, movement and light recordings, psychomotor vigilance task (PVT) data, and urinary melatonin (a marker of circadian phase) from 44 pilots in a short-haul commercial airline over one month. We will statistically compare the model predictions with lapses on the PVT and with circadian phase, and will calculate the sensitivity and specificity of each model's predictions under different scheduling conditions. Our findings will aid operational decision-makers in determining the reliability of each model under real-world scheduling situations.

  7. Low-frequency variability in North Sea and Baltic Sea identified through simulations with the 3-D coupled physical–biogeochemical model ECOSMO

    Directory of Open Access Journals (Sweden)

    U. Daewel

    2017-09-01

    Full Text Available Here we present results from a long-term model simulation of the 3-D coupled ecosystem model ECOSMO II for a North Sea and Baltic Sea set-up. The model allows both multi-decadal hindcast simulation of the marine system and specific process studies under controlled environmental conditions. Model results have been analysed with respect to long-term multi-decadal variability in both physical and biological parameters with the help of empirical orthogonal function (EOF) analysis. The analysis of a 61-year (1948–2008) hindcast reveals a quasi-decadal variation in salinity, temperature and current fields in the North Sea, in addition to singular events of major change during restricted time frames. These changes in hydrodynamic variables were found to be associated with changes in ecosystem productivity that are temporally aligned with the timing of reported regime shifts in the areas. Our results clearly indicate that for analysing ecosystem productivity, spatially explicit methods are indispensable. Especially in the North Sea, a correlation analysis between atmospheric forcing and primary production (PP) reveals significant correlations between PP and the North Atlantic Oscillation (NAO) and wind forcing for the central part of the region, while the Atlantic Multi-decadal Oscillation (AMO) and air temperature are correlated with long-term changes in PP in the southern North Sea frontal areas. Since correlations cannot serve to identify causal relationships, we performed scenario model runs perturbing the temporal variability in forcing conditions to emphasize specifically the role of solar radiation, wind and eutrophication. The results revealed that, although all parameters are relevant for the magnitude of PP in the North Sea and Baltic Sea, the dominant impact on long-term variability and major shifts in ecosystem productivity was introduced by modulations of the wind fields.
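
    The EOF analysis used above amounts to a singular value decomposition of the space-time anomaly matrix: the right singular vectors are the spatial patterns (EOFs) and the scaled left singular vectors are their principal-component time series. A minimal sketch on synthetic data with a single quasi-decadal mode (a stand-in for the hindcast fields, not ECOSMO output):

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nx = 61, 200                                   # years x grid points

# Synthetic field: one quasi-decadal spatial mode plus noise
t = np.arange(nt)
pattern = np.sin(np.linspace(0, np.pi, nx))
data = np.outer(np.sin(2 * np.pi * t / 10.0), pattern) \
       + 0.1 * rng.standard_normal((nt, nx))

anom = data - data.mean(axis=0)                    # anomalies: remove time mean
U, s, Vt = np.linalg.svd(anom, full_matrices=False)

explained = s**2 / np.sum(s**2)                    # variance fraction per mode
eof1 = Vt[0]                                       # leading spatial pattern
pc1 = U[:, 0] * s[0]                               # its principal-component series
print(f"EOF1 explains {explained[0]:.0%} of the variance")
```

    With a dominant mode present, the leading EOF captures most of the variance and its PC recovers the quasi-decadal oscillation.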

  8. Validation of High Displacement Piezoelectric Actuator Finite Element Models

    Science.gov (United States)

    Taleghani, B. K.

    2000-01-01

    The paper presents the results obtained by using NASTRAN(Registered Trademark) and ANSYS(Regitered Trademark) finite element codes to predict doming of the THUNDER piezoelectric actuators during the manufacturing process and subsequent straining due to an applied input voltage. To effectively use such devices in engineering applications, modeling and characterization are essential. Length, width, dome height, and thickness are important parameters for users of such devices. Therefore, finite element models were used to assess the effects of these parameters. NASTRAN(Registered Trademark) and ANSYS(Registered Trademark) used different methods for modeling piezoelectric effects. In NASTRAN(Registered Trademark), a thermal analogy was used to represent voltage at nodes as equivalent temperatures, while ANSYS(Registered Trademark) processed the voltage directly using piezoelectric finite elements. The results of finite element models were validated by using the experimental results.

  9. Calibration and validation of a general infiltration model

    Science.gov (United States)

    Mishra, Surendra Kumar; Ranjan Kumar, Shashi; Singh, Vijay P.

    1999-08-01

    A general infiltration model proposed by Singh and Yu (1990) was calibrated and validated using a split sampling approach for 191 sets of infiltration data observed in the states of Minnesota and Georgia in the USA. Of the five model parameters, fc (the final infiltration rate), So (the available storage space) and exponent n were found to be more predictable than the other two parameters: m (exponent) and a (proportionality factor). A critical examination of the general model revealed that it is related to the Soil Conservation Service (1956) curve number (SCS-CN) method and its parameter So is equivalent to the potential maximum retention of the SCS-CN method and is, in turn, found to be a function of soil sorptivity and hydraulic conductivity. The general model was found to describe infiltration rate with time varying curve number.
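
    Since the parameter So is identified with the potential maximum retention of the SCS-CN method, the standard SCS-CN relations can be sketched directly; the curve number and storm depth below are illustrative values, not data from the study.

```python
# SCS-CN sketch: retention S from curve number CN, and direct runoff Q
# from storm rainfall P with the standard initial abstraction Ia = 0.2 S.
def retention_mm(cn):
    """Potential maximum retention S (mm) from curve number CN (0 < CN <= 100)."""
    return 25.4 * (1000.0 / cn - 10.0)

def runoff_mm(p_mm, cn):
    """SCS-CN direct runoff Q (mm) for storm rainfall P (mm)."""
    s = retention_mm(cn)
    ia = 0.2 * s                     # initial abstraction before runoff begins
    if p_mm <= ia:
        return 0.0                   # all rainfall abstracted, no runoff
    return (p_mm - ia) ** 2 / (p_mm - ia + s)

print(runoff_mm(100.0, 80))
```

    For CN = 80 (S = 63.5 mm), a 100 mm storm yields about 50.5 mm of direct runoff, while a 10 mm storm produces none because it does not exceed the initial abstraction.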

  10. Condensation of steam in horizontal pipes: model development and validation

    International Nuclear Information System (INIS)

    Szijarto, R.

    2015-01-01

    This thesis, submitted to the Swiss Federal Institute of Technology (ETH) in Zurich, presents the development and validation of a model for the condensation of steam in horizontal pipes. Condensation models were introduced and developed particularly for application in the emergency cooling system of a Gen-III+ boiling water reactor. Such an emergency cooling system consists of slightly inclined horizontal pipes, which are immersed in a cold water tank. The pipes are connected to the reactor pressure vessel and are responsible for fast depressurization of the reactor core in the case of an accident. Condensation in horizontal pipes was investigated with both a one-dimensional system code (RELAP5) and three-dimensional computational fluid dynamics software (ANSYS FLUENT). The performance of the RELAP5 code was not sufficient for transient condensation processes; therefore, a mechanistic model was developed and implemented. Four models were tested on the LAOKOON facility, which analysed direct contact condensation in a horizontal duct

  11. Development and Validation of a 3-Dimensional CFB Furnace Model

    Science.gov (United States)

    Vepsäläinen, Ari; Myöhänen, Kari; Hyppänen, Timo; Leino, Timo; Tourunen, Antti

    At Foster Wheeler, a three-dimensional CFB furnace model is an essential part of knowledge development for the CFB furnace process regarding solid mixing, combustion, emission formation and heat transfer. Results of laboratory and pilot-scale phenomenon research are utilized in the development of sub-models. Analyses of field-test results in industrial-scale CFB boilers, including furnace profile measurements, are carried out simultaneously with the development of 3-dimensional process modeling, which provides a chain of knowledge that is utilized as feedback for phenomenon research. Knowledge gathered in model validation studies and up-to-date parameter databases are utilized in performance prediction and design development of CFB boiler furnaces. This paper reports recent development steps related to the modeling of combustion and the formation of char and volatiles for various fuel types under CFB conditions. A new model for predicting the formation of nitrogen oxides is also presented. Validation of mixing and combustion parameters for solids and gases is based on test balances at several large-scale CFB boilers combusting coal, peat and bio-fuels. Field tests, including lateral and vertical furnace profile measurements and characterization of solid materials, provide a window into fuel-specific mixing and combustion behavior in the CFB furnace at different loads and operating conditions. Measured horizontal gas profiles are a projection of the balance between fuel mixing and reactions in the lower part of the furnace and are used, together with lateral temperature profiles at the bed and in the upper parts of the furnace, to determine solid mixing and combustion model parameters. Modeling of char- and volatile-based formation of NO profiles is followed by analysis of the oxidizing and reducing regions formed by the lower-furnace design and the mixing characteristics of fuel and combustion air, which affect the NO furnace profile through reduction and volatile-nitrogen reactions. This paper presents

  12. Proceedings of the first SRL model validation workshop

    International Nuclear Information System (INIS)

    Buckner, M.R.

    1981-10-01

    The Clean Air Act and its amendments have added importance to knowing the accuracy of mathematical models used to assess transport and diffusion of environmental pollutants. These models are the link between air quality standards and emissions. To test the accuracy of a number of these models, a Model Validation Workshop was held. The meteorological, source-term, and Kr-85 concentration data bases for emissions from the separations areas of the Savannah River Plant during 1975 through 1977 were used to compare calculations from various atmospheric dispersion models. The results of statistical evaluation of the models show a degradation in the ability to predict pollutant concentrations as the time span over which the calculations are made is reduced. Forecasts for annual time periods were reasonably accurate. Weighted-average squared correlation coefficients (R²) were 0.74 for annual, 0.28 for monthly, 0.21 for weekly, and 0.18 for twice-daily predictions. Model performance varied within each of these four categories; however, the results indicate that the more complex, three-dimensional models provide only marginal increases in accuracy. The increased cost of running these codes is not warranted for long-term releases or for conditions of relatively simple terrain and meteorology. The overriding factor in the calculational accuracy is the accurate description of the wind field. Further improvement of the numerical accuracy of the complex models is not nearly as important as accurate calculation of the meteorological transport conditions.

  13. Using Coupled Models to Study the Effects of River Discharge on Biogeochemical Cycling and Hypoxia in the Northern Gulf of Mexico

    Science.gov (United States)

    Penta, Bradley; Ko, D.; Gould, Richard W.; Arnone, Robert A.; Greene, R.; Lehrter, J.; Hagy, James; Schaeffer, B.; Murrell, M.; Kurtz, J.; hide

    2009-01-01

    We describe emerging capabilities to understand physical processes and biogeochemical cycles in coastal waters through the use of satellites, numerical models, and ship observations. These emerging capabilities provide significantly improved ability to model ecological systems and the impact of environmental management actions on them. The complex interaction of physical and biogeochemical processes responsible for hypoxic events requires an integrated approach to research, monitoring, and modeling in order to fully define the processes leading to hypoxia. Our effort characterizes the carbon cycle associated with river plumes and the export of organic matter and nutrients from coastal Louisiana wetlands and embayments in a spatially and temporally intensive manner previously not possible. Riverine nutrients clearly affect ecosystems in the northern Gulf of Mexico, as evidenced by the occurrence of regional hypoxia events. Less known and largely unquantified is the export of organic matter and nutrients from the large areas of disappearing coastal wetlands and large embayments adjacent to the Louisiana Continental Shelf. This project provides new methods to track the river plume along the shelf and to estimate the rate of export of suspended inorganic and organic particulate matter and dissolved organic matter from the coastal habitats of south Louisiana.

  14. Validation, Optimization and Simulation of a Solar Thermoelectric Generator Model

    Science.gov (United States)

    Madkhali, Hadi Ali; Hamil, Ali; Lee, HoSung

    2017-12-01

    This study explores thermoelectrics as a viable option for small-scale solar thermal applications. Thermoelectric technology is based on the Seebeck effect: a voltage is induced when a temperature gradient is applied across the junctions of two differing materials. This research proposes to analyze, validate, simulate, and optimize a prototype solar thermoelectric generator (STEG) model in order to increase efficiency. The intent is to further develop STEGs as a viable and productive energy source that limits pollution and reduces the cost of energy production. An empirical study (Kraemer et al. in Nat Mater 10:532, 2011) of a solar thermoelectric generator reported a high efficiency performance of 4.6%. The system had a vacuum glass enclosure, a flat-panel absorber, a thermoelectric generator and water circulation for the cold side. The theoretical and numerical approach of the current study validated the experimental results from Kraemer's study to a high degree. The numerical simulation process utilizes a two-stage approach in ANSYS software for Fluent and Thermal-Electric Systems. The solar load model technique uses solar radiation under AM 1.5G conditions in Fluent. The analytical model applies Dr. Ho Sung Lee's theory of optimal design to improve the performance of the STEG system by using dimensionless parameters. Applying this theory, with two cover glasses and radiation shields, the STEG model can achieve a peak efficiency of 7%.
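
    The Seebeck relation underlying the STEG lends itself to a back-of-the-envelope sketch: open-circuit voltage for a module of thermocouple pairs, and the power delivered to a matched load. The module parameters are illustrative assumptions (a generic Bi2Te3-like module), not those of the cited prototype.

```python
# Seebeck-effect sketch: module open-circuit voltage and matched-load power.
def open_circuit_voltage(seebeck_v_per_k, n_pairs, delta_t):
    """Voc = n * S * dT for n thermocouple pairs in series."""
    return n_pairs * seebeck_v_per_k * delta_t

def matched_load_power(voc, internal_resistance):
    """Maximum power transfer: load resistance equals internal resistance."""
    return voc**2 / (4.0 * internal_resistance)

# Illustrative module: 127 pairs, 400 µV/K per pair, dT = 150 K, 3 ohm internal
voc = open_circuit_voltage(400e-6, 127, 150.0)
power = matched_load_power(voc, 3.0)
print(f"Voc = {voc:.2f} V, matched-load power = {power:.2f} W")
```

    With these assumed values the module develops 7.62 V open-circuit and delivers about 4.8 W to a matched load; the conversion efficiency then follows from dividing that electrical power by the absorbed solar heat.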

  15. Biogeochemical reactive-transport modelling of the interactions of medium activity long-lived nuclear waste in fractured argillite and the effect on redox conditions

    International Nuclear Information System (INIS)

    Small, J.S.; Steele, H.; Kwong, S.; Albrecht, A.

    2010-01-01

    Document available in extended abstract form only. The role of anaerobic microbial processes in mediating gas generation and redox reactions in organic (cellulose)-containing low-level-activity nuclear wastes (LLW) is well established through monitoring of operational near-surface LLW disposal sites and municipal waste disposal sites. Modelling approaches based on Monod kinetic growth models have been developed to represent the complex suite of anaerobic processes, and these models are able to reproduce the evolving biogeochemistry and gas generation of large-scale and long-term (10-year) experiments on cellulose waste degradation. In the case of geological disposal of medium-activity long-lived nuclear waste (MAVL), microbial processes have the potential to exploit metabolic energy sources present in the waste, engineered barriers and host geological formation and, as a consequence, influence redox potential. Several electron donors and electron acceptors may be present in MAVL. Electron donors include hydrogen (resulting from radiolysis and anaerobic corrosion of metals) and hydrolysis products of organic waste materials. Sulphate, nitrate and Fe(III)-containing minerals and corrosion products are examples of electron acceptors present in intermediate-level wastes. Significant amounts of organic matter, sulphate and iron minerals may also be present in host geological formations and have the potential to act as microbial energy sources once the system is perturbed by electron donors/acceptors from the waste. The construction of a geological disposal facility will physically disturb the host formation, potentially causing fracturing of the excavation damage zone (EDZ). The EDZ may thus provide environmental conditions, such as space and free water, that together with nutrient and energy sources promote microbial activity. 
In this study the Generalised Repository Model (GRM) developed to simulate the coupled microbiological, chemical and transport processes in near

  16. Connections between physical, optical and biogeochemical processes in the Pacific Ocean

    Science.gov (United States)

    Xiu, Peng; Chai, Fei

    2014-03-01

    A new biogeochemical model has been developed and coupled to a three-dimensional physical model in the Pacific Ocean. With the explicitly represented dissolved organic pools, this new model is able to link key biogeochemical processes with optical processes. Model validation against satellite and in situ data indicates the model is robust in reproducing general biogeochemical and optical features. Colored dissolved organic matter (CDOM) has been suggested to play an important role in regulating underwater light field. With the coupled model, physical and biological regulations of CDOM in the euphotic zone are analyzed. Model results indicate seasonal variability of CDOM is mostly determined by biological processes, while the importance of physical regulation manifests in the annual mean terms. Without CDOM attenuating light, modeled depth-integrated primary production is about 10% higher than the control run when averaged over the entire basin, while this discrepancy is highly variable in space with magnitudes reaching higher than 100% in some locations. With CDOM dynamics integrated in physical-biological interactions, a new mechanism by which physical processes affect biological processes is suggested, namely, physical transport of CDOM changes water optical properties, which can further modify underwater light field and subsequently affect the distribution of phytoplankton chlorophyll. This mechanism tends to occur in the entire Pacific basin but with strong spatial variability, implying the importance of including optical processes in the coupled physical-biogeochemical model.
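
    The mechanism described above, CDOM modifying the underwater light field, can be sketched with a Beer-Lambert decay of PAR in which CDOM contributes an extra attenuation term. The function and coefficient values are illustrative assumptions, not the model's parameterisation.

```python
# Beer-Lambert sketch of PAR attenuation with and without a CDOM term.
import math

def par_at_depth(par0, z_m, k_water=0.04, k_chl=0.03, k_cdom=0.02):
    """PAR at depth z for additive attenuation coefficients (all in m^-1)."""
    return par0 * math.exp(-(k_water + k_chl + k_cdom) * z_m)

with_cdom    = par_at_depth(100.0, 50.0)            # full attenuation
without_cdom = par_at_depth(100.0, 50.0, k_cdom=0.0)  # CDOM term switched off
print(f"PAR at 50 m: {with_cdom:.2f} with CDOM vs {without_cdom:.2f} without")
```

    In this toy case dropping the CDOM term nearly triples PAR at 50 m, illustrating why transporting CDOM can measurably shift modeled primary production.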

  17. Assessment of the sea-ice carbon pump: Insights from a three-dimensional ocean-sea-ice biogeochemical model (NEMO-LIM-PISCES)

    Directory of Open Access Journals (Sweden)

    Sébastien Moreau

    2016-08-01

    Full Text Available Abstract The role of sea ice in the carbon cycle is minimally represented in current Earth System Models (ESMs). Among potentially important flaws, mentioned by several authors and generally overlooked during ESM design, is the link between sea-ice growth and melt and oceanic dissolved inorganic carbon (DIC) and total alkalinity (TA). Here we investigate whether this link is indeed an important feature of the marine carbon cycle misrepresented in ESMs. We use an ocean general circulation model (NEMO-LIM-PISCES) with sea-ice and marine carbon cycle components, forced by atmospheric reanalyses, adding a first-order representation of DIC and TA storage and release in/from sea ice. Our results suggest that DIC rejection during sea-ice growth releases several hundred Tg C yr−1 to the surface ocean, of which < 2% is exported to depth, leading to a notable but weak redistribution of DIC towards deep polar basins. Active carbon processes (mainly CaCO3 precipitation, but also ice-atmosphere CO2 fluxes and net community production) increasing the TA/DIC ratio in sea ice modified ocean-atmosphere CO2 fluxes by a few Tg C yr−1 in the sea-ice zone, with specific hemispheric effects: the DIC content of the Arctic basin decreased, but the DIC content of the Southern Ocean increased. For the global ocean, DIC content increased by 4 Tg C yr−1, or 2 Pg C after 500 years of model run. The simulated numbers are generally small compared to the present-day global ocean annual CO2 sink (2.6 ± 0.5 Pg C yr−1). However, sea-ice carbon processes seem important at regional scales as they act significantly on DIC redistribution within and outside polar basins. The efficiency of carbon export to depth depends on the representation of surface-subsurface exchanges and their relationship with sea ice, and could differ substantially if a higher resolution or different ocean model were used.

  18. Contaminant transport model validation: The Oak Ridge Reservation

    International Nuclear Information System (INIS)

    Lee, R.R.; Ketelle, R.H.

    1988-09-01

    In the complex geologic setting of the Oak Ridge Reservation, hydraulic conductivity is anisotropic and flow is strongly influenced by an extensive and largely discontinuous fracture network. Difficulties in describing and modeling the aquifer system prompted a study to obtain aquifer property data to be used in a groundwater flow model validation experiment. Characterization studies included an extensive suite of aquifer tests within a 600-square-meter area to obtain aquifer property values describing the flow field in detail. Following aquifer testing, a groundwater tracer test was performed under ambient conditions to verify the aquifer analysis. Tracer migration data in the near field were used in model calibration to predict tracer arrival time and concentration in the far field. Despite the extensive aquifer testing, initial modeling inaccurately predicted tracer migration direction. Initial tracer migration rates were consistent with those predicted by the model; however, changing environmental conditions resulted in an unanticipated decay in tracer movement. Evaluating the predictive accuracy of groundwater flow and contaminant transport models on the Oak Ridge Reservation depends on defining the required resolution, followed by field testing and model grid definition at compatible scales. The use of tracer tests, both as a characterization method and to verify model results, provides the highest level of resolution of groundwater flow characteristics. 3 refs., 4 figs.

  19. Evolution of Earth-like Extrasolar Planetary Atmospheres: Assessing the Atmospheres and Biospheres of Early Earth Analog Planets with a Coupled Atmosphere Biogeochemical Model.

    Science.gov (United States)

    Gebauer, S; Grenfell, J L; Stock, J W; Lehmann, R; Godolt, M; von Paris, P; Rauer, H

    2017-01-01

    Understanding the evolution of Earth and potentially habitable Earth-like worlds is essential to fathom our origin in the Universe. The search for Earth-like planets in the habitable zone and the investigation of their atmospheres with climate and photochemical models is a central focus in exoplanetary science. Taking the evolution of Earth as a reference for Earth-like planets, a central scientific goal is to understand what the interactions were between atmosphere, geology, and biology on early Earth. The Great Oxidation Event in Earth's history was certainly caused by their interplay, but the origin and controlling processes of this occurrence are not well understood, and studying them will require interdisciplinary, coupled models. In this work, we present results from our newly developed Coupled Atmosphere Biogeochemistry model in which atmospheric O2 concentrations are fixed to values inferred from geological evidence. Applying a unique tool (Pathway Analysis Program), ours is the first quantitative analysis of the catalytic cycles that governed O2 in early Earth's atmosphere near the Great Oxidation Event. Complicated oxidation pathways play a key role in destroying O2, whereas in the upper atmosphere most O2 is formed abiotically via CO2 photolysis. The O2 bistability found by Goldblatt et al. (2006) is not observed in our calculations, likely due to our detailed CH4 oxidation scheme. We calculate increased CH4 with increasing O2 during the Great Oxidation Event. For a given atmospheric surface flux, different atmospheric states are possible; however, the net primary productivity of the biosphere that produces O2 is unique. Mixing, CH4 fluxes, ocean solubility, and mantle/crust properties strongly affect net primary productivity and surface O2 fluxes. Regarding exoplanets, different "states" of O2 could exist for similar biomass output. Strong geological activity could lead to false negatives for life (since our analysis suggests that reducing gases

  20. Prospective validation of pathologic complete response models in rectal cancer: Transferability and reproducibility.

    Science.gov (United States)

    van Soest, Johan; Meldolesi, Elisa; van Stiphout, Ruud; Gatta, Roberto; Damiani, Andrea; Valentini, Vincenzo; Lambin, Philippe; Dekker, Andre

    2017-09-01

    Multiple models have been developed to predict pathologic complete response (pCR) in locally advanced rectal cancer patients. Unfortunately, validation of these models normally omits the implications of cohort differences on prediction model performance. In this work, we perform a prospective validation of three pCR models, including information on whether this validation targets transferability or reproducibility (cohort differences) of the given models. We applied a novel methodology, the cohort differences model, to predict whether a patient belongs to the training or to the validation cohort. If the cohort differences model performs well, it would suggest a large difference in cohort characteristics, meaning we would validate the transferability of the model rather than its reproducibility. We tested our method in a prospective validation of three existing models for pCR prediction in 154 patients. Our results showed a large difference between training and validation cohorts for one of the three tested models [area under the receiver operating curve (AUC) of the cohort differences model: 0.85], signaling that the validation leans towards transferability. Two of the three models had a lower AUC in validation (0.66 and 0.58); one model showed a higher AUC in the validation cohort (0.70). We have successfully applied a new methodology in the validation of three prediction models, which allows us to indicate whether a validation targeted transferability (large differences between training/validation cohorts) or reproducibility (small cohort differences). © 2017 American Association of Physicists in Medicine.
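    The cohort differences model described above scores how separable the two cohorts are via an AUC. A minimal sketch of that idea, using a single hypothetical patient feature as the "cohort membership" score and a rank-based (Mann-Whitney) AUC; the feature and values are invented for illustration:

```python
def auc(scores_neg, scores_pos):
    """Mann-Whitney AUC: the probability that a randomly chosen positive
    (validation-cohort) score exceeds a randomly chosen negative
    (training-cohort) score, with ties counted as 0.5."""
    pairs = [(p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg]
    return sum(pairs) / len(pairs)

# Hypothetical single-feature score (e.g. patient age) per cohort:
training_cohort = [55, 60, 62, 58, 64]    # cohort label 0
validation_cohort = [70, 72, 69, 75, 68]  # cohort label 1
a = auc(training_cohort, validation_cohort)
# a near 0.5 -> similar cohorts, so the validation tests reproducibility;
# a near 1.0 -> distinct cohorts, so the validation tests transferability.
assert a > 0.85
```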

  1. MOLECULAR VALIDATED MODEL FOR ADSORPTION OF PROTONATED DYE ON LDH

    Directory of Open Access Journals (Sweden)

    B. M. Braga

    Full Text Available Abstract Hydrotalcite-like compounds are anionic clays of scientific and technological interest for their use as ion exchange materials, catalysts and modified electrodes. Surface phenomena are important for all these applications. Although conventional analytical methods have enabled progress in understanding the behavior of anionic clays in solution, an evaluation at the atomic scale of the dynamics of their ionic interactions has never been performed. Molecular simulation has become an extremely useful tool to provide this perspective. Our purpose is to validate a simplified model for the adsorption of 5-benzoyl-4-hydroxy-2-methoxy-benzenesulfonic acid (MBSA), a prototype molecule of anionic dyes, onto a hydrotalcite surface. Monte Carlo simulations were performed in the canonical ensemble with MBSA ions and a pore model of hydrotalcite, using UFF and ClayFF force fields. The proposed molecular model has allowed us to reproduce experimental data from atomic force microscopy. Influences of protonation during the adsorption process are also presented.

  2. Validation of the replica trick for simple models

    Science.gov (United States)

    Shinzato, Takashi

    2018-04-01

    We discuss the replica analytic continuation using several simple models in order to prove mathematically the validity of the replica analysis, which is used in a wide range of fields related to large-scale complex systems. While replica analysis consists of two analytical techniques—the replica trick (or replica analytic continuation) and the thermodynamical limit (and/or order parameter expansion)—we focus our study on replica analytic continuation, which is the mathematical basis of the replica trick. We apply replica analysis to solve a variety of analytical models, and examine the properties of replica analytic continuation. Based on the positive results for these models we propose that replica analytic continuation is a robust procedure in replica analysis.

  3. Experimental Validation of a Dynamic Model for Lightweight Robots

    Directory of Open Access Journals (Sweden)

    Alessandro Gasparetto

    2013-03-01

    Full Text Available Nowadays, one of the main topics in robotics research is dynamic performance improvement by means of a lightening of the overall system structure. Effective motion and control of these lightweight robotic systems requires suitable motion planning and control processes. To this end, model-based approaches can be adopted, exploiting accurate dynamic models that take into account the inertial and elastic terms that are usually neglected in a heavy rigid-link configuration. In this paper, an effective method for modelling spatial lightweight industrial robots based on an Equivalent Rigid Link System approach is considered from an experimental validation perspective. A dynamic simulator implementing the formulation is used and an experimental test bench is set up. Experimental tests are carried out with a benchmark L-shape mechanism.

  4. Building a high level sample processing and quality assessment model for biogeochemical measurements: a case study from the ocean acidification community

    Science.gov (United States)

    Thomas, R.; Connell, D.; Spears, T.; Leadbetter, A.; Burger, E. F.

    2016-12-01

    The scientific literature heavily features small-scale studies with the impact of the results extrapolated to regional/global importance. There are ongoing initiatives (e.g. OA-ICC, GOA-ON, GEOTRACES, EMODNet Chemistry) aiming to assemble regional- to global-scale datasets that are available for trend or meta-analyses. Assessing the quality and comparability of these data requires information about the processing chain from "sampling to spreadsheet". This provenance information needs to be captured and readily available to assess data fitness for purpose. The NOAA Ocean Acidification metadata template was designed in consultation with domain experts for this reason; the core carbonate chemistry variables have 23-37 metadata fields each, and for scientists generating these datasets there could appear to be an ever increasing amount of metadata expected to accompany a dataset. While this provenance metadata should be considered essential by those generating or using the data, for those discovering data there is a sliding scale between what is considered discovery metadata (title, abstract, contacts, etc.) and usage metadata (methodology, environmental setup, lineage, etc.), the split depending on the intended use of the data. As part of the OA-ICC's activities, the metadata fields from the NOAA template relevant to the sample processing chain and QA criteria have been factored to develop profiles for, and extensions to, the OM-JSON encoding supported by the PROV ontology. While this work started with a focus on carbonate chemistry variable-specific metadata, the factorization could be applied within the O&M model across other disciplines such as trace metals or contaminants. In a linked data world with a suitable high-level model for sample processing and QA available, tools and support can be provided to link reproducible units of metadata (e.g. the standard protocol for a variable as adopted by a community) and simplify the provision of metadata and subsequent discovery.
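    The discovery-versus-usage metadata split described above can be sketched as a nested record, loosely following a PROV-like entity/activity separation. All field names and values here are illustrative inventions, not the actual NOAA template fields:

```python
import json

# Minimal sketch: provenance ("usage") metadata attached to one measured
# variable, kept separate from the discovery metadata a search portal shows.
record = {
    "variable": "dissolved_inorganic_carbon",
    "discovery": {
        "title": "Cruise carbonate chemistry dataset",
        "contact": "data originator",
    },
    "usage": {
        "activity": {
            "type": "sample_processing",
            "method": "coulometric titration",
            "protocol_ref": "community standard operating procedure",
        },
        "quality": {"flag_scheme": "WOCE", "crm_used": True},
    },
}

# Serializing keeps the two metadata tiers reusable as one linked unit.
print(json.dumps(record, indent=2))
```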

  5. Analytical thermal model validation for Cassini radioisotope thermoelectric generator

    International Nuclear Information System (INIS)

    Lin, E.I.

    1997-01-01

    The Saturn-bound Cassini spacecraft is designed to rely, without precedent, on the waste heat from its three radioisotope thermoelectric generators (RTGs) to warm the propulsion module subsystem, and the RTG end dome temperature is a key determining factor of the amount of waste heat delivered. A previously validated SINDA thermal model of the RTG was the sole guide to understanding its complex thermal behavior, but displayed large discrepancies against some initial thermal development test data. A careful revalidation effort led to significant modifications and adjustments of the model, which resulted in a doubling of the radiative heat transfer from the heat source support assemblies to the end domes and brought the end dome and flange temperature predictions to within 2 C of the pertinent test data. The increased inboard end dome temperature has a considerable impact on thermal control of the spacecraft central body. The validation process offers an example of physically driven analytical model calibration with test data from not only an electrical simulator but also a nuclear-fueled flight unit, and has established the end dome temperatures of a flight RTG where no in-flight or ground-test data existed before.

  6. Lessons learned from recent geomagnetic disturbance model validation activities

    Science.gov (United States)

    Pulkkinen, A. A.; Welling, D. T.

    2017-12-01

    Due to concerns pertaining to geomagnetically induced current impact on ground-based infrastructure, there has been significantly elevated interest in applying models for local geomagnetic disturbance or "delta-B" predictions. Correspondingly, there has been an elevated need for testing the quality of the delta-B predictions generated by modern empirical and physics-based models. To address this need, community-wide activities were launched under the GEM Challenge framework, and one culmination of the activities was the validation and selection of models that were transitioned into operations at NOAA SWPC. The community-wide delta-B action is continued under the CCMC-facilitated International Forum for Space Weather Capabilities Assessment and its "Ground Magnetic Perturbations: dBdt, delta-B, GICs, FACs" working group. The new delta-B working group builds on the past experiences and expands the collaborations to cover the entire international space weather community. In this paper, we discuss the key lessons learned from the past delta-B validation exercises and lay out the path forward for building on those experiences under the new delta-B working group.

  7. Simulation Methods and Validation Criteria for Modeling Cardiac Ventricular Electrophysiology.

    Directory of Open Access Journals (Sweden)

    Shankarjee Krishnamoorthi

    Full Text Available We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the electrophysiology governing equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje muscle junctions covering the right and left ventricular endocardial surfaces as well as transmural and apex-to-base gradients in action potential characteristics are necessary to produce ECGs and time activation plots that agree with physiological observations.

  8. Simulation Methods and Validation Criteria for Modeling Cardiac Ventricular Electrophysiology.

    Science.gov (United States)

    Krishnamoorthi, Shankarjee; Perotti, Luigi E; Borgstrom, Nils P; Ajijola, Olujimi A; Frid, Anna; Ponnaluri, Aditya V; Weiss, James N; Qu, Zhilin; Klug, William S; Ennis, Daniel B; Garfinkel, Alan

    2014-01-01

    We describe a sequence of methods to produce a partial differential equation model of the electrical activation of the ventricles. In our framework, we incorporate the anatomy and cardiac microstructure obtained from magnetic resonance imaging and diffusion tensor imaging of a New Zealand White rabbit, the Purkinje structure and the Purkinje-muscle junctions, and an electrophysiologically accurate model of the ventricular myocytes and tissue, which includes transmural and apex-to-base gradients of action potential characteristics. We solve the electrophysiology governing equations using the finite element method and compute both a 6-lead precordial electrocardiogram (ECG) and the activation wavefronts over time. We are particularly concerned with the validation of the various methods used in our model and, in this regard, propose a series of validation criteria that we consider essential. These include producing a physiologically accurate ECG, a correct ventricular activation sequence, and the inducibility of ventricular fibrillation. Among other components, we conclude that a Purkinje geometry with a high density of Purkinje muscle junctions covering the right and left ventricular endocardial surfaces as well as transmural and apex-to-base gradients in action potential characteristics are necessary to produce ECGs and time activation plots that agree with physiological observations.

  9. High Turbidity Solis Clear Sky Model: Development and Validation

    Directory of Open Access Journals (Sweden)

    Pierre Ineichen

    2018-03-01

    Full Text Available The Solis clear sky model is a spectral scheme based on radiative transfer calculations and the Lambert-Beer relation. Its broadband version is a simplified, fast analytical version; it is limited to broadband aerosol optical depths lower than 0.45, which is a weakness when applied in countries with very high turbidity such as China or India. In order to extend the use of the original simplified version of the model to high turbidity values, we developed a new version of the broadband Solis model based on radiative transfer calculations, valid for turbidity values up to 7, for the three components (global, beam, and diffuse) and for the four aerosol types defined by Shettle and Fenn. A validation against low-turbidity data acquired in Geneva shows slightly better results than the previous version. For data acquired at sites with higher turbidity, the bias stays within ±4% for the beam and global irradiances, and the standard deviation is around 5% for clean and stable conditions and around 12% for questionable data and variable sky conditions.
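    The Lambert-Beer relation underlying the Solis scheme can be sketched in one line: direct irradiance decays exponentially with optical depth times air mass. The function and numeric values below are illustrative, not the fitted Solis parameterization:

```python
import math

def beam_irradiance(i0, tau, air_mass):
    """Direct-normal irradiance from the Lambert-Beer relation,
    I = I0 * exp(-tau * m), where tau is a broadband optical depth
    and m the relative air mass. Values are illustrative only."""
    return i0 * math.exp(-tau * air_mass)

clean = beam_irradiance(1361.0, 0.3, 1.5)  # moderate turbidity
hazy = beam_irradiance(1361.0, 2.0, 1.5)   # very high turbidity (tau ~ 2)
assert hazy < clean  # higher aerosol optical depth -> weaker beam component
```

    The extended model's point is that fitted simplifications of this relation must remain valid as tau grows large, not just in the low-turbidity regime.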

  10. Validating modeled turbulent heat fluxes across large freshwater surfaces

    Science.gov (United States)

    Lofgren, B. M.; Fujisaki-Manome, A.; Gronewold, A.; Anderson, E. J.; Fitzpatrick, L.; Blanken, P.; Spence, C.; Lenters, J. D.; Xiao, C.; Charusambot, U.

    2017-12-01

    Turbulent fluxes of latent and sensible heat are important physical processes that influence the energy and water budgets of the Great Lakes. Validation and improvement of bulk flux algorithms to simulate these turbulent heat fluxes are critical for accurate prediction of hydrodynamics, water levels, weather, and climate over the region. Here we consider five heat flux algorithms from several model systems: the Finite-Volume Community Ocean Model (FVCOM), the Weather Research and Forecasting model, and the Large Lake Thermodynamics Model, which are used in research and operational environments and concentrate on different aspects of the Great Lakes' physical system, but interface at the lake surface. The heat flux algorithms were isolated from each model and driven by meteorological data from over-lake stations in the Great Lakes Evaporation Network. The simulation results were compared with eddy covariance flux measurements at the same stations. All models show the capacity to reproduce the seasonal cycle of the turbulent heat fluxes. Overall, the Coupled Ocean Atmosphere Response Experiment (COARE) algorithm in FVCOM has the best agreement with eddy covariance measurements. Simulations with the other four algorithms are overall improved by updating the parameterization of the roughness length scales of temperature and humidity. Agreement between modelled and observed fluxes varied notably with the geographical locations of the stations. For example, at the Long Point station in Lake Erie, observed fluxes are likely influenced by the upwind land surface, while the simulations do not account for the land surface influence, and therefore the agreement there is generally worse.
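    The bulk flux algorithms compared above all share the same aerodynamic skeleton: fluxes proportional to wind speed times a surface-air gradient. A minimal sketch with constant transfer coefficients; real algorithms such as COARE make these coefficients stability- and roughness-dependent, and all values here are illustrative:

```python
def bulk_fluxes(rho_air, u_wind, t_surface, t_air, q_surface, q_air,
                c_h=1.3e-3, c_e=1.3e-3, cp=1004.0, lv=2.5e6):
    """Bulk aerodynamic sensible (H) and latent (LE) heat fluxes in W/m^2,
    positive upward. c_h and c_e are illustrative constant transfer
    coefficients; q is specific humidity (kg/kg)."""
    sensible = rho_air * cp * c_h * u_wind * (t_surface - t_air)
    latent = rho_air * lv * c_e * u_wind * (q_surface - q_air)
    return sensible, latent

# A warm, moist lake surface under cold dry air loses heat upward:
h, le = bulk_fluxes(1.2, 8.0, t_surface=6.0, t_air=2.0,
                    q_surface=0.006, q_air=0.004)
assert h > 0 and le > 0
```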

  11. Seismic behaviour of PWR fuel assemblies model and its validation

    International Nuclear Information System (INIS)

    Queval, J.C.; Gantenbein, F.; Brochard, D.; Benjedidia, A.

    1991-01-01

    The validity of models simulating the seismic behaviour of PWR cores can only be exactly demonstrated by seismic testing on groups of fuel assemblies. Shake table seismic tests of rows of assembly mock-ups, conducted by the CEA in conjunction with FRAMATOME, are presented in reference /1/. This paper addresses the initial comparisons between model and test results for a row of five assemblies in air. Two models are used: a model with a single beam per assembly, used regularly in accident analyses and described in reference /2/, and a more refined 2-beam-per-assembly model, geared mainly towards interpretation of test results. The 2-beam model is discussed first, together with the parametric studies used to characterize it and the study of the assembly row for a period limited to 2 seconds and for different excitation levels. For the 1-beam assembly model used in applications, the row is studied over the total test time, i.e. twenty seconds, which covers the average duration of core seismic behaviour studies, and for a peak exciting acceleration value of 0.4 g, which corresponds to the SSE level of the reference spectrum.

  12. Discrete fracture modelling for the Stripa tracer validation experiment predictions

    International Nuclear Information System (INIS)

    Dershowitz, W.; Wallmann, P.

    1992-02-01

    Groundwater flow and transport through three-dimensional networks of discrete fractures was modeled to predict the recovery of tracer from tracer injection experiments conducted during phase 3 of the Stripa site characterization and validation project. Predictions were made on the basis of an updated version of the site-scale discrete fracture conceptual model used for flow predictions and preliminary transport modelling. In this model, individual fractures were treated as stochastic features described by probability distributions of geometric and hydrologic properties. Fractures were divided into three populations: fractures in fracture zones near the drift, non-fracture-zone fractures within 31 m of the drift, and fractures in fracture zones over 31 m from the drift axis. Fractures outside fracture zones were not modelled beyond 31 m from the drift axis. Transport predictions were produced using the FracMan discrete fracture modelling package for each of five tracer experiments. Output was produced in the seven formats specified by the Stripa task force on fracture flow modelling. (au)
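    Treating fractures as stochastic features drawn from probability distributions, as this record describes, can be sketched as follows. The distribution families and parameter values are illustrative stand-ins, not the site-calibrated ones used in the Stripa FracMan model:

```python
import random

def sample_fracture(rng):
    """Draw one stochastic fracture: lognormal size and transmissivity,
    uniform orientation. All distributions/parameters are illustrative
    placeholders for site-calibrated ones."""
    return {
        "radius_m": rng.lognormvariate(0.5, 0.6),
        "transmissivity_m2_s": rng.lognormvariate(-18.0, 1.5),
        "strike_deg": rng.uniform(0.0, 360.0),
    }

# A seeded generator makes a realization of the network reproducible:
rng = random.Random(42)
network = [sample_fracture(rng) for _ in range(1000)]
assert all(f["radius_m"] > 0 for f in network)
```

    Flow and transport are then solved on many such realizations, so predictions (e.g. tracer recovery) come with the spread implied by the fracture statistics.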

  13. Numerical simulation and experimental validation of aircraft ground deicing model

    Directory of Open Access Journals (Sweden)

    Bin Chen

    2016-05-01

    Full Text Available Aircraft ground deicing plays an important role in guaranteeing aircraft safety. In practice, most airports generally use as much deicing fluid as possible to remove the ice, which causes waste of the deicing fluids and pollution of the environment. Therefore, a model of aircraft ground deicing should be built to establish the foundation for subsequent research, such as the optimization of deicing fluid consumption. In this article, the heat balance of the deicing process is depicted, and a dynamic model of the deicing process is provided based on an analysis of the deicing mechanism. In the dynamic model, the surface temperature of the deicing fluids and the ice thickness are regarded as the state parameters, while the fluid flow rate, the initial temperature, and the injection time of the deicing fluids are treated as control parameters. Ignoring the heat exchange between the deicing fluids and the environment, a simplified model is obtained. The rationality of the simplified model is verified by numerical simulation, and the impacts of the flow rate, the initial temperature, and the injection time on the deicing process are investigated. To verify the model, a semi-physical experiment system is established, consisting of the low-constant-temperature test chamber, the ice simulation system, the deicing fluid heating and spraying system, the simulated wing, the test sensors, and the computer measurement and control system. The actual test data verify the validity of the dynamic model and the accuracy of the simulation analysis.

  14. Validation of Symptom Validity Tests Using a "Child-model" of Adult Cognitive Impairments

    NARCIS (Netherlands)

    Rienstra, A.; Spaan, P. E. J.; Schmand, B.

    2010-01-01

    Validation studies of symptom validity tests (SVTs) in children are uncommon. However, since children's cognitive abilities are not yet fully developed, their performance may provide additional support for the validity of these measures in adult populations. Four SVTs, the Test of Memory Malingering

  15. Validation of symptom validity tests using a "child-model" of adult cognitive impairments

    NARCIS (Netherlands)

    Rienstra, A.; Spaan, P.E.J.; Schmand, B.

    2010-01-01

    Validation studies of symptom validity tests (SVTs) in children are uncommon. However, since children’s cognitive abilities are not yet fully developed, their performance may provide additional support for the validity of these measures in adult populations. Four SVTs, the Test of Memory Malingering

  16. Final Project Report - Coupled Biogeochemical Process Evaluation for Conceptualizing Trichloriethylene Co-Metabolism: Co-Metabolic Enzyme Activity Probes and Modeling Co-Metabolism and Attenuation

    Energy Technology Data Exchange (ETDEWEB)

    Starr, Robert C; Orr, Brennon R; Lee, M Hope; Delwiche, Mark

    2010-02-26

    Trichloroethene (TCE) (also known as trichloroethylene) is a common contaminant in groundwater. TCE is regulated in drinking water at a concentration of 5 µg/L, so a small mass of TCE has the potential to contaminate large volumes of water. The physical and chemical characteristics of TCE allow it to migrate quickly in most subsurface environments, and thus large plumes of contaminated groundwater can form from a single release. The migration and persistence of TCE in groundwater can be limited by biodegradation. TCE can be biodegraded via different processes under either anaerobic or aerobic conditions. Anaerobic biodegradation is widely recognized, but aerobic degradation is less well recognized. Under aerobic conditions, TCE can be oxidized to nonhazardous products via cometabolic pathways. This study applied enzyme activity probes to demonstrate that cometabolic degradation of TCE occurs in aerobic groundwater at several locations, used laboratory microcosm studies to determine aerobic degradation rates, and extrapolated lab-measured rates to in situ rates based on concentrations of microorganisms with active enzymes involved in cometabolic TCE degradation. Microcosms were constructed using basalt chips that had been inoculated with microorganisms from groundwater at the Idaho National Laboratory Test Area North TCE plume, by filling a set of Flow-Through In Situ Reactors (FTISRs) with chips and placing the FTISRs into the open interval of a well for several months. A parametric study was performed to evaluate predicted degradation rates and concentration trends using a competitive inhibition kinetic model, which accounts for competition for enzyme active sites by both a growth substrate and a cometabolic substrate. The competitive inhibition kinetic expression was programmed for use in the RT3D reactive transport package. Simulations of TCE plume evolution using both competitive inhibition kinetics and first-order decay were performed.
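    The competitive inhibition kinetics used in this study can be sketched directly: the growth substrate and the cometabolic substrate (TCE) compete for the same enzyme active sites, so each substrate's half-saturation term is inflated by the other's occupancy. The function and all parameter values below are illustrative, not the report's calibrated kinetics:

```python
def competitive_rates(s_growth, s_tce, vmax_g, vmax_c, ks_g, ks_c):
    """Degradation rates for a growth substrate and a cometabolic
    substrate under competitive inhibition: each Michaelis-Menten
    half-saturation constant is scaled by (1 + competitor/its Ks).
    Parameter names and values are illustrative placeholders."""
    r_growth = vmax_g * s_growth / (ks_g * (1 + s_tce / ks_c) + s_growth)
    r_tce = vmax_c * s_tce / (ks_c * (1 + s_growth / ks_g) + s_tce)
    return r_growth, r_tce

# More growth substrate occupies more active sites and slows TCE turnover:
_, r_low = competitive_rates(0.1, 0.05, 1.0, 0.2, 0.5, 0.3)
_, r_high = competitive_rates(5.0, 0.05, 1.0, 0.2, 0.5, 0.3)
assert r_high < r_low
```

    An expression of this form, programmed into RT3D as the record describes, reduces to simple Michaelis-Menten kinetics when the competing substrate concentration goes to zero.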

  17. Smooth particle hydrodynamic modeling and validation for impact bird substitution

    Science.gov (United States)

    Babu, Arun; Prasad, Ganesh

    2018-04-01

    Bird strike events occur incidentally and can at times be fatal for airframe structures. Federal Aviation Regulations (FAR) and similar standards mandate that aircraft be designed to withstand various levels of bird hit damage. The subject matter of this paper is the numerical modeling of a soft-body geometry that realistically substitutes for an actual bird in simulations of bird hits on target structures. Evolution of such a numerical code to reproduce actual bird behavior through impact is much desired for making use of state-of-the-art computational facilities in simulating bird strike events. The validity of simulations depicting bird hits is largely dependent on the correctness of the bird model. In an impact, a set of complex and coupled dynamic interactions exists between the target and the impactor. To simplify this problem, the impactor response needs to be decoupled from that of the target. This can be done by assuming and modeling the target as noncompliant. The bird is assumed to behave as a fluid during impact, since the stresses generated in the bird body are much greater than its yield stress. Hydrodynamic theory is therefore ideal for describing this problem. The impactor literally flows steadily over the target for most of the event. The impact starts with an initial shock and passes into a radial release shock regime. Subsequently, a steady flow is established in the bird body, and this phase continues until the whole length of the bird body is turned around. The initial shock pressure and the steady-state pressure are ideal variables for comparing and validating the bird model. Spatial discretization of the bird is done using the Smooth Particle Hydrodynamics (SPH) approach. This Discrete Element Model (DEM) offers significant advantages over other contemporary approaches. Thermodynamic state variable relations are established using a Polynomial Equation of State (EOS). ANSYS AUTODYN is used to perform the explicit dynamic simulation of the impact event. Validation of the shock and steady

  18. Validation of kinetic modeling of progesterone release from polymeric membranes

    Directory of Open Access Journals (Sweden)

    Analia Irma Romero

    2018-01-01

    Mathematical modeling of drug release systems is fundamental to their development and optimization, since it allows drug release rates to be predicted and the physical transport mechanisms involved to be elucidated. In this paper we validate a novel mathematical model that describes controlled release of progesterone (Prg) from poly-3-hydroxybutyric acid (PHB) membranes. A statistical analysis was conducted to compare the fit of our model with six different models, and the Akaike information criterion (AIC) was used to find the equation with the best fit. A simple relation between mass and drug release rate was found, which allows the effect of Prg loads on the release behavior to be predicted. Our proposed model had the minimum AIC value, and therefore statistically fitted the experimental data best for all the Prg loads tested. Furthermore, the initial release rate was calculated, from which the interface mass-transfer coefficient was estimated, and the equilibrium distribution constant of Prg between the PHB and the release medium was also determined. The results lead us to conclude that our proposed model best fits the experimental data and can successfully be used to describe Prg drug release from PHB membranes.
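
    The model-selection step described above, ranking candidate release equations by the Akaike information criterion, can be sketched as follows. The data, the two classical candidate models, and the grid-search fit are illustrative stand-ins, not the paper's Prg measurements or its proposed model:

    ```python
    import math

    # Illustrative cumulative-release data (time in h, fraction released);
    # these are NOT the paper's progesterone measurements.
    t = [1.0, 2.0, 4.0, 8.0, 16.0, 24.0]
    obs = [0.18, 0.31, 0.48, 0.66, 0.82, 0.89]

    def first_order(ti, k):
        # First-order release: M(t)/M_inf = 1 - exp(-k t)
        return 1.0 - math.exp(-k * ti)

    def higuchi(ti, k):
        # Higuchi model: M(t)/M_inf = k * sqrt(t)
        return k * math.sqrt(ti)

    def rss(model, k):
        # Residual sum of squares of a one-parameter model
        return sum((model(ti, k) - yi) ** 2 for ti, yi in zip(t, obs))

    def aic(residual_ss, n, n_params):
        # Akaike information criterion for a least-squares fit
        return n * math.log(residual_ss / n) + 2 * n_params

    def fit(model):
        # Crude grid search over the rate constant k
        k_grid = [i / 1000.0 for i in range(1, 500)]
        k_best = min(k_grid, key=lambda k: rss(model, k))
        return k_best, rss(model, k_best)

    k1, rss1 = fit(first_order)
    k2, rss2 = fit(higuchi)
    aic1, aic2 = aic(rss1, len(t), 1), aic(rss2, len(t), 1)
    # The equation with the lower AIC is preferred, as in the paper's comparison.
    ```

    With more candidate models the same loop ranks them all; AIC penalizes extra parameters, so a better raw fit does not automatically win.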

  19. MT3DMS: Model use, calibration, and validation

    Science.gov (United States)

    Zheng, C.; Hill, Mary C.; Cao, G.; Ma, R.

    2012-01-01

    MT3DMS is a three-dimensional multi-species solute transport model for solving advection, dispersion, and chemical reactions of contaminants in saturated groundwater flow systems. MT3DMS interfaces directly with the U.S. Geological Survey finite-difference groundwater flow model MODFLOW for the flow solution and supports the hydrologic and discretization features of MODFLOW. MT3DMS contains multiple transport solution techniques in one code, which can often be important, including in model calibration. Since its first release in 1990 as MT3D for single-species mass transport modeling, MT3DMS has been widely used in research projects and practical field applications. This article provides a brief introduction to MT3DMS and presents recommendations about calibration and validation procedures for field applications of MT3DMS. The examples presented suggest the need to consider alternative processes as models are calibrated and suggest opportunities and difficulties associated with using groundwater age in transport model calibration.

  20. Non-Linear Slosh Damping Model Development and Validation

    Science.gov (United States)

    Yang, H. Q.; West, Jeff

    2015-01-01

    Propellant tank slosh dynamics are typically represented by a mechanical model of spring mass damper. This mechanical model is then included in the equation of motion of the entire vehicle for Guidance, Navigation and Control (GN&C) analysis. For a partially-filled smooth wall propellant tank, the critical damping based on classical empirical correlation is as low as 0.05%. Due to this low value of damping, propellant slosh is potential sources of disturbance critical to the stability of launch and space vehicles. It is postulated that the commonly quoted slosh damping is valid only under the linear regime where the slosh amplitude is small. With the increase of slosh amplitude, the critical damping value should also increase. If this nonlinearity can be verified and validated, the slosh stability margin can be significantly improved, and the level of conservatism maintained in the GN&C analysis can be lessened. The purpose of this study is to explore and to quantify the dependence of slosh damping with slosh amplitude. Accurately predicting the extremely low damping value of a smooth wall tank is very challenging for any Computational Fluid Dynamics (CFD) tool. One must resolve thin boundary layers near the wall and limit numerical damping to minimum. This computational study demonstrates that with proper grid resolution, CFD can indeed accurately predict the low damping physics from smooth walls under the linear regime. Comparisons of extracted damping values with experimental data for different tank sizes show very good agreements. Numerical simulations confirm that slosh damping is indeed a function of slosh amplitude. When slosh amplitude is low, the damping ratio is essentially constant, which is consistent with the empirical correlation. Once the amplitude reaches a critical value, the damping ratio becomes a linearly increasing function of the slosh amplitude. A follow-on experiment validated the developed nonlinear damping relationship. 
This discovery can
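
    The amplitude dependence described above, a constant damping ratio below a critical slosh amplitude followed by linear growth, can be written as a simple piecewise function. The baseline ratio, critical amplitude, and slope below are placeholders for illustration, not the study's fitted values:

    ```python
    def slosh_damping_ratio(amplitude_m, zeta0=0.0005, a_crit=0.01, slope=0.05):
        # Constant baseline damping ratio (the classical ~0.05% value) below the
        # critical amplitude, then linear growth with slosh amplitude.
        # zeta0, a_crit (m), and slope (1/m) are illustrative placeholders.
        if amplitude_m <= a_crit:
            return zeta0
        return zeta0 + slope * (amplitude_m - a_crit)

    low = slosh_damping_ratio(0.005)   # linear regime: baseline damping only
    high = slosh_damping_ratio(0.05)   # large amplitude: increased damping
    ```

    In a GN&C stability analysis, such a relation would replace the single conservative constant in the spring-mass-damper slosh model.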

  1. Biogeochemical aspects of aquifer thermal energy storage

    NARCIS (Netherlands)

    Brons, H.J.

    1992-01-01

    During the process of aquifer thermal energy storage, the in situ temperature of the groundwater-sediment system may fluctuate significantly. As a result, the groundwater characteristics can be considerably affected by a variety of chemical, biogeochemical and microbiological processes.

  2. Biogeochemical cycling of radionuclides in the environment

    International Nuclear Information System (INIS)

    Livens, F.R.

    1990-01-01

    The biogeochemical cycling of radionuclides, along with other components such as nutrients, around ecosystems is discussed. In particular, the behaviour of cesium in freshwater ecosystems since the Chernobyl accident and the behaviour of technetium, in the form of the pertechnetate anion TcO4-, in marine ecosystems are considered. (UK)

  3. Modeling and validation of existing VAV system components

    Energy Technology Data Exchange (ETDEWEB)

    Nassif, N.; Kajl, S.; Sabourin, R. [Ecole de Technologie Superieure, Montreal, PQ (Canada)

    2004-07-01

    The optimization of supervisory control strategies and local-loop controllers can improve the performance of HVAC (heating, ventilating, air-conditioning) systems. In this study, component models of the fan, the damper, and the cooling coil were developed and validated against monitored data from an existing variable air volume (VAV) system installed at Montreal's Ecole de Technologie Superieure. The measured variables that influence energy use in the individual HVAC models included: (1) outdoor and return air temperature and relative humidity, (2) supply air and water temperatures, (3) zone airflow rates, (4) supply duct, fan outlet, and mixing plenum static pressures, (5) fan speed, and (6) minimum and principal damper positions and cooling and heating coil valve positions. Additional variables that were considered but not measured were: (1) fan and outdoor airflow rates, (2) cooling coil inlet and outlet relative humidity, and (3) liquid flow rate through the heating and cooling coils. The paper demonstrates the challenges of the validation process when monitored data from existing VAV systems are used. 7 refs., 11 figs.

  4. Numerical Simulation of Hydrogen Combustion: Global Reaction Model and Validation

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yun [School of Energy and Power Engineering, Xi’an Jiaotong University, Xi’an (China); Department of Mechanical, Aerospace and Nuclear Engineering, Rensselaer Polytechnic Institute, Troy, NY (United States); Liu, Yinhe, E-mail: yinheliu@mail.xjtu.edu.cn [School of Energy and Power Engineering, Xi’an Jiaotong University, Xi’an (China)

    2017-11-20

    Due to the complexity of modeling the combustion process in nuclear power plants, global mechanisms are preferred for numerical simulation. To quickly perform highly resolved simulations of large-scale hydrogen combustion with limited processing resources, a method based on thermal theory was developed to obtain the kinetic parameters of a global reaction mechanism for hydrogen–air combustion over a wide range of conditions. The calculated kinetic parameters at lower hydrogen concentrations (C_hydrogen < 20%) were validated against results obtained from experimental measurements in a container and combustion test facility. In addition, the numerical data from the global mechanism (C_hydrogen > 20%) were compared with results from a detailed mechanism. Good agreement between the model predictions and the experimental data was achieved, and the comparison between simulation results from the detailed and global reaction mechanisms shows that the present calculated global mechanism has excellent predictive capability for a wide range of hydrogen–air mixtures.
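
    A global (one-step) mechanism of the kind validated above reduces the detailed chemistry to a single Arrhenius rate expression. The pre-exponential factor and activation energy below are illustrative placeholders, not the kinetic parameters derived in the paper:

    ```python
    import math

    R = 8.314  # universal gas constant, J/(mol K)

    def arrhenius_rate(T, A, Ea):
        # One-step global rate constant: k = A * exp(-Ea / (R T)).
        # A and Ea are fitted per mixture in a real global mechanism.
        return A * math.exp(-Ea / (R * T))

    A = 1.0e9    # pre-exponential factor (illustrative units)
    Ea = 1.2e5   # activation energy, J/mol (illustrative)

    k_1000 = arrhenius_rate(1000.0, A, Ea)
    k_1500 = arrhenius_rate(1500.0, A, Ea)
    # The rate constant increases steeply with temperature, which is why
    # a single (A, Ea) pair only holds over a limited concentration range.
    ```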

  5. Experimental validation of models for Plasma Focus devices

    International Nuclear Information System (INIS)

    Rodriguez Palomino, Luis; Gonzalez, Jose; Clausse, Alejandro

    2003-01-01

    Plasma Focus (PF) devices are thermonuclear pulsators that produce short pulses of radiation (X-rays, charged particles, and neutrons). Since the pioneering work of Filippov and Mather, these devices have been used to study plasma properties. Nowadays, interest in PF devices is focused on technological applications related to their use as pulsed neutron sources. For numerical calculation, the inter-institutional PLADEMA (PLAsmas DEnsos MAgnetizados) network is developing three models, each useful at a different engineering stage of Plasma Focus design. One of the main objectives of this work is a comparative study of the influence of the different parameters involved in each model. To validate the results, several experimental measurements under different geometries and initial conditions were performed. (author)

  6. Numerical Simulation of Hydrogen Combustion: Global Reaction Model and Validation

    International Nuclear Information System (INIS)

    Zhang, Yun; Liu, Yinhe

    2017-01-01

    Due to the complexity of modeling the combustion process in nuclear power plants, global mechanisms are preferred for numerical simulation. To quickly perform highly resolved simulations of large-scale hydrogen combustion with limited processing resources, a method based on thermal theory was developed to obtain the kinetic parameters of a global reaction mechanism for hydrogen–air combustion over a wide range of conditions. The calculated kinetic parameters at lower hydrogen concentrations (C_hydrogen < 20%) were validated against results obtained from experimental measurements in a container and combustion test facility. In addition, the numerical data from the global mechanism (C_hydrogen > 20%) were compared with results from a detailed mechanism. Good agreement between the model predictions and the experimental data was achieved, and the comparison between simulation results from the detailed and global reaction mechanisms shows that the present calculated global mechanism has excellent predictive capability for a wide range of hydrogen–air mixtures.

  7. Validating agent oriented methodology (AOM) for netlogo modelling and simulation

    Science.gov (United States)

    WaiShiang, Cheah; Nissom, Shane; YeeWai, Sim; Sharbini, Hamizan

    2017-10-01

    AOM (Agent Oriented Modeling) is a comprehensive and unified methodology for agent-oriented software development. The AOM methodology was proposed to aid developers by introducing techniques, terminology, notation, and guidelines during agent system development. Although the AOM methodology is claimed to be capable of developing complex real-world systems, its potential is yet to be realized and recognized by the mainstream software community, and the adoption of AOM is still in its infancy. Among the reasons is that there are few case studies or success stories for AOM. This paper presents two case studies on the adoption of AOM for individual-based modelling and simulation. It demonstrates how AOM is useful for epidemiology and ecology studies, and hence further validates AOM in a qualitative manner.

  8. Global and Regional Ecosystem Modeling: Databases of Model Drivers and Validation Measurements

    Energy Technology Data Exchange (ETDEWEB)

    Olson, R.J.

    2002-03-19

    Understanding global-scale ecosystem responses to changing environmental conditions is important both as a scientific question and as the basis for making policy decisions. The confidence in regional models depends on how well the field data used to develop the model represent the region of interest, how well the environmental model driving variables (e.g., vegetation type, climate, and soils associated with a site used to parameterize ecosystem models) represent the region of interest, and how well regional model predictions agree with observed data for the region. To assess the accuracy of global model forecasts of terrestrial carbon cycling, two Ecosystem Model-Data Intercomparison (EMDI) workshops were held (December 1999 and April 2001). The workshops included 17 biogeochemical, satellite-driven, detailed process, and dynamic vegetation global model types. The approach was to run regional or global versions of the models for sites with net primary productivity (NPP) measurements (i.e., not fine-tuned for specific site conditions) and analyze the model-data differences. Extensive worldwide NPP data were assembled with model driver data, including vegetation, climate, and soils data, to perform the intercomparison. This report describes the compilation of NPP estimates for 2,523 sites and 5,164 0.5°-grid cells under the Global Primary Production Data Initiative (GPPDI) and the results of the EMDI review and outlier analysis that produced a refined set of NPP estimates and model driver data. The EMDI process resulted in 81 Class A sites, 933 Class B sites, and 3,855 Class C cells derived from the original synthesis of NPP measurements and associated driver data. Class A sites represent well-documented study sites that have complete aboveground and belowground NPP measurements. Class B sites represent more numerous "extensive" sites with less documentation and site-specific information available. Class C cells represent estimates of

  9. External Validity and Model Validity: A Conceptual Approach for Systematic Review Methodology

    Directory of Open Access Journals (Sweden)

    Raheleh Khorsan

    2014-01-01

    Background. Evidence rankings do not give equal consideration to internal validity (IV), external validity (EV), and model validity (MV) for clinical studies, including complementary and alternative medicine/integrative health care (CAM/IHC) research. This paper describes this model and offers an EV assessment tool (EVAT©) for weighing studies according to EV and MV in addition to IV. Methods. An abbreviated systematic review methodology was employed to search, assemble, and evaluate the literature published on EV/MV criteria. Standard databases were searched for keywords relating to EV, MV, and bias scoring from inception to January 2013. The tools identified and concepts described were pooled to assemble a robust tool for evaluating these quality criteria. Results. This study assembled a streamlined, objective tool for evaluating the quality of EV/MV research that is more sensitive to CAM/IHC research. Conclusion. Improved reporting on EV can help produce information that will guide policy makers, public health researchers, and other scientists in the selection, development, and improvement of research-tested interventions. Overall, clinical studies with high EV have the potential to provide the most useful information about "real-world" consequences of health interventions. It is hoped that this novel tool, which considers IV, EV, and MV on an equal footing, will better guide clinical decision making.

  10. Experimental validation of solid rocket motor damping models

    Science.gov (United States)

    Riso, Cristina; Fransen, Sebastiaan; Mastroddi, Franco; Coppotelli, Giuliano; Trequattrini, Francesco; De Vivo, Alessio

    2017-12-01

    In design and certification of spacecraft, payload/launcher coupled load analyses are performed to simulate the satellite dynamic environment. To obtain accurate predictions, the system damping properties must be properly taken into account in the finite element model used for coupled load analysis. This is typically done using a structural damping characterization in the frequency domain, which is not applicable in the time domain. Therefore, the structural damping matrix of the system must be converted into an equivalent viscous damping matrix when a transient coupled load analysis is performed. This paper focuses on the validation of equivalent viscous damping methods for dynamically condensed finite element models via correlation with experimental data for a realistic structure representative of a slender launch vehicle with solid rocket motors. A second scope of the paper is to investigate how to conveniently choose a single combination of Young's modulus and structural damping coefficient—complex Young's modulus—to approximate the viscoelastic behavior of a solid propellant material in the frequency band of interest for coupled load analysis. A scaled-down test article inspired to the Z9-ignition Vega launcher configuration is designed, manufactured, and experimentally tested to obtain data for validation of the equivalent viscous damping methods. The Z9-like component of the test article is filled with a viscoelastic material representative of the Z9 solid propellant that is also preliminarily tested to investigate the dependency of the complex Young's modulus on the excitation frequency and provide data for the test article finite element model. Experimental results from seismic and shock tests performed on the test configuration are correlated with numerical results from frequency and time domain analyses carried out on its dynamically condensed finite element model to assess the applicability of different equivalent viscous damping methods to describe
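
    The structural-to-viscous conversion discussed above can be illustrated, for a single mode, by the textbook energy-equivalence relation c_eq = ηk/ω, which yields a viscous damping ratio of η/2 at resonance. The stiffness, mass, and damping values below are illustrative, not those of the Vega-derived test article:

    ```python
    import math

    def equivalent_viscous_damping(eta, k, omega):
        # Viscous damper dissipating the same energy per cycle as structural
        # (hysteretic) damping eta, evaluated at frequency omega: c_eq = eta*k/omega
        return eta * k / omega

    k = 4.0e6     # modal stiffness, N/m (illustrative)
    m = 100.0     # modal mass, kg (illustrative)
    omega = math.sqrt(k / m)       # undamped natural frequency, rad/s
    eta = 0.02                     # structural damping coefficient (illustrative)

    c_eq = equivalent_viscous_damping(eta, k, omega)
    zeta = c_eq / (2.0 * math.sqrt(k * m))   # resulting viscous damping ratio
    # At resonance the equivalence gives zeta = eta / 2.
    ```

    The methods compared in the paper generalize this idea to full damping matrices of dynamically condensed finite element models, where the choice of reference frequency per mode is the crux.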

  11. Design-validation of a hand exoskeleton using musculoskeletal modeling.

    Science.gov (United States)

    Hansen, Clint; Gosselin, Florian; Ben Mansour, Khalil; Devos, Pierre; Marin, Frederic

    2018-04-01

    Exoskeletons are progressively reaching homes and workplaces, allowing interaction with virtual environments, remote control of robots, or assisting human operators in carrying heavy loads. Their design is however still a challenge as these robots, being mechanically linked to the operators who wear them, have to meet ergonomic constraints besides the usual robotic requirements in terms of workspace, speed, or efforts. They have in particular to fit the anthropometry and mobility of their users. This traditionally results in numerous prototypes which are progressively fitted to each individual person. In this paper, we propose instead to validate the design of a hand exoskeleton in a fully digital environment, without the need for a physical prototype. The purpose of this study is thus to examine whether finger kinematics are altered when using a given hand exoskeleton. Therefore, user-specific musculoskeletal models were created and driven by a motion capture system to evaluate the fingers' joint kinematics when performing two industry-related tasks. The kinematic chain of the exoskeleton was added to the musculoskeletal models and its compliance with the hand movements was evaluated. Our results show that the proposed exoskeleton design does not influence the fingers' joint angles, the coefficient of determination between the model with and without the exoskeleton being consistently high (mean R² = 0.93) and the nRMSE consistently low (mean nRMSE = 5.42°). These results are promising, and this approach combining musculoskeletal and robotic modeling driven by motion capture data could be a key factor in the ergonomic validation of the design of orthotic devices and exoskeletons prior to manufacturing. Copyright © 2017 Elsevier Ltd. All rights reserved.
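
    The agreement metrics quoted above, the coefficient of determination and a range-normalised RMSE between joint-angle traces with and without the exoskeleton, can be computed as follows. The sinusoidal "joint angle" signals are synthetic stand-ins for the motion-capture data:

    ```python
    import math

    def r_squared(ref, pred):
        # Coefficient of determination between a reference signal and a prediction
        mean_ref = sum(ref) / len(ref)
        ss_res = sum((r - p) ** 2 for r, p in zip(ref, pred))
        ss_tot = sum((r - mean_ref) ** 2 for r in ref)
        return 1.0 - ss_res / ss_tot

    def nrmse(ref, pred):
        # RMSE normalised by the range of the reference signal (dimensionless)
        mse = sum((r - p) ** 2 for r, p in zip(ref, pred)) / len(ref)
        return math.sqrt(mse) / (max(ref) - min(ref))

    # Synthetic joint-angle traces (degrees): a flexion cycle, and the same
    # cycle with a small constant offset standing in for the exoskeleton model.
    ref = [30.0 + 20.0 * math.sin(0.1 * i) for i in range(100)]
    pred = [r + 1.0 for r in ref]

    r2 = r_squared(ref, pred)
    err = nrmse(ref, pred)
    # High r2 and low err indicate the exoskeleton barely alters the kinematics.
    ```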

  12. Experimental validation of solid rocket motor damping models

    Science.gov (United States)

    Riso, Cristina; Fransen, Sebastiaan; Mastroddi, Franco; Coppotelli, Giuliano; Trequattrini, Francesco; De Vivo, Alessio

    2018-06-01

    In design and certification of spacecraft, payload/launcher coupled load analyses are performed to simulate the satellite dynamic environment. To obtain accurate predictions, the system damping properties must be properly taken into account in the finite element model used for coupled load analysis. This is typically done using a structural damping characterization in the frequency domain, which is not applicable in the time domain. Therefore, the structural damping matrix of the system must be converted into an equivalent viscous damping matrix when a transient coupled load analysis is performed. This paper focuses on the validation of equivalent viscous damping methods for dynamically condensed finite element models via correlation with experimental data for a realistic structure representative of a slender launch vehicle with solid rocket motors. A second scope of the paper is to investigate how to conveniently choose a single combination of Young's modulus and structural damping coefficient—complex Young's modulus—to approximate the viscoelastic behavior of a solid propellant material in the frequency band of interest for coupled load analysis. A scaled-down test article inspired to the Z9-ignition Vega launcher configuration is designed, manufactured, and experimentally tested to obtain data for validation of the equivalent viscous damping methods. The Z9-like component of the test article is filled with a viscoelastic material representative of the Z9 solid propellant that is also preliminarily tested to investigate the dependency of the complex Young's modulus on the excitation frequency and provide data for the test article finite element model. Experimental results from seismic and shock tests performed on the test configuration are correlated with numerical results from frequency and time domain analyses carried out on its dynamically condensed finite element model to assess the applicability of different equivalent viscous damping methods to describe

  13. Validation of a Global Hydrodynamic Flood Inundation Model

    Science.gov (United States)

    Bates, P. D.; Smith, A.; Sampson, C. C.; Alfieri, L.; Neal, J. C.

    2014-12-01

    In this work we present first validation results for a hyper-resolution global flood inundation model. We use a true hydrodynamic model (LISFLOOD-FP) to simulate flood inundation at 1km resolution globally and then use downscaling algorithms to determine flood extent and depth at 90m spatial resolution. Terrain data are taken from a custom version of the SRTM data set that has been processed specifically for hydrodynamic modelling. Return periods of flood flows along the entire global river network are determined using: (1) empirical relationships between catchment characteristics and index flood magnitude in different hydroclimatic zones derived from global runoff data; and (2) an index flood growth curve, also empirically derived. Bankful return period flow is then used to set channel width and depth, and flood defence impacts are modelled using empirical relationships between GDP, urbanization and defence standard of protection. The results of these simulations are global flood hazard maps for a number of different return period events from 1 in 5 to 1 in 1000 years. We compare these predictions to flood hazard maps developed by national government agencies in the UK and Germany using similar methods but employing detailed local data, and to observed flood extent at a number of sites including St. Louis, USA and Bangkok in Thailand. Results show that global flood hazard models can have considerable skill given careful treatment to overcome errors in the publicly available data that are used as their input.
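
    The two-step flow estimation described above, an index flood derived from catchment characteristics scaled by an empirically derived growth curve, can be sketched with a Gumbel-type growth factor. The coefficients and the index flood value are illustrative, not the study's regionally derived ones:

    ```python
    import math

    def gumbel_growth_factor(return_period_years):
        # Gumbel-type growth curve: dimensionless factor applied to the index
        # flood to obtain the T-year flow. The 0.3 slope and the 0.577 offset
        # (Euler-Mascheroni constant) are illustrative placeholders.
        y = -math.log(-math.log(1.0 - 1.0 / return_period_years))  # reduced variate
        return 1.0 + 0.3 * (y - 0.577)

    def design_flow(index_flood_m3s, return_period_years):
        # T-year design flow = index flood (e.g. mean annual flood) x growth factor
        return index_flood_m3s * gumbel_growth_factor(return_period_years)

    q2 = design_flow(500.0, 2)      # near the index flood itself
    q100 = design_flow(500.0, 100)  # rarer event, substantially larger flow
    ```

    In the global model, the T-year flow at each reach then sets the boundary condition for the hydrodynamic inundation solver.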

  14. Validation of Storm Water Management Model Storm Control Measures Modules

    Science.gov (United States)

    Simon, M. A.; Platz, M. C.

    2017-12-01

    EPA's Storm Water Management Model (SWMM) is a computational code heavily relied upon by industry for the simulation of wastewater and stormwater infrastructure performance. Many municipalities rely on SWMM results to design multi-billion-dollar, multi-decade infrastructure upgrades. Since the 1970s, EPA and others have developed five major releases, the most recent ones containing storm control measure modules for green infrastructure. The main objective of this study was to quantify the accuracy with which SWMM v5.1.10 simulates the hydrologic activity of previously monitored low impact developments. Model performance was evaluated with a mathematical comparison of outflow hydrographs and total outflow volumes, using empirical data and a multi-event, multi-objective calibration method. The calibration methodology utilized PEST++ Version 3, a parameter estimation tool, which aided in the selection of unmeasured hydrologic parameters. From the validation study and sensitivity analysis, several model improvements were identified to advance SWMM LID Module performance for permeable pavements, infiltration units, and green roofs; these were performed and are reported herein. Overall, it was determined that SWMM can successfully simulate low impact development controls given accurate model confirmation, parameter measurement, and model calibration.
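
    A multi-event calibration of the kind described compares simulated and observed outflow hydrographs through a goodness-of-fit objective. The Nash-Sutcliffe efficiency below is a standard choice for hydrographs, used here as a generic illustration rather than the study's exact objective function; the hydrograph values are toy numbers:

    ```python
    def nash_sutcliffe(observed, simulated):
        # Nash-Sutcliffe efficiency: 1 is a perfect match; 0 means the model
        # is no better than the mean of the observations.
        mean_obs = sum(observed) / len(observed)
        num = sum((o - s) ** 2 for o, s in zip(observed, simulated))
        den = sum((o - mean_obs) ** 2 for o in observed)
        return 1.0 - num / den

    # Toy outflow hydrograph (L/s) and an imperfect simulation of it.
    obs = [0.0, 2.0, 8.0, 15.0, 10.0, 4.0, 1.0, 0.0]
    sim = [0.0, 1.5, 9.0, 14.0, 11.0, 3.5, 1.5, 0.5]

    nse = nash_sutcliffe(obs, sim)
    # A calibration tool such as PEST++ would adjust unmeasured parameters
    # to push a score like this toward 1 across all monitored events.
    ```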

  15. NAIRAS aircraft radiation model development, dose climatology, and initial validation

    Science.gov (United States)

    Mertens, Christopher J.; Meier, Matthias M.; Brown, Steven; Norman, Ryan B.; Xu, Xiaojing

    2013-10-01

    The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a real-time, global, physics-based model used to assess radiation exposure to commercial aircrews and passengers. The model is a free-running physics-based model in the sense that there are no adjustment factors applied to nudge the model into agreement with measurements. The model predicts dosimetric quantities in the atmosphere from both galactic cosmic rays (GCR) and solar energetic particles, including the response of the geomagnetic field to interplanetary dynamical processes and its subsequent influence on atmospheric dose. The focus of this paper is on atmospheric GCR exposure during geomagnetically quiet conditions, with three main objectives. First, provide detailed descriptions of the NAIRAS GCR transport and dosimetry methodologies. Second, present a climatology of effective dose and ambient dose equivalent rates at typical commercial airline altitudes representative of solar cycle maximum and solar cycle minimum conditions and spanning the full range of geomagnetic cutoff rigidities. Third, conduct an initial validation of the NAIRAS model by comparing predictions of ambient dose equivalent rates with tabulated reference measurement data and recent aircraft radiation measurements taken in 2008 during the minimum between solar cycle 23 and solar cycle 24. By applying the criterion of the International Commission on Radiation Units and Measurements (ICRU) on acceptable levels of aircraft radiation dose uncertainty for ambient dose equivalent greater than or equal to an annual dose of 1 mSv, the NAIRAS model is within 25% of the measured data, which fall within the ICRU acceptable uncertainty limit of 30%. The NAIRAS model predictions of ambient dose equivalent rate are generally within 50% of the measured data for any single-point comparison. The largest differences occur at low latitudes and high cutoffs, where the radiation dose level is low. Nevertheless, analysis suggests
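
    The acceptance criterion applied above, model-to-measurement differences within the ICRU 30% uncertainty limit, reduces to a relative-difference check. The dose values below are illustrative, not NAIRAS or reference data:

    ```python
    def within_icru_limit(measured_mSv, predicted_mSv, limit_fraction=0.30):
        # Relative difference between prediction and measurement, and whether
        # it falls inside the ICRU acceptable uncertainty band.
        rel_diff = abs(predicted_mSv - measured_mSv) / measured_mSv
        return rel_diff, rel_diff <= limit_fraction

    # Toy values: a 25% model-measurement difference passes the 30% criterion,
    # mirroring the comparison reported for the NAIRAS model.
    rel, ok = within_icru_limit(1.00, 1.25)
    ```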

  16. Validating neural-network refinements of nuclear mass models

    Science.gov (United States)

    Utama, R.; Piekarewicz, J.

    2018-01-01

    Background: Nuclear astrophysics centers on the role of nuclear physics in the cosmos. In particular, nuclear masses at the limits of stability are critical in the development of stellar structure and the origin of the elements. Purpose: We aim to test and validate the predictions of recently refined nuclear mass models against the newly published AME2016 compilation. Methods: The basic paradigm underlying the recently refined nuclear mass models is based on existing state-of-the-art models that are subsequently refined through the training of an artificial neural network. Bayesian inference is used to determine the parameters of the neural network so that statistical uncertainties are provided for all model predictions. Results: We observe a significant improvement in the Bayesian neural network (BNN) predictions relative to the corresponding "bare" models when compared to the nearly 50 new masses reported in the AME2016 compilation. Further, AME2016 estimates for the handful of impactful isotopes in the determination of r-process abundances are found to be in fairly good agreement with our theoretical predictions. Indeed, the BNN-improved Duflo-Zuker model predicts a root-mean-square deviation relative to experiment of σ_rms ≈ 400 keV. Conclusions: Given the excellent performance of the BNN refinement in confronting the recently published AME2016 compilation, we are confident of its critical role in our quest for mass models of the highest quality. Moreover, as uncertainty quantification is at the core of the BNN approach, the improved mass models are in a unique position to identify those nuclei that will have the strongest impact in resolving some of the outstanding questions in nuclear astrophysics.
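
    The σ_rms figure of merit quoted above is simply the root-mean-square of the mass residuals. The measured/predicted pairs below are illustrative numbers, not AME2016 values:

    ```python
    import math

    def rms_deviation(measured, predicted):
        # Root-mean-square deviation between measured and predicted masses (MeV)
        n = len(measured)
        return math.sqrt(sum((m - p) ** 2 for m, p in zip(measured, predicted)) / n)

    # Illustrative residuals of a few hundred keV per nucleus (values in MeV).
    measured = [10.0, 20.0, 30.0, 40.0]
    predicted = [10.3, 19.5, 30.4, 40.5]

    sigma_rms_kev = rms_deviation(measured, predicted) * 1000.0  # MeV -> keV
    # A BNN refinement aims to shrink this number relative to the bare model,
    # while also attaching a statistical uncertainty to each prediction.
    ```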

  17. Validating Lung Models Using the ASL 5000 Breathing Simulator.

    Science.gov (United States)

    Dexter, Amanda; McNinch, Neil; Kaznoch, Destiny; Volsko, Teresa A

    2018-04-01

    This study sought to validate pediatric models with normal and altered pulmonary mechanics. PubMed and CINAHL databases were searched for studies directly measuring the pulmonary mechanics of healthy infants and children, infants with severe bronchopulmonary dysplasia, and children with neuromuscular disease. The ASL 5000 was used to construct models using tidal volume (VT), inspiratory time (TI), respiratory rate, resistance, compliance, and esophageal pressure gleaned from the literature. Data were collected for a 1-minute period, repeated three times for each model. t tests compared modeled data with data abstracted from the literature. Repeated-measures analyses evaluated model performance over multiple iterations. Statistical significance was established at a P value of less than 0.05. Maximum differences of means (experimental iteration mean - clinical standard mean) for TI and VT were as follows: term infant without lung disease (TI = 0.09 s, VT = 0.29 mL), severe bronchopulmonary dysplasia (TI = 0.08 s, VT = 0.17 mL), child without lung disease (TI = 0.10 s, VT = 0.17 mL), and child with neuromuscular disease (TI = 0.09 s, VT = 0.57 mL). One-sample testing demonstrated statistically significant differences between the clinical controls and the VT and TI values produced by the ASL 5000 for each iteration and model (P < 0.01). The greatest magnitude of the differences was negligible (VT < 1.6%, TI = 18%) and not clinically relevant. Although inconsistencies occurred with the models constructed on the ASL 5000, the simulator was deemed accurate for the study purposes. It is therefore essential to test models and evaluate the magnitude of differences before use.
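
    The one-sample comparisons described above, modeled TI and VT values against clinical reference means, can be reproduced with a plain t statistic. The run data and the reference mean are toy numbers, not the study's measurements:

    ```python
    import math

    def one_sample_t(sample, mu0):
        # One-sample t statistic: repeated simulator measurements against a
        # clinical reference mean mu0.
        n = len(sample)
        mean = sum(sample) / n
        var = sum((x - mean) ** 2 for x in sample) / (n - 1)  # sample variance
        return (mean - mu0) / math.sqrt(var / n)

    # Toy data: simulated inspiratory times (s) vs a literature value of 0.50 s.
    ti_runs = [0.52, 0.53, 0.51, 0.52, 0.53, 0.52]
    t_stat = one_sample_t(ti_runs, 0.50)
    # A large |t| flags a statistically significant offset, which, as in the
    # study, may still be clinically negligible in magnitude.
    ```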

  18. Validation Analysis of the Shoal Groundwater Flow and Transport Model

    Energy Technology Data Exchange (ETDEWEB)

    A. Hassan; J. Chapman

    2008-11-01

    Environmental restoration at the Shoal underground nuclear test is following a process prescribed by a Federal Facility Agreement and Consent Order (FFACO) between the U.S. Department of Energy, the U.S. Department of Defense, and the State of Nevada. Characterization of the site included two stages of well drilling and testing in 1996 and 1999, and development and revision of numerical models of groundwater flow and radionuclide transport. Agreement on a contaminant boundary for the site and a corrective action plan was reached in 2006. Later that same year, three wells were installed for the purposes of model validation and site monitoring. The FFACO prescribes a five-year proof-of-concept period for demonstrating that the site groundwater model is capable of producing meaningful results with an acceptable level of uncertainty. The corrective action plan specifies a rigorous seven step validation process. The accepted groundwater model is evaluated using that process in light of the newly acquired data. The conceptual model of ground water flow for the Project Shoal Area considers groundwater flow through the fractured granite aquifer comprising the Sand Springs Range. Water enters the system by the infiltration of precipitation directly on the surface of the mountain range. Groundwater leaves the granite aquifer by flowing into alluvial deposits in the adjacent basins of Fourmile Flat and Fairview Valley. A groundwater divide is interpreted as coinciding with the western portion of the Sand Springs Range, west of the underground nuclear test, preventing flow from the test into Fourmile Flat. A very low conductivity shear zone east of the nuclear test roughly parallels the divide. The presence of these lateral boundaries, coupled with a regional discharge area to the northeast, is interpreted in the model as causing groundwater from the site to flow in a northeastward direction into Fairview Valley. Steady-state flow conditions are assumed given the absence of

  19. First approximations in avalanche model validations using seismic information

    Science.gov (United States)

    Roig Lafon, Pere; Suriñach, Emma; Bartelt, Perry; Pérez-Guillén, Cristina; Tapia, Mar; Sovilla, Betty

    2017-04-01

    Avalanche dynamics modelling is an essential tool for snow hazard management. Scenario-based numerical modelling provides quantitative arguments for decision-making. The software tool RAMMS (WSL Institute for Snow and Avalanche Research SLF) is one such tool, often used by government authorities and geotechnical offices. As avalanche models improve, the quality of the numerical results will depend increasingly on user experience in the specification of input (e.g. release and entrainment volumes, secondary releases, snow temperature and quality). New model developments must continue to be validated using real phenomena data to improve performance and reliability. The avalanche group at the University of Barcelona (RISKNAT - UB) has studied the seismic signals generated by avalanches since 1994. Presently, the group manages the seismic installation at SLF's Vallée de la Sionne experimental site (VDLS). At VDLS the recorded seismic signals can be correlated with other avalanche measurement techniques, including both advanced remote sensing methods (radars, videogrammetry) and obstacle-based sensors (pressure, capacitance, optical sender-reflector barriers). This comparison between different measurement techniques allows the group to address the question of whether seismic analysis can be used alone, on additional avalanche tracks, to gain insight into and validate numerical avalanche dynamics models in different terrain conditions. In this study, we aim to add the seismic data as an external record of the phenomena, able to validate RAMMS models. Seismic sensors are considerably easier and cheaper to install than other physical measuring tools, and are able to record data in all atmospheric conditions (e.g. bad weather, low light, or freezing conditions that make photography and other kinds of sensors unusable). With seismic signals, we record the temporal evolution of the inner and denser parts of the avalanche. We are able to recognize the approximate position

  20. ExEP yield modeling tool and validation test results

    Science.gov (United States)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul

    2017-09-01

    EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4 m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts, and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required interpretation by a user, the test revealed problems in the L2 halo orbit and the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.
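
A minimal sketch of the kind of physics-based unit test described above, using Python's `unittest` framework. The `integration_time` helper and its SNR model are invented for illustration and are not the actual EXOSIMS API; the point is that deterministic inputs (fixed count rates, like the known RV planets at quadrature) make the calculation verifiable to machine precision:

```python
import math
import unittest

# Hypothetical photometry helper in the spirit of the EXOSIMS test plan:
# integration time needed to reach a target SNR given planet and background
# count rates. Names and the SNR model are illustrative assumptions.
def integration_time(snr_target, rate_planet, rate_background):
    """Solve SNR = C_p*t / sqrt((C_p + C_b)*t) for t."""
    if rate_planet <= 0:
        raise ValueError("planet count rate must be positive")
    return snr_target ** 2 * (rate_planet + rate_background) / rate_planet ** 2

class TestIntegrationTime(unittest.TestCase):
    def test_known_value(self):
        # SNR 5 on a 10 ct/s planet over 15 ct/s background: 25*25/100 s
        self.assertAlmostEqual(integration_time(5.0, 10.0, 15.0), 6.25)

    def test_snr_recovered(self):
        # plugging t back into the SNR formula must return the target
        t = integration_time(7.0, 3.0, 1.0)
        self.assertAlmostEqual(3.0 * t / math.sqrt((3.0 + 1.0) * t), 7.0)

    def test_rejects_nonpositive_rate(self):
        with self.assertRaises(ValueError):
            integration_time(5.0, 0.0, 1.0)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestIntegrationTime)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```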

  1. Preventing patient absenteeism: validation of a predictive overbooking model.

    Science.gov (United States)

    Reid, Mark W; Cohen, Samuel; Wang, Hank; Kaung, Aung; Patel, Anish; Tashjian, Vartan; Williams, Demetrius L; Martinez, Bibiana; Spiegel, Brennan M R

    2015-12-01

    To develop a model that identifies patients at high risk for missing scheduled appointments ("no-shows" and cancellations) and to project the impact of predictive overbooking in a gastrointestinal endoscopy clinic-an exemplar resource-intensive environment with a high no-show rate. We retrospectively developed an algorithm that uses electronic health record (EHR) data to identify patients who do not show up to their appointments. Next, we prospectively validated the algorithm at a Veterans Administration healthcare network clinic. We constructed a multivariable logistic regression model that assigned a no-show risk score optimized by receiver operating characteristic curve analysis. Based on these scores, we created a calendar of projected open slots to offer to patients and compared the daily performance of predictive overbooking with fixed overbooking and typical "1 patient, 1 slot" scheduling. Data from 1392 patients identified several predictors of no-show, including previous absenteeism, comorbid disease burden, and current diagnoses of mood and substance use disorders. The model correctly classified most patients during the development (area under the curve [AUC] = 0.80) and validation phases (AUC = 0.75). Prospective testing in 1197 patients found that predictive overbooking averaged 0.51 unused appointments per day versus 6.18 for typical booking (difference = -5.67; 95% CI, -6.48 to -4.87; P < .0001). Predictive overbooking could have increased service utilization from 62% to 97% of capacity, with only rare clinic overflows. Information from EHRs can accurately predict whether patients will no-show. This method can be used to overbook appointments, thereby maximizing service utilization while staying within clinic capacity.
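
The two ingredients of the approach above, a no-show risk score evaluated by ROC AUC and an overbooking rule driven by predicted risk, can be sketched as follows. The scores, labels, and threshold are made-up illustrative values, tied scores are not handled, and the real study used a multivariable logistic regression rather than hand-set scores:

```python
def auc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity.
    labels: 1 = no-show, 0 = attended; scores: predicted no-show risk."""
    pairs = sorted(zip(scores, labels))
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    rank_sum = sum(i + 1 for i, (_, y) in enumerate(pairs) if y == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def overbook(slot_risks, threshold=0.5):
    """Offer one extra appointment in every slot whose predicted
    no-show probability exceeds the threshold."""
    return [1 + (r > threshold) for r in slot_risks]

# Toy evaluation: a model that ranks all no-shows above all attenders
# achieves AUC = 1.0 (the paper reports 0.75-0.80 on real data).
labels = [0, 0, 1, 0, 1, 1]
scores = [0.1, 0.2, 0.7, 0.3, 0.8, 0.6]
print(auc(labels, scores))            # perfect separation here -> 1.0
print(overbook([0.2, 0.9, 0.6], 0.5))  # -> [1, 2, 2]
```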

  2. Validating clustering of molecular dynamics simulations using polymer models

    Directory of Open Access Journals (Sweden)

    Phillips Joshua L

    2011-11-01

    Background: Molecular dynamics (MD) simulation is a powerful technique for sampling the meta-stable and transitional conformations of proteins and other biomolecules. Computational data clustering has emerged as a useful, automated technique for extracting conformational states from MD simulation data. Despite extensive application, relatively little work has been done to determine if the clustering algorithms are actually extracting useful information. A primary goal of this paper therefore is to provide such an understanding through a detailed analysis of data clustering applied to a series of increasingly complex biopolymer models. Results: We develop a novel series of models using basic polymer theory that have intuitive, clearly-defined dynamics and exhibit the essential properties that we are seeking to identify in MD simulations of real biomolecules. We then apply spectral clustering, an algorithm particularly well-suited for clustering polymer structures, to our models and MD simulations of several intrinsically disordered proteins. Clustering results for the polymer models provide clear evidence that the meta-stable and transitional conformations are detected by the algorithm. The results for the polymer models also help guide the analysis of the disordered protein simulations by comparing and contrasting the statistical properties of the extracted clusters. Conclusions: We have developed a framework for validating the performance and utility of clustering algorithms for studying molecular biopolymer simulations that utilizes several analytic and dynamic polymer models which exhibit well-behaved dynamics including: meta-stable states, transition states, helical structures, and stochastic dynamics. We show that spectral clustering is robust to anomalies introduced by structural alignment and that different structural classes of intrinsically disordered proteins can be reliably discriminated from the clustering results. To our
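
A toy illustration of the spectral-clustering step described above: two "conformational states" are generated as well-separated noisy point clouds, an RBF affinity matrix is built, and the sign of the Fiedler vector of the normalized graph Laplacian recovers the two meta-stable states. The data, kernel width, and cluster geometry are invented for this sketch and are far simpler than real MD conformations:

```python
import numpy as np

rng = np.random.default_rng(0)
state_a = rng.normal(loc=0.0, scale=0.3, size=(20, 3))  # meta-stable state A
state_b = rng.normal(loc=5.0, scale=0.3, size=(20, 3))  # meta-stable state B
X = np.vstack([state_a, state_b])

# RBF affinity between all pairs of "conformations"
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 2.0)

# Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}
d = W.sum(axis=1)
L = np.eye(len(X)) - W / np.sqrt(np.outer(d, d))

# The second-smallest eigenvector (Fiedler vector) changes sign
# between the two nearly disconnected clusters.
vals, vecs = np.linalg.eigh(L)
labels = (vecs[:, 1] > 0).astype(int)
print(labels)
```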

  3. General Potential-Current Model and Validation for Electrocoagulation

    International Nuclear Information System (INIS)

    Dubrawski, Kristian L.; Du, Codey; Mohseni, Madjid

    2014-01-01

    A model relating potential and current in continuous parallel plate iron electrocoagulation (EC) was developed for application in drinking water treatment. The general model can be applied to any EC parallel plate system relying only on geometric and tabulated input variables without the need of system-specific experimentally derived constants. For the theoretical model, the anode and cathode were vertically divided into n equipotential segments in a single pass, upflow, and adiabatic EC reactor. Potential and energy balances were simultaneously solved at each vertical segment, which included the contribution of ionic concentrations, solution temperature and conductivity, cathodic hydrogen flux, and gas/liquid ratio. We experimentally validated the numerical model with a vertical upflow EC reactor using a 24 cm height 99.99% pure iron anode divided into twelve 2 cm segments. Individual experimental currents from each segment were summed to determine total current, and compared with the theoretically derived value. Several key variables were studied to determine their impact on model accuracy: solute type, solute concentration, current density, flow rate, inter-electrode gap, and electrode surface condition. Model results were in good agreement with experimental values at cell potentials of 2-20 V (corresponding to a current density range of approximately 50-800 A/m²), with mean relative deviation of 9% for low flow rate, narrow electrode gap, polished electrodes, and 150 mg/L NaCl. Highest deviation occurred with a large electrode gap, unpolished electrodes, and Na₂SO₄ electrolyte, due to parasitic H₂O oxidation and less than unity current efficiency. This is the first general model which can be applied to any parallel plate EC system for accurate electrochemical voltage or current prediction
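
A heavily simplified sketch of the segmented-electrode idea above: the electrode pair is split into n equipotential vertical segments, a current balance is solved per segment, and segment currents are summed to the total cell current. Only the ohmic solution resistance is modelled here (the full model also includes overpotentials, temperature, hydrogen flux, and gas/liquid ratio), and all numbers are illustrative assumptions:

```python
def total_current(v_cell, e_eq, conductivity, gap, seg_area, n_segments):
    """Sum of per-segment currents i_k = (V_cell - E_eq) / R_k,
    with a purely ohmic R_k = gap / (conductivity * seg_area)."""
    driving = max(v_cell - e_eq, 0.0)            # volts available for ohmic drop
    r_segment = gap / (conductivity * seg_area)  # ohms per segment
    return sum(driving / r_segment for _ in range(n_segments))

# Twelve 2 cm x 5 cm segments, 0.5 cm gap, 0.2 S/m electrolyte, 5 V cell
i_total = total_current(v_cell=5.0, e_eq=1.5, conductivity=0.2,
                        gap=0.005, seg_area=0.02 * 0.05, n_segments=12)
print(round(i_total, 3))  # -> 1.68 (amperes)
```

With uniform segments this reduces to n times one segment's current; the paper's value is that the per-segment balances need not be uniform once local temperature, concentration, and gas fraction are included.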

  4. Validity of the Neuromuscular Recovery Scale: a measurement model approach.

    Science.gov (United States)

    Velozo, Craig; Moorhouse, Michael; Ardolino, Elizabeth; Lorenz, Doug; Suter, Sarah; Basso, D Michele; Behrman, Andrea L

    2015-08-01

    To determine how well the Neuromuscular Recovery Scale (NRS) items fit the Rasch, 1-parameter, partial-credit measurement model. Design: Confirmatory factor analysis (CFA) and principal components analysis (PCA) of residuals were used to determine dimensionality. The Rasch, 1-parameter, partial-credit rating scale model was used to determine rating scale structure, person/item fit, point-measure item correlations, item discrimination, and measurement precision. Setting: Seven NeuroRecovery Network clinical sites. Participants: Outpatients (N=188) with spinal cord injury. Interventions: Not applicable. Main Outcome Measure: NRS. Results: While the NRS met 1 of 3 CFA criteria, the PCA revealed that the Rasch measurement dimension explained 76.9% of the variance. Ten of 11 items and 91% of the patients fit the Rasch model, with 9 of 11 items showing high discrimination. Sixty-nine percent of the ratings met criteria. The items showed a logical item-difficulty order, with Stand retraining as the easiest item and Walking as the most challenging item. The NRS showed no ceiling or floor effects and separated the sample into almost 5 statistically distinct strata; individuals with an American Spinal Injury Association Impairment Scale (AIS) D classification showed the most ability, and those with an AIS A classification showed the least ability. Conclusions: Items not meeting the rating scale criteria appear to be related to the low frequency counts. The NRS met many of the Rasch model criteria for construct validity. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
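
For readers unfamiliar with the partial-credit model used above, the category probabilities for one item are a softmax over cumulative (ability minus step-difficulty) terms. The sketch below is a generic textbook form of the model, not the paper's calibration; the ability and threshold values are invented for the example:

```python
import math

def pcm_probabilities(theta, deltas):
    """Rasch partial-credit model: P(X = x) for x = 0..m, given person
    ability theta and item step difficulties deltas (delta_1..delta_m).
    Uses psi_0 = 0 and psi_x = sum_{k<=x} (theta - delta_k)."""
    psi = [0.0]
    for d in deltas:
        psi.append(psi[-1] + (theta - d))
    exps = [math.exp(p) for p in psi]
    z = sum(exps)
    return [e / z for e in exps]

# A 4-category item (3 steps) with invented thresholds
probs = pcm_probabilities(theta=0.5, deltas=[-1.0, 0.0, 1.5])
print([round(p, 3) for p in probs])
assert abs(sum(probs) - 1.0) < 1e-12
# Higher ability shifts probability mass toward higher ratings:
assert pcm_probabilities(2.0, [-1.0, 0.0, 1.5])[-1] > probs[-1]
```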

  5. Using of Structural Equation Modeling Techniques in Cognitive Levels Validation

    Directory of Open Access Journals (Sweden)

    Natalija Curkovic

    2012-10-01

    When constructing knowledge tests, cognitive level is usually one of the dimensions comprising the test specifications, with each item assigned to measure a particular level. Recently used taxonomies of cognitive levels most often represent some modification of the original Bloom's taxonomy. There are many concerns in the current literature about the existence of predefined cognitive levels. The aim of this article is to investigate whether structural equation modeling techniques can confirm the existence of different cognitive levels. For the purpose of the research, a Croatian final high-school Mathematics exam was used (N = 9626). Confirmatory factor analysis and structural regression modeling were used to test three different models. Structural equation modeling techniques did not support the existence of different cognitive levels in this case. There is more than one possible explanation for that finding. Some other techniques that take into account nonlinear behaviour of the items, as well as qualitative techniques, might be more useful for the purpose of cognitive levels validation. Furthermore, it seems that cognitive levels were not efficient descriptors of the items, and so improvements are needed in describing the cognitive skills measured by items.

  6. ADOPT: A Historically Validated Light Duty Vehicle Consumer Choice Model

    Energy Technology Data Exchange (ETDEWEB)

    Brooker, A.; Gonder, J.; Lopp, S.; Ward, J.

    2015-05-04

    The Automotive Deployment Option Projection Tool (ADOPT) is a light-duty vehicle consumer choice and stock model supported by the U.S. Department of Energy's Vehicle Technologies Office. It estimates technology improvement impacts on U.S. light-duty vehicle sales, petroleum use, and greenhouse gas emissions. ADOPT uses techniques from the multinomial logit method and the mixed logit method to estimate sales. Specifically, it estimates sales based on the weighted value of key attributes including vehicle price, fuel cost, acceleration, range, and usable volume. The average importance of several attributes changes nonlinearly across its range and changes with income. For several attributes, a distribution of importance around the average value is used to represent consumer heterogeneity. The majority of existing vehicle makes, models, and trims are included to fully represent the market. The Corporate Average Fuel Economy regulations are enforced. The sales feed into the ADOPT stock model, which captures key aspects for summing petroleum use and greenhouse gas emissions. This includes capturing the change in vehicle miles traveled by vehicle age, the creation of new model options based on the success of existing vehicles, new vehicle option introduction rate limits, and survival rates by vehicle age. ADOPT has been extensively validated with historical sales data. It matches in key dimensions including sales by fuel economy, acceleration, price, vehicle size class, and powertrain across multiple years. A graphical user interface provides easy and efficient use. It manages the inputs, simulation, and results.
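
The multinomial-logit core that ADOPT builds on can be sketched in a few lines: each vehicle gets a utility from weighted attributes, and predicted sales shares are the softmax of the utilities. The weights and attribute values below are illustrative assumptions, not ADOPT's calibration (which also uses nonlinear, income-dependent importances and mixed-logit heterogeneity):

```python
import math

def sales_shares(vehicles, weights):
    """Multinomial logit: share_i = exp(U_i) / sum_j exp(U_j),
    with U_i a weighted sum of vehicle attributes."""
    utilities = [sum(w * v[attr] for attr, w in weights.items())
                 for v in vehicles]
    exps = [math.exp(u) for u in utilities]
    z = sum(exps)
    return [e / z for e in exps]

# Invented attribute weights: higher price, fuel cost, and 0-60 time
# all reduce utility.
weights = {"price_k": -0.08, "fuel_cost_k": -0.3, "accel_s": -0.2}
vehicles = [
    {"price_k": 25, "fuel_cost_k": 1.5, "accel_s": 8.0},  # gasoline compact
    {"price_k": 32, "fuel_cost_k": 0.6, "accel_s": 7.0},  # battery electric
]
shares = sales_shares(vehicles, weights)
print([round(s, 3) for s in shares])
```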

  7. Validation of elastic cross section models for space radiation applications

    Energy Technology Data Exchange (ETDEWEB)

    Werneth, C.M., E-mail: charles.m.werneth@nasa.gov [NASA Langley Research Center (United States); Xu, X. [National Institute of Aerospace (United States); Norman, R.B. [NASA Langley Research Center (United States); Ford, W.P. [The University of Tennessee (United States); Maung, K.M. [The University of Southern Mississippi (United States)

    2017-02-01

    The space radiation field is composed of energetic particles that pose both acute and long-term risks for astronauts in low earth orbit and beyond. In order to estimate radiation risk to crew members, the fluence of particles and biological response to the radiation must be known at tissue sites. Given that the spectral fluence at the boundary of the shielding material is characterized, radiation transport algorithms may be used to find the fluence of particles inside the shield and body, and the radio-biological response is estimated from experiments and models. The fidelity of the radiation spectrum inside the shield and body depends on radiation transport algorithms and the accuracy of the nuclear cross sections. In a recent study, self-consistent nuclear models based on multiple scattering theory that include the option to study relativistic kinematics were developed for the prediction of nuclear cross sections for space radiation applications. The aim of the current work is to use uncertainty quantification to ascertain the validity of the models as compared to a nuclear reaction database and to identify components of the models that can be improved in future efforts.

  8. Reactor core modeling practice: Operational requirements, model characteristics, and model validation

    International Nuclear Information System (INIS)

    Zerbino, H.

    1997-01-01

    The physical models implemented in power plant simulators have greatly increased in performance and complexity in recent years. This process has been enabled by the ever increasing computing power available at affordable prices. This paper describes this process from several angles: first, the operational requirements which are most critical from the point of view of model performance, both for normal and off-normal operating conditions; a second section discusses core model characteristics in the light of the solutions implemented by Thomson Training and Simulation (TT&S) in several full-scope simulators recently built and delivered for Dutch, German, and French nuclear power plants; finally, we consider the model validation procedures, which are of course an integral part of model development, and which are becoming more and more severe as performance expectations increase. As a conclusion, it may be asserted that in the core modeling field, as in other areas, the general improvement in the quality of simulation codes has resulted in a fairly rapid convergence towards mainstream engineering-grade calculations. This is remarkable performance in view of the stringent real-time requirements which the simulation codes must satisfy, as well as the extremely wide range of operating conditions that they are called upon to cover with good accuracy. (author)

  9. Integrated Process Modeling-A Process Validation Life Cycle Companion.

    Science.gov (United States)

    Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph

    2017-10-17

    During the regulatory requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is furthermore demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications will be shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipation of out-of-specification (OOS) events, identification of critical process parameters, and risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
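
A minimal Monte Carlo sketch of the integrated-process-model idea above: two stacked unit operations each perturb a critical quality attribute, and the out-of-specification probability is the fraction of simulated lots beyond the specification limit. The unit-operation transfer functions, noise levels, and specification limit are all invented for illustration:

```python
import random

random.seed(42)

def fermentation(pp_temp):
    """Titer depends on a process parameter plus lot-to-lot noise
    (illustrative linear model)."""
    return 10.0 + 0.5 * (pp_temp - 37.0) + random.gauss(0.0, 0.4)

def purification(titer):
    """Impurity (the CQA) decreases with titer, with step-specific noise."""
    return max(0.0, 2.0 - 0.1 * titer + random.gauss(0.0, 0.15))

SPEC_LIMIT = 1.4  # impurity must stay below this
n = 10_000
oos = 0
for _ in range(n):
    pp_temp = random.gauss(37.0, 0.5)   # process-parameter variation
    impurity = purification(fermentation(pp_temp))  # chain unit operations
    oos += impurity > SPEC_LIMIT
print(f"estimated OOS rate: {oos / n:.3f}")
```

Because the unit operations are chained, parameter variation in the first step propagates through the second, which is exactly the interaction effect the IPM is meant to capture.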

  10. PIV validation of blood-heart valve leaflet interaction modelling.

    Science.gov (United States)

    Kaminsky, R; Dumont, K; Weber, H; Schroll, M; Verdonck, P

    2007-07-01

    The aim of this study was to validate the 2D computational fluid dynamics (CFD) results of a moving heart valve based on a fluid-structure interaction (FSI) algorithm with experimental measurements. Firstly, a pulsatile laminar flow through a monoleaflet valve model with a stiff leaflet was visualized by means of Particle Image Velocimetry (PIV). The inflow data sets were applied to a CFD simulation including blood-leaflet interaction. The measurement section with a fixed leaflet was enclosed into a standard mock loop in series with a Harvard Apparatus Pulsatile Blood Pump, a compliance chamber and a reservoir. Standard 2D PIV measurements were made at a frequency of 60 bpm. Average velocity magnitude results of 36 phase-locked measurements were evaluated at every 10 degrees of the pump cycle. For the CFD flow simulation, a commercially available package from Fluent Inc. was used in combination with in-house developed FSI code based on the Arbitrary Lagrangian-Eulerian (ALE) method. The CFD code was then applied to the leaflet to quantify the shear stress on it. Generally, the CFD results are in agreement with the PIV-evaluated data in major flow regions, thereby validating the FSI simulation of a monoleaflet valve with a flexible leaflet. The applicability of the new CFD code for quantifying the shear stress on a flexible leaflet is thus demonstrated.

  11. Uncertainty and validation. Effect of model complexity on uncertainty estimates

    International Nuclear Information System (INIS)

    Elert, M.

    1996-09-01

    deterministic case, and the uncertainty bands did not always overlap. This suggests that there are considerable model uncertainties present, which were not considered in this study. Concerning possible constraints in the application domain of different models, the results of this exercise suggest that if only the evolution of the root zone concentration is to be predicted, all of the studied models give comparable results. However, if also the flux to the groundwater is to be predicted, then a considerably increased amount of detail is needed concerning the model and the parameterization. This applies to the hydrological as well as the transport modelling. The difference in model predictions and the magnitude of uncertainty was quite small for some of the end-points predicted, while for others it could span many orders of magnitude. Of special importance were end-points where delay in the soil was involved, e.g. release to the groundwater. In such cases the influence of radioactive decay gave rise to strongly non-linear effects. The work in the subgroup has provided many valuable insights on the effects of model simplifications, e.g. discretization in the model, averaging of the time-varying input parameters and the assignment of uncertainties to parameters. The conclusions that have been drawn concerning these are primarily valid for the studied scenario. However, we believe that they are to a large extent also generally applicable. The subgroup has had many opportunities to study the pitfalls involved in model comparison. The intention was to provide a well-defined scenario for the subgroup, but despite several iterations misunderstandings and ambiguities remained. The participants have been forced to scrutinize their models to try to explain differences in the predictions and most, if not all, of the participants have improved their models as a result of this

  12. A magnetospheric specification model validation study: Geosynchronous electrons

    Science.gov (United States)

    Hilmer, R. V.; Ginet, G. P.

    2000-09-01

    The Rice University Magnetospheric Specification Model (MSM) is an operational space environment model of the inner and middle magnetosphere designed to specify charged particle fluxes up to 100 keV. Validation test data taken between January 1996 and June 1998 consist of electron fluxes measured by a charge control system (CCS) on a defense satellite communications system (DSCS) spacecraft. The CCS includes both electrostatic analyzers to measure the particle environment and surface potential monitors to track differential charging between various materials and vehicle ground. While typical RMS error analysis methods provide a sense of the model's overall abilities, they do not specifically address physical situations critical to operations, i.e., how well the model specifies when a high differential charging state is probable. In this validation study, differential charging states observed by DSCS are used to determine several threshold fluxes for the associated 20-50 keV electrons, and joint probability distributions are constructed to determine Hit, Miss, and False Alarm rates for the models. An MSM run covering the two and one-half year interval is performed using the minimum required input parameter set, consisting of only the magnetic activity index Kp, in order to statistically examine the model's seasonal and yearly performance. In addition, the relative merits of the input parameters, i.e., Kp, Dst, the equatorward boundary of diffuse aurora at midnight, cross-polar cap potential, solar wind density and velocity, and interplanetary magnetic field values, are evaluated as drivers of shorter model runs of 100 d each. In an effort to develop operational tools that can address spacecraft charging issues, we also identify temporal features in the model output that can be directly linked to input parameter variations and model boundary conditions. All model output is interpreted using the full three-dimensional, dipole tilt-dependent algorithms currently in
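
The Hit/Miss/False Alarm evaluation described above reduces to a contingency table over paired event series: observed charging states versus model flux exceeding a threshold. A sketch with invented boolean event data (not DSCS measurements):

```python
def skill_rates(observed, predicted):
    """Hit rate = hits / observed events;
    false-alarm rate = false alarms / all alarms issued."""
    hits = sum(o and p for o, p in zip(observed, predicted))
    misses = sum(o and not p for o, p in zip(observed, predicted))
    false_alarms = sum((not o) and p for o, p in zip(observed, predicted))
    n_events = hits + misses
    n_alarms = hits + false_alarms
    hit_rate = hits / n_events if n_events else 0.0
    far = false_alarms / n_alarms if n_alarms else 0.0
    return hit_rate, far

observed  = [True, True, False, False, True, False]  # charging observed
predicted = [True, False, False, True, True, False]  # model flux > threshold
hit_rate, false_alarm_rate = skill_rates(observed, predicted)
print(hit_rate, false_alarm_rate)  # 2 of 3 events hit; 1 of 3 alarms false
```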

  13. Global Sensitivity Analysis of Environmental Models: Convergence, Robustness and Validation

    Science.gov (United States)

    Sarrazin, Fanny; Pianosi, Francesca; Khorashadi Zadeh, Farkhondeh; Van Griensven, Ann; Wagener, Thorsten

    2015-04-01

    Global Sensitivity Analysis aims to characterize the impact that variations in model input factors (e.g. the parameters) have on the model output (e.g. simulated streamflow). In sampling-based Global Sensitivity Analysis, the sample size has to be chosen carefully in order to obtain reliable sensitivity estimates while spending computational resources efficiently. Furthermore, insensitive parameters are typically identified through the definition of a screening threshold: the theoretical value of their sensitivity index is zero, but in a sampling-based framework they regularly take non-zero values. However, there is little guidance available for these two steps in environmental modelling. The objective of the present study is to support modellers in making appropriate choices, regarding both sample size and screening threshold, so that a robust sensitivity analysis can be implemented. We performed sensitivity analysis for the parameters of three hydrological models with increasing level of complexity (Hymod, HBV and SWAT), and tested three widely used sensitivity analysis methods (Elementary Effect Test or method of Morris, Regional Sensitivity Analysis, and Variance-Based Sensitivity Analysis). We defined criteria based on a bootstrap approach to assess three different types of convergence: the convergence of the value of the sensitivity indices, of the ranking (the ordering among the parameters) and of the screening (the identification of the insensitive parameters). We investigated the screening threshold through the definition of a validation procedure. The results showed that full convergence of the value of the sensitivity indices is not necessarily needed to rank or to screen the model input factors. Furthermore, typical values of the sample sizes that are reported in the literature can be well below the sample sizes that actually ensure convergence of ranking and screening.
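
The bootstrap convergence check described above can be sketched as follows: sensitivity indices are re-estimated on resampled model evaluations, and ranking convergence is declared when the parameter ordering is identical across all bootstrap replicates. The toy "model" and its sensitivity index are invented stand-ins for the hydrological models and methods used in the study:

```python
import random
import statistics

random.seed(1)

def sensitivity_indices(samples):
    """Toy index for a 3-input model: the variance contributed by each
    input, estimated as the variance of x_i * effect_i over the sample."""
    effects = (3.0, 1.0, 0.2)  # input 0 matters most, input 2 least
    return [statistics.pvariance([x[i] * effects[i] for x in samples])
            for i in range(3)]

# 400 uniform random input triplets stand in for model evaluations
samples = [tuple(random.random() for _ in range(3)) for _ in range(400)]

rankings = set()
for _ in range(50):  # bootstrap replicates (resample with replacement)
    boot = [random.choice(samples) for _ in samples]
    s = sensitivity_indices(boot)
    rankings.add(tuple(sorted(range(3), key=lambda i: -s[i])))

converged = len(rankings) == 1  # one ordering across all replicates
print(converged, rankings)
```

The same machinery extends to screening convergence by replacing the ranking tuple with the set of parameters whose bootstrap index stays below the screening threshold.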

  14. Subsurface Biogeochemical Research FY11 Second Quarter Performance Measure

    Energy Technology Data Exchange (ETDEWEB)

    Scheibe, Timothy D.

    2011-03-31

    The Subsurface Biogeochemical Research (SBR) Long Term Measure for 2011 under the Performance Assessment Rating Tool (PART) is to "Refine subsurface transport models by developing computational methods to link important processes impacting contaminant transport at smaller scales to the field scale." The second quarter performance measure is to "Provide a report on computational methods linking genome-enabled understanding of microbial metabolism with reactive transport models to describe processes impacting contaminant transport in the subsurface." Microorganisms such as bacteria are by definition small (typically on the order of a micron in size), and their behavior is controlled by their local biogeochemical environment (typically within a single pore or a biofilm on a grain surface, on the order of tens of microns in size). However, their metabolic activity exerts strong influence on the transport and fate of groundwater contaminants of significant concern at DOE sites, in contaminant plumes with spatial extents of meters to kilometers. This report describes progress and key findings from research aimed at integrating models of microbial metabolism based on genomic information (small scale) with models of contaminant fate and transport in aquifers (field scale).

  15. Numerical modeling and experimental validation of thermoplastic composites induction welding

    Science.gov (United States)

    Palmieri, Barbara; Nele, Luigi; Galise, Francesco

    2018-05-01

    In this work, a numerical simulation and an experimental test of the induction welding of continuous fibre-reinforced thermoplastic composites (CFRTPCs) are presented. The thermoplastic polyamide 66 (PA66) with carbon fiber fabric was used. Using dedicated software (JMag Designer), the influence of the fundamental process parameters such as temperature, current, and holding time was investigated. In order to validate the results of the simulations, and therefore the numerical model used, experimental tests were carried out, and the temperature values measured during the tests with an optical pyrometer were compared with those provided by the numerical simulation. The mechanical properties of the welded joints were evaluated by single lap shear tests.

  16. Validation of the Osteopenia Sheep Model for Orthopaedic Biomaterial Research

    DEFF Research Database (Denmark)

    Ding, Ming

    2009-01-01

    This study aimed to validate a glucocorticoid-induced osteopenia sheep model for orthopaedic implant and biomaterial research. We hypothesized that a 7-month GC treatment together with restricted diet but without OVX would induce osteopenia. Materials and Methods: Eighteen ... for 7 months. The sheep were housed outdoors in paddocks, and received a restricted diet with low calcium and phosphorus (0.55% calcium and 0.35% phosphorus) and hay. After sacrifice, cancellous bone specimens from the 5th lumbar vertebra, bilateral distal femur, and bilateral proximal tibia, and cortical ... In conclusion, after 7 months of GC treatments with restricted diet, the microarchitectural characteristics, mechanical competence, and mineralization of the bone tissues ... resemble osteoporosis in humans. This suggests that a prolonged administration of GC is needed for a long-term observation to keep osteopenic bone.

  17. Use of Synchronized Phasor Measurements for Model Validation in ERCOT

    Science.gov (United States)

    Nuthalapati, Sarma; Chen, Jian; Shrestha, Prakash; Huang, Shun-Hsien; Adams, John; Obadina, Diran; Mortensen, Tim; Blevins, Bill

    2013-05-01

    This paper discusses experiences in the use of synchronized phasor measurement technology in the Electric Reliability Council of Texas (ERCOT) interconnection, USA. Implementation of synchronized phasor measurement technology in the region is a collaborative effort involving ERCOT, ONCOR, AEP, SHARYLAND, EPG, CCET, and UT-Arlington. As several phasor measurement units (PMUs) have been installed in the ERCOT grid in recent years, phasor data at a resolution of 30 samples per second are being used to monitor power system status and record system events. Post-event analyses using recorded phasor data have successfully verified ERCOT dynamic stability simulation studies. The real-time monitoring software "RTDMS"® enables ERCOT to analyze small-signal stability conditions by monitoring phase angles and oscillations. The recorded phasor data enable ERCOT to validate the existing dynamic models of conventional and/or wind generators.

  18. Utilizing Chamber Data for Developing and Validating Climate Change Models

    Science.gov (United States)

    Monje, Oscar

    2012-01-01

    Controlled environment chambers (e.g. growth chambers, SPAR chambers, or open-top chambers) are useful for measuring plant ecosystem responses to climatic variables and CO2 that affect plant water relations. However, data from chambers have been found to overestimate the responses of carbon fluxes to CO2 enrichment. Chamber data may be confounded by numerous artifacts (e.g. sidelighting, edge effects, increased temperature and VPD), which limits what can be measured accurately. Chambers can be used to measure canopy-level energy balance under controlled conditions, and plant transpiration responses to CO2 concentration can be elucidated. However, these measurements cannot be used directly in model development or validation. The response of stomatal conductance to CO2 will be the same as in the field, but the measured response must be recalculated to account for differences in aerodynamic conductance, temperature and VPD between the chamber and the field.
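
    The recalculation described above, propagating a chamber-measured stomatal response through a different aerodynamic conductance, can be illustrated with a minimal series-conductance sketch (all conductance values below are hypothetical):

```python
def total_conductance(g_s, g_a):
    """Series combination of stomatal (g_s) and aerodynamic (g_a)
    conductance, mol m-2 s-1: 1/g = 1/g_s + 1/g_a."""
    return 1.0 / (1.0 / g_s + 1.0 / g_a)

# Hypothetical values: chambers are well stirred (high g_a), the field is not.
g_s = 0.3            # stomatal conductance, assumed equal in chamber and field
g_a_chamber = 5.0    # stirred-chamber boundary-layer conductance
g_a_field = 1.0      # field canopy aerodynamic conductance

g_chamber = total_conductance(g_s, g_a_chamber)
g_field = total_conductance(g_s, g_a_field)

# The same stomatal response yields different canopy-scale conductances,
# so chamber fluxes must be rescaled before use in field-scale models.
print(round(g_chamber, 3), round(g_field, 3))
```

    Because the chamber is well stirred, the same stomatal behaviour produces a larger canopy-scale conductance there than in the field, which is why chamber flux responses cannot enter a field-scale model unadjusted.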

  19. Vehicle modeling and duty cycle analysis to validate technology feasibility

    Energy Technology Data Exchange (ETDEWEB)

    Castonguay, S. [National Centre for Advanced Transportation, Saint-Jerome, PQ (Canada)

    2010-07-01

    The National Centre for Advanced Transportation (CNTA) is a non-profit organization with a board consisting of representatives from the transportation industry, public service and public transit organizations, research and teaching institutions, and from municipal and economic development organizations. The objectives of the CNTA are to accelerate the introduction of electric and hybrid vehicles; act as a catalyst in projects; assist in increasing Canadian technology assets; initiate and support electric vehicle conversion projects; increase Canadian business for electric vehicles, hybrid vehicles, and plug-in electric vehicles; and provide a cost-effective solution and aggressive payback for road/off-road vehicles. This presentation provided an overview of the objectives and services of the CNTA. It discussed various road and off-road vehicles, duty cycle and technology of electric vehicles. Specific topics related to the technology were discussed, including configuration; controls and interface; efficiency maps; models and simulation; validation; and support. figs.

  20. Ecohydrological Interfaces as Dynamic Hotspots of Biogeochemical Cycling

    Science.gov (United States)

    Krause, Stefan; Lewandowski, Joerg; Hannah, David; McDonald, Karlie; Folegot, Silvia; Baranov, Victor

    2016-04-01

    Ecohydrological interfaces represent the boundaries between water-dependent ecosystems and can substantially alter the fluxes of energy and matter. There is still a critical gap in understanding the organisational principles of the drivers and controls of spatially and temporally variable ecohydrological interface functions. This knowledge gap limits our capacity to efficiently quantify, predict and manage the services provided by complex ecosystems. Many ecohydrological interfaces are characterized by step changes in microbial metabolic activity, steep redox gradients and often even thermodynamic phase shifts, for instance at the interfaces between atmosphere and water or between soil matrix and macropores. This paper integrates point-scale laboratory microcosm experiments with reach- and subcatchment-scale tracer experiments and numerical modeling studies to elaborate similarities in the drivers and controls that constitute the enhanced biogeochemical activity of different types of ecohydrological interfaces across a range of spatial and temporal scales. We combine smart metabolic activity tracers to quantify the impact of bioturbating benthic fauna on ecosystem respiration and oxygen consumption, and investigate at larger scale how microbial metabolic activity and carbon turnover at the water-sediment interface are controlled by sediment physical and chemical properties as well as water temperatures. Numerical modeling confirmed that experimentally identified hotspots of streambed biogeochemical cycling were controlled by patterns of physical properties such as hydraulic conductivities or the bioavailability of organic matter, which affect residence time distributions and hence reaction times. In contrast to previous research, our investigations thus confirmed that small-scale variability of physical and chemical interface properties had a major impact on biogeochemical processing at the investigated ecohydrological interfaces.

  1. Validation of the dermal exposure model in ECETOC TRA.

    Science.gov (United States)

    Marquart, Hans; Franken, Remy; Goede, Henk; Fransman, Wouter; Schinkel, Jody

    2017-08-01

    The ECETOC TRA model (presently version 3.1) is often used to estimate worker inhalation and dermal exposure in regulatory risk assessment. The dermal model in ECETOC TRA has not yet been validated by comparison with independent measured exposure levels. This was the goal of the present study. Measured exposure levels and relevant contextual information were gathered via literature search, websites of relevant occupational health institutes and direct requests for data to industry. Exposure data were clustered in so-called exposure cases, which are sets of data from one data source that are expected to have the same values for input parameters in the ECETOC TRA dermal exposure model. For each exposure case, the 75th percentile of measured values was calculated, because the model intends to estimate these values. The input values for the parameters in ECETOC TRA were assigned by an expert elicitation and consensus building process, based on descriptions of relevant contextual information. From more than 35 data sources, 106 useful exposure cases were derived, which were used for direct comparison with the model estimates. The exposure cases covered a large part of the ECETOC TRA dermal exposure model. The model explained 37% of the variance in the 75th percentiles of measured values. In around 80% of the exposure cases, the model estimate was higher than the 75th percentile of measured values. In the remaining exposure cases, the model estimate may not be sufficiently conservative. The model was shown to have a clear bias towards (severe) overestimation of dermal exposure at low measured exposure values, while all cases of apparent underestimation by the ECETOC TRA dermal exposure model occurred at high measured exposure values. This can be partly explained by a built-in bias in the effect of concentration of substance in product used, duration of exposure and the use of protective gloves in the model.
The effect of protective gloves was calculated to be on average a
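
    The headline statistics above (the per-case 75th percentile of measured values, and the fraction of cases the model bounds) reduce to a simple computation; the exposure cases and numbers below are invented for illustration:

```python
import numpy as np

# Hypothetical exposure cases: measured dermal exposure values grouped
# per case, each with a single model estimate to compare against.
cases = {
    "caseA": {"measured": [0.1, 0.2, 0.4, 0.8], "model": 1.0},
    "caseB": {"measured": [2.0, 5.0, 9.0, 12.0], "model": 6.0},
    "caseC": {"measured": [0.01, 0.02, 0.05], "model": 0.5},
}

overestimated = 0
for name, case in cases.items():
    p75 = np.percentile(case["measured"], 75)  # the quantity TRA aims to bound
    if case["model"] >= p75:
        overestimated += 1
    print(name, round(float(p75), 3), case["model"] >= p75)

print(f"{overestimated}/{len(cases)} cases bounded by the model")
```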

  2. Neuroinflammatory targets and treatments for epilepsy validated in experimental models.

    Science.gov (United States)

    Aronica, Eleonora; Bauer, Sebastian; Bozzi, Yuri; Caleo, Matteo; Dingledine, Raymond; Gorter, Jan A; Henshall, David C; Kaufer, Daniela; Koh, Sookyong; Löscher, Wolfgang; Louboutin, Jean-Pierre; Mishto, Michele; Norwood, Braxton A; Palma, Eleonora; Poulter, Michael O; Terrone, Gaetano; Vezzani, Annamaria; Kaminski, Rafal M

    2017-07-01

    A large body of evidence that has accumulated over the past decade strongly supports the role of inflammation in the pathophysiology of human epilepsy. Specific inflammatory molecules and pathways have been identified that influence various pathologic outcomes in different experimental models of epilepsy. Most importantly, the same inflammatory pathways have also been found in surgically resected brain tissue from patients with treatment-resistant epilepsy. New antiseizure therapies may be derived from these novel potential targets. An essential and crucial question is whether targeting these molecules and pathways may result in anti-ictogenesis, antiepileptogenesis, and/or disease-modification effects. Therefore, preclinical testing in models mimicking relevant aspects of epileptogenesis is needed to guide integrated experimental and clinical trial designs. We discuss the most recent preclinical proof-of-concept studies validating a number of therapeutic approaches against inflammatory mechanisms in animal models that could represent novel avenues for drug development in epilepsy. Finally, we suggest future directions to accelerate preclinical to clinical translation of these recent discoveries. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.

  3. Modelling and validation of Proton exchange membrane fuel cell (PEMFC)

    Science.gov (United States)

    Mohiuddin, A. K. M.; Basran, N.; Khan, A. A.

    2018-01-01

    This paper is the outcome of a small-scale fuel cell project. A fuel cell is an electrochemical device that converts energy from a chemical reaction into electrical work. The Proton Exchange Membrane Fuel Cell (PEMFC) is one type of fuel cell; it is efficient, operates at low temperature, and offers fast start-up capability and high energy density. In this study, a mathematical model of a 1.2 W PEMFC is developed and simulated using MATLAB. The model describes PEMFC behaviour under steady-state conditions and determines the polarization curve, the power generated, and the efficiency of the fuel cell. Simulation results were validated by comparison with experimental results obtained from testing a single PEMFC with a 3 V motor. The performance of the experimental PEMFC is slightly lower than that of the simulated PEMFC, but both results were found to be in good agreement. Experiments on hydrogen flow rate have also been conducted to obtain the amount of hydrogen consumed to produce electrical work in the PEMFC.
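
    The polarization curve such a steady-state model produces is commonly written as the open-circuit voltage minus activation, ohmic and concentration losses. A minimal sketch with illustrative parameters (not fitted to the 1.2 W cell in the paper):

```python
import math

def cell_voltage(i, E0=1.2, A=0.06, i0=1e-4, R=0.2, B=0.05, i_lim=1.4):
    """Steady-state polarization curve V(i) for a single PEM cell:
    open-circuit voltage minus activation, ohmic and concentration losses.
    All parameter values are illustrative assumptions."""
    v_act = A * math.log(i / i0)              # Tafel activation loss
    v_ohm = i * R                             # membrane/contact resistance loss
    v_conc = -B * math.log(1.0 - i / i_lim)   # mass-transport loss
    return E0 - v_act - v_ohm - v_conc

for i in (0.01, 0.1, 0.5, 1.0):               # current density, A cm-2
    v = cell_voltage(i)
    print(f"i = {i:.2f} A/cm2  V = {v:.3f} V  P = {i * v:.3f} W/cm2")
```

    Sweeping the current density and recording V and i*V reproduces the polarization and power curves the abstract describes.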

  4. Tyre tread-block friction: modelling, simulation and experimental validation

    Science.gov (United States)

    Wallaschek, Jörg; Wies, Burkard

    2013-07-01

    Pneumatic tyres have been used in vehicles since the beginning of the last century. They generate braking and steering forces for bicycles, motor cycles, cars, busses, trucks, agricultural vehicles and aircraft. These forces are generated in the usually very small contact area between tyre and road, and their performance characteristics are of eminent importance for safety and comfort. Much research has been devoted to optimising tyre design with respect to footprint pressure and friction. In this context, the development of virtual tyre prototypes, that is, simulation models for the tyre, has grown into a science in its own right. While the modelling of the structural dynamics of the tyre has reached a very advanced level, which makes it possible to take into account effects like the rate-independent inelasticity of filled elastomers or the transient 3D deformations of the ply-reinforced tread, shoulder and sidewalls, little is known about the friction between tread-block elements and road. This is particularly obvious when snow, ice, water or a third-body layer is present in the tyre-road contact. In the present paper, we give a survey of the present state of knowledge in the modelling, simulation and experimental validation of tyre tread-block friction processes. We concentrate on experimental techniques.

  5. Validity of scale modeling for large deformations in shipping containers

    International Nuclear Information System (INIS)

    Burian, R.J.; Black, W.E.; Lawrence, A.A.; Balmert, M.E.

    1979-01-01

    The principal overall objective of this phase of the continuing program for DOE/ECT is to evaluate the validity of applying scaling relationships to accurately assess the response of unprotected model shipping containers to severe impact conditions, specifically free fall from heights up to 140 ft onto a hard surface in several orientations considered most likely to produce severe damage to the containers. The objective was achieved by studying the following with three sizes of model casks subjected to the various impact conditions: (1) impact rebound response of the containers; (2) structural damage and deformation modes; (3) effect on the containment; (4) changes in shielding effectiveness; (5) approximate free-fall threshold height for various orientations at which excessive damage occurs; (6) the impact orientation(s) that tend to produce the most severe damage; and (7) vulnerable aspects of the casks which should be examined. To meet the objective, the tests were intentionally designed to produce extreme structural damage to the cask models. In addition to the principal objective, this phase of the program had the secondary objectives of establishing a scientific data base for assessing the safety and environmental control provided by DOE nuclear shipping containers under impact conditions, and providing experimental data for verification and correlation with dynamic-structural-analysis computer codes being developed by the Los Alamos Scientific Laboratory for DOE/ECT
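
    Replica scaling of the kind evaluated here (geometrically similar casks of the same material dropped from the same height) implies fixed relationships between model- and full-scale response. A hedged sketch, with the cask mass and scale chosen purely for illustration:

```python
def replica_scale(scale, drop_height_m, full_mass_kg):
    """Replica (same-material, same-drop-height) scaling for cask drop tests.
    For a model at geometric scale s: impact velocity is unchanged, mass
    scales with s**3, impact duration with s, and peak deceleration with
    1/s, while stresses and strains are nominally identical."""
    g = 9.81
    v = (2.0 * g * drop_height_m) ** 0.5   # free-fall impact velocity
    return {
        "impact_velocity_mps": v,
        "model_mass_kg": full_mass_kg * scale**3,
        "time_scale": scale,
        "acceleration_scale": 1.0 / scale,
    }

# Hypothetical quarter-scale model of a 10-tonne cask dropped from 140 ft.
print(replica_scale(0.25, 140 * 0.3048, 10_000))
```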

  6. Validation of an Acoustic Impedance Prediction Model for Skewed Resonators

    Science.gov (United States)

    Howerton, Brian M.; Parrott, Tony L.

    2009-01-01

    An impedance prediction model was validated experimentally to determine the composite impedance of a series of high-aspect-ratio slot resonators incorporating channel skew and sharp bends. Such structures are useful for packaging acoustic liners into constrained spaces for turbofan noise control applications. A formulation of the Zwikker-Kosten Transmission Line (ZKTL) model, incorporating the Richards correction for rectangular channels, is used to calculate the composite normalized impedance of a series of six multi-slot resonator arrays with constant channel length. Experimentally, acoustic data were acquired in the NASA Langley Normal Incidence Tube over the frequency range of 500 to 3500 Hz at 120 and 140 dB OASPL. Normalized impedance was reduced using the Two-Microphone Method for the various combinations of channel skew and sharp 90° and 180° bends. Results show that the presence of skew and/or sharp bends does not significantly alter the impedance of a slot resonator as compared to a straight resonator of the same total channel length. ZKTL predicts the impedance of such resonators very well over the frequency range of interest. The model can be used to design arrays of slot resonators that can be packaged into complex geometries heretofore unsuitable for effective acoustic treatment.
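
    The Two-Microphone Method mentioned above reduces the transfer function between two microphone positions to a normalized surface impedance. A minimal sketch (the tube geometry and the synthetic check values are assumptions, not the NASA rig's):

```python
import numpy as np

def normal_incidence_impedance(H12, f, s, x1, c=343.0):
    """Two-Microphone Method: recover the normalized surface impedance
    z/(rho*c) from the transfer function H12 = p(mic2)/p(mic1).
    s is the mic spacing (m); x1 the distance from sample to the farther
    mic (m); mic 2 sits at x1 - s."""
    k = 2.0 * np.pi * f / c
    R = (H12 - np.exp(-1j * k * s)) / (np.exp(1j * k * s) - H12)
    R = R * np.exp(2j * k * x1)          # refer reflection to sample surface
    return (1.0 + R) / (1.0 - R)

# Synthetic check: build H12 for a known reflection coefficient and verify
# that the reduction recovers the matching impedance.
f, s, x1 = 1000.0, 0.02, 0.10
k = 2.0 * np.pi * f / 343.0
R_true = 0.5 * np.exp(0.3j)
p = lambda x: np.exp(1j * k * x) + R_true * np.exp(-1j * k * x)
H12 = p(x1 - s) / p(x1)
z = normal_incidence_impedance(H12, f, s, x1)
print(np.round(z, 4), np.round((1.0 + R_true) / (1.0 - R_true), 4))
```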

  7. Biogeochemical provinces in the global ocean based on phytoplankton growth limitation

    Science.gov (United States)

    Hashioka, T.; Hirata, T.; Aita, M. N.; Chiba, S.

    2016-02-01

    The biogeochemical province is a useful concept for the comprehensive understanding of regional differences in marine ecosystems. Various biogeochemical provinces for the lower-trophic-level ecosystem have been proposed using similarity-based classifications of the seasonal variation of chl-a concentration, typified by Longhurst (1995, 2006). Such categorizations capture the regional differences in the seasonality of total phytoplankton well. However, the underlying biogeochemical mechanisms that characterize the province boundaries are not clear: the dominant phytoplankton group differs among regions and seasons, and physiological characteristics differ significantly among groups. Recently, new kinds of biogeochemical information have become available. One is the estimation of phytoplankton community structure from satellite observation, which identifies the key phytoplankton type in each region. Another is the estimation, from modeling studies, of the factors limiting phytoplankton growth (e.g., nutrients, temperature, light) in each region. In this study, we propose new biogeochemical provinces defined as combinations of the dominant phytoplankton (i.e., diatoms, nano-, pico-phytoplankton, or coexistence of two/three types) and their growth limitation factors (we focus in particular on nutrient limitation: N, P, Si or Fe). With this combination, we classified the global ocean into 23 biogeochemical provinces. The result suggests that even if the same type of phytoplankton dominates, the background mechanism can differ among regions; conversely, even if regions are geographically separate, the background mechanism can be similar. This is important for understanding how regions and boundaries respond to environmental change. These biogeochemical provinces are useful for identifying key areas for future observation.
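
    In effect, each proposed province is a label over the pair (dominant phytoplankton type, limiting nutrient). A toy classifier showing the combination (labels hypothetical; this is not the paper's actual 23-province scheme):

```python
def classify(dominant, limiter):
    """Assign a province label from the pair (dominant phytoplankton
    group, growth-limiting nutrient); vocabulary is illustrative."""
    groups = {"diatom", "nano", "pico", "mixed"}
    nutrients = {"N", "P", "Si", "Fe"}
    if dominant not in groups or limiter not in nutrients:
        raise ValueError("unknown group or limiting nutrient")
    return f"{dominant}-{limiter}"

# Two geographically distant grid cells can fall in the same province:
print(classify("diatom", "Fe"))  # e.g. iron-limited diatom regimes
print(classify("pico", "N"))     # e.g. nitrogen-limited oligotrophic gyres
```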

  8. Multiscale Investigation on Biofilm Distribution and Its Impact on Macroscopic Biogeochemical Reaction Rates

    Science.gov (United States)

    Yan, Zhifeng; Liu, Chongxuan; Liu, Yuanyuan; Bailey, Vanessa L.

    2017-11-01

    Biofilms are critical locations for biogeochemical reactions in the subsurface environment. The occurrence and distribution of biofilms at microscale as well as their impacts on macroscopic biogeochemical reaction rates are still poorly understood. This paper investigated the formation and distributions of biofilms in heterogeneous sediments using multiscale models and evaluated the effects of biofilm heterogeneity on local and macroscopic biogeochemical reaction rates. Sediment pore structures derived from X-ray computed tomography were used to simulate the microscale flow dynamics and biofilm distribution in the sediment column. The response of biofilm formation and distribution to the variations in hydraulic and chemical properties was first examined. One representative biofilm distribution was then utilized to evaluate its effects on macroscopic reaction rates using nitrate reduction as an example. The results revealed that microorganisms primarily grew on the surfaces of grains and aggregates near preferential flow paths where both electron donor and acceptor were readily accessible, leading to the heterogeneous distribution of biofilms in the sediments. The heterogeneous biofilm distribution decreased the macroscopic rate of biogeochemical reactions as compared with those in homogeneous cases. Operationally considering the heterogeneous biofilm distribution in macroscopic reactive transport models such as using dual porosity domain concept can significantly improve the prediction of biogeochemical reaction rates. Overall, this study provided important insights into the biofilm formation and distribution in soils and sediments as well as their impacts on the macroscopic manifestation of reaction rates.

  9. Development and validation of models for bubble coalescence and breakup

    Energy Technology Data Exchange (ETDEWEB)

    Liao, Yiaxiang

    2013-10-08

    A generalized model for bubble coalescence and breakup has been developed, which is based on a comprehensive survey of existing theories and models. One important feature of the model is that all important mechanisms leading to bubble coalescence and breakup in a turbulent gas-liquid flow are considered. The new model is tested extensively in a 1D Test Solver and a 3D CFD code ANSYS CFX for the case of vertical gas-liquid pipe flow under adiabatic conditions, respectively. Two kinds of extensions of the standard multi-fluid model, i.e. the discrete population model and the inhomogeneous MUSIG (multiple-size group) model, are available in the two solvers, respectively. These extensions with suitable closure models such as those for coalescence and breakup are able to predict the evolution of bubble size distribution in dispersed flows and to overcome the mono-dispersed flow limitation of the standard multi-fluid model. For the validation of the model the high quality database of the TOPFLOW L12 experiments for air-water flow in a vertical pipe was employed. A wide range of test points, which cover the bubbly flow, turbulent-churn flow as well as the transition regime, is involved in the simulations. The comparison between the simulated results such as bubble size distribution, gas velocity and volume fraction and the measured ones indicates a generally good agreement for all selected test points. As the superficial gas velocity increases, bubble size distribution evolves via coalescence dominant regimes first, then breakup-dominant regimes and finally turns into a bimodal distribution. The tendency of the evolution is well reproduced by the model. However, the tendency is almost always overestimated, i.e. too much coalescence in the coalescence dominant case while too much breakup in breakup dominant ones. The reason of this problem is discussed by studying the contribution of each coalescence and breakup mechanism at different test points. 
The redistribution of the
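
    The multiple-size-group (MUSIG) idea referred to above can be illustrated with a toy discrete population balance in which size groups exchange number density through coalescence and breakup while total gas volume is conserved. All rates and group sizes below are invented for illustration:

```python
import numpy as np

# Three bubble size groups with illustrative constant rate coefficients.
# Coalescence moves number density toward larger groups, breakup the
# other way; both processes conserve total gas volume dot(n, v).
v = np.array([1.0, 2.0, 4.0])      # group volumes (arbitrary units)
n = np.array([100.0, 10.0, 1.0])   # number densities
c, b = 1e-3, 5e-2                  # coalescence / breakup rate coefficients

dt, steps = 0.01, 1000
for _ in range(steps):
    coal01 = c * n[0] * n[0]       # two group-0 bubbles -> one group-1
    coal12 = c * n[1] * n[1]       # two group-1 bubbles -> one group-2
    brk1 = b * n[1]                # group-1 bubble -> two group-0
    brk2 = b * n[2]                # group-2 bubble -> two group-1
    dn0 = -2 * coal01 + 2 * brk1
    dn1 = coal01 - 2 * coal12 - brk1 + 2 * brk2
    dn2 = coal12 - brk2
    n += dt * np.array([dn0, dn1, dn2])

print(np.round(n, 2), round(float(np.dot(n, v)), 6))  # total volume conserved
```

    Tuning c versus b shifts the equilibrium between coalescence- and breakup-dominated size distributions, which is the qualitative behaviour the abstract describes.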

  10. Development and validation of models for bubble coalescence and breakup

    International Nuclear Information System (INIS)

    Liao, Yiaxiang

    2013-01-01

    A generalized model for bubble coalescence and breakup has been developed, which is based on a comprehensive survey of existing theories and models. One important feature of the model is that all important mechanisms leading to bubble coalescence and breakup in a turbulent gas-liquid flow are considered. The new model is tested extensively in a 1D Test Solver and a 3D CFD code ANSYS CFX for the case of vertical gas-liquid pipe flow under adiabatic conditions, respectively. Two kinds of extensions of the standard multi-fluid model, i.e. the discrete population model and the inhomogeneous MUSIG (multiple-size group) model, are available in the two solvers, respectively. These extensions with suitable closure models such as those for coalescence and breakup are able to predict the evolution of bubble size distribution in dispersed flows and to overcome the mono-dispersed flow limitation of the standard multi-fluid model. For the validation of the model the high quality database of the TOPFLOW L12 experiments for air-water flow in a vertical pipe was employed. A wide range of test points, which cover the bubbly flow, turbulent-churn flow as well as the transition regime, is involved in the simulations. The comparison between the simulated results such as bubble size distribution, gas velocity and volume fraction and the measured ones indicates a generally good agreement for all selected test points. As the superficial gas velocity increases, bubble size distribution evolves via coalescence dominant regimes first, then breakup-dominant regimes and finally turns into a bimodal distribution. The tendency of the evolution is well reproduced by the model. However, the tendency is almost always overestimated, i.e. too much coalescence in the coalescence dominant case while too much breakup in breakup dominant ones. The reason of this problem is discussed by studying the contribution of each coalescence and breakup mechanism at different test points. 
The redistribution of the

  11. NAIRAS aircraft radiation model development, dose climatology, and initial validation.

    Science.gov (United States)

    Mertens, Christopher J; Meier, Matthias M; Brown, Steven; Norman, Ryan B; Xu, Xiaojing

    2013-10-01

    The Nowcast of Atmospheric Ionizing Radiation for Aviation Safety (NAIRAS) is a real-time, global, physics-based model used to assess radiation exposure to commercial aircrews and passengers. The model is free-running in the sense that no adjustment factors are applied to nudge it into agreement with measurements. The model predicts dosimetric quantities in the atmosphere from both galactic cosmic rays (GCR) and solar energetic particles, including the response of the geomagnetic field to interplanetary dynamical processes and its subsequent influence on atmospheric dose. The focus of this paper is on atmospheric GCR exposure during geomagnetically quiet conditions, with three main objectives. First, provide detailed descriptions of the NAIRAS GCR transport and dosimetry methodologies. Second, present a climatology of effective dose and ambient dose equivalent rates at typical commercial airline altitudes, representative of solar cycle maximum and solar cycle minimum conditions and spanning the full range of geomagnetic cutoff rigidities. Third, conduct an initial validation of the NAIRAS model by comparing predictions of ambient dose equivalent rates with tabulated reference measurement data and recent aircraft radiation measurements taken in 2008 during the minimum between solar cycle 23 and solar cycle 24. By applying the criterion of the International Commission on Radiation Units and Measurements (ICRU) on acceptable levels of aircraft radiation dose uncertainty for ambient dose equivalent greater than or equal to an annual dose of 1 mSv, the NAIRAS model is within 25% of the measured data, which fall within the ICRU acceptable uncertainty limit of 30%. The NAIRAS model predictions of ambient dose equivalent rate are generally within 50% of the measured data for any single-point comparison. The largest differences occur at low latitudes and high cutoffs, where the radiation dose level is low. Nevertheless, analysis
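
    The ICRU acceptance criterion applied above (model within 30% of measurement) is a relative-difference test; the dose rates in this sketch are hypothetical:

```python
def within_icru_limit(model_rate, measured_rate, limit=0.30):
    """Relative difference between modeled and measured ambient dose
    equivalent rates, compared against a 30% acceptance band."""
    rel = abs(model_rate - measured_rate) / measured_rate
    return rel, rel <= limit

# Hypothetical flight-segment dose rates in uSv/h (model, measured).
for model, meas in [(4.8, 4.0), (3.1, 3.0), (6.5, 4.0)]:
    rel, ok = within_icru_limit(model, meas)
    print(f"model {model} vs measured {meas}: {rel:.0%} "
          f"{'OK' if ok else 'outside band'}")
```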

  12. Nonparametric model validations for hidden Markov models with applications in financial econometrics.

    Science.gov (United States)

    Zhao, Zhibiao

    2011-06-01

    We address the nonparametric model validation problem for hidden Markov models with partially observable variables and hidden states. We achieve this goal by constructing a nonparametric simultaneous confidence envelope for transition density function of the observable variables and checking whether the parametric density estimate is contained within such an envelope. Our specification test procedure is motivated by a functional connection between the transition density of the observable variables and the Markov transition kernel of the hidden states. Our approach is applicable for continuous time diffusion models, stochastic volatility models, nonlinear time series models, and models with market microstructure noise.
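
    The core comparison, a nonparametric estimate of the observable transition density against its parametric counterpart, can be sketched on an AR(1) example (an illustration of the idea only, not the paper's confidence-envelope construction):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an AR(1) series as a stand-in for the observable variable.
n, phi, sigma = 20000, 0.6, 1.0
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + sigma * rng.standard_normal()

def kernel_transition_density(x, x0, y, h=0.3):
    """Nadaraya-Watson estimate of the transition density p(y | x_t = x0)
    built from the observed pairs (x_t, x_{t+1})."""
    xt, xt1 = x[:-1], x[1:]
    w = np.exp(-0.5 * ((xt - x0) / h) ** 2)        # kernel weights in x
    k = np.exp(-0.5 * ((y[:, None] - xt1[None, :]) / h) ** 2) \
        / (h * np.sqrt(2.0 * np.pi))               # kernel density in y
    return (k * w[None, :]).sum(axis=1) / w.sum()

# Parametric transition density implied by the AR(1) model: N(phi*x0, sigma^2).
x0 = 0.5
y = np.linspace(-2.0, 3.0, 101)
p_hat = kernel_transition_density(x, x0, y)
p_par = np.exp(-0.5 * ((y - phi * x0) / sigma) ** 2) \
        / (sigma * np.sqrt(2.0 * np.pi))

# A deviation exceeding the width of a simultaneous confidence envelope
# would flag the parametric specification as inadequate.
print(np.max(np.abs(p_hat - p_par)))
```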

  13. Validation Hydrodynamic Models of Three Topological Models of Secondary Facultative Ponds

    OpenAIRE

    Aponte-Reyes Alxander

    2014-01-01

    A methodology was developed to analyze the boundary conditions, mesh size and turbulence treatment of a CFD mathematical model intended to explain the hydrodynamic behavior of facultative stabilization ponds (FSPs) built at pilot scale: a conventional pond (CP), a baffled pond (BP), and a baffled-mesh pond (BMP). Dispersion studies were performed in the field for model validation, taking samples at the inlet and outlet of the FSPs; this information was used to carry out CFD simulations of the three topologies. ...

  14. Application of a hybrid multiscale approach to simulate hydrologic and biogeochemical processes in the river-groundwater interaction zone.

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Glenn Edward; Yang, Xiaofan; Song, Xuehang; Song, Hyun-Seob; Hou, Zhangshuan; Chen, Xingyuan; Liu, Yuanyuan; Scheibe, Tim

    2017-03-01

    The groundwater-surface water interaction zone (GSIZ) plays an important role in riverine and watershed ecosystems as the exchange of waters of variable composition and temperature (hydrologic exchange flows) stimulate microbial activity and associated biogeochemical reactions. Variable temporal and spatial scales of hydrologic exchange flows, heterogeneity of the subsurface environment, and complexity of biogeochemical reaction networks in the GSIZ present challenges to incorporation of fundamental process representations and model parameterization across a range of spatial scales (e.g. from pore-scale to field scale). This paper presents a novel hybrid multiscale simulation approach that couples hydrologic-biogeochemical (HBGC) processes between two distinct length scales of interest.

  15. Greenland's glacial fjords and their role in regional biogeochemical dynamics.

    Science.gov (United States)

    Crosby, J.; Arndt, S.

    2017-12-01

    Greenland's coastal fjords serve as important pathways that connect the Greenland Ice Sheet (GrIS) and the surrounding oceans. They export seasonal glacial meltwater whilst being significant sites of primary production. These fjords are home to some of the most productive ecosystems in the world and possess high socio-economic value via fisheries. A growing number of studies have proposed the GrIS as an underappreciated yet significant source of nutrients to surrounding oceans. Acting as both transfer routes and sinks for glacial nutrient export, fjords have the potential to act as significant biogeochemical processors, yet remain underexplored. Critically, an understanding of the quantitative contribution of fjords to carbon and nutrient budgets is lacking, with large uncertainties associated with limited availability of field data and the lack of robust upscaling approaches. To close this knowledge gap we developed a coupled 2D physical-biogeochemical model of the Godthåbsfjord system, a sub-Arctic sill fjord in southwest Greenland, to quantitatively assess the impact of nutrients exported from the GrIS on fjord primary productivity and biogeochemical dynamics. Glacial meltwater is found to be a key driver of fjord-scale circulation patterns, whilst tracer simulations reveal the relative nutrient contributions from meltwater-driven upwelling and meltwater export from the GrIS. Hydrodynamic circulation patterns and freshwater transit times are explored to provide a first understanding of the glacier-fjord-ocean continuum, demonstrating the complex pattern of carbon and nutrient cycling at this critical land-ocean interface.

  16. Mangrove forests: a potent nexus of coastal biogeochemical cycling

    Science.gov (United States)

    Barr, J. G.; Fuentes, J. D.; Shoemaker, B.; O'Halloran, T. L.; Lin, G., Sr.; Engel, V. C.

    2014-12-01

    Mangrove forests cover just 0.1% of the Earth's terrestrial surface, yet they provide a disproportionate source (~10 % globally) of terrestrially derived, refractory dissolved organic carbon to the oceans. Mangrove forests are biogeochemical reactors that convert biomass into dissolved organic and inorganic carbon at unusually high rates, and many studies recognize the value of mangrove ecosystems for the substantial amounts of soil carbon storage they produce. However, questions remain as to how mangrove forest ecosystem services should be valuated and quantified. Therefore, this study addresses several objectives. First, we demonstrate that seasonal and annual net ecosystem carbon exchange in three selected mangrove forests, derived from long-term eddy covariance measurements, represent key quantities in defining the magnitude of biogeochemical cycling and together with other information on carbon cycle parameters serves as a proxy to estimate ecosystem services. Second, we model ecosystem productivity across the mangrove forests of Everglades National Park and southern China by relating net ecosystem exchange values to remote sensing data. Finally, we develop a carbon budget for the mangrove forests in the Everglades National Park for the purposes of demonstrating that these forests and adjacent estuaries are sites of intense biogeochemical cycling. One conclusion from this study is that much of the carbon entering from the atmosphere as net ecosystem exchange (~1000 g C m-2 yr-1) is not retained in the net ecosystem carbon balance. Instead, a substantial fraction of the carbon entering the system as net ecosystem exchange is ultimately exported to the oceans or outgassed as reaction products within the adjacent estuary.

  17. Spectroscopic validation of the supersonic plasma jet model

    International Nuclear Information System (INIS)

    Selezneva, S.E.; Sember, V.; Gravelle, D.V.; Boulos, M.I.

    2002-01-01

    Optical emission spectroscopy is applied to validate numerical simulations of supersonic plasma flow generated by an induction torch with a convergent-divergent nozzle. Plasmas exhausting from the discharge tube at pressures of 0.4-1.4 atm through two nozzle configurations (outlet Mach numbers of 1.5 and 3) into a low-pressure (1.8 kPa) chamber are compared. Both modelling and experiments show that the effect of nozzle geometry on the physical properties of the plasma jet is significant. The profiles of electron number density obtained from modeling and spectroscopy agree well and show deviations from local thermodynamic equilibrium. An analysis of the intercoupling between different kinds of nonequilibrium processes is performed. The results reveal that ion recombination is more significant in the nozzle with the higher outlet Mach number than in the nozzle with the lower one. It is demonstrated that the axial electron temperature in the jets is quite low (3000-8000 K). For the interpretation of spectroscopic data we propose a method based on the definition of two excitation temperatures. We suppose that in mildly underexpanded argon jets with frozen ion recombination, the electron temperature can be defined by the electronic transitions from level 5p (energy E=14.5 eV) to level 4p (E=13.116 eV).  The results obtained are useful for the optimization of plasma reactors for plasma chemistry and plasma processing applications.
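The two-excitation-temperature interpretation rests on the standard Boltzmann two-line relation: the intensity ratio of two emission lines fixes the temperature once transition data are known. A minimal sketch, with illustrative placeholder transition data (not tabulated Ar I values), verified by a round trip through a synthetic 6000 K plasma:

```python
# Two-line (Boltzmann) excitation temperature. Upper-level populations follow
# N_u ~ g_u * exp(-E_u / kT), so I1/I2 = (A1 g1 lam2)/(A2 g2 lam1) * exp(-(E1-E2)/kT).
import math

K_B_EV = 8.617333262e-5  # Boltzmann constant, eV/K

def excitation_temperature(I1, I2, E1, E2, A1, A2, g1, g2, lam1, lam2):
    """Invert the two-line intensity ratio for the excitation temperature T."""
    C = (A1 * g1 * lam2) / (A2 * g2 * lam1)
    return (E2 - E1) / (K_B_EV * math.log((I1 / I2) / C))

# Round-trip check against a synthetic 6000 K plasma:
E1, E2 = 14.5, 13.116                   # eV, the two level energies from the abstract
A1, A2, g1, g2 = 1.0e7, 2.0e7, 3, 5     # illustrative transition probabilities/weights
lam1, lam2 = 420e-9, 750e-9             # illustrative wavelengths, m
T_true = 6000.0
ratio = (A1 * g1 * lam2) / (A2 * g2 * lam1) * math.exp(-(E1 - E2) / (K_B_EV * T_true))
T = excitation_temperature(ratio, 1.0, E1, E2, A1, A2, g1, g2, lam1, lam2)
print(round(T))  # recovers 6000 K
```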

  18. Characterization Report on Fuels for NEAMS Model Validation

    Energy Technology Data Exchange (ETDEWEB)

    Gofryk, Krzysztof [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    Nearly 20% of the world’s electricity today is generated by nuclear energy from uranium dioxide (UO2) fuel. The thermal conductivity of UO2 governs the conversion of heat produced by fission events into electricity, and it is an important parameter in reactor design and safety. While nuclear fuel operates at high to very high temperatures, thermal conductivity and other material properties lack sensitivity to temperature variations and to material variations at reactor temperatures. As a result, both the uncertainties in laboratory measurements at high temperatures and the small differences in properties between materials inevitably lead to large uncertainties in models and little predictive power. Conversely, properties measured at low to moderate temperatures show greater sensitivity, less uncertainty, and larger differences between materials. These variations need to be characterized, as they afford the highest predictive capability in modeling and offer the best assurance of validation and verification at all temperatures. This is well illustrated by the temperature variation of the thermal conductivity of UO2.
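The loss of sensitivity at reactor temperatures can be seen in a commonly used empirical phonon-conduction form, k(T) ~ 1/(A + B*T): the slope |dk/dT| collapses as T grows. The coefficients below are placeholders for illustration only, not an evaluated UO2 correlation.

```python
# Illustrative lattice thermal conductivity of the form k(T) = 1/(A + B*T),
# showing why temperature sensitivity shrinks at reactor temperatures.

def k_lattice(T, A=0.035, B=2.2e-4):
    """Phonon-conduction form; A (m*K/W) and B (m/W) are illustrative."""
    return 1.0 / (A + B * T)

def sensitivity(T, dT=1.0):
    """Central finite-difference estimate of |dk/dT| at temperature T."""
    return abs(k_lattice(T + dT) - k_lattice(T - dT)) / (2 * dT)

for T in (300.0, 1500.0):
    print(f"T={T:6.0f} K   k={k_lattice(T):5.2f} W/m-K   |dk/dT|={sensitivity(T):.4f}")
```

The low-temperature point carries an order of magnitude more sensitivity to the (material-dependent) coefficients than the reactor-temperature point, which is the abstract's argument for characterizing fuels at low to moderate temperatures.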

  19. Developing and Validating a Predictive Model for Stroke Progression

    Directory of Open Access Journals (Sweden)

    L.E. Craig

    2011-12-01

    discrimination and calibration of the predictive model appear sufficiently high to provide accurate predictions. This study also offers some discussion around the validation of predictive models for wider use in clinical practice.

  20. Developing and validating a predictive model for stroke progression.

    Science.gov (United States)

    Craig, L E; Wu, O; Gilmour, H; Barber, M; Langhorne, P

    2011-01-01

    sufficiently high to provide accurate predictions. This study also offers some discussion around the validation of predictive models for wider use in clinical practice.

  1. Developing and Validating a Predictive Model for Stroke Progression

    Science.gov (United States)

    Craig, L.E.; Wu, O.; Gilmour, H.; Barber, M.; Langhorne, P.

    2011-01-01

    calibration of the predictive model appear sufficiently high to provide accurate predictions. This study also offers some discussion around the validation of predictive models for wider use in clinical practice. PMID:22566988

  2. Biogeochemical gradients above a coal tar DNAPL

    Energy Technology Data Exchange (ETDEWEB)

    Scherr, Kerstin E., E-mail: kerstin.brandstaetter-scherr@boku.ac.at [University of Natural Resources and Life Sciences Vienna (BOKU), Department IFA-Tulln, Institute for Environmental Biotechnology, Konrad Lorenz Strasse 20, 3430 Tulln (Austria); Backes, Diana [University of Natural Resources and Life Sciences Vienna (BOKU), Department IFA-Tulln, Institute for Environmental Biotechnology, Konrad Lorenz Strasse 20, 3430 Tulln (Austria); Scarlett, Alan G. [University of Plymouth, Petroleum and Environmental Geochemistry Group, Biogeochemistry Research Centre, Drake Circus, Plymouth, Devon PL4 8AA (United Kingdom); Lantschbauer, Wolfgang [Government of Upper Austria, Directorate for Environment and Water Management, Division for Environmental Protection, Kärntner Strasse 10-12, 4021 Linz (Austria); Nahold, Manfred [GUT Gruppe Umwelt und Technik GmbH, Ingenieurbüro für Technischen Umweltschutz, Plesching 15, 4040 Linz (Austria)

    2016-09-01

    Naturally occurring distribution and attenuation processes can keep hydrocarbon emissions from dense non-aqueous phase liquids (DNAPLs) into the adjacent groundwater to a minimum. At a site historically impacted by coal tar DNAPL, the de facto absence of a plume sparked investigations into the character of natural attenuation and DNAPL resolubilization processes at the site. Steep vertical gradients in polycyclic aromatic hydrocarbons, microbial community composition, secondary water quality, and redox parameters were found between the DNAPL-proximal and shallow waters. While methanogenic and mixed-electron-acceptor conditions prevailed close to the DNAPL, aerobic conditions and very low dissolved contaminant concentrations were identified at three meters' vertical distance from the phase. Comprehensive two-dimensional gas chromatography–mass spectrometry (GC × GC–MS) proved to be an efficient tool for characterizing the behavior of the complex contaminant mixture present. Medium to low bioavailability of ferric iron and manganese oxides in aquifer samples was detected via incubation with Shewanella alga, and evidence for iron and manganese reduction was collected. In contrast, 16S rDNA phylogenetic analysis revealed the absence of common iron-reducing bacteria. Aerobic hydrocarbon degraders were abundant in shallow horizons, while nitrate reducers dominated in deeper aquifer regions, in addition to a low relative abundance of methanogenic archaea. Partial Least Squares – Canonical Correspondence Analysis (PLS-CCA) suggested that nitrate and oxygen concentrations had the greatest impact on aquifer community structure in on- and offsite wells, which had a similarly high biodiversity (H’ and Chao1). Overall, slow hydrocarbon dissolution from the DNAPL appears to dominate natural attenuation processes. This site may serve as a model for developing legal and technical strategies for the treatment of DNAPL-impacted sites where contaminant plumes are
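The two diversity measures quoted in the abstract, Shannon's H' and the Chao1 richness estimator, are both simple functions of a count table. A sketch with a hypothetical OTU count vector (the counts are illustrative, not data from the study):

```python
# Shannon diversity H' = -sum(p_i * ln p_i) and the Chao1 richness estimator
# S_chao1 = S_obs + f1^2 / (2 * f2), where f1/f2 are singleton/doubleton counts.
import math

def shannon(counts):
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def chao1(counts):
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)   # singletons
    f2 = sum(1 for c in counts if c == 2)   # doubletons
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0  # bias-corrected fallback
    return s_obs + f1 * f1 / (2.0 * f2)

otu_counts = [120, 60, 30, 10, 5, 2, 2, 1, 1, 1]   # illustrative OTU table
print(f"H' = {shannon(otu_counts):.3f}, Chao1 = {chao1(otu_counts):.2f}")
```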

  3. Benchmark validation of statistical models: Application to mediation analysis of imagery and memory.

    Science.gov (United States)

    MacKinnon, David P; Valente, Matthew J; Wurpts, Ingrid C

    2018-03-29

    This article describes benchmark validation, an approach to validating a statistical model. According to benchmark validation, a valid model generates estimates and research conclusions consistent with a known substantive effect. Three types of benchmark validation, (a) benchmark value, (b) benchmark estimate, and (c) benchmark effect, are described and illustrated with examples. Benchmark validation methods are especially useful for statistical models with assumptions that are untestable or very difficult to test. Benchmark effect validation methods were applied to evaluate statistical mediation analysis in eight studies using the established effect that increasing mental imagery improves recall of words. Statistical mediation analysis led to conclusions about mediation that were consistent with the established theory that increased imagery leads to increased word recall. Benchmark validation based on established substantive theory is discussed as a general way to investigate characteristics of statistical models and as a complement to mathematical proof and statistical simulation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
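The single-mediator model behind statistical mediation analysis fits two regressions, X -> M (path a) and M -> Y controlling for X (path b), and reports the indirect effect a*b. The benchmark-effect logic can be sketched on simulated data where the true indirect effect (0.5 * 0.6 = 0.3) is known in advance, so a valid analysis should recover it; all data below are synthetic.

```python
# Single-mediator analysis on synthetic data with known paths a=0.5, b=0.6.
import random

random.seed(1)
n = 5000
x = [random.gauss(0, 1) for _ in range(n)]
m = [0.5 * xi + random.gauss(0, 1) for xi in x]                          # a = 0.5
y = [0.6 * mi + 0.2 * xi + random.gauss(0, 1) for xi, mi in zip(x, m)]   # b = 0.6

def ols2(y, x1, x2):
    """Least squares for y = b1*x1 + b2*x2 (no intercept; predictors have zero mean)."""
    s11 = sum(v * v for v in x1); s22 = sum(v * v for v in x2)
    s12 = sum(u * v for u, v in zip(x1, x2))
    sy1 = sum(u * v for u, v in zip(x1, y)); sy2 = sum(u * v for u, v in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (sy1 * s22 - sy2 * s12) / det, (sy2 * s11 - sy1 * s12) / det

a = sum(xi * mi for xi, mi in zip(x, m)) / sum(xi * xi for xi in x)  # X -> M
b, _ = ols2(y, m, x)                                                  # M -> Y given X
print(f"estimated indirect effect a*b = {a * b:.3f} (true value 0.300)")
```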

  4. Published diagnostic models safely excluded colorectal cancer in an independent primary care validation study

    NARCIS (Netherlands)

    Elias, Sjoerd G; Kok, Liselotte; Witteman, Ben J M; Goedhard, Jelle G; Romberg-Camps, Mariëlle J L; Muris, Jean W M; de Wit, Niek J; Moons, Karel G M

    OBJECTIVE: To validate published diagnostic models for their ability to safely reduce unnecessary endoscopy referrals in primary care patients suspected of significant colorectal disease. STUDY DESIGN AND SETTING: Following a systematic literature search, we independently validated the identified

  5. Alaska North Slope Tundra Travel Model and Validation Study

    Energy Technology Data Exchange (ETDEWEB)

    Harry R. Bader; Jacynthe Guimond

    2006-03-01

    lack of variability in snow depth cover throughout the period of field experimentation. The amount of change in disturbance indicators was greater in the tundra communities of the Foothills than in those of the Coastal Plain. However, the overall level of change in both community types was less than expected. In Coastal Plain communities, ground hardness and snow slab thickness were found to play an important role in changes in active layer depth and soil moisture as a result of treatment. In the Foothills communities, snow cover had the most influence on active layer depth and soil moisture as a result of treatment. Once certain minimum thresholds for ground hardness, snow slab thickness, and snow depth were attained, it appeared that little or no additive effect was realized in terms of increased resistance to disturbance in the tundra communities studied. DNR used the results of this modeling project to set a standard for maximum permissible disturbance from cross-country tundra travel, with the threshold set below the widely accepted standard of Low Disturbance levels (as determined by the U.S. Fish and Wildlife Service). DNR followed the modeling project with a validation study, which seemed to support the field trial conclusions and indicated that the standard set for maximum permissible disturbance exhibits a conservative bias in favor of environmental protection. Finally, DNR established a quick and efficient tool for visual estimation of disturbance to determine when investment in field measurements is warranted. This Visual Assessment System (VAS) seemed to support the plot disturbance measurements taken during the modeling and validation phases of this project.

  6. Validation of transport models using additive flux minimization technique

    Energy Technology Data Exchange (ETDEWEB)

    Pankin, A. Y.; Kruger, S. E. [Tech-X Corporation, 5621 Arapahoe Ave., Boulder, Colorado 80303 (United States); Groebner, R. J. [General Atomics, San Diego, California 92121 (United States); Hakim, A. [Princeton Plasma Physics Laboratory, Princeton, New Jersey 08543-0451 (United States); Kritz, A. H.; Rafiq, T. [Department of Physics, Lehigh University, Bethlehem, Pennsylvania 18015 (United States)

    2013-10-15

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile.
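The core idea, varying an additional effective diffusivity until the predicted profile matches the measured one, can be illustrated on a toy steady-state slab where the answer is known analytically. This is a deliberately simplified stand-in for the FACETS::Core/DAKOTA workflow described in the abstract; all numbers are illustrative.

```python
# Toy additive-flux-minimization: a model with diffusivity D_model cannot
# reproduce a "measured" profile generated with D_true, so an additional
# diffusivity D_add is scanned for the best profile match.

def profile(D, flux=1.0, n_edge=1.0, L=1.0, npts=50):
    """Steady slab with constant flux: n(x) = n_edge + flux*(L - x)/D."""
    return [n_edge + flux * (L - i * L / (npts - 1)) / D for i in range(npts)]

D_true, D_model = 2.0, 1.2
target = profile(D_true)                 # stands in for the experimental profile

def mismatch(D_add):
    """Sum of squared deviations between predicted and target profiles."""
    return sum((p - t) ** 2 for p, t in zip(profile(D_model + D_add), target))

# Brute-force scan over candidate additional diffusivities.
best = min((mismatch(0.01 * k), 0.01 * k) for k in range(1, 201))
print(f"optimal D_add ~ {best[1]:.2f} (true deficit {D_true - D_model:.2f})")
```

In the real technique the scan is replaced by DAKOTA's optimization and uncertainty-quantification machinery, and the profile solve by a time-dependent transport code, but the objective, minimizing the profile mismatch over the additional diffusivity, is the same.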

  7. Validation of transport models using additive flux minimization technique

    International Nuclear Information System (INIS)

    Pankin, A. Y.; Kruger, S. E.; Groebner, R. J.; Hakim, A.; Kritz, A. H.; Rafiq, T.

    2013-01-01

    A new additive flux minimization technique is proposed for carrying out the verification and validation (V and V) of anomalous transport models. In this approach, the plasma profiles are computed in time dependent predictive simulations in which an additional effective diffusivity is varied. The goal is to obtain an optimal match between the computed and experimental profile. This new technique has several advantages over traditional V and V methods for transport models in tokamaks and takes advantage of uncertainty quantification methods developed by the applied math community. As a demonstration of its efficiency, the technique is applied to the hypothesis that the paleoclassical density transport dominates in the plasma edge region in DIII-D tokamak discharges. A simplified version of the paleoclassical model that utilizes the Spitzer resistivity for the parallel neoclassical resistivity and neglects the trapped particle effects is tested in this paper. It is shown that a contribution to density transport, in addition to the paleoclassical density transport, is needed in order to describe the experimental profiles. It is found that more additional diffusivity is needed at the top of the H-mode pedestal, and almost no additional diffusivity is needed at the pedestal bottom. The implementation of this V and V technique uses the FACETS::Core transport solver and the DAKOTA toolkit for design optimization and uncertainty quantification. The FACETS::Core solver is used for advancing the plasma density profiles. The DAKOTA toolkit is used for the optimization of plasma profiles and the computation of the additional diffusivity that is required for the predicted density profile to match the experimental profile

  8. An independent verification and validation of the Future Theater Level Model conceptual model

    Energy Technology Data Exchange (ETDEWEB)

    Hartley, D.S. III; Kruse, K.L.; Martellaro, A.J.; Packard, S.L.; Thomas, B. Jr.; Turley, V.K.

    1994-08-01

    This report describes the methodology and results of independent verification and validation performed on a combat model in its design stage. The combat model is the Future Theater Level Model (FTLM), under development by The Joint Staff/J-8. J-8 has undertaken its development to provide an analysis tool that addresses the uncertainties of combat more directly than previous models and yields more rapid study results. The methodology adopted for this verification and validation consisted of document analyses. Included were detailed examinations of the FTLM design documents (at all stages of development), the FTLM Mission Needs Statement, and selected documentation for other theater-level combat models. These documents were compared to assess the FTLM as to its design stage, its purpose as an analytical combat model, and its capabilities as specified in the Mission Needs Statement. The conceptual design passed those tests. The recommendations included specific modifications as well as a recommendation for continued development. The methodology is significant because independent verification and validation have not previously been reported as being performed on a combat model in its design stage. The results are significant because The Joint Staff/J-8 will use the recommendations from this study in determining whether to proceed with development of the model.

  9. CFD Modeling and Experimental Validation of a Solar Still

    Directory of Open Access Journals (Sweden)

    Mahmood Tahir

    2017-01-01

    Earth is the densest planet of the solar system, with a total area of 510.072 million square kilometers. Over 71.68% of this area is covered with water, leaving only 28.32% for humans to inhabit. Fresh water accounts for only 2.5% of the total volume; the rest is brackish. Presently, the world faces the pressing problem of a shortage of potable water. This issue can be addressed by converting brackish water into potable water through solar distillation, and the solar still is designed specifically for this purpose. The efficiency of a solar still depends explicitly on its design parameters, such as wall material, chamber depth, width, and slope of the condensing surface. This study investigated solar still parameters using CFD modeling and experimental validation. Simulation data from ANSYS FLUENT were compared with experimental data. Close agreement between the simulated and experimental results was observed in the presented work, showing that ANSYS FLUENT is a potent tool for analysing the efficiency of new designs of solar distillation systems.

  10. Validation and Application of Concentrated Cesium Eluate Physical Property Models

    International Nuclear Information System (INIS)

    Choi, A.S.

    2004-01-01

    This work had two objectives: first, to verify the mathematical equations developed for the physical properties of concentrated cesium eluate solutions against experimental results obtained with simulated feeds; and second, to estimate the physical properties of the radioactive AW-101 cesium eluate at saturation using the validated models. The Hanford River Protection Project (RPP) Waste Treatment and Immobilization Plant (WTP) is currently being built to extract radioisotopes from the vast inventory of Hanford tank wastes and immobilize them in a silicate glass matrix for eventual disposal at a geological repository. The baseline flowsheet for the pretreatment of supernatant liquid wastes includes removal of cesium using regenerative ion-exchange resins. The loaded cesium ion-exchange columns will be eluted with nitric acid, nominally at 0.5 molar, and the resulting eluate solution will be concentrated in a forced-convection evaporator to reduce the storage volume and to recover the acid for reuse. The reboiler pot is initially charged with a concentrated nitric acid solution and kept under a controlled vacuum during feeding so that the pot contents boil at 50 degrees Celsius. The liquid level in the pot is maintained constant by controlling both the feed and boilup rates. Feeding continues with no bottoms removal until the solution in the pot reaches the target endpoint of 80 percent saturation with respect to any one of the major salt species present.

  11. Three phase heat and mass transfer model for unsaturated soil freezing process: Part 2 - model validation

    Science.gov (United States)

    Zhang, Yaning; Xu, Fei; Li, Bingxi; Kim, Yong-Song; Zhao, Wenke; Xie, Gongnan; Fu, Zhongbin

    2018-04-01

    This study aims to validate the three-phase heat and mass transfer model developed in the first part (Three phase heat and mass transfer model for unsaturated soil freezing process: Part 1 - model development). Experimental results from previous studies were used for the validation. The results showed that the correlation coefficients between the simulated and experimental water contents at different soil depths were between 0.83 and 0.92. The correlation coefficients between the simulated and experimental liquid water contents at different soil temperatures were between 0.95 and 0.99. Given these high accuracies, the developed model can be used to predict water contents at different soil depths and temperatures.
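The validation metric quoted here (r between 0.83 and 0.99) is the Pearson correlation coefficient between simulated and measured series. A self-contained sketch with illustrative water-content values (not data from the study):

```python
# Pearson correlation coefficient r = cov(x, y) / (sd(x) * sd(y)),
# the model-vs-experiment agreement metric quoted in the abstract.
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

simulated = [0.31, 0.28, 0.24, 0.21, 0.18]   # volumetric water content (illustrative)
measured  = [0.33, 0.27, 0.25, 0.20, 0.19]
print(f"r = {pearson_r(simulated, measured):.3f}")
```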

  12. Integral Reactor Containment Condensation Model and Experimental Validation

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Qiao [Oregon State Univ., Corvallis, OR (United States); Corradini, Michael [Univ. of Wisconsin, Madison, WI (United States)

    2016-05-02

    This NEUP funded project, NEUP 12-3630, comprised experimental, numerical and analytical studies on high-pressure steam condensation phenomena in a steel containment vessel connected to a water cooling tank, carried out at Oregon State University (OrSU) and the University of Wisconsin at Madison (UW-Madison). Over the three-year investigation, the tasks planned in the original proposal were completed: (1) performed a scaling study for the full pressure test facility applicable to the reference design for the condensation heat transfer process during design basis accidents (DBAs), modified the existing test facility to route the steady-state secondary steam flow into the high pressure containment for controllable condensation tests, and extended the operations to negative gage pressure conditions (OrSU); (2) conducted a series of DBA and quasi-steady experiments using the full pressure test facility to provide a reliable high pressure condensation database (OrSU); (3) analyzed experimental data and evaluated the condensation model for the experimental conditions, and predicted the prototypic containment performance under accident conditions (UW-Madison). A film flow model was developed for the scaling analysis, and the results suggest that the 1/3-scaled test facility covers a large portion of the laminar film flow regime, leading to a lower average heat transfer coefficient compared with the prototypic value. Although this is conservative for reactor safety analysis, the significant reduction of the heat transfer coefficient (50%) could underestimate the prototypic condensation heat transfer rate, resulting in inaccurate prediction of the decay heat removal capability. Further investigation is thus needed to quantify the scaling distortion for safety analysis code validation. Experimental investigations were performed in the existing MASLWR test facility at OrSU with minor modifications. A total of 13 containment condensation tests were conducted for pressure

  13. Validation of CFD models for hydrogen safety application

    International Nuclear Information System (INIS)

    Nikolaeva, Anna; Skibin, Alexander; Krutikov, Alexey; Golibrodo, Luka; Volkov, Vasiliy; Nechaev, Artem; Nadinskiy, Yuriy

    2015-01-01

    Most accidents involving hydrogen begin with its leakage and spreading in air followed by spontaneous detonation, accompanied by fire or deflagration of the hydrogen mixture with heat release and/or shocks, which may cause harm to life and equipment. Outflow of hydrogen into a confined volume and its propagation within that volume is the worst case because of the effect of confinement on the detonation process. According to the safety requirements for handling hydrogen, specialized systems (ventilation, sprinklers, burners, etc.) are required to maintain the hydrogen concentration below the critical value and eliminate the possibility of detonation and flame propagation. In this study, a simulation of helium propagation in a confined space with different methods of helium injection and ventilation is presented; helium is used as a safe replacement for hydrogen in experimental studies. Five experiments were simulated, ranging from laminar to developed turbulent flow with different Froude numbers, which determine the regime of the helium outflow into the air. The processes of stratification and erosion of the helium stratified layer were investigated. The study includes some results of the OECD/NEA-PSI PANDA benchmark and of the Gamelan project. An analysis of the applicability of various turbulence models used to close the system of momentum transport equations, as implemented in the commercial codes STAR-CD, STAR-CCM+, and ANSYS CFX, was conducted for different mesh types (polyhedral and hexahedral). A comparison of the computational results with experimental data showed good agreement. In particular, for transition and turbulent regimes the error of the numerical results lies in the range of 5 to 15% for all turbulence models considered. This indicates the applicability of the methods considered for some hydrogen safety problems. However, it should be noted that more validation research should be performed before CFD is used in hydrogen safety applications with a wide
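The Froude number used to classify the outflow regime compares injection momentum with buoyancy. One common convention for a light-gas release is the densimetric form Fr = U / sqrt(g' d) with reduced gravity g' = g*(rho_ambient - rho_jet)/rho_jet; the injection velocity and orifice diameter below are illustrative, not values from the study.

```python
# Densimetric Froude number for a buoyant helium release into air.
# Fr << 1: buoyancy-dominated plume; Fr >> 1: momentum-dominated jet.
import math

def froude(U, d, rho_jet, rho_amb, g=9.81):
    """U: injection velocity (m/s), d: orifice diameter (m)."""
    g_prime = g * (rho_amb - rho_jet) / rho_jet   # reduced gravity, m/s^2
    return U / math.sqrt(g_prime * d)

rho_he, rho_air = 0.166, 1.204   # kg/m^3 near 20 C, 1 atm
for U in (0.1, 1.0, 10.0):       # illustrative injection velocities
    print(f"U={U:5.1f} m/s  Fr={froude(U, 0.05, rho_he, rho_air):7.2f}")
```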

  14. Validation of an employee satisfaction model: A structural equation model approach

    OpenAIRE

    Ophillia Ledimo; Nico Martins

    2015-01-01

    The purpose of this study was to validate an employee satisfaction model and to determine the relationships between the different dimensions of the concept, using the structural equation modelling approach (SEM). A cross-sectional quantitative survey design was used to collect data from a random sample of (n=759) permanent employees of a parastatal organisation. Data was collected using the Employee Satisfaction Survey (ESS) to measure employee satisfaction dimensions. Following the steps of ...

  15. Initialization of the Euler model MODIS with field data from the 'EPRI plume model validation project'

    International Nuclear Information System (INIS)

    Petersen, G.; Eppel, D.; Lautenschlager, M.; Mueller, A.

    1985-01-01

    The program deck MODIS (''MOment DIStribution'') is designed to be used as an operational tool for modelling the dispersion of a point source under general atmospheric conditions. The concentration distribution is determined by calculating its cross-wind moments on a vertical grid oriented in the main wind direction. The model contains a parametrization for horizontal and vertical coefficients based on a second-order closure model. The Eulerian time scales, preliminarily determined by fitting measured plume cross sections, are confirmed by comparison with data from the EPRI plume model validation project.

  16. Sorption of organic chemicals at biogeochemical interfaces - calorimetric measurements

    Science.gov (United States)

    Krüger, J.; Lang, F.; Siemens, J.; Kaupenjohann, M.

    2009-04-01

    Biogeochemical interfaces in soil act as sorbents for organic chemicals, thereby controlling the degradation and mobility of these substances in terrestrial environments. Physicochemical properties of the organic chemicals and of the sorbent determine their sorptive interactions. We hypothesize that the sorption of hydrophobic organic chemicals ("R-determined" chemicals) is an entropy-driven partitioning process between the bulk aqueous phase and the biogeochemical interface, and that the attachment of more polar organic chemicals ("F-determined" chemicals) to mineral surfaces is due to electrostatic interactions and ligand exchange involving functional groups. In order to determine the thermodynamic parameters of sorbate/sorbent interactions, calorimetric titration experiments were conducted at 20 °C using a nanocalorimeter (TAM III, Thermometric). Solutions of different organic substances ("R-determined" chemicals: phenanthrene, bisphenol A; "F-determined" chemicals: MCPA, bentazone) with concentrations of 100 mol l-1 were added to suspensions of pure minerals (goethite, muscovite, and kaolinite) and to polygalacturonic acid (PGA) as a model substance for biofilms in soil. Specific surface area, porosity, N and C content, particle size, and point of zero charge of the minerals were analyzed to characterize the sorbents. The heat quantities obtained for the initial injection of the organic chemicals onto goethite were 55 and 71 J for bisphenol A and phenanthrene ("R-determined" representatives) and 92 and 105 J for MCPA and bentazone ("F-determined" representatives). Further experiments with muscovite, kaolinite and PGA are in progress to determine the ΔG and ΔH of the adsorption process.
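The thermodynamic bookkeeping behind such titration experiments is compact: calorimetry yields the enthalpy dH directly, an independently fitted equilibrium constant K gives dG = -RT ln K, and the entropic contribution follows from dG = dH - T*dS. A positive dS flags the entropy-driven partitioning the authors hypothesize for the hydrophobic compounds. The K and dH values below are illustrative, not results from the study.

```python
# Sorption thermodynamics from a calorimetric titration:
# dG = -R*T*ln(K), then dS = (dH - dG) / T.
import math

R, T = 8.314, 293.15           # J/(mol K); 293.15 K = 20 C, as in the experiments

def thermo(K, dH):
    """Return (dG, dS) in J/mol and J/(mol K) from equilibrium constant K and enthalpy dH."""
    dG = -R * T * math.log(K)
    dS = (dH - dG) / T
    return dG, dS

dG, dS = thermo(K=5.0e3, dH=-12.0e3)   # illustrative exothermic sorption
print(f"dG = {dG / 1e3:.1f} kJ/mol, dS = {dS:.1f} J/(mol K)")
```

Here dG is more negative than dH, so dS comes out positive: sorption in this illustrative case would be partly entropy-driven, the signature the study looks for in its "R-determined" chemicals.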

  17. Verification and validation of the decision analysis model for assessment of TWRS waste treatment strategies

    International Nuclear Information System (INIS)

    Awadalla, N.G.; Eaton, S.C.F.

    1996-01-01

    This document is the verification and validation final report for the Decision Analysis Model for Assessment of Tank Waste Remediation System Waste Treatment Strategies. This model is also known as the INSIGHT Model

  18. Concrete structures vulnerability under impact: characterization, modeling, and validation - Concrete slabs vulnerability under impact: characterization, modeling, and validation

    International Nuclear Information System (INIS)

    Xuan Dung Vu

    2013-01-01

    weak confinement pressure and a plasticity model which allows the concrete behavior under strong confinement pressure to be reproduced. The model was identified using the results of experimental tests. The improvement of this model, especially the plasticity part, focuses on three main points: taking into account the effect of the deviatoric stress in the calculation of the mean stress; better accounting for the effect of water by using a poro-mechanical law instead of a mixing law; and improving the coupling variable between the damage model and the elastoplastic model by taking the Lode angle into account. These improvements were then validated by comparing numerical results with impact tests. The improved model is capable of reproducing the behavior of concrete under different loading paths and at different levels of confinement pressure while taking into account the degree of saturation of the concrete.

  19. Engineering Pseudomonas stutzeri as a biogeochemical biosensor

    Science.gov (United States)

    Boynton, L.; Cheng, H. Y.; Del Valle, I.; Masiello, C. A.; Silberg, J. J.

    2016-12-01

    Biogeochemical cycles are being drastically altered as a result of anthropogenic activities, such as the burning of fossil fuels and the industrial production of ammonia. We know microbes play a major part in these cycles, but the extent of their biogeochemical roles remains largely uncharacterized due to the inadequacies of culturing and measurement techniques. While metagenomics and other -omics methods offer ways to reconstruct microbial communities, these approaches can only give an indication of the functional roles of microbes in a community. These -omics approaches are rapidly being expanded to the point of outpacing our knowledge of functional genes, which highlights an inherent need for analytical methods that non-invasively monitor Earth's processes in real time. Here we aim to exploit synthetic biology methods to engineer a ubiquitous denitrifying microbe, Pseudomonas stutzeri, that can act as a biosensor in soil and marine environments. By using an easily cultivated microbe that is also common in many environments, we hope to develop a tool that allows us to zoom in on specific aspects of the nitrogen cycle. In order to monitor processes occurring at the genetic level in environments that cannot be resolved with fluorescence-based methods, such as soils, we have developed a system that instead relies on gas production by engineered microbial biosensors. P. stutzeri has been successfully engineered to release a gas, methyl bromide, which can be measured continuously and non-invasively by GC-MS. Much as green fluorescent protein (GFP) is used in the biological sciences, the gene controlling gas production can be linked to those involved in denitrificatio